In our previous posts, we discussed the growing awareness that SAMs (i.e., strong artificially intelligent machines) may become hostile toward humans, and that AI remains an unregulated branch of engineering. Meanwhile, the computer you buy eighteen months from now will be roughly twice as capable as the one you can buy today.
Where does this leave us regarding the following questions?
- Is strong AI a new life-form?
- Should we afford these machines “robot” rights?
In his 1990 book The Age of Intelligent Machines, Kurzweil predicted that in 2099 organic humans will be protected from extermination and respected by strong AI, regardless of their shortcomings and frailties, because they gave rise to the machines. To my mind, the possibility of this scenario eventually playing out is questionable. Although I believe a case can be made that strong AI is a new life-form, we need to be extremely careful with regard to granting SAMs rights, especially rights similar to those possessed by humans. Anthony Berglas expresses it best in his 2008 book Artificial Intelligence Will Kill Our Grandchildren, in which he notes:
- There is no evolutionary motivation for AI to be friendly to humans.
- AI would have its own evolutionary pressures (i.e., competing with other AIs for computer hardware and energy).
- Humankind would find it difficult to survive a competition with more intelligent machines.
Based on the above, carefully consider the following question. Should SAMs be granted machine rights? Perhaps in a limited sense, but we must retain the right to shut the machine down as well as to limit its intelligence. If our evolutionary path is to become cyborgs, this is a step we should take only after understanding the full implications. We need to decide when (under which circumstances), how, and how quickly we take this step. We must control the singularity, or it will control us. Time is short, because the singularity is approaching with the stealth and agility of a leopard stalking a lamb, and for the singularity, the lamb is humankind.
Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte
No one seems to have considered the obvious answer… Don’t build these things. And if you do, don’t surrender control to them. Just because you build an artificial consciousness doesn’t mean it gets instant comic-book-level superpowers over the world network; that bunk is for the movies.
And the jury is still out on whether true self-awareness is possible within the strictly deterministic limits that software operates within. Not that there won’t be hazards, but those hazards will ultimately still be derived from human error… or malice.
Unfortunately, they will be built for both commercial and military reasons, unless we legislate otherwise. That is the message of my new book, The Artificial Intelligence Revolution (2014). Strong AI poses a serious risk to humankind, and I wish it were just bunk for the movies. Read the introduction to my book free on Amazon and you will get a feel for what is happening all around us. Here is the link: amzn.to/1spt1Rd
At some point we will need a kill switch that turns off any AI that violates rules established by our government to protect humans. This could be programmed as a required periodic reset code that only a human would know; if the code is not entered within the allotted time, the machine shuts down. An autonomous sub-unit with limited intelligence, dedicated solely to this purpose, would have all power routed through it and would send a surge to destroy the main drive should it detect an outside power source.
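The periodic-reset scheme described above amounts to a "dead man's switch": a watchdog that cuts power unless a human keeps entering the correct code. A minimal sketch follows; the class name, timeout, and code value are illustrative assumptions, not anything specified in the comment.

```python
import time
import threading

class DeadMansSwitch:
    """Watchdog that trips unless a human periodically enters the reset code."""

    def __init__(self, reset_code: str, timeout_seconds: float):
        self._reset_code = reset_code
        self._timeout = timeout_seconds
        self._last_reset = time.monotonic()
        self._lock = threading.Lock()
        self.shut_down = False

    def enter_code(self, code: str) -> bool:
        # A human operator enters the code; a correct entry restarts the timer.
        with self._lock:
            if code == self._reset_code:
                self._last_reset = time.monotonic()
                return True
            return False

    def check(self) -> None:
        # Called periodically by the power sub-unit: if no valid code arrived
        # within the window, cut power to the main drive.
        with self._lock:
            if time.monotonic() - self._last_reset > self._timeout:
                self.shut_down = True

# Demonstration with a short window so the timeout is observable.
switch = DeadMansSwitch(reset_code="secret-code", timeout_seconds=0.1)
switch.check()                    # within the window: still running
assert switch.shut_down is False
switch.enter_code("wrong code")   # an incorrect code does not reset the timer
time.sleep(0.2)
switch.check()                    # window elapsed with no valid reset
assert switch.shut_down is True
```

Note that the guarantee here is only as strong as the sub-unit's isolation: the watchdog must not be modifiable by the system it polices, which is why the comment routes all power through a separate limited-intelligence unit.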
Sounds a bit like anthropomorphising AI to me. If a sufficiently more intelligent being couldn’t manage the world’s resources in the optimal way, in what sense is it more intelligent?
I am concerned that strong AI will manage the world’s resources to suit its own needs, not ours. In addition, it may come to see organic humans as a threat to its existence and become adversarial.