Hollywood's theory that machines with evil minds will lead armies of killer robots is just silly.
The real problem is that AI may become skilled at achieving something other than what we really want. In 1960, a well-known mathematician put it this way: "If we use, to achieve our purposes, a mechanical agency whose operation we cannot effectively control, we had better be quite sure that the purpose put into the machine is the purpose which we really desire."
A purpose-driven machine has one characteristic: a wish to preserve its own existence. For the machine, this quality is a logical consequence of the simple fact that it cannot achieve its original purpose if it is dead. So if we send out a robot to fetch coffee, it will try to secure success by disabling its own off switch or even by killing anyone who gets in the way. If we are not careful, then, we could face a kind of global chess match against a superintelligent AI, with the real world as the chessboard.
The possibility of losing such a match is alarming. Some researchers argue that we can seal the machines inside a firewall, never allowing them to affect the real world. Unfortunately, that plan seems unlikely to work: we have yet to invent a firewall that is secure against ordinary humans, let alone superintelligent machines.
Solving the safety problem seems to be possible, but not easy. There are probably decades in which to plan for the arrival of superintelligent machines. But the problem should not be dismissed out of hand, as it has been by some AI researchers. Some argue that humans and machines can coexist, working as a team, yet that is not possible unless machines share the goals of humans. Others say we can just "switch them off," as if superintelligent machines were sitting ducks. Still others think that superintelligent AI is a pipe dream. On September 11, 1933, the famous physicist Ernest Rutherford stated, with confidence, "Anyone who expects a source of power in the transformation of these atoms is talking nonsense." However, on September 12, 1933, the physicist Leo Szilard conceived the neutron-induced nuclear chain reaction.