Here is a record of the discussion about AI (artificial intelligence) conducted by several scientists:
Scientist A: I would say that we are still quite a long way off from developing true AI, though I do think it will happen within the next thirty or forty years. We will probably remain in control of technology, and it will help us solve many of the world's problems. However, no one really knows what will happen if machines become more intelligent than humans. They may help us, ignore us or destroy us. I tend to believe AI will have a positive influence on our future lives, but whether that proves true will be partly up to us.
Scientist B: I have to admit that the potential consequences of creating something that can match or surpass human intelligence frighten me. Even now, scientists are teaching computers how to learn on their own. At some point in the near future, their intelligence may well take off and develop at an ever-increasing speed. Human beings evolve biologically very slowly, and we would quickly be superseded. In the short term, there is the danger that robots will take over millions of human jobs, creating a large underclass of unemployed people. This could mean large-scale poverty and social unrest. In the long term, machines might decide the world would be better off without humans.
Scientist C: I'm a member of the Campaign to Stop Killer Robots. Forget the movie image of a terrifying Terminator stamping on human skulls and think about what's happening right now: military machines like drones, gun turrets and sentry robots are already being used to kill with very little human input. The next step will be autonomous "murderbots", following orders but ultimately deciding on their own who to kill. It seems clear to me that this would be extremely dangerous for humans. We need to be very cautious indeed about what we ask machines to do.