In his book “Superintelligence: Paths, Dangers, Strategies”, Nick Bostrom asks what will happen once we manage to build computers that are smarter than we are: what we need to do, how it is going to work, and why it has to be done exactly right to ensure the human race does not go extinct. Will artificial agents ultimately save or destroy us? Bostrom lays the foundation for understanding the future of humanity and intelligent life. The human brain has capabilities that the brains of other animals lack, and it is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, this new superintelligence could become extremely powerful – possibly beyond our control. Just as the fate of the gorillas now depends more on humans than on the gorillas themselves, so would the fate of humankind depend on the actions of a machine superintelligence. Nevertheless, we have one advantage: we get to make the first move. Will it be possible to construct a seed artificial intelligence and to engineer the initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? Bostrom’s work explores these questions.

1. In your opinion, what is the most interesting thought you encountered in the book?

In recent times, prominent figures such as Stephen Hawking, Bill Gates and Elon Musk have expressed serious concerns about the development of strong artificial intelligence, arguing that the dawn of superintelligence might well bring about the end of humankind. In his book, Bostrom endeavours to shed light on the subject and delves into many particulars concerning the future of AI research.
The central argument of the book is that the first superintelligence to be created will have a decisive first-mover advantage and, in a world where no other system is remotely comparable, it will be very powerful. Such a system will shape the world according to its “preferences” and will probably be able to overcome any resistance that humans can put up. The bad news is that the preferences such an artificial agent could have would, if fully realized, involve the complete destruction of human life and of most plausible human values. The default outcome, then, is catastrophe. In addition, Bostrom argues that we are not out of the woods even if his initial premise is false and a unipolar superintelligence never appears. “Before the prospect of an intelligence explosion,” he writes, “we humans are like small children playing with a bomb.” It will, he says, be very difficult – but perhaps not impossible – to engineer a superintelligence with preferences that make it friendly to humans or amenable to control. So, will we create artificial agents that destroy us? Will machines really be able to rebel against us? Frankly speaking, the idea of robots or AI agents taking control over humans is frightening in itself. Humankind should therefore apply itself to these questions before we build superintelligent machines. I find this idea highly topical: our world changes every minute, every second, and artificial agents are being developed at an ever-increasing pace. Bostrom’s “Superintelligence” describes what the consequences of developing AI might be for humanity, but it mostly presents those consequences as bad ones. In my opinion, however, artificial superintelligence will be an entirely new kind of intelligent entity, and we should therefore also discover all of its benefits and advantages.
Humanity’s first goal, over and above utilizing artificial intelligence for the betterment of our species, ought to be to respect and preserve the radical alterity and well-being of whatever artificial minds we create. Ultimately, I believe this approach will give us a greater chance of peaceful coexistence with artificial superintelligence than any of the strategies for “control” (containment of abilities and actions) or “value loading” (getting AIs to understand and act in accordance with human values) outlined by Bostrom and other AI experts. We could use AI agents in our daily lives, as well as in creating and engineering new technologies. Artificial intelligence will certainly automate some jobs, particularly those that rely on assembly lines or data collection. AI will also help businesses handle high-speed customer demands: conversational AI chatbots and other virtual assistants will manage the day-to-day flow of work. It has been estimated that 85% of customer interactions will be managed by artificial intelligence by 2020. We see that AI agents can considerably ease our lives.