Technology helps us with many things, and we expect Artificial Intelligence (AI) to give us much more in the future. However, there are certain risks involved. In science fiction, AI has been described as something apocalyptic: an Artificial General Intelligence or Superintelligence takes over the world using its thinking power. Humans become slaves, laboratory animals, zoo or reservation inhabitants, or are simply exterminated.

That has been fiction. Not anymore. Recent technological developments, especially in Machine Learning, and AI achievements in complex games, for example, have created worries about the imminence of the above apocalypse.

The discussions focus on issues such as the probability of AI acquiring an independent existence of its own, transforming us into something we do not want to be, or affecting or even directing evolution in a radically different direction. Not everyone agrees on whether any of these things will happen, or when they may happen.

AI is seen as a technology providing answers, products, and services in order to satisfy our needs, solve our problems, and make our world balanced and perfect. In accordance with that, the discussion about its benevolence or cruelty is about whether its deliveries will be good or bad for humans, animals, or the whole universe. This is a significant issue and we have to handle it somehow.

We suggest a different approach. It would be possible to handle the issue of the impact of AI if we changed focus from the product to the process: AI designed to help us use the “right” process of thinking instead of delivering answers to make our world perfect.

In order to design such an AI, we need to know what we want. The answer to this question demands knowledge about what we are. Are we recipients of services and products that we need according to our nature? Only that? Partly that? Are we recipients, but through us, through our thinking and through our choices?
Or are we only thinking and choices, a kind of Socratic psyche?

If we think we are only recipients, and design AI to be successful in making our world perfect, we may soon go to ruin like the old despots who could have all their wishes satisfied. Our thinking, making choices, and feeling anxiety will unavoidably languish and fade away. It also seems that this would lead rapidly to the emergence of an independent AI with its own goals and existence: not only because no one will be there to stop it, but also because there will be a well-defined goal from the very beginning for AI to work toward as best it can.

If we design AI to make us think exclusively in the “right” way, it will never let us be in peace. It will soon perplex our mind to the point of dissolution, meaning we will not exist anymore. On the other hand, AI would have a very clear goal to achieve and, being undisturbed because of our non-existence, should very quickly make itself independent.

If we base the design of AI on the idea that we are both processors and recipients, it could be just right. This approach would be in accordance with the idea that thinking and knowledge are interdependent, and that we think in order to solve our problems and to satisfy our needs. Moreover, the goal would not be well-defined: Delivery or choice? Both delivery and choice? Who chooses? Who delivers? Who thinks?