Uppsala University Publications (uu.se)
AI as gadfly
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction; Computerized Image Analysis and Human-Computer Interaction. ORCID iD: 0000-0003-3806-5216
2018 (English). In: Wabi-Sabi: Imperfection, Incompleteness and Impermanence in Organisational Life / [ed] Masayasu Takahashi et al., 2018, p. 150-150. Conference paper, oral presentation with published abstract (Refereed).
Abstract [en]

Technology helps us with many things, and we expect Artificial Intelligence (AI) to give us much more in the future. However, there are certain risks involved. In science fiction, AI has been described as something apocalyptic: Artificial General Intelligence or Super Intelligence takes over the world using its thinking power. Humans become slaves, laboratory animals, zoo or reservation inhabitants, or are simply exterminated.

That has been fiction. Not anymore. Recent technological developments, especially in Machine Learning, and AI achievements in complex games, for example, have created worries about the imminence of the above apocalypse.

The discussions focus on issues such as the probability of AI acquiring an independent existence of its own, transforming us into something we do not want to be, affecting or even directing evolution in a radically different direction, etc. Not everyone agrees on whether any of these things will happen, or when they may happen.

AI is seen as a technology providing answers, products and services to us in order to satisfy our needs, solve our problems and make our world balanced and perfect. In accordance with that, the discussion about its benevolence or cruelty is about whether its deliveries will be good or bad for humans, animals, or the whole universe. This is a significant issue and we have to handle it somehow.

We suggest a different approach. It would be possible to handle the issue of the impact of AI if we changed focus from the product to the process: AI designed to help us use the "right" process of thinking, instead of delivering answers to make our world perfect.

In order to be able to design such an AI, we need to know what we want. The answer to this question demands knowledge about what we are. Are we recipients of services and products that we need according to our nature? Only that? Partly that? Are we recipients, but through us, through our thinking and through our choices? Or are we only thinking and choices, a kind of Socratic psyche?

If we think we are only recipients, and design AI in order to be successful in making our world perfect, we may soon go to ruin like the old despots who could have all their wishes satisfied. Our thinking, making choices and feeling anxiety will unavoidably languish and fade away. It seems also that this would lead rapidly to the emergence of an independent AI with its own goals and existence: not only because no one will be there to stop it, but also because there will be a well-defined goal, from the very beginning, for AI to pursue as best it can.

If we design AI to make us think exclusively in the "right" way, it will never leave us in peace. It will soon perplex our mind to dissolution, meaning we will not exist anymore. On the other hand, AI would have a very clear goal to achieve and, being undisturbed because of our non-existence, should very quickly make itself independent.

If we base the design of AI on the idea that we are both processors and recipients, it could be just right. This approach would be in accordance with the idea of thinking and knowledge being interdependent, and of us thinking in order to solve our problems and to satisfy our needs. Moreover, the goal would not be well-defined: Delivery or choice? Both delivery and choice? Who chooses? Who delivers? Who thinks?

Place, publisher, year, edition, pages
2018. p. 150-150
Keywords [en]
Artificial Intelligence, Philosophizing, Dialog, Ethics
National Category
Human Computer Interaction; Computer Sciences; Ethics
Research subject
Human-Computer Interaction
Identifiers
URN: urn:nbn:se:uu:diva-360594
OAI: oai:DiVA.org:uu-360594
DiVA, id: diva2:1248460
Conference
SCOS/ACSCOS, 2018, August 17-20, Tokyo
Projects
ETHCOMP
Available from: 2018-09-14 Created: 2018-09-14 Last updated: 2018-09-17

Open Access in DiVA

No full text in DiVA

Other links

Book of abstracts: http://scos2018.org/

Authority records BETA

Kavathatzopoulos, Iordanis

Search in DiVA

By author/editor
Kavathatzopoulos, Iordanis
By organisation
Division of Visual Information and Interaction; Computerized Image Analysis and Human-Computer Interaction
Human Computer Interaction; Computer Sciences; Ethics

Search outside of DiVA

Google
Google Scholar
