Publications from Uppsala University
Thinking responsibly about responsible AI and 'the dark side' of AI
Norwegian Univ Sci & Technol NTNU, Dept Comp Sci, Trondheim, Norway.
NUI Galway Ireland, Sch Business & Econ, Galway, Ireland.
Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Informatics and Media, Information Systems. ORCID iD: 0000-0001-8112-2066
NEOMA Business Sch, Sch Business & Econ, Mont St Aignan, France.
2022 (English). In: European Journal of Information Systems, ISSN 0960-085X, E-ISSN 1476-9344, Vol. 31, no. 3, p. 257-268. Article in journal, Editorial material (Other academic). Published.
Abstract [en]

Artificial Intelligence (AI) has been argued to offer a myriad of improvements in how we work and live. The notion of AI comprises a wide-ranging set of technologies that allow individuals and organizations to integrate and analyze data and use that insight to improve or automate decision-making. While most attention has been placed on the positive outcomes companies realize through the adoption and use of AI, there is a growing concern around the negative and unintended consequences of such technologies. In this special issue we made a call for research papers that help us explore the dark side of AI use. By adopting a dark side lens, we aimed to expand our understanding of how AI should be implemented in practice and how to minimize or avoid negative outcomes. In this editorial, we build on the notion of responsible AI to highlight the different ways in which AI can potentially produce unintended consequences, as well as to suggest alternative paths future IS research can follow to improve our knowledge about how to mitigate such occurrences. We further expand on dark side theorizing in order to uncover hidden assumptions of the current literature and to propose other prominent themes that can guide future IS research on AI adoption and use.

Place, publisher, year, edition, pages
Taylor & Francis Group, 2022. Vol. 31, no 3, p. 257-268
Keywords [en]
Responsible AI, AI Ethics, Artificial Intelligence, Explainable AI, Dark side
National Category
Information Systems, Social aspects; Business Administration
Identifiers
URN: urn:nbn:se:uu:diva-485156
DOI: 10.1080/0960085X.2022.2026621
ISI: 000754428800001
OAI: oai:DiVA.org:uu-485156
DiVA, id: diva2:1697490
Funder
European Commission, 13/RC/2094_P2
Wallenberg Foundations, WASP-HS BioMe MMW2019.0112
Available from: 2022-09-20. Created: 2022-09-20. Last updated: 2022-09-20. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text: https://www.tandfonline.com/doi/full/10.1080/0960085X.2022.2026621

Authority records

Eriksson Lundström, Jenny
