Uppsala University Publications (uu.se)
Ramverk för att motverka algoritmisk snedvridning (Framework for counteracting algorithmic bias)
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction.
2019 (Swedish). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
Abstract [sv, translated]

The use of artificial intelligence (AI) has tripled in one year and is considered by some to be the most important paradigm shift in the history of technology. The ongoing AI race risks undermining questions of ethics and sustainability, which can have devastating consequences. Artificial intelligence has in several cases been shown to reproduce, and even amplify, existing distortions in society in the form of prejudices and values. This phenomenon is called algorithmic bias. This study aims to formulate a framework for minimising the risk of algorithmic bias arising in AI projects, and to adapt it to a medium-sized consulting company. The first part of the study is a literature review on bias, from both a cognitive and an algorithmic perspective. The second part is an examination of existing recommendations from the EU, the AI Sustainability Center, Google and Facebook. The third and final part consists of an empirical contribution in the form of a qualitative interview study, which was used to adjust an initial framework in an iterative process.

Abstract [en]

In the use of third-generation artificial intelligence (AI) for the development of products and services, there are many hidden risks that may be difficult to detect at an early stage. One of the risks of using machine learning algorithms is algorithmic bias, which, in simplified terms, means that implicit prejudices and values are embedded in the implementation of AI. A well-known case is Google's image-recognition algorithm, which identified black people as gorillas. The purpose of this master's thesis is to create a framework aimed at minimising the risk of algorithmic bias in AI development projects. To accomplish this, the project has been divided into three parts. The first part is a literature study of the phenomenon of bias, both from a human perspective and from an algorithmic perspective. The second part is an investigation of existing frameworks and recommendations published by Facebook, Google, the AI Sustainability Center and the EU. The third part consists of an empirical contribution in the form of a qualitative interview study, which has been used to create and adapt an initial general framework.

The framework was created using an iterative methodology in which two full iterations were performed. The first version of the framework was based on insights from the literature study as well as on existing recommendations. To validate the first version, the framework was presented to one of Cybercom's customers in the private sector, who also had the opportunity to ask questions and give feedback on the framework. The second version of the framework was created using results from the qualitative interview study with machine learning experts at Cybercom. To validate the applicability of the framework to real projects and customers, a second qualitative interview study was performed together with Sida, one of Cybercom's customers in the public sector. Since the framework was formed in a circular process, the second version should not be treated as fixed or complete. The interview study at Sida is considered the beginning of a third iteration, which could be further developed in future studies.
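The algorithmic bias discussed in the abstract can be made concrete with a small illustrative sketch. This example is not taken from the thesis; it shows one common way such bias is quantified in practice, the demographic parity difference: the gap in positive-prediction rates between two demographic groups. The function names and the example data are hypothetical.

```python
# Illustrative sketch (not from the thesis): quantifying algorithmic bias
# as the demographic parity difference between two groups.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap in positive-prediction rates between two groups.

    0.0 means the model treats the groups identically on this metric;
    larger values indicate more disparate treatment.
    """
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Hypothetical binary model outputs (e.g. loan approvals) for two groups
group_a = [1, 1, 0, 1, 1, 1, 1, 0]  # 6/8 = 0.75 approved
group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # 2/8 = 0.25 approved

print(demographic_parity_difference(group_a, group_b))  # prints 0.5
```

A framework of the kind the thesis proposes would typically pair such quantitative checks with the process-level safeguards (literature-informed checklists, stakeholder interviews) described above, since no single metric captures all forms of bias.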

Place, publisher, year, edition, pages
2019, p. 95
Series
UPTEC STS, ISSN 1650-8319 ; 19015
Keywords [en]
algorithmic bias, artificial intelligence, framework, cognitive bias, automation
Keywords [sv]
algoritmisk snedvridning, artificiell intelligens, ramverk, kognitiv snedvridning, automatisering
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:uu:diva-385348
OAI: oai:DiVA.org:uu-385348
DiVA id: diva2:1323915
External cooperation
Cybercom Group
Educational program
Systems in Technology and Society Programme
Supervisors
Examiners
Available from: 2019-06-13. Created: 2019-06-12. Last updated: 2019-06-13. Bibliographically approved.

Open Access in DiVA

fulltext (5026 kB), 5 downloads
File information
File name: FULLTEXT01.pdf
File size: 5026 kB
Checksum (SHA-512): 4489458116d5699b206f9a25ab5d6e5a68b3fcf75235134638da55ad992b3b937db3f7da25eca8b3aa3544466bd7334a4c13f94c53a9d3f64197ff11afda502b
Type: fulltext
Mimetype: application/pdf
