Let the Robot Speak!: AI-Generated Speech and Freedom of Expression
de Vries, Katja
Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Law, Department of Law (Public Law). ORCID iD: 0000-0002-3028-6084
2022 (English). In: YSEC Yearbook of Socio-Economic Constitutions 2021: Triangulating Freedom of Speech / [ed] Steffen Hindelang, Andreas Moberg, Cham: Springer, 2022, p. 1-23. Chapter in book (Refereed)
Abstract [en]

Up until very recently, AI-generated, or more precisely, machine learning (ML)-generated content was still in the realm of sci-fi. A recent series of important inventions gave AI the power of creation: Variational Autoencoders (VAEs) in 2013, Generative Adversarial Networks (GANs) in 2014, and Generative Pre-trained Transformers (GPT) in 2017. Synthetic products based on generative ML are useful in diverse fields of application. For example, generative ML can be used for the synthetic resuscitation of a dead actor, or a deceased loved one. Can ML be a source of speech that is protected by the right to freedom of expression in Article 10 ECHR? In contrast to a tool, such as a pen or a typewriter, ML can be such a decisive element in the generative process, that speech is no longer (indisputably) attributable to a human speaker. Is speech generated by a machine protected by the right to freedom of expression in Article 10 ECHR? I first discuss if ML-generated utterances fall within the protective scope of freedom of expression (Article 10(1) ECHR). After concluding that this is the case, I look at specific complexities raised by ML-generated content in terms of limitations to freedom of expression (Article 10(2) ECHR). The first set of potential limitations that I explore are those following from copyright, data protection, privacy and confidentiality law. Some types of ML-generated content could potentially circumvent these limitations. Second, I study how new types of content generated by ML can create normative grey areas where the boundaries of constitutionally protected and unprotected speech are not always easy to draw. In this context, I discuss two types of ML-generated content: virtual child pornography and fake news/disinformation. Third, I argue that the nuances of Article 10 ECHR are not easily captured in an automated filter and I discuss the potential implications of the arms race between automated filters and ML-generated content.

Place, publisher, year, edition, pages
Cham: Springer, 2022. p. 1-23
Series
YSEC Yearbook of Socio-Economic Constitutions, ISSN 2662-7124, E-ISSN 2662-7132
National Category
Law (excluding Law and Society)
Research subject
Artificial Intelligence; Constitutional Law
Identifiers
URN: urn:nbn:se:uu:diva-474270
DOI: 10.1007/16495_2021_38
OAI: oai:DiVA.org:uu-474270
DiVA, id: diva2:1657572
Available from: 2022-05-11 Created: 2022-05-11 Last updated: 2022-05-11. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text: https://link.springer.com/chapter/10.1007/16495_2021_38

Authority records

de Vries, Katja
