Until recently, AI-generated, or more precisely machine learning (ML)-generated, content belonged to the realm of science fiction. A series of important inventions gave AI the power of creation: Variational Autoencoders (VAEs) in 2013, Generative Adversarial Networks (GANs) in 2014, and Generative Pre-trained Transformers (GPT) in 2018. Synthetic products based on generative ML are useful in diverse fields of application; for example, generative ML can be used for the synthetic resuscitation of a dead actor or a deceased loved one.

In contrast to a tool such as a pen or a typewriter, ML can be such a decisive element in the generative process that the resulting speech is no longer (indisputably) attributable to a human speaker. This raises the central question of this article: is speech generated by a machine protected by the right to freedom of expression in Article 10 ECHR? I first discuss whether ML-generated utterances fall within the protective scope of freedom of expression (Article 10(1) ECHR). After concluding that they do, I examine specific complexities raised by ML-generated content in relation to limitations on freedom of expression (Article 10(2) ECHR). The first set of potential limitations I explore are those following from copyright, data protection, privacy and confidentiality law; some types of ML-generated content could circumvent these limitations. Second, I study how new types of ML-generated content can create normative grey areas in which the boundary between constitutionally protected and unprotected speech is not always easy to draw. In this context, I discuss two types of ML-generated content: virtual child pornography and fake news/disinformation. Third, I argue that the nuances of Article 10 ECHR are not easily captured in an automated filter, and I discuss the potential implications of the arms race between automated filters and ML-generated content.