Question: What are governments and the UN doing in order to demolish the problem of AI-generated deepfakes, incest, and non-ethical pornography? All of these are harmful, right?

 

Answer: Thank you for your question – this is an important policy topic that has been prominent in the news and media.  

To answer your second question first: yes, deepfakes, virtual child sexual abuse material (CSAM), and synthetic pornography all carry serious risks of harm. For more about those risks, see our previous response, The Impact of Deepfakes, Synthetic Pornography, & Virtual Child Sexual Abuse Material, which outlines the harms of AI-generated sexual images and videos, particularly for children and teens.  

Now, let’s turn to your main concern: the policies and penalties surrounding the creation of this inappropriate, harmful online content. 

What do the laws say? International, national, and state policies 

Internationally 

The United Nations (UN) International Telecommunication Union (ITU) has encouraged social media companies to use digital tools to detect and remove deepfake content, primarily out of concern that these fake materials could sway elections or enable financial fraud. These digital tools include verification systems that can authenticate images and videos before they’re shared, confirming that they’re human-generated and real. 

The ITU highlights the importance of a coordinated global approach to this problem. Currently, no single international entity, organization, or group has as its sole purpose detecting and removing manipulated, harmful images online. Right now, the ITU is developing standards for watermarking videos (placing a logo or unique code within the video). Watermarking standards help capture data such as the creator’s identity, which is especially important given that video contributes roughly 80% of Internet traffic.  

For deepfakes involving children (i.e., CSAM), the International Centre for Missing & Exploited Children (ICMEC) has tracked global CSAM legislation since 2006. Its most recent review found that, out of 196 countries in the world: 

  • 156 countries have introduced or improved legislation against CSAM 
  • 111 countries meet four of the five criteria (listed below) that ICMEC considers necessary for adequate legislation  
  • 27 countries meet all criteria 
  • 10 countries have no legislation at all 

ICMEC’s five core criteria ask whether national legislation: 

  • “Exists with specific regard to CSAM 
  • Provides an exact definition of CSAM 
  • Criminalizes technology-affiliated CSAM-related offenses 
  • Criminalizes the knowing possession of CSAM, regardless of the intent to distribute 
  • Requires Internet Service Providers (ISPs) to report suspected CSAM to law enforcement or some other mandated agency.” 

National Policies   

On May 19, 2025, the TAKE IT DOWN Act was signed into law. This law makes it a crime to publish non-consensual intimate imagery online, including AI-generated content such as deepfakes. Non-consensual intimate imagery refers to “realistic, computer-generated pornographic images and videos that depict identifiable, real people.” The Act also makes it unlawful to intentionally publish, or even threaten to publish, this imagery on social media platforms. Importantly, a person’s consent to the creation of an AI image of themselves does not imply consent to its being posted online.  

The law also mandates that social media companies put clear processes in place to remove this content within 48 hours of a victim reporting it to them. Companies have one year from the law’s signing to fully establish this process. 

State-level policies  

Several US states have implemented laws regarding deepfakes. Here are some examples of what they cover and the penalties they impose: 

  • Tennessee: Sharing deepfakes without permission is a felony. Perpetrators who make and distribute these images or videos can serve up to 15 years in prison and pay a maximum of $10,000 in fines. 
  • Iowa: Creation of CSAM is a felony punishable by a maximum of 5 years in prison and a $10,245 fine for the first offense. 
  • New Jersey: Making and sending malicious deepfakes leads to prison time and a fine of up to $30,000. 

Digital tools and advancements in deepfake detection 

One CSAM detection product, Safer by Thorn, helps companies limit the spread of CSAM by using machine learning models to identify potentially abusive and inappropriate content on a digital device, helping policymakers and investigators safeguard kids faster. 

Lantern, an initiative launched by the Tech Coalition, finds and shares signals used to detect harmful deepfake materials. Signals are pieces of information tied to the social media accounts of CSAM perpetrators, including email addresses, usernames, CSAM hashes, and keywords used to groom victims or to buy and sell CSAM. Lantern is being integrated into the safety protocols of tech companies such as Discord, Google, Meta, Roblox, Snap, and Twitch. Lantern also aims to increase reporting of criminal offenses to the authorities and to raise awareness of the predatory tactics CSAM perpetrators use. 

Helpful Resources 

  • If you or a peer needs support with deepfakes, talk to your parent, a trusted adult, or your pediatrician.  
  • If your parents want to know more about this topic, or you want to share this information with the parents of a friend or peer who may be involved in image-based sexual abuse, this resource may be helpful: Tips for Parents: Deepfakes, Synthetic Pornography, & Virtual Child Sexual Abuse Material
  • NoFiltr is an online community that empowers youth with resources and peer support. Its purpose is to guide teens in navigating the online world, including risks of sexual exploitation, in a healthy and safe way. Some of its features include: 
    • 500+ advice submissions from teens themselves about tricky online interactions and experiences. Individuals must be aged 13 or older to submit their own advice. 
    • Quizzes on digital safety – on topics like Digital Mindfulness, Sextortion, Healthy Online Relationships, Online Grooming, Seeking Help 



Last Updated

08/12/2025

Source

American Academy of Pediatrics