President Joe Biden’s administration is urging the tech industry and financial institutions to shut down the growing market for AI-generated abusive sexual images. New generative AI tools have made it easy to create and spread realistic, sexually explicit deepfakes whose victims range from celebrities to children, leaving them with little recourse.
Key Points:
- Call for Voluntary Action: In the absence of federal legislation, the White House is seeking voluntary cooperation from companies to curb the creation, spread, and monetization of nonconsensual AI images, especially those depicting children.
- Chief Science Adviser’s Statement: Arati Prabhakar, director of the White House’s Office of Science and Technology Policy, highlighted the rapid increase in nonconsensual imagery targeting women and girls, urging companies to take responsibility.
- Targeting the Private Sector: The administration’s document calls for action from AI developers, payment processors, financial institutions, cloud computing providers, and mobile app store gatekeepers like Apple and Google.
Specific Measures:
- Disrupting Monetization: The private sector is encouraged to block payment access to sites advertising explicit images of minors.
- Stricter Enforcement: Payment platforms and financial institutions should rigorously enforce their terms of service against businesses that promote abusive imagery.
- Curbing App Services: Cloud providers and mobile app stores should restrict apps and services marketed for creating or altering sexual images without individuals’ consent.
- Removal of Images: Online platforms should make it easier for victims to have nonconsensual explicit images removed, whether AI-generated or real.
High-Profile Cases and Broader Impact:
- Taylor Swift Incident: AI-generated deepfake images of Taylor Swift circulated widely on social media in January 2024, prompting Microsoft to strengthen safeguards after some of the images were traced to its AI image-design tool.
- School Impact: Schools are grappling with AI-generated deepfake nudes of students, often made and shared by fellow teenagers.
Previous and Ongoing Efforts:
- Voluntary Commitments: Last summer, major tech companies including Amazon, Google, Meta, and Microsoft made voluntary commitments to the White House to place safeguards on new AI systems.
- Executive Order: Biden’s October 2023 executive order seeks to guide AI development while addressing public safety and national security concerns, including AI-generated child abuse imagery.
- Legislative Support: Lasting safeguards will require congressional action; a bipartisan group of senators has urged Congress to spend at least $32 billion over the next three years on AI development and safety measures.
Legal and Oversight Challenges:
- Criminal Charges: A Wisconsin man was charged with using AI to create thousands of realistic sexual images of minors. Although creating and possessing such images is illegal, oversight of the tools that enable their creation is lacking.
- AI Database Issues: The Stanford Internet Observatory found suspected child sexual abuse images in LAION, a large image database used to train generators such as Stable Diffusion. Stability AI, the company behind Stable Diffusion, distanced itself from the earlier open-source version allegedly used by the Wisconsin man, but controlling the misuse of open-source models once they are released remains difficult.
Prabhakar emphasized the widespread nature of the issue, noting that both open-source and proprietary AI technologies contribute to the problem. The administration continues to push for comprehensive industry action and legislative support to address the escalating misuse of AI in generating abusive imagery.