Closing Pandora’s Box: The Role of PKI and AI-Detection Platforms in Curbing the AI Deepfake Crisis
By Aisling Dawson | 26 Jan 2026 | IN-8036
NEWS: AI Generation of Non-Consensual Sexual Deepfakes by X's Grok Met with Uproar
The creation and publication of millions of non-consensual images and videos by X's AI chatbot, Grok, between late December and January has been met with public and government backlash. The virality of the deepfake content, which depicts mostly women and girls in a sexually explicit and non-consensual manner, exploded after Musk himself shared artificially generated images of himself on the platform in late December. With estimates of between 1.1 million and 3 million sexualized images generated over nine days, calls to strengthen both regulatory regimes and technical protections against the worryingly fast-evolving misuse of online chatbots are echoing worldwide. Responses so far include temporary bans of X in Indonesia and Malaysia, the launch of state-level (California) and national (France) investigations into xAI, European Union (EU) and U.K. briefings and parliamentary debates on the legality of such content, and the removal of posts and accounts, along with the threat of further legal action, in India.
IMPACT: Combating the Deepfake Crisis via Regulatory Efforts and Established and Emerging Security Technology
Despite X’s announcement of a paywall for Grok AI image generation on January 8, and the expansion of Grok’s “guardrails” by January 15 to ban prompts requesting revealing images of real people, concerns persist regarding the effectiveness of existing mitigation measures. Members of the U.K. Parliament argue that the current “guardrails” serve only to “monetize abuse,” while X’s response came nearly 10 days after generation of the content began, by which point the damage was largely done. Moreover, although placing Grok’s generative capabilities behind a paywall is claimed to simplify the identification of individuals misusing those tools (e.g., via banking or payment details), reliance on false credentials and disposable payment methods makes such measures easy to circumvent, destroying any meaningful traceability for law enforcement or punitive purposes.
The failure of current “guardrails” to address the deepfake crisis is kickstarting national regulatory engines into action. Following the January publication of its baseline cybersecurity standard for Artificial Intelligence (AI) models and systems (ETSI EN 304 223), ETSI has promised prescriptive measures for Generative AI (Gen AI), deepfakes, and misinformation. Meanwhile, both India and the United Kingdom are debating the applicability of existing legal frameworks to the Grok incident (e.g., the U.K. Online Safety Act and the U.K. Data (Use and Access) Act), as well as potential amendments to strengthen those frameworks against parties misusing AI chatbots, such as India’s Draft IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules. South Korea is at the vanguard of AI regulation: its AI Basic Act, published January 22, mandates clear labeling and human oversight for AI-generated content, with immediate implementation and steep non-compliance penalties, in contrast to the piecemeal approach adopted elsewhere (e.g., the EU AI Act). Yet, despite these regulatory advances, apprehension remains about the prospective success of regimes like South Korea’s, particularly given the “Streisand Effect”: despite strict pornography regulation, South Korean women remain the leading victims of sexually explicit deepfake content. Further, ambiguity surrounding the definitional limits of manipulated, AI-generated, or simulated content is raising concerns about potential governmental abuse of power where bans of AI-generation platforms, like Grok, are concerned, with freedom of expression cited as a defense against potentially draconian measures. As a result, regulatory efforts face an uphill battle.
At the same time, the deepfake crisis is generating heightened interest in new and established security technologies to restrain the potentially dire consequences of AI-generated, non-consensual content. Public Key Infrastructure (PKI) has been heralded as a cryptographic means of assuring the provenance of digital content, with vendors positing that digital signatures, acting as a “watermark” on images or videos, can establish that content is “real” rather than AI-generated. The Coalition for Content Provenance and Authenticity (C2PA) open standard is making strides in this regard, as are leading camera manufacturers working to integrate PKI to sign content metadata at the point of capture. Alongside established technologies like PKI, new tooling is emerging to address the deepfake crisis, including real-time detection platforms such as Reality Defender, pi-labs, and Sensity AI, which rely on Deep Learning (DL) and AI models to analyze and detect deepfakes across various media formats. Yet, while both established and new technologies help identify deepfakes post-generation, this alone is not sufficient to combat the harm caused.
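To illustrate the principle behind PKI-based provenance, the sketch below (Python, using the widely available cryptography library) signs an image’s bytes with a device-held private key and later verifies that signature. It is a simplified, hypothetical example rather than the C2PA manifest format or any vendor’s actual implementation; the file name, key handling, and workflow are assumptions made purely for illustration.

    # Minimal sketch of signing and verifying content with an ECDSA key pair.
    # Hypothetical example only; not the C2PA format or any vendor's product.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # In practice, the private key would live in secure hardware on the capture
    # device and its public key would be certified via a PKI certificate chain.
    private_key = ec.generate_private_key(ec.SECP256R1())
    public_key = private_key.public_key()

    # Hypothetical captured image whose provenance we want to attest.
    with open("capture.jpg", "rb") as f:
        image_bytes = f.read()

    # Sign the image content at the point of capture.
    signature = private_key.sign(image_bytes, ec.ECDSA(hashes.SHA256()))

    # A verifier (e.g., a platform or browser plug-in) checks the signature
    # against the certified public key before treating the content as authentic.
    try:
        public_key.verify(signature, image_bytes, ec.ECDSA(hashes.SHA256()))
        print("Signature valid: content matches what the device signed.")
    except InvalidSignature:
        print("Signature invalid: content was altered or not signed by this key.")

In a full provenance scheme, the signature and capture metadata would travel with the asset inside a standardized, signed manifest, so that any later edit invalidates the attestation unless it is itself recorded and re-signed.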
RECOMMENDATIONS: Pandora's Box Is Open, and Technology Alone Will Not Close It
Where misinformation is the driving force behind AI-generated content, PKI and AI-detection tooling are valid countermeasures. But when the intention behind non-consensual sexual content is degradation and humiliation (which it often is), digitally watermarking or cryptographically signing that content does little to mitigate its impact on victims. Even when it is obvious that content is AI-generated, non-consensual, sexually explicit content has a profound impact on the wellbeing and, potentially, the human rights of the women and girls who are disproportionately targeted, particularly given the unique scale of access to such tooling offered by platforms like Grok. This includes violations of the right to privacy, the prohibition of discrimination, and an infringement of the principle underpinning human rights protections as a whole: the inviolable dignity of the person. Without enforceable criminal frameworks and punitive regimes that prioritize both individual and organizational accountability, targeting not only those misusing AI chatbots but also the platforms enabling that misuse, watermarking tools, detection capabilities, and behavioral analytics will do little to stem the relentless surge of AI-generated content and the harm it carries with it.
PKI and AI-detection tools can help in this regard from a forensic standpoint, easing the identification of content and of users who are liable to face criminal sanctions, as well as evidencing the failures and negligence of platform operators in guarding against such misuse. Similarly, regulation itself will need to avoid reliance on subjective standards of “harm,” such as penalizing that which is “vulgar” or “obscene,” as such thresholds often collapse into an imposition of subjective morality, exposing criminal regulation to accusations of encroaching on freedom of speech or expression. Orienting legislation around the non-consensual component of the deepfake crisis offers a more objective legal standard. The horizontal application of human rights to organizations has been, and will remain, legally challenging for governments, so expanding existing legal regimes, rather than relying on existing rights and general-purpose regulation, is likely the most effective route forward. This requires governments, and private lobbyists, to shed the outdated narrative that regulation only serves to choke innovation: AI innovation can be achieved while oversight and liability for misuse are kept at its forefront. Alone, neither technology nor regulation has the teeth required to truly make a dent in the impact of AI-generated, sexually explicit content. When it comes to AI-generated content, Pandora’s Box is open, and putting the lid back on may not be possible. However, through a unified regulatory and technology-based approach, governments and vendors may be able to close it some of the way.
Written by Aisling Dawson