Open Research & Design Challenges

For Preventing Content Distribution

  • In high-risk contexts, such as sending or receiving content from within a country that criminalizes one’s sexuality, message ephemerality can offer protection.

    However, the intimate sharer may also need a record of having sent the content in case it is later shared non-consensually and they want to pursue legal recourse.

  • Intimate sharers want the ability to track their content and stop it from being shared non-consensually.

    Existing methods that aim to offer this functionality include “digital fingerprinting,” a technique that distills content to a string of letters and numbers to create a “fingerprint” (i.e., perceptual hash).

    Platforms can compare fingerprints to find the same (or similar) images and videos more easily (a minimal sketch of this matching approach appears at the end of this list).

    However, open questions remain:

    • Preliminary research suggests that AI can use fingerprints to reconstruct the original content: As AI techniques improve, how faithfully will they be able to reconstruct fingerprinted content?

    • A platform may require human validation to verify a content match: How comfortable are victim-survivors with platform workers seeing their matched content? Is privacy-preserving match validation possible?

    • Perpetrators may be able to edit or otherwise transform content to evade detection: How can fingerprint matching be made robust to such transformations?

  • A core principle of crime prevention is deterring perpetration.

    There are two classes of NCII perpetrators:

    1. The original recipient of the content who shared it without consent

    2. The subsequent viewers of the intimate content, particularly those who purposefully seek it & reshare it

    Deterrence messaging is currently used to fight child sexual abuse material (CSAM) by establishing strong social norms. R&D is needed to develop effective deterrence messaging that can be shown when NCII is taken down.
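
To make the fingerprint matching sketched above concrete, the toy example below computes an “average hash” and compares two files by Hamming distance. This is a minimal sketch for intuition only: deployed systems use far more robust perceptual hashes (such as PhotoDNA or PDQ), it assumes the Pillow imaging library is available, and the file names and the distance threshold are placeholders.

  # Toy "average hash" fingerprint; illustrative only, not a deployed design.
  from PIL import Image

  def fingerprint(path, hash_size=8):
      """Distill an image into a short bit string (a perceptual hash)."""
      # Shrink to a tiny grayscale thumbnail so the hash reflects coarse
      # structure rather than exact pixel values.
      img = Image.open(path).convert("L").resize((hash_size, hash_size))
      pixels = list(img.getdata())
      average = sum(pixels) / len(pixels)
      bits = 0
      for p in pixels:
          # One bit per pixel: is it brighter than the image-wide average?
          bits = (bits << 1) | (1 if p > average else 0)
      return bits

  def distance(fp_a, fp_b):
      """Hamming distance between fingerprints (0 = same coarse structure)."""
      return bin(fp_a ^ fp_b).count("1")

  # A platform can store only the fingerprint of reported content and flag
  # uploads whose fingerprints fall within a small distance threshold.
  if distance(fingerprint("reported.jpg"), fingerprint("upload.jpg")) <= 10:
      print("possible re-upload of reported content")

A sketch this small also makes the open questions above concrete: even a 64-bit fingerprint encodes a coarse brightness map of the original image (the reconstruction question), and edits such as heavy cropping or mirroring can push an upload’s fingerprint past the matching threshold (the evasion question).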

For Preventing Synthetic Content Creation


  • To generate synthetic NCII, perpetrators may collect pictures & videos of a target to use as training data or prompts for an AI model.

    Stopping synthetic NCII requires technical mechanisms that block the collection and misuse of the content people post online.

    Photocopy machines support “forced secure watermarks” to stop unauthorized copying. Analogously, AI researchers are prototyping provenance protocols like C2PA and tools like PhotoGuard and Glaze to stop content misuse. But more R&D is needed to stop the non-consensual use of posted media to generate synthetic NCII (a toy illustration of the watermarking idea appears at the end of this section).

  • Users may prompt AI models to generate harmful outputs like synthetic NCII.

    Two points of intervention:

    1. Stopping AI systems from producing synthetic NCII

    2. Stopping synthetic NCII from being uploaded or viewed on online platforms

    Stopping synthetic NCII of adults requires technically differentiating abusive from non-abusive content. That, in turn, requires translating legal and social understandings of consent into technical metrics that can govern AI system outputs and uploads to content-hosting platforms.

    Is it possible for AI to produce “generic intimate content” that does not infringe on the likeness of any real person, or does all AI-generated intimate content so closely resemble a real person’s likeness that it is NCII?

  • There are two classes of synthetic NCII perpetrators:

    1. The creator of the synthetic intimate content without the subject’s consent

    2. The subsequent viewers of the synthetic intimate content, particularly those who seek it purposefully.

    Stopping synthetic NCII requires techniques to attribute AI outputs to their creator, as well as deterrence messaging that targets creators and viewers to stop subsequent harmful behavior.

    A core principle of crime prevention is deterring perpetration. Deterrence messaging is currently used to fight CSAM. But R&D is needed to adapt messages to synthetic NCII.
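
As a minimal illustration of the watermarking idea referenced earlier in this section, the toy sketch below hides a short provenance tag in the least-significant bits of an image’s pixels. This is for intuition only: it is not how C2PA, PhotoGuard, or Glaze work, the “do-not-train” tag and file names are placeholders, and it assumes NumPy and Pillow are available.

  # Toy least-significant-bit (LSB) watermark; hides a short tag in an image.
  # Illustrative only; not the mechanism used by C2PA, PhotoGuard, or Glaze.
  import numpy as np
  from PIL import Image

  def embed_tag(path_in, path_out, tag):
      """Write `tag` (bytes) into the lowest bit of the first pixels."""
      pixels = np.array(Image.open(path_in).convert("RGB"))
      flat = pixels.reshape(-1)
      bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
      if bits.size > flat.size:
          raise ValueError("tag too large for this image")
      # Clear each target pixel's lowest bit, then set it to the tag bit.
      flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
      # Save losslessly (PNG): lossy re-encoding would destroy the tag.
      Image.fromarray(pixels).save(path_out, format="PNG")

  def read_tag(path, tag_len):
      """Recover a `tag_len`-byte tag from the lowest bit of the pixels."""
      flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
      return np.packbits(flat[:tag_len * 8] & 1).tobytes()

  embed_tag("original.png", "tagged.png", b"do-not-train")
  print(read_tag("tagged.png", 12))  # b'do-not-train'

A watermark this fragile is erased by simple re-encoding or resizing, which is exactly why more robust R&D is needed both to protect posted media from misuse and to attribute AI outputs to their creator.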