On September 19, a heavily redacted filing revealed that Hive, the AI company led by cofounder and CEO Kevin Guo, holds a contract covering its detection algorithms for identifying child sexual abuse material (CSAM), including material generated by AI. Guo said he could not disclose the contract's specifics but confirmed its purpose.
The filing cites data from the National Center for Missing & Exploited Children, which reported a 1,325% increase in incidents involving generative AI in 2024. It argues that the sheer volume of digital content circulating online demands automated tools to process and analyze that material efficiently.
Investigators working child exploitation cases are tasked above all with stopping ongoing abuse, but the surge in AI-generated CSAM has made it harder to tell whether a given image depicts a real victim in immediate danger. A tool that could reliably flag images of genuine victims would help investigators decide which cases to prioritize.
The document makes the same point: distinguishing AI-generated images from depictions of real victims lets investigators direct their resources to cases involving actual children, making the program more effective at protecting vulnerable individuals.
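To illustrate the triage logic the filing describes, here is a minimal sketch in Python. It is purely illustrative and rests on an assumption: each image already carries a hypothetical classifier score, p_synthetic, estimating how likely it is to be AI-generated. Nothing here reflects Hive's actual models, APIs, or the investigators' real workflow.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """One flagged image awaiting review (all fields hypothetical)."""
    case_id: str
    p_synthetic: float  # assumed classifier output: probability the image is AI-generated

def triage(cases: list[Case]) -> list[Case]:
    """Order cases so those most likely to show a real victim come first.

    This mirrors the rationale in the filing: images judged likely to
    depict a real person are reviewed before likely AI-generated ones,
    so limited investigator time goes where a child may actually be in
    immediate danger.
    """
    return sorted(cases, key=lambda c: c.p_synthetic)

if __name__ == "__main__":
    queue = [
        Case("A-101", p_synthetic=0.92),  # likely AI-generated
        Case("A-102", p_synthetic=0.08),  # likely a real photograph
        Case("A-103", p_synthetic=0.55),  # ambiguous
    ]
    for case in triage(queue):
        print(f"{case.case_id}: p(synthetic) = {case.p_synthetic:.2f}")
```

The hard part in practice is the detection model itself, not the sorting; the sketch only shows how a detection score could feed case prioritization.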
Hive AI develops tools for generating videos and images, along with content moderation systems that can identify violence, spam, and sexual content, as well as recognize public figures. In December, reports indicated that Hive was also marketing its deepfake detection technology to the US military.
Source: https://www.technologyreview.com/2025/09/26/1124343/us-investigators-are-using-ai-to-detect-child-abuse-images-made-by-ai/

