Hacker News with Generative AI: Child Safety

Under new law, cops bust famous cartoonist for AI-generated CSAM (arstechnica.com)
Late last year, California passed a law against the possession or distribution of child sex abuse material (CSAM) that has been generated by AI.
TikTok knew its livestreaming feature allowed child exploitation, allegedly (theguardian.com)
TikTok has long been aware that its video livestream feature has been misused to harm children, according to newly revealed details in a lawsuit brought against the social media company by the state of Utah.
Albania to close TikTok for a year, blaming it for violence among children (apnews.com)
Albania’s prime minister said Saturday the government will shut down the video service TikTok for one year, blaming it for inciting violence and bullying, especially among children.
Chatbots urged teen to self-harm, suggested murdering parents, lawsuit says (arstechnica.com)
Parents suing want Character.AI to delete its models trained on kids' data.
Apple sued over abandoning CSAM detection for iCloud (techcrunch.com)
Apple is being sued over its decision not to implement a system that would have scanned iCloud photos for child sexual abuse material (CSAM).
Telegram U-turns and joins child safety scheme (bbc.com)
After years of ignoring pleas to sign up to child protection schemes, the controversial messaging app Telegram has agreed to work with an internationally recognised body to stop the spread of child sexual abuse material (CSAM).
Child safety org launches AI model trained on real child sex abuse images (arstechnica.com)
AI will make it harder to spread CSAM online, child safety org says.
Roblox: Inflated Key Metrics for Wall Street and a Pedophile Hellscape for Kids (hindenburgresearch.com)
Roblox is a $27 billion online gaming platform headquartered in San Mateo, CA. The company was incorporated in 2004 and is led by founder and CEO David Baszucki.
X Corp loses court challenge over fine for child abuse material notice (abc.net.au)
X Corp must comply with an Australian safety notice about child sexual abuse issued to Twitter, a court has ruled.
Cops lure pedophiles with AI pics of teen girl. Ethical triumph or new disaster? (arstechnica.com)
CSAM images removed from AI image-generator training source, researchers say (apnews.com)
AI cameras spot toddlers not wearing seat belts (bbc.com)
Why Are We Still Making Unshaded Playgrounds? (curbed.com)
Roblox gets banned indefinitely in Turkey over "child exploitation" (dexerto.com)
Senate to Vote on Web Censorship Bill Disguised as Kids Safety (reason.com)
Roblox's Pedophile Problem (bloomberg.com)
Effective CSAM filters are impossible because what CSAM is depends on context (rys.io)
New York Establishes Stringent Protections to Safeguard Kids on Social Media (governor.ny.gov)
Protecting Children's Safety Requires End-to-End Encryption (simplex.chat)
Child safety advocates disrupt Apple developers conference (mercurynews.com)
FBI Arrests Man for Generating AI Child Sexual Abuse Imagery (404media.co)
EU plan to force apps to scan for CSAM risks millions of false positives (techcrunch.com)
AI is about to make the online child sex abuse problem much worse (washingtonpost.com)
Children Need Neighborhoods Where They Can Walk and Bike (wsj.com)