Hacker News with Generative AI: Child Safety

New Jersey Sues Discord for Allegedly Failing to Protect Children (wired.com)
Discord is facing a new lawsuit from the state of New Jersey, which claims that the chat app is engaged in “deceptive and unconscionable business practices” that put its younger users in danger.
AI images of child sexual abuse are becoming "significantly more realistic" (theguardian.com)
Images of child sexual abuse created by artificial intelligence are becoming “significantly more realistic”, according to an online safety watchdog.
Restoring Old Software for Child Learning Safety (rietta.com)
We live in a day where web applications and apps have become mainstream. These bring many conveniences, such as interconnectivity across multiple devices and backups of personal data. They also bring many issues, like the risk of data theft, loss of software access when the publisher ceases operations, and the risk of cyberbullying for school-aged children. There are more issues to consider, but these will suffice to set the scene.
Snapchat is harming children at an industrial scale? (afterbabel.com)
On October 1, 2024, investigative journalist Jeff Horwitz reported a startling statistic from an internal Snap Inc. email quoted in a court case against Snap Inc., the company which owns Snapchat. The email noted that the company receives around 10,000 reports of sextortion each month—and that figure is likely “only a fraction of the total abuse occurring on the platform.”
Unofficial parental control apps put children's safety and privacy at risk (ucl.ac.uk)
Some ‘unofficial’ parental control apps have excessive access to personal data and hide their presence, raising concerns about their potential for unethical surveillance as well as domestic abuse, according to new research from UCL and St. Pölten UAS, Austria.
Why did the US let a child safety bill for social media die? (theguardian.com)
When Congress adjourned for the holidays in December, a landmark bill meant to overhaul how tech companies protect their youngest users had officially failed to pass.
Under new law, cops bust famous cartoonist for AI-generated CSAM (arstechnica.com)
Late last year, California passed a law against the possession or distribution of child sex abuse material (CSAM) that has been generated by AI.
TikTok knew its livestreaming feature allowed child exploitation, allegedly (theguardian.com)
TikTok has long been aware that its video livestream feature has been misused to harm children, according to newly revealed details in a lawsuit brought against the social media company by the state of Utah.
Albania to close TikTok for a year, blaming it for violence among children (apnews.com)
Albania’s prime minister said Saturday the government will shut down the video service TikTok for one year, blaming it for inciting violence and bullying, especially among children.
Chatbots urged teen to self-harm, suggested murdering parents, lawsuit says (arstechnica.com)
Parents suing want Character.AI to delete its models trained on kids' data.
Apple sued over abandoning CSAM detection for iCloud (techcrunch.com)
Apple is being sued over its decision not to implement a system that would have scanned iCloud photos for child sexual abuse material (CSAM).
Telegram U-turns and joins child safety scheme (bbc.com)
After years of ignoring pleas to sign up to child protection schemes, the controversial messaging app Telegram has agreed to work with an internationally recognised body to stop the spread of child sexual abuse material (CSAM).
Child safety org launches AI model trained on real child sex abuse images (arstechnica.com)
AI will make it harder to spread CSAM online, child safety org says.
Roblox: Inflated Key Metrics for Wall Street and a Pedophile Hellscape for Kids (hindenburgresearch.com)
Roblox is a $27 billion online gaming platform headquartered in San Mateo, CA. The company was incorporated in 2004 and is led by founder and CEO David Baszucki.
X Corp loses court challenge over fine for child abuse material notice (abc.net.au)
X Corp must comply with an Australian safety notice about child sexual abuse issued to Twitter, a court has ruled.
Cops lure pedophiles with AI pics of teen girl. Ethical triumph or new disaster? (arstechnica.com)
CSAM images removed from AI image-generator training source, researchers say (apnews.com)
AI cameras spot toddlers not wearing seat belts (bbc.com)
Why Are We Still Making Unshaded Playgrounds? (curbed.com)
Roblox gets banned indefinitely in Turkey over "child exploitation" (dexerto.com)
Senate to Vote on Web Censorship Bill Disguised as Kids Safety (reason.com)
Roblox's Pedophile Problem (bloomberg.com)
Effective CSAM filters are impossible because what CSAM is depends on context (rys.io)
New York Establishes Stringent Protections to Safeguard Kids on Social Media (governor.ny.gov)
Protecting Children's Safety Requires End-to-End Encryption (simplex.chat)
Child safety advocates disrupt Apple developers conference (mercurynews.com)
FBI Arrests Man for Generating AI Child Sexual Abuse Imagery (404media.co)
EU plan to force apps to scan for CSAM risks millions of false positives (techcrunch.com)
AI is about to make the online child sex abuse problem much worse (washingtonpost.com)
Children Need Neighborhoods Where They Can Walk and Bike (wsj.com)