Hacker News with Generative AI: Ethics

Conflicts of interest in climate change science (jessicaweinkle.substack.com)
A new pre-print. It's time for professional norms to step it up.
Google Lifts a Ban on Using Its AI for Weapons and Surveillance (wired.com)
Google announced Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology.
Humane founder Imran Chaudhri likely paid for highly charitable Wikipedia entry (ycombinator.com)
Google Is on the Wrong Side of History (eff.org)
Google continues to show us why it chose to abandon its old motto of “Don’t Be Evil,” as it becomes more and more enmeshed with the military-industrial complex.
Can I ethically use LLMs? (ntietz.com)
The title is not a rhetorical question, and I'm not going to bury an answer. I don't have an answer. This post is my exploration of the question, and why I think it is a question.
The Generative AI Con (wheresyoured.at)
It's been just over two years and two months since ChatGPT launched, and in that time we've seen Large Language Models (LLMs) blossom from a novel concept into one of the most craven cons of the 21st century — a cynical bubble inflated by OpenAI CEO Sam Altman built to sell into an economy run by people that have no concept of labor other than their desperation to exploit or replace it.
Justice Dept asks Supreme Court to decide if ethics watchdog can remain fired (nbcnews.com)
The Trump administration will ask the U.S. Supreme Court to overturn a lower court ruling that ordered a government ethics watchdog reinstated to his post after the president fired him.
Why bother with privacy when I have nothing to hide? (2023) (hannahonprivacy.substack.com)
Google defends scrapping AI pledges and DEI goals in all-staff meeting (theguardian.com)
Google’s executives gave details on Wednesday on how the tech giant will sunset its diversity initiatives and defended dropping its pledge against building artificial intelligence for weaponry and surveillance in an all-staff meeting.
Large Language Models Show Concerning Tendency to Flatter Users (xyzlabs.substack.com)
Recent research from Stanford University has revealed a concerning trend among leading AI language models: they exhibit a strong tendency toward sycophancy, or excessive flattery, with Google's Gemini showing the highest rate of such behavior.
A woman made her AI voice clone say "arse." Then she got banned (technologyreview.com)
People with motor neuron disease should be allowed to say whatever they want, including “arse” and “knickers.”
Innovation in AI is in danger of outpacing governance (techradar.com)
What We're Fighting For (wheresyoured.at)
A great deal of what I write feels like narrating the end of the world — watching as the growth-at-all-costs, hyper-financialized Rot Economy seemingly tarnishes every corner of our digital lives. My core frustration isn't just how shitty things have gotten, but how said shittiness has become so profitable for so many companies.
AGI Ruin: A List of Lethalities (2022) (lesswrong.com)
Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs (emergent-values.ai)
As AIs rapidly advance and become more agentic, the risk they pose is governed not only by their capabilities but increasingly by their propensities, including goals and values.
Stop AI (stopai.info)
We are a non-violent civil resistance organization working to permanently ban the development of smarter-than-human AI to prevent human extinction, mass job loss, and many other problems.
Why it is important not to have children (2012) (stallman.org)
The most important thing you can do to avoid global heating disaster and make a positive contribution to the world is to avoid having children. The numbers, calculated for modern America, say that having a child equals roughly 36 round-trip transatlantic flights per year.
Andrew Ng is 'glad' Google dropped its AI weapons pledge (techcrunch.com)
Andrew Ng, the founder and former leader of Google Brain, supports Google’s recent decision to drop its pledge not to build AI systems for weapons.
Fully autonomous AI agents should not be developed (huggingface.co)
This paper argues that fully autonomous AI agents should not be developed.
Is there ever a good reason to debate someone who is not arguing in good faith? (reddit.com)
Why Banks May Be Hoping You're Not Paying Attention (nytimes.com)
They have no fiduciary duty in many cases and can profit from customers’ confusion. But where’s the line between unsavory and illegal?
Google removes restrictions on military AI development (ynetnews.com)
Google has quietly removed a key section from its artificial intelligence (AI) principles, eliminating language that explicitly pledged not to develop AI technologies that could cause harm, including weapons systems, Bloomberg reported Tuesday.
Google owner drops promise not to use AI for weapons (theguardian.com)
The Google owner, Alphabet, has dropped its promise not to use artificial intelligence for purposes such as developing weapons and surveillance tools.
Google drops pledge not to use AI for weapons, surveillance (aljazeera.com)
Google has dropped a pledge not to use artificial intelligence for weapons or surveillance in its updated ethics policy on the powerful technology.
Google removes pledge to not use AI for weapons from website (techcrunch.com)
Google removed a pledge to not build AI for weapons or surveillance from its website this week.
Why employees smuggle AI into work (bbc.com)
Many staff are said to be using unapproved AI at work
Anthropic: "Applicants should not use AI assistants" (simonwillison.net)
Why Is This CEO Bragging About Replacing Humans with A.I.? (nytimes.com)
Ask typical corporate executives about their goals in adopting artificial intelligence, and they will most likely make vague pronouncements about how the technology will help employees enjoy more satisfying careers, or create as many opportunities as it eliminates.