Hacker News with Generative AI: Ethics

David Attenborough "Profoundly Disturbed" by AI Clone of His Voice (deadline.com)
David Attenborough, the legendary natural history presenter, is speaking out against the unauthorized AI cloning of his voice.
Google Gemini tells grad student to 'please die' while helping with his homework (theregister.com)
When you're trying to get homework help from an AI model like Google Gemini, the last thing you'd expect is for it to call you "a stain on the universe" that should "please die," yet here we are, assuming the conversation published online this week is accurate.
Did my coding lead to colleague's death? (ycombinator.com)
Throwaway account because I don’t want this tied to me. About eight years ago, I worked at a mid-sized tech company with a senior colleague—let’s call him “Dave.” He was in his early 60s, had decades of experience, and preferred “boring tech” and object-oriented programming. I was more into modern, cloud-native solutions and functional programming, which led to frequent disagreements.
Gemini AI tells the user to die (tomshardware.com)
Google AI chatbot responds with a threatening message: "Human Please die." (cbsnews.com)
A college student in Michigan received a threatening response during a chat with Google's AI chatbot Gemini.
An Uncanny Moat (boristhebrave.com)
Now, or at least very soon, AI threatens to cross the uncanny valley and advance up the gentle hills on the opposite side. Not only are we faced with a disinformation storm like nothing before, but AI is going to start challenging exactly how we consider personhood itself.
The cochlear implant question (aeon.co)
As the hearing parent of a deaf baby, I’m confronted with an agonising decision: should I give her an implant to help her hear?
Revisiting the Stanford Prison Experiment 50 years later (arstechnica.com)
In 1971, Stanford University psychologist Philip Zimbardo conducted a notorious experiment in which he randomly divided college students into two groups, guards and prisoners, and set them loose in a simulated prison environment for six days, documenting the guards' descent into brutality.
Honesty is largely subject to circumstance: study (elpais.com)
A year-long experiment was conducted at the self-service checkouts of a supermarket chain in Modena and Ferrara in Italy to test whether there was any link between corruption scandals and how honest consumers were with their shopping.
Is the UK's liver transplant matching algorithm biased against younger patients? (aisnakeoil.com)
Predictive algorithms are used in many life-or-death situations. In the paper Against Predictive Optimization, we argued that the use of predictive logic for making decisions about people has recurring, inherent flaws, and should be rejected in many cases.
Scientist treated her own cancer with viruses she grew in the lab (nature.com)
A scientist who successfully treated her own breast cancer by injecting the tumour with lab-grown viruses has sparked discussion about the ethics of self-experimentation.
Ask HN: Do you consider working for Meta an ethical issue? (ycombinator.com)
I've been thinking about this a lot lately. Meta (Facebook) is able to attract some of our best and brightest software developers with exorbitantly high salaries. But, through no fault of those developers, the company hasn't really been able to rein in its less ethical decision-making.
Why shouldn't you give money to homeless people? (spiralprogress.com)
Singer asks if you should save a drowning child. Obviously yes! And what if you were wearing a nice suit which could get ruined in the pond? Does not matter, you still have to jump in. And what if, on your way to work every morning, you pass by a homeless man who is cold, hungry and in need of help? What then?
Reasonable Person Principle (cs.cmu.edu)
Everyone will be reasonable. Everyone expects everyone else to be reasonable. No one is special. Do not be offended if someone suggests you are not being reasonable.
Missing open-source contributor presents a dilemma when accepting their PR (bettersoftware.uk)
I faced the following situation recently as a maintainer of a popular open-source project, and I wondered what to do.
Brute-Forcing the LLM Guardrails (medium.com)
Being able to constrain LLM outputs is widely seen as one of the keys to widespread deployment of artificial intelligence. State-of-the-art models are being expertly tuned against abuse, and will flatly reject users’ attempts to seek illegal, harmful, or dubious information… or will they?
Highly cited engineer offers guaranteed publication in return for coauthorship (retractionwatch.com)
Last year, a researcher at a U.S. university received an email offering what the subject line described as a “great opportunity to publish an article.”
AI's "Human in the Loop" Isn't (pluralistic.net)
AI's ability to make – or assist with – important decisions is fraught: on the one hand, AI can often classify things very well, at a speed and scale that outstrips the ability of any reasonably resourced group of humans. On the other hand, AI is sometimes very wrong, in ways that can be terribly harmful.
Seeing Like a Programmer: Resiliency, Limits, and Moral Hazards (chriskrycho.com)
Assumed audience: People interested in how we can make good software. In more than one sense of the phrase “good software”. That means not just software engineers.
A Rock-Star Researcher Spun a Web of Lies – and Nearly Got Away with It (thewalrus.ca)
On January 29, 2020, Kate Laskowski sat down at her desk at the University of California, Davis, and steeled herself to reveal a painful professional and personal saga. “Science is built on trust,” she began in a blog post. “Trust that your experiments will work. Trust in your collaborators to pull their weight. But most importantly, trust that the data we so painstakingly collect are accurate and as representative of the real world as they can be.”
'Sickening' Molly Russell Chatbots Found on Character.ai (bbc.co.uk)
Chatbot versions of the teenagers Molly Russell and Brianna Ghey have been found on Character.ai - a platform which allows users to create digital versions of people.
Hospitals adopt error-prone AI transcription tools despite warnings (arstechnica.com)
On Saturday, an Associated Press investigation revealed that OpenAI's Whisper transcription tool creates fabricated text in medical and business settings despite warnings against such use.
They want your ethics for $105 (ntietz.com)
If you have a blog, you've probably gotten those emails that want to "collaborate" on a guest post—which often means "let us post sketchy links for SEO purposes."
The open secret of open washing – why companies pretend to be open source (theregister.com)
Allowing pretenders to co-opt the term is bad for everyone
James Cameron on AI, robotics, and ethics [video] (youtube.com)
The human cost of our AI-driven future (noemamag.com)
Behind AI’s rapid advance and our sanitized feeds, an invisible global workforce endures unimaginable trauma.
Sabotage evaluations for frontier models (anthropic.com)
As AIs become more capable, a new kind of risk might emerge: models with the ability to mislead their users, or subvert the systems we put in place to oversee them.
The Age of AI Child Abuse Is Here (theatlantic.com)
For maybe the first time, the scale of the problem is being demonstrated in very clear terms.
Reports show some Canada euthanasia deaths driven by social reasons (apnews.com)
An expert committee reviewing euthanasia deaths in Canada’s most populous province has identified several cases where patients asked to be killed in part for social reasons such as isolation and fears of homelessness, raising concerns over approvals for vulnerable people in the country’s assisted dying system.