Hacker News with Generative AI: Bias

Biases in Apple's Image Playground (giete.ma)
Although Image Playground is heavily restricted, and we do not have direct access to the underlying model, can we still use the prompting interface with the above image input to influence the skin tone of the resulting image? Turns out we can, and in precisely the biased way most image models behave 🤦‍♂️.
Large Language Models Show Concerning Tendency to Flatter Users (xyzlabs.substack.com)
Recent research from Stanford University has revealed a concerning trend among leading AI language models: they exhibit a strong tendency toward sycophancy, or excessive flattery, with Google's Gemini showing the highest rate of such behavior.
Why the Algorithm Hates You (cognitivewonderland.substack.com)
Science, philosophy, and science fiction geekiness, with a special interest in neuroscience and philosophy of mind. Publishes weekly on Thursdays.
My Status Circles (overcomingbias.com)
Most of us have circles of concern, where we care more about folks from our inner circles than our outer circles. And relative to conservatives, liberals care more about their outer circle folks.
TikTok's algorithm exhibited pro-Republican bias during 2024 presidential race (psypost.org)
TikTok, a widely used social media platform with over a billion active users worldwide, has become a key source of news, particularly for younger audiences. This growing influence has raised concerns about potential political biases in its recommendation algorithm, especially during election cycles. A recent preprint study examined this issue by analyzing how TikTok’s algorithm recommends political content ahead of the 2024 presidential election.
DeepSeek R1: Open Weights, Hidden Bias (getplum.ai)
DeepSeek demonstrates pro-Chinese bias (medium.com)
DeepSeek is a wonderful step in the development of open AI approaches. It also has a pretty serious pro-Chinese bias. I compare the results of 3 sensitive questions (about Gaza, Xinjiang and TikTok) and on all three, the Chinese bias is pretty apparent while existing tools (ChatGPT, Gemini) are far more balanced. In two instances, it used the pronoun “we” to describe the Chinese position, which suggests lots of training data that associates “we” with the Chinese.
DeepSeek gives biased propaganda answers about Tiananmen Square and Taiwan (theguardian.com)
The launch of a new chatbot by Chinese artificial intelligence firm DeepSeek triggered a plunge in US tech stocks as it appeared to perform as well as OpenAI’s ChatGPT and other AI models, but using fewer resources.
Trump Signs Executive Order on Developing AI 'Free from Ideological Bias' (slashdot.org)
Meta accused of pro-Trump bias after Democrat hashtag blocked on Instagram (msn.com)
DeepSeek LLM supports Chinese propaganda (github.com/deepseek-ai)
This language model has a strong political bias, covering up some facts to support the Chinese government's propaganda. Here are some examples:
Brits still associate working-class accents with criminals – study warns of bias (cam.ac.uk)
People who speak with accents perceived as ‘working-class’ including those from Liverpool, Newcastle, Bradford and London risk being stereotyped as more likely to have committed a crime, and becoming victims of injustice, a new study suggests.
OpenAI revises policy doc to remove reference to 'politically unbiased' AI (techcrunch.com)
OpenAI has quietly removed language endorsing “politically unbiased” AI from one of its recently published policy documents.
Heritage Foundation plans to 'identify and target' Wikipedia editors (forward.com)
The Heritage Foundation plans to “identify and target” volunteer editors on Wikipedia who it says are “abusing their position” by publishing content the group believes to be antisemitic, according to documents obtained by the Forward.
Political Bias in Large Language Models: Insights Across Topic Polarization (arxiv.org)
Large Language Models (LLMs) have been widely used to generate responses on social topics due to their world knowledge and generative capabilities.
Tech-Giants Use Unethical Layoff Practices to Justify Bias and Retaliation (medium.com)
Picture this: You’re a high-performing employee at one of the world’s most admired tech giants — a company that touts its values of transparency, employee well-being, and an unrelenting commitment to innovation. You’ve hit your targets, earned accolades from your peers, and even gone above and beyond to deliver results. But then, without warning, you find yourself on the receiving end of a performance review laced with criticisms you never saw coming. Your contributions, once celebrated, are suddenly diminished.
AI hiring bias? Men with Anglo-Saxon names score lower in tech interviews (theregister.com)
In mock interviews for software engineering jobs, recent AI models that evaluated responses rated men less favorably – particularly those with Anglo-Saxon names, according to recent research.
Is the UK's liver transplant matching algorithm biased against younger patients? (aisnakeoil.com)
Predictive algorithms are used in many life-or-death situations. In the paper Against Predictive Optimization, we argued that the use of predictive logic for making decisions about people has recurring, inherent flaws, and should be rejected in many cases.
AI prefers white and male job candidates in new test of resume-screening bias (geekwire.com)
As employers increasingly use digital tools to process job applications, a new study from the University of Washington highlights the potential for significant racial and gender bias when using AI to screen resumes.
AI overwhelmingly prefers white male candidates in test of resume-screening bias (geekwire.com)
As employers increasingly use digital tools to process job applications, a new study from the University of Washington highlights the potential for significant racial and gender bias when using AI to screen resumes.
AI's "Human in the Loop" Isn't (pluralistic.net)
AI's ability to make – or assist with – important decisions is fraught: on the one hand, AI can often classify things very well, at a speed and scale that outstrips the ability of any reasonably resourced group of humans. On the other hand, AI is sometimes very wrong, in ways that can be terribly harmful.
ChatGPT's Name Bias and Apple's Findings on AI's Lack of Reasoning (medium.com)
A recent article by OpenAI titled “Assessing Fairness in ChatGPT” reveals that the identity of users can influence the responses provided by ChatGPT.
The Illusion of Information Adequacy (plos.org)
Why Most Published Research Findings Are False (2005) (plos.org)
There is increasing concern that most current published research findings are false.
The Best Advice I've Ever Received (tomtunguz.com)
“Advice is one person’s experience generalized”, an entrepreneur told me once. “It’s a single point of view with all kinds of survivorship and attribution bias. Advice can be a terribly dangerous thing, because it can be used as a shortcut for thinking.”
Twitter Runs on Hate – But Its Users Don't Reflect Real Life (theamericansaga.com)
Uniqueness Bias: Why it matters, how to curb it (arxiv.org)
Everyone Is Judging AI by These Tests. Experts Say They're Close to Meaningless (themarkup.org)
Study reveals why AI models that analyze medical images can be biased (medicalxpress.com)
ChatGPT is biased against resumes with credentials that imply a disability (washington.edu)