Hacker News with Generative AI: Bias

Tech-Giants Use Unethical Layoff Practices to Justify Bias and Retaliation (medium.com)
Picture this: You’re a high-performing employee at one of the world’s most admired tech giants — a company that touts its values of transparency, employee well-being, and an unrelenting commitment to innovation. You’ve hit your targets, earned accolades from your peers, and even gone above and beyond to deliver results. But then, without warning, you find yourself on the receiving end of a performance review laced with criticisms you never saw coming. Your contributions, once celebrated, are suddenly diminished.
AI hiring bias? Men with Anglo-Saxon names score lower in tech interviews (theregister.com)
In mock interviews for software engineering jobs, recent AI models that evaluated responses rated men less favorably — particularly those with Anglo-Saxon names — according to new research.
Is the UK's liver transplant matching algorithm biased against younger patients? (aisnakeoil.com)
Predictive algorithms are used in many life-or-death situations. In the paper Against Predictive Optimization, we argued that the use of predictive logic for making decisions about people has recurring, inherent flaws, and should be rejected in many cases.
AI prefers white and male job candidates in new test of resume-screening bias (geekwire.com)
As employers increasingly use digital tools to process job applications, a new study from the University of Washington highlights the potential for significant racial and gender bias when using AI to screen resumes.
AI's "Human in the Loop" Isn't (pluralistic.net)
AI's ability to make – or assist with – important decisions is fraught: on the one hand, AI can often classify things very well, at a speed and scale that outstrips the ability of any reasonably resourced group of humans. On the other hand, AI is sometimes very wrong, in ways that can be terribly harmful.
ChatGPT's Name Bias and Apple's Findings on AI's Lack of Reasoning (medium.com)
A recent article by OpenAI titled “Assessing Fairness in ChatGPT” reveals that the identity of users can influence the responses provided by ChatGPT.
The Illusion of Information Adequacy (plos.org)
Why Most Published Research Findings Are False (2005) (plos.org)
There is increasing concern that most current published research findings are false.
The Best Advice I've Ever Received (tomtunguz.com)
“Advice is one person’s experience generalized”, an entrepreneur told me once. “It’s a single point of view with all kinds of survivorship and attribution bias. Advice can be a terribly dangerous thing, because it can be used as a shortcut for thinking.”
Twitter Runs on Hate – But Its Users Don't Reflect Real Life (theamericansaga.com)
Uniqueness Bias: Why it matters, how to curb it (arxiv.org)
Everyone Is Judging AI by These Tests. Experts Say They're Close to Meaningless (themarkup.org)
Study reveals why AI models that analyze medical images can be biased (medicalxpress.com)
ChatGPT is biased against resumes with credentials that imply a disability (washington.edu)
How to fix “AI’s original sin” (oreilly.com)
A discussion of discussions on AI Bias (danluu.com)
Creativity has left the chat: The price of debiasing language models (arxiv.org)
An Analysis of Chinese LLM Censorship and Bias with Qwen 2 Instruct (huggingface.co)
Zero Tolerance for Bias (queue.acm.org)
GPT-4o's Chinese token-training data is polluted by spam and porn websites (technologyreview.com)
Study found corporate recruiters have a bias against ex-entrepreneurs (fortune.com)
Evaluating bias and noise induced by the U.S. Census Bureau's privacy protection (science.org)
Anonymizing research funding applications could reduce 'prestige privilege' (science.org)
U-M finds students with alphabetically lower-ranked names receive lower grades (record.umich.edu)
Bloomberg's analysis didn't show that ChatGPT is racist (interviewing.io)