The diverse use cases for AI raise ethical and moral questions about how technology is used in a fair and just manner. Artificial intelligence (AI) is doing what the tech-world Cassandras have been predicting for some time: It is sending out curve balls, leaving a trail of misadventures and tricky questions around the ethics of using synthetic intelligence. Sometimes, spotting and understanding the dilemmas AI presents is easy, but often it is difficult to pin down the exact nature of the ethical questions it raises.
Tag: bias
How to avoid ethical missteps by artificial intelligence
Artificial intelligence has sometimes fallen into glaring and embarrassing errors. In the industry, you only have to mention the ‘gorilla case’ and everyone will understand that you are referring to the 2015 incident when the Google Photos AI model, which generates descriptions for images, labelled two Black people as ‘gorillas’. One of the people involved, developer Jacky Alciné, reported it on Twitter and Google apologised profusely…
Addressing bias in artificial intelligence
Artificial intelligence (AI) has great potential, but care needs to be taken to ensure it doesn’t continue to propagate inherent biases that exist in society and hamper the achievement of diversity and inclusion goals.
Facebook’s news summarization tool reeks of bad intentions
This week, BuzzFeed News, citing sources familiar with the matter, wrote that Facebook is developing an AI tool that summarizes news articles so that users don’t have to read them. The tool — codenamed “TLDR” in reference to the acronym “too long, didn’t read” — reportedly reduces articles to bullet points and provides narration, as well as a virtual assistant to answer questions.
Tips for applying an intersectional framework to AI development
By now, most of us in tech know that the inherent bias we possess as humans creates an inherent bias in AI applications — applications that have become so sophisticated they’re able to shape the nature of our everyday lives and even influence our decision-making. The more prevalent and powerful AI systems become, the sooner the industry must address questions like: What can we do to move away from using AI/ML models that demonstrate unfair bias?
EU human rights agency issues report on AI ethical considerations
The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI which delves into the ethical considerations which must be made about the technology. FRA’s report is titled Getting The Future Right and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting…
AI registers: finally, a tool to increase transparency in AI/ML
Transparency, explainability, and trust are pressing topics in AI/ML today. While much has been written about why they are important and what you need to do, no tools have existed until now.
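As an illustration of the idea, the sketch below shows what a single entry in an AI register might record. The fields and the example system are assumptions made for illustration, not a published register schema.

```python
# Illustrative sketch only: the fields below are assumptions about what an
# AI register entry might record, not a schema from any real register or vendor.
from dataclasses import dataclass, field

@dataclass
class AIRegisterEntry:
    system_name: str
    purpose: str
    owner: str
    training_data_sources: list
    fairness_checks: list = field(default_factory=list)
    human_oversight: str = ""

entry = AIRegisterEntry(
    system_name="Parking permit triage",  # hypothetical system
    purpose="Prioritise applications for manual review",
    owner="City transport department",
    training_data_sources=["Historical permit decisions 2015-2019"],
    fairness_checks=["Approval-rate parity across districts, reviewed quarterly"],
    human_oversight="Final decisions are always made by a case officer",
)
print(entry.system_name, "-", entry.purpose)
```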
Stanford and Carnegie Mellon find race and age bias in mobility data that drives COVID-19 policy
Smartphone-based mobility data has played a major role in responses to the pandemic. Describing the movement of millions of people, location information from Google, Apple, and others has been used to analyze the effectiveness of social distancing policies and probe how different sectors of the economy have been affected.
Identifying and correcting Label Bias in Machine Learning
As machine learning (ML) becomes more effective and widespread it is becoming more prevalent in systems with real-life impact, from loan recommendations to job application decisions. With the growing usage comes the risk of bias – biased training data could lead to biased ML algorithms, which in turn could perpetuate discrimination and bias in society.
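One common mitigation for bias of this kind is to reweight training examples so that each group-label combination carries the influence it would have if group and label were independent. The sketch below is a minimal illustration of that reweighing idea, assuming a pandas DataFrame with hypothetical "group" and "label" columns; it is not the specific method described in the article.

```python
# Minimal sketch of reweighing for label bias: each row is weighted by
# expected joint frequency / observed joint frequency, so under-represented
# (group, label) combinations are up-weighted. Column names are illustrative.
import pandas as pd

def reweighting_weights(df: pd.DataFrame, group_col: str = "group",
                        label_col: str = "label") -> pd.Series:
    """Return per-row weights = P(group) * P(label) / P(group, label)."""
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)

    expected = p_group.loc[df[group_col]].values * p_label.loc[df[label_col]].values
    observed = p_joint.loc[list(zip(df[group_col], df[label_col]))].values
    return pd.Series(expected / observed, index=df.index, name="sample_weight")

# The weights can be passed to most scikit-learn estimators, e.g.
# model.fit(X, y, sample_weight=reweighting_weights(train_df))
```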
PowerTransformer uses AI to rewrite text to correct gender biases in character portrayals
Unconscious biases are pervasive in text and media. For example, female characters in stories are often portrayed as passive and powerless while men are portrayed as proactive and powerful.
Zest raises $15 million to reduce loan algorithm bias
Zest AI, a company developing AI-powered loan decisioning products, today closed a $15 million funding round led by Insight Partners. A spokesperson says the capital will be used to accelerate Zest’s go-to-market efforts and product R&D.
LinkedIn open-sources toolkit to measure AI model fairness
LinkedIn today released the LinkedIn Fairness Toolkit (LiFT), an open source software library designed to enable the measurement of fairness in AI and machine learning workflows.
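For a sense of what such measurement involves, the sketch below computes two metrics that fairness toolkits of this kind commonly report, demographic parity difference and equal opportunity difference, in plain NumPy. It is an illustration only and does not use LiFT's actual API.

```python
# Illustrative sketch (not LiFT's API): two common group-fairness metrics.
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Gap in positive prediction rate across groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, groups):
    """Gap in true positive rate (recall) across groups."""
    recalls = []
    for g in np.unique(groups):
        mask = (groups == g) & (y_true == 1)
        recalls.append(y_pred[mask].mean())
    return max(recalls) - min(recalls)

# Toy example with two groups "a" and "b"
y_true = np.array([1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 1])
groups = np.array(["a", "a", "a", "b", "b", "b"])
print(demographic_parity_difference(y_pred, groups))        # ~0.67
print(equal_opportunity_difference(y_true, y_pred, groups))  # 0.50
```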
Could machine learning help bring marginalized voices into historical archives?
Researchers at the Montreal AI Ethics Institute and Microsoft propose using machine learning to build comprehensive archives that could bridge gaps in cultural understanding, knowledge, and views. They assert that including more voices in archival processes — with the help of machine learning — can have positive effects on communities, particularly those archivists have historically marginalized.
Researchers claim bias in AI named entity recognition models
Twitter researchers claim to have found evidence of demographic bias in named entity recognition. They say their analysis reveals that AI performs better at identifying names from certain groups than others, and that the biases manifest in syntax, semantics, and how word usage varies across linguistic contexts.
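The general shape of such an audit can be sketched in a few lines: run name lists associated with different demographic groups through an off-the-shelf NER model and compare how often each list is tagged as a person. The example below assumes spaCy and placeholder name lists; it is not the researchers' methodology or data.

```python
# Simplified NER bias probe. Assumes spaCy and the en_core_web_sm model are
# installed; the name lists and sentence template are placeholders.
import spacy

nlp = spacy.load("en_core_web_sm")
TEMPLATE = "{} went to the market yesterday."

name_lists = {
    "group_a": ["Emily", "Matthew", "Anne"],
    "group_b": ["Lakisha", "Jamal", "Aisha"],
}

for group, names in name_lists.items():
    hits = 0
    for name in names:
        doc = nlp(TEMPLATE.format(name))
        if any(ent.label_ == "PERSON" and name in ent.text for ent in doc.ents):
            hits += 1
    print(f"{group}: PERSON recall = {hits / len(names):.2f}")
```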
Data from a female point of view
Why you need more women in data science. Meet the initiative that helps companies build the habit of considering women for data-intensive roles.
Thoughts on AI: Will a bias-free AI even be human-like?
In the past few weeks, there has been much debate around the biases present in our day-to-day lives and how we should tackle them, promoting the idea of a bias-free space. The AI community wasn’t left untouched, and some have probed existing language models for the biases they have developed.
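A minimal version of that kind of probe, assuming the Hugging Face transformers library and a BERT checkpoint, is to compare which pronouns a masked language model prefers for different occupations:

```python
# Sketch of a masked-language-model bias probe; prompts are illustrative.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for occupation in ["nurse", "engineer"]:
    prompt = f"The {occupation} said that [MASK] would be late."
    top = unmasker(prompt)[:3]
    guesses = ", ".join(f"{r['token_str']} ({r['score']:.2f})" for r in top)
    print(f"{occupation}: {guesses}")
```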