It is a well-known fact that systems built with A.I. technology often show unethical behavior. They can be racist, sexist, or otherwise biased. The problem is not the algorithms: they are just applied mathematics and therefore quite incorruptible. But the models built with these algorithms are trained on data created by humans, and this is where the bias comes from. Humans often behave unethically, so it is not surprising that A.I. shows corresponding behavior.
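A minimal, hypothetical sketch of this point (the dataset, the feature names, and the 0.8 penalty are invented for illustration): the same learning algorithm is fit twice, once on labels that depend only on a relevant feature and once on labels that also encode a human prejudice against one group. Only the second model reproduces the disparity, even though the algorithm never changes.

```python
# Hypothetical illustration: the learning algorithm is identical in both runs;
# only the human-labelled training data differs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # protected attribute (0 or 1), invented for this sketch
skill = rng.normal(0, 1, n)          # genuinely job-relevant feature

X = np.column_stack([group, skill])

# Unbiased labels: the outcome depends only on skill.
y_fair = (skill > 0).astype(int)

# Biased labels: human labellers also penalise group 1 (assumed bias strength 0.8).
y_biased = ((skill - 0.8 * group) > 0).astype(int)

for name, y in [("fair labels", y_fair), ("biased labels", y_biased)]:
    model = LogisticRegression().fit(X, y)   # same algorithm both times
    pred = model.predict(X)
    rate0 = pred[group == 0].mean()
    rate1 = pred[group == 1].mean()
    print(f"{name}: positive rate group 0 = {rate0:.2f}, group 1 = {rate1:.2f}")
```

With the fair labels both groups receive roughly equal positive rates; with the biased labels, the gap baked into the training data reappears in the model's predictions, while the "applied mathematics" itself is unchanged.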
We need Ethical Artificial Intelligence
The diverse use cases for AI raise ethical and moral questions about how the technology can be used in a fair and just manner. Artificial intelligence (AI) is doing what the tech-world Cassandras have been predicting for some time: it is throwing curve balls, leaving a trail of misadventures and tricky questions around the ethics of using synthetic intelligence. Sometimes, spotting and understanding the dilemmas AI presents is easy, but often it is difficult to pin down the exact nature of the ethical questions it raises.
Bias in Machine Learning Algorithms
The progress made in the field of machine learning and its capabilities in solving practical problems have heralded a new era in the broader domains of artificial intelligence (AI) and technology. Machine learning algorithms can now identify groups of cancerous cells in radiographs, write persuasive advertising copy and ensure the safety of self-driving cars.
When can Transcendence go too far?
A.I., or Artificial Intelligence, has been the big buzz in tech for a long time. The obvious good is its ability to sift through incredible mounds of data faster and more effectively than humans can. Data has thus become a kind of modern-day gold mine for companies and governments, which collect and analyze it for whatever task, program, policy, information, and the like.
Machines misbehaving: 62 scientific studies showing how Big Tech’s Algorithms can harm the public
Might it be time to create an “FDA for algorithms”? In the United States, there is currently no federal institution that protects the public from harmful algorithms. We can buy eggs, get a vaccine, and drive on highways knowing there are systems in place to protect our safety: the USDA checks our eggs for salmonella, the FDA checks vaccines for safety and effectiveness, and the NHTSA makes sure highway turns are smooth and gentle at high speeds. But what about when we run a Google search or look up a product on Amazon?
Google is leaking AI talent following ethicist’s controversial firing
Some high-profile AI experts have departed Google after the controversial firing of leading ethicist Timnit Gebru. Gebru was fired from Google after criticising the company’s practices in an email following a dispute over a paper she was told not to publish, which questioned whether language models can be too big and whether they can increase…
AI for Good — Best Ethical Artificial Intelligence Stocks for Q1 2021
It can be frustrating for an ethical AI investor to find stocks that actually exist. Some of the most exciting, groundbreaking AI applications that are sure to change the world are held entirely by private companies. AI designed and managed vertical farming comes to mind, with companies like Plenty, AeroFarms, and Bowery getting hundreds of millions in private investment from all the big tech players.
A quick reflection on some ethical implications of creative AI
AI is increasingly being applied to creative areas, raising concerns about the protection of intellectual property. Recently I started to analyze a trend taking AI towards a more “creative” space through integration with human creativity. As AI’s content-generation capabilities increase, it is being applied in ever more diverse areas, raising concerns about how its output can be protected as intellectual property.
Top-down and end-to-end governance for the responsible use of AI
Responsible AI is a broad topic covering multiple dimensions of the socio-technical system called Artificial Intelligence. We refer to AI as a socio-technical system here because the term captures both the interactions among humans and how we interact with AI. In the first part of this series we looked at AI risks from five dimensions. In the second part of this series we look at the ten principles of Responsible AI for corporates.
When AI gets it wrong
This is a recent example of the risk associated with the increasing capabilities and scalability of artificial intelligence models. Without careful thought, they can easily endanger our society in many ways. In order for these new products to gain the trust and recognition of their users, a clear ethical framework will have to be defined and, as shown above, internal ethical control is probably not the solution.
10 top Artificial Intelligence (AI) trends in 2021
Pre-pandemic, artificial intelligence was already poised for huge growth in 2020. Back in September 2019, IDC predicted that spending on AI technologies would grow more than two and a half times to $97.9 billion by 2023. Since then, COVID-19 has only increased the potential value of AI to the enterprise. According to McKinsey’s State of AI survey published in November 2020, half of respondents say their organizations have adopted AI in at least one function.
Navigate the road to Responsible AI
Use of machine learning (ML) applications has moved beyond the domains of academia and research into mainstream product development across industries looking to add artificial intelligence (AI) capabilities. Along with the increase in AI and ML applications comes a growing interest in principles, tools, and best practices for deploying AI ethically and responsibly.
Facebook is developing a news-summarising AI called TL;DR
Facebook is developing an AI called TL;DR which summarises news into shorter snippets. Anyone who’s spent much time on the web will know what TL;DR stands for—but, for everyone else, it’s an acronym for “Too Long, Didn’t Read”. It’s an understandable sentiment we’ve all felt at some point. People lead busy lives. Some outlets now…
EU human rights agency issues report on AI ethical considerations
The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI which delves into the ethical considerations which must be made about the technology. FRA’s report is titled Getting The Future Right and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting…
AI registers: finally, a tool to increase transparency in AI/ML
Transparency, explainability, and trust are pressing topics in AI/ML today. While much has been written about why they are important and what you need to do, no tools have existed until now.
Google fires ethical AI researcher Timnit Gebru after critical email
A leading figure in ethical AI development has been fired by Google after criticising the company. Timnit Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models. Gebru claims she was fired by Google over an unpublished paper and sending an email critical of the company’s…
Five views of AI Risk
The risks of AI have been documented quite extensively in a number of articles. AI Now has a timeline of the key mishaps. These mishaps are not always ‘errors of omission’ or negligence or even ignorance. Sometimes they are deliberate and malicious as well. The malicious AI report details the key aspects of such AI systems.
Making Artificial Intelligence accountable for Ethical Misconduct
It has become clear that ethical misconduct by AI systems and social robots is becoming a global issue. The ethical scandals of AI systems will not be limited to organizations, but will also permeate public life. This is made abundantly clear by recent scandals involving politicians and social media in the U.S. The use of artificial intelligence systems for political misuse or misconduct must be brought under control. Harassment risks, political voter manipulation, favoritism, and discrimination are the key areas of concern for ethical AI systems.
Can we just turn off dangerous AI?
The most common misconception about AI safety. There’s this meme out there to make people who care about artificial intelligence safety look crazy. It goes like this: if AI ever starts doing something that might destroy humanity, we’ll just shut it off.
AI ethics is not an optimization problem
Often, AI researchers and engineers think of themselves as neutral and “objective”, operating in a framework of strict formalization. Fairness and absence of bias, however, are social constructs; there is no objectivity, no LaTeX-typesettable remedies, no algorithmic way out. AI models are…