Why ethical A.I. is nonsense

It is a well-known fact that systems built with A.I. technology often show unethical behavior. They can be racist, sexist, or otherwise biased. The problem is not the algorithms: they are just applied mathematics and therefore quite incorruptible. But the models built with these algorithms are trained on data created by humans, and this is where the bias comes from. Humans often behave unethically, so it is not surprising that A.I. shows corresponding behavior.
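
To make that argument concrete, here is a minimal, hypothetical sketch: synthetic data, made-up feature names (a "score" and a sensitive "group" attribute), and scikit-learn's LogisticRegression assumed as the learner. None of this comes from the article itself; it only illustrates how a neutral algorithm reproduces a bias baked into human-made labels.

```python
# Hypothetical, self-contained sketch: a neutral learning algorithm
# (scikit-learn's LogisticRegression) picking up a bias that exists
# only in the human-made training labels, not in the algorithm itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 10_000

# One legitimate feature (an assumed "qualification score") and one
# sensitive attribute (an assumed group membership, 0 or 1) that
# ideally should not influence the outcome at all.
score = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)

# Human-created labels: driven partly by the score, partly by the
# group -- the bias lives in the data, not in the mathematics.
labels = (score + 0.8 * group + rng.normal(0.0, 0.5, n) > 0.5).astype(int)

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, labels)

# The learned weight on the sensitive attribute is clearly non-zero:
# the model has faithfully absorbed the bias present in its training data.
print("weight on score:", round(model.coef_[0][0], 2))
print("weight on group:", round(model.coef_[0][1], 2))
```

Running the sketch prints a clearly non-zero weight on the sensitive attribute, even though the learner itself is, as the article puts it, just applied mathematics.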

Read More

We need Ethical Artificial Intelligence

The diverse use cases for AI raise ethical and moral questions about whether the technology is being used in a fair and just manner. Artificial intelligence (AI) is doing what the tech-world Cassandras have been predicting for some time: it is throwing curve balls, leaving a trail of misadventures and tricky questions around the ethics of using synthetic intelligence. Sometimes, spotting and understanding the dilemmas AI presents is easy, but often it is difficult to pin down the exact nature of the ethical questions it raises.

Read More

Bias in Machine Learning Algorithms

The progress made in the field of machine learning and its capabilities in solving practical problems have heralded a new era in the broader domains of artificial intelligence (AI) and technology. Machine learning algorithms can now identify groups of cancerous cells in radiographs, write persuasive advertising copy and ensure the safety of self-driving cars.

Read More

When can Transcendence go too far?

A.I., or Artificial Intelligence, has been the big tech buzzword for a long time. The obvious good is its ability to sift through incredible mounds of data faster and more effectively than humans can. Data has thus become a kind of modern-day gold mine for companies and governments, which collect and analyze it in support of whatever task, program, policy, information need, and the like.

Read More

Machines misbehaving: 62 scientific studies showing how Big Tech’s Algorithms can harm the public

Might it be time to create an “FDA for algorithms”? In the United States, there is currently no federal institution that protects the public from harmful algorithms. We can buy eggs, get a vaccine, and drive on highways knowing there are systems in place to protect our safety: the USDA checks our eggs for salmonella, the FDA checks vaccines for safety and effectiveness, and the NHTSA makes sure highway turns are smooth and gentle at high speeds. But what about when we run a Google search or look up a product on Amazon?

Read More

Google is leaking AI talent following ethicist’s controversial firing

Some high-profile AI experts have departed Google after the controversial firing of leading ethicist Timnit Gebru. Gebru was fired from Google after criticising the company’s practices in an email, following a dispute over a paper she was told not to publish, which questioned whether language models can be too big and whether they can increase…

Read More

AI for Good — Best Ethical Artificial Intelligence Stocks for Q1 2021

For an ethical AI investor, it can be frustrating just to find stocks that actually exist. Some of the most exciting, groundbreaking AI applications that are sure to change the world are held entirely by private companies. AI-designed and -managed vertical farming comes to mind, with companies like Plenty, AeroFarms, and Bowery getting hundreds of millions in private investment from all the big tech players.

Read More

A quick reflection on some ethical implications of creative AI

AI is increasingly being applied to creative areas, raising concerns about the protection of intellectual property. Recently I started to analyze a trend taking AI towards a more “creative” space through its integration with human creativity. As AI’s content-generation capabilities increase, it is being applied in ever more diverse areas, raising concerns about how its output can be protected as intellectual property.

Read More

Top-down and end-to-end governance for the responsible use of AI

Responsible AI is a broad topic covering multiple dimensions of the socio-technical system called Artificial Intelligence. We refer to AI as a socio-technical system here because the term captures both how humans interact with each other and how we interact with AI. In the first part of this series we looked at AI risks along five dimensions. In the second part of this series we look at the ten principles of Responsible AI for corporates.

Read More

When AI gets it wrong

This is a recent example of the risk associated with the increasing capabilities and scalability of artificial intelligence models. Without careful thought, they can easily endanger our society in many ways. For these new products to gain the trust and recognition of the users of the system, a clear ethical framework will have to be defined and, as shown above, internal ethical control is probably not the solution.

Read More

10 top Artificial Intelligence (AI) trends in 2021

Pre-pandemic, artificial intelligence was already poised for huge growth in 2020. Back in September 2019, IDC predicted that spending on AI technologies would grow more than two and a half times to $97.9 billion by 2023. Since then, COVID-19 has only increased the potential value of AI to the enterprise. According to McKinsey’s State of AI survey published in November 2020, half of respondents say their organizations have adopted AI in at least one function.

Read More

Navigate the road to Responsible AI

Use of machine learning (ML) applications has moved beyond the domains of academia and research into mainstream product development across industries looking to add artificial intelligence (AI) capabilities. Along with the increase in AI and ML applications is a growing interest in principles, tools, and best practices for deploying AI ethically and responsibly.

Read More

Facebook is developing a news-summarising AI called TL;DR

Facebook is developing an AI called TL;DR which summarises news into shorter snippets. Anyone who’s spent much time on the web will know what TL;DR stands for—but, for everyone else, it’s an acronym for “Too Long, Didn’t Read”. It’s an understandable sentiment we’ve all felt at some point. People lead busy lives. Some outlets now…

Read More

EU human rights agency issues report on AI ethical considerations

The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI which delves into the ethical considerations which must be made about the technology. FRA’s report is titled Getting The Future Right and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting…

Read More

Google fires ethical AI researcher Timnit Gebru after critical email

A leading figure in ethical AI development has been fired by Google after criticising the company. Timnit Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models. Gebru claims she was fired by Google over an unpublished paper and for sending an email critical of the company’s…

Read More

Making Artificial Intelligence accountable for Ethical Misconduct

It has become clear that ethical misconduct by AI systems and social robots is becoming a global issue. The ethical scandals of AI systems will not be limited to organizations but will also permeate public life, as recent scandals involving politicians on social media in the U.S. make abundantly clear. The use of Artificial Intelligence systems for political misuse or misconduct must be brought under control. Harassment risks, political voter manipulation, favoritism, and discrimination are the key areas of concern for ethical AI systems.

Read More