This post was originally published by Rahulraj Singh at Medium [AI]
A look at the human labor, stereotypes, and naturally occurring resources that power Artificial Intelligence, in reality!
I worked in the software industry for three years. Every day at work I wrote Machine Learning algorithms, managed existing models, and automated workflows, and I was proud that AI was not only enabling me to perform better but also increasing my organization’s productivity. Then I realized it was not AI; it was years of human work, built on resources that occur in nature. Do not get me wrong: I believe in AI and do not dislike it; in fact, it is my daily driver at work. I am only against its present usage, where it exploits more resources than it generates. My idea is to use AI responsibly and for the betterment of lives.
It is important to use the immense power of AI for the good of society and the environment. No technology can excel if our lives cannot!
The Common Man’s Perception of AI
In everyday conversations, Artificial Intelligence is an abstract concept that is held in high regard in today’s technological society. But the audience using AI today is unaware of the hardships that went into building AI into what it is now. We will delve deeper into the building of intelligent systems and how their operation consumes natural resources, but for now, I want to emphasize that AI is not a supernatural power handed down by gods. We humans created it, and we also hold the power to rectify its mistakes.
Natural Resource Costs of Building AI
If the things I mentioned earlier were unclear and you are still wondering how AI affects the environment, let’s take a case study and walk through an AI workflow. Imagine you have Amazon’s home kit enabled on devices like the Alexa Echo, Echo Dot, or FireTV, powered by Alexa’s neural engine. You say the words, “Hey Alexa, order a Pizza from Dominos” from Dubai. The processing might seem to happen instantly on the device, but on the backend it starts an enormous series of actions that move data across the globe. Your speech goes into the device and gets encrypted. With the help of your internet service provider, this encrypted speech travels to an Amazon S3 cloud storage facility located, let’s say, in Singapore. The storage facility then transfers your encrypted data to the Alexa Skills service hosted by the Alexa developer, probably in Germany, and the analyzed output is finally sent back to your device to run the command and order the Pizza. While all of this takes less than a minute to execute, the data transfers, connections, and energy used in this operation are far from “Green Energy” at this moment.
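The round trip above can be sketched in a few lines of code. This is purely illustrative: the hop names and regions are the ones assumed in the example, not Amazon’s actual architecture, and Base64 here merely stands in for real on-device encryption.

```python
import base64

def capture_and_encode(utterance: str) -> bytes:
    # On-device step: the audio (text standing in for audio here) is
    # encoded before it leaves the device. Real devices use proper
    # encryption; Base64 is only a placeholder for this sketch.
    return base64.b64encode(utterance.encode("utf-8"))

def route_through_cloud(payload: bytes, hops: list) -> list:
    # Each entry represents one network transfer; every transfer
    # consumes energy in data centres and network infrastructure.
    return ["{}: received {} bytes".format(hop, len(payload)) for hop in hops]

def fulfil(utterance: str) -> list:
    payload = capture_and_encode(utterance)
    # Hypothetical route matching the example in the text.
    hops = ["device (Dubai)", "ISP uplink", "S3 storage (Singapore)",
            "Alexa Skill (Germany)", "device (Dubai)"]
    return route_through_cloud(payload, hops)

log = fulfil("Hey Alexa, order a Pizza from Dominos")
for entry in log:
    print(entry)
```

Even this toy trace makes the point: a single spoken sentence fans out into multiple intercontinental transfers before the pizza is ever ordered.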
Another example that makes things clearer is ordering something from Amazon. You ask Alexa to order a pair of Apple AirPods. Here, in addition to the workflow above, where a lot of energy is used to execute your digital command, there is also manual work involved: hours and hours of low-wage manual labor arranging and packaging boxes in Amazon warehouses, and months of crowd work that helped build the categorization models we use today to differentiate between products.
Therefore, I say AI is not really intelligent. It is humans performing these tasks who give these systems the appearance of autonomy.
Social and Gender Bias during AI Model Training
We have come a long way in treating gender bias and advancing our social attitudes to become more inclusive, yet many AI models have yet to incorporate these ideas during their training phase. Josh Feast, writing in Harvard Business Review, states that gender bias has increased more in machine predictions than in our personal lives because machines are learning our inherent ideologies. Advanced NLP systems like Alexa and Siri have shown gender bias in case studies, and this is not an isolated or standalone event; it is a trend that runs across AI systems. Error rates in voice recognition have been found to be higher for women with darker skin tones. To understand the cause of these biases, we need to go back to when NLP as a mechanism was built. Natural Language Processing (NLP) works on word embeddings, which are numerical representations of text. Word embeddings represent words as sequences, or vectors, of numbers. If two words have similar meanings, their associated embeddings will be close to each other, in a mathematical sense.
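To make that “closeness” idea concrete, here is a minimal sketch with tiny hand-made vectors. Real embeddings such as word2vec or GloVe are learned from text and have hundreds of dimensions; every number below is invented for illustration.

```python
import math

# Toy 4-dimensional embeddings. The dimensions loosely stand for
# (royalty, maleness, femaleness, food); all values are made up.
embeddings = {
    "king":  [0.9, 0.9, 0.1, 0.0],
    "queen": [0.9, 0.1, 0.9, 0.0],
    "man":   [0.1, 0.9, 0.1, 0.0],
    "woman": [0.1, 0.1, 0.9, 0.0],
    "apple": [0.0, 0.0, 0.0, 0.9],
}

def cosine(u, v):
    # Cosine similarity: 1.0 means "same direction", 0.0 means unrelated.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words sit closer together than unrelated ones.
print(cosine(embeddings["king"], embeddings["queen"]))  # high
print(cosine(embeddings["king"], embeddings["apple"]))  # near zero

# The famous analogy: king - man + woman lands nearest to queen.
target = [k - m + w for k, m, w in zip(
    embeddings["king"], embeddings["man"], embeddings["woman"])]
nearest = max(embeddings, key=lambda word: cosine(target, embeddings[word]))
print(nearest)
```

The same vector arithmetic that recovers “queen” is what lets a model absorb associations like doctor/man and nurse/woman from biased training text.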
With this idea in mind, let’s see what causes this bias in data.
What causes Bias in AI?
- Skewed Data: The most common and utterly basic Machine Learning error is data skewness. If the group of individuals used to train a model was 10% female and 90% male, the trained model will likely perform poorly on females. Going deeper into these classes, women with darker skin tones or from underrepresented communities are even less likely to be part of a training survey, which further decreases the probability of an AI recognizing and distinguishing their voices.
- Data Labels: Keeping the NLP example in mind, the word embeddings I mentioned above derive their values from the context of the presented text. The algorithm learns this context from human judgment, since the majority of the algorithms we run in organizations today are supervised. For instance, as these algorithms are trained, they learn that the word ‘Queen’ relates to ‘Woman’ and can complete a sentence like ‘A man is a King and a woman is a Queen’. Similarly, the algorithm also predicts sentences like ‘The father is a Doctor and the mother is a Nurse’. This is an inherent gender bias that machines have learned from the historical data we presented to them.
- Modeling Techniques: The metrics and methods we introduce for machine learning can also create unintended bias during training. It has been known for years that speech synthesis algorithms work better on male voices than on female voices, because speech synthesis, text-to-speech, and speech-to-text algorithms have historically worked best for tall males with longer vocal cords and low-pitched voices. This automatically makes it difficult for the model to understand high-pitched voices, which more often belong to women.
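The first of these causes, skewed data, can be demonstrated end to end in a few lines. The sketch below uses a toy one-dimensional “pitch” feature and a single learned threshold; the groups, centres, and proportions are all invented for illustration, not drawn from any real speech dataset.

```python
import random

random.seed(0)

# Synthetic "pitch" features for two speaker groups whose distributions
# differ. Labels: 1 = wake-word spoken, 0 = not spoken.
def sample(group, label, n):
    centre = {("A", 1): 2.0, ("A", 0): 0.0,
              ("B", 1): 4.0, ("B", 0): 2.0}[(group, label)]
    return [(random.gauss(centre, 0.5), label) for _ in range(n)]

# Skewed training set: 90% group A, only 10% group B.
train = (sample("A", 1, 450) + sample("A", 0, 450) +
         sample("B", 1, 50) + sample("B", 0, 50))

def best_threshold(data):
    # Learn the single cut-off that minimises training error.
    candidates = sorted(x for x, _ in data)
    return min(candidates, key=lambda t: sum((x >= t) != y for x, y in data))

t = best_threshold(train)

def error_rate(data):
    return sum((x >= t) != y for x, y in data) / len(data)

test_a = sample("A", 1, 200) + sample("A", 0, 200)
test_b = sample("B", 1, 200) + sample("B", 0, 200)
print(error_rate(test_a), error_rate(test_b))  # group B fares far worse
```

Because 90% of the training data comes from group A, the learned threshold sits where group A’s classes separate, and nearly half of group B’s examples are misclassified even though the classifier did exactly what it was asked: minimise overall error.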
“Training datasets used for machine learning software that casually categorise people into just one of two genders; that label people according to their skin colour into one of five racial categories, and which attempt, based on how people look, to assign moral or ethical character. The idea that you can make these determinations based on appearance has a dark past and unfortunately the politics of classification has become baked into the substrates of AI.” — Kate Crawford, AI Research at Microsoft and Professor at the University of Southern California
What Should We Do Next? Building Ethical Practices and Granting the Right Amount of Power to AI
The first requirement is strict regulatory regimes from governing bodies around the world that question the datasets constructed for analysis. We know the root cause of bias is the collection and analysis of unrepresentative data, and that is what needs fixing. Australia and the EU have recently introduced regulations to monitor AI-based systems, and I believe this is the right step.
Ethics are definitely important. Developers, consumers, and survey participants alike need to be ethical with the data they deal with. But ethics alone are not enough. We need to understand who benefits from AI systems and who is harmed the most. We have seen that these technologies benefit already powerful actors: Fortune 500 companies, militaries, and billion-dollar economies. What exactly is AI doing today to help malnourished children? Not much.
It Is Time to Strengthen the Use of AI for the Good of Society and the Betterment of Our Environment
The cost of implementing Artificial Intelligence is extremely high, but even higher are the hopes for the results these systems can achieve. As users and developers, it is now time for us to reward the individuals who helped build AI to the level it has reached today: the daily-wage workers who worked relentlessly to categorize the data we now see as ImageNet, and the people working in factories to supply the machinery on which AI mechanisms are built. And not just humans; it is also time to give back to nature, from which we consume the resources that power our systems. Until all the energy we use is green and clean, every AI-powered algorithm is burning more natural resources than manual labor ever would.
If you are someone who works in the world of AI, you should take charge. As I have explained in my articles Powering Earth’s sustainability with AI and Farming: The AI Transformation, with projects like AI For Earth the onus is on us to build machines that are not limited to making the rich richer. At the same time, we also need AI For Social Good to foster equality in our society.