This post was originally published by Digital Bulletin at Medium [AI]
Humans still hold power over what role artificial intelligence will play in our lives — but the clock is ticking.
George Tilesch and Omar Hatamleh, co-authors of BetweenBrains, tackle some of the most important technology questions of our time.
George Tilesch and Omar Hatamleh first met over a plate of sandwiches. Twenty minutes of “deep” conversation later, Hatamleh — a NASA technologist for more than two decades — had invited Tilesch to the space agency’s next Cross Industry Innovation Summit. “The rest is history,” laughs Tilesch as they settle in for an exclusive interview with Tech For Good.
That encounter proved significant. This year, Tilesch and Hatamleh are celebrating the release of their first co-authored book. BetweenBrains is the outcome of that chance lunchtime exchange. The book is nothing if not broad and deep, confronting the multi-layered topic of artificial intelligence (AI) and exploring the immediate and near-term future of a divisive technology standing on the cusp of ubiquity.
The pair are certainly well-qualified to assess the economic, societal and moral implications of AI. Tilesch is a former strategy chief at research giant Ipsos and an expert in technology governance — a central theme in the book — and has acted as a consultant to governments, global social innovation leaders, international corporates and everything in between. Hatamleh was until very recently the head of engineering innovation at NASA and has four engineering degrees.
“The great value is how George and I complement each other,” says Hatamleh. “I come from a perspective of being a technical person and an engineer, and George from a law background and being expert in tech policy, corporate citizenship and social innovation. That contributed to there being an excellent synergy which was reflected very well in the book.”
A broad perspective is highly recommended for anyone who braves the subject of AI. As is so often the case with public discourse in this era, a binary signal emerges from the noise: voices vehemently “for” or “against” AI and its capabilities. Tilesch and Hatamleh have deliberately adopted a position of balanced impartiality, and both caution and optimism are shared throughout our hour-long conversation.
Many AI texts, and there are a lot, pontificate on what influence AI may have on the world over the next half-century. Tilesch and Hatamleh prefer to discuss what AI is doing for (and to) us right now, where it may lead soon, and what needs to be done in the short-term to ensure future generations reap the benefits of this extraordinarily powerful toolset.
Tilesch begins by highlighting AI’s rapid evolution and how our outlook on this technology has changed, even in the time that he and Hatamleh have been collaborating.
“Three years ago this was a very, very different subject from today,” he explains. “At that time it was like a toy of the few. Omar then put the NASA X Summit together around the time that businesses, governments and venture capital started to get hungry for AI. At the time we started writing, there was an overall optimistic and ambitious tone, yet almost nobody was talking about the challenges.”
There is no shortage of challenges, and neither Tilesch nor Hatamleh hides from them. Early applications of AI have been a major factor in shaping a world where disinformation spreads freely, and where a global shift towards digital-first interpersonal communication appears to have deepened societal divisions.
“There are some examples right now where we say ‘the roof is on fire’,” adds Tilesch. “This means that direct intervention is needed in the very short-term. I’ve been doing a lot of work with the [World Leadership Alliance] Club de Madrid on AI trust and democracy. Unchecked algorithm-led social media has led to societal discord and a loss of public trust in institutions. I cannot emphasise enough how that has led to the present, sorry state of the world.”
It’s a damning verdict. While there are positive examples of AI’s role online, not least in the areas of customer and user experience, AI has also been deployed in controversial ways by technology companies to exploit data (remember Cambridge Analytica?), and also by bad actors intent on causing disruption via cybersecurity breaches or the manipulation of social media.
Hatamleh points to the rise of “deepfakes”, in which advanced AI is used to fabricate convincing text, audio, images and video of events that never happened, as a pertinent example to back up Tilesch’s view.
“Imagine that one day before the release of a company’s profits, somebody creates a deep fake statement from the CEO designed to destroy the market for that company,” he speculates. “People get their news nowadays from online sources, so we need to ensure it is bias-free, unaltered and accurate.”
But Hatamleh also highlights AI’s capacity to combat nefarious activity in an “AI vs AI” scenario, and it is this race between “good” and “bad” AI that both men continually reference. In the case of deepfakes, this means using “good” AI to identify and remove such material before much damage has been done, or even before it has been disseminated.
Tilesch, with his strong background in policy, is demanding more policies around online media and how information is shared in the public sphere.
“We are talking about at least 20 policy recommendations for the public-dialogue domain but I think the most important factor is that trust is ebbing and at historic lows,” he adds. “Very focused attention and intervention is needed to build new kinds of frameworks of trust that people will appreciate. It has been voiced for many years that this is an issue but I don’t see a lot happening on the ground. If we are not fixing the trust and the authority factor of that, then we are in terrible trouble.”
Then there is the age-old question of jobs. While the impact of AI on the distribution and consumption of media is being felt right now, on the streets and in voting booths, an underlying concern about its greater effect on the labour market demands continuous scrutiny. You will find multiple studies and hear opinions supporting both arguments: either AI will displace us, or it will augment us and be the catalyst for a happier, more productive workforce.
The oft-quoted 2018 World Economic Forum report on the future of jobs presented evidence that backs both theories. It forecast that, by 2022, humans would carry out 58% of task hours, down from 71% in 2018. But it also projected that, while advancing technology could displace as many as 75 million jobs, emerging roles could generate upwards of 130 million new ones.
Therein lies an inherent truth: it’s nigh-on impossible to say where the AI and jobs debate is going in the long-term. For now, however, Tilesch is clear.
“In the near-term it will definitely take away more jobs than it adds. So that’s something that society needs to prepare for,” he admits. “Currently we are in an age that we call the ‘artificial narrow intelligence age’. The baby is still a baby, but it’s very impressive in certain narrow applications, and it’s getting more powerful as AI becomes general-purpose tech — and then maybe general intelligence (AGI). We don’t know if the AGI milestone will be in three years, five years, 10 years or never — but we’ll have hybrid AI systems out in the world with huge impact. We have to get anticipatory, fluid policy-making, and conversations within nations as well as global social movements about what kind of AI we want.”
Hatamleh adds: “The way we see it evolving is that any repetitive jobs will be the first ones to go. The problem we’re then looking at is that as the system matures, it might reach a point where AI does the whole job. That’s when things will start becoming critical and iffy — and it may be too late by then.”
He goes on to give radical examples of white-collar jobs where AI could have a transformative impact: doctors who are superseded by AI that can call on billions of medical records in a matter of seconds to make more accurate diagnoses, or engineers who cannot match a system performing solid modelling in the blink of an eye.
“If you ask me about any job, I see a big impact,” Hatamleh continues. “Even if you start diving into creative jobs, too. What’s more creative than painting, art, poetry or music? Well we’re starting to see the emergence of AI into all these fields. That’s when you start to see how incredible the impact could be in different segments.”
It’s easy to become overwhelmed by the possibilities presented by AI, both negative and positive. As Tilesch and Hatamleh have alluded to, it has the power to filter into any industry and to change any job. But the growing pervasiveness of AI-powered applications today means we have reached a critical point in how we choose to govern this revolution.
At the beginning of March, the European Parliament published a 128-page report titled “The Ethics of Artificial Intelligence: Issues and Initiatives”. The paper examined many of the topics Tilesch and Hatamleh touch on in BetweenBrains, before then auditing the current frameworks that are directing crucial policy-making in this field.
It concluded that while existing frameworks have surfaced the major ethical concerns and made recommendations for governments to manage them, notable gaps remain, and there is a “clear need” for viable, applicable legislation to confront the multifaceted challenges associated with AI. Tilesch agrees, saying that AI is crying out for more vision, action and leadership.
“The leadership is currently not in line with the power, speed and application of the technology itself,” he states. “Countries and political leaders, together with communities and the social sector, have to be bolder in defining the actual purposes of AI. The problem right now is that they are in catchup mode — they were in catchup mode with previous waves of technology, but AI, which is improving at hyperspeed, is something that needs a new kind of policy thinking focusing on AI stewardship.
“From the citizen or consumer perspective, what we lack is both actionable definitions of purpose and practical safeguards. We need Augmented, not Artificial Intelligence. We need data privacy safeguards, we need ownership of data, we need trusted providers of technology where we can believe that they are serving our interests and our interests only. We need Public Interest AI — I think that’s the biggest point.
“I’m a firm believer that beneficial AI, serving citizens and governments together, should be a goal and should be definitely something that we set our eyes for. With well-defined visions and a lot of hard steering work, AI also has the potential to augment our civilisation, even create some kind of utopia. The question is how to get to it and do we want to get there?”
BetweenBrains concludes that AI will serve as the most important driver and instrument of power redistribution in the 21st century, and that ultimately, despite a fragmented approach up to now, we still have the opportunity to make clear choices on what kind of AI we want. The narrative of inevitability should be changed to that of purpose and stewardship.
Hatamleh believes we’re on a journey not dissimilar to that of previous revolutions.
“Look at the beginning of the 19th century and how the Industrial Revolution disrupted people and jobs but, eventually, it was an engine for creating incredible economies,” he says.
Tilesch agrees that AI could potentially end up doing far more good than bad but only if we take the opportunity now to steer the ship with newly built, AI-fit frameworks, models and movements.
“AI pessimism or optimism right now would probably mean, at least to me, that we are powerless,” he concludes. “Currently we may think that the image AI takes in the end is out of our hands, but actually we should develop that capacity, coming together as society and sectors to define the actionable trajectories that take AI in a beneficial, humanistic direction. Then we will be in a much better position to be optimistic about it.”