Artificial Intelligence: The 12 best ways to create superior solutions

This post was originally published by Max Dufour at Medium [AI]

Autonomous intelligence is going to harm us all if we cannot manage it properly.

Animation from MotionElements

The stakes are high for businesses as they plan for the future and invest in the latest technology. Artificial Intelligence has become a key component of strategic planning and five-year roadmaps: 84% of business leaders are convinced that leveraging AI is necessary to reach their revenue growth targets, according to the latest Accenture research.

Artificial Intelligence is credited with both cutting costs and increasing revenues. Key benefits include automation, speed, consistency, and new insights for companies.

In the long run, AI will boost productivity and multiply profits. It also lets humans step back from fixing processes and running the day-to-day.

AI is expected to generate double-digit gains in productivity for most leading economies by 2035:

Chart by Statista under Creative Commons License CC BY-ND 3.0

Managing the artificial intelligence risk

As the use of AI expands, we see use cases sprawling, but also new complexities and sometimes unpredictable autonomous behaviors. This should encourage us to reflect on the road ahead and explore the new risks created by our innovation, so we can better mitigate them.

As AI usage grows and takes over critical applications, we are not just building standalone AI systems; we are laying the foundations of the solutions we will be using, and evolving, for years to come.

We have an opportunity to set the right rules, best practices, and frameworks that will allow us to thrive years from now, rather than struggle with the downsides of our initially flawed creations.

It does not seem practical, ideal, or even risk-free to teach values to an AI that is already up and running while trying to protect ourselves and avoid unwanted situations. It seems preferable to take the lead and proactively build intelligent systems the right way.

Hence the questions: How do we teach values, thought processes, or best approaches to an autonomous intelligence? Can we codify them or simply enter them somewhere in the system? Is it more of an iterative process where we will correct parameters on the fly as systems learn on their own and potentially behave unexpectedly?

Photo by Lyman Gerona on Unsplash

We already have some frameworks and rules

The AI space already has some high-level foundational principles. Google has published a list of seven principles. The Partnership on AI, an industry organization, has published high-level tenets to preserve AI as a positive and promising force.

Those are first steps forward, although they do not address the day-to-day needs on the ground, especially as we go from experimenting to releasing AIs into the wild.

On the technology side, perhaps the best starting point, and the main gap today, would be to define design principles intended first for the technologists building AIs and second for the teams managing those advanced intelligent systems.

Photo by Christina Morillo from Pexels

There are many shades of AI

Of course, not all AIs are alike. They are not all created equal, nor built for the same purposes:

  • They have various levels of independence: from following a script under human supervision to independently allocating resources to robots in a factory
  • They have a wide range of responsibilities: from tweeting comments to managing armed drones
  • They operate in different environments: from a lab not connected to the internet to a live trading environment
Photo by Austin Distel on Unsplash

A checklist for the pioneers

There are many considerations when designing AI systems to keep the risk to society manageable, especially for scenarios involving high independence, key responsibilities, and sensitive environments.

1. Non-discriminatory

As AI software products are provided to law enforcement and the military, they process immense amounts of sensitive information and can lead to life-changing decisions. This is the case with facial recognition, for example, which has shown different levels of accuracy by gender and race.

Dr. Timnit Gebru, former co-lead of Google’s Ethical AI team, has completed extensive research on the topic, published by MIT and Oxford University Press, among others. We need to create programs that do not amplify inequities or contribute to biased decisions, actions, and information.
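One concrete habit this implies is measuring accuracy separately for each demographic group before a system ships. The sketch below uses purely hypothetical evaluation records and group labels; a large gap between groups is a signal the system is not ready.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute accuracy separately for each demographic group.

    `records` is a list of (group, predicted_label, true_label) tuples --
    purely illustrative data, not any specific model's output.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation records: an accuracy gap between groups
# is a signal the system should not ship as-is.
records = [
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_b", "match", "no_match"), ("group_b", "match", "match"),
]
print(accuracy_by_group(records))  # e.g. {'group_a': 1.0, 'group_b': 0.5}
```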

2. No black box and explainability (XAI)

It has to be possible to check inside the program and review the code, logs, and timelines to understand how a system made a decision and which sources were checked.

It should not be all machine code: users should be able to visualize and quickly understand the steps followed. This would avoid situations where programs are shut down because nobody can fix bad behaviors or unintended actions.
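As a rough sketch of what that can look like in practice (the decision, inputs, and sources below are hypothetical), every decision is appended to an audit log along with the information it was based on, so a reviewer can later retrace it:

```python
import json
import time

AUDIT_LOG = "decisions.jsonl"

def log_decision(decision, inputs, sources):
    """Append a human-readable audit record for every decision the system makes."""
    record = {
        "timestamp": time.time(),
        "decision": decision,
        "inputs": inputs,      # what the system looked at
        "sources": sources,    # where that information came from
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: a loan-scoring step records its reasoning trail.
log_decision(
    decision="approve",
    inputs={"income": 52000, "credit_score": 710},
    sources=["credit_bureau_report", "application_form"],
)
```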

Animation from MotionElements

3. Debug mode

Artificial intelligence systems should have a debug mode that can be turned on when the system makes mistakes, delivers unexpected results, or acts erratically.

That would allow system administrators and support teams to track more parameters and quickly identify root causes, at the cost of temporarily slowing down processing.
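A minimal sketch of the idea, assuming a hypothetical scoring step: a single flag switches the system from its normal log level to verbose tracing of intermediate values.

```python
import logging

def configure(debug_mode: bool):
    """Switch between normal operation and verbose debug tracing."""
    level = logging.DEBUG if debug_mode else logging.WARNING
    logging.basicConfig(level=level, format="%(levelname)s %(message)s")

def score(features):
    # Hypothetical model step: in debug mode, intermediate values are traced.
    weighted = sum(features.values()) / len(features)
    logging.debug("features=%s intermediate=%s", features, weighted)
    return weighted > 0.5

configure(debug_mode=True)  # turned on only while investigating an issue
print(score({"signal_a": 0.9, "signal_b": 0.4}))
```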

4. Fail-safe

For higher-risk cases, systems should have a fail-safe switch to reduce or turn off any capability creating issues that cannot be fixed on the fly or explained quickly, to prevent potential damage.

It is similar to the quality control process in a factory, where an employee can stop the assembly line if they spot an issue.
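A simple sketch of such a fail-safe, with hypothetical capability names: individual capabilities can be switched off at runtime while the rest of the system keeps working.

```python
class Capabilities:
    """Runtime kill switch for individual capabilities of a hypothetical AI system."""

    def __init__(self, names):
        self.enabled = {name: True for name in names}

    def disable(self, name):
        self.enabled[name] = False  # fail-safe: shut off just the problem area

    def run(self, name, action, *args):
        if not self.enabled.get(name, False):
            return None  # capability is switched off, skip it safely
        return action(*args)

caps = Capabilities(["pricing", "auto_replies"])
caps.disable("auto_replies")               # e.g. replies started acting erratically
caps.run("auto_replies", print, "hello")   # does nothing: capability is off
caps.run("pricing", print, "reprice SKU")  # still works
```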

Photo by Ashutosh Dave on Unsplash

5. Circuit breaker

For extreme cases, it must be possible to shut down the entire system. Some systems cannot be debugged in real time and could do more harm than good if left active.

Stock exchanges have automated circuit breakers to manage volatility and avoid crashes. Automated trading systems using AI should have the same systems in place, even if they have never had issues.

That would prevent black swan situations, bugs, hacks, or any one-off events leading to erratic trading and massive losses.
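A bare-bones sketch of the idea for a hypothetical trading system (the drawdown limit is an arbitrary assumption): once cumulative losses cross the limit, the breaker trips and no further trades are accepted.

```python
class CircuitBreaker:
    """Halt a hypothetical automated trading system when losses exceed a limit."""

    def __init__(self, max_drawdown):
        self.max_drawdown = max_drawdown
        self.pnl = 0.0
        self.halted = False

    def record_trade(self, profit_or_loss):
        if self.halted:
            raise RuntimeError("Trading halted: circuit breaker tripped")
        self.pnl += profit_or_loss
        if self.pnl < -self.max_drawdown:
            self.halted = True  # stop everything rather than keep losing

breaker = CircuitBreaker(max_drawdown=10_000)
breaker.record_trade(-4_000)
breaker.record_trade(-7_000)  # cumulative loss passes the limit
print(breaker.halted)         # True: no further trades are accepted
```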

6. Approval matrices

One day, systems will fully mimic human reasoning and follow complex decision trees, applying judgment and making decisions. Humans should be in the chain of command and approve key decisions, especially when those are not repetitive and require some independent thinking.

It can be useful to keep the RACI framework in mind. If an autonomous bus sometimes takes a slight detour to skip traffic, it should notify a human. If it decides to use a new road for the first time, a human should approve it to avoid accidents.
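A rough sketch of an approval matrix for the bus example above; the decision types and policies are illustrative assumptions, not an industry standard.

```python
# Map decision types to the level of human involvement they require.
APPROVAL_MATRIX = {
    "known_detour": "notify_human",      # routine: inform, keep driving
    "new_route": "require_approval",     # first time: a human must approve
    "schedule_change": "require_approval",
}

def handle_decision(decision_type, approved_by_human=False):
    policy = APPROVAL_MATRIX.get(decision_type, "require_approval")
    if policy == "notify_human":
        print(f"Notifying operator: {decision_type}")
        return True
    if policy == "require_approval" and not approved_by_human:
        print(f"Blocked: {decision_type} needs human approval")
        return False
    return True

handle_decision("known_detour")                        # proceeds, operator notified
handle_decision("new_route")                           # blocked until approved
handle_decision("new_route", approved_by_human=True)   # now allowed
```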

Giving systems control over resources such as electric power, security, and internet bandwidth can prove problematic, especially if bugs, security flaws, and other issues are discovered.

Photo by Maxim Hopman on Unsplash

7. Keeping track of assets, delegation, and autonomy

Humans get substantial leverage by transferring work to machines, especially if tasks become too complex, fast, expensive, or time-consuming. Algorithmic trading or real-time optimization solutions are good examples.

However, users should never delegate decision-making capability completely, stay on the sidelines until issues arise, or lose track of which processes are automated or delegated to an AI. This is particularly relevant, for example, with the advances of Robotic Process Automation (RPA).

As RPA expands (it is currently the fastest-growing category of enterprise software), employees will start setting up user-owned routines that could run in the cloud indefinitely without anybody’s direct involvement. Companies should centrally track which routines are running and what AI agents are doing and creating.
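Such a central registry does not need to be sophisticated. The sketch below, with made-up fields and entries, simply gives the company one place to see which routines exist, who owns them, and where they run.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class Routine:
    """One automated routine or bot, tracked centrally (fields are illustrative)."""
    name: str
    owner: str
    runs_on: str       # e.g. "cloud", "desktop"
    last_reviewed: str

REGISTRY = [
    Routine("invoice_matching_bot", "finance-ops@example.com", "cloud", "2021-01-15"),
    Routine("report_scraper", "sales-ops@example.com", "desktop", "2020-11-02"),
]

def export_registry(path="rpa_registry.csv"):
    """Write the registry to a CSV so audits can see every routine in one place."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "owner", "runs_on", "last_reviewed"])
        writer.writeheader()
        for routine in REGISTRY:
            writer.writerow(asdict(routine))

export_registry()
```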

They should also implement policies preventing employees from running RPA routines they created, from a USB drive or from the cloud, to outsource tasks that should be controlled and owned by the company.

Companies and users should also ensure they have a back door to access any bots or AI processes running in the background, in case the main account is disabled, users are locked out, or an emergency requires immediate access.

8. No completely virtual or decentralized environments

A while back, Kazaa, Skype, and other peer-to-peer networks touted the idea of fully decentralized systems. They would not reside in one location but would instead be hosted fractionally across a multitude of computers. The setup would be enhanced and reinforced by the ability to replicate its content and repair itself as hosts dropped from the network. It is also one of the foundations of blockchain.

That could become a major threat if an autonomous AI system had this ability, went haywire, and became indestructible.

Photo by Nastya Dulhiier on Unsplash

9. Feedback with discernment

The ability to receive and process feedback can be a great differentiator. It already allows voice recognition AI to understand and translate more languages than any human could ever learn. It can also enable machines to understand any accents or local dialects.

However, in some applications, for example social media bots or newsroom tools, consuming and acting on all the feedback can prove problematic. Between fake news, trolls, and users testing a system’s limits, processing feedback properly is challenging for most AIs.

In those areas, AIs need filters and tools to use feedback optimally and remain useful. Tay, Microsoft’s social bot, quickly fell off the deep end after ingesting raw feedback and taunts: unable to tell right inputs from wrong ones, it released offensive content to its followers.
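A minimal sketch of that kind of filter, with a placeholder block list and an assumed per-account trust score (neither comes from a real moderation system): feedback only reaches the learning pipeline after it has been screened.

```python
BLOCKED_TERMS = {"slur_1", "slur_2"}   # stand-ins for a real toxicity filter
MIN_ACCOUNT_TRUST = 0.6                # ignore low-trust or brand-new accounts

def accept_feedback(message: str, account_trust: float) -> bool:
    """Decide whether a piece of feedback is safe to learn from."""
    if account_trust < MIN_ACCOUNT_TRUST:
        return False
    if any(term in message.lower() for term in BLOCKED_TERMS):
        return False
    return True

training_queue = []
for message, trust in [("great answer, thanks", 0.9), ("slur_1 you bot", 0.2)]:
    if accept_feedback(message, trust):
        training_queue.append(message)  # only vetted feedback reaches learning

print(training_queue)  # ['great answer, thanks']
```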

10. Annotated and editable code

When machines write, edit, and update code, all changes should automatically carry embedded comments explaining the system’s logic behind each change.

Humans, or another system, should be able to review and change the code if needed, with the proper context and visibility into prior revisions.
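A minimal sketch of the idea, assuming a hypothetical helper that wraps machine-generated changes: each change carries a header explaining why it was made, and the prior revision is archived alongside it for review or rollback.

```python
import datetime

def annotate_change(new_code: str, reason: str, prior_revision: str) -> str:
    """Wrap machine-generated code with an explanatory header and keep the
    prior revision alongside it, so humans (or another system) can review,
    understand, and roll back the change. Purely illustrative."""
    header = (
        f"# AUTO-GENERATED CHANGE {datetime.date.today()}\n"
        f"# Reason: {reason}\n"
        f"# Prior revision is archived below for rollback.\n"
    )
    archive = "\n".join("# OLD> " + line for line in prior_revision.splitlines())
    return header + new_code + "\n" + archive + "\n"

old = "def fee(total):\n    return total * 0.02"
new = "def fee(total):\n    return min(total * 0.02, 50)"
print(annotate_change(new, "cap fees at 50 to match updated policy", old))
```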

Photo by Yancy Min on Unsplash

11. Plan C

As with all systems, AIs in live environments have backups. But unlike with typical IT systems, we are reaching a point where we cannot fully explore, understand, or test the AI systems we are building.

If an AI system fails, goes blank, or has major issues, we could revert to a backup that contains the same flaws and ends up reproducing the problematic behaviors.

In those cases, there should always be a plan C: switch back to human operations or use an alternative technology. As an example, a call center could handle thousands of automated AI-based voice interactions a day and dispatch users based on keywords.

As volumes grow or peak, performance could degrade, cause dropped calls, and eventually crash the system. The backup could be restored but would still contain the same flaw. The only options would be to turn everything off and decline all calls, or to have a plan C in place, redirecting incoming calls to humans or to an alternative system.
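A bare-bones sketch of that escalation path, with hypothetical handlers: try the AI system, then its backup, and finally route the call to a human queue.

```python
def handle_call(call_id, ai_system, backup_system, human_queue):
    """Try each automated tier in order; fall back to humans as plan C."""
    for system in (ai_system, backup_system):
        try:
            return system(call_id)
        except Exception:
            continue                  # this tier failed, fall through
    return human_queue(call_id)       # plan C: a person takes the call

# Hypothetical handlers standing in for real call-routing systems.
def failing_ai(call_id): raise RuntimeError("model overloaded")
def failing_backup(call_id): raise RuntimeError("same flaw as primary")
def human(call_id): return f"call {call_id} routed to a human agent"

print(handle_call(42, failing_ai, failing_backup, human))
```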

12. Keeping track of returns and resources

Our consumption of data, funding, and energy to support technology has grown exponentially over the years. As we build AI software to handle everything, we need to ensure we do not lose sight of the returns on our investments.

That includes fundamental research and capital investments, but also recurring costs such as software licenses, talent, and energy. We often forget that when we use voice recognition, for example, we send and receive data from the cloud, which is itself powered by servers in a data center. As billions of users rely on those services, we consume large amounts of resources on the back end.

In many cases, it is warranted but we do not want to carelessly burn our limited resources. We need to consider optimizing the value created and limiting our impact on the environment when possible.

Photo by Kevin Ku from Pexels

What could happen long-term?

The worst case is a dystopian scenario: we end up with ever-expanding systems that we do not control very well and have trouble fixing or managing, leading to catastrophes. Skynet and HAL 9000 come to mind. Many additional dark scenarios come to life in Black Mirror on Netflix.

Great innovation can also lead to collisions: the quest for growth, efficiency, and profits can open the door to unsustainable risks.

In a less dramatic, middle-of-the-road scenario, we end up with seriously flawed systems, which perform their duties correctly most of the time but keep causing issues. We would have to deal with any ethical lapses and any bugs we decided to look past to speed up commercialization, for example with facial recognition software:

Chart by Statista under Creative Commons License CC BY-ND 3.0

In the best-case scenario, we manage to strike a balance between using intelligent machines for efficiency and ensuring prosperity for our civilization. It translates into better jobs and a higher quality of life for all.

One way to do this is to build systems that self-improve across the board, expanding their skill sets and becoming more efficient and trustworthy as they progress.

We have already seen this with algorithmic trading. Many firms experienced issues when the markets turned or volatility increased, as systems were generating losses and were unable to adapt at first. Over time, trading systems became better at navigating different markets and processing information, reducing risk, and generating more consistent returns.

We have also seen AI grow into increasingly complex fields. For example, Demis Hassabis, DeepMind’s founder and CEO, pointed out that “These algorithms are now becoming mature enough and powerful enough to be applicable to really challenging scientific problems.” DeepMind started by learning and mastering video games and is now able to model how a protein folds. That is a 50-year-old challenge for scientists, one that shapes our understanding of life and has very promising ramifications.

What do you think? Are there valid reasons to fear unchecked autonomous intelligences? Are we doing it well today? What other principles can you think of?

As a next step, Cameron Sim, an AI Transformation and Enablement Expert and CTO, wrote a great article on Developing a Roadmap for AI-Enabled Products. Product roadmaps should incorporate the elements unique to AI, and Cameron shares the best ways to do that.

Max Dufour is a Partner at Harmeda. He leads strategic engagements for Financial Services, Technology, and Strategy Consulting clients. Connect at mdufour@harmeda.com, on LinkedIn, or visit Harmeda. Any links to external sites can be affiliate links and therefore generate compensation as part of the Amazon Associates Program and other similar programs.
