That is not intelligence

This post was originally published by Igor Lobanov at Medium [AI]


My friends and family know me as someone who is always on the lookout for technology advancements and who gets easily excited about their potential. From distributed ledgers to 3D printing, I really do think we are fortunate to live in such unbelievably interesting times. And that’s not just the way I spend my spare time. My work is all about creating solutions that allow people to achieve more, which is, of course, easier with tomorrow’s technology.

That said, I find myself uninspired by the progress in the field of artificial intelligence. Despite daily news about algorithms reaching new heights in their ability to outplay humans in Starcraft, predict protein folding, spot suspicious financial transactions, and help blind people navigate the world, I can’t help thinking that all of this taken as a whole, whilst marginally helpful, is simply missing the mark of what is needed to overcome the productivity paradox, which is what I believe gets in the way of creating a prosperous society worldwide. In my view, by and large, AI still remains a niche tool. In this post I would like to justify that view, and also explore the shortcomings of the modern-day ‘AI strategies’ pursued by enterprises. Finally, I speculate on what a better strategy might look like.

Let’s start by saying that perhaps 99% of what’s called AI these days, even the most fashionable ‘deep learning’, merely amounts to a vastly complicated statistical analysis of humongous amounts of data, enabled by the unprecedented drop in the costs of storage and compute (bytes/$ and FLOPS/$ respectively). When I say ‘merely’, it is not because I think the analysis techniques are somehow inadequate. Algorithms can always be improved, but what you can already do with this line of development is quite impressive, as the examples above show. People will continue to come up with endlessly creative ways of using these methods, and undoubtedly in the near future we will see AI outplaying humans in other games.

The problem is that there is only so much you can achieve with statistics alone. Even the most sophisticated quantitative analysis cannot uncover abstractions and explore causal relationships between them, which is what drives the physical and social mechanics of the world around us. I am not talking about some obscure metaphysical truths. Even our everyday three-dimensional space is quite abstract if all you have to go on is a camera feed. For example, if you are standing in a long corridor, the perception of walls, ceiling and floor converging at some point in front of you is, in fact, an optical illusion, and you need a good set of preconceptions about the world to make sense of what you are looking at. No wonder all the games AI excels at are played from a top-down view, and it’s probably going to be a while before an algorithm can beat a human in Fortnite. Note I’m not talking about ‘in-game AI’, which can ‘cheat’ by accessing the spatial model and the locomotion physics directly, but one that can only use the visual inputs and controls available to a human player.

It is, broadly, the same reason why you cannot teach a dog to speak, or become a composer just by listening to music. A neural network trained on a diverse data set — artificial or natural — is usually able to identify salient features of an input and represent them as weights across all layers. Ultimately that’s what allows the network to differentiate subsequent inputs. The problem is that the features are spread out throughout the network and, as such, are inaccessible for introspection. You can train an artificial neural network to recognize animals in images, but you cannot get it to point to anything specific representing ‘whiskers’, ‘paws’ and ‘tail’, ask it to draw a cat cartoon, or answer a riddle about cats. In other words, it cannot be said that the network understands anything about cats in the way a three-year-old would; it’s just a pattern-matching tool that can assign labels to images in a way that most humans would agree with.
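To make the point concrete, here is a minimal sketch using scikit-learn on a toy synthetic dataset I made up purely for illustration (two noisy clusters standing in for ‘cat’ vs ‘not cat’). The small network labels the data well, yet its entire ‘knowledge’ is a handful of anonymous weight matrices; no individual parameter corresponds to a nameable concept you could point at.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy stand-in for 'cat vs not-cat': two noisy clusters of feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 20)),
               rng.normal(1.5, 1.0, (200, 20))])
y = np.array([0] * 200 + [1] * 200)

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(X, y)
print("training accuracy:", net.score(X, y))  # labels the data well...

# ...but the learned 'features' are just dense weight matrices, spread
# across layers: nothing here is addressable as 'whiskers' or 'paws'.
for w in net.coefs_:
    print(w.shape)
```

You can inspect `net.coefs_` all day; the shapes tell you the architecture, but the numbers inside carry no introspectable meaning, which is exactly the distributed-representation problem described above.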

Again, this is no small feat and has many useful applications, but it’s far from enough to completely remove the need for human involvement in performing a typical non-trivial task that requires judgement. There are always edge cases, stemming from the fact that there is no understanding of anything in an artificial neural network, and it is easily confused by the messiness of the real world. Where the stakes are high, a human will always need to take a look as well. That, plus the fact that engineering an AI-based system correctly and maintaining it for years to come is a costly undertaking, gives a lacklustre ROI over its entire lifecycle.

Good engineers designing AI solutions recognize that. There is an apt observation that one should not try to create artificial colleagues, and should instead focus on smarter tools. The consensus seems to be a model whereby a human worker is ‘augmented’ with an AI assistant, which helps to sift through the data and points out the most salient cases requiring human attention, while leaving more mundane situations to resolve themselves through some kind of automated workflow in the background. This works exceptionally well in some fields, but in reality there are two issues that prevent this model from being universally successful. Firstly, there are not that many jobs in which the vast majority of time is spent on work that fits this description. If you are a biochemist, solving protein folding is of huge help, but what if you are a beautician, a builder, a lawyer, or a software engineer? Yes, there are aspects of your role which could be streamlined with targeted application of AI algorithms, but, no, it’s almost certainly not going to make you twice as productive compared to what you could achieve with modern-day technology already.

The other issue is brought about by the messiness of the real world. Outside of a narrow set of problems, even the most sophisticated AI algorithms do only marginally better than what could already be achieved with traditional statistical tools, which have by and large remained unchanged since the operations research days of the Second World War. Machine learning can be more precise and dynamic, but there generally isn’t enough training data around to deal with important edge cases, which makes the ROI a bit of a gamble.

All in all, the use of AI at present could be described as a creative and non-linear search for low-hanging fruit across a particular enterprise or industry. There are infinitely many use cases, but only a few of them generate commercially viable business cases when the costs are properly taken into account. Herein lies another problem plaguing some enterprises trying to exploit the potential of AI technologies. Looking for viable use cases requires a strong grasp of the rapidly evolving field of AI research and engineering practice, as well as intimacy with the details of how the work is done locally. The latter is hard if all AI experts are gathered in a centralized team to be assigned to projects. In theory this would work, but, in practice, lack of business domain knowledge distorts the allocation of resources. This often results in the central team churning out proofs-of-concept for far-fetched ‘problems’, whilst the rest of the firm gets on with its own business, largely ignoring them: the all-too-familiar ‘innovation theatre’ scenario.

In a way, this is a bit like creating a centralized ‘electricity team’. Electricity is fantastic, but, in order to create real value, it’s more important to know what to use it for than to know how to use it. If your job is to create business value with electricity, you would want to get everyone up to speed with the merits and capabilities of electric energy, and thereby create a demand for solutions where they would be most applicable, whilst also justifying investment in the power distribution network. This time-honoured principle works with just about any technology, and, as such, would be a valid strategy for creating value with AI in the short-to-medium term — more on this later.

Unfortunately, going back to the earlier metaphor, the sad truth is that what business really needs is artificial workers, because smarter tools do not cut it. In the grand scheme of things, only ‘artificial workers’ with human-like agency and human-like understanding of matters could reasonably be expected to take on non-trivial job tasks unsupervised, and understanding, as we saw earlier, is just not there yet in the AI field. Even the purported grasp of natural language exhibited by GPT-3, the rock-star celebrity of AI, is not the human kind of understanding that could be relied upon in the same way you would rely on a human being understanding language. That’s not to say that GPT-3 is useless — I think it’s awesome at what it does, but it’s a tool that is used for very clever party tricks far more often than for anything of business value.

To summarise, the AI field is evolving quickly and with amusing visual effects. The emerging algorithms are capable of doing very well in a narrow set of problem domains, but are seemingly still far from cracking human-like understanding and agency. Inevitably then, in practice AI is a solution looking for a problem. Consequently, the right stance seems to be getting better at matchmaking now whilst scouting for further advances down the road.

Therefore, I think a reasonable strategy for commercialising the potential of AI technology that a typical enterprise can follow has two aspects. The first focuses on the short-to-medium term and draws inspiration from the electricity analogy mentioned above. In short, it boils down to a few guiding principles.

  1. Educate your constituency on the possibilities of modern AI tools, focusing specifically on enabling everyone to recognise patterns of problems where AI could conceivably give an advantage. The emphasis must be on creating a pull, not pushing. Bonus points for creating a couple of mind-blowing yet relevant demos to bring ideas to life and get people excited, although don’t forget they are a means to an end.
  2. Create channels and forums for ‘grassroots’ ideas to be tabled, filtered, and to gather political support, so that you always have a list of viable potential projects. Allocate resources to help with feasibility studies according to the potential of each idea, and be prepared to fail fast. It’s crucial that every project has an influential local champion outside the ‘core AI team’, whatever that means. Bonus points for managing to incentivise such behaviour.
  3. Secure funding and deploy engineering talent to work on the most promising project or two. Have a rigorous filter to make sure you are going after a sweet spot, not a moonshot, so you have a good chance of realising the benefits in-year. Bonus points for having discretion over funding.
  4. Deliver benefits and repeat, whilst making sure that a) the success is widely celebrated and publicised thus forming a play-book for subsequent cycles, and b) your ability to deliver further projects increases. The latter means that you institutionalise the learnings, stay abreast of the latest technology advances, and accumulate reusable assets. Bonus points for creating a technology platform accelerating subsequent delivery.
  5. Finally, develop AI talent locally in order to make business teams self-sufficient. Aim to move from turning the wheels of the process to providing a platform to build on top of. Bonus points for making yourself unnecessary altogether in the long run.

If this looks familiar and resembles a more general strategy for data-driven business transformation, that’s because it is. At the moment, any practical application of AI to the needs of a business is essentially a data strategy. Local data experts and teams are your friends and likely champions for the most promising ideas. Moreover, all technology-driven business transformations — from electricity to blockchain — follow the same cookbook.

The other aspect of the strategy deals with the long term, and is a waiting game of sorts. There are some promising avenues of research into more human-like behaviour of artificial agents, specifically causal reasoning and 4E cognition. This is very much fundamental research, and at times more philosophical than scientific, let alone engineering. Still, we are arguably getting close to ‘peak pattern recognition’, where further increases in the complexity of models and network architectures are likely to give only diminishing returns. You have to be in a position to build on any breakthroughs in this area at pace, which means being ready to work with academic researchers and innovators.

Thankfully, successful execution of the first part of the strategy positions you to do exactly that. There will inevitably be questions without answers, and a great deal of awareness of the limits of the current approaches. Further paradigm shifts will undoubtedly remove some of those limits, giving you more to work with. Still, it’s somewhat ironic that the job of finding the right use for AI, at least in the foreseeable future, will be reserved for humans.
