When will AI become superhuman?

This post was originally published by Pratyaksh Jain at Medium [AI]

It’s a question most of us want answered.


AI, as most of us know, stands for Artificial Intelligence. This means that we — humans — have tried to replicate the intelligence present in our minds using technology. John McCarthy, who coined the term in 1956, defined AI as “the science and engineering of making intelligent machines.” Let me tell you where technology was in 1956 — the first videotape recorder was sold for $50,000. The first computer hard disk held only 5 MB of data and cost $10,000 per megabyte. It was the size of two refrigerators and weighed about 2 tons. And arguably the greatest invention — the first snooze alarm clock — was also made by General Electric! Suffice it to say, since the idea was first born, we’ve come pretty far.

In 2018, IBM unveiled the world’s smallest computer — 1 mm x 1 mm. It’s smaller than a grain of rice, has enough computing power to handle basic AI tasks, and can even work with blockchain. AI is now able to detect some cancers better than human doctors, beat world champions at chess, create art and drive a car entirely by itself! I think John McCarthy would have been proud of the pace at which technology has advanced. And it doesn’t seem like we’re stopping any time soon. So, let’s get down to why you’re reading this article — when will AI become superhuman?


For this, there are a couple of things we need to understand first, like what is necessary for it to reach the human level of intelligence — Artificial General Intelligence. AGI is the intelligence of a machine that has the capability to understand and learn any intellectual task as a human being can. It’s sometimes referred to as “Strong AI” and “Full AI”. Strong AI’s ultimate goal is to have self-aware consciousness as we do and use it to solve problems, learn and plan for the future. A Strong AI, just like a human child, would have to be “born” and eventually develop into an adult through inputs and experiences, constantly progressing and advancing its abilities over time.

Many researchers argue that the single largest stumbling block to Strong AI is the definition of intelligence itself. Since the concept is ambiguous, ill-defined and differs from person to person, it’s very hard to measure success in this field. Until intelligence and understanding can be explicitly defined, some researchers believe we may never achieve Strong AI.

The most common test for intelligence right now is the Turing test. For those who haven’t seen “The Imitation Game” and haven’t heard of Alan Turing: he was an English mathematician, computer scientist, logician, cryptanalyst, philosopher and theoretical biologist. So, it would be pretty unfortunate if you didn’t know about him.

He devised the Turing test — originally called the imitation game — in 1950, which tests a machine’s capability to exhibit intelligent behaviour indistinguishable from that of a human. The issue with the Turing test is that it only tests for one skill set: text output. A Strong AI needs to be able to do a number of things equally well, which led to the Extended Turing Test. The extended version assesses the textual, visual and auditory performance of an AI and compares it to human-generated output.

So, in the view of some experts, an AI would need to pass the Extended Turing Test before it could be considered a Strong AI or AGI.

In 2013, Vincent C. Müller, president of the European Association for Cognitive Systems, and Nick Bostrom of Oxford University (who has written roughly 200 papers on superintelligence and AGI) conducted a survey of about 500 AI researchers, asking them “When is AGI likely to happen?”

– 10% believed it would happen by 2022.

– Half of them perceived that it’s likely to happen by 2040.

– 90% believed that it is bound to happen by 2075.

The 10% who expected AGI by 2022 were very, very hopeful and optimistic — counting from 2013, they gave the field only a decade, which seems unrealistic. But in the most recent survey, conducted in 2019 –

– 45% predicted a date before 2060.

– 34% of the participants forecasted a date after 2060.

And along with this, 21% said that humans will never achieve Strong AI. That’s a large number of people who think this way. We’d all like to dismiss it as a very pessimistic point of view, but let’s try to understand why they think like this.

There are two facts that we must consider –

– Human intelligence is fixed unless we merge our cognitive capabilities with a machine. Elon Musk’s neural lace start-up, Neuralink, aims to do this.

– Machine intelligence depends on algorithms, processing power, and memory.

We’ve been providing machines with ever better processing power and memory, both of which grow at an exponential rate. As for the algorithms, we’ve been developing ones that use that processing power and memory more and more effectively.

Since our intelligence is fixed and machine intelligence doesn’t seem to have any hard limit, it may only be a matter of time until machines surpass us. That’s the power of exponential growth: while machines seem very dumb right now, they can grow to be very smart, very fast.
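To get a feel for how quickly an exponential curve overtakes a flat one, here is a toy back-of-the-envelope sketch. The starting ratio and doubling period are made-up assumptions for illustration, not real measurements of machine intelligence:

```python
import math

def years_until_crossover(initial_ratio: float, doubling_years: float) -> float:
    """Years until exponentially growing machine capability matches a
    fixed human baseline.

    initial_ratio: machine capability today as a fraction of the human
                   level (e.g. 0.01 means machines are at 1% of it).
                   This number is a made-up assumption.
    doubling_years: assumed years per doubling of machine capability.
    """
    # Solve initial_ratio * 2**(t / doubling_years) = 1 for t.
    return doubling_years * math.log2(1 / initial_ratio)

# Even starting at just 1% of the human level, doubling every 2 years
# closes the gap in under two decades.
print(years_until_crossover(0.01, 2))  # ≈ 13.3 years
```

The point of the sketch is only that the answer is insensitive to the starting gap: cutting the starting ratio by another factor of 100 adds just a handful of extra doublings, not centuries.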


However, we should take all these predictions with a grain of salt. The problems the pioneers of AI ran into turned out to be far more complicated than anything they had anticipated, and the same can be said today. It’s been more than sixty years since McCarthy birthed the idea, and he had no indication of the vast, foggy terrain ahead of him. Even now, we have no clue how much further we have to go, or how far we can go. The important thing is, we’re still moving.
