AI vs. Human: A comparison of human perception with Artificial Intelligence (AI)

This post was originally published by Rida Nasir at Think ML

Human intelligence rests on a remarkable duality: (a) perception of patterns and (b) rational, structured decision-making. The two capacities are distinct yet complementary, and they run in parallel in human decision-making. A machine learning (ML) based system has two analogous components.

a) Artificial intelligence uses deep learning to interpret patterns in the input data and make a final decision. This behavior is similar to human perceptual intelligence.

b) A computer executes explicit instructions (code) to apply logic and reasoning, similar to human rational intelligence.

Have you ever wondered why you can recall a piece of music but not its lyrics? Sometimes our brain remembers faces more easily than names, tastes more easily than the names of drinks, and recognizes a smell faster than the name of the perfume. This suggests that the human brain retains intricate patterns of faces, smells, sounds, and tastes better than verbal descriptions.

Mark Zuckerberg, the CEO of Facebook, stated that machine learning and AI would surpass human capabilities in computer vision and speech recognition within the next five to ten years. Social media giants, including Facebook, use ML technology to deliver recommendation services: advanced algorithms run in the background to learn a user's interests and filter out content they don't want to see. Organizations are also introducing facial recognition features on their sites to enhance security.

Deep neural networks are highly significant in the field of artificial intelligence. They are influencing lives via image recognition, precision medicine, automatic translation, and much more. Moreover, these artificial perception algorithms share several traits with biological brains: both can solve complex tasks thanks to the structured networks of neurons in their architecture.

Simplistic Model of the Human Brain

Perception is the human ability to hear, see, or become aware of something via the senses. The brain consists of two primary parts.

Perception-based: The right one

Rational-based: The left one

The brain's right part generates perceptions after receiving patterns from taste, smell, sight, touch, and hearing. By contrast, the brain's left part handles logical interpretation and produces a rational, structured understanding of a particular problem.

When humans study a subject, they use the rational part of the brain, which provides structured information about that subject. Meanwhile, the five senses are the primary source of perceptions built from unique patterns. Many situations involve a mix of patterns and logic, and the brain uses both parts to make decisions. Both parts are vital sources of human intelligence.

The Dominant Factor in Human Decision Making

While discussing the human brain, it would not be fair to ignore emotions and feelings. The brain generates perceptions based on the type of experience. For instance, if certain sound or sight patterns create a perception of fear in someone, the brain stores that perception of fear as an emotion. Emotions come in many types, including liking, disliking, fear, love, anger, and hate. Both parts of the brain stay active all the time to deal with different situations: the right part generates perceptions while, simultaneously, the rational part constructs a sensible interpretation of the same problem. Which side prevails depends on the particular situation.

Several questions arise when comparing machine learning with human perception. Human vs. AI: which will exceed the other? Key questions include:

  • How similar are machines and humans in their functionality?
  • How can machines understand human vision and respond in the same manner?
  • Can humans use their precision methods to achieve excellence in machine learning?

All these questions lead to a comparison of these two intriguing fields. Alongside the similarities, there are prominent differences between the two systems, which open up new challenges. Hence, it is necessary to examine DNNs and humans carefully.

How Do These Rules Apply to Artificial Intelligence?

Most artificial intelligence systems use deep learning, in which learning happens by exposing the machine to thousands of illustrative examples. The system absorbs complex information, encoding the nuances of videos, sounds, or pictures into the neural network's parameters. After the required training, the system can perceive input information based on patterns in faces, images, objects, or movements. Its decision-making process is based on the perceptions tied to those input patterns. In this respect, it behaves like the right part of the brain, an expert at recognizing patterns.
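The idea of "encoding patterns into parameters by repeated exposure to examples" can be sketched with the simplest possible learner: a single artificial neuron trained by gradient descent on a toy dataset. This is an illustrative sketch, not any production system; deep networks repeat the same mechanism across millions of parameters and many layers.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                # 200 examples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # the simple pattern to learn

w = np.zeros(2)                              # the "parameters" that will
b = 0.0                                      # absorb the pattern
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):                         # repeated exposure to examples
    p = sigmoid(X @ w + b)                   # current "perception"
    grad_w = X.T @ (p - y) / len(y)          # how to nudge each parameter
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

After training, the parameters `w` point in the direction of the pattern in the data; the network has not been told a rule, it has absorbed one.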

Deep neural networks have already been used for speech recognition and image classification, and have been assessed with novel stimuli. Researchers at MIT presented their findings at the Conference on Neural Information Processing Systems.

Their findings concerned the concept of metamers: physically different stimuli that generate the same perceptual effect. Most humans have three types of cone cells in the retina responsible for colour vision. The perceived colour of a single wavelength of light can be matched by a combination of three particular primary lights, for instance red, green, and blue. This observation led scientists to conclude that there are three types of light detectors in the human eye, and it is the basis of all the colours we view every day on electronic screens.
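A metamer can be demonstrated numerically: with only three detector types, many physically different light spectra collapse to the same three responses. The cone sensitivities below are made-up toy numbers, not real physiological data; the construction adds a "metameric black" (a spectrum the cones cannot see) to produce a second, different spectrum with identical cone responses.

```python
import numpy as np

# Rows: toy L, M, S cone sensitivities sampled at 5 wavelengths.
cones = np.array([
    [0.1, 0.3, 0.6, 0.8, 0.4],   # L (long-wavelength) cone
    [0.2, 0.6, 0.8, 0.4, 0.1],   # M (medium)
    [0.9, 0.5, 0.1, 0.0, 0.0],   # S (short)
])

spectrum_a = np.array([0.0, 0.2, 0.5, 0.3, 0.1])

# A null-space vector of the cone matrix is invisible to all three cones.
_, _, vt = np.linalg.svd(cones)
metameric_black = vt[-1]                     # cones @ metameric_black ~ 0
spectrum_b = spectrum_a + 0.1 * metameric_black

physically_different = not np.allclose(spectrum_a, spectrum_b)
perceptually_same = np.allclose(cones @ spectrum_a, cones @ spectrum_b)
print(physically_different, perceptually_same)
```

The two spectra differ physically, yet they excite all three cone types identically, so they would look like the same colour.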

Challenges in Comparing Human Perception with AI

Regarding the difficulty of comparing human perception with AI, three hidden pitfalls can lead to brittle conclusions:

  1. Humans are too quick to conclude that machine learning has exceeded human intelligence in perception. It is similar to a person claiming their pet is smiling because of its human-like expressions.
  2. It’s challenging to get results without proper testing and training procedures.
  3. Experimental conditions should be the same for comparing humans and machines in perceiving information.

Researchers from the University of Tübingen in Germany and partner organizations published a study, "The Notorious Difficulty of Comparing Human and Machine Perception." They focused on current issues in comparing neural networks with the human visual system. In their study, the researchers performed in-depth experiments on advanced deep learning models and examined how their behaviour relates to human perception.

The Complication of Human Perception and Computer Vision

The vision of machines that can think and act has moved from science-fiction films into the real world. Humans long ago began building intelligence into machines to support them in their work. These are commonly known as humanoid robots, bots, and digital devices that work alongside humans in a friendly way.

The human visual cortex comprises about 140 million neurons and is considered one of the most mysterious parts of the brain. It processes and interprets visual data to form perceptions and create the memories related to them. Humans can infer a great deal from very little information just by seeing an image. By contrast, it is not straightforward to train computers to behave like humans. Computer science is only about 60 years old, and computer vision is one of its younger branches. Before the introduction of deep learning in computer vision, practitioners used the template matching approach, which could recognize objects with a sliding-window search. Machine learning approaches were first introduced to computer vision around the year 2000, and one of the best-known face detection algorithms was developed by Paul Viola and Michael Jones in 2001.
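The pre-deep-learning sliding-window idea mentioned above can be sketched in a few lines: slide a small template over every position of an image and score each position by similarity. This is a minimal illustration of the general technique, not any specific historical system.

```python
import numpy as np

def match_template(image, template):
    """Return the (row, col) where the template fits best, scored by
    sum of squared differences (lower is better)."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = float("inf"), (0, 0)
    for r in range(ih - th + 1):             # slide the window over rows
        for c in range(iw - tw + 1):         # ...and over columns
            patch = image[r:r + th, c:c + tw]
            score = np.sum((patch - template) ** 2)
            if score < best:
                best, best_pos = score, (r, c)
    return best_pos

image = np.zeros((8, 8))
image[3:5, 4:6] = 1.0                        # a 2x2 bright "object"
template = np.ones((2, 2))
print(match_template(image, template))       # location of the object
```

The brute-force search is why this approach scaled poorly: every position (and, in practice, every scale) must be checked explicitly, whereas a trained network learns what to look for.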

Researchers set out to reconstruct human perception and developed the field of computer vision with deep learning. Traditional software could not perform the required tasks, so they turned to Convolutional Neural Networks (CNNs). It is still challenging to compare human perception with neural networks, as these systems are not fully mature, and there is much left to explore about human vision and how the brain works.

Deep neural networks also work in complicated ways; they can confuse even the teams that create them. The German researchers wrote in their published work:

“Despite a multitude of studies, comparing human and machine perception is not straightforward”

Their study focused on three case studies of how humans and machines deal with visual data. The researchers compared the two systems to learn how to build a human-level AGI (artificial general intelligence). They stated that:

"While comparison studies can extend our understanding, they are not easy to conduct. Dissimilarities between the two systems can complicate the procedure and open up several challenges"

How Do Neural Networks Perceive Contours?

They considered contour detection in their experiment to test the processing of visual data, selecting tasks such as the Synthetic Visual Reasoning Test (SVRT). Their goal was to test whether deep learning algorithms could learn the concept of open and closed shapes and identify them in different situations.

The team used a closed-contour detection test to see whether ResNet-50 could identify images containing lines that form closed contours. ResNet-50 is a deep convolutional neural network (CNN) widely used for image classification. Identifying such image patterns is relatively easy for humans. Initially, the system showed that it could locate closed contour shapes, whether hard-edged or curved.

Can Machine Learning (ML) Reason About Images?

The other part of the study tested the deep learning algorithm on visual reasoning. When the researchers changed line thickness and colours, the system failed. They predicted that this apparently human-level performance would break down whenever other factors were varied. Interestingly, they also found that AI may find unexpected solutions beyond human perceptual capacity. This is the experiment that led them to warn that humans are too quick to call machines expert.

Based on previous studies, the researchers assumed that a DNN could not do the same-different task but would perform well on spatial tasks. The algorithms were asked to identify whether two images in a frame were the same or different, tasks that humans perform with full proficiency. However, the results were surprisingly different: the DNNs performed both jobs very well. They suggested that reported deficiencies may arise from:

  • Flaws in how the neural networks are trained
  • Limited availability of training data
  • How the networks are structured

Borowski stated that:

"This second case study points out the difficulty of drawing general conclusions about the mechanisms involved that reach beyond the tested architectures and training procedures"

They noted that the human visual system is naturally trained to extract information and identify unique patterns, so it is unfair to test a man-made model given far less information.

Measuring the Recognition Gap of Deep Learning

In their next experiment, they used cropped and zoomed images to confuse the system, to the point where it could no longer recognize the original image; the resulting difference is known as the "recognition gap." In the last step, the investigators measured the recognition gap for DNNs: they zoomed into pictures until the system's recognition gradually degraded.
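The measurement procedure can be sketched as follows: crop an image toward its centre step by step, record a classifier's confidence for each crop, and take the confidence drop between the last patch that is still "recognized" and the next, smaller crop. The classifier below is a stand-in (it just scores how much of a toy "object" each patch retains), not the study's actual model.

```python
import numpy as np

image = np.zeros((16, 16))
image[:6, :6] = 1.0                          # toy "object" in one corner

def classify(patch):
    # Stand-in classifier: confidence = fraction of the object's
    # pixels that the patch still contains.
    return patch.sum() / image.sum()

def confidences(img, step=2):
    out, t = [], 0
    while img.shape[0] - 2 * t > 0:          # crop toward the centre
        crop = img[t:img.shape[0] - t, t:img.shape[1] - t]
        out.append(classify(crop))
        t += step
    return out

def recognition_gap(confs, threshold=0.5):
    for prev, nxt in zip(confs, confs[1:]):
        if prev >= threshold > nxt:          # recognition just broke down
            return prev - nxt                # size of the drop
    return 0.0

confs = confidences(image)
print(round(recognition_gap(confs), 2))
```

A large gap means recognition collapses abruptly between two nearly identical crops, which is exactly the sharp, human-like transition the study looked for.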

Earlier studies showed a huge gap between deep neural networks and human perception. But in this paper, the scientists pointed out that earlier examinations of neural network recognition gaps used image patches selected by humans, which favoured human vision. Their study instead used machine-selected patches for the deep learning models, and this time they observed a recognition gap in AI similar to the one in humans.

Funke said their team is revising the paper and updating it on arXiv, an online platform for scientific research articles. The revision will bring new ideas on how to improve, conduct, and interpret comparative experiments.

Interesting Comparisons Between Human Perception and Machine Perception

  1. Human rational intelligence is rooted in the rational part of the brain and is associated with great thinkers like Newton and Einstein. Deep learning, by contrast, corresponds to the perceptual skills of the human brain.
  2. Artificial intelligence learns patterns in its input data and derives perceptions from them through deep learning. AI then makes decisions based on these perceptions, quantified as a level of confidence. A perfect AI machine would mimic the perceptual capability of the human brain.
  3. The software in a typical computer works analogously to the rational part of the brain.
  4. Human perception-based intelligence is the product of millions of years of evolutionary history. This capacity runs deeper than humans' rational abilities, which developed far more recently.
  5. It is not easy to describe human perception-based thinking in words. The process is fully automatic and operates below conscious control. Logic, on the other hand, can be described in exact words.
  6. Most problems engage both the rational and perceptual parts of the brain and interconnect them to reach a solution. How the brain's parts network internally remains a mystery.
  7. The interconnection of standard computing and AI in real systems is at an early stage. It will advance the future of computer vision and AI in far more diverse ways.
  8. Researchers are focused on interconnecting hundreds or thousands of neural networks in an AI system to obtain more comprehensive intelligence. This resembles the human brain, which combines information from different regions to perform a specific task.
  9. AI has not yet reached human-level excellence, because the human brain can work in many dimensions simultaneously. It can solve complex multidisciplinary problems even without prior knowledge, generates new ideas through creativity and emotion to achieve what seems unachievable, and gives humans self-awareness and a remarkable sense of consciousness.
  10. Present AI systems, despite many limitations, are still capable of bringing evolutionary change. They are going to change the way humans live and work through their incredible opportunities. AI-powered object recognition, personalization, and simplified applications are only the start of a massive change.
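The "level of confidence" mentioned in item 2 is commonly obtained by passing a network's raw output scores (logits) through the softmax function, which turns them into a probability distribution; the highest probability is read as the model's confidence in its decision. A minimal sketch:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))      # subtract max for stability
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0])          # raw scores for three classes
probs = softmax(logits)

chosen = int(probs.argmax())                 # the decision
confidence = float(probs.max())              # the level of confidence
print(chosen, round(confidence, 2))
```

A decision made with 0.51 confidence and one made with 0.99 confidence look identical if only the chosen class is reported, which is one reason confidence itself matters when comparing machine and human perception.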

Ending Words

The German researchers put great effort into measuring artificial intelligence and differentiating it from human intelligence. The brain's two central decision-making strategies (perception and logic) both inform the development of computer perception techniques: AI uses deep learning to make decisions based on perceptions, while standard computers are built on logic for rational decision-making. Mixing both flavours should yield better, more balanced decisions. Computer vision still needs tremendous effort, as it works only in a few constrained environments, and researchers need solid knowledge of human brain perception, the underlying mathematics, and the enabling technology.
