Facts On Artificial Intelligence That People Without Certain AI Knowledge Will Not Believe

Dec 17, 2023

Given that artificial intelligence (AI) has been around for roughly six decades, if we date its inception to the well-known 1956 Dartmouth workshop where the field was founded, what have we learned that would surprise both laypeople and scientists?

Every mature science has produced shocking discoveries. Before physics became the accepted method of determining the nature of matter, we did not know that the universe is expanding, that gravity arises from the curvature of a four-dimensional spacetime, or that there is an inherent limit to how accurately you can simultaneously measure a particle's position and speed.

Have 60 years of AI research produced similarly "mind-blowing" revelations about the brain or mind? Let's try to answer this question.

Object recognition, speech recognition, natural language processing, and other "simple" tasks that we perform effortlessly every day are extremely challenging for machines. There was a lot of arrogance at the outset about how "easy" certain AI problems would turn out to be. Marvin Minsky, it is said, assigned a graduate student at MIT a summer project to "solve" the computer vision problem; he did not anticipate much difficulty. After all, humans can identify objects in as little as one hundred milliseconds. Six decades later, we are still struggling to match human perception, whether identifying your mother's distorted voice on the phone or following one conversation among hundreds in a noisy restaurant.

For all their alleged prowess, the best deep-learning networks are astonishingly fragile: a recent paper demonstrated that changing the value of a single pixel is enough to flip a network's label prediction from "dog" to "computer desktop." Human perception, in contrast, is extremely reliable and resistant to large amounts of noise. The most important lesson learned is that the tasks we thought were "trivial" are the hardest for machines to perform, while activities deemed "intellectually challenging," such as playing chess or solving Sudoku, are relatively minor and easily automated.
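To make that fragility concrete, here is a minimal sketch of a one-pixel attack. The classifier `model` and its `predict_proba` method are assumptions for illustration, and the published attack uses differential evolution rather than this brute-force scan over pixels:

```python
# Minimal sketch of a one-pixel attack. `model` is a hypothetical trained
# classifier exposing predict_proba(image) -> class probabilities; the
# real attack searches with differential evolution instead of brute force.
import numpy as np

def one_pixel_attack(model, image, true_label):
    """Try extreme values at every pixel; return a perturbed copy that
    changes the predicted label, or None if no single pixel works."""
    h, w, _ = image.shape                      # assumes an HxWxC image
    for y in range(h):
        for x in range(w):
            for value in (0.0, 1.0):           # try extreme intensities
                candidate = image.copy()
                candidate[y, x, :] = value     # change exactly one pixel
                probs = model.predict_proba(candidate)
                if np.argmax(probs) != true_label:
                    return candidate           # prediction flipped
    return None
```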

The superiority of probabilistic and statistical models over pure logical modeling in cognition. Initially, logic was the preferred method for posing the problems studied in AI. When I was a graduate student in the mid-1980s, my AI textbook, Nils Nilsson's Principles of Artificial Intelligence, was largely based on logic and deterministic computation. In the spring of 1984, I took my first course on machine learning; not a single statistical concept was covered. What a shift the field has seen over the past three decades! It was not immediately apparent at the time that modeling the world requires working with data that is extremely noisy and incomplete, or, as James Clerk Maxwell so eloquently stated, that "the true logic of the world lies in the calculus of probabilities." AI had to re-learn the lesson that physicists learned the hard way with quantum mechanics at the beginning of the 20th century.

In the beginning, it was believed that once cognition reached "higher" levels, such as planning or reasoning, logic would suffice, and that if statistics and probability were needed at all, it would be in "peripheral" processing at the lowest sensory levels. That idea now seems quaint. Statistical modeling appears to be at the center of everything organisms do as they cope with the pervasive uncertainty in their perception and behavior.
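As a toy illustration of why probability sits at the core of cognition rather than at the periphery, here is a minimal Bayesian update in which a noisy sensor reading revises a belief; all the numbers are made-up assumptions:

```python
# Toy Bayesian update: revising a belief from a noisy sensor reading.
# All numbers are illustrative assumptions.
prior = 0.5                   # P(object present) before any evidence
p_detect_given_present = 0.9  # sensor hit rate
p_detect_given_absent = 0.2   # sensor false-alarm rate

# Bayes' rule: P(present | detection)
evidence = (p_detect_given_present * prior
            + p_detect_given_absent * (1 - prior))
posterior = p_detect_given_present * prior / evidence
print(f"P(present | detection) = {posterior:.3f}")  # ~0.818
```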

Intelligence is not about playing chess well, solving puzzles quickly, or doing esoteric math; it is about the numerous everyday activities we perform routinely, such as reading the newspaper, watching a movie, and summarizing the main storyline for a friend the next day. Playing chess is computationally easier than watching a Netflix movie with friends and arguing about the plot: processing a terabyte of motion video and summarizing it in a few sentences takes enormous computing power. Yet the majority, if not all, of AI textbooks started with puzzles that were "intellectually" challenging, like playing chess or solving math problems. After all, society regarded chess players as "geniuses."

A recent UK story demonstrates how widespread and incorrect this idea is. A group of MPs petitioned the UK Home Secretary on behalf of an Indian immigrant family, who were then allowed to stay beyond their temporary visitor's visa because their 10-year-old child is a chess prodigy and the "best young player in the UK for several generations." Playing games like chess at the grandmaster level is still regarded as evidence of superior intelligence. If anything, the evidence points the other way: being a grandmaster at chess does not imply being good at much else.

Generally speaking, chess grandmasters are not particularly successful in science, music, or business; their excellence is confined to chess. A far better predictor of whether someone will become a great scientist is the capacity to "tinker": to take apart toys to see how they work, to imagine, and to ask a lot of questions (such as "Why is the sky blue?"). With some exceptions, most child prodigies do not go on to extraordinary adult achievement. Math can be incredibly difficult for even the most talented scientists, musicians, and artists. Darwin, the founder of modern biology, detested math and dreaded trigonometry, whose obscure symbolism was the bane of every student in the rigid curriculum of 19th-century England.

The brain handles noisy data remarkably well and stores vast amounts of information that can be recalled almost instantly, even decades later. We now know that dimensionality reduction techniques, which locate the underlying structure in high-dimensional visual, speech, and proprioceptive data, are a key to intelligence. Human sensors are amazing, sometimes to the point of seeming miraculous. Human hearing, for instance, is based on the motion of the eardrum: at its most sensitive frequencies, we can hear sounds that displace the eardrum by less than the width of a hydrogen atom.
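As a small, self-contained illustration of dimensionality reduction, here is a sketch using PCA from scikit-learn; the library choice and the synthetic data are my assumptions, not anything from the research described above:

```python
# Dimensionality-reduction sketch: PCA on synthetic data that is nominally
# 50-dimensional but actually lies near a 2-D subspace.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 2))               # 2 underlying factors
mixing = rng.normal(size=(2, 50))                 # embed them in 50-D
observed = latent @ mixing + 0.05 * rng.normal(size=(1000, 50))  # add noise

pca = PCA(n_components=10)
pca.fit(observed)
# Nearly all variance falls on the first two components, revealing the
# low-dimensional structure hidden in the 50-D observations.
print(pca.explained_variance_ratio_.round(3))
```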

It is truly amazing how well the auditory system distinguishes signal from noise. Anyone who has used Siri, Alexa, or any of the others knows how poorly they function even under low-noise conditions. During an invited talk I gave last year in India, I was humiliated in front of everyone when I tried to demonstrate the power of Siri in a room with loud air conditioners running. Everyone in the room had no trouble hearing what I was saying; Siri, however, lacked the fundamental capability of separating "signal" (my voice) from "noise" (the air conditioners). This problem is known as blind source separation and is an active focus of machine learning research.
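For a sense of what blind source separation involves, here is a minimal sketch using FastICA from scikit-learn; the two synthetic signals stand in for a voice and an air-conditioner hum, and the mixing matrix is invented:

```python
# Blind source separation sketch: unmix two synthetic signals with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
voice = np.sin(2 * t)                      # stand-in for speech
hum = np.sign(np.sin(3 * t))               # stand-in for air-conditioner hum
sources = np.c_[voice, hum]

mixing = np.array([[1.0, 0.5],             # two "microphones", each hearing
                   [0.5, 1.0]])            # a different mix of the sources
mixed = sources @ mixing.T

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mixed)       # estimated sources, up to
                                           # permutation and scaling
```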

The human brain is the most complex machine ever built, and understanding how its roughly 100 billion neurons, and the three-dimensional wiring diagram now known as the connectome, work together to produce minds is perhaps the greatest scientific challenge humans have ever attempted. It is significantly more challenging than mapping the universe or working out DNA's structure, both of which appear relatively simple by comparison (planets move in very predictable orbits, and, as Einstein put it, "the most incomprehensible thing about the universe is that it is so comprehensible").

As far as we know, there are only four fundamental forces, not three million, and even these are being unified. Contrast that with explaining how the brain processes enormous amounts of noisy data using an unimaginably large number of computing units working asynchronously in parallel. Each unit switches appallingly slowly compared to the transistors in modern Intel chips, yet the brain outperforms any computer program at tasks requiring true intelligence. Even the much-heralded work on deep reinforcement learning of Atari video games, considered by many to be the showcase result of how far AI has advanced in the 21st century, is pitifully slow to learn: "average" humans recruited on Amazon Mechanical Turk master the same video games more than 1000x faster than DeepMind's patented DQN architecture (see Tsividis et al., AAAI 2017).
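For context, here is the tabular Q-learning update at the heart of DQN-style methods; DQN itself replaces the table with a deep network and adds experience replay and target networks, and the hyperparameters below are illustrative assumptions:

```python
# Tabular Q-learning update, the core rule that DQN approximates with a
# deep network. Hyperparameters are illustrative assumptions.
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99        # learning rate, discount factor

def q_update(state, action, reward, next_state, done):
    """One Bellman backup toward reward + gamma * max_a' Q(s', a')."""
    target = reward if done else reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])
```

The striking sample-efficiency gap is precisely that humans need a handful of such "updates" worth of experience, while DQN needs millions of frames.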

These are just a few of the many insights that AI, together with neuroscience, psychology, and related fields, has provided about the brain. It is truly sad that physics research continues to receive significantly more funding than AI research: billions of dollars continue to pour into CERN, the particle-physics laboratory in Switzerland, while research on the brain or AI receives comparatively little funding at the national or international level.

Despite the enormous benefits that unlocking the secrets of our brains could bring, as a society we remain singularly uninterested in them, preferring to probe nature's workings at the scale of "string theory," billions upon billions of times smaller than anything we can perceive. Will this change as the twenty-first century progresses? For the sake of the next generation of AI researchers, I can only hope that society places greater emphasis on brain science than on obscure models of string theory or the appearance of galaxies 15 billion light years away. Just as national funding of physics and biology led to breakthroughs in those fields, it will take much larger sums of money to discover the secrets of the brain.
