May 25, 2022
New Delhi
science

The future of deep learning, according to its pioneers

Deep neural networks will overcome their shortcomings without help from symbolic artificial intelligence, the three pioneers of deep learning argue in a paper published in the July issue of Communications of the ACM.

In their paper, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, recipients of the 2018 Turing Award, explain the current challenges of deep learning and how it differs from learning in humans and animals. They also review recent advances in the field that might provide blueprints for the future directions of deep learning research.

The paper, titled “Deep Learning for AI,” envisions a future in which deep learning models can learn with little or no help from humans, are flexible to changes in their environment, and can solve a wide range of reflexive and cognitive problems.

The challenges of deep learning

Deep learning is often compared to the brains of humans and animals. However, the past years have shown that artificial neural networks, the main component used in deep learning models, lack the efficiency, flexibility, and versatility of their biological counterparts.

In their paper, Bengio, Hinton, and LeCun acknowledge these shortcomings. “Supervised learning, while successful in a wide variety of tasks, typically requires a large amount of human-labeled data. Similarly, when reinforcement learning is based only on rewards, it requires a very large number of interactions,” they write.

Supervised learning is a popular subset of machine learning algorithms, in which a model is presented with labeled examples, such as a list of images and their corresponding content. The model is trained to find recurring patterns in examples that have similar labels. It then uses the learned patterns to associate new examples with the right labels. Supervised learning is especially useful for problems where labeled examples are abundantly available.
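To make the idea concrete, here is a minimal supervised-learning sketch in Python. The dataset and classifier are illustrative choices of mine (scikit-learn's digits dataset and logistic regression), not anything taken from the paper:

```python
# A minimal supervised-learning sketch (illustrative, not from the paper):
# train a model on labeled examples, then label new, unseen examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)            # images and their labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)      # learn patterns from labeled data
model.fit(X_train, y_train)

# Apply the learned patterns to associate new examples with labels.
print("accuracy on unseen examples:", model.score(X_test, y_test))
```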

Reinforcement learning is another branch of machine learning, in which an “agent” learns to maximize “rewards” in its environment. The environment can be as simple as a tic-tac-toe board, in which an AI player is rewarded for lining up three Xs or Os, or as complex as an urban setting, in which a self-driving car is rewarded for avoiding collisions, obeying traffic rules, and reaching its destination. The agent starts by taking random actions. As it receives feedback from its environment, it finds sequences of actions that provide better rewards.
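A toy sketch of this loop, again my own illustration rather than anything from the paper: a tabular Q-learning agent on a five-cell corridor that learns, from reward feedback alone, to walk toward the goal.

```python
import random

# Minimal tabular Q-learning (illustrative): an agent on a five-cell
# corridor starts at cell 0 and is rewarded only upon reaching cell 4.
# Actions: 0 = step left, 1 = step right.
N, GOAL = 5, 4
Q = {(s, a): 0.0 for s in range(N) for a in (0, 1)}
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def greedy(s):
    # Pick the highest-valued action, breaking ties randomly so the
    # untrained agent does not get stuck repeating one action.
    best = max(Q[(s, 0)], Q[(s, 1)])
    return random.choice([a for a in (0, 1) if Q[(s, a)] == best])

for _ in range(500):                     # training episodes
    s = 0
    while s != GOAL:
        # Mostly exploit what was learned; occasionally explore at random.
        a = random.choice((0, 1)) if random.random() < EPSILON else greedy(s)
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0   # feedback from the environment
        # Move Q(s, a) toward the reward plus the discounted future value.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2

# After training, the learned policy steps right toward the goal everywhere.
print({s: greedy(s) for s in range(N - 1)})
```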

In both cases, as the scientists note, machine learning models require huge amounts of labor. Labeled datasets are hard to come by, especially in specialized fields that don't have public, open-source datasets, which means they require the tedious and expensive work of human annotators. And complicated reinforcement learning models need massive computational resources to run countless training episodes, which makes them available only to a few very wealthy AI labs and tech companies.

Bengio, Hinton, and LeCun also acknowledge that current deep learning systems are still limited in the scope of problems they can solve. They perform well on specialized tasks but are “often brittle outside of the narrow domain they have been trained on.” Often, slight changes, such as a few modified pixels in an image or a very slight alteration of the rules in the environment, can cause deep learning systems to go astray.

The brittleness of deep learning systems is largely due to machine learning models being based on the “independent and identically distributed” (i.i.d.) assumption, which supposes that real-world data has the same distribution as the training data. i.i.d. also assumes that observations do not affect each other (e.g., coin tosses or die rolls are independent of one another).
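A toy demonstration of why this matters (my construction, not the authors'): a classifier trained on one distribution scores well on test data drawn from that same distribution, but degrades once the test distribution shifts.

```python
# Illustrative only: accuracy under i.i.d. test data vs. shifted test data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    # Two Gaussian classes; `shift` moves both clusters at test time.
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_train, y_train = sample(500)
model = LogisticRegression().fit(X_train, y_train)

X_iid, y_iid = sample(500)              # same distribution as training
X_ood, y_ood = sample(500, shift=1.5)   # shifted distribution
print("i.i.d. accuracy:  ", model.score(X_iid, y_iid))
print("shifted accuracy: ", model.score(X_ood, y_ood))
```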

“From the early days, machine learning theorists have focused on the iid assumption. Unfortunately, this is not a realistic assumption in the real world,” the scientists write.

The data distributions of real-world settings are constantly changing due to different factors, many of which are virtually impossible to represent without causal models. Intelligent agents must constantly observe and learn from their environment and other agents, and they must adapt their behavior to changes.

“The performance of today’s best AI systems takes a hit when they go from the lab to the field,” the scientists write.

The i.i.d. assumption becomes even more fragile when applied to fields such as computer vision and natural language processing, where the agent must deal with high-entropy environments. Currently, many researchers and companies try to overcome the limits of deep learning by training neural networks on more data, hoping that larger datasets will cover a wider distribution and reduce the chances of failure in the real world.

Deep learning vs hybrid AI

The ultimate goal of AI scientists is to replicate the kind of general intelligence humans have. And we know that humans do not suffer from the problems of today’s deep learning systems.

“Humans and animals seem to be able to learn massive amounts of background knowledge about the world, largely by observation, in a task-independent manner,” Bengio, Hinton, and LeCun write in their paper. “This knowledge underpins common sense and allows humans to learn complex tasks, such as driving, with just a few hours of practice.”

Elsewhere in the paper, the scientists note, “[H]umans can generalize in a way that is different and more powerful than ordinary iid generalization: we can correctly interpret novel combinations of existing concepts, even if those combinations are extremely unlikely under our training distribution, so long as they respect high-level syntactic and semantic patterns we have already learned.”

Scientists offer various solutions to close the gap between AI and human intelligence. One approach that has been widely discussed in the past few years is hybrid artificial intelligence, which combines neural networks with classical symbolic systems. Symbol manipulation is a very important part of humans’ ability to reason about the world. It is also one of the great challenges of deep learning systems.

Bengio, Hinton, and LeCun do not believe in mixing neural networks and symbolic AI. In a video that accompanies the ACM paper, Bengio says, “Some people think there are problems that neural networks just cannot resolve and that we have to resort to the classical AI, symbolic approach. But our work suggests otherwise.”

Instead, the deep learning pioneers believe that better neural network architectures will eventually account for all aspects of human and animal intelligence, including symbol manipulation, reasoning, causal inference, and common sense.

In their paper, Bengio, Hinton, and LeCun highlight recent advances in deep learning that have helped make progress in some of the fields where deep learning struggles. One example is the Transformer, a neural network architecture that has been at the heart of language models such as OpenAI’s GPT-3 and Google’s Meena. One of the benefits of Transformers is their capability to learn without the need for labeled data. Transformers can develop representations through unsupervised learning, and then they can apply those representations to fill in the blanks of incomplete sentences or generate coherent text after receiving a prompt.
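As a rough illustration of the fill-in-the-blanks capability, here is a small sketch using the Hugging Face transformers library with a pretrained masked language model. The model choice is mine for illustration; GPT-3 and Meena themselves are not openly downloadable:

```python
# Illustrative sketch: "filling in the blanks" with a pretrained masked
# language model via the Hugging Face transformers library.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill("Deep learning is a branch of [MASK] intelligence."):
    print(f"{candidate['token_str']:>12}  score={candidate['score']:.3f}")
```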

More recently, researchers have shown that Transformers can also be applied to computer vision tasks. In combination with convolutional neural networks, Transformers can predict the content of masked regions.

A more promising technique is contrastive learning, which tries to find vector representations of missing regions instead of predicting exact pixel values. This is an intriguing approach and seems to be much closer to what the human mind does. When we see an image with masked regions, we may not be able to visualize a photo-realistic depiction of the missing parts, but our mind can come up with a high-level representation of what might go in those masked regions (e.g., doors, windows, etc.). (My observation: This can fit in well with other research in the field that aims to align vector representations in neural networks with real-world concepts.)
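To show what matching representations rather than pixels can look like in code, here is a minimal contrastive (InfoNCE-style) loss in PyTorch. The encoder outputs, batch size, and dimensions are placeholders of mine, not details from the paper:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive loss sketch: each row of z1 should match the same row
    of z2 (two representations of the same image region) and mismatch
    every other row in the batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # pairwise similarities
    targets = torch.arange(z1.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 8 pairs of 128-d representations, standing in for the outputs
# of some encoder applied to a masked region and its surrounding context.
z_context = torch.randn(8, 128, requires_grad=True)
z_region = torch.randn(8, 128)
loss = info_nce_loss(z_context, z_region)
loss.backward()
print("contrastive loss:", loss.item())
```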

The push to make neural networks less dependent on human-labeled data fits into the discussion of self-supervised learning, a concept LeCun is working on.

The paper also touches on “System 2 deep learning,” a term borrowed from Nobel Prize-winning psychologist Daniel Kahneman. System 2 accounts for the functions of the brain that require conscious thinking, including symbol manipulation, reasoning, multi-step planning, and solving complex mathematical problems. System 2 deep learning is still in its early stages, but if it becomes a reality, it could solve some of the key problems of neural networks, including out-of-distribution generalization, causal inference, robust transfer learning, and symbol manipulation.

The scientists also support work on “neural networks that assign intrinsic frames of reference to objects and their parts and recognize objects by using the geometric relationships” between them. This is a reference to “capsule networks,” an area of research Hinton has focused on in the past few years. Capsule networks aim to upgrade neural networks from detecting features in images to detecting objects, their physical properties, and their hierarchical relations with each other. Capsule networks can provide deep learning with “intuitive physics,” a capability that allows humans and animals to understand three-dimensional environments.

“There’s a long way to go in terms of our understanding of how to make neural networks really effective. And we expect to have fundamentally new ideas,” Hinton told ACM.
