
Deep Learning, The Future Passes Through GPU Computing

By leveraging ever more powerful GPUs, many companies are building artificial intelligence systems capable of automatically analyzing and cataloging images. Computers watch us, scrutinizing what we do and how we behave, and not just figuratively. Computer vision ("artificial vision") has become an increasingly marked trend in computer systems. Facebook, for example, has developed software capable of recognizing the faces posted on the social network, helping you identify and tag your friends in images.

Google Photos uses computer vision algorithms to identify shapes and objects in photos and automatically categorize them; Twitter can detect pornographic images and delete them automatically, without a human reviewer. If the Net has "acquired" the sense of sight, it is thanks to artificial intelligence and, in particular, to the neural networks that make deep learning possible. With this technology and its algorithms, computer scientists worldwide are attempting to reproduce the functioning of the human brain so that machines can analyze the structure of language and the distribution of terms online.

The GPUs For Artificial Intelligence

The secret of the system devised by the Canadian engineers lies in using GPUs instead of CPUs. The computing units of graphics cards have a different architecture from central processing units: instead of processing one instruction at a time in serial fashion, they carry out thousands of operations simultaneously, in parallel.

This means that, for highly parallel workloads, the computing throughput of a typical GPU can be tens of times higher than that of a standard CPU. For this reason, in a process called GPU computing, graphics cards are used with increasing frequency for IT tasks and operations that demand exceptional computing capacity: visual recognition, language analysis, web search (search engines), and IT security.
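To make the idea concrete, here is a minimal sketch, assuming PyTorch is installed and a CUDA-capable GPU is available, that times the same matrix multiplication on the CPU and on the GPU. The sizes and timing approach are illustrative only, not a rigorous benchmark.

```python
# Illustrative CPU vs GPU comparison (assumes PyTorch + a CUDA-capable GPU).
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# CPU path: the multiplication runs on the central processor.
start = time.time()
c_cpu = a @ b
print(f"CPU matmul: {time.time() - start:.3f} s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()      # make sure the host-to-device copies finished
    start = time.time()
    c_gpu = a_gpu @ b_gpu         # thousands of GPU cores work in parallel
    torch.cuda.synchronize()      # wait for the kernel to complete before timing
    print(f"GPU matmul: {time.time() - start:.3f} s")
```

On hardware where the GPU is well matched to the workload, the second timing is typically a large fraction smaller, which is exactly the gap GPU computing exploits.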

Deep Learning To The Nth Degree

A group of Microsoft researchers demonstrated what a neural network is capable of when enough computing power can be deployed behind it. In the Redmond laboratories, they managed to create a deep learning system based on 152 levels of abstraction: considering that the most powerful supercomputers currently used for artificial intelligence tasks can handle, in the best of cases, "just" around thirty levels of abstraction, it is clear that the Microsoft project could be truly revolutionary.

Levels Of Neurons

However, to better understand what we are talking about, it is necessary to understand what is meant by "levels of abstraction" or, by analogy with the human brain, "levels of neurons." An artificial intelligence or deep learning system capable of running only one algorithm at a time has a single level of abstraction; an AI (artificial intelligence) equipped with a second algorithm that analyzes the results of the first has two levels of abstraction; and so on.

The deep learning system developed by Microsoft's engineers therefore exploits the power of thousands of GPUs to perform 152 successive operations on the same image or photo, distinguishing even the most insignificant details. The importance of the levels of abstraction is easy to explain: the greater their number, the better the automatic learning capacity of these machines. Put simply, a system with more levels of abstraction/neurons is more "intelligent" than a system with fewer.
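The sketch below, again using PyTorch with arbitrary layer sizes (not Microsoft's actual architecture), shows how "levels of abstraction" correspond to stacked layers: each layer re-processes the output of the one before it, and the layer count is just a parameter.

```python
# Hypothetical illustration: stacking layers to add "levels of abstraction".
import torch
import torch.nn as nn

def make_deep_classifier(num_layers: int, width: int = 256, num_classes: int = 10):
    """Build a classifier with `num_layers` stacked fully connected blocks."""
    layers = [nn.Flatten(), nn.Linear(28 * 28, width), nn.ReLU()]
    for _ in range(num_layers - 1):          # each extra block = one more level
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, num_classes))
    return nn.Sequential(*layers)

shallow = make_deep_classifier(num_layers=2)    # few levels of abstraction
deep = make_deep_classifier(num_layers=152)     # many levels, far more capacity

x = torch.randn(1, 1, 28, 28)                   # a dummy 28x28 grayscale image
print(deep(x).shape)                            # torch.Size([1, 10])
```

In practice, simply stacking this many plain layers is hard to train well, which is precisely why building such deep networks was considered a breakthrough.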

Difficult Construction

Microsoft's scientists faced quite a few problems in fine-tuning their 152-level neural network. The most complex concerned its configuration and construction: as the levels of abstraction increase, so do the intelligence of the computer system, its complexity, and the difficulty of managing it. The scientists were forced to guess which of several extremely complex, mutually alternative configurations would be the most efficient.

To bypass this problem, Microsoft's scientists relied on computer science itself. An expert computer system analyzes the configuration devised by the research team and formulates alternative versions until it finds the most effective and efficient one. In technical jargon, this practice is called hyperparameter optimization (often carried out via "grid search"), and it saves time, computing power, and critical economic resources.
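A minimal sketch of a grid search follows. The parameter names and the train_and_score() helper are made up for illustration; in a real pipeline that function would train the network with the given configuration and return a validation score.

```python
# Illustrative grid search over hypothetical hyperparameters.
from itertools import product

grid = {
    "learning_rate": [0.1, 0.01, 0.001],
    "num_layers": [34, 50, 152],
    "batch_size": [64, 128],
}

def train_and_score(config):
    # Placeholder: a real version would train a model with `config`
    # and return its validation accuracy.
    return -abs(config["learning_rate"] - 0.01) + config["num_layers"] / 1000

best_config, best_score = None, float("-inf")
for values in product(*grid.values()):          # try every combination
    config = dict(zip(grid.keys(), values))
    score = train_and_score(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config)
```

The exhaustive loop is the simplest form of the idea; in practice the search space is pruned or sampled, since training every combination of a deep network quickly becomes prohibitively expensive.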

Hardware Problem

According to the engineers on the Microsoft research team, we are just at the beginning of a new revolution in artificial intelligence, in which video cards and GPU computing will play a fundamental role. We can hypothesize scenarios in which the development of neural networks and deep learning algorithms is automated and entrusted to a sort of silicon-based "natural selection."

However, for this scenario to come true and for deep learning to create ever-smarter machines, research centers specializing in artificial intelligence must continue to invest in increasingly powerful computer systems. We are therefore faced with a hardware problem: for the algorithms to become more refined and precise, more computing power is needed. For this reason, Microsoft and other large hi-tech companies are investing with conviction in FPGA (field-programmable gate array) chips, integrated circuits that can be programmed via software to perform specific computing operations.

