AI In Everyday Life: 8 Things AI Is Already In

Artificial intelligence is not a dream of the future. It arrived in our everyday lives long ago, and it is at work in these eight everyday things. The term artificial intelligence is a contentious one. Does it mean human-like intelligence? A synthetic imitation of biological intelligence? Computers pushing icons back and forth? Or machine systems that learn?

Depending on their perspective and background, people assign entirely different things to the term AI. But however you define AI, everyone who takes part in digital life today benefits from the technology. In this article, I’ll explain eight AI applications that affect our everyday lives.

“Googling”

Google is a pioneer in the use of AI technology. The group has been using artificial intelligence in its core search business since at least 2015. The AI-based search algorithm RankBrain helps interpret search queries and deliver relevant results. For previously unseen queries, RankBrain tries to surface the results the user probably wants by linking the new query to already known queries with a similar meaning. Google records the search results and uses them to train RankBrain further.
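To get an intuition for this kind of meaning-based matching, here is a deliberately simplified Python sketch: it compares a new query against known queries using word-overlap similarity. The queries, results and scoring are invented for illustration; RankBrain itself relies on learned vector representations and is far more sophisticated.

```python
# Illustrative sketch only: link an unseen query to the most similar
# known query. Real systems like RankBrain use learned embeddings; here
# we approximate the idea with simple word-count vectors.
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    """Turn a query into a bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical known queries and the result pages they map to.
known_queries = {
    "cheap flights to lisbon": "travel_results",
    "how to fix a flat bike tire": "repair_guide",
    "best pizza near me": "local_restaurants",
}

def match(new_query: str) -> str:
    """Return the results of the known query most similar in meaning."""
    scores = {q: cosine(vectorize(new_query), vectorize(q)) for q in known_queries}
    return known_queries[max(scores, key=scores.get)]

print(match("affordable flights lisbon"))  # -> travel_results
```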

In October 2019, Google announced its most significant search update in years: the language AI BERT understands natural language and thus helps answer search queries even more precisely. As an example of BERT’s capabilities, Google cites the search query “2019 Brazilian travellers to the USA need a visa”. With BERT, Google can grasp the critical role of the word “to” in this context and outputs the appropriate result: visa information for travellers entering the USA.

Netflix And Chill

An evening in front of the TV, watching a little Tiger King. But why does the series even appear in my recommendations? Netflix relies on recommendation algorithms to display entertainment that matches each customer’s taste. Netflix collects all sorts of data that flows into the AI algorithm:

  1. What I watch.
  2. What I watch before and after.
  3. What I saw a year ago.
  4. What I watched recently, and at what time of day.

Once Netflix knows my preferences, the algorithm selects content and generates a suitable presentation. Different customers not only see different recommendations – even the preview images are adapted to their taste. One person sees a dramatic close-up of the main character for a film; another sees a wedding scene for the same movie. My subsequent selection trains the recommendation algorithm again.
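The following toy Python sketch illustrates the basic idea of taste-based ranking from signals like those listed above. The titles, genres and scoring are invented and far simpler than Netflix’s actual deep-learning system.

```python
# Toy sketch of taste-based ranking: score candidate titles against a
# profile built from watch history. Titles, genres and weights are
# invented; the real system also uses viewing context, time of day,
# artwork tests and much more.
from collections import Counter

watch_history = ["Tiger King", "Making a Murderer", "Our Planet"]

title_genres = {
    "Tiger King": {"documentary", "true-crime"},
    "Making a Murderer": {"documentary", "true-crime"},
    "Our Planet": {"documentary", "nature"},
    "The Crown": {"drama", "history"},
    "Night Stalker": {"documentary", "true-crime"},
    "Bridgerton": {"drama", "romance"},
}

# Simple taste profile: how often each genre appears in the history.
profile = Counter(g for title in watch_history for g in title_genres[title])

def score(title: str) -> int:
    """Higher score = closer to the viewer's past genre preferences."""
    return sum(profile[g] for g in title_genres[title])

candidates = [t for t in title_genres if t not in watch_history]
print(sorted(candidates, key=score, reverse=True))
# e.g. ['Night Stalker', 'The Crown', 'Bridgerton']
```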

Similar recommendation algorithms also drive suggestions on YouTube, Spotify, Amazon, Facebook, Twitter and Instagram. They are based on variants of deep learning. Netflix also uses artificial intelligence to help executives decide which titles are worth funding for each market. To do this, the AI analyzes countless films and series and uses this information to predict content categories and expected audience numbers in different countries.

Spam And Hate Filters

In addition to recommendation algorithms, social networks rely on control and verification algorithms that recognize questionable content, such as pornographic, violent or politically extreme material. Such content is automatically flagged and, in extreme cases, deleted. Control and verification algorithms work even better in email traffic, where they act as spam filters. Services like Gmail have been using artificial intelligence alongside conventional rule-based filters for years.

Rule-based filters can block obvious spam emails but are not self-learning. This is where the AI filters come into play: the algorithms analyze a large amount of information, from email formatting to sending time, to find patterns in the data streams that indicate spam. Both supervised and unsupervised learning methods are used here.
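As a rough illustration of the supervised approach, here is a minimal Python sketch using scikit-learn (assuming the library is installed). The example emails and labels are invented, and real filters such as Gmail’s combine many more signals than the message text alone.

```python
# Minimal supervised spam-filter sketch with scikit-learn.
# Training emails and labels are made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "Win a free iPhone now, click here",
    "Limited offer, claim your prize money",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the quarterly report draft?",
]
labels = ["spam", "spam", "ham", "ham"]

# Turn email text into word-count features and fit a Naive Bayes model.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(features, labels)

new_mail = ["Claim your free prize now"]
print(model.predict(vectorizer.transform(new_mail)))  # -> ['spam']
```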

In the context of the corona crisis, control and verification algorithms are moving even more into focus, as fewer human moderators are available. The machines must therefore take on more responsibility to keep platforms such as YouTube and Facebook running. At least that’s how Google and others explain the move to more algorithmic moderation.

Face Recognition

For many people, looking at their smartphone in the morning has long been routine. Often we unlock our smartphones directly with our faces – and that works reliably even after a night of partying and with a wild hairstyle. Artificial intelligence makes this possible: face recognition algorithms run on the smartphone and can reliably and quickly identify our face after a short training session.
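Conceptually, face unlock boils down to comparing a freshly computed face “embedding” with the template stored during enrollment. The following Python sketch shows that comparison with made-up numbers and an arbitrary threshold; real systems compute the embeddings with deep networks, often on dedicated on-device hardware.

```python
# Illustrative face-unlock sketch: compare a face embedding from the
# camera against the enrolled template and unlock if they are close
# enough. Vectors and threshold are invented for demonstration.
import numpy as np

def distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two face embeddings."""
    return float(np.linalg.norm(a - b))

# Template stored during enrollment (the "short training session").
enrolled = np.array([0.12, 0.80, 0.33, 0.45])

def unlock(current: np.ndarray, threshold: float = 0.35) -> bool:
    return distance(current, enrolled) < threshold

same_person = np.array([0.10, 0.78, 0.36, 0.47])   # slightly different pose
stranger = np.array([0.90, 0.10, 0.70, 0.05])

print(unlock(same_person))  # True
print(unlock(stranger))     # False
```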

In many cases, the AI keeps learning and gets better with each unlock. Thanks to efficient algorithms and specialized hardware, the AI training runs directly on the smartphone. Similar image analysis AIs automatically search for people in our digital photo albums and tag them. This now also works for animals and some objects. Companies like Shadow use this capability to reunite lost dogs with their owners.

The social dog network Pet2Net, on the other hand, would like to enable “complex analyses of dog relationships” – big data for dogs. Face recognition is also used in research: for the BearID project, bear researcher Melanie Clapham relies on “Bearface”, a face recognition AI that identifies grizzlies by their faces and thus makes the researchers’ work easier. Face recognition is also discussed critically in the context of mass surveillance: AI-supported cameras can identify individual people and track them across different locations – entirely automatically.

Alexa & Google Assistant

Digital assistants such as Amazon’s Alexa or Google’s Assistant come closest to the pop-culture image of artificial intelligence: they have a name and can speak. In many households, they are part of everyday digital life. Computers that speak and understand language are the remarkable success story of AI research in recent years: thanks to machine learning, speech processing, synthesis and translation have become much better.

The assistants understand us reliably, can answer questions and handle simple tasks. Their AI voices sound more and more like a human’s. Integrated services such as Google Translate or Google Lens provide machine translations, identify objects or signs and thus remove language barriers.
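Machine translation of this kind is accessible to anyone today. As an illustration of the technique (not Google Translate’s own system), the following sketch uses the open Hugging Face transformers library with a small T5 model, assuming the library is installed and the model can be downloaded on first use.

```python
# Illustrative machine-translation sketch with an openly available model.
# This stands in for the technique only, not for any assistant's
# production translation system.
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="t5-small")
result = translator("Where is the nearest train station?")
print(result[0]["translation_text"])  # e.g. "Wo ist der nächste Bahnhof?"
```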

Further improvements are imminent: at the beginning of 2021, language AIs from Google and Microsoft beat the language comprehension benchmark SuperGLUE and reached human level. These systems still cannot replace a real assistant, but they represent a significant leap in performance compared to older designs.

Robot Vacuum Cleaners And Tesla

In addition to speaking, machines are also learning to see better and better. Computer vision is fundamental to (partially) autonomous robots: from robot vacuums that autonomously map our homes and then clean them, to self-driving cars that are supposed to get us safely through traffic.

The breakthrough for machine vision came in 2012 with an image analysis AI that won the ImageNet competition by a wide margin. Since then, computer vision has advanced rapidly: almost 170,000 machine vision patents had been registered by early 2019 – almost half of all AI patents.
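Image classifiers in the ImageNet lineage are freely available today as pretrained models. The following Python sketch, which assumes torch and torchvision are installed and a local file “photo.jpg” exists, labels an image with a pretrained ResNet-18; it illustrates the technique rather than any particular robot’s or carmaker’s vision stack.

```python
# Sketch of ImageNet-style image classification with a pretrained network.
import torch
from torchvision import models
from torchvision.models import ResNet18_Weights
from PIL import Image

weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()          # resize, crop, normalize
image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)     # add batch dimension

with torch.no_grad():
    logits = model(batch)
label = weights.meta["categories"][logits.argmax().item()]
print(label)  # e.g. "golden retriever"
```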

Deep Fakes

Videos manipulated with AI – so-called deepfakes – achieve millions of clicks on YouTube, for example by credibly swapping the faces of well-known Hollywood actors in films. This face-swap principle is not new, but thanks to AI methods it now works more efficiently and is accessible to more people.

This increases the spread of such videos. Since the quality of deepfakes has risen rapidly in recent years, some experts, researchers and politicians warn that masses of credible video fakes could undermine our decision-making ability as a society – for example, if social media were flooded with deepfake election ads.

“Deepfakes can weaken social trust in the fundamental authenticity of audio and video recordings and thus the credibility of publicly available information,” one such warning reads. They could therefore represent a “great danger to society and politics”. However, the risk should not be overestimated: platforms such as Twitter and YouTube regulated deepfakes in the 2020 US election campaign.

The feared flood of fake videos did not materialize. Instead, there was a deepfake series, possibly a deepfake appearance in the Star Wars series “The Mandalorian”, and a deepfake Queen on Britain’s Channel 4. And Nvidia uses the technology in video calls to make you appear to be looking into the camera.

GPT-3: Machine Writer On The Internet

Language AIs have come a long way in the past few years. At least since OpenAI published the results of GPT-2, it has been clear that machines now write so well that it is difficult for people to distinguish machine-written texts from human ones. Such skills are used, for example, by marketing experts to generate advertising slogans, or by Microsoft to display AI-generated suggestions for improvement in Word.

AI also writes short sports and business news, for example for the New York Times or Reuters. This development continues to pick up speed: the publication and sale of OpenAI’s GPT-3 show what market potential the AI company sees in machine-written texts. Microsoft has bought an exclusive license for GPT-3 and wants to use the AI’s capabilities in its products.
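GPT-3 itself is only reachable through OpenAI’s paid API, but the underlying technique can be tried with its openly available predecessor. The following sketch, assuming the Hugging Face transformers library is installed, generates text with GPT-2 as a stand-in.

```python
# Sketch of machine text generation with an openly available GPT-2 model,
# standing in for GPT-3, which is only accessible via OpenAI's API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The quarterly results show that the company"
result = generator(prompt, max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```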

