Artificial Intelligence Explained

Author: Chris Russell

Artificial intelligence is all around us, although it’s not a trait of humans or animals; living creatures rely on natural intelligence. Humans program artificial intelligence into machines so that these systems can understand the environment they operate in and have a better chance of achieving their programmers’ goals. Common AI applications include Internet search engines like Google and speech-recognition tools like Apple’s Siri. AI was first developed at research universities in the 1950s.

What Is Artificial Intelligence?

AI, or artificial intelligence, is a branch of computer science that concentrates on developing machines and programs that can perform tasks that once required human intelligence. AI is becoming increasingly common in most people’s daily lives thanks to the Internet and Internet-connected devices like smartwatches.

What Are Examples of Artificial Intelligence?

Modern life is full of examples of artificial intelligence. When you open Netflix, the recommendations of things to watch are curated by AI. When you interact with a chatbot on a website, that’s AI, too. AI also helps email spam filters detect unwanted messages before they reach your inbox. And search engines like Google use AI to better understand what you’re searching for and deliver results you’ll want.

How Does Artificial Intelligence Work?

AI is a branch of computer science that aims to simulate human intelligence with computers. The field was inspired by British mathematician Alan Turing, best known for his work as a code-breaker during World War II, who published a paper asking, “Can machines think?” Today, the development of AI relies on coders, designers, and computer engineers who use huge amounts of data to “teach” computers to think.

The Four Types of Artificial Intelligence

There are four types of artificial intelligence: reactive machines, limited memory, theory of mind, and self-awareness.

Reactive Machines

A reactive machine is the most basic type of AI, created to perform a small number of very specific tasks. Reactive machines don’t have memory storage, which means they can’t draw on previous experiences when completing a task. Although they are limited, reactive machines are highly reliable because they respond to the same set of circumstances in the same way every time. One famous example of a reactive machine is IBM’s Deep Blue, a chess-playing computer first introduced in the 1990s. The computer couldn’t learn from past games or analyze its human opponent’s long-term strategy: It could only make a move based on the pieces currently in play.
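
As a rough illustration of what “reactive” means in practice, here is a minimal Python sketch of an agent that maps the current game state directly to a move and keeps no memory of past turns. The tic-tac-toe-style board and the function name are illustrative assumptions; this is not how Deep Blue itself was built.

    # A reactive agent: its decision depends only on the state it is handed
    # right now; nothing from earlier turns is stored or consulted.
    def reactive_move(board):
        """Return the index of the first empty square on the board."""
        for index, square in enumerate(board):
            if square == " ":      # an empty square
                return index
        return None                # no legal moves remain

    board = ["X", " ", "O",
             " ", " ", " ",
             " ", " ", "X"]
    print(reactive_move(board))    # always 1 for this exact board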

Limited Memory

Artificial intelligence machines that have memory storage, letting them gather information and draw on past experiences when deciding how to act, are known as limited memory AI. Using limited memory for machine learning involves six steps:

  • Training data is input into the machine.
  • A machine learning model is crafted.
  • The model makes predictions.
  • The model accepts feedback from humans or from the environment.
  • That feedback is stored in the machine’s memory.
  • The machine repeats these steps.
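
The loop below is a minimal Python sketch of that six-step cycle. It stands in for a real machine learning library by using a toy “model” (a running average), so every name here is an illustrative assumption rather than a standard API.

    training_data = [10.0, 12.0, 11.0]      # 1. input the training data
    memory = list(training_data)            # the memory bank the model draws on

    def train(data):                        # 2. craft the (toy) model
        return sum(data) / len(data)

    model = train(memory)

    for _ in range(3):                      # 6. repeat the cycle
        prediction = model                  # 3. make a prediction
        feedback = prediction + 1.0         # 4. feedback from a human or the environment (simulated here)
        memory.append(feedback)             # 5. store the feedback in memory
        model = train(memory)               # re-train using the enlarged memory

    print(round(model, 2))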

Limited memory artificial intelligence is used in three major machine learning models:

  • Reinforcement learning, where the machine learns through trial and error (a short sketch of this idea follows the list)
  • Long short-term memory (LSTM), where the machine uses past experiences (in the form of data) to predict what will happen next. LSTMs give more weight to recent data but still consider all of it when making a decision.
  • Evolutionary generative adversarial networks (E-GAN), which evolve over time. The model constantly searches for a better path and uses statistics and simulations to predict outcomes even as it continues to evolve.
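
To make the trial-and-error idea behind reinforcement learning concrete, here is a minimal Python sketch of an agent that chooses between two actions, remembers the average reward each one has earned, and gradually prefers the better one. The reward numbers and the epsilon-greedy rule are illustrative assumptions, not a description of any production system.

    import random

    rewards = {"A": [], "B": []}            # memory of past outcomes for each action

    def choose(epsilon=0.1):
        """Mostly exploit the best-known action, occasionally explore."""
        if random.random() < epsilon or not (rewards["A"] and rewards["B"]):
            return random.choice(["A", "B"])
        return max(rewards, key=lambda a: sum(rewards[a]) / len(rewards[a]))

    def environment(action):
        """Hypothetical environment in which action B pays off more on average."""
        return random.gauss(1.0 if action == "A" else 2.0, 0.5)

    for _ in range(200):                    # learn through trial and error
        action = choose()
        rewards[action].append(environment(action))

    print({a: round(sum(r) / len(r), 2) for a, r in rewards.items()})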


Theory of Mind

The technology doesn’t yet exist to make theory of mind AI a reality. Right now, it’s a concept based on the idea that machines could learn to understand how thoughts and feelings shape human decision-making, and then take those thoughts and feelings into account when interacting with humans.

Self-Awareness

Self-awareness can’t become a reality until theory of mind AI moves from the theoretical realm into practice. Self-awareness is the stage at which a machine has its own consciousness and understands the parameters of its own existence.

How Is AI Used?

How AI is used depends on whether the AI in question is considered narrow artificial intelligence or artificial general intelligence.

Narrow Artificial Intelligence

This type of AI is also called weak AI. It’s a form of artificial intelligence that is usually focused on performing one task and doing it extremely well. These machines are very dependable, but they function in a very limited context.

Artificial General Intelligence

Also known as strong AI, AGI is the type of AI beloved by science-fiction writers and filmmakers: The androids from HBO’s Westworld, for instance, are depictions of AGI. Like humans, AGI has a strong, flexible general intelligence, and elements of that intelligence can be applied to any problem it faces.

A Brief History of Artificial Intelligence

AI has been featured in stories dating all the way back to the ancient Greeks. In fact, Aristotle developed methods of deductive reasoning as a way for humans to understand how their own intelligence functioned. However, AI as most people think of it today is very much rooted in modern history.

1940s

In “A Logical Calculus of the Ideas Immanent in Nervous Activity” (1943), Walter Pitts and Warren McCulloch propose a mathematical model for creating an artificial neural network.

1950s

In 1950, Turing publishes “Computing Machinery and Intelligence,” a paper that includes a proposed method for determining whether a machine is intelligent, which would become known as the Turing test. In 1954, a collaboration between Georgetown University and IBM produces a machine that translates 60 Russian sentences into English. The term “artificial intelligence” is coined at the Dartmouth Summer Research Project on Artificial Intelligence in 1956.

1960s

The AI Lab is founded at Stanford University in 1963. In 1966, the U.S. government’s Automatic Language Processing Advisory Committee (ALPAC) reports on a concerning lack of progress in the field of machine translation research. The report causes the government to cancel all funding for machine translation projects.

1970s

Continued government frustration with the lack of development in the field of AI leads to more funding cuts in both the U.S. and Britain. Research grinds to a near halt; this period would become known as “the First AI Winter.”

1980s

The commercial expert system R1 (also known as XCON) debuts in 1980. Its purpose is to configure orders for new computer systems. The Japanese government introduces the Fifth Generation Computer Systems project, and the U.S. resumes significant investment in AI in response.

1990s

During the Gulf War, the U.S. military uses an automated logistics planning tool known as DART. IBM debuts the Deep Blue chess computer, which defeats world chess champion Garry Kasparov in 1997.

2000s

The U.S. military invests in AI-driven robots like the PackBot and BigDog. A self-driving car wins the DARPA Grand Challenge in 2005. In 2008, Google introduces speech recognition in its iPhone app.

2010-14

IBM’s Watson defeats human champions on Jeopardy! in 2011. Also that year, Apple releases the first version of Siri. Meanwhile, the Google Brain Deep Learning Project teaches a neural network to recognize a cat without the network ever specifically being taught what a cat is.

2015-21

In 2016, Hanson Robotics unveils Sophia, a humanoid robot capable of making facial expressions and recognizing other faces. Google releases BERT, a natural language processing model, in 2018. During the COVID-19 pandemic, Baidu releases the LinearFold AI algorithm to help researchers predict the secondary structure of the SARS-CoV-2 virus’s RNA sequence 120 times faster than other methods.

About Chris Russell

Chris Russell has spent 20 years in the online recruiting space, building job boards and recruiting apps and publishing a variety of content for the recruiting industry through articles, podcasts, and webinars. Also a former talent practitioner, he’s a popular voice helping to inform the modern recruiter.