Artificial intelligence is all around us, though it isn't a trait of humans or animals; living creatures use natural intelligence. Humans program artificial intelligence into machines so that these systems can understand the environment they operate in and better achieve their programmers' goals. Common AI applications include Internet search engines like Google and speech-recognition tools like Apple's Siri. AI was first developed in the 1950s at research universities.
AI, or artificial intelligence, is the part of computer science that concentrates on developing machines and programs able to perform tasks that once required human intervention. Thanks to the Internet and Internet-connected devices like smartwatches, AI is becoming increasingly common in daily life.
Modern life is full of examples of artificial intelligence. When you open Netflix, the recommendations you see were curated by AI. When you interact with a chatbot on a website, that's AI, too. AI also powers email spam filters, helping them detect messages you don't want. And search engines like Google use AI to better interpret what you're searching for and deliver the results you need.
AI is a branch of computer science that aims to simulate human intelligence with computers. This research was inspired by British mathematician Alan Turing, best known for his work as a code-breaker during World War II, who published an article that asked, “Can machines think?” Today, the development of AI relies on coders, designers, and computer engineers who use huge amounts of data to “teach” computers to think.
There are four types of artificial intelligence: reactive machines, limited memory, theory of mind, and self-awareness.
A reactive machine is the most basic type of AI, created to perform a small number of narrowly defined tasks. Reactive machines have no memory storage, which means they can't draw on previous experience when completing a task. Although limited, reactive machines are highly reliable because they respond to a given set of circumstances the same way every time. One famous example of a reactive machine is IBM's Deep Blue, a chess-playing computer first introduced in the 1990s. Deep Blue couldn't formulate a long-term strategy or analyze its human opponent's play: it could only choose a move based on the pieces currently on the board.
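The defining property of a reactive machine can be sketched as a pure function: it keeps no memory, so identical inputs always produce identical outputs. The rule table and state names below are purely illustrative, a toy stand-in for something like Deep Blue's move selection, not its actual logic.

```python
# A reactive "machine" as a pure function of the current situation.
# No memory is kept, so the same input always yields the same output.
def reactive_move(board_state: str) -> str:
    # Hypothetical rule table: each recognized situation maps to one fixed response.
    rules = {
        "opponent_threatens_queen": "move_queen",
        "center_open": "advance_pawn",
    }
    # Unrecognized situations fall back to a default action.
    return rules.get(board_state, "wait")

print(reactive_move("center_open"))  # advance_pawn
print(reactive_move("center_open"))  # advance_pawn: same input, same output, every time
```

Because nothing about a past game can influence the next move, the machine is perfectly predictable, which is exactly the reliability (and the limitation) described above.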
Limited memory describes artificial intelligence machines that can store information and draw on those memories when deciding how to act. Machine learning with limited memory follows six steps:

1. Training data is input.
2. The machine learning model is built.
3. The model makes predictions.
4. The model receives feedback from humans or from the environment.
5. That feedback is stored in the machine's memory.
6. The steps are repeated.
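The six steps above can be sketched as a tiny loop. The model here is a toy running-average predictor; the class and method names are illustrative, not drawn from any real library.

```python
# Minimal sketch of the six-step limited-memory loop.
class LimitedMemoryModel:
    def __init__(self):
        self.memory = []                # step 5: where feedback/experience is stored

    def train(self, data):              # steps 1-2: ingest training data into the model
        self.memory.extend(data)

    def predict(self):                  # step 3: predict from stored experience
        if not self.memory:
            return 0.0
        return sum(self.memory) / len(self.memory)

    def update(self, feedback):         # steps 4-5: accept feedback and store it
        self.memory.append(feedback)


model = LimitedMemoryModel()
model.train([1.0, 2.0, 3.0])
for observed in [4.0, 5.0]:             # step 6: repeat predict -> feedback -> store
    guess = model.predict()
    model.update(observed)
print(model.predict())                  # prediction now reflects all stored feedback
```

Unlike the reactive machine, this model's answers change as its memory grows, which is what lets limited-memory systems improve with experience.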
Limited memory artificial intelligence is used in three major machine learning models.
The technology doesn’t yet exist to make theory of mind AI a reality. Right now, it’s a theory based on the idea that machines can learn how humans make decisions by using thoughts and feelings. It would allow AI machines to take these feelings and thoughts into account when interacting with humans.
Self-awareness can't become a reality until theory of mind AI does. Self-awareness is the stage at which a machine has its own consciousness and understands the parameters of its own existence.
How AI is used depends on whether the AI in question is considered narrow artificial intelligence or artificial general intelligence.
This type of AI is also called weak AI. It’s a form of artificial intelligence that is usually focused on performing one task and doing it extremely well. These machines are very dependable, but they function in a very limited context.
Also known as strong AI, AGI is the type of AI beloved by science-fiction writers and filmmakers; the android hosts of HBO's Westworld, for example, are AGI. Like a human, an AGI would have a flexible general intelligence, elements of which could be applied to any problem it faces.
AI has been featured in stories dating all the way back to the ancient Greeks. In fact, Aristotle developed methods of deductive reasoning as a way for humans to understand how their own intelligence functioned. However, AI as most people think of it today is very much rooted in modern history.
In their 1943 paper “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Walter Pitts and Warren McCulloch propose a way to build a mathematical model for creating an artificial neural network.
Turing publishes “Computing Machinery and Intelligence,” a paper that included a proposed method for determining whether a machine is intelligent, which would become known as the Turing test. In 1954, a collaboration between Georgetown University and IBM produces a machine that translates 60 Russian sentences into English. Meanwhile, the term “artificial intelligence” is first used at the Dartmouth Summer Research Project on Artificial Intelligence in 1956.
The AI Lab is founded at Stanford University in 1963. In 1966, the U.S. government’s Automatic Language Processing Advisory Committee (ALPAC) reports on a concerning lack of progress in the field of machine translation research. The report causes the government to cancel all funding for machine translation projects.
Continued government frustration with the lack of development in the field of AI leads to more funding cuts in both the U.S. and Britain. Research grinds to a near halt; this period would become known as “the First AI Winter.”
The commercial expert system R1 (also known as XCON) debuts in 1980. Its purpose is to configure commercial orders for computer systems. The Japanese government introduces the Fifth Generation Computer Systems project, and the U.S. resumes significant investment in AI in response.
During the Gulf War, the U.S. military uses an automated logistics tool known as DART. IBM debuts the Deep Blue chess computer, which defeats chess champion Garry Kasparov in 1997.
The U.S. military invests in AI-driven robots like the PackBot and BigDog. A self-driving car wins the DARPA Grand Challenge in 2005. In 2008, Google introduces speech recognition in its iPhone app.
IBM’s Watson defeats human champions on Jeopardy! in 2011. Also that year, Apple releases the first version of Siri. Meanwhile, the Google Brain Deep Learning Project teaches a neural network to recognize a cat without ever being explicitly taught what a cat is.
In 2016, Hanson Robotics creates Sophia, a humanoid robot capable of making facial expressions and recognizing other faces. Google releases BERT, a natural language processing engine, in 2018. During the COVID-19 pandemic, Baidu releases the LinearFold AI algorithm, which helps researchers predict the structure of the SARS-CoV-2 virus's RNA 120 times faster than other methods.