What Do We Look for in AI?


When we think of artificial intelligence at its peak, we usually think of one of two things: humans and robots coexisting happily in society, or robots killing or enslaving humans. These are both very human things to want to do, and we have a storied history of doing both. These thoughts, or perhaps expectations, are informed by us and by the society we live in. As we develop artificial intelligence, we become robot parents, and we teach our programmed children what we know.

AI has been modeled after humans almost since its inception, because our benchmark for intelligence is ourselves. A figurehead of the AI movement was Alan Turing, an English computer scientist who, in 1950, posed the question of whether a machine could imitate anything a thinking human can do. Turing framed the question as the Imitation Game, in which an interrogator communicates blindly with both a person and a machine and tries to figure out which is which. The purpose of the Imitation Game was for each subject to convince the interrogator that it was the human; the goal was to appear human. The Imitation Game, now commonly referred to as the Turing test, has attracted opinions both positive and negative, and it is no longer a very useful goal for artificial intelligence, because it rewards convincing people that a machine is human more than it rewards giving a machine any intelligence or emotion of its own. Nevertheless, the Turing test has stuck around for decades, because in the tenuously defined world of artificial intelligence it represents one of the few concrete goals we have.

Chatterbots were some of the first ventures into artificial intelligence, and their goal is fairly straightforward: mimic human conversation. A chatterbot is the simplest implementation of the Turing test. If a chatterbot can convince a human that it is human, it passes the test, whether or not it has any understanding of anything it said or heard. In fact, one of the earliest well-known chatterbots, ELIZA, was designed by Joseph Weizenbaum to show just how shallow conversation between man and machine was. Because ELIZA was not actually ‘intelligent’ and could not learn anything, it was programmed with the barest of real-world knowledge and communicated mostly by turning a human participant’s words back on them in the form of a question. In programming terms, this is easy: we dissect the parts of the sentence (the subject, the verb, the rest of the statement, and so on), swap in the corresponding pronouns to turn the statement around, and put the pieces back together. “I feel tired” breaks apart into ‘I’ and ‘feel tired’, to which ELIZA might respond, ‘Why do you feel tired?’ There is no understanding going on, but the parts of the sentence can be rearranged to make it appear that ELIZA not only understands but cares about the person speaking to it. This was certainly the case when ELIZA first came out: many people had an emotional response and believed in ELIZA’s understanding, despite knowing that ELIZA was only a computer program. Weizenbaum, who had created ELIZA precisely to counter this sentiment, was distressed and shut ELIZA down, potentially depriving the field of AI of some early advancements.
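To make that sentence-dissection concrete, here is a minimal ELIZA-style sketch in Python. It is not Weizenbaum’s original script: the patterns, the pronoun “reflections”, and the fallback line are all invented for this example.

```python
import re

# A minimal ELIZA-style responder (illustrative only, not Weizenbaum's script).
# It matches a few sentence patterns, swaps first- and second-person words,
# and turns the speaker's statement back on them as a question.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(fragment):
    # swap pronouns so "my job" becomes "your job", and so on
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

PATTERNS = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"i (.*)", re.I),      "Why do you {0}?"),
]

def respond(statement):
    for pattern, template in PATTERNS:
        match = pattern.match(statement.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."   # fallback when nothing matches

print(respond("I feel tired"))                # Why do you feel tired?
print(respond("I am worried about my job"))   # How long have you been worried about your job?
```

Even a handful of patterns like these is enough to keep a conversation superficially going, which is exactly the shallowness Weizenbaum wanted to expose.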

Chatterbots have evolved over the years since ELIZA. Every year the Loebner Prize, a competition that seeks to raise the standard of AI, challenges programmers to build the most convincing AI they can. Entries tend to be chatterbots, since convincing a human that they are talking to another human is much, much easier than teaching a robot to pick up a cup in a convincing manner. However, entries to the Loebner Prize reflect the very problems of the Turing test that the competition upholds: programs that do well in the judging sessions tend to rely on deception, and programmers spend more time trying to trick judges than they do trying to develop actual artificial understanding. In some ways this is appropriate; Alan Turing also moved away from the question of whether machines can think to the question of whether machines can imitate humans. But appropriate or not, painting a human exterior on a program that cannot think does not an artificial intelligence make.

Better advances in AI have come from machine learning and deep learning. (For a good, quick overview of machine learning, you can check out this earlier blog post by Doug.) From the start, programmers have tried to model artificial intelligence after humanity. An early concept was the neural network, a structure patterned after the human brain. Each neuron holds a value, each connection between neurons carries a weight that scales how much that value matters, and each neuron adds its own bias; a neuron’s output then becomes an input to other neurons, which have their own weights, biases, and connections, and so on. Neural networks mimic the way human neurons fire, passing information through the layers to try to make sense of it. Early neural networks were somewhat ineffective and didn’t take off, but now, thanks in part to a massive increase in both memory and computing power, neural networks are back in style and working better than ever.
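To make those weights, biases, and connections concrete, here is a tiny two-layer network written out by hand in Python. The numbers are arbitrary rather than learned; the sketch only shows how each neuron combines the values feeding into it.

```python
import math

def sigmoid(x):
    # squash any number into the range 0-1, a common "firing strength"
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # each input is scaled by its weight, the bias shifts the total,
    # and the activation function decides how strongly the neuron "fires"
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

inputs = [0.5, 0.8]

# hidden layer: two neurons, each with its own weights and bias
hidden = [
    neuron(inputs, weights=[0.4, -0.6], bias=0.1),
    neuron(inputs, weights=[0.7, 0.2], bias=-0.3),
]

# output layer: one neuron whose inputs are the hidden neurons' outputs
output = neuron(hidden, weights=[1.2, -0.8], bias=0.05)
print(output)
```

A real network has many more neurons and layers, and, crucially, the weights and biases are learned rather than chosen by hand, which is where training comes in.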

Deep Learning is a subset of Machine Learning, which is a subset of Artificial Intelligence. Deep Learning focuses on teaching software to teach itself.

One of the hallmarks of machine and deep learning is a machine’s ability to teach itself. We don’t consider a machine very intelligent if we have to tell it what to do (even though that’s how most of us learn too). Previous AI booms focused on explicit, task-oriented programming, but now that data and memory are available in far greater quantities than they were in the 1950s, machines can start to teach themselves. This is where machine learning and all its subgroups start to deviate from traditional AI: there is a heavy dependence on statistics. Give a program an input and an expected output, and it can check whether the output it comes up with differs from the one you gave it. In the past, that difference was information a human could use to make adjustments; now it’s information the program can use to make adjustments on its own. When we ‘train’ AIs, we give them lots and lots of data telling them where to start and where they should finish, and given enough data, an AI can train itself to complete the task without any further human input.

A basic example is a program learning to read handwritten numbers. This is something that is very easy for humans to do, but something programs have consistently struggled with. Starting with random guesses, the program takes an input, runs it through its network, comes out with an answer, and adjusts accordingly. The most common method of adjustment is called backpropagation: information flows through the network to the final layer, a cost function measures the difference between the calculated result and the expected result, and adjustments then flow back toward the input layer. Using just human-supplied inputs and outputs, time, and memory, a program can teach itself to recognize handwritten numbers without any further human interference at all.

One risk with self-taught neural networks is that the network becomes too dependent on its training data. Because programs are often made to do one thing, or one set of things, giving a program an input it isn’t expecting will probably not produce an error or a failure. Instead, the program will just try to make the most sense of the input that it can, whether or not its output makes any sense to us (an effect you can see in this YouTube video).
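Here is a deliberately simplified sketch of that guess-check-adjust loop in Python: a single sigmoid neuron learning a made-up rule (output 1 when its two inputs sum to more than 1) rather than a real digit-reading network. The data, learning rate, and number of passes are all invented for illustration, but the cycle is the one described above: forward pass, cost, backward adjustment.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# toy training data: two random inputs, and the answer we expect for each
data = [([x, y], 1.0 if x + y > 1.0 else 0.0)
        for x, y in [(random.random(), random.random()) for _ in range(200)]]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]  # random starting guesses
bias = 0.0
learning_rate = 0.5

for epoch in range(1000):
    for inputs, expected in data:
        # forward pass: compute the network's current guess
        guess = sigmoid(sum(w * i for w, i in zip(weights, inputs)) + bias)
        # cost: how far the guess is from the expected answer (squared error),
        # pushed back through the sigmoid to get a gradient
        error = guess - expected
        grad = error * guess * (1.0 - guess)
        # backward adjustment: nudge each weight against the gradient
        weights = [w - learning_rate * grad * i for w, i in zip(weights, inputs)]
        bias -= learning_rate * grad

print(weights, bias)  # both weights should come out positive and roughly equal
```

A real digit recognizer does the same thing with thousands of weights spread across many layers, which is what backpropagation is for: it pushes that same error signal back through every layer instead of just one.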

But sometimes AIs training themselves can lead to breakthroughs. Google’s DeepMind created AlphaGo, an AI that plays the board game Go. Go had long been thought to be beyond the reach of programs, since the number of possible moves and board positions is enormous and Go is a game of strategy that requires thinking far ahead. In 2015, DeepMind’s AlphaGo became the first program to beat a professional Go player, and each iteration of the project went on to beat better and better opponents, defeating the world’s best player, Ke Jie, in 2017. All of these versions of AlphaGo were trained by reviewing huge numbers of human-played games and learning patterns from them, then sharpening their skills by playing against humans and other Go-playing bots. AlphaGo Zero, however, the heir to this dynasty of Go-playing robots, was given no training data at all. It was simply given the rules of Go and set to play itself over and over. At first its moves were random, but over a short amount of time it began learning and making better and more strategic decisions. AlphaGo Zero surpassed the level of the first AlphaGo that beat a professional in only three days, and in just over a month it overtook the last version, AlphaGo Master, which had been awarded a ninth-dan professional rating. If AlphaGo Master beat the best human player in the world, and AlphaGo Zero beat AlphaGo Master, is AlphaGo Zero the best Go player in the world? If a program can be awarded a ninth-dan ranking, it very well may be.
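To get a toy feel for what “given only the rules and set to play itself” means, here is a sketch of self-play learning on a much simpler game than Go: two players take turns adding 1 or 2 to a running total, and whoever lands exactly on 10 wins. This is nothing like DeepMind’s actual training setup; it just shows a program improving purely by playing against itself and keeping score of which moves tended to precede wins.

```python
import random
from collections import defaultdict

TARGET = 10
values = defaultdict(float)   # (total_before_move, move) -> estimated value

def choose_move(total, explore=0.1):
    # legal moves: add 1 or 2 without overshooting the target
    moves = [m for m in (1, 2) if total + m <= TARGET]
    if random.random() < explore:
        return random.choice(moves)                        # occasionally explore
    return max(moves, key=lambda m: values[(total, m)])    # otherwise pick the best-known move

def play_one_game():
    history = {0: [], 1: []}   # the (state, move) pairs each player chose
    total, player = 0, 0
    while True:
        move = choose_move(total)
        history[player].append((total, move))
        total += move
        if total == TARGET:
            return player, history   # the player who hit 10 wins
        player = 1 - player

def train(games=20000, step=0.05):
    for _ in range(games):
        winner, history = play_one_game()
        for player, moves in history.items():
            reward = 1.0 if player == winner else -1.0
            for state_move in moves:
                # nudge the value toward the outcome that followed this move
                values[state_move] += step * (reward - values[state_move])

train()
print(sorted(values.items()))
```

After enough games, the learned values should lean toward moves that land the total on 1, 4, 7, or 10, which happens to be the optimal line of play; nobody told the program that, it found it by losing to itself.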

A graph of AlphaGo Zero’s progress, in comparison to its predecessors.

Of course, AlphaGo isn’t much good for anything besides playing Go yet, just as our digit-reading network can’t read anything that isn’t a number. At our current stage of AI development, most AIs can only do one thing. But they are getting better and better at doing that one thing, and as AI developments filter out into computer science at large and become more mainstream, perhaps soon we will have a robot that can play Go and read a phone number at the same time.


Featured image credit to robohub.org
