Demystifying Artificial Intelligence
Artificial intelligence is the concept of machines making ‘smart’ decisions. Most modern artificial intelligence applies what is called machine learning: training a program to make accurate predictions when provided with a relevant dataset. The idea is that you supply your program with examples of the patterns you would like it to recognize. This can take time, computational energy, and lots of data.
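As a minimal sketch of that “train on data, then predict” idea (assuming scikit-learn is installed; the toy numbers below are purely illustrative, not from any real dataset):

```python
# A minimal sketch of "train on data, then predict".
# Assumes scikit-learn; the toy numbers are purely illustrative.
from sklearn.linear_model import LinearRegression

hours_studied = [[1], [2], [3], [4], [5]]   # the "relevant dataset"
exam_scores = [52, 61, 70, 79, 88]          # the pattern we want recognized

model = LinearRegression()
model.fit(hours_studied, exam_scores)       # training: learn the pattern
print(model.predict([[6]]))                 # predict a score for unseen input
```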
Machine learning is currently one of the most active fields of research, with hundreds of new academic publications appearing every month. Staying up to date on the latest optimizations for your back-propagation, activation function, and model architecture can be exhausting.
Machine learning is also a hot topic in science fiction. People like to speculate about an omniscient AI that will either take over the world and destroy humanity or help and perhaps work with us. The reality is far more modest. AI is good at predicting something when given data about similar, previous things. The world as a whole presents such a plethora of variables that the resulting data is incomprehensible to current machines; we can barely process individual genomes, let alone all of them.
About a year ago, I took Introduction to Machine Learning, a class offered at my university. I was excited about it: I knew that machine learning relies heavily on linear algebra, and I had taken Linear Algebra 1 and 2 in my undergraduate program. Linear algebra is all about matrices and their properties, from multiplying them to performing calculus with them. Because machine learning deals with so much data, the data is often represented in matrices of numbers. Each number is paired with a weight, a measure of how much effect that piece of data will have on the variable you are trying to predict.
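To make that concrete, here is a small sketch in plain NumPy (the numbers are made up for illustration): the data lives in one matrix, the weights in another, and a prediction is just a matrix product.

```python
import numpy as np

# Illustrative only: 4 samples, each described by 3 features.
X = np.array([[1.0, 2.0, 0.5],
              [0.3, 1.5, 2.2],
              [2.1, 0.1, 1.0],
              [0.0, 3.3, 0.7]])

# One weight per feature: how strongly that feature pushes the prediction.
w = np.array([0.8, -0.2, 1.5])

# Predictions for all samples at once via a matrix-vector product.
predictions = X @ w
print(predictions)   # shape (4,): one predicted value per sample
```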
In the machine learning class, we built an elementary convolutional neural network that classifies handwritten digits. The class demystified the term ‘artificial intelligence’. I learned about the network architecture, error rates, and how the program used data to create predictions and classifications. The math of machine learning is not too complicated; with a linear algebra background, it only required multivariable calculus (Calculus 3).
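The class used its own code, but a model of that kind looks roughly like the sketch below (assuming TensorFlow/Keras; the layer sizes are arbitrary choices of mine, not the course’s actual network):

```python
# A rough sketch of a small convolutional network for handwritten digits.
# Assumes TensorFlow/Keras; layer sizes are arbitrary, illustrative choices.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add a channel dimension, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),   # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
```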
While there are many different model architectures in machine learning, I believe the most critical concept to understand is back-propagation. The idea is that as data passes through your model and it makes predictions, the model needs some amount of feedback to improve those predictions. Hence, ‘training’. The model measures the error of its prediction and sends it back through the architecture so that the corresponding weights can be adjusted. This happens again and again until your model reaches its desired error rate.
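Here is a minimal, hedged illustration of that loop (the numbers and learning rate are mine): predict, measure the error, send its gradient back to the weight, adjust, and repeat. A single linear “neuron” keeps the math visible.

```python
import numpy as np

# Predict, measure the error, push its gradient back, adjust, repeat.
X = np.array([1.0, 2.0, 3.0, 4.0])   # inputs (made up)
y = 2.0 * X                          # targets: the true weight is 2
w = 0.0                              # start with a bad guess
lr = 0.05                            # learning rate

for step in range(50):
    pred = w * X                     # forward pass: make predictions
    error = pred - y                 # how wrong were we?
    grad = 2 * np.mean(error * X)    # gradient of mean squared error w.r.t. w
    w -= lr * grad                   # feedback: adjust the weight

print(w)                             # approaches 2.0 as the error shrinks
```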
One of the most pleasant discoveries in my machine learning education was AlphaGo. Go is one of the oldest strategy games in the world, and while the rules are simple, the game is incredibly complex. For this reason, many people thought there would never be a machine that could beat the top Go players of the world. AlphaGo is the name of an AI that aimed to do precisely that. There is a beautiful documentary on the story, free on YouTube, that I highly recommend. Maybe I’m a big nerd, but the film brought tears to my eyes.
For comparison, a chess game has about 35 possible moves each turn (the branching factor), and a typical game lasts about 80 moves (the depth). Thus, the number of possible move sequences in a typical chess game is roughly 35^80, or about 10^123, which is enormous. In a game of Go, there are about 250 possible moves each turn, and the depth of a game is usually around 150, so the number of possibilities is roughly 250^150, or about 10^360. The complexity of Go is therefore hundreds of orders of magnitude greater than that of chess. For perspective, it was not until 1996 that a computer, IBM’s Deep Blue, beat the reigning world chess champion, Garry Kasparov, in a game under tournament conditions.
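A quick way to sanity-check those rough figures is with logarithms, since the raw values overflow ordinary floats (the branching factors and depths are the estimates quoted above, not exact counts):

```python
import math

# Rough game-tree sizes: branching_factor ** depth, compared in log10 terms.
chess = 80 * math.log10(35)    # ~123, i.e. chess is roughly 10**123 sequences
go = 150 * math.log10(250)     # ~360, i.e. Go is roughly 10**360 sequences

print(f"chess ~ 10^{chess:.0f}, Go ~ 10^{go:.0f}")
print(f"Go is about 10^{go - chess:.0f} times larger")
```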
I grew up playing chess with my father early every morning and had consequently built a love for strategy games. After watching the AlphaGo documentary, I got myself a Go board and started playing with my roommate every morning. Since then, I have fallen in love with Go. It’s a beautiful, ancient game, and its wisdom is often passed down in proverbs. If any readers care to learn or share a game, I’ll link my OGS (Online Go Server) account below. I’m happy to play or teach people of any skill level.
- Build a Convolutional Neural Network in Five Minutes
- Play Go with me
- The Math of Neural Networks
- AlphaGo Documentary
- CS 445 resources (including code)