Pedro Domingos: The Master Algorithm (2015)

It's been nearly 200 years since Babbage designed his engines, and around 60 years since the first serious forays into developing intelligent electronic machines, and we finally seem to have come to a point in history where machine learning is coming of age. The history of AI cautions against excessive optimism, but the successes of machine learning are becoming impossible to ignore. Machine learners have steadily encroached on areas of intelligence previously thought to be accessible only to the human mind. The lowlands of human thought are rapidly being filled by the flood of artificial intelligence, and there's little reason to believe the rising tide will stop any time soon. There is hardly a field of academic research or sector of the economy that has not been affected in some way by machine learning, and more radical changes loom. Advances in neuroscience, along with a torrent of machine learning research, hint at a general learning algorithm that would make all other advances in human science seem trivial in comparison. To repeat I. J. Good's oft-quoted phrase, "the first ultraintelligent machine is the last invention that man need ever make". That, in essence, is what Pedro Domingos' book is about.

It starts off with a broad overview of how machine learning algorithms (he calls them learners) are omnipresent in modern society. Algorithms help decide what you see on your social media news feed, which videos you watch, which products you buy, which stocks end up in your portfolio, which ads you see, which political candidates you vote for and who you decide to date. I don't think it's an exaggeration to say that machine learning has revolutionized almost every aspect of how we live our day-to-day lives.

And that's just the beginning.


Domingos divides machine learning research into five 'tribes' and spends some time discussing each of them in detail: the Symbolists, the Connectionists, the Evolutionaries, the Bayesians, and the Analogizers.

The Symbolists advocate a rule-based approach to learning, and use logical inference as their main tool. This was a common tactic with the expert systems of the '80s, and it still has a large place in automated systems today. A big problem every machine learning algorithm has to deal with when making sense of a large dataset is the vast space of possible interpretations of that data, and the Symbolists have a few effective ways of coping with it. If you want to build a system that can diagnose an illness, you might go about it by forming a decision tree. Does the person have a rash? If so, does that person also have a cold? And so on. Anyone who's played the game 20 questions can probably see why this approach might be an effective way of building intelligent systems. The downside is that these systems are costly and time-consuming to construct, and they lack a certain degree of generality. Domingos doesn't seem to be a fan of this school of research, and for the most part he quickly passes it by so he can move on to other methods. Rule-based inference, for the most part, does not seem like a particularly effective way of interpreting data, and it certainly isn't how human brains interpret data. For a more biologically inspired learning method we have the Connectionists.
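To make the 20-questions picture concrete, here's a minimal sketch of a hand-built decision tree in Python. The symptoms and "diagnoses" are invented purely for illustration and aren't taken from the book; a real expert system would encode far more rules, which is exactly why they're costly to build.

```python
# A toy, hand-built decision tree in the spirit of the Symbolists' rule-based
# approach: a chain of yes/no questions leading to a conclusion.
# All symptoms and diagnoses below are made up for illustration.

def diagnose(has_rash: bool, has_fever: bool, has_cough: bool) -> str:
    """Walk a fixed tree of rules, 20-questions style."""
    if has_rash:
        if has_fever:
            return "possible measles -- see a doctor"
        return "possible allergic reaction"
    if has_cough:
        if has_fever:
            return "possible flu"
        return "possible common cold"
    return "no diagnosis from these rules"

print(diagnose(has_rash=False, has_fever=True, has_cough=True))  # "possible flu"
```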

If you've heard any of the recent headlines about DeepMind's AlphaGo beating the world Go champion, you've heard about connectionism. Connectionists are the practitioners of neural networks--artificial simulations of the sort of neurons we see in animal brains. Since human-level intelligence has so far only been exhibited by human brains, a natural approach to programming intelligence is to reverse engineer the human brain. People have been programming neural networks for about as long as programmable computers have been around--the first attempts were made in the '50s--and despite a steady following of researchers ever since, it is only recently that their full potential has begun to be realized. This is due in part to the fact that processors today are more powerful than ever, but also because the Internet is a rich sea of data we can use to train our networks. Still, neural nets, despite their amazing versatility, are quite cumbersome and are not exact representations of the human brain. It's not even clear whether an exact simulation of how the brain works would be an efficient computational model. They also take a lot of computing power to train, so for certain situations they are not ideal.
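A sense of what "training a network" actually means can be had from a toy example, nothing like the scale of AlphaGo. The sketch below is a tiny two-layer network learning the XOR function with plain gradient descent; the layer sizes, learning rate, and iteration count are arbitrary choices for illustration.

```python
# A minimal feed-forward neural network trained with backpropagation on XOR.
# Only NumPy is used; everything here is a toy illustration of the core loop:
# forward pass -> measure error -> nudge the weights.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units; weights start random, biases at zero.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 1.0
for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should end up close to [[0], [1], [1], [0]]
```

The whole trick is that nothing in the loop knows what XOR is; the weights simply drift toward whatever reduces the error, which is why the same machinery scales (with far more layers and data) to images, speech, and Go positions.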

Another biologically inspired approach to machine learning is evolutionary computing. The idea behind it is sound: human intelligence arose through evolutionary processes, so why can't we write programs that 'evolve' through a sort of Darwinian selection process? If we select for the attributes we want a program to have, say, driving safely in the case of a self-driving car, and reject the attributes we don't want it to have, say, crashing into stop signs, we can eventually produce a program that will be a good driver. In theory, at least. The problem, of course, is that human intelligence took hundreds of millions of years to evolve to where it is now, and researchers don't have that kind of time to spend. It also seems to be a bit of a fluke that intelligence arose at all, considering all the time in prehistory when it wasn't around. Evolution is a tricky thing to recreate, and the successes of evolutionary computing have been overshadowed by other methods. It remains to be seen whether they will one day come into the spotlight the way connectionism has in recent years.
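The select-and-mutate loop itself is simple enough to show in a few lines. Here's a bare-bones genetic algorithm that "evolves" a random string toward a target; the target string, population size, and mutation rate are arbitrary toy choices, and real evolutionary computing works on program structures rather than strings.

```python
# A bare-bones genetic algorithm: keep the fittest candidates, mutate them,
# repeat. The target and parameters are invented for illustration.
import random

TARGET = "SAFE DRIVER"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
random.seed(0)

def fitness(candidate: str) -> int:
    """Number of characters that already match the target (higher is better)."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    """Randomly change each character with a small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from a fully random population.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    # Selection: keep the top 20, refill the population with their mutants.
    survivors = population[:20]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(80)]

print(generation, population[0])  # best candidate found, and when
```

Even this toy needs hundreds of generations for an eleven-character string, which hints at why evolving something as complicated as a safe driver is so much harder than it sounds.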

The Bayesians are mostly involved in applying Bayes' theorem to machine learning problems. Thomas Bayes, an 18th-century English statistician and clergyman, formulated one of the most famous theorems in probability theory. His theorem tells us how to update the probability of one event in light of evidence about another. Applications of this fairly simple theorem are among the most popular approaches to machine learning and still seem to be the most widely used. While this approach isn't as biologically inspired as the other methods, Bayesian inference does represent something our brains have to do to interact intelligently with the world. Bayesian methods are very robust and can be applied to a wide range of real-world problems. Things do tend to get a bit hairy when dealing with problems that have a lot of variables, but with the right simplifying assumptions this has proved to be a very powerful framework.
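The theorem itself is one line: P(A|B) = P(B|A)·P(A) / P(B). A worked example shows why it matters; the numbers below are made up for illustration, not taken from the book.

```python
# Bayes' theorem on an invented diagnostic example: how likely is the disease
# given a positive test result?

prior = 0.01           # P(disease): 1% of people have it
sensitivity = 0.95     # P(positive | disease)
false_positive = 0.05  # P(positive | no disease)

# P(positive), by the law of total probability.
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # about 0.161 -- far lower than the test's accuracy suggests
```

The counterintuitive result (a "95% accurate" test yielding only a 16% chance of disease) is exactly the kind of belief-updating our intuitions get wrong and Bayesian learners get right.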

The Analogizers, as Domingos calls them, aren't as unified as the other tribes. Human learning could perhaps best be characterized as learning by analogy: we learn new concepts by associating them with concepts we already know. It turns out that statistical formulations of this kind of learning are also quite effective, as can be seen in the recommendation systems built by companies like Netflix and Amazon. Truth be told, it wasn't entirely clear to me how this approach differs significantly from the others already discussed, but I suppose Domingos felt the distinction was necessary.
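The simplest statistical form of "similar cases get similar answers" is the nearest-neighbour method, which is the flavour of analogy I have in mind here. The sketch below classifies a new user by looking at the most similar users already seen; the "users", their viewing habits, and the labels are all invented for illustration.

```python
# Learning by analogy in its simplest statistical form: k-nearest neighbours.
# A new case gets the majority label of the most similar known cases.
import math
from collections import Counter

# Each known user: (feature vector, label). The features might be, say,
# weekly hours of comedy and horror watched. All values are made up.
known = [
    ((9.0, 1.0), "comedy fan"),
    ((8.5, 0.5), "comedy fan"),
    ((1.0, 7.0), "horror fan"),
    ((0.5, 9.0), "horror fan"),
    ((2.0, 8.0), "horror fan"),
]

def predict(features, k=3):
    """Label a new user by majority vote among the k most similar known users."""
    by_distance = sorted(known, key=lambda item: math.dist(features, item[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

print(predict((7.0, 2.0)))  # "comedy fan" -- the closest analogues watch comedy
```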

He then goes on to discuss a hodgepodge of other learning strategies that draw influence from human psychology, notably reinforcement learning. Reinforcement learning agents choose between possible actions in an environment based on the expected future reward of each action. If pressing a button rewards a reinforcement learning agent, the agent will quickly learn to spend as much of its time pressing that button as possible. Reinforcement learning by itself doesn't generalize especially well outside of a narrow domain, but when combined with neural networks it can be a powerful approach to machine learning problems.
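To show what "expectation of future rewards" looks like in code, here is a sketch of tabular Q-learning on a tiny made-up "corridor" world: five states, the agent can move left or right, and only reaching the last state pays a reward. The environment and all parameters are toy choices for illustration.

```python
# Tabular Q-learning on a toy corridor: states 0..4, reward only at state 4.
# The point is the update rule -- expected future reward pulls the agent
# toward the goal even though most steps pay nothing.
import random

N_STATES, ACTIONS = 5, (-1, +1)   # -1 = move left, +1 = move right
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Update: nudge the estimate toward reward + discounted best future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# Learned policy: move right (+1) in every non-terminal state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

Swap the lookup table for a neural network that estimates the Q-values and you have, in spirit, the deep reinforcement learning behind systems like AlphaGo.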


The final chapter of the book is about the future of machine learning, and it is full of the typical sort of predictions that authors of this type of book seem to love. Personally I feel that most of what Domingos has to say in this chapter has been dealt with better by other authors, but I digress. Machine learning will certainly have a huge impact on every aspect of our lives. In a very real way, you are the data you create. Other people build models of you based on your behavior, just as Netflix builds models of your movie-viewing habits. The reason so many companies are racing to collect your data is that data is powerful: it influences the choices we make at an individual level and thus influences the whole of our society. This is a scary thought to some, and it is certainly something that human rights organizations and governments should be taking into consideration. On the flip side, data is being democratized very quickly--a fact that is helping facilitate positive social change faster than ever before in the history of civilization. Some are worried about machines achieving human levels of intelligence, but I think those fears are misguided. It is far more advantageous for us to consider what new avenues of possibility have opened up than to dwell on the roads we will leave behind. To quote Domingos: "The real history of automation is not what it replaces but what it enables."

Overall, the book is a good general overview of the research being done in machine learning today. It does not, however, substitute for a more serious mathematical treatment of the subject, and those wishing to delve deeper would be better served elsewhere. Still, as a light introduction, this book is worth looking into.