Deep Learning Illustrated
Written by Mike James   

Authors: Jon Krohn, Grant Beyleveld and Aglaé Bassens
Publisher: Addison-Wesley
Date: September 2019
Pages: 416
ISBN: 978-0135116692
Print: 0135116694
Kindle: B07W585JGG
Audience: Python developers interested in deep learning techniques
Rating: 3.5
Reviewer: Mike James
A picture is worth a thousand words, so "illustrated AI" sounds good.

The most important thing to say right from the start is that this might not be the sort of "illustrated" book you are expecting. There have been a few high-profile comic-book style introductions to AI and other technical topics - see Google Comics Factory Makes ML Easy. This is not that sort of book. There may be 100 illustrations, but that is not many more than you might find in almost any book on the subject, and quite a bit of the time they don't help you understand the material any more than a technical diagram would - indeed, many are just technical diagrams. The illustrations are often of people or stereotyped objects. Personally, I can't say that seeing a sketch of a pioneer in the field is very important to me when I'm trying to learn the ideas - but you may be different. Even so, nearly all the people illustrated are alive and a photo would have been just as good, perhaps better.


So this isn't a particularly "visual" introduction to deep learning, but how does it do otherwise? It starts off in a fairly traditional way with four chapters giving an overview of AI. It looks at the history, from the early biological investigations into the neuron and the way the brain works, through to more recent work - LeNet-5, AlexNet and so on. You get a flavour of what happened and it's all true, but it doesn't really emphasize the fact that the early explorers were on the right track - they just needed more data and bigger machines. It does mention ImageNet and the role it played as a big database, but blink and you could miss the fact that nothing much has changed apart from the data and the processing power.

The next chapter looks at natural language processing, which isn't a good place to start if you are trying to get across the ideas of deep learning. Then we have a chapter on art and AI and, of course, adversarial networks. Finally, we have a chapter on game playing and reinforcement learning in particular. This is a reasonable account of what has happened, but if you don't know the theory you are going to be hard pushed to see what is going on.

The second part is called Essential Theory Illustrated and it starts off with an implementation of a digit-recognition network using Keras. All fun, but again if you don't know the theory you are simply following orders. Then the Perceptron is introduced - surely this should have been in the first section - followed by other neuron models. The following chapter shows you how these neurons may be organized into layers. No mention, that I could find, of why we need non-linear activation functions to make multilayer networks work at all - without them a stack of layers collapses into a single linear layer. Next we come to training, which first introduces cost functions, then gradient descent and then back-propagation, as if it were some new technique based on the calculus chain rule. My way of thinking about it is that back-propagation is the application of gradient descent to the multilayer network. It was difficult to work out how to do it, but the back-propagation algorithm is just an application of gradient descent, not some wonder algorithm. This book leads you to believe that you could somehow apply gradient descent in some other way - and you can't. The final chapter in the section is on improving models, and here we meet regularization via dropout and so on. Nothing about cross-validation at all.
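To make that territory concrete, here is a minimal sketch - my own, not the book's code, and assuming the tf.keras API - of the sort of Keras digit recognizer the section describes. It pulls together the ideas in one place: a non-linear activation, a dropout layer, a cost function and gradient descent, with back-propagation doing the chain-rule bookkeeping behind the scenes:

import tensorflow as tf

# Load MNIST digits and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    # Non-linear activation - without it stacked Dense layers
    # collapse to a single linear map
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),  # the dropout regularization the book covers
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Cost function plus gradient descent; back-propagation is just the
# chain rule computing the gradients that SGD then follows
model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))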

The final section is on applications and it's really section one repeated, but this time with real examples. The first chapter is about vision and at last we reach a good description of convolutional networks. The next chapter goes over natural language processing and introduces recurrent networks, but not BERT, which is difficult and recent. Next we have GANs in more detail and finally reinforcement learning, with the complexities of Markov decision processes and Q-learning.
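For anyone wondering what that last topic amounts to, the heart of tabular Q-learning is a one-line update rule. This is my own illustrative sketch, not code from the book:

import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration rate
Q = defaultdict(float)                   # Q[(state, action)] -> value estimate

def q_update(state, action, reward, next_state, actions):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def choose_action(state, actions):
    # Epsilon-greedy policy: mostly exploit the best-known action,
    # occasionally explore a random one
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])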

The final chapter is an overview of what to do to get your project off the ground - some deep learning libraries and so on. All useful, but the sort of thing you can find out with a search.

So what is the verdict? There is no particular reason to highlight "illustrated" in the title of this book - it has illustrations, yes, but so do other books on the subject. There is math, but not enough for you to understand the foundations, just enough for you to follow the code. There are also some big blind spots in the book. There is no attempt to explain why deep networks are more useful than big shallow networks. Nothing about autoencoders, which are the basis of feature extraction. Nothing about cross-validation or the many ways of evaluating a model, and nothing about the failings of networks, i.e. adversarial examples.

At the end of the day, however, this isn't a bad book on the subject - it just isn't a good one. Buy a copy if you want a code-based introduction to some practical deep learning. But then master the math and read something deeper.

 
