Quick Start Guide to Large Language Models
Author: Sinan Ozdemir

This is a good book, but only if you are the right reader. We all know about Large Language Models and how they are taking over many tasks that were once the sole province of humans. We also know that getting in on the act might be a good way to make sure that we have jobs in the future - not to mention that it might be interesting and fun - but how? Even if you have studied the math behind the idea, you might still be wondering what it's all about. The point is that LLMs have become tools that you can use with only a little idea of how they actually work or were created - do you need to know how a hammer was made to knock in a nail? What is more, even if you do know the math and theory behind current LLMs, you might still be puzzled by the range of possible ways of using off-the-shelf LLMs in your own projects. Practical use connects with theory only at a few points, such as training and the issues of overfitting and generalization.

In short, this book gives you a good idea of what the modern use of pretrained LLMs looks like. If you are looking for details of how LLMs work and want to dig deep into their theory, you are going to be disappointed and possibly chasing the wrong information.

The book starts with an overview of LLMs and a very rough idea of how they work. Don't expect to understand attention or transformers after reading this, but you will know that BERT, GPT, ChatGPT and T5 exist and what they can do. From here we move on to using pretrained models - semantic search and prompt engineering.

The second part of the book is titled "Getting the Most Out of LLMs", which really means fine-tuning and advanced prompt engineering. It ends with an example - a recommendation system.

Part Three is called "Advanced LLM Usage". This is possibly more advanced than you want to be, but it is really interesting. Its first part deals with different types of model, including vision transformers and reinforcement learning.
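To give a flavor of the semantic search material mentioned above, here is a minimal sketch of ranking documents by cosine similarity of embeddings. The vectors and titles below are toy values invented for illustration, not from the book; in real use the embeddings would come from a pretrained model such as BERT.

```python
# Toy semantic search: rank documents by cosine similarity to a query.
# The 4-d embeddings here are made up; a pretrained model would supply them.
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical document embeddings and a query embedding.
docs = {
    "intro to transformers": np.array([0.9, 0.1, 0.0, 0.2]),
    "cooking with cast iron": np.array([0.0, 0.8, 0.6, 0.1]),
    "fine-tuning BERT":      np.array([0.8, 0.2, 0.1, 0.3]),
}
query = np.array([0.85, 0.15, 0.05, 0.25])

# Rank documents by similarity to the query, best first.
ranked = sorted(docs.items(), key=lambda kv: cosine_sim(query, kv[1]), reverse=True)
for title, _ in ranked:
    print(title)
```

The idea scales from this toy loop to a vector index over thousands of documents; only the source of the embeddings changes.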
Next we look at fine-tuning LLMs with examples - Anime classification using BERT, LaTeX generation using GPT-2, and a niche chatbot. The final chapter discusses getting an LLM into production, both closed and open source.

As long as your intent is to use LLMs rather than to learn their deep theory so that you can develop the next breakthrough, then this is a book you need. It is fairly practical, low on hype and deals with topics in a way that others simply gloss over. It's not perfect - I have to say its layout is terrible - but it is worth reading.

For more recommendations of books on Deep Learning see AI Books To Inspire You in our Programmer's Bookshelf section. To keep up with our coverage of books for programmers, follow @bookwatchiprog on Twitter or subscribe to I Programmer's Books RSS feed for each day's new addition to Book Watch and for new reviews.
Last Updated (Tuesday, 23 April 2024)