Google Releases Gemma Open Models
Written by Kay Ewbank   
Wednesday, 28 February 2024

Google has released a set of lightweight open models that have been built from the same research and technology used to create Google's recent Gemini models.

The models in Gemma are text-to-text, decoder-only large language models, available in English, with open weights and both pre-trained and instruction-tuned variants.


The Gemma team says its models share technical and infrastructure components with Gemini, which enables the two sizes being introduced, Gemma 2B and 7B, to perform well for their size compared with other open models. Gemma models are capable of running directly on a developer's laptop or desktop computer.

The Gemma team says both model sizes are being released with pre-trained and instruction-tuned variants. The release is accompanied by a new Responsible Generative AI Toolkit that provides guidance and essential tools for creating safer AI applications with Gemma. The toolkit has resources for applying best practices for responsible use of open models, including guidance on setting safety policies, safety tuning, safety classifiers and model evaluation. It also includes the Learning Interpretability Tool (LIT), which can be used to investigate Gemma's behavior and address any potential issues.

Google is also providing toolchains for inference and supervised fine-tuning (SFT) across frameworks including JAX, PyTorch, and TensorFlow through native Keras 3.0. There are ready-to-use Colab and Kaggle notebooks, and the software is integrated with tools such as Hugging Face, MaxText, NVIDIA NeMo and TensorRT-LLM.
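
As an illustration of that route, loading the 2B model through KerasNLP on a Keras 3 backend might look something like the sketch below. The keras_nlp package, the gemma_2b_en preset name and the Kaggle licence-acceptance step are assumptions based on the release materials rather than details given in the announcement, so treat it as a starting point rather than a definitive recipe.

import os
os.environ["KERAS_BACKEND"] = "jax"  # any Keras 3 backend should work: "jax", "tensorflow" or "torch"

import keras_nlp

# Download the pre-trained 2B weights; assumes the Gemma licence has been accepted on Kaggle
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")

# Generate a short completion from the base (pre-trained) model
print(gemma_lm.generate("The best thing about open model weights is", max_length=64))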

Google says Gemma is optimized across several AI hardware platforms including NVIDIA GPUs and Google Cloud TPUs. The cloud optimization comes via Vertex AI, which Google describes as providing a broad MLOps toolset with a range of tuning options and one-click deployment using built-in inference optimizations. Advanced customization is available with fully managed Vertex AI tools or with self-managed GKE (Google Kubernetes Engine).

Alongside the models themselves, several implementations are available on GitHub: an official PyTorch implementation of the models; a lightweight, standalone C++ inference engine for the Gemma foundation models; and an inference implementation with examples, based on Flax and JAX.
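
For developers who prefer the Hugging Face integration mentioned above, running Gemma in PyTorch might look something like the following sketch. The model id google/gemma-2b and the requirement to authenticate with an account that has accepted the Gemma licence are assumptions, not details from Google's announcement.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # assumed Hugging Face Hub id for the 2B pre-trained model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Tokenize a prompt, generate a short completion and decode it back to text
inputs = tokenizer("Explain what a decoder-only language model is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))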

UPDATE: Google Gemma is available on Kaggle


More Information

Google Gemma

Gemma on Kaggle

PyTorch Implementation Of Gemma Models On GitHub

Lightweight C++ Inference Engine On GitHub

Inference Implementation Based on Flax and JAX

Related Articles

Google Rebrands Bard With Subscription

Google Adds Gemini To Bard

Google Adds Code Generation To Bard

