Google Updates Responsible AI Toolkit
Written by Kay Ewbank   
Friday, 01 November 2024

Google has announced updates to the Responsible Generative AI Toolkit to enable it to be used with any LLM. The Responsible GenAI Toolkit provides resources to design, build, and evaluate open AI models, and it can now work with any LLM, whether it's Gemma, Gemini, or any other model. 

The toolkit provides guidance and essential tools for creating safer AI applications, originally with Gemma but now more generally. It has resources for applying best practices for responsible use of open models, including guidance on setting safety policies, safety tuning, safety classifiers, and model evaluation. It also includes the Learning Interpretability Tool (LIT), which can be used to investigate an LLM's behavior and to address any potential issues.


The updated version adds SynthID Text, a tool for watermarking and detecting AI-generated content. The thinking is that it is increasingly difficult to tell whether a text was written by a human or generated by AI, so SynthID Text lets you watermark text generated by your GenAI product and later detect it. It works by embedding a digital watermark directly into the AI-generated text. SynthID Text is open source, and it is accessible to all developers through Hugging Face and the Responsible GenAI Toolkit.

The new version also adds a tool for refining your prompts with LLM assistance, built on the Model Alignment library. Model Alignment is a Python library from the PAIR team that enables users to create model prompts through user feedback instead of manual prompt writing and editing. The technique uses constitutional principles to align prompts with users' desired values. The PyPI library ships with two APIs: one that works with single-run prompts, and a second that uses labeled training data to automatically create a prompt based on principles derived from that data.
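The principle-driven approach can be sketched roughly as follows. This is a hedged conceptual illustration, not the Model Alignment library's actual API: in the real library an LLM critiques examples and proposes principles, whereas here the principles are derived mechanically from labeled (text, verdict, reason) examples and folded into the prompt.

```python
def derive_principles(labeled_examples):
    """Derive simple 'constitutional' principles from labeled examples.
    Each example is a (text, verdict, reason) tuple; 'bad' verdicts become
    prohibitions, 'good' verdicts become requirements."""
    principles = []
    for _text, verdict, reason in labeled_examples:
        rule = f"Responses should {'not ' if verdict == 'bad' else ''}{reason}."
        if rule not in principles:  # keep each principle once
            principles.append(rule)
    return principles

def align_prompt(base_prompt, principles):
    """Fold the derived principles into the prompt as an explicit instruction list."""
    bullet_list = "\n".join(f"- {p}" for p in principles)
    return f"{base_prompt}\n\nFollow these principles:\n{bullet_list}"
```

The point of the technique is that users supply feedback on concrete outputs, and the system turns that feedback into reusable principles that steer the prompt, rather than the user hand-editing prompt text.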

Another enhancement makes prompt debugging easier and faster, thanks to an improved deployment experience for the Learning Interpretability Tool (LIT) on Google Cloud. Developers can now use LIT's new model server container to deploy any Hugging Face or Keras LLM with support for generation, tokenization, and salience scoring on Cloud Run GPUs.
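Salience scoring, one of the capabilities mentioned above, estimates how much each input token contributes to the model's output. A minimal sketch of one common approach, leave-one-out salience, is shown below; this is a generic illustration, not LIT's implementation, and `score_fn` is a made-up stand-in for a model's scoring of the target output given the input.

```python
def token_salience(tokens, score_fn):
    """Leave-one-out salience: how much the score drops when each token is removed.
    `score_fn` stands in for a model's score (e.g. log-probability of the target
    output) given a list of input tokens. Higher salience = more influential token."""
    base = score_fn(tokens)
    return [base - score_fn(tokens[:i] + tokens[i + 1:]) for i in range(len(tokens))]
```

In a debugging UI such a per-token score is typically rendered as a heatmap over the prompt, showing which words the model's behavior hinges on.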

The updated version of the toolkit is available now. 


More Information

Responsible GenAI Toolkit

PyPI Model Alignment Library

Related Articles

Google Releases Gemma Open Models

Google Rebrands Bard With Subscription

Google Adds Gemini To Bard

Google Adds Code Generation To Bard

