Meta Releases AI Safety Tools
Written by Kay Ewbank   
Monday, 01 January 2024

Meta has released open source tools for checking the safety of generative AI models before they are used publicly. The interestingly named Purple Llama is an umbrella project featuring open trust and safety tools and evaluations that Meta says is meant to level the playing field for developers to responsibly deploy generative AI models and experiences in accordance with best practices.

The first group of tools comprises CyberSec Eval, a set of cybersecurity safety evaluation benchmarks for LLMs, and Llama Guard, a safety classifier for input/output filtering that is optimized for ease of deployment.


If you're thinking Purple Llama has worrying overtones of Barney the Purple Dinosaur, relax. Meta believes that mitigating the challenges that generative AI presents means taking both attack (red team) and defensive (blue team) postures. Purple teaming, composed of both red and blue team responsibilities, is a collaborative approach to evaluating and mitigating potential risks. So now you know.

CyberSecEval is a benchmark developed to help bolster the cybersecurity of Large Language Models (LLMs) employed as coding assistants. It provides a thorough evaluation of LLMs in two crucial security domains: their propensity to generate insecure code and their level of compliance when asked to assist in cyberattacks. In a paper on the benchmark, Meta researchers described a case study involving seven models from the Llama 2, Code Llama, and OpenAI GPT large language model families, in which CyberSecEval pinpointed key cybersecurity risks along with practical insights for refining these models. A significant observation from the study was the tendency of more advanced models to suggest insecure code.
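To give a feel for what "propensity to generate insecure code" means in practice, here is a deliberately toy sketch of the kind of static rule matching such a benchmark can rest on. The rule names, patterns, and scoring below are invented for illustration; CyberSecEval itself ships a much larger, curated detector rather than these three regexes.

```python
import re

# Toy static rules flagging insecure patterns in model-generated Python.
# Purely illustrative - not CyberSecEval's actual rule set.
INSECURE_PATTERNS = {
    "weak-hash": re.compile(r"\bhashlib\.md5\b"),
    "shell-injection": re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
    "unsafe-deserialization": re.compile(r"\bpickle\.loads?\b"),
}

def scan_completion(code: str) -> list[str]:
    """Return the names of the rules triggered by one generated snippet."""
    return [name for name, pat in INSECURE_PATTERNS.items() if pat.search(code)]

def insecure_rate(completions: list[str]) -> float:
    """Fraction of completions that trip at least one rule -
    the shape of metric a benchmark reports per model."""
    flagged = sum(1 for c in completions if scan_completion(c))
    return flagged / len(completions) if completions else 0.0
```

Running many completions per model through a detector like this, then comparing the flagged fraction across model families, is essentially how an insecure-code propensity score is produced.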

Llama Guard, the second tool, is an LLM-based input-output safeguard model geared towards human-AI conversation use cases. The model classifies a specific set of safety risks found in LLM prompts (i.e., prompt classification). Llama Guard is instruction-tuned on a dataset collected by Meta. It functions as a language model, carrying out multi-class classification and generating binary decision scores.
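Because Llama Guard is itself a language model, using it means writing the safety taxonomy into the prompt and then parsing the text it generates back into a binary decision. The sketch below shows that general pattern; the category names and template wording are invented for illustration, and the real model card defines the exact taxonomy and prompt format.

```python
# Illustrative driver for an LLM-based safety classifier in the style of
# Llama Guard. The taxonomy and prompt text here are made up.
CATEGORIES = {
    "O1": "Violence and Hate",
    "O2": "Criminal Planning",
    "O3": "Self-Harm",
}

def build_prompt(user_message: str) -> str:
    """Embed the safety policy and the conversation into one prompt."""
    policy = "\n".join(f"{cid}: {name}" for cid, name in CATEGORIES.items())
    return (
        "Task: Check whether the 'User' message below contains unsafe "
        "content according to the policy categories.\n"
        f"<BEGIN CATEGORIES>\n{policy}\n<END CATEGORIES>\n"
        f"<BEGIN CONVERSATION>\nUser: {user_message}\n<END CONVERSATION>\n"
        "Answer 'safe', or 'unsafe' followed by the violated category ids."
    )

def parse_verdict(generation: str) -> tuple[bool, list[str]]:
    """Turn the model's free-text reply into (is_safe, violated_ids)."""
    lines = generation.strip().splitlines()
    if lines and lines[0].strip().lower() == "safe":
        return True, []
    ids = lines[1].split(",") if len(lines) > 1 else []
    return False, [i.strip() for i in ids]
```

The generation step itself (feeding `build_prompt(...)` to the model and collecting its reply) is omitted; the point is that "multi-class classification with binary decision scores" here is literally text generation plus parsing.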


More Information

Welcome to Purple Llama

CyberSec Eval

Llama Guard

Related Articles

Meta Releases Buck2 Build System

Meta Builds AI Supercomputer

Meta Announces Conversational AI Project




Last Updated ( Monday, 01 January 2024 )