Meta Releases AI Safety Tools
Written by Kay Ewbank   
Monday, 01 January 2024

Meta has released open source tools for checking the safety of generative AI models before they are deployed publicly. The interestingly named Purple Llama is an umbrella project featuring open trust and safety tools and evaluations which, Meta says, is meant to level the playing field for developers to deploy generative AI models and experiences responsibly, in accordance with best practices.

The first group of tools being released comprises CyberSec Eval, a set of cybersecurity safety evaluation benchmarks for LLMs, and Llama Guard, a safety classifier for input/output filtering that is optimized for ease of deployment.


If you're thinking Purple Llama has worrying overtones of Barney the Purple Dinosaur, relax. Meta believes that mitigating the challenges that generative AI presents means taking both attack (red team) and defensive (blue team) postures. Purple teaming, composed of both red and blue team responsibilities, is a collaborative approach to evaluating and mitigating potential risks. So now you know.

CyberSecEval is a benchmark developed to help bolster the cybersecurity of Large Language Models (LLMs) employed as coding assistants. It provides a thorough evaluation of LLMs in two crucial security domains: their propensity to generate insecure code and their level of compliance when asked to assist in cyberattacks. In a paper on the benchmark, Meta researchers described a case study involving seven models from the Llama 2, Code Llama, and OpenAI GPT large language model families, in which CyberSecEval pinpointed key cybersecurity risks along with practical insights for refining these models. A significant observation from the study was the tendency of more advanced models to suggest insecure code.
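To give a feel for the first of those domains, here is a minimal, purely illustrative sketch of the kind of check such a benchmark automates: scanning LLM-generated code samples for known insecure patterns and reporting the fraction flagged. The pattern list and the insecure_rate function are hypothetical simplifications for this article, not CyberSecEval's actual rules.

```python
import re

# Hypothetical simplification of an insecure-code check: flag generated
# code samples that match known-risky patterns, then report the rate.
INSECURE_PATTERNS = {
    "strcpy": re.compile(r"\bstrcpy\s*\("),         # unbounded C string copy
    "os.system": re.compile(r"\bos\.system\s*\("),  # shell injection risk
    "eval": re.compile(r"\beval\s*\("),             # arbitrary code execution
}

def insecure_rate(samples: list[str]) -> float:
    """Return the fraction of code samples containing a flagged pattern."""
    if not samples:
        return 0.0
    flagged = sum(
        1 for code in samples
        if any(pattern.search(code) for pattern in INSECURE_PATTERNS.values())
    )
    return flagged / len(samples)

samples = [
    "strcpy(buf, user_input);",                      # flagged
    "strncpy(buf, user_input, sizeof(buf) - 1);",    # not flagged
]
rate = insecure_rate(samples)  # 0.5: one of the two samples is flagged
```

A real benchmark run would apply far richer static-analysis rules across many languages and many model completions, but the shape of the result is the same: a per-model rate of insecure suggestions that can be compared across model families.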

Llama Guard, the second tool, is an LLM-based input-output safeguard model geared towards Human-AI conversation use cases. The model incorporates a tool for categorizing a specific set of safety risks found in LLM prompts (i.e., prompt classification). Llama Guard is instruction-tuned on a dataset collected by Meta. It functions as a language model, carrying out multi-class classification and generating binary decision scores.
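As a rough illustration of that input-output flow, the sketch below shows how the verdict generated by a Llama Guard-style classifier might be parsed into a binary safe/unsafe decision plus violated category codes. The stub_generate function is a hypothetical stand-in for a real model call, and the keyword list and "O3" category code are invented for the example.

```python
# Illustrative only: a Llama Guard-style safeguard generates a short text
# verdict ("safe", or "unsafe" followed by category codes), which the
# caller parses into a binary decision.

def stub_generate(conversation: list[dict]) -> str:
    """Hypothetical stand-in for a real safeguard model's generate() call."""
    banned = ("steal", "weapon")  # invented keywords for this sketch
    text = " ".join(turn["content"].lower() for turn in conversation)
    return "unsafe\nO3" if any(word in text for word in banned) else "safe"

def classify(conversation: list[dict]) -> tuple[bool, list[str]]:
    """Parse the model's verdict into (is_safe, violated_categories)."""
    lines = stub_generate(conversation).strip().splitlines()
    is_safe = lines[0].strip().lower() == "safe"
    categories = lines[1].split(",") if not is_safe and len(lines) > 1 else []
    return is_safe, categories

ok, cats = classify([{"role": "user", "content": "How do I bake bread?"}])
# ok is True, cats is []
```

In a real deployment the same classify wrapper would sit in front of both the user's prompt and the model's response, filtering either direction of the conversation.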


More Information

Welcome to Purple Llama

CyberSec Eval

Llama Guard

Related Articles

Meta Releases Buck2 Build System

Meta Builds AI Supercomputer

Meta Announces Conversational AI Project

To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Twitter, Facebook or Linkedin.






Last Updated ( Monday, 01 January 2024 )