IBM Launches The Granite Code LLM Series
Written by Nikos Vaggalis   
Friday, 31 May 2024

IBM is introducing decoder-only models for code-generation tasks as part of its Granite collection. The models have been trained on code written in 116 programming languages and range in size from 3 to 34 billion parameters.

AI for coding is all the rage right now. New coding assistants are constantly springing up, adding to an already impressive pool that includes Amazon Q, GitHub Copilot, Tabnine, Codeium and DeepCode, among others. Those assistants help developers debug, write better tests, autocomplete, look up documentation and even generate full blocks of code. They are powerful enough to give rise to speculation about whether AI will ultimately replace the programmer. We addressed that question in Why Software Engineering Will Never Die, concluding:

In conclusion, Software Engineering will never die or be replaced. It might shift shape, adapt and embrace technologies such as generative AI, but there never will be a substitute for the human programmer.

If you want to find out how we reached that conclusion, please go on to read that article.

At this point it's important to note that code assistants connect to one or more LLMs behind the scenes to perform their magic. For instance, Tabnine uses Mistral, GPT-3.5 Turbo and GPT-4 Turbo, while Copilot uses Codex and GPT-4.

Like the initial models in the watsonx Granite series (see IBM Releases Watsonx Granite Models), the newcomers are tasked with backing IBM's watsonx Code Assistant (WCA) family of products. As a matter of fact, they were initially conceived as a way for enterprises to transform monolithic COBOL applications into services optimized for IBM Z.

But, since they're open source and released under the Apache 2.0 license, anyone can use them for any purpose. You can experiment with them on Hugging Face, or download them and, through Ollama, integrate them locally into your own code, as in the sketch below.
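As a minimal sketch, assuming Ollama is already running locally and a Granite code model has been pulled (the granite-code:8b tag used here is taken from Ollama's model library, so verify it before running), a few lines of Python are enough to send a prompt to the model through Ollama's local REST API:

import json
import urllib.request

# Assumes Ollama is serving on its default port 11434 and the model
# was fetched beforehand, e.g. with: ollama pull granite-code:8b
payload = {
    "model": "granite-code:8b",
    "prompt": "Write a Python function that checks whether a string is a palindrome.",
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])  # the generated code

The smaller and larger variants should be available under analogous tags, so you can trade quality against the memory you have to spare.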

But we've got ahead of ourselves. What are these Granite models actually good for?

Since they have been trained on code written in 116 programming languages, they are well suited to generative coding tasks such as fixing bugs, explaining code, writing documentation, generating code and complex application modernization work. The Granite family consists of models ranging in size from 3 to 34 billion parameters, in both base and instruction-following variants.
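As a rough sketch of the instruction-following variants in action, here is how you might ask one to explain a snippet using the Hugging Face transformers library. The ibm-granite/granite-8b-code-instruct checkpoint name is taken from IBM's organization on Hugging Face, so check it against the model card before running:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Checkpoint as listed under IBM's ibm-granite org on Hugging Face
# (verify the exact name before use)
model_id = "ibm-granite/granite-8b-code-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# An instruction-following task: ask the model to explain a snippet
chat = [{"role": "user",
         "content": "Explain what this does:\nprint(sum(x * x for x in range(10)))"}]
input_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200)
# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))

The same pattern covers the other tasks, bug fixing or documentation included; only the prompt changes.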

And while the 8B Granite code model variant is a good fit for enterprises, there are also lighter and heavier versions that anyone in the open source community can try out.

Finally, benchmark results are proving encouraging. Testing on benchmarks including HumanEvalPack, HumanEvalPlus and RepoBench showed strong performance on code synthesis, fixing, explanation, editing and translation across most major programming languages, including Python, JavaScript, Java, Go, C++ and Rust.

So let's welcome Granite to the family of code-affiliated LLMs.


More Information

Introduction to Granite Code Models
Granite Code Models on Hugging Face

Related Articles

IBM Releases Watsonx Granite Models

Why Software Engineering Will Never Die

Amazon Bedrock Adds Support For Anthropic's Claude 3 Opus

 



 
