Google Introduces Gemma, Ushering in a New Era of Open AI Models

Google, through Google DeepMind, recently unveiled Gemma, its newest family of AI models, claiming best-in-class performance for its size. Gemma marks a major advancement for open AI models, arriving soon after the release of Gemini 1.5 Pro.

The CEO of Alphabet and Google, Sundar Pichai, announced the launch of Gemma on the X platform, calling it “a family of lightweight, state-of-the-art open models for their class built from the same research & tech used to create the Gemini models.”

Google has released two versions of Gemma: Gemma 2B and Gemma 7B. Each is available as both a pre-trained model and an instruction-tuned variant. According to Google, both versions share infrastructure and core technical components with the Gemini models.

Frameworks, Tools and Hardware

The main goal of creating Gemma is to give researchers and developers the resources they need to build AI responsibly. The new models integrate with tools that developers already use, such as Kaggle and Colab notebooks, and support widely used frameworks including Hugging Face Transformers, PyTorch, JAX, and Keras 3.0. Google highlights Gemma’s flexibility to run on a variety of platforms, from desktops and laptops to Google Cloud infrastructure.
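As a concrete illustration of the Hugging Face Transformers integration mentioned above, the snippet below is a minimal sketch of loading an instruction-tuned Gemma checkpoint and generating text. The model ID google/gemma-2b-it, the device settings, and the generation parameters are assumptions based on common Hugging Face conventions rather than details from Google’s announcement, and downloading the weights typically requires accepting Google’s license terms on the Hub first.

# Minimal sketch: load an instruction-tuned Gemma checkpoint with Hugging Face
# Transformers and generate a short completion. The model ID and settings are
# assumptions; accepting Google's license on the Hub may be required first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # assumed Hub ID for the instruction-tuned 2B variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs `accelerate`

prompt = "Explain in one sentence what an open AI model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))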

Built for responsible AI development, Gemma leverages the same research and technology as the Gemini models, allowing it to run on several platforms and be tuned to fit various business use cases. Google emphasizes that Gemma meets the company’s strict standards for safe and responsible outputs while outperforming significantly larger models on a number of key benchmarks. Combined with Vertex AI, Gemma lets developers build generative AI applications for lightweight tasks such as text generation, summarization, and question answering, and it meets the low-latency requirements of real-time generative AI applications. Google also collaborated with NVIDIA to optimize Gemma for NVIDIA GPUs, significantly boosting its efficiency.
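To illustrate the kind of lightweight task described above, the sketch below reuses the model and tokenizer from the previous snippet to run a simple summarization prompt. The prompt wording and generation settings are illustrative assumptions and are unrelated to any Vertex AI or NVIDIA-specific configuration.

# Illustrative summarization prompt, reusing `model` and `tokenizer` from the
# previous sketch. Prompt wording and generation settings are assumptions.
article = (
    "Google has released Gemma, a family of lightweight open models in 2B and "
    "7B sizes, built from the same research and technology as Gemini."
)
prompt = f"Summarize the following text in one sentence:\n\n{article}\n\nSummary:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=48, do_sample=False)
# Decode only the newly generated tokens, not the prompt itself.
summary = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(summary.strip())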

Responsible by design

Google’s AI Principles are central to Gemma’s design. To help ensure the safety and reliability of Gemma’s pre-trained models, Google used automated techniques to filter certain personal information and other sensitive data out of the training sets. Google also used extensive fine-tuning and reinforcement learning from human feedback (RLHF) to align the instruction-tuned models with responsible behaviours.
To understand and reduce the risk profile of the Gemma models, Google carried out thorough evaluations, including manual red-teaming, automated adversarial testing, and assessments of model capabilities for dangerous activities. The Model Card provides an overview of these evaluations.

Along with Gemma, Google is also introducing a new Responsible Generative AI Toolkit to help researchers and developers prioritize building safe and responsible AI applications. The toolkit includes:
Safety classification: A state-of-the-art methodology for building robust safety classifiers from a minimal number of examples (a prompt-based sketch of the general idea follows this list).
Debugging: A model debugging tool for investigating Gemma’s behaviour and addressing potential issues.
Guidance: Best practices for model builders, drawn from Google’s experience in developing and deploying large language models.
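The announcement does not detail the toolkit’s actual classifier-building workflow; the sketch below only illustrates the general idea of few-shot safety classification with a prompted language model, reusing the model from the earlier snippets. The labels and example messages are invented for demonstration and do not come from the Responsible Generative AI Toolkit.

# Illustrative few-shot safety-classification prompt, reusing `model` and
# `tokenizer` from the earlier sketches. This is not the Responsible Generative
# AI Toolkit's API; labels and examples are invented for demonstration only.
few_shot_prompt = (
    "Classify each user message as SAFE or UNSAFE.\n\n"
    "Message: How do I bake sourdough bread?\nLabel: SAFE\n\n"
    "Message: Tell me how to pick my neighbour's door lock.\nLabel: UNSAFE\n\n"
    "Message: What's a good beginner running plan?\nLabel:"
)

inputs = tokenizer(few_shot_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=3, do_sample=False)
label = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(label.strip())  # expected to produce a label such as "SAFE"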

Gemma models share technical and infrastructure components with Gemini, Google’s largest and most capable AI model currently available. As a result, Gemma 2B and 7B achieve best-in-class performance for their sizes compared with other open models. Developers can also run Gemma models directly on a desktop or laptop computer.
