Meta’s Llama 2 Model Integrated into IBM’s watsonx AI Platform

IBM has unveiled its plan to integrate Meta’s Llama 2 large language model into its watsonx AI and data platform, as part of the ongoing effort to democratize access to AI through open-source models. The collaboration between IBM and Meta strengthens their shared commitment to open innovation in the field of AI.

Collaboration for Open Innovation in AI

Having introduced the original Llama model earlier this year, Meta continues to broaden access to the rapidly progressing AI domain. The latest iteration, Llama 2, boasting an impressive 70 billion parameters, was officially introduced in July. The model was pretrained on publicly accessible online data sources and fine-tuned using instruction datasets and over a million human annotations.

The partnership between Meta and IBM in the realm of open innovation for AI has been evolving. Meta's contributions to open-source projects, such as the PyTorch machine learning framework and the Presto query engine utilized in watsonx.data, have laid the groundwork for this joint endeavor. The initiative aligns with IBM's strategic vision for watsonx, which encompasses both third-party and proprietary AI models.

Today, watsonx.ai already empowers AI developers to leverage pre-trained IBM and Hugging Face community models for a diverse range of Natural Language Processing (NLP) tasks, including question answering, content generation, summarization, text classification, and extraction.

A Roadmap for Generative AI

The forthcoming integration of Llama 2 into watsonx.ai marks a significant step along IBM's generative AI roadmap. This milestone will be followed by subsequent releases, including the AI tuning studio, fact sheets for models within watsonx.ai, and an array of additional AI models.

Upholding its principles of trust and security, IBM is committed to the responsible deployment of its generative AI capabilities. Users engaging with the Llama 2 model through the prompt lab in watsonx.ai can activate the AI guardrails function, which automatically filters harmful language from both the input prompt text and the output generated by the model. Furthermore, Meta has shared insights into the fine-tuning methodology applied to its large language models.
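To make the guardrails idea concrete, the sketch below shows a minimal input/output filter of the kind described: harmful terms are screened from both the prompt and the model's response. This is an illustration of the general technique only, assuming a simple blocklist approach; the term list, function names, and redaction behavior are hypothetical and are not IBM's actual implementation.

```python
import re

# Placeholder entries standing in for a real harmful-language blocklist.
FLAGGED_TERMS = {"badword", "slur"}

def apply_guardrails(text: str, mask: str = "[filtered]") -> str:
    """Replace any flagged term in `text` with a mask token."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, FLAGGED_TERMS)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(mask, text)

def guarded_generate(prompt: str, model) -> str:
    """Screen the prompt, call the model, then screen its output."""
    clean_prompt = apply_guardrails(prompt)
    return apply_guardrails(model(clean_prompt))
```

In practice, production guardrails rely on classifier models rather than word lists, but the shape is the same: the filter wraps the model call on both sides, so neither the prompt nor the completion reaches the user unscreened.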

In addition to its extensive Center of Excellence for Generative AI, housing over a thousand specialized consultants, IBM Consulting brings together the expertise of 21,000 data, AI, and automation consultants. This collective knowledge is harnessed to drive transformative enhancements in core business processes for global clients.
