Llama 2, the successor to Meta's earlier, research-oriented LLaMA model, allows businesses to adapt the technology to their specific needs, such as building chatbots and other text-based tools. Because the model is openly released, outsiders can also inspect it for biases, inaccuracies, and other potential problems.
Developers red-teamed the models to test their safety and drafted an acceptable use policy to curb misuses such as criminal activity, misleading representations, and spam. The two companies also provide a guide on responsible usage.
Meta is releasing pretrained and chat-tuned versions of Llama 2 for free. Additionally, Microsoft plans to offer Llama 2 through the Azure AI model catalog, where it can be used with cloud tools such as content filtering. The model can also run directly on Windows PCs and will be available through outside providers including Amazon Web Services and Hugging Face.
Open-source AI models are not new; Stability AI's Stable Diffusion is a well-known example. However, many key competitors, such as OpenAI with GPT-4, restrict access to their models to generate subscription or licensing revenue. Open release, by contrast, raises concerns that hackers and other malicious actors could misuse the tools.
With the tech industry increasingly wary of the risks posed by large language models, including the spread of misinformation and the development of harmful autonomous systems, Llama 2's focus on responsible use is timely. Some experts and company leaders have called for a six-month pause on training the most powerful AI systems to address ethical and safety concerns, and proposed legislation in the Senate aims to hold AI creators accountable for harmful content.
For Microsoft, Llama 2 presents an opportunity to maintain its competitive edge over AI rivals such as Google.
The collaboration with Meta provides business customers with more choices, especially for those interested in fine-tuning a model to meet their specific needs.