Meta Platforms Unveils Llama 3 Language Model

Meta Platforms on Thursday released early versions of its latest large language model, Llama 3, along with an image generator that updates pictures in real time as users type prompts, in a bid to overtake OpenAI, the leader in the generative AI market.

The models will be integrated into its virtual assistant Meta AI, which the company is pitching as the most sophisticated of its free-to-use peers, citing performance comparisons on subjects like reasoning, coding and creative writing against offerings from rivals including Alphabet’s Google and French startup Mistral AI.

The updated Meta AI assistant will be given more prominent billing within Meta’s Facebook, Instagram, WhatsApp and Messenger apps as well as a new standalone website that positions it to compete more directly with Microsoft-backed OpenAI’s breakout hit, ChatGPT.

A landing page greeting visitors to that site prompts them to ask the assistant to create a vacation packing list, play 1990s music trivia with them, provide homework help and paint pictures of the New York City skyline.

Meta has been scrambling to push generative AI products out to its billions of users to challenge OpenAI’s leading position on the technology, involving a pricey overhaul of computing infrastructure and the consolidation of previously distinct research and product teams.

The social media giant has been openly releasing its Llama models for use by developers building AI apps as part of its catch-up effort, as a powerful free option could stymie rivals’ plans to earn revenue off their proprietary technology. The strategy has elicited safety concerns from critics wary of what unscrupulous actors may use the model to build.

Meta equipped Llama 3 with new computer coding capabilities and fed it images as well as text in training this time, though for now the model will output only text, Meta Chief Product Officer Chris Cox said in an interview.

More advanced reasoning, like the ability to craft longer multi-step plans, will follow in subsequent versions, he added. Versions planned for release in the coming months will also be capable of “multimodality,” meaning they can generate both text and images, Meta said in blog posts.

Reuters