
Meta’s own powerful ChatGPT-equivalent AI language model has leaked online

Meta released LLaMA, its newest AI language model, two weeks ago. Although it is not publicly accessible like OpenAI’s ChatGPT or Microsoft’s Bing, LLaMA is Meta’s contribution to a boom in AI language technology that promises new ways to interact with our computers as well as new dangers.

Although Facebook’s owner is developing chatbots like those as well, Meta released LLaMA as an open-source package that anyone in the AI community can request access to. The goal, according to the company, is to “further democratise access” to AI and encourage research into its problems. If these systems end up less buggy as a result, the thinking goes, Meta will gladly spend the money to build the model and make it available for others to troubleshoot.

“Even with all the recent advancements in large language models, full research access to them remains limited because of the resources that are required to train and run such large models,” said the company in a blog post. “This restricted access has limited researchers’ ability to understand how and why these large language models work, hindering progress on efforts to improve their robustness and mitigate known issues, such as bias, toxicity, and the potential for generating misinformation.”

However, just one week after Meta began accepting requests for access to LLaMA, the model was leaked online. A download of the system appeared on 4chan on March 3rd and has since spread across other AI communities, sparking debate about the right way to share cutting-edge research in an era of rapid technological change.

Some say the leak will have troubling consequences and blame Meta for distributing the technology too freely. “Get ready for loads of personalized spam and phishing attempts,” tweeted cybersecurity researcher Jeffrey Ladish after the news broke. “Open sourcing these models was a terrible idea.”

Others are more optimistic, arguing that open access is necessary for building safeguards into AI systems and that similarly complex language models have already been made public without causing significant harm.

“We’ve been told for a while now that a wave of malicious use [of AI language models] is coming,” wrote researchers Sayash Kapoor and Arvind Narayanan in a blog post. “Yet, there don’t seem to be any documented cases.” (Kapoor and Narayanan discount reports of students cheating using ChatGPT or sites being overrun by AI spam or the publication of error-filled AI journalism, as these applications are not intended to cause harm and are, by their definition, not malicious.)

A number of AI researchers who have downloaded the leaked system say it is legitimate, including Matthew Di Ferrante, who was able to compare it against the official LLaMA model distributed by Meta and confirm that the two matched. Meta declined to answer questions from The Verge about the veracity or source of the leak, though Joelle Pineau, managing director of Meta AI, acknowledged in a press statement that “While the [LLaMA] model is not available to all… some have attempted to circumvent the approval process.”

So how dangerous is a LLaMA on the loose? And how does Meta’s model stack up against publicly accessible chatbots like ChatGPT and the new Bing?

The most important thing to understand is that the average internet user will get very little out of downloading LLaMA. It is not a ready-made chatbot; it is a “raw” AI system that takes a fair amount of technical know-how to set up and operate.

Di Ferrante tells The Verge that “anyone familiar with setting up servers and development environments for complicated projects” ought to be able to get LLaMA up and running, “given enough time and clear directions.” (It should be noted, though, that Di Ferrante is also an accomplished machine learning engineer with access to a “machine learning workstation that has four 24GB GPUs” and is therefore not typical of the general population.)
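For a rough sense of what that setup involves, here is a minimal sketch of loading and sampling from a raw LLaMA checkpoint with the Hugging Face transformers library. It assumes the weights have already been converted into that library’s format and sit at a hypothetical local path; the point is simply that what comes back is plain text continuation, not a chat interface.

```python
# Minimal sketch: load a raw LLaMA checkpoint and generate a text continuation.
# The local path is hypothetical and assumes weights converted to the
# Hugging Face format; this is not Meta's official inference code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "/models/llama-7b-hf"  # hypothetical local directory

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a 24GB GPU
).to("cuda")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```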

On top of the expertise and hardware requirements, LLaMA has also not been “fine-tuned” for conversation the way ChatGPT and Bing have. Fine-tuning takes a language model’s flexible text-generating capabilities and focuses them on a more specialised job. That job can be as broad as instructing a system to “answer users’ queries as accurately and plainly as possible,” but such fine-tuning is a necessary and frequently challenging step in developing a user-friendly product.
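To make the distinction concrete, here is a toy sketch of what instruction fine-tuning looks like: the raw model continues training on (instruction, response) pairs so it learns to answer questions rather than merely continue text. The checkpoint path and the single training example are hypothetical stand-ins, and real fine-tuning uses far more data and compute, so treat this as an illustration rather than a recipe.

```python
# Toy sketch of instruction fine-tuning a raw causal language model.
# Paths and data are hypothetical; real runs use large instruction datasets
# and often parameter-efficient methods such as LoRA.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "/models/llama-7b-hf"  # hypothetical local directory

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, torch_dtype=torch.bfloat16).to("cuda")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

examples = [  # stand-in for a real instruction dataset
    ("What is the capital of France?", "The capital of France is Paris."),
]

model.train()
for instruction, response in examples:
    text = f"### Instruction:\n{instruction}\n\n### Response:\n{response}"
    batch = tokenizer(text, return_tensors="pt").to("cuda")
    loss = model(**batch, labels=batch["input_ids"]).loss  # next-token prediction loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```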

Given these constraints, it might help to think of LLaMA as an unfurnished apartment building. The frame has been built and the electricity and plumbing are in place, but there are no doors, floors, or furniture. You can’t simply move in and call it home.

Stella Biderman, head of non-profit AI research lab EleutherAI and a machine learning researcher at Booz Allen Hamilton, says the model’s computational demands will be the “number one constraint” on its effective use. According to Biderman, “the majority of people don’t own the hardware needed to operate [the largest version of LLaMA] at all, let alone efficiently.”

Despite these limitations, LLaMA is still a very potent tool. The model comes in four sizes, measured in billions of parameters (a metric that roughly corresponds to the number of connections within each system): LLaMA-7B, 13B, 30B, and 65B. According to Meta, the 13 billion-parameter version outperforms OpenAI’s 175 billion-parameter GPT-3 model on many AI language benchmarks, and it can be run on a single A100 GPU, an enterprise-grade system that is relatively accessible, costing a few dollars an hour to rent on cloud platforms.
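Some back-of-envelope math shows why the smaller sizes matter. At 16-bit precision each parameter takes two bytes, so the weights of the 13B model alone occupy roughly 26GB, which fits on a single A100. The sketch below uses approximate parameter counts and ignores the extra memory needed during generation.

```python
# Rough memory footprint of each LLaMA size, assuming 16-bit weights
# (2 bytes per parameter); parameter counts are approximate, and activation
# and cache memory are ignored.
sizes_in_billions = {"LLaMA-7B": 7, "LLaMA-13B": 13, "LLaMA-30B": 33, "LLaMA-65B": 65}

for name, billions in sizes_in_billions.items():
    gigabytes = billions * 1e9 * 2 / 1e9  # parameters x 2 bytes, converted to GB
    print(f"{name}: ~{gigabytes:.0f} GB of GPU memory for the weights alone")
```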

Of course, there is plenty of debate about how accurate these comparisons are. AI benchmarks are notorious for failing to translate into real-world performance, and some LLaMA users have reported difficulty getting satisfactory results from the system (while others have suggested this is merely a skill issue). Taken together, though, these measures suggest that LLaMA, once fine-tuned, could offer capabilities similar to ChatGPT. And many observers believe LLaMA’s compact size will be a significant spur to development.
