
Microsoft Introduces Language Model Phi-2 with 2.7 Billion Parameters

Microsoft has unveiled Phi-2, a language model with 2.7 billion parameters that demonstrates strong reasoning and language understanding capabilities. Phi-2 builds on the success of its predecessors, Phi-1 and Phi-1.5, matching or surpassing models up to 25 times larger thanks to innovations in model scaling and training-data curation.

One of Phi-2’s standout features is its relatively compact size, making it an ideal playground for researchers to explore mechanistic interpretability, safety improvements, and fine-tuning experimentation across various tasks. Microsoft emphasizes the critical role of training data quality in model performance, and Phi-2 leverages “textbook-quality” data, including synthetic datasets designed to impart common-sense reasoning and general knowledge. The training corpus is augmented with carefully selected web data, filtered based on educational value and content quality.

Microsoft's scaling techniques also contribute to Phi-2's success: the model builds on its 1.3-billion-parameter predecessor, Phi-1.5, and knowledge transferred from that model accelerates training convergence and delivers a clear boost in benchmark scores.

Microsoft unveils 2.7B parameter language model Phi-2

Phi-2 has undergone rigorous evaluation across various benchmarks, including Big Bench Hard, commonsense reasoning, language understanding, math, and coding. Remarkably, with only 2.7 billion parameters, Phi-2 outperforms larger models, including Mistral and Llama-2, and matches or outperforms Google’s recently announced Gemini Nano 2.

Beyond benchmarks, Phi-2 has demonstrated its capabilities in real-world scenarios. Tests involving prompts commonly used in the research community reveal Phi-2’s prowess in solving physics problems and correcting student mistakes, showcasing its versatility beyond standard evaluations.

Phi-2 is a Transformer-based model with a next-word prediction objective, trained on an extensive 1.4 trillion tokens from synthetic and web datasets. The training process, conducted on 96 A100 GPUs over 14 days, emphasized safety, and Microsoft reports that Phi-2 exhibits lower levels of toxicity and bias than comparable open-source models.
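To make the next-word prediction objective concrete, here is a minimal sketch of how such a training loss is computed. The toy embedding layer and linear head below merely stand in for Phi-2's actual Transformer layers, and the vocabulary, model, and sequence sizes are placeholders rather than Phi-2's real configuration.

```python
# Minimal sketch of a next-word prediction (causal language modeling) loss.
# The tiny "model" here is illustrative only; it stands in for the actual
# Transformer architecture Phi-2 uses.
import torch
import torch.nn.functional as F

vocab_size, d_model = 50_000, 64                 # toy sizes, not Phi-2's real ones
embed = torch.nn.Embedding(vocab_size, d_model)  # token embeddings
lm_head = torch.nn.Linear(d_model, vocab_size)   # maps hidden states to vocab logits

tokens = torch.randint(0, vocab_size, (1, 16))   # a dummy token sequence
hidden = embed(tokens)                           # stand-in for the Transformer layers
logits = lm_head(hidden)                         # per-position next-token scores

# Shift so each position predicts the *next* token, then apply cross-entropy.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)
loss.backward()
```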

Microsoft’s announcement of Phi-2 demonstrates the company’s commitment to pushing the boundaries of what smaller base language models can achieve, setting a new standard for performance among models with fewer than 13 billion parameters. Researchers and developers alike can apply Phi-2 to a wide range of tasks, benefiting from strong reasoning and language understanding in a more compact form factor.
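As one illustration of how developers might experiment with the model, the sketch below queries Phi-2 through the Hugging Face transformers library with a simple physics-style prompt, echoing the kinds of real-world tests described above. The "microsoft/phi-2" model identifier and the prompt are assumptions for illustration; check the official release for the exact model name and license terms.

```python
# Hedged example: querying Phi-2 via the Hugging Face transformers API.
# The "microsoft/phi-2" identifier is assumed; older transformers versions
# may also require trust_remote_code=True when loading the model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # assumed model identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "A ball is thrown upward at 10 m/s. How long until it returns to the thrower's hand?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```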
