
Persistent Launches GenAI Hub to Power New Era of Enterprise AI Adoption

Persistent Systems (BSE and NSE: PERSISTENT), a global Digital Engineering and Enterprise Modernization leader, announced the launch of GenAI Hub, an innovative platform designed to accelerate the creation and deployment of Generative AI (GenAI) applications within enterprises. This platform seamlessly integrates with an organization’s existing infrastructure, applications, and data, enabling the rapid development of tailored, industry-specific GenAI solutions. GenAI Hub supports the adoption of GenAI across various Large Language Models (LLMs) and clouds, without provider lock-in.

To effectively leverage the potential of GenAI and translate ideas into tangible business outcomes, enterprises must integrate it seamlessly into their existing systems. With AI models ranging from large general-purpose LLMs to specialized ones, clients require a robust platform like the GenAI Hub. The platform simplifies the development and management of multiple GenAI models and speeds market readiness through pre-built software components, all while upholding responsible AI principles.

The GenAI Hub comprises five major components:

Playground is a no-code tool that lets domain experts explore and apply GenAI with LLMs on enterprise data, with no programming skills required. It provides a single, uniform interface to LLMs from proprietary providers like Azure OpenAI, AWS Bedrock, and Google Gemini, and to open models from Hugging Face like Llama 2 and Mistral.

Agents Framework provides a versatile architecture for GenAI application development, leveraging libraries like LangChain and LlamaIndex for innovative solutions, including Retrieval Augmented Generation (RAG).
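To make this concrete, here is a minimal Retrieval Augmented Generation sketch using LlamaIndex, one of the libraries mentioned above. The document folder, query text, and the default embedding and LLM settings are placeholder assumptions for illustration, not details of the GenAI Hub itself, and the import path assumes llama-index 0.10 or later.

# Minimal RAG sketch with LlamaIndex; paths, query, and default model settings are placeholders.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# 1. Load enterprise documents from a local folder (hypothetical path).
documents = SimpleDirectoryReader("./enterprise_docs").load_data()

# 2. Build a vector index over the documents (embeddings are computed here).
index = VectorStoreIndex.from_documents(documents)

# 3. Query: relevant chunks are retrieved and passed to the LLM along with the question.
query_engine = index.as_query_engine()
print(query_engine.query("Summarize our refund policy for enterprise customers."))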

Evaluation Framework uses an “AI to validate AI” approach: it can auto-generate ground-truth questions that are then verified by a human-in-the-loop. It employs metrics to track application performance and to measure drift and bias so they can be addressed.
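A rough sketch of the “AI to validate AI” idea is given below. The call_llm helper, the prompts, and the scoring scale are hypothetical stand-ins, not the Hub’s actual evaluation interface.

# Hypothetical sketch of LLM-assisted evaluation with a human-in-the-loop;
# call_llm is a placeholder for whatever model client an enterprise uses.

def generate_ground_truth(document_text, call_llm):
    # Ask a model to propose question-answer pairs from an enterprise document.
    prompt = ("Generate three question-answer pairs that are directly "
              "supported by the following text:\n" + document_text)
    return call_llm(prompt)  # reviewed and corrected by a human before use

def judge_answer(question, reference_answer, application_answer, call_llm):
    # Ask a second model to grade the application's answer against the reference.
    prompt = (f"Question: {question}\n"
              f"Reference answer: {reference_answer}\n"
              f"Application answer: {application_answer}\n"
              "Score the application answer from 1 (wrong) to 5 (fully correct) "
              "and explain briefly.")
    return call_llm(prompt)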

Gateway serves as a router across LLMs, enabling application compatibility and improving the management of service priorities and load balancing. It also offers detailed insights into token consumption and associated costs.
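The routing and cost-tracking role described here can be pictured with a short sketch. The provider ordering, fallback behavior, and token accounting below are illustrative assumptions, not the Gateway’s real interface.

# Illustrative LLM gateway sketch: route by priority, fall back on failure,
# and record token usage per provider. Not the actual GenAI Hub API.
class LLMGateway:
    def __init__(self, providers):
        # providers: list of (name, client) pairs, ordered by priority.
        self.providers = providers
        self.token_usage = {name: 0 for name, _ in providers}

    def complete(self, prompt):
        for name, client in self.providers:
            try:
                # Each client is assumed to expose complete(prompt) returning
                # (text, tokens_used); real provider SDKs differ.
                text, tokens = client.complete(prompt)
                self.token_usage[name] += tokens
                return text
            except Exception:
                continue  # try the next provider in priority order
        raise RuntimeError("All LLM providers failed")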

Custom Model Pipelines facilitate the creation and integration of bespoke LLMs and Small Language Models (SLMs) into the GenAI ecosystem, supporting a streamlined process for data preparation and model fine-tuning suitable for both cloud and on-premises deployments. 
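For a sense of what such a pipeline involves, a compressed fine-tuning sketch using Hugging Face Transformers follows. The base model, data file, and hyperparameters are placeholder assumptions and are not prescribed by the GenAI Hub.

# Compressed fine-tuning sketch with Hugging Face Transformers (illustrative only).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "distilgpt2"  # stand-in for a bespoke SLM base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical prepared corpus produced by the data-preparation step.
dataset = load_dataset("text", data_files={"train": "prepared_corpus.txt"})["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                      batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./slm-finetuned", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()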

The GenAI Hub streamlines the development of enterprise use cases, offering step-by-step guidance and seamless integration of data with LLMs, enabling the rapid creation of efficient and secure GenAI solutions at scale, whether for end users, customers, or employees.
