The eVa AI ecosystem is built on a sophisticated AI-driven multi-agent architecture, where specialized AI agents collaborate to handle user requests efficiently while ensuring content consistency, compliance, and brand alignment.
This multi-agent system follows a hierarchical approach with the Supervisor agent at its core, orchestrating five specialized agents that handle distinct operational domains. Each agent operates independently, leverages large language models (LLMs), and connects to your shared knowledge base. The eVa AI architecture is extensible and supports adding new agents and linking them to your knowledge base.
The architecture is described in detail below.
Supervisor agent
The Supervisor agent serves as the main coordinator. When you submit a request, it:
Determines which specialized agent is best suited to handle the task.
Orchestrates the workflow across several agents if needed.
Delivers the final response back to you.
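The routing step above can be sketched in a few lines. This is a minimal illustration, assuming a simple keyword heuristic in place of the LLM-based classification the Supervisor actually performs; the agent names and keywords are hypothetical, not the product's identifiers.

```python
# Hypothetical keyword table standing in for LLM-based task classification.
AGENT_KEYWORDS = {
    "translate": ("translate", "localize", "localization"),
    "review": ("review", "mlr", "compliance"),
    "create": ("create", "draft", "generate"),
}

def route(query: str) -> str:
    """Pick the specialized agent whose keywords match the query,
    falling back to the general knowledge-base agent."""
    q = query.lower()
    for agent, keywords in AGENT_KEYWORDS.items():
        if any(k in q for k in keywords):
            return agent
    return "knowledge"

def handle(query: str) -> str:
    """Supervisor flow: route the request, run the chosen agent,
    and deliver the final response (here, a stub echo)."""
    agent = route(query)
    return f"[{agent}] handled: {query}"
```

In the real system the Supervisor may chain several agents for one request; this sketch shows only single-agent routing.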
Specialized agents
Managed by the Supervisor agent, five specialized agents handle their respective domains:
Handles general queries and retrieves information from your knowledge base.
Provides accurate answers on medical, regulatory, and content-related topics.
Translate agent
Translates and localizes text and images.
Follows pharma standards, terminology, and your knowledge base context to maintain compliance.
Suggests relevant texts, images, and modules from the digital asset library (DAM).
Uses semantic search to support content creation and reuse.
Creates and adapts content (texts, images, interactive components) based on templates and your knowledge base.
Ensures consistency and compliance with branding and regulatory guidelines.
Reviews and validates MLR content, including emails, e-Detailers, and legal materials.
Checks grammar, references, and compliance using approved sources and VVPM data.
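One way to picture the five roles above is as a registry the Supervisor dispatches into. The agent keys and handler stubs below are purely illustrative, standing in for LLM-backed agents with knowledge-base access.

```python
from typing import Callable, Dict

Handler = Callable[[str], str]

# Hypothetical registry mirroring the five specialized roles described above.
REGISTRY: Dict[str, Handler] = {
    "qa":        lambda q: f"qa: {q}",         # general knowledge-base queries
    "translate": lambda q: f"translate: {q}",  # translation and localization
    "suggest":   lambda q: f"suggest: {q}",    # DAM asset suggestions
    "create":    lambda q: f"create: {q}",     # content creation
    "review":    lambda q: f"review: {q}",     # MLR review and validation
}

def dispatch(agent: str, query: str) -> str:
    """Look up an agent by name and run its handler."""
    if agent not in REGISTRY:
        raise KeyError(f"unknown agent: {agent}")
    return REGISTRY[agent](query)
```

A registry like this is also what makes the architecture extensible: adding a new agent means adding one more entry rather than changing the Supervisor's logic.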
Centralized knowledge base
All agents interact with your central knowledge base, which stores structured and vectorized content as follows:
Original documents are pulled from external sources such as Veeva Vault PromoMats, Microsoft SharePoint/OneDrive, Google Drive/Workspace, Confluence, AWS S3, and others.
Documents are processed by the Exporter, which adds metadata and converts them into vectorized formats for semantic search (OpenSearch vector store).
Scheduled jobs ensure continuous synchronization between external systems and your knowledge base.
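The ingestion flow above can be sketched as a small pipeline. This is a toy illustration: a hash-based stand-in replaces the real embedding model, and an in-memory list replaces the OpenSearch vector store; function names are assumptions, not the Exporter's API.

```python
import hashlib
from typing import Dict, List

def embed(text: str, dim: int = 8) -> List[float]:
    """Toy deterministic embedding derived from a hash (illustration only;
    the real pipeline uses a vector embedding model)."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dim]]

def export(doc_id: str, text: str, source: str) -> Dict:
    """Exporter step: attach metadata and a vector, ready for indexing."""
    return {
        "id": doc_id,
        "source": source,   # e.g. "Veeva Vault PromoMats"
        "text": text,
        "vector": embed(text),
    }

# Stand-in for the OpenSearch vector store.
index: List[Dict] = []

def sync(docs: Dict[str, str], source: str) -> int:
    """Scheduled-job style sync: export each document into the index
    and return the index size."""
    for doc_id, text in docs.items():
        index.append(export(doc_id, text, source))
    return len(index)
```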
The eVa AI architecture is built on the retrieval-augmented generation (RAG) approach. RAG enhances LLMs by combining their generative power with external information retrieval from your knowledge base. This ensures that answers are not only fluent but also grounded in verified content.
With the RAG approach, the user flow is as follows:
Submit a query. You enter a question in the eVa AI chat.
Retrieve information. The retriever searches your knowledge base for the most relevant documents.
Generate a response. The generator combines your query with the retrieved data.
Receive the result. You get a contextually accurate, grounded answer.
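The four steps above can be sketched end to end. This is a minimal RAG illustration under stated assumptions: word-overlap scoring stands in for vector search, and a string template stands in for the LLM generator; the sample documents are invented.

```python
from typing import List, Tuple

# Invented sample corpus standing in for the knowledge base.
DOCS = [
    "Product X is indicated for adults with condition Y.",
    "Store Product X below 25 degrees Celsius.",
]

def retrieve(query: str, docs: List[str], k: int = 1) -> List[str]:
    """Step 2: rank documents by word overlap with the query
    (a stand-in for semantic vector search)."""
    q = set(query.lower().split())
    scored: List[Tuple[int, str]] = [
        (len(q & set(d.lower().split())), d) for d in docs
    ]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [d for _, d in scored[:k]]

def generate(query: str, context: List[str]) -> str:
    """Step 3: combine the query with retrieved context (LLM stand-in)."""
    return f"Q: {query}\nContext: {' '.join(context)}"

def answer(query: str) -> str:
    """Steps 1-4: the full user flow, query in, grounded answer out."""
    return generate(query, retrieve(query, DOCS))
```

The point of the sketch is the data flow: the generator never sees the whole corpus, only the retrieved context, which is what grounds the response.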