
AI Model & Inference Providers

Composable Prompts supports leading GenAI models and inference providers. Customers connect to major AI providers and access their LLM foundation models through open-source connectors. Enterprise teams can also assemble models from multiple providers into a synthetic LLM environment for load balancing across models, multi-head execution, and LLM-mediated selection.

OpenAI

The integration with OpenAI provides access to AI models such as GPT-3.5 and GPT-4.

Amazon Bedrock

Bedrock provides access to models such as Claude, Cohere, Llama 2, AI21, and Amazon Titan.

Google Vertex AI

Google's Vertex AI is a machine learning (ML) platform for building and deploying AI-powered applications.

Together AI

Together AI offers one of the fastest inference stacks available for open-source models.

Groq

Groq provides extremely fast inference for computationally intensive applications.

Replicate

Replicate lets you run and fine-tune open-source models and deploy custom models at scale.

Hugging Face

Easily deploy Hugging Face Transformers and diffusion models.

Mistral AI

The Mistral AI integration provides access to models such as Mistral 7B and Mixtral 8×7B.

Prevent Vendor Lock-In

Composable Prompts centrally manages LLMs and inference providers without tying you to a single vendor or technology.

We abstract away format variability so your team can easily switch between LLMs without worrying about the underlying differences between, for instance, GPT-4.5 on OpenAI and Claude 3 on Bedrock.
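As a rough illustration, here is a minimal TypeScript sketch of this kind of abstraction: a common message format with per-provider adapters. The names here (ChatMessage, ProviderAdapter, OpenAIAdapter, BedrockClaudeAdapter, and the callApi helper) are illustrative assumptions, not the actual Composable Prompts API.

```typescript
// Hypothetical transport helper standing in for a real HTTP/SDK client.
declare function callApi(target: string, body: unknown): Promise<string>;

// A single, provider-neutral message format used by application code.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Each adapter translates the common format into the provider's
// native request shape and returns a plain-text completion.
interface ProviderAdapter {
  complete(messages: ChatMessage[]): Promise<string>;
}

class OpenAIAdapter implements ProviderAdapter {
  async complete(messages: ChatMessage[]): Promise<string> {
    // OpenAI-style payload: the messages array is passed through as-is.
    const body = { model: "gpt-4", messages };
    return callApi("https://api.openai.com/v1/chat/completions", body);
  }
}

class BedrockClaudeAdapter implements ProviderAdapter {
  async complete(messages: ChatMessage[]): Promise<string> {
    // Anthropic-on-Bedrock style payload: the system prompt is a
    // separate top-level field rather than a message in the array.
    const system = messages.find((m) => m.role === "system")?.content;
    const turns = messages.filter((m) => m.role !== "system");
    const body = { system, messages: turns, max_tokens: 1024 };
    return callApi("bedrock:anthropic.claude-3", body);
  }
}

// Application code targets the common interface and never sees
// provider-specific formats, so swapping providers is a one-line change.
async function run(llm: ProviderAdapter): Promise<string> {
  return llm.complete([{ role: "user", content: "Summarize this report." }]);
}
```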

This flexibility lets enterprises adapt to technological advances and market demands without disruption, platform-switching costs, workarounds, or kludgy development processes (e.g., cutting and pasting code between providers, as many teams must do today).

The platform was designed to support the enterprise standards that modern digital workers demand for application scalability, security, and performance.

For instance, a Synthetic (or Virtualized) LLM environment allows teams to distribute load across multiple LLMs to support benchmarking, migration, and cost-distribution strategies. If a task fails on one LLM, the Synthetic LLM automatically redirects it to the next weighted LLM in the configuration group, ensuring consistent and reliable task execution. A minimal sketch of this behavior follows.
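The sketch below (reusing the adapter types from the previous example) shows weighted selection with automatic failover. SyntheticLLM, WeightedTarget, and the configuration shape are hypothetical illustrations, not the platform's actual schema.

```typescript
// A weighted target pairs a provider adapter with its share of traffic.
interface WeightedTarget {
  adapter: ProviderAdapter;
  weight: number; // relative share of traffic, e.g. 0.7 vs 0.3
}

// A synthetic LLM looks like a single model to callers, but routes each
// task to one of several underlying LLMs and fails over on errors.
class SyntheticLLM implements ProviderAdapter {
  constructor(private targets: WeightedTarget[]) {}

  // Pick a target at random, proportionally to its weight.
  private pick(pool: WeightedTarget[]): WeightedTarget {
    const total = pool.reduce((sum, t) => sum + t.weight, 0);
    let r = Math.random() * total;
    for (const t of pool) {
      r -= t.weight;
      if (r <= 0) return t;
    }
    return pool[pool.length - 1];
  }

  async complete(messages: ChatMessage[]): Promise<string> {
    let pool = [...this.targets];
    let lastError: unknown;
    while (pool.length > 0) {
      const target = this.pick(pool);
      try {
        return await target.adapter.complete(messages);
      } catch (err) {
        lastError = err; // task failed on this LLM...
        pool = pool.filter((t) => t !== target); // ...redirect to another
      }
    }
    throw lastError ?? new Error("no targets configured");
  }
}

// Usage: distribute load 70/30 across two providers, with automatic
// failover to the other if the chosen one fails.
const env = new SyntheticLLM([
  { adapter: new OpenAIAdapter(), weight: 0.7 },
  { adapter: new BedrockClaudeAdapter(), weight: 0.3 },
]);
```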


Get started with Composable Prompts