Skip to content

Prompts as Source Code, not Conversations

Uncover the transformative approach to using prompts with LLMs in modern applications. This post delves into why prompts are more akin to source code than simple instructions and how Composable Prompts is shaping the future of LLM integration in apps.

After diving into the world of prompts to get LLMs to execute tasks within applications — or even utilizing them as assistants — it becomes evident that they bear more resemblance to source code than to conventional conversations or the typical directives you'd share with a team member.

When you're developing features for your app, there's often a need to reuse certain segments of a prompt (e.g., application context, safety messages, user specifics, recent interactions). Before long, you'll find yourself wanting to incorporate variables, conditions, and more. Trying to adjust a segment of the prompt (like the application context) across various instances can quickly turn into a maintenance challenge, reminiscent of the dreaded spaghetti code.

Composable-Prompts-Screenshot-getAppInstruction

Imagine wanting to test your prompts with multiple LLMs, each with its own unique input format. Add to that the need for consistent tests to verify model outputs or the requirement for LLM responses to fit a particular format to seamlessly integrate within the user interface of your app. The complexity escalates, and you realize you're crafting a dedicated LLM layer for your application. But, what if there's a more streamlined way? Enter Composable Prompts!

Through this lens, prompts undeniably function as source code. They're integral to your application's blueprint and get executed by an engine — albeit a non-deterministic one. This engine harnesses the potential of LLMs to process unstructured textual or visual data, enabling your application to present refined outputs to users or make informed decisions. Just as you wouldn't have users drafting SQL queries for your database but would rather use sanitized user inputs as part of those queries, the same approach applies here.

That's the essence of Composable Prompts: providing a toolkit that simplifies the creation of LLM-powered applications. LLM have tremendous power and revolutionize how applications can deal with content. We champion the incorporation of LLMs into applications by adopting time-tested approaches to managing source code: through composition, rigorous testing, structured data handling, and abstracting away the intricacies of low-level APIs.

Eager to give it a spin? Reach out, and we'd be delighted to send an invite for our beta program your way!