Vibe Coding Our Way to Multi-Provider LLM Support
Quick Summary: Vibe coding is the process of building software primarily by using natural language prompts to guide advanced AI assistants, allowing developers to focus on high-level architecture rather than manual syntax. We used this flow state to rapidly abstract our OpenAI-centric platform and implement multi-provider LLM support, integrating Google's Gemini first, with Claude and others to follow.
Last updated: 2026-03-03
In the fast-evolving landscape of AI development, staying locked into a single provider is like betting your entire future on a single stock. That's why we recently embarked on a journey to transform our platform from being tightly coupled with OpenAI's Assistants API to supporting multiple LLM providers. And we did it all through the art of "vibe coding."
What is Vibe Coding?
If you haven't heard of "vibe coding" yet, you're not alone. It's a relatively new concept that's been making waves in developer circles. As defined by Andrej Karpathy and popularized in tech memes, vibe coding involves leveraging advanced AI tools to generate code through natural language prompts, allowing developers to focus on high-level problem-solving rather than manual coding tasks.
It's that magical state where your fingers move faster than your thoughts, the code just flows, and you're basically a DJ mixing beats—except it's JavaScript and TypeScript. It's coding with the perfect soundtrack, the right ambiance, and AI assistance that helps you stay in the flow state.
As the meme wisely says: "Code like nobody's watching, vibe like you've already fixed all the bugs." 🚀
2026 Update: The Rise of Multi-Agent Frameworks
As of early 2026, the landscape of AI tooling has expanded significantly to favor multi-agent architectures over single-provider dependency. With every major AI lab releasing its own agent framework (such as OpenAI's Agents SDK, Microsoft's AutoGen, and LangGraph), developers are building composable systems where multiple specialized agents collaborate. Embracing multi-provider support early on allowed us to seamlessly integrate these sophisticated, stateful, multi-agent frameworks, ensuring we aren't tied to the architectural decisions of any single vendor.
The Challenge: Breaking Free from Provider Lock-in
Our platform was built from the ground up with OpenAI's Assistants API. It was deeply integrated into every aspect of our codebase:
- Database schema with OpenAI-specific fields
- API routes tightly coupled with OpenAI's client library
- UI components built around OpenAI's specific data structures
- Business logic dependent on OpenAI's assistants, threads, and messages model
But with the rise of powerful alternatives like Google's Gemini, Anthropic's Claude, and others, we knew we needed to create a more flexible architecture that would allow our users to choose the right model for their specific needs.
Our Vibe Coding Approach
Instead of approaching this as a traditional refactoring project with weeks of planning and careful execution, we decided to embrace the vibe coding philosophy. Here's how we did it:
1. Setting the Vibe
First, we created the right environment:
- A collaborative coding session with the perfect lo-fi beats playlist
- A shared Cursor IDE session for real-time collaboration
- A clear vision of what we wanted to achieve
- AI assistants at the ready to help us generate code and solve problems
2. Abstracting the Provider Layer
With the vibe set, we started by creating an abstraction layer for LLM providers. We defined a standard interface that all providers would implement:
```typescript
// lib/llm/interfaces/index.ts
export interface LLMProvider {
  createAssistant(params: CreateAssistantParams): Promise<Assistant>;
  updateAssistant(
    id: string,
    params: UpdateAssistantParams,
  ): Promise<Assistant>;
  retrieveAssistant(id: string): Promise<Assistant>;
  createThread(): Promise<Thread>;
  addMessageToThread(threadId: string, message: Message): Promise<void>;
  runThread(
    threadId: string,
    assistantId: string,
    params?: RunParams,
  ): Promise<Run>;
  getThreadMessages(threadId: string): Promise<Message[]>;
  getRunStatus(threadId: string, runId: string): Promise<Run>;
  cancelRun(threadId: string, runId: string): Promise<void>;
}
```

This interface became our North Star. Any provider we wanted to support would need to implement these methods, ensuring a consistent experience regardless of the underlying LLM service.
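To show why the abstraction pays off, here's a minimal sketch of provider-agnostic calling code. The simplified `Message`/`Run` shapes, the `ChatProvider` subset, and the in-memory `StubProvider` are all illustrative assumptions, not our actual types:

```typescript
// Sketch only: simplified stand-ins for the real interface types.
type Message = { role: "user" | "assistant"; content: string };
type Run = { id: string; status: "queued" | "completed" };

// The slice of the provider surface this sketch exercises.
interface ChatProvider {
  createThread(): Promise<{ id: string }>;
  addMessageToThread(threadId: string, message: Message): Promise<void>;
  runThread(threadId: string, assistantId: string): Promise<Run>;
  getThreadMessages(threadId: string): Promise<Message[]>;
}

// Business logic written once, against the interface — no provider imports.
async function ask(provider: ChatProvider, assistantId: string, text: string) {
  const thread = await provider.createThread();
  await provider.addMessageToThread(thread.id, { role: "user", content: text });
  await provider.runThread(thread.id, assistantId);
  const messages = await provider.getThreadMessages(thread.id);
  return messages[messages.length - 1];
}

// A hypothetical in-memory stub standing in for OpenAI, Gemini, etc.
class StubProvider implements ChatProvider {
  private threads = new Map<string, Message[]>();
  async createThread() {
    const id = `thread_${this.threads.size + 1}`;
    this.threads.set(id, []);
    return { id };
  }
  async addMessageToThread(threadId: string, message: Message) {
    this.threads.get(threadId)!.push(message);
  }
  async runThread(threadId: string, _assistantId: string): Promise<Run> {
    const msgs = this.threads.get(threadId)!;
    msgs.push({ role: "assistant", content: `echo: ${msgs[msgs.length - 1].content}` });
    return { id: "run_1", status: "completed" };
  }
  async getThreadMessages(threadId: string) {
    return this.threads.get(threadId)!;
  }
}
```

Swapping `StubProvider` for any real implementation requires zero changes to `ask` — that's the whole point of the interface.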
3. Implementing Provider-Specific Classes
Next, we created implementations for each provider we wanted to support:
- OpenAI Assistants Provider: A direct wrapper around the OpenAI Assistants API
- Gemini AI SDK Provider: An implementation that simulates the assistants functionality on top of Google's Gemini API
The magic of vibe coding came into play here. Instead of meticulously planning each implementation detail, we let the code flow naturally, using AI assistants to help us generate boilerplate code and solve tricky implementation challenges.
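To give a flavor of the Gemini side: since Gemini exposes a stateless chat API rather than server-side threads, the provider has to keep thread state itself and replay history on each run. The class below is a heavily simplified sketch, not our real implementation — it hides the actual model call behind an injected `generate` function so it stays self-contained:

```typescript
type Message = { role: "user" | "assistant"; content: string };

// Sketch: emulating "threads" on top of a stateless chat API.
// `generate` stands in for the real Gemini SDK call (an assumption,
// injected here so the sketch needs no network access).
class GeminiThreadsSketch {
  private threads = new Map<string, Message[]>();
  constructor(private generate: (history: Message[]) => Promise<string>) {}

  async createThread() {
    const id = `gthread_${this.threads.size + 1}`;
    this.threads.set(id, []);
    return { id };
  }

  async addMessageToThread(threadId: string, message: Message) {
    this.threads.get(threadId)!.push(message);
  }

  // "Running" a thread means replaying the stored history to the model
  // and appending its reply, since the API keeps no server-side state.
  async runThread(threadId: string) {
    const history = this.threads.get(threadId)!;
    const reply = await this.generate(history);
    history.push({ role: "assistant", content: reply });
  }

  async getThreadMessages(threadId: string) {
    return this.threads.get(threadId)!;
  }
}
```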
4. Database Schema Evolution
One of the trickiest parts was updating our database schema to support multiple providers. We needed to:
- Add a `provider` field to identify which service was being used
- Add a generic `provider_assistant_id` field to store provider-specific IDs
- Rename the existing `openAiAssistantId` to `legacy_openai_assistant_id` for backward compatibility
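Concretely, the evolved agents row looks something like this sketch (plain TypeScript types rather than our actual ORM schema; any `ProviderType` value beyond `"openai-assistants"` is an assumption):

```typescript
// Sketch of the evolved agents row; "openai-assistants" is the value our
// code uses, the other union member is illustrative.
type ProviderType = "openai-assistants" | "gemini";

interface AgentRow {
  id: string;
  provider: ProviderType;                    // which LLM service backs this agent
  provider_assistant_id: string | null;      // provider-specific assistant ID
  legacy_openai_assistant_id: string | null; // preserved for rollback/backfill
}

// A migrated legacy agent carries its old OpenAI ID in both new fields
// (the ID value here is made up for illustration):
const migrated: AgentRow = {
  id: "agent_1",
  provider: "openai-assistants",
  provider_assistant_id: "asst_abc123",
  legacy_openai_assistant_id: "asst_abc123",
};
```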
With the help of our AI assistant, we quickly drafted the migration script:
```typescript
// Migration script to move openAiAssistantId to provider_assistant_id
import { eq, isNotNull } from "drizzle-orm";
import { db } from "@/lib/db"; // import paths are project-specific
import { agents } from "@/lib/db/schema";

async function migrateAgents() {
  // Only agents that actually have a legacy OpenAI ID need migrating
  const existingAgents = await db.query.agents.findMany({
    where: isNotNull(agents.openAiAssistantId),
  });

  for (const agent of existingAgents) {
    await db
      .update(agents)
      .set({
        provider: "openai-assistants",
        provider_assistant_id: agent.openAiAssistantId,
        legacy_openai_assistant_id: agent.openAiAssistantId,
      })
      .where(eq(agents.id, agent.id));
  }
}
```

5. The Model-Provider Mapping Magic
One of the most elegant solutions we came up with during our vibe coding session was a simple utility to map models to their respective providers:
```typescript
// lib/utils/model-utils.ts
export function getProviderForModel(modelId: string): ProviderType {
  const model = getAllModels().find((model) => model.id === modelId);
  return model?.provider || "openai-assistants";
}
```

This allowed us to automatically determine the correct provider based on the selected model, creating a seamless experience for users who might not even realize they're switching between different LLM providers.
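In route code, that lookup pairs naturally with a provider registry. The sketch below is illustrative only — the `registry`, the inlined model catalog, and the stub `LLMProvider` shape are assumptions standing in for our actual modules:

```typescript
type ProviderType = "openai-assistants" | "gemini";

// Minimal stand-in for a provider instance; the real interface is richer.
interface LLMProvider {
  name: ProviderType;
}

// Hypothetical registry mapping provider keys to singleton instances.
const registry: Record<ProviderType, LLMProvider> = {
  "openai-assistants": { name: "openai-assistants" },
  gemini: { name: "gemini" },
};

// Illustrative model catalog; in our code this comes from getAllModels().
const models = [
  { id: "gpt-4o", provider: "openai-assistants" as ProviderType },
  { id: "gemini-1.5-pro", provider: "gemini" as ProviderType },
];

function getProviderForModel(modelId: string): ProviderType {
  return models.find((m) => m.id === modelId)?.provider ?? "openai-assistants";
}

// Route code never names a provider — it just resolves one from the model.
function providerFor(modelId: string): LLMProvider {
  return registry[getProviderForModel(modelId)];
}
```

Defaulting unknown models to the OpenAI provider keeps old records working while new providers come online.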
The Vibe Coding Flow State
What made this project special wasn't just the technical implementation, but the flow state we achieved through vibe coding. There were moments when the code seemed to write itself, when solutions emerged organically rather than through forced planning.
We found ourselves in that magical state where:
- Problems that would normally take days to solve were resolved in hours
- Complex architectural decisions felt intuitive rather than laborious
- The codebase evolved naturally, as if it knew where it wanted to go
This wasn't just about using AI to generate code—it was about creating an environment where creativity and technical execution could flow unimpeded.
Results and Lessons Learned
Within just a few weeks, we transformed our platform from being tightly coupled with OpenAI to supporting multiple LLM providers. The new architecture is flexible, extensible, and future-proof.
Some key lessons we learned:
- Embrace the flow: Sometimes the best solutions emerge when you're in a state of flow rather than forcing a rigid plan.
- Let AI handle the boring parts: Use AI assistants to generate boilerplate code, allowing you to focus on the interesting architectural challenges.
- Think in abstractions: Design your interfaces first, then implement the details. This makes it easier to support new providers in the future.
- Migrate incrementally: We created a migration path that allowed existing users to continue using the platform without disruption while we added new functionality.
- Document as you go: We added comprehensive documentation to our README, making it easy for users to understand the new multi-provider support.
What's Next?
Our journey with vibe coding and multi-provider support is just beginning. We're planning to:
- Add support for more providers like Anthropic Claude and Mistral AI
- Create provider-specific optimizations to get the best performance from each service
- Develop a more sophisticated provider selection UI that helps users choose the right model for their specific use case
Conclusion: Vibe On, Coders
Vibe coding isn't just a meme; it's a glimpse into the future of software development. By combining the right environment, the right tools, and the right mindset, we were able to tackle a complex architectural challenge with creativity and efficiency.
As AI tools continue to advance, approaches like vibe coding are likely to become integral to the development workflow, reshaping how applications are built and maintained.
So put on your favorite coding playlist, set up your environment just right, and remember: code like nobody's watching, vibe like you've already fixed all the bugs. 🚀
Have you tried vibe coding on your projects? Share your experiences in the comments below!
FAQ
What is vibe coding?
Vibe coding refers to the practice of generating and modifying code primarily through natural language interactions with advanced AI assistants. It enables a "flow state" where developers focus on high-level problem solving, architecture, and creativity instead of getting bogged down by manual syntax and boilerplate coding.
How does vibe coding help development?
Vibe coding significantly speeds up development cycles by allowing AI to handle repetitive tasks like boilerplate generation, database schema migrations, and interface abstraction. It helps software engineers tackle complex architectural challenges with greater efficiency and focus.
What is multi-provider LLM support?
Multi-provider LLM support is the architectural capability of an AI application to seamlessly switch between different Large Language Models (LLMs) like OpenAI's GPT, Google's Gemini, or Anthropic's Claude. This flexibility prevents vendor lock-in and allows users to choose the best model for their specific needs.