Vibe Coding Our Way to Multi-Provider LLM Support

4/17/2025

In the fast-evolving landscape of AI development, staying locked into a single provider is like betting your entire future on a single stock. That's why we recently embarked on a journey to transform our platform from being tightly coupled with OpenAI's Assistants API to supporting multiple LLM providers. And we did it all through the art of "vibe coding."

What is Vibe Coding?

If you haven't heard of "vibe coding" yet, you're not alone. It's a relatively new concept that's been making waves in developer circles. As defined by Andrej Karpathy and popularized in tech memes, vibe coding involves leveraging advanced AI tools to generate code through natural language prompts, allowing developers to focus on high-level problem-solving rather than manual coding tasks.

It's that magical state where your fingers move faster than your thoughts, the code just flows, and you're basically a DJ mixing beats—except it's JavaScript and TypeScript. It's coding with the perfect soundtrack, the right ambiance, and AI assistance that helps you stay in the flow state.

As the meme wisely says: "Code like nobody's watching, vibe like you've already fixed all the bugs." 🚀

The Challenge: Breaking Free from Provider Lock-in

Our platform was built from the ground up with OpenAI's Assistants API. It was deeply integrated into every aspect of our codebase:

  • Database schema with OpenAI-specific fields
  • API routes tightly coupled with OpenAI's client library
  • UI components built around OpenAI's specific data structures
  • Business logic dependent on OpenAI's assistants, threads, and messages model

But with the rise of powerful alternatives like Google's Gemini, Anthropic's Claude, and others, we knew we needed to create a more flexible architecture that would allow our users to choose the right model for their specific needs.

Our Vibe Coding Approach

Instead of approaching this as a traditional refactoring project with weeks of planning and careful execution, we decided to embrace the vibe coding philosophy. Here's how we did it:

1. Setting the Vibe

First, we created the right environment:

  • A collaborative coding session with the perfect lo-fi beats playlist
  • A shared Cursor IDE session for real-time collaboration
  • A clear vision of what we wanted to achieve
  • AI assistants at the ready to help us generate code and solve problems

2. Abstracting the Provider Layer

With the vibe set, we started by creating an abstraction layer for LLM providers. We defined a standard interface that all providers would implement:

// lib/llm/interfaces/index.ts
export interface LLMProvider {
  createAssistant(params: CreateAssistantParams): Promise<Assistant>;
  updateAssistant(
    id: string,
    params: UpdateAssistantParams,
  ): Promise<Assistant>;
  retrieveAssistant(id: string): Promise<Assistant>;
  createThread(): Promise<Thread>;
  addMessageToThread(threadId: string, message: Message): Promise<void>;
  runThread(
    threadId: string,
    assistantId: string,
    params?: RunParams,
  ): Promise<Run>;
  getThreadMessages(threadId: string): Promise<Message[]>;
  getRunStatus(threadId: string, runId: string): Promise<Run>;
  cancelRun(threadId: string, runId: string): Promise<void>;
}

This interface became our North Star. Any provider we wanted to support would need to implement these methods, ensuring a consistent experience regardless of the underlying LLM service.
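To see why the interface pays off, here is a sketch of what provider-agnostic calling code might look like: everything below talks only to the interface, so swapping the underlying service requires no changes on the caller's side. The trimmed-down types, the `askAssistant` helper, and the polling interval are illustrative assumptions, not code from our actual codebase.

```typescript
// Minimal stand-ins for the real interface types.
interface Message { role: "user" | "assistant"; content: string }
interface Thread { id: string }
interface Run { id: string; status: "queued" | "in_progress" | "completed" }

// A subset of LLMProvider, just enough for the flow below.
interface MinimalProvider {
  createThread(): Promise<Thread>;
  addMessageToThread(threadId: string, message: Message): Promise<void>;
  runThread(threadId: string, assistantId: string): Promise<Run>;
  getRunStatus(threadId: string, runId: string): Promise<Run>;
  getThreadMessages(threadId: string): Promise<Message[]>;
}

// Provider-agnostic question/answer flow: create a thread, post the
// question, run it, poll until the run settles, return the transcript.
async function askAssistant(
  provider: MinimalProvider,
  assistantId: string,
  question: string,
): Promise<Message[]> {
  const thread = await provider.createThread();
  await provider.addMessageToThread(thread.id, { role: "user", content: question });
  let run = await provider.runThread(thread.id, assistantId);
  // A real caller would add a timeout and backoff here.
  while (run.status !== "completed") {
    await new Promise((resolve) => setTimeout(resolve, 250));
    run = await provider.getRunStatus(thread.id, run.id);
  }
  return provider.getThreadMessages(thread.id);
}
```

Because `askAssistant` depends only on the interface, a test double or a brand-new provider drops in without touching this function.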

3. Implementing Provider-Specific Classes

Next, we created implementations for each provider we wanted to support:

  • OpenAI Assistants Provider: A direct wrapper around the OpenAI Assistants API
  • Gemini AI SDK Provider: An implementation that simulates the assistants functionality on top of Google's Gemini API

The magic of vibe coding came into play here. Instead of meticulously planning each implementation detail, we let the code flow naturally, using AI assistants to help us generate boilerplate code and solve tricky implementation challenges.
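As a rough illustration of what "simulating assistants on top of a chat API" means, here is a toy provider that keeps threads in memory and echoes the last user message where a real implementation would call the underlying chat endpoint. The class name, types, and echo behavior are all hypothetical stand-ins, not our actual Gemini provider.

```typescript
// Simplified stand-ins for the real interface types.
interface Assistant { id: string; name: string; instructions: string }
interface Message { role: "user" | "assistant"; content: string }
interface Thread { id: string }
interface Run { id: string; threadId: string; status: "in_progress" | "completed" }

// Toy provider: state lives in maps; a real provider would persist
// thread history and call the underlying chat API in runThread.
class InMemoryChatProvider {
  private assistants = new Map<string, Assistant>();
  private threads = new Map<string, Message[]>();
  private counter = 0;

  private nextId(prefix: string): string {
    return `${prefix}_${++this.counter}`;
  }

  async createAssistant(params: { name: string; instructions: string }): Promise<Assistant> {
    const assistant: Assistant = { id: this.nextId("asst"), ...params };
    this.assistants.set(assistant.id, assistant);
    return assistant;
  }

  async createThread(): Promise<Thread> {
    const id = this.nextId("thread");
    this.threads.set(id, []);
    return { id };
  }

  async addMessageToThread(threadId: string, message: Message): Promise<void> {
    this.threads.get(threadId)?.push(message);
  }

  async runThread(threadId: string, _assistantId: string): Promise<Run> {
    // A real provider would send the assistant's instructions as a system
    // prompt plus the thread history to the chat API here.
    const history = this.threads.get(threadId) ?? [];
    const lastUser = history.at(-1)?.content ?? "";
    history.push({ role: "assistant", content: `echo: ${lastUser}` });
    return { id: this.nextId("run"), threadId, status: "completed" };
  }

  async getThreadMessages(threadId: string): Promise<Message[]> {
    return this.threads.get(threadId) ?? [];
  }
}
```

The key point is that the assistants/threads/runs vocabulary survives even when the backing API only understands stateless chat turns.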

4. Database Schema Evolution

One of the trickiest parts was updating our database schema to support multiple providers. We needed to:

  1. Add a provider field to identify which service was being used
  2. Add a generic provider_assistant_id field to store provider-specific IDs
  3. Rename the existing openAiAssistantId to legacy_openai_assistant_id for backward compatibility
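The three steps above can be sketched as a row shape plus the matching DDL. The column types, the snake_case physical column name, and the PostgreSQL dialect are assumptions on my part; the post doesn't show the actual schema definition.

```typescript
type ProviderType = "openai-assistants" | "gemini";

// Hypothetical shape of an agents row after the schema changes.
interface AgentRow {
  id: string;
  provider: ProviderType;                    // step 1: which service backs this agent
  provider_assistant_id: string | null;      // step 2: generic provider-specific ID
  legacy_openai_assistant_id: string | null; // step 3: renamed legacy field
}

// Corresponding DDL might look like this (PostgreSQL assumed; the
// original physical column name is a guess).
const migrationSql = `
  ALTER TABLE agents ADD COLUMN provider text NOT NULL DEFAULT 'openai-assistants';
  ALTER TABLE agents ADD COLUMN provider_assistant_id text;
  ALTER TABLE agents RENAME COLUMN openai_assistant_id TO legacy_openai_assistant_id;
`;
```

Defaulting `provider` to `'openai-assistants'` means every pre-migration row stays valid the moment the column lands.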

With the help of our AI assistant, we quickly drafted the migration script:

// Migration script to move openAiAssistantId to provider_assistant_id.
// eq and isNotNull come from Drizzle ORM; `db` and the `agents` table
// are imported from our own database module.
import { eq, isNotNull } from "drizzle-orm";
async function migrateAgents() {
  const existingAgents = await db.query.agents.findMany({
    where: isNotNull(agents.openAiAssistantId),
  });

  for (const agent of existingAgents) {
    await db
      .update(agents)
      .set({
        provider: "openai-assistants",
        provider_assistant_id: agent.openAiAssistantId,
        legacy_openai_assistant_id: agent.openAiAssistantId,
      })
      .where(eq(agents.id, agent.id));
  }
}

5. The Model-Provider Mapping Magic

One of the most elegant solutions we came up with during our vibe coding session was a simple utility to map models to their respective providers:

// lib/utils/model-utils.ts
export function getProviderForModel(modelId: string): ProviderType {
  const model = getAllModels().find((model) => model.id === modelId);
  return model?.provider || "openai-assistants";
}

This allowed us to automatically determine the correct provider based on the selected model, creating a seamless experience for users who might not even realize they're switching between different LLM providers.
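One way to picture the full path from model ID to provider instance is a small factory on top of the mapping utility. The model catalog and the stub provider classes below are illustrative assumptions; only `getProviderForModel` itself mirrors the snippet above.

```typescript
type ProviderType = "openai-assistants" | "gemini";

interface ModelInfo { id: string; provider: ProviderType }

// Hypothetical model catalog standing in for getAllModels().
const MODELS: ModelInfo[] = [
  { id: "gpt-4o", provider: "openai-assistants" },
  { id: "gemini-1.5-pro", provider: "gemini" },
];

function getProviderForModel(modelId: string): ProviderType {
  return MODELS.find((m) => m.id === modelId)?.provider ?? "openai-assistants";
}

// Minimal stubs standing in for the real provider implementations.
class OpenAIAssistantsProvider { readonly name = "openai-assistants"; }
class GeminiProvider { readonly name = "gemini"; }

// Factory: callers pass only a model ID and get back the matching
// provider; unknown models fall back to the OpenAI default.
function getProvider(modelId: string): OpenAIAssistantsProvider | GeminiProvider {
  switch (getProviderForModel(modelId)) {
    case "gemini":
      return new GeminiProvider();
    default:
      return new OpenAIAssistantsProvider();
  }
}
```

Adding a third provider then touches only the catalog and the switch, never the calling code.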

The Vibe Coding Flow State

What made this project special wasn't just the technical implementation, but the flow state we achieved through vibe coding. There were moments when the code seemed to write itself, when solutions emerged organically rather than through forced planning.

We found ourselves in that magical state where:

  • Problems that would normally take days to solve were resolved in hours
  • Complex architectural decisions felt intuitive rather than laborious
  • The codebase evolved naturally, as if it knew where it wanted to go

This wasn't just about using AI to generate code—it was about creating an environment where creativity and technical execution could flow unimpeded.

Results and Lessons Learned

Within just a few weeks, we transformed our platform from being tightly coupled with OpenAI to supporting multiple LLM providers. The new architecture is flexible, extensible, and future-proof.

Some key lessons we learned:

  1. Embrace the flow: Sometimes the best solutions emerge when you're in a state of flow rather than forcing a rigid plan.

  2. Let AI handle the boring parts: Use AI assistants to generate boilerplate code, allowing you to focus on the interesting architectural challenges.

  3. Think in abstractions: Design your interfaces first, then implement the details. This makes it easier to support new providers in the future.

  4. Migrate incrementally: We created a migration path that allowed existing users to continue using the platform without disruption while we added new functionality.

  5. Document as you go: We added comprehensive documentation to our README, making it easy for users to understand the new multi-provider support.

What's Next?

Our journey with vibe coding and multi-provider support is just beginning. We're planning to:

  • Add support for more providers like Anthropic Claude and Mistral AI
  • Create provider-specific optimizations to get the best performance from each service
  • Develop a more sophisticated provider selection UI that helps users choose the right model for their specific use case

Conclusion: Vibe On, Coders

Vibe coding isn't just a meme; it's a glimpse into the future of software development. By combining the right environment, the right tools, and the right mindset, we were able to tackle a complex architectural challenge with creativity and efficiency.

As AI tools continue to advance, approaches like vibe coding are likely to become integral to the development workflow, reshaping how applications are built and maintained.

So put on your favorite coding playlist, set up your environment just right, and remember: code like nobody's watching, vibe like you've already fixed all the bugs. 🚀


Have you tried vibe coding on your projects? Share your experiences in the comments below!

Agent One

© 2025. All rights reserved.