Retrospective — Hackathon

Building CodeSync AI in 48 Hours — What Broke, What Worked, and What I’d Never Do Again

A real-time collaborative code editor with an AI pair programmer, built under pressure, shipped with duct tape — and the surprisingly honest lessons that followed.

01 — The Idea

The Pitch That Sounded Too Good

Every hackathon starts with the same dangerous energy: infinite confidence, zero sleep debt, and an idea that feels like it definitely can’t be that hard. For CodeSync AI, the premise was simple — build a browser-based collaborative code editor (think Google Docs meets VS Code) with an embedded AI pair programmer. Two people edit the same file in real time and get AI suggestions inline. Ship it in 48 hours. Easy, right?

The Stack We Committed To

React + Vite for the frontend, Monaco Editor for the code surface, Yjs (a CRDT library) for conflict-free real-time sync, Piston API for multi-language code execution, and the Claude API for AI assistance. Five moving parts, one weekend.

React + Vite · Monaco Editor · Yjs / CRDT · WebSockets · Piston API · Claude API

02 — The Architecture

How We Thought It Would Work

The plan was clean on paper. Monaco Editor would host the code surface. Yjs — a CRDT (Conflict-free Replicated Data Type) library — would handle real-time sync between clients, meaning two users could type simultaneously without conflicts. WebSockets would relay the Yjs updates. The AI panel would sit as a sidebar: highlight some code, hit a keybind, and get a response streamed back from Claude.

The sync model

// Yjs doc bound to Monaco via the y-monaco binding
import * as Y from 'yjs';
import { WebsocketProvider } from 'y-websocket';
import { MonacoBinding } from 'y-monaco';

const ydoc = new Y.Doc();
const ytext = ydoc.getText('monaco');
const provider = new WebsocketProvider(
  'ws://localhost:1234',
  'codesync-room',
  ydoc
);
// MonacoBinding is a class, so it needs `new`; passing
// provider.awareness is what renders remote cursors/selections
new MonacoBinding(ytext, editor.getModel(), new Set([editor]), provider.awareness);

In theory, every keystroke gets encoded as a CRDT operation, broadcast to peers, and applied in a way that converges to the same state regardless of order. It’s mathematically elegant. In practice, it took three hours just to get the Monaco binding to not destroy cursor positions on every remote update.
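To make that failure mode concrete, here’s a toy sketch of the index-mapping problem any editor binding has to solve (illustrative only, not y-monaco’s internals): when a remote edit lands before your cursor, the cursor’s character index has to shift with it, or it “teleports.”

```javascript
// Map a cursor's character index across one remote edit.
// edit: { index, deleted, inserted } — a change at `index` that
// removed `deleted` chars and inserted `inserted` chars.
function mapCursor(cursor, edit) {
  if (edit.index >= cursor) return cursor;              // change is after the cursor
  const end = edit.index + edit.deleted;
  if (end >= cursor) return edit.index + edit.inserted; // cursor was inside the deleted span
  return cursor - edit.deleted + edit.inserted;         // change entirely before the cursor
}
```

A binding that applies remote updates without doing this mapping (for every cursor and selection endpoint, in both directions) is exactly the one that eats your cursor position on every keystroke from the other side.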

03 — The Reality

Where Everything Caught Fire

Hour six. The collaborative editing was working — barely. Cursors occasionally teleported. Code blocks would sometimes duplicate on reconnect. But it ran. Then came the AI integration.

The hardest part wasn’t getting the AI to respond. It was deciding when it should.

Streaming responses from Claude into a sidebar is straightforward. But the UX question is genuinely hard: do you show suggestions on every pause in typing? On explicit trigger? Do you interrupt the user’s flow or wait for them to ask? We tried auto-triggering after 1.5s of inactivity. It felt pushy. We switched to manual highlight + shortcut. It felt clunky. We shipped a hybrid that satisfied no one and confused everyone in the demo.
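The inactivity detection itself was the easy part of the auto-trigger experiment: a plain debounce. A minimal sketch (hypothetical helper name, not our actual code):

```javascript
// Fire the AI suggestion request only after the user has stopped
// typing for `delayMs`. Each keystroke resets the timer.
function makeIdleTrigger(onIdle, delayMs = 1500) {
  let timer = null;
  return function onKeystroke() {
    clearTimeout(timer);
    timer = setTimeout(onIdle, delayMs);
  };
}
```

The hard part was never the timer. It was choosing `delayMs` and deciding whether firing at all is welcome, and no constant we tried made the answer yes.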

Biggest Mistake

We spent 40% of our time on features that weren’t in the demo script — custom themes, language auto-detection, a presence indicator. None shipped polished. The core loop (write code → get AI help → run it) only came together in the final 6 hours.

04 — The Demo

Shipping on Fumes

The final demo was a controlled chaos of pre-opened browser tabs, a hard-coded room ID, and exactly one successful full run-through at 4 AM. The judges didn’t need to know that hitting “Run” a second time would sometimes hang the Piston API sandbox. They saw the happy path. It worked.
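The hang had a cheap mitigation we never added: a client-side timeout on the execution request. A sketch (the endpoint and body shape follow Piston’s public v2 API; the timeout wrapper and helper names are ours):

```javascript
// Build the request body for Piston's /api/v2/piston/execute endpoint.
function buildPistonRequest(language, version, source) {
  return {
    language,
    version,
    files: [{ content: source }],
  };
}

// Run code with a hard client-side timeout so a hung sandbox
// can't wedge the UI. Assumes a fetch-capable environment.
async function runCode(language, version, source, timeoutMs = 10000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch('https://emkc.org/api/v2/piston/execute', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(buildPistonRequest(language, version, source)),
      signal: controller.signal,
    });
    return await res.json();
  } finally {
    clearTimeout(timer);
  }
}
```

Ten lines of `AbortController` would have turned “sometimes hangs forever” into “sometimes shows an error,” which is a very different demo risk.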

We placed. That felt good for about twelve hours. Then I opened the codebase on a fresh laptop and felt genuinely embarrassed. The WebSocket reconnection logic was a nested callback nightmare. State management was split haphazardly between local useState and the Yjs document. There was a comment that literally said // TODO: fix this before anyone sees it.
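The reconnection mess also has a standard shape we simply didn’t reach for under pressure: one flat retry loop with exponential backoff and jitter, instead of nested `onclose` callbacks. A sketch with illustrative names:

```javascript
// Exponential backoff with jitter: delay doubles per failed attempt,
// capped, with randomness so clients don't all reconnect in lockstep.
function backoffDelay(attempt, baseMs = 500, capMs = 30000) {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return exp / 2 + Math.random() * (exp / 2); // jitter in [exp/2, exp)
}

// A single reconnect loop instead of nested onclose callbacks.
function connectWithRetry(url, onSocket) {
  let attempt = 0;
  const open = () => {
    const ws = new WebSocket(url);
    ws.onopen = () => { attempt = 0; onSocket(ws); };
    ws.onclose = () => setTimeout(open, backoffDelay(attempt++));
  };
  open();
}
```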

05 — The Lessons

What I Actually Learned

01
Nail the core loop first, unconditionally.

Every minute spent on extras before the critical path is stable is borrowed time you’ll pay back at 3 AM. Prototype ugly. Polish last.

02
Real-time sync is a product decision, not just a technical one.

CRDT sync worked. But what does it feel like to have someone else edit your code live? We never designed for that tension. The UX was an afterthought.

03
AI UX is its own discipline.

Integrating an LLM into a tool isn’t just an API call. It’s deciding when the AI speaks, how it interrupts, what it assumes. We underestimated this completely.

04
Write the demo script before a single line of code.

The demo is the product during a hackathon. Reverse-engineer it from the script you’ll present, then build exactly that and nothing more.

05
Complex libraries need a spike first.

Yjs is powerful but non-trivial. We should’ve spent 2 focused hours on a throwaway prototype before weaving it into the app. We didn’t. We paid for it.

06 — What’s Next

If I Rebuilt It Today

With a clear head, I’d approach it differently. The sync layer would be extracted into its own module with a clean interface — the rest of the app shouldn’t care whether it’s Yjs, Liveblocks, or a raw WebSocket. The AI integration would be a deliberate UX pattern from day one, not wired in at hour 40.
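Concretely, “extracted into its own module” means the app talks to a tiny transport-agnostic seam and never imports Yjs directly. A sketch of what that interface could look like, with an in-memory implementation for tests (names are illustrative):

```javascript
// A minimal sync-layer seam: the app sees only these three methods,
// so the backing implementation (Yjs, Liveblocks, raw WebSocket)
// can be swapped without touching editor code.
function createMemorySync() {
  const listeners = [];
  return {
    // Push a local change out to peers.
    broadcast(update) { listeners.forEach((fn) => fn(update)); },
    // Subscribe to remote changes.
    onUpdate(fn) { listeners.push(fn); },
    // Tear down the transport.
    disconnect() { listeners.length = 0; },
  };
}
```

The in-memory version doubles as a test harness: you can exercise the whole editor loop without a server, which is exactly the spike we should have built on day zero.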

What I’m Proud Of

Despite the chaos, we shipped something that actually ran. Real-time collaborative editing in a browser with multi-language execution and an AI assistant — that’s not a trivial scope for two days. The architecture held. The ideas were right, even if the execution was rough. That counts.

Hackathons are compression chambers for learning. You make every mistake faster than you would in a normal project. The codebase was a mess, but the lessons were clean. CodeSync AI is the most educational project I’ve built — precisely because it was the most chaotic.

If you’re building something similar, steal the architecture, skip the 4 AM callback hell, and write the demo script first. You’re welcome.