Building a Budget-Friendly Lab VPS Platform – Part 6: Refactoring a 2,400-Line Monolith with Claude Code

TL;DR: My main server file had grown to 2,426 lines. I used Claude Code to refactor it into clean modules, cutting 600 lines while keeping everything working in production.


The Problem: When One File Does Everything

If you’ve been following this series, you know the LocalEdge Datacenter platform started simple: one server.js file handling VM provisioning, Stripe billing, Cloudflare tunnels, and everything else.

That approach works great when you’re building fast and figuring out what the platform needs to do. But after adding subscription management, payment failure handling, automated cleanup jobs, and all the features from Parts 1-5, that single file had grown to 2,426 lines.

Everything worked fine in production. Users could provision VMs, billing ran correctly, cleanup jobs handled failures. But adding new features was becoming painful:

  • Need to change email templates? Search through 2,400 lines.
  • Want to modify Proxmox VM creation? It’s scattered across multiple route handlers.
  • Database models? Defined inline, mixed in with API routes.
  • Utility functions? Scattered everywhere.

The real issue wasn’t that the code was broken; it was that the code was becoming hard to understand and modify. I knew that if I didn’t refactor soon, I’d be stuck with a monster file that nobody (including future me) would want to touch.

Why I Used Claude Code

I’ve done plenty of refactoring by hand, and it’s risky. You move some functions around, update the imports, and suddenly something breaks in production because you missed a reference or changed behavior without realizing it.

I wanted to refactor methodically, but I didn’t want to spend weeks on it. The platform was running, users were paying for VMs, and I needed to keep shipping features.

Claude Code turned out to be perfect for this. It can read the entire codebase, understand dependencies between functions, and systematically extract code while preserving behavior. What would have taken me weeks of careful manual work happened over a few sessions.

How I Broke It Down

I tackled the refactoring in four phases, starting with the easiest stuff and working up to the complex service integrations.

Phase 1: Utility Functions

I started by pulling out all the helper functions — things like sleep(), withTimeout(), string formatting, money formatting, encryption utilities. These had no dependencies on the rest of the app, so they were safe to extract first.

I asked Claude Code to identify all utility functions and extract them into a utils/ directory:

utils/
├── encryption.js    # AES-256-GCM encryption/decryption
├── formatting.js    # String sanitization, money formatting
├── helpers.js       # Async utilities with timeout/retry logic
└── billing.js       # Stripe Price ID helpers

This phase removed 84 lines from server.js and gave me confidence that the refactoring process worked. Nothing broke, tests still passed, and now I had reusable utility modules.
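For a flavor of what ended up in utils/helpers.js, here’s a minimal sketch of the sleep and timeout helpers — the actual signatures in the repo may differ slightly:

```javascript
// utils/helpers.js (sketch) — async utilities with no app dependencies

// Resolve after ms milliseconds.
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Reject with a labeled error if the promise doesn't settle within ms.
function withTimeout(promise, ms, label) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`Timeout after ${ms}ms: ${label}`)),
      ms
    );
  });
  // Whichever settles first wins; always clear the pending timer.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

module.exports = { sleep, withTimeout };
```

Because these have zero dependencies on the rest of the app, extracting them first was essentially risk-free.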

Phase 2: Database Models

Next up were the Mongoose schema definitions. These were defined inline in server.js, which meant 200+ lines of schema definitions mixed in with route handlers.

Before:

// Inline schemas in server.js
const orderSchema = new mongoose.Schema({
  userId: { type: mongoose.Schema.Types.ObjectId, required: true },
  plan: String,
  billingInterval: { type: String, enum: ["month", "year"] },
  // ... 50+ more lines
});
const Order = mongoose.model("Order", orderSchema);

After:

// Clean imports in server.js
const User = require('./models/User');
const Vm = require('./models/Vm');
const Order = require('./models/Order');
const EmailLog = require('./models/EmailLog');

Each model got its own file with validation rules, methods, and clean documentation. This removed another 55 lines and made the data structure much clearer.

Phase 3: Service Layer (The Big One)

This was the phase that actually mattered. The platform integrates with three external services:

  1. Email - SMTP, templates, deduplication logic
  2. Cloudflare - Tunnel creation, DNS records, hostname generation
  3. Proxmox - VM lifecycle, cloud-init configs, SSH commands

All of this logic was embedded directly in server.js. Route handlers would make inline Proxmox API calls, construct cloud-init templates, generate Cloudflare tunnel tokens — hundreds of lines of service-specific code scattered throughout.

I asked Claude Code to extract each service into its own module. The Proxmox service alone was 350+ lines.

Before:

app.post('/vm/destroy', async (req, res) => {
  // Inline API calls, error handling, retry logic...
  const pveRes = await fetch(`${PROXMOX_HOST}/api2/json/nodes/${node}/qemu/${vmId}`, {
    method: "DELETE",
    headers: { Authorization: `PVEAPIToken=${PROXMOX_TOKEN_ID}=${PROXMOX_TOKEN_SECRET}` }
  });
  // ... another 50 lines
});

After:

const { destroyVm } = require('./services/proxmoxService');

app.post('/vm/destroy', requireLogin, async (req, res) => {
  const { vmId } = req.body;
  await destroyVm(vmId);
  res.redirect('/dashboard');
});

Now each service lives in its own file:

services/
├── emailService.js       # 161 lines
├── cloudflareService.js  # 218 lines
└── proxmoxService.js     # 356 lines

This phase removed 414 lines from server.js. More importantly, it made each service testable in isolation and much easier to modify.
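To give a sense of what lives in these modules, here’s a sketch of the email deduplication idea. It’s simplified to an in-memory map for illustration; the real emailService.js records sends via the EmailLog model so deduplication survives restarts:

```javascript
// services/emailService.js (simplified sketch) — dedupe logic only.
// The real module also handles SMTP transport and templates.

// Returns a sender that delivers at most one email per
// (recipient, type) pair within the given window.
function createDeduper(windowMs = 24 * 60 * 60 * 1000) {
  const lastSent = new Map(); // "recipient:type" -> last send timestamp

  return async function sendOnce(recipient, type, sendFn) {
    const key = `${recipient}:${type}`;
    const prev = lastSent.get(key);
    if (prev && Date.now() - prev < windowMs) return false; // deduplicated
    await sendFn(); // actual SMTP delivery happens here
    lastSent.set(key, Date.now());
    return true;
  };
}

module.exports = { createDeduper };
```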

Phase 4: Middleware and Billing Helpers

The final phase extracted middleware like authentication and flash message handling, plus some Stripe billing utilities.

middleware/
├── auth.js    # Session authentication
└── flash.js   # Flash messaging

utils/
└── billing.js # Stripe Price ID helpers

Another 47 lines removed. At this point, server.js had gone from 2,426 lines to 1,826 lines — a 600-line reduction.
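The auth middleware is tiny, but extracting it means every route declares its protection explicitly instead of repeating session checks inline. A plausible sketch of middleware/auth.js — the session field name and redirect target are assumptions:

```javascript
// middleware/auth.js (sketch) — session-based route protection

// Let the request through only if a logged-in session exists;
// otherwise bounce the user to the login page.
function requireLogin(req, res, next) {
  if (req.session && req.session.userId) return next();
  return res.redirect('/login');
}

module.exports = { requireLogin };
```

Routes then opt in by listing it before their handler, e.g. `app.post('/vm/destroy', requireLogin, handler)`.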

What Actually Improved

The line count reduction is nice, but here’s what actually got better:

Finding things is faster. Need to change how emails are sent? Everything is in services/emailService.js. Want to modify VM creation? It’s all in services/proxmoxService.js.

IDE features work now. Jump to definition, find all references, refactor/rename — none of that worked when everything was in one massive file. Now it all works perfectly.

Testing became possible. Before, testing anything meant spinning up the entire Express app. Now I can test utility functions and service modules in isolation.

Onboarding is easier. When someone asks “where’s the VM provisioning code?”, I can point them to services/proxmoxService.js instead of saying “lines 847-1253 in server.js, but also check lines 234-298, 1654-1702, and 1891-1947.”

Adding features is faster. Want to add a new email template? Modify one function in emailService.js. Want to change Proxmox behavior? Everything you need is in one service file.

How Claude Code Helped

The key thing Claude Code did was maintain context across the entire refactoring process. It understood:

  • Which functions depended on each other
  • What could be safely extracted without breaking behavior
  • How to preserve error handling and retry logic
  • Where imports needed to be updated

For each extraction, it would:

  1. Create the new module file
  2. Move the code with proper exports
  3. Update all imports in server.js
  4. Verify no references were broken

This systematic approach is why the refactoring didn’t break anything in production. No failed deployments, no rollbacks, no panicked debugging sessions.

Example: Extracting the Proxmox Service

Here’s how the actual process worked for extracting the Proxmox service.

Step 1: I asked Claude Code to identify all Proxmox-related code in server.js.

It found API authentication, VM ID generation, template cloning, lifecycle operations, cloud-init rendering, SSH commands — about 350 lines spread across the file.

Step 2: I told it to create services/proxmoxService.js with all that functionality.

It moved the code, set up proper module exports, added timeout handling and retry logic:

// services/proxmoxService.js
const proxmoxApi = require('proxmox-api');
const { mustEnv } = require('../config/env');
const { withTimeout, retryOperation } = require('../utils/helpers');

// Authenticated API client and target node (PROXMOX_NODE name illustrative)
const NODE = mustEnv('PROXMOX_NODE');
const proxmox = proxmoxApi({
  host: mustEnv('PROXMOX_HOST'),
  tokenID: mustEnv('PROXMOX_TOKEN_ID'),
  tokenSecret: mustEnv('PROXMOX_TOKEN_SECRET'),
});

async function generateVmId() {
  const min = parseInt(mustEnv('VM_ID_MIN'), 10);
  const max = parseInt(mustEnv('VM_ID_MAX'), 10);

  const allVms = await withTimeout(
    proxmox.nodes.$(NODE).qemu.$get(),
    12000,
    'proxmox:list-vms'
  );

  const usedIds = new Set(allVms.map(v => v.vmid));

  for (let id = min; id <= max; id++) {
    if (!usedIds.has(id)) return id;
  }

  throw new Error(`No available VMID in range ${min}-${max}`);
}

module.exports = {
  proxmox,
  generateVmId,
  createVmFromTemplate,
  startVm,
  stopVm,
  destroyVm,
  // ... all exports
};

Step 3: It updated server.js to import the service.

const {
  generateVmId,
  createVmFromTemplate,
  startVm,
  stopVm,
  destroyVm
} = require('./services/proxmoxService');

Step 4: I ran tests and deployed.

Everything worked. No behavior changes, no broken functionality. Just cleaner code organization.
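This is also where the isolation pays off for testing. The VMID-selection loop inside generateVmId, for instance, can be pulled into a pure function and exercised without a live Proxmox API — pickFreeVmId is a hypothetical name for that extraction:

```javascript
// Pure ID selection: no API calls, trivially unit-testable.
// Mirrors the loop in generateVmId.
function pickFreeVmId(usedIds, min, max) {
  const used = new Set(usedIds);
  for (let id = min; id <= max; id++) {
    if (!used.has(id)) return id;
  }
  throw new Error(`No available VMID in range ${min}-${max}`);
}
```

With the logic shaped like this, the only part that needs a mock is the one API call that fetches the list of existing VMs.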

The New Structure

Here’s what the codebase looks like now:

server.js (1,826 lines - routes and app logic)
├── config/
│   └── env.js
├── middleware/
│   ├── auth.js
│   └── flash.js
├── models/
│   ├── User.js
│   ├── Vm.js
│   ├── Order.js
│   └── EmailLog.js
├── services/
│   ├── emailService.js
│   ├── cloudflareService.js
│   └── proxmoxService.js
└── utils/
    ├── encryption.js
    ├── formatting.js
    ├── helpers.js
    └── billing.js

Each module has a clear purpose. File names tell you what’s inside. You can modify one piece without understanding the entire system.

What I’d Do Differently

Looking back, I should have started this refactoring sooner. Waiting until the file hit 2,400 lines made it more intimidating than it needed to be.

The phased approach was the right call though. Starting with utilities built confidence before tackling the complex service extractions.

I also should have added tests during the refactoring instead of after. Now that modules are isolated, testing is straightforward — but it would have been nice to have tests confirming behavior wasn’t changing during the extraction process.

What’s Next

The refactoring sets up some improvements I’ve been wanting to make:

  • Proper unit tests for each service module
  • Better error handling now that services are isolated
  • Performance profiling (easier when services are separate)
  • Potentially splitting server.js further into route modules

But the main win is that the codebase is now maintainable. Adding features doesn’t require understanding the entire system. Debugging issues doesn’t mean searching through thousands of lines.

The platform is still the same from a user perspective — nothing changed functionally. But from a development perspective, it’s much healthier. And that matters for keeping the platform running and improving it over time.


The provisioning pipeline from Part 4, the cleanup jobs from Part 5, and everything else still works exactly the same. The code is just organized better now. Sometimes that’s the most important kind of improvement.

Need a real lab environment?

I run a small KVM-based lab VPS platform designed for Containerlab and EVE-NG workloads — without cloud pricing nonsense.

Visit localedgedatacenter.com →
This post is licensed under CC BY 4.0 by the author.