Pulsecheck needed a backend. The default SaaS move is to reach for a VPS, spin up a Node/Express server, connect a Postgres database, and start worrying about uptime. I went a different direction.
The entire stack runs on Cloudflare's edge: Workers for the API, D1 for the database, R2 for file storage, KV for rate limiting. One wrangler deploy command ships everything. No servers to manage, no infrastructure to monitor at 3am, no monthly bill that grows faster than revenue.
Why Edge?
The honest answer: cost. Cloudflare's free tier is absurdly generous for a small SaaS. Workers get 100K requests/day free. D1 gets 5M reads/day free. R2 gets 10GB storage free. For an early-stage product with uncertain revenue, "basically free until it works" is the right pricing tier.
The engineering answer: deployment simplicity. The React frontend and Hono API both deploy from the same command to the same origin. No CORS configuration. No separate hosting for static assets vs. API. No nginx reverse proxy. Just one deployment target that serves everything.
The Constraints That Helped
Cloudflare Workers have real limitations. Some of them made the architecture better.
No bcrypt. Workers use the Web Crypto API, which doesn't support bcrypt. Instead of fighting it, I used PBKDF2 with 100K iterations. Same security outcome, native to the platform. The constraint eliminated a dependency.
No long-running processes. Workers have a 30-second CPU time limit. This means no background jobs, no scheduled tasks running inside the request. For rate limiting, I used KV with TTL-based expiration. For email notifications, I call Resend's API inline (with fire-and-forget for non-critical notifications). The constraint forced a stateless, request-scoped architecture. Which is what you should build anyway.
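The KV approach can be sketched like this. The interface below mirrors only the subset of Workers KV used here (`get`, and `put` with `expirationTtl`); the key scheme and limits are illustrative assumptions, not Pulsecheck's real values.

```typescript
// Sketch: TTL-based rate limiting on a KV store. KV's expirationTtl does the
// window cleanup, so no background job is needed.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

const LIMIT = 10;       // max requests per window (assumed value)
const WINDOW_SECS = 60; // KV's minimum TTL is 60 seconds

async function checkRateLimit(kv: KVLike, ip: string): Promise<boolean> {
  const key = `rl:${ip}`;
  const count = Number((await kv.get(key)) ?? "0");
  if (count >= LIMIT) return false; // over the limit: caller returns a 429
  // Re-putting resets the TTL each hit; coarse, but fine for abuse control.
  await kv.put(key, String(count + 1), { expirationTtl: WINDOW_SECS });
  return true;
}
```

Worth noting: KV is eventually consistent, so this is best-effort limiting, which is usually the right trade for abuse prevention at the edge.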
D1 is SQLite, not Postgres. No full-text search, no JSON operators, no LISTEN/NOTIFY. Search uses LIKE queries with proper wildcard escaping. It's not elegant. It works for the scale I'm at. If Pulsecheck outgrows D1, migrating to Postgres is a known path.
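Wildcard escaping for LIKE is small but easy to get wrong; a sketch of the idea, with the table and column names as stand-ins rather than Pulsecheck's actual schema:

```typescript
// Sketch: escape LIKE wildcards so user input can't match everything.
// SQLite (and thus D1) supports an explicit ESCAPE character, so % and _
// in user input are neutralized before being wrapped in wildcards.
function escapeLike(input: string): string {
  return input.replace(/[\\%_]/g, (ch) => `\\${ch}`);
}

// Usage against D1 (db is a D1Database binding; names illustrative):
// const rows = await db
//   .prepare("SELECT * FROM feedback WHERE body LIKE ? ESCAPE '\\' AND owner_id = ?")
//   .bind(`%${escapeLike(term)}%`, userId)
//   .all();
```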
Multi-Tenant from Day One
Every project in Pulsecheck belongs to an owner. Every API query is scoped to the authenticated user's owner_id. This wasn't bolted on later. It was the first database migration.
```typescript
// Every query is owner-scoped
const projects = await db
  .prepare("SELECT * FROM projects WHERE owner_id = ?")
  .bind(userId)
  .all();

// Feature gating is server-side, not UI-hiding
if (plan === "free" && projectCount >= 1) {
  return c.json({ error: "Upgrade to Pro for multiple projects" }, 403);
}
```

Free tier gets 1 project. Pro tier ($19/month) gets unlimited. The gating happens at the API level, not the UI. Hiding a button is not security. Returning a 403 is security.
The Widget: 4.9KB of Vanilla TypeScript
Pulsecheck's embeddable widget lets customers submit feedback from any website. It's 4.9KB of vanilla TypeScript compiled as an IIFE. No React, no framework, no dependencies.
The widget creates a Shadow DOM container (for style isolation), opens a lazy-loaded iframe on first interaction, and communicates with the parent page via postMessage. Because everything deploys to the same Cloudflare origin, there's no cross-origin complexity.
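The postMessage contract benefits from a strict parse step, since any script on the page can post messages. A sketch of that validation, with message names that are illustrative rather than Pulsecheck's actual protocol:

```typescript
// Sketch: typed widget <-> parent message contract, validated before use.
type WidgetMessage =
  | { type: "pulsecheck:open" }
  | { type: "pulsecheck:close" }
  | { type: "pulsecheck:submitted"; feedbackId: string };

// Guard untrusted postMessage data; anything malformed is dropped.
function parseWidgetMessage(data: unknown): WidgetMessage | null {
  if (typeof data !== "object" || data === null) return null;
  const msg = data as Record<string, unknown>;
  switch (msg.type) {
    case "pulsecheck:open":
      return { type: "pulsecheck:open" };
    case "pulsecheck:close":
      return { type: "pulsecheck:close" };
    case "pulsecheck:submitted":
      return typeof msg.feedbackId === "string"
        ? { type: "pulsecheck:submitted", feedbackId: msg.feedbackId }
        : null;
    default:
      return null;
  }
}
```

In the widget itself, the `message` listener would also check `event.origin` against the expected origin before calling the parser.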
Building it without a framework felt weird at first. Then I saw the bundle size: 4.9KB. The equivalent in React would be 40KB+ before I wrote a single line of business logic. For something that loads on someone else's website, size matters.
What I'd Do Differently
D1 migrations are manual. There's no Prisma, no Drizzle, no migration runner. I wrote raw SQL migration files and applied them by hand. For 20 migrations, this was fine. For 200, it won't be. I should have set up a migration runner early.
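The bookkeeping a minimal runner needs is small: compare the files on disk against a table of applied migration names and run the difference in order. A sketch of that core, assuming a zero-padded naming scheme like 0001_init.sql (an assumption, not Pulsecheck's actual convention):

```typescript
// Sketch: given the SQL files on disk and the names already recorded in a
// migrations table, return what still has to run, in order.
function pendingMigrations(files: string[], applied: Set<string>): string[] {
  return files
    .filter((f) => f.endsWith(".sql") && !applied.has(f))
    .sort(); // zero-padded prefixes keep lexicographic order == creation order
}
```

The runner would then execute each pending file against D1 and insert its name into the migrations table in the same batch, so a crash mid-run doesn't re-apply anything.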
No staging environment. I deploy to production, test on production. The npm run preview command uploads a versioned deployment that I can test before promoting, which is close but not the same as a real staging environment with its own database. So far nothing has caught fire. Emphasis on "so far."
The constraints were worth it. Cloudflare's edge platform forced a simpler architecture than I would have built on a traditional server. And the hosting bill for a live SaaS with real users is currently $0/month. That's hard to argue with.