Edge Computing with Cloudflare Workers and D1

A practical guide to building a full REST API on Cloudflare Workers with D1, covering schema design, query patterns, authentication, and what we learned deploying to production.

We recently built an internal time-tracking and invoicing API and decided to try something different: instead of spinning up an EC2 instance or a container, we deployed the entire thing on Cloudflare Workers with D1 as the database. The result was a fully functional REST API with zero servers to manage, sub-10ms database queries, and a monthly cost that rounds to zero.

Here is what we learned.

What Are Workers and D1?

Cloudflare Workers are serverless functions that run at the edge — meaning your code executes in the data center closest to the user, not in a single region. Unlike AWS Lambda, there is no cold start penalty worth mentioning. Workers spin up in under 5 milliseconds.

D1 is Cloudflare’s edge database built on SQLite. Your data lives alongside your compute, which eliminates the network round-trip to a centralized database. If you have ever used SQLite locally, you already know the query syntax.
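
For orientation, a Worker is just a module that exports a fetch handler, which Cloudflare invokes for every request on your routes. A minimal sketch (Env here is a placeholder for whatever bindings your configuration declares):

```typescript
// Minimal Worker sketch: an exported object with a fetch handler.
// Env stands in for the bindings wrangler.toml declares.
type Env = Record<string, unknown>;

const worker = {
  async fetch(request: Request, _env: Env): Promise<Response> {
    const path = new URL(request.url).pathname;
    return new Response(`Hello from ${path}`);
  },
};

export default worker;
```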

Project Structure

Our API is a single TypeScript file (about 460 lines) that handles 27 RESTful endpoints across customers, plans, projects, time entries, invoices, and settings. The entire deployment configuration is 16 lines of TOML:

name = "tempo-api"
main = "src/index.ts"
compatibility_date = "2024-12-01"

routes = [
  { pattern = "tempo-api.example.com/*", zone_id = "your-zone-id" }
]

[[d1_databases]]
binding = "DB"
database_name = "tempo-db"
database_id = "your-database-id"

The binding = "DB" line is key — it injects the D1 database instance into every request handler as env.DB, making queries as simple as:

const result = await env.DB
  .prepare('SELECT * FROM customers WHERE id = ?')
  .bind(params.id)
  .first();

Schema Design for D1

D1 is SQLite under the hood, so you get full SQL support including foreign keys, cascading deletes, and aggregate functions. Our schema uses 8 tables with proper relational constraints:

CREATE TABLE customers (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    ad_hoc_hourly_rate REAL DEFAULT 0,
    created_at TEXT DEFAULT (datetime('now'))
);

CREATE TABLE time_entries (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    project_id INTEGER NOT NULL,
    team_member_id INTEGER NOT NULL,
    description TEXT NOT NULL,
    raw_hours REAL NOT NULL,
    entry_date TEXT NOT NULL,
    created_at TEXT DEFAULT (datetime('now')),
    FOREIGN KEY (project_id) REFERENCES projects(id) ON DELETE CASCADE,
    FOREIGN KEY (team_member_id) REFERENCES team_members(id) ON DELETE CASCADE
);

A few things worth noting:

  • CASCADE deletes keep your data clean. Deleting a customer automatically removes their projects, time entries, and invoices — no orphaned records.
  • DEFAULT timestamps via datetime('now') reduce application logic. You do not need to generate timestamps in your code.
  • REAL for currency works fine for internal tools where exact decimal precision is not critical. For customer-facing billing, you would want to store cents as integers.
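
The integer-cents approach mentioned in that last point can be sketched in a couple of helpers (the names are mine, not from the codebase):

```typescript
// Store money as integer cents to sidestep floating-point drift.
// toCents/fromCents are hypothetical helpers for illustration.
function toCents(amount: number): number {
  return Math.round(amount * 100);
}

function fromCents(cents: number): string {
  return (cents / 100).toFixed(2);
}

// 0.1 + 0.2 === 0.30000000000000004 in IEEE 754 floats;
// rounding at the cent boundary absorbs that drift.
```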

Routing Without a Framework

Workers do not need Express or Fastify. A simple pattern-matching router with regex handles all 27 endpoints:

type Handler = (
  request: Request,
  env: Env,
  params: Record<string, string>
) => Promise<Response>;

interface Route {
  method: string;
  pattern: RegExp;
  handler: Handler;
}

const routes: Route[] = [
  { method: 'GET', pattern: /^\/customers$/, handler: listCustomers },
  { method: 'POST', pattern: /^\/customers$/, handler: createCustomer },
  { method: 'PUT', pattern: /^\/customers\/(?<id>\d+)$/, handler: updateCustomer },
  // ...
];

Named capture groups in the regex, such as (?&lt;id&gt;\d+), extract path parameters cleanly. The router iterates through routes, matches the method and pattern, and calls the handler. Total overhead: negligible.
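
The dispatch loop itself fits in a few lines. This sketch repeats the Route and Handler types from above and leaves Env abstract:

```typescript
// Sketch of the dispatch loop: the first route whose method and pattern
// both match wins; named capture groups become the params object.
type Env = Record<string, unknown>;

type Handler = (
  request: Request,
  env: Env,
  params: Record<string, string>
) => Promise<Response>;

interface Route {
  method: string;
  pattern: RegExp;
  handler: Handler;
}

async function dispatch(
  routes: Route[],
  request: Request,
  env: Env
): Promise<Response> {
  const path = new URL(request.url).pathname;
  for (const route of routes) {
    if (route.method !== request.method) continue;
    const match = route.pattern.exec(path);
    if (match) {
      // match.groups carries the named captures, e.g. { id: "42" }
      return route.handler(request, env, match.groups ?? {});
    }
  }
  return new Response('Not Found', { status: 404 });
}
```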

Authentication

For an internal API, bearer token authentication is straightforward:

const authHeader = request.headers.get('Authorization');
if (!env.API_KEY || authHeader !== `Bearer ${env.API_KEY}`) {
  return cors(json({ success: false, error: 'Unauthorized' }, 401));
}

The API_KEY is stored as a Workers secret (not in code or config). For a public-facing API, you would want JWT validation or OAuth, but for internal tooling this is simple and effective.
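
The cors() and json() helpers used in that snippet are not part of our listing; a plausible minimal version might look like this (a sketch, not the exact implementation):

```typescript
// Hypothetical json()/cors() helpers matching how the auth snippet
// uses them; the real implementations may differ.
function json(body: unknown, status = 200): Response {
  return new Response(JSON.stringify(body), {
    status,
    headers: { 'Content-Type': 'application/json' },
  });
}

function cors(response: Response): Response {
  const headers = new Headers(response.headers);
  headers.set('Access-Control-Allow-Origin', '*');
  headers.set('Access-Control-Allow-Headers', 'Authorization, Content-Type');
  return new Response(response.body, { status: response.status, headers });
}
```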

Patterns That Worked Well

On-write aggregation for invoice totals. When a line item is added, updated, or deleted, we immediately recalculate the invoice total:

const total = await env.DB
  .prepare('SELECT COALESCE(SUM(amount), 0) as total FROM invoice_line_items WHERE invoice_id = ?')
  .bind(invoiceId)
  .first<{ total: number }>();

await env.DB
  .prepare("UPDATE invoices SET total_amount = ?, updated_at = datetime('now') WHERE id = ?")
  .bind(total?.total || 0, invoiceId)
  .run();

This avoids the need to recalculate totals on every read. Since D1 queries are colocated with compute, the extra write is nearly free.

JOINs work great in D1. We were initially cautious about complex queries, but D1 handles JOINs without issues:

const entries = await env.DB.prepare(`
  SELECT te.*, p.code as project_code,
         tm.initials as team_initials, tm.overhead_factor
  FROM time_entries te
  JOIN projects p ON te.project_id = p.id
  JOIN team_members tm ON te.team_member_id = tm.id
  WHERE p.customer_id = ? AND te.entry_date BETWEEN ? AND ?
`).bind(customerId, start, end).all();

Upsert with ON CONFLICT for the settings table:

INSERT INTO settings (key, value) VALUES (?, ?)
ON CONFLICT(key) DO UPDATE SET value = ?

This pattern lets us use the settings table as a simple key-value store for API keys and configuration without checking for existence first.
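
Invoking the upsert from a handler might look like the sketch below. Note it uses SQLite's excluded.value form, a variant of the SQL above that avoids binding the new value twice; saveSetting and the structural types are hypothetical stand-ins, not the article's code:

```typescript
// Structural stand-ins for the relevant parts of @cloudflare/workers-types.
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
  run(): Promise<unknown>;
}
interface D1DatabaseLike {
  prepare(query: string): D1PreparedStatement;
}

// Hypothetical helper: upsert a key-value pair into the settings table.
async function saveSetting(
  db: D1DatabaseLike,
  key: string,
  value: string
): Promise<void> {
  // excluded.value refers to the row that failed to insert,
  // so the new value only needs to be bound once.
  await db
    .prepare(
      `INSERT INTO settings (key, value) VALUES (?, ?)
       ON CONFLICT(key) DO UPDATE SET value = excluded.value`
    )
    .bind(key, value)
    .run();
}
```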

What We Would Do Differently

Migration tooling is minimal. D1 supports migration files, but the tooling is basic compared to something like Prisma or TypeORM. For a small schema this is fine, but for a complex application you will want to plan your migration strategy early.

No connection pooling to worry about — but also no interactive transactions. D1 does not support BEGIN/COMMIT-style multi-statement transactions yet. For our use case (single-row updates with aggregation), this was not a problem. But if you need atomic multi-table writes, this is a real limitation.
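
One mitigation worth knowing: D1's batch() API submits multiple prepared statements in a single round-trip, and per Cloudflare's documentation the batch executes sequentially and is rolled back if any statement fails. A hedged sketch combining the delete-plus-retotal pattern from earlier (the helper name and structural types are mine):

```typescript
// Structural stand-ins for the relevant parts of @cloudflare/workers-types.
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
}
interface D1DatabaseLike {
  prepare(query: string): D1PreparedStatement;
  batch(statements: D1PreparedStatement[]): Promise<unknown[]>;
}

// Hypothetical helper: delete a line item and recompute the invoice
// total in one batch, so a failure leaves neither change applied.
async function deleteLineItem(
  db: D1DatabaseLike,
  itemId: number,
  invoiceId: number
): Promise<void> {
  await db.batch([
    db.prepare('DELETE FROM invoice_line_items WHERE id = ?').bind(itemId),
    db
      .prepare(
        `UPDATE invoices
         SET total_amount = (SELECT COALESCE(SUM(amount), 0)
                             FROM invoice_line_items
                             WHERE invoice_id = ?)
         WHERE id = ?`
      )
      .bind(invoiceId, invoiceId),
  ]);
}
```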

Local development uses Miniflare, which emulates D1 locally. It works well, but there are occasional behavioral differences from production D1. Test against your actual D1 database before deploying breaking schema changes.

Cost and Performance

For our usage (a few hundred API calls per day), the cost is effectively zero. Workers’ free tier includes 100,000 requests per day, and D1’s free tier includes 5 million row reads per day. You would need significant traffic before hitting paid tiers.

Response times are consistently under 50ms globally, with most of that being D1 query time. There is no cold start, no connection establishment, and no VPC networking to traverse. The database is just there.

When to Use Workers + D1

This stack works well for:

  • Internal tools and APIs where you want zero infrastructure management
  • Read-heavy workloads where D1’s SQLite performance shines
  • Global APIs where edge latency matters
  • Prototypes that might grow into production services

It is less suitable for:

  • Write-heavy workloads with transaction requirements
  • Complex relational schemas that need advanced ORM support
  • Applications requiring real-time features (use Durable Objects instead)

For us, it replaced what would have been an EC2 instance, an RDS database, load balancer configuration, and ongoing maintenance — all for a small internal tool that did not warrant that infrastructure. Sometimes the best server is no server at all.


KeyQ designs and manages cloud infrastructure for businesses that want to move fast without accumulating operational debt. Reach out if you are evaluating serverless or edge architectures.