Migration Completed

Full Cloudflare Platform Migration

Full stack on the edge: AI, storage, containers and more

Comprehensive migration to the Cloudflare platform leveraging Workers AI, Hyperdrive, R2, Containers, D1, KV, Queues, and virtually all available services to achieve an edge-first architecture with minimal latency and predictable costs.

cloudflare
12 weeks
2025
2 engineers

The Problem

Infrastructure was spread across multiple providers (AWS, Vercel, self-hosted services), with growing costs, latency that varied by region, and the operational complexity of keeping everything in sync. Each service had its own deployment, monitoring, and billing system.

The Solution

Progressive migration to Cloudflare as the primary platform. Workers for edge business logic, Hyperdrive for connecting to existing databases without data migration, R2 for asset storage, Workers AI for inference without own GPUs, Containers for full-runtime workloads, D1 and KV for edge-native data, and Queues for async processing.
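Most of these services attach to a Worker through bindings declared in wrangler.toml. As a rough sketch of what that consolidation looks like in one file (binding names, IDs, and the project name below are placeholders, not this project's real configuration):

```toml
# Illustrative wrangler.toml -- all names and IDs are placeholders.
name = "edge-api"
main = "src/index.ts"
compatibility_date = "2025-01-01"

[ai]
binding = "AI"                      # Workers AI inference

[[hyperdrive]]
binding = "HYPERDRIVE"              # accelerated access to existing PostgreSQL
id = "<hyperdrive-config-id>"

[[r2_buckets]]
binding = "ASSETS"                  # object storage for assets and uploads
bucket_name = "assets"

[[d1_databases]]
binding = "DB"                      # edge-native SQLite
database_name = "app"
database_id = "<d1-database-id>"

[[kv_namespaces]]
binding = "CACHE"                   # cache, feature flags, configuration
id = "<kv-namespace-id>"

[[queues.producers]]
binding = "JOBS"                    # async processing
queue = "jobs"

[[queues.consumers]]
queue = "jobs"
```

One configuration file and one deploy command replace the per-provider deployment and billing setups described above.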

The Results

Global latency was drastically reduced by running logic at the edge, infrastructure costs became more predictable, cold starts were eliminated, operations were unified under a single provider with a single dashboard, and AI inference became possible without managing GPUs.

Measurable Results

Global latency (p50): 180-400 ms → <30 ms at the edge (90% improvement)

Cold starts: 1-5 s (Lambda) → 0 ms (Workers) (100% improvement)

Cloudflare services used: 14+

Providers consolidated: 4+ → 1 (Cloudflare)

Want results like these?

Let's scope your project — 30 min, no commitment.

Schedule assessment

Project Phases

Inventory & migration plan

1 week

Mapping all services, dependencies, and data. Migration prioritization by impact and complexity.

Workers & business logic

3 weeks

Migration of APIs and business logic to Workers with Hono framework. Hyperdrive to connect to existing PostgreSQL.
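The shape of the data-access pattern in this phase can be sketched in TypeScript. In a real Worker the query function would be a postgres.js client built from the Hyperdrive binding's `connectionString`; here it is abstracted behind a small interface so the handler logic stays self-contained. All names (`Env`, `HYPERDRIVE`, the `/users` route) are illustrative, not taken from the project:

```typescript
// Sketch of a Worker route backed by Hyperdrive. Names are illustrative.

interface Hyperdrive {
  connectionString: string; // what Cloudflare's Hyperdrive binding exposes
}

interface Env {
  HYPERDRIVE: Hyperdrive;
}

// Stand-in for a SQL client (e.g. postgres.js connected via
// env.HYPERDRIVE.connectionString); abstracted so the handler is testable.
type Query = (sql: string) => Promise<unknown[]>;

async function handleRequest(url: URL, query: Query): Promise<Response> {
  if (url.pathname === "/users") {
    const rows = await query("SELECT id, name FROM users LIMIT 50");
    return new Response(JSON.stringify(rows), {
      headers: { "content-type": "application/json" },
    });
  }
  return new Response("Not found", { status: 404 });
}
```

The same handler can sit behind a Hono router; the key point is that the Worker only ever sees a connection string, while Hyperdrive pools and accelerates the actual PostgreSQL connections.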

Storage: R2, D1, KV

2 weeks

R2 for assets and uploads, D1 for edge-native data, KV for cache and configuration. S3 to R2 migration.
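The "KV for cache" part of this phase typically follows a cache-aside pattern: read KV first, fall back to the source of truth, write back with a TTL. A minimal sketch, with `KVLike` as a stand-in for the subset of Cloudflare's KV binding used here (the helper name and TTL are illustrative):

```typescript
// Cache-aside over a KV-like store. KVLike mirrors the subset of
// Cloudflare's KVNamespace binding this pattern needs.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

async function cached<T>(
  kv: KVLike,
  key: string,
  ttlSeconds: number,
  load: () => Promise<T>, // source of truth, e.g. PostgreSQL via Hyperdrive
): Promise<T> {
  const hit = await kv.get(key);
  if (hit !== null) return JSON.parse(hit) as T;
  const value = await load();
  await kv.put(key, JSON.stringify(value), { expirationTtl: ttlSeconds });
  return value;
}
```

Because KV is eventually consistent across edge locations, this suits configuration and feature flags better than data that must be read-your-writes consistent.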

Workers AI & Containers

3 weeks

Workers AI integration for edge inference, Containers for workloads needing full runtime (e.g. headless Chrome, image processing).
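Edge inference with Workers AI reduces to calling the `AI` binding's `run(model, input)` method. The sketch below wraps that call for embeddings; the interface mirrors the binding's shape and the model name is one of Cloudflare's published embedding models, but both should be read as illustrative rather than this project's exact setup:

```typescript
// Embeddings via a Workers AI-shaped binding. Illustrative sketch.
interface Ai {
  run(model: string, input: unknown): Promise<unknown>;
}

interface EmbeddingResult {
  data: number[][]; // one vector per input text
}

async function embed(ai: Ai, texts: string[]): Promise<number[][]> {
  // In a Worker, `ai` would be env.AI from the [ai] binding.
  const result = (await ai.run("@cf/baai/bge-base-en-v1.5", {
    text: texts,
  })) as EmbeddingResult;
  return result.data;
}
```

Classification and text generation follow the same pattern with different model identifiers and input shapes, which is what makes "inference without own GPUs" a one-binding change rather than new infrastructure.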

Queues, Durable Objects & async

2 weeks

Async processing with Queues, distributed state with Durable Objects, and cron job migration.
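Durable Objects suit rate limiting because each object instance holds strongly consistent state for one key (e.g. one client). The core of such a limiter is ordinary in-memory logic; a fixed-window sketch of the state a Durable Object would keep (class name and limits are illustrative):

```typescript
// Fixed-window rate limiter: the per-key state a Durable Object would hold.
// One object instance per client key gives strongly consistent counts.
class FixedWindowLimiter {
  private windowStart = 0;
  private count = 0;

  constructor(
    private readonly limit: number,    // max requests per window
    private readonly windowMs: number, // window length in ms
  ) {}

  // Returns true if a request at `nowMs` is allowed.
  allow(nowMs: number): boolean {
    if (nowMs - this.windowStart >= this.windowMs) {
      this.windowStart = nowMs; // start a fresh window
      this.count = 0;
    }
    if (this.count >= this.limit) return false;
    this.count++;
    return true;
  }
}
```

In a real Durable Object this state would be persisted via the object's storage API so it survives eviction; the windowing logic is unchanged.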

DNS, WAF & security

1 week

DNS consolidation, WAF rules, page rules, and unified security configuration.
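WAF custom rules are written in Cloudflare's rules expression language. As one hedged example of the kind of rule this phase produces (the path and country list are invented for illustration):

```
(http.request.uri.path contains "/admin" and not ip.geoip.country in {"US" "CA"})
```

A rule like this, with a block or managed-challenge action, replaces equivalent logic previously scattered across application code and upstream provider settings.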

Tech Stack

Technologies

typescript, cloudflare-workers, hono, astro, react, postgresql, sqlite

Cloud Services (CLOUDFLARE)

Workers, Workers AI, Hyperdrive, R2, Containers, D1, KV, Queues, Durable Objects, Pages, DNS, WAF, Images, Stream

Tools

wrangler, github-actions, terraform, miniflare

Implementation Details

The bet on edge-first

Instead of a traditional central server + CDN architecture, we chose to run logic directly at the edge, as close as possible to the user.

Cloudflare services in use

Service            Use
Workers            APIs, business logic, SSR
Workers AI         Embeddings, classification, text generation
Hyperdrive         Accelerated connection to existing PostgreSQL
R2                 File storage, assets, backups
Containers         Headless Chrome, heavy processing
D1                 SQLite database at the edge
KV                 Cache, feature flags, configuration
Queues             Emails, webhooks, async processing
Durable Objects    Distributed state, rate limiting
Pages              Static sites and SSR with Astro

Why Cloudflare and not just AWS

  • 0ms cold starts in Workers (vs 1-5s in Lambda)
  • Predictable pricing based on requests, not compute time
  • Natively edge — not a CDN on top of a server, it’s compute at the edge
  • Hyperdrive — no need to migrate your database, only the connection is accelerated

Have a similar technical challenge?

Let's talk about your infrastructure, architecture, or pipeline. No commitment.

Schedule a Technical Assessment