# Scaling
Strategies for scaling KitchenAsty beyond a single server.
## Multiple API Server Instances
Run multiple instances of the API server behind a load balancer (nginx, HAProxy, or a cloud load balancer).
```
                    ┌── Server 1 (:3000)
Load Balancer ──────┼── Server 2 (:3001)
                    └── Server 3 (:3002)
```

All instances connect to the same PostgreSQL database.
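As a starting point, here is a minimal nginx sketch for the setup above; the upstream name and ports are assumptions, and the proxy headers are needed so Socket.IO WebSocket upgrades pass through:

```nginx
# Round-robin three local API instances (ports are assumptions).
upstream kitchenasty_api {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;

    location / {
        proxy_pass http://kitchenasty_api;
        # Required for WebSocket (Socket.IO) upgrades:
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```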
## Redis Adapter for Socket.IO
When running multiple server instances, Socket.IO events need to be shared across instances. Use the Redis adapter:
```bash
npm install @socket.io/redis-adapter redis
```

```js
import { createAdapter } from '@socket.io/redis-adapter';
import { createClient } from 'redis';

const pubClient = createClient({ url: 'redis://localhost:6379' });
const subClient = pubClient.duplicate();

await Promise.all([pubClient.connect(), subClient.connect()]);
io.adapter(createAdapter(pubClient, subClient));
```

This ensures that events emitted on one server instance are delivered to clients connected to any instance.
## Database Connection Pooling
For high-traffic deployments, use PgBouncer as a connection pooler in front of PostgreSQL:
```
Server instances → PgBouncer → PostgreSQL
```

Update the `DATABASE_URL` to point to PgBouncer instead of PostgreSQL directly.
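A minimal `pgbouncer.ini` sketch, assuming PostgreSQL on its default port and a database named `kitchenasty` (names, ports, and pool sizes here are assumptions to adapt):

```ini
; Sketch only: database name, credentials, and pool sizes are assumptions.
[databases]
kitchenasty = host=127.0.0.1 port=5432 dbname=kitchenasty

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```

Note that when Prisma connects through PgBouncer in transaction pool mode, the connection string should include the `pgbouncer=true` parameter, e.g. `...@127.0.0.1:6432/kitchenasty?pgbouncer=true`.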
## CDN for Static Assets
Serve the admin and storefront static builds from a CDN:
1. Build the frontends:

   ```bash
   npm run build
   ```

2. Upload `packages/admin/dist/` and `packages/storefront/dist/` to your CDN
3. Configure the CDN to serve `index.html` for all routes (SPA fallback)
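If you serve the builds from nginx rather than a managed CDN, `try_files` provides the SPA fallback in step 3; the root path below is an assumption:

```nginx
# SPA fallback: unknown paths fall back to index.html (root path is an assumption).
server {
    listen 80;
    root /var/www/kitchenasty/storefront;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
```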
## Session Stickiness
If not using the Redis adapter for Socket.IO, you'll need sticky sessions to ensure WebSocket connections stay with the same server instance. Most load balancers support this via cookies or IP hashing.
With the Redis adapter, sticky sessions are not required.
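In nginx, one way to get stickiness is IP hashing on the upstream; a sketch, reusing the hypothetical upstream from above:

```nginx
# ip_hash pins each client IP to one instance, keeping its
# WebSocket connection on the same server (ports are assumptions).
upstream kitchenasty_api {
    ip_hash;
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}
```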
## Database Read Replicas
For read-heavy workloads, set up PostgreSQL read replicas and configure Prisma to route reads to replicas:
```
Write queries → Primary
Read queries  → Replica(s)
```

This is an advanced configuration, typically needed only at very high traffic levels.
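To illustrate the routing idea independently of any particular Prisma setup, here is a small sketch that sends writes to the primary and round-robins reads across replicas. The URLs and the `pickDatabaseUrl` helper are hypothetical, for illustration only:

```typescript
// Hypothetical connection URLs; in practice these come from environment config.
const PRIMARY_URL = "postgres://primary:5432/kitchenasty";
const REPLICA_URLS = [
  "postgres://replica1:5432/kitchenasty",
  "postgres://replica2:5432/kitchenasty",
];

let next = 0;

// Writes always go to the primary; SELECTs round-robin across replicas.
function pickDatabaseUrl(sql: string): string {
  const isRead = /^\s*select\b/i.test(sql);
  if (!isRead) return PRIMARY_URL;
  const url = REPLICA_URLS[next % REPLICA_URLS.length];
  next += 1;
  return url;
}
```

In a real deployment this dispatch is usually handled by a Prisma client extension or a proxy rather than hand-rolled, but the split is the same: anything that mutates state must hit the primary, and replication lag means a read issued immediately after a write may not see it on a replica.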