Use these as starting points for [resources] and [data] in your percher.toml. You can always adjust after deploying based on actual usage.
Static site / landing page
Minimal. HTML, CSS, and client-side JS. Marketing sites, documentation, portfolios. Served via a lightweight static file server.
[app]
name = "my-site"
runtime = "node" # or "static"
[web]
port = 3000
health = "/health" # or "/" for static sites
[resources]
memory = "128mb"
cpu = 0.25
- For pure static sites, use a minimal HTTP server like serve or http-server
- No database needed — skip [data] entirely
- 128mb is plenty since there's no server-side processing
- Frameworks: Astro (static), Vite, 11ty, Hugo (with Node server wrapper)
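As a sketch, a package.json for the serve option above (the dist output directory is an assumption; match your build tool's actual output folder):

```json
{
  "scripts": {
    "start": "serve dist -l 3000"
  },
  "dependencies": {
    "serve": "^14.2.0"
  }
}
```

If your site is a single-page app, serve's -s flag adds the index.html fallback for client-side routing.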
API service / webhook handler
Light. REST or GraphQL endpoints, webhook receivers, microservices. Stateless request-response with no frontend.
[app]
name = "my-api"
runtime = "node" # or "python"; use "docker" for custom runtimes
[web]
port = 3000
health = "/health"
[resources]
memory = "256mb"
cpu = 0.5
- Go and Rust APIs can often run with 128mb due to lower memory overhead
- For Python (Flask, FastAPI): 256mb minimum, 512mb if using pandas/numpy
- Add [data] mode = "pocketbase" if you need persistent storage
- Set env vars for external service keys: bunx percher env set STRIPE_KEY=...
- Frameworks: Express, Fastify, Hono, Flask, FastAPI, Gin, Actix, Fiber
Fullstack web app (SSR)
Standard. Server-rendered apps with both frontend and backend. The most common deployment type — includes HTML rendering, API routes, and asset serving.
[app]
name = "my-app"
runtime = "node"
[web]
port = 3000
health = "/health" # or "/api/health"
[resources]
memory = "512mb"
cpu = 0.5
- Next.js, Remix, SvelteKit, Nuxt, and Astro SSR all work out of the box
- 512mb handles SSR rendering, API routes, and typical traffic well
- For apps with image processing or heavy SSR: consider 768mb
- Next.js: set output: "standalone" in next.config.js, or use a standard start script in package.json
- Health endpoint: most frameworks respond successfully on /, so / works as a health check if you have no dedicated endpoint
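If you prefer a dedicated endpoint over probing /, a health route is a few lines. A sketch for the Next.js App Router, assuming the file lives at app/api/health/route.js (other frameworks have equivalent route files):

```javascript
// app/api/health/route.js (sketch) — Next.js expects this exported as `GET`:
//   export function GET() { ... }
function GET() {
  // Keep the handler dependency-free so the check stays fast and reliable
  return Response.json({ status: 'ok' });
}
```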
Fullstack app with database
Standard + Data. CRUD applications, admin panels, dashboards, SaaS apps. Server-rendered frontend with a managed PocketBase database for auth, storage, and data.
[app]
name = "my-app"
runtime = "node"
[web]
port = 3000
health = "/health"
[resources]
memory = "512mb"
cpu = 0.5
[data]
mode = "pocketbase"
- PocketBase provides: SQLite database, REST API, auth, file storage, realtime subscriptions
- Your app receives POCKETBASE_URL (internal) and POCKETBASE_PUBLIC_URL (public with SSL)
- Percher auto-creates a superuser and injects POCKETBASE_ADMIN_EMAIL + POCKETBASE_ADMIN_PASSWORD
- For Vite-based frontends, VITE_POCKETBASE_URL is also injected automatically
- PocketBase admin UI is at pb-<app-name>.percher.run/_/
- The PocketBase sidecar runs in its own container with 128mb — separate from your app's resources
- Good for: todo apps, blogs, CMS, user dashboards, internal tools, MVPs
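A sketch of reading records from the sidecar over PocketBase's REST API using the injected POCKETBASE_URL (the /api/collections/&lt;name&gt;/records route is PocketBase's standard list endpoint; the posts collection here is a hypothetical example):

```javascript
// Query the PocketBase sidecar from server-side code via its REST API.
const base = process.env.POCKETBASE_URL || 'http://127.0.0.1:8090';

function recordsUrl(collection, page = 1, perPage = 20) {
  return `${base}/api/collections/${collection}/records?page=${page}&perPage=${perPage}`;
}

async function listPosts() {
  const res = await fetch(recordsUrl('posts'));
  if (!res.ok) throw new Error(`PocketBase request failed: ${res.status}`);
  const data = await res.json();
  return data.items; // PocketBase wraps list results in an `items` array
}
```

Browser code should build URLs from POCKETBASE_PUBLIC_URL instead, since the internal URL is not reachable from outside.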
Real-time application
Standard. Chat apps, live dashboards, collaborative editors, notification systems. Uses WebSocket or Server-Sent Events for persistent connections.
[app]
name = "my-realtime-app"
runtime = "node"
[web]
port = 3000
health = "/health"
[resources]
memory = "512mb" # more if many concurrent connections
cpu = 0.5
- WebSocket and SSE connections are supported through the Caddy reverse proxy
- Memory scales with concurrent connections — estimate ~1mb per active WebSocket connection
- For 500+ concurrent connections: increase to 768mb or 1gb
- If using PocketBase realtime subscriptions, add [data] mode = "pocketbase"
- Frameworks: Socket.io, ws, Hono WebSocket, Fastify WebSocket
Background worker / queue processor
Light. Job processors, scheduled tasks, queue consumers. Typically no web frontend — but still needs a health endpoint for Percher's health checks.
[app]
name = "my-worker"
runtime = "node" # or "python"; use "docker" for custom runtimes
[web]
port = 3000 # minimal health server
health = "/health"
[resources]
memory = "256mb" # depends on job complexity
cpu = 0.5
- Your worker MUST expose an HTTP health endpoint — Percher needs it to verify the container is running
- Pattern: run your job processor alongside a minimal HTTP server on port 3000 that returns 200 on /health
- For CPU-intensive batch jobs: increase cpu to 1.0
- For memory-intensive processing (large files, data transforms): increase memory to 512mb or 1gb
- Use env vars for queue connection strings: bunx percher env set REDIS_URL=...
Python data app / ML inference
Heavy. Data processing apps, ML model serving, Jupyter-backed APIs, scientific computing. Python with numpy, pandas, scikit-learn, or lightweight model inference.
[app]
name = "my-ml-app"
runtime = "python"
[web]
port = 8000 # FastAPI/uvicorn default
health = "/health"
[resources]
memory = "1gb" # or more for large models
cpu = 1.0
- Python apps with numpy/pandas need at least 512mb — 1gb recommended
- For model inference (scikit-learn, small transformers): 1gb minimum
- Large language models or image models likely exceed Percher's resource limits — use a dedicated GPU service
- FastAPI with uvicorn: use port 8000, set in percher.toml [web] port = 8000
- Flask: default port 5000, update accordingly
- requirements.txt or pyproject.toml is auto-detected by Nixpacks
- Percher has a 30-second health check timeout — ensure your app starts within that window
Go or Rust microservice
Minimal. Compiled-language services with minimal memory footprint. Ideal for high-throughput, low-latency APIs and utilities.
[app]
name = "my-service"
runtime = "docker" # Go/Rust via Dockerfile until native labels land
[web]
port = 8080 # Go default; Rust varies
health = "/health"
[resources]
memory = "128mb" # compiled binaries are very efficient
cpu = 0.25
- Go and Rust binaries are extremely memory-efficient — 128mb handles most workloads
- For services processing large payloads or many concurrent requests: 256mb
- Use a Dockerfile for Go/Rust projects until native runtime labels are added to percher.toml
- Build a small final image with a multi-stage Dockerfile for best cold start and disk usage
- Rust build times can be long (2-5 min) — prefer release binaries in the final stage
- Common ports: Go stdlib net/http defaults to 8080, Actix/Axum default varies — set explicitly
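A sketch of the multi-stage Dockerfile mentioned above, for a Go service (the ./cmd/server package path and Go version are assumptions; adjust to your project layout):

```dockerfile
# Build stage: compile a static binary
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app ./cmd/server

# Final stage: tiny image for fast cold starts and low disk usage
FROM gcr.io/distroless/static-debian12
COPY --from=build /bin/app /app
EXPOSE 8080
ENTRYPOINT ["/app"]
```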
Monorepo / multi-package app
Standard. Apps built from a monorepo with multiple packages (e.g., shared libraries + app). Nixpacks builds from the project root.
[app]
name = "my-app"
runtime = "node"
[web]
port = 3000
health = "/health"
[resources]
memory = "512mb"
cpu = 0.5
- Percher uploads and builds from the project root — all workspace packages are included
- Ensure your start script in package.json builds and starts the right package
- For Turborepo/Nx: the start script should handle the build chain
- If using workspace dependencies, ensure they're built before the app starts
- Larger monorepos may need more memory during the build phase (this doesn't affect runtime)
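For example, a root package.json for an npm-workspaces monorepo with Turborepo might wire the build chain like this (the apps/web package path is an assumption):

```json
{
  "scripts": {
    "build": "turbo run build",
    "start": "npm run start --workspace=apps/web"
  }
}
```

Here turbo run build builds workspace packages in dependency order, so shared libraries exist before the app starts.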
PHP application
Standard. Laravel, Symfony, or vanilla PHP apps. Use a Dockerfile until native PHP runtime labels are added to Percher config.
[app]
name = "my-php-app"
runtime = "docker" # PHP via Dockerfile until native labels land
[web]
port = 8080
health = "/health"
[resources]
memory = "256mb"
cpu = 0.5
- Use a Dockerfile for PHP projects until native PHP runtime labels are added to percher.toml
- Laravel: ensure APP_KEY is set via env vars, port matches artisan serve
- For Laravel with database: add [data] mode = "pocketbase" or set DATABASE_URL env var to an external DB
- PHP built-in server: php -S 0.0.0.0:8080 -t public
- Memory: 256mb for most PHP apps, 512mb for Laravel with queues or heavy ORM usage
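A minimal Dockerfile for the built-in-server approach above (fine for small apps and prototypes; php-fpm behind a web server is the more robust choice for production Laravel):

```dockerfile
FROM php:8.3-cli
WORKDIR /app
COPY . .
EXPOSE 8080
# Port and docroot match the [web] settings and the php -S command above
CMD ["php", "-S", "0.0.0.0:8080", "-t", "public"]
```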