Introduction
Automation shouldn’t be hard—or risky. If you’ve ever tried stitching tools together with custom scripts or paid SaaS zaps, you know the pain: fragile logic, vendor lock-in, and growing costs. n8n fixes that. It’s an open-source, self-hostable automation platform that connects your apps and APIs with a clean, visual builder—while still giving you full control.
In this guide, you’ll learn how to install n8n on macOS with Docker Desktop the right way: persistent storage, PostgreSQL (not SQLite), encrypted credentials, safe webhooks, and clean upgrade/backup routines. We’ll keep things fast and beginner-friendly without skipping the production-grade details. By the end, you’ll have a reliable local automation hub you can use for real work—testing webhooks, building lead capture flows, or orchestrating AI pipelines—then promote to a server with confidence.
What You’ll Build
- A local n8n instance running in Docker containers
- PostgreSQL database for durable storage
- A tidy project folder with volumes for backups and Git-friendly exports
- A working webhook and a simple automation you can reuse anywhere
- Optional extras (Redis queue workers, HTTPS, tunnel for public webhooks)
Who This Guide Is For
- Developers who want a safe local environment to build and test automations
- Makers and marketers who need Zapier-style workflows without the monthly bill
- Teams who plan to self-host n8n later but want to start on a Mac today
Prerequisites
- macOS (Apple Silicon or Intel)
- Docker Desktop installed and running
- Basic Terminal comfort
- (Optional) Homebrew to install extras like ngrok or Cloudflare Tunnel
The Big Picture (Why Docker + Postgres)
Why Docker? Consistency. You get the same environment locally and in production—no “works on my machine” drama. Why PostgreSQL? It’s robust and production-preferred. SQLite is fine for quick tests, but Postgres keeps your workflows, execution logs, and credentials solid and recoverable. Why local first? You can iterate safely. Once your workflow is stable, you can move the exact setup to a VPS with almost no changes.
Step 1: Create a Clean Project
Open Terminal and create a dedicated folder:
mkdir -p ~/n8n-stack/{n8n_data,db_data}
cd ~/n8n-stack
- n8n_data stores n8n configuration, credentials, and exports
- db_data stores your PostgreSQL database files
- Keeping them side-by-side makes backups simple (just copy the folder)
Step 2: Add Environment Variables
Create a .env file in ~/n8n-stack. These values configure your containers and keep secrets out of version control.
# ---- n8n core ----
N8N_HOST=localhost
N8N_PORT=5678
N8N_PROTOCOL=http
NODE_ENV=production
# Use a long random string (store in your password manager)
N8N_ENCRYPTION_KEY=CHANGE_ME_TO_A_LONG_RANDOM_64_CHAR_STRING
# Privacy & UX settings
N8N_USER_MANAGEMENT_DISABLED=false
N8N_DIAGNOSTICS_ENABLED=false
N8N_PERSONALIZATION_ENABLED=false
# Execution housekeeping (keeps DB slim)
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=336 # hours (14 days)
EXECUTIONS_DATA_PRUNE_MAX_COUNT=10000
# ---- database ----
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=db
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n_user
DB_POSTGRESDB_PASSWORD=CHANGE_THIS_STRONG_PASSWORD
# ---- queue mode (optional) ----
EXECUTIONS_MODE=regular
QUEUE_BULL_REDIS_HOST=redis
QUEUE_BULL_REDIS_PORT=6379
Pro tip: Generate a secure key right in Terminal:
python3 - <<'PY'
import secrets, string
# Letters and digits only, so the value pastes cleanly into .env without quoting
alphabet = string.ascii_letters + string.digits
print(''.join(secrets.choice(alphabet) for _ in range(64)))
PY
Paste the output into N8N_ENCRYPTION_KEY.
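Prefer a one-liner? openssl (preinstalled on macOS) generates a 64-character hex string that works just as well:
openssl rand -hex 32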
Step 3: Create the Docker Compose File
Add a docker-compose.yml in the same folder:
version: "3.8"
services:
  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    env_file: .env
    ports:
      - "5678:5678"
    environment:
      - WEBHOOK_URL=${N8N_PROTOCOL}://${N8N_HOST}:${N8N_PORT}/
    volumes:
      - ./n8n_data:/home/node/.n8n
    depends_on:
      - db
  db:
    image: postgres:15
    restart: unless-stopped
    environment:
      - POSTGRES_USER=${DB_POSTGRESDB_USER}
      - POSTGRES_PASSWORD=${DB_POSTGRESDB_PASSWORD}
      - POSTGRES_DB=${DB_POSTGRESDB_DATABASE}
    volumes:
      - ./db_data:/var/lib/postgresql/data
  # Optional: enable later for parallel/queued executions
  # redis:
  #   image: redis:7-alpine
  #   restart: unless-stopped
Why this layout works:
- The n8n container persists everything important inside ./n8n_data
- Postgres data lives in ./db_data
- env_file keeps secrets outside the YAML and out of Git
Step 4: Start n8n
From ~/n8n-stack:
docker compose up -d
open http://localhost:5678
You’ll see the owner setup screen. Create your admin account with a strong password.
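If the editor doesn’t load right away, the containers may still be starting. Two standard Docker Compose commands (run from ~/n8n-stack) show container status and stream the n8n logs:
docker compose ps
docker compose logs -f n8n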
Step 5: Lock In Your Base Settings
Go to Settings → General and confirm:
- Timezone: set your local time
- Webhook URL: http://localhost:5678/ for local testing
- Executions: pruning enabled (already set via .env)
Save changes.
Step 6: Smoke Test a Webhook (Your First Automation)
1. Click Create Workflow → Start from scratch
2. Add a Webhook node:
- HTTP Method: POST
- Path: hello (or leave the auto path)
3. Add a Respond to Webhook node:
- Response: JSON
- Body: { "ok": true, "received": "{{$json.body.ping || 'pong'}}" }
4. Connect Webhook → Respond to Webhook
5. Click Listen for test event on the Webhook node
6. In Terminal:
curl -X POST "http://localhost:5678/webhook-test/<your-id>" \
-H "Content-Type: application/json" \
-d '{"ping":"pong"}'
You’ll see the request arrive in the node output and receive a 200 JSON response.
Click Activate to get a Production URL you can call even when the editor isn’t listening.
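Once activated, you can call the Production URL the same way; n8n serves it under /webhook/ instead of /webhook-test/. The path below assumes you kept the hello path from the Webhook node:
curl -X POST "http://localhost:5678/webhook/hello" \
  -H "Content-Type: application/json" \
  -d '{"ping":"pong"}'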
Step 7: Expose Your Webhook Publicly (Optional, Handy)
When you want to test webhooks from external services (forms, Stripe, GitHub), use a secure tunnel.
Option A — ngrok
brew install ngrok/ngrok/ngrok
ngrok http http://localhost:5678
Option B — Cloudflare Tunnel (free, stable)
brew install cloudflare/cloudflare/cloudflared
cloudflared tunnel --url http://localhost:5678
Copy the HTTPS URL it gives you. In n8n Settings → General, temporarily set Webhook URL to that value so your Production URLs use HTTPS.
Step 8: Build a Simple but Significant Automation
Here’s a real, useful starter: Uptime & Error Watchdog.
- Every 5 minutes: ping a list of URLs
- If any return >= 400, send a Slack alert with the URL and status code
Import the workflow below (n8n → Import → paste JSON). Then edit the URL list and Slack channel.
{
  "name": "Uptime & Error Watchdog",
  "nodes": [
    {
      "parameters": {
        "triggerTimes": { "item": [ { "mode": "everyX", "unit": "minutes", "value": 5 } ] }
      },
      "id": "Cron",
      "name": "Cron (every 5m)",
      "type": "n8n-nodes-base.cron",
      "typeVersion": 1,
      "position": [220, 280]
    },
    {
      "parameters": {
        "functionCode": "const urls = [\n 'https://www.ramlit.com/',\n 'https://www.colorpark.io/',\n 'https://www.xcybersecurity.io/',\n 'https://www.mejba.me/'\n];\nreturn urls.map(u => ({ url: u }));"
      },
      "id": "Seed",
      "name": "Seed URLs",
      "type": "n8n-nodes-base.function",
      "typeVersion": 2,
      "position": [440, 280]
    },
    {
      "parameters": {
        "url": "={{$json.url}}",
        "responseFormat": "string",
        "options": { "ignoreResponseCode": true, "timeout": 10000 }
      },
      "id": "HTTP",
      "name": "HTTP Check",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4,
      "position": [660, 280]
    },
    {
      "parameters": {
        "conditions": {
          "number": [ { "value1": "={{$json.statusCode}}", "operation": "largerEqual", "value2": 400 } ]
        }
      },
      "id": "IF",
      "name": "IF error (>=400)",
      "type": "n8n-nodes-base.if",
      "typeVersion": 1,
      "position": [880, 280]
    },
    {
      "parameters": {
        "channel": "#alerts",
        "text": "Uptime alert: {{$json.url}} returned HTTP {{$json.statusCode}}"
      },
      "id": "Slack",
      "name": "Slack Alert",
      "type": "n8n-nodes-base.slack",
      "typeVersion": 1,
      "position": [1100, 240],
      "credentials": { "slackApi": { "id": "replace-in-ui" } }
    }
  ],
  "connections": {
    "Cron (every 5m)": { "main": [ [ { "node": "Seed URLs", "type": "main", "index": 0 } ] ] },
    "Seed URLs": { "main": [ [ { "node": "HTTP Check", "type": "main", "index": 0 } ] ] },
    "HTTP Check": { "main": [ [ { "node": "IF error (>=400)", "type": "main", "index": 0 } ] ] },
    "IF error (>=400)": { "main": [ [ { "node": "Slack Alert", "type": "main", "index": 0 } ], [] ] }
  }
}
Why this is valuable: You’ll catch downtime or routing mistakes fast—before customers do. Extend it by logging failures to Google Sheets or Notion.
Step 9: Credentials, Privacy, and Security Basics
Even locally, treat secrets with care:
- Store API tokens in Credentials (they’re encrypted with your N8N_ENCRYPTION_KEY).
- For inbound webhooks, add a secret header (e.g., X-Webhook-Secret) and verify it with a Function node (see the sketch below).
- Keep diagnostics and personalization disabled for privacy.
- Use a password manager to store your .env secrets; don’t commit them to Git.
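Here’s a minimal sketch of that check as a Function node placed right after the Webhook node. It assumes the Webhook node’s default output (request headers under json.headers) and uses a hard-coded placeholder secret purely for illustration:
// Reject requests that don't carry the expected X-Webhook-Secret header.
// 'my-shared-secret' is a placeholder - keep the real value out of the workflow (e.g., in an env var).
const expected = 'my-shared-secret';
const received = items[0].json.headers['x-webhook-secret']; // header names arrive lowercased
if (received !== expected) {
  throw new Error('Webhook secret mismatch - request rejected');
}
return items;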
Step 10: Backups and Workflow Exports
Your most important assets are in two places:
- ~/n8n-stack/n8n_data (n8n config & credential store)
- ~/n8n-stack/db_data (Postgres data)
Quick manual backup:
1. Stop containers: docker compose down
2. Copy the whole ~/n8n-stack folder to a safe drive or S3
3. Start again: docker compose up -d
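If you’d rather script those three steps, here’s a minimal sketch; the ~/Backups destination is an assumption, point it wherever you like:
mkdir -p ~/Backups
cd ~/n8n-stack
docker compose down
tar -czf ~/Backups/n8n-stack-$(date +%Y%m%d).tar.gz -C ~ n8n-stack
docker compose up -d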
Export workflows to a single JSON file (great for Git):
docker compose exec n8n \
  n8n export:workflow --all --output=/home/node/.n8n/exports.json
# The file will appear inside ./n8n_data
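To restore from that export later (assuming the file still sits at the same path inside the container), the matching CLI command is:
docker compose exec n8n \
  n8n import:workflow --input=/home/node/.n8n/exports.json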
Step 11: Upgrading n8n Safely
1. Export your workflows (exports.json)
2. Make a quick copy of the n8n_data and db_data folders
3. Pull the latest image and recreate:
docker compose pull
docker compose up -d
If anything looks off, you can roll back by swapping your backup folders back in.
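If you want tighter control over when upgrades happen, pin a specific image tag in docker-compose.yml instead of latest; the tag below is only an example, check Docker Hub for current releases:
services:
  n8n:
    image: n8nio/n8n:1.64.0  # example tag - replace with the release you've tested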
Step 12: Optional Performance Boost (Workers + Redis)
When workflows become heavy (AI calls, file uploads, slow APIs), switch to queue mode and spawn workers.
1. Uncomment the redis service in docker-compose.yml
2. Set EXECUTIONS_MODE=queue in .env
3. Start the services and a worker:
docker compose up -d
docker compose run --no-deps --name n8n-worker-1 \
  -e EXECUTIONS_MODE=queue \
  -e QUEUE_BULL_REDIS_HOST=redis \
  n8n n8n worker
You can run multiple workers for parallel execution.
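Each additional worker is the same command with a new container name, for example:
docker compose run --no-deps --name n8n-worker-2 \
  -e EXECUTIONS_MODE=queue \
  -e QUEUE_BULL_REDIS_HOST=redis \
  n8n n8n worker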
Step 13: From Mac to Production (When You’re Ready)
The beauty of Docker is that promotion is simple. On a VPS (or in Kubernetes):
- Reuse the same images and environment variables
- Put n8n behind Caddy/Nginx with HTTPS (Let’s Encrypt)
- Add Cloudflare Access or your SSO for a secure admin UI
- Schedule regular backups of the Postgres DB and n8n_data
Your local learnings move with you—no rebuild required.
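For a sense of how little glue that takes, here is a minimal Caddyfile sketch for a VPS (n8n.example.com is a placeholder domain; Caddy obtains the Let’s Encrypt certificate automatically):
n8n.example.com {
    reverse_proxy localhost:5678
}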
Common Pitfalls (and Quick Fixes)
“Port already in use.”
Change N8N_PORT in .env and the ports mapping in docker-compose.yml, then docker compose up -d.
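To see which process currently owns the port before changing anything:
lsof -nP -iTCP:5678 -sTCP:LISTEN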
“Permission denied” on volumes. Fix ownership/permissions:
chmod -R 775 ~/n8n-stack/n8n_data ~/n8n-stack/db_data
Webhook test URL errors.
- Use the Test URL only while “Listen for test event” is active.
- After clicking Activate, use the Production URL.
Big database growth.
- Keep pruning on (already configured).
- Avoid saving large binary data in nodes; use S3/Drive where possible.
Comparisons: Why n8n vs. Zapier/Make?
- Cost control: n8n is open source and self-hostable—scale without per-zap penalties.
- Flexibility: code-level logic via Function nodes; create custom nodes if needed.
- Data control: run on your hardware; align with compliance or client demands.
- Portability: Dockerized setup mirrors production—change environments easily.
Real-World Use Cases You Can Build Next
- Lead Intake: Webhook → Normalize → Slack notify → Google Sheets append
- Daily AI Digest: Cron → Search API → OpenAI summarize → Post to Notion
- Incident Routing: GitHub/CI Webhook → Parse logs → Slack/PagerDuty alert
- Finance Ops: CSV import → clean data → store in Postgres → email report
- Sales Ops: Stripe event → enrich with CRM → notify and tag in Slack
Each of these can start on your Mac and be productionized later with queues and HTTPS.
Bullet Points / Quick Takeaways
- Docker + Postgres gives you a stable, production-like n8n on macOS.
- Encrypt credentials with N8N_ENCRYPTION_KEY and avoid committing secrets.
- Use Test URLs while listening; Production URLs after activation.
- Back up n8n_data and db_data or export workflows regularly.
- Add Redis + workers when flows get heavier; you’ll gain parallelism.
- Tunnels (ngrok/Cloudflare) make external webhook testing simple and secure.
- Promotion to a VPS is mostly copy-paste: same Compose file, stronger perimeter.
Call to Action
If you’re serious about automation, don’t stop at “hello world.” Pick one process—lead intake, uptime alerts, or a daily AI report—and ship it today. Want a tailored starter workflow (Slack + Sheets, or AI summary → Notion)? Tell me your destination tools and I’ll provide an import-ready JSON you can run immediately.
🤝 Hire / Work with me:
- 🔗 Fiverr (custom builds, integrations, performance): https://www.fiverr.com/s/EgxYmWD
- 🌐 Mejba Personal Portfolio: https://www.mejba.me
- 🏢 Ramlit Limited: https://www.ramlit.com
- 🎨 ColorPark Creative Agency: https://www.colorpark.io
- 🛡 xCyberSecurity Global Services: https://www.xcybersecurity.io
FAQ
1) Can I use SQLite instead of Postgres? You can, but it’s not ideal for anything beyond quick experiments. Postgres is more reliable, easier to back up, and recommended for real work.
2) Do I need HTTPS on my Mac? For local development, no. If you expose webhooks publicly, use a tunnel (ngrok/Cloudflare). For production, put n8n behind a reverse proxy with HTTPS.
3) How do I keep my workflows safe during upgrades?
Export workflows to JSON, back up n8n_data and db_data, then upgrade. If something breaks, roll back the folders.
4) Why isn’t my Test URL working after a minute? Test URLs only work while the editor listens for events. Click “Listen for test event” again or use the Production URL after activation.
5) Can I run multiple workflows in parallel on my Mac? Yes. Switch to queue mode with Redis and start one or more workers.
6) How do I connect to Slack, Google Sheets, or Notion?
Create credentials in Credentials (OAuth or token), then select them in the node. n8n stores them encrypted using your N8N_ENCRYPTION_KEY.
7) Is n8n suitable for enterprise? Yes—when self-hosted properly with SSO/OIDC, HTTPS, backups, and role-based access. Your Mac setup mirrors the architecture you’ll use in production.
8) What about AI workflows? n8n integrates with OpenAI and other providers. Add Function nodes for custom prompts, or chain APIs (search → summarize → publish) with retries and timeouts.
You’re done. You’ve installed n8n on your Mac with Docker Desktop, verified webhooks, and learned a production-minded way to build automations you can trust. Now put it to work.