
Building a Self-Hosted Raspberry Pi Server Cluster
Complete guide to setting up a professional 4-node Raspberry Pi 5 cluster with rack mounting, PoE, NVMe storage, and Coolify orchestration for production workloads.
Draft - Work in Progress
This blog post is currently a draft and work in progress. Content may be incomplete or subject to changes.
When I decided to move away from expensive cloud hosting for my development projects, I knew I wanted something powerful, cost-effective, and fun to build. Enter the Raspberry Pi 5 cluster - a professional-grade server setup that delivers impressive performance while keeping costs reasonable.
Why Raspberry Pi 5 for Production?
Pi 5 Performance Leap
The Raspberry Pi 5 represents a massive performance jump with its ARM Cortex-A76 CPU, up to 8GB RAM, and PCIe support. It's finally powerful enough for serious production workloads.
The Pi 5 brought several game-changing improvements:
- 8GB RAM option: Finally enough memory for containerized applications
- PCIe support: Direct NVMe storage for dramatically better I/O
- Improved networking: Gigabit Ethernet with better throughput
- PoE+ support: Clean power delivery without individual adapters
Hardware Setup: Professional Rack Mounting
UCTRONICS Pi 5 Rack Mount System
The foundation of any serious Pi cluster is proper mounting. The UCTRONICS Pi 5 Rack Mount transforms four individual Pis into a clean, professional 1U server.

Key benefits:
- 1U standard rack size: Fits perfectly in server racks
- Excellent cooling: Built-in fans with temperature control
- Clean cable management: Organized power and network connections
- Easy access: Individual Pi modules slide out for maintenance
Power Over Ethernet with UCTRONICS PoE HAT
Rather than dealing with individual power supplies, I implemented PoE (Power over Ethernet) for clean, centralized power management.
"hl-comment"># Each Pi draws approximately 15W under load
"hl-comment"># PoE+ standard provides up to 25.5W per port
"hl-comment"># Perfect "hl-keyword">for Pi 5 + NVMe + PoE HAT overhead
The UCTRONICS Pi 5 PoE HAT provides:
- IEEE 802.3at PoE+ compliance: Reliable 25W power delivery
- Active cooling: Integrated fan with PWM control
- GPIO passthrough: Maintains access to all Pi 5 features
- Temperature monitoring: Automatic fan speed adjustment
PoE Benefits
PoE enables remote power cycling through network switches - essential for headless servers. No more physical access needed for hard resets!
Network Infrastructure: UniFi PoE Pro Max 16
For networking and PoE delivery, I chose the UniFi PoE Pro Max 16:
UniFi PoE Pro Max 16 Specifications:
- 16x Gigabit PoE+ ports (25.5W each)
- 400W total PoE budget
- Layer 2/3 switching capabilities
- VLAN support for network segmentation
- Remote power cycling per port
VLAN Configuration: I isolated the Pi cluster on a dedicated VLAN for security and management:
# VLAN Configuration
cluster_vlan:
  vlan_id: 100
  subnet: 192.168.100.0/24
  gateway: 192.168.100.1
  dns: 1.1.1.1, 1.0.0.1

# Pi assignments
pi_nodes:
  pi-01: 192.168.100.10
  pi-02: 192.168.100.11
  pi-03: 192.168.100.12
  pi-04: 192.168.100.13
This setup enables:
- Remote power cycling: Reset frozen nodes without physical access
- Network isolation: Cluster traffic separated from main network
- Centralized management: Single switch controls entire cluster
- Monitoring: Per-port power consumption tracking
Storage: NVMe Performance Boost
UCTRONICS NVMe Board Installation
The biggest performance bottleneck on previous Pi generations was storage. The Pi 5's PCIe support changes everything.
UCTRONICS NVMe Board benefits:
- M.2 2280 NVMe support: Full-size SSDs for maximum capacity
- PCIe Gen 2 interface: ~450MB/s throughput vs 50MB/s on SD cards
- HAT+ form factor: Stacks cleanly with PoE HAT
- No external power: Powered directly from Pi 5
Raspberry Pi Branded 512GB NVMe
I chose the official Raspberry Pi NVMe SSD (512GB) for each node:
"hl-comment"># Performance comparison
SD Card (Class 10): ~50MB/s read/write
Official Pi NVMe: ~450MB/s read/write
"hl-comment"># 9x performance improvement!
Why the official drive:
- Optimized firmware: Specifically tuned for Pi 5 PCIe implementation
- Thermal management: Designed for Pi thermal constraints
- Reliability: Rated for 24/7 operation
- Support: Full compatibility guarantee from Raspberry Pi Foundation
Operating System: Ubuntu Server Setup
Balena Etcher for NVMe Flashing
Setting up Ubuntu on NVMe requires a specific workflow:
"hl-comment"># 1. Download Ubuntu Server 24.04 LTS "hl-keyword">for Raspberry Pi
wget https://releases.ubuntu.com/24.04/ubuntu-24.04-preinstalled-server-arm64+raspi.img.xz
"hl-comment">
# 2. Flash to NVMe using Balena Etcher
"hl-comment"># - Connect NVMe via USB adapter
"hl-comment"># - Flash Ubuntu image to NVMe drive
"hl-comment"># - Do NOT boot Pi yet!
Pre-boot Configuration with user-data
Before first boot, I customize each Pi with cloud-init user-data:
#cloud-config
# Saved to /boot/firmware/user-data (the "#cloud-config" header must be the first line)

# Basic system configuration
hostname: pi-cluster-01
timezone: Australia/Brisbane

# Users and SSH
users:
  - name: james
    groups: [adm, docker, sudo]
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5... # Your public key

# Packages to install
packages:
  - docker.io
  - docker-compose
  - htop
  - iotop
  - git
  - curl
  - ufw

# Docker configuration
runcmd:
  - systemctl enable docker
  - usermod -aG docker james
  - ufw --force enable
  - ufw allow ssh
  - ufw allow 80/tcp
  - ufw allow 443/tcp

# Network configuration for static IP
write_files:
  - path: /etc/netplan/99-cluster.yaml
    content: |
      network:
        version: 2
        ethernets:
          eth0:
            addresses: [192.168.100.10/24]
            gateway4: 192.168.100.1
            nameservers:
              addresses: [1.1.1.1, 1.0.0.1]
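A YAML typo in user-data fails silently at first boot, so it's worth validating the file beforehand. A quick check, assuming a reasonably recent cloud-init on your workstation (older releases use cloud-init devel schema instead):
# Validate user-data against the cloud-init schema before booting
cloud-init schema --config-file user-data

# After first boot, check on the Pi that provisioning completed cleanly
cloud-init status --long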
SSH Certificates for Secure Access
For enhanced security, I implement SSH certificates:
"hl-comment"># Generate CA key on management machine
ssh-keygen -t ed25519 -f ~/.ssh/cluster_ca -C "cluster-ca"
"hl-comment">
# Create host certificate "hl-keyword">for each Pi
ssh-keygen -s ~/.ssh/cluster_ca \
-I pi-cluster-01 \
-h \
-n pi-cluster-01,192.168.100.10 \
-V +52w \
/etc/ssh/ssh_host_ed25519_key.pub
"hl-comment">
# Configure SSH daemon
echo "HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub" >> /etc/ssh/sshd_config
echo "TrustedUserCAKeys /etc/ssh/cluster_ca.pub" >> /etc/ssh/sshd_config
Orchestration: Coolify Across 4 Nodes
Why Coolify Over Kubernetes?
While Kubernetes is powerful, it's overkill for small clusters. Coolify provides the perfect balance:
Coolify Advantages
Coolify offers Docker Swarm orchestration with a beautiful web UI, built-in reverse proxy, SSL certificates, and simple deployment workflows - perfect for small clusters.
Coolify benefits:
- Simple setup: No complex YAML configurations
- Built-in reverse proxy: Automatic Traefik configuration
- SSL automation: Let's Encrypt integration
- Git deployment: Direct GitHub/GitLab integration
- Resource monitoring: Built-in metrics and alerting
Docker Swarm Cluster Setup
First, initialize the Docker Swarm cluster:
"hl-comment"># On pi-cluster-01 (manager node)
docker swarm init --advertise-addr 192.168.100.10
"hl-comment">
# Join other nodes as workers
"hl-comment"># Run this on pi-cluster-02, 03, 04
docker swarm join --token SWMTKN-... 192.168.100.10:2377
"hl-comment">
# Verify cluster
docker node ls
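The monitoring stack later in this post schedules services with node.labels.role == monitor, so it's worth tagging each node with its role from the manager now. The label values are arbitrary; only monitor is referenced later:
# Label nodes by role (run on the manager)
docker node update --label-add role=frontend pi-cluster-02
docker node update --label-add role=backend pi-cluster-03
docker node update --label-add role=monitor pi-cluster-04

# Confirm a label took effect
docker node inspect pi-cluster-04 --format '{{ .Spec.Labels }}'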
Coolify Installation
Install Coolify on the manager node:
"hl-comment"># Install Coolify
curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash
"hl-comment">
# Configure "hl-keyword">for multi-node
"hl-comment"># Coolify automatically detects Swarm nodes
Coolify Configuration:
# coolify.yml
version: '3.8'
services:
  coolify:
    image: coollabsio/coolify:latest
    ports:
      - '8000:80'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - coolify_data:/data
    environment:
      - APP_URL=https://coolify.yourdomain.com
      - DB_PASSWORD=secure_password_here
    deploy:
      placement:
        constraints:
          - node.role == manager

volumes:
  coolify_data:
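If you prefer to run Coolify as a Swarm stack from the compose file above (rather than via the install script), deployment and a quick sanity check look roughly like this; the stack name coolify is arbitrary:
# Deploy the stack from the manager node
docker stack deploy -c coolify.yml coolify

# Confirm the service is running and pinned to the manager
docker stack services coolify
docker service ps coolify_coolify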
Application Deployment Strategy
With Coolify, deploying applications becomes trivial:
"hl-comment"># Deploy a Next.js application
"hl-comment"># 1. Connect GitHub repository in Coolify UI
"hl-comment"># 2. Configure build settings:
"hl-comment"># - Build command: npm run build
"hl-comment"># - Start command: npm start
"hl-comment"># - Port: 3000
"hl-comment"># 3. Set resource constraints per node
"hl-comment"># 4. Deploy with automatic SSL
Resource allocation strategy:
- pi-cluster-01: Coolify management + databases
- pi-cluster-02: Frontend applications (Next.js, React)
- pi-cluster-03: Backend APIs (Node.js, Python)
- pi-cluster-04: Services (Redis, monitoring tools)
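However you drive it from Coolify, the underlying Swarm mechanism for this allocation is a placement constraint against the node labels set earlier. A purely illustrative standalone example (the service name and image are placeholders, not part of my setup):
# Pin a single-replica service to the node labelled role=backend
docker service create \
  --name example-api \
  --constraint 'node.labels.role == backend' \
  --replicas 1 \
  --publish 8080:80 \
  nginx:alpine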
Secure Internet Access: Cloudflare Tunnels
Why Cloudflare Tunnels?
Traditional port forwarding exposes your home IP and requires complex firewall rules. Cloudflare Tunnels provide a secure alternative:
Internet → Cloudflare Edge → Encrypted Tunnel → Your Pi Cluster
Benefits:
- No exposed ports: No inbound firewall rules needed
- DDoS protection: Cloudflare's global network shields your cluster
- Zero-trust security: Built-in access controls
- SSL termination: Automatic HTTPS for all services
Tunnel Setup
Install cloudflared on the manager node:
"hl-comment"># Install cloudflared
wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64.deb
sudo dpkg -i cloudflared-linux-arm64.deb
"hl-comment">
# Authenticate with Cloudflare
cloudflared tunnel login
"hl-comment">
# Create tunnel
cloudflared tunnel create pi-cluster
"hl-comment">
# Configure tunnel
cat > ~/.cloudflared/config.yml << EOF
tunnel: pi-cluster
credentials-file: /home/james/.cloudflared/tunnel-credentials.json
ingress:
- hostname: coolify.yourdomain.com
service: http://192.168.100.10:8000
- hostname: app1.yourdomain.com
service: http://192.168.100.11:3000
- hostname: api.yourdomain.com
service: http://192.168.100.12:8080
- service: http_status:404
EOF
"hl-comment">
# Start tunnel service
sudo cloudflared service install
sudo systemctl enable cloudflared
sudo systemctl start cloudflared
DNS Configuration
Configure your domain in Cloudflare:
"hl-comment"># Add CNAME records pointing to tunnel
coolify.yourdomain.com → tunnel-id.cfargotunnel.com
app1.yourdomain.com → tunnel-id.cfargotunnel.com
api.yourdomain.com → tunnel-id.cfargotunnel.com
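Rather than creating the CNAMEs by hand in the dashboard, cloudflared can register them for you. A quick sketch using the tunnel and hostnames from the config above:
# Create a DNS route per hostname, pointing at the tunnel
cloudflared tunnel route dns pi-cluster coolify.yourdomain.com
cloudflared tunnel route dns pi-cluster app1.yourdomain.com
cloudflared tunnel route dns pi-cluster api.yourdomain.com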
Performance and Monitoring
Cluster Performance Metrics
After optimization, my 4-node cluster delivers:
Total Specifications:
- CPU: 16 cores (4x Cortex-A76 quad-core)
- RAM: 32GB (4x 8GB)
- Storage: 2TB NVMe (4x 512GB)
- Network: 4Gbps aggregate
- Power: ~60W total
Real-world performance:
- Web applications: Easily handles 1000+ concurrent users
- API responses: Sub-100ms response times
- Database operations: 10x faster than SD card setups
- Container startup: 3-5 seconds vs 30+ seconds on SD
Monitoring Stack
I use a lightweight monitoring setup:
# docker-compose.yml for monitoring
version: '3.8'
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - '9090:9090'
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    deploy:
      placement:
        constraints:
          - node.labels.role == monitor
  grafana:
    image: grafana/grafana:latest
    ports:
      - '3001:3000'
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=secure_password
    deploy:
      placement:
        constraints:
          - node.labels.role == monitor
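The compose file mounts a ./prometheus.yml that isn't shown above. Here's a minimal sketch of one, assuming node-exporter is running on each Pi's default port 9100 (setting up node-exporter itself is out of scope here):
# prometheus.yml - minimal scrape config for the four nodes
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'pi-nodes'
    static_configs:
      - targets:
          - '192.168.100.10:9100'
          - '192.168.100.11:9100'
          - '192.168.100.12:9100'
          - '192.168.100.13:9100'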
Cost Analysis
Total Investment
Hardware Costs (AUD):
- 4x Raspberry Pi 5 (8GB): $120 × 4 = $480
- 4x UCTRONICS PoE HAT: $45 × 4 = $180
- 4x UCTRONICS NVMe Board: $35 × 4 = $140
- 4x Pi Official NVMe 512GB: $85 × 4 = $340
- UCTRONICS Rack Mount: $150
- UniFi PoE Pro Max 16: $650
- Miscellaneous cables: $50
Total Hardware: $1,990 AUD (~$1,300 USD)
Operating Costs
Monthly Costs:
- Power (60W × 24h × 30d × $0.25/kWh): $10.80
- Internet (included in home plan): $0
- Domain registration: $1.50
- Cloudflare Pro (optional): $20
Total Monthly: $32.30 AUD (~$21 USD)
ROI Comparison:
- Equivalent cloud resources: $200-300/month
- Break-even point: 6-7 months
- 5-year savings: $10,000+ AUD
Lessons Learned
What Works Great
- PoE power management: Remote resets are invaluable
- NVMe storage: Night and day performance difference
- Coolify simplicity: Much easier than Kubernetes for small setups
- Cloudflare Tunnels: Rock-solid security without complexity
Challenges Faced
- Heat management: Ensure good airflow in rack enclosures
- NVMe compatibility: Stick to officially supported drives
- Network planning: VLAN setup requires careful planning
- Initial complexity: Setup takes time but pays off long-term
Future Improvements
- Add load balancer: HAProxy for better traffic distribution
- Implement backup strategy: Automated backups to cloud storage
- Expand monitoring: More detailed application metrics
- Consider the Raspberry Pi Compute Module 5: Even more professional form factor
Conclusion
Building a Raspberry Pi 5 cluster has been incredibly rewarding. What started as a cost-saving exercise became a learning journey in modern DevOps practices. The combination of rack mounting, PoE, NVMe storage, and professional orchestration tools creates a surprisingly capable platform.
Production Ready
This cluster now runs my development environments, personal projects, and client demos. It's proven reliable, performant, and cost-effective for real production workloads.
The Pi 5 represents a turning point where ARM-based mini computers become genuinely viable for serious computing tasks. Combined with modern tooling like Coolify and Cloudflare Tunnels, you can build enterprise-grade infrastructure on a hobbyist budget.
Whether you're a developer looking to reduce cloud costs, a student learning DevOps, or a hobbyist wanting to understand modern infrastructure, a Pi cluster project offers hands-on experience with real-world technologies.
Building your own Pi cluster? I'd love to hear about your setup! Connect with me on LinkedIn or check out more of my projects on my portfolio.