# On-premises Deployment Options
Scaleout Edge is designed to scale from a single laptop to a multi-region cloud cluster. To support this flexibility, we offer two primary deployment architectures:
- **Starter (Docker Compose):** For R&D, proof-of-concept, and small-scale pilots.
- **Production (Kubernetes):** For mission-critical, high-availability enterprise deployments.
## 1. Starter: Docker Compose
The Starter deployment runs the entire Control Plane (UI, Controller, Combiner, Database) on a single machine using Docker Compose.
### Use Cases
- **R&D & Prototyping:** Rapidly iterate on model architectures and client logic locally.
- **Small Fleets:** Manage up to ~50 edge nodes for a pilot study.
- **Air-Gapped Demos:** Run a complete, self-contained network without internet access.
### Architecture
In this mode, all services run as containers on a shared Docker network:
- **Postgres:** Stores metadata, users, and audit logs.
- **MinIO:** Acts as a local S3-compatible object store for models and compute packages.
- **Controller & Combiner:** Run as lightweight services exposing ports 8090 (API) and 12080 (gRPC).
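In Compose terms, this layout might look roughly like the following. This is a minimal sketch: the service names, images, and environment variables are assumptions for illustration, not the configuration shipped in the repository's `docker-compose.yml`.

```yaml
# Illustrative Compose excerpt -- names and images are assumptions,
# not the shipped configuration.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: scaleout
      POSTGRES_PASSWORD: example   # replace in any real deployment
  minio:
    image: minio/minio
    command: server /data
  controller:
    image: scaleoutsystems/controller   # hypothetical image name
    ports:
      - "8090:8090"    # REST API / UI
      - "12080:12080"  # gRPC for edge clients
    depends_on:
      - postgres
      - minio
```

Because everything shares one Docker network, the services resolve each other by service name (e.g., `postgres`, `minio`) without any external DNS.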
### Getting Started
To spin up a Starter instance:
```shell
# 1. Clone the repository
git clone https://github.com/scaleoutsystems/scaleout-stack.git
cd scaleout-stack

# 2. Start the stack
docker compose up -d

# 3. Access the UI
# Navigate to http://localhost:8090
```
## 2. Production: Kubernetes (K8s)
For enterprise scale, high availability, and strict security requirements, Scaleout Edge should be deployed on Kubernetes. We provide official Helm Charts to automate this process.
### Use Cases
- **Large Scale:** Orchestrating thousands, or even millions, of edge nodes.
- **High Availability:** Redundant Controllers and Combiners to minimize downtime.
- **Enterprise Integration:** Connecting to external identity providers (OIDC/LDAP), managed databases (RDS/CloudSQL), and observability stacks (Datadog/Prometheus).
### Architecture Changes
Moving to production introduces several architectural enhancements:
| Component | Starter (Docker) | Production (K8s) |
|---|---|---|
| Orchestration | `docker-compose.yml` | Helm Charts |
| Storage | Local MinIO container | Managed object storage (AWS S3, Azure Blob, GCS) |
| Database | Local Postgres or MongoDB container | Managed database (OEM or self-hosted) |
| Ingress | Direct port exposure | K8s Ingress Controller (NGINX/Traefik) with TLS termination |
| Scaling | Single instance | Horizontal Pod Autoscaling (HPA) |
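The production-side choices in the table surface as chart configuration. A hedged `values.yaml` excerpt might look like this; the key names below are assumptions for the sketch, and the chart's default `values.yaml` defines the real schema.

```yaml
# Illustrative values.yaml excerpt -- key names are assumptions;
# consult the chart's default values.yaml for the actual schema.
global:
  domain: edge.example.com          # used by the Ingress host rules
database:
  external: true                    # point at a managed database
  existingSecret: scaleout-db-credentials
storage:
  type: s3                          # managed object storage instead of MinIO
  bucket: scaleout-models
ingress:
  enabled: true
  className: nginx
  tls: true                         # TLS terminated at the Ingress
controller:
  replicaCount: 2                   # redundancy for high availability
```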
### Deployment Workflow
Deploying to a K8s cluster (e.g., EKS, AKS, GKE) typically involves:
1. **Configure `values.yaml`:** Define your domain, external database credentials, and resource limits.
2. **Install via Helm:**

   ```shell
   helm repo add scaleout https://charts.scaleoutsystems.com
   helm install scaleout-edge scaleout/scaleout-edge -f values.yaml
   ```

3. **Scale:** As your fleet grows, increase the `replicaCount` of the services (or use Horizontal Pod Autoscaling) to handle more concurrent connections.
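The autoscaling option in step 3 can also be expressed declaratively with a standard Kubernetes `HorizontalPodAutoscaler`. The manifest below is a minimal sketch; the target Deployment name (`scaleout-edge-combiner`) and the CPU threshold are assumptions, not values shipped with the chart.

```yaml
# Minimal HPA sketch scaling a Combiner Deployment on CPU utilization.
# Deployment name and threshold are assumptions for illustration.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: combiner-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: scaleout-edge-combiner   # hypothetical Deployment name
  minReplicas: 2                   # keep redundancy even when idle
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above 70% average CPU
```

Applying this with `kubectl apply -f` lets the cluster add and remove Combiner replicas automatically as client connection load changes.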