OKTO Traceability System - Deployment Guide¶
Complete deployment instructions for all environments.
Table of Contents¶
- Deployment Modes
- Docker Deployment
- Native Linux Deployment
- Direct Cloud Deployment
- Configuration Reference
- Production Hardening
- Monitoring Setup
- Backup and Recovery
- Troubleshooting Deployment
- MARS L2 (WET/DRY) Deployment
Deployment Modes¶
Standalone Terminal (Direct Cloud) - Recommended¶
Deploy a single terminal connecting directly to OKTO Cloud. This is the primary deployment model.
Use when:
- Single production line (most common)
- Remote/disconnected location
- Pilot deployment
- Simple setup required
- Quick deployment needed
Benefits:
- Single Docker container
- No additional infrastructure
- Works offline, syncs when connected
- Direct cloud integration
Full Stack (Factory with Server) - Enterprise¶
Deploy all components for large factories with centralized management.
Components:
├── Factory Server (1 instance)
├── PostgreSQL Database
├── Edge Services (N instances)
├── Operator UI
├── Management Dashboard
└── Monitoring Stack (optional)
Use when:
- Multiple production lines (5+)
- Centralized data management needed
- Network connectivity to cloud is limited
- Local reporting required
Development/Testing¶
Full stack with mock mode for testing.
Docker Deployment¶
Prerequisites¶
- Docker 20.10+
- Docker Compose 2.0+
- 4GB RAM minimum (8GB recommended)
- 20GB disk space
Full Stack Deployment¶
# Clone repository
git clone <repository-url> /opt/okto-traceability
cd /opt/okto-traceability
# Create environment file
cp .env.example .env
nano .env # Edit configuration
# Start all services
docker-compose up -d
# Check status
docker-compose ps
# View logs
docker-compose logs -f
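Note that docker-compose up -d returns before the services are actually ready to accept requests. A small retry helper can gate any follow-up provisioning steps; this is a minimal sketch, and the /health path on port 8080 is an assumption — substitute whatever readiness endpoint your build exposes:

```shell
#!/usr/bin/env bash
# wait_for CMD TIMEOUT -- re-run CMD once per second until it succeeds
# or TIMEOUT seconds have elapsed; returns 0 on success, 1 on timeout.
wait_for() {
  local cmd=$1 timeout=${2:-60} elapsed=0
  until eval "$cmd"; do
    elapsed=$((elapsed + 1))
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1
    fi
    sleep 1
  done
}

# Example (hypothetical health endpoint -- adjust to your deployment):
# wait_for "curl -fsS http://localhost:8080/health >/dev/null" 120
```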
Environment Variables¶
Create .env file:
# Database
POSTGRES_USER=okto
POSTGRES_PASSWORD=your-secure-password
POSTGRES_DB=okto_factory
# Cloud Integration
CLOUD_AUTH_TOKEN=your-cloud-token
CLOUD_BASE_URL=https://app.okto.ru/api/v1
# JWT Secret
JWT_SECRET=your-jwt-secret-key
# Monitoring (optional)
GRAFANA_ADMIN_PASSWORD=admin
# Deployment Mode
DEPLOYMENT_MODE=VIA_LOCAL_SERVER
Docker Compose Services¶
# docker-compose.yml services
services:
  # Core Services
  postgres:        # PostgreSQL database
  factory-server:  # Factory server API
  edge-service:    # Edge service API

  # Web UIs
  operator-ui:     # Operator interface
  dashboard:       # Management dashboard

  # Monitoring (optional)
  prometheus:      # Metrics collection
  grafana:         # Dashboards
  loki:            # Log aggregation
Scaling Edge Services¶
For multiple production lines:
# docker-compose.override.yml
services:
  edge-service-line1:
    extends:
      service: edge-service
    environment:
      DEVICE_IDENTIFIER: edge-line-1
      DEVICE_NAME: Production Line 1

  edge-service-line2:
    extends:
      service: edge-service
    environment:
      DEVICE_IDENTIFIER: edge-line-2
      DEVICE_NAME: Production Line 2
Native Linux Deployment¶
Prerequisites¶
- Ubuntu 20.04+ / Debian 11+ / RHEL 8+
- JDK 17+
- Node.js 20+ (for UI)
- 4GB RAM minimum
Installation¶
Using Package Manager (Recommended)¶
# Add OKTO repository
curl -fsSL https://packages.okto.ru/gpg | sudo gpg --dearmor -o /usr/share/keyrings/okto.gpg
echo "deb [signed-by=/usr/share/keyrings/okto.gpg] https://packages.okto.ru/apt stable main" | \
sudo tee /etc/apt/sources.list.d/okto.list
# Install
sudo apt update
sudo apt install okto-edge-service okto-operator-ui
# Start service
sudo systemctl enable --now okto-edge
Manual Installation¶
# Build from source
make build-backend-jars
make build-frontend
# Copy files
sudo mkdir -p /opt/okto
sudo cp edge-service/build/libs/edge-service-all.jar /opt/okto/
sudo cp -r operator-ui/dist /opt/okto/operator-ui
# Create config
sudo mkdir -p /etc/okto
sudo cp config/edge-service.yaml /etc/okto/
# Create service file
sudo tee /etc/systemd/system/okto-edge.service << 'EOF'
[Unit]
Description=OKTO Edge Service
After=network.target
[Service]
Type=simple
User=okto
WorkingDirectory=/opt/okto
ExecStart=/usr/bin/java -jar edge-service-all.jar /etc/okto/edge-service.yaml
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
# Create user and start
sudo useradd -r -s /bin/false okto
sudo systemctl daemon-reload
sudo systemctl enable --now okto-edge
Service Management¶
# Status
sudo systemctl status okto-edge
# Logs
sudo journalctl -u okto-edge -f
# Restart
sudo systemctl restart okto-edge
# Stop
sudo systemctl stop okto-edge
Direct Cloud Deployment¶
For a standalone edge service that connects to OKTO Cloud without a factory server.
Quick Setup¶
# Using installation script
curl -fsSL https://install.okto.ru/edge | sudo bash -s -- \
--cloud-url https://app.okto.ru/api/v1 \
--device-id edge-001
# Start service
sudo systemctl start okto-edge
# Open browser for provisioning
xdg-open http://localhost:3000/provision
Configuration¶
# /etc/okto/edge-service.yaml
server:
  host: 0.0.0.0
  port: 8080

device:
  identifier: edge-001
  name: Production Line 1

connection:
  mode: DIRECT_CLOUD
  allowModeSwitch: false
  offlineBehavior: QUEUE_ONLY

cloud:
  baseUrl: https://app.okto.ru/api/v1
  connectionTimeoutMs: 30000
  requestTimeoutMs: 60000

# No factoryServer section needed
Provisioning¶
After installation:
1. Open http://localhost:3000/provision
2. Sign in with OKTO Cloud credentials
3. Select your device
4. Click Activate
Configuration Reference¶
Edge Service Configuration¶
# config/edge-service.yaml

# Server settings
server:
  host: 0.0.0.0                # Listen address
  port: 8080                   # Listen port

# Device identification
device:
  identifier: edge-001         # Unique device ID
  name: Production Line 1      # Display name
  productionLineId: line-001

# Connection mode
connection:
  mode: DIRECT_CLOUD           # DIRECT_CLOUD or VIA_LOCAL_SERVER
  allowModeSwitch: true        # Allow runtime mode changes
  offlineBehavior: QUEUE_ONLY  # QUEUE_ONLY or QUEUE_AND_WARN

# Factory server (for VIA_LOCAL_SERVER mode)
factoryServer:
  host: factory-server
  port: 8081
  syncIntervalMs: 5000

# Cloud settings (for DIRECT_CLOUD mode)
cloud:
  baseUrl: https://app.okto.ru/api/v1
  connectionTimeoutMs: 30000
  requestTimeoutMs: 60000
  retryAttempts: 3
  retryDelayMs: 1000

# Mock mode (testing)
mock:
  enabled: false
  credentials:
    email: demo@okto.ru
    password: demo123

# Database
database:
  path: data/edge.db           # SQLite database path
  poolSize: 5

# Hardware integration
modbus:
  enabled: true
  host: 192.168.1.100
  port: 502
  slaveId: 1

printers:
  - name: label-printer
    type: VIDEOJET             # VIDEOJET, MARKEM, or ZPL
    host: 192.168.1.101
    port: 8888

scanners:
  - name: main-scanner
    type: MATRIX
    host: 192.168.1.102
    port: 3000
    index: 0

# Logging
logging:
  level: INFO                  # DEBUG, INFO, WARN, ERROR
  file: logs/edge.log
  maxSize: 100MB
  maxFiles: 10
Factory Server Configuration¶
# config/factory-server.yaml
server:
  host: 0.0.0.0
  port: 8081

database:
  host: postgres
  port: 5432
  database: okto_factory
  username: okto
  password: ${DB_PASSWORD}
  poolSize: 10

connectionMode:
  defaultMode: VIA_LOCAL_SERVER
  allowDeviceOverride: true

cloudSync:
  enabled: true
  cloudServerUrl: https://app.okto.ru/api/v1
  authToken: ${CLOUD_AUTH_TOKEN}
  syncIntervalMs: 10000
  batchSize: 100

auth:
  jwtSecret: ${JWT_SECRET}
  tokenExpiration: 86400  # 24 hours

logging:
  level: INFO
  file: logs/factory.log
Environment Variables¶
| Variable | Description | Default |
|---|---|---|
| DEPLOYMENT_MODE | Force deployment mode | auto-detect |
| DB_PASSWORD | Database password | - |
| JWT_SECRET | JWT signing secret | - |
| CLOUD_AUTH_TOKEN | Cloud API token | - |
| CLOUD_BASE_URL | Cloud API URL | https://app.okto.ru/api/v1 |
| LOG_LEVEL | Logging level | INFO |
| MOCK_ENABLED | Enable mock mode | false |
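A wrapper script that launches the services can fail fast when a required variable is missing rather than starting with a broken configuration. A minimal sketch, using the variable names from the table above (the set of "required" variables is an assumption — trim it to your deployment mode):

```shell
#!/usr/bin/env bash
# Fail fast if any required environment variable is unset or empty.
check_env() {
  local missing=0 var
  for var in DB_PASSWORD JWT_SECRET CLOUD_AUTH_TOKEN; do
    if [ -z "${!var:-}" ]; then
      echo "ERROR: $var is not set" >&2
      missing=1
    fi
  done
  return "$missing"
}

# check_env || exit 1
```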
Production Hardening¶
Security Checklist¶
- Change all default passwords
- Use HTTPS with valid certificates
- Configure firewall rules
- Enable audit logging
- Set up log rotation
- Configure backup schedule
- Test disaster recovery
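The first checklist item can be spot-checked mechanically. A sketch that scans an env file for the placeholder values used earlier in this guide (the list of patterns is an assumption — extend it with any other defaults your .env.example ships):

```shell
#!/usr/bin/env bash
# Flag placeholder values left in an env file; prints each offending
# line and returns non-zero if any known default survives.
audit_env_defaults() {
  local file=$1 found=0 pattern
  for pattern in 'your-secure-password' 'your-cloud-token' \
                 'your-jwt-secret-key' 'GRAFANA_ADMIN_PASSWORD=admin'; do
    if grep -q "$pattern" "$file"; then
      echo "DEFAULT LEFT IN PLACE: $(grep "$pattern" "$file")"
      found=1
    fi
  done
  return "$found"
}

# audit_env_defaults /opt/okto-traceability/.env && echo "env file OK"
```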
HTTPS Setup¶
# Obtain certificates (Let's Encrypt via certbot shown; or install your own)
sudo certbot certonly --standalone -d okto.yourfactory.com
# Configure nginx
sudo tee /etc/nginx/sites-available/okto << 'EOF'
server {
    listen 443 ssl;
    server_name okto.yourfactory.com;

    ssl_certificate     /etc/letsencrypt/live/okto.yourfactory.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/okto.yourfactory.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /api {
        proxy_pass http://localhost:8080;
    }
}
EOF
Firewall Rules¶
# Allow only necessary ports
sudo ufw default deny incoming
sudo ufw allow ssh
sudo ufw allow 443/tcp # HTTPS
sudo ufw allow from 192.168.1.0/24 to any port 8080 # Internal API
sudo ufw enable
Resource Limits¶
# docker-compose.yml
services:
  edge-service:
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '0.5'
          memory: 512M
Monitoring Setup¶
Prometheus Configuration¶
# prometheus/prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'edge-service'
    static_configs:
      - targets: ['edge-service:8080']

  - job_name: 'factory-server'
    static_configs:
      - targets: ['factory-server:8081']
Grafana Dashboards¶
Import dashboards from monitoring/dashboards/:
- factory-overview.json - Overall factory metrics
- edge-device.json - Per-device metrics
- sync-status.json - Synchronization health
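Imports can also be scripted against Grafana's HTTP API (POST /api/dashboards/db is standard Grafana; the port and credentials in the commented invocation are assumptions — check your compose file):

```shell
#!/usr/bin/env bash
# Wrap an exported dashboard file in the JSON payload the
# /api/dashboards/db endpoint expects.
make_import_payload() {
  printf '{"dashboard": %s, "overwrite": true}' "$(cat "$1")"
}

# Hypothetical invocation -- adjust host, port and credentials:
# make_import_payload monitoring/dashboards/factory-overview.json |
#   curl -s -u admin:"$GRAFANA_ADMIN_PASSWORD" \
#        -H 'Content-Type: application/json' \
#        -d @- http://localhost:3001/api/dashboards/db
```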
Alerting Rules¶
# prometheus/alerts.yml
groups:
  - name: okto-alerts
    rules:
      - alert: DeviceOffline
        expr: up{job="edge-service"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Edge device {{ $labels.instance }} is offline"

      - alert: SyncQueueBacklog
        expr: edge_sync_queue_size > 1000
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Sync queue backlog on {{ $labels.instance }}"
Backup and Recovery¶
Database Backup¶
# PostgreSQL backup (factory server)
pg_dump -h postgres -U okto okto_factory > backup-$(date +%Y%m%d).sql
# SQLite backup (edge service)
sqlite3 /opt/okto/data/edge.db ".backup /backup/edge-$(date +%Y%m%d).db"
Automated Backups¶
#!/bin/bash
# /etc/cron.daily/okto-backup
set -euo pipefail
BACKUP_DIR=/backup/okto
DATE=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"

# Backup PostgreSQL
docker exec okto-postgres pg_dump -U okto okto_factory | gzip > "$BACKUP_DIR/postgres-$DATE.sql.gz"

# Backup edge databases
for db in /opt/okto/*/data/*.db; do
  name=$(basename "$(dirname "$(dirname "$db")")")
  cp "$db" "$BACKUP_DIR/$name-edge-$DATE.db"
done

# Cleanup old backups (keep 30 days)
find "$BACKUP_DIR" -type f -mtime +30 -delete
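Backups are only useful if they restore, so it is worth verifying the previous night's artifact. A minimal integrity pass over the newest PostgreSQL dump (filename pattern matches the cron script above):

```shell
#!/usr/bin/env bash
# Verify that the newest gzip backup in a directory is readable
# end to end; prints "OK: <file>" on success.
verify_pg_backup() {
  local latest
  latest=$(ls -1t "$1"/postgres-*.sql.gz 2>/dev/null | head -n1)
  if [ -z "$latest" ]; then
    echo "no postgres backup found in $1" >&2
    return 1
  fi
  gzip -t "$latest" && echo "OK: $latest"
}

# verify_pg_backup /backup/okto
```

For a full confidence check, periodically restore a dump into a scratch database rather than relying on gzip integrity alone.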
Recovery Procedure¶
# Stop services
docker-compose down
# Restore PostgreSQL
gunzip -c backup-20240101.sql.gz | docker exec -i okto-postgres psql -U okto okto_factory
# Restore edge database
cp edge-20240101.db /opt/okto/data/edge.db
# Start services
docker-compose up -d
Troubleshooting Deployment¶
Container Won't Start¶
# Check logs
docker-compose logs edge-service
# Check configuration
docker-compose config
# Verify volumes
docker volume ls
Database Connection Issues¶
# Test PostgreSQL connection
docker exec -it okto-postgres psql -U okto -c "SELECT 1"
# Check network
docker network inspect okto_default
Port Conflicts¶
# Find process using port
sudo lsof -i :8080
sudo netstat -tlnp | grep 8080
# Change port in config
# Or stop conflicting service
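A startup script can also check a port before binding it, using only bash's /dev/tcp builtin so no extra tools are required:

```shell
#!/usr/bin/env bash
# port_in_use HOST PORT -- returns 0 if something accepts TCP
# connections on HOST:PORT, non-zero otherwise (bash /dev/tcp).
port_in_use() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# if port_in_use localhost 8080; then
#   echo "port 8080 is taken -- change server.port or stop the other service"
# fi
```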
MARS L2 (WET/DRY) Deployment¶
The L2 cabinets run on either RED OS 7/8 or Debian 12. Both are
supported by the same edge-service binary; packaging scripts under
packaging/ produce native artifacts per target.
RED OS 7/8 (RHEL family)¶
# Build the RPM on CI (CentOS Stream 9 or a matching RHEL-family host)
./packaging/rpm/build-rpm.sh 1.1.0 el8
# On the target
sudo dnf install -y java-17-openjdk-headless libgpiod libplctag
sudo rpm -i okto-edge-service-1.1.0-1.el8.x86_64.rpm
sudo systemctl enable --now okto-edge
The post-install script creates the okto system user, enables the
okto-edge.service unit and writes a default /etc/okto/edge-service.yaml.
Debian 12¶
./packaging/build-deb.sh 1.1.0
sudo apt install -y libgpiod2 libplctag1 network-ups-tools-client openjdk-17-jre-headless
sudo dpkg -i build/deb/okto-edge-service_1.1.0.deb
Per-variant Docker¶
# DRY cabinet (IPC + control cabinet, PLC over Modbus TCP / OPC UA)
docker compose -f docker-compose.yml -f docker-compose.dry.yml up -d
# WET cabinet (UPS-monitored; expects OWEN IBP120K on /dev/ttyUSB0)
export OKTO_UPS_DEVICE_PATH=/dev/ttyUSB0
docker compose -f docker-compose.yml -f docker-compose.wet.yml up -d
VARIANT / SITE install.sh¶
For single-host kiosks the one-line installer now accepts the cabinet variant and site:
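The exact flag names depend on the installer version; the sketch below assumes --variant and --site flags following the --cloud-url/--device-id pattern shown earlier, and <site-code> is a placeholder. Verify against the installer's --help output before use:

```shell
# Hypothetical flags -- confirm with the installer's own help text:
curl -fsSL https://install.okto.ru/edge | sudo bash -s -- \
  --cloud-url https://app.okto.ru/api/v1 \
  --device-id edge-wet-07 \
  --variant WET \
  --site <site-code>
```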
MARS migration¶
See MARS_MIGRATION.md for the Android → Linux
procedure, and ROLLOUT.md for the per-site schedule of
the 51 cabinets.