# CM Dashboard - Infrastructure Monitoring TUI

## Overview

A high-performance Rust-based TUI dashboard for monitoring CMTEC infrastructure. Built to replace Glance with a custom solution tailored to our specific monitoring needs and API integrations.

## Project Goals

### Core Objectives

- **Real-time monitoring** of all infrastructure components
- **Multi-host support** for cmbox, labbox, simonbox, steambox, srv01
- **Performance-focused** with minimal resource usage
- **Keyboard-driven interface** for power users
- **Integration** with existing monitoring APIs (ports 6127, 6128, 6129)

### Key Features

- **NVMe health monitoring** with wear prediction
- **CPU / memory / GPU telemetry** with automatic thresholding
- **Service resource monitoring** with per-service CPU and RAM usage
- **Disk usage overview** for root filesystems
- **Backup status** with detailed metrics and history
- **Unified alert pipeline** summarising host health
- **Historical data tracking** and trend analysis

## Technical Architecture

### Technology Stack

- **Language**: Rust πŸ¦€
- **TUI Framework**: ratatui (modern tui-rs fork)
- **Async Runtime**: tokio
- **HTTP Client**: reqwest
- **Serialization**: serde
- **CLI**: clap
- **Error Handling**: anyhow
- **Time**: chrono

### Dependencies

```toml
[dependencies]
ratatui = "0.24"                                     # Modern TUI framework
crossterm = "0.27"                                   # Cross-platform terminal handling
tokio = { version = "1.0", features = ["full"] }     # Async runtime
reqwest = { version = "0.11", features = ["json"] }  # HTTP client
serde = { version = "1.0", features = ["derive"] }   # JSON parsing
clap = { version = "4.0", features = ["derive"] }    # CLI args
anyhow = "1.0"                                       # Error handling
chrono = "0.4"                                       # Time handling
```

## Project Structure

```
cm-dashboard/
β”œβ”€β”€ Cargo.toml
β”œβ”€β”€ README.md
β”œβ”€β”€ CLAUDE.md              # This file
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ main.rs            # Entry point & CLI
β”‚   β”œβ”€β”€ app.rs             # Main application state
β”‚   β”œβ”€β”€ ui/
β”‚   β”‚   β”œβ”€β”€ mod.rs
β”‚   β”‚   β”œβ”€β”€ dashboard.rs   # Main dashboard layout
β”‚   β”‚   β”œβ”€β”€ nvme.rs        # NVMe health widget
β”‚   β”‚   β”œβ”€β”€ services.rs    # Services status widget
β”‚   β”‚   β”œβ”€β”€ memory.rs      # RAM optimization widget
β”‚   β”‚   β”œβ”€β”€ backup.rs      # Backup status widget
β”‚   β”‚   └── alerts.rs      # Alerts/notifications widget
β”‚   β”œβ”€β”€ api/
β”‚   β”‚   β”œβ”€β”€ mod.rs
β”‚   β”‚   β”œβ”€β”€ client.rs      # HTTP client wrapper
β”‚   β”‚   β”œβ”€β”€ smart.rs       # Smart metrics API (port 6127)
β”‚   β”‚   β”œβ”€β”€ service.rs     # Service metrics API (port 6128)
β”‚   β”‚   └── backup.rs      # Backup metrics API (port 6129)
β”‚   β”œβ”€β”€ data/
β”‚   β”‚   β”œβ”€β”€ mod.rs
β”‚   β”‚   β”œβ”€β”€ metrics.rs     # Data structures
β”‚   β”‚   β”œβ”€β”€ history.rs     # Historical data storage
β”‚   β”‚   └── config.rs      # Host configuration
β”‚   └── config.rs          # Application configuration
β”œβ”€β”€ config/
β”‚   β”œβ”€β”€ hosts.toml         # Host definitions
β”‚   └── dashboard.toml     # Dashboard layout config
└── docs/
    β”œβ”€β”€ API.md             # API integration documentation
    └── WIDGETS.md         # Widget development guide
```

### Data Structures

```rust
// DriveInfo and ServiceInfo are placeholder names for the per-drive and
// per-service structs; the concrete element types live in data/metrics.rs.
#[derive(Deserialize, Debug)]
pub struct SmartMetrics {
    pub status: String,
    pub drives: Vec<DriveInfo>,
    pub summary: DriveSummary,
    pub issues: Vec<String>,
    pub timestamp: u64,
}

#[derive(Deserialize, Debug)]
pub struct ServiceMetrics {
    pub summary: ServiceSummary,
    pub services: Vec<ServiceInfo>,
    pub timestamp: u64,
}

#[derive(Deserialize, Debug)]
pub struct ServiceSummary {
    pub healthy: usize,
    pub degraded: usize,
    pub failed: usize,
    pub memory_used_mb: f32,
    pub memory_quota_mb: f32,
    pub system_memory_used_mb: f32,
    pub system_memory_total_mb: f32,
    pub disk_used_gb: f32,
    pub disk_total_gb: f32,
    pub cpu_load_1: f32,
    pub cpu_load_5: f32,
    pub cpu_load_15: f32,
    pub cpu_freq_mhz: Option<f32>,
    pub cpu_temp_c: Option<f32>,
    pub gpu_load_percent: Option<f32>,
    pub gpu_temp_c: Option<f32>,
}

#[derive(Deserialize, Debug)]
pub struct BackupMetrics {
    pub overall_status: String,
    pub backup: BackupInfo,
    pub service: BackupServiceInfo,
    pub timestamp: u64,
}
```
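For context on how these structures are consumed, here is a minimal sketch of the `client.rs` wrapper fetching SMART metrics from the smart metrics API on port 6127 with reqwest. The `ApiClient` name, the `/metrics` path, and the trimmed `SmartMetrics` fields are illustrative assumptions, not the actual implementation.

```rust
use anyhow::Context;
use serde::Deserialize;

/// Trimmed placeholder for the full SmartMetrics structure shown above.
#[derive(Deserialize, Debug)]
pub struct SmartMetrics {
    pub status: String,
    pub timestamp: u64,
}

pub struct ApiClient {
    http: reqwest::Client,
    host: String,
}

impl ApiClient {
    pub fn new(host: &str) -> Self {
        Self {
            http: reqwest::Client::new(),
            host: host.to_string(),
        }
    }

    /// Fetch SMART metrics from the smart metrics API on port 6127.
    /// The `/metrics` path is assumed for illustration.
    pub async fn smart_metrics(&self) -> anyhow::Result<SmartMetrics> {
        let url = format!("http://{}:6127/metrics", self.host);
        let response = self
            .http
            .get(&url)
            .send()
            .await
            .with_context(|| format!("request to {url} failed"))?;
        response
            .json::<SmartMetrics>()
            .await
            .context("failed to parse SMART metrics JSON")
    }
}
```

The service (6128) and backup (6129) endpoints would follow the same pattern with their respective response types.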
## Dashboard Layout Design

### Main Dashboard View

```
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ CM Dashboard β€’ cmbox                                                  β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ Storage β€’ ok:1 warn:0 crit:0       β”‚ Services β€’ ok:1 warn:0 fail:0    β”‚
β”‚  Drive    Temp  Wear  Spare  Hours β”‚  Service memory: 7.1/23899.7 MiB β”‚
β”‚  nvme0n1  28Β°C  1%    100%   14489 β”‚  Disk usage: β€”                   β”‚
β”‚           Capacity  Usage          β”‚    Service   Memory    Disk      β”‚
β”‚           954G      77G (8%)       β”‚  βœ” sshd      7.1 MiB   β€”         β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ CPU / Memory β€’ warn                β”‚ Backups                          β”‚
β”‚ System memory: 5251.7/23899.7 MiB  β”‚ Host cmbox awaiting backup       β”‚
β”‚ CPU load (1/5/15): 2.18 2.66 2.56  β”‚ metrics                          β”‚
β”‚ CPU freq: 1100.1 MHz               β”‚                                  β”‚
β”‚ CPU temp: 47.0Β°C                   β”‚                                  β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ Alerts β€’ ok:0 warn:3 fail:0        β”‚ Status β€’ ZMQ connected           β”‚
β”‚ cmbox: warning: CPU load 2.18      β”‚ Monitoring β€’ hosts: 3            β”‚
β”‚ srv01: pending: awaiting metrics   β”‚ Data source: ZMQ – connected     β”‚
β”‚ labbox: pending: awaiting metrics  β”‚ Active host: cmbox (1/3)         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Keys: [←→] hosts [r]efresh [q]uit
```
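As a rough illustration of how `dashboard.rs` could derive this grid with ratatui, the sketch below splits the terminal area into the header line and the three widget rows from the mockup. The function name and the constraint percentages are assumptions for illustration, not the actual layout code.

```rust
use ratatui::layout::{Constraint, Direction, Layout, Rect};

/// Compute the pane rectangles for the main dashboard: a one-line header,
/// then three rows (Storage | Services, CPU/Memory | Backups, Alerts | Status),
/// each split into two equal columns. Percentages are placeholder values.
pub fn dashboard_panes(area: Rect) -> Vec<Rect> {
    let rows = Layout::default()
        .direction(Direction::Vertical)
        .constraints([
            Constraint::Length(1),      // header / title line
            Constraint::Percentage(35), // Storage | Services
            Constraint::Percentage(35), // CPU / Memory | Backups
            Constraint::Percentage(30), // Alerts | Status
        ])
        .split(area);

    let mut panes = vec![rows[0]]; // header spans the full width
    for row in rows.iter().skip(1) {
        let cols = Layout::default()
            .direction(Direction::Horizontal)
            .constraints([Constraint::Percentage(50), Constraint::Percentage(50)])
            .split(*row);
        panes.extend_from_slice(&cols);
    }
    panes
}
```

The individual widgets (nvme.rs, services.rs, backup.rs, alerts.rs) would then render into these rectangles.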
### Multi-Host View

```
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ πŸ–₯️ CMTEC Host Overview                                                      β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ Host     β”‚ NVMe Wear β”‚ RAM Usage β”‚ Services β”‚ Last Alert                  β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ srv01    β”‚ 4%  βœ…    β”‚ 32% βœ…    β”‚ 8/8 βœ…   β”‚ 04:00 Backup OK             β”‚
β”‚ cmbox    β”‚ 12% βœ…    β”‚ 45% βœ…    β”‚ 3/3 βœ…   β”‚ Yesterday Email test        β”‚
β”‚ labbox   β”‚ 8%  βœ…    β”‚ 28% βœ…    β”‚ 2/2 βœ…   β”‚ 2h ago NVMe temp OK         β”‚
β”‚ simonbox β”‚ 15% βœ…    β”‚ 67% ⚠️    β”‚ 4/4 βœ…   β”‚ Gaming session active       β”‚
β”‚ steambox β”‚ 23% βœ…    β”‚ 78% ⚠️    β”‚ 2/2 βœ…   β”‚ High RAM usage              β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Keys: [Enter] details [r]efresh [s]ort [f]ilter [q]uit
```

## Architecture Principles - CRITICAL

### Agent-Dashboard Separation of Concerns

**AGENT IS SINGLE SOURCE OF TRUTH FOR ALL STATUS CALCULATIONS**

- Agent calculates status ("ok"/"warning"/"critical"/"unknown") using defined thresholds
- Agent sends status to dashboard via ZMQ
- Dashboard NEVER calculates status - only displays what agent provides

**Data Flow Architecture:**

```
Agent (calculations + thresholds) β†’ Status β†’ Dashboard (display only) β†’ TableBuilder (colors)
```

**Status Handling Rules:**

- Agent provides status β†’ Dashboard uses agent status
- Agent doesn't provide status β†’ Dashboard shows "unknown" (NOT "ok")
- Dashboard widgets NEVER contain hardcoded thresholds
- TableBuilder converts status to colors for display

### Current Agent Thresholds (as of 2025-10-12)

**CPU Load (service.rs:392-400):**
- Warning: β‰₯ 2.0 (testing value, was 5.0)
- Critical: β‰₯ 4.0 (testing value, was 8.0)

**CPU Temperature (service.rs:412-420):**
- Warning: β‰₯ 70.0Β°C
- Critical: β‰₯ 80.0Β°C

**Memory Usage (service.rs:402-410):**
- Warning: β‰₯ 80%
- Critical: β‰₯ 95%
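To make the agent-side contract concrete, here is a minimal sketch of mapping a 1-minute CPU load reading to the status string a collector would publish, using the testing thresholds listed above. The function name is illustrative, not the actual service.rs code.

```rust
/// Map a 1-minute CPU load average to the standardized status string the
/// agent publishes. Thresholds mirror the testing values documented above
/// (warning >= 2.0, critical >= 4.0); a missing reading yields "unknown".
pub fn cpu_load_status(load_1: Option<f32>) -> &'static str {
    match load_1 {
        None => "unknown",
        Some(load) if load >= 4.0 => "critical",
        Some(load) if load >= 2.0 => "warning",
        Some(_) => "ok",
    }
}
```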
### Email Notifications

**System Configuration:**
- From: `{hostname}@cmtec.se` (e.g., cmbox@cmtec.se)
- To: `cm@cmtec.se`
- SMTP: localhost:25 (postfix)
- Timezone: Europe/Stockholm (not UTC)

**Notification Triggers:**
- Status degradation: any β†’ "warning" or "critical"
- Recovery: "warning"/"critical" β†’ "ok"
- Rate limiting: configurable (set to 0 for testing, 30 minutes for production)

**Monitored Components:**
- system.cpu (load status) - SystemCollector
- system.memory (usage status) - SystemCollector
- system.cpu_temp (temperature status) - SystemCollector (disabled)
- system.services (service health status) - ServiceCollector
- storage.smart (drive health) - SmartCollector
- backup.overall (backup status) - BackupCollector

### Pure Auto-Discovery Implementation

**Agent Configuration:**
- No config files required
- Auto-detects storage devices, services, backup systems
- Runtime discovery of system capabilities
- CLI: `cm-dashboard-agent [-v]` (only verbose flag)

**Service Discovery:**
- Scans running systemd services
- Filters by predefined interesting patterns (gitea, nginx, docker, etc.)
- No host-specific hardcoded service lists

### Current Implementation Status

**Completed:**
- [x] Pure auto-discovery agent (no config files)
- [x] Agent-side status calculations with defined thresholds
- [x] Dashboard displays agent status (no dashboard calculations)
- [x] Email notifications with Stockholm timezone
- [x] CPU temperature monitoring and notifications
- [x] ZMQ message format standardization
- [x] Removed all hardcoded dashboard thresholds
- [x] CPU thresholds restored to production values (5.0/8.0)
- [x] All collectors output standardized status strings (ok/warning/critical/unknown)
- [x] Dashboard connection loss detection with 5-second keep-alive
- [x] Removed excessive logging from agent
- [x] Fixed all compiler warnings in both agent and dashboard
- [x] **SystemCollector architecture refactoring completed (2025-10-12)**
  - [x] Created SystemCollector for CPU load, memory, temperature, C-states
  - [x] Moved system metrics from ServiceCollector to SystemCollector
  - [x] Updated dashboard to parse and display SystemCollector data
  - [x] Enhanced service notifications to include specific failure details
  - [x] CPU temperature thresholds set to 100Β°C (effectively disabled)
- [x] **SystemCollector bug fixes completed (2025-10-12)**
  - [x] Fixed CPU load parsing for comma decimal separator locale (", " split)
  - [x] Fixed CPU temperature to prioritize x86_pkg_temp over generic thermal zones
  - [x] Fixed C-state collection to discover all available states (including C10)
- [x] **Dashboard improvements and maintenance mode (2025-10-13)**
  - [x] Host auto-discovery with predefined CMTEC infrastructure hosts (cmbox, labbox, simonbox, steambox, srv01)
  - [x] Host navigation limited to connected hosts only (no disconnected host cycling)
  - [x] Storage widget restructured: Name/Temp/Wear/Usage columns with SMART details as descriptions
  - [x] Agent-provided descriptions for Storage widget (agent is source of truth for formatting)
  - [x] Maintenance mode implementation: /tmp/cm-maintenance file suppresses notifications
  - [x] NixOS borgbackup integration with automatic maintenance mode during backups
  - [x] System widget simplified to single row with C-states as description lines
  - [x] CPU load thresholds updated to production values (9.0/10.0)

**Production Configuration:**
- CPU load thresholds: Warning β‰₯ 9.0, Critical β‰₯ 10.0
- CPU temperature thresholds: Warning β‰₯ 100Β°C, Critical β‰₯ 100Β°C (effectively disabled)
- Memory usage thresholds: Warning β‰₯ 80%, Critical β‰₯ 95%
- Connection timeout: 15 seconds (agents send data every 5 seconds)
- Email rate limiting: 30 minutes (set to 0 for testing)

### Maintenance Mode

**Purpose:**
- Suppress email notifications during planned maintenance or backups
- Prevents false alerts when services are intentionally stopped

**Implementation:**
- Agent checks for the `/tmp/cm-maintenance` file before sending notifications
- File presence suppresses all email notifications while monitoring continues
- Dashboard continues to show real status; only notifications are blocked

**Usage:**

```bash
# Enable maintenance mode
touch /tmp/cm-maintenance

# Run maintenance tasks (backups, service restarts, etc.)
systemctl stop service
# ... maintenance work ...
systemctl start service

# Disable maintenance mode
rm /tmp/cm-maintenance
```

**NixOS Integration:**
- Borgbackup script automatically creates/removes the maintenance file
- Automatic cleanup via trap ensures maintenance mode doesn't stick

### Development Guidelines

**When Adding New Metrics** (see the sketch at the end of this section):
1. Agent calculates status with thresholds
2. Agent adds `{metric}_status` field to JSON output
3. Dashboard data structure adds `{metric}_status: Option<String>`
4. Dashboard uses `status_level_from_agent_status()` for display
5. Agent adds notification monitoring for status changes

**NEVER:**
- Add hardcoded thresholds to dashboard widgets
- Calculate status in dashboard with different thresholds than agent
- Use "ok" as default when agent status is missing (use "unknown")
- Calculate colors in widgets (TableBuilder's responsibility)
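A hedged sketch of steps 2-4 above: the agent emits a `{metric}_status` string next to the value, the dashboard deserializes it as `Option<String>`, and a helper in the spirit of `status_level_from_agent_status()` maps it to a display level, falling back to unknown (never ok) when the field is missing. The `CpuMetrics` struct and `StatusLevel` enum names are illustrative.

```rust
use serde::Deserialize;

/// Example agent payload fragment: the metric value plus the
/// agent-calculated status string.
#[derive(Deserialize, Debug)]
pub struct CpuMetrics {
    pub cpu_load_1: f32,
    pub cpu_load_status: Option<String>, // "ok" | "warning" | "critical" | "unknown"
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum StatusLevel {
    Ok,
    Warning,
    Critical,
    Unknown,
}

/// Dashboard-side mapping: trust the agent string, never recalculate,
/// and fall back to Unknown (not Ok) when the agent sent nothing.
pub fn status_level_from_agent_status(status: Option<&str>) -> StatusLevel {
    match status {
        Some("ok") => StatusLevel::Ok,
        Some("warning") => StatusLevel::Warning,
        Some("critical") => StatusLevel::Critical,
        _ => StatusLevel::Unknown,
    }
}
```

A widget would call it as `status_level_from_agent_status(metrics.cpu_load_status.as_deref())` and hand the level to TableBuilder for coloring.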
# Important Communication Guidelines

NEVER write that you have "successfully implemented" something or generate extensive summary text without first verifying with the user that the implementation is correct. This wastes tokens. Keep responses concise.

NEVER implement code without first getting explicit user agreement on the approach. Always ask for confirmation before proceeding with implementation.

## Commit Message Guidelines

**NEVER mention:**
- Claude or any AI assistant names
- Automation or AI-generated content
- Any reference to automated code generation

**ALWAYS:**
- Focus purely on technical changes and their purpose
- Use standard software development commit message format
- Describe what was changed and why, not how it was created
- Write from the perspective of a human developer

**Examples:**
- ❌ "Generated with Claude Code"
- ❌ "AI-assisted implementation"
- ❌ "Automated refactoring"
- βœ… "Implement maintenance mode for backup operations"
- βœ… "Restructure storage widget with improved layout"
- βœ… "Update CPU thresholds to production values"