Implement agent version tracking to diagnose deployment issues:
- Add get_agent_hash() method to extract Nix store hash from executable path
- Collect system_agent_hash metric in NixOS collector
- Display "Agent Hash" in system panel under NixOS section
- Update metric filtering to include agent hash
This helps identify which version of the agent is actually running
when troubleshooting deployment or metric collection issues.
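A minimal sketch of get_agent_hash(), assuming the running binary lives under a path like /nix/store/<32-char-hash>-<name>/bin/agent (the function name is from this commit; the path handling is illustrative):

    use std::env;

    fn get_agent_hash() -> Option<String> {
        let exe = env::current_exe().ok()?;
        let path = exe.to_str()?;
        // Nix store paths look like /nix/store/<hash>-<name>/...
        let store_component = path.strip_prefix("/nix/store/")?.split('/').next()?;
        let hash = store_component.split('-').next()?;
        // Nix store hashes are 32 base32 characters
        (hash.len() == 32).then(|| hash.to_string())
    }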
Update metric filtering to use exact metric names instead of prefix matching.
This resolves the issue where the build version showed 'unknown' despite the
agent correctly collecting the metric.
- Change the display from a plain version to a build string: 'hash dd/mm/yy H:M:S'
- Parse nixos-version output to extract short hash and format date
- Update system widget to display 'Build:' instead of 'Version:'
- Remove version/build_date fields in favor of single build string
- Follow TODO.md specification for NixOS section layout
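A sketch of the parsing step, assuming nixos-version output shaped like "24.05.20250101.abc1234 (Uakari)"; the source of the H:M:S portion is not shown in this commit, so the sketch emits only the hash and date:

    fn parse_build(version: &str) -> Option<String> {
        // The first whitespace token is the dotted version string
        let dotted = version.split_whitespace().next()?;
        let parts: Vec<&str> = dotted.split('.').collect();
        let (date, hash) = (*parts.get(2)?, *parts.get(3)?);
        if date.len() != 8 {
            return None; // expect YYYYMMDD
        }
        // Reformat YYYYMMDD as dd/mm/yy
        Some(format!("{} {}/{}/{}", hash, &date[6..8], &date[4..6], &date[2..4]))
    }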
- Add memory_tmp_usage_percent, memory_tmp_used_gb, memory_tmp_total_gb metric parsing
- Fix tmpfs display showing as —% —GB/—GB in dashboard
- System widget now properly receives and displays tmpfs metrics from memory collector
- Remove /tmp autodetection from disk collector (57 lines removed)
- Add tmpfs monitoring to memory collector with get_tmpfs_metrics() method
- Generate memory_tmp_* metrics for proper RAM-based tmpfs monitoring
- Fix type annotations in tmpfs parsing for compilation
- System widget now correctly displays tmpfs usage in RAM section
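A sketch of get_tmpfs_metrics(), assuming GNU coreutils df is available (the metric names mirror this commit; the parsing is illustrative):

    use std::process::Command;

    /// Returns (usage_percent, used_gb, total_gb) for /tmp.
    fn get_tmpfs_metrics() -> Option<(f64, f64, f64)> {
        let out = Command::new("df")
            .args(["-B1", "--output=size,used", "/tmp"])
            .output()
            .ok()?;
        let text = String::from_utf8(out.stdout).ok()?;
        // Skip the header line, then read "<size> <used>" in bytes
        let mut fields = text.lines().nth(1)?.split_whitespace();
        let total: f64 = fields.next()?.parse().ok()?;
        let used: f64 = fields.next()?.parse().ok()?;
        let gb = 1024.0 * 1024.0 * 1024.0;
        Some((used / total * 100.0, used / gb, total / gb))
    }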
- Create NixOS collector for version and active users detection
- Add SystemWidget combining all system information in TODO.md layout
- Replace separate CPU/Memory widgets with unified system display
- Add tree structure for storage with drive temperature/wear info
- Support NixOS version, active users, load averages, memory usage
- Follow exact decimal formatting from specification
- Fix duplicate storage pool issue by clearing cache on agent startup
- Change storage pool header text to normal color for better readability
- Improve services panel tree icons with proper └─ symbols for last items
- Ensure fresh metrics data on each agent restart
Eliminate duplicate storage entries by removing old disk_count dependency.
Dashboard now uses pure auto-discovery of disk_{pool}_usage_percent metrics.
Fixes multiple storage instances (Storage 0, Storage 1, Storage root) being
shown at once; only the proper tree structure format now appears.
Add proper hierarchical tree display for storage pools and drives:
- Pool headers with status icons and type indication (Single/multi-drive)
- Individual drive lines with ├─ tree symbols and health status
- Usage summary with └─ end symbol and capacity status
- T: and W: prefixes for temperature and wear level metrics
- Themed status icons using StatusIcons::get_icon() with proper colors
- 2-space indentation for clean tree structure appearance
Replace flat storage display with beautiful tree format:
● Storage steampool (multi-drive):
├─ ● sdb T:35°C W:12%
├─ ● sdc T:38°C W:8%
└─ ● 78.1% 1250.3GB/1600.0GB
Uses agent-calculated status from NixOS-configured thresholds.
Update CLAUDE.md with complete implementation specification.
Restructure storage display to handle new individual metrics architecture:
- Parse disk_{pool}_* metrics instead of indexed disk_{index}_* format
- Support individual drive metrics disk_{pool}_{drive}_health/temperature/wear
- Display tree structure: "Storage {pool} ({type}): drive details"
- Show pool usage summary with individual drive health/temp/wear status
- Auto-discover storage pools and drives from metric patterns
- Maintain proper status aggregation from individual metrics
The dashboard now correctly displays the new enhanced disk collector output
with storage pools containing multiple drives and their individual metrics.
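A sketch of the pool auto-discovery described above, assuming metric names follow the disk_{pool}_usage_percent pattern from this commit (pool names containing underscores would need the drive-metric pass to disambiguate, which is omitted here):

    use std::collections::BTreeSet;

    fn discover_pools(metric_names: &[&str]) -> BTreeSet<String> {
        metric_names
            .iter()
            .filter_map(|name| {
                let rest = name.strip_prefix("disk_")?;
                // "{pool}_usage_percent" marks a pool-level metric
                Some(rest.strip_suffix("_usage_percent")?.to_string())
            })
            .collect() // BTreeSet keeps pools unique and sorted
    }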
- Dashboard now automatically looks for /etc/cm-dashboard/dashboard.toml
- No need to specify --config flag when using standard NixOS deployment
- Fall back to a manually specified config path if the default is not found
- Update help text to reflect optional config parameter
- Simplifies dashboard usage - just run 'cm-dashboard' without arguments
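A sketch of the lookup order, assuming an optional --config argument (names are illustrative):

    use std::path::PathBuf;

    fn resolve_config(cli_path: Option<PathBuf>) -> Option<PathBuf> {
        let default = PathBuf::from("/etc/cm-dashboard/dashboard.toml");
        match cli_path {
            Some(path) => Some(path),                  // explicit --config wins
            None if default.exists() => Some(default), // standard NixOS deployment
            None => None,                              // caller reports a config error
        }
    }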
- Remove all unused configuration options from dashboard config module
- Eliminate hardcoded defaults - dashboard now requires config file like agent
- Keep only actually used config: zmq.subscriber_ports and hosts.predefined_hosts
- Remove unused get_host_metrics function from metric store
- Clean up missing module imports (hosts, utils)
- Make dashboard fail fast if no configuration provided
- Align dashboard config approach with agent configuration pattern
Add comprehensive hysteresis support to prevent status oscillation near
threshold boundaries while maintaining responsive alerting.
Key Features:
- HysteresisThresholds with configurable upper/lower limits
- StatusTracker for per-metric status history
- Default gaps: CPU load 10%, memory 5%, disk temp 5°C
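A minimal sketch of the hysteresis rule, using the names from this commit (the status enum is simplified to two states):

    #[derive(Clone, Copy, PartialEq)]
    enum Status { Ok, Warning }

    struct HysteresisThresholds { upper: f64, lower: f64 }

    impl HysteresisThresholds {
        fn evaluate(&self, value: f64, previous: Status) -> Status {
            match previous {
                // Escalate only once the value clears the upper limit
                Status::Ok if value > self.upper => Status::Warning,
                // Recover only once it drops below the lower limit
                Status::Warning if value < self.lower => Status::Ok,
                // Inside the gap: keep the previous status (no oscillation)
                _ => previous,
            }
        }
    }

With upper = 80.0 and lower = 70.0 this yields the 10% CPU load gap listed above.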
Updated Components:
- CPU load collector (5-minute average with hysteresis)
- Memory usage collector (percentage-based thresholds)
- Disk temperature collector (SMART data monitoring)
- All collectors updated to support StatusTracker interface
Cache Interval Adjustments:
- Service status: 60s → 10s (faster response)
- Disk usage: 300s → 60s (more frequent checks)
- Backup status: 900s → 60s (quicker updates)
- SMART data: moved to 600s tier (10 minutes)
Architecture:
- Individual metric status calculation in collectors
- Centralized StatusTracker in MetricCollectionManager
- Status aggregation preserved in dashboard widgets
- Add support for both proxied and static nginx sites
- Proxied sites show 'P' prefix and check backend URLs
- Static sites check external HTTPS URLs
- Fix services panel column alignment for main services
- Keep 10-second timeout for all site checks
- Fix host_index calculation for localhost to use actual position in sorted list
- Remove incorrect assumption that localhost is always at index 0
- Host navigation (Tab key) now works correctly with all hosts in alphabetical order
Fixes issue where only 3 of 5 hosts were accessible via Tab navigation.
- Remove predefined host order that was causing random display order
- Sort hosts alphabetically for consistent title display
- Localhost is still auto-selected at startup but doesn't affect display order
- Title will now show: cmbox ● labbox ● simonbox ● srv01 ● srv02 ● steambox
Eliminates confusing random host order in dashboard title bar.
- Add has_data() method to BackupWidget to check if backup metrics exist
- Modify dashboard layout to conditionally show backup panel only when data exists
- When no backup data: system panel takes full left side height
- When backup data exists: system and backup panels share left side equally
Prevents empty backup panel from taking up screen space unnecessarily.
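A sketch of the conditional split (a ratatui-style layout API is assumed here):

    use ratatui::layout::{Constraint, Direction, Layout, Rect};

    fn left_side_layout(area: Rect, backup_has_data: bool) -> Vec<Rect> {
        let constraints = if backup_has_data {
            // System and backup panels share the left side equally
            vec![Constraint::Percentage(50), Constraint::Percentage(50)]
        } else {
            // System panel takes the full left side height
            vec![Constraint::Percentage(100)]
        };
        Layout::default()
            .direction(Direction::Vertical)
            .constraints(constraints)
            .split(area)
            .to_vec()
    }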
- Sort physical devices by name to prevent random HashMap iteration order
- Sort partitions within each device by disk index for consistency
- Eliminates flickering caused by disks changing positions randomly
The dashboard storage section now maintains stable disk order across updates.
- Add user_navigated_away flag to track manual navigation
- Only auto-switch to localhost if user hasn't manually navigated away
- Reset flag when host disconnects to allow auto-selection
- Preserves user's tab navigation choices while still prioritizing localhost initially
- Dashboard now switches to localhost even if another host is already selected
- Ensures localhost is always preferred regardless of connection order
- Resolves issue where srv01 connecting first would prevent localhost selection
- Always select localhost as default host at startup
- Order hosts with localhost first, then predefined sequence
- Display hostname status colors in title bar based on metric aggregation
- Add gethostname dependency for localhost detection
Removed unused widget subscription system, cache utilities, error variants,
theme functions, and struct fields. Replaced subscription-based widgets
with direct metric filtering. Build now completes with zero warnings.
Resolves the widget data persistence issue where switching hosts left the
previous host's stale data displayed in the widgets.
Key improvements:
- Add Clone derives to all widget structs (CpuWidget, MemoryWidget,
ServicesWidget, BackupWidget)
- Create HostWidgets struct to cache widget states per hostname
- Update TuiApp with HashMap<String, HostWidgets> for per-host storage
- Fix borrowing issues by cloning hostname before mutable self borrow
- Implement instant widget state restoration when switching hosts
Tab key host switching now displays cached widget data for each host
without stale information persistence between switches.
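A sketch of the per-host cache; HostWidgets stands in for the real Clone-derived widget structs:

    use std::collections::HashMap;

    #[derive(Clone, Default)]
    struct HostWidgets { /* cpu, memory, services, backup widget state */ }

    #[derive(Default)]
    struct TuiApp {
        widgets_by_host: HashMap<String, HostWidgets>,
        selected_host: String,
    }

    impl TuiApp {
        fn switch_host(&mut self, hostname: &str) {
            // Clone the name first to avoid the borrowing issue noted above
            let hostname = hostname.to_string();
            // Restore cached widget state, or start fresh for an unseen host
            self.widgets_by_host.entry(hostname.clone()).or_default();
            self.selected_host = hostname;
        }
    }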
- Add KeyCode::Tab support to main dashboard event loop
- Add Tab key handling to TuiApp handle_input method
- Tab key now cycles to next host using existing navigate_host logic
- The host switching infrastructure was already implemented; it just needed Tab key support
- Current host displayed in bold in title bar, other hosts shown normally
- Metrics filtered by selected host, full navigation working
- Add BackupCollector for reading TOML status files with disk space metrics
- Implement BackupWidget with disk usage display and service status details
- Fix backup script disk space parsing by adding missing capture_output=True
- Update backup widget to show actual disk usage instead of repository size
- Fix timestamp parsing to use backup completion time instead of start time
- Resolve timezone issues by using UTC timestamps in backup script
- Add disk identification metrics (product name, serial number) to backup status
- Enhance UI layout with proper backup monitoring integration
This commit addresses several key issues identified during development:
Major Changes:
- Replace hardcoded top CPU/RAM process display with real system data
- Add intelligent process monitoring to CpuCollector using ps command
- Fix disk metrics permission issues in systemd collector
- Optimize service collection to focus on status, memory, and disk only
- Update dashboard widgets to display live process information
Process Monitoring Implementation:
- Added collect_top_cpu_process() and collect_top_ram_process() methods
- Implemented ps-based monitoring with accurate CPU percentages
- Added filtering to prevent self-monitoring artifacts (ps commands)
- Enhanced error handling and validation for process data
- Dashboard now shows realistic values like "claude (PID 2974) 11.0%"
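A sketch of the ps-based lookup, assuming procps ps with these column flags; the self-filtering mirrors the artifact fix above:

    use std::process::Command;

    /// Returns (name, pid, cpu_percent) of the top CPU consumer.
    fn collect_top_cpu_process() -> Option<(String, u32, f32)> {
        let out = Command::new("ps")
            .args(["-eo", "comm,pid,pcpu", "--sort=-pcpu", "--no-headers"])
            .output()
            .ok()?;
        String::from_utf8(out.stdout).ok()?
            .lines()
            .filter_map(|line| {
                let mut fields = line.split_whitespace();
                let name = fields.next()?.to_string();
                let pid: u32 = fields.next()?.parse().ok()?;
                let cpu: f32 = fields.next()?.parse().ok()?;
                // Skip the ps invocation itself (self-monitoring artifact)
                (name != "ps").then_some((name, pid, cpu))
            })
            .next()
    }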
Service Collection Optimization:
- Removed CPU monitoring from systemd collector for efficiency
- Enhanced service directory permission error logging
- Simplified services widget to show essential metrics only
- Fixed service-to-directory mapping accuracy
UI and Dashboard Improvements:
- Reorganized dashboard layout with btop-inspired multi-panel design
- Updated system panel to include real top CPU/RAM process display
- Enhanced widget formatting and data presentation
- Removed placeholder/hardcoded data throughout the interface
Technical Details:
- Updated agent/src/collectors/cpu.rs with process monitoring
- Modified dashboard/src/ui/mod.rs for real-time process display
- Enhanced systemd collector error handling and disk metrics
- Updated CLAUDE.md documentation with implementation details
- Remove 'r' key handler that was causing hang on refresh
- Remove RefreshRequested event and check_refresh_request method
- Remove send_refresh_commands function and ZMQ command protocol
- Remove refresh_requested field from App struct
- Clean up status line text (refresh -> tick)
The refresh functionality was causing the dashboard to become unresponsive
when pressing 'r' key. This removes all refresh-related code to fix the issue.
Implements ZMQ command protocol for dashboard-to-agent communication:
- Agents listen on port 6131 for REQ/REP commands
- Dashboard sends "refresh" command when 'r' key is pressed
- Agents force immediate collection of all metrics via force_refresh_all()
- Fresh data is broadcast immediately to dashboard
- Updated help text to show "r: Refresh all metrics"
Also includes metric-level caching architecture foundation for future
granular control over individual metric update frequencies.
- Remove unreachable descriptions from failed nginx sites
- Show complete site URLs instead of truncating at first dot
- Implement service-specific disk quotas (docker: 4GB, immich: 4GB, others: 1-2GB)
- Truncate process names to show only executable name without full path
- Display only highest C-state instead of all C-states for cleaner output
- Format system RAM as xxxMB/GB (totalGB) to match services format
- Remove check_site_accessibility function and filtering logic
- Monitor ALL nginx sites from config regardless of current status
- Site status determined by measure_site_latency, not accessibility filter
- Fixes missing git.cmtec.se when backend is down (502 errors)
- Sites with errors now show as failed instead of being filtered out
- Change site latency timeout from 5s to 2s for faster error detection
- Replace curl with reqwest for external connectivity checks (consistent timeouts)
- Remove unused gitea-specific monitoring functionality
- Update dashboard: show 'unreachable' for latency > 2000ms, add arrows (→) between site and latency
- Add percentage signs to CPU metrics display
- All HTTP requests now use reqwest with 2-second timeouts
- Implement get_top_cpu_process() and get_top_ram_process() functions in SystemCollector
- Add top_cpu_process and top_ram_process fields to SystemSummary data structure
- Update System widget to display top processes as description rows
- Show process name and percentage usage for highest CPU and RAM consumers
- Skip kernel threads and filter out processes with minimal usage (<0.1%)
- Show 'unreachable' status for nginx sites that fail connection tests
- Set service status to error (red) for unreachable sites
- Display latency in milliseconds for responsive sites
- Properly count failed sites in service summary statistics
- Improve nginx site monitoring reliability and visibility
Agent improvements:
- Add reqwest dependency for HTTP latency testing
- Implement measure_site_latency() function for nginx sites
- Add latency_ms field to ServiceData structure
- Measure response times for nginx sites using HEAD requests
- Handle connection failures gracefully with 5-second timeout
- Use HTTPS for external sites, HTTP for localhost
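A sketch of the probe, assuming reqwest's blocking client (the "blocking" feature) with the 5-second timeout from this commit:

    use std::time::{Duration, Instant};

    /// Returns the response time in ms, or None on connection failure.
    fn measure_site_latency(url: &str) -> Option<u64> {
        let client = reqwest::blocking::Client::builder()
            .timeout(Duration::from_secs(5))
            .build()
            .ok()?;
        let started = Instant::now();
        // Any HTTP response counts as reachable; errors yield no latency
        client.head(url).send().ok()?;
        Some(started.elapsed().as_millis() as u64)
    }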
Dashboard improvements:
- Add latency_ms field to ServiceInfo structure
- Display latency for nginx sites: "docker.cmtec.se 134ms"
- Only show latency for nginx sub-services, not other services
- Change disk usage "0" to "<1MB" for better readability
The Services widget now shows:
- Nginx sites with response times when measurable
- Cleaner disk usage formatting for small values
- Improved user experience with meaningful latency data
Agent improvements:
- Add get_logged_in_users() function to SystemCollector using 'who' command
- Collect unique, sorted list of currently logged-in users
- Include logged_in_users field in system metrics JSON output
- Change C-state formatting to show 2 states per row instead of 4
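A sketch of get_logged_in_users(): take the first column of `who` output, deduplicated and sorted (a BTreeSet gives both for free):

    use std::collections::BTreeSet;
    use std::process::Command;

    fn get_logged_in_users() -> Vec<String> {
        let Ok(out) = Command::new("who").output() else {
            return Vec::new();
        };
        let users: BTreeSet<String> = String::from_utf8_lossy(&out.stdout)
            .lines()
            .filter_map(|line| line.split_whitespace().next())
            .map(str::to_string)
            .collect();
        users.into_iter().collect()
    }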
Dashboard improvements:
- Update Backups widget to show "Archives: XX, ..." format
- System widget ready to display logged-in users with proper formatting
The System widget will now show:
- C-states formatted as 2 per row for better readability
- Logged-in users displayed as "Logged in: user" or "Logged in: X users (user1, user2)"
Services widget:
- Fix disk quota formatting with proper rounding instead of truncation
- Remove decimals from RAM quotas and use GB instead of G
- Change quota display to use GB consistently
Backups widget:
- Change GiB to GB for consistency
- Remove spaces between numbers and units
- Update disk usage format to match other widgets: used (totalGB)
- Remove percentage display for cleaner format
System widget:
- Add support for logged-in users in description lines
- Format C-states with "C-State:" prefix on first line, indent subsequent lines
- Add logged_in_users field to SystemSummary data structure
Documentation:
- Add example hash error output to NixOS update instructions
Services widget:
- Remove SB (sandbox) column and related formatting function
- Fix quota formatting to show decimals when needed (1.5G not 1G)
- Remove spaces in unit display (128MB not 128 MB)
Storage widget:
- Change usage format to 23GB (932GB) for better readability
Documentation:
- Add NixOS configuration update process to CLAUDE.md
- Update column headers to be more concise: RAM (GB) → RAM, CPU (%) → CPU, Disk (GB) → Disk
- Change sandbox column "no(ok)" to "-" for excluded services
- Implement smart unit formatting for memory and disk values (kB/MB/GB)
- Display quotas as (XG) format without decimals when limits exist
- Add format_bytes() helper for consistent unit display across metrics
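A sketch of the format_bytes() helper named above; the thresholds and rounding are illustrative:

    fn format_bytes(bytes: u64) -> String {
        const KB: f64 = 1024.0;
        const MB: f64 = KB * 1024.0;
        const GB: f64 = MB * 1024.0;
        let b = bytes as f64;
        if b >= GB {
            format!("{:.1}GB", b / GB)
        } else if b >= MB {
            format!("{:.0}MB", b / MB)
        } else {
            format!("{:.0}kB", b / KB)
        }
    }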
Implement exclusion list for services that don't require sandboxing due
to their nature (SSH, Docker, system services). These services now show
"no(ok)" in SB column and maintain green status instead of warning.
Changes:
- Add is_sandbox_excluded field to ServiceData and ServiceInfo structs
- Add is_sandbox_excluded() method with system service exclusions:
- sshd/ssh (needs system access for auth/shell)
- docker (needs broad system access)
- systemd services, dbus, NetworkManager, etc.
- Update status determination to accept excluded services as ok
- Update format_sandbox_value to show "no(ok)" for excluded services
- Update all ServiceData constructors with exclusion field
Service status logic:
- Sandboxed: Status=Running, SB="yes"
- Excluded: Status=Running, SB="no(ok)"
- Should be sandboxed but isn't: Status=Degraded, SB="no"
This provides clear distinction between services that legitimately don't
need sandboxing vs. those requiring security attention.
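A sketch of the exclusion check; the list mirrors the services named in this commit and is not exhaustive:

    fn is_sandbox_excluded(service: &str) -> bool {
        const EXCLUDED: &[&str] = &[
            "sshd", "ssh",   // needs system access for auth/shell
            "docker",        // needs broad system access
            "dbus", "NetworkManager",
        ];
        // systemd's own units are excluded wholesale
        service.starts_with("systemd-") || EXCLUDED.contains(&service)
    }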
Add new "SB" column to services widget showing systemd sandboxing status.
Service status now reflects security posture with unsandboxed services
showing as degraded/warning status.
Changes:
- Add is_sandboxed field to ServiceData and ServiceInfo structs
- Add check_service_sandbox method detecting systemd hardening features
- Add format_sandbox_value function showing "yes"/"no" for sandboxing
- Update service status determination to consider sandbox status:
- Sandboxed + Running = "Running" (green/ok)
- Unsandboxed + Running = "Degraded" (yellow/warning)
- Failed services = "Stopped" (red/critical)
- Add "SB" column header to services widget
Services without proper NixOS hardening (PrivateTmp, ProtectSystem, etc.)
now show warning status to highlight security concerns.
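A sketch of the hardening probe; `systemctl show -p` is the real interface, but reducing "sandboxed" to these two properties is a simplification of this commit's check:

    use std::process::Command;

    fn check_service_sandbox(unit: &str) -> bool {
        let Ok(out) = Command::new("systemctl")
            .args(["show", unit, "-p", "PrivateTmp,ProtectSystem"])
            .output()
        else {
            return false;
        };
        // Output lines look like "PrivateTmp=yes" and "ProtectSystem=strict"
        let text = String::from_utf8_lossy(&out.stdout);
        text.contains("PrivateTmp=yes")
            && text.lines().any(|line| {
                matches!(
                    line.strip_prefix("ProtectSystem="),
                    Some("strict" | "full" | "yes")
                )
            })
    }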