Compare commits

...

7 Commits

Author SHA1 Message Date
3858309a5d Fix Docker container detection with sudo permissions
Some checks failed
Build and Release / build-and-release (push) Failing after 1m19s
Update the systemd collector to run docker ps via sudo, resolving
permission issues when the cm-agent user lacks docker group membership.
This ensures Docker containers are properly discovered and displayed
as sub-services under the docker service.

Version: 0.1.160
2025-11-25 12:40:27 +01:00
df104bf940 Remove debug prints and unused code
All checks were successful
Build and Release / build-and-release (push) Successful in 1m19s
- Remove all debug println statements
- Remove unused service_tracker module
- Remove unused struct fields and methods
- Remove empty placeholder files (cpu.rs, memory.rs, defaults.rs)
- Fix all compiler warnings
- Clean build with zero warnings

Version bump to 0.1.159
2025-11-25 12:19:04 +01:00
d5ce36ee18 Add support for additional SMART attributes
All checks were successful
Build and Release / build-and-release (push) Successful in 1m30s
- Support Temperature_Case attribute for Intel SSDs
- Support Media_Wearout_Indicator attribute for wear percentage
- Parse wear value from column 3 (VALUE) for Media_Wearout_Indicator
- Fixes temperature and wear display for Intel PHLA847000FL512DGN drives
2025-11-25 11:53:08 +01:00
4f80701671 Fix NVMe serial display and improve pool health logic
All checks were successful
Build and Release / build-and-release (push) Successful in 1m20s
- Fix physical drive serial number display in dashboard
- Improve pool health calculation for arrays with multiple disks
- Support proper tree symbols for multiple parity drives
- Read git commit hash from /var/lib/cm-dashboard/git-commit for Build display
2025-11-25 11:44:20 +01:00
267654fda4 Improve NVMe serial parsing and restructure MergerFS display
All checks were successful
Build and Release / build-and-release (push) Successful in 1m25s
- Fix NVMe serial number parsing to handle whitespace variations
- Move mount point to MergerFS header, remove drive count
- Restructure data drives to same level as parity with Data_1, Data_2 labels
- Remove "Total:" label from pool usage line
- Update parity to use closing tree symbol as last item
2025-11-25 11:28:54 +01:00
dc1105eefe Display disk serial numbers instead of device names
All checks were successful
Build and Release / build-and-release (push) Successful in 1m18s
- Add serial_number field to DriveData structure
- Collect serial numbers from SMART data for all drives
- Display truncated serial numbers (last 8 chars) in dashboard
- Fix parity drive label to show status icon before "Parity:"
- Fix mount point label styling to match other labels
2025-11-25 11:06:54 +01:00
c9d12793ef Replace device names with serial numbers in MergerFS pool display
All checks were successful
Build and Release / build-and-release (push) Successful in 1m19s
Updates the disk collector and dashboard to show drive serial numbers
instead of device names (sdX) for MergerFS data/parity drives.
The agent extracts serial numbers from SMART data and the dashboard
displays them when available, falling back to device names.
2025-11-25 10:30:37 +01:00
22 changed files with 150 additions and 1513 deletions

110
CLAUDE.md
View File

@@ -307,25 +307,19 @@ exclude_fs_types = ["tmpfs", "devtmpfs", "sysfs", "proc"]
CPU:
● Load: 0.23 0.21 0.13
└─ Freq: 1048 MHz
RAM:
● Usage: 25% 5.8GB/23.3GB
├─ ● /tmp: 2% 0.5GB/2GB
└─ ● /var/tmp: 0% 0GB/1.0GB
Storage:
-mergerfs (2+1):
-├─ Total: ● 63% 2355.2GB/3686.4GB
-├─ Data Disks:
-│ ├─ ● sdb T: 24°C W: 5%
-│ └─ ● sdd T: 27°C W: 5%
-├─ Parity: ● sdc T: 24°C W: 5%
-└─ Mount: /srv/media
-● nvme0n1 T: 25C W: 4%
+844B9A25 T: 25C W: 4%
├─ ● /: 55% 250.5GB/456.4GB
└─ ● /boot: 26% 0.3GB/1.0GB
+● mergerfs /srv/media:
+├─ ● 63% 2355.2GB/3686.4GB
+├─ ● Data_1: WDZQ8H8D T: 28°C
+├─ ● Data_2: GGA04461 T: 28°C
+└─ ● Parity: WDZS8RY0 T: 29°C
Backup:
● WD-WCC7K1234567 T: 32°C W: 12%
├─ Last: 2h ago (12.3GB)
@@ -361,98 +355,6 @@ Keep responses concise and focused. Avoid extensive implementation summaries unl
- ✅ "Restructure storage widget with improved layout" - ✅ "Restructure storage widget with improved layout"
- ✅ "Update CPU thresholds to production values" - ✅ "Update CPU thresholds to production values"
## Completed Architecture Migration (v0.1.131)
## ✅ COMPLETE MONITORING SYSTEM RESTORATION (v0.1.141)
**🎉 SUCCESS: All Issues Fixed - Complete Functional Monitoring System**
### ✅ Completed Implementation (v0.1.141)
**All Major Issues Resolved:**
```
✅ Data Collection: Agent collects structured data correctly
✅ Storage Display: Perfect format with correct mount points and temperature/wear
✅ Status Evaluation: All metrics properly evaluated against thresholds
✅ Notifications: Working email alerts on status changes
✅ Thresholds: All collectors using configured thresholds for status calculation
✅ Build Information: NixOS version displayed correctly
✅ Mount Point Consistency: Stable, sorted display order
```
### ✅ All Phases Completed Successfully
#### ✅ Phase 1: Storage Display - COMPLETED
- ✅ Use `lsblk` instead of `findmnt` (eliminated `/nix/store` bind mount issue)
- ✅ Add `sudo smartctl` for permissions (SMART data collection working)
- ✅ Fix NVMe SMART parsing (`Temperature:` and `Percentage Used:` fields)
- ✅ Consistent filesystem/tmpfs sorting (no more random order swapping)
-**VERIFIED**: Dashboard shows `● nvme0n1 T: 28°C W: 1%` correctly
#### ✅ Phase 2: Status Evaluation System - COMPLETED
-**CPU Status**: Load averages and temperature evaluated against `HysteresisThresholds`
-**Memory Status**: Usage percentage evaluated against thresholds
-**Storage Status**: Drive temperature, health, and filesystem usage evaluated
-**Service Status**: Service states properly tracked and evaluated
-**Status Fields**: All AgentData structures include status information
-**Threshold Integration**: All collectors use their configured thresholds
#### ✅ Phase 3: Notification System - COMPLETED
-**Status Change Detection**: Agent tracks status between collection cycles
-**Email Notifications**: Alerts sent on degradation (OK→Warning/Critical, Warning→Critical)
-**Notification Content**: Detailed alerts with metric values and timestamps
-**NotificationManager Integration**: Fully restored and operational
-**Maintenance Mode**: `/tmp/cm-maintenance` file support maintained
#### ✅ Phase 4: Integration & Testing - COMPLETED
-**AgentData Status Fields**: All structured data includes status evaluation
-**Status Processing**: Agent applies thresholds at collection time
-**End-to-End Flow**: Collection → Evaluation → Notification → Display
-**Dynamic Versioning**: Agent version from `CARGO_PKG_VERSION`
-**Build Information**: NixOS generation display restored
### ✅ Final Architecture - WORKING
**Complete Operational Flow:**
```
Collectors → AgentData (with Status) → NotificationManager → Email Alerts
↘ ↗
ZMQ → Dashboard → Perfect Display
```
**Operational Components:**
1.**Collectors**: Populate AgentData with metrics AND status evaluation
2.**Status Evaluation**: `HysteresisThresholds.evaluate()` applied per collector
3.**Notifications**: Email alerts on status change detection
4.**Display**: Correct mount points, temperature, wear, and build information
### ✅ Success Criteria - ALL MET
**Display Requirements:**
- ✅ Dashboard shows `● nvme0n1 T: 28°C W: 1%` format perfectly
- ✅ Mount points show `/` and `/boot` (not `root`/`boot`)
- ✅ Build information shows actual NixOS version (not "unknown")
- ✅ Consistent sorting eliminates random order changes
**Monitoring Requirements:**
- ✅ High CPU load triggers Warning/Critical status and email alert
- ✅ High memory usage triggers Warning/Critical status and email alert
- ✅ High disk temperature triggers Warning/Critical status and email alert
- ✅ Failed services trigger Warning/Critical status and email alert
- ✅ Maintenance mode suppresses notifications as expected
### 🚀 Production Ready
**CM Dashboard v0.1.141 is a complete, functional infrastructure monitoring system:**
- **Real-time Monitoring**: All system components with 1-second intervals
- **Intelligent Alerting**: Email notifications on threshold violations
- **Perfect Display**: Accurate mount points, temperatures, and system information
- **Status-Aware**: All metrics evaluated against configurable thresholds
- **Production Ready**: Full monitoring capabilities restored
**The monitoring system is fully operational and ready for production use.**
## Implementation Rules
1. **Agent Status Authority**: Agent calculates status for each metric using thresholds

6
Cargo.lock generated
View File

@@ -279,7 +279,7 @@ checksum = "a1d728cc89cf3aee9ff92b05e62b19ee65a02b5702cff7d5a377e32c6ae29d8d"
[[package]] [[package]]
name = "cm-dashboard" name = "cm-dashboard"
version = "0.1.153" version = "0.1.160"
dependencies = [ dependencies = [
"anyhow", "anyhow",
"chrono", "chrono",
@@ -301,7 +301,7 @@ dependencies = [
[[package]] [[package]]
name = "cm-dashboard-agent" name = "cm-dashboard-agent"
version = "0.1.153" version = "0.1.160"
dependencies = [ dependencies = [
"anyhow", "anyhow",
"async-trait", "async-trait",
@@ -324,7 +324,7 @@ dependencies = [
[[package]] [[package]]
name = "cm-dashboard-shared" name = "cm-dashboard-shared"
version = "0.1.153" version = "0.1.160"
dependencies = [ dependencies = [
"chrono", "chrono",
"serde", "serde",

View File

@@ -1,6 +1,6 @@
[package] [package]
name = "cm-dashboard-agent" name = "cm-dashboard-agent"
version = "0.1.153" version = "0.1.160"
edition = "2021" edition = "2021"
[dependencies] [dependencies]

View File

@@ -16,7 +16,6 @@ use crate::collectors::{
systemd::SystemdCollector, systemd::SystemdCollector,
}; };
use crate::notifications::NotificationManager; use crate::notifications::NotificationManager;
use crate::service_tracker::UserStoppedServiceTracker;
use cm_dashboard_shared::AgentData; use cm_dashboard_shared::AgentData;
pub struct Agent { pub struct Agent {
@@ -25,7 +24,6 @@ pub struct Agent {
zmq_handler: ZmqHandler, zmq_handler: ZmqHandler,
collectors: Vec<Box<dyn Collector>>, collectors: Vec<Box<dyn Collector>>,
notification_manager: NotificationManager, notification_manager: NotificationManager,
service_tracker: UserStoppedServiceTracker,
previous_status: Option<SystemStatus>, previous_status: Option<SystemStatus>,
} }
@@ -90,17 +88,12 @@ impl Agent {
let notification_manager = NotificationManager::new(&config.notifications, &hostname)?; let notification_manager = NotificationManager::new(&config.notifications, &hostname)?;
info!("Notification manager initialized"); info!("Notification manager initialized");
// Initialize service tracker
let service_tracker = UserStoppedServiceTracker::new();
info!("Service tracker initialized");
Ok(Self { Ok(Self {
hostname, hostname,
config, config,
zmq_handler, zmq_handler,
collectors, collectors,
notification_manager, notification_manager,
service_tracker,
previous_status: None, previous_status: None,
}) })
} }

View File

@@ -1,5 +1,4 @@
use async_trait::async_trait; use async_trait::async_trait;
use chrono::{NaiveDateTime, DateTime};
use cm_dashboard_shared::{AgentData, BackupData, BackupDiskData}; use cm_dashboard_shared::{AgentData, BackupData, BackupDiskData};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use std::collections::HashMap; use std::collections::HashMap;

View File

@@ -19,10 +19,8 @@ pub struct DiskCollector {
/// A physical drive with its filesystems /// A physical drive with its filesystems
#[derive(Debug, Clone)] #[derive(Debug, Clone)]
struct PhysicalDrive { struct PhysicalDrive {
name: String, // e.g., "nvme0n1", "sda" name: String, // e.g., "nvme0n1", "sda"
health: String, // SMART health status health: String, // SMART health status
temperature_celsius: Option<f32>, // Drive temperature
wear_percent: Option<f32>, // SSD wear level
filesystems: Vec<Filesystem>, // mounted filesystems on this drive filesystems: Vec<Filesystem>, // mounted filesystems on this drive
} }
@@ -351,8 +349,6 @@ impl DiskCollector {
let physical_drive = PhysicalDrive { let physical_drive = PhysicalDrive {
name: drive_name, name: drive_name,
health: "UNKNOWN".to_string(), // Will be updated with SMART data health: "UNKNOWN".to_string(), // Will be updated with SMART data
temperature_celsius: None,
wear_percent: None,
filesystems, filesystems,
}; };
physical_drives.push(physical_drive); physical_drives.push(physical_drive);
@@ -437,12 +433,14 @@ impl DiskCollector {
// Return unknown data rather than failing completely // Return unknown data rather than failing completely
return Ok(SmartData { return Ok(SmartData {
health: "UNKNOWN".to_string(), health: "UNKNOWN".to_string(),
serial_number: None,
temperature_celsius: None, temperature_celsius: None,
wear_percent: None, wear_percent: None,
}); });
} }
let mut health = "UNKNOWN".to_string(); let mut health = "UNKNOWN".to_string();
let mut serial_number = None;
let mut temperature = None; let mut temperature = None;
let mut wear_percent = None; let mut wear_percent = None;
@@ -455,8 +453,21 @@ impl DiskCollector {
} }
} }
// Serial number parsing (both SATA and NVMe)
if line.contains("Serial Number:") {
if let Some(serial_part) = line.split("Serial Number:").nth(1) {
let serial_str = serial_part.trim();
if !serial_str.is_empty() {
// Take first whitespace-separated token
if let Some(serial) = serial_str.split_whitespace().next() {
serial_number = Some(serial.to_string());
}
}
}
}
        // Temperature parsing for different drive types
-       if line.contains("Temperature_Celsius") || line.contains("Airflow_Temperature_Cel") {
+       if line.contains("Temperature_Celsius") || line.contains("Airflow_Temperature_Cel") || line.contains("Temperature_Case") {
            // Traditional SATA drives: attribute table format
            if let Some(temp_str) = line.split_whitespace().nth(9) {
                if let Ok(temp) = temp_str.parse::<f32>() {
@@ -474,7 +485,15 @@
        }

        // Wear level parsing for SSDs
-       if line.contains("Wear_Leveling_Count") || line.contains("SSD_Life_Left") {
+       if line.contains("Media_Wearout_Indicator") {
+           // Media_Wearout_Indicator stores remaining life % in column 3 (VALUE)
+           if let Some(wear_str) = line.split_whitespace().nth(3) {
+               if let Ok(remaining) = wear_str.parse::<f32>() {
+                   wear_percent = Some(100.0 - remaining); // Convert remaining life to wear
+               }
+           }
+       } else if line.contains("Wear_Leveling_Count") || line.contains("SSD_Life_Left") {
+           // Other wear attributes store value in column 9 (RAW_VALUE)
            if let Some(wear_str) = line.split_whitespace().nth(9) {
                if let Ok(wear) = wear_str.parse::<f32>() {
                    wear_percent = Some(100.0 - wear); // Convert remaining life to wear
@@ -497,6 +516,7 @@ impl DiskCollector {
Ok(SmartData { Ok(SmartData {
health, health,
serial_number,
temperature_celsius: temperature, temperature_celsius: temperature,
wear_percent, wear_percent,
}) })
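The serial, temperature, and wear rules added in the hunks above reduce to a small per-line routine over smartctl output. Below is a minimal standalone sketch of that logic; the helper name and mutable-reference signature are illustrative, not the repo's actual API, while the attribute names and column positions are taken from the diff.

```rust
/// Illustrative helper: apply the per-line SMART parsing rules shown in the hunks above.
fn parse_smart_line(
    line: &str,
    serial_number: &mut Option<String>,
    temperature: &mut Option<f32>,
    wear_percent: &mut Option<f32>,
) {
    // Serial number (same "Serial Number:" field in SATA and NVMe smartctl output).
    if let Some(rest) = line.split("Serial Number:").nth(1) {
        if let Some(token) = rest.split_whitespace().next() {
            *serial_number = Some(token.to_string());
        }
    }

    // SATA attribute-table temperature: RAW_VALUE is column 9.
    if line.contains("Temperature_Celsius")
        || line.contains("Airflow_Temperature_Cel")
        || line.contains("Temperature_Case")
    {
        if let Some(t) = line.split_whitespace().nth(9).and_then(|s| s.parse::<f32>().ok()) {
            *temperature = Some(t);
        }
    }

    // Media_Wearout_Indicator reports remaining life in column 3 (VALUE);
    // the other wear attributes report it in column 9 (RAW_VALUE).
    if line.contains("Media_Wearout_Indicator") {
        if let Some(v) = line.split_whitespace().nth(3).and_then(|s| s.parse::<f32>().ok()) {
            *wear_percent = Some(100.0 - v); // convert remaining life to wear
        }
    } else if line.contains("Wear_Leveling_Count") || line.contains("SSD_Life_Left") {
        if let Some(v) = line.split_whitespace().nth(9).and_then(|s| s.parse::<f32>().ok()) {
            *wear_percent = Some(100.0 - v);
        }
    }
}
```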
@@ -522,6 +542,7 @@ impl DiskCollector {
agent_data.system.storage.drives.push(DriveData { agent_data.system.storage.drives.push(DriveData {
name: drive.name.clone(), name: drive.name.clone(),
serial_number: smart.and_then(|s| s.serial_number.clone()),
health: smart.map(|s| s.health.clone()).unwrap_or_else(|| drive.health.clone()), health: smart.map(|s| s.health.clone()).unwrap_or_else(|| drive.health.clone()),
temperature_celsius: smart.and_then(|s| s.temperature_celsius), temperature_celsius: smart.and_then(|s| s.temperature_celsius),
wear_percent: smart.and_then(|s| s.wear_percent), wear_percent: smart.and_then(|s| s.wear_percent),
@@ -587,6 +608,7 @@ impl DiskCollector {
cm_dashboard_shared::PoolDriveData { cm_dashboard_shared::PoolDriveData {
name: d.name.clone(), name: d.name.clone(),
serial_number: smart.and_then(|s| s.serial_number.clone()),
temperature_celsius: temperature, temperature_celsius: temperature,
health, health,
wear_percent: smart.and_then(|s| s.wear_percent), wear_percent: smart.and_then(|s| s.wear_percent),
@@ -611,6 +633,7 @@ impl DiskCollector {
cm_dashboard_shared::PoolDriveData { cm_dashboard_shared::PoolDriveData {
name: d.name.clone(), name: d.name.clone(),
serial_number: smart.and_then(|s| s.serial_number.clone()),
temperature_celsius: temperature, temperature_celsius: temperature,
health, health,
wear_percent: smart.and_then(|s| s.wear_percent), wear_percent: smart.and_then(|s| s.wear_percent),
@@ -620,10 +643,19 @@ impl DiskCollector {
}).collect(); }).collect();
        // Calculate overall pool health string and status
-       let (pool_health, health_status) = match (failed_data, failed_parity) {
-           (0, 0) => ("healthy".to_string(), cm_dashboard_shared::Status::Ok),
-           (1, 0) | (0, 1) => ("degraded".to_string(), cm_dashboard_shared::Status::Warning),
-           _ => ("critical".to_string(), cm_dashboard_shared::Status::Critical),
+       // SnapRAID logic: can tolerate up to N parity drive failures (where N = number of parity drives)
+       // If data drives fail AND we've lost parity protection, that's critical
+       let (pool_health, health_status) = if failed_data == 0 && failed_parity == 0 {
+           ("healthy".to_string(), cm_dashboard_shared::Status::Ok)
+       } else if failed_data == 0 && failed_parity > 0 {
+           // Parity failed but no data loss - degraded (reduced protection)
+           ("degraded".to_string(), cm_dashboard_shared::Status::Warning)
+       } else if failed_data == 1 && failed_parity == 0 {
+           // One data drive failed, parity intact - degraded (recoverable)
+           ("degraded".to_string(), cm_dashboard_shared::Status::Warning)
+       } else {
+           // Multiple data drives failed OR data+parity failed = data loss risk
+           ("critical".to_string(), cm_dashboard_shared::Status::Critical)
        };
// Calculate pool usage status using config thresholds // Calculate pool usage status using config thresholds
@@ -808,6 +840,7 @@ impl Collector for DiskCollector {
#[derive(Debug, Clone)] #[derive(Debug, Clone)]
struct SmartData { struct SmartData {
health: String, health: String,
serial_number: Option<String>,
temperature_celsius: Option<f32>, temperature_celsius: Option<f32>,
wear_percent: Option<f32>, wear_percent: Option<f32>,
} }
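Stepping back from the pool-health hunk above (@@ -620,10 +643,19), the new rule is a simple decision over the counts of failed data and parity drives. A hedged, self-contained sketch of that decision, with a local `Status` enum standing in for `cm_dashboard_shared::Status`:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Status { Ok, Warning, Critical } // stand-in for cm_dashboard_shared::Status

/// Same decision as the hunk above: healthy with no failures, degraded while the
/// failure is still covered by parity, critical once data loss becomes possible.
fn pool_health(failed_data: usize, failed_parity: usize) -> (&'static str, Status) {
    match (failed_data, failed_parity) {
        (0, 0) => ("healthy", Status::Ok),
        (0, _) => ("degraded", Status::Warning), // parity lost, data intact: reduced protection
        (1, 0) => ("degraded", Status::Warning), // one data drive lost, parity intact: recoverable
        _ => ("critical", Status::Critical),     // multiple data drives, or data + parity, lost
    }
}

fn main() {
    assert_eq!(pool_health(0, 1), ("degraded", Status::Warning));
    assert_eq!(pool_health(1, 1), ("critical", Status::Critical));
    assert_eq!(pool_health(2, 0), ("critical", Status::Critical));
}
```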

View File

@@ -5,21 +5,18 @@ use std::process::Command;
use tracing::debug;

use super::{Collector, CollectorError};
-use crate::config::NixOSConfig;

/// NixOS system information collector with structured data output
///
/// This collector gathers NixOS-specific information like:
/// - System generation/build information
/// - Version information
/// - Agent version from Nix store path
-pub struct NixOSCollector {
-    config: NixOSConfig,
-}
+pub struct NixOSCollector;

impl NixOSCollector {
-    pub fn new(config: NixOSConfig) -> Self {
-        Self { config }
+    pub fn new(_config: crate::config::NixOSConfig) -> Self {
+        Self
    }

    /// Collect NixOS system information and populate AgentData
@@ -83,14 +80,25 @@
        std::env::var("CM_DASHBOARD_VERSION").unwrap_or_else(|_| "unknown".to_string())
    }

-    /// Get NixOS system generation (build) information
+    /// Get NixOS system generation (build) information from git commit
    async fn get_nixos_generation(&self) -> Option<String> {
-        match Command::new("nixos-version").output() {
-            Ok(output) => {
-                let version_str = String::from_utf8_lossy(&output.stdout);
-                Some(version_str.trim().to_string())
-            }
-            Err(_) => None,
+        // Try to read git commit hash from file written during rebuild
+        let commit_file = "/var/lib/cm-dashboard/git-commit";
+        match fs::read_to_string(commit_file) {
+            Ok(content) => {
+                let commit_hash = content.trim();
+                if commit_hash.len() >= 7 {
+                    debug!("Found git commit hash: {}", commit_hash);
+                    Some(commit_hash.to_string())
+                } else {
+                    debug!("Git commit hash too short: {}", commit_hash);
+                    None
+                }
+            }
+            Err(e) => {
+                debug!("Failed to read git commit file {}: {}", commit_file, e);
+                None
+            }
        }
    }
}

View File

@@ -22,8 +22,6 @@ pub struct SystemdCollector {
struct ServiceCacheState { struct ServiceCacheState {
/// Last collection time for performance tracking /// Last collection time for performance tracking
last_collection: Option<Instant>, last_collection: Option<Instant>,
/// Cached service data
services: Vec<ServiceInfo>,
/// Cached complete service data with sub-services /// Cached complete service data with sub-services
cached_service_data: Vec<ServiceData>, cached_service_data: Vec<ServiceData>,
/// Interesting services to monitor (cached after discovery) /// Interesting services to monitor (cached after discovery)
@@ -50,20 +48,10 @@ struct ServiceStatusInfo {
sub_state: String, sub_state: String,
} }
/// Internal service information
#[derive(Debug, Clone)]
struct ServiceInfo {
name: String,
status: String, // "active", "inactive", "failed", etc.
memory_mb: f32, // Memory usage in MB
disk_gb: f32, // Disk usage in GB
}
impl SystemdCollector { impl SystemdCollector {
pub fn new(config: SystemdConfig) -> Self { pub fn new(config: SystemdConfig) -> Self {
let state = ServiceCacheState { let state = ServiceCacheState {
last_collection: None, last_collection: None,
services: Vec::new(),
cached_service_data: Vec::new(), cached_service_data: Vec::new(),
monitored_services: Vec::new(), monitored_services: Vec::new(),
service_status_cache: std::collections::HashMap::new(), service_status_cache: std::collections::HashMap::new(),
@@ -73,7 +61,7 @@ impl SystemdCollector {
last_nginx_check_time: None, last_nginx_check_time: None,
nginx_check_interval_seconds: config.nginx_check_interval_seconds, nginx_check_interval_seconds: config.nginx_check_interval_seconds,
}; };
Self { Self {
state: RwLock::new(state), state: RwLock::new(state),
config, config,
@@ -95,7 +83,6 @@ impl SystemdCollector {
}; };
// Collect service data for each monitored service // Collect service data for each monitored service
let mut services = Vec::new();
let mut complete_service_data = Vec::new(); let mut complete_service_data = Vec::new();
for service_name in &monitored_services { for service_name in &monitored_services {
match self.get_service_status(service_name) { match self.get_service_status(service_name) {
@@ -145,14 +132,6 @@ impl SystemdCollector {
} }
} }
let service_info = ServiceInfo {
name: service_name.clone(),
status: active_status.clone(),
memory_mb,
disk_gb,
};
services.push(service_info);
// Create complete service data // Create complete service data
let service_data = ServiceData { let service_data = ServiceData {
name: service_name.clone(), name: service_name.clone(),
@@ -177,7 +156,6 @@ impl SystemdCollector {
{ {
let mut state = self.state.write().unwrap(); let mut state = self.state.write().unwrap();
state.last_collection = Some(start_time); state.last_collection = Some(start_time);
state.services = services;
state.cached_service_data = complete_service_data; state.cached_service_data = complete_service_data;
} }
@@ -483,25 +461,6 @@ impl SystemdCollector {
} }
} }
/// Get service memory usage (if available)
fn get_service_memory(&self, service: &str) -> Option<f32> {
let output = Command::new("systemctl")
.args(&["show", &format!("{}.service", service), "--property=MemoryCurrent"])
.output()
.ok()?;
let output_str = String::from_utf8(output.stdout).ok()?;
for line in output_str.lines() {
if line.starts_with("MemoryCurrent=") {
let memory_str = line.strip_prefix("MemoryCurrent=")?;
if let Ok(memory_bytes) = memory_str.parse::<u64>() {
return Some(memory_bytes as f32 / (1024.0 * 1024.0)); // Convert to MB
}
}
}
None
}
/// Calculate service status, taking user-stopped services into account /// Calculate service status, taking user-stopped services into account
fn calculate_service_status(&self, service_name: &str, active_status: &str) -> Status { fn calculate_service_status(&self, service_name: &str, active_status: &str) -> Status {
match active_status.to_lowercase().as_str() { match active_status.to_lowercase().as_str() {
@@ -549,7 +508,7 @@ impl SystemdCollector {
/// Check if service collection cache should be updated /// Check if service collection cache should be updated
fn should_update_cache(&self) -> bool { fn should_update_cache(&self) -> bool {
let state = self.state.read().unwrap(); let state = self.state.read().unwrap();
match state.last_collection { match state.last_collection {
None => true, None => true,
Some(last) => { Some(last) => {
@@ -559,16 +518,6 @@ impl SystemdCollector {
} }
} }
/// Get cached service data if available and fresh
fn get_cached_services(&self) -> Option<Vec<ServiceInfo>> {
if !self.should_update_cache() {
let state = self.state.read().unwrap();
Some(state.services.clone())
} else {
None
}
}
/// Get cached complete service data with sub-services if available and fresh /// Get cached complete service data with sub-services if available and fresh
fn get_cached_complete_services(&self) -> Option<Vec<ServiceData>> { fn get_cached_complete_services(&self) -> Option<Vec<ServiceData>> {
if !self.should_update_cache() { if !self.should_update_cache() {
@@ -807,9 +756,9 @@ impl SystemdCollector {
    fn get_docker_containers(&self) -> Vec<(String, String)> {
        let mut containers = Vec::new();

-       // Check if docker is available
-       let output = Command::new("docker")
-           .args(&["ps", "--format", "{{.Names}},{{.Status}}"])
+       // Check if docker is available (use sudo for permissions)
+       let output = Command::new("sudo")
+           .args(&["docker", "ps", "--format", "{{.Names}},{{.Status}}"])
            .output();

        let output = match output {
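For context, the full discovery path after this change looks roughly like the sketch below: shell out to `sudo docker ps` with a comma-separated format template and split each output line into a (name, status) pair. The parsing shown is an assumption consistent with the format string (the rest of the function lies outside this hunk), and non-interactive use presumes a passwordless sudo rule for `docker` on the host.

```rust
use std::process::Command;

/// Illustrative end-to-end version of the container discovery this hunk changes.
fn docker_containers() -> Vec<(String, String)> {
    let output = match Command::new("sudo")
        .args(["docker", "ps", "--format", "{{.Names}},{{.Status}}"])
        .output()
    {
        Ok(out) if out.status.success() => out,
        _ => return Vec::new(), // docker missing, sudo denied, or non-zero exit
    };

    String::from_utf8_lossy(&output.stdout)
        .lines()
        .filter_map(|line| {
            // Each line is "<name>,<status>" per the --format template.
            let (name, status) = line.split_once(',')?;
            Some((name.trim().to_string(), status.trim().to_string()))
        })
        .collect()
}
```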

View File

@@ -1,2 +0,0 @@
// This file is now empty - all configuration values come from config files
// No hardcoded defaults are used

View File

@@ -8,7 +8,6 @@ mod collectors;
mod communication; mod communication;
mod config; mod config;
mod notifications; mod notifications;
mod service_tracker;
use agent::Agent; use agent::Agent;

View File

@@ -1,164 +0,0 @@
use anyhow::Result;
use serde::{Deserialize, Serialize};
use std::collections::HashSet;
use std::fs;
use std::path::Path;
use std::sync::{Arc, Mutex, OnceLock};
use tracing::{debug, info, warn};
/// Shared instance for global access
static GLOBAL_TRACKER: OnceLock<Arc<Mutex<UserStoppedServiceTracker>>> = OnceLock::new();
/// Tracks services that have been stopped by user action
/// These services should be treated as OK status instead of Warning
#[derive(Debug)]
pub struct UserStoppedServiceTracker {
/// Set of services stopped by user action
user_stopped_services: HashSet<String>,
/// Path to persistent storage file
storage_path: String,
}
/// Serializable data structure for persistence
#[derive(Debug, Serialize, Deserialize)]
struct UserStoppedData {
services: Vec<String>,
}
impl UserStoppedServiceTracker {
/// Create new tracker with default storage path
pub fn new() -> Self {
Self::with_storage_path("/var/lib/cm-dashboard/user-stopped-services.json")
}
/// Initialize global instance (called by agent)
pub fn init_global() -> Result<Self> {
let tracker = Self::new();
// Set global instance
let global_instance = Arc::new(Mutex::new(tracker));
if GLOBAL_TRACKER.set(global_instance).is_err() {
warn!("Global service tracker was already initialized");
}
// Return a new instance for the agent to use
Ok(Self::new())
}
/// Check if a service is user-stopped (global access for collectors)
pub fn is_service_user_stopped(service_name: &str) -> bool {
if let Some(global) = GLOBAL_TRACKER.get() {
if let Ok(tracker) = global.lock() {
tracker.is_user_stopped(service_name)
} else {
debug!("Failed to lock global service tracker");
false
}
} else {
debug!("Global service tracker not initialized");
false
}
}
/// Update global tracker (called by agent when tracker state changes)
pub fn update_global(updated_tracker: &UserStoppedServiceTracker) {
if let Some(global) = GLOBAL_TRACKER.get() {
if let Ok(mut tracker) = global.lock() {
tracker.user_stopped_services = updated_tracker.user_stopped_services.clone();
} else {
debug!("Failed to lock global service tracker for update");
}
} else {
debug!("Global service tracker not initialized for update");
}
}
/// Create new tracker with custom storage path
pub fn with_storage_path<P: AsRef<Path>>(storage_path: P) -> Self {
let storage_path = storage_path.as_ref().to_string_lossy().to_string();
let mut tracker = Self {
user_stopped_services: HashSet::new(),
storage_path,
};
// Load existing data from storage
if let Err(e) = tracker.load_from_storage() {
warn!("Failed to load user-stopped services from storage: {}", e);
info!("Starting with empty user-stopped services list");
}
tracker
}
/// Clear user-stopped flag for a service (when user starts it)
pub fn clear_user_stopped(&mut self, service_name: &str) -> Result<()> {
if self.user_stopped_services.remove(service_name) {
info!("Cleared user-stopped flag for service '{}'", service_name);
self.save_to_storage()?;
debug!("Service '{}' user-stopped flag cleared and saved to storage", service_name);
} else {
debug!("Service '{}' was not marked as user-stopped", service_name);
}
Ok(())
}
/// Check if a service is marked as user-stopped
pub fn is_user_stopped(&self, service_name: &str) -> bool {
let is_stopped = self.user_stopped_services.contains(service_name);
debug!("Service '{}' user-stopped status: {}", service_name, is_stopped);
is_stopped
}
/// Save current state to persistent storage
fn save_to_storage(&self) -> Result<()> {
// Create parent directory if it doesn't exist
if let Some(parent_dir) = Path::new(&self.storage_path).parent() {
if !parent_dir.exists() {
fs::create_dir_all(parent_dir)?;
debug!("Created parent directory: {}", parent_dir.display());
}
}
let data = UserStoppedData {
services: self.user_stopped_services.iter().cloned().collect(),
};
let json_data = serde_json::to_string_pretty(&data)?;
fs::write(&self.storage_path, json_data)?;
debug!(
"Saved {} user-stopped services to {}",
data.services.len(),
self.storage_path
);
Ok(())
}
/// Load state from persistent storage
fn load_from_storage(&mut self) -> Result<()> {
if !Path::new(&self.storage_path).exists() {
debug!("Storage file {} does not exist, starting fresh", self.storage_path);
return Ok(());
}
let json_data = fs::read_to_string(&self.storage_path)?;
let data: UserStoppedData = serde_json::from_str(&json_data)?;
self.user_stopped_services = data.services.into_iter().collect();
info!(
"Loaded {} user-stopped services from {}",
self.user_stopped_services.len(),
self.storage_path
);
if !self.user_stopped_services.is_empty() {
debug!("User-stopped services: {:?}", self.user_stopped_services);
}
Ok(())
}
}

View File

@@ -1,1001 +0,0 @@
warning: fields `total_services`, `backup_disk_filesystem_label`, `services_completed_count`, `services_failed_count`, and `services_disabled_count` are never read
--> dashboard/src/ui/widgets/backup.rs:22:5
|
14 | pub struct BackupWidget {
| ------------ fields in this struct
...
22 | total_services: Option<i64>,
| ^^^^^^^^^^^^^^
...
36 | backup_disk_filesystem_label: Option<String>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
37 | /// Number of completed services
38 | services_completed_count: Option<i64>,
| ^^^^^^^^^^^^^^^^^^^^^^^^
39 | /// Number of failed services
40 | services_failed_count: Option<i64>,
| ^^^^^^^^^^^^^^^^^^^^^
41 | /// Number of disabled services
42 | services_disabled_count: Option<i64>,
| ^^^^^^^^^^^^^^^^^^^^^^^
|
= note: `BackupWidget` has a derived impl for the trait `Clone`, but this is intentionally ignored during dead code analysis
= note: `#[warn(dead_code)]` on by default
warning: field `exit_code` is never read
--> dashboard/src/ui/widgets/backup.rs:53:5
|
50 | struct ServiceMetricData {
| ----------------- field in this struct
...
53 | exit_code: Option<i64>,
| ^^^^^^^^^
|
= note: `ServiceMetricData` has derived impls for the traits `Clone` and `Debug`, but these are intentionally ignored during dead code analysis
warning: associated function `extract_service_name` is never used
--> dashboard/src/ui/widgets/backup.rs:115:8
|
58 | impl BackupWidget {
| ----------------- associated function in this implementation
...
115 | fn extract_service_name(metric_name: &str) -> Option<String> {
| ^^^^^^^^^^^^^^^^^^^^
warning: method `update_from_metrics` is never used
--> dashboard/src/ui/widgets/backup.rs:157:8
|
156 | impl BackupWidget {
| ----------------- method in this implementation
157 | fn update_from_metrics(&mut self, metrics: &[&Metric]) {
| ^^^^^^^^^^^^^^^^^^^
warning: associated function `extract_service_info` is never used
--> dashboard/src/ui/widgets/services.rs:50:8
|
38 | impl ServicesWidget {
| ------------------- associated function in this implementation
...
50 | fn extract_service_info(metric_name: &str) -> Option<(String, Option<String>)> {
| ^^^^^^^^^^^^^^^^^^^^
warning: method `update_from_metrics` is never used
--> dashboard/src/ui/widgets/services.rs:285:8
|
284 | impl ServicesWidget {
| ------------------- method in this implementation
285 | fn update_from_metrics(&mut self, metrics: &[&Metric]) {
| ^^^^^^^^^^^^^^^^^^^
warning: field `health_status` is never read
--> dashboard/src/ui/widgets/system.rs:53:5
|
43 | struct StoragePool {
| ----------- field in this struct
...
53 | health_status: Status, // Separate status for pool health vs usage
| ^^^^^^^^^^^^^
|
= note: `StoragePool` has a derived impl for the trait `Clone`, but this is intentionally ignored during dead code analysis
warning: `cm-dashboard` (bin "cm-dashboard") generated 7 warnings
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.16s
Running `target/debug/cm-dashboard --headless --raw-data`
RAW AGENT DATA FROM cmbox:
{
"hostname": "cmbox",
"agent_version": "v0.1.133",
"timestamp": 1763936501,
"system": {
"cpu": {
"load_1min": 1.82,
"load_5min": 2.1,
"load_15min": 2.1,
"frequency_mhz": 3743.09,
"temperature_celsius": 55.0
},
"memory": {
"usage_percent": 27.183601,
"total_gb": 23.339516,
"used_gb": 6.3445206,
"available_gb": 16.994995,
"swap_total_gb": 14.634708,
"swap_used_gb": 0.17599106,
"tmpfs": [
{
"mount": "/tmp",
"usage_percent": 15.094376,
"used_gb": 0.3018875,
"total_gb": 2.0
}
]
},
"storage": {
"drives": [
{
"name": "nvme0n1",
"health": "PASSED",
"temperature_celsius": 28.0,
"wear_percent": 1.0,
"filesystems": [
{
"mount": "root",
"usage_percent": 24.404377,
"used_gb": 226.51398,
"total_gb": 928.1695
},
{
"mount": "boot",
"usage_percent": 10.666672,
"used_gb": 0.10645676,
"total_gb": 0.9980316
}
]
}
],
"pools": []
}
},
"services": [
{
"name": "tailscaled",
"status": "active",
"memory_mb": 25.582031,
"disk_gb": 0.0,
"user_stopped": false
},
{
"name": "sshd",
"status": "active",
"memory_mb": 4.3085938,
"disk_gb": 0.0,
"user_stopped": false
}
],
"backup": {
"status": "unknown",
"last_run": null,
"next_scheduled": null,
"total_size_gb": null,
"repository_health": null
}
}
────────────────────────────────────────────────────────────────────────────────
[... 11 further RAW AGENT DATA blocks from cmbox omitted; they repeat the same JSON structure at roughly one-second intervals (timestamps 1763936502-1763936512), including one all-zero packet ...]
Terminated

View File

@@ -1,6 +1,6 @@
[package] [package]
name = "cm-dashboard" name = "cm-dashboard"
version = "0.1.153" version = "0.1.160"
edition = "2021" edition = "2021"
[dependencies] [dependencies]

View File

@@ -20,13 +20,12 @@ pub struct Dashboard {
tui_app: Option<TuiApp>, tui_app: Option<TuiApp>,
terminal: Option<Terminal<CrosstermBackend<io::Stdout>>>, terminal: Option<Terminal<CrosstermBackend<io::Stdout>>>,
headless: bool, headless: bool,
raw_data: bool,
initial_commands_sent: std::collections::HashSet<String>, initial_commands_sent: std::collections::HashSet<String>,
config: DashboardConfig, config: DashboardConfig,
} }
impl Dashboard { impl Dashboard {
pub async fn new(config_path: Option<String>, headless: bool, raw_data: bool) -> Result<Self> { pub async fn new(config_path: Option<String>, headless: bool) -> Result<Self> {
info!("Initializing dashboard"); info!("Initializing dashboard");
// Load configuration - try default path if not specified // Load configuration - try default path if not specified
@@ -120,7 +119,6 @@ impl Dashboard {
tui_app, tui_app,
terminal, terminal,
headless, headless,
raw_data,
initial_commands_sent: std::collections::HashSet::new(), initial_commands_sent: std::collections::HashSet::new(),
config, config,
}) })
@@ -205,13 +203,6 @@ impl Dashboard {
.insert(agent_data.hostname.clone()); .insert(agent_data.hostname.clone());
} }
// Show raw data if requested (before processing)
if self.raw_data {
println!("RAW AGENT DATA FROM {}:", agent_data.hostname);
println!("{}", serde_json::to_string_pretty(&agent_data).unwrap_or_else(|e| format!("Serialization error: {}", e)));
println!("{}", "".repeat(80));
}
// Store structured data directly // Store structured data directly
self.metric_store.store_agent_data(agent_data); self.metric_store.store_agent_data(agent_data);

View File

@@ -51,10 +51,6 @@ struct Cli {
/// Run in headless mode (no TUI, just logging) /// Run in headless mode (no TUI, just logging)
#[arg(long)] #[arg(long)]
headless: bool, headless: bool,
/// Show raw agent data in headless mode
#[arg(long)]
raw_data: bool,
} }
#[tokio::main] #[tokio::main]
@@ -90,7 +86,7 @@ async fn main() -> Result<()> {
} }
// Create and run dashboard // Create and run dashboard
let mut dashboard = Dashboard::new(cli.config, cli.headless, cli.raw_data).await?; let mut dashboard = Dashboard::new(cli.config, cli.headless).await?;
// Setup graceful shutdown // Setup graceful shutdown
let ctrl_c = async { let ctrl_c = async {

View File

@@ -225,9 +225,6 @@ impl Layout {
pub const LEFT_PANEL_WIDTH: u16 = 45; pub const LEFT_PANEL_WIDTH: u16 = 45;
/// Right panel percentage (services) /// Right panel percentage (services)
pub const RIGHT_PANEL_WIDTH: u16 = 55; pub const RIGHT_PANEL_WIDTH: u16 = 55;
/// System vs backup split (equal)
pub const SYSTEM_PANEL_HEIGHT: u16 = 50;
pub const BACKUP_PANEL_HEIGHT: u16 = 50;
} }
/// Typography system /// Typography system

View File

@@ -1 +0,0 @@
// This file is intentionally left minimal - CPU functionality is handled by the SystemWidget

View File

@@ -1 +0,0 @@
// This file is intentionally left minimal - Memory functionality is handled by the SystemWidget

View File

@@ -1,7 +1,5 @@
use cm_dashboard_shared::AgentData; use cm_dashboard_shared::AgentData;
pub mod cpu;
pub mod memory;
pub mod services; pub mod services;
pub mod system; pub mod system;

View File

@@ -239,8 +239,11 @@ impl SystemWidget {
}; };
// Add drive info // Add drive info
let display_name = drive.serial_number.as_ref()
.map(|s| truncate_serial(s))
.unwrap_or(drive.name.clone());
let storage_drive = StorageDrive { let storage_drive = StorageDrive {
name: drive.name.clone(), name: display_name,
temperature: drive.temperature_celsius, temperature: drive.temperature_celsius,
wear_percent: drive.wear_percent, wear_percent: drive.wear_percent,
status: Status::Ok, status: Status::Ok,
@@ -310,9 +313,12 @@ impl SystemWidget {
} else { } else {
Status::Unknown Status::Unknown
}; };
let display_name = drive.serial_number.as_ref()
.map(|s| truncate_serial(s))
.unwrap_or(drive.name.clone());
let storage_drive = StorageDrive { let storage_drive = StorageDrive {
name: drive.name.clone(), name: display_name,
temperature: drive.temperature_celsius, temperature: drive.temperature_celsius,
wear_percent: drive.wear_percent, wear_percent: drive.wear_percent,
status: drive_status, status: drive_status,
@@ -332,9 +338,12 @@ impl SystemWidget {
} else { } else {
Status::Unknown Status::Unknown
}; };
let display_name = drive.serial_number.as_ref()
.map(|s| truncate_serial(s))
.unwrap_or(drive.name.clone());
let storage_drive = StorageDrive { let storage_drive = StorageDrive {
name: drive.name.clone(), name: display_name,
temperature: drive.temperature_celsius, temperature: drive.temperature_celsius,
wear_percent: drive.wear_percent, wear_percent: drive.wear_percent,
status: drive_status, status: drive_status,
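The three hunks above repeat the same selection: prefer a truncated serial number and fall back to the kernel device name. The `truncate_serial` helper they call keeps the last 8 characters (its definition appears, partially, at the end of this compare). An illustrative, self-contained equivalent:

```rust
/// Keep only the last 8 characters of a serial number for compact display.
/// (Illustrative sketch; the repo's own truncate_serial appears further down in this diff.)
fn truncate_serial(serial: &str) -> String {
    let chars: Vec<char> = serial.chars().collect();
    if chars.len() > 8 {
        chars[chars.len() - 8..].iter().collect()
    } else {
        serial.to_string()
    }
}

/// Prefer the truncated serial; fall back to the kernel device name (e.g. "sdb").
fn display_name(serial_number: Option<&str>, device_name: &str) -> String {
    serial_number
        .map(truncate_serial)
        .unwrap_or_else(|| device_name.to_string())
}

fn main() {
    assert_eq!(display_name(Some("WD-WCC7K1234567"), "sdb"), "K1234567");
    assert_eq!(display_name(None, "nvme0n1"), "nvme0n1");
}
```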
@@ -359,12 +368,8 @@ impl SystemWidget {
        // Pool header line with type and health
        let pool_label = if pool.pool_type == "drive" {
            // For physical drives, show the drive name with temperature and wear percentage if available
-           // Look for any drive with temp/wear data (physical drives may have drives named after the pool)
-           let drive_info = pool.drives.iter()
-               .find(|d| d.name == pool.name)
-               .or_else(|| pool.drives.first());
-           if let Some(drive) = drive_info {
+           // Physical drives only have one drive entry
+           if let Some(drive) = pool.drives.first() {
                let mut drive_details = Vec::new();
                if let Some(temp) = drive.temperature {
                    drive_details.push(format!("T: {}°C", temp as i32));
@@ -372,18 +377,18 @@
                if let Some(wear) = drive.wear_percent {
                    drive_details.push(format!("W: {}%", wear as i32));
                }
                if !drive_details.is_empty() {
-                   format!("{} {}", pool.name, drive_details.join(" "))
+                   format!("{} {}", drive.name, drive_details.join(" "))
                } else {
-                   pool.name.clone()
+                   drive.name.clone()
                }
            } else {
                pool.name.clone()
            }
        } else {
-           // For mergerfs pools, show pool name with format like "mergerfs (2+1):"
-           format!("{}:", pool.pool_type)
+           // For mergerfs pools, show pool type with mount point
+           format!("mergerfs {}:", pool.mount_point)
        };

        let pool_spans = StatusIcons::create_status_spans(pool.status.clone(), &pool_label);
@@ -422,7 +427,7 @@ impl SystemWidget {
        // └─ Mount: /srv/media

        // Pool total usage
-       let total_text = format!("Total: {:.0}% {:.1}GB/{:.1}GB",
+       let total_text = format!("{:.0}% {:.1}GB/{:.1}GB",
            pool.usage_percent.unwrap_or(0.0),
            pool.used_gb.unwrap_or(0.0),
            pool.total_gb.unwrap_or(0.0)
@@ -433,22 +438,37 @@ impl SystemWidget {
total_spans.extend(StatusIcons::create_status_spans(Status::Ok, &total_text)); total_spans.extend(StatusIcons::create_status_spans(Status::Ok, &total_text));
lines.push(Line::from(total_spans)); lines.push(Line::from(total_spans));
// Data Disks section // Data drives - at same level as parity
if !pool.data_drives.is_empty() { let has_parity = !pool.parity_drives.is_empty();
lines.push(Line::from(vec![ for (i, drive) in pool.data_drives.iter().enumerate() {
Span::styled(" ├─ ", Typography::tree()), let is_last_data = i == pool.data_drives.len() - 1;
Span::styled("Data Disks:", Typography::secondary()) let mut drive_details = Vec::new();
])); if let Some(temp) = drive.temperature {
for (i, drive) in pool.data_drives.iter().enumerate() { drive_details.push(format!("T: {}°C", temp as i32));
let is_last = i == pool.data_drives.len() - 1;
let tree_symbol = if is_last { " │ └─ " } else { " │ ├─ " };
render_mergerfs_drive(drive, tree_symbol, &mut lines);
} }
if let Some(wear) = drive.wear_percent {
drive_details.push(format!("W: {}%", wear as i32));
}
let drive_text = if !drive_details.is_empty() {
format!("Data_{}: {} {}", i + 1, drive.name, drive_details.join(" "))
} else {
format!("Data_{}: {}", i + 1, drive.name)
};
// Last data drive uses └─ if there's no parity, otherwise ├─
let tree_symbol = if is_last_data && !has_parity { " └─ " } else { " ├─ " };
let mut data_spans = vec![
Span::styled(tree_symbol, Typography::tree()),
];
data_spans.extend(StatusIcons::create_status_spans(drive.status.clone(), &drive_text));
lines.push(Line::from(data_spans));
} }
// Parity section // Parity drives - last item(s)
if !pool.parity_drives.is_empty() { if !pool.parity_drives.is_empty() {
for drive in &pool.parity_drives { for (i, drive) in pool.parity_drives.iter().enumerate() {
let is_last = i == pool.parity_drives.len() - 1;
let mut drive_details = Vec::new(); let mut drive_details = Vec::new();
if let Some(temp) = drive.temperature { if let Some(temp) = drive.temperature {
drive_details.push(format!("T: {}°C", temp as i32)); drive_details.push(format!("T: {}°C", temp as i32));
@@ -456,27 +476,21 @@ impl SystemWidget {
if let Some(wear) = drive.wear_percent { if let Some(wear) = drive.wear_percent {
drive_details.push(format!("W: {}%", wear as i32)); drive_details.push(format!("W: {}%", wear as i32));
} }
let drive_text = if !drive_details.is_empty() { let drive_text = if !drive_details.is_empty() {
format!("{} {}", drive.name, drive_details.join(" ")) format!("Parity: {} {}", drive.name, drive_details.join(" "))
} else { } else {
drive.name.clone() format!("Parity: {}", drive.name)
}; };
let tree_symbol = if is_last { " └─ " } else { " ├─ " };
let mut parity_spans = vec![ let mut parity_spans = vec![
Span::styled(" ├─ ", Typography::tree()), Span::styled(tree_symbol, Typography::tree()),
Span::styled("Parity: ", Typography::secondary()),
]; ];
parity_spans.extend(StatusIcons::create_status_spans(drive.status.clone(), &drive_text)); parity_spans.extend(StatusIcons::create_status_spans(drive.status.clone(), &drive_text));
lines.push(Line::from(parity_spans)); lines.push(Line::from(parity_spans));
} }
} }
// Mount point
lines.push(Line::from(vec![
Span::styled(" └─ Mount: ", Typography::tree()),
Span::styled(&pool.mount_point, Typography::secondary())
]));
} }
} }
@@ -484,53 +498,14 @@ impl SystemWidget {
} }
} }
/// Helper function to render a drive in a MergerFS pool /// Truncate serial number to last 8 characters
fn render_mergerfs_drive<'a>(drive: &StorageDrive, tree_symbol: &'a str, lines: &mut Vec<Line<'a>>) { fn truncate_serial(serial: &str) -> String {
let mut drive_details = Vec::new(); let len = serial.len();
if let Some(temp) = drive.temperature { if len > 8 {
drive_details.push(format!("T: {}°C", temp as i32)); serial[len - 8..].to_string()
}
if let Some(wear) = drive.wear_percent {
drive_details.push(format!("W: {}%", wear as i32));
}
let drive_text = if !drive_details.is_empty() {
format!("{} {}", drive.name, drive_details.join(" "))
} else { } else {
drive.name.clone() serial.to_string()
};
let mut drive_spans = vec![
Span::styled(tree_symbol, Typography::tree()),
];
drive_spans.extend(StatusIcons::create_status_spans(drive.status.clone(), &drive_text));
lines.push(Line::from(drive_spans));
}
/// Helper function to render a drive in a storage pool
fn render_pool_drive(drive: &StorageDrive, is_last: bool, lines: &mut Vec<Line<'_>>) {
let tree_symbol = if is_last { " └─" } else { " ├─" };
let mut drive_details = Vec::new();
if let Some(temp) = drive.temperature {
drive_details.push(format!("T: {}°C", temp as i32));
} }
if let Some(wear) = drive.wear_percent {
drive_details.push(format!("W: {}%", wear as i32));
}
let drive_text = if !drive_details.is_empty() {
format!("{} {}", drive.name, drive_details.join(" "))
} else {
format!("{}", drive.name)
};
let mut drive_spans = vec![
Span::styled(tree_symbol, Typography::tree()),
Span::raw(" "),
];
drive_spans.extend(StatusIcons::create_status_spans(drive.status.clone(), &drive_text));
lines.push(Line::from(drive_spans));
} }
impl SystemWidget { impl SystemWidget {
@@ -540,6 +515,7 @@ impl SystemWidget {
// First line: serial number with temperature and wear // First line: serial number with temperature and wear
if let Some(serial) = &self.backup_disk_serial { if let Some(serial) = &self.backup_disk_serial {
let truncated_serial = truncate_serial(serial);
let mut details = Vec::new(); let mut details = Vec::new();
if let Some(temp) = self.backup_disk_temperature { if let Some(temp) = self.backup_disk_temperature {
details.push(format!("T: {}°C", temp as i32)); details.push(format!("T: {}°C", temp as i32));
@@ -549,9 +525,9 @@ impl SystemWidget {
} }
let disk_text = if !details.is_empty() { let disk_text = if !details.is_empty() {
format!("{} {}", serial, details.join(" ")) format!("{} {}", truncated_serial, details.join(" "))
} else { } else {
serial.clone() truncated_serial
}; };
let backup_status = match self.backup_status.as_str() { let backup_status = match self.backup_status.as_str() {
@@ -597,43 +573,6 @@ impl SystemWidget {
lines lines
} }
/// Format time ago from timestamp
fn format_time_ago(&self, timestamp: u64) -> String {
let now = chrono::Utc::now().timestamp() as u64;
let seconds_ago = now.saturating_sub(timestamp);
let hours = seconds_ago / 3600;
let minutes = (seconds_ago % 3600) / 60;
if hours > 0 {
format!("{}h ago", hours)
} else if minutes > 0 {
format!("{}m ago", minutes)
} else {
"now".to_string()
}
}
/// Format time until from future timestamp
fn format_time_until(&self, timestamp: u64) -> String {
let now = chrono::Utc::now().timestamp() as u64;
if timestamp <= now {
return "overdue".to_string();
}
let seconds_until = timestamp - now;
let hours = seconds_until / 3600;
let minutes = (seconds_until % 3600) / 60;
if hours > 0 {
format!("in {}h", hours)
} else if minutes > 0 {
format!("in {}m", minutes)
} else {
"soon".to_string()
}
}
/// Render system widget /// Render system widget
pub fn render(&mut self, frame: &mut Frame, area: Rect, hostname: &str, config: Option<&crate::config::DashboardConfig>) { pub fn render(&mut self, frame: &mut Frame, area: Rect, hostname: &str, config: Option<&crate::config::DashboardConfig>) {
let mut lines = Vec::new(); let mut lines = Vec::new();
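For reference, the new truncate_serial helper can be exercised standalone. This is a minimal sketch using made-up serial strings (not values from the repository), and it assumes serials are plain ASCII so the byte-based slice is safe:

    /// Copy of the helper added in this compare, for illustration only.
    fn truncate_serial(serial: &str) -> String {
        let len = serial.len();
        if len > 8 {
            serial[len - 8..].to_string()
        } else {
            serial.to_string()
        }
    }

    fn main() {
        // Long serials keep only the last 8 characters.
        assert_eq!(truncate_serial("S1234567890ABCDEF"), "90ABCDEF");
        // Short serials pass through unchanged.
        assert_eq!(truncate_serial("ABC123"), "ABC123");
        println!("truncate_serial ok");
    }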

View File

@@ -1,6 +1,6 @@
 [package]
 name = "cm-dashboard-shared"
-version = "0.1.153"
+version = "0.1.160"
 edition = "2021"

 [dependencies]

View File

@@ -66,6 +66,7 @@ pub struct StorageData {
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct DriveData {
     pub name: String,
+    pub serial_number: Option<String>,
     pub health: String,
     pub temperature_celsius: Option<f32>,
     pub wear_percent: Option<f32>,
@@ -104,6 +105,7 @@ pub struct PoolData {
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct PoolDriveData {
     pub name: String,
+    pub serial_number: Option<String>,
     pub temperature_celsius: Option<f32>,
     pub wear_percent: Option<f32>,
     pub health: String,
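The agent-side code that fills the new serial_number fields is not part of this compare view. As a rough sketch of how a collector could pull a serial from smartctl -i output — the function name and parsing below are assumptions for illustration, not the repository's implementation:

    use std::process::Command;

    /// Hypothetical helper: run `smartctl -i <device>` and read the "Serial Number:" line.
    fn read_serial(device: &str) -> Option<String> {
        let output = Command::new("smartctl").args(["-i", device]).output().ok()?;
        let text = String::from_utf8_lossy(&output.stdout);
        text.lines()
            .find_map(|line| line.strip_prefix("Serial Number:"))
            .map(|rest| rest.trim().to_string())
    }

    fn main() {
        // Returns Some(serial) only when smartctl is installed and the device exists.
        println!("{:?}", read_serial("/dev/sda"));
    }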