Compare commits


11 Commits

Author SHA1 Message Date
67034c84b9 Add responsive column visibility to service panel
Service panel now dynamically shows/hides columns based on terminal width:
- ≥80 chars: All columns (Name, Status, RAM, Uptime, Restarts)
- 60-79 chars: Hide Restarts only
- 45-59 chars: Hide Uptime and Restarts
- <45 chars: Minimal (Name and Status only)

Improves dashboard usability on smaller terminals.
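
A compact sketch of the breakpoints as a standalone function (illustrative only; the real widget uses the ColumnVisibility struct shown in the services widget diff below):

fn visible_columns(width: u16) -> &'static [&'static str] {
    // Thresholds mirror the list above.
    if width >= 80 {
        &["Name", "Status", "RAM", "Uptime", "Restarts"]
    } else if width >= 60 {
        &["Name", "Status", "RAM", "Uptime"]
    } else if width >= 45 {
        &["Name", "Status", "RAM"]
    } else {
        &["Name", "Status"]
    }
}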
2025-11-30 10:50:08 +01:00
c62c7fa698 Remove debug logging from disk collector
Removed all debug! statements from the disk collector to reduce log noise.

Bump version to v0.1.226
2025-11-30 00:44:38 +01:00
0b1d8c0a73 Fix Data_3 showing as unknown by handling smartctl warning exit codes
Root cause: sda's temperature exceeded its threshold in the past, causing
smartctl to return exit code 32 (warning: "Attributes have been <= threshold
in the past"). The agent checked output.status.success() and rejected the
entire output as failed, even though the data (serial, temperature, health)
was perfectly valid.

Smartctl exit codes are bit flags for informational warnings:
- Exit 0: No warnings
- Exit 32 (bit 5): Attributes were at/below threshold in the past
- Exit 64 (bit 6): Error log has entries
- etc.

The output data is valid regardless of these warning flags.
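
A minimal sketch of reading those flags (the function name is illustrative, not part of the agent):

fn smartctl_warnings(code: i32) -> Vec<&'static str> {
    let mut warnings = Vec::new();
    if code & (1 << 5) != 0 {
        // Exit 32: attributes were at/below threshold in the past
        warnings.push("attributes were at/below threshold in the past");
    }
    if code & (1 << 6) != 0 {
        // Exit 64: device error log has entries
        warnings.push("error log has entries");
    }
    warnings
}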

Solution: Parse output as long as it's not empty, ignore exit code.
Only return UNKNOWN if output is actually empty (command truly failed).

Result: Data_3 will now show "ZDZ4VE0B T: 31°C" instead of "? Data_3: sda"

Bump version to v0.1.225
2025-11-30 00:35:19 +01:00
c77aa6eaaa Fix Data_3 timeout by removing sequential SMART during pool detection
Root cause: SMART data was collected TWICE:
1. Sequential collection during pool detection in get_drive_info_for_path()
   using problematic tokio::task::block_in_place() nesting
2. Parallel collection in get_smart_data_for_drives() (v0.1.223)

The sequential collection happened FIRST during pool detection, causing
sda (Data_3) to time out due to:
- Bad async nesting: block_in_place() wrapping block_on()
- Sequential execution causing runtime issues
- sda being third in the sequence, with the runtime already degraded by then

Solution: Remove SMART collection from get_drive_info_for_path().
Pool drive temperatures are populated later from the parallel SMART
collection which properly uses futures::join_all.

Benefits:
- Eliminates problematic async nesting
- All SMART queries happen once in parallel only
- sda/Data_3 should now show serial (ZDZ4VE0B) and temperature

Bump version to v0.1.224
2025-11-30 00:14:25 +01:00
8a0e68f0e3 Fix Data_3 timeout by parallelizing SMART collection
Root cause: SMART data was collected sequentially, one drive at a time.
With 5 drives taking ~500ms each, total collection time was 2.5+ seconds.
Since the disk collector runs every second, collections overlapped and
contended for resources. The last drive (sda/Data_3) would time out
because it was still being accessed by the previous collection.

Solution: Query all drives in parallel using futures::join_all. Now all
drives get their SMART data collected simultaneously with independent
3-second timeouts, eliminating contention and reducing total collection
time from 2.5+ seconds to ~500ms (the slowest single drive).
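
The pattern, sketched with a stand-in query function (the drive names and the 3-second budget match the description above; everything else is illustrative):

use std::time::Duration;
use futures::future::join_all;
use tokio::time::timeout;

async fn query_drive(name: &str) -> String {
    format!("SMART data for {}", name) // stand-in for the real smartctl call
}

#[tokio::main]
async fn main() {
    let drives = vec!["sda", "sdb", "sdc", "sdd", "nvme0n1"];
    let tasks = drives.into_iter().map(|d| async move {
        // Each drive gets its own independent 3-second timeout.
        (d, timeout(Duration::from_secs(3), query_drive(d)).await)
    });
    // Total wall time is the slowest single drive, not the sum.
    for (drive, result) in join_all(tasks).await {
        println!("{}: {:?}", drive, result);
    }
}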

Benefits:
- All drives complete in ~500ms instead of 2.5+ seconds
- No overlapping collections causing resource contention
- Each drive gets full 3-second timeout window
- sda/Data_3 should now show temperature and serial number

Bump version to v0.1.223
2025-11-29 23:51:43 +01:00
2d653fe9ae Fix empty Storage section by configuring stdio pipes
Root cause: run_command_with_timeout() was calling cmd.spawn() without
configuring stdout/stderr pipes. This caused command output to go to
journald instead of being captured by wait_with_output(). The disk
collector received empty output and failed silently.

Solution: Configure stdout(Stdio::piped()) and stderr(Stdio::piped())
before spawning commands. This ensures wait_with_output() can properly
capture command output.
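
A condensed sketch of the fixed call sequence (using tokio's process API, as the agent does):

use std::process::Stdio;
use tokio::process::Command;

async fn capture() -> std::io::Result<std::process::Output> {
    let mut cmd = Command::new("lsblk");
    cmd.args(&["-rn", "-o", "NAME,MOUNTPOINT"]);
    // Without these two lines the child inherits the parent's stdio handles
    // (journald under systemd) and wait_with_output() captures nothing.
    cmd.stdout(Stdio::piped());
    cmd.stderr(Stdio::piped());
    cmd.spawn()?.wait_with_output().await
}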

Fixes: Empty Storage section, lsblk output appearing in journald
Bump version to v0.1.222
2025-11-29 23:25:17 +01:00
caba78004e Fix empty Storage section by properly aliasing command types
v0.1.220 broke the disk collector by changing the import from
std::process::Command to tokio::process::Command, but lines 193 and
767 explicitly used std::process::Command::new(), which silently failed.

Solution: Import both as aliases (TokioCommand/StdCommand) and use the
appropriate type for each operation: async commands use TokioCommand
with run_command_with_timeout; sync commands use StdCommand with the
system timeout wrapper.
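
The aliasing in a nutshell (commands taken from the collector; the wrapper functions are illustrative):

use std::process::Command as StdCommand;
use tokio::process::Command as TokioCommand;

// Sync path: plain blocking call, wrapped in the external `timeout` binary.
fn sync_df(mount: &str) -> std::io::Result<std::process::Output> {
    StdCommand::new("timeout")
        .args(&["2", "df", "--block-size=1", mount])
        .output()
}

// Async path: awaitable, suitable for run_command_with_timeout.
async fn async_lsblk() -> std::io::Result<std::process::Output> {
    TokioCommand::new("lsblk")
        .args(&["-rn", "-o", "NAME,MOUNTPOINT"])
        .output()
        .await
}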

Fixes: Empty Storage section after v0.1.220 deployment
Bump version to v0.1.221
2025-11-29 21:29:33 +01:00
77bf08a978 Fix blocking smartctl commands with proper async/timeout handling
- Changed disk collector to use tokio::process::Command instead of std::process::Command
- Updated run_command_with_timeout to properly kill processes on timeout
- Fixes issue where smartctl hangs on problematic drives (/dev/sda), freezing the entire agent
- Timeout now force-kills hung processes using kill -9, preventing orphaned smartctl processes

This resolves the issue where Data_3 showed an unknown status: smartctl was hanging
indefinitely trying to read from a problematic drive, blocking the entire collector.
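
Condensed from the run_command_with_timeout change in the diff below:

use std::time::Duration;
use tokio::{process::Command, time::timeout};

async fn run_with_kill(mut cmd: Command, secs: u64) -> std::io::Result<std::process::Output> {
    let child = cmd.spawn()?;
    let pid = child.id(); // grab the PID before wait_with_output() consumes the child
    match timeout(Duration::from_secs(secs), child.wait_with_output()).await {
        Ok(result) => result,
        Err(_) => {
            // The Child handle is gone, so force-kill by PID via the system kill command.
            if let Some(pid) = pid {
                let _ = Command::new("kill").args(&["-9", &pid.to_string()]).output().await;
            }
            Err(std::io::Error::new(
                std::io::ErrorKind::TimedOut,
                format!("Command timed out after {} seconds", secs),
            ))
        }
    }
}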

Bump version to v0.1.220

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 21:09:04 +01:00
929870f8b6 Bump version to v0.1.219
2025-11-29 18:35:14 +01:00
7aae852b7b Bump version to v0.1.218
2025-11-29 17:59:33 +01:00
40f3ff66d8 Show archive count range to detect inconsistencies
- Display single number if all services have same count
- Display min-max range if counts differ (indicates a problem)
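
The display rule as a one-off helper (mirrors the system widget diff below):

fn archive_display(min: i64, max: i64) -> String {
    if min == max {
        format!("{}", min)
    } else {
        format!("{}-{}", min, max)
    }
}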
2025-11-29 17:59:24 +01:00
10 changed files with 233 additions and 78 deletions

Cargo.lock generated
View File

@@ -279,7 +279,7 @@ checksum = "a1d728cc89cf3aee9ff92b05e62b19ee65a02b5702cff7d5a377e32c6ae29d8d"
 [[package]]
 name = "cm-dashboard"
-version = "0.1.216"
+version = "0.1.226"
 dependencies = [
  "anyhow",
  "chrono",
@@ -301,7 +301,7 @@ dependencies = [
 [[package]]
 name = "cm-dashboard-agent"
-version = "0.1.216"
+version = "0.1.226"
 dependencies = [
  "anyhow",
  "async-trait",
@@ -309,6 +309,7 @@ dependencies = [
  "chrono-tz",
  "clap",
  "cm-dashboard-shared",
+ "futures",
  "gethostname",
  "lettre",
  "reqwest",
@@ -324,7 +325,7 @@ dependencies = [
 [[package]]
 name = "cm-dashboard-shared"
-version = "0.1.216"
+version = "0.1.226"
 dependencies = [
  "chrono",
  "serde",
@@ -552,6 +553,21 @@ dependencies = [
  "percent-encoding",
 ]
+[[package]]
+name = "futures"
+version = "0.3.31"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "65bc07b1a8bc7c85c5f2e110c476c7389b4554ba72af57d8445ea63a576b0876"
+dependencies = [
+ "futures-channel",
+ "futures-core",
+ "futures-executor",
+ "futures-io",
+ "futures-sink",
+ "futures-task",
+ "futures-util",
+]
 [[package]]
 name = "futures-channel"
 version = "0.3.31"
@@ -559,6 +575,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "2dff15bf788c671c1934e366d07e30c1814a8ef514e1af724a602e8a2fbe1b10"
 dependencies = [
  "futures-core",
+ "futures-sink",
 ]
 [[package]]
@@ -567,12 +584,34 @@ version = "0.3.31"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "05f29059c0c2090612e8d742178b0580d2dc940c837851ad723096f87af6663e"
+[[package]]
+name = "futures-executor"
+version = "0.3.31"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1e28d1d997f585e54aebc3f97d39e72338912123a67330d723fdbb564d646c9f"
+dependencies = [
+ "futures-core",
+ "futures-task",
+ "futures-util",
+]
 [[package]]
 name = "futures-io"
 version = "0.3.31"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "9e5c1b78ca4aae1ac06c48a526a655760685149f0d465d21f37abfe57ce075c6"
+[[package]]
+name = "futures-macro"
+version = "0.3.31"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "162ee34ebcb7c64a8abebc059ce0fee27c2262618d7b60ed8faf72fef13c3650"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
 [[package]]
 name = "futures-sink"
 version = "0.3.31"
@@ -591,8 +630,11 @@ version = "0.3.31"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "9fa08315bb612088cc391249efdc3bc77536f16c91f6cf495e6fbe85b20a4a81"
 dependencies = [
+ "futures-channel",
  "futures-core",
  "futures-io",
+ "futures-macro",
+ "futures-sink",
  "futures-task",
  "memchr",
  "pin-project-lite",

View File

@@ -1,6 +1,6 @@
 [package]
 name = "cm-dashboard-agent"
-version = "0.1.217"
+version = "0.1.227"
 edition = "2021"
 [dependencies]
@@ -20,4 +20,5 @@ gethostname = { workspace = true }
 chrono-tz = "0.8"
 toml = { workspace = true }
 async-trait = "0.1"
 reqwest = { version = "0.11", features = ["json", "blocking"] }
+futures = "0.3"

View File

@@ -142,12 +142,17 @@ impl BackupCollector {
         // Build service list for this disk
         let services: Vec<String> = backup_status.services.keys().cloned().collect();
-        // Get archive count per service (use minimum to show if any service has fewer backups)
-        let total_archives: i64 = backup_status.services.values()
+        // Get min and max archive counts to detect inconsistencies
+        let archives_min: i64 = backup_status.services.values()
             .map(|service| service.archive_count)
             .min()
             .unwrap_or(0);
+        let archives_max: i64 = backup_status.services.values()
+            .map(|service| service.archive_count)
+            .max()
+            .unwrap_or(0);
         // Create disk data
         let disk_data = BackupDiskData {
             serial: backup_status.disk_serial_number.unwrap_or_else(|| "Unknown".to_string()),
@@ -161,7 +166,8 @@ impl BackupCollector {
             disk_total_gb: total_gb,
             usage_status,
             services,
-            total_archives,
+            archives_min,
+            archives_max,
         };
         disks.push(disk_data);

View File

@@ -3,10 +3,9 @@ use async_trait::async_trait;
 use cm_dashboard_shared::{AgentData, DriveData, FilesystemData, PoolData, HysteresisThresholds, Status};
 use crate::config::DiskConfig;
-use std::process::Command;
-use std::time::Instant;
+use tokio::process::Command as TokioCommand;
+use std::process::Command as StdCommand;
 use std::collections::HashMap;
-use tracing::debug;
 use super::{Collector, CollectorError};
@@ -67,9 +66,6 @@ impl DiskCollector {
     /// Collect all storage data and populate AgentData
     async fn collect_storage_data(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
-        let start_time = Instant::now();
-        debug!("Starting clean storage collection");
         // Step 1: Get mount points and their backing devices
         let mount_devices = self.get_mount_devices().await?;
@@ -104,9 +100,6 @@ impl DiskCollector {
         self.populate_drives_data(&physical_drives, &smart_data, agent_data)?;
         self.populate_pools_data(&mergerfs_pools, &smart_data, agent_data)?;
-        let elapsed = start_time.elapsed();
-        debug!("Storage collection completed in {:?}", elapsed);
         Ok(())
     }
@@ -114,7 +107,7 @@ impl DiskCollector {
     async fn get_mount_devices(&self) -> Result<HashMap<String, String>, CollectorError> {
         use super::run_command_with_timeout;
-        let mut cmd = Command::new("lsblk");
+        let mut cmd = TokioCommand::new("lsblk");
         cmd.args(&["-rn", "-o", "NAME,MOUNTPOINT"]);
         let output = run_command_with_timeout(cmd, 2).await
@@ -141,7 +134,6 @@ impl DiskCollector {
             }
         }
-        debug!("Found {} mounted block devices", mount_devices.len());
         Ok(mount_devices)
     }
@@ -154,8 +146,8 @@ impl DiskCollector {
                 Ok((total, used)) => {
                     filesystem_usage.insert(mount_point.clone(), (total, used));
                 }
-                Err(e) => {
-                    debug!("Failed to get filesystem info for {}: {}", mount_point, e);
+                Err(_e) => {
+                    // Silently skip filesystems we can't read
                 }
             }
         }
@@ -176,8 +168,6 @@ impl DiskCollector {
         // Only add if we don't already have usage data for this mount point
         if !filesystem_usage.contains_key(&mount_point) {
             if let Ok((total, used)) = self.get_filesystem_info(&mount_point) {
-                debug!("Added MergerFS filesystem usage for {}: {}GB total, {}GB used",
-                    mount_point, total as f32 / (1024.0 * 1024.0 * 1024.0), used as f32 / (1024.0 * 1024.0 * 1024.0));
                 filesystem_usage.insert(mount_point, (total, used));
             }
         }
@@ -189,7 +179,7 @@ impl DiskCollector {
     /// Get filesystem info for a single mount point
     fn get_filesystem_info(&self, mount_point: &str) -> Result<(u64, u64), CollectorError> {
-        let output = std::process::Command::new("timeout")
+        let output = StdCommand::new("timeout")
             .args(&["2", "df", "--block-size=1", mount_point])
             .output()
             .map_err(|e| CollectorError::SystemRead {
@@ -252,9 +242,8 @@ impl DiskCollector {
         } else {
             mount_point.trim_start_matches('/').replace('/', "_")
         };
         if pool_name.is_empty() {
-            debug!("Skipping mergerfs pool with empty name: {}", mount_point);
             continue;
         }
@@ -282,8 +271,7 @@ impl DiskCollector {
         // Categorize as data vs parity drives
         let (data_drives, parity_drives) = match self.categorize_pool_drives(&all_member_paths) {
             Ok(drives) => drives,
-            Err(e) => {
-                debug!("Failed to categorize drives for pool {}: {}. Skipping.", mount_point, e);
+            Err(_e) => {
                 continue;
             }
         };
@@ -298,8 +286,7 @@ impl DiskCollector {
             });
         }
     }
-    debug!("Found {} mergerfs pools", pools.len());
     Ok(pools)
 }
@@ -386,9 +373,9 @@ impl DiskCollector {
         device.to_string()
     }
-    /// Get SMART data for drives
+    /// Get SMART data for drives in parallel
     async fn get_smart_data_for_drives(&self, physical_drives: &[PhysicalDrive], mergerfs_pools: &[MergerfsPool]) -> HashMap<String, SmartData> {
-        let mut smart_data = HashMap::new();
+        use futures::future::join_all;
         // Collect all drive names
         let mut all_drives = std::collections::HashSet::new();
@@ -404,9 +391,24 @@ impl DiskCollector {
             }
         }
-        // Get SMART data for each drive
-        for drive_name in all_drives {
-            if let Ok(data) = self.get_smart_data(&drive_name).await {
+        // Collect SMART data for all drives in parallel
+        let futures: Vec<_> = all_drives
+            .iter()
+            .map(|drive_name| {
+                let drive = drive_name.clone();
+                async move {
+                    let result = self.get_smart_data(&drive).await;
+                    (drive, result)
+                }
+            })
+            .collect();
+        let results = join_all(futures).await;
+        // Build HashMap from results
+        let mut smart_data = HashMap::new();
+        for (drive_name, result) in results {
+            if let Ok(data) = result {
                 smart_data.insert(drive_name, data);
             }
         }
@@ -420,7 +422,7 @@ impl DiskCollector {
         // Use direct smartctl (no sudo) - service has CAP_SYS_RAWIO and CAP_SYS_ADMIN capabilities
         // For NVMe drives, specify device type explicitly
-        let mut cmd = Command::new("smartctl");
+        let mut cmd = TokioCommand::new("smartctl");
         if drive_name.starts_with("nvme") {
             cmd.args(&["-d", "nvme", "-a", &format!("/dev/{}", drive_name)]);
         } else {
@@ -435,8 +437,10 @@ impl DiskCollector {
         let output_str = String::from_utf8_lossy(&output.stdout);
-        if !output.status.success() {
-            // Return unknown data rather than failing completely
+        // Note: smartctl returns non-zero exit codes for warnings (like exit code 32
+        // for "temperature was high in the past"), but the output data is still valid.
+        // Only check if we got any output at all, don't reject based on exit code.
+        if output_str.is_empty() {
             return Ok(SmartData {
                 health: "UNKNOWN".to_string(),
                 serial_number: None,
@@ -444,7 +448,7 @@ impl DiskCollector {
                 wear_percent: None,
             });
         }
         let mut health = "UNKNOWN".to_string();
         let mut serial_number = None;
         let mut temperature = None;
@@ -763,7 +767,7 @@ impl DiskCollector {
     /// Get drive information for a mount path
     fn get_drive_info_for_path(&self, path: &str) -> anyhow::Result<PoolDrive> {
         // Use lsblk to find the backing device with timeout
-        let output = Command::new("timeout")
+        let output = StdCommand::new("timeout")
             .args(&["2", "lsblk", "-rn", "-o", "NAME,MOUNTPOINT"])
             .output()
             .map_err(|e| anyhow::anyhow!("Failed to run lsblk: {}", e))?;
@@ -785,20 +789,13 @@ impl DiskCollector {
         // Extract base device name (e.g., "sda1" -> "sda")
         let base_device = self.extract_base_device(&format!("/dev/{}", device));
-        // Get temperature from SMART data if available
-        let temperature = if let Ok(smart_data) = tokio::task::block_in_place(|| {
-            tokio::runtime::Handle::current().block_on(self.get_smart_data(&base_device))
-        }) {
-            smart_data.temperature_celsius
-        } else {
-            None
-        };
+        // Temperature will be filled in later from parallel SMART collection
+        // Don't collect it here to avoid sequential blocking with problematic async nesting
         Ok(PoolDrive {
             name: base_device,
             mount_point: path.to_string(),
-            temperature_celsius: temperature,
+            temperature_celsius: None,
         })
     }

View File

@@ -1,8 +1,7 @@
 use async_trait::async_trait;
 use cm_dashboard_shared::{AgentData};
-use std::process::{Command, Output};
+use std::process::Output;
 use std::time::Duration;
-use tokio::time::timeout;
 pub mod backup;
 pub mod cpu;
@@ -16,16 +15,34 @@ pub mod systemd;
 pub use error::CollectorError;
 /// Run a command with a timeout to prevent blocking
-pub async fn run_command_with_timeout(mut cmd: Command, timeout_secs: u64) -> std::io::Result<Output> {
+/// Properly kills the process if timeout is exceeded
+pub async fn run_command_with_timeout(mut cmd: tokio::process::Command, timeout_secs: u64) -> std::io::Result<Output> {
+    use tokio::time::timeout;
+    use std::process::Stdio;
     let timeout_duration = Duration::from_secs(timeout_secs);
-    match timeout(timeout_duration, tokio::task::spawn_blocking(move || cmd.output())).await {
-        Ok(Ok(result)) => result,
-        Ok(Err(e)) => Err(std::io::Error::new(std::io::ErrorKind::Other, e)),
-        Err(_) => Err(std::io::Error::new(
-            std::io::ErrorKind::TimedOut,
-            format!("Command timed out after {} seconds", timeout_secs)
-        )),
+    // Configure stdio to capture output
+    cmd.stdout(Stdio::piped());
+    cmd.stderr(Stdio::piped());
+
+    let child = cmd.spawn()?;
+    let pid = child.id();
+
+    match timeout(timeout_duration, child.wait_with_output()).await {
+        Ok(result) => result,
+        Err(_) => {
+            // Timeout - force kill the process using system kill command
+            if let Some(process_id) = pid {
+                let _ = tokio::process::Command::new("kill")
+                    .args(&["-9", &process_id.to_string()])
+                    .output()
+                    .await;
+            }
+            Err(std::io::Error::new(
+                std::io::ErrorKind::TimedOut,
+                format!("Command timed out after {} seconds", timeout_secs)
+            ))
+        }
     }
 }

View File

@@ -1,6 +1,6 @@
 [package]
 name = "cm-dashboard"
-version = "0.1.217"
+version = "0.1.227"
 edition = "2021"
 [dependencies]

View File

@@ -11,6 +11,59 @@ use tracing::debug;
 use crate::ui::theme::{Components, StatusIcons, Theme, Typography};
 use ratatui::style::Style;
+/// Column visibility configuration based on terminal width
+#[derive(Debug, Clone, Copy)]
+struct ColumnVisibility {
+    show_name: bool,
+    show_status: bool,
+    show_ram: bool,
+    show_uptime: bool,
+    show_restarts: bool,
+}
+impl ColumnVisibility {
+    /// Determine which columns to show based on available width
+    fn from_width(width: u16) -> Self {
+        if width >= 80 {
+            // Full layout: Name (25) + Status (10) + RAM (8) + Uptime (8) + Restarts (5) = 56 chars
+            Self {
+                show_name: true,
+                show_status: true,
+                show_ram: true,
+                show_uptime: true,
+                show_restarts: true,
+            }
+        } else if width >= 60 {
+            // Hide restarts: Name (25) + Status (10) + RAM (8) + Uptime (8) = 51 chars
+            Self {
+                show_name: true,
+                show_status: true,
+                show_ram: true,
+                show_uptime: true,
+                show_restarts: false,
+            }
+        } else if width >= 45 {
+            // Hide uptime and restarts: Name (25) + Status (10) + RAM (8) = 43 chars
+            Self {
+                show_name: true,
+                show_status: true,
+                show_ram: true,
+                show_uptime: false,
+                show_restarts: false,
+            }
+        } else {
+            // Minimal: Name (25) + Status (10) = 35 chars
+            Self {
+                show_name: true,
+                show_status: true,
+                show_ram: false,
+                show_uptime: false,
+                show_restarts: false,
+            }
+        }
+    }
+}
 /// Services widget displaying hierarchical systemd service statuses
 #[derive(Clone)]
 pub struct ServicesWidget {
@@ -76,7 +129,7 @@ impl ServicesWidget {
     }
     /// Format parent service line - returns text without icon for span formatting
-    fn format_parent_service_line(&self, name: &str, info: &ServiceInfo) -> String {
+    fn format_parent_service_line(&self, name: &str, info: &ServiceInfo, columns: ColumnVisibility) -> String {
         // Truncate long service names to fit layout (account for icon space)
         let short_name = if name.len() > 22 {
             format!("{}...", &name[..19])
@@ -129,10 +182,25 @@ impl ServicesWidget {
             }
         });
-        format!(
-            "{:<23} {:<10} {:<8} {:<8} {:<5}",
-            short_name, status_str, memory_str, uptime_str, restart_str
-        )
+        // Build format string based on column visibility
+        let mut parts = Vec::new();
+        if columns.show_name {
+            parts.push(format!("{:<23}", short_name));
+        }
+        if columns.show_status {
+            parts.push(format!("{:<10}", status_str));
+        }
+        if columns.show_ram {
+            parts.push(format!("{:<8}", memory_str));
+        }
+        if columns.show_uptime {
+            parts.push(format!("{:<8}", uptime_str));
+        }
+        if columns.show_restarts {
+            parts.push(format!("{:<5}", restart_str));
+        }
+        parts.join(" ")
     }
@@ -476,11 +544,28 @@ impl ServicesWidget {
         .constraints([Constraint::Length(1), Constraint::Min(0)])
         .split(inner_area);
-        // Header
-        let header = format!(
-            "{:<25} {:<10} {:<8} {:<8} {:<5}",
-            "Service:", "Status:", "RAM:", "Uptime:", "↻:"
-        );
+        // Determine which columns to show based on available width
+        let columns = ColumnVisibility::from_width(inner_area.width);
+
+        // Build header based on visible columns
+        let mut header_parts = Vec::new();
+        if columns.show_name {
+            header_parts.push(format!("{:<25}", "Service:"));
+        }
+        if columns.show_status {
+            header_parts.push(format!("{:<10}", "Status:"));
+        }
+        if columns.show_ram {
+            header_parts.push(format!("{:<8}", "RAM:"));
+        }
+        if columns.show_uptime {
+            header_parts.push(format!("{:<8}", "Uptime:"));
+        }
+        if columns.show_restarts {
+            header_parts.push(format!("{:<5}", "↻:"));
+        }
+        let header = header_parts.join(" ");
         let header_para = Paragraph::new(header).style(Typography::muted());
         frame.render_widget(header_para, content_chunks[0]);
@@ -492,11 +577,11 @@ impl ServicesWidget {
         }
         // Render the services list
-        self.render_services(frame, content_chunks[1], is_focused);
+        self.render_services(frame, content_chunks[1], is_focused, columns);
     }
     /// Render services list
-    fn render_services(&mut self, frame: &mut Frame, area: Rect, is_focused: bool) {
+    fn render_services(&mut self, frame: &mut Frame, area: Rect, is_focused: bool, columns: ColumnVisibility) {
         // Build hierarchical service list for display
         let mut display_lines: Vec<(String, Status, bool, Option<(ServiceInfo, bool)>)> = Vec::new();
@@ -506,7 +591,7 @@ impl ServicesWidget {
         for (parent_name, parent_info) in parent_services {
             // Add parent service line
-            let parent_line = self.format_parent_service_line(parent_name, parent_info);
+            let parent_line = self.format_parent_service_line(parent_name, parent_info, columns);
             display_lines.push((parent_line, parent_info.widget_status, false, None));
             // Add sub-services for this parent (if any)

View File

@@ -566,9 +566,15 @@ impl SystemWidget {
         }
         // Show usage with status and archive count
+        let archive_display = if disk.archives_min == disk.archives_max {
+            format!("{}", disk.archives_min)
+        } else {
+            format!("{}-{}", disk.archives_min, disk.archives_max)
+        };
         let usage_text = format!(
             "Usage: ({}) {:.0}% {:.0}GB/{:.0}GB",
-            disk.total_archives,
+            archive_display,
             disk.disk_usage_percent,
             disk.disk_used_gb,
             disk.disk_total_gb

View File

@@ -1,6 +1,6 @@
 [package]
 name = "cm-dashboard-shared"
-version = "0.1.217"
+version = "0.1.227"
 edition = "2021"
 [dependencies]

View File

@@ -195,7 +195,8 @@ pub struct BackupDiskData {
     pub disk_total_gb: f32,
     pub usage_status: Status,
     pub services: Vec<String>,
-    pub total_archives: i64,
+    pub archives_min: i64,
+    pub archives_max: i64,
 }
 impl AgentData {