Compare commits

...

35 Commits

Author SHA1 Message Date
5da9213da6 Fix nftables query by using full path to nft binary
Use the explicit path /run/current-system/sw/bin/nft to match the sudoers
configuration. Previously, invoking 'nft' without a path resolved to the
wrong location and failed the permission check.

- Change from 'sudo nft' to 'sudo /run/current-system/sw/bin/nft'
- Matches sudoers entry for passwordless execution
- Update version to v0.1.255
2025-12-04 16:02:14 +01:00
a7755f02ae Use 5-minute load average for CPU status calculation
Change CPU load status to use load_5min instead of load_1min for more
stable status reporting. The 5-minute average smooths out temporary spikes
and gives a better indication of sustained load.

- Change load_status calculation from load_1min to load_5min
- Add comment explaining use of 5-minute average for stability
- Update version to v0.1.254
2025-12-04 15:59:11 +01:00
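
For reference, a minimal sketch of the idea (not the agent's actual collector):
read /proc/loadavg and derive a status from the second field, the 5-minute
average. The thresholds below are illustrative stand-ins for the agent's
configured values.

use std::fs;

#[derive(Debug)]
enum LoadStatus {
    Ok,
    Warning,
    Critical,
}

fn load_status_from_proc() -> std::io::Result<LoadStatus> {
    // /proc/loadavg: "<1min> <5min> <15min> <running/total> <last-pid>"
    let content = fs::read_to_string("/proc/loadavg")?;
    let load_5min: f32 = content
        .split_whitespace()
        .nth(1) // second field = 5-minute average
        .and_then(|s| s.parse().ok())
        .unwrap_or(0.0);
    // Illustrative thresholds only; the agent uses its configured thresholds.
    Ok(if load_5min >= 8.0 {
        LoadStatus::Critical
    } else if load_5min >= 4.0 {
        LoadStatus::Warning
    } else {
        LoadStatus::Ok
    })
}

fn main() {
    match load_status_from_proc() {
        Ok(status) => println!("5-minute load status: {:?}", status),
        Err(e) => eprintln!("could not read /proc/loadavg: {e}"),
    }
}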
b886fb2045 Add detailed error logging for nft command failures
- Log exit status code when nft command fails
- Log stderr output to diagnose issues
- Distinguish between command execution failure and non-zero exit
- Update version to v0.1.253
2025-12-04 15:53:12 +01:00
cfb02e1763 Change nftables logging from debug to info
- Add info to tracing imports
- Change debug!() calls to info!() for visibility in logs
- Update version to v0.1.252
2025-12-04 15:48:34 +01:00
5b53ca3d52 Move VPN route display from vpn-connection to vpn-download
Consolidate VPN-related information under the openvpn-vpn-download service,
which now shows both the VPN route and torrent statistics as sub-services.

- Remove route from openvpn-vpn-connection
- Add route to openvpn-vpn-download (displayed first)
- Torrent stats displayed second
- Update version to v0.1.251
2025-12-04 15:36:34 +01:00
92a30913b4 Add debug logging and fix chain end detection for nftables
- Detect chain end with closing brace
- Add debug logging to trace nft command execution and port collection
- Update version to v0.1.250
2025-12-04 15:33:43 +01:00
a288a8ef9a Fix nftables parser to use input_wan chain
Change the nftables port parser to look specifically for 'chain input_wan'
instead of any chain with 'input' in the name. This ensures we only collect
WAN/external ports, not ports from LAN or other internal chains.

- Look for 'chain input_wan' specifically
- Remove internal network filters (no longer needed)
- Update version to v0.1.249
2025-12-04 15:26:20 +01:00
c65d596099 Add sudo for nftables query command
Update the nftables port collector to use sudo when querying the ruleset.
Requires a corresponding sudoers entry in the NixOS configuration.

- Change nft command to use sudo
- Update version to v0.1.248
2025-12-04 13:40:31 +01:00
98ed17947d Add nftables WAN open ports as sub-services
Display open external ports from nftables firewall rules as sub-services
grouped by protocol. Only incoming WAN ports are shown, by filtering
input-chain rules and excluding private network sources.

- Parse nftables ruleset for accept rules with dport in input chain
- Filter out internal network traffic (192.168.x, 10.x, 172.16.x, loopback)
- Extract single ports and port sets from rules
- Group and display as "TCP: 22, 80, 443" and "UDP: 53, 123"
- Update version to v0.1.247
2025-12-04 12:50:10 +01:00
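
As a self-contained illustration of the parsing approach, the sketch below
walks a hard-coded sample ruleset (the real collector shells out to
'nft list ruleset' instead); the chain layout and port numbers are invented
for the example.

// Extract ports from a rule line: "tcp dport 22 accept" or
// "tcp dport { 22, 80, 443 } accept".
fn extract_ports(line: &str) -> Vec<u16> {
    let Some(pos) = line.find("dport") else { return Vec::new(); };
    let rest = line[pos + 5..].trim();
    if let Some(stripped) = rest.strip_prefix('{') {
        match stripped.find('}') {
            Some(end) => stripped[..end]
                .split(',')
                .filter_map(|p| p.trim().parse().ok())
                .collect(),
            None => Vec::new(),
        }
    } else {
        rest.split_whitespace()
            .next()
            .and_then(|p| p.parse().ok())
            .into_iter()
            .collect()
    }
}

fn main() {
    let ruleset = "\
table inet filter {
    chain input_wan {
        tcp dport { 22, 80, 443 } accept
        udp dport 53 accept
    }
    chain input_lan {
        tcp dport 8080 accept
    }
}";
    let (mut in_wan, mut tcp, mut udp) = (false, Vec::new(), Vec::new());
    for line in ruleset.lines() {
        let line = line.trim();
        if line.contains("chain input_wan") {
            in_wan = true;
            continue;
        }
        // Chain end (closing brace) or a different chain resets the flag.
        if line == "}" || (line.starts_with("chain ") && !line.contains("input_wan")) {
            in_wan = false;
            continue;
        }
        if !in_wan || !line.contains("accept") {
            continue;
        }
        if line.contains("tcp dport") { tcp.extend(extract_ports(line)); }
        if line.contains("udp dport") { udp.extend(extract_ports(line)); }
    }
    println!("TCP: {:?}  UDP: {:?}", tcp, udp); // TCP: [22, 80, 443]  UDP: [53]
}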
1cb6abf58a Replace Transmission with qBittorrent for torrent statistics
Update the collector to use the qBittorrent Web API instead of Transmission
RPC. Query qBittorrent through the VPN namespace using the existing
passwordless sudo permissions for ip netns exec commands.

- Change service name from transmission-vpn to openvpn-vpn-download
- Replace get_transmission_stats() with get_qbittorrent_stats()
- Use curl through VPN namespace to access qBittorrent API at localhost:8080
- Parse qBittorrent JSON response for state, dlspeed, upspeed
- Count active torrents (downloading, uploading, stalledDL, stalledUP)
- Update version to v0.1.246
2025-12-02 23:31:56 +01:00
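
A sketch of the aggregation logic over an invented API response (the agent
fetches http://localhost:8080/api/v2/torrents/info through the vpn namespace;
this example just parses a hard-coded body with the serde_json crate):

fn main() {
    let body = r#"[
        {"state": "downloading", "dlspeed": 2621440, "upspeed": 0},
        {"state": "stalledUP",   "dlspeed": 0,       "upspeed": 131072},
        {"state": "pausedDL",    "dlspeed": 0,       "upspeed": 0}
    ]"#;
    let torrents: Vec<serde_json::Value> = serde_json::from_str(body).unwrap();
    let mut active = 0u32;
    let (mut dl_bps, mut ul_bps) = (0.0f64, 0.0f64);
    for t in &torrents {
        let state = t["state"].as_str().unwrap_or("");
        // Active = downloading, uploading (seeding), or stalled variants.
        if state.contains("downloading") || state.contains("uploading")
            || state == "stalledDL" || state == "stalledUP" {
            active += 1;
        }
        dl_bps += t["dlspeed"].as_f64().unwrap_or(0.0);
        ul_bps += t["upspeed"].as_f64().unwrap_or(0.0);
    }
    // qBittorrent reports bytes/s; convert to MB/s for display.
    println!("{} active, ↓ {:.1} MB/s, ↑ {:.1} MB/s",
        active, dl_bps / 1048576.0, ul_bps / 1048576.0);
    // Prints: 2 active, ↓ 2.5 MB/s, ↑ 0.1 MB/s
}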
477724b4f4 Unify sub-service display formatting for Info status
Change Docker images to use the name field for all data instead of metrics,
matching the pattern used by torrent stats and VPN routes. Increase the
display width for Status::Info sub-services from 18 to 50 characters to
accommodate longer informational text without truncation.

- Docker images now show: "image-name size: 994.0 MB" in name field
- Torrent stats show: "17 active, ↓ 2.5 MB/s, ↑ 1.2 MB/s" in name field
- Remove fixed-width padding for Info status sub-services
- Update version to v0.1.245
2025-12-02 11:36:27 +01:00
7a3ed17952 Add torrent statistics to transmission-vpn service
Implement an aggregate torrent statistics display for the transmission-vpn
service via the Transmission RPC API. Shows the active torrent count and
total download/upload speeds. Change the VPN route label from "ip:" to
"route:" for clarity.

- Add get_transmission_stats() method to query Transmission RPC
- Display format: "X active, ↓ MB/s, ↑ MB/s"
- Update version to v0.1.244
2025-12-02 11:12:14 +01:00
7e1962a168 Remove ZMQ debug packet counter from display
- Remove ZMQ stats display from system widget
- Remove update_zmq_stats method
- Remove zmq_packets_received and zmq_last_packet_age fields
- Clean up display to only show essential information
2025-12-01 19:42:05 +01:00
5bb7d6cf57 Fix CPU model extraction for newer Intel generations
- Handle 12th/13th Gen Intel format (e.g., "12th Gen Intel(R) Core(TM) i7-12700K")
- Extract full model including suffix (i7-12700K instead of truncated name)
- Simplify pattern matching logic
- Reduce fallback truncation to 15 chars
2025-12-01 19:35:03 +01:00
7a0dc27846 Extract CPU model number only to save display space
- Parse Intel models (i3/i5/i7/i9-XXXX) from full name
- Parse AMD Ryzen models (Ryzen X XXXX) from full name
- Display format: "i7-9700 (8 cores)" instead of full CPU name
- Reduces CPU section width significantly
2025-12-01 19:23:26 +01:00
5bc250a738 Add static CPU model and core count to CPU collector
- Collect CPU model name and core count from /proc/cpuinfo
- Only collect once at startup (check if fields already set)
- Display below C-state row in dashboard CPU section
- Move CPU info collection from NixOS collector to CPU collector
2025-12-01 19:15:57 +01:00
5c3ac8b15e Remove Docker image icon and use Status::Info
Docker images now use Status::Info like VPN IP.
No "D" prefix, no status icon - just name and metrics.
All informational sub-services handled consistently.

Version: v0.1.239
2025-12-01 18:45:28 +01:00
bdfff942f7 Remove VPN external IP logging
Clean up debug logging for production.

Version: v0.1.238
2025-12-01 15:16:34 +01:00
47ab1e387d Add Status::Info for informational sub-services
Agent uses Status enum to control display:
- Status::Info: no icon, no status text (VPN IP)
- Other statuses: icon + text (containers, nginx sites)

Dashboard checks status, no hardcoded service_type exceptions.

Version: v0.1.237
2025-12-01 15:11:16 +01:00
966ba27b1e Remove status icon from VPN IP and change to lowercase
Change display from "IP: X.X.X.X" to "ip: X.X.X.X".
Remove status icon for vpn_route service type.

Version: v0.1.236
2025-12-01 14:40:21 +01:00
6c6c9144bd Add info-level logging for VPN external IP debugging
Change debug to info/warn logging to diagnose VPN IP query issues.
Use exact service name match instead of contains.

Version: v0.1.235
2025-12-01 14:30:25 +01:00
3fdcec8047 Add sudo for VPN namespace access
Use sudo to access vpn namespace for external IP query.
Requires corresponding sudo permission in NixOS config.

Version: v0.1.234
2025-12-01 14:14:41 +01:00
1fcaf4a670 Fix VPN namespace name for external IP query
Use correct namespace name 'vpn' instead of 'openvpn-namespace'.

Version: v0.1.233
2025-12-01 14:09:07 +01:00
885e19f7fd Add external IP display for OpenVPN connections
Display VPN external IP as sub-service under openvpn-vpn-connection.
Query external IP through openvpn-namespace using curl ifconfig.me.

Version: v0.1.232
2025-12-01 13:49:54 +01:00
a7b69b8ae7 Fix duplicate data by clearing vectors before collection
Collectors now clear their target vectors (tmpfs, drives, pools, services)
before populating to prevent duplicates when updating cached AgentData.

- Clear tmpfs list in memory collector
- Clear drives and pools in disk collector
- Clear services in systemd collector
- Bump version to v0.1.231
2025-12-01 13:21:26 +01:00
2d290f40b2 Fix data caching to prevent empty broadcasts
CRITICAL FIX: Collectors now update cached AgentData instead of
creating new empty data each cycle. This prevents the dashboard
from seeing flashing/disappearing data.

- Add cached_agent_data field to Agent struct
- Update cached data when collectors run
- Always broadcast the full cached data every 2s
- Only individual collectors respect their intervals
- Bump version to v0.1.230
2025-12-01 13:14:53 +01:00
ad1fcaa27b Fix collector interval timing to prevent excessive SMART checks
Collectors now respect their configured intervals instead of running
every transmission cycle (2s). This prevents disk SMART checks from
running every 2 seconds, which was causing constant disk activity.

- Add TimedCollector wrapper with interval tracking
- Only collect from collectors whose interval has elapsed
- Disk collector now properly runs every 300s instead of every 2s
- Bump version to v0.1.229
2025-12-01 13:03:45 +01:00
60ab4d4f9e Fix service panel column width calculation
Replace hardcoded terminal width thresholds with dynamic calculation
based on actual column requirements. Column visibility now adapts
correctly at 58, 52, 43, and 34 character widths instead of the
previous arbitrary 80, 60, 45 thresholds.

- Add width constants for each column (NAME=23, STATUS=10, etc)
- Calculate cumulative widths dynamically for each layout tier
- Ensure header and data formatting use consistent width values
- Fix service name truncation to respect calculated column width
2025-11-30 12:09:44 +01:00
67034c84b9 Add responsive column visibility to service panel
Service panel now dynamically shows/hides columns based on terminal width:
- ≥80 chars: All columns (Name, Status, RAM, Uptime, Restarts)
- ≥60 chars: Hide Restarts only
- ≥45 chars: Hide Uptime and Restarts
- <45 chars: Minimal (Name and Status only)

Improves dashboard usability on smaller terminal sizes.
2025-11-30 10:50:08 +01:00
c62c7fa698 Remove debug logging from disk collector
Removed all debug! statements from disk collector to reduce log noise.

Bump version to v0.1.226
2025-11-30 00:44:38 +01:00
0b1d8c0a73 Fix Data_3 showing as unknown by handling smartctl warning exit codes
Root cause: sda's temperature exceeded its threshold in the past, causing
smartctl to return exit code 32 (warning: "Attributes have been <= threshold
in the past"). The agent checked output.status.success() and rejected the
entire output as failed, even though the data (serial, temperature, health)
was perfectly valid.

Smartctl exit codes are bit flags for informational warnings:
- Exit 0: No warnings
- Exit 32 (bit 5): Attributes were at/below threshold in past
- Exit 64 (bit 6): Error log has entries
- etc.

The output data is valid regardless of these warning flags.

Solution: Parse output as long as it's not empty, ignore exit code.
Only return UNKNOWN if output is actually empty (command truly failed).

Result: Data_3 will now show "ZDZ4VE0B T: 31°C" instead of "? Data_3: sda"

Bump version to v0.1.225
2025-11-30 00:35:19 +01:00
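
For reference, a sketch of decoding those bit flags (bit meanings per
smartctl(8); the agent itself just applies the simpler "parse any non-empty
output" rule):

// Bits 0-2 signal real invocation failures (bad command line, device open
// failed, SMART command failed); bits 3-7 are informational flags about the
// disk, so the printed report is still valid when only they are set.
fn smartctl_warnings(code: i32) -> Vec<&'static str> {
    let mut notes = Vec::new();
    if code & (1 << 3) != 0 { notes.push("SMART status: disk failing"); }
    if code & (1 << 4) != 0 { notes.push("prefail attributes <= threshold"); }
    if code & (1 << 5) != 0 { notes.push("attributes were <= threshold in the past"); }
    if code & (1 << 6) != 0 { notes.push("device error log has entries"); }
    if code & (1 << 7) != 0 { notes.push("self-test log has entries"); }
    notes
}

fn main() {
    // sda's case: exit code 32 = bit 5 only - a warning, not a failure.
    assert_eq!(
        smartctl_warnings(32),
        vec!["attributes were <= threshold in the past"]
    );
}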
c77aa6eaaa Fix Data_3 timeout by removing sequential SMART during pool detection
Root cause: SMART data was collected TWICE:
1. Sequential collection during pool detection in get_drive_info_for_path()
   using problematic tokio::task::block_in_place() nesting
2. Parallel collection in get_smart_data_for_drives() (v0.1.223)

The sequential collection happened FIRST during pool detection, causing
sda (Data_3) to time out due to:
- Bad async nesting: block_in_place() wrapping block_on()
- Sequential execution causing runtime issues
- sda being third in the sequence, by which point the runtime had degraded

Solution: Remove SMART collection from get_drive_info_for_path().
Pool drive temperatures are populated later from the parallel SMART
collection which properly uses futures::join_all.

Benefits:
- Eliminates problematic async nesting
- All SMART queries happen once in parallel only
- sda/Data_3 should now show serial (ZDZ4VE0B) and temperature

Bump version to v0.1.224
2025-11-30 00:14:25 +01:00
8a0e68f0e3 Fix Data_3 timeout by parallelizing SMART collection
Root cause: SMART data was collected sequentially, one drive at a time.
With 5 drives taking ~500ms each, total collection time was 2.5+ seconds.
With the disk collector running every second, collections overlapped and
caused resource contention. The last drive (sda/Data_3) would time out
because it was still being accessed by the previous collection.

Solution: Query all drives in parallel using futures::join_all. Now all
drives get their SMART data collected simultaneously with independent
3-second timeouts, eliminating contention and reducing total collection
time from 2.5+ seconds to ~500ms (the slowest single drive).

Benefits:
- All drives complete in ~500ms instead of 2.5+ seconds
- No overlapping collections causing resource contention
- Each drive gets full 3-second timeout window
- sda/Data_3 should now show temperature and serial number

Bump version to v0.1.223
2025-11-29 23:51:43 +01:00
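
A minimal sketch of the pattern, assuming the futures crate and tokio with the
process and time features (drive names and the smartctl invocation are
illustrative):

use futures::future::join_all;
use std::time::Duration;
use tokio::process::Command;

#[tokio::main]
async fn main() {
    let drives = ["sda", "sdb", "nvme0n1"];
    // One future per drive, each with its own independent 3-second timeout.
    let tasks = drives.iter().map(|drive| async move {
        let result = tokio::time::timeout(
            Duration::from_secs(3),
            Command::new("smartctl")
                .arg("-a")
                .arg(format!("/dev/{drive}"))
                .output(),
        )
        .await;
        (drive, result)
    });
    // All queries run concurrently: total time ~= the slowest single drive.
    for (drive, result) in join_all(tasks).await {
        match result {
            Ok(Ok(out)) => println!("{drive}: {} bytes of SMART output", out.stdout.len()),
            Ok(Err(e)) => println!("{drive}: failed to run smartctl: {e}"),
            Err(_) => println!("{drive}: timed out after 3s"),
        }
    }
}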
2d653fe9ae Fix empty Storage section by configuring stdio pipes
Root cause: run_command_with_timeout() was calling cmd.spawn() without
configuring stdout/stderr pipes. This caused command output to go to
journald instead of being captured by wait_with_output(). The disk
collector received empty output and failed silently.

Solution: Configure stdout(Stdio::piped()) and stderr(Stdio::piped())
before spawning commands. This ensures wait_with_output() can properly
capture command output.

Fixes: Empty Storage section, lsblk output appearing in journald
Bump version to v0.1.222
2025-11-29 23:25:17 +01:00
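
A sketch of the corrected helper, assuming tokio; kill_on_drop(true) stands in
here for the agent's explicit pid-based kill on timeout:

use std::process::{Output, Stdio};
use std::time::Duration;
use tokio::process::Command;

async fn run_command_with_timeout(
    mut cmd: Command,
    timeout_secs: u64,
) -> std::io::Result<Output> {
    // Without these two lines the child inherits the service's stdout/stderr
    // (journald under systemd) and wait_with_output() captures nothing.
    cmd.stdout(Stdio::piped());
    cmd.stderr(Stdio::piped());
    // If the timeout drops the future below, reap the child instead of leaking it.
    cmd.kill_on_drop(true);
    let child = cmd.spawn()?;
    tokio::time::timeout(Duration::from_secs(timeout_secs), child.wait_with_output())
        .await
        .map_err(|_| std::io::Error::new(std::io::ErrorKind::TimedOut, "command timed out"))?
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Usage mirroring the disk collector's lsblk call with a 2s timeout.
    let mut cmd = Command::new("lsblk");
    cmd.args(["-rn", "-o", "NAME,MOUNTPOINT"]);
    let out = run_command_with_timeout(cmd, 2).await?;
    println!("{}", String::from_utf8_lossy(&out.stdout));
    Ok(())
}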
caba78004e Fix empty Storage section by properly aliasing command types
v0.1.220 broke the disk collector by changing the import from
std::process::Command to tokio::process::Command, but lines 193 and
767 explicitly used std::process::Command::new(), which silently failed.

Solution: Import both as aliases (TokioCommand/StdCommand) and use
appropriate type for each operation - async commands use TokioCommand
with run_command_with_timeout, sync commands use StdCommand with
system timeout wrapper.

Fixes: Empty Storage section after v0.1.220 deployment
Bump version to v0.1.221
2025-11-29 21:29:33 +01:00
16 changed files with 738 additions and 159 deletions

Cargo.lock generated
View File

@@ -279,7 +279,7 @@ checksum = "a1d728cc89cf3aee9ff92b05e62b19ee65a02b5702cff7d5a377e32c6ae29d8d"
[[package]]
name = "cm-dashboard"
version = "0.1.219"
version = "0.1.254"
dependencies = [
"anyhow",
"chrono",
@@ -301,7 +301,7 @@ dependencies = [
[[package]]
name = "cm-dashboard-agent"
version = "0.1.219"
version = "0.1.254"
dependencies = [
"anyhow",
"async-trait",
@@ -309,6 +309,7 @@ dependencies = [
"chrono-tz",
"clap",
"cm-dashboard-shared",
"futures",
"gethostname",
"lettre",
"reqwest",
@@ -324,7 +325,7 @@ dependencies = [
[[package]]
name = "cm-dashboard-shared"
version = "0.1.219"
version = "0.1.254"
dependencies = [
"chrono",
"serde",
@@ -552,6 +553,21 @@ dependencies = [
"percent-encoding",
]
[[package]]
name = "futures"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "65bc07b1a8bc7c85c5f2e110c476c7389b4554ba72af57d8445ea63a576b0876"
dependencies = [
"futures-channel",
"futures-core",
"futures-executor",
"futures-io",
"futures-sink",
"futures-task",
"futures-util",
]
[[package]]
name = "futures-channel"
version = "0.3.31"
@@ -559,6 +575,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2dff15bf788c671c1934e366d07e30c1814a8ef514e1af724a602e8a2fbe1b10"
dependencies = [
"futures-core",
"futures-sink",
]
[[package]]
@@ -567,12 +584,34 @@ version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "05f29059c0c2090612e8d742178b0580d2dc940c837851ad723096f87af6663e"
[[package]]
name = "futures-executor"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e28d1d997f585e54aebc3f97d39e72338912123a67330d723fdbb564d646c9f"
dependencies = [
"futures-core",
"futures-task",
"futures-util",
]
[[package]]
name = "futures-io"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9e5c1b78ca4aae1ac06c48a526a655760685149f0d465d21f37abfe57ce075c6"
[[package]]
name = "futures-macro"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "162ee34ebcb7c64a8abebc059ce0fee27c2262618d7b60ed8faf72fef13c3650"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "futures-sink"
version = "0.3.31"
@@ -591,8 +630,11 @@ version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9fa08315bb612088cc391249efdc3bc77536f16c91f6cf495e6fbe85b20a4a81"
dependencies = [
"futures-channel",
"futures-core",
"futures-io",
"futures-macro",
"futures-sink",
"futures-task",
"memchr",
"pin-project-lite",

View File

@@ -1,6 +1,6 @@
[package]
name = "cm-dashboard-agent"
version = "0.1.220"
version = "0.1.255"
edition = "2021"
[dependencies]
@@ -20,4 +20,5 @@ gethostname = { workspace = true }
chrono-tz = "0.8"
toml = { workspace = true }
async-trait = "0.1"
reqwest = { version = "0.11", features = ["json", "blocking"] }
futures = "0.3"

View File

@@ -1,6 +1,6 @@
use anyhow::Result;
use gethostname::gethostname;
use std::time::Duration;
use std::time::{Duration, Instant};
use tokio::time::interval;
use tracing::{debug, error, info};
@@ -19,13 +19,22 @@ use crate::collectors::{
use crate::notifications::NotificationManager;
use cm_dashboard_shared::AgentData;
/// Wrapper for collectors with timing information
struct TimedCollector {
collector: Box<dyn Collector>,
interval: Duration,
last_collection: Option<Instant>,
name: String,
}
pub struct Agent {
hostname: String,
config: AgentConfig,
zmq_handler: ZmqHandler,
collectors: Vec<Box<dyn Collector>>,
collectors: Vec<TimedCollector>,
notification_manager: NotificationManager,
previous_status: Option<SystemStatus>,
cached_agent_data: AgentData,
}
/// Track system component status for change detection
@@ -55,36 +64,78 @@ impl Agent {
config.zmq.publisher_port
);
// Initialize collectors
let mut collectors: Vec<Box<dyn Collector>> = Vec::new();
// Initialize collectors with timing information
let mut collectors: Vec<TimedCollector> = Vec::new();
// Add enabled collectors
if config.collectors.cpu.enabled {
collectors.push(Box::new(CpuCollector::new(config.collectors.cpu.clone())));
collectors.push(TimedCollector {
collector: Box::new(CpuCollector::new(config.collectors.cpu.clone())),
interval: Duration::from_secs(config.collectors.cpu.interval_seconds),
last_collection: None,
name: "CPU".to_string(),
});
info!("CPU collector initialized with {}s interval", config.collectors.cpu.interval_seconds);
}
if config.collectors.memory.enabled {
collectors.push(Box::new(MemoryCollector::new(config.collectors.memory.clone())));
collectors.push(TimedCollector {
collector: Box::new(MemoryCollector::new(config.collectors.memory.clone())),
interval: Duration::from_secs(config.collectors.memory.interval_seconds),
last_collection: None,
name: "Memory".to_string(),
});
info!("Memory collector initialized with {}s interval", config.collectors.memory.interval_seconds);
}
if config.collectors.disk.enabled {
collectors.push(Box::new(DiskCollector::new(config.collectors.disk.clone())));
collectors.push(TimedCollector {
collector: Box::new(DiskCollector::new(config.collectors.disk.clone())),
interval: Duration::from_secs(config.collectors.disk.interval_seconds),
last_collection: None,
name: "Disk".to_string(),
});
info!("Disk collector initialized with {}s interval", config.collectors.disk.interval_seconds);
}
if config.collectors.systemd.enabled {
collectors.push(Box::new(SystemdCollector::new(config.collectors.systemd.clone())));
collectors.push(TimedCollector {
collector: Box::new(SystemdCollector::new(config.collectors.systemd.clone())),
interval: Duration::from_secs(config.collectors.systemd.interval_seconds),
last_collection: None,
name: "Systemd".to_string(),
});
info!("Systemd collector initialized with {}s interval", config.collectors.systemd.interval_seconds);
}
if config.collectors.backup.enabled {
collectors.push(Box::new(BackupCollector::new()));
collectors.push(TimedCollector {
collector: Box::new(BackupCollector::new()),
interval: Duration::from_secs(config.collectors.backup.interval_seconds),
last_collection: None,
name: "Backup".to_string(),
});
info!("Backup collector initialized with {}s interval", config.collectors.backup.interval_seconds);
}
if config.collectors.network.enabled {
collectors.push(Box::new(NetworkCollector::new(config.collectors.network.clone())));
collectors.push(TimedCollector {
collector: Box::new(NetworkCollector::new(config.collectors.network.clone())),
interval: Duration::from_secs(config.collectors.network.interval_seconds),
last_collection: None,
name: "Network".to_string(),
});
info!("Network collector initialized with {}s interval", config.collectors.network.interval_seconds);
}
if config.collectors.nixos.enabled {
collectors.push(Box::new(NixOSCollector::new(config.collectors.nixos.clone())));
collectors.push(TimedCollector {
collector: Box::new(NixOSCollector::new(config.collectors.nixos.clone())),
interval: Duration::from_secs(config.collectors.nixos.interval_seconds),
last_collection: None,
name: "NixOS".to_string(),
});
info!("NixOS collector initialized with {}s interval", config.collectors.nixos.interval_seconds);
}
info!("Initialized {} collectors", collectors.len());
@@ -93,6 +144,9 @@ impl Agent {
let notification_manager = NotificationManager::new(&config.notifications, &hostname)?;
info!("Notification manager initialized");
// Initialize cached agent data
let cached_agent_data = AgentData::new(hostname.clone(), env!("CARGO_PKG_VERSION").to_string());
Ok(Self {
hostname,
config,
@@ -100,6 +154,7 @@ impl Agent {
collectors,
notification_manager,
previous_status: None,
cached_agent_data,
})
}
@@ -149,24 +204,47 @@ impl Agent {
async fn collect_and_broadcast(&mut self) -> Result<()> {
debug!("Starting structured data collection");
// Initialize empty AgentData
let mut agent_data = AgentData::new(self.hostname.clone(), env!("CARGO_PKG_VERSION").to_string());
// Collect data from collectors whose intervals have elapsed
// Update cached_agent_data with new data
let now = Instant::now();
for timed_collector in &mut self.collectors {
let should_collect = match timed_collector.last_collection {
None => true, // First collection
Some(last_time) => now.duration_since(last_time) >= timed_collector.interval,
};
// Collect data from all collectors
for collector in &self.collectors {
if let Err(e) = collector.collect_structured(&mut agent_data).await {
error!("Collector failed: {}", e);
// Continue with other collectors even if one fails
if should_collect {
if let Err(e) = timed_collector.collector.collect_structured(&mut self.cached_agent_data).await {
error!("Collector {} failed: {}", timed_collector.name, e);
// Update last_collection time even on failure to prevent immediate retries
timed_collector.last_collection = Some(now);
} else {
timed_collector.last_collection = Some(now);
debug!(
"Collected from {} ({}s interval)",
timed_collector.name,
timed_collector.interval.as_secs()
);
}
}
}
// Update timestamp on cached data
self.cached_agent_data.timestamp = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_secs();
// Clone for notification check (to avoid borrow issues)
let agent_data_snapshot = self.cached_agent_data.clone();
// Check for status changes and send notifications
if let Err(e) = self.check_status_changes_and_notify(&agent_data).await {
if let Err(e) = self.check_status_changes_and_notify(&agent_data_snapshot).await {
error!("Failed to check status changes: {}", e);
}
// Broadcast the structured data via ZMQ
if let Err(e) = self.zmq_handler.publish_agent_data(&agent_data).await {
// Broadcast the cached structured data via ZMQ
if let Err(e) = self.zmq_handler.publish_agent_data(&agent_data_snapshot).await {
error!("Failed to broadcast agent data: {}", e);
} else {
debug!("Successfully broadcast structured agent data");

View File

@@ -119,6 +119,71 @@ impl CpuCollector {
utils::parse_u64(content.trim())
}
/// Collect static CPU information from /proc/cpuinfo (only once at startup)
async fn collect_cpu_info(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
let content = utils::read_proc_file("/proc/cpuinfo")?;
let mut model_name: Option<String> = None;
let mut core_count: u32 = 0;
for line in content.lines() {
if line.starts_with("model name") {
if let Some(colon_pos) = line.find(':') {
let full_name = line[colon_pos + 1..].trim();
// Extract just the model number (e.g., "i7-9700" from "Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz")
let model = Self::extract_cpu_model(full_name);
if model_name.is_none() {
model_name = Some(model);
}
}
} else if line.starts_with("processor") {
core_count += 1;
}
}
agent_data.system.cpu.model_name = model_name;
if core_count > 0 {
agent_data.system.cpu.core_count = Some(core_count);
}
Ok(())
}
/// Extract CPU model number from full model name
/// Examples:
/// - "Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz" -> "i7-9700"
/// - "12th Gen Intel(R) Core(TM) i7-12700K" -> "i7-12700K"
/// - "AMD Ryzen 9 5950X 16-Core Processor" -> "Ryzen 9 5950X"
fn extract_cpu_model(full_name: &str) -> String {
// Look for Intel Core patterns (both old and new gen): i3, i5, i7, i9
// Match pattern like "i7-12700K" or "i7-9700"
for prefix in &["i3-", "i5-", "i7-", "i9-"] {
if let Some(pos) = full_name.find(prefix) {
// Find end of model number (until space or end of string)
let after_prefix = &full_name[pos..];
let end = after_prefix.find(' ').unwrap_or(after_prefix.len());
return after_prefix[..end].to_string();
}
}
// Look for AMD Ryzen pattern
if let Some(pos) = full_name.find("Ryzen") {
// Extract "Ryzen X XXXX" pattern
let after_ryzen = &full_name[pos..];
let parts: Vec<&str> = after_ryzen.split_whitespace().collect();
if parts.len() >= 3 {
return format!("{} {} {}", parts[0], parts[1], parts[2]);
}
}
// Fallback: return first 15 characters or full name if shorter
if full_name.len() > 15 {
full_name[..15].to_string()
} else {
full_name.to_string()
}
}
/// Collect CPU C-state (idle depth) and populate AgentData with top 3 C-states by usage
async fn collect_cstate(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
// Read C-state usage from first CPU (representative of overall system)
@@ -192,6 +257,11 @@ impl Collector for CpuCollector {
debug!("Collecting CPU metrics");
let start = std::time::Instant::now();
// Collect static CPU info (only once at startup)
if agent_data.system.cpu.model_name.is_none() || agent_data.system.cpu.core_count.is_none() {
self.collect_cpu_info(agent_data).await?;
}
// Collect load averages (always available)
self.collect_load_averages(agent_data).await?;
@@ -212,8 +282,8 @@ impl Collector for CpuCollector {
);
}
// Calculate status using thresholds
agent_data.system.cpu.load_status = self.calculate_load_status(agent_data.system.cpu.load_1min);
// Calculate status using thresholds (use 5-minute average for stability)
agent_data.system.cpu.load_status = self.calculate_load_status(agent_data.system.cpu.load_5min);
agent_data.system.cpu.temperature_status = if let Some(temp) = agent_data.system.cpu.temperature_celsius {
self.calculate_temperature_status(temp)
} else {

View File

@@ -3,10 +3,9 @@ use async_trait::async_trait;
use cm_dashboard_shared::{AgentData, DriveData, FilesystemData, PoolData, HysteresisThresholds, Status};
use crate::config::DiskConfig;
use tokio::process::Command;
use std::time::Instant;
use tokio::process::Command as TokioCommand;
use std::process::Command as StdCommand;
use std::collections::HashMap;
use tracing::debug;
use super::{Collector, CollectorError};
@@ -67,8 +66,9 @@ impl DiskCollector {
/// Collect all storage data and populate AgentData
async fn collect_storage_data(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
let start_time = Instant::now();
debug!("Starting clean storage collection");
// Clear drives and pools to prevent duplicates when updating cached data
agent_data.system.storage.drives.clear();
agent_data.system.storage.pools.clear();
// Step 1: Get mount points and their backing devices
let mount_devices = self.get_mount_devices().await?;
@@ -104,9 +104,6 @@ impl DiskCollector {
self.populate_drives_data(&physical_drives, &smart_data, agent_data)?;
self.populate_pools_data(&mergerfs_pools, &smart_data, agent_data)?;
let elapsed = start_time.elapsed();
debug!("Storage collection completed in {:?}", elapsed);
Ok(())
}
@@ -114,7 +111,7 @@ impl DiskCollector {
async fn get_mount_devices(&self) -> Result<HashMap<String, String>, CollectorError> {
use super::run_command_with_timeout;
let mut cmd = Command::new("lsblk");
let mut cmd = TokioCommand::new("lsblk");
cmd.args(&["-rn", "-o", "NAME,MOUNTPOINT"]);
let output = run_command_with_timeout(cmd, 2).await
@@ -141,7 +138,6 @@ impl DiskCollector {
}
}
debug!("Found {} mounted block devices", mount_devices.len());
Ok(mount_devices)
}
@@ -154,8 +150,8 @@ impl DiskCollector {
Ok((total, used)) => {
filesystem_usage.insert(mount_point.clone(), (total, used));
}
Err(e) => {
debug!("Failed to get filesystem info for {}: {}", mount_point, e);
Err(_e) => {
// Silently skip filesystems we can't read
}
}
}
@@ -176,8 +172,6 @@ impl DiskCollector {
// Only add if we don't already have usage data for this mount point
if !filesystem_usage.contains_key(&mount_point) {
if let Ok((total, used)) = self.get_filesystem_info(&mount_point) {
debug!("Added MergerFS filesystem usage for {}: {}GB total, {}GB used",
mount_point, total as f32 / (1024.0 * 1024.0 * 1024.0), used as f32 / (1024.0 * 1024.0 * 1024.0));
filesystem_usage.insert(mount_point, (total, used));
}
}
@@ -189,7 +183,7 @@ impl DiskCollector {
/// Get filesystem info for a single mount point
fn get_filesystem_info(&self, mount_point: &str) -> Result<(u64, u64), CollectorError> {
let output = std::process::Command::new("timeout")
let output = StdCommand::new("timeout")
.args(&["2", "df", "--block-size=1", mount_point])
.output()
.map_err(|e| CollectorError::SystemRead {
@@ -252,9 +246,8 @@ impl DiskCollector {
} else {
mount_point.trim_start_matches('/').replace('/', "_")
};
if pool_name.is_empty() {
debug!("Skipping mergerfs pool with empty name: {}", mount_point);
continue;
}
@@ -282,8 +275,7 @@ impl DiskCollector {
// Categorize as data vs parity drives
let (data_drives, parity_drives) = match self.categorize_pool_drives(&all_member_paths) {
Ok(drives) => drives,
Err(e) => {
debug!("Failed to categorize drives for pool {}: {}. Skipping.", mount_point, e);
Err(_e) => {
continue;
}
};
@@ -298,8 +290,7 @@ impl DiskCollector {
});
}
}
debug!("Found {} mergerfs pools", pools.len());
Ok(pools)
}
@@ -386,9 +377,9 @@ impl DiskCollector {
device.to_string()
}
/// Get SMART data for drives
/// Get SMART data for drives in parallel
async fn get_smart_data_for_drives(&self, physical_drives: &[PhysicalDrive], mergerfs_pools: &[MergerfsPool]) -> HashMap<String, SmartData> {
let mut smart_data = HashMap::new();
use futures::future::join_all;
// Collect all drive names
let mut all_drives = std::collections::HashSet::new();
@@ -404,9 +395,24 @@ impl DiskCollector {
}
}
// Get SMART data for each drive
for drive_name in all_drives {
if let Ok(data) = self.get_smart_data(&drive_name).await {
// Collect SMART data for all drives in parallel
let futures: Vec<_> = all_drives
.iter()
.map(|drive_name| {
let drive = drive_name.clone();
async move {
let result = self.get_smart_data(&drive).await;
(drive, result)
}
})
.collect();
let results = join_all(futures).await;
// Build HashMap from results
let mut smart_data = HashMap::new();
for (drive_name, result) in results {
if let Ok(data) = result {
smart_data.insert(drive_name, data);
}
}
@@ -420,7 +426,7 @@ impl DiskCollector {
// Use direct smartctl (no sudo) - service has CAP_SYS_RAWIO and CAP_SYS_ADMIN capabilities
// For NVMe drives, specify device type explicitly
let mut cmd = Command::new("smartctl");
let mut cmd = TokioCommand::new("smartctl");
if drive_name.starts_with("nvme") {
cmd.args(&["-d", "nvme", "-a", &format!("/dev/{}", drive_name)]);
} else {
@@ -435,8 +441,10 @@ impl DiskCollector {
let output_str = String::from_utf8_lossy(&output.stdout);
if !output.status.success() {
// Return unknown data rather than failing completely
// Note: smartctl returns non-zero exit codes for warnings (like exit code 32
// for "temperature was high in the past"), but the output data is still valid.
// Only check if we got any output at all, don't reject based on exit code.
if output_str.is_empty() {
return Ok(SmartData {
health: "UNKNOWN".to_string(),
serial_number: None,
@@ -444,7 +452,7 @@ impl DiskCollector {
wear_percent: None,
});
}
let mut health = "UNKNOWN".to_string();
let mut serial_number = None;
let mut temperature = None;
@@ -763,7 +771,7 @@ impl DiskCollector {
/// Get drive information for a mount path
fn get_drive_info_for_path(&self, path: &str) -> anyhow::Result<PoolDrive> {
// Use lsblk to find the backing device with timeout
let output = std::process::Command::new("timeout")
let output = StdCommand::new("timeout")
.args(&["2", "lsblk", "-rn", "-o", "NAME,MOUNTPOINT"])
.output()
.map_err(|e| anyhow::anyhow!("Failed to run lsblk: {}", e))?;
@@ -785,20 +793,13 @@ impl DiskCollector {
// Extract base device name (e.g., "sda1" -> "sda")
let base_device = self.extract_base_device(&format!("/dev/{}", device));
// Get temperature from SMART data if available
let temperature = if let Ok(smart_data) = tokio::task::block_in_place(|| {
tokio::runtime::Handle::current().block_on(self.get_smart_data(&base_device))
}) {
smart_data.temperature_celsius
} else {
None
};
// Temperature will be filled in later from parallel SMART collection
// Don't collect it here to avoid sequential blocking with problematic async nesting
Ok(PoolDrive {
name: base_device,
mount_point: path.to_string(),
temperature_celsius: temperature,
temperature_celsius: None,
})
}

View File

@@ -200,13 +200,16 @@ impl Collector for MemoryCollector {
debug!("Collecting memory metrics");
let start = std::time::Instant::now();
// Clear tmpfs list to prevent duplicates when updating cached data
agent_data.system.memory.tmpfs.clear();
// Parse memory info from /proc/meminfo
let info = self.parse_meminfo().await?;
// Populate memory data directly
self.populate_memory_data(&info, agent_data).await?;
// Collect tmpfs data
self.populate_tmpfs_data(agent_data).await?;
let duration = start.elapsed();

View File

@@ -18,8 +18,13 @@ pub use error::CollectorError;
/// Properly kills the process if timeout is exceeded
pub async fn run_command_with_timeout(mut cmd: tokio::process::Command, timeout_secs: u64) -> std::io::Result<Output> {
use tokio::time::timeout;
use std::process::Stdio;
let timeout_duration = Duration::from_secs(timeout_secs);
// Configure stdio to capture output
cmd.stdout(Stdio::piped());
cmd.stderr(Stdio::piped());
let child = cmd.spawn()?;
let pid = child.id();

View File

@@ -4,7 +4,7 @@ use cm_dashboard_shared::{AgentData, ServiceData, SubServiceData, SubServiceMetr
use std::process::Command;
use std::sync::RwLock;
use std::time::Instant;
use tracing::debug;
use tracing::{debug, info};
use super::{Collector, CollectorError};
use crate::config::SystemdConfig;
@@ -142,23 +142,68 @@ impl SystemdCollector {
// Add Docker images
let docker_images = self.get_docker_images();
for (image_name, image_status, image_size_mb) in docker_images {
let mut metrics = Vec::new();
metrics.push(SubServiceMetric {
label: "size".to_string(),
value: image_size_mb,
unit: Some("MB".to_string()),
});
for (image_name, _image_status, image_size_mb) in docker_images {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: image_name.to_string(),
service_status: self.calculate_service_status(&image_name, &image_status),
name: format!("{} size: {:.1} MB", image_name, image_size_mb),
service_status: Status::Info, // Informational only, no status icon
metrics,
service_type: "image".to_string(),
});
}
}
if service_name == "openvpn-vpn-download" && status_info.active_state == "active" {
// Add VPN route
if let Some(external_ip) = self.get_vpn_external_ip() {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: format!("route: {}", external_ip),
service_status: Status::Info,
metrics,
service_type: "vpn_route".to_string(),
});
}
// Add torrent stats
if let Some((active_count, download_mbps, upload_mbps)) = self.get_qbittorrent_stats() {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: format!("{} active, ↓ {:.1} MB/s, ↑ {:.1} MB/s", active_count, download_mbps, upload_mbps),
service_status: Status::Info,
metrics,
service_type: "torrent_stats".to_string(),
});
}
}
if service_name == "nftables" && status_info.active_state == "active" {
let (tcp_ports, udp_ports) = self.get_nftables_open_ports();
if !tcp_ports.is_empty() {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: format!("TCP: {}", tcp_ports),
service_status: Status::Info,
metrics,
service_type: "firewall_port".to_string(),
});
}
if !udp_ports.is_empty() {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: format!("UDP: {}", udp_ports),
service_status: Status::Info,
metrics,
service_type: "firewall_port".to_string(),
});
}
}
// Create complete service data
let service_data = ServiceData {
name: service_name.clone(),
@@ -836,11 +881,221 @@ impl SystemdCollector {
_ => value, // Assume bytes if no unit
}
}
/// Get VPN external IP by querying through the vpn namespace
fn get_vpn_external_ip(&self) -> Option<String> {
let output = Command::new("timeout")
.args(&[
"5",
"sudo",
"ip",
"netns",
"exec",
"vpn",
"curl",
"-s",
"--max-time",
"4",
"https://ifconfig.me"
])
.output()
.ok()?;
if output.status.success() {
let ip = String::from_utf8_lossy(&output.stdout).trim().to_string();
if !ip.is_empty() && ip.contains('.') {
return Some(ip);
}
}
None
}
/// Get nftables open ports grouped by protocol
/// Returns: (tcp_ports_string, udp_ports_string)
fn get_nftables_open_ports(&self) -> (String, String) {
let output = Command::new("timeout")
.args(&["3", "sudo", "/run/current-system/sw/bin/nft", "list", "ruleset"])
.output();
let output = match output {
Ok(out) if out.status.success() => out,
Ok(out) => {
info!("nft command failed with status: {:?}, stderr: {}",
out.status, String::from_utf8_lossy(&out.stderr));
return (String::new(), String::new());
}
Err(e) => {
info!("Failed to execute nft command: {}", e);
return (String::new(), String::new());
}
};
let output_str = match String::from_utf8(output.stdout) {
Ok(s) => s,
Err(_) => {
info!("Failed to parse nft output as UTF-8");
return (String::new(), String::new());
}
};
let mut tcp_ports = std::collections::HashSet::new();
let mut udp_ports = std::collections::HashSet::new();
// Parse nftables output for WAN incoming accept rules with dport
// Looking for patterns like: tcp dport 22 accept or tcp dport { 22, 80, 443 } accept
// Only include rules in input_wan chain
let mut in_wan_chain = false;
for line in output_str.lines() {
let line = line.trim();
// Track if we're in the input_wan chain
if line.contains("chain input_wan") {
in_wan_chain = true;
continue;
}
// Reset when exiting chain (closing brace) or entering other chains
if line == "}" || (line.starts_with("chain ") && !line.contains("input_wan")) {
in_wan_chain = false;
continue;
}
// Only process rules in input_wan chain
if !in_wan_chain {
continue;
}
// Skip if not an accept rule
if !line.contains("accept") {
continue;
}
// Parse TCP ports
if line.contains("tcp dport") {
for port in self.extract_ports_from_nft_rule(line) {
tcp_ports.insert(port);
}
}
// Parse UDP ports
if line.contains("udp dport") {
for port in self.extract_ports_from_nft_rule(line) {
udp_ports.insert(port);
}
}
}
// Sort and format
let mut tcp_vec: Vec<u16> = tcp_ports.into_iter().collect();
let mut udp_vec: Vec<u16> = udp_ports.into_iter().collect();
tcp_vec.sort();
udp_vec.sort();
let tcp_str = tcp_vec.iter().map(|p| p.to_string()).collect::<Vec<_>>().join(", ");
let udp_str = udp_vec.iter().map(|p| p.to_string()).collect::<Vec<_>>().join(", ");
info!("nftables WAN ports - TCP: '{}', UDP: '{}'", tcp_str, udp_str);
(tcp_str, udp_str)
}
/// Extract port numbers from nftables rule line
/// Returns vector of ports (handles both single ports and sets)
fn extract_ports_from_nft_rule(&self, line: &str) -> Vec<u16> {
let mut ports = Vec::new();
// Pattern: "tcp dport 22 accept" or "tcp dport { 22, 80, 443 } accept"
if let Some(dport_pos) = line.find("dport") {
let after_dport = &line[dport_pos + 5..].trim();
// Handle port sets like { 22, 80, 443 }
if after_dport.starts_with('{') {
if let Some(end_brace) = after_dport.find('}') {
let ports_str = &after_dport[1..end_brace];
// Parse each port in the set
for port_str in ports_str.split(',') {
if let Ok(port) = port_str.trim().parse::<u16>() {
ports.push(port);
}
}
}
} else {
// Single port
if let Some(port_str) = after_dport.split_whitespace().next() {
if let Ok(port) = port_str.parse::<u16>() {
ports.push(port);
}
}
}
}
ports
}
/// Get aggregate qBittorrent torrent statistics
/// Returns: (active_count, download_mbps, upload_mbps)
fn get_qbittorrent_stats(&self) -> Option<(u32, f32, f32)> {
// Query qBittorrent API through VPN namespace
let output = Command::new("timeout")
.args(&[
"5",
"sudo",
"ip",
"netns",
"exec",
"vpn",
"curl",
"-s",
"--max-time",
"4",
"http://localhost:8080/api/v2/torrents/info"
])
.output()
.ok()?;
if !output.status.success() {
return None;
}
let output_str = String::from_utf8_lossy(&output.stdout);
let torrents: Vec<serde_json::Value> = serde_json::from_str(&output_str).ok()?;
let mut active_count = 0u32;
let mut total_download_bps = 0.0f64;
let mut total_upload_bps = 0.0f64;
for torrent in torrents {
let state = torrent["state"].as_str().unwrap_or("");
let dlspeed = torrent["dlspeed"].as_f64().unwrap_or(0.0);
let upspeed = torrent["upspeed"].as_f64().unwrap_or(0.0);
// States: downloading, uploading, stalledDL, stalledUP, queuedDL, queuedUP, pausedDL, pausedUP
// Count as active if downloading or uploading (seeding)
if state.contains("downloading") || state.contains("uploading") ||
state == "stalledDL" || state == "stalledUP" {
active_count += 1;
}
total_download_bps += dlspeed;
total_upload_bps += upspeed;
}
// qBittorrent returns bytes/s, convert to MB/s
let download_mbps = (total_download_bps / 1024.0 / 1024.0) as f32;
let upload_mbps = (total_upload_bps / 1024.0 / 1024.0) as f32;
Some((active_count, download_mbps, upload_mbps))
}
}
#[async_trait]
impl Collector for SystemdCollector {
async fn collect_structured(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
// Clear services to prevent duplicates when updating cached data
agent_data.services.clear();
// Use cached complete data if available and fresh
if let Some(cached_complete_services) = self.get_cached_complete_services() {
for service_data in cached_complete_services {

View File

@@ -1,6 +1,6 @@
[package]
name = "cm-dashboard"
version = "0.1.220"
version = "0.1.255"
edition = "2021"
[dependencies]

View File

@@ -110,14 +110,6 @@ impl TuiApp {
host_widgets.system_widget.update_from_agent_data(agent_data);
host_widgets.services_widget.update_from_agent_data(agent_data);
// Update ZMQ stats
if let Some(zmq_stats) = metric_store.get_zmq_stats(&hostname) {
host_widgets.system_widget.update_zmq_stats(
zmq_stats.packets_received,
zmq_stats.last_packet_age_secs
);
}
host_widgets.last_update = Some(Instant::now());
}
}

View File

@@ -142,6 +142,7 @@ impl Theme {
/// Get color for status level
pub fn status_color(status: Status) -> Color {
match status {
Status::Info => Self::muted_text(), // Gray for informational data
Status::Ok => Self::success(),
Status::Inactive => Self::muted_text(), // Gray for inactive services in service list
Status::Pending => Self::highlight(), // Blue for pending
@@ -240,6 +241,7 @@ impl StatusIcons {
/// Get status icon symbol
pub fn get_icon(status: Status) -> &'static str {
match status {
Status::Info => "", // No icon for informational data
Status::Ok => "",
Status::Inactive => "", // Empty circle for inactive services
Status::Pending => "", // Hollow circle for pending
@@ -254,6 +256,7 @@ impl StatusIcons {
pub fn create_status_spans(status: Status, text: &str) -> Vec<ratatui::text::Span<'static>> {
let icon = Self::get_icon(status);
let status_color = match status {
Status::Info => Theme::muted_text(), // Gray for info
Status::Ok => Theme::success(), // Green
Status::Inactive => Theme::muted_text(), // Gray for inactive services
Status::Pending => Theme::highlight(), // Blue

View File

@@ -11,6 +11,74 @@ use tracing::debug;
use crate::ui::theme::{Components, StatusIcons, Theme, Typography};
use ratatui::style::Style;
/// Column visibility configuration based on terminal width
#[derive(Debug, Clone, Copy)]
struct ColumnVisibility {
show_name: bool,
show_status: bool,
show_ram: bool,
show_uptime: bool,
show_restarts: bool,
}
impl ColumnVisibility {
/// Calculate actual width needed for all columns
const NAME_WIDTH: u16 = 23;
const STATUS_WIDTH: u16 = 10;
const RAM_WIDTH: u16 = 8;
const UPTIME_WIDTH: u16 = 8;
const RESTARTS_WIDTH: u16 = 5;
const COLUMN_SPACING: u16 = 1; // Space between columns
/// Determine which columns to show based on available width
/// Priority order: Name > Status > RAM > Uptime > Restarts
fn from_width(width: u16) -> Self {
// Calculate cumulative widths for each configuration
let minimal = Self::NAME_WIDTH + Self::COLUMN_SPACING + Self::STATUS_WIDTH; // 34
let with_ram = minimal + Self::COLUMN_SPACING + Self::RAM_WIDTH; // 43
let with_uptime = with_ram + Self::COLUMN_SPACING + Self::UPTIME_WIDTH; // 52
let full = with_uptime + Self::COLUMN_SPACING + Self::RESTARTS_WIDTH; // 58
if width >= full {
// Show all columns
Self {
show_name: true,
show_status: true,
show_ram: true,
show_uptime: true,
show_restarts: true,
}
} else if width >= with_uptime {
// Hide restarts
Self {
show_name: true,
show_status: true,
show_ram: true,
show_uptime: true,
show_restarts: false,
}
} else if width >= with_ram {
// Hide uptime and restarts
Self {
show_name: true,
show_status: true,
show_ram: true,
show_uptime: false,
show_restarts: false,
}
} else {
// Minimal: Name + Status only
Self {
show_name: true,
show_status: true,
show_ram: false,
show_uptime: false,
show_restarts: false,
}
}
}
}
/// Services widget displaying hierarchical systemd service statuses
#[derive(Clone)]
pub struct ServicesWidget {
@@ -76,16 +144,19 @@ impl ServicesWidget {
}
/// Format parent service line - returns text without icon for span formatting
fn format_parent_service_line(&self, name: &str, info: &ServiceInfo) -> String {
// Truncate long service names to fit layout (account for icon space)
let short_name = if name.len() > 22 {
format!("{}...", &name[..19])
fn format_parent_service_line(&self, name: &str, info: &ServiceInfo, columns: ColumnVisibility) -> String {
// Truncate long service names to fit layout
// NAME_WIDTH - 3 chars for "..." = max displayable chars
let max_name_len = (ColumnVisibility::NAME_WIDTH - 3) as usize;
let short_name = if name.len() > max_name_len {
format!("{}...", &name[..max_name_len.saturating_sub(3)])
} else {
name.to_string()
};
// Convert Status enum to display text
let status_str = match info.widget_status {
Status::Info => "", // Shouldn't happen for parent services
Status::Ok => "active",
Status::Inactive => "inactive",
Status::Critical => "failed",
@@ -129,10 +200,25 @@ impl ServicesWidget {
}
});
format!(
"{:<23} {:<10} {:<8} {:<8} {:<5}",
short_name, status_str, memory_str, uptime_str, restart_str
)
// Build format string based on column visibility
let mut parts = Vec::new();
if columns.show_name {
parts.push(format!("{:<width$}", short_name, width = ColumnVisibility::NAME_WIDTH as usize));
}
if columns.show_status {
parts.push(format!("{:<width$}", status_str, width = ColumnVisibility::STATUS_WIDTH as usize));
}
if columns.show_ram {
parts.push(format!("{:<width$}", memory_str, width = ColumnVisibility::RAM_WIDTH as usize));
}
if columns.show_uptime {
parts.push(format!("{:<width$}", uptime_str, width = ColumnVisibility::UPTIME_WIDTH as usize));
}
if columns.show_restarts {
parts.push(format!("{:<width$}", restart_str, width = ColumnVisibility::RESTARTS_WIDTH as usize));
}
parts.join(" ")
}
@@ -144,9 +230,12 @@ impl ServicesWidget {
info: &ServiceInfo,
is_last: bool,
) -> Vec<ratatui::text::Span<'static>> {
// Informational sub-services (Status::Info) can use more width since they don't show columns
let max_width = if info.widget_status == Status::Info { 50 } else { 18 };
// Truncate long sub-service names to fit layout (accounting for indentation)
let short_name = if name.len() > 18 {
format!("{}...", &name[..15])
let short_name = if name.len() > max_width {
format!("{}...", &name[..(max_width.saturating_sub(3))])
} else {
name.to_string()
};
@@ -154,6 +243,7 @@ impl ServicesWidget {
// Get status icon and text
let icon = StatusIcons::get_icon(info.widget_status);
let status_color = match info.widget_status {
Status::Info => Theme::muted_text(),
Status::Ok => Theme::success(),
Status::Inactive => Theme::muted_text(),
Status::Pending => Theme::highlight(),
@@ -174,6 +264,7 @@ impl ServicesWidget {
} else {
// Convert Status enum to display text for sub-services
match info.widget_status {
Status::Info => "",
Status::Ok => "active",
Status::Inactive => "inactive",
Status::Critical => "failed",
@@ -185,34 +276,34 @@ impl ServicesWidget {
};
let tree_symbol = if is_last { "└─" } else { "├─" };
// Docker images use docker whale icon
if info.service_type == "image" {
vec![
if info.widget_status == Status::Info {
// Informational data - no status icon, show metrics if available
let mut spans = vec![
// Indentation and tree prefix
ratatui::text::Span::styled(
format!(" {} ", tree_symbol),
Typography::tree(),
),
// Docker icon (simple character for performance)
// Service name (no icon) - no fixed width padding for Info status
ratatui::text::Span::styled(
"D ".to_string(),
Style::default().fg(Theme::highlight()).bg(Theme::background()),
),
// Service name
ratatui::text::Span::styled(
format!("{:<18} ", short_name),
short_name,
Style::default()
.fg(Theme::secondary_text())
.bg(Theme::background()),
),
// Status/metrics text
ratatui::text::Span::styled(
];
// Add metrics if available (e.g., Docker image size)
if !status_str.is_empty() {
spans.push(ratatui::text::Span::styled(
status_str,
Style::default()
.fg(Theme::secondary_text())
.bg(Theme::background()),
),
]
));
}
spans
} else {
vec![
// Indentation and tree prefix
@@ -476,11 +567,28 @@ impl ServicesWidget {
.constraints([Constraint::Length(1), Constraint::Min(0)])
.split(inner_area);
// Header
let header = format!(
"{:<25} {:<10} {:<8} {:<8} {:<5}",
"Service:", "Status:", "RAM:", "Uptime:", "↻:"
);
// Determine which columns to show based on available width
let columns = ColumnVisibility::from_width(inner_area.width);
// Build header based on visible columns
let mut header_parts = Vec::new();
if columns.show_name {
header_parts.push(format!("{:<width$}", "Service:", width = ColumnVisibility::NAME_WIDTH as usize));
}
if columns.show_status {
header_parts.push(format!("{:<width$}", "Status:", width = ColumnVisibility::STATUS_WIDTH as usize));
}
if columns.show_ram {
header_parts.push(format!("{:<width$}", "RAM:", width = ColumnVisibility::RAM_WIDTH as usize));
}
if columns.show_uptime {
header_parts.push(format!("{:<width$}", "Uptime:", width = ColumnVisibility::UPTIME_WIDTH as usize));
}
if columns.show_restarts {
header_parts.push(format!("{:<width$}", "↻:", width = ColumnVisibility::RESTARTS_WIDTH as usize));
}
let header = header_parts.join(" ");
let header_para = Paragraph::new(header).style(Typography::muted());
frame.render_widget(header_para, content_chunks[0]);
@@ -492,11 +600,11 @@ impl ServicesWidget {
}
// Render the services list
self.render_services(frame, content_chunks[1], is_focused);
self.render_services(frame, content_chunks[1], is_focused, columns);
}
/// Render services list
fn render_services(&mut self, frame: &mut Frame, area: Rect, is_focused: bool) {
fn render_services(&mut self, frame: &mut Frame, area: Rect, is_focused: bool, columns: ColumnVisibility) {
// Build hierarchical service list for display
let mut display_lines: Vec<(String, Status, bool, Option<(ServiceInfo, bool)>)> = Vec::new();
@@ -506,7 +614,7 @@ impl ServicesWidget {
for (parent_name, parent_info) in parent_services {
// Add parent service line
let parent_line = self.format_parent_service_line(parent_name, parent_info);
let parent_line = self.format_parent_service_line(parent_name, parent_info, columns);
display_lines.push((parent_line, parent_info.widget_status, false, None));
// Add sub-services for this parent (if any)

View File

@@ -15,10 +15,6 @@ pub struct SystemWidget {
nixos_build: Option<String>,
agent_hash: Option<String>,
// ZMQ communication stats
zmq_packets_received: Option<u64>,
zmq_last_packet_age: Option<f64>,
// Network interfaces
network_interfaces: Vec<cm_dashboard_shared::NetworkInterfaceData>,
@@ -27,6 +23,8 @@ pub struct SystemWidget {
cpu_load_5min: Option<f32>,
cpu_load_15min: Option<f32>,
cpu_cstates: Vec<cm_dashboard_shared::CStateInfo>,
cpu_model_name: Option<String>,
cpu_core_count: Option<u32>,
cpu_status: Status,
// Memory metrics
@@ -90,13 +88,13 @@ impl SystemWidget {
Self {
nixos_build: None,
agent_hash: None,
zmq_packets_received: None,
zmq_last_packet_age: None,
network_interfaces: Vec::new(),
cpu_load_1min: None,
cpu_load_5min: None,
cpu_load_15min: None,
cpu_cstates: Vec::new(),
cpu_model_name: None,
cpu_core_count: None,
cpu_status: Status::Unknown,
memory_usage_percent: None,
memory_used_gb: None,
@@ -155,12 +153,6 @@ impl SystemWidget {
pub fn _get_agent_hash(&self) -> Option<&String> {
self.agent_hash.as_ref()
}
/// Update ZMQ communication statistics
pub fn update_zmq_stats(&mut self, packets_received: u64, last_packet_age_secs: f64) {
self.zmq_packets_received = Some(packets_received);
self.zmq_last_packet_age = Some(last_packet_age_secs);
}
}
use super::Widget;
@@ -184,6 +176,8 @@ impl Widget for SystemWidget {
self.cpu_load_5min = Some(cpu.load_5min);
self.cpu_load_15min = Some(cpu.load_15min);
self.cpu_cstates = cpu.cstates.clone();
self.cpu_model_name = cpu.model_name.clone();
self.cpu_core_count = cpu.core_count;
self.cpu_status = Status::Ok;
// Extract memory data directly
@@ -805,18 +799,6 @@ impl SystemWidget {
Span::styled(format!("Agent: {}", agent_version_text), Typography::secondary())
]));
// ZMQ communication stats
if let (Some(packets), Some(age)) = (self.zmq_packets_received, self.zmq_last_packet_age) {
let age_text = if age < 1.0 {
format!("{:.0}ms ago", age * 1000.0)
} else {
format!("{:.1}s ago", age)
};
lines.push(Line::from(vec![
Span::styled(format!("ZMQ: {} pkts, last {}", packets, age_text), Typography::secondary())
]));
}
// CPU section
lines.push(Line::from(vec![
Span::styled("CPU:", Typography::widget_title())
@@ -830,11 +812,31 @@ impl SystemWidget {
lines.push(Line::from(cpu_spans));
let cstate_text = self.format_cpu_cstate();
let has_cpu_info = self.cpu_model_name.is_some() || self.cpu_core_count.is_some();
let cstate_tree = if has_cpu_info { " ├─ " } else { " └─ " };
lines.push(Line::from(vec![
Span::styled(" └─ ", Typography::tree()),
Span::styled(cstate_tree, Typography::tree()),
Span::styled(format!("C-state: {}", cstate_text), Typography::secondary())
]));
// CPU model and core count (if available)
if let (Some(model), Some(cores)) = (&self.cpu_model_name, self.cpu_core_count) {
lines.push(Line::from(vec![
Span::styled(" └─ ", Typography::tree()),
Span::styled(format!("{} ({} cores)", model, cores), Typography::secondary())
]));
} else if let Some(model) = &self.cpu_model_name {
lines.push(Line::from(vec![
Span::styled(" └─ ", Typography::tree()),
Span::styled(model.clone(), Typography::secondary())
]));
} else if let Some(cores) = self.cpu_core_count {
lines.push(Line::from(vec![
Span::styled(" └─ ", Typography::tree()),
Span::styled(format!("{} cores", cores), Typography::secondary())
]));
}
// RAM section
lines.push(Line::from(vec![
Span::styled("RAM:", Typography::widget_title())

View File

@@ -1,6 +1,6 @@
[package]
name = "cm-dashboard-shared"
version = "0.1.220"
version = "0.1.255"
edition = "2021"
[dependencies]

View File

@@ -57,6 +57,11 @@ pub struct CpuData {
pub temperature_celsius: Option<f32>,
pub load_status: Status,
pub temperature_status: Status,
// Static CPU information (collected once at startup)
#[serde(skip_serializing_if = "Option::is_none")]
pub model_name: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub core_count: Option<u32>,
}
/// Memory monitoring data
@@ -219,6 +224,8 @@ impl AgentData {
temperature_celsius: None,
load_status: Status::Unknown,
temperature_status: Status::Unknown,
model_name: None,
core_count: None,
},
memory: MemoryData {
usage_percent: 0.0,

View File

@@ -82,12 +82,13 @@ impl MetricValue {
/// Health status for metrics
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub enum Status {
Inactive, // Lowest priority
Unknown, //
Offline, //
Pending, //
Ok, // 5th place - good status has higher priority than unknown states
Warning, //
Info, // Lowest priority - informational data with no status (no icon)
Inactive, //
Unknown, //
Offline, //
Pending, //
Ok, // Good status has higher priority than unknown states
Warning, //
Critical, // Highest priority
}
@@ -223,6 +224,17 @@ impl HysteresisThresholds {
Status::Ok
}
}
Status::Info => {
// Informational data shouldn't be used with hysteresis calculations
// Treat like Unknown if it somehow ends up here
if value >= self.critical_high {
Status::Critical
} else if value >= self.warning_high {
Status::Warning
} else {
Status::Ok
}
}
}
}
}