Compare commits

...

95 Commits

Author SHA1 Message Date
41e1be451e Display selected host with brackets in title bar
- Change nftables port labels to lowercase 'wan tcp:' and 'wan udp:'
- Add brackets around selected host in title bar for clarity
2025-12-04 18:47:30 +01:00
2863526ec8 Remove timeout wrapper from nft command
Run sudo nft directly without timeout wrapper to preserve capabilities.
The timeout -> sudo chain was preventing nft from accessing netlink
with proper permissions.

- Change from 'timeout 3 sudo nft' to 'sudo nft'
- Allows CAP_NET_ADMIN to pass through correctly
- Update version to v0.1.256
2025-12-04 16:10:25 +01:00
5da9213da6 Fix nftables query by using full path to nft binary
Use the explicit path /run/current-system/sw/bin/nft to match the sudoers
configuration. Previously, 'nft' without a path resolved to the wrong
location and failed the permission check.

- Change from 'sudo nft' to 'sudo /run/current-system/sw/bin/nft'
- Matches sudoers entry for passwordless execution
- Update version to v0.1.255
2025-12-04 16:02:14 +01:00
a7755f02ae Use 5-minute load average for CPU status calculation
Change CPU load status to use load_5min instead of load_1min for more
stable status reporting. 5-minute average smooths out temporary spikes
and provides better indication of sustained load.

- Change load_status calculation from load_1min to load_5min
- Add comment explaining use of 5-minute average for stability
- Update version to v0.1.254
2025-12-04 15:59:11 +01:00
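
For context, the 1-minute vs. 5-minute choice comes down to which field of /proc/loadavg feeds the status calculation. A minimal sketch, with illustrative threshold values and names (not the agent's actual code):

```rust
use std::fs;

#[derive(Debug)]
enum LoadStatus {
    Ok,
    Warning,
    Critical,
}

/// Classify CPU load from the 5-minute average, the second field of
/// /proc/loadavg ("0.52 0.58 0.59 1/389 12345" -> 1m, 5m, 15m, ...).
/// The 5-minute value smooths out short spikes that the 1-minute value
/// would report as sustained load.
fn load_status(num_cores: f64) -> std::io::Result<LoadStatus> {
    let content = fs::read_to_string("/proc/loadavg")?;
    let load_5min: f64 = content
        .split_whitespace()
        .nth(1)
        .and_then(|s| s.parse().ok())
        .unwrap_or(0.0);
    // Illustrative thresholds, scaled by core count.
    let ratio = load_5min / num_cores;
    Ok(if ratio < 0.7 {
        LoadStatus::Ok
    } else if ratio < 1.0 {
        LoadStatus::Warning
    } else {
        LoadStatus::Critical
    })
}
```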
b886fb2045 Add detailed error logging for nft command failures
- Log exit status code when nft command fails
- Log stderr output to diagnose issues
- Distinguish between command execution failure and non-zero exit
- Update version to v0.1.253
2025-12-04 15:53:12 +01:00
cfb02e1763 Change nftables logging from debug to info
- Add info to tracing imports
- Change debug!() calls to info!() for visibility in logs
- Update version to v0.1.252
2025-12-04 15:48:34 +01:00
5b53ca3d52 Move VPN route display from vpn-connection to vpn-download
Consolidate VPN-related information under openvpn-vpn-download service.
Now shows both VPN route and torrent statistics as sub-services.

- Remove route from openvpn-vpn-connection
- Add route to openvpn-vpn-download (displayed first)
- Torrent stats displayed second
- Update version to v0.1.251
2025-12-04 15:36:34 +01:00
92a30913b4 Add debug logging and fix chain end detection for nftables
- Detect chain end with closing brace
- Add debug logging to trace nft command execution and port collection
- Update version to v0.1.250
2025-12-04 15:33:43 +01:00
a288a8ef9a Fix nftables parser to use input_wan chain
Change nftables port parser to specifically look for 'chain input_wan'
instead of any chain with 'input' in the name. This ensures we only
collect WAN/external ports, not LAN or other internal chains.

- Look for 'chain input_wan' specifically
- Remove internal network filters (no longer needed)
- Update version to v0.1.249
2025-12-04 15:26:20 +01:00
c65d596099 Add sudo for nftables query command
Update nftables port collector to use sudo when querying ruleset.
Requires corresponding sudoers configuration in NixOS.

- Change nft command to use sudo
- Update version to v0.1.248
2025-12-04 13:40:31 +01:00
98ed17947d Add nftables WAN open ports as sub-services
Display open external ports from nftables firewall rules as sub-services
grouped by protocol. Only shows WAN incoming ports by filtering input chain
rules and excluding private network sources.

- Parse nftables ruleset for accept rules with dport in input chain
- Filter out internal network traffic (192.168.x, 10.x, 172.16.x, loopback)
- Extract single ports and port sets from rules
- Group and display as "TCP: 22, 80, 443" and "UDP: 53, 123"
- Update version to v0.1.247
2025-12-04 12:50:10 +01:00
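
A sketch of the dport extraction, combined with the input_wan scoping that commit a288a8ef9a above later settled on (rule shapes simplified to "tcp dport 22 accept" and "tcp dport { 80, 443 } accept"; the private-source filtering described here was subsequently dropped and is omitted):

```rust
/// Walk `nft list ruleset` output line by line, collecting `dport` values
/// only between `chain input_wan {` and its closing brace, grouped by
/// protocol. Illustrative parser, not the agent's actual code.
fn parse_wan_ports(ruleset: &str) -> (Vec<u16>, Vec<u16>) {
    let (mut tcp, mut udp) = (Vec::new(), Vec::new());
    let mut in_chain = false;
    for line in ruleset.lines() {
        let line = line.trim();
        if line.starts_with("chain input_wan") {
            in_chain = true;
            continue;
        }
        if in_chain && line == "}" {
            break; // closing brace ends the chain
        }
        if in_chain && line.contains("dport") && line.contains("accept") {
            let target = if line.starts_with("tcp") {
                &mut tcp
            } else if line.starts_with("udp") {
                &mut udp
            } else {
                continue;
            };
            // Take everything between "dport" and "accept", then split on
            // non-digit characters so both single ports and port sets parse.
            if let (Some(start), Some(end)) = (line.find("dport"), line.find("accept")) {
                for tok in line[start + 5..end].split(|c: char| !c.is_ascii_digit()) {
                    if let Ok(port) = tok.parse::<u16>() {
                        target.push(port);
                    }
                }
            }
        }
    }
    (tcp, udp)
}
```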
1cb6abf58a Replace Transmission with qBittorrent for torrent statistics
Update collector to use qBittorrent Web API instead of Transmission RPC.
Query qBittorrent through VPN namespace using existing passwordless sudo
permissions for ip netns exec commands.

- Change service name from transmission-vpn to openvpn-vpn-download
- Replace get_transmission_stats() with get_qbittorrent_stats()
- Use curl through VPN namespace to access qBittorrent API at localhost:8080
- Parse qBittorrent JSON response for state, dlspeed, upspeed
- Count active torrents (downloading, uploading, stalledDL, stalledUP)
- Update version to v0.1.246
2025-12-02 23:31:56 +01:00
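
The query path described above reduces to one subprocess plus JSON decoding. A sketch assuming the serde_json crate, the standard qBittorrent Web API listing endpoint /api/v2/torrents/info, and passwordless sudo for ip netns exec as the commit notes:

```rust
use std::process::Command;

/// Query qBittorrent through the "vpn" network namespace and aggregate
/// torrent stats. Returns (active count, download B/s, upload B/s); error
/// handling is reduced to Option for brevity.
fn qbittorrent_stats() -> Option<(usize, i64, i64)> {
    let output = Command::new("sudo")
        .args([
            "ip", "netns", "exec", "vpn", "curl", "-s",
            "http://localhost:8080/api/v2/torrents/info",
        ])
        .output()
        .ok()?;
    let torrents: Vec<serde_json::Value> = serde_json::from_slice(&output.stdout).ok()?;
    // Active states per the commit: downloading, uploading, stalledDL, stalledUP.
    let active_states = ["downloading", "uploading", "stalledDL", "stalledUP"];
    let active = torrents
        .iter()
        .filter(|t| t["state"].as_str().map_or(false, |s| active_states.contains(&s)))
        .count();
    // Sum per-torrent speeds (bytes/sec) across all torrents.
    let dl: i64 = torrents.iter().filter_map(|t| t["dlspeed"].as_i64()).sum();
    let ul: i64 = torrents.iter().filter_map(|t| t["upspeed"].as_i64()).sum();
    Some((active, dl, ul))
}
```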
477724b4f4 Unify sub-service display formatting for Info status
Change docker images to use name field for all data instead of metrics,
matching the pattern used by torrent stats and VPN routes. Increase display
width for Status::Info sub-services from 18 to 50 characters to accommodate
longer informational text without truncation.

- Docker images now show: "image-name size: 994.0 MB" in name field
- Torrent stats show: "17 active, ↓ 2.5 MB/s, ↑ 1.2 MB/s" in name field
- Remove fixed-width padding for Info status sub-services
- Update version to v0.1.245
2025-12-02 11:36:27 +01:00
7a3ed17952 Add torrent statistics to transmission-vpn service
Implement aggregate torrent statistics display for transmission-vpn service
via Transmission RPC API. Shows active torrent count and total download/upload
speeds. Change VPN route label from "ip:" to "route:" for clarity.

- Add get_transmission_stats() method to query Transmission RPC
- Display format: "X active, ↓ MB/s, ↑ MB/s"
- Update version to v0.1.244
2025-12-02 11:12:14 +01:00
7e1962a168 Remove ZMQ debug packet counter from display
- Remove ZMQ stats display from system widget
- Remove update_zmq_stats method
- Remove zmq_packets_received and zmq_last_packet_age fields
- Clean up display to only show essential information
2025-12-01 19:42:05 +01:00
5bb7d6cf57 Fix CPU model extraction for newer Intel generations
- Handle 12th/13th Gen Intel format (e.g., "12th Gen Intel(R) Core(TM) i7-12700K")
- Extract full model including suffix (i7-12700K instead of truncated name)
- Simplify pattern matching logic
- Reduce fallback truncation to 15 chars
2025-12-01 19:35:03 +01:00
7a0dc27846 Extract CPU model number only to save display space
- Parse Intel models (i3/i5/i7/i9-XXXX) from full name
- Parse AMD Ryzen models (Ryzen X XXXX) from full name
- Display format: "i7-9700 (8 cores)" instead of full CPU name
- Reduces CPU section width significantly
2025-12-01 19:23:26 +01:00
5bc250a738 Add static CPU model and core count to CPU collector
- Collect CPU model name and core count from /proc/cpuinfo
- Only collect once at startup (check if fields already set)
- Display below C-state row in dashboard CPU section
- Move CPU info collection from NixOS collector to CPU collector
2025-12-01 19:15:57 +01:00
5c3ac8b15e Remove Docker image icon and use Status::Info
Docker images now use Status::Info like VPN IP.
No "D" prefix, no status icon - just name and metrics.
All informational sub-services handled consistently.

Version: v0.1.239
2025-12-01 18:45:28 +01:00
bdfff942f7 Remove VPN external IP logging
Clean up debug logging for production.

Version: v0.1.238
2025-12-01 15:16:34 +01:00
47ab1e387d Add Status::Info for informational sub-services
Agent uses Status enum to control display:
- Status::Info: no icon, no status text (VPN IP)
- Other statuses: icon + text (containers, nginx sites)

Dashboard checks status, no hardcoded service_type exceptions.

Version: v0.1.237
2025-12-01 15:11:16 +01:00
966ba27b1e Remove status icon from VPN IP and change to lowercase
Change display from "IP: X.X.X.X" to "ip: X.X.X.X".
Remove status icon for vpn_route service type.

Version: v0.1.236
2025-12-01 14:40:21 +01:00
6c6c9144bd Add info-level logging for VPN external IP debugging
Change debug to info/warn logging to diagnose VPN IP query issues.
Use an exact service name match instead of a substring match.

Version: v0.1.235
2025-12-01 14:30:25 +01:00
3fdcec8047 Add sudo for VPN namespace access
Use sudo to access vpn namespace for external IP query.
Requires corresponding sudo permission in NixOS config.

Version: v0.1.234
2025-12-01 14:14:41 +01:00
1fcaf4a670 Fix VPN namespace name for external IP query
Use correct namespace name 'vpn' instead of 'openvpn-namespace'.

Version: v0.1.233
2025-12-01 14:09:07 +01:00
885e19f7fd Add external IP display for OpenVPN connections
Display VPN external IP as sub-service under openvpn-vpn-connection.
Query external IP through openvpn-namespace using curl ifconfig.me.

Version: v0.1.232
2025-12-01 13:49:54 +01:00
a7b69b8ae7 Fix duplicate data by clearing vectors before collection
Collectors now clear their target vectors (tmpfs, drives, pools, services)
before populating to prevent duplicates when updating cached AgentData.

- Clear tmpfs list in memory collector
- Clear drives and pools in disk collector
- Clear services in systemd collector
- Bump version to v0.1.231
2025-12-01 13:21:26 +01:00
2d290f40b2 Fix data caching to prevent empty broadcasts
CRITICAL FIX: Collectors now update cached AgentData instead of
creating new empty data each cycle. This prevents the dashboard
from seeing flashing/disappearing data.

- Add cached_agent_data field to Agent struct
- Update cached data when collectors run
- Always broadcast the full cached data every 2s
- Only individual collectors respect their intervals
- Bump version to v0.1.230
2025-12-01 13:14:53 +01:00
ad1fcaa27b Fix collector interval timing to prevent excessive SMART checks
Collectors now respect their configured intervals instead of running
every transmission cycle (2s). This prevents disk SMART checks from
running every 2 seconds, which was causing constant disk activity.

- Add TimedCollector wrapper with interval tracking
- Only collect from collectors whose interval has elapsed
- Disk collector now properly runs every 300s instead of every 2s
- Bump version to v0.1.229
2025-12-01 13:03:45 +01:00
60ab4d4f9e Fix service panel column width calculation
Replace hardcoded terminal width thresholds with dynamic calculation
based on actual column requirements. Column visibility now adapts
correctly at 58, 52, 43, and 34 character widths instead of the
previous arbitrary 80, 60, 45 thresholds.

- Add width constants for each column (NAME=23, STATUS=10, etc)
- Calculate cumulative widths dynamically for each layout tier
- Ensure header and data formatting use consistent width values
- Fix service name truncation to respect calculated column width
2025-11-30 12:09:44 +01:00
67034c84b9 Add responsive column visibility to service panel
Service panel now dynamically shows/hides columns based on terminal width:
- ≥80 chars: All columns (Name, Status, RAM, Uptime, Restarts)
- ≥60 chars: Hide Restarts only
- ≥45 chars: Hide Uptime and Restarts
- <45 chars: Minimal (Name and Status only)

Improves dashboard usability on smaller terminal sizes.
2025-11-30 10:50:08 +01:00
c62c7fa698 Remove debug logging from disk collector
Removed all debug! statements from disk collector to reduce log noise.

Bump version to v0.1.226
2025-11-30 00:44:38 +01:00
0b1d8c0a73 Fix Data_3 showing as unknown by handling smartctl warning exit codes
Root cause: sda's temperature exceeded threshold in the past, causing
smartctl to return exit code 32 (warning: "Attributes have been <= threshold
in the past"). The agent checked output.status.success() and rejected the
entire output as failed, even though the data (serial, temperature, health)
was perfectly valid.

Smartctl exit codes are bit flags for informational warnings:
- Exit 0: No warnings
- Exit 32 (bit 5): Attributes were at/below threshold in past
- Exit 64 (bit 6): Error log has entries
- etc.

The output data is valid regardless of these warning flags.

Solution: Parse output as long as it's not empty, ignore exit code.
Only return UNKNOWN if output is actually empty (command truly failed).

Result: Data_3 will now show "ZDZ4VE0B T: 31°C" instead of "? Data_3: sda"

Bump version to v0.1.225
2025-11-30 00:35:19 +01:00
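
The fix reduces to a single guard. A sketch (function name illustrative):

```rust
/// smartctl's exit status is a bit field: bit 5 (value 32) means attributes
/// were at/below threshold in the past, bit 6 (value 64) means the error log
/// has entries, and so on. The output on stdout is valid regardless of these
/// warning bits, so ignore the exit code and treat only empty output as a
/// real failure (drive reported as UNKNOWN).
fn smartctl_output_usable(output: &std::process::Output) -> bool {
    !output.stdout.is_empty()
}
```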
c77aa6eaaa Fix Data_3 timeout by removing sequential SMART during pool detection
Root cause: SMART data was collected TWICE:
1. Sequential collection during pool detection in get_drive_info_for_path()
   using problematic tokio::task::block_in_place() nesting
2. Parallel collection in get_smart_data_for_drives() (v0.1.223)

The sequential collection happened FIRST during pool detection, causing
sda (Data_3) to timeout due to:
- Bad async nesting: block_in_place() wrapping block_on()
- Sequential execution causing runtime issues
- sda being third in sequence, runtime degraded by then

Solution: Remove SMART collection from get_drive_info_for_path().
Pool drive temperatures are populated later from the parallel SMART
collection which properly uses futures::join_all.

Benefits:
- Eliminates problematic async nesting
- All SMART queries happen once in parallel only
- sda/Data_3 should now show serial (ZDZ4VE0B) and temperature

Bump version to v0.1.224
2025-11-30 00:14:25 +01:00
8a0e68f0e3 Fix Data_3 timeout by parallelizing SMART collection
Root cause: SMART data was collected sequentially, one drive at a time.
With 5 drives taking ~500ms each, total collection time was 2.5+ seconds.
When disk collector runs every 1 second, this caused overlapping
collections creating resource contention. The last drive (sda/Data_3)
would timeout due to the drive being accessed by the previous collection.

Solution: Query all drives in parallel using futures::join_all. Now all
drives get their SMART data collected simultaneously with independent
3-second timeouts, eliminating contention and reducing total collection
time from 2.5+ seconds to ~500ms (the slowest single drive).

Benefits:
- All drives complete in ~500ms instead of 2.5+ seconds
- No overlapping collections causing resource contention
- Each drive gets full 3-second timeout window
- sda/Data_3 should now show temperature and serial number

Bump version to v0.1.223
2025-11-29 23:51:43 +01:00
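
A minimal sketch of the parallel collection under the commit's assumptions (3-second per-drive timeout, futures::join_all from the futures crate; smartctl typically needs root, which is elided here):

```rust
use std::time::Duration;
use tokio::process::Command;

/// One future per drive, each with its own 3-second timeout, all awaited
/// together. Total wall time becomes the slowest single drive instead of
/// the sum of all drives.
async fn collect_smart(drives: &[String]) -> Vec<Option<Vec<u8>>> {
    let queries = drives.iter().map(|dev| async move {
        let mut cmd = Command::new("smartctl");
        cmd.args(["-a", dev.as_str()]);
        match tokio::time::timeout(Duration::from_secs(3), cmd.output()).await {
            Ok(Ok(out)) => Some(out.stdout),
            // Timeout or spawn failure: this drive is reported as unknown.
            _ => None,
        }
    });
    futures::future::join_all(queries).await
}
```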
2d653fe9ae Fix empty Storage section by configuring stdio pipes
Root cause: run_command_with_timeout() was calling cmd.spawn() without
configuring stdout/stderr pipes. This caused command output to go to
journald instead of being captured by wait_with_output(). The disk
collector received empty output and failed silently.

Solution: Configure stdout(Stdio::piped()) and stderr(Stdio::piped())
before spawning commands. This ensures wait_with_output() can properly
capture command output.

Fixes: Empty Storage section, lsblk output appearing in journald
Bump version to v0.1.222
2025-11-29 23:25:17 +01:00
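
A sketch of the corrected spawn path, assuming tokio's process API (which the previous commit introduced):

```rust
use std::process::Stdio;
use tokio::process::Command;

/// Without explicit Stdio::piped(), a spawned child inherits the agent's
/// stdout/stderr (journald under systemd) and wait_with_output() captures
/// nothing. Piping both streams before spawn makes the output visible.
async fn run_captured(program: &str, args: &[&str]) -> std::io::Result<std::process::Output> {
    let child = Command::new(program)
        .args(args)
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()?;
    child.wait_with_output().await
}
```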
caba78004e Fix empty Storage section by properly aliasing command types
v0.1.220 broke the disk collector by changing the import from
std::process::Command to tokio::process::Command, while lines 193 and
767 still explicitly used std::process::Command::new(), which silently failed.

Solution: Import both as aliases (TokioCommand/StdCommand) and use
appropriate type for each operation - async commands use TokioCommand
with run_command_with_timeout, sync commands use StdCommand with
system timeout wrapper.

Fixes: Empty Storage section after v0.1.220 deployment
Bump version to v0.1.221
2025-11-29 21:29:33 +01:00
77bf08a978 Fix blocking smartctl commands with proper async/timeout handling
- Changed disk collector to use tokio::process::Command instead of std::process::Command
- Updated run_command_with_timeout to properly kill processes on timeout
- Fixes issue where smartctl hangs on problematic drives (/dev/sda) freezing entire agent
- Timeout now force-kills hung processes using kill -9, preventing orphaned smartctl processes

This resolves the issue where Data_3 showed unknown status because smartctl was hanging
indefinitely trying to read from a problematic drive, blocking the entire collector.

Bump version to v0.1.220

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 21:09:04 +01:00
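
The commit force-kills hung children with kill -9; one compact way to get the same guarantee in tokio is kill_on_drop, shown in this sketch (an assumption for illustration, not necessarily how the agent implements it):

```rust
use std::time::Duration;
use tokio::process::Command;

/// Run a command with a hard deadline. kill_on_drop(true) ensures the child
/// is SIGKILLed if the timeout drops the in-flight wait_with_output future,
/// so no orphaned smartctl processes linger.
async fn run_with_timeout(mut cmd: Command, limit: Duration) -> Option<std::process::Output> {
    let child = cmd
        .stdout(std::process::Stdio::piped())
        .stderr(std::process::Stdio::piped())
        .kill_on_drop(true)
        .spawn()
        .ok()?;
    tokio::time::timeout(limit, child.wait_with_output())
        .await
        .ok()? // Err(Elapsed): the future is dropped and the child killed
        .ok()
}
```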
929870f8b6 Bump version to v0.1.219
2025-11-29 18:35:14 +01:00
7aae852b7b Bump version to v0.1.218
2025-11-29 17:59:33 +01:00
40f3ff66d8 Show archive count range to detect inconsistencies
- Display single number if all services have same count
- Display min-max range if counts differ (indicates problem)
2025-11-29 17:59:24 +01:00
1c1beddb55 Bump version to v0.1.217
2025-11-29 17:51:13 +01:00
620d1f10b6 Show archive count per service instead of total sum 2025-11-29 17:51:01 +01:00
a0d571a40e Bump version to v0.1.216
2025-11-29 17:44:12 +01:00
977200fff3 Move archive count to Usage line in backup display 2025-11-29 17:44:05 +01:00
d692de5f83 Bump version to v0.1.215
2025-11-29 17:41:49 +01:00
f5913dbd43 Add archive count to backup disk display 2025-11-29 17:41:11 +01:00
faa30a7839 Sort backup repositories and disks for stable display
- Sort repositories alphabetically before rendering
- Sort backup disks by serial number
- Prevents display jumping between different orderings on updates
- Consistent display order across refreshes

Bump version to v0.1.214

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 17:15:17 +01:00
6e4a42799f Bump version to v0.1.213
2025-11-29 16:46:16 +01:00
afb8d68e03 Implement multi-disk backup support
- Update BackupData structure to support multiple backup disks
- Scan /var/lib/backup/status/ directory for all status files
- Calculate status icons for backup and disk usage
- Aggregate repository status from all disks
- Update dashboard to display all backup disks with per-disk status
- Display repository list with count and aggregated status
2025-11-29 16:44:50 +01:00
5e08b34280 Move C-state name cleaning to agent for smaller JSON
- Agent now extracts "C" + digits pattern (C3, C10) using char parsing
- Removes suffixes like "_ACPI", "_MWAIT" at source
- Reduces JSON payload size over ZMQ
- No regex dependency - uses fast char iteration (~1μs overhead)
- Robust fallback to original name if pattern not found
- Dashboard simplified to use clean names directly

Bump version to v0.1.212

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 14:05:55 +01:00
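
A sketch of the regex-free extraction (function name illustrative):

```rust
/// Keep the leading 'C' plus the digits that follow it, dropping suffixes
/// like "_ACPI" or "_MWAIT". Falls back to the original name when the
/// pattern is absent (e.g. "POLL").
fn clean_cstate_name(raw: &str) -> String {
    let mut chars = raw.chars();
    if chars.next() == Some('C') {
        let digits: String = chars.take_while(|c| c.is_ascii_digit()).collect();
        if !digits.is_empty() {
            return format!("C{digits}");
        }
    }
    raw.to_string()
}

// "C10_ACPI" -> "C10", "C3" -> "C3", "POLL" -> "POLL" (fallback)
```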
0d8284b69c Clean C-state display to show only CX format
- Strip suffixes like "_ACPI" from C-state names
- Display changes from "C3_ACPI:51%" to "C3:51%"
- Cleaner, more concise presentation

Bump version to v0.1.211

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 13:34:01 +01:00
d84690cb3b Move transmission interval to ZMQ config section
- Changed code to use zmq.transmission_interval_seconds instead of top-level collection_interval_seconds
- Removed collection_interval_seconds from AgentConfig
- Updated validation to check zmq.transmission_interval_seconds
- Improves config organization by grouping all ZMQ settings together

Bump version to v0.1.210

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 13:31:39 +01:00
7c030b33d6 Show top 3 C-states with usage percentages
- Changed CpuData.cstate from String to Vec<CStateInfo>
- Added CStateInfo struct with name and percent fields
- Collector calculates percentage for each C-state based on accumulated time
- Sorts and returns top 3 C-states by usage
- Dashboard displays: "C10:79% C8:10% C6:8%"

Provides better visibility into CPU idle state distribution.

Bump version to v0.1.209

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 23:45:46 +01:00
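
A sketch of the top-3 calculation against the standard Linux cpuidle sysfs layout, using cpu0 and cumulative residency times; the real collector may aggregate across CPUs:

```rust
use std::fs;

/// Read each cpuidle state's name and accumulated residency (microseconds
/// since boot) from sysfs, convert to a share of total idle time, and keep
/// the three largest for display (e.g. "C10:79% C8:10% C6:8%").
fn top_cstates() -> std::io::Result<Vec<(String, f32)>> {
    let mut states = Vec::new();
    // /sys/devices/system/cpu/cpu0/cpuidle normally contains only stateN dirs.
    for entry in fs::read_dir("/sys/devices/system/cpu/cpu0/cpuidle")? {
        let dir = entry?.path();
        let name = fs::read_to_string(dir.join("name"))?.trim().to_string();
        let time: u64 = fs::read_to_string(dir.join("time"))?.trim().parse().unwrap_or(0);
        states.push((name, time));
    }
    let total: u64 = states.iter().map(|(_, t)| t).sum();
    let mut percents: Vec<(String, f32)> = states
        .into_iter()
        .map(|(n, t)| (n, if total > 0 { t as f32 * 100.0 / total as f32 } else { 0.0 }))
        .collect();
    // Sort descending by share and keep the top three.
    percents.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    percents.truncate(3);
    Ok(percents)
}
```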
c6817537a8 Replace CPU frequency with C-state monitoring
- Changed CpuData.frequency_mhz to CpuData.cstate (String)
- Implemented collect_cstate() to read CPU idle depth from sysfs
- Finds deepest C-state with most accumulated time (C0-C10)
- Updated dashboard to display C-state instead of frequency
- More accurate indicator of CPU activity vs power management

Bump version to v0.1.208

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 23:30:14 +01:00
2189d34b16 Bump version to v0.1.207
2025-11-28 23:16:33 +01:00
28cfd5758f Fix service metrics not showing - remove cache check
The service_status_cache from discovery only has active_state with
all detailed metrics set to None. During collection, get_service_status()
was returning cached data instead of fetching fresh systemctl show data.

Now always fetch fresh data to populate memory_bytes, restart_count,
and uptime_seconds properly.
2025-11-28 23:15:51 +01:00
5deb8cf8d8 Bump version to v0.1.206
2025-11-28 23:07:20 +01:00
0e01813ff5 Add service metrics from systemctl (memory, uptime, restarts)
Shared:
- Add memory_bytes, restart_count, uptime_seconds to ServiceData

Agent:
- Add new fields to ServiceStatusInfo struct
- Fetch MemoryCurrent, NRestarts, ExecMainStartTimestamp from systemctl show
- Calculate uptime from start timestamp
- Parse and populate new fields in ServiceData
- Remove unused load_state and sub_state fields

Dashboard:
- Add memory_bytes, restart_count, uptime_seconds to ServiceInfo
- Update header: Service, Status, RAM, Uptime, ↻ (restarts)
- Format memory as MB/GB
- Format uptime as Xd Xh, Xh Xm, or Xm
- Show restart count with ! prefix if > 0 to indicate instability

All metrics obtained from single systemctl show call - zero overhead.
2025-11-28 23:06:13 +01:00
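
The single-call fetch can be sketched as one systemctl show invocation parsed into a key/value map (synchronous std::process used for brevity; note that MemoryCurrent can be "[not set]" and callers must handle that):

```rust
use std::collections::HashMap;
use std::process::Command;

/// Fetch all three metrics with one subprocess: systemctl show emits
/// KEY=VALUE lines for the requested properties.
fn service_metrics(unit: &str) -> Option<HashMap<String, String>> {
    let out = Command::new("systemctl")
        .args([
            "show", unit,
            "--property=MemoryCurrent,NRestarts,ExecMainStartTimestamp",
        ])
        .output()
        .ok()?;
    let props: HashMap<String, String> = String::from_utf8_lossy(&out.stdout)
        .lines()
        .filter_map(|l| l.split_once('='))
        .map(|(k, v)| (k.to_string(), v.to_string()))
        .collect();
    Some(props)
}
```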
c3c9507a42 Bump version to v0.1.205
2025-11-28 22:37:28 +01:00
4d77ffe17e Remove RAM and Disk columns from services widget header
Changed header from 4 columns to 2 columns:
- Before: Service, Status, RAM, Disk
- After: Service, Status

Matches the removal of memory_mb and disk_gb fields.
2025-11-28 22:37:14 +01:00
14f74b4cac Bump version to v0.1.204
2025-11-28 14:33:19 +01:00
67b686f8c7 Remove RAM and disk collection for services
Complete removal of service resource metrics:

Agent:
- Remove memory_mb and disk_gb fields from ServiceData struct
- Remove get_service_memory_usage() method
- Remove get_service_disk_usage() method
- Remove get_directory_size() method
- Remove unused warn import

Dashboard:
- Remove memory_mb and disk_gb from ServiceInfo struct
- Remove memory/disk display from format_parent_service_line
- Remove memory/disk parsing in legacy metric path
- Remove unused format_disk_size() function

Service resource metrics were slow, unreliable, and never worked
properly after the structured data migration. They will be handled
differently in the future.
2025-11-28 14:25:12 +01:00
e3996fdb84 Fix compilation errors from command receiver removal
- Remove AgentCommand import from agent.rs
- Remove handle_commands() method
- Remove command handling from main loop
- Remove command_port validation checks
2025-11-28 13:01:36 +01:00
f94ca60e69 Bump version to v0.1.203
2025-11-28 12:53:56 +01:00
c19ff56df8 Remove unused ZMQ command receiver (port 6131)
Service control migrated to SSH, command receiver no longer needed.
- Remove command_receiver Socket from ZmqHandler
- Remove try_receive_command method
- Remove AgentCommand enum
- Remove command_port from ZmqConfig
2025-11-28 12:52:43 +01:00
fe2f604703 Bump version to v0.1.202
2025-11-28 12:45:25 +01:00
8bfd416327 Revert to v0.1.192 - fix agent hang issue
2025-11-28 12:42:24 +01:00
85c6c624fb Revert D-Bus usage, use systemctl commands only
- Remove zbus dependency from agent
- Replace D-Bus Connection calls with systemctl show commands
- Fix agent hang by eliminating blocking D-Bus operations
- get_unit_property now uses systemctl show with property flags
- Memory, disk usage, and nginx config queries use systemctl
- Simpler, more reliable service monitoring
2025-11-28 12:15:04 +01:00
eab3f17428 Fix agent hang by reverting service discovery to systemctl
The D-Bus ListUnits call in discover_services_internal() was causing
the agent to hang on startup.

**Root cause:**
- D-Bus ListUnits call with complex tuple destructuring hung indefinitely
- Agent never completed first collection cycle
- No collector output in logs

**Fix:**
- Revert discover_services_internal() to use systemctl list-units/list-unit-files
- Keep D-Bus-based property queries (WorkingDirectory, MemoryCurrent, ExecStart)
- Hybrid approach: systemctl for discovery, D-Bus for individual queries

**External commands still used:**
- systemctl list-units, list-unit-files (service discovery)
- smartctl (SMART data)
- sudo du (directory sizes)
- nginx -T (config fallback)

Version bump: 0.1.198 → 0.1.199
2025-11-28 11:57:31 +01:00
7ad149bbe4 Replace all systemctl commands with zbus D-Bus API
Complete migration from systemctl subprocess calls to native D-Bus communication:

**Removed systemctl commands:**
- systemctl is-active (fallback) - use D-Bus cache from ListUnits
- systemctl show --property=LoadState,ActiveState,SubState - use D-Bus cache
- systemctl show --property=WorkingDirectory - use D-Bus Properties.Get
- systemctl show --property=MemoryCurrent - use D-Bus Properties.Get
- systemctl show nginx --property=ExecStart - use D-Bus Properties.Get

**Implementation details:**
- Added get_unit_property() helper for D-Bus property access
- Made get_nginx_site_metrics() async to support D-Bus calls
- Made get_nginx_sites_internal() async
- Made discover_nginx_sites() async
- Made get_nginx_config_from_systemd() async
- Fixed RwLock guard Send issues by using scoped locks

**Remaining external commands:**
- smartctl (disk.rs) - No Rust alternative for SMART data
- sudo du (systemd.rs) - Directory size measurement
- nginx -T (systemd.rs) - Nginx config fallback
- timeout hostname (nixos.rs) - Rare fallback only

Version bump: 0.1.197 → 0.1.198
2025-11-28 11:46:28 +01:00
b444c88ea0 Replace external commands with native Rust APIs
Significant performance improvements by eliminating subprocess spawning:

- Replace 'ip' commands with rtnetlink for network interface discovery
- Replace 'docker ps/images' with bollard Docker API client
- Replace 'systemctl list-units' with zbus D-Bus for systemd interaction
- Replace 'df' with statvfs() syscall for filesystem statistics
- Replace 'lsblk' with /proc/mounts parsing

Add interval-based caching to collectors:
- DiskCollector now respects interval_seconds configuration
- SystemdCollector now respects interval_seconds configuration
- CpuCollector now respects interval_seconds configuration

Remove unused command communication infrastructure:
- Remove port 6131 ZMQ command receiver
- Clean up unused AgentCommand types

Dependencies added:
- rtnetlink = "0.14"
- netlink-packet-route = "0.19"
- bollard = "0.17"
- zbus = "4.0"
- nix (fs features for statvfs)
2025-11-28 11:27:33 +01:00
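
As an example of the df replacement, statvfs via the nix crate (which needs the fs feature, per the dependency note above; an illustrative helper, not the agent's code):

```rust
use nix::sys::statvfs::statvfs;

/// Filesystem usage without spawning `df`: statvfs() returns block counts
/// directly. fragment_size() is the unit df uses on Linux.
fn fs_usage(path: &str) -> nix::Result<(u64, u64)> {
    let vfs = statvfs(path)?;
    let total = vfs.blocks() * vfs.fragment_size();
    // blocks_available() counts blocks usable by unprivileged processes,
    // matching df's "Avail" column.
    let free = vfs.blocks_available() * vfs.fragment_size();
    Ok((total, free))
}
```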
317cf76bd1 Bump version to v0.1.196
2025-11-27 23:16:40 +01:00
0db1a165b9 Revert "Implement cached collector architecture with configurable timeouts"
This reverts commit 2740de9b54.
2025-11-27 23:12:08 +01:00
3c2955376d Revert "Fix ZMQ sender blocking - move to independent thread with try_read"
This reverts commit 01e1f33b66.
2025-11-27 23:10:55 +01:00
f09ccabc7f Revert "Fix data duplication in cached collector architecture"
This reverts commit 14618c59c6.
2025-11-27 23:09:40 +01:00
43dd5a901a Update CLAUDE.md with correct ZMQ sender architecture 2025-11-27 22:59:38 +01:00
01e1f33b66 Fix ZMQ sender blocking - move to independent thread with try_read
CRITICAL FIX: The previous cached collector architecture still had ZMQ sending
in the main event loop, where it could block waiting for RwLock when collectors
were writing. This caused the 3-8 second delays you observed.

Changes:
- Move ZMQ publisher to dedicated std::thread (ZMQ sockets aren't thread-safe)
- Use try_read() instead of read() to avoid blocking on write locks
- Send previous data if cache is locked by collector
- ZMQ now sends every 2s regardless of collector timing
- Remove publisher from ZmqHandler (now only handles commands)

Architecture:
- Collectors: Independent tokio tasks updating shared cache
- ZMQ Sender: Dedicated OS thread with its own publisher socket
- Main Loop: Only handles commands and notifications

This ensures ZMQ transmission is NEVER blocked by slow collectors.

Bump version to v0.1.195
2025-11-27 22:56:58 +01:00
ed6399b914 Bump version to v0.1.194
2025-11-27 22:46:17 +01:00
14618c59c6 Fix data duplication in cached collector architecture
Critical bug fix: Collectors were appending to Vecs instead of replacing them,
causing duplicate entries with each collection cycle.

Fixed by adding .clear() calls before populating:
- Memory collector: tmpfs Vec (was showing 11+ duplicates)
- Disk collector: drives and pools Vecs
- Systemd collector: services Vec
- Network collector: Already correct (assigns new Vec)

This prevents the exponential growth of duplicate entries in the dashboard UI.
2025-11-27 22:45:44 +01:00
2740de9b54 Implement cached collector architecture with configurable timeouts
Major architectural refactor to eliminate false "host offline" alerts:

- Replace sequential blocking collectors with independent async tasks
- Each collector runs at configurable interval and updates shared cache
- ZMQ sender reads cache every 1-2s regardless of collector speed
- Collector intervals: CPU/Memory (1-10s), Backup/NixOS (30-60s), Disk/Systemd (60-300s)

All intervals now configurable via NixOS config:
- collectors.*.interval_seconds (collection frequency per collector)
- collectors.*.command_timeout_seconds (timeout for shell commands)
- notifications.check_interval_seconds (status change detection rate)

Command timeouts increased from hardcoded 2-3s to configurable 10-30s:
- Disk collector: 30s (SMART operations, lsblk)
- Systemd collector: 15s (systemctl, docker, du commands)
- Network collector: 10s (ip route, ip addr)

Benefits:
- No false "offline" alerts when slow collectors take >10s
- Different update rates for different metric types
- Better resource management with longer timeouts
- Full NixOS configuration control

Bump version to v0.1.193
2025-11-27 22:37:20 +01:00
37f2650200 Document cached collector architecture plan
Add architectural plan for separating ZMQ sending from data collection to prevent false 'host offline' alerts caused by slow collectors.

Key concepts:
- Shared cache (Arc<RwLock<AgentData>>)
- Independent async collector tasks with different update rates
- ZMQ sender always sends every 1s from cache
- Fast collectors (1s), medium (5s), slow (60s)
- No blocking regardless of collector speed
2025-11-27 21:49:44 +01:00
833010e270 Bump version to v0.1.192
2025-11-27 18:34:53 +01:00
549d9d1c72 Replace whale emoji with ASCII 'D' for performance
Emoji rendering in terminals can be very slow, especially when rendered in the hot path (every frame for every docker image). The whale emoji 🐋 was causing significant rendering delays.

Temporary change to ASCII 'D' to test if emoji was the performance issue.
2025-11-27 18:34:27 +01:00
9b84b70581 Bump version to v0.1.191
2025-11-27 18:16:49 +01:00
92c3ee3f2a Add Docker whale icon for docker images
Docker images now display with a distinctive 🐋 whale icon in blue (the highlight color) instead of status icons. This provides clear visual identification that these are Docker images without implying operational status.
2025-11-27 18:16:33 +01:00
1be55f765d Bump version to v0.1.190
2025-11-27 18:09:49 +01:00
2f94a4b853 Add service_type field to separate data from presentation
Changes:
- Add service_type field to SubServiceData: 'nginx_site', 'container', 'image'
- Agent sends pure data without display formatting
- Dashboard checks service_type to decide presentation
- Docker images now display without status icon (service_type='image')
- Remove unused image_size_str from docker images tuple

Clean separation: agent provides data, dashboard handles display logic.
2025-11-27 18:09:20 +01:00
ff2b43827a Bump version to v0.1.189
2025-11-27 17:57:38 +01:00
fac0188c6f Change docker image display format and status
Changes:
- Rename docker images from 'image_node:18...' to 'I node:18...' for conciseness
- Change image status from 'active' to 'inactive' for neutral informational display
- Images now show with gray empty circle ○ instead of green filled circle ●

Docker images are static artifacts without a meaningful operational status, so using the inactive status provides a neutral gray display that won't trigger alerts or affect service status aggregation.
2025-11-27 17:57:24 +01:00
6bb350f016 Bump version to v0.1.188
2025-11-27 16:39:46 +01:00
374b126446 Reduce all command timeouts to 2-3 seconds max
With a 10-second host heartbeat timeout, all command timeouts must be significantly lower to ensure total collection time stays under 10 seconds.

Changed timeouts:
- smartctl: 10s → 3s (critical: multiple drives queried sequentially)
- du: 5s → 2s
- lsblk: 5s → 2s
- systemctl list commands: 5s → 3s
- systemctl show/is-active: 3s → 2s
- docker commands: 5s → 3s
- df, ip commands: 3s → 2s

Total worst-case collection time is now capped at a more reasonable level, preventing the false host-offline alerts caused by blocking operations.
2025-11-27 16:38:54 +01:00
76c04633b5 Bump version to v0.1.187
2025-11-27 16:34:42 +01:00
1e0510be81 Add comprehensive timeouts to all blocking system commands
Fixes random host disconnections caused by blocking operations preventing timely ZMQ packet transmission.

Changes:
- Add run_command_with_timeout() wrapper using tokio for async command execution
- Apply 10s timeout to smartctl (prevents 30+ second hangs on failing drives)
- Apply 5s timeout to du, lsblk, systemctl list commands
- Apply 3s timeout to systemctl show/is-active, df, ip commands
- Apply 2s timeout to hostname command
- Use system 'timeout' command for sync operations where async not needed

Critical fixes:
- smartctl: Failing drives could block for 30+ seconds per drive
- du: Large directories (Docker, PostgreSQL) could block 10-30+ seconds
- systemctl/docker: Commands could block indefinitely during system issues

With a 1-second collection interval and a 10-second heartbeat timeout, any blocking operation over 10s causes false "host offline" alerts. These timeouts ensure collection completes quickly even during system degradation.
2025-11-27 16:34:08 +01:00
9a2df906ea Add ZMQ communication statistics tracking and display
2025-11-27 16:14:45 +01:00
25 changed files with 1366 additions and 667 deletions

View File

@@ -327,9 +327,16 @@ Storage:
 ├─ ● Data_2: GGA04461 T: 28°C
 └─ ● Parity: WDZS8RY0 T: 29°C
 Backup:
+● Repo: 4
+├─ getea
+├─ vaultwarden
+├─ mysql
+└─ immich
+● W800639Y W: 2%
+├─ ● Backup: 2025-11-29T04:00:01.324623
+└─ ● Usage: 8% 70GB/916GB
 ● WD-WCC7K1234567 T: 32°C W: 12%
-├─ Last: 2h ago (12.3GB)
-├─ Next: in 22h
+├─ ● Backup: 2025-11-29T04:00:01.324623
 └─ ● Usage: 45% 678GB/1.5TB
 ```

Cargo.lock generated
View File

@@ -279,7 +279,7 @@ checksum = "a1d728cc89cf3aee9ff92b05e62b19ee65a02b5702cff7d5a377e32c6ae29d8d"
 [[package]]
 name = "cm-dashboard"
-version = "0.1.184"
+version = "0.1.256"
 dependencies = [
  "anyhow",
  "chrono",
@@ -301,7 +301,7 @@ dependencies = [
 [[package]]
 name = "cm-dashboard-agent"
-version = "0.1.184"
+version = "0.1.256"
 dependencies = [
  "anyhow",
  "async-trait",
@@ -309,6 +309,7 @@ dependencies = [
  "chrono-tz",
  "clap",
  "cm-dashboard-shared",
+ "futures",
  "gethostname",
  "lettre",
  "reqwest",
@@ -324,7 +325,7 @@ dependencies = [
 [[package]]
 name = "cm-dashboard-shared"
-version = "0.1.184"
+version = "0.1.256"
 dependencies = [
  "chrono",
  "serde",
@@ -552,6 +553,21 @@ dependencies = [
  "percent-encoding",
 ]
+[[package]]
+name = "futures"
+version = "0.3.31"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "65bc07b1a8bc7c85c5f2e110c476c7389b4554ba72af57d8445ea63a576b0876"
+dependencies = [
+ "futures-channel",
+ "futures-core",
+ "futures-executor",
+ "futures-io",
+ "futures-sink",
+ "futures-task",
+ "futures-util",
+]
 [[package]]
 name = "futures-channel"
 version = "0.3.31"
@@ -559,6 +575,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "2dff15bf788c671c1934e366d07e30c1814a8ef514e1af724a602e8a2fbe1b10"
 dependencies = [
  "futures-core",
+ "futures-sink",
 ]
@@ -567,12 +584,34 @@ version = "0.3.31"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "05f29059c0c2090612e8d742178b0580d2dc940c837851ad723096f87af6663e"
+[[package]]
+name = "futures-executor"
+version = "0.3.31"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1e28d1d997f585e54aebc3f97d39e72338912123a67330d723fdbb564d646c9f"
+dependencies = [
+ "futures-core",
+ "futures-task",
+ "futures-util",
+]
 [[package]]
 name = "futures-io"
 version = "0.3.31"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "9e5c1b78ca4aae1ac06c48a526a655760685149f0d465d21f37abfe57ce075c6"
+[[package]]
+name = "futures-macro"
+version = "0.3.31"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "162ee34ebcb7c64a8abebc059ce0fee27c2262618d7b60ed8faf72fef13c3650"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
 [[package]]
 name = "futures-sink"
 version = "0.3.31"
@@ -591,8 +630,11 @@ version = "0.3.31"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "9fa08315bb612088cc391249efdc3bc77536f16c91f6cf495e6fbe85b20a4a81"
 dependencies = [
+ "futures-channel",
  "futures-core",
  "futures-io",
+ "futures-macro",
+ "futures-sink",
  "futures-task",
  "memchr",
  "pin-project-lite",

View File

@@ -1,6 +1,6 @@
 [package]
 name = "cm-dashboard-agent"
-version = "0.1.185"
+version = "0.1.257"
 edition = "2021"
 [dependencies]
@@ -20,4 +20,5 @@ gethostname = { workspace = true }
 chrono-tz = "0.8"
 toml = { workspace = true }
 async-trait = "0.1"
 reqwest = { version = "0.11", features = ["json", "blocking"] }
+futures = "0.3"

View File

@@ -1,10 +1,10 @@
 use anyhow::Result;
 use gethostname::gethostname;
-use std::time::Duration;
+use std::time::{Duration, Instant};
 use tokio::time::interval;
 use tracing::{debug, error, info};
-use crate::communication::{AgentCommand, ZmqHandler};
+use crate::communication::ZmqHandler;
 use crate::config::AgentConfig;
 use crate::collectors::{
     Collector,
@@ -19,13 +19,22 @@ use crate::collectors::{
 use crate::notifications::NotificationManager;
 use cm_dashboard_shared::AgentData;
+/// Wrapper for collectors with timing information
+struct TimedCollector {
+    collector: Box<dyn Collector>,
+    interval: Duration,
+    last_collection: Option<Instant>,
+    name: String,
+}
 pub struct Agent {
     hostname: String,
     config: AgentConfig,
     zmq_handler: ZmqHandler,
-    collectors: Vec<Box<dyn Collector>>,
+    collectors: Vec<TimedCollector>,
     notification_manager: NotificationManager,
     previous_status: Option<SystemStatus>,
+    cached_agent_data: AgentData,
 }
 /// Track system component status for change detection
@@ -55,36 +64,78 @@ impl Agent {
             config.zmq.publisher_port
         );
-        // Initialize collectors
-        let mut collectors: Vec<Box<dyn Collector>> = Vec::new();
+        // Initialize collectors with timing information
+        let mut collectors: Vec<TimedCollector> = Vec::new();
         // Add enabled collectors
         if config.collectors.cpu.enabled {
-            collectors.push(Box::new(CpuCollector::new(config.collectors.cpu.clone())));
+            collectors.push(TimedCollector {
+                collector: Box::new(CpuCollector::new(config.collectors.cpu.clone())),
+                interval: Duration::from_secs(config.collectors.cpu.interval_seconds),
+                last_collection: None,
+                name: "CPU".to_string(),
+            });
+            info!("CPU collector initialized with {}s interval", config.collectors.cpu.interval_seconds);
         }
         if config.collectors.memory.enabled {
-            collectors.push(Box::new(MemoryCollector::new(config.collectors.memory.clone())));
+            collectors.push(TimedCollector {
+                collector: Box::new(MemoryCollector::new(config.collectors.memory.clone())),
+                interval: Duration::from_secs(config.collectors.memory.interval_seconds),
+                last_collection: None,
+                name: "Memory".to_string(),
+            });
+            info!("Memory collector initialized with {}s interval", config.collectors.memory.interval_seconds);
         }
         if config.collectors.disk.enabled {
-            collectors.push(Box::new(DiskCollector::new(config.collectors.disk.clone())));
+            collectors.push(TimedCollector {
+                collector: Box::new(DiskCollector::new(config.collectors.disk.clone())),
+                interval: Duration::from_secs(config.collectors.disk.interval_seconds),
+                last_collection: None,
+                name: "Disk".to_string(),
+            });
+            info!("Disk collector initialized with {}s interval", config.collectors.disk.interval_seconds);
         }
         if config.collectors.systemd.enabled {
-            collectors.push(Box::new(SystemdCollector::new(config.collectors.systemd.clone())));
+            collectors.push(TimedCollector {
+                collector: Box::new(SystemdCollector::new(config.collectors.systemd.clone())),
+                interval: Duration::from_secs(config.collectors.systemd.interval_seconds),
+                last_collection: None,
+                name: "Systemd".to_string(),
+            });
+            info!("Systemd collector initialized with {}s interval", config.collectors.systemd.interval_seconds);
         }
         if config.collectors.backup.enabled {
-            collectors.push(Box::new(BackupCollector::new()));
+            collectors.push(TimedCollector {
+                collector: Box::new(BackupCollector::new()),
+                interval: Duration::from_secs(config.collectors.backup.interval_seconds),
+                last_collection: None,
+                name: "Backup".to_string(),
+            });
+            info!("Backup collector initialized with {}s interval", config.collectors.backup.interval_seconds);
         }
         if config.collectors.network.enabled {
-            collectors.push(Box::new(NetworkCollector::new(config.collectors.network.clone())));
+            collectors.push(TimedCollector {
+                collector: Box::new(NetworkCollector::new(config.collectors.network.clone())),
+                interval: Duration::from_secs(config.collectors.network.interval_seconds),
+                last_collection: None,
+                name: "Network".to_string(),
+            });
+            info!("Network collector initialized with {}s interval", config.collectors.network.interval_seconds);
         }
         if config.collectors.nixos.enabled {
-            collectors.push(Box::new(NixOSCollector::new(config.collectors.nixos.clone())));
+            collectors.push(TimedCollector {
+                collector: Box::new(NixOSCollector::new(config.collectors.nixos.clone())),
+                interval: Duration::from_secs(config.collectors.nixos.interval_seconds),
+                last_collection: None,
+                name: "NixOS".to_string(),
+            });
+            info!("NixOS collector initialized with {}s interval", config.collectors.nixos.interval_seconds);
         }
         info!("Initialized {} collectors", collectors.len());
@@ -93,6 +144,9 @@ impl Agent {
         let notification_manager = NotificationManager::new(&config.notifications, &hostname)?;
         info!("Notification manager initialized");
+        // Initialize cached agent data
+        let cached_agent_data = AgentData::new(hostname.clone(), env!("CARGO_PKG_VERSION").to_string());
         Ok(Self {
             hostname,
             config,
@@ -100,6 +154,7 @@ impl Agent {
             collectors,
             notification_manager,
             previous_status: None,
+            cached_agent_data,
         })
     }
@@ -114,7 +169,7 @@ impl Agent {
         // Set up intervals
         let mut transmission_interval = interval(Duration::from_secs(
-            self.config.collection_interval_seconds,
+            self.config.zmq.transmission_interval_seconds,
         ));
         let mut notification_interval = interval(Duration::from_secs(30)); // Check notifications every 30s
@@ -134,12 +189,6 @@
                     // NOTE: With structured data, we might need to implement status tracking differently
                     // For now, we skip this until status evaluation is migrated
                 }
-                // Handle incoming commands (check periodically)
-                _ = tokio::time::sleep(Duration::from_millis(100)) => {
-                    if let Err(e) = self.handle_commands().await {
-                        error!("Error handling commands: {}", e);
-                    }
-                }
                 _ = &mut shutdown_rx => {
                     info!("Shutdown signal received, stopping agent loop");
                     break;
@@ -155,24 +204,47 @@
     async fn collect_and_broadcast(&mut self) -> Result<()> {
         debug!("Starting structured data collection");
-        // Initialize empty AgentData
-        let mut agent_data = AgentData::new(self.hostname.clone(), env!("CARGO_PKG_VERSION").to_string());
-        // Collect data from all collectors
-        for collector in &self.collectors {
-            if let Err(e) = collector.collect_structured(&mut agent_data).await {
-                error!("Collector failed: {}", e);
-                // Continue with other collectors even if one fails
+        // Collect data from collectors whose intervals have elapsed
+        // Update cached_agent_data with new data
+        let now = Instant::now();
+        for timed_collector in &mut self.collectors {
+            let should_collect = match timed_collector.last_collection {
+                None => true, // First collection
+                Some(last_time) => now.duration_since(last_time) >= timed_collector.interval,
+            };
+            if should_collect {
+                if let Err(e) = timed_collector.collector.collect_structured(&mut self.cached_agent_data).await {
+                    error!("Collector {} failed: {}", timed_collector.name, e);
+                    // Update last_collection time even on failure to prevent immediate retries
+                    timed_collector.last_collection = Some(now);
+                } else {
+                    timed_collector.last_collection = Some(now);
+                    debug!(
+                        "Collected from {} ({}s interval)",
+                        timed_collector.name,
+                        timed_collector.interval.as_secs()
+                    );
+                }
             }
         }
+        // Update timestamp on cached data
+        self.cached_agent_data.timestamp = std::time::SystemTime::now()
+            .duration_since(std::time::UNIX_EPOCH)
+            .unwrap()
+            .as_secs();
+        // Clone for notification check (to avoid borrow issues)
+        let agent_data_snapshot = self.cached_agent_data.clone();
         // Check for status changes and send notifications
-        if let Err(e) = self.check_status_changes_and_notify(&agent_data).await {
+        if let Err(e) = self.check_status_changes_and_notify(&agent_data_snapshot).await {
             error!("Failed to check status changes: {}", e);
         }
-        // Broadcast the structured data via ZMQ
-        if let Err(e) = self.zmq_handler.publish_agent_data(&agent_data).await {
+        // Broadcast the cached structured data via ZMQ
+        if let Err(e) = self.zmq_handler.publish_agent_data(&agent_data_snapshot).await {
             error!("Failed to broadcast agent data: {}", e);
         } else {
             debug!("Successfully broadcast structured agent data");
@@ -259,36 +331,4 @@
         Ok(())
     }
-    /// Handle incoming commands from dashboard
-    async fn handle_commands(&mut self) -> Result<()> {
-        // Try to receive a command (non-blocking)
-        if let Ok(Some(command)) = self.zmq_handler.try_receive_command() {
-            info!("Received command: {:?}", command);
-            match command {
-                AgentCommand::CollectNow => {
-                    info!("Received immediate collection request");
-                    if let Err(e) = self.collect_and_broadcast().await {
-                        error!("Failed to collect on demand: {}", e);
-                    }
-                }
-                AgentCommand::SetInterval { seconds } => {
-                    info!("Received interval change request: {}s", seconds);
-                    // Note: This would require more complex handling to update the interval
-                    // For now, just acknowledge
-                }
-                AgentCommand::ToggleCollector { name, enabled } => {
-                    info!("Received collector toggle request: {} -> {}", name, enabled);
-                    // Note: This would require more complex handling to enable/disable collectors
-                    // For now, just acknowledge
-                }
-                AgentCommand::Ping => {
-                    info!("Received ping command");
-                    // Maybe send back a pong or status
-                }
-            }
-        }
-        Ok(())
-    }
 }

View File

@@ -1,36 +1,66 @@
use async_trait::async_trait; use async_trait::async_trait;
use cm_dashboard_shared::{AgentData, BackupData, BackupDiskData}; use cm_dashboard_shared::{AgentData, BackupData, BackupDiskData, Status};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use std::collections::HashMap; use std::collections::{HashMap, HashSet};
use std::fs; use std::fs;
use std::path::Path; use std::path::{Path, PathBuf};
use tracing::debug; use tracing::{debug, warn};
use super::{Collector, CollectorError}; use super::{Collector, CollectorError};
/// Backup collector that reads backup status from TOML files with structured data output
pub struct BackupCollector {
/// Directory containing backup status files
status_dir: String,
}

impl BackupCollector {
pub fn new() -> Self {
Self {
status_dir: "/var/lib/backup/status".to_string(),
}
}
/// Scan directory for all backup status files
async fn scan_status_files(&self) -> Result<Vec<PathBuf>, CollectorError> {
let status_path = Path::new(&self.status_dir);

if !status_path.exists() {
debug!("Backup status directory not found: {}", self.status_dir);
return Ok(Vec::new());
}

let mut status_files = Vec::new();
match fs::read_dir(status_path) {
Ok(entries) => {
for entry in entries {
if let Ok(entry) = entry {
let path = entry.path();
if path.is_file() {
if let Some(filename) = path.file_name().and_then(|n| n.to_str()) {
if filename.starts_with("backup-status-") && filename.ends_with(".toml") {
status_files.push(path);
}
}
}
}
}
}
Err(e) => {
warn!("Failed to read backup status directory: {}", e);
return Ok(Vec::new());
}
}
Ok(status_files)
}
/// Read a single backup status file
async fn read_status_file(&self, path: &Path) -> Result<BackupStatusToml, CollectorError> {
let content = fs::read_to_string(path)
.map_err(|e| CollectorError::SystemRead {
path: path.to_string_lossy().to_string(),
error: e.to_string(),
})?;
@@ -40,66 +70,122 @@ impl BackupCollector {
error: format!("Failed to parse backup status TOML: {}", e),
})?;

Ok(status)
}
/// Calculate backup status from TOML status field
fn calculate_backup_status(status_str: &str) -> Status {
match status_str.to_lowercase().as_str() {
"success" => Status::Ok,
"warning" => Status::Warning,
"failed" | "error" => Status::Critical,
_ => Status::Unknown,
}
}
/// Calculate usage status from disk usage percentage
fn calculate_usage_status(usage_percent: f32) -> Status {
if usage_percent < 80.0 {
Status::Ok
} else if usage_percent < 90.0 {
Status::Warning
} else {
Status::Critical
}
}
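// Illustrative mapping (values hypothetical): "success" -> Ok, "warning" -> Warning,
// "failed"/"error" -> Critical, anything else -> Unknown; disk usage of 79.9% -> Ok,
// 85.0% -> Warning, 95.0% -> Critical. This assumes Status implements Ord so the
// worst_status.max(...) aggregation below ranks Ok < Warning < Critical.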
/// Convert BackupStatusToml to BackupData and populate AgentData
async fn populate_backup_data(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
let status_files = self.scan_status_files().await?;
if status_files.is_empty() {
debug!("No backup status files found");
agent_data.backup = BackupData {
repositories: Vec::new(),
repository_status: Status::Unknown,
disks: Vec::new(),
};
return Ok(());
}
let mut all_repositories = HashSet::new();
let mut disks = Vec::new();
let mut worst_status = Status::Ok;
for status_file in status_files {
match self.read_status_file(&status_file).await {
Ok(backup_status) => {
// Collect all service names
for service_name in backup_status.services.keys() {
all_repositories.insert(service_name.clone());
}
// Calculate backup status
let backup_status_enum = Self::calculate_backup_status(&backup_status.status);
// Calculate usage status from disk space
let (usage_percent, used_gb, total_gb, usage_status) = if let Some(disk_space) = &backup_status.disk_space {
let usage_pct = disk_space.usage_percent as f32;
(
usage_pct,
disk_space.used_gb as f32,
disk_space.total_gb as f32,
Self::calculate_usage_status(usage_pct),
)
} else {
(0.0, 0.0, 0.0, Status::Unknown)
};
// Update worst status
worst_status = worst_status.max(backup_status_enum).max(usage_status);
// Build service list for this disk
let services: Vec<String> = backup_status.services.keys().cloned().collect();
// Get min and max archive counts to detect inconsistencies
let archives_min: i64 = backup_status.services.values()
.map(|service| service.archive_count)
.min()
.unwrap_or(0);
let archives_max: i64 = backup_status.services.values()
.map(|service| service.archive_count)
.max()
.unwrap_or(0);
// Create disk data
let disk_data = BackupDiskData {
serial: backup_status.disk_serial_number.unwrap_or_else(|| "Unknown".to_string()),
product_name: backup_status.disk_product_name,
wear_percent: backup_status.disk_wear_percent,
temperature_celsius: None, // Not available in current TOML
last_backup_time: Some(backup_status.start_time),
backup_status: backup_status_enum,
disk_usage_percent: usage_percent,
disk_used_gb: used_gb,
disk_total_gb: total_gb,
usage_status,
services,
archives_min,
archives_max,
};
disks.push(disk_data);
}
Err(e) => {
warn!("Failed to read backup status file {:?}: {}", status_file, e);
}
}
}
let repositories: Vec<String> = all_repositories.into_iter().collect();
agent_data.backup = BackupData {
repositories,
repository_status: worst_status,
disks,
};
Ok(())
}
}
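// Hedged sketch of a status file this collector now expects, e.g.
// /var/lib/backup/status/backup-status-<disk>.toml (field names mirror the
// BackupStatusToml fields read above; concrete values are invented):
//
// status = "success"
// start_time = "2025-12-04T03:00:00+01:00"
// disk_serial_number = "ABC123"
// disk_product_name = "Example SSD"
// disk_wear_percent = 4.0
//
// [disk_space]
// usage_percent = 42.0
// used_gb = 420.0
// total_gb = 1000.0
//
// [services.gitea]
// archive_count = 30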


@@ -119,36 +119,134 @@ impl CpuCollector {
utils::parse_u64(content.trim())
}
/// Collect static CPU information from /proc/cpuinfo (only once at startup)
async fn collect_cpu_info(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
let content = utils::read_proc_file("/proc/cpuinfo")?;

let mut model_name: Option<String> = None;
let mut core_count: u32 = 0;

for line in content.lines() {
if line.starts_with("model name") {
if let Some(colon_pos) = line.find(':') {
let full_name = line[colon_pos + 1..].trim();
// Extract just the model number (e.g., "i7-9700" from "Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz")
let model = Self::extract_cpu_model(full_name);
if model_name.is_none() {
model_name = Some(model);
}
}
} else if line.starts_with("processor") {
core_count += 1;
}
}

agent_data.system.cpu.model_name = model_name;
if core_count > 0 {
agent_data.system.cpu.core_count = Some(core_count);
}

Ok(())
}
/// Extract CPU model number from full model name
/// Examples:
/// - "Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz" -> "i7-9700"
/// - "12th Gen Intel(R) Core(TM) i7-12700K" -> "i7-12700K"
/// - "AMD Ryzen 9 5950X 16-Core Processor" -> "Ryzen 9 5950X"
fn extract_cpu_model(full_name: &str) -> String {
// Look for Intel Core patterns (both old and new gen): i3, i5, i7, i9
// Match pattern like "i7-12700K" or "i7-9700"
for prefix in &["i3-", "i5-", "i7-", "i9-"] {
if let Some(pos) = full_name.find(prefix) {
// Find end of model number (until space or end of string)
let after_prefix = &full_name[pos..];
let end = after_prefix.find(' ').unwrap_or(after_prefix.len());
return after_prefix[..end].to_string();
}
}
// Look for AMD Ryzen pattern
if let Some(pos) = full_name.find("Ryzen") {
// Extract "Ryzen X XXXX" pattern
let after_ryzen = &full_name[pos..];
let parts: Vec<&str> = after_ryzen.split_whitespace().collect();
if parts.len() >= 3 {
return format!("{} {} {}", parts[0], parts[1], parts[2]);
}
}
// Fallback: return first 15 characters or full name if shorter
if full_name.len() > 15 {
full_name[..15].to_string()
} else {
full_name.to_string()
}
}
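// Expected behaviour per the doc comment above (the truncation fallback line is
// illustrative):
// extract_cpu_model("Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz") -> "i7-9700"
// extract_cpu_model("12th Gen Intel(R) Core(TM) i7-12700K")    -> "i7-12700K"
// extract_cpu_model("AMD Ryzen 9 5950X 16-Core Processor")     -> "Ryzen 9 5950X"
// extract_cpu_model("UnrecognizedVendor Model 1234567890XYZ")  -> first 15 chars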
/// Collect CPU C-state (idle depth) and populate AgentData with top 3 C-states by usage
async fn collect_cstate(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
// Read C-state usage from first CPU (representative of overall system)
// C-states indicate CPU idle depth: C1=light sleep, C6=deep sleep, C10=deepest
let mut cstate_times: Vec<(String, u64)> = Vec::new();
let mut total_time: u64 = 0;
// Collect all C-state times from CPU0
for state_num in 0..=10 {
let time_path = format!("/sys/devices/system/cpu/cpu0/cpuidle/state{}/time", state_num);
let name_path = format!("/sys/devices/system/cpu/cpu0/cpuidle/state{}/name", state_num);
if let Ok(time_str) = utils::read_proc_file(&time_path) {
if let Ok(time) = utils::parse_u64(time_str.trim()) {
if let Ok(name) = utils::read_proc_file(&name_path) {
let state_name = name.trim();
// Skip POLL state (not real idle)
if state_name != "POLL" && time > 0 {
// Extract "C" + digits pattern (C3, C10, etc.) to reduce JSON size
// Handles formats like "C3_ACPI", "C10_MWAIT", etc.
let clean_name = if let Some(c_pos) = state_name.find('C') {
let rest = &state_name[c_pos + 1..];
let digit_count = rest.chars().take_while(|c| c.is_ascii_digit()).count();
if digit_count > 0 {
state_name[c_pos..c_pos + 1 + digit_count].to_string()
} else {
state_name.to_string()
}
} else {
state_name.to_string()
};
cstate_times.push((clean_name, time));
total_time += time;
}
}
}
} else {
// No more states available
break;
}
}

// Sort by time descending to get top 3
cstate_times.sort_by(|a, b| b.1.cmp(&a.1));
// Calculate percentages for top 3 and populate AgentData
agent_data.system.cpu.cstates = cstate_times
.iter()
.take(3)
.map(|(name, time)| {
let percent = if total_time > 0 {
(*time as f32 / total_time as f32) * 100.0
} else {
0.0
};
cm_dashboard_shared::CStateInfo {
name: name.clone(),
percent,
}
})
.collect();
Ok(())
}
}
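// Worked example of the percentage math above (hypothetical counters from
// /sys/devices/system/cpu/cpu0/cpuidle/state*/time, in microseconds):
// C10 = 900_000, C6 = 80_000, C1 = 20_000 -> total_time = 1_000_000, so the
// published top-3 list is C10 at 90.0%, C6 at 8.0%, C1 at 2.0%.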
@@ -159,14 +257,19 @@ impl Collector for CpuCollector {
debug!("Collecting CPU metrics"); debug!("Collecting CPU metrics");
let start = std::time::Instant::now(); let start = std::time::Instant::now();
// Collect static CPU info (only once at startup)
if agent_data.system.cpu.model_name.is_none() || agent_data.system.cpu.core_count.is_none() {
self.collect_cpu_info(agent_data).await?;
}
// Collect load averages (always available)
self.collect_load_averages(agent_data).await?;

// Collect temperature (optional)
self.collect_temperature(agent_data).await?;

// Collect C-state (CPU idle depth)
self.collect_cstate(agent_data).await?;

let duration = start.elapsed();
debug!("CPU collection completed in {:?}", duration);
@@ -179,8 +282,8 @@ impl Collector for CpuCollector {
);
}
// Calculate status using thresholds (use 5-minute average for stability)
agent_data.system.cpu.load_status = self.calculate_load_status(agent_data.system.cpu.load_5min);

agent_data.system.cpu.temperature_status = if let Some(temp) = agent_data.system.cpu.temperature_celsius {
self.calculate_temperature_status(temp)
} else {


@@ -3,10 +3,9 @@ use async_trait::async_trait;
use cm_dashboard_shared::{AgentData, DriveData, FilesystemData, PoolData, HysteresisThresholds, Status};
use crate::config::DiskConfig;
use tokio::process::Command as TokioCommand;
use std::process::Command as StdCommand;
use std::collections::HashMap;

use super::{Collector, CollectorError};
@@ -67,8 +66,9 @@ impl DiskCollector {
/// Collect all storage data and populate AgentData
async fn collect_storage_data(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
// Clear drives and pools to prevent duplicates when updating cached data
agent_data.system.storage.drives.clear();
agent_data.system.storage.pools.clear();

// Step 1: Get mount points and their backing devices
let mount_devices = self.get_mount_devices().await?;
@@ -104,17 +104,17 @@ impl DiskCollector {
self.populate_drives_data(&physical_drives, &smart_data, agent_data)?;
self.populate_pools_data(&mergerfs_pools, &smart_data, agent_data)?;

Ok(())
}
/// Get block devices and their mount points using lsblk
async fn get_mount_devices(&self) -> Result<HashMap<String, String>, CollectorError> {
use super::run_command_with_timeout;

let mut cmd = TokioCommand::new("lsblk");
cmd.args(&["-rn", "-o", "NAME,MOUNTPOINT"]);
let output = run_command_with_timeout(cmd, 2).await
.map_err(|e| CollectorError::SystemRead {
path: "block devices".to_string(),
error: e.to_string(),
})?;
@@ -138,7 +138,6 @@ impl DiskCollector {
}
}

Ok(mount_devices)
}
@@ -151,8 +150,8 @@ impl DiskCollector {
Ok((total, used)) => {
filesystem_usage.insert(mount_point.clone(), (total, used));
}
Err(_e) => {
// Silently skip filesystems we can't read
}
}
}
@@ -173,8 +172,6 @@ impl DiskCollector {
// Only add if we don't already have usage data for this mount point
if !filesystem_usage.contains_key(&mount_point) {
if let Ok((total, used)) = self.get_filesystem_info(&mount_point) {
filesystem_usage.insert(mount_point, (total, used));
}
}
@@ -186,8 +183,8 @@ impl DiskCollector {
/// Get filesystem info for a single mount point
fn get_filesystem_info(&self, mount_point: &str) -> Result<(u64, u64), CollectorError> {
let output = StdCommand::new("timeout")
.args(&["2", "df", "--block-size=1", mount_point])
.output()
.map_err(|e| CollectorError::SystemRead {
path: format!("df {}", mount_point),
@@ -249,9 +246,8 @@ impl DiskCollector {
} else {
mount_point.trim_start_matches('/').replace('/', "_")
};

if pool_name.is_empty() {
continue;
}
@@ -279,8 +275,7 @@ impl DiskCollector {
// Categorize as data vs parity drives
let (data_drives, parity_drives) = match self.categorize_pool_drives(&all_member_paths) {
Ok(drives) => drives,
Err(_e) => {
continue;
}
};
@@ -295,8 +290,7 @@ impl DiskCollector {
});
}
}

Ok(pools)
}
@@ -383,9 +377,9 @@ impl DiskCollector {
device.to_string()
}
/// Get SMART data for drives in parallel
async fn get_smart_data_for_drives(&self, physical_drives: &[PhysicalDrive], mergerfs_pools: &[MergerfsPool]) -> HashMap<String, SmartData> {
use futures::future::join_all;

// Collect all drive names
let mut all_drives = std::collections::HashSet::new();
@@ -401,9 +395,24 @@ impl DiskCollector {
}
}

// Collect SMART data for all drives in parallel
let futures: Vec<_> = all_drives
.iter()
.map(|drive_name| {
let drive = drive_name.clone();
async move {
let result = self.get_smart_data(&drive).await;
(drive, result)
}
})
.collect();

let results = join_all(futures).await;

// Build HashMap from results
let mut smart_data = HashMap::new();
for (drive_name, result) in results {
if let Ok(data) = result {
smart_data.insert(drive_name, data);
}
}
@@ -413,16 +422,18 @@ impl DiskCollector {
/// Get SMART data for a single drive
async fn get_smart_data(&self, drive_name: &str) -> Result<SmartData, CollectorError> {
use super::run_command_with_timeout;

// Use direct smartctl (no sudo) - service has CAP_SYS_RAWIO and CAP_SYS_ADMIN capabilities
// For NVMe drives, specify device type explicitly
let mut cmd = TokioCommand::new("smartctl");
if drive_name.starts_with("nvme") {
cmd.args(&["-d", "nvme", "-a", &format!("/dev/{}", drive_name)]);
} else {
cmd.args(&["-a", &format!("/dev/{}", drive_name)]);
}

let output = run_command_with_timeout(cmd, 3).await
.map_err(|e| CollectorError::SystemRead {
path: format!("SMART data for {}", drive_name),
error: e.to_string(),
@@ -430,8 +441,10 @@ impl DiskCollector {
let output_str = String::from_utf8_lossy(&output.stdout);

// Note: smartctl returns non-zero exit codes for warnings (like exit code 32
// for "temperature was high in the past"), but the output data is still valid.
// Only check if we got any output at all, don't reject based on exit code.
if output_str.is_empty() {
return Ok(SmartData {
health: "UNKNOWN".to_string(),
serial_number: None,
@@ -439,7 +452,7 @@ impl DiskCollector {
wear_percent: None,
});
}

let mut health = "UNKNOWN".to_string();
let mut serial_number = None;
let mut temperature = None;
@@ -757,9 +770,9 @@ impl DiskCollector {
/// Get drive information for a mount path
fn get_drive_info_for_path(&self, path: &str) -> anyhow::Result<PoolDrive> {
// Use lsblk to find the backing device with timeout
let output = StdCommand::new("timeout")
.args(&["2", "lsblk", "-rn", "-o", "NAME,MOUNTPOINT"])
.output()
.map_err(|e| anyhow::anyhow!("Failed to run lsblk: {}", e))?;
@@ -780,20 +793,13 @@ impl DiskCollector {
// Extract base device name (e.g., "sda1" -> "sda")
let base_device = self.extract_base_device(&format!("/dev/{}", device));

// Temperature will be filled in later from parallel SMART collection
// Don't collect it here to avoid sequential blocking with problematic async nesting
Ok(PoolDrive {
name: base_device,
mount_point: path.to_string(),
temperature_celsius: None,
})
}


@@ -105,12 +105,12 @@ impl MemoryCollector {
return Ok(());
}

// Get usage data for all tmpfs mounts at once using df (with 2 second timeout)
let mut df_args = vec!["2", "df", "--output=target,size,used", "--block-size=1"];
df_args.extend(tmpfs_mounts.iter().map(|s| s.as_str()));

let df_output = std::process::Command::new("timeout")
.args(&df_args[..])
.output()
.map_err(|e| CollectorError::SystemRead {
path: "tmpfs mounts".to_string(),
@@ -200,13 +200,16 @@ impl Collector for MemoryCollector {
debug!("Collecting memory metrics"); debug!("Collecting memory metrics");
let start = std::time::Instant::now(); let start = std::time::Instant::now();
// Clear tmpfs list to prevent duplicates when updating cached data
agent_data.system.memory.tmpfs.clear();
// Parse memory info from /proc/meminfo
let info = self.parse_meminfo().await?;

// Populate memory data directly
self.populate_memory_data(&info, agent_data).await?;

// Collect tmpfs data
self.populate_tmpfs_data(agent_data).await?;

let duration = start.elapsed();


@@ -1,6 +1,7 @@
use async_trait::async_trait;
use cm_dashboard_shared::{AgentData};
use std::process::Output;
use std::time::Duration;
pub mod backup;
pub mod cpu;
@@ -13,6 +14,38 @@ pub mod systemd;
pub use error::CollectorError;
/// Run a command with a timeout to prevent blocking
/// Properly kills the process if timeout is exceeded
pub async fn run_command_with_timeout(mut cmd: tokio::process::Command, timeout_secs: u64) -> std::io::Result<Output> {
use tokio::time::timeout;
use std::process::Stdio;
let timeout_duration = Duration::from_secs(timeout_secs);
// Configure stdio to capture output
cmd.stdout(Stdio::piped());
cmd.stderr(Stdio::piped());
let child = cmd.spawn()?;
let pid = child.id();
match timeout(timeout_duration, child.wait_with_output()).await {
Ok(result) => result,
Err(_) => {
// Timeout - force kill the process using system kill command
if let Some(process_id) = pid {
let _ = tokio::process::Command::new("kill")
.args(&["-9", &process_id.to_string()])
.output()
.await;
}
Err(std::io::Error::new(
std::io::ErrorKind::TimedOut,
format!("Command timed out after {} seconds", timeout_secs)
))
}
}
}
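// Usage sketch, mirroring how the disk collector above drives this helper: build a
// tokio Command, then bound it with a hard timeout so a hung binary cannot stall the
// whole collection loop (on timeout the child is SIGKILLed and ErrorKind::TimedOut
// is returned). The function name below is illustrative.
async fn _example_lsblk_with_timeout() -> std::io::Result<Output> {
let mut cmd = tokio::process::Command::new("lsblk");
cmd.args(&["-rn", "-o", "NAME,MOUNTPOINT"]);
run_command_with_timeout(cmd, 2).await
}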
/// Base trait for all collectors with direct structured data output
#[async_trait]


@@ -51,7 +51,7 @@ impl NetworkCollector {
/// Get the primary physical interface (the one with default route)
fn get_primary_physical_interface() -> Option<String> {
match Command::new("timeout").args(["2", "ip", "route", "show", "default"]).output() {
Ok(output) if output.status.success() => {
let output_str = String::from_utf8_lossy(&output.stdout);
// Parse: "default via 192.168.1.1 dev eno1 ..."
@@ -110,7 +110,7 @@ impl NetworkCollector {
// Parse VLAN configuration
let vlan_map = Self::parse_vlan_config();

match Command::new("timeout").args(["2", "ip", "-j", "addr"]).output() {
Ok(output) if output.status.success() => {
let json_str = String::from_utf8_lossy(&output.stdout);


@@ -43,8 +43,8 @@ impl NixOSCollector {
match fs::read_to_string("/etc/hostname") {
Ok(hostname) => Some(hostname.trim().to_string()),
Err(_) => {
// Fallback to hostname command (with 2 second timeout)
match Command::new("timeout").args(["2", "hostname"]).output() {
Ok(output) => Some(String::from_utf8_lossy(&output.stdout).trim().to_string()),
Err(_) => None,
}


@@ -4,7 +4,7 @@ use cm_dashboard_shared::{AgentData, ServiceData, SubServiceData, SubServiceMetr
use std::process::Command;
use std::sync::RwLock;
use std::time::Instant;
use tracing::{debug, info};

use super::{Collector, CollectorError};
use crate::config::SystemdConfig;
@@ -43,9 +43,10 @@ struct ServiceCacheState {
/// Cached service status information from systemctl list-units
#[derive(Debug, Clone)]
struct ServiceStatusInfo {
active_state: String,
memory_bytes: Option<u64>,
restart_count: Option<u32>,
start_timestamp: Option<u64>,
}
impl SystemdCollector {
@@ -86,14 +87,20 @@ impl SystemdCollector {
let mut complete_service_data = Vec::new();

for service_name in &monitored_services {
match self.get_service_status(service_name) {
Ok(status_info) => {
let mut sub_services = Vec::new();
// Calculate uptime if we have start timestamp
let uptime_seconds = status_info.start_timestamp.and_then(|start| {
let now = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.ok()?
.as_secs();
Some(now.saturating_sub(start))
});
// Sub-service metrics for specific services (always include cached results)
if service_name.contains("nginx") && status_info.active_state == "active" {
let nginx_sites = self.get_nginx_site_metrics();
for (site_name, latency_ms) in nginx_sites {
let site_status = if latency_ms >= 0.0 && latency_ms < self.config.nginx_latency_critical_ms {
@@ -113,11 +120,12 @@ impl SystemdCollector {
name: site_name.clone(),
service_status: self.calculate_service_status(&site_name, &site_status),
metrics,
service_type: "nginx_site".to_string(),
});
}
}

if service_name.contains("docker") && status_info.active_state == "active" {
let docker_containers = self.get_docker_containers();
for (container_name, container_status) in docker_containers {
// For now, docker containers have no additional metrics
@@ -128,23 +136,70 @@ impl SystemdCollector {
name: container_name.clone(),
service_status: self.calculate_service_status(&container_name, &container_status),
metrics,
service_type: "container".to_string(),
});
}

// Add Docker images
let docker_images = self.get_docker_images();
for (image_name, _image_status, image_size_mb) in docker_images {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: format!("{} size: {:.1} MB", image_name, image_size_mb),
service_status: Status::Info, // Informational only, no status icon
metrics,
service_type: "image".to_string(),
});
}
}
if service_name == "openvpn-vpn-download" && status_info.active_state == "active" {
// Add VPN route
if let Some(external_ip) = self.get_vpn_external_ip() {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: format!("route: {}", external_ip),
service_status: Status::Info,
metrics,
service_type: "vpn_route".to_string(),
});
}
// Add torrent stats
if let Some((active_count, download_mbps, upload_mbps)) = self.get_qbittorrent_stats() {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: format!("{} active, ↓ {:.1} MB/s, ↑ {:.1} MB/s", active_count, download_mbps, upload_mbps),
service_status: Status::Info,
metrics,
service_type: "torrent_stats".to_string(),
});
}
}
if service_name == "nftables" && status_info.active_state == "active" {
let (tcp_ports, udp_ports) = self.get_nftables_open_ports();
if !tcp_ports.is_empty() {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: format!("wan tcp: {}", tcp_ports),
service_status: Status::Info,
metrics,
service_type: "firewall_port".to_string(),
});
}
if !udp_ports.is_empty() {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: format!("wan udp: {}", udp_ports),
service_status: Status::Info,
metrics,
service_type: "firewall_port".to_string(),
});
}
}
@@ -152,11 +207,12 @@ impl SystemdCollector {
// Create complete service data
let service_data = ServiceData {
name: service_name.clone(),
user_stopped: false, // TODO: Integrate with service tracker
service_status: self.calculate_service_status(service_name, &status_info.active_state),
sub_services,
memory_bytes: status_info.memory_bytes,
restart_count: status_info.restart_count,
uptime_seconds,
};
// Add to AgentData and cache
@@ -251,18 +307,18 @@ impl SystemdCollector {
/// Auto-discover interesting services to monitor
fn discover_services_internal(&self) -> Result<(Vec<String>, std::collections::HashMap<String, ServiceStatusInfo>)> {
// First: Get all service unit files (with 3 second timeout)
let unit_files_output = Command::new("timeout")
.args(&["3", "systemctl", "list-unit-files", "--type=service", "--no-pager", "--plain"])
.output()?;

if !unit_files_output.status.success() {
return Err(anyhow::anyhow!("systemctl list-unit-files command failed"));
}

// Second: Get runtime status of all units (with 3 second timeout)
let units_status_output = Command::new("timeout")
.args(&["3", "systemctl", "list-units", "--type=service", "--all", "--no-pager", "--plain"])
.output()?;

if !units_status_output.status.success() {
@@ -292,14 +348,13 @@ impl SystemdCollector {
let fields: Vec<&str> = line.split_whitespace().collect();
if fields.len() >= 4 && fields[0].ends_with(".service") {
let service_name = fields[0].trim_end_matches(".service");
let active_state = fields.get(2).unwrap_or(&"unknown").to_string();

status_cache.insert(service_name.to_string(), ServiceStatusInfo {
active_state,
memory_bytes: None,
restart_count: None,
start_timestamp: None,
});
}
}
@@ -308,9 +363,10 @@ impl SystemdCollector {
for service_name in &all_service_names {
if !status_cache.contains_key(service_name) {
status_cache.insert(service_name.to_string(), ServiceStatusInfo {
active_state: "inactive".to_string(),
memory_bytes: None,
restart_count: None,
start_timestamp: None,
});
}
}
@@ -342,36 +398,60 @@ impl SystemdCollector {
Ok((services, status_cache))
}
/// Get service status with detailed metrics from systemctl
fn get_service_status(&self, service: &str) -> Result<ServiceStatusInfo> {
// Always fetch fresh data to get detailed metrics (memory, restarts, uptime)
// Note: Cache in service_status_cache only has basic active_state from discovery,
// with all detailed metrics set to None. We need fresh systemctl show data.
let output = Command::new("timeout")
.args(&[
"2",
"systemctl",
"show",
&format!("{}.service", service),
"--property=LoadState,ActiveState,SubState,MemoryCurrent,NRestarts,ExecMainStartTimestamp"
])
.output()?;

let output_str = String::from_utf8(output.stdout)?;

// Parse properties
let mut active_state = String::new();
let mut memory_bytes = None;
let mut restart_count = None;
let mut start_timestamp = None;

for line in output_str.lines() {
if let Some(value) = line.strip_prefix("ActiveState=") {
active_state = value.to_string();
} else if let Some(value) = line.strip_prefix("MemoryCurrent=") {
if value != "[not set]" {
memory_bytes = value.parse().ok();
}
} else if let Some(value) = line.strip_prefix("NRestarts=") {
restart_count = value.parse().ok();
} else if let Some(value) = line.strip_prefix("ExecMainStartTimestamp=") {
if value != "[not set]" && !value.is_empty() {
// Parse timestamp to seconds since epoch
if let Ok(output) = Command::new("date")
.args(&["+%s", "-d", value])
.output()
{
if let Ok(timestamp_str) = String::from_utf8(output.stdout) {
start_timestamp = timestamp_str.trim().parse().ok();
}
}
}
}
}

Ok(ServiceStatusInfo {
active_state,
memory_bytes,
restart_count,
start_timestamp,
})
}
/// Check if service name matches pattern (supports wildcards like nginx*)
@@ -413,75 +493,6 @@ impl SystemdCollector {
true
}
/// Get disk usage for a specific service
async fn get_service_disk_usage(&self, service_name: &str) -> Result<f32, CollectorError> {
// Check if this service has configured directory paths
if let Some(dirs) = self.config.service_directories.get(service_name) {
// Service has configured paths - use the first accessible one
for dir in dirs {
if let Some(size) = self.get_directory_size(dir) {
return Ok(size);
}
}
// If configured paths failed, return 0
return Ok(0.0);
}
// No configured path - try to get WorkingDirectory from systemctl
let output = Command::new("systemctl")
.args(&["show", &format!("{}.service", service_name), "--property=WorkingDirectory"])
.output()
.map_err(|e| CollectorError::SystemRead {
path: format!("WorkingDirectory for {}", service_name),
error: e.to_string(),
})?;
let output_str = String::from_utf8_lossy(&output.stdout);
for line in output_str.lines() {
if line.starts_with("WorkingDirectory=") && !line.contains("[not set]") {
let dir = line.strip_prefix("WorkingDirectory=").unwrap_or("");
if !dir.is_empty() && dir != "/" {
return Ok(self.get_directory_size(dir).unwrap_or(0.0));
}
}
}
Ok(0.0)
}
/// Get size of a directory in GB
fn get_directory_size(&self, path: &str) -> Option<f32> {
let output = Command::new("sudo")
.args(&["du", "-sb", path])
.output()
.ok()?;
if !output.status.success() {
// Log permission errors for debugging but don't spam logs
let stderr = String::from_utf8_lossy(&output.stderr);
if stderr.contains("Permission denied") {
debug!("Permission denied accessing directory: {}", path);
} else {
debug!("Failed to get size for directory {}: {}", path, stderr);
}
return None;
}
let output_str = String::from_utf8(output.stdout).ok()?;
let size_str = output_str.split_whitespace().next()?;
if let Ok(size_bytes) = size_str.parse::<u64>() {
let size_gb = size_bytes as f32 / (1024.0 * 1024.0 * 1024.0);
// Return size even if very small (minimum 0.001 GB = 1MB for visibility)
if size_gb > 0.0 {
Some(size_gb.max(0.001))
} else {
None
}
} else {
None
}
}
/// Calculate service status, taking user-stopped services into account
fn calculate_service_status(&self, service_name: &str, active_status: &str) -> Status {
match active_status.to_lowercase().as_str() {
@@ -499,33 +510,6 @@ impl SystemdCollector {
}
}
/// Get memory usage for a specific service
async fn get_service_memory_usage(&self, service_name: &str) -> Result<f32, CollectorError> {
let output = Command::new("systemctl")
.args(&["show", &format!("{}.service", service_name), "--property=MemoryCurrent"])
.output()
.map_err(|e| CollectorError::SystemRead {
path: format!("memory usage for {}", service_name),
error: e.to_string(),
})?;
let output_str = String::from_utf8_lossy(&output.stdout);
for line in output_str.lines() {
if line.starts_with("MemoryCurrent=") {
if let Some(mem_str) = line.strip_prefix("MemoryCurrent=") {
if mem_str != "[not set]" {
if let Ok(memory_bytes) = mem_str.parse::<u64>() {
return Ok(memory_bytes as f32 / (1024.0 * 1024.0)); // Convert to MB
}
}
}
}
}
Ok(0.0)
}
/// Check if service collection cache should be updated
fn should_update_cache(&self) -> bool {
let state = self.state.read().unwrap();
@@ -778,9 +762,9 @@ impl SystemdCollector {
let mut containers = Vec::new();

// Check if docker is available (cm-agent user is in docker group)
// Use -a to show ALL containers (running and stopped) with 3 second timeout
let output = Command::new("timeout")
.args(&["3", "docker", "ps", "-a", "--format", "{{.Names}},{{.Status}}"])
.output();

let output = match output {
@@ -819,11 +803,11 @@ impl SystemdCollector {
}

/// Get docker images as sub-services
fn get_docker_images(&self) -> Vec<(String, String, f32)> {
let mut images = Vec::new();

// Check if docker is available (cm-agent user is in docker group) with 3 second timeout
let output = Command::new("timeout")
.args(&["3", "docker", "images", "--format", "{{.Repository}}:{{.Tag}},{{.Size}}"])
.output();

let output = match output {
@@ -860,9 +844,8 @@ impl SystemdCollector {
let size_mb = self.parse_docker_size(size_str);
images.push((
image_name.to_string(),
"inactive".to_string(), // Images are informational - use inactive for neutral display
size_mb
));
} }
@@ -898,11 +881,221 @@ impl SystemdCollector {
_ => value, // Assume bytes if no unit
}
}
/// Get VPN external IP by querying through the vpn namespace
fn get_vpn_external_ip(&self) -> Option<String> {
let output = Command::new("timeout")
.args(&[
"5",
"sudo",
"ip",
"netns",
"exec",
"vpn",
"curl",
"-s",
"--max-time",
"4",
"https://ifconfig.me"
])
.output()
.ok()?;
if output.status.success() {
let ip = String::from_utf8_lossy(&output.stdout).trim().to_string();
if !ip.is_empty() && ip.contains('.') {
return Some(ip);
}
}
None
}
/// Get nftables open ports grouped by protocol
/// Returns: (tcp_ports_string, udp_ports_string)
fn get_nftables_open_ports(&self) -> (String, String) {
let output = Command::new("sudo")
.args(&["/run/current-system/sw/bin/nft", "list", "ruleset"])
.output();
let output = match output {
Ok(out) if out.status.success() => out,
Ok(out) => {
info!("nft command failed with status: {:?}, stderr: {}",
out.status, String::from_utf8_lossy(&out.stderr));
return (String::new(), String::new());
}
Err(e) => {
info!("Failed to execute nft command: {}", e);
return (String::new(), String::new());
}
};
let output_str = match String::from_utf8(output.stdout) {
Ok(s) => s,
Err(_) => {
info!("Failed to parse nft output as UTF-8");
return (String::new(), String::new());
}
};
let mut tcp_ports = std::collections::HashSet::new();
let mut udp_ports = std::collections::HashSet::new();
// Parse nftables output for WAN incoming accept rules with dport
// Looking for patterns like: tcp dport 22 accept or tcp dport { 22, 80, 443 } accept
// Only include rules in input_wan chain
let mut in_wan_chain = false;
for line in output_str.lines() {
let line = line.trim();
// Track if we're in the input_wan chain
if line.contains("chain input_wan") {
in_wan_chain = true;
continue;
}
// Reset when exiting chain (closing brace) or entering other chains
if line == "}" || (line.starts_with("chain ") && !line.contains("input_wan")) {
in_wan_chain = false;
continue;
}
// Only process rules in input_wan chain
if !in_wan_chain {
continue;
}
// Skip if not an accept rule
if !line.contains("accept") {
continue;
}
// Parse TCP ports
if line.contains("tcp dport") {
for port in self.extract_ports_from_nft_rule(line) {
tcp_ports.insert(port);
}
}
// Parse UDP ports
if line.contains("udp dport") {
for port in self.extract_ports_from_nft_rule(line) {
udp_ports.insert(port);
}
}
}
// Sort and format
let mut tcp_vec: Vec<u16> = tcp_ports.into_iter().collect();
let mut udp_vec: Vec<u16> = udp_ports.into_iter().collect();
tcp_vec.sort();
udp_vec.sort();
let tcp_str = tcp_vec.iter().map(|p| p.to_string()).collect::<Vec<_>>().join(", ");
let udp_str = udp_vec.iter().map(|p| p.to_string()).collect::<Vec<_>>().join(", ");
info!("nftables WAN ports - TCP: '{}', UDP: '{}'", tcp_str, udp_str);
(tcp_str, udp_str)
}
/// Extract port numbers from nftables rule line
/// Returns vector of ports (handles both single ports and sets)
fn extract_ports_from_nft_rule(&self, line: &str) -> Vec<u16> {
let mut ports = Vec::new();
// Pattern: "tcp dport 22 accept" or "tcp dport { 22, 80, 443 } accept"
if let Some(dport_pos) = line.find("dport") {
let after_dport = &line[dport_pos + 5..].trim();
// Handle port sets like { 22, 80, 443 }
if after_dport.starts_with('{') {
if let Some(end_brace) = after_dport.find('}') {
let ports_str = &after_dport[1..end_brace];
// Parse each port in the set
for port_str in ports_str.split(',') {
if let Ok(port) = port_str.trim().parse::<u16>() {
ports.push(port);
}
}
}
} else {
// Single port
if let Some(port_str) = after_dport.split_whitespace().next() {
if let Ok(port) = port_str.parse::<u16>() {
ports.push(port);
}
}
}
}
ports
}
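// Parsing sketch for the two rule shapes handled above (lines illustrative):
//   "tcp dport 22 accept"              -> [22]
//   "tcp dport { 22, 80, 443 } accept" -> [22, 80, 443]
// Note that port ranges such as "tcp dport 8000-8010 accept" fail the u16 parse
// and are therefore skipped, not expanded.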
/// Get aggregate qBittorrent torrent statistics
/// Returns: (active_count, download_mbps, upload_mbps)
fn get_qbittorrent_stats(&self) -> Option<(u32, f32, f32)> {
// Query qBittorrent API through VPN namespace
let output = Command::new("timeout")
.args(&[
"5",
"sudo",
"ip",
"netns",
"exec",
"vpn",
"curl",
"-s",
"--max-time",
"4",
"http://localhost:8080/api/v2/torrents/info"
])
.output()
.ok()?;
if !output.status.success() {
return None;
}
let output_str = String::from_utf8_lossy(&output.stdout);
let torrents: Vec<serde_json::Value> = serde_json::from_str(&output_str).ok()?;
let mut active_count = 0u32;
let mut total_download_bps = 0.0f64;
let mut total_upload_bps = 0.0f64;
for torrent in torrents {
let state = torrent["state"].as_str().unwrap_or("");
let dlspeed = torrent["dlspeed"].as_f64().unwrap_or(0.0);
let upspeed = torrent["upspeed"].as_f64().unwrap_or(0.0);
// States: downloading, uploading, stalledDL, stalledUP, queuedDL, queuedUP, pausedDL, pausedUP
// Count as active if downloading or uploading (seeding)
if state.contains("downloading") || state.contains("uploading") ||
state == "stalledDL" || state == "stalledUP" {
active_count += 1;
}
total_download_bps += dlspeed;
total_upload_bps += upspeed;
}
// qBittorrent returns bytes/s, convert to MB/s
let download_mbps = (total_download_bps / 1024.0 / 1024.0) as f32;
let upload_mbps = (total_upload_bps / 1024.0 / 1024.0) as f32;
Some((active_count, download_mbps, upload_mbps))
}
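// Aggregation sketch (hypothetical API response): two torrents with states
// "downloading" and "stalledUP", dlspeed 1_048_576 B/s on the first and upspeed
// 524_288 B/s on the second, yield active_count = 2, download_mbps = 1.0 and
// upload_mbps = 0.5.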
}
#[async_trait]
impl Collector for SystemdCollector {
async fn collect_structured(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
// Clear services to prevent duplicates when updating cached data
agent_data.services.clear();
// Use cached complete data if available and fresh
if let Some(cached_complete_services) = self.get_cached_complete_services() {
for service_data in cached_complete_services {


@@ -5,10 +5,9 @@ use zmq::{Context, Socket, SocketType};
use crate::config::ZmqConfig;

/// ZMQ communication handler for publishing metrics
pub struct ZmqHandler {
publisher: Socket,
}
impl ZmqHandler {
@@ -26,20 +25,8 @@ impl ZmqHandler {
publisher.set_sndhwm(1000)?; // High water mark for outbound messages
publisher.set_linger(1000)?; // Linger time on close
// Create command receiver socket (PULL socket to receive commands from dashboard)
let command_receiver = context.socket(SocketType::PULL)?;
let cmd_bind_address = format!("tcp://{}:{}", config.bind_address, config.command_port);
command_receiver.bind(&cmd_bind_address)?;
info!("ZMQ command receiver bound to {}", cmd_bind_address);
// Set non-blocking mode for command receiver
command_receiver.set_rcvtimeo(0)?; // Non-blocking receive
command_receiver.set_linger(1000)?;
Ok(Self {
publisher,
})
}
@@ -65,36 +52,4 @@ impl ZmqHandler {
Ok(())
}
/// Try to receive a command (non-blocking)
pub fn try_receive_command(&self) -> Result<Option<AgentCommand>> {
match self.command_receiver.recv_bytes(zmq::DONTWAIT) {
Ok(bytes) => {
debug!("Received command message ({} bytes)", bytes.len());
let command: AgentCommand = serde_json::from_slice(&bytes)
.map_err(|e| anyhow::anyhow!("Failed to deserialize command: {}", e))?;
debug!("Parsed command: {:?}", command);
Ok(Some(command))
}
Err(zmq::Error::EAGAIN) => {
// No message available (non-blocking)
Ok(None)
}
Err(e) => Err(anyhow::anyhow!("ZMQ receive error: {}", e)),
}
}
}
/// Commands that can be sent to the agent
#[derive(Debug, Clone, serde::Deserialize, serde::Serialize)]
pub enum AgentCommand {
/// Request immediate metric collection
CollectNow,
/// Change collection interval
SetInterval { seconds: u64 },
/// Enable/disable a collector
ToggleCollector { name: String, enabled: bool },
/// Request status/health check
Ping,
}


@@ -13,14 +13,12 @@ pub struct AgentConfig {
pub collectors: CollectorConfig,
pub cache: CacheConfig,
pub notifications: NotificationConfig,
}
/// ZMQ communication configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ZmqConfig {
pub publisher_port: u16,
pub bind_address: String,
pub transmission_interval_seconds: u64,
/// Heartbeat transmission interval in seconds for host connectivity detection


@@ -7,21 +7,13 @@ pub fn validate_config(config: &AgentConfig) -> Result<()> {
bail!("ZMQ publisher port cannot be 0");
}
if config.zmq.command_port == 0 {
bail!("ZMQ command port cannot be 0");
}
if config.zmq.publisher_port == config.zmq.command_port {
bail!("ZMQ publisher and command ports cannot be the same");
}
if config.zmq.bind_address.is_empty() {
bail!("ZMQ bind address cannot be empty");
}

// Validate ZMQ transmission interval
if config.zmq.transmission_interval_seconds == 0 {
bail!("ZMQ transmission interval cannot be 0");
}
// Validate CPU thresholds


@@ -1,6 +1,6 @@
[package] [package]
name = "cm-dashboard" name = "cm-dashboard"
version = "0.1.185" version = "0.1.257"
edition = "2021" edition = "2021"
[dependencies] [dependencies]

View File

@@ -215,7 +215,7 @@ impl Dashboard {
         // Update TUI with new metrics (only if not headless)
         if let Some(ref mut tui_app) = self.tui_app {
-            tui_app.update_metrics(&self.metric_store);
+            tui_app.update_metrics(&mut self.metric_store);
         }
     }

View File

@@ -5,6 +5,14 @@ use tracing::{debug, info, warn};
 use super::MetricDataPoint;

+/// ZMQ communication statistics per host
+#[derive(Debug, Clone)]
+pub struct ZmqStats {
+    pub packets_received: u64,
+    pub last_packet_time: Instant,
+    pub last_packet_age_secs: f64,
+}
+
 /// Central metric storage for the dashboard
 pub struct MetricStore {
     /// Current structured data: hostname -> AgentData
@@ -13,6 +21,8 @@ pub struct MetricStore {
     historical_metrics: HashMap<String, Vec<MetricDataPoint>>,
     /// Last heartbeat timestamp per host
     last_heartbeat: HashMap<String, Instant>,
+    /// ZMQ communication statistics per host
+    zmq_stats: HashMap<String, ZmqStats>,
     /// Configuration
     max_metrics_per_host: usize,
     history_retention: Duration,
@@ -24,6 +34,7 @@ impl MetricStore {
             current_agent_data: HashMap::new(),
             historical_metrics: HashMap::new(),
             last_heartbeat: HashMap::new(),
+            zmq_stats: HashMap::new(),
             max_metrics_per_host,
             history_retention: Duration::from_secs(history_retention_hours * 3600),
         }
@@ -44,6 +55,16 @@ impl MetricStore {
         self.last_heartbeat.insert(hostname.clone(), now);
         debug!("Updated heartbeat for host {}", hostname);

+        // Update ZMQ stats
+        let stats = self.zmq_stats.entry(hostname.clone()).or_insert(ZmqStats {
+            packets_received: 0,
+            last_packet_time: now,
+            last_packet_age_secs: 0.0,
+        });
+        stats.packets_received += 1;
+        stats.last_packet_time = now;
+        stats.last_packet_age_secs = 0.0; // Just received
+
         // Add to history
         let host_history = self
             .historical_metrics
@@ -65,6 +86,15 @@ impl MetricStore {
         self.current_agent_data.get(hostname)
     }

+    /// Get ZMQ communication statistics for a host
+    pub fn get_zmq_stats(&mut self, hostname: &str) -> Option<ZmqStats> {
+        let now = Instant::now();
+        self.zmq_stats.get_mut(hostname).map(|stats| {
+            // Update packet age
+            stats.last_packet_age_secs = now.duration_since(stats.last_packet_time).as_secs_f64();
+            stats.clone()
+        })
+    }
+
     /// Get connected hosts (hosts with recent heartbeats)
     pub fn get_connected_hosts(&self, timeout: Duration) -> Vec<String> {
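Two details of the ZmqStats bookkeeping are worth noting: the first packet from a host lazily inserts a zeroed entry via entry().or_insert(), and get_zmq_stats recomputes last_packet_age_secs on every read, which is why it takes &mut self and why the &mut ripples into update_metrics in the Dashboard and TuiApp hunks. A reduced sketch of both mechanics, using illustrative types and std only:

use std::collections::HashMap;
use std::time::Instant;

struct Stats { packets: u64, last_packet: Instant, age_secs: f64 }

struct Store { stats: HashMap<String, Stats> }

impl Store {
    fn record(&mut self, host: &str) {
        let now = Instant::now();
        // First packet inserts a zeroed entry; later packets reuse it.
        let s = self.stats.entry(host.to_string()).or_insert(Stats {
            packets: 0, last_packet: now, age_secs: 0.0,
        });
        s.packets += 1;      // packets_received
        s.last_packet = now; // reset the age baseline
    }

    // Refreshes derived state on read, hence &mut self.
    fn age(&mut self, host: &str) -> Option<f64> {
        self.stats.get_mut(host).map(|s| {
            s.age_secs = s.last_packet.elapsed().as_secs_f64();
            s.age_secs
        })
    }
}

fn main() {
    let mut store = Store { stats: HashMap::new() };
    store.record("host1");
    assert_eq!(store.age("host1").map(|a| a >= 0.0), Some(true));
}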

View File

@@ -100,7 +100,7 @@ impl TuiApp {
     }

     /// Update widgets with structured data from store (only for current host)
-    pub fn update_metrics(&mut self, metric_store: &MetricStore) {
+    pub fn update_metrics(&mut self, metric_store: &mut MetricStore) {
         if let Some(hostname) = self.current_host.clone() {
             // Get structured data for this host
             if let Some(agent_data) = metric_store.get_agent_data(&hostname) {
@@ -556,7 +556,14 @@ impl TuiApp {
                 ));
                 if Some(host) == self.current_host.as_ref() {
-                    // Selected host in bold background color against status background
+                    // Selected host with brackets in bold background color against status background
+                    host_spans.push(Span::styled(
+                        "[",
+                        Style::default()
+                            .fg(Theme::background())
+                            .bg(background_color)
+                            .add_modifier(Modifier::BOLD),
+                    ));
                     host_spans.push(Span::styled(
                         host.clone(),
                         Style::default()
@@ -564,6 +571,13 @@ impl TuiApp {
                             .bg(background_color)
                             .add_modifier(Modifier::BOLD),
                     ));
+                    host_spans.push(Span::styled(
+                        "]",
+                        Style::default()
+                            .fg(Theme::background())
+                            .bg(background_color)
+                            .add_modifier(Modifier::BOLD),
+                    ));
                 } else {
                     // Other hosts in normal background color against status background
                     host_spans.push(Span::styled(
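Stripped of span styling, the bracket logic above yields a title bar like "alpha [beta] gamma" with only the selected host bracketed. A plain-string reduction to make the intended output concrete (the helper is illustrative, not dashboard code):

// Illustrative reduction of the selected-host brackets to plain strings.
fn title_bar(hosts: &[&str], selected: &str) -> String {
    hosts
        .iter()
        .map(|h| if *h == selected { format!("[{}]", h) } else { (*h).to_string() })
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    assert_eq!(title_bar(&["alpha", "beta", "gamma"], "beta"), "alpha [beta] gamma");
}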

View File

@@ -142,6 +142,7 @@ impl Theme {
     /// Get color for status level
     pub fn status_color(status: Status) -> Color {
         match status {
+            Status::Info => Self::muted_text(), // Gray for informational data
             Status::Ok => Self::success(),
             Status::Inactive => Self::muted_text(), // Gray for inactive services in service list
             Status::Pending => Self::highlight(), // Blue for pending
@@ -240,6 +241,7 @@ impl StatusIcons {
     /// Get status icon symbol
     pub fn get_icon(status: Status) -> &'static str {
         match status {
+            Status::Info => "", // No icon for informational data
             Status::Ok => "",
             Status::Inactive => "", // Empty circle for inactive services
             Status::Pending => "", // Hollow circle for pending
@@ -254,6 +256,7 @@ impl StatusIcons {
     pub fn create_status_spans(status: Status, text: &str) -> Vec<ratatui::text::Span<'static>> {
         let icon = Self::get_icon(status);
         let status_color = match status {
+            Status::Info => Theme::muted_text(), // Gray for info
             Status::Ok => Theme::success(), // Green
             Status::Inactive => Theme::muted_text(), // Gray for inactive services
             Status::Pending => Theme::highlight(), // Blue

View File

@@ -11,6 +11,74 @@ use tracing::debug;
 use crate::ui::theme::{Components, StatusIcons, Theme, Typography};
 use ratatui::style::Style;

+/// Column visibility configuration based on terminal width
+#[derive(Debug, Clone, Copy)]
+struct ColumnVisibility {
+    show_name: bool,
+    show_status: bool,
+    show_ram: bool,
+    show_uptime: bool,
+    show_restarts: bool,
+}
+
+impl ColumnVisibility {
+    /// Calculate actual width needed for all columns
+    const NAME_WIDTH: u16 = 23;
+    const STATUS_WIDTH: u16 = 10;
+    const RAM_WIDTH: u16 = 8;
+    const UPTIME_WIDTH: u16 = 8;
+    const RESTARTS_WIDTH: u16 = 5;
+    const COLUMN_SPACING: u16 = 1; // Space between columns
+
+    /// Determine which columns to show based on available width
+    /// Priority order: Name > Status > RAM > Uptime > Restarts
+    fn from_width(width: u16) -> Self {
+        // Calculate cumulative widths for each configuration
+        let minimal = Self::NAME_WIDTH + Self::COLUMN_SPACING + Self::STATUS_WIDTH; // 34
+        let with_ram = minimal + Self::COLUMN_SPACING + Self::RAM_WIDTH; // 43
+        let with_uptime = with_ram + Self::COLUMN_SPACING + Self::UPTIME_WIDTH; // 52
+        let full = with_uptime + Self::COLUMN_SPACING + Self::RESTARTS_WIDTH; // 58
+
+        if width >= full {
+            // Show all columns
+            Self {
+                show_name: true,
+                show_status: true,
+                show_ram: true,
+                show_uptime: true,
+                show_restarts: true,
+            }
+        } else if width >= with_uptime {
+            // Hide restarts
+            Self {
+                show_name: true,
+                show_status: true,
+                show_ram: true,
+                show_uptime: true,
+                show_restarts: false,
+            }
+        } else if width >= with_ram {
+            // Hide uptime and restarts
+            Self {
+                show_name: true,
+                show_status: true,
+                show_ram: true,
+                show_uptime: false,
+                show_restarts: false,
+            }
+        } else {
+            // Minimal: Name + Status only
+            Self {
+                show_name: true,
+                show_status: true,
+                show_ram: false,
+                show_uptime: false,
+                show_restarts: false,
+            }
+        }
+    }
+}
+
 /// Services widget displaying hierarchical systemd service statuses
 #[derive(Clone)]
 pub struct ServicesWidget {
@@ -28,10 +96,12 @@ pub struct ServicesWidget {
 #[derive(Clone)]
 struct ServiceInfo {
-    memory_mb: Option<f32>,
-    disk_gb: Option<f32>,
     metrics: Vec<(String, f32, Option<String>)>, // (label, value, unit)
     widget_status: Status,
+    service_type: String, // "nginx_site", "container", "image", or empty for parent services
+    memory_bytes: Option<u64>,
+    restart_count: Option<u32>,
+    uptime_seconds: Option<u64>,
 }

 impl ServicesWidget {
@@ -51,8 +121,6 @@ impl ServicesWidget {
         if metric_name.starts_with("service_") {
             if let Some(end_pos) = metric_name
                 .rfind("_status")
-                .or_else(|| metric_name.rfind("_memory_mb"))
-                .or_else(|| metric_name.rfind("_disk_gb"))
                 .or_else(|| metric_name.rfind("_latency_ms"))
             {
                 let service_part = &metric_name[8..end_pos]; // Remove "service_" prefix
@@ -75,47 +143,22 @@
         None
     }

-    /// Format disk size with appropriate units (kB/MB/GB)
-    fn format_disk_size(size_gb: f32) -> String {
-        let size_mb = size_gb * 1024.0; // Convert GB to MB
-        if size_mb >= 1024.0 {
-            // Show as GB
-            format!("{:.1}GB", size_gb)
-        } else if size_mb >= 1.0 {
-            // Show as MB
-            format!("{:.0}MB", size_mb)
-        } else if size_mb >= 0.001 {
-            // Convert to kB
-            let size_kb = size_mb * 1024.0;
-            format!("{:.0}kB", size_kb)
-        } else {
-            // Show very small sizes as bytes
-            let size_bytes = size_mb * 1024.0 * 1024.0;
-            format!("{:.0}B", size_bytes)
-        }
-    }
-
     /// Format parent service line - returns text without icon for span formatting
-    fn format_parent_service_line(&self, name: &str, info: &ServiceInfo) -> String {
-        let memory_str = info
-            .memory_mb
-            .map_or("0M".to_string(), |m| format!("{:.0}M", m));
-        let disk_str = info
-            .disk_gb
-            .map_or("0".to_string(), |d| Self::format_disk_size(d));
-        // Truncate long service names to fit layout (account for icon space)
-        let short_name = if name.len() > 22 {
-            format!("{}...", &name[..19])
+    fn format_parent_service_line(&self, name: &str, info: &ServiceInfo, columns: ColumnVisibility) -> String {
+        // Truncate long service names to fit layout
+        // NAME_WIDTH - 3 chars for "..." = max displayable chars
+        let max_name_len = (ColumnVisibility::NAME_WIDTH - 3) as usize;
+        let short_name = if name.len() > max_name_len {
+            format!("{}...", &name[..max_name_len.saturating_sub(3)])
         } else {
             name.to_string()
         };

         // Convert Status enum to display text
         let status_str = match info.widget_status {
+            Status::Info => "", // Shouldn't happen for parent services
             Status::Ok => "active",
             Status::Inactive => "inactive",
             Status::Critical => "failed",
             Status::Pending => "pending",
             Status::Warning => "warning",
@@ -123,10 +166,59 @@
             Status::Offline => "offline",
         };

-        format!(
-            "{:<23} {:<10} {:<8} {:<8}",
-            short_name, status_str, memory_str, disk_str
-        )
+        // Format memory
+        let memory_str = info.memory_bytes.map_or("-".to_string(), |bytes| {
+            let mb = bytes as f64 / (1024.0 * 1024.0);
+            if mb >= 1000.0 {
+                format!("{:.1}G", mb / 1024.0)
+            } else {
+                format!("{:.0}M", mb)
+            }
+        });
+
+        // Format uptime
+        let uptime_str = info.uptime_seconds.map_or("-".to_string(), |secs| {
+            let days = secs / 86400;
+            let hours = (secs % 86400) / 3600;
+            let mins = (secs % 3600) / 60;
+            if days > 0 {
+                format!("{}d{}h", days, hours)
+            } else if hours > 0 {
+                format!("{}h{}m", hours, mins)
+            } else {
+                format!("{}m", mins)
+            }
+        });
+
+        // Format restarts (show "!" if > 0 to indicate instability)
+        let restart_str = info.restart_count.map_or("-".to_string(), |count| {
+            if count > 0 {
+                format!("!{}", count)
+            } else {
+                "0".to_string()
+            }
+        });
+
+        // Build format string based on column visibility
+        let mut parts = Vec::new();
+        if columns.show_name {
+            parts.push(format!("{:<width$}", short_name, width = ColumnVisibility::NAME_WIDTH as usize));
+        }
+        if columns.show_status {
+            parts.push(format!("{:<width$}", status_str, width = ColumnVisibility::STATUS_WIDTH as usize));
+        }
+        if columns.show_ram {
+            parts.push(format!("{:<width$}", memory_str, width = ColumnVisibility::RAM_WIDTH as usize));
+        }
+        if columns.show_uptime {
+            parts.push(format!("{:<width$}", uptime_str, width = ColumnVisibility::UPTIME_WIDTH as usize));
+        }
+        if columns.show_restarts {
+            parts.push(format!("{:<width$}", restart_str, width = ColumnVisibility::RESTARTS_WIDTH as usize));
+        }
+        parts.join(" ")
     }
@@ -138,9 +230,12 @@
         info: &ServiceInfo,
         is_last: bool,
     ) -> Vec<ratatui::text::Span<'static>> {
+        // Informational sub-services (Status::Info) can use more width since they don't show columns
+        let max_width = if info.widget_status == Status::Info { 50 } else { 18 };
+
         // Truncate long sub-service names to fit layout (accounting for indentation)
-        let short_name = if name.len() > 18 {
-            format!("{}...", &name[..15])
+        let short_name = if name.len() > max_width {
+            format!("{}...", &name[..(max_width.saturating_sub(3))])
         } else {
             name.to_string()
         };
@@ -148,6 +243,7 @@
         // Get status icon and text
         let icon = StatusIcons::get_icon(info.widget_status);
         let status_color = match info.widget_status {
+            Status::Info => Theme::muted_text(),
             Status::Ok => Theme::success(),
             Status::Inactive => Theme::muted_text(),
             Status::Pending => Theme::highlight(),
@@ -168,8 +264,9 @@
         } else {
             // Convert Status enum to display text for sub-services
             match info.widget_status {
+                Status::Info => "",
                 Status::Ok => "active",
                 Status::Inactive => "inactive",
                 Status::Critical => "failed",
                 Status::Pending => "pending",
                 Status::Warning => "warning",
@@ -179,32 +276,62 @@
         };

         let tree_symbol = if is_last { "└─" } else { "├─" };

-        vec![
-            // Indentation and tree prefix
-            ratatui::text::Span::styled(
-                format!(" {} ", tree_symbol),
-                Typography::tree(),
-            ),
-            // Status icon
-            ratatui::text::Span::styled(
-                format!("{} ", icon),
-                Style::default().fg(status_color).bg(Theme::background()),
-            ),
-            // Service name
-            ratatui::text::Span::styled(
-                format!("{:<18} ", short_name),
-                Style::default()
-                    .fg(Theme::secondary_text())
-                    .bg(Theme::background()),
-            ),
-            // Status/latency text
-            ratatui::text::Span::styled(
-                status_str,
-                Style::default()
-                    .fg(Theme::secondary_text())
-                    .bg(Theme::background()),
-            ),
-        ]
+        if info.widget_status == Status::Info {
+            // Informational data - no status icon, show metrics if available
+            let mut spans = vec![
+                // Indentation and tree prefix
+                ratatui::text::Span::styled(
+                    format!(" {} ", tree_symbol),
+                    Typography::tree(),
+                ),
+                // Service name (no icon) - no fixed width padding for Info status
+                ratatui::text::Span::styled(
+                    short_name,
+                    Style::default()
+                        .fg(Theme::secondary_text())
+                        .bg(Theme::background()),
+                ),
+            ];
+
+            // Add metrics if available (e.g., Docker image size)
+            if !status_str.is_empty() {
+                spans.push(ratatui::text::Span::styled(
+                    status_str,
+                    Style::default()
+                        .fg(Theme::secondary_text())
+                        .bg(Theme::background()),
+                ));
+            }
+            spans
+        } else {
+            vec![
+                // Indentation and tree prefix
+                ratatui::text::Span::styled(
+                    format!(" {} ", tree_symbol),
+                    Typography::tree(),
+                ),
+                // Status icon
+                ratatui::text::Span::styled(
+                    format!("{} ", icon),
+                    Style::default().fg(status_color).bg(Theme::background()),
+                ),
+                // Service name
+                ratatui::text::Span::styled(
+                    format!("{:<18} ", short_name),
+                    Style::default()
+                        .fg(Theme::secondary_text())
+                        .bg(Theme::background()),
+                ),
+                // Status/latency text
+                ratatui::text::Span::styled(
+                    status_str,
+                    Style::default()
+                        .fg(Theme::secondary_text())
+                        .bg(Theme::background()),
+                ),
+            ]
+        }
     }
     /// Move selection up
@@ -278,13 +405,15 @@ impl Widget for ServicesWidget {
         for service in &agent_data.services {
             // Store parent service
             let parent_info = ServiceInfo {
-                memory_mb: Some(service.memory_mb),
-                disk_gb: Some(service.disk_gb),
                 metrics: Vec::new(), // Parent services don't have custom metrics
                 widget_status: service.service_status,
+                service_type: String::new(), // Parent services have no type
+                memory_bytes: service.memory_bytes,
+                restart_count: service.restart_count,
+                uptime_seconds: service.uptime_seconds,
             };
             self.parent_services.insert(service.name.clone(), parent_info);

             // Process sub-services if any
             if !service.sub_services.is_empty() {
                 let mut sub_list = Vec::new();
@@ -293,12 +422,14 @@
                     let metrics: Vec<(String, f32, Option<String>)> = sub_service.metrics.iter()
                         .map(|m| (m.label.clone(), m.value, m.unit.clone()))
                         .collect();

                     let sub_info = ServiceInfo {
-                        memory_mb: None, // Not used for sub-services
-                        disk_gb: None, // Not used for sub-services
                         metrics,
                         widget_status: sub_service.service_status,
+                        service_type: sub_service.service_type.clone(),
+                        memory_bytes: None, // Sub-services don't have individual metrics yet
+                        restart_count: None,
+                        uptime_seconds: None,
                     };
                     sub_list.push((sub_service.name.clone(), sub_info));
                 }
@@ -338,22 +469,16 @@
         self.parent_services
             .entry(parent_service)
             .or_insert(ServiceInfo {
-                memory_mb: None,
-                disk_gb: None,
                 metrics: Vec::new(),
                 widget_status: Status::Unknown,
+                service_type: String::new(),
+                memory_bytes: None,
+                restart_count: None,
+                uptime_seconds: None,
             });

         if metric.name.ends_with("_status") {
             service_info.widget_status = metric.status;
-        } else if metric.name.ends_with("_memory_mb") {
-            if let Some(memory) = metric.value.as_f32() {
-                service_info.memory_mb = Some(memory);
-            }
-        } else if metric.name.ends_with("_disk_gb") {
-            if let Some(disk) = metric.value.as_f32() {
-                service_info.disk_gb = Some(disk);
-            }
         }
     }
     Some(sub_name) => {
@@ -373,10 +498,12 @@
                 sub_service_list.push((
                     sub_name.clone(),
                     ServiceInfo {
-                        memory_mb: None,
-                        disk_gb: None,
                         metrics: Vec::new(),
                         widget_status: Status::Unknown,
+                        service_type: String::new(), // Unknown type in legacy path
+                        memory_bytes: None,
+                        restart_count: None,
+                        uptime_seconds: None,
                     },
                 ));
                 &mut sub_service_list.last_mut().unwrap().1
@@ -384,14 +511,6 @@
             if metric.name.ends_with("_status") {
                 sub_service_info.widget_status = metric.status;
-            } else if metric.name.ends_with("_memory_mb") {
-                if let Some(memory) = metric.value.as_f32() {
-                    sub_service_info.memory_mb = Some(memory);
-                }
-            } else if metric.name.ends_with("_disk_gb") {
-                if let Some(disk) = metric.value.as_f32() {
-                    sub_service_info.disk_gb = Some(disk);
-                }
             }
         }
     }
@@ -448,11 +567,28 @@
             .constraints([Constraint::Length(1), Constraint::Min(0)])
             .split(inner_area);

-        // Header
-        let header = format!(
-            "{:<25} {:<10} {:<8} {:<8}",
-            "Service:", "Status:", "RAM:", "Disk:"
-        );
+        // Determine which columns to show based on available width
+        let columns = ColumnVisibility::from_width(inner_area.width);
+
+        // Build header based on visible columns
+        let mut header_parts = Vec::new();
+        if columns.show_name {
+            header_parts.push(format!("{:<width$}", "Service:", width = ColumnVisibility::NAME_WIDTH as usize));
+        }
+        if columns.show_status {
+            header_parts.push(format!("{:<width$}", "Status:", width = ColumnVisibility::STATUS_WIDTH as usize));
+        }
+        if columns.show_ram {
+            header_parts.push(format!("{:<width$}", "RAM:", width = ColumnVisibility::RAM_WIDTH as usize));
+        }
+        if columns.show_uptime {
+            header_parts.push(format!("{:<width$}", "Uptime:", width = ColumnVisibility::UPTIME_WIDTH as usize));
+        }
+        if columns.show_restarts {
+            header_parts.push(format!("{:<width$}", "↻:", width = ColumnVisibility::RESTARTS_WIDTH as usize));
+        }
+        let header = header_parts.join(" ");
+
         let header_para = Paragraph::new(header).style(Typography::muted());
         frame.render_widget(header_para, content_chunks[0]);
@@ -464,11 +600,11 @@
         }

         // Render the services list
-        self.render_services(frame, content_chunks[1], is_focused);
+        self.render_services(frame, content_chunks[1], is_focused, columns);
     }

     /// Render services list
-    fn render_services(&mut self, frame: &mut Frame, area: Rect, is_focused: bool) {
+    fn render_services(&mut self, frame: &mut Frame, area: Rect, is_focused: bool, columns: ColumnVisibility) {
         // Build hierarchical service list for display
         let mut display_lines: Vec<(String, Status, bool, Option<(ServiceInfo, bool)>)> = Vec::new();
@@ -478,7 +614,7 @@
         for (parent_name, parent_info) in parent_services {
             // Add parent service line
-            let parent_line = self.format_parent_service_line(parent_name, parent_info);
+            let parent_line = self.format_parent_service_line(parent_name, parent_info, columns);
             display_lines.push((parent_line, parent_info.widget_status, false, None));

             // Add sub-services for this parent (if any)
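The breakpoints computed in from_width come out to 34, 43, 52, and 58 columns, and each step down drops the lowest-priority column first (restarts, then uptime, then RAM; name and status always stay). A compact check of that behavior, with a simplified struct standing in for ColumnVisibility:

// Sketch of the width breakpoints above; only the optional columns vary.
#[derive(Debug, PartialEq)]
struct Cols { ram: bool, uptime: bool, restarts: bool }

fn from_width(width: u16) -> Cols {
    Cols {
        ram: width >= 43,      // NAME + STATUS + RAM plus spacing
        uptime: width >= 52,   // ... plus UPTIME
        restarts: width >= 58, // ... plus RESTARTS
    }
}

fn main() {
    assert_eq!(from_width(40), Cols { ram: false, uptime: false, restarts: false });
    assert_eq!(from_width(45), Cols { ram: true, uptime: false, restarts: false });
    assert_eq!(from_width(80), Cols { ram: true, uptime: true, restarts: true });
}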

View File

@@ -22,7 +22,9 @@ pub struct SystemWidget {
     cpu_load_1min: Option<f32>,
     cpu_load_5min: Option<f32>,
     cpu_load_15min: Option<f32>,
-    cpu_frequency: Option<f32>,
+    cpu_cstates: Vec<cm_dashboard_shared::CStateInfo>,
+    cpu_model_name: Option<String>,
+    cpu_core_count: Option<u32>,
     cpu_status: Status,

     // Memory metrics
@@ -41,15 +43,9 @@ pub struct SystemWidget {
     storage_pools: Vec<StoragePool>,

     // Backup metrics
-    backup_status: String,
-    backup_start_time_raw: Option<String>,
-    backup_disk_serial: Option<String>,
-    backup_disk_usage_percent: Option<f32>,
-    backup_disk_used_gb: Option<f32>,
-    backup_disk_total_gb: Option<f32>,
-    backup_disk_wear_percent: Option<f32>,
-    backup_disk_temperature: Option<f32>,
-    backup_last_size_gb: Option<f32>,
+    backup_repositories: Vec<String>,
+    backup_repository_status: Status,
+    backup_disks: Vec<cm_dashboard_shared::BackupDiskData>,

     // Overall status
     has_data: bool,
@@ -96,7 +92,9 @@ impl SystemWidget {
             cpu_load_1min: None,
             cpu_load_5min: None,
             cpu_load_15min: None,
-            cpu_frequency: None,
+            cpu_cstates: Vec::new(),
+            cpu_model_name: None,
+            cpu_core_count: None,
             cpu_status: Status::Unknown,
             memory_usage_percent: None,
             memory_used_gb: None,
@@ -108,15 +106,9 @@ impl SystemWidget {
             tmp_status: Status::Unknown,
             tmpfs_mounts: Vec::new(),
             storage_pools: Vec::new(),
-            backup_status: "unknown".to_string(),
-            backup_start_time_raw: None,
-            backup_disk_serial: None,
-            backup_disk_usage_percent: None,
-            backup_disk_used_gb: None,
-            backup_disk_total_gb: None,
-            backup_disk_wear_percent: None,
-            backup_disk_temperature: None,
-            backup_last_size_gb: None,
+            backup_repositories: Vec::new(),
+            backup_repository_status: Status::Unknown,
+            backup_disks: Vec::new(),
             has_data: false,
         }
     }
@@ -131,12 +123,19 @@ impl SystemWidget {
         }
     }

-    /// Format CPU frequency
-    fn format_cpu_frequency(&self) -> String {
-        match self.cpu_frequency {
-            Some(freq) => format!("{:.0} MHz", freq),
-            None => "— MHz".to_string(),
+    /// Format CPU C-states (idle depth) with percentages
+    fn format_cpu_cstate(&self) -> String {
+        if self.cpu_cstates.is_empty() {
+            return "".to_string();
         }
+
+        // Format top 3 C-states with percentages: "C10:79% C8:10% C6:8%"
+        // Agent already sends clean names (C3, C10, etc.)
+        self.cpu_cstates
+            .iter()
+            .map(|cs| format!("{}:{:.0}%", cs.name, cs.percent))
+            .collect::<Vec<_>>()
+            .join(" ")
     }

     /// Format memory usage
@@ -176,7 +175,9 @@ impl Widget for SystemWidget {
         self.cpu_load_1min = Some(cpu.load_1min);
         self.cpu_load_5min = Some(cpu.load_5min);
         self.cpu_load_15min = Some(cpu.load_15min);
-        self.cpu_frequency = Some(cpu.frequency_mhz);
+        self.cpu_cstates = cpu.cstates.clone();
+        self.cpu_model_name = cpu.model_name.clone();
+        self.cpu_core_count = cpu.core_count;
         self.cpu_status = Status::Ok;

         // Extract memory data directly
@@ -202,25 +203,9 @@
         // Extract backup data
         let backup = &agent_data.backup;
-        self.backup_status = backup.status.clone();
-        self.backup_start_time_raw = backup.start_time_raw.clone();
-        self.backup_last_size_gb = backup.last_backup_size_gb;
-        if let Some(disk) = &backup.repository_disk {
-            self.backup_disk_serial = Some(disk.serial.clone());
-            self.backup_disk_usage_percent = Some(disk.usage_percent);
-            self.backup_disk_used_gb = Some(disk.used_gb);
-            self.backup_disk_total_gb = Some(disk.total_gb);
-            self.backup_disk_wear_percent = disk.wear_percent;
-            self.backup_disk_temperature = disk.temperature_celsius;
-        } else {
-            self.backup_disk_serial = None;
-            self.backup_disk_usage_percent = None;
-            self.backup_disk_used_gb = None;
-            self.backup_disk_total_gb = None;
-            self.backup_disk_wear_percent = None;
-            self.backup_disk_temperature = None;
-        }
+        self.backup_repositories = backup.repositories.clone();
+        self.backup_repository_status = backup.repository_status;
+        self.backup_disks = backup.disks.clone();
     }
 }
@@ -520,14 +505,36 @@
     fn render_backup(&self) -> Vec<Line<'_>> {
         let mut lines = Vec::new();

-        // First line: serial number with temperature and wear
-        if let Some(serial) = &self.backup_disk_serial {
-            let truncated_serial = truncate_serial(serial);
+        // First section: Repository status and list
+        if !self.backup_repositories.is_empty() {
+            let repo_text = format!("Repo: {}", self.backup_repositories.len());
+            let repo_spans = StatusIcons::create_status_spans(self.backup_repository_status, &repo_text);
+            lines.push(Line::from(repo_spans));
+
+            // List all repositories (sorted for consistent display)
+            let mut sorted_repos = self.backup_repositories.clone();
+            sorted_repos.sort();
+            let repo_count = sorted_repos.len();
+            for (idx, repo) in sorted_repos.iter().enumerate() {
+                let tree_char = if idx == repo_count - 1 { "└─" } else { "├─" };
+                lines.push(Line::from(vec![
+                    Span::styled(format!(" {} ", tree_char), Typography::tree()),
+                    Span::styled(repo.clone(), Typography::secondary()),
+                ]));
+            }
+        }
+
+        // Second section: Per-disk backup information (sorted by serial for consistent display)
+        let mut sorted_disks = self.backup_disks.clone();
+        sorted_disks.sort_by(|a, b| a.serial.cmp(&b.serial));
+        for disk in &sorted_disks {
+            let truncated_serial = truncate_serial(&disk.serial);

             let mut details = Vec::new();
-            if let Some(temp) = self.backup_disk_temperature {
+            if let Some(temp) = disk.temperature_celsius {
                 details.push(format!("T: {}°C", temp as i32));
             }
-            if let Some(wear) = self.backup_disk_wear_percent {
+            if let Some(wear) = disk.wear_percent {
                 details.push(format!("W: {}%", wear as i32));
             }
@@ -537,44 +544,40 @@
                 truncated_serial
             };

-            let backup_status = match self.backup_status.as_str() {
-                "completed" | "success" => Status::Ok,
-                "running" => Status::Pending,
-                "failed" => Status::Critical,
-                _ => Status::Unknown,
-            };
-            let disk_spans = StatusIcons::create_status_spans(backup_status, &disk_text);
+            // Overall disk status (worst of backup and usage)
+            let disk_status = disk.backup_status.max(disk.usage_status);
+            let disk_spans = StatusIcons::create_status_spans(disk_status, &disk_text);
             lines.push(Line::from(disk_spans));

-            // Show backup time from TOML if available
-            if let Some(start_time) = &self.backup_start_time_raw {
-                let time_text = if let Some(size) = self.backup_last_size_gb {
-                    format!("Time: {} ({:.1}GB)", start_time, size)
-                } else {
-                    format!("Time: {}", start_time)
-                };
-                lines.push(Line::from(vec![
-                    Span::styled(" ├─ ", Typography::tree()),
-                    Span::styled(time_text, Typography::secondary())
-                ]));
+            // Show backup time with status
+            if let Some(backup_time) = &disk.last_backup_time {
+                let time_text = format!("Backup: {}", backup_time);
+                let mut time_spans = vec![
+                    Span::styled(" ├─ ", Typography::tree()),
+                ];
+                time_spans.extend(StatusIcons::create_status_spans(disk.backup_status, &time_text));
+                lines.push(Line::from(time_spans));
             }

-            // Usage information
-            if let (Some(used), Some(total), Some(usage_percent)) = (
-                self.backup_disk_used_gb,
-                self.backup_disk_total_gb,
-                self.backup_disk_usage_percent
-            ) {
-                let usage_text = format!("Usage: {:.0}% {:.0}GB/{:.0}GB", usage_percent, used, total);
-                let usage_spans = StatusIcons::create_status_spans(Status::Ok, &usage_text);
-                let mut full_spans = vec![
-                    Span::styled(" └─ ", Typography::tree()),
-                ];
-                full_spans.extend(usage_spans);
-                lines.push(Line::from(full_spans));
-            }
+            // Show usage with status and archive count
+            let archive_display = if disk.archives_min == disk.archives_max {
+                format!("{}", disk.archives_min)
+            } else {
+                format!("{}-{}", disk.archives_min, disk.archives_max)
+            };
+
+            let usage_text = format!(
+                "Usage: ({}) {:.0}% {:.0}GB/{:.0}GB",
+                archive_display,
+                disk.disk_usage_percent,
+                disk.disk_used_gb,
+                disk.disk_total_gb
+            );
+            let mut usage_spans = vec![
+                Span::styled(" └─ ", Typography::tree()),
+            ];
+            usage_spans.extend(StatusIcons::create_status_spans(disk.usage_status, &usage_text));
+            lines.push(Line::from(usage_spans));
         }

         lines
@@ -808,12 +811,32 @@
         );
         lines.push(Line::from(cpu_spans));

-        let freq_text = self.format_cpu_frequency();
+        let cstate_text = self.format_cpu_cstate();
+        let has_cpu_info = self.cpu_model_name.is_some() || self.cpu_core_count.is_some();
+        let cstate_tree = if has_cpu_info { " ├─ " } else { " └─ " };
         lines.push(Line::from(vec![
-            Span::styled(" └─ ", Typography::tree()),
-            Span::styled(format!("Freq: {}", freq_text), Typography::secondary())
+            Span::styled(cstate_tree, Typography::tree()),
+            Span::styled(format!("C-state: {}", cstate_text), Typography::secondary())
         ]));

+        // CPU model and core count (if available)
+        if let (Some(model), Some(cores)) = (&self.cpu_model_name, self.cpu_core_count) {
+            lines.push(Line::from(vec![
+                Span::styled(" └─ ", Typography::tree()),
+                Span::styled(format!("{} ({} cores)", model, cores), Typography::secondary())
+            ]));
+        } else if let Some(model) = &self.cpu_model_name {
+            lines.push(Line::from(vec![
+                Span::styled(" └─ ", Typography::tree()),
+                Span::styled(model.clone(), Typography::secondary())
+            ]));
+        } else if let Some(cores) = self.cpu_core_count {
+            lines.push(Line::from(vec![
+                Span::styled(" └─ ", Typography::tree()),
+                Span::styled(format!("{} cores", cores), Typography::secondary())
+            ]));
+        }
+
         // RAM section
         lines.push(Line::from(vec![
             Span::styled("RAM:", Typography::widget_title())
@@ -870,7 +893,7 @@
         lines.extend(storage_lines);

         // Backup section (if available)
-        if self.backup_status != "unavailable" && self.backup_status != "unknown" {
+        if !self.backup_repositories.is_empty() || !self.backup_disks.is_empty() {
             lines.push(Line::from(vec![
                 Span::styled("Backup:", Typography::widget_title())
             ]));
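The C-state line replaces the old frequency readout with the distribution of CPU idle depth, rendered like "C10:79% C8:10% C6:8%" (mostly deep states generally means a mostly idle CPU). A standalone sketch of the formatting, with a local struct standing in for cm_dashboard_shared::CStateInfo:

// Sketch of the C-state line formatting introduced above.
struct CState { name: String, percent: f32 }

fn format_cstates(cstates: &[CState]) -> String {
    cstates
        .iter()
        .map(|cs| format!("{}:{:.0}%", cs.name, cs.percent))
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let cs = vec![
        CState { name: "C10".into(), percent: 79.4 },
        CState { name: "C8".into(), percent: 10.2 },
    ];
    assert_eq!(format_cstates(&cs), "C10:79% C8:10%");
}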

View File

@@ -1,6 +1,6 @@
 [package]
 name = "cm-dashboard-shared"
-version = "0.1.185"
+version = "0.1.257"
 edition = "2021"

 [dependencies]

View File

@@ -40,16 +40,28 @@ pub struct NetworkInterfaceData {
     pub vlan_id: Option<u16>,
 }

+/// CPU C-state usage information
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct CStateInfo {
+    pub name: String,
+    pub percent: f32,
+}
+
 /// CPU monitoring data
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct CpuData {
     pub load_1min: f32,
     pub load_5min: f32,
     pub load_15min: f32,
-    pub frequency_mhz: f32,
+    pub cstates: Vec<CStateInfo>, // C-state usage percentages (C1, C6, C10, etc.) - indicates CPU idle depth distribution
     pub temperature_celsius: Option<f32>,
     pub load_status: Status,
     pub temperature_status: Status,
+    // Static CPU information (collected once at startup)
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub model_name: Option<String>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub core_count: Option<u32>,
 }

 /// Memory monitoring data
@@ -136,11 +148,15 @@ pub struct PoolDriveData {
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct ServiceData {
     pub name: String,
-    pub memory_mb: f32,
-    pub disk_gb: f32,
     pub user_stopped: bool,
     pub service_status: Status,
     pub sub_services: Vec<SubServiceData>,
+    /// Memory usage in bytes (from MemoryCurrent)
+    pub memory_bytes: Option<u64>,
+    /// Number of service restarts (from NRestarts)
+    pub restart_count: Option<u32>,
+    /// Uptime in seconds (calculated from ExecMainStartTimestamp)
+    pub uptime_seconds: Option<u64>,
 }

 /// Sub-service data (nginx sites, docker containers, etc.)
@@ -149,6 +165,9 @@ pub struct SubServiceData {
     pub name: String,
     pub service_status: Status,
     pub metrics: Vec<SubServiceMetric>,
+    /// Type of sub-service: "nginx_site", "container", "image"
+    #[serde(default)]
+    pub service_type: String,
 }

 /// Individual metric for a sub-service
@@ -162,23 +181,27 @@ pub struct SubServiceMetric {
 /// Backup system data
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct BackupData {
-    pub status: String,
-    pub total_size_gb: Option<f32>,
-    pub repository_health: Option<String>,
-    pub repository_disk: Option<BackupDiskData>,
-    pub last_backup_size_gb: Option<f32>,
-    pub start_time_raw: Option<String>,
+    pub repositories: Vec<String>,
+    pub repository_status: Status,
+    pub disks: Vec<BackupDiskData>,
 }

 /// Backup repository disk information
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct BackupDiskData {
     pub serial: String,
-    pub usage_percent: f32,
-    pub used_gb: f32,
-    pub total_gb: f32,
+    pub product_name: Option<String>,
     pub wear_percent: Option<f32>,
     pub temperature_celsius: Option<f32>,
+    pub last_backup_time: Option<String>,
+    pub backup_status: Status,
+    pub disk_usage_percent: f32,
+    pub disk_used_gb: f32,
+    pub disk_total_gb: f32,
+    pub usage_status: Status,
+    pub services: Vec<String>,
+    pub archives_min: i64,
+    pub archives_max: i64,
 }

 impl AgentData {
@@ -197,10 +220,12 @@ impl AgentData {
                 load_1min: 0.0,
                 load_5min: 0.0,
                 load_15min: 0.0,
-                frequency_mhz: 0.0,
+                cstates: Vec::new(),
                 temperature_celsius: None,
                 load_status: Status::Unknown,
                 temperature_status: Status::Unknown,
+                model_name: None,
+                core_count: None,
             },
             memory: MemoryData {
                 usage_percent: 0.0,
@@ -219,12 +244,9 @@ impl AgentData {
             },
             services: Vec::new(),
             backup: BackupData {
-                status: "unknown".to_string(),
-                total_size_gb: None,
-                repository_health: None,
-                repository_disk: None,
-                last_backup_size_gb: None,
-                start_time_raw: None,
+                repositories: Vec::new(),
+                repository_status: Status::Unknown,
+                disks: Vec::new(),
             },
         }
     }
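The serde attributes in these structs are what keep mixed agent and dashboard versions talking during a rolling upgrade: #[serde(default)] lets a payload from an older agent that lacks service_type still deserialize (the field becomes ""), while skip_serializing_if keeps None-valued optionals off the wire entirely. A minimal round-trip sketch, assuming the serde and serde_json crates (the struct itself is illustrative):

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct Sub {
    name: String,
    #[serde(default)]
    service_type: String, // missing in old payloads -> defaults to ""
    #[serde(skip_serializing_if = "Option::is_none")]
    model_name: Option<String>, // omitted from JSON when None
}

fn main() {
    // Old payload without service_type still parses.
    let s: Sub = serde_json::from_str(r#"{"name":"web"}"#).unwrap();
    assert_eq!(s.service_type, "");
    // None fields are omitted on serialize.
    assert_eq!(
        serde_json::to_string(&s).unwrap(),
        r#"{"name":"web","service_type":""}"#
    );
}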

View File

@@ -82,12 +82,13 @@ impl MetricValue {
 /// Health status for metrics
 #[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
 pub enum Status {
-    Inactive, // Lowest priority
-    Unknown,  //
-    Offline,  //
-    Pending,  //
-    Ok,       // 5th place - good status has higher priority than unknown states
-    Warning,  //
+    Info,     // Lowest priority - informational data with no status (no icon)
+    Inactive, //
+    Unknown,  //
+    Offline,  //
+    Pending,  //
+    Ok,       // Good status has higher priority than unknown states
+    Warning,  //
     Critical, // Highest priority
 }
@@ -223,6 +224,17 @@ impl HysteresisThresholds {
                     Status::Ok
                 }
             }
+            Status::Info => {
+                // Informational data shouldn't be used with hysteresis calculations
+                // Treat like Unknown if it somehow ends up here
+                if value >= self.critical_high {
+                    Status::Critical
+                } else if value >= self.warning_high {
+                    Status::Warning
+                } else {
+                    Status::Ok
+                }
+            }
         }
     }
 }
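The new Info variant is deliberately declared first: a derived Ord follows declaration order, so Info sorts below every other status, and expressions like disk.backup_status.max(disk.usage_status) in the system widget pick the worse of two statuses for free. A quick check of that ordering, using the same variant order as above:

// Derived Ord ranks variants in declaration order: Info lowest, Critical highest.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Status { Info, Inactive, Unknown, Offline, Pending, Ok, Warning, Critical }

fn main() {
    assert!(Status::Critical > Status::Warning);
    assert!(Status::Ok > Status::Unknown); // Ok outranks the unknown-ish states
    assert_eq!(Status::Pending.max(Status::Critical), Status::Critical);
}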