Compare commits

..

47 Commits

Author SHA1 Message Date
3fdcec8047 Add sudo for VPN namespace access
All checks were successful
Build and Release / build-and-release (push) Successful in 1m10s
Use sudo to access vpn namespace for external IP query.
Requires corresponding sudo permission in NixOS config.

Version: v0.1.234
2025-12-01 14:14:41 +01:00
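A rough sketch of the external-IP query that this commit and the two below it converge on; the exact invocation (sudo ip netns exec vpn curl ifconfig.me) is an assumption pieced together from the three commit messages, and the function name is hypothetical:

```
use tokio::process::Command;

/// Query the external IP from inside the `vpn` network namespace.
/// Assumes the agent may run `ip netns exec` via sudo (NixOS config).
async fn external_ip_via_vpn_namespace() -> Option<String> {
    let output = Command::new("sudo")
        .args(["ip", "netns", "exec", "vpn", "curl", "-s", "ifconfig.me"])
        .output()
        .await
        .ok()?;
    let ip = String::from_utf8_lossy(&output.stdout).trim().to_string();
    if ip.is_empty() { None } else { Some(ip) }
}
```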
1fcaf4a670 Fix VPN namespace name for external IP query
All checks were successful
Build and Release / build-and-release (push) Successful in 1m9s
Use correct namespace name 'vpn' instead of 'openvpn-namespace'.

Version: v0.1.233
2025-12-01 14:09:07 +01:00
885e19f7fd Add external IP display for OpenVPN connections
All checks were successful
Build and Release / build-and-release (push) Successful in 1m20s
Display VPN external IP as sub-service under openvpn-vpn-connection.
Query external IP through openvpn-namespace using curl ifconfig.me.

Version: v0.1.232
2025-12-01 13:49:54 +01:00
a7b69b8ae7 Fix duplicate data by clearing vectors before collection
All checks were successful
Build and Release / build-and-release (push) Successful in 1m19s
Collectors now clear their target vectors (tmpfs, drives, pools, services)
before populating to prevent duplicates when updating cached AgentData.

- Clear tmpfs list in memory collector
- Clear drives and pools in disk collector
- Clear services in systemd collector
- Bump version to v0.1.231
2025-12-01 13:21:26 +01:00
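A minimal sketch of the clearing pattern this commit describes; the struct and field names are hypothetical stand-ins for the real collectors:

```
/// Hypothetical collector owning the vectors it populates.
struct DiskCollector {
    drives: Vec<String>,
    pools: Vec<String>,
}

impl DiskCollector {
    fn collect(&mut self) {
        // Clear previous results first; since the cached AgentData now
        // persists between cycles, appending without clearing would
        // duplicate every drive and pool on each run.
        self.drives.clear();
        self.pools.clear();
        // ...then repopulate from lsblk/smartctl output (elided).
    }
}
```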
2d290f40b2 Fix data caching to prevent empty broadcasts
All checks were successful
Build and Release / build-and-release (push) Successful in 1m33s
CRITICAL FIX: Collectors now update cached AgentData instead of
creating new empty data each cycle. This prevents the dashboard
from seeing flashing/disappearing data.

- Add cached_agent_data field to Agent struct
- Update cached data when collectors run
- Always broadcast the full cached data every 2s
- Only individual collectors respect their intervals
- Bump version to v0.1.230
2025-12-01 13:14:53 +01:00
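A sketch of the caching scheme outlined above, under the assumption that collection and broadcasting share one loop; names beyond cached_agent_data are guesses:

```
use std::time::Duration;

#[derive(Default, Clone)]
struct AgentData { /* cpu, memory, drives, services, ... */ }

struct Agent {
    cached_agent_data: AgentData, // persists across cycles
}

impl Agent {
    async fn run(&mut self) {
        let mut tick = tokio::time::interval(Duration::from_secs(2));
        loop {
            tick.tick().await;
            // Only collectors whose interval has elapsed update the
            // cache in place (elided); the rest leave their sections
            // untouched instead of overwriting them with empty data.
            broadcast(self.cached_agent_data.clone()).await;
        }
    }
}

async fn broadcast(_data: AgentData) { /* ZMQ publish elided */ }
```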
ad1fcaa27b Fix collector interval timing to prevent excessive SMART checks
All checks were successful
Build and Release / build-and-release (push) Successful in 1m46s
Collectors now respect their configured intervals instead of running
every transmission cycle (2s). This prevents disk SMART checks from
running every 2 seconds, which was causing constant disk activity.

- Add TimedCollector wrapper with interval tracking
- Only collect from collectors whose interval has elapsed
- Disk collector now properly runs every 300s instead of every 2s
- Bump version to v0.1.229
2025-12-01 13:03:45 +01:00
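The TimedCollector struct appears verbatim in the agent.rs diff at the bottom of this compare; the is_due helper below is a hypothetical illustration of the elapsed-interval check:

```
use std::time::{Duration, Instant};

struct TimedCollector {
    interval: Duration,
    last_collection: Option<Instant>,
}

impl TimedCollector {
    /// True when the collector has never run or its interval elapsed,
    /// so a 300s disk collector is skipped on most 2s transmission ticks.
    fn is_due(&self, now: Instant) -> bool {
        match self.last_collection {
            None => true,
            Some(last) => now.duration_since(last) >= self.interval,
        }
    }
}
```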
60ab4d4f9e Fix service panel column width calculation
All checks were successful
Build and Release / build-and-release (push) Successful in 1m10s
Replace hardcoded terminal width thresholds with dynamic calculation
based on actual column requirements. Column visibility now adapts
correctly at 58, 52, 43, and 34 character widths instead of the
previous arbitrary 80, 60, 45 thresholds.

- Add width constants for each column (NAME=23, STATUS=10, etc)
- Calculate cumulative widths dynamically for each layout tier
- Ensure header and data formatting use consistent width values
- Fix service name truncation to respect calculated column width
2025-11-30 12:09:44 +01:00
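A sketch of the cumulative-width calculation; NAME=23 and STATUS=10 come from the commit message, the remaining constants (and the omitted column separators) are illustrative:

```
const NAME_W: u16 = 23;
const STATUS_W: u16 = 10;
const RAM_W: u16 = 9;      // assumed
const UPTIME_W: u16 = 9;   // assumed
const RESTARTS_W: u16 = 6; // assumed

/// Pick the layout tier by accumulating actual column widths instead
/// of comparing against arbitrary terminal-width thresholds.
fn column_count(width: u16) -> usize {
    if width >= NAME_W + STATUS_W + RAM_W + UPTIME_W + RESTARTS_W {
        5 // Name, Status, RAM, Uptime, Restarts
    } else if width >= NAME_W + STATUS_W + RAM_W + UPTIME_W {
        4 // drop Restarts
    } else if width >= NAME_W + STATUS_W + RAM_W {
        3 // drop Uptime too
    } else {
        2 // Name and Status only
    }
}
```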
67034c84b9 Add responsive column visibility to service panel
All checks were successful
Build and Release / build-and-release (push) Successful in 1m47s
Service panel now dynamically shows/hides columns based on terminal width:
- ≥80 chars: All columns (Name, Status, RAM, Uptime, Restarts)
- ≥60 chars: Hide Restarts only
- ≥45 chars: Hide Uptime and Restarts
- <45 chars: Minimal (Name and Status only)

Improves dashboard usability on smaller terminal sizes.
2025-11-30 10:50:08 +01:00
c62c7fa698 Remove debug logging from disk collector
All checks were successful
Build and Release / build-and-release (push) Successful in 1m11s
Removed all debug! statements from disk collector to reduce log noise.

Bump version to v0.1.226
2025-11-30 00:44:38 +01:00
0b1d8c0a73 Fix Data_3 showing as unknown by handling smartctl warning exit codes
All checks were successful
Build and Release / build-and-release (push) Successful in 1m11s
Root cause: sda's temperature exceeded threshold in the past, causing
smartctl to return exit code 32 (warning: "Attributes have been <= threshold
in the past"). The agent checked output.status.success() and rejected the
entire output as failed, even though the data (serial, temperature, health)
was perfectly valid.

Smartctl exit codes are bit flags for informational warnings:
- Exit 0: No warnings
- Exit 32 (bit 5): Attributes were at/below threshold in past
- Exit 64 (bit 6): Error log has entries
- etc.

The output data is valid regardless of these warning flags.

Solution: Parse output as long as it's not empty, ignore exit code.
Only return UNKNOWN if output is actually empty (command truly failed).

Result: Data_3 will now show "ZDZ4VE0B T: 31°C" instead of "? Data_3: sda"

Bump version to v0.1.225
2025-11-30 00:35:19 +01:00
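A condensed sketch of the exit-code handling described above; the function names are hypothetical:

```
use std::process::Output;

/// Bit 5 (exit 32) = attributes were at/below threshold in the past;
/// bit 6 (exit 64) = the error log has entries. Informational only.
fn past_threshold_warning(exit_code: i32) -> bool {
    exit_code & (1 << 5) != 0
}

/// Per the commit: parse whenever stdout is non-empty and ignore the
/// exit code; only an empty output means the command truly failed.
fn smartctl_output_usable(output: &Output) -> bool {
    !output.stdout.is_empty()
}
```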
c77aa6eaaa Fix Data_3 timeout by removing sequential SMART during pool detection
All checks were successful
Build and Release / build-and-release (push) Successful in 1m34s
Root cause: SMART data was collected TWICE:
1. Sequential collection during pool detection in get_drive_info_for_path()
   using problematic tokio::task::block_in_place() nesting
2. Parallel collection in get_smart_data_for_drives() (v0.1.223)

The sequential collection happened FIRST during pool detection, causing
sda (Data_3) to time out due to:
- Bad async nesting: block_in_place() wrapping block_on()
- Sequential execution causing runtime issues
- sda being third in sequence, runtime degraded by then

Solution: Remove SMART collection from get_drive_info_for_path().
Pool drive temperatures are populated later from the parallel SMART
collection which properly uses futures::join_all.

Benefits:
- Eliminates problematic async nesting
- All SMART queries happen once in parallel only
- sda/Data_3 should now show serial (ZDZ4VE0B) and temperature

Bump version to v0.1.224
2025-11-30 00:14:25 +01:00
8a0e68f0e3 Fix Data_3 timeout by parallelizing SMART collection
All checks were successful
Build and Release / build-and-release (push) Successful in 1m10s
Root cause: SMART data was collected sequentially, one drive at a time.
With 5 drives taking ~500ms each, total collection time was 2.5+ seconds.
With the disk collector running every 1 second, collections overlapped
and contended for resources. The last drive (sda/Data_3) would time out
because the previous collection was still accessing it.

Solution: Query all drives in parallel using futures::join_all. Now all
drives get their SMART data collected simultaneously with independent
3-second timeouts, eliminating contention and reducing total collection
time from 2.5+ seconds to ~500ms (the slowest single drive).

Benefits:
- All drives complete in ~500ms instead of 2.5+ seconds
- No overlapping collections causing resource contention
- Each drive gets full 3-second timeout window
- sda/Data_3 should now show temperature and serial number

Bump version to v0.1.223
2025-11-29 23:51:43 +01:00
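A sketch of the parallel collection with per-drive timeouts, assuming tokio plus the futures crate already in the dependency tree; the smartctl invocation itself is elided:

```
use futures::future::join_all;
use std::time::Duration;

async fn smart_for_drive(drive: String) -> Option<String> {
    // Each drive gets an independent 3-second budget.
    tokio::time::timeout(Duration::from_secs(3), async move {
        // run `smartctl` for /dev/<drive> here (elided)
        Some(format!("smart data for {drive}"))
    })
    .await
    .ok()
    .flatten()
}

/// All drives are queried concurrently, so total latency is roughly
/// the slowest single drive (~500ms), not the sum over all drives.
async fn smart_for_drives(drives: Vec<String>) -> Vec<Option<String>> {
    join_all(drives.into_iter().map(smart_for_drive)).await
}
```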
2d653fe9ae Fix empty Storage section by configuring stdio pipes
All checks were successful
Build and Release / build-and-release (push) Successful in 1m15s
Root cause: run_command_with_timeout() was calling cmd.spawn() without
configuring stdout/stderr pipes. This caused command output to go to
journald instead of being captured by wait_with_output(). The disk
collector received empty output and failed silently.

Solution: Configure stdout(Stdio::piped()) and stderr(Stdio::piped())
before spawning commands. This ensures wait_with_output() can properly
capture command output.

Fixes: Empty Storage section, lsblk output appearing in journald
Bump version to v0.1.222
2025-11-29 23:25:17 +01:00
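A minimal sketch of the fix, assuming a tokio Command and the anyhow error type the workspace already uses:

```
use std::process::Stdio;
use tokio::process::Command;

async fn run_captured(program: &str, args: &[&str]) -> anyhow::Result<std::process::Output> {
    let child = Command::new(program)
        .args(args)
        // Without these two lines the child inherits the agent's
        // stdout/stderr (journald under systemd) and
        // wait_with_output() captures nothing.
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()?;
    Ok(child.wait_with_output().await?)
}
```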
caba78004e Fix empty Storage section by properly aliasing command types
All checks were successful
Build and Release / build-and-release (push) Successful in 2m6s
v0.1.220 broke the disk collector by changing the import from
std::process::Command to tokio::process::Command, while lines 193 and
767 still explicitly used std::process::Command::new(), which failed silently.

Solution: Import both as aliases (TokioCommand/StdCommand) and use
appropriate type for each operation - async commands use TokioCommand
with run_command_with_timeout, sync commands use StdCommand with
system timeout wrapper.

Fixes: Empty Storage section after v0.1.220 deployment
Bump version to v0.1.221
2025-11-29 21:29:33 +01:00
77bf08a978 Fix blocking smartctl commands with proper async/timeout handling
All checks were successful
Build and Release / build-and-release (push) Successful in 2m2s
- Changed disk collector to use tokio::process::Command instead of std::process::Command
- Updated run_command_with_timeout to properly kill processes on timeout
- Fixes issue where smartctl hangs on problematic drives (/dev/sda) freezing entire agent
- Timeout now force-kills hung processes using kill -9, preventing orphaned smartctl processes

This resolves the issue where Data_3 showed unknown status because smartctl was hanging
indefinitely trying to read from a problematic drive, blocking the entire collector.

Bump version to v0.1.220

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 21:09:04 +01:00
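One way to get the force-kill-on-timeout behavior this commit describes is tokio's kill_on_drop, sketched below; the actual implementation may issue the kill explicitly, so treat this as an assumption:

```
use std::time::Duration;
use tokio::process::Command;

/// Run a command with a deadline; a hung child (e.g. smartctl stuck on
/// /dev/sda) is killed rather than left orphaned and blocking the drive.
async fn run_with_timeout(mut cmd: Command, limit: Duration) -> Option<std::process::Output> {
    let child = cmd.kill_on_drop(true).spawn().ok()?;
    match tokio::time::timeout(limit, child.wait_with_output()).await {
        Ok(result) => result.ok(),
        // On timeout the wait future is dropped, dropping the Child,
        // and kill_on_drop sends SIGKILL to the process.
        Err(_elapsed) => None,
    }
}
```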
929870f8b6 Bump version to v0.1.219
All checks were successful
Build and Release / build-and-release (push) Successful in 1m11s
2025-11-29 18:35:14 +01:00
7aae852b7b Bump version to v0.1.218
All checks were successful
Build and Release / build-and-release (push) Successful in 1m19s
2025-11-29 17:59:33 +01:00
40f3ff66d8 Show archive count range to detect inconsistencies
- Display single number if all services have same count
- Display min-max range if counts differ (indicates problem)
2025-11-29 17:59:24 +01:00
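A small sketch of the range formatting; the function name is hypothetical:

```
/// One number when all services agree, a min-max range otherwise
/// (a range signals the per-service counts have diverged).
fn format_archive_counts(counts: &[u32]) -> String {
    if counts.is_empty() {
        return "0".to_string();
    }
    let min = *counts.iter().min().unwrap();
    let max = *counts.iter().max().unwrap();
    if min == max {
        format!("{min}")
    } else {
        format!("{min}-{max}")
    }
}
```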
1c1beddb55 Bump version to v0.1.217
All checks were successful
Build and Release / build-and-release (push) Successful in 1m20s
2025-11-29 17:51:13 +01:00
620d1f10b6 Show archive count per service instead of total sum 2025-11-29 17:51:01 +01:00
a0d571a40e Bump version to v0.1.216
All checks were successful
Build and Release / build-and-release (push) Successful in 1m19s
2025-11-29 17:44:12 +01:00
977200fff3 Move archive count to Usage line in backup display 2025-11-29 17:44:05 +01:00
d692de5f83 Bump version to v0.1.215
All checks were successful
Build and Release / build-and-release (push) Successful in 1m11s
2025-11-29 17:41:49 +01:00
f5913dbd43 Add archive count to backup disk display 2025-11-29 17:41:11 +01:00
faa30a7839 Sort backup repositories and disks for stable display
All checks were successful
Build and Release / build-and-release (push) Successful in 1m21s
- Sort repositories alphabetically before rendering
- Sort backup disks by serial number
- Prevents display jumping between different orderings on updates
- Consistent display order across refreshes

Bump version to v0.1.214

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 17:15:17 +01:00
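The stabilizing sort in sketch form; the types are hypothetical:

```
struct BackupDisk {
    serial: String,
}

/// Sorting before every render keeps the order identical across
/// refreshes, so entries stop jumping between updates.
fn stable_order(repos: &mut [String], disks: &mut [BackupDisk]) {
    repos.sort();
    disks.sort_by(|a, b| a.serial.cmp(&b.serial));
}
```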
6e4a42799f Bump version to v0.1.213
All checks were successful
Build and Release / build-and-release (push) Successful in 1m22s
2025-11-29 16:46:16 +01:00
afb8d68e03 Implement multi-disk backup support
- Update BackupData structure to support multiple backup disks
- Scan /var/lib/backup/status/ directory for all status files
- Calculate status icons for backup and disk usage
- Aggregate repository status from all disks
- Update dashboard to display all backup disks with per-disk status
- Display repository list with count and aggregated status
2025-11-29 16:44:50 +01:00
5e08b34280 Move C-state name cleaning to agent for smaller JSON
All checks were successful
Build and Release / build-and-release (push) Successful in 1m32s
- Agent now extracts "C" + digits pattern (C3, C10) using char parsing
- Removes suffixes like "_ACPI", "_MWAIT" at source
- Reduces JSON payload size over ZMQ
- No regex dependency - uses fast char iteration (~1μs overhead)
- Robust fallback to original name if pattern not found
- Dashboard simplified to use clean names directly

Bump version to v0.1.212

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 14:05:55 +01:00
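A sketch of the regex-free extraction described above; the function name is hypothetical:

```
/// Extract the leading "C" + digits from a raw C-state name:
/// "C3_ACPI" -> "C3", "C10" -> "C10"; anything else falls back
/// to the original name unchanged.
fn clean_cstate_name(raw: &str) -> String {
    let mut chars = raw.chars();
    if chars.next() == Some('C') {
        let digits: String = chars.take_while(|c| c.is_ascii_digit()).collect();
        if !digits.is_empty() {
            return format!("C{digits}");
        }
    }
    raw.to_string()
}
```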
0d8284b69c Clean C-state display to show only CX format
All checks were successful
Build and Release / build-and-release (push) Successful in 1m18s
- Strip suffixes like "_ACPI" from C-state names
- Display changes from "C3_ACPI:51%" to "C3:51%"
- Cleaner, more concise presentation

Bump version to v0.1.211

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 13:34:01 +01:00
d84690cb3b Move transmission interval to ZMQ config section
All checks were successful
Build and Release / build-and-release (push) Successful in 1m43s
- Changed code to use zmq.transmission_interval_seconds instead of top-level collection_interval_seconds
- Removed collection_interval_seconds from AgentConfig
- Updated validation to check zmq.transmission_interval_seconds
- Improves config organization by grouping all ZMQ settings together

Bump version to v0.1.210

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 13:31:39 +01:00
7c030b33d6 Show top 3 C-states with usage percentages
All checks were successful
Build and Release / build-and-release (push) Successful in 1m21s
- Changed CpuData.cstate from String to Vec<CStateInfo>
- Added CStateInfo struct with name and percent fields
- Collector calculates percentage for each C-state based on accumulated time
- Sorts and returns top 3 C-states by usage
- Dashboard displays: "C10:79% C8:10% C6:8%"

Provides better visibility into CPU idle state distribution.

Bump version to v0.1.209

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 23:45:46 +01:00
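A sketch of the percentage/top-3 logic; CStateInfo matches the struct named in the commit, the rest is illustrative:

```
struct CStateInfo {
    name: String,
    percent: f32,
}

/// Turn accumulated per-state idle time into percentages and keep the
/// top three, yielding lines like "C10:79% C8:10% C6:8%".
fn top_cstates(mut states: Vec<(String, u64)>) -> Vec<CStateInfo> {
    let total: u64 = states.iter().map(|(_, t)| *t).sum();
    if total == 0 {
        return Vec::new();
    }
    states.sort_by(|a, b| b.1.cmp(&a.1));
    states
        .into_iter()
        .take(3)
        .map(|(name, t)| CStateInfo {
            name,
            percent: t as f32 / total as f32 * 100.0,
        })
        .collect()
}
```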
c6817537a8 Replace CPU frequency with C-state monitoring
All checks were successful
Build and Release / build-and-release (push) Successful in 1m20s
- Changed CpuData.frequency_mhz to CpuData.cstate (String)
- Implemented collect_cstate() to read CPU idle depth from sysfs
- Finds deepest C-state with most accumulated time (C0-C10)
- Updated dashboard to display C-state instead of frequency
- More accurate indicator of CPU activity vs power management

Bump version to v0.1.208

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 23:30:14 +01:00
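A sketch of the sysfs read behind collect_cstate(), limited to cpu0; the kernel cpuidle paths are standard, but aggregation across CPUs is elided:

```
use std::fs;

/// Read accumulated idle residency (microseconds) per C-state for cpu0.
fn read_cpu0_cstates() -> Vec<(String, u64)> {
    let mut states = Vec::new();
    let Ok(entries) = fs::read_dir("/sys/devices/system/cpu/cpu0/cpuidle") else {
        return states;
    };
    for entry in entries.flatten() {
        let dir = entry.path();
        let name = fs::read_to_string(dir.join("name")).unwrap_or_default();
        let time = fs::read_to_string(dir.join("time"))
            .ok()
            .and_then(|s| s.trim().parse::<u64>().ok())
            .unwrap_or(0);
        states.push((name.trim().to_string(), time));
    }
    states
}
```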
2189d34b16 Bump version to v0.1.207
All checks were successful
Build and Release / build-and-release (push) Successful in 1m9s
2025-11-28 23:16:33 +01:00
28cfd5758f Fix service metrics not showing - remove cache check
The service_status_cache populated during discovery only contains
active_state; all detailed metrics are None. During collection,
get_service_status() returned this cached data instead of fetching
fresh systemctl show data.

Now always fetch fresh data to populate memory_bytes, restart_count,
and uptime_seconds properly.
2025-11-28 23:15:51 +01:00
5deb8cf8d8 Bump version to v0.1.206
All checks were successful
Build and Release / build-and-release (push) Successful in 1m10s
2025-11-28 23:07:20 +01:00
0e01813ff5 Add service metrics from systemctl (memory, uptime, restarts)
Shared:
- Add memory_bytes, restart_count, uptime_seconds to ServiceData

Agent:
- Add new fields to ServiceStatusInfo struct
- Fetch MemoryCurrent, NRestarts, ExecMainStartTimestamp from systemctl show
- Calculate uptime from start timestamp
- Parse and populate new fields in ServiceData
- Remove unused load_state and sub_state fields

Dashboard:
- Add memory_bytes, restart_count, uptime_seconds to ServiceInfo
- Update header: Service, Status, RAM, Uptime, ↻ (restarts)
- Format memory as MB/GB
- Format uptime as Xd Xh, Xh Xm, or Xm
- Show restart count with ! prefix if > 0 to indicate instability

All metrics obtained from single systemctl show call - zero overhead.
2025-11-28 23:06:13 +01:00
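A sketch of the single systemctl show call, using the property names cited above; parsing the start timestamp into an uptime is elided, and the function signature is hypothetical:

```
use std::process::Command;

/// One `systemctl show` call returns all three metrics as KEY=VALUE lines.
fn service_metrics(unit: &str) -> Option<(u64, u32, String)> {
    let out = Command::new("systemctl")
        .args(["show", unit,
               "--property=MemoryCurrent,NRestarts,ExecMainStartTimestamp"])
        .output()
        .ok()?;
    let text = String::from_utf8_lossy(&out.stdout);
    let (mut mem, mut restarts, mut started) = (0u64, 0u32, String::new());
    for line in text.lines() {
        match line.split_once('=') {
            Some(("MemoryCurrent", v)) => mem = v.parse().unwrap_or(0),
            Some(("NRestarts", v)) => restarts = v.parse().unwrap_or(0),
            Some(("ExecMainStartTimestamp", v)) => started = v.to_string(),
            _ => {}
        }
    }
    Some((mem, restarts, started))
}
```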
c3c9507a42 Bump version to v0.1.205
All checks were successful
Build and Release / build-and-release (push) Successful in 1m22s
2025-11-28 22:37:28 +01:00
4d77ffe17e Remove RAM and Disk columns from services widget header
Changed header from 4 columns to 2 columns:
- Before: Service, Status, RAM, Disk
- After: Service, Status

Matches the removal of memory_mb and disk_gb fields.
2025-11-28 22:37:14 +01:00
14f74b4cac Bump version to v0.1.204
All checks were successful
Build and Release / build-and-release (push) Successful in 1m20s
2025-11-28 14:33:19 +01:00
67b686f8c7 Remove RAM and disk collection for services
Complete removal of service resource metrics:

Agent:
- Remove memory_mb and disk_gb fields from ServiceData struct
- Remove get_service_memory_usage() method
- Remove get_service_disk_usage() method
- Remove get_directory_size() method
- Remove unused warn import

Dashboard:
- Remove memory_mb and disk_gb from ServiceInfo struct
- Remove memory/disk display from format_parent_service_line
- Remove memory/disk parsing in legacy metric path
- Remove unused format_disk_size() function

Service resource metrics were slow, unreliable, and never worked
properly since the structured data migration. They will be handled
differently in the future.
2025-11-28 14:25:12 +01:00
e3996fdb84 Fix compilation errors from command receiver removal
All checks were successful
Build and Release / build-and-release (push) Successful in 1m8s
- Remove AgentCommand import from agent.rs
- Remove handle_commands() method
- Remove command handling from main loop
- Remove command_port validation checks
2025-11-28 13:01:36 +01:00
f94ca60e69 Bump version to v0.1.203
Some checks failed
Build and Release / build-and-release (push) Failing after 1m36s
2025-11-28 12:53:56 +01:00
c19ff56df8 Remove unused ZMQ command receiver (port 6131)
Service control migrated to SSH, command receiver no longer needed.
- Remove command_receiver Socket from ZmqHandler
- Remove try_receive_command method
- Remove AgentCommand enum
- Remove command_port from ZmqConfig
2025-11-28 12:52:43 +01:00
fe2f604703 Bump version to v0.1.202
All checks were successful
Build and Release / build-and-release (push) Successful in 1m8s
2025-11-28 12:45:25 +01:00
8bfd416327 Revert to v0.1.192 - fix agent hang issue
Some checks failed
Build and Release / build-and-release (push) Failing after 1m8s
2025-11-28 12:42:24 +01:00
85c6c624fb Revert D-Bus usage, use systemctl commands only
All checks were successful
Build and Release / build-and-release (push) Successful in 1m20s
- Remove zbus dependency from agent
- Replace D-Bus Connection calls with systemctl show commands
- Fix agent hang by eliminating blocking D-Bus operations
- get_unit_property now uses systemctl show with property flags
- Memory, disk usage, and nginx config queries use systemctl
- Simpler, more reliable service monitoring
2025-11-28 12:15:04 +01:00
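The systemctl-based property query in sketch form; `systemctl show -p <prop> --value <unit>` prints just the value, which is presumably what get_unit_property now wraps:

```
use std::process::Command;

/// D-Bus-free replacement for a unit property lookup.
fn get_unit_property(unit: &str, property: &str) -> Option<String> {
    let out = Command::new("systemctl")
        .args(["show", "-p", property, "--value", unit])
        .output()
        .ok()?;
    if out.status.success() {
        Some(String::from_utf8_lossy(&out.stdout).trim().to_string())
    } else {
        None
    }
}
```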
eab3f17428 Fix agent hang by reverting service discovery to systemctl
All checks were successful
Build and Release / build-and-release (push) Successful in 1m31s
The D-Bus ListUnits call in discover_services_internal() was causing
the agent to hang on startup.

**Root cause:**
- D-Bus ListUnits call with complex tuple destructuring hung indefinitely
- Agent never completed first collection cycle
- No collector output in logs

**Fix:**
- Revert discover_services_internal() to use systemctl list-units/list-unit-files
- Keep D-Bus-based property queries (WorkingDirectory, MemoryCurrent, ExecStart)
- Hybrid approach: systemctl for discovery, D-Bus for individual queries

**External commands still used:**
- systemctl list-units, list-unit-files (service discovery)
- smartctl (SMART data)
- sudo du (directory sizes)
- nginx -T (config fallback)

Version bump: 0.1.198 → 0.1.199
2025-11-28 11:57:31 +01:00
18 changed files with 1262 additions and 2036 deletions


@@ -156,56 +156,6 @@ Complete migration from string-based metrics to structured JSON data. Eliminates
- ✅ Backward compatibility via bridge conversion to existing UI widgets
- ✅ All string parsing bugs eliminated
### Cached Collector Architecture (🚧 PLANNED)
**Problem:** Blocking collectors prevent timely ZMQ transmission, causing false "host offline" alerts.
**Previous (Sequential Blocking):**
```
Every 1 second:
└─ collect_all_data() [BLOCKS for 2-10+ seconds]
├─ CPU (fast: 10ms)
├─ Memory (fast: 20ms)
├─ Disk SMART (slow: 3s per drive × 4 drives = 12s)
├─ Service disk usage (slow: 2-8s per service)
└─ Docker (medium: 500ms)
└─ send_via_zmq() [Only after ALL collection completes]
Result: If any collector takes >10s → "host offline" false alert
```
**New (Cached Independent Collectors):**
```
Shared Cache: Arc<RwLock<AgentData>>
Background Collectors (independent async tasks):
├─ Fast collectors (CPU, RAM, Network)
│ └─ Update cache every 1 second
├─ Medium collectors (Services, Docker)
│ └─ Update cache every 5 seconds
└─ Slow collectors (Disk usage, SMART data)
└─ Update cache every 60 seconds
ZMQ Sender (separate async task):
Every 1 second:
└─ Read current cache
└─ Send via ZMQ [Always instant, never blocked]
```
**Benefits:**
- ✅ ZMQ sends every 1 second regardless of collector speed
- ✅ No false "host offline" alerts from slow collectors
- ✅ Different update rates for different metrics (CPU=1s, SMART=60s)
- ✅ System stays responsive even with slow operations
- ✅ Slow collectors can use longer timeouts without blocking
**Implementation:**
- Shared `AgentData` cache wrapped in `Arc<RwLock<>>`
- Each collector spawned as independent tokio task
- Collectors update their section of cache at their own rate
- ZMQ sender reads cache every 1s and transmits
- Stale data acceptable for slow-changing metrics (disk usage, SMART)
### Maintenance Mode
- Agent checks for `/tmp/cm-maintenance` file before sending notifications
@@ -377,9 +327,16 @@ Storage:
├─ ● Data_2: GGA04461 T: 28°C
└─ ● Parity: WDZS8RY0 T: 29°C
Backup:
● Repo: 4
├─ getea
├─ vaultwarden
├─ mysql
└─ immich
● W800639Y W: 2%
├─ ● Backup: 2025-11-29T04:00:01.324623
└─ ● Usage: 8% 70GB/916GB
● WD-WCC7K1234567 T: 32°C W: 12%
├─ Last: 2h ago (12.3GB)
├─ Next: in 22h
├─ ● Backup: 2025-11-29T04:00:01.324623
└─ ● Usage: 45% 678GB/1.5TB
```

1087
Cargo.lock generated

@@ -1,5 +1,6 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
# This file is automatically generated by Cargo.
version = 4
[[package]]
@@ -103,137 +104,6 @@ dependencies = [
"object",
]
[[package]]
name = "async-broadcast"
version = "0.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "435a87a52755b8f27fcf321ac4f04b2802e337c8c4872923137471ec39c37532"
dependencies = [
"event-listener",
"event-listener-strategy",
"futures-core",
"pin-project-lite",
]
[[package]]
name = "async-channel"
version = "2.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "924ed96dd52d1b75e9c1a3e6275715fd320f5f9439fb5a4a11fa51f4221158d2"
dependencies = [
"concurrent-queue",
"event-listener-strategy",
"futures-core",
"pin-project-lite",
]
[[package]]
name = "async-executor"
version = "1.13.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "497c00e0fd83a72a79a39fcbd8e3e2f055d6f6c7e025f3b3d91f4f8e76527fb8"
dependencies = [
"async-task",
"concurrent-queue",
"fastrand",
"futures-lite",
"pin-project-lite",
"slab",
]
[[package]]
name = "async-fs"
version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8034a681df4aed8b8edbd7fbe472401ecf009251c8b40556b304567052e294c5"
dependencies = [
"async-lock",
"blocking",
"futures-lite",
]
[[package]]
name = "async-io"
version = "2.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "456b8a8feb6f42d237746d4b3e9a178494627745c3c56c6ea55d92ba50d026fc"
dependencies = [
"autocfg",
"cfg-if",
"concurrent-queue",
"futures-io",
"futures-lite",
"parking",
"polling",
"rustix",
"slab",
"windows-sys 0.61.2",
]
[[package]]
name = "async-lock"
version = "3.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5fd03604047cee9b6ce9de9f70c6cd540a0520c813cbd49bae61f33ab80ed1dc"
dependencies = [
"event-listener",
"event-listener-strategy",
"pin-project-lite",
]
[[package]]
name = "async-process"
version = "2.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fc50921ec0055cdd8a16de48773bfeec5c972598674347252c0399676be7da75"
dependencies = [
"async-channel",
"async-io",
"async-lock",
"async-signal",
"async-task",
"blocking",
"cfg-if",
"event-listener",
"futures-lite",
"rustix",
]
[[package]]
name = "async-recursion"
version = "1.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3b43422f69d8ff38f95f1b2bb76517c91589a924d1559a0e935d7c8ce0274c11"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "async-signal"
version = "0.2.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "43c070bbf59cd3570b6b2dd54cd772527c7c3620fce8be898406dd3ed6adc64c"
dependencies = [
"async-io",
"async-lock",
"atomic-waker",
"cfg-if",
"futures-core",
"futures-io",
"rustix",
"signal-hook-registry",
"slab",
"windows-sys 0.61.2",
]
[[package]]
name = "async-task"
version = "4.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b75356056920673b02621b35afd0f7dda9306d03c79a30f5c56c44cf256e3de"
[[package]]
name = "async-trait"
version = "0.1.89"
@@ -245,12 +115,6 @@ dependencies = [
"syn",
]
[[package]]
name = "atomic-waker"
version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1505bd5d3d116872e7271a6d4e16d81d0c8570876c8de68093a09ac269d8aac0"
[[package]]
name = "autocfg"
version = "1.5.0"
@@ -281,84 +145,12 @@ version = "2.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "812e12b5285cc515a9c72a5c1d3b6d46a19dac5acfef5265968c166106e31dd3"
[[package]]
name = "block-buffer"
version = "0.10.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3078c7629b62d3f0439517fa394996acacc5cbc91c5a20d8c658e77abd503a71"
dependencies = [
"generic-array",
]
[[package]]
name = "blocking"
version = "1.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e83f8d02be6967315521be875afa792a316e28d57b5a2d401897e2a7921b7f21"
dependencies = [
"async-channel",
"async-task",
"futures-io",
"futures-lite",
"piper",
]
[[package]]
name = "bollard"
version = "0.17.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d41711ad46fda47cd701f6908e59d1bd6b9a2b7464c0d0aeab95c6d37096ff8a"
dependencies = [
"base64 0.22.1",
"bollard-stubs",
"bytes",
"futures-core",
"futures-util",
"hex",
"http 1.4.0",
"http-body-util",
"hyper 1.8.1",
"hyper-named-pipe",
"hyper-util",
"hyperlocal",
"log",
"pin-project-lite",
"serde",
"serde_derive",
"serde_json",
"serde_repr",
"serde_urlencoded",
"thiserror 1.0.69",
"tokio",
"tokio-util",
"tower-service",
"url",
"winapi",
]
[[package]]
name = "bollard-stubs"
version = "1.45.0-rc.26.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6d7c5415e3a6bc6d3e99eff6268e488fd4ee25e7b28c10f08fa6760bd9de16e4"
dependencies = [
"serde",
"serde_repr",
"serde_with",
]
[[package]]
name = "bumpalo"
version = "3.19.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "46c5e41b57b8bba42a04676d81cb89e9ee8e859a1a66f80a5a72e1cb76b34d43"
[[package]]
name = "byteorder"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fd0f2584146f6f2ef48085050886acf353beff7305ebd1ae69500e27c67f64b"
[[package]]
name = "bytes"
version = "1.11.0"
@@ -373,9 +165,9 @@ checksum = "df8670b8c7b9dae1793364eafadf7239c40d669904660c5960d74cfd80b46a53"
[[package]]
name = "cc"
version = "1.2.47"
version = "1.2.46"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cd405d82c84ff7f35739f175f67d8b9fb7687a0e84ccdc78bd3568839827cf07"
checksum = "b97463e1064cb1b1c1384ad0a0b9c8abd0988e2a91f52606c80ef14aadb63e36"
dependencies = [
"find-msvc-tools",
"jobserver",
@@ -399,12 +191,6 @@ version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9330f8b2ff13f34540b44e946ef35111825727b38d33286ef986142615121801"
[[package]]
name = "cfg_aliases"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724"
[[package]]
name = "chrono"
version = "0.4.42"
@@ -453,9 +239,9 @@ dependencies = [
[[package]]
name = "clap"
version = "4.5.53"
version = "4.5.52"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c9e340e012a1bf4935f5282ed1436d1489548e8f72308207ea5df0e23d2d03f8"
checksum = "aa8120877db0e5c011242f96806ce3c94e0737ab8108532a76a3300a01db2ab8"
dependencies = [
"clap_builder",
"clap_derive",
@@ -463,9 +249,9 @@ dependencies = [
[[package]]
name = "clap_builder"
version = "4.5.53"
version = "4.5.52"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d76b5d13eaa18c901fd2f7fca939fefe3a0727a953561fefdf3b2922b8569d00"
checksum = "02576b399397b659c26064fbc92a75fede9d18ffd5f80ca1cd74ddab167016e1"
dependencies = [
"anstream",
"anstyle",
@@ -493,7 +279,7 @@ checksum = "a1d728cc89cf3aee9ff92b05e62b19ee65a02b5702cff7d5a377e32c6ae29d8d"
[[package]]
name = "cm-dashboard"
version = "0.1.198"
version = "0.1.234"
dependencies = [
"anyhow",
"chrono",
@@ -504,7 +290,7 @@ dependencies = [
"ratatui",
"serde",
"serde_json",
"thiserror 1.0.69",
"thiserror",
"tokio",
"toml",
"tracing",
@@ -515,11 +301,10 @@ dependencies = [
[[package]]
name = "cm-dashboard-agent"
version = "0.1.198"
version = "0.1.234"
dependencies = [
"anyhow",
"async-trait",
"bollard",
"chrono",
"chrono-tz",
"clap",
@@ -527,30 +312,25 @@ dependencies = [
"futures",
"gethostname",
"lettre",
"libc",
"netlink-packet-route",
"nix 0.29.0",
"reqwest",
"rtnetlink",
"serde",
"serde_json",
"thiserror 1.0.69",
"thiserror",
"tokio",
"toml",
"tracing",
"tracing-subscriber",
"zbus",
"zmq",
]
[[package]]
name = "cm-dashboard-shared"
version = "0.1.198"
version = "0.1.234"
dependencies = [
"chrono",
"serde",
"serde_json",
"thiserror 1.0.69",
"thiserror",
]
[[package]]
@@ -559,15 +339,6 @@ version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b05b61dc5112cbb17e4b6cd61790d9845d13888356391624cbe7e41efeac1e75"
[[package]]
name = "concurrent-queue"
version = "2.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4ca0197aee26d1ae37445ee532fefce43251d24cc7c166799f4d46817f1d3973"
dependencies = [
"crossbeam-utils",
]
[[package]]
name = "core-foundation"
version = "0.9.4"
@@ -584,15 +355,6 @@ version = "0.8.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "773648b94d0e5d620f64f280777445740e61fe701025087ec8b57f45c791888b"
[[package]]
name = "cpufeatures"
version = "0.2.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "59ed5838eebb26a2bb2e58f6d5b5316989ae9d08bab10e0e6d103e656d1b0280"
dependencies = [
"libc",
]
[[package]]
name = "crossbeam"
version = "0.8.4"
@@ -674,36 +436,6 @@ dependencies = [
"winapi",
]
[[package]]
name = "crypto-common"
version = "0.1.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78c8292055d1c1df0cce5d180393dc8cce0abec0a7102adb6c7b1eef6016d60a"
dependencies = [
"generic-array",
"typenum",
]
[[package]]
name = "deranged"
version = "0.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ececcb659e7ba858fb4f10388c250a7252eb0a27373f1a72b8748afdd248e587"
dependencies = [
"powerfmt",
"serde_core",
]
[[package]]
name = "digest"
version = "0.10.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292"
dependencies = [
"block-buffer",
"crypto-common",
]
[[package]]
name = "dircpy"
version = "0.3.19"
@@ -726,12 +458,6 @@ dependencies = [
"syn",
]
[[package]]
name = "dyn-clone"
version = "1.0.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d0881ea181b1df73ff77ffaaf9c7544ecc11e82fba9b5f27b262a3c73a332555"
[[package]]
name = "either"
version = "1.15.0"
@@ -763,33 +489,6 @@ dependencies = [
"cfg-if",
]
[[package]]
name = "endi"
version = "1.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "66b7e2430c6dff6a955451e2cfc438f09cea1965a9d6f87f7e3b90decc014099"
[[package]]
name = "enumflags2"
version = "0.7.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1027f7680c853e056ebcec683615fb6fbbc07dbaa13b4d5d9442b146ded4ecef"
dependencies = [
"enumflags2_derive",
"serde",
]
[[package]]
name = "enumflags2_derive"
version = "0.7.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "67c78a4d8fdf9953a5c9d458f9efe940fd97a0cab0941c075a813ac594733827"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "equivalent"
version = "1.0.2"
@@ -806,27 +505,6 @@ dependencies = [
"windows-sys 0.61.2",
]
[[package]]
name = "event-listener"
version = "5.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e13b66accf52311f30a0db42147dadea9850cb48cd070028831ae5f5d4b856ab"
dependencies = [
"concurrent-queue",
"parking",
"pin-project-lite",
]
[[package]]
name = "event-listener-strategy"
version = "0.5.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8be9f3dfaaffdae2972880079a491a1a8bb7cbed0b8dd7a347f668b4150a3b93"
dependencies = [
"event-listener",
"pin-project-lite",
]
[[package]]
name = "fastrand"
version = "2.3.0"
@@ -923,19 +601,6 @@ version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9e5c1b78ca4aae1ac06c48a526a655760685149f0d465d21f37abfe57ce075c6"
[[package]]
name = "futures-lite"
version = "2.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f78e10609fe0e0b3f4157ffab1876319b5b0db102a2c60dc4626306dc46b44ad"
dependencies = [
"fastrand",
"futures-core",
"futures-io",
"parking",
"pin-project-lite",
]
[[package]]
name = "futures-macro"
version = "0.3.31"
@@ -977,16 +642,6 @@ dependencies = [
"slab",
]
[[package]]
name = "generic-array"
version = "0.14.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a"
dependencies = [
"typenum",
"version_check",
]
[[package]]
name = "gethostname"
version = "0.4.3"
@@ -997,17 +652,6 @@ dependencies = [
"windows-targets 0.48.5",
]
[[package]]
name = "getrandom"
version = "0.2.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "335ff9f135e4384c8150d6f27c6daed433577f86b4750418338c01a1a2528592"
dependencies = [
"cfg-if",
"libc",
"wasi",
]
[[package]]
name = "getrandom"
version = "0.3.4"
@@ -1031,20 +675,14 @@ dependencies = [
"futures-core",
"futures-sink",
"futures-util",
"http 0.2.12",
"indexmap 2.12.1",
"http",
"indexmap",
"slab",
"tokio",
"tokio-util",
"tracing",
]
[[package]]
name = "hashbrown"
version = "0.12.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a9ee70c43aaf417c914396645a0fa852624801b24ebb7ae78fe8272889ac888"
[[package]]
name = "hashbrown"
version = "0.14.5"
@@ -1068,9 +706,9 @@ dependencies = [
[[package]]
name = "hashbrown"
version = "0.16.1"
version = "0.16.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "841d1cc9bed7f9236f321df977030373f4a4163ae1a7dbfe1a51a2c1a51d9100"
checksum = "5419bdc4f6a9207fbeba6d11b604d481addf78ecd10c11ad51e76c2f6482748d"
[[package]]
name = "heck"
@@ -1084,18 +722,6 @@ version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"
[[package]]
name = "hermit-abi"
version = "0.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fc0fef456e4baa96da950455cd02c081ca953b141298e41db3fc7e36b1da849c"
[[package]]
name = "hex"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
[[package]]
name = "http"
version = "0.2.12"
@@ -1107,16 +733,6 @@ dependencies = [
"itoa",
]
[[package]]
name = "http"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e3ba2a386d7f85a81f119ad7498ebe444d2e22c2af0b86b069416ace48b3311a"
dependencies = [
"bytes",
"itoa",
]
[[package]]
name = "http-body"
version = "0.4.6"
@@ -1124,30 +740,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7ceab25649e9960c0311ea418d17bee82c0dcec1bd053b5f9a66e265a693bed2"
dependencies = [
"bytes",
"http 0.2.12",
"pin-project-lite",
]
[[package]]
name = "http-body"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1efedce1fb8e6913f23e0c92de8e62cd5b772a67e7b3946df930a62566c93184"
dependencies = [
"bytes",
"http 1.4.0",
]
[[package]]
name = "http-body-util"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b021d93e26becf5dc7e1b75b1bed1fd93124b374ceb73f43d4d4eafec896a64a"
dependencies = [
"bytes",
"futures-core",
"http 1.4.0",
"http-body 1.0.1",
"http",
"pin-project-lite",
]
@@ -1174,8 +767,8 @@ dependencies = [
"futures-core",
"futures-util",
"h2",
"http 0.2.12",
"http-body 0.4.6",
"http",
"http-body",
"httparse",
"httpdate",
"itoa",
@@ -1187,43 +780,6 @@ dependencies = [
"want",
]
[[package]]
name = "hyper"
version = "1.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2ab2d4f250c3d7b1c9fcdff1cece94ea4e2dfbec68614f7b87cb205f24ca9d11"
dependencies = [
"atomic-waker",
"bytes",
"futures-channel",
"futures-core",
"http 1.4.0",
"http-body 1.0.1",
"httparse",
"httpdate",
"itoa",
"pin-project-lite",
"pin-utils",
"smallvec",
"tokio",
"want",
]
[[package]]
name = "hyper-named-pipe"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "73b7d8abf35697b81a825e386fc151e0d503e8cb5fcb93cc8669c376dfd6f278"
dependencies = [
"hex",
"hyper 1.8.1",
"hyper-util",
"pin-project-lite",
"tokio",
"tower-service",
"winapi",
]
[[package]]
name = "hyper-tls"
version = "0.5.0"
@@ -1231,48 +787,12 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d6183ddfa99b85da61a140bea0efc93fdf56ceaa041b37d553518030827f9905"
dependencies = [
"bytes",
"hyper 0.14.32",
"hyper",
"native-tls",
"tokio",
"tokio-native-tls",
]
[[package]]
name = "hyper-util"
version = "0.1.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "52e9a2a24dc5c6821e71a7030e1e14b7b632acac55c40e9d2e082c621261bb56"
dependencies = [
"bytes",
"futures-channel",
"futures-core",
"futures-util",
"http 1.4.0",
"http-body 1.0.1",
"hyper 1.8.1",
"libc",
"pin-project-lite",
"socket2 0.6.1",
"tokio",
"tower-service",
"tracing",
]
[[package]]
name = "hyperlocal"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "986c5ce3b994526b3cd75578e62554abd09f0899d6206de48b3e96ab34ccc8c7"
dependencies = [
"hex",
"http-body-util",
"hyper 1.8.1",
"hyper-util",
"pin-project-lite",
"tokio",
"tower-service",
]
[[package]]
name = "iana-time-zone"
version = "0.1.64"
@@ -1401,25 +921,12 @@ dependencies = [
[[package]]
name = "indexmap"
version = "1.9.3"
version = "2.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bd070e393353796e801d209ad339e89596eb4c8d430d18ede6a1cced8fafbd99"
dependencies = [
"autocfg",
"hashbrown 0.12.3",
"serde",
]
[[package]]
name = "indexmap"
version = "2.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ad4bb2b565bca0645f4d68c5c9af97fba094e9791da685bf83cb5f3ce74acf2"
checksum = "6717a8d2a5a929a1a2eb43a12812498ed141a0bcfb7e8f7844fbdbe4303bba9f"
dependencies = [
"equivalent",
"hashbrown 0.16.1",
"serde",
"serde_core",
"hashbrown 0.16.0",
]
[[package]]
@@ -1464,7 +971,7 @@ version = "0.1.34"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9afb3de4395d6b3e67a780b6de64b51c978ecf11cb9a462c66be7d4ca9039d33"
dependencies = [
"getrandom 0.3.4",
"getrandom",
"libc",
]
@@ -1573,15 +1080,6 @@ version = "2.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f52b00d39961fc5b2736ea853c9cc86238e165017a493d1d5c8eac6bdc4cc273"
[[package]]
name = "memoffset"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "488016bfae457b036d996092f6cb448677611ce4449e970ceaf42695203f218a"
dependencies = [
"autocfg",
]
[[package]]
name = "mime"
version = "0.3.17"
@@ -1628,94 +1126,6 @@ dependencies = [
"tempfile",
]
[[package]]
name = "netlink-packet-core"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72724faf704479d67b388da142b186f916188505e7e0b26719019c525882eda4"
dependencies = [
"anyhow",
"byteorder",
"netlink-packet-utils",
]
[[package]]
name = "netlink-packet-route"
version = "0.19.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "74c171cd77b4ee8c7708da746ce392440cb7bcf618d122ec9ecc607b12938bf4"
dependencies = [
"anyhow",
"byteorder",
"libc",
"log",
"netlink-packet-core",
"netlink-packet-utils",
]
[[package]]
name = "netlink-packet-utils"
version = "0.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ede8a08c71ad5a95cdd0e4e52facd37190977039a4704eb82a283f713747d34"
dependencies = [
"anyhow",
"byteorder",
"paste",
"thiserror 1.0.69",
]
[[package]]
name = "netlink-proto"
version = "0.11.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72452e012c2f8d612410d89eea01e2d9b56205274abb35d53f60200b2ec41d60"
dependencies = [
"bytes",
"futures",
"log",
"netlink-packet-core",
"netlink-sys",
"thiserror 2.0.17",
]
[[package]]
name = "netlink-sys"
version = "0.8.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "16c903aa70590cb93691bf97a767c8d1d6122d2cc9070433deb3bbf36ce8bd23"
dependencies = [
"bytes",
"futures",
"libc",
"log",
"tokio",
]
[[package]]
name = "nix"
version = "0.27.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2eb04e9c688eff1c89d72b407f168cf79bb9e867a9d3323ed6c01519eb9cc053"
dependencies = [
"bitflags 2.10.0",
"cfg-if",
"libc",
]
[[package]]
name = "nix"
version = "0.29.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "71e2746dc3a24dd78b3cfcb7be93368c6de9963d30f43a6a73998a9cf4b17b46"
dependencies = [
"bitflags 2.10.0",
"cfg-if",
"cfg_aliases",
"libc",
"memoffset",
]
[[package]]
name = "nom"
version = "8.0.0"
@@ -1734,12 +1144,6 @@ dependencies = [
"windows-sys 0.61.2",
]
[[package]]
name = "num-conv"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "51d515d32fb182ee37cda2ccdcb92950d6a3c2893aa280e540671c2cd0f3b1d9"
[[package]]
name = "num-traits"
version = "0.2.19"
@@ -1814,22 +1218,6 @@ dependencies = [
"vcpkg",
]
[[package]]
name = "ordered-stream"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9aa2b01e1d916879f73a53d01d1d6cee68adbb31d6d9177a8cfce093cced1d50"
dependencies = [
"futures-core",
"pin-project-lite",
]
[[package]]
name = "parking"
version = "2.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f38d5652c16fde515bb1ecef450ab0f6a219d619a7274976324d5e377f7dceba"
[[package]]
name = "parking_lot"
version = "0.12.5"
@@ -1924,37 +1312,12 @@ version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184"
[[package]]
name = "piper"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "96c8c490f422ef9a4efd2cb5b42b76c8613d7e7dfc1caf667b8a3350a5acc066"
dependencies = [
"atomic-waker",
"fastrand",
"futures-io",
]
[[package]]
name = "pkg-config"
version = "0.3.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7edddbd0b52d732b21ad9a5fab5c704c14cd949e5e9a1ec5929a24fded1b904c"
[[package]]
name = "polling"
version = "3.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5d0e4f59085d47d8241c88ead0f274e8a0cb551f3625263c05eb8dd897c34218"
dependencies = [
"cfg-if",
"concurrent-queue",
"hermit-abi",
"pin-project-lite",
"rustix",
"windows-sys 0.61.2",
]
[[package]]
name = "potential_utf"
version = "0.1.4"
@@ -1964,30 +1327,6 @@ dependencies = [
"zerovec",
]
[[package]]
name = "powerfmt"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "439ee305def115ba05938db6eb1644ff94165c5ab5e9420d1c1bcedbba909391"
[[package]]
name = "ppv-lite86"
version = "0.2.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "85eae3c4ed2f50dcfe72643da4befc30deadb458a9b590d720cde2f2b1e97da9"
dependencies = [
"zerocopy",
]
[[package]]
name = "proc-macro-crate"
version = "3.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "219cb19e96be00ab2e37d6e299658a0cfa83e52429179969b0f0121b4ac46983"
dependencies = [
"toml_edit 0.23.7",
]
[[package]]
name = "proc-macro2"
version = "1.0.103"
@@ -2034,18 +1373,6 @@ version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404"
dependencies = [
"libc",
"rand_chacha",
"rand_core",
]
[[package]]
name = "rand_chacha"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88"
dependencies = [
"ppv-lite86",
"rand_core",
]
@@ -2054,9 +1381,6 @@ name = "rand_core"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c"
dependencies = [
"getrandom 0.2.16",
]
[[package]]
name = "ratatui"
@@ -2105,26 +1429,6 @@ dependencies = [
"bitflags 2.10.0",
]
[[package]]
name = "ref-cast"
version = "1.0.25"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f354300ae66f76f1c85c5f84693f0ce81d747e2c3f21a45fef496d89c960bf7d"
dependencies = [
"ref-cast-impl",
]
[[package]]
name = "ref-cast-impl"
version = "1.0.25"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b7186006dcb21920990093f30e3dea63b7d6e977bf1256be20c3563a5db070da"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "regex"
version = "1.12.2"
@@ -2166,9 +1470,9 @@ dependencies = [
"futures-core",
"futures-util",
"h2",
"http 0.2.12",
"http-body 0.4.6",
"hyper 0.14.32",
"http",
"http-body",
"hyper",
"hyper-tls",
"ipnet",
"js-sys",
@@ -2194,24 +1498,6 @@ dependencies = [
"winreg",
]
[[package]]
name = "rtnetlink"
version = "0.14.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b684475344d8df1859ddb2d395dd3dac4f8f3422a1aa0725993cb375fc5caba5"
dependencies = [
"futures",
"log",
"netlink-packet-core",
"netlink-packet-route",
"netlink-packet-utils",
"netlink-proto",
"netlink-sys",
"nix 0.27.1",
"thiserror 1.0.69",
"tokio",
]
[[package]]
name = "rustix"
version = "1.1.2"
@@ -2264,30 +1550,6 @@ dependencies = [
"windows-sys 0.61.2",
]
[[package]]
name = "schemars"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4cd191f9397d57d581cddd31014772520aa448f65ef991055d7f61582c65165f"
dependencies = [
"dyn-clone",
"ref-cast",
"serde",
"serde_json",
]
[[package]]
name = "schemars"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9558e172d4e8533736ba97870c4b2cd63f84b382a3d6eb063da41b91cce17289"
dependencies = [
"dyn-clone",
"ref-cast",
"serde",
"serde_json",
]
[[package]]
name = "scopeguard"
version = "1.2.0"
@@ -2360,17 +1622,6 @@ dependencies = [
"serde_core",
]
[[package]]
name = "serde_repr"
version = "0.1.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "175ee3e80ae9982737ca543e96133087cbd9a485eecc3bc4de9c1a37b47ea59c"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "serde_spanned"
version = "0.6.9"
@@ -2392,35 +1643,6 @@ dependencies = [
"serde",
]
[[package]]
name = "serde_with"
version = "3.16.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4fa237f2807440d238e0364a218270b98f767a00d3dada77b1c53ae88940e2e7"
dependencies = [
"base64 0.22.1",
"chrono",
"hex",
"indexmap 1.9.3",
"indexmap 2.12.1",
"schemars 0.9.0",
"schemars 1.1.0",
"serde_core",
"serde_json",
"time",
]
[[package]]
name = "sha1"
version = "0.10.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e3bf829a2d51ab4a5ddf1352d8470c140cadc8301b2ae1789db023f01cedd6ba"
dependencies = [
"cfg-if",
"cpufeatures",
"digest",
]
[[package]]
name = "sharded-slab"
version = "0.1.7"
@@ -2459,9 +1681,9 @@ dependencies = [
[[package]]
name = "signal-hook-registry"
version = "1.4.7"
version = "1.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7664a098b8e616bdfcc2dc0e9ac44eb231eedf41db4e9fe95d8d32ec728dedad"
checksum = "b2a4719bff48cee6b39d12c020eeb490953ad2443b7055bd0b21fca26bd8c28b"
dependencies = [
"libc",
]
@@ -2523,12 +1745,6 @@ dependencies = [
"windows-sys 0.59.0",
]
[[package]]
name = "static_assertions"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f"
[[package]]
name = "strsim"
version = "0.11.1"
@@ -2559,9 +1775,9 @@ dependencies = [
[[package]]
name = "syn"
version = "2.0.111"
version = "2.0.110"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "390cc9a294ab71bdb1aa2e99d13be9c753cd2d7bd6560c77118597410c4d2e87"
checksum = "a99801b5bd34ede4cf3fc688c5919368fea4e4814a4664359503e6015b280aea"
dependencies = [
"proc-macro2",
"quote",
@@ -2632,7 +1848,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2d31c77bdf42a745371d260a26ca7163f1e0924b64afa0b688e61b5a9fa02f16"
dependencies = [
"fastrand",
"getrandom 0.3.4",
"getrandom",
"once_cell",
"rustix",
"windows-sys 0.61.2",
@@ -2644,16 +1860,7 @@ version = "1.0.69"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b6aaf5339b578ea85b50e080feb250a3e8ae8cfcdff9a461c9ec2904bc923f52"
dependencies = [
"thiserror-impl 1.0.69",
]
[[package]]
name = "thiserror"
version = "2.0.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f63587ca0f12b72a0600bcba1d40081f830876000bb46dd2337a3051618f4fc8"
dependencies = [
"thiserror-impl 2.0.17",
"thiserror-impl",
]
[[package]]
@@ -2667,17 +1874,6 @@ dependencies = [
"syn",
]
[[package]]
name = "thiserror-impl"
version = "2.0.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3ff15c8ecd7de3849db632e14d18d2571fa09dfc5ed93479bc4485c7a517c913"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "thread_local"
version = "1.1.9"
@@ -2687,37 +1883,6 @@ dependencies = [
"cfg-if",
]
[[package]]
name = "time"
version = "0.3.44"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "91e7d9e3bb61134e77bde20dd4825b97c010155709965fedf0f49bb138e52a9d"
dependencies = [
"deranged",
"itoa",
"num-conv",
"powerfmt",
"serde",
"time-core",
"time-macros",
]
[[package]]
name = "time-core"
version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "40868e7c1d2f0b8d73e4a8c7f0ff63af4f6d19be117e90bd73eb1d62cf831c6b"
[[package]]
name = "time-macros"
version = "0.2.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "30cfb0125f12d9c277f35663a0a33f8c30190f4e4574868a330595412d34ebf3"
dependencies = [
"num-conv",
"time-core",
]
[[package]]
name = "tinystr"
version = "0.8.2"
@@ -2787,8 +1952,8 @@ checksum = "dc1beb996b9d83529a9e75c17a1686767d148d70663143c7854d8b4a09ced362"
dependencies = [
"serde",
"serde_spanned",
"toml_datetime 0.6.11",
"toml_edit 0.22.27",
"toml_datetime",
"toml_edit",
]
[[package]]
@@ -2800,50 +1965,20 @@ dependencies = [
"serde",
]
[[package]]
name = "toml_datetime"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f2cdb639ebbc97961c51720f858597f7f24c4fc295327923af55b74c3c724533"
dependencies = [
"serde_core",
]
[[package]]
name = "toml_edit"
version = "0.22.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "41fe8c660ae4257887cf66394862d21dbca4a6ddd26f04a3560410406a2f819a"
dependencies = [
"indexmap 2.12.1",
"indexmap",
"serde",
"serde_spanned",
"toml_datetime 0.6.11",
"toml_datetime",
"toml_write",
"winnow",
]
[[package]]
name = "toml_edit"
version = "0.23.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6485ef6d0d9b5d0ec17244ff7eb05310113c3f316f2d14200d4de56b3cb98f8d"
dependencies = [
"indexmap 2.12.1",
"toml_datetime 0.7.3",
"toml_parser",
"winnow",
]
[[package]]
name = "toml_parser"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c0cbe268d35bdb4bb5a56a2de88d0ad0eb70af5384a99d648cd4b3d04039800e"
dependencies = [
"winnow",
]
[[package]]
name = "toml_write"
version = "0.1.2"
@@ -2869,9 +2004,9 @@ dependencies = [
[[package]]
name = "tracing-attributes"
version = "0.1.31"
version = "0.1.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7490cfa5ec963746568740651ac6781f701c9c5ea257c58e057f3ba8cf69e8da"
checksum = "81383ab64e72a7a8b8e13130c49e3dab29def6d0c7d76a03087b3cf71c5c6903"
dependencies = [
"proc-macro2",
"quote",
@@ -2880,9 +2015,9 @@ dependencies = [
[[package]]
name = "tracing-core"
version = "0.1.35"
version = "0.1.34"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7a04e24fab5c89c6a36eb8558c9656f30d81de51dfa4d3b45f26b21d61fa0a6c"
checksum = "b9d12581f227e93f094d3af2ae690a574abb8a2b9b7a96e7cfe9647b2b617678"
dependencies = [
"once_cell",
"valuable",
@@ -2923,23 +2058,6 @@ version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e421abadd41a4225275504ea4d6566923418b7f05506fbc9c0fe86ba7396114b"
[[package]]
name = "typenum"
version = "1.19.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "562d481066bde0658276a35467c4af00bdc6ee726305698a55b86e61d7ad82bb"
[[package]]
name = "uds_windows"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89daebc3e6fd160ac4aa9fc8b3bf71e1f74fbf92367ae71fb83a037e8bf164b9"
dependencies = [
"memoffset",
"tempfile",
"winapi",
]
[[package]]
name = "unicode-ident"
version = "1.0.22"
@@ -3437,9 +2555,9 @@ checksum = "d6bbff5f0aada427a1e5a6da5f1f98158182f26556f345ac9e04d36d0ebed650"
[[package]]
name = "winnow"
version = "0.7.14"
version = "0.7.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a5364e9d77fcdeeaa6062ced926ee3381faa2ee02d3eb83a5c27a8825540829"
checksum = "21a0236b59786fed61e2a80582dd500fe61f18b5dca67a4a067d0bc9039339cf"
dependencies = [
"memchr",
]
@@ -3466,16 +2584,6 @@ version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9edde0db4769d2dc68579893f2306b26c6ecfbe0ef499b013d731b7b9247e0b9"
[[package]]
name = "xdg-home"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec1cdab258fb55c0da61328dc52c8764709b249011b2cad0454c72f0bf10a1f6"
dependencies = [
"libc",
"windows-sys 0.59.0",
]
[[package]]
name = "yoke"
version = "0.8.1"
@@ -3499,82 +2607,20 @@ dependencies = [
"synstructure",
]
[[package]]
name = "zbus"
version = "4.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bb97012beadd29e654708a0fdb4c84bc046f537aecfde2c3ee0a9e4b4d48c725"
dependencies = [
"async-broadcast",
"async-executor",
"async-fs",
"async-io",
"async-lock",
"async-process",
"async-recursion",
"async-task",
"async-trait",
"blocking",
"enumflags2",
"event-listener",
"futures-core",
"futures-sink",
"futures-util",
"hex",
"nix 0.29.0",
"ordered-stream",
"rand",
"serde",
"serde_repr",
"sha1",
"static_assertions",
"tracing",
"uds_windows",
"windows-sys 0.52.0",
"xdg-home",
"zbus_macros",
"zbus_names",
"zvariant",
]
[[package]]
name = "zbus_macros"
version = "4.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "267db9407081e90bbfa46d841d3cbc60f59c0351838c4bc65199ecd79ab1983e"
dependencies = [
"proc-macro-crate",
"proc-macro2",
"quote",
"syn",
"zvariant_utils",
]
[[package]]
name = "zbus_names"
version = "3.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4b9b1fef7d021261cc16cba64c351d291b715febe0fa10dc3a443ac5a5022e6c"
dependencies = [
"serde",
"static_assertions",
"zvariant",
]
[[package]]
name = "zerocopy"
version = "0.8.30"
version = "0.8.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4ea879c944afe8a2b25fef16bb4ba234f47c694565e97383b36f3a878219065c"
checksum = "0894878a5fa3edfd6da3f88c4805f4c8558e2b996227a3d864f47fe11e38282c"
dependencies = [
"zerocopy-derive",
]
[[package]]
name = "zerocopy-derive"
version = "0.8.30"
version = "0.8.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf955aa904d6040f70dc8e9384444cb1030aed272ba3cb09bbc4ab9e7c1f34f5"
checksum = "88d2b8d9c68ad2b9e4340d7832716a4d21a22a1154777ad56ea55c51a9cf3831"
dependencies = [
"proc-macro2",
"quote",
@@ -3666,40 +2712,3 @@ dependencies = [
"system-deps",
"zeromq-src",
]
-[[package]]
-name = "zvariant"
-version = "4.2.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2084290ab9a1c471c38fc524945837734fbf124487e105daec2bb57fd48c81fe"
-dependencies = [
-"endi",
-"enumflags2",
-"serde",
-"static_assertions",
-"zvariant_derive",
-]
-[[package]]
-name = "zvariant_derive"
-version = "4.2.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "73e2ba546bda683a90652bac4a279bc146adad1386f25379cf73200d2002c449"
-dependencies = [
-"proc-macro-crate",
-"proc-macro2",
-"quote",
-"syn",
-"zvariant_utils",
-]
-[[package]]
-name = "zvariant_utils"
-version = "2.1.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c51bcff7cc3dbb5055396bcf774748c3dab426b4b8659046963523cee4808340"
-dependencies = [
-"proc-macro2",
-"quote",
-"syn",
-]

View File (Cargo.toml — cm-dashboard-agent package)

@@ -1,6 +1,6 @@
[package]
name = "cm-dashboard-agent"
version = "0.1.198"
version = "0.1.234"
edition = "2021"
[dependencies]
@@ -21,16 +21,4 @@ chrono-tz = "0.8"
toml = { workspace = true }
async-trait = "0.1"
reqwest = { version = "0.11", features = ["json", "blocking"] }
-# Native system APIs
-nix = { version = "0.29", features = ["fs"] }
-rtnetlink = "0.14"
-netlink-packet-route = "0.19"
-futures = "0.3"
-libc = "0.2"
-# Docker API client
-bollard = "0.17"
-# D-Bus client for systemd
-zbus = "4.0"
futures = "0.3"

View File (agent.rs — Agent struct and run loop)

@@ -1,6 +1,6 @@
use anyhow::Result;
use gethostname::gethostname;
-use std::time::Duration;
use std::time::{Duration, Instant};
use tokio::time::interval;
use tracing::{debug, error, info};
@@ -19,13 +19,22 @@ use crate::collectors::{
use crate::notifications::NotificationManager;
use cm_dashboard_shared::AgentData;
/// Wrapper for collectors with timing information
struct TimedCollector {
collector: Box<dyn Collector>,
interval: Duration,
last_collection: Option<Instant>,
name: String,
}
pub struct Agent {
hostname: String,
config: AgentConfig,
zmq_handler: ZmqHandler,
-collectors: Vec<Box<dyn Collector>>,
collectors: Vec<TimedCollector>,
notification_manager: NotificationManager,
previous_status: Option<SystemStatus>,
cached_agent_data: AgentData,
}
/// Track system component status for change detection
@@ -55,36 +64,78 @@ impl Agent {
config.zmq.publisher_port
);
-// Initialize collectors
-let mut collectors: Vec<Box<dyn Collector>> = Vec::new();
// Initialize collectors with timing information
let mut collectors: Vec<TimedCollector> = Vec::new();
// Add enabled collectors
if config.collectors.cpu.enabled {
-collectors.push(Box::new(CpuCollector::new(config.collectors.cpu.clone())));
collectors.push(TimedCollector {
collector: Box::new(CpuCollector::new(config.collectors.cpu.clone())),
interval: Duration::from_secs(config.collectors.cpu.interval_seconds),
last_collection: None,
name: "CPU".to_string(),
});
info!("CPU collector initialized with {}s interval", config.collectors.cpu.interval_seconds);
}
if config.collectors.memory.enabled {
-collectors.push(Box::new(MemoryCollector::new(config.collectors.memory.clone())));
collectors.push(TimedCollector {
collector: Box::new(MemoryCollector::new(config.collectors.memory.clone())),
interval: Duration::from_secs(config.collectors.memory.interval_seconds),
last_collection: None,
name: "Memory".to_string(),
});
info!("Memory collector initialized with {}s interval", config.collectors.memory.interval_seconds);
}
if config.collectors.disk.enabled {
-collectors.push(Box::new(DiskCollector::new(config.collectors.disk.clone())));
collectors.push(TimedCollector {
collector: Box::new(DiskCollector::new(config.collectors.disk.clone())),
interval: Duration::from_secs(config.collectors.disk.interval_seconds),
last_collection: None,
name: "Disk".to_string(),
});
info!("Disk collector initialized with {}s interval", config.collectors.disk.interval_seconds);
}
if config.collectors.systemd.enabled {
-collectors.push(Box::new(SystemdCollector::new(config.collectors.systemd.clone())));
collectors.push(TimedCollector {
collector: Box::new(SystemdCollector::new(config.collectors.systemd.clone())),
interval: Duration::from_secs(config.collectors.systemd.interval_seconds),
last_collection: None,
name: "Systemd".to_string(),
});
info!("Systemd collector initialized with {}s interval", config.collectors.systemd.interval_seconds);
}
if config.collectors.backup.enabled {
-collectors.push(Box::new(BackupCollector::new()));
collectors.push(TimedCollector {
collector: Box::new(BackupCollector::new()),
interval: Duration::from_secs(config.collectors.backup.interval_seconds),
last_collection: None,
name: "Backup".to_string(),
});
info!("Backup collector initialized with {}s interval", config.collectors.backup.interval_seconds);
}
if config.collectors.network.enabled {
-collectors.push(Box::new(NetworkCollector::new(config.collectors.network.clone())));
collectors.push(TimedCollector {
collector: Box::new(NetworkCollector::new(config.collectors.network.clone())),
interval: Duration::from_secs(config.collectors.network.interval_seconds),
last_collection: None,
name: "Network".to_string(),
});
info!("Network collector initialized with {}s interval", config.collectors.network.interval_seconds);
}
if config.collectors.nixos.enabled {
-collectors.push(Box::new(NixOSCollector::new(config.collectors.nixos.clone())));
collectors.push(TimedCollector {
collector: Box::new(NixOSCollector::new(config.collectors.nixos.clone())),
interval: Duration::from_secs(config.collectors.nixos.interval_seconds),
last_collection: None,
name: "NixOS".to_string(),
});
info!("NixOS collector initialized with {}s interval", config.collectors.nixos.interval_seconds);
}
info!("Initialized {} collectors", collectors.len());
@@ -93,6 +144,9 @@ impl Agent {
let notification_manager = NotificationManager::new(&config.notifications, &hostname)?;
info!("Notification manager initialized");
// Initialize cached agent data
let cached_agent_data = AgentData::new(hostname.clone(), env!("CARGO_PKG_VERSION").to_string());
Ok(Self {
hostname,
config,
@@ -100,6 +154,7 @@ impl Agent {
collectors,
notification_manager,
previous_status: None,
cached_agent_data,
})
}
@@ -114,7 +169,7 @@ impl Agent {
// Set up intervals
let mut transmission_interval = interval(Duration::from_secs(
-self.config.collection_interval_seconds,
self.config.zmq.transmission_interval_seconds,
));
let mut notification_interval = interval(Duration::from_secs(30)); // Check notifications every 30s
@@ -149,24 +204,47 @@ impl Agent {
async fn collect_and_broadcast(&mut self) -> Result<()> {
debug!("Starting structured data collection");
-// Initialize empty AgentData
-let mut agent_data = AgentData::new(self.hostname.clone(), env!("CARGO_PKG_VERSION").to_string());
// Collect data from collectors whose intervals have elapsed
// Update cached_agent_data with new data
let now = Instant::now();
for timed_collector in &mut self.collectors {
let should_collect = match timed_collector.last_collection {
None => true, // First collection
Some(last_time) => now.duration_since(last_time) >= timed_collector.interval,
};
-// Collect data from all collectors
-for collector in &self.collectors {
-if let Err(e) = collector.collect_structured(&mut agent_data).await {
-error!("Collector failed: {}", e);
-// Continue with other collectors even if one fails
if should_collect {
if let Err(e) = timed_collector.collector.collect_structured(&mut self.cached_agent_data).await {
error!("Collector {} failed: {}", timed_collector.name, e);
// Update last_collection time even on failure to prevent immediate retries
timed_collector.last_collection = Some(now);
} else {
timed_collector.last_collection = Some(now);
debug!(
"Collected from {} ({}s interval)",
timed_collector.name,
timed_collector.interval.as_secs()
);
}
}
}
// Update timestamp on cached data
self.cached_agent_data.timestamp = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_secs();
// Clone for notification check (to avoid borrow issues)
let agent_data_snapshot = self.cached_agent_data.clone();
// Check for status changes and send notifications
-if let Err(e) = self.check_status_changes_and_notify(&agent_data).await {
if let Err(e) = self.check_status_changes_and_notify(&agent_data_snapshot).await {
error!("Failed to check status changes: {}", e);
}
-// Broadcast the structured data via ZMQ
-if let Err(e) = self.zmq_handler.publish_agent_data(&agent_data).await {
// Broadcast the cached structured data via ZMQ
if let Err(e) = self.zmq_handler.publish_agent_data(&agent_data_snapshot).await {
error!("Failed to broadcast agent data: {}", e);
} else {
debug!("Successfully broadcast structured agent data");

View File (collectors/backup.rs)

@@ -1,36 +1,66 @@
use async_trait::async_trait;
-use cm_dashboard_shared::{AgentData, BackupData, BackupDiskData};
use cm_dashboard_shared::{AgentData, BackupData, BackupDiskData, Status};
use serde::{Deserialize, Serialize};
-use std::collections::HashMap;
use std::collections::{HashMap, HashSet};
use std::fs;
-use std::path::Path;
-use tracing::debug;
use std::path::{Path, PathBuf};
use tracing::{debug, warn};
use super::{Collector, CollectorError};
/// Backup collector that reads backup status from TOML files with structured data output
pub struct BackupCollector {
-/// Path to backup status file
-status_file_path: String,
/// Directory containing backup status files
status_dir: String,
}
impl BackupCollector {
pub fn new() -> Self {
Self {
status_file_path: "/var/lib/backup/backup-status.toml".to_string(),
status_dir: "/var/lib/backup/status".to_string(),
}
}
-/// Read backup status from TOML file
-async fn read_backup_status(&self) -> Result<Option<BackupStatusToml>, CollectorError> {
-if !Path::new(&self.status_file_path).exists() {
-debug!("Backup status file not found: {}", self.status_file_path);
-return Ok(None);
/// Scan directory for all backup status files
async fn scan_status_files(&self) -> Result<Vec<PathBuf>, CollectorError> {
let status_path = Path::new(&self.status_dir);
if !status_path.exists() {
debug!("Backup status directory not found: {}", self.status_dir);
return Ok(Vec::new());
}
-let content = fs::read_to_string(&self.status_file_path)
let mut status_files = Vec::new();
match fs::read_dir(status_path) {
Ok(entries) => {
for entry in entries {
if let Ok(entry) = entry {
let path = entry.path();
if path.is_file() {
if let Some(filename) = path.file_name().and_then(|n| n.to_str()) {
if filename.starts_with("backup-status-") && filename.ends_with(".toml") {
status_files.push(path);
}
}
}
}
}
}
Err(e) => {
warn!("Failed to read backup status directory: {}", e);
return Ok(Vec::new());
}
}
Ok(status_files)
}
/// Read a single backup status file
async fn read_status_file(&self, path: &Path) -> Result<BackupStatusToml, CollectorError> {
let content = fs::read_to_string(path)
.map_err(|e| CollectorError::SystemRead {
-path: self.status_file_path.clone(),
path: path.to_string_lossy().to_string(),
error: e.to_string(),
})?;
@@ -40,66 +70,122 @@ impl BackupCollector {
error: format!("Failed to parse backup status TOML: {}", e),
})?;
-Ok(Some(status))
Ok(status)
}
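For orientation, a file that parses into BackupStatusToml would look roughly like the sketch below. The field names follow the accessors used in this diff (status, start_time, disk_serial_number, disk_product_name, disk_wear_percent, disk_space, services.*.archive_count); the concrete values and table layout are illustrative assumptions, not taken from a real host:

    status = "success"
    start_time = "2025-12-01 03:00:00"
    disk_serial_number = "WD-XYZ123"
    disk_product_name = "Example 4TB HDD"
    disk_wear_percent = 3.0

    [disk_space]
    usage_percent = 42.5
    used_gb = 850.0
    total_gb = 2000.0

    [services.gitea]
    repo_size_bytes = 1073741824
    archive_count = 30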
/// Calculate backup status from TOML status field
fn calculate_backup_status(status_str: &str) -> Status {
match status_str.to_lowercase().as_str() {
"success" => Status::Ok,
"warning" => Status::Warning,
"failed" | "error" => Status::Critical,
_ => Status::Unknown,
}
}
/// Calculate usage status from disk usage percentage
fn calculate_usage_status(usage_percent: f32) -> Status {
if usage_percent < 80.0 {
Status::Ok
} else if usage_percent < 90.0 {
Status::Warning
} else {
Status::Critical
}
}
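The worst_status fold further down (`worst_status.max(backup_status_enum).max(usage_status)`) only works if Status implements Ord with severity increasing in the comparison order. A minimal self-contained sketch of the same pattern, using a local enum with an assumed ordering rather than the shared crate's actual definition:

    #[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
    enum Severity { Ok, Warning, Critical } // derived Ord follows declaration order

    fn worst(statuses: &[Severity]) -> Severity {
        // max() picks the most severe value under the derived ordering
        statuses.iter().copied().fold(Severity::Ok, |acc, s| acc.max(s))
    }

One caveat worth checking against the shared crate: if Status::Unknown sorts above Critical there, a single unreadable disk would dominate the aggregate.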
/// Convert BackupStatusToml to BackupData and populate AgentData
async fn populate_backup_data(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
-if let Some(backup_status) = self.read_backup_status().await? {
-// Use raw start_time string from TOML
let status_files = self.scan_status_files().await?;
-// Extract disk information
-let repository_disk = if let Some(disk_space) = &backup_status.disk_space {
-Some(BackupDiskData {
-serial: backup_status.disk_serial_number.clone().unwrap_or_else(|| "Unknown".to_string()),
-usage_percent: disk_space.usage_percent as f32,
-used_gb: disk_space.used_gb as f32,
-total_gb: disk_space.total_gb as f32,
-wear_percent: backup_status.disk_wear_percent,
-temperature_celsius: None, // Not available in current TOML
-})
-} else if let Some(serial) = &backup_status.disk_serial_number {
-// Fallback: create minimal disk info if we have serial but no disk_space
-Some(BackupDiskData {
-serial: serial.clone(),
-usage_percent: 0.0,
-used_gb: 0.0,
-total_gb: 0.0,
-wear_percent: backup_status.disk_wear_percent,
-temperature_celsius: None,
-})
-} else {
-None
-};
-// Calculate total repository size from services
-let total_size_gb = backup_status.services
-.values()
-.map(|service| service.repo_size_bytes as f32 / (1024.0 * 1024.0 * 1024.0))
-.sum::<f32>();
-let backup_data = BackupData {
-status: backup_status.status,
-total_size_gb: Some(total_size_gb),
-repository_health: Some("ok".to_string()), // Derive from status if needed
-repository_disk,
-last_backup_size_gb: None, // Not available in current TOML format
-start_time_raw: Some(backup_status.start_time),
-};
-agent_data.backup = backup_data;
-} else {
-// No backup status available - set default values
if status_files.is_empty() {
debug!("No backup status files found");
agent_data.backup = BackupData {
status: "unavailable".to_string(),
total_size_gb: None,
repository_health: None,
repository_disk: None,
last_backup_size_gb: None,
start_time_raw: None,
repositories: Vec::new(),
repository_status: Status::Unknown,
disks: Vec::new(),
};
return Ok(());
}
let mut all_repositories = HashSet::new();
let mut disks = Vec::new();
let mut worst_status = Status::Ok;
for status_file in status_files {
match self.read_status_file(&status_file).await {
Ok(backup_status) => {
// Collect all service names
for service_name in backup_status.services.keys() {
all_repositories.insert(service_name.clone());
}
// Calculate backup status
let backup_status_enum = Self::calculate_backup_status(&backup_status.status);
// Calculate usage status from disk space
let (usage_percent, used_gb, total_gb, usage_status) = if let Some(disk_space) = &backup_status.disk_space {
let usage_pct = disk_space.usage_percent as f32;
(
usage_pct,
disk_space.used_gb as f32,
disk_space.total_gb as f32,
Self::calculate_usage_status(usage_pct),
)
} else {
(0.0, 0.0, 0.0, Status::Unknown)
};
// Update worst status
worst_status = worst_status.max(backup_status_enum).max(usage_status);
// Build service list for this disk
let services: Vec<String> = backup_status.services.keys().cloned().collect();
// Get min and max archive counts to detect inconsistencies
let archives_min: i64 = backup_status.services.values()
.map(|service| service.archive_count)
.min()
.unwrap_or(0);
let archives_max: i64 = backup_status.services.values()
.map(|service| service.archive_count)
.max()
.unwrap_or(0);
// Create disk data
let disk_data = BackupDiskData {
serial: backup_status.disk_serial_number.unwrap_or_else(|| "Unknown".to_string()),
product_name: backup_status.disk_product_name,
wear_percent: backup_status.disk_wear_percent,
temperature_celsius: None, // Not available in current TOML
last_backup_time: Some(backup_status.start_time),
backup_status: backup_status_enum,
disk_usage_percent: usage_percent,
disk_used_gb: used_gb,
disk_total_gb: total_gb,
usage_status,
services,
archives_min,
archives_max,
};
disks.push(disk_data);
}
Err(e) => {
warn!("Failed to read backup status file {:?}: {}", status_file, e);
}
}
}
let repositories: Vec<String> = all_repositories.into_iter().collect();
agent_data.backup = BackupData {
repositories,
repository_status: worst_status,
disks,
};
Ok(())
}
}

View File (collectors/cpu.rs)

@@ -1,25 +1,22 @@
use async_trait::async_trait;
-use cm_dashboard_shared::{AgentData, Status, HysteresisThresholds, CpuData};
-use std::sync::RwLock;
-use std::time::Instant;
use cm_dashboard_shared::{AgentData, Status, HysteresisThresholds};
use tracing::debug;
use super::{utils, Collector, CollectorError};
use crate::config::CpuConfig;
-/// Extremely efficient CPU metrics collector with interval-based caching
/// Extremely efficient CPU metrics collector
///
/// EFFICIENCY OPTIMIZATIONS:
/// - Single /proc/loadavg read for all load metrics
/// - Single /proc/stat read for CPU usage
/// - Minimal string allocations
/// - No process spawning
/// - <0.1ms collection time target
pub struct CpuCollector {
load_thresholds: HysteresisThresholds,
temperature_thresholds: HysteresisThresholds,
config: CpuConfig,
-state: RwLock<CpuCacheState>,
}
-#[derive(Debug, Clone)]
-struct CpuCacheState {
-last_collection: Option<Instant>,
-cached_data: CpuData,
-}
impl CpuCollector {
@@ -29,39 +26,15 @@ impl CpuCollector {
config.load_warning_threshold,
config.load_critical_threshold,
);
let temperature_thresholds = HysteresisThresholds::new(
config.temperature_warning_threshold,
config.temperature_critical_threshold,
);
Self {
load_thresholds,
temperature_thresholds,
config,
-state: RwLock::new(CpuCacheState {
-last_collection: None,
-cached_data: CpuData {
-load_1min: 0.0,
-load_5min: 0.0,
-load_15min: 0.0,
-frequency_mhz: 0.0,
-temperature_celsius: None,
-load_status: Status::Unknown,
-temperature_status: Status::Unknown,
-},
-}),
}
}
-fn should_update_cache(&self) -> bool {
-let state = self.state.read().unwrap();
-match state.last_collection {
-None => true,
-Some(last) => {
-let cache_duration = std::time::Duration::from_secs(self.config.interval_seconds);
-last.elapsed() > cache_duration
-}
-}
-}
@@ -146,36 +119,69 @@ impl CpuCollector {
utils::parse_u64(content.trim())
}
-/// Collect CPU frequency and populate AgentData
-async fn collect_frequency(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
-// Try scaling frequency first (more accurate for current frequency)
-if let Ok(freq) =
-utils::read_proc_file("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq")
-{
-if let Ok(freq_khz) = utils::parse_u64(freq.trim()) {
-let freq_mhz = freq_khz as f32 / 1000.0;
-agent_data.system.cpu.frequency_mhz = freq_mhz;
-return Ok(());
-}
-}
/// Collect CPU C-state (idle depth) and populate AgentData with top 3 C-states by usage
async fn collect_cstate(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
// Read C-state usage from first CPU (representative of overall system)
// C-states indicate CPU idle depth: C1=light sleep, C6=deep sleep, C10=deepest
-// Fallback: parse /proc/cpuinfo for base frequency
-if let Ok(content) = utils::read_proc_file("/proc/cpuinfo") {
-for line in content.lines() {
-if line.starts_with("cpu MHz") {
-if let Some(freq_str) = line.split(':').nth(1) {
-if let Ok(freq_mhz) = utils::parse_f32(freq_str) {
-agent_data.system.cpu.frequency_mhz = freq_mhz;
-return Ok(());
let mut cstate_times: Vec<(String, u64)> = Vec::new();
let mut total_time: u64 = 0;
// Collect all C-state times from CPU0
for state_num in 0..=10 {
let time_path = format!("/sys/devices/system/cpu/cpu0/cpuidle/state{}/time", state_num);
let name_path = format!("/sys/devices/system/cpu/cpu0/cpuidle/state{}/name", state_num);
if let Ok(time_str) = utils::read_proc_file(&time_path) {
if let Ok(time) = utils::parse_u64(time_str.trim()) {
if let Ok(name) = utils::read_proc_file(&name_path) {
let state_name = name.trim();
// Skip POLL state (not real idle)
if state_name != "POLL" && time > 0 {
// Extract "C" + digits pattern (C3, C10, etc.) to reduce JSON size
// Handles formats like "C3_ACPI", "C10_MWAIT", etc.
let clean_name = if let Some(c_pos) = state_name.find('C') {
let rest = &state_name[c_pos + 1..];
let digit_count = rest.chars().take_while(|c| c.is_ascii_digit()).count();
if digit_count > 0 {
state_name[c_pos..c_pos + 1 + digit_count].to_string()
} else {
state_name.to_string()
}
} else {
state_name.to_string()
};
cstate_times.push((clean_name, time));
total_time += time;
}
}
-break; // Only need first CPU entry
}
} else {
// No more states available
break;
}
}
debug!("CPU frequency not available");
// Leave frequency as 0.0 if not available
// Sort by time descending to get top 3
cstate_times.sort_by(|a, b| b.1.cmp(&a.1));
// Calculate percentages for top 3 and populate AgentData
agent_data.system.cpu.cstates = cstate_times
.iter()
.take(3)
.map(|(name, time)| {
let percent = if total_time > 0 {
(*time as f32 / total_time as f32) * 100.0
} else {
0.0
};
cm_dashboard_shared::CStateInfo {
name: name.clone(),
percent,
}
})
.collect();
Ok(())
}
}
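The name-cleaning logic above compacts kernel cpuidle state names before they are serialized; the intended mapping, on illustrative inputs:

    // "C3_ACPI"   -> "C3"
    // "C10_MWAIT" -> "C10"
    // "C1E"       -> "C1"   (only the digits directly after 'C' are kept)
    // "POLL"      -> skipped earlier, never reaches the cleaner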
@@ -183,14 +189,6 @@ impl CpuCollector {
#[async_trait]
impl Collector for CpuCollector {
async fn collect_structured(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
-// Check if cache is valid
-if !self.should_update_cache() {
-let state = self.state.read().unwrap();
-agent_data.system.cpu = state.cached_data.clone();
-debug!("Using cached CPU data (interval: {}s)", self.config.interval_seconds);
-return Ok(());
-}
debug!("Collecting CPU metrics");
let start = std::time::Instant::now();
@@ -200,8 +198,8 @@ impl Collector for CpuCollector {
// Collect temperature (optional)
self.collect_temperature(agent_data).await?;
-// Collect frequency (optional)
-self.collect_frequency(agent_data).await?;
// Collect C-state (CPU idle depth)
self.collect_cstate(agent_data).await?;
let duration = start.elapsed();
debug!("CPU collection completed in {:?}", duration);
@@ -222,11 +220,6 @@ impl Collector for CpuCollector {
Status::Unknown
};
-// Update cache
-let mut state = self.state.write().unwrap();
-state.last_collection = Some(Instant::now());
-state.cached_data = agent_data.system.cpu.clone();
Ok(())
}
}

View File (collectors/disk.rs)

@@ -3,11 +3,9 @@ use async_trait::async_trait;
use cm_dashboard_shared::{AgentData, DriveData, FilesystemData, PoolData, HysteresisThresholds, Status};
use crate::config::DiskConfig;
-use std::process::Command;
-use std::time::Instant;
use tokio::process::Command as TokioCommand;
use std::process::Command as StdCommand;
use std::collections::HashMap;
-use std::sync::RwLock;
use tracing::debug;
use super::{Collector, CollectorError};
@@ -15,19 +13,6 @@ use super::{Collector, CollectorError};
pub struct DiskCollector {
config: DiskConfig,
temperature_thresholds: HysteresisThresholds,
-/// Cached state with thread-safe interior mutability
-state: RwLock<DiskCacheState>,
}
-/// Internal state for disk caching
-#[derive(Debug, Clone)]
-struct DiskCacheState {
-/// Last collection time for performance tracking
-last_collection: Option<Instant>,
-/// Cached drive data
-cached_drives: Vec<DriveData>,
-/// Cached pool data
-cached_pools: Vec<PoolData>,
-}
/// A physical drive with its filesystems
@@ -72,24 +57,18 @@ impl DiskCollector {
config.temperature_warning_celsius,
config.temperature_critical_celsius,
);
-let state = DiskCacheState {
-last_collection: None,
-cached_drives: Vec::new(),
-cached_pools: Vec::new(),
-};
Self {
config,
temperature_thresholds,
-state: RwLock::new(state),
}
}
/// Collect all storage data and populate AgentData
async fn collect_storage_data(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
-let start_time = Instant::now();
debug!("Starting clean storage collection");
// Clear drives and pools to prevent duplicates when updating cached data
agent_data.system.storage.drives.clear();
agent_data.system.storage.pools.clear();
// Step 1: Get mount points and their backing devices
let mount_devices = self.get_mount_devices().await?;
@@ -125,74 +104,40 @@ impl DiskCollector {
self.populate_drives_data(&physical_drives, &smart_data, agent_data)?;
self.populate_pools_data(&mergerfs_pools, &smart_data, agent_data)?;
-// Step 7: Update cache with fresh data
-{
-let mut state = self.state.write().unwrap();
-state.cached_drives = agent_data.system.storage.drives.clone();
-state.cached_pools = agent_data.system.storage.pools.clone();
-state.last_collection = Some(Instant::now());
-}
-let elapsed = start_time.elapsed();
-debug!("Storage collection completed in {:?}", elapsed);
Ok(())
}
-/// Check if disk collection cache should be updated
-fn should_update_cache(&self) -> bool {
-let state = self.state.read().unwrap();
-match state.last_collection {
-None => true,
-Some(last) => {
-let cache_duration = std::time::Duration::from_secs(self.config.interval_seconds);
-last.elapsed() > cache_duration
-}
-}
-}
-/// Get cached disk data if available and fresh
-fn get_cached_data(&self) -> Option<(Vec<DriveData>, Vec<PoolData>)> {
-if !self.should_update_cache() {
-let state = self.state.read().unwrap();
-Some((state.cached_drives.clone(), state.cached_pools.clone()))
-} else {
-None
-}
-}
-/// Get block devices and their mount points by reading /proc/mounts
/// Get block devices and their mount points using lsblk
async fn get_mount_devices(&self) -> Result<HashMap<String, String>, CollectorError> {
let content = std::fs::read_to_string("/proc/mounts")
use super::run_command_with_timeout;
let mut cmd = TokioCommand::new("lsblk");
cmd.args(&["-rn", "-o", "NAME,MOUNTPOINT"]);
let output = run_command_with_timeout(cmd, 2).await
.map_err(|e| CollectorError::SystemRead {
path: "/proc/mounts".to_string(),
path: "block devices".to_string(),
error: e.to_string(),
})?;
let mut mount_devices = HashMap::new();
-for line in content.lines() {
for line in String::from_utf8_lossy(&output.stdout).lines() {
let parts: Vec<&str> = line.split_whitespace().collect();
-if parts.len() >= 3 {
-let device = parts[0];
if parts.len() >= 2 {
let device_name = parts[0];
let mount_point = parts[1];
-let fs_type = parts[2];
-// Skip pseudo filesystems and fuse mounts
-if fs_type.starts_with("fuse") ||
-matches!(fs_type, "proc" | "sysfs" | "tmpfs" | "devtmpfs" |
-"devpts" | "cgroup" | "cgroup2" | "pstore" | "bpf" |
-"tracefs" | "debugfs" | "securityfs" | "hugetlbfs" |
-"mqueue" | "configfs" | "autofs") {
// Skip swap partitions and unmounted devices
if mount_point == "[SWAP]" || mount_point.is_empty() {
continue;
}
-mount_devices.insert(mount_point.to_string(), device.to_string());
// Convert device name to full path
let device_path = format!("/dev/{}", device_name);
mount_devices.insert(mount_point.to_string(), device_path);
}
}
debug!("Found {} mounted block devices", mount_devices.len());
Ok(mount_devices)
}
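For reference, `lsblk -rn -o NAME,MOUNTPOINT` prints one raw, space-separated row per block device, roughly like this (illustrative output):

    sda
    sda1 /boot
    nvme0n1
    nvme0n1p1 /
    nvme0n1p2 [SWAP]

Rows without a mountpoint have no second field, so the `parts.len() >= 2` guard drops them, and `[SWAP]` is filtered explicitly above.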
@@ -205,8 +150,8 @@ impl DiskCollector {
Ok((total, used)) => {
filesystem_usage.insert(mount_point.clone(), (total, used));
}
-Err(e) => {
-debug!("Failed to get filesystem info for {}: {}", mount_point, e);
Err(_e) => {
// Silently skip filesystems we can't read
}
}
}
@@ -227,8 +172,6 @@ impl DiskCollector {
// Only add if we don't already have usage data for this mount point
if !filesystem_usage.contains_key(&mount_point) {
if let Ok((total, used)) = self.get_filesystem_info(&mount_point) {
debug!("Added MergerFS filesystem usage for {}: {}GB total, {}GB used",
mount_point, total as f32 / (1024.0 * 1024.0 * 1024.0), used as f32 / (1024.0 * 1024.0 * 1024.0));
filesystem_usage.insert(mount_point, (total, used));
}
}
@@ -238,20 +181,44 @@ impl DiskCollector {
Ok(())
}
-/// Get filesystem info for a single mount point using statvfs syscall
/// Get filesystem info for a single mount point
fn get_filesystem_info(&self, mount_point: &str) -> Result<(u64, u64), CollectorError> {
-use nix::sys::statvfs::statvfs;
let output = StdCommand::new("timeout")
.args(&["2", "df", "--block-size=1", mount_point])
.output()
.map_err(|e| CollectorError::SystemRead {
path: format!("df {}", mount_point),
error: e.to_string(),
})?;
-let stat = statvfs(mount_point).map_err(|e| CollectorError::SystemRead {
-path: mount_point.to_string(),
-error: format!("statvfs failed: {}", e),
let output_str = String::from_utf8_lossy(&output.stdout);
let lines: Vec<&str> = output_str.lines().collect();
if lines.len() < 2 {
return Err(CollectorError::Parse {
value: output_str.to_string(),
error: "Expected at least 2 lines from df output".to_string(),
});
}
// Parse the data line (skip header)
let parts: Vec<&str> = lines[1].split_whitespace().collect();
if parts.len() < 4 {
return Err(CollectorError::Parse {
value: lines[1].to_string(),
error: "Expected at least 4 fields in df output".to_string(),
});
}
let total_bytes: u64 = parts[1].parse().map_err(|e| CollectorError::Parse {
value: parts[1].to_string(),
error: format!("Failed to parse total bytes: {}", e),
})?;
-// Calculate total and used bytes
-let block_size = stat.fragment_size() as u64;
-let total_bytes = stat.blocks() as u64 * block_size;
-let available_bytes = stat.blocks_available() as u64 * block_size;
-let used_bytes = total_bytes - available_bytes;
let used_bytes: u64 = parts[2].parse().map_err(|e| CollectorError::Parse {
value: parts[2].to_string(),
error: format!("Failed to parse used bytes: {}", e),
})?;
Ok((total_bytes, used_bytes))
}
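The parser above expects the usual two-line `df --block-size=1 <mount>` shape, approximately (values illustrative):

    Filesystem      1B-blocks         Used    Available Use% Mounted on
    /dev/nvme0n1p2 498443259904 134567890123 338765432109  29% /

parts[1] is the total size in bytes and parts[2] the used bytes, matching the two parse calls.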
@@ -279,9 +246,8 @@ impl DiskCollector {
} else {
mount_point.trim_start_matches('/').replace('/', "_")
};
if pool_name.is_empty() {
debug!("Skipping mergerfs pool with empty name: {}", mount_point);
continue;
}
@@ -309,8 +275,7 @@ impl DiskCollector {
// Categorize as data vs parity drives
let (data_drives, parity_drives) = match self.categorize_pool_drives(&all_member_paths) {
Ok(drives) => drives,
-Err(e) => {
-debug!("Failed to categorize drives for pool {}: {}. Skipping.", mount_point, e);
Err(_e) => {
continue;
}
};
@@ -325,8 +290,7 @@ impl DiskCollector {
});
}
}
debug!("Found {} mergerfs pools", pools.len());
Ok(pools)
}
@@ -413,9 +377,9 @@ impl DiskCollector {
device.to_string()
}
-/// Get SMART data for drives
/// Get SMART data for drives in parallel
async fn get_smart_data_for_drives(&self, physical_drives: &[PhysicalDrive], mergerfs_pools: &[MergerfsPool]) -> HashMap<String, SmartData> {
-let mut smart_data = HashMap::new();
use futures::future::join_all;
// Collect all drive names
let mut all_drives = std::collections::HashSet::new();
@@ -431,9 +395,24 @@ impl DiskCollector {
}
}
-// Get SMART data for each drive
-for drive_name in all_drives {
-if let Ok(data) = self.get_smart_data(&drive_name).await {
// Collect SMART data for all drives in parallel
let futures: Vec<_> = all_drives
.iter()
.map(|drive_name| {
let drive = drive_name.clone();
async move {
let result = self.get_smart_data(&drive).await;
(drive, result)
}
})
.collect();
let results = join_all(futures).await;
// Build HashMap from results
let mut smart_data = HashMap::new();
for (drive_name, result) in results {
if let Ok(data) = result {
smart_data.insert(drive_name, data);
}
}
@@ -447,7 +426,7 @@ impl DiskCollector {
// Use direct smartctl (no sudo) - service has CAP_SYS_RAWIO and CAP_SYS_ADMIN capabilities
// For NVMe drives, specify device type explicitly
let mut cmd = Command::new("smartctl");
let mut cmd = TokioCommand::new("smartctl");
if drive_name.starts_with("nvme") {
cmd.args(&["-d", "nvme", "-a", &format!("/dev/{}", drive_name)]);
} else {
@@ -462,8 +441,10 @@ impl DiskCollector {
let output_str = String::from_utf8_lossy(&output.stdout);
-if !output.status.success() {
-// Return unknown data rather than failing completely
// Note: smartctl returns non-zero exit codes for warnings (like exit code 32
// for "temperature was high in the past"), but the output data is still valid.
// Only check if we got any output at all, don't reject based on exit code.
if output_str.is_empty() {
return Ok(SmartData {
health: "UNKNOWN".to_string(),
serial_number: None,
@@ -471,7 +452,7 @@ impl DiskCollector {
wear_percent: None,
});
}
let mut health = "UNKNOWN".to_string();
let mut serial_number = None;
let mut temperature = None;
@@ -787,42 +768,38 @@ impl DiskCollector {
Ok((data_drives, parity_drives))
}
-/// Get drive information for a mount path by reading /proc/mounts
/// Get drive information for a mount path
fn get_drive_info_for_path(&self, path: &str) -> anyhow::Result<PoolDrive> {
-// Read /proc/mounts to find the backing device
-let content = std::fs::read_to_string("/proc/mounts")
-.map_err(|e| anyhow::anyhow!("Failed to read /proc/mounts: {}", e))?;
// Use lsblk to find the backing device with timeout
let output = StdCommand::new("timeout")
.args(&["2", "lsblk", "-rn", "-o", "NAME,MOUNTPOINT"])
.output()
.map_err(|e| anyhow::anyhow!("Failed to run lsblk: {}", e))?;
let output_str = String::from_utf8_lossy(&output.stdout);
let mut device = String::new();
-for line in content.lines() {
for line in output_str.lines() {
let parts: Vec<&str> = line.split_whitespace().collect();
if parts.len() >= 2 && parts[1] == path {
device = parts[0].to_string();
break;
}
}
if device.is_empty() {
return Err(anyhow::anyhow!("Could not find device for path {}", path));
}
// Extract base device name (e.g., "sda1" -> "sda")
let base_device = self.extract_base_device(&format!("/dev/{}", device));
// Extract base device name (e.g., "/dev/sda1" -> "sda")
let base_device = self.extract_base_device(&device);
-// Get temperature from SMART data if available
-let temperature = if let Ok(smart_data) = tokio::task::block_in_place(|| {
-tokio::runtime::Handle::current().block_on(self.get_smart_data(&base_device))
-}) {
-smart_data.temperature_celsius
-} else {
-None
-};
// Temperature will be filled in later from parallel SMART collection
// Don't collect it here to avoid sequential blocking with problematic async nesting
Ok(PoolDrive {
name: base_device,
mount_point: path.to_string(),
-temperature_celsius: temperature,
temperature_celsius: None,
})
}
@@ -862,15 +839,7 @@ impl DiskCollector {
#[async_trait]
impl Collector for DiskCollector {
async fn collect_structured(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
-// Use cached data if available and fresh
-if let Some((cached_drives, cached_pools)) = self.get_cached_data() {
-agent_data.system.storage.drives = cached_drives;
-agent_data.system.storage.pools = cached_pools;
-Ok(())
-} else {
-// Collect fresh data
-self.collect_storage_data(agent_data).await
-}
self.collect_storage_data(agent_data).await
}
}
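The exit-code comment in get_smart_data deserves spelling out: smartctl's exit status is a bitmask, so exit 32 (bit 5, "attributes were in prefail range at some time in the past", e.g. a historical over-temperature) still comes with fully valid output. A sketch for callers that do want to separate hard failures from historical warnings; the bit meanings follow the smartctl man page and should be verified against the installed smartmontools version:

    fn smartctl_exit_is_fatal(code: i32) -> bool {
        // bit 0: command line did not parse
        // bit 1: device open failed
        // bit 2: SMART/ATA command failed or checksum error
        // bits 3..7: SMART status findings (disk failing, prefail now/past, ...)
        code & 0b0000_0111 != 0
    }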

View File (collectors/memory.rs)

@@ -95,47 +95,62 @@ impl MemoryCollector {
Ok(())
}
-/// Populate tmpfs data into AgentData using statvfs syscall
/// Populate tmpfs data into AgentData
async fn populate_tmpfs_data(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
-use nix::sys::statvfs::statvfs;
// Discover all tmpfs mount points
let tmpfs_mounts = self.discover_tmpfs_mounts()?;
if tmpfs_mounts.is_empty() {
debug!("No tmpfs mounts found to monitor");
return Ok(());
}
-// Get usage data for each tmpfs mount using statvfs syscall
-for mount_point in tmpfs_mounts {
-match statvfs(mount_point.as_str()) {
-Ok(stat) => {
-let block_size = stat.fragment_size() as u64;
-let total_bytes = stat.blocks() as u64 * block_size;
-let available_bytes = stat.blocks_available() as u64 * block_size;
-let used_bytes = total_bytes - available_bytes;
// Get usage data for all tmpfs mounts at once using df (with 2 second timeout)
let mut df_args = vec!["2", "df", "--output=target,size,used", "--block-size=1"];
df_args.extend(tmpfs_mounts.iter().map(|s| s.as_str()));
-if total_bytes == 0 {
-continue;
-}
let df_output = std::process::Command::new("timeout")
.args(&df_args[..])
.output()
.map_err(|e| CollectorError::SystemRead {
path: "tmpfs mounts".to_string(),
error: e.to_string(),
})?;
-let total_gb = total_bytes as f32 / (1024.0 * 1024.0 * 1024.0);
-let used_gb = used_bytes as f32 / (1024.0 * 1024.0 * 1024.0);
-let usage_percent = (used_bytes as f32 / total_bytes as f32) * 100.0;
let df_str = String::from_utf8_lossy(&df_output.stdout);
let df_lines: Vec<&str> = df_str.lines().skip(1).collect(); // Skip header
-// Add to tmpfs list
-agent_data.system.memory.tmpfs.push(TmpfsData {
-mount: mount_point.clone(),
-usage_percent,
-used_gb,
-total_gb,
-});
-}
-Err(e) => {
-debug!("Failed to get stats for tmpfs mount {}: {}", mount_point, e);
-}
// Process each tmpfs mount
for (i, mount_point) in tmpfs_mounts.iter().enumerate() {
if i >= df_lines.len() {
debug!("Not enough df output lines for tmpfs mount: {}", mount_point);
continue;
}
let parts: Vec<&str> = df_lines[i].split_whitespace().collect();
if parts.len() < 3 {
debug!("Invalid df output for tmpfs mount: {}", mount_point);
continue;
}
let total_bytes: u64 = parts[1].parse().unwrap_or(0);
let used_bytes: u64 = parts[2].parse().unwrap_or(0);
if total_bytes == 0 {
continue;
}
let total_gb = total_bytes as f32 / (1024.0 * 1024.0 * 1024.0);
let used_gb = used_bytes as f32 / (1024.0 * 1024.0 * 1024.0);
let usage_percent = (used_bytes as f32 / total_bytes as f32) * 100.0;
// Add to tmpfs list
agent_data.system.memory.tmpfs.push(TmpfsData {
mount: mount_point.clone(),
usage_percent,
used_gb,
total_gb,
});
}
// Sort tmpfs mounts by mount point for consistent display order
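The batched call above runs `timeout 2 df --output=target,size,used --block-size=1 <mount...>` once for all tmpfs mounts; its output is a header plus one row per argument, roughly (illustrative):

    Mounted on       1B-blocks       Used
    /dev/shm        8271970304     102400
    /run            3308788224    1548288

Note the positional assumption baked into the loop: df_lines[i] is paired with tmpfs_mounts[i], which holds only while df emits rows in argument order and skips none of them; a mount df cannot stat would shift every following pairing, and the code currently only guards against running out of lines.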
@@ -185,13 +200,16 @@ impl Collector for MemoryCollector {
debug!("Collecting memory metrics");
let start = std::time::Instant::now();
// Clear tmpfs list to prevent duplicates when updating cached data
agent_data.system.memory.tmpfs.clear();
// Parse memory info from /proc/meminfo
let info = self.parse_meminfo().await?;
// Populate memory data directly
self.populate_memory_data(&info, agent_data).await?;
-// Collect tmpfs data
// Collect tmpfs data
self.populate_tmpfs_data(agent_data).await?;
let duration = start.elapsed();

View File (collectors/mod.rs)

@@ -1,8 +1,7 @@
use async_trait::async_trait;
use cm_dashboard_shared::{AgentData};
-use std::process::{Command, Output};
use std::process::Output;
use std::time::Duration;
-use tokio::time::timeout;
pub mod backup;
pub mod cpu;
@@ -16,16 +15,34 @@ pub mod systemd;
pub use error::CollectorError;
/// Run a command with a timeout to prevent blocking
-pub async fn run_command_with_timeout(mut cmd: Command, timeout_secs: u64) -> std::io::Result<Output> {
/// Properly kills the process if timeout is exceeded
pub async fn run_command_with_timeout(mut cmd: tokio::process::Command, timeout_secs: u64) -> std::io::Result<Output> {
use tokio::time::timeout;
use std::process::Stdio;
let timeout_duration = Duration::from_secs(timeout_secs);
-match timeout(timeout_duration, tokio::task::spawn_blocking(move || cmd.output())).await {
-Ok(Ok(result)) => result,
-Ok(Err(e)) => Err(std::io::Error::new(std::io::ErrorKind::Other, e)),
-Err(_) => Err(std::io::Error::new(
-std::io::ErrorKind::TimedOut,
-format!("Command timed out after {} seconds", timeout_secs)
-)),
// Configure stdio to capture output
cmd.stdout(Stdio::piped());
cmd.stderr(Stdio::piped());
let child = cmd.spawn()?;
let pid = child.id();
match timeout(timeout_duration, child.wait_with_output()).await {
Ok(result) => result,
Err(_) => {
// Timeout - force kill the process using system kill command
if let Some(process_id) = pid {
let _ = tokio::process::Command::new("kill")
.args(&["-9", &process_id.to_string()])
.output()
.await;
}
Err(std::io::Error::new(
std::io::ErrorKind::TimedOut,
format!("Command timed out after {} seconds", timeout_secs)
))
}
}
}
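A call site then stays short; this mirrors the lsblk invocation used by the disk collector in this same changeset:

    let mut cmd = tokio::process::Command::new("lsblk");
    cmd.args(["-rn", "-o", "NAME,MOUNTPOINT"]);
    let output = run_command_with_timeout(cmd, 2).await?;
    let stdout = String::from_utf8_lossy(&output.stdout);

Killing by PID with an external `kill -9` is a pragmatic fallback here; tokio's own `start_kill`/`kill_on_drop(true)` would avoid spawning a second process, at the cost of restructuring away from `wait_with_output()`, which consumes the child handle.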

View File (collectors/network.rs)

@@ -1,12 +1,7 @@
use async_trait::async_trait;
use cm_dashboard_shared::{AgentData, NetworkInterfaceData, Status};
use std::process::Command;
use tracing::debug;
-use futures::stream::TryStreamExt;
-use rtnetlink::{new_connection, IpVersion};
-use netlink_packet_route::link::LinkAttribute;
-use netlink_packet_route::address::AddressAttribute;
-use netlink_packet_route::route::RouteAttribute;
-use std::net::IpAddr;
use super::{Collector, CollectorError};
use crate::config::NetworkConfig;
@@ -54,52 +49,36 @@ impl NetworkCollector {
}
}
-/// Get the primary physical interface (the one with default route) using rtnetlink
-async fn get_primary_physical_interface() -> Option<String> {
-let (connection, handle, _) = match new_connection() {
-Ok(conn) => conn,
-Err(e) => {
-debug!("Failed to create netlink connection: {}", e);
-return None;
-}
-};
-tokio::spawn(connection);
-// Get default route
-let mut routes = handle.route().get(IpVersion::V4).execute();
-while let Ok(Some(route)) = routes.try_next().await {
-// Check if this is a default route (destination is 0.0.0.0/0)
-if route.header.destination_prefix_length == 0 {
-// Find the output interface (OIF) attribute
-if let Some(oif) = route.attributes.iter().find_map(|attr| {
-if let RouteAttribute::Oif(index) = attr {
-Some(*index)
-} else {
-None
-}
-}) {
-// Get interface name from index
-let mut link = handle.link().get().match_index(oif).execute();
-if let Ok(Some(link_msg)) = link.try_next().await {
-if let Some(name) = link_msg.attributes.iter().find_map(|attr| {
-if let LinkAttribute::IfName(n) = attr {
-Some(n.to_string())
/// Get the primary physical interface (the one with default route)
fn get_primary_physical_interface() -> Option<String> {
match Command::new("timeout").args(["2", "ip", "route", "show", "default"]).output() {
Ok(output) if output.status.success() => {
let output_str = String::from_utf8_lossy(&output.stdout);
// Parse: "default via 192.168.1.1 dev eno1 ..."
for line in output_str.lines() {
if line.starts_with("default") {
if let Some(dev_pos) = line.find(" dev ") {
let after_dev = &line[dev_pos + 5..];
if let Some(space_pos) = after_dev.find(' ') {
let interface = &after_dev[..space_pos];
// Only return if it's a physical interface
if Self::is_physical_interface(interface) {
return Some(interface.to_string());
}
-} else {
-None
-}
-}) {
-if Self::is_physical_interface(&name) {
-return Some(name);
// No space after interface name (end of line)
let interface = after_dev.trim();
if Self::is_physical_interface(interface) {
return Some(interface.to_string());
}
}
}
}
}
None
}
_ => None,
}
None
}
/// Parse VLAN configuration from /proc/net/vlan/config
@@ -124,105 +103,102 @@ impl NetworkCollector {
vlan_map
}
-/// Collect network interfaces using rtnetlink
/// Collect network interfaces using ip command
async fn collect_interfaces(&self) -> Vec<NetworkInterfaceData> {
let mut interfaces = Vec::new();
// Parse VLAN configuration
let vlan_map = Self::parse_vlan_config();
-// Create netlink connection
-let (connection, handle, _) = match new_connection() {
-Ok(conn) => conn,
-Err(e) => {
-debug!("Failed to create netlink connection: {}", e);
-return interfaces;
-}
-};
match Command::new("timeout").args(["2", "ip", "-j", "addr"]).output() {
Ok(output) if output.status.success() => {
let json_str = String::from_utf8_lossy(&output.stdout);
-tokio::spawn(connection);
if let Ok(json_data) = serde_json::from_str::<serde_json::Value>(&json_str) {
if let Some(ifaces) = json_data.as_array() {
for iface in ifaces {
let name = iface["ifname"].as_str().unwrap_or("").to_string();
-// Get all links
-let mut links = handle.link().get().execute();
// Skip loopback, empty names, and ifb* interfaces
if name.is_empty() || name == "lo" || name.starts_with("ifb") {
continue;
}
-while let Ok(Some(link)) = links.try_next().await {
-// Get interface name
-let name = match link.attributes.iter().find_map(|attr| {
-if let LinkAttribute::IfName(n) = attr {
-Some(n.to_string())
-} else {
-None
-}
-}) {
-Some(n) => n,
-None => continue,
-};
// Parse parent interface from @parent notation (e.g., lan@enp0s31f6)
let (interface_name, parent_interface) = if let Some(at_pos) = name.find('@') {
let (child, parent) = name.split_at(at_pos);
(child.to_string(), Some(parent[1..].to_string()))
} else {
(name.clone(), None)
};
-// Skip loopback and ifb interfaces
-if name == "lo" || name.starts_with("ifb") {
-continue;
-}
let mut ipv4_addresses = Vec::new();
let mut ipv6_addresses = Vec::new();
-// Parse parent interface from @parent notation (e.g., lan@enp0s31f6)
-let (interface_name, parent_interface) = if let Some(at_pos) = name.find('@') {
-let (child, parent) = name.split_at(at_pos);
-(child.to_string(), Some(parent[1..].to_string()))
-} else {
-(name.clone(), None)
-};
-// Get IP addresses for this interface
-let mut ipv4_addresses = Vec::new();
-let mut ipv6_addresses = Vec::new();
-let mut addrs = handle.address().get().set_link_index_filter(link.header.index).execute();
-while let Ok(Some(addr)) = addrs.try_next().await {
-for nla in &addr.attributes {
-if let AddressAttribute::Address(ip) = nla {
-match ip {
-IpAddr::V4(ipv4) => ipv4_addresses.push(ipv4.to_string()),
-IpAddr::V6(ipv6) => {
-// Skip link-local IPv6 addresses (fe80::)
-if !ipv6.to_string().starts_with("fe80:") {
-ipv6_addresses.push(ipv6.to_string());
// Extract IP addresses
if let Some(addr_info) = iface["addr_info"].as_array() {
for addr in addr_info {
if let Some(family) = addr["family"].as_str() {
if let Some(local) = addr["local"].as_str() {
match family {
"inet" => ipv4_addresses.push(local.to_string()),
"inet6" => {
// Skip link-local IPv6 addresses (fe80::)
if !local.starts_with("fe80:") {
ipv6_addresses.push(local.to_string());
}
}
_ => {}
}
}
}
}
}
// Determine if physical and get status
let is_physical = Self::is_physical_interface(&interface_name);
// Only filter out virtual interfaces without IPs
// Physical interfaces should always be shown even if down/no IPs
if !is_physical && ipv4_addresses.is_empty() && ipv6_addresses.is_empty() {
continue;
}
let link_status = if is_physical {
Self::get_link_status(&name)
} else {
Status::Unknown // Virtual interfaces don't have meaningful link status
};
// Look up VLAN ID from the map (use original name before @ parsing)
let vlan_id = vlan_map.get(&name).copied();
interfaces.push(NetworkInterfaceData {
name: interface_name,
ipv4_addresses,
ipv6_addresses,
is_physical,
link_status,
parent_interface,
vlan_id,
});
}
}
}
}
-// Determine if physical
-let is_physical = Self::is_physical_interface(&interface_name);
-// Only filter out virtual interfaces without IPs
-if !is_physical && ipv4_addresses.is_empty() && ipv6_addresses.is_empty() {
-continue;
Err(e) => {
debug!("Failed to execute ip command: {}", e);
}
Ok(output) => {
debug!("ip command failed with status: {}", output.status);
}
-let link_status = if is_physical {
-Self::get_link_status(&name)
-} else {
-Status::Unknown
-};
-// Look up VLAN ID
-let vlan_id = vlan_map.get(&name).copied();
-interfaces.push(NetworkInterfaceData {
-name: interface_name,
-ipv4_addresses,
-ipv6_addresses,
-is_physical,
-link_status,
-parent_interface,
-vlan_id,
-});
}
-// Assign primary physical interface as parent to virtual interfaces
-if let Some(primary) = Self::get_primary_physical_interface().await {
// Assign primary physical interface as parent to virtual interfaces without explicit parent
let primary_interface = Self::get_primary_physical_interface();
if let Some(primary) = primary_interface {
for interface in interfaces.iter_mut() {
// Only assign parent to virtual interfaces that don't already have one
if !interface.is_physical && interface.parent_interface.is_none() {
interface.parent_interface = Some(primary.clone());
}
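The keys read out of `ip -j addr` above come from JSON entries shaped roughly like this, trimmed to the fields the parser touches (illustrative):

    [
      { "ifname": "eno1",
        "addr_info": [
          { "family": "inet",  "local": "192.168.1.10" },
          { "family": "inet6", "local": "fe80::1234" } ] }
    ]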

View File (collectors/systemd.rs)

@@ -4,10 +4,7 @@ use cm_dashboard_shared::{AgentData, ServiceData, SubServiceData, SubServiceMetr
use std::process::Command;
use std::sync::RwLock;
use std::time::Instant;
-use tracing::{debug, warn};
-use bollard::Docker;
-use bollard::container::ListContainersOptions;
-use zbus::Connection;
use tracing::debug;
use super::{Collector, CollectorError};
use crate::config::SystemdConfig;
@@ -46,9 +43,10 @@ struct ServiceCacheState {
/// Cached service status information from systemctl list-units
#[derive(Debug, Clone)]
struct ServiceStatusInfo {
-load_state: String,
active_state: String,
-sub_state: String,
memory_bytes: Option<u64>,
restart_count: Option<u32>,
start_timestamp: Option<u64>,
}
impl SystemdCollector {
@@ -77,7 +75,7 @@ impl SystemdCollector {
debug!("Collecting systemd services metrics");
// Get cached services (discovery only happens when needed)
-let monitored_services = match self.get_monitored_services().await {
let monitored_services = match self.get_monitored_services() {
Ok(services) => services,
Err(e) => {
debug!("Failed to get monitored services: {}", e);
@@ -89,15 +87,21 @@ impl SystemdCollector {
let mut complete_service_data = Vec::new();
for service_name in &monitored_services {
match self.get_service_status(service_name) {
-Ok((active_status, _detailed_info)) => {
-let memory_mb = self.get_service_memory_usage(service_name).await.unwrap_or(0.0);
-let disk_gb = self.get_service_disk_usage(service_name).await.unwrap_or(0.0);
Ok(status_info) => {
let mut sub_services = Vec::new();
// Calculate uptime if we have start timestamp
let uptime_seconds = status_info.start_timestamp.and_then(|start| {
let now = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.ok()?
.as_secs();
Some(now.saturating_sub(start))
});
// Sub-service metrics for specific services (always include cached results)
if service_name.contains("nginx") && active_status == "active" {
let nginx_sites = self.get_nginx_site_metrics().await;
if service_name.contains("nginx") && status_info.active_state == "active" {
let nginx_sites = self.get_nginx_site_metrics();
for (site_name, latency_ms) in nginx_sites {
let site_status = if latency_ms >= 0.0 && latency_ms < self.config.nginx_latency_critical_ms {
"active"
@@ -121,8 +125,8 @@ impl SystemdCollector {
}
}
if service_name.contains("docker") && active_status == "active" {
let docker_containers = self.get_docker_containers().await;
if service_name.contains("docker") && status_info.active_state == "active" {
let docker_containers = self.get_docker_containers();
for (container_name, container_status) in docker_containers {
// For now, docker containers have no additional metrics
// Future: could add memory_mb, cpu_percent, restart_count, etc.
@@ -137,7 +141,7 @@ impl SystemdCollector {
}
// Add Docker images
-let docker_images = self.get_docker_images().await;
let docker_images = self.get_docker_images();
for (image_name, image_status, image_size_mb) in docker_images {
let mut metrics = Vec::new();
metrics.push(SubServiceMetric {
@@ -155,14 +159,28 @@ impl SystemdCollector {
}
}
if service_name.contains("openvpn-vpn-connection") && status_info.active_state == "active" {
if let Some(external_ip) = self.get_vpn_external_ip() {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: format!("IP: {}", external_ip),
service_status: Status::Ok,
metrics,
service_type: "vpn_route".to_string(),
});
}
}
// Create complete service data
let service_data = ServiceData {
name: service_name.clone(),
-memory_mb,
-disk_gb,
user_stopped: false, // TODO: Integrate with service tracker
-service_status: self.calculate_service_status(service_name, &active_status),
service_status: self.calculate_service_status(service_name, &status_info.active_state),
sub_services,
memory_bytes: status_info.memory_bytes,
restart_count: status_info.restart_count,
uptime_seconds,
};
// Add to AgentData and cache
@@ -193,7 +211,7 @@ impl SystemdCollector {
}
/// Get monitored services, discovering them if needed or cache is expired
-async fn get_monitored_services(&self) -> Result<Vec<String>> {
fn get_monitored_services(&self) -> Result<Vec<String>> {
// Check if we need discovery without holding the lock
let needs_discovery = {
let state = self.state.read().unwrap();
@@ -208,7 +226,7 @@ impl SystemdCollector {
if needs_discovery {
debug!("Discovering systemd services (cache expired or first run)");
-match self.discover_services_internal().await {
match self.discover_services_internal() {
Ok((services, status_cache)) => {
if let Ok(mut state) = self.state.write() {
state.monitored_services = services.clone();
@@ -231,77 +249,95 @@ impl SystemdCollector {
}
/// Get nginx site metrics, checking them if cache is expired (like old working version)
-async fn get_nginx_site_metrics(&self) -> Vec<(String, f32)> {
-// Check if we need to refresh (read lock)
-let needs_refresh = {
-let state = self.state.read().unwrap();
-match state.last_nginx_check_time {
-None => true,
-Some(last_time) => {
-let elapsed = last_time.elapsed().as_secs();
-elapsed >= state.nginx_check_interval_seconds
-}
fn get_nginx_site_metrics(&self) -> Vec<(String, f32)> {
let mut state = self.state.write().unwrap();
// Check if we need to refresh nginx site metrics
let needs_refresh = match state.last_nginx_check_time {
None => true, // First time
Some(last_time) => {
let elapsed = last_time.elapsed().as_secs();
elapsed >= state.nginx_check_interval_seconds
}
};
if needs_refresh {
-// Check if nginx is active (read lock)
-let has_nginx = {
-let state = self.state.read().unwrap();
-state.monitored_services.iter().any(|s| s.contains("nginx"))
-};
-if has_nginx {
-let fresh_metrics = self.get_nginx_sites_internal().await;
-let mut state = self.state.write().unwrap();
// Only check nginx sites if nginx service is active
if state.monitored_services.iter().any(|s| s.contains("nginx")) {
let fresh_metrics = self.get_nginx_sites_internal();
state.nginx_site_metrics = fresh_metrics;
state.last_nginx_check_time = Some(Instant::now());
}
}
-let state = self.state.read().unwrap();
state.nginx_site_metrics.clone()
}
-/// Auto-discover interesting services to monitor using D-Bus
-async fn discover_services_internal(&self) -> Result<(Vec<String>, std::collections::HashMap<String, ServiceStatusInfo>)> {
-// Connect to system D-Bus
-let connection = Connection::system().await?;
/// Auto-discover interesting services to monitor
fn discover_services_internal(&self) -> Result<(Vec<String>, std::collections::HashMap<String, ServiceStatusInfo>)> {
// First: Get all service unit files (with 3 second timeout)
let unit_files_output = Command::new("timeout")
.args(&["3", "systemctl", "list-unit-files", "--type=service", "--no-pager", "--plain"])
.output()?;
-// Get systemd manager proxy
-let proxy = zbus::Proxy::new(
-&connection,
-"org.freedesktop.systemd1",
-"/org/freedesktop/systemd1",
-"org.freedesktop.systemd1.Manager",
-).await?;
if !unit_files_output.status.success() {
return Err(anyhow::anyhow!("systemctl list-unit-files command failed"));
}
-// List all units via D-Bus
-let units: Vec<(String, String, String, String, String, String, zbus::zvariant::OwnedObjectPath, u32, String, zbus::zvariant::OwnedObjectPath)> =
-proxy.call("ListUnits", &()).await?;
// Second: Get runtime status of all units (with 3 second timeout)
let units_status_output = Command::new("timeout")
.args(&["3", "systemctl", "list-units", "--type=service", "--all", "--no-pager", "--plain"])
.output()?;
if !units_status_output.status.success() {
return Err(anyhow::anyhow!("systemctl list-units command failed"));
}
let unit_files_str = String::from_utf8(unit_files_output.stdout)?;
let units_status_str = String::from_utf8(units_status_output.stdout)?;
-let mut services = Vec::new();
-let excluded_services = &self.config.excluded_services;
-let service_name_filters = &self.config.service_name_filters;
// Parse all service unit files
let mut all_service_names = std::collections::HashSet::new();
-let mut service_status_cache = std::collections::HashMap::new();
-// Parse D-Bus response for services only
-for unit in units {
-let (unit_name, _description, load_state, active_state, sub_state, _followed, _unit_path, _job_id, _job_type, _job_path) = unit;
-if unit_name.ends_with(".service") {
-let service_name = unit_name.trim_end_matches(".service");
for line in unit_files_str.lines() {
let fields: Vec<&str> = line.split_whitespace().collect();
if fields.len() >= 2 && fields[0].ends_with(".service") {
let service_name = fields[0].trim_end_matches(".service");
all_service_names.insert(service_name.to_string());
}
}
-service_status_cache.insert(service_name.to_string(), ServiceStatusInfo {
-load_state: load_state.clone(),
-active_state: active_state.clone(),
-sub_state: sub_state.clone(),
// Parse runtime status for all units
let mut status_cache = std::collections::HashMap::new();
for line in units_status_str.lines() {
let fields: Vec<&str> = line.split_whitespace().collect();
if fields.len() >= 4 && fields[0].ends_with(".service") {
let service_name = fields[0].trim_end_matches(".service");
let active_state = fields.get(2).unwrap_or(&"unknown").to_string();
status_cache.insert(service_name.to_string(), ServiceStatusInfo {
active_state,
memory_bytes: None,
restart_count: None,
start_timestamp: None,
});
}
}
let mut services = Vec::new();
let excluded_services = &self.config.excluded_services;
let service_name_filters = &self.config.service_name_filters;
// For services found in unit files but not in runtime status, set default inactive status
for service_name in &all_service_names {
if !status_cache.contains_key(service_name) {
status_cache.insert(service_name.to_string(), ServiceStatusInfo {
active_state: "inactive".to_string(),
memory_bytes: None,
restart_count: None,
start_timestamp: None,
});
}
}
// Process all discovered services and apply filters
for service_name in &all_service_names {
@@ -327,64 +363,63 @@ impl SystemdCollector {
}
}
-Ok((services, service_status_cache))
Ok((services, status_cache))
}
-/// Get service status from D-Bus cache
-fn get_service_status(&self, service: &str) -> Result<(String, String)> {
-// Get status from D-Bus cache (populated by discover_services_internal)
-if let Ok(state) = self.state.read() {
-if let Some(cached_info) = state.service_status_cache.get(service) {
-let active_status = cached_info.active_state.clone();
-let detailed_info = format!(
-"LoadState={}\nActiveState={}\nSubState={}",
-cached_info.load_state,
-cached_info.active_state,
-cached_info.sub_state
-);
-return Ok((active_status, detailed_info));
/// Get service status with detailed metrics from systemctl
fn get_service_status(&self, service: &str) -> Result<ServiceStatusInfo> {
// Always fetch fresh data to get detailed metrics (memory, restarts, uptime)
// Note: Cache in service_status_cache only has basic active_state from discovery,
// with all detailed metrics set to None. We need fresh systemctl show data.
let output = Command::new("timeout")
.args(&[
"2",
"systemctl",
"show",
&format!("{}.service", service),
"--property=LoadState,ActiveState,SubState,MemoryCurrent,NRestarts,ExecMainStartTimestamp"
])
.output()?;
let output_str = String::from_utf8(output.stdout)?;
// Parse properties
let mut active_state = String::new();
let mut memory_bytes = None;
let mut restart_count = None;
let mut start_timestamp = None;
for line in output_str.lines() {
if let Some(value) = line.strip_prefix("ActiveState=") {
active_state = value.to_string();
} else if let Some(value) = line.strip_prefix("MemoryCurrent=") {
if value != "[not set]" {
memory_bytes = value.parse().ok();
}
} else if let Some(value) = line.strip_prefix("NRestarts=") {
restart_count = value.parse().ok();
} else if let Some(value) = line.strip_prefix("ExecMainStartTimestamp=") {
if value != "[not set]" && !value.is_empty() {
// Parse timestamp to seconds since epoch
if let Ok(output) = Command::new("date")
.args(&["+%s", "-d", value])
.output()
{
if let Ok(timestamp_str) = String::from_utf8(output.stdout) {
start_timestamp = timestamp_str.trim().parse().ok();
}
}
}
}
}
Ok(ServiceStatusInfo {
active_state,
memory_bytes,
restart_count,
start_timestamp,
})
}
/// Get a unit property via D-Bus
async fn get_unit_property(&self, service_name: &str, property: &str) -> Option<zbus::zvariant::OwnedValue> {
// Connect to system D-Bus
let connection = Connection::system().await.ok()?;
// Get systemd manager proxy
let manager_proxy = zbus::Proxy::new(
&connection,
"org.freedesktop.systemd1",
"/org/freedesktop/systemd1",
"org.freedesktop.systemd1.Manager",
).await.ok()?;
// Get unit path for service
let unit_name = format!("{}.service", service_name);
let unit_path: zbus::zvariant::OwnedObjectPath = manager_proxy
.call("GetUnit", &(unit_name,))
.await
.ok()?;
// Get property using standard D-Bus Properties interface
let prop_proxy = zbus::Proxy::new(
&connection,
"org.freedesktop.systemd1",
unit_path.as_str(),
"org.freedesktop.DBus.Properties",
).await.ok()?;
// Try the Service interface first, then fall back to the Unit interface.
// Get returns a Variant; the caller extracts the inner value.
if let Ok(variant) = prop_proxy.call("Get", &("org.freedesktop.systemd1.Service", property)).await {
return Some(variant);
}
prop_proxy.call("Get", &("org.freedesktop.systemd1.Unit", property)).await.ok()
}
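Shelling out to `date +%s -d` converts systemd's human-readable ExecMainStartTimestamp into epoch seconds; uptime is then a subtraction against the current clock. A minimal sketch of that last step (helper name hypothetical):

    use std::time::{SystemTime, UNIX_EPOCH};

    // Derive uptime_seconds from the epoch value parsed out of
    // ExecMainStartTimestamp; saturating_sub guards against clock skew
    // producing a bogus huge value.
    fn uptime_from_start(start_timestamp: Option<u64>) -> Option<u64> {
        let now = SystemTime::now().duration_since(UNIX_EPOCH).ok()?.as_secs();
        start_timestamp.map(|start| now.saturating_sub(start))
    }

    fn main() {
        // A service started 90 seconds ago reports ~90s of uptime.
        let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
        assert_eq!(uptime_from_start(Some(now - 90)), Some(90));
    }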
/// Check if service name matches pattern (supports wildcards like nginx*)
@@ -426,71 +461,6 @@ impl SystemdCollector {
true
}
/// Get disk usage for a specific service
async fn get_service_disk_usage(&self, service_name: &str) -> Result<f32, CollectorError> {
// Check if this service has configured directory paths
if let Some(dirs) = self.config.service_directories.get(service_name) {
// Service has configured paths - use the first accessible one
for dir in dirs {
if let Some(size) = self.get_directory_size(dir).await {
return Ok(size);
}
}
// If configured paths failed, return 0
return Ok(0.0);
}
// No configured path - try to get WorkingDirectory from D-Bus
if let Some(value) = self.get_unit_property(service_name, "WorkingDirectory").await {
// WorkingDirectory is a string property - try to extract as string
if let Ok(dir_str) = <String>::try_from(value) {
if !dir_str.is_empty() && dir_str != "/" && dir_str != "[not set]" {
return Ok(self.get_directory_size(&dir_str).await.unwrap_or(0.0));
}
}
}
Ok(0.0)
}
/// Get size of a directory in GB (with 2 second timeout)
async fn get_directory_size(&self, path: &str) -> Option<f32> {
use super::run_command_with_timeout;
// Use -s (summary) and --apparent-size for speed, 2 second timeout
let mut cmd = Command::new("sudo");
cmd.args(&["du", "-s", "--apparent-size", "--block-size=1", path]);
let output = run_command_with_timeout(cmd, 2).await.ok()?;
if !output.status.success() {
// Log permission errors for debugging but don't spam logs
let stderr = String::from_utf8_lossy(&output.stderr);
if stderr.contains("Permission denied") {
debug!("Permission denied accessing directory: {}", path);
} else if stderr.contains("timed out") {
warn!("Directory size check timed out for {}", path);
} else {
debug!("Failed to get size for directory {}: {}", path, stderr);
}
return None;
}
let output_str = String::from_utf8(output.stdout).ok()?;
let size_str = output_str.split_whitespace().next()?;
if let Ok(size_bytes) = size_str.parse::<u64>() {
let size_gb = size_bytes as f32 / (1024.0 * 1024.0 * 1024.0);
// Return size even if very small (minimum 0.001 GB = 1MB for visibility)
if size_gb > 0.0 {
Some(size_gb.max(0.001))
} else {
None
}
} else {
None
}
}
/// Calculate service status, taking user-stopped services into account
fn calculate_service_status(&self, service_name: &str, active_status: &str) -> Status {
match active_status.to_lowercase().as_str() {
@@ -508,19 +478,6 @@ impl SystemdCollector {
}
}
/// Get memory usage for a specific service via D-Bus
async fn get_service_memory_usage(&self, service_name: &str) -> Result<f32, CollectorError> {
// Get MemoryCurrent property from D-Bus
if let Some(value) = self.get_unit_property(service_name, "MemoryCurrent").await {
// MemoryCurrent is a u64 property (bytes) - try to extract
if let Ok(memory_bytes) = <u64>::try_from(value) {
return Ok(memory_bytes as f32 / (1024.0 * 1024.0)); // Convert to MB
}
}
Ok(0.0)
}
/// Check if service collection cache should be updated
fn should_update_cache(&self) -> bool {
let state = self.state.read().unwrap();
@@ -528,7 +485,7 @@ impl SystemdCollector {
match state.last_collection {
None => true,
Some(last) => {
let cache_duration = std::time::Duration::from_secs(self.config.interval_seconds);
let cache_duration = std::time::Duration::from_secs(30);
last.elapsed() > cache_duration
}
}
@@ -545,11 +502,11 @@ impl SystemdCollector {
}
/// Get nginx sites with latency checks (internal - no caching)
async fn get_nginx_sites_internal(&self) -> Vec<(String, f32)> {
fn get_nginx_sites_internal(&self) -> Vec<(String, f32)> {
let mut sites = Vec::new();
// Discover nginx sites from configuration
let discovered_sites = self.discover_nginx_sites().await;
let discovered_sites = self.discover_nginx_sites();
// Always add all discovered sites, even if checks fail (like old version)
for (site_name, url) in &discovered_sites {
@@ -568,9 +525,9 @@ impl SystemdCollector {
}
/// Discover nginx sites from configuration
async fn discover_nginx_sites(&self) -> Vec<(String, String)> {
fn discover_nginx_sites(&self) -> Vec<(String, String)> {
// Use the same approach as the old working agent: get nginx config from systemd
let config_content = match self.get_nginx_config_from_systemd().await {
let config_content = match self.get_nginx_config_from_systemd() {
Some(content) => content,
None => {
debug!("Could not get nginx config from systemd, trying nginx -T fallback");
@@ -603,20 +560,30 @@ impl SystemdCollector {
Some(String::from_utf8_lossy(&output.stdout).to_string())
}
/// Get nginx config from systemd service definition via D-Bus (NixOS compatible)
async fn get_nginx_config_from_systemd(&self) -> Option<String> {
// Get ExecStart property from D-Bus
let value = self.get_unit_property("nginx", "ExecStart").await?;
// ExecStart is a complex structure: array of (path, args, unclean_exit)
// For our purposes, we need to extract the command line
let exec_start_str = format!("{:?}", value);
debug!("nginx ExecStart from D-Bus: {}", exec_start_str);
// Extract config path from ExecStart structure
if let Some(config_path) = self.extract_config_path_from_exec_start(&exec_start_str) {
debug!("Extracted config path: {}", config_path);
return std::fs::read_to_string(&config_path).ok();
}
None
}
/// Get nginx config from systemd service definition (NixOS compatible)
fn get_nginx_config_from_systemd(&self) -> Option<String> {
let output = Command::new("systemctl")
.args(&["show", "nginx", "--property=ExecStart", "--no-pager"])
.output()
.ok()?;
if !output.status.success() {
debug!("Failed to get nginx ExecStart from systemd");
return None;
}
let stdout = String::from_utf8_lossy(&output.stdout);
debug!("systemctl show nginx output: {}", stdout);
// Parse ExecStart to extract -c config path
for line in stdout.lines() {
if line.starts_with("ExecStart=") {
debug!("Found ExecStart line: {}", line);
if let Some(config_path) = self.extract_config_path_from_exec_start(line) {
debug!("Extracted config path: {}", config_path);
return std::fs::read_to_string(&config_path).ok();
}
}
}
None
@@ -758,91 +725,94 @@ impl SystemdCollector {
}
}
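extract_config_path_from_exec_start itself is not shown in this diff; conceptually it only has to find the `-c <path>` argument pair inside the ExecStart line. An illustrative sketch, not the actual implementation:

    // Illustrative only: pull the argument following "-c" out of a systemctl
    // ExecStart= line such as
    // "ExecStart={ path=/run/nginx ; argv[]=/run/nginx -c /etc/nginx/nginx.conf ; }".
    fn extract_config_path(exec_start_line: &str) -> Option<String> {
        let tokens: Vec<&str> = exec_start_line.split_whitespace().collect();
        tokens.windows(2).find_map(|w| {
            if w[0] == "-c" && w[1].starts_with('/') {
                // Strip trailing systemd punctuation such as ";" or "}"
                Some(w[1].trim_end_matches(|c| c == ';' || c == '}').to_string())
            } else {
                None
            }
        })
    }

    fn main() {
        let line = "ExecStart={ path=/run/nginx ; argv[]=/run/nginx -c /etc/nginx/nginx.conf ; }";
        assert_eq!(extract_config_path(line), Some("/etc/nginx/nginx.conf".to_string()));
    }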
/// Get docker containers as sub-services using bollard API
async fn get_docker_containers(&self) -> Vec<(String, String)> {
let mut containers = Vec::new();
// Connect to Docker daemon
let docker = match Docker::connect_with_local_defaults() {
Ok(d) => d,
Err(e) => {
debug!("Failed to connect to Docker daemon: {}", e);
return containers;
}
};
// List all containers (running and stopped)
let list_options = Some(ListContainersOptions::<String> {
all: true,
..Default::default()
});
let container_list = match docker.list_containers(list_options).await {
Ok(list) => list,
Err(e) => {
debug!("Failed to list Docker containers: {}", e);
return containers;
}
};
for container in container_list {
// Get container name (remove leading slash if present)
let container_name = container.names
.and_then(|names| names.first().map(|n| n.trim_start_matches('/').to_string()))
.unwrap_or_else(|| container.id.clone().unwrap_or_default());
// Map container state to service status
let container_status = match container.state.as_deref() {
Some("running") => "active",
Some("exited") | Some("created") => "inactive",
_ => "failed", // restarting, paused, dead, etc.
};
containers.push((format!("docker_{}", container_name), container_status.to_string()));
}
containers
}
/// Get docker containers as sub-services
fn get_docker_containers(&self) -> Vec<(String, String)> {
let mut containers = Vec::new();
// Check if docker is available (cm-agent user is in docker group)
// Use -a to show ALL containers (running and stopped) with 3 second timeout
let output = Command::new("timeout")
.args(&["3", "docker", "ps", "-a", "--format", "{{.Names}},{{.Status}}"])
.output();
let output = match output {
Ok(out) if out.status.success() => out,
_ => return containers, // Docker not available or failed
};
let output_str = match String::from_utf8(output.stdout) {
Ok(s) => s,
Err(_) => return containers,
};
for line in output_str.lines() {
if line.trim().is_empty() {
continue;
}
let parts: Vec<&str> = line.split(',').collect();
if parts.len() >= 2 {
let container_name = parts[0].trim();
let status_str = parts[1].trim();
let container_status = if status_str.contains("Up") {
"active"
} else if status_str.contains("Exited") || status_str.contains("Created") {
"inactive" // Stopped/created containers are inactive
} else {
"failed" // Other states (restarting, paused, dead) → failed
};
containers.push((format!("docker_{}", container_name), container_status.to_string()));
}
}
containers
}
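The STATUS column from `docker ps` is free-form text, so the classification above is substring-based. Spot checks with representative values, with the logic copied into a free function for illustration:

    fn classify(status: &str) -> &'static str {
        if status.contains("Up") {
            "active"
        } else if status.contains("Exited") || status.contains("Created") {
            "inactive"
        } else {
            "failed"
        }
    }

    fn main() {
        assert_eq!(classify("Up 3 hours"), "active");
        assert_eq!(classify("Exited (0) 2 days ago"), "inactive");
        assert_eq!(classify("Restarting (1) 5 seconds ago"), "failed");
    }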
/// Get docker images as sub-services using bollard API
async fn get_docker_images(&self) -> Vec<(String, String, f32)> {
let mut images = Vec::new();
// Connect to Docker daemon
let docker = match Docker::connect_with_local_defaults() {
Ok(d) => d,
Err(e) => {
debug!("Failed to connect to Docker daemon: {}", e);
return images;
}
};
// List all images
let image_list = match docker.list_images::<String>(None).await {
Ok(list) => list,
Err(e) => {
debug!("Failed to list Docker images: {}", e);
return images;
}
};
for image in image_list {
// Get image name from repo tags
let image_names: Vec<String> = image.repo_tags
.into_iter()
.filter(|tag| !tag.contains("<none>"))
.collect();
if image_names.is_empty() {
continue; // Skip untagged images
}
// Get size in MB
let size_mb = image.size as f32 / (1024.0 * 1024.0);
for image_name in image_names {
images.push((
image_name,
"inactive".to_string(), // Images are informational - use inactive for neutral display
size_mb
));
}
}
images
}
/// Get docker images as sub-services
fn get_docker_images(&self) -> Vec<(String, String, f32)> {
let mut images = Vec::new();
// Check if docker is available (cm-agent user is in docker group) with 3 second timeout
let output = Command::new("timeout")
.args(&["3", "docker", "images", "--format", "{{.Repository}}:{{.Tag}},{{.Size}}"])
.output();
let output = match output {
Ok(out) if out.status.success() => out,
Ok(_) => return images,
Err(_) => return images,
};
let output_str = match String::from_utf8(output.stdout) {
Ok(s) => s,
Err(_) => return images,
};
for line in output_str.lines() {
if line.trim().is_empty() {
continue;
}
let parts: Vec<&str> = line.split(',').collect();
if parts.len() >= 2 {
let image_name = parts[0].trim();
let size_str = parts[1].trim();
// Skip <none>:<none> images (dangling images)
if image_name.contains("<none>") {
continue;
}
// Parse size to MB (sizes come as "142MB", "1.5GB", "512kB", etc.)
let size_mb = self.parse_docker_size(size_str);
images.push((
image_name.to_string(),
"inactive".to_string(), // Images are informational - use inactive for neutral display
size_mb
));
}
}
images
}
@@ -851,11 +821,73 @@ impl SystemdCollector {
/// Parse Docker size string to MB
fn parse_docker_size(&self, size_str: &str) -> f32 {
let size_upper = size_str.to_uppercase();
// Extract numeric part and unit
let mut num_str = String::new();
let mut unit = String::new();
for ch in size_upper.chars() {
if ch.is_ascii_digit() || ch == '.' {
num_str.push(ch);
} else if ch.is_alphabetic() {
unit.push(ch);
}
}
let value: f32 = num_str.parse().unwrap_or(0.0);
// Convert to MB
match unit.as_str() {
"KB" | "K" => value / 1024.0,
"MB" | "M" => value,
"GB" | "G" => value * 1024.0,
"TB" | "T" => value * 1024.0 * 1024.0,
_ => value, // Unknown or missing unit: return the raw number unchanged
}
}
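A few expected conversions for the parser above, with the method body copied into a free function so the checks run standalone. Note that docker prints decimal units (kB, MB, GB) while the conversion here uses binary multiples; for a dashboard readout the few-percent difference is acceptable:

    fn parse_docker_size(size_str: &str) -> f32 {
        let size_upper = size_str.to_uppercase();
        let mut num_str = String::new();
        let mut unit = String::new();
        for ch in size_upper.chars() {
            if ch.is_ascii_digit() || ch == '.' {
                num_str.push(ch);
            } else if ch.is_alphabetic() {
                unit.push(ch);
            }
        }
        let value: f32 = num_str.parse().unwrap_or(0.0);
        match unit.as_str() {
            "KB" | "K" => value / 1024.0,
            "MB" | "M" => value,
            "GB" | "G" => value * 1024.0,
            "TB" | "T" => value * 1024.0 * 1024.0,
            _ => value,
        }
    }

    fn main() {
        assert_eq!(parse_docker_size("142MB"), 142.0);
        assert_eq!(parse_docker_size("1.5GB"), 1536.0);
        assert_eq!(parse_docker_size("512kB"), 0.5);
    }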
/// Get VPN external IP by querying through the vpn namespace
fn get_vpn_external_ip(&self) -> Option<String> {
let output = Command::new("timeout")
.args(&[
"5",
"sudo",
"ip",
"netns",
"exec",
"vpn",
"curl",
"-s",
"--max-time",
"4",
"https://ifconfig.me"
])
.output()
.ok()?;
if output.status.success() {
let ip = String::from_utf8_lossy(&output.stdout).trim().to_string();
if !ip.is_empty() && ip.contains('.') {
debug!("VPN external IP: {}", ip);
return Some(ip);
}
}
debug!("Failed to get VPN external IP");
None
}
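The `ip.contains('.')` sanity check accepts any dotted string and rejects IPv6 answers. If stricter validation were ever wanted, parsing into std::net::IpAddr is one option; a sketch, not what the agent currently does:

    use std::net::IpAddr;

    // Returns Some(addr) only when the response body is a well-formed
    // IPv4 or IPv6 address (ifconfig.me returns the bare address).
    fn validate_external_ip(raw: &str) -> Option<IpAddr> {
        raw.trim().parse::<IpAddr>().ok()
    }

    fn main() {
        assert!(validate_external_ip("203.0.113.7\n").is_some());
        assert!(validate_external_ip("<html>error</html>").is_none());
    }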
}
#[async_trait]
impl Collector for SystemdCollector {
async fn collect_structured(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
// Clear services to prevent duplicates when updating cached data
agent_data.services.clear();
// Use cached complete data if available and fresh
if let Some(cached_complete_services) = self.get_cached_complete_services() {
for service_data in cached_complete_services {

View File

@@ -13,7 +13,6 @@ pub struct AgentConfig {
pub collectors: CollectorConfig,
pub cache: CacheConfig,
pub notifications: NotificationConfig,
pub collection_interval_seconds: u64,
}
/// ZMQ communication configuration

View File

@@ -11,9 +11,9 @@ pub fn validate_config(config: &AgentConfig) -> Result<()> {
bail!("ZMQ bind address cannot be empty");
}
// Validate collection interval
if config.collection_interval_seconds == 0 {
bail!("Collection interval cannot be 0");
// Validate ZMQ transmission interval
if config.zmq.transmission_interval_seconds == 0 {
bail!("ZMQ transmission interval cannot be 0");
}
// Validate CPU thresholds

View File

@@ -1,6 +1,6 @@
[package]
name = "cm-dashboard"
version = "0.1.198"
version = "0.1.234"
edition = "2021"
[dependencies]

View File

@@ -11,6 +11,74 @@ use tracing::debug;
use crate::ui::theme::{Components, StatusIcons, Theme, Typography};
use ratatui::style::Style;
/// Column visibility configuration based on terminal width
#[derive(Debug, Clone, Copy)]
struct ColumnVisibility {
show_name: bool,
show_status: bool,
show_ram: bool,
show_uptime: bool,
show_restarts: bool,
}
impl ColumnVisibility {
/// Calculate actual width needed for all columns
const NAME_WIDTH: u16 = 23;
const STATUS_WIDTH: u16 = 10;
const RAM_WIDTH: u16 = 8;
const UPTIME_WIDTH: u16 = 8;
const RESTARTS_WIDTH: u16 = 5;
const COLUMN_SPACING: u16 = 1; // Space between columns
/// Determine which columns to show based on available width
/// Priority order: Name > Status > RAM > Uptime > Restarts
fn from_width(width: u16) -> Self {
// Calculate cumulative widths for each configuration
let minimal = Self::NAME_WIDTH + Self::COLUMN_SPACING + Self::STATUS_WIDTH; // 34
let with_ram = minimal + Self::COLUMN_SPACING + Self::RAM_WIDTH; // 43
let with_uptime = with_ram + Self::COLUMN_SPACING + Self::UPTIME_WIDTH; // 52
let full = with_uptime + Self::COLUMN_SPACING + Self::RESTARTS_WIDTH; // 58
if width >= full {
// Show all columns
Self {
show_name: true,
show_status: true,
show_ram: true,
show_uptime: true,
show_restarts: true,
}
} else if width >= with_uptime {
// Hide restarts
Self {
show_name: true,
show_status: true,
show_ram: true,
show_uptime: true,
show_restarts: false,
}
} else if width >= with_ram {
// Hide uptime and restarts
Self {
show_name: true,
show_status: true,
show_ram: true,
show_uptime: false,
show_restarts: false,
}
} else {
// Minimal: Name + Status only
Self {
show_name: true,
show_status: true,
show_ram: false,
show_uptime: false,
show_restarts: false,
}
}
}
}
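Those cumulative sums reproduce the 34/43/52/58 breakpoints from the commit message. A quick boundary check, assuming the ColumnVisibility struct and from_width above are in scope:

    fn main() {
        // (terminal width, expected number of visible columns)
        for (width, expected_columns) in [(58u16, 5usize), (57, 4), (52, 4), (43, 3), (42, 2), (34, 2)] {
            let c = ColumnVisibility::from_width(width);
            let visible = [c.show_name, c.show_status, c.show_ram, c.show_uptime, c.show_restarts]
                .iter()
                .filter(|&&shown| shown)
                .count();
            assert_eq!(visible, expected_columns, "width {}", width);
        }
    }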
/// Services widget displaying hierarchical systemd service statuses
#[derive(Clone)]
pub struct ServicesWidget {
@@ -28,11 +96,12 @@ pub struct ServicesWidget {
#[derive(Clone)]
struct ServiceInfo {
memory_mb: Option<f32>,
disk_gb: Option<f32>,
metrics: Vec<(String, f32, Option<String>)>, // (label, value, unit)
widget_status: Status,
service_type: String, // "nginx_site", "container", "image", or empty for parent services
memory_bytes: Option<u64>,
restart_count: Option<u32>,
uptime_seconds: Option<u64>,
}
impl ServicesWidget {
@@ -52,8 +121,6 @@ impl ServicesWidget {
if metric_name.starts_with("service_") {
if let Some(end_pos) = metric_name
.rfind("_status")
.or_else(|| metric_name.rfind("_memory_mb"))
.or_else(|| metric_name.rfind("_disk_gb"))
.or_else(|| metric_name.rfind("_latency_ms"))
{
let service_part = &metric_name[8..end_pos]; // Remove "service_" prefix
@@ -76,39 +143,13 @@ impl ServicesWidget {
None
}
/// Format disk size with appropriate units (kB/MB/GB)
fn format_disk_size(size_gb: f32) -> String {
let size_mb = size_gb * 1024.0; // Convert GB to MB
if size_mb >= 1024.0 {
// Show as GB
format!("{:.1}GB", size_gb)
} else if size_mb >= 1.0 {
// Show as MB
format!("{:.0}MB", size_mb)
} else if size_mb >= 0.001 {
// Convert to kB
let size_kb = size_mb * 1024.0;
format!("{:.0}kB", size_kb)
} else {
// Show very small sizes as bytes
let size_bytes = size_mb * 1024.0 * 1024.0;
format!("{:.0}B", size_bytes)
}
}
/// Format parent service line - returns text without icon for span formatting
fn format_parent_service_line(&self, name: &str, info: &ServiceInfo) -> String {
let memory_str = info
.memory_mb
.map_or("0M".to_string(), |m| format!("{:.0}M", m));
let disk_str = info
.disk_gb
.map_or("0".to_string(), |d| Self::format_disk_size(d));
// Truncate long service names to fit layout (account for icon space)
let short_name = if name.len() > 22 {
format!("{}...", &name[..19])
} else {
name.to_string()
};
fn format_parent_service_line(&self, name: &str, info: &ServiceInfo, columns: ColumnVisibility) -> String {
// Truncate long service names to fit layout
// NAME_WIDTH - 3 chars for "..." = max displayable chars
let max_name_len = (ColumnVisibility::NAME_WIDTH - 3) as usize;
let short_name = if name.len() > max_name_len {
format!("{}...", &name[..max_name_len.saturating_sub(3)])
} else {
name.to_string()
};
@@ -116,7 +157,7 @@ impl ServicesWidget {
// Convert Status enum to display text
let status_str = match info.widget_status {
Status::Ok => "active",
Status::Inactive => "inactive",
Status::Critical => "failed",
Status::Pending => "pending",
Status::Warning => "warning",
@@ -124,10 +165,59 @@ impl ServicesWidget {
Status::Offline => "offline",
};
format!(
"{:<23} {:<10} {:<8} {:<8}",
short_name, status_str, memory_str, disk_str
)
// Format memory
let memory_str = info.memory_bytes.map_or("-".to_string(), |bytes| {
let mb = bytes as f64 / (1024.0 * 1024.0);
if mb >= 1000.0 {
format!("{:.1}G", mb / 1024.0)
} else {
format!("{:.0}M", mb)
}
});
// Format uptime
let uptime_str = info.uptime_seconds.map_or("-".to_string(), |secs| {
let days = secs / 86400;
let hours = (secs % 86400) / 3600;
let mins = (secs % 3600) / 60;
if days > 0 {
format!("{}d{}h", days, hours)
} else if hours > 0 {
format!("{}h{}m", hours, mins)
} else {
format!("{}m", mins)
}
});
// Format restarts (show "!" if > 0 to indicate instability)
let restart_str = info.restart_count.map_or("-".to_string(), |count| {
if count > 0 {
format!("!{}", count)
} else {
"0".to_string()
}
});
// Build format string based on column visibility
let mut parts = Vec::new();
if columns.show_name {
parts.push(format!("{:<width$}", short_name, width = ColumnVisibility::NAME_WIDTH as usize));
}
if columns.show_status {
parts.push(format!("{:<width$}", status_str, width = ColumnVisibility::STATUS_WIDTH as usize));
}
if columns.show_ram {
parts.push(format!("{:<width$}", memory_str, width = ColumnVisibility::RAM_WIDTH as usize));
}
if columns.show_uptime {
parts.push(format!("{:<width$}", uptime_str, width = ColumnVisibility::UPTIME_WIDTH as usize));
}
if columns.show_restarts {
parts.push(format!("{:<width$}", restart_str, width = ColumnVisibility::RESTARTS_WIDTH as usize));
}
parts.join(" ")
}
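The uptime formatter above shows only the two most significant units. Spot checks, with the arithmetic copied into a free function:

    fn format_uptime(secs: u64) -> String {
        let days = secs / 86400;
        let hours = (secs % 86400) / 3600;
        let mins = (secs % 3600) / 60;
        if days > 0 {
            format!("{}d{}h", days, hours)
        } else if hours > 0 {
            format!("{}h{}m", hours, mins)
        } else {
            format!("{}m", mins)
        }
    }

    fn main() {
        assert_eq!(format_uptime(90_061), "1d1h"); // 1 day, 1 hour, 1 minute
        assert_eq!(format_uptime(3_660), "1h1m");
        assert_eq!(format_uptime(59), "0m");
    }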
@@ -309,11 +399,12 @@ impl Widget for ServicesWidget {
for service in &agent_data.services {
// Store parent service
let parent_info = ServiceInfo {
memory_mb: Some(service.memory_mb),
disk_gb: Some(service.disk_gb),
metrics: Vec::new(), // Parent services don't have custom metrics
widget_status: service.service_status,
service_type: String::new(), // Parent services have no type
memory_bytes: service.memory_bytes,
restart_count: service.restart_count,
uptime_seconds: service.uptime_seconds,
};
self.parent_services.insert(service.name.clone(), parent_info);
@@ -327,11 +418,12 @@ impl Widget for ServicesWidget {
.collect();
let sub_info = ServiceInfo {
memory_mb: None, // Not used for sub-services
disk_gb: None, // Not used for sub-services
metrics,
widget_status: sub_service.service_status,
service_type: sub_service.service_type.clone(),
memory_bytes: None, // Sub-services don't have individual metrics yet
restart_count: None,
uptime_seconds: None,
};
sub_list.push((sub_service.name.clone(), sub_info));
}
@@ -371,23 +463,16 @@ impl ServicesWidget {
self.parent_services
.entry(parent_service)
.or_insert(ServiceInfo {
memory_mb: None,
disk_gb: None,
metrics: Vec::new(),
widget_status: Status::Unknown,
service_type: String::new(),
memory_bytes: None,
restart_count: None,
uptime_seconds: None,
});
if metric.name.ends_with("_status") {
service_info.widget_status = metric.status;
} else if metric.name.ends_with("_memory_mb") {
if let Some(memory) = metric.value.as_f32() {
service_info.memory_mb = Some(memory);
}
} else if metric.name.ends_with("_disk_gb") {
if let Some(disk) = metric.value.as_f32() {
service_info.disk_gb = Some(disk);
}
}
}
Some(sub_name) => {
@@ -407,11 +492,12 @@ impl ServicesWidget {
sub_service_list.push((
sub_name.clone(),
ServiceInfo {
memory_mb: None,
disk_gb: None,
metrics: Vec::new(),
widget_status: Status::Unknown,
service_type: String::new(), // Unknown type in legacy path
memory_bytes: None,
restart_count: None,
uptime_seconds: None,
},
));
&mut sub_service_list.last_mut().unwrap().1
@@ -419,14 +505,6 @@ impl ServicesWidget {
if metric.name.ends_with("_status") {
sub_service_info.widget_status = metric.status;
} else if metric.name.ends_with("_memory_mb") {
if let Some(memory) = metric.value.as_f32() {
sub_service_info.memory_mb = Some(memory);
}
} else if metric.name.ends_with("_disk_gb") {
if let Some(disk) = metric.value.as_f32() {
sub_service_info.disk_gb = Some(disk);
}
}
}
}
@@ -483,11 +561,28 @@ impl ServicesWidget {
.constraints([Constraint::Length(1), Constraint::Min(0)])
.split(inner_area);
// Header
let header = format!(
"{:<25} {:<10} {:<8} {:<8}",
"Service:", "Status:", "RAM:", "Disk:"
);
// Determine which columns to show based on available width
let columns = ColumnVisibility::from_width(inner_area.width);
// Build header based on visible columns
let mut header_parts = Vec::new();
if columns.show_name {
header_parts.push(format!("{:<width$}", "Service:", width = ColumnVisibility::NAME_WIDTH as usize));
}
if columns.show_status {
header_parts.push(format!("{:<width$}", "Status:", width = ColumnVisibility::STATUS_WIDTH as usize));
}
if columns.show_ram {
header_parts.push(format!("{:<width$}", "RAM:", width = ColumnVisibility::RAM_WIDTH as usize));
}
if columns.show_uptime {
header_parts.push(format!("{:<width$}", "Uptime:", width = ColumnVisibility::UPTIME_WIDTH as usize));
}
if columns.show_restarts {
header_parts.push(format!("{:<width$}", "↻:", width = ColumnVisibility::RESTARTS_WIDTH as usize));
}
let header = header_parts.join(" ");
let header_para = Paragraph::new(header).style(Typography::muted());
frame.render_widget(header_para, content_chunks[0]);
@@ -499,11 +594,11 @@ impl ServicesWidget {
}
// Render the services list
self.render_services(frame, content_chunks[1], is_focused);
self.render_services(frame, content_chunks[1], is_focused, columns);
}
/// Render services list
fn render_services(&mut self, frame: &mut Frame, area: Rect, is_focused: bool) {
fn render_services(&mut self, frame: &mut Frame, area: Rect, is_focused: bool, columns: ColumnVisibility) {
// Build hierarchical service list for display
let mut display_lines: Vec<(String, Status, bool, Option<(ServiceInfo, bool)>)> = Vec::new();
@@ -513,7 +608,7 @@ impl ServicesWidget {
for (parent_name, parent_info) in parent_services {
// Add parent service line
let parent_line = self.format_parent_service_line(parent_name, parent_info);
let parent_line = self.format_parent_service_line(parent_name, parent_info, columns);
display_lines.push((parent_line, parent_info.widget_status, false, None));
// Add sub-services for this parent (if any)

View File

@@ -26,7 +26,7 @@ pub struct SystemWidget {
cpu_load_1min: Option<f32>,
cpu_load_5min: Option<f32>,
cpu_load_15min: Option<f32>,
cpu_frequency: Option<f32>,
cpu_cstates: Vec<cm_dashboard_shared::CStateInfo>,
cpu_status: Status,
// Memory metrics
@@ -45,15 +45,9 @@ pub struct SystemWidget {
storage_pools: Vec<StoragePool>,
// Backup metrics
backup_status: String,
backup_start_time_raw: Option<String>,
backup_disk_serial: Option<String>,
backup_disk_usage_percent: Option<f32>,
backup_disk_used_gb: Option<f32>,
backup_disk_total_gb: Option<f32>,
backup_disk_wear_percent: Option<f32>,
backup_disk_temperature: Option<f32>,
backup_last_size_gb: Option<f32>,
backup_repositories: Vec<String>,
backup_repository_status: Status,
backup_disks: Vec<cm_dashboard_shared::BackupDiskData>,
// Overall status
has_data: bool,
@@ -102,7 +96,7 @@ impl SystemWidget {
cpu_load_1min: None,
cpu_load_5min: None,
cpu_load_15min: None,
cpu_frequency: None,
cpu_cstates: Vec::new(),
cpu_status: Status::Unknown,
memory_usage_percent: None,
memory_used_gb: None,
@@ -114,15 +108,9 @@ impl SystemWidget {
tmp_status: Status::Unknown,
tmpfs_mounts: Vec::new(),
storage_pools: Vec::new(),
backup_status: "unknown".to_string(),
backup_start_time_raw: None,
backup_disk_serial: None,
backup_disk_usage_percent: None,
backup_disk_used_gb: None,
backup_disk_total_gb: None,
backup_disk_wear_percent: None,
backup_disk_temperature: None,
backup_last_size_gb: None,
backup_repositories: Vec::new(),
backup_repository_status: Status::Unknown,
backup_disks: Vec::new(),
has_data: false,
}
}
@@ -137,12 +125,19 @@ impl SystemWidget {
}
}
/// Format CPU frequency
fn format_cpu_frequency(&self) -> String {
match self.cpu_frequency {
Some(freq) => format!("{:.0} MHz", freq),
None => "— MHz".to_string(),
}
}
/// Format CPU C-states (idle depth) with percentages
fn format_cpu_cstate(&self) -> String {
if self.cpu_cstates.is_empty() {
return "".to_string();
}
// Format top 3 C-states with percentages: "C10:79% C8:10% C6:8%"
// Agent already sends clean names (C3, C10, etc.)
self.cpu_cstates
.iter()
.map(|cs| format!("{}:{:.0}%", cs.name, cs.percent))
.collect::<Vec<_>>()
.join(" ")
}
/// Format memory usage
@@ -188,7 +183,7 @@ impl Widget for SystemWidget {
self.cpu_load_1min = Some(cpu.load_1min);
self.cpu_load_5min = Some(cpu.load_5min);
self.cpu_load_15min = Some(cpu.load_15min);
self.cpu_frequency = Some(cpu.frequency_mhz);
self.cpu_cstates = cpu.cstates.clone();
self.cpu_status = Status::Ok;
// Extract memory data directly
@@ -214,25 +209,9 @@ impl Widget for SystemWidget {
// Extract backup data
let backup = &agent_data.backup;
self.backup_status = backup.status.clone();
self.backup_start_time_raw = backup.start_time_raw.clone();
self.backup_last_size_gb = backup.last_backup_size_gb;
if let Some(disk) = &backup.repository_disk {
self.backup_disk_serial = Some(disk.serial.clone());
self.backup_disk_usage_percent = Some(disk.usage_percent);
self.backup_disk_used_gb = Some(disk.used_gb);
self.backup_disk_total_gb = Some(disk.total_gb);
self.backup_disk_wear_percent = disk.wear_percent;
self.backup_disk_temperature = disk.temperature_celsius;
} else {
self.backup_disk_serial = None;
self.backup_disk_usage_percent = None;
self.backup_disk_used_gb = None;
self.backup_disk_total_gb = None;
self.backup_disk_wear_percent = None;
self.backup_disk_temperature = None;
}
self.backup_repositories = backup.repositories.clone();
self.backup_repository_status = backup.repository_status;
self.backup_disks = backup.disks.clone();
}
}
@@ -532,14 +511,36 @@ impl SystemWidget {
fn render_backup(&self) -> Vec<Line<'_>> {
let mut lines = Vec::new();
// First line: serial number with temperature and wear
if let Some(serial) = &self.backup_disk_serial {
let truncated_serial = truncate_serial(serial);
let mut details = Vec::new();
if let Some(temp) = self.backup_disk_temperature {
details.push(format!("T: {}°C", temp as i32));
}
if let Some(wear) = self.backup_disk_wear_percent {
details.push(format!("W: {}%", wear as i32));
}
// First section: Repository status and list
if !self.backup_repositories.is_empty() {
let repo_text = format!("Repo: {}", self.backup_repositories.len());
let repo_spans = StatusIcons::create_status_spans(self.backup_repository_status, &repo_text);
lines.push(Line::from(repo_spans));
// List all repositories (sorted for consistent display)
let mut sorted_repos = self.backup_repositories.clone();
sorted_repos.sort();
let repo_count = sorted_repos.len();
for (idx, repo) in sorted_repos.iter().enumerate() {
let tree_char = if idx == repo_count - 1 { "└─" } else { "├─" };
lines.push(Line::from(vec![
Span::styled(format!(" {} ", tree_char), Typography::tree()),
Span::styled(repo.clone(), Typography::secondary()),
]));
}
}
// Second section: Per-disk backup information (sorted by serial for consistent display)
let mut sorted_disks = self.backup_disks.clone();
sorted_disks.sort_by(|a, b| a.serial.cmp(&b.serial));
for disk in &sorted_disks {
let truncated_serial = truncate_serial(&disk.serial);
let mut details = Vec::new();
if let Some(temp) = disk.temperature_celsius {
details.push(format!("T: {}°C", temp as i32));
}
if let Some(wear) = disk.wear_percent {
details.push(format!("W: {}%", wear as i32));
}
@@ -549,44 +550,40 @@ impl SystemWidget {
truncated_serial
};
let backup_status = match self.backup_status.as_str() {
"completed" | "success" => Status::Ok,
"running" => Status::Pending,
"failed" => Status::Critical,
_ => Status::Unknown,
};
let disk_spans = StatusIcons::create_status_spans(backup_status, &disk_text);
// Show backup time from TOML if available
if let Some(start_time) = &self.backup_start_time_raw {
let time_text = if let Some(size) = self.backup_last_size_gb {
format!("Time: {} ({:.1}GB)", start_time, size)
} else {
format!("Time: {}", start_time)
};
lines.push(Line::from(vec![
Span::styled(" ├─ ", Typography::tree()),
Span::styled(time_text, Typography::secondary())
]));
}
// Usage information
if let (Some(used), Some(total), Some(usage_percent)) = (
self.backup_disk_used_gb,
self.backup_disk_total_gb,
self.backup_disk_usage_percent
) {
let usage_text = format!("Usage: {:.0}% {:.0}GB/{:.0}GB", usage_percent, used, total);
let usage_spans = StatusIcons::create_status_spans(Status::Ok, &usage_text);
let mut full_spans = vec![
Span::styled(" └─ ", Typography::tree()),
];
full_spans.extend(usage_spans);
lines.push(Line::from(full_spans));
}
// Overall disk status (worst of backup and usage)
let disk_status = disk.backup_status.max(disk.usage_status);
let disk_spans = StatusIcons::create_status_spans(disk_status, &disk_text);
lines.push(Line::from(disk_spans));
// Show backup time with status
if let Some(backup_time) = &disk.last_backup_time {
let time_text = format!("Backup: {}", backup_time);
let mut time_spans = vec![
Span::styled(" ├─ ", Typography::tree()),
];
time_spans.extend(StatusIcons::create_status_spans(disk.backup_status, &time_text));
lines.push(Line::from(time_spans));
}
// Show usage with status and archive count
let archive_display = if disk.archives_min == disk.archives_max {
format!("{}", disk.archives_min)
} else {
format!("{}-{}", disk.archives_min, disk.archives_max)
};
let usage_text = format!(
"Usage: ({}) {:.0}% {:.0}GB/{:.0}GB",
archive_display,
disk.disk_usage_percent,
disk.disk_used_gb,
disk.disk_total_gb
);
let mut usage_spans = vec![
Span::styled(" └─ ", Typography::tree()),
];
usage_spans.extend(StatusIcons::create_status_spans(disk.usage_status, &usage_text));
lines.push(Line::from(usage_spans));
}
lines
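`disk.backup_status.max(disk.usage_status)` picks the worse of the two states, which presumably works because Status implements Ord with more severe variants sorting higher. A reduced sketch of the idea (variant set abbreviated; the real enum lives in cm-dashboard-shared):

    // Later variants compare greater under the derived Ord, so .max()
    // yields the more severe of two statuses.
    #[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
    enum Status {
        Ok,
        Warning,
        Critical,
    }

    fn main() {
        assert_eq!(Status::Ok.max(Status::Warning), Status::Warning);
        assert_eq!(Status::Critical.max(Status::Ok), Status::Critical);
    }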
@@ -832,10 +829,10 @@ impl SystemWidget {
);
lines.push(Line::from(cpu_spans));
let freq_text = self.format_cpu_frequency();
let cstate_text = self.format_cpu_cstate();
lines.push(Line::from(vec![
Span::styled(" └─ ", Typography::tree()),
Span::styled(format!("Freq: {}", freq_text), Typography::secondary())
Span::styled(format!("C-state: {}", cstate_text), Typography::secondary())
]));
// RAM section
@@ -894,7 +891,7 @@ impl SystemWidget {
lines.extend(storage_lines);
// Backup section (if available)
if self.backup_status != "unavailable" && self.backup_status != "unknown" {
if !self.backup_repositories.is_empty() || !self.backup_disks.is_empty() {
lines.push(Line::from(vec![
Span::styled("Backup:", Typography::widget_title())
]));

View File

@@ -1,6 +1,6 @@
[package]
name = "cm-dashboard-shared"
version = "0.1.198"
version = "0.1.234"
edition = "2021"
[dependencies]

View File

@@ -40,13 +40,20 @@ pub struct NetworkInterfaceData {
pub vlan_id: Option<u16>,
}
/// CPU C-state usage information
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CStateInfo {
pub name: String,
pub percent: f32,
}
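How the agent fills cstates is outside this diff; on Linux the raw material is the cpuidle residency counters in sysfs. A rough sketch of turning those counters into CStateInfo percentages (single CPU, no delta sampling; a real collector would diff successive readings rather than use the cumulative totals):

    use std::fs;

    // Reads /sys/devices/system/cpu/cpu0/cpuidle/stateN/{name,time} and
    // normalizes the cumulative time-in-state counters to percentages.
    fn read_cstate_percents() -> std::io::Result<Vec<CStateInfo>> {
        let mut states: Vec<(String, u64)> = Vec::new();
        let mut total: u64 = 0;
        for entry in fs::read_dir("/sys/devices/system/cpu/cpu0/cpuidle")? {
            let path = entry?.path();
            // Only the stateN directories carry residency counters
            if !path.file_name().map_or(false, |n| n.to_string_lossy().starts_with("state")) {
                continue;
            }
            let name = fs::read_to_string(path.join("name"))?.trim().to_string();
            // "time" is cumulative microseconds spent in this state
            let micros: u64 = fs::read_to_string(path.join("time"))?.trim().parse().unwrap_or(0);
            total += micros;
            states.push((name, micros));
        }
        Ok(states
            .into_iter()
            .map(|(name, micros)| CStateInfo {
                name,
                percent: if total > 0 { micros as f32 * 100.0 / total as f32 } else { 0.0 },
            })
            .collect())
    }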
/// CPU monitoring data
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CpuData {
pub load_1min: f32,
pub load_5min: f32,
pub load_15min: f32,
pub frequency_mhz: f32,
pub cstates: Vec<CStateInfo>, // C-state usage percentages (C1, C6, C10, etc.) - indicates CPU idle depth distribution
pub temperature_celsius: Option<f32>,
pub load_status: Status,
pub temperature_status: Status,
@@ -136,11 +143,15 @@ pub struct PoolDriveData {
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ServiceData {
pub name: String,
pub memory_mb: f32,
pub disk_gb: f32,
pub user_stopped: bool,
pub service_status: Status,
pub sub_services: Vec<SubServiceData>,
/// Memory usage in bytes (from MemoryCurrent)
pub memory_bytes: Option<u64>,
/// Number of service restarts (from NRestarts)
pub restart_count: Option<u32>,
/// Uptime in seconds (calculated from ExecMainStartTimestamp)
pub uptime_seconds: Option<u64>,
}
/// Sub-service data (nginx sites, docker containers, etc.)
@@ -165,23 +176,27 @@ pub struct SubServiceMetric {
/// Backup system data
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct BackupData {
pub status: String,
pub total_size_gb: Option<f32>,
pub repository_health: Option<String>,
pub repository_disk: Option<BackupDiskData>,
pub last_backup_size_gb: Option<f32>,
pub start_time_raw: Option<String>,
pub repositories: Vec<String>,
pub repository_status: Status,
pub disks: Vec<BackupDiskData>,
}
/// Backup repository disk information
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct BackupDiskData {
pub serial: String,
pub usage_percent: f32,
pub used_gb: f32,
pub total_gb: f32,
pub product_name: Option<String>,
pub wear_percent: Option<f32>,
pub temperature_celsius: Option<f32>,
pub last_backup_time: Option<String>,
pub backup_status: Status,
pub disk_usage_percent: f32,
pub disk_used_gb: f32,
pub disk_total_gb: f32,
pub usage_status: Status,
pub services: Vec<String>,
pub archives_min: i64,
pub archives_max: i64,
}
impl AgentData {
@@ -200,7 +215,7 @@ impl AgentData {
load_1min: 0.0,
load_5min: 0.0,
load_15min: 0.0,
frequency_mhz: 0.0,
cstates: Vec::new(),
temperature_celsius: None,
load_status: Status::Unknown,
temperature_status: Status::Unknown,
@@ -222,12 +237,9 @@ impl AgentData {
},
services: Vec::new(),
backup: BackupData {
status: "unknown".to_string(),
total_size_gb: None,
repository_health: None,
repository_disk: None,
last_backup_size_gb: None,
start_time_raw: None,
repositories: Vec::new(),
repository_status: Status::Unknown,
disks: Vec::new(),
},
}
}