Compare commits

...

26 Commits

Author SHA1 Message Date
d68ecfbc64 Complete unified pool visualization with filesystem children
All checks were successful
Build and Release / build-and-release (push) Successful in 2m17s
- Implement filesystem children display under physical drive pools
- Agent generates individual filesystem metrics for each mount point
- Dashboard parses filesystem metrics and displays as tree children
- Add filesystem usage, total, and available space metrics
- Support target format: drive info + filesystem children hierarchy
- Fix compilation warnings by properly using available_bytes calculation
2025-11-23 12:48:24 +01:00
d1272a6c13 Implement unified pool visualization for single drives
All checks were successful
Build and Release / build-and-release (push) Successful in 1m19s
- Group single disk filesystems by physical drive during auto-discovery
- Create physical drive pools with filesystem children
- Display temperature, wear, and health at drive level
- Provide consistent hierarchical storage visualization
- Fix borrow checker issues in create_physical_drive_pool method
- Add PhysicalDrive case to all StoragePoolType match statements
2025-11-23 12:10:42 +01:00
33b3beb342 Implement storage auto-discovery system
All checks were successful
Build and Release / build-and-release (push) Successful in 1m49s
- Add automatic detection of mergerfs pools by parsing /proc/mounts
- Implement smart heuristics for parity disk identification
- Store discovered topology at agent startup for efficient monitoring
- Eliminate need for manual storage pool configuration
- Support zero-config storage visualization with backward compatibility
- Clean up mount parsing and remove unused fields
2025-11-23 11:44:57 +01:00
f9384d9df6 Implement enhanced storage pool visualization
All checks were successful
Build and Release / build-and-release (push) Successful in 2m34s
- Add support for mergerfs pool grouping with data and parity disk separation
- Implement pool health monitoring (healthy/degraded/critical status)
- Create hierarchical tree view for multi-disk storage arrays
- Add automatic pool type detection and member disk association
- Maintain backward compatibility for single disk configurations
- Support future extension for RAID and ZFS pool types
2025-11-23 11:18:21 +01:00
156d707377 Add version display and fix status aggregation priorities
All checks were successful
Build and Release / build-and-release (push) Successful in 2m37s
- Add dynamic version display in top bar using CARGO_PKG_VERSION
- Rewrite status aggregation to only show Critical/Warning/OK in top bar
- Fix Status enum ordering to prioritize OK over transitional states
- Remove blue/gray colors from top bar background
2025-11-21 16:19:45 +01:00
dc1a2e3a0f Add disk wear monitoring and fix storage overflow display
All checks were successful
Build and Release / build-and-release (push) Successful in 1m15s
- Add disk wear percentage collection from SMART data in backup script
- Add backup_disk_wear_percent metric to backup collector with thresholds
- Display wear percentage in backup widget disk section
- Fix storage section overflow handling to use consistent "X more below" logic
- Update maintenance mode to return pending status instead of unknown
2025-11-20 20:36:45 +01:00
5d6b8e6253 Treat pending status as OK for title bar color aggregation
All checks were successful
Build and Release / build-and-release (push) Successful in 1m12s
Apply same logic used for inactive status to pending status.
Pending services now contribute to OK count instead of being
ignored, preventing blue title bar during service transitions.
2025-11-20 18:09:59 +01:00
0cba083305 Remove pending status from title bar color aggregation
All checks were successful
Build and Release / build-and-release (push) Successful in 2m9s
Title bar now only shows Critical (red), Warning (yellow), and OK (green)
colors. Pending status is ignored in color calculation to prevent blue
title bar during service transitions.
2025-11-20 14:19:29 +01:00
a6be7a4788 Consolidate log viewing to use service-manage logs action
All checks were successful
Build and Release / build-and-release (push) Successful in 1m31s
Replace separate service_logs_cmd with service-manage logs action
to unify service management through single script interface.
Dashboard now calls 'service-manage logs <service>' which provides
intelligent log viewing based on service state and configuration.
2025-11-20 11:30:55 +01:00
2384f7f9b9 Unify log viewing with configurable script command
All checks were successful
Build and Release / build-and-release (push) Successful in 2m37s
Replace separate J/L keys with single L key that calls configurable
service_logs_cmd from dashboard config. Script handles both journalctl
and custom log files automatically based on service configuration.

Update status bar to show all available keybindings including
previously missing backup and terminal commands.
2025-11-20 11:00:38 +01:00
cd5ef65d3d Fix service selection for services with sub-services
All checks were successful
Build and Release / build-and-release (push) Successful in 2m35s
- Fix get_selected_service to always return parent service names
- Prevent selection of container sub-items when managing docker services
- Ensure service commands operate on correct systemd service names
- Simplify service selection logic to only consider parent services
- Update version to 0.1.92
2025-11-19 18:01:10 +01:00
7bf9ca6201 Fix SSH command quoting and remove duplicate user prompts
All checks were successful
Build and Release / build-and-release (push) Successful in 2m8s
- Fix rebuild and backup commands with proper inner command quoting
- Remove duplicate "Press any key to close..." from SSH commands since scripts handle it
- Clean up SSH terminal command to avoid redundant prompts
- Ensure consistent command execution patterns across all SSH operations
- Update version to 0.1.91
2025-11-19 16:08:03 +01:00
f587b42797 Implement unified SSH command management with dedicated scripts
All checks were successful
Build and Release / build-and-release (push) Successful in 1m11s
- Replace complex SSH command patterns with simple script calls
- Create service-manage script for start/stop operations with proper logging
- Create rebuild script equivalent to rebuild_git alias with user feedback
- Update dashboard to use unified command pattern: sudo service-manage, sudo rebuild
- Simplify backup to use service management: service-manage start borgbackup
- Configure sudoers with wildcards for Nix store path compatibility
- Remove cmtec references from script names for better genericity
- Update version to 0.1.90
2025-11-19 15:37:33 +01:00
7ae464e172 Wrap service commands in bash -c to ensure session persistence
All checks were successful
Build and Release / build-and-release (push) Successful in 1m10s
- Use bash -c to properly execute service start/stop command sequences
- Ensure SSH session stays alive for user input prompt
- Fix escaping issues with nested quotes in commands
- Update version to 0.1.89
2025-11-19 13:32:04 +01:00
980c9a20a2 Fix service start/stop popup auto-close issue
All checks were successful
Build and Release / build-and-release (push) Successful in 1m12s
- Move 'Press any key to close...' prompt inside SSH session
- Ensure tmux popup stays open until user manually closes
- Maintain consistent behavior with other SSH commands
- Update version to 0.1.88
2025-11-19 13:21:48 +01:00
448a38dede Fix service management command issues
All checks were successful
Build and Release / build-and-release (push) Successful in 2m7s
- Add sudo to pkill commands to resolve permission errors when killing journalctl processes
- Fix service stop command timing to show logs during shutdown process
- Add sleep delays to ensure log visibility before cleanup
- Update version to 0.1.87
2025-11-19 13:13:15 +01:00
f12e20b0f3 Standardize SSH command patterns with consistent user feedback
All checks were successful
Build and Release / build-and-release (push) Successful in 2m10s
- Apply uniform pattern to all SSH commands: informational text + command + exit prompt
- Remove exit prompt from logging commands (J/L keys) that run continuously with -f flag
- Simplify rebuild and backup commands to match service command pattern
- Update version to 0.1.86
2025-11-19 12:57:18 +01:00
564d1f37e7 Streamline service commands with auto-close functionality
All checks were successful
Build and Release / build-and-release (push) Successful in 2m8s
- Remove header text from start/stop commands for cleaner output
- Add automatic log termination when service reaches target state
- Start command auto-closes when service becomes active
- Stop command auto-closes when service becomes inactive
- Simplify SSH command structure by removing bash -c wrapper
- Version bump to 0.1.85
2025-11-19 12:30:36 +01:00
65bfb9f617 Add real-time logging to service stop command
All checks were successful
Build and Release / build-and-release (push) Successful in 2m9s
- Update stop command to use background systemctl with immediate log following
- Use same approach as start command for consistent real-time log viewing
- Version bump to 0.1.84
2025-11-19 11:59:18 +01:00
4f4ef6259b Fix service start log command escaping
All checks were successful
Build and Release / build-and-release (push) Successful in 2m6s
- Change --since="1 second ago" to --since='1 second ago'
- Fixes shell escaping issue preventing real-time logs
- Version bump to 0.1.83
2025-11-19 11:49:08 +01:00
505263cec6 Fix real-time service logs with background start approach
All checks were successful
Build and Release / build-and-release (push) Successful in 2m8s
- Update service start to use background systemctl start with immediate log following
- Implement `sudo systemctl start service & sudo journalctl -fu service --since="1 second ago"`
- Remove buffering issues that prevented real-time log streaming
- Version bump to 0.1.82
2025-11-19 11:21:49 +01:00
61dd686fb9 Fix real-time log streaming by simplifying service start command
All checks were successful
Build and Release / build-and-release (push) Successful in 1m34s
- Remove complex background process monitoring that was buffering output
- Use direct journalctl -fu command for immediate real-time log streaming
- Eliminate monitoring loop that was killing log stream when service became active
- User now controls log following duration with Ctrl+C
- Fixes buffering issues that prevented seeing ark server startup logs in real-time
2025-11-19 08:42:50 +01:00
c0f7a97a6f Remove all scrolling code and user-stopped tracking logic
All checks were successful
Build and Release / build-and-release (push) Successful in 2m36s
- Remove scroll offset fields from HostWidgets struct
- Replace scrolling with simple "X more below" indicators in all widgets
- Remove user-stopped service tracking from agent (now uses SSH control)
- Inactive services now consistently show Status::Inactive with empty circles
- Simplify widget render methods by removing scroll parameters
- Clean up unused imports and legacy scrolling infrastructure
- Fix journalctl command to use -fu for proper log following
2025-11-19 08:32:42 +01:00
9575077045 Fix Status::Inactive aggregation priority for green title bar
All checks were successful
Build and Release / build-and-release (push) Successful in 2m9s
- Move Status::Inactive to lowest priority in enum (before Ok)
- Status aggregation now prefers Ok over Inactive in mixed scenarios
- Title bar stays green when mixing active and inactive services
- Inactive services still show gray icons but don't affect overall status
- Ensures healthy systems with stopped services maintain green status
2025-11-18 18:17:25 +01:00
34a1f7b9dc Fix Status::Inactive ordering to prevent gray title bar
All checks were successful
Build and Release / build-and-release (push) Successful in 2m8s
- Reorder Status enum variants to fix aggregation priority
- Status::Inactive now has same priority as Status::Ok in aggregation
- Prevents inactive services from causing gray title bar
- Title bar stays green when system has only active and inactive services
- Only Unknown/Offline/Pending/Warning/Critical statuses affect title color
2025-11-18 18:03:50 +01:00
d11aa11f99 Add Status::Inactive for inactive services with empty circle display
All checks were successful
Build and Release / build-and-release (push) Successful in 1m12s
- Add new Status::Inactive variant to enum for better service state representation
- Agent now assigns Status::Inactive instead of Status::Warning for inactive services
- Dashboard displays inactive services with empty circle (○) icon in gray color
- User-stopped services still show as Status::Ok with green filled circle
- Inactive services treated as OK for host status aggregation
- Improves visual clarity between active (●), inactive (○), and warning (◐) states
2025-11-18 17:54:51 +01:00
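
The status-ordering commits above (156d707377, 9575077045, 34a1f7b9dc, d11aa11f99) all rely on the same mechanism: the shared Status enum derives its ordering from variant declaration order, and the title bar reflects the most severe member status. The enum lives in the cm-dashboard-shared crate and is not part of this compare view, so the following is only a minimal sketch of that idea, with assumed variant names and a simplified max-based aggregation; per the later commits, the real title-bar logic additionally treats Pending and Inactive as OK.

```rust
// Minimal sketch only: the real enum lives in cm-dashboard-shared and has more
// variants (Unknown, Offline, ...); names and derives here are assumptions.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Status {
    // With #[derive(Ord)], earlier variants compare as less severe, so placing
    // Inactive before Ok is what keeps stopped services from outranking Ok.
    Inactive,
    Ok,
    Pending,
    Warning,
    Critical,
}

/// Simplified aggregation: take the most severe status; empty input counts as Ok.
/// The real title-bar logic additionally folds Pending and Inactive into the OK
/// bucket, per the commits above.
fn aggregate(statuses: &[Status]) -> Status {
    statuses.iter().copied().max().unwrap_or(Status::Ok)
}

fn main() {
    // Mixing running and stopped services stays Ok (green title bar).
    assert_eq!(aggregate(&[Status::Ok, Status::Inactive]), Status::Ok);
    // Any Warning or Critical still dominates.
    assert_eq!(aggregate(&[Status::Ok, Status::Warning]), Status::Warning);
    println!("aggregation matches the behaviour described in the commits");
}
```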
15 changed files with 1445 additions and 471 deletions

CLAUDE.md
View File

@@ -144,6 +144,120 @@ nix-build --no-out-link -E 'with import <nixpkgs> {}; fetchurl {
- **Workspace builds**: `nix-shell -p openssl pkg-config --run "cargo build --workspace"`
- **Clean compilation**: Remove `target/` between major changes
## Enhanced Storage Pool Visualization

### Auto-Discovery Architecture

The dashboard uses automatic storage discovery to eliminate manual configuration complexity while providing intelligent storage pool grouping.

### Discovery Process

**At Agent Startup:**
1. Parse `/proc/mounts` to identify all mounted filesystems
2. Detect MergerFS pools by analyzing `fuse.mergerfs` mount sources
3. Identify member disks and potential parity relationships via heuristics
4. Store discovered storage topology for continuous monitoring
5. Generate pool-aware metrics with hierarchical relationships

**Continuous Monitoring:**
- Use stored discovery data for efficient metric collection
- Monitor individual drives for SMART data, temperature, wear
- Calculate pool-level health based on member drive status
- Generate enhanced metrics for dashboard visualization

### Supported Storage Types

**Single Disks:**
- ext4, xfs, btrfs mounted directly
- Individual drive monitoring with SMART data
- Traditional single-disk display for root, boot, etc.

**MergerFS Pools:**
- Auto-detect from `/proc/mounts` fuse.mergerfs entries
- Parse source paths to identify member disks (e.g., "/mnt/disk1:/mnt/disk2")
- Heuristic parity disk detection (sequential device names, "parity" in path)
- Pool health calculation (healthy/degraded/critical)
- Hierarchical tree display with data/parity disk grouping

**Future Extensions Ready:**
- RAID arrays via `/proc/mdstat` parsing
- ZFS pools via `zpool status` integration
- LVM logical volumes via `lvs` discovery

### Configuration

```toml
[collectors.disk]
enabled = true
auto_discover = true # Default: true
# Optional exclusions for special filesystems
exclude_mount_points = ["/tmp", "/proc", "/sys", "/dev"]
exclude_fs_types = ["tmpfs", "devtmpfs", "sysfs", "proc"]
```
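
The `[collectors.disk]` keys above are read into the agent's `DiskConfig` (`crate::config::DiskConfig`), whose definition is not shown in this change. As a rough sketch of how those keys could map onto a serde struct (field names mirror the TOML keys; the struct name, defaults, and use of the `toml` crate are assumptions):

```rust
use serde::Deserialize;

/// Hypothetical mirror of the `[collectors.disk]` table above; the real
/// `crate::config::DiskConfig` is not shown in this diff.
#[derive(Debug, Deserialize)]
struct DiskConfigSketch {
    #[serde(default = "default_true")]
    enabled: bool,
    /// Defaults to true, matching the comment in the TOML above.
    #[serde(default = "default_true")]
    auto_discover: bool,
    #[serde(default)]
    exclude_mount_points: Vec<String>,
    #[serde(default)]
    exclude_fs_types: Vec<String>,
}

fn default_true() -> bool {
    true
}

fn main() {
    let toml_src = r#"
        enabled = true
        auto_discover = true
        exclude_mount_points = ["/tmp", "/proc", "/sys", "/dev"]
        exclude_fs_types = ["tmpfs", "devtmpfs", "sysfs", "proc"]
    "#;
    let cfg: DiskConfigSketch = toml::from_str(toml_src).expect("valid TOML");
    println!("{cfg:?}");
}
```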
### Display Format

```
Storage:
● /srv/media (mergerfs (2+1)):
├─ Pool Status: ● Healthy (3 drives)
├─ Total: ● 63% 2355.2GB/3686.4GB
├─ Data Disks:
│ ├─ ● sdb T: 24°C
│ └─ ● sdd T: 27°C
└─ Parity: ● sdc T: 24°C
● /:
├─ ● nvme0n1 W: 13%
└─ ● 7% 14.5GB/218.5GB
```

### Implementation Benefits

- **Zero Configuration**: No manual pool definitions required
- **Always Accurate**: Reflects actual system state automatically
- **Scales Automatically**: Handles any number of pools without config changes
- **Backwards Compatible**: Single disks continue working unchanged
- **Future Ready**: Easy extension for additional storage technologies

### Current Status (v0.1.100)

**✅ Completed:**
- Auto-discovery system implemented and deployed
- `/proc/mounts` parsing with smart heuristics for parity detection
- Storage topology stored at agent startup for efficient monitoring
- Universal zero-configuration for all hosts (cmbox, steambox, simonbox, srv01, srv02, srv03)
- Enhanced pool health calculation (healthy/degraded/critical)
- Hierarchical tree visualization with data/parity disk separation

**🔄 In Progress - Unified Pool Visualization:**

Current auto-discovery works but displays filesystems separately instead of grouped by physical drives. Need to implement unified pool concept where single drives are treated as pools.

**Current Display (needs improvement):**
```
● /boot: (separate entry)
● /nix_store: (separate entry)
● /: (separate entry)
```

**Target Display (unified pools):**
```
● nvme0n1:
├─ Drive: T: 35°C W: 1%
├─ /boot: 11% 0.1GB/1.0GB
├─ /nix_store: 23% 214.9GB/928.2GB
└─ /: 23% 214.9GB/928.2GB
```

**Required Changes:**
1. **Enhanced Auto-Discovery**: Group filesystems by backing physical drive during discovery
2. **UI Pool Logic**: Treat single drives as "pools" with drive name as header
3. **Drive Info Display**: Show temperature, wear, health at pool level for single drives
4. **Filesystem Children**: Display mount points as children under their physical drives
5. **Hybrid Rendering**: Physical grouping for single drives, logical grouping for mergerfs pools

**Expected Result**: Consistent hierarchical storage visualization where everything follows pool->children pattern, regardless of underlying storage technology.
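
Item 1 above (grouping filesystems by their backing physical drive) is the core of the unified view. The agent helper that does this is not included in this excerpt, so the sketch below only illustrates one plausible approach based on Linux partition naming (nvme0n1p1 -> nvme0n1, sda1 -> sda); the actual implementation may resolve parents via /sys/block or lsblk instead.

```rust
use std::collections::HashMap;

/// Map a partition device name to its parent drive using Linux naming
/// conventions. Illustrative only; the agent's real grouping helper is not
/// shown in this diff.
fn parent_drive(partition: &str) -> String {
    if partition.starts_with("nvme") || partition.starts_with("mmcblk") {
        // nvme0n1p2 -> nvme0n1, mmcblk0p1 -> mmcblk0
        if let Some(idx) = partition.rfind('p') {
            let suffix = &partition[idx + 1..];
            if !suffix.is_empty() && suffix.chars().all(|c| c.is_ascii_digit()) {
                return partition[..idx].to_string();
            }
        }
        partition.to_string()
    } else {
        // sda1 -> sda, sdb3 -> sdb
        partition.trim_end_matches(|c: char| c.is_ascii_digit()).to_string()
    }
}

fn main() {
    // Hypothetical mount-point -> partition pairs, as auto-discovery might see them.
    let mounts = [("/boot", "nvme0n1p1"), ("/", "nvme0n1p2"), ("/mnt/disk1", "sdb1")];

    let mut pools: HashMap<String, Vec<&str>> = HashMap::new();
    for (mount_point, partition) in mounts {
        pools.entry(parent_drive(partition)).or_default().push(mount_point);
    }
    // Expected grouping: nvme0n1 -> ["/boot", "/"], sdb -> ["/mnt/disk1"]
    println!("{pools:?}");
}
```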
## Important Communication Guidelines

Keep responses concise and focused. Avoid extensive implementation summaries unless requested.

Cargo.lock (generated)
View File

@@ -17,9 +17,9 @@ dependencies = [
[[package]] [[package]]
name = "aho-corasick" name = "aho-corasick"
version = "1.1.3" version = "1.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e60d3430d3a69478ad0993f19238d2df97c507009a52b3c10addcd7f6bcb916" checksum = "ddd31a130427c27518df266943a5308ed92d4b226cc639f5a8f1002816174301"
dependencies = [ dependencies = [
"memchr", "memchr",
] ]
@@ -71,22 +71,22 @@ dependencies = [
[[package]] [[package]]
name = "anstyle-query" name = "anstyle-query"
version = "1.1.4" version = "1.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9e231f6134f61b71076a3eab506c379d4f36122f2af15a9ff04415ea4c3339e2" checksum = "40c48f72fd53cd289104fc64099abca73db4166ad86ea0b4341abe65af83dadc"
dependencies = [ dependencies = [
"windows-sys 0.60.2", "windows-sys 0.61.2",
] ]
[[package]] [[package]]
name = "anstyle-wincon" name = "anstyle-wincon"
version = "3.0.10" version = "3.0.11"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3e0633414522a32ffaac8ac6cc8f748e090c5717661fddeea04219e2344f5f2a" checksum = "291e6a250ff86cd4a820112fb8898808a366d8f9f58ce16d1f538353ad55747d"
dependencies = [ dependencies = [
"anstyle", "anstyle",
"once_cell_polyfill", "once_cell_polyfill",
"windows-sys 0.60.2", "windows-sys 0.61.2",
] ]
[[package]] [[package]]
@@ -95,6 +95,15 @@ version = "1.0.100"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a23eb6b1614318a8071c9b2521f36b424b2c83db5eb3a0fead4a6c0809af6e61" checksum = "a23eb6b1614318a8071c9b2521f36b424b2c83db5eb3a0fead4a6c0809af6e61"
[[package]]
name = "ar_archive_writer"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0c269894b6fe5e9d7ada0cf69b5bf847ff35bc25fc271f08e1d080fce80339a"
dependencies = [
"object",
]
[[package]] [[package]]
name = "async-trait" name = "async-trait"
version = "0.1.89" version = "0.1.89"
@@ -144,9 +153,9 @@ checksum = "46c5e41b57b8bba42a04676d81cb89e9ee8e859a1a66f80a5a72e1cb76b34d43"
[[package]] [[package]]
name = "bytes" name = "bytes"
version = "1.10.1" version = "1.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d71b6127be86fdcfddb610f7182ac57211d4b18a3e9c82eb2d17662f2227ad6a" checksum = "b35204fbdc0b3f4446b89fc1ac2cf84a8a68971995d0bf2e925ec7cd960f9cb3"
[[package]] [[package]]
name = "cassowary" name = "cassowary"
@@ -156,9 +165,9 @@ checksum = "df8670b8c7b9dae1793364eafadf7239c40d669904660c5960d74cfd80b46a53"
[[package]] [[package]]
name = "cc" name = "cc"
version = "1.2.41" version = "1.2.46"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac9fe6cdbb24b6ade63616c0a0688e45bb56732262c158df3c0c4bea4ca47cb7" checksum = "b97463e1064cb1b1c1384ad0a0b9c8abd0988e2a91f52606c80ef14aadb63e36"
dependencies = [ dependencies = [
"find-msvc-tools", "find-msvc-tools",
"jobserver", "jobserver",
@@ -230,9 +239,9 @@ dependencies = [
[[package]] [[package]]
name = "clap" name = "clap"
version = "4.5.49" version = "4.5.52"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f4512b90fa68d3a9932cea5184017c5d200f5921df706d45e853537dea51508f" checksum = "aa8120877db0e5c011242f96806ce3c94e0737ab8108532a76a3300a01db2ab8"
dependencies = [ dependencies = [
"clap_builder", "clap_builder",
"clap_derive", "clap_derive",
@@ -240,9 +249,9 @@ dependencies = [
[[package]] [[package]]
name = "clap_builder" name = "clap_builder"
version = "4.5.49" version = "4.5.52"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0025e98baa12e766c67ba13ff4695a887a1eba19569aad00a472546795bd6730" checksum = "02576b399397b659c26064fbc92a75fede9d18ffd5f80ca1cd74ddab167016e1"
dependencies = [ dependencies = [
"anstream", "anstream",
"anstyle", "anstyle",
@@ -270,7 +279,7 @@ checksum = "a1d728cc89cf3aee9ff92b05e62b19ee65a02b5702cff7d5a377e32c6ae29d8d"
[[package]] [[package]]
name = "cm-dashboard" name = "cm-dashboard"
version = "0.1.75" version = "0.1.102"
dependencies = [ dependencies = [
"anyhow", "anyhow",
"chrono", "chrono",
@@ -292,7 +301,7 @@ dependencies = [
[[package]] [[package]]
name = "cm-dashboard-agent" name = "cm-dashboard-agent"
version = "0.1.75" version = "0.1.102"
dependencies = [ dependencies = [
"anyhow", "anyhow",
"async-trait", "async-trait",
@@ -315,7 +324,7 @@ dependencies = [
[[package]] [[package]]
name = "cm-dashboard-shared" name = "cm-dashboard-shared"
version = "0.1.75" version = "0.1.103"
dependencies = [ dependencies = [
"chrono", "chrono",
"serde", "serde",
@@ -503,9 +512,9 @@ checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"
[[package]] [[package]]
name = "find-msvc-tools" name = "find-msvc-tools"
version = "0.1.4" version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "52051878f80a721bb68ebfbc930e07b65ba72f2da88968ea5c06fd6ca3d3a127" checksum = "3a3076410a55c90011c298b04d0cfa770b00fa04e1e3c97d3f6c9de105a03844"
[[package]] [[package]]
name = "fnv" name = "fnv"
@@ -768,9 +777,9 @@ dependencies = [
[[package]] [[package]]
name = "icu_collections" name = "icu_collections"
version = "2.0.0" version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "200072f5d0e3614556f94a9930d5dc3e0662a652823904c3a75dc3b0af7fee47" checksum = "4c6b649701667bbe825c3b7e6388cb521c23d88644678e83c0c4d0a621a34b43"
dependencies = [ dependencies = [
"displaydoc", "displaydoc",
"potential_utf", "potential_utf",
@@ -781,9 +790,9 @@ dependencies = [
[[package]] [[package]]
name = "icu_locale_core" name = "icu_locale_core"
version = "2.0.0" version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0cde2700ccaed3872079a65fb1a78f6c0a36c91570f28755dda67bc8f7d9f00a" checksum = "edba7861004dd3714265b4db54a3c390e880ab658fec5f7db895fae2046b5bb6"
dependencies = [ dependencies = [
"displaydoc", "displaydoc",
"litemap", "litemap",
@@ -794,11 +803,10 @@ dependencies = [
[[package]] [[package]]
name = "icu_normalizer" name = "icu_normalizer"
version = "2.0.0" version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "436880e8e18df4d7bbc06d58432329d6458cc84531f7ac5f024e93deadb37979" checksum = "5f6c8828b67bf8908d82127b2054ea1b4427ff0230ee9141c54251934ab1b599"
dependencies = [ dependencies = [
"displaydoc",
"icu_collections", "icu_collections",
"icu_normalizer_data", "icu_normalizer_data",
"icu_properties", "icu_properties",
@@ -809,42 +817,38 @@ dependencies = [
[[package]] [[package]]
name = "icu_normalizer_data" name = "icu_normalizer_data"
version = "2.0.0" version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "00210d6893afc98edb752b664b8890f0ef174c8adbb8d0be9710fa66fbbf72d3" checksum = "7aedcccd01fc5fe81e6b489c15b247b8b0690feb23304303a9e560f37efc560a"
[[package]] [[package]]
name = "icu_properties" name = "icu_properties"
version = "2.0.1" version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "016c619c1eeb94efb86809b015c58f479963de65bdb6253345c1a1276f22e32b" checksum = "e93fcd3157766c0c8da2f8cff6ce651a31f0810eaa1c51ec363ef790bbb5fb99"
dependencies = [ dependencies = [
"displaydoc",
"icu_collections", "icu_collections",
"icu_locale_core", "icu_locale_core",
"icu_properties_data", "icu_properties_data",
"icu_provider", "icu_provider",
"potential_utf",
"zerotrie", "zerotrie",
"zerovec", "zerovec",
] ]
[[package]] [[package]]
name = "icu_properties_data" name = "icu_properties_data"
version = "2.0.1" version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "298459143998310acd25ffe6810ed544932242d3f07083eee1084d83a71bd632" checksum = "02845b3647bb045f1100ecd6480ff52f34c35f82d9880e029d329c21d1054899"
[[package]] [[package]]
name = "icu_provider" name = "icu_provider"
version = "2.0.0" version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "03c80da27b5f4187909049ee2d72f276f0d9f99a42c306bd0131ecfe04d8e5af" checksum = "85962cf0ce02e1e0a629cc34e7ca3e373ce20dda4c4d7294bbd0bf1fdb59e614"
dependencies = [ dependencies = [
"displaydoc", "displaydoc",
"icu_locale_core", "icu_locale_core",
"stable_deref_trait",
"tinystr",
"writeable", "writeable",
"yoke", "yoke",
"zerofrom", "zerofrom",
@@ -885,9 +889,12 @@ dependencies = [
[[package]] [[package]]
name = "indoc" name = "indoc"
version = "2.0.6" version = "2.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f4c7245a08504955605670dbf141fceab975f15ca21570696aebe9d2e71576bd" checksum = "79cf5c93f93228cf8efb3ba362535fb11199ac548a09ce117c9b1adc3030d706"
dependencies = [
"rustversion",
]
[[package]] [[package]]
name = "ipnet" name = "ipnet"
@@ -897,9 +904,9 @@ checksum = "469fb0b9cefa57e3ef31275ee7cacb78f2fdca44e4765491884a2b119d4eb130"
[[package]] [[package]]
name = "is_terminal_polyfill" name = "is_terminal_polyfill"
version = "1.70.1" version = "1.70.2"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7943c866cc5cd64cbc25b2e01621d07fa8eb2a1a23160ee81ce38704e97b8ecf" checksum = "a6cb138bb79a146c1bd460005623e142ef0181e3d0219cb493e02f7d08a35695"
[[package]] [[package]]
name = "itertools" name = "itertools"
@@ -928,9 +935,9 @@ dependencies = [
[[package]] [[package]]
name = "js-sys" name = "js-sys"
version = "0.3.81" version = "0.3.82"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec48937a97411dcb524a265206ccd4c90bb711fca92b2792c407f268825b9305" checksum = "b011eec8cc36da2aab2d5cff675ec18454fad408585853910a202391cf9f8e65"
dependencies = [ dependencies = [
"once_cell", "once_cell",
"wasm-bindgen", "wasm-bindgen",
@@ -988,9 +995,9 @@ checksum = "df1d3c3b53da64cf5760482273a98e575c651a67eec7f77df96b5b642de8f039"
[[package]] [[package]]
name = "litemap" name = "litemap"
version = "0.8.0" version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "241eaef5fd12c88705a01fc1066c48c4b36e0dd4377dcdc7ec3942cea7a69956" checksum = "6373607a59f0be73a39b6fe456b8192fcc3585f602af20751600e974dd455e77"
[[package]] [[package]]
name = "lock_api" name = "lock_api"
@@ -1104,6 +1111,15 @@ dependencies = [
"autocfg", "autocfg",
] ]
[[package]]
name = "object"
version = "0.32.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a6a622008b6e321afc04970976f62ee297fdbaa6f95318ca343e3eebb9648441"
dependencies = [
"memchr",
]
[[package]] [[package]]
name = "once_cell" name = "once_cell"
version = "1.21.3" version = "1.21.3"
@@ -1112,15 +1128,15 @@ checksum = "42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d"
[[package]] [[package]]
name = "once_cell_polyfill" name = "once_cell_polyfill"
version = "1.70.1" version = "1.70.2"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a4895175b425cb1f87721b59f0f286c2092bd4af812243672510e1ac53e2e0ad" checksum = "384b8ab6d37215f3c5301a95a4accb5d64aa607f1fcb26a11b5303878451b4fe"
[[package]] [[package]]
name = "openssl" name = "openssl"
version = "0.10.74" version = "0.10.75"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24ad14dd45412269e1a30f52ad8f0664f0f4f4a89ee8fe28c3b3527021ebb654" checksum = "08838db121398ad17ab8531ce9de97b244589089e290a384c900cb9ff7434328"
dependencies = [ dependencies = [
"bitflags 2.10.0", "bitflags 2.10.0",
"cfg-if", "cfg-if",
@@ -1150,9 +1166,9 @@ checksum = "d05e27ee213611ffe7d6348b942e8f942b37114c00cc03cec254295a4a17852e"
[[package]] [[package]]
name = "openssl-sys" name = "openssl-sys"
version = "0.9.110" version = "0.9.111"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0a9f0075ba3c21b09f8e8b2026584b1d18d49388648f2fbbf3c97ea8deced8e2" checksum = "82cab2d520aa75e3c58898289429321eb788c3106963d0dc886ec7a5f4adc321"
dependencies = [ dependencies = [
"cc", "cc",
"libc", "libc",
@@ -1262,36 +1278,37 @@ checksum = "7edddbd0b52d732b21ad9a5fab5c704c14cd949e5e9a1ec5929a24fded1b904c"
[[package]] [[package]]
name = "potential_utf" name = "potential_utf"
version = "0.1.3" version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "84df19adbe5b5a0782edcab45899906947ab039ccf4573713735ee7de1e6b08a" checksum = "b73949432f5e2a09657003c25bca5e19a0e9c84f8058ca374f49e0ebe605af77"
dependencies = [ dependencies = [
"zerovec", "zerovec",
] ]
[[package]] [[package]]
name = "proc-macro2" name = "proc-macro2"
version = "1.0.101" version = "1.0.103"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89ae43fd86e4158d6db51ad8e2b80f313af9cc74f5c0e03ccb87de09998732de" checksum = "5ee95bc4ef87b8d5ba32e8b7714ccc834865276eab0aed5c9958d00ec45f49e8"
dependencies = [ dependencies = [
"unicode-ident", "unicode-ident",
] ]
[[package]] [[package]]
name = "psm" name = "psm"
version = "0.1.27" version = "0.1.28"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e66fcd288453b748497d8fb18bccc83a16b0518e3906d4b8df0a8d42d93dbb1c" checksum = "d11f2fedc3b7dafdc2851bc52f277377c5473d378859be234bc7ebb593144d01"
dependencies = [ dependencies = [
"ar_archive_writer",
"cc", "cc",
] ]
[[package]] [[package]]
name = "quote" name = "quote"
version = "1.0.41" version = "1.0.42"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ce25767e7b499d1b604768e7cde645d14cc8584231ea6b295e9c9eb22c02e1d1" checksum = "a338cc41d27e6cc6dce6cefc13a0729dfbb81c262b1f519331575dd80ef3067f"
dependencies = [ dependencies = [
"proc-macro2", "proc-macro2",
] ]
@@ -1611,9 +1628,9 @@ dependencies = [
[[package]] [[package]]
name = "signal-hook-mio" name = "signal-hook-mio"
version = "0.2.4" version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34db1a06d485c9142248b7a054f034b349b212551f3dfd19c94d45a754a217cd" checksum = "b75a19a7a740b25bc7944bdee6172368f988763b744e3d4dfe753f6b4ece40cc"
dependencies = [ dependencies = [
"libc", "libc",
"mio 0.8.11", "mio 0.8.11",
@@ -1716,9 +1733,9 @@ dependencies = [
[[package]] [[package]]
name = "syn" name = "syn"
version = "2.0.107" version = "2.0.110"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2a26dbd934e5451d21ef060c018dae56fc073894c5a7896f882928a76e6d081b" checksum = "a99801b5bd34ede4cf3fc688c5919368fea4e4814a4664359503e6015b280aea"
dependencies = [ dependencies = [
"proc-macro2", "proc-macro2",
"quote", "quote",
@@ -1826,9 +1843,9 @@ dependencies = [
[[package]] [[package]]
name = "tinystr" name = "tinystr"
version = "0.8.1" version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5d4f6d1145dcb577acf783d4e601bc1d76a13337bb54e6233add580b07344c8b" checksum = "42d3e9c45c09de15d06dd8acf5f4e0e399e85927b7f00711024eb7ae10fa4869"
dependencies = [ dependencies = [
"displaydoc", "displaydoc",
"zerovec", "zerovec",
@@ -1874,9 +1891,9 @@ dependencies = [
[[package]] [[package]]
name = "tokio-util" name = "tokio-util"
version = "0.7.16" version = "0.7.17"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "14307c986784f72ef81c89db7d9e28d6ac26d16213b109ea501696195e6e3ce5" checksum = "2efa149fe76073d6e8fd97ef4f4eca7b67f599660115591483572e406e165594"
dependencies = [ dependencies = [
"bytes", "bytes",
"futures-core", "futures-core",
@@ -2001,9 +2018,9 @@ checksum = "e421abadd41a4225275504ea4d6566923418b7f05506fbc9c0fe86ba7396114b"
[[package]] [[package]]
name = "unicode-ident" name = "unicode-ident"
version = "1.0.19" version = "1.0.22"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f63a545481291138910575129486daeaf8ac54aee4387fe7906919f7830c7d9d" checksum = "9312f7c4f6ff9069b165498234ce8be658059c6728633667c526e27dc2cf1df5"
[[package]] [[package]]
name = "unicode-segmentation" name = "unicode-segmentation"
@@ -2055,9 +2072,9 @@ checksum = "accd4ea62f7bb7a82fe23066fb0957d48ef677f6eeb8215f372f52e48bb32426"
[[package]] [[package]]
name = "version-compare" name = "version-compare"
version = "0.2.0" version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "852e951cb7832cb45cb1169900d19760cfa39b82bc0ea9c0e5a14ae88411c98b" checksum = "03c2856837ef78f57382f06b2b8563a2f512f7185d732608fd9176cb3b8edf0e"
[[package]] [[package]]
name = "version_check" name = "version_check"
@@ -2107,9 +2124,9 @@ dependencies = [
[[package]] [[package]]
name = "wasm-bindgen" name = "wasm-bindgen"
version = "0.2.104" version = "0.2.105"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c1da10c01ae9f1ae40cbfac0bac3b1e724b320abfcf52229f80b547c0d250e2d" checksum = "da95793dfc411fbbd93f5be7715b0578ec61fe87cb1a42b12eb625caa5c5ea60"
dependencies = [ dependencies = [
"cfg-if", "cfg-if",
"once_cell", "once_cell",
@@ -2118,25 +2135,11 @@ dependencies = [
"wasm-bindgen-shared", "wasm-bindgen-shared",
] ]
[[package]]
name = "wasm-bindgen-backend"
version = "0.2.104"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "671c9a5a66f49d8a47345ab942e2cb93c7d1d0339065d4f8139c486121b43b19"
dependencies = [
"bumpalo",
"log",
"proc-macro2",
"quote",
"syn",
"wasm-bindgen-shared",
]
[[package]] [[package]]
name = "wasm-bindgen-futures" name = "wasm-bindgen-futures"
version = "0.4.54" version = "0.4.55"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7e038d41e478cc73bae0ff9b36c60cff1c98b8f38f8d7e8061e79ee63608ac5c" checksum = "551f88106c6d5e7ccc7cd9a16f312dd3b5d36ea8b4954304657d5dfba115d4a0"
dependencies = [ dependencies = [
"cfg-if", "cfg-if",
"js-sys", "js-sys",
@@ -2147,9 +2150,9 @@ dependencies = [
[[package]] [[package]]
name = "wasm-bindgen-macro" name = "wasm-bindgen-macro"
version = "0.2.104" version = "0.2.105"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7ca60477e4c59f5f2986c50191cd972e3a50d8a95603bc9434501cf156a9a119" checksum = "04264334509e04a7bf8690f2384ef5265f05143a4bff3889ab7a3269adab59c2"
dependencies = [ dependencies = [
"quote", "quote",
"wasm-bindgen-macro-support", "wasm-bindgen-macro-support",
@@ -2157,31 +2160,31 @@ dependencies = [
[[package]] [[package]]
name = "wasm-bindgen-macro-support" name = "wasm-bindgen-macro-support"
version = "0.2.104" version = "0.2.105"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9f07d2f20d4da7b26400c9f4a0511e6e0345b040694e8a75bd41d578fa4421d7" checksum = "420bc339d9f322e562942d52e115d57e950d12d88983a14c79b86859ee6c7ebc"
dependencies = [ dependencies = [
"bumpalo",
"proc-macro2", "proc-macro2",
"quote", "quote",
"syn", "syn",
"wasm-bindgen-backend",
"wasm-bindgen-shared", "wasm-bindgen-shared",
] ]
[[package]] [[package]]
name = "wasm-bindgen-shared" name = "wasm-bindgen-shared"
version = "0.2.104" version = "0.2.105"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bad67dc8b2a1a6e5448428adec4c3e84c43e561d8c9ee8a9e5aabeb193ec41d1" checksum = "76f218a38c84bcb33c25ec7059b07847d465ce0e0a76b995e134a45adcb6af76"
dependencies = [ dependencies = [
"unicode-ident", "unicode-ident",
] ]
[[package]] [[package]]
name = "web-sys" name = "web-sys"
version = "0.3.81" version = "0.3.82"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9367c417a924a74cae129e6a2ae3b47fabb1f8995595ab474029da749a8be120" checksum = "3a1f95c0d03a47f4ae1f7a64643a6bb97465d9b740f0fa8f90ea33915c99a9a1"
dependencies = [ dependencies = [
"js-sys", "js-sys",
"wasm-bindgen", "wasm-bindgen",
@@ -2535,17 +2538,16 @@ checksum = "f17a85883d4e6d00e8a97c586de764dabcc06133f7f1d55dce5cdc070ad7fe59"
[[package]] [[package]]
name = "writeable" name = "writeable"
version = "0.6.1" version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ea2f10b9bb0928dfb1b42b65e1f9e36f7f54dbdf08457afefb38afcdec4fa2bb" checksum = "9edde0db4769d2dc68579893f2306b26c6ecfbe0ef499b013d731b7b9247e0b9"
[[package]] [[package]]
name = "yoke" name = "yoke"
version = "0.8.0" version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5f41bb01b8226ef4bfd589436a297c53d118f65921786300e427be8d487695cc" checksum = "72d6e5c6afb84d73944e5cedb052c4680d5657337201555f9f2a16b7406d4954"
dependencies = [ dependencies = [
"serde",
"stable_deref_trait", "stable_deref_trait",
"yoke-derive", "yoke-derive",
"zerofrom", "zerofrom",
@@ -2553,9 +2555,9 @@ dependencies = [
[[package]] [[package]]
name = "yoke-derive" name = "yoke-derive"
version = "0.8.0" version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "38da3c9736e16c5d3c8c597a9aaa5d1fa565d0532ae05e27c24aa62fb32c0ab6" checksum = "b659052874eb698efe5b9e8cf382204678a0086ebf46982b79d6ca3182927e5d"
dependencies = [ dependencies = [
"proc-macro2", "proc-macro2",
"quote", "quote",
@@ -2616,9 +2618,9 @@ dependencies = [
[[package]] [[package]]
name = "zerotrie" name = "zerotrie"
version = "0.2.2" version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "36f0bbd478583f79edad978b407914f61b2972f5af6fa089686016be8f9af595" checksum = "2a59c17a5562d507e4b54960e8569ebee33bee890c70aa3fe7b97e85a9fd7851"
dependencies = [ dependencies = [
"displaydoc", "displaydoc",
"yoke", "yoke",
@@ -2627,9 +2629,9 @@ dependencies = [
[[package]] [[package]]
name = "zerovec" name = "zerovec"
version = "0.11.4" version = "0.11.5"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e7aa2bd55086f1ab526693ecbe444205da57e25f4489879da80635a46d90e73b" checksum = "6c28719294829477f525be0186d13efa9a3c602f7ec202ca9e353d310fb9a002"
dependencies = [ dependencies = [
"yoke", "yoke",
"zerofrom", "zerofrom",
@@ -2638,9 +2640,9 @@ dependencies = [
[[package]] [[package]]
name = "zerovec-derive" name = "zerovec-derive"
version = "0.11.1" version = "0.11.2"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b96237efa0c878c64bd89c436f661be4e46b2f3eff1ebb976f7ef2321d2f58f" checksum = "eadce39539ca5cb3985590102671f2567e659fca9666581ad3411d59207951f3"
dependencies = [ dependencies = [
"proc-macro2", "proc-macro2",
"quote", "quote",

View File

@@ -1,6 +1,6 @@
[package]
name = "cm-dashboard-agent"
version = "0.1.76" version = "0.1.103"
edition = "2021" edition = "2021"
[dependencies] [dependencies]

View File

@@ -25,6 +25,25 @@ impl BackupCollector {
}
async fn read_backup_status(&self) -> Result<Option<BackupStatusToml>, CollectorError> {
// Check if we're in maintenance mode
if std::fs::metadata("/tmp/cm-maintenance").is_ok() {
// Return special maintenance mode status
let maintenance_status = BackupStatusToml {
backup_name: "maintenance".to_string(),
start_time: chrono::Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(),
current_time: chrono::Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(),
duration_seconds: 0,
status: "pending".to_string(),
last_updated: chrono::Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(),
disk_space: None,
disk_product_name: None,
disk_serial_number: None,
disk_wear_percent: None,
services: HashMap::new(),
};
return Ok(Some(maintenance_status));
}
// Check if backup status file exists
if !std::path::Path::new(&self.backup_status_file).exists() {
return Ok(None); // File doesn't exist, but this is not an error
@@ -79,7 +98,9 @@ impl BackupCollector {
}
}
"failed" => Status::Critical,
"warning" => Status::Warning, // Backup completed with warnings
"running" => Status::Ok, // Currently running is OK "running" => Status::Ok, // Currently running is OK
"pending" => Status::Pending, // Maintenance mode or backup starting
_ => Status::Unknown,
}
}
@@ -136,6 +157,7 @@ impl Collector for BackupCollector {
name: "backup_overall_status".to_string(), name: "backup_overall_status".to_string(),
value: MetricValue::String(match overall_status { value: MetricValue::String(match overall_status {
Status::Ok => "ok".to_string(), Status::Ok => "ok".to_string(),
Status::Inactive => "inactive".to_string(),
Status::Pending => "pending".to_string(),
Status::Warning => "warning".to_string(),
Status::Critical => "critical".to_string(),
@@ -199,6 +221,7 @@ impl Collector for BackupCollector {
name: format!("backup_service_{}_status", service_name), name: format!("backup_service_{}_status", service_name),
value: MetricValue::String(match service_status { value: MetricValue::String(match service_status {
Status::Ok => "ok".to_string(), Status::Ok => "ok".to_string(),
Status::Inactive => "inactive".to_string(),
Status::Pending => "pending".to_string(),
Status::Warning => "warning".to_string(),
Status::Critical => "critical".to_string(),
@@ -377,6 +400,25 @@ impl Collector for BackupCollector {
});
}
if let Some(wear_percent) = backup_status.disk_wear_percent {
let wear_status = if wear_percent >= 90.0 {
Status::Critical
} else if wear_percent >= 75.0 {
Status::Warning
} else {
Status::Ok
};
metrics.push(Metric {
name: "backup_disk_wear_percent".to_string(),
value: MetricValue::Float(wear_percent),
status: wear_status,
timestamp,
description: Some("Backup disk wear percentage from SMART data".to_string()),
unit: Some("percent".to_string()),
});
}
// Count services by status
let mut status_counts = HashMap::new();
for service in backup_status.services.values() {
@@ -410,6 +452,7 @@ pub struct BackupStatusToml {
pub disk_space: Option<DiskSpace>,
pub disk_product_name: Option<String>,
pub disk_serial_number: Option<String>,
pub disk_wear_percent: Option<f32>,
pub services: HashMap<String, ServiceStatus>,
}

View File

@@ -5,22 +5,82 @@ use cm_dashboard_shared::{Metric, MetricValue, Status, StatusTracker, Hysteresis
use crate::config::DiskConfig;
use std::process::Command;
use std::time::Instant;
use std::fs;
use tracing::debug;
use super::{Collector, CollectorError};
/// Mount point information from /proc/mounts
#[derive(Debug, Clone)]
struct MountInfo {
device: String, // e.g., "/dev/sda1" or "/mnt/disk1:/mnt/disk2"
mount_point: String, // e.g., "/", "/srv/media"
fs_type: String, // e.g., "ext4", "xfs", "fuse.mergerfs"
}
/// Auto-discovered storage topology
#[derive(Debug, Clone)]
struct StorageTopology {
single_disks: Vec<MountInfo>,
mergerfs_pools: Vec<MergerfsPoolInfo>,
}
/// MergerFS pool information
#[derive(Debug, Clone)]
struct MergerfsPoolInfo {
mount_point: String, // e.g., "/srv/media"
data_members: Vec<String>, // e.g., ["/mnt/disk1", "/mnt/disk2"]
parity_disks: Vec<String>, // e.g., ["/mnt/parity"]
}
/// Information about a storage pool (mount point with underlying drives)
#[derive(Debug, Clone)]
struct StoragePool {
name: String, // e.g., "steampool", "root"
mount_point: String, // e.g., "/mnt/steampool", "/"
filesystem: String, // e.g., "mergerfs", "ext4", "zfs", "btrfs"
storage_type: String, // e.g., "mergerfs", "single", "raid", "zfs" pool_type: StoragePoolType, // Enhanced pool type with configuration
size: String, // e.g., "2.5TB"
used: String, // e.g., "2.1TB"
available: String, // e.g., "400GB"
usage_percent: f32, // e.g., 85.0
underlying_drives: Vec<DriveInfo>, // Individual physical drives
pool_health: PoolHealth, // Overall pool health status
}
/// Enhanced storage pool types with specific configurations
#[derive(Debug, Clone)]
enum StoragePoolType {
Single, // Traditional single disk (legacy)
PhysicalDrive { // Physical drive with multiple filesystems
filesystems: Vec<String>, // Mount points on this drive
},
MergerfsPool { // MergerFS with optional parity
data_disks: Vec<String>, // Member disk names (sdb, sdd)
parity_disks: Vec<String>, // Parity disk names (sdc)
},
#[allow(dead_code)]
RaidArray { // Hardware RAID (future)
level: String, // "RAID1", "RAID5", etc.
member_disks: Vec<String>,
spare_disks: Vec<String>,
},
#[allow(dead_code)]
ZfsPool { // ZFS pool (future)
pool_name: String,
vdevs: Vec<String>,
}
}
/// Pool health status for redundant storage
#[derive(Debug, Clone, Copy, PartialEq)]
enum PoolHealth {
Healthy, // All drives OK, parity current
Degraded, // One drive failed or parity outdated, still functional
Critical, // Multiple failures, data at risk
#[allow(dead_code)]
Rebuilding, // Actively rebuilding/scrubbing (future: SnapRAID status integration)
Unknown, // Cannot determine status
}
/// Information about an individual physical drive
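
The `calculate_mergerfs_pool_health` helper used further down in this file is not part of the excerpt. The sketch below shows one way the healthy/degraded/critical split described in the commit messages could be derived from failed member counts; the names and rules here are assumptions, not the agent's actual logic.

```rust
/// Local copy of the PoolHealth variants used in this sketch.
#[derive(Debug, Clone, Copy, PartialEq)]
enum PoolHealth {
    Healthy,
    Degraded,
    Critical,
    Unknown,
}

/// Hypothetical rule: failures within what the parity can cover count as
/// degraded, anything beyond that is critical.
fn pool_health(failed_drives: usize, parity_count: usize, total_drives: usize) -> PoolHealth {
    if total_drives == 0 {
        return PoolHealth::Unknown;
    }
    match failed_drives {
        0 => PoolHealth::Healthy,
        n if n <= parity_count => PoolHealth::Degraded,
        _ => PoolHealth::Critical,
    }
}

fn main() {
    // 2 data + 1 parity, as in the /srv/media example: healthy with all drives up,
    // degraded with one failure, critical with two.
    assert_eq!(pool_health(0, 1, 3), PoolHealth::Healthy);
    assert_eq!(pool_health(1, 1, 3), PoolHealth::Degraded);
    assert_eq!(pool_health(2, 1, 3), PoolHealth::Critical);
}
```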
@@ -37,6 +97,7 @@ pub struct DiskCollector {
config: DiskConfig,
temperature_thresholds: HysteresisThresholds,
detected_devices: std::collections::HashMap<String, Vec<String>>, // mount_point -> devices
storage_topology: Option<StorageTopology>, // Auto-discovered storage layout
}
impl DiskCollector {
@@ -49,12 +110,57 @@ impl DiskCollector {
5.0, // 5°C gap for recovery
);
// Detect devices for all configured filesystems at startup // Perform auto-discovery of storage topology
let storage_topology = match Self::auto_discover_storage() {
Ok(topology) => {
debug!("Auto-discovered storage topology: {} single disks, {} mergerfs pools",
topology.single_disks.len(), topology.mergerfs_pools.len());
Some(topology)
}
Err(e) => {
debug!("Failed to auto-discover storage topology: {}", e);
None
}
};
// Detect devices for discovered storage
let mut detected_devices = std::collections::HashMap::new();
for fs_config in &config.filesystems { if let Some(ref topology) = storage_topology {
if fs_config.monitor { // Add single disks
if let Ok(devices) = Self::detect_device_for_mount_point_static(&fs_config.mount_point) { for disk in &topology.single_disks {
detected_devices.insert(fs_config.mount_point.clone(), devices); if let Ok(devices) = Self::detect_device_for_mount_point_static(&disk.mount_point) {
detected_devices.insert(disk.mount_point.clone(), devices);
}
}
// Add mergerfs pools and their members
for pool in &topology.mergerfs_pools {
// Detect devices for the pool itself
if let Ok(devices) = Self::detect_device_for_mount_point_static(&pool.mount_point) {
detected_devices.insert(pool.mount_point.clone(), devices);
}
// Detect devices for member disks
for member in &pool.data_members {
if let Ok(devices) = Self::detect_device_for_mount_point_static(member) {
detected_devices.insert(member.clone(), devices);
}
}
// Detect devices for parity disks
for parity in &pool.parity_disks {
if let Ok(devices) = Self::detect_device_for_mount_point_static(parity) {
detected_devices.insert(parity.clone(), devices);
}
}
}
} else {
// Fallback: use legacy filesystem config detection
for fs_config in &config.filesystems {
if fs_config.monitor {
if let Ok(devices) = Self::detect_device_for_mount_point_static(&fs_config.mount_point) {
detected_devices.insert(fs_config.mount_point.clone(), devices);
}
}
}
}
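
`detect_device_for_mount_point_static` is reused from the existing collector and is not shown in this hunk; conceptually it resolves a mount point to the block device(s) behind it. A standalone sketch of that resolution via `/proc/mounts` (the real helper may use `lsblk` or `smartctl` instead, and may return several devices per mount point):

```rust
use std::fs;

/// Resolve a mount point to the name of its backing device by scanning
/// /proc/mounts. Illustrative only: the collector's real helper is not shown
/// in this diff and may resolve devices differently (lsblk, /sys/block, ...).
fn device_for_mount_point(mount_point: &str) -> Option<String> {
    let mounts = fs::read_to_string("/proc/mounts").ok()?;
    for line in mounts.lines() {
        let parts: Vec<&str> = line.split_whitespace().collect();
        if parts.len() < 2 {
            continue;
        }
        let (device, mp) = (parts[0], parts[1]);
        if mp == mount_point && device.starts_with("/dev/") {
            // "/dev/nvme0n1p2" -> "nvme0n1p2"
            return Some(device.trim_start_matches("/dev/").to_string());
        }
    }
    None
}

fn main() {
    match device_for_mount_point("/") {
        Some(dev) => println!("/ is backed by {dev}"),
        None => println!("could not resolve / to a device"),
    }
}
```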
@@ -63,58 +169,422 @@ impl DiskCollector {
config,
temperature_thresholds,
detected_devices,
storage_topology,
}
}
/// Auto-discover storage topology by parsing system information
fn auto_discover_storage() -> Result<StorageTopology> {
let mounts = Self::parse_proc_mounts()?;
let mut single_disks = Vec::new();
let mut mergerfs_pools = Vec::new();
// Filter out unwanted filesystem types and mount points
let exclude_fs_types = ["tmpfs", "devtmpfs", "sysfs", "proc", "cgroup", "cgroup2", "devpts"];
let exclude_mount_prefixes = ["/proc", "/sys", "/dev", "/tmp", "/run"];
for mount in mounts {
// Skip excluded filesystem types
if exclude_fs_types.contains(&mount.fs_type.as_str()) {
continue;
}
// Skip excluded mount point prefixes
if exclude_mount_prefixes.iter().any(|prefix| mount.mount_point.starts_with(prefix)) {
continue;
}
match mount.fs_type.as_str() {
"fuse.mergerfs" => {
// Parse mergerfs pool
let data_members = Self::parse_mergerfs_sources(&mount.device);
let parity_disks = Self::detect_parity_disks(&data_members);
mergerfs_pools.push(MergerfsPoolInfo {
mount_point: mount.mount_point.clone(),
data_members,
parity_disks,
});
debug!("Discovered mergerfs pool at {}", mount.mount_point);
}
"ext4" | "xfs" | "btrfs" | "ntfs" | "vfat" => {
// Check if this mount is part of a mergerfs pool
let is_mergerfs_member = mergerfs_pools.iter()
.any(|pool| pool.data_members.contains(&mount.mount_point) ||
pool.parity_disks.contains(&mount.mount_point));
if !is_mergerfs_member {
debug!("Discovered single disk at {}", mount.mount_point);
single_disks.push(mount);
}
}
_ => {
debug!("Skipping unsupported filesystem type: {}", mount.fs_type);
}
}
}
Ok(StorageTopology {
single_disks,
mergerfs_pools,
})
}
/// Parse /proc/mounts to get all mount information
fn parse_proc_mounts() -> Result<Vec<MountInfo>> {
let mounts_content = fs::read_to_string("/proc/mounts")?;
let mut mounts = Vec::new();
for line in mounts_content.lines() {
let parts: Vec<&str> = line.split_whitespace().collect();
if parts.len() >= 3 {
mounts.push(MountInfo {
device: parts[0].to_string(),
mount_point: parts[1].to_string(),
fs_type: parts[2].to_string(),
});
}
}
Ok(mounts)
}
/// Parse mergerfs source string to extract member paths
fn parse_mergerfs_sources(source: &str) -> Vec<String> {
// MergerFS source format: "/mnt/disk1:/mnt/disk2:/mnt/disk3"
source.split(':')
.map(|s| s.trim().to_string())
.filter(|s| !s.is_empty())
.collect()
}
/// Detect potential parity disks based on data member heuristics
fn detect_parity_disks(data_members: &[String]) -> Vec<String> {
let mut parity_disks = Vec::new();
// Heuristic 1: Look for mount points with "parity" in the name
if let Ok(mounts) = Self::parse_proc_mounts() {
for mount in mounts {
if mount.mount_point.to_lowercase().contains("parity") &&
(mount.fs_type == "xfs" || mount.fs_type == "ext4") {
debug!("Detected parity disk by name: {}", mount.mount_point);
parity_disks.push(mount.mount_point);
}
}
}
// Heuristic 2: Look for sequential device pattern
// If data members are /mnt/disk1, /mnt/disk2, look for /mnt/disk* that's not in data
if parity_disks.is_empty() {
if let Some(pattern) = Self::extract_mount_pattern(data_members) {
if let Ok(mounts) = Self::parse_proc_mounts() {
for mount in mounts {
if mount.mount_point.starts_with(&pattern) &&
!data_members.contains(&mount.mount_point) &&
(mount.fs_type == "xfs" || mount.fs_type == "ext4") {
debug!("Detected parity disk by pattern: {}", mount.mount_point);
parity_disks.push(mount.mount_point);
}
}
}
}
}
parity_disks
}
/// Extract common mount point pattern from data members
fn extract_mount_pattern(data_members: &[String]) -> Option<String> {
if data_members.is_empty() {
return None;
}
// Find common prefix (e.g., "/mnt/disk" from "/mnt/disk1", "/mnt/disk2")
let first = &data_members[0];
if let Some(last_slash) = first.rfind('/') {
let base = &first[..last_slash + 1]; // Include the slash
// Check if all members share this base
if data_members.iter().all(|member| member.starts_with(base)) {
return Some(base.to_string());
}
}
None
}
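// Illustrative sketch (not part of the commits above): the returned pattern is the
// directory prefix up to and including the last '/', which heuristic 2 above then
// uses as a starts_with filter.
#[cfg(test)]
#[allow(dead_code)]
fn example_extract_mount_pattern() {
let members = vec!["/mnt/disk1".to_string(), "/mnt/disk2".to_string()];
assert_eq!(Self::extract_mount_pattern(&members), Some("/mnt/".to_string()));
}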
/// Calculate disk temperature status using hysteresis thresholds
fn calculate_temperature_status(&self, metric_name: &str, temperature: f32, status_tracker: &mut StatusTracker) -> Status {
status_tracker.calculate_with_hysteresis(metric_name, temperature, &self.temperature_thresholds)
}
-/// Get configured storage pools with individual drive information
/// Get storage pools using auto-discovered topology or fallback to configuration
fn get_configured_storage_pools(&self) -> Result<Vec<StoragePool>> {
if let Some(ref topology) = self.storage_topology {
self.get_auto_discovered_storage_pools(topology)
} else {
self.get_legacy_configured_storage_pools()
}
}
/// Get storage pools from auto-discovered topology
fn get_auto_discovered_storage_pools(&self, topology: &StorageTopology) -> Result<Vec<StoragePool>> {
let mut storage_pools = Vec::new();
// Group single disks by physical drive for unified pool display
let grouped_disks = self.group_filesystems_by_physical_drive(&topology.single_disks)?;
// Process grouped single disks (each physical drive becomes a pool)
for (drive_name, filesystems) in grouped_disks {
// Create a unified pool for this physical drive
let pool = self.create_physical_drive_pool(&drive_name, &filesystems)?;
storage_pools.push(pool);
}
// IMPORTANT: Do not create individual filesystem pools when using auto-discovery
// All single disk filesystems should be grouped into physical drive pools above
// Process mergerfs pools (these remain as logical pools)
for pool_info in &topology.mergerfs_pools {
if let Ok((total_bytes, used_bytes)) = self.get_filesystem_info(&pool_info.mount_point) {
let available_bytes = total_bytes - used_bytes;
let usage_percent = if total_bytes > 0 {
(used_bytes as f64 / total_bytes as f64) * 100.0
} else { 0.0 };
let size = self.bytes_to_human_readable(total_bytes);
let used = self.bytes_to_human_readable(used_bytes);
let available = self.bytes_to_human_readable(available_bytes);
// Collect all member and parity drives
let mut all_drives = Vec::new();
// Add data member drives
for member in &pool_info.data_members {
if let Some(devices) = self.detected_devices.get(member) {
all_drives.extend(devices.clone());
}
}
// Add parity drives
for parity in &pool_info.parity_disks {
if let Some(devices) = self.detected_devices.get(parity) {
all_drives.extend(devices.clone());
}
}
let underlying_drives = self.get_drive_info_for_devices(&all_drives)?;
// Calculate pool health
let pool_health = self.calculate_mergerfs_pool_health(&pool_info.data_members, &pool_info.parity_disks, &underlying_drives);
// Generate pool name from mount point
let name = pool_info.mount_point.trim_start_matches('/').replace('/', "_");
storage_pools.push(StoragePool {
name,
mount_point: pool_info.mount_point.clone(),
filesystem: "fuse.mergerfs".to_string(),
pool_type: StoragePoolType::MergerfsPool {
data_disks: pool_info.data_members.iter()
.filter_map(|member| self.detected_devices.get(member).and_then(|devices| devices.first().cloned()))
.collect(),
parity_disks: pool_info.parity_disks.iter()
.filter_map(|parity| self.detected_devices.get(parity).and_then(|devices| devices.first().cloned()))
.collect(),
},
size,
used,
available,
usage_percent: usage_percent as f32,
underlying_drives,
pool_health,
});
debug!("Auto-discovered mergerfs pool: {} with {} data + {} parity disks",
pool_info.mount_point, pool_info.data_members.len(), pool_info.parity_disks.len());
}
}
Ok(storage_pools)
}
/// Group filesystems by their backing physical drive
fn group_filesystems_by_physical_drive(&self, filesystems: &[MountInfo]) -> Result<std::collections::HashMap<String, Vec<MountInfo>>> {
let mut grouped = std::collections::HashMap::new();
for fs in filesystems {
// Get the physical drive name for this mount point
if let Some(devices) = self.detected_devices.get(&fs.mount_point) {
if let Some(device_name) = devices.first() {
// Extract drive name (e.g., "nvme0n1" from "nvme0n1")
let drive_name = device_name.clone();
grouped.entry(drive_name).or_insert_with(Vec::new).push(fs.clone());
}
}
}
Ok(grouped)
}
/// Create a physical drive pool containing multiple filesystems
fn create_physical_drive_pool(&self, drive_name: &str, filesystems: &[MountInfo]) -> Result<StoragePool> {
if filesystems.is_empty() {
return Err(anyhow::anyhow!("No filesystems for drive {}", drive_name));
}
// Calculate total usage across all filesystems on this drive
let mut total_capacity = 0u64;
let mut total_used = 0u64;
for fs in filesystems {
if let Ok((capacity, used)) = self.get_filesystem_info(&fs.mount_point) {
total_capacity += capacity;
total_used += used;
}
}
let total_available = total_capacity.saturating_sub(total_used);
let usage_percent = if total_capacity > 0 {
(total_used as f64 / total_capacity as f64) * 100.0
} else { 0.0 };
// Get drive information for SMART data
let device_names = vec![drive_name.to_string()];
let underlying_drives = self.get_drive_info_for_devices(&device_names)?;
// Collect filesystem mount points for this drive
let filesystem_mount_points: Vec<String> = filesystems.iter()
.map(|fs| fs.mount_point.clone())
.collect();
Ok(StoragePool {
name: drive_name.to_string(),
mount_point: format!("(physical drive)"), // Special marker for physical drives
filesystem: "physical".to_string(),
pool_type: StoragePoolType::PhysicalDrive {
filesystems: filesystem_mount_points,
},
size: self.bytes_to_human_readable(total_capacity),
used: self.bytes_to_human_readable(total_used),
available: self.bytes_to_human_readable(total_available),
usage_percent: usage_percent as f32,
pool_health: if underlying_drives.iter().all(|d| d.health_status == "PASSED") {
PoolHealth::Healthy
} else {
PoolHealth::Critical
},
underlying_drives,
})
}
/// Calculate pool health specifically for mergerfs pools
fn calculate_mergerfs_pool_health(&self, data_members: &[String], parity_disks: &[String], drives: &[DriveInfo]) -> PoolHealth {
// Get device names for data and parity drives
let mut data_device_names = Vec::new();
let mut parity_device_names = Vec::new();
for member in data_members {
if let Some(devices) = self.detected_devices.get(member) {
data_device_names.extend(devices.clone());
}
}
for parity in parity_disks {
if let Some(devices) = self.detected_devices.get(parity) {
parity_device_names.extend(devices.clone());
}
}
let failed_data = drives.iter()
.filter(|d| data_device_names.contains(&d.device) && d.health_status != "PASSED")
.count();
let failed_parity = drives.iter()
.filter(|d| parity_device_names.contains(&d.device) && d.health_status != "PASSED")
.count();
match (failed_data, failed_parity) {
(0, 0) => PoolHealth::Healthy,
(1, 0) => PoolHealth::Degraded, // Can recover with parity
(0, 1) => PoolHealth::Degraded, // Lost parity protection
_ => PoolHealth::Critical, // Multiple failures
}
}
/// Fallback to legacy configuration-based storage pools
fn get_legacy_configured_storage_pools(&self) -> Result<Vec<StoragePool>> {
let mut storage_pools = Vec::new();
let mut processed_pools = std::collections::HashSet::new();
// Legacy implementation: use filesystem configuration
for fs_config in &self.config.filesystems {
if !fs_config.monitor {
continue;
}
let (pool_type, skip_in_single_mode) = self.determine_pool_type(&fs_config.storage_type);
// Skip member disks if they're part of a pool
if skip_in_single_mode {
continue;
}
// Check if this pool was already processed (in case of multiple member disks)
let pool_key = match &pool_type {
StoragePoolType::MergerfsPool { .. } => {
// For mergerfs pools, use the main mount point
if fs_config.fs_type == "fuse.mergerfs" {
fs_config.mount_point.clone()
} else {
continue; // Skip member disks
}
}
_ => fs_config.mount_point.clone()
};
if processed_pools.contains(&pool_key) {
continue;
}
processed_pools.insert(pool_key.clone());
// Get filesystem stats for the mount point
match self.get_filesystem_info(&fs_config.mount_point) {
Ok((total_bytes, used_bytes)) => {
let available_bytes = total_bytes - used_bytes;
let usage_percent = if total_bytes > 0 {
(used_bytes as f64 / total_bytes as f64) * 100.0
-} else {
-0.0
-};
} else { 0.0 };
// Convert bytes to human-readable format
let size = self.bytes_to_human_readable(total_bytes);
let used = self.bytes_to_human_readable(used_bytes);
let available = self.bytes_to_human_readable(available_bytes);
-// Get individual drive information using pre-detected devices
-let device_names = self.detected_devices.get(&fs_config.mount_point).cloned().unwrap_or_default();
-let underlying_drives = self.get_drive_info_for_devices(&device_names)?;
// Get underlying drives based on pool type
let underlying_drives = self.get_pool_drives(&pool_type, &fs_config.mount_point)?;
// Calculate pool health
let pool_health = self.calculate_pool_health(&pool_type, &underlying_drives);
let drive_count = underlying_drives.len();
storage_pools.push(StoragePool {
name: fs_config.name.clone(),
mount_point: fs_config.mount_point.clone(),
filesystem: fs_config.fs_type.clone(),
-storage_type: fs_config.storage_type.clone(),
pool_type: pool_type.clone(),
size,
used,
available,
usage_percent: usage_percent as f32,
underlying_drives,
pool_health,
});
debug!(
-"Storage pool '{}' ({}) at {} with {} detected drives",
-fs_config.name, fs_config.storage_type, fs_config.mount_point, device_names.len()
"Legacy configured storage pool '{}' ({:?}) at {} with {} drives, health: {:?}",
fs_config.name, pool_type, fs_config.mount_point, drive_count, pool_health
);
}
Err(e) => {
@@ -129,6 +599,138 @@ impl DiskCollector {
Ok(storage_pools)
}
/// Determine the storage pool type from configuration
fn determine_pool_type(&self, storage_type: &str) -> (StoragePoolType, bool) {
match storage_type {
"single" => (StoragePoolType::Single, false),
"mergerfs_pool" | "mergerfs" => {
// Find associated member disks
let data_disks = self.find_pool_member_disks("mergerfs_member");
let parity_disks = self.find_pool_member_disks("parity");
(StoragePoolType::MergerfsPool { data_disks, parity_disks }, false)
}
"mergerfs_member" => (StoragePoolType::Single, true), // Skip, part of pool
"parity" => (StoragePoolType::Single, true), // Skip, part of pool
"raid1" | "raid5" | "raid6" => {
let member_disks = self.find_pool_member_disks(&format!("{}_member", storage_type));
(StoragePoolType::RaidArray {
level: storage_type.to_uppercase(),
member_disks,
spare_disks: Vec::new()
}, false)
}
_ => (StoragePoolType::Single, false) // Default to single
}
}
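// Example mapping (illustration, not from the commits above): a filesystem entry
// configured as "mergerfs" becomes a MergerfsPool whose members are gathered from
// entries marked "mergerfs_member" and "parity"; those member entries themselves
// return (Single, true) and are therefore skipped as standalone pools.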
/// Find member disks for a specific storage type
fn find_pool_member_disks(&self, member_type: &str) -> Vec<String> {
let mut member_disks = Vec::new();
for fs_config in &self.config.filesystems {
if fs_config.storage_type == member_type && fs_config.monitor {
// Get device names for this mount point
if let Some(devices) = self.detected_devices.get(&fs_config.mount_point) {
member_disks.extend(devices.clone());
}
}
}
member_disks
}
/// Get drive information for a specific pool type
fn get_pool_drives(&self, pool_type: &StoragePoolType, mount_point: &str) -> Result<Vec<DriveInfo>> {
match pool_type {
StoragePoolType::Single => {
// Single disk - use detected devices for this mount point
let device_names = self.detected_devices.get(mount_point).cloned().unwrap_or_default();
self.get_drive_info_for_devices(&device_names)
}
StoragePoolType::PhysicalDrive { .. } => {
// Physical drive - get drive info for the drive directly (mount_point not used)
let device_names = vec![mount_point.to_string()];
self.get_drive_info_for_devices(&device_names)
}
StoragePoolType::MergerfsPool { data_disks, parity_disks } => {
// Mergerfs pool - collect all member drives
let mut all_disks = data_disks.clone();
all_disks.extend(parity_disks.clone());
self.get_drive_info_for_devices(&all_disks)
}
StoragePoolType::RaidArray { member_disks, spare_disks, .. } => {
// RAID array - collect member and spare drives
let mut all_disks = member_disks.clone();
all_disks.extend(spare_disks.clone());
self.get_drive_info_for_devices(&all_disks)
}
StoragePoolType::ZfsPool { .. } => {
// ZFS pool - use detected devices (future implementation)
let device_names = self.detected_devices.get(mount_point).cloned().unwrap_or_default();
self.get_drive_info_for_devices(&device_names)
}
}
}
/// Calculate pool health based on drive status and pool type
fn calculate_pool_health(&self, pool_type: &StoragePoolType, drives: &[DriveInfo]) -> PoolHealth {
match pool_type {
StoragePoolType::Single => {
// Single disk - health is just the drive health
if drives.is_empty() {
PoolHealth::Unknown
} else if drives.iter().all(|d| d.health_status == "PASSED") {
PoolHealth::Healthy
} else {
PoolHealth::Critical
}
}
StoragePoolType::PhysicalDrive { .. } => {
// Physical drive - health is just the drive health (similar to Single)
if drives.is_empty() {
PoolHealth::Unknown
} else if drives.iter().all(|d| d.health_status == "PASSED") {
PoolHealth::Healthy
} else {
PoolHealth::Critical
}
}
StoragePoolType::MergerfsPool { data_disks, parity_disks } => {
let failed_data = drives.iter()
.filter(|d| data_disks.contains(&d.device) && d.health_status != "PASSED")
.count();
let failed_parity = drives.iter()
.filter(|d| parity_disks.contains(&d.device) && d.health_status != "PASSED")
.count();
match (failed_data, failed_parity) {
(0, 0) => PoolHealth::Healthy,
(1, 0) => PoolHealth::Degraded, // Can recover with parity
(0, 1) => PoolHealth::Degraded, // Lost parity protection
_ => PoolHealth::Critical, // Multiple failures
}
}
StoragePoolType::RaidArray { level, .. } => {
let failed_drives = drives.iter().filter(|d| d.health_status != "PASSED").count();
// Basic RAID health logic (can be enhanced per RAID level)
match failed_drives {
0 => PoolHealth::Healthy,
1 if level.contains('1') || level.contains('5') || level.contains('6') => PoolHealth::Degraded,
_ => PoolHealth::Critical,
}
}
StoragePoolType::ZfsPool { .. } => {
// ZFS health would require zpool status parsing (future)
if drives.iter().all(|d| d.health_status == "PASSED") {
PoolHealth::Healthy
} else {
PoolHealth::Degraded
}
}
}
}
/// Get drive information for a list of device names
fn get_drive_info_for_devices(&self, device_names: &[String]) -> Result<Vec<DriveInfo>> {
let mut drives = Vec::new();
@@ -288,6 +890,11 @@ impl DiskCollector {
}
}
/// Convert bytes to gigabytes
fn bytes_to_gb(&self, bytes: u64) -> f32 {
bytes as f32 / (1024.0 * 1024.0 * 1024.0)
}
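// Note (illustration): this divides by 1024^3, so strictly it reports GiB;
// e.g. 1_073_741_824 bytes -> 1.0.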
/// Detect device backing a mount point using lsblk (static version for startup)
fn detect_device_for_mount_point_static(mount_point: &str) -> Result<Vec<String>> {
let output = Command::new("lsblk")
@@ -448,8 +1055,8 @@ impl Collector for DiskCollector {
let used_gb = self.parse_size_to_gb(&storage_pool.used);
let avail_gb = self.parse_size_to_gb(&storage_pool.available);
-// Calculate status based on configured thresholds
// Calculate status based on configured thresholds and pool health
-let pool_status = if storage_pool.usage_percent >= self.config.usage_critical_percent {
let usage_status = if storage_pool.usage_percent >= self.config.usage_critical_percent {
Status::Critical
} else if storage_pool.usage_percent >= self.config.usage_warning_percent {
Status::Warning
@@ -457,6 +1064,14 @@ impl Collector for DiskCollector {
Status::Ok
};
let pool_status = match storage_pool.pool_health {
PoolHealth::Critical => Status::Critical,
PoolHealth::Degraded => Status::Warning,
PoolHealth::Rebuilding => Status::Warning,
PoolHealth::Healthy => usage_status,
PoolHealth::Unknown => Status::Unknown,
};
// Storage pool info metrics
metrics.push(Metric {
name: format!("disk_{}_mount_point", pool_name),
@@ -476,15 +1091,50 @@ impl Collector for DiskCollector {
timestamp,
});
// Enhanced pool type information
let pool_type_str = match &storage_pool.pool_type {
StoragePoolType::Single => "single".to_string(),
StoragePoolType::PhysicalDrive { filesystems } => {
format!("drive ({})", filesystems.len())
}
StoragePoolType::MergerfsPool { data_disks, parity_disks } => {
format!("mergerfs ({}+{})", data_disks.len(), parity_disks.len())
}
StoragePoolType::RaidArray { level, member_disks, spare_disks } => {
format!("{} ({}+{})", level, member_disks.len(), spare_disks.len())
}
StoragePoolType::ZfsPool { pool_name, .. } => {
format!("zfs ({})", pool_name)
}
};
metrics.push(Metric {
-name: format!("disk_{}_storage_type", pool_name),
name: format!("disk_{}_pool_type", pool_name),
-value: MetricValue::String(storage_pool.storage_type.clone()),
value: MetricValue::String(pool_type_str.clone()),
unit: None,
-description: Some(format!("Type: {}", storage_pool.storage_type)),
description: Some(format!("Type: {}", pool_type_str)),
status: Status::Ok,
timestamp,
});
// Pool health status
let health_str = match storage_pool.pool_health {
PoolHealth::Healthy => "healthy",
PoolHealth::Degraded => "degraded",
PoolHealth::Critical => "critical",
PoolHealth::Rebuilding => "rebuilding",
PoolHealth::Unknown => "unknown",
};
metrics.push(Metric {
name: format!("disk_{}_pool_health", pool_name),
value: MetricValue::String(health_str.to_string()),
unit: None,
description: Some(format!("Health: {}", health_str)),
status: pool_status,
timestamp,
});
// Storage pool size metrics
metrics.push(Metric {
name: format!("disk_{}_total_gb", pool_name),
@@ -570,6 +1220,79 @@ impl Collector for DiskCollector {
});
}
}
// Individual filesystem metrics for PhysicalDrive pools
if let StoragePoolType::PhysicalDrive { filesystems } = &storage_pool.pool_type {
for filesystem_mount in filesystems {
if let Ok((total_bytes, used_bytes)) = self.get_filesystem_info(filesystem_mount) {
let available_bytes = total_bytes - used_bytes;
let usage_percent = if total_bytes > 0 {
(used_bytes as f64 / total_bytes as f64) * 100.0
} else { 0.0 };
let filesystem_name = if filesystem_mount == "/" {
"root".to_string()
} else {
filesystem_mount.trim_start_matches('/').replace('/', "_")
};
// Calculate filesystem status based on usage
let fs_status = if usage_percent >= self.config.usage_critical_percent as f64 {
Status::Critical
} else if usage_percent >= self.config.usage_warning_percent as f64 {
Status::Warning
} else {
Status::Ok
};
// Filesystem usage metrics
metrics.push(Metric {
name: format!("disk_{}_fs_{}_usage_percent", pool_name, filesystem_name),
value: MetricValue::Float(usage_percent as f32),
unit: Some("%".to_string()),
description: Some(format!("{}: {:.0}%", filesystem_mount, usage_percent)),
status: fs_status.clone(),
timestamp,
});
metrics.push(Metric {
name: format!("disk_{}_fs_{}_used_gb", pool_name, filesystem_name),
value: MetricValue::Float(self.bytes_to_gb(used_bytes)),
unit: Some("GB".to_string()),
description: Some(format!("{}: {}GB used", filesystem_mount, self.bytes_to_human_readable(used_bytes))),
status: Status::Ok,
timestamp,
});
metrics.push(Metric {
name: format!("disk_{}_fs_{}_total_gb", pool_name, filesystem_name),
value: MetricValue::Float(self.bytes_to_gb(total_bytes)),
unit: Some("GB".to_string()),
description: Some(format!("{}: {}GB total", filesystem_mount, self.bytes_to_human_readable(total_bytes))),
status: Status::Ok,
timestamp,
});
metrics.push(Metric {
name: format!("disk_{}_fs_{}_available_gb", pool_name, filesystem_name),
value: MetricValue::Float(self.bytes_to_gb(available_bytes)),
unit: Some("GB".to_string()),
description: Some(format!("{}: {}GB available", filesystem_mount, self.bytes_to_human_readable(available_bytes))),
status: Status::Ok,
timestamp,
});
metrics.push(Metric {
name: format!("disk_{}_fs_{}_mount_point", pool_name, filesystem_name),
value: MetricValue::String(filesystem_mount.clone()),
unit: None,
description: Some(format!("Mount: {}", filesystem_mount)),
status: Status::Ok,
timestamp,
});
}
}
}
}
// Add storage pool count metric
@@ -592,5 +1315,4 @@ impl Collector for DiskCollector {
Ok(metrics)
}
}


@@ -8,7 +8,6 @@ use tracing::debug;
use super::{Collector, CollectorError};
use crate::config::SystemdConfig;
-use crate::service_tracker::UserStoppedServiceTracker;
/// Systemd collector for monitoring systemd services
pub struct SystemdCollector {
@@ -357,33 +356,15 @@ impl SystemdCollector {
/// Calculate service status, taking user-stopped services into account
fn calculate_service_status(&self, service_name: &str, active_status: &str) -> Status {
match active_status.to_lowercase().as_str() {
-"active" => {
-// If service is now active and was marked as user-stopped, clear the flag
-if UserStoppedServiceTracker::is_service_user_stopped(service_name) {
-debug!("Service '{}' is now active - clearing user-stopped flag", service_name);
-// Note: We can't directly clear here because this is a read-only context
-// The agent will need to handle this differently
-}
-Status::Ok
-},
"active" => Status::Ok,
"inactive" | "dead" => {
-// Check if this service was stopped by user action
-if UserStoppedServiceTracker::is_service_user_stopped(service_name) {
-debug!("Service '{}' is inactive but marked as user-stopped - treating as OK", service_name);
-Status::Ok
-} else {
-Status::Warning
-}
debug!("Service '{}' is inactive - treating as Inactive status", service_name);
Status::Inactive
},
"failed" | "error" => Status::Critical,
"activating" | "deactivating" | "reloading" | "start" | "stop" | "restart" => {
-// For user-stopped services that are transitioning, keep them as OK during transition
-if UserStoppedServiceTracker::is_service_user_stopped(service_name) {
-debug!("Service '{}' is transitioning but was user-stopped - treating as OK", service_name);
-Status::Ok
-} else {
-Status::Pending
-}
debug!("Service '{}' is transitioning - treating as Pending", service_name);
Status::Pending
},
_ => Status::Unknown,
}


@@ -1,6 +1,6 @@
[package]
name = "cm-dashboard"
-version = "0.1.76"
version = "0.1.103"
edition = "2021"
[dependencies]


@@ -55,8 +55,8 @@ pub struct SystemConfig {
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SshConfig {
pub rebuild_user: String,
-pub rebuild_alias: String,
-pub backup_alias: String,
pub rebuild_cmd: String,
pub service_manage_cmd: String,
}
/// Service log file configuration per host


@@ -34,10 +34,6 @@ pub struct HostWidgets {
pub services_widget: ServicesWidget,
/// Backup widget state
pub backup_widget: BackupWidget,
-/// Scroll offsets for each panel
-pub system_scroll_offset: usize,
-pub services_scroll_offset: usize,
-pub backup_scroll_offset: usize,
/// Last update time for this host
pub last_update: Option<Instant>,
}
@@ -48,9 +44,6 @@ impl HostWidgets {
system_widget: SystemWidget::new(),
services_widget: ServicesWidget::new(),
backup_widget: BackupWidget::new(),
-system_scroll_offset: 0,
-services_scroll_offset: 0,
-backup_scroll_offset: 0,
last_update: None,
}
}
@@ -227,12 +220,12 @@ impl TuiApp {
let connection_ip = self.get_connection_ip(&hostname);
// Create command that shows logo, rebuilds, and waits for user input
let logo_and_rebuild = format!(
-"bash -c 'cat << \"EOF\"\nNixOS System Rebuild\nTarget: {} ({})\n\nEOF\nssh -tt {}@{} \"bash -ic {}\"\necho\necho \"========================================\"\necho \"Rebuild completed. Press any key to close...\"\necho \"========================================\"\nread -n 1 -s\nexit'",
"echo 'Rebuilding system: {} ({})' && ssh -tt {}@{} \"bash -ic '{}'\"",
hostname,
connection_ip,
self.config.ssh.rebuild_user,
connection_ip,
-self.config.ssh.rebuild_alias
self.config.ssh.rebuild_cmd
);
std::process::Command::new("tmux")
@@ -251,12 +244,12 @@ impl TuiApp {
let connection_ip = self.get_connection_ip(&hostname);
// Create command that shows logo, runs backup, and waits for user input
let logo_and_backup = format!(
-"bash -c 'cat << \"EOF\"\nBackup Operation\nTarget: {} ({})\n\nEOF\nssh -tt {}@{} \"bash -ic {}\"\necho\necho \"========================================\"\necho \"Backup completed. Press any key to close...\"\necho \"========================================\"\nread -n 1 -s\nexit'",
"echo 'Running backup: {} ({})' && ssh -tt {}@{} \"bash -ic '{}'\"",
hostname,
connection_ip,
self.config.ssh.rebuild_user,
connection_ip,
-self.config.ssh.backup_alias
format!("{} start borgbackup", self.config.ssh.service_manage_cmd)
);
std::process::Command::new("tmux")
@@ -274,15 +267,12 @@ impl TuiApp {
if let (Some(service_name), Some(hostname)) = (self.get_selected_service(), self.current_host.clone()) {
let connection_ip = self.get_connection_ip(&hostname);
let service_start_command = format!(
-"bash -c 'cat << \"EOF\"\nService Start: {}.service\nTarget: {} ({})\n\nEOF\nssh -tt {}@{} \"echo \\\"Starting service...\\\" && sudo systemctl start {}.service && echo \\\"Following logs until service is active...\\\" && echo \\\"========================================\\\" && {{ sudo journalctl -u {}.service -f --no-pager -n 10 & JOURNAL_PID=\\$!; while true; do if sudo systemctl is-active {}.service --quiet; then echo; echo \\\"========================================\\\"; echo \\\"Service is now active!\\\"; kill \\$JOURNAL_PID 2>/dev/null; break; fi; sleep 1; done; wait \\$JOURNAL_PID 2>/dev/null; }} && sudo systemctl status {}.service --no-pager -l\"\necho\necho \"========================================\"\necho \"Operation completed. Press any key to close...\"\necho \"========================================\"\nread -n 1 -s\nexit'",
"echo 'Starting service: {} on {}' && ssh -tt {}@{} \"bash -ic '{} start {}'\"",
service_name,
hostname,
-connection_ip,
self.config.ssh.rebuild_user,
connection_ip,
-service_name,
self.config.ssh.service_manage_cmd,
-service_name,
-service_name,
service_name
);
@@ -301,14 +291,12 @@ impl TuiApp {
if let (Some(service_name), Some(hostname)) = (self.get_selected_service(), self.current_host.clone()) {
let connection_ip = self.get_connection_ip(&hostname);
let service_stop_command = format!(
-"bash -c 'cat << \"EOF\"\nService Stop: {}.service\nTarget: {} ({})\n\nEOF\nssh -tt {}@{} \"echo \\\"Stopping service...\\\" && sudo systemctl stop {}.service && echo \\\"Service stopped! Final logs:\\\" && echo \\\"========================================\\\" && sudo journalctl -u {}.service --no-pager -n 10 && echo \\\"========================================\\\" && sudo systemctl status {}.service --no-pager -l\"\necho\necho \"========================================\"\necho \"Operation completed. Press any key to close...\"\necho \"========================================\"\nread -n 1 -s\nexit'",
"echo 'Stopping service: {} on {}' && ssh -tt {}@{} \"bash -ic '{} stop {}'\"",
service_name,
hostname,
-connection_ip,
self.config.ssh.rebuild_user,
connection_ip,
-service_name,
self.config.ssh.service_manage_cmd,
-service_name,
service_name
);
@@ -322,14 +310,15 @@ impl TuiApp {
.ok(); // Ignore errors, tmux will handle them
}
}
-KeyCode::Char('J') => {
-// Show service logs via journalctl in tmux split window
KeyCode::Char('L') => {
// Show service logs via service-manage script in tmux split window
if let (Some(service_name), Some(hostname)) = (self.get_selected_service(), self.current_host.clone()) {
let connection_ip = self.get_connection_ip(&hostname);
-let journalctl_command = format!(
-"bash -c \"ssh -tt {}@{} 'sudo journalctl -u {}.service -f --no-pager -n 50'; exit\"",
let logs_command = format!(
"ssh -tt {}@{} '{} logs {}'",
self.config.ssh.rebuild_user,
connection_ip,
self.config.ssh.service_manage_cmd,
service_name
);
@@ -338,37 +327,11 @@ impl TuiApp {
.arg("-v") .arg("-v")
.arg("-p") .arg("-p")
.arg("30") .arg("30")
.arg(&journalctl_command) .arg(&logs_command)
.spawn() .spawn()
.ok(); // Ignore errors, tmux will handle them .ok(); // Ignore errors, tmux will handle them
} }
} }
KeyCode::Char('L') => {
// Show custom service log file in tmux split window
if let (Some(service_name), Some(hostname)) = (self.get_selected_service(), self.current_host.clone()) {
// Check if this service has a custom log file configured
if let Some(host_logs) = self.config.service_logs.get(&hostname) {
if let Some(log_config) = host_logs.iter().find(|config| config.service_name == service_name) {
let connection_ip = self.get_connection_ip(&hostname);
let tail_command = format!(
"bash -c \"ssh -tt {}@{} 'sudo tail -n 50 -f {}'; exit\"",
self.config.ssh.rebuild_user,
connection_ip,
log_config.log_file_path
);
std::process::Command::new("tmux")
.arg("split-window")
.arg("-v")
.arg("-p")
.arg("30")
.arg(&tail_command)
.spawn()
.ok(); // Ignore errors, tmux will handle them
}
}
}
}
KeyCode::Char('w') => {
// Wake on LAN for offline hosts
if let Some(hostname) = self.current_host.clone() {
@@ -401,7 +364,8 @@ impl TuiApp {
if let Some(hostname) = self.current_host.clone() {
let connection_ip = self.get_connection_ip(&hostname);
let ssh_command = format!(
-"ssh -tt {}@{}",
"echo 'Opening SSH terminal to: {}' && ssh -tt {}@{}",
hostname,
self.config.ssh.rebuild_user,
connection_ip
);
@@ -584,14 +548,10 @@ impl TuiApp {
// Render services widget for current host
if let Some(hostname) = self.current_host.clone() {
let is_focused = true; // Always show service selection
-let scroll_offset = {
-let host_widgets = self.get_or_create_host_widgets(&hostname);
-host_widgets.services_scroll_offset
-};
let host_widgets = self.get_or_create_host_widgets(&hostname);
host_widgets
.services_widget
-.render(frame, content_chunks[1], is_focused, scroll_offset); // Services takes full right side
.render(frame, content_chunks[1], is_focused); // Services takes full right side
}
// Render statusbar at the bottom
@@ -629,12 +589,13 @@ impl TuiApp {
// Split the title bar into left and right sections
let chunks = Layout::default()
.direction(Direction::Horizontal)
-.constraints([Constraint::Length(15), Constraint::Min(0)])
.constraints([Constraint::Length(22), Constraint::Min(0)])
.split(area);
-// Left side: "cm-dashboard" text
// Left side: "cm-dashboard" text with version
let title_text = format!(" cm-dashboard v{}", env!("CARGO_PKG_VERSION"));
let left_span = Span::styled(
-" cm-dashboard",
&title_text,
Style::default().fg(Theme::background()).bg(background_color).add_modifier(Modifier::BOLD)
);
let left_title = Paragraph::new(Line::from(vec![left_span]))
@@ -706,34 +667,27 @@ impl TuiApp {
return host_summary_metric.status;
}
-// Fallback to old aggregation logic with proper Pending handling
// Rewritten status aggregation - only Critical, Warning, or OK for top bar
let mut has_critical = false;
let mut has_warning = false;
-let mut has_pending = false;
-let mut ok_count = 0;
for metric in &metrics {
match metric.status {
Status::Critical => has_critical = true,
Status::Warning => has_warning = true,
-Status::Pending => has_pending = true,
-Status::Ok => ok_count += 1,
-Status::Unknown => {}, // Ignore unknown for aggregation
-Status::Offline => {}, // Ignore offline for aggregation
// Treat all other statuses as OK for top bar aggregation
Status::Ok | Status::Pending | Status::Inactive | Status::Unknown => {},
Status::Offline => {}, // Ignore offline
}
}
-// Priority order: Critical > Warning > Pending > Ok > Unknown
// Only return Critical, Warning, or OK - no other statuses
if has_critical {
Status::Critical
} else if has_warning {
Status::Warning
-} else if has_pending {
-Status::Pending
-} else if ok_count > 0 {
-Status::Ok
} else {
-Status::Unknown
Status::Ok
}
}
@@ -757,9 +711,10 @@ impl TuiApp {
shortcuts.push("Tab: Host".to_string()); shortcuts.push("Tab: Host".to_string());
shortcuts.push("↑↓/jk: Select".to_string()); shortcuts.push("↑↓/jk: Select".to_string());
shortcuts.push("r: Rebuild".to_string()); shortcuts.push("r: Rebuild".to_string());
shortcuts.push("B: Backup".to_string());
shortcuts.push("s/S: Start/Stop".to_string()); shortcuts.push("s/S: Start/Stop".to_string());
shortcuts.push("J: Logs".to_string()); shortcuts.push("L: Logs".to_string());
shortcuts.push("L: Custom".to_string()); shortcuts.push("t: Terminal".to_string());
shortcuts.push("w: Wake".to_string()); shortcuts.push("w: Wake".to_string());
// Always show quit // Always show quit
@@ -774,14 +729,10 @@ impl TuiApp {
frame.render_widget(system_block, area);
// Get current host widgets, create if none exist
if let Some(hostname) = self.current_host.clone() {
-let scroll_offset = {
-let host_widgets = self.get_or_create_host_widgets(&hostname);
-host_widgets.system_scroll_offset
-};
// Clone the config to avoid borrowing issues
let config = self.config.clone();
let host_widgets = self.get_or_create_host_widgets(&hostname);
-host_widgets.system_widget.render_with_scroll(frame, inner_area, scroll_offset, &hostname, Some(&config));
host_widgets.system_widget.render(frame, inner_area, &hostname, Some(&config));
}
}
@@ -792,12 +743,8 @@ impl TuiApp {
// Get current host widgets for backup widget
if let Some(hostname) = self.current_host.clone() {
-let scroll_offset = {
-let host_widgets = self.get_or_create_host_widgets(&hostname);
-host_widgets.backup_scroll_offset
-};
let host_widgets = self.get_or_create_host_widgets(&hostname);
-host_widgets.backup_widget.render_with_scroll(frame, inner_area, scroll_offset);
host_widgets.backup_widget.render(frame, inner_area);
}
}


@@ -143,6 +143,7 @@ impl Theme {
pub fn status_color(status: Status) -> Color {
match status {
Status::Ok => Self::success(),
Status::Inactive => Self::muted_text(), // Gray for inactive services in service list
Status::Pending => Self::highlight(), // Blue for pending
Status::Warning => Self::warning(),
Status::Critical => Self::error(),
@@ -243,6 +244,7 @@ impl StatusIcons {
pub fn get_icon(status: Status) -> &'static str {
match status {
Status::Ok => "",
Status::Inactive => "", // Empty circle for inactive services
Status::Pending => "", // Hollow circle for pending
Status::Warning => "",
Status::Critical => "!",
@@ -256,6 +258,7 @@ impl StatusIcons {
let icon = Self::get_icon(status);
let status_color = match status {
Status::Ok => Theme::success(), // Green
Status::Inactive => Theme::muted_text(), // Gray for inactive services
Status::Pending => Theme::highlight(), // Blue
Status::Warning => Theme::warning(), // Yellow
Status::Critical => Theme::error(), // Red


@@ -30,6 +30,8 @@ pub struct BackupWidget {
backup_disk_product_name: Option<String>,
/// Backup disk serial number from SMART data
backup_disk_serial_number: Option<String>,
/// Backup disk wear percentage from SMART data
backup_disk_wear_percent: Option<f32>,
/// Backup disk filesystem label
backup_disk_filesystem_label: Option<String>,
/// Number of completed services
@@ -65,6 +67,7 @@ impl BackupWidget {
backup_disk_used_gb: None,
backup_disk_product_name: None,
backup_disk_serial_number: None,
backup_disk_wear_percent: None,
backup_disk_filesystem_label: None,
services_completed_count: None,
services_failed_count: None,
@@ -197,6 +200,9 @@ impl Widget for BackupWidget {
"backup_disk_serial_number" => { "backup_disk_serial_number" => {
self.backup_disk_serial_number = Some(metric.value.as_string()); self.backup_disk_serial_number = Some(metric.value.as_string());
} }
"backup_disk_wear_percent" => {
self.backup_disk_wear_percent = metric.value.as_f32();
}
"backup_disk_filesystem_label" => { "backup_disk_filesystem_label" => {
self.backup_disk_filesystem_label = Some(metric.value.as_string()); self.backup_disk_filesystem_label = Some(metric.value.as_string());
} }
@@ -285,8 +291,8 @@ impl Widget for BackupWidget {
}
impl BackupWidget {
-/// Render with scroll offset support
-pub fn render_with_scroll(&mut self, frame: &mut Frame, area: Rect, scroll_offset: usize) {
/// Render backup widget
pub fn render(&mut self, frame: &mut Frame, area: Rect) {
let mut lines = Vec::new();
// Latest backup section
@@ -328,21 +334,31 @@ impl BackupWidget {
);
lines.push(ratatui::text::Line::from(disk_spans));
-// Serial number as sub-item
// Collect sub-items to determine tree structure
let mut sub_items = Vec::new();
if let Some(serial) = &self.backup_disk_serial_number {
-lines.push(ratatui::text::Line::from(vec![
-ratatui::text::Span::styled(" ├─ ", Typography::tree()),
-ratatui::text::Span::styled(format!("S/N: {}", serial), Typography::secondary())
-]));
sub_items.push(format!("S/N: {}", serial));
}
-// Usage as sub-item
if let Some(wear) = self.backup_disk_wear_percent {
sub_items.push(format!("Wear: {:.0}%", wear));
}
if let (Some(used), Some(total)) = (self.backup_disk_used_gb, self.backup_disk_total_gb) {
let used_str = Self::format_size_with_proper_units(used);
let total_str = Self::format_size_with_proper_units(total);
-lines.push(ratatui::text::Line::from(vec![
-ratatui::text::Span::styled(" └─ ", Typography::tree()),
-ratatui::text::Span::styled(format!("Usage: {}/{}", used_str, total_str), Typography::secondary())
-]));
sub_items.push(format!("Usage: {}/{}", used_str, total_str));
}
// Render sub-items with proper tree structure
let num_items = sub_items.len();
for (i, item) in sub_items.into_iter().enumerate() {
let is_last = i == num_items - 1;
let tree_char = if is_last { " └─ " } else { " ├─ " };
lines.push(ratatui::text::Line::from(vec![
ratatui::text::Span::styled(tree_char, Typography::tree()),
ratatui::text::Span::styled(item, Typography::secondary())
]));
}
}
@@ -366,42 +382,20 @@ impl BackupWidget {
let total_lines = lines.len();
let available_height = area.height as usize;
-// Calculate scroll boundaries
-let max_scroll = if total_lines > available_height {
-total_lines - available_height
-} else {
-total_lines.saturating_sub(1)
-};
-let effective_scroll = scroll_offset.min(max_scroll);
-// Apply scrolling if needed
-if scroll_offset > 0 || total_lines > available_height {
// Show only what fits, with "X more below" if needed
if total_lines > available_height {
let lines_for_content = available_height.saturating_sub(1); // Reserve one line for "more below"
let mut visible_lines: Vec<_> = lines
.into_iter()
-.skip(effective_scroll)
-.take(available_height)
.take(lines_for_content)
.collect();
-// Add scroll indicator if there are hidden lines
-if total_lines > available_height {
-let hidden_above = effective_scroll;
-let hidden_below = total_lines.saturating_sub(effective_scroll + available_height);
-if (hidden_above > 0 || hidden_below > 0) && !visible_lines.is_empty() {
-let scroll_text = if hidden_above > 0 && hidden_below > 0 {
-format!("... {} above, {} below", hidden_above, hidden_below)
-} else if hidden_above > 0 {
-format!("... {} more above", hidden_above)
-} else {
-format!("... {} more below", hidden_below)
-};
-// Replace last line with scroll indicator
-visible_lines.pop();
-visible_lines.push(ratatui::text::Line::from(vec![
-ratatui::text::Span::styled(scroll_text, Typography::muted())
-]));
-}
let hidden_below = total_lines.saturating_sub(lines_for_content);
if hidden_below > 0 {
let more_line = ratatui::text::Line::from(vec![
ratatui::text::Span::styled(format!("... {} more below", hidden_below), Typography::muted())
]);
visible_lines.push(more_line);
}
}
let paragraph = Paragraph::new(ratatui::text::Text::from(visible_lines));


@@ -144,6 +144,7 @@ impl ServicesWidget {
let icon = StatusIcons::get_icon(info.widget_status);
let status_color = match info.widget_status {
Status::Ok => Theme::success(),
Status::Inactive => Theme::muted_text(),
Status::Pending => Theme::highlight(),
Status::Warning => Theme::warning(),
Status::Critical => Theme::error(),
@@ -208,36 +209,13 @@ impl ServicesWidget {
}
/// Get currently selected service name (for actions)
/// Only returns parent service names since only parent services can be selected
pub fn get_selected_service(&self) -> Option<String> {
-// Build the same display list to find the selected service
-let mut display_lines: Vec<(String, Status, bool, Option<(ServiceInfo, bool)>, String)> = Vec::new();
// Only parent services can be selected, so just get the parent service at selected_index
let mut parent_services: Vec<_> = self.parent_services.iter().collect();
parent_services.sort_by(|(a, _), (b, _)| a.cmp(b));
-for (parent_name, parent_info) in parent_services {
-let parent_line = self.format_parent_service_line(parent_name, parent_info);
-display_lines.push((parent_line, parent_info.widget_status, false, None, parent_name.clone()));
-if let Some(sub_list) = self.sub_services.get(parent_name) {
-let mut sorted_subs = sub_list.clone();
-sorted_subs.sort_by(|(a, _), (b, _)| a.cmp(b));
-for (i, (sub_name, sub_info)) in sorted_subs.iter().enumerate() {
-let is_last_sub = i == sorted_subs.len() - 1;
-let full_sub_name = format!("{}_{}", parent_name, sub_name);
-display_lines.push((
-sub_name.clone(),
-sub_info.widget_status,
-true,
-Some((sub_info.clone(), is_last_sub)),
-full_sub_name,
-));
-}
-}
-}
-display_lines.get(self.selected_index).map(|(_, _, _, _, raw_name)| raw_name.clone())
parent_services.get(self.selected_index).map(|(name, _)| name.to_string())
}
/// Get total count of selectable services (parent services only, not sub-services)
@@ -400,8 +378,8 @@ impl Widget for ServicesWidget {
impl ServicesWidget {
-/// Render with focus and scroll
/// Render with focus
-pub fn render(&mut self, frame: &mut Frame, area: Rect, is_focused: bool, scroll_offset: usize) {
pub fn render(&mut self, frame: &mut Frame, area: Rect, is_focused: bool) {
let services_block = Components::widget_block("services");
let inner_area = services_block.inner(area);
frame.render_widget(services_block, area);
@@ -427,11 +405,11 @@ impl ServicesWidget {
}
// Render the services list
-self.render_services(frame, content_chunks[1], is_focused, scroll_offset);
self.render_services(frame, content_chunks[1], is_focused);
}
/// Render services list
-fn render_services(&mut self, frame: &mut Frame, area: Rect, is_focused: bool, scroll_offset: usize) {
fn render_services(&mut self, frame: &mut Frame, area: Rect, is_focused: bool) {
// Build hierarchical service list for display
let mut display_lines: Vec<(String, Status, bool, Option<(ServiceInfo, bool)>)> = Vec::new();
@@ -463,36 +441,37 @@ impl ServicesWidget {
}
}
-// Apply scroll offset and render visible lines (same as existing logic)
// Show only what fits, with "X more below" if needed
let available_lines = area.height as usize;
let total_lines = display_lines.len();
-// Calculate scroll boundaries
-let max_scroll = if total_lines > available_lines {
-total_lines - available_lines
-} else {
-total_lines.saturating_sub(1)
-};
-let effective_scroll = scroll_offset.min(max_scroll);
-// Get visible lines after scrolling
// Reserve one line for "X more below" if needed
let lines_for_content = if total_lines > available_lines {
available_lines.saturating_sub(1)
} else {
available_lines
};
let visible_lines: Vec<_> = display_lines
.iter()
-.skip(effective_scroll)
-.take(available_lines)
.take(lines_for_content)
.collect();
let hidden_below = total_lines.saturating_sub(lines_for_content);
let lines_to_show = visible_lines.len();
if lines_to_show > 0 {
// Add space for "X more below" message if needed
let total_chunks_needed = if hidden_below > 0 { lines_to_show + 1 } else { lines_to_show };
let service_chunks = Layout::default()
.direction(Direction::Vertical)
-.constraints(vec![Constraint::Length(1); lines_to_show])
.constraints(vec![Constraint::Length(1); total_chunks_needed])
.split(area);
for (i, (line_text, line_status, is_sub, sub_info)) in visible_lines.iter().enumerate()
{
-let actual_index = effective_scroll + i; // Real index in the full list
let actual_index = i; // Simple index since we're not scrolling
// Only parent services can be selected - calculate parent service index
let is_selected = if !*is_sub {
@@ -534,33 +513,12 @@ impl ServicesWidget {
frame.render_widget(service_para, service_chunks[i]);
}
-}
-// Show scroll indicator if there are more services than we can display (same as existing)
-if total_lines > available_lines {
-let hidden_above = effective_scroll;
-let hidden_below = total_lines.saturating_sub(effective_scroll + available_lines);
-if hidden_above > 0 || hidden_below > 0 {
-let scroll_text = if hidden_above > 0 && hidden_below > 0 {
-format!("... {} above, {} below", hidden_above, hidden_below)
-} else if hidden_above > 0 {
-format!("... {} more above", hidden_above)
-} else {
-format!("... {} more below", hidden_below)
-};
-if available_lines > 0 && lines_to_show > 0 {
-let last_line_area = Rect {
-x: area.x,
-y: area.y + (lines_to_show - 1) as u16,
-width: area.width,
-height: 1,
-};
-let scroll_para = Paragraph::new(scroll_text).style(Typography::muted());
-frame.render_widget(scroll_para, last_line_area);
-}
-}
// Show "X more below" message if content was truncated
if hidden_below > 0 {
let more_text = format!("... {} more below", hidden_below);
let more_para = Paragraph::new(more_text).style(Typography::muted());
frame.render_widget(more_para, service_chunks[lines_to_show]);
}
}
}


@@ -45,12 +45,15 @@ pub struct SystemWidget {
struct StoragePool {
name: String,
mount_point: String,
-pool_type: String, // "Single", "Raid0", etc.
pool_type: String, // "single", "mergerfs (2+1)", "RAID5 (3+1)", etc.
pool_health: Option<String>, // "healthy", "degraded", "critical", "rebuilding"
drives: Vec<StorageDrive>,
filesystems: Vec<FileSystem>, // For physical drive pools: individual filesystem children
usage_percent: Option<f32>,
used_gb: Option<f32>,
total_gb: Option<f32>,
status: Status,
health_status: Status, // Separate status for pool health vs usage
}
#[derive(Clone)]
@@ -61,6 +64,15 @@ struct StorageDrive {
status: Status,
}
#[derive(Clone)]
struct FileSystem {
mount_point: String,
usage_percent: Option<f32>,
used_gb: Option<f32>,
total_gb: Option<f32>,
status: Status,
}
impl SystemWidget {
pub fn new() -> Self {
Self {
@@ -155,12 +167,15 @@ impl SystemWidget {
let pool = pools.entry(pool_name.clone()).or_insert_with(|| StoragePool {
name: pool_name.clone(),
mount_point: mount_point.clone(),
-pool_type: "Single".to_string(), // Default, could be enhanced
pool_type: "single".to_string(), // Default, will be updated
pool_health: None,
drives: Vec::new(),
filesystems: Vec::new(),
usage_percent: None,
used_gb: None,
total_gb: None,
status: Status::Unknown,
health_status: Status::Unknown,
});
// Parse different metric types
@@ -177,6 +192,15 @@ impl SystemWidget {
if let MetricValue::Float(total) = metric.value {
pool.total_gb = Some(total);
}
} else if metric.name.contains("_pool_type") {
if let MetricValue::String(pool_type) = &metric.value {
pool.pool_type = pool_type.clone();
}
} else if metric.name.contains("_pool_health") {
if let MetricValue::String(health) = &metric.value {
pool.pool_health = Some(health.clone());
pool.health_status = metric.status.clone();
}
} else if metric.name.contains("_temperature") { } else if metric.name.contains("_temperature") {
if let Some(drive_name) = self.extract_drive_name(&metric.name) { if let Some(drive_name) = self.extract_drive_name(&metric.name) {
// Find existing drive or create new one // Find existing drive or create new one
@@ -217,6 +241,75 @@ impl SystemWidget {
}
}
}
} else if metric.name.contains("_fs_") {
// Handle filesystem metrics for physical drive pools (disk_{pool}_fs_{fs_name}_{metric})
if let (Some(fs_name), Some(metric_type)) = self.extract_filesystem_metric(&metric.name) {
// Find or create filesystem entry
let fs_exists = pool.filesystems.iter().any(|fs| {
let fs_id = if fs.mount_point == "/" {
"root".to_string()
} else {
fs.mount_point.trim_start_matches('/').replace('/', "_")
};
fs_id == fs_name
});
if !fs_exists {
// Extract actual mount point from mount_point metric if available
let mount_point = if metric_type == "mount_point" {
if let MetricValue::String(mount) = &metric.value {
mount.clone()
} else {
format!("/{}", fs_name.replace('_', "/"))
}
} else {
format!("/{}", fs_name.replace('_', "/"))
};
pool.filesystems.push(FileSystem {
mount_point,
usage_percent: None,
used_gb: None,
total_gb: None,
status: Status::Unknown,
});
}
// Update the filesystem with the metric value
if let Some(filesystem) = pool.filesystems.iter_mut().find(|fs| {
let fs_id = if fs.mount_point == "/" {
"root".to_string()
} else {
fs.mount_point.trim_start_matches('/').replace('/', "_")
};
fs_id == fs_name
}) {
match metric_type.as_str() {
"usage_percent" => {
if let MetricValue::Float(usage) = metric.value {
filesystem.usage_percent = Some(usage);
filesystem.status = metric.status.clone();
}
}
"used_gb" => {
if let MetricValue::Float(used) = metric.value {
filesystem.used_gb = Some(used);
}
}
"total_gb" => {
if let MetricValue::Float(total) = metric.value {
filesystem.total_gb = Some(total);
}
}
"mount_point" => {
if let MetricValue::String(mount) = &metric.value {
filesystem.mount_point = mount.clone();
}
}
_ => {}
}
}
}
}
}
}
@@ -259,6 +352,25 @@ impl SystemWidget {
None
}
/// Extract filesystem name and metric type from filesystem metric names
/// Pattern: disk_{pool}_fs_{filesystem_name}_{metric_type}
fn extract_filesystem_metric(&self, metric_name: &str) -> (Option<String>, Option<String>) {
if metric_name.starts_with("disk_") && metric_name.contains("_fs_") {
// Find the _fs_ part
if let Some(fs_start) = metric_name.find("_fs_") {
let after_fs = &metric_name[fs_start + 4..]; // Skip "_fs_"
// Find the last underscore to separate filesystem name from metric type
if let Some(last_underscore) = after_fs.rfind('_') {
let fs_name = after_fs[..last_underscore].to_string();
let metric_type = after_fs[last_underscore + 1..].to_string();
return (Some(fs_name), Some(metric_type));
}
}
}
(None, None)
}
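As a quick illustration of the splitting rule, using a hypothetical metric name (the diff does not show the exact names the agent emits): the part after _fs_ is cut at its last underscore.

// Hypothetical metric name, for illustration only.
let widget = SystemWidget::new();
let (fs_name, metric_type) = widget.extract_filesystem_metric("disk_nvme0n1_fs_var_log_total");
assert_eq!(fs_name.as_deref(), Some("var_log"));
assert_eq!(metric_type.as_deref(), Some("total"));

Because the cut is taken at the last underscore, a two-word suffix such as usage_percent would come back as ("..._usage", "percent"), so the metric names the agent emits and the match arms above have to agree on that convention.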
/// Extract drive name from disk metric name
fn extract_drive_name(&self, metric_name: &str) -> Option<String> {
// Pattern: disk_{pool_name}_{drive_name}_{metric_type}
@@ -277,73 +389,199 @@
None
}
- /// Render storage section with tree structure
/// Render storage section with enhanced tree structure
fn render_storage(&self) -> Vec<Line<'_>> {
let mut lines = Vec::new();
for pool in &self.storage_pools {
- // Pool header line
- let usage_text = match (pool.usage_percent, pool.used_gb, pool.total_gb) {
- (Some(pct), Some(used), Some(total)) => {
- format!("{:.0}% {:.1}GB/{:.1}GB", pct, used, total)
- }
- _ => "—% —GB/—GB".to_string(),
- };
- let pool_label = if pool.pool_type.to_lowercase() == "single" {
- format!("{}:", pool.mount_point)
- } else {
- format!("{} ({}):", pool.mount_point, pool.pool_type)
- };
- let pool_spans = StatusIcons::create_status_spans(
- pool.status.clone(),
- &pool_label
- );
- lines.push(Line::from(pool_spans));
- // Drive lines with tree structure
- let has_usage_line = pool.usage_percent.is_some();
- for (i, drive) in pool.drives.iter().enumerate() {
- let is_last_drive = i == pool.drives.len() - 1;
- let tree_symbol = if is_last_drive && !has_usage_line { "└─" } else { "├─" };
- let mut drive_info = Vec::new();
- if let Some(temp) = drive.temperature {
- drive_info.push(format!("T: {:.0}C", temp));
- }
- if let Some(wear) = drive.wear_percent {
- drive_info.push(format!("W: {:.0}%", wear));
- }
- let drive_text = if drive_info.is_empty() {
- drive.name.clone()
- } else {
- format!("{} {}", drive.name, drive_info.join(""))
- };
- let mut drive_spans = vec![
- Span::raw(" "),
- Span::styled(tree_symbol, Typography::tree()),
- Span::raw(" "),
- ];
- drive_spans.extend(StatusIcons::create_status_spans(drive.status.clone(), &drive_text));
- lines.push(Line::from(drive_spans));
- }
- // Usage line
- if pool.usage_percent.is_some() {
- let tree_symbol = "└─";
- let mut usage_spans = vec![
- Span::raw(" "),
- Span::styled(tree_symbol, Typography::tree()),
- Span::raw(" "),
- ];
- usage_spans.extend(StatusIcons::create_status_spans(pool.status.clone(), &usage_text));
- lines.push(Line::from(usage_spans));
- }

// Pool header line with type and health
let pool_label = if pool.pool_type == "single" {
format!("{}:", pool.mount_point)
} else {
format!("{} ({}):", pool.mount_point, pool.pool_type)
};
let pool_spans = StatusIcons::create_status_spans(
pool.health_status.clone(),
&pool_label
);
lines.push(Line::from(pool_spans));
// Pool health line (for multi-disk pools)
if pool.pool_type != "single" {
if let Some(health) = &pool.pool_health {
let health_text = match health.as_str() {
"healthy" => format!("Pool Status: {} Healthy",
if pool.drives.len() > 1 { format!("({} drives)", pool.drives.len()) } else { String::new() }),
"degraded" => "Pool Status: ⚠ Degraded".to_string(),
"critical" => "Pool Status: ✗ Critical".to_string(),
"rebuilding" => "Pool Status: ⟳ Rebuilding".to_string(),
_ => format!("Pool Status: ? {}", health),
};
let mut health_spans = vec![
Span::raw(" "),
Span::styled("├─ ", Typography::tree()),
];
health_spans.extend(StatusIcons::create_status_spans(pool.health_status.clone(), &health_text));
lines.push(Line::from(health_spans));
}
}
// Total usage line (always show for pools)
let usage_text = match (pool.usage_percent, pool.used_gb, pool.total_gb) {
(Some(pct), Some(used), Some(total)) => {
format!("Total: {:.0}% {:.1}GB/{:.1}GB", pct, used, total)
}
_ => "Total: —% —GB/—GB".to_string(),
};
let has_drives = !pool.drives.is_empty();
let has_filesystems = !pool.filesystems.is_empty();
let has_children = has_drives || has_filesystems;
let tree_symbol = if has_children { "├─" } else { "└─" };
let mut usage_spans = vec![
Span::raw(" "),
Span::styled(tree_symbol, Typography::tree()),
Span::raw(" "),
];
usage_spans.extend(StatusIcons::create_status_spans(pool.status.clone(), &usage_text));
lines.push(Line::from(usage_spans));
// Drive lines with enhanced grouping
if pool.pool_type != "single" && pool.drives.len() > 1 {
// Group drives by type for mergerfs pools
let (data_drives, parity_drives): (Vec<_>, Vec<_>) = pool.drives.iter().enumerate()
.partition(|(_, drive)| {
// Simple heuristic: drives with 'parity' in name or sdc (common parity drive)
!drive.name.to_lowercase().contains("parity") && drive.name != "sdc"
});
// Show data drives
if !data_drives.is_empty() && pool.pool_type.contains("mergerfs") {
lines.push(Line::from(vec![
Span::raw(" "),
Span::styled("├─ ", Typography::tree()),
Span::styled("Data Disks:", Typography::secondary()),
]));
for (i, (_, drive)) in data_drives.iter().enumerate() {
let is_last = i == data_drives.len() - 1;
if is_last && parity_drives.is_empty() {
self.render_drive_line(&mut lines, drive, "│ └─");
} else {
self.render_drive_line(&mut lines, drive, "│ ├─");
}
}
}
// Show parity drives
if !parity_drives.is_empty() && pool.pool_type.contains("mergerfs") {
lines.push(Line::from(vec![
Span::raw(" "),
Span::styled("└─ ", Typography::tree()),
Span::styled("Parity:", Typography::secondary()),
]));
for (i, (_, drive)) in parity_drives.iter().enumerate() {
let is_last = i == parity_drives.len() - 1;
if is_last {
self.render_drive_line(&mut lines, drive, " └─");
} else {
self.render_drive_line(&mut lines, drive, " ├─");
}
}
} else {
// Regular drive listing for non-mergerfs pools
for (i, drive) in pool.drives.iter().enumerate() {
let is_last = i == pool.drives.len() - 1;
let tree_symbol = if is_last { "└─" } else { "├─" };
self.render_drive_line(&mut lines, drive, tree_symbol);
}
}
} else if pool.pool_type.starts_with("drive (") {
// Physical drive pools: show drive info + filesystem children
// First show drive information
for drive in &pool.drives {
let mut drive_info = Vec::new();
if let Some(temp) = drive.temperature {
drive_info.push(format!("T: {:.0}°C", temp));
}
if let Some(wear) = drive.wear_percent {
drive_info.push(format!("W: {:.0}%", wear));
}
let drive_text = if drive_info.is_empty() {
format!("Drive: {}", drive.name)
} else {
format!("Drive: {}", drive_info.join(" "))
};
let has_filesystems = !pool.filesystems.is_empty();
let tree_symbol = if has_filesystems { "├─" } else { "└─" };
let mut drive_spans = vec![
Span::raw(" "),
Span::styled(tree_symbol, Typography::tree()),
Span::raw(" "),
];
drive_spans.extend(StatusIcons::create_status_spans(drive.status.clone(), &drive_text));
lines.push(Line::from(drive_spans));
}
// Then show filesystem children
for (i, filesystem) in pool.filesystems.iter().enumerate() {
let is_last = i == pool.filesystems.len() - 1;
let tree_symbol = if is_last { "└─" } else { "├─" };
let fs_text = match (filesystem.usage_percent, filesystem.used_gb, filesystem.total_gb) {
(Some(pct), Some(used), Some(total)) => {
format!("{}: {:.0}% {:.1}GB/{:.1}GB", filesystem.mount_point, pct, used, total)
}
_ => format!("{}: —% —GB/—GB", filesystem.mount_point),
};
let mut fs_spans = vec![
Span::raw(" "),
Span::styled(tree_symbol, Typography::tree()),
Span::raw(" "),
];
fs_spans.extend(StatusIcons::create_status_spans(filesystem.status.clone(), &fs_text));
lines.push(Line::from(fs_spans));
}
} else {
// Single drive or simple pools
for (i, drive) in pool.drives.iter().enumerate() {
let is_last = i == pool.drives.len() - 1;
let tree_symbol = if is_last { "└─" } else { "├─" };
self.render_drive_line(&mut lines, drive, tree_symbol);
}
}
}
lines
}
/// Helper to render a single drive line
fn render_drive_line<'a>(&self, lines: &mut Vec<Line<'a>>, drive: &StorageDrive, tree_symbol: &'a str) {
let mut drive_info = Vec::new();
if let Some(temp) = drive.temperature {
drive_info.push(format!("T: {:.0}°C", temp));
}
if let Some(wear) = drive.wear_percent {
drive_info.push(format!("W: {:.0}%", wear));
}
let drive_text = if drive_info.is_empty() {
drive.name.clone()
} else {
format!("{} {}", drive.name, drive_info.join(""))
};
let mut drive_spans = vec![
Span::raw(" "),
Span::styled(tree_symbol, Typography::tree()),
Span::raw(" "),
];
drive_spans.extend(StatusIcons::create_status_spans(drive.status.clone(), &drive_text));
lines.push(Line::from(drive_spans));
}
}
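To make the target layout concrete, here is an approximate rendering of what render_storage would emit for a mergerfs pool and for a single physical drive with filesystem children; status icons are omitted, spacing is approximate, and all names and numbers are invented:

/mnt/storage (mergerfs (2+1)):
  ├─ Pool Status: (3 drives) Healthy
  ├─ Total: 63% 7540.0GB/12000.0GB
  ├─ Data Disks:
  │  ├─ sda T: 34°C
  │  └─ sdb T: 36°C
  └─ Parity:
     └─ sdc T: 33°C

/ (drive (nvme0n1)):
  ├─ Total: 42% 98.5GB/232.9GB
  ├─ Drive: T: 41°C W: 3%
  └─ /: 42% 98.5GB/232.9GB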
impl Widget for SystemWidget {
@@ -438,8 +676,8 @@ impl Widget for SystemWidget {
}
impl SystemWidget {
- /// Render with scroll offset support
/// Render system widget
- pub fn render_with_scroll(&mut self, frame: &mut Frame, area: Rect, scroll_offset: usize, hostname: &str, config: Option<&crate::config::DashboardConfig>) {
pub fn render(&mut self, frame: &mut Frame, area: Rect, hostname: &str, config: Option<&crate::config::DashboardConfig>) {
let mut lines = Vec::new();
// NixOS section
@@ -513,69 +751,30 @@ impl SystemWidget {
Span::styled("Storage:", Typography::widget_title()) Span::styled("Storage:", Typography::widget_title())
])); ]));
- // Storage items with overflow handling
// Storage items - let main overflow logic handle truncation
let storage_lines = self.render_storage();
- let remaining_space = area.height.saturating_sub(lines.len() as u16);
- if storage_lines.len() <= remaining_space as usize {
- // All storage lines fit
- lines.extend(storage_lines);
- } else if remaining_space >= 2 {
- // Show what we can and add overflow indicator
- let lines_to_show = (remaining_space - 1) as usize; // Reserve 1 line for overflow
- lines.extend(storage_lines.iter().take(lines_to_show).cloned());
- // Count hidden pools
- let mut hidden_pools = 0;
- let mut current_pool = String::new();
- for (i, line) in storage_lines.iter().enumerate() {
- if i >= lines_to_show {
- // Check if this line represents a new pool (no indentation)
- if let Some(first_span) = line.spans.first() {
- let text = first_span.content.as_ref();
- if !text.starts_with(" ") && text.contains(':') {
- let pool_name = text.split(':').next().unwrap_or("").trim();
- if pool_name != current_pool {
- hidden_pools += 1;
- current_pool = pool_name.to_string();
- }
- }
- }
- }
- }
- if hidden_pools > 0 {
- let overflow_text = format!(
- "... and {} more pool{}",
- hidden_pools,
- if hidden_pools == 1 { "" } else { "s" }
- );
- lines.push(Line::from(vec![
- Span::styled(overflow_text, Typography::muted())
- ]));
- }
- }
lines.extend(storage_lines);
// Apply scroll offset
let total_lines = lines.len();
let available_height = area.height as usize;
- // Always apply scrolling if scroll_offset > 0, even if content fits
- if scroll_offset > 0 || total_lines > available_height {
- let max_scroll = if total_lines > available_height {
- total_lines - available_height
- } else {
- total_lines.saturating_sub(1)
- };
- let effective_scroll = scroll_offset.min(max_scroll);
- // Take only the visible portion after scrolling
- let visible_lines: Vec<Line> = lines
- .into_iter()
- .skip(effective_scroll)
- .take(available_height)
- .collect();

// Show only what fits, with "X more below" if needed
if total_lines > available_height {
let lines_for_content = available_height.saturating_sub(1); // Reserve one line for "more below"
let mut visible_lines: Vec<Line> = lines
.into_iter()
.take(lines_for_content)
.collect();
let hidden_below = total_lines.saturating_sub(lines_for_content);
if hidden_below > 0 {
let more_line = Line::from(vec![
Span::styled(format!("... {} more below", hidden_below), Typography::muted())
]);
visible_lines.push(more_line);
}
let paragraph = Paragraph::new(Text::from(visible_lines));
frame.render_widget(paragraph, area);
} else {
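A quick worked example of the truncation arithmetic above, with invented numbers: 20 content lines in an 8-row area leaves 7 rows for content and 13 lines summarized by the indicator.

// Illustrative numbers only.
let total_lines = 20usize;
let available_height = 8usize;
let lines_for_content = available_height.saturating_sub(1); // 7 rows of content
let hidden_below = total_lines.saturating_sub(lines_for_content); // 13 hidden lines
assert_eq!((lines_for_content, hidden_below), (7, 13)); // rendered as "... 13 more below"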


@@ -1,6 +1,6 @@
[package]
name = "cm-dashboard-shared"
- version = "0.1.76"
version = "0.1.103"
edition = "2021"
[dependencies]


@@ -82,12 +82,13 @@ impl MetricValue {
/// Health status for metrics
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub enum Status {
- Ok,
- Pending,
- Warning,
- Critical,
- Unknown,
- Offline,
Inactive, // Lowest priority
Unknown, //
Offline, //
Pending, //
Ok, // 5th place - good status has higher priority than unknown states
Warning, //
Critical, // Highest priority
}
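Since Ord is derived, the declaration order above is exactly what comparisons use, so a worst-of aggregation can simply take the maximum. The helper below is an illustration of that property, not code from the diff:

// Illustrative only: with the new ordering, the "worst" status is simply the maximum.
fn worst(statuses: &[Status]) -> Status {
    statuses.iter().copied().max().unwrap_or(Status::Unknown)
}

assert!(Status::Ok > Status::Pending);       // Ok now outranks transitional states
assert!(Status::Critical > Status::Warning); // Critical stays the highest priority
assert_eq!(worst(&[Status::Pending, Status::Ok, Status::Warning]), Status::Warning);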
impl Status {
@@ -181,6 +182,16 @@ impl HysteresisThresholds {
Status::Ok
}
}
Status::Inactive => {
// Inactive services use normal thresholds like first measurement
if value >= self.critical_high {
Status::Critical
} else if value >= self.warning_high {
Status::Warning
} else {
Status::Ok
}
}
Status::Pending => {
// Service transitioning, use normal thresholds like first measurement
if value >= self.critical_high {