Compare commits

16 Commits

| SHA1 |
|---|
| 2c27d0e1db |
| 9f18488752 |
| fab6404cca |
| c3626cc362 |
| d68ecfbc64 |
| d1272a6c13 |
| 33b3beb342 |
| f9384d9df6 |
| 156d707377 |
| dc1a2e3a0f |
| 5d6b8e6253 |
| 0cba083305 |
| a6be7a4788 |
| 2384f7f9b9 |
| cd5ef65d3d |
| 7bf9ca6201 |
CLAUDE.md: 114 lines changed
@@ -144,6 +144,120 @@ nix-build --no-out-link -E 'with import <nixpkgs> {}; fetchurl {
- **Workspace builds**: `nix-shell -p openssl pkg-config --run "cargo build --workspace"`
- **Clean compilation**: Remove `target/` between major changes

## Enhanced Storage Pool Visualization

### Auto-Discovery Architecture

The dashboard uses automatic storage discovery to eliminate manual configuration complexity while providing intelligent storage pool grouping.

### Discovery Process

**At Agent Startup** (see the sketch after this list):
1. Parse `/proc/mounts` to identify all mounted filesystems
2. Detect MergerFS pools by analyzing `fuse.mergerfs` mount sources
3. Identify member disks and potential parity relationships via heuristics
4. Store the discovered storage topology for continuous monitoring
5. Generate pool-aware metrics with hierarchical relationships
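
A minimal standalone sketch of step 1, modeled on the agent's `parse_proc_mounts` (error type simplified to `std::io::Result`; the real collector wraps this in its own error handling):

```rust
use std::fs;

/// One entry from /proc/mounts: device, mount point, filesystem type.
#[derive(Debug, Clone)]
struct MountInfo {
    device: String,      // e.g. "/dev/sda1" or "/mnt/disk1:/mnt/disk2"
    mount_point: String, // e.g. "/", "/srv/media"
    fs_type: String,     // e.g. "ext4", "fuse.mergerfs"
}

/// Read /proc/mounts and keep the first three whitespace-separated fields per line.
fn parse_proc_mounts() -> std::io::Result<Vec<MountInfo>> {
    let content = fs::read_to_string("/proc/mounts")?;
    Ok(content
        .lines()
        .filter_map(|line| {
            let mut parts = line.split_whitespace();
            Some(MountInfo {
                device: parts.next()?.to_string(),
                mount_point: parts.next()?.to_string(),
                fs_type: parts.next()?.to_string(),
            })
        })
        .collect())
}
```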

**Continuous Monitoring:**
- Use stored discovery data for efficient metric collection
- Monitor individual drives for SMART data, temperature, wear
- Calculate pool-level health based on member drive status
- Generate enhanced metrics for dashboard visualization
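
Pool-level health is derived from how many member drives fail their SMART check; a simplified sketch of the MergerFS rule used by the collector (`calculate_mergerfs_pool_health`), omitting the `Rebuilding`/`Unknown` states:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum PoolHealth {
    Healthy,  // all data and parity drives pass SMART
    Degraded, // one data drive or one parity drive failed; still recoverable
    Critical, // multiple failures, data at risk
}

/// Map counts of failed data/parity members to an overall pool health.
fn mergerfs_pool_health(failed_data: usize, failed_parity: usize) -> PoolHealth {
    match (failed_data, failed_parity) {
        (0, 0) => PoolHealth::Healthy,
        (1, 0) | (0, 1) => PoolHealth::Degraded,
        _ => PoolHealth::Critical,
    }
}
```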

### Supported Storage Types

**Single Disks:**
- ext4, xfs, btrfs mounted directly
- Individual drive monitoring with SMART data
- Traditional single-disk display for root, boot, etc.

**MergerFS Pools:**
- Auto-detect from `fuse.mergerfs` entries in `/proc/mounts`
- Parse source paths to identify member disks (e.g., "/mnt/disk1:/mnt/disk2"); see the sketch after this list
- Heuristic parity disk detection (sequential device names, "parity" in path)
- Pool health calculation (healthy/degraded/critical)
- Hierarchical tree display with data/parity disk grouping
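
The member-disk parsing referenced above is a colon split on the mergerfs source string, and the first parity heuristic is a name match over the other mounts; a condensed sketch (the real collector additionally filters by filesystem type and falls back to a sequential `/mnt/disk*` pattern):

```rust
/// MergerFS encodes its members in the mount source, e.g. "/mnt/disk1:/mnt/disk2".
fn parse_mergerfs_sources(source: &str) -> Vec<String> {
    source
        .split(':')
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .collect()
}

/// Heuristic: any other mounted path containing "parity" is treated as a parity disk.
fn detect_parity_disks(all_mount_points: &[String], data_members: &[String]) -> Vec<String> {
    all_mount_points
        .iter()
        .filter(|&mp| mp.to_lowercase().contains("parity") && !data_members.contains(mp))
        .cloned()
        .collect()
}
```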

**Future Extensions Ready:**
- RAID arrays via `/proc/mdstat` parsing
- ZFS pools via `zpool status` integration
- LVM logical volumes via `lvs` discovery

### Configuration

```toml
[collectors.disk]
enabled = true
auto_discover = true  # Default: true
# Optional exclusions for special filesystems
exclude_mount_points = ["/tmp", "/proc", "/sys", "/dev"]
exclude_fs_types = ["tmpfs", "devtmpfs", "sysfs", "proc"]
```
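
On the agent side this section maps onto the disk collector's configuration struct; a hedged sketch of how it could be deserialized with serde (field names are taken from the example above; the actual `DiskConfig` in `crate::config` may differ):

```rust
use serde::Deserialize;

/// Possible shape of the [collectors.disk] section shown above (illustrative only).
#[derive(Debug, Deserialize)]
struct DiskCollectorConfig {
    #[serde(default = "default_true")]
    enabled: bool,
    #[serde(default = "default_true")]
    auto_discover: bool,
    /// Mount points that should never be treated as storage pools.
    #[serde(default)]
    exclude_mount_points: Vec<String>,
    /// Filesystem types to ignore during discovery (tmpfs, proc, ...).
    #[serde(default)]
    exclude_fs_types: Vec<String>,
}

fn default_true() -> bool {
    true
}
```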

### Display Format

```
Storage:
● /srv/media (mergerfs (2+1)):
├─ Pool Status: ● Healthy (3 drives)
├─ Total: ● 63% 2355.2GB/3686.4GB
├─ Data Disks:
│ ├─ ● sdb T: 24°C
│ └─ ● sdd T: 27°C
└─ Parity: ● sdc T: 24°C
● /:
├─ ● nvme0n1 W: 13%
└─ ● 7% 14.5GB/218.5GB
```

### Implementation Benefits

- **Zero Configuration**: No manual pool definitions required
- **Always Accurate**: Reflects actual system state automatically
- **Scales Automatically**: Handles any number of pools without config changes
- **Backwards Compatible**: Single disks continue working unchanged
- **Future Ready**: Easy extension for additional storage technologies

### Current Status (v0.1.100)

**✅ Completed:**
- Auto-discovery system implemented and deployed
- `/proc/mounts` parsing with smart heuristics for parity detection
- Storage topology stored at agent startup for efficient monitoring
- Universal zero-configuration for all hosts (cmbox, steambox, simonbox, srv01, srv02, srv03)
- Enhanced pool health calculation (healthy/degraded/critical)
- Hierarchical tree visualization with data/parity disk separation

**🔄 In Progress - Unified Pool Visualization:**

Auto-discovery works, but the dashboard still displays filesystems separately instead of grouping them by physical drive. The next step is a unified pool concept in which single drives are also treated as pools.

**Current Display (needs improvement):**
```
● /boot: (separate entry)
● /nix_store: (separate entry)
● /: (separate entry)
```

**Target Display (unified pools):**
```
● nvme0n1:
├─ Drive: T: 35°C W: 1%
├─ /boot: 11% 0.1GB/1.0GB
├─ /nix_store: 23% 214.9GB/928.2GB
└─ /: 23% 214.9GB/928.2GB
```

**Required Changes:**
1. **Enhanced Auto-Discovery**: Group filesystems by backing physical drive during discovery (see the sketch after this list)
2. **UI Pool Logic**: Treat single drives as "pools" with the drive name as the header
3. **Drive Info Display**: Show temperature, wear, and health at pool level for single drives
4. **Filesystem Children**: Display mount points as children under their physical drives
5. **Hybrid Rendering**: Physical grouping for single drives, logical grouping for mergerfs pools
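
Change 1 amounts to bucketing the discovered mount points by the drive that backs them (as reported by `lsblk` at startup); a simplified sketch of the grouping step, along the lines of `group_filesystems_by_physical_drive`, assuming a `mount_to_device` map built during discovery:

```rust
use std::collections::HashMap;

/// Group mount points by their backing physical drive (e.g. "nvme0n1").
/// `mount_to_device` is assumed to come from the startup lsblk detection.
fn group_by_physical_drive(
    mount_points: &[String],
    mount_to_device: &HashMap<String, String>,
) -> HashMap<String, Vec<String>> {
    let mut grouped: HashMap<String, Vec<String>> = HashMap::new();
    for mp in mount_points {
        if let Some(drive) = mount_to_device.get(mp) {
            grouped.entry(drive.clone()).or_default().push(mp.clone());
        }
    }
    grouped
}
```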

**Expected Result**: Consistent hierarchical storage visualization where everything follows a pool -> children pattern, regardless of the underlying storage technology.

## Important Communication Guidelines

Keep responses concise and focused. Avoid extensive implementation summaries unless requested.
Cargo.lock (generated): 6 lines changed
@@ -279,7 +279,7 @@ checksum = "a1d728cc89cf3aee9ff92b05e62b19ee65a02b5702cff7d5a377e32c6ae29d8d"

[[package]]
name = "cm-dashboard"
version = "0.1.90"
version = "0.1.105"
dependencies = [
"anyhow",
"chrono",

@@ -301,7 +301,7 @@ dependencies = [

[[package]]
name = "cm-dashboard-agent"
version = "0.1.90"
version = "0.1.105"
dependencies = [
"anyhow",
"async-trait",

@@ -324,7 +324,7 @@ dependencies = [

[[package]]
name = "cm-dashboard-shared"
version = "0.1.90"
version = "0.1.105"
dependencies = [
"chrono",
"serde",

@@ -1,6 +1,6 @@
[package]
name = "cm-dashboard-agent"
version = "0.1.90"
version = "0.1.107"
edition = "2021"

[dependencies]

@@ -25,6 +25,25 @@ impl BackupCollector {
|
||||
}
|
||||
|
||||
async fn read_backup_status(&self) -> Result<Option<BackupStatusToml>, CollectorError> {
|
||||
// Check if we're in maintenance mode
|
||||
if std::fs::metadata("/tmp/cm-maintenance").is_ok() {
|
||||
// Return special maintenance mode status
|
||||
let maintenance_status = BackupStatusToml {
|
||||
backup_name: "maintenance".to_string(),
|
||||
start_time: chrono::Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(),
|
||||
current_time: chrono::Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(),
|
||||
duration_seconds: 0,
|
||||
status: "pending".to_string(),
|
||||
last_updated: chrono::Utc::now().format("%Y-%m-%d %H:%M:%S UTC").to_string(),
|
||||
disk_space: None,
|
||||
disk_product_name: None,
|
||||
disk_serial_number: None,
|
||||
disk_wear_percent: None,
|
||||
services: HashMap::new(),
|
||||
};
|
||||
return Ok(Some(maintenance_status));
|
||||
}
|
||||
|
||||
// Check if backup status file exists
|
||||
if !std::path::Path::new(&self.backup_status_file).exists() {
|
||||
return Ok(None); // File doesn't exist, but this is not an error
|
||||
@@ -79,7 +98,9 @@ impl BackupCollector {
|
||||
}
|
||||
}
|
||||
"failed" => Status::Critical,
|
||||
"warning" => Status::Warning, // Backup completed with warnings
|
||||
"running" => Status::Ok, // Currently running is OK
|
||||
"pending" => Status::Pending, // Maintenance mode or backup starting
|
||||
_ => Status::Unknown,
|
||||
}
|
||||
}
|
||||
@@ -379,6 +400,25 @@ impl Collector for BackupCollector {
|
||||
});
|
||||
}
|
||||
|
||||
if let Some(wear_percent) = backup_status.disk_wear_percent {
|
||||
let wear_status = if wear_percent >= 90.0 {
|
||||
Status::Critical
|
||||
} else if wear_percent >= 75.0 {
|
||||
Status::Warning
|
||||
} else {
|
||||
Status::Ok
|
||||
};
|
||||
|
||||
metrics.push(Metric {
|
||||
name: "backup_disk_wear_percent".to_string(),
|
||||
value: MetricValue::Float(wear_percent),
|
||||
status: wear_status,
|
||||
timestamp,
|
||||
description: Some("Backup disk wear percentage from SMART data".to_string()),
|
||||
unit: Some("percent".to_string()),
|
||||
});
|
||||
}
|
||||
|
||||
// Count services by status
|
||||
let mut status_counts = HashMap::new();
|
||||
for service in backup_status.services.values() {
|
||||
@@ -412,6 +452,7 @@ pub struct BackupStatusToml {
|
||||
pub disk_space: Option<DiskSpace>,
|
||||
pub disk_product_name: Option<String>,
|
||||
pub disk_serial_number: Option<String>,
|
||||
pub disk_wear_percent: Option<f32>,
|
||||
pub services: HashMap<String, ServiceStatus>,
|
||||
}
|
||||
|
||||
|
||||
@@ -5,22 +5,82 @@ use cm_dashboard_shared::{Metric, MetricValue, Status, StatusTracker, Hysteresis
|
||||
use crate::config::DiskConfig;
|
||||
use std::process::Command;
|
||||
use std::time::Instant;
|
||||
use std::fs;
|
||||
use tracing::debug;
|
||||
|
||||
use super::{Collector, CollectorError};
|
||||
|
||||
/// Mount point information from /proc/mounts
|
||||
#[derive(Debug, Clone)]
|
||||
struct MountInfo {
|
||||
device: String, // e.g., "/dev/sda1" or "/mnt/disk1:/mnt/disk2"
|
||||
mount_point: String, // e.g., "/", "/srv/media"
|
||||
fs_type: String, // e.g., "ext4", "xfs", "fuse.mergerfs"
|
||||
}
|
||||
|
||||
/// Auto-discovered storage topology
|
||||
#[derive(Debug, Clone)]
|
||||
struct StorageTopology {
|
||||
single_disks: Vec<MountInfo>,
|
||||
mergerfs_pools: Vec<MergerfsPoolInfo>,
|
||||
}
|
||||
|
||||
/// MergerFS pool information
|
||||
#[derive(Debug, Clone)]
|
||||
struct MergerfsPoolInfo {
|
||||
mount_point: String, // e.g., "/srv/media"
|
||||
data_members: Vec<String>, // e.g., ["/mnt/disk1", "/mnt/disk2"]
|
||||
parity_disks: Vec<String>, // e.g., ["/mnt/parity"]
|
||||
}
|
||||
|
||||
/// Information about a storage pool (mount point with underlying drives)
|
||||
#[derive(Debug, Clone)]
|
||||
struct StoragePool {
|
||||
name: String, // e.g., "steampool", "root"
|
||||
mount_point: String, // e.g., "/mnt/steampool", "/"
|
||||
filesystem: String, // e.g., "mergerfs", "ext4", "zfs", "btrfs"
|
||||
storage_type: String, // e.g., "mergerfs", "single", "raid", "zfs"
|
||||
pool_type: StoragePoolType, // Enhanced pool type with configuration
|
||||
size: String, // e.g., "2.5TB"
|
||||
used: String, // e.g., "2.1TB"
|
||||
available: String, // e.g., "400GB"
|
||||
usage_percent: f32, // e.g., 85.0
|
||||
underlying_drives: Vec<DriveInfo>, // Individual physical drives
|
||||
pool_health: PoolHealth, // Overall pool health status
|
||||
}
|
||||
|
||||
/// Enhanced storage pool types with specific configurations
|
||||
#[derive(Debug, Clone)]
|
||||
enum StoragePoolType {
|
||||
Single, // Traditional single disk (legacy)
|
||||
PhysicalDrive { // Physical drive with multiple filesystems
|
||||
filesystems: Vec<String>, // Mount points on this drive
|
||||
},
|
||||
MergerfsPool { // MergerFS with optional parity
|
||||
data_disks: Vec<String>, // Member disk names (sdb, sdd)
|
||||
parity_disks: Vec<String>, // Parity disk names (sdc)
|
||||
},
|
||||
#[allow(dead_code)]
|
||||
RaidArray { // Hardware RAID (future)
|
||||
level: String, // "RAID1", "RAID5", etc.
|
||||
member_disks: Vec<String>,
|
||||
spare_disks: Vec<String>,
|
||||
},
|
||||
#[allow(dead_code)]
|
||||
ZfsPool { // ZFS pool (future)
|
||||
pool_name: String,
|
||||
vdevs: Vec<String>,
|
||||
}
|
||||
}
|
||||
|
||||
/// Pool health status for redundant storage
|
||||
#[derive(Debug, Clone, Copy, PartialEq)]
|
||||
enum PoolHealth {
|
||||
Healthy, // All drives OK, parity current
|
||||
Degraded, // One drive failed or parity outdated, still functional
|
||||
Critical, // Multiple failures, data at risk
|
||||
#[allow(dead_code)]
|
||||
Rebuilding, // Actively rebuilding/scrubbing (future: SnapRAID status integration)
|
||||
Unknown, // Cannot determine status
|
||||
}
|
||||
|
||||
/// Information about an individual physical drive
|
||||
@@ -37,6 +97,7 @@ pub struct DiskCollector {
|
||||
config: DiskConfig,
|
||||
temperature_thresholds: HysteresisThresholds,
|
||||
detected_devices: std::collections::HashMap<String, Vec<String>>, // mount_point -> devices
|
||||
storage_topology: Option<StorageTopology>, // Auto-discovered storage layout
|
||||
}
|
||||
|
||||
impl DiskCollector {
|
||||
@@ -49,12 +110,57 @@ impl DiskCollector {
|
||||
5.0, // 5°C gap for recovery
|
||||
);
|
||||
|
||||
// Detect devices for all configured filesystems at startup
|
||||
// Perform auto-discovery of storage topology
|
||||
let storage_topology = match Self::auto_discover_storage() {
|
||||
Ok(topology) => {
|
||||
debug!("Auto-discovered storage topology: {} single disks, {} mergerfs pools",
|
||||
topology.single_disks.len(), topology.mergerfs_pools.len());
|
||||
Some(topology)
|
||||
}
|
||||
Err(e) => {
|
||||
debug!("Failed to auto-discover storage topology: {}", e);
|
||||
None
|
||||
}
|
||||
};
|
||||
|
||||
// Detect devices for discovered storage
|
||||
let mut detected_devices = std::collections::HashMap::new();
|
||||
for fs_config in &config.filesystems {
|
||||
if fs_config.monitor {
|
||||
if let Ok(devices) = Self::detect_device_for_mount_point_static(&fs_config.mount_point) {
|
||||
detected_devices.insert(fs_config.mount_point.clone(), devices);
|
||||
if let Some(ref topology) = storage_topology {
|
||||
// Add single disks
|
||||
for disk in &topology.single_disks {
|
||||
if let Ok(devices) = Self::detect_device_for_mount_point_static(&disk.mount_point) {
|
||||
detected_devices.insert(disk.mount_point.clone(), devices);
|
||||
}
|
||||
}
|
||||
|
||||
// Add mergerfs pools and their members
|
||||
for pool in &topology.mergerfs_pools {
|
||||
// Detect devices for the pool itself
|
||||
if let Ok(devices) = Self::detect_device_for_mount_point_static(&pool.mount_point) {
|
||||
detected_devices.insert(pool.mount_point.clone(), devices);
|
||||
}
|
||||
|
||||
// Detect devices for member disks
|
||||
for member in &pool.data_members {
|
||||
if let Ok(devices) = Self::detect_device_for_mount_point_static(member) {
|
||||
detected_devices.insert(member.clone(), devices);
|
||||
}
|
||||
}
|
||||
|
||||
// Detect devices for parity disks
|
||||
for parity in &pool.parity_disks {
|
||||
if let Ok(devices) = Self::detect_device_for_mount_point_static(parity) {
|
||||
detected_devices.insert(parity.clone(), devices);
|
||||
}
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// Fallback: use legacy filesystem config detection
|
||||
for fs_config in &config.filesystems {
|
||||
if fs_config.monitor {
|
||||
if let Ok(devices) = Self::detect_device_for_mount_point_static(&fs_config.mount_point) {
|
||||
detected_devices.insert(fs_config.mount_point.clone(), devices);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -63,58 +169,422 @@ impl DiskCollector {
|
||||
config,
|
||||
temperature_thresholds,
|
||||
detected_devices,
|
||||
storage_topology,
|
||||
}
|
||||
}
|
||||
|
||||
/// Auto-discover storage topology by parsing system information
|
||||
fn auto_discover_storage() -> Result<StorageTopology> {
|
||||
let mounts = Self::parse_proc_mounts()?;
|
||||
let mut single_disks = Vec::new();
|
||||
let mut mergerfs_pools = Vec::new();
|
||||
|
||||
// Filter out unwanted filesystem types and mount points
|
||||
let exclude_fs_types = ["tmpfs", "devtmpfs", "sysfs", "proc", "cgroup", "cgroup2", "devpts"];
|
||||
let exclude_mount_prefixes = ["/proc", "/sys", "/dev", "/tmp", "/run"];
|
||||
|
||||
for mount in mounts {
|
||||
// Skip excluded filesystem types
|
||||
if exclude_fs_types.contains(&mount.fs_type.as_str()) {
|
||||
continue;
|
||||
}
|
||||
|
||||
// Skip excluded mount point prefixes
|
||||
if exclude_mount_prefixes.iter().any(|prefix| mount.mount_point.starts_with(prefix)) {
|
||||
continue;
|
||||
}
|
||||
|
||||
match mount.fs_type.as_str() {
|
||||
"fuse.mergerfs" => {
|
||||
// Parse mergerfs pool
|
||||
let data_members = Self::parse_mergerfs_sources(&mount.device);
|
||||
let parity_disks = Self::detect_parity_disks(&data_members);
|
||||
|
||||
mergerfs_pools.push(MergerfsPoolInfo {
|
||||
mount_point: mount.mount_point.clone(),
|
||||
data_members,
|
||||
parity_disks,
|
||||
});
|
||||
|
||||
debug!("Discovered mergerfs pool at {}", mount.mount_point);
|
||||
}
|
||||
"ext4" | "xfs" | "btrfs" | "ntfs" | "vfat" => {
|
||||
// Check if this mount is part of a mergerfs pool
|
||||
let is_mergerfs_member = mergerfs_pools.iter()
|
||||
.any(|pool| pool.data_members.contains(&mount.mount_point) ||
|
||||
pool.parity_disks.contains(&mount.mount_point));
|
||||
|
||||
if !is_mergerfs_member {
|
||||
debug!("Discovered single disk at {}", mount.mount_point);
|
||||
single_disks.push(mount);
|
||||
}
|
||||
}
|
||||
_ => {
|
||||
debug!("Skipping unsupported filesystem type: {}", mount.fs_type);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
Ok(StorageTopology {
|
||||
single_disks,
|
||||
mergerfs_pools,
|
||||
})
|
||||
}
|
||||
|
||||
/// Parse /proc/mounts to get all mount information
|
||||
fn parse_proc_mounts() -> Result<Vec<MountInfo>> {
|
||||
let mounts_content = fs::read_to_string("/proc/mounts")?;
|
||||
let mut mounts = Vec::new();
|
||||
|
||||
for line in mounts_content.lines() {
|
||||
let parts: Vec<&str> = line.split_whitespace().collect();
|
||||
if parts.len() >= 3 {
|
||||
mounts.push(MountInfo {
|
||||
device: parts[0].to_string(),
|
||||
mount_point: parts[1].to_string(),
|
||||
fs_type: parts[2].to_string(),
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
Ok(mounts)
|
||||
}
|
||||
|
||||
/// Parse mergerfs source string to extract member paths
|
||||
fn parse_mergerfs_sources(source: &str) -> Vec<String> {
|
||||
// MergerFS source format: "/mnt/disk1:/mnt/disk2:/mnt/disk3"
|
||||
source.split(':')
|
||||
.map(|s| s.trim().to_string())
|
||||
.filter(|s| !s.is_empty())
|
||||
.collect()
|
||||
}
|
||||
|
||||
/// Detect potential parity disks based on data member heuristics
|
||||
fn detect_parity_disks(data_members: &[String]) -> Vec<String> {
|
||||
let mut parity_disks = Vec::new();
|
||||
|
||||
// Heuristic 1: Look for mount points with "parity" in the name
|
||||
if let Ok(mounts) = Self::parse_proc_mounts() {
|
||||
for mount in mounts {
|
||||
if mount.mount_point.to_lowercase().contains("parity") &&
|
||||
(mount.fs_type == "xfs" || mount.fs_type == "ext4") {
|
||||
debug!("Detected parity disk by name: {}", mount.mount_point);
|
||||
parity_disks.push(mount.mount_point);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Heuristic 2: Look for sequential device pattern
|
||||
// If data members are /mnt/disk1, /mnt/disk2, look for /mnt/disk* that's not in data
|
||||
if parity_disks.is_empty() {
|
||||
if let Some(pattern) = Self::extract_mount_pattern(data_members) {
|
||||
if let Ok(mounts) = Self::parse_proc_mounts() {
|
||||
for mount in mounts {
|
||||
if mount.mount_point.starts_with(&pattern) &&
|
||||
!data_members.contains(&mount.mount_point) &&
|
||||
(mount.fs_type == "xfs" || mount.fs_type == "ext4") {
|
||||
debug!("Detected parity disk by pattern: {}", mount.mount_point);
|
||||
parity_disks.push(mount.mount_point);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
parity_disks
|
||||
}
|
||||
|
||||
/// Extract common mount point pattern from data members
|
||||
fn extract_mount_pattern(data_members: &[String]) -> Option<String> {
|
||||
if data_members.is_empty() {
|
||||
return None;
|
||||
}
|
||||
|
||||
// Find common prefix (e.g., "/mnt/disk" from "/mnt/disk1", "/mnt/disk2")
|
||||
let first = &data_members[0];
|
||||
if let Some(last_slash) = first.rfind('/') {
|
||||
let base = &first[..last_slash + 1]; // Include the slash
|
||||
|
||||
// Check if all members share this base
|
||||
if data_members.iter().all(|member| member.starts_with(base)) {
|
||||
return Some(base.to_string());
|
||||
}
|
||||
}
|
||||
|
||||
None
|
||||
}
|
||||
|
||||
/// Calculate disk temperature status using hysteresis thresholds
|
||||
fn calculate_temperature_status(&self, metric_name: &str, temperature: f32, status_tracker: &mut StatusTracker) -> Status {
|
||||
status_tracker.calculate_with_hysteresis(metric_name, temperature, &self.temperature_thresholds)
|
||||
}
|
||||
|
||||
|
||||
/// Get configured storage pools with individual drive information
|
||||
/// Get storage pools using auto-discovered topology or fallback to configuration
|
||||
fn get_configured_storage_pools(&self) -> Result<Vec<StoragePool>> {
|
||||
if let Some(ref topology) = self.storage_topology {
|
||||
self.get_auto_discovered_storage_pools(topology)
|
||||
} else {
|
||||
self.get_legacy_configured_storage_pools()
|
||||
}
|
||||
}
|
||||
|
||||
/// Get storage pools from auto-discovered topology
|
||||
fn get_auto_discovered_storage_pools(&self, topology: &StorageTopology) -> Result<Vec<StoragePool>> {
|
||||
let mut storage_pools = Vec::new();
|
||||
|
||||
// Group single disks by physical drive for unified pool display
|
||||
let grouped_disks = self.group_filesystems_by_physical_drive(&topology.single_disks)?;
|
||||
|
||||
// Process grouped single disks (each physical drive becomes a pool)
|
||||
for (drive_name, filesystems) in grouped_disks {
|
||||
// Create a unified pool for this physical drive
|
||||
let pool = self.create_physical_drive_pool(&drive_name, &filesystems)?;
|
||||
storage_pools.push(pool);
|
||||
}
|
||||
|
||||
// IMPORTANT: Do not create individual filesystem pools when using auto-discovery
|
||||
// All single disk filesystems should be grouped into physical drive pools above
|
||||
|
||||
// Process mergerfs pools (these remain as logical pools)
|
||||
for pool_info in &topology.mergerfs_pools {
|
||||
if let Ok((total_bytes, used_bytes)) = self.get_filesystem_info(&pool_info.mount_point) {
|
||||
let available_bytes = total_bytes - used_bytes;
|
||||
let usage_percent = if total_bytes > 0 {
|
||||
(used_bytes as f64 / total_bytes as f64) * 100.0
|
||||
} else { 0.0 };
|
||||
|
||||
let size = self.bytes_to_human_readable(total_bytes);
|
||||
let used = self.bytes_to_human_readable(used_bytes);
|
||||
let available = self.bytes_to_human_readable(available_bytes);
|
||||
|
||||
// Collect all member and parity drives
|
||||
let mut all_drives = Vec::new();
|
||||
|
||||
// Add data member drives
|
||||
for member in &pool_info.data_members {
|
||||
if let Some(devices) = self.detected_devices.get(member) {
|
||||
all_drives.extend(devices.clone());
|
||||
}
|
||||
}
|
||||
|
||||
// Add parity drives
|
||||
for parity in &pool_info.parity_disks {
|
||||
if let Some(devices) = self.detected_devices.get(parity) {
|
||||
all_drives.extend(devices.clone());
|
||||
}
|
||||
}
|
||||
|
||||
let underlying_drives = self.get_drive_info_for_devices(&all_drives)?;
|
||||
|
||||
// Calculate pool health
|
||||
let pool_health = self.calculate_mergerfs_pool_health(&pool_info.data_members, &pool_info.parity_disks, &underlying_drives);
|
||||
|
||||
// Generate pool name from mount point
|
||||
let name = pool_info.mount_point.trim_start_matches('/').replace('/', "_");
|
||||
|
||||
storage_pools.push(StoragePool {
|
||||
name,
|
||||
mount_point: pool_info.mount_point.clone(),
|
||||
filesystem: "fuse.mergerfs".to_string(),
|
||||
pool_type: StoragePoolType::MergerfsPool {
|
||||
data_disks: pool_info.data_members.iter()
|
||||
.filter_map(|member| self.detected_devices.get(member).and_then(|devices| devices.first().cloned()))
|
||||
.collect(),
|
||||
parity_disks: pool_info.parity_disks.iter()
|
||||
.filter_map(|parity| self.detected_devices.get(parity).and_then(|devices| devices.first().cloned()))
|
||||
.collect(),
|
||||
},
|
||||
size,
|
||||
used,
|
||||
available,
|
||||
usage_percent: usage_percent as f32,
|
||||
underlying_drives,
|
||||
pool_health,
|
||||
});
|
||||
|
||||
debug!("Auto-discovered mergerfs pool: {} with {} data + {} parity disks",
|
||||
pool_info.mount_point, pool_info.data_members.len(), pool_info.parity_disks.len());
|
||||
}
|
||||
}
|
||||
|
||||
Ok(storage_pools)
|
||||
}
|
||||
|
||||
/// Group filesystems by their backing physical drive
|
||||
fn group_filesystems_by_physical_drive(&self, filesystems: &[MountInfo]) -> Result<std::collections::HashMap<String, Vec<MountInfo>>> {
|
||||
let mut grouped = std::collections::HashMap::new();
|
||||
|
||||
for fs in filesystems {
|
||||
// Get the physical drive name for this mount point
|
||||
if let Some(devices) = self.detected_devices.get(&fs.mount_point) {
|
||||
if let Some(device_name) = devices.first() {
|
||||
// Extract drive name (e.g., "nvme0n1" from "nvme0n1")
|
||||
let drive_name = device_name.clone();
|
||||
grouped.entry(drive_name).or_insert_with(Vec::new).push(fs.clone());
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
Ok(grouped)
|
||||
}
|
||||
|
||||
/// Create a physical drive pool containing multiple filesystems
|
||||
fn create_physical_drive_pool(&self, drive_name: &str, filesystems: &[MountInfo]) -> Result<StoragePool> {
|
||||
if filesystems.is_empty() {
|
||||
return Err(anyhow::anyhow!("No filesystems for drive {}", drive_name));
|
||||
}
|
||||
|
||||
// Calculate total usage across all filesystems on this drive
|
||||
let mut total_capacity = 0u64;
|
||||
let mut total_used = 0u64;
|
||||
|
||||
for fs in filesystems {
|
||||
if let Ok((capacity, used)) = self.get_filesystem_info(&fs.mount_point) {
|
||||
total_capacity += capacity;
|
||||
total_used += used;
|
||||
}
|
||||
}
|
||||
|
||||
let total_available = total_capacity.saturating_sub(total_used);
|
||||
let usage_percent = if total_capacity > 0 {
|
||||
(total_used as f64 / total_capacity as f64) * 100.0
|
||||
} else { 0.0 };
|
||||
|
||||
// Get drive information for SMART data
|
||||
let device_names = vec![drive_name.to_string()];
|
||||
let underlying_drives = self.get_drive_info_for_devices(&device_names)?;
|
||||
|
||||
// Collect filesystem mount points for this drive
|
||||
let filesystem_mount_points: Vec<String> = filesystems.iter()
|
||||
.map(|fs| fs.mount_point.clone())
|
||||
.collect();
|
||||
|
||||
Ok(StoragePool {
|
||||
name: drive_name.to_string(),
|
||||
mount_point: format!("(physical drive)"), // Special marker for physical drives
|
||||
filesystem: "physical".to_string(),
|
||||
pool_type: StoragePoolType::PhysicalDrive {
|
||||
filesystems: filesystem_mount_points,
|
||||
},
|
||||
size: self.bytes_to_human_readable(total_capacity),
|
||||
used: self.bytes_to_human_readable(total_used),
|
||||
available: self.bytes_to_human_readable(total_available),
|
||||
usage_percent: usage_percent as f32,
|
||||
pool_health: if underlying_drives.iter().all(|d| d.health_status == "PASSED") {
|
||||
PoolHealth::Healthy
|
||||
} else {
|
||||
PoolHealth::Critical
|
||||
},
|
||||
underlying_drives,
|
||||
})
|
||||
}
|
||||
|
||||
/// Calculate pool health specifically for mergerfs pools
|
||||
fn calculate_mergerfs_pool_health(&self, data_members: &[String], parity_disks: &[String], drives: &[DriveInfo]) -> PoolHealth {
|
||||
// Get device names for data and parity drives
|
||||
let mut data_device_names = Vec::new();
|
||||
let mut parity_device_names = Vec::new();
|
||||
|
||||
for member in data_members {
|
||||
if let Some(devices) = self.detected_devices.get(member) {
|
||||
data_device_names.extend(devices.clone());
|
||||
}
|
||||
}
|
||||
|
||||
for parity in parity_disks {
|
||||
if let Some(devices) = self.detected_devices.get(parity) {
|
||||
parity_device_names.extend(devices.clone());
|
||||
}
|
||||
}
|
||||
|
||||
let failed_data = drives.iter()
|
||||
.filter(|d| data_device_names.contains(&d.device) && d.health_status != "PASSED")
|
||||
.count();
|
||||
let failed_parity = drives.iter()
|
||||
.filter(|d| parity_device_names.contains(&d.device) && d.health_status != "PASSED")
|
||||
.count();
|
||||
|
||||
match (failed_data, failed_parity) {
|
||||
(0, 0) => PoolHealth::Healthy,
|
||||
(1, 0) => PoolHealth::Degraded, // Can recover with parity
|
||||
(0, 1) => PoolHealth::Degraded, // Lost parity protection
|
||||
_ => PoolHealth::Critical, // Multiple failures
|
||||
}
|
||||
}
|
||||
|
||||
/// Fallback to legacy configuration-based storage pools
|
||||
fn get_legacy_configured_storage_pools(&self) -> Result<Vec<StoragePool>> {
|
||||
let mut storage_pools = Vec::new();
|
||||
let mut processed_pools = std::collections::HashSet::new();
|
||||
|
||||
// Legacy implementation: use filesystem configuration
|
||||
for fs_config in &self.config.filesystems {
|
||||
if !fs_config.monitor {
|
||||
continue;
|
||||
}
|
||||
|
||||
let (pool_type, skip_in_single_mode) = self.determine_pool_type(&fs_config.storage_type);
|
||||
|
||||
// Skip member disks if they're part of a pool
|
||||
if skip_in_single_mode {
|
||||
continue;
|
||||
}
|
||||
|
||||
// Check if this pool was already processed (in case of multiple member disks)
|
||||
let pool_key = match &pool_type {
|
||||
StoragePoolType::MergerfsPool { .. } => {
|
||||
// For mergerfs pools, use the main mount point
|
||||
if fs_config.fs_type == "fuse.mergerfs" {
|
||||
fs_config.mount_point.clone()
|
||||
} else {
|
||||
continue; // Skip member disks
|
||||
}
|
||||
}
|
||||
_ => fs_config.mount_point.clone()
|
||||
};
|
||||
|
||||
if processed_pools.contains(&pool_key) {
|
||||
continue;
|
||||
}
|
||||
processed_pools.insert(pool_key.clone());
|
||||
|
||||
// Get filesystem stats for the mount point
|
||||
match self.get_filesystem_info(&fs_config.mount_point) {
|
||||
Ok((total_bytes, used_bytes)) => {
|
||||
let available_bytes = total_bytes - used_bytes;
|
||||
let usage_percent = if total_bytes > 0 {
|
||||
(used_bytes as f64 / total_bytes as f64) * 100.0
|
||||
} else {
|
||||
0.0
|
||||
};
|
||||
} else { 0.0 };
|
||||
|
||||
// Convert bytes to human-readable format
|
||||
let size = self.bytes_to_human_readable(total_bytes);
|
||||
let used = self.bytes_to_human_readable(used_bytes);
|
||||
let available = self.bytes_to_human_readable(available_bytes);
|
||||
|
||||
// Get individual drive information using pre-detected devices
|
||||
let device_names = self.detected_devices.get(&fs_config.mount_point).cloned().unwrap_or_default();
|
||||
let underlying_drives = self.get_drive_info_for_devices(&device_names)?;
|
||||
// Get underlying drives based on pool type
|
||||
let underlying_drives = self.get_pool_drives(&pool_type, &fs_config.mount_point)?;
|
||||
|
||||
// Calculate pool health
|
||||
let pool_health = self.calculate_pool_health(&pool_type, &underlying_drives);
|
||||
let drive_count = underlying_drives.len();
|
||||
|
||||
storage_pools.push(StoragePool {
|
||||
name: fs_config.name.clone(),
|
||||
mount_point: fs_config.mount_point.clone(),
|
||||
filesystem: fs_config.fs_type.clone(),
|
||||
storage_type: fs_config.storage_type.clone(),
|
||||
pool_type: pool_type.clone(),
|
||||
size,
|
||||
used,
|
||||
available,
|
||||
usage_percent: usage_percent as f32,
|
||||
underlying_drives,
|
||||
pool_health,
|
||||
});
|
||||
|
||||
debug!(
|
||||
"Storage pool '{}' ({}) at {} with {} detected drives",
|
||||
fs_config.name, fs_config.storage_type, fs_config.mount_point, device_names.len()
|
||||
"Legacy configured storage pool '{}' ({:?}) at {} with {} drives, health: {:?}",
|
||||
fs_config.name, pool_type, fs_config.mount_point, drive_count, pool_health
|
||||
);
|
||||
}
|
||||
Err(e) => {
|
||||
@@ -129,6 +599,138 @@ impl DiskCollector {
|
||||
Ok(storage_pools)
|
||||
}
|
||||
|
||||
/// Determine the storage pool type from configuration
|
||||
fn determine_pool_type(&self, storage_type: &str) -> (StoragePoolType, bool) {
|
||||
match storage_type {
|
||||
"single" => (StoragePoolType::Single, false),
|
||||
"mergerfs_pool" | "mergerfs" => {
|
||||
// Find associated member disks
|
||||
let data_disks = self.find_pool_member_disks("mergerfs_member");
|
||||
let parity_disks = self.find_pool_member_disks("parity");
|
||||
(StoragePoolType::MergerfsPool { data_disks, parity_disks }, false)
|
||||
}
|
||||
"mergerfs_member" => (StoragePoolType::Single, true), // Skip, part of pool
|
||||
"parity" => (StoragePoolType::Single, true), // Skip, part of pool
|
||||
"raid1" | "raid5" | "raid6" => {
|
||||
let member_disks = self.find_pool_member_disks(&format!("{}_member", storage_type));
|
||||
(StoragePoolType::RaidArray {
|
||||
level: storage_type.to_uppercase(),
|
||||
member_disks,
|
||||
spare_disks: Vec::new()
|
||||
}, false)
|
||||
}
|
||||
_ => (StoragePoolType::Single, false) // Default to single
|
||||
}
|
||||
}
|
||||
|
||||
/// Find member disks for a specific storage type
|
||||
fn find_pool_member_disks(&self, member_type: &str) -> Vec<String> {
|
||||
let mut member_disks = Vec::new();
|
||||
|
||||
for fs_config in &self.config.filesystems {
|
||||
if fs_config.storage_type == member_type && fs_config.monitor {
|
||||
// Get device names for this mount point
|
||||
if let Some(devices) = self.detected_devices.get(&fs_config.mount_point) {
|
||||
member_disks.extend(devices.clone());
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
member_disks
|
||||
}
|
||||
|
||||
/// Get drive information for a specific pool type
|
||||
fn get_pool_drives(&self, pool_type: &StoragePoolType, mount_point: &str) -> Result<Vec<DriveInfo>> {
|
||||
match pool_type {
|
||||
StoragePoolType::Single => {
|
||||
// Single disk - use detected devices for this mount point
|
||||
let device_names = self.detected_devices.get(mount_point).cloned().unwrap_or_default();
|
||||
self.get_drive_info_for_devices(&device_names)
|
||||
}
|
||||
StoragePoolType::PhysicalDrive { .. } => {
|
||||
// Physical drive - get drive info for the drive directly (mount_point not used)
|
||||
let device_names = vec![mount_point.to_string()];
|
||||
self.get_drive_info_for_devices(&device_names)
|
||||
}
|
||||
StoragePoolType::MergerfsPool { data_disks, parity_disks } => {
|
||||
// Mergerfs pool - collect all member drives
|
||||
let mut all_disks = data_disks.clone();
|
||||
all_disks.extend(parity_disks.clone());
|
||||
self.get_drive_info_for_devices(&all_disks)
|
||||
}
|
||||
StoragePoolType::RaidArray { member_disks, spare_disks, .. } => {
|
||||
// RAID array - collect member and spare drives
|
||||
let mut all_disks = member_disks.clone();
|
||||
all_disks.extend(spare_disks.clone());
|
||||
self.get_drive_info_for_devices(&all_disks)
|
||||
}
|
||||
StoragePoolType::ZfsPool { .. } => {
|
||||
// ZFS pool - use detected devices (future implementation)
|
||||
let device_names = self.detected_devices.get(mount_point).cloned().unwrap_or_default();
|
||||
self.get_drive_info_for_devices(&device_names)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Calculate pool health based on drive status and pool type
|
||||
fn calculate_pool_health(&self, pool_type: &StoragePoolType, drives: &[DriveInfo]) -> PoolHealth {
|
||||
match pool_type {
|
||||
StoragePoolType::Single => {
|
||||
// Single disk - health is just the drive health
|
||||
if drives.is_empty() {
|
||||
PoolHealth::Unknown
|
||||
} else if drives.iter().all(|d| d.health_status == "PASSED") {
|
||||
PoolHealth::Healthy
|
||||
} else {
|
||||
PoolHealth::Critical
|
||||
}
|
||||
}
|
||||
StoragePoolType::PhysicalDrive { .. } => {
|
||||
// Physical drive - health is just the drive health (similar to Single)
|
||||
if drives.is_empty() {
|
||||
PoolHealth::Unknown
|
||||
} else if drives.iter().all(|d| d.health_status == "PASSED") {
|
||||
PoolHealth::Healthy
|
||||
} else {
|
||||
PoolHealth::Critical
|
||||
}
|
||||
}
|
||||
StoragePoolType::MergerfsPool { data_disks, parity_disks } => {
|
||||
let failed_data = drives.iter()
|
||||
.filter(|d| data_disks.contains(&d.device) && d.health_status != "PASSED")
|
||||
.count();
|
||||
let failed_parity = drives.iter()
|
||||
.filter(|d| parity_disks.contains(&d.device) && d.health_status != "PASSED")
|
||||
.count();
|
||||
|
||||
match (failed_data, failed_parity) {
|
||||
(0, 0) => PoolHealth::Healthy,
|
||||
(1, 0) => PoolHealth::Degraded, // Can recover with parity
|
||||
(0, 1) => PoolHealth::Degraded, // Lost parity protection
|
||||
_ => PoolHealth::Critical, // Multiple failures
|
||||
}
|
||||
}
|
||||
StoragePoolType::RaidArray { level, .. } => {
|
||||
let failed_drives = drives.iter().filter(|d| d.health_status != "PASSED").count();
|
||||
|
||||
// Basic RAID health logic (can be enhanced per RAID level)
|
||||
match failed_drives {
|
||||
0 => PoolHealth::Healthy,
|
||||
1 if level.contains('1') || level.contains('5') || level.contains('6') => PoolHealth::Degraded,
|
||||
_ => PoolHealth::Critical,
|
||||
}
|
||||
}
|
||||
StoragePoolType::ZfsPool { .. } => {
|
||||
// ZFS health would require zpool status parsing (future)
|
||||
if drives.iter().all(|d| d.health_status == "PASSED") {
|
||||
PoolHealth::Healthy
|
||||
} else {
|
||||
PoolHealth::Degraded
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Get drive information for a list of device names
|
||||
fn get_drive_info_for_devices(&self, device_names: &[String]) -> Result<Vec<DriveInfo>> {
|
||||
let mut drives = Vec::new();
|
||||
@@ -288,6 +890,11 @@ impl DiskCollector {
|
||||
}
|
||||
}
|
||||
|
||||
/// Convert bytes to gigabytes
|
||||
fn bytes_to_gb(&self, bytes: u64) -> f32 {
|
||||
bytes as f32 / (1024.0 * 1024.0 * 1024.0)
|
||||
}
|
||||
|
||||
/// Detect device backing a mount point using lsblk (static version for startup)
|
||||
fn detect_device_for_mount_point_static(mount_point: &str) -> Result<Vec<String>> {
|
||||
let output = Command::new("lsblk")
|
||||
@@ -448,8 +1055,8 @@ impl Collector for DiskCollector {
|
||||
let used_gb = self.parse_size_to_gb(&storage_pool.used);
|
||||
let avail_gb = self.parse_size_to_gb(&storage_pool.available);
|
||||
|
||||
// Calculate status based on configured thresholds
|
||||
let pool_status = if storage_pool.usage_percent >= self.config.usage_critical_percent {
|
||||
// Calculate status based on configured thresholds and pool health
|
||||
let usage_status = if storage_pool.usage_percent >= self.config.usage_critical_percent {
|
||||
Status::Critical
|
||||
} else if storage_pool.usage_percent >= self.config.usage_warning_percent {
|
||||
Status::Warning
|
||||
@@ -457,6 +1064,14 @@ impl Collector for DiskCollector {
|
||||
Status::Ok
|
||||
};
|
||||
|
||||
let pool_status = match storage_pool.pool_health {
|
||||
PoolHealth::Critical => Status::Critical,
|
||||
PoolHealth::Degraded => Status::Warning,
|
||||
PoolHealth::Rebuilding => Status::Warning,
|
||||
PoolHealth::Healthy => usage_status,
|
||||
PoolHealth::Unknown => Status::Unknown,
|
||||
};
|
||||
|
||||
// Storage pool info metrics
|
||||
metrics.push(Metric {
|
||||
name: format!("disk_{}_mount_point", pool_name),
|
||||
@@ -476,15 +1091,50 @@ impl Collector for DiskCollector {
|
||||
timestamp,
|
||||
});
|
||||
|
||||
// Enhanced pool type information
|
||||
let pool_type_str = match &storage_pool.pool_type {
|
||||
StoragePoolType::Single => "single".to_string(),
|
||||
StoragePoolType::PhysicalDrive { filesystems } => {
|
||||
format!("drive ({})", filesystems.len())
|
||||
}
|
||||
StoragePoolType::MergerfsPool { data_disks, parity_disks } => {
|
||||
format!("mergerfs ({}+{})", data_disks.len(), parity_disks.len())
|
||||
}
|
||||
StoragePoolType::RaidArray { level, member_disks, spare_disks } => {
|
||||
format!("{} ({}+{})", level, member_disks.len(), spare_disks.len())
|
||||
}
|
||||
StoragePoolType::ZfsPool { pool_name, .. } => {
|
||||
format!("zfs ({})", pool_name)
|
||||
}
|
||||
};
|
||||
|
||||
metrics.push(Metric {
|
||||
name: format!("disk_{}_storage_type", pool_name),
|
||||
value: MetricValue::String(storage_pool.storage_type.clone()),
|
||||
name: format!("disk_{}_pool_type", pool_name),
|
||||
value: MetricValue::String(pool_type_str.clone()),
|
||||
unit: None,
|
||||
description: Some(format!("Type: {}", storage_pool.storage_type)),
|
||||
description: Some(format!("Type: {}", pool_type_str)),
|
||||
status: Status::Ok,
|
||||
timestamp,
|
||||
});
|
||||
|
||||
// Pool health status
|
||||
let health_str = match storage_pool.pool_health {
|
||||
PoolHealth::Healthy => "healthy",
|
||||
PoolHealth::Degraded => "degraded",
|
||||
PoolHealth::Critical => "critical",
|
||||
PoolHealth::Rebuilding => "rebuilding",
|
||||
PoolHealth::Unknown => "unknown",
|
||||
};
|
||||
|
||||
metrics.push(Metric {
|
||||
name: format!("disk_{}_pool_health", pool_name),
|
||||
value: MetricValue::String(health_str.to_string()),
|
||||
unit: None,
|
||||
description: Some(format!("Health: {}", health_str)),
|
||||
status: pool_status,
|
||||
timestamp,
|
||||
});
|
||||
|
||||
// Storage pool size metrics
|
||||
metrics.push(Metric {
|
||||
name: format!("disk_{}_total_gb", pool_name),
|
||||
@@ -570,6 +1220,79 @@ impl Collector for DiskCollector {
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// Individual filesystem metrics for PhysicalDrive pools
|
||||
if let StoragePoolType::PhysicalDrive { filesystems } = &storage_pool.pool_type {
|
||||
for filesystem_mount in filesystems {
|
||||
if let Ok((total_bytes, used_bytes)) = self.get_filesystem_info(filesystem_mount) {
|
||||
let available_bytes = total_bytes - used_bytes;
|
||||
let usage_percent = if total_bytes > 0 {
|
||||
(used_bytes as f64 / total_bytes as f64) * 100.0
|
||||
} else { 0.0 };
|
||||
|
||||
let filesystem_name = if filesystem_mount == "/" {
|
||||
"root".to_string()
|
||||
} else {
|
||||
filesystem_mount.trim_start_matches('/').replace('/', "_")
|
||||
};
|
||||
|
||||
// Calculate filesystem status based on usage
|
||||
let fs_status = if usage_percent >= self.config.usage_critical_percent as f64 {
|
||||
Status::Critical
|
||||
} else if usage_percent >= self.config.usage_warning_percent as f64 {
|
||||
Status::Warning
|
||||
} else {
|
||||
Status::Ok
|
||||
};
|
||||
|
||||
// Filesystem usage metrics
|
||||
metrics.push(Metric {
|
||||
name: format!("disk_{}_fs_{}_usage_percent", pool_name, filesystem_name),
|
||||
value: MetricValue::Float(usage_percent as f32),
|
||||
unit: Some("%".to_string()),
|
||||
description: Some(format!("{}: {:.0}%", filesystem_mount, usage_percent)),
|
||||
status: fs_status.clone(),
|
||||
timestamp,
|
||||
});
|
||||
|
||||
metrics.push(Metric {
|
||||
name: format!("disk_{}_fs_{}_used_gb", pool_name, filesystem_name),
|
||||
value: MetricValue::Float(self.bytes_to_gb(used_bytes)),
|
||||
unit: Some("GB".to_string()),
|
||||
description: Some(format!("{}: {}GB used", filesystem_mount, self.bytes_to_human_readable(used_bytes))),
|
||||
status: Status::Ok,
|
||||
timestamp,
|
||||
});
|
||||
|
||||
metrics.push(Metric {
|
||||
name: format!("disk_{}_fs_{}_total_gb", pool_name, filesystem_name),
|
||||
value: MetricValue::Float(self.bytes_to_gb(total_bytes)),
|
||||
unit: Some("GB".to_string()),
|
||||
description: Some(format!("{}: {}GB total", filesystem_mount, self.bytes_to_human_readable(total_bytes))),
|
||||
status: Status::Ok,
|
||||
timestamp,
|
||||
});
|
||||
|
||||
metrics.push(Metric {
|
||||
name: format!("disk_{}_fs_{}_available_gb", pool_name, filesystem_name),
|
||||
value: MetricValue::Float(self.bytes_to_gb(available_bytes)),
|
||||
unit: Some("GB".to_string()),
|
||||
description: Some(format!("{}: {}GB available", filesystem_mount, self.bytes_to_human_readable(available_bytes))),
|
||||
status: Status::Ok,
|
||||
timestamp,
|
||||
});
|
||||
|
||||
metrics.push(Metric {
|
||||
name: format!("disk_{}_fs_{}_mount_point", pool_name, filesystem_name),
|
||||
value: MetricValue::String(filesystem_mount.clone()),
|
||||
unit: None,
|
||||
description: Some(format!("Mount: {}", filesystem_mount)),
|
||||
status: Status::Ok,
|
||||
timestamp,
|
||||
});
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Add storage pool count metric
|
||||
@@ -592,5 +1315,4 @@ impl Collector for DiskCollector {
|
||||
|
||||
Ok(metrics)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
[package]
|
||||
name = "cm-dashboard"
|
||||
version = "0.1.90"
|
||||
version = "0.1.107"
|
||||
edition = "2021"
|
||||
|
||||
[dependencies]
|
||||
|
||||
@@ -220,7 +220,7 @@ impl TuiApp {
|
||||
let connection_ip = self.get_connection_ip(&hostname);
|
||||
// Create command that shows logo, rebuilds, and waits for user input
|
||||
let logo_and_rebuild = format!(
|
||||
"echo 'Rebuilding system: {} ({})' && ssh -tt {}@{} \"bash -ic {}\" && echo 'Press any key to close...' && read -n 1 -s",
|
||||
"echo 'Rebuilding system: {} ({})' && ssh -tt {}@{} \"bash -ic '{}'\"",
|
||||
hostname,
|
||||
connection_ip,
|
||||
self.config.ssh.rebuild_user,
|
||||
@@ -244,7 +244,7 @@ impl TuiApp {
|
||||
let connection_ip = self.get_connection_ip(&hostname);
|
||||
// Create command that shows logo, runs backup, and waits for user input
|
||||
let logo_and_backup = format!(
|
||||
"echo 'Running backup: {} ({})' && ssh -tt {}@{} \"bash -ic {}\" && echo 'Press any key to close...' && read -n 1 -s",
|
||||
"echo 'Running backup: {} ({})' && ssh -tt {}@{} \"bash -ic '{}'\"",
|
||||
hostname,
|
||||
connection_ip,
|
||||
self.config.ssh.rebuild_user,
|
||||
@@ -267,7 +267,7 @@ impl TuiApp {
|
||||
if let (Some(service_name), Some(hostname)) = (self.get_selected_service(), self.current_host.clone()) {
|
||||
let connection_ip = self.get_connection_ip(&hostname);
|
||||
let service_start_command = format!(
|
||||
"echo 'Starting service: {} on {}' && ssh -tt {}@{} \"bash -ic '{} start {}'\" && echo 'Press any key to close...' && read -n 1 -s",
|
||||
"echo 'Starting service: {} on {}' && ssh -tt {}@{} \"bash -ic '{} start {}'\"",
|
||||
service_name,
|
||||
hostname,
|
||||
self.config.ssh.rebuild_user,
|
||||
@@ -291,7 +291,7 @@ impl TuiApp {
|
||||
if let (Some(service_name), Some(hostname)) = (self.get_selected_service(), self.current_host.clone()) {
|
||||
let connection_ip = self.get_connection_ip(&hostname);
|
||||
let service_stop_command = format!(
|
||||
"echo 'Stopping service: {} on {}' && ssh -tt {}@{} \"bash -ic '{} stop {}'\" && echo 'Press any key to close...' && read -n 1 -s",
|
||||
"echo 'Stopping service: {} on {}' && ssh -tt {}@{} \"bash -ic '{} stop {}'\"",
|
||||
service_name,
|
||||
hostname,
|
||||
self.config.ssh.rebuild_user,
|
||||
@@ -310,16 +310,15 @@ impl TuiApp {
|
||||
.ok(); // Ignore errors, tmux will handle them
|
||||
}
|
||||
}
|
||||
KeyCode::Char('J') => {
|
||||
// Show service logs via journalctl in tmux split window
|
||||
KeyCode::Char('L') => {
|
||||
// Show service logs via service-manage script in tmux split window
|
||||
if let (Some(service_name), Some(hostname)) = (self.get_selected_service(), self.current_host.clone()) {
|
||||
let connection_ip = self.get_connection_ip(&hostname);
|
||||
let journalctl_command = format!(
|
||||
"echo 'Viewing logs for service: {} on {}' && ssh -tt {}@{} 'sudo journalctl -u {}.service -f --no-pager -n 50'",
|
||||
service_name,
|
||||
hostname,
|
||||
let logs_command = format!(
|
||||
"ssh -tt {}@{} '{} logs {}'",
|
||||
self.config.ssh.rebuild_user,
|
||||
connection_ip,
|
||||
self.config.ssh.service_manage_cmd,
|
||||
service_name
|
||||
);
|
||||
|
||||
@@ -328,39 +327,11 @@ impl TuiApp {
|
||||
.arg("-v")
|
||||
.arg("-p")
|
||||
.arg("30")
|
||||
.arg(&journalctl_command)
|
||||
.arg(&logs_command)
|
||||
.spawn()
|
||||
.ok(); // Ignore errors, tmux will handle them
|
||||
}
|
||||
}
|
||||
KeyCode::Char('L') => {
|
||||
// Show custom service log file in tmux split window
|
||||
if let (Some(service_name), Some(hostname)) = (self.get_selected_service(), self.current_host.clone()) {
|
||||
// Check if this service has a custom log file configured
|
||||
if let Some(host_logs) = self.config.service_logs.get(&hostname) {
|
||||
if let Some(log_config) = host_logs.iter().find(|config| config.service_name == service_name) {
|
||||
let connection_ip = self.get_connection_ip(&hostname);
|
||||
let tail_command = format!(
|
||||
"echo 'Viewing custom logs for service: {} on {}' && ssh -tt {}@{} 'sudo tail -n 50 -f {}'",
|
||||
service_name,
|
||||
hostname,
|
||||
self.config.ssh.rebuild_user,
|
||||
connection_ip,
|
||||
log_config.log_file_path
|
||||
);
|
||||
|
||||
std::process::Command::new("tmux")
|
||||
.arg("split-window")
|
||||
.arg("-v")
|
||||
.arg("-p")
|
||||
.arg("30")
|
||||
.arg(&tail_command)
|
||||
.spawn()
|
||||
.ok(); // Ignore errors, tmux will handle them
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
KeyCode::Char('w') => {
|
||||
// Wake on LAN for offline hosts
|
||||
if let Some(hostname) = self.current_host.clone() {
|
||||
@@ -393,7 +364,7 @@ impl TuiApp {
|
||||
if let Some(hostname) = self.current_host.clone() {
|
||||
let connection_ip = self.get_connection_ip(&hostname);
|
||||
let ssh_command = format!(
|
||||
"echo 'Opening SSH terminal to: {}' && ssh -tt {}@{} && echo 'Press any key to close...' && read -n 1 -s",
|
||||
"echo 'Opening SSH terminal to: {}' && ssh -tt {}@{}",
|
||||
hostname,
|
||||
self.config.ssh.rebuild_user,
|
||||
connection_ip
|
||||
@@ -618,12 +589,13 @@ impl TuiApp {
|
||||
// Split the title bar into left and right sections
|
||||
let chunks = Layout::default()
|
||||
.direction(Direction::Horizontal)
|
||||
.constraints([Constraint::Length(15), Constraint::Min(0)])
|
||||
.constraints([Constraint::Length(22), Constraint::Min(0)])
|
||||
.split(area);
|
||||
|
||||
// Left side: "cm-dashboard" text
|
||||
// Left side: "cm-dashboard" text with version
|
||||
let title_text = format!(" cm-dashboard v{}", env!("CARGO_PKG_VERSION"));
|
||||
let left_span = Span::styled(
|
||||
" cm-dashboard",
|
||||
&title_text,
|
||||
Style::default().fg(Theme::background()).bg(background_color).add_modifier(Modifier::BOLD)
|
||||
);
|
||||
let left_title = Paragraph::new(Line::from(vec![left_span]))
|
||||
@@ -695,35 +667,27 @@ impl TuiApp {
|
||||
return host_summary_metric.status;
|
||||
}
|
||||
|
||||
// Fallback to old aggregation logic with proper Pending handling
|
||||
// Rewritten status aggregation - only Critical, Warning, or OK for top bar
|
||||
let mut has_critical = false;
|
||||
let mut has_warning = false;
|
||||
let mut has_pending = false;
|
||||
let mut ok_count = 0;
|
||||
|
||||
for metric in &metrics {
|
||||
match metric.status {
|
||||
Status::Critical => has_critical = true,
|
||||
Status::Warning => has_warning = true,
|
||||
Status::Pending => has_pending = true,
|
||||
Status::Ok => ok_count += 1,
|
||||
Status::Inactive => ok_count += 1, // Treat inactive as OK for aggregation
|
||||
Status::Unknown => {}, // Ignore unknown for aggregation
|
||||
Status::Offline => {}, // Ignore offline for aggregation
|
||||
// Treat all other statuses as OK for top bar aggregation
|
||||
Status::Ok | Status::Pending | Status::Inactive | Status::Unknown => {},
|
||||
Status::Offline => {}, // Ignore offline
|
||||
}
|
||||
}
|
||||
|
||||
// Priority order: Critical > Warning > Pending > Ok > Unknown
|
||||
// Only return Critical, Warning, or OK - no other statuses
|
||||
if has_critical {
|
||||
Status::Critical
|
||||
} else if has_warning {
|
||||
Status::Warning
|
||||
} else if has_pending {
|
||||
Status::Pending
|
||||
} else if ok_count > 0 {
|
||||
Status::Ok
|
||||
} else {
|
||||
Status::Unknown
|
||||
Status::Ok
|
||||
}
|
||||
}
|
||||
|
||||
@@ -747,9 +711,10 @@ impl TuiApp {
|
||||
shortcuts.push("Tab: Host".to_string());
|
||||
shortcuts.push("↑↓/jk: Select".to_string());
|
||||
shortcuts.push("r: Rebuild".to_string());
|
||||
shortcuts.push("B: Backup".to_string());
|
||||
shortcuts.push("s/S: Start/Stop".to_string());
|
||||
shortcuts.push("J: Logs".to_string());
|
||||
shortcuts.push("L: Custom".to_string());
|
||||
shortcuts.push("L: Logs".to_string());
|
||||
shortcuts.push("t: Terminal".to_string());
|
||||
shortcuts.push("w: Wake".to_string());
|
||||
|
||||
// Always show quit
|
||||
|
||||
@@ -30,6 +30,8 @@ pub struct BackupWidget {
|
||||
backup_disk_product_name: Option<String>,
|
||||
/// Backup disk serial number from SMART data
|
||||
backup_disk_serial_number: Option<String>,
|
||||
/// Backup disk wear percentage from SMART data
|
||||
backup_disk_wear_percent: Option<f32>,
|
||||
/// Backup disk filesystem label
|
||||
backup_disk_filesystem_label: Option<String>,
|
||||
/// Number of completed services
|
||||
@@ -65,6 +67,7 @@ impl BackupWidget {
|
||||
backup_disk_used_gb: None,
|
||||
backup_disk_product_name: None,
|
||||
backup_disk_serial_number: None,
|
||||
backup_disk_wear_percent: None,
|
||||
backup_disk_filesystem_label: None,
|
||||
services_completed_count: None,
|
||||
services_failed_count: None,
|
||||
@@ -197,6 +200,9 @@ impl Widget for BackupWidget {
|
||||
"backup_disk_serial_number" => {
|
||||
self.backup_disk_serial_number = Some(metric.value.as_string());
|
||||
}
|
||||
"backup_disk_wear_percent" => {
|
||||
self.backup_disk_wear_percent = metric.value.as_f32();
|
||||
}
|
||||
"backup_disk_filesystem_label" => {
|
||||
self.backup_disk_filesystem_label = Some(metric.value.as_string());
|
||||
}
|
||||
@@ -328,21 +334,31 @@ impl BackupWidget {
|
||||
);
|
||||
lines.push(ratatui::text::Line::from(disk_spans));
|
||||
|
||||
// Serial number as sub-item
|
||||
// Collect sub-items to determine tree structure
|
||||
let mut sub_items = Vec::new();
|
||||
|
||||
if let Some(serial) = &self.backup_disk_serial_number {
|
||||
lines.push(ratatui::text::Line::from(vec![
|
||||
ratatui::text::Span::styled(" ├─ ", Typography::tree()),
|
||||
ratatui::text::Span::styled(format!("S/N: {}", serial), Typography::secondary())
|
||||
]));
|
||||
sub_items.push(format!("S/N: {}", serial));
|
||||
}
|
||||
|
||||
// Usage as sub-item
|
||||
|
||||
if let Some(wear) = self.backup_disk_wear_percent {
|
||||
sub_items.push(format!("Wear: {:.0}%", wear));
|
||||
}
|
||||
|
||||
if let (Some(used), Some(total)) = (self.backup_disk_used_gb, self.backup_disk_total_gb) {
|
||||
let used_str = Self::format_size_with_proper_units(used);
|
||||
let total_str = Self::format_size_with_proper_units(total);
|
||||
sub_items.push(format!("Usage: {}/{}", used_str, total_str));
|
||||
}
|
||||
|
||||
// Render sub-items with proper tree structure
|
||||
let num_items = sub_items.len();
|
||||
for (i, item) in sub_items.into_iter().enumerate() {
|
||||
let is_last = i == num_items - 1;
|
||||
let tree_char = if is_last { " └─ " } else { " ├─ " };
|
||||
lines.push(ratatui::text::Line::from(vec![
|
||||
ratatui::text::Span::styled(" └─ ", Typography::tree()),
|
||||
ratatui::text::Span::styled(format!("Usage: {}/{}", used_str, total_str), Typography::secondary())
|
||||
ratatui::text::Span::styled(tree_char, Typography::tree()),
|
||||
ratatui::text::Span::styled(item, Typography::secondary())
|
||||
]));
|
||||
}
|
||||
}
|
||||
|
||||
@@ -209,36 +209,13 @@ impl ServicesWidget {
|
||||
}
|
||||
|
||||
/// Get currently selected service name (for actions)
|
||||
/// Only returns parent service names since only parent services can be selected
|
||||
pub fn get_selected_service(&self) -> Option<String> {
|
||||
// Build the same display list to find the selected service
|
||||
let mut display_lines: Vec<(String, Status, bool, Option<(ServiceInfo, bool)>, String)> = Vec::new();
|
||||
|
||||
// Only parent services can be selected, so just get the parent service at selected_index
|
||||
let mut parent_services: Vec<_> = self.parent_services.iter().collect();
|
||||
parent_services.sort_by(|(a, _), (b, _)| a.cmp(b));
|
||||
|
||||
for (parent_name, parent_info) in parent_services {
|
||||
let parent_line = self.format_parent_service_line(parent_name, parent_info);
|
||||
display_lines.push((parent_line, parent_info.widget_status, false, None, parent_name.clone()));
|
||||
|
||||
if let Some(sub_list) = self.sub_services.get(parent_name) {
|
||||
let mut sorted_subs = sub_list.clone();
|
||||
sorted_subs.sort_by(|(a, _), (b, _)| a.cmp(b));
|
||||
|
||||
for (i, (sub_name, sub_info)) in sorted_subs.iter().enumerate() {
|
||||
let is_last_sub = i == sorted_subs.len() - 1;
|
||||
let full_sub_name = format!("{}_{}", parent_name, sub_name);
|
||||
display_lines.push((
|
||||
sub_name.clone(),
|
||||
sub_info.widget_status,
|
||||
true,
|
||||
Some((sub_info.clone(), is_last_sub)),
|
||||
full_sub_name,
|
||||
));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
display_lines.get(self.selected_index).map(|(_, _, _, _, raw_name)| raw_name.clone())
|
||||
parent_services.get(self.selected_index).map(|(name, _)| name.to_string())
|
||||
}

/// Get total count of selectable services (parent services only, not sub-services)

@@ -45,12 +45,15 @@ pub struct SystemWidget {
struct StoragePool {
    name: String,
    mount_point: String,
    pool_type: String, // "Single", "Raid0", etc.
    pool_type: String, // "single", "mergerfs (2+1)", "RAID5 (3+1)", etc.
    pool_health: Option<String>, // "healthy", "degraded", "critical", "rebuilding"
    drives: Vec<StorageDrive>,
    filesystems: Vec<FileSystem>, // For physical drive pools: individual filesystem children
    usage_percent: Option<f32>,
    used_gb: Option<f32>,
    total_gb: Option<f32>,
    status: Status,
    health_status: Status, // Separate status for pool health vs usage
}

#[derive(Clone)]
@@ -61,6 +64,16 @@ struct StorageDrive {
    status: Status,
}

#[derive(Clone)]
struct FileSystem {
    mount_point: String,
    usage_percent: Option<f32>,
    used_gb: Option<f32>,
    total_gb: Option<f32>,
    available_gb: Option<f32>,
    status: Status,
}

impl SystemWidget {
    pub fn new() -> Self {
        Self {
@@ -155,12 +168,15 @@ impl SystemWidget {
let pool = pools.entry(pool_name.clone()).or_insert_with(|| StoragePool {
    name: pool_name.clone(),
    mount_point: mount_point.clone(),
    pool_type: "Single".to_string(), // Default, could be enhanced
    pool_type: "single".to_string(), // Default, will be updated
    pool_health: None,
    drives: Vec::new(),
    filesystems: Vec::new(),
    usage_percent: None,
    used_gb: None,
    total_gb: None,
    status: Status::Unknown,
    health_status: Status::Unknown,
});

// Parse different metric types
@@ -177,6 +193,15 @@ impl SystemWidget {
if let MetricValue::Float(total) = metric.value {
    pool.total_gb = Some(total);
}
} else if metric.name.contains("_pool_type") {
    if let MetricValue::String(pool_type) = &metric.value {
        pool.pool_type = pool_type.clone();
    }
} else if metric.name.contains("_pool_health") {
    if let MetricValue::String(health) = &metric.value {
        pool.pool_health = Some(health.clone());
        pool.health_status = metric.status.clone();
    }
} else if metric.name.contains("_temperature") {
    if let Some(drive_name) = self.extract_drive_name(&metric.name) {
        // Find existing drive or create new one
@@ -217,6 +242,91 @@ impl SystemWidget {
            }
        }
    }
} else if metric.name.contains("_fs_") {
    // Handle filesystem metrics for physical drive pools (disk_{pool}_fs_{fs_name}_{metric})
    if let (Some(fs_name), Some(metric_type)) = self.extract_filesystem_metric(&metric.name) {
        // Find or create filesystem entry
        let fs_exists = pool.filesystems.iter().any(|fs| {
            let fs_id = if fs.mount_point == "/" {
                "root".to_string()
            } else {
                fs.mount_point.trim_start_matches('/').replace('/', "_")
            };
            fs_id == fs_name
        });

        if !fs_exists {
            // Create filesystem entry with correct mount point
            let mount_point = if metric_type == "mount_point" {
                if let MetricValue::String(mount) = &metric.value {
                    mount.clone()
                } else {
                    // Fallback: handle special cases
                    if fs_name == "root" {
                        "/".to_string()
                    } else {
                        format!("/{}", fs_name.replace('_', "/"))
                    }
                }
            } else {
                // Fallback for non-mount_point metrics: generate mount point from fs_name
                if fs_name == "root" {
                    "/".to_string()
                } else {
                    format!("/{}", fs_name.replace('_', "/"))
                }
            };

            pool.filesystems.push(FileSystem {
                mount_point,
                usage_percent: None,
                used_gb: None,
                total_gb: None,
                available_gb: None,
                status: Status::Unknown,
            });
        }

        // Update the filesystem with the metric value
        if let Some(filesystem) = pool.filesystems.iter_mut().find(|fs| {
            let fs_id = if fs.mount_point == "/" {
                "root".to_string()
            } else {
                fs.mount_point.trim_start_matches('/').replace('/', "_")
            };
            fs_id == fs_name
        }) {
            match metric_type.as_str() {
                "usage_percent" => {
                    if let MetricValue::Float(usage) = metric.value {
                        filesystem.usage_percent = Some(usage);
                        filesystem.status = metric.status.clone();
                    }
                }
                "used_gb" => {
                    if let MetricValue::Float(used) = metric.value {
                        filesystem.used_gb = Some(used);
                    }
                }
                "total_gb" => {
                    if let MetricValue::Float(total) = metric.value {
                        filesystem.total_gb = Some(total);
                    }
                }
                "available_gb" => {
                    if let MetricValue::Float(available) = metric.value {
                        filesystem.available_gb = Some(available);
                    }
                }
                "mount_point" => {
                    if let MetricValue::String(mount) = &metric.value {
                        filesystem.mount_point = mount.clone();
                    }
                }
                _ => {}
            }
        }
    }
}
}
}
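Both lookups above normalize a mount point into the same fs_id that is embedded in the metric name; a tiny free-function sketch of that mapping (the example paths are hypothetical):

```rust
// Mirror of the mount-point -> fs_id normalization used when matching
// disk_{pool}_fs_{fs_name}_{metric} metrics against stored filesystems.
fn fs_id(mount_point: &str) -> String {
    if mount_point == "/" {
        "root".to_string()
    } else {
        mount_point.trim_start_matches('/').replace('/', "_")
    }
}

fn main() {
    assert_eq!(fs_id("/"), "root");
    assert_eq!(fs_id("/srv/media"), "srv_media"); // matches an fs_name of "srv_media"
}
```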
@@ -243,10 +353,17 @@ impl SystemWidget {
        return Some(metric_name[5..drive_start].to_string()); // Skip "disk_"
    }
}
// Handle filesystem metrics: disk_{pool}_fs_{filesystem}_{metric}
else if metric_name.contains("_fs_") {
    if let Some(fs_pos) = metric_name.find("_fs_") {
        return Some(metric_name[5..fs_pos].to_string()); // Skip "disk_", extract pool name before "_fs_"
    }
}
// For pool-level metrics (usage_percent, used_gb, total_gb), take everything before the metric suffix
else if let Some(suffix_pos) = metric_name.rfind("_usage_percent")
    .or_else(|| metric_name.rfind("_used_gb"))
    .or_else(|| metric_name.rfind("_total_gb")) {
    .or_else(|| metric_name.rfind("_total_gb"))
    .or_else(|| metric_name.rfind("_available_gb")) {
    return Some(metric_name[5..suffix_pos].to_string()); // Skip "disk_"
}
// Fallback to old behavior for unknown patterns
@@ -259,6 +376,28 @@ impl SystemWidget {
    None
}

/// Extract filesystem name and metric type from filesystem metric names
/// Pattern: disk_{pool}_fs_{filesystem_name}_{metric_type}
fn extract_filesystem_metric(&self, metric_name: &str) -> (Option<String>, Option<String>) {
    if metric_name.starts_with("disk_") && metric_name.contains("_fs_") {
        // Find the _fs_ part
        if let Some(fs_start) = metric_name.find("_fs_") {
            let after_fs = &metric_name[fs_start + 4..]; // Skip "_fs_"

            // Look for known metric suffixes (these can contain underscores)
            let known_suffixes = ["usage_percent", "used_gb", "total_gb", "available_gb", "mount_point"];

            for suffix in known_suffixes {
                if after_fs.ends_with(suffix) {
                    let fs_name = after_fs[..after_fs.len() - suffix.len() - 1].to_string(); // Remove suffix + underscore
                    return (Some(fs_name), Some(suffix.to_string()));
                }
            }
        }
    }
    (None, None)
}
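A stand-alone sketch of the same suffix-matching parse, using the str prefix/suffix helpers instead of manual slicing; the sample metric name is made up:

```rust
/// Illustrative variant of extract_filesystem_metric as a free function.
fn parse_fs_metric(metric_name: &str) -> Option<(String, String)> {
    // Everything after "_fs_" is "{fs_name}_{metric_type}".
    let after_fs = metric_name.strip_prefix("disk_")?.split_once("_fs_")?.1;
    let known_suffixes = ["usage_percent", "used_gb", "total_gb", "available_gb", "mount_point"];
    for suffix in known_suffixes {
        if let Some(fs_part) = after_fs.strip_suffix(suffix) {
            // Drop the underscore that separates fs_name from the metric suffix.
            let fs_name = fs_part.strip_suffix('_')?;
            return Some((fs_name.to_string(), suffix.to_string()));
        }
    }
    None
}

fn main() {
    assert_eq!(
        parse_fs_metric("disk_nvme0n1_fs_srv_media_usage_percent"),
        Some(("srv_media".to_string(), "usage_percent".to_string()))
    );
}
```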

/// Extract drive name from disk metric name
fn extract_drive_name(&self, metric_name: &str) -> Option<String> {
    // Pattern: disk_{pool_name}_{drive_name}_{metric_type}
@@ -277,73 +416,199 @@ impl SystemWidget {
    None
}

/// Render storage section with tree structure
/// Render storage section with enhanced tree structure
fn render_storage(&self) -> Vec<Line<'_>> {
    let mut lines = Vec::new();

    for pool in &self.storage_pools {
        // Pool header line
        let usage_text = match (pool.usage_percent, pool.used_gb, pool.total_gb) {
            (Some(pct), Some(used), Some(total)) => {
                format!("{:.0}% {:.1}GB/{:.1}GB", pct, used, total)
            }
            _ => "—% —GB/—GB".to_string(),
        };

        let pool_label = if pool.pool_type.to_lowercase() == "single" {
        // Pool header line with type and health
        let pool_label = if pool.pool_type == "single" {
            format!("{}:", pool.mount_point)
        } else {
            format!("{} ({}):", pool.mount_point, pool.pool_type)
        };
        let pool_spans = StatusIcons::create_status_spans(
            pool.status.clone(),
            pool.health_status.clone(),
            &pool_label
        );
        lines.push(Line::from(pool_spans));

        // Drive lines with tree structure
        let has_usage_line = pool.usage_percent.is_some();
        for (i, drive) in pool.drives.iter().enumerate() {
            let is_last_drive = i == pool.drives.len() - 1;
            let tree_symbol = if is_last_drive && !has_usage_line { "└─" } else { "├─" };

            let mut drive_info = Vec::new();
            if let Some(temp) = drive.temperature {
                drive_info.push(format!("T: {:.0}C", temp));
        // Pool health line (for multi-disk pools)
        if pool.pool_type != "single" {
            if let Some(health) = &pool.pool_health {
                let health_text = match health.as_str() {
                    "healthy" => format!("Pool Status: {} Healthy",
                        if pool.drives.len() > 1 { format!("({} drives)", pool.drives.len()) } else { String::new() }),
                    "degraded" => "Pool Status: ⚠ Degraded".to_string(),
                    "critical" => "Pool Status: ✗ Critical".to_string(),
                    "rebuilding" => "Pool Status: ⟳ Rebuilding".to_string(),
                    _ => format!("Pool Status: ? {}", health),
                };

                let mut health_spans = vec![
                    Span::raw(" "),
                    Span::styled("├─ ", Typography::tree()),
                ];
                health_spans.extend(StatusIcons::create_status_spans(pool.health_status.clone(), &health_text));
                lines.push(Line::from(health_spans));
            }
            if let Some(wear) = drive.wear_percent {
                drive_info.push(format!("W: {:.0}%", wear));
            }
            let drive_text = if drive_info.is_empty() {
                drive.name.clone()
            } else {
                format!("{} {}", drive.name, drive_info.join(" • "))
            };

            let mut drive_spans = vec![
                Span::raw(" "),
                Span::styled(tree_symbol, Typography::tree()),
                Span::raw(" "),
            ];
            drive_spans.extend(StatusIcons::create_status_spans(drive.status.clone(), &drive_text));
            lines.push(Line::from(drive_spans));
        }

        // Usage line
        if pool.usage_percent.is_some() {
            let tree_symbol = "└─";
            let mut usage_spans = vec![
                Span::raw(" "),
                Span::styled(tree_symbol, Typography::tree()),
                Span::raw(" "),
            ];
            usage_spans.extend(StatusIcons::create_status_spans(pool.status.clone(), &usage_text));
            lines.push(Line::from(usage_spans));
        // Total usage line (always show for pools)
        let usage_text = match (pool.usage_percent, pool.used_gb, pool.total_gb) {
            (Some(pct), Some(used), Some(total)) => {
                format!("Total: {:.0}% {:.1}GB/{:.1}GB", pct, used, total)
            }
            _ => "Total: —% —GB/—GB".to_string(),
        };

        let has_drives = !pool.drives.is_empty();
        let has_filesystems = !pool.filesystems.is_empty();
        let has_children = has_drives || has_filesystems;
        let tree_symbol = if has_children { "├─" } else { "└─" };
        let mut usage_spans = vec![
            Span::raw(" "),
            Span::styled(tree_symbol, Typography::tree()),
            Span::raw(" "),
        ];
        usage_spans.extend(StatusIcons::create_status_spans(pool.status.clone(), &usage_text));
        lines.push(Line::from(usage_spans));

        // Drive lines with enhanced grouping
        if pool.pool_type != "single" && pool.drives.len() > 1 {
            // Group drives by type for mergerfs pools
            let (data_drives, parity_drives): (Vec<_>, Vec<_>) = pool.drives.iter().enumerate()
                .partition(|(_, drive)| {
                    // Simple heuristic: drives with 'parity' in name or sdc (common parity drive)
                    !drive.name.to_lowercase().contains("parity") && drive.name != "sdc"
                });

            // Show data drives
            if !data_drives.is_empty() && pool.pool_type.contains("mergerfs") {
                lines.push(Line::from(vec![
                    Span::raw(" "),
                    Span::styled("├─ ", Typography::tree()),
                    Span::styled("Data Disks:", Typography::secondary()),
                ]));

                for (i, (_, drive)) in data_drives.iter().enumerate() {
                    let is_last = i == data_drives.len() - 1;
                    if is_last && parity_drives.is_empty() {
                        self.render_drive_line(&mut lines, drive, "│ └─");
                    } else {
                        self.render_drive_line(&mut lines, drive, "│ ├─");
                    }
                }
            }

            // Show parity drives
            if !parity_drives.is_empty() && pool.pool_type.contains("mergerfs") {
                lines.push(Line::from(vec![
                    Span::raw(" "),
                    Span::styled("└─ ", Typography::tree()),
                    Span::styled("Parity:", Typography::secondary()),
                ]));

                for (i, (_, drive)) in parity_drives.iter().enumerate() {
                    let is_last = i == parity_drives.len() - 1;
                    if is_last {
                        self.render_drive_line(&mut lines, drive, " └─");
                    } else {
                        self.render_drive_line(&mut lines, drive, " ├─");
                    }
                }
            } else {
                // Regular drive listing for non-mergerfs pools
                for (i, drive) in pool.drives.iter().enumerate() {
                    let is_last = i == pool.drives.len() - 1;
                    let tree_symbol = if is_last { "└─" } else { "├─" };
                    self.render_drive_line(&mut lines, drive, tree_symbol);
                }
            }
        } else if pool.pool_type.starts_with("drive (") {
            // Physical drive pools: show drive info + filesystem children
            // First show drive information
            for drive in &pool.drives {
                let mut drive_info = Vec::new();
                if let Some(temp) = drive.temperature {
                    drive_info.push(format!("T: {:.0}°C", temp));
                }
                if let Some(wear) = drive.wear_percent {
                    drive_info.push(format!("W: {:.0}%", wear));
                }
                let drive_text = if drive_info.is_empty() {
                    format!("Drive: {}", drive.name)
                } else {
                    format!("Drive: {}", drive_info.join(" "))
                };

                let has_filesystems = !pool.filesystems.is_empty();
                let tree_symbol = if has_filesystems { "├─" } else { "└─" };
                let mut drive_spans = vec![
                    Span::raw(" "),
                    Span::styled(tree_symbol, Typography::tree()),
                    Span::raw(" "),
                ];
                drive_spans.extend(StatusIcons::create_status_spans(drive.status.clone(), &drive_text));
                lines.push(Line::from(drive_spans));
            }

            // Then show filesystem children
            for (i, filesystem) in pool.filesystems.iter().enumerate() {
                let is_last = i == pool.filesystems.len() - 1;
                let tree_symbol = if is_last { "└─" } else { "├─" };

                let fs_text = match (filesystem.usage_percent, filesystem.used_gb, filesystem.total_gb) {
                    (Some(pct), Some(used), Some(total)) => {
                        format!("{}: {:.0}% {:.1}GB/{:.1}GB", filesystem.mount_point, pct, used, total)
                    }
                    _ => format!("{}: —% —GB/—GB", filesystem.mount_point),
                };

                let mut fs_spans = vec![
                    Span::raw(" "),
                    Span::styled(tree_symbol, Typography::tree()),
                    Span::raw(" "),
                ];
                fs_spans.extend(StatusIcons::create_status_spans(filesystem.status.clone(), &fs_text));
                lines.push(Line::from(fs_spans));
            }
        } else {
            // Single drive or simple pools
            for (i, drive) in pool.drives.iter().enumerate() {
                let is_last = i == pool.drives.len() - 1;
                let tree_symbol = if is_last { "└─" } else { "├─" };
                self.render_drive_line(&mut lines, drive, tree_symbol);
            }
        }
    }

    lines
}

/// Helper to render a single drive line
fn render_drive_line<'a>(&self, lines: &mut Vec<Line<'a>>, drive: &StorageDrive, tree_symbol: &'a str) {
    let mut drive_info = Vec::new();
    if let Some(temp) = drive.temperature {
        drive_info.push(format!("T: {:.0}°C", temp));
    }
    if let Some(wear) = drive.wear_percent {
        drive_info.push(format!("W: {:.0}%", wear));
    }
    let drive_text = if drive_info.is_empty() {
        drive.name.clone()
    } else {
        format!("{} {}", drive.name, drive_info.join(" • "))
    };

    let mut drive_spans = vec![
        Span::raw(" "),
        Span::styled(tree_symbol, Typography::tree()),
        Span::raw(" "),
    ];
    drive_spans.extend(StatusIcons::create_status_spans(drive.status.clone(), &drive_text));
    lines.push(Line::from(drive_spans));
}
}

impl Widget for SystemWidget {
@@ -513,48 +778,9 @@ impl SystemWidget {
    Span::styled("Storage:", Typography::widget_title())
]));

// Storage items with overflow handling
// Storage items - let main overflow logic handle truncation
let storage_lines = self.render_storage();
let remaining_space = area.height.saturating_sub(lines.len() as u16);

if storage_lines.len() <= remaining_space as usize {
    // All storage lines fit
    lines.extend(storage_lines);
} else if remaining_space >= 2 {
    // Show what we can and add overflow indicator
    let lines_to_show = (remaining_space - 1) as usize; // Reserve 1 line for overflow
    lines.extend(storage_lines.iter().take(lines_to_show).cloned());

    // Count hidden pools
    let mut hidden_pools = 0;
    let mut current_pool = String::new();
    for (i, line) in storage_lines.iter().enumerate() {
        if i >= lines_to_show {
            // Check if this line represents a new pool (no indentation)
            if let Some(first_span) = line.spans.first() {
                let text = first_span.content.as_ref();
                if !text.starts_with(" ") && text.contains(':') {
                    let pool_name = text.split(':').next().unwrap_or("").trim();
                    if pool_name != current_pool {
                        hidden_pools += 1;
                        current_pool = pool_name.to_string();
                    }
                }
            }
        }
    }

    if hidden_pools > 0 {
        let overflow_text = format!(
            "... and {} more pool{}",
            hidden_pools,
            if hidden_pools == 1 { "" } else { "s" }
        );
        lines.push(Line::from(vec![
            Span::styled(overflow_text, Typography::muted())
        ]));
    }
}
lines.extend(storage_lines);

// Apply scroll offset
let total_lines = lines.len();

@@ -1,6 +1,6 @@
[package]
name = "cm-dashboard-shared"
version = "0.1.90"
version = "0.1.107"
edition = "2021"

[dependencies]

@@ -82,13 +82,13 @@ impl MetricValue {
/// Health status for metrics
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub enum Status {
    Inactive, // Lowest priority - treated as good
    Ok,       // Second lowest - also good
    Unknown,
    Offline,
    Pending,
    Warning,
    Critical,
    Inactive, // Lowest priority
    Unknown,
    Offline,
    Pending,
    Ok,       // 5th place - good status has higher priority than unknown states
    Warning,
    Critical, // Highest priority
}
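Because Ord is derived, the comparison order follows declaration order, which is what makes this reordering matter when widgets aggregate by taking the maximum status. A minimal sketch with the enum re-declared locally and the serde derives omitted:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Status { Inactive, Unknown, Offline, Pending, Ok, Warning, Critical }

fn main() {
    // With the new ordering, Ok now outranks the unknown-ish states...
    assert!(Status::Ok > Status::Unknown);
    // ...while real problems still dominate an aggregate taken with max().
    let worst = [Status::Ok, Status::Unknown, Status::Warning].iter().max();
    assert_eq!(worst, Some(&Status::Warning));
}
```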

impl Status {