Compare commits

29 Commits

Author SHA1 Message Date
516d159d2f Reorganize dashboard UI with tabbed layout and improved status bars
All checks were successful
Build and Release / build-and-release (push) Successful in 1m36s
Add tabbed navigation in right panel with "hosts | services" tabs to
better utilize vertical space. Hosts tab displays all available hosts
with blue selector bar (j/k navigation, Enter to switch). Services tab
shows services for currently selected host.

Status bar improvements:
- Move dashboard IP to top-right status bar (non-bold)
- Restructure bottom status bar with right-aligned build/agent versions
- Fix overflow crashes using saturating_sub for small terminal windows

Additional changes:
- Add HostsWidget with scroll handling and mouse click support
- Bold styling for currently active host
- Create render_content() methods to avoid nested blocks in tabs
- Display SMB share read/write mode in share listings
2025-12-14 10:03:33 +01:00
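
The HostsWidget itself does not appear in the diffs below, only the calls made into it. A minimal sketch of what its selection state and navigation methods might look like, inferred solely from those call sites (set_selected_index, select_previous/select_next, get_selected_index, y_to_host_index); the field layout and clamping details are assumptions, and select_previous reuses the same saturating_sub pattern the commit applies elsewhere to avoid underflow crashes:

/// Selection state for the new "hosts" tab (sketch, not the committed code).
pub struct HostsWidget {
    selected_index: usize,
    scroll_offset: usize,
}

impl HostsWidget {
    pub fn new() -> Self {
        Self { selected_index: 0, scroll_offset: 0 }
    }

    pub fn get_selected_index(&self) -> usize {
        self.selected_index
    }

    /// Clamp rather than index directly so a stale click cannot panic.
    pub fn set_selected_index(&mut self, index: usize, total_hosts: usize) {
        self.selected_index = index.min(total_hosts.saturating_sub(1));
    }

    /// saturating_sub keeps this safe when the selection is already at the top.
    pub fn select_previous(&mut self) {
        self.selected_index = self.selected_index.saturating_sub(1);
    }

    pub fn select_next(&mut self, total_hosts: usize) {
        if self.selected_index + 1 < total_hosts {
            self.selected_index += 1;
        }
    }

    /// Map a row inside the hosts list (already adjusted for border and header)
    /// back to an index into the available-hosts vector.
    pub fn y_to_host_index(&self, relative_y: usize) -> usize {
        self.scroll_offset + relative_y
    }
}
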
1656f20e96 Fix NFS export parsing to handle both inline and continuation formats
All checks were successful
Build and Release / build-and-release (push) Successful in 1m20s
Support exportfs output where the network info appears on the same line
as the path (e.g. /srv/media/tv 192.168.0.0/16(...)) in addition to the
continuation-line format. Ensures all NFS exports are detected correctly.
2025-12-11 11:10:59 +01:00
dcd350ec2c Add NFS export permissions and network display, fix SMB service detection
All checks were successful
Build and Release / build-and-release (push) Successful in 1m13s
Display NFS exports with ro/rw permissions and network ranges for better
visibility into share configuration. Support both smbd and samba-smbd
service names for SMB share detection across different distributions.
2025-12-11 10:59:00 +01:00
a34b095857 Simplify NFS export options display
All checks were successful
Build and Release / build-and-release (push) Successful in 1m14s
Filter NFS export options to show only key settings (rw/ro, sync/async)
instead of verbose option strings. Improves readability while maintaining
essential information about export configuration.
2025-12-11 10:26:27 +01:00
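
A rough sketch of the option filtering described here, assuming the verbose exportfs option string is comma-separated; the function name is illustrative and not taken from the diff:

fn filter_export_options(options: &str) -> String {
    // Keep only the key settings (rw/ro, sync/async), dropping verbose
    // defaults such as wdelay, root_squash or no_subtree_check.
    options
        .split(',')
        .filter(|opt| matches!(opt.trim(), "rw" | "ro" | "sync" | "async"))
        .collect::<Vec<_>>()
        .join(",")
}

// e.g. filter_export_options("rw,sync,wdelay,no_subtree_check") == "rw,sync"
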
7362464b46 Deduplicate NFS exports and remove client info
All checks were successful
Build and Release / build-and-release (push) Successful in 1m48s
Fix NFS export display to show each export path only once instead of
once per client. Use HashMap to deduplicate by path and sort results
alphabetically. Remove IP addresses and client specifications from
display, showing only export paths with their options.

Prevents duplicate entries when a single export is shared with multiple
clients or networks.
2025-12-11 10:06:59 +01:00
c8b79576fa Add NFS/SMB share monitoring and increase disk timeouts
All checks were successful
Build and Release / build-and-release (push) Successful in 1m36s
Add sub-service display for NFS exports and SMB shares under their
respective services. NFS shows active exports from exportfs with
options. SMB shows configured shares from smb.conf with paths.

Increase disk operation timeouts to handle multiple drives:
- lsblk: 2s → 10s
- smartctl: 3s → 15s (critical for multi-drive systems)
- df: 2s → 10s

Prevents timeouts when querying SMART data from systems with multiple
drives (3+ data drives plus parity).
2025-12-11 09:30:06 +01:00
f53df5440b Remove 'Repo' prefix from backup header display
All checks were successful
Build and Release / build-and-release (push) Successful in 1m10s
Simplify backup section header by removing the 'Repo' prefix and
displaying only the timestamp with status icon. Repository details
are still shown as sub-items below the timestamp.
2025-12-09 20:32:05 +01:00
d1b0e2c431 Add kB unit support for backup repository sizes
All checks were successful
Build and Release / build-and-release (push) Successful in 1m24s
Extended size formatting to handle repositories smaller than 1MB by
displaying in kB units. Size display logic now cascades: kB for < 1MB,
MB for 1MB-1GB, GB for >= 1GB.
2025-12-09 19:51:03 +01:00
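
The cascade can be sketched as follows (assuming the repository size arrives in bytes; the committed code stores repo_size_gb, so the input unit and function name here are assumptions):

fn format_repo_size(bytes: f64) -> String {
    const KB: f64 = 1024.0;
    const MB: f64 = 1024.0 * KB;
    const GB: f64 = 1024.0 * MB;
    if bytes < MB {
        format!("{:.0} kB", bytes / KB) // < 1MB: show kB
    } else if bytes < GB {
        format!("{:.1} MB", bytes / MB) // 1MB-1GB: show MB
    } else {
        format!("{:.1} GB", bytes / GB) // >= 1GB: show GB
    }
}
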
b1719a60fc Use nfs-backup.toml and support completed status
All checks were successful
Build and Release / build-and-release (push) Successful in 1m24s
Update agent to read nfs-backup.toml instead of legacy backup-status-*.toml
files. Add support for 'completed' status string used by backup script.

Changes:
- Read nfs-backup.toml from status directory
- Match 'completed' status as Status::Ok
- Simplify file scanning logic for single NFS backup file
2025-12-09 19:37:01 +01:00
d922e8d6f3 Restructure backup display to show per-repository metrics
All checks were successful
Build and Release / build-and-release (push) Successful in 1m15s
Remove disk-based backup display and implement repository-centric view
with per-repo archive counts and sizes. Backup now uses NFS storage
instead of direct disk monitoring.

Changes:
- Remove BackupDiskData, add BackupRepositoryData structure
- Display format: "Repo <timestamp>" with per-repo details
- Show archive count and size (MB/GB) for each repository
- Agent aggregates repo data from backup status TOML files
- Dashboard renders repo list with individual status indicators
2025-12-09 19:22:51 +01:00
407bc9dbc2 Reduce CPU usage with conditional rendering
All checks were successful
Build and Release / build-and-release (push) Successful in 1m12s
Implement event-driven rendering to dramatically reduce CPU usage.
Only render when something actually changes instead of constantly
rendering at 20 FPS.

Changes:
- Increase poll timeout from 50ms to 200ms (5 FPS)
- Add needs_render flag to track when rendering is required
- Trigger rendering only on: user input, new metrics, heartbeat
  checks, or terminal resize events
- Reset render flag after each render cycle

Based on cm-player optimization approach.
2025-12-09 11:56:40 +01:00
3c278351c9 Filter out current host from Tailscale peer list
All checks were successful
Build and Release / build-and-release (push) Successful in 1m50s
Skip the first line in tailscale status output which is always the
current host showing as idle. Add additional hostname check to prevent
showing the current host in the peer list. Only display actual remote
peers with their connection methods.
2025-12-09 10:47:18 +01:00
8da4522d85 Fix Tailscale peer detection by parsing text output
All checks were successful
Build and Release / build-and-release (push) Successful in 1m13s
Replace JSON parsing with simpler text output parsing from tailscale
status command. The text format clearly shows hostname and connection
method (direct/relay/idle) making detection more reliable.

Fixes issues with incorrect hostname (localhost instead of actual name)
and incorrect connection method detection (showing relay when actually
using direct connection).
2025-12-09 10:34:55 +01:00
5b1e39cfca Show all connected Tailscale peers with connection methods
All checks were successful
Build and Release / build-and-release (push) Successful in 1m38s
Replace single connection method display with individual sub-service
rows for each online Tailscale peer. Each peer shows hostname and
connection type (direct, relay, or idle) allowing monitoring of all
connected devices and their connection quality.

Query tailscale status --json to enumerate all online peers and display
each as a separate sub-service under tailscaled.
2025-12-09 08:35:15 +01:00
ffecbc3166 Fix service widget auto-scroll and remove dead code
All checks were successful
Build and Release / build-and-release (push) Successful in 1m12s
Fix service selection scrolling to prevent selector bar from being
hidden by "... X more below" message. When scrolling down, position
selected service one line above the bottom if there's content below,
ensuring the selector remains visible above the overflow message.

Remove unused get_zmq_stats method and service_type field to eliminate
compilation warnings and dead code.
2025-12-08 23:10:57 +01:00
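
A sketch of the scroll adjustment described above; purely illustrative, since the real services widget keeps this state internally in scroll_offset and selected_index:

/// Return the new scroll offset after moving the selection, keeping the
/// selector one line above the bottom when more content follows so it is
/// not hidden behind the "... X more below" message. (Sketch only.)
fn scroll_offset_for_selection(selected: usize, offset: usize, visible: usize, total: usize) -> usize {
    if selected < offset {
        // Selection moved above the viewport: scroll up to it.
        selected
    } else if selected >= offset + visible {
        // Selection moved past the last visible row while scrolling down.
        let new_offset = selected + 1 - visible;
        if new_offset + visible < total {
            new_offset + 1 // leave one line for the overflow indicator
        } else {
            new_offset
        }
    } else {
        offset
    }
}
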
49f9504429 Add Tailscale connection method monitoring
All checks were successful
Build and Release / build-and-release (push) Successful in 1m25s
Add connection_method field to NetworkInterfaceData to track whether
Tailscale is using direct P2P, DERP relay, or HTTP proxy connections.
The connection method is displayed as a sub-service under tailscaled
service, following the same pattern as VPN routes and firewall ports.

Query tailscale status --json to determine active connection type and
display as informational sub-service when tailscaled is active.
2025-12-08 21:01:47 +01:00
bc9015e96b Add mouse support and improve terminal resize handling
All checks were successful
Build and Release / build-and-release (push) Successful in 1m21s
- Add mouse click support for hostname selection in title bar
- Fix right-aligned hostname position calculation
- Add mouse scroll support for both panels
- Add mouse click to select service rows
- Add right-click popup menu for service actions (Start/Stop/Logs)
- Add hover highlighting for popup menu items
- Improve terminal resize crash protection with 90x15 minimum size
- Add "Host:" prefix and separators to status bar
- Move NixOS metrics from system panel to status bar
- Change "... X more below" indicator to use border color
- Remove service name from popup menu title
2025-12-08 19:56:06 +01:00
aaec8e691c Bump version to 0.1.259
All checks were successful
Build and Release / build-and-release (push) Successful in 1m28s
2025-12-07 14:52:12 +01:00
4a8cfbbde4 Support multiple concurrent torrent copy operations
Update monitoring to handle multiple simultaneous torrent copy operations
using the new directory-based marker structure.

Changes:
- Rename get_active_torrent_copy() to get_active_torrent_copies()
- Read all marker files from /tmp/torrent-copy/ directory
- Return Vec<String> instead of Option<String> for multiple copies
- Display each active copy as separate sub-service
- Unsanitize filenames by replacing _ with /

This enables monitoring when multiple torrents finish simultaneously
and are being copied in parallel to permanent storage.
2025-12-07 14:47:49 +01:00
d93260529b Add torrent copy operation monitoring
All checks were successful
Build and Release / build-and-release (push) Successful in 1m46s
Add real-time monitoring of torrent copy operations when completed
downloads are copied from SSD to HDD storage.

Changes:
- Add marker file tracking during rsync operations
- Monitor active copy operations via /tmp/torrent-copy-active
- Display copy status as sub-service under openvpn-vpn-download
- Show currently copying torrent name in dashboard

The copy status appears as an informational sub-service while rsync
is actively copying completed torrents to permanent storage, providing
visibility into potentially long-running file transfer operations.
2025-12-07 13:59:28 +01:00
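
The writing side of the marker is not part of this diff (it lives in the copy script); a hedged sketch of the idea in Rust, assuming the script simply creates /tmp/torrent-copy-active with the torrent name before rsync and removes it afterwards:

use std::fs;
use std::process::Command;

fn copy_with_marker(torrent_name: &str, src: &str, dst: &str) -> std::io::Result<()> {
    let marker = "/tmp/torrent-copy-active";
    fs::write(marker, torrent_name)?; // announce the active copy for the agent
    let status = Command::new("rsync").args(["-a", src, dst]).status();
    fs::remove_file(marker)?; // clear the marker even if rsync failed
    status.map(|_| ())
}

The agent side shown later in this compare only reads the marker to surface the currently copying torrent as a sub-service.
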
41e1be451e Display selected host with brackets in title bar
All checks were successful
Build and Release / build-and-release (push) Successful in 1m25s
- Change nftables port labels to lowercase 'wan tcp:' and 'wan udp:'
- Add brackets around selected host in title bar for clarity
2025-12-04 18:47:30 +01:00
2863526ec8 Remove timeout wrapper from nft command
All checks were successful
Build and Release / build-and-release (push) Successful in 1m21s
Run sudo nft directly without timeout wrapper to preserve capabilities.
The timeout -> sudo chain was preventing nft from accessing netlink
with proper permissions.

- Change from 'timeout 3 sudo nft' to 'sudo nft'
- Allows CAP_NET_ADMIN to pass through correctly
- Update version to v0.1.256
2025-12-04 16:10:25 +01:00
5da9213da6 Fix nftables query by using full path to nft binary
All checks were successful
Build and Release / build-and-release (push) Successful in 1m31s
Use the explicit path /run/current-system/sw/bin/nft to match the sudoers
configuration. Previously, 'nft' without a path resolved to the wrong
location and failed the permission check.

- Change from 'sudo nft' to 'sudo /run/current-system/sw/bin/nft'
- Matches sudoers entry for passwordless execution
- Update version to v0.1.255
2025-12-04 16:02:14 +01:00
a7755f02ae Use 5-minute load average for CPU status calculation
All checks were successful
Build and Release / build-and-release (push) Successful in 1m27s
Change CPU load status to use load_5min instead of load_1min for more
stable status reporting. 5-minute average smooths out temporary spikes
and provides better indication of sustained load.

- Change load_status calculation from load_1min to load_5min
- Add comment explaining use of 5-minute average for stability
- Update version to v0.1.254
2025-12-04 15:59:11 +01:00
b886fb2045 Add detailed error logging for nft command failures
All checks were successful
Build and Release / build-and-release (push) Successful in 1m28s
- Log exit status code when nft command fails
- Log stderr output to diagnose issues
- Distinguish between command execution failure and non-zero exit
- Update version to v0.1.253
2025-12-04 15:53:12 +01:00
cfb02e1763 Change nftables logging from debug to info
All checks were successful
Build and Release / build-and-release (push) Successful in 1m21s
- Add info to tracing imports
- Change debug!() calls to info!() for visibility in logs
- Update version to v0.1.252
2025-12-04 15:48:34 +01:00
5b53ca3d52 Move VPN route display from vpn-connection to vpn-download
All checks were successful
Build and Release / build-and-release (push) Successful in 1m46s
Consolidate VPN-related information under openvpn-vpn-download service.
Now shows both VPN route and torrent statistics as sub-services.

- Remove route from openvpn-vpn-connection
- Add route to openvpn-vpn-download (displayed first)
- Torrent stats displayed second
- Update version to v0.1.251
2025-12-04 15:36:34 +01:00
92a30913b4 Add debug logging and fix chain end detection for nftables
All checks were successful
Build and Release / build-and-release (push) Successful in 1m28s
- Detect chain end with closing brace
- Add debug logging to trace nft command execution and port collection
- Update version to v0.1.250
2025-12-04 15:33:43 +01:00
a288a8ef9a Fix nftables parser to use input_wan chain
All checks were successful
Build and Release / build-and-release (push) Successful in 1m27s
Change nftables port parser to specifically look for 'chain input_wan'
instead of any chain with 'input' in the name. This ensures we only
collect WAN/external ports, not LAN or other internal chains.

- Look for 'chain input_wan' specifically
- Remove internal network filters (no longer needed)
- Update version to v0.1.249
2025-12-04 15:26:20 +01:00
17 changed files with 1801 additions and 493 deletions

Cargo.lock generated
View File

@ -279,7 +279,7 @@ checksum = "a1d728cc89cf3aee9ff92b05e62b19ee65a02b5702cff7d5a377e32c6ae29d8d"
[[package]]
name = "cm-dashboard"
version = "0.1.247"
version = "0.1.276"
dependencies = [
"anyhow",
"chrono",
@ -301,7 +301,7 @@ dependencies = [
[[package]]
name = "cm-dashboard-agent"
version = "0.1.247"
version = "0.1.275"
dependencies = [
"anyhow",
"async-trait",
@ -325,7 +325,7 @@ dependencies = [
[[package]]
name = "cm-dashboard-shared"
version = "0.1.247"
version = "0.1.275"
dependencies = [
"chrono",
"serde",

View File

@ -1,6 +1,6 @@
[package]
name = "cm-dashboard-agent"
version = "0.1.248"
version = "0.1.275"
edition = "2021"
[dependencies]

View File

@ -1,7 +1,7 @@
use async_trait::async_trait;
use cm_dashboard_shared::{AgentData, BackupData, BackupDiskData, Status};
use cm_dashboard_shared::{AgentData, BackupData, BackupRepositoryData, Status};
use serde::{Deserialize, Serialize};
use std::collections::{HashMap, HashSet};
use std::collections::HashMap;
use std::fs;
use std::path::{Path, PathBuf};
use tracing::{debug, warn};
@ -21,7 +21,7 @@ impl BackupCollector {
}
}
/// Scan directory for all backup status files
/// Scan directory for backup status file (nfs-backup.toml)
async fn scan_status_files(&self) -> Result<Vec<PathBuf>, CollectorError> {
let status_path = Path::new(&self.status_dir);
@ -30,30 +30,15 @@ impl BackupCollector {
return Ok(Vec::new());
}
let mut status_files = Vec::new();
match fs::read_dir(status_path) {
Ok(entries) => {
for entry in entries {
if let Ok(entry) = entry {
let path = entry.path();
if path.is_file() {
if let Some(filename) = path.file_name().and_then(|n| n.to_str()) {
if filename.starts_with("backup-status-") && filename.ends_with(".toml") {
status_files.push(path);
}
}
}
}
}
}
Err(e) => {
warn!("Failed to read backup status directory: {}", e);
return Ok(Vec::new());
}
// Look for nfs-backup.toml (new NFS-based backup)
let nfs_backup_file = status_path.join("nfs-backup.toml");
if nfs_backup_file.exists() {
return Ok(vec![nfs_backup_file]);
}
Ok(status_files)
// No backup status file found
debug!("No nfs-backup.toml found in {}", self.status_dir);
Ok(Vec::new())
}
/// Read a single backup status file
@ -76,24 +61,13 @@ impl BackupCollector {
/// Calculate backup status from TOML status field
fn calculate_backup_status(status_str: &str) -> Status {
match status_str.to_lowercase().as_str() {
"success" => Status::Ok,
"success" | "completed" => Status::Ok,
"warning" => Status::Warning,
"failed" | "error" => Status::Critical,
_ => Status::Unknown,
}
}
/// Calculate usage status from disk usage percentage
fn calculate_usage_status(usage_percent: f32) -> Status {
if usage_percent < 80.0 {
Status::Ok
} else if usage_percent < 90.0 {
Status::Warning
} else {
Status::Critical
}
}
/// Convert BackupStatusToml to BackupData and populate AgentData
async fn populate_backup_data(&self, agent_data: &mut AgentData) -> Result<(), CollectorError> {
let status_files = self.scan_status_files().await?;
@ -101,76 +75,47 @@ impl BackupCollector {
if status_files.is_empty() {
debug!("No backup status files found");
agent_data.backup = BackupData {
last_backup_time: None,
backup_status: Status::Unknown,
repositories: Vec::new(),
repository_status: Status::Unknown,
disks: Vec::new(),
};
return Ok(());
}
let mut all_repositories = HashSet::new();
let mut disks = Vec::new();
// Aggregate repository data across all backup status files
let mut repo_map: HashMap<String, BackupRepositoryData> = HashMap::new();
let mut worst_status = Status::Ok;
let mut latest_backup_time: Option<String> = None;
for status_file in status_files {
match self.read_status_file(&status_file).await {
Ok(backup_status) => {
// Collect all service names
for service_name in backup_status.services.keys() {
all_repositories.insert(service_name.clone());
}
// Calculate backup status
let backup_status_enum = Self::calculate_backup_status(&backup_status.status);
worst_status = worst_status.max(backup_status_enum);
// Calculate usage status from disk space
let (usage_percent, used_gb, total_gb, usage_status) = if let Some(disk_space) = &backup_status.disk_space {
let usage_pct = disk_space.usage_percent as f32;
(
usage_pct,
disk_space.used_gb as f32,
disk_space.total_gb as f32,
Self::calculate_usage_status(usage_pct),
)
} else {
(0.0, 0.0, 0.0, Status::Unknown)
};
// Track latest backup time
if latest_backup_time.is_none() || Some(&backup_status.start_time) > latest_backup_time.as_ref() {
latest_backup_time = Some(backup_status.start_time.clone());
}
// Update worst status
worst_status = worst_status.max(backup_status_enum).max(usage_status);
// Process each service in this backup
for (service_name, service_status) in backup_status.services {
// Convert bytes to GB
let repo_size_gb = service_status.repo_size_bytes as f32 / 1_073_741_824.0;
// Build service list for this disk
let services: Vec<String> = backup_status.services.keys().cloned().collect();
// Calculate service status
let service_status_enum = Self::calculate_backup_status(&service_status.status);
worst_status = worst_status.max(service_status_enum);
// Get min and max archive counts to detect inconsistencies
let archives_min: i64 = backup_status.services.values()
.map(|service| service.archive_count)
.min()
.unwrap_or(0);
let archives_max: i64 = backup_status.services.values()
.map(|service| service.archive_count)
.max()
.unwrap_or(0);
// Create disk data
let disk_data = BackupDiskData {
serial: backup_status.disk_serial_number.unwrap_or_else(|| "Unknown".to_string()),
product_name: backup_status.disk_product_name,
wear_percent: backup_status.disk_wear_percent,
temperature_celsius: None, // Not available in current TOML
last_backup_time: Some(backup_status.start_time),
backup_status: backup_status_enum,
disk_usage_percent: usage_percent,
disk_used_gb: used_gb,
disk_total_gb: total_gb,
usage_status,
services,
archives_min,
archives_max,
};
disks.push(disk_data);
// Update or insert repository data
repo_map.insert(service_name.clone(), BackupRepositoryData {
name: service_name,
archive_count: service_status.archive_count,
repo_size_gb,
status: service_status_enum,
});
}
}
Err(e) => {
warn!("Failed to read backup status file {:?}: {}", status_file, e);
@ -178,12 +123,14 @@ impl BackupCollector {
}
}
let repositories: Vec<String> = all_repositories.into_iter().collect();
// Convert HashMap to sorted Vec
let mut repositories: Vec<BackupRepositoryData> = repo_map.into_values().collect();
repositories.sort_by(|a, b| a.name.cmp(&b.name));
agent_data.backup = BackupData {
last_backup_time: latest_backup_time,
backup_status: worst_status,
repositories,
repository_status: worst_status,
disks,
};
Ok(())

View File

@ -282,8 +282,8 @@ impl Collector for CpuCollector {
);
}
// Calculate status using thresholds
agent_data.system.cpu.load_status = self.calculate_load_status(agent_data.system.cpu.load_1min);
// Calculate status using thresholds (use 5-minute average for stability)
agent_data.system.cpu.load_status = self.calculate_load_status(agent_data.system.cpu.load_5min);
agent_data.system.cpu.temperature_status = if let Some(temp) = agent_data.system.cpu.temperature_celsius {
self.calculate_temperature_status(temp)
} else {

View File

@ -114,7 +114,7 @@ impl DiskCollector {
let mut cmd = TokioCommand::new("lsblk");
cmd.args(&["-rn", "-o", "NAME,MOUNTPOINT"]);
let output = run_command_with_timeout(cmd, 2).await
let output = run_command_with_timeout(cmd, 10).await
.map_err(|e| CollectorError::SystemRead {
path: "block devices".to_string(),
error: e.to_string(),
@ -184,7 +184,7 @@ impl DiskCollector {
/// Get filesystem info for a single mount point
fn get_filesystem_info(&self, mount_point: &str) -> Result<(u64, u64), CollectorError> {
let output = StdCommand::new("timeout")
.args(&["2", "df", "--block-size=1", mount_point])
.args(&["10", "df", "--block-size=1", mount_point])
.output()
.map_err(|e| CollectorError::SystemRead {
path: format!("df {}", mount_point),
@ -433,7 +433,7 @@ impl DiskCollector {
cmd.args(&["-a", &format!("/dev/{}", drive_name)]);
}
let output = run_command_with_timeout(cmd, 3).await
let output = run_command_with_timeout(cmd, 15).await
.map_err(|e| CollectorError::SystemRead {
path: format!("SMART data for {}", drive_name),
error: e.to_string(),
@ -772,7 +772,7 @@ impl DiskCollector {
fn get_drive_info_for_path(&self, path: &str) -> anyhow::Result<PoolDrive> {
// Use lsblk to find the backing device with timeout
let output = StdCommand::new("timeout")
.args(&["2", "lsblk", "-rn", "-o", "NAME,MOUNTPOINT"])
.args(&["10", "lsblk", "-rn", "-o", "NAME,MOUNTPOINT"])
.output()
.map_err(|e| anyhow::anyhow!("Failed to run lsblk: {}", e))?;

View File

@ -181,6 +181,7 @@ impl NetworkCollector {
link_status,
parent_interface,
vlan_id,
connection_method: None,
});
}
}

View File

@ -4,7 +4,7 @@ use cm_dashboard_shared::{AgentData, ServiceData, SubServiceData, SubServiceMetr
use std::process::Command;
use std::sync::RwLock;
use std::time::Instant;
use tracing::debug;
use tracing::{debug, info};
use super::{Collector, CollectorError};
use crate::config::SystemdConfig;
@ -154,7 +154,8 @@ impl SystemdCollector {
}
}
if service_name == "openvpn-vpn-connection" && status_info.active_state == "active" {
if service_name == "openvpn-vpn-download" && status_info.active_state == "active" {
// Add VPN route
if let Some(external_ip) = self.get_vpn_external_ip() {
let metrics = Vec::new();
@ -165,9 +166,8 @@ impl SystemdCollector {
service_type: "vpn_route".to_string(),
});
}
}
if service_name == "openvpn-vpn-download" && status_info.active_state == "active" {
// Add torrent stats
if let Some((active_count, download_mbps, upload_mbps)) = self.get_qbittorrent_stats() {
let metrics = Vec::new();
@ -178,6 +178,18 @@ impl SystemdCollector {
service_type: "torrent_stats".to_string(),
});
}
// Add active torrent copy status for each copy operation
for torrent_name in self.get_active_torrent_copies() {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: format!("Copy: {}", torrent_name),
service_status: Status::Info,
metrics,
service_type: "torrent_copy".to_string(),
});
}
}
if service_name == "nftables" && status_info.active_state == "active" {
@ -186,7 +198,7 @@ impl SystemdCollector {
if !tcp_ports.is_empty() {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: format!("TCP: {}", tcp_ports),
name: format!("wan tcp: {}", tcp_ports),
service_status: Status::Info,
metrics,
service_type: "firewall_port".to_string(),
@ -196,7 +208,7 @@ impl SystemdCollector {
if !udp_ports.is_empty() {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: format!("UDP: {}", udp_ports),
name: format!("wan udp: {}", udp_ports),
service_status: Status::Info,
metrics,
service_type: "firewall_port".to_string(),
@ -204,6 +216,51 @@ impl SystemdCollector {
}
}
if service_name == "tailscaled" && status_info.active_state == "active" {
// Add Tailscale peers with their connection methods as sub-services
let peers = self.get_tailscale_peers();
for (peer_name, conn_method) in peers {
let metrics = Vec::new();
sub_services.push(SubServiceData {
name: format!("{}: {}", peer_name, conn_method),
service_status: Status::Info,
metrics,
service_type: "tailscale_peer".to_string(),
});
}
}
if service_name == "nfs-server" && status_info.active_state == "active" {
// Add NFS exports as sub-services
let exports = self.get_nfs_exports();
for (export_path, info) in exports {
let display = if !info.is_empty() {
format!("{} {}", export_path, info)
} else {
export_path
};
sub_services.push(SubServiceData {
name: display,
service_status: Status::Info,
metrics: Vec::new(),
service_type: "nfs_export".to_string(),
});
}
}
if (service_name == "smbd" || service_name == "samba-smbd") && status_info.active_state == "active" {
// Add SMB shares as sub-services
let shares = self.get_smb_shares();
for (share_name, share_path, mode) in shares {
sub_services.push(SubServiceData {
name: format!("{}: {} {}", share_name, share_path, mode),
service_status: Status::Info,
metrics: Vec::new(),
service_type: "smb_share".to_string(),
});
}
}
// Create complete service data
let service_data = ServiceData {
name: service_name.clone(),
@ -911,21 +968,264 @@ impl SystemdCollector {
None
}
/// Get Tailscale connected peers with their connection methods
/// Returns a list of (device_name, connection_method) tuples
fn get_tailscale_peers(&self) -> Vec<(String, String)> {
match Command::new("timeout")
.args(["2", "tailscale", "status"])
.output()
{
Ok(output) if output.status.success() => {
let status_output = String::from_utf8_lossy(&output.stdout);
let mut peers = Vec::new();
// Get current hostname to filter it out
let current_hostname = gethostname::gethostname()
.to_string_lossy()
.to_string();
// Parse tailscale status output
// Format: IP hostname user os status
// Example: 100.110.98.3 wslbox cm@ linux active; direct 192.168.30.227:53757
// Note: First line is always the current host, skip it
for (idx, line) in status_output.lines().enumerate() {
if idx == 0 {
continue; // Skip first line (current host)
}
let parts: Vec<&str> = line.split_whitespace().collect();
if parts.len() < 5 {
continue; // Skip invalid lines
}
// parts[0] = IP
// parts[1] = hostname
// parts[2] = user
// parts[3] = OS
// parts[4+] = status (e.g., "active;", "direct", "192.168.30.227:53757" or "idle;" or "offline")
let hostname = parts[1];
// Skip if this is the current host (double-check in case format changes)
if hostname == current_hostname {
continue;
}
let status_parts = &parts[4..];
// Determine connection method from status
let connection_method = if status_parts.is_empty() {
continue; // Skip if no status
} else {
let status_str = status_parts.join(" ");
if status_str.contains("offline") {
continue; // Skip offline peers
} else if status_str.contains("direct") {
"direct"
} else if status_str.contains("relay") {
"relay"
} else if status_str.contains("idle") {
"idle"
} else if status_str.contains("active") {
"active"
} else {
continue; // Skip unknown status
}
};
peers.push((hostname.to_string(), connection_method.to_string()));
}
peers
}
_ => Vec::new(),
}
}
/// Get NFS exports from exportfs
/// Returns a list of (export_path, info_string) tuples
fn get_nfs_exports(&self) -> Vec<(String, String)> {
let output = match Command::new("timeout")
.args(["2", "exportfs", "-v"])
.output()
{
Ok(output) if output.status.success() => output,
_ => return Vec::new(),
};
let exports_output = String::from_utf8_lossy(&output.stdout);
let mut exports_map: std::collections::HashMap<String, Vec<(String, String)>> =
std::collections::HashMap::new();
let mut current_path: Option<String> = None;
for line in exports_output.lines() {
let trimmed = line.trim();
if trimmed.is_empty() || trimmed.starts_with('#') {
continue;
}
if trimmed.starts_with('/') {
// Export path line - may have network on same line or continuation
let parts: Vec<&str> = trimmed.splitn(2, char::is_whitespace).collect();
let path = parts[0].to_string();
current_path = Some(path.clone());
// Check if network info is on the same line
if parts.len() > 1 {
let rest = parts[1].trim();
if let Some(paren_pos) = rest.find('(') {
let network = rest[..paren_pos].trim();
if let Some(end_paren) = rest.find(')') {
let options = &rest[paren_pos+1..end_paren];
let mode = if options.contains(",rw,") || options.ends_with(",rw") {
"rw"
} else {
"ro"
};
exports_map.entry(path)
.or_insert_with(Vec::new)
.push((network.to_string(), mode.to_string()));
}
}
}
} else if let Some(ref path) = current_path {
// Continuation line with network and options
if let Some(paren_pos) = trimmed.find('(') {
let network = trimmed[..paren_pos].trim();
if let Some(end_paren) = trimmed.find(')') {
let options = &trimmed[paren_pos+1..end_paren];
let mode = if options.contains(",rw,") || options.ends_with(",rw") {
"rw"
} else {
"ro"
};
exports_map.entry(path.clone())
.or_insert_with(Vec::new)
.push((network.to_string(), mode.to_string()));
}
}
}
}
// Build display strings: "path: mode [networks]"
let mut exports: Vec<(String, String)> = exports_map
.into_iter()
.map(|(path, mut entries)| {
if entries.is_empty() {
return (path, String::new());
}
let mode = entries[0].1.clone();
let networks: Vec<String> = entries.drain(..).map(|(n, _)| n).collect();
let info = format!("{} [{}]", mode, networks.join(", "));
(path, info)
})
.collect();
exports.sort_by(|a, b| a.0.cmp(&b.0));
exports
}
/// Get SMB shares from smb.conf
/// Returns a list of (share_name, share_path, mode) tuples
fn get_smb_shares(&self) -> Vec<(String, String, String)> {
match std::fs::read_to_string("/etc/samba/smb.conf") {
Ok(config) => {
let mut shares = Vec::new();
let mut current_share: Option<String> = None;
let mut current_path: Option<String> = None;
let mut current_mode: String = "ro".to_string(); // Default to read-only
for line in config.lines() {
let line = line.trim();
// Skip comments and empty lines
if line.is_empty() || line.starts_with('#') || line.starts_with(';') {
continue;
}
// Detect share section [sharename]
if line.starts_with('[') && line.ends_with(']') {
// Save previous share if we have both name and path
if let (Some(name), Some(path)) = (current_share.take(), current_path.take()) {
// Skip special sections
if name != "global" && name != "homes" && name != "printers" {
shares.push((name, path, current_mode.clone()));
}
}
// Start new share
let share_name = line[1..line.len()-1].trim().to_string();
current_share = Some(share_name);
current_path = None;
current_mode = "ro".to_string(); // Reset to default
}
// Look for path = /some/path
else if line.starts_with("path") && line.contains('=') {
if let Some(path_value) = line.split('=').nth(1) {
current_path = Some(path_value.trim().to_string());
}
}
// Look for read only = yes/no
else if line.to_lowercase().starts_with("read only") && line.contains('=') {
if let Some(value) = line.split('=').nth(1) {
let val = value.trim().to_lowercase();
current_mode = if val == "no" || val == "false" { "rw" } else { "ro" }.to_string();
}
}
// Look for writable = yes/no (opposite of read only)
else if line.to_lowercase().starts_with("writable") && line.contains('=') {
if let Some(value) = line.split('=').nth(1) {
let val = value.trim().to_lowercase();
current_mode = if val == "yes" || val == "true" { "rw" } else { "ro" }.to_string();
}
}
}
// Don't forget the last share
if let (Some(name), Some(path)) = (current_share, current_path) {
if name != "global" && name != "homes" && name != "printers" {
shares.push((name, path, current_mode));
}
}
shares
}
_ => Vec::new(),
}
}
/// Get nftables open ports grouped by protocol
/// Returns: (tcp_ports_string, udp_ports_string)
fn get_nftables_open_ports(&self) -> (String, String) {
let output = Command::new("timeout")
.args(&["3", "sudo", "nft", "list", "ruleset"])
let output = Command::new("sudo")
.args(&["/run/current-system/sw/bin/nft", "list", "ruleset"])
.output();
let output = match output {
Ok(out) if out.status.success() => out,
_ => return (String::new(), String::new()),
Ok(out) => {
info!("nft command failed with status: {:?}, stderr: {}",
out.status, String::from_utf8_lossy(&out.stderr));
return (String::new(), String::new());
}
Err(e) => {
info!("Failed to execute nft command: {}", e);
return (String::new(), String::new());
}
};
let output_str = match String::from_utf8(output.stdout) {
Ok(s) => s,
Err(_) => return (String::new(), String::new()),
Err(_) => {
info!("Failed to parse nft output as UTF-8");
return (String::new(), String::new());
}
};
let mut tcp_ports = std::collections::HashSet::new();
@ -933,26 +1233,26 @@ impl SystemdCollector {
// Parse nftables output for WAN incoming accept rules with dport
// Looking for patterns like: tcp dport 22 accept or tcp dport { 22, 80, 443 } accept
// Only include rules in input chain without private network source restrictions
let mut in_input_chain = false;
// Only include rules in input_wan chain
let mut in_wan_chain = false;
for line in output_str.lines() {
let line = line.trim();
// Track if we're in the input chain
if line.contains("chain input") || line.contains("chain INPUT") {
in_input_chain = true;
// Track if we're in the input_wan chain
if line.contains("chain input_wan") {
in_wan_chain = true;
continue;
}
// Reset when entering other chains
if line.starts_with("chain ") && !line.contains("input") && !line.contains("INPUT") {
in_input_chain = false;
// Reset when exiting chain (closing brace) or entering other chains
if line == "}" || (line.starts_with("chain ") && !line.contains("input_wan")) {
in_wan_chain = false;
continue;
}
// Only process rules in input chain
if !in_input_chain {
// Only process rules in input_wan chain
if !in_wan_chain {
continue;
}
@ -961,14 +1261,6 @@ impl SystemdCollector {
continue;
}
// Skip internal network traffic (LAN/private networks)
if line.contains("ip saddr 192.168.") ||
line.contains("ip saddr 10.") ||
line.contains("ip saddr 172.16.") ||
line.contains("iifname \"lo\"") {
continue;
}
// Parse TCP ports
if line.contains("tcp dport") {
for port in self.extract_ports_from_nft_rule(line) {
@ -993,6 +1285,8 @@ impl SystemdCollector {
let tcp_str = tcp_vec.iter().map(|p| p.to_string()).collect::<Vec<_>>().join(", ");
let udp_str = udp_vec.iter().map(|p| p.to_string()).collect::<Vec<_>>().join(", ");
info!("nftables WAN ports - TCP: '{}', UDP: '{}'", tcp_str, udp_str);
(tcp_str, udp_str)
}
@ -1083,6 +1377,31 @@ impl SystemdCollector {
Some((active_count, download_mbps, upload_mbps))
}
/// Check for active torrent copy operations
/// Returns: Vec of filenames currently being copied
fn get_active_torrent_copies(&self) -> Vec<String> {
let marker_dir = "/tmp/torrent-copy";
let mut active_copies = Vec::new();
// Read all marker files from directory
if let Ok(entries) = std::fs::read_dir(marker_dir) {
for entry in entries.flatten() {
if let Ok(file_type) = entry.file_type() {
if file_type.is_file() {
// Filename is the marker (sanitized torrent name)
if let Some(filename) = entry.file_name().to_str() {
// Convert sanitized name back (replace _ with /)
let display_name = filename.replace('_', "/");
active_copies.push(display_name);
}
}
}
}
}
active_copies
}
}
#[async_trait]

View File

@ -1,6 +1,6 @@
[package]
name = "cm-dashboard"
version = "0.1.248"
version = "0.1.276"
edition = "2021"
[dependencies]

View File

@ -1,10 +1,10 @@
use anyhow::Result;
use crossterm::{
event::{self},
event::{self, EnableMouseCapture, DisableMouseCapture, Event, MouseEvent, MouseEventKind, MouseButton},
execute,
terminal::{disable_raw_mode, enable_raw_mode, EnterAlternateScreen, LeaveAlternateScreen},
};
use ratatui::{backend::CrosstermBackend, Terminal};
use ratatui::{backend::CrosstermBackend, Terminal, layout::Rect};
use std::io;
use std::time::{Duration, Instant};
use tracing::{debug, error, info, warn};
@ -22,6 +22,8 @@ pub struct Dashboard {
headless: bool,
initial_commands_sent: std::collections::HashSet<String>,
config: DashboardConfig,
system_area: Rect, // Store system area for mouse event handling
services_area: Rect, // Store services area for mouse event handling
}
impl Dashboard {
@ -92,7 +94,7 @@ impl Dashboard {
}
let mut stdout = io::stdout();
if let Err(e) = execute!(stdout, EnterAlternateScreen) {
if let Err(e) = execute!(stdout, EnterAlternateScreen, EnableMouseCapture) {
error!("Failed to enter alternate screen: {}", e);
let _ = disable_raw_mode();
return Err(e.into());
@ -121,6 +123,8 @@ impl Dashboard {
headless,
initial_commands_sent: std::collections::HashSet::new(),
config,
system_area: Rect::default(),
services_area: Rect::default(),
})
}
@ -132,27 +136,45 @@ impl Dashboard {
let metrics_check_interval = Duration::from_millis(100); // Check for metrics every 100ms
let mut last_heartbeat_check = Instant::now();
let heartbeat_check_interval = Duration::from_secs(1); // Check for host connectivity every 1 second
let mut needs_render = true; // Track if we need to render
loop {
// Handle terminal events (keyboard input) only if not headless
// Handle terminal events (keyboard and mouse input) only if not headless
if !self.headless {
match event::poll(Duration::from_millis(50)) {
match event::poll(Duration::from_millis(200)) {
Ok(true) => {
match event::read() {
Ok(event) => {
if let Some(ref mut tui_app) = self.tui_app {
// Handle input
match tui_app.handle_input(event) {
Ok(_) => {
// Check if we should quit
if tui_app.should_quit() {
info!("Quit requested, exiting dashboard");
break;
match event {
Event::Key(_) => {
// Handle keyboard input
match tui_app.handle_input(event) {
Ok(_) => {
needs_render = true;
// Check if we should quit
if tui_app.should_quit() {
info!("Quit requested, exiting dashboard");
break;
}
}
Err(e) => {
error!("Error handling input: {}", e);
}
}
}
Err(e) => {
error!("Error handling input: {}", e);
Event::Mouse(mouse_event) => {
// Handle mouse events
if let Err(e) = self.handle_mouse_event(mouse_event) {
error!("Error handling mouse event: {}", e);
}
needs_render = true;
}
Event::Resize(_width, _height) => {
// Terminal was resized - mark for re-render
needs_render = true;
}
_ => {}
}
}
}
@ -168,17 +190,6 @@ impl Dashboard {
break;
}
}
// Render UI immediately after handling input for responsive feedback
if let Some(ref mut terminal) = self.terminal {
if let Some(ref mut tui_app) = self.tui_app {
if let Err(e) = terminal.draw(|frame| {
tui_app.render(frame, &self.metric_store);
}) {
error!("Error rendering TUI after input: {}", e);
}
}
}
}
// Check for new metrics
@ -217,8 +228,10 @@ impl Dashboard {
if let Some(ref mut tui_app) = self.tui_app {
tui_app.update_metrics(&mut self.metric_store);
}
needs_render = true; // New metrics received, need to render
}
// Also check for command output messages
if let Ok(Some(cmd_output)) = self.zmq_consumer.receive_command_output().await {
debug!(
@ -229,47 +242,371 @@ impl Dashboard {
// Command output (terminal popup removed - output not displayed)
}
last_metrics_check = Instant::now();
}
// Check for host connectivity changes (heartbeat timeouts) periodically
if last_heartbeat_check.elapsed() >= heartbeat_check_interval {
let timeout = Duration::from_secs(self.config.zmq.heartbeat_timeout_seconds);
// Clean up metrics for offline hosts
self.metric_store.cleanup_offline_hosts(timeout);
if let Some(ref mut tui_app) = self.tui_app {
let connected_hosts = self.metric_store.get_connected_hosts(timeout);
tui_app.update_hosts(connected_hosts);
}
last_heartbeat_check = Instant::now();
needs_render = true; // Heartbeat check happened, may have changed hosts
}
// Render TUI (only if not headless)
if !self.headless {
// Render TUI only when needed (not headless and something changed)
if !self.headless && needs_render {
if let Some(ref mut terminal) = self.terminal {
if let Some(ref mut tui_app) = self.tui_app {
// Clear and autoresize terminal to handle any resize events
if let Err(e) = terminal.autoresize() {
warn!("Error autoresizing terminal: {}", e);
}
// Render TUI regardless of terminal size
if let Err(e) = terminal.draw(|frame| {
tui_app.render(frame, &self.metric_store);
let (_title_area, system_area, services_area) = tui_app.render(frame, &self.metric_store);
self.system_area = system_area;
self.services_area = services_area;
}) {
error!("Error rendering TUI: {}", e);
break;
}
}
}
needs_render = false; // Reset flag after rendering
}
// Small sleep to prevent excessive CPU usage
tokio::time::sleep(Duration::from_millis(10)).await;
}
info!("Dashboard main loop ended");
Ok(())
}
/// Handle mouse events
fn handle_mouse_event(&mut self, mouse: MouseEvent) -> Result<()> {
let x = mouse.column;
let y = mouse.row;
// Handle popup menu if open
let popup_info = if let Some(ref tui_app) = self.tui_app {
tui_app.popup_menu.clone().map(|popup| {
let hostname = tui_app.current_host.clone();
(popup, hostname)
})
} else {
None
};
if let Some((popup, hostname)) = popup_info {
// Calculate popup bounds using screen coordinates
let popup_width = 20;
let popup_height = 5; // 3 items + 2 borders
// Get terminal size
let (screen_width, screen_height) = if let Some(ref terminal) = self.terminal {
let size = terminal.size().unwrap_or_default();
(size.width, size.height)
} else {
(80, 24) // fallback
};
let popup_x = if popup.x + popup_width < screen_width {
popup.x
} else {
screen_width.saturating_sub(popup_width)
};
let popup_y = if popup.y + popup_height < screen_height {
popup.y
} else {
screen_height.saturating_sub(popup_height)
};
let popup_area = Rect {
x: popup_x,
y: popup_y,
width: popup_width,
height: popup_height,
};
// Update selected index on mouse move
if matches!(mouse.kind, MouseEventKind::Moved) {
if is_in_area(x, y, &popup_area) {
let relative_y = y.saturating_sub(popup_y + 1) as usize; // +1 for top border
if relative_y < 3 {
if let Some(ref mut tui_app) = self.tui_app {
if let Some(ref mut popup) = tui_app.popup_menu {
popup.selected_index = relative_y;
}
}
}
}
return Ok(());
}
if matches!(mouse.kind, MouseEventKind::Down(MouseButton::Left)) {
if is_in_area(x, y, &popup_area) {
// Click inside popup - execute action
let relative_y = y.saturating_sub(popup_y + 1) as usize; // +1 for top border
if relative_y < 3 {
// Execute the selected action
self.execute_service_action(relative_y, &popup.service_name, hostname.as_deref())?;
}
// Close popup after action
if let Some(ref mut tui_app) = self.tui_app {
tui_app.popup_menu = None;
}
return Ok(());
} else {
// Click outside popup - close it
if let Some(ref mut tui_app) = self.tui_app {
tui_app.popup_menu = None;
}
return Ok(());
}
}
// Any other event while popup is open - don't process panels
return Ok(());
}
// Check for tab clicks in right panel (hosts | services)
if matches!(mouse.kind, MouseEventKind::Down(MouseButton::Left)) {
let services_end = self.services_area.x.saturating_add(self.services_area.width);
if y == self.services_area.y && x >= self.services_area.x && x < services_end {
// Click on top border of services area (where tabs are)
if let Some(ref mut tui_app) = self.tui_app {
tui_app.handle_tab_click(x, &self.services_area);
}
return Ok(());
}
}
// Determine which panel the mouse is over
let in_system_area = is_in_area(x, y, &self.system_area);
let in_services_area = is_in_area(x, y, &self.services_area);
if !in_system_area && !in_services_area {
return Ok(());
}
// Handle mouse events
match mouse.kind {
MouseEventKind::ScrollDown => {
if in_system_area {
// Scroll down in system panel
if let Some(ref mut tui_app) = self.tui_app {
if let Some(hostname) = tui_app.current_host.clone() {
let host_widgets = tui_app.get_or_create_host_widgets(&hostname);
let visible_height = self.system_area.height as usize;
let total_lines = host_widgets.system_widget.get_total_lines();
host_widgets.system_widget.scroll_down(visible_height, total_lines);
}
}
} else if in_services_area {
// Scroll down in services panel
if let Some(ref mut tui_app) = self.tui_app {
if let Some(hostname) = tui_app.current_host.clone() {
let host_widgets = tui_app.get_or_create_host_widgets(&hostname);
// Calculate visible height (panel height - borders and header)
let visible_height = self.services_area.height.saturating_sub(3) as usize;
host_widgets.services_widget.scroll_down(visible_height);
}
}
}
}
MouseEventKind::ScrollUp => {
if in_system_area {
// Scroll up in system panel
if let Some(ref mut tui_app) = self.tui_app {
if let Some(hostname) = tui_app.current_host.clone() {
let host_widgets = tui_app.get_or_create_host_widgets(&hostname);
host_widgets.system_widget.scroll_up();
}
}
} else if in_services_area {
// Scroll up in services panel
if let Some(ref mut tui_app) = self.tui_app {
if let Some(hostname) = tui_app.current_host.clone() {
let host_widgets = tui_app.get_or_create_host_widgets(&hostname);
host_widgets.services_widget.scroll_up();
}
}
}
}
MouseEventKind::Down(button) => {
// Only handle clicks in services area (not system area)
if !in_services_area {
return Ok(());
}
if let Some(ref mut tui_app) = self.tui_app {
if tui_app.focus_hosts {
// Hosts tab is active - handle host click
// The services area includes a border and header, so account for that
let relative_y = y.saturating_sub(self.services_area.y + 2) as usize; // +2 for border and header
let total_hosts = tui_app.get_available_hosts().len();
let clicked_index = tui_app.hosts_widget.y_to_host_index(relative_y);
if clicked_index < total_hosts {
match button {
MouseButton::Left => {
// Left click: set selector and switch to host immediately
tui_app.hosts_widget.set_selected_index(clicked_index, total_hosts);
let selected_host = tui_app.get_available_hosts()[clicked_index].clone();
tui_app.switch_to_host(&selected_host);
debug!("Clicked host at index {}: {}", clicked_index, selected_host);
}
_ => {}
}
}
} else {
// Services tab is active - handle service click
// The services area includes a border, so we need to account for that
let relative_y = y.saturating_sub(self.services_area.y + 2) as usize; // +2 for border and header
if let Some(hostname) = tui_app.current_host.clone() {
let host_widgets = tui_app.get_or_create_host_widgets(&hostname);
// Account for scroll offset - the clicked line is relative to viewport
let display_line_index = host_widgets.services_widget.scroll_offset + relative_y;
// Map display line to parent service index
if let Some(parent_index) = host_widgets.services_widget.display_line_to_parent_index(display_line_index) {
// Set the selected index to the clicked parent service
host_widgets.services_widget.selected_index = parent_index;
match button {
MouseButton::Left => {
// Left click just selects the service
debug!("Left-clicked service at display line {} (parent index: {})", display_line_index, parent_index);
}
MouseButton::Right => {
// Right click opens context menu
debug!("Right-clicked service at display line {} (parent index: {})", display_line_index, parent_index);
// Get the service name for the popup
if let Some(service_name) = host_widgets.services_widget.get_selected_service() {
tui_app.popup_menu = Some(crate::ui::PopupMenu {
service_name,
x,
y,
selected_index: 0,
});
}
}
_ => {}
}
}
}
}
}
}
_ => {}
}
Ok(())
}
/// Execute service action from popup menu
fn execute_service_action(&self, action_index: usize, service_name: &str, hostname: Option<&str>) -> Result<()> {
let Some(hostname) = hostname else {
return Ok(());
};
let connection_ip = self.get_connection_ip(hostname);
match action_index {
0 => {
// Start Service
let service_start_command = format!(
"echo 'Starting service: {} on {}' && ssh -tt {}@{} \"bash -ic '{} start {}'\"",
service_name,
hostname,
self.config.ssh.rebuild_user,
connection_ip,
self.config.ssh.service_manage_cmd,
service_name
);
std::process::Command::new("tmux")
.arg("split-window")
.arg("-v")
.arg("-p")
.arg("30")
.arg(&service_start_command)
.spawn()
.ok();
}
1 => {
// Stop Service
let service_stop_command = format!(
"echo 'Stopping service: {} on {}' && ssh -tt {}@{} \"bash -ic '{} stop {}'\"",
service_name,
hostname,
self.config.ssh.rebuild_user,
connection_ip,
self.config.ssh.service_manage_cmd,
service_name
);
std::process::Command::new("tmux")
.arg("split-window")
.arg("-v")
.arg("-p")
.arg("30")
.arg(&service_stop_command)
.spawn()
.ok();
}
2 => {
// View Logs
let logs_command = format!(
"ssh -tt {}@{} '{} logs {}'",
self.config.ssh.rebuild_user,
connection_ip,
self.config.ssh.service_manage_cmd,
service_name
);
std::process::Command::new("tmux")
.arg("split-window")
.arg("-v")
.arg("-p")
.arg("30")
.arg(&logs_command)
.spawn()
.ok();
}
_ => {}
}
Ok(())
}
/// Get connection IP for a host
fn get_connection_ip(&self, hostname: &str) -> String {
self.config
.hosts
.get(hostname)
.and_then(|h| h.ip.clone())
.unwrap_or_else(|| hostname.to_string())
}
}
/// Check if a point is within a rectangular area
fn is_in_area(x: u16, y: u16, area: &Rect) -> bool {
x >= area.x && x < area.x.saturating_add(area.width)
&& y >= area.y && y < area.y.saturating_add(area.height)
}
impl Drop for Dashboard {
@ -278,7 +615,7 @@ impl Drop for Dashboard {
if !self.headless {
let _ = disable_raw_mode();
if let Some(ref mut terminal) = self.terminal {
let _ = execute!(terminal.backend_mut(), LeaveAlternateScreen);
let _ = execute!(terminal.backend_mut(), LeaveAlternateScreen, DisableMouseCapture);
let _ = terminal.show_cursor();
}
}

View File

@ -86,16 +86,6 @@ impl MetricStore {
self.current_agent_data.get(hostname)
}
/// Get ZMQ communication statistics for a host
pub fn get_zmq_stats(&mut self, hostname: &str) -> Option<ZmqStats> {
let now = Instant::now();
self.zmq_stats.get_mut(hostname).map(|stats| {
// Update packet age
stats.last_packet_age_secs = now.duration_since(stats.last_packet_time).as_secs_f64();
stats.clone()
})
}
/// Get connected hosts (hosts with recent heartbeats)
pub fn get_connected_hosts(&self, timeout: Duration) -> Vec<String> {
let now = Instant::now();

View File

@ -17,8 +17,8 @@ pub mod widgets;
use crate::config::DashboardConfig;
use crate::metrics::MetricStore;
use cm_dashboard_shared::Status;
use theme::{Components, Layout as ThemeLayout, Theme, Typography};
use widgets::{ServicesWidget, SystemWidget, Widget};
use theme::{Components, Layout as ThemeLayout, Theme};
use widgets::{HostsWidget, ServicesWidget, SystemWidget, Widget};
@ -47,16 +47,23 @@ impl HostWidgets {
}
/// Popup menu state
#[derive(Clone)]
pub struct PopupMenu {
pub service_name: String,
pub x: u16,
pub y: u16,
pub selected_index: usize,
}
/// Main TUI application
pub struct TuiApp {
/// Widget states per host (hostname -> HostWidgets)
host_widgets: HashMap<String, HostWidgets>,
/// Current active host
current_host: Option<String>,
pub current_host: Option<String>,
/// Available hosts
available_hosts: Vec<String>,
/// Host index for navigation
host_index: usize,
/// Should quit application
should_quit: bool,
/// Track if user manually navigated away from localhost
@ -65,6 +72,12 @@ pub struct TuiApp {
config: DashboardConfig,
/// Cached localhost hostname to avoid repeated system calls
localhost: String,
/// Active popup menu (if any)
pub popup_menu: Option<PopupMenu>,
/// Focus on hosts tab (false = Services, true = Hosts)
pub focus_hosts: bool,
/// Hosts widget for navigation and rendering
pub hosts_widget: HostsWidget,
}
impl TuiApp {
@ -74,11 +87,13 @@ impl TuiApp {
host_widgets: HashMap::new(),
current_host: None,
available_hosts: config.hosts.keys().cloned().collect(),
host_index: 0,
should_quit: false,
user_navigated_away: false,
config,
localhost,
popup_menu: None,
focus_hosts: true, // Start with Hosts tab focused by default
hosts_widget: HostsWidget::new(),
};
// Sort predefined hosts
@ -93,7 +108,7 @@ impl TuiApp {
}
/// Get or create host widgets for the given hostname
fn get_or_create_host_widgets(&mut self, hostname: &str) -> &mut HostWidgets {
pub fn get_or_create_host_widgets(&mut self, hostname: &str) -> &mut HostWidgets {
self.host_widgets
.entry(hostname.to_string())
.or_insert_with(HostWidgets::new)
@ -130,27 +145,32 @@ impl TuiApp {
all_hosts.sort();
self.available_hosts = all_hosts;
// Track if we had a host before this update
let had_host = self.current_host.is_some();
// Get the current hostname (localhost) for auto-selection
if !self.available_hosts.is_empty() {
if self.available_hosts.contains(&self.localhost) && !self.user_navigated_away {
// Localhost is available and user hasn't navigated away - switch to it
self.current_host = Some(self.localhost.clone());
// Find the actual index of localhost in the sorted list
self.host_index = self.available_hosts.iter().position(|h| h == &self.localhost).unwrap_or(0);
// Initialize selector bar on first host selection
if !had_host {
let index = self.available_hosts.iter().position(|h| h == &self.localhost).unwrap_or(0);
self.hosts_widget.set_selected_index(index, self.available_hosts.len());
}
} else if self.current_host.is_none() {
// No current host - select first available (which is localhost if available)
self.current_host = Some(self.available_hosts[0].clone());
self.host_index = 0;
// Initialize selector bar
self.hosts_widget.set_selected_index(0, self.available_hosts.len());
} else if let Some(ref current) = self.current_host {
if !self.available_hosts.contains(current) {
// Current host disconnected - select first available and reset navigation flag
// Current host disconnected - FORCE switch to first available
self.current_host = Some(self.available_hosts[0].clone());
self.host_index = 0;
// Reset selector bar since we're forcing a host change
self.hosts_widget.set_selected_index(0, self.available_hosts.len());
self.user_navigated_away = false; // Reset since we're forced to switch
} else if let Some(index) = self.available_hosts.iter().position(|h| h == current) {
// Update index for current host
self.host_index = index;
}
}
}
@ -159,16 +179,18 @@ impl TuiApp {
/// Handle keyboard input
pub fn handle_input(&mut self, event: Event) -> Result<()> {
if let Event::Key(key) = event {
// Close popup on Escape
if matches!(key.code, KeyCode::Esc) {
if self.popup_menu.is_some() {
self.popup_menu = None;
return Ok(());
}
}
match key.code {
KeyCode::Char('q') => {
self.should_quit = true;
}
KeyCode::Left => {
self.navigate_host(-1);
}
KeyCode::Right => {
self.navigate_host(1);
}
KeyCode::Char('r') => {
// System rebuild command - works on any panel for current host
if let Some(hostname) = self.current_host.clone() {
@ -336,25 +358,46 @@ impl TuiApp {
}
}
KeyCode::Tab => {
// Tab cycles to next host
self.navigate_host(1);
// Tab toggles between Services and Hosts tabs
self.focus_hosts = !self.focus_hosts;
}
KeyCode::Up | KeyCode::Char('k') => {
// Move service selection up
if let Some(hostname) = self.current_host.clone() {
let host_widgets = self.get_or_create_host_widgets(&hostname);
host_widgets.services_widget.select_previous();
if self.focus_hosts {
// Move blue selector bar up when in Hosts tab
self.hosts_widget.select_previous();
} else {
// Move service selection up when in Services tab
if let Some(hostname) = self.current_host.clone() {
let host_widgets = self.get_or_create_host_widgets(&hostname);
host_widgets.services_widget.select_previous();
}
}
}
KeyCode::Down | KeyCode::Char('j') => {
// Move service selection down
if let Some(hostname) = self.current_host.clone() {
let total_services = {
if self.focus_hosts {
// Move blue selector bar down when in Hosts tab
let total_hosts = self.available_hosts.len();
self.hosts_widget.select_next(total_hosts);
} else {
// Move service selection down when in Services tab
if let Some(hostname) = self.current_host.clone() {
let total_services = {
let host_widgets = self.get_or_create_host_widgets(&hostname);
host_widgets.services_widget.get_total_services_count()
};
let host_widgets = self.get_or_create_host_widgets(&hostname);
host_widgets.services_widget.get_total_services_count()
};
let host_widgets = self.get_or_create_host_widgets(&hostname);
host_widgets.services_widget.select_next(total_services);
host_widgets.services_widget.select_next(total_services);
}
}
}
KeyCode::Enter => {
if self.focus_hosts {
// Enter key switches to the selected host
let selected_idx = self.hosts_widget.get_selected_index();
if selected_idx < self.available_hosts.len() {
let selected_host = self.available_hosts[selected_idx].clone();
self.switch_to_host(&selected_host);
}
}
}
_ => {}
@@ -363,37 +406,47 @@ impl TuiApp {
Ok(())
}
/// Navigate between hosts
fn navigate_host(&mut self, direction: i32) {
if self.available_hosts.is_empty() {
return;
}
/// Switch to a specific host by name
pub fn switch_to_host(&mut self, hostname: &str) {
if let Some(index) = self.available_hosts.iter().position(|h| h == hostname) {
// Update selector bar position
self.hosts_widget.set_selected_index(index, self.available_hosts.len());
self.current_host = Some(hostname.to_string());
let len = self.available_hosts.len();
if direction > 0 {
self.host_index = (self.host_index + 1) % len;
} else {
self.host_index = if self.host_index == 0 {
len - 1
} else {
self.host_index - 1
};
}
self.current_host = Some(self.available_hosts[self.host_index].clone());
// Check if user navigated away from localhost
if let Some(ref current) = self.current_host {
if current != &self.localhost {
// Check if user navigated away from localhost
if hostname != &self.localhost {
self.user_navigated_away = true;
} else {
self.user_navigated_away = false; // User navigated back to localhost
}
info!("Switched to host: {}", hostname);
}
info!("Switched to host: {}", self.current_host.as_ref().unwrap());
}
/// Handle mouse click on tab title area
pub fn handle_tab_click(&mut self, x: u16, area: &Rect) {
// Tab title format: "hosts | services"
// Calculate positions relative to area start
let title_start_x = area.x + 1; // +1 for left border
// "hosts | services"
// 0123456789...
let hosts_start = title_start_x;
let hosts_end = hosts_start + 5; // "hosts" is 5 chars
let services_start = hosts_end + 3; // After " | "
let services_end = services_start + 8; // "services" is 8 chars
if x >= hosts_start && x < hosts_end {
// Clicked on "hosts"
self.focus_hosts = true;
} else if x >= services_start && x < services_end {
// Clicked on "services"
self.focus_hosts = false;
}
}
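A minimal sketch of how a left-click might be routed to `handle_tab_click` using the areas now returned by `render()`. The dispatch function and its wiring are assumptions for illustration, not part of this diff; only `handle_tab_click(x, &area)` and the returned `Rect`s come from the code above.

```rust
// Hypothetical wiring, assuming crossterm mouse capture is enabled and the
// event loop keeps the (title, system, services) Rects returned by render().
use crossterm::event::{MouseButton, MouseEvent, MouseEventKind};
use ratatui::layout::Rect;

fn route_mouse(app: &mut TuiApp, event: MouseEvent, services_area: Rect) {
    if let MouseEventKind::Down(MouseButton::Left) = event.kind {
        // The tab title ("hosts | services") sits on the top border row of the
        // right panel, so only clicks on that exact row are forwarded.
        let in_panel = event.column >= services_area.x
            && event.column < services_area.x + services_area.width;
        if in_panel && event.row == services_area.y {
            app.handle_tab_click(event.column, &services_area);
        }
    }
}
```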
@@ -408,6 +461,10 @@ impl TuiApp {
None
}
/// Get the list of available hosts
pub fn get_available_hosts(&self) -> &Vec<String> {
&self.available_hosts
}
/// Should quit application
pub fn should_quit(&self) -> bool {
@@ -421,7 +478,7 @@ impl TuiApp {
/// Render the dashboard (real btop-style multi-panel layout)
pub fn render(&mut self, frame: &mut Frame, metric_store: &MetricStore) {
pub fn render(&mut self, frame: &mut Frame, metric_store: &MetricStore) -> (Rect, Rect, Rect) {
let size = frame.size();
// Clear background to true black like btop
@@ -461,8 +518,8 @@ impl TuiApp {
if current_host_offline {
self.render_offline_host_message(frame, main_chunks[1]);
self.render_btop_title(frame, main_chunks[0], metric_store);
self.render_statusbar(frame, main_chunks[2]);
return;
self.render_statusbar(frame, main_chunks[2], metric_store);
return (main_chunks[0], Rect::default(), Rect::default()); // Return title area and empty areas when offline
}
// Left side: system panel only (full height)
@@ -475,27 +532,29 @@ impl TuiApp {
self.render_btop_title(frame, main_chunks[0], metric_store);
// Render system panel
self.render_system_panel(frame, left_chunks[0], metric_store);
let system_area = left_chunks[0];
self.render_system_panel(frame, system_area, metric_store);
// Render services widget for current host
if let Some(hostname) = self.current_host.clone() {
let is_focused = true; // Always show service selection
let host_widgets = self.get_or_create_host_widgets(&hostname);
host_widgets
.services_widget
.render(frame, content_chunks[1], is_focused); // Services takes full right side
}
// Render right panel with tabs (Services | Hosts)
let services_area = content_chunks[1];
self.render_right_panel_with_tabs(frame, services_area, metric_store);
// Render statusbar at the bottom
self.render_statusbar(frame, main_chunks[2]); // main_chunks[2] is the statusbar area
self.render_statusbar(frame, main_chunks[2], metric_store);
// Render popup menu on top of everything if active
if let Some(ref popup) = self.popup_menu {
self.render_popup_menu(frame, popup);
}
// Return all areas for mouse event handling
(main_chunks[0], system_area, services_area)
}
/// Render btop-style minimal title with host status colors
fn render_btop_title(&self, frame: &mut Frame, area: Rect, metric_store: &MetricStore) {
use ratatui::style::Modifier;
use ratatui::text::{Line, Span};
use theme::StatusIcons;
if self.available_hosts.is_empty() {
let title_text = "cm-dashboard • no hosts discovered";
@@ -518,72 +577,34 @@ impl TuiApp {
// Use the worst status color as background
let background_color = Theme::status_color(worst_status);
// Split the title bar into left and right sections
let chunks = Layout::default()
.direction(Direction::Horizontal)
.constraints([Constraint::Length(22), Constraint::Min(0)])
.split(area);
// Single line title bar showing dashboard name (left) and dashboard IP (right)
let left_text = format!(" cm-dashboard v{}", env!("CARGO_PKG_VERSION"));
// Left side: "cm-dashboard" text with version
let title_text = format!(" cm-dashboard v{}", env!("CARGO_PKG_VERSION"));
let left_span = Span::styled(
&title_text,
Style::default().fg(Theme::background()).bg(background_color).add_modifier(Modifier::BOLD)
);
let left_title = Paragraph::new(Line::from(vec![left_span]))
.style(Style::default().bg(background_color));
frame.render_widget(left_title, chunks[0]);
// Get dashboard local IP for right side
let dashboard_ip = Self::get_local_ip();
let right_text = format!("{} ", dashboard_ip);
// Right side: hosts with status indicators
let mut host_spans = Vec::new();
for (i, host) in self.available_hosts.iter().enumerate() {
if i > 0 {
host_spans.push(Span::styled(
" ",
Style::default().fg(Theme::background()).bg(background_color)
));
}
// Calculate spacing to push right text to the right
let total_text_len = left_text.len() + right_text.len();
let spacing = (area.width as usize).saturating_sub(total_text_len).max(1);
let spacing_str = " ".repeat(spacing);
// Always show normal status icon based on metrics (no command status at host level)
let host_status = self.calculate_host_status(host, metric_store);
let status_icon = StatusIcons::get_icon(host_status);
// Add status icon with background color as foreground against status background
host_spans.push(Span::styled(
format!("{} ", status_icon),
Style::default().fg(Theme::background()).bg(background_color),
));
if Some(host) == self.current_host.as_ref() {
// Selected host in bold background color against status background
host_spans.push(Span::styled(
host.clone(),
Style::default()
.fg(Theme::background())
.bg(background_color)
.add_modifier(Modifier::BOLD),
));
} else {
// Other hosts in normal background color against status background
host_spans.push(Span::styled(
host.clone(),
Style::default().fg(Theme::background()).bg(background_color),
));
}
}
// Add right padding
host_spans.push(Span::styled(
" ",
Style::default().fg(Theme::background()).bg(background_color)
));
let host_line = Line::from(host_spans);
let host_title = Paragraph::new(vec![host_line])
.style(Style::default().bg(background_color))
.alignment(ratatui::layout::Alignment::Right);
frame.render_widget(host_title, chunks[1]);
let title = Paragraph::new(Line::from(vec![
Span::styled(
left_text,
Style::default().fg(Theme::background()).bg(background_color).add_modifier(Modifier::BOLD)
),
Span::styled(
spacing_str,
Style::default().bg(background_color)
),
Span::styled(
right_text,
Style::default().fg(Theme::background()).bg(background_color)
),
]))
.style(Style::default().bg(background_color));
frame.render_widget(title, area);
}
/// Calculate overall status for a host based on its structured data
@@ -597,36 +618,134 @@ impl TuiApp {
}
}
/// Render dynamic statusbar with context-aware shortcuts
fn render_statusbar(&self, frame: &mut Frame, area: Rect) {
let shortcuts = self.get_context_shortcuts();
let statusbar_text = shortcuts.join("");
let statusbar = Paragraph::new(statusbar_text)
.style(Typography::secondary())
.alignment(ratatui::layout::Alignment::Center);
/// Render popup menu for service actions
fn render_popup_menu(&self, frame: &mut Frame, popup: &PopupMenu) {
use ratatui::widgets::{Block, Borders, Clear, List, ListItem};
use ratatui::style::{Color, Modifier};
// Menu items
let items = vec![
"Start Service",
"Stop Service",
"View Logs",
];
// Calculate popup size
let width = 20;
let height = items.len() as u16 + 2; // +2 for borders
// Position popup near click location, but keep it on screen
let screen_width = frame.size().width;
let screen_height = frame.size().height;
let x = if popup.x + width < screen_width {
popup.x
} else {
screen_width.saturating_sub(width)
};
let y = if popup.y + height < screen_height {
popup.y
} else {
screen_height.saturating_sub(height)
};
let popup_area = Rect {
x,
y,
width,
height,
};
// Create menu items with selection highlight
let menu_items: Vec<ListItem> = items
.iter()
.enumerate()
.map(|(i, item)| {
let style = if i == popup.selected_index {
Style::default()
.fg(Color::Black)
.bg(Color::White)
.add_modifier(Modifier::BOLD)
} else {
Style::default().fg(Theme::primary_text())
};
ListItem::new(*item).style(style)
})
.collect();
let menu_list = List::new(menu_items)
.block(
Block::default()
.borders(Borders::ALL)
.style(Style::default().bg(Theme::background()).fg(Theme::primary_text()))
);
// Clear the area and render menu
frame.render_widget(Clear, popup_area);
frame.render_widget(menu_list, popup_area);
}
/// Render statusbar with host and client IPs
fn render_statusbar(&self, frame: &mut Frame, area: Rect, _metric_store: &MetricStore) {
use ratatui::text::{Line, Span};
use ratatui::widgets::Paragraph;
// Get current host info
let (hostname_str, host_ip, build_version, agent_version) = if let Some(hostname) = &self.current_host {
// Get the connection IP (the IP dashboard uses to connect to the agent)
let ip = if let Some(host_details) = self.config.hosts.get(hostname) {
host_details.get_connection_ip(hostname)
} else {
hostname.clone()
};
// Get build and agent versions from system widget
let (build, agent) = if let Some(host_widgets) = self.host_widgets.get(hostname) {
let build = host_widgets.system_widget.get_build_version().unwrap_or("N/A".to_string());
let agent = host_widgets.system_widget.get_agent_version().unwrap_or("N/A".to_string());
(build, agent)
} else {
("N/A".to_string(), "N/A".to_string())
};
(hostname.clone(), ip, build, agent)
} else {
("None".to_string(), "N/A".to_string(), "N/A".to_string(), "N/A".to_string())
};
let left_text = format!(" Host: {} | {}", hostname_str, host_ip);
let right_text = format!("Build:{} | Agent:{} ", build_version, agent_version);
// Calculate spacing to push right text to the right
let total_text_len = left_text.len() + right_text.len();
let spacing = (area.width as usize).saturating_sub(total_text_len).max(1);
let spacing_str = " ".repeat(spacing);
let line = Line::from(vec![
Span::styled(left_text, Style::default().fg(Theme::border())),
Span::raw(spacing_str),
Span::styled(right_text, Style::default().fg(Theme::border())),
]);
let statusbar = Paragraph::new(line);
frame.render_widget(statusbar, area);
}
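The left/right alignment above relies on a simple padding rule; pulled out as a standalone sketch (function name invented), including the saturating_sub guard that keeps very small terminal windows from underflowing:

```rust
// Pad the gap between a left- and a right-aligned string for a given width.
// saturating_sub avoids underflow on narrow areas; .max(1) keeps at least one
// space so the two halves never run together.
fn middle_padding(area_width: u16, left: &str, right: &str) -> String {
    let used = left.len() + right.len();
    " ".repeat((area_width as usize).saturating_sub(used).max(1))
}
```

Like the existing code, this counts bytes rather than display cells, which is adequate for the ASCII host/version strings used here.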
/// Get context-aware shortcuts based on focused panel
fn get_context_shortcuts(&self) -> Vec<String> {
let mut shortcuts = Vec::new();
// Global shortcuts
shortcuts.push("Tab: Host".to_string());
shortcuts.push("↑↓/jk: Select".to_string());
shortcuts.push("r: Rebuild".to_string());
shortcuts.push("B: Backup".to_string());
shortcuts.push("s/S: Start/Stop".to_string());
shortcuts.push("L: Logs".to_string());
shortcuts.push("t: Terminal".to_string());
shortcuts.push("w: Wake".to_string());
// Always show quit
shortcuts.push("q: Quit".to_string());
shortcuts
/// Get local IP address of the dashboard
fn get_local_ip() -> String {
use std::net::UdpSocket;
// Try to get local IP by creating a UDP socket
// This doesn't actually send data, just determines routing
if let Ok(socket) = UdpSocket::bind("0.0.0.0:0") {
if socket.connect("8.8.8.8:80").is_ok() {
if let Ok(addr) = socket.local_addr() {
return addr.ip().to_string();
}
}
}
"N/A".to_string()
}
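The UDP trick above can be checked in isolation; a throwaway sketch (not part of the diff) that prints the address the dashboard would report:

```rust
use std::net::UdpSocket;

fn main() {
    // connect() on a UDP socket sends nothing; it only asks the OS which local
    // address it would route from, which local_addr() then reveals.
    let ip = UdpSocket::bind("0.0.0.0:0")
        .and_then(|s| s.connect("8.8.8.8:80").map(|_| s))
        .and_then(|s| s.local_addr())
        .map(|a| a.ip().to_string())
        .unwrap_or_else(|_| "N/A".to_string());
    println!("{ip}");
}
```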
fn render_system_panel(&mut self, frame: &mut Frame, area: Rect, _metric_store: &MetricStore) {
@@ -643,6 +762,73 @@ impl TuiApp {
}
/// Render right panel with tabs (hosts | services)
fn render_right_panel_with_tabs(&mut self, frame: &mut Frame, area: Rect, metric_store: &MetricStore) {
use ratatui::style::Modifier;
use ratatui::text::{Line, Span};
use ratatui::widgets::{Block, Borders};
// Build tab title with bold styling for active tab (like cm-player)
let hosts_style = if self.focus_hosts {
Style::default().fg(Theme::border_title()).add_modifier(Modifier::BOLD)
} else {
Style::default().fg(Theme::border_title())
};
let services_style = if !self.focus_hosts {
Style::default().fg(Theme::border_title()).add_modifier(Modifier::BOLD)
} else {
Style::default().fg(Theme::border_title())
};
let title = Line::from(vec![
Span::styled("hosts", hosts_style),
Span::raw(" | "),
Span::styled("services", services_style),
]);
// Create ONE block with tab title (like cm-player)
let main_block = Block::default()
.borders(Borders::ALL)
.title(title.clone())
.style(Style::default().fg(Theme::border()).bg(Theme::background()));
let inner_area = main_block.inner(area);
frame.render_widget(main_block, area);
// Render appropriate content based on active tab
if self.focus_hosts {
// Render hosts list (no additional borders)
let localhost = self.localhost.clone();
let current_host = self.current_host.as_deref();
self.hosts_widget.render(
frame,
inner_area,
&self.available_hosts,
&localhost,
current_host,
metric_store,
|hostname, store| {
// Inline calculate_host_status logic
if store.get_agent_data(hostname).is_some() {
Status::Ok
} else {
Status::Offline
}
},
true, // Always focused when visible
);
} else {
// Render services for current host (no additional borders - just content!)
if let Some(hostname) = self.current_host.clone() {
let is_focused = true;
let host_widgets = self.get_or_create_host_widgets(&hostname);
host_widgets.services_widget.render_content(frame, inner_area, is_focused);
}
}
}
/// Render offline host message with wake-up option
fn render_offline_host_message(&self, frame: &mut Frame, area: Rect) {
use ratatui::layout::Alignment;

View File

@@ -0,0 +1,229 @@
use ratatui::{
layout::Rect,
style::{Modifier, Style},
text::{Line, Span},
widgets::{List, ListItem},
Frame,
};
use crate::metrics::MetricStore;
use crate::ui::theme::Theme;
use cm_dashboard_shared::Status;
/// Hosts widget displaying all available hosts with selector bar navigation
#[derive(Clone)]
pub struct HostsWidget {
/// Currently selected host index (for blue selector bar)
pub selected_index: usize,
/// Scroll offset for viewport
pub scroll_offset: usize,
/// Last rendered viewport height for scroll calculations
last_viewport_height: usize,
}
impl HostsWidget {
pub fn new() -> Self {
Self {
selected_index: 0,
scroll_offset: 0,
last_viewport_height: 0,
}
}
/// Move selection up
pub fn select_previous(&mut self) {
if self.selected_index > 0 {
self.selected_index -= 1;
self.ensure_selected_visible();
}
}
/// Move selection down
pub fn select_next(&mut self, total_hosts: usize) {
if total_hosts > 0 && self.selected_index < total_hosts.saturating_sub(1) {
self.selected_index += 1;
self.ensure_selected_visible();
}
}
/// Ensure selected item is visible in viewport (auto-scroll)
fn ensure_selected_visible(&mut self) {
if self.last_viewport_height == 0 {
return; // Can't calculate without viewport height
}
let viewport_height = self.last_viewport_height;
// If selection is above viewport, scroll up to show it
if self.selected_index < self.scroll_offset {
self.scroll_offset = self.selected_index;
}
// If selection is below viewport, scroll down to show it
if self.selected_index >= self.scroll_offset + viewport_height {
self.scroll_offset = self.selected_index.saturating_sub(viewport_height.saturating_sub(1));
}
}
/// Scroll down manually
pub fn scroll_down(&mut self, total_hosts: usize) {
if self.last_viewport_height == 0 {
return;
}
let viewport_height = self.last_viewport_height;
let max_scroll = total_hosts.saturating_sub(viewport_height);
if self.scroll_offset < max_scroll {
self.scroll_offset += 1;
}
}
/// Scroll up manually
pub fn scroll_up(&mut self) {
if self.scroll_offset > 0 {
self.scroll_offset -= 1;
}
}
/// Get the currently selected host index
pub fn get_selected_index(&self) -> usize {
self.selected_index
}
/// Set selected index (used when switching to host via mouse)
pub fn set_selected_index(&mut self, index: usize, total_hosts: usize) {
if index < total_hosts {
self.selected_index = index;
self.ensure_selected_visible();
}
}
/// Convert y coordinate to host index (accounting for scroll)
pub fn y_to_host_index(&self, relative_y: usize) -> usize {
self.scroll_offset + relative_y
}
/// Render hosts list with selector bar
pub fn render<F>(
&mut self,
frame: &mut Frame,
area: Rect,
available_hosts: &[String],
localhost: &str,
current_host: Option<&str>,
metric_store: &MetricStore,
mut calculate_host_status: F,
is_focused: bool,
) where F: FnMut(&str, &MetricStore) -> Status {
use crate::ui::theme::{StatusIcons, Typography};
use ratatui::widgets::Paragraph;
// Split area for header and list
let chunks = ratatui::layout::Layout::default()
.direction(ratatui::layout::Direction::Vertical)
.constraints([
ratatui::layout::Constraint::Length(1), // Header
ratatui::layout::Constraint::Min(0), // List
])
.split(area);
// Render header
let header = Paragraph::new("Hosts:").style(Typography::muted());
frame.render_widget(header, chunks[0]);
// Store viewport height for scroll calculations (minus header)
self.last_viewport_height = chunks[1].height as usize;
// Validate scroll offset
if self.scroll_offset >= available_hosts.len() && !available_hosts.is_empty() {
self.scroll_offset = available_hosts.len().saturating_sub(1);
}
// Create list items for visible hosts
let items: Vec<ListItem> = available_hosts
.iter()
.enumerate()
.skip(self.scroll_offset)
.take(chunks[1].height as usize)
.map(|(idx, hostname)| {
let host_status = calculate_host_status(hostname, metric_store);
let status_icon = StatusIcons::get_icon(host_status);
let status_color = Theme::status_color(host_status);
// Check if this is the selected host (for blue selector bar)
let is_selected = is_focused && idx == self.selected_index;
// Check if this is the current (active) host
let is_current = current_host == Some(hostname.as_str());
// Check if this is localhost
let is_localhost = hostname == localhost;
// Build the line with icon and hostname
let mut spans = vec![Span::styled(
format!("{} ", status_icon),
if is_selected {
Style::default()
.fg(Theme::background())
.add_modifier(Modifier::BOLD)
} else {
Style::default().fg(status_color)
},
)];
// Add arrow indicator if this is the current host (like cm-player)
if is_current {
spans.push(Span::styled(
"",
if is_selected {
Style::default()
.fg(Theme::background())
.add_modifier(Modifier::BOLD)
} else {
Style::default()
.fg(Theme::primary_text())
.add_modifier(Modifier::BOLD)
},
));
}
// Add hostname with appropriate styling
let hostname_text = if is_localhost {
format!("{} (localhost)", hostname)
} else {
hostname.clone()
};
spans.push(Span::styled(
hostname_text,
if is_selected {
Style::default()
.fg(Theme::background())
.add_modifier(Modifier::BOLD)
} else if is_current {
Style::default()
.fg(Theme::primary_text())
.add_modifier(Modifier::BOLD)
} else {
Style::default().fg(Theme::primary_text())
},
));
let line = Line::from(spans);
// Apply blue background to selected row
let base_style = if is_selected {
Style::default().bg(Theme::highlight()) // Blue background
} else {
Style::default().bg(Theme::background())
};
ListItem::new(line).style(base_style)
})
.collect();
let hosts_list = List::new(items);
frame.render_widget(hosts_list, chunks[1]);
}
}

View File

@@ -1,8 +1,10 @@
use cm_dashboard_shared::AgentData;
pub mod hosts;
pub mod services;
pub mod system;
pub use hosts::HostsWidget;
pub use services::ServicesWidget;
pub use system::SystemWidget;

View File

@@ -91,14 +91,17 @@ pub struct ServicesWidget {
/// Last update indicator
has_data: bool,
/// Currently selected service index (for navigation cursor)
selected_index: usize,
pub selected_index: usize,
/// Scroll offset for viewport (which display line is at the top)
pub scroll_offset: usize,
/// Last rendered viewport height (for accurate scroll bounds)
last_viewport_height: usize,
}
#[derive(Clone)]
struct ServiceInfo {
metrics: Vec<(String, f32, Option<String>)>, // (label, value, unit)
widget_status: Status,
service_type: String, // "nginx_site", "container", "image", or empty for parent services
memory_bytes: Option<u64>,
restart_count: Option<u32>,
uptime_seconds: Option<u64>,
@@ -112,6 +115,8 @@ impl ServicesWidget {
status: Status::Unknown,
has_data: false,
selected_index: 0,
scroll_offset: 0,
last_viewport_height: 0,
}
}
@@ -338,18 +343,86 @@ impl ServicesWidget {
pub fn select_previous(&mut self) {
if self.selected_index > 0 {
self.selected_index -= 1;
self.ensure_selected_visible();
}
debug!("Service selection moved up to: {}", self.selected_index);
}
/// Move selection down
/// Move selection down
pub fn select_next(&mut self, total_services: usize) {
if total_services > 0 && self.selected_index < total_services.saturating_sub(1) {
self.selected_index += 1;
self.ensure_selected_visible();
}
debug!("Service selection: {}/{}", self.selected_index, total_services);
}
/// Convert parent service index to display line index
fn parent_index_to_display_line(&self, parent_index: usize) -> usize {
let mut parent_services: Vec<_> = self.parent_services.iter().collect();
parent_services.sort_by(|(a, _), (b, _)| a.cmp(b));
let mut display_line = 0;
for (idx, (parent_name, _)) in parent_services.iter().enumerate() {
if idx == parent_index {
return display_line;
}
display_line += 1; // Parent service line
// Add sub-service lines
if let Some(sub_list) = self.sub_services.get(*parent_name) {
display_line += sub_list.len();
}
}
display_line
}
/// Ensure the currently selected service is visible in the viewport
fn ensure_selected_visible(&mut self) {
if self.last_viewport_height == 0 {
return; // Can't adjust without knowing viewport size
}
let display_line = self.parent_index_to_display_line(self.selected_index);
let total_display_lines = self.get_total_display_lines();
let viewport_height = self.last_viewport_height;
// Check if selected line is above visible area
if display_line < self.scroll_offset {
self.scroll_offset = display_line;
return;
}
// Calculate current effective viewport (accounting for "more below" if present)
let current_remaining = total_display_lines.saturating_sub(self.scroll_offset);
let current_has_more = current_remaining > viewport_height;
let current_effective = if current_has_more {
viewport_height.saturating_sub(1)
} else {
viewport_height
};
// Check if selected line is below current visible area
if display_line >= self.scroll_offset + current_effective {
// Need to scroll down. Position selected line so there's room for "more below" if needed
// Strategy: if there are lines below the selected line, don't put it at the very bottom
let has_content_below = display_line < total_display_lines - 1;
if has_content_below {
// Leave room for "... X more below" message by positioning selected line
// one position higher than the last line
let target_position = viewport_height.saturating_sub(2);
self.scroll_offset = display_line.saturating_sub(target_position);
} else {
// This is the last line, can put it at the bottom
self.scroll_offset = display_line.saturating_sub(viewport_height - 1);
}
}
debug!("Auto-scroll: selected={}, display_line={}, scroll_offset={}, viewport={}, total={}",
self.selected_index, display_line, self.scroll_offset, viewport_height, total_display_lines);
}
/// Get currently selected service name (for actions)
/// Only returns parent service names since only parent services can be selected
pub fn get_selected_service(&self) -> Option<String> {
@@ -366,6 +439,81 @@ impl ServicesWidget {
self.parent_services.len()
}
/// Get total display lines (parent services + sub-services)
pub fn get_total_display_lines(&self) -> usize {
let mut total = self.parent_services.len();
for sub_list in self.sub_services.values() {
total += sub_list.len();
}
total
}
/// Scroll down by one line
pub fn scroll_down(&mut self, _visible_height: usize) {
let total_lines = self.get_total_display_lines();
// Use last_viewport_height if available (more accurate), otherwise can't scroll
let viewport_height = if self.last_viewport_height > 0 {
self.last_viewport_height
} else {
return; // Can't scroll without knowing viewport size
};
// Calculate exact max scroll to match render logic
// Stop scrolling when all remaining content fits in viewport
// At scroll_offset N: remaining = total_lines - N
// We can show all when: remaining <= viewport_height
// So max_scroll is when: total_lines - max_scroll = viewport_height
// Therefore: max_scroll = total_lines - viewport_height (but at least 0)
let max_scroll = total_lines.saturating_sub(viewport_height);
debug!("Scroll down: total={}, viewport={}, offset={}, max={}", total_lines, viewport_height, self.scroll_offset, max_scroll);
if self.scroll_offset < max_scroll {
self.scroll_offset += 1;
}
}
/// Scroll up by one line
pub fn scroll_up(&mut self) {
if self.scroll_offset > 0 {
self.scroll_offset -= 1;
}
}
/// Map a display line index to a parent service index (returns None if clicked on sub-service)
pub fn display_line_to_parent_index(&self, display_line_index: usize) -> Option<usize> {
// Build the same display list to map line index to parent service index
let mut parent_index = 0;
let mut line_index = 0;
let mut parent_services: Vec<_> = self.parent_services.iter().collect();
parent_services.sort_by(|(a, _), (b, _)| a.cmp(b));
for (parent_name, _) in parent_services {
// Check if this line index matches a parent service
if line_index == display_line_index {
return Some(parent_index);
}
line_index += 1;
// Add sub-services for this parent (if any)
if let Some(sub_list) = self.sub_services.get(parent_name) {
for _ in sub_list {
if line_index == display_line_index {
// Clicked on a sub-service - return None (can't select sub-services)
return None;
}
line_index += 1;
}
}
parent_index += 1;
}
None
}
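The mapping above is easier to see on plain data; a sketch with the widget's private fields replaced by arguments (names invented), plus the expected results for a small two-parent list:

```rust
use std::collections::HashMap;

// parents must already be sorted by name, matching the render order.
// subs maps a parent name to its number of sub-service rows.
fn display_line_to_parent(
    parents: &[&str],
    subs: &HashMap<&str, usize>,
    display_line: usize,
) -> Option<usize> {
    let mut line = 0;
    for (parent_idx, parent) in parents.iter().enumerate() {
        if line == display_line {
            return Some(parent_idx); // landed on a parent row
        }
        line += 1 + subs.get(parent).copied().unwrap_or(0);
        if display_line < line {
            return None; // landed on one of this parent's sub-rows
        }
    }
    None // past the end of the list
}

// With parents ["nginx", "postgresql"] and nginx carrying two sub-services:
// line 0 → Some(0), lines 1 and 2 → None, line 3 → Some(1).
```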
/// Calculate which parent service index corresponds to a display line index
fn calculate_parent_service_index(&self, display_line_index: &usize) -> usize {
@@ -407,7 +555,6 @@ impl Widget for ServicesWidget {
let parent_info = ServiceInfo {
metrics: Vec::new(), // Parent services don't have custom metrics
widget_status: service.service_status,
service_type: String::new(), // Parent services have no type
memory_bytes: service.memory_bytes,
restart_count: service.restart_count,
uptime_seconds: service.uptime_seconds,
@@ -426,7 +573,6 @@ impl Widget for ServicesWidget {
let sub_info = ServiceInfo {
metrics,
widget_status: sub_service.service_status,
service_type: sub_service.service_type.clone(),
memory_bytes: None, // Sub-services don't have individual metrics yet
restart_count: None,
uptime_seconds: None,
@@ -471,7 +617,6 @@ impl ServicesWidget {
.or_insert(ServiceInfo {
metrics: Vec::new(),
widget_status: Status::Unknown,
service_type: String::new(),
memory_bytes: None,
restart_count: None,
uptime_seconds: None,
@@ -500,7 +645,6 @@ impl ServicesWidget {
ServiceInfo {
metrics: Vec::new(),
widget_status: Status::Unknown,
service_type: String::new(), // Unknown type in legacy path
memory_bytes: None,
restart_count: None,
uptime_seconds: None,
@@ -542,12 +686,23 @@ impl ServicesWidget {
self.selected_index = total_count - 1;
}
// Clamp scroll offset to valid range after update
// This prevents scroll issues when switching between hosts or when service count changes
let total_display_lines = self.get_total_display_lines();
if total_display_lines == 0 {
self.scroll_offset = 0;
} else if self.scroll_offset >= total_display_lines {
// Clamp to max valid value, not reset to 0
self.scroll_offset = total_display_lines.saturating_sub(1);
}
debug!(
"Services widget updated: {} parent services, {} sub-service groups, total={}, selected={}, status={:?}",
"Services widget updated: {} parent services, {} sub-service groups, total={}, selected={}, scroll={}, status={:?}",
self.parent_services.len(),
self.sub_services.len(),
total_count,
self.selected_index,
self.scroll_offset,
self.status
);
}
@@ -558,7 +713,11 @@ impl ServicesWidget {
/// Render with focus
pub fn render(&mut self, frame: &mut Frame, area: Rect, is_focused: bool) {
let services_block = Components::widget_block("services");
self.render_with_title(frame, area, is_focused, "services");
}
pub fn render_with_title(&mut self, frame: &mut Frame, area: Rect, is_focused: bool, title: &str) {
let services_block = Components::widget_block(title);
let inner_area = services_block.inner(area);
frame.render_widget(services_block, area);
@@ -603,6 +762,49 @@ impl ServicesWidget {
self.render_services(frame, content_chunks[1], is_focused, columns);
}
/// Render services content WITHOUT block (for tab mode like cm-player)
pub fn render_content(&mut self, frame: &mut Frame, area: Rect, is_focused: bool) {
let content_chunks = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Length(1), Constraint::Min(0)])
.split(area);
// Determine which columns to show based on available width
let columns = ColumnVisibility::from_width(area.width);
// Build header based on visible columns
let mut header_parts = Vec::new();
if columns.show_name {
header_parts.push(format!("{:<width$}", "Service:", width = ColumnVisibility::NAME_WIDTH as usize));
}
if columns.show_status {
header_parts.push(format!("{:<width$}", "Status:", width = ColumnVisibility::STATUS_WIDTH as usize));
}
if columns.show_ram {
header_parts.push(format!("{:<width$}", "RAM:", width = ColumnVisibility::RAM_WIDTH as usize));
}
if columns.show_uptime {
header_parts.push(format!("{:<width$}", "Uptime:", width = ColumnVisibility::UPTIME_WIDTH as usize));
}
if columns.show_restarts {
header_parts.push(format!("{:<width$}", "↻:", width = ColumnVisibility::RESTARTS_WIDTH as usize));
}
let header = header_parts.join(" ");
let header_para = Paragraph::new(header).style(Typography::muted());
frame.render_widget(header_para, content_chunks[0]);
// Check if we have any services to display
if self.parent_services.is_empty() && self.sub_services.is_empty() {
let empty_text = Paragraph::new("No process data").style(Typography::muted());
frame.render_widget(empty_text, content_chunks[1]);
return;
}
// Render the services list
self.render_services(frame, content_chunks[1], is_focused, columns);
}
/// Render services list
fn render_services(&mut self, frame: &mut Frame, area: Rect, is_focused: bool, columns: ColumnVisibility) {
// Build hierarchical service list for display
@@ -639,20 +841,46 @@ impl ServicesWidget {
// Show only what fits, with "X more below" if needed
let available_lines = area.height as usize;
let total_lines = display_lines.len();
// Reserve one line for "X more below" if needed
let lines_for_content = if total_lines > available_lines {
// Store viewport height for accurate scroll calculations
self.last_viewport_height = available_lines;
// Clamp scroll_offset to valid range based on current viewport and content
// This handles dynamic viewport size changes
let max_valid_scroll = total_lines.saturating_sub(available_lines);
if self.scroll_offset > max_valid_scroll {
self.scroll_offset = max_valid_scroll;
}
// Calculate how many lines remain after scroll offset
let remaining_lines = total_lines.saturating_sub(self.scroll_offset);
debug!("Render: total={}, viewport={}, offset={}, max={}, remaining={}",
total_lines, available_lines, self.scroll_offset, max_valid_scroll, remaining_lines);
// Check if all remaining content fits in viewport
let will_show_more_below = remaining_lines > available_lines;
// Reserve one line for "X more below" only if we can't fit everything
let lines_for_content = if will_show_more_below {
available_lines.saturating_sub(1)
} else {
available_lines
available_lines.min(remaining_lines)
};
// Apply scroll offset
let visible_lines: Vec<_> = display_lines
.iter()
.skip(self.scroll_offset)
.take(lines_for_content)
.collect();
let hidden_below = total_lines.saturating_sub(lines_for_content);
// Only calculate hidden_below if we actually reserved space for the message
let hidden_below = if will_show_more_below {
remaining_lines.saturating_sub(lines_for_content)
} else {
0
};
let lines_to_show = visible_lines.len();
@@ -666,8 +894,8 @@ impl ServicesWidget {
for (i, (line_text, line_status, is_sub, sub_info)) in visible_lines.iter().enumerate()
{
let actual_index = i; // Simple index since we're not scrolling
let actual_index = self.scroll_offset + i; // Account for scroll offset
// Only parent services can be selected - calculate parent service index
let is_selected = if !*is_sub {
// This is a parent service - count how many parent services came before this one
@@ -712,7 +940,7 @@ impl ServicesWidget {
// Show "X more below" message if content was truncated
if hidden_below > 0 {
let more_text = format!("... {} more below", hidden_below);
let more_para = Paragraph::new(more_text).style(Typography::muted());
let more_para = Paragraph::new(more_text).style(Style::default().fg(Theme::border()));
frame.render_widget(more_para, service_chunks[lines_to_show]);
}
}

View File

@@ -1,12 +1,13 @@
use cm_dashboard_shared::Status;
use ratatui::{
layout::Rect,
style::Style,
text::{Line, Span, Text},
widgets::Paragraph,
Frame,
};
use crate::ui::theme::{StatusIcons, Typography};
use crate::ui::theme::{StatusIcons, Theme, Typography};
/// System widget displaying NixOS info, Network, CPU, RAM, and Storage in unified layout
#[derive(Clone)]
@@ -43,12 +44,17 @@ pub struct SystemWidget {
storage_pools: Vec<StoragePool>,
// Backup metrics
backup_repositories: Vec<String>,
backup_repository_status: Status,
backup_disks: Vec<cm_dashboard_shared::BackupDiskData>,
backup_last_time: Option<String>,
backup_status: Status,
backup_repositories: Vec<cm_dashboard_shared::BackupRepositoryData>,
// Overall status
has_data: bool,
// Scroll offset for viewport
pub scroll_offset: usize,
/// Last rendered viewport height (for accurate scroll bounds)
last_viewport_height: usize,
}
#[derive(Clone)]
@@ -106,10 +112,12 @@ impl SystemWidget {
tmp_status: Status::Unknown,
tmpfs_mounts: Vec::new(),
storage_pools: Vec::new(),
backup_last_time: None,
backup_status: Status::Unknown,
backup_repositories: Vec::new(),
backup_repository_status: Status::Unknown,
backup_disks: Vec::new(),
has_data: false,
scroll_offset: 0,
last_viewport_height: 0,
}
}
@@ -153,6 +161,16 @@ impl SystemWidget {
pub fn _get_agent_hash(&self) -> Option<&String> {
self.agent_hash.as_ref()
}
/// Get the build version
pub fn get_build_version(&self) -> Option<String> {
self.nixos_build.clone()
}
/// Get the agent version
pub fn get_agent_version(&self) -> Option<String> {
self.agent_hash.clone()
}
}
use super::Widget;
@@ -203,9 +221,19 @@ impl Widget for SystemWidget {
// Extract backup data
let backup = &agent_data.backup;
self.backup_last_time = backup.last_backup_time.clone();
self.backup_status = backup.backup_status;
self.backup_repositories = backup.repositories.clone();
self.backup_repository_status = backup.repository_status;
self.backup_disks = backup.disks.clone();
// Clamp scroll offset to valid range after update
// This prevents scroll issues when switching between hosts
let total_lines = self.get_total_lines();
if total_lines == 0 {
self.scroll_offset = 0;
} else if self.scroll_offset >= total_lines {
// Clamp to max valid value, not reset to 0
self.scroll_offset = total_lines.saturating_sub(1);
}
}
}
@@ -505,79 +533,42 @@ impl SystemWidget {
fn render_backup(&self) -> Vec<Line<'_>> {
let mut lines = Vec::new();
// First section: Repository status and list
if !self.backup_repositories.is_empty() {
let repo_text = format!("Repo: {}", self.backup_repositories.len());
let repo_spans = StatusIcons::create_status_spans(self.backup_repository_status, &repo_text);
lines.push(Line::from(repo_spans));
// List all repositories (sorted for consistent display)
let mut sorted_repos = self.backup_repositories.clone();
sorted_repos.sort();
let repo_count = sorted_repos.len();
for (idx, repo) in sorted_repos.iter().enumerate() {
let tree_char = if idx == repo_count - 1 { "└─" } else { "├─" };
lines.push(Line::from(vec![
Span::styled(format!(" {} ", tree_char), Typography::tree()),
Span::styled(repo.clone(), Typography::secondary()),
]));
}
if self.backup_repositories.is_empty() {
return lines;
}
// Second section: Per-disk backup information (sorted by serial for consistent display)
let mut sorted_disks = self.backup_disks.clone();
sorted_disks.sort_by(|a, b| a.serial.cmp(&b.serial));
for disk in &sorted_disks {
let truncated_serial = truncate_serial(&disk.serial);
let mut details = Vec::new();
// Format backup time (use complete timestamp)
let time_display = if let Some(ref time_str) = self.backup_last_time {
time_str.clone()
} else {
"unknown".to_string()
};
if let Some(temp) = disk.temperature_celsius {
details.push(format!("T: {}°C", temp as i32));
}
if let Some(wear) = disk.wear_percent {
details.push(format!("W: {}%", wear as i32));
}
// Header: just the timestamp
let repo_spans = StatusIcons::create_status_spans(self.backup_status, &time_display);
lines.push(Line::from(repo_spans));
let disk_text = if !details.is_empty() {
format!("{} {}", truncated_serial, details.join(" "))
// List all repositories with archive count and size
let repo_count = self.backup_repositories.len();
for (idx, repo) in self.backup_repositories.iter().enumerate() {
let tree_char = if idx == repo_count - 1 { "└─" } else { "├─" };
// Format size: use kB for < 1MB, MB for < 1GB, otherwise GB
let size_display = if repo.repo_size_gb < 0.001 {
format!("{:.0}kB", repo.repo_size_gb * 1024.0 * 1024.0)
} else if repo.repo_size_gb < 1.0 {
format!("{:.0}MB", repo.repo_size_gb * 1024.0)
} else {
truncated_serial
format!("{:.1}GB", repo.repo_size_gb)
};
// Overall disk status (worst of backup and usage)
let disk_status = disk.backup_status.max(disk.usage_status);
let disk_spans = StatusIcons::create_status_spans(disk_status, &disk_text);
lines.push(Line::from(disk_spans));
let repo_text = format!("{} ({}) {}", repo.name, repo.archive_count, size_display);
// Show backup time with status
if let Some(backup_time) = &disk.last_backup_time {
let time_text = format!("Backup: {}", backup_time);
let mut time_spans = vec![
Span::styled(" ├─ ", Typography::tree()),
];
time_spans.extend(StatusIcons::create_status_spans(disk.backup_status, &time_text));
lines.push(Line::from(time_spans));
}
// Show usage with status and archive count
let archive_display = if disk.archives_min == disk.archives_max {
format!("{}", disk.archives_min)
} else {
format!("{}-{}", disk.archives_min, disk.archives_max)
};
let usage_text = format!(
"Usage: ({}) {:.0}% {:.0}GB/{:.0}GB",
archive_display,
disk.disk_usage_percent,
disk.disk_used_gb,
disk.disk_total_gb
);
let mut usage_spans = vec![
Span::styled(" └─ ", Typography::tree()),
let mut repo_spans = vec![
Span::styled(format!(" {} ", tree_char), Typography::tree()),
];
usage_spans.extend(StatusIcons::create_status_spans(disk.usage_status, &usage_text));
lines.push(Line::from(usage_spans));
repo_spans.extend(StatusIcons::create_status_spans(repo.status, &repo_text));
lines.push(Line::from(repo_spans));
}
lines
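The size formatting in the loop above converts from a GB figure; pulled out as a free function (hypothetical name, same thresholds), the rule is easy to sanity-check:

```rust
// repo_size_gb arrives in GB; pick kB below roughly 1 MB, MB below 1 GB,
// otherwise GB with one decimal.
fn format_repo_size(size_gb: f32) -> String {
    if size_gb < 0.001 {
        format!("{:.0}kB", size_gb * 1024.0 * 1024.0)
    } else if size_gb < 1.0 {
        format!("{:.0}MB", size_gb * 1024.0)
    } else {
        format!("{:.1}GB", size_gb)
    }
}
// format_repo_size(0.0005) == "524kB", format_repo_size(0.25) == "256MB",
// format_repo_size(2.3) == "2.3GB"
```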
@@ -781,23 +772,87 @@ impl SystemWidget {
}
/// Render system widget
pub fn render(&mut self, frame: &mut Frame, area: Rect, hostname: &str, _config: Option<&crate::config::DashboardConfig>) {
let mut lines = Vec::new();
/// Scroll down by one line
pub fn scroll_down(&mut self, _visible_height: usize, _total_lines: usize) {
let total_lines = self.get_total_lines();
// NixOS section
lines.push(Line::from(vec![
Span::styled(format!("NixOS {}:", hostname), Typography::widget_title())
]));
let build_text = self.nixos_build.as_deref().unwrap_or("unknown");
lines.push(Line::from(vec![
Span::styled(format!("Build: {}", build_text), Typography::secondary())
]));
let agent_version_text = self.agent_hash.as_deref().unwrap_or("unknown");
lines.push(Line::from(vec![
Span::styled(format!("Agent: {}", agent_version_text), Typography::secondary())
]));
// Use last_viewport_height if available (more accurate), otherwise can't scroll
let viewport_height = if self.last_viewport_height > 0 {
self.last_viewport_height
} else {
return; // Can't scroll without knowing viewport size
};
// Max scroll should allow us to see all remaining content
// When scroll_offset + viewport_height >= total_lines, we can see everything
let max_scroll = if total_lines > viewport_height {
total_lines - viewport_height
} else {
0
};
if self.scroll_offset < max_scroll {
self.scroll_offset += 1;
}
}
/// Scroll up by one line
pub fn scroll_up(&mut self) {
if self.scroll_offset > 0 {
self.scroll_offset -= 1;
}
}
/// Get total line count (needs to be calculated before rendering)
pub fn get_total_lines(&self) -> usize {
let mut count = 0;
// CPU section (2+ lines for load/cstate, +1 if has model/cores)
count += 2;
if self.cpu_model_name.is_some() || self.cpu_core_count.is_some() {
count += 1;
}
// RAM section (2 lines + tmpfs mounts)
count += 2;
count += self.tmpfs_mounts.len();
// Network section
if !self.network_interfaces.is_empty() {
count += 1; // Header
// Count network lines (would need to mirror render_network logic)
for iface in &self.network_interfaces {
count += 1; // Interface name
count += iface.ipv4_addresses.len();
count += iface.ipv6_addresses.len();
}
}
// Storage section
count += 1; // Header
for pool in &self.storage_pools {
count += 1; // Pool header
count += pool.drives.len();
count += pool.data_drives.len();
count += pool.parity_drives.len();
count += pool.filesystems.len();
}
// Backup section
if !self.backup_repositories.is_empty() {
count += 1; // Header: "Backup:"
count += 1; // Repo count and timestamp header
count += self.backup_repositories.len(); // Individual repos
}
count
}
pub fn render(&mut self, frame: &mut Frame, area: Rect, _hostname: &str, _config: Option<&crate::config::DashboardConfig>) {
// Store viewport height for accurate scroll calculations
self.last_viewport_height = area.height as usize;
let mut lines = Vec::new();
// CPU section
lines.push(Line::from(vec![
@@ -893,7 +948,7 @@ impl SystemWidget {
lines.extend(storage_lines);
// Backup section (if available)
if !self.backup_repositories.is_empty() || !self.backup_disks.is_empty() {
if !self.backup_repositories.is_empty() {
lines.push(Line::from(vec![
Span::styled("Backup:", Typography::widget_title())
]));
@@ -905,29 +960,51 @@ impl SystemWidget {
// Apply scroll offset
let total_lines = lines.len();
let available_height = area.height as usize;
// Show only what fits, with "X more below" if needed
if total_lines > available_height {
let lines_for_content = available_height.saturating_sub(1); // Reserve one line for "more below"
let mut visible_lines: Vec<Line> = lines
.into_iter()
.take(lines_for_content)
.collect();
let hidden_below = total_lines.saturating_sub(lines_for_content);
if hidden_below > 0 {
let more_line = Line::from(vec![
Span::styled(format!("... {} more below", hidden_below), Typography::muted())
]);
visible_lines.push(more_line);
}
let paragraph = Paragraph::new(Text::from(visible_lines));
frame.render_widget(paragraph, area);
// Clamp scroll_offset to valid range based on current viewport and content
// This handles dynamic viewport size changes
let max_valid_scroll = total_lines.saturating_sub(available_height);
let clamped_scroll = self.scroll_offset.min(max_valid_scroll);
// Calculate how many lines remain after scroll offset
let remaining_lines = total_lines.saturating_sub(clamped_scroll);
// Check if all remaining content fits in viewport
let will_show_more_below = remaining_lines > available_height;
// Reserve one line for "X more below" only if we can't fit everything
let lines_for_content = if will_show_more_below {
available_height.saturating_sub(1)
} else {
// All content fits and no scroll offset, render normally
let paragraph = Paragraph::new(Text::from(lines));
frame.render_widget(paragraph, area);
available_height.min(remaining_lines)
};
// Apply clamped scroll offset and take only what fits
let mut visible_lines: Vec<Line> = lines
.into_iter()
.skip(clamped_scroll)
.take(lines_for_content)
.collect();
// Note: we don't update self.scroll_offset here due to borrow checker constraints
// It will be clamped on next render if still out of bounds
// Only calculate hidden_below if we actually reserved space for the message
let hidden_below = if will_show_more_below {
remaining_lines.saturating_sub(lines_for_content)
} else {
0
};
// Add "more below" message if needed
if hidden_below > 0 {
let more_line = Line::from(vec![
Span::styled(format!("... {} more below", hidden_below), Style::default().fg(Theme::border()))
]);
visible_lines.push(more_line);
}
let paragraph = Paragraph::new(Text::from(visible_lines));
frame.render_widget(paragraph, area);
}
}

View File

@@ -1,6 +1,6 @@
[package]
name = "cm-dashboard-shared"
version = "0.1.248"
version = "0.1.275"
edition = "2021"
[dependencies]

View File

@@ -38,6 +38,7 @@ pub struct NetworkInterfaceData {
pub link_status: Status,
pub parent_interface: Option<String>,
pub vlan_id: Option<u16>,
pub connection_method: Option<String>, // For Tailscale: "direct", "relay", or "proxy"
}
/// CPU C-state usage information
@@ -181,27 +182,18 @@ pub struct SubServiceMetric {
/// Backup system data
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct BackupData {
pub repositories: Vec<String>,
pub repository_status: Status,
pub disks: Vec<BackupDiskData>,
}
/// Backup repository disk information
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct BackupDiskData {
pub serial: String,
pub product_name: Option<String>,
pub wear_percent: Option<f32>,
pub temperature_celsius: Option<f32>,
pub last_backup_time: Option<String>,
pub backup_status: Status,
pub disk_usage_percent: f32,
pub disk_used_gb: f32,
pub disk_total_gb: f32,
pub usage_status: Status,
pub services: Vec<String>,
pub archives_min: i64,
pub archives_max: i64,
pub repositories: Vec<BackupRepositoryData>,
}
/// Individual backup repository information
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct BackupRepositoryData {
pub name: String,
pub archive_count: i64,
pub repo_size_gb: f32,
pub status: Status,
}
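For reference, a hedged example of what the reshaped backup payload might look like once an agent fills it in. Values are invented; the `last_backup_time` and `backup_status` fields are assumed from the constructor shown further down in this diff.

```rust
let backup = BackupData {
    last_backup_time: Some("2025-12-13 03:00".to_string()),
    backup_status: Status::Ok,
    repositories: vec![
        BackupRepositoryData {
            name: "gitea".to_string(),
            archive_count: 14,
            repo_size_gb: 2.3,
            status: Status::Ok,
        },
        BackupRepositoryData {
            name: "vaultwarden".to_string(),
            archive_count: 30,
            repo_size_gb: 0.25, // rendered as "256MB" by the dashboard
            status: Status::Ok,
        },
    ],
};
```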
impl AgentData {
@@ -244,9 +236,9 @@ impl AgentData {
},
services: Vec::new(),
backup: BackupData {
last_backup_time: None,
backup_status: Status::Unknown,
repositories: Vec::new(),
repository_status: Status::Unknown,
disks: Vec::new(),
},
}
}