The agent heartbeat was sending an empty AgentData every few seconds,
causing the dashboard to intermittently display zero values for all
metrics. Since the agent already transmits complete data every second,
the heartbeat is redundant; the dashboard detects offline hosts via
data timestamps instead.
Allow agent configuration without an explicit filesystems list by making
the field optional with a serde default, enabling pure auto-discovery mode.
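A minimal sketch of the optional field, assuming a serde-derived config
struct (names are illustrative):

    use serde::Deserialize;

    // Hypothetical config shape. With #[serde(default)], a missing
    // `filesystems` key deserializes to an empty Vec instead of an
    // error, which switches the agent into pure auto-discovery.
    #[derive(Deserialize)]
    struct AgentConfig {
        #[serde(default)]
        filesystems: Vec<String>,
    }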
Update the last_collection timestamp even when collectors fail, to prevent
immediate retry loops that cause data transmission gaps every 5 seconds.
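A sketch of the fix; the collector and transmit functions are hypothetical
stand-ins:

    use std::time::Instant;

    fn collect_metrics() -> Result<String, String> { Ok("metrics".into()) } // stub
    fn transmit(_data: String) {}                                           // stub

    fn run_collection(last_collection: &mut Instant) {
        match collect_metrics() {
            Ok(data) => transmit(data),
            Err(e) => eprintln!("collector failed: {e}"),
        }
        // Stamp the time unconditionally: a failing collector now waits
        // out the normal interval instead of retrying immediately.
        *last_collection = Instant::now();
    }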
The structured data bridge conversion was converting only individual
drive metrics (temperature, wear) and filesystem metrics, but was not
generating the aggregated total usage metrics the storage widget expects
(disk_{drive}_total_gb, disk_{drive}_used_gb, disk_{drive}_usage_percent).
This caused physical drives to display "—% —GB/—GB" instead of actual
usage statistics.
Updated the bridge conversion to calculate drive totals by aggregating
all filesystems on each drive (sketched below):
- total_used = sum of all filesystem used_gb values
- total_size = sum of all filesystem total_gb values
- average_usage = (total_used / total_size) * 100
Now physical drives like nvme0n1 properly display total usage aggregated
from all their filesystems (e.g., /boot + / = total drive usage).
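A sketch of the aggregation above, assuming per-filesystem used/total
fields in GB (names are illustrative):

    struct FileSystem { used_gb: f64, total_gb: f64 }

    // Roll every filesystem on a drive up into the disk_{drive}_* totals.
    fn drive_totals(filesystems: &[FileSystem]) -> (f64, f64, f64) {
        let used: f64 = filesystems.iter().map(|f| f.used_gb).sum();
        let size: f64 = filesystems.iter().map(|f| f.total_gb).sum();
        let percent = if size > 0.0 { used / size * 100.0 } else { 0.0 };
        (used, size, percent)
    }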
Version bump: v0.1.131 → v0.1.132
Replace fragile string-based metrics with type-safe JSON data structures.
The agent converts all metrics to structured data and the dashboard
processes typed fields (payload sketched below the change list).
Changes:
- Add AgentData struct with CPU, memory, storage, services, backup fields
- Replace string parsing with direct field access throughout system
- Maintain UI compatibility via temporary metric bridge conversion
- Fix NVMe temperature display and eliminate string parsing bugs
- Update protocol to support structured data transmission over ZMQ
- Comprehensive metric type coverage: CPU, memory, storage, services, backup
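A minimal sketch of the AgentData payload; fields and sub-types are
assumptions based on the categories listed above:

    use serde::{Deserialize, Serialize};

    // Placeholder sub-types; the real fields are richer.
    #[derive(Serialize, Deserialize)]
    struct CpuData { usage_percent: f64 }
    #[derive(Serialize, Deserialize)]
    struct MemoryData { used_gb: f64, total_gb: f64 }
    #[derive(Serialize, Deserialize)]
    struct DriveData {
        name: String,
        temperature_c: Option<f64>,
        wear_percent: Option<f64>,
    }
    #[derive(Serialize, Deserialize)]
    struct ServiceData { name: String, status: String }
    #[derive(Serialize, Deserialize)]
    struct BackupData { last_run: Option<String> }

    #[derive(Serialize, Deserialize)]
    struct AgentData {
        cpu: CpuData,
        memory: MemoryData,
        storage: Vec<DriveData>,
        services: Vec<ServiceData>,
        backup: BackupData,
    }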
Version bump to 0.1.131
- Display wear percentage in storage headers for single physical drives
- Remove redundant drive type indicators, show wear data instead
- Fix wear metric parsing for physical drives (underscore count issue)
- Add NVMe temperature parsing support (Temperature: format)
- Add raw metrics debugging functionality for troubleshooting
- Clean up physical drive display to remove redundant information
- Display actual drive name (e.g., nvme0n1) instead of mount point for physical drives
- Fix health status parsing for physical drives to show proper status icons
- Update pool name extraction to handle disk_{drive}_health metrics correctly
- Improve storage widget rendering for physical drive identification
- Replace blanket parity drive inclusion with smart relationship detection
- Only associate parity drives from the same parent directory as data drives (see sketch below)
- Prevent incorrect exclusion of nvme0n1 physical drives from grouping
- Maintain zero-configuration auto-discovery without hardcoded paths
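A sketch of the relationship check; the parent-directory rule is taken
from the change above, the mount layout is assumed:

    use std::path::Path;

    // A parity mount joins a pool only if it shares a parent directory
    // with the pool's data mounts, e.g. /mnt/parity1 with /mnt/disk1.
    fn same_parent(data_mount: &str, parity_mount: &str) -> bool {
        Path::new(data_mount).parent() == Path::new(parity_mount).parent()
    }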
- Eliminate hardcoded mappings like 'root' -> '/' and 'steampool' -> '/mnt/steampool'
- Use device names directly for physical drives
- Rely on mount_point metrics from agent for actual mount paths
- Implement zero-configuration architecture as specified in CLAUDE.md
- Remove fallback logic that could extract incorrect pool names
- Simplify pool suffix matching to use explicit arrays
- Ensure only valid metric patterns create pools
- Use actual device names (sdb, sdc) instead of data_0, parity_0
- Fix physical drive naming to show device names instead of mount points
- Update pool name extraction to handle new device-based naming
- Ensure Drive: line shows temperature and wear data for physical drives
- Add SnapRAID parity drive detection to mergerfs discovery
- Remove Pool Status health line as discussed
- Update drive display to always show wear data when available
- Include /mnt/parity drives as part of mergerfs pool structure
- Restructure storage rendering logic to prevent drive duplication
- Use specific mergerfs check instead of generic multi-drive condition
- Ensure drives only appear once under organized data/parity sections
- Update extract_pool_name to handle data_/parity_ drive metrics correctly
- Fix extract_drive_name to parse mergerfs drive roles properly
- Prevent srv_media_data from being parsed as separate pool
- Improve pool name extraction in dashboard parsing
- Use consistent mergerfs pool naming in agent
- Add mount_point metric parsing to use actual mount paths
- Fix pool consolidation to prevent duplicate entries
Add support for numeric mergerfs references like "1:2" by mapping them
to actual mount points (/mnt/disk1, /mnt/disk2). This enables proper
mergerfs pool detection and hides individual member drives as intended.
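A sketch of the mapping, assuming the /mnt/diskN layout named above:

    // Expand a numeric mergerfs source like "1:2" into mount paths.
    // Returns None when any component is non-numeric, so path-based
    // sources fall through to the normal handling.
    fn expand_numeric_refs(source: &str) -> Option<Vec<String>> {
        source
            .split(':')
            .map(|part| part.parse::<u32>().ok().map(|n| format!("/mnt/disk{n}")))
            .collect()
    }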
Skip mergerfs pools with numeric device references (e.g., "1:2")
instead of crashing. This allows regular drive detection to work
even when mergerfs uses non-standard mount formats.
Preserves existing functionality for standard mergerfs setups.
1. Add missing _fs_ filter to usage_percent parsing in dashboard
2. Fix agent to use calculated fs_status instead of hardcoded Status::Ok
This completes the disk collector auto-discovery by ensuring filesystem
usage percentages and status indicators display correctly.
1. Prevent filesystem _fs_ metrics from overwriting pool totals
2. Fix filesystem name extraction to properly parse boot/root names
This resolves both the pool total display (showing 0.1GB instead of 220GB)
and individual filesystem display (showing —% —GB/—GB).
Remove unused debug code and fix device name parsing to properly
handle lsblk tree characters. This resolves the issue where only
/boot filesystem was discovered instead of both /boot and /.
Prevent string slicing panic in extract_filesystem_metric when
parsing individual filesystem metrics. This resolves the issue
where filesystem entries show —% —GB/—GB instead of actual usage.
Add debug logging to filesystem usage collection to identify why some
mount points are being dropped during discovery, which should pinpoint
why the total capacity shows incorrect values.
Fixed a critical bug where the dashboard crashed with a 'begin <= end'
slice error when parsing disk metrics in the new naming format. Added
bounds checking to prevent invalid string slicing (sketched below).
- Fixed extract_pool_name string slicing bounds check
- Removed ineffective panic handling that caused infinite loop
- Dashboard now handles new disk collector metrics correctly
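A sketch of the checked slicing; the helper name is hypothetical:

    // Before: `&metric[prefix.len()..]` panicked when the prefix ran
    // past the end of the name. `str::get` returns None instead, and
    // `str::strip_prefix` bundles both checks idiomatically.
    fn pool_suffix<'a>(metric: &'a str, prefix: &str) -> Option<&'a str> {
        if metric.starts_with(prefix) {
            metric.get(prefix.len()..)
        } else {
            None
        }
    }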
Added comprehensive error handling to storage metrics parsing to prevent
dashboard crashes when encountering unexpected metric formats or parsing
errors. Dashboard now continues gracefully with empty storage display
instead of crashing, improving reliability during metric format changes.
- Wrapped storage metric parsing in panic recovery (sketched below)
- Added logging for metric parsing failures
- Dashboard shows empty storage on errors instead of crashing
- Ensures dashboard remains functional during agent updates
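A sketch of the recovery wrapper; parse_storage_metrics is a hypothetical
stand-in for the dashboard's parser:

    use std::panic;

    fn parse_storage_metrics(raw: &str) -> Vec<String> {
        raw.lines().map(str::to_string).collect() // stand-in parser
    }

    // Catch panics from unexpected metric formats and fall back to an
    // empty storage view instead of taking down the dashboard.
    fn parse_storage_safely(raw: &str) -> Vec<String> {
        panic::catch_unwind(|| parse_storage_metrics(raw)).unwrap_or_else(|_| {
            eprintln!("storage metric parsing panicked; showing empty storage");
            Vec::new()
        })
    }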
Replaced the complex disk collector with a simple lsblk → df → group
workflow (first two stages sketched below the list). Supports both
physical drives and mergerfs pools with unified metrics, and eliminates
configuration complexity through pure auto-discovery.
- Clean discovery pipeline using lsblk and df commands
- Physical drive grouping with filesystem children
- MergerFS pool detection with parity heuristics
- Unified metric generation for consistent dashboard display
- SMART data collection for temperature, wear, and health
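A sketch of the pipeline's first two stages; the exact command flags are
assumptions:

    use std::process::Command;

    // Stage 1: lsblk lists block devices; stage 2: df reports usage.
    // Grouping (stage 3) then attaches each df row to its parent drive.
    fn discover_raw() -> std::io::Result<(String, String)> {
        let lsblk = Command::new("lsblk").args(["-rno", "NAME,TYPE"]).output()?;
        let df = Command::new("df").args(["--output=target,size,used"]).output()?;
        Ok((
            String::from_utf8_lossy(&lsblk.stdout).into_owned(),
            String::from_utf8_lossy(&df.stdout).into_owned(),
        ))
    }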
Updated filesystem grouping to use extract_base_device method for proper
partition-to-drive mapping. This ensures nvme0n1p1 and nvme0n1p2 are
correctly grouped under nvme0n1 drive pool instead of separate pools.
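A sketch of the mapping; the two naming rules are assumptions covering
the common schemes:

    // Map a partition to its parent drive: "sda1" -> "sda",
    // "nvme0n1p2" -> "nvme0n1". Assumes the input is a partition name.
    fn extract_base_device(partition: &str) -> String {
        if let Some(idx) = partition.rfind('p') {
            let tail = &partition[idx + 1..];
            // NVMe style: a run of digits after the final 'p'.
            if !tail.is_empty() && tail.chars().all(|c| c.is_ascii_digit()) {
                return partition[..idx].to_string();
            }
        }
        // SATA style: strip trailing digits.
        partition.trim_end_matches(|c: char| c.is_ascii_digit()).to_string()
    }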
- Make filesystem display more forgiving: show partial data if available
- Will display usage% even if GB values are missing, or vice versa
- This should help identify which specific metrics aren't being populated
- Debug version to identify filesystem data population issues
Current status: filesystem children appear with correct mount points but
still show —% —GB/—GB. Need to debug why the usage_percent, used_gb, and
total_gb metrics aren't populating filesystem entries.
- Fix extract_filesystem_metric() to handle multi-underscore metric names correctly
- Parse known metric suffixes (usage_percent, mount_point, available_gb, etc.), as sketched below
- Prevent incorrect parsing like boot_mount_point -> fs_name='boot_mount', metric_type='point'
- Should now correctly show /boot and / instead of /boot/mount and /root/mount
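A sketch of the suffix-first parse; the suffix list follows the change
above:

    const METRIC_SUFFIXES: &[&str] = &[
        "usage_percent", "mount_point", "available_gb", "used_gb", "total_gb",
    ];

    // "boot_mount_point" -> ("boot", "mount_point"): match a known
    // suffix first instead of splitting on the last underscore.
    fn extract_filesystem_metric(name: &str) -> Option<(&str, &str)> {
        METRIC_SUFFIXES.iter().find_map(|suffix| {
            name.strip_suffix(*suffix)
                .and_then(|head| head.strip_suffix('_'))
                .map(|fs_name| (fs_name, *suffix))
        })
    }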
- Allow filesystem entries to be created with any metric, not just mount_point
- Ensure filesystem children appear under physical drive pools
- Improve mount point fallback logic for better compatibility
- Fix extract_pool_name() to handle filesystem metrics (_fs_) correctly
- Prevent individual filesystem pools (nvme0n1_fs_boot, nvme0n1_fs_root) from being created
- Fix incorrect mount point names (was showing /root/mount instead of /)
- Only create filesystem entries when receiving mount_point metrics
- Add available_gb field to FileSystem struct for proper available space handling
- Ensure filesystem children show correct usage data instead of —% —GB/—GB
- Implement filesystem children display under physical drive pools
- Agent generates individual filesystem metrics for each mount point
- Dashboard parses filesystem metrics and displays as tree children
- Add filesystem usage, total, and available space metrics
- Support target format: drive info + filesystem children hierarchy
- Fix compilation warnings by properly using available_bytes calculation
- Group single disk filesystems by physical drive during auto-discovery
- Create physical drive pools with filesystem children
- Display temperature, wear, and health at drive level
- Provide consistent hierarchical storage visualization
- Fix borrow checker issues in create_physical_drive_pool method
- Add PhysicalDrive case to all StoragePoolType match statements
- Add automatic detection of mergerfs pools by parsing /proc/mounts (sketched below)
- Implement smart heuristics for parity disk identification
- Store discovered topology at agent startup for efficient monitoring
- Eliminate need for manual storage pool configuration
- Support zero-config storage visualization with backward compatibility
- Clean up mount parsing and remove unused fields
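A sketch of the /proc/mounts scan; a mergerfs line looks roughly like
"/mnt/disk1:/mnt/disk2 /srv/media fuse.mergerfs rw,... 0 0":

    use std::fs;

    // Return (pool mount point, member branches) for each mergerfs pool.
    fn discover_mergerfs() -> Vec<(String, Vec<String>)> {
        let mounts = fs::read_to_string("/proc/mounts").unwrap_or_default();
        mounts
            .lines()
            .filter_map(|line| {
                let mut parts = line.split_whitespace();
                let source = parts.next()?;
                let target = parts.next()?;
                let fstype = parts.next()?;
                (fstype == "fuse.mergerfs").then(|| {
                    (
                        target.to_string(),
                        source.split(':').map(str::to_string).collect(),
                    )
                })
            })
            .collect()
    }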
- Add support for mergerfs pool grouping with data and parity disk separation
- Implement pool health monitoring (healthy/degraded/critical status)
- Create hierarchical tree view for multi-disk storage arrays
- Add automatic pool type detection and member disk association
- Maintain backward compatibility for single disk configurations
- Support future extension for RAID and ZFS pool types
- Add dynamic version display in top bar using CARGO_PKG_VERSION
- Rewrite status aggregation to only show Critical/Warning/OK in top bar
- Fix Status enum ordering to prioritize OK over transitional states
- Remove blue/gray colors from top bar background
- Add disk wear percentage collection from SMART data in backup script
- Add backup_disk_wear_percent metric to backup collector with thresholds
- Display wear percentage in backup widget disk section
- Fix storage section overflow handling to use consistent "X more below" logic
- Update maintenance mode to return pending status instead of unknown
Apply the same logic used for the inactive status to the pending status.
Pending services now contribute to the OK count instead of being ignored,
preventing a blue title bar during service transitions.
Title bar now only shows Critical (red), Warning (yellow), and OK (green)
colors. Pending status is ignored in color calculation to prevent blue
title bar during service transitions.
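A sketch of the resulting color rule; the Status variants are assumed:

    #[derive(PartialEq)]
    enum Status { Critical, Warning, Ok, Pending }

    // Pending never reaches the color decision: anything that is not
    // Critical or Warning renders the bar green.
    fn title_bar_status(services: &[Status]) -> Status {
        if services.contains(&Status::Critical) {
            Status::Critical
        } else if services.contains(&Status::Warning) {
            Status::Warning
        } else {
            Status::Ok
        }
    }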
Replace the separate service_logs_cmd with a service-manage logs action
to unify service management through a single script interface. The
dashboard now calls 'service-manage logs <service>', which provides
intelligent log viewing based on service state and configuration.
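A sketch of the invocation; the sudo prefix and exact argv are assumptions
inferred from the surrounding changes:

    use std::process::{Command, ExitStatus};

    // Hand the service name to the unified script; status() inherits
    // the terminal, so the log output streams straight through.
    fn view_logs(service: &str) -> std::io::Result<ExitStatus> {
        Command::new("sudo")
            .args(["service-manage", "logs", service])
            .status()
    }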
Replace separate J/L keys with single L key that calls configurable
service_logs_cmd from dashboard config. Script handles both journalctl
and custom log files automatically based on service configuration.
Update status bar to show all available keybindings including
previously missing backup and terminal commands.
- Fix get_selected_service to always return parent service names
- Prevent selection of container sub-items when managing docker services
- Ensure service commands operate on correct systemd service names
- Simplify service selection logic to only consider parent services
- Update version to 0.1.92
- Fix rebuild and backup commands with proper inner command quoting
- Remove duplicate "Press any key to close..." from SSH commands since scripts handle it
- Clean up SSH terminal command to avoid redundant prompts
- Ensure consistent command execution patterns across all SSH operations
- Update version to 0.1.91
- Replace complex SSH command patterns with simple script calls
- Create service-manage script for start/stop operations with proper logging
- Create rebuild script equivalent to rebuild_git alias with user feedback
- Update dashboard to use unified command pattern: sudo service-manage, sudo rebuild
- Simplify backup to use service management: service-manage start borgbackup
- Configure sudoers with wildcards for Nix store path compatibility
- Remove cmtec references from script names for better genericity
- Update version to 0.1.90
- Use bash -c to properly execute service start/stop command sequences
- Ensure SSH session stays alive for user input prompt
- Fix escaping issues with nested quotes in commands
- Update version to 0.1.89
- Move 'Press any key to close...' prompt inside SSH session
- Ensure tmux popup stays open until user manually closes
- Maintain consistent behavior with other SSH commands
- Update version to 0.1.88
- Add sudo to pkill commands to resolve permission errors when killing journalctl processes
- Fix service stop command timing to show logs during shutdown process
- Add sleep delays to ensure log visibility before cleanup
- Update version to 0.1.87
- Apply uniform pattern to all SSH commands: informational text + command + exit prompt
- Remove exit prompt from logging commands (J/L keys) that run continuously with -f flag
- Simplify rebuild and backup commands to match service command pattern
- Update version to 0.1.86