auto docs
Some checks failed:
- Test / tests (push): has been cancelled
- / OpenTofu (push): has been cancelled

plasmagoat 2025-07-29 16:28:17 +02:00
parent a955528e44
commit ce8c543e84
18 changed files with 3129 additions and 55 deletions

docs/README.md

@@ -0,0 +1,65 @@
# Homelab Documentation
> Auto-generated documentation for the homelab deployment
>
> Generated on: Tue 29 Jul 16:25:52 CEST 2025
> Source: /home/plasmagoat/homelab
## 📚 Documentation Files
This documentation is automatically generated from your colmena flake configuration.
### 📊 Overview Documents
- **[Fleet Overview](fleet-overview.md)** - High-level fleet statistics and service distribution
- **[Current Deployment](current-deployment.md)** - Current deployment state and node status
### 📖 Detailed Configuration
- **[Node Configurations](nodes.md)** - Per-node detailed configuration and services
- **[Service Configurations](services.md)** - Service configurations across the fleet
## 🚀 Quick Actions
### View Current Status
```bash
# Service status across fleet (if homelab CLI is available)
homelab services --global
# Backup status
homelab backups --global
# Overall status
homelab status
```
### Update Documentation
```bash
# Regenerate all documentation
homelab-generate-docs ./docs
# Generate in different directory
homelab-generate-docs /path/to/output
```
## 📋 Quick Stats
- **Total Nodes**: 2
- **Homelab-Enabled Nodes**: 2
- **Generated**: Tue 29 Jul 16:25:55 CEST 2025
## 🛠️ Management Tools
### Documentation Commands
- `homelab-generate-docs` - Regenerate this documentation
- `homelab-docs-fleet` - Generate fleet overview only
- `homelab-docs-nodes` - Generate node configurations only
- `homelab-docs-services` - Generate service configurations only
- `homelab-docs-deployment` - Generate deployment state only
### Colmena Commands
- `colmena eval` - Evaluate flake expressions
- `colmena apply` - Deploy configuration changes
- `colmena build` - Build configurations without deploying
---
*This documentation reflects the live state of your homelab deployment as evaluated by colmena.*


@@ -0,0 +1,26 @@
# Current Deployment State
> Current homelab deployment configuration
>
> Generated on: Tue 29 Jul 16:25:46 CEST 2025
> Working directory: /home/plasmagoat/homelab
## Deployment Summary
| Metric | Count |
|--------|-------|
| Total Nodes | 2 |
| Homelab-Enabled Nodes | 2 |
| Unique Services | 1 |
| Service Instances | 1 |
## Node Status
| Node | Homelab | Environment | Services | Monitoring | Backups | Proxy |
|------|---------|-------------|----------|------------|---------|-------|
| `photos` | ✅ | production | 1 | ✅ | ❌ | ❌ |
| `sandbox` | ✅ | production | 0 | ✅ | ✅ | ❌ |
---
*Deployment state extracted from live colmena configuration*

docs/fleet-overview.md

@@ -0,0 +1,39 @@
# Homelab Fleet Overview
> Auto-generated fleet overview
>
> Generated on: Tue 29 Jul 16:25:32 CEST 2025
> Source: /home/plasmagoat/homelab
## Fleet Statistics
### Basic Information
| Metric | Value |
|--------|-------|
| Total Nodes | 2 |
| Node Names | photos, sandbox |
### Homelab Configuration
| Node | Homelab Enabled | Hostname | Environment |
|------|----------------|----------|-------------|
| `photos` | ✅ | photos | production |
| `sandbox` | ✅ | sandbox | production |
### Service Distribution
| Node | Service Count | Services |
|------|---------------|----------|
| `photos` | 1 | minio |
| `sandbox` | 0 | |
### Environment Distribution
| Environment | Node Count |
|-------------|------------|
| production | 2 |
---
*Fleet overview generated from colmena evaluation*

docs/nodes.md

@@ -0,0 +1,70 @@
# Node Configurations
> Detailed per-node configuration
>
> Generated on: Tue 29 Jul 16:25:40 CEST 2025
## Node: photos
### System Information
| Property | Value |
|----------|-------|
| NixOS Version | `25.11pre-git` |
| Hostname | `photos` |
| System | `x86_64-linux` |
### Homelab Configuration
| Property | Value |
|----------|-------|
| Homelab Hostname | `photos` |
| Domain | `lab` |
| External Domain | `procopius.dk` |
| Environment | `production` |
| Location | `proxmox-cluster` |
| Tags | photos |
### Services
| Service | Enabled | Port | Description | Tags |
|---------|---------|------|-------------|------|
| `gatus` | ❌ | 8080 | Gatus Status Page | |
| `grafana` | ❌ | 3000 | Grafana Metrics Dashboard | |
| `minio` | ✅ | 9000 | minio | |
| `prometheus` | ❌ | 9090 | Prometheus Monitoring Server | |
---
## Node: sandbox
### System Information
| Property | Value |
|----------|-------|
| NixOS Version | `25.11pre-git` |
| Hostname | `sandbox` |
| System | `x86_64-linux` |
### Homelab Configuration
| Property | Value |
|----------|-------|
| Homelab Hostname | `sandbox` |
| Domain | `lab` |
| External Domain | `procopius.dk` |
| Environment | `production` |
| Location | `proxmox-cluster` |
| Tags | sandbox |
### Services
| Service | Enabled | Port | Description | Tags |
|---------|---------|------|-------------|------|
| `gatus` | ❌ | 8080 | Gatus Status Page | |
| `grafana` | ❌ | 3000 | Grafana Metrics Dashboard | |
| `minio` | ❌ | 9000 | minio | |
| `prometheus` | ❌ | 9090 | Prometheus Monitoring Server | |
---

docs/services.md

@@ -0,0 +1,209 @@
# Service Catalog
> Available services and their configuration options
>
> Generated on: Tue 29 Jul 16:25:43 CEST 2025
This document catalogs all available homelab services, their configuration options, and integration capabilities.
## Overview
**Total Available Services:** 4
## Service Reference
### gatus
**Description:** Gatus Status Page
**Default Port:** `8080`
**Current Deployments:** 0 instance(s)
#### Core Configuration
```nix
homelab.services.gatus = {
enable = true;
port = 8080;
description = "Gatus Status Page";
};
```
#### Service-Specific Options
Available configuration options for gatus:
```nix
homelab.services.gatus = {
# ... core options above ...
# Service-specific configuration
  alerting = {};
  extraConfig = {};
  storage = { type = "memory"; };
  ui = {
    title = "Homelab Status";
    header = "Homelab Services Status";
    link = "https://status.procopius.dk";
    buttons = [
      { name = "Grafana"; link = "https://grafana.procopius.dk"; }
      { name = "Prometheus"; link = "https://prometheus.procopius.dk"; }
    ];
  };
  web = { address = "0.0.0.0"; };
};
```
#### Complete Example
```nix
# Full configuration example for gatus
homelab.services.gatus = {
  enable = true;
  port = 8080;
  description = "Gatus Status Page";
  storage = { type = "memory"; };
  web = { address = "0.0.0.0"; };
};
```
---
### grafana
**Description:** Grafana Metrics Dashboard
**Default Port:** `3000`
**Current Deployments:** 0 instance(s)
#### Core Configuration
```nix
homelab.services.grafana = {
enable = true;
port = 3000;
description = "Grafana Metrics Dashboard";
};
```
#### Complete Example
```nix
# Full configuration example for grafana
homelab.services.grafana = {
enable = true;
port = 3000;
description = "Grafana Metrics Dashboard";
};
```
---
### minio
**Description:** minio
**Default Port:** `9000`
**Current Deployments:** 1 instance(s)
#### Core Configuration
```nix
homelab.services.minio = {
enable = true;
port = 9000;
description = "minio";
};
```
#### Service-Specific Options
Available configuration options for minio:
```nix
homelab.services.minio = {
# ... core options above ...
# Service-specific configuration
openFirewall = true;
webPort = 9001;
};
```
#### Complete Example
```nix
# Full configuration example for minio
homelab.services.minio = {
  enable = true;
  port = 9000;
  description = "minio";
  openFirewall = true;
  webPort = 9001;
};
```
---
### prometheus
**Description:** Prometheus Monitoring Server
**Default Port:** `9090`
**Current Deployments:** 0 instance(s)
#### Core Configuration
```nix
homelab.services.prometheus = {
enable = true;
port = 9090;
description = "Prometheus Monitoring Server";
};
```
#### Service-Specific Options
Available configuration options for prometheus:
```nix
homelab.services.prometheus = {
# ... core options above ...
# Service-specific configuration
  alertmanager = { enable = true; url = "alertmanager.lab:9093"; };
  extraAlertingRules = [];
  extraFlags = [];
  extraScrapeConfigs = [];
  globalConfig = { evaluation_interval = "15s"; scrape_interval = "15s"; };
  retention = "15d";
  ruleFiles = [];
  systemdServices = ["prometheus.service" "prometheus"];
};
```
#### Complete Example
```nix
# Full configuration example for prometheus
homelab.services.prometheus = {
  enable = true;
  port = 9090;
  description = "Prometheus Monitoring Server";
  retention = "15d";
  alertmanager = { enable = true; url = "alertmanager.lab:9093"; };
};
```
---
## Integration Summary
### Available Integration Types
| Integration | Purpose | Configuration |
|-------------|---------|---------------|
| **Monitoring** | Prometheus metrics + health checks | `monitoring.enable = true` |
| **Logging** | Centralized log collection | `logging.enable = true` |
| **Proxy** | Reverse proxy with SSL + auth | `proxy.enable = true` |
### Integration Benefits
- **🔄 Automatic Discovery:** Enabled integrations are automatically discovered by fleet-wide services
- **📊 Unified Monitoring:** All metrics and health checks appear in Prometheus/Grafana
- **📝 Centralized Logging:** All logs are collected and indexed in Loki
- **🌐 Consistent Access:** All services get consistent subdomain access with SSL
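Taken together, a service that opts into all three integration types can be sketched as follows (the service name `myapp` and its port are illustrative, not part of the fleet):

```nix
# Hypothetical service entry enabling all three integration types
homelab.services.myapp = {
  enable = true;
  port = 8080;
  monitoring.enable = true; # metrics + health checks picked up fleet-wide
  logging.enable = true;    # logs collected into Loki
  proxy.enable = true;      # subdomain reverse proxy with SSL
};
```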
---
*This service catalog is generated from actual service configurations across your homelab fleet.*

flake.lock (generated)

@@ -25,11 +25,11 @@
       "stable": "stable"
     },
     "locked": {
-      "lastModified": 1752287590,
-      "narHash": "sha256-U1IqFnxlgCRrPaeT5IGCdH0j9CNLPFcI/fRAidi0aDQ=",
+      "lastModified": 1753701727,
+      "narHash": "sha256-tgiPAFXoSGIm3wUAuKwjk2fgTgZ0rpT90RNfhU5QKJA=",
       "owner": "zhaofengli",
       "repo": "colmena",
-      "rev": "d2beb694d54db653399b8597c0f6e15e20b26405",
+      "rev": "342054695f53c4a27c8dce0a8c9f35ade6d963d6",
       "type": "github"
     },
     "original": {
@@ -156,11 +156,11 @@
     },
     "nixpkgs": {
       "locked": {
-        "lastModified": 1753429684,
-        "narHash": "sha256-9h7+4/53cSfQ/uA3pSvCaBepmZaz/dLlLVJnbQ+SJjk=",
+        "lastModified": 1750134718,
+        "narHash": "sha256-v263g4GbxXv87hMXMCpjkIxd/viIF7p3JpJrwgKdNiI=",
         "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "7fd36ee82c0275fb545775cc5e4d30542899511d",
+        "rev": "9e83b64f727c88a7711a2c463a7b16eedb69a84c",
         "type": "github"
       },
       "original": {
@@ -188,11 +188,11 @@
     },
     "nixpkgs-unstable": {
       "locked": {
-        "lastModified": 1752480373,
-        "narHash": "sha256-JHQbm+OcGp32wAsXTE/FLYGNpb+4GLi5oTvCxwSoBOA=",
+        "lastModified": 1753694789,
+        "narHash": "sha256-cKgvtz6fKuK1Xr5LQW/zOUiAC0oSQoA9nOISB0pJZqM=",
         "owner": "nixos",
         "repo": "nixpkgs",
-        "rev": "62e0f05ede1da0d54515d4ea8ce9c733f12d9f08",
+        "rev": "dc9637876d0dcc8c9e5e22986b857632effeb727",
         "type": "github"
       },
       "original": {
@@ -204,11 +204,11 @@
     },
     "nixpkgs_2": {
       "locked": {
-        "lastModified": 1752624097,
-        "narHash": "sha256-mQCof2VccFzF7cmXy43n3GCwSN2+m8TVhZpGLx9sxVc=",
+        "lastModified": 1753795159,
+        "narHash": "sha256-0fOuNh5MefjES+ie0zV3mVMSs1RwXhVIxcNQuu+Q4g4=",
         "owner": "nixos",
         "repo": "nixpkgs",
-        "rev": "d7c8095791ce3aafe97d9c16c1dc2f4e3d69a3ba",
+        "rev": "5a012ffbe2494cb777ec3dbace5811f927bddc72",
        "type": "github"
       },
       "original": {
@@ -237,11 +237,11 @@
       "nixpkgs-25_05": "nixpkgs-25_05"
     },
     "locked": {
-      "lastModified": 1752060039,
-      "narHash": "sha256-MqcbN/PgfXOv8S4q6GcmlORd6kJZ3UlFNhzCvLOEe4I=",
+      "lastModified": 1753285640,
+      "narHash": "sha256-ofa021NeHDXAxg5J8mSnn8rHa393PAlD85ZCetP4Qa0=",
       "owner": "simple-nixos-mailserver",
       "repo": "nixos-mailserver",
-      "rev": "80d21ed7a1ab8007597f7cd9adc26ebc98b9611f",
+      "rev": "ce87c8a9771d1a20c3fa3b60113b9b0821627dcb",
       "type": "gitlab"
     },
     "original": {


@@ -25,6 +25,7 @@
     self,
     nixpkgs,
     # systems,
+    colmena,
     sops-nix,
     # home-manager,
     simple-nixos-mailserver,
@@ -54,18 +55,18 @@
     # Reusable nixos modules
     nixosModules = import ./modules/nixos;
-    colmenaHive = inputs.colmena.lib.makeHive self.outputs.colmena;
+    colmenaHive = colmena.lib.makeHive self.outputs.colmena;
     colmena = import ./colmena.nix {inherit inputs outputs;};
     # Development shells
     devShells = forAllSystems (
       system: let
-        inherit (inputs.colmena.packages."${pkgs.system}") colmena;
         pkgs = nixpkgs.legacyPackages.${system};
       in {
         default = pkgs.mkShell {
           packages = with pkgs; [
-            # colmena
+            self.packages.${system}.homelab-docs
+            colmena.packages.${system}.colmena
             sops
             age
             nix-output-monitor

@@ -42,9 +42,9 @@
     };
     # services.loki.enable = true;
-    services.prometheus.enable = true;
-    services.grafana.enable = true;
-    services.gatus.enable = true;
+    # services.prometheus.enable = true;
+    # services.grafana.enable = true;
+    # services.gatus.enable = true;
   };
   system.stateVersion = "25.05";

modules/homelab/README.md

@@ -0,0 +1,577 @@
# Homelab Configuration Documentation
## Overview
This homelab configuration system provides a unified way to manage services across multiple nodes with automatic aggregation of monitoring, logging, backup, and reverse proxy configurations. The system is built on NixOS and follows a modular architecture with both local and global configuration scopes.
## Core Homelab Options
### Basic Configuration (`homelab.*`)
```nix
homelab = {
enable = true; # Enable homelab fleet configuration
hostname = "node-01"; # Hostname for this system
domain = "lab"; # Base domain for the homelab (default: "lab")
  externalDomain = "procopius.dk"; # External domain for the homelab
environment = "production"; # Environment type: "production" | "staging" | "development"
location = "homelab"; # Physical location identifier
tags = ["web" "database"]; # Tags for this system
};
```
### Monitoring System (`homelab.monitoring.*`)
```nix
homelab.monitoring = {
enable = true; # Enable monitoring system
# Node exporter (automatically enabled)
nodeExporter = {
enable = true; # Enable node exporter (default: true)
port = 9100; # Node exporter port (default: 9100)
};
# Manual metrics endpoints
metrics = [
{
name = "custom-app"; # Metric endpoint name
host = "localhost"; # Host (default: homelab.hostname)
port = 8080; # Port for metrics endpoint
path = "/metrics"; # Metrics path (default: "/metrics")
jobName = "custom"; # Prometheus job name
scrapeInterval = "30s"; # Scrape interval (default: "30s")
labels = { # Additional labels
component = "web";
};
}
];
# Manual health checks
healthChecks = [
{
name = "web-service"; # Health check name
host = "localhost"; # Host (default: homelab.hostname)
port = 80; # Port (nullable)
path = "/health"; # Health check path (default: "/")
protocol = "http"; # Protocol: "http" | "https" | "tcp" | "icmp"
method = "GET"; # HTTP method (default: "GET")
interval = "30s"; # Check interval (default: "30s")
timeout = "10s"; # Timeout (default: "10s")
conditions = [ # Check conditions
"[STATUS] == 200"
];
group = "web"; # Group name (default: "manual")
labels = {}; # Additional labels
enabled = true; # Enable check (default: true)
}
];
# Read-only aggregated data (automatically populated)
allMetrics = [...]; # All metrics from this node
allHealthChecks = [...]; # All health checks from this node
global = { # Global aggregation from all nodes
allMetrics = [...]; # All metrics from entire fleet
allHealthChecks = [...]; # All health checks from entire fleet
metricsByJobName = {...}; # Grouped by job name
healthChecksByGroup = {...}; # Grouped by group
summary = {
totalMetrics = 42;
totalHealthChecks = 15;
nodesCovered = ["node-01" "node-02"];
};
};
};
```
### Logging System (`homelab.logging.*`)
```nix
homelab.logging = {
enable = true; # Enable logging system
# Promtail configuration
promtail = {
enable = true; # Enable Promtail (default: true)
port = 9080; # Promtail port (default: 9080)
clients = [ # Loki clients
{
url = "http://monitor.lab:3100/loki/api/v1/push";
tenant_id = null; # Optional tenant ID
}
];
};
# Log sources
sources = [
{
name = "app-logs"; # Source name
type = "file"; # Type: "journal" | "file" | "syslog" | "docker"
files = {
paths = ["/var/log/app.log"]; # File paths
multiline = { # Optional multiline config
firstLineRegex = "^\\d{4}-\\d{2}-\\d{2}";
maxWaitTime = "3s";
};
};
journal = { # Journal config (for type="journal")
path = "/var/log/journal";
};
labels = { # Additional labels
application = "myapp";
};
pipelineStages = []; # Promtail pipeline stages
enabled = true; # Enable source (default: true)
}
];
defaultLabels = { # Default labels for all sources
hostname = "node-01";
environment = "production";
location = "homelab";
};
# Read-only aggregated data
allSources = [...]; # All sources from this node
global = { # Global aggregation
allSources = [...]; # All sources from entire fleet
sourcesByType = {...}; # Grouped by type
summary = {
total = 25;
byType = {...};
byNode = {...};
};
};
};
```
### Backup System (`homelab.backups.*`)
```nix
homelab.backups = {
enable = true; # Enable backup system
# Backup jobs
jobs = [
{
name = "database-backup"; # Job name
backend = "restic-s3"; # Backend name (must exist in backends)
backendOptions = { # Backend-specific overrides
repository = "custom-repo";
};
labels = { # Additional labels
type = "database";
};
}
];
# Backend configurations (defined by imported modules)
backends = {
restic-s3 = {...}; # Defined in restic.nix
};
defaultLabels = { # Default labels for all jobs
hostname = "node-01";
environment = "production";
location = "homelab";
};
monitoring = true; # Enable backup monitoring (default: true)
# Read-only aggregated data
allJobs = [...]; # All jobs from this node
allBackends = [...]; # All backend names from this node
global = { # Global aggregation
allJobs = [...]; # All jobs from entire fleet
allBackends = [...]; # All backends from entire fleet
jobsByBackend = {...}; # Grouped by backend
summary = {
total = 15;
byBackend = {...};
byNode = {...};
uniqueBackends = ["restic-s3" "borgbackup"];
};
};
};
```
### Reverse Proxy System (`homelab.reverseProxy.*`)
```nix
homelab.reverseProxy = {
enable = true; # Enable reverse proxy system
# Proxy entries
entries = [
{
subdomain = "app"; # Subdomain
host = "localhost"; # Backend host (default: homelab.hostname)
port = 8080; # Backend port
path = "/"; # Backend path (default: "/")
enableAuth = false; # Enable authentication (default: false)
enableSSL = true; # Enable SSL (default: true)
}
];
# Read-only aggregated data
allEntries = [...]; # All entries from this node
global = { # Global aggregation
allEntries = [...]; # All entries from entire fleet
entriesBySubdomain = {...}; # Grouped by subdomain
entriesWithAuth = [...]; # Entries with authentication
entriesWithoutAuth = [...]; # Entries without authentication
summary = {
total = 12;
byNode = {...};
withAuth = 5;
withoutAuth = 7;
};
};
};
```
## Service Configuration Pattern
All services follow a consistent pattern with automatic monitoring, logging, and proxy integration.
### Generic Service Structure (`homelab.services.${serviceName}.*`)
```nix
homelab.services.myservice = {
enable = true; # Enable the service
port = 8080; # Main service port
description = "My Service"; # Service description
# Monitoring integration (automatic when enabled)
monitoring = {
enable = true; # Enable monitoring (default: true when service enabled)
metrics = {
enable = true; # Enable metrics endpoint (default: true)
path = "/metrics"; # Metrics path (default: "/metrics")
extraEndpoints = [ # Additional metric endpoints
{
name = "admin-metrics";
port = 8081;
path = "/admin/metrics";
jobName = "myservice-admin";
}
];
};
healthCheck = {
enable = true; # Enable health check (default: true)
path = "/health"; # Health check path (default: "/health")
conditions = [ # Check conditions
"[STATUS] == 200"
];
extraChecks = [ # Additional health checks
{
name = "myservice-api";
port = 8080;
path = "/api/health";
conditions = ["[STATUS] == 200" "[RESPONSE_TIME] < 500"];
}
];
};
extraLabels = { # Additional labels for all monitoring
tier = "application";
};
};
# Logging integration (automatic when enabled)
logging = {
enable = true; # Enable logging
files = [ # Log files to collect
"/var/log/myservice/app.log"
"/var/log/myservice/error.log"
];
parsing = {
regex = "^(?P<timestamp>\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}) (?P<level>\\w+) (?P<message>.*)";
extractFields = ["level"]; # Fields to extract as labels
};
multiline = { # Multiline log handling
firstLineRegex = "^\\d{4}-\\d{2}-\\d{2}";
maxWaitTime = "3s";
};
extraLabels = { # Additional labels
application = "myservice";
};
extraSources = [ # Additional log sources
{
name = "myservice-access";
type = "file";
files.paths = ["/var/log/myservice/access.log"];
}
];
};
# Reverse proxy integration (automatic when enabled)
proxy = {
enable = true; # Enable reverse proxy
subdomain = "myservice"; # Subdomain (default: service name)
enableAuth = false; # Enable authentication (default: false)
additionalSubdomains = [ # Additional proxy entries
{
subdomain = "myservice-api";
port = 8081;
path = "/api";
enableAuth = true;
}
];
};
# Service-specific options
customOption = "value"; # Service-specific configuration
};
```
## Example Service Implementations
### Prometheus Service
```nix
homelab.services.prometheus = {
enable = true;
port = 9090;
# Prometheus-specific options
retention = "15d"; # Data retention period
alertmanager = {
enable = true;
url = "alertmanager.lab:9093";
};
extraScrapeConfigs = []; # Additional scrape configs
extraAlertingRules = []; # Additional alerting rules
globalConfig = { # Prometheus global config
scrape_interval = "15s";
evaluation_interval = "15s";
};
extraFlags = []; # Additional command line flags
ruleFiles = []; # Additional rule files
# Automatic integrations
monitoring.enable = true; # Self-monitoring
logging.enable = true; # Log collection
proxy = {
enable = true;
subdomain = "prometheus";
enableAuth = true; # Admin interface needs protection
};
};
```
### Gatus Service
```nix
homelab.services.gatus = {
enable = true;
port = 8080;
# Gatus-specific options
ui = {
title = "Homelab Status";
header = "Homelab Services Status";
link = "https://status.procopius.dk";
buttons = [
{ name = "Grafana"; link = "https://grafana.procopius.dk"; }
{ name = "Prometheus"; link = "https://prometheus.procopius.dk"; }
];
};
alerting = { # Discord/Slack/etc notifications
discord = {
webhook-url = "https://discord.com/api/webhooks/...";
default-alert = {
enabled = true;
failure-threshold = 3;
success-threshold = 2;
};
};
};
storage = { # Storage backend
type = "memory"; # or "postgres", "sqlite"
};
web.address = "0.0.0.0";
extraConfig = {}; # Additional Gatus configuration
# Automatic integrations
monitoring.enable = true;
logging.enable = true;
proxy = {
enable = true;
subdomain = "status";
enableAuth = false; # Status page should be public
};
};
```
## Global Aggregation System
The homelab system automatically aggregates configuration from all nodes in your fleet, making it easy to have centralized monitoring and management.
### How Global Aggregation Works
1. **Local Configuration**: Each node defines its own services and configurations
2. **Automatic Collection**: The system automatically collects data from all nodes using the `base.nix` aggregator
3. **Enhancement**: Each collected item is enhanced with node context (`_nodeName`, `_nodeConfig`, etc.)
4. **Global Exposure**: Aggregated data is exposed in `*.global.*` options
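The collection step above can be sketched as follows; this is a hypothetical helper illustrating the pattern, not the actual code in `lib/aggregators/base.nix`:

```nix
# Hypothetical aggregation helper: collect items from every node and
# tag each with its source node, matching the _nodeName/_nodeConfig
# enhancement described above.
aggregateFromNodes = nodes: getItems:
  lib.concatLists (lib.mapAttrsToList
    (nodeName: node:
      map (item:
        item // {
          _nodeName = nodeName;              # source node name
          _nodeConfig = node.config.homelab; # source node homelab config
        })
        (getItems node.config))
    nodes);
```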
### Global Data Structure
```nix
# Available on every node with global data from entire fleet
homelab.monitoring.global = {
allMetrics = [ # All metrics from all nodes
{
name = "prometheus-main";
host = "monitor";
port = 9090;
# ... other fields
_nodeName = "monitor"; # Source node name
_nodeConfig = {...}; # Source node config
_fullAddress = "monitor:9090";
_metricsUrl = "http://monitor:9090/metrics";
}
# ... more metrics from other nodes
];
allHealthChecks = [...]; # All health checks from all nodes
metricsByJobName = { # Grouped by Prometheus job name
"prometheus" = [...];
"node" = [...];
};
healthChecksByGroup = { # Grouped by health check group
"services" = [...];
"infrastructure" = [...];
};
summary = {
totalMetrics = 42;
totalHealthChecks = 15;
nodesCovered = ["monitor" "web-01" "db-01"];
};
};
homelab.logging.global = {
allSources = [...]; # All log sources from all nodes
sourcesByType = {
"file" = [...];
"journal" = [...];
};
summary = {...};
};
homelab.backups.global = {
allJobs = [...]; # All backup jobs from all nodes
allBackends = [...]; # All backup backends from all nodes
jobsByBackend = {...};
summary = {...};
};
homelab.reverseProxy.global = {
allEntries = [...]; # All proxy entries from all nodes
entriesBySubdomain = {...};
entriesWithAuth = [...];
entriesWithoutAuth = [...];
summary = {...};
};
```
### Using Global Data
Services like Prometheus and Gatus automatically use global data:
```nix
# Prometheus automatically scrapes ALL metrics from the entire fleet:
# services.prometheus.scrapeConfigs is generated from
# homelab.monitoring.global.allMetrics.

# Gatus automatically monitors ALL health checks from the entire fleet:
# services.gatus.settings.endpoints is generated from
# homelab.monitoring.global.allHealthChecks.
```
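As a concrete but illustrative sketch of consuming the global data (the mapping is hypothetical; the actual prometheus module may differ, though the field names follow the global data structure above):

```nix
# Hypothetical mapping from aggregated fleet metrics to Prometheus
# scrape configs, grouped by job name.
services.prometheus.scrapeConfigs =
  lib.mapAttrsToList (jobName: metrics: {
    job_name = jobName;
    static_configs = [{
      targets = map (m: "${m.host}:${toString m.port}") metrics;
    }];
  }) config.homelab.monitoring.global.metricsByJobName;
```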
## Integration Examples
### Adding a New Service
1. **Create the service configuration**:
```nix
homelab.services.myapp = {
enable = true;
port = 3000;
monitoring.enable = true;
logging.enable = true;
proxy = {
enable = true;
subdomain = "myapp";
};
};
```
2. **The system automatically**:
- Adds metrics endpoint to Prometheus (fleet-wide)
- Adds health check to Gatus (fleet-wide)
- Configures log collection to Loki
- Sets up reverse proxy entry
- Exposes the service globally for other nodes
### Multi-Node Setup
```nix
# Node 1 (monitor.nix)
homelab = {
hostname = "monitor";
services.prometheus.enable = true;
services.gatus.enable = true;
};
# Node 2 (web.nix)
homelab = {
hostname = "web-01";
services.nginx.enable = true;
services.webapp.enable = true;
};
# Node 3 (database.nix)
homelab = {
hostname = "db-01";
services.postgresql.enable = true;
services.redis.enable = true;
};
```
Result: Monitor node automatically discovers and monitors all services across all three nodes.
## File Structure
```
homelab/
├── default.nix # Main homelab options and imports
├── lib/
│ ├── systems/ # Core system modules
│ │ ├── monitoring.nix # Monitoring aggregation
│ │ ├── logging.nix # Logging aggregation
│ │ ├── backups.nix # Backup aggregation
│ │ └── proxy.nix # Reverse proxy aggregation
│ ├── features/ # Service feature modules
│ │ ├── monitoring.nix # Service monitoring template
│ │ ├── logging.nix # Service logging template
│ │ └── proxy.nix # Service proxy template
│ └── aggregators/
│ └── base.nix # Base aggregation functions
└── services/ # Individual service implementations
├── prometheus.nix
├── gatus.nix
└── ...
```
This architecture provides a scalable, consistent way to manage a homelab fleet with automatic service discovery, monitoring, and management across all nodes.


@@ -6,8 +6,6 @@
 }:
 with lib; let
   cfg = config.homelab;
-  nodeAgg = import ./lib/node-aggregation.nix {inherit lib;};
 in {
   imports = [
     ./lib/systems/monitoring.nix
@@ -15,6 +13,8 @@ in {
     ./lib/systems/proxy.nix
     ./lib/systems/backups.nix
+    ./lib/cli/homelab-cli.nix
     ./services
     ./motd
   ];


@@ -0,0 +1,943 @@
#!/usr/bin/env bash
# CLI command implementations
# Enhanced services command with detailed service information
cmd_services() {
local SCOPE="local"
local FORMAT="table"
local SHOW_SYSTEMD=true
local DETAIL_SERVICE=""
local ACTION=""
while [[ $# -gt 0 ]]; do
case $1 in
--global|-g) SCOPE="global"; shift ;;
--local|-l) SCOPE="local"; shift ;;
--json) FORMAT="json"; shift ;;
--no-systemd) SHOW_SYSTEMD=false; shift ;;
--detail|-d)
DETAIL_SERVICE="$2"
shift 2
;;
--logs)
ACTION="logs"
DETAIL_SERVICE="$2"
shift 2
;;
--status)
ACTION="status"
DETAIL_SERVICE="$2"
shift 2
;;
--restart)
ACTION="restart"
DETAIL_SERVICE="$2"
shift 2
;;
--errors)
ACTION="errors"
DETAIL_SERVICE="$2"
shift 2
;;
--help|-h)
cat << 'EOF'
homelab services - List and manage services
USAGE:
homelab services [options]
homelab services --detail <service-name>
homelab services --logs <service-name>
homelab services --status <service-name>
homelab services --restart <service-name>
homelab services --errors <service-name>
OPTIONS:
--global, -g Show services from entire fleet
--local, -l Show local services (default)
--json Output JSON format
--no-systemd Don't check systemd status
--detail, -d <name> Show detailed info for service
--logs <name> Show recent logs for service
--status <name> Show detailed status for service
--restart <name> Restart service
--errors <name> Show recent errors for service
EXAMPLES:
homelab services
homelab services --global
homelab services --detail prometheus
homelab services --logs grafana
homelab services --errors nginx
homelab services --restart prometheus
EOF
return 0
;;
      *) warn "Unknown option: $1"; shift ;;
esac
done
# Handle specific service actions
if [[ -n "$DETAIL_SERVICE" ]]; then
case "$ACTION" in
logs)
show_service_logs "$DETAIL_SERVICE"
return $?
;;
status)
show_service_status "$DETAIL_SERVICE"
return $?
;;
restart)
restart_service "$DETAIL_SERVICE"
return $?
;;
errors)
show_service_errors "$DETAIL_SERVICE"
return $?
;;
*)
show_service_detail "$DETAIL_SERVICE"
return $?
;;
esac
fi
# Regular service listing
if [[ "$FORMAT" == "json" ]]; then
if [[ "$SCOPE" == "global" ]]; then
jq -r '.services.global // {}' "$HOMELAB_CONFIG"
else
jq -r '.services.local // {}' "$HOMELAB_CONFIG"
fi
else
info "Homelab Services ($SCOPE)"
echo "=============================="
echo
services_data=$(jq -r "
if \"$SCOPE\" == \"global\" then .services.global.all // []
else .services.local.all // []
end |
.[] |
[.name, (.node // \"local\"), (.port // \"N/A\"), (.description // \"\")] |
@tsv
" "$HOMELAB_CONFIG" 2>/dev/null || echo "")
if [[ -z "$services_data" ]]; then
warn "No services found"
return 0
fi
printf "%-20s %-12s %-8s %-12s %-8s %s\n" "SERVICE" "NODE" "PORT" "SYSTEMD" "UPTIME" "DESCRIPTION"
printf "%-20s %-12s %-8s %-12s %-8s %s\n" "-------" "----" "----" "-------" "------" "-----------"
while IFS=$'\t' read -r service node port description; do
systemd_status="N/A"
uptime="N/A"
if [[ "$SHOW_SYSTEMD" == "true" && "$node" == "local" ]]; then
# Get systemd service names for this service
systemd_services=($(jq -r ".services.local.all[] | select(.name == \"$service\") | .systemdServices[]?" "$HOMELAB_CONFIG" 2>/dev/null))
if [[ ${#systemd_services[@]} -eq 0 ]]; then
# Fallback to common patterns
systemd_services=("$service" "$service.service")
fi
for svc_name in "${systemd_services[@]}"; do
if systemctl is-enabled "$svc_name" >/dev/null 2>&1; then
if systemctl is-active "$svc_name" >/dev/null 2>&1; then
systemd_status="${GREEN}active${NC}"
# Get uptime
active_since=$(systemctl show -p ActiveEnterTimestamp "$svc_name" --value 2>/dev/null)
if [[ -n "$active_since" && "$active_since" != "n/a" ]]; then
active_epoch=$(date -d "$active_since" +%s 2>/dev/null || echo 0)
current_epoch=$(date +%s)
if [[ "$active_epoch" -gt 0 ]]; then
uptime_seconds=$((current_epoch - active_epoch))
uptime=$(format_duration $uptime_seconds)
fi
fi
else
systemd_status="${RED}inactive${NC}"
uptime="0s"
fi
break
fi
done
fi
printf "%-20s %-12s %-8s %-12b %-8s %s\n" "$service" "$node" "$port" "$systemd_status" "$uptime" "$description"
done <<< "$services_data"
echo
service_count=$(echo "$services_data" | wc -l)
success "Total services: $service_count"
echo
info "💡 Use 'homelab services --detail <service-name>' for detailed information"
info "💡 Use 'homelab services --logs <service-name>' to view logs"
fi
}
# Helper function to format duration
format_duration() {
local seconds=$1
local days=$((seconds / 86400))
local hours=$(((seconds % 86400) / 3600))
local minutes=$(((seconds % 3600) / 60))
local secs=$((seconds % 60))
if [[ $days -gt 0 ]]; then
echo "${days}d ${hours}h"
elif [[ $hours -gt 0 ]]; then
echo "${hours}h ${minutes}m"
elif [[ $minutes -gt 0 ]]; then
echo "${minutes}m"
else
echo "${secs}s"
fi
}
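# As a sanity check, the helper maps raw seconds like this (the function is
# copied below so the snippet runs standalone):

```shell
# Mirror of the format_duration helper above, for standalone testing.
format_duration() {
    local seconds=$1
    local days=$((seconds / 86400))
    local hours=$(((seconds % 86400) / 3600))
    local minutes=$(((seconds % 3600) / 60))
    local secs=$((seconds % 60))
    if [[ $days -gt 0 ]]; then
        echo "${days}d ${hours}h"
    elif [[ $hours -gt 0 ]]; then
        echo "${hours}h ${minutes}m"
    elif [[ $minutes -gt 0 ]]; then
        echo "${minutes}m"
    else
        echo "${secs}s"
    fi
}

format_duration 45      # → 45s
format_duration 3700    # → 1h 1m
format_duration 90061   # → 1d 1h
```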
# Robust service detection function
find_systemd_service() {
local service_name="$1"
# Get configured systemd services from homelab config
local systemd_services=($(jq -r ".services.local.all[] | select(.name == \"$service_name\") | .systemdServices[]?" "$HOMELAB_CONFIG" 2>/dev/null))
# If no configured services, use common patterns
if [[ ${#systemd_services[@]} -eq 0 ]]; then
systemd_services=(
"$service_name.service"
"$service_name"
"nixos-$service_name.service"
"nixos-$service_name"
"$service_name-nixos.service"
)
fi
# Try each potential service name with multiple detection methods
for svc_name in "${systemd_services[@]}"; do
# Method 1: Check if systemctl can show the unit (most reliable)
if systemctl show "$svc_name" >/dev/null 2>&1; then
echo "$svc_name"
return 0
fi
# Method 2: Check if unit file exists
if systemctl list-unit-files --no-pager --no-legend "$svc_name" 2>/dev/null | grep -q "^$svc_name"; then
echo "$svc_name"
return 0
fi
# Method 3: Check if unit is loaded
if systemctl list-units --no-pager --no-legend "$svc_name" 2>/dev/null | grep -q "^$svc_name"; then
echo "$svc_name"
return 0
fi
done
# If still not found, try a broader search
local found_service=$(systemctl list-units --all --no-pager --no-legend | grep -E "^$service_name[.-]|^$service_name\.service" | head -1 | awk '{print $1}')
if [[ -n "$found_service" ]]; then
echo "$found_service"
return 0
fi
# Last resort: check unit files
found_service=$(systemctl list-unit-files --no-pager --no-legend | grep -E "^$service_name[.-]|^$service_name\.service" | head -1 | awk '{print $1}')
if [[ -n "$found_service" ]]; then
echo "$found_service"
return 0
fi
return 1
}
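# When no systemdServices are configured, the fallback unit names above are
# tried in a fixed order. Isolated as a pure helper (hypothetical, for
# illustration only) the candidate generation looks like this:

```shell
# Hypothetical standalone version of the fallback candidate list used
# by find_systemd_service when the config declares no systemdServices.
candidate_units() {
    local name="$1"
    printf '%s\n' \
        "$name.service" \
        "$name" \
        "nixos-$name.service" \
        "nixos-$name" \
        "$name-nixos.service"
}

candidate_units grafana
# → grafana.service
#   grafana
#   nixos-grafana.service
#   nixos-grafana
#   grafana-nixos.service
```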
show_service_logs() {
local service_name="$1"
local lines="${2:-100}"
local follow="${3:-false}"
if [[ -z "$service_name" ]]; then
error "Service name required"
return 1
fi
# Use robust service detection
local found_service=$(find_systemd_service "$service_name")
if [[ -z "$found_service" ]]; then
error "No systemd service found for '$service_name'"
echo
info "💡 Available services containing '$service_name':"
systemctl list-units --all --no-pager --no-legend | grep -i "$service_name" | awk '{print " " $1}' || echo " None found"
return 1
fi
info "📝 Logs for $service_name ($found_service)"
echo "=================================="
echo
local journalctl_args=(-u "$found_service" -n "$lines" --no-pager)
if [[ "$follow" == "true" ]]; then
journalctl_args+=(-f)
info "Following logs (Press Ctrl+C to stop)..."
echo
fi
journalctl "${journalctl_args[@]}"
}
show_service_detail() {
local service_name="$1"
if [[ -z "$service_name" ]]; then
error "Service name required"
return 1
fi
# Get service info from config
local service_info=$(jq -r ".services.local.all[] | select(.name == \"$service_name\")" "$HOMELAB_CONFIG" 2>/dev/null)
if [[ -z "$service_info" ]]; then
error "Service '$service_name' not found in homelab configuration"
return 1
fi
info "🔍 Service Details: $service_name"
echo "================================="
echo
# Basic info
local port=$(echo "$service_info" | jq -r '.port // "N/A"')
local description=$(echo "$service_info" | jq -r '.description // "N/A"')
local tags=$(echo "$service_info" | jq -r '.tags[]? // empty' | tr '\n' ',' | sed 's/,$//')
echo "📋 Configuration:"
echo " Port: $port"
echo " Description: $description"
echo " Tags: ${tags:-"None"}"
echo
# Use robust service detection
local found_service=$(find_systemd_service "$service_name")
echo "🔧 Systemd Status:"
if [[ -n "$found_service" ]]; then
echo " Service: $found_service"
echo " Status: $(systemctl is-active "$found_service" 2>/dev/null || echo "unknown")"
echo " Enabled: $(systemctl is-enabled "$found_service" 2>/dev/null || echo "unknown")"
# Detailed status
local active_since=$(systemctl show -p ActiveEnterTimestamp "$found_service" --value 2>/dev/null)
if [[ -n "$active_since" && "$active_since" != "n/a" ]]; then
echo " Active since: $active_since"
fi
local main_pid=$(systemctl show -p MainPID "$found_service" --value 2>/dev/null)
if [[ -n "$main_pid" && "$main_pid" != "0" ]]; then
echo " Main PID: $main_pid"
# Memory usage
local memory_usage=$(systemctl show -p MemoryCurrent "$found_service" --value 2>/dev/null)
if [[ "$memory_usage" =~ ^[0-9]+$ ]] && [[ "$memory_usage" -gt 0 ]]; then
local memory_mb=$((memory_usage / 1024 / 1024))
echo " Memory: ${memory_mb}MB"
fi
fi
echo
# Recent logs preview
echo "📝 Recent Logs (last 10 lines):"
echo "--------------------------------"
journalctl -u "$found_service" -n 10 --no-pager --output=short 2>/dev/null || echo "No logs available"
echo
# Check for recent errors
local error_count=$(journalctl -u "$found_service" --since "24 hours ago" --no-pager -q 2>/dev/null | grep -i "error\|failed\|exception" | wc -l)
if [[ "$error_count" -gt 0 ]]; then
warn "⚠️ Found $error_count error(s) in last 24 hours"
echo " Use 'homelab services --errors $service_name' to view them"
else
success "✅ No errors found in last 24 hours"
fi
echo
info "📊 Available Actions:"
echo " homelab services --logs $service_name # View full logs"
echo " homelab services --errors $service_name # View recent errors"
echo " homelab services --restart $service_name # Restart service"
else
warn "No systemd service found for '$service_name'"
echo
info "💡 Available services containing '$service_name':"
systemctl list-units --all --no-pager --no-legend | grep -i "$service_name" | awk '{print " " $1}' || echo " None found"
fi
}
show_service_errors() {
local service_name="$1"
local since="${2:-24 hours ago}"
if [[ -z "$service_name" ]]; then
error "Service name required"
return 1
fi
# Use robust service detection
local found_service=$(find_systemd_service "$service_name")
if [[ -z "$found_service" ]]; then
error "No systemd service found for '$service_name'"
return 1
fi
info "🚨 Errors for $service_name ($found_service) since $since"
echo "=============================================="
echo
# Get logs with priority filtering for errors and higher
local systemd_errors=$(journalctl -u "$found_service" --since "$since" --no-pager -p err 2>/dev/null)
# Also get application-level errors from all logs but with better filtering
local app_errors=$(journalctl -u "$found_service" --since "$since" --no-pager 2>/dev/null | \
grep -E "(ERROR|FATAL|CRITICAL|Exception|Traceback|failed to|cannot|unable to|connection refused|timeout|denied)" | \
grep -v -E "(debug|DEBUG|info|INFO|warn|WARNING|notice|NOTICE)" | \
grep -v -E "(successfully|completed|started|stopped|reloaded)")
local has_errors=false
# Show systemd-level errors (priority err and above)
if [[ -n "$systemd_errors" ]]; then
warn "📍 System-level errors (priority: err/crit/alert/emerg):"
echo "────────────────────────────────────────────────────────"
echo "$systemd_errors"
echo
has_errors=true
fi
# Show application-level errors
if [[ -n "$app_errors" ]]; then
warn "📍 Application-level errors:"
echo "─────────────────────────────"
echo "$app_errors"
echo
has_errors=true
fi
# Check for service failures/restarts
local service_failures=$(journalctl -u "$found_service" --since "$since" --no-pager 2>/dev/null | \
grep -E "(Failed|failed|Stopped|stopped|Restarted|restarted|Exit code|exit code)" | \
grep -v -E "(successfully|gracefully)")
if [[ -n "$service_failures" ]]; then
warn "📍 Service state changes/failures:"
echo "───────────────────────────────────"
echo "$service_failures"
echo
has_errors=true
fi
if [[ "$has_errors" == "false" ]]; then
success "✅ No errors found since $since"
echo
info "💡 Error detection includes:"
echo " • System-level errors (journald priority: err/crit/alert/emerg)"
echo " • Application errors (ERROR, FATAL, CRITICAL, Exception, etc.)"
echo " • Service failures and unexpected restarts"
else
echo
local total_systemd=$(echo "$systemd_errors" | grep -c . || true)
local total_app=$(echo "$app_errors" | grep -c . || true)
local total_failures=$(echo "$service_failures" | grep -c . || true)
warn "Summary: $total_systemd system errors, $total_app application errors, $total_failures service issues"
echo
info "💡 Use 'homelab services --logs $service_name' to view all logs"
fi
}
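# The application-level error filter above is a plain grep pipeline; extracted
# on its own (a sketch reusing the same patterns) it behaves like this:

```shell
# Sketch of the application-error filter from show_service_errors:
# keep lines with common error markers, then drop lower-severity
# and success/lifecycle noise.
classify_app_errors() {
    grep -E "(ERROR|FATAL|CRITICAL|Exception|Traceback|failed to|cannot|unable to|connection refused|timeout|denied)" |
        grep -v -E "(debug|DEBUG|info|INFO|warn|WARNING|notice|NOTICE)" |
        grep -v -E "(successfully|completed|started|stopped|reloaded)"
}

printf '%s\n' \
    "ERROR: connection refused by upstream" \
    "worker ready" \
    "backup completed successfully" | classify_app_errors
# → ERROR: connection refused by upstream
```

# Note the second stage also drops genuine errors whose line happens to
# contain INFO or similar, which matches the original script's behavior.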
show_service_status() {
local service_name="$1"
if [[ -z "$service_name" ]]; then
error "Service name required"
return 1
fi
# Use robust service detection
local found_service=$(find_systemd_service "$service_name")
if [[ -z "$found_service" ]]; then
error "No systemd service found for '$service_name'"
return 1
fi
info "📊 Status for $service_name ($found_service)"
echo "=================================="
echo
systemctl status "$found_service" --no-pager -l
}
restart_service() {
local service_name="$1"
if [[ -z "$service_name" ]]; then
error "Service name required"
return 1
fi
# Use robust service detection
local found_service=$(find_systemd_service "$service_name")
if [[ -z "$found_service" ]]; then
error "No systemd service found for '$service_name'"
return 1
fi
info "🔄 Restarting $service_name ($found_service)..."
if sudo systemctl restart "$found_service"; then
success "✅ Successfully restarted $service_name"
# Show brief status
sleep 2
if systemctl is-active "$found_service" >/dev/null 2>&1; then
success "✅ Service is now active"
else
error "❌ Service failed to start properly"
warn "Use 'homelab services --status $service_name' to check details"
fi
else
error "❌ Failed to restart $service_name"
return 1
fi
}
# Backups command
cmd_backups() {
local SCOPE="local"
local FORMAT="table"
local SHOW_STATUS=true
while [[ $# -gt 0 ]]; do
case $1 in
--global|-g) SCOPE="global"; shift ;;
--local|-l) SCOPE="local"; shift ;;
--json) FORMAT="json"; shift ;;
--no-status) SHOW_STATUS=false; shift ;;
--help|-h)
cat << 'EOF'
homelab backups - Show backup jobs and status
USAGE:
homelab backups [options]
OPTIONS:
--global, -g Show backups from entire fleet
--local, -l Show local backups (default)
--json Output JSON format
--no-status Don't check systemd timer status
EXAMPLES:
homelab backups
homelab backups --global
homelab backups --no-status
EOF
return 0
;;
*) shift ;;
esac
done
if [[ "$FORMAT" == "json" ]]; then
if [[ "$SCOPE" == "global" ]]; then
jq -r '.backups.global // {}' "$HOMELAB_CONFIG"
else
jq -r '.backups.local // {}' "$HOMELAB_CONFIG"
fi
else
info "Homelab Backups ($SCOPE)"
echo "=========================="
echo
backup_data=$(jq -r "
if \"$SCOPE\" == \"global\" then .backups.global.allJobs // []
else .backups.local.allJobs // []
end |
.[] |
[.name, (.sourceNode // .node // \"local\"), .backend, ((.labels // {}) | to_entries | map(\"\(.key)=\(.value)\") | join(\",\"))] |
@tsv
" "$HOMELAB_CONFIG" 2>/dev/null || echo "")
if [[ -z "$backup_data" ]]; then
warn "No backup jobs found"
return 0
fi
printf "%-25s %-12s %-8s %-15s %-15s %-15s %s\n" "JOB" "NODE" "STATUS" "BACKEND" "LAST RUN" "NEXT RUN" "LABELS"
printf "%-25s %-12s %-8s %-15s %-15s %-15s %s\n" "---" "----" "------" "-------" "--------" "--------" "------"
while IFS=$'\t' read -r job node backend labels; do
last_run="Unknown"
status="❓"
next_run="Unknown"
if [[ "$SHOW_STATUS" == "true" && "$node" == "local" ]]; then
timer_patterns=(
"backup-$job"
"$job-backup"
"restic-backups-$job"
"restic-backup-$job"
"$job.timer"
"backup-$job.timer"
)
found_timer=""
actual_timer_name=""
for pattern in "${timer_patterns[@]}"; do
for timer_name in "$pattern" "$pattern.timer"; do
if systemctl list-timers --no-pager --no-legend "$timer_name" 2>/dev/null | grep -q "$timer_name"; then
found_timer="$timer_name"
if [[ "$timer_name" == *.timer ]]; then
actual_timer_name="$timer_name"
else
actual_timer_name="$timer_name.timer"
fi
break 2
fi
done
done
if [[ -n "$found_timer" ]]; then
last_trigger=$(systemctl show -p LastTriggerUSec "$actual_timer_name" --value 2>/dev/null)
if [[ "$last_trigger" != "n/a" && -n "$last_trigger" && "$last_trigger" != "Thu 1970-01-01"* ]]; then
last_run=$(date -d "$last_trigger" "+%Y-%m-%d %H:%M" 2>/dev/null || echo "Parse Error")
last_epoch=$(date -d "$last_trigger" +%s 2>/dev/null || echo 0)
current_epoch=$(date +%s)
if [[ "$last_epoch" != "0" && "$last_epoch" -gt 0 ]]; then
hours_since=$(( (current_epoch - last_epoch) / 3600 ))
if [[ $hours_since -lt 25 ]]; then
status="✅"
elif [[ $hours_since -lt 48 ]]; then
status="⚠️"
else
status="❌"
fi
else
status="❓"
fi
else
last_run="Never"
status="⏸️"
fi
next_trigger=$(systemctl show -p NextElapseUSecRealtime "$actual_timer_name" --value 2>/dev/null)
if [[ "$next_trigger" != "n/a" && -n "$next_trigger" && "$next_trigger" != "0" ]]; then
next_run=$(date -d "$next_trigger" "+%Y-%m-%d %H:%M" 2>/dev/null || echo "Parse Error")
else
next_run="Unknown"
fi
if [[ "$status" == "✅" ]]; then
if journalctl -u "$actual_timer_name" --since "24 hours ago" --no-pager -q 2>/dev/null | grep -qi "error\|failed\|timeout"; then
status="❌"
elif journalctl -u "$actual_timer_name" --since "24 hours ago" --no-pager -q 2>/dev/null | grep -qi "success\|completed\|finished"; then
status="✅"
fi
fi
fi
fi
printf "%-25s %-12s %-8s %-15s %-15s %-15s %s\n" "$job" "$node" "$status" "$backend" "$last_run" "$next_run" "$labels"
done <<< "$backup_data"
echo
job_count=$(echo "$backup_data" | wc -l)
success "Total backup jobs: $job_count"
if [[ "$SHOW_STATUS" == "true" ]]; then
echo
info "Status: ✅=Recent(<25h) ⚠️=Overdue(1-2d) ❌=Failed(>2d) ⏸️=Never ❓=Unknown"
fi
fi
}
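# The age thresholds behind the status legend can be isolated into a tiny
# helper (a sketch mirroring the logic in cmd_backups):

```shell
# Mirrors the freshness logic above: a backup whose last run is under
# 25 hours old is healthy, 25-48 hours is overdue, older is failed.
backup_status_icon() {
    local hours_since=$1
    if [ "$hours_since" -lt 25 ]; then
        echo "✅"
    elif [ "$hours_since" -lt 48 ]; then
        echo "⚠️"
    else
        echo "❌"
    fi
}

backup_status_icon 3    # → ✅
backup_status_icon 30   # → ⚠️
backup_status_icon 72   # → ❌
```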
# Proxy command
cmd_proxy() {
local SCOPE="local"
local FORMAT="table"
while [[ $# -gt 0 ]]; do
case $1 in
--global|-g) SCOPE="global"; shift ;;
--local|-l) SCOPE="local"; shift ;;
--json) FORMAT="json"; shift ;;
--help|-h)
cat << 'EOF'
homelab proxy - Show reverse proxy entries
USAGE:
homelab proxy [options]
OPTIONS:
--global, -g Show proxy entries from entire fleet
--local, -l Show local proxy entries (default)
--json Output JSON format
EXAMPLES:
homelab proxy
homelab proxy --global
EOF
return 0
;;
*) shift ;;
esac
done
if [[ "$FORMAT" == "json" ]]; then
if [[ "$SCOPE" == "global" ]]; then
jq -r '.reverseProxy.global // {}' "$HOMELAB_CONFIG"
else
jq -r '.reverseProxy.local // {}' "$HOMELAB_CONFIG"
fi
else
info "Homelab Reverse Proxy ($SCOPE)"
echo "==============================="
echo
proxy_data=$(jq -r "
if \"$SCOPE\" == \"global\" then .reverseProxy.global.allEntries // []
else .reverseProxy.local.allEntries // []
end |
.[] |
[.subdomain, (.sourceNode // .node // \"local\"), .host, (.port // \"N/A\"), (.enableAuth // false), (.enableSSL // true)] |
@tsv
" "$HOMELAB_CONFIG" 2>/dev/null || echo "")
if [[ -z "$proxy_data" ]]; then
warn "No proxy entries found"
return 0
fi
printf "%-20s %-12s %-15s %-8s %-6s %-6s %s\n" "SUBDOMAIN" "NODE" "BACKEND" "PORT" "AUTH" "SSL" "EXTERNAL URL"
printf "%-20s %-12s %-15s %-8s %-6s %-6s %s\n" "---------" "----" "-------" "----" "----" "---" "------------"
external_domain=$(jq -r '.externalDomain // "lab.local"' "$HOMELAB_CONFIG" 2>/dev/null)
while IFS=$'\t' read -r subdomain node host port auth ssl; do
auth_icon=$(if [[ "$auth" == "true" ]]; then echo "🔒"; else echo "🌐"; fi)
ssl_icon=$(if [[ "$ssl" == "true" ]]; then echo "🔐"; else echo "❌"; fi)
external_url="https://$subdomain.$external_domain"
if [[ "$ssl" == "false" ]]; then
external_url="http://$subdomain.$external_domain"
fi
printf "%-20s %-12s %-15s %-8s %-6s %-6s %s\n" "$subdomain" "$node" "$host" "$port" "$auth_icon" "$ssl_icon" "$external_url"
done <<< "$proxy_data"
echo
entry_count=$(echo "$proxy_data" | wc -l)
success "Total proxy entries: $entry_count"
fi
}
# Monitoring command
cmd_monitoring() {
local SCOPE="local"
local FORMAT="table"
local SHOW_TYPE="all"
while [[ $# -gt 0 ]]; do
case $1 in
--global|-g) SCOPE="global"; shift ;;
--local|-l) SCOPE="local"; shift ;;
--json) FORMAT="json"; shift ;;
--metrics) SHOW_TYPE="metrics"; shift ;;
--health) SHOW_TYPE="health"; shift ;;
--logs) SHOW_TYPE="logs"; shift ;;
--help|-h)
cat << 'EOF'
homelab monitoring - Show monitoring configuration
USAGE:
homelab monitoring [options]
OPTIONS:
--global, -g Show monitoring from entire fleet
--local, -l Show local monitoring (default)
--json Output JSON format
--metrics Show only metrics endpoints
--health Show only health checks
--logs Show only log sources
EXAMPLES:
homelab monitoring
homelab monitoring --global --metrics
EOF
return 0
;;
*) shift ;;
esac
done
if [[ "$FORMAT" == "json" ]]; then
if [[ "$SCOPE" == "global" ]]; then
jq -r '.monitoring.global // {}' "$HOMELAB_CONFIG"
else
jq -r '.monitoring.local // {}' "$HOMELAB_CONFIG"
fi
else
info "Homelab Monitoring ($SCOPE)"
echo "============================"
echo
# Show metrics
if [[ "$SHOW_TYPE" == "all" || "$SHOW_TYPE" == "metrics" ]]; then
info "📊 Metrics Endpoints"
echo "--------------------"
metrics_data=$(jq -r "
if \"$SCOPE\" == \"global\" then .monitoring.global.allMetrics // []
else .monitoring.local.allMetrics // []
end |
.[] |
[.name, (.sourceNode // .node // \"local\"), .host, (.port // \"N/A\"), .path, .jobName] |
@tsv
" "$HOMELAB_CONFIG" 2>/dev/null || echo "")
if [[ -n "$metrics_data" ]]; then
printf "%-20s %-12s %-15s %-8s %-12s %s\n" "NAME" "NODE" "HOST" "PORT" "PATH" "JOB"
printf "%-20s %-12s %-15s %-8s %-12s %s\n" "----" "----" "----" "----" "----" "---"
while IFS=$'\t' read -r name node host port path job; do
printf "%-20s %-12s %-15s %-8s %-12s %s\n" "$name" "$node" "$host" "$port" "$path" "$job"
done <<< "$metrics_data"
echo
metrics_count=$(echo "$metrics_data" | wc -l)
success "Found $metrics_count metrics endpoints"
else
warn "No metrics endpoints found"
fi
echo
fi
# Show health checks
if [[ "$SHOW_TYPE" == "all" || "$SHOW_TYPE" == "health" ]]; then
info "🏥 Health Checks"
echo "----------------"
health_data=$(jq -r "
if \"$SCOPE\" == \"global\" then .monitoring.global.allHealthChecks // []
else .monitoring.local.allHealthChecks // []
end |
.[] |
[.name, (.sourceNode // .node // \"local\"), .host, (.port // \"N/A\"), .path, .protocol, (.enabled // true)] |
@tsv
" "$HOMELAB_CONFIG" 2>/dev/null || echo "")
if [[ -n "$health_data" ]]; then
printf "%-20s %-12s %-15s %-8s %-12s %-8s %s\n" "NAME" "NODE" "HOST" "PORT" "PATH" "PROTOCOL" "STATUS"
printf "%-20s %-12s %-15s %-8s %-12s %-8s %s\n" "----" "----" "----" "----" "----" "--------" "------"
while IFS=$'\t' read -r name node host port path protocol enabled; do
status_icon=$(if [[ "$enabled" == "true" ]]; then echo "✅"; else echo "❌"; fi)
printf "%-20s %-12s %-15s %-8s %-12s %-8s %s\n" "$name" "$node" "$host" "$port" "$path" "$protocol" "$status_icon"
done <<< "$health_data"
echo
health_count=$(echo "$health_data" | wc -l)
success "Found $health_count health checks"
else
warn "No health checks found"
fi
fi
fi
}
# Status command
cmd_status() {
local FORMAT="table"
while [[ $# -gt 0 ]]; do
case $1 in
--json) FORMAT="json"; shift ;;
--help|-h)
cat << 'EOF'
homelab status - Show overall homelab status
USAGE:
homelab status [options]
OPTIONS:
--json Output JSON format
EXAMPLES:
homelab status
EOF
return 0
;;
*) shift ;;
esac
done
if [[ "$FORMAT" == "json" ]]; then
cat "$HOMELAB_CONFIG"
else
# Get basic info
hostname=$(jq -r '.hostname // "unknown"' "$HOMELAB_CONFIG")
domain=$(jq -r '.domain // "lab"' "$HOMELAB_CONFIG")
external_domain=$(jq -r '.externalDomain // "unknown"' "$HOMELAB_CONFIG")
environment=$(jq -r '.environment // "unknown"' "$HOMELAB_CONFIG")
info "🏠 Homelab Status"
echo "=================="
echo
echo "Node Information:"
echo " Hostname: $hostname"
echo " Domain: $domain"
echo " External: $external_domain"
echo " Environment: $environment"
echo
# Services summary
local_services=$(jq -r '.services.local.count // 0' "$HOMELAB_CONFIG" 2>/dev/null)
global_services=$(jq -r '.services.global.count // 0' "$HOMELAB_CONFIG" 2>/dev/null)
echo "📋 Services:"
echo " Local: $local_services"
echo " Fleet: $global_services"
echo
# Monitoring summary
local_metrics=$(jq -r '.monitoring.local.count // 0' "$HOMELAB_CONFIG" 2>/dev/null)
global_metrics=$(jq -r '.monitoring.global.summary.totalMetrics // 0' "$HOMELAB_CONFIG" 2>/dev/null)
echo "📊 Monitoring:"
echo " Local Metrics: $local_metrics"
echo " Fleet Metrics: $global_metrics"
echo
# Backup summary
local_backups=$(jq -r '.backups.local.count // 0' "$HOMELAB_CONFIG" 2>/dev/null)
global_backups=$(jq -r '.backups.global.summary.total // 0' "$HOMELAB_CONFIG" 2>/dev/null)
echo "💾 Backups:"
echo " Local Jobs: $local_backups"
echo " Fleet Jobs: $global_backups"
echo
success "Use 'homelab <command> --help' for detailed information"
fi
}


@@ -0,0 +1,295 @@
{
config,
lib,
pkgs,
...
}:
with lib; let
cfg = config.homelab;
extractServiceData = services:
mapAttrsToList (name: svc: {
inherit name;
enabled = svc.enable or false;
port = svc.port or null;
description = svc.description or name;
tags = svc.tags or [];
systemdServices = svc.systemdServices or ["${name}.service" name];
}) (filterAttrs (name: svc: svc.enable or false) services);
extractListData = list:
if isList list
then
map (
item:
if isAttrs item
then
filterAttrs (
k: v:
!(isFunction v)
&& !(isAttrs v && v ? "_type")
&& k != "_module"
)
item
else item
)
list
else [];
homelabCli = pkgs.writeShellScriptBin "homelab" ''
set -euo pipefail
HOMELAB_CONFIG="/etc/homelab/config.json"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Helper functions
error() { echo -e "''${RED}Error: $1''${NC}" >&2; }
info() { echo -e "''${BLUE}$1''${NC}"; }
success() { echo -e "''${GREEN}$1''${NC}"; }
warn() { echo -e "''${YELLOW}$1''${NC}"; }
# Check config exists
if [[ ! -f "$HOMELAB_CONFIG" ]]; then
error "Homelab configuration not found"
error "Make sure homelab.enable = true and rebuild"
exit 1
fi
# Load command implementations
source ${./cli-commands.sh}
# Help function
show_help() {
cat << 'EOF'
Homelab Management CLI
USAGE:
homelab <command> [options]
COMMANDS:
services List and manage services
backups Show backup jobs and status
proxy Show reverse proxy entries
monitoring Show monitoring configuration
status Overall homelab status
help Show this help
GLOBAL OPTIONS:
--global, -g Show fleet-wide information
--local, -l Show local information (default)
--json Output JSON format
--help, -h Show help
EXAMPLES:
homelab services --global
homelab backups --local
homelab status
EOF
}
# Main command dispatcher
case "''${1:-help}" in
services)
shift
cmd_services "$@"
;;
backups)
shift
cmd_backups "$@"
;;
proxy)
shift
cmd_proxy "$@"
;;
monitoring)
shift
cmd_monitoring "$@"
;;
status)
shift
cmd_status "$@"
;;
help|--help|-h)
show_help
;;
*)
error "Unknown command: $1"
show_help
exit 1
;;
esac
'';
in {
# Only enable when homelab is enabled
config = mkIf cfg.enable {
# Install CLI tools
environment.systemPackages = [
homelabCli
# Create convenient aliases
(pkgs.writeShellScriptBin "hl" "exec homelab \"$@\"")
(pkgs.writeShellScriptBin "hls" "exec homelab services \"$@\"")
(pkgs.writeShellScriptBin "hlb" "exec homelab backups \"$@\"")
(pkgs.writeShellScriptBin "hlp" "exec homelab proxy \"$@\"")
(pkgs.writeShellScriptBin "hlm" "exec homelab monitoring \"$@\"")
];
# Generate minimal, safe JSON config
environment.etc."homelab/config.json" = {
text = builtins.toJSON {
# Basic homelab info (always safe)
hostname = cfg.hostname or "unknown";
domain = cfg.domain or "lab";
externalDomain = cfg.externalDomain or "lab.local";
environment = cfg.environment or "production";
location = cfg.location or "homelab";
tags = cfg.tags or [];
# Services - only extract what we have locally
services = {
local = {
all =
if (cfg ? services)
then extractServiceData cfg.services
else [];
count =
if (cfg ? services)
then length (attrNames (filterAttrs (n: s: s.enable or false) cfg.services))
else 0;
};
# For global data, we'll try to read it but provide empty fallback
global = {
all = [];
count = 0;
summary = {};
};
};
# Monitoring - extract only basic data
monitoring = {
local = {
allMetrics =
if (hasAttr "monitoring" cfg && hasAttr "allMetrics" cfg.monitoring)
then extractListData cfg.monitoring.allMetrics
else [];
allHealthChecks =
if (hasAttr "monitoring" cfg && hasAttr "allHealthChecks" cfg.monitoring)
then extractListData cfg.monitoring.allHealthChecks
else [];
count =
if (hasAttr "monitoring" cfg && hasAttr "allMetrics" cfg.monitoring)
then length cfg.monitoring.allMetrics
else 0;
};
global = {
allMetrics = [];
allHealthChecks = [];
summary = {
totalMetrics = 0;
totalHealthChecks = 0;
};
};
};
# Logging
logging = {
local = {
allSources =
if (hasAttr "logging" cfg && hasAttr "allSources" cfg.logging)
then extractListData cfg.logging.allSources
else [];
count =
if (hasAttr "logging" cfg && hasAttr "allSources" cfg.logging)
then length cfg.logging.allSources
else 0;
};
global = {
allSources = [];
summary = {};
};
};
# Backups
backups = {
local = {
allJobs =
if (hasAttr "backups" cfg && hasAttr "allJobs" cfg.backups)
then extractListData cfg.backups.allJobs
else [];
count =
if (hasAttr "backups" cfg && hasAttr "allJobs" cfg.backups)
then length cfg.backups.allJobs
else 0;
};
global = {
allJobs = [];
summary = {};
};
};
# Reverse Proxy
reverseProxy = {
local = {
allEntries =
if (hasAttr "reverseProxy" cfg && hasAttr "allEntries" cfg.reverseProxy)
then extractListData cfg.reverseProxy.allEntries
else [];
count =
if (hasAttr "reverseProxy" cfg && hasAttr "allEntries" cfg.reverseProxy)
then length cfg.reverseProxy.allEntries
else 0;
};
global = {
allEntries = [];
summary = {};
};
};
# Metadata
_metadata = {
# generated = toString builtins.currentTime;
version = "1.0.0";
};
};
mode = "0644";
};
# Add bash completion
environment.etc."bash_completion.d/homelab".text = ''
_homelab_completion() {
local cur prev opts
COMPREPLY=()
cur="''${COMP_WORDS[COMP_CWORD]}"
prev="''${COMP_WORDS[COMP_CWORD-1]}"
case ''${COMP_CWORD} in
1)
opts="services backups proxy monitoring status help"
COMPREPLY=( $(compgen -W "''${opts}" -- ''${cur}) )
return 0
;;
*)
case ''${COMP_WORDS[1]} in
services|backups|proxy|monitoring|status)
opts="--global --local --json --help"
;;
*)
opts="--help"
;;
esac
COMPREPLY=( $(compgen -W "''${opts}" -- ''${cur}) )
return 0
;;
esac
}
complete -F _homelab_completion homelab hl
'';
};
}


@@ -124,19 +124,19 @@ in {
     # Always exposed aggregated data
     allJobs = mkOption {
       type = types.listOf types.attrs;
-      default = [];
+      default = localAggregation.allJobs;
       readOnly = true;
     };
     allBackends = mkOption {
       type = types.listOf types.str;
-      default = [];
+      default = localAggregation.allBackends;
       readOnly = true;
     };
     global = mkOption {
       type = types.attrs;
-      default = {};
+      default = globalAggregation;
       readOnly = true;
     };
   };
@@ -151,13 +151,13 @@
     ];
     # Always expose both local and global
-    homelab.backups = {
-      allJobs = localAggregation.allJobs;
-      allBackends = localAggregation.allBackends;
-      global =
-        if hasNodes
-        then globalAggregation
-        else {};
-    };
+    # homelab.backups = {
+    #   allJobs = localAggregation.allJobs;
+    #   allBackends = localAggregation.allBackends;
+    #   global =
+    #     if hasNodes
+    #     then globalAggregation
+    #     else {};
+    # };
   };
 }


@@ -140,13 +140,13 @@
     # Always exposed aggregated data
     allSources = mkOption {
       type = types.listOf types.attrs;
-      default = [];
+      default = localAggregation.allSources;
       readOnly = true;
     };
     global = mkOption {
       type = types.attrs;
-      default = {};
+      default = globalAggregation;
       readOnly = true;
     };
   };
@@ -198,12 +198,12 @@
     networking.firewall.allowedTCPPorts = optionals cfg.promtail.enable [cfg.promtail.port];
-    homelab.logging = {
-      allSources = localAggregation.allSources;
-      global =
-        if hasNodes
-        then globalAggregation
-        else {};
-    };
+    # homelab.logging = {
+    #   allSources = localAggregation.allSources;
+    #   global =
+    #     if hasNodes
+    #     then globalAggregation
+    #     else {};
+    # };
   };
 }


@@ -74,25 +74,25 @@
     # Always exposed aggregated data
     allEntries = mkOption {
       type = types.listOf types.attrs;
-      default = [];
+      default = localAggregation.allEntries;
       readOnly = true;
     };
     global = mkOption {
       type = types.attrs;
-      default = {};
+      default = globalAggregation;
       readOnly = true;
     };
   };
   config = mkIf cfg.enable {
     # Always expose both local and global
-    homelab.reverseProxy = {
-      allEntries = localAggregation.allEntries;
-      global =
-        if hasNodes
-        then globalAggregation
-        else {};
-    };
+    # homelab.reverseProxy = {
+    #   allEntries = localAggregation.allEntries;
+    #   global =
+    #     if hasNodes
+    #     then globalAggregation
+    #     else {};
+    # };
   };
 }


@@ -12,9 +12,7 @@ with lib; let
   # Generate Prometheus scrape configs from global monitoring data
   prometheusScrapeConfigs = let
     # Get all metrics - try global first, fallback to local
-    allMetrics = homelabCfg.monitoring.global.allMetrics
-      or homelabCfg.monitoring.allMetrics
-      or [];
+    allMetrics = homelabCfg.monitoring.global.allMetrics;
     jobGroups = groupBy (m: m.jobName) allMetrics;
@@ -157,6 +155,15 @@
       default = [];
       description = "Additional rule files to load";
     };
+    systemdServices = mkOption {
+      type = types.listOf types.str;
+      default = [
+        "prometheus.service"
+        "prometheus"
+      ];
+      description = "Systemd services to monitor";
+    };
   };
   # Service configuration with smart defaults


@@ -3,4 +3,5 @@
 pkgs: {
   # example = pkgs.callPackage ./example { };
   ente-web = pkgs.callPackage ./ente-web.nix {};
+  homelab-docs = pkgs.callPackage ./homelab-docs.nix {};
 }

pkgs/homelab-docs.nix Normal file

@@ -0,0 +1,841 @@
# homelab-docs.nix - Standalone documentation generator package
{
lib,
stdenv,
writeShellScriptBin,
jq,
nixfmt,
}: let
# Main documentation generator script
docsGenerator = writeShellScriptBin "homelab-generate-docs" ''
set -euo pipefail
# Colors
BLUE='\033[0;34m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m'
info() { echo -e "''${BLUE}$1''${NC}"; }
success() { echo -e "''${GREEN}$1''${NC}"; }
warn() { echo -e "''${YELLOW}$1''${NC}"; }
error() { echo -e "''${RED}$1''${NC}"; }
# Configuration
DOCS_DIR="''${1:-./docs}"
info "📚 Generating homelab documentation..."
echo " Output directory: $DOCS_DIR"
echo
# Check if we're in a directory with a flake
if [[ ! -f flake.nix ]]; then
error "No flake.nix found in current directory"
echo "Please run this command from your homelab flake directory"
exit 1
fi
# Check if colmena is available
if ! command -v colmena >/dev/null 2>&1; then
error "colmena command not found."
echo "Please ensure colmena is available in your environment"
echo "Add it to your devShell or install it globally"
exit 1
fi
# Create docs directory
mkdir -p "$DOCS_DIR"
# Generate fleet overview
info " 🌐 Generating fleet overview..."
homelab-docs-fleet > "$DOCS_DIR/fleet-overview.md"
# Generate node documentation
info " 🖥 Generating node configurations..."
homelab-docs-nodes > "$DOCS_DIR/nodes.md"
# Generate service documentation
info " Generating service configurations..."
homelab-docs-services > "$DOCS_DIR/services.md"
# Generate current deployment
info " 🏠 Generating current deployment..."
homelab-docs-deployment > "$DOCS_DIR/current-deployment.md"
# Generate README
info " 📋 Generating README..."
homelab-docs-readme > "$DOCS_DIR/README.md"
success "✅ Documentation generated successfully!"
echo
echo "Generated files:"
echo " 🌐 fleet-overview.md - Fleet statistics and overview"
echo " 🖥 nodes.md - Per-node configurations"
echo " services.md - Service configurations"
echo " 🏠 current-deployment.md - Current deployment state"
echo " 📋 README.md - Documentation index"
echo
echo "💡 Tip: Add these files to your repository and set up GitHub Actions"
echo " to automatically regenerate documentation on changes!"
'';
# Fleet overview generator
fleetDocsGenerator = writeShellScriptBin "homelab-docs-fleet" ''
#!/usr/bin/env bash
set -euo pipefail
cat << EOF
# Homelab Fleet Overview
> Auto-generated fleet overview
>
> Generated on: $(date)
> Source: $(pwd)
## Fleet Statistics
EOF
# Get basic fleet stats
echo "### Basic Information"
echo
fleet_stats=$(colmena eval -E '{ nodes, pkgs, lib, ... }: {
totalNodes = lib.length (lib.attrNames nodes);
nodeNames = lib.attrNames nodes;
}')
total_nodes=$(echo "$fleet_stats" | ${jq}/bin/jq -r '.totalNodes')
node_names=$(echo "$fleet_stats" | ${jq}/bin/jq -r '.nodeNames[]' | tr '\n' ' ')
echo "| Metric | Value |"
echo "|--------|-------|"
echo "| Total Nodes | $total_nodes |"
echo "| Node Names | $node_names |"
echo
# Get homelab-enabled nodes
echo "### Homelab Configuration"
echo
homelab_info=$(colmena eval -E '{ nodes, pkgs, lib, ... }: {
homelabNodes = lib.mapAttrs (name: node: {
enabled = node.config.homelab.enable or false;
hostname = node.config.homelab.hostname or null;
environment = node.config.homelab.environment or null;
}) nodes;
}')
echo "| Node | Homelab Enabled | Hostname | Environment |"
echo "|------|----------------|----------|-------------|"
echo "$homelab_info" | ${jq}/bin/jq -r '.homelabNodes | to_entries[] |
[.key, (.value.enabled | tostring), (.value.hostname // "N/A"), (.value.environment // "N/A")] |
@tsv' | while IFS=$'\t' read -r node enabled hostname environment; do
enabled_icon=$(if [[ "$enabled" == "true" ]]; then echo "✅"; else echo "❌"; fi)
echo "| \`$node\` | $enabled_icon | $hostname | $environment |"
done
echo
# Get service distribution
echo "### Service Distribution"
echo
service_info=$(colmena eval -E '{ nodes, pkgs, lib, ... }:
lib.mapAttrs (name: node:
if (node.config.homelab.enable or false) then {
serviceCount = lib.length (lib.attrNames (lib.filterAttrs (n: v: v.enable or false) (node.config.homelab.services or {})));
serviceNames = lib.attrNames (lib.filterAttrs (n: v: v.enable or false) (node.config.homelab.services or {}));
} else {
serviceCount = 0;
serviceNames = [];
}
) nodes')
echo "| Node | Service Count | Services |"
echo "|------|---------------|----------|"
echo "$service_info" | ${jq}/bin/jq -r 'to_entries[] |
[.key, (.value.serviceCount | tostring), (.value.serviceNames | join(", "))] |
@tsv' | while IFS=$'\t' read -r node count services; do
echo "| \`$node\` | $count | $services |"
done
echo
# Environment distribution
echo "### Environment Distribution"
echo
env_distribution=$(echo "$homelab_info" | ${jq}/bin/jq -r '
[.homelabNodes | to_entries[] | select(.value.enabled == true) | .value.environment // "unknown"] |
group_by(.) |
map({environment: .[0], count: length}) |
.[]')
if [[ -n "$env_distribution" ]]; then
echo "| Environment | Node Count |"
echo "|-------------|------------|"
echo "$env_distribution" | ${jq}/bin/jq -r '[.environment, (.count | tostring)] | @tsv' | \
while IFS=$'\t' read -r env count; do
echo "| $env | $count |"
done
else
echo "No homelab-enabled nodes found."
fi
echo
echo "---"
echo
echo "*Fleet overview generated from colmena evaluation*"
'';
# Node documentation generator
nodeDocsGenerator = writeShellScriptBin "homelab-docs-nodes" ''
#!/usr/bin/env bash
set -euo pipefail
cat << EOF
# Node Configurations
> Detailed per-node configuration
>
> Generated on: $(date)
EOF
# Get all node information
node_info=$(colmena eval -E '{ nodes, pkgs, lib, ... }:
lib.mapAttrs (name: node: {
# Basic system info
nixosVersion = node.config.system.nixos.version;
hostName = node.config.networking.hostName;
system = node.config.nixpkgs.system;
# Homelab config (safe extraction)
homelab = if (node.config.homelab.enable or false) then {
enabled = true;
hostname = node.config.homelab.hostname or null;
domain = node.config.homelab.domain or null;
externalDomain = node.config.homelab.externalDomain or null;
environment = node.config.homelab.environment or null;
location = node.config.homelab.location or null;
tags = node.config.homelab.tags or [];
} else {
enabled = false;
};
# Services (safe extraction)
services = if (node.config.homelab.enable or false) then
lib.mapAttrs (svcName: svc: {
enabled = svc.enable or false;
port = svc.port or null;
description = svc.description or svcName;
tags = svc.tags or [];
}) (node.config.homelab.services or {})
else {};
}) nodes')
echo "$node_info" | ${jq}/bin/jq -r 'to_entries[] | .key' | while read -r node; do
echo "## Node: $node"
echo
# Basic system information
echo "### System Information"
echo
nixos_version=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].nixosVersion")
hostname=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].hostName")
system=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].system")
echo "| Property | Value |"
echo "|----------|-------|"
echo "| NixOS Version | \`$nixos_version\` |"
echo "| Hostname | \`$hostname\` |"
echo "| System | \`$system\` |"
echo
# Homelab configuration
homelab_enabled=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].homelab.enabled")
if [[ "$homelab_enabled" == "true" ]]; then
echo "### Homelab Configuration"
echo
hl_hostname=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].homelab.hostname // \"N/A\"")
hl_domain=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].homelab.domain // \"N/A\"")
hl_external=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].homelab.externalDomain // \"N/A\"")
hl_env=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].homelab.environment // \"N/A\"")
hl_location=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].homelab.location // \"N/A\"")
hl_tags=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].homelab.tags | join(\", \")")
echo "| Property | Value |"
echo "|----------|-------|"
echo "| Homelab Hostname | \`$hl_hostname\` |"
echo "| Domain | \`$hl_domain\` |"
echo "| External Domain | \`$hl_external\` |"
echo "| Environment | \`$hl_env\` |"
echo "| Location | \`$hl_location\` |"
echo "| Tags | $hl_tags |"
echo
# Services
echo "### Services"
echo
services_data=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].services")
service_count=$(echo "$services_data" | ${jq}/bin/jq 'length')
if [[ "$service_count" -gt 0 ]]; then
echo "| Service | Enabled | Port | Description | Tags |"
echo "|---------|---------|------|-------------|------|"
echo "$services_data" | ${jq}/bin/jq -r 'to_entries[] |
[.key, (.value.enabled | tostring), (.value.port // "N/A" | tostring), (.value.description // "N/A"), (.value.tags | join(", "))] |
@tsv' | while IFS=$'\t' read -r service enabled port description tags; do
enabled_icon=$(if [[ "$enabled" == "true" ]]; then echo "✅"; else echo "❌"; fi)
echo "| \`$service\` | $enabled_icon | $port | $description | $tags |"
done
else
echo "No services configured."
fi
else
echo "### Homelab Configuration"
echo
echo "❌ Homelab is not enabled on this node."
fi
echo
echo "---"
echo
done
'';
# Service documentation generator - refocused on service capabilities
serviceDocsGenerator = writeShellScriptBin "homelab-docs-services" ''
#!/usr/bin/env bash
set -euo pipefail
cat << EOF
# Service Catalog
> Available services and their configuration options
>
> Generated on: $(date)
This document catalogs all available homelab services, their configuration options, and integration capabilities.
EOF
# Get all services and their configurations
services_catalog=$(colmena eval -E '{ nodes, pkgs, lib, ... }:
let
# Collect all services from all nodes to build a complete catalog
allServiceConfigs = lib.flatten (lib.mapAttrsToList (nodeName: node:
if (node.config.homelab.enable or false) then
lib.mapAttrsToList (serviceName: service: {
inherit serviceName nodeName; # nodeName is needed below for availableOn
config = {
# Core service options
enable = service.enable or false;
port = service.port or null;
description = service.description or serviceName;
tags = service.tags or [];
# Integration options
monitoring = {
enabled = service.monitoring.enable or false;
metricsPath = service.monitoring.metrics.path or "/metrics";
healthPath = service.monitoring.healthCheck.path or "/health";
extraLabels = service.monitoring.extraLabels or {};
};
logging = {
enabled = service.logging.enable or false;
files = service.logging.files or [];
extraLabels = service.logging.extraLabels or {};
};
proxy = {
enabled = service.proxy.enable or false;
subdomain = service.proxy.subdomain or serviceName;
enableAuth = service.proxy.enableAuth or false;
additionalSubdomains = service.proxy.additionalSubdomains or [];
};
# Service-specific options (everything else)
serviceSpecific = removeAttrs service [
"enable" "port" "description" "tags"
"monitoring" "logging" "proxy"
];
};
}) (node.config.homelab.services or {})
else []
) nodes);
# Group by service name and merge configurations
serviceGroups = lib.groupBy (svc: svc.serviceName) allServiceConfigs;
# Get unique services with merged configuration examples
uniqueServices = lib.mapAttrs (serviceName: instances:
let
# Take the first enabled instance as the canonical example
enabledInstances = lib.filter (inst: inst.config.enable) instances;
canonicalConfig = if enabledInstances != [] then (lib.head enabledInstances).config else (lib.head instances).config;
in {
inherit serviceName;
config = canonicalConfig;
deploymentCount = lib.length (lib.filter (inst: inst.config.enable) instances);
availableOn = lib.unique (map (inst: inst.nodeName or "unknown") enabledInstances);
}
) serviceGroups;
in {
services = uniqueServices;
totalUniqueServices = lib.length (lib.attrNames uniqueServices);
}')
total_services=$(echo "$services_catalog" | ${jq}/bin/jq -r '.totalUniqueServices')
echo "## Overview"
echo
echo "**Total Available Services:** $total_services"
echo
echo "## Service Reference"
echo
# Process each service
echo "$services_catalog" | ${jq}/bin/jq -r '.services | to_entries[] | .key' | sort | while read -r service; do
echo "### $service"
echo
# Get service details
service_data=$(echo "$services_catalog" | ${jq}/bin/jq -r ".services[\"$service\"]")
description=$(echo "$service_data" | ${jq}/bin/jq -r '.config.description // "No description available"')
port=$(echo "$service_data" | ${jq}/bin/jq -r '.config.port // "N/A"')
tags=$(echo "$service_data" | ${jq}/bin/jq -r '.config.tags | join(", ")')
deployment_count=$(echo "$service_data" | ${jq}/bin/jq -r '.deploymentCount')
echo "**Description:** $description"
echo
echo "**Default Port:** \`$port\`"
echo
if [[ -n "$tags" && "$tags" != "" ]]; then
echo "**Tags:** $tags"
echo
fi
echo "**Current Deployments:** $deployment_count instance(s)"
echo
# Core Configuration
echo "#### Core Configuration"
echo
echo "\`\`\`nix"
echo "homelab.services.$service = {"
echo " enable = true;"
if [[ "$port" != "N/A" ]]; then
echo " port = $port;"
fi
echo " description = \"$description\";"
if [[ -n "$tags" && "$tags" != "" ]]; then
echo " tags = [ $(echo "$tags" | sed 's/, /" "/g' | sed 's/^/"/; s/$/"/') ];"
fi
echo "};"
echo "\`\`\`"
echo
# Service-specific options
service_specific=$(echo "$service_data" | ${jq}/bin/jq -r '.config.serviceSpecific')
if [[ "$service_specific" != "{}" && "$service_specific" != "null" ]]; then
echo "#### Service-Specific Options"
echo
echo "Available configuration options for $service:"
echo
echo "\`\`\`nix"
echo "homelab.services.$service = {"
echo " # ... core options above ..."
echo
echo " # Service-specific configuration"
echo "$service_specific" | ${jq}/bin/jq -r 'to_entries[] | " \(.key) = \(.value | tostring);"'
echo "};"
echo "\`\`\`"
echo
fi
# Integration Options
monitoring_enabled=$(echo "$service_data" | ${jq}/bin/jq -r '.config.monitoring.enabled')
logging_enabled=$(echo "$service_data" | ${jq}/bin/jq -r '.config.logging.enabled')
proxy_enabled=$(echo "$service_data" | ${jq}/bin/jq -r '.config.proxy.enabled')
if [[ "$monitoring_enabled" == "true" || "$logging_enabled" == "true" || "$proxy_enabled" == "true" ]]; then
echo "#### Available Integrations"
echo
fi
# Monitoring Integration
if [[ "$monitoring_enabled" == "true" ]]; then
metrics_path=$(echo "$service_data" | ${jq}/bin/jq -r '.config.monitoring.metricsPath')
health_path=$(echo "$service_data" | ${jq}/bin/jq -r '.config.monitoring.healthPath')
extra_labels=$(echo "$service_data" | ${jq}/bin/jq -r '.config.monitoring.extraLabels')
echo "##### 📊 Monitoring"
echo
echo "- **Metrics Endpoint:** \`$metrics_path\`"
echo "- **Health Check:** \`$health_path\`"
if [[ "$extra_labels" != "{}" ]]; then
echo "- **Default Labels:** $(echo "$extra_labels" | ${jq}/bin/jq -r 'to_entries[] | "\(.key)=\(.value)"' | paste -sd, -)"
fi
echo
echo "\`\`\`nix"
echo "homelab.services.$service.monitoring = {"
echo " enable = true;"
echo " metrics.path = \"$metrics_path\";"
echo " healthCheck.path = \"$health_path\";"
if [[ "$extra_labels" != "{}" ]]; then
echo " extraLabels = $extra_labels;"
fi
echo "};"
echo "\`\`\`"
echo
fi
# Logging Integration
if [[ "$logging_enabled" == "true" ]]; then
log_files=$(echo "$service_data" | ${jq}/bin/jq -r '.config.logging.files[]?')
log_labels=$(echo "$service_data" | ${jq}/bin/jq -r '.config.logging.extraLabels')
echo "##### 📝 Logging"
echo
if [[ -n "$log_files" ]]; then
echo "- **Log Files:**"
echo "$log_files" | while read -r file; do
echo " - \`$file\`"
done
fi
if [[ "$log_labels" != "{}" ]]; then
echo "- **Default Labels:** $(echo "$log_labels" | ${jq}/bin/jq -r 'to_entries[] | "\(.key)=\(.value)"' | paste -sd, -)"
fi
echo
echo "\`\`\`nix"
echo "homelab.services.$service.logging = {"
echo " enable = true;"
if [[ -n "$log_files" ]]; then
echo " files = ["
echo "$log_files" | while read -r file; do
echo " \"$file\""
done
echo " ];"
fi
if [[ "$log_labels" != "{}" ]]; then
echo " extraLabels = $log_labels;"
fi
echo "};"
echo "\`\`\`"
echo
fi
# Proxy Integration
if [[ "$proxy_enabled" == "true" ]]; then
subdomain=$(echo "$service_data" | ${jq}/bin/jq -r '.config.proxy.subdomain')
enable_auth=$(echo "$service_data" | ${jq}/bin/jq -r '.config.proxy.enableAuth')
additional_subdomains=$(echo "$service_data" | ${jq}/bin/jq -r '.config.proxy.additionalSubdomains')
echo "##### 🔀 Reverse Proxy"
echo
echo "- **Primary Subdomain:** \`$subdomain\`"
echo "- **Authentication Required:** $(if [[ "$enable_auth" == "true" ]]; then echo "✅ Yes"; else echo "❌ No"; fi)"
if [[ "$additional_subdomains" != "[]" && "$additional_subdomains" != "null" ]]; then
echo "- **Additional Subdomains:** Available"
fi
echo
echo "\`\`\`nix"
echo "homelab.services.$service.proxy = {"
echo " enable = true;"
echo " subdomain = \"$subdomain\";"
echo " enableAuth = $enable_auth;"
if [[ "$additional_subdomains" != "[]" && "$additional_subdomains" != "null" ]]; then
echo " additionalSubdomains = ["
echo " # Configure additional proxy entries as needed"
echo " ];"
fi
echo "};"
echo "\`\`\`"
echo
fi
# Usage Examples
echo "#### Complete Example"
echo
echo "\`\`\`nix"
echo "# Full configuration example for $service"
echo "homelab.services.$service = {"
echo " enable = true;"
if [[ "$port" != "N/A" ]]; then
echo " port = $port;"
fi
echo " description = \"$description\";"
# Add integration examples
if [[ "$monitoring_enabled" == "true" ]]; then
echo " "
echo " # Monitoring integration"
echo " monitoring.enable = true;"
fi
if [[ "$logging_enabled" == "true" ]]; then
echo " "
echo " # Logging integration"
echo " logging.enable = true;"
fi
if [[ "$proxy_enabled" == "true" ]]; then
echo " "
echo " # Reverse proxy integration"
echo " proxy = {"
echo " enable = true;"
echo " subdomain = \"$subdomain\";"
echo " enableAuth = $enable_auth;"
echo " };"
fi
echo "};"
echo "\`\`\`"
echo
echo "---"
echo
done
echo "## Integration Summary"
echo
echo "### Available Integration Types"
echo
echo "| Integration | Purpose | Configuration |"
echo "|-------------|---------|---------------|"
echo "| **Monitoring** | Prometheus metrics + health checks | \`monitoring.enable = true\` |"
echo "| **Logging** | Centralized log collection | \`logging.enable = true\` |"
echo "| **Proxy** | Reverse proxy with SSL + auth | \`proxy.enable = true\` |"
echo
echo "### Integration Benefits"
echo
echo "- **🔄 Automatic Discovery:** Enabled integrations are automatically discovered by fleet-wide services"
echo "- **📊 Unified Monitoring:** All metrics and health checks appear in Prometheus/Grafana"
echo "- **📝 Centralized Logging:** All logs are collected and indexed in Loki"
echo "- **🌐 Consistent Access:** All services get consistent subdomain access with SSL"
echo
echo "---"
echo
echo "*This service catalog is generated from actual service configurations across your homelab fleet.*"
'';
# Current deployment generator
deploymentDocsGenerator = writeShellScriptBin "homelab-docs-deployment" ''
#!/usr/bin/env bash
set -euo pipefail
cat << EOF
# Current Deployment State
> Current homelab deployment configuration
>
> Generated on: $(date)
> Working directory: $(pwd)
## Deployment Summary
EOF
# Get deployment summary
deployment_summary=$(colmena eval -E '{ nodes, pkgs, lib, ... }:
let
homelabNodes = lib.filterAttrs (name: node: node.config.homelab.enable or false) nodes;
allServices = lib.flatten (lib.mapAttrsToList (nodeName: node:
lib.attrNames (lib.filterAttrs (n: v: v.enable or false) (node.config.homelab.services or {}))
) homelabNodes);
in {
totalNodes = lib.length (lib.attrNames nodes);
homelabEnabledNodes = lib.length (lib.attrNames homelabNodes);
uniqueServices = lib.length (lib.unique allServices);
totalServiceInstances = lib.length allServices;
nodeNames = lib.attrNames nodes;
homelabNodeNames = lib.attrNames homelabNodes;
}')
total_nodes=$(echo "$deployment_summary" | ${jq}/bin/jq -r '.totalNodes')
homelab_nodes=$(echo "$deployment_summary" | ${jq}/bin/jq -r '.homelabEnabledNodes')
unique_services=$(echo "$deployment_summary" | ${jq}/bin/jq -r '.uniqueServices')
service_instances=$(echo "$deployment_summary" | ${jq}/bin/jq -r '.totalServiceInstances')
echo "| Metric | Count |"
echo "|--------|-------|"
echo "| Total Nodes | $total_nodes |"
echo "| Homelab-Enabled Nodes | $homelab_nodes |"
echo "| Unique Services | $unique_services |"
echo "| Service Instances | $service_instances |"
echo
echo "## Node Status"
echo
# Get detailed node status
node_status=$(colmena eval -E '{ nodes, pkgs, lib, ... }:
lib.mapAttrs (name: node: {
homelabEnabled = node.config.homelab.enable or false;
environment = node.config.homelab.environment or "unknown";
serviceCount = if (node.config.homelab.enable or false) then
lib.length (lib.attrNames (lib.filterAttrs (n: v: v.enable or false) (node.config.homelab.services or {})))
else 0;
monitoringEnabled = if (node.config.homelab.enable or false) then
node.config.homelab.monitoring.enable or false
else false;
backupsEnabled = if (node.config.homelab.enable or false) then
node.config.homelab.backups.enable or false
else false;
proxyEnabled = if (node.config.homelab.enable or false) then
node.config.homelab.reverseProxy.enable or false
else false;
}) nodes')
echo "| Node | Homelab | Environment | Services | Monitoring | Backups | Proxy |"
echo "|------|---------|-------------|----------|------------|---------|-------|"
echo "$node_status" | ${jq}/bin/jq -r 'to_entries[] |
[.key, (.value.homelabEnabled | if . then "✅" else "❌" end), .value.environment, (.value.serviceCount | tostring),
(.value.monitoringEnabled | if . then "✅" else "❌" end),
(.value.backupsEnabled | if . then "✅" else "❌" end),
(.value.proxyEnabled | if . then "✅" else "❌" end)] |
@tsv' | while IFS=$'\t' read -r node homelab env services monitoring backups proxy; do
echo "| \`$node\` | $homelab | $env | $services | $monitoring | $backups | $proxy |"
done
echo
echo "---"
echo
echo "*Deployment state extracted from live colmena configuration*"
'';
# README generator
readmeGenerator = writeShellScriptBin "homelab-docs-readme" ''
#!/usr/bin/env bash
set -euo pipefail
cat << EOF
# Homelab Documentation
> Auto-generated documentation for the homelab deployment
>
> Generated on: $(date)
> Source: $(pwd)
## 📚 Documentation Files
This documentation is automatically generated from your colmena flake configuration.
### 📊 Overview Documents
- **[Fleet Overview](fleet-overview.md)** - High-level fleet statistics and service distribution
- **[Current Deployment](current-deployment.md)** - Current deployment state and node status
### 📖 Detailed Configuration
- **[Node Configurations](nodes.md)** - Per-node detailed configuration and services
- **[Service Configurations](services.md)** - Service configurations across the fleet
## 🚀 Quick Actions
### View Current Status
\`\`\`bash
# Service status across fleet (if homelab CLI is available)
homelab services --global
# Backup status
homelab backups --global
# Overall status
homelab status
\`\`\`
### Update Documentation
\`\`\`bash
# Regenerate all documentation
homelab-generate-docs ./docs
# Generate in different directory
homelab-generate-docs /path/to/output
\`\`\`
## 📋 Quick Stats
EOF
# Add live stats
quick_stats=$(colmena eval -E '{ nodes, pkgs, lib, ... }:
let
homelabNodes = lib.filterAttrs (name: node: node.config.homelab.enable or false) nodes;
in {
totalNodes = lib.length (lib.attrNames nodes);
homelabNodes = lib.length (lib.attrNames homelabNodes);
}')
total_nodes=$(echo "$quick_stats" | ${jq}/bin/jq -r '.totalNodes')
homelab_nodes=$(echo "$quick_stats" | ${jq}/bin/jq -r '.homelabNodes')
echo "- **Total Nodes**: $total_nodes"
echo "- **Homelab-Enabled Nodes**: $homelab_nodes"
echo "- **Generated**: $(date)"
echo
echo "## 🛠 Management Tools"
echo
echo "### Documentation Commands"
echo "- \`homelab-generate-docs\` - Regenerate this documentation"
echo "- \`homelab-docs-fleet\` - Generate fleet overview only"
echo "- \`homelab-docs-nodes\` - Generate node configurations only"
echo "- \`homelab-docs-services\` - Generate service configurations only"
echo "- \`homelab-docs-deployment\` - Generate deployment state only"
echo
echo "### Colmena Commands"
echo "- \`colmena eval\` - Evaluate flake expressions"
echo "- \`colmena apply\` - Deploy configuration changes"
echo "- \`colmena build\` - Build configurations without deploying"
echo
echo "---"
echo
echo "*This documentation reflects the live state of your homelab deployment as evaluated by colmena.*"
'';
in
stdenv.mkDerivation {
pname = "homelab-docs";
version = "1.0.0";
dontUnpack = true;
dontBuild = true;
installPhase = ''
mkdir -p $out/bin
# Install all the generators
cp ${docsGenerator}/bin/homelab-generate-docs $out/bin/
cp ${fleetDocsGenerator}/bin/homelab-docs-fleet $out/bin/
cp ${nodeDocsGenerator}/bin/homelab-docs-nodes $out/bin/
cp ${serviceDocsGenerator}/bin/homelab-docs-services $out/bin/
cp ${deploymentDocsGenerator}/bin/homelab-docs-deployment $out/bin/
cp ${readmeGenerator}/bin/homelab-docs-readme $out/bin/
# Make sure they're executable
chmod +x $out/bin/*
'';
meta = with lib; {
description = "Documentation generator for homelab colmena deployments";
longDescription = ''
A collection of tools to generate comprehensive documentation
for homelab deployments managed with colmena. Extracts configuration
from flakes and generates markdown documentation.
'';
license = licenses.mit;
maintainers = [];
platforms = platforms.all;
};
}
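The generators above all lean on one jq idiom: serialize each row with `@tsv`, then read it back with `IFS=$'\t' read`. A self-contained illustration with fabricated input (the JSON below is invented sample data, not output from this fleet):

```shell
# Fabricated stand-in for `colmena eval` output.
json='{"a":{"enabled":true,"env":"prod"},"b":{"enabled":false,"env":"dev"}}'

# to_entries turns the object into key/value pairs; @tsv emits one
# tab-separated row per node, which the while loop formats as a table row.
echo "$json" \
  | jq -r 'to_entries[] | [.key, (.value.enabled | tostring), .value.env] | @tsv' \
  | while IFS=$'\t' read -r node enabled env; do
      printf '| %s | %s | %s |\n' "$node" "$enabled" "$env"
    done
```

`@tsv` also escapes embedded tabs and newlines in field values, which is why the scripts prefer it over naive string interpolation when round-tripping jq output through `read`.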