Compare commits
1 commit: 3082c898c7 ... ac59749e9f

28 changed files with 179 additions and 3570 deletions
@@ -1,99 +0,0 @@
# Homelab Documentation

> Auto-generated documentation for the homelab deployment
>
> Generated on: Wed, 30 Jul 2025 02:13:08 +0200
> Source: /home/plasmagoat/homelab

## 📚 Documentation Files

This documentation is automatically generated from your colmena flake configuration.

### 📊 Overview Documents

- **[Fleet Overview](fleet-overview.md)** - High-level fleet statistics and service distribution
- **[Current Deployment](current-deployment.md)** - Current deployment state and node status

### 📖 Detailed Configuration

- **[Node Configurations](nodes.md)** - Per-node detailed configuration and services
- **[Service Configurations](services.md)** - Service configurations across the fleet

## 🚀 Quick Actions

### View Current Status

```bash
# Service status across the fleet (if the homelab CLI is available)
homelab services --global

# Backup status
homelab backups --global

# Overall status
homelab status
```

### Update Documentation

```bash
# Regenerate all documentation
homelab-generate-docs ./docs

# Generate into a different directory
homelab-generate-docs /path/to/output
```

## 📋 Quick Stats

- **Total Nodes**: 2
- **Homelab-Enabled Nodes**: 2
- **Generated**: Wed, 30 Jul 2025 02:13:11 +0200

## 🛠️ Management Tools

### Documentation Commands

- `homelab-generate-docs` - Regenerate this documentation
- `homelab-docs-fleet` - Generate the fleet overview only
- `homelab-docs-nodes` - Generate node configurations only
- `homelab-docs-services` - Generate service configurations only
- `homelab-docs-deployment` - Generate the deployment state only

### Colmena Commands

- `colmena eval` - Evaluate flake expressions
- `colmena apply` - Deploy configuration changes
- `colmena build` - Build configurations without deploying

## 🎯 Integration with CI/CD

### GitHub Actions Example

```yaml
name: Generate Documentation
on:
  push:
    branches: [ main ]

jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: cachix/install-nix-action@v24
      - name: Generate docs
        run: nix develop --command homelab-generate-docs ./docs
      - name: Commit docs
        run: |
          git add docs/
          git commit -m "docs: update homelab documentation" || exit 0
          git push
```

### Manual Generation

```bash
# From your homelab directory
nix develop
homelab-generate-docs ./docs
git add docs/ && git commit -m "Update docs"
```

---

*This documentation reflects the live state of your homelab deployment as evaluated by colmena.*
@@ -1,26 +0,0 @@
# Current Deployment State

> Current homelab deployment configuration
>
> Generated on: Wed, 30 Jul 2025 02:13:01 +0200
> Working directory: /home/plasmagoat/homelab

## Deployment Summary

| Metric | Count |
|--------|-------|
| Total Nodes | 2 |
| Homelab-Enabled Nodes | 2 |
| Unique Services | 4 |
| Service Instances | 4 |

## Node Status

| Node | Homelab | Environment | Services | Monitoring | Backups | Proxy |
|------|---------|-------------|----------|------------|---------|-------|
| `photos` | ✅ | production | 1 | ✅ | ❌ | ❌ |
| `sandbox` | ✅ | production | 3 | ✅ | ✅ | ❌ |

---

*Deployment state extracted from the live colmena configuration*
@@ -1,33 +0,0 @@
# Homelab Fleet Overview

> Auto-generated fleet overview
>
> Generated on: Wed, 30 Jul 2025 02:12:41 +0200
> Source: /home/plasmagoat/homelab

## Fleet Statistics

### Basic Information

| Metric | Value |
|--------|-------|
| Total Nodes | 2 |
| Node Names | photos, sandbox |

### Homelab Configuration

| Node | Homelab Enabled | Hostname | Environment |
|------|-----------------|----------|-------------|
| `photos` | ✅ | photos | production |
| `sandbox` | ✅ | sandbox | production |

### Service Distribution

| Node | Service Count | Services |
|------|---------------|----------|
| `photos` | 1 | minio |
| `sandbox` | 3 | gatus, grafana, prometheus |

---

*Fleet overview generated from colmena evaluation*
@@ -1,70 +0,0 @@
# Node Configurations

> Detailed per-node configuration
>
> Generated on: Wed, 30 Jul 2025 02:12:50 +0200

## Node: photos

### System Information

| Property | Value |
|----------|-------|
| NixOS Version | `25.11pre-git` |
| Hostname | `photos` |
| System | `x86_64-linux` |

### Homelab Configuration

| Property | Value |
|----------|-------|
| Homelab Hostname | `photos` |
| Domain | `lab` |
| External Domain | `procopius.dk` |
| Environment | `production` |
| Location | `proxmox-cluster` |
| Tags | photos |

### Services

| Service | Enabled | Port | Description | Tags |
|---------|---------|------|-------------|------|
| `gatus` | ❌ | 8080 | Gatus Status Page | |
| `grafana` | ❌ | 3000 | Grafana Metrics Dashboard | |
| `minio` | ✅ | 9000 | minio | |
| `prometheus` | ❌ | 9090 | Prometheus Monitoring Server | |

---

## Node: sandbox

### System Information

| Property | Value |
|----------|-------|
| NixOS Version | `25.11pre-git` |
| Hostname | `sandbox` |
| System | `x86_64-linux` |

### Homelab Configuration

| Property | Value |
|----------|-------|
| Homelab Hostname | `sandbox` |
| Domain | `lab` |
| External Domain | `procopius.dk` |
| Environment | `production` |
| Location | `proxmox-cluster` |
| Tags | sandbox |

### Services

| Service | Enabled | Port | Description | Tags |
|---------|---------|------|-------------|------|
| `gatus` | ✅ | 8080 | Gatus Status Page | |
| `grafana` | ✅ | 3000 | Grafana Metrics Dashboard | |
| `minio` | ❌ | 9000 | minio | |
| `prometheus` | ✅ | 9090 | Prometheus Monitoring Server | |

---
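The per-node tables can be read back as Nix. Below is a hedged sketch of what the `photos` node's homelab block might look like, reconstructed only from the tables above (the surrounding module/file layout is an assumption, not taken from the repo):

```nix
# Sketch only -- reconstructed from the documented node tables,
# not copied from the actual configuration.
{
  homelab = {
    enable = true;
    hostname = "photos";
    domain = "lab";
    externalDomain = "procopius.dk";
    environment = "production";
    location = "proxmox-cluster";
    tags = [ "photos" ];

    # Only minio is enabled on this node; gatus, grafana, and
    # prometheus keep their default enable = false.
    services.minio = {
      enable = true;
      port = 9000;
    };
  };
}
```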
399 docs/services.md
@@ -1,399 +0,0 @@
# Service Catalog

> Complete service documentation with core options, feature integrations, and smart defaults
>
> Generated on: Wed, 30 Jul 2025 02:12:53 +0200

This document provides comprehensive documentation for homelab services, organized by:
- **Core Service Options**: The main service configuration
- **Feature Integrations**: Available monitoring, logging, and proxy features
- **Service Defaults**: What the service configures by default for each feature

## Overview

**Total Available Services:** 4

## Service Integration Matrix

| Service | Core Options | Monitoring | Logging | Proxy | Deployments |
|---------|--------------|------------|---------|-------|-------------|
| `gatus` | 11 | 📊 | 📝 | 🔀 | 1 |
| `grafana` | 3 | 📊 | 📝 | 🔀 | 1 |
| `minio` | 4 | ❌ | ❌ | ❌ | 1 |
| `prometheus` | 12 | 📊 | 📝 | 🔀 | 1 |

**Legend:** 📊 📝 🔀 = feature available, ❌ = feature not available

## Service Documentation

### gatus

**Deployment Status:** 1/2 nodes have this service enabled

#### Core Service Options

The main configuration options for gatus:

```nix
homelab.services.gatus = {
  alerting = {}; # Gatus alerting configuration
  description = "Gatus Status Page"; # No description
  enable = false; # Whether to enable Gatus Status Page.
  extraConfig = {}; # Additional Gatus configuration options
  port = 8080; # No description
  storage = {
    type = "memory";
  }; # Gatus storage configuration
  ui.buttons = [
    {
      link = "https://grafana.procopius.dk";
      name = "Grafana";
    }
    {
      link = "https://prometheus.procopius.dk";
      name = "Prometheus";
    }
  ]; # Navigation buttons in the Gatus interface
  ui.header = "Homelab Services Status"; # Header text for the Gatus interface
  ui.link = "https://status.procopius.dk"; # Link in the Gatus header
  ui.title = "Homelab Status"; # Title for the Gatus web interface
  web.address = "0.0.0.0"; # Web interface bind address
};
```

#### Feature Integrations

##### 📊 Monitoring Integration

Available monitoring options:

```nix
homelab.services.gatus = {
  # ... core options above ...

  monitoring.enable = true; # Enable monitoring for gatus
  monitoring.extraLabels = {}; # No description
  monitoring.healthCheck.conditions = [
    "[STATUS] == 200"
  ]; # Health check conditions. Setting conditions enables health checks.
  monitoring.healthCheck.enable = true; # No description
  monitoring.healthCheck.extraChecks = []; # Additional health checks. Adding checks enables health monitoring.
  # monitoring.healthCheck.path = <null or string>; # Health check endpoint path. Setting this enables health checks.
  monitoring.metrics.enable = false; # No description
  monitoring.metrics.extraEndpoints = []; # Additional metrics endpoints. Adding endpoints enables metrics collection.
  # monitoring.metrics.path = <null or string>; # Metrics endpoint path. Setting this enables metrics collection.
};
```

**gatus sets these monitoring defaults:**

```nix
enable = true;
extraLabels = {};
healthCheck = { conditions = [ "[STATUS] == 200" ]; enable = true; extraChecks = []; path = null; };
metrics = { enable = false; extraEndpoints = []; path = null; };
```

##### 📝 Logging Integration

Available logging options:

```nix
homelab.services.gatus = {
  # ... core options above ...

  logging.enable = false; # Enable logging for gatus
  logging.extraLabels = {}; # No description
  logging.extraSources = []; # No description
  logging.files = []; # No description
  # logging.multiline = <null or (submodule)>; # No description
  logging.parsing.extractFields = []; # No description
  # logging.parsing.regex = <null or string>; # No description
};
```

**gatus sets these logging defaults:**

```nix
enable = false;
extraLabels = {};
extraSources = [];
files = [];
multiline = null;
parsing = { extractFields = []; regex = null; };
```

##### 🔀 Proxy Integration

Available proxy options:

```nix
homelab.services.gatus = {
  # ... core options above ...

  proxy.additionalSubdomains = []; # No description
  proxy.enable = true; # Enable reverse proxy for gatus
  proxy.enableAuth = false; # No description
  proxy.subdomain = "gatus"; # No description
};
```

**gatus sets these proxy defaults:**

```nix
additionalSubdomains = [];
enable = true;
enableAuth = false;
subdomain = "gatus";
```

---

### grafana

**Deployment Status:** 1/2 nodes have this service enabled

#### Core Service Options

The main configuration options for grafana:

```nix
homelab.services.grafana = {
  description = "Grafana Metrics Dashboard"; # No description
  enable = false; # Whether to enable Grafana Dashboard.
  port = 3000; # No description
};
```

#### Feature Integrations

##### 📊 Monitoring Integration

Available monitoring options:

```nix
homelab.services.grafana = {
  # ... core options above ...

  monitoring.enable = true; # Enable monitoring for grafana
  monitoring.extraLabels = {}; # No description
  monitoring.healthCheck.conditions = [
    "[STATUS] == 200"
  ]; # Health check conditions. Setting conditions enables health checks.
  monitoring.healthCheck.enable = true; # No description
  monitoring.healthCheck.extraChecks = []; # Additional health checks. Adding checks enables health monitoring.
  # monitoring.healthCheck.path = <null or string>; # Health check endpoint path. Setting this enables health checks.
  monitoring.metrics.enable = false; # No description
  monitoring.metrics.extraEndpoints = []; # Additional metrics endpoints. Adding endpoints enables metrics collection.
  # monitoring.metrics.path = <null or string>; # Metrics endpoint path. Setting this enables metrics collection.
};
```

**grafana sets these monitoring defaults:**

```nix
enable = true;
extraLabels = {};
healthCheck = { conditions = [ "[STATUS] == 200" ]; enable = true; extraChecks = []; path = null; };
metrics = { enable = false; extraEndpoints = []; path = null; };
```

##### 📝 Logging Integration

Available logging options:

```nix
homelab.services.grafana = {
  # ... core options above ...

  logging.enable = false; # Enable logging for grafana
  logging.extraLabels = {}; # No description
  logging.extraSources = []; # No description
  logging.files = []; # No description
  # logging.multiline = <null or (submodule)>; # No description
  logging.parsing.extractFields = []; # No description
  # logging.parsing.regex = <null or string>; # No description
};
```

**grafana sets these logging defaults:**

```nix
enable = false;
extraLabels = {};
extraSources = [];
files = [];
multiline = null;
parsing = { extractFields = []; regex = null; };
```

##### 🔀 Proxy Integration

Available proxy options:

```nix
homelab.services.grafana = {
  # ... core options above ...

  proxy.additionalSubdomains = []; # No description
  proxy.enable = true; # Enable reverse proxy for grafana
  proxy.enableAuth = false; # No description
  proxy.subdomain = "grafana"; # No description
};
```

**grafana sets these proxy defaults:**

```nix
additionalSubdomains = [];
enable = true;
enableAuth = false;
subdomain = "grafana";
```

---

### minio

**Deployment Status:** 1/2 nodes have this service enabled

#### Core Service Options

The main configuration options for minio:

```nix
homelab.services.minio = {
  enable = false;      # Whether to enable Minio Object Storage.
  openFirewall = true; # Whether to open the ports specified in `port` and `webPort` in the firewall.
  port = 9000;         # Port of the server.
  webPort = 9001;      # Port of the web UI (console).
};
```

---

### prometheus

**Deployment Status:** 1/2 nodes have this service enabled

#### Core Service Options

The main configuration options for prometheus:

```nix
homelab.services.prometheus = {
  alertmanager.enable = true; # Enable integration with Alertmanager
  alertmanager.url = "alertmanager.lab:9093"; # Alertmanager URL
  description = "Prometheus Monitoring Server"; # No description
  enable = false; # Whether to enable Prometheus Monitoring Server.
  extraAlertingRules = []; # Additional alerting rules
  extraFlags = []; # Extra command line flags
  extraScrapeConfigs = []; # Additional scrape configurations
  globalConfig = {
    evaluation_interval = "15s";
    scrape_interval = "15s";
  }; # Global Prometheus configuration
  port = 9090; # No description
  retention = "15d"; # How long to retain metrics data
  ruleFiles = []; # Additional rule files to load
  systemdServices = [
    "prometheus.service"
    "prometheus"
  ]; # Systemd services to monitor
};
```

#### Feature Integrations

##### 📊 Monitoring Integration

Available monitoring options:

```nix
homelab.services.prometheus = {
  # ... core options above ...

  monitoring.enable = true; # Enable monitoring for prometheus
  monitoring.extraLabels = {}; # No description
  monitoring.healthCheck.conditions = [
    "[STATUS] == 200"
  ]; # Health check conditions. Setting conditions enables health checks.
  monitoring.healthCheck.enable = true; # No description
  monitoring.healthCheck.extraChecks = []; # Additional health checks. Adding checks enables health monitoring.
  # monitoring.healthCheck.path = <null or string>; # Health check endpoint path. Setting this enables health checks.
  monitoring.metrics.enable = false; # No description
  monitoring.metrics.extraEndpoints = []; # Additional metrics endpoints. Adding endpoints enables metrics collection.
  # monitoring.metrics.path = <null or string>; # Metrics endpoint path. Setting this enables metrics collection.
};
```

**prometheus sets these monitoring defaults:**

```nix
enable = true;
extraLabels = {};
healthCheck = { conditions = [ "[STATUS] == 200" ]; enable = true; extraChecks = []; path = null; };
metrics = { enable = false; extraEndpoints = []; path = null; };
```

##### 📝 Logging Integration

Available logging options:

```nix
homelab.services.prometheus = {
  # ... core options above ...

  logging.enable = false; # Enable logging for prometheus
  logging.extraLabels = {}; # No description
  logging.extraSources = []; # No description
  logging.files = []; # No description
  # logging.multiline = <null or (submodule)>; # No description
  logging.parsing.extractFields = []; # No description
  # logging.parsing.regex = <null or string>; # No description
};
```

**prometheus sets these logging defaults:**

```nix
enable = false;
extraLabels = {};
extraSources = [];
files = [];
multiline = null;
parsing = { extractFields = []; regex = null; };
```

##### 🔀 Proxy Integration

Available proxy options:

```nix
homelab.services.prometheus = {
  # ... core options above ...

  proxy.additionalSubdomains = []; # No description
  proxy.enable = true; # Enable reverse proxy for prometheus
  proxy.enableAuth = false; # No description
  proxy.subdomain = "prometheus"; # No description
};
```

**prometheus sets these proxy defaults:**

```nix
additionalSubdomains = [];
enable = true;
enableAuth = false;
subdomain = "prometheus";
```

---

## Feature Reference

### Integration Features

Homelab services can integrate with three main features:

- **📊 Monitoring**: Prometheus metrics and health checks
- **📝 Logging**: Centralized log collection with Promtail/Loki
- **🔀 Proxy**: Reverse proxy with SSL and authentication

Each service can import these features and set service-specific defaults.

---

*This documentation is generated from actual NixOS module evaluations.*
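The service catalog's option paths map directly onto node configuration. As a hedged illustration (option names are taken from the catalog; where such a snippet lives in the repo, and the chosen override values, are assumptions):

```nix
# Sketch only -- illustrates the documented option paths for gatus,
# not a snippet from the actual repository.
{
  homelab.services.gatus = {
    enable = true; # core option: turn the service on
    port = 8080;   # documented default, repeated for clarity

    # Feature integrations fall back to the per-service defaults
    # unless overridden:
    monitoring.healthCheck.conditions = [
      "[STATUS] == 200"
      "[RESPONSE_TIME] < 500" # hypothetical tightening of the default
    ];
    proxy.subdomain = "status"; # override the default subdomain "gatus"
  };
}
```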
6 flake.lock generated
```diff
@@ -204,11 +204,11 @@
     },
     "nixpkgs_2": {
       "locked": {
-        "lastModified": 1753851946,
-        "narHash": "sha256-4mPb6o8uWlVdAchtkcF81gjJbQFuCZTz2s/n8neYLx8=",
+        "lastModified": 1753766937,
+        "narHash": "sha256-L4kx7TQsmD7GnNaGGXlLhqZVPwilP7QIwDPw88Wzzrs=",
         "owner": "nixos",
         "repo": "nixpkgs",
-        "rev": "bf8fe59e5830c2afc6961f11ac941660d6f3fa80",
+        "rev": "23296a5649db3ae810cdc41ecd5055d131a82b11",
         "type": "github"
       },
       "original": {
```
```diff
@@ -25,7 +25,6 @@
     self,
     nixpkgs,
     # systems,
     colmena,
     sops-nix,
     # home-manager,
     simple-nixos-mailserver,
```
```diff
@@ -55,18 +54,18 @@
       # Reusable nixos modules
       nixosModules = import ./modules/nixos;

-      colmenaHive = colmena.lib.makeHive self.outputs.colmena;
+      colmenaHive = inputs.colmena.lib.makeHive self.outputs.colmena;
       colmena = import ./colmena.nix {inherit inputs outputs;};

       # Development shells
       devShells = forAllSystems (
         system: let
           inherit (inputs.colmena.packages."${pkgs.system}") colmena;
           pkgs = nixpkgs.legacyPackages.${system};
         in {
           default = pkgs.mkShell {
             packages = with pkgs; [
               self.packages.${system}.homelab-docs
               colmena.packages.${system}.colmena
               # colmena
               sops
               age
               nix-output-monitor
@@ -78,7 +77,7 @@
               echo "🏠 Homelab Development Environment"
               echo "Available commands:"
               echo "  colmena apply             - Deploy all hosts"
               echo "  colmena apply --on @tag   - Deploy specific tagged hosts"
               echo "  colmena apply --on HOST   - Deploy specific host"
               echo "  sops secrets/secrets.yaml - Edit secrets"
               echo ""
             '';
```
@ -1,577 +0,0 @@
|
|||
# Homelab Configuration Documentation
|
||||
|
||||
## Overview
|
||||
|
||||
This homelab configuration system provides a unified way to manage services across multiple nodes with automatic aggregation of monitoring, logging, backup, and reverse proxy configurations. The system is built on NixOS and follows a modular architecture with both local and global configuration scopes.
|
||||
|
||||
## Core Homelab Options
|
||||
|
||||
### Basic Configuration (`homelab.*`)
|
||||
|
||||
```nix
|
||||
homelab = {
|
||||
enable = true; # Enable homelab fleet configuration
|
||||
hostname = "node-01"; # Hostname for this system
|
||||
domain = "lab"; # Base domain for the homelab (default: "lab")
|
||||
externalDomain = "procopius.dk"; # External domain to the homelab
|
||||
environment = "production"; # Environment type: "production" | "staging" | "development"
|
||||
location = "homelab"; # Physical location identifier
|
||||
tags = ["web" "database"]; # Tags for this system
|
||||
};
|
||||
```
|
||||
|
||||
### Monitoring System (`homelab.monitoring.*`)
|
||||
|
||||
```nix
|
||||
homelab.monitoring = {
|
||||
enable = true; # Enable monitoring system
|
||||
|
||||
# Node exporter (automatically enabled)
|
||||
nodeExporter = {
|
||||
enable = true; # Enable node exporter (default: true)
|
||||
port = 9100; # Node exporter port (default: 9100)
|
||||
};
|
||||
|
||||
# Manual metrics endpoints
|
||||
metrics = [
|
||||
{
|
||||
name = "custom-app"; # Metric endpoint name
|
||||
host = "localhost"; # Host (default: homelab.hostname)
|
||||
port = 8080; # Port for metrics endpoint
|
||||
path = "/metrics"; # Metrics path (default: "/metrics")
|
||||
jobName = "custom"; # Prometheus job name
|
||||
scrapeInterval = "30s"; # Scrape interval (default: "30s")
|
||||
labels = { # Additional labels
|
||||
component = "web";
|
||||
};
|
||||
}
|
||||
];
|
||||
|
||||
# Manual health checks
|
||||
healthChecks = [
|
||||
{
|
||||
name = "web-service"; # Health check name
|
||||
host = "localhost"; # Host (default: homelab.hostname)
|
||||
port = 80; # Port (nullable)
|
||||
path = "/health"; # Health check path (default: "/")
|
||||
protocol = "http"; # Protocol: "http" | "https" | "tcp" | "icmp"
|
||||
method = "GET"; # HTTP method (default: "GET")
|
||||
interval = "30s"; # Check interval (default: "30s")
|
||||
timeout = "10s"; # Timeout (default: "10s")
|
||||
conditions = [ # Check conditions
|
||||
"[STATUS] == 200"
|
||||
];
|
||||
group = "web"; # Group name (default: "manual")
|
||||
labels = {}; # Additional labels
|
||||
enabled = true; # Enable check (default: true)
|
||||
}
|
||||
];
|
||||
|
||||
# Read-only aggregated data (automatically populated)
|
||||
allMetrics = [...]; # All metrics from this node
|
||||
allHealthChecks = [...]; # All health checks from this node
|
||||
global = { # Global aggregation from all nodes
|
||||
allMetrics = [...]; # All metrics from entire fleet
|
||||
allHealthChecks = [...]; # All health checks from entire fleet
|
||||
metricsByJobName = {...}; # Grouped by job name
|
||||
healthChecksByGroup = {...}; # Grouped by group
|
||||
summary = {
|
||||
totalMetrics = 42;
|
||||
totalHealthChecks = 15;
|
||||
nodesCovered = ["node-01" "node-02"];
|
||||
};
|
||||
};
|
||||
};
|
||||
```
|
||||
|
||||
### Logging System (`homelab.logging.*`)
|
||||
|
||||
```nix
|
||||
homelab.logging = {
|
||||
enable = true; # Enable logging system
|
||||
|
||||
# Promtail configuration
|
||||
promtail = {
|
||||
enable = true; # Enable Promtail (default: true)
|
||||
port = 9080; # Promtail port (default: 9080)
|
||||
clients = [ # Loki clients
|
||||
{
|
||||
url = "http://monitor.lab:3100/loki/api/v1/push";
|
||||
tenant_id = null; # Optional tenant ID
|
||||
}
|
||||
];
|
||||
};
|
||||
|
||||
# Log sources
|
||||
sources = [
|
||||
{
|
||||
name = "app-logs"; # Source name
|
||||
type = "file"; # Type: "journal" | "file" | "syslog" | "docker"
|
||||
files = {
|
||||
paths = ["/var/log/app.log"]; # File paths
|
||||
multiline = { # Optional multiline config
|
||||
firstLineRegex = "^\\d{4}-\\d{2}-\\d{2}";
|
||||
maxWaitTime = "3s";
|
||||
};
|
||||
};
|
||||
journal = { # Journal config (for type="journal")
|
||||
path = "/var/log/journal";
|
||||
};
|
||||
labels = { # Additional labels
|
||||
application = "myapp";
|
||||
};
|
||||
pipelineStages = []; # Promtail pipeline stages
|
||||
enabled = true; # Enable source (default: true)
|
||||
}
|
||||
];
|
||||
|
||||
defaultLabels = { # Default labels for all sources
|
||||
hostname = "node-01";
|
||||
environment = "production";
|
||||
location = "homelab";
|
||||
};
|
||||
|
||||
# Read-only aggregated data
|
||||
allSources = [...]; # All sources from this node
|
||||
global = { # Global aggregation
|
||||
allSources = [...]; # All sources from entire fleet
|
||||
sourcesByType = {...}; # Grouped by type
|
||||
summary = {
|
||||
total = 25;
|
||||
byType = {...};
|
||||
byNode = {...};
|
||||
};
|
||||
};
|
||||
};
|
||||
```
|
||||
|
||||
### Backup System (`homelab.backups.*`)
|
||||
|
||||
```nix
|
||||
homelab.backups = {
|
||||
enable = true; # Enable backup system
|
||||
|
||||
# Backup jobs
|
||||
jobs = [
|
||||
{
|
||||
name = "database-backup"; # Job name
|
||||
backend = "restic-s3"; # Backend name (must exist in backends)
|
||||
backendOptions = { # Backend-specific overrides
|
||||
repository = "custom-repo";
|
||||
};
|
||||
labels = { # Additional labels
|
||||
type = "database";
|
||||
};
|
||||
}
|
||||
];
|
||||
|
||||
# Backend configurations (defined by imported modules)
|
||||
backends = {
|
||||
restic-s3 = {...}; # Defined in restic.nix
|
||||
};
|
||||
|
||||
defaultLabels = { # Default labels for all jobs
|
||||
hostname = "node-01";
|
||||
environment = "production";
|
||||
location = "homelab";
|
||||
};
|
||||
|
||||
monitoring = true; # Enable backup monitoring (default: true)
|
||||
|
||||
# Read-only aggregated data
|
||||
allJobs = [...]; # All jobs from this node
|
||||
allBackends = [...]; # All backend names from this node
|
||||
global = { # Global aggregation
|
||||
allJobs = [...]; # All jobs from entire fleet
|
||||
allBackends = [...]; # All backends from entire fleet
|
||||
jobsByBackend = {...}; # Grouped by backend
|
||||
summary = {
|
||||
total = 15;
|
||||
byBackend = {...};
|
||||
byNode = {...};
|
||||
uniqueBackends = ["restic-s3" "borgbackup"];
|
||||
};
|
||||
};
|
||||
};
|
||||
```
|
||||
|
||||
### Reverse Proxy System (`homelab.reverseProxy.*`)
|
||||
|
||||
```nix
|
||||
homelab.reverseProxy = {
|
||||
enable = true; # Enable reverse proxy system
|
||||
|
||||
# Proxy entries
|
||||
entries = [
|
||||
{
|
||||
subdomain = "app"; # Subdomain
|
||||
host = "localhost"; # Backend host (default: homelab.hostname)
|
||||
port = 8080; # Backend port
|
||||
path = "/"; # Backend path (default: "/")
|
||||
enableAuth = false; # Enable authentication (default: false)
|
||||
enableSSL = true; # Enable SSL (default: true)
|
||||
}
|
||||
];
|
||||
|
||||
# Read-only aggregated data
|
||||
allEntries = [...]; # All entries from this node
|
||||
global = { # Global aggregation
|
||||
allEntries = [...]; # All entries from entire fleet
|
||||
entriesBySubdomain = {...}; # Grouped by subdomain
|
||||
entriesWithAuth = [...]; # Entries with authentication
|
||||
entriesWithoutAuth = [...]; # Entries without authentication
|
||||
summary = {
|
||||
total = 12;
|
||||
byNode = {...};
|
||||
withAuth = 5;
|
||||
withoutAuth = 7;
|
||||
};
|
||||
};
|
||||
};
|
||||
```
|
||||
|
||||
## Service Configuration Pattern

All services follow a consistent pattern with automatic monitoring, logging, and proxy integration.

### Generic Service Structure (`homelab.services.${serviceName}.*`)

```nix
homelab.services.myservice = {
  enable = true;                   # Enable the service
  port = 8080;                     # Main service port
  description = "My Service";      # Service description

  # Monitoring integration (automatic when enabled)
  monitoring = {
    enable = true;                 # Enable monitoring (default: true when service enabled)

    metrics = {
      enable = true;               # Enable metrics endpoint (default: true)
      path = "/metrics";           # Metrics path (default: "/metrics")
      extraEndpoints = [           # Additional metric endpoints
        {
          name = "admin-metrics";
          port = 8081;
          path = "/admin/metrics";
          jobName = "myservice-admin";
        }
      ];
    };

    healthCheck = {
      enable = true;               # Enable health check (default: true)
      path = "/health";            # Health check path (default: "/health")
      conditions = [               # Check conditions
        "[STATUS] == 200"
      ];
      extraChecks = [              # Additional health checks
        {
          name = "myservice-api";
          port = 8080;
          path = "/api/health";
          conditions = ["[STATUS] == 200" "[RESPONSE_TIME] < 500"];
        }
      ];
    };

    extraLabels = {                # Additional labels for all monitoring
      tier = "application";
    };
  };

  # Logging integration (automatic when enabled)
  logging = {
    enable = true;                 # Enable logging
    files = [                      # Log files to collect
      "/var/log/myservice/app.log"
      "/var/log/myservice/error.log"
    ];

    parsing = {
      regex = "^(?P<timestamp>\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}) (?P<level>\\w+) (?P<message>.*)";
      extractFields = ["level"];   # Fields to extract as labels
    };

    multiline = {                  # Multiline log handling
      firstLineRegex = "^\\d{4}-\\d{2}-\\d{2}";
      maxWaitTime = "3s";
    };

    extraLabels = {                # Additional labels
      application = "myservice";
    };

    extraSources = [               # Additional log sources
      {
        name = "myservice-access";
        type = "file";
        files.paths = ["/var/log/myservice/access.log"];
      }
    ];
  };

  # Reverse proxy integration (automatic when enabled)
  proxy = {
    enable = true;                 # Enable reverse proxy
    subdomain = "myservice";       # Subdomain (default: service name)
    enableAuth = false;            # Enable authentication (default: false)

    additionalSubdomains = [       # Additional proxy entries
      {
        subdomain = "myservice-api";
        port = 8081;
        path = "/api";
        enableAuth = true;
      }
    ];
  };

  # Service-specific options
  customOption = "value";          # Service-specific configuration
};
```

## Example Service Implementations

### Prometheus Service

```nix
homelab.services.prometheus = {
  enable = true;
  port = 9090;

  # Prometheus-specific options
  retention = "15d";               # Data retention period
  alertmanager = {
    enable = true;
    url = "alertmanager.lab:9093";
  };
  extraScrapeConfigs = [];         # Additional scrape configs
  extraAlertingRules = [];         # Additional alerting rules
  globalConfig = {                 # Prometheus global config
    scrape_interval = "15s";
    evaluation_interval = "15s";
  };
  extraFlags = [];                 # Additional command line flags
  ruleFiles = [];                  # Additional rule files

  # Automatic integrations
  monitoring.enable = true;        # Self-monitoring
  logging.enable = true;           # Log collection
  proxy = {
    enable = true;
    subdomain = "prometheus";
    enableAuth = true;             # Admin interface needs protection
  };
};
```

### Gatus Service

```nix
homelab.services.gatus = {
  enable = true;
  port = 8080;

  # Gatus-specific options
  ui = {
    title = "Homelab Status";
    header = "Homelab Services Status";
    link = "https://status.procopius.dk";
    buttons = [
      { name = "Grafana"; link = "https://grafana.procopius.dk"; }
      { name = "Prometheus"; link = "https://prometheus.procopius.dk"; }
    ];
  };

  alerting = {                     # Discord/Slack/etc notifications
    discord = {
      webhook-url = "https://discord.com/api/webhooks/...";
      default-alert = {
        enabled = true;
        failure-threshold = 3;
        success-threshold = 2;
      };
    };
  };

  storage = {                      # Storage backend
    type = "memory";               # or "postgres", "sqlite"
  };

  web.address = "0.0.0.0";
  extraConfig = {};                # Additional Gatus configuration

  # Automatic integrations
  monitoring.enable = true;
  logging.enable = true;
  proxy = {
    enable = true;
    subdomain = "status";
    enableAuth = false;            # Status page should be public
  };
};
```

## Global Aggregation System

The homelab system automatically aggregates configuration from all nodes in your fleet, enabling centralized monitoring and management.

### How Global Aggregation Works

1. **Local Configuration**: Each node defines its own services and configurations
2. **Automatic Collection**: The system automatically collects data from all nodes using the `base.nix` aggregator
3. **Enhancement**: Each collected item is enhanced with node context (`_nodeName`, `_nodeConfig`, etc.)
4. **Global Exposure**: Aggregated data is exposed in `*.global.*` options

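The collect-and-enhance steps can be illustrated outside of Nix. This sketch fabricates two per-node JSON fragments and uses `jq` to merge them while tagging each entry with its source node, mirroring the `_nodeName` enhancement described above; the file names and data shapes are illustrative, not the aggregator's actual internals.

```shell
#!/usr/bin/env bash
# Hypothetical per-node data: each node exports its local metrics list.
cat > monitor.json <<'EOF'
{"node": "monitor", "metrics": [{"name": "prometheus-main", "port": 9090}]}
EOF
cat > web-01.json <<'EOF'
{"node": "web-01", "metrics": [{"name": "nginx", "port": 80}]}
EOF

# Slurp all node documents, flatten their metrics, and stamp each entry
# with the node it came from (the "_nodeName" enhancement).
aggregated=$(jq -s 'map(. as $n | $n.metrics[] + {_nodeName: $n.node})' \
  monitor.json web-01.json)
echo "$aggregated"
```

The real aggregation happens at Nix evaluation time across the colmena fleet; this only demonstrates the shape of the transformation.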
### Global Data Structure

```nix
# Available on every node with global data from entire fleet
homelab.monitoring.global = {
  allMetrics = [                   # All metrics from all nodes
    {
      name = "prometheus-main";
      host = "monitor";
      port = 9090;
      # ... other fields
      _nodeName = "monitor";       # Source node name
      _nodeConfig = {...};         # Source node config
      _fullAddress = "monitor:9090";
      _metricsUrl = "http://monitor:9090/metrics";
    }
    # ... more metrics from other nodes
  ];

  allHealthChecks = [...];         # All health checks from all nodes
  metricsByJobName = {             # Grouped by Prometheus job name
    "prometheus" = [...];
    "node" = [...];
  };
  healthChecksByGroup = {          # Grouped by health check group
    "services" = [...];
    "infrastructure" = [...];
  };
  summary = {
    totalMetrics = 42;
    totalHealthChecks = 15;
    nodesCovered = ["monitor" "web-01" "db-01"];
  };
};

homelab.logging.global = {
  allSources = [...];              # All log sources from all nodes
  sourcesByType = {
    "file" = [...];
    "journal" = [...];
  };
  summary = {...};
};

homelab.backups.global = {
  allJobs = [...];                 # All backup jobs from all nodes
  allBackends = [...];             # All backup backends from all nodes
  jobsByBackend = {...};
  summary = {...};
};

homelab.reverseProxy.global = {
  allEntries = [...];              # All proxy entries from all nodes
  entriesBySubdomain = {...};
  entriesWithAuth = [...];
  entriesWithoutAuth = [...];
  summary = {...};
};
```

### Using Global Data

Services like Prometheus and Gatus automatically use global data:

```nix
# Prometheus automatically scrapes ALL metrics from the entire fleet
services.prometheus.scrapeConfigs =
  # Automatically generated from homelab.monitoring.global.allMetrics

# Gatus automatically monitors ALL health checks from the entire fleet
services.gatus.settings.endpoints =
  # Automatically generated from homelab.monitoring.global.allHealthChecks
```

## Integration Examples

### Adding a New Service

1. **Create the service configuration**:

```nix
homelab.services.myapp = {
  enable = true;
  port = 3000;
  monitoring.enable = true;
  logging.enable = true;
  proxy = {
    enable = true;
    subdomain = "myapp";
  };
};
```

2. **The system automatically**:
   - Adds metrics endpoint to Prometheus (fleet-wide)
   - Adds health check to Gatus (fleet-wide)
   - Configures log collection to Loki
   - Sets up reverse proxy entry
   - Exposes the service globally for other nodes

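These integrations surface in a generated JSON config that the homelab CLI reads at runtime. Assuming the `.services.local.all` shape queried by the CLI (shown later in this diff), you can inspect the result by hand with `jq`; the sample file below is fabricated for illustration only.

```shell
#!/usr/bin/env bash
# Fabricated sample of the generated config the CLI reads via
# $HOMELAB_CONFIG; only the fields queried below are sketched.
cat > homelab-config.json <<'EOF'
{
  "services": {
    "local": {
      "all": [
        {"name": "myapp", "port": 3000, "description": "Example app"},
        {"name": "prometheus", "port": 9090, "description": "Metrics"}
      ]
    }
  }
}
EOF

# Same query shape the CLI uses for its table view: name, port, description.
table=$(jq -r '.services.local.all[] | [.name, (.port|tostring), .description] | @tsv' \
  homelab-config.json)
echo "$table"
```
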
### Multi-Node Setup

```nix
# Node 1 (monitor.nix)
homelab = {
  hostname = "monitor";
  services.prometheus.enable = true;
  services.gatus.enable = true;
};

# Node 2 (web.nix)
homelab = {
  hostname = "web-01";
  services.nginx.enable = true;
  services.webapp.enable = true;
};

# Node 3 (database.nix)
homelab = {
  hostname = "db-01";
  services.postgresql.enable = true;
  services.redis.enable = true;
};
```

Result: the monitor node automatically discovers and monitors all services across all three nodes.

## File Structure

```
homelab/
├── default.nix              # Main homelab options and imports
├── lib/
│   ├── systems/             # Core system modules
│   │   ├── monitoring.nix   # Monitoring aggregation
│   │   ├── logging.nix      # Logging aggregation
│   │   ├── backups.nix      # Backup aggregation
│   │   └── proxy.nix        # Reverse proxy aggregation
│   ├── features/            # Service feature modules
│   │   ├── monitoring.nix   # Service monitoring template
│   │   ├── logging.nix      # Service logging template
│   │   └── proxy.nix        # Service proxy template
│   └── aggregators/
│       └── base.nix         # Base aggregation functions
└── services/                # Individual service implementations
    ├── prometheus.nix
    ├── gatus.nix
    └── ...
```

This architecture provides a scalable, consistent way to manage a homelab fleet with automatic service discovery, monitoring, and management across all nodes.

@@ -6,6 +6,8 @@
}:
with lib; let
  cfg = config.homelab;

  nodeAgg = import ./lib/node-aggregation.nix {inherit lib;};
in {
  imports = [
    ./lib/systems/monitoring.nix
@@ -13,8 +15,6 @@ in {
    ./lib/systems/proxy.nix
    ./lib/systems/backups.nix

    ./lib/cli/homelab-cli.nix

    ./services
    ./motd
  ];

@@ -1,943 +0,0 @@
#!/usr/bin/env bash
# CLI command implementations

# Services command
# Enhanced services command with detailed service information
cmd_services() {
    local SCOPE="local"
    local FORMAT="table"
    local SHOW_SYSTEMD=true
    local DETAIL_SERVICE=""
    local ACTION=""

    while [[ $# -gt 0 ]]; do
        case $1 in
            --global|-g) SCOPE="global"; shift ;;
            --local|-l) SCOPE="local"; shift ;;
            --json) FORMAT="json"; shift ;;
            --no-systemd) SHOW_SYSTEMD=false; shift ;;
            --detail|-d)
                DETAIL_SERVICE="$2"
                shift 2
                ;;
            --logs)
                ACTION="logs"
                DETAIL_SERVICE="$2"
                shift 2
                ;;
            --status)
                ACTION="status"
                DETAIL_SERVICE="$2"
                shift 2
                ;;
            --restart)
                ACTION="restart"
                DETAIL_SERVICE="$2"
                shift 2
                ;;
            --errors)
                ACTION="errors"
                DETAIL_SERVICE="$2"
                shift 2
                ;;
            --help|-h)
                cat << 'EOF'
homelab services - List and manage services

USAGE:
    homelab services [options]
    homelab services --detail <service-name>
    homelab services --logs <service-name>
    homelab services --status <service-name>
    homelab services --restart <service-name>
    homelab services --errors <service-name>

OPTIONS:
    --global, -g          Show services from entire fleet
    --local, -l           Show local services (default)
    --json                Output JSON format
    --no-systemd          Don't check systemd status
    --detail, -d <name>   Show detailed info for service
    --logs <name>         Show recent logs for service
    --status <name>       Show detailed status for service
    --restart <name>      Restart service
    --errors <name>       Show recent errors for service

EXAMPLES:
    homelab services
    homelab services --global
    homelab services --detail prometheus
    homelab services --logs grafana
    homelab services --errors nginx
    homelab services --restart prometheus
EOF
                return 0
                ;;
            *) shift ;;
        esac
    done

    # Handle specific service actions
    if [[ -n "$DETAIL_SERVICE" ]]; then
        case "$ACTION" in
            logs)
                show_service_logs "$DETAIL_SERVICE"
                return $?
                ;;
            status)
                show_service_status "$DETAIL_SERVICE"
                return $?
                ;;
            restart)
                restart_service "$DETAIL_SERVICE"
                return $?
                ;;
            errors)
                show_service_errors "$DETAIL_SERVICE"
                return $?
                ;;
            *)
                show_service_detail "$DETAIL_SERVICE"
                return $?
                ;;
        esac
    fi

    # Regular service listing
    if [[ "$FORMAT" == "json" ]]; then
        if [[ "$SCOPE" == "global" ]]; then
            jq -r '.services.global // {}' "$HOMELAB_CONFIG"
        else
            jq -r '.services.local // {}' "$HOMELAB_CONFIG"
        fi
    else
        info "Homelab Services ($SCOPE)"
        echo "=============================="
        echo

        services_data=$(jq -r "
            if \"$SCOPE\" == \"global\" then .services.global.all // []
            else .services.local.all // []
            end |
            .[] |
            [.name, (.node // \"local\"), (.port // \"N/A\"), (.description // \"\")] |
            @tsv
        " "$HOMELAB_CONFIG" 2>/dev/null || echo "")

        if [[ -z "$services_data" ]]; then
            warn "No services found"
            return 0
        fi

        printf "%-20s %-12s %-8s %-12s %-8s %s\n" "SERVICE" "NODE" "PORT" "SYSTEMD" "UPTIME" "DESCRIPTION"
        printf "%-20s %-12s %-8s %-12s %-8s %s\n" "-------" "----" "----" "-------" "------" "-----------"

        while IFS=$'\t' read -r service node port description; do
            systemd_status="N/A"
            uptime="N/A"

            if [[ "$SHOW_SYSTEMD" == "true" && "$node" == "local" ]]; then
                # Get systemd service names for this service
                systemd_services=($(jq -r ".services.local.all[] | select(.name == \"$service\") | .systemdServices[]?" "$HOMELAB_CONFIG" 2>/dev/null))

                if [[ ${#systemd_services[@]} -eq 0 ]]; then
                    # Fallback to common patterns
                    systemd_services=("$service" "$service.service")
                fi

                for svc_name in "${systemd_services[@]}"; do
                    if systemctl is-enabled "$svc_name" >/dev/null 2>&1; then
                        if systemctl is-active "$svc_name" >/dev/null 2>&1; then
                            systemd_status="${GREEN}active${NC}"

                            # Get uptime
                            active_since=$(systemctl show -p ActiveEnterTimestamp "$svc_name" --value 2>/dev/null)
                            if [[ -n "$active_since" && "$active_since" != "n/a" ]]; then
                                active_epoch=$(date -d "$active_since" +%s 2>/dev/null || echo 0)
                                current_epoch=$(date +%s)
                                if [[ "$active_epoch" -gt 0 ]]; then
                                    uptime_seconds=$((current_epoch - active_epoch))
                                    uptime=$(format_duration $uptime_seconds)
                                fi
                            fi
                        else
                            systemd_status="${RED}inactive${NC}"
                            uptime="0s"
                        fi
                        break
                    fi
                done
            fi

            printf "%-20s %-12s %-8s %-12b %-8s %s\n" "$service" "$node" "$port" "$systemd_status" "$uptime" "$description"
        done <<< "$services_data"

        echo
        service_count=$(echo "$services_data" | wc -l)
        success "Total services: $service_count"

        echo
        info "💡 Use 'homelab services --detail <service-name>' for detailed information"
        info "💡 Use 'homelab services --logs <service-name>' to view logs"
    fi
}

# Helper function to format duration
format_duration() {
    local seconds=$1
    local days=$((seconds / 86400))
    local hours=$(((seconds % 86400) / 3600))
    local minutes=$(((seconds % 3600) / 60))
    local secs=$((seconds % 60))

    if [[ $days -gt 0 ]]; then
        echo "${days}d ${hours}h"
    elif [[ $hours -gt 0 ]]; then
        echo "${hours}h ${minutes}m"
    elif [[ $minutes -gt 0 ]]; then
        echo "${minutes}m"
    else
        echo "${secs}s"
    fi
}

# Robust service detection function
find_systemd_service() {
    local service_name="$1"

    # Get configured systemd services from homelab config
    local systemd_services=($(jq -r ".services.local.all[] | select(.name == \"$service_name\") | .systemdServices[]?" "$HOMELAB_CONFIG" 2>/dev/null))

    # If no configured services, use common patterns
    if [[ ${#systemd_services[@]} -eq 0 ]]; then
        systemd_services=(
            "$service_name.service"
            "$service_name"
            "nixos-$service_name.service"
            "nixos-$service_name"
            "$service_name-nixos.service"
        )
    fi

    # Try each potential service name with multiple detection methods
    for svc_name in "${systemd_services[@]}"; do
        # Method 1: Check if systemctl can show the unit (most reliable)
        if systemctl show "$svc_name" >/dev/null 2>&1; then
            echo "$svc_name"
            return 0
        fi

        # Method 2: Check if unit file exists
        if systemctl list-unit-files --no-pager --no-legend "$svc_name" 2>/dev/null | grep -q "^$svc_name"; then
            echo "$svc_name"
            return 0
        fi

        # Method 3: Check if unit is loaded
        if systemctl list-units --no-pager --no-legend "$svc_name" 2>/dev/null | grep -q "^$svc_name"; then
            echo "$svc_name"
            return 0
        fi
    done

    # If still not found, try a broader search
    local found_service=$(systemctl list-units --all --no-pager --no-legend | grep -E "^$service_name[.-]|^$service_name\.service" | head -1 | awk '{print $1}')
    if [[ -n "$found_service" ]]; then
        echo "$found_service"
        return 0
    fi

    # Last resort: check unit files
    found_service=$(systemctl list-unit-files --no-pager --no-legend | grep -E "^$service_name[.-]|^$service_name\.service" | head -1 | awk '{print $1}')
    if [[ -n "$found_service" ]]; then
        echo "$found_service"
        return 0
    fi

    return 1
}

show_service_logs() {
    local service_name="$1"
    local lines="${2:-100}"
    local follow="${3:-false}"

    if [[ -z "$service_name" ]]; then
        error "Service name required"
        return 1
    fi

    # Use robust service detection
    local found_service=$(find_systemd_service "$service_name")

    if [[ -z "$found_service" ]]; then
        error "No systemd service found for '$service_name'"
        echo
        info "💡 Available services containing '$service_name':"
        systemctl list-units --all --no-pager --no-legend | grep -i "$service_name" | awk '{print "  " $1}' || echo "  None found"
        return 1
    fi

    info "📝 Logs for $service_name ($found_service)"
    echo "=================================="
    echo

    local journalctl_args="-u $found_service -n $lines --no-pager"
    if [[ "$follow" == "true" ]]; then
        journalctl_args="$journalctl_args -f"
        info "Following logs (Press Ctrl+C to stop)..."
        echo
    fi

    journalctl $journalctl_args
}

show_service_detail() {
    local service_name="$1"

    if [[ -z "$service_name" ]]; then
        error "Service name required"
        return 1
    fi

    # Get service info from config
    local service_info=$(jq -r ".services.local.all[] | select(.name == \"$service_name\")" "$HOMELAB_CONFIG" 2>/dev/null)

    if [[ -z "$service_info" ]]; then
        error "Service '$service_name' not found in homelab configuration"
        return 1
    fi

    info "🔍 Service Details: $service_name"
    echo "================================="
    echo

    # Basic info
    local port=$(echo "$service_info" | jq -r '.port // "N/A"')
    local description=$(echo "$service_info" | jq -r '.description // "N/A"')
    local tags=$(echo "$service_info" | jq -r '.tags[]? // empty' | tr '\n' ',' | sed 's/,$//')

    echo "📋 Configuration:"
    echo "   Port: $port"
    echo "   Description: $description"
    echo "   Tags: ${tags:-"None"}"
    echo

    # Use robust service detection
    local found_service=$(find_systemd_service "$service_name")

    echo "🔧 Systemd Status:"
    if [[ -n "$found_service" ]]; then
        echo "   Service: $found_service"
        echo "   Status: $(systemctl is-active "$found_service" 2>/dev/null || echo "unknown")"
        echo "   Enabled: $(systemctl is-enabled "$found_service" 2>/dev/null || echo "unknown")"

        # Detailed status
        local active_since=$(systemctl show -p ActiveEnterTimestamp "$found_service" --value 2>/dev/null)
        if [[ -n "$active_since" && "$active_since" != "n/a" ]]; then
            echo "   Active since: $active_since"
        fi

        local main_pid=$(systemctl show -p MainPID "$found_service" --value 2>/dev/null)
        if [[ -n "$main_pid" && "$main_pid" != "0" ]]; then
            echo "   Main PID: $main_pid"

            # Memory usage
            local memory_usage=$(systemctl show -p MemoryCurrent "$found_service" --value 2>/dev/null)
            if [[ -n "$memory_usage" && "$memory_usage" != "[not set]" && "$memory_usage" -gt 0 ]]; then
                local memory_mb=$((memory_usage / 1024 / 1024))
                echo "   Memory: ${memory_mb}MB"
            fi
        fi
        echo

        # Recent logs preview
        echo "📝 Recent Logs (last 10 lines):"
        echo "--------------------------------"
        journalctl -u "$found_service" -n 10 --no-pager --output=short 2>/dev/null || echo "No logs available"
        echo

        # Check for recent errors
        local error_count=$(journalctl -u "$found_service" --since "24 hours ago" --no-pager -q 2>/dev/null | grep -i "error\|failed\|exception" | wc -l)
        if [[ "$error_count" -gt 0 ]]; then
            warn "⚠️  Found $error_count error(s) in last 24 hours"
            echo "   Use 'homelab services --errors $service_name' to view them"
        else
            success "✅ No errors found in last 24 hours"
        fi
        echo

        info "📊 Available Actions:"
        echo "   homelab services --logs $service_name      # View full logs"
        echo "   homelab services --errors $service_name    # View recent errors"
        echo "   homelab services --restart $service_name   # Restart service"

    else
        warn "No systemd service found for '$service_name'"
        echo
        info "💡 Available services containing '$service_name':"
        systemctl list-units --all --no-pager --no-legend | grep -i "$service_name" | awk '{print "  " $1}' || echo "  None found"
    fi
}

show_service_errors() {
    local service_name="$1"
    local since="${2:-24 hours ago}"

    if [[ -z "$service_name" ]]; then
        error "Service name required"
        return 1
    fi

    # Use robust service detection
    local found_service=$(find_systemd_service "$service_name")

    if [[ -z "$found_service" ]]; then
        error "No systemd service found for '$service_name'"
        return 1
    fi

    info "🚨 Errors for $service_name ($found_service) since $since"
    echo "=============================================="
    echo

    # Get logs with priority filtering for errors and higher
    local systemd_errors=$(journalctl -u "$found_service" --since "$since" --no-pager -p err 2>/dev/null)

    # Also get application-level errors from all logs but with better filtering
    local app_errors=$(journalctl -u "$found_service" --since "$since" --no-pager 2>/dev/null | \
        grep -E "(ERROR|FATAL|CRITICAL|Exception|Traceback|failed to|cannot|unable to|connection refused|timeout|denied)" | \
        grep -v -E "(debug|DEBUG|info|INFO|warn|WARNING|notice|NOTICE)" | \
        grep -v -E "(successfully|completed|started|stopped|reloaded)")

    local has_errors=false

    # Show systemd-level errors (priority err and above)
    if [[ -n "$systemd_errors" ]]; then
        warn "📍 System-level errors (priority: err/crit/alert/emerg):"
        echo "────────────────────────────────────────────────────────"
        echo "$systemd_errors"
        echo
        has_errors=true
    fi

    # Show application-level errors
    if [[ -n "$app_errors" ]]; then
        warn "📍 Application-level errors:"
        echo "─────────────────────────────"
        echo "$app_errors"
        echo
        has_errors=true
    fi

    # Check for service failures/restarts
    local service_failures=$(journalctl -u "$found_service" --since "$since" --no-pager 2>/dev/null | \
        grep -E "(Failed|failed|Stopped|stopped|Restarted|restarted|Exit code|exit code)" | \
        grep -v -E "(successfully|gracefully)")

    if [[ -n "$service_failures" ]]; then
        warn "📍 Service state changes/failures:"
        echo "───────────────────────────────────"
        echo "$service_failures"
        echo
        has_errors=true
    fi

    if [[ "$has_errors" == "false" ]]; then
        success "✅ No errors found since $since"
        echo
        info "💡 Error detection includes:"
        echo "   • System-level errors (journald priority: err/crit/alert/emerg)"
        echo "   • Application errors (ERROR, FATAL, CRITICAL, Exception, etc.)"
        echo "   • Service failures and unexpected restarts"
    else
        echo
        local total_systemd=$(echo "$systemd_errors" | grep -c . || echo 0)
        local total_app=$(echo "$app_errors" | grep -c . || echo 0)
        local total_failures=$(echo "$service_failures" | grep -c . || echo 0)

        warn "Summary: $total_systemd system errors, $total_app application errors, $total_failures service issues"
        echo
        info "💡 Use 'homelab services --logs $service_name' to view all logs"
    fi
}

show_service_status() {
    local service_name="$1"

    if [[ -z "$service_name" ]]; then
        error "Service name required"
        return 1
    fi

    # Use robust service detection
    local found_service=$(find_systemd_service "$service_name")

    if [[ -z "$found_service" ]]; then
        error "No systemd service found for '$service_name'"
        return 1
    fi

    info "📊 Status for $service_name ($found_service)"
    echo "=================================="
    echo

    systemctl status "$found_service" --no-pager -l
}

restart_service() {
    local service_name="$1"

    if [[ -z "$service_name" ]]; then
        error "Service name required"
        return 1
    fi

    # Use robust service detection
    local found_service=$(find_systemd_service "$service_name")

    if [[ -z "$found_service" ]]; then
        error "No systemd service found for '$service_name'"
        return 1
    fi

    info "🔄 Restarting $service_name ($found_service)..."

    if sudo systemctl restart "$found_service"; then
        success "✅ Successfully restarted $service_name"

        # Show brief status
        sleep 2
        if systemctl is-active "$found_service" >/dev/null 2>&1; then
            success "✅ Service is now active"
        else
            error "❌ Service failed to start properly"
            warn "Use 'homelab services --status $service_name' to check details"
        fi
    else
        error "❌ Failed to restart $service_name"
        return 1
    fi
}

# Backups command
cmd_backups() {
    local SCOPE="local"
    local FORMAT="table"
    local SHOW_STATUS=true

    while [[ $# -gt 0 ]]; do
        case $1 in
            --global|-g) SCOPE="global"; shift ;;
            --local|-l) SCOPE="local"; shift ;;
            --json) FORMAT="json"; shift ;;
            --no-status) SHOW_STATUS=false; shift ;;
            --help|-h)
                cat << 'EOF'
homelab backups - Show backup jobs and status

USAGE:
    homelab backups [options]

OPTIONS:
    --global, -g     Show backups from entire fleet
    --local, -l      Show local backups (default)
    --json           Output JSON format
    --no-status      Don't check systemd timer status

EXAMPLES:
    homelab backups
    homelab backups --global
    homelab backups --no-status
EOF
                return 0
                ;;
            *) shift ;;
        esac
    done

    if [[ "$FORMAT" == "json" ]]; then
        if [[ "$SCOPE" == "global" ]]; then
            jq -r '.backups.global // {}' "$HOMELAB_CONFIG"
        else
            jq -r '.backups.local // {}' "$HOMELAB_CONFIG"
        fi
    else
        info "Homelab Backups ($SCOPE)"
        echo "=========================="
        echo

        backup_data=$(jq -r "
            if \"$SCOPE\" == \"global\" then .backups.global.allJobs // []
            else .backups.local.allJobs // []
            end |
            .[] |
            [.name, (.sourceNode // .node // \"local\"), .backend, (.labels | to_entries | map(\"\(.key)=\(.value)\") | join(\",\"))] |
            @tsv
        " "$HOMELAB_CONFIG" 2>/dev/null || echo "")

        if [[ -z "$backup_data" ]]; then
            warn "No backup jobs found"
            return 0
        fi

        printf "%-25s %-12s %-8s %-15s %-15s %-15s %s\n" "JOB" "NODE" "STATUS" "BACKEND" "LAST RUN" "NEXT RUN" "LABELS"
        printf "%-25s %-12s %-8s %-15s %-15s %-15s %s\n" "---" "----" "------" "-------" "--------" "--------" "------"

        while IFS=$'\t' read -r job node backend labels; do
            last_run="Unknown"
            status="❓"
            next_run="Unknown"

            if [[ "$SHOW_STATUS" == "true" && "$node" == "local" ]]; then
                timer_patterns=(
                    "backup-$job"
                    "$job-backup"
                    "restic-backups-$job"
                    "restic-backup-$job"
                    "$job.timer"
                    "backup-$job.timer"
                )

                found_timer=""
                actual_timer_name=""
                for pattern in "${timer_patterns[@]}"; do
                    for timer_name in "$pattern" "$pattern.timer"; do
                        if systemctl list-timers --no-pager --no-legend "$timer_name" 2>/dev/null | grep -q "$timer_name"; then
                            found_timer="$timer_name"
                            if [[ "$timer_name" == *.timer ]]; then
                                actual_timer_name="$timer_name"
                            else
                                actual_timer_name="$timer_name.timer"
                            fi
                            break 2
                        fi
                    done
                done

                if [[ -n "$found_timer" ]]; then
                    last_trigger=$(systemctl show -p LastTriggerUSec "$actual_timer_name" --value 2>/dev/null)
                    if [[ "$last_trigger" != "n/a" && -n "$last_trigger" && "$last_trigger" != "Thu 1970-01-01"* ]]; then
                        last_run=$(date -d "$last_trigger" "+%Y-%m-%d %H:%M" 2>/dev/null || echo "Parse Error")

                        last_epoch=$(date -d "$last_trigger" +%s 2>/dev/null || echo 0)
                        current_epoch=$(date +%s)
                        if [[ "$last_epoch" != "0" && "$last_epoch" -gt 0 ]]; then
                            hours_since=$(( (current_epoch - last_epoch) / 3600 ))

                            if [[ $hours_since -lt 25 ]]; then
                                status="✅"
                            elif [[ $hours_since -lt 48 ]]; then
                                status="⚠️"
                            else
                                status="❌"
                            fi
                        else
                            status="❓"
                        fi
                    else
                        last_run="Never"
                        status="⏸️"
                    fi

                    next_trigger=$(systemctl show -p NextElapseUSecRealtime "$actual_timer_name" --value 2>/dev/null)
                    if [[ "$next_trigger" != "n/a" && -n "$next_trigger" && "$next_trigger" != "0" ]]; then
                        next_run=$(date -d "$next_trigger" "+%Y-%m-%d %H:%M" 2>/dev/null || echo "Parse Error")
                    else
                        next_run="Unknown"
                    fi

                    if [[ "$status" == "✅" ]]; then
                        if journalctl -u "$actual_timer_name" --since "24 hours ago" --no-pager -q 2>/dev/null | grep -qi "error\|failed\|timeout"; then
                            status="❌"
                        elif journalctl -u "$actual_timer_name" --since "24 hours ago" --no-pager -q 2>/dev/null | grep -qi "success\|completed\|finished"; then
                            status="✅"
                        fi
                    fi
                fi
            fi

            printf "%-25s %-12s %-8s %-15s %-15s %-15s %s\n" "$job" "$node" "$status" "$backend" "$last_run" "$next_run" "$labels"
        done <<< "$backup_data"

        echo
        job_count=$(echo "$backup_data" | wc -l)
        success "Total backup jobs: $job_count"

        if [[ "$SHOW_STATUS" == "true" ]]; then
            echo
            info "Status: ✅=Recent(<25h) ⚠️=Overdue(1-2d) ❌=Failed(>2d) ⏸️=Never ❓=Unknown"
        fi
    fi
}

# Proxy command
|
||||
cmd_proxy() {
|
||||
local SCOPE="local"
|
||||
local FORMAT="table"
|
||||
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case $1 in
|
||||
--global|-g) SCOPE="global"; shift ;;
|
||||
--local|-l) SCOPE="local"; shift ;;
|
||||
--json) FORMAT="json"; shift ;;
|
||||
--help|-h)
|
||||
cat << 'EOF'
|
||||
homelab proxy - Show reverse proxy entries
|
||||
|
||||
USAGE:
|
||||
homelab proxy [options]
|
||||
|
||||
OPTIONS:
|
||||
--global, -g Show proxy entries from entire fleet
|
||||
--local, -l Show local proxy entries (default)
|
||||
--json Output JSON format
|
||||
|
||||
EXAMPLES:
|
||||
homelab proxy
|
||||
homelab proxy --global
|
||||
EOF
|
||||
return 0
|
||||
;;
|
||||
*) shift ;;
|
||||
esac
|
||||
done
|
||||
|
||||
if [[ "$FORMAT" == "json" ]]; then
|
||||
if [[ "$SCOPE" == "global" ]]; then
|
||||
jq -r '.reverseProxy.global // {}' "$HOMELAB_CONFIG"
|
||||
else
|
||||
jq -r '.reverseProxy.local // {}' "$HOMELAB_CONFIG"
|
||||
fi
|
||||
else
|
||||
info "Homelab Reverse Proxy ($SCOPE)"
|
||||
echo "==============================="
|
||||
echo
|
||||
|
||||
proxy_data=$(jq -r "
|
||||
if \"$SCOPE\" == \"global\" then .reverseProxy.global.allEntries // []
|
||||
else .reverseProxy.local.allEntries // []
|
||||
end |
|
||||
.[] |
|
||||
[.subdomain, (.sourceNode // .node // \"local\"), .host, (.port // \"N/A\"), (.enableAuth // false), (.enableSSL // true)] |
|
||||
@tsv
|
||||
" "$HOMELAB_CONFIG" 2>/dev/null || echo "")
|
||||
|
||||
if [[ -z "$proxy_data" ]]; then
|
||||
warn "No proxy entries found"
|
||||
return 0
|
||||
fi
|
||||
|
||||
printf "%-20s %-12s %-15s %-8s %-6s %-6s %s\n" "SUBDOMAIN" "NODE" "BACKEND" "PORT" "AUTH" "SSL" "EXTERNAL URL"
|
||||
printf "%-20s %-12s %-15s %-8s %-6s %-6s %s\n" "---------" "----" "-------" "----" "----" "---" "------------"
|
||||
|
||||
external_domain=$(jq -r '.externalDomain // "lab.local"' "$HOMELAB_CONFIG" 2>/dev/null)
|
||||
|
||||
while IFS=$'\t' read -r subdomain node host port auth ssl; do
|
||||
auth_icon=$(if [[ "$auth" == "true" ]]; then echo "🔒"; else echo "🌐"; fi)
|
||||
ssl_icon=$(if [[ "$ssl" == "true" ]]; then echo "🔐"; else echo "❌"; fi)
|
||||
|
||||
external_url="https://$subdomain.$external_domain"
|
||||
if [[ "$ssl" == "false" ]]; then
|
||||
external_url="http://$subdomain.$external_domain"
|
||||
fi
|
||||
|
||||
printf "%-20s %-12s %-15s %-8s %-6s %-6s %s\n" "$subdomain" "$node" "$host" "$port" "$auth_icon" "$ssl_icon" "$external_url"
|
||||
done <<< "$proxy_data"
|
||||
|
||||
echo
|
||||
entry_count=$(echo "$proxy_data" | wc -l)
|
||||
success "Total proxy entries: $entry_count"
|
||||
fi
|
||||
}
|
||||
|
||||
# Monitoring command
cmd_monitoring() {
  local SCOPE="local"
  local FORMAT="table"
  local SHOW_TYPE="all"

  while [[ $# -gt 0 ]]; do
    case $1 in
      --global|-g) SCOPE="global"; shift ;;
      --local|-l) SCOPE="local"; shift ;;
      --json) FORMAT="json"; shift ;;
      --metrics) SHOW_TYPE="metrics"; shift ;;
      --health) SHOW_TYPE="health"; shift ;;
      --logs) SHOW_TYPE="logs"; shift ;;
      --help|-h)
        cat << 'EOF'
homelab monitoring - Show monitoring configuration

USAGE:
    homelab monitoring [options]

OPTIONS:
    --global, -g    Show monitoring from entire fleet
    --local, -l     Show local monitoring (default)
    --json          Output JSON format
    --metrics       Show only metrics endpoints
    --health        Show only health checks
    --logs          Show only log sources

EXAMPLES:
    homelab monitoring
    homelab monitoring --global --metrics
EOF
        return 0
        ;;
      *) shift ;;
    esac
  done

  if [[ "$FORMAT" == "json" ]]; then
    if [[ "$SCOPE" == "global" ]]; then
      jq -r '.monitoring.global // {}' "$HOMELAB_CONFIG"
    else
      jq -r '.monitoring.local // {}' "$HOMELAB_CONFIG"
    fi
  else
    info "Homelab Monitoring ($SCOPE)"
    echo "============================"
    echo

    # Show metrics
    if [[ "$SHOW_TYPE" == "all" || "$SHOW_TYPE" == "metrics" ]]; then
      info "📊 Metrics Endpoints"
      echo "--------------------"

      metrics_data=$(jq -r "
        if \"$SCOPE\" == \"global\" then .monitoring.global.allMetrics // []
        else .monitoring.local.allMetrics // []
        end |
        .[] |
        [.name, (.sourceNode // .node // \"local\"), .host, (.port // \"N/A\"), .path, .jobName] |
        @tsv
      " "$HOMELAB_CONFIG" 2>/dev/null || echo "")

      if [[ -n "$metrics_data" ]]; then
        printf "%-20s %-12s %-15s %-8s %-12s %s\n" "NAME" "NODE" "HOST" "PORT" "PATH" "JOB"
        printf "%-20s %-12s %-15s %-8s %-12s %s\n" "----" "----" "----" "----" "----" "---"

        while IFS=$'\t' read -r name node host port path job; do
          printf "%-20s %-12s %-15s %-8s %-12s %s\n" "$name" "$node" "$host" "$port" "$path" "$job"
        done <<< "$metrics_data"

        echo
        metrics_count=$(echo "$metrics_data" | wc -l)
        success "Found $metrics_count metrics endpoints"
      else
        warn "No metrics endpoints found"
      fi
      echo
    fi

    # Show health checks
    if [[ "$SHOW_TYPE" == "all" || "$SHOW_TYPE" == "health" ]]; then
      info "🏥 Health Checks"
      echo "----------------"

      health_data=$(jq -r "
        if \"$SCOPE\" == \"global\" then .monitoring.global.allHealthChecks // []
        else .monitoring.local.allHealthChecks // []
        end |
        .[] |
        [.name, (.sourceNode // .node // \"local\"), .host, (.port // \"N/A\"), .path, .protocol, (.enabled // true)] |
        @tsv
      " "$HOMELAB_CONFIG" 2>/dev/null || echo "")

      if [[ -n "$health_data" ]]; then
        printf "%-20s %-12s %-15s %-8s %-12s %-8s %s\n" "NAME" "NODE" "HOST" "PORT" "PATH" "PROTOCOL" "STATUS"
        printf "%-20s %-12s %-15s %-8s %-12s %-8s %s\n" "----" "----" "----" "----" "----" "--------" "------"

        while IFS=$'\t' read -r name node host port path protocol enabled; do
          status_icon=$(if [[ "$enabled" == "true" ]]; then echo "✅"; else echo "❌"; fi)
          printf "%-20s %-12s %-15s %-8s %-12s %-8s %s\n" "$name" "$node" "$host" "$port" "$path" "$protocol" "$status_icon"
        done <<< "$health_data"

        echo
        health_count=$(echo "$health_data" | wc -l)
        success "Found $health_count health checks"
      else
        warn "No health checks found"
      fi
    fi
  fi
}

# Status command
cmd_status() {
  local FORMAT="table"

  while [[ $# -gt 0 ]]; do
    case $1 in
      --json) FORMAT="json"; shift ;;
      --help|-h)
        cat << 'EOF'
homelab status - Show overall homelab status

USAGE:
    homelab status [options]

OPTIONS:
    --json    Output JSON format

EXAMPLES:
    homelab status
EOF
        return 0
        ;;
      *) shift ;;
    esac
  done

  if [[ "$FORMAT" == "json" ]]; then
    cat "$HOMELAB_CONFIG"
  else
    # Get basic info
    hostname=$(jq -r '.hostname // "unknown"' "$HOMELAB_CONFIG")
    domain=$(jq -r '.domain // "lab"' "$HOMELAB_CONFIG")
    external_domain=$(jq -r '.externalDomain // "unknown"' "$HOMELAB_CONFIG")
    environment=$(jq -r '.environment // "unknown"' "$HOMELAB_CONFIG")

    info "🏠 Homelab Status"
    echo "=================="
    echo
    echo "Node Information:"
    echo "  Hostname: $hostname"
    echo "  Domain: $domain"
    echo "  External: $external_domain"
    echo "  Environment: $environment"
    echo

    # Services summary
    local_services=$(jq -r '.services.local.count // 0' "$HOMELAB_CONFIG" 2>/dev/null)
    global_services=$(jq -r '.services.global.count // 0' "$HOMELAB_CONFIG" 2>/dev/null)

    echo "📋 Services:"
    echo "  Local: $local_services"
    echo "  Fleet: $global_services"
    echo

    # Monitoring summary
    local_metrics=$(jq -r '.monitoring.local.count // 0' "$HOMELAB_CONFIG" 2>/dev/null)
    global_metrics=$(jq -r '.monitoring.global.summary.totalMetrics // 0' "$HOMELAB_CONFIG" 2>/dev/null)

    echo "📊 Monitoring:"
    echo "  Local Metrics: $local_metrics"
    echo "  Fleet Metrics: $global_metrics"
    echo

    # Backup summary
    local_backups=$(jq -r '.backups.local.count // 0' "$HOMELAB_CONFIG" 2>/dev/null)
    global_backups=$(jq -r '.backups.global.summary.total // 0' "$HOMELAB_CONFIG" 2>/dev/null)

    echo "💾 Backups:"
    echo "  Local Jobs: $local_backups"
    echo "  Fleet Jobs: $global_backups"
    echo

    success "Use 'homelab <command> --help' for detailed information"
  fi
}

@ -1,295 +0,0 @@
{
  config,
  lib,
  pkgs,
  ...
}:
with lib; let
  cfg = config.homelab;

  extractServiceData = services:
    mapAttrsToList (name: svc: {
      inherit name;
      enabled = svc.enable or false;
      port = svc.port or null;
      description = svc.description or name;
      tags = svc.tags or [];
      systemdServices = svc.systemdServices or ["${name}.service" name];
    }) (filterAttrs (name: svc: svc.enable or false) services);

  extractListData = list:
    if isList list
    then
      map (
        item:
          if isAttrs item
          then
            filterAttrs (
              k: v:
                !(isFunction v)
                && !(isAttrs v && v ? "_type")
                && k != "_module"
            )
            item
          else item
      )
      list
    else [];

  homelabCli = pkgs.writeShellScriptBin "homelab" ''
    #!/usr/bin/env bash
    set -euo pipefail

    HOMELAB_CONFIG="/etc/homelab/config.json"

    # Colors
    RED='\033[0;31m'
    GREEN='\033[0;32m'
    YELLOW='\033[1;33m'
    BLUE='\033[0;34m'
    NC='\033[0m'

    # Helper functions
    error() { echo -e "''${RED}Error: $1''${NC}" >&2; }
    info() { echo -e "''${BLUE}$1''${NC}"; }
    success() { echo -e "''${GREEN}$1''${NC}"; }
    warn() { echo -e "''${YELLOW}$1''${NC}"; }

    # Check config exists
    if [[ ! -f "$HOMELAB_CONFIG" ]]; then
      error "Homelab configuration not found"
      error "Make sure homelab.enable = true and rebuild"
      exit 1
    fi

    # Load command implementations
    source ${./cli-commands.sh}

    # Help function
    show_help() {
      cat << 'EOF'
Homelab Management CLI

USAGE:
    homelab <command> [options]

COMMANDS:
    services      List and manage services
    backups       Show backup jobs and status
    proxy         Show reverse proxy entries
    monitoring    Show monitoring configuration
    status        Overall homelab status
    help          Show this help

GLOBAL OPTIONS:
    --global, -g    Show fleet-wide information
    --local, -l     Show local information (default)
    --json          Output JSON format
    --help, -h      Show help

EXAMPLES:
    homelab services --global
    homelab backups --local
    homelab status
EOF
    }

    # Main command dispatcher
    case "''${1:-help}" in
      services)
        shift
        cmd_services "$@"
        ;;
      backups)
        shift
        cmd_backups "$@"
        ;;
      proxy)
        shift
        cmd_proxy "$@"
        ;;
      monitoring)
        shift
        cmd_monitoring "$@"
        ;;
      status)
        shift
        cmd_status "$@"
        ;;
      help|--help|-h)
        show_help
        ;;
      *)
        error "Unknown command: $1"
        show_help
        exit 1
        ;;
    esac
  '';
in {
  # Only enable when homelab is enabled
  config = mkIf cfg.enable {
    # Install CLI tools
    environment.systemPackages = [
      homelabCli
      # Create convenient aliases
      (pkgs.writeShellScriptBin "hl" "exec homelab \"$@\"")
      (pkgs.writeShellScriptBin "hls" "exec homelab services \"$@\"")
      (pkgs.writeShellScriptBin "hlb" "exec homelab backups \"$@\"")
      (pkgs.writeShellScriptBin "hlp" "exec homelab proxy \"$@\"")
      (pkgs.writeShellScriptBin "hlm" "exec homelab monitoring \"$@\"")
    ];

    # Generate minimal, safe JSON config
    environment.etc."homelab/config.json" = {
      text = builtins.toJSON {
        # Basic homelab info (always safe)
        hostname = cfg.hostname or "unknown";
        domain = cfg.domain or "lab";
        externalDomain = cfg.externalDomain or "lab.local";
        environment = cfg.environment or "production";
        location = cfg.location or "homelab";
        tags = cfg.tags or [];

        # Services - only extract what we have locally
        services = {
          local = {
            all =
              if (cfg ? services)
              then extractServiceData cfg.services
              else [];
            count =
              if (cfg ? services)
              then length (attrNames (filterAttrs (n: s: s.enable or false) cfg.services))
              else 0;
          };
          # For global data, we'll try to read it but provide an empty fallback
          global = {
            all = [];
            count = 0;
            summary = {};
          };
        };

        # Monitoring - extract only basic data
        monitoring = {
          local = {
            allMetrics =
              if (hasAttr "monitoring" cfg && hasAttr "allMetrics" cfg.monitoring)
              then extractListData cfg.monitoring.allMetrics
              else [];
            allHealthChecks =
              if (hasAttr "monitoring" cfg && hasAttr "allHealthChecks" cfg.monitoring)
              then extractListData cfg.monitoring.allHealthChecks
              else [];
            count =
              if (hasAttr "monitoring" cfg && hasAttr "allMetrics" cfg.monitoring)
              then length cfg.monitoring.allMetrics
              else 0;
          };
          global = {
            allMetrics = [];
            allHealthChecks = [];
            summary = {
              totalMetrics = 0;
              totalHealthChecks = 0;
            };
          };
        };

        # Logging
        logging = {
          local = {
            allSources =
              if (hasAttr "logging" cfg && hasAttr "allSources" cfg.logging)
              then extractListData cfg.logging.allSources
              else [];
            count =
              if (hasAttr "logging" cfg && hasAttr "allSources" cfg.logging)
              then length cfg.logging.allSources
              else 0;
          };
          global = {
            allSources = [];
            summary = {};
          };
        };

        # Backups
        backups = {
          local = {
            allJobs =
              if (hasAttr "backups" cfg && hasAttr "allJobs" cfg.backups)
              then extractListData cfg.backups.allJobs
              else [];
            count =
              if (hasAttr "backups" cfg && hasAttr "allJobs" cfg.backups)
              then length cfg.backups.allJobs
              else 0;
          };
          global = {
            allJobs = [];
            summary = {};
          };
        };

        # Reverse Proxy
        reverseProxy = {
          local = {
            allEntries =
              if (hasAttr "reverseProxy" cfg && hasAttr "allEntries" cfg.reverseProxy)
              then extractListData cfg.reverseProxy.allEntries
              else [];
            count =
              if (hasAttr "reverseProxy" cfg && hasAttr "allEntries" cfg.reverseProxy)
              then length cfg.reverseProxy.allEntries
              else 0;
          };
          global = {
            allEntries = [];
            summary = {};
          };
        };

        # Metadata
        _metadata = {
          # generated = toString builtins.currentTime;
          version = "1.0.0";
        };
      };
      mode = "0644";
    };

    # Add bash completion
    environment.etc."bash_completion.d/homelab".text = ''
      _homelab_completion() {
        local cur prev opts
        COMPREPLY=()
        cur="''${COMP_WORDS[COMP_CWORD]}"
        prev="''${COMP_WORDS[COMP_CWORD-1]}"

        case ''${COMP_CWORD} in
          1)
            opts="services backups proxy monitoring status help"
            COMPREPLY=( $(compgen -W "''${opts}" -- ''${cur}) )
            return 0
            ;;
          *)
            case ''${COMP_WORDS[1]} in
              services|backups|proxy|monitoring|status)
                opts="--global --local --json --help"
                ;;
              *)
                opts="--help"
                ;;
            esac
            COMPREPLY=( $(compgen -W "''${opts}" -- ''${cur}) )
            return 0
            ;;
        esac
      }

      complete -F _homelab_completion homelab hl
    '';
  };
}
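The CLI only ever reads the JSON this module renders to /etc/homelab/config.json. A minimal sketch (sample values invented for illustration, written to /tmp so nothing real is touched) of the fallback-style jq lookups cmd_status performs against that document:

```shell
# Hypothetical stand-in for /etc/homelab/config.json
cat > /tmp/homelab-config.json <<'EOF'
{"hostname":"node1","domain":"lab","services":{"local":{"count":2}},"backups":{"local":{"count":1}}}
EOF

# The same // fallbacks cmd_status uses: present keys resolve,
# missing paths collapse to the default instead of erroring
jq -r '.hostname // "unknown"' /tmp/homelab-config.json              # node1
jq -r '.services.local.count // 0' /tmp/homelab-config.json          # 2
jq -r '.backups.global.summary.total // 0' /tmp/homelab-config.json  # 0
```

Because jq indexing a missing key yields null and `//` substitutes the right-hand default, the CLI degrades gracefully on nodes whose config omits the global aggregation sections.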

@ -6,18 +6,9 @@ serviceName: {
with lib; let
cfg = config.homelab.services.${serviceName};
homelabCfg = config.homelab;

shouldEnableLogging =
cfg.logging.files
!= []
|| cfg.logging.extraSources != [];
in {
options.homelab.services.${serviceName}.logging = {
enable = mkOption {
type = types.bool;
description = "Enable logging for ${serviceName}";
default = shouldEnableLogging;
};
enable = mkEnableOption "logging for ${serviceName}";

files = mkOption {
type = types.listOf types.str;

@ -60,10 +51,10 @@ in {
};
};

config = mkIf cfg.enable {
homelab.logging.sources = mkIf cfg.logging.enable (
# Only create file source if files are specified
(optional (cfg.logging.files != []) {
config = mkIf (cfg.enable && cfg.logging.enable) {
homelab.logging.sources =
[
{
name = "${serviceName}-logs";
type = "file";
files = {

@ -78,15 +69,19 @@ in {
environment = homelabCfg.environment;
};
pipelineStages =
(optional (cfg.logging.parsing.regex != null) {
mkIf (cfg.logging.parsing.regex != null) [
{
regex.expression = cfg.logging.parsing.regex;
})
++ (optional (cfg.logging.parsing.extractFields != []) {
}
]
++ [
{
labels = listToAttrs (map (field: nameValuePair field null) cfg.logging.parsing.extractFields);
});
}
];
enabled = true;
})
++ cfg.logging.extraSources
);
}
]
++ cfg.logging.extraSources;
};
}

@ -6,69 +6,47 @@ serviceName: {
with lib; let
cfg = config.homelab.services.${serviceName};
homelabCfg = config.homelab;

hasMetricsConfig =
cfg.monitoring.metrics.path
!= null
|| cfg.monitoring.metrics.extraEndpoints != [];

hasHealthCheckConfig =
cfg.monitoring.healthCheck.path
!= null
|| cfg.monitoring.healthCheck.conditions != []
|| cfg.monitoring.healthCheck.extraChecks != [];
in {
# Define the service-specific monitoring options
options.homelab.services.${serviceName}.monitoring = {
enable = mkOption {
type = types.bool;
description = "Enable monitoring for ${serviceName}";
default = hasMetricsConfig || hasHealthCheckConfig;
};
enable = mkEnableOption "monitoring for ${serviceName}";

metrics = {
enable = mkOption {
type = types.bool;
default = hasMetricsConfig;
default = true;
};

path = mkOption {
type = types.nullOr types.str;
default = null;
description = "Metrics endpoint path. Setting this enables metrics collection.";
type = types.str;
default = "/metrics";
};

extraEndpoints = mkOption {
type = types.listOf types.attrs;
default = [];
description = "Additional metrics endpoints. Adding endpoints enables metrics collection.";
};
};

healthCheck = {
enable = mkOption {
type = types.bool;
default = hasHealthCheckConfig;
default = true;
};

path = mkOption {
type = types.nullOr types.str;
default = null;
description = "Health check endpoint path. Setting this enables health checks.";
example = "/health";
type = types.str;
default = "/health";
};

conditions = mkOption {
type = types.listOf types.str;
default = ["[STATUS] == 200"];
description = "Health check conditions. Setting conditions enables health checks.";
example = ["[STATUS] == 200"];
};

extraChecks = mkOption {
type = types.listOf types.attrs;
default = [];
description = "Additional health checks. Adding checks enables health monitoring.";
};
};

@ -79,10 +57,11 @@ in {
};

# Generate the homelab config automatically when service is enabled
config = mkIf cfg.enable {
homelab.monitoring = mkIf cfg.monitoring.enable {
metrics = mkIf hasMetricsConfig (
(optional (cfg.monitoring.metrics.path != null) {
config = mkIf (cfg.enable && cfg.monitoring.enable) {
homelab.monitoring = {
metrics =
[
{
name = "${serviceName}-main";
host = homelabCfg.hostname;
port = cfg.port;

@ -96,12 +75,13 @@ in {
node = homelabCfg.hostname;
environment = homelabCfg.environment;
};
})
++ cfg.monitoring.metrics.extraEndpoints
);
}
]
++ cfg.monitoring.metrics.extraEndpoints;

healthChecks = mkIf hasHealthCheckConfig (
(optional (cfg.monitoring.healthCheck.path != null) {
healthChecks =
[
{
name = "${serviceName}-health";
host = homelabCfg.hostname;
port = cfg.port;

@ -120,9 +100,9 @@ in {
environment = homelabCfg.environment;
};
enabled = true;
})
++ cfg.monitoring.healthCheck.extraChecks
);
}
]
++ cfg.monitoring.healthCheck.extraChecks;
};
};
}

@ -8,11 +8,7 @@ with lib; let
homelabCfg = config.homelab;
in {
options.homelab.services.${serviceName}.proxy = {
enable = mkOption {
type = types.bool;
description = "Enable reverse proxy for ${serviceName}";
default = true;
};
enable = mkEnableOption "reverse proxy for ${serviceName}";

subdomain = mkOption {
type = types.str;

@ -43,8 +39,8 @@ in {
};
};

config = mkIf cfg.enable {
homelab.reverseProxy.entries = mkIf cfg.proxy.enable (
config = mkIf (cfg.enable && cfg.proxy.enable) {
homelab.reverseProxy.entries =
[
{
subdomain = cfg.proxy.subdomain;

@ -63,7 +59,6 @@ in {
enableAuth = sub.enableAuth;
enableSSL = true;
})
cfg.proxy.additionalSubdomains
);
cfg.proxy.additionalSubdomains;
};
}

@ -124,19 +124,19 @@ in {
# Always exposed aggregated data
allJobs = mkOption {
type = types.listOf types.attrs;
default = localAggregation.allJobs;
default = [];
readOnly = true;
};

allBackends = mkOption {
type = types.listOf types.str;
default = localAggregation.allBackends;
default = [];
readOnly = true;
};

global = mkOption {
type = types.attrs;
default = globalAggregation;
default = {};
readOnly = true;
};
};

@ -151,13 +151,13 @@ in {
];

# Always expose both local and global
# homelab.backups = {
#   allJobs = localAggregation.allJobs;
#   allBackends = localAggregation.allBackends;
#   global =
#     if hasNodes
#     then globalAggregation
#     else {};
# };
homelab.backups = {
allJobs = localAggregation.allJobs;
allBackends = localAggregation.allBackends;
global =
if hasNodes
then globalAggregation
else {};
};
};
}

@ -140,13 +140,13 @@ in {
# Always exposed aggregated data
allSources = mkOption {
type = types.listOf types.attrs;
default = localAggregation.allSources;
default = [];
readOnly = true;
};

global = mkOption {
type = types.attrs;
default = globalAggregation;
default = {};
readOnly = true;
};
};

@ -198,12 +198,12 @@ in {

networking.firewall.allowedTCPPorts = optionals cfg.promtail.enable [cfg.promtail.port];

# homelab.logging = {
#   allSources = localAggregation.allSources;
#   global =
#     if hasNodes
#     then globalAggregation
#     else {};
# };
homelab.logging = {
allSources = localAggregation.allSources;
global =
if hasNodes
then globalAggregation
else {};
};
};
}

@ -74,25 +74,25 @@ in {
# Always exposed aggregated data
allEntries = mkOption {
type = types.listOf types.attrs;
default = localAggregation.allEntries;
default = [];
readOnly = true;
};

global = mkOption {
type = types.attrs;
default = globalAggregation;
default = {};
readOnly = true;
};
};

config = mkIf cfg.enable {
# Always expose both local and global
# homelab.reverseProxy = {
#   allEntries = localAggregation.allEntries;
#   global =
#     if hasNodes
#     then globalAggregation
#     else {};
# };
homelab.reverseProxy = {
allEntries = localAggregation.allEntries;
global =
if hasNodes
then globalAggregation
else {};
};
};
}

@ -219,7 +219,8 @@ in {
homelab.services.${serviceName}.monitoring.enable = mkDefault true;
}

{
# Smart defaults for Gatus
(mkIf cfg.monitoring.enable {
homelab.services.${serviceName}.monitoring = mkDefault {
metrics = {
path = "/metrics";

@ -239,9 +240,9 @@ in {
tier = "monitoring";
};
};
}
})

{
(mkIf cfg.logging.enable {
homelab.services.${serviceName}.logging = mkDefault {
files = ["/var/log/gatus/gatus.log"];
parsing = {

@ -254,13 +255,13 @@ in {
application = "gatus";
};
};
}
})

{
(mkIf cfg.proxy.enable {
homelab.services.${serviceName}.proxy = mkDefault {
subdomain = "status";
enableAuth = false; # Status page should be public
};
}
})
]);
}

@ -45,7 +45,7 @@ in {
}

# Smart defaults for Grafana
{
(mkIf cfg.logging.enable {
# Grafana-specific log setup
homelab.services.${serviceName}.logging = mkDefault {
files = ["/var/log/grafana/grafana.log"];

@ -59,8 +59,9 @@ in {
component = "dashboard";
};
};
}
{
})

(mkIf cfg.monitoring.enable {
homelab.services.${serviceName}.monitoring = mkDefault {
metrics.path = "/metrics";
healthCheck = {

@ -72,13 +73,14 @@ in {
tier = "monitoring";
};
};
}
{
})

(mkIf cfg.proxy.enable {
# Grafana needs auth by default (admin interface)
homelab.services.${serviceName}.proxy = mkDefault {
subdomain = "grafana";
# enableAuth = true;
};
}
})
]);
}

@ -12,7 +12,9 @@ with lib; let
# Generate Prometheus scrape configs from global monitoring data
prometheusScrapeConfigs = let
# Get all metrics - try global first, fallback to local
allMetrics = homelabCfg.monitoring.global.allMetrics;
allMetrics = homelabCfg.monitoring.global.allMetrics
or homelabCfg.monitoring.allMetrics
or [];

jobGroups = groupBy (m: m.jobName) allMetrics;

@ -155,19 +157,11 @@ in {
default = [];
description = "Additional rule files to load";
};

systemdServices = mkOption {
type = types.listOf types.str;
default = [
"prometheus.service"
"prometheus"
];
description = "Systemd services to monitor";
};
};

# Service configuration with smart defaults
config = mkIf cfg.enable (mkMerge [
# Core Prometheus service
{
services.prometheus = {
enable = true;

@ -202,21 +196,39 @@ in {
};

networking.firewall.allowedTCPPorts = [cfg.port];
}
{
homelab.services.${serviceName}.monitoring = {
metrics.path = "/metrics";
healthCheck.path = "/-/healthy"; # ✅ Enables health checks
healthCheck.conditions = ["[STATUS] == 200" "[RESPONSE_TIME] < 1000"];

homelab.services.${serviceName}.monitoring.enable = mkDefault true;
}

# Smart defaults for Prometheus
(mkIf cfg.monitoring.enable {
homelab.services.${serviceName}.monitoring = mkDefault {
metrics = {
path = "/metrics";
extraEndpoints = [];
};
healthCheck = {
path = "/-/healthy";
conditions = ["[STATUS] == 200" "[RESPONSE_TIME] < 1000"];
extraChecks = [
{
name = "prometheus-ready";
port = cfg.port;
path = "/-/ready";
conditions = ["[STATUS] == 200"];
group = "monitoring";
}
];
};
extraLabels = {
component = "monitoring-server";
tier = "monitoring";
};
};
}
{
homelab.services.${serviceName}.logging = {
})

(mkIf cfg.logging.enable {
homelab.services.${serviceName}.logging = mkDefault {
files = ["/var/log/prometheus/prometheus.log"];
parsing = {
# Prometheus log format: ts=2024-01-01T12:00:00.000Z caller=main.go:123 level=info msg="message"

@ -228,11 +240,13 @@ in {
application = "prometheus";
};
};
}
{
homelab.services.${serviceName}.proxy = {
enableAuth = true;
})

(mkIf cfg.proxy.enable {
homelab.services.${serviceName}.proxy = mkDefault {
subdomain = "prometheus";
enableAuth = true; # Admin interface needs protection
};
}
})
]);
}

@ -3,5 +3,4 @@
pkgs: {
# example = pkgs.callPackage ./example { };
ente-web = pkgs.callPackage ./ente-web.nix {};
homelab-docs = pkgs.callPackage ./homelab-docs {};
}

@@ -1,50 +0,0 @@
# homelab-docs.nix - Main documentation generator package
{
  lib,
  stdenv,
  writeShellScriptBin,
  jq,
  nixfmt,
}: let
  # Import individual CLI generators
  docsGenerator = import ./main.nix {inherit writeShellScriptBin;};
  fleetDocsGenerator = import ./fleet.nix {inherit writeShellScriptBin jq;};
  nodeDocsGenerator = import ./nodes.nix {inherit writeShellScriptBin jq;};
  serviceDocsGenerator = import ./services.nix {inherit writeShellScriptBin jq;};
  deploymentDocsGenerator = import ./deployment.nix {inherit writeShellScriptBin jq;};
  readmeGenerator = import ./readme.nix {inherit writeShellScriptBin jq;};
in
  stdenv.mkDerivation {
    pname = "homelab-docs";
    version = "1.0.0";

    dontUnpack = true;
    dontBuild = true;

    installPhase = ''
      mkdir -p $out/bin

      # Install all the generators
      cp ${docsGenerator}/bin/homelab-generate-docs $out/bin/
      cp ${fleetDocsGenerator}/bin/homelab-docs-fleet $out/bin/
      cp ${nodeDocsGenerator}/bin/homelab-docs-nodes $out/bin/
      cp ${serviceDocsGenerator}/bin/homelab-docs-services $out/bin/
      cp ${deploymentDocsGenerator}/bin/homelab-docs-deployment $out/bin/
      cp ${readmeGenerator}/bin/homelab-docs-readme $out/bin/

      # Make sure they're executable
      chmod +x $out/bin/*
    '';

    meta = with lib; {
      description = "Documentation generator for homelab colmena deployments";
      longDescription = ''
        A collection of tools to generate comprehensive documentation
        for homelab deployments managed with colmena. Extracts configuration
        from flakes and generates markdown documentation.
      '';
      license = licenses.mit;
      maintainers = [];
      platforms = platforms.all;
    };
  }
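The `installPhase` above only copies pre-built scripts into `$out/bin` and marks them executable. A minimal standalone sketch of that copy-and-chmod pattern (the temp directory and `tool` script are stand-ins, not from the repo):

```shell
# Sketch of the installPhase pattern: copy a pre-built script into bin/
# and mark everything in bin/ executable. "$out" here is a temp dir
# standing in for the Nix store output path.
out=$(mktemp -d)
mkdir -p "$out/bin"
printf '#!/bin/sh\necho hi\n' > "$out/bin/tool"
chmod +x "$out/bin/"*
"$out/bin/tool"   # runs the installed script, prints "hi"
```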
@@ -1,89 +0,0 @@
# homelab-docs-deployment.nix - Deployment documentation generator CLI
{
  writeShellScriptBin,
  jq,
}:
writeShellScriptBin "homelab-docs-deployment" ''
  #!/usr/bin/env bash
  set -euo pipefail

  cat << EOF
  # Current Deployment State

  > Current homelab deployment configuration
  >
  > Generated on: $(date -R)
  > Working directory: $(pwd)

  ## Deployment Summary

  EOF

  # Get deployment summary
  deployment_summary=$(colmena eval -E '{ nodes, pkgs, lib, ... }:
    let
      homelabNodes = lib.filterAttrs (name: node: node.config.homelab.enable or false) nodes;
      allServices = lib.flatten (lib.mapAttrsToList (nodeName: node:
        lib.attrNames (lib.filterAttrs (n: v: v.enable or false) (node.config.homelab.services or {}))
      ) homelabNodes);
    in {
      totalNodes = lib.length (lib.attrNames nodes);
      homelabEnabledNodes = lib.length (lib.attrNames homelabNodes);
      uniqueServices = lib.length (lib.unique allServices);
      totalServiceInstances = lib.length allServices;
      nodeNames = lib.attrNames nodes;
      homelabNodeNames = lib.attrNames homelabNodes;
    }')

  total_nodes=$(echo "$deployment_summary" | ${jq}/bin/jq -r '.totalNodes')
  homelab_nodes=$(echo "$deployment_summary" | ${jq}/bin/jq -r '.homelabEnabledNodes')
  unique_services=$(echo "$deployment_summary" | ${jq}/bin/jq -r '.uniqueServices')
  service_instances=$(echo "$deployment_summary" | ${jq}/bin/jq -r '.totalServiceInstances')

  echo "| Metric | Count |"
  echo "|--------|-------|"
  echo "| Total Nodes | $total_nodes |"
  echo "| Homelab-Enabled Nodes | $homelab_nodes |"
  echo "| Unique Services | $unique_services |"
  echo "| Service Instances | $service_instances |"
  echo

  echo "## Node Status"
  echo

  # Get detailed node status
  node_status=$(colmena eval -E '{ nodes, pkgs, lib, ... }:
    lib.mapAttrs (name: node: {
      homelabEnabled = node.config.homelab.enable or false;
      environment = node.config.homelab.environment or "unknown";
      serviceCount = if (node.config.homelab.enable or false) then
        lib.length (lib.attrNames (lib.filterAttrs (n: v: v.enable or false) (node.config.homelab.services or {})))
      else 0;
      monitoringEnabled = if (node.config.homelab.enable or false) then
        node.config.homelab.monitoring.enable or false
      else false;
      backupsEnabled = if (node.config.homelab.enable or false) then
        node.config.homelab.backups.enable or false
      else false;
      proxyEnabled = if (node.config.homelab.enable or false) then
        node.config.homelab.reverseProxy.enable or false
      else false;
    }) nodes')

  echo "| Node | Homelab | Environment | Services | Monitoring | Backups | Proxy |"
  echo "|------|---------|-------------|----------|------------|---------|-------|"

  echo "$node_status" | ${jq}/bin/jq -r 'to_entries[] |
    [.key, (.value.homelabEnabled | if . then "✅" else "❌" end), .value.environment, (.value.serviceCount | tostring),
     (.value.monitoringEnabled | if . then "✅" else "❌" end),
     (.value.backupsEnabled | if . then "✅" else "❌" end),
     (.value.proxyEnabled | if . then "✅" else "❌" end)] |
    @tsv' | while IFS=$'\t' read -r node homelab env services monitoring backups proxy; do
    echo "| \`$node\` | $homelab | $env | $services | $monitoring | $backups | $proxy |"
  done

  echo
  echo "---"
  echo
  echo "*Deployment state extracted from live colmena configuration*"
''
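The table-rendering idiom used throughout these generators (jq's `@tsv` piped into `while IFS=$'\t' read`) can be sketched standalone, with hardcoded sample data in place of the colmena/jq pipeline (the values `node-a`/`true`/`prod` are hypothetical):

```shell
# Sketch of the TSV-to-markdown-row pattern: tab-separated fields are
# split by read into named variables, then formatted as a table row.
printf 'node-a\ttrue\tprod\n' | while IFS=$'\t' read -r node homelab env; do
  echo "| \`$node\` | $homelab | $env |"
done
# → | `node-a` | true | prod |
```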
@@ -1,91 +0,0 @@
{
  writeShellScriptBin,
  jq,
}:
writeShellScriptBin "homelab-docs-fleet" ''
  #!/usr/bin/env bash
  set -euo pipefail

  cat << EOF
  # Homelab Fleet Overview

  > Auto-generated fleet overview
  >
  > Generated on: $(date -R)
  > Source: $(pwd)

  ## Fleet Statistics

  EOF

  # Get basic fleet stats
  echo "### Basic Information"
  echo

  fleet_stats=$(colmena eval -E '{ nodes, pkgs, lib, ... }: {
    totalNodes = lib.length (lib.attrNames nodes);
    nodeNames = lib.attrNames nodes;
  }')

  total_nodes=$(echo "$fleet_stats" | ${jq}/bin/jq -r '.totalNodes')
  node_names=$(echo "$fleet_stats" | ${jq}/bin/jq -r '.nodeNames[]' | tr '\n' ' ')

  echo "| Metric | Value |"
  echo "|--------|-------|"
  echo "| Total Nodes | $total_nodes |"
  echo "| Node Names | $node_names |"
  echo

  # Get homelab-enabled nodes
  echo "### Homelab Configuration"
  echo

  homelab_info=$(colmena eval -E '{ nodes, pkgs, lib, ... }: {
    homelabNodes = lib.mapAttrs (name: node: {
      enabled = node.config.homelab.enable or false;
      hostname = node.config.homelab.hostname or null;
      environment = node.config.homelab.environment or null;
    }) nodes;
  }')

  echo "| Node | Homelab Enabled | Hostname | Environment |"
  echo "|------|----------------|----------|-------------|"

  echo "$homelab_info" | ${jq}/bin/jq -r '.homelabNodes | to_entries[] |
    [.key, (.value.enabled | tostring), (.value.hostname // "N/A"), (.value.environment // "N/A")] |
    @tsv' | while IFS=$'\t' read -r node enabled hostname environment; do
    enabled_icon=$(if [[ "$enabled" == "true" ]]; then echo "✅"; else echo "❌"; fi)
    echo "| \`$node\` | $enabled_icon | $hostname | $environment |"
  done

  echo

  # Get service distribution
  echo "### Service Distribution"
  echo

  service_info=$(colmena eval -E '{ nodes, pkgs, lib, ... }:
    lib.mapAttrs (name: node:
      if (node.config.homelab.enable or false) then {
        serviceCount = lib.length (lib.attrNames (lib.filterAttrs (n: v: v.enable or false) (node.config.homelab.services or {})));
        serviceNames = lib.attrNames (lib.filterAttrs (n: v: v.enable or false) (node.config.homelab.services or {}));
      } else {
        serviceCount = 0;
        serviceNames = [];
      }
    ) nodes')

  echo "| Node | Service Count | Services |"
  echo "|------|---------------|----------|"

  echo "$service_info" | ${jq}/bin/jq -r 'to_entries[] |
    [.key, (.value.serviceCount | tostring), (.value.serviceNames | join(", "))] |
    @tsv' | while IFS=$'\t' read -r node count services; do
    echo "| \`$node\` | $count | $services |"
  done

  echo
  echo "---"
  echo
  echo "*Fleet overview generated from colmena evaluation*"
''
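The boolean-to-icon mapping used for table cells above can be factored into a tiny helper; a minimal sketch (the `icon_for` name is hypothetical, and `[ ]` is used here for POSIX portability where the scripts use bash's `[[ ]]`):

```shell
# Sketch of the enabled_icon mapping: "true" renders as a check mark,
# anything else as a cross.
icon_for() { if [ "$1" = "true" ]; then echo "✅"; else echo "❌"; fi; }
icon_for true    # → ✅
icon_for false   # → ❌
```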
@@ -1,73 +0,0 @@
{writeShellScriptBin}:
writeShellScriptBin "homelab-generate-docs" ''
  #!/usr/bin/env bash
  set -euo pipefail

  # Colors
  BLUE='\033[0;34m'
  GREEN='\033[0;32m'
  YELLOW='\033[1;33m'
  RED='\033[0;31m'
  NC='\033[0m'

  info() { echo -e "''${BLUE}$1''${NC}"; }
  success() { echo -e "''${GREEN}$1''${NC}"; }
  warn() { echo -e "''${YELLOW}$1''${NC}"; }
  error() { echo -e "''${RED}$1''${NC}"; }

  # Configuration
  DOCS_DIR="''${1:-./docs}"

  info "📚 Generating homelab documentation..."
  echo "  Output directory: $DOCS_DIR"
  echo

  # Check if we're in a directory with a flake
  if [[ ! -f flake.nix ]]; then
    error "No flake.nix found in current directory"
    echo "Please run this command from your homelab flake directory"
    exit 1
  fi

  # Check if colmena is available
  if ! command -v colmena >/dev/null 2>&1; then
    error "colmena command not found."
    echo "Please ensure colmena is available in your environment"
    exit 1
  fi

  # Create docs directory
  mkdir -p "$DOCS_DIR"

  # Generate fleet overview
  info "  🌐 Generating fleet overview..."
  homelab-docs-fleet > "$DOCS_DIR/fleet-overview.md"

  # Generate node documentation
  info "  🖥️ Generating node configurations..."
  homelab-docs-nodes > "$DOCS_DIR/nodes.md"

  # Generate service documentation
  info "  ⚙️ Generating service configurations..."
  homelab-docs-services > "$DOCS_DIR/services.md"

  # Generate current deployment
  info "  🏠 Generating current deployment..."
  homelab-docs-deployment > "$DOCS_DIR/current-deployment.md"

  # Generate README
  info "  📋 Generating README..."
  homelab-docs-readme > "$DOCS_DIR/README.md"

  success "✅ Documentation generated successfully!"
  echo
  echo "Generated files:"
  echo "  🌐 fleet-overview.md - Fleet statistics and overview"
  echo "  🖥️ nodes.md - Per-node configurations"
  echo "  ⚙️ services.md - Service configurations"
  echo "  🏠 current-deployment.md - Current deployment state"
  echo "  📋 README.md - Documentation index"
  echo
  echo "💡 Tip: Add these files to your repository and set up GitHub Actions"
  echo "   to automatically regenerate documentation on changes!"
''
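The generator above guards its preconditions (a `flake.nix` present, `colmena` on PATH) before doing any work. A minimal sketch of that fail-early guard, factored into a helper (the `require_cmd` name is hypothetical; `sh` is used here only as a command that is guaranteed to exist):

```shell
# Sketch of the precondition guard: check a required tool is on PATH,
# report and fail otherwise, so later steps never run half-configured.
require_cmd() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}
require_cmd sh && echo "ok"   # → ok
```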
@@ -1,123 +0,0 @@
{
  writeShellScriptBin,
  jq,
}:
writeShellScriptBin "homelab-docs-nodes" ''
  #!/usr/bin/env bash
  set -euo pipefail

  cat << EOF
  # Node Configurations

  > Detailed per-node configuration
  >
  > Generated on: $(date -R)

  EOF

  # Get all node information
  node_info=$(colmena eval -E '{ nodes, pkgs, lib, ... }:
    lib.mapAttrs (name: node: {
      # Basic system info
      nixosVersion = node.config.system.nixos.version;
      hostName = node.config.networking.hostName;
      system = node.config.nixpkgs.system;

      # Homelab config (safe extraction)
      homelab = if (node.config.homelab.enable or false) then {
        enabled = true;
        hostname = node.config.homelab.hostname or null;
        domain = node.config.homelab.domain or null;
        externalDomain = node.config.homelab.externalDomain or null;
        environment = node.config.homelab.environment or null;
        location = node.config.homelab.location or null;
        tags = node.config.homelab.tags or [];
      } else {
        enabled = false;
      };

      # Services (safe extraction)
      services = if (node.config.homelab.enable or false) then
        lib.mapAttrs (svcName: svc: {
          enabled = svc.enable or false;
          port = svc.port or null;
          description = svc.description or svcName;
          tags = svc.tags or [];
        }) (node.config.homelab.services or {})
      else {};
    }) nodes')

  echo "$node_info" | ${jq}/bin/jq -r 'to_entries[] | .key' | while read -r node; do
    echo "## Node: $node"
    echo

    # Basic system information
    echo "### System Information"
    echo

    nixos_version=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].nixosVersion")
    hostname=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].hostName")
    system=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].system")

    echo "| Property | Value |"
    echo "|----------|-------|"
    echo "| NixOS Version | \`$nixos_version\` |"
    echo "| Hostname | \`$hostname\` |"
    echo "| System | \`$system\` |"
    echo

    # Homelab configuration
    homelab_enabled=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].homelab.enabled")

    if [[ "$homelab_enabled" == "true" ]]; then
      echo "### Homelab Configuration"
      echo

      hl_hostname=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].homelab.hostname // \"N/A\"")
      hl_domain=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].homelab.domain // \"N/A\"")
      hl_external=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].homelab.externalDomain // \"N/A\"")
      hl_env=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].homelab.environment // \"N/A\"")
      hl_location=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].homelab.location // \"N/A\"")
      hl_tags=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].homelab.tags | join(\", \")")

      echo "| Property | Value |"
      echo "|----------|-------|"
      echo "| Homelab Hostname | \`$hl_hostname\` |"
      echo "| Domain | \`$hl_domain\` |"
      echo "| External Domain | \`$hl_external\` |"
      echo "| Environment | \`$hl_env\` |"
      echo "| Location | \`$hl_location\` |"
      echo "| Tags | $hl_tags |"
      echo

      # Services
      echo "### Services"
      echo

      services_data=$(echo "$node_info" | ${jq}/bin/jq -r ".[\"$node\"].services")
      service_count=$(echo "$services_data" | ${jq}/bin/jq 'length')

      if [[ "$service_count" -gt 0 ]]; then
        echo "| Service | Enabled | Port | Description | Tags |"
        echo "|---------|---------|------|-------------|------|"

        echo "$services_data" | ${jq}/bin/jq -r 'to_entries[] |
          [.key, (.value.enabled | tostring), (.value.port // "N/A" | tostring), (.value.description // "N/A"), (.value.tags | join(", "))] |
          @tsv' | while IFS=$'\t' read -r service enabled port description tags; do
          enabled_icon=$(if [[ "$enabled" == "true" ]]; then echo "✅"; else echo "❌"; fi)
          echo "| \`$service\` | $enabled_icon | $port | $description | $tags |"
        done
      else
        echo "No services configured."
      fi
    else
      echo "### Homelab Configuration"
      echo
      echo "❌ Homelab is not enabled on this node."
    fi

    echo
    echo "---"
    echo
  done
''
@@ -1,124 +0,0 @@
# homelab-docs-readme.nix - README generator CLI
{
  writeShellScriptBin,
  jq,
}:
writeShellScriptBin "homelab-docs-readme" ''
  #!/usr/bin/env bash
  set -euo pipefail

  cat << EOF
  # Homelab Documentation

  > Auto-generated documentation for the homelab deployment
  >
  > Generated on: $(date -R)
  > Source: $(pwd)

  ## 📚 Documentation Files

  This documentation is automatically generated from your colmena flake configuration.

  ### 📊 Overview Documents
  - **[Fleet Overview](fleet-overview.md)** - High-level fleet statistics and service distribution
  - **[Current Deployment](current-deployment.md)** - Current deployment state and node status

  ### 📖 Detailed Configuration
  - **[Node Configurations](nodes.md)** - Per-node detailed configuration and services
  - **[Service Configurations](services.md)** - Service configurations across the fleet

  ## 🚀 Quick Actions

  ### View Current Status
  \`\`\`bash
  # Service status across fleet (if homelab CLI is available)
  homelab services --global

  # Backup status
  homelab backups --global

  # Overall status
  homelab status
  \`\`\`

  ### Update Documentation
  \`\`\`bash
  # Regenerate all documentation
  homelab-generate-docs ./docs

  # Generate in different directory
  homelab-generate-docs /path/to/output
  \`\`\`

  ## 📋 Quick Stats

  EOF

  # Add live stats
  quick_stats=$(colmena eval -E '{ nodes, pkgs, lib, ... }:
    let
      homelabNodes = lib.filterAttrs (name: node: node.config.homelab.enable or false) nodes;
    in {
      totalNodes = lib.length (lib.attrNames nodes);
      homelabNodes = lib.length (lib.attrNames homelabNodes);
    }')

  total_nodes=$(echo "$quick_stats" | ${jq}/bin/jq -r '.totalNodes')
  homelab_nodes=$(echo "$quick_stats" | ${jq}/bin/jq -r '.homelabNodes')

  echo "- **Total Nodes**: $total_nodes"
  echo "- **Homelab-Enabled Nodes**: $homelab_nodes"
  echo "- **Generated**: $(date -R)"
  echo
  echo "## 🛠️ Management Tools"
  echo
  echo "### Documentation Commands"
  echo "- \`homelab-generate-docs\` - Regenerate this documentation"
  echo "- \`homelab-docs-fleet\` - Generate fleet overview only"
  echo "- \`homelab-docs-nodes\` - Generate node configurations only"
  echo "- \`homelab-docs-services\` - Generate service configurations only"
  echo "- \`homelab-docs-deployment\` - Generate deployment state only"
  echo
  echo "### Colmena Commands"
  echo "- \`colmena eval\` - Evaluate flake expressions"
  echo "- \`colmena apply\` - Deploy configuration changes"
  echo "- \`colmena build\` - Build configurations without deploying"
  echo
  echo "## 🎯 Integration with CI/CD"
  echo
  echo "### GitHub Actions Example"
  echo
  echo "\`\`\`yaml"
  echo "name: Generate Documentation"
  echo "on:"
  echo "  push:"
  echo "    branches: [ main ]"
  echo ""
  echo "jobs:"
  echo "  docs:"
  echo "    runs-on: ubuntu-latest"
  echo "    steps:"
  echo "      - uses: actions/checkout@v4"
  echo "      - uses: cachix/install-nix-action@v24"
  echo "      - name: Generate docs"
  echo "        run: nix develop --command homelab-generate-docs ./docs"
  echo "      - name: Commit docs"
  echo "        run: |"
  echo "          git add docs/"
  echo "          git commit -m \"docs: update homelab documentation\" || exit 0"
  echo "          git push"
  echo "\`\`\`"
  echo
  echo "### Manual Generation"
  echo
  echo "\`\`\`bash"
  echo "# From your homelab directory"
  echo "nix develop"
  echo "homelab-generate-docs ./docs"
  echo "git add docs/ && git commit -m \"Update docs\""
  echo "\`\`\`"
  echo
  echo "---"
  echo
  echo "*This documentation reflects the live state of your homelab deployment as evaluated by colmena.*"
''
@@ -1,384 +0,0 @@
{
  writeShellScriptBin,
  jq,
}:
writeShellScriptBin "homelab-docs-services" ''
  #!/usr/bin/env bash
  set -euo pipefail

  cat << EOF
  # Service Catalog

  > Complete service documentation with core options, feature integrations, and smart defaults
  >
  > Generated on: $(date -R)

  This document provides comprehensive documentation for homelab services, organized by:
  - **Core Service Options**: The main service configuration
  - **Feature Integrations**: Available monitoring, logging, and proxy features
  - **Service Defaults**: What this service configures by default for each feature

  EOF

  # Extract comprehensive service information
  echo "Extracting service information..." >&2
  services_catalog=$(colmena eval -E '
    { nodes, pkgs, lib, ... }:
    let
      # Helper to extract option information
      extractOptions = path: options:
        lib.flatten (lib.mapAttrsToList (name: value:
          let
            currentPath = path ++ [name];
            pathStr = lib.concatStringsSep "." currentPath;
          in
          if (value._type or null) == "option" then
            [{
              name = pathStr;
              type = value.type.description or "unknown";
              default = value.default or null;
              defaultText = if value ? defaultText then value.defaultText.text or null else null;
              description = value.description or "No description";
              readOnly = value.readOnly or false;
            }]
          else if lib.isAttrs value && !(lib.hasAttr "_type" value) then
            extractOptions currentPath value
          else []
        ) options);

      # Get first node for option definitions
      firstNode = lib.head (lib.attrValues nodes);
      homelabServices = firstNode.options.homelab.services or {};

      # Process each service
      serviceInfo = lib.mapAttrs (serviceName: serviceOptions:
        let
          allOptions = extractOptions [] serviceOptions;

          # Separate core options from feature options
          coreOptions = lib.filter (opt:
            !(lib.hasPrefix "monitoring." opt.name) &&
            !(lib.hasPrefix "logging." opt.name) &&
            !(lib.hasPrefix "proxy." opt.name)
          ) allOptions;

          monitoringOptions = lib.filter (opt: lib.hasPrefix "monitoring." opt.name) allOptions;
          loggingOptions = lib.filter (opt: lib.hasPrefix "logging." opt.name) allOptions;
          proxyOptions = lib.filter (opt: lib.hasPrefix "proxy." opt.name) allOptions;

          # Get actual service configuration to see what defaults are set
          serviceConfigs = lib.mapAttrs (nodeName: node:
            let
              serviceConfig = node.config.homelab.services.''${serviceName} or null;
            in
            if serviceConfig != null then {
              exists = true;
              enabled = serviceConfig.enable or false;
              # Extract the computed configuration values
              monitoring = serviceConfig.monitoring or {};
              logging = serviceConfig.logging or {};
              proxy = serviceConfig.proxy or {};
              # Get other core options
              coreConfig = removeAttrs serviceConfig ["monitoring" "logging" "proxy"];
            } else {
              exists = false;
            }
          ) nodes;

          # Find a node where this service exists to get default values
          nodeWithService = lib.findFirst (nodeName: serviceConfigs.''${nodeName}.exists) null (lib.attrNames nodes);
          exampleConfig = if nodeWithService != null then serviceConfigs.''${nodeWithService} else null;

        in {
          inherit serviceName;
          coreOptions = coreOptions;
          features = {
            monitoring = {
              available = monitoringOptions != [];
              options = monitoringOptions;
              defaults = if exampleConfig != null then exampleConfig.monitoring else {};
            };
            logging = {
              available = loggingOptions != [];
              options = loggingOptions;
              defaults = if exampleConfig != null then exampleConfig.logging else {};
            };
            proxy = {
              available = proxyOptions != [];
              options = proxyOptions;
              defaults = if exampleConfig != null then exampleConfig.proxy else {};
            };
          };
          deployment = {
            totalNodes = lib.length (lib.filter (cfg: cfg.exists) (lib.attrValues serviceConfigs));
            enabledNodes = lib.length (lib.filter (cfg: cfg.exists && cfg.enabled) (lib.attrValues serviceConfigs));
          };
        }
      ) homelabServices;

    in {
      services = serviceInfo;
      totalServices = lib.length (lib.attrNames serviceInfo);
    }
  ')

  total_services=$(echo "$services_catalog" | ${jq}/bin/jq -r '.totalServices')

  echo "## Overview"
  echo
  echo "**Total Available Services:** $total_services"
  echo

  # Service matrix
  echo "## Service Integration Matrix"
  echo
  echo "| Service | Core Options | Monitoring | Logging | Proxy | Deployments |"
  echo "|---------|--------------|------------|---------|-------|-------------|"

  echo "$services_catalog" | ${jq}/bin/jq -r '.services | keys[]' | sort | while read -r service; do
    service_data=$(echo "$services_catalog" | ${jq}/bin/jq -r ".services[\"$service\"]")

    core_count=$(echo "$service_data" | ${jq}/bin/jq -r '.coreOptions | length')
    has_monitoring=$(echo "$service_data" | ${jq}/bin/jq -r '.features.monitoring.available')
    has_logging=$(echo "$service_data" | ${jq}/bin/jq -r '.features.logging.available')
    has_proxy=$(echo "$service_data" | ${jq}/bin/jq -r '.features.proxy.available')
    enabled_deployments=$(echo "$service_data" | ${jq}/bin/jq -r '.deployment.enabledNodes')

    monitoring_icon=$(if [[ "$has_monitoring" == "true" ]]; then echo "📊"; else echo "❌"; fi)
    logging_icon=$(if [[ "$has_logging" == "true" ]]; then echo "📝"; else echo "❌"; fi)
    proxy_icon=$(if [[ "$has_proxy" == "true" ]]; then echo "🔀"; else echo "❌"; fi)

    echo "| \`$service\` | $core_count | $monitoring_icon | $logging_icon | $proxy_icon | $enabled_deployments |"
  done

  echo
  echo "**Legend:** 📊📝🔀 = Feature available, ❌ = Feature not available"
  echo

  echo "## Service Documentation"
  echo

  # Process each service
  echo "$services_catalog" | ${jq}/bin/jq -r '.services | keys[]' | sort | while read -r service; do
    echo "### $service"
    echo

    service_data=$(echo "$services_catalog" | ${jq}/bin/jq -r ".services[\"$service\"]")
    enabled_deployments=$(echo "$service_data" | ${jq}/bin/jq -r '.deployment.enabledNodes')
    total_deployments=$(echo "$service_data" | ${jq}/bin/jq -r '.deployment.totalNodes')

    if [[ "$total_deployments" -gt 0 ]]; then
      echo "**Deployment Status:** $enabled_deployments/$total_deployments nodes have this service enabled"
    else
      echo "**Deployment Status:** Available but not configured"
    fi
    echo

    # Core Service Configuration
    echo "#### Core Service Options"
    echo
    echo "The main configuration options for $service:"
    echo
    echo '```nix'
    echo "homelab.services.$service = {"

    echo "$service_data" | ${jq}/bin/jq -r '.coreOptions[] | @base64' | while IFS= read -r option_b64; do
      option=$(echo "$option_b64" | base64 -d)

      name=$(echo "$option" | ${jq}/bin/jq -r '.name')
      type=$(echo "$option" | ${jq}/bin/jq -r '.type')
      default_val=$(echo "$option" | ${jq}/bin/jq -r '.default')
      description=$(echo "$option" | ${jq}/bin/jq -r '.description')
      read_only=$(echo "$option" | ${jq}/bin/jq -r '.readOnly')

      if [[ "$read_only" == "true" ]]; then
        continue
      fi

      clean_description=$(echo "$description" | sed 's/"/\\"/g' | tr -d $'\n\r')

      if [[ "$default_val" == "null" ]]; then
        echo "  # $name = <$type>; # $clean_description"
      else
        echo "  $name = $default_val; # $clean_description"
      fi
    done

    echo "};"
    echo '```'
    echo

    # Feature Integrations
    has_monitoring=$(echo "$service_data" | ${jq}/bin/jq -r '.features.monitoring.available')
    has_logging=$(echo "$service_data" | ${jq}/bin/jq -r '.features.logging.available')
    has_proxy=$(echo "$service_data" | ${jq}/bin/jq -r '.features.proxy.available')

    if [[ "$has_monitoring" == "true" || "$has_logging" == "true" || "$has_proxy" == "true" ]]; then
      echo "#### Feature Integrations"
      echo

      # Monitoring Feature
      if [[ "$has_monitoring" == "true" ]]; then
        echo "##### 📊 Monitoring Integration"
        echo
        echo "Available monitoring options:"
        echo
        echo '```nix'
        echo "homelab.services.$service = {"
        echo "  # ... core options above ..."
        echo

        echo "$service_data" | ${jq}/bin/jq -r '.features.monitoring.options[] | @base64' | while IFS= read -r option_b64; do
          option=$(echo "$option_b64" | base64 -d)

          name=$(echo "$option" | ${jq}/bin/jq -r '.name')
          type=$(echo "$option" | ${jq}/bin/jq -r '.type')
          default_val=$(echo "$option" | ${jq}/bin/jq -r '.default')
          description=$(echo "$option" | ${jq}/bin/jq -r '.description')
          read_only=$(echo "$option" | ${jq}/bin/jq -r '.readOnly')

          if [[ "$read_only" == "true" ]]; then
            continue
          fi

          clean_description=$(echo "$description" | sed 's/"/\\"/g' | tr -d $'\n\r')

          if [[ "$default_val" == "null" ]]; then
            echo "  # $name = <$type>; # $clean_description"
          else
            echo "  $name = $default_val; # $clean_description"
          fi
        done

        echo "};"
        echo '```'

        # Show service-specific monitoring defaults
        monitoring_defaults=$(echo "$service_data" | ${jq}/bin/jq -r '.features.monitoring.defaults')
        if [[ "$monitoring_defaults" != "{}" && "$monitoring_defaults" != "null" ]]; then
          echo
          echo "**$service sets these monitoring defaults:**"
          echo '```nix'
          echo "$monitoring_defaults" | ${jq}/bin/jq -r 'to_entries[] | "  \(.key) = \(.value);"'
          echo '```'
        fi
        echo
      fi

      # Logging Feature
      if [[ "$has_logging" == "true" ]]; then
        echo "##### 📝 Logging Integration"
        echo
        echo "Available logging options:"
        echo
        echo '```nix'
        echo "homelab.services.$service = {"
        echo "  # ... core options above ..."
        echo

        echo "$service_data" | ${jq}/bin/jq -r '.features.logging.options[] | @base64' | while IFS= read -r option_b64; do
          option=$(echo "$option_b64" | base64 -d)

          name=$(echo "$option" | ${jq}/bin/jq -r '.name')
          type=$(echo "$option" | ${jq}/bin/jq -r '.type')
          default_val=$(echo "$option" | ${jq}/bin/jq -r '.default')
          description=$(echo "$option" | ${jq}/bin/jq -r '.description')
          read_only=$(echo "$option" | ${jq}/bin/jq -r '.readOnly')

          if [[ "$read_only" == "true" ]]; then
            continue
          fi

          clean_description=$(echo "$description" | sed 's/"/\\"/g' | tr -d $'\n\r')

          if [[ "$default_val" == "null" ]]; then
            echo "  # $name = <$type>; # $clean_description"
          else
            echo "  $name = $default_val; # $clean_description"
          fi
        done

        echo "};"
        echo '```'

        # Show service-specific logging defaults
        logging_defaults=$(echo "$service_data" | ${jq}/bin/jq -r '.features.logging.defaults')
        if [[ "$logging_defaults" != "{}" && "$logging_defaults" != "null" ]]; then
          echo
          echo "**$service sets these logging defaults:**"
          echo '```nix'
          echo "$logging_defaults" | ${jq}/bin/jq -r 'to_entries[] | "  \(.key) = \(.value);"'
          echo '```'
        fi
        echo
      fi

      # Proxy Feature
      if [[ "$has_proxy" == "true" ]]; then
        echo "##### 🔀 Proxy Integration"
        echo
        echo "Available proxy options:"
        echo
        echo '```nix'
        echo "homelab.services.$service = {"
        echo "  # ... core options above ..."
        echo

        echo "$service_data" | ${jq}/bin/jq -r '.features.proxy.options[] | @base64' | while IFS= read -r option_b64; do
          option=$(echo "$option_b64" | base64 -d)

          name=$(echo "$option" | ${jq}/bin/jq -r '.name')
          type=$(echo "$option" | ${jq}/bin/jq -r '.type')
          default_val=$(echo "$option" | ${jq}/bin/jq -r '.default')
          description=$(echo "$option" | ${jq}/bin/jq -r '.description')
          read_only=$(echo "$option" | ${jq}/bin/jq -r '.readOnly')

          if [[ "$read_only" == "true" ]]; then
            continue
          fi

          clean_description=$(echo "$description" | sed 's/"/\\"/g' | tr -d $'\n\r')

          if [[ "$default_val" == "null" ]]; then
            echo "  # $name = <$type>; # $clean_description"
          else
            echo "  $name = $default_val; # $clean_description"
          fi
        done

        echo "};"
        echo '```'

        # Show service-specific proxy defaults
        proxy_defaults=$(echo "$service_data" | ${jq}/bin/jq -r '.features.proxy.defaults')
        if [[ "$proxy_defaults" != "{}" && "$proxy_defaults" != "null" ]]; then
          echo
          echo "**$service sets these proxy defaults:**"
          echo '```nix'
          echo "$proxy_defaults" | ${jq}/bin/jq -r 'to_entries[] | "  \(.key) = \(.value);"'
          echo '```'
        fi
        echo
      fi
    fi

    echo "---"
    echo
  done

  echo "## Feature Reference"
  echo
  echo "### Integration Features"
  echo
  echo "Homelab services can integrate with three main features:"
  echo
  echo "- **📊 Monitoring**: Prometheus metrics and health checks"
  echo "- **📝 Logging**: Centralized log collection with Promtail/Loki"
  echo "- **🔀 Proxy**: Reverse proxy with SSL and authentication"
  echo
  echo "Each service can import these features and set service-specific defaults."
  echo
  echo "---"
  echo
  echo "*This documentation is generated from actual NixOS module evaluations.*"
''
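The option loops above rely on jq's `@base64` filter: each JSON object is base64-encoded so it survives word-splitting as a single token in the `while read` loop, then decoded per item with `base64 -d`. A minimal standalone sketch of that round trip (the sample JSON is hypothetical):

```shell
# Sketch of the @base64 round-trip: encode a JSON object to one
# whitespace-free token, then decode it back intact.
item=$(printf '{"name":"port","type":"int"}' | base64)
decoded=$(echo "$item" | base64 -d)
echo "$decoded"   # → {"name":"port","type":"int"}
```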