Compare commits

3 Commits

Author      SHA1         Message            Date
mjallen18   ab81e78b60   init xrt and fflm  2026-03-25 20:46:42 -05:00
mjallen18   2013804b17   lemonade           2026-03-25 19:59:49 -05:00
mjallen18   7fcbd0bb7c   plasma             2026-03-25 18:23:08 -05:00
43 changed files with 1669 additions and 595 deletions

View File

@@ -56,7 +56,6 @@ This NixOS configuration repository is built using [Nix Flakes](https://nixos.wi
| `nixpkgs-stable` | `github:NixOS/nixpkgs/nixos-25.11` | Stable package set |
| `nixpkgs-otbr` | `github:mrene/nixpkgs` (fork) | OpenThread Border Router packages |
| `home-manager-unstable` | `github:nix-community/home-manager` | User environment management |
| `home-manager-stable` | `github:nix-community/home-manager/release-25.11` | Stable home-manager |
| `snowfall-lib` | `github:mjallen18/snowfall-lib` | Flake structure library (personal fork) |
| `impermanence` | `github:nix-community/impermanence` | Ephemeral root filesystem support |
| `lanzaboote` | `github:nix-community/lanzaboote/v1.0.0` | Secure Boot |

348
docs/flake-improvements.md Normal file
View File

@@ -0,0 +1,348 @@
# Flake Improvement Suggestions
A methodical review of the flake against what Snowfall Lib provides and what the codebase currently does. Suggestions are grouped by theme and ordered roughly from highest to lowest impact.
---
## 1. Flake-level: HM module registration — single source of truth via snowfall-lib fix
**Root cause discovered**: Snowfall Lib's `mkFlake` previously merged `systems.modules.home` into `homes` only for standalone `homeConfigurations`. The `homes` attrset passed to `create-systems` (which builds `nixosConfigurations`) was the raw unmerged value, so `systems.modules.home` had no effect on NixOS-integrated homes.
**Fix applied**: Patched the personal snowfall-lib fork (`github:mjallen18/snowfall-lib`) to extract the merge into a shared `homes-with-system-modules` binding and pass it to both `create-homes` (standalone) and `create-systems` (NixOS-integrated). `flake.lock` updated to the new commit.
`modules/nixos/home/default.nix` no longer needs its own `sharedModules` list; `systems.modules.home` in `flake.nix` is now the single authoritative list for all contexts.
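The shape of the fix can be sketched roughly as follows. The `create-homes`/`create-systems` names come from the description above; the exact signatures and the surrounding `let` structure inside snowfall-lib are assumptions, not the fork's literal code:

```nix
let
  # Merge systems.modules.home into every home exactly once, in one place.
  homes-with-system-modules = builtins.mapAttrs (
    _name: home:
    home // { modules = (home.modules or [ ]) ++ (systems.modules.home or [ ]); }
  ) homes;
in
{
  # Both consumers now receive the merged value, so NixOS-integrated homes
  # see systems.modules.home just like standalone homeConfigurations do.
  homeConfigurations = create-homes homes-with-system-modules;
  nixosConfigurations = create-systems homes-with-system-modules;
}
```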
---
## 2. Flake-level: Duplicated Darwin HM module registration
**Problem**: At first glance this looks like the Darwin counterpart of the issue above. `flake.nix:160-167` registers Darwin modules via `systems.modules.darwin`, but none of those are actually Home Manager modules: `nix-homebrew`, `home-manager.darwinModules.home-manager`, `nix-plist-manager`, `nix-rosetta-builder`, `nix-index-database`, and `stylix.darwinModules.stylix` are all Darwin system modules, not HM `sharedModules`, so this is the correct place for them. The `modules/darwin/home/default.nix` module handles the Darwin-side HM bridge.
**No change needed here**, but add a comment to clarify why this list stays in `flake.nix` while the `modules.home` list should move:
```nix
# Common darwin system-level modules (not HM sharedModules — those live in modules/darwin/home/)
modules.darwin = with inputs; [ ... ];
```
---
## 3. System-level: Repeated nebula lighthouse config
**Problem**: Three systems (`matt-nixos`, `allyx`, `macbook-pro-nixos`) each independently spell out the same lighthouse peer config:
```nix
# Repeated verbatim in 3 files:
lighthouses = [ "10.1.1.1" ];
staticHostMap = {
"10.1.1.1" = [ "mjallen.dev:4242" ];
};
port = 4242;
```
**Suggestion**: Add defaults to `modules/nixos/services/nebula/default.nix` options so that non-lighthouse nodes don't need to spell this out. Since this is a personal network with one lighthouse, the defaults can encode that:
```nix
# In nebula/default.nix options:
lighthouses = lib.mjallen.mkOpt (types.listOf types.str) [ "10.1.1.1" ]
"Nebula overlay IPs of lighthouse nodes";
staticHostMap = lib.mjallen.mkOpt (types.attrsOf (types.listOf types.str))
{ "10.1.1.1" = [ "mjallen.dev:4242" ]; }
"Static host map";
port = lib.mjallen.mkOpt types.port 4242 "Nebula listen port";
```
Client systems can then reduce to:
```nix
services.nebula = {
enable = true;
secretsPrefix = "matt-nixos/nebula";
secretsFile = lib.snowfall.fs.get-file "secrets/desktop-secrets.yaml";
hostSecretName = "matt-nixos";
};
```
The lighthouse (`pi5`) already overrides `isLighthouse = true` and doesn't set `lighthouses`/`staticHostMap`, so it would be unaffected.
---
## 4. System-level: `systemd-networkd-wait-online` scattered disablement
**Problem**: `systemd.services.systemd-networkd-wait-online.enable = lib.mkForce false` appears in:
- `systems/x86_64-linux/matt-nixos/default.nix:92`
- `systems/x86_64-linux/allyx/default.nix:135`
`modules/nixos/network/default.nix` already disables `NetworkManager-wait-online` and `systemd.network.wait-online`, but not `systemd-networkd-wait-online`. These are the same underlying concern.
**Suggestion**: Add `systemd.services.systemd-networkd-wait-online.enable = lib.mkForce false;` unconditionally to `modules/nixos/network/default.nix` alongside the existing `NetworkManager-wait-online` disablement (line 89). Remove the per-system overrides.
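Consolidated, the network module would then carry all three wait-online disablements side by side. A sketch; the surrounding `config = mkIf cfg.enable { ... }` wrapper from the existing module is assumed:

```nix
# modules/nixos/network/default.nix — all wait-online variants in one place,
# so no system needs a per-host mkForce override:
systemd.network.wait-online.enable = false;
systemd.services.NetworkManager-wait-online.enable = lib.mkForce false;
systemd.services.systemd-networkd-wait-online.enable = lib.mkForce false;
```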
---
## 5. System-level: `coolercontrol` and GNOME desktop environment variables
**Problem**: Two systems (`matt-nixos:91`, `allyx:82`) share identical config blocks:
```nix
programs.coolercontrol.enable = true;
environment.variables = {
GDK_SCALE = "1";
EDITOR = "${lib.getExe' pkgs.vscodium "codium"} --wait";
VISUAL = "${lib.getExe' pkgs.vscodium "codium"} --wait";
};
```
These belong to a desktop AMD gaming profile, not to the system configs themselves.
**Suggestions** (pick any combination):
- **A** — Add a `coolercontrol.enable` option to `modules/nixos/hardware/amd/default.nix` (default `false`) and wire `programs.coolercontrol.enable` inside it. Each system opts in with `hardware.amd.coolercontrol.enable = true`.
- **B** — Add `vscodium` as the default `EDITOR`/`VISUAL` to `modules/nixos/desktop/gnome/default.nix` behind a `vscodium.enable` option (default `false`). The two systems that want it set `desktop.gnome.vscodium.enable = true`.
- **C** — Create a shared `modules/nixos/desktop/common/default.nix` (or `profiles/desktop.nix`) that both GNOME and Hyprland modules consume, and put `GDK_SCALE` there.
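Option A, for instance, would look roughly like this inside the AMD hardware module. This is a fragment, assuming the module's existing `cfg` binding and the repo's `mkBoolOpt` helper:

```nix
# modules/nixos/hardware/amd/default.nix — sketch of option A:
options.${namespace}.hardware.amd = {
  # Off by default; desktop gaming systems opt in explicitly.
  coolercontrol.enable = mkBoolOpt false "Enable CoolerControl fan/cooling control";
};
config = lib.mkIf cfg.enable {
  programs.coolercontrol.enable = lib.mkIf cfg.coolercontrol.enable true;
};
```

Each system then replaces its inline `programs.coolercontrol.enable = true;` with `hardware.amd.coolercontrol.enable = true;` under the namespace.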
---
## 6. System-level: `networking.networkmanager.wifi.backend = "iwd"` bypass
**Problem**: `matt-nixos:100` and `allyx:140` set `networking.networkmanager.wifi.backend = "iwd"` directly, bypassing the `${namespace}.network.iwd.enable` option that the `network` module already provides.
Looking at `modules/nixos/network/default.nix:143-154`, enabling `cfg.iwd.enable` does set this value via `mkForce`, but it also forces `networkmanager.enable = mkForce false` — which is unwanted on these systems that use NetworkManager with the iwd backend.
**Root cause**: The module conflates "use iwd" (the WiFi daemon) with "disable NetworkManager" (the connection manager). These are separate concerns. NetworkManager can use iwd as its WiFi backend while still being the connection manager.
**Suggestion**: Restructure the `network` module's iwd handling:
```nix
# Instead of forcing NM off when iwd is enabled:
networking = {
wireless.iwd.enable = cfg.iwd.enable;
networkmanager = mkIf cfg.networkmanager.enable {
enable = true;
wifi.backend = mkIf cfg.iwd.enable "iwd";
# ... rest of NM config
};
};
```
Then the per-system lines become:
```nix
${namespace}.network = {
hostName = "matt-nixos";
iwd.enable = true;
networkmanager.enable = true;
};
```
---
## 7. System-level: `fileSystems."/etc".neededForBoot` not in impermanence module
**Problem**: `fileSystems."/etc".neededForBoot = true` is set manually in four system configs (`nuc-nixos`, `pi5`, `jallen-nas`, `graphical`). This is a prerequisite of impermanence (tmpfs root), not a per-system choice.
**Suggestion**: Add to `modules/nixos/impermanence/default.nix`:
```nix
config = mkIf cfg.enable {
fileSystems."/etc".neededForBoot = true;
# ... existing config
};
```
Then remove the manual setting from each system. (`macbook-pro-nixos` and `matt-nixos` may already have this in their `filesystems.nix` — verify and remove duplicates there too.)
---
## 8. System-level: `system.stateVersion` and `time.timeZone` should be module options
**Problem**: In `modules/nixos/system/default.nix`:
- Line 3: `timezone = "America/Chicago"` is hardcoded
- Line 54: `system.stateVersion = "23.11"` is hardcoded
Both are set unconditionally for every system and can only be overridden with `lib.mkForce`.
**Suggestions**:
```nix
# modules/nixos/system/default.nix
{ config, lib, namespace, pkgs, system, ... }:
let
cfg = config.${namespace}.system;
in
{
options.${namespace}.system = {
timezone = lib.mkOption {
type = lib.types.str;
default = "America/Chicago";
description = "System timezone";
};
stateVersion = lib.mkOption {
type = lib.types.str;
default = "23.11";
description = "NixOS state version. Should match the version used when the system was first installed.";
};
};
config = {
time.timeZone = cfg.timezone;
system.stateVersion = cfg.stateVersion;
# ... packages
};
}
```
This maintains the current default for all systems (no change required) while allowing any system to say `${namespace}.system.stateVersion = "24.05"` cleanly.
---
## 9. Module-level: Darwin and NixOS `nix` modules share ~90% of their content
**Problem**: `modules/darwin/nix/default.nix` and `modules/nixos/nix/default.nix` differ only in:
- Darwin lacks `daemonCPUSchedPolicy`/`daemonIOSchedClass`/`daemonIOSchedPriority`
- Darwin lacks the `systemd.services.nix-gc.serviceConfig` block
- Darwin lacks `cudaSupport`/`rocmSupport` in `nixpkgs.config`
- Darwin's substituters list omits `attic.xuyh0120.win/lantian`
Everything else — substituters, trusted keys, `warn-dirty`, `experimental-features`, `trusted-users`, `builders-use-substitutes`, `connect-timeout`, `fallback`, `log-lines`, `max-free`, `min-free`, GC settings, `optimise` — is identical.
**Suggestion**: Extract a shared Nix attrset into `lib/nix-settings/default.nix` (or a plain `.nix` file imported by both):
```nix
# lib/nix-settings/default.nix
{ lib }:
{
commonSubstituters = [
"http://jallen-nas.local:9012/nas-cache"
"https://nixos-apple-silicon.cachix.org"
"https://nixos-raspberrypi.cachix.org"
"https://nix-community.cachix.org"
"https://cache.nixos.org/"
];
commonTrustedPublicKeys = [ ... ];
commonSettings = { warn-dirty = ...; experimental-features = ...; ... };
commonGc = { automatic = true; options = "--delete-older-than 30d"; };
}
```
Both modules import and spread this. The NixOS module adds scheduler policies and systemd GC service tweaks on top.
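On the consuming side, each platform module spreads the shared attrset and layers its platform-specific settings on top. A sketch of the NixOS side; the Darwin module would do the same minus the systemd and scheduler tweaks, and the scheduler value shown is illustrative:

```nix
# modules/nixos/nix/default.nix (sketch)
{ lib, namespace, ... }:
let
  ns = lib.${namespace}.nixSettings;
in
{
  nix = {
    # Shared settings first, then NixOS-only keys merged on top.
    settings = ns.commonSettings // {
      substituters = ns.commonSubstituters;
      trusted-public-keys = ns.commonTrustedPublicKeys;
    };
    gc = ns.commonGc;
    daemonCPUSchedPolicy = "idle"; # NixOS-only; illustrative value
  };
}
```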
---
## 10. Module-level: Home SOPS configuration is inconsistent across homes
**Problem**: Three different patterns are used to configure SOPS in home configs:
1. **`${namespace}.sops.enable = true`** — uses the module at `modules/home/sops/default.nix` (macbook-pro-nixos home, jallen-nas home)
2. **Inline SOPS config** — sets `sops.*` directly (allyx home, pi5 home)
3. **Nothing** — some homes don't configure sops at all (matt-nixos home relies on system-level secrets only)
The `modules/home/sops/default.nix` module already handles the `age.keyFile` path, `defaultSopsFile`, and SSH key setup. The inline patterns duplicate this.
**Suggestion**: Migrate all homes that configure sops inline to use `${namespace}.sops.enable = true`. If the home needs a different `defaultSopsFile` (e.g. pi5 uses `secrets/pi5-secrets.yaml`), that should be a module option:
```nix
# modules/home/sops/default.nix — add option:
options.${namespace}.sops = {
enable = lib.mkEnableOption "home sops";
defaultSopsFile = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null; # falls back to global secrets.yaml
description = "Override the default SOPS file for this home";
};
};
```
---
## 11. Module-level: `modules/nixos/home/default.nix` — `home-manager` input key coupling
**Problem**: `systems.modules.nixos` in `flake.nix:147` explicitly includes `home-manager.nixosModules.home-manager`. However Snowfall Lib **automatically** injects the home-manager NixOS module when the `home-manager` input is present and there are home configurations (Snowfall Lib `system/default.nix` lines 265-270).
**Suggestion**: Verify (by temporarily removing the explicit entry) whether `home-manager.nixosModules.home-manager` can be dropped from `systems.modules.nixos`. If Snowfall Lib handles this automatically, removing it eliminates the manual coupling.
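The verification step amounts to commenting out the entry and rebuilding. The other list contents here are illustrative, not the repo's actual list:

```nix
# flake.nix — candidate entry disabled for verification:
systems.modules.nixos = with inputs; [
  # home-manager.nixosModules.home-manager  # snowfall-lib should inject this
  #                                         # automatically when homes exist
  disko.nixosModules.disko
  sops-nix.nixosModules.sops
];
```

If every `nixosConfiguration` with embedded homes still evaluates, the explicit entry is dead weight and can be deleted.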
---
## 12. System-level: `nuc-nixos` — large monolithic default.nix
**Problem**: `systems/x86_64-linux/nuc-nixos/default.nix` is over 330 lines and contains everything inline: disk config, networking, Home Assistant dashboard definitions (~170 lines of inline Nix), kernel config, user setup, and services. Every other complex system (jallen-nas) already uses a split structure with `apps.nix`, `services.nix`, `nas-defaults.nix`, etc.
**Suggestion**: Extract into separate files following the jallen-nas pattern:
```
systems/x86_64-linux/nuc-nixos/
├── default.nix # thin: imports + top-level options
├── boot.nix # disk/luks/filesystem config
├── dashboard.nix # Home Assistant dashboard card definitions
├── services.nix # postgres, redis, HA, OTBR etc.
└── sops.nix # (or reuse the shared module)
```
The dashboard in particular (currently lines ~88-260) should be isolated so HA configuration changes don't require touching system-level config.
---
## 13. System-level: Verify `admin@jallen-nas` steam-rom-manager double-import
**Problem**: `homes/x86_64-linux/admin@jallen-nas/default.nix:16` explicitly imports `steam-rom-manager.homeManagerModules.default`. This same module is injected globally via `modules/nixos/home/default.nix:92` for all x86_64 systems (the ARM guard is `!isArm`, and jallen-nas is x86_64).
**Suggestion**: Remove the explicit import from `admin@jallen-nas/default.nix`. If it was added for standalone `home-manager switch` builds (without NixOS), document that reason in a comment rather than keeping a potentially conflicting double-import.
---
## 14. Flake-level: `pi5` host entry with empty modules list
**Problem**: `flake.nix:218-221` defines:
```nix
pi5 = {
modules = [ ];
};
```
An empty modules list is the default behavior, so this entry has no effect and can be removed. The accompanying comment `# disko is already in systems.modules.nixos above` is also misleading: disko is global for all systems, not specific to pi5.
**Suggestion**: Remove the `pi5` host entry from `flake.nix` entirely. If the comment is meant to remind future maintainers that disko is global, move that context to `AGENTS.md` or a comment near the global `systems.modules.nixos` list.
---
## 15. Flake-level: `home-manager-stable` input is pulled in but never used
**Problem**: `flake.nix:10-13` defines `home-manager-stable` but `home-manager = home-manager-unstable` is the alias (line 21). No system or module references `home-manager-stable` directly. It adds to lock file churn and evaluation time.
**Suggestion**: Remove `home-manager-stable` unless there is a concrete plan to use it for a stable-channel system. If stable Home Manager support is desired in the future, add it back at that point.
---
## 16. Flake-level: Consider using Snowfall Lib `alias` for formatter output
**Problem**: The `outputs-builder` in `flake.nix:277-280` is used only to register the `treefmt` formatter. Snowfall Lib supports an `alias` mechanism and also allows `outputs-builder` to be used, but this is the only use of `outputs-builder` in the entire flake.
**Suggestion**: No change needed. `outputs-builder` output can be overridden by auto-discovery, but since the formatter isn't auto-discovered, `outputs-builder` is the correct approach here. The comment on line 279 about the mjallen-lib overlay being auto-discovered is accurate and worth keeping.
---
## Summary Table
| # | Location | Type | Effort | Impact |
|---|----------|------|--------|--------|
| 1 | `flake.nix` | Deduplication | Low | High — removes confusing double-registration |
| 2 | `flake.nix` | Documentation | Low | Low |
| 3 | `nebula/default.nix` | Better defaults | Low | Medium — 3 systems simplified |
| 4 | `network/default.nix` | Consolidation | Low | Medium — remove per-system workarounds |
| 5 | `hardware/amd` + `desktop/gnome` | New options | Medium | Medium — DRY gaming desktop profile |
| 6 | `network/default.nix` | Bug fix / refactor | Medium | High — current iwd handling is incorrect |
| 7 | `impermanence/default.nix` | Consolidation | Low | Medium — remove 4 manual entries |
| 8 | `system/default.nix` | New options | Low | Medium — allows per-system overrides cleanly |
| 9 | `lib/` + `darwin/nix` + `nixos/nix` | Extraction | Medium | Medium — single source of truth for nix config |
| 10 | `homes/*/` + `modules/home/sops` | Consistency | Low | Low — consistency improvement |
| 11 | `flake.nix` | Simplification | Low | Low — possible dead entry |
| 12 | `systems/nuc-nixos/` | Refactor | Medium | High — maintainability |
| 13 | `homes/admin@jallen-nas` | Bug fix | Trivial | Low — potential double-import |
| 14 | `flake.nix` | Cleanup | Trivial | Low — dead code |
| 15 | `flake.nix` | Cleanup | Trivial | Low — reduces lock churn |
| 16 | `flake.nix` | N/A | None | No change needed |

52
flake.lock generated
View File

@@ -581,27 +581,6 @@
"type": "github"
}
},
"home-manager-stable": {
"inputs": {
"nixpkgs": [
"nixpkgs-stable"
]
},
"locked": {
"lastModified": 1774274588,
"narHash": "sha256-dnHvv5EMUgTzGZmA+3diYjQU2O6BEpGLEOgJ1Qe9LaY=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "cf9686ba26f5ef788226843bc31fda4cf72e373b",
"type": "github"
},
"original": {
"owner": "nix-community",
"ref": "release-25.11",
"repo": "home-manager",
"type": "github"
}
},
"home-manager-unstable": {
"inputs": {
"nixpkgs": [
@@ -1254,6 +1233,29 @@
"type": "github"
}
},
"plasma-manager": {
"inputs": {
"home-manager": [
"home-manager"
],
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1772361940,
"narHash": "sha256-B1Cz+ydL1iaOnGlwOFld/C8lBECPtzhiy/pP93/CuyY=",
"owner": "nix-community",
"repo": "plasma-manager",
"rev": "a4b33606111c9c5dcd10009042bb710307174f51",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "plasma-manager",
"type": "github"
}
},
"pre-commit": {
"inputs": {
"flake-compat": "flake-compat_3",
@@ -1356,7 +1358,6 @@
"darwin": "darwin",
"disko": "disko",
"home-manager": "home-manager",
"home-manager-stable": "home-manager-stable",
"home-manager-unstable": "home-manager-unstable",
"homebrew-cask": "homebrew-cask",
"homebrew-core": "homebrew-core",
@@ -1376,6 +1377,7 @@
"nixpkgs-otbr": "nixpkgs-otbr",
"nixpkgs-stable": "nixpkgs-stable_2",
"nixpkgs-unstable": "nixpkgs-unstable",
"plasma-manager": "plasma-manager",
"pre-commit-hooks-nix": "pre-commit-hooks-nix",
"snowfall-lib": "snowfall-lib",
"sops-nix": "sops-nix",
@@ -1435,11 +1437,11 @@
"treefmt-nix": "treefmt-nix"
},
"locked": {
"lastModified": 1774474975,
"narHash": "sha256-3cHaGufox1mL8KbYDGHW2ySYwmwgslHTkcDpA7hoKk8=",
"lastModified": 1774478645,
"narHash": "sha256-NeEWeisE2QLCCJg688/vaLp9/V7osVenn/EUm3JXsgg=",
"owner": "mjallen18",
"repo": "snowfall-lib",
"rev": "94056dbb28a42d49ca6bf45acb38122da42d8bfc",
"rev": "23e5a04d70389d58b7c1447924d59cfb78218215",
"type": "github"
},
"original": {

View File

@@ -7,11 +7,6 @@
# Used by modules/nixos/homeassistant/services/thread/default.nix
nixpkgs-otbr.url = "github:mrene/nixpkgs/openthread-border-router";
home-manager-stable = {
url = "github:nix-community/home-manager/release-25.11";
inputs.nixpkgs.follows = "nixpkgs-stable";
};
home-manager-unstable = {
url = "github:nix-community/home-manager";
inputs.nixpkgs.follows = "nixpkgs-unstable";
@@ -116,6 +111,12 @@
url = "github:Jovian-Experiments/Jovian-NixOS";
inputs.nixpkgs.follows = "nixpkgs";
};
plasma-manager = {
url = "github:nix-community/plasma-manager";
inputs.nixpkgs.follows = "nixpkgs";
inputs.home-manager.follows = "home-manager";
};
};
# We will handle this in the next section.
@@ -149,14 +150,19 @@
stylix.nixosModules.stylix
];
# External HM modules injected into ALL homes — both standalone
# homeConfigurations and homes embedded in nixosConfigurations.
# The snowfall-lib fork patches create-systems to pass systems.modules.home
# into create-home-system-modules so both paths are covered from here.
# The ARM guard for steam-rom-manager is handled by that module itself.
modules.home = with inputs; [
nix-index-database.homeModules.nix-index
steam-rom-manager.homeManagerModules.default
sops-nix.homeManagerModules.sops
stylix.homeModules.stylix
plasma-manager.homeModules.plasma-manager
];
# common darwin modules
modules.darwin = with inputs; [
nix-homebrew.darwinModules.nix-homebrew
home-manager.darwinModules.home-manager
@@ -212,14 +218,6 @@
];
};
# ######################################################
# Pi5 #
# ######################################################
pi5 = {
# disko is already in systems.modules.nixos above
modules = [ ];
};
# ######################################################
# Mac #
# ######################################################

View File

@@ -11,10 +11,9 @@ in
home.username = "matt";
${namespace}.sops.enable = true;
sops = {
age.keyFile = "/home/matt/.config/sops/age/keys.txt";
defaultSopsFile = "/etc/nixos/secrets/secrets.yaml";
validateSopsFiles = false;
secrets = {
"ssh-keys-public/pi5" = {
path = "/home/matt/.ssh/id_ed25519.pub";

View File

@@ -9,9 +9,11 @@ let
inherit (lib.${namespace}) enabled;
in
{
# steam-rom-manager HM module is needed for the steam-rom-manager program
# options. On NixOS hosts it's provided via sharedModules; here we add it
# explicitly so the standalone homeConfiguration build also includes it.
# steam-rom-manager is also injected globally via modules/nixos/home/default.nix
# sharedModules for x86_64 NixOS builds. This explicit import ensures it is
# also available for standalone `home-manager switch` runs (where sharedModules
# are not applied). NixOS's module system deduplicates the import when both
# paths resolve to the same derivation.
imports = [
inputs.steam-rom-manager.homeManagerModules.default
];
@@ -35,40 +37,36 @@ in
${namespace} = {
sops.enable = true;
programs.opencode = enabled;
desktop.plasma = enabled;
};
sops = {
age.keyFile = "/home/admin/.config/sops/age/keys.txt";
defaultSopsFile = "/etc/nixos/secrets/secrets.yaml";
validateSopsFiles = false;
secrets = {
"ssh-keys-public/jallen-nas" = {
path = "/home/admin/.ssh/id_ed25519.pub";
mode = "0644";
};
"ssh-keys-private/jallen-nas" = {
path = "/home/admin/.ssh/id_ed25519";
mode = "0600";
};
"ssh-keys-public/desktop-nixos" = {
path = "/home/admin/.ssh/authorized_keys";
mode = "0600";
};
sops.secrets = {
"ssh-keys-public/jallen-nas" = {
path = "/home/admin/.ssh/id_ed25519.pub";
mode = "0644";
};
"ssh-keys-private/jallen-nas" = {
path = "/home/admin/.ssh/id_ed25519";
mode = "0600";
};
"ssh-keys-public/desktop-nixos" = {
path = "/home/admin/.ssh/authorized_keys";
mode = "0600";
};
"ssh-keys-public/desktop-nixos-root" = {
path = "/home/admin/.ssh/authorized_keys2";
mode = "0600";
};
"ssh-keys-public/desktop-nixos-root" = {
path = "/home/admin/.ssh/authorized_keys2";
mode = "0600";
};
"ssh-keys-public/desktop-windows" = {
path = "/home/admin/.ssh/authorized_keys3";
mode = "0600";
};
"ssh-keys-public/desktop-windows" = {
path = "/home/admin/.ssh/authorized_keys3";
mode = "0600";
};
"ssh-keys-public/macbook-macos" = {
path = "/home/admin/.ssh/authorized_keys4";
mode = "0600";
};
"ssh-keys-public/macbook-macos" = {
path = "/home/admin/.ssh/authorized_keys4";
mode = "0600";
};
};

View File

@@ -10,21 +10,19 @@ in
{
home.username = "matt";
${namespace}.desktop.gnome = enabled;
${namespace} = {
desktop.gnome = enabled;
sops.enable = true;
};
sops = {
age.keyFile = "/home/matt/.config/sops/age/keys.txt";
defaultSopsFile = "/etc/nixos/secrets/secrets.yaml";
validateSopsFiles = false;
secrets = {
"ssh-keys-public/matt" = {
path = "/home/matt/.ssh/id_ed25519.pub";
mode = "0644";
};
"ssh-keys-private/matt" = {
path = "/home/matt/.ssh/id_ed25519";
mode = "0600";
};
sops.secrets = {
"ssh-keys-public/matt" = {
path = "/home/matt/.ssh/id_ed25519.pub";
mode = "0644";
};
"ssh-keys-private/matt" = {
path = "/home/matt/.ssh/id_ed25519";
mode = "0600";
};
};

View File

@@ -0,0 +1,51 @@
# Shared nix daemon / nixpkgs settings used by both the NixOS and nix-darwin
# modules (modules/nixos/nix/default.nix and modules/darwin/nix/default.nix).
#
# Snowfall Lib discovers this file and merges its return value into
# lib.<namespace>.* — the nested attrset is accessed as:
# lib.${namespace}.nixSettings.commonSubstituters
# lib.${namespace}.nixSettings.commonTrustedPublicKeys
# lib.${namespace}.nixSettings.commonSettings
# lib.${namespace}.nixSettings.commonGc
{ lib, ... }:
{
nixSettings = {
commonSubstituters = [
"http://jallen-nas.local:9012/nas-cache"
"https://nixos-apple-silicon.cachix.org"
"https://nixos-raspberrypi.cachix.org"
"https://nix-community.cachix.org"
"https://cache.nixos.org/"
];
commonTrustedPublicKeys = [
"nas-cache:eK0eRVAt9QNwbkLIyOo9N5Z5+zi6ukI4mSlL196C7Yg="
"nixos-apple-silicon.cachix.org-1:8psDu5SA5dAD7qA0zMy5UT292TxeEPzIz8VVEr2Js20="
"nixos-raspberrypi.cachix.org-1:4iMO9LXa8BqhU+Rpg6LQKiGa2lsNh/j2oiYLNOQ5sPI="
"nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
];
commonSettings = {
warn-dirty = lib.mkForce false;
experimental-features = lib.mkForce [
"nix-command"
"flakes"
];
trusted-users = [
"@wheel"
"@admin"
];
builders-use-substitutes = true;
connect-timeout = lib.mkDefault 5;
fallback = true;
log-lines = lib.mkDefault 25;
max-free = lib.mkDefault (3000 * 1024 * 1024);
min-free = lib.mkDefault (512 * 1024 * 1024);
};
commonGc = {
automatic = lib.mkDefault true;
options = lib.mkDefault "--delete-older-than 30d";
};
};
}

View File

@@ -1,48 +1,20 @@
{
lib,
namespace,
...
}:
let
nixSettings = lib.${namespace}.nixSettings;
in
{
nix = {
settings = {
# extra-sandbox-paths = [ config.programs.ccache.cacheDir ];
substituters = [
"http://jallen-nas.local:9012/nas-cache"
"https://nixos-apple-silicon.cachix.org"
"https://nixos-raspberrypi.cachix.org"
"https://nix-community.cachix.org"
"https://cache.nixos.org/"
];
trusted-public-keys = [
"nas-cache:eK0eRVAt9QNwbkLIyOo9N5Z5+zi6ukI4mSlL196C7Yg="
"nixos-apple-silicon.cachix.org-1:8psDu5SA5dAD7qA0zMy5UT292TxeEPzIz8VVEr2Js20="
"nixos-raspberrypi.cachix.org-1:4iMO9LXa8BqhU+Rpg6LQKiGa2lsNh/j2oiYLNOQ5sPI="
"nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
];
warn-dirty = lib.mkForce false;
experimental-features = lib.mkForce [
"nix-command"
"flakes"
];
trusted-users = [
"@wheel"
"@admin"
];
builders-use-substitutes = true;
connect-timeout = lib.mkDefault 5;
fallback = true;
log-lines = lib.mkDefault 25;
max-free = lib.mkDefault (3000 * 1024 * 1024);
min-free = lib.mkDefault (512 * 1024 * 1024);
};
# Garbage collect automatically every week
gc = {
automatic = lib.mkDefault true;
options = lib.mkDefault "--delete-older-than 30d";
settings = nixSettings.commonSettings // {
substituters = nixSettings.commonSubstituters;
trusted-public-keys = nixSettings.commonTrustedPublicKeys;
};
gc = nixSettings.commonGc;
optimise.automatic = lib.mkDefault true;
};

View File

@@ -0,0 +1,90 @@
{
config,
lib,
pkgs,
namespace,
...
}:
let
cfg = config.${namespace}.desktop.plasma;
in
{
imports = [ ./options.nix ];
config = lib.mkIf cfg.enable {
home.packages = with pkgs.kdePackages; [
plasma-browser-integration
kdeplasma-addons
];
programs.plasma = {
enable = true;
workspace = {
colorScheme = "BreezeDark";
cursor = {
theme = "breeze_cursors";
size = 24;
};
iconTheme = "breeze-dark";
theme = "breeze-dark";
lookAndFeel = "org.kde.breezedark.desktop";
};
# input.mice and input.touchpads require device-specific vendorId/productId
# identifiers — configure those per-host in the home config instead.
kscreenlocker = {
autoLock = true;
timeout = 10;
};
kwin = {
effects = {
shakeCursor.enable = false;
};
nightLight = {
enable = true;
mode = "constant";
temperature = {
day = 6500;
night = 3500;
};
};
virtualDesktops = {
number = 4;
rows = 1;
};
};
panels = [
{
location = "bottom";
floating = true;
height = 44;
widgets = [
"org.kde.plasma.kickoff"
"org.kde.plasma.icontasks"
"org.kde.plasma.marginsseparator"
"org.kde.plasma.systemtray"
"org.kde.plasma.digitalclock"
];
}
];
shortcuts = {
kwin = {
"Switch to Desktop 1" = "Meta+1";
"Switch to Desktop 2" = "Meta+2";
"Switch to Desktop 3" = "Meta+3";
"Switch to Desktop 4" = "Meta+4";
"Window to Desktop 1" = "Meta+Shift+1";
"Window to Desktop 2" = "Meta+Shift+2";
"Window to Desktop 3" = "Meta+Shift+3";
"Window to Desktop 4" = "Meta+Shift+4";
"Toggle Overview" = "Meta+Tab";
};
};
};
};
}

View File

@@ -0,0 +1,7 @@
{ lib, namespace, ... }:
with lib;
{
options.${namespace}.desktop.plasma = {
enable = mkEnableOption "KDE Plasma 6 home-manager configuration via plasma-manager";
};
}

View File

@@ -1,21 +1,20 @@
{
config,
lib,
namespace,
...
}:
let
cfg = config.${namespace}.desktop.cosmic;
in
{
options.${namespace}.desktop.cosmic = {
enable = lib.mkEnableOption "enable cosmic settings";
};
config = lib.mkIf cfg.enable {
services = {
desktopManager.cosmic.enable = true;
displayManager.cosmic-greeter.enable = true;
};
};
# TODO: COSMIC DE has an active bug that prevents it from being used.
# Re-enable once upstream fixes land:
# config = lib.mkIf config.${namespace}.desktop.cosmic.enable {
# services = {
# desktopManager.cosmic.enable = true;
# displayManager.cosmic-greeter.enable = true;
# };
# };
config = { };
}

View File

@@ -6,12 +6,14 @@
...
}:
let
inherit (lib.${namespace}) enabled disabled;
inherit (lib.${namespace}) enabled disabled mkBoolOpt;
cfg = config.${namespace}.desktop.gnome;
in
{
options.${namespace}.desktop.gnome = {
enable = lib.mkEnableOption "GNOME desktop environment";
vscodium.enable = mkBoolOpt false "Set VSCodium as the default EDITOR/VISUAL";
};
config = lib.mkIf cfg.enable {
@@ -53,5 +55,10 @@ in
enable = false;
package = pkgs.gnomeExtensions.gsconnect;
};
environment.variables = lib.mkIf cfg.vscodium.enable {
EDITOR = "${lib.getExe' pkgs.vscodium "codium"} --wait";
VISUAL = "${lib.getExe' pkgs.vscodium "codium"} --wait";
};
};
}

View File

@@ -0,0 +1,30 @@
{
config,
lib,
namespace,
...
}:
let
inherit (lib.${namespace}) mkBoolOpt;
cfg = config.${namespace}.desktop.plasma;
in
{
options.${namespace}.desktop.plasma = {
enable = lib.mkEnableOption "KDE Plasma 6 desktop environment";
wayland.enable = mkBoolOpt true "Use the Wayland session (default) instead of X11";
};
config = lib.mkIf cfg.enable {
services = {
desktopManager.plasma6.enable = true;
displayManager.sddm = {
enable = true;
wayland.enable = cfg.wayland.enable;
};
};
xdg.portal.extraPortals = [ ];
};
}

View File

@@ -10,7 +10,8 @@ let
hasDesktop =
config.${namespace}.desktop.gnome.enable
|| config.${namespace}.desktop.hyprland.enable
|| config.${namespace}.desktop.cosmic.enable;
|| config.${namespace}.desktop.cosmic.enable
|| config.${namespace}.desktop.plasma.enable;
in
{
imports = [ ./options.nix ];
@@ -19,7 +20,7 @@ in
assertions = [
{
assertion = hasDesktop;
message = "mjallen.gaming.enable requires a desktop environment (gnome, hyprland, or cosmic) to be enabled.";
message = "mjallen.gaming.enable requires a desktop environment (gnome, hyprland, cosmic, or plasma) to be enabled.";
}
];
# Network option required using sysctl to let Ubisoft Connect work as of 7-12-2023

View File

@@ -14,6 +14,8 @@ in
options.${namespace}.hardware.amd = {
enable = mkEnableOption "AMD hardware configuration";
coolercontrol.enable = mkBoolOpt false "Enable CoolerControl fan/cooling control";
corectrl.enable = mkBoolOpt false "Enable CoreCtrl GPU control";
corectrl.enablePolkit = mkBoolOpt false "Enable CoreCtrl polkit rules";
corectrl.polkitGroup = mkOpt types.str "wheel" "Group allowed to use CoreCtrl without password";
@@ -46,6 +48,8 @@ in
package = pkgs.corectrl;
};
programs.coolercontrol.enable = lib.mkIf cfg.coolercontrol.enable true;
environment = {
variables = {
AMD_VULKAN_ICD = "RADV";

View File

@@ -20,8 +20,9 @@ in
assertion =
!config.${namespace}.desktop.gnome.enable
&& !config.${namespace}.desktop.hyprland.enable
&& !config.${namespace}.desktop.cosmic.enable;
message = "mjallen.headless.enable = true is incompatible with having a desktop environment enabled (gnome, hyprland, or cosmic).";
&& !config.${namespace}.desktop.cosmic.enable
&& !config.${namespace}.desktop.plasma.enable;
message = "mjallen.headless.enable = true is incompatible with having a desktop environment enabled (gnome, hyprland, cosmic, or plasma).";
}
];

View File

@@ -4,16 +4,14 @@
options,
namespace,
inputs,
system,
...
}:
let
isArm = ("aarch64-linux" == system) || ("aarch64-darwin" == system);
isDarwin = ("aarch64-darwin" == system);
hasDestopEnvironment =
config.${namespace}.desktop.cosmic.enable
|| config.${namespace}.desktop.gnome.enable
|| config.${namespace}.desktop.hyprland.enable;
|| config.${namespace}.desktop.hyprland.enable
|| config.${namespace}.desktop.plasma.enable;
in
{
@@ -42,6 +40,7 @@ in
config.${namespace}.desktop.gnome.enable
config.${namespace}.desktop.hyprland.enable
config.${namespace}.desktop.cosmic.enable
config.${namespace}.desktop.plasma.enable
];
in
[
@@ -54,6 +53,7 @@ in
lib.optional config.${namespace}.desktop.gnome.enable "gnome"
++ lib.optional config.${namespace}.desktop.hyprland.enable "hyprland"
++ lib.optional config.${namespace}.desktop.cosmic.enable "cosmic"
++ lib.optional config.${namespace}.desktop.plasma.enable "plasma"
)
}.
'';
@@ -79,19 +79,6 @@ in
inherit inputs namespace hasDestopEnvironment;
};
# Make ALL external HM modules available globally
sharedModules =
with inputs;
[
sops-nix.homeManagerModules.sops
nix-plist-manager.homeManagerModules.default
nix-index-database.homeModules.nix-index
stylix.homeModules.stylix
# Add any other external HM modules here
]
++ (if (!isArm) then with inputs; [ steam-rom-manager.homeManagerModules.default ] else [ ])
++ (if (isDarwin) then with inputs; [ ] else [ ]);
users.${config.${namespace}.user.name} =
lib.mkAliasDefinitions
options.${namespace}.home.extraOptions;

View File

@@ -50,6 +50,9 @@ in
};
config = mkIf cfg.enable {
# /etc must be available before the impermanence bind-mounts are set up.
fileSystems."/etc".neededForBoot = true;
assertions = [
{
assertion = lib.hasPrefix "/" cfg.persistencePath;

View File

@@ -87,6 +87,7 @@ in
systemd = {
services = {
NetworkManager-wait-online.enable = false;
systemd-networkd-wait-online.enable = lib.mkForce false;
systemd-networkd.stopIfChanged = false;
systemd-resolved.stopIfChanged = false;
};
@@ -139,43 +140,39 @@ in
extraCommands = lib.mkIf (cfg.extraFirewallCommands != "") cfg.extraFirewallCommands;
};
# Configure iwd if enabled
# Enable iwd daemon when requested.
# When iwd is enabled alongside NetworkManager, iwd acts as the WiFi
# backend for NM (iwd handles scanning/association; NM handles
# connection management). They are not mutually exclusive.
wireless.iwd = lib.mkIf cfg.iwd.enable {
enable = true;
settings = cfg.iwd.settings;
};
# Configure NetworkManager
networkmanager = mkMerge [
# Disable NetworkManager when iwd is enabled
(mkIf cfg.iwd.enable {
enable = mkForce false;
wifi.backend = mkForce "iwd";
})
# Configure NetworkManager when enabled
networkmanager = mkIf cfg.networkmanager.enable {
enable = true;
# Use iwd as the WiFi backend when iwd is also enabled
wifi.backend = mkIf cfg.iwd.enable "iwd";
wifi.powersave = cfg.networkmanager.powersave;
settings.connectivity.uri = mkDefault "http://nmcheck.gnome.org/check_network_status.txt";
plugins = with pkgs; [
networkmanager-fortisslvpn
networkmanager-iodine
networkmanager-l2tp
networkmanager-openconnect
networkmanager-openvpn
networkmanager-sstp
networkmanager-strongswan
networkmanager-vpnc
];
# Enable NetworkManager when wifi is enabled and iwd is disabled
(mkIf (cfg.networkmanager.enable && !cfg.iwd.enable) {
enable = true;
wifi.powersave = cfg.networkmanager.powersave;
settings.connectivity.uri = mkDefault "http://nmcheck.gnome.org/check_network_status.txt";
plugins = with pkgs; [
networkmanager-fortisslvpn
networkmanager-iodine
networkmanager-l2tp
networkmanager-openconnect
networkmanager-openvpn
networkmanager-sstp
networkmanager-strongswan
networkmanager-vpnc
];
# Configure WiFi profiles if any are defined
ensureProfiles = mkIf (cfg.networkmanager.profiles != { }) {
environmentFiles = lib.optional (config.sops.secrets ? wifi) config.sops.secrets.wifi.path;
profiles = profiles;
};
})
];
# Configure WiFi profiles if any are defined
ensureProfiles = mkIf (cfg.networkmanager.profiles != { }) {
environmentFiles = lib.optional (config.sops.secrets ? wifi) config.sops.secrets.wifi.path;
profiles = profiles;
};
};
};
};
}
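Given the merged behavior above, a host that wants NetworkManager with iwd as its WiFi backend simply enables both (sketch; option paths as defined in this module, matching the macbook-pro-nixos host config later in this diff):

```nix
{
  mjallen.network = {
    networkmanager.enable = true; # NM owns connection management
    iwd.enable = true;            # iwd handles scanning/association as the NM backend
  };
}
```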

View File

@@ -4,57 +4,36 @@
namespace,
...
}:
let
nixSettings = lib.${namespace}.nixSettings;
in
{
nix = {
settings = {
settings = nixSettings.commonSettings // {
# extra-sandbox-paths = [ config.programs.ccache.cacheDir ];
substituters = [
# NixOS-only: lantian attic cache (has some useful packages)
"https://attic.xuyh0120.win/lantian"
"http://jallen-nas.local:9012/nas-cache"
"https://nixos-apple-silicon.cachix.org"
"https://nixos-raspberrypi.cachix.org"
"https://nix-community.cachix.org"
"https://cache.nixos.org/"
];
]
++ nixSettings.commonSubstituters;
trusted-public-keys = [
"lantian:EeAUQ+W+6r7EtwnmYjeVwx5kOGEBpjlBfPlzGlTNvHc="
"nas-cache:eK0eRVAt9QNwbkLIyOo9N5Z5+zi6ukI4mSlL196C7Yg="
"nixos-apple-silicon.cachix.org-1:8psDu5SA5dAD7qA0zMy5UT292TxeEPzIz8VVEr2Js20="
"nixos-raspberrypi.cachix.org-1:4iMO9LXa8BqhU+Rpg6LQKiGa2lsNh/j2oiYLNOQ5sPI="
"nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
];
warn-dirty = lib.mkForce false;
experimental-features = lib.mkForce [
"nix-command"
"flakes"
];
trusted-users = [
"@wheel"
"@admin"
];
builders-use-substitutes = true;
connect-timeout = lib.mkDefault 5;
fallback = true;
log-lines = lib.mkDefault 25;
max-free = lib.mkDefault (3000 * 1024 * 1024);
min-free = lib.mkDefault (512 * 1024 * 1024);
]
++ nixSettings.commonTrustedPublicKeys;
};
# Linux-specific: run the nix daemon at idle priority to avoid impacting
# interactive work during builds.
daemonCPUSchedPolicy = lib.mkDefault "idle";
daemonIOSchedClass = lib.mkDefault "idle";
daemonIOSchedPriority = lib.mkDefault 7;
# Garbage collect automatically every week
gc = {
automatic = lib.mkDefault true;
options = lib.mkDefault "--delete-older-than 30d";
};
gc = nixSettings.commonGc;
optimise.automatic = lib.mkDefault true;
};
# Give the nix-gc systemd unit the same idle-IO treatment as the daemon.
systemd.services.nix-gc.serviceConfig = {
CPUSchedulingPolicy = "batch";
IOSchedulingClass = "idle";

View File

@@ -400,34 +400,30 @@ let
# ntfy via the Grafana webhook contact point. Grafana POSTs a JSON
# body; ntfy accepts any body as the message text. We use the
# message template below to format it nicely.
# Basic auth credentials are read from the SOPS secret at runtime
# via Grafana's $__file{} provider.
contactPoints.settings = {
apiVersion = 1;
contactPoints = [
{
name = "ntfy";
receivers = [
{
uid = "ntfy-webhook";
type = "webhook";
settings = {
url = "https://ntfy.mjallen.dev/grafana-alerts";
httpMethod = "POST";
username = "$__file{${config.sops.secrets."jallen-nas/ntfy/user".path}}";
password = "$__file{${config.sops.secrets."jallen-nas/ntfy/password".path}}";
# Pass alert title and state as ntfy headers via the
# custom message template (defined below).
httpHeaders = {
"Tags" = "chart,bell";
};
};
disableResolveMessage = false;
}
];
}
];
};
#
# Credentials are injected via Grafana's $__env{} provider, which
# reads from the process environment. The GRAFANA_NTFY_USER and
# GRAFANA_NTFY_PASSWORD variables are set via the SOPS-managed
# grafana.env EnvironmentFile on the grafana.service unit.
#
# Note: $__file{} only works in grafana.ini settings, not in
# provisioning YAML files — using it here causes a parse error.
contactPoints.path = pkgs.writeTextDir "contactPoints.yaml" ''
apiVersion: 1
contactPoints:
- name: ntfy
receivers:
- uid: ntfy-webhook
type: webhook
disableResolveMessage: false
settings:
url: https://ntfy.mjallen.dev/grafana-alerts
httpMethod: POST
username: $__env{GRAFANA_NTFY_USER}
password: $__env{GRAFANA_NTFY_PASSWORD}
httpHeaders:
Tags: "chart,bell"
'';
# ── Notification message template ───────────────────────────────────
# Grafana sends the rendered template body as the POST body.
@@ -878,6 +874,11 @@ let
};
};
# Inject ntfy credentials into Grafana's environment so the $__env{}
# provider in contactPoints.yaml can resolve them at runtime.
# The grafana.env template is managed by SOPS and owned by grafana:grafana.
systemd.services.grafana.serviceConfig.EnvironmentFile = config.sops.templates."grafana.env".path;
# The redis exporter needs AF_INET to reach TCP Redis instances.
# The default systemd hardening only allows AF_UNIX.
systemd.services.prometheus-redis-exporter.serviceConfig.RestrictAddressFamilies = [

View File

@@ -0,0 +1,146 @@
{
config,
lib,
pkgs,
namespace,
...
}:
let
inherit (lib) mkForce getExe;
inherit (lib.${namespace}) mkModule mkOpt;
name = "lemonade";
cfg = config.${namespace}.services.${name};
# lemonade-server serve args built from config options
serveArgs = lib.concatStringsSep " " (
[
"serve"
"--no-tray"
"--port ${toString cfg.port}"
"--host ${cfg.host}"
"--log-level ${cfg.logLevel}"
]
++ lib.optional (cfg.maxLoadedModels != 1) "--max-loaded-models ${toString cfg.maxLoadedModels}"
++ lib.optional (cfg.extraModelsDir != null) "--extra-models-dir ${cfg.extraModelsDir}"
++ cfg.extraArgs
);
lemonadeConfig = mkModule {
inherit config name;
description = "Lemonade local LLM server";
options = {
# Override mkModule's default port of 80 with lemonade's actual default.
port = mkOpt lib.types.int 8000 "Port lemonade-router listens on";
host = mkOpt lib.types.str "127.0.0.1" "Address lemonade-router binds to";
logLevel = mkOpt (lib.types.enum [
"critical"
"error"
"warning"
"info"
"debug"
"trace"
]) "info" "Log level for lemonade-router";
maxLoadedModels = mkOpt lib.types.int 1 "Maximum number of models to keep loaded simultaneously";
extraModelsDir =
mkOpt (lib.types.nullOr lib.types.str) null
"Extra directory scanned for local GGUF model files";
extraArgs =
mkOpt (lib.types.listOf lib.types.str) [ ]
"Extra arguments passed verbatim to lemonade-server serve";
modelsDir =
mkOpt lib.types.str "/var/lib/${name}/models"
"Directory where downloaded models are stored (exposed as HF_HOME)";
apiKeyFile =
mkOpt (lib.types.nullOr lib.types.str) null
"Path to a file containing the LEMONADE_API_KEY (e.g. a sops secret path)";
};
moduleConfig = {
# Install the package system-wide so lemonade-server / lemonade-router are
# available in PATH for interactive use alongside the daemon.
environment.systemPackages = [ pkgs.${namespace}.lemonade ];
systemd.services.${name} = {
description = "Lemonade local LLM server";
wantedBy = [ "multi-user.target" ];
after = [
"network.target"
"network-online.target"
];
wants = [ "network-online.target" ];
# lemonade-server discovers lemonade-router by reading /proc/self/exe,
# so we must use ExecStart with the real binary, not a shell wrapper.
serviceConfig = {
Type = "simple";
ExecStart = "${getExe pkgs.${namespace}.lemonade} ${serveArgs}";
User = name;
Group = name;
DynamicUser = mkForce false;
# Models and HuggingFace cache land under modelsDir.
# HF_HOME overrides the default ~/.cache/huggingface location.
Environment = [
"HF_HOME=${cfg.modelsDir}"
"XDG_RUNTIME_DIR=/run/${name}"
];
# Load an API key from a secrets file if provided.
EnvironmentFile = lib.optional (cfg.apiKeyFile != null) cfg.apiKeyFile;
# Runtime directory for PID file / lock file (created automatically
# by systemd and owned by the service user).
RuntimeDirectory = name;
RuntimeDirectoryMode = "0755";
# Persistent state: models cache.
StateDirectory = name;
StateDirectoryMode = "0750";
# Home directory for the service user (needed by some HF tooling).
WorkingDirectory = "/var/lib/${name}";
Restart = "on-failure";
RestartSec = "5s";
StandardOutput = "journal";
StandardError = "journal";
SyslogIdentifier = name;
# Hardening — lemonade needs network access and subprocess execution
# for spawning llama.cpp / whisper.cpp backends.
NoNewPrivileges = true;
PrivateTmp = true;
ProtectSystem = "strict";
ProtectHome = true;
ReadWritePaths = [
"/var/lib/${name}"
"/run/${name}"
];
};
};
users.users.${name} = {
isSystemUser = true;
group = name;
home = "/var/lib/${name}";
createHome = true;
description = "Lemonade LLM server daemon";
};
users.groups.${name} = { };
};
};
in
{
imports = [ lemonadeConfig ];
}
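A consuming host can point the service at a sops-managed API key (illustrative sketch; the secret name is hypothetical, while the option names are the ones defined above):

```nix
{
  mjallen.services.lemonade = {
    enable = true;
    host = "0.0.0.0";    # listen beyond loopback
    maxLoadedModels = 2;
    # Hypothetical secret path — any readable file containing the key works.
    apiKeyFile = config.sops.secrets."lemonade/api-key".path;
  };
}
```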

View File

@@ -24,6 +24,9 @@ let
isLighthouse = lib.${namespace}.mkBoolOpt false "Act as a Nebula lighthouse";
isRelay = lib.${namespace}.mkBoolOpt false "Act as a Nebula relay node";
# Override the mkModule port default (80) with the nebula default (4242).
port = lib.${namespace}.mkOpt types.port 4242 "UDP port nebula listens on";
# -----------------------------------------------------------------------
# Network identity
# -----------------------------------------------------------------------
@@ -66,13 +69,13 @@ let
# -----------------------------------------------------------------------
# Peer addressing (ignored on lighthouse nodes)
# -----------------------------------------------------------------------
lighthouses =
lib.${namespace}.mkOpt (types.listOf types.str) [ ]
"Nebula overlay IPs of lighthouse nodes (leave empty on lighthouses)";
lighthouses = lib.${namespace}.mkOpt (types.listOf types.str) [
"10.1.1.1"
] "Nebula overlay IPs of lighthouse nodes (leave empty on lighthouses)";
staticHostMap = lib.${namespace}.mkOpt (types.attrsOf (
types.listOf types.str
)) { } "Static host map: overlay IP → list of public addr:port strings";
staticHostMap = lib.${namespace}.mkOpt (types.attrsOf (types.listOf types.str)) {
"10.1.1.1" = [ "mjallen.dev:4242" ];
} "Static host map: overlay IP → list of public addr:port strings";
# -----------------------------------------------------------------------
# Firewall rules inside the overlay

View File

@@ -1,55 +1,68 @@
{ pkgs, system, ... }:
{
config,
lib,
namespace,
pkgs,
system,
...
}:
let
timezone = "America/Chicago";
inherit (lib.${namespace}) mkOpt;
cfg = config.${namespace}.system;
isArm = system == "aarch64-linux";
in
{
options.${namespace}.system = {
timezone = mkOpt lib.types.str "America/Chicago" "System timezone (e.g. \"America/New_York\").";
environment.systemPackages =
with pkgs;
[
brightnessctl
dconf
disko
kdiskmark
nil
nix-output-monitor
nixos-anywhere
qemu
udisks2
unzip
]
++ (
if isArm then
[ ]
else
[
acpilight
aha
aspell
aspellDicts.en
aspellDicts.en-computers
aspellDicts.en-science
ddcui
ddcutil
ddccontrol
ddccontrol-db
efibootmgr
memtest86-efi
memtest86plus
os-prober
sbctl
tpm2-tools
tpm2-tss
winetricks
]
);
# Time config
time = {
# Set your time zone.
timeZone = timezone;
stateVersion =
mkOpt lib.types.str "23.11"
"NixOS state version. Should match the version in use when the system was first installed.";
};
system.stateVersion = "23.11";
config = {
environment.systemPackages =
with pkgs;
[
brightnessctl
dconf
disko
kdiskmark
nil
nix-output-monitor
nixos-anywhere
qemu
udisks2
unzip
]
++ (
if isArm then
[ ]
else
[
acpilight
aha
aspell
aspellDicts.en
aspellDicts.en-computers
aspellDicts.en-science
ddcui
ddcutil
ddccontrol
ddccontrol-db
efibootmgr
memtest86-efi
memtest86plus
os-prober
sbctl
tpm2-tools
tpm2-tss
winetricks
]
);
time.timeZone = cfg.timezone;
system.stateVersion = cfg.stateVersion;
};
}
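With the hardcoded values lifted into options, a host installed on a later release can override them per machine (sketch using the options defined above):

```nix
{
  mjallen.system = {
    timezone = "America/New_York";
    stateVersion = "25.11"; # match the release the host was first installed with
  };
}
```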

View File

@@ -0,0 +1,248 @@
{
lib,
stdenv,
fetchFromGitHub,
cmake,
ninja,
pkg-config,
rustPlatform,
cargo,
rustc,
# C++ build-time dependencies
boost,
curl,
openssl,
fftw,
fftwFloat, # fftw3f (single-precision)
fftwLongDouble, # fftw3l (long-double-precision)
ffmpeg,
readline,
libdrm,
libuuid,
# ELF patching for the bundled proprietary .so files
autoPatchelfHook,
patchelf,
gcc-unwrapped,
# Access to other flake packages (packages/xrt)
pkgs,
namespace,
}:
# FastFlowLM (FLM) — Ollama-style LLM runtime for AMD Ryzen AI (XDNA 2) NPUs.
#
# Build overview
# ==============
# The repository contains:
# src/ C++20 CMake project → produces the `flm` binary
# third_party/
# tokenizers-cpp/ git submodule — builds tokenizers_cpp (C++) +
# libtokenizers_c.a (Rust staticlib via cargo)
# src/lib/*.so Proprietary NPU kernel libraries (pre-built, bundled)
# src/xclbins/ AIE bitstreams (pre-built, loaded at runtime by .so)
# src/model_list.json Model registry
#
# Runtime prerequisites (managed outside this package):
# • Linux >= 6.14 with amdxdna in-tree driver, or amdxdna-dkms on older
# kernels
# • linux-firmware >= 20260221 (NPU firmware >= 1.1.0.0)
# • Memlock = unlimited for the FLM process
# • packages/xrt (libxrt_coreutil) built and available
#
# To update to a new release
# ==========================
# 1. Bump `version` below.
# 2. Update `srcHash` (run: nix-prefetch-git --url ...FastFlowLM --rev v<X>).
# 3. If the tokenizers-cpp submodule rev changed (check .gitmodules / git
# submodule status), update `tokenizersRev` and `tokenizersHash`:
# nix-prefetch-git --url .../tokenizers-cpp --rev <REV> --fetch-submodules
# 4. Update `cargoVendorHash`: set to lib.fakeHash, run nix build, copy hash.
let
version = "0.9.36";
# XRT userspace runtime — built from packages/xrt in this flake.
xrt = pkgs.${namespace}.xrt;
# ── tokenizers-cpp submodule ──────────────────────────────────────────────
# Pinned to the commit referenced in FastFlowLM v0.9.36 .gitmodules.
tokenizersRev = "34885cfd7b9ef27b859c28a41e71413dd31926f5";
tokenizers-cpp-src = fetchFromGitHub {
owner = "mlc-ai";
repo = "tokenizers-cpp";
rev = tokenizersRev;
# Includes sentencepiece + msgpack sub-submodules.
hash = "sha256-m3A9OhCXJgvvV9UbVL/ijaUC1zkLHlddnQLqZEA5t4w=";
fetchSubmodules = true;
};
# Vendor the Rust crates from tokenizers-cpp/rust/Cargo.toml offline.
# This fixed-output derivation has network access; everything else is sandboxed.
# To compute the hash: set to lib.fakeHash → nix build → copy printed hash.
cargoVendorDir = rustPlatform.fetchCargoVendor {
src = tokenizers-cpp-src;
sourceRoot = "source/rust";
hash = lib.fakeHash; # FIXME: replace after first successful build attempt
};
in
stdenv.mkDerivation rec {
pname = "fastflowlm";
inherit version;
src = fetchFromGitHub {
owner = "FastFlowLM";
repo = "FastFlowLM";
rev = "v${version}";
# We do NOT fetch submodules here — tokenizers-cpp is injected separately
# (above) so that its Rust deps can be vendored in a fixed-output derivation.
hash = "sha256-uq/ZxvJA5HTJbMxofO4Hrz7ULvV1fPC7OHRXulMqwqw=";
};
nativeBuildInputs = [
cmake
ninja
pkg-config
cargo
rustc
autoPatchelfHook
patchelf
];
buildInputs = [
boost
curl
openssl
fftw
fftwFloat
fftwLongDouble
ffmpeg
readline
libdrm
libuuid
xrt
# libstdc++ / libgcc_s needed at runtime by the bundled NPU .so files.
gcc-unwrapped.lib
];
# autoPatchelfHook uses runtimeDependencies to add NEEDED entries to the
# ELF RPATH, covering libraries that the bundled .so files depend on.
runtimeDependencies = [
xrt
gcc-unwrapped.lib
fftw
fftwFloat
fftwLongDouble
ffmpeg
curl
openssl
boost
readline
libdrm
];
# CMakeLists.txt lives in src/, not the repo root.
cmakeDir = "src";
preConfigure = ''
# 1. Populate the tokenizers-cpp submodule directory
# CMakeLists.txt references the submodule as:
# add_subdirectory(''${CMAKE_SOURCE_DIR}/../third_party/tokenizers-cpp ...)
# The cmake setup hook unpacks sources to $TMPDIR/source; we write the
# submodule content there before cmake is invoked.
mkdir -p third_party/tokenizers-cpp
cp -r --no-preserve=mode,ownership "${tokenizers-cpp-src}/." \
third_party/tokenizers-cpp/
# 2. Configure cargo to use the pre-vendored crates (offline)
mkdir -p third_party/tokenizers-cpp/rust/.cargo
cat > third_party/tokenizers-cpp/rust/.cargo/config.toml << EOF
[source.crates-io]
replace-with = "vendored-sources"
[source.vendored-sources]
directory = "${cargoVendorDir}"
EOF
'';
cmakeFlags = [
# The build system requires these two version strings (checked at configure).
"-DFLM_VERSION=${version}"
"-DNPU_VERSION=32.0.203.311"
"-DCMAKE_BUILD_TYPE=Release"
# Override the default XRT install prefix (/opt/xilinx/xrt).
"-DXRT_INCLUDE_DIR=${xrt}/include"
"-DXRT_LIB_DIR=${xrt}/lib"
# xclbins/ path baked into the binary via CMAKE_XCLBIN_PREFIX.
"-DCMAKE_XCLBIN_PREFIX=${placeholder "out"}/share/flm"
];
installPhase = ''
runHook preInstall
cmake --install . --prefix "$out"
# Copy bundled proprietary NPU kernel .so files
# The upstream CMakeLists installs them via:
# file(GLOB so_libs "''${CMAKE_SOURCE_DIR}/lib/*.so")
# install(FILES ''${so_libs} DESTINATION lib)
# and sets RPATH=$ORIGIN/../lib on the flm binary.
# We reproduce that layout: $out/lib/lib*.so alongside $out/bin/flm.
mkdir -p "$out/lib"
for so in "$src/src/lib"/lib*.so; do
install -m755 "$so" "$out/lib/"
done
runHook postInstall
'';
# autoPatchelfHook runs automatically and patches the bundled .so files.
# We additionally fix the RPATH on the flm binary to include both:
# • $out/lib (bundled NPU .so files)
# • system libs path (XRT, ffmpeg, boost, …)
postFixup = ''
patchelf \
--set-rpath "${lib.makeLibraryPath buildInputs}:$out/lib" \
"$out/bin/flm"
'';
meta = with lib; {
description = "LLM runtime for AMD Ryzen AI XDNA 2 NPUs";
longDescription = ''
FastFlowLM (FLM) runs large language models on AMD Ryzen AI (XDNA 2)
NPU silicon (Strix Point, Strix Halo, Kraken Point, Gorgon Point).
It provides an Ollama-compatible REST API (port 52625) and a CLI.
Models are stored in ~/.config/flm/ by default;
override with the FLM_MODEL_PATH environment variable.
Usage:
flm validate # check NPU driver + firmware health
flm run llama3.2:1b # interactive chat (downloads model on first run)
flm serve llama3.2:1b # OpenAI-compatible server on port 52625
flm list # list available models
flm pull <model> # pre-download a model
System requirements:
Linux >= 6.14 (amdxdna in-tree) or amdxdna-dkms on older kernels
linux-firmware >= 20260221 (NPU firmware >= 1.1.0.0)
Unlimited memlock for the flm process, e.g. in NixOS:
security.pam.loginLimits = [{
domain = "*"; type = "-";
item = "memlock"; value = "unlimited";
}];
License note: CLI/orchestration code is MIT. The bundled NPU kernel
shared libraries are proprietary (free for commercial use up to
USD 10 M annual revenue). See LICENSE_BINARY.txt upstream.
'';
homepage = "https://fastflowlm.com";
license = with licenses; [
mit
unfreeRedistributable
];
mainProgram = "flm";
platforms = [ "x86_64-linux" ];
maintainers = [ ];
};
}

View File

@@ -0,0 +1,136 @@
{
lib,
stdenv,
fetchFromGitHub,
cmake,
ninja,
pkg-config,
# C++ build-time dependencies
curl,
openssl,
zlib,
nlohmann_json,
libwebsockets,
cli11,
# Linux system libraries (optional, enable richer features)
systemd,
libdrm,
libcap,
}:
let
# cpp-httplib is not yet in nixpkgs; pre-fetch for CMake FetchContent override.
# The CMakeLists.txt version gate requires >= 0.26.0.
cpp-httplib-src = fetchFromGitHub {
owner = "yhirose";
repo = "cpp-httplib";
rev = "v0.26.0";
hash = "sha256-+VPebnFMGNyChM20q4Z+kVOyI/qDLQjRsaGS0vo8kDM=";
};
in
stdenv.mkDerivation rec {
pname = "lemonade";
version = "10.0.1";
src = fetchFromGitHub {
owner = "lemonade-sdk";
repo = "lemonade";
rev = "v${version}";
hash = "sha256-aswK7OXMWTFUNHrrktf1Vx3nvTkLWMEhAgWlil1Zu2c=";
};
nativeBuildInputs = [
cmake
ninja
pkg-config
];
buildInputs = [
curl
openssl
zlib
nlohmann_json
libwebsockets
cli11
systemd
libdrm
libcap
];
cmakeFlags = [
# Disable the web app (requires Node.js / npm at build time)
"-DBUILD_WEB_APP=OFF"
# Disable Linux tray app (requires GTK3 + AppIndicator3; optional)
"-DREQUIRE_LINUX_TRAY=OFF"
"-DCMAKE_BUILD_TYPE=Release"
# Prevent FetchContent from reaching the network (sandbox-safe)
"-DFETCHCONTENT_FULLY_DISCONNECTED=ON"
# Provide pre-fetched sources for deps not in nixpkgs
"-DFETCHCONTENT_SOURCE_DIR_HTTPLIB=${cpp-httplib-src}"
];
installPhase = ''
runHook preInstall
mkdir -p "$out/bin"
# HTTP server: pure OpenAI-compatible REST API, no CLI interface
install -Dm755 lemonade-router "$out/bin/lemonade-router"
# Console CLI client: primary user-facing tool (serve, list, pull, run,
# status, stop, logs, recipes, …); manages the lemonade-router process
install -Dm755 lemonade-server "$out/bin/lemonade-server"
# Standalone HTTP-only CLI client (newer, lightweight alternative to
# lemonade-server for scripting; requires a running lemonade-router)
if [ -f lemonade ]; then
install -Dm755 lemonade "$out/bin/lemonade"
fi
# Resources: model registry (server_models.json), backend version config,
# and static web UI assets served by lemonade-router.
#
# get_resource_path() in path_utils.cpp resolves files as:
# <exe_dir>/resources/<file> (first check, highest priority)
# /usr/share/lemonade-server/resources/<file> (fallback)
#
# We install the real files under share/ and symlink bin/resources →
# ../share/lemonade-server/resources so the first check succeeds regardless
# of whether the Nix store path is in any of the hardcoded fallback prefixes.
if [ -d resources ]; then
mkdir -p "$out/share/lemonade-server/resources"
cp -r resources/. "$out/share/lemonade-server/resources/"
ln -s "$out/share/lemonade-server/resources" "$out/bin/resources"
fi
runHook postInstall
'';
meta = with lib; {
description = "Local LLM server that serves optimized LLMs from GPUs and NPUs";
longDescription = ''
Lemonade helps users discover and run local AI apps by serving
optimized LLMs, images, and speech directly from their own GPUs and
NPUs. It exposes an OpenAI-compatible REST API at
http://localhost:8000/api/v1 and bundles a web UI, model manager,
and CLI client.
Binaries:
lemonade-router HTTP server (OpenAI-compatible API on :8000)
lemonade-server CLI client: serve, list, pull, run, status, stop, …
lemonade standalone HTTP-only CLI for scripting
Typical usage:
lemonade-server serve # start the server (headless on Linux)
lemonade-server list # browse available models
lemonade-server pull Gemma-3-4b-it-GGUF
lemonade-server run Gemma-3-4b-it-GGUF
'';
homepage = "https://lemonade-server.ai";
license = licenses.asl20;
mainProgram = "lemonade-server";
platforms = platforms.linux;
maintainers = [ ];
};
}

packages/xrt/default.nix Normal file
View File

@@ -0,0 +1,114 @@
{
lib,
stdenv,
fetchFromGitHub,
cmake,
ninja,
pkg-config,
python3,
boost,
curl,
openssl,
systemd,
libdrm,
ncurses,
protobuf,
elfutils,
zlib,
rapidjson,
util-linux, # provides libuuid
xz, # provides liblzma
}:
# AMD XRT (Xilinx Runtime) userspace library for NPU (XDNA 2) devices.
#
# This package builds the XRT base library from the commit pinned as a
# submodule in amd/xdna-driver. It provides:
# $out/lib/libxrt_coreutil.so — core utility library (linked by flm)
# $out/lib/libxrt_core.so — platform-independent core
# $out/include/xrt/ — public C++ headers
# $out/include/experimental/
#
# The xrt source tree lives under the src/ subdirectory of the Xilinx/XRT
# repository (see src/CMakeLists.txt which includes CMake/nativeLnx.cmake).
#
# XRT version 2.19.0 — pinned to the commit used by amd/xdna-driver main
# as of 2026-03-25 (xrt @ 481583d).
#
# Runtime note: this package only provides the userspace library. The
# kernel driver (amdxdna.ko) is a separate concern:
# • Linux >= 6.14 ships it in-tree (boot.kernelPackages.linux_latest).
# • Older kernels can use hardware.amdxdna.enable (once packaged).
stdenv.mkDerivation rec {
pname = "xrt";
version = "2.19.0";
src = fetchFromGitHub {
owner = "Xilinx";
repo = "XRT";
rev = "481583db9a26cb506a37cab7f1881ae7c7de2f32";
hash = "sha256-WLZDjuuEGd3i77zXpAJkfQy/AszdSQ9pagy64yGX58Q=";
fetchSubmodules = false; # XRT submodules are Windows-only tools
};
nativeBuildInputs = [
cmake
ninja
pkg-config
python3
];
buildInputs = [
boost
curl
openssl
systemd # for libudev (device enumeration)
libdrm
ncurses
protobuf
elfutils # libelf
zlib
rapidjson
util-linux # libuuid
xz # liblzma
];
# XRT's CMakeLists.txt is in the src/ subdirectory.
cmakeDir = "src";
cmakeFlags = [
"-DCMAKE_BUILD_TYPE=Release"
"-DCMAKE_INSTALL_PREFIX=${placeholder "out"}"
# Build the NPU/XDNA variant (skips PCIe FPGA-specific components).
"-DXRT_NATIVE_BUILD=yes"
# Disable components we do not need:
"-DXRT_ENABLE_WERROR=OFF"
# Install libraries to lib/ (some builds default to lib64/).
"-DCMAKE_INSTALL_LIBDIR=lib"
];
# XRT's install target places a setup.sh in the prefix root; we don't need
# that for Nix — the binary wrapper / RPATH mechanism handles library lookup.
postInstall = ''
# Remove the CMake-generated setup.sh not needed in a Nix env.
rm -f "$out"/setup.sh "$out"/setup.csh 2>/dev/null || true
'';
meta = with lib; {
description = "AMD XRT (Xilinx Runtime) userspace library for XDNA NPUs";
longDescription = ''
XRT is the userspace component of AMD's XRT stack for their FPGA and
NPU devices. This package builds only the base library
(libxrt_coreutil, libxrt_core) that FastFlowLM links against to
communicate with the AMD XDNA 2 NPU via the amdxdna kernel driver.
The kernel driver (amdxdna.ko) has shipped in-tree since Linux 6.14.
For older kernels it can be loaded via a DKMS package.
'';
homepage = "https://github.com/Xilinx/XRT";
license = licenses.asl20;
platforms = [ "x86_64-linux" ];
maintainers = [ ];
};
}

View File

@@ -75,23 +75,8 @@
};
network = {
hostName = "macbook-pro-nixos";
iwd.enable = true;
networkmanager.enable = true;
# iwd = {
# enable = true;
# settings = {
# General = {
# EnableNetworkConfiguration = true;
# };
# Rank = {
# BandModifier2_4GHz = 1.0;
# BandModifier5GHz = 5.0;
# BandModifier6GHz = 10.0;
# };
# Network = {
# AutoConnect = true;
# };
# };
# };
extraFirewallCommands = ''
iptables -I INPUT -m pkttype --pkt-type multicast -j ACCEPT
iptables -A INPUT -m pkttype --pkt-type multicast -j ACCEPT
@@ -101,11 +86,6 @@
services = {
nebula = {
enable = true;
port = 4242;
lighthouses = [ "10.1.1.1" ];
staticHostMap = {
"10.1.1.1" = [ "mjallen.dev:4242" ];
};
secretsPrefix = "macbook-pro-nixos/nebula";
secretsFile = lib.snowfall.fs.get-file "secrets/mac-secrets.yaml";
hostSecretName = "macbook-pro-nixos";
@@ -161,8 +141,6 @@
omnissa
]);
networking.networkmanager.wifi.backend = "iwd";
environment.sessionVariables = {
DBX_CONTAINER_MANAGER = "podman";
GSK_RENDERER = "opengl";

View File

@@ -14,7 +14,6 @@ let
in
{
fileSystems = {
"/etc".neededForBoot = true;
# Network shares
"/media/nas/backup" = {
device = "//10.0.1.3/Backup";

View File

@@ -148,7 +148,6 @@
# ###################################################
boot.supportedFilesystems = [ "bcachefs" ];
fileSystems."/etc".neededForBoot = true;
programs.seahorse.enable = false;
}

View File

@@ -40,7 +40,7 @@
};
specialisation.graphical.configuration = {
${namespace}.desktop.cosmic.enable = true;
${namespace}.desktop.plasma.enable = true;
};
boot = {
@@ -56,10 +56,6 @@
};
};
fileSystems = {
"/etc".neededForBoot = true;
};
home-manager.users.nixos.snowfallorg.user.name = "nixos";
sops.defaultSopsFile = lib.mkForce "/dev/null";

View File

@@ -51,7 +51,6 @@ in
};
fileSystems = {
"/etc".neededForBoot = true;
"/media/sdcard" = {
device = "/dev/mmcblk0p1";
fsType = "bcachefs";

View File

@@ -31,7 +31,10 @@
bootloader.lanzaboote.enable = true;
desktop.gnome.enable = true;
desktop.gnome = {
enable = true;
vscodium.enable = true;
};
gaming.enable = true;
@@ -44,6 +47,7 @@
amd = {
enable = true;
lact.enable = true;
coolercontrol.enable = true;
};
};
@@ -61,17 +65,13 @@
network = {
hostName = "allyx";
iwd.enable = true;
networkmanager.enable = true;
};
services = {
nebula = {
enable = true;
port = 4242;
lighthouses = [ "10.1.1.1" ];
staticHostMap = {
"10.1.1.1" = [ "mjallen.dev:4242" ];
};
secretsPrefix = "allyx/nebula";
secretsFile = lib.snowfall.fs.get-file "secrets/allyx-secrets.yaml";
hostSecretName = "allyx";
@@ -79,14 +79,6 @@
};
};
programs.coolercontrol.enable = true;
environment.variables = {
GDK_SCALE = "1";
EDITOR = "${lib.getExe' pkgs.vscodium "codium"} --wait";
VISUAL = "${lib.getExe' pkgs.vscodium "codium"} --wait";
};
services = {
handheld-daemon = {
enable = true;
@@ -132,10 +124,9 @@
};
systemd = {
services = {
systemd-networkd-wait-online.enable = lib.mkForce false;
power-profiles-daemon.enable = lib.mkForce false;
inputplumber.enable = lib.mkForce false;
};
};
networking.networkmanager.wifi.backend = "iwd";
}

View File

@@ -132,6 +132,12 @@ in
port = 2283;
reverseProxy = enabled;
};
lemonade = {
enable = true;
port = 8001;
modelsDir = "/media/nas/main/ai/lemonade/models";
reverseProxy = disabled;
};
jellyfin = {
enable = true;
port = 8096;

View File

@@ -37,9 +37,9 @@ in
# # Desktop # #
# ###################################################
# COSMIC is enabled for occasional local display access.
# Plasma is enabled for occasional local display access.
# headless.enable only disables watchdog/emergency mode, not the display server.
desktop.cosmic = enabled;
desktop.plasma = enabled;
# ###################################################
# # Development # #
@@ -285,7 +285,6 @@ in
];
};
"/etc".neededForBoot = true;
};
# Ensure Samba share root directories are owned by nix-apps:jallen-nas

View File

@@ -56,6 +56,7 @@ in
"headscale"
"immich"
"jellyfin"
"lemonade"
"jellyseerr"
"lubelogger"
"manyfold"

View File

@@ -366,6 +366,19 @@ in
];
};
# Grafana reads ntfy credentials via systemd EnvironmentFile so the
# $__env{} provider works in alerting provisioning YAML. The file
# provider ($__file{}) only works in grafana.ini, not in provisioning.
"grafana.env" = {
content = ''
GRAFANA_NTFY_USER=${config.sops.placeholder."jallen-nas/ntfy/user"}
GRAFANA_NTFY_PASSWORD=${config.sops.placeholder."jallen-nas/ntfy/password"}
'';
mode = "0400";
owner = "grafana";
restartUnits = [ "grafana.service" ];
};
# CrowdSec HTTP notification plugin config with credentials baked in.
# The plugin process spawned by crowdsec/cscli reads this file directly.
# Credentials are embedded in the URL using HTTP basic auth so no

View File

@@ -2,8 +2,8 @@
{
# as well as the libraries available from your flake's inputs.
lib,
# An instance of `pkgs` with your overlays and packages applied is also available.
pkgs,
# # An instance of `pkgs` with your overlays and packages applied is also available.
# pkgs,
# # You also have access to your flake's inputs.
# inputs,
@@ -40,7 +40,10 @@
enable = false;
wallpaperSource = "bing";
};
gnome.enable = true;
gnome = {
enable = true;
vscodium.enable = true;
};
};
gaming.enable = true;
@@ -54,6 +57,7 @@
amd = {
enable = true;
lact.enable = true;
coolercontrol.enable = true;
};
};
@@ -71,16 +75,13 @@
network = {
hostName = "matt-nixos";
iwd.enable = true;
networkmanager.enable = true;
};
services = {
nebula = {
enable = true;
port = 4242;
lighthouses = [ "10.1.1.1" ];
staticHostMap = {
"10.1.1.1" = [ "mjallen.dev:4242" ];
};
secretsPrefix = "matt-nixos/nebula";
secretsFile = lib.snowfall.fs.get-file "secrets/desktop-secrets.yaml";
hostSecretName = "matt-nixos";
@@ -88,17 +89,6 @@
};
};
programs.coolercontrol.enable = true;
systemd.services.systemd-networkd-wait-online.enable = lib.mkForce false;
environment.variables = {
GDK_SCALE = "1";
EDITOR = "${lib.getExe' pkgs.vscodium "codium"} --wait";
VISUAL = "${lib.getExe' pkgs.vscodium "codium"} --wait";
};
networking.networkmanager.wifi.backend = "iwd";
# security.wrappers.librepods = {
# source = "${pkgs.${namespace}.librepods}/bin/librepods";
# owner = "matt";
@@ -113,15 +103,13 @@
# kernelPackages = lib.mkOverride 90 pkgs.${namespace}.linuxPackages_cachyos-rc-lto-znver4;
# };
#};
"cosmic" = {
"plasma" = {
configuration = {
${namespace} = {
sops.enable = true;
desktop = {
cosmic.enable = lib.mkForce true;
hyprland = {
enable = lib.mkForce false;
};
plasma.enable = lib.mkForce true;
hyprland.enable = lib.mkForce false;
gnome.enable = lib.mkForce false;
};
};

View File

@@ -24,7 +24,6 @@ in
};
fileSystems = {
"/etc".neededForBoot = true;
"/media/matt/data" = {
device = "/dev/disk/by-uuid/f851d21e-27b3-4353-aa19-590d244db6e5";
fsType = "bcachefs";

View File

@@ -0,0 +1,4 @@
{ pkgs, ... }:
{
boot.kernelPackages = pkgs.cachyosKernels.linuxPackages-cachyos-latest-lto-x86_64-v4;
}

View File

@@ -0,0 +1,181 @@
# Home Assistant dashboard definitions for nuc-nixos.
# Imported by default.nix and passed into the home-assistant module's
# `dashboards` option.
{ namespace, ... }:
{
${namespace}.services.home-assistant.dashboards = [
{
title = "Pets & Air Quality";
path = "pets-air";
icon = "mdi:paw";
cards = [
{
type = "markdown";
title = "🐾 Pets";
content = "## Pet Management";
}
{
type = "horizontal-stack";
cards = [
{
type = "entity";
entity = "select.joey_smart_feeder_manual_feed_quantity";
name = "Joey Feed";
}
{
type = "entity";
entity = "select.luci_smart_feeder_manual_feed_quantity";
name = "Luci Feed";
}
{
type = "entity";
entity = "text.joey_smart_feeder_text_on_display";
name = "Joey Display";
}
{
type = "entity";
entity = "text.luci_smart_feeder_text_on_display";
name = "Luci Display";
}
];
}
{
type = "entities";
title = "🐱 Litter Robot";
show_header_toggle = false;
entities = [
"vacuum.litter_robot_4_litter_box"
{
type = "button";
entity = "button.garbage_goober_deep";
name = "Deep Clean";
}
{
type = "button";
entity = "button.garbage_goober_pet_area_cleaning";
name = "Pet Area";
}
{
type = "button";
entity = "button.garbage_goober_intensive_sweeping";
name = "Intensive";
}
];
}
{
type = "markdown";
title = "🌬 Air Quality";
content = "## Air Purifiers & Humidity";
}
{
type = "horizontal-stack";
cards = [
{
type = "tile";
entity = "fan.living_room_air_purifier";
name = "Living Room";
icon = "mdi:air-purifier";
features = [
{
type = "fan-speed";
speed_count = 3;
}
];
}
{
type = "tile";
entity = "fan.bedroom_air_purifier";
name = "Bedroom";
icon = "mdi:air-purifier";
features = [
{
type = "fan-speed";
speed_count = 3;
}
];
}
{
type = "tile";
entity = "fan.bedroom_fan";
name = "Bedroom Fan";
icon = "mdi:fan";
features = [
{
type = "fan-speed";
speed_count = 4;
presets = [
{
name = "auto";
icon = "mdi:fan-auto";
}
{
name = "low";
icon = "mdi:fan-speed-1";
}
{
name = "medium";
icon = "mdi:fan-speed-2";
}
{
name = "high";
icon = "mdi:fan-speed-3";
}
];
}
];
}
];
}
{
type = "horizontal-stack";
cards = [
{
type = "tile";
entity = "humidifier.bedroom_humidifier";
name = "Bedroom Humidifier";
icon = "mdi:water-percent";
features = [
{
type = "humidifier-toggle";
}
{
type = "humidifier-modes";
modes = [
{
name = "auto";
icon = "mdi:refresh-auto";
}
{
name = "low";
icon = "mdi:water-percent";
}
{
name = "medium";
icon = "mdi:water-percent";
}
{
name = "high";
icon = "mdi:water-percent";
}
];
}
{
type = "humidifier-target-humidity";
}
];
}
];
}
{
type = "entities";
title = " Status";
show_header_toggle = false;
entities = [
"binary_sensor.bedroom_humidifier_low_water"
"binary_sensor.bedroom_humidifier_water_tank_lifted"
];
}
];
}
];
}

View File

@@ -1,34 +1,23 @@
{ namespace, ... }:
{
pkgs,
namespace,
...
}:
{
imports = [
./boot.nix
./dashboard.nix
];
${namespace} = {
sops.enable = true;
# ###################################################
# # Boot # #
# ###################################################
bootloader.lanzaboote.enable = true;
# ###################################################
# # Hardware # #
# ###################################################
hardware.disko = {
enable = true;
enableLuks = true;
filesystem = "btrfs";
# rootDisk = "/dev/loop0";
};
headless.enable = true;
# ###################################################
# # Impermanence # #
# ###################################################
impermanence = {
enable = true;
extraDirectories = [
@@ -41,10 +30,6 @@
];
};
# ###################################################
# # Network # #
# ###################################################
network = {
hostName = "nuc-nixos";
ipv4 = {
@@ -71,194 +56,11 @@
};
};
# ###################################################
# # Security # #
# ###################################################
security.tpm.enable = true;
# ###################################################
# # Services # #
# ###################################################
services = {
home-assistant = {
enable = true;
dashboards = [
{
title = "Pets & Air Quality";
path = "pets-air";
icon = "mdi:paw";
cards = [
{
type = "markdown";
title = "🐾 Pets";
content = "## Pet Management";
}
{
type = "horizontal-stack";
cards = [
{
type = "entity";
entity = "select.joey_smart_feeder_manual_feed_quantity";
name = "Joey Feed";
}
{
type = "entity";
entity = "select.luci_smart_feeder_manual_feed_quantity";
name = "Luci Feed";
}
{
type = "entity";
entity = "text.joey_smart_feeder_text_on_display";
name = "Joey Display";
}
{
type = "entity";
entity = "text.luci_smart_feeder_text_on_display";
name = "Luci Display";
}
];
}
{
type = "entities";
title = "🐱 Litter Robot";
show_header_toggle = false;
entities = [
"vacuum.litter_robot_4_litter_box"
{
type = "button";
entity = "button.garbage_goober_deep";
name = "Deep Clean";
}
{
type = "button";
entity = "button.garbage_goober_pet_area_cleaning";
name = "Pet Area";
}
{
type = "button";
entity = "button.garbage_goober_intensive_sweeping";
name = "Intensive";
}
];
}
{
type = "markdown";
title = "🌬 Air Quality";
content = "## Air Purifiers & Humidity";
}
{
type = "horizontal-stack";
cards = [
{
type = "tile";
entity = "fan.living_room_air_purifier";
name = "Living Room";
icon = "mdi:air-purifier";
features = [
{
type = "fan-speed";
speed_count = 3;
}
];
}
{
type = "tile";
entity = "fan.bedroom_air_purifier";
name = "Bedroom";
icon = "mdi:air-purifier";
features = [
{
type = "fan-speed";
speed_count = 3;
}
];
}
{
type = "tile";
entity = "fan.bedroom_fan";
name = "Bedroom Fan";
icon = "mdi:fan";
features = [
{
type = "fan-speed";
speed_count = 4;
presets = [
{
name = "auto";
icon = "mdi:fan-auto";
}
{
name = "low";
icon = "mdi:fan-speed-1";
}
{
name = "medium";
icon = "mdi:fan-speed-2";
}
{
name = "high";
icon = "mdi:fan-speed-3";
}
];
}
];
}
];
}
{
type = "horizontal-stack";
cards = [
{
type = "tile";
entity = "humidifier.bedroom_humidifier";
name = "Bedroom Humidifier";
icon = "mdi:water-percent";
features = [
{
type = "humidifier-toggle";
}
{
type = "humidifier-modes";
modes = [
{
name = "auto";
icon = "mdi:refresh-auto";
}
{
name = "low";
icon = "mdi:water-percent";
}
{
name = "medium";
icon = "mdi:water-percent";
}
{
name = "high";
icon = "mdi:water-percent";
}
];
}
{
type = "humidifier-target-humidity";
}
];
}
];
}
{
type = "entities";
title = " Status";
show_header_toggle = false;
entities = [
"binary_sensor.bedroom_humidifier_low_water"
"binary_sensor.bedroom_humidifier_water_tank_lifted"
];
}
];
}
];
automation = {
lightswitch = {
living-room-lights = {
@@ -309,21 +111,9 @@
};
};
# ###################################################
# # User # #
# ###################################################
user = {
name = "admin";
linger = true;
};
};
# ###################################################
# # Boot # #
# ###################################################
boot.kernelPackages = pkgs.cachyosKernels.linuxPackages-cachyos-latest-lto-x86_64-v4;
fileSystems."/etc".neededForBoot = true;
}