24 Commits

Author SHA1 Message Date
8bd45f7e07 Runbook: Docker Host Setup with RBAC and Node Exporter on Proxmox 2026-01-23 22:56:10 +13:00
3c645cb215 Merge pull request 'Upload runbook for speedtest setup' (#12) from docker into main
Reviewed-on: #12
2026-01-23 18:58:36 +13:00
f66c0d51fb Upload runbook for speedtest setup 2026-01-23 18:40:11 +13:00
506e1b3045 Merge pull request 'k8s_update2' (#11) from k8s_update2 into main
Reviewed-on: #11
2025-12-08 14:25:00 +13:00
255446d1a8 k8s install update 2 2025-12-08 14:21:56 +13:00
d6448f4039 general 1 2025-12-08 08:07:54 +13:00
6e892435d2 Merge pull request 'update 1.1' (#10) from Kubeadm_1_1 into main
Reviewed-on: #10
2025-12-08 08:03:40 +13:00
0f880b4d91 update 1.1 2025-12-08 08:03:27 +13:00
beed20ea37 Merge pull request 'Kubeadm' (#9) from Kubeadm into main
Reviewed-on: #9
2025-12-08 08:00:57 +13:00
bb1b54b32e update 1 2025-12-08 08:00:26 +13:00
d0f40d8157 something weird again? 2025-12-07 22:13:36 +13:00
e61a54a23a Merge pull request 'Stupid formatting' (#8) from Networking_3 into main
Reviewed-on: #8
2025-12-07 14:52:20 +13:00
317061526a Stupid formatting 2025-12-07 14:52:08 +13:00
a40d8210a6 Merge pull request 'Added the .md' (#6) from Networking_1 into main
Reviewed-on: #6
2025-12-07 14:47:23 +13:00
963569e324 Added the .md 2025-12-07 14:47:10 +13:00
d0236ad079 Merge pull request 'Adding debian specific Static IP address changes, and hostname updates' (#5) from Networking into main
Reviewed-on: #5
2025-12-07 14:46:20 +13:00
37ced0534d Adding debian specific Static IP address changes, and hostname updates 2025-12-07 14:45:56 +13:00
03fe3c8ab9 Merge pull request 'Adding notes about ssh key generation 1' (#4) from keygen_2 into main
Reviewed-on: #4
2025-12-07 14:21:34 +13:00
297c4a8459 Adding notes about ssh key generation 1 2025-12-07 14:21:14 +13:00
e217a003f2 Merge pull request 'Adding notes about ssh key generation' (#3) from keygen into main
Reviewed-on: #3
2025-12-07 14:20:08 +13:00
1535a55316 Adding notes about ssh key generation 2025-12-07 14:19:51 +13:00
67629ca840 Merge pull request 'Notes completed for kvm cloning' (#2) from kvm-clones-1 into main
Reviewed-on: #2
2025-12-07 13:30:11 +13:00
9e9efba5c3 Notes completed for kvm cloning 2025-12-07 13:29:48 +13:00
ce332cd791 Merge pull request 'v0.1' (#1) from kvm-clones into main
Reviewed-on: #1
2025-12-07 13:02:57 +13:00
10 changed files with 1147 additions and 1 deletions

View File

@@ -0,0 +1,104 @@
# Speedtest Tracker (LinuxServer.io) Installation Runbook
## Purpose
Deploy **Speedtest Tracker** using the **LinuxServer.io Docker image** to automatically track internet performance over time using Ookla Speedtest.
This runbook covers:
- Application key requirements
- Docker Compose configuration
- Initial access and login
- Validation checks
---
## 1. Prerequisites
- Docker and Docker Compose installed
- A persistent storage location available on the host
- LAN access to the host
- Known timezone (e.g. `Pacific/Auckland`)
---
## 2. Application Key (APP_KEY)
⚠️ **Mandatory:** the container will refuse to start without an application key.
The LinuxServer.io image **does not generate an APP_KEY automatically**.
A valid key **must be generated externally** and provided via environment variables **before the container starts**.
### Important notes
- The key must be in `base64:` format
- The key must remain stable for the lifetime of the deployment
- Regenerating the key later will invalidate encrypted data
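A suitable key can be generated externally, for example with `openssl` (a minimal sketch, assuming `openssl` is available on the host):
```bash
# Produces a random 32-byte key in the expected "base64:" format
echo "base64:$(openssl rand -base64 32)"
```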
---
## 3. Create Docker Compose File
Create or edit `docker-compose.yml`:
```
services:
speedtest-tracker:
image: lscr.io/linuxserver/speedtest-tracker:latest
container_name: speedtest-tracker
restart: unless-stopped
ports:
- "8765:80"
environment:
- PUID=1000
- PGID=1000
- TZ=Pacific/Auckland
- DISPLAY_TIMEZONE=Pacific/Auckland
- APP_KEY=base64:REDACTED
- APP_URL=http://192.168.50.253:8765
- DB_CONNECTION=sqlite
- SPEEDTEST_SCHEDULE=0 * * * *
volumes:
- /mnt/storage01/docker/speedtest-tracker:/config
```
## 4. Prepare Persistent Storage
Ensure the host directory exists and is owned by the configured PUID/PGID.
```bash
mkdir -p /mnt/storage01/docker/speedtest-tracker
chown -R 1000:1000 /mnt/storage01/docker/speedtest-tracker
```
## 5. Start the Container
Start the service using Docker Compose:
```bash
docker compose up -d
```
## 6. Access the site
```
http://<IP_ADDRESS>:8765
```
## 7. Initial Login
The LinuxServer.io image includes a pre-seeded default administrator account.
Use the following credentials to log in for the first time:
```
Email: admin@example.com
Password: password
```
## 8. User Account
Now create a new user and make it an administrator.
Log out as admin, and sign in with the newly created account.
Change the default admin to a regular user account, then delete the guest account.
## FIN

View File

@@ -0,0 +1,255 @@
# Runbook: Docker Host Setup with RBAC and Node Exporter on Proxmox
## Purpose
This runbook describes how to prepare a Proxmox host to run Docker services safely and predictably, using:
* ZFS-backed storage (not the OS disk)
* Role-based access control (RBAC)
* Non-root daily administration
* Docker Engine installed from the official repository
* Node Exporter as a first monitoring workload
The goal is to support **multiple administrators**, minimise risk, and avoid VM sprawl for simple host-level services.
---
## Scope
This runbook covers:
* Storage decisions on a Proxmox host
* Admin group and user setup
* Docker installation and configuration
* Docker data-root relocation to ZFS
* Node Exporter deployment via Docker Compose
This runbook does **not** cover:
* Prometheus or Grafana configuration
* Firewall rules
* Proxmox cluster configuration
* Advanced Docker security hardening
---
## Assumptions
* Proxmox VE is already installed and operational
* A ZFS pool exists (e.g. `/tank`)
* SSH access to the Proxmox host is available
* Root access is available for initial setup
---
## High-Level Design
### Storage Model
* Proxmox OS and system services remain on the OS disk
* ZFS pool is used for:
* Docker engine data
* Docker service directories
* Proxmox-managed VM storage remains isolated (e.g. `tank/vmdata`)
### Access Model (RBAC)
* One top-level admin group with full control
* Sub-admin groups for scoped access (Docker, VMs, monitoring)
* Users operate as themselves, not as root
* Root access is only used when required
---
## Admin Groups
Create the following Unix groups:
* `sysadmin`: full administrative access (sudo)
* `docker-admin`: Docker administration
* `vm-admin`: VM and Proxmox-related tasks
* `monitoring-admin`: monitoring-related services
> Groups represent **roles**, not individuals.
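These groups can be created with standard tooling, for example:
```bash
sudo groupadd sysadmin
sudo groupadd docker-admin
sudo groupadd vm-admin
sudo groupadd monitoring-admin
```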
---
## User Setup
* Create named user accounts for administrators
* Add full administrators to:
* `sysadmin`
* `docker-admin`
* Sub-admins may be added only to the groups they require
### Sudo Access
* Grant full sudo access to the `sysadmin` group via `/etc/sudoers.d/`
* Avoid per-user sudo rules
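A minimal drop-in for the `sysadmin` group might look like this (a sketch; the filename is an assumption, and it should be created with `visudo -f` so syntax errors are caught):
```
# /etc/sudoers.d/sysadmin -- edit with: sudo visudo -f /etc/sudoers.d/sysadmin
%sysadmin ALL=(ALL:ALL) ALL
```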
---
## Docker Base Directory
All Docker-related data must live under the ZFS pool.
Recommended structure:
```
/tank/docker
├── engine/ # Docker internal data-root
├── node-exporter/ # Monitoring exporter
└── <future-services>/
```
Ownership and permissions:
* Owner: `root`
* Group: `docker-admin`
* Permissions: `2775` (setgid enabled)
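Expressed as commands, that might look like the following sketch (paths follow the structure above):
```bash
sudo mkdir -p /tank/docker/engine /tank/docker/node-exporter
sudo chown -R root:docker-admin /tank/docker
sudo chmod 2775 /tank/docker   # setgid so new entries inherit the docker-admin group
```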
---
## Docker Installation
Install Docker Engine from the **official Docker repository**.
Reasons:
* Predictable updates
* Supported versions
* Includes Compose plugin
Packages installed:
* docker-ce
* docker-ce-cli
* containerd.io
* docker-buildx-plugin
* docker-compose-plugin
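A sketch of the installation, based on Docker's published Debian instructions (verify against the current upstream docs before running):
```bash
# Add Docker's official GPG key and apt repository
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install the packages listed above
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```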
---
## Docker Data-Root Configuration
Docker's internal data-root must be moved off the OS disk.
Configure Docker to use:
```
/tank/docker/engine
```
via `/etc/docker/daemon.json`:
```json
{
"data-root": "/tank/docker/engine",
"group": "docker-admin"
}
```
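After writing the file, restart Docker and confirm the new data-root is in use (a quick sanity check):
```bash
sudo systemctl restart docker
docker info --format '{{ .DockerRootDir }}'   # should print /tank/docker/engine
```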
---
## Docker Socket Permissions
Docker access is controlled via the Unix socket:
```
/var/run/docker.sock
```
To align with RBAC:
* Override the systemd socket unit
* Set socket group to `docker-admin`
This allows Docker administration without:
* root shells
* sudo for every command
* use of the default `docker` group
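The socket-group override mentioned above might look like this (a sketch; create it with `sudo systemctl edit docker.socket`):
```
# /etc/systemd/system/docker.socket.d/override.conf
[Socket]
SocketGroup=docker-admin
```
Follow up with `sudo systemctl daemon-reload` and `sudo systemctl restart docker.socket docker.service` so the group change takes effect.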
---
## Verification (Docker Access)
As a non-root admin user:
```bash
docker version
docker info
```
Success criteria:
* No `permission denied` errors
* Docker daemon responds
* Commands run without sudo
---
## Node Exporter Deployment
Node Exporter is used to expose host-level metrics.
### Deployment Model
* Runs directly on the Proxmox host
* Deployed via Docker Compose
* Uses host networking
* Read-only filesystem access
### Service Directory
```
/tank/docker/node-exporter
```
### Compose Characteristics
* `network_mode: host`
* `pid: host`
* Read-only root filesystem
* Bind mounts for `/proc`, `/sys`, and `/`
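A compose sketch matching those characteristics (the image tag and flag values below are typical choices, not requirements from this runbook):
```yaml
# /tank/docker/node-exporter/docker-compose.yml
services:
  node-exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    network_mode: host
    pid: host
    read_only: true
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--path.rootfs=/rootfs'
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
```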
### Verification
```bash
curl http://localhost:9100/metrics
```
A successful response returns a large metrics output.
---
## Operational Notes
* No monitoring services (Prometheus, Grafana) should run on the Proxmox host
* Exporters only — no stateful services
* Docker services should remain minimal and infrastructure-focused
* ZFS allows future snapshotting and rollback if required
---
## Outcome
After completing this runbook, the Proxmox host will have:
* Clean separation between OS, VM storage, and Docker workloads
* ZFS-backed Docker storage
* Role-based admin access
* Non-root Docker administration
* A working Node Exporter endpoint ready for Prometheus
---
## Future Extensions
* Add Prometheus scrape configuration
* Add firewall rules for exporter ports
* Add additional exporters (SMART, Proxmox)
* Automate via Ansible or Terraform
* Convert into a standard “Host Baseline” template

View File

@@ -3,4 +3,82 @@
*Prep*
Install the base OS.
Install all required apps
* openssh-server
Install all required updates
Shut down the guest
* virsh shutdown $guest-vm
On the VM host server, make sure you have libguestfs-tools
```
apt list --installed |grep -i libguestfs-tools
```
If it's not there, install it.
```
sudo apt install libguestfs-tools
```
This step strips stuff that must be unique per VM (machine-id, SSH keys, etc.) from the *template*.
```
sudo virt-sysprep -d $guest-vm \
--operations machine-id,ssh-hostkeys,udev-persistent-net,logfiles,tmp-files
```
Your output should be similar to the following:
```
sudo virt-sysprep -d Debian-Base --operations machine-id,ssh-hostkeys,udev-persistent-net,logfiles,tmp-files
[ 0.0] Examining the guest ...
[ 17.4] Performing "logfiles" ...
[ 17.6] Performing "machine-id" ...
[ 17.6] Performing "ssh-hostkeys" ...
[ 17.6] Performing "tmp-files" ...
[ 17.6] Performing "udev-persistent-net" ...
```
The base is now ready to go.
**Create Clone**
```
sudo virt-clone --original $guest-vm \
    --name guest-01 \
    --auto-clone
```
Example
```
sudo virt-clone --original Debian-Base \
--name Node01 \
--auto-clone
Allocating 'Node01.qcow2' | 1.6 GB 00:00:03 ...
Clone 'Node01' created successfully
```
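If you need several clones, a small loop saves typing (a sketch; the names match the node list below):
```bash
for i in 01 02 03 04 05; do
    sudo virt-clone --original Debian-Base --name "Node${i}" --auto-clone
done
```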
Confirm your clones have been made
```
virsh list --all
Id Name State
---------------------------------
1 downloads running
- Debian-Base shut off
- k8s-node1 shut off
- k8s-node2 shut off
- k8s-node3 shut off
- k8s-node4 shut off
- k8s-node5 shut off
- Node01 shut off
- Node02 shut off
- Node03 shut off
- Node04 shut off
- Node05 shut off
- Ubuntu_Default shut off
```

View File

@@ -0,0 +1,129 @@
**Debian Specific Static IP Address Setup**
Get the interface name by looking at
```
ip a
```
Example - here the interface we are targeting is enp1s0
```
~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:0c:f6:e7 brd ff:ff:ff:ff:ff:ff
altname enx5254000cf6e7
inet 192.168.50.80/24 brd 192.168.50.255 scope global dynamic noprefixroute enp1s0
valid_lft 85984sec preferred_lft 75184sec
inet6 2404:4400:4181:9200:5054:ff:fe0c:f6e7/64 scope global dynamic mngtmpaddr proto kernel_ra
valid_lft 86366sec preferred_lft 86366sec
inet6 2404:4400:4181:9200:617f:906e:3877:3f00/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86366sec preferred_lft 86366sec
inet6 fe80::b2a2:4462:bece:c8b7/64 scope link
valid_lft forever preferred_lft forever
~$
```
We will be updating the interfaces file in the networking directory.
Before we do anything, we always make a backup copy
```
sudo cp /etc/network/interfaces /etc/network/interfaces.bak
```
Looking at the interfaces file, it shows that the interface is set to dynamic (DHCP).
**Original interfaces file**
```
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug enp1s0
iface enp1s0 inet dhcp
# This is an autoconfigured IPv6 interface
iface enp1s0 inet6 auto
```
We will update the ***iface enp1s0 inet dhcp*** section to look like this.
Example of updated file
```
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug enp1s0
iface enp1s0 inet static
address 192.168.50.20
netmask 255.255.255.0
gateway 192.168.50.254
dns-nameservers 192.168.50.254 8.8.8.8
# This is an autoconfigured IPv6 interface
iface enp1s0 inet6 auto
```
After you have made this edit you can restart the service to get the new IP address
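On Debian with ifupdown, restarting the networking service is usually enough (a sketch; be careful if you are connected over this interface):
```bash
sudo systemctl restart networking
# or bounce just the one interface:
sudo ifdown enp1s0 && sudo ifup enp1s0
```
Then check the result with `ip a`: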
```
luddie@Node1-master:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:0c:f6:e7 brd ff:ff:ff:ff:ff:ff
altname enx5254000cf6e7
inet 192.168.50.20/24 brd 192.168.50.255 scope global enp1s0
valid_lft forever preferred_lft forever
inet 192.168.50.80/24 brd 192.168.50.255 scope global secondary dynamic noprefixroute enp1s0
valid_lft 86372sec preferred_lft 75572sec
inet6 2404:4400:4181:9200:617f:906e:3877:3f00/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86369sec preferred_lft 86369sec
inet6 2404:4400:4181:9200:5054:ff:fe0c:f6e7/64 scope global dynamic mngtmpaddr proto kernel_ra
valid_lft 86369sec preferred_lft 86369sec
inet6 fe80::b2a2:4462:bece:c8b7/64 scope link
valid_lft forever preferred_lft forever
luddie@Node1-master:~$
```
The network is now available via the updated ip address... HOWEVER did you see the old IP is still there?
```
inet 192.168.50.80/24 brd 192.168.50.255 scope global secondary dynamic noprefixroute enp1s0
valid_lft 86372sec preferred_lft 75572sec
```
Easiest way of dealing with this...
```
sudo reboot
```
And when the machine comes back up, ssh using the newly statically assigned IP address.
Update - Don't forget to update /etc/resolv.conf with your nameserver address.
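A minimal example using the nameservers configured above:
```
# /etc/resolv.conf
nameserver 192.168.50.254
nameserver 8.8.8.8
```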

Networking/Hostname.md Normal file
View File

@@ -0,0 +1,53 @@
**Setup Hostname**
Log into the host (ssh)
Run the following command
```
sudo hostnamectl set-hostname NewHostName
```
Also need to update the hosts file
```
sudo vi /etc/hosts
```
***Example of old host file***
```
127.0.0.1 localhost
127.0.1.1 old-hostname.vocus.co.nz old-hostname
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
```
***Example of updated hosts file***
```
127.0.0.1 localhost
127.0.1.1 New-hostname.vocus.co.nz New-hostname
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
```
While hostnamectl typically applies the changes immediately, some services or applications might still be referencing the old hostname. You can restart network services or reboot the system for a complete refresh, although often it's not strictly necessary.
To restart network services:
```
sudo systemctl restart network-online.target
```
or just reboot
```
sudo reboot
```
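After the change (and reboot, if you went that route), a quick check:
```bash
hostnamectl    # static hostname should show the new name
hostname -f    # should resolve via the updated /etc/hosts entry
```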

View File

@@ -0,0 +1,402 @@
**Installing k8s with kubeadm**
***PLEASE NOTE***
The default behavior of a kubelet is to fail to start if swap memory is detected on a node.
Disable it by turning it off on the fly
```
sudo swapoff -a
```
or, to make the change persistent across reboots, remove (or comment out) the swap entry in /etc/fstab.
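One common way to comment out the swap entry (a sketch; review the file afterwards):
```bash
# Comments out any fstab line containing a swap field; keeps a backup at /etc/fstab.bak
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```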
**Container runtime - containerd**
Go to - https://github.com/containerd/containerd/blob/main/docs/getting-started.md and follow the instructions
This must be done on all the nodes, master and worker.
1. Download the containerd runtime
```
wget https://github.com/containerd/containerd/releases/download/v2.2.0/containerd-2.2.0-linux-amd64.tar.gz
```
2. Extract it to /usr/local
```
$ sudo tar Cxzvf /usr/local containerd-2.2.0-linux-amd64.tar.gz
bin/
bin/containerd-shim-runc-v2
bin/containerd-shim
bin/ctr
bin/containerd-shim-runc-v1
bin/containerd
bin/containerd-stress
```
3. systemd service
Download the containerd unit file
```
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
```
Place it in the correct location
For Debian trixie it was
```
/usr/lib/systemd/system/
```
Then reload and enable
```
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
```
4. runc
Download the runc binary from https://github.com/opencontainers/runc/releases
```
wget https://github.com/opencontainers/runc/releases/download/v1.4.0/runc.amd64
```
now install it
```
sudo install -m 755 runc.amd64 /usr/local/sbin/runc
```
5. Install CNI plugin
Download the associated CNI plugins from https://github.com/containernetworking/plugins/releases
```
wget https://github.com/containernetworking/plugins/releases/download/v1.8.0/cni-plugins-linux-amd64-v1.8.0.tgz
```
and extract it under /opt/cni/bin
```
sudo mkdir -p /opt/cni/bin
sudo tar Czxvf /opt/cni/bin cni-plugins-linux-amd64-v1.8.0.tgz
```
6. Install kubelet kubeadm kubectl
Update the apt package index and install packages needed to use the Kubernetes apt repository:
```
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
```
Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories so you can disregard the version in the URL:
If the directory `/etc/apt/keyrings` does not exist, create it before running the curl command:
```
sudo mkdir -p -m 755 /etc/apt/keyrings
```
Then download the key
```
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
```
Add the apt repo for k8s 1.34
```
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
```
Install the packages
```
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
Enable kubelet before running kubeadm
```
sudo systemctl enable --now kubelet
```
7. Configure cgroup driver
Both the container runtime and the kubelet have a property called "cgroup driver", which is important for the management of cgroups on Linux machines.
Feel free to go read more at https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
and Advanced topics at https://github.com/containerd/containerd/blob/main/docs/getting-started.md
but in short...
```
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
```
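Then verify the edit took and restart containerd so it picks up the new config:
```bash
grep SystemdCgroup /etc/containerd/config.toml   # should show: SystemdCgroup = true
sudo systemctl restart containerd
```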
8. Network configuration
Enable IPv4 packet forwarding
To manually enable IPv4 packet forwarding:
```
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
```
Validate that it is set to 1
```
sysctl net.ipv4.ip_forward
```
9. Create the Cluster
```
sudo kubeadm init --apiserver-advertise-address=The_Master_Node_IP --pod-network-cidr=POD_NODE_CIDR_BLOCK
```
```
sudo kubeadm init --apiserver-advertise-address=192.168.50.20 --pod-network-cidr=10.244.0.0/16
```
The output should be similar to the following:
```
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.50.20:6443 --token 3m8ogx.4w779pfudv9sr9h0 \
--discovery-token-ca-cert-hash sha256:2b70f7b1939e9a3c17a94ddc94a8714a5782dc476bf50ba915a6c848710dd0ba
```
You want to keep that join for later.
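If you lose it, a fresh join command can be printed from the control plane at any time:
```bash
sudo kubeadm token create --print-join-command
```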
Now create your kube conf
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Look at pods and nodes
```
Node01-master:~$ kubectl get nodes
kubectl get pods -A
NAME STATUS ROLES AGE VERSION
node01-master NotReady control-plane 161m v1.34.2
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bc5c9577-wrfwh 0/1 Pending 0 161m
kube-system coredns-66bc5c9577-ztwtc 0/1 Pending 0 161m
kube-system etcd-node01-master 1/1 Running 0 161m
kube-system kube-apiserver-node01-master 1/1 Running 0 161m
kube-system kube-controller-manager-node01-master 1/1 Running 0 161m
kube-system kube-proxy-tqcnj 1/1 Running 0 161m
kube-system kube-scheduler-node01-master 1/1 Running 0 161m
Node01-master:~$
```
Not everything is running; we have no CNI.
We can create new pods, but they will not start because there is no networking.
```
Node01-master:~$ kubectl run nginx --image=nginx:latest
pod/nginx created
Node01-master:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 0/1 Pending 0 5s
Node01-master:~$ kubectl describe pod nginx -n default
Name: nginx
Namespace: default
Priority: 0
Service Account: default
Node: <none>
Labels: run=nginx
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
nginx:
Image: nginx:latest
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s7gvx (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-s7gvx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 22s default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Node01-master:~$
```
This is not a taint issue. I can add the worker node to the cluster using kubeadm join, and it will still be the same.
```
Node01-master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node01-master NotReady control-plane 167m v1.34.2
node02-worker NotReady <none> 87s v1.34.2
Node01-master:~$ kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
nginx 0/1 Pending 0 3m36s
Node01-master:~$
```
10. Install network add-on (Cilium)
Using https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/ as ref
We only do this on the master node
```
CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
```
```
Node01-master:~$ echo $CLI_VERSION
v0.18.9
Node01-master:~$
```
Download the binary
```
curl -L --remote-name \
https://github.com/cilium/cilium-cli/releases/download/${CLI_VERSION}/cilium-linux-amd64.tar.gz
```
Extract to /usr/local/bin and validate
```
Node01-master:~$ sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
cilium
Node01-master:~$ cilium version
cilium-cli: v0.18.9 compiled with go1.25.5 on linux/amd64
cilium image (default): v1.18.3
cilium image (stable): v1.18.4
cilium image (running): unknown. Unable to obtain cilium version. Reason: release: not found
Node01-master:~$
```
Now install cilium
```
Node01-master:~$ cilium install
Using Cilium version 1.18.3
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected kube-proxy has been installed
Node01-master:~$
```
As you are installing this into k8s, it uses your kubeconfig, so no sudo is needed.
Validate its installed
```
Node01-master:~$ cilium status
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Envoy DaemonSet: OK
\__/¯¯\__/ Hubble Relay: disabled
\__/ ClusterMesh: disabled
DaemonSet cilium Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet cilium-envoy Desired: 2, Ready: 2/2, Available: 2/2
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 2
cilium-envoy Running: 2
cilium-operator Running: 1
clustermesh-apiserver
hubble-relay
Cluster Pods: 3/3 managed by Cilium
Helm chart version: 1.18.3
Image versions cilium quay.io/cilium/cilium:v1.18.3@sha256:5649db451c88d928ea585514746d50d91e6210801b300c897283ea319d68de15: 2
cilium-envoy quay.io/cilium/cilium-envoy:v1.34.10-1761014632-c360e8557eb41011dfb5210f8fb53fed6c0b3222@sha256:ca76eb4e9812d114c7f43215a742c00b8bf41200992af0d21b5561d46156fd15: 2
cilium-operator quay.io/cilium/operator-generic:v1.18.3@sha256:b5a0138e1a38e4437c5215257ff4e35373619501f4877dbaf92c89ecfad81797: 1
Node01-master:~$
```
If you want, there is also a connectivity test (`cilium connectivity test`) that can be run, but it can take some time to complete (more than 120 tests).
If you now look at your cluster, it should be up and running
```
Node01-master:~$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-test-1 client-64d966fcbd-ctrb8 1/1 Running 0 2m46s 10.0.1.63 node02-worker <none> <none>
cilium-test-1 client2-5f6d9498c7-9th7m 1/1 Running 0 2m46s 10.0.1.146 node02-worker <none> <none>
cilium-test-1 echo-same-node-7889ff4c85-5hcqb 2/2 Running 0 2m46s 10.0.1.155 node02-worker <none> <none>
cilium-test-ccnp1 client-ccnp-86d7f7bd68-s89zf 1/1 Running 0 2m46s 10.0.1.228 node02-worker <none> <none>
cilium-test-ccnp2 client-ccnp-86d7f7bd68-tmc76 1/1 Running 0 2m45s 10.0.1.202 node02-worker <none> <none>
default nginx 1/1 Running 0 18m 10.0.1.91 node02-worker <none> <none>
kube-system cilium-5s7hd 1/1 Running 0 4m55s 192.168.50.20 node01-master <none> <none>
kube-system cilium-bj2rz 1/1 Running 0 4m55s 192.168.50.21 node02-worker <none> <none>
kube-system cilium-envoy-fgqm6 1/1 Running 0 4m55s 192.168.50.20 node01-master <none> <none>
kube-system cilium-envoy-mtgzs 1/1 Running 0 4m55s 192.168.50.21 node02-worker <none> <none>
kube-system cilium-operator-68bd8cc456-fb2kg 1/1 Running 0 4m55s 192.168.50.21 node02-worker <none> <none>
kube-system coredns-66bc5c9577-wrfwh 1/1 Running 0 3h2m 10.0.1.176 node02-worker <none> <none>
kube-system coredns-66bc5c9577-ztwtc 1/1 Running 0 3h2m 10.0.1.206 node02-worker <none> <none>
kube-system etcd-node01-master 1/1 Running 0 3h2m 192.168.50.20 node01-master <none> <none>
kube-system kube-apiserver-node01-master 1/1 Running 0 3h2m 192.168.50.20 node01-master <none> <none>
kube-system kube-controller-manager-node01-master 1/1 Running 0 3h2m 192.168.50.20 node01-master <none> <none>
kube-system kube-proxy-tqcnj 1/1 Running 0 3h2m 192.168.50.20 node01-master <none> <none>
kube-system kube-proxy-wm8r7 1/1 Running 0 16m 192.168.50.21 node02-worker <none> <none>
kube-system kube-scheduler-node01-master 1/1 Running 0 3h2m 192.168.50.20 node01-master <none> <none>
Node01-master:~$
Node01-master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node01-master Ready control-plane 3h3m v1.34.2
node02-worker Ready <none> 17m v1.34.2
Node01-master:~$
```

View File

@@ -0,0 +1,2 @@
lol

View File

@@ -0,0 +1,67 @@
**Ubuntu Server WireGuard Setup**
1. update and install
```
sudo apt update
sudo apt install -y wireguard
```
2. Validate that it's installed
```
wg --version
```
3. WireGuard configs live here:
```
/etc/wireguard/
```
The config filename (minus `.conf`) becomes the WireGuard interface name, so it must NOT match an existing interface name. Check what is already in use:
```
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:4f:6e:1b brd ff:ff:ff:ff:ff:ff
inet 192.168.50.47/24 metric 100 brd 192.168.50.255 scope global dynamic enp1s0
valid_lft 86009sec preferred_lft 86009sec
inet6 2404:4400:4181:9200:5054:ff:fe4f:6e1b/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86331sec preferred_lft 86331sec
inet6 fe80::5054:ff:fe4f:6e1b/64 scope link
valid_lft forever preferred_lft forever
# touch /etc/wireguard/wg0.conf
#
```
I made a blank file and will paste the contents of the WireGuard config I already have into it.
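For reference, a typical client config has roughly this shape (every key, address, and endpoint below is a placeholder, not a value from this setup):
```
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32
DNS = 10.0.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```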
Set permissions
```
sudo chmod 600 /etc/wireguard/wg0.conf
```
Bring up WireGuard
```
sudo wg-quick up wg0
```
Validate
```
sudo wg
```
You can enable it at boot if that is your bag
```
sudo systemctl enable wg-quick@wg0
```

View File

@@ -0,0 +1,3 @@
**Read me for Notes**
Thinking of changing my ways... no more Joplin, rather just... commit

SSH/keygen.md Normal file
View File

@@ -0,0 +1,53 @@
**SSH Key Gen**
After a fresh install we want to generate an SSH key pair (public and private).
We can then use this key to ssh onto hosts without having to share passwords.
On the new host
```
ssh-keygen -t ecdsa
```
You could add a -C for a comment and then add your email address but... meh
Example
```
~$ ssh-keygen -t ecdsa
Generating public/private ecdsa key pair.
Enter file in which to save the key (/home/luddie/.ssh/id_ecdsa):
Created directory '/home/luddie/.ssh'.
Enter passphrase for "/home/luddie/.ssh/id_ecdsa" (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/luddie/.ssh/id_ecdsa
Your public key has been saved in /home/luddie/.ssh/id_ecdsa.pub
The key fingerprint is:
SHA256:gA+5oVKPdtlG7JQC5pL3NQ+OokUK7WoosTevWBCd1E0 luddie@debian-base
The key's randomart image is:
+---[ECDSA 256]---+
| +. oE |
| B o.+.. |
|= 1 * X |
|.O = / = |
|B = B * S |
|.X o . |
|*.+ |
|o+ o |
|. ... |
+----[SHA256]-----+
~$
```
This will generate 2 keys in the .ssh folder
```
~/.ssh$ ls
id_ecdsa id_ecdsa.pub
~/.ssh$
```
You can then cat the .pub file to get the public key for that host, which can be added to the authorized_keys file on other machines to gain access.
You can also create an authorized_keys file on this host, and add other hosts' public keys to allow them direct access to this host.
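A convenient way to push the public key to another machine (assuming password-based SSH still works there; user and host are placeholders):
```bash
ssh-copy-id -i ~/.ssh/id_ecdsa.pub user@remote-host
```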