12 Commits

5 changed files with 478 additions and 1 deletion

View File

@@ -63,7 +63,7 @@ We will update the ***iface enp1s0 inet dhcp***
section to look like this
Example of updated file
```
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
@@ -82,6 +82,7 @@ iface enp1s0 inet static
dns-nameservers 192.168.50.254 8.8.8.8
# This is an autoconfigured IPv6 interface
iface enp1s0 inet6 auto
```
After you have made this edit you can restart the service to get the new IP address
@@ -124,3 +125,5 @@ sudo reboot
```
And when the machine comes back up, ssh using the newly statically assigned IP address.
Update - Don't forget to update /etc/resolv.conf with your nameserver address.
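For reference, a minimal sketch of /etc/resolv.conf using the same nameservers as the interfaces example above (adjust to your own resolvers):
```
nameserver 192.168.50.254
nameserver 8.8.8.8
```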

View File

@@ -0,0 +1,402 @@
**Installing k8s with kubeadm**
***PLEASE NOTE***
The default behavior of a kubelet is to fail to start if swap memory is detected on a node.
Disable it by turning it off on the fly
```
sudo swapoff -a
```
or, to make the change persistent across reboots, remove or comment out the swap entry in /etc/fstab.
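For example, a rough sketch of commenting out any swap entries; the sed pattern is an assumption about your fstab layout, so back the file up first:
```
sudo cp /etc/fstab /etc/fstab.bak
# prefix every fstab line containing a swap field with '#'
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```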
**Container runtime - containerd**
Go to - https://github.com/containerd/containerd/blob/main/docs/getting-started.md and follow the instructions
This must be done on all the nodes, master and worker.
1. Download the containerd runtime
```
wget https://github.com/containerd/containerd/releases/download/v2.2.0/containerd-2.2.0-linux-amd64.tar.gz
```
2. Extract it to /usr/local
```
$ sudo tar Cxzvf /usr/local containerd-2.2.0-linux-amd64.tar.gz
bin/
bin/containerd-shim-runc-v2
bin/containerd-shim
bin/ctr
bin/containerd-shim-runc-v1
bin/containerd
bin/containerd-stress
```
3. systemd
download
```
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
```
Place it in the correct location
For Debian trixie it was
```
/usr/lib/systemd/system/
```
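Assuming the unit file was downloaded to the current working directory, move it into that location:
```
sudo mv containerd.service /usr/lib/systemd/system/containerd.service
```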
Then reload and enable
```
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
```
4. runc
Download the runc binary from https://github.com/opencontainers/runc/releases
```
wget https://github.com/opencontainers/runc/releases/download/v1.4.0/runc.amd64
```
now install it
```
sudo install -m 755 runc.amd64 /usr/local/sbin/runc
```
5. Install CNI plugin
Download the associated CNI plugins from https://github.com/containernetworking/plugins/releases
```
wget https://github.com/containernetworking/plugins/releases/download/v1.8.0/cni-plugins-linux-amd64-v1.8.0.tgz
```
and extract them under /opt/cni/bin
```
sudo mkdir -p /opt/cni/bin
sudo tar Czxvf /opt/cni/bin cni-plugins-linux-amd64-v1.8.0.tgz
```
6. Install kubelet kubeadm kubectl
Update the apt package index and install packages needed to use the Kubernetes apt repository:
```
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
```
Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories so you can disregard the version in the URL:
If the directory `/etc/apt/keyrings` does not exist, create it before running the curl command:
```
sudo mkdir -p -m 755 /etc/apt/keyrings
```
then continue
```
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
```
Add the apt repo for k8s 1.34
```
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
```
Install the packages
```
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
Enable the kubelet before running kubeadm
```
sudo systemctl enable --now kubelet
```
7. Configure cgroup driver
Both the container runtime and the kubelet have a property called "cgroup driver", which is important for the management of cgroups on Linux machines.
Feel free to go read more at https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
and Advanced topics at https://github.com/containerd/containerd/blob/main/docs/getting-started.md
but in short...
```
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
```
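After editing config.toml, restart containerd so the new cgroup driver setting takes effect:
```
sudo systemctl restart containerd
```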
8. Network configuration
Enable IPv4 packet forwarding
To manually enable IPv4 packet forwarding:
```
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
```
Validate that it is set to 1
```
sysctl net.ipv4.ip_forward
```
9. Create the Cluster
```
sudo kubeadm init --apiserver-advertise-address=The_Master_Node_IP --pod-network-cidr=POD_NODE_CIDR_BLOCK
```
```
sudo kubeadm init --apiserver-advertise-address=192.168.50.20 --pod-network-cidr=10.244.0.0/16
```
The output should be similar to the following
```
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.50.20:6443 --token 3m8ogx.4w779pfudv9sr9h0 \
--discovery-token-ca-cert-hash sha256:2b70f7b1939e9a3c17a94ddc94a8714a5782dc476bf50ba915a6c848710dd0ba
```
You will want to keep that join command for later.
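If you do lose it, the join command can be regenerated later on the control-plane node with the standard kubeadm subcommand:
```
sudo kubeadm token create --print-join-command
```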
Now create your kubeconfig
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Look at pods and nodes
```
Node01-master:~$ kubectl get nodes
kubectl get pods -A
NAME STATUS ROLES AGE VERSION
node01-master NotReady control-plane 161m v1.34.2
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bc5c9577-wrfwh 0/1 Pending 0 161m
kube-system coredns-66bc5c9577-ztwtc 0/1 Pending 0 161m
kube-system etcd-node01-master 1/1 Running 0 161m
kube-system kube-apiserver-node01-master 1/1 Running 0 161m
kube-system kube-controller-manager-node01-master 1/1 Running 0 161m
kube-system kube-proxy-tqcnj 1/1 Running 0 161m
kube-system kube-scheduler-node01-master 1/1 Running 0 161m
Node01-master:~$
```
Not everything is running; we have no CNI yet.
We can create new pods, but they will not start because there is no pod networking.
```
Node01-master:~$ kubectl run nginx --image=nginx:latest
pod/nginx created
Node01-master:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 0/1 Pending 0 5s
Node01-master:~$ kubectl describe pod nginx -n default
Name: nginx
Namespace: default
Priority: 0
Service Account: default
Node: <none>
Labels: run=nginx
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
nginx:
Image: nginx:latest
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s7gvx (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-s7gvx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 22s default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Node01-master:~$
```
This is not really a taint issue: even after adding the worker node to the cluster with kubeadm join (run on the worker, as sketched below), the pod stays Pending.
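For reference, a sketch of the join as run on the worker; the token and hash are placeholders standing in for the values printed by your own kubeadm init:
```
# run on node02-worker
sudo kubeadm join 192.168.50.20:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```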
```
Node01-master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node01-master NotReady control-plane 167m v1.34.2
node02-worker NotReady <none> 87s v1.34.2
Node01-master:~$ kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
nginx 0/1 Pending 0 3m36s
Node01-master:~$
```
10. Install network add-on (Cilium)
Using https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/ as a reference
We only do this on the master node
```
CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
```
```
Node01-master:~$ echo $CLI_VERSION
v0.18.9
Node01-master:~$
```
Download the binary
```
curl -L --remote-name \
https://github.com/cilium/cilium-cli/releases/download/${CLI_VERSION}/cilium-linux-amd64.tar.gz
```
Extract to /usr/local/bin and validate
```
Node01-master:~$ sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
cilium
Node01-master:~$ cilium version
cilium-cli: v0.18.9 compiled with go1.25.5 on linux/amd64
cilium image (default): v1.18.3
cilium image (stable): v1.18.4
cilium image (running): unknown. Unable to obtain cilium version. Reason: release: not found
Node01-master:~$
```
Now install Cilium
```
Node01-master:~$ cilium install
Using Cilium version 1.18.3
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected kube-proxy has been installed
Node01-master:~$
```
As you are installing this into k8s via your kubeconfig, no sudo is needed.
Validate that it's installed
```
Node01-master:~$ cilium status
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Envoy DaemonSet: OK
\__/¯¯\__/ Hubble Relay: disabled
\__/ ClusterMesh: disabled
DaemonSet cilium Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet cilium-envoy Desired: 2, Ready: 2/2, Available: 2/2
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 2
cilium-envoy Running: 2
cilium-operator Running: 1
clustermesh-apiserver
hubble-relay
Cluster Pods: 3/3 managed by Cilium
Helm chart version: 1.18.3
Image versions cilium quay.io/cilium/cilium:v1.18.3@sha256:5649db451c88d928ea585514746d50d91e6210801b300c897283ea319d68de15: 2
cilium-envoy quay.io/cilium/cilium-envoy:v1.34.10-1761014632-c360e8557eb41011dfb5210f8fb53fed6c0b3222@sha256:ca76eb4e9812d114c7f43215a742c00b8bf41200992af0d21b5561d46156fd15: 2
cilium-operator quay.io/cilium/operator-generic:v1.18.3@sha256:b5a0138e1a38e4437c5215257ff4e35373619501f4877dbaf92c89ecfad81797: 1
Node01-master:~$
```
If you want, there is also a connectivity test that can be run, but it can take some time to complete (more than 120 tests).
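A minimal sketch of kicking it off with the cilium CLI installed above:
```
cilium connectivity test
```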
If you now look at your cluster, it should be up and running
```
Node01-master:~$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-test-1 client-64d966fcbd-ctrb8 1/1 Running 0 2m46s 10.0.1.63 node02-worker <none> <none>
cilium-test-1 client2-5f6d9498c7-9th7m 1/1 Running 0 2m46s 10.0.1.146 node02-worker <none> <none>
cilium-test-1 echo-same-node-7889ff4c85-5hcqb 2/2 Running 0 2m46s 10.0.1.155 node02-worker <none> <none>
cilium-test-ccnp1 client-ccnp-86d7f7bd68-s89zf 1/1 Running 0 2m46s 10.0.1.228 node02-worker <none> <none>
cilium-test-ccnp2 client-ccnp-86d7f7bd68-tmc76 1/1 Running 0 2m45s 10.0.1.202 node02-worker <none> <none>
default nginx 1/1 Running 0 18m 10.0.1.91 node02-worker <none> <none>
kube-system cilium-5s7hd 1/1 Running 0 4m55s 192.168.50.20 node01-master <none> <none>
kube-system cilium-bj2rz 1/1 Running 0 4m55s 192.168.50.21 node02-worker <none> <none>
kube-system cilium-envoy-fgqm6 1/1 Running 0 4m55s 192.168.50.20 node01-master <none> <none>
kube-system cilium-envoy-mtgzs 1/1 Running 0 4m55s 192.168.50.21 node02-worker <none> <none>
kube-system cilium-operator-68bd8cc456-fb2kg 1/1 Running 0 4m55s 192.168.50.21 node02-worker <none> <none>
kube-system coredns-66bc5c9577-wrfwh 1/1 Running 0 3h2m 10.0.1.176 node02-worker <none> <none>
kube-system coredns-66bc5c9577-ztwtc 1/1 Running 0 3h2m 10.0.1.206 node02-worker <none> <none>
kube-system etcd-node01-master 1/1 Running 0 3h2m 192.168.50.20 node01-master <none> <none>
kube-system kube-apiserver-node01-master 1/1 Running 0 3h2m 192.168.50.20 node01-master <none> <none>
kube-system kube-controller-manager-node01-master 1/1 Running 0 3h2m 192.168.50.20 node01-master <none> <none>
kube-system kube-proxy-tqcnj 1/1 Running 0 3h2m 192.168.50.20 node01-master <none> <none>
kube-system kube-proxy-wm8r7 1/1 Running 0 16m 192.168.50.21 node02-worker <none> <none>
kube-system kube-scheduler-node01-master 1/1 Running 0 3h2m 192.168.50.20 node01-master <none> <none>
Node01-master:~$
Node01-master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node01-master Ready control-plane 3h3m v1.34.2
node02-worker Ready <none> 17m v1.34.2
Node01-master:~$
```

View File

@@ -0,0 +1,2 @@
lol

View File

@@ -0,0 +1,67 @@
**Ubuntu Server WireGuard Setup**
1. update and install
```
sudo apt update
sudo apt install -y wireguard
```
2. validate that it's installed
```
wg --version
```
3. WireGuard configs live here:
```
/etc/wireguard/
```
The config filename (minus .conf) becomes the WireGuard interface name, so it must not clash with an existing interface name; check your existing interfaces with ip a:
```
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:4f:6e:1b brd ff:ff:ff:ff:ff:ff
inet 192.168.50.47/24 metric 100 brd 192.168.50.255 scope global dynamic enp1s0
valid_lft 86009sec preferred_lft 86009sec
inet6 2404:4400:4181:9200:5054:ff:fe4f:6e1b/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86331sec preferred_lft 86331sec
inet6 fe80::5054:ff:fe4f:6e1b/64 scope link
valid_lft forever preferred_lft forever
# touch /etc/wireguard/wg0.conf
#
```
I created a blank file and will copy the contents of my existing WireGuard config into it.
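For reference, a hypothetical sketch of what a wg0.conf typically looks like; every key, address, and endpoint below is a placeholder rather than a value from my actual config:
```
# placeholder sketch -- replace keys, addresses and endpoint with your own
[Interface]
PrivateKey = <this host's private key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <peer public key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
```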
Set permissions
```
sudo chmod 600 /etc/wireguard/wg0.conf
```
Bring up WireGuard
```
sudo wg-quick up wg0
```
Validate
```
sudo wg
```
You can enable it at boot if that is your bag
```
sudo systemctl enable wg-quick@wg0
```

View File

@@ -0,0 +1,3 @@
**Read me for Notes**
Thinking of changing my ways... no more Joplin, rather just... commit