diff --git a/Networking/k8s/kubeadmin_install/Install_Instruction.md b/Networking/k8s/kubeadmin_install/Install_Instruction.md
index e74c04a..d3ebd36 100644
--- a/Networking/k8s/kubeadmin_install/Install_Instruction.md
+++ b/Networking/k8s/kubeadmin_install/Install_Instruction.md
@@ -1,34 +1,402 @@
 **Installing k8s with kubeadm**
-kubeadm join 192.168.50.20:6443 --token 72ckd0.rnphe03eqa135cjj \
-    --discovery-token-ca-cert-hash sha256:75add2111581b5b0a4a074f3748c46b67be82d246f110e557be049da0ef44941
+
+***PLEASE NOTE***
+The default behavior of the kubelet is to fail to start if swap memory is detected on a node.
+
+Disable it by turning it off on the fly
+
+```
+sudo swapoff -a
+```
+
+or, to make the change persistent across reboots, remove (or comment out) the swap entry in /etc/fstab.
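+
+If you would rather script the fstab change, a sed one-liner like this comments out any swap entry (a sketch, assuming a standard whitespace-separated fstab; -i.bak keeps a backup of the original file):
+
+```
+sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
+```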
+
-worker node
+**Container runtime - containerd**
-container runtime
+
+Go to https://github.com/containerd/containerd/blob/main/docs/getting-started.md and follow the instructions.
+This must be done on all the nodes, master and worker.
+
+1. Download the containerd runtime
+```
+wget https://github.com/containerd/containerd/releases/download/v2.2.0/containerd-2.2.0-linux-amd64.tar.gz
+```
+
+2. Extract it to /usr/local
+```
+$ sudo tar Cxzvf /usr/local containerd-2.2.0-linux-amd64.tar.gz
+bin/
+bin/containerd-shim-runc-v2
+bin/containerd-shim
+bin/ctr
+bin/containerd-shim-runc-v1
+bin/containerd
+bin/containerd-stress
+```
+3. systemd service
+Download the containerd.service unit file
-download the systemctl
-https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
-and move it to
+
+```
+wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
+```
+
+Place it in the correct location.
+For Debian trixie it was
-sudo cp containerd.service /usr/lib/systemd/system
-runc
-download
-https://github.com/opencontainers/runc/releases/download/v1.4.0/runc.amd64
+
+```
+/usr/lib/systemd/system/
+```
+
+Then reload and enable
+
+```
+sudo systemctl daemon-reload
+sudo systemctl enable --now containerd
+```
+
+4. runc
+Download the runc binary from https://github.com/opencontainers/runc/releases
+
+```
+wget https://github.com/opencontainers/runc/releases/download/v1.4.0/runc.amd64
+```
+
+Now install it
+
+```
+sudo install -m 755 runc.amd64 /usr/local/sbin/runc
+```
+
+5. Install CNI plugins
+
+Download the associated CNI plugins from https://github.com/containernetworking/plugins/releases
+
+```
+wget https://github.com/containernetworking/plugins/releases/download/v1.8.0/cni-plugins-linux-amd64-v1.8.0.tgz
+```
+and extract them under /opt/cni/bin
+
+```
+sudo mkdir -p /opt/cni/bin
+sudo tar Czxvf /opt/cni/bin cni-plugins-linux-amd64-v1.8.0.tgz
+```
+6. Install kubelet, kubeadm and kubectl
+Update the apt package index and install packages needed to use the Kubernetes apt repository:
+```
+sudo apt-get update
+sudo apt-get install -y apt-transport-https ca-certificates curl gpg
+```
+
+Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories, so you can disregard the version in the URL.
+
+If the directory /etc/apt/keyrings does not exist, create it before running the curl command
+
+```
+sudo mkdir -p -m 755 /etc/apt/keyrings
+```
+
+then continue
+
+```
+curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
+```
+
+Add the apt repo for k8s 1.34
+
+```
+echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
+```
+
+Install the packages
+
+```
+sudo apt-get update
+sudo apt-get install -y kubelet kubeadm kubectl
+sudo apt-mark hold kubelet kubeadm kubectl
+```
+
+Enable kubelet before running kubeadm
+
+```
+sudo systemctl enable --now kubelet
+```
+
+7. Configure cgroup driver
+
+Both the container runtime and the kubelet have a property called "cgroup driver", which is important for the management of cgroups on Linux machines.
+
+Feel free to go read more at https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
+and the advanced topics at https://github.com/containerd/containerd/blob/main/docs/getting-started.md
+
+but in short...
+
+```
+sudo mkdir -p /etc/containerd
+sudo containerd config default | sudo tee /etc/containerd/config.toml
+sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
+sudo systemctl restart containerd
+```
+8. Network configuration
+Enable IPv4 packet forwarding
+To manually enable IPv4 packet forwarding:
-sudo cat <
+
+```
+# sysctl params required by setup, params persist across reboots
+cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
+net.ipv4.ip_forward = 1
+EOF
+
+# Apply sysctl params without reboot
+sudo sysctl --system
+```
+
+Once the control plane has been initialised with kubeadm init, you can create a test pod (kubectl run nginx --image=nginx). It will just sit in Pending:
+
+```
+Node01-master:~$ kubectl describe pod nginx
+Name:             nginx
+Namespace:        default
+Node:             <none>
+Labels:           run=nginx
+Annotations:      <none>
+Status:           Pending
+IP:
+IPs:              <none>
+Containers:
+  nginx:
+    Image:        nginx:latest
+    Port:         <none>
+    Host Port:    <none>
+    Environment:  <none>
+    Mounts:
+      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s7gvx (ro)
+Conditions:
+  Type           Status
+  PodScheduled   False
+Volumes:
+  kube-api-access-s7gvx:
+    Type:                    Projected (a volume that contains injected data from multiple sources)
+    TokenExpirationSeconds:  3607
+    ConfigMapName:           kube-root-ca.crt
+    Optional:                false
+    DownwardAPI:             true
+QoS Class:                   BestEffort
+Node-Selectors:              <none>
+Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
+                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
+Events:
+  Type     Reason            Age   From               Message
+  ----     ------            ----  ----               -------
+  Warning  FailedScheduling  22s   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
+Node01-master:~$
+```
+
+This is not a taint issue. I can add the worker node to the cluster using kubeadm join, and it will still be the same:
+
+```
+Node01-master:~$ kubectl get nodes
+NAME            STATUS     ROLES           AGE    VERSION
+node01-master   NotReady   control-plane   167m   v1.34.2
+node02-worker   NotReady   <none>          87s    v1.34.2
+Node01-master:~$ kubectl get pods -n default
+NAME    READY   STATUS    RESTARTS   AGE
+nginx   0/1     Pending   0          3m36s
+Node01-master:~$
+```
+
+Both nodes report NotReady because no network add-on has been installed yet, which is what the next step fixes.
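+
+If you want to confirm that the missing CNI is the reason, describe a node and look at its Ready condition (a sketch; the exact message wording can vary between versions):
+
+```
+kubectl describe node node01-master
+# Under Conditions, Ready is False with a message like:
+#   container runtime network not ready: NetworkReady=false
+#   reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
+```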
+
+9. Install network add-on (Cilium)
+
+Using https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/ as a reference.
+We only do this on the master node.
+
+```
+CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
+```
+```
+Node01-master:~$ echo $CLI_VERSION
+v0.18.9
+Node01-master:~$
+```
+
+Download the binary
+```
+curl -L --remote-name \
+    https://github.com/cilium/cilium-cli/releases/download/${CLI_VERSION}/cilium-linux-amd64.tar.gz
+```
+
+Extract it to /usr/local/bin and validate
+```
+Node01-master:~$ sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
+cilium
+Node01-master:~$ cilium version
+cilium-cli: v0.18.9 compiled with go1.25.5 on linux/amd64
+cilium image (default): v1.18.3
+cilium image (stable): v1.18.4
+cilium image (running): unknown. Unable to obtain cilium version. Reason: release: not found
+Node01-master:~$
+```
+
+Now install Cilium
+
+```
+Node01-master:~$ cilium install
+ℹ️  Using Cilium version 1.18.3
+🔮 Auto-detected cluster name: kubernetes
+🔮 Auto-detected kube-proxy has been installed
+Node01-master:~$
+```
+As you are installing this into k8s, the CLI uses your kubeconfig, so no sudo is needed.
+
+Validate it is installed
+
+```
+Node01-master:~$ cilium status
+    /¯¯\
+ /¯¯\__/¯¯\    Cilium:             OK
+ \__/¯¯\__/    Operator:           OK
+ /¯¯\__/¯¯\    Envoy DaemonSet:    OK
+ \__/¯¯\__/    Hubble Relay:       disabled
+    \__/       ClusterMesh:        disabled
+
+DaemonSet              cilium                   Desired: 2, Ready: 2/2, Available: 2/2
+DaemonSet              cilium-envoy             Desired: 2, Ready: 2/2, Available: 2/2
+Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
+Containers:            cilium                   Running: 2
+                       cilium-envoy             Running: 2
+                       cilium-operator          Running: 1
+                       clustermesh-apiserver
+                       hubble-relay
+Cluster Pods:          3/3 managed by Cilium
+Helm chart version:    1.18.3
+Image versions         cilium             quay.io/cilium/cilium:v1.18.3@sha256:5649db451c88d928ea585514746d50d91e6210801b300c897283ea319d68de15: 2
+                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.34.10-1761014632-c360e8557eb41011dfb5210f8fb53fed6c0b3222@sha256:ca76eb4e9812d114c7f43215a742c00b8bf41200992af0d21b5561d46156fd15: 2
+                       cilium-operator    quay.io/cilium/operator-generic:v1.18.3@sha256:b5a0138e1a38e4437c5215257ff4e35373619501f4877dbaf92c89ecfad81797: 1
+Node01-master:~$
+```
+
+If you want, there is also a connectivity test that can be run, but it can take some time to complete (more than 120 tests). See the command below.
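+
+The test deploys its own workloads into the cluster (the cilium-test-* namespaces you will see in the pod listing below) and reports pass/fail per scenario:
+
+```
+cilium connectivity test
+```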
+
+
+If you now look at your cluster, it should be up and running
+
+
+```
+Node01-master:~$ kubectl get pods -A -o wide
+NAMESPACE           NAME                                    READY   STATUS    RESTARTS   AGE     IP              NODE            NOMINATED NODE   READINESS GATES
+cilium-test-1       client-64d966fcbd-ctrb8                 1/1     Running   0          2m46s   10.0.1.63       node02-worker   <none>           <none>
+cilium-test-1       client2-5f6d9498c7-9th7m                1/1     Running   0          2m46s   10.0.1.146      node02-worker   <none>           <none>
+cilium-test-1       echo-same-node-7889ff4c85-5hcqb         2/2     Running   0          2m46s   10.0.1.155      node02-worker   <none>           <none>
+cilium-test-ccnp1   client-ccnp-86d7f7bd68-s89zf            1/1     Running   0          2m46s   10.0.1.228      node02-worker   <none>           <none>
+cilium-test-ccnp2   client-ccnp-86d7f7bd68-tmc76            1/1     Running   0          2m45s   10.0.1.202      node02-worker   <none>           <none>
+default             nginx                                   1/1     Running   0          18m     10.0.1.91       node02-worker   <none>           <none>
+kube-system         cilium-5s7hd                            1/1     Running   0          4m55s   192.168.50.20   node01-master   <none>           <none>
+kube-system         cilium-bj2rz                            1/1     Running   0          4m55s   192.168.50.21   node02-worker   <none>           <none>
+kube-system         cilium-envoy-fgqm6                      1/1     Running   0          4m55s   192.168.50.20   node01-master   <none>           <none>
+kube-system         cilium-envoy-mtgzs                      1/1     Running   0          4m55s   192.168.50.21   node02-worker   <none>           <none>
+kube-system         cilium-operator-68bd8cc456-fb2kg        1/1     Running   0          4m55s   192.168.50.21   node02-worker   <none>           <none>
+kube-system         coredns-66bc5c9577-wrfwh                1/1     Running   0          3h2m    10.0.1.176      node02-worker   <none>           <none>
+kube-system         coredns-66bc5c9577-ztwtc                1/1     Running   0          3h2m    10.0.1.206      node02-worker   <none>           <none>
+kube-system         etcd-node01-master                      1/1     Running   0          3h2m    192.168.50.20   node01-master   <none>           <none>
+kube-system         kube-apiserver-node01-master            1/1     Running   0          3h2m    192.168.50.20   node01-master   <none>           <none>
+kube-system         kube-controller-manager-node01-master   1/1     Running   0          3h2m    192.168.50.20   node01-master   <none>           <none>
+kube-system         kube-proxy-tqcnj                        1/1     Running   0          3h2m    192.168.50.20   node01-master   <none>           <none>
+kube-system         kube-proxy-wm8r7                        1/1     Running   0          16m     192.168.50.21   node02-worker   <none>           <none>
+kube-system         kube-scheduler-node01-master            1/1     Running   0          3h2m    192.168.50.20   node01-master   <none>           <none>
+Node01-master:~$
+Node01-master:~$ kubectl get nodes
+NAME            STATUS   ROLES           AGE    VERSION
+node01-master   Ready    control-plane   3h3m   v1.34.2
+node02-worker   Ready    <none>          17m    v1.34.2
+Node01-master:~$
+```
diff --git a/Networking/wireGuard/ubuntu-server.md b/Networking/wireGuard/ubuntu-server.md
new file mode 100644
index 0000000..1c7c706
--- /dev/null
+++ b/Networking/wireGuard/ubuntu-server.md
@@ -0,0 +1,67 @@
+**Ubuntu Server WireGuard Setup**
+
+1. Update and install
+
+```
+sudo apt update
+sudo apt install -y wireguard
+```
+
+2. Validate that it is installed
+
+```
+wg --version
+```
+
+3. WireGuard configs live here:
+
+```
+/etc/wireguard/
+```
+
+The filename must match the interface name (wg0.conf gives you interface wg0)
+
+```
+# ip a
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+    inet 127.0.0.1/8 scope host lo
+       valid_lft forever preferred_lft forever
+    inet6 ::1/128 scope host noprefixroute
+       valid_lft forever preferred_lft forever
+2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
+    link/ether 52:54:00:4f:6e:1b brd ff:ff:ff:ff:ff:ff
+    inet 192.168.50.47/24 metric 100 brd 192.168.50.255 scope global dynamic enp1s0
+       valid_lft 86009sec preferred_lft 86009sec
+    inet6 2404:4400:4181:9200:5054:ff:fe4f:6e1b/64 scope global dynamic mngtmpaddr noprefixroute
+       valid_lft 86331sec preferred_lft 86331sec
+    inet6 fe80::5054:ff:fe4f:6e1b/64 scope link
+       valid_lft forever preferred_lft forever
+# touch /etc/wireguard/wg0.conf
+#
+```
+I made a blank file and will copy and paste the contents of the WireGuard conf I already have into it.
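+
+If you do not have a config handy, a minimal client-side wg0.conf has this shape (every value below is a placeholder, not a working setting; use the keys, addresses, and endpoint from your own setup):
+
+```
+[Interface]
+PrivateKey = <this-hosts-private-key>
+Address = 10.8.0.2/24
+
+[Peer]
+PublicKey = <remote-peers-public-key>
+Endpoint = vpn.example.com:51820
+AllowedIPs = 0.0.0.0/0
+PersistentKeepalive = 25
+```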
+
+Set permissions
+
+```
+sudo chmod 600 /etc/wireguard/wg0.conf
+```
+
+Bring up WireGuard
+
+```
+sudo wg-quick up wg0
+```
+
+Validate
+
+```
+sudo wg
+```
+
+Can enable at boot if that is your bag
+
+```
+sudo systemctl enable wg-quick@wg0
+```
\ No newline at end of file
diff --git a/README.md b/README.md
index e69de29..7a3e7ec 100644
--- a/README.md
+++ b/README.md
@@ -0,0 +1,3 @@
+**Read me for Notes**
+
+Thinking of changing my ways... no more Joplin, rather just... commit
\ No newline at end of file