Compare commits


10 Commits

72 changed files with 763 additions and 234 deletions


@@ -1,83 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: portfolio-app
labels:
app: portfolio-app
spec:
replicas: 1
selector:
matchLabels:
app: portfolio-app
template:
metadata:
labels:
app: portfolio-app
spec:
imagePullSecrets:
- name: my-registry-secret
containers:
- name: portfolio-app
image: "${DOCKER_REGISTRY_HOST}/my-portfolio-app:latest"
imagePullPolicy: Always
ports:
- containerPort: 80
restartPolicy: Always
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
name: portfolio-app-svc
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
selector:
app: portfolio-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: portfolio
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
tls:
- hosts:
- "${DNSNAME}"
secretName: wildcard-cert-secret
rules:
- host: "${PORTFOLIO_HOST}"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: portfolio-app-svc
port:
number: 80
- path: /experience
pathType: Prefix
backend:
service:
name: react-app-service
port:
number: 80
- path: /interest
pathType: Prefix
backend:
service:
name: react-app-service
port:
number: 80
- path: /project
pathType: Prefix
backend:
service:
name: react-app-service
port:
number: 80

README.md (286 changed lines)

@@ -1,173 +1,229 @@
# 🏠 Homeserver Setup Guide: Kubernetes on Proxmox

[![License](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](CONTRIBUTING.md)

```
© 2023 Taqi Tahmid
```

> Build your own modern homelab with Kubernetes on Proxmox! This guide walks
> you through setting up a complete home server infrastructure with essential
> self-hosted services.

## 🌟 Highlights

- Fully automated setup using Ansible
- Production-grade Kubernetes (K3s) cluster
- High-availability Proxmox configuration
- Popular self-hosted applications ready to deploy

## 📁 Repository Structure

- `ansible/` - Automated provisioning with Ansible playbooks. [Ansible Guide](ansible/README.md)
- `kubernetes/` - K8s manifests and Helm charts. [Kubernetes Guide](kubernetes/README.md)
- `docker/` - Legacy docker-compose files (Kubernetes preferred). [Docker Guide](docker/README.md)

## 🚀 Running Services

- ✨ AdGuard Home - Network-wide ad blocking
- 🐳 Private Docker Registry
- 🎬 Jellyfin Media Server
- 🌐 Portfolio Website
- 🗄️ PostgreSQL Database
- 📦 Pocketbase Backend
- 🍵 Gitea Git Server and Actions for CI/CD

### 📋 Coming Soon

- Nextcloud
- Monitoring Stack with Prometheus and Grafana

## 💻 Hardware Setup

- 2x Mini PCs with Intel N100 CPUs
- 16GB RAM each
- 500GB SSDs
- 1Gbps networking
- Proxmox Cluster Configuration

## 🛠️ Installation Steps

### 1. Setting up Proxmox Infrastructure

#### Proxmox Base Installation

- Boot mini PCs from Proxmox USB drive
- Install on SSD and configure networking
- Set up cluster configuration

> 📚 Reference: [Official Proxmox Installation Guide](https://pve.proxmox.com/wiki/Installation)
#### Cloud Image Implementation

Cloud images provide:

- 🚀 Pre-configured, optimized disk images
- 📦 Minimal software footprint
- ⚡ Quick VM deployment
- 🔧 Cloud-init support for easy customization

These lightweight images are perfect for rapid virtual machine deployment in
your homelab environment. Ubuntu cloud images can be downloaded from
[cloud-images.ubuntu.com](https://cloud-images.ubuntu.com/); the VMs are
created directly from such an image rather than from a traditional template.
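For orientation, below is a minimal sketch of turning a cloud image into a Proxmox VM from the shell. The VM ID 9000, storage `local-lvm`, and bridge `vmbr0` are placeholders rather than values from this repository, and the `ansible/` playbooks remain the automated path.

```bash
# Download an Ubuntu 22.04 (jammy) cloud image
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

# Create a VM shell and import the image as its system disk
qm create 9000 --name ubuntu-cloud --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
qm importdisk 9000 jammy-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0

# Attach a cloud-init drive, boot from the imported disk, and set login details
qm set 9000 --ide2 local-lvm:cloudinit --boot order=scsi0
qm set 9000 --ciuser <user> --sshkeys ~/.ssh/id_rsa.pub --ipconfig0 ip=dhcp
qm start 9000
```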
#### Proxmox VM Disk Management

**Expanding VM Disk Size:**

1. Access Proxmox web interface
2. Select target VM
3. Navigate to Hardware tab
4. Choose disk to resize
5. Click Resize and enter new size (e.g., 50G)

**Post-resize VM Configuration:**

```bash
# Access VM and configure partitions
sudo fdisk /dev/sda
# Key commands:
# p - print partition table
# d - delete partition (make sure the data is safe first)
# n - create new partition (reuse the same starting sector to avoid data loss)
# w - write changes

# Create a filesystem only on a fresh partition; use resize2fs to grow an
# existing ext4 filesystem instead
sudo mkfs.ext4 /dev/sdaX
```

#### Physical Disk Passthrough

Pass physical disks (e.g., NVME storage) attached to the host through to VMs,
for example the NVME drive used by the dockerhost VM:

```bash
# List disk IDs
lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'

# Add disk to VM (example for VM ID 103)
qm set 103 -scsi2 /dev/disk/by-id/usb-WD_BLACK_SN770_1TB_012938055C4B-0:0

# Verify configuration
grep 5C4B /etc/pve/qemu-server/103.conf

# After a reboot of the VM, verify the new disk with lsblk
lsblk
```

> 📚 Reference: [Proxmox Disk Passthrough Guide](<https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)>)
### 2. Kubernetes Cluster Setup

#### K3s Cluster Configuration

Setting up a 4-node cluster (2 master + 2 worker). The VMs run Ubuntu 22.04
with 2 CPUs and 4GB of RAM each and share a bridge network; the master nodes
also function as worker nodes.

**Master Node 1:**

```bash
curl -sfL https://get.k3s.io | sh -s - server --cluster-init --disable servicelb
# This generates a token under /var/lib/rancher/k3s/server/node-token,
# which is required for joining the remaining nodes to the cluster.
```

**Master Node 2:**

```bash
export TOKEN=<token>
export MASTER1_IP=<ip>
curl -sfL https://get.k3s.io | sh -s - server --server https://${MASTER1_IP}:6443 --token ${TOKEN} --disable servicelb
```

**Worker Nodes:**

```bash
export TOKEN=<token>
export MASTER1_IP=<ip>
curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER1_IP}:6443 K3S_TOKEN=${TOKEN} sh -
```

#### MetalLB Load Balancer Setup

MetalLB replaces the default K3s servicelb because it offers advanced features
such as IP address pools for load balancer configuration (a sample pool
definition is sketched after this block).

```bash
# Install MetalLB
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

# Verify installation
kubectl get pods -n metallb-system

# Apply the MetalLB configuration from the metallb directory
kubectl apply -f /home/taqi/homeserver/k3s-infra/metallb/metallbConfig.yaml
```
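The `metallbConfig.yaml` referenced above is not part of this diff; a minimal sketch of such a configuration for MetalLB v0.13.x is shown below, with a placeholder address range that must be adapted to the local LAN.

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.140-192.168.1.160   # placeholder range, adjust to your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool
```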
**Quick Test:**

```bash
# Deploy test nginx
kubectl create namespace nginx
kubectl create deployment nginx --image=nginx -n nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer -n nginx

# If the nginx service gets an external IP and is reachable from a browser,
# the load balancer configuration is complete.

# Cleanup after testing
kubectl delete namespace nginx
```

## 🤝 Contributing

Contributions welcome! Feel free to open issues or submit PRs.

## 📝 License

MIT License - feel free to use this as a template for your own homelab!
# Upgrade K3s cluster
Ref: https://github.com/k3s-io/k3s-upgrade
## Deploying the K3s Upgrade Controller
First, deploy the k3s upgrade controller:
```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/system-upgrade-controller/master/manifests/system-upgrade-controller.yaml
```
Check that the controller is running. If not, check if the serviceaccount is
bound to the correct role.
```bash
kubectl get pods -n system-upgrade
kubectl create clusterrolebinding system-upgrade \
--clusterrole=cluster-admin \
--serviceaccount=system-upgrade:system-upgrade
```
## Label the nodes for upgrade
First, label the selected node with the `k3s-upgrade=true` label. This is
needed to select the node for upgrade.
```bash
kubectl label node <node-name> k3s-upgrade=true
```
It is best practice to upgrade nodes one by one, so the cluster remains
operational during the upgrade and the change can be rolled back if any
issues appear.
## Create the upgrade plan
Then create the upgrade plan. The plan will be created in the `system-upgrade`
namespace. You can change the namespace by using the `--namespace` flag.
```bash
kubectl apply -f /home/taqi/homeserver/kubernetes/k3s-upgrade/plan.yaml
```
The plan will first try to cordon and drain the node. If it fails, check
the logs of the plan.
The Longhorn CSI pods might not be drained. In that case, you can
cordon the node and drain it manually.
Ref: https://github.com/longhorn/longhorn/discussions/4102
```bash
kubectl drain vm4 --ignore-daemonsets \
--delete-emptydir-data \
--pod-selector='app!=csi-attacher,app!=csi-provisioner'
```
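To watch an upgrade as it runs, the objects created by the system-upgrade-controller can be inspected; this is a generic sketch rather than a repository-specific procedure.

```bash
# Follow the plan and the jobs it spawns
kubectl -n system-upgrade get plans.upgrade.cattle.io,jobs,pods

# Confirm the node versions once the upgrade finishes
kubectl get nodes -o wide
```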


@@ -33,6 +33,30 @@ virtual machines (VMs) on a Proxmox server.
and individual host-related variables to the files under the `host_vars`
directory. Ansible will automatically pick up these variables.
4. Add the following secrets to the ansible-vault:
- proxmox_api_token_id
- proxmox_api_token
- ansible_proxmox_user
- ansible_vm_user
- proxmox_user
- ansible_ssh_private_key_file
- ciuser
- cipassword
One can create the secret file using the following command:
```sh
ansible-vault create secrets/vault.yml
```
To encrypt and decrypt the file, use the following commands:
```sh
ansible-vault encrypt secrets/vault.yml
ansible-vault decrypt secrets/vault.yml
```
The vault password can be stored in a file or supplied interactively during
encryption and decryption. The password file location can be specified in the
`ansible.cfg` file, as sketched below.
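For example, `ansible.cfg` can point at a password file roughly like this; the path is illustrative, not taken from this repository.

```
[defaults]
vault_password_file = ~/.ansible/vault_pass.txt
```

Alternatively, passing `--ask-vault-pass` to `ansible-playbook` prompts for the password interactively.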
## Playbooks

### Create VM


@@ -1,5 +1,16 @@
# Setup K3s Kubernetes Cluster
# Configure Traefik with extra values
The Traefik ingress controller is deployed along with K3s. To modify the
default values, run the following; an example values file is sketched after
the command block.
```bash
# k3s still uses traefik V2
helm upgrade traefik traefik/traefik \
-n kube-system -f traefik/traefik-values.yaml \
--version 22.1.0
```
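The referenced `traefik/traefik-values.yaml` is not shown in this diff; purely as an illustration, an override for the Traefik v2 Helm chart could look like the snippet below (the keys come from the upstream chart, and the chosen values are assumptions, not this repository's settings).

```yaml
# Example traefik-values.yaml (illustrative only)
additionalArguments:
  - "--log.level=INFO"
service:
  spec:
    externalTrafficPolicy: Local
```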
# Configure Cert Manager for automating SSL certificate handling
@@ -38,6 +49,7 @@ export KUBE_EDITOR=nvim
# Change the forward section with . 1.1.1.1 1.0.0.1
kubectl -n kube-system edit configmap coredns
```
Next, deploy the ClusterIssuer, WildcardCert, and secrets using helm
```bash
@@ -61,12 +73,15 @@ Here are some troubleshoot commands to test:
```bash
kubectl get clusterissuer
kubectl describe clusterissuer
kubectl get certificate -n cert-manager
kubectl get certificateRequest -n cert-manager
kubectl describe challenges -n cert-manager
kubectl describe orders -n cert-manager
```
Alternatively, it is possible to generate service-specific certs
in desired namespaces by deploying the Certificate resource in the namespace.
# Deploy Private Docker Registry
Create a new namespace called docker-registry and deploy the private
@@ -97,7 +112,6 @@ helm install registry docker-registry-helm-chart/ \
--atomic
```
# Deploy Portfolio Website from Private Docker Registry
First, create a secret to access the private docker registry. Then copy the
@@ -105,8 +119,8 @@ wildcard CA cert and deploy the portfolio webapp.
```bash
kubectl create namespace my-portfolio
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: my-portfolio/' | kubectl apply -f -
source .env
kubectl create secret docker-registry my-registry-secret \
@@ -120,7 +134,6 @@ envsubst < my-portfolio/portfolioManifest.yaml | \
kubectl apply -n my-portfolio -f -
```
# Expose External Services via Traefik Ingress Controller
External services hosted outside the kubernetes cluster can be exposed using
@@ -141,7 +154,6 @@ envsubst < external-service/proxmox.yaml | \
kubectl apply -n external-services -f -
```
# Create Shared NFS Storage for Plex and Jellyfin
A 1TB NVME SSD is mounted to one of the original homelab VMs. This serves as an
@@ -174,6 +186,7 @@ sudo systemctl enable nfs-kernel-server
```
## On all the K3s VMs:
```
sudo apt install nfs-common
sudo mkdir /mnt/media
@@ -184,7 +197,6 @@ sudo mount 192.168.1.113:/media/flexdrive /mnt/media
sudo umount /mnt/media
```
# Deploy Jellyfin Container in K3s
Jellyfin is a media server that can be used to organize, play, and stream
@@ -223,7 +235,6 @@ kubectl exec -it temp-pod -n media -- bash
cp -r /mnt/source/* /mnt/destination/
```
# Create Storage Solution
Longhorn is a distributed block storage solution for Kubernetes that is built
@@ -262,7 +273,8 @@ kubectl -n longhorn-system edit svc longhorn-frontend
kubectl -n longhorn-system get nodes.longhorn.io
kubectl -n longhorn-system edit nodes.longhorn.io <node-name>
```

Add the following block under disks for all nodes:

```bash
@@ -274,7 +286,7 @@ Add the following block under disks for all nodes:
path: /mnt/longhorn # Specify the new mount path
storageReserved: 0 # Adjust storageReserved if needed
tags: []
```
## Setting the number of replicas
@@ -287,7 +299,6 @@ kubectl edit configmap -n longhorn-system longhorn-storageclass
set the numberOfReplicas: "1"
```
# Configure AdGuard Adblocker
AdGuard is deployed in the K3S cluster for network ad protection.
@@ -309,7 +320,6 @@ helm install adguard \
--atomic adguard-helm-chart
```
# Pocketbase Database and Authentication Backend
Pocketbase serves as the database and authentication backend for
@@ -368,7 +378,6 @@ qBittorrent pod:
curl ipinfo.io
```
# PostgreSQL Database
The PostgreSQL database uses the bitnami postgres helm chart with one primary
@@ -399,7 +408,7 @@ psql -U $POSTGRES_USER -d postgres --host 192.168.1.145 -p 5432
## Backup and Restore PostgreSQL Database
```bash
# To backup
# Dump format is compressed and allows parallel restore
pg_dump -U $POSTGRES_USER -h 192.168.1.145 -p 5432 -F c \
-f db_backup.dump postgres
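# The restore command is not shown in this hunk; a hedged example of a
# parallel restore from the custom-format dump created above:
pg_restore -U $POSTGRES_USER -h 192.168.1.145 -p 5432 -d postgres -j 4 db_backup.dump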
@@ -457,7 +466,7 @@ kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: gitea/' | kubectl apply -f -
# The configMap contains the app.ini file values for gitea
envsubst < gitea/configMap.yaml | kubectl apply -n gitea -f -
helm install gitea gitea-charts/gitea -f gitea/values.yaml \
--namespace gitea \
@@ -478,7 +487,6 @@ and set the replicas to the desired number.
kubectl edit statefulset gitea-act-runner -n gitea
```
## Authentication Middleware Configuration for Traefik Ingress Controller
The Traefik Ingress Controller provides robust authentication capabilities
@@ -503,8 +511,35 @@ envsubst < traefik-middleware/auth_secret.yaml | kubectl apply -n my-portfolio -
kubectl apply -f traefik-middleware/auth.yaml -n my-portfolio
```
Following middleware deployment, the authentication must be enabled by adding
the appropriate annotation to the service's Ingress object specification (a
sketch of such a middleware follows the annotation):
```
traefik.ingress.kubernetes.io/router.middlewares: my-portfolio-basic-auth@kubernetescrd
```
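The `traefik-middleware/auth.yaml` applied above is not included in this diff; a Traefik basicAuth middleware of the shape implied by that annotation generally looks like the sketch below, where the secret name is an assumption.

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: basic-auth
  namespace: my-portfolio
spec:
  basicAuth:
    secret: basic-auth-secret   # htpasswd-style users Secret, e.g. from auth_secret.yaml
```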
# LLDAP Authentication Server
LDAP is a protocol used to access and maintain distributed directory information.
To provide central authentication for all services, an LDAP server is deployed in the
k3s cluster. LLDAP is a lightweight LDAP server that is easy to deploy and manage.
The LLDAP server is deployed using the helm chart and is accessible via the ingress
controller.
```bash
source .env
kubectl create namespace ldap
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: ldap/' | kubectl apply -f -
helm install ldap \
lldap-helm-chart/ \
--set ingress.hosts.host=$LDAP_HOST \
--set ingress.tls[0].hosts[0]=$DNSNAME \
--set secret.lldapUserName=$LLDAP_ADMIN_USER \
--set secret.lldapJwtSecret=$LLDAP_JWT_SECRET \
--set secret.lldapUserPass=$LLDAP_ADMIN_PASSWORD \
--atomic \
-n ldap
```
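A quick way to sanity-check the deployment afterwards; the service name matches the chart values shown later in this diff, and the web UI listens on port 17170.

```bash
kubectl get pods,svc -n ldap

# Reach the web UI before the ingress record resolves
kubectl -n ldap port-forward svc/lldap-service 17170:17170
```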


@@ -11,7 +11,7 @@ spec:
      name: {{ .Values.clusterIssuer.privateKeySecretRef }}
    solvers:
    - dns01:
        cloudflare: # Use the DNS-01 challenge mechanism for Cloudflare
          email: {{ .Values.clusterIssuer.email }}
          apiTokenSecretRef:
            name: {{ .Values.clusterIssuer.apiTokenSecretRef.name }}


@@ -4,5 +4,5 @@ metadata:
  name: {{ .Values.secret.name }}
  namespace: {{ .Values.namespace }}
type: Opaque
stringData: # raw (non-base64) values; Kubernetes encodes them when the Secret is created
  api-token: {{ .Values.secret.apiToken }}


@@ -18,4 +18,4 @@ wildcardCert:
secret:
  type: Opaque
  name: cloudflare-api-token-secret
  apiToken: cloudflareToken


@@ -14,6 +14,10 @@ gitea:
    password: password
    email: email
image:
  repository: gitea
  tag: 1.23.7
postgresql:
  enabled: false


@@ -0,0 +1,17 @@
---
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
name: k3s-latest
namespace: system-upgrade
spec:
concurrency: 1
version: v1.32.4-k3s1
nodeSelector:
matchExpressions:
- {key: k3s-upgrade, operator: Exists}
serviceAccountName: system-upgrade
drain:
force: true
upgrade:
image: rancher/k3s-upgrade


@@ -0,0 +1,6 @@
apiVersion: v2
name: lldap-chart
description: lldap - Light LDAP implementation for authentication
type: application
version: 0.1.0
appVersion: "latest"


@@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "lldap-chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "lldap-chart.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "lldap-chart.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "lldap-chart.labels" -}}
helm.sh/chart: {{ include "lldap-chart.chart" . }}
{{ include "lldap-chart.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "lldap-chart.selectorLabels" -}}
app.kubernetes.io/name: {{ include "lldap-chart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "lldap-chart.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "lldap-chart.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}


@@ -0,0 +1,99 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: lldap
namespace: {{ .Values.namespace }}
labels:
app: lldap
annotations:
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: lldap
strategy:
type: Recreate
template:
metadata:
labels:
app: lldap
annotations:
spec:
containers:
- name: lldap
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
env:
- name: GID
value: "{{ .Values.env.GID }}"
- name: LLDAP_JWT_SECRET
valueFrom:
secretKeyRef:
name: {{ .Values.secret.name }}
key: lldap-jwt-secret
- name: LLDAP_LDAP_BASE_DN
valueFrom:
secretKeyRef:
name: {{ .Values.secret.name }}
key: base-dn
- name: LLDAP_LDAP_USER_DN
valueFrom:
secretKeyRef:
name: {{ .Values.secret.name }}
key: lldap-ldap-user-name
- name: LLDAP_LDAP_USER_PASS
valueFrom:
secretKeyRef:
name: {{ .Values.secret.name }}
key: lldap-ldap-user-pass
- name: TZ
value: "{{ .Values.env.TZ }}"
- name: UID
value: "{{ .Values.env.UID }}"
{{- if .Values.extraEnv}}
{{- toYaml .Values.extraEnv | nindent 12}}
{{- end }}
ports:
- containerPort: 3890
- containerPort: 6360
- containerPort: 17170
volumeMounts:
{{- if .Values.persistence.enabled }}
- mountPath: /data
name: lldap-data
{{- end }}
{{- if .Values.extraVolumeMounts}}
{{- toYaml .Values.extraVolumeMounts | nindent 12}}
{{- end }}
volumes:
{{- if .Values.persistence.enabled}}
- name: lldap-data
persistentVolumeClaim:
claimName: lldap-data
{{- end }}
{{- if .Values.extraVolumes}}
{{- toYaml .Values.extraVolumes | nindent 8}}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}


@@ -0,0 +1,38 @@
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Values.ingress.name | quote }}
namespace: {{ .Values.namespace | quote }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.ingress.labels }}
labels:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
ingressClassName: {{ .Values.ingress.ingressClassName | quote }}
rules:
- host: {{ .Values.ingress.hosts.host | quote }}
http:
paths:
- path: {{ .Values.ingress.hosts.paths.path | quote }}
pathType: {{ .Values.ingress.hosts.paths.pathType | default "Prefix" | quote }}
backend:
service:
name: {{ $.Values.service.webui.name | quote }}
port:
number: {{ $.Values.service.webui.ports.port | default 17170 }}
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName | quote }}
{{- end }}
{{- end }}
{{- end }}


@@ -0,0 +1,40 @@
{{- if .Values.persistence.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: lldap-data
namespace: {{ .Values.namespace }}
labels:
app: lldap
spec:
{{- if .Values.persistence.storageClassName }}
storageClassName: {{ .Values.persistence.storageClassName }}
{{- end }}
accessModes:
- {{ .Values.persistence.accessMode }}
resources:
requests:
storage: {{ .Values.persistence.storageSize }}
{{- end }}
{{- if and .Values.persistence.enabled .Values.persistence.manualProvision }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: lldap-data-pv
namespace: {{ .Values.namespace }}
labels:
app: lldap
spec:
capacity:
storage: {{ .Values.persistence.storageSize }}
accessModes:
- {{ .Values.persistence.accessMode }}
{{- if .Values.persistence.storageClassName }}
storageClassName: {{ .Values.persistence.storageClassName }}
{{- end }}
{{- if .Values.persistence.localPath }}
hostPath:
path: {{ .Values.persistence.localPath }}
{{- end }}
{{- end }}


@@ -0,0 +1,13 @@
{{- if .Values.secret.create }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Values.secret.name }}
namespace: {{ .Values.namespace }}
type: Opaque
data:
lldap-jwt-secret: {{ .Values.secret.lldapJwtSecret | b64enc }}
lldap-ldap-user-name: {{ .Values.secret.lldapUserName | b64enc }}
lldap-ldap-user-pass: {{ .Values.secret.lldapUserPass | b64enc }}
base-dn: {{ .Values.secret.lldapBaseDn | b64enc }}
{{- end }}


@@ -0,0 +1,33 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.service.webui.name }}
namespace: {{ .Values.namespace }}
labels:
app: lldap
spec:
type: {{ .Values.service.webui.type }}
ports:
- name: {{ .Values.service.webui.ports.name | quote }}
port: {{ .Values.service.webui.ports.port }}
targetPort: {{ .Values.service.webui.ports.targetPort }}
selector:
app: lldap
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.service.ldap.name }}
annotations:
external-dns.alpha.kubernetes.io/hostname: lldap.tahmidcloud.com
namespace: {{ .Values.namespace }}
labels:
app: lldap
spec:
type: {{ .Values.service.ldap.type }}
ports:
- name: {{ .Values.service.ldap.ports.name | quote }}
port: {{ .Values.service.ldap.ports.port }}
targetPort: {{ .Values.service.ldap.ports.targetPort }}
selector:
app: lldap


@@ -0,0 +1,97 @@
##### secret creation
secret:
create: true
name: lldap-credentials
lldapJwtSecret: "placeholder"
lldapUserName: "placeholder"
lldapUserPass: "placeholder"
lldapBaseDn: "dc=homelab,dc=local"
##### pvc
persistence:
enabled: true
storageClassName: ""
storageSize: "100Mi"
accessMode: "ReadWriteOnce"
# In case the StorageClass does not provision volumes automatically, specify a
# local path for manual mounting here, for example /mnt/data/lldap.
# If the StorageClass supports automatic provisioning, leave this field empty.
localPath: "" # Local filesystem path for storage, used if 'local-path' is the SC.
# If manualProvision is set to true, a PersistentVolume is created by Helm.
# Set it to false when the StorageClass provisions volumes automatically and to
# true when it does not. Default is false.
manualProvision: false
extraVolumes: []
extraVolumeMounts: []
##### deployment
# time zone and container user/group IDs
env:
TZ: "EET"
GID: "1001"
UID: "1001"
extraEnv: []
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 50m
memory: 50M
nodeSelector: {}
tolerations: []
affinity: {}
replicaCount: 1
image:
repository: "nitnelave/lldap"
tag: "v0.6.1"
pullPolicy: "IfNotPresent"
##### service (always created; the chart exposes no `enabled` toggle, since nothing works without it)
service:
webui:
name: lldap-service
type: ClusterIP
ports:
name: "17170"
port: 17170
targetPort: 17170
ldap:
name: lldap
type: LoadBalancer
ports:
name: "3890"
port: 3890
targetPort: 3890
##### ingress
ingress:
ingressClassName: "traefik"
enabled: true
name: lldap-web-ingress
annotations: {}
labels: {}
hosts:
host: "placeholder.test.com"
paths:
path: "/"
pathType: "Prefix"
tls:
- secretName: "lldap-secret-tls"
hosts:
- "placeholder.test.com"


@@ -72,7 +72,7 @@ spec:
  type: ClusterIP
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: jellyfin-ingress
@@ -91,7 +91,7 @@ spec:
    secretName: wildcard-cert-secret
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: jellyfin-headers


@@ -0,0 +1,62 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: portfolio-app
labels:
app: portfolio-app
spec:
replicas: 1
selector:
matchLabels:
app: portfolio-app
template:
metadata:
labels:
app: portfolio-app
spec:
imagePullSecrets:
- name: my-registry-secret
containers:
- name: portfolio-app
image: "${DOCKER_REGISTRY_HOST}/my-portfolio-app:latest"
imagePullPolicy: Always
ports:
- containerPort: 80
restartPolicy: Always
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
name: portfolio-app-svc
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
selector:
app: portfolio-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: portfolio
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
tls:
- hosts:
- "${DNSNAME}"
secretName: wildcard-cert-secret
rules:
- host: "${PORTFOLIO_HOST}"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: portfolio-app-svc
port:
number: 80


@@ -74,7 +74,7 @@ kind: Service
metadata:
  name: pgadmin-service
spec:
  type: ClusterIP # no external IP; reach pgAdmin via kubectl port-forward or an Ingress
  ports:
    - port: 80
      targetPort: 80


@@ -0,0 +1,26 @@
USER-SUPPLIED VALUES:
deployment:
podAnnotations:
prometheus.io/port: "8082"
prometheus.io/scrape: "true"
global:
systemDefaultRegistry: ""
image:
repository: rancher/mirrored-library-traefik
tag: 2.11.8
priorityClassName: system-cluster-critical
providers:
kubernetesIngress:
publishedService:
enabled: true
service:
ipFamilyPolicy: PreferDualStack
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists