updated readme and restructure project

2025-02-28 20:04:52 +02:00
parent 8e8d1a65e2
commit efdaba6169
60 changed files with 109 additions and 120 deletions

2
kubernetes/.gitignore vendored Normal file
@@ -0,0 +1,2 @@
.secrets/
.env

510
kubernetes/README.md Normal file
@@ -0,0 +1,510 @@
Setup K3s Kubernetes Cluster
===================================
# Configure cert-manager for Automated SSL Certificate Handling
cert-manager handles SSL certificate creation and renewal from Let's Encrypt.
```bash
helm repo add jetstack https://charts.jetstack.io --force-update
helm repo update
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.15.3 \
--set crds.enabled=true \
--set prometheus.enabled=false \
--set webhook.timeoutSeconds=4
```
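Verify that the cert-manager pods are running before continuing:
```bash
kubectl get pods -n cert-manager
```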
Next, deploy the certificate issuer. Issuers and ClusterIssuers are Kubernetes
resources that represent certificate authorities (CAs) able to generate signed
certificates by honoring certificate signing requests. Every cert-manager
Certificate references an issuer, which must be in a ready condition before
the request can be honored
([Ref](https://cert-manager.io/docs/concepts/issuer/)).
The ClusterIssuer template is in the cert-manager directory. A single
wildcard certificate is created and used for all ingress subdomains; its
secret must be copied manually into every namespace that needs it.
First, add the upstream DNS servers to the CoreDNS config:
```bash
export KUBE_EDITOR=nvim
# In the Corefile, change the forward line to: forward . 1.1.1.1 1.0.0.1
kubectl -n kube-system edit configmap coredns
```
Next, deploy the ClusterIssuer, wildcard Certificate, and secrets using Helm:
```bash
source .env
helm install cert-handler cert-manager-helm-chart \
--atomic --set secret.apiToken=$CLOUDFLARE_TOKEN \
--set clusterIssuer.email=$EMAIL \
--set wildcardCert.dnsNames[0]=$DNSNAME
# Copy the wildcard certificate to other namespaces
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: <namespace>/' | kubectl apply -f -
```
If the certificate secret `wildcard-cert-secret` is not generated, the cause
is usually a wrong Cloudflare API token, a missing token secret, or an Issuer
or ClusterIssuer that is not ready.
Here are some commands to troubleshoot:
```bash
kubectl get clusterissuer
kubectl describe clusterissuer
kubectl get certificate -n nginx-test
kubectl get certificateRequest -n nginx-test
kubectl describe challenges -n cert-manager
kubectl describe orders -n cert-manager
```
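If a challenge stalls, the cert-manager controller logs usually show the
reason. A quick check (assuming the default deployment name from the Helm
chart):
```bash
kubectl logs -n cert-manager deploy/cert-manager --tail=50
```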
# Deploy Private Docker Registry
Create a new namespace called docker-registry and deploy the private
docker-registry.
First, create Docker credentials with htpasswd (written to the git-ignored
`.secrets/` directory, which the next command reads from):
```bash
htpasswd -cB .secrets/registry-passwords USERNAME
kubectl create namespace docker-registry
kubectl create secret generic registry-credentials \
--from-file=.secrets/registry-passwords \
-n docker-registry
```
Next, deploy the Docker registry with the Helm chart:
```bash
source .env
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: docker-registry/' \
| kubectl apply -f -
helm install registry docker-registry-helm-chart/ \
--set host=$DOCKER_REGISTRY_HOST \
--set ingress.tls.host=$DNSNAME \
--atomic
```
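To confirm the registry works end to end, log in and push a small test image.
A sketch using the htpasswd credentials created above:
```bash
docker login "$DOCKER_REGISTRY_HOST"  # prompts for the htpasswd credentials
docker pull alpine
docker tag alpine "$DOCKER_REGISTRY_HOST/alpine:test"
docker push "$DOCKER_REGISTRY_HOST/alpine:test"
# List repositories through the registry HTTP API
curl -u USERNAME "https://$DOCKER_REGISTRY_HOST/v2/_catalog"
```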
# Deploy Portfolio Website from Private Docker Registry
First, create a secret to access the private Docker registry. Then copy the
wildcard certificate secret and deploy the portfolio web app.
```bash
kubectl create namespace my-portfolio
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: my-portfolio/' | kubectl apply -f -
source .env
kubectl create secret docker-registry my-registry-secret \
--docker-server="${DOCKER_REGISTRY_HOST}" \
--docker-username="${DOCKER_USER}" \
--docker-password="${DOCKER_PASSWORD}" \
-n my-portfolio
# use envsubst to substitute the environment variables in the manifest
envsubst < my-portfolio/portfolioManifest.yaml | \
kubectl apply -n my-portfolio -f -
```
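Check that the rollout succeeded and the ingress picked up the host (the
deployment name comes from portfolioManifest.yaml):
```bash
kubectl rollout status deployment/portfolio-app -n my-portfolio
kubectl get ingress -n my-portfolio
```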
# Expose External Services via Traefik Ingress Controller
External services hosted outside the Kubernetes cluster can be exposed through
the cluster's Traefik reverse proxy.
An nginx HTTP server is deployed as a proxy that listens on port 80
and forwards requests to the Proxmox local IP address. The server has an
associated ClusterIP service which is exposed via ingress. The nginx proxy can
be configured to listen on other ports and forward traffic to other external
services running locally or remotely.
```bash
source .env
kubectl create namespace external-services
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: external-services/' | kubectl apply -f -
envsubst < external-service/proxmox.yaml | \
kubectl apply -n external-services -f -
```
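To verify the proxy chain, check the pod and service, then request the host
through Traefik:
```bash
kubectl -n external-services get pods,svc
curl -I "https://$PROXMOX_HOST"
```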
# Create Shared NFS Storage for Plex and Jellyfin
A 1 TB NVMe SSD is mounted on one of the original homelab VMs. It is exported
over NFS so that all k3s nodes can use it as shared storage for the Plex and
Jellyfin containers.
## On the host VM:
```bash
sudo apt update
sudo apt install nfs-kernel-server
sudo chown nobody:nogroup /media/flexdrive
# Configure mount on /etc/fstab to persist across reboot
sudo vim /etc/fstab
# Add the following line. Change the filesystem type if it is not ntfs
# /dev/sdb2 /media/flexdrive ntfs defaults 0 2
# Configure NFS exports by editing the NFS exports file
sudo vim /etc/exports
# Add the following line to the file
# /media/flexdrive 192.168.1.113/24(rw,sync,no_subtree_check,no_root_squash)
# Apply the exports config
sudo exportfs -ra
# Start and enable NFS Server
sudo systemctl start nfs-kernel-server
sudo systemctl enable nfs-kernel-server
```
## On all the K3s VMs:
```bash
sudo apt install nfs-common
sudo mkdir /mnt/media
sudo mount 192.168.1.113:/media/flexdrive /mnt/media
# Check that the contents are visible, then unmount; from here on,
# mounting is handled by k8s
sudo umount /mnt/media
```
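To double-check the export from any node (`showmount` is part of nfs-common):
```bash
showmount -e 192.168.1.113
```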
# Deploy Jellyfin Container in K3s
Jellyfin is a media server that can be used to organize, play, and stream
audio and video files. The Jellyfin container is deployed in the k3s cluster
using the NFS shared storage for media files. Because the media manifests are
kept as separate files, they have not been packaged into a Helm chart.
```bash
source .env
kubectl create namespace media
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: media/' | kubectl apply -f -
# Create a new storageclass called manual to not use longhorn storageclass
kubectl apply -f media/storageclass-nfs.yaml
# Create NFS PV and PVC
envsubst < media/pv.yaml | kubectl apply -n media -f -
kubectl apply -f media/pvc.yaml -n media
# Deploy Jellyfin
envsubst < media/jellyfin-deploy.yaml | kubectl apply -n media -f -
```
## Transfer media files from one PVC to another (Optional)
To transfer media files between PVCs, create a temporary pod that mounts both
volumes. The following commands create such a pod in the media namespace and
copy the files across.
```bash
# Create a temporary pod that mounts both PVCs
kubectl apply -f temp-deploy.yaml -n media
# Copy files from one PVC to the other
kubectl exec -it temp-pod -n media -- bash
cp -r /mnt/source/* /mnt/destination/
```
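Afterwards, verify the copy and clean up the temporary pod (names as used
above):
```bash
kubectl exec -it temp-pod -n media -- du -sh /mnt/destination
kubectl delete -f temp-deploy.yaml -n media
```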
# Create Storage Solution
Longhorn is a distributed block storage solution for Kubernetes that is built
using containers. It provides a simple and efficient way to manage persistent
volumes and is deployed in the k3s cluster to provide storage for the
containers. For security reasons, the Longhorn UI is not exposed outside the
network; it is accessible locally via port-forwarding or a LoadBalancer
service.
To use Longhorn, the storage disk must be formatted and mounted on each VM.
The following commands format the disk and mount it on the /mnt/longhorn
directory. The Longhorn Helm chart then installs Longhorn into the
longhorn-system namespace.
```bash
# On each VM
sudo mkfs.ext4 /dev/sda4
sudo mkdir /mnt/longhorn
sudo mount /dev/sda4 /mnt/longhorn
helm repo add longhorn https://charts.longhorn.io
helm repo update
kubectl create namespace longhorn-system
helm install longhorn longhorn/longhorn --namespace longhorn-system
kubectl -n longhorn-system get pods
# Access longhorn UI
kubectl -n longhorn-system port-forward svc/longhorn-frontend 8080:80
# Or make it permanent by setting the longhorn-frontend service type to
# LoadBalancer.
kubectl -n longhorn-system edit svc longhorn-frontend
```
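Note that the mount above does not persist across reboots; as with the NFS
disk earlier, an /etc/fstab entry takes care of that (device name as used
above):
```bash
echo '/dev/sda4 /mnt/longhorn ext4 defaults 0 2' | sudo tee -a /etc/fstab
```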
## If the /mnt/longhorn disk is not shown
```bash
kubectl -n longhorn-system get nodes.longhorn.io
kubectl -n longhorn-system edit nodes.longhorn.io <node-name>
```
Add the following block under disks for all nodes:
```yaml
custom-disk-mnt-longhorn: # New disk for /mnt/longhorn
allowScheduling: true
diskDriver: ""
diskType: filesystem
evictionRequested: false
path: /mnt/longhorn # Specify the new mount path
storageReserved: 0 # Adjust storageReserved if needed
tags: []
```
## Setting the number of replicas
To set the number of replicas, edit the longhorn-storageclass configmap and
set the numberOfReplicas to the desired number.
```bash
# Set the replica count to 1
kubectl edit configmap -n longhorn-system longhorn-storageclass
# In the embedded storageclass definition, set numberOfReplicas: "1"
```
# Configure AdGuard Adblocker
AdGuard is deployed in the K3s cluster for network-wide ad blocking.
A LoadBalancer service is used for DNS resolution, and a ClusterIP service
plus ingress expose the web UI.
The initial AdGuard admin port is 3000, which is bound to the LoadBalancer IP
on the local network. The AdGuard UI is accessible from the ingress
domain on the internet.
```bash
kubectl create namespace adguard
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: adguard/' | kubectl apply -f -
source .env
helm install adguard \
--set host=$ADGUARD_HOST \
--atomic adguard-helm-chart
```
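After the install, note the external IP that MetalLB assigned to the DNS
service and point your router or clients at it:
```bash
kubectl get svc -n adguard
```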
# Pocketbase Database and Authentication Backend
Pocketbase serves as the database and authentication backend for
various side projects.
```bash
# Create namespace and copy the wildcard cert secret
kubectl create namespace pocketbase
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: pocketbase/' | kubectl apply -f -
# Deploy pocketbase using the helm chart
source .env
helm install pocketbase \
--set ingress.host=$POCKETBASE_HOST \
--set ingress.tls.hosts[0]=$DNSNAME \
--atomic pocketbase-helm-chart
```
You may need to create an initial email and password for the superuser.
To do that, exec into the pod and run the following command:
```bash
pocketbase superuser create email password
```
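A full invocation from outside the pod might look like this (deployment name
from the chart values; email and password are placeholders):
```bash
kubectl exec -it deploy/pocketbase -n pocketbase -- \
  pocketbase superuser create admin@example.com CHANGE-ME
```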
# qBittorrent with Wireguard
qBittorrent is deployed with wireguard to route traffic through a VPN tunnel.
The following packages must be installed on each node:
```bash
# On each k3s node
sudo apt update
sudo apt install -y wireguard wireguard-tools linux-headers-$(uname -r)
```
qBittorrent is deployed via a Helm chart. The deployment uses the common
`media-nfs-pvc` NFS claim for downloads; the chart contains both
qBittorrent and WireGuard. For security, qBittorrent is not exposed outside the
network via ingress. It is accessible locally via the LoadBalancer IP address.
```bash
helm install qbittorrent qbittorrent-helm-chart/ --atomic
```
After deployment, verify qBittorrent is accessible on the LoadBalancer IP and
port. Log in to the qBittorrent UI with the default credentials from the
deployment log. Change the user settings under Settings/WebUI, configure the
network interface (wg0) under Settings/Advanced, and set download/upload
speeds under Settings/Speed.
Also verify the VPN is working by executing the following command in the
qBittorrent pod:
```bash
curl ipinfo.io
```
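Run from outside the cluster, the check might look like this (release and
namespace as used above); the reported IP should belong to the VPN endpoint,
not your ISP:
```bash
kubectl exec -it deploy/qbittorrent -n media -c qbittorrent -- curl -s ipinfo.io
```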
# PostgreSQL Database
The PostgreSQL database uses the Bitnami PostgreSQL Helm chart with one primary
and one read-replica StatefulSet, for a total of two postgres pods.
```bash
# Add the Bitnami repo if not already added
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# Install PostgreSQL with these values
source .env
helm install my-postgres \
bitnami/postgresql -f values.yaml \
--set global.postgresql.auth.username=$POSTGRES_USER \
--set global.postgresql.auth.password=$POSTGRES_PASSWORD \
--set global.postgresql.auth.postgresPassword=$POSTGRES_PASSWORD \
--atomic \
-n postgres
```
## Connect to the Database
```bash
psql -U $POSTGRES_USER -d postgres --host 192.168.1.145 -p 5432
```
## Backup and Restore PostgreSQL Database
```bash
# To backup
# Dump format is compressed and allows parallel restore
pg_dump -U $POSTGRES_USER -h 192.168.1.145 -p 5432 -F c \
-f db_backup.dump postgres
# To restore
pg_restore -U $POSTGRES_USER -h 192.168.1.145 -p 5432 -d postgres db_backup.dump
```
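Because the custom dump format supports parallel restore, larger databases can
be restored faster with several jobs:
```bash
pg_restore -U $POSTGRES_USER -h 192.168.1.145 -p 5432 -d postgres \
  -j 4 db_backup.dump
```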
## pgAdmin
pgAdmin provides GUI support for PostgreSQL database management. Deploy using
pgadmin.yaml manifest under postgres directory. The environment variables are
substituted from the .env file.
```bash
source .env
envsubst < postgres/pgadmin.yaml | kubectl apply -n postgres -f -
```
# Gitea Git Server
References:
- https://gitea.com/gitea/helm-chart/
- https://docs.gitea.com/installation/database-prep
Gitea is a self-hosted Git service deployed in the k3s cluster. The Gitea
deployment uses the existing PostgreSQL database for data storage. The Gitea
service is exposed via ingress and is accessible from the internet.
Configure a new user, database, and schema for Gitea in the postgres database:
```sql
CREATE ROLE gitea WITH LOGIN PASSWORD 'gitea';
CREATE DATABASE giteadb
WITH OWNER gitea
TEMPLATE template0
ENCODING UTF8
LC_COLLATE 'en_US.UTF-8'
LC_CTYPE 'en_US.UTF-8';
\c giteadb
CREATE SCHEMA gitea;
GRANT USAGE ON SCHEMA gitea TO gitea;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA gitea TO gitea;
ALTER SCHEMA gitea OWNER TO gitea;
```
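These statements can be run through the psql connection shown earlier; saved
to a file (name hypothetical), they can also be applied non-interactively:
```bash
psql -U $POSTGRES_USER -h 192.168.1.145 -p 5432 -d postgres \
  -f gitea/init-db.sql
```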
Next, deploy the Gitea helm chart with the following values:
```bash
source .env
kubectl create namespace gitea
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: gitea/' | kubectl apply -f -
# The configMap contains the app.ini file values for gitea
kubectl apply -f gitea/configMap.yaml -n gitea
helm install gitea gitea-charts/gitea -f gitea/values.yaml \
--namespace gitea \
--atomic \
--set ingress.hosts[0].host=$GITEA_HOST \
--set ingress.tls[0].hosts[0]=$DNSNAME \
--set gitea.admin.username=$GITEA_USER \
--set gitea.admin.password=$GITEA_PASSWORD \
--set gitea.admin.email=$GITEA_EMAIL \
--set gitea.config.database.PASSWD=$POSTGRES_PASSWORD \
--set gitea.config.database.HOST=$POSTGRES_URL
```
To scale the Gitea Actions runner replicas, edit the `gitea-act-runner`
statefulset and set the replicas to the desired number.
```bash
kubectl edit statefulset gitea-act-runner -n gitea
```
# Authentication Middleware Configuration for Traefik Ingress Controller
The Traefik Ingress Controller provides robust authentication capabilities
through middleware implementation. This functionality enables HTTP Basic
Authentication for services that do not include native user authentication
mechanisms.
To implement authentication, a Traefik middleware must be configured within
the target namespace. The process requires creating a secret file containing
authentication credentials (username and password). These credentials must
be base64 encoded before being integrated into the secret manifest file.
Execute the following commands to configure the authentication:
```bash
htpasswd -c traefik_auth username
# base64-encode the htpasswd file contents for the secret manifest
base64 -w0 traefik_auth
source .env
envsubst < traefik-middleware/auth_secret.yaml | kubectl apply -n my-portfolio -f -
kubectl apply -f traefik-middleware/auth.yaml -n my-portfolio
```
Following middleware deployment, enable authentication by adding the
appropriate annotation to the service's Ingress object specification:
```yaml
traefik.ingress.kubernetes.io/router.middlewares: my-portfolio-basic-auth@kubernetescrd
```
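The same annotation can also be attached from the command line instead of
editing the manifest, e.g. for the portfolio Ingress defined earlier:
```bash
kubectl annotate ingress portfolio -n my-portfolio \
  traefik.ingress.kubernetes.io/router.middlewares=my-portfolio-basic-auth@kubernetescrd
```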

@@ -0,0 +1,6 @@
apiVersion: v2
name: adguard
description: A Helm chart for deploying AdGuard Home
type: application
version: 0.1.0
appVersion: "latest"

@@ -0,0 +1,37 @@
{{/*
Expand the name of the release
*/}}
{{- define "adguard.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels for all resources
*/}}
{{- define "adguard.labels" -}}
app: {{ include "adguard.fullname" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- end -}}
{{/*
Generate a name for the PVC
*/}}
{{- define "adguard.pvcName" -}}
{{ printf "%s-pvc" (include "adguard.fullname" .) }}
{{- end -}}
{{/*
Generate a name for the service
*/}}
{{- define "adguard.serviceName" -}}
{{ printf "%s-service" (include "adguard.fullname" .) }}
{{- end -}}
{{/*
Generate a name for the ingress
*/}}
{{- define "adguard.ingressName" -}}
{{ printf "%s-ingress" (include "adguard.fullname" .) }}
{{- end -}}

@@ -0,0 +1,44 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}-home
namespace: {{ .Values.namespace }}
labels:
app: {{ .Values.deployment.labels.app }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ .Values.deployment.labels.app }}
template:
metadata:
labels:
app: {{ .Values.deployment.labels.app }}
spec:
containers:
- name: {{ .Values.deployment.labels.app }}
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
ports:
- containerPort: {{ .Values.deployment.adminContainerPort }}
protocol: TCP
name: admin
- containerPort: {{ .Values.deployment.httpContainerPort }}
name: http
protocol: TCP
- containerPort: {{ .Values.deployment.dns.tcp }}
name: dns-tcp
protocol: TCP
- containerPort: {{ .Values.deployment.dns.udp }}
name: dns-udp
protocol: UDP
volumeMounts:
- name: adguard-config
mountPath: /opt/adguardhome/conf
- name: adguard-work
mountPath: /opt/adguardhome/work
volumes:
- name: adguard-config
persistentVolumeClaim:
claimName: {{ .Values.pvc.claimName }}
- name: adguard-work
emptyDir: {}

@@ -0,0 +1,27 @@
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}-ingress
namespace: {{ .Values.namespace }}
annotations:
{{- toYaml .Values.ingress.annotations | nindent 4 }}
spec:
rules:
- host: {{ .Values.host }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}-web-ui
port:
number: {{ .Values.service.port }}
{{- if .Values.ingress.tls.enabled }}
tls:
- hosts:
- {{ .Values.host }}
secretName: {{ .Values.ingress.tls.secretName }}
{{- end }}
{{- end }}

@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Release.Name }}-pvc
namespace: {{ .Values.namespace }}
spec:
accessModes:
- {{ .Values.pvc.accessModes }}
resources:
requests:
storage: {{ .Values.pvc.size }}
storageClassName: {{ .Values.pvc.storageClass }}

@@ -0,0 +1,37 @@
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-service
namespace: {{ .Values.namespace }}
spec:
selector:
app: {{ .Release.Name }}-home
ports:
- port: {{ .Values.service.dnsPort.udp }}
targetPort: {{ .Values.service.dnsPort.udp }}
protocol: UDP
name: dns-udp
- port: {{ .Values.service.dnsPort.tcp }}
targetPort: {{ .Values.service.dnsPort.tcp }}
protocol: TCP
name: dns-tcp
- port: {{ .Values.service.port }}
targetPort: {{ .Values.service.adminTargetPort }}
protocol: TCP
name: admin
type: {{ .Values.service.type }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-web-ui
namespace: {{ .Values.namespace }}
spec:
selector:
app: {{ .Release.Name }}-home
ports:
- port: {{ .Values.service.port }}
targetPort: {{ .Values.service.webUiPort }}
protocol: TCP
name: http

@@ -0,0 +1,58 @@
apiVersion: v1
appVersion: "latest"
description: A Helm chart for deploying AdGuard Home
name: adguard-home
namespace: adguard
version: 0.1.0
replicaCount: 1
host: adguard.example.com # Change this to your domain
deployment:
adminContainerPort: 3000
httpContainerPort: 80
dns:
tcp: 53
udp: 53
labels:
app: adguard-home
image:
repository: adguard/adguardhome
tag: latest
pullPolicy: IfNotPresent
service:
type: LoadBalancer
port: 80
adminTargetPort: 3000
webUiPort: 80
dnsPort:
udp: 53
tcp: 53
ingress:
enabled: true
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/affinity: "true"
traefik.ingress.kubernetes.io/session-cookie-name: "SESSIONID"
hosts:
- host:
paths:
- /
tls:
enabled: true
secretName: wildcard-cert-secret
pvc:
claimName: adguard-pvc
enabled: true
storageClass: longhorn
accessModes: ReadWriteOnce
size: 1Gi
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}

@@ -0,0 +1,5 @@
apiVersion: v2
name: cert-manager
description: A Helm chart for cert-manager
version: 0.1.0
appVersion: "v1.11.0"

@@ -0,0 +1,18 @@
# filepath: /home/taqi/homeserver/k3s-infra/cert-manager/templates/clusterIssuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: {{ .Values.clusterIssuer.name }}
namespace: {{ .Values.namespace }}
spec:
acme:
server: {{ .Values.clusterIssuer.server }}
privateKeySecretRef:
name: {{ .Values.clusterIssuer.privateKeySecretRef }}
solvers:
- dns01:
cloudflare:
email: {{ .Values.clusterIssuer.email }}
apiTokenSecretRef:
name: {{ .Values.clusterIssuer.apiTokenSecretRef.name }}
key: {{ .Values.clusterIssuer.apiTokenSecretRef.key }}

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ .Values.secret.name }}
namespace: {{ .Values.namespace }}
type: Opaque
data:
api-token: {{ .Values.secret.apiToken }}

@@ -0,0 +1,14 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ .Values.wildcardCert.name }}
namespace: {{ .Values.namespace }}
spec:
secretName: {{ .Values.wildcardCert.secretName }}
issuerRef:
name: {{ .Values.clusterIssuer.name }}
kind: ClusterIssuer
dnsNames:
{{- range .Values.wildcardCert.dnsNames }}
- "{{ . }}"
{{- end }}

@@ -0,0 +1,21 @@
namespace: cert-manager
clusterIssuer:
name: acme-issuer
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef: example-issuer-account-key
email: EMAIL
apiTokenSecretRef:
name: cloudflare-api-token-secret
key: api-token
wildcardCert:
name: wildcard-cert
secretName: wildcard-cert-secret
dnsNames:
- ".example.com"
secret:
type: Opaque
name: cloudflare-api-token-secret
apiToken: base64encodedtoken

@@ -0,0 +1,5 @@
apiVersion: v2
name: docker-registry-helm-chart
description: A Helm chart for deploying a Docker registry
version: 0.1.0
appVersion: "latest"

@@ -0,0 +1,49 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}
namespace: {{ .Values.namespace }}
spec:
replicas: {{ .Values.deployment.replicas }}
selector:
matchLabels:
app: {{ .Values.name }}
template:
metadata:
labels:
app: {{ .Values.name }}
spec:
containers:
- name: {{ .Values.name }}
image: {{ .Values.deployment.image }}
ports:
- containerPort: {{ .Values.deployment.containerPort }}
env:
- name: REGISTRY_AUTH
value: "htpasswd"
- name: REGISTRY_AUTH_HTPASSWD_REALM
value: "Registry Realm"
- name: REGISTRY_AUTH_HTPASSWD_PATH
value: "/auth/registry-passwords"
- name: REGISTRY_AUTH_HTPASSWD_FILE
value: "/auth/registry-passwords"
- name: REGISTRY_HTTP_HEADERS
value: |
Access-Control-Allow-Origin: ['{{ .Values.uiDomain }}']
Access-Control-Allow-Methods: ['HEAD', 'GET', 'OPTIONS', 'DELETE', 'POST', 'PUT']
Access-Control-Allow-Headers: ['Authorization', 'Accept', 'Content-Type', 'X-Requested-With', 'Cache-Control']
Access-Control-Max-Age: [1728000]
Access-Control-Allow-Credentials: [true]
Access-Control-Expose-Headers: ['Docker-Content-Digest']
volumeMounts:
- name: {{ .Values.deployment.registryStorageVolumeName }}
mountPath: /var/lib/registry
- name: {{ .Values.deployment.authStorageVolumeName }}
mountPath: /auth
volumes:
- name: {{ .Values.deployment.registryStorageVolumeName }}
persistentVolumeClaim:
claimName: {{ .Values.pvc.claimName }}
- name: {{ .Values.deployment.authStorageVolumeName }}
secret:
secretName: {{ .Values.credentialSecret.name }}

@@ -0,0 +1,29 @@
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Values.name }}-ingress
namespace: {{ .Values.namespace }}
annotations:
{{- range $key, $value := .Values.ingress.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
rules:
- host: "{{ .Values.host }}"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ .Values.name }}-service
port:
number: 5000
{{- if .Values.ingress.tls.enabled }}
tls:
- hosts:
- "{{ .Values.ingress.tls.host }}"
secretName: {{ .Values.ingress.tls.secretName }}
{{- end }}
{{- end }}

@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvc.claimName }}
namespace: {{ .Values.namespace }}
spec:
accessModes:
- {{ .Values.pvc.accessMode }}
resources:
requests:
storage: {{ .Values.pvc.size }}

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.name }}-service
namespace: {{ .Values.namespace }}
spec:
selector:
app: {{ .Values.name }}
ports:
- protocol: TCP
port: {{ .Values.service.port }}
targetPort: {{ .Values.deployment.containerPort }}
type: {{ .Values.service.type }}

@@ -0,0 +1,34 @@
name: registry
namespace: docker-registry
storage: 5Gi
host: registry.example.com
deployment:
replicas: 1
containerPort: 5000
image: registry:2
registryStorageVolumeName: registry-storage
authStorageVolumeName: auth-storage
ingress:
enabled: true
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
tls:
enabled: true
host: "*.example.com"
secretName: wildcard-cert-secret
service:
type: ClusterIP
port: 5000
pvc:
claimName: registry-pvc
enabled: true
storageClass: longhorn
accessMode: ReadWriteOnce
size: 5Gi
credentialSecret:
name: registry-credentials

@@ -0,0 +1,79 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: proxmox-proxy
namespace: external-services
spec:
replicas: 1
selector:
matchLabels:
app: proxmox-proxy
template:
metadata:
labels:
app: proxmox-proxy
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
volumes:
- name: nginx-config
configMap:
name: proxmox-proxy-config
---
apiVersion: v1
kind: Service
metadata:
name: proxmox-proxy
namespace: external-services
spec:
selector:
app: proxmox-proxy
ports:
- port: 80
targetPort: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
name: proxmox-proxy-config
namespace: external-services
data:
nginx.conf: |
events {}
http {
server {
listen 80;
location / {
proxy_pass https://${PROXMOX_IP};
proxy_ssl_verify off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
}
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: proxmox-route
namespace: external-services
spec:
entryPoints:
- websecure
routes:
- match: Host(`${PROXMOX_HOST}`)
kind: Rule
services:
- name: proxmox-proxy
port: 80
tls:
secretName: wildcard-cert-secret

@@ -0,0 +1,8 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: gitea-app-ini-plaintext
namespace: gitea
data:
service: |
DISABLE_REGISTRATION = true

@@ -0,0 +1,63 @@
gitea:
config:
database:
DB_TYPE: postgres
HOST: postgres
NAME: giteadb
USER: gitea
PASSWD: password
additionalConfigSources:
- configMap:
name: gitea-app-ini-plaintext
admin:
username: admin
password: password
email: email
image:
repository: gitea/gitea
tag: 1.23.4
postgresql:
enabled: false
postgresql-ha:
enabled: false
redis-cluster:
enabled: false
redis:
enabled: false
persistence:
enabled: true
accessModes: [ "ReadWriteMany" ]
size: "10Gi"
resources:
limits:
cpu: 1000m
memory: 512Mi
requests:
cpu: 100m
memory: 512Mi
ingress:
enabled: true
hosts:
- host: git.example.com
paths:
- path: /
pathType: Prefix
tls:
- secretName: wildcard-cert-secret
hosts:
- "*.example.com"
actions:
enabled: true
runner:
replicas: 3
provisioning:
enabled: true

@@ -0,0 +1,103 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: jellyfin-network-config
data:
network.xml: |
<?xml version="1.0" encoding="utf-8"?>
<NetworkConfiguration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<BaseUrl>/</BaseUrl>
<EnableHttps>true</EnableHttps>
<RequireHttps>true</RequireHttps>
<InternalHttpPort>8096</InternalHttpPort>
<InternalHttpsPort>8920</InternalHttpsPort>
<PublicHttpPort>80</PublicHttpPort>
<PublicHttpsPort>443</PublicHttpsPort>
<EnableRemoteAccess>true</EnableRemoteAccess>
<EnablePublishedServerUriByRequest>true</EnablePublishedServerUriByRequest>
<PublishedServerUri>https://${JELLYFIN_HOST}</PublishedServerUri>
</NetworkConfiguration>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: jellyfin
spec:
replicas: 1
selector:
matchLabels:
app: jellyfin
template:
metadata:
labels:
app: jellyfin
spec:
containers:
- name: jellyfin
image: jellyfin/jellyfin:latest
ports:
- containerPort: 8096
volumeMounts:
- name: plex-media
mountPath: /media
- name: config
mountPath: /config
- name: network-config
mountPath: /config/config/network.xml
subPath: network.xml
volumes:
- name: plex-media
persistentVolumeClaim:
claimName: media-nfs-pvc
- name: config
persistentVolumeClaim:
claimName: plex-config-pvc
- name: network-config
configMap:
name: jellyfin-network-config
---
apiVersion: v1
kind: Service
metadata:
name: jellyfin-service
spec:
selector:
app: jellyfin
ports:
- protocol: TCP
port: 8096
targetPort: 8096
type: ClusterIP
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: jellyfin-ingress
annotations:
traefik.ingress.kubernetes.io/router.middlewares: jellyfin-headers@kubernetescrd
spec:
entryPoints:
- websecure
routes:
- match: Host(`${JELLYFIN_HOST}`)
kind: Rule
services:
- name: jellyfin-service
port: 8096
tls:
secretName: wildcard-cert-secret
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: jellyfin-headers
spec:
headers:
customRequestHeaders:
X-Forwarded-Proto: "https"
customResponseHeaders:
X-Frame-Options: "SAMEORIGIN"

16
kubernetes/media/pv.yaml Normal file
@@ -0,0 +1,16 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: media-nfs-pv
labels:
app: media
spec:
capacity:
storage: 900Gi
accessModes:
- ReadWriteMany
nfs:
path: /media/flexdrive # Path to your NFS share
server: "${NFS_SERVER}" # IP of your NFS server (replace with correct IP)
persistentVolumeReclaimPolicy: Retain
storageClassName: manual

@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: plex-config-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi # Storage for Jellyfin config files
storageClassName: longhorn # Make sure this matches your Longhorn storage class

14
kubernetes/media/pvc.yaml Normal file
@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: media-nfs-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 900Gi
storageClassName: manual
selector:
matchLabels:
app: media

@@ -0,0 +1,6 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

@@ -0,0 +1,22 @@
apiVersion: v1
kind: Pod
metadata:
name: media-transfer-pod
spec:
restartPolicy: Never
containers:
- name: media-transfer
image: alpine # Use a lightweight image
command: ["/bin/sh", "-c", "sleep 3600"] # Keep the pod alive
volumeMounts:
- name: plex-media
mountPath: /mnt/longhorn
- name: existing-media
mountPath: /mnt/existing
volumes:
- name: plex-media
persistentVolumeClaim:
claimName: plex-media-longhorn
- name: existing-media
persistentVolumeClaim:
claimName: plex-media-pvc

@@ -0,0 +1,14 @@
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
namespace: metallb-system
name: my-ip-pool
spec:
addresses:
- 192.168.1.141-192.168.1.150
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
namespace: metallb-system
name: my-l2-advertisement

@@ -0,0 +1,83 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: portfolio-app
labels:
app: portfolio-app
spec:
replicas: 1
selector:
matchLabels:
app: portfolio-app
template:
metadata:
labels:
app: portfolio-app
spec:
imagePullSecrets:
- name: my-registry-secret
containers:
- name: portfolio-app
image: "${DOCKER_REGISTRY_HOST}/my-portfolio-app:latest"
imagePullPolicy: Always
ports:
- containerPort: 80
restartPolicy: Always
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
name: portfolio-app-svc
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
selector:
app: portfolio-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: portfolio
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
tls:
- hosts:
- "${DNSNAME}"
secretName: wildcard-cert-secret
rules:
- host: "${PORTFOLIO_HOST}"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: portfolio-app-svc
port:
number: 80
- path: /experience
pathType: Prefix
backend:
service:
name: react-app-service
port:
number: 80
- path: /interest
pathType: Prefix
backend:
service:
name: react-app-service
port:
number: 80
- path: /project
pathType: Prefix
backend:
service:
name: react-app-service
port:
number: 80

@@ -0,0 +1,5 @@
apiVersion: v2
name: pocketbase
description: A Helm chart for deploying PocketBase
version: 0.1.0
appVersion: "latest"

@@ -0,0 +1,29 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.deployment.name }}
namespace: {{ .Values.namespace }}
labels:
app: {{ .Values.deployment.labels.app }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ .Values.deployment.labels.app }}
template:
metadata:
labels:
app: {{ .Values.deployment.labels.app }}
spec:
containers:
- name: pocketbase
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
ports:
- containerPort: {{ .Values.deployment.containerPort }}
volumeMounts:
- name: {{ .Values.pvc.name }}
mountPath: {{ index .Values.deployment.volumeMounts 0 "mountPath" }}
volumes:
- name: {{ .Values.pvc.name }}
persistentVolumeClaim:
claimName: {{ .Values.persistence.name }}

@@ -0,0 +1,32 @@
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Values.ingress.name }}
namespace: {{ .Values.namespace }}
annotations:
{{- range $key, $value := .Values.ingress.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
ingressClassName: {{ .Values.ingress.className }}
rules:
- host: {{ .Values.ingress.host }}
http:
paths:
- path: {{ .Values.ingress.path }}
pathType: {{ .Values.ingress.pathType }}
backend:
service:
name: {{ .Values.service.name }}
port:
number: {{ .Values.service.port }}
{{- if .Values.ingress.tls.enabled }}
tls:
- hosts:
{{- range .Values.ingress.tls.hosts }}
- "{{ . }}"
{{- end }}
secretName: {{ .Values.ingress.tls.secretName }}
{{- end }}
{{- end }}

@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.persistence.name }}
namespace: {{ .Values.namespace }}
spec:
accessModes:
- {{ .Values.persistence.accessMode }}
resources:
requests:
storage: {{ .Values.persistence.size }}

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.service.name }}
namespace: {{ .Values.namespace }}
spec:
selector:
app: {{ .Values.deployment.labels.app }}
ports:
- protocol: TCP
port: {{ .Values.service.port }}
targetPort: {{ .Values.deployment.containerPort }}
type: {{ .Values.service.type }}

@@ -0,0 +1,43 @@
replicaCount: 1
namespace: pocketbase
deployment:
name: pocketbase
containerPort: 8090
labels:
app: pocketbase
volumeMounts:
- mountPath: /pb_data
image:
repository: ghcr.io/muchobien/pocketbase
tag: latest
pullPolicy: IfNotPresent
service:
name: pocketbase
type: ClusterIP
port: 80
ingress:
enabled: true
name: pocketbase-ingress
className: traefik
annotations: {}
host: pocketbase.example.com
path: /
pathType: Prefix
tls:
enabled: true
secretName: wildcard-cert-secret
hosts:
- "*.example.com"
persistence:
enabled: true
name: pocketbase-pvc
accessMode: ReadWriteOnce
size: 5Gi
pvc:
name: pocketbase-data

@@ -0,0 +1,105 @@
apiVersion: v1
kind: Secret
metadata:
name: pgadmin-secret
type: Opaque
stringData:
pgadmin-password: "${PGADMIN_PASSWORD}"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pgadmin-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: pgadmin
spec:
replicas: 1
selector:
matchLabels:
app: pgadmin
template:
metadata:
labels:
app: pgadmin
spec:
securityContext:
fsGroup: 5050 # pgAdmin group ID
runAsUser: 5050 # pgAdmin user ID
initContainers:
- name: init-chmod
image: busybox
command: ["sh", "-c", "chown -R 5050:5050 /var/lib/pgadmin"]
volumeMounts:
- name: pgadmin-data
mountPath: /var/lib/pgadmin
securityContext:
runAsUser: 0 # Run as root for chmod
containers:
- name: pgadmin
image: dpage/pgadmin4:latest
env:
- name: SCRIPT_NAME
value: /console
- name: PGADMIN_DEFAULT_EMAIL
value: "${PGADMIN_EMAIL}"
- name: PGADMIN_DEFAULT_PASSWORD
valueFrom:
secretKeyRef:
name: pgadmin-secret
key: pgadmin-password
ports:
- containerPort: 80
volumeMounts:
- name: pgadmin-data
mountPath: /var/lib/pgadmin
securityContext:
runAsUser: 5050 # pgAdmin user ID
runAsGroup: 5050 # pgAdmin group ID
volumes:
- name: pgadmin-data
persistentVolumeClaim:
claimName: pgadmin-pvc
---
apiVersion: v1
kind: Service
metadata:
name: pgadmin-service
spec:
type: LoadBalancer # or NodePort based on your setup
ports:
- port: 80
targetPort: 80
selector:
app: pgadmin
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: pgadmin-ingress
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
tls:
- hosts:
- "${DNSNAME}"
secretName: wildcard-cert-secret
rules:
- host: "${PGADMIN_HOST}"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: pgadmin-service
port:
number: 80

@@ -0,0 +1,31 @@
image:
registry: docker.io
repository: bitnami/postgresql
tag: "16.2.0-debian-11-r0"
global:
postgresql:
auth:
postgresPassword: "placeholder"
username: "placeholder"
password: "placeholder"
database: "postgres"
primary:
persistence:
enabled: true
size: 5Gi
service:
type: LoadBalancer
ports:
postgresql: 5432
readReplicas:
replicaCount: 1 # This plus primary makes 2 total
persistence:
enabled: true
size: 5Gi
architecture: replication
volumePermissions:
enabled: true

@@ -0,0 +1,6 @@
apiVersion: v2
name: qbittorrent
description: A Helm chart for deploying qBittorrent with WireGuard
type: application
version: 0.1.0
appVersion: "latest"

@@ -0,0 +1,19 @@
{{/*
Expand the helper functions for the qBittorrent Helm chart
*/}}
{{- define "qbittorrent.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "qbittorrent.serviceName" -}}
{{- printf "%s-service" (include "qbittorrent.fullname" .) -}}
{{- end -}}
{{- define "qbittorrent.deploymentName" -}}
{{- printf "%s-deployment" (include "qbittorrent.fullname" .) -}}
{{- end -}}
{{- define "qbittorrent.configMapName" -}}
{{- printf "%s-config" (include "qbittorrent.fullname" .) -}}
{{- end -}}

@@ -0,0 +1,20 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.persistence.configMap.name }}
namespace: {{ .Values.namespace }}
data:
wg0.conf: |
[Interface]
Address = {{ .Values.wireguard.address }}
PrivateKey = {{ .Values.wireguard.privateKey }}
MTU = {{ .Values.wireguard.mtu }}
DNS = {{ .Values.wireguard.dns }}
ListenPort = {{ .Values.wireguard.listenPort }}
[Peer]
PublicKey = {{ .Values.wireguard.peerPublicKey }}
PresharedKey = {{ .Values.wireguard.presharedKey }}
AllowedIPs = {{ .Values.wireguard.allowedIPs }}
Endpoint = {{ .Values.wireguard.endpoint }}
PersistentKeepalive = {{ .Values.wireguard.persistentKeepalive }}

@@ -0,0 +1,125 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
namespace: {{ .Values.namespace }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ .Release.Name }}
spec:
initContainers:
- name: wireguard-init
image: {{ .Values.wireguardImage.repository }}:{{ .Values.wireguardImage.tag }}
imagePullPolicy: {{ .Values.wireguardImage.pullPolicy }}
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
- SYS_MODULE
command:
- /bin/sh
- -c
- |
set -x
echo "Starting WireGuard initialization..."
mkdir -p /etc/wireguard
cp /config/wg_confs/wg0.conf /etc/wireguard/wg0.conf
chmod 600 /etc/wireguard/wg0.conf
if ! lsmod | grep -q wireguard; then
modprobe wireguard || echo "Failed to load wireguard module"
fi
wg-quick up wg0 || echo "Failed to bring up WireGuard interface"
ip link show wg0
wg show
volumeMounts:
- name: wireguard-config
mountPath: /config/wg_confs
- name: modules
mountPath: /lib/modules
containers:
- name: wireguard
image: {{ .Values.wireguardImage.repository }}:{{ .Values.wireguardImage.tag }}
imagePullPolicy: {{ .Values.wireguardImage.pullPolicy }}
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
- SYS_MODULE
env:
- name: PUID
value: "{{ .Values.config.puid }}"
- name: PGID
value: "{{ .Values.config.pgid }}"
- name: UMASK_SET
value: "{{ .Values.config.umask }}"
- name: TZ
value: "{{ .Values.config.timezone }}"
volumeMounts:
- name: wireguard-config
mountPath: /config/wg_confs
- name: modules
mountPath: /lib/modules
command:
- /bin/sh
- -c
- |
while true; do
if ! ip link show wg0 > /dev/null 2>&1; then
wg-quick up wg0
fi
sleep 30
done
ports:
- containerPort: {{ .Values.service.wireguardPort }}
protocol: UDP
- name: qbittorrent
image: {{ .Values.qbittorrentImage.repository }}:{{ .Values.qbittorrentImage.tag }}
imagePullPolicy: {{ .Values.qbittorrentImage.pullPolicy }}
env:
- name: PUID
value: "{{ .Values.config.puid }}"
- name: PGID
value: "{{ .Values.config.pgid }}"
- name: TZ
value: "{{ .Values.config.timezone }}"
- name: WEBUI_PORT
value: "{{ .Values.config.webuiPort }}"
volumeMounts:
- name: qbittorrent-config
mountPath: /config
- name: downloads
mountPath: /downloads
ports:
- containerPort: {{ .Values.deployment.containerPort }}
protocol: TCP
readinessProbe:
httpGet:
path: /
port: {{ .Values.deployment.containerPort }}
initialDelaySeconds: 10
periodSeconds: 10
failureThreshold: 3
volumes:
- name: qbittorrent-config
persistentVolumeClaim:
claimName: {{ .Values.persistence.config.name }}
- name: wireguard-config
configMap:
name: {{ .Values.persistence.configMap.name }}
- name: downloads
persistentVolumeClaim:
claimName: {{ .Values.persistence.downloads.existingClaim }}
- name: modules
hostPath:
path: /lib/modules

@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.persistence.config.name }}
namespace: {{ .Values.namespace }}
spec:
accessModes:
- {{ .Values.persistence.config.accessMode }}
resources:
requests:
storage: {{ .Values.persistence.config.size }}
storageClassName: {{ .Values.persistence.config.storageClass }}

@@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.service.name }}
namespace: {{ .Values.namespace }}
spec:
selector:
app: {{ .Release.Name }}
ports:
- protocol: TCP
name: qbittorrent
port: {{ .Values.service.port }}
targetPort: {{ .Values.deployment.containerPort }}
- protocol: UDP
name: wireguard
port: {{ .Values.service.wireguardPort }}
targetPort: {{ .Values.service.wireguardPort }}
type: {{ .Values.service.type }}

@@ -0,0 +1,60 @@
replicaCount: 1
namespace: media
deployment:
labels:
app: qbittorrent
containerPort: 8080
image:
repository: linuxserver/qbittorrent
tag: latest
pullPolicy: Always
qbittorrentImage:
repository: linuxserver/qbittorrent
tag: latest
pullPolicy: Always
wireguardImage:
repository: linuxserver/wireguard
tag: latest
pullPolicy: Always
service:
name: qbittorrent-service
type: LoadBalancer
port: 8080
wireguardPort: 51820
persistence:
config:
enabled: true
name: qbittorrent-config-pvc
accessMode: ReadWriteOnce
size: 1Gi
storageClass: longhorn
downloads:
enabled: true
existingClaim: media-nfs-pvc
configMap:
enabled: true
name: wireguard-config
config:
puid: 1000
pgid: 1000
timezone: Europe/Helsinki
umask: 022
webuiPort: 8080
wireguard:
address: 10.182.199.210/32
privateKey: WNDT2JsSZWw4q5EgsUKkBEX1hpWlpJGUTV/ibfJZOVo=
mtu: 1329
dns: 10.128.0.1
listenPort: 51820
peerPublicKey: PyLCXAQT8KkM4T+dUsOQfn+Ub3pGxfGlxkIApuig+hk=
presharedKey: jSEf0xVUv/LwLmybp+LSM21Q2VOPbWPGcI/Dc4LLGkM=
endpoint: europe3.vpn.airdns.org:1637
allowedIPs: 0.0.0.0/0
persistentKeepalive: 15

@@ -0,0 +1,46 @@
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
generation: 1
labels:
app: hog
name: hog
namespace: test
spec:
replicas: 1
selector:
matchLabels:
app: hog
template:
metadata:
creationTimestamp: null
labels:
app: hog
spec:
containers:
- image: vish/stress
imagePullPolicy: Always
name: stress
resources:
limits:
memory: "1Gi"
requests:
memory: "500Mi"
args:
- -cpus
- "2"
- -mem-total
- "1250Mi"
- -mem-alloc-size
- "100Mi"
- -mem-alloc-sleep
- "1s"
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30

@@ -0,0 +1,7 @@
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: basic-auth
spec:
basicAuth:
secret: traefik-basic-auth

@@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
name: traefik-basic-auth
type: Opaque
data:
auth: "${TRAEFIK_SECRET}"