homeserver initial commit

- ansible
- docker-compose
- Kubernetes_deployments
2025-02-12 20:10:56 +02:00
commit e5e8aa6b87
70 changed files with 2860 additions and 0 deletions

Kubernetes_deployments/.gitignore
@@ -0,0 +1,2 @@
.secrets/
.env

@@ -0,0 +1,393 @@
Set Up a K3s Kubernetes Cluster
===================================
# Configure cert-manager for Automated SSL Certificate Handling
cert-manager handles SSL certificate creation and renewal from Let's Encrypt.
```bash
helm repo add jetstack https://charts.jetstack.io --force-update
helm repo update
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.15.3 \
--set crds.enabled=true \
--set prometheus.enabled=false \
--set webhook.timeoutSeconds=4
```
Next, deploy the certificate issuer. Issuers and ClusterIssuers are Kubernetes
resources that represent certificate authorities (CAs) able to generate signed
certificates by honoring certificate signing requests. Every cert-manager
certificate requires a referenced issuer in a ready condition to attempt to
honor the request ([Ref](https://cert-manager.io/docs/concepts/issuer/)).
The ClusterIssuer template is in the cert-manager directory. A single wildcard
certificate is created and used for all ingress subdomains; its secret is then
copied manually to every namespace that needs it.
First, add upstream DNS servers to the CoreDNS config:
```bash
export KUBE_EDITOR=nvim
# Change the forward line to: forward . 1.1.1.1 1.0.0.1
kubectl -n kube-system edit configmap coredns
```
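For reference, this is the only line that needs to change inside the Corefile
(k3s ships with `forward . /etc/resolv.conf` by default):
```
# before
forward . /etc/resolv.conf
# after
forward . 1.1.1.1 1.0.0.1
```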
Next, deploy the ClusterIssuer, wildcard Certificate, and secrets using Helm:
```bash
source .env
helm install cert-handler cert-manager-helm-chart \
--atomic --set secret.apiToken=$CLOUDFLARE_TOKEN \
--set clusterIssuer.email=$EMAIL \
--set wildcardCert.dnsNames[0]=$DNSNAME
# Copy the wildcard certificate to other namespaces
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: <namespace>/' | kubectl apply -f -
```
If the certificate secret `wildcard-cert-secret` is not generated, common
causes include a wrong Cloudflare API token, a missing token secret, or an
Issuer/ClusterIssuer that is not ready. Some troubleshooting commands:
```bash
kubectl get clusterissuer
kubectl describe clusterissuer
kubectl get certificate -n nginx-test
kubectl get certificaterequests -n nginx-test
kubectl describe challenges -n cert-manager
kubectl describe orders -n cert-manager
```
# Deploy Private Docker Registry
Create a new namespace called docker-registry and deploy the private
docker-registry.
First, create Docker credentials with htpasswd (the file path matches the
secret created below):
```bash
htpasswd -cB .secrets/registry-passwords USERNAME
kubectl create namespace docker-registry
kubectl create secret generic registry-credentials \
--from-file=.secrets/registry-passwords \
-n docker-registry
```
Next, deploy the Docker registry with its Helm chart:
```bash
source .env
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: docker-registry/' \
| kubectl apply -f -
helm install registry docker-registry-helm-chart/ \
--set host=$DOCKER_REGISTRY_HOST \
--set ingress.tls.host=$DNSNAME \
--atomic
```
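A quick way to verify the registry and its htpasswd auth from a workstation
(assumes the Docker CLI is installed; the image name is just an example):
```bash
docker login $DOCKER_REGISTRY_HOST -u USERNAME
docker tag alpine:latest $DOCKER_REGISTRY_HOST/alpine:test
docker push $DOCKER_REGISTRY_HOST/alpine:test
```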
# Deploy Portfolio Website from Private Docker Registry
First, copy the wildcard certificate secret into the namespace, then create a
secret for pulling images from the private Docker registry and deploy the
portfolio web app.
```bash
kubectl create namespace my-portfolio
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: my-portfolio/' | kubectl apply -f -
source .env
kubectl create secret docker-registry my-registry-secret \
--docker-server="${DOCKER_REGISTRY_HOST}" \
--docker-username="${DOCKER_USER}" \
--docker-password="${DOCKER_PASSWORD}" \
-n my-portfolio
# use envsubst to substitute the environment variables in the manifest
envsubst < my-portfolio/portfolioManifest.yaml | \
kubectl apply -n my-portfolio -f -
```
# Expose External Services via Traefik Ingress Controller
External services hosted outside the Kubernetes cluster can be exposed through
the cluster's Traefik reverse proxy.
An nginx HTTP server is deployed as a proxy that listens on port 80 and
forwards requests to the Proxmox local IP address. The server has an
associated ClusterIP service which is exposed via ingress. The nginx proxy can
be configured to listen on other ports and forward traffic to other external
services running locally or remotely; a sketch follows the commands below.
```bash
source .env
kubectl create namespace external-services
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: external-services/' | kubectl apply -f -
envsubst < external-service/proxmox.yaml | kubectl apply -n external-services -f -
```
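For example, forwarding a second local service would only need one more server
block in the nginx ConfigMap, plus a matching Service port and IngressRoute (a
sketch; the backend address and port are placeholders):
```nginx
# Hypothetical extra server block for the proxmox-proxy-config ConfigMap
server {
    listen 8080;
    location / {
        proxy_pass http://192.168.1.50:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```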
# Create Shared NFS Storage for Plex and Jellyfin
A 1TB NVMe SSD is mounted on one of the original homelab VMs. It is exported
over NFS so that all K3s nodes can use it as shared storage for the Plex and
Jellyfin containers.
## On the host VM:
```bash
sudo apt update
sudo apt install nfs-kernel-server
sudo chown nobody:nogroup /media/flexdrive
# Configure mount on /etc/fstab to persist across reboot
sudo vim /etc/fstab
# Add the following line. Change the filesystem type if it is not ntfs
# /dev/sdb2 /media/flexdrive ntfs defaults 0 2
# Configure NFS exports by editing the NFS exports file
sudo vim /etc/exports
# Add the following line to the file
# /media/flexdrive 192.168.1.113/24(rw,sync,no_subtree_check,no_root_squash)
# Apply the exports config
sudo exportfs -ra
# Start and enable NFS Server
sudo systemctl start nfs-kernel-server
sudo systemctl enable nfs-kernel-server
```
## On all the K3s VMs:
```bash
sudo apt install nfs-common
sudo mkdir /mnt/media
sudo mount 192.168.1.113:/media/flexdrive /mnt/media
# And test if the contents are visible
# After that, unmount with the following command, as mounting will be handled
# by k8s
sudo umount /mnt/media
```
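The export can also be checked from any of the clients; `showmount` (from the
nfs-common package) lists what the server exposes:
```bash
showmount -e 192.168.1.113
```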
# Deploy Jellyfin Container in K3s
Jellyfin is a media server used to organize, play, and stream audio and video
files. The Jellyfin container is deployed in the K3s cluster using the NFS
shared storage for media files. Because the media manifests are kept as
separate, standalone files, they have not been packaged into a Helm chart.
```bash
source .env
kubectl create namespace media
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: media/' | kubectl apply -f -
# Create a new storageclass called manual to not use longhorn storageclass
kubectl apply -f media/storageclass-nfs.yaml
# Create NFS PV and PVC
envsubst < media/pv.yaml | kubectl apply -n media -f -
kubectl apply -f media/pvc.yaml -n media
# Deploy Jellyfin
envsubst < media/jellyfin-deploy.yaml | kubectl apply -n media -f -
```
## Transfer media files from one PVC to another (Optional)
To transfer media files between PVCs, create a temporary pod that mounts both
volumes. The following commands create such a pod in the media namespace and
copy the files:
```bash
# Create a temporary pod to copy files from one PVC to another
kubectl apply -f temp-deploy.yaml -n media
# Copy files from one PVC to another
kubectl exec -it media-transfer-pod -n media -- sh
cp -r /mnt/existing/* /mnt/longhorn/
```
# Create Longhorn Storage Solution
```bash
# On each VM
sudo mkfs.ext4 /dev/sda4
sudo mkdir /mnt/longhorn
sudo mount /dev/sda4 /mnt/longhorn
helm repo add longhorn https://charts.longhorn.io
helm repo update
kubectl create namespace longhorn-system
helm install longhorn longhorn/longhorn --namespace longhorn-system
kubectl -n longhorn-system get pods
# Access longhorn UI
kubectl -n longhorn-system port-forward svc/longhorn-frontend 8080:80
# If the /mnt/longhorn is not shown
kubectl -n longhorn-system get nodes.longhorn.io
kubectl -n longhorn-system edit nodes.longhorn.io <node-name>
```
Add the following block under `disks` for all nodes:
```yaml
custom-disk-mnt-longhorn: # New disk for /mnt/longhorn
  allowScheduling: true
  diskDriver: ""
  diskType: filesystem
  evictionRequested: false
  path: /mnt/longhorn # Specify the new mount path
  storageReserved: 0 # Adjust storageReserved if needed
  tags: []
```
Finally, set the replica count to 1 in the Longhorn storage class:
```bash
kubectl edit configmap -n longhorn-system longhorn-storageclass
# Set numberOfReplicas: "1"
```
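Note that the `/mnt/longhorn` mount above does not survive a reboot on its
own; an `/etc/fstab` entry on each VM takes care of that (adjust the device
name to match the setup):
```bash
# /etc/fstab
/dev/sda4 /mnt/longhorn ext4 defaults 0 2
```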
# Configure AdGuard Adblocker
AdGuard is deployed in the K3s cluster for network-wide ad blocking. A
LoadBalancer service is used for DNS resolution, and a ClusterIP service plus
ingress for the web UI.
The initial AdGuard admin port, 3000, is bound to the LoadBalancer IP on the
local network, while the AdGuard UI is reachable from the ingress domain on
the internet.
```bash
kubectl create namespace adguard
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: adguard/' | kubectl apply -f -
source .env
helm install adguard \
--set host=$ADGUARD_HOST \
--atomic adguard-helm-chart
```
# Pocketbase Database and Authentication Backend
Pocketbase serves as the database and authentication backend for
various side projects.
```bash
# Create namespace and copy the wildcard cert secret
kubectl create namespace pocketbase
kubectl get secret wildcard-cert-secret --namespace=cert-manager -o yaml \
| sed 's/namespace: cert-manager/namespace: pocketbase/' | kubectl apply -f -
# Deploy pocketbase using helm chart
helm install pocketbase \
--set ingress.host=$POCKETBASE_HOST \
--set ingress.tls.hosts[0]=$DNSNAME \
--atomic pocketbase-helm-chart
```
It may be necessary to create the initial superuser credentials. To do that,
exec into the pod and run:
```bash
pocketbase superuser create email password
```
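For example (assumes the deployment is named `pocketbase` as in the chart
values and that the binary is on the container's PATH; the email and password
are placeholders):
```bash
kubectl exec -it -n pocketbase deploy/pocketbase -- \
pocketbase superuser create admin@example.com CHANGEME
```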
# qBittorrent with Wireguard
qBittorrent is deployed with wireguard to route traffic through a VPN tunnel.
The following packages must be installed on each node:
```bash
# On each k3s node
sudo apt update
sudo apt install -y wireguard wireguard-tools linux-headers-$(uname -r)
```
qBittorrent is deployed via a Helm chart. The deployment uses the common
`media-nfs-pv` NFS PVC for downloads, and the chart contains both qBittorrent
and WireGuard. For security, qBittorrent is not exposed outside the network
via ingress; it is only accessible locally via the LoadBalancer IP address.
```bash
helm install qbittorrent qbittorrent-helm-chart/ --atomic
```
After deployment, verify qBittorrent is reachable on the LoadBalancer IP and
port. Log in to the qBittorrent UI with the default credentials from the
deployment log, change the user settings under Settings/WebUI, bind qBittorrent
to the network interface (wg0) under Settings/Advanced, and set download/upload
limits under Settings/Speed.
Also verify the VPN is working by running the following command in the
qBittorrent pod:
```bash
curl ipinfo.io
```
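One way to run it without opening a shell (assumes the release is installed as
`qbittorrent`, so the deployment carries that name, and that curl is present
in the container image):
```bash
kubectl exec -n media deploy/qbittorrent -c qbittorrent -- curl -s ipinfo.io
```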
# PostgreSQL Database
The PostgreSQL database uses the Bitnami PostgreSQL Helm chart with one
primary and one read-replica StatefulSet, two Postgres pods in total.
```bash
# Add the Bitnami repo if not already added
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# Install PostgreSQL with these values
source .env
helm install my-postgres \
bitnami/postgresql -f values.yaml \
--set global.postgresql.auth.username=$POSTGRES_USER \
--set global.postgresql.auth.password=$POSTGRES_PASSWORD \
--set global.postgresql.auth.postgresPassword=$POSTGRES_PASSWORD \
--atomic \
-n postgres
```
## Connect to the Database
```bash
psql -U $POSTGRES_USER -d postgres --host 192.168.1.145 -p 5432
```
## Backup and Restore PostgreSQL Database
```bash
# To backup
# Dump format is compressed and allows parallel restore
pg_dump -U $POSTGRES_USER -h 192.168.1.145 -p 5432 -F c -f db_backup.dump postgres
# To restore
pg_restore -U $POSTGRES_USER -h 192.168.1.145 -p 5432 -d postgres db_backup.dump
```
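Because the custom dump format supports parallel restore, `-j` can speed up
large restores (the job count of 4 is arbitrary):
```bash
pg_restore -U $POSTGRES_USER -h 192.168.1.145 -p 5432 -d postgres -j 4 db_backup.dump
```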
## pgAdmin
pgAdmin provides a GUI for PostgreSQL database management. Deploy it using the
pgadmin.yaml manifest under the postgres directory; the environment variables
are substituted from the .env file.
```bash
source .env
envsubst < postgres/pgadmin.yaml | kubectl apply -n postgres -f -
```

@@ -0,0 +1,6 @@
apiVersion: v2
name: adguard
description: A Helm chart for deploying AdGuard Home
type: application
version: 0.1.0
appVersion: "latest"

@@ -0,0 +1,37 @@
{{/*
Expand the name of the release
*/}}
{{- define "adguard.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels for all resources
*/}}
{{- define "adguard.labels" -}}
app: {{ include "adguard.fullname" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- end -}}
{{/*
Generate a name for the PVC
*/}}
{{- define "adguard.pvcName" -}}
{{ printf "%s-pvc" (include "adguard.fullname" .) }}
{{- end -}}
{{/*
Generate a name for the service
*/}}
{{- define "adguard.serviceName" -}}
{{ printf "%s-service" (include "adguard.fullname" .) }}
{{- end -}}
{{/*
Generate a name for the ingress
*/}}
{{- define "adguard.ingressName" -}}
{{ printf "%s-ingress" (include "adguard.fullname" .) }}
{{- end -}}

@@ -0,0 +1,44 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}-home
namespace: {{ .Values.namespace }}
labels:
app: {{ .Values.deployment.labels.app }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ .Values.deployment.labels.app }}
template:
metadata:
labels:
app: {{ .Values.deployment.labels.app }}
spec:
containers:
- name: {{ .Values.deployment.labels.app }}
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
ports:
- containerPort: {{ .Values.deployment.adminContainerPort }}
protocol: TCP
name: admin
- containerPort: {{ .Values.deployment.httpContainerPort }}
name: http
protocol: TCP
- containerPort: {{ .Values.deployment.dns.tcp }}
name: dns-tcp
protocol: TCP
- containerPort: {{ .Values.deployment.dns.udp }}
name: dns-udp
protocol: UDP
volumeMounts:
- name: adguard-config
mountPath: /opt/adguardhome/conf
- name: adguard-work
mountPath: /opt/adguardhome/work
volumes:
- name: adguard-config
persistentVolumeClaim:
claimName: {{ .Values.pvc.claimName }}
- name: adguard-work
emptyDir: {}

@@ -0,0 +1,27 @@
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}-ingress
namespace: {{ .Values.namespace }}
annotations:
{{- toYaml .Values.ingress.annotations | nindent 4 }}
spec:
rules:
- host: {{ .Values.host }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}-web-ui
port:
number: {{ .Values.service.port }}
{{- if .Values.ingress.tls.enabled }}
tls:
- hosts:
- {{ .Values.host }}
secretName: {{ .Values.ingress.tls.secretName }}
{{- end }}
{{- end }}

@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Release.Name }}-pvc
namespace: {{ .Values.namespace }}
spec:
accessModes:
- {{ .Values.pvc.accessModes }}
resources:
requests:
storage: {{ .Values.pvc.size }}
storageClassName: {{ .Values.pvc.storageClass }}

@@ -0,0 +1,37 @@
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-service
namespace: {{ .Values.namespace }}
spec:
selector:
app: {{ .Release.Name }}-home
ports:
- port: {{ .Values.service.dnsPort.udp }}
targetPort: {{ .Values.service.dnsPort.udp }}
protocol: UDP
name: dns-udp
- port: {{ .Values.service.dnsPort.tcp }}
targetPort: {{ .Values.service.dnsPort.tcp }}
protocol: TCP
name: dns-tcp
- port: {{ .Values.service.port }}
targetPort: {{ .Values.service.adminTargetPort }}
protocol: TCP
name: admin
type: {{ .Values.service.type }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-web-ui
namespace: {{ .Values.namespace }}
spec:
selector:
app: {{ .Release.Name }}-home
ports:
- port: {{ .Values.service.port }}
targetPort: {{ .Values.service.webUiPort }}
protocol: TCP
name: http

@@ -0,0 +1,58 @@
apiVersion: v1
appVersion: "latest"
description: A Helm chart for deploying AdGuard Home
name: adguard-home
namespace: adguard
version: 0.1.0
replicaCount: 1
host: adguard.example.com # Change this to your domain
deployment:
adminContainerPort: 3000
httpContainerPort: 80
dns:
tcp: 53
udp: 53
labels:
app: adguard-home
image:
repository: adguard/adguardhome
tag: latest
pullPolicy: IfNotPresent
service:
type: LoadBalancer
port: 80
adminTargetPort: 3000
webUiPort: 80
dnsPort:
udp: 53
tcp: 53
ingress:
enabled: true
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/affinity: "true"
traefik.ingress.kubernetes.io/session-cookie-name: "SESSIONID"
hosts:
- host:
paths:
- /
tls:
enabled: true
secretName: wildcard-cert-secret
pvc:
claimName: adguard-pvc
enabled: true
storageClass: longhorn
accessModes: ReadWriteOnce
size: 1Gi
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}

@@ -0,0 +1,5 @@
apiVersion: v2
name: cert-manager
description: A Helm chart for cert-manager
version: 0.1.0
appVersion: "v1.11.0"

@@ -0,0 +1,18 @@
# filepath: /home/taqi/homeserver/k3s-infra/cert-manager/templates/clusterIssuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: {{ .Values.clusterIssuer.name }}
namespace: {{ .Values.namespace }}
spec:
acme:
server: {{ .Values.clusterIssuer.server }}
privateKeySecretRef:
name: {{ .Values.clusterIssuer.privateKeySecretRef }}
solvers:
- dns01:
cloudflare:
email: {{ .Values.clusterIssuer.email }}
apiTokenSecretRef:
name: {{ .Values.clusterIssuer.apiTokenSecretRef.name }}
key: {{ .Values.clusterIssuer.apiTokenSecretRef.key }}

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ .Values.secret.name }}
namespace: {{ .Values.namespace }}
type: Opaque
data:
api-token: {{ .Values.secret.apiToken }}

@@ -0,0 +1,14 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ .Values.wildcardCert.name }}
namespace: {{ .Values.namespace }}
spec:
secretName: {{ .Values.wildcardCert.secretName }}
issuerRef:
name: {{ .Values.clusterIssuer.name }}
kind: ClusterIssuer
dnsNames:
{{- range .Values.wildcardCert.dnsNames }}
- "{{ . }}"
{{- end }}

@@ -0,0 +1,21 @@
namespace: cert-manager
clusterIssuer:
name: acme-issuer
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef: example-issuer-account-key
email: EMAIL
apiTokenSecretRef:
name: cloudflare-api-token-secret
key: api-token
wildcardCert:
name: wildcard-cert
secretName: wildcard-cert-secret
dnsNames:
- "*.example.com"
secret:
type: Opaque
name: cloudflare-api-token-secret
apiToken: base64encodedtoken

@@ -0,0 +1,5 @@
apiVersion: v2
name: docker-registry-helm-chart
description: A Helm chart for deploying a Docker registry
version: 0.1.0
appVersion: "latest"

@@ -0,0 +1,49 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}
namespace: {{ .Values.namespace }}
spec:
replicas: {{ .Values.deployment.replicas }}
selector:
matchLabels:
app: {{ .Values.name }}
template:
metadata:
labels:
app: {{ .Values.name }}
spec:
containers:
- name: {{ .Values.name }}
image: {{ .Values.deployment.image }}
ports:
- containerPort: {{ .Values.deployment.containerPort }}
env:
- name: REGISTRY_AUTH
value: "htpasswd"
- name: REGISTRY_AUTH_HTPASSWD_REALM
value: "Registry Realm"
- name: REGISTRY_AUTH_HTPASSWD_PATH
value: "/auth/registry-passwords"
- name: REGISTRY_AUTH_HTPASSWD_FILE
value: "/auth/registry-passwords"
- name: REGISTRY_HTTP_HEADERS
value: |
Access-Control-Allow-Origin: ['{{ .Values.uiDomain }}']
Access-Control-Allow-Methods: ['HEAD', 'GET', 'OPTIONS', 'DELETE', 'POST', 'PUT']
Access-Control-Allow-Headers: ['Authorization', 'Accept', 'Content-Type', 'X-Requested-With', 'Cache-Control']
Access-Control-Max-Age: [1728000]
Access-Control-Allow-Credentials: [true]
Access-Control-Expose-Headers: ['Docker-Content-Digest']
volumeMounts:
- name: {{ .Values.deployment.registryStorageVolumeName }}
mountPath: /var/lib/registry
- name: {{ .Values.deployment.authStorageVolumeName }}
mountPath: /auth
volumes:
- name: {{ .Values.deployment.registryStorageVolumeName }}
persistentVolumeClaim:
claimName: {{ .Values.pvc.claimName }}
- name: {{ .Values.deployment.authStorageVolumeName }}
secret:
secretName: {{ .Values.credentialSecret.name }}

@@ -0,0 +1,29 @@
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Values.name }}-ingress
namespace: {{ .Values.namespace }}
annotations:
{{- range $key, $value := .Values.ingress.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
rules:
- host: "{{ .Values.host }}"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ .Values.name }}-service
port:
number: 5000
{{- if .Values.ingress.tls.enabled }}
tls:
- hosts:
- "{{ .Values.ingress.tls.host }}"
secretName: {{ .Values.ingress.tls.secretName }}
{{- end }}
{{- end }}

@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvc.claimName }}
namespace: {{ .Values.namespace }}
spec:
accessModes:
- {{ .Values.pvc.accessMode }}
resources:
requests:
storage: {{ .Values.pvc.size }}

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.name }}-service
namespace: {{ .Values.namespace }}
spec:
selector:
app: {{ .Values.name }}
ports:
- protocol: TCP
port: {{ .Values.service.port }}
targetPort: {{ .Values.deployment.containerPort }}
type: {{ .Values.service.type }}

@@ -0,0 +1,34 @@
name: registry
namespace: docker-registry
storage: 5Gi
host: registry.example.com
deployment:
replicas: 1
containerPort: 5000
image: registry:2
registryStorageVolumeName: registry-storage
authStorageVolumeName: auth-storage
ingress:
enabled: true
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
tls:
enabled: true
host: "*.example.com"
secretName: wildcard-cert-secret
service:
type: ClusterIP
port: 5000
pvc:
claimName: registry-pvc
enabled: true
storageClass: longhorn
accessMode: ReadWriteOnce
size: 5Gi
credentialSecret:
name: registry-credentials

@@ -0,0 +1,79 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: proxmox-proxy
namespace: external-services
spec:
replicas: 1
selector:
matchLabels:
app: proxmox-proxy
template:
metadata:
labels:
app: proxmox-proxy
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
volumes:
- name: nginx-config
configMap:
name: proxmox-proxy-config
---
apiVersion: v1
kind: Service
metadata:
name: proxmox-proxy
namespace: external-services
spec:
selector:
app: proxmox-proxy
ports:
- port: 80
targetPort: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
name: proxmox-proxy-config
namespace: external-services
data:
nginx.conf: |
events {}
http {
server {
listen 80;
location / {
proxy_pass https://${PROXMOX_IP};
proxy_ssl_verify off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
}
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: proxmox-route
namespace: external-services
spec:
entryPoints:
- websecure
routes:
- match: Host(`${PROXMOX_HOST}`)
kind: Rule
services:
- name: proxmox-proxy
port: 80
tls:
secretName: wildcard-cert-secret

@@ -0,0 +1,103 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: jellyfin-network-config
data:
network.xml: |
<?xml version="1.0" encoding="utf-8"?>
<NetworkConfiguration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<BaseUrl>/</BaseUrl>
<EnableHttps>true</EnableHttps>
<RequireHttps>true</RequireHttps>
<InternalHttpPort>8096</InternalHttpPort>
<InternalHttpsPort>8920</InternalHttpsPort>
<PublicHttpPort>80</PublicHttpPort>
<PublicHttpsPort>443</PublicHttpsPort>
<EnableRemoteAccess>true</EnableRemoteAccess>
<EnablePublishedServerUriByRequest>true</EnablePublishedServerUriByRequest>
<PublishedServerUri>https://${JELLYFIN_HOST}</PublishedServerUri>
</NetworkConfiguration>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: jellyfin
spec:
replicas: 1
selector:
matchLabels:
app: jellyfin
template:
metadata:
labels:
app: jellyfin
spec:
containers:
- name: jellyfin
image: jellyfin/jellyfin:latest
ports:
- containerPort: 8096
volumeMounts:
- name: plex-media
mountPath: /media
- name: config
mountPath: /config
- name: network-config
mountPath: /config/config/network.xml
subPath: network.xml
volumes:
- name: plex-media
persistentVolumeClaim:
claimName: media-nfs-pvc
- name: config
persistentVolumeClaim:
claimName: plex-config-pvc
- name: network-config
configMap:
name: jellyfin-network-config
---
apiVersion: v1
kind: Service
metadata:
name: jellyfin-service
spec:
selector:
app: jellyfin
ports:
- protocol: TCP
port: 8096
targetPort: 8096
type: ClusterIP
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: jellyfin-ingress
annotations:
traefik.ingress.kubernetes.io/router.middlewares: jellyfin-headers@kubernetescrd
spec:
entryPoints:
- websecure
routes:
- match: Host(`${JELLYFIN_HOST}`)
kind: Rule
services:
- name: jellyfin-service
port: 8096
tls:
secretName: wildcard-cert-secret
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: jellyfin-headers
spec:
headers:
customRequestHeaders:
X-Forwarded-Proto: "https"
customResponseHeaders:
X-Frame-Options: "SAMEORIGIN"

@@ -0,0 +1,16 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: media-nfs-pv
labels:
app: media
spec:
capacity:
storage: 900Gi
accessModes:
- ReadWriteMany
nfs:
path: /media/flexdrive # Path to your NFS share
server: "${NFS_SERVER}" # IP of your NFS server (replace with correct IP)
persistentVolumeReclaimPolicy: Retain
storageClassName: manual

@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: plex-config-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi # Storage for Jellyfin config files
storageClassName: longhorn # Make sure this matches your Longhorn storage class

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: media-nfs-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 900Gi
storageClassName: manual
selector:
matchLabels:
app: media

@@ -0,0 +1,6 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

@@ -0,0 +1,22 @@
apiVersion: v1
kind: Pod
metadata:
name: media-transfer-pod
spec:
restartPolicy: Never
containers:
- name: media-transfer
image: alpine # Use a lightweight image
command: ["/bin/sh", "-c", "sleep 3600"] # Keep the pod alive
volumeMounts:
- name: plex-media
mountPath: /mnt/longhorn
- name: existing-media
mountPath: /mnt/existing
volumes:
- name: plex-media
persistentVolumeClaim:
claimName: plex-media-longhorn
- name: existing-media
persistentVolumeClaim:
claimName: plex-media-pvc

@@ -0,0 +1,14 @@
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
namespace: metallb-system
name: my-ip-pool
spec:
addresses:
- 192.168.1.141-192.168.1.150
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
namespace: metallb-system
name: my-l2-advertisement

@@ -0,0 +1,83 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: portfolio-app
labels:
app: portfolio-app
spec:
replicas: 1
selector:
matchLabels:
app: portfolio-app
template:
metadata:
labels:
app: portfolio-app
spec:
imagePullSecrets:
- name: my-registry-secret
containers:
- name: portfolio-app
image: "${DOCKER_REGISTRY_HOST}/my-portfolio-app:latest"
imagePullPolicy: Always
ports:
- containerPort: 80
restartPolicy: Always
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
name: portfolio-app-svc
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
selector:
app: portfolio-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: portfolio
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
tls:
- hosts:
- "${DNSNAME}"
secretName: wildcard-cert-secret
rules:
- host: "${PORTFOLIO_HOST}"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: portfolio-app-svc
port:
number: 80
- path: /experience
pathType: Prefix
backend:
service:
name: react-app-service
port:
number: 80
- path: /interest
pathType: Prefix
backend:
service:
name: react-app-service
port:
number: 80
- path: /project
pathType: Prefix
backend:
service:
name: react-app-service
port:
number: 80

@@ -0,0 +1,5 @@
apiVersion: v2
name: pocketbase
description: A Helm chart for deploying PocketBase
version: 0.1.0
appVersion: "latest"

@@ -0,0 +1,29 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.deployment.name }}
namespace: {{ .Values.namespace }}
labels:
app: {{ .Values.deployment.labels.app }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ .Values.deployment.labels.app }}
template:
metadata:
labels:
app: {{ .Values.deployment.labels.app }}
spec:
containers:
- name: pocketbase
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
ports:
- containerPort: {{ .Values.deployment.containerPort }}
volumeMounts:
- name: {{ .Values.pvc.name }}
mountPath: {{ index .Values.deployment.volumeMounts 0 "mountPath" }}
volumes:
- name: {{ .Values.pvc.name }}
persistentVolumeClaim:
claimName: {{ .Values.persistence.name }}

@@ -0,0 +1,32 @@
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Values.ingress.name }}
namespace: {{ .Values.namespace }}
annotations:
{{- range $key, $value := .Values.ingress.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
ingressClassName: {{ .Values.ingress.className }}
rules:
- host: {{ .Values.ingress.host }}
http:
paths:
- path: {{ .Values.ingress.path }}
pathType: {{ .Values.ingress.pathType }}
backend:
service:
name: {{ .Values.service.name }}
port:
number: {{ .Values.service.port }}
{{- if .Values.ingress.tls.enabled }}
tls:
- hosts:
{{- range .Values.ingress.tls.hosts }}
- "{{ . }}"
{{- end }}
secretName: {{ .Values.ingress.tls.secretName }}
{{- end }}
{{- end }}

@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.persistence.name }}
namespace: {{ .Values.namespace }}
spec:
accessModes:
- {{ .Values.persistence.accessMode }}
resources:
requests:
storage: {{ .Values.persistence.size }}

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.service.name }}
namespace: {{ .Values.namespace }}
spec:
selector:
app: {{ .Values.deployment.labels.app }}
ports:
- protocol: TCP
port: {{ .Values.service.port }}
targetPort: {{ .Values.deployment.containerPort }}
type: {{ .Values.service.type }}

@@ -0,0 +1,43 @@
replicaCount: 1
namespace: pocketbase
deployment:
name: pocketbase
containerPort: 8090
labels:
app: pocketbase
volumeMounts:
- mountPath: /pb_data
image:
repository: ghcr.io/muchobien/pocketbase
tag: latest
pullPolicy: IfNotPresent
service:
name: pocketbase
type: ClusterIP
port: 80
ingress:
enabled: true
name: pocketbase-ingress
className: traefik
annotations: {}
host: pocketbase.example.com
path: /
pathType: Prefix
tls:
enabled: true
secretName: wildcard-cert-secret
hosts:
- "*.example.com"
persistence:
enabled: true
name: pocketbase-pvc
accessMode: ReadWriteOnce
size: 5Gi
pvc:
name: pocketbase-data

@@ -0,0 +1,105 @@
apiVersion: v1
kind: Secret
metadata:
name: pgadmin-secret
type: Opaque
stringData:
pgadmin-password: "${PGADMIN_PASSWORD}"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pgadmin-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: pgadmin
spec:
replicas: 1
selector:
matchLabels:
app: pgadmin
template:
metadata:
labels:
app: pgadmin
spec:
securityContext:
fsGroup: 5050 # pgAdmin group ID
runAsUser: 5050 # pgAdmin user ID
initContainers:
- name: init-chmod
image: busybox
command: ["sh", "-c", "chown -R 5050:5050 /var/lib/pgadmin"]
volumeMounts:
- name: pgadmin-data
mountPath: /var/lib/pgadmin
securityContext:
runAsUser: 0 # Run as root for chmod
containers:
- name: pgadmin
image: dpage/pgadmin4:latest
env:
- name: SCRIPT_NAME
value: /console
- name: PGADMIN_DEFAULT_EMAIL
value: "${PGADMIN_EMAIL}"
- name: PGADMIN_DEFAULT_PASSWORD
valueFrom:
secretKeyRef:
name: pgadmin-secret
key: pgadmin-password
ports:
- containerPort: 80
volumeMounts:
- name: pgadmin-data
mountPath: /var/lib/pgadmin
securityContext:
runAsUser: 5050 # pgAdmin user ID
runAsGroup: 5050 # pgAdmin group ID
volumes:
- name: pgadmin-data
persistentVolumeClaim:
claimName: pgadmin-pvc
---
apiVersion: v1
kind: Service
metadata:
name: pgadmin-service
spec:
type: LoadBalancer # or NodePort based on your setup
ports:
- port: 80
targetPort: 80
selector:
app: pgadmin
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: pgadmin-ingress
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
tls:
- hosts:
- "${DNSNAME}"
secretName: wildcard-cert-secret
rules:
- host: "${PGADMIN_HOST}"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: pgadmin-service
port:
number: 80

@@ -0,0 +1,31 @@
image:
registry: docker.io
repository: bitnami/postgresql
tag: "16.2.0-debian-11-r0"
global:
postgresql:
auth:
postgresPassword: "placeholder"
username: "placeholder"
password: "placeholder"
database: "postgres"
primary:
persistence:
enabled: true
size: 5Gi
service:
type: LoadBalancer
ports:
postgresql: 5432
readReplicas:
replicaCount: 1 # This plus primary makes 2 total
persistence:
enabled: true
size: 5Gi
architecture: replication
volumePermissions:
enabled: true

@@ -0,0 +1,6 @@
apiVersion: v2
name: qbittorrent
description: A Helm chart for deploying qBittorrent with WireGuard
type: application
version: 0.1.0
appVersion: "latest"

@@ -0,0 +1,19 @@
{{/*
Expand the helper functions for the qBittorrent Helm chart
*/}}
{{- define "qbittorrent.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "qbittorrent.serviceName" -}}
{{- printf "%s-service" (include "qbittorrent.fullname" .) -}}
{{- end -}}
{{- define "qbittorrent.deploymentName" -}}
{{- printf "%s-deployment" (include "qbittorrent.fullname" .) -}}
{{- end -}}
{{- define "qbittorrent.configMapName" -}}
{{- printf "%s-config" (include "qbittorrent.fullname" .) -}}
{{- end -}}

@@ -0,0 +1,20 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.persistence.configMap.name }}
namespace: {{ .Values.namespace }}
data:
wg0.conf: |
[Interface]
Address = {{ .Values.wireguard.address }}
PrivateKey = {{ .Values.wireguard.privateKey }}
MTU = {{ .Values.wireguard.mtu }}
DNS = {{ .Values.wireguard.dns }}
ListenPort = {{ .Values.wireguard.listenPort }}
[Peer]
PublicKey = {{ .Values.wireguard.peerPublicKey }}
PresharedKey = {{ .Values.wireguard.presharedKey }}
AllowedIPs = {{ .Values.wireguard.allowedIPs }}
Endpoint = {{ .Values.wireguard.endpoint }}
PersistentKeepalive = {{ .Values.wireguard.persistentKeepalive }}

@@ -0,0 +1,125 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
namespace: {{ .Values.namespace }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ .Release.Name }}
spec:
initContainers:
- name: wireguard-init
image: {{ .Values.wireguardImage.repository }}:{{ .Values.wireguardImage.tag }}
imagePullPolicy: {{ .Values.wireguardImage.pullPolicy }}
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
- SYS_MODULE
command:
- /bin/sh
- -c
- |
set -x
echo "Starting WireGuard initialization..."
mkdir -p /etc/wireguard
cp /config/wg_confs/wg0.conf /etc/wireguard/wg0.conf
chmod 600 /etc/wireguard/wg0.conf
if ! lsmod | grep -q wireguard; then
modprobe wireguard || echo "Failed to load wireguard module"
fi
wg-quick up wg0 || echo "Failed to bring up WireGuard interface"
ip link show wg0
wg show
volumeMounts:
- name: wireguard-config
mountPath: /config/wg_confs
- name: modules
mountPath: /lib/modules
containers:
- name: wireguard
image: {{ .Values.wireguardImage.repository }}:{{ .Values.wireguardImage.tag }}
imagePullPolicy: {{ .Values.wireguardImage.pullPolicy }}
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
- SYS_MODULE
env:
- name: PUID
value: "{{ .Values.config.puid }}"
- name: PGID
value: "{{ .Values.config.pgid }}"
- name: UMASK_SET
value: "{{ .Values.config.umask }}"
- name: TZ
value: "{{ .Values.config.timezone }}"
volumeMounts:
- name: wireguard-config
mountPath: /config/wg_confs
- name: modules
mountPath: /lib/modules
command:
- /bin/sh
- -c
- |
while true; do
if ! ip link show wg0 > /dev/null 2>&1; then
wg-quick up wg0
fi
sleep 30
done
ports:
- containerPort: {{ .Values.service.wireguardPort }}
protocol: UDP
- name: qbittorrent
image: {{ .Values.qbittorrentImage.repository }}:{{ .Values.qbittorrentImage.tag }}
imagePullPolicy: {{ .Values.qbittorrentImage.pullPolicy }}
env:
- name: PUID
value: "{{ .Values.config.puid }}"
- name: PGID
value: "{{ .Values.config.pgid }}"
- name: TZ
value: "{{ .Values.config.timezone }}"
- name: WEBUI_PORT
value: "{{ .Values.config.webuiPort }}"
volumeMounts:
- name: qbittorrent-config
mountPath: /config
- name: downloads
mountPath: /downloads
ports:
- containerPort: {{ .Values.deployment.containerPort }}
protocol: TCP
readinessProbe:
httpGet:
path: /
port: {{ .Values.deployment.containerPort }}
initialDelaySeconds: 10
periodSeconds: 10
failureThreshold: 3
volumes:
- name: qbittorrent-config
persistentVolumeClaim:
claimName: {{ .Values.persistence.config.name }}
- name: wireguard-config
configMap:
name: {{ .Values.persistence.configMap.name }}
- name: downloads
persistentVolumeClaim:
claimName: {{ .Values.persistence.downloads.existingClaim }}
- name: modules
hostPath:
path: /lib/modules

@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.persistence.config.name }}
namespace: {{ .Values.namespace }}
spec:
accessModes:
- {{ .Values.persistence.config.accessMode }}
resources:
requests:
storage: {{ .Values.persistence.config.size }}
storageClassName: {{ .Values.persistence.config.storageClass }}

@@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.service.name }}
namespace: {{ .Values.namespace }}
spec:
selector:
app: {{ .Release.Name }}
ports:
- protocol: TCP
name: qbittorrent
port: {{ .Values.service.port }}
targetPort: {{ .Values.deployment.containerPort }}
- protocol: UDP
name: wireguard
port: {{ .Values.service.wireguardPort }}
targetPort: {{ .Values.service.wireguardPort }}
type: {{ .Values.service.type }}

@@ -0,0 +1,60 @@
replicaCount: 1
namespace: media
deployment:
labels:
app: qbittorrent
containerPort: 8080
image:
repository: linuxserver/qbittorrent
tag: latest
pullPolicy: Always
qbittorrentImage:
repository: linuxserver/qbittorrent
tag: latest
pullPolicy: Always
wireguardImage:
repository: linuxserver/wireguard
tag: latest
pullPolicy: Always
service:
name: qbittorrent-service
type: LoadBalancer
port: 8080
wireguardPort: 51820
persistence:
config:
enabled: true
name: qbittorrent-config-pvc
accessMode: ReadWriteOnce
size: 1Gi
storageClass: longhorn
downloads:
enabled: true
existingClaim: media-nfs-pvc
configMap:
enabled: true
name: wireguard-config
config:
puid: 1000
pgid: 1000
timezone: Europe/Helsinki
umask: 022
webuiPort: 8080
wireguard:
address: 10.182.199.210/32
privateKey: WNDT2JsSZWw4q5EgsUKkBEX1hpWlpJGUTV/ibfJZOVo=
mtu: 1329
dns: 10.128.0.1
listenPort: 51820
peerPublicKey: PyLCXAQT8KkM4T+dUsOQfn+Ub3pGxfGlxkIApuig+hk=
presharedKey: jSEf0xVUv/LwLmybp+LSM21Q2VOPbWPGcI/Dc4LLGkM=
endpoint: europe3.vpn.airdns.org:1637
allowedIPs: 0.0.0.0/0
persistentKeepalive: 15

@@ -0,0 +1,46 @@
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
generation: 1
labels:
app: hog
name: hog
namespace: test
spec:
replicas: 1
selector:
matchLabels:
app: hog
template:
metadata:
creationTimestamp: null
labels:
app: hog
spec:
containers:
- image: vish/stress
imagePullPolicy: Always
name: stress
resources:
limits:
memory: "1Gi"
requests:
memory: "500Mi"
args:
- -cpus
- "2"
- -mem-total
- "1250Mi"
- -mem-alloc-size
- "100Mi"
- -mem-alloc-sleep
- "1s"
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30

README.md
@@ -0,0 +1,173 @@
# Homeserver Setup
```
© 2023 Taqi Tahmid
```
This is the top level directory for the homeserver setup. It contains the
following directories:
1. `ansible`: Contains the Ansible playbooks and roles for setting up the
homeserver. Proxmox is used as the hypervisor, so the playbooks are written
with that in mind. The Kubernetes cluster itself is set up with K3s; that part
has not been automated yet and remains a manual process.
2. `docker_compose`: Contains the docker-compose files for setting up the
different services. Kubernetes is now preferred over docker-compose, so this
directory is not actively maintained.
3. `Kubernetes_deployments`: Contains the Kubernetes manifests and Helm charts
for setting up the different services.
# Services
The following services are set up on the homeserver:
1. AdGuard Home
2. Private Docker Registry
3. Jellyfin
4. My Portfolio Website
5. Postgres Database
6. Pocketbase Backend
In the future, the following services are planned:
1. Nextcloud
2. Gitea
3. Monitoring Stack
# Homeserver Hardware Setup
I have two mini PCs with Intel N1000 CPUs, 16GB of RAM each, and 500GB SSDs.
Both run Proxmox and are connected to a 1Gbps network, with Proxmox set up in
a cluster configuration.
Four VMs are dedicated to the Kubernetes cluster. They run Ubuntu 22.04 with
4GB of RAM and 2 CPUs each and are attached to a bridge network so they can
communicate with each other. Two VMs are configured as combined control-plane
and worker nodes, and the other two as worker nodes only. The Kubernetes
cluster is set up using K3s.
## Proxmox Installation
Proxmox is installed on the mini PCs by booting from a USB drive with the
Proxmox ISO. The installation targets the SSD, and the network is configured
during installation. Proxmox is set up in a cluster configuration.
Ref: [proxmox-installation](https://pve.proxmox.com/wiki/Installation)
## Increase a Proxmox VM's Disk Size
1. Log in to the Proxmox web interface.
2. In the left sidebar, select the VM to resize.
3. Go to the Hardware tab.
4. Select the disk to resize (e.g., scsi0, ide0, etc.).
5. Click the Resize button in the toolbar.
6. Enter 50G (or 50000 for 50GB) in the size field.
After that, log in to the VM and create a new partition or extend the existing
one:
```bash
sudo fdisk /dev/sda
# Press p to print the partition table.
# Press d to delete the existing partition (ensure the data is safe first).
# Press n to create a new partition and follow the prompts. Use the same
# starting sector as the previous partition to avoid data loss.
# Press w to write the changes and exit.
sudo mkfs.ext4 /dev/sdaX
```
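If the existing partition was grown in place instead of recreated, resizing
the filesystem is enough (a sketch; assumes ext4 and that `growpart` from the
cloud-guest-utils package is available, with X as the partition number):
```bash
sudo growpart /dev/sda X
sudo resize2fs /dev/sdaX
```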
## Proxmox Passthrough Physical Disk to VM
Ref: [proxmox-pve](https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM))
It is possible to pass through a physical disk attached to the host hardware
to a VM. This setup passes through an NVMe drive to the dockerhost VM.
```bash
# List the disk by-id with lsblk
lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" \
";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");\
print "";}'|grep -v -E 'part|lvm'
# Hot plug or add the physical device as a new SCSI disk
qm set 103 -scsi2 /dev/disk/by-id/usb-WD_BLACK_SN770_1TB_012938055C4B-0:0
# Check with the following command
grep 5C4B /etc/pve/qemu-server/103.conf
# After that reboot the VM and verify with lsblk command
lsblk
```
## Set Up Master and Worker Nodes for K3s
The cluster configuration consists of 4 VMs configured as 2 master and 2 worker k3s nodes.
The master nodes also function as worker nodes.
```bash
# On the first master run the following command
curl -sfL https://get.k3s.io | sh -s - server --cluster-init --disable servicelb
# This will generate a token under /var/lib/rancher/k3s/server/node-token
# which is required for adding nodes to the cluster
# On the second master run the following command
export TOKEN=<token>
export MASTER1_IP=<ip>
curl -sfL https://get.k3s.io | \
sh -s - server --server https://${MASTER1_IP}:6443 \
--token ${TOKEN} --disable servicelb
# Similarly on the worker nodes run the following command
export TOKEN=<token>
export MASTER1_IP=<ip>
curl -sfL https://get.k3s.io | \
K3S_URL=https://${MASTER1_IP}:6443 K3S_TOKEN=${TOKEN} sh -
```
## Configure Metallb load balancer for k3s
The MetalLB load balancer is used for services instead of the k3s default
servicelb, as it offers more advanced features and supports IP address pools
for load balancer configuration.
```bash
# On any of the master nodes run the following command
kubectl apply -f \
https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
# Ensure that metallb is installed with the following command
kubectl get pods -n metallb-system
# On the host machine apply the metallb configmap under the metallb directory
kubectl apply -f /home/taqi/homeserver/k3s-infra/metallb/metallbConfig.yaml
# Test that the loadbalancer is working with nginx deployment
kubectl create namespace nginx
kubectl create deployment nginx --image=nginx -n nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer -n nginx
# If the nginx service gets an external IP and is accessible from a browser,
# then the configuration is complete
kubectl delete namespace nginx
```
## Cloud Image for VMs
A cloud image is a pre-configured disk image that is optimized for use in cloud
environments. These images typically include a minimal set of software and
configurations that are necessary to run in a virtualized environment,
such as a cloud or a hypervisor like Proxmox. Cloud images are designed to be
lightweight and can be quickly deployed to create new virtual machines. They
often come with cloud-init support, which allows for easy customization and
initialization of instances at boot time.
The cloud images are used for setting up the VMs in Proxmox; no traditional
template is used. The image is available for download from the following link:
[Ubuntu 22.04 (Jammy)](https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img)
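For reference, turning a cloud image into a VM by hand follows roughly these
steps (a sketch with placeholder IDs and storage names; the Ansible role in
this repo automates the same flow):
```bash
qm create 9000 --name cloud-vm --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
qm importdisk 9000 jammy-server-cloudimg-amd64.img local --format qcow2
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local:9000/vm-9000-disk-0.qcow2
qm set 9000 --ide2 local:cloudinit --boot order=scsi0
qm set 9000 --ipconfig0 ip=dhcp
```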

ansible/.gitignore
@@ -0,0 +1 @@
secrets/

ansible/README.md
@@ -0,0 +1,86 @@
# Ansible Playbook for Proxmox VM Management
This Ansible playbook automates the creation, deletion, and configuration of
virtual machines (VMs) on a Proxmox server.
## Prerequisites
- Ansible installed on the local machine
- Ansible community.general.proxmox_kvm module
- Access to a Proxmox server with API access enabled
- Python `proxmoxer` library installed (`pip install proxmoxer`)
## Setup
1. Clone this repository:
```sh
git clone https://github.com/TheTaqiTahmid/proxmox_ansible_automation
```
2. Update the `inventory` file with your Proxmox server details:
```yaml
all:
hosts:
proxmox:
ansible_host: your_proxmox_ip
ansible_user: your_proxmox_user
ansible_password: your_proxmox_password
```
In the current example implementation in `inventories/hosts.yaml`, there are
multiple groups depending on the types of hosts.
3. Add group-related variables to the group file under the `group_vars` directory
and individual host-related variables to the files under the `host_vars`
directory. Ansible will automatically pick up these variables.
## Playbooks
### Create VM
To create the VMs, run the following command:
```sh
ansible-playbook playbooks/create-vms.yaml
```
The playbook can be run against a specific Proxmox instance using:
```sh
ansible-playbook playbooks/create-vms.yaml --limit proxmox1
```
### Delete VM
To delete existing VMs, run the following command:
```sh
ansible-playbook playbooks/destroy-vms.yaml
```
Similarly, the destroy playbook can be run against a specific Proxmox instance:
```sh
ansible-playbook playbooks/destroy-vms.yaml --limit proxmox1
```
### Configure VM
To configure an existing VM, run the following command:
```sh
ansible-playbook playbooks/configure-vms.yaml
```
The configuration can be limited to individual VMs using limits:
```sh
ansible-playbook playbooks/configure-vms.yaml --limit vm6
```
## Variables
The playbooks use the following variables, which can be customized in the
files under the `group_vars` and `host_vars` directories; the per-VM values
live in `vm_list` entries:
- `id`: The ID of the VM
- `name`: The name of the VM
- `memory`: The amount of memory for the VM (in MB)
- `cores`: The number of CPU cores for the VM
- `disk_size`: The size of the VM disk
- `ip` / `gateway`: The network configuration of the VM
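For example, one VM entry from the committed `host_vars` files:
```yaml
vm_list:
  - id: 106
    name: "vm6"
    memory: 4096
    cores: 2
    disk_size: 30G
    ip: "192.168.1.151/24"
    gateway: "192.168.1.1"
```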
## Author
- Taqi Tahmid (mdtaqitahmid@gmail.com)

ansible/ansible.cfg
@@ -0,0 +1,5 @@
[defaults]
inventory = ./inventory/hosts.yaml
roles_path = ./roles
host_key_checking = False
vault_password_file = ~/.ansible_vault_pass

@@ -0,0 +1,11 @@
# Proxmox access related variables
proxmox_api_url: "192.168.1.121"
# Cloud-init image related variables
image_url: "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
image_dest: "/tmp/cloud-image.img"
image_format: "qcow2"
storage_name: "local"
# ansible venv
ansible_venv: "/home/taqi/.venv/ansible/bin/python"

@@ -0,0 +1,4 @@
apt_packages:
- curl
- vim
- htop

@@ -0,0 +1,29 @@
# VM related variables
vm_list:
- id: 106
name: "vm6"
memory: 4096
cores: 2
disk_size: 30G
ip: "192.168.1.151/24"
gateway: "192.168.1.1"
nameserver1: "1.1.1.1"
nameserver2: "8.8.8.8"
- id: 107
name: "vm7"
memory: 4096
cores: 2
disk_size: 30G
ip: "192.168.1.152/24"
gateway: "192.168.1.1"
nameserver1: "1.1.1.1"
nameserver2: "8.8.8.8"
# cloud-init variables
node: "homeserver1"
net0: "virtio,bridge=vmbr0"
# disk_name: "local:1000/vm-1000-disk-0.raw,discard=on"
disk_path: "/var/lib/vz/images/1000"
ide2: "local:cloudinit,format=qcow2"
boot_order: "order=scsi0"
scsi_hw: "virtio-scsi-pci"

@@ -0,0 +1,30 @@
# VM related variables
vm_list:
- id: 206
name: "vm8"
memory: 4096
cores: 2
disk_size: 30G
ip: "192.168.1.161/24"
gateway: "192.168.1.1"
nameserver1: "1.1.1.1"
nameserver2: "8.8.8.8"
- id: 207
name: "vm9"
memory: 4096
cores: 2
disk_size: 30G
ip: "192.168.1.162/24"
gateway: "192.168.1.1"
nameserver1: "1.1.1.1"
nameserver2: "8.8.8.8"
# cloud-init template variables
node: "homeserver2"
net0: "virtio,bridge=vmbr0"
# disk_name: "local:2000/vm-2000-disk-0.raw,discard=on"
disk_path: "/var/lib/vz/images/2000"
ide2: "local:cloudinit,format=qcow2"
boot_order: "order=scsi0"
scsi_hw: "virtio-scsi-pci"

@@ -0,0 +1,51 @@
all:
children:
hypervisors:
vms:
hypervisors:
children:
server1:
server2:
server1:
hosts:
proxmox1:
ansible_host: 192.168.1.121
ansible_user: "{{ ansible_proxmox_user }}"
ansible_ssh_private_key_file: "{{ ansible_ssh_private_key_file }}"
server2:
hosts:
proxmox2:
ansible_host: 192.168.1.122
ansible_user: "{{ ansible_proxmox_user }}"
ansible_ssh_private_key_file: "{{ ansible_ssh_private_key_file }}"
vms:
children:
vm_group_1:
vm_group_2:
vm_group_1:
hosts:
vm6:
ansible_host: 192.168.1.151
ansible_user: "{{ ansible_vm_user }}"
ansible_ssh_private_key_file: "{{ ansible_ssh_private_key_file }}"
vm7:
ansible_host: 192.168.1.152
ansible_user: "{{ ansible_vm_user }}"
ansible_ssh_private_key_file: "{{ ansible_ssh_private_key_file }}"
vm_group_2:
hosts:
vm8:
ansible_host: 192.168.1.161
ansible_user: "{{ ansible_vm_user }}"
ansible_ssh_private_key_file: "{{ ansible_ssh_private_key_file }}"
vm9:
ansible_host: 192.168.1.162
ansible_user: "{{ ansible_vm_user }}"
ansible_ssh_private_key_file: "{{ ansible_ssh_private_key_file }}"

@@ -0,0 +1,6 @@
- name: Configure Proxmox VMs
hosts: vms
vars_files:
- ../secrets/vault.yaml # Load the encrypted vault file
roles:
- configure-vms

@@ -0,0 +1,6 @@
- name: Create Proxmox VMs
hosts: hypervisors
vars_files:
- ../secrets/vault.yaml # Load the encrypted vault file
roles:
- create-vms

@@ -0,0 +1,6 @@
- name: Destroy Proxmox VMs
hosts: hypervisors
vars_files:
- ../secrets/vault.yaml # Load the encrypted vault file
roles:
- destroy-vms

@@ -0,0 +1,11 @@
---
- name: Update apt cache
ansible.builtin.apt:
update_cache: yes
become: true
- name: Install necessary packages
ansible.builtin.apt:
name: "{{ apt_packages }}"
state: present
become: true

@@ -0,0 +1,70 @@
---
- name: Download cloud image
get_url:
url: "{{ image_url }}"
dest: "{{ image_dest }}"
use_netrc: yes
- name: Create VMs
delegate_to: localhost
vars:
ansible_python_interpreter: "{{ ansible_venv }}"
community.general.proxmox_kvm:
api_host: "{{ proxmox_api_url }}"
api_user: "{{ proxmox_user }}"
api_token_id: "{{ proxmox_api_token_id }}"
api_token_secret: "{{ proxmox_api_token }}"
node: "{{ node }}"
vmid: "{{ item.id }}"
name: "{{ item.name }}"
memory: "{{ item.memory }}"
cores: "{{ item.cores }}"
scsihw: "{{ scsi_hw }}"
boot: "{{ boot_order }}"
net:
net0: "{{ net0 }}"
ipconfig:
ipconfig0: "ip={{ item.ip }},gw={{ item.gateway }}"
ide:
ide2: "{{ ide2 }}"
nameservers: "{{ item.nameserver1 }},{{ item.nameserver2 }}"
ciuser: "{{ ciuser }}"
cipassword: "{{ cipassword }}"
sshkeys: "{{ lookup('file', '/home/taqi/.ssh/homeserver.pub') }}"
loop: "{{ vm_list }}"
- name: Import disk image
ansible.builtin.shell: |
qm importdisk "{{ item.id }}" "{{ image_dest }}" "{{ storage_name }}" --format "{{ image_format }}"
loop: "{{ vm_list }}"
- name: Attach disk to VM
ansible.builtin.shell: |
qm set "{{ item.id }}" --scsi0 "{{ storage_name }}:{{ item.id }}/vm-{{ item.id }}-disk-0.{{ image_format }},discard=on"
loop: "{{ vm_list }}"
- name: Resize disk
ansible.builtin.shell: |
qm resize {{ item.id }} scsi0 {{ item.disk_size }}
loop: "{{ vm_list }}"
- name: Start VMs
delegate_to: localhost
vars:
ansible_python_interpreter: "{{ ansible_venv }}"
community.general.proxmox_kvm:
api_host: "{{ proxmox_api_url }}"
api_user: "{{ proxmox_user }}"
api_token_id: "{{ proxmox_api_token_id }}"
api_token_secret: "{{ proxmox_api_token }}"
node: "{{ node }}"
name: "{{ item.name }}"
state: started
loop: "{{ vm_list }}"
tags:
- start_vms
- name: Clean up downloaded image
file:
path: "{{ image_dest }}"
state: absent

@@ -0,0 +1,59 @@
---
- name: Get VM current state
delegate_to: localhost
vars:
ansible_python_interpreter: "{{ ansible_venv }}"
community.general.proxmox_kvm:
api_host: "{{ proxmox_api_url }}"
api_user: "{{ proxmox_user }}"
api_token_id: "{{ proxmox_api_token_id }}"
api_token_secret: "{{ proxmox_api_token }}"
name: "{{ item.name }}"
node: "{{ node }}"
state: current
register: vm_state
ignore_errors: yes
loop: "{{ vm_list }}"
tags:
- vm_delete
- name: Debug VM state
debug:
msg: "VM state: {{ vm_state.results[0].status }}"
when: vm_state is succeeded
loop: "{{ vm_list }}"
- name: Stop VM
delegate_to: localhost
vars:
ansible_python_interpreter: "{{ ansible_venv }}"
community.general.proxmox_kvm:
api_host: "{{ proxmox_api_url }}"
api_user: "{{ proxmox_user }}"
api_token_id: "{{ proxmox_api_token_id }}"
api_token_secret: "{{ proxmox_api_token }}"
name: "{{ item.name }}"
node: "{{ node }}"
state: stopped
force: true
when: vm_state is succeeded and vm_state.results[0].status != 'absent'
loop: "{{ vm_list }}"
tags:
- vm_delete
- name: Delete VM
delegate_to: localhost
vars:
ansible_python_interpreter: "{{ ansible_venv }}"
community.general.proxmox_kvm:
api_host: "{{ proxmox_api_url }}"
api_user: "{{ proxmox_user }}"
api_token_id: "{{ proxmox_api_token_id }}"
api_token_secret: "{{ proxmox_api_token }}"
name: "{{ item.name }}"
node: "{{ node }}"
state: absent
when: vm_state is succeeded and vm_state.results[0].status != 'absent'
loop: "{{ vm_list }}"
tags:
- vm_delete

docker_compose/.gitignore
@@ -0,0 +1 @@
.env

docker_compose/README.md
@@ -0,0 +1,246 @@
Homeserver Notes
================
# Future Plan
- Add an authentication frontend like Authentik
- Add Nextcloud
- Add Gitea
# List of Services Running on the Homeserver
- Adguard
- Plex
- Sonarr
- Radarr
- qbittorrent
- Portainer
- Jackett
- Jellyfin
- Wireguard
# List of Basic CLI Tools Installed on the Server
- ca-certificates
- curl
- gnupg
- lsb-release
- ntp
- ncdu
- net-tools
- apache2-utils
- apt-transport-https
- htop
# Firewall Rules (Currently Disabled)
I am using ufw to set the firewall rules and will update them as the setup evolves:
```
sudo ufw default allow outgoing
sudo ufw default allow incoming
sudo ufw allow from 192.168.1.0/24
sudo ufw allow 443
sudo ufw allow 80
sudo ufw enable
```
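To confirm the active rule set after enabling:
```
# Show active rules and defaults
sudo ufw status verbose
# Numbered view, useful when deleting rules later
sudo ufw status numbered
```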
# Hardware Transcoding for Jellyfin
Media streaming applications such as Jellyfin and Plex use transcoding to
convert video formats, which may be necessary when the end-user device does not
support certain formats or resolutions. If hardware transcoding is not enabled,
Plex/Jellyfin falls back to software transcoding, which is resource intensive.
Most modern CPUs/GPUs support hardware transcoding, including our Ryzen 5 2500U.
Here is the process to enable it:
```
sudo apt-get update
sudo apt-get install vainfo mesa-va-drivers libva2 libva-utils
# Run the following command to make sure VA-API is working properly
vainfo
# Add the following to the Jellyfin service in the compose file
devices:
- /dev/dri/renderD128:/dev/dri/renderD128 # VA-API device for hardware acceleration
group_add:
- video # Add the container to the 'video' group
```
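After recreating the container, we can sanity-check the passthrough from inside
it. A quick sketch, assuming the official jellyfin image (the bundled ffmpeg
path may differ between image versions):
```
# The render node should be visible inside the container
docker exec -it jellyfin ls -l /dev/dri
# List the hardware accelerators ffmpeg was built with (vaapi should appear)
docker exec -it jellyfin /usr/lib/jellyfin-ffmpeg/ffmpeg -hwaccels
```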
# Traefik Reverse Proxy
- Traefik is a modern HTTP reverse proxy and load balancer that can route
traffic to different internal containers or ports based on the subdomain name.
- In addition, it can automatically handle SSL certificate generation and
renewal for HTTPS (see the minimal routing sketch below).
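As a minimal sketch of the subdomain routing, assuming an illustrative `whoami`
service plus the `t3_proxy` network and `websecure` entrypoint defined in the
Traefik compose file later in this document:
```
services:
  whoami:
    image: traefik/whoami  # tiny demo HTTP server, illustrative only
    container_name: whoami
    networks:
      - t3_proxy
    labels:
      - "traefik.enable=true"
      # Route whoami.<domain> to this container over the TLS entrypoint
      - "traefik.http.routers.whoami-rtr.rule=Host(`whoami.${DOMAINNAME}`)"
      - "traefik.http.routers.whoami-rtr.entrypoints=websecure"
networks:
  t3_proxy:
    external: true
```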
## Configuration
To get wildcard certificates from Let's Encrypt, I will be using the DNS challenge
method. The DNS challenge is one of the methods Let's Encrypt provides to verify
ownership of the domain by adding specific DNS records.
To do that with Cloudflare, I have created a new API token named _CF_DNS_API_TOKEN_
and saved it as a Docker secret under the ~/docker/secrets directory.
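As a sketch, the token can be stored like this (the token value is a
placeholder; the file name matches the `cf_dns_api_token` secret used by the
Traefik compose file below):
```
mkdir -p ~/docker/secrets
printf '%s' 'PASTE_CLOUDFLARE_TOKEN_HERE' > ~/docker/secrets/cf_dns_api_token
chmod 600 ~/docker/secrets/cf_dns_api_token
```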
```
# To save the appdata for traefik3, created the following folders
mkdir -p ~/docker/appdata/traefik3/acme
mkdir -p ~/docker/appdata/traefik3/rules/udms
# To save the Let's Encrypt certificate, created the following file
touch acme.json
chmod 600 acme.json # without 600, Traefik will not start
# To save logs, created following files
touch traefik.log
touch access.log
```
After creating the Docker Compose file, add these TLS options:
```
# Under DOCKERDIR/appdata/traefik3/rules/udms/tls-opts.yml
tls:
options:
tls-opts:
minVersion: VersionTLS12
cipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_AES_128_GCM_SHA256
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
- TLS_FALLBACK_SCSV # Client is doing version fallback. See RFC 7507
curvePreferences:
- CurveP521
- CurveP384
sniStrict: true
```
Add the basic-auth middleware:
```
# Under DOCKERDIR/appdata/traefik3/rules/udms/middlewares-basic-auth.yml
http:
middlewares:
middlewares-basic-auth:
basicAuth:
# users:
# - "user:password"
usersFile: "/run/secrets/basic_auth_credentials"
realm: "Traefik 3 Basic Auth"
```
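The `usersFile` above is the `basic_auth_credentials` Docker secret mounted into
the container. Assuming the same ~/docker/secrets location, it can be generated
with htpasswd (from apache2-utils, already in the CLI tools list):
```
# -c creates the file, -B uses bcrypt hashing
htpasswd -cB ~/docker/secrets/basic_auth_credentials USERNAME
chmod 600 ~/docker/secrets/basic_auth_credentials
```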
Add the rate-limit middleware to mitigate DoS attacks:
```
# Under DOCKERDIR/appdata/traefik3/rules/udms/middlewares-rate-limit.yaml
http:
middlewares:
middlewares-rate-limit:
rateLimit:
average: 100
burst: 50
```
Add the secure-headers middleware:
```
# Under DOCKERDIR/appdata/traefik3/rules/udms/middlewares-secure-headers.yaml
http:
middlewares:
middlewares-secure-headers:
headers:
accessControlAllowMethods:
- GET
- OPTIONS
- PUT
accessControlMaxAge: 100
hostsProxyHeaders:
- "X-Forwarded-Host"
stsSeconds: 63072000
stsIncludeSubdomains: true
stsPreload: true
# forceSTSHeader: true # This is a good thing but it can be tricky. Enable after everything works.
customFrameOptionsValue: SAMEORIGIN # https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options
contentTypeNosniff: true
browserXssFilter: true
referrerPolicy: "same-origin"
permissionsPolicy: "camera=(), microphone=(), geolocation=(), payment=(), usb=(), vr=()"
customResponseHeaders:
X-Robots-Tag: "none,noarchive,nosnippet,notranslate,noimageindex," # disable search engines from indexing home server
server: "" # hide server info from visitors
```
## Networking
Create a dedicated bridge network for Traefik and the containers it proxies,
as sketched below.
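The Traefik compose file at the end of this document declares the network
itself (name `t3_proxy`, subnet 192.168.90.0/24); to pre-create it manually
instead, something like:
```
docker network create --driver bridge --subnet 192.168.90.0/24 t3_proxy
```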
# Wireguard VPN setup
In order for the qbittorrent container to use the WireGuard VPN tunnel, the
wireguard container has been added to the qbittorrent Docker Compose file.
- The qbittorrent container depends on the wireguard container. If the
wireguard container is down, qbittorrent's network will not work.
- Since qbittorrent shares the wireguard container's network stack, port 9500
is forwarded from the wireguard container to port 9500 on the host.
- qbittorrent sits behind the wireguard network interface, so iptables rules
had to be set up to reach the qbittorrent GUI. Also, when the PC restarts,
the wireguard container IP might change, which breaks these rules.
```
# Forward traffic coming to port 9500 on the host to port 9500 on the WireGuard container
sudo iptables -t nat -A PREROUTING -p tcp --dport 9500 -j DNAT --to-destination 172.18.0.6:9500
# Forward traffic from the WireGuard container back to the host's port 9500
sudo iptables -t nat -A POSTROUTING -p tcp -d 172.18.0.6 --dport 9500 -j MASQUERADE
```
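To verify the NAT rules are in place (and to find the rule numbers to replace
them if the container IP changes after a restart):
```
sudo iptables -t nat -L PREROUTING -n --line-numbers
sudo iptables -t nat -L POSTROUTING -n --line-numbers
```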
- We can check the container's public IP geolocation with the following
command and thereby verify the VPN is working.
```
docker exec -it qbittorrent curl ipinfo.io
{
"ip": "1.2.3.4",
"hostname": "1.2.3.4.in-addr.arpa",
"city": "Amsterdam",
"region": "North Holland",
"country": "NL",
"loc": "55.3740,41.8897",
"org": "Some Company",
"postal": "1234",
"timezone": "Europe/Amsterdam",
"readme": "https://ipinfo.io/missingauth"
}
```
- We can check the wireguard VPN connection status with the following command
```
docker exec -it wireguard wg
interface: wg0
public key: <public key>
private key: (hidden)
listening port: 56791
fwmark: 0xca6c
peer: <public key>
preshared key: (hidden)
endpoint: <ip>:51820
allowed ips: 0.0.0.0/0, ::/0
latest handshake: 1 minute, 47 seconds ago
transfer: 12.69 MiB received, 822.64 KiB sent
persistent keepalive: every 15 seconds
```
# FAQ
1. How do I get the Plex claim token?
-> Go to https://www.plex.tv/claim/ and log in.


@ -0,0 +1,18 @@
version: "3"
services:
jackett:
image: "linuxserver/jackett"
container_name: "jackett"
env_file:
./.env
volumes:
- ${DOCKERDIR}/appdata/jackett:/config
- ${DATADIR}/downloads:/downloads
- "/etc/localtime:/etc/localtime:ro"
ports:
- "9117:9117"
restart: unless-stopped
environment:
- PUID=${PUID}
- PGID=${PGID}
- TZ=${TZ}


@ -0,0 +1,38 @@
services:
jellyfin:
image: jellyfin/jellyfin
container_name: jellyfin
env_file:
- ./.env
volumes:
- ${DOCKERDIR}/appdata/jellyfin:/config
- ${DATADIR}/downloads:/downloads
- type: bind
source: ${DATADIR}
target: /media
read_only: true
ports:
- "8096:8096" # Optional, if you rely on Traefik for routing, this can be removed.
restart: 'unless-stopped'
devices:
- /dev/dri/renderD128:/dev/dri/renderD128 # VA-API device for hardware acceleration
group_add:
      - video # Add the container to the 'video' group, necessary for accessing /dev/dri
environment:
- PUID=${PUID}
- PGID=${PGID}
- TZ=${TZ}
- UMASK_SET=002
labels:
- "traefik.enable=true"
- "traefik.http.routers.jellyfin-rtr.rule=Host(`jellyfin.${DOMAINNAME}`)"
- "traefik.http.routers.jellyfin-rtr.entrypoints=websecure"
- "traefik.http.routers.jellyfin-rtr.service=jellyfin-svc"
- "traefik.http.services.jellyfin-svc.loadbalancer.server.port=8096"
- "traefik.http.routers.traefik-rtr.middlewares=middlewares-rate-limit@file,middlewares-secure-headers@file"
networks:
- t3_proxy
networks:
t3_proxy:
external: true

docker_compose/plex.yaml Normal file

@ -0,0 +1,40 @@
version: '3.5'
services:
plex:
image: plexinc/pms-docker
container_name: plex
env_file:
./.env
environment:
- PLEX_UID=${PUID}
- PLEX_GID=${PGID}
- TZ=${TZ}
- VERSION=docker
- PLEX_CLAIM=${PLEX_CLAIM}
ports:
- "32400:32400/tcp"
- "3005:3005/tcp"
- "8324:8324/tcp"
- "32469:32469/tcp"
- "1899:1900/udp"
- "32410:32410/udp"
- "32412:32412/udp"
- "32413:32413/udp"
- "32414:32414/udp"
volumes:
- ${DOCKERDIR}/appdata/plex:/config
- ${DATADIR}/tvshows:/tv
- ${DATADIR}/movies:/movies
restart: unless-stopped
labels:
- "traefik.enable=true"
- "traefik.http.routers.plex-rtr.rule=Host(`plex.${DOMAINNAME}`)"
- "traefik.http.routers.plex-rtr.entrypoints=websecure"
- "traefik.http.routers.plex-rtr.service=plex-svc"
- "traefik.http.services.plex-svc.loadbalancer.server.port=32400"
- "traefik.http.routers.traefik-rtr.middlewares=middlewares-rate-limit@file,middlewares-secure-headers@file"
networks:
- t3_proxy
networks:
t3_proxy:
external: true


@ -0,0 +1,35 @@
version: "3"
services:
portainer:
image: portainer/portainer-ce:latest
ports:
- 9000:9000
volumes:
- /home/taqi/docker/portainer/data:/data
- /var/run/docker.sock:/var/run/docker.sock:ro
restart: unless-stopped
env_file:
- ./.env
networks:
- t3_proxy
labels:
- "traefik.enable=true"
# HTTP Routers
- "traefik.http.routers.portainer-rtr.entrypoints=websecure"
- "traefik.http.routers.portainer-rtr.rule=Host(`portainer.${DOMAINNAME}`)"
# HTTP Services
- "traefik.http.routers.portainer-rtr.tls=true"
- "traefik.http.routers.portainer-rtr.service=portainer-svc"
- "traefik.http.services.portainer-svc.loadbalancer.server.port=9000"
- "traefik.http.routers.traefik-rtr.middlewares=middlewares-rate-limit@file,middlewares-secure-headers@file"
command:
--http-enabled
environment:
- TZ=${TZ}
- DOMAINNAME=${DOMAINNAME}
volumes:
data:
networks:
t3_proxy:
external: true


@ -0,0 +1,52 @@
version: "3"
services:
wireguard:
image: linuxserver/wireguard:latest
container_name: wireguard
env_file:
./.env
cap_add:
- NET_ADMIN
- SYS_MODULE
environment:
- PUID=${PUID}
- PGID=${PGID}
- TZ=Europe/Helsinki
volumes:
- ${WIREGUARD_CONFIG}:/config/wg0.conf
- /lib/modules:/lib/modules
ports:
- 51820:51820/udp
- 9500:9500 # qbittorrent
sysctls:
- net.ipv4.conf.all.src_valid_mark=1
- net.ipv6.conf.all.disable_ipv6=0
restart: unless-stopped
networks:
dockercompose_default:
ipv4_address: 172.18.0.100
qbittorrent:
image: "linuxserver/qbittorrent"
container_name: "qbittorrent"
env_file:
./.env
volumes:
- ${DOCKERDIR}/appdata/qbittorrent:/config
- ${DATADIR}/downloads:/downloads
restart: unless-stopped
environment:
- PUID=${PUID}
- PGID=${PGID}
- TZ=${TZ}
- UMASK_SET=002
- WEBUI_PORT=9500
network_mode: service:wireguard
depends_on:
- wireguard
networks:
dockercompose_default:
external: true


@ -0,0 +1,21 @@
version: "3"
services:
radarr:
image: "linuxserver/radarr"
container_name: "radarr"
env_file:
./.env
volumes:
- ${DOCKERDIR}/appdata/radarr:/config
- ${DATADIR}/downloads:/downloads
- ${DATADIR}/movies:/movies
- "/etc/localtime:/etc/localtime:ro"
ports:
- "7878:7878"
restart: always
environment:
- PUID=${PUID}
- PGID=${PGID}
- TZ=${TZ}
    networks:
      - bridge

# Declare the pre-existing default bridge network so compose does not error
networks:
  bridge:
    external: true


@ -0,0 +1,19 @@
version: "3"
services:
sonarr:
image: "linuxserver/sonarr"
container_name: "sonarr"
env_file:
./.env
volumes:
- ${DOCKERDIR}/appdata/sonarr:/config
- ${DATADIR}/downloads:/downloads
- ${DATADIR}/tvshows:/tvshows
- "/etc/localtime:/etc/localtime:ro"
ports:
- "8989:8989"
restart: always
environment:
- PUID=${PUID}
- PGID=${PGID}
- TZ=${TZ}


@ -0,0 +1,96 @@
version: '3.8'
networks:
t3_proxy:
name: t3_proxy
driver: bridge
ipam:
config:
- subnet: 192.168.90.0/24
secrets:
basic_auth_credentials:
file: $DOCKERDIR/secrets/basic_auth_credentials
cf_dns_api_token:
file: $DOCKERDIR/secrets/cf_dns_api_token
services:
traefik:
container_name: traefik
image: traefik:3.0
restart: unless-stopped
env_file:
- ./.env
networks:
t3_proxy:
ipv4_address: 192.168.90.254
command:
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --entrypoints.traefik.address=:8080
- --entrypoints.websecure.http.tls=true
      # The following options redirect HTTP requests on port 80 to HTTPS
- --entrypoints.web.http.redirections.entrypoint.to=websecure
- --entrypoints.web.http.redirections.entrypoint.scheme=https
- --entrypoints.web.http.redirections.entrypoint.permanent=true
- --api=true
- --api.dashboard=true
# - --api.insecure=true
- --entrypoints.websecure.forwardedHeaders.trustedIPs=$CLOUDFLARE_IPS,$LOCAL_IPS
- --log=true
- --log.filePath=/logs/traefik.log
- --log.level=DEBUG
- --accessLog=true
- --accessLog.filePath=/logs/access.log
- --accessLog.bufferingSize=100
- --accessLog.filters.statusCodes=204-299,400-499,500-599
- --providers.docker=true
- --providers.docker.network=t3_proxy
- --entrypoints.websecure.http.tls.options=tls-opts@file
- --entrypoints.websecure.http.tls.certresolver=dns-cloudflare
- --entrypoints.websecure.http.tls.domains[0].main=$DOMAINNAME
- --entrypoints.websecure.http.tls.domains[0].sans=*.$DOMAINNAME
- --providers.file.directory=/rules
- --providers.file.watch=true
- --certificatesResolvers.dns-cloudflare.acme.storage=/acme.json
- --certificatesResolvers.dns-cloudflare.acme.dnsChallenge.provider=cloudflare
- --certificatesResolvers.dns-cloudflare.acme.dnsChallenge.resolvers=1.1.1.1:53,1.0.0.1:53
ports:
# - 80:80
- 443:443
- 8080:8080
# - target: 80
# published: 80
# protocol: tcp
# mode: host
# - target: 443
# published: 443
# protocol: tcp
# mode: host
# - target: 8080
# published: 8585
# protocol: tcp
# mode: host
volumes:
- $DOCKERDIR/appdata/traefik3/rules/$HOSTNAME:/rules
- /var/run/docker.sock:/var/run/docker.sock:ro
- $DOCKERDIR/appdata/traefik3/acme/acme.json:/acme.json
- $DOCKERDIR/logs/$HOSTNAME/traefik:/logs
environment:
- PUID=${PUID}
- PGID=${PGID}
- TZ=$TZ
- CF_DNS_API_TOKEN_FILE=/run/secrets/cf_dns_api_token
- HTPASSWD_FILE=/run/secrets/basic_auth_credentials
- DOMAINNAME=${DOMAINNAME}
secrets:
- cf_dns_api_token
- basic_auth_credentials
labels:
- "traefik.enable=true"
- "traefik.http.routers.dashboard.tls=true"
- "traefik.http.routers.traefik-rtr.entrypoints=websecure"
- "traefik.http.routers.traefik-rtr.rule=Host(`traefik.${DOMAINNAME}`)"
- "traefik.http.routers.traefik-rtr.service=api@internal"
# Middlewares
- "traefik.http.routers.traefik-rtr.middlewares=middlewares-rate-limit@file,middlewares-secure-headers@file,middlewares-basic-auth@file"