updated readme and restructure project

# 🏠 Homeserver Setup Guide: Kubernetes on Proxmox
[](LICENSE)
[](CONTRIBUTING.md)

```
© 2023 Taqi Tahmid
```

> Build your own modern homelab with Kubernetes on Proxmox! This guide walks
> you through setting up a complete home server infrastructure with essential
> self-hosted services.

## 🌟 Highlights

- Fully automated setup using Ansible
- Production-grade Kubernetes (K3s) cluster
- High-availability Proxmox configuration
- Popular self-hosted applications ready to deploy

## 📁 Repository Structure

- `ansible/` - Automated provisioning with Ansible playbooks. [Ansible Guide](ansible/README.md)
- `kubernetes/` - K8s manifests and Helm charts. [Kubernetes Guide](kubernetes/README.md)
- `docker/` - Legacy docker-compose files (Kubernetes preferred). [Docker Guide](docker/README.md)

## 🚀 Running Services

- ✨ AdGuard Home - Network-wide ad blocking
- 🐳 Private Docker Registry
- 🎬 Jellyfin Media Server
- 🌐 Portfolio Website
- 🗄️ PostgreSQL Database
- 📦 Pocketbase Backend
- 💻 Gitea Git Server

### 📋 Coming Soon

- Nextcloud
- Monitoring Stack

## 💻 Hardware Setup

- 2x Mini PCs with Intel N1000 CPUs
- 16GB RAM each
- 500GB SSDs
- 1Gbps networking
- Proxmox Cluster Configuration
- 4x VMs for the K3s cluster (Ubuntu 22.04, 2 CPUs and 4GB RAM each, on a
  bridged network; two nodes are configured as combined control-plane/worker
  nodes, two as workers)

## 🛠️ Installation Steps

### 1. Setting up Proxmox Infrastructure

#### Proxmox Base Installation

- Boot mini PCs from Proxmox USB drive
- Install on SSD and configure networking
- Set up cluster configuration (a sketch follows below)

> 📚 Reference: [Official Proxmox Installation Guide](https://pve.proxmox.com/wiki/Installation)
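
Cluster creation itself takes only a few commands with Proxmox's `pvecm` tool.
A minimal sketch, assuming two freshly installed nodes on the same LAN (the
cluster name and IP below are examples):

```bash
# On the first node: create the cluster
pvecm create homelab

# On the second node: join it via the first node's IP (example address)
pvecm add 192.168.1.10

# On either node: verify membership and quorum
pvecm status
```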

#### Cloud Image Implementation

Cloud images provide:

- 🚀 Pre-configured, optimized disk images
- 📦 Minimal software footprint
- ⚡ Quick VM deployment
- 🔧 Cloud-init support for easy customization

These lightweight images are perfect for rapid virtual machine deployment in
your homelab environment. No traditional template is used for setting up the
VMs; the [Ubuntu 22.04 cloud image](https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img)
is used instead.
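
A minimal sketch of importing the cloud image into a Proxmox VM with `qm`
(the VM ID `9000`, storage `local-lvm`, and bridge `vmbr0` are assumptions;
adjust to your environment):

```bash
# Download the Ubuntu 22.04 (jammy) cloud image
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

# Create an empty VM (ID, name, and sizing are examples)
qm create 9000 --name ubuntu-cloud --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0

# Import the image as the VM's disk and attach it via VirtIO SCSI
qm importdisk 9000 jammy-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0

# Add a cloud-init drive and boot from the imported disk
qm set 9000 --ide2 local-lvm:cloudinit --boot order=scsi0
```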

#### Proxmox VM Disk Management

**Expanding VM Disk Size:**

1. Access Proxmox web interface
2. Select target VM
3. Navigate to Hardware tab
4. Choose disk to resize (e.g., scsi0, ide0)
5. Click Resize and enter new size (e.g., 50G)

**Post-resize VM Configuration:**

```bash
# Access the VM and grow the partition with fdisk
sudo fdisk /dev/sda

# Key commands:
# p - print partition table
# d - delete the existing partition (data remains on disk)
# n - create a new, larger partition; reuse the SAME starting sector
#     as the old partition to avoid data loss
# w - write changes

# Grow the existing filesystem to fill the enlarged partition
sudo resize2fs /dev/sdaX

# Only for a brand-new, empty partition (this erases it):
# sudo mkfs.ext4 /dev/sdaX
```
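
If the `cloud-guest-utils` package is available in the guest (an assumption),
the same can be done non-interactively:

```bash
# Grow partition 1 of /dev/sda into the new free space, then the filesystem
sudo growpart /dev/sda 1
sudo resize2fs /dev/sda1
```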

#### Physical Disk Passthrough

Pass physical disks (e.g., NVMe storage) attached to the host through to VMs.
This setup passes an NVMe drive through to the dockerhost VM.

```bash
# List disks together with their /dev/disk/by-id names
lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'

# Hot-plug the physical device into the VM as a new SCSI disk (example for VM ID 103)
qm set 103 -scsi2 /dev/disk/by-id/usb-WD_BLACK_SN770_1TB_012938055C4B-0:0

# Verify the configuration
grep 5C4B /etc/pve/qemu-server/103.conf

# Reboot the VM, then confirm the disk is visible inside it
lsblk
```

> 📚 Reference: [Proxmox Disk Passthrough Guide](https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM))

### 2. Kubernetes Cluster Setup

#### K3s Cluster Configuration

Setting up a 4-node cluster (2 master + 2 worker); the master nodes also
function as worker nodes.

**Master Node 1:**
```bash
curl -sfL https://get.k3s.io | sh -s - server --cluster-init --disable servicelb

# This generates a token under /var/lib/rancher/k3s/server/node-token,
# which is required for adding the remaining nodes to the cluster.
```

**Master Node 2:**
```bash
export TOKEN=<token>
export MASTER1_IP=<ip>
curl -sfL https://get.k3s.io | sh -s - server --server https://${MASTER1_IP}:6443 --token ${TOKEN} --disable servicelb
```

**Worker Nodes:**
```bash
export TOKEN=<token>
export MASTER1_IP=<ip>
curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER1_IP}:6443 K3S_TOKEN=${TOKEN} sh -
```
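
Once all four nodes have joined, a quick sanity check from either master:

```bash
# All four nodes should report a Ready status
sudo k3s kubectl get nodes -o wide
```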

#### MetalLB Load Balancer Setup

MetalLB is used for LoadBalancer services instead of the K3s default
servicelb, as it offers advanced features and supports IP address pools for
load balancer configuration.

```bash
# Install MetalLB
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

# Verify installation
kubectl get pods -n metallb-system

# Apply the MetalLB config from the metallb directory on the host machine
kubectl apply -f /home/taqi/homeserver/k3s-infra/metallb/metallbConfig.yaml
```
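
For reference, a minimal sketch of what `metallbConfig.yaml` might contain
(the pool name and address range are assumptions; pick a range outside your
router's DHCP scope):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250  # example range, adjust to your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```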

**Quick Test:**
```bash
# Deploy a test nginx
kubectl create namespace nginx
kubectl create deployment nginx --image=nginx -n nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer -n nginx

# If the nginx service receives an external IP from MetalLB and is reachable
# from a browser, the configuration is complete
kubectl get svc -n nginx

# Cleanup after testing
kubectl delete namespace nginx
```

## 🤝 Contributing

Contributions welcome! Feel free to open issues or submit PRs.

## 📝 License

MIT License - feel free to use this as a template for your own homelab!

The commit also updates the Gitea Helm chart values, pinning the image:

```diff
@@ -14,6 +14,10 @@ gitea:
   password: password
   email: email
 
+image:
+  repository: gitea/gitea
+  tag: 1.23.4
+
 postgresql:
   enabled: false
```
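
Assuming these values belong to the official Gitea Helm chart (not confirmed
by the commit itself), rolling the pinned image out would look something like:

```bash
# One-time setup: add the Gitea chart repository
helm repo add gitea-charts https://dl.gitea.com/charts/

# Apply the updated values
helm upgrade --install gitea gitea-charts/gitea -f values.yaml -n gitea
```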