my-portfolio/src/routes/homelab/+page.svelte

<script>
import asrockImg from '$lib/images/asrock.jpg'
import rackImg from '$lib/images/rack.jpg'
</script>
<div>
<h1 class="text-2xl mb-4 font-normal">Homelab</h1>
<div class="content text-left">
<article>
<h2 class="text-xl text-center">Origin and Hardware</h2>
<p>My interest in homelabbing arose long before I was familiar with the term “homelab”. I have always been curious and drawn towards things that are unfamiliar to me. I am also a builder and problem solver at heart, which has given me a broad understanding of tech and maker culture.</p>
<p>My lab started almost 4 years ago, when I wanted to set up my own DNS server at home to add network-wide ad-blocking (Pi-hole). As I learned more about Linux and containerization, I was quickly drawn into the hobby of self-hosting web services.</p>
<p>Raspberry Pis are great, but building Docker images on one takes ages, so it wasn't long before I upgraded to a Lenovo ThinkCentre M73 Tiny workstation with 8 GB of RAM and a 4th-gen i5. This was a big step up, which enabled me to host a lot more services and experiment more with building and deploying my own Docker images.</p>
<p>As I wanted more storage, and the expansion options on a mini PC are limited, I upgraded to a slightly larger mini PC (ASRock DeskMini X300). The spec sheet listed three M.2 slots and two SATA ports, which would be perfect for building a low-power NAS, as the maximum power draw is less than 50 W.</p>
<p>I quickly learned that the low-power motherboard obviously wasn't able to power two 10 TB IronWolf HDDs, so I had to hack together a solution with an external power supply. This was a bit of a mess (see picture below), and the flaky setup also led to occasional S.M.A.R.T. errors from the HDD array, so I knew I had to move to new hardware soon.</p>
<img src={asrockImg} alt="ASRock DeskMini X300 with two externally powered hard drives" class="rounded mt-4">
<p class="italic text-sm mb-4">The two HDD's are powered by an external power supply that I had to manually switch on and off when I rebooted the server. The patched cables also led to lots of S.M.A.R.T. errors which is obvious not great.</p>
<p>At the beginning of March, I upgraded the motherboard and RAM in my main PC and used the old parts as the base for my new server. I plan to mount everything in a rack when we get more space, so I built the new server in a 2U rack case, which also leaves lots of room for storage upgrades. I used consumer-grade hardware to keep idle power draw low:</p>
<ul class="list-disc list-inside indent-4">
<li>MSI PRO Z690-P (lots of IOMMU groups)</li>
<li>Intel i3-12100F (low idle power draw)</li>
<li>32 GB Corsair Vengeance DDR4-3200</li>
<li>1 TB Samsung 970 EVO (host boot and VM disks)</li>
<li>512 GB Samsung 970 EVO (NAS cache)</li>
<li>2 × 10 TB Seagate IronWolf (RAID 1)</li>
<li>GeForce GTX 1650 Ti 4 GB (for transcoding)</li>
</ul>
<img class="rounded my-4" src={rackImg} alt="">
<p>The homelab is also connected to two 3D printers, which run open-source firmware (Klipper) and are controlled through a web interface (Mainsail) from any device on the local network. I started with an Ender 3, and during my parental leave last year I built my own printer from an open-source project called Voron 0. The hardware is all self-sourced, and the structural parts are 3D printed in ABS+. It took me roughly 40 hours to build and calibrate it.</p>
<h2 class="text-xl text-center mt-4">OS</h2>
<p>The server runs Proxmox VE, which is great for experimenting: I can easily provision development and test environments from templates, and spin up machines to experiment with Kubernetes without affecting my services. It's also interesting to learn the basics of Proxmox, even though virtualization is not necessarily something I will do a deep dive on. It also makes it easy to separate services into LXC containers and virtual machines.</p>
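<p>As an illustration (the VM IDs, name and resources below are just placeholders, not my actual setup), provisioning a disposable test VM from a template boils down to a few commands on the Proxmox host:</p>
<pre class="rounded my-4"><code># Full clone of template 9000 into a new VM with ID 120
qm clone 9000 120 --name k8s-test --full

# Give it some resources and boot it
qm set 120 --memory 4096 --cores 2
qm start 120</code></pre>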
<h2 class="text-xl text-center mt-4">VMs and Services</h2>
<h3 class="text-lg text-center">Docker Host</h3>
<p>This VM is my production environment, where I run the Docker services that have been tested and properly set up with network volumes and backups.</p>
<br>
<p class="underline">Currently I am hosting the following services:</p>
<ul class="list-disc">
<li>
<p>Traefik</p>
<p>Reverse proxy that directs and secures external traffic to internal services. Traefik also handles SSL certificates from Let's Encrypt, as all of my external and internal domains are served over HTTPS (a sketch of the label-based routing is shown after this list).</p>
</li>
<li>
<p>Traefik-bouncer</p>
<p>Monitors Traefik and bans incoming connections from known threat actors based on a set of predefined rules. It also bans clients after multiple failed login attempts and monitors the logs of all exposed services.</p>
</li>
<li>
<p>Portfolio web page</p>
<p>This is my portfolio page, which will soon be available at rannes.dev. I'm using Svelte/SvelteKit, and the Docker image is built with Nginx to serve the page. It is currently not exposed to the internet, as I am still building it.</p>
</li>
<li>
<p>Crowdsec Security Engine</p>
</li>
<li>
<p>Prometheus</p>
</li>
<li>
<p>Grafana</p>
</li>
<li>
<p>Authelia</p>
<p>IAM layer with 2FA.</p>
</li>
<li>
<p>Media Stack</p>
<ul class="list-disc list-inside indent-4">
<li>Jellyfin</li>
<li>Radarr</li>
<li>Sonarr</li>
<li>Prowlarr</li>
<li>Bazarr</li>
<li>Gluetun</li>
<li>Jellyseerr</li>
</ul>
</li>
<li>
<p>Syncthing</p>
</li>
<li>
<p>Gitea</p>
<p>Self-hosted Git repository.</p>
</li>
<li>
<p>Gitea Runners</p>
<p>Runners for continuous deployment.</p>
</li>
<li>
<p>Uptime Kuma</p>
</li>
<li>
<p>Portainer</p>
</li>
</ul>
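<p>As an illustration of the label-based routing mentioned above, here is roughly what exposing a single container through Traefik looks like (the service, network, subdomain and certificate resolver names are placeholders, not my actual configuration):</p>
<pre class="rounded my-4"><code># Assumes Traefik is already running and attached to the "proxy" network
docker run -d --name whoami --network proxy \
  --label 'traefik.enable=true' \
  --label 'traefik.http.routers.whoami.rule=Host(`whoami.rannes.dev`)' \
  --label 'traefik.http.routers.whoami.entrypoints=websecure' \
  --label 'traefik.http.routers.whoami.tls.certresolver=letsencrypt' \
  traefik/whoami</code></pre>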
<h3 class="text-lg text-center mt-4">Pi-Hole</h3>
<p>LXC container running Pi-hole dns blocker. This is separate so the network is not affected when servicing other VMs. First this was included in the docker-stack but it created too many issues as I had to boot services in a specific order.</p>
<p>For this reason, I am also planning on moving Crowdsec and Traefik to a separate containers.</p>
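<p>Creating a small container like this is nearly a one-liner on the Proxmox host; a rough sketch (the container ID, template and resources are placeholders):</p>
<pre class="rounded my-4"><code># Create a small Debian container for Pi-hole and start it
pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname pihole --memory 512 --cores 1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 110</code></pre>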
<h3 class="text-lg text-center mt-4">Unraid</h3>
<p>This VM runs Unraid, which manages my BTRFS array in RAID 1. On boot it is loaded from a USB stick and runs in the VM's RAM. I recently moved to Unraid because it lets me add more drives as needed, whereas ZFS is very particular about the size of added drives, and I simply don't have the knowledge of (or the appetite for) storage systems to manage that if I can avoid it.</p>
<p>This VM has the SATA controller passed through to it, giving Unraid full control of the HDDs.</p>
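<p>On Proxmox, this kind of PCI passthrough boils down to one setting (the VM ID and PCI address below are placeholders; IOMMU has to be enabled and the controller isolated in its own IOMMU group):</p>
<pre class="rounded my-4"><code># Pass the host's SATA controller (here 0000:00:17.0) through to VM 200
qm set 200 --hostpci0 0000:00:17.0</code></pre>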
</article>
</div>
</div>