pdpi 3 hours ago

> ignore current warnings - I’m using a MacBook Pro charger + cable and still got the warning that I need a 5V/5A PSU.

You need to be careful with this one.

The USB spec goes up to 15W (3A) for its 5V PD profiles, and the standard way to get 25W would be to use the 9V profile. I assume the Pi 5 lacks the necessary hardware to convert a 9V input to 5V, and, instead, the Pi 5 and its official power supply support a custom, out-of-spec 25W (5A) mode.

Using an Apple charger gets you the standard 15W mode, and, on 15W, the Pi can only offer 600mA for accessories, which may or may not be enough to power your NVMe. Using the 25W supply, it can offer 1.6A instead, which gives you plenty more headroom.

  • wpm 3 hours ago

    5V/5A is not a custom profile. It is part of the USB-PD standard, but as an optional profile that can only be provided if you ensure the cable used is safe to handle 5 amps of current. It's why the official Raspberry Pi 5 PSU has a non-removable cable.

  • otter-in-a-suit 2 hours ago

    That's good to know, thank you. It's using the official charger in the rack, but I used the charger I've had on my desk while setting it up. I added a note to the article.

srjilarious 7 hours ago

I just learned about the whole homelab thing a week ago; it's a much deeper rabbit hole than I expected. I'm planning to set up Proxmox today for the first time, in fact, and retire my Ubuntu Server setup running on a NUC that's been serving me well for the last couple of years.

I hadn't heard about Mealie yet, but it sounds like a great one to install.

  • jrmg 5 hours ago

    > Ubuntu Server setup running on a NUC that's been serving me well

    In my book, that's a homelab; it's just a small one (an efficient one, even?)

  • perdomon an hour ago

    I've had a ton of fun with CasaOS in the past few months. I don't mind managing docker-compose text files, but CasaOS comes with a simple UI and an "App Store" that makes the process really simple and doesn't overcomplicate things when you want to customize something about a container.

  • mvATM99 4 hours ago

    You should definitely try Mealie, yes. On top of being a good way to host your own recipes, the entire thing just feels... really well put together?

    I'm not even using the features beyond the recipes yet, but I'm already very happy that I can migrate my recipes from Google Docs over there.

  • tom1337 7 hours ago

    If you want to go down another, related rabbit hole, check out the DataHoarder subreddit. But don't blame me if you end up buying terabytes of storage over the next few months :)

    • PenguinCoder 6 hours ago

      Data hoarding is a bit more involved than just a homelab. You don't want your data hoard to go down or go missing while you're labbing new tech and protocols.

    • blitzar 5 hours ago

      don't blame me if you’re buying terabytes of USB drives and pulling out the hard drives

  • skelpmargyar an hour ago

    Proxmox is awesome! I've been running it for ~5 years and it's been absolutely stable and pleasant to run services on.

    The Proxmox Backup Server is the killer feature for me. Incremental and encrypted backups with seamless restoration for LXC and VMs has been amazing.

    • strbean 37 minutes ago

      I've been looking to get offsite backups going. Where do you keep your backups? NAS + cloud?

      I also wanted to back up my big honking zpool of media, but it isn't economical to store 10+ TB offsite when the data isn't really that critical.

      • somehnguy 9 minutes ago

        My PBS server has 2 datasources - one local external drive & Backblaze B2. I snapshot to the local drive frequently throughout the day & B2 once in the evening.

        Yeah, I don't back up any of my media zpool. It can all be replaced quite easily; not worth paying for the backup storage.

  • walthamstow 7 hours ago

    I have Proxmox running on top of a clean Debian install on my NUC. I wanted to let Plex use hardware decoding, and it got a bit funny trying to do that with Plex running in a VM, so Plex runs on the host and I use VMs for other stuff.

    • ysleepy 4 hours ago

      It's very easy to do this with LXC containers in Proxmox now, as passing devices to a container is now possible from the UI.
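
      For anyone on an older Proxmox without the UI option, the classic manual route is a couple of lines in the container's config file. A minimal sketch (the container ID and render-node minor numbers here are just examples):

        # /etc/pve/lxc/101.conf (container ID is an example)
        # allow the DRM character devices (major 226) and bind-mount /dev/dri into the container
        lxc.cgroup2.devices.allow: c 226:0 rwm
        lxc.cgroup2.devices.allow: c 226:128 rwm
        lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir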

      • phito 3 hours ago

        Just as easy with VMs; you just have to pass the device through to the VM.

    • nodesocket 5 hours ago

      I have an Intel (12th-gen i5-12450H) mini PC and at first had issues getting the GPU firmware loaded and working in Debian 12. However, upgrading to Debian 13 (trixie) and running apt update and upgrade resolved the issue, and I was able to pass the onboard Intel GPU through Docker to a Jellyfin container just fine. I believe the issue is related to older Linux kernels and GPU firmware compatibility. Perhaps that's your issue.
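
      The Docker side of that passthrough is basically just handing /dev/dri to the container. A minimal docker-compose sketch (image, paths, and port are the usual defaults; adjust to your setup):

        # docker-compose.yml (minimal sketch)
        services:
          jellyfin:
            image: jellyfin/jellyfin
            devices:
              - /dev/dri:/dev/dri   # expose the iGPU render nodes for hardware transcoding
            # depending on the host, group_add with your render group id may also be needed
            volumes:
              - ./config:/config
              - ./media:/media
            ports:
              - "8096:8096"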

  • wltr 7 hours ago

    A Few Moments Later

    • blitzar 5 hours ago

      There is time dilation in the homelab vortex ... what feels like a few hours can turn out to be years in the real world.

      • matthewfcarlson 3 hours ago

        My best McConaughey voice: “this little server is gonna cost us 51 years”

      • wltr 5 hours ago

        That’s precisely what I meant! I’m at my sixth year, I guess. Maybe longer, I’ve lost my count.

Havoc 8 hours ago

My most recent learning - DDR4 ECC UDIMMs are comically expensive. To the point where I considered just replacing the entire platform with something RDIMM rather than swapping to ECC sticks.

>No space left on device.

>In other words, you can lock yourself out of PBS. That’s… a design.

Run PBS in LXC with the base on a ZFS dataset with dedup & compression turned off. If it bombs, you can increase the disk size in Proxmox & reboot it. Unlike with VMs, you don't need to do anything inside the container to resize the FS, so this generally works as a fix.
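
Roughly, assuming a pool called tank and container ID 105 (both made up):

  # PBS does its own chunk-level dedup and compression, so leave ZFS's off for the datastore
  zfs create -o compression=off -o dedup=off tank/pbs-datastore

  # if the container's root filesystem fills up, grow it from the PVE host and restart
  pct resize 105 rootfs +8G
  pct reboot 105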

>PiHole

AGH (AdGuard Home) is worth considering because it has built-in DoH.
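
That is, you can point it straight at DoH upstreams without a separate stub resolver. A rough sketch of the relevant AdGuardHome.yaml fragment (the upstream choices are just examples):

  # AdGuardHome.yaml (fragment)
  dns:
    upstream_dns:
      - https://dns.cloudflare.com/dns-query
      - https://dns.google/dns-query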

>Raspberry Pi 5, ARM64 Proxmox

Interesting. I'm leaning more towards k8s for integrating pis meaningfully

  • everforward 4 hours ago

    You seem knowledgeable so you may already know, but it's worth looking at the x86 mini PCs. Performance per watt has gotten pretty close on the newer low power CPUs (e.g. N150, unsure what AMD's line for that is), and performance per $ spent on hardware is way higher. I'm seeing 8GB Pi 5s with a power supply and no SD card for $100; you can get an N150 mini PC with 16GB of RAM and 500GB SSD pre-installed for like $160. Double the RAM, double the CPU performance, and comes with an SSD.

    Imo, Raspberry Pis haven't been cost competitive general compute devices for a while now unless you want GPIO pins.

    • Havoc 4 hours ago

      Yeah, I have a collection of mini PCs - they are indeed great. This build was more NAS-focused: 9x SATA SSD and 6x NVMe... mini PCs just don't have the connectivity for that sort of thing.

      >Imo, Raspberry Pis haven't been cost competitive general compute devices for a while now unless you want GPIO pins.

      I have a bunch of Pi 4Bs that I'll use for a k8s HA control plane, but yeah, outside of that they're not ideal. Especially with the fragility of an SD card instead of NVMe (unless you buy the silly HAT thing).

    • dangus 3 hours ago

      The first thing I thought when I read this article was how Raspberry Pis just make this kind of thing more difficult and annoying compared to a regular PC, whether new (e.g. a cheap mini PC) or used (e.g. a used business workstation or just a plain desktop PC).

      And if you want GPIO pins, I'd imagine that for a lot of those applications you'd be better served by an ESP32, and that a Raspberry Pi is essentially overkill for many of those use cases.

      The Venn diagram for where the Pi makes sense seems smaller than ever these days.

      • perdomon an hour ago

        You're right that the Venn diagram is smaller than it was 5 years ago, but there are still some folks whose primary concern is electricity usage. Even the Pi 5 shines there (as long as you don't need too much compute).

  • master_crab an hour ago

    Don’t do K8s on Pis. The Pis will spend the majority of their horsepower running etcd, CNI of choice, other essential services (MetalLB, envoy, etc). You’ll be left with a minimal percentage of resources for the pods that actually do things you need outside the cluster.

    And don’t get me started on if you intend to run any storage solutions like Rook-Ceph on cluster.

  • reeredfdfdf 42 minutes ago

    Maybe I just got lucky, but a year or so ago I managed to find Kingston 32GB DDR4 ECC UDIMMs on Amazon for a price that was more or less identical to normal non-ECC RAM. Running a Ryzen system with 128GB of memory now.

  • Aurornis 8 hours ago

    > My most recent learning - DDR4 ECC UDIMMs are comically expensive. To the point where I considered just replacing the entire platform with something RDIMM rather than swapping to ECC sticks.

    DDR4 anything is becoming very expensive right now because manufacturers have been switching over to DDR5.

    • Havoc 7 hours ago

      Yeah, I built on AM4, and in hindsight spending more on the mobo & CPU to hop on AM5 would have been the smart move. Live & learn.

      On the plus side I have a lot of non-ECC DDR4 sticks that I'm dumping into the expensive market rn

j1elo 9 hours ago

I went in thinking that maybe there's something to learn for my grand total of 1 ThinkCentre M910q "homelab", but this author's setup is in another league; I'm sure it's closer to (or surpasses) the needs of a small/medium company!

  • tylerflick 7 hours ago

    It’s another league, but I don’t get the point of mixing enterprise rack-mounts with Raspberry Pis.

    • bombcar 3 hours ago

      Pis are a relatively quick and cheap (including power) way to get "another computer" that isn't a VM or otherwise dependent.

      VMs can add a lot of complexity that you don't really need or want to manage.

      And (perhaps unadmitted) lots of people bought Pis and then searched for use cases for them.

      • fluoridation an hour ago

        They're really not that cheap anymore, though. If you just need "another computer" you can probably find something more capable in the used market.

        • bombcar 38 minutes ago

          They never were terribly cheap, but there was a time when people who hadn't seen a single-board computer (that small) and didn't really keep abreast of the used market gobbled them up.

          One advantage over the used market is that you can easily keep getting the exact same one over and over again.

    • otter-in-a-suit 6 hours ago

      You'd be delighted (or terrified) to know that I just added an old gaming computer in a 4U case to the cluster, so I can play with PCI/GPU passthrough.

      The Dell is essentially the main machine that runs everything we actually use - the other hardware is either used as redundancy or for experiments (or both). I got the Pi from a work thing and this has been a fun use case. Not that I necessarily recommend it...

0x457 3 hours ago

I don't understand some home labs. You see a beefy but old rack server that has next to no single-thread performance (relatively speaking), usually severely underutilized, running Proxmox (it's a lab, after all), and an RPi doing something for some reason?

Those setups are always purely "home-lab", because they're too small or too MacGyvered together for anything but the smallest businesses... where they would be overkill anyway.

Sometimes it's people running a 2-3 node k8s cluster to run a few static workloads. You're not going to learn much about k8s from that, but you will waste CPU cycles on running the infra.

  • otter-in-a-suit 2 hours ago

    > none single-thread performance (relatively)

    I find horizontal scaling with many smaller cores and lots of memory more impactful for virtualization workloads than heavy single core performance (which, fwiw, is pretty decent on these Xeon Golds).

    The biggest bottleneck is I/O performance, since I rely on SAS drives rather than SSDs (and running full VMs has a lot of disk overhead), but I cannot justify the expense of upgrading to SSDs, let alone NVMe.

    > Those setups are always purely "home-lab", because they're too small or too MacGyvered together for anything but the smallest businesses... where they would be overkill anyway.

    That is a core part of the hobby. You do some things very enterprise-y and over-engineered (such as redundant PSUs and UPSs), while simultaneously using old hard drives that rely on your SMART monitor and pure chance to work (to pick 2 random examples).

    I also re-use old hardware that piles up around the house constantly, such as the Pi. I commented elsewhere that I just slapped an old gaming PC into a 4U case since I want to play with/tinker with/learn from GPU passthrough. I would not do this for a business, but I'm happy to spend $200 for a case and rails and stomach an additional ~60W idle power draw to do so. I don't even know what exactly I'll be running on it yet. But I _do_ know that I know embarrassingly little about how GPUs, X11, VNC, ... actually work, and that I have an unused GTX 1080.

    Some of this is simply a build-vs-buy thing (where I get actual usage out of it and have something subjectively better than an off-the-shelf product); the rest is pure tinkering. Hacking, if you will. I know a website that usually likes stuff like that.

    > You're not going to learn much about k8s from that

    It's possible you and I learn things very differently then (and I mean this a lot less snarky than it sounds). I built Raft from scratch in Scala 3 and that told me a lot about Raft and Scala 3, despite being utterly pointless as a product (it's on my website if you care to read it). I have the same experience with everything home lab / Linux / networking - I always learn something new. And I work for a networking company...

    • 0x457 an hour ago

      > It's possible you and I learn things very differently then (and I mean this a lot less snarky than it sounds). I built Raft from scratch in Scala 3 and that told me a lot about Raft and Scala 3, despite being utterly pointless as a product (it's on my website if you care to read it). I have the same experience with everything home lab / Linux / networking - I always learn something new. And I work for a networking company...

      Building k8s from scratch, you're going to learn how to build k8s from scratch, not how to operate and/or use k8s. Maybe you will learn some configuration management tool along the way, unless your plan is to just copy-paste commands from some website into a terminal.

      > I find horizontal scaling with many smaller cores and lots of memory more impactful for virtualization workloads than heavy single core performance (which, fwiw, is pretty decent on these Xeon Golds).

      Yeah, if you run a VM for everything that should be a systemd service, it scales well that way.

  • _dain_ 2 hours ago

    the point of the hobby is to have something to tinker with. if it were too easy it wouldn't be any fun.

    • 0x457 2 hours ago

      I know, that's why I have my own "lab". I just don't get why most other labs are so cookie-cutter - Proxmox + Home Assistant + (UniFi controller) + Pi-hole - and there is always an RPi somewhere next to a chunky server.

  • unethical_ban 2 hours ago

    Someone who successfully configures and tests a k8s cluster at home knows more about it than I do.

    • 0x457 an hour ago

      You overestimate the complexity. Even if you don't use a k8s installer/distribution and do it from scratch, you are not going to learn much about operating/using k8s.

srcreigh 6 hours ago

Can somebody explain the whole proxmox thing? I haven’t used it, I use k3s.

I don’t get why people use VMs for stuff when there’s docker.

Thanks!

  • Normal_gaussian 6 hours ago

    Primarily, docker isn't isolation. Where isolation is important, VMs are just better.

    Outside of that:

    Docker & k8s are great for sharing resources, VMs allow you to explicitly not share resources.

    VMs can be simpler to backup, restore, migrate.

    Some software only runs in VMs.

    Passing through displays, USB devices, PCI devices, network interfaces etc. often works better with a VM than with Docker.

    For my setup, I have a handful of VMs and dozens of containers. I have a proxmox cluster with the VMs, and some of the VMs are Talos nodes, which is my K8s cluster, which has my containers. Separately I have a zimaboard with the pfsense & reverse proxy for my cluster, and another machine with pfsense as my home router.

    My primary networking is done on dedicated boxes for isolation (not performance).

    My VMs run: plex, home assistant, my backup orchestrator, and a few windows test hosts. This is because:

    - The windows test hosts don't containerise well; I'd rather containerise them.
    - plex has a dedicated network port and storage device, which is simpler to set up this way.
    - Home Assistant uses a specific USB port & device, which is simpler to set up this way (rough USB-passthrough sketch below).
    - I don't want plex, home assistant, or the backup orchestrator to be affected by issues relating to my other services / k8s. These are the services where small transient or temporary issues would impact the whole household.
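
    For the USB case, the rough shape on the Proxmox host is (the VM ID and the stick's vendor:product ID are made up):

      # find the device's vendor:product ID on the host
      lsusb
      # pass it through to the Home Assistant VM
      qm set 110 -usb0 host=10c4:ea60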

    Also note, I don't use the Proxmox container support (I use Talos) for two reasons: 1) I prefer k8s to manage services; 2) the isolation boundary is better.

    • 0x457 2 hours ago

      > Primarily, docker isn't isolation. Where isolation is important, VMs are just better.

      Better how? What isolation are we talking about, home-lab? Multi-tenant environments for every family member?

      > Some software only runs in VMs.

      Like OS kernels and software not compiled for host OS?

      > Passing through displays, USB devices, PCI devices, network interfaces etc. often works better with a VM than with Docker.

      Insane take because we're talking about binding something from /dev/ to a namespace, which is much easier and faster than any VM pass-through even if your CPU has features for that pass-through.

      > plex has a dedicated network port and storage device, which is simpler to set up this way.

      Same, but my plex is just a systemd unit and my *arrs are in an nspawn container, also on its own port (only because I want to be able to access them without authentication on the overlay network).

      > I don't want plex, home assistant, or the backup orchestrator to be affected by issues relating to my other services / k8s.

      Hosting Plex in k8s is objectively wrong, so you're right there. I don't see what adding Proxmox into the mix gets you over those services just being systemd units. If they run on the same node, you're not getting any fault tolerance, just adding another thing that can go wrong (Proxmox).

  • varun_ch 5 hours ago

    Maybe my use case is abnormal, but I allocate the majority of my resources to a primary VM where I run everything, including containers, etc. By running Proxmox, though, I can now back up my entire server and even transfer it across the network. If I ever have some software to try out, I can do it in a new VM rather than on my main host. I can also 'reboot' my 'server' without actually rebooting the real computer, which meant less fan noise and interruption back when I used an actual rack-mounted server at home.
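
    The backup/move part is pleasantly boring. Roughly (the VM ID, storage, and target node names are examples):

      # snapshot-mode backup of the whole VM to a storage target
      vzdump 100 --storage local --mode snapshot
      # or, in a cluster, move the running VM to another node
      qm migrate 100 pve2 --online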

  • Modified3019 3 hours ago

    For my home archive NAS boxes, Proxmox is just a Debian distro with selective (mostly virtualization) things more up to date, and has ZFS and a web UI out of the box.

    I disable the high availability stuff I don’t use that otherwise just grinds away at disks because of all the syncing it does.

    It has quirks to work through, but at this point for me dealing with it is fairly simple, repeatable and most importantly, low effort/mental overhead enough for my few machines without having to go full orchestration, or worse, NixOS.

  • ab71e5 an hour ago

    You can manage containers with Proxmox. The idea with Proxmox is also to practice at home with something you might encounter at work.

  • otter-in-a-suit 6 hours ago

    Proxmox is essentially a clustered hypervisor (a KVM wrapper, really). It has some overlap with K8s (via containers), but is simpler for what I do and has some very nice features, namely around backups, redundancy/HA, hardware passthrough, and the fact that it has a usable GUI.

    I also use K8s at work, so this is a nice contrast to use something else for my home lab experiments. And tbh, I often find that if I want something done (or something breaks), muscle-memory-Linux-things come back to me a lot quicker than some obscure K8s incantation, but I suppose that's just my personal bias.

    Several of my VMs (which are very different from containers, obviously - even though I believe VMs on K8s _can_ be a thing...) run (multiple) docker containers.

  • baq 6 hours ago

    it's so you can have a machine to run docker on, basically.

    especially useful if you want multiple of those, and also helpful if you don't want one of them anymore.

    • oceanplexian 4 hours ago

      "Why would I need virtualization when I have Kubernetes".. sounds like a someone who has never had to update the K8s control plane and had everything go completely wrong. If it happens to you, you will be begging for an HVM with real snapshots.

    • indigodaddy 6 hours ago

      Makes backups of the KVM VM running docker easy too, right?

      • baq 6 hours ago

        and you can move the whole vm to a different host approximately trivially

  • MezzoDelCammin 6 hours ago

    Personally: Proxmox/VMs are great if you'd like to separate physical HW. In my case, virtualized TrueNAS means I can give it a whole SATA controller and keep it as an isolated storage machine.

    Whatever uses that storage usually runs in Docker inside an LXC container.

    If I need something more isolated (think public-facing Cloudflare), that's a separate Docker instance on another network, routed through another OPNsense VM.

    Desktop: a VM where I passed through a whole GPU and a USB hub.

    Best part: it all runs on fairly low-power HW (<20W idle for the NAS, plus whatever the hard drives take - generally ~5W/HDD).
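
    The passthrough bits boil down to a couple of qm calls. A sketch, assuming IOMMU is enabled and with made-up VM IDs and PCI addresses:

      # whole SATA controller to the TrueNAS VM
      qm set 200 -hostpci0 0000:02:00.0
      # GPU (as primary display, q35 machine type) plus a USB controller to the desktop VM
      qm set 201 -hostpci0 0000:01:00.0,pcie=1,x-vga=1
      qm set 201 -hostpci1 0000:06:00.0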

  • messe 5 hours ago

    Not everything I put in a VM runs on Linux.

NoboruWataya 8 hours ago

I second the shout-out for Mealie; it's very useful. Importing from URLs works very well, and it gives you a centralised place for all your recipes, without ads or filler content and safe from linkrot.

ryandrake 7 hours ago

Not sure I understand the need to use a Raspberry Pi here. They're cool and all, but wouldn't a regular old PC be simpler to set up, maintain, and attach hardware to? It's a hobby and you can do whatever you want, but I wouldn't involve a Pi in a home server setup unless I specifically needed something it bought me, like the small form factor, low power usage, GPIO pins and so on.

  • fartfeatures 7 hours ago

    I always need lower power consumption. I'm in the UK and my power costs are $0.40 per kWh. Even running a Raspberry Pi 5 24/7 would cost me $25 per year.

    • baq 6 hours ago

      N100 mini PCs will burn about as much power as a fully decked-out RPi 5, and they're so much more hypervisor-friendly.

    • Spooky23 6 hours ago

      Look for used thin clients. You can get HP t630s for ~$50 or less. They have a nice AMD SoC. If mostly idle, they draw about 2x a Pi. Loaded, they are similar.

  • otter-in-a-suit 6 hours ago

    I just commented on this above, but I actually got the Pi for free and it's a very capable device. I wouldn't buy one for this use case (nor do I really recommend it, but it _does_ work).

ocdtrekkie 8 hours ago

One of my favorite CyberPower perks is their RMCARDs for network monitoring: it's a separate module that works in basically all of their rackmount UPSes. You can replace the entire UPS without having to pay for the little mini web server again; it'll just pop right into the new unit.

  • icyfox 6 hours ago

    It's a neat chip, but I couldn't bring myself to spend in excess of the price of the UPS ($439.00 for the RMCARD at the time of writing). I ended up hooking my UPS up to my existing home server over USB via NUT, and it's been working well.
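
    The NUT side is tiny, for anyone curious. A minimal sketch of the driver config (the UPS name is arbitrary):

      # /etc/nut/ups.conf
      [cyberpower]
          driver = usbhid-ups   # generic driver that covers most USB HID UPSes, CyberPower included
          port = auto
          desc = "Rack UPS"

      # sanity check: upsc cyberpower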

    • ocdtrekkie 5 hours ago

      It is a little hefty for a homelab-level setup, but the impressive bit to me is that they've kept compatibility with it longer than we've replaced UPSes at work (it looks like the RMCARD205 and 305 were introduced in 2018), so instead of paying for that hardware built into each unit, the RMCARD has been a one-time purchase we can bring from unit to unit.

      • bobbob1921 4 hours ago

        APC also has UPSes that support their network management card add-in - the cards can frequently be purchased on eBay for $30-$100. You have to make sure the UPS supports the card, but they are excellent. Model numbers like AP9631 (if I recall). I run about 10 of them across different locations and they work great, some of them for 8+ years now. (About a year ago when I got a new UPS, APC still offered the older firmware for download; however, after a certain FW version it moved to a cloud subscription model, so be sure to keep the old firmware. If you're worried about the firmware not being updated on the non-cloud version, just be sure to VLAN/firewall access to them, which you should be doing anyway.)

philip1209 6 hours ago

Good reminder for me to set up a UPS for my home lab before I go on vacation. . .

immibis 7 hours ago

I've recently learned that "homelab" is a specific thing meaning you run certain software (like Proxmox), and not a generic term for running a 'server lab' at home.

  • shermantanktop 6 hours ago

    Most “homelabs” are built by a developer LARPing as a sysadmin, with a user population of one (themselves) or zero for most of the features.

    It’s the SUV that has off-road tires but never leaves the pavement, the beginner guitarist with an arena-ready amp, the occasional cook with a $5k knife. No judgment, everyone should do what they want, but the discussions get very serious even though the stakes are low.

    • shepherdjerred 4 hours ago

      LARPing as a sysadmin has a lot of benefits. It's taught me Ansible, Docker, Kubernetes, etc.

      Which are all pretty useful considering my day job is software engineering.

      Many of these things have been directly applicable at work, e.g. when something weird happens in AWS, or we have a project using obscure Docker features.

    • titanomachy 4 hours ago

      I don't personally have a homelab, but I think that (unlike a giant amp or SUV) the homelab lets you learn interesting skills that would be hard to learn otherwise. It seems more defensible to me.

      • shermantanktop 3 hours ago

        I have a small setup that could be considered homelab-ish - a NAS, a server, Docker+Portainer running a variety of services including HomeAssistant, a Plex server, UPS with graceful shutdown, and other stuff. I agree it's educational, it certainly has been for me; but everything I run has a practical purpose.

        People will build a huge multi-node cluster in their basement with Raspberry Pis, and benchmark it to point out performance issues that they absolutely can't live with, and so they are off to buy new SSDs or whatever. It's a hobby, but it's shaped like someone's actual job.

  • bombcar 3 hours ago

    People might gatekeep it, but in my opinion, if it is YOUR fault the TV won't play the show, you've got a homelab; if it's the ISP's or streaming service's fault, then you don't have a homelab.

    You know when you know.

  • woodrowbarlow 7 hours ago

    some people think it's not "homelabbing" unless you're doing things the way it's done at large scale. i think these people are aiming to enter IT as a career and consider a homelab to be a resume project.

    but proxmox and kubernetes are overkill, imo, for most homelab setups. setting them up is a good learning experience but not necessarily an appropriate architecture for maintaining a few mini PCs in a closet long term.

    you can ignore the gatekeeping.

    • placardloop 6 hours ago

      Homelabbing is a hobby for most people involved in it, and like other hobbies, some people dip their toes in it while others go diving in the deep end. But would you say it’s “overkill” for a hobbyist fisher to have multiple fishing poles? Or for a hobbyist painter to try multiple sets of paintbrushes? Or a hobbyist programmer to know multiple programming languages?

      There’s a lot of overlap between “I run a server to store my photos” and “I run a bunch of servers for fun”, which has resulted in annoying gatekeeping (or reverse gatekeeping) where people tell each other they are “doing it wrong”, but on Reddit at least it’s somewhat being self-organized into r/selfhosted and r/homelab, respectively.

    • thewebguyd 6 hours ago

      > i think these people are aiming to enter IT as a career and consider a homelab to be a resume project.

      It's funny. I did this (before it really became a more mainstream hobby, this was early 00s), but now that I work in ops I barely even want to touch a computer after work.

    • orthoxerox 6 hours ago

      k8s is definitely overkill if your goal is not learning k8s.

      proxmox is great, though. It's worth running it even if you treat it as nothing more than a BMC.

      • baq 6 hours ago

        I'm running an ubuntu server as a hypervisor only because the proxmox installer is using an older kernel than the actual system and wouldn't install on my box :/

  • otter-in-a-suit 6 hours ago

    Run whatever you like!

    I personally enjoy the big machines (I've also always enjoyed meaninglessly large benchmark numbers on gaming hardware) and enterprise features, redundancy etc. (in other words, over-engineering).

    I know others really enjoy playing with K8s, which is its own rabbit hole.

    My main goal - apart from the truly useful core services - is to learn something new. Sometimes it's applicable to work (I am indeed an SWE larping as a sysadmin, as another commenter called out :-) ), sometimes it's not.

  • walkabout 5 hours ago

    Where’d we get this term? I hear “home lab” and I think of having equipment to accomplish something new, not… running ordinary server software in fairly ordinary ways. Like Tony Stark designing his suits has a “home lab”. People 3D printing Warhammer figures or with a couple little servers running PiHole and Wireguard and such… not so much?

    I’ve had one or two machines running serving stuff at home for a couple decades [edit: oh god, closer to 2.5 decades…], including serving public web sites for a while, and at no point would I have thought the term “home lab” was a good label for what I was doing.

  • tehlike 7 hours ago

    You can run whatever. You don't need specific software

  • jrmg 5 hours ago

    Wait, what?

    Surely people have had ‘homelabs’ for longer than VMs and containers have been mainstream?