Hacker News
4 years ago by garettmd

There is a large community of folks running Kubernetes at home, aptly known as k8s-at-home (https://k8s-at-home.com/). They're a great resource for anyone wanting to get started with k8s at home (and especially on Pis). They have helm charts for deploying a lot of common apps, a Discord channel, tutorials, etc.

4 years ago by dharmab

Thanks for this! I recently set up a k8s home cluster for running an older game's dedicated servers. This will be useful!

(For the inevitable "why?": One of the quirks of this old game is that you can run multiple server instances networked together to work around the lack of multithreading in the code. I'm developing k8s tooling to leverage that to scale performance. https://community.bistudio.com/wiki/Arma_3:_Headless_Client)

4 years ago by gclawes

They have the highest quality helm charts I've ever seen, great project.

4 years ago by Cieric

Excuse my ignorance, but in what way is this bare-metal? I've always taken that term to mean running without an operating system. My best guess is that it means it's not run in the cloud, but I figure that would be a given since "Raspberry Pi" is in the name.

4 years ago by jeroenhd

Many, if not most, Kubernetes systems run in the cloud in virtual machines or managed containers. Here, Kubernetes is running on the Pi itself, with no hypervisors or time sharing to chop away at performance.

That's not to say that running k8s on bare metal isn't something that's done. It's more difficult, because you need to do a lot of configuration and verification of best practices yourself, but it can easily become the cheaper option if you have some scaling requirements but do not require the infinite scaling possibilities of the cloud. The entire industry seems to swing back and forth between cloud and bare metal every few years, and I'm not really sure where we are in that cycle right now (I think more and more companies are in the process of going back to bare metal? The move in the other direction could've already started, I'm not sure.)

Technically you could set up a distributed hypervisor cluster consisting solely of networked Raspberry Pis, but I doubt you'd have many interested customers. So yes, "bare metal" is probably the norm for these computers. It's not for Kubernetes, though, and that's what makes this different from your typical deployment.

4 years ago by thrashh

That’s a lot of text to say that bare metal means “no OS” to firmware engineers and “running on a dedicated system” to application engineers.

4 years ago by jacobwg

This is "bare-metal" in the sense of "not virtualized", meaning the host operating system is not running in a virtual machine. You can see this distinction in cloud environments too, for instance most AWS EC2 machine types are virtualized, but AWS also offers "bare metal" instance types that provide direct access to a physical machine.

4 years ago by loxias

I understand "bare-metal" to mean without an operating system. More recently, the definition has confusingly expanded to sometimes include an operating system, but without a hypervisor.

This is a tutorial on installing Ubuntu, then k3s, then other software. What exactly is "bare-metal" about this?? :)

4 years ago by outworlder

> What exactly is "bare-metal" about this?? :)

In this context, this means running K8s nodes directly on the hardware.

As opposed to running the nodes as virtual machines. Normally VMs are used in the context of cloud providers, but it's not uncommon (with beefier hardware) to run k8s nodes as VMs in datacenters. Deployments on top of OpenStack, Azure Stack or Anthos are common, as is ESXi. It's another abstraction layer, but one that gives you easier handling of things like storage and, in some cases, networking.

> More recently, the definition has confusingly expanded to sometimes include an operating system, but without a hypervisor.

That's exactly it - the definition has expanded.

4 years ago by topdancing

> I understand "bare-metal" to mean without an operating system.

In your mind, how does a computer system function without an operating system?

4 years ago by hnlmorg

Bare-metal software (in the original sense of the term) interfaces directly with the hardware without abstracting it via an operating system. Much like an operating system would need to do in order to provide that abstraction (albeit you might not need to worry about paging, kernel rings, etc. with bare-metal software).

You see this in plenty of domains: firmware, embedded systems, UEFI, bootloaders, etc.

This used to be the norm too. Old 8-bit personal computers like Commodores didn't run an OS; instead, they'd have BASIC run from firmware (though you could get CP/M, GEM and others for a lot of the later generations of 8-bit micros).

You can also get modern software that runs bare metal without an OS. eg this game: https://github.com/adventurerok/Bare-metal-Space-Invaders-Cl...

4 years ago by jandrese

Like an embedded system? Where at boot it just jumps to some offset in ROM where your program lives and starts executing. If you want I/O, you'd better bring your own library and/or be willing to set registers yourself.

4 years ago by topdancing

Even embedded systems run on an OS. I'm wondering if the GP knows that Ubuntu is running Linux (i.e., an OS) under the hood.

And that Kubernetes deployments out there primarily run on Linux.

4 years ago by loxias

Simple. It only runs one thing: what you think of as a "program".

I've written for bare metal on several platforms. Operating systems exist for a specific use case: when you want to run potentially multiple programs on the same piece of hardware over time (not necessarily concurrently), and you don't want your software to have to think about how to interface with the hardware. That's what operating systems do. There are plenty of cases where you don't need, or want, an OS.

4 years ago by op00to

The term has different definitions based on context.

4 years ago by m0zg

Ubuntu is installed on bare metal, isn't it? :-) But on a more serious note, "bare metal" in the context of K8S means merely that K8S is not pre-installed by a cloud provider for you, and there are no external services (such as storage or load balancing) available out of the box.

4 years ago by rasulkireev

This is awesome! I will 100% do a similar project in the future, so I've saved this article for reference. Well written.

Quick question: if, for example, you decided to add another RPi to the cluster, how easy do you think it would be? Just attach it and connect to the network?

4 years ago by amzans

Hey! I'm the author of the post, glad you enjoyed it!

Yes, you would just attach it to the network switch and have it join the cluster.

The control plane will "discover" the additional capacity and redistribute workloads if necessary.

With k3s, a new node can join the cluster with a single command (assuming you have the API token).

Something like:

  $ curl -sfL https://get.k3s.io | K3S_URL=https://$YOUR_SERVER_NODE_IP:6443 K3S_TOKEN=$YOUR_CLUSTER_TOKEN sh -
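
If I remember right, k3s generates that token on the server node and stores it in a file, so you can grab it with something like:

  # default token location on the k3s server node (path may differ by version)
  $ sudo cat /var/lib/rancher/k3s/server/node-token
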
4 years ago by zzyzxd

One disadvantage of k3s is that it does not have an HA control plane out of the box (specifically, users are expected to bring their own HA database solution [1]). Without that, losing the single-point-of-failure control plane node is going to give you a very bad day.

I use kubespray [2] to manage my Raspberry Pi based k8s homelab, and replacing any node, including HA control plane nodes, is as easy as swapping the board and executing an Ansible playbook. The downsides of this are that it requires the user to have more knowledge about operating k8s, and a single Ansible playbook run takes 30-40 minutes...
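
(For comparison, the k3s external-datastore route is itself roughly a one-liner; this is a sketch based on the HA docs [1], with the endpoint and credentials as placeholders:)

  # k3s server backed by an external MySQL datastore -- placeholders throughout
  $ curl -sfL https://get.k3s.io | sh -s - server \
      --token=$YOUR_CLUSTER_TOKEN \
      --datastore-endpoint="mysql://user:pass@tcp(db.example.com:3306)/k3s"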

1. https://rancher.com/docs/k3s/latest/en/installation/ha/

2. https://github.com/kubernetes-sigs/kubespray

4 years ago by yankcrime

> One disadvantage of k3s is that it does not have an HA control plane out of the box

This hasn't been true for a while, since these days K3s ships with etcd embedded: https://rancher.com/docs/k3s/latest/en/installation/ha-embed...
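
The embedded-etcd flavor is similarly terse; roughly (per those docs, with the token and IP as placeholders):

  # first server bootstraps the embedded etcd cluster
  $ curl -sfL https://get.k3s.io | sh -s - server --cluster-init
  # additional servers then join it
  $ curl -sfL https://get.k3s.io | K3S_TOKEN=$YOUR_CLUSTER_TOKEN sh -s - server \
      --server https://$FIRST_SERVER_IP:6443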

4 years ago by thedougd

I like k3s, a lot, and this is not an endorsement of MicroK8s over k3s. But I found it quite easy to burn an SD card with the latest Ubuntu Server image for RPi and install MicroK8s. Yes, it has the snapd stuff that seemingly nobody likes. However, this quick experiment of mine has been running for nearly two years and I haven't felt compelled to change it. I've been through k8s upgrades from 1.18 to 1.21. Also, while at first the plugins annoyed me, I let it go and found it easy to add MetalLB and other necessities through the provided plugins.
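
For reference, the plugin flow is a one-liner per feature; something like this, with the address range as a placeholder for your LAN:

  # install microk8s from the snap, then enable addons as needed
  $ sudo snap install microk8s --classic
  $ microk8s enable dns
  $ microk8s enable metallb:192.168.1.240-192.168.1.250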

4 years ago by ozarkerD

Wow, this is almost exactly like my setup :D One thing I noticed was much better performance after I switched from booting/running off an SD card to a decent flash drive. Nice write-up!

4 years ago by PureParadigm

I've been running something similar on my three Raspberry Pi 4s with microk8s and flux [1]. Flux is great for a homelab environment because I can fearlessly destroy my cluster and reinstall my services on a fresh one with just a few commands.
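
(The bootstrap looks something like this -- the repo details here are placeholders:)

  # flux v2 bootstrap against a GitHub repo; owner/repo/path are placeholders
  $ flux bootstrap github --owner=$GITHUB_USER \
      --repository=homelab-fleet --path=clusters/pi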

Next on my list is setting up a service mesh like Istio and trying inter-cluster networking between my cloud cluster and my home Raspberry Pi cluster. Perhaps I can save some money on my cloud cluster by offloading non-essential services to the Pi cluster.

I'm also curious about getting a couple more external SSDs and setting up some Ceph storage. Has anyone tried this? How is the performance?

One of my pain points is the interaction of the load balancer (metallb) with the router. It seems to want to assign my cluster an IP from a range, but may choose different ones at different times. Then I have to go update the port-forwarding rules on my router. What solutions do you all use for exposing Kubernetes services to the internet?

[1] https://fluxcd.io/

4 years ago by merb

Your assumption about metallb is wrong: metallb uses the range you configured, and by the way, you can disable auto-assignment via `auto-assign: false`.

https://metallb.universe.tf/configuration/

However, now your Type: LoadBalancer services won't get an IP automatically. BUT you can now use the spec.loadBalancerIP value to manually set the IP you want.

- https://metallb.universe.tf/usage/#requesting-specific-ips

- https://kubernetes.io/docs/concepts/services-networking/serv...
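
Roughly, the two pieces fit together like this (a sketch using the ConfigMap-style config from those docs; the addresses and app name are placeholders):

  # metallb pool with auto-assign disabled -- addresses are placeholders
  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: metallb-system
    name: config
  data:
    config: |
      address-pools:
      - name: static
        protocol: layer2
        addresses:
        - 192.168.1.240-192.168.1.250
        auto-assign: false
  ---
  # a Service pinned to one of those addresses (hypothetical app name)
  apiVersion: v1
  kind: Service
  metadata:
    name: my-ingress
  spec:
    type: LoadBalancer
    loadBalancerIP: 192.168.1.240
    selector:
      app: my-ingress
    ports:
    - port: 80
      targetPort: 8080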

4 years ago by deorder

Also have a look at Kilo to connect your cloud and home cluster using WireGuard: https://kilo.squat.ai/

Instead of MetalLB I used PureLB, but now I use no LB: https://purelb.gitlab.io/docs/

I use Rook for storage: https://rook.io/

4 years ago by outworlder

> One of my pain points is the interaction of the load balancer (metallb) with the router

That part is incredibly annoying. Wondering about that as well. The ideal solution would involve something like a cloud-controller-manager-style driver that could talk to the router directly, as is done with cloud provider APIs.

4 years ago by awinter-py

ugh it's great that k3s exists, but frustrating that kube can't hit this target on its own

it seems like small clusters are not economical with vanilla kube (I say this having frequently tried and failed to do it, but not having done the napkin math on system pod budgets to prove it to myself). And this gets worse once you try to install any kind of plugins or monitoring tools.

I really wonder if there's a hole in the market for 'manage 5-10 containers with ingress on one or two smallish nodes'. Or if there's a hard core of users of alternatives like swarm mode. this guy https://mrkaran.dev/posts/home-server-nomad/ evolved his home lab from kube to nomad to terraformed pure docker over 3 years.

4 years ago by runlevel1

It'd be a whole lot easier to hit this goal without needing a minimum of 3 nodes for etcd quorum.

I'd love to see K8s get K3s' support for using an external MySQL, Postgres, or SQLite instead of etcd.

4 years ago by awinter-py

From this https://github.com/k3s-io/kine it seems like k3s was born out of a Rancher Labs project called 'kine' that does what you're describing?

4 years ago by outworlder

> it seems like small clusters are not economical with vanilla kube

Why, though? The memory footprint is a couple hundred MB. You ideally need 3 nodes, but you _can_ run on one. I have deployed a single-node MicroK8s without issues.

Usually, the containers themselves (your workloads) are the hogs. Deploying multiple pod replicas on a single machine has innate inefficiencies.

4 years ago by gizdan

The majority of people using K8s aren't hobbyists. They're enterprises running hundreds if not thousands of nodes. For most of them, the offering of K3s is irrelevant. They can spare the extra few hundred megs of RAM needed.

4 years ago by awinter-py

As the former parent of a 100+ node cluster, I'm mostly with you -- but we also had dev environments that were 1-10 pods, where we would have liked low overhead and didn't need HA.

also valuable for creating local copies of cloud infra so you can develop + test
