Container Management
Kubernetes K3S

K3s Explained: What Is K3s?

K3s is a lightweight version of stock Kubernetes (K8s) that is easy to install, deploy, and manage. No, it is not a fork of Kubernetes. K3s is a certified Kubernetes distribution.

Although K3s is a refined version of Kubernetes (the upstream version), it does not change how Kubernetes works at its core.

K3s removes a lot of “bloat” from stock Kubernetes. Depending on the version, the single binary file can range from under 40MB to 100MB in size and run on less than 512MB of RAM.

Rancher made K3s lightweight by stripping roughly 3 million lines of code from the K8s source. It trimmed most non-CSI storage providers, alpha features, and legacy components that weren’t necessary to implement the Kubernetes API fully.

What Are The Advantages Of K3s?

K3s boasts several powerful benefits:

  • Lightweight – The single binary file is under 100MB, making it faster and less resource-hungry than K8s. You don’t need to spread the control plane and worker roles across multiple instances just to run efficiently.
  • A flatter learning curve – There are fewer components to learn before applying it to real-world situations.
  • Certified Kubernetes distribution – K3s passes the CNCF conformance tests, so workloads, manifests, and tooling that run on other Kubernetes distributions also run on K3s. It is not inferior to K8s.
  • Easier and faster installation and deployment – K3s takes seconds to minutes to install and run.
  • Run Kubernetes on ARM architecture – Devices built on ARM, from single-board computers to ARM-based servers, can run Kubernetes with K3s.
  • Run Kubernetes on Raspberry Pi – It’s so lightweight that it supports clusters made with Raspberry Pi.
  • Supports low-resource environments – For example, IoT devices and edge computing.
  • Remote deployment is easy – Bootstrap it with manifests to install after it comes online.
  • Smaller attack surface but ships with batteries – Although K3s is bare-bones, it contains all the necessary components, including a CRI (containerd), an ingress controller (Traefik Proxy), a CNI (Flannel), and manifests that install essentials like CoreDNS (see the sketch after this list).
  • Supports single-node clusters – A built-in service load balancer (ServiceLB) exposes LoadBalancer-type Services on the host’s IP.
  • Flexibility – Aside from the default SQLite, K3s also supports PostgreSQL, MySQL, and etcd datastores.
  • Start/stop feature – Turn K3s on and off without disturbing the environment, so you can update K3s, restart it, and pick up where you left off.
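
To make the “ships with batteries” point concrete, here is a minimal sketch, assuming the official Kubernetes Python client (`kubernetes`) is installed and that K3s has written its default kubeconfig to /etc/rancher/k3s/k3s.yaml. It lists the cluster’s nodes and the kube-system pods a fresh single-node install runs, where the bundled CoreDNS, Traefik, and svclb pods show up as ordinary workloads:

```python
# Minimal sketch: inspect what a fresh single-node K3s install runs.
# Assumes the official `kubernetes` Python client is installed and that the
# process can read K3s's default kubeconfig at /etc/rancher/k3s/k3s.yaml.
from kubernetes import client, config

config.load_kube_config(config_file="/etc/rancher/k3s/k3s.yaml")
v1 = client.CoreV1Api()

# A single-node cluster shows one node acting as both server and worker.
for node in v1.list_node().items:
    print(f"node: {node.metadata.name}")

# The bundled add-ons (CoreDNS, Traefik, local-path provisioner, metrics-server)
# and the svclb-* pods created by the built-in service load balancer appear
# here as ordinary pods.
for pod in v1.list_namespaced_pod("kube-system").items:
    print(f"kube-system pod: {pod.metadata.name}")
```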

But K3s isn’t limitless.

What Are The Disadvantages Of K3s?

Among its limitations is that K3s does not come with a distributed database by default. This limits the control plane’s high availability capabilities. You need to point K3s servers to an external database endpoint (etcd, PostgreSQL, or MySQL) to achieve a highly available control plane.
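
For illustration, here is a rough sketch of what pointing K3s servers at an external datastore can look like, using the documented --datastore-endpoint and --token server flags. The PostgreSQL connection string, hostname, and token below are placeholders, not values from any real deployment:

```python
# Illustrative sketch only: start a K3s server against an external datastore so
# that additional servers can join and form a highly available control plane.
# The endpoint URI and token are placeholders, not real credentials.
import subprocess

DATASTORE = "postgres://k3s:example-password@db.example.internal:5432/k3s"  # placeholder
TOKEN = "shared-cluster-token"  # placeholder; every server must use the same token

subprocess.run(
    [
        "k3s", "server",
        "--datastore-endpoint", DATASTORE,
        "--token", TOKEN,
    ],
    check=True,
)
```

Running the same command, with the same endpoint and token, on each additional server node joins it to the control plane.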

Fortunately, there are embedded alternatives. Earlier K3s releases experimented with Dqlite, a distributed, high-availability SQLite variant developed by Canonical that uses C-Raft to keep a tiny footprint, and newer releases ship an embedded etcd option you can use in place of the default SQLite database.

What Is The Difference Between K3s And K8s?

The most significant difference between K3s and K8s is how they are packaged.

Kubernetes and K3s share the same source code (upstream version), but K3s contains fewer dependencies, cloud provider integrations, add-ons, and other components that are not absolutely necessary for installing and running Kubernetes.

So, it may be more relevant to compare the use cases each Kubernetes version excels at rather than comparing K3s and K8s in terms of which is better.

Before that, here are a few differences between K3s and K8s:

  • K3s is a lighter version of K8s, which ships with more extensions and drivers. So, while K8s often takes around 10 minutes to deploy, K3s can have the Kubernetes API up in as little as one minute, starts faster, and is easier to auto-update and learn.
  • K8s runs its control plane components as separate processes, whereas K3s bundles the control plane components (plus the kubelet and kube-proxy) into a single binary that runs as a server or agent process.
  • K3s uses a SQLite3 datastore by default and also supports MySQL, PostgreSQL, and etcd3 for more complex needs, whereas K8s only supports etcd.
  • Unlike K8s, you can switch off embedded K3s components, giving you the freedom to install your own DNS server, CNI, and ingress controller. You can also point K3s at an existing Docker installation as its CRI instead of the bundled containerd (see the sketch after this list).
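
As a rough illustration of that last point, the documented --disable and --docker server flags let you drop bundled components and reuse an existing Docker installation as the CRI. The Python wrapper below simply shells out to the k3s binary and is a sketch, not a production launcher:

```python
# Illustrative sketch: launch K3s without its bundled Traefik ingress controller
# and service load balancer (so you can bring your own), and with Docker as the
# container runtime instead of the bundled containerd.
import subprocess

subprocess.run(
    [
        "k3s", "server",
        "--disable", "traefik",    # skip the bundled ingress controller
        "--disable", "servicelb",  # skip the built-in service load balancer
        "--docker",                # use an existing Docker install as the CRI
    ],
    check=True,
)
```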

When To Use K3s

Among the best uses for K3s are:

  • With K3s’s lightweight architecture, small businesses can run operations faster and with fewer resources while enjoying high availability, scalability, security, and other benefits of a full-blown Kubernetes (K8s) architecture.
  • Create a single-node cluster with K3s and keep a manifest-driven deployment workflow. It works great when you install K3s on an edge device or server, and it lets you reuse your existing CI/CD workflows and container images with your YAML files or Helm charts (see the manifest sketch after this list).
  • For cloud deployments, point K3s servers to a managed database like Google Cloud SQL or Amazon RDS and run multiple servers and agents to get a highly available control plane. For maximum uptime, run each K3s server in a separate availability zone.
  • K3s supports AMD64, ARM64, and ARMv7 architectures, among others. That means you can run it anywhere from a Raspberry Pi Zero, Intel NUC, or NVIDIA Jetson Nano to an a1.4xlarge Amazon EC2 instance, as long as you use a consistent installation process.
  • K3s can also handle environments with limited resources and connectivity, including industrial IoT devices, edge computing, remote locations, and unattended appliances.
  • It is a great way to run Kubernetes on-premises without additional cloud provider extensions. Using K3s with a third-party RDBMS or external/embedded etcd cluster can improve performance compared to a stock Kubernetes running in the same environment.
  • K3s comes online faster than K8s, so it’s suitable for running batch jobs, cloud bursting, CI testing, and various other workloads requiring frequent cluster scaling.
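
To illustrate the manifest-driven workflow mentioned above: K3s automatically applies any Kubernetes YAML placed in its auto-deploy directory, so a bootstrap or CI script can simply write files there. The sketch below assumes the directory is at its default location, /var/lib/rancher/k3s/server/manifests, and uses a placeholder application name and image:

```python
# Illustrative sketch: drop a Deployment manifest into K3s's auto-deploy
# directory; K3s watches this path and applies whatever YAML appears there.
# The app name and image are placeholders; the script needs permission to
# write to the directory (typically root on the K3s server node).
from pathlib import Path
from textwrap import dedent

MANIFEST_DIR = Path("/var/lib/rancher/k3s/server/manifests")

deployment = dedent("""\
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: edge-app            # placeholder name
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels: {app: edge-app}
      template:
        metadata:
          labels: {app: edge-app}
        spec:
          containers:
            - name: edge-app
              image: registry.example.com/edge-app:1.0.0  # placeholder image
""")

(MANIFEST_DIR / "edge-app.yaml").write_text(deployment)
```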

So where does that leave stock Kubernetes?

When To Use K8s

For everything else requiring heavy-duty, upstream Kubernetes, K8s is the best choice.

In the long run, both small and larger companies can use K8s to handle complex applications with multiple extensions, cloud provider add-ons, and external drivers to get things done.

What Next: Control Kubernetes Costs With CloudZero

Choosing K3s or K8s will ultimately depend on your project requirements and your available resources.

Alternatively, you can run both versions of Kubernetes for different purposes.

Either way, you still have to keep tabs on your Kubernetes costs, since they can rapidly spiral out of control. With CloudZero Kubernetes cost analysis, you can continuously track container costs by cluster, pod, or namespace.

For example, you can examine and understand how your cluster’s usage and cost fluctuate over time as you scale it up or down. Manual allocation rules are not required.

Find out more here.