MET : Kubernetes & Rancher (4/6)

Kubernetes has become the industry standard for managing containerized applications. But its complexity deters many teams. Rancher changes the game by making Kubernetes accessible to everyone.

In previous articles, we saw how Terraform creates our servers and Ansible configures them. Today, we dive into the heart of our infrastructure: Kubernetes, and its management interface Rancher.

Why Kubernetes?

Before Kubernetes, deploying an application looked like this:

  1. SSH into the server
  2. Update the code (git pull, rsync…)
  3. Restart services
  4. Pray it works
  5. If it crashes at 3am, get up to fix it

With Kubernetes:

  1. Describe the desired state (how many instances, which image, what resources)
  2. Kubernetes takes care of the rest
  3. If a container crashes, it’s automatically recreated
  4. Sleep peacefully

“Kubernetes is an orchestration system that maintains your applications in the state you defined, automatically.”

Why K3s instead of “classic” Kubernetes?

K3s is a Kubernetes distribution created by Rancher Labs. It’s Kubernetes, but in a lightweight version:

| Aspect | Kubernetes (kubeadm) | K3s |
| --- | --- | --- |
| Binary size | ~1 GB | ~100 MB |
| Minimum RAM | 2 GB | 512 MB |
| Installation | Complex (multiple components) | Single command |
| Database | etcd (managed separately) | Embedded SQLite (or optional etcd) |
| CNCF certification | ✅ | ✅ |

K3s is 100% Kubernetes compatible (CNCF certified), but much simpler to install and maintain. Perfect for SMBs and teams without a full-time Kubernetes engineer.
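The "single command" install in the table above is, per the official K3s documentation, just this (the agent line joins a second node to the cluster; `K3S_URL` and `K3S_TOKEN` values are placeholders):

```shell
# On the first server node: installs K3s and starts the control plane.
curl -sfL https://get.k3s.io | sh -

# On an additional node: join as an agent (token comes from
# /var/lib/rancher/k3s/server/node-token on the server).
curl -sfL https://get.k3s.io | K3S_URL=https://my-server:6443 K3S_TOKEN=<token> sh -
```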

Key Kubernetes concepts

Before going further, some essential concepts:

🏠 Namespace

An isolated space to group resources. We use one namespace per client or project:

├── infra-system      # Infrastructure services
├── databases         # Shared databases
├── client-alpha      # Client Alpha site
├── client-beta       # Client Beta site
└── monitoring        # Prometheus, Grafana
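A namespace is itself a tiny manifest. A sketch for one of the client namespaces above (the `name` label is an assumption, added so that NetworkPolicies can later select the namespace by label):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: client-alpha
  labels:
    name: client-alpha  # label used by namespaceSelector in NetworkPolicies
```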

📦 Pod

The basic unit: one or more containers sharing network and storage. In practice, often 1 pod = 1 container.
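For illustration, a minimal standalone Pod manifest (the name and image are hypothetical; in practice pods are almost always created through a Deployment rather than directly):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod        # hypothetical name
  namespace: client-alpha
spec:
  containers:
    - name: web
      image: nginx:1.25  # hypothetical image
      ports:
        - containerPort: 80
```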

🔄 Deployment

Manages the pod lifecycle: how many instances, which image, how to update.

🌐 Service

Exposes pods on the cluster’s internal network. Allows applications to communicate with each other.

🚪 Ingress

Exposes services to the outside (Internet) with domain and SSL management.

Concrete example: deploying an application

Here’s what a simple Kubernetes deployment looks like:

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-application
  namespace: client-alpha
spec:
  replicas: 2  # 2 instances for high availability
  selector:
    matchLabels:
      app: my-application
  template:
    metadata:
      labels:
        app: my-application
    spec:
      containers:
        - name: app
          image: my-registry/my-app:v1.2.3
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 30

This file describes:

  • 2 replicas: High availability; if one pod fails, the other takes over
  • Resources: CPU/RAM limits to prevent an app from monopolizing the server
  • livenessProbe: Kubernetes checks if the app responds, otherwise it restarts it
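Applying and checking this manifest is then a two-step sketch, assuming kubectl is pointed at the cluster:

```shell
# Declare the desired state; Kubernetes converges to it.
kubectl apply -f deployment.yaml

# Watch the two replicas come up in the client's namespace.
kubectl -n client-alpha get pods -l app=my-application
```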

Exposing the application

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-application
  namespace: client-alpha
spec:
  selector:
    app: my-application
  ports:
    - port: 80
      targetPort: 8080

---
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-application
  namespace: client-alpha
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - app.my-client.com
      secretName: app-tls
  rules:
    - host: app.my-client.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-application
                port:
                  number: 80

The Ingress automatically configures:

  • The domain: app.my-client.com
  • The SSL certificate: Automatically generated by Let’s Encrypt
  • Routing: To our service

Rancher: Kubernetes for humans

Kubernetes on the command line is powerful but verbose. Rancher provides an intuitive graphical interface that allows you to:

  • Visualize cluster state at a glance
  • Deploy applications from a catalog
  • Manage users and permissions
  • Monitor resources
  • Access logs in real-time

Rancher Dashboard: cluster overview

SSO with Office 365

We configured Rancher with Office 365 authentication. Result:

  • No additional password to manage
  • Two-factor authentication inherited from Office 365
  • Centralized access management

Simplified deployment

With Rancher, deploying an application becomes as simple as:

  1. Go to the client’s namespace
  2. Click “Deploy”
  3. Select the Helm chart (WordPress, Odoo, custom…)
  4. Fill in parameters (domain, resources…)
  5. Click “Install”

It’s the “Jelastic-like” experience we were looking for, but on our own infrastructure.

Our Kubernetes ecosystem

Around K3s and Rancher, we deployed several essential components:

🚦 Traefik – Ingress Controller

Traefik handles all incoming traffic:

  • Routing to the right applications based on domain
  • SSL termination
  • Automatic HTTP → HTTPS redirect
  • Load balancing between pods
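As an illustration, the HTTP → HTTPS redirect can be expressed as a Traefik Middleware CRD (a sketch: the resource name and namespace are assumptions, and K3s' bundled Traefik can also enable this globally via its entrypoint configuration):

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: redirect-https   # hypothetical name
  namespace: infra-system
spec:
  redirectScheme:
    scheme: https
    permanent: true      # 308 permanent redirect
```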

🔐 Cert-Manager – SSL Certificates

Never worry about expired certificates again:

  • Automatic generation via Let’s Encrypt
  • Automatic renewal before expiration
  • Wildcard support (*.mydomain.com)
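The `letsencrypt-prod` issuer referenced by the Ingress annotation earlier corresponds to a ClusterIssuer roughly like this (the email and secret name are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com  # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key  # hypothetical secret name
    solvers:
      - http01:
          ingress:
            class: traefik
```

Note that wildcard certificates cannot be validated over http01; they additionally require a dns01 solver with credentials for the DNS provider.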

💾 Longhorn – Distributed Storage

Persistent storage with high availability:

  • Data replication across 2 nodes
  • Automatic snapshots
  • Backup to S3
  • Web interface for management

Longhorn interface: volume and backup management
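Applications consume Longhorn through an ordinary PersistentVolumeClaim. A sketch (the claim name and size are assumptions; `longhorn` is the storage class the installer creates by default):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data        # hypothetical name
  namespace: client-alpha
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi      # assumed size
```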

🗄️ Shared databases

Rather than one database per application, we share:

  • PostgreSQL 15: For modern applications (Odoo, n8n…)
  • MariaDB 11: For WordPress and PHP applications
  • Redis 7: Cache and sessions

Each client has their own database, but on a shared instance. Resource savings and simplified maintenance.
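Inside a client's pod, the shared instance is reached through Kubernetes DNS (`<service>.<namespace>.svc.cluster.local`). A hypothetical container `env` fragment, with the database and Secret names invented for illustration:

```yaml
# Fragment of a Deployment's container spec (not a complete manifest)
env:
  - name: DB_HOST
    value: postgresql.databases.svc.cluster.local  # shared PostgreSQL service
  - name: DB_NAME
    value: client_alpha          # hypothetical per-client database
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: client-alpha-db    # hypothetical Secret
        key: password
```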

High availability in practice

With our 2-node configuration:

| Scenario | Impact |
| --- | --- |
| A pod crashes | Automatically recreated in seconds |
| A worker node fails | Pods migrate to the other node (~30 seconds) |
| Application update | Rolling update: zero downtime |
| Load spike | Horizontal scaling (more pods) |

All of this, without manual intervention.
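The "load spike" scenario relies on the HorizontalPodAutoscaler. A sketch targeting the deployment from earlier (the ceiling and threshold are assumptions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-application
  namespace: client-alpha
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-application
  minReplicas: 2
  maxReplicas: 6               # assumed ceiling
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80  # scale out above 80% of requested CPU
```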

Isolation and security

Each client is isolated in their namespace with:

NetworkPolicies

Firewall rules at the Kubernetes level:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: client-alpha
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: infra-system  # Only Traefik can access

ResourceQuotas

Resource limits per namespace:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: client-quota
  namespace: client-alpha
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    persistentvolumeclaims: "5"

A client cannot consume more than their quota, protecting others.
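One practical consequence: once `limits.cpu` and `limits.memory` appear in a quota, every container in the namespace must declare resource limits, or its pod is rejected. A LimitRange can supply defaults for containers that omit them (the values here are assumptions):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits   # hypothetical name
  namespace: client-alpha
spec:
  limits:
    - type: Container
      default:           # applied as limits when none are set
        cpu: 500m
        memory: 512Mi
      defaultRequest:    # applied as requests when none are set
        cpu: 100m
        memory: 256Mi
```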

What’s next?

We now have a complete Kubernetes platform:

  • ✅ High availability K3s cluster
  • ✅ Rancher interface with SSO
  • ✅ Ingress, SSL, storage configured
  • ✅ Namespace isolation

But deploying applications with YAML files remains tedious. In the next article, we’ll see how Helm allows packaging and deploying any application in minutes.

🚀 Interested in Kubernetes but find it complex?

We can guide you through Kubernetes adoption, from training to production deployment. Benefit from our experience to avoid common pitfalls.

Let’s talk about your project →