Luxury Definition by a Kubernetes Engineer

Abdennour Toumi
3 min read · Sep 4, 2019


What’s your definition of Luxury? What’s your perspective on Luxury? How do you define Luxury?

If you love traveling & visiting resorts, you may think of something like this:


If you are fond of cars, you may define Luxury with the following:

If you are a Kubernetes engineer, how would you define Luxury?

Last month, I had the opportunity to set up a new Kubernetes cluster on EKS.

While preparing this cluster, I felt the Luxury of the environment, and I want to share this experience with the cloud-native community:


Ansible 🖤 k8s

Your local machine requires some applications & executables (docker, kubectl, helm, helmfile, aws-iam-authenticator, terraform, kind, the AWS CLI, …).

I use Ansible to set up my local environment; I don’t need to struggle with brew install kubernetes-helm or similar commands. Instead, I just reuse ready-made Ansible roles (e.g. abdennour.helmfile, andrewrothstein.kubernetes-helm, andrewrothstein.kubectl).

The playbook will reuse these roles, prepare your local environment, and you just need to watch its execution progress like a Boss 😎

- name: Laptop with kube dependencies
  hosts: localhost
  roles:
    - role: abdennour.kube_local_environment

This is my playbook to install kube utilities. Enjoy it!
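If you want to try the same approach, a minimal sketch looks like the following (the role name is the one referenced above; the inventory and connection flags are my assumption for a localhost run):

```sh
# Pull the role from Ansible Galaxy (role name as referenced above)
ansible-galaxy install abdennour.kube_local_environment

# Run the playbook against the local machine only
ansible-playbook -i "localhost," -c local playbook.yml
```

The trailing comma in `-i "localhost,"` tells Ansible this is an inline host list rather than an inventory file.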


Terraform 💜 k8s

I don’t need to log in to the AWS console and provision the EKS cluster by hand.

I don’t have to struggle with aws-iam-authenticator and the required IAM roles.

I don’t even need to provision worker nodes with CloudFormation.

Instead, I used ready-made Terraform modules and just played with the module’s attributes:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "5.0.0"

  cluster_name = local.cluster_name
  subnets      = module.vpc.public_subnets
  vpc_id       = module.vpc.vpc_id

  manage_aws_auth             = true
  map_users                   = var.map_users
  workers_additional_policies = var.workers_additional_policies

  worker_groups_launch_template = [
    {
      name                 = "worker-group-1"
      instance_type        = "m5.large"
      asg_desired_capacity = 4
      asg_max_size         = 10
      asg_min_size         = 3
      autoscaling_enabled  = true
    }
  ]

  worker_groups_launch_template_mixed = [
    {
      name                     = "spot-mixed"
      override_instance_types  = ["m5.large", "m5a.large"]
      spot_instance_pools      = 3
      spot_allocation_strategy = "lowest-price"
      asg_max_size             = 30
      asg_min_size             = 1
      asg_desired_capacity     = 3
      kubelet_extra_args       = ""
      autoscaling_enabled      = true
    }
  ]
}
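With the module block in place, provisioning is the usual Terraform workflow. As a sketch (the `kubeconfig` output name is my assumption based on that era of the terraform-aws-modules/eks module; the file path is illustrative):

```sh
terraform init    # download the eks/vpc modules and providers
terraform plan    # preview the cluster and worker groups
terraform apply   # provision EKS: control plane, node groups, aws-auth

# Point kubectl at the generated kubeconfig
terraform output kubeconfig > ~/.kube/eks-cluster
export KUBECONFIG=~/.kube/eks-cluster
kubectl get nodes
```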


Helm 💙 k8s

I don’t need to loop through my k8s YAML manifests and apply each one.

I don’t have to repeat myself by copying redundant names across all manifests to keep them consistent.

I don’t need to write a custom bash script to automate the deployment of k8s manifests.

I don’t have to assign a version to each change of my k8s manifests.

Instead, I create a Helm Chart for each microservice category, then I let Helm continue the mission.

Also, I don’t have to reinvent the wheel: I reused a lot of ready-made Helm charts (the Prometheus chart, the Ingress chart, …).

Helm is my Release engineer 🌹. Thanks Helm!
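As a rough sketch of that workflow (the release name, chart path, and values file are illustrative; the repo URL is the 2019-era stable repository), installing or upgrading a chart is a single idempotent command:

```sh
# Add the community chart repo once (2019-era "stable" repository URL)
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update

# Install or upgrade a microservice chart in one idempotent command
helm upgrade --install my-service ./charts/my-service \
  --namespace apps \
  --values values/my-service.yaml
```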


Oops! 😬 I have a lot of charts: charts written from scratch & community charts.

Customizing community charts requires providing values during installation.

How shall I organize these values files? How can I make them maintainable?

Helmfile is one of the best solutions: it’s the Helm of Helm.

Organize your Helm releases in a file named helmfile.yaml, which is a definition file of the charts you want to install in a k8s cluster.

It’s like docker-compose.yaml, which is a definition file of the containers you want to run.

Sync your Kubernetes cluster state to the desired one by running a single command:

helmfile apply


This is what my helmfile looks like:

releases:
  - name: metrics-server
    namespace: kube-system
    chart: stable/metrics-server
    version: 2.8.2
    values:
      - "./values/metrics-server.yaml"

  - name: cluster-autoscaler
    namespace: default
    chart: stable/cluster-autoscaler
    values:
      - "./values/cluster-autoscaler.yaml"

  - name: ingress
    namespace: ingress
    chart: stable/nginx-ingress
    values:
      - "./values/nginx-ingress.yaml"

  - name: prom
    namespace: monitoring
    chart: stable/prometheus-operator
    version: ~6.7.4
    set:
      - name: grafana.adminPassword
        value: {{ requiredEnv "GRAFANA_PASSWD" }}
    values:
      - "./values/prometheus.yaml"
  # ..............
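A typical session with such a helmfile might look like the following sketch. Note that `requiredEnv` makes the run fail fast when the variable is unset, and `helmfile diff` requires the helm-diff plugin:

```sh
export GRAFANA_PASSWD='change-me'   # consumed by requiredEnv in the helmfile

helmfile diff    # preview what would change (needs the helm-diff plugin)
helmfile apply   # diff first, then sync only the releases that changed
```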


A Udemy course about running EKS in production:


