Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes provides a robust and scalable infrastructure to manage and deploy containerized applications in a consistent and efficient manner.

In this Kubernetes series, we will explore Kubernetes' essential features, its role in managing containerized workloads, and its extensibility options, empowering developers to harness its full potential.

A Kubernetes cluster is a set of physical or virtual machines (nodes) that work together to run containerized applications. The cluster is managed by the Kubernetes control plane, which includes several components responsible for maintaining the desired state of the cluster. These components ensure that applications are running as intended and handle tasks such as scaling, load balancing, and monitoring.
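Once you have kubectl access to any cluster, you can see the control plane endpoint and core services at a glance:

# Show the API server endpoint and core cluster services
kubectl cluster-info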

A cluster consists of two kinds of nodes, each running its own set of components:

  1. Master Node: The master node hosts the control plane of the cluster and manages its overall state. It includes several components:
    • API Server: Accepts and processes API requests, serving as the primary interface to the cluster.
    • Scheduler: Assigns workloads (pods) to available worker nodes based on resource requirements and constraints.
    • Controller Manager: Watches the state of the cluster and makes adjustments to ensure the desired state is maintained.
    • etcd: A distributed key-value store that stores the configuration data for the cluster.
  2. Worker Nodes (Minions): Worker nodes are the machines where containerized applications are deployed. They run the actual pods (groups of one or more containers) that make up the application. Each worker node has the following components:
    • Kubelet: Communicates with the master node and manages the containers on the node.
    • Container Runtime: Manages the containers on the node (e.g., Docker, containerd).
    • Kube-proxy: Manages network routing and load balancing for services within the cluster.
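On a running cluster, many of these components show up as pods in the kube-system namespace. Note that on a managed service like EKS the control plane components are hidden from you, so you will mostly see the node-level pieces:

# List system components; on EKS expect kube-proxy, DNS, and CNI pods
kubectl get pods -n kube-system -o wide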

Create EKS Cluster

There are two ways to create a cluster:

1. Create from dashboard

Go to the Amazon EKS dashboard to create an EKS cluster:

https://console.aws.amazon.com/eks/home#/clusters

Add cluster > configure the name, version 1.27, and select the role adamAmazonEKSClusterRole, then press Next > select the VPC and leave the others as default, then press Next > keep logging as Default and press Next > keep the add-ons settings as Default and press Next > Create

Wait a few minutes for the cluster to be created.

(Here I set the cluster name: demo-version)
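For reference, the same cluster can also be created from the AWS CLI (a sketch; the account ID and subnet IDs below are placeholders you must replace):

aws eks create-cluster \
  --name demo-version \
  --kubernetes-version 1.27 \
  --role-arn arn:aws:iam::<account-id>:role/adamAmazonEKSClusterRole \
  --resources-vpc-config subnetIds=subnet-<id-1>,subnet-<id-2>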

Create nodes

Make sure the node IAM role is ready first (you can refer to this).
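If you don't have one yet, here is a minimal sketch of creating the node role with the AWS CLI, assuming the role name demoNodeRole (an arbitrary choice):

# Trust policy that lets EC2 instances assume the role
cat > node-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name demoNodeRole \
  --assume-role-policy-document file://node-trust-policy.json

# Managed policies that EKS worker nodes require
aws iam attach-role-policy --role-name demoNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
aws iam attach-role-policy --role-name demoNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
aws iam attach-role-policy --role-name demoNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly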

  1. Open the AWS Management Console and navigate to the Amazon EKS service.
  2. Select your EKS cluster from the list of clusters.
  3. In the cluster details page, click on the “Compute” tab.
  4. Click on the “Add node group” button.
  5. Fill in the required information for the node group, including the name, instance type, desired capacity, and other configuration options.
  6. You can choose to use an existing IAM role for the node group or create a new IAM role. The IAM role should have the necessary permissions to interact with your EKS cluster and other AWS resources.
  7. Configure the networking settings for the node group, such as the VPC and subnets to be used.
  8. Optionally, you can add tags to your node group for better organization and identification.
  9. Review the configuration and click on the “Create” button to create the node group.
  10. The node group creation process will start, and it will take a few minutes to provision the new nodes.

Once the node group is created successfully, the new nodes will be added to your EKS cluster, and they will be ready to run pods. The nodes will automatically join the EKS cluster and become part of the Kubernetes cluster, managed by the EKS control plane.
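You can verify the node group from the CLI (assuming the cluster name demo-version used above):

# List node groups attached to the cluster
aws eks list-nodegroups --cluster-name demo-version

# Confirm the new nodes have joined and report Ready
kubectl get nodes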

2. Create with eksctl

cluster.yaml (or you can create it with a new IAM role and VPC; refer to this)

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${EKS_CLUSTER_NAME}
  region: ${AWS_REGION} 
  version: '1.27'
  tags:
    created-by: adam
    env: ${EKS_CLUSTER_NAME}

managedNodeGroups:
- name: nodeGroupV2
  desiredCapacity: 2
  minSize: 1
  maxSize: 3
  instanceTypes: ['t3.small', 't3.medium']
  privateNetworking: true
  labels:
    user: 'adam'

iam:
  withOIDC: true
  serviceAccounts:
  - metadata:
      name: eks-cluster-policy
      namespace: backend-apps
      labels: {aws-usage: 'application'}
    attachPolicyARNs:
    - 'arn:aws:iam::aws:policy/AmazonEKSClusterPolicy'

# Custom VPC (reuse an existing VPC)
vpc:
  id: 'vpc-{vpc_id}'
  clusterEndpoints:
    privateAccess: true
    publicAccess: true # make the cluster endpoint publicly accessible
  subnets:
    private:
      us-west-2a:
          id: 'subnet-{subnet_id}' # private subnet in us-west-2a
      us-west-2c:
          id: 'subnet-{subnet_id}' # private subnet in us-west-2c
    public:
      us-west-2a:
          id: 'subnet-{subnet_id}' # public subnet in us-west-2a
      us-west-2c:
          id: 'subnet-{subnet_id}' # public subnet in us-west-2c

## Create new VPC and two availability zones
#availabilityZones:
#- ${AWS_REGION}a
#- ${AWS_REGION}c

#vpc:
#  cidr: 10.42.0.0/16
#  clusterEndpoints:
#    privateAccess: true
#    publicAccess: true

# Enable logging (not enabled by default)
cloudWatch:
  clusterLogging:
    enableTypes:
      - 'controllerManager'



Apply the cluster configuration (creation generally takes about 20 minutes):

eksctl create cluster -f cluster.yaml
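Note that eksctl does not expand shell variables such as ${EKS_CLUSTER_NAME} inside the config file, so substitute them first, e.g. with envsubst (the values below are example assumptions):

export EKS_CLUSTER_NAME=demo-version   # example value
export AWS_REGION=us-west-2            # example value, matching the subnets above

envsubst < cluster.yaml | eksctl create cluster -f -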

After the cluster is created, run one of the following commands so kubectl can talk to it (use-cluster is a helper script from the EKS workshop environment; the aws CLI command works everywhere):

use-cluster $EKS_CLUSTER_NAME

# or
aws eks update-kubeconfig --region $AWS_REGION --name $EKS_CLUSTER_NAME
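A quick check that kubectl now points at the new cluster:

# Prints the active kubeconfig context (should reference the new cluster's ARN)
kubectl config current-context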

Confirm cluster status

kubectl get all --all-namespaces
kubectl get nodes

Try getting the pods (it's okay to see no resources before we deploy an app):

$ kubectl get pods
No resources found in default namespace.
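To sanity-check that the worker nodes can actually schedule workloads, you can run a throwaway pod (a sketch using the public nginx image; the pod name test-nginx is arbitrary):

# Launch a single test pod
kubectl run test-nginx --image=nginx

# The pod should reach Running within a minute or so
kubectl get pods

# Clean up
kubectl delete pod test-nginx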

Delete cluster

eksctl delete cluster --region=$AWS_REGION --name=$EKS_CLUSTER_NAME
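Deletion also takes several minutes; you can confirm the cluster is gone afterwards with:

eksctl get cluster --region=$AWS_REGION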