AWS EKS Deployment Guide

Step-by-step guide to deploy a production-ready Amazon EKS cluster

Deploying an AWS EKS Cluster

Amazon Elastic Kubernetes Service (EKS) is a fully managed Kubernetes service that makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS.

This guide walks you through the process of setting up a production-ready EKS cluster using best practices.

Prerequisites

Before you begin, ensure you have the following:

  • AWS account with appropriate permissions
  • AWS CLI installed and configured
  • kubectl installed
  • eksctl installed (the official CLI for Amazon EKS)
  • Helm (optional, for deploying applications)

Setting Up Your Environment

First, configure your AWS CLI with appropriate credentials:

aws configure

You'll be prompted to enter:

  • AWS Access Key ID
  • AWS Secret Access Key
  • Default region (e.g., us-west-2)
  • Default output format (json is recommended)

Verify your configuration:

aws sts get-caller-identity
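
The output should look something like this (the account ID and ARN below are placeholders):

{
    "UserId": "AIDAEXAMPLEID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your-user"
}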

Creating an EKS Cluster

The simplest way to create an EKS cluster is with eksctl, the official CLI for Amazon EKS (originally developed by Weaveworks).

Option 1: Using eksctl

  1. Create a cluster configuration file named eks-cluster.yaml:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-eks-cluster
  region: us-west-2
  version: "1.27"

nodeGroups:
  - name: ng-1
    instanceType: t3.large
    desiredCapacity: 3
    minSize: 2
    maxSize: 5
    iam:
      withAddonPolicies:
        albIngress: true
        cloudWatch: true
        autoScaler: true
    labels:
      role: worker
    ssh:
      allow: true # Enable SSH access
      publicKeyName: your-key-name # Specify your EC2 key pair name

vpc:
  cidr: 10.0.0.0/16
  nat:
    gateway: HighlyAvailable # Ensures NAT gateways in each AZ for HA

availabilityZones: ["us-west-2a", "us-west-2b", "us-west-2c"]

  2. Create the cluster (this may take 15-20 minutes):
eksctl create cluster -f eks-cluster.yaml
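
Once creation completes, you can confirm the cluster is active:

eksctl get cluster --region us-west-2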

Option 2: Using AWS Console

If you prefer a visual interface:

  1. Open the Amazon EKS console
  2. Click "Create cluster"
  3. Enter cluster details:
    • Name: my-eks-cluster
    • Kubernetes version: 1.27 (or latest available)
    • Cluster service role: Create new or select existing IAM role with EKS permissions
  4. Configure networking:
    • VPC: Create new or select existing
    • Subnets: Select at least two in different availability zones
    • Security groups: Create new or select existing
  5. Configure logging (optional but recommended)
  6. Review and create

Note that when creating through the console, you'll need to create node groups separately.

Configuring kubectl to Connect to Your EKS Cluster

Once your cluster is created, configure kubectl:

aws eks update-kubeconfig --name my-eks-cluster --region us-west-2

Verify your connection:

kubectl get nodes
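
You should see the three worker nodes from the node group in a Ready state, along the lines of (names will differ):

NAME                                        STATUS   ROLES    AGE   VERSION
ip-10-0-1-23.us-west-2.compute.internal     Ready    <none>   5m    v1.27.x
ip-10-0-2-34.us-west-2.compute.internal     Ready    <none>   5m    v1.27.x
ip-10-0-3-45.us-west-2.compute.internal     Ready    <none>   5m    v1.27.x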

Setting Up Core Add-ons

1. Cluster Autoscaler

The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster: it scales out when pods fail to schedule due to insufficient resources, and scales in when nodes are underutilized.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

Edit the deployment to add your cluster name:

kubectl -n kube-system edit deployment cluster-autoscaler

In the container's command section, update the --node-group-auto-discovery flag to reference your cluster name, and add the other two flags:

--node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-eks-cluster
--balance-similar-node-groups
--skip-nodes-with-system-pods=false

Then set the image to the autoscaler release matching your cluster's Kubernetes minor version (the old k8s.gcr.io registry is deprecated in favor of registry.k8s.io):

kubectl -n kube-system set image deployment.apps/cluster-autoscaler cluster-autoscaler=registry.k8s.io/autoscaling/cluster-autoscaler:v1.27.3
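
Confirm the autoscaler is running and discovering your Auto Scaling groups:

kubectl -n kube-system get pods -l app=cluster-autoscaler
kubectl -n kube-system logs deployment/cluster-autoscaler --tail=20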

2. AWS Load Balancer Controller

The AWS Load Balancer Controller manages AWS Elastic Load Balancers for Kubernetes services:

helm repo add eks https://aws.github.io/eks-charts
helm repo update

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-eks-cluster \
  --set serviceAccount.create=true \
  --set serviceAccount.name=aws-load-balancer-controller
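
Note that the controller needs AWS API permissions to manage load balancers, typically granted through IAM Roles for Service Accounts (IRSA). A sketch of that setup, assuming the IAM policy document published with the controller's releases; create the service account first, then pass --set serviceAccount.create=false with the same service account name to the Helm install above:

# Associate an OIDC provider with the cluster (required for IRSA)
eksctl utils associate-iam-oidc-provider --cluster my-eks-cluster --approve

# Create the controller's IAM policy from its published policy document
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json

# Create the service account with the policy attached
eksctl create iamserviceaccount \
  --cluster my-eks-cluster \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):policy/AWSLoadBalancerControllerIAMPolicy \
  --approve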

3. Amazon EBS CSI Driver for Persistent Volumes

For persistent volumes with Amazon EBS:

eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster my-eks-cluster \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --role-only \
  --role-name AmazonEKS_EBS_CSI_DriverRole

eksctl create addon \
  --name aws-ebs-csi-driver \
  --cluster my-eks-cluster \
  --service-account-role-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):role/AmazonEKS_EBS_CSI_DriverRole \
  --force
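
To verify the driver, provision a volume through it. A minimal sketch (the StorageClass and claim names are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com        # the EBS CSI driver installed above
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-gp3
  resources:
    requests:
      storage: 10Gi

With WaitForFirstConsumer, the EBS volume is created in the correct availability zone only when a pod first uses the claim.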

Setting Up Monitoring

Deploy Prometheus and Grafana for monitoring:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set grafana.adminPassword=yourSecurePassword
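
Wait for the pods in the monitoring namespace to reach Running before continuing:

kubectl get pods -n monitoring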

Access Grafana:

kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80

Then visit http://localhost:3000 and log in as admin with the password you set above.

Implementing Security Best Practices

Network Policies

Deploy Calico for network policy enforcement; on EKS it typically runs in policy-only mode alongside the AWS VPC CNI, so check the Calico EKS installation docs before applying this on a production cluster (the old docs.projectcalico.org manifest URL is deprecated):

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
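
Check that the Calico pods reach Running:

kubectl get pods -n kube-system -l k8s-app=calico-node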

Create a default deny policy (NetworkPolicy objects are namespaced, so apply this in each namespace you want to lock down):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
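
With default deny in place, traffic must be allowed explicitly per workload. A sketch of an allow rule (the app labels and port are assumptions for illustration):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080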

Pod Security Contexts

Pod Security Policies were removed in Kubernetes 1.25, so on a 1.27 cluster use per-workload security contexts together with the built-in Pod Security Standards instead. Create a restricted security context; note that some fields are pod-level and others container-level:

apiVersion: v1
kind: Pod
metadata:
  name: secure-app          # illustrative name
spec:
  securityContext:          # pod-level fields
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
    - name: app
      image: my-app:latest  # replace with your image
      securityContext:      # container-level fields
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
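
You can also enforce the restricted Pod Security Standard for an entire namespace via the built-in admission controller (the namespace name my-app is an assumption):

kubectl label namespace my-app pod-security.kubernetes.io/enforce=restricted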

Setting Up CI/CD with AWS CodePipeline (Optional)

  1. Create an ECR repository for your images (see the example below)
  2. Set up CodeBuild and CodePipeline with your source repository
  3. Configure the pipeline to build images, push to ECR, and update EKS deployments
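
For step 1, a minimal sketch (the repository name my-app is an assumption):

aws ecr create-repository --repository-name my-app --region us-west-2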

Cleaning Up

When you're done with your cluster:

eksctl delete cluster --name my-eks-cluster --region us-west-2

Or through the AWS Console:

  1. Delete all Kubernetes Services of type LoadBalancer (this removes the AWS load balancers they created)
  2. Delete node groups
  3. Delete the EKS cluster
  4. Delete associated CloudFormation stacks

Troubleshooting

Common Issues

  1. Node group creation fails: Check IAM permissions and VPC configurations
  2. Pods stuck in pending state: Check node resources and taints
  3. Service external IP not assigned: Verify AWS Load Balancer Controller deployment

Getting Logs

kubectl logs -n kube-system deployment/aws-load-balancer-controller
kubectl logs -n kube-system daemonset/aws-node

Additional Resources

  • Amazon EKS documentation: https://docs.aws.amazon.com/eks/
  • eksctl documentation: https://eksctl.io/
  • EKS Best Practices Guides: https://aws.github.io/aws-eks-best-practices/

This guide provides a foundation for deploying a production-ready EKS cluster. For specific use cases or additional configuration, refer to the AWS EKS documentation.