The main reasons why we chose it over the alternatives are:
- It integrates seamlessly with other AWS services: EC2 for automatic worker provisioning, IAM for in-cluster authentication and authorization, Redis for in-VPC caching, and Elastic Load Balancing for serving applications.
- Because all of its infrastructure is cloud-based, administering it is a much simpler task.
- It holds several ISO and CSA certifications, many of which attest that the provider follows best practices for secure cloud-based environments and information security.
- It is supported by almost all Kubernetes SIG utilities.
- Clusters can be fully managed using Terraform.
- It is constantly updated to support new Kubernetes versions.
- It supports OIDC, allowing our Kubernetes cluster to perform actions within AWS, such as automatically creating load balancers when applications are deployed.
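To make the OIDC point above concrete, here is a minimal sketch of how the trust side of that integration is typically wired up (IAM Roles for Service Accounts): the cluster's OIDC issuer is registered with IAM, and a role's trust policy federates to one specific Kubernetes service account. The account ID, issuer URL, namespace, and service-account name below are hypothetical placeholders, not values from our environment.

```python
import json

def irsa_trust_policy(account_id, oidc_issuer, namespace, service_account):
    """Build an IAM trust policy letting a Kubernetes service account
    assume an IAM role through the cluster's OIDC provider (IRSA)."""
    # IAM identifies the OIDC provider by host/path, without the scheme.
    oidc = oidc_issuer.removeprefix("https://")
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/{oidc}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    # Restrict the role to this namespace/service-account pair.
                    f"{oidc}:sub": f"system:serviceaccount:{namespace}:{service_account}",
                    f"{oidc}:aud": "sts.amazonaws.com",
                }
            },
        }],
    }

# Hypothetical values, for illustration only.
policy = irsa_trust_policy(
    "123456789012",
    "https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE",
    "kube-system",
    "aws-load-balancer-controller",
)
print(json.dumps(policy, indent=2))
```

A controller pod running under that service account can then call AWS APIs (for example, to create load balancers) with the role's permissions instead of node-level credentials.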
Alternatives we considered:
- Google Kubernetes Engine (GKE): We tested it a few years ago. Google engineers created Kubernetes, and that is one of the main reasons why GCP offers a more complete service. Overall, its GUI offered far more insight into nodes and pods; it also supported Terraform, was easier to configure, and supported new Kubernetes versions sooner. The reason we did not choose it was simple: we needed it to integrate with other cloud solutions already hosted in AWS. This is a clear example of cloud dependency.
- Azure Kubernetes Service (AKS): Pending review.
We use EKS for:
- Providing networking infrastructure for our Kubernetes cluster.
- Automatically deploying worker groups.
- Connecting to IAM for in-cluster authentication and authorization.
- Connecting to EC2 for automatic worker provisioning.
- Connecting to Redis for in-VPC caching.
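The IAM authentication item above works through a kubeconfig exec plugin: kubectl shells out to `aws eks get-token`, which signs an STS request with the caller's IAM credentials. The sketch below builds a minimal kubeconfig of that shape; the cluster name, endpoint, and CA data are hypothetical placeholders.

```python
def eks_kubeconfig(cluster_name, endpoint, ca_data, region):
    """Build a minimal kubeconfig that authenticates to an EKS cluster
    through IAM via the `aws eks get-token` exec plugin."""
    return {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{
            "name": cluster_name,
            "cluster": {
                "server": endpoint,
                "certificate-authority-data": ca_data,
            },
        }],
        "users": [{
            "name": cluster_name,
            "user": {
                "exec": {
                    # kubectl invokes the AWS CLI to obtain a short-lived
                    # bearer token backed by the caller's IAM identity.
                    "apiVersion": "client.authentication.k8s.io/v1beta1",
                    "command": "aws",
                    "args": ["eks", "get-token",
                             "--cluster-name", cluster_name,
                             "--region", region],
                }
            },
        }],
        "contexts": [{
            "name": cluster_name,
            "context": {"cluster": cluster_name, "user": cluster_name},
        }],
        "current-context": cluster_name,
    }

# Hypothetical cluster details, for illustration only.
cfg = eks_kubeconfig(
    "demo-cluster",
    "https://EXAMPLE.gr7.us-east-1.eks.amazonaws.com",
    "<base64-encoded-ca>",
    "us-east-1",
)
```

In practice `aws eks update-kubeconfig` generates an equivalent file; the sketch only shows which pieces tie the cluster endpoint to IAM-based authentication.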