In the previous article, we packaged up the reference implementation's Kubernetes manifests into a Helm chart. We then used Helm's CLI to install and uninstall that Helm chart on our local Microk8s Kubernetes cluster. Today, we tackle installing the application to a cloud provider.
We will be using Amazon as our cloud provider, specifically its Elastic Kubernetes Service. AWS currently holds the largest market share (33%) among the leading cloud infrastructure service providers, so there is a high probability you will be entertaining AWS as your provider of choice.
Amazon's Elastic Kubernetes Service (EKS)
What is Amazon's Elastic Kubernetes Service (EKS)? EKS is Amazon Web Services' managed Kubernetes product offering, which handles the provisioning and management of an organization's Kubernetes infrastructure. EKS allows us to create scalable Kubernetes clusters on demand in the cloud and pay only for the resources consumed.

Setting up
The first step is to create an AWS account. If you have never used AWS, they offer a Free Tier that provides 12-month free, always-free, and short-term free options to explore their products. After the Free Tier expires, each EKS cluster costs only $0.10/hour. Using Namespaces and IAM security policies, each cluster can run multiple applications. Once we have our account, we must set up four Command Line Interface (CLI) tools: AWS CLI, kubectl, eksctl, and Helm before deploying our Helm charts to our EKS cluster.
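To put the $0.10/hour figure in perspective, a control plane left running around the clock costs roughly $73 per month (0.10 × ~730 hours). The multi-application claim boils down to namespace isolation; here is a minimal sketch using purely illustrative namespace names (you would run this only after the cluster and kubectl are set up below):

# Each application gets its own namespace on the shared cluster;
# IAM and RBAC policies can then be scoped to each namespace.
kubectl create namespace storefront
kubectl create namespace analytics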
AWS CLI

Once you have a valid AWS account, the next step is to install the AWS Command Line Interface (CLI) tool. This tool allows us to interact with AWS from the command line. We will be using version 2.x of the tool. Follow the installation instructions for your operating system.

Linux
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" unzip awscliv2.zip sudo ./aws/installAfter installation ,verify the CLI is working correctly by checking its version.
aws --version
macOS
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg" sudo installer -pkg AWSCLIV2.pkg -target /After installation, verify the CLI is working correctly by checking its version.
aws --version
Windows
- Download the AWS CLI MSI installer for Windows (64-bit) at https://awscli.amazonaws.com/AWSCLIV2.msi
- Run the MSI installer. By default, the AWS CLI installs to C:\Program Files\Amazon\AWSCLIV2.

After installation, verify the CLI is working correctly by checking its version.
aws --version
AWS CLI configuration
We must configure the CLI with our security credentials, the desired default region, and the preferred default output format to successfully communicate with AWS. We can accomplish this with the configure command.

aws configure

This command requires us to supply values for the following four fields.
AWS Access Key ID [None]: <YOUR-AWS-ACCESS-KEY-ID>
AWS Secret Access Key [None]: <YOUR-AWS-SECRET-ACCESS-KEY>
Default region name [None]: <YOUR-DEFAULT-REGION-NAME>
Default output format [None]: <YOUR-DEFAULT-OUTPUT-FORMAT>

These values will be used whenever you access the CLI unless otherwise directed. For more information, refer to the Configuring the AWS CLI page.
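A quick way to confirm the credentials were picked up (not part of the original walkthrough, but a standard AWS CLI call) is to ask AWS who you are:

# Prints the account ID, user ID, and ARN of the configured identity.
aws sts get-caller-identity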
kubectl CLI again
We were introduced to and used the kubectl command packaged with Microk8s to communicate with our Kubernetes cluster in previous articles. To deploy our Helm chart to EKS, we will need to install a version of kubectl that corresponds with Amazon Web Services' EKS. Follow the instructions for your specific operating system and operating region.

Linux
All regions other than China.

curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.8/2020-09-18/bin/linux/amd64/kubectl

Beijing and Ningxia China regions.
curl -o kubectl https://amazon-eks.s3.cn-north-1.amazonaws.com.cn/1.18.8/2020-09-18/bin/linux/amd64/kubectl

Make the file executable.
chmod +x ./kubectl

Move the file to an appropriate location.
sudo mv ./kubectl /usr/local/bin

Now verify it was correctly installed using the following command:
kubectl version --short --client
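Optionally, you can check the integrity of the download before moving it into place. The same S3 prefix also publishes a .sha256 checksum file alongside each binary (an assumption based on the URL pattern in the AWS documentation; skip this step if the file is not present):

# Download the published checksum, then compare it against the local hash.
curl -o kubectl.sha256 https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.8/2020-09-18/bin/linux/amd64/kubectl.sha256
sha256sum kubectl
cat kubectl.sha256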
macOS
All regions other than China.

curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.8/2020-09-18/bin/darwin/amd64/kubectl

Beijing and Ningxia China regions.
curl -o kubectl https://amazon-eks.s3.cn-north-1.amazonaws.com.cn/1.18.8/2020-09-18/bin/darwin/amd64/kubectl

Make the file executable.
chmod +x ./kubectl

Move the file to an appropriate location.
sudo mv ./kubectl /usr/local/bin

Now verify it was correctly installed using the following command:
kubectl version --short --client
Windows
Using PowerShell, download the appropriate binary for your region.

All regions other than China.
curl -o kubectl.exe https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.8/2020-09-18/bin/windows/amd64/kubectl.exe

Beijing and Ningxia China regions.
curl -o kubectl.exe https://amazon-eks.s3.cn-north-1.amazonaws.com.cn/1.18.8/2020-09-18/bin/windows/amd64/kubectl.exe

Copy the downloaded file to a location that is included in your PATH environment variable. Now verify it was correctly installed using the following command:
kubectl version --short --client
eksctl CLI
In the same way that we use the AWS CLI to communicate and interact with AWS, we will use the eksctl tool to interact with our EKS clusters. Install the version that corresponds with your operating system.

Linux
Download the latest version of eksctl.

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

Move the binary to an appropriate location.
sudo mv /tmp/eksctl /usr/local/bin

Verify the installation by checking eksctl's version.
eksctl version
macOS
The easiest way to install eksctl is to use Homebrew. You can install Homebrew with the following command:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

Now install the Homebrew tap.
brew tap weaveworks/tap

Now install eksctl.
brew install weaveworks/tap/eksctl

You can verify the installation using the version command.
eksctl version
Windows
The easiest way to install eksctl on Windows is to use the Chocolatey package manager.

chocolatey install -y eksctl

Now verify the install by checking eksctl's version.
eksctl version
Helm CLI
We will also need to install a separate Helm instance that we can use outside of our Microk8s cluster. Follow the instructions for your specific operating system.

Linux
sudo snap install helm --classic
macOS
brew install helm
Windows
chocolatey install kubernetes-helm
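Whichever platform you installed on, it is worth confirming the Helm client works before moving on (a standard Helm command, not shown in the original):

# Prints the client version, e.g. v3.x.y
helm version --short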
Create an EKS cluster
Now that we finally have all the tooling in place, we can create our first EKS cluster. We can create a simple cluster using the following command.

eksctl create cluster
[ℹ] eksctl version 0.31.0-rc.0
[ℹ] using region us-east-1
[ℹ] setting availability zones to [us-east-1c us-east-1a]
[ℹ] subnets for us-east-1c - public:192.168.0.0/19 private:192.168.64.0/19
[ℹ] subnets for us-east-1a - public:192.168.32.0/19 private:192.168.96.0/19
[ℹ] nodegroup "ng-82ff01a0" will use "ami-07250434f8a7bc5f1" [AmazonLinux2/1.17]
[ℹ] using Kubernetes version 1.17
[ℹ] creating EKS cluster "attractive-unicorn-1604460020" in "us-east-1" region with un-managed nodes
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=attractive-unicorn-1604460020'
[ℹ] CloudWatch logging will not be enabled for cluster "attractive-unicorn-1604460020" in "us-east-1"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-1 --cluster=attractive-unicorn-1604460020'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "attractive-unicorn-1604460020" in "us-east-1"
[ℹ] 2 sequential tasks: { create cluster control plane "attractive-unicorn-1604460020", 2 sequential sub-tasks: { no tasks, create nodegroup "ng-82ff01a0" } }
[ℹ] building cluster stack "eksctl-attractive-unicorn-1604460020-cluster"
[ℹ] deploying stack "eksctl-attractive-unicorn-1604460020-cluster"
[ℹ] building nodegroup stack "eksctl-attractive-unicorn-1604460020-nodegroup-ng-82ff01a0"
[ℹ] --nodes-min=2 was set automatically for nodegroup ng-82ff01a0
[ℹ] --nodes-max=2 was set automatically for nodegroup ng-82ff01a0
[ℹ] deploying stack "eksctl-attractive-unicorn-1604460020-nodegroup-ng-82ff01a0"
[ℹ] waiting for the control plane availability...
[✔] saved kubeconfig as "/home/cwoodward/.kube/config"
[ℹ] no tasks
[✔] all EKS cluster resources for "attractive-unicorn-1604460020" have been created
[ℹ] adding identity "arn:aws:iam::370794467349:role/eksctl-attractive-unicorn-1604460-NodeInstanceRole-H7F4UT95UFNK" to auth ConfigMap
[ℹ] nodegroup "ng-82ff01a0" has 0 node(s)
[ℹ] waiting for at least 2 node(s) to become ready in "ng-82ff01a0"

This command builds a cluster using the default configured region, two t2.medium nodes, and a generated cluster name. We can customize the cluster by passing in various command-line flags.
eksctl create cluster --name t16s-cluster --region=us-west-1 --node-type=t2.large --nodes=4

This command builds a cluster named t16s-cluster, deployed in the us-west-1 region with four t2.large nodes.
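When the flag list grows, eksctl also accepts a declarative config file. A minimal sketch, assuming the same name, region, node type, and node count as the command above (the file name cluster.yaml is arbitrary):

# Write an eksctl ClusterConfig equivalent to the flag-based command.
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: t16s-cluster
  region: us-west-1
nodeGroups:
  - name: ng-1
    instanceType: t2.large
    desiredCapacity: 4
EOF

# Create the cluster from the config file.
eksctl create cluster -f cluster.yaml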
We can verify our cluster was created using the following command:
eksctl get clusters

This command lists all deployed clusters.
NAME                          REGION
attractive-unicorn-1604460020 us-east-1

We can list the cluster nodes created using the following command.
kubectl get nodes

You should see an output similar to this:
NAME                             STATUS   ROLES    AGE   VERSION
ip-192-168-11-13.ec2.internal    Ready    <none>   21m   v1.17.11-eks-cfdc40
ip-192-168-62-133.ec2.internal   Ready    <none>   24m   v1.17.11-eks-cfdc40
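Two other commands can be handy at this point. The first, taken directly from the eksctl output above, inspects the CloudFormation stacks eksctl created; the second regenerates the kubeconfig entry if kubectl ever loses track of the cluster (for example, on a different machine). Both assume the generated cluster name and region shown above:

# Inspect the CloudFormation stacks backing the cluster.
eksctl utils describe-stacks --region=us-east-1 --cluster=attractive-unicorn-1604460020

# Rewrite the kubeconfig entry for this cluster.
aws eks update-kubeconfig --region us-east-1 --name attractive-unicorn-1604460020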
Adding an external load-balancer
When we ran our application using Microk8s, we could exercise the application's API-Gateway-Service by looking up the service's cluster IP address and port and calling it directly. In a cloud environment, we must expose the service outside of the cluster using a load balancer. We do this by replacing our api-gateway-service.yaml manifest file with this loadbalancer.yaml manifest.
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yaml
    kompose.service.type: clusterip
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: api-gateway-service
  name: load-balancer-service
  namespace: {{ .Values.application.namespace }}
spec:
  ports:
    - name: "8443"
      port: {{ .Values.application.apiGatewayService.port }}
      targetPort: 8443
      protocol: TCP
  selector:
    io.kompose.service: api-gateway-service
  type: LoadBalancer
Here we see that the Service's spec.type is set to LoadBalancer instead of ClusterIP. This will instruct EKS to expose the service outside of the cluster on a public host and port address.
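When the chart is installed, EKS asks AWS to provision an external load balancer for this Service, which can take a minute or two. A couple of standard kubectl invocations (assuming the chart is installed into the t16s namespace, as below) let you watch for the external hostname to appear:

# Watch the services until load-balancer-service shows an EXTERNAL-IP.
kubectl get services --namespace t16s -w

# Or extract just the load balancer hostname once it is provisioned.
kubectl get service load-balancer-service --namespace t16s \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'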
Deploy the Helm Chart to EKS
With the new loadbalancer.yaml service, we are ready to deploy our Helm chart. At this point, we install the chart exactly as we did in the previous article: navigate to our Helm chart home and execute the following command.
helm install t16s ./thinkmicroservices
NAME: t16s
LAST DEPLOYED: Tue Nov 3 23:17:38 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
You have installed: thinkmicroservices-ri.
Your release is named t16s.
Use the following command to determine the API-GATEWAY-SERVICE IP and port.
kubectl get services --namespace t16s | grep "load-balancer-service" | awk '{print $4 "\t" $5}'

Verify the deployments.
kubectl get deployments --namespace t16s
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
account-history-service   1/1     1            1           3m10s
account-profile-service   1/1     1            1           3m10s
administration-service    1/1     1            1           3m10s
api-gateway-service       1/1     1            1           3m10s
auth-service              1/1     1            1           3m10s
config-service            1/1     1            1           3m10s
content-service           1/1     1            1           3m10s
discovery-service         1/1     1            1           3m10s
elasticsearch             1/1     1            1           3m10s
email-outbound-service    1/1     1            1           3m10s
feature-service           1/1     1            1           3m10s
fluentd                   1/1     1            1           3m10s
grafana-deployment        0/1     1            0           3m10s
kibana                    1/1     1            1           3m10s
mongo-db-service          1/1     1            1           3m10s
mongo-express             1/1     1            1           3m10s
notification-service      1/1     1            1           3m10s
peer-signaling-service    1/1     1            1           3m10s
postgresadmin             1/1     1            1           3m10s
postgresdb                1/1     1            1           3m10s
prometheus-deployment     1/1     1            1           3m10s
rabbitmq                  1/1     1            1           3m10s
sms-outbound-service      1/1     1            1           3m10s
telemetry-service         1/1     1            1           3m10s

Verify the pods.
kubectl get pods --namespace t16s
NAME                                       READY   STATUS    RESTARTS   AGE
account-history-service-6cb5c876c8-cfgrd   1/1     Running   1          4m39s
account-profile-service-7655c8879-v48jd    1/1     Running   1          4m40s
administration-service-8548d7b8bd-whxnz    1/1     Running   1          4m39s
api-gateway-service-5db844d49-wbr2q        1/1     Running   1          4m38s
auth-service-5bbc5c6c96-vp6l9              1/1     Running   1          4m37s
config-service-bc6c89dcd-bz5xn             1/1     Running   0          4m37s
content-service-69b5d6fb8c-cmmnl           1/1     Running   0          4m39s
discovery-service-5f5ff79ffc-v7zkn         1/1     Running   2          4m38s
elasticsearch-5fd866768d-n7wws             1/1     Running   0          4m40s
email-outbound-service-654f4bf8bf-kxt54    1/1     Running   1          4m39s
feature-service-7f9695898d-d88fn           1/1     Running   1          4m37s
fluentd-6cbdc8d87b-qw2d6                   1/1     Running   0          4m40s
grafana-deployment-5ffdf5587d-tpphg        1/1     Running   0          4m40s
kibana-79ddb4c975-jqtzz                    1/1     Running   0          4m40s
mongo-db-service-7755b47586-kts2x          1/1     Running   0          4m39s
mongo-express-86c7589f68-49nq2             1/1     Running   0          4m37s
notification-service-6797784867-gmbsv      1/1     Running   1          4m40s
peer-signaling-service-5c56fcbb77-l9nb7    1/1     Running   1          4m40s
postgresadmin-689cb675b9-jrtbv             1/1     Running   0          4m40s
postgresdb-55df4c5794-x2fzc                1/1     Running   0          4m38s
prometheus-deployment-58b7ddd47d-fwv2l     1/1     Running   0          4m38s
rabbitmq-5b9744ddc6-xp77q                  1/1     Running   0          4m40s
sms-outbound-service-567df8bf8d-4585j      1/1     Running   1          4m40s
telemetry-service-867df545-wlckj           1/1     Running   1          4m40s

Verify the services.
kubectl get services --namespace t16s
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)               AGE
account-history-service  ClusterIP      10.100.253.33    <none>                                                                    5010/TCP,5019/TCP     5m49s
account-profile-service  ClusterIP      10.100.230.63    <none>                                                                    5020/TCP              5m48s
administration-service   ClusterIP      10.100.235.195   <none>                                                                    9999/TCP              5m48s
auth-service             ClusterIP      10.100.158.12    <none>                                                                    7777/TCP              5m48s
config-service           ClusterIP      10.100.64.212    <none>                                                                    8888/TCP              5m49s
content-service          ClusterIP      10.100.142.123   <none>                                                                    4040/TCP              5m49s
discovery-service        ClusterIP      10.100.205.151   <none>                                                                    8761/TCP              5m48s
elasticsearch            ClusterIP      10.100.90.73     <none>                                                                    9200/TCP              5m48s
email-outbound-service   ClusterIP      10.100.110.2     <none>                                                                    6010/TCP              5m48s
feature-service          ClusterIP      10.100.19.60     <none>                                                                    3550/TCP              5m48s
fluentd                  ClusterIP      10.100.125.11    <none>                                                                    24224/TCP,24224/UDP   5m48s
grafana                  NodePort       10.100.171.253   <none>                                                                    3000:30000/TCP        5m48s
kibana                   ClusterIP      10.100.38.252    <none>                                                                    5601/TCP              5m48s
load-balancer-service    LoadBalancer   10.100.140.213   a7c4a2ff65cfb4eba9acdeb0da60486b-678811868.us-east-1.elb.amazonaws.com   9443:30261/TCP        5m49s
mongo-db-service         ClusterIP      10.100.43.153    <none>                                                                    27017/TCP             5m49s
mongo-express            ClusterIP      10.100.202.47    <none>                                                                    8081/TCP              5m48s
notification-service     ClusterIP      10.100.216.222   <none>                                                                    6005/TCP              5m49s
peer-signaling-service   ClusterIP      10.100.151.132   <none>                                                                    18433/TCP             5m48s
postgresadmin            ClusterIP      10.100.148.131   <none>                                                                    1080/TCP              5m49s
postgresdb               NodePort       10.100.166.44    <none>                                                                    5432:30034/TCP        5m48s
prometheus               NodePort       10.100.32.66     <none>                                                                    9090:31000/TCP        5m48s
rabbitmq                 ClusterIP      10.100.3.204     <none>                                                                    15672/TCP,5672/TCP    5m48s
sms-outbound-service     ClusterIP      10.100.47.147    <none>                                                                    6020/TCP              5m49s
telemetry-service        ClusterIP      10.100.141.87    <none>                                                                    3500/TCP              5m49s

We can identify the host domain and port for the cluster load-balancer service endpoint using the command supplied by our installed Helm chart.
kubectl get services --namespace t16s | grep "load-balancer-service" | awk '{print $4 "\t" $5}'

When we run that command, we see our cluster publishes the load-balancer service on the following host and port (yours will be different).
a7c4a2ff65cfb4eba9acdeb0da60486b-678811868.us-east-1.elb.amazonaws.com 9443:30261/TCP

We can access the application in our browser via https://a7c4a2ff65cfb4eba9acdeb0da60486b-678811868.us-east-1.elb.amazonaws.com:9443/content/
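You can also smoke-test the endpoint from the command line. A minimal sketch using the hostname above; the -k flag skips TLS certificate verification, which is likely necessary if the gateway serves a self-signed certificate (an assumption about this deployment):

# Expect an HTTP response from the content endpoint; -k ignores certificate trust.
curl -k https://a7c4a2ff65cfb4eba9acdeb0da60486b-678811868.us-east-1.elb.amazonaws.com:9443/content/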
In a production environment, we would attach our load balancer to a DNS CNAME record.
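With Route 53, for example, that attachment is a single CLI call. A sketch, assuming a hypothetical hosted zone ID and domain (substitute your own):

# UPSERT a CNAME pointing a friendly name at the ELB hostname.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000000000 \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "a7c4a2ff65cfb4eba9acdeb0da60486b-678811868.us-east-1.elb.amazonaws.com"}]
      }
    }]
  }'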
Uninstall the Helm chart
When we are finished running the Helm chart, we can uninstall it using:

helm uninstall t16s
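To confirm the release is gone, two standard checks (not part of the original walkthrough) are listing the Helm releases and the pods in the application namespace:

# The t16s release should no longer appear.
helm list

# The application pods should be terminating or gone.
kubectl get pods --namespace t16s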
Uninstall the cluster
When we are finished with the cluster, we can delete it from EKS using:

eksctl delete cluster --name=<name of cluster identified earlier>

For our cluster we will call:
eksctl delete cluster --name=attractive-unicorn-1604460020
[ℹ] eksctl version 0.31.0-rc.0
[ℹ] using region us-east-1
[ℹ] deleting EKS cluster "attractive-unicorn-1604460020"
[ℹ] deleted 0 Fargate profile(s)
[✔] kubeconfig has been updated
[ℹ] cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
[ℹ] 2 sequential tasks: { delete nodegroup "ng-82ff01a0", delete cluster control plane "attractive-unicorn-1604460020" [async] }
[ℹ] will delete stack "eksctl-attractive-unicorn-1604460020-nodegroup-ng-82ff01a0"
[ℹ] waiting for stack "eksctl-attractive-unicorn-1604460020-nodegroup-ng-82ff01a0" to get deleted
[ℹ] will delete stack "eksctl-attractive-unicorn-1604460020-cluster"
[✔] all cluster resources were deleted

All EKS resources will be released, and billing for those resources will cease.
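As a final sanity check (an extra step, not in the original), you can rerun the cluster listing to confirm nothing remains:

# Should report that no clusters were found in the region.
eksctl get clusters --region us-east-1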
Congratulations, you have now deployed the reference implementation to your first Kubernetes cloud provider platform.