This repository is a walkthrough of deploying the official Coder.com code-server Docker image of VSCode in a repeatable tutorial/workshop format. After working through this lab you will have a web-accessible deployment of code-server running on Amazon EKS Kubernetes, backed by persistent EFS volumes. Now you can code anywhere using your preferred IDE!
-
Download and Install AWS CLI v2.
$ curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
$ sudo installer -pkg AWSCLIV2.pkg -target /
-
Check for successful installation
$ aws --version
aws-cli/2.0.38 Python/3.7.4 Darwin/17.7.0 exe/x86_64
-
Configure your AWS credentials for your preferred region
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-2
Default output format [None]: json
-
If you are using a Mac, we assume you have Homebrew installed; otherwise, please install it now
$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
-
Next, install the Weaveworks tap
$ brew tap weaveworks/tap
-
Finally, install eksctl
$ brew install weaveworks/tap/eksctl
-
Check for successful installation
$ eksctl version
0.25.0
-
Download and Install kubectl
$ curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
-
Check for successful installation
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T11:41:22Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
-
Install Helm
$ brew install helm
In order to deploy the Docker image for code-server, we need an EKS cluster to work with. There are two options for backing an EKS cluster: Fargate, or managed nodes in your own VPC. Since VSCode needs some persistence to be usable in a production-like setting, we are opting for a managed node deployment, which makes it easier to deploy persistent volume claims via EFS.
-
Download or copy your AWS public SSH key to your current directory and make sure the permissions are set securely.
$ cp ~/eks.pub . && chmod 600 eks.pub
-
Create an EKS cluster in your region, with a node size of your choice and using your key, for example:
$ eksctl create cluster \
  --name code-server \
  --version 1.17 \
  --region us-east-2 \
  --nodegroup-name linux-nodes \
  --node-type t3.small \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 3 \
  --ssh-access \
  --ssh-public-key eks.pub \
  --managed
-
After 10-15 minutes, verify that kubectl has been configured and that your cluster nodes are in the Ready status.
$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE     VERSION
ip-192-168-10-182.us-east-2.compute.internal   Ready    <none>   3m46s   v1.17.9-eks-4c6976
ip-192-168-58-121.us-east-2.compute.internal   Ready    <none>   3m49s   v1.17.9-eks-4c6976
ip-192-168-74-214.us-east-2.compute.internal   Ready    <none>   3m49s   v1.17.9-eks-4c6976
-
Install the AWS EFS CSI Driver for mounting EFS volumes in pods
$ helm install aws-efs-csi-driver https://github.com/kubernetes-sigs/aws-efs-csi-driver/releases/download/v1.0.0/helm-chart.tgz
NAME: aws-efs-csi-driver
LAST DEPLOYED: Thu Aug 13 13:42:28 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that aws-efs-csi-driver has started, run:

    kubectl get pod -n kube-system -l "app.kubernetes.io/name=aws-efs-csi-driver,app.kubernetes.io/instance=aws-efs-csi-driver"
-
Verify the EFS CSI Driver is running
$ kubectl get pod -n kube-system -l "app.kubernetes.io/name=aws-efs-csi-driver,app.kubernetes.io/instance=aws-efs-csi-driver"
NAME                 READY   STATUS    RESTARTS   AGE
efs-csi-node-6xmcf   3/3     Running   0          57s
efs-csi-node-7bg9c   3/3     Running   0          57s
efs-csi-node-m9l47   3/3     Running   0          57s
To reach our IDE from the browser, we need to load balance to the running container within Kubernetes. To do this we create an Application Load Balancer. Fortunately, AWS provides an ALB Ingress Controller that integrates nicely with EKS and makes this easy for us.
-
Create an IAM OIDC provider and associate it with your cluster
$ eksctl utils associate-iam-oidc-provider \
  --region us-east-2 \
  --cluster code-server \
  --approve
-
Download an IAM policy for the ALB Ingress Controller pod that allows it to make calls to AWS APIs on your behalf
$ curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/iam-policy.json
-
Create an IAM policy called ALBIngressControllerIAMPolicy using the policy document downloaded in the previous step. Take note of the policy ARN that is returned; you will need it in a following step.
$ aws iam create-policy \
  --policy-name ALBIngressControllerIAMPolicy \
  --policy-document file://iam-policy.json
-
Create a Kubernetes service account named alb-ingress-controller in the kube-system namespace, a cluster role, and a cluster role binding for the ALB Ingress Controller to use with the following command
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/rbac-role.yaml
-
Create an IAM role for the ALB Ingress Controller and attach the role to the service account created in the previous step. The command that follows only works for clusters that were created with eksctl. Note: substitute your policy ARN from the previous step.
$ eksctl create iamserviceaccount \
  --region us-east-2 \
  --name alb-ingress-controller \
  --namespace kube-system \
  --cluster code-server \
  --attach-policy-arn arn:aws:iam::111122223333:policy/ALBIngressControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --approve
-
Deploy your ALB Ingress Controller. Note that it will initially error until the subsequent steps are completed.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/alb-ingress-controller.yaml
-
Edit the ALB deployment to set your cluster name, EKS VPC ID, and region
$ kubectl edit deployment.apps/alb-ingress-controller -n kube-system
...
    spec:
      containers:
      - args:
        - --ingress-class=alb
        - --cluster-name=code-server
        - --aws-vpc-id=vpc-03468a8157edca5bd
        - --aws-region=us-east-2
-
Confirm that the ALB Ingress Controller is running with the following command.
$ kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
alb-ingress-controller-646d767ccf-4h624   1/1     Running   0          12s
-
Get your VPC ID for your cluster (again)
$ aws eks describe-cluster --name code-server --query "cluster.resourcesVpcConfig.vpcId" --output text
vpc-015b916167f38076a
-
Locate the CIDR range for your cluster
$ aws ec2 describe-vpcs --vpc-ids vpc-015b916167f38076a --query "Vpcs[].CidrBlock" --output text
192.168.0.0/16
-
Create an Amazon EFS file system for your Amazon EKS cluster (Steps 3 & 4 in the AWS docs); a sketch of those steps follows below.
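The AWS docs steps amount to creating the file system, allowing inbound NFS (port 2049) from your cluster's VPC CIDR, and creating a mount target in each node subnet. Here is a hedged sketch using the VPC ID and CIDR discovered above; the security group name and the sg-/subnet- IDs are placeholders you must substitute with your own values:

# Create a security group for NFS traffic in the cluster VPC (name is illustrative)
$ aws ec2 create-security-group \
  --group-name efs-nfs-sg \
  --description "Allow NFS from EKS nodes" \
  --vpc-id vpc-015b916167f38076a
# Allow NFS (TCP 2049) from the cluster CIDR; substitute the sg- ID returned above
$ aws ec2 authorize-security-group-ingress \
  --group-id sg-EXAMPLE \
  --protocol tcp \
  --port 2049 \
  --cidr 192.168.0.0/16
# Create the file system itself
$ aws efs create-file-system --region us-east-2
# Create a mount target; repeat for each subnet your nodes run in
$ aws efs create-mount-target \
  --file-system-id fs-ab9729d3 \
  --subnet-id subnet-EXAMPLE \
  --security-groups sg-EXAMPLE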
-
Verify your file system ID
$ aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text
fs-ab9729d3
-
The container runs as user 'coder' with UID 1000, which affects writing to the EFS volume. Create an EFS access point so the 'coder' user can write to our project file mount on behalf of yourself or whoever is making changes in the IDE.
$ aws efs create-access-point --file-system-id fs-738c320b --posix-user Uid=1000,Gid=1000 --root-directory "Path=/home/coder/project,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=777}"
{
    "ClientToken": "90d663c9-be81-4009-8d8a-5ded1a8c1512",
    "Tags": [],
    "AccessPointId": "fsap-027a9fe34c790c6b0",
    "AccessPointArn": "arn:aws:elasticfilesystem:us-east-2:370534301218:access-point/fsap-027a9fe34c790c6b0",
    "FileSystemId": "fs-738c320b",
    "PosixUser": {
        "Uid": 1000,
        "Gid": 1000
    },
    "RootDirectory": {
        "Path": "/home/coder/project",
        "CreationInfo": {
            "OwnerUid": 1000,
            "OwnerGid": 1000,
            "Permissions": "777"
        }
    },
    "OwnerId": "370534301218",
    "LifeCycleState": "creating"
}
-
Record the access point ID created above for use in the PV specification
fsap-027a9fe34c790c6b0
-
Deploy the StorageClass
$ kubectl apply -f efs-storageclass.yml -n code-server
storageclass.storage.k8s.io/efs-sc created
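If you are writing efs-storageclass.yml yourself, a minimal sketch consistent with the efs-sc class shown in the output below would be:

# efs-storageclass.yml - minimal sketch; the name and provisioner
# match the `kubectl get storageclass` output that follows
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com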
-
View Storage Classes
$ kubectl get storageclass -n code-server
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
efs-sc          efs.csi.aws.com         Delete          Immediate              false                  13m
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  4h26m
-
Create our code-server namespace
$ kubectl apply -f code-server-namespace.yml
namespace/code-server created
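code-server-namespace.yml needs nothing more than the namespace itself; a sketch matching the output above:

# code-server-namespace.yml - namespace for all code-server resources
apiVersion: v1
kind: Namespace
metadata:
  name: code-server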
-
Update the csi.volumeHandle in code-server-pv.yml with your EFS file system ID and EFS access point ID
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "code-server-pv"
spec:
  capacity:
    storage: "5Gi"
  accessModes:
    - "ReadWriteMany"
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-ab9729d3::fsap-027a9fe34c790c6b0
-
Deploy the PV
$ kubectl apply -f code-server-pv.yml
persistentvolume/code-server-pv created
-
Create a PV Claim for the PV
$ kubectl apply -f code-server-pv-claim.yml -n code-server
persistentvolumeclaim/code-server-pv-claim created
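A sketch of what code-server-pv-claim.yml likely contains; the name, size, access mode, and storage class all match the status output in the next step:

# code-server-pv-claim.yml - claims the EFS-backed PV defined above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code-server-pv-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi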
-
Check the status of the PV and PV claim
$ kubectl get pv -n code-server
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS   REASON   AGE
code-server-pv   5Gi        RWX            Retain           Bound    code-server/code-server-pv-claim   efs-sc                  3m47s

$ kubectl get pvc -n code-server
NAME                   STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
code-server-pv-claim   Bound    code-server-pv   5Gi        RWX            efs-sc         58s
-
Create a Kubernetes Deployment for code-server. Changing the password in the manifest is also recommended.
$ kubectl apply -f code-server-deployment.yml -n code-server
deployment.apps/code-server-deployment created
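For reference, a sketch of the shape code-server-deployment.yml likely takes. It assumes the upstream codercom/code-server image, its default port 8080, and its PASSWORD environment variable; the labels and the password value are illustrative placeholders to adapt:

# code-server-deployment.yml - illustrative sketch only
apiVersion: apps/v1
kind: Deployment
metadata:
  name: code-server-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: code-server
  template:
    metadata:
      labels:
        app: code-server
    spec:
      containers:
      - name: code-server
        image: codercom/code-server:latest
        ports:
        - containerPort: 8080          # code-server's default listen port
        env:
        - name: PASSWORD               # password prompted for in the browser
          value: "changeme"            # placeholder - change this
        volumeMounts:
        - name: project-volume
          mountPath: /home/coder/project   # matches the EFS access point path
      volumes:
      - name: project-volume
        persistentVolumeClaim:
          claimName: code-server-pv-claim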
-
Create a service for connecting to our container on its exposed port
$ kubectl apply -f code-server-service.yml -n code-server
service/service-code-server created
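A sketch of code-server-service.yml consistent with the NodePort service seen in the output below; the selector and the targetPort of 8080 are assumptions tied to the deployment sketch above:

# code-server-service.yml - exposes the pod for the ALB to target
apiVersion: v1
kind: Service
metadata:
  name: service-code-server
spec:
  type: NodePort
  selector:
    app: code-server       # must match the deployment's pod labels
  ports:
  - port: 80
    targetPort: 8080       # code-server's container port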
-
If all goes well, you should see output like the following
$ kubectl get all -n code-server
NAME                                          READY   STATUS    RESTARTS   AGE
pod/code-server-deployment-6555d99486-jdbcd   1/1     Running   0          27s

NAME                          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/service-code-server   NodePort   10.100.19.111   <none>        80:32366/TCP   1s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/code-server-deployment   1/1     1            1           27s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/code-server-deployment-6555d99486   1         1         1       28s
Now that your code-server deployment is successfully running on EKS with an EFS-backed persistent volume, we need to create the ALB ingress we will use to connect to the instance in our browser.
-
Create the ALB Ingress
$ kubectl apply -f code-server-ingress.yml -n code-server
ingress.extensions/code-server-ingress created
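A sketch of code-server-ingress.yml for the v1.1.8 ALB Ingress Controller; the kind and name match the output below, while the internet-facing scheme annotation is an assumption appropriate for browser access:

# code-server-ingress.yml - illustrative sketch; routes ALB traffic
# to the NodePort service created above
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: code-server-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: service-code-server
          servicePort: 80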
-
Obtain the ALB Ingress address
$ kubectl get ingress -n code-server
NAME                  HOSTS   ADDRESS                                                                 PORTS   AGE
code-server-ingress   *       5e1c4c56-codeserver-codese-74a8-22329140.us-east-2.elb.amazonaws.com   80      33s
After a few minutes, open your browser and connect to your instance at the ALB address. If prompted for a password, use the one you set in the deployment manifest.
Now let's start using our IDE. We will use the terminal to clone this repo into the [project] directory and use the GUI to create a text file.
Navigate to the sidebar and open a new terminal shell
You will be presented with an empty terminal
Navigate to the ~/project folder, clone a Git repo, and use the GUI to create a text file
Then delete the deployment and redeploy. Our project should still be there when the new pod comes up!
-
Delete the deployment and all running pods
$ kubectl delete deployment code-server-deployment -n code-server
deployment.apps "code-server-deployment" deleted
-
Check that no pods remain
$ kubectl get pods -n code-server
No resources found in code-server namespace.
-
Redeploy with all project files intact
$ kubectl apply -f code-server-deployment.yml -n code-server
deployment.apps/code-server-deployment created