If you are setting up a Kubernetes cluster on AWS, you probably want a cluster that is not accessible from the public internet. You can do that by turning off public access when creating the cluster. The catch is DNS resolution for an EKS cluster with a private endpoint.
The problem with this approach is that you can't resolve the endpoint's DNS name from on-premises, because:
- AWS does not allow you to change the DNS name of the endpoint.
- AWS creates a private hosted zone for the endpoint DNS.
This problem is described here. One suggested solution is to create Route 53 inbound and outbound resolver endpoints, as described in this blog. The problem with that approach is that every time you create a cluster you need to add new IPs to your local resolver, and if your local DNS infrastructure is maintained by someone else, that can take days.
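For reference, the inbound resolver endpoint that approach relies on can be created with the AWS CLI along these lines. This is a rough sketch, not something we deploy; the security group, subnet IDs, and name are placeholders you would replace with your own values:

```shell
# Create an inbound Route 53 Resolver endpoint so on-premises resolvers
# can forward queries into the VPC. All IDs below are placeholders.
aws route53resolver create-resolver-endpoint \
  --name eks-dns-inbound \
  --direction INBOUND \
  --creator-request-id "eks-dns-inbound-$(date +%s)" \
  --security-group-ids sg-0123456789abcdef0 \
  --ip-addresses SubnetId=subnet-aaa111 SubnetId=subnet-bbb222
```

You would then point your on-premises resolver at the IPs this endpoint gets, which is exactly the step that can stall when another team owns that resolver.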
We solved the problem by writing a small script that updates /etc/hosts with the IPs and DNS name of the endpoint. It's a hack, but it works well. Here's what the script looks like:
#!/bin/sh
# Usage: ./update-eks-hosts.sh <cluster-name>
clusterName=$1
# Private IPs of the ENIs backing the cluster endpoint
ips=`aws ec2 describe-network-interfaces --filters Name=description,Values="Amazon EKS $clusterName" | grep "PrivateIpAddress\"" | cut -d ":" -f 2 | sed 's/[*",]//g' | sed 's/^\s*//' | uniq`
# Endpoint hostname, with https:// and quotes stripped
endpoint=`aws eks describe-cluster --name $clusterName | grep endpoint\" | cut -d ":" -f 3 | sed 's/[\/,"]//g'`
# create backup of /etc/hosts
sudo cp /etc/hosts /etc/hosts_backup
# drop any stale entries for this endpoint, then append one line per IP
sudo sh -c "grep -v $endpoint /etc/hosts > /etc/hosts_new"
for item in $ips; do
  sudo sh -c "echo $item $endpoint >> /etc/hosts_new"
done
sudo sh -c "cat /etc/hosts_new > /etc/hosts"
Pass your cluster name to the script and it updates the /etc/hosts file on your machine.
The script hasn't been tested on a Mac yet, but it should work there too.
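If you want to see what the script does to /etc/hosts without touching the real file, the rewrite logic can be exercised against a temporary file. The endpoint name and IPs below are made-up sample values:

```shell
#!/bin/sh
# Sketch of the /etc/hosts rewrite, run against a temp file with fake data.
hosts=$(mktemp)
endpoint="abc123.gr7.us-east-1.eks.amazonaws.com"   # hypothetical endpoint
ips="10.0.1.25 10.0.2.31"                           # hypothetical ENI IPs

# Seed the file with an unrelated entry and a stale entry for the endpoint.
printf '127.0.0.1 localhost\n10.0.9.9 %s\n' "$endpoint" > "$hosts"

# Drop stale entries for the endpoint, then append the fresh IPs.
grep -v "$endpoint" "$hosts" > "${hosts}.new"
for item in $ips; do
  echo "$item $endpoint" >> "${hosts}.new"
done
cat "${hosts}.new" > "$hosts"

result=$(cat "$hosts")
echo "$result"
rm -f "$hosts" "${hosts}.new"
```

Running the real script twice is safe for the same reason: the `grep -v` pass removes the previous run's entries before the new ones are appended, so the endpoint never accumulates duplicate lines.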
In an enterprise environment this approach is best suited to development; in test and production you would typically deploy everything through a tool such as Jenkins, running inside AWS itself, where the private endpoint resolves natively.