Hi everyone
We use localstack-pro:latest-bigdata and are currently trying to configure EKS against an existing Kubernetes installation, as detailed in the guide here.
We are having an issue connecting it all together, and I'll detail the steps below. The following commands are all run from within our LocalStack container.
We created the Kubernetes installation using minikube and then mounted the generated kubeconfig into the LocalStack container at /root/.kube/config.
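For reference, the cluster was brought up roughly along the lines of the sketch below; the profile name matches what follows, but the Docker network name and exact flags are assumptions rather than our precise invocation.

# Rough sketch (not our exact command): a docker-driver minikube profile named
# mwaa-minikube on a shared Docker network, with --embed-certs so the resulting
# kubeconfig carries its certificates inline and is portable into another container.
minikube start -p mwaa-minikube \
  --driver=docker \
  --network=ls-network \
  --embed-certs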
The LocalStack container is then started with the environment variable EKS_K8S_PROVIDER=local.
root@189db70ea719:/opt/code/localstack# env | grep EKS
EKS_K8S_PROVIDER=local
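For completeness, the container is started along these lines (a sketch only; the network name, port range, and auth-token handling are assumptions about our setup rather than exact values):

# Sketch of the LocalStack start-up. 4510-4559 is LocalStack's external
# service port range, which is where the EKS endpoint port 4511 lives.
docker run -d --name localstack \
  --network ls-network \
  -p 4566:4566 -p 4510-4559:4510-4559 \
  -e LOCALSTACK_AUTH_TOKEN="$LOCALSTACK_AUTH_TOKEN" \
  -e EKS_K8S_PROVIDER=local \
  -v "$HOME/.kube/config:/root/.kube/config" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  localstack/localstack-pro:latest-bigdata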
The /root/.kube/config that is generated and mounted can be seen below (certificates redacted for brevity).
root@189db70ea719:/opt/code/localstack# cat /root/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ***
    extensions:
    - extension:
        last-update: Wed, 06 Nov 2024 14:35:25 GMT
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://mwaa-minikube:8443
  name: mwaa-minikube
contexts:
- context:
    cluster: mwaa-minikube
    extensions:
    - extension:
        last-update: Wed, 06 Nov 2024 14:35:25 GMT
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: mwaa-minikube
  name: mwaa-minikube
current-context: mwaa-minikube
kind: Config
preferences: {}
users:
- name: mwaa-minikube
  user:
    client-certificate-data: ***
    client-key-data: ***
The server https://mwaa-minikube:8443 points at the container the cluster is running in, and that container is on the same Docker network as LocalStack.
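As a quick way to double-check that both containers really share that network (run from the Docker host; the network name ls-network is an assumption):

# Lists the containers attached to the shared network; both localstack and
# mwaa-minikube should appear in the output.
docker network inspect ls-network \
  --format '{{range .Containers}}{{.Name}} {{end}}'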
We can see that the mwaa-minikube cluster has been created successfully and is reachable from LocalStack:
root@189db70ea719:/opt/code/localstack# kubectl cluster-info
Kubernetes control plane is running at https://mwaa-minikube:8443
CoreDNS is running at https://mwaa-minikube:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
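A further sanity check one could run from the same shell is listing the nodes:

# The minikube node should be listed here if connectivity is fully working.
kubectl get nodes -o wide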
The next step is to run the eks create-cluster command:
awslocal eks create-cluster \
--name mwaa-minikube \
--role-arn arn:aws:iam::000000000000:role/eks-role \
--resources-vpc-config '{}'
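Optionally, one can block until the cluster reports ACTIVE before describing it, using the standard AWS CLI waiter via awslocal:

# Poll until the cluster status becomes ACTIVE.
awslocal eks wait cluster-active --name mwaa-minikube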
Describing the cluster then returns the following (certificateAuthority redacted for brevity):
root@189db70ea719:/opt/code/localstack# awslocal eks describe-cluster --name mwaa-minikube
{
    "cluster": {
        "name": "mwaa-minikube",
        "arn": "arn:aws:eks:eu-west-1:000000000000:cluster/mwaa-minikube",
        "createdAt": 1730903751.558,
        "version": "1.22",
        "endpoint": "https://localhost.localstack.cloud:4511",
        "roleArn": "arn:aws:iam::000000000000:role/eks-role",
        "resourcesVpcConfig": {
            "securityGroupIds": [],
            "endpointPublicAccess": true,
            "endpointPrivateAccess": false,
            "publicAccessCidrs": [
                "0.0.0.0/0"
            ]
        },
        "identity": {
            "oidc": {
                "issuer": "https://localhost.localstack.cloud/eks-oidc"
            }
        },
        "status": "ACTIVE",
        "certificateAuthority": {
            "data": "***"
        },
        "platformVersion": "eks.5",
        "tags": {}
    }
}
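For scripting, the endpoint in question can be pulled straight out of that response with a JMESPath query:

# Prints just the cluster endpoint, e.g. https://localhost.localstack.cloud:4511
awslocal eks describe-cluster --name mwaa-minikube \
  --query 'cluster.endpoint' --output text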
According to the docs, the endpoint here should be something like https://172.17.0.1:6443, so in our case I would expect to see something like https://mwaa-minikube:8443. Instead, we are seeing https://localhost.localstack.cloud:4511.
Should the endpoint generated by the eks create-cluster command be the same as the server parameter in /root/.kube/config?
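In case it helps narrow this down, one probe we can try (the temporary kubeconfig path is just an example) is to point kubectl at whatever endpoint LocalStack hands back and see whether it actually reaches the cluster:

# Write a kubeconfig for the LocalStack-managed cluster, then try to use it.
awslocal eks update-kubeconfig --name mwaa-minikube --kubeconfig /tmp/eks-kubeconfig
kubectl --kubeconfig /tmp/eks-kubeconfig get nodes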
Any thoughts on whether this makes sense would be greatly appreciated!
Many thanks