Enabling Teleport to act as a Kubernetes proxy for trusted/leaf clusters

One model that some customers use is a main/root Teleport cluster which does not run Kubernetes, with trusted/leaf clusters which do run Kubernetes linked back to it.
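For context, the link itself is established by creating a trusted_cluster resource on each leaf cluster. Here is a minimal sketch, assuming the root proxy is reachable at teleport.example.com on the default ports and that a matching join token has already been configured on the root; the token value and role mapping below are placeholders:

# On the leaf cluster: sketch of the resource that links it back to the root.
# Assumes teleport.example.com is the root proxy on default ports 3080/3024,
# and that <join-token> matches a trusted_cluster token set up on the root.
cat <<'EOF' > trusted_cluster.yaml
kind: trusted_cluster
version: v2
metadata:
  name: main
spec:
  enabled: true
  token: <join-token>
  web_proxy_addr: teleport.example.com:3080
  tunnel_addr: teleport.example.com:3024
  role_map:
    - remote: admin
      local: [admin]
EOF
tctl create trusted_cluster.yaml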

In this situation, there is a bug where ~/.kube/config is not updated when logging into a trusted/leaf cluster with Kubernetes enabled, even though it should be. This bug is tracked on GitHub: https://github.com/gravitational/teleport/issues/3087

One way to work around this is to set up a ‘dummy’ kubeconfig file on your main/root cluster which points nowhere, but is nevertheless valid YAML and will pass a rudimentary syntax check. This causes the Kubernetes proxy subsystem to be enabled within the cluster so this functionality can be used elsewhere.

Here is an example:

apiVersion: v1
clusters:
- cluster:
    server: https://localhost/
    certificate-authority-data: yadayadayada
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: teleport
  name: teleport
current-context: teleport
kind: Config
preferences: {}
users:
- name: teleport
  user: {}

Save this file as /var/lib/teleport/dummy.kubeconfig and update the Teleport config file on your main/root cluster to use it:

proxy_service:
  kubernetes:
    enabled: yes
    public_addr: ["teleport.example.com:3026"]
    kubeconfig_file: /var/lib/teleport/dummy.kubeconfig
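
Before restarting, you can sanity-check that the dummy file parses by pointing kubectl at it (this only confirms the file is well-formed; the server address is still a dead end):

# Parse the dummy kubeconfig with kubectl; it should print the config,
# not an error.
kubectl --kubeconfig=/var/lib/teleport/dummy.kubeconfig config view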

After this, restart Teleport on your main/root cluster, run tsh logout on your client (if you were already logged in), then run tsh login --proxy=teleport.example.com again. You should see that your ~/.kube/config file is updated with a context pointing to your main/root Teleport cluster on port 3026.
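Spelled out as commands, the sequence looks something like this (it assumes Teleport runs under systemd on the main/root cluster; adjust the restart step to however you run it):

# On the main/root cluster
sudo systemctl restart teleport

# On your client
tsh logout
tsh login --proxy=teleport.example.com

# Confirm the new context was written to ~/.kube/config
kubectl config get-contexts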

Once this is done, you can use tsh login --proxy=teleport.example.com trustedkubernetes.example.com to log into your trusted cluster, which should add another context to your ~/.kube/config file. At this point, kubectl commands should work correctly against that trusted cluster.
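For example (the context name below is illustrative; run kubectl config get-contexts to see what tsh actually wrote for your cluster):

# Log into the trusted/leaf cluster via the main/root proxy
tsh login --proxy=teleport.example.com trustedkubernetes.example.com

# List the contexts tsh created, switch to the leaf cluster's, and test it
kubectl config get-contexts
kubectl config use-context trustedkubernetes.example.com
kubectl get pods --all-namespaces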
