Does kubeconfig get set for the user when logging in to a node in the Teleport web UI?

Is kubeconfig set implicitly for the user when logging in to a node in the Teleport web UI, given that kubeconfig_file is set in teleport.yaml?

I’m not entirely sure I understand the question, but I’ll try to answer.

The Teleport web UI just hosts a web-based terminal which will connect you to a node. It doesn’t do anything on the node itself, like change ~/.kube/config or anything similar. You would need to do that via some other method (like a ~/.bash_login script or similar).
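
For example, a rough sketch of what such a login script could do, assuming you have already copied a kubeconfig onto the node yourself (the file path below is just an example, not something Teleport creates for you):

# ~/.bash_login (sketch): point kubectl at a kubeconfig that was placed
# on the node by some other means; the path is an example only
if [ -f "$HOME/teleport-kubeconfig" ]; then
  export KUBECONFIG="$HOME/teleport-kubeconfig"
fi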

The kubeconfig file is not set for the Teleport user after SSH or web login to the Teleport node, even though Kubernetes is enabled for GKE.

Errors:
kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Steps followed:

1. Have Teleport hosted in a GCP VM with Kubernetes enabled (for GKE); teleport.yaml is:

teleport:
  nodename: teleport-master
  log:
    output: /var/lib/teleport/teleport.log
    severity: DEBUG
  data_dir: /var/lib/teleport/
  storage:
    type: dir
proxy_service:
  enabled: yes
  public_addr: 0.0.0.0:3080
  web_listen_addr: 0.0.0.0:3080
  listen_addr: 0.0.0.0:3023
  kubernetes:
    enabled: yes
    kubeconfig_file: /etc/kubeconfig
    public_addr: 0.0.0.0:3026
    listen_addr: 0.0.0.0:3026
auth_service:
  enabled: yes
  tokens:
    - "trusted_cluster:joinc"
  authentication:
    type: local
    second_factor: off
  listen_addr: 0.0.0.0:3025
  cluster_name: "teleport-master"
  session_recording: "node"
ssh_service:
  enabled: yes

2. Created a kubeconfig with https://github.com/gravitational/teleport/blob/master/examples/gke-auth/get-kubeconfig.sh:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: x ( obscured )
    server: https://1.1.1.1
  name: k8s
contexts:
- context:
    cluster: k8s
    user: teleport
  name: k8s
current-context: k8s
kind: Config
preferences: {}
users:
- name: teleport
  user:
    client-certificate-data: x ( obscured )
    client-key-data: x ( obscured )
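
As an optional sanity check (the path is the kubeconfig_file from teleport.yaml above), confirm the generated file parses and shows the expected context:

$ kubectl --kubeconfig=/etc/kubeconfig config get-contexts
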
3. Created cluster role permissions in GKE:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: teleport-impersonation
rules:
- apiGroups:
  - ""
  resources:
  - users
  - groups
  - serviceaccounts
  verbs:
  - impersonate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: teleport
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: teleport-impersonation
subjects:
- kind: ServiceAccount
  name: teleport
  namespace: default
- kind: User
  name: system:anonymous
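
Another optional check, assuming the kubeconfig from step 2 authenticates as the teleport service account, to verify the impersonation binding took effect:

# should print "yes" if the ClusterRoleBinding above applies to this identity
$ kubectl --kubeconfig=/etc/kubeconfig auth can-i impersonate users
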
4. Started Teleport:

sudo /usr/local/bin/teleport start -d --config=/etc/teleport.yaml --pid-file=/var/run/teleport.pid --insecure &
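
Optionally, confirm the listeners from teleport.yaml actually came up after starting the daemon:

# SSH proxy (3023), Kubernetes proxy (3026) and web UI (3080)
$ sudo ss -tlnp | grep -E '3023|3026|3080'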

5. Added a user and logged in:

tctl users add user1 --k8s-groups="system:masters"
tsh login --proxy=1.1.1.1 --insecure --user=user1

6. Can log in to the Teleport node without issues:

tsh ssh user1@teleport-master

I think you’re misunderstanding what Teleport does here. The Kubernetes proxy functionality affects the machine where you run tsh login rather than the Teleport node itself.

I would expect the machine where you run tsh login to have an updated ~/.kube/config file containing details on how to connect to Kubernetes.

I would not expect there to be a ~/.kube/config file on the Teleport node.

What is in the ~/.kube/config file on the machine where you run tsh login --proxy=1.1.1.1 --insecure --user=user1?
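
For reference, a quick way to see what kubectl is actually picking up on that machine:

# run these on the machine where you ran tsh login
$ kubectl config current-context
$ kubectl config get-contexts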

Thanks for the info. The following in the "Admin Manual" confused me:

# Authentication step to retrieve the certificates. tsh login places the SSH
# certificate into `~/.tsh` as usual and updates kubeconfig with Kubernetes
# credentials:
$ tsh --proxy=teleport.example.com login

# Execute SSH commands to access SSH nodes:
$ tsh ssh login@ssh-node

# Execute any kubectl commands to access the Kubernetes cluster:
$ kubectl get pods

but ~/.kube/config is not updated after login:

teleport-dev-2:~/.kube$ ls -ltr
total 20
drwxr-x--- 3 4096 Feb 27 10:09 cache
-rwxr-xr-x 1 5945 Mar 27 13:05 config-backup
-rwxrwxrwx 1 2491 Mar 27 13:16 config
drwxr-x--- 3 4096 Mar 27 13:18 http-cache

teleport-dev-2:~/.kube$ sudo tsh login --proxy=1.1.1.1:3080 --insecure --user=user1
WARNING: You are using insecure connection to SSH proxy https://1.1.1.1:3080
Enter password for Teleport user user1:
WARNING: You are using insecure connection to SSH proxy https://1.1.1.1:3080

Profile URL: https://1.1.1.1:3080
Logged in as: user1
Cluster: teleport-master
Roles: admin*
Logins: user1
Valid until: 2020-03-28 01:35:45 +0000 UTC [valid for 12h0m0s]
Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty

teleport-dev-2:~/.kube$ ls -ltr
total 20
drwxr-x--- 3 4096 Feb 27 10:09 cache
-rwxr-xr-x 1 5945 Mar 27 13:05 config-backup
-rwxrwxrwx 1 2491 Mar 27 13:16 config
drwxr-x--- 3 4096 Mar 27 13:18 http-cache

This is probably how it should be worded:

# Authentication step to retrieve the certificates. tsh login places the SSH
# certificate into `~/.tsh` as usual and updates `~/.kube/config` on your local
# client with Kubernetes credentials:
[client] $ tsh --proxy=teleport.example.com login

# Execute SSH commands on your local client to access SSH nodes:
[client] $ tsh ssh login@ssh-node

# Execute any kubectl commands on your local client to access the Kubernetes cluster:
[client] $ kubectl get pods

The wording of this is confusing, I’ll try to get it updated. Essentially it’s saying you can run all of these commands on the client where you run tsh login, not that you can run kubectl on a node and have it work.

If you wanted to be able to run kubectl on your node, you would need to run tsh login from the node and not from your local client.
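
In other words, something along these lines run on the node itself (same proxy address and user as in your steps):

# on the Teleport node, not on the local client
$ tsh login --proxy=1.1.1.1:3080 --insecure --user=user1
$ kubectl get pods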

Thanks!

~/.kube/config has not been updated after tsh login. Could the --insecure option be an issue?

--insecure shouldn’t cause you trouble here.

You’re definitely checking ~/.kube/config on the same machine where you’re running tsh login? Are there any entries in the file at all? What’s the output of kubectl config get-contexts?

The .kube/config file was created in root's home directory rather than my login user's, since teleport is running as root.
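
For reference, since tsh login was run with sudo above, the file likely ended up under /root/.kube/config; it can be checked with:

$ sudo kubectl --kubeconfig /root/.kube/config config get-contexts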

Thanks!