Node showing on dashboard but user unable to log in

Hello Team,

I have set up Teleport and was able to add the node, which now shows up on the dashboard. I have also added a user, and that user is able to access the dashboard. But when that user tries to access the node from the dashboard, he/she is not able to.

However, when I log in from the Teleport server itself using the command below, the same user is able to connect:

tsh --proxy=localhost ssh --user=rana rana@x.y.z.z

Can you please guide me on what mistake I am making? It seems something is wrong in the teleport.yaml configuration file.

Please help me.

Thanks.

What error do you get when trying to access the node via the web interface? Can you post a screenshot?
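In the meantime, it may also be worth confirming that the node is actually registered with the auth server. Running this on the auth server should list it (just a suggestion, assuming tctl can reach the local auth service):

tctl nodes ls

If the node is missing from that list, the web UI has nothing to route the session to.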

Hello Gus,

Thank you for your response.

Please refer to the attached screenshot.
[Screenshot: Selection_033]

I am able to connect to servers from the dashboard when I run the service using the command below:

teleport start --roles=node --token=0401dd8b8949202197942376f2dce0b8 --ca-pin=sha256:97bae0c606fb9d1648c0f410fd5f5c3b3aeb6d442c1105191ff8e9f525f25ffa --auth-server=x.y.z.z:3025

But when I run Teleport via the systemd service, the service starts fine, yet I am unable to connect to the server from the dashboard. So it seems some option enabled in the teleport.yaml file is restricting access from the Teleport dashboard.

Can you post the /etc/teleport.yaml file that you’re using?
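It would also help to see exactly what systemd is running and what the service logs say. Assuming the unit is named teleport, something like:

systemctl cat teleport
journalctl -u teleport --no-pager -n 50

The manual command joins the cluster as a pure node, so any difference between it and what the systemd unit starts is a good place to look.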

Hello Gus,

I am using the teleport.yaml file below, which I copied from the Teleport documentation and in which I changed only the auth_token and ca_pin. The only other change is that I commented out the S3 section used to store session recordings in S3.

# By default, this file should be stored in /etc/teleport.yaml

# This section of the configuration file applies to all teleport
# services.
teleport:
    # nodename allows you to assign an alternative name this node can be reached by.
    # by default it's equal to the hostname
    nodename: teleportnode

    # Data directory where Teleport daemon keeps its data.
    # See "Filesystem Layout" section above for more details.
    data_dir: /var/lib/teleport

    # Invitation token used to join a cluster. it is not used on
    # subsequent starts
    auth_token: 0401dd8b8949202197942376f2dce0b8

    # Optional CA pin of the auth server. This enables more secure way of adding new
    # nodes to a cluster. See "Adding Nodes" section above.
    ca_pin: "sha256:97bae0c606fb9d1648c0f410fd5f5c3b3aeb6d442c1105191ff8e9f525f25ffa"

    # When running in multi-homed or NATed environments, Teleport nodes need
    # to know which IP they will be reachable at by other nodes
    #
    # This value can be specified as FQDN e.g. host.example.com
    advertise_ip: my_teleport_server_public_ip

    # list of auth servers in a cluster. you will have more than one auth server
    # if you configure teleport auth to run in HA configuration
    auth_servers:
        - my_teleport_public_ip:3025
#        - 10.1.0.6:3025

    # Teleport throttles all connections to avoid abuse. These settings allow
    # you to adjust the default limits
    connection_limits:
        max_connections: 1000
        max_users: 250

    # Logging configuration. Possible output values are 'stdout', 'stderr' and
    # 'syslog'. Possible severity values are INFO, WARN and ERROR (default).
    log:
        output: stderr
        severity: ERROR

    # Configuration for the storage back-end used for the cluster state and the
    # audit log. Several back-end types are supported. See "High Availability"
    # section of this Admin Manual below to learn how to configure DynamoDB, 
    # S3, etcd and other highly available back-ends.
#    storage:
        # By default teleport uses the `data_dir` directory on a local filesystem
#        type: dir

        # Array of locations where the audit log events will be stored. by
        # default they are stored in `/var/lib/teleport/log`
#        audit_events_uri: ['file:///var/lib/teleport/log', 'dynamodb://events_table_name']

        # Use this setting to configure teleport to store the recorded sessions in
        # an AWS S3 bucket. see "Using Amazon S3" chapter for more information.
#        audit_sessions_uri: 's3://example.com/path/to/bucket?region=us-east-1'

    # Cipher algorithms that the server supports. This section only needs to be
    # set if you want to override the defaults.
    ciphers:
      - aes128-ctr
      - aes192-ctr
      - aes256-ctr
      - aes128-gcm@openssh.com
      - chacha20-poly1305@openssh.com

    # Key exchange algorithms that the server supports. This section only needs
    # to be set if you want to override the defaults.
    kex_algos:
      - curve25519-sha256@libssh.org
      - ecdh-sha2-nistp256
      - ecdh-sha2-nistp384
      - ecdh-sha2-nistp521

    # Message authentication code (MAC) algorithms that the server supports.
    # This section only needs to be set if you want to override the defaults.
    mac_algos:
      - hmac-sha2-256-etm@openssh.com
      - hmac-sha2-256

    # List of the supported ciphersuites. If this section is not specified,
    # only the default ciphersuites are enabled.
    ciphersuites:
       - tls-rsa-with-aes-128-gcm-sha256
       - tls-rsa-with-aes-256-gcm-sha384
       - tls-ecdhe-rsa-with-aes-128-gcm-sha256
       - tls-ecdhe-ecdsa-with-aes-128-gcm-sha256
       - tls-ecdhe-rsa-with-aes-256-gcm-sha384
       - tls-ecdhe-ecdsa-with-aes-256-gcm-sha384
       - tls-ecdhe-rsa-with-chacha20-poly1305
       - tls-ecdhe-ecdsa-with-chacha20-poly1305


# This section configures the 'auth service':
auth_service:
    # Turns 'auth' role on. Default is 'yes'
    enabled: yes

    # A cluster name is used as part of a signature in certificates
    # generated by this CA.
    #
    # We strongly recommend explicitly setting it to something meaningful, as it
    # becomes important when configuring trust between multiple clusters.
    #
    # By default an automatically generated name is used (not recommended)
    #
    # IMPORTANT: if you change cluster_name, it will invalidate all generated
    # certificates and keys (may need to wipe out /var/lib/teleport directory)
    cluster_name: "main"

    authentication:
        # default authentication type. possible values are 'local', 'oidc' and 'saml'
        # only local authentication (Teleport's own user DB) is supported in the open
        # source version
        type: local
        # second_factor can be off, otp, or u2f
        second_factor: otp
        # this section is used if second_factor is set to 'u2f'
        u2f:
            # app_id must point to the URL of the Teleport Web UI (proxy) accessible
            # by the end users
            app_id: https://teleport_server_public_ip:3080
            # facets must list all proxy servers if there are more than one deployed
            facets:
            - https://teleport_server_public_ip:3080

    # IP and the port to bind to. Other Teleport nodes will be connecting to
    # this port (AKA "Auth API" or "Cluster API") to validate client
    # certificates
    listen_addr: 0.0.0.0:3025

    # The optional DNS name of the auth server if it is located behind a load balancer.
    # (see public_addr section below)
    public_addr: teleport_server_public_ip:3025

    # Pre-defined tokens for adding new nodes to a cluster. Each token specifies
    # the role a new node will be allowed to assume. The more secure way to
    # add nodes is to use the `tctl nodes add --ttl` command to generate auto-expiring
    # tokens.
    #
    # We recommend using tools like `pwgen` to generate sufficiently random
    # tokens of 32+ byte length.
    tokens:
        - "proxy,node:xxxxx"
        - "auth:yyyy"

    # Optional setting for configuring session recording. Possible values are:
    #    "node"  : sessions will be recorded on the node level  (the default)
    #    "proxy" : recording on the proxy level, see "recording proxy mode" section.
    #    "off"   : session recording is turned off
    session_recording: "node"

    # This setting determines if a Teleport proxy performs strict host key checks.
    # Only applicable if session_recording=proxy, see "recording proxy mode" for details.
    proxy_checks_host_keys: yes

    # Determines if SSH sessions to cluster nodes are forcefully terminated
    # after no activity from a client (idle client).
    # Examples: "30m", "1h" or "1h30m"
    client_idle_timeout: never

    # Determines if the clients will be forcefully disconnected when their
    # certificates expire in the middle of an active SSH session. (default is 'no')
    disconnect_expired_cert: no

    # License file to start auth server with. Note that this setting is ignored
    # in open-source Teleport and is required only for Teleport Pro, Business
    # and Enterprise subscription plans.
    #
    # The path can be either absolute or relative to the configured `data_dir`
    # and should point to the license file obtained from Teleport Download Portal.
    #
    # If not set, by default Teleport will look for the `license.pem` file in
    # the configured `data_dir`.
    license_file: /var/lib/teleport/license.pem

    # DEPRECATED in Teleport 3.2 (moved to proxy_service section)
    kubeconfig_file: /path/to/kubeconfig

# This section configures the 'node service':
ssh_service:
    # Turns 'ssh' role on. Default is 'yes'
    enabled: yes

    # IP and the port for SSH service to bind to.
    listen_addr: 0.0.0.0:3022

    # The optional public address of the SSH service. This is useful if administrators
    # want to allow users to connect to nodes directly, bypassing a Teleport proxy
    # (see public_addr section below)
    public_addr: node.example.com:3022

    # See explanation of labels in "Labeling Nodes" section below
    labels:
        role: testing
        type: teleportnode

    # List of the commands to periodically execute. Their output will be used as node labels.
    # See "Labeling Nodes" section below for more information and more examples.
    commands:
    # this command will add a label 'arch=x86_64' to a node
    - name: arch
      command: ['/bin/uname', '-p']
      period: 1h0m0s

    # enables reading ~/.tsh/environment before creating a session. by default
    # set to false, can be set true here or as a command line flag.
    permit_user_env: false

    # configures PAM integration. see below for more details.
    pam:
        enabled: no
        service_name: teleport

# This section configures the 'proxy service'
proxy_service:
    # Turns 'proxy' role on. Default is 'yes'
    enabled: yes

    # SSH forwarding/proxy address. Command line (CLI) clients always begin their
    # SSH sessions by connecting to this port
    listen_addr: 0.0.0.0:3023

    # Reverse tunnel listening address. An auth server (CA) can establish an
    # outbound (from behind the firewall) connection to this address.
    # This will allow users of the outside CA to connect to behind-the-firewall
    # nodes.
    tunnel_listen_addr: 0.0.0.0:3024

    # The HTTPS listen address to serve the Web UI and also to authenticate the
    # command line (CLI) users via password+HOTP
    web_listen_addr: teleport_public_ip:3080

    # The DNS name of the proxy HTTPS endpoint as accessible by cluster users.
    # Defaults to the proxy's hostname if not specified. If running multiple
    # proxies behind a load balancer, this name must point to the load balancer
    # (see public_addr section below)
    public_addr: 159.65.153.187:3080

    # The DNS name of the proxy SSH endpoint as accessible by cluster clients.
    # Defaults to the proxy's hostname if not specified. If running multiple proxies 
    # behind a load balancer, this name must point to the load balancer. 
    # Use a TCP load balancer because this port uses SSH protocol.
    ssh_public_addr: proxy.example.com:3023

    # TLS certificate for the HTTPS connection. Configuring these properly is
    # critical for Teleport security.
    https_key_file: /var/lib/teleport/webproxy_key.pem
    https_cert_file: /var/lib/teleport/webproxy_cert.pem

    # This section configures the Kubernetes proxy service
    kubernetes:
        # Turns 'kubernetes' proxy on. Default is 'no'
        enabled: yes

        # Kubernetes proxy listen address.
        listen_addr: 0.0.0.0:3026

        # The DNS name of the Kubernetes proxy server that is accessible by cluster clients.
        # If running multiple proxies behind a load balancer, this name must point to the
        # load balancer.
        public_addr: ['kube.example.com:3026']

        # This setting is not required if the Teleport proxy service is 
        # deployed inside a Kubernetes cluster. Otherwise, Teleport proxy 
        # will use the credentials from this file:
        kubeconfig_file: /path/to/kube/config

Note: We are using a single server as both the auth server and the proxy. Whenever we use tctl commands on the Teleport server, we use --proxy=localhost.
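For comparison, a minimal node-only teleport.yaml that mirrors the working teleport start command from earlier would look something like the sketch below. It reuses the token, CA pin, and auth server address from that command, and explicitly disables the auth and proxy roles that the full example file enables (a node joining an existing cluster should not run its own auth or proxy):

teleport:
    nodename: teleportnode
    data_dir: /var/lib/teleport
    auth_token: 0401dd8b8949202197942376f2dce0b8
    ca_pin: "sha256:97bae0c606fb9d1648c0f410fd5f5c3b3aeb6d442c1105191ff8e9f525f25ffa"
    # same address as --auth-server in the manual command (redacted here)
    auth_servers:
        - x.y.z.z:3025

# disabled: the auth service already runs on the main Teleport server
auth_service:
    enabled: no

# disabled: the proxy service already runs on the main Teleport server
proxy_service:
    enabled: no

ssh_service:
    enabled: yes
    listen_addr: 0.0.0.0:3022

If systemd points at a config like this and the node still rejects sessions from the web UI, the journalctl output suggested above should show why.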