# EKS (Instance Profile)

{% hint style="info" %}
For an overview of the available Kubernetes features and supported platforms, please see our [Kubernetes guide](https://docs.strongdm.com/admin/resources/clusters).
{% endhint %}

## Overview

This guide describes how to manage access to an Amazon Elastic Kubernetes Service (EKS) Instance Profile cluster via the StrongDM Admin UI. This cluster type supports AWS IAM role authentication for EKS resources and gateways running in EC2. EKS clusters are added and managed in both the Admin UI and the AWS Management Console.

{% hint style="info" %}
To learn how to enable automatic resource discovery within your Kubernetes cluster, or how to use privilege levels to let users request different levels of access to the cluster, please read the [Kubernetes Discovery and Privilege Levels](https://docs.strongdm.com/admin/resources/clusters/kubernetes-management) section before following this configuration guide.
{% endhint %}

## Prerequisites

Before you begin, ensure that the EKS endpoint you are connecting is accessible from one of your StrongDM gateways or relays. See our [Nodes](https://docs.strongdm.com/admin/networking/gateways-and-relays) guide for more information.

{% hint style="info" %}
If you are using kubectl 1.30 or higher, it defaults to using WebSockets, which the StrongDM client did not support prior to version 45.35.0. You can remedy this by taking one of the following actions:

* Update your client to version 45.35.0 or greater.
* Set the environment variable `KUBECTL_REMOTE_COMMAND_WEBSOCKETS=false` to restore the previous kubectl behavior.
{% endhint %}
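As an example, in a POSIX shell you can set the variable for the current session before running kubectl:

```shell
# Disable kubectl's WebSocket transport for this shell session,
# restoring the pre-1.30 SPDY behavior
export KUBECTL_REMOTE_COMMAND_WEBSOCKETS=false
```

Add the line to your shell profile if you want the setting to persist across sessions.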

## Credentials-reading order

During authentication with your AWS resource, the system looks for credentials in the following places in this order:

1. Environment variables (if the Enable Environment Variables box is checked)
2. Shared credentials file
3. EC2 role or ECS profile

As soon as the relay or gateway finds credentials, it stops searching and uses them. Due to this behavior, we recommend that all similar AWS resources with these authentication options use the same method when added to StrongDM.

For example, suppose you use environment variables for an AWS Management Console resource and EC2 role authentication for an EKS cluster. When users attempt to connect to the EKS cluster through the gateway or relay, the environment variables are found first and used in the attempt to authenticate with the EKS cluster, which then fails. To avoid this problem at the gateway or relay level, use the same authentication method for all such resources. Alternatively, you can segment your network by creating subnets with their own relays and sets of resources, so that each relay can be configured to work correctly with just those resources.
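If you are unsure which credentials a relay or gateway host will pick up, you can check which identity its credential provider chain resolves to. Assuming the AWS CLI is installed on the host, a quick check is:

```shell
# Print the account and ARN of whatever credentials the default AWS
# provider chain resolves to (environment variables, shared
# credentials file, or EC2/ECS instance role, in that order)
aws sts get-caller-identity
```

If the output shows an identity other than the instance role you expect, an earlier step in the chain (such as environment variables) is supplying credentials.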

## Cluster setup

{% hint style="info" %}
You can find information about your cluster in the AWS Management Console on your EKS cluster’s general configuration page.
{% endhint %}

### Manage the IAM role

1. In the AWS Management Console, go to **Identity and Access Management (IAM) > Roles**.
2. Create a role to use for accessing the cluster, or select an existing role.
3. Attach or set the role to what you are using to run your relay (for example, an EC2 instance, ECS task, EKS pod, and so forth). See AWS documentation for information on how to [attach roles to EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html), [set the role of an ECS task](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html), and [set the role of a pod in EKS](https://docs.aws.amazon.com/eks/latest/userguide/pod-execution-role.html).
4. Copy the **Role ARN** (the Amazon Resource Name specifying the role).
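If you prefer the command line, the same lookup can be done with the AWS CLI. The role name below is only an example; substitute the role you created or selected:

```shell
# Print the ARN of an existing IAM role (role name is an example)
aws iam get-role --role-name StrongDM-EKS-Access \
  --query 'Role.Arn' --output text
```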

### Grant the role the ability to interact with the cluster

1. While authenticated to the cluster using your existing connection method, run the following command to edit the `aws-auth` ConfigMap within Kubernetes:

```sh
kubectl edit -n kube-system configmap/aws-auth
```

2. Copy the following snippet and paste it into the file under the `data:` heading, as shown:

```yml
data:
  mapRoles: |
    - rolearn: <ARN_OF_INSTANCE_ROLE>
      username: <USERNAME>
      groups:
        - <GROUP>
```

3. In that snippet, do the following:

   1. Replace `<ARN_OF_INSTANCE_ROLE>` with the ARN of the instance role.
   2. Under `groups:`, replace `<GROUP>` with the appropriate group for the permissions level you want this StrongDM connection to have (see [Kubernetes Roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings) for more details).

   Example:

```yml
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/Example
      username: system:node:{{EC2PrivateDNSNameExample}}
      groups:
        - system:masters
```

{% hint style="warning" %}
The name under `groups:` in the `mapRoles` block must match the subject name in the desired ClusterRoleBinding, not the name of the ClusterRoleBinding itself. For example, if a default EKS cluster has a ClusterRoleBinding called `cluster-admin`, with a group named `system:masters`, then the name `system:masters` must be input in the `mapRoles` block under `groups:`.

In the following example of the default ClusterRoleBinding for `cluster-admin` on an unconfigured EKS cluster, you can see that the group name under `Subjects` is `system:masters`.

```
Name:         cluster-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
Role:
  Kind:  ClusterRole
  Name:  cluster-admin
Subjects:
  Kind   Name            Namespace
  ----   ----            ---------
  Group  system:masters
```

{% endhint %}

Also note that in the YAML file, the indentation is **critically important**. If the indentation is wrong, the edit command does not trigger an error message, but the change silently fails. Note that `mapRoles` should be at the same indentation level as `mapUsers` in that file.

4. Save the file and exit your text editor.
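To confirm that the mapping was saved, you can read the ConfigMap back:

```shell
# Print the aws-auth ConfigMap; the new role mapping should
# appear under the mapRoles key
kubectl describe configmap aws-auth -n kube-system
```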

## Resource Configuration in StrongDM

This section provides instructions for adding the resource to StrongDM using the Admin UI, CLI, Terraform provider, or SDKs.

{% tabs %}
{% tab title="Admin UI" %}
**Set up and Manage With the Admin UI**

If using the Admin UI to add the resource to StrongDM, use the following steps.

1. Log in to the Admin UI and go to **Infrastructure > Clusters**.
2. Click the **Add Resource** button.
3. Select **Elastic Kubernetes Service (instance profile)** as the **Resource Type** and set other [resource properties](#resource-properties) to configure how the StrongDM relay connects.

   ![](https://4180056444-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FF7eka9SH5TT8nJm2ZfWj%2Fuploads%2Fgit-blob-f15f85a46960e1714baf2b9d72e24f1a55e64e03%2Fadd-resource-eks-instanceprofile.png?alt=media)
4. Click **Create** to save the resource.

The Admin UI updates and shows your new cluster in a green or yellow state. Green indicates a successful connection. If it is yellow, click the **pencil** icon to the right of the cluster to reopen the **Connection Details** screen. Then click **Diagnostics** to determine where the connection is failing.
{% endtab %}

{% tab title="CLI" %}
**Set up and Manage With the CLI**

This section provides general steps on how to configure and manage the resource using the StrongDM CLI. For more information and examples, please see the [CLI Reference](https://docs.strongdm.com/references/cli) documentation.

1. In your terminal or Command Prompt, log in to StrongDM:

   ```sh
   sdm login
   ```
2. Run `sdm admin clusters add amazon-eks-instance-profile --help` to view the help text for the command, which shows you how to use the command and what options (properties) are available. Note which [properties](#resource-properties) are required and collect the values for them.

   ```sh
   NAME:
      sdm admin clusters add amazon-eks-instance-profile - create Elastic Kubernetes Service (instance profile) cluster

   USAGE:
      sdm admin clusters add amazon-eks-instance-profile [command options] <name>

   OPTIONS:
      --allow-resource-role-bypass                 (For legacy orgs) allows users to fallback to the existing authentication mode (Leased Credential or Identity Set) when a resource role is not provided.
      --bind-interface value                       IP address on which to listen for connections to this resource on clients. Specify "default", "loopback", or "vnm" to automatically allocate an available address from the corresponding IP range configured in the organization. (default: "default")
      --certificate-authority value                (secret)
      --cluster-name value                         (required)
      --discovery-enabled                          Enable discovery for the cluster.
      --discovery-username value                   The user to impersonate in the cluster when running discovery. Required if the cluster is configured for identity aliases. (conditional)
      --egress-filter value                        apply filter to select egress nodes e.g. 'field:name tag:key=value ...'
      --endpoint value                             (required)
      --healthcheck-namespace value                This path will be used to check the health of your connection.  Defaults to default.
      --identity-alias-healthcheck-username value  (conditional)
      --identity-set-id value
      --identity-set-name value                    set the identity set by name
      --port-override value                        Port on which to listen for connections to this resource on clients. Specify "-1" to automatically allocate an available port. (default: -1)
      --proxy-cluster-id value                     proxy cluster id
      --region value                               (required)
      --role-arn value                             (secret)
      --role-external-id value                     (secret)
      --secret-store-id value                      secret store id
      --subdomain value, --bind-subdomain value    DNS subdomain through which this resource may be accessed on clients (e.g. "app-prod" allows the resource to be accessed as "app-prod.<your-org-name>.<sdm-proxy-domain>"). Only applicable to HTTP-based resources or resources using virtual networking mode.
      --tags value                                 tags e.g. 'key=value,...'
      --template, -t                               display a JSON template
      --timeout value                              set time limit for command
   ```
3. Then run `sdm admin clusters add amazon-eks-instance-profile <RESOURCE_NAME>` and set all required properties with their values. For example:

   ```sh
   sdm admin clusters add amazon-eks-instance-profile "eks-cluster-ip-prod" \
     --cluster-name "eks-prod-instance-profile" \
     --region "us-east-1" \
     --endpoint "https://ABCDE12345.gr7.us-east-1.eks.amazonaws.com" \
     --certificate-authority "/etc/strongdm/certs/eks-ca.crt" \
     --role-arn "arn:aws:iam::123456789012:role/StrongDM-EKS-Access" \
     --role-external-id "ext-id-eks-ip-prod-2025" \
     --identity-set-name "EKS Instance Profile Admins" \
     --identity-alias-healthcheck-username "svc_eks_health" \
     --discovery-enabled \
     --discovery-username "sdm-discovery" \
     --healthcheck-namespace "default" \
     --bind-interface "default" \
     --port-override -1 \
     --egress-filter 'field:name tag:env=prod tag:region=us-east' \
     --proxy-cluster-id "plc_0123456789abcdef" \
     --secret-store-id "ss_abcdef0123456789" \
     --subdomain "eks-ip-prod01" \
     --tags "env=prod,cloud=aws,platform=kubernetes,auth=instance-profile,team=platform" \
     --timeout 30
   ```
4. Check that the resource has been added. The output of the following command should show the resource's name:

   ```sh
   sdm admin clusters list
   ```

{% endtab %}

{% tab title="Terraform" %}
**Set up and Manage With Terraform**

This section provides an example of how to configure and manage the resource using the Terraform provider. For more information and examples, please see the [Terraform provider](https://github.com/strongdm/terraform-provider-sdm) documentation.

```hcl
# Install StrongDM provider
terraform {
  required_providers {
    sdm = {
      source  = "strongdm/sdm"
      version = "16.5.0"
    }
  }
}

# Configure StrongDM provider
provider "sdm" {
  # Add API access key and secret key from the Admin UI
  api_access_key = "njjSn...5hM"
  api_secret_key = "ziG...="
}

# Create Amazon EKS (Instance Profile) cluster
resource "sdm_resource" "eks_instance_profile_prod" {
  amazon_eks_instance_profile {
    # Required
    name         = "eks-cluster-ip-prod"                                   # <name>
    cluster_name = "eks-prod-instance-profile"                              # --cluster-name
    region       = "us-east-1"                                              # --region
    endpoint     = "https://ABCDE12345.gr7.us-east-1.eks.amazonaws.com"    # --endpoint

    # TLS / CA
    certificate_authority = file("/etc/strongdm/certs/eks-ca.crt")         # --certificate-authority

    # Optional role assumption (with instance profile as base auth)
    role_arn         = "arn:aws:iam::123456789012:role/StrongDM-EKS-Access" # --role-arn (optional)
    role_external_id = "ext-id-eks-ip-prod-2025"                            # --role-external-id (optional)

    # Identity & discovery
    identity_set_name                   = "EKS Instance Profile Admins"     # --identity-set-name
    identity_alias_healthcheck_username = "svc_eks_health"                  # --identity-alias-healthcheck-username (conditional)
    discovery_enabled                   = true                               # --discovery-enabled
    discovery_username                  = "sdm-discovery"                   # --discovery-username
    healthcheck_namespace               = "default"                          # --healthcheck-namespace

    # Common networking options
    bind_interface = "default"                                              # --bind-interface ("default" | "loopback" | "vnm")
    port_override  = -1                                                     # --port-override (-1 = auto-allocate)
    egress_filter  = "field:name tag:env=prod tag:region=us-east"           # --egress-filter
    subdomain      = "eks-ip-prod01"                                        # --subdomain / --bind-subdomain (for VN access)

    # Optional integrations
    proxy_cluster_id = "plc_0123456789abcdef"                               # --proxy-cluster-id
    secret_store_id  = "ss_abcdef0123456789"                                # --secret-store-id (for CA or extras)

    # (Legacy orgs) allow fallback auth when no resource role is provided
    allow_resource_role_bypass = false                                      # --allow-resource-role-bypass

    # Tags
    tags = {                                                                 # --tags
      env      = "prod"
      cloud    = "aws"
      platform = "kubernetes"
      auth     = "instance-profile"
      team     = "platform"
    }
  }
}
```

{% endtab %}

{% tab title="SDKs" %}
**Set up and Manage With SDKs**

In addition to the Admin UI, CLI, and Terraform, you may configure and manage your resource with any of the following SDK options: Go, Java, Python, and Ruby. Please see the following references for more information and examples.

| SDK    | API documentation                                               | Repository                                                              | Examples                                                                          |
| ------ | --------------------------------------------------------------- | ----------------------------------------------------------------------- | --------------------------------------------------------------------------------- |
| Go     | [pkg.go.dev](https://pkg.go.dev/github.com/strongdm/strongdm-sdk-go/v16) | [strongdm-sdk-go](https://github.com/strongdm/strongdm-sdk-go)         | [Go SDK Examples](https://github.com/strongdm/strongdm-sdk-go-examples)         |
| Java   | [javadoc](https://strongdm.github.io/strongdm-sdk-java-docs/)   | [strongdm-sdk-java](https://github.com/strongdm/strongdm-sdk-java)     | [Java SDK Examples](https://github.com/strongdm/strongdm-sdk-java-examples)     |
| Python | [pdocs](https://strongdm.github.io/strongdm-sdk-python-docs/)   | [strongdm-sdk-python](https://github.com/strongdm/strongdm-sdk-python) | [Python SDK Examples](https://github.com/strongdm/strongdm-sdk-python-examples) |
| Ruby   | [RubyDoc](https://www.rubydoc.info/gems/strongdm)               | [strongdm-sdk-ruby](https://github.com/strongdm/strongdm-sdk-ruby)     | [Ruby SDK Examples](https://github.com/strongdm/strongdm-sdk-ruby-examples)     |

{% endtab %}
{% endtabs %}

## Resource properties

The **EKS (instance profile)** cluster type has the following properties.

<table><thead><tr><th width="199.788330078125">Property</th><th width="130.1932373046875">Requirement</th><th>Description</th></tr></thead><tbody><tr><td><strong>Display Name</strong></td><td>Required</td><td>Meaningful name to display the resource throughout StrongDM; exclude special characters like quotes (") or angle brackets (&#x3C; or >)</td></tr><tr><td><strong>Cluster Type</strong></td><td>Required</td><td><strong>Elastic Kubernetes Service (instance profile)</strong></td></tr><tr><td><strong>Proxy Cluster</strong></td><td>Required</td><td>Defaults to "None (use gateways)"; if using <a href="../../networking/proxy-clusters">proxy clusters</a>, select the appropriate cluster to proxy traffic to this resource</td></tr><tr><td><strong>Endpoint</strong></td><td>Required</td><td>API server endpoint of the EKS cluster in the format <code>&#x3C;ID>.&#x3C;REGION>.eks.amazonaws.com</code>, such as <code>A95FBC180B680B58A6468EF360D16E96.yl4.us-west-2.eks.amazonaws.com</code>; relay server should be able to <a href="#prerequisites">connect to your EKS endpoint</a></td></tr><tr><td><strong>Connectivity Mode</strong></td><td>Required</td><td>Select either <strong>Virtual Networking Mode</strong>, which lets users connect to the resource with a software-defined, IP-based network; or <strong>Loopback Mode</strong>, which allows users to connect to the resource using the local loopback adapter in their operating system; this field is shown if <a href="../../clients/client-networking/virtual-networking-mode">Virtual Networking Mode</a> is enabled for your organization</td></tr><tr><td><strong>IP Address</strong></td><td>Optional</td><td>If <strong>Virtual Networking Mode</strong> is the selected connectivity mode, an IP address value in the configured Virtual Networking Mode subnet in the organization network settings; if <strong>Loopback Mode</strong> is the selected connectivity mode, an IP address value in the configured Loopback IP range in the organization network settings
(by default, <code>127.0.0.1</code>); if not specified, an available IP address in the configured IP address space for the selected connectivity mode will be automatically assigned; this field is shown if <a href="../../clients/client-networking/virtual-networking-mode">Virtual Networking Mode</a> and/or <a href="../../clients/client-networking/loopback-ip-ranges">multi-loopback mode</a> is enabled for your organization</td></tr><tr><td><strong>Port Override</strong></td><td>Optional</td><td>If <strong>Virtual Networking Mode</strong> is the selected connectivity mode, a port value between 1 and 65535 that is not already in use by another resource with the same IP address; if <strong>Loopback Mode</strong> is the selected connectivity mode, a port value between 1024 and 64999 that is not already in use by another resource with the same IP address; when left empty with Virtual Networking Mode, the system assigns the default port to this resource; when left empty for Loopback Mode, an available port that is not already in use by another resource is assigned; the preferred port can also be modified later from the <a href="../port-overrides">Port Overrides settings</a></td></tr><tr><td><strong>DNS</strong></td><td>Optional</td><td>If Virtual Networking Mode is the selected connectivity mode, a unique hostname alias for this resource; when set, causes the desktop app to display this resource's human-readable DNS name (for example, <code>k8s.my-organization-name</code>) instead of the bind address that includes IP address and port (for example, <code>100.64.100.100:5432</code>)</td></tr><tr><td><strong>Secret Store</strong></td><td>Optional</td><td>Credential store location; defaults to none (credentials are stored in StrongDM resource configuration); to learn more, see <a href="#secret-store-options">Secret Store options</a></td></tr><tr><td><strong>Server CA</strong></td><td>Optional</td><td>Pasted server certificate (plaintext or Base64-encoded), or imported PEM file; you
can either generate the server certificate on the API server or get it in Base64 format from your existing Kubernetes configuration (kubeconfig) file</td></tr><tr><td><strong>Cluster Name</strong></td><td>Required</td><td>Name of the EKS cluster</td></tr><tr><td><strong>Region</strong></td><td>Required</td><td>Region of the EKS cluster, such as <code>us-west-1</code></td></tr><tr><td><strong>Healthcheck Namespace</strong></td><td>Optional</td><td>If enabled for your organization, the namespace used for the resource healthcheck; defaults to <code>default</code> if empty; supplied credentials must have the rights to perform one of the following kubectl commands in the specified namespace: <code>get pods</code>, <code>get deployments</code>, or <code>describe namespace</code></td></tr><tr><td><strong>Enable Resource Discovery</strong></td><td>Optional</td><td>Enables <a href="../kubernetes-management#resource-discovery">automatic discovery</a> within this cluster</td></tr><tr><td><strong>Authentication</strong></td><td>Required</td><td>Authentication method to access the cluster; select either <strong>Leased Credential</strong> (default) or <strong>Identity Aliases</strong> (to use the Identity Aliases of StrongDM users to access the cluster)</td></tr><tr><td><strong>Identity Set</strong></td><td>Required</td><td>Displays if <strong>Authentication</strong> is set to <strong>Identity Aliases</strong>; select an Identity Set name from the list</td></tr><tr><td><strong>Healthcheck Username</strong></td><td>Required</td><td>If <strong>Authentication</strong> is set to <strong>Identity Aliases</strong>, the username that should be used to verify StrongDM's connection to it; username must already exist on the target cluster</td></tr><tr><td><strong>Assume Role ARN</strong></td><td>Optional</td><td>Role ARN, such as <code>arn:aws:iam::000000000000:role/RoleName</code>, that allows users accessing this resource to assume a role using <a 
href="https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html">AWS AssumeRole</a> functionality</td></tr><tr><td><strong>Assume Role External ID</strong></td><td>Optional</td><td>External ID, if you require an external ID for users assuming a role from another account; if used, it must be used in conjunction with <strong>Assume Role ARN</strong>; see the <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html">AWS documentation on using external IDs</a> for more information</td></tr><tr><td><strong>Resource Tags</strong></td><td>Optional</td><td>Resource <a data-mention href="https://app.gitbook.com/s/4XOJmXFslCMVCzIG2rKp/cli/tags">Tags</a> consisting of key-value pairs <code>&#x3C;KEY>=&#x3C;VALUE></code> (for example, <code>env=dev</code>)</td></tr></tbody></table>

### **Display name**

Some Kubernetes management interfaces, such as Visual Studio Code, do not function properly with cluster names containing spaces. If you run into problems, please choose a **Display Name** without spaces.

### **Client credentials**

When your users connect to this cluster, they have exactly the rights permitted by the IAM role attached to the instance profile. See [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/security-iam.html) for more information.

### **Server CA**

How to get the **Server CA** from your kubeconfig file:

1. Open the CLI and type `cat ~/.kube/config` to view the contents of the file.
2. In the file, under `- cluster`, copy the `certificate-authority-data` value. That is the server certificate in Base64 encoding.

```yaml
- cluster:
    certificate-authority-data: ... SERVER CERT BASE64 ...
```
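Alternatively, if you have the AWS CLI available, you can pull the cluster CA directly from EKS. The cluster name below is an example; substitute your own:

```shell
# Fetch the Base64-encoded cluster CA from EKS and decode it to PEM
aws eks describe-cluster --name eks-prod-instance-profile \
  --query 'cluster.certificateAuthority.data' --output text | base64 --decode
```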

### **Secret Store options**

By default, server credentials are stored in StrongDM. However, these credentials can also be saved in a secrets management tool.

Non-StrongDM options appear in the **Secret Store** dropdown menu if they are created under **Settings** > **Secrets Management**. When you select another Secret Store type, its unique properties display. For more details, see [Configure Secret Store Integrations](https://docs.strongdm.com/admin/access/secret-stores).

## Test the Connection

1. After creating the EKS (Instance Profile) cluster resource in the Admin UI, navigate to **Infrastructure > Clusters** and locate your newly added cluster. The health indicator should turn green once StrongDM successfully connects to the EKS control plane using the instance profile role.
2. On a test client using the StrongDM desktop app or CLI, connect to the cluster and run a basic command such as:

   ```bash
   kubectl get nodes
   ```

   Confirm that the output lists your EKS nodes and that the connection is routed through StrongDM.
3. If **Discovery** is enabled, in the Admin UI verify that namespaces, roles, and service accounts appear under the cluster’s **Discovery** tab. This confirms StrongDM successfully queried the Kubernetes API.
4. If the health status remains red or yellow:

   * Verify the cluster’s endpoint, region, and instance profile permissions are correct and reachable from your relay or gateway.
   * Check the certificate authority file and ensure the control plane endpoint uses valid TLS configuration.
   * Confirm the `healthcheck_namespace` exists and the identity alias user (if specified) has access to perform Kubernetes health checks.
   * Review the **Diagnostics** tab for authentication, IAM, or network errors.

   Once connectivity is verified and Kubernetes operations succeed, the EKS (Instance Profile) cluster resource is ready for use. You can assign roles, apply policies, and monitor cluster access through StrongDM.

## Help

If you encounter issues, please consult the [StrongDM Help Center](https://help.strongdm.com/hc/en-us).

Be prepared to provide the following information to StrongDM Support, so that they can inspect logs and confirm node and resource health:

* Resource name or ID
* CLI error output or logs
* Node name and region
* Timestamps of failed attempts
