# Proxy Clusters

There are multiple ways to arrange your StrongDM deployment, as explained on the [Deployment](https://docs.strongdm.com/admin/deployment) page. The recommended way to deploy StrongDM is with proxy clusters, one of the available methods for proxying client traffic to your resources. Proxy clusters also provide a way to segment your network so that particular proxies are used to access particular resources. They sit behind your load balancers and allow you to scale your infrastructure to handle large amounts of traffic as needed, but they can also be run with only one or two proxy workers for simple network segments.

{% hint style="warning" %}
Every proxy worker in a cluster must have access to the same set of resources. Workers running in separate environments containing separate resources must belong to separate clusters.
{% endhint %}

When your organization is set up with proxy clusters, administrators create proxy clusters, configure resources in StrongDM, and attach those resources to the clusters. They then grant users access to those resources through standing access with [Roles](https://docs.strongdm.com/admin/access/roles) or through Just-in-Time (JIT) access with [Workflows](https://docs.strongdm.com/admin/access/access-workflows).

Once they have been granted access, users can use the [client](https://app.gitbook.com/s/HaY8OFbXUreWEF61MhKm/client) or [CLI](https://app.gitbook.com/s/4XOJmXFslCMVCzIG2rKp/cli) to connect to your resources. Their client reaches out to the appropriate proxy cluster. One of the workers in the cluster handles the request, verifies the client is authorized to connect, and obtains credentials to connect to the resource. The connection is proxied without the credentials ever being exposed to that user. The user simply clicks to connect and begins working on the resource, unaware of any of these behind-the-scenes actions.

### Overview

A StrongDM proxy cluster comprises one or more proxy workers. A proxy worker is a process that mediates connectivity between clients and resources.

{% hint style="info" %}
Proxy clusters are a new deployment option that can be used instead of traditional [gateways and relays](https://docs.strongdm.com/admin/networking/gateways-and-relays). This option is particularly useful for large networks that perform better with segmentation, or that have various subnets with differing requirements.
{% endhint %}

![](https://4180056444-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FF7eka9SH5TT8nJm2ZfWj%2Fuploads%2Fgit-blob-5da312f425333738ae9d95bac9897875470cbf4f%2Fproxy-cluster-network-diagram.png?alt=media)

When a client connects to a StrongDM resource, it looks up which proxy cluster the resource belongs to and uses that cluster to connect. One of the proxy workers in the cluster parses and logs the request; fetches, decrypts, and injects credentials as necessary; and forwards the connection to the resource.

{% hint style="success" %}
We recommend the following best practices when deploying a proxy cluster:

* Deploy one proxy cluster in each environment where you host resources.
* A proxy cluster should consist of at least two proxy workers behind a load balancer for high availability.
* A bridged proxy cluster should consist of at least two proxy workers behind a load balancer and at least two bridge workers, for high availability. The number of proxy workers should be greater than or equal to the number of bridge workers.
* Configure the load balancer to accept connections on port 443 and forward them to the individual proxy workers on port 8443.
* Use a network load balancer to forward TCP traffic directly to the proxy workers without any processing.
* If the load balancer supports client IP address preservation, enable it.
* Use a DNS domain name to route traffic to the load balancer rather than an IP address.
{% endhint %}
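For example, on AWS (an assumption here; any network load balancer works), the listener and client-IP-preservation recommendations above might look like the following sketch, where the VPC, target group, and load balancer ARNs are placeholders:

```shell
# Sketch only: AWS NLB is an assumption; any L4 load balancer works.
# Forward raw TCP to the workers' bind port (8443).
aws elbv2 create-target-group \
  --name sdm-proxy-workers \
  --protocol TCP --port 8443 \
  --vpc-id "$VPC_ID"

# Preserve client IP addresses where supported (NLB target-group attribute).
aws elbv2 modify-target-group-attributes \
  --target-group-arn "$TG_ARN" \
  --attributes Key=preserve_client_ip.enabled,Value=true

# Accept connections on port 443 and forward them to the workers.
aws elbv2 create-listener \
  --load-balancer-arn "$LB_ARN" \
  --protocol TCP --port 443 \
  --default-actions "Type=forward,TargetGroupArn=$TG_ARN"
```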

#### Proxy worker egress requirements

Proxy workers must be able to send traffic to several destinations in order to function correctly:

{% tabs %}
{% tab title="US" %}

* `app.strongdm.com:443` (which resolves to multiple IP addresses)
* `downloads.strongdm.com:443` (which resolves to multiple IP addresses) for downloading updates
{% endtab %}

{% tab title="UK" %}
*Follow instructions in the tab for the region of your organization's StrongDM control plane, not your own location. The default control plane region is US.*

* `app.uk.strongdm.com:443` (which resolves to multiple IP addresses)
* `downloads.uk.strongdm.com:443` (which resolves to multiple IP addresses) for downloading updates
{% endtab %}

{% tab title="EU" %}
*Follow instructions in the tab for the region of your organization's StrongDM control plane, not your own location. The default control plane region is US.*

* `app.eu.strongdm.com:443` (which resolves to multiple IP addresses)
* `downloads.eu.strongdm.com:443` (which resolves to multiple IP addresses) for downloading updates
{% endtab %}
{% endtabs %}

For more information, please see the [Ports Guide](https://docs.strongdm.com/admin/networking/ports-guide).
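The per-region destinations above follow a simple pattern, which a small helper (hypothetical, for illustration only; not a StrongDM tool) can enumerate for a quick egress sanity check:

```shell
# Derive the control-plane hosts a worker must reach for a given region,
# matching the tabs above. "us" uses the bare domain; other regions are
# subdomains (app.eu.strongdm.com, downloads.uk.strongdm.com, and so on).
sdm_egress_hosts() {
  local region="$1" sub=""
  [ "$region" != "us" ] && sub="$region."
  printf '%s\n' "app.${sub}strongdm.com:443" "downloads.${sub}strongdm.com:443"
}

sdm_egress_hosts us
sdm_egress_hosts eu
# Reachability can then be checked with, for example:
#   nc -z -w 5 app.strongdm.com 443
```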

Additionally, if your organization requires outbound traffic from your infrastructure to pass through your own corporate proxy, you must set one or both of the `HTTP_PROXY` and `HTTPS_PROXY` environment variables (or their StrongDM-specific equivalents). Please see the [Environment Variables](https://docs.strongdm.com/admin/deployment/environment-variables) page for a list of available environment variables for use with StrongDM.

#### Authentication keys

Each proxy cluster uses authentication keys to link proxy workers and (optional) bridge workers to that specific cluster. The default limit is four keys per proxy cluster, which allows for key rotation. The access key and secret key are stored in the configuration file `/etc/sysconfig/sdm-worker` along with any StrongDM environment variables.
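For example, a worker's configuration file might contain entries like the following (illustrative only; the key values are placeholders, and additional variables depend on your deployment):

```shell
# /etc/sysconfig/sdm-worker -- illustrative contents; values are placeholders
SDM_PROXY_CLUSTER_ACCESS_KEY=<ACCESS_KEY>
SDM_PROXY_CLUSTER_SECRET_KEY=<SECRET_KEY>
SDM_APP_DOMAIN=app.strongdm.com
```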

### Deploy a Single-Worker Proxy Cluster

{% hint style="warning" %}
This guide explains how to deploy a simple test cluster containing one proxy worker. For production environments we recommend using infrastructure tools to deploy multiple workers in a high availability configuration behind a load balancer. See these guides for platform-specific instructions if they apply to you:

* [Deploy ECS Fargate Proxy Clusters](https://docs.strongdm.com/admin/networking/proxy-clusters/ecs-proxy-clusters)
* [Deploy Kubernetes Proxy Clusters](https://docs.strongdm.com/admin/networking/proxy-clusters/kubernetes-proxy-clusters)
{% endhint %}

1. Set up a **64-bit** Linux instance with at least 2 CPUs and 4 GB of memory. Make sure the firewall allows clients to connect to the instance on port 443.
2. Note the IP address of the instance.
3. Log in to the StrongDM Admin UI.
4. Go to **Networking** > **Proxy Clusters**.
5. Click **Add proxy cluster**. You can name the cluster here or modify it later.
6. Enter the address of your Linux instance (with port 443 included) in the **Advertised Address** field (for example: `111.111.111.111:443`).

   ![](https://4180056444-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FF7eka9SH5TT8nJm2ZfWj%2Fuploads%2Fgit-blob-528dadc2890b37597165a314d46f270eb3e95472%2Fadd-proxy-cluster.png?alt=media)
7. Click **Create proxy cluster**.
8. Click **Add authentication key**. The access key and secret key appear in a modal. Copy these and save them for use in a later step.

   ![](https://4180056444-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FF7eka9SH5TT8nJm2ZfWj%2Fuploads%2Fgit-blob-68627911f1b183400fbb00f55e6dd6d55f4ff85f%2Fproxy-cluster-key.png?alt=media)
9. Log in to the Linux instance.
10. To run the worker via Docker (recommended), first [install Docker](https://docs.docker.com/engine/install/). Then run the following command, substituting the access key and secret key you created:

{% tabs %}
{% tab title="US" %}

```shell
docker run \
  -e SDM_PROXY_CLUSTER_ACCESS_KEY=<ACCESS_KEY> \
  -e SDM_PROXY_CLUSTER_SECRET_KEY=<SECRET_KEY> \
  -e SDM_APP_DOMAIN=app.strongdm.com \
  --restart=always \
  --name sdm-worker \
  -p 443:8443 -d \
  public.ecr.aws/strongdm/relay
```

{% endtab %}

{% tab title="UK" %}
*Follow instructions in the tab for the region of your organization's StrongDM control plane, not your own location. The default control plane region is US.*

```shell
docker run \
  -e SDM_PROXY_CLUSTER_ACCESS_KEY=<ACCESS_KEY> \
  -e SDM_PROXY_CLUSTER_SECRET_KEY=<SECRET_KEY> \
  -e SDM_APP_DOMAIN=app.uk.strongdm.com \
  --restart=always \
  --name sdm-worker \
  -p 443:8443 -d \
  public.ecr.aws/strongdm/relay
```

{% endtab %}

{% tab title="EU" %}
*Follow instructions in the tab for the region of your organization's StrongDM control plane, not your own location. The default control plane region is US.*

```shell
docker run \
  -e SDM_PROXY_CLUSTER_ACCESS_KEY=<ACCESS_KEY> \
  -e SDM_PROXY_CLUSTER_SECRET_KEY=<SECRET_KEY> \
  -e SDM_APP_DOMAIN=app.eu.strongdm.com \
  --restart=always \
  --name sdm-worker \
  -p 443:8443 -d \
  public.ecr.aws/strongdm/relay
```

{% endtab %}
{% endtabs %}

11. To run the worker via `systemd` instead, download the StrongDM binary, unzip it, and run the installer. When prompted, paste the access key and secret key you created. After installation, run `systemctl status sdm-worker` to check that the service is running.

{% tabs %}
{% tab title="US" %}

```shell
curl -J -O -L https://app.strongdm.com/releases/cli/linux
unzip sdmcli_*_linux_amd64.zip
./sdm install --worker --worker-bind-addr :443 --app-domain app.strongdm.com 
```

{% endtab %}

{% tab title="UK" %}
*Follow instructions in the tab for the region of your organization's StrongDM control plane, not your own location. The default control plane region is US.*

```shell
curl -J -O -L https://app.uk.strongdm.com/releases/cli/linux
unzip sdmcli_*_linux_amd64.zip
./sdm install --worker --worker-bind-addr :443 --app-domain app.uk.strongdm.com 
```

{% endtab %}

{% tab title="EU" %}
*Follow instructions in the tab for the region of your organization's StrongDM control plane, not your own location. The default control plane region is US.*

```shell
curl -J -O -L https://app.eu.strongdm.com/releases/cli/linux
unzip sdmcli_*_linux_amd64.zip
./sdm install --worker --worker-bind-addr :443 --app-domain app.eu.strongdm.com 
```

{% endtab %}
{% endtabs %}

{% hint style="warning" %}
The installer must be run by a user that exists in the `/etc/passwd` file. Users authenticated remotely, such as through LDAP or an SSO service, cannot complete the installation.
{% endhint %}

{% hint style="warning" %}
For production installations, we recommend you configure the workers to bind to a higher port (8443 is the default) and use a load balancer to remap that to port 443.
{% endhint %}

12. Confirm the proxy worker is running by verifying that the address is accessible from the appropriate end user network, as in the following example. If everything is working correctly, the proxy worker returns an HTTP 404 status code.

```shell
curl -k https://111.111.111.111
404 Not Found
```

{% hint style="info" %}
This guide demonstrated how to configure a proxy cluster in the Admin UI with a single proxy worker. To set up a proxy cluster with more than one proxy worker, you must use a network load balancer. The proxy cluster address in StrongDM is the address of your load balancer, and each proxy worker is set up with the same proxy cluster access key, as was done for the single-worker cluster in this guide. Traffic to the proxy cluster is then directed by your load balancer to one of the proxy workers, all of which are capable of authenticating to your resources and forwarding client traffic to them.
{% endhint %}

#### Deploy with the CLI

Proxy clusters, like gateways or relays, can also be deployed using the CLI. This uses the `sdm admin nodes` command structure.

```shell
sdm admin nodes create-proxy-cluster --name <CLUSTER_NAME> <ADDRESS>:<PORT>
```

For more details, see the CLI Reference page for [sdm admin nodes create-proxy-cluster](https://app.gitbook.com/s/4XOJmXFslCMVCzIG2rKp/cli/admin/nodes/create-proxy-cluster).

### Add Resources to a Proxy Cluster

To add resources to a proxy cluster, select the name of the proxy cluster from the **Proxy Cluster** dropdown menu when adding or editing the resource in the Admin UI. A resource attached to a proxy cluster is reachable only via that proxy cluster.

At the command line, the `--proxy-cluster-id` option can be used to specify a proxy cluster. The ID of a cluster can be found using `sdm admin nodes list` or in the Admin UI under **Networking** > **Proxy Clusters**.

### Manage Existing Proxy Clusters

You can see a list of proxy clusters currently deployed in your organization on the **Networking** > **Proxy Clusters** page of the Admin UI. Selecting any cluster opens the details view for that cluster, which contains the following tabs:

* **Resources** displays a list of all resources currently assigned to the proxy cluster. Each resource can be made part of a particular proxy cluster in the configuration settings for that resource.
* **Keys** lists the available keys that can be used to add proxy workers to the cluster and allows the generation of additional keys.
* **Settings** is where the cluster's settings (name and address) can be configured.

#### Search filters

You can use search filters in the Admin UI on the **Networking** > **Proxy Clusters** page to search for specific proxy clusters and display them according to their name, address, or tags. Searching and filtering can also be done on the **Resources** tab when viewing the details of a particular proxy cluster.

To use filters, type or copy/paste the following filters into the **Search** field, with or without other text. Do not use quotes or tick marks.

| Filter                                        | Description                                                                      | Example search                                                                          |
| --------------------------------------------- | -------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
| `listenaddr:<IP_ADDRESS>`                     | Shows proxy clusters with the specified address                                  | `listenaddr:10.0.0.021:443` finds clusters that have an address of `10.0.0.021:443`.    |
| `name:<PARTIAL_STRING>` or any free-form text | Shows proxy clusters with names that match the entered string; partial string OK | `name:keen-coffee` or `coffee` finds all clusters whose names contain those characters. |
| `tags:<TAG=VALUE>`                            | Shows proxy clusters with the specified tags                                     | `tags:Environment=sandbox` finds clusters that have the tag `Environment=sandbox`.      |

#### Update a proxy cluster

To update a proxy cluster, find the cluster in the Admin UI under **Networking** > **Proxy Clusters**, or at the CLI with `sdm admin nodes list`. In the Admin UI, click the cluster in the list to view and edit its configuration; at the CLI, use `sdm admin nodes update <CLUSTER_ID>`. To delete a cluster, go to **Settings** > **Delete** on its details page in the Admin UI, or use the `sdm admin nodes delete` command at the CLI.

### Upgrades

StrongDM proxy workers upgrade themselves automatically. To configure when these upgrades happen, see [Maintenance Windows](https://docs.strongdm.com/admin/networking/maintenance-windows).

To minimize downtime during upgrades, configure your load balancer to periodically probe the [Liveness Check port](https://docs.strongdm.com/admin/gateways-and-relays#liveness-check) on each worker. When liveness checks are enabled, workers in the cluster coordinate with the load balancer and with each other to perform a rolling deployment upgrade during the maintenance window. The upgrade process for each worker looks like this:

1. Worker waits until there are no other workers in line ahead of it to restart.
2. Worker waits 90 seconds to allow any previously restarted workers to come back online and be registered with the load balancer.
3. Worker continues serving traffic, but shuts down its liveness check port to signal to the load balancer that it is shutting down.
4. Worker waits 90 seconds to allow the load balancer to remove it from the target group.
5. Worker severs all connections and restarts.
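On AWS, for example (an assumption; other load balancers have equivalents), pointing the target group's health check at the liveness port might look like this sketch, where `8090` is a placeholder for your configured liveness-check port:

```shell
# Assumption: AWS NLB target group. Probe the workers' liveness-check port
# frequently so a draining worker is deregistered quickly during upgrades.
aws elbv2 modify-target-group \
  --target-group-arn "$TG_ARN" \
  --health-check-protocol TCP \
  --health-check-port 8090 \
  --health-check-interval-seconds 10
```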

If blue/green deployments are desired, they can be set up using standard orchestration tools. Schedule deployments on a regular interval and configure the maintenance window such that the blue/green deployment happens before the window. This ensures the workers are already upgraded by the time the built-in rolling deployment occurs, and no restarts will be necessary.

### Third-Party Certificates

The StrongDM control plane automatically signs and issues certificates for proxy clusters, but you can also configure your proxy cluster to use your own certificates. Proxy workers respect the following environment variables, which can be mixed and matched:

* `SDM_TLS_CERT_SOURCE` determines where the proxy worker gets its TLS certificate from. Accepted values include:
  * `strongdm` (default): The proxy worker terminates TLS using a certificate signed by the StrongDM proxy cluster CA generated by the control plane.
  * `file`: The proxy worker terminates TLS using certificate and key PEM files specified by the `SDM_TLS_CERT_FILE` and `SDM_TLS_KEY_FILE` environment variables. The proxy worker automatically reloads the certificate from disk once per day, so the certificate should have a validity period of at least two days. Use this if you need to use your own certificates while also keeping the extra security afforded by mutual TLS.
  * `none`: The proxy worker does not terminate TLS. Use this if you want to terminate TLS using your own load balancer. You must also specify `SDM_TLS_CLIENT_AUTH=none`.
* `SDM_TLS_CLIENT_AUTH` controls how the proxy worker validates client TLS connections.
  * `direct` (default): The proxy worker establishes mutual TLS directly with clients and validates their client certificates directly. This mode is incompatible with `SDM_TLS_CERT_SOURCE=none`.
  * `none`: The proxy worker does not validate client certificates. Use this if you want to terminate TLS using your own load balancer.
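As an illustration, a worker terminating TLS with your own certificate files might be started as follows, mirroring the Docker command used earlier in this guide (the certificate paths and volume mount are placeholders):

```shell
# Certificate/key paths and the host certs directory are placeholders.
docker run \
  -e SDM_PROXY_CLUSTER_ACCESS_KEY=<ACCESS_KEY> \
  -e SDM_PROXY_CLUSTER_SECRET_KEY=<SECRET_KEY> \
  -e SDM_APP_DOMAIN=app.strongdm.com \
  -e SDM_TLS_CERT_SOURCE=file \
  -e SDM_TLS_CERT_FILE=/etc/sdm/tls.crt \
  -e SDM_TLS_KEY_FILE=/etc/sdm/tls.key \
  -v /path/to/certs:/etc/sdm:ro \
  --restart=always \
  --name sdm-worker \
  -p 443:8443 -d \
  public.ecr.aws/strongdm/relay
```

Because the worker reloads the certificate from disk once per day, rotate the files in place with a validity period of at least two days, as noted above.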


