Vault as a Credential Broker for Boundary

21 March 2024

After providing our opinion on HashiCorp's Vault and Boundary tools in the Tech Radar, we are now focusing on Vault as a Credential Broker for Boundary. This article contains our insights and experiences on the subject.

What are the advantages of a Credential Broker?

A credential broker is a service that facilitates the secure management and exchange of credentials between different systems in an IT infrastructure. It centralizes and secures the management of credentials (usernames, passwords, API keys, certificates, etc.), ensuring authorized and controlled access to these resources.

Applications and services can securely request the necessary credentials from the credential broker without needing to store or manage them locally.

What are Boundary and Vault?

To implement Vault as a Credential Broker for Boundary, it's necessary to understand the workings of Boundary and Vault.

Boundary

We delved into the implementation and usage of Boundary in a prior article. Let's review how it works in more detail.

Boundary allows the creation of sessions between a user and a service. It serves as an alternative to bastions and VPNs, offering fine-grained control over access to resources. While granting users access to a bastion provides access to all resources accessible by the bastion, Boundary allows precise configuration of which users have access to which resources.

Briefly, Boundary relies on two main components:

  • A controller: This authenticates users and manages various workers across the entire cluster.
  • Workers: These facilitate the creation of sessions for users towards targets (destinations to which users want to connect).

The diagram below provides a more detailed explanation of the process of tunnel creation towards a target.

[Diagram: connection flow from a Boundary client to a target]

  1. A user authenticates with the controller and requests to connect to a target.
  2. The controller consults its database, which stores, among other things, each user's permissions, and checks whether the user is allowed to access the requested target. If so, it creates a session ID and stores it in the database.
  3. The controller responds to the client with various information, including:
    • The session ID
    • The credentials to connect to the target
    • The information needed to establish a connection with one of the workers, chosen by the controller by load balancing across the workers.
  4. The worker receives the session ID and queries the controller to verify the validity of the session ID.
  5. The controller verifies the validity of the session ID in its database and responds positively.
  6. The worker connects to the target with the credentials provided by the client.
  7. Thus, the session between the user and the target is established via the worker!

Vault

Vault is HashiCorp's secrets management tool. It stores secrets at paths, and access to each path is governed by policies that grant capabilities (read, create, update, delete, etc.), as in this example:

# Allows reading the content of the path postgres/creds/analyst
path "postgres/creds/analyst" {
  capabilities = ["read"]
}

In this example, credentials for accessing a postgres database with an analyst role are stored in the path postgres/creds/analyst. This policy grants read access to these credentials.

When a user or application wants to interact with Vault to read or modify the content of a path, they must first authenticate to obtain a temporary authentication token. It's possible to configure multiple authentication methods to access Vault, but a policy must be mapped to each authentication method.

For example, the Vault documentation gives this mapping for an LDAP authentication method:

  • Members of the OU group "dev" map to the Vault policy named "readonly-dev."
  • Members of the OU group "ops" map to the Vault policies "admin" and "auditor."
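
With the Vault CLI, this mapping might be configured as follows (a sketch using the LDAP auth method's group endpoints, with the group and policy names from the example above):

# Map LDAP groups to Vault policies
vault write auth/ldap/groups/dev policies=readonly-dev
vault write auth/ldap/groups/ops policies=admin,auditor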

In this example, the authentication flow for a developer using LDAP authentication would resemble this:

[Diagram: LDAP authentication flow to Vault]

Once authenticated to Vault, the user can interact with Vault by presenting their authentication token. Vault checks the policies associated with this token to determine if each operation is authorized or not.

Vault can also be used to generate dynamic, short-lived secrets. To do this, you just need to create a secrets engine (backend) connected to a target system, which allows Vault to create sets of temporary credentials on it. An authenticated user can then request the creation of a dynamic secret to access that system.
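
With the Vault CLI, requesting a dynamic secret then boils down to reading a path (a sketch, assuming a database secrets engine is mounted at postgres with an analyst role, as we configure later with Terraform):

# Each read generates a fresh set of temporary credentials
vault read postgres/creds/analyst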

[Diagram: dynamic secret generation by Vault]

How does Vault work as a Credential Broker for Boundary?

An implementation of Vault as a Credential Broker within Boundary enables a client to authenticate with a service using temporary credentials!

The following diagram explains the connection flow from a Boundary client to a target. It is fairly complex, but it simply builds on the Boundary operational flow seen previously, adding Vault (in blue) to generate a dynamic secret that lets the user access the target.

[Diagram: connection flow from a Boundary client to a target with Vault as Credential Broker]

Unlike the use case of Boundary without Vault, the connection information provided to the client is now a dynamic secret issued by Vault. This secret has the advantage of being temporary and generated based on policies that restrict the client's rights to the target.

How to implement Boundary and Vault within an infrastructure?

Implementation with an RDS

We deployed Vault as a Credential Broker for Boundary in an EKS cluster using Terraform and Helm. If you're curious, the code is available on GitHub.

To implement Vault as a Credential Broker for Boundary, we followed a tutorial from the HashiCorp documentation. In this tutorial, we have a PostgreSQL database with two roles: an analyst role that needs write access to a table in order to produce monthly reports, and a dba (database administrator) role that must have full rights on the database to manage it.

The goal of this tutorial is to create two Boundary targets that are accessible via temporary credentials issued by Vault: one target with access to the database with the analyst role, and another with access with the dba role.

Our setup differed somewhat from the tutorial: there, Vault and Boundary are launched in dev mode, whereas we wanted to test in prod mode. This required quite a bit of additional configuration.

When launching Boundary in prod, there are no default resources, so you have to create quite a few things before having your first user. We followed a tutorial that explained how to create all the necessary resources.
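
The first resources to bootstrap are the scopes (an organization and a project). A minimal sketch with the Terraform Boundary provider (the names are ours; the project scope is the one referenced by the resources shown below):

# An organization scope under the global scope
resource "boundary_scope" "org" {
  scope_id                 = "global"
  name                     = "demo-org"
  auto_create_admin_role   = true
  auto_create_default_role = true
}

# A project scope inside the organization
resource "boundary_scope" "project" {
  scope_id               = boundary_scope.org.id
  name                   = "demo-project"
  auto_create_admin_role = true
}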

In the end, we deployed:

  • a Boundary controller with a PostgreSQL database
  • a Boundary worker
  • Vault
  • a target: a PostgreSQL database in an RDS, with the analyst and dba roles configured

The obtained architecture is as follows:

[Diagram: the deployed architecture]

There are 3 endpoints:

  • boundary controller API: the endpoint through which clients communicate with the controller
  • boundary controller cluster: the endpoint through which workers communicate with the controller
  • boundary worker: the endpoint through which clients communicate with the worker to open sessions with the target

If you're wondering why we publicly exposed the boundary controller cluster endpoint when the controller and worker could have communicated within the cluster: it was just to test the configuration for the case where the worker and the controller are not in the same cluster. Here it isn't strictly useful.

Let's zoom in on the Vault and Boundary configuration. On the Vault side, we create a "secrets engine" to connect it to the PostgreSQL database and create dynamic secrets there.

resource "vault_database_secret_backend_connection" "postgres" {
  backend       = vault_mount.db.path
  name          = "postgres"
  allowed_roles = ["dba", "analyst"]
  plugin_name   = "postgresql-database-plugin"

  postgresql {
    connection_url = "postgresql://${data.terraform_remote_state.main.outputs.rds.this.username}:${data.terraform_remote_state.main.outputs.rds.this.password}@${data.terraform_remote_state.main.outputs.rds.this.address}:5432/postgres"
    username       = "vault"
    password       = "vault-password"
  }
}
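
The vault_mount.db referenced above is the mount point of Vault's database secrets engine; a minimal sketch (the postgres path is what the credential paths below build on):

resource "vault_mount" "db" {
  path = "postgres"
  type = "database"
}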

We then add two "secret backend roles," one for the analyst role and one for the dba role. They specify how to create dynamic credentials for each role.

resource "vault_database_secret_backend_role" "dba" {
  backend             = vault_mount.db.path
  name                = "dba"
  db_name             = vault_database_secret_backend_connection.postgres.name
  creation_statements = ["CREATE ROLE \"Deployment of Vault as a Credential Broker for Boundary\" WITH LOGIN PASSWORD '' VALID UNTIL '' inherit; grant northwind_dba to \"Deployment of Vault as a Credential Broker for Boundary\";"]
}

resource "vault_database_secret_backend_role" "analyst" {
  backend             = vault_mount.db.path
  name                = "analyst"
  db_name             = vault_database_secret_backend_connection.postgres.name
  creation_statements = ["CREATE ROLE \"Deployment of Vault as a Credential Broker for Boundary\" WITH LOGIN PASSWORD '' VALID UNTIL '' inherit; grant northwind_analyst to \"Deployment of Vault as a Credential Broker for Boundary\";"]
}

To use Vault as a Credential Broker for Boundary, we create a Vault token. This token will allow Boundary to request the creation of dynamic secrets from Vault.

resource "vault_token" "boundary" {

  no_default_policy = true
  policies          = ["boundary-controller", "northwind-database"]

  renewable = true
  period    = "20m"
  ttl       = "24h"
  no_parent = true

  metadata = {
    "purpose" = "boundary"
  }
}

This token uses two policies: the boundary-controller policy, which allows Boundary to access information about its own token and to renew or revoke it; and the northwind-database policy, which grants access to the information needed to connect to the database with the analyst and dba roles.
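
The boundary-controller policy would look roughly like this (a sketch; beyond its own token, Boundary also needs to renew and revoke the leases of the secrets it brokers):

# Let Boundary inspect, renew, and revoke its own token
path "auth/token/lookup-self" {
  capabilities = ["read"]
}

path "auth/token/renew-self" {
  capabilities = ["update"]
}

path "auth/token/revoke-self" {
  capabilities = ["update"]
}

# Let Boundary renew and revoke the leases of brokered secrets
path "sys/leases/renew" {
  capabilities = ["update"]
}

path "sys/leases/revoke" {
  capabilities = ["update"]
}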

On the Boundary side, we create a "credential store" to securely store credentials. Here the credential store is created to manage Vault secrets. So we indicate the address of Vault and the token we created earlier.

# Vault credential store

resource "boundary_credential_store_vault" "example" {
  name        = "vault"
  description = "Vault credential store"
  address     = "http://vault.vault.svc.cluster.local:8200"
  token       = data.terraform_remote_state.vault.outputs.vault_token.client_token
  scope_id    = boundary_scope.project.id
}

We then create two credential libraries, one for the analyst role and one for the dba role. They provide credentials for sessions and manage the creation, renewal, and revocation of dynamic secrets. For each credential library, we specify the Vault path of the credentials to connect to the database with the associated role.

# Credential libraries

resource "boundary_credential_library_vault" "dba" {
  name                = "dba"
  description         = "Northwind DBA credential library"
  credential_store_id = boundary_credential_store_vault.example.id
  path                = "postgres/creds/dba"
}

resource "boundary_credential_library_vault" "analyst" {
  name                = "analyst"
  description         = "Northwind analyst credential library"
  credential_store_id = boundary_credential_store_vault.example.id
  path                = "postgres/creds/analyst"
}
  

Finally, we can create the two targets, specifying for each one which credential library to use.

# Targets

resource "boundary_target" "northwind_analyst" {
  scope_id     = boundary_scope.project.id
  name         = "Northwind Analyst Database"
  type         = "tcp"
  default_port = "5432"
  session_connection_limit = 1
  host_source_ids = [boundary_host_set.example.id]
  brokered_credential_source_ids = [
    boundary_credential_library_vault.analyst.id
  ]
}

resource "boundary_target" "northwind_dba" {
  scope_id     = boundary_scope.project.id
  name         = "Northwind DBA Database"
  type         = "tcp"
  default_port = "5432"
  session_connection_limit = 1
  host_source_ids = [boundary_host_set.example.id]
  brokered_credential_source_ids = [
    boundary_credential_library_vault.dba.id
  ]
}

Operation

Once configured, usage is straightforward!

Authentication to Boundary is done from the command line with the boundary authenticate command. You need to specify the address of the controller's API; in return, you receive a token that authenticates you with Boundary.
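
For example, with the password auth method (the address here is a placeholder, and the auth method ID comes from your own deployment):

boundary authenticate password \
  -addr=https://boundary.example.com:9200 \
  -auth-method-id=$AUTH_METHOD_ID \
  -login-name=my-user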

[Screenshot: output of boundary authenticate]

To connect to a target, you first need to retrieve its ID. This part isn't very user-friendly in the CLI, as it takes three commands (sketched after the list below), but it's not too complicated.

  • Start by listing the organizations with the command boundary scopes list to retrieve the organization's ID.
  • List the projects within the organization with the command boundary scopes list -scope-id $ORG_ID to retrieve the project's ID.
  • List the targets within the project with the command boundary targets list -scope-id $PROJECT_ID and retrieve the ID of the target you're interested in.
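
A minimal sketch of the sequence (ORG_ID and PROJECT_ID are placeholders for the IDs returned by the previous commands):

# 1. List the organizations to retrieve the organization's ID
boundary scopes list

# 2. List the projects within the organization to retrieve the project's ID
boundary scopes list -scope-id $ORG_ID

# 3. List the targets within the project to retrieve the target's ID
boundary targets list -scope-id $PROJECT_ID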

[Screenshot: listing scopes and targets to retrieve the target ID]

Once you have the target's ID, simply use the boundary connect command, specifying that ID. You also need to specify the command to execute on the remote machine. Boundary's CLI directly manages some executables; for example, to connect to a PostgreSQL database, you can use the command boundary connect postgres.
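
With one of our PostgreSQL targets, this looks like the following (the target ID is a placeholder; -dbname selects the database, northwind in the tutorial):

boundary connect postgres -target-id $TARGET_ID -dbname northwind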

[Screenshot: boundary connect postgres session]

For executables not managed by Boundary, you need to install the executable on the local machine and use the command boundary connect -exec <executable>. This is a small downside compared to an SSH tunnel.
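The Boundary CLI documents template placeholders that inject the local proxy's address into the arguments after the -- separator; a sketch with curl against a hypothetical HTTP target:

# curl must be installed locally; {{boundary.ip}} and {{boundary.port}}
# are replaced by Boundary with the local proxy's address
boundary connect -exec curl -target-id $TARGET_ID -- \
  -v "{{boundary.ip}}:{{boundary.port}}"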

If you open a session on each target and ask the database to list its users, you'll see the dynamically created users v-token-to-analyst-xxxx and v-token-to-dba-xxxx. These users have temporary passwords and are members of the analyst and dba roles respectively.

[Screenshot: dynamically created v-token-to-analyst and v-token-to-dba users]

What are some use cases?

Usage with a Database

As seen in the operational example above, we can use Vault as a Credential Broker for Boundary to connect to a PostgreSQL database from the command line with temporary credentials. This is handy for developers who want to debug, but can it also be used to connect to the database from any service? For example, from an administration tool like pgAdmin? After verification, the answer is yes!

Just use the boundary connect command without specifying an executable. Boundary then opens a local socket and returns the port it's listening on, along with the temporary credentials.
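
A sketch, assuming you already retrieved the target ID:

# Opens a local listener and prints the proxy port along with the
# brokered credentials
boundary connect -target-id $TARGET_ID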

[Screenshot: boundary connect output with local port and temporary credentials]

For example, we tried connecting with pgAdmin using these credentials:

[Screenshot: pgAdmin connection form filled with the temporary credentials]

And it works!

[Screenshot: successful connection to the database from pgAdmin]

Developers could also use this technique to point an application under development at the database.

Other Targets

In our example, we configured a target that's a PostgreSQL database. It's also possible to configure other types of targets. There are other interesting use cases:

  • connecting to other types of databases with dynamic credentials, including MongoDB, MySQL, Oracle, and Redis
  • connecting to a Kubernetes cluster, with dynamically generated service account tokens
  • connecting via SSH to a machine, with a dynamically generated One-Time SSH Password

More generally, you can use Vault as Credential Broker for Boundary with a target if Vault has a Secrets Engine that can be used with your resource. You can find the list of Secrets Engines in the documentation.
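
For instance, for the SSH one-time-password case, the credential library would point at Vault's SSH secrets engine. A hypothetical sketch (the ssh/creds/otp_role path, the Vault role behind it, and the IP are assumptions for illustration, not part of our deployment):

resource "boundary_credential_library_vault" "ssh_otp" {
  name                = "ssh-otp"
  credential_store_id = boundary_credential_store_vault.example.id
  path                = "ssh/creds/otp_role"
  # Vault's SSH OTP endpoint expects a POST containing the host's IP
  http_method         = "POST"
  http_request_body   = jsonencode({ ip = "10.0.0.10" })
}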

What are the pros and cons?

Advantages:

✅ Increased security in accessing your resources.

✅ Once deployed, management is very easy.

✅ Connecting to the target for clients is straightforward.

Disadvantages:

❌ Implementation involves a steep learning curve to grasp the necessary concepts and configurations, especially for Boundary. This makes the implementation experience complex when you're unfamiliar with the tool.

❌ Configuring Boundary in production takes quite a while. There's a lot of documentation to go through to know which resources to create. For what it's worth, we started creating all the resources through the command line, and it was really time-consuming; we ended up using the Terraform Boundary provider, which made life much easier!

❌ The CLI could still be improved! For now, retrieving target IDs is done in 3 commands.

Conclusion

Vault as a Credential Broker for Boundary is a good implementation for securing your infrastructure with fine-grained access management to your resources. However, it takes quite a bit of time to get used to and configure. Therefore, it's not a tool we would recommend for a small project unless you're already familiar with it and have mastered it.

However, it can be very useful to deploy on a project where you can dedicate time to its configuration.