Create a destination for GCP + GCS + Databricks or Snowflake

Last updated: Apr 9, 2026

To populate your GCP + Google Cloud Storage (GCS) repository with healthcare data from an EHR system via Redox (and then to optionally feed that data into Databricks or Snowflake for analytics), you must configure a specific Redox cloud destination. A Redox destination represents where a message is delivered (like the address in the “To” line of an email). Learn more about connecting Redox to your cloud repository.

You'll need to perform some steps in your cloud product(s) and some in Redox. You can perform Redox setup in our dashboard or with the Redox Platform API.

Prerequisites

  • Establish a connection with your preferred EHR system. Learn how to request a connection.
  • Decide which combination of cloud products to use. Redox currently supports any of these combinations with your GCP cloud repository:
    1. Google Cloud Platform (GCP) + Google Cloud Storage (GCS)
    2. GCP + GCS + Databricks
    3. GCP + GCS + Snowflake
  • Complete your GCP (and any other cloud product) configuration before creating your Redox destination. Save any downloads with secret values, since you'll need to enter some of these details into the Redox dashboard.
  • Grant access to Redox from GCP (and any other cloud product) to authorize Redox to push data to your cloud repository.

Configure in GCP

  1. Navigate to the GCP dashboard and log in.
  2. Create a service account in your GCP project.
  3. Grant the service account read/write access to the GCS bucket.
  4. Create a new key for your service account.
  5. Download the new key file. Save this file, since you'll need it for Redox setup later.
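If you prefer the command line, steps 2 through 5 can also be scripted with the gcloud CLI. This is a minimal sketch, not an official Redox procedure: the project, bucket, and service account names below are hypothetical, and it assumes the roles/storage.objectAdmin role satisfies the read/write requirement.

```shell
# Hypothetical names -- substitute your own project, bucket, and account.
PROJECT_ID="my-gcp-project"
BUCKET="my-redox-bucket"
SA_NAME="redox-writer"
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

# Step 2: create the service account in your GCP project
gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"

# Step 3: grant read/write access to the GCS bucket
# (roles/storage.objectAdmin covers object reads and writes)
gcloud storage buckets add-iam-policy-binding "gs://$BUCKET" \
  --member "serviceAccount:${SA_EMAIL}" \
  --role "roles/storage.objectAdmin"

# Steps 4-5: create a new key and download it as a JSON file.
# Keep this file safe; you'll need its values for Redox setup later.
gcloud iam service-accounts keys create redox-key.json \
  --iam-account "$SA_EMAIL"
```

The downloaded redox-key.json contains the private_key, private_key_id, and client_email values referenced in the Redox setup steps below.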

Create a cloud destination in Redox

Next, create a cloud destination in your Redox organization. When the EHR system sends healthcare data to Redox, we push it on to your configured GCP + GCS cloud destination.

In the dashboard

  1. From the Product type field, select Databricks or Snowflake if you're using one of those cloud products with GCS. Your GCS settings will be applied with the additional cloud product. Select Cloud Storage if you're not using either Databricks or Snowflake.
  2. For the configure destination step, populate these fields.
    1. Bucket name: Enter the GCS bucket name. Locate this value in the GCP dashboard.
    2. Object filename prefix: Enter any prefix you want prepended to new files when they're created in the GCS bucket. Add / to put the files in a subdirectory. For example, redox/ puts all the files in the redox directory.
  3. Click the Next button.
  4. For the auth credential step, either a drop-down list of existing auth credentials displays or a new auth credential form opens. Learn how to create an auth credential for OAuth 2.0 2-legged with JWT for GCP.

With the Redox Platform API

  1. In your terminal, prepare the /v1/authcredentials request.
  2. Specify these values in the request.
    • Locate the private_key, private_key_id, and client_email values in the key file you downloaded from GCP.
    • Make sure the auth credentials environment ID matches the environment ID of the destination you intend to link it to.
    • Use the values we provide in the example for authStrategy, grantType, algorithm, url, and scopes. These are fixed values.
      Example: Create auth credential for GCP + GCS + Databricks or Snowflake

      ```shell
      curl 'https://api.redoxengine.com/platform/v1/authcredentials' \
        --request POST \
        --header "Authorization: Bearer $API_TOKEN" \
        --header 'accept: application/json' \
        --header 'content-type: application/json' \
        --data '{
          "organization": "<Redox_organization_id>",
          "name": "<human_readable_name_for_auth_credential>",
          "environmentId": "<Redox_environment_ID>",
          "authStrategy": "OAuth_2.0_JWT_GCP",
          "grantType": "client_credentials",
          "url": "https://oauth2.googleapis.com/token",
          "algorithm": "RS256",
          "clientId": "<client_email>",
          "scopes": "https://www.googleapis.com/auth/devstorage.read_write",
          "keyId": "<keyId_from_GCP>",
          "privateKey": "<privateKey_from_GCP>"
        }'
      ```
  3. You should get a successful 200 response and a payload populated with the details of the new auth credential.
  4. In your terminal, prepare the /v1/environments/{environmentId}/destinations request with these values:
    • Set authCredential to the auth credential ID from the response you received in step #3.
    • Populate cloudProviderSettings with the settings below.
      • The fileNamePrefix is optional. If specified, the filename format will be the prefix you define followed by the log ID. If not specified, the filename will simply be the log ID.
      • Enter the productId based on your specific setup:
        • GCS only: gcs
        • GCS + Databricks: databricks
        • GCS + Snowflake: snowflake
          Example: Values for GCP + Databricks or Snowflake cloudProviderSettings

          ```json
          {
            "cloudProviderSettings": {
              "typeId": "gcp",
              "productId": "<databricks_or_snowflake_or_gcs>",
              "settings": {
                "bucketName": "<bucket_name_from_GCS>",
                "fileNamePrefix": "<optional_prefix>"
              }
            }
          }
          ```

          You can append / after the prefix name to indicate a directory path.
  5. You should get a successful 200 response with a payload populated with the details of the new GCP cloud destination. Specifically, the verified status of the destination should be set to true.
  6. Your new destination will now be able to receive messages. These messages get stored in your GCS bucket, which can then be picked up by Databricks or Snowflake for further processing.
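Putting step #4 together, the full destination request might look like the sketch below. Only authCredential and cloudProviderSettings come from this guide; the name field and the example IDs are hypothetical, and the curl call itself is commented out since it requires a valid Redox API token.

```shell
ENVIRONMENT_ID="env_123"        # hypothetical
AUTH_CREDENTIAL_ID="cred_456"   # hypothetical, from the /v1/authcredentials response

# Build the request body for the destinations endpoint.
PAYLOAD=$(cat <<EOF
{
  "name": "GCP + GCS destination",
  "authCredential": "${AUTH_CREDENTIAL_ID}",
  "cloudProviderSettings": {
    "typeId": "gcp",
    "productId": "gcs",
    "settings": {
      "bucketName": "my-redox-bucket",
      "fileNamePrefix": "redox/"
    }
  }
}
EOF
)

# Sanity-check the JSON before sending it
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# Send the request (requires a valid Redox API token)
# curl "https://api.redoxengine.com/platform/v1/environments/${ENVIRONMENT_ID}/destinations" \
#   --request POST \
#   --header "Authorization: Bearer $API_TOKEN" \
#   --header 'content-type: application/json' \
#   --data "$PAYLOAD"
```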