Configure an Amazon Web Services Data Connection (LiveRamp-Hosted)
How to configure an Amazon Web Services (AWS) data connection that is hosted by LiveRamp within Clean Room, including steps for setting up the connection, configuring permissions, and verifying the integration.
To connect AWS to LiveRamp Clean Room using a LiveRamp-hosted AWS S3 bucket instead of using your own, you can create a LiveRamp-hosted Amazon Web Services data connection.
Note
You can connect AWS to LiveRamp Clean Room using your own AWS S3 bucket instead of using a LiveRamp-hosted one. For more information, see "Configure an Amazon Web Services Data Connection (Customer-Hosted)".
A LiveRamp-hosted Amazon Web Services data connection can be used in the following clean room types:
Hybrid
Confidential Computing
Amazon Web Services
After you’ve created the data connection and Clean Room has validated it by connecting to the data in your cloud account, you need to map the fields before the data connection is ready to use. This is where you specify which fields are queryable across clean rooms, which fields contain identifiers to be used in matching, and which columns you want to partition the dataset by for questions.
After fields have been mapped, you’re ready to provision the resulting dataset to your desired clean rooms. Within each clean room, you’ll be able to set dataset analysis rules, exclude or include columns, filter for specific values, and set permission levels.
To configure a LiveRamp-hosted Amazon Web Services data connection, see the instructions below.
Overall Steps
Perform the following overall steps to configure a LiveRamp-hosted AWS data connection in LiveRamp Clean Room:
Generate an AWS data source location in LiveRamp Clean Room.
Add the credentials.
Create the data connection.
Map the fields.
For information on performing these steps, see the sections below.
Guidelines
Review the following guidelines before starting the setup process:
LiveRamp Clean Room supports CSV and Parquet files, as well as Delta tables and multi-part files. All files should have a file extension. All CSV files must have a header in the first row. Headers should not have any spaces or special characters and should not exceed 50 characters. An underscore can be used in place of a space.
LiveRamp encourages the use of partition columns for optimal question run performance.
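If you want to sanity-check a CSV before uploading, a quick header validation can catch formatting issues early. The following is a minimal sketch using Python's standard csv module; the file name is hypothetical, and the rules simply mirror the header guidelines above.

```python
import csv
import re

# Headers must contain only letters, digits, and underscores (no spaces or
# special characters) and must not exceed 50 characters.
VALID_HEADER = re.compile(r"^[A-Za-z0-9_]{1,50}$")

def check_csv_header(path):
    """Return the header names in the first row that violate the guidelines."""
    with open(path, newline="") as f:
        header = next(csv.reader(f))
    return [name for name in header if not VALID_HEADER.match(name)]

# Example usage with a hypothetical file name:
problems = check_csv_header("purchase_events.csv")
if problems:
    print("Fix these column names before uploading:", problems)
```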
Generate an AWS Data Source Location in LiveRamp Clean Room
From the navigation pane, select Data Management → Data Source Locations.
In the row for LiveRamp-Hosted AWS S3, click Generate to create the data source location.
Add the Credentials
To add credentials:
From the LiveRamp Clean Room navigation pane, select Data Management → Credentials.
In the row for the HABU_AWS credential source, select "Activate" from the Actions dropdown.
Review the credentials information and then confirm the activation.
The next screen will display the following parameters:
AWS Access Key ID
AWS Secret Access Key
User ARN
Copy and store the credentials in a secure location for the next procedure.
Use the credentials to authorize and send files to the LiveRamp-hosted AWS S3 bucket generated in the previous procedure.
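For example, here is a minimal sketch of sending a file to the LiveRamp-hosted bucket with the boto3 library. The credential values, bucket name, and object key shown are placeholders; replace them with the credentials and data source location generated above.

```python
import boto3

# Placeholder credentials: substitute the AWS Access Key ID and AWS Secret
# Access Key displayed on the credentials screen.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",          # AWS Access Key ID (placeholder)
    aws_secret_access_key="wJalrXUt...",  # AWS Secret Access Key (placeholder)
)

# Upload a local file to the LiveRamp-hosted bucket under your upload prefix
# (bucket name and key below are hypothetical examples).
s3.upload_file(
    Filename="purchase_events.csv",
    Bucket="clean-room-client-org-123ab456",
    Key="uploads/HABU_AWS/purchase_events/purchase_events.csv",
)
```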
Create the Data Connection
After you've added the credentials to LiveRamp Clean Room, create the data connection:
From the LiveRamp Clean Room navigation pane, select Data Management → Data Connections.
From the Data Connections page, click New Data Connection.
From the New Data Connection screen, select "LiveRamp-Hosted AWS S3".
If you've already generated credentials, they will automatically populate. Otherwise, you can generate or regenerate credentials from this page.
Complete the following fields in the Set up Data Connection section:
Category: Enter a category of your choice.
Dataset Type: Select Generic.
File Format: Select CSV, Parquet, or Delta.
Note
All files must have a header in the first row. Headers should not have any spaces or special characters and should not exceed 50 characters. An underscore can be used in place of a space.
If you are uploading a CSV file, avoid double quotes in your data (such as "First Name" or "Country").
Quote Character: If you're uploading CSV files, enter the quote character you'll be using (if any).
Field Delimiter: If you're uploading CSV files, select the delimiter to use (comma, semicolon, pipe, or tab).
Sample File Path: Enter the path to a sample file that will be used to infer the schema and to understand how partition columns will be leveraged in questions (needed if you're defining partition columns). For example, "s3://clean-room-client-org-123ab456-7d89-10e1-a234-567b891c0123/purchase_events/brand-id=1234/file.csv".
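To illustrate how the CSV settings and the partition-style sample path fit together, here is a minimal sketch. The delimiter, quote character, column names, and brand-id partition value are illustrative assumptions, not required values.

```python
import csv

# Write a CSV using the same field delimiter and quote character you select above.
with open("file.csv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="|", quotechar="'", quoting=csv.QUOTE_MINIMAL)
    writer.writerow(["user_id", "purchase_amount", "purchase_date"])  # header row
    writer.writerow(["abc123", "19.99", "2024-01-15"])

# Partition-style object key of the form <dataset>/<column>=<value>/<file>,
# matching the layout shown in the sample file path above.
key = "purchase_events/brand-id={value}/file.csv".format(value="1234")
print(key)  # purchase_events/brand-id=1234/file.csv
```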
Complete the following tasks and fields in the Data Location and Schema section:
To use partitioning on the dataset associated with the data connection, slide the Uses Partitioning toggle to the right.
Note
If the data connection uses partitioning, the dataset can be divided into subsets so that data processing occurs only on relevant data during question runs, which results in faster processing times. When using partitioning, you must enter a data schema reference file below.
Data Location: The data location will automatically populate with the AWS S3 bucket generated in the first procedure. The date macro in the path will be replaced by the actual date of each upload (see the upload sketch after this section). For example, "s3://clean-room-client-org-123ab456/uploads/HABU_AWS/purchase_events/date+yyyy-MM-dd/".
Sample File Path: If you enabled partitioning above, enter the location of a data schema reference file.
Note
The data schema reference file name must start with "s3://" and end with a valid file extension (such as ".csv").
The data schema reference file must be hosted in a static location and must have been uploaded within the last seven days.
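Because the data location contains a date macro, each day's files land under a prefix for that upload date. Here is a minimal sketch of computing the dated prefix and uploading a file to it with boto3; the bucket name and file name are placeholders, and the credentials from the earlier procedure are assumed to be configured.

```python
from datetime import datetime, timezone
import boto3

# Resolve the date macro in the data location to today's date (yyyy-MM-dd).
today = datetime.now(timezone.utc).strftime("%Y-%m-%d")
prefix = f"uploads/HABU_AWS/purchase_events/{today}/"

# Assumes the AWS credentials generated earlier are available to boto3
# (for example, via environment variables or a configured profile).
s3 = boto3.client("s3")
s3.upload_file(
    Filename="purchase_events.csv",                # hypothetical local file
    Bucket="clean-room-client-org-123ab456",       # hypothetical LiveRamp-hosted bucket
    Key=prefix + "purchase_events.csv",
)
print("Uploaded to", prefix)
```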
Review the data connection details and then save the data connection.
If you haven't already, upload your data files to your specified location.
All configured data connections can be seen on the Data Connections page.
When a connection is initially configured, it will show "Verifying Access" as the configuration status. Once the connection is confirmed and the status has changed to "Mapping Required", map the table's fields.

You will receive file processing notifications via email.
Map the Fields
Once the data connection status has changed to "Mapping Required", perform the steps below in LiveRamp Clean Room to map the fields.
Note
Before mapping the fields, we recommend confirming any expectations your partners might have for field types for any specific fields that will be used in questions.
From the row for the newly created data connection, click the More Options menu (the three dots) and then click Map Fields.
The Map Fields screen opens, and the file column names auto-populate.
For any columns that you do not want to be queryable, slide the Include toggle to the left.
Note
Ignore the field delimiter fields because the delimiter was defined in a previous step.
Click Next.
The Add Metadata screen opens.
For any column that contains PII data, slide the PII toggle to the right.
Select the data type for each column.
For columns that you want to partition, slide the Allow Partitions toggle to the right.
If a column contains PII, slide the User Identifiers toggle to the right and then select the user identifier that defines the PII data.
Click Save.
Your data connection configuration is now complete and the status changes to "Completed".
You can now provision the resulting dataset to your desired Hybrid, Confidential Computing, or Amazon Web Services clean room.