Configure Clean Rooms
See the information below and the articles in this section for guidance on configuring clean rooms, including clean rooms for multi-party collaborations and walled garden clean rooms.
Clean Room Types
LiveRamp Clean Room offers several different types of clean rooms, depending on your situation and use case:
Hybrid clean rooms: Recommended when collaborators have data housed across multiple cloud warehouse types.
Confidential Computing clean rooms: Used when the clean room owner requires advanced security guarantees for collaboration. Currently supports Azure Confidential Compute. These are also referred to as “Hybrid Confidential Computing” or “TEE (Trusted Execution Environment)” clean rooms.
Native-Pattern clean rooms: Used when the clean room owner cannot allow data to leave their own cloud environment (for ease of collaboration, native-pattern clean rooms are generally not recommended).
Walled garden clean rooms: Used when connecting to walled garden data.
See the sections below for more information.
Note
Within Hybrid and Confidential Computing clean room types, you also have the option of enabling Clean Compute on Apache Spark for analytical questions. This allows you to execute your custom Python code against your own data and partner data from within a clean room while respecting the architectural integrity of the clean room. Clean Compute enables multi-node processing via customizable Spark jobs when you run a Clean Compute question. For more information, see “Clean Compute on Apache Spark”.
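As a rough illustration, a Clean Compute question might run custom PySpark logic along the following lines. This is a minimal sketch, assuming hypothetical table and column names (owner_crm, partner_exposures, rampid, campaign_id); the actual provisioned dataset names come from your clean room configuration.

    # Minimal sketch of custom Python logic for a Clean Compute question.
    # All table and column names here are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("clean-compute-sketch").getOrCreate()

    # Within the clean room, these would be the provisioned views of your
    # data and your partner's data.
    my_crm = spark.table("owner_crm")
    partner_exposures = spark.table("partner_exposures")

    # Example analysis: distinct matched users per campaign.
    overlap = (
        my_crm.join(partner_exposures, on="rampid", how="inner")
              .groupBy("campaign_id")
              .agg(F.countDistinct("rampid").alias("matched_users"))
    )
    overlap.show()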
Hybrid Clean Rooms
Hybrid clean rooms allow parties to collaborate on data regardless of where that data is hosted, and they must be used when data is housed in more than one cloud warehouse type. For example, if one party’s data is housed in Snowflake and another party’s data is housed in AWS, a Hybrid clean room must be used.
For collaboration, LiveRamp manages the orchestration between data sources.
Queries in Hybrid clean rooms are executed using Apache Spark SQL because of its support for distributed processing of large datasets. For this reason, you might want to optimize your queries for use with Spark SQL (for more information, see “Question Builder Best Practices”). The processing takes place in a LiveRamp-hosted, clean room-specific VPC generated at question runtime.
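For example, a question optimized for Spark SQL might prune columns early and hint a broadcast join for a smaller table. This is a hypothetical sketch (the table names partner_exposures and owner_segments are placeholders), not a prescribed query:

    # Hypothetical Spark SQL query tuned for distributed execution:
    # columns are pruned early and the smaller table is broadcast.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("hybrid-question-sketch").getOrCreate()

    result = spark.sql("""
        SELECT /*+ BROADCAST(seg) */
               exp.campaign_id,
               COUNT(DISTINCT exp.rampid) AS exposed_users
        FROM partner_exposures exp
        JOIN owner_segments seg
          ON exp.rampid = seg.rampid
        WHERE seg.segment_name = 'in_market_auto'
        GROUP BY exp.campaign_id
    """)
    result.show()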
The following data planes are supported in Hybrid clean rooms:
US:
GCP us-east-2, us-central-1
AWS us-east-2
AU/APAC: GCP australia-southeast1
EU:
Azure West EU
GCP europe-west4
UAE: AWS me-central-1
When connecting your data by creating a data connection for a Hybrid clean room, use the following articles, depending on your cloud warehouse:
Amazon Web Services (customer-hosted or LiveRamp-hosted)
Google Cloud Storage (customer-hosted or LiveRamp-hosted)
Confidential Computing Clean Rooms
Confidential Computing clean rooms (also sometimes referred to as “Hybrid Confidential Computing (HCC)” or “TEE (Trusted Execution Environment)” clean rooms) are a hybrid-pattern option for owners of highly sensitive data or code.
Confidential Computing clean rooms use a privacy-enhancing technology called a trusted execution environment (TEE) to ensure that all data processing occurs within a hardware-isolated secure environment and to attest that only approved operations were run.
Confidential computing is well suited to multi-party collaboration because it enables specified code to run against data on specified hardware (a chip) while verifying that only the code agreed upon by all parties was executed. This verification process, called attestation, is used when no party wants to directly share code or data with another but all parties still want to collaborate.
Attestation provides the assurance required to enable collaboration without any party having to share unencrypted data or code with other parties, or even with LiveRamp.
For collaboration, LiveRamp manages the orchestration between data sources.
Queries in Confidential Computing clean rooms are executed using Apache Spark SQL because of its support for distributed processing of large datasets. You might want to optimize your queries for use with Spark SQL (for more information, see “Question Builder Best Practices”). The processing takes place on an AMD SEV chip provisioned via Azure Confidential Compute. Each processing instance is unique to the clean room owner.
The following data planes are supported in Confidential Computing clean rooms:
US: Azure
EU: Azure
While partners can create standard Hybrid-pattern data connections for use in Confidential Computing clean rooms, owners must create CSV Catalog data connections. This pattern keeps the TEE’s view of the owner’s data explicit and controlled: only datasets declared in the CSV Catalog and mapped in the UI are available to query inside the HCC clean room.
When connecting your data by creating a data connection for a Confidential Computing clean room, clean room partners should use the following articles, depending on your cloud warehouse:
Amazon Web Services (customer-hosted or LiveRamp-hosted)
Google Cloud Storage (customer-hosted or LiveRamp-hosted)
Native-Pattern Clean Rooms
Native-pattern clean rooms are used when the clean room owner requires processing to occur in their own cloud instance. They require all data (both from the clean room owner and any partners) to be housed in the same cloud warehouse type (such as Snowflake or BigQuery).
Note
For ease of collaboration, native-pattern clean rooms are generally not recommended.
In native-pattern clean rooms, the processing takes place in your cloud warehouse environment.
LiveRamp Clean Room currently supports the following cloud warehouses for native clean rooms:
Snowflake
AWS
Databricks
Google BigQuery
Queries in native-pattern clean rooms are executed using the SQL dialect native to the specified cloud environment. Note that Snowflake native-pattern clean rooms also support Python via Snowpark.
Snowflake Native-Pattern Clean Rooms
Queries in a Snowflake native-pattern clean room use Snowflake's native SQL engine. For more information, see "Query Data in Snowflake" in Snowflake's documentation.
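As an illustration of the Snowpark option mentioned above, the following minimal sketch runs SQL through a Snowpark session. The connection parameters and the shared_exposures table name are placeholders, not values published by LiveRamp or Snowflake:

    # Hypothetical Snowpark sketch: querying a table made available to a
    # Snowflake native-pattern clean room. All identifiers are placeholders.
    from snowflake.snowpark import Session

    connection_parameters = {
        "account": "<account_identifier>",
        "user": "<user>",
        "password": "<password>",
        "warehouse": "<warehouse>",
        "database": "<database>",
        "schema": "<schema>",
    }
    session = Session.builder.configs(connection_parameters).create()

    # Run a query against a dataset shared into the clean room.
    rows = session.sql(
        "SELECT campaign_id, COUNT(DISTINCT rampid) AS matched_users "
        "FROM shared_exposures GROUP BY campaign_id"
    ).collect()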
For collaboration, LiveRamp facilitates sharing via external tables.
During question runs, datasets are accessed via Secure Share.
The following data planes are supported in Snowflake native-pattern clean rooms:
US:
AWS_US_WEST_2
AZURE_EASTUS2
GCP_US_CENTRAL1
AWS_US_EAST_1
AWS_US_EAST_2
EU: Azure West EU
The only supported data connection type is Snowflake native-pattern.
AWS Native-Pattern Clean Rooms
For collaboration, all parties must connect via AWS Glue Catalog.
During question runs, datasets are accessed via a LiveRamp role set to orchestrate data access across accounts in the clean room.
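This cross-account pattern is analogous to the standard AWS flow sketched below, in which a role is assumed and its temporary credentials are used to read Glue Catalog metadata in another account. The role ARN, session name, and database name are hypothetical; LiveRamp provisions the actual role used for orchestration:

    # Illustrative cross-account access sketch using boto3. The role ARN
    # and database name are hypothetical placeholders.
    import boto3

    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/clean-room-orchestration",
        RoleSessionName="clean-room-question-run",
    )["Credentials"]

    glue = boto3.client(
        "glue",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
        region_name="us-east-2",
    )

    # List the tables registered in the shared Glue database.
    for table in glue.get_tables(DatabaseName="clean_room_datasets")["TableList"]:
        print(table["Name"])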
The following data planes are supported in AWS native-pattern clean rooms:
US: us-east-2
The only supported data connection type is AWS customer-hosted.
Databricks Native-Pattern Clean Rooms
For collaboration, all parties must connect via Databricks accounts.
During question runs, datasets are accessed via Delta Share.
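For context, Delta Sharing access generally looks like the following sketch, which uses the open-source delta-sharing Python client. The profile file and the share, schema, and table names are placeholders; within a clean room, LiveRamp handles this access during question runs:

    # Hypothetical sketch of reading a Delta Share with the open-source
    # delta-sharing client. All names are placeholders.
    import delta_sharing

    # Profile file issued by the data provider.
    profile = "config.share"
    table_url = profile + "#clean_room_share.default.partner_exposures"

    # Load the shared table into a pandas DataFrame.
    df = delta_sharing.load_as_pandas(table_url)
    print(df.head())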
The following data planes are supported in Databricks native-pattern clean rooms:
US
The only supported data connection type is Databricks native-pattern.
Google Cloud BigQuery Native-Pattern Clean Rooms
Queries in a Google Cloud BigQuery native-pattern clean room use the GoogleSQL dialect and the compute typically runs within the clean room owner's BigQuery project. For more information, see "Optimize Queries" in BigQuery’s documentation.
For collaboration, all parties must have data in BigQuery in the same region.
During question runs, datasets are accessed via Private Exchange Listing.
Compute in the clean room owner's project is supported through a clean room parameter called “Billing Project ID”, which is set on the clean room configuration screen. This project is where the compute happens and where the BigQuery jobs are created, so the clean room's Billing Project ID determines which project is billed for job execution.
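To illustrate how a billing project determines where jobs run and are billed, here is a minimal sketch using the BigQuery Python client. The project ID, dataset, and table names are placeholders; the clean room's Billing Project ID plays the role of the project passed to the client:

    # Sketch: BigQuery jobs created by this client run in, and are billed
    # to, the project it is constructed with. All names are placeholders.
    from google.cloud import bigquery

    client = bigquery.Client(project="owner-billing-project")

    query = """
        SELECT campaign_id, COUNT(DISTINCT rampid) AS matched_users
        FROM `shared_dataset.partner_exposures`
        GROUP BY campaign_id
    """
    for row in client.query(query).result():
        print(row.campaign_id, row.matched_users)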
The following data planes are supported in BigQuery native-pattern clean rooms:
US: GCP us-east2
UK: GCP europe-west2
South Africa: GCP africa-south1
The only supported data connection type is BigQuery native-pattern.
Walled Garden Clean Rooms
Walled garden clean rooms provide an intelligence layer on top of industry clean room environments so you can analyze and activate walled garden data in a more flexible, user‑friendly way via LiveRamp Clean Room.
LiveRamp Clean Room currently supports the following walled gardens:
Amazon Marketing Cloud (AMC)
Facebook Advanced Analytics (FAA)
Google Ads Data Hub (ADH)
In these patterns:
The walled garden remains the system of record and continues to enforce its own data, privacy, and query rules.
LiveRamp Clean Room connects to the walled garden via a specialized data connection and credential, then uses that connection to pull event‑level outputs from the walled garden into LiveRamp Clean Room for user‑level analysis and visualizations.
From a LiveRamp UI perspective, you configure these as Hybrid clean rooms (select Hybrid when creating the clean room), but they function as walled garden–specific patterns rather than generic Hybrid or cloud‑native clean rooms.
The following walled garden data connection types are supported: