
Perform Translation Through ADX

LiveRamp's Translation in AWS Data Exchange (ADX) translates a RampID from one partner encoding to another, using either maintained or derived RampIDs. This lets you match persistent pseudonymous identifiers to one another and use the data without sharing the sensitive underlying identifiers.

Note

Specifically, RampID translation enables:

  • Person-based analytics

  • Increased match rates in data collaboration

  • Measurement enablement across device types

You can access LiveRamp Identity Resolution within the ADX Marketplace, meaning translation can be performed without leaving AWS. See "LiveRamp Identity in ADX" for more information.

Overall Steps

Before you can perform translation, you must perform the steps to enable LiveRamp Identity in the ADX Marketplace, including providing access to your S3 buckets. For information on performing these steps, see “LiveRamp Identity in the ADX Marketplace”.

After you’ve performed the steps to enable LiveRamp Identity in ADX, perform the following steps to translate RampIDs:

Note

To avoid errors, you might want to verify that your setup has been performed correctly before performing the operation. For more information, see the "Checklist to Verify Your Setup for LiveRamp Identity in ADX" section below.

  1. Format the input data file and load it into your AWS S3 input location.

  2. Initiate translation by calling the LiveRamp Workflows API endpoint.

  3. Initiate output file delivery by calling the LiveRamp Polling API endpoint.

The calls can be made either as HTTPS requests with curl or through the AWS CLI (command line interface).

After you initiate file delivery, LiveRamp delivers the translated output file(s) to the specified S3 output location, and the associated usage metrics are reported to AWS for billing.

See the sections below for more information on performing these steps.

Checklist to Verify Your Setup for LiveRamp Identity in ADX

To avoid errors, use the checklists in the sections below to verify that all the necessary setup steps have been successfully performed before executing an operation.

AWS Region Alignment

  • Region in Contract: Confirm that the AWS region you provided to LiveRamp during contract execution matches the region your AWS resources actually use.

  • AWS CLI Region Check: Run aws configure get region to verify the AWS region for the IAM user or profile you're using.

Note

  • The ADX offer will be made and accepted in US-East-2 and API calls will be made from US-East-2.

  • The region you provide for your contract is where your buckets need to be. As long as your buckets are in US-East-1, your job (the compute run at LiveRamp's end for your job) will run in US-East-1 and will not incur cross-region data transfer costs. For bucket regions other than US-East-1 and US-West-2, you need to set "cross_region" to "true" in your request, and the job will run in US-East-1.

IAM User and Permissioning

  • IAM User for ADX: Confirm that there is an IAM user configured specifically for ADX operations.

  • ADX Permissioning: Confirm that the IAM user has the required permissions for starting and polling jobs in ADX.

  • S3 Bucket Permissioning: Confirm that the IAM user has been granted read and write permissions for both the input and output S3 buckets.

S3 Bucket Setup

  • Input Bucket Configuration: Confirm that there is an S3 bucket dedicated exclusively to input files for LiveRamp processing.

  • Output Bucket Configuration: Confirm that there is a separate S3 bucket dedicated to output files from LiveRamp processing.

  • Bucket Policy Verification: Confirm that the bucket policies for the input and output buckets are aligned with LiveRamp's required permissions (see the example policy sketch after this list).

  • Bucket Accessibility Test: Execute aws s3 ls s3://<input-bucket-name> and aws s3 ls s3://<output-bucket-name> to verify IAM user access to the buckets.
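To apply or verify a bucket policy from the command line, a generic sketch like the following can serve as a reference. The bucket name is illustrative, and the principal ARN and actions are placeholders; use the values from LiveRamp's setup instructions.

aws s3api put-bucket-policy \
    --bucket my-input-bucket-name \
    --policy '{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowLiveRampAccess",
                "Effect": "Allow",
                "Principal": {"AWS": "<LiveRamp-provided principal ARN>"},
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::my-input-bucket-name",
                    "arn:aws:s3:::my-input-bucket-name/*"
                ]
            }
        ]
    }'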

Format the Input Data File

See the sections below for information on formatting the input data file for translation.

Input File Formatting Guidelines

Translation input data files should be formatted as CSV files. When creating input data files, follow these additional guidelines:

  • Include a header row in the first line of every file. Files cannot be processed without headers.

  • You can name your columns however you want, but every column name must be unique in a table.

  • Include the desired target domain encoding for each row in the column for the target domain:

    • Enter a partner’s domain when translating from your native encoding to that partner’s domain.

    • Enter your domain when translating from a partner’s encoding to your native encoding.

  • When performing translation on multiple files in one job, make sure the identifier column headers are the same in every file and that they match the value given for the “target_column” parameter in the call to initiate translation.

  • Avoid including additional columns, as extra columns slow down processing.

    Note

    Additional columns (such as attribute data columns) can be included in the input file, but only the input RampIDs and translated RampIDs will be returned in the output file.

File Format Example

See the table below for an example of how to format an input data file.

Column | Example | Description
RAMPID | XYT999RkQ3MEY1RUYtNUIyMi00QjJGLUFDNjgtQjQ3QUEwMTNEMTA1CgMjVBMkNEMTktRD | RampID (maintained or derived) for translation.
TARGET_DOMAIN | T001 | Target domain.
TARGET_TYPE | RampID | Target type. Currently only "RampID" is supported.
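For reference, you could create and upload a matching input file from the command line as follows (the file and bucket names are illustrative):

cat > input_20.csv <<'EOF'
RAMPID,TARGET_DOMAIN,TARGET_TYPE
XYT999RkQ3MEY1RUYtNUIyMi00QjJGLUFDNjgtQjQ3QUEwMTNEMTA1CgMjVBMkNEMTktRD,T001,RampID
EOF
aws s3 cp input_20.csv s3://my-input-bucket-name/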

Initiate Translation

Once your data files have been prepared and placed into your S3 bucket, initiate the translation process. This is done by making a call to the LiveRamp Workflows ADX API that follows the format of the example commands shown below.

AWS CLI Calls to Initiate Translation

See below for the format of an AWS CLI call to initiate translation:

aws dataexchange send-api-asset \
    --data-set-id <data-set-id> \
    --revision-id <revision-id> \
    --asset-id <asset-id> \
    --method POST \
    --path "/adx/job/start" \
    --body '{
                "input_s3": "<Input S3 bucket>",
                "file_format": "csv",
                "file_pattern": "<Regex pattern for input files>[.]csv",
                "workflow_type": "transcoding",
                "workflow_sub_type": "transcoding",
                "target_column": "<Target column>",
                "client_id": "<Client ID>",
                "client_secret": "<Client secret>",
                "cross_region": "true"
            }'

See below for an example of what a populated AWS CLI call to initiate translation might look like:

aws dataexchange send-api-asset \
    --data-set-id <data-set-id> \
    --revision-id <revision-id> \
    --asset-id <asset-id> \
    --method POST \
    --path "/adx/job/start" \
    --body '{
                "input_s3": "s3://my-input-bucket-name",
                "file_format": "csv",
                "file_pattern": "<Regex pattern for input files>[.]csv",
                "workflow_type": "transcoding",
                "workflow_sub_type": "transcoding",
                "target_column": "my_rampid_col",
                "client_id": "my-client-id",
                "client_secret": "my-client-secret",
                "cross_region": "true"
            }'

Example Responses for Calls to Initiate Translation

The following is an example of a response for a successful job submission:

{
    "ResponseHeaders": {
        "Content-Type": "application/json",
        "Content-Length": "97",
        ...
    },
    "Body": "{\"Job ID\": \"E660EC80F3BF4473A120D3CAC890CADC_AWS_US_EAST_1\", \"Status\": \"ADX Start job submitted\"}"
}
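Because the response wraps the API body in an escaped JSON string, you will typically want to extract the Job ID programmatically for the subsequent poll call. A minimal sketch, assuming the jq tool is installed and that RESPONSE holds the output of the start call shown above:

# "Body" is itself a JSON string, so parse it a second time
# to extract the Job ID needed for polling.
JOB_ID=$(echo "$RESPONSE" | jq -r '.Body | fromjson | ."Job ID"')
echo "Job ID: $JOB_ID"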

Note

For information on troubleshooting errors that might occur when performing calls, see "Troubleshoot Calls in ADX".

Initiate Output File Delivery

Once you’ve initiated the translation process, you must make a poll job request to initiate the delivery of the output file to the output S3 bucket after processing is complete. This is done by making a call that follows the format of the example commands shown below.

Note

It is recommended that polling be done programmatically at recurring intervals until the processing is complete and the output file has been delivered (a sample polling loop is shown after the example responses below).

AWS CLI Calls to Initiate Delivery

See below for the format of an AWS CLI call to initiate delivery:

aws dataexchange send-api-asset \
    --data-set-id <data-set-id> \
    --revision-id <revision-id> \
    --asset-id <asset-id> \
    --method POST \
    --path "/adx/job/poll" \
    --body '{
            "job_id": "<Job ID>",
            "output_s3": "<Output S3 bucket>",
            "file_format": "csv",
            "client_id": "<Client ID>",
            "client_secret": "<Client secret>",
            "cross_region": "true"
            }'

See below for an example of what a populated AWS CLI call to initiate delivery might look like:

aws dataexchange send-api-asset \
    --data-set-id <data-set-id> \
    --revision-id <revision-id> \
    --asset-id <asset-id> \
    --method POST \
    --path "/adx/job/poll" \
    --body '{
            "job_id": "JOB_ID_123",
            "output_s3": "s3://my-output-bucket",
            "file_format": "csv",
            "client_id": "my-client-id",
            "client_secret": "my-client-secret",
            "cross_region": "true"
            }'

Example Responses for Calls to Initiate Delivery

The following is an example of a response when processing is complete:

{
    "ResponseHeaders": {
        "Content-Type": "application/json",
        "Content-Length": "158",
        ...
    },
    "Body": "{\"Job ID\": \"E660EC80F3BF4473A120D3CAC890CADC_AWS_US_EAST_1\", \"Status\": \"ADX Poll job started for delivering output results. Re-poll later for updated status\"}"
}

In addition to the response received when processing is complete, you might get one of the following responses in the Status parameter:

  • 'Upload to AWS S3 in progress. Re-poll later or wait for the delivery notification'

  • 'Output results uploaded to AWS S3 bucket'
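Given these statuses, polling can be scripted as a simple loop. The following is a minimal sketch, assuming jq is installed and that the DATA_SET_ID, REVISION_ID, ASSET_ID, JOB_ID, CLIENT_ID, and CLIENT_SECRET shell variables have been set; the bucket name and 5-minute interval are illustrative:

while true; do
    RESPONSE=$(aws dataexchange send-api-asset \
        --data-set-id "$DATA_SET_ID" \
        --revision-id "$REVISION_ID" \
        --asset-id "$ASSET_ID" \
        --method POST \
        --path "/adx/job/poll" \
        --body "{\"job_id\": \"$JOB_ID\", \"output_s3\": \"s3://my-output-bucket\", \"file_format\": \"csv\", \"client_id\": \"$CLIENT_ID\", \"client_secret\": \"$CLIENT_SECRET\", \"cross_region\": \"true\"}")
    # Unwrap the escaped "Body" string to read the current status.
    STATUS=$(echo "$RESPONSE" | jq -r '.Body | fromjson | .Status')
    echo "Current status: $STATUS"
    # Stop once the output has been delivered to the S3 bucket.
    if [ "$STATUS" = "Output results uploaded to AWS S3 bucket" ]; then
        break
    fi
    sleep 300
done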

Note

For information on troubleshooting errors that might occur when performing calls, see "Troubleshoot Calls in ADX".

API Parameters

See the tables below for a list of the API header parameters and request parameters.

Header Parameters

Header Parameter | Data Type | Description
data-set-id | string | Your AWS-provided data set ID.
revision-id | string | Your AWS-provided revision ID.
asset-id | string | Your AWS-provided asset ID.

For information on finding the AWS-provided parameters, see this AWS article.
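If you need to look these IDs up from the command line, the AWS CLI can list them. A sketch, assuming your ADX entitlement is in place:

# List the data sets you are entitled to through your ADX subscription
aws dataexchange list-data-sets --origin ENTITLED
# List the revisions for a data set, then the assets within a revision
aws dataexchange list-data-set-revisions --data-set-id <data-set-id>
aws dataexchange list-revision-assets --data-set-id <data-set-id> --revision-id <revision-id>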

Request Parameters

Request Parameter | Data Type | Description
client_id | string | Either an existing LiveRamp client ID (if you already have Identity API credentials) or a new one provided by LiveRamp.
client_secret | string | Secret for the LiveRamp client_id (either an existing secret, if you already have Identity API credentials, or a new one provided by LiveRamp).
workflow_type | string | "transcoding" for all translation processes.
workflow_sub_type | string | Subtype of workflow needed. The allowed value for translation is "transcoding".
input_s3 | string | S3 directory for input files.
output_s3 | string | S3 directory for output files.
file_format | string | Specifies the format for input files. The accepted file format is CSV.
file_pattern | string | Regex pattern for input files. For example, the pattern 'input_2.*[.]csv' would result in the processing of input_20.csv, input_221.csv, and input_225.csv.
target_column | string | The column header name for the input field that contains the RampIDs to be translated (for example, "RAMPID").
cross_region | string | "true" or "false". If "true", workloads are processed in the default region (us-east-1) when the target region is unavailable. If "false", workloads are not processed in the default region when the target region is unavailable, and a status message asking you to enable cross-region processing is returned to the caller.
job_id | string | For polling requests, enter the Job ID returned in the response to the call to initiate translation. The Job ID consists of a unique ID plus your AWS region name.

Translation Output

The output file(s) from the translation process will be compressed and then written to the specified S3 bucket provided in the poll job request.

The file naming convention for the output files will be in the format "<JOB_ID>_0_0_0.csv.gz".

The Job ID will be a unique ID plus your AWS region name (such as "AWS_US_EAST_1"). The numbers after the Job ID do not carry any meaning.

Ex: 17697C67E98D4702BEB4ED7B3B0FA_AWS_US_EAST_1_0_0_0.csv.gz, 17697C67E98D4702BEB4ED7B3B0FA_AWS_US_EAST_1_1_0_0.csv.gz
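Once the files have been delivered, they can be downloaded and decompressed from the command line; for example (the bucket name and Job ID are illustrative):

# Download all output parts for a job, then decompress them
aws s3 cp s3://my-output-bucket/ . --recursive --exclude "*" --include "17697C67E98D4702BEB4ED7B3B0FA_AWS_US_EAST_1*"
gunzip 17697C67E98D4702BEB4ED7B3B0FA_AWS_US_EAST_1_*.csv.gz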

Output File for Translation

The output file for translation will follow the format shown in the table below.

Column | Sample | Description
RampID (original encoding) | XYT999RkQ3MEY1RUYtNUIyMi00QjJGLUFDNjgtQjQ3QUEwMTNEMTA1CgMjVBMkNEMTktRD | Returns the original RampID included in the input file.
Transcoded_identifier (RampID in target encoding) | XYT001k0MS00MDc1LUI4NjEtMjlCOUI0MUY3MENBCgNjVGQjE0MTMtRkFBMC00QzlELUJF | The translated RampID, or NULL (due to an unreadable native RampID, an unauthorized domain, etc.).