LiveRamp’s Wholesale Delivery Methods
LiveRamp provides match data for clients, with flexible delivery methods:
Delivery of files to your SFTP
Delivery of files to your S3 bucket
A continuous, low-latency, server-to-server (S2S) streamed record delivery “Firehose” (pull)
A simple download tool, which downloads all new records whenever executed (pull)
Note
For customers who do not want files delivered to their SFTP or S3 bucket, we highly recommend the provided Download Tool due to its ease of use and built-in offset management.
Tip
The download tool can be used in addition to, or in place of, daily deliveries to your SFTP or S3 bucket - talk to your account manager about your preference.
URLs for feeds follow a standard naming convention. LiveRamp will provide the necessary URL to use when using the tool.
For deliveries to your SFTP server, provide the following information (a verification sketch follows this list):
Host name
Port
Username
Password
Root path
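Before sending these details, it can be worth confirming they work end to end. Below is a minimal verification sketch, assuming the paramiko library and placeholder values for the host, credentials, and root path (substitute your own):

import paramiko

# Placeholder values for illustration; substitute the details you plan to provide.
HOST = "sftp.example.com"
PORT = 22
USERNAME = "liveramp_delivery"
PASSWORD = "********"
ROOT_PATH = "/incoming/liveramp"

# Open an SFTP session and list the delivery root to confirm the account can
# reach the path LiveRamp will write to.
transport = paramiko.Transport((HOST, PORT))
transport.connect(username=USERNAME, password=PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)
print(sftp.listdir(ROOT_PATH))
sftp.close()
transport.close()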
For deliveries to an AWS (Amazon Web Services) S3 bucket, provide the following information (an access-check sketch follows this list):
Access key
Secret key
Bucket
Region (usually “us-east-1” or “us-west-2”)
(Optional) Root path for delivery
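Similarly, the following sketch uses boto3 with placeholder values to confirm the supplied key pair can write into the bucket; it writes and then removes a small marker object under the delivery root:

import boto3

# Placeholder values for illustration; substitute the details you plan to provide.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIAEXAMPLE",
    aws_secret_access_key="********",
    region_name="us-east-1",
)

BUCKET = "example-liveramp-deliveries"
KEY = "liveramp/_delivery_test"  # placed under the optional root path

# Write, then remove, a marker object to confirm the credentials allow delivery.
s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"ok")
s3.delete_object(Bucket=BUCKET, Key=KEY)
print("Delivery credentials verified for", BUCKET)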
Streamed Record Delivery “Firehose”
LiveRamp continuously evaluates and produces record updates on each partner’s behalf, and is able to deliver updates to partners with low latency, usually sub-second, after related activity (e.g., a cookie sync).
Record updates for a partner are continuously written to a feed delivered via HTTPS, and a partner “subscribes” to that feed via an authenticated, long-lived HTTP GET request. Records are then streamed via the response of that request. This method of delivery is very similar to Twitter’s well-known “Firehose” API.
Offset Requests
“Firehoses” allow for streamed, low-latency delivery of record updates. However, a weakness of this API pattern is that a subscriber can only receive records which arrive after their subscription began. This can result in data loss, should records arrive in between subscriptions.
To prevent this, LiveRamp models feeds as infinite-length files, and allows for new subscribers to begin their subscription at any offset of the feed, up to and including the present (representing “live” records). A subscriber can then interrupt their subscription at a particular feed byte offset, and later (potentially much later) begin a new subscription which begins reading from that exact byte offset, with no data lost in between.
Offset requests are optional, and require minor book-keeping on the client’s end to track the last-read offset. Partners willing to accommodate a small amount of data loss may instead use simple “tailing” requests, which subscribe to live records in a manner identical to a traditional API firehose.
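As a concrete illustration of a simple tailing subscription, the sketch below uses Python's requests library with an already-authenticated session (see Authentication below), the match-records feed URL from the end-to-end example, and the special live offset of -1. It assumes records arrive newline-delimited, as the truncated example output suggests:

import requests

FEED_URL = "https://pippio.com/api/stream/match-records"

session = requests.Session()  # must already hold the auth cookie (see Authentication)

# A long-lived GET that tails "live" records; records arriving while
# disconnected are not recovered in this mode.
with session.get(FEED_URL, params={"offset": -1, "block": "true"}, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            print(line)  # one record update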
Offset Tracking and Re-requesting
When making requests to the streaming API directly, the response may close for various reasons, so clients should have a mechanism in place to automatically submit another request with an updated offset. Clients need to track this offset on their side: the new offset to use is the offset supplied in the previous request plus the number of bytes of data received from that request.
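A minimal sketch of that bookkeeping, again assuming the requests library, an already-authenticated session (see Authentication below), and the match-records feed from the end-to-end example; handle_record_bytes is a hypothetical stand-in for whatever persists records and the current offset on your side:

import time

import requests

FEED_URL = "https://pippio.com/api/stream/match-records"


def handle_record_bytes(data, offset):
    # Hypothetical handler: persist the record bytes and the updated offset here.
    pass


def stream_forever(session, offset):
    """Subscribe at `offset` and automatically re-request whenever the response closes."""
    while True:
        try:
            with session.get(FEED_URL,
                             params={"offset": offset, "block": "true"},
                             stream=True) as resp:
                resp.raise_for_status()
                for chunk in resp.iter_content(chunk_size=None):
                    # New offset = offset used in the request + bytes received so far.
                    offset += len(chunk)
                    handle_record_bytes(chunk, offset)
        except requests.RequestException:
            time.sleep(1)  # response closed or failed; re-request from the tracked offset

Because the offset advances as bytes are consumed, a re-request after a closed response resumes from exactly where the previous one left off.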
Authentication
Requests to LiveRamp feeds must be authenticated. LiveRamp provides a cookie-based authentication API at https://pippio.com/api/auth. This API accepts HTTPS POST requests with JSON credentials payloads of the form:
{"Username":"PartnerName","Password":"*******"}
Upon successful login, the authentication API returns a token via the standard HTTP ‘Set-Cookie’ header. API clients must then retain that token and present it via the standard ‘Cookie’ header on subsequent requests. The standard library of most programming languages includes a cookie “Jar” implementation which can be used to transparently manage the details of capturing and presenting authentication tokens.
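For example, in Python the requests library's Session object provides such a cookie jar; a minimal sketch with placeholder credentials might look like:

import requests

session = requests.Session()  # the session's cookie jar stores the returned token

resp = session.post(
    "https://pippio.com/api/auth",
    json={"Username": "PartnerName", "Password": "********"},
)
resp.raise_for_status()
print(resp.json().get("IsAuthenticated"))

# The token from the Set-Cookie response header is now held in session.cookies and
# will be presented automatically via the Cookie header on subsequent requests,
# e.g. to the streaming feeds described above.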
End-to-end Example
Obtain authentication token:
curl -L --cookie cookies.txt --cookie-jar cookies.txt --data-binary '{"Username":"PartnerName","Password":"********"}' https://pippio.com/api/auth
Example Output:
{"IsAuthenticated":true,"Session":{"user":{"account_id":"PartnerName","roles":["api"]...
Begin streaming match records from byte offset 123456:
curl --cookie cookies.txt -v -N "https://pippio.com/api/stream/match-records?offset=123456&block=true"
Example Output:
< HTTP/1.1 206 Partial Content
< Content-Range: bytes 123456-9223372036854775807/9223372036854775807
< Date: Mon, 17 Aug 2015 18:56:42 GMT
< Content-Type: text/plain; charset=utf-8
< Transfer-Encoding: chunked
{"Key":{"Type":5,"Value":"fd6db015-9663-431d-9850-36a6f95d81c8"},"Paths":[{"To":{"Type":1,"Value":"693570656"},"Confidence":1.0,"Metadata":"CAUQABoPCAAAwpsTxAQ=="},{"To":{"Type":5,"Value":"480479ba-fed8-44c8-8fb8-c98397c26c98"},"Confidence":...
Begin streaming future match records via special offset -1 (live streaming):
curl --cookie cookies.txt -v -N "https://pippio.com/api/stream/match-records?offset=-1&block=true"
Example Output:
< HTTP/1.1 206 Partial Content
< Content-Range: bytes 45678910-9223372036854775807/9223372036854775807
< Date: Mon, 17 Aug 2015 18:56:42 GMT
< Content-Type: text/plain; charset=utf-8
< Transfer-Encoding: chunked
{"Key":{"Type":5,"Value":"fd6db015-9663-431d-9850-36a6f95d81c8"},"Paths":[{"To":{"Type":1,"Value":"693570656"},"Confidence":1.0,"Metadata":"CAUQABoPCAAAwpsTxAQ=="},{"To":{"Type":5,"Value":"480479ba-fed8-44c8-8fb8-c98397c26c98"},"Confidence":...
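Each record in the stream is a JSON object with a Key and a list of Paths, as shown above. The sketch below decodes one such record, assuming records arrive one JSON object per line (the example output above is truncated, so confirm the exact framing for your feed):

import json

# A single, abbreviated record in the format shown in the example output above.
line = ('{"Key":{"Type":5,"Value":"fd6db015-9663-431d-9850-36a6f95d81c8"},'
        '"Paths":[{"To":{"Type":1,"Value":"693570656"},'
        '"Confidence":1.0,"Metadata":"CAUQABoPCAAAwpsTxAQ=="}]}')

record = json.loads(line)
key = record["Key"]["Value"]
for path in record["Paths"]:
    print(key, "->", path["To"]["Value"], "confidence:", path["Confidence"])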
In addition to the Streamed Record Delivery “Firehose”, LiveRamp provides a simple download tool - a lightweight Python script which simplifies downloading all new records as a single batch, often directly from our cloud storage. It lets you define your own schedule for pulling batched record updates (e.g., monthly, daily, or hourly), and is easy to set up.
The tool is a single-source Python script, available at:
https://storage.googleapis.com/cdn.pippio.com/tools/arbor_batch_download.py.