LivePerson to Redshift

This page provides you with instructions on how to extract data from LivePerson and load it into Redshift. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

What is LivePerson?

LivePerson promotes conversational commerce on digital messaging channels including SMS, Facebook Messenger, Apple Business Chat, and WhatsApp, as well as on websites and mobile apps. It lets businesses create AI-powered chatbots to handle consumer messages alongside human customer service staff.

Getting data out of LivePerson

LivePerson provides several APIs, including a LiveEngage Data Access API that lets developers retrieve data stored in the platform about agent activity, engagement, web sessions, and surveys. For example, to retrieve information about agent activity, you would call GET https://{domain}/data_access_le/account/{accountID}/le/agentActivity.
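
As a rough illustration, here's a minimal Python sketch of that request, assuming you already have an API domain, account ID, and an access token in hand (the placeholder values and the authentication style are assumptions; the LivePerson developer documentation describes the exact auth flow):

    import requests

    # Placeholder values -- substitute your own account details.
    DOMAIN = "va.da.liveperson.net"     # hypothetical Data Access API domain
    ACCOUNT_ID = "12345678"
    ACCESS_TOKEN = "your-access-token"  # auth details vary; see the LivePerson docs

    url = f"https://{DOMAIN}/data_access_le/account/{ACCOUNT_ID}/le/agentActivity"
    response = requests.get(
        url,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    data = response.json()

    # The response lists downloadable data files rather than raw records.
    for f in data["dataAccessFiles"]["file"]:
        print(f["@name"], f["@scopeStartDate"], f["@scopeEndDate"])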

Sample LivePerson data

Here's an example of the kind of response you might see with a query like the one above.

     "dataAccessFiles": {
       "@id": "28045150",
       "link": {
         "@rel": "self"
       "file": [
           "@name": "Agent.1461387600000.1461391200000.part-00001-0",
           "@scopeStartDate": "2019-04-23T01:00:00-04:00",
           "@scopeEndDate": "2019-04-23T02:00:00-04:00",
           "@href": ""
           "@name": "Agent.1461391200000.1461394800000.part-00001-0",
           "@scopeStartDate": "2019-04-23T02:00:00-04:00",
           "@scopeEndDate": "2019-04-23T03:00:00-04:00",
           "@href": ""
           "@name": "Agent.1461394800000.1461398400000.part-00001-0",
           "@scopeStartDate": "2019-04-23T03:00:00-04:00",
           "@scopeEndDate": "2019-04-23T04:00:00-04:00",
           "@href": ""
           "@name": "Agent.1461398400000.1461402000000.part-00000-0",
           "@scopeStartDate": "2019-04-23T04:00:00-04:00",
           "@scopeEndDate": "2019-04-23T05:00:00-04:00",
           "@href": ""
           "@name": "Agent.1461402000000.1461405600000.part-00000-0",
           "@scopeStartDate": "2019-04-23T05:00:00-04:00",
           "@scopeEndDate": "2019-04-23T06:00:00-04:00",
           "@href": ""

Preparing LivePerson data

If you don't already have a data structure in which to store the data you retrieve, you'll have to create a schema for your data tables. Then, for each value in the response, you'll need to identify a predefined datatype (INTEGER, DATETIME, etc.) and build a table that can receive them. The LivePerson documentation should tell you what fields are provided by each endpoint, along with their corresponding datatypes.
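
For instance, a simple field-to-type mapping for the file metadata shown above might look something like this in Python (the column names and Redshift types are guesses for illustration; the LivePerson docs are the authority):

    # Hypothetical mapping from LivePerson response fields to Redshift column types.
    COLUMN_TYPES = {
        "name": "VARCHAR(256)",           # "@name" in the API response
        "scope_start_date": "TIMESTAMP",  # "@scopeStartDate"
        "scope_end_date": "TIMESTAMP",    # "@scopeEndDate"
        "href": "VARCHAR(1024)",          # "@href"
    }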

Complicating things is the fact that the records retrieved from the source may not always be "flat" – some of the objects may actually be lists. In these cases you'll likely have to create additional tables to capture the unpredictable cardinality in each record.
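
As a sketch of that idea, the nested "file" list in the sample response above could be split into a parent row and a set of child rows, along these lines (the function and key names are invented for illustration):

    # Flatten one dataAccessFiles response into a parent row plus child rows.
    def flatten(response):
        parent = response["dataAccessFiles"]
        parent_row = {"id": parent["@id"]}           # one row for the parent object
        child_rows = [
            {
                "parent_id": parent["@id"],          # foreign key back to the parent
                "name": f["@name"],
                "scope_start_date": f["@scopeStartDate"],
                "scope_end_date": f["@scopeEndDate"],
            }
            for f in parent["file"]                  # one child row per list element
        ]
        return parent_row, child_rows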

Loading data into Redshift

Once you have identified all of the columns you will want to insert, you can use the CREATE TABLE statement in Redshift to create a table that can receive all of this data.
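
As a sketch, creating a table for the file metadata shown earlier might look like this with psycopg2 (the table name, columns, and connection details are all placeholders, not names defined by LivePerson or Redshift):

    import psycopg2

    # Placeholder connection details for your Redshift cluster.
    conn = psycopg2.connect(
        host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="your-password",
    )

    create_sql = """
        CREATE TABLE IF NOT EXISTS liveperson_agent_files (
            name             VARCHAR(256),
            scope_start_date TIMESTAMP,
            scope_end_date   TIMESTAMP,
            href             VARCHAR(1024)
        );
    """

    with conn, conn.cursor() as cur:
        cur.execute(create_sql)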

With a table built, it may seem like the easiest way to migrate your data (especially if there isn't much of it) is to build INSERT statements to add data to your Redshift table row by row. If you have any experience with SQL, this will be your gut reaction. But beware! Redshift isn't optimized for inserting data one row at a time. If you have a high volume of data to be inserted, you would be better off loading the data into Amazon S3 and then using the COPY command to load it into Redshift.
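
Here's a rough sketch of that pattern using boto3 and psycopg2, assuming the extracted records have already been written to a local CSV file and that Redshift has an IAM role with read access to the bucket (the bucket, key, role, and file names are placeholders):

    import boto3
    import psycopg2

    # Upload the extracted data to S3 (placeholder bucket and key).
    s3 = boto3.client("s3")
    s3.upload_file("agent_files.csv", "my-etl-bucket", "liveperson/agent_files.csv")

    # Bulk-load the file with COPY instead of row-by-row INSERTs.
    copy_sql = """
        COPY liveperson_agent_files
        FROM 's3://my-etl-bucket/liveperson/agent_files.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy-role'
        FORMAT AS CSV;
    """

    conn = psycopg2.connect(
        host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="your-password",
    )
    with conn, conn.cursor() as cur:
        cur.execute(copy_sql)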

Keeping LivePerson data up to date

At this point you've coded up a script or written a program to get the data you want and successfully moved it into your data warehouse. But how will you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow and resource-intensive.

The key is to build your script in such a way that it can identify incremental updates to your data. Once you've taken new data into account, you can set your script up as a cron job or continuous loop to keep pulling down new data as it appears.
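
One common way to do this, sketched below, is to persist a "bookmark" recording how far the last successful run got, then filter each new pull against it; the state file and the idea of comparing against the scope end dates from the API are assumptions about how you might structure your own script:

    import json
    from datetime import datetime, timezone

    BOOKMARK_FILE = "liveperson_bookmark.json"  # hypothetical local state file

    def load_bookmark():
        """Return the timestamp of the last successful run (epoch start if none)."""
        try:
            with open(BOOKMARK_FILE) as f:
                return datetime.fromisoformat(json.load(f)["last_run"])
        except FileNotFoundError:
            return datetime(1970, 1, 1, tzinfo=timezone.utc)

    def save_bookmark(ts):
        """Record the high-water mark after a load succeeds."""
        with open(BOOKMARK_FILE, "w") as f:
            json.dump({"last_run": ts.isoformat()}, f)

    bookmark = load_bookmark()
    # ... fetch the file listing, then process only files whose
    # "@scopeEndDate" is later than the bookmark, load them, and finally:
    save_bookmark(datetime.now(timezone.utc))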

Other data warehouse options

Redshift is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Google BigQuery, PostgreSQL, Snowflake, or Microsoft Azure SQL Data Warehouse, which are RDBMSes that use similar SQL syntax, or Panoply, which works with Redshift instances. Others choose a data lake, like Amazon S3. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To BigQuery, To Postgres, To Snowflake, To Panoply, To Azure SQL Data Warehouse, and To S3.

Easier and faster alternatives

If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.

Thankfully, products like Stitch were built to move data from LivePerson to Redshift automatically. With just a few clicks, Stitch starts extracting your LivePerson data via the API, structuring it in a way that's optimized for analysis, and inserting that data into your Redshift data warehouse.