Export Data to Amazon Web Services
- Create an S3 bucket on your AWS account.
- Give the Swrve log-export user access to the bucket.
- (Optional) Provide the ARN of the KMS key or key alias for encryption.
Create an S3 Bucket
To create an S3 bucket:
Step 1: If you haven’t already done so, sign up for a free AWS account at http://aws.amazon.com/s3/.
Step 2: In the AWS Management Console, go to S3.
Step 3: Click Create Bucket.
Step 4: Enter the bucket name as swrveexternal-<companyname>.
Step 5: Select the region that matches your Swrve data center. Depending on your data requirements, Swrve stores all customer data and content in either our US or EU data centers.
- If your app uses our US data center (for example, https://dashboard.swrve.com/), set the region to US Standard.
- If your app uses our EU data center (for example, https://eu-dashboard.swrve.com), set the region to EU-West-1.
Step 6: Click Create.
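If you prefer the command line, you can also create the bucket with s3cmd, the same tool used later in this tutorial to download logs. A minimal sketch, assuming you have already run s3cmd --configure with your AWS credentials (omit --bucket-location for US Standard):
s3cmd mb s3://swrveexternal-<companyname> --bucket-location=eu-west-1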
Give Swrve Permissions to Write to the Bucket
After you create the S3 bucket, you need to give Swrve’s log-export account permission to write to the bucket by adding a bucket policy. To add a bucket policy:
Step 1: Click the name of the new bucket.
Step 2: On the top right, click Properties.
Step 3: Click Permissions.
Step 4: Click Add bucket policy.
Step 5: Enter the following JSON configuration, replacing <companyname> with your company name (lower case, no spaces or hyphens). Ensure the JSON code matches exactly.
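The exact policy JSON is supplied by Swrve; the sketch below only illustrates the shape such a policy takes, granting write-only access to a single principal. The principal ARN is a placeholder for Swrve’s log-export user, so confirm the exact value with your Customer Success Manager before saving:
{
    "Version": "2012-10-17",
    "Id": "SwrveLogExportPolicy",
    "Statement": [
        {
            "Sid": "AllowSwrveLogExportWrite",
            "Effect": "Allow",
            "Principal": { "AWS": "<swrve-log-export-user-arn>" },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::swrveexternal-<companyname>/*"
        }
    ]
}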
Step 6: Click Save.
Access the Logs
To access the logs:
Step 1: Send a message to email@example.com to request that Swrve enable log export for the required apps.
Step 2: After your Customer Success Manager confirms logging export is enabled, download s3cmd from s3tools.org.
Step 3: Use s3cmd ls to browse your bucket.
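For example, after configuring your AWS credentials with s3cmd --configure, you can list the contents of the bucket created above:
s3cmd ls s3://swrveexternal-<companyname>/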
Log files are generated for every hour. File names are formatted as app-<appid>/YYYY-MM-DDTHHZ.log.gz.
For example, if your company name were Acme and your app ID were 123, you could download the logs for 9 August 2016, 14:00 to 15:00, with the following:
s3cmd get s3://swrveexternal-acme/app-123/2016-08-09T14Z.log.gz
Once the file is downloaded, decompress it with the following:
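gunzip 2016-08-09T14Z.log.gz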
This results in the plain text file 2016-08-09T14Z.log containing JSON-formatted events.
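Because each line is a standalone JSON object, line-oriented tools work well on these files. For example, assuming jq is installed, the following counts the events in the file by type:
jq -r .type 2016-08-09T14Z.log | sort | uniq -c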
If you would like to use AWS Key Management Service (KMS) to secure the export of your data from Swrve to AWS, contact your Customer Success Manager at firstname.lastname@example.org. You need to provide the Amazon Resource Name (ARN) for the KMS key or key alias that you want to use to encrypt the data in the S3 bucket.
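For reference, a KMS key ARN takes the form arn:aws:kms:<region>:<account-id>:key/<key-id>, and a key alias ARN takes the form arn:aws:kms:<region>:<account-id>:alias/<alias-name>.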
Export Data to Google Cloud Storage
Swrve logs data for your apps in a Cloud Storage Bucket that you access from the Google Cloud Platform Console. This section covers the requirements for setting up raw data export to a Google Cloud Storage bucket.
- In your Google Cloud Platform Console, create a service account that you want to use to write the data to Google Cloud Storage. The service account name usually follows the same syntax as an email address; for example, email@example.com.
- Create the Cloud Storage bucket that you want the data to be written to. The bucket name must be globally unique, but there is no other requirement on how you name it.
- Give the service account Writer permission on the bucket.
- From the Google Cloud Platform Console, get the p12 certificate that authenticates the named service account. (A command-line sketch of these setup steps follows this list.)
- Contact your Customer Success Manager at firstname.lastname@example.org and ask them to enable your app for Google Cloud Storage data export. You need to provide the service account name, bucket name, and p12 certificate.
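The console steps above can also be performed from the command line with the gcloud and gsutil tools. The sketch below assumes a service account named swrve-export, a bucket named my-swrve-export, and your own project ID in place of <project-id>; all three are placeholders:
# Create the service account that Swrve will write as.
gcloud iam service-accounts create swrve-export --display-name "Swrve raw data export"
# Create the destination bucket.
gsutil mb gs://my-swrve-export
# Grant the service account write access to the bucket.
gsutil acl ch -u swrve-export@<project-id>.iam.gserviceaccount.com:WRITE gs://my-swrve-export
# Download a p12 key for the service account; this is the certificate you send to Swrve.
gcloud iam service-accounts keys create swrve-export.p12 --iam-account swrve-export@<project-id>.iam.gserviceaccount.com --key-file-type p12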
To access the data logs in Google Cloud Storage:
Step 1: After your Customer Success Manager confirms logging export is enabled, access the Cloud Storage bucket in your Google Cloud Platform Console. The files in the bucket are automatically organized into folders named by app ID.
Step 2: Double-click an app folder to view the log files within it. Log files are generated for every hour. File names are formatted as YYYY-MM-DDTHHZ.log.gz.
Step 3: Follow the Google Cloud Platform steps for downloading and viewing the log file. The decompressed file is a plain text file named YYYY-MM-DDTHHZ.log, containing JSON-formatted events.
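For example, to fetch and decompress the same hour used in the S3 example above (the bucket name my-swrve-export is a placeholder):
gsutil cp gs://my-swrve-export/app-123/2016-08-09T14Z.log.gz .
gunzip 2016-08-09T14Z.log.gz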
The raw event export files follow this schema:
- gzip compressed
- one JSON-encoded event per line (each line represents a single event)
- each line is a JSON dictionary with the following keys: user, time, client_time, type, app_version, payload, parameters, seqnum, version, device_id
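For illustration, a single decompressed line might look like the following; all field values here are invented for the example:
{"user": "user-123", "time": 1470751212345, "client_time": 1470751210000, "type": "event", "app_version": "1.2.3", "payload": {"level": 3}, "parameters": {"name": "tutorial.complete"}, "seqnum": 42, "version": 2, "device_id": 9876}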
type represents the event type sent to the Swrve API. It is one of the event types accepted by the Swrve events REST API; the type-specific parameters for purchase and iap events are noted below.
user represents the user identifier sent to the Swrve API with the event (or batch of events).
time represents the time recorded on the Swrve server when the event was received. This is an integer which represents the number of milliseconds since the epoch.
client_time represents the client time sent to Swrve with the event. This is an integer which represents the number of milliseconds since the epoch.
app_version represents the app_version identifier sent to the Swrve API with the event (or batch of events).
payload represents the optional payload information added to the event.
payload is a dictionary of string / integer / boolean values.
parameters contains a dictionary of data sent to the Swrve API with the event. The dictionary fields depend on the event type. These parameters match the REST API.
- purchase fields: item, currency, cost
- iap fields: platform dependent
The seqnum, version, and device_id fields are internal and are used for debugging.
Need a hosted solution?
Most Swrve customers self-host this pipeline; all you have to do is follow the steps in these tutorials. However, if you prefer a turnkey hosted solution, we offer one as a professional service for an additional fee. For more information, contact your Customer Success Manager at email@example.com.
Need help with queries?
Swrve support can help you with basic setup and configuration of the pipeline described above. If you need help with your queries, contact our Data Services team at firstname.lastname@example.org. They will help you get the most out of your data with dedicated support, pre-built and custom reports, and dedicated data science hours.