Notes:
- Help and support: Setting up a Kinesis Firehose delivery stream (or any other data integration) is part of your free support during onboarding, and we usually go through this process during a video call or an in-person meeting. This document describes the process so you can review it in advance or complete it yourself if you so wish.
- GDPR compliance: To ease GDPR compliance, Blockmetry processes your data in the AWS Dublin datacenter, which AWS calls eu-west-1. You can choose a different region, even one outside the EU (a third country), to export your data to, and Blockmetry will do so under your instructions. The instructions below contain direct links to the AWS management console in Dublin to help you complete the process faster.
Overview
Setting up an AWS Kinesis Firehose delivery stream to receive your data from Blockmetry has two parts:
- Setting up the delivery stream, which includes setting up the data store.
- Configuring permissions to allow Blockmetry to write data to the delivery stream.
Setting up the Kinesis Firehose delivery stream
Go to the AWS Kinesis dashboard. You may have to log into your AWS account. Click here to create a new delivery stream in Dublin (see the note above about Dublin).
In the form, fill out:
Delivery stream name: Anything you wish to call the stream that receives the Blockmetry data.
For "Source", make sure "Direct PUT or other sources" is selected (it's the default).
Click Next.
In the new form called Process records:
Record transformation: Choose Disabled.
Record format conversion: Choose Disabled.
Click Next.
In the new form called Select destination, choose the destination that best suits your needs. If you're just getting started, Amazon Elasticsearch Service is a good choice. The details you need for this page and the next are in the section below; please follow them and continue here when you're done.
For S3 backup, we recommend at least the "Failed records only" backup mode. Choosing All records means all your data is backed up, which is a very good idea. Choose an S3 bucket and prefix as you prefer.
Click Next.
You will now be shown the Configure settings page; the settings depend on the destination you chose. The details for Elasticsearch are in the section below.
Configure Blockmetry permissions
Once you've created the firehose stream and its destination, copy and paste the details into the property's admin page in your Blockmetry account. Blockmetry will then generate AWS access policies you can copy and paste to allow Blockmetry to write your data to the delivery stream. You also need to update the IAM role to create a trust relationship between it and Blockmetry (as they are in different accounts).
Go to the IAM page. Click here for the IAM roles homepage in Dublin.
Click the name of the role you just created or specified for the firehose.
In the Permissions tab:
Click Attach policies.
In the listing of policies, click Create policy.
Click on the JSON tab.
Now delete all contents in the default (empty) policy.
Copy and paste the policy document from the property management admin page in your Blockmetry account (a sketch of what such a policy generally looks like follows these steps).
Click Review policy.
Click Create policy.
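The exact policy document comes from your Blockmetry admin page, so use that verbatim. For orientation only, a policy granting write access to a delivery stream generally looks like the sketch below, where the region, account ID, and stream name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "arn:aws:firehose:eu-west-1:123456789012:deliverystream/EXAMPLE-STREAM-NAME"
    }
  ]
}
```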
Now go back to the IAM role page, then:
Click the Trust relationships tab.
Click the Edit Trust Relationship button.
You will be shown an AWS policy document in JSON format. It may already contain one or more statements; be careful not to overwrite them.
Identify where you can add another policy statement.
In your property's admin page in your Blockmetry account, you will find the policy under the Kinesis external ID heading. Copy and paste that statement into the policy document (a sketch of its general shape is below), then click Update Trust Policy.
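For orientation, a cross-account trust statement with an external ID generally looks like the sketch below; the account ARN and external ID here are placeholders, and the real values are in the statement provided on your Blockmetry admin page. The statement goes inside the existing "Statement" array, alongside any statements already there.

```json
{
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
  "Action": "sts:AssumeRole",
  "Condition": {
    "StringEquals": { "sts:ExternalId": "EXAMPLE-EXTERNAL-ID" }
  }
}
```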
This completes setting up the Kinesis Firehose delivery stream for your Blockmetry data.
Amazon Elasticsearch Service destination details
In the Select destination form, the details are:
For Domain, choose an existing one or create new.
For Index, type in bmanalytics. Choose your preferred index rotation. For small volumes of data, we recommend the No rotation option.
For Type, type in bmrecord. For Retry duration, the default value of 300 seconds is fine. You can change it if you need to.
Continue with the instructions above; when you reach the Configure settings page, the next set of details is below.
The details for the Configure settings page are:
In the Elasticsearch buffer conditions, the defaults (5 MB and 300 seconds) are fine. If you shorten the buffer, you will receive the Blockmetry data sooner, but that may not be an optimal use of resources. (The sketch after this list shows how these settings map to the Firehose API.)
For S3 compression and encryption, we recommend choosing a compression format (GZIP has very wide support and offers good compression rates). Choose encryption as per your data security policies.
Error logging: The default option (Enabled) is recommended.
IAM role: We recommend creating a new one and giving it a descriptive Role Name in the form that's shown after you click the button.
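If you prefer scripting the setup to clicking through the console, the settings above map onto the Firehose API's Elasticsearch destination configuration. The sketch below shows roughly what the input to the AWS CLI's aws firehose create-delivery-stream command would look like under the recommendations in this document; the stream name, role, domain, and bucket ARNs are placeholders you would replace with your own values.

```json
{
  "DeliveryStreamName": "EXAMPLE-STREAM-NAME",
  "ElasticsearchDestinationConfiguration": {
    "RoleARN": "arn:aws:iam::123456789012:role/EXAMPLE-FIREHOSE-ROLE",
    "DomainARN": "arn:aws:es:eu-west-1:123456789012:domain/EXAMPLE-DOMAIN",
    "IndexName": "bmanalytics",
    "TypeName": "bmrecord",
    "IndexRotationPeriod": "NoRotation",
    "RetryOptions": { "DurationInSeconds": 300 },
    "BufferingHints": { "SizeInMBs": 5, "IntervalInSeconds": 300 },
    "S3BackupMode": "FailedDocumentsOnly",
    "S3Configuration": {
      "RoleARN": "arn:aws:iam::123456789012:role/EXAMPLE-FIREHOSE-ROLE",
      "BucketARN": "arn:aws:s3:::EXAMPLE-BACKUP-BUCKET",
      "CompressionFormat": "GZIP"
    }
  }
}
```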
When you're done, go back to the setup tab and click Next.
You will be shown the final review page. Click Next to create the firehose and destination.
Continue in the general instructions above.