Set up your ADO Demo using an AWS S3 Connection


Author: Noel Rocher, Partner Success Manager at Anaplan

The problem statement

"I would like to Demo the ADO S3 Connector with a CSV file hosted on AWS S3 as an ADO Data Source."

The solution

Use a free AWS Account to host the example CSV file.

Steps to take

We will walk through how to set up an AWS environment using a free-tier AWS account, with an AWS S3 Bucket (a namespace) to upload and host the example CSV file. Then, we'll define an AWS User that ADO will use to access the file as a Data Source.

Don't be put off by the number of screenshots; it's a simple setup!

Step 1: Create a free AWS account

Go to AWS Free Tier. You will need a credit card, but no charges will be incurred below the free tier usage limits (which can be seen here).

1 Free Tier = Always Free + 12 Months Free + Free Trials

IMPORTANT: the Amazon S3 service is not part of the Always Free tier (it is only free for the first 12 months). However, for a demo with small files, the cost will remain negligible (at the time this article was written). Advice: don't forget to delete your Buckets after a demo.

Let's get started.

Step 2: Create an AWS S3 Bucket and add the CSV file

Once logged in to the AWS Console, open the Services list using the icon at the top left next to the AWS logo and select the S3 service. Below are the steps to follow to create the myadotests Bucket where I will upload the CSV file.

This completes the file upload into the AWS S3 Bucket myadotests.
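If you prefer to script this step instead of clicking through the console, here is a minimal sketch using the AWS SDK for Python (boto3). The bucket name myadotests comes from this example; the file name products.csv and the eu-west-1 region are assumptions for illustration, not part of the original walkthrough.

import boto3

REGION = "eu-west-1"          # assumption: pick the region closest to you
BUCKET = "myadotests"         # the example bucket name used in this article
LOCAL_FILE = "products.csv"   # hypothetical local CSV file to host for the demo

s3 = boto3.client("s3", region_name=REGION)

# Create the bucket (outside us-east-1, a LocationConstraint is required)
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Upload the demo CSV file into the bucket
s3.upload_file(LOCAL_FILE, BUCKET, "products.csv")

# List the bucket contents to confirm the upload
response = s3.list_objects_v2(Bucket=BUCKET)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

Remember that S3 bucket names are globally unique, so you may need to pick your own variation of the name.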

Step 3: Create the AWS user for ADO and grant access

It is safer to create a dedicated AWS User for ADO and give it the correct permissions policy, which is mandatory for the ADO S3 Connector to work. Then create an Access Key for this AWS User to get the credentials required to configure the ADO S3 Connector.

Important note: Select Show on the right of the Secret Access Key and save it for later use. It will not be possible to view it again after this step. Alternatively, you can click “Download .csv file”.

Here's the policy content to copy/paste.

Important note: Make sure to replace myadotests (the example bucket name used here) with your own bucket name.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::myadotests"
            ]
        },
        {
            "Sid": "AllS3WithinBucket",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::myadotests/*"
            ]
        }
    ]
}
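For reference, the same user, inline policy, and Access Key can also be created programmatically. This is a minimal sketch with boto3, not part of the original console walkthrough; the user name ado-s3-demo and policy name ado-s3-demo-policy are assumptions, and the policy document mirrors the JSON above.

import json
import boto3

iam = boto3.client("iam")

USER_NAME = "ado-s3-demo"   # hypothetical user name for the ADO connector
BUCKET = "myadotests"       # replace with your own bucket name

# Same permissions as the policy shown above
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{BUCKET}"],
        },
        {
            "Sid": "AllS3WithinBucket",
            "Effect": "Allow",
            "Action": ["s3:*"],
            "Resource": [f"arn:aws:s3:::{BUCKET}/*"],
        },
    ],
}

# Create the dedicated user and attach the policy inline
iam.create_user(UserName=USER_NAME)
iam.put_user_policy(
    UserName=USER_NAME,
    PolicyName="ado-s3-demo-policy",
    PolicyDocument=json.dumps(policy_document),
)

# Create the Access Key; save the secret now, it cannot be retrieved later
key = iam.create_access_key(UserName=USER_NAME)["AccessKey"]
print("Access Key ID:    ", key["AccessKeyId"])
print("Secret Access Key:", key["SecretAccessKey"])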

Last step: Define the ADO Data Source

Open the ADO application and define a Connection using the ADO S3 Connector. Then, define a Data Source based on this Connection to retrieve the data from the CSV file you've uploaded into the AWS Bucket.

Define the S3 Connection:

Define the Data Source:

Here we are! Data is now inside ADO as a Dataset for use in Transformation Views, etc…
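If the Data Source does not load, a quick way to rule out AWS-side issues is to test the credentials and permissions outside ADO: list the bucket and read the file, which is exactly what the connector needs to do. A minimal sketch, assuming boto3 and the example names used above (the object key products.csv is hypothetical):

import boto3

# Credentials created in Step 3 (placeholders; do not hard-code real secrets)
s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

BUCKET = "myadotests"     # replace with your own bucket name
KEY = "products.csv"      # hypothetical object key uploaded in Step 2

# s3:ListBucket check
for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    print("found:", obj["Key"])

# s3:GetObject check: read and print the first few lines of the CSV
body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read().decode("utf-8")
print(body.splitlines()[:5])

If both checks pass, any remaining issue is in the ADO Connection or Data Source configuration rather than on the AWS side.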

Questions? Leave a comment!