Author: Noel Rocher, Partner Success Manager at Anaplan
The problem statement
"I would like to Demo the ADO S3 Connector with a CSV file hosted on AWS S3 as an ADO Data Source."
The solution
Use a free AWS Account to host the example CSV file.
Steps to take
We will walk through how to set up an AWS environment using a free-tier AWS account with an AWS S3 Bucket (a namespace) to upload and host the example CSV file. Then, we'll define an AWS User that ADO will use to access the file as a Data Source.
Don't be put off by the number of screenshots; it's a simple setup!
Step 1: Create a free AWS account
Go to AWS Free Tier. You will need a credit card, but no charges will be incurred as long as you stay below the free tier usage limits (which can be seen here).
Free Tier = Always Free + 12 Months Free + Free Trials
IMPORTANT: the Amazon S3 service is not part of the Always Free tier (it is only free for the first 12 months). However, for a demo with small files, the cost will be negligible (at the time this article was written). Advice: don't forget to delete your Buckets after a demo.
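If you prefer to script the cleanup rather than click through the console, here is a minimal sketch using Python and boto3 (my choice of SDK; any AWS SDK works), assuming the example bucket name myadotests used later in this article and that your AWS credentials are already configured locally:

import boto3

# Assumed bucket name from this walkthrough; replace with your own.
BUCKET_NAME = "myadotests"

s3 = boto3.resource("s3")
bucket = s3.Bucket(BUCKET_NAME)

# A bucket must be empty before it can be deleted,
# so remove every object first, then the bucket itself.
bucket.objects.all().delete()
bucket.delete()

print(f"Deleted bucket {BUCKET_NAME}")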
Let's get started.
Step 2: Create an AWS S3 Bucket and add the CSV file
Once logged in to the AWS Console, open the Services list using the icon at the top left next to the AWS logo and select the S3 service. Below are the steps to follow to create the myadotests Bucket where I will upload the CSV files.
From your AWS Console choose the S3 Service and click on the Create Bucket button.
Then create your new Bucket by providing a name (here myadotests) and click on the Create button.
WARNING: it has been reported that you can experience an "Unable to open file(s) in path." message when defining Source data from the S3 connection in ADO. This is due to the delay while AWS deploys the Policy we will attach for access to the Bucket. Choosing the US East AWS region for your bucket appears to decrease the likelihood of encountering this issue during your tests.
Now that I've got my new Bucket, let's upload my CSV file (here SYS08 Employee Details.csv).
I click on the Bucket name and see the page below.
The upload was successful, so I can close the page.
I can now see the file in my Bucket.
This completes the file upload into the AWS S3 Bucket myadotests.
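For those who prefer to script this step, here is a minimal sketch using Python and boto3, assuming your AWS credentials are configured locally, the bucket name myadotests, and the example file name used above:

import boto3

BUCKET_NAME = "myadotests"  # replace with your own bucket name
FILE_NAME = "SYS08 Employee Details.csv"

s3 = boto3.client("s3")

# Bucket names are globally unique; this call fails if the name
# is already taken. In regions other than us-east-1, also pass a
# CreateBucketConfiguration with a LocationConstraint.
s3.create_bucket(Bucket=BUCKET_NAME)

# Upload the CSV file, keeping the file name as the object key.
s3.upload_file(FILE_NAME, BUCKET_NAME, FILE_NAME)

print(f"Uploaded {FILE_NAME} to s3://{BUCKET_NAME}/")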
Step 3: Create the AWS user for ADO and grant access
It is safer to create a dedicated AWS User for ADO with the correct profile, which is mandatory for the ADO S3 Connector to work. We will then create an Access Key for this AWS User to obtain the credentials required to configure the ADO S3 Connector.
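The whole of this step can also be scripted, as a sketch. Here is a minimal version using Python and boto3, assuming your own admin credentials are configured locally, a hypothetical user name ado-s3-user (use whatever name you choose below), and the policy JSON shown at the end of this step:

import json
import boto3

iam = boto3.client("iam")
USER_NAME = "ado-s3-user"    # hypothetical name; use your own
BUCKET_NAME = "myadotests"   # replace with your bucket name

# The same policy JSON provided at the end of this step.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{BUCKET_NAME}"],
        },
        {
            "Sid": "AllS3WithinBucket",
            "Effect": "Allow",
            "Action": ["s3:*"],
            "Resource": [f"arn:aws:s3:::{BUCKET_NAME}/*"],
        },
    ],
}

# Create the dedicated user, its inline policy, and an access key.
iam.create_user(UserName=USER_NAME)
iam.put_user_policy(
    UserName=USER_NAME,
    PolicyName="ado-s3-access",
    PolicyDocument=json.dumps(policy),
)
key = iam.create_access_key(UserName=USER_NAME)["AccessKey"]

# Save these: the secret is only returned once, here as well.
print("Access key ID:", key["AccessKeyId"])
print("Secret access key:", key["SecretAccessKey"])

If you'd rather follow the console screenshots below, read on.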
Let's go to the Security Credentials page from the menu that appears when you click on your account name at the top right of the page.
Then click on Create User.
Define a name and click Next.
Select the Attach policies directly option.
Click Next.
Then Create User.
Here we go. We now have our dedicated user for ADO access.
Next, we create an Access Key which is mandatory to configure the ADO AWS S3 Connector.
Click on the newly created user to open the page showing its details.
Then click on Create access key.
Select the Application running outside AWS option then click Next.
Optionally enter a name, then click on Create access key.
The access key is now created.
Important note: make sure to save the secret key after clicking on Show, as it will not be possible to see it again; otherwise, you'll need to create a new key.
The secret key is mandatory to configure the ADO AWS S3 Connector.
On the User page, the Access Key now appears.
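To confirm the new key pair works before configuring ADO, here is a minimal sketch using Python and boto3, assuming the access key ID and secret you just saved (no permissions are required for this particular call):

import boto3

# Paste the credentials saved when the access key was created.
sts = boto3.client(
    "sts",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

# Returns the account and user ARN the key authenticates as.
identity = sts.get_caller_identity()
print("Authenticated as:", identity["Arn"])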
The last part of this section is to define the access policy attached to the user. On the same page, click on Add permissions, then Create inline policy, as we will provide the JSON description of the policy.
On the Specify permissions page, click on the JSON tab (in blue below).
Copy/paste the definition I'm providing below. In the JSON text, make sure to replace my bucket name with yours. Then click Next.
Give a name to the new policy and click on the Create policy button.
We are all set!
Here's the policy content to copy/paste.
Important note: Make sure to replace myadotests (the example bucket name used here) with your own bucket name.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::myadotests"
      ]
    },
    {
      "Sid": "AllS3WithinBucket",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::myadotests/*"
      ]
    }
  ]
}
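Once the policy is attached (and after the propagation delay mentioned in the warning in Step 2), you can verify that the dedicated user can actually reach the file. Here is a minimal sketch using Python and boto3, assuming the myadotests bucket and the dedicated user's credentials:

import boto3

BUCKET_NAME = "myadotests"  # replace with your own bucket name

# Use the dedicated ADO user's credentials, not your admin ones,
# so the test exercises the inline policy defined above.
s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

# s3:ListBucket covers this call; s3:* covers object reads.
response = s3.list_objects_v2(Bucket=BUCKET_NAME)
for obj in response.get("Contents", []):
    print(obj["Key"])

If this call fails with an AccessDenied error, wait a little while and retry; the policy may still be propagating.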
Last step: Define the ADO Data Source
In the ADO application, define a Connection using the ADO S3 Connector. Then, define a Data Source based on this Connection to retrieve the data from the CSV file you've uploaded into the AWS Bucket.
Define the S3 Connection:
Define the data source
Here we are! The data is now inside ADO as a Dataset, ready for use in Transformation Views, etc.
Questions? Leave a comment!