Set up your ADO Demo using an AWS S3 Connection
Author: Noel Rocher, Partner Success Manager at Anaplan
The problem statement
"I would like to Demo the ADO S3 Connector with a CSV file hosted on AWS S3 as an ADO Data Source."
The solution
Use a free AWS Account to host the example CSV file.
Steps to take
We will walk through how to set up an AWS environment using a free tier AWS account with an AWS S3 Bucket (a namespace) to upload and host the example CSV file. Then, we'll define an AWS User to use in ADO to access the file as a Data Source.
Don't be put off by the number of screenshots; it's a simple setup!
Step 1: Create a free AWS account
Go to AWS Free Tier. You will need a credit card, but no charges are incurred as long as you stay below the free tier usage limits (listed on the AWS Free Tier page).
Free Tier = Always Free + 12 Months Free + Free Trials
IMPORTANT: The Amazon S3 service is not part of the Always Free tier (but it is free for the first 12 months). However, for a demo with small files, the cost will be negligible in any case (at the time this article was written). Advice: don't forget to delete your Buckets after a demo; a scripted way to do that is sketched below.
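For later, once your demo is done: that cleanup can be scripted. Here is a minimal sketch using boto3 (the AWS SDK for Python); it assumes your local AWS credentials are configured and uses the myadotests example bucket name from the steps that follow.

import boto3

# A bucket must be empty before it can be deleted,
# so remove every object first, then the bucket itself.
s3 = boto3.resource("s3")
bucket = s3.Bucket("myadotests")  # replace with your own bucket name
bucket.objects.all().delete()
bucket.delete()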
Let's get started.
Step 2: Create an AWS S3 Bucket and add the CSV file
Once logged in to the AWS Console, open the Services list using the icon at the top left next to the AWS logo and select the S3 service. Below are the steps to follow to create the myadotests Bucket, where I will upload the CSV files.
This completes the file upload into the AWS S3 Bucket myadotests.
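If you'd rather script these console steps, the same bucket creation and upload can be done with boto3. A minimal sketch, where the us-east-1 region and the local file name employees.csv are illustrative assumptions (the article doesn't name the CSV file):

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Bucket names are globally unique, so use your own name here.
# In us-east-1 no CreateBucketConfiguration is needed; in any other
# region you must pass one with a LocationConstraint.
s3.create_bucket(Bucket="myadotests")

# Upload the CSV that will later back the ADO Data Source.
s3.upload_file("employees.csv", "myadotests", "employees.csv")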
Step 3: Create the AWS user for ADO and grant access
It is safer to create a dedicated AWS User for ADO, with the correct permissions profile, which is mandatory for the ADO S3 Connector to work. Create an Access Key for this AWS User to obtain the credentials required to configure the ADO S3 Connector.
Important note: Select Show to the right of the Secret Access Key and save it for later use. It will not be possible to see it again after this step. Alternatively, you can click “Download .csv file”.
Here's the policy content to copy/paste.
Important note: Make sure to replace myadotests (the example bucket name used here) with your own bucket name.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::myadotests"
      ]
    },
    {
      "Sid": "AllS3WithinBucket",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::myadotests/*"
      ]
    }
  ]
}
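This IAM setup can also be scripted with boto3. In the sketch below, the user name ado-demo and the policy name ado-s3-demo are illustrative assumptions (the article doesn't name them), and the policy document is the one shown above, again with myadotests to be replaced by your own bucket name. Note that put_user_policy attaches the policy inline, which for this demo is equivalent to creating a managed policy in the console.

import json
import boto3

iam = boto3.client("iam")

# The same policy document as shown above.
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "ListObjectsInBucket", "Effect": "Allow",
         "Action": ["s3:ListBucket"],
         "Resource": ["arn:aws:s3:::myadotests"]},
        {"Sid": "AllS3WithinBucket", "Effect": "Allow",
         "Action": ["s3:*"],
         "Resource": ["arn:aws:s3:::myadotests/*"]},
    ],
})

iam.create_user(UserName="ado-demo")
iam.put_user_policy(UserName="ado-demo", PolicyName="ado-s3-demo",
                    PolicyDocument=policy)

# The secret is returned only once -- save both values for the ADO
# Connection, just like the console's "Download .csv file" step.
key = iam.create_access_key(UserName="ado-demo")["AccessKey"]
print(key["AccessKeyId"])
print(key["SecretAccessKey"])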
Last step: Define the ADO Data Source
In the ADO application, define a Connection using the ADO S3 Connector. Then define a Data Source based on this Connection to retrieve the data from the CSV file you've uploaded into the AWS Bucket.
Define the S3 Connection:
Define the Data Source:
Here we are! Data is now inside ADO as a Dataset for use in Transformation Views, etc…
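One troubleshooting tip: if the Data Source fails with "Unable to open file(s) in path." (an error several readers report in the comments below), you can sanity-check the credentials and bucket outside ADO. A minimal boto3 sketch, where the key values are placeholders for the credentials from Step 3 and employees.csv is the assumed example file name:

import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",   # placeholder: Access Key ID from Step 3
    aws_secret_access_key="...",   # placeholder: Secret Access Key from Step 3
)

# Listing is allowed by the ListObjectsInBucket statement of the policy.
for obj in s3.list_objects_v2(Bucket="myadotests").get("Contents", []):
    print(obj["Key"], obj["Size"])

# Reading is allowed by the AllS3WithinBucket statement.
body = s3.get_object(Bucket="myadotests", Key="employees.csv")["Body"]
print(body.read(200))  # first 200 bytes of the CSV

If this script succeeds but ADO still reports the error, check the Connection itself and the bucket's AWS Region (see the comments below).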
Questions? Leave a comment!
Comments
- I am getting an error while creating the Data Extract. It says "Unable to open file(s) in path." What is the probable reason for this error? I have followed everything up to this point. Any help would be appreciated.
- Hi @RiddhiBose. It could be several things. Don't hesitate to open a support case at support@anaplan.com; the team will be able to identify the cause with you.
- I'm getting the same error as @RiddhiBose. I even tried my existing AWS credentials that I currently use without issue in Cloudworks, but I get the same error.
- Fixed it!! I spoke to a colleague who had the same issue, and it appears that renaming the connection causes issues. So I deleted the existing connection, created it from scratch, and it now works.
- @Noel Rocher (aka Xmasrock) Thanks for putting this out there, I appreciate it. One piece of feedback: the screenshots are quite hazy (it's difficult to make anything out, especially in the dark ones). Having said that, I was able to create the connection successfully. Thanks once again.
- Thanks @Misbah. I'll probably refactor the article soon to get better screenshots.
- I was also having the "Unable to open file(s) in path." issue but managed to find a solution that worked for me. When I first created the bucket, it appeared with Europe (Stockholm) as the AWS Region; this was the default setting for me. When I navigated to the file I had uploaded to the bucket, I wasn't seeing my name as the owner, but instead a long code. I changed the AWS Region to US East (N. Virginia) in the upper right corner and re-created the bucket (with a slightly different name) in this region. Then I uploaded the file into this bucket, updated the bucket name in the policy, and made the needed connection changes in ADO. After this I was able to retrieve the file without issues.