Setup your ADO Demo using an AWS S3 Connection

NoelR
edited January 14 in Best Practices

Author: Noel Rocher, Partner Success Manager at Anaplan

The problem statement

"I would like to Demo the ADO S3 Connector with a CSV file hosted on AWS S3 as an ADO Data Source."

The solution

Use a free AWS Account to host the example CSV file.

Steps to take

We will walk through how to set up an AWS environment using a free tier AWS account with an AWS S3 Bucket (a namespace) to upload and host the example CSV file. Then, we'll define an AWS User that ADO will use to access the file as a Data Source.

Don't be put off by the number of screenshots; it's a simple setup!

Step 1: Create a free AWS account

Go to AWS Free Tier. You will need a credit card, but no charges will be incurred as long as you stay below the free tier usage limits (which can be seen here).

1 Free Tier = Always Free + 12 Months Free + Free Trials

IMPORTANT: the Amazon S3 service is not part of the Always Free tier (but it is free for the first 12 months). However, a demo with small files will not generate significant charges (at the time this article was written). Advice: don't forget to delete your Buckets after a demo.

Let's get started.

Step 2: Create an AWS S3 Bucket and add the CSV file

Once logged in to the AWS Console, open the Services list using the icon at the top left next to the AWS logo and select the S3 service. Below are the steps to follow to create the myadotests Bucket where I will upload the CSV file.

From your AWS Console, choose the S3 Service and click on the Create Bucket button.

Then create your new Bucket by providing a name (here myadotests) and click on the Create button.

Pay attention to the AWS Region, as it is reported in the comments below that some Regions lead to an error. US Regions are working well; choose a US AWS Region until the issue with the other AWS Regions is fixed.
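If you're unsure which Region an existing Bucket was created in, you can check it programmatically. Below is a minimal sketch using Python and boto3 (my choice of tooling here, not something the ADO Connector requires), reusing the example bucket name myadotests:

import boto3

s3 = boto3.client("s3")

# get_bucket_location returns None for us-east-1, otherwise the region code
resp = s3.get_bucket_location(Bucket="myadotests")  # replace with your bucket name
print("Bucket region:", resp["LocationConstraint"] or "us-east-1")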

Now that I've got my new Bucket, let's upload my CSV file (here, SYS08 Employee Details.csv).

I click on the Bucket name and see the page below.

The upload was successful, so I can close the page.

I can now see the file in my Bucket.

This completes the file upload into the AWS S3 Bucket myadotests.
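As an aside, if you prefer scripting over console clicks, the same Bucket creation and upload can be done with Python and boto3. This is only a sketch under a few assumptions: your AWS credentials are already configured locally, and it reuses the example names from this article.

import boto3

# Use a US region, per the advice in Step 2 above
s3 = boto3.client("s3", region_name="us-east-1")

# S3 bucket names are globally unique, so replace "myadotests" with your own.
# In us-east-1, create_bucket is called without a CreateBucketConfiguration;
# any other region requires one with a LocationConstraint.
s3.create_bucket(Bucket="myadotests")

# Upload the demo CSV file into the new Bucket
s3.upload_file("SYS08 Employee Details.csv", "myadotests", "SYS08 Employee Details.csv")

# Confirm the file is there
for obj in s3.list_objects_v2(Bucket="myadotests").get("Contents", []):
    print(obj["Key"], obj["Size"])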

Step 3: Create the AWS user for ADO and grant access

For security, it is best to create a dedicated AWS User for ADO with the correct permissions, which are mandatory for the ADO S3 Connector to work. We will then create an Access Key for this AWS User to obtain the credentials required to configure the ADO S3 Connector.

Let's go to the Security credentials page from the menu that appears when you click on your account name at the top right of the page.

Then click on Create User.

Define a name and click Next.

Select the Attach policies directly option.

Click Next.

Then Create User.

Here we go. We now have our dedicated user for ADO access.

Next, we create an Access Key, which is mandatory to configure the ADO AWS S3 Connector.

Click on the newly created user to open the page showing its details.

Then click on Create access key.

Select the Application running outside AWS option then click Next.

Optionally enter a name, then click on Create access key.

The access key is now created.

Important note: make sure to save the secret key after clicking on Show, as it will not be possible to see it again; if you lose it, you'll need to create a new key.

The secret key is mandatory to configure the ADO AWS S3 Connector.

On the User page, the Access Key now appears.
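For completeness, the user and Access Key creation can also be scripted. Here's a minimal sketch with Python and boto3; the user name ado-demo-user is a hypothetical example, and your own IAM identity needs permission to manage users:

import boto3

iam = boto3.client("iam")

# Create the dedicated user for ADO (example name)
iam.create_user(UserName="ado-demo-user")

# Create the access key; the secret is returned only once,
# so save it somewhere safe immediately.
key = iam.create_access_key(UserName="ado-demo-user")["AccessKey"]
print("Access key ID:", key["AccessKeyId"])
print("Secret access key:", key["SecretAccessKey"])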

The last part of this section is to define the access policy attached to the user. On the same page, click on Add permissions, then Create inline policy, as we will provide the JSON description of the policy.

On the Specify permissions page, click on the JSON tab (in blue below).

Copy/paste the definition I'm providing below. In the JSON text, make sure to replace my bucket name with yours. Then click Next.

Give a name to the new policy and click on the Create policy button.

We are all set!

Here's the policy content to copy/paste.

Important note: Make sure to replace myadotests (the example bucket name used here) with your own bucket name.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::myadotests"
      ]
    },
    {
      "Sid": "AllS3WithinBucket",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::myadotests/*"
      ]
    }
  ]
}
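As an alternative to the console steps, the same inline policy can be attached with boto3's put_user_policy, and the new credentials sanity-checked before configuring ADO. Again a sketch, reusing the hypothetical user name ado-demo-user and the example bucket myadotests:

import json
import boto3

POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "ListObjectsInBucket", "Effect": "Allow",
         "Action": ["s3:ListBucket"], "Resource": ["arn:aws:s3:::myadotests"]},
        {"Sid": "AllS3WithinBucket", "Effect": "Allow",
         "Action": ["s3:*"], "Resource": ["arn:aws:s3:::myadotests/*"]},
    ],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="ado-demo-user",    # example name; replace with yours
    PolicyName="ado-s3-access",  # example policy name
    PolicyDocument=json.dumps(POLICY),
)

# Verify: list the Bucket with the new user's credentials
s3 = boto3.client(
    "s3",
    aws_access_key_id="<ACCESS_KEY_ID>",          # from the previous step
    aws_secret_access_key="<SECRET_ACCESS_KEY>",
)
print([o["Key"] for o in s3.list_objects_v2(Bucket="myadotests").get("Contents", [])])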

Last step: Define the ADO Data Source

Open the ADO application and define a Connection using the ADO S3 Connector. Then, define a Data Source based on this Connection to retrieve the data from the CSV file you've uploaded into the AWS Bucket.

Define the S3 Connection:

Define the data source

Here we are! Data is now inside ADO as a Dataset for use in Transformation Views, etc…

Questions? Leave a comment!

Comments

  • RiddhiBose
    edited October 2024

    I am getting an error while creating the Data Extract.
    It says "Unable to open file(s) in path."
    What is the probable cause of this error?
    I have followed everything up to this point.
    Any help would be appreciated.

  • Hi @RiddhiBose. It could be several things.
    Don't hesitate to open a support case at support@anaplan.com; the team will be able to identify the cause with you.

  • I'm getting the same error as @RiddhiBose. I even tried my existing AWS credentials that I currently use without issue in CloudWorks, but I get the same error.

  • Fixed it!! I spoke to a colleague who had the same issue, and it appears that renaming the connection causes issues. So I deleted the existing connection, created it from scratch, and it now works.

  • @Noel Rocher (aka Xmasrock) Thanks for putting this out there, appreciate it. One piece of feedback: the screenshots are quite hazy (it's difficult to make out anything in them, especially the dark ones).

    Having said that, I was able to create the connection successfully. Thanks once again.

  • Thanks @Misbah. I'll probably refactor the article soon to get better screenshots.

  • I was also having the "Unable to open file(s) in path." issue but managed to find a solution that worked for me.

    When I first created the bucket, it appeared with Europe (Stockholm) as the AWS Region; this was the default setting for me. When I navigated to the file I had uploaded to the bucket, I wasn't seeing my name as the owner, but a long code instead.

    I changed the AWS Region to US East (N. Virginia) at the upper right corner and re-created the bucket (with a slightly different name) in this Region. Then I uploaded the file into this bucket, updated the bucket name in the policy, and made the needed connection changes in ADO. After this I was able to retrieve the file without issues.

  • I had a hard time setting this up yesterday and would like to second the request to update the screenshots. I was stuck (and more than a little frustrated) a couple of times yesterday because I couldn't clearly see what is on the screenshots.

    I also had the "Unable to open file(s) in path." error and hadn't seen these comments yesterday. Today, having read them, I deleted my AWS connection in Data Orchestrator, restarted, and everything worked. So a big thank-you to those who called that out.

    Any chance this setup guide could include a note about where to stop if you're setting this up for the ADO training (which would be at the "Define the data source" section)?

  • @Misbah @atwise: I've updated the AWS setup screenshots. Can you confirm it is now OK? Feel free to add comments.

  • Screenshots are much clearer, thanks!

  • @NoelR Better now!

    Thanks,

    Miz