I am trying to connect to Azure Blob. The credentials work in CloudWorks but do not work in ADO. Has anyone else experienced this issue?
Hello @jstockwell,
In CloudWorks, the Azure Blob connection uses a SAS token for credentials, whereas in ADO, Azure Blob uses an Azure Secret Key, which cannot be the SAS token. Are you able to create a secret key with the proper permissions to access the Azure Blob container and test your connection in ADO?

For reference on the ADO Azure Blob connection: https://help.anaplan.com/import-data-from-azure-blob-storage-c1c54d4d-c645-4ffe-98e6-ee8675b1894a

Future state: the ADO Azure Blob connection will be enhanced with the ability to use OAuth 2.0 to connect to Azure Blob.
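If it helps to rule out a permissions problem before troubleshooting ADO itself, a quick local test with the Azure SDK for Python can confirm that the storage account access key (as opposed to the SAS token) can list the container. This is only a minimal sketch; the account name, key, and container name are placeholders for your own values, not anything ADO-specific:

```python
# Minimal check that a storage account *access key* (not a SAS token)
# can reach the container, using the same read/list permissions ADO needs.
# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

ACCOUNT_NAME = "mystorageaccount"        # placeholder: your storage account
ACCOUNT_KEY = "<storage-account-key>"    # placeholder: key1/key2 from the portal
CONTAINER = "my-container"               # placeholder: the container ADO will read

service = BlobServiceClient(
    account_url=f"https://{ACCOUNT_NAME}.blob.core.windows.net",
    credential=ACCOUNT_KEY,
)

# If this lists the blobs without an authentication error, the key and
# permissions are fine and the issue is in the ADO connection setup.
for blob in service.get_container_client(CONTAINER).list_blobs():
    print(blob.name)
```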
@JonFerneau,
We have a client that can only connect to Azure Blob using either Secure Access via RBAC AuthN+Z or Secured Network Access via Secure Tunnel / IP Whitelisting, per internal governance guidelines.
Are either of these options available for establishing an ADO-to-Azure Blob connection, or will they need to seek another route to get their data into ADO?
@DavidEdwards, currently Azure Blob does not support a secure tunnel connection. They can restrict access to their Azure Blob container to just the Anaplan IP addresses, but I am not sure whether this meets their requirements. Azure Blob also now supports OAuth within ADO, so the use of a SAS token is no longer required for the connection.
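If the client does go the IP-restriction route, the storage account firewall can be limited to an allow-list of addresses. Below is a rough sketch using the Azure management SDK; the subscription, resource group, account name, and the IP range shown are placeholders (the actual addresses to allow would be Anaplan's published egress IPs), and the same change can of course be made in the portal or with the Azure CLI instead:

```python
# Sketch: restrict a storage account firewall to a set of allowed IP ranges.
# pip install azure-identity azure-mgmt-storage
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import (
    StorageAccountUpdateParameters, NetworkRuleSet, IPRule,
)

SUBSCRIPTION_ID = "<subscription-id>"    # placeholder
RESOURCE_GROUP = "my-resource-group"     # placeholder
ACCOUNT_NAME = "mystorageaccount"        # placeholder
ALLOWED_IPS = ["203.0.113.0/24"]         # placeholder: use Anaplan's published IPs

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

client.storage_accounts.update(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    StorageAccountUpdateParameters(
        network_rule_set=NetworkRuleSet(
            default_action="Deny",  # block everything not explicitly allowed
            ip_rules=[IPRule(ip_address_or_range=ip) for ip in ALLOWED_IPS],
        )
    ),
)
```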