Data Integration

Note: While all of these scripts have been tested and found to be fully functional, due to the vast number of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction.

Getting Started

Python 3 offers many options for interacting with an API. This article explains how you can use Python 3 to automate many of the requests that are available in our apiary, which can be found at https://anaplan.docs.apiary.io/#.

This article assumes you have the requests (version 2.18.4), base64, and json modules installed, as well as Python version 3.6.4. Please make sure you are installing these modules with Python 3, and not for an older version of Python. For more information on these modules, please see their respective websites:

Python (if you are using a Python version older or newer than 3.6.4, or a requests version older or newer than 2.18.4, we cannot guarantee the validity of this article)
Requests
Base Converter
JSON (Note: install instructions are not at this site but will be the same as any other Python module)

Note: Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions that each script makes.

Authentication

To start, let's talk about authentication. Every script run that connects to our API is required to supply valid authentication. There are two ways to authenticate a Python script that I will be covering:

Certificate Authentication
Basic Encoded Authentication

Certificate authentication requires that you have a valid Anaplan certificate, which you can read more about here. Once you have your certificate saved locally, to make your Anaplan certificate usable with the API, you will first need openssl. Once you have that, convert the certificate to PEM format by running the following code in your terminal:

openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem

If you are using certificate authorization, the scripts in this article assume you know the Anaplan account email associated with the certificate. If you do not know it, you can extract the common name (CN) from the PEM file by running the following code in your terminal:

openssl x509 -text -in certtest.pem

To be used with the API, the PEM certificate string needs to be converted to base64, but the scripts we will be covering take care of that for you, so I won't cover that in this section.

To use basic authentication, you will need to know the Anaplan account email that is being used, as well as the password.

All scripts in this article have the following code near the top:

# Insert the Anaplan account email being used
username = ''

# If using cert auth, replace cert.pem with your pem converted certificate
# filename. Otherwise, remove this line.
cert = open('cert.pem').read()

# If using basic auth, insert your password. Otherwise, remove this line.
password = ''

# Uncomment your authentication method (cert or basic). Remove the other.
user = 'AnaplanCertificate ' + str(base64.b64encode((
    f'{username}:{cert}').encode('utf-8')).decode('utf-8'))

# user = 'Basic ' + str(base64.b64encode((f'{username}:{password}'
# ).encode('utf-8')).decode('utf-8'))

Regardless of authentication method, you will need to set the username variable to the Anaplan account email being used.
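For reference, here is a minimal sketch (not part of the downloadable scripts) of how the user string built above is typically passed to the requests module in the calls that follow; the getHeaders and postHeaders variable names are illustrative only.

import requests

# 'user' is the Authorization string built in the snippet above (cert or basic)
getHeaders = {'Authorization': user}
postHeaders = {'Authorization': user, 'Content-Type': 'application/json'}

# Example: list the workspaces the authenticated user has access to
workspaces = requests.get('https://api.anaplan.com/1/3/workspaces/', headers=getHeaders)
print(workspaces.json())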
If you are using a certificate to authenticate, you will need to have your PEM converted certificate in the same folder, or a child folder, of the one you are running the scripts from. If your certificate is in a child folder, please remember to include the file path when replacing cert.pem (e.g. cert/cert.pem). You can remove the password line, its comments, and its respective user variable. If you are using basic authentication, you will need to set the password variable to your Anaplan account password, and you can remove the cert line, its comments, and its respective user variable.

Getting the Information Needed for Each Script

Most of the scripts covered in this article require you to know an ID or metadata for the file, action, etc., that you are trying to process. Each script that gets this information for its respective fields is titled get_____.py. For example, if you want to get your file metadata, you'll run getFiles.py, which will write the file metadata for each file in the selected model in the selected workspace, in an array, to a JSON file titled files.json. You can then open the JSON file, find the file you need to reference, and use the metadata from that entry in your other scripts. (A simplified example of this pattern is sketched just before the Multiple Chunk Uploads section below.)

TIP: If you open the raw data tab of the JSON file it makes it much easier to copy the whole set of metadata.

The following are the links to download each get____.py script. Each get script uses the requests.get method to send a get request to the proper API endpoint.

getWorkspaces.py: Writes an array to workspaces.json of all the workspaces the user has access to.
getModels.py: Writes an array to models.json of either all the models a user has access to if wGuid is left blank, or all of the models the user has access to in a selected workspace if a workspace ID was inserted.
getModelInfo.py: Writes an array to modelInfo.json of all metadata associated with the selected model.
getFiles.py: Writes an array to files.json of all metadata for each file the user has access to in the selected model and workspace. (Please refer to the Apiary for more information on private vs default files. Generally it is recommended that all scripts be run via the same user account.)
getChunkData.py: Writes an array to chunkData.json of all metadata for each chunk of the selected file in the selected model and workspace.
getImports.py: Writes an array to imports.json of all metadata for each import in the selected model and workspace.
getExports.py: Writes an array to exports.json of all metadata for each export in the selected model and workspace.
getActions.py: Writes an array to actions.json of all metadata for all actions in the selected model and workspace.
getProcesses.py: Writes an array to processes.json of all metadata for all processes in the selected model and workspace.

Uploads

A file can be uploaded to the Anaplan API endpoint either in multiple chunks or as a single chunk. Per our apiary:

We recommend that you upload files in several chunks. This enables you to resume an upload that fails before the final chunk is uploaded. In addition, you can compress files on the upload action. We recommend compressing single chunks that are larger than 50MB. This creates a Private File.

Note: To upload a file using the API, that file must already exist in Anaplan. If the file has not been previously uploaded, you must upload it initially using the Anaplan user interface. You can then carry out subsequent uploads of that file using the API.
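As an illustration of the get____.py pattern described above, here is a minimal sketch of a getWorkspaces-style script using basic authentication. It is a simplified stand-in for the downloadable script, not a copy of it:

import requests
import base64
import json

# Anaplan account email and password (basic authentication)
username = ''
password = ''

user = 'Basic ' + base64.b64encode(f'{username}:{password}'.encode('utf-8')).decode('utf-8')
getHeaders = {'Authorization': user}

# Send the get request to the workspaces endpoint
response = requests.get('https://api.anaplan.com/1/3/workspaces/', headers=getHeaders)

# Write the array of workspace metadata to workspaces.json
with open('workspaces.json', 'w') as f:
    json.dump(response.json(), f, indent=4)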
Multiple Chunk Uploads

The script we have for reference is built so that if the script is interrupted for any reason, or if any particular chunk of a file fails to upload, simply rerunning the script will start uploading the file again, starting at the last successful chunk. For this to work, the file must be initially split using a standard naming convention, using the terminal command below.

split -b [numberofBytes] [path and filename] [prefix for output files]

You can store the file in any location as long as you use the proper file path when setting the chunkFilePrefix (e.g. chunkFilePrefix = 'upload_chunks/chunk-' will look for file chunks named chunk-aa, chunk-ab, chunk-ac, etc., up to chunk-zz, in the folder script_origin/upload_chunks/. It is very unlikely that you will ever exceed chunk-zz). This lets the script know where to look for the chunks of the file to upload.

You can download the script for running a multiple chunk upload from this link: chunkUpload.py

Note: The assumed naming conventions will only be standard if using Terminal, and they do not necessarily apply if the file was split using another method in Windows. If you are using Windows, you will need to either create a way to standardize the naming of the chunks alphabetically {chunkFilePrefix}(aa - zz) or run the script as detailed in the Apiary.

Note: The chunkUpload.py script keeps track of the last successful chunk by writing the name of the last successful chunk to a .txt file, chunkStop.txt. This file is deleted once the import completes successfully. If the file is modified in between runs of the script, the script may not function correctly. Best practice is to leave the file alone, and delete it if you want to start the upload from the first chunk.

Single Chunk Upload

The single chunk upload should only be used if the file is small enough to upload in a reasonable time frame. If the upload fails, it will have to start again from the beginning. If your file has a different name than the version on the server, you will need to modify line 31 ("name" : '') to reflect the name of the local file. This script runs a single put request to the API endpoint to upload the file.

You can download the script for running a single chunk upload from this link: singleChunkUpload.py

Imports

The import.py script sends a post request to the API endpoint for the selected import. You will need to set the importData value to the metadata for the import. See Getting the Information Needed for Each Script for more information.

You can download the script for running an import from this link: Import.py

Once the import is finished, the script will write the metadata for the import task, in an array, to postImport.json, which you can use to verify which task you want to view the status of while running the importStatus.py script. The importStatus.py script will return a list of all tasks associated with the selected importID and their respective list index. If you want to check the status of the last run import, make sure you are checking postImport.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status to an array in the file importStatus.json. If the task is still in progress, it will print the task status and progress. If the task finished and a failure dump is available, it will write the failure dump in comma delimited format to importDump.csv, which can be used to review the cause of the failure.
If the task finished with no failures, you will get a message telling you the import has completed with no failures.

You can download the script for importStatus.py from this link: importStatus.py

Note: If you check the status of a task with an old taskID for an import that has been run since you last checked it, the dump will no longer exist and importDump.csv will be overwritten with an HTTP error, and the status of the task will be 410 Gone.

Exports

The export.py script sends a post request to the API endpoint for the selected export. You will need to set the exportData value to the metadata for the export. See Getting the Information Needed for Each Script for more information.

You can download the script for running an export from this link: Export.py

Once the export is finished, the script will write the metadata for the export task, in an array, to postExport.json, which you can use to verify which task you want to view the status of while running the exportStatus.py script. The exportStatus.py script will return a list of all tasks associated with the selected exportID and their respective list index. If you want to check the status of the last run export, make sure you are checking postExport.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status to an array in the file exportStatus.json. If the task is still in progress, it will print the task status and progress. It is important to note that no failure dump will be generated if the export fails.

You can download the script for exportStatus.py from this link: exportStatus.py

Actions

The action.py script sends a post request to the API endpoint for the selected action (for use with actions other than imports or exports). You will need to set the actionData value to the metadata for the action. See Getting the Information Needed for Each Script for more information.

You can download the script for running an action from this link: Action.py

Once the action is finished, the script will write the metadata for the action task, in an array, to postAction.json, which you can use to verify which task you want to view the status of while running the actionStatus.py script. The actionStatus.py script will return a list of all tasks associated with the selected actionID and their respective list index. If you want to check the status of the last run action, make sure you are checking postAction.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status to an array in the file actionStatus.json. It is important to note that no failure dump will be generated in the case of a failed action.

You can download the script for actionStatus.py from this link: actionStatus.py

Processes

The process.py script sends a post request to the API endpoint for the selected process. You will need to set the processData value to the metadata for the process. See Getting the Information Needed for Each Script for more information.

You can download the script for running a process from this link: Process.py

Once the process is finished, the script will write the metadata for the process task, in an array, to postProcess.json, which you can use to verify which task you want to view the status of while running the processStatus.py script. The processStatus.py script will return a list of all tasks associated with the selected processID and their respective list index.
If you want to check the status of the last run process, make sure you are checking postProcess.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status to an array in the file processStatus.json. If the task is still in progress, it will print the task status and progress. If the task finished and a failure dump is available, it will write the failure dump in comma delimited format to processDump.csv, which can be used to review the cause of the failure. It is important to note that no failure dump will be generated for the process itself, only if one of the imports in the process failed. If the task finished with no failures, you will get a message telling you the process has completed with no failures.

You can download the script for processStatus.py from this link: processStatus.py

Downloading a File

Downloading a file from the Anaplan API endpoint will download the file in however many chunks it exists in on the endpoint. It is important to note that you should set the variable fileName to the name the file has in the file metadata. First, the individual chunk metadata for the download is written, in an array, to downloadChunkData.json for reference. The script will then download the file chunk by chunk and write each chunk to a new local file with the same name as the 'name' listed in the file's metadata.

You can download this script from this link: downloadFile.py

Note: If a file already exists in the same folder as your script with the same name as the name value in the file's metadata, the local file will be overwritten by the file being downloaded from the server.

Deleting a File

You can delete the file contents of any file that the user has access to that exists on the Anaplan server.

Note: This only removes private content. Default content and the import data source model object will remain.

You can download this script from this link: deleteFile.py

Standalone Requests Code and Their Required Headers

In this section, I will list the code for each request detailed above, including the API URL and the headers necessary to complete the call. I will be leaving the content to the right of the Authorization: headers blank. Authorization header values can be either Basic encoded_username:password or AnaplanCertificate encoded_CommonName:PEM_Certificate_String (see Certificate-Authorization-Using-the-Anaplan-API for more information on encoded certificates).

Note: requests.get will only generate a response body from the server, and no data will be locally saved unless written to a local file.

Get Workspaces List

requests.get('https://api.anaplan.com/1/3/workspaces/', headers={'Authorization': })

Get Models List

requests.get('https://api.anaplan.com/1/3/models/', headers={'Authorization': })

or

requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models', headers={'Authorization': })

Get Model Info

requests.get(f'https://api.anaplan.com/1/3/models/{mGuid}', headers={'Authorization': })

Get Files/Imports/Exports/Actions/Processes List

The get requests for files, imports, exports, actions, and processes are largely the same. Change files to imports, exports, actions, or processes to run each.
requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files', headers={'Authorization': })

Get Chunk Data

requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks', headers={'Authorization': })

Post Chunk Count

requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData})

Upload a Chunk of a File

requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local chunk file})

Mark an Upload Complete

requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/complete', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData})

Upload a File in a Single Chunk

requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local file})

Run an Import/Export/Process

The post requests for imports, exports, and processes are largely the same. Change imports to exports or processes to run each.

requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{Id}/tasks', headers={'Authorization': , 'Content-Type': 'application/json'}, data=json.dumps({'localeName': 'en_US'}))

Run an Action

requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/actions/{Id}/tasks', data={'localeName': 'en_US'}, headers={'Authorization': , 'Content-Type': 'application/json'})

Get Task List for an Import/Export/Action/Process

The get requests for import, export, action, and process task lists are largely the same. Change imports to exports, actions, or processes to get each task list.

requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{importID}/tasks', headers={'Authorization': })

Get Status for an Import/Export/Action/Process Task

The get requests for import, export, action, and process task statuses are largely the same. Change imports to exports, actions, or processes to get each task status.

Note: Only imports and processes will ever generate a failure dump.

requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{ID}/tasks/{taskID}', headers={'Authorization': })

Download a File

Note: You will need to get the chunk metadata for each chunk of a file you want to download.

requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkID}', headers={'Authorization': , 'Accept': 'application/octet-stream'})

Delete a File

Note: This only removes private content. Default content and the import data source model object will remain.

requests.delete('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/json'})

Note: SFDC user administration is not covered in this article, but the same concepts from the scripts provided can be applied to SFDC user administration. For more information on SFDC user administration, see the apiary entry for SFDC user administration.
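To tie the standalone requests above together, below is a simplified, hedged sketch of a single-chunk upload followed by an import and a status check. The workspace, model, file, and import IDs are placeholders you would look up with the get scripts, 'user' is the Authorization string from the Authentication section, and the response field names (taskId, taskState) reflect the typical apiary response shape and should be verified against the apiary for your API version.

import requests
import time

# Placeholder IDs - look these up with the get scripts above
wGuid = ''
mGuid = ''
fileID = ''
importID = ''

putHeaders = {'Authorization': user, 'Content-Type': 'application/octet-stream'}
postHeaders = {'Authorization': user, 'Content-Type': 'application/json'}
getHeaders = {'Authorization': user}

base = f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}'

# 1. Upload the file in a single chunk (the file must already exist in Anaplan)
with open('data.csv', 'rb') as f:
    requests.put(f'{base}/files/{fileID}', headers=putHeaders, data=f.read())

# 2. Run the import
task = requests.post(f'{base}/imports/{importID}/tasks',
                     headers=postHeaders, json={'localeName': 'en_US'}).json()
taskID = task['taskId']  # field name assumed from the apiary response

# 3. Poll the task status until it finishes
while True:
    status = requests.get(f'{base}/imports/{importID}/tasks/{taskID}',
                          headers=getHeaders).json()
    if status.get('taskState') == 'COMPLETE':  # field/value assumed from the apiary
        break
    time.sleep(5)
print(status)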
This article describes how to use the Anaplan DocuSign integration with single sign-on (SSO).
Anaplan Connect v1.3.3.5 is now available. 
We're pleased to announce version 1.6 of the DocuSign for Anaplan integration. This release includes the following enhancements.

Access the Header Bar from DocuSign for Anaplan

You can now access the header bar directly from DocuSign for Anaplan. The header bar provides access to models, applications, notifications, and personal settings. The following tasks are now performed from the User menu, in the top-right corner:

Change Account: Switch between different DocuSign accounts.
Change Workspace/Model: Switch between different models.
Logout: Log out of Anaplan.

Previously, these options were located in the workspace and email address dropdowns, which have been removed.

Map Percentages from Anaplan

We've added the ability to include percentage amounts in DocuSign documents populated with Anaplan data. When you're creating and editing envelopes, you can now map columns containing numeric line items formatted as percentages (including decimal amounts). You can map percentages to any Text tag in your DocuSign template.

Updated Certificates

DocuSign will renew its DocuSign Connect X.509 certificates in May 2018. To support this change, we've updated the SSL certificates used by DocuSign for Anaplan. You don't need to take any action.

Known Issues & Workarounds

The following issues will be addressed in a future release:

Issue: The status of a sent document does not change to Complete after all recipients have signed it in DocuSign and clicked Complete.
Workaround: Refresh your browser to update the status.

Issue: If two or more of your DocuSign accounts are part of the same group, DocuSign information (for example, envelopes) from both accounts is available to logged-in users who connect the integration to any account in the group. You'll only encounter this issue if you have multiple DocuSign accounts organized using groups.
Workaround: None.

Issue: Selecting an envelope in the Create a new workflow wizard can take up to 20 seconds if multiple envelopes exist.
Workaround: None.

Issue: Leading and trailing spaces are not excluded from search queries. For example, searching for "envelope one " doesn't return an envelope named "envelope one".
Workaround: Check for leading and trailing spaces when searching for envelopes, DocuSign Workflows, or sent documents.

Issue: The column dropdowns in step 2 of the Create a new envelope wizard do not close as expected.
Workaround: None.

Issue: Workspace administrators can update envelope status in the Anaplan module. Non workspace administrators cannot update status, although the integration will work and documents will be sent out.
Workaround: None.

Issue: When reusing a DocuSign Workflow with the same module, certain document actions such as "decline" or "cancel" do not overwrite the columns used to track the envelope status.
Workaround: Clear the columns used to track the envelope status.

Issue: When an existing envelope is edited to use a different DocuSign template and the workflow that used the envelope is subsequently edited, users are unable to save the edited workflow. When this occurs, nothing happens when you click the Save button on a workflow.
Workaround: When you need to work with a different DocuSign template, create a new envelope rather than editing an existing one.

Issue: If you attempt to add more recipients to an existing DocuSign Workflow by changing the template, the workflow cannot be saved.
Workaround: Create a new Envelope Workflow with the required template.

Useful Resources

Browse the DocuSign for Anaplan section of Anapedia. Take the DocuSign Integration training course.
We're pleased to announce the February 2018 release of the Anaplan Connector for Informatica Cloud. This release fixes Success/Error row counts in the Monitor Log for Data Synchronization Tasks (DST).

Exports

Anaplan List exports: Success rows is the number of Anaplan List rows exported. The Error row count should be 0.
Anaplan Module exports: Success rows is the number of Anaplan Module rows exported. The Error row count should be 0.

Imports

Anaplan List imports: Success rows is the sum of the number of rows successfully updated/inserted and the number of rows updated/inserted with warning. The Error row count is the number of failed rows.
Anaplan Module imports: Success rows is the sum of the number of Anaplan cells successfully updated/inserted and the number of Anaplan cells updated/inserted with warning. Error rows is the number of failed Anaplan cells.

Note: Cells ignored by the Anaplan Import action are not included in the above counts. For example, during a Module Import, any parent hierarchy level cells will be ignored.

For more information, see the Anaplan Informatica Connector Guide.
Audience: Anaplan Internal and Customers/Partners

Workiva Wdesk Integration Is Now Available

We are excited to announce the general availability of Anaplan's integration with Workiva's product, known as Wdesk. Wdesk easily imports planning, analysis, and reporting data from Anaplan to deliver integrated narrative reporting, compliance, planning, and performance management on the cloud. The platform is utilized by over 3,000 organizations for SEC reporting, financial reporting, SOX compliance, and regulatory reporting.

The Workiva and Anaplan partnership delivers enterprise compliance and performance management on the cloud. Workiva Wdesk, the leading narrative reporting cloud platform, and Anaplan, the leading connected-planning cloud platform, offer reliable, secure integration to address high-value use cases in the last mile of finance, financial planning and analysis, and industry-specific regulatory compliance.

GA Launch: March 5th

How does the Workiva Wdesk integration work?

Please contact Will Berger, Partnerships (william.berger@workiva.com) from Workiva to discuss how to enable the integration. Anaplan reports will feed into the Wdesk platform. Wdesk will integrate with Anaplan via Wdesk Connected Sheets. This is a Workiva built and maintained connection.

What use cases are supported by the Workiva Wdesk integration?

The Workiva Wdesk integration supports a number of use cases, including:

Last mile of finance: Complete regulatory reporting and filing as part of the close, consolidate, report, and file process. Workiva automates and structures the complete financial reporting cycle and pulls consolidated actuals from Anaplan.

Financial planning and analysis: Complex multi-author, narrative reports that combine extensive commentary and data, such as budget books, board books, briefing books, and other FP&A management and internal reports. Workiva creates timely, reliable narrative reports pulling actuals, targets, and forecast data from Anaplan.

Industry-specific regulatory compliance and extensive support of XBRL and iXBRL: Workiva is used to solve complex compliance and regulatory reporting requirements in a range of industries. In banking, Workiva supports documentation processes such as CCAR, DFAST, and RRP, pulling banking stress test data from Anaplan. Workiva is also the leading provider of XBRL software and services, accounting for more than 53% of XBRL facts filed with the SEC in the first quarter of 2017.
We're pleased to announce the release of DocuSign version 1.5. In this release, we've introduced support for Single Sign-on (SSO) authentication. This means that DocuSign SSO users can authenticate the Anaplan DocuSign Integration with their single sign-on login, rather than entering their DocuSign password every time. Also, the Anaplan DocuSign Integration now supports all Anaplan data types with the exception of the "List" data type.

See DocuSign for Anaplan for more information about the Anaplan DocuSign Integration. See the DocuSign SSO Authentication article for information about using the Anaplan DocuSign integration with single sign-on (SSO). To find out more about DocuSign SSO authentication, see the Overview section of the DocuSign Single Sign-on Implementation Guide.

Known Issues & Workarounds

Issue: Anaplan Workspace Administrators can update Envelope status in the Anaplan module. Non-admins cannot update status, although the Integration will work and documents will be sent out.
Workaround: There is no workaround.

Issue: When reusing a Workflow with the same module, certain document actions such as "decline" or "cancel" do not overwrite the columns used to track the envelope status.
Workaround: Clear the columns used to track the envelope status.

Issue: When an existing Envelope is edited to use a different DocuSign template and the workflow that used the Envelope is subsequently edited, users are unable to save the edited Workflow. When this occurs, nothing happens when you click the Save button on a Workflow.
Workaround: When you need to work with a different DocuSign.com template, create a new Envelope rather than editing an existing one.

Issue: If you attempt to add more recipients to an existing Envelope Workflow by changing the template, the workflow cannot be saved.
Workaround: Create a new Envelope Workflow with the required template.
This release includes several fixes for the initial General Availability (GA) release of the DocuSign for Anaplan integration.

Reference 80171: Envelopes don't save if you have large amounts of documents, recipients, and tags.
- When you send an envelope, the Sent Documents tab automatically opens but does not show the correct Sent Date or Signed Date.
- If you reuse an envelope with a document template that has more recipients than previously, the updated number of recipients is not shown in step two of the Create a new Workflow wizard.
- When you preview an envelope, the preview table shows all data in the Anaplan view regardless of any filters that are applied.
Reference 76104, 76628, 78151, 78355, 78370, 79139: Users are unable to save DocuSign Workflows.
Reference 79139, 80171: The signing status of sent envelopes, when tracked from the integration or Anaplan, doesn't update if the document template contains multiple Text tags.
Reference 79379, 81581: If you delete a module whose views are linked to one or more envelopes, the Workflows and Envelopes tabs no longer display existing DocuSign Workflows and envelopes.

Known Issues

We're working hard to resolve the following known issues with the integration in the next release.

Issue: When editing an envelope, step two of the edit wizard doesn't show the saved mappings between Anaplan and DocuSign.
Workaround: In step two, map every column to a document tag.

Issue: The status of a sent document does not change to Complete after all recipients have signed it in DocuSign and clicked Complete.
Workaround: Refresh your browser to update the status.

Issue: If an Anaplan view linked to a DocuSign Workflow doesn't contain any data, the Sent Documents tab is empty and you see the error: "This run either has no details or does not exist". This issue occurs if all line item data is filtered out of the view.
Workaround: Add some data to the view, or remove one or more filters.

Issue: The Sent Documents tab is empty if a DocuSign Workflow is edited to use a different document template, module, or view.
Workaround: None.

Issue: After editing an envelope to use a different view of the same module, you can't then edit the linked DocuSign Workflow to use the same view as the envelope. In this case, the Continue button in the edit wizard is disabled.
Workaround: In the edit wizard, select the view you want and then map every Anaplan column header to a role or action. The Continue button is enabled.

Issue: Envelope signing status is not updated in Anaplan if the first recipient declines the document. The Update Anaplan button is disabled.
Workaround: If the second recipient also declines, the signing status is updated as expected.

Issue: Envelope signing status is not updated in Anaplan if the first recipient signs the document and the second recipient declines the document. The Update Anaplan button is disabled.
Workaround: None.

Accessing the Integration

You can access the integration from the Application menu, at the top left corner. The Application menu is part of the new header bar that is now available in both tiles view and in models. If the header bar isn't enabled for your organization, please continue to access the integration at https://docusign.anaplan.com/dsn

Useful Resources

Browse the DocuSign for Anaplan section of Anapedia. Take the DocuSign Integration training course.
This guide assumes you have set up your runtime environment in Informatica Cloud (Anaplan Hyperconnect) and that the agent is up and running. It focuses solely on how to configure the ODBC connection and set up a simple synchronization task importing data from one table in PostgreSQL to Anaplan. Informatica Cloud has richer features that are not covered in this guide. The built-in help is contextual and helpful as you go along, should you need more information than I have included here. The intention of this guide is to help you set up a simple import from PostgreSQL to Anaplan, so it is kept short and does not cover all related areas.

This guide also assumes you have run an import using a csv file, as this needs to be referenced when the target connection is set up, described under section 2.2 below. To prepare, I exported the data I wanted to use for the import from PostgreSQL to a csv file. I then mapped this csv file to Anaplan and ran an initial import to create the import action that is needed.

1. Set up the ODBC connection for PostgreSQL

In this example I am using the 64-bit version of the ODBC connection running on my local laptop. I have set it up for User DSN rather than System DSN, but the process is very similar should you need to set up a System DSN. You will need to download the relevant ODBC driver from PostgreSQL and install it to be able to add it to your ODBC Data Sources (click the Add... button and you should be able to select the downloaded driver).

Clicking the configuration button for the ODBC Data Source opens the configuration dialogue. The configurations needed are:

Database is the name of your PostgreSQL database.
Server is the address of your server. As I am setting this up on my laptop, it's localhost.
User Name is the username for the PostgreSQL database.
Password is the password for the PostgreSQL database.
Port is the port used by PostgreSQL. You will find this if you open PostgreSQL.

Testing the connection should not return any errors.

2. Configuring source and target connections

After setting up the ODBC connection as described above, you will need to set up two connections, one to PostgreSQL and one to Anaplan. Follow the steps below to do this.

2.1 Source connection – PostgreSQL ODBC

Select Configure > Connection in the menu bar to configure a connection.

Name your connection and add a description.
Select type – ODBC.
Select the runtime environment that will be used to run this. In this instance I am using my local machine.
Insert the username for the database (the same as you used to set up the ODBC connection).
Insert the password for the database (the same as you used to set up the ODBC connection).
Insert the data source name. This is the name of the ODBC connection you configured earlier.
The code page needs to correspond to the character set you are using.

Testing the connection should confirm that it succeeded. If so, you can click Done.

2.2 Set up target connection – Anaplan

The second connection that needs to be set up is the connection from Informatica Cloud to Anaplan.

Name your connection and add a description if needed.
Select type – AnaplanV2.
Select the runtime environment that will be used to run this. In this instance I am using my local machine.
Auth type – I am using Basic Auth, which requires your Anaplan user credentials.
Insert the Anaplan username.
Insert the Anaplan password.
Certification Path location – leave blank if you use Basic Auth.
Insert the workspace ID (open your Anaplan model and select Help and About).
Insert the model ID (found in the same way as the workspace ID).
I have left the remaining fields at their default settings.

Testing the connection should not return any errors.

3. Task wizard – Data synchronization

The next step is to set up a data synchronization task to connect the PostgreSQL source to the Anaplan target. Select Task Wizards in the menu bar and navigate to Data Synchronization.

This will open the task wizard, starting with defining the Data Synchronization task. Name the task and select the relevant task operation. In this example I have selected Insert, but other task operations are available, like Update and Upsert.

Click Next for the next step in the workflow, which is to set up the connection to the source. Start by selecting the connection you defined above under section 2.1. In this example I am using a single table as source and have therefore selected single source. With this connection you can select the source object with the Source Object dropdown. This will give you a data preview so you can validate that the source is defined correctly. The source object corresponds to the table you are importing from.

The next step is to define the target connection, using the connection that was set up under section 2.2 above. The target object is the import action that you created from the csv file in the preparation step described under section 1 above. The wizard will show a preview of the target module columns.

The next step in the process is Data Filters, which has both a Simple and an Advanced mode. I am not using any data filters in this example; please refer to the built-in help for further information on how to use them.

In the field mapping you will either need to map manually or have the fields automatically mapped, depending on whether the names in the source and target correspond. If you map manually, you will need to drag and drop the fields from the source to the target. Once done, select Validate Mapping to check that no errors are generated from the mapping.

The last step is to define whether or not to use a schedule to run the connection. You will also have the option to insert pre-processing and post-processing commands and any parameters for your mapping. Please refer to the built-in help for guidance on this.

After running the task, the activity log will confirm whether the import ran without errors or warnings.

As I mentioned initially, this is a simple guide to help you set up a simple, single source import. Informatica Cloud does have more advanced options as well, both for mappings and transformations.
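Before wiring the DSN into Informatica Cloud, it can be useful to verify it from outside Informatica. The following is an optional, hedged Python sketch using the pyodbc module (not part of this guide); the DSN name, credentials, and table name are placeholders for whatever you configured in section 1.

import pyodbc

# DSN name as configured in the ODBC Data Source Administrator (placeholder values)
conn = pyodbc.connect('DSN=PostgreSQL35W;UID=postgres;PWD=yourpassword')
cursor = conn.cursor()

# Query a few rows from the source table used in the synchronization task (placeholder name)
cursor.execute('SELECT * FROM source_table LIMIT 5')
for row in cursor.fetchall():
    print(row)

conn.close()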
The Connect Manager is a tool that allows non-technical users to create Anaplan Connect scripts from scratch simply by walking through a step-by-step wizard.

Features include:
- Create scripts for the import/export of flat files
- Create scripts for importing from JDBC/ODBC sources
- Ability to choose between commonly used JDBC connections – New in v4
- Run scripts from the new Connection Manager interface – New in v4
- Ability to use certificate authentication

Please note that this program is currently only supported on Windows systems and requires .NET 4.5 or newer to run (.NET has been included in the download package). The Connect Manager is approved by Anaplan for general release; however, it is not supported by Anaplan. If there are any specific enhancements you want to see in the next version, please leave a comment or send me an email at graham.gronhoff@anaplan.com.

Download the Anaplan Connect Wizard here.
Summary

Anaplan Connect is a command-line client to the Anaplan cloud-based planning environment. It is a Java-based utility that is able to perform a variety of commands, such as uploading and downloading data files, executing relational SQL queries (for loading into Anaplan), and running Anaplan actions and processes. To enhance the deployment of Anaplan Connect, it is important to be able to integrate the trapping of error conditions, enable the ability to retry the Anaplan Connect operation, and integrate email notifications. This article provides best practices on how to incorporate these capabilities.

This article leverages the standard Windows command line batch script and documents the various components and syntax of the script. In summary, the script has the following main components:

Set variable values such as exit codes, Anaplan Connect login parameters, and operations and email parameters
Run commands prior to running Anaplan Connect commands
Main loop block for multiple retries
Establish a log file based upon the current date and loop number
Run the native Anaplan Connect commands
Search for string criteria to trap error conditions
Branching logic based upon the discovery of any trapped error conditions
Send email success or failure notification of Anaplan Connect run status
Logic to determine if a retry is required
End main loop block
Run commands post to running Anaplan Connect commands
Exit the script

Section #1: Setting Script Variables

The following section of the script establishes and sets variables that are used in the script. The first three lines perform the following actions:

Clears the screen
Sets the default to echo all commands
Indicates to the operating system that variable values are strictly local to the script

The variables used in the script are as follows:

ERRNO – Sets the exit code to 0 unless set to 1 after multiple failed retries
COUNT – Counter variable used for looping multiple retries
RETRY_COUNT – Counter variable to store the max retry count (note: the /a switch indicates a numeric value)
AnaplanUser – Anaplan login credentials in the format indicated in the example
WorkspaceId – Anaplan numerical or named Workspace ID
ModelId – Anaplan numerical or named Model ID
Operation – A combination of Anaplan Connect commands. It should be noted that a ^ can be used to enhance readability by indicating that the current command continues on the next line
Domain – Email base domain. Typically in the format of company.com
Smtp – Email SMTP server
User – Email SMTP server user ID
Pass – Email SMTP server password
To – Target email address(es). To increase the email distribution, simply add additional -t and the email addresses as in the example
From – From email address
Subject – Email subject line. Note that this is dynamically set later in the script
cls
echo on
setlocal enableextensions

REM **** SECTION #1 - SET VARIABLE VALUES ****
set /a ERRNO=0
set /a COUNT=0
set /a RETRY_COUNT=2

REM Set Anaplan Connect Variables
set AnaplanUser="<<Anaplan UserID>>:<<Anaplan UserPW>>"
set WorkspaceId="<<put your WS ID here>>"
set ModelId="<<put your Model ID here>>"
set Operation=-import "My File" -execute ^
 -output ".\My Errors.txt"

REM Set Email variables
set Domain="spg-demo.com"
set Smtp="spg-demo"
set User="fpmadmin@spg-demo.com"
set Pass="1Rapidfpm"
set To=-t "fpmadmin@spg-demo.com" -t "gburns@spg-demo.com"
set From="fpmadmin@spg-demo.com"
set Subject="Anaplan Connect Status"

REM Set other types of variables such as file path names to be used in the Anaplan Connect "Operation" command

Section #2: Pre Custom Batch Commands

The following section allows custom batch commands to be added, such as running various batch operations like copying and renaming files or running stored procedures via a relational database command line interface.

REM **** SECTION #2 - PRE ANAPLAN CONNECT COMMANDS ***
REM Use this section to perform standard batch commands or operations prior to running Anaplan Connect

Section #3: Start of Main Loop Block / Anaplan Connect Commands

The following section of the script is the start of the main loop block, as indicated by the :START label. The individual components break down as follows:

Dynamically set the name of the log file in the following date format, indicating the current loop number: 2016-16-06-ANAPLAN-LOG-RUN-0.TXT
Delete prior log and error files
Run the native out-of-the-box Anaplan Connect script, with the addition of outputting the Anaplan Connect run session to the dynamic log file, as highlighted here: cmd /C %Command% > .\%LogFile%

REM **** SECTION #3 - ANAPLAN CONNECT COMMANDS ***
:START

REM Dynamically set logfile name based upon current date and retry count.
set LogFile="%date:~-4%-%date:~7,2%-%date:~4,2%-ANAPLAN-LOG-RUN-%COUNT%.TXT"

REM Delete prior log and error files
del .\BAT_STAT.TXT
del .\AC_API.ERR

REM Out-of-the-box Anaplan Connect code with the exception of sending output to a log file
setlocal enableextensions enabledelayedexpansion || exit /b 1
REM Change the directory to the batch file's drive, then change to its folder
cd %~dp0
if not %AnaplanUser% == "" set Credentials=-user %AnaplanUser%
set Command=.\AnaplanClient.bat %Credentials% -workspace %WorkspaceId% -model %ModelId% %Operation%
@echo %Command%
cmd /C %Command% > .\%LogFile%

Section #4: Set Search Criteria

The following section of the script enables trapping of error conditions that may occur when running the Anaplan Connect script. The methodology relies upon searching for certain strings in the log file after the AC commands execute. The batch command findstr can search for certain string patterns based upon literal or regular expressions and echo any matched records to the file AC_API.ERR. The existence of this file is then used to trap whether an error has been caught. In the example below, two different patterns are searched for in the log file. The output file AC_API.ERR is always produced, even if there is no matching string. When there is no matching string, the file will be an empty 0K file. Since the existence of the file determines if an error condition was trapped, it is imperative that any 0K files are removed, which is the function of the final line in the example below.
REM **** SECTION #4 - SET SEARCH CRITERIA - REPEAT @FINDSTR COMMAND AS MANY TIMES AS NEEDED ***
@findstr /c:"The file" .\%LogFile% > .\AC_API.ERR
@findstr /c:"Anaplan API" .\%LogFile% >> .\AC_API.ERR

REM Remove any 0K files produced by previous findstr commands
@for /r %%f in (*) do if %%~zf==0 del "%%f"

Section #5: Trap Error Conditions

In the next section, logic is incorporated into the script to trap errors that might have occurred when executing the Anaplan Connect commands. The branching logic relies upon the existence of the AC_API.ERR file. If it exists, then the contents of the AC_API.ERR file are redirected to a secondary file called BAT_STAT.TXT and the email subject line is updated to indicate that an error occurred. If the file AC_API.ERR does not exist, then the contents of the Anaplan Connect log file are redirected to BAT_STAT.TXT and the email subject line is updated to indicate a successful run. Later in the script, the file BAT_STAT.TXT becomes the body of the email alert.

REM **** SECTION #5 - TRAP ERROR CONDITIONS ***
REM If the file AC_API.ERR exists then echo errors to the primary BAT_STAT log file
REM Else echo the log file to the primary BAT_STAT log file
@if exist .\AC_API.ERR (
  @echo . >> .\BAT_STAT.TXT
  @echo *** ANAPLAN CONNECT ERROR OCCURED *** >> .\BAT_STAT.TXT
  @echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
  type .\AC_API.ERR >> .\BAT_STAT.TXT
  @echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
  set Subject="ANAPLAN CONNECT ERROR OCCURED"
) else (
  @echo . >> .\BAT_STAT.TXT
  @echo *** ALL OPERATIONS COMPLETED SUCCESSFULLY *** >> .\BAT_STAT.TXT
  @echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
  type .\%LogFile% >> .\BAT_STAT.TXT
  @echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
  set Subject="ANAPLAN LOADED SUCCESSFULLY"
)

Section #6: Send Email

In this section of the script, a success or failure notification email will be sent. The parameters for sending are all set in the variable section of the script.

REM **** SECTION #6 - SEND EMAIL VIA MAILSEND ***
@mailsend -domain %Domain% ^
 -smtp %Smtp% ^
 -auth -user %User% ^
 -pass %Pass% ^
 %To% ^
 -f %From% ^
 -sub %Subject% ^
 -msg-body .\BAT_STAT.TXT

Note: Sending email via SMTP requires the use of a free and simple Windows program known as MailSend. The latest release is available here: https://github.com/muquit/mailsend/releases/. Once downloaded, unpack the .zip file, rename the file to mailsend.exe, and place the executable in the same directory where the Anaplan Connect batch script is located.

Section #7: Determine if a Retry is Required

This is one of the final sections of the script and determines whether the Anaplan Connect commands need to be retried. Nested IF statements are typically frowned upon but are required here given the limited capabilities of the Windows batch language. The first IF test determines if the file AC_API.ERR exists. If this file does exist, then the logic drops in and tests if the current value of COUNT is less than the RETRY_COUNT. If the condition is true, then COUNT gets incremented and the batch returns to the :START location (Section #3) to repeat the Anaplan Connect commands. If the condition of the nested IF is false, then the batch goes to the end of the script to exit with an exit code of 1.
REM **** SECTION #7 - DETERMINE IF A RETRY IS REQUIRED ***
@if exist .\AC_API.ERR (
  @if %COUNT% lss %RETRY_COUNT% (
    @set /a COUNT+=1
    @goto :START
  ) else (
    set /a ERRNO=1
    @goto :END
  )
) else (
  set /a ERRNO=0
)

Section #8: Post Custom Batch Commands

The following section allows custom batch commands to be added, such as running various batch operations like copying and renaming files, or running stored procedures via a relational database command line interface. Additionally, this would be the location to add functionality to bulk insert flat file data exported from Anaplan into a relational target via tools such as Oracle SQL Loader (SQLLDR) or Microsoft SQL Server Bulk Copy (BCP).

REM **** SECTION #8 - POST ANAPLAN CONNECT COMMANDS ***
REM Use this section to perform standard batch commands or operations after running Anaplan Connect commands

:END
exit /b %ERRNO%

Sample Email Notifications

The following are sample emails sent by the batch script, based upon the sample script in this document. Note how the needed content from the log files is piped directly into the body of the email.

Success Mail:
Error Mail:
This article outlines the requirements for Anaplan Technology Partners who want to integrate with Anaplan using Anaplan v2.0 REST APIs.

Use Cases

The following use cases are covered:

Allow users to run integrations from the partner technology or application, with or without an external integration tool, to move data to and from Anaplan.
Provide the ability to import data into Anaplan for planning and dashboarding and extract the planning results from Anaplan into the partner technology or application.
Provide the ability to extract data from Anaplan modules and lists or import data into Anaplan modules and lists.
Provide the ability to extract data from Anaplan into the partner technology or application to run specific planning scenarios or calculations.

Requirements

To integrate with Anaplan:

Users must have a license for the partner technology and credentials to log in to Anaplan. Basic authentication and certificate authentication methods are supported.
Users must have Import and/or Export actions configured in Anaplan, or have the ability to create these actions in Anaplan.

Assumptions

Technology partners are familiar with Anaplan modelling concepts and Anaplan APIs. Information can be found on anaplan.com, help.anaplan.com, Anaplan Academy, and the Anaplan API reference material.
Anaplan supports the deletion of items from very long lists using the Delete from List using Selection action. This can be invoked via a REST API.
Import chunks are between 1 MB and 50 MB in size.
Export chunks are 10 MB in size.
Anaplan data exports and imports will run in batch mode.
All Anaplan exports will be generated as .csv or .txt (tab delimited) files. Anaplan imports will similarly accept .csv or .txt formatted data.
All data movements will follow the format and rules defined in Anaplan actions.

Constraints

Users can create an Anaplan Process to chain multiple Import/Export actions together and execute them in sequence. However, some functionality is not supported, e.g. files will not be output to the UI for Export actions.

Not in Scope

Process action support is not required.
OAuth.
File types other than .csv and tab-delimited files.
Changes to the Anaplan UI, login mechanism, or Anaplan APIs.

Guidelines

Authentication

Support for Basic Authentication (user name and password).
Support for Certificate Authentication (uploading an x509 cert).
A custom header will be sent in the header for every API call to Anaplan to uniquely identify the partner technology and its version. For example, format "{Partner Prod name} {version}".

Behavior

The partner technology or application must allow users to log in to Anaplan with credentials and present a list of Export or Import actions for the user to select from:

Get the workspaces that the user has access to (present the workspace name, not the ID).
Get the models that the user has access to (present the model name, not the ID).
Workspace and model are used in the URL for other endpoints.

Export and Import actions

Based on the Workspace and Model selected, present the Export/Import actions, with the name, to the user for selection. This list of actions will match what is presented in the Anaplan UI. Each action is associated with a Module or List in Anaplan. Execute the Export/Import action; by posting a task against the action, the action is run.

Export action: getting the file

Assuming that the task succeeded, pull down the file (in some cases, in chunks). If the file is in chunks, partner code will need to concatenate the chunks together.
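As an illustration of the chunk retrieval and concatenation just described, here is a hedged Python sketch. The v2.0 endpoint paths, the 'chunks' response key, and the placeholder authHeader value are assumptions to confirm against the Anaplan API reference material; this is not a definitive implementation.

import requests

authHeader = {'Authorization': 'AnaplanCertificate <encoded string>'}  # placeholder value
base = 'https://api.anaplan.com/2/0/workspaces/{workspaceId}/models/{modelId}'  # placeholder IDs
fileId = '{fileId}'  # the export's backing file ID (placeholder)

# List the chunks produced by the export (path and response fields assumed)
chunkList = requests.get(f'{base}/files/{fileId}/chunks', headers=authHeader).json()

# Download each chunk in order and concatenate into one local file
with open('export_output.csv', 'wb') as out:
    for chunk in chunkList.get('chunks', []):
        resp = requests.get(f'{base}/files/{fileId}/chunks/{chunk["id"]}',
                            headers={**authHeader, 'Accept': 'application/octet-stream'})
        out.write(resp.content)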
Export action: parse the exported file

The file should be in .csv or .txt format. Invoke the Anaplan Export API endpoint "GET https://api.anaplan.com/.../exports/<export id>" to get the fields for the Export action.

Export action: analyze exported data

Most users will want to analyze multiple modules and lists. Each export is for one module or list. Users will need to be able to execute more than one export in order to populate their partner technology environment.

Export action: multiple exports

In Anaplan, a Process is a wrapper for multiple actions that are executed in sequential order. It is not possible to pull the export files using a process, so individual exports are required. The partner technology must allow for more than one export to be selected by the user. The calls will need to be made independently, as each export will need its own task ID. (This is assuming that exports run on different modules or lists.)

Export action: get the files from multiple exports

This is the same as pulling files from a single export call, except that the code needs to ensure that it is pulling the correct file after the export is called. Files for all defined exports should already exist in the system, so calling them will not result in failure. However, calling them without executing a new export task, or before the export task completes successfully, can lead to downloading outdated information. If tasks are created against a single model in parallel, the actions will be queued and run in sequence. Check that the task completes successfully before pulling the related file.

Import action: uploading data

The Technology Partner will split data to be uploaded into chunks of a certain size. Anaplan APIs support upload chunk sizes from 1 MB to 50 MB. These chunks will be uploaded to Anaplan in sequential order. Once all chunks are uploaded, the Import action will be triggered by a separate REST API call. (A sketch of this flow is included after the Definitions section at the end of this article.)

Error handling

The Anaplan API is REST, so expect standard HTTP error codes for API failures. Import action failures are found by doing a GET on the TASK endpoint. The JSON response will have a summary and, for error conditions, there will be a dump file that can be pulled to get more details. The partner technology or application will need to fetch the dump file via a REST API call, save the file, and then process it. Export dump files are unusual - they are more common for imports. Ensuring that the task completes successfully before retrieving the file will avoid receiving outdated information from Anaplan. If a task fails, report the errors back to the user. Any automatic restarts should be very limited in scope and user configurable, to prevent infinite loops and performance degradation of the model.

Labeling

Labels should follow Anaplan naming conventions: Export, Workspace, Model, File. For example, executing an Export action should be called 'Export', not 'Read'.

Definitions

Workspace: Each company (or autonomous workgroup) has its own Workspace. A workspace has its own set of users and may contain any number of models.
Model: A structure where a user can conduct planning. It contains all the objects needed for planning, such as modules and lists, but also the data values.
Module: Components of each Anaplan model, built up using line items, timescales, list dimensions, and pages. A module contains the metrics for planning.
Lists: Groups of similar items, such as people, products, and regions. They can be combined into Modules.
Actions Operations defined by users to execute certain functions, such as imports, exports, or processes. Actions must be defined in Anaplan before they can be called in the API. Process Groups actions and executes them in sequential order.  Data Source Definition The configuration of an action that details how the data is handled. Task The job that executes actions and contains metadata regarding the job itself.
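As referenced above, here is a minimal Python sketch of the import upload and error-handling flow: split a local file into chunks, upload them in sequence, trigger the import with a separate call, and pull the dump file if the task reports failures. It is a sketch under assumptions, not a reference implementation: the endpoint paths, request bodies, and response fields follow the v2.0 pattern documented in the apiary, the Anaplan file and import action are assumed to already exist, and the empty IDs are placeholders.

# Minimal sketch of the import flow and error handling described above.
# Assumes the target Anaplan file and import action already exist; verify
# endpoint paths and response fields against the apiary before use.
import base64
import os
import time
import requests

username = ''    # Anaplan account email
password = ''    # Anaplan account password
wGuid = ''       # workspace ID
mGuid = ''       # model ID
fileId = ''      # ID of the existing Anaplan file to upload into
importId = ''    # import action ID
sourceFile = 'data.csv'
chunkSize = 10 * 1024 * 1024   # 10 MB, within the supported 1 MB - 50 MB range

user = 'Basic ' + base64.b64encode(f'{username}:{password}'.encode('utf-8')).decode('utf-8')
base = f'https://api.anaplan.com/2/0/workspaces/{wGuid}/models/{mGuid}'
jsonHeaders = {'Authorization': user, 'Content-Type': 'application/json'}
octetHeaders = {'Authorization': user, 'Content-Type': 'application/octet-stream'}

# Declare how many chunks will be uploaded, then upload them in sequential order
chunkCount = -(-os.path.getsize(sourceFile) // chunkSize)   # ceiling division
requests.post(f'{base}/files/{fileId}', headers=jsonHeaders,
              json={'id': fileId, 'chunkCount': chunkCount})
with open(sourceFile, 'rb') as f:
    for chunkNum in range(chunkCount):
        requests.put(f'{base}/files/{fileId}/chunks/{chunkNum}',
                     headers=octetHeaders, data=f.read(chunkSize))

# Trigger the import with a separate call, then poll the task until it completes
taskId = requests.post(f'{base}/imports/{importId}/tasks', headers=jsonHeaders,
                       json={'localeName': 'en_US'}).json()['task']['taskId']
while True:
    task = requests.get(f'{base}/imports/{importId}/tasks/{taskId}',
                        headers=jsonHeaders).json()['task']
    if task['taskState'] == 'COMPLETE':
        break
    time.sleep(5)

# On failure, fetch the dump file, save it, and report the errors to the user
result = task.get('result', {})
if result.get('failureDumpAvailable') or not result.get('successful', True):
    dump = requests.get(f'{base}/imports/{importId}/tasks/{taskId}/dump',
                        headers={'Authorization': user})
    with open('import_dump.csv', 'wb') as dumpFile:
        dumpFile.write(dump.content)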
Note: This article is meant to be a guide on converting an existing Anaplan Security Certificate to PEM format for the purpose of testing its functionality via cURL commands. Please work with your developers on any more in-depth application of this process. The current Production API version is v1.3. Using a certificate to authenticate eliminates the need to update your script when you have to change your Anaplan password. To use a certificate for authentication with the API, it first has to be converted into a Base64 encoded string recognizable by Anaplan. Information on how to obtain a certificate can be found in Anapedia. This article assumes that you already have a valid certificate tied to your user name.

Steps:
1. To convert your Anaplan certificate to be usable with the API, you will first need openssl (https://www.openssl.org/). Once you have that, you will need to convert the certificate to PEM format. The PEM format uses the header and footer lines "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----".
2. If your certificate is not in PEM format, you can convert it using the following OpenSSL command, where "certificate-(certnumber).cer" is the name of the source certificate and "certtest.pem" is the name of the target PEM certificate:
openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem
View the PEM file in a text editor. It should be a Base64 string starting with "-----BEGIN CERTIFICATE-----" and ending with "-----END CERTIFICATE-----".
3. View the PEM file to find the CN (Common Name) using the following command:
openssl x509 -text -in certtest.pem
It should look something like "Subject: CN=(Anaplan login email)". Copy the Anaplan login email.
4. Use a Base64 encoder (e.g. https://www.base64encode.org/) to encode the CN and PEM string, separated by a colon. For example, paste this in: (Anaplan login email):-----BEGIN CERTIFICATE-----(PEM certificate contents)-----END CERTIFICATE-----
5. You now have the encoded string necessary to authenticate API calls. For example, using cURL to GET a list of the Anaplan workspaces for the user that the certificate belongs to:
curl -H "Authorization: AnaplanCertificate (encoded string)" https://api.anaplan.com/1/3/workspaces
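If you prefer to script steps 4 and 5 rather than use a web-based encoder, the same string can be built in Python with the base64 and requests modules. This is a minimal sketch under the same assumptions as the steps above: the PEM file is named certtest.pem and sits next to the script, and the username is the CN extracted in step 3.

# Minimal sketch of steps 4 and 5: build the email:PEM string, Base64 encode
# it, and call the v1.3 workspaces endpoint with the AnaplanCertificate header.
import base64
import requests

username = ''                       # Anaplan login email (the CN from step 3)
cert = open('certtest.pem').read()  # PEM file produced in step 2

encoded = base64.b64encode(f'{username}:{cert}'.encode('utf-8')).decode('utf-8')
headers = {'Authorization': 'AnaplanCertificate ' + encoded}

# Equivalent of the cURL call in step 5
workspaces = requests.get('https://api.anaplan.com/1/3/workspaces', headers=headers)
print(workspaces.status_code, workspaces.text)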
We are pleased to announce the Sep-2017 release of the Anaplan Connector for Informatica Cloud. This release fixes Success/Error row counts in the Monitor Log for Data Synchronization Tasks (DST).

Success/Error row logic for modules:
Module Import Action
- Success rows indicate the number of Line Items with ALL cells accepted for Insert/Update by Anaplan
- Error rows indicate the number of Line Items with at least 1 cell ignored/rejected by Anaplan
Module Export Action
- Success rows indicate the number of Line Items exported by Anaplan
- Error rows should always show 0
Note that, for modules, the Success/Error numbers in the DST Monitor Log count Line Items. If a Line Item has any cell that is ignored or rejected by Anaplan, that Line Item is counted as an Error row. For example, if your module has 10 Line Items, and 3 Line Items have 1 or more data cells that are rejected or ignored by Anaplan, the Success/Error count will be 7 and 3, respectively.

Tip: Anaplan ignores cells at the parent hierarchy level, e.g. the parent Qtr/Year for the Time dimension. Remove such parent columns from your input data before running an Anaplan Import (a sketch follows below); otherwise, those Line Items will be marked as Error in the DST Monitor Log.

Success/Error row logic for lists:
List Import Action
- Success rows indicate the number of List Items accepted by Anaplan for Insert/Update
- Error rows indicate the number of List Items ignored/rejected by Anaplan
List Export Action
- Success rows indicate the number of List Items exported by Anaplan
- Error rows should always show 0
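The tip above can be applied as a small pre-processing step before the Informatica mapping runs. The sketch below is hypothetical: it assumes the pandas module is available, that the source file is a CSV with Time in columns, and that the parent-level column headers look like "Q1 FY17" or "FY17". Adjust the file name and the pattern to match your own export layout.

# Hypothetical pre-processing: drop parent-level Time columns (quarter and
# full-year totals) so their cells are not counted as ignored/error rows.
import re
import pandas as pd

df = pd.read_csv('module_data.csv')                          # assumed input file
parent_pattern = re.compile(r'^(Q[1-4] FY\d{2}|FY\d{2})$')   # assumed parent headers
df = df.drop(columns=[c for c in df.columns if parent_pattern.match(c)])
df.to_csv('module_data_clean.csv', index=False)              # file handed to the import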
We're pleased to announce the latest release of the Anaplan Connector for Informatica Cloud, which includes the following features and enhancements:
- The Connector now uses the settings in the proxy access configuration on the Secure Agent to send and receive data through a corporate firewall.
- You can now specify an absolute file path to the error dump file in the Agent folder. Each Import Action generates a copy of the dump file (in .csv format) with details, including a date/time stamp, of all records that failed.
- The following delimiters are now supported: comma, semi-colon, tab, and pipe. Other delimiters defined by an Anaplan admin in an import or export definition can also be used with the Connector.
- You can now configure a DSS to upload a file without invoking an Action. This allows you to upload a file to Anaplan and run an import later.
- Informatica DST session logs are now populated with the results of each task, regardless of whether the integration succeeds or fails.
- You can now specify the chunk size for Imports (from 1 MB to 50 MB) to support fewer API calls. The chunk size for large data loads can be set to a higher value to help reduce load times and minimize errors during the load process.
Download the updated user guide from the Anaplan Connector for Informatica Cloud page in Anapedia. For more information, visit the Informatica Marketplace and search for 'Anaplan'.
ETL Overview
Traditionally, the IT department has controlled and owned all the data in a given organization. Therefore, the various functional areas within an organization (such as Finance, HR, Procurement, etc.) have provided reporting and analytical requirements to the IT department / Business Intelligence (BI) professionals, and have waited until the work corresponding to these business requirements is completed. Historically, the approach taken by BI professionals to meet these requirements was the standard Extract, Transform, and Load process, which is depicted in the sketch below. The raw data from various data sources (cloud, .txt, databases, .csv, etc.) is first extracted to a staging area. This extracted data is then transformed per a pre-determined set of transformation rules, and then loaded to a data repository. The business then consumes this transformed data for its reporting, analytics, and decision-making functions.

Figure 1 – ETL Process at a high level

The ETL process is considered somewhat rigid because all the requirements have to be shared first with the BI professionals, who then code the required transformation rules. In addition, any changes to these rules come at a higher cost to the business, both in terms of time and money. In some cases, this lost time may also result in an opportunity cost to the business.

ELT Overview
Nowadays, given the increasing need for speed and flexibility in reporting, analytics, what-if analyses, etc., the same businesses cannot afford to wait for an extended period of time while their business requirements are being worked on by the same BI professionals. This, coupled with relatively lower infrastructure (hardware) costs and the emergence of cloud technologies, has given rise to the ELT process. In the ELT process, the raw data from all data sources is extracted and then immediately loaded into a central data repository. The business can then get its hands on this raw data and transform it to suit its requirements. Once this transformation is done, the data is readily available for reporting, analytics, and decision-making needs. The sketch below illustrates the ELT process from a high level.

Figure 2 – ELT Process at a high level

The ELT process is similar to the data lake concept, where organizations dump data from various source systems into a centralized data repository. The format of the data in the data lake may be structured (rows and columns), semi-structured (CSV and logs), unstructured (emails and .pdfs), and sometimes even binary (images). Once organizations become familiar with the concept of a data lake / ELT process and see the benefits, they often rush to set one up. However, care must be taken to avoid dumping unnecessary and/or redundant data. In addition, an ELT process should also encompass data cleansing or data archival practices to maintain the efficiency of the data repository.

Comparison of ETL and ELT
The table below summarizes and compares the two methodologies of data acquisition and preparation for warehousing and analytics purposes.

ELT vs ETL and the Anaplan Platform
As a flexible and agile cloud platform, Anaplan supports both methodologies. Depending on the method chosen, below are suggestions on the solution design approach. If choosing the ETL methodology, clients could utilize one of the many ETL tools available in the marketplace (such as Informatica, MuleSoft, Boomi, SnapLogic, etc.) to extract and transform the raw data, which can then be loaded to the Anaplan platform. Although it is preferred to load large datasets to a data hub model, the transformed data could also be loaded directly to the live or planning model(s). With the ELT approach, after the raw data extraction, it is recommended that the data be loaded to a data hub model, where the Anaplan modeling team will code the required transformation rules. The transformed data can then be loaded to the live or planning model(s) to be consumed by end users. Regardless of the approach chosen, note that the activities to extract raw data and load it to the Anaplan platform can be automated.

A final note
The content above gives a high-level overview of the two data warehousing methodologies and by no means urges clients to adopt one methodology over the other. Clients are strongly advised to evaluate the pros and cons of each methodology as they relate to their business scenario(s) and to build a business case for selecting a methodology.
Anaplan API: Communication failure <SSL peer unverified: peer not authenticated>
This is a common error if a customer server is behind a proxy or firewall. For firewall blocks, the solution is to have the customer whitelist '*.anaplan.com'. If behind a proxy, use the '-via' or '-viauser' commands in Anaplan Connect. The other very common cause of this error is that the security certificate isn't synced up with Java. If the whitelist or via command solutions don't apply or don't resolve the error, uninstalling and reinstalling Java usually does the trick. Thanks to Jesse Wilson for the technical details. jesse.wilson@anaplan.com
Here are the commands available:
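To help isolate where the failure happens, here is a small diagnostic sketch in Python (separate from Anaplan Connect itself) that checks whether api.anaplan.com is reachable through a corporate proxy and whether the SSL handshake succeeds. The proxy address is an assumption; substitute your own, or drop the proxies argument if you connect directly.

# Quick connectivity/SSL check against the Anaplan API from behind a proxy.
import requests

proxies = {'https': 'http://proxy.example.com:8080'}  # hypothetical proxy address

try:
    r = requests.get('https://api.anaplan.com/1/3/workspaces',
                     proxies=proxies, timeout=30)
    # Even a 401 response here proves the SSL handshake and proxy path work
    print('Reached Anaplan, HTTP status:', r.status_code)
except requests.exceptions.SSLError as e:
    print('SSL verification failed (check proxy / certificate store):', e)
except requests.exceptions.ProxyError as e:
    print('Could not get through the proxy:', e)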
Anaplan supports connectors for the following ETL tools. This blog post provides a consolidated list of release notes for these connectors. This post is not meant to be a complete list, and we will keep updating it in the future.
a. Informatica
b. Dell Boomi
c. MuleSoft
d. SnapLogic

Informatica connector release notes:
Sep-2017 - https://network.informatica.com/servlet/JiveServlet/download/1253-41006/IC_Sept_17_CloudConnectorRel...
Aug-2017 - https://kb.informatica.com/proddocs/Product%20Documentation/6/IC_Spring2017_CloudConnectorReleaseNot...
Jul-2017 - https://network.informatica.com/events/1230
Feb-2017 - https://network.informatica.com/events/1154
Jul-2016 - Attached with this blog
Apr-2016 - Attached with this blog

Dell Boomi release notes:
1. Will be updated

SnapLogic release notes:
Spring 2017 (4.9) Snap Updates - https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1438863/S...
Fall 2016 (4.7) Snap Updates - https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/269238/Fall+2016+4.7+Snap+Updates
Summer 2016 (4.6) Snap Patch Releases - https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1941267/Summer+2016+4.6+Snap+Patch+Release...
Snap Updates, 2015 - https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1943134/Snap+Updates+2015
Snap Updates, March 2015 - https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1943400/Snap+Updates+March+2015

IC_Winter2016_AnaplanConnector_ReleaseNotes_en.pdf
IC_Summer2016_AnaplanConnectorReleaseNotes_en.pdf
Manual integration is by far the simplest option for moving data into Anaplan. Using the point-and-click user interface available in Anaplan, you can select any tab-delimited or comma-separated file for import into your model. Importantly, this is the only way to add a new data source to your Anaplan model. This makes it a stepping stone for all the other forms of integration, as any other import will reuse the format of a file that has already been uploaded.
Anaplan has built several connectors to work with popular ETL (Extract, Transform, and Load) tools. These tools provide a graphical interface through which you can set up and manage your integration. Each of the tools that we connect to has a growing library of connectors, providing a wide array of possibilities for integration with Anaplan. These ETL tools require subscriptions to take advantage of all their features, making them an especially appealing option if you already have a subscription.

MuleSoft
Anaplan has a connector available in MuleSoft's community library that allows for easy connection to cloud systems such as NetSuite, Workday, and Salesforce.com, as well as on-premise systems like Oracle and SAP. Any of these integrations can be scheduled to recur on any period needed, easily providing hands-off integration. MuleSoft uses the open-source Anypoint Studio and Java to manage its integrations between any of its available connectors. Anaplan has thorough documentation relating to our MuleSoft connector on the Anaplan MuleSoft GitHub.

SnapLogic
SnapLogic has a Snap Pack for Anaplan that leverages our API to import and export data. The Anaplan Snap Pack provides components for reading data from and writing data to the Anaplan server using SnapLogic, as well as executing actions on the Anaplan server. This Snap Pack empowers you to connect your data and organization on the Anaplan platform without missing a beat.

Boomi
Anaplan has a connector available on the Boomi marketplace that empowers you to create a local Atom and transfer data to or from any other source with a Boomi connector. You can use Boomi to import or export data using any of your pre-configured actions within Anaplan. This technology removes any need to store files as an intermediate step, and it facilitates automation.

Informatica
Anaplan has partnered with Informatica to build a connector on the Informatica platform. Informatica has connectors for hundreds of applications and databases, giving you the ability to leverage their integration platform for many other applications when you integrate them with Anaplan. You can search for the Anaplan Connector on the Informatica marketplace or request it from your Informatica sales representative.