Data Integration

Anaplan Connector for Informatica Cloud: June 2018 Update

We're pleased to announce a forthcoming update to the Anaplan Connector for Informatica Cloud, which will be available from June 9, 2018. This update contains two significant features:
- Support for API-based field mapping as an alternative to the current Anaplan file-based mapping for Anaplan Import actions.
- A workaround for column headers starting with numbers.

Benefits for users
- Use the same data source / flat file for multiple Import actions.
- Run Import actions regardless of a modified or deleted import data source file.
- Work with data headers beginning with numbers.

Support for API-based field mapping
Currently, the Anaplan Connector for Informatica Cloud refers to Anaplan files (the import data source) to retrieve column header information for mapping. This method can affect Informatica import-action Data Synchronization Tasks (DSTs) in scenarios where the Anaplan file has been deleted or overwritten. With this update, users can enable API-based mapping for imports, and the Anaplan Connector for Informatica Cloud will make an API call to determine the headers, removing the dependency on files.

How to use the new feature (enable API-based mapping)
In Informatica Cloud Services (ICS), edit the Anaplan connection and select the new API-based Mapping check box. After saving, all DSTs using this connection will use API-based mapping. If you don't want to edit your existing connections, you can copy them to new connections and then edit the new connections. This way, you can switch DSTs one-by-one to API-based mapping and test them independently.

Impact on existing integrations
Existing integrations will not be impacted by this change. The API-based Mapping check box is disabled by default, and the Anaplan Connector for Informatica Cloud will continue to use file-based mapping until users actively enable the check box. In certain cases, DST mapping (on the Field Mapping tab in the Data Synchronization Task Wizard) may need to be refreshed or manually re-mapped when moving to API-based mapping; for example, import actions where dates are present in column headers.

Support for column headers starting with numbers
Currently, Informatica Cloud (ICS) does not natively support column headers starting with numbers. In this update, we've implemented a workaround in the Anaplan connector to allow numeric headers for Import actions. If you use numeric headers such as "1 Jan 2018" in Import actions, the text "_NUMHDR_" will be prepended to those headers on the Field Mapping tab in the Data Synchronization Task Wizard. You can continue with the column mapping and then run the DST. The connector removes the "_NUMHDR_" prefix when writing data to Anaplan, so you'll see the original headers in the Anaplan import data source.

Note: If you use numeric headers in Export actions, due to technical limitations the "_NUMHDR_" text will not be removed from the final output. If you proceed with column mapping and run the DST for the export action, the additional text will still be present in the output file.

Look out for the update, which will be available from June 9, 2018.
DocuSign for Anaplan: Version 1.5 Release

We're pleased to announce the release of DocuSign version 1.5. In this release, we've introduced support for Single Sign-on (SSO) authentication. This means that DocuSign SSO users can authenticate the Anaplan DocuSign Integration with their single sign-on login, rather than entering their DocuSign password every time. Also, the Anaplan DocuSign Integration now supports all Anaplan data types with the exception of the "List" data type.

See DocuSign for Anaplan for more information about the Anaplan DocuSign Integration. See the DocuSign SSO Authentication article for information about using the Anaplan DocuSign integration with single sign-on (SSO). To find out more about DocuSign SSO authentication, see the Overview section of the DocuSign Single Sign-on Implementation Guide.

Known Issues & Workarounds

Issue: Anaplan Workspace Administrators can update Envelope status in the Anaplan module. Non-admins cannot update status, although the Integration will work and documents will be sent out.
Workaround: There is no workaround.

Issue: When reusing a Workflow with the same module, certain document actions such as "decline" or "cancel" do not overwrite the columns used to track the envelope status.
Workaround: Clear the columns used to track the envelope status.

Issue: When an existing Envelope is edited to use a different DocuSign template and the workflow that used the Envelope is subsequently edited, users are unable to save the edited Workflow. When this occurs, nothing happens when you click the Save button on a Workflow.
Workaround: When you need to work with a different DocuSign.com template, create a new Envelope rather than editing an existing one.

Issue: If you attempt to add more recipients to an existing Envelope Workflow by changing the template, the workflow cannot be saved.
Workaround: Create a new Envelope Workflow with the required template.
Anaplan Connector for Informatica Cloud: February 2018 Release

We're pleased to announce the February 2018 release of the Anaplan Connector for Informatica Cloud. This release fixes the Success/Error row counts in the Monitor Log for Data Synchronization Tasks (DSTs).

Exports
- Anaplan List exports: Success rows is the number of Anaplan List rows exported. The Error row count should be 0.
- Anaplan Module exports: Success rows is the number of Anaplan Module rows exported. The Error row count should be 0.

Imports
- Anaplan List imports: Success rows is the sum of the number of rows successfully updated/inserted and the number of rows updated/inserted with a warning. The Error row count is the number of failed rows.
- Anaplan Module imports: Success rows is the sum of the number of Anaplan cells successfully updated/inserted and the number of Anaplan cells updated/inserted with a warning. Error rows is the number of failed Anaplan cells.

Note: Cells ignored by the Anaplan Import action are not included in the above counts. For example, during a Module import, any parent hierarchy level cells will be ignored.

For more information, see the Anaplan Informatica Connector Guide.
DocuSign for Anaplan: Version 1.6 Release

We're pleased to announce version 1.6 of the DocuSign for Anaplan integration. This release includes the following enhancements.

Access the Header Bar from DocuSign for Anaplan
You can now access the header bar directly from DocuSign for Anaplan. The header bar provides access to models, applications, notifications, and personal settings. The following tasks are now performed from the User menu, in the top-right corner:
- Change Account: Switch between different DocuSign accounts.
- Change Workspace/Model: Switch between different models.
- Logout: Log out of Anaplan.
Previously, these options were located in the workspace and email address dropdowns, which have been removed.

Map Percentages from Anaplan
We've added the ability to include percentage amounts in DocuSign documents populated with Anaplan data. When you're creating and editing envelopes, you can now map columns containing numeric line items formatted as percentages (including decimal amounts). You can map percentages to any Text tag in your DocuSign template.

Updated Certificates
DocuSign will renew its DocuSign Connect X.509 certificates in May 2018. To support this change, we've updated the SSL certificates used by DocuSign for Anaplan. You don't need to take any action.

Known Issues & Workarounds
The following issues will be addressed in a future release:

Issue: The status of a sent document does not change to Complete after all recipients have signed it in DocuSign and clicked Complete.
Workaround: Refresh your browser to update the status.

Issue: If two or more of your DocuSign accounts are part of the same group, DocuSign information (for example, envelopes) from both accounts is available to logged-in users who connect the integration to any account in the group. You'll only encounter this issue if you have multiple DocuSign accounts organized using groups.
Workaround: None.

Issue: Selecting an envelope in the Create a new workflow wizard can take up to 20 seconds if multiple envelopes exist.
Workaround: None.

Issue: Leading and trailing spaces are not excluded from search queries. For example, searching for "envelope one " doesn't return an envelope named "envelope one".
Workaround: Check for leading and trailing spaces when searching for envelopes, DocuSign Workflows, or sent documents.

Issue: The column dropdowns in step 2 of the Create a new envelope wizard do not close as expected.
Workaround: None.

Issue: Workspace administrators can update envelope status in the Anaplan module. Non workspace administrators cannot update status, although the integration will work and documents will be sent out.
Workaround: None.

Issue: When reusing a DocuSign Workflow with the same module, certain document actions such as "decline" or "cancel" do not overwrite the columns used to track the envelope status.
Workaround: Clear the columns used to track the envelope status.

Issue: When an existing envelope is edited to use a different DocuSign template and the workflow that used the envelope is subsequently edited, users are unable to save the edited Workflow. When this occurs, nothing happens when you click the Save button on a Workflow.
Workaround: When you need to work with a different DocuSign template, create a new envelope rather than editing an existing one.

Issue: If you attempt to add more recipients to an existing DocuSign Workflow by changing the template, the workflow cannot be saved.
Workaround: Create a new Envelope Workflow with the required template.

Useful Resources
- Browse the DocuSign for Anaplan section of Anapedia.
- Take the DocuSign Integration training course.
Best Practices: Error Trapping, Retries, and Email Notifications with Anaplan Connect

Summary
Anaplan Connect is a command-line client to the Anaplan cloud-based planning environment. It is a Java-based utility that can perform a variety of commands, such as uploading and downloading data files, executing relational SQL queries (for loading into Anaplan), and running Anaplan actions and processes. To enhance a deployment of Anaplan Connect, it is important to be able to trap error conditions, retry failed Anaplan Connect operations, and send email notifications. This article provides best practices on how to incorporate these capabilities.

This article leverages a standard Windows command-line batch script and documents the various components and syntax of the script. In summary, the script has the following main components:
- Set variable values such as exit codes, Anaplan Connect login parameters, and operations and email parameters
- Run commands prior to running Anaplan Connect commands
- Main loop block for multiple retries
- Establish a log file based upon the current date and loop number
- Run the native Anaplan Connect commands
- Search for string criteria to trap error conditions
- Branching logic based upon the discovery of any trapped error conditions
- Send email success or failure notification of Anaplan Connect run status
- Logic to determine if a retry is required
- End main loop block
- Run commands after running Anaplan Connect commands
- Exit the script

Download the .bat files referenced in this article here (includes both PC and Mac/Linux options).

Section #1: Setting Script Variables
The following section of the script establishes and sets variables that are used in the script. The first three lines perform the following actions:
- Clear the screen
- Set the default to echo all commands
- Indicate to the operating system that variable values are strictly local to the script

The variables used in the script are as follows:
- ERRNO – Sets the exit code to 0 unless set to 1 after multiple failed retries
- COUNT – Counter variable used for looping multiple retries
- RETRY_COUNT – Counter variable to store the max retry count (note: the /a switch indicates a numeric value)
- AnaplanUser – Anaplan login credentials in the format as indicated in the example
- WorkspaceId – Anaplan numerical or named workspace ID
- ModelId – Anaplan numerical or named model ID
- Operation – A combination of Anaplan Connect commands. Note that a ^ can be used to enhance readability by indicating that the current command continues on the next line
- Domain – Email base domain, typically in the format of company.com
- Smtp – Email SMTP server
- User – Email SMTP server user ID
- Pass – Email SMTP server password
- To – Target email address(es). To increase the email distribution, simply add additional -t switches and email addresses as in the example
- From – From email address
- Subject – Email subject line. Note that this is dynamically set later in the script.
cls
echo on
setlocal enableextensions

REM **** SECTION #1 - SET VARIABLE VALUES ****
set /a ERRNO=0
set /a COUNT=0
set /a RETRY_COUNT=2

REM Set Anaplan Connect Variables
set AnaplanUser="<<Anaplan UserID>>:<<Anaplan UserPW>>"
set WorkspaceId="<<put your WS ID here>>"
set ModelId="<<put your Model ID here>>"
set Operation=-import "My File" -execute ^
-output ".\My Errors.txt"

REM Set Email variables
set Domain="spg-demo.com"
set Smtp="spg-demo"
set User="fpmadmin@spg-demo.com"
set Pass="1Rapidfpm"
set To=-t "fpmadmin@spg-demo.com" -t "gburns@spg-demo.com"
set From="fpmadmin@spg-demo.com"
set Subject="Anaplan Connect Status"

REM Set other types of variables such as file path names to be used in the Anaplan Connect "Operation" command

Section #2: Pre Custom Batch Commands
The following section allows custom batch commands to be added, such as running various batch operations like copying and renaming files, or running stored procedures via a relational database command-line interface.

REM **** SECTION #2 - PRE ANAPLAN CONNECT COMMANDS ***
REM Use this section to perform standard batch commands or operations prior to running Anaplan Connect

Section #3: Start of Main Loop Block / Anaplan Connect Commands
The following section of the script is the start of the main loop block, as indicated by the :START label. The individual components break down as follows:
- Dynamically set the name of the log file in a date format that includes the current loop number, for example: 2016-16-06-ANAPLAN-LOG-RUN-0.TXT
- Delete prior log and error files
- Run the native out-of-the-box Anaplan Connect script, with the addition of outputting the Anaplan Connect run session to the dynamic log file, as highlighted here: cmd /C %Command% > .\%LogFile%

REM **** SECTION #3 - ANAPLAN CONNECT COMMANDS ***
:START

REM Dynamically set logfile name based upon current date and retry count.
set LogFile="%date:~-4%-%date:~7,2%-%date:~4,2%-ANAPLAN-LOG-RUN-%COUNT%.TXT"

REM Delete prior log and error files
del .\BAT_STAT.TXT
del .\AC_API.ERR

REM Out-of-the-box Anaplan Connect code with the exception of sending output to a log file
setlocal enableextensions enabledelayedexpansion || exit /b 1
REM Change the directory to the batch file's drive, then change to its folder
cd %~dp0
if not %AnaplanUser% == "" set Credentials=-user %AnaplanUser%
set Command=.\AnaplanClient.bat %Credentials% -workspace %WorkspaceId% -model %ModelId% %Operation%
@echo %Command%
cmd /C %Command% > .\%LogFile%

Section #4: Set Search Criteria
The following section of the script enables trapping of error conditions that may occur when running the Anaplan Connect script. The methodology relies upon searching for certain strings in the log file after the Anaplan Connect commands execute. The batch command findstr can search for certain string patterns based upon literal or regular expressions and echo any matched records to the file AC_API.ERR. The existence of this file is then used to determine whether an error has been caught. In the example below, two different patterns are searched for in the log file. The output file AC_API.ERR is always produced, even if there is no matching string; when there is no matching string, it will be an empty 0K file. Since the existence of the file determines whether an error condition was trapped, it is imperative that any 0K files are removed, which is the function of the final line in the example below.
REM **** SECTION #4 - SET SEARCH CRITERIA - REPEAT @FINDSTR COMMAND AS MANY TIMES AS NEEDED ***
@findstr /c:"The file" .\%LogFile% > .\AC_API.ERR
@findstr /c:"Anaplan API" .\%LogFile% >> .\AC_API.ERR

REM Remove any 0K files produced by previous findstr commands
@for /r %%f in (*) do if %%~zf==0 del "%%f"

Section #5: Trap Error Conditions
In the next section, logic is incorporated into the script to trap errors that might have occurred when executing the Anaplan Connect commands. The branching logic relies upon the existence of the AC_API.ERR file. If it exists, then the contents of the AC_API.ERR file are redirected to a secondary file called BAT_STAT.TXT and the email subject line is updated to indicate that an error occurred. If the file AC_API.ERR does not exist, then the contents of the Anaplan Connect log file are redirected to BAT_STAT.TXT and the email subject line is updated to indicate a successful run. Later in the script, the file BAT_STAT.TXT becomes the body of the email alert.

REM **** SECTION #5 - TRAP ERROR CONDITIONS ***
REM If the file AC_API.ERR exists then echo errors to the primary BAT_STAT log file
REM Else echo the log file to the primary BAT_STAT log file
@if exist .\AC_API.ERR (
    @echo . >> .\BAT_STAT.TXT
    @echo *** ANAPLAN CONNECT ERROR OCCURRED *** >> .\BAT_STAT.TXT
    @echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
    type .\AC_API.ERR >> .\BAT_STAT.TXT
    @echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
    set Subject="ANAPLAN CONNECT ERROR OCCURRED"
) else (
    @echo . >> .\BAT_STAT.TXT
    @echo *** ALL OPERATIONS COMPLETED SUCCESSFULLY *** >> .\BAT_STAT.TXT
    @echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
    type .\%LogFile% >> .\BAT_STAT.TXT
    @echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
    set Subject="ANAPLAN LOADED SUCCESSFULLY"
)

Section #6: Send Email
In this section of the script, a success or failure email notification will be sent. The parameters for sending are all set in the variable section of the script.

REM **** SECTION #6 - SEND EMAIL VIA MAILSEND ***
@mailsend -domain %Domain% ^
-smtp %Smtp% ^
-auth -user %User% ^
-pass %Pass% ^
%To% ^
-f %From% ^
-sub %Subject% ^
-msg-body .\BAT_STAT.TXT

Note: Sending email via SMTP requires the use of a free and simple Windows program known as MailSend. The latest release is available here: https://github.com/muquit/mailsend/releases/. Once downloaded, unpack the .zip file, rename the file to mailsend.exe, and place the executable in the same directory where the Anaplan Connect batch script is located.

Section #7: Determine if a Retry is Required
This is one of the final sections of the script and determines whether the Anaplan Connect commands need to be retried. Nested IF statements are typically frowned upon but are required here given the limited capabilities of the Windows batch language. The first IF test determines if the file AC_API.ERR exists. If this file does exist, then the logic drops in and tests whether the current value of COUNT is less than RETRY_COUNT. If the condition is true, then COUNT gets incremented and the batch returns to the :START location (Section #3) to repeat the Anaplan Connect commands. If the condition of the nested IF is false, then the batch goes to the end of the script to exit with an exit code of 1.
REM **** SECTION #7 - DETERMINE IF A RETRY IS REQUIRED ***
@if exist .\AC_API.ERR (
    @if %COUNT% lss %RETRY_COUNT% (
        @set /a COUNT+=1
        @goto :START
    ) else (
        set /a ERRNO=1
        @goto :END
    )
) else (
    set /a ERRNO=0
)

Section #8: Post Custom Batch Commands
The following section allows custom batch commands to be added, such as running various batch operations like copying and renaming files, or running stored procedures via a relational database command-line interface. Additionally, this would be the location to add functionality to bulk insert flat-file data exported from Anaplan into a relational target via tools such as Oracle SQL Loader (SQLLDR) or Microsoft SQL Server Bulk Copy (BCP).

REM **** SECTION #8 - POST ANAPLAN CONNECT COMMANDS ***
REM Use this section to perform standard batch commands or operations after running Anaplan Connect commands

:END
exit /b %ERRNO%

Sample Email Notifications
The following are sample emails sent by the batch script, based upon the sample script in this document. Note how the needed content from the log files is piped directly into the body of the email.

Success Mail: (screenshot)
Error Mail: (screenshot)
Using Python 3 with the Anaplan API

Note: While all of these scripts have been tested and found to be fully functional, due to the vast amount of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction.

Getting Started
Python 3 offers many options for interacting with an API. This article explains how you can use Python 3 to automate many of the requests that are available in our apiary, which can be found at https://anaplan.docs.apiary.io/#. This article assumes you have the requests (version 2.18.4), base64, and JSON modules installed, as well as Python 3 version 3.6.4. Please make sure you are installing these modules for Python 3, and not for an older version of Python. For more information on these modules, please see their respective websites: Python (if you are using a Python version older or newer than 3.6.4, or a requests version older or newer than 2.18.4, we cannot guarantee the validity of this article), Requests, Base Converter, and JSON (note: install instructions are not at this site but will be the same as for any other Python module).

Note: Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions that each script makes.

Authentication
To start, let's talk about authentication. Every script run that connects to our API will be required to supply valid authentication. There are two ways to authenticate a Python script that I will be covering:
1. Certificate Authentication
2. Basic Encoded Authentication

Certificate authentication requires that you have a valid Anaplan certificate, which you can read more about here. Once you have your certificate saved locally, to properly convert your Anaplan certificate to be usable with the API, you will first need openssl. Once you have that, convert the certificate to PEM format by running the following code in your terminal:

openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem

If you are using certificate authorization, the scripts we use in this article assume you know the Anaplan account email associated with the certificate. If you do not know it, you can extract the common name (CN) from the PEM file by running the following code in your terminal:

openssl x509 -text -in certtest.pem

To be used with the API, the PEM certificate string will need to be converted to base64, but the scripts we will be covering take care of that for you, so I won't cover that in this section.

To use basic authentication, you will need to know the Anaplan account email that is being used, as well as the password. All scripts in this article have the following code near the top:

# Insert the Anaplan account email being used
username = ''

# If using cert auth, replace cert.pem with your pem converted certificate
# filename. Otherwise, remove this line.
cert = open('cert.pem').read()

# If using basic auth, insert your password. Otherwise, remove this line.
password = ''

# Uncomment your authentication method (cert or basic). Remove the other.
user = 'AnaplanCertificate ' + str(base64.b64encode((
    f'{username}:{cert}').encode('utf-8')).decode('utf-8'))

# user = 'Basic ' + str(base64.b64encode((f'{username}:{password}'
# ).encode('utf-8')).decode('utf-8'))

Regardless of authentication method, you will need to set the username variable to the Anaplan account email being used. If you are using a certificate to authenticate, you will need to have your PEM-converted certificate in the same folder or a child folder of the one you are running the scripts from. If your certificate is in a child folder, remember to include the file path when replacing cert.pem (e.g. cert/cert.pem). You can remove the password line, its comments, and its respective user variable. If you are using basic authentication, you will need to set the password variable to your Anaplan account password, and you can remove the cert line, its comments, and its respective user variable.
Getting the Information Needed for Each Script
Most of the scripts covered in this article require you to know an ID or metadata for the file, action, etc., that you are trying to process. Each script that gets this information for its respective field is titled get_____.py. For example, if you want to get your files' metadata, you'll run getFiles.py, which writes the file metadata for each file in the selected model in the selected workspace, as an array, to a JSON file titled files.json. You can then open the JSON file, find the file you need to reference, and use the metadata from that entry in your other scripts.

TIP: If you open the raw data tab of the JSON file, it makes it much easier to copy the whole set of metadata.

The following are the links to download each get____.py script. Each get script uses the requests.get method to send a get request to the proper API endpoint; a minimal sketch of this pattern follows the list.
- getWorkspaces.py: Writes an array to workspaces.json of all the workspaces the user has access to.
- getModels.py: Writes an array to models.json of either all the models a user has access to if wGuid is left blank, or all of the models the user has access to in a selected workspace if a workspace ID was inserted.
- getModelInfo.py: Writes an array to modelInfo.json of all metadata associated with the selected model.
- getFiles.py: Writes an array to files.json of all metadata for each file the user has access to in the selected model and workspace. (Please refer to the Apiary for more information on private vs default files. Generally it is recommended that all scripts be run via the same user account.)
- getChunkData.py: Writes an array to chunkData.json of all metadata for each chunk of the selected file in the selected model and workspace.
- getImports.py: Writes an array to imports.json of all metadata for each import in the selected model and workspace.
- getExports.py: Writes an array to exports.json of all metadata for each export in the selected model and workspace.
- getActions.py: Writes an array to actions.json of all metadata for all actions in the selected model and workspace.
- getProcesses.py: Writes an array to processes.json of all metadata for all processes in the selected model and workspace.
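As a minimal sketch of how these get scripts work (not one of the downloadable scripts; the workspace ID below is hypothetical), this is roughly what getModels.py does when a workspace ID is inserted:

import base64
import json
import requests

# Placeholder credentials, as described in the Authentication section
username = 'user@example.com'
password = 'your_password'
user = 'Basic ' + str(base64.b64encode((f'{username}:{password}'
                                        ).encode('utf-8')).decode('utf-8'))

# Hypothetical workspace ID, copied from workspaces.json
wGuid = '8a81b09d5e8c6f27015ece3402487d33'

# Send a get request to the models endpoint for the selected workspace
response = requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models',
                        headers={'Authorization': user})

# Write the array of model metadata to models.json for use in other scripts
with open('models.json', 'w') as f:
    json.dump(response.json(), f, indent=4)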
Uploads
A file can be uploaded to the Anaplan API endpoint either in multiple chunks or as a single chunk. Per our apiary:

We recommend that you upload files in several chunks. This enables you to resume an upload that fails before the final chunk is uploaded. In addition, you can compress files on the upload action. We recommend compressing single chunks that are larger than 50MB. This creates a Private File.

Note: To upload a file using the API, that file must exist in Anaplan. If the file has not been previously uploaded, you must upload it initially using the Anaplan user interface. You can then carry out subsequent uploads of that file using the API.

Multiple Chunk Uploads
The script we have for reference is built so that if the script is interrupted for any reason, or if any particular chunk of a file fails to upload, simply rerunning the script will start uploading the file again, starting at the last successful chunk. For this to work, the file must be initially split using a standard naming convention, using the terminal script below.

split -b [numberofBytes] [path and filename] [prefix for output files]

You can store the file in any location as long as you use the proper file path when setting the chunkFilePrefix. For example, chunkFilePrefix = "upload_chunks/chunk-" will look for file chunks named chunk-aa, chunk-ab, chunk-ac, etc., up to chunk-zz in the folder script_origin/upload_chunks/ (it is very unlikely that you will ever exceed chunk-zz). This lets the script know where to look for the chunks of the file to upload. You can download the script for running a multiple chunk upload from this link: chunkUpload.py

Note: The assumed naming conventions will only be standard if using Terminal, and they do not necessarily apply if the file was split using another method in Windows. If you are using Windows, you will need to either create a way to standardize the naming of the chunks alphabetically {chunkFilePrefix}(aa - zz) or run the script as detailed in the Apiary.

Note: The chunkUpload.py script keeps track of the last successful chunk by writing the name of the last successful chunk to a .txt file, chunkStop.txt. This file is deleted once the import completes successfully. If the file is modified in between runs of the script, the script may not function correctly. Best practice is to leave the file alone, and delete it if you want to start the upload from the first chunk.

Single Chunk Upload
The single chunk upload should only be used if the file is small enough to upload in a reasonable time frame. If the upload fails, it will have to start again from the beginning. If your file has a different name than that of its version on the server, you will need to modify line 31 ("name" : '') to reflect the name of the local file. This script runs a single put request to the API endpoint to upload the file. You can download the script for running a single chunk upload from this link: singleChunkUpload.py
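To make the single-chunk flow concrete, here is a rough sketch (again, not the downloadable script; all IDs are hypothetical and would come from the get scripts above) that puts the raw contents of a local file to the file endpoint in one call:

import base64
import requests

username = 'user@example.com'   # placeholder account email
password = 'your_password'      # placeholder password
user = 'Basic ' + str(base64.b64encode((f'{username}:{password}'
                                        ).encode('utf-8')).decode('utf-8'))

# Hypothetical IDs taken from workspaces.json, models.json, and files.json
wGuid = '8a81b09d5e8c6f27015ece3402487d33'
mGuid = 'C35B7F227EDB4A9EB2AF270EEB34D336'
fileID = '113000000025'

# Upload the entire local file as a single chunk of raw bytes
with open('data.csv', 'rb') as f:
    response = requests.put(
        f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}',
        headers={'Authorization': user, 'Content-Type': 'application/octet-stream'},
        data=f.read())

print(response.status_code)  # a 2xx status indicates the upload was accepted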
Imports
The import.py script sends a post request to the API endpoint for the selected import. You will need to set the importData value to the metadata for the import. See Getting the Information Needed for Each Script for more information. You can download the script for running an import from this link: Import.py

Once the import is finished, the script writes the metadata for the import task, as an array, to postImport.json, which you can use to verify which task you want to view the status of when running the importStatus.py script. The importStatus.py script returns a list of all tasks associated with the selected importID and their respective list indexes. If you want to check the status of the last run import, make sure you check postImport.json to verify you have the correct taskID. Enter the index for the task, and the script writes the task status, as an array, to the file importStatus.json. If the task is still in progress, it prints the task status and progress. If the task finished and a failure dump is available, it writes the failure dump in comma-delimited format to importDump.csv, which can be used to review the cause of the failure. If the task finished with no failures, you will get a message telling you the import completed with no failures. You can download the script for importStatus.py from this link: importStatus.py

Note: If you check the status of a task with an old taskID for an import that has been run since you last checked it, the dump will no longer exist, importDump.csv will be overwritten with an HTTP error, and the status of the task will be 410 Gone.

Exports
The export.py script sends a post request to the API endpoint for the selected export. You will need to set the exportData value to the metadata for the export. See Getting the Information Needed for Each Script for more information. You can download the script for running an export from this link: Export.py

Once the export is finished, the script writes the metadata for the export task, as an array, to postExport.json, which you can use to verify which task you want to view the status of when running the exportStatus.py script. The exportStatus.py script returns a list of all tasks associated with the selected exportID and their respective list indexes. If you want to check the status of the last run export, make sure you check postExport.json to verify you have the correct taskID. Enter the index for the task, and the script writes the task status, as an array, to the file exportStatus.json. If the task is still in progress, it prints the task status and progress. It is important to note that no failure dump will be generated if the export fails. You can download the script for exportStatus.py from this link: exportStatus.py

Actions
The action.py script sends a post request to the API endpoint for the selected action (for use with actions other than imports or exports). You will need to set the actionData value to the metadata for the action. See Getting the Information Needed for Each Script for more information. You can download the script for running an action from this link: Action.py

Once the action is finished, the script writes the metadata for the action task, as an array, to postAction.json, which you can use to verify which task you want to view the status of when running the actionStatus.py script. The actionStatus.py script returns a list of all tasks associated with the selected actionID and their respective list indexes. If you want to check the status of the last run action, make sure you check postAction.json to verify you have the correct taskID. Enter the index for the task, and the script writes the task status, as an array, to the file actionStatus.json. It is important to note that no failure dump will be generated in the case of a failed action. You can download the script for actionStatus.py from this link: actionStatus.py

Processes
The process.py script sends a post request to the API endpoint for the selected process. You will need to set the processData value to the metadata for the process. See Getting the Information Needed for Each Script for more information. You can download the script for running a process from this link: Process.py

Once the process is finished, the script writes the metadata for the process task, as an array, to postProcess.json, which you can use to verify which task you want to view the status of when running the processStatus.py script. The processStatus.py script returns a list of all tasks associated with the selected processID and their respective list indexes. If you want to check the status of the last run process, make sure you check postProcess.json to verify you have the correct taskID. Enter the index for the task, and the script writes the task status, as an array, to the file processStatus.json. If the task is still in progress, it prints the task status and progress. If the task finished and a failure dump is available, it writes the failure dump in comma-delimited format to processDump.csv, which can be used to review the cause of the failure. It is important to note that no failure dump will be generated for the process itself, only if one of the imports in the process failed. If the task finished with no failures, you will get a message telling you the process completed with no failures. You can download the script for processStatus.py from this link: processStatus.py
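The post-then-poll pattern shared by all four task types can be sketched as follows for an import. This is illustrative only: the IDs are hypothetical, and the taskId and taskState field names are assumptions based on the API responses, so treat the downloadable scripts as the reference.

import base64
import json
import time
import requests

username = 'user@example.com'   # placeholder account email
password = 'your_password'      # placeholder password
user = 'Basic ' + str(base64.b64encode((f'{username}:{password}'
                                        ).encode('utf-8')).decode('utf-8'))

# Hypothetical IDs from workspaces.json, models.json, and imports.json
wGuid = '8a81b09d5e8c6f27015ece3402487d33'
mGuid = 'C35B7F227EDB4A9EB2AF270EEB34D336'
importID = '112000000012'

base = f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}'

# Start the import task
task = requests.post(f'{base}/imports/{importID}/tasks',
                     headers={'Authorization': user,
                              'Content-Type': 'application/json'},
                     data=json.dumps({'localeName': 'en_US'})).json()
taskID = task['taskId']  # assumption: the response carries the new task's ID

# Poll the task status until it is no longer queued or in progress
while True:
    status = requests.get(f'{base}/imports/{importID}/tasks/{taskID}',
                          headers={'Authorization': user}).json()
    if status.get('taskState') not in ('NOT_STARTED', 'IN_PROGRESS'):  # assumed state names
        break
    time.sleep(5)

print(status)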
Downloading a File
Downloading a file from the Anaplan API endpoint will download the file in however many chunks it exists in on the endpoint. It is important to note that you should set the variable fileName to the name the file has in the file metadata. First, the download's individual chunk metadata is written, as an array, to downloadChunkData.json for reference. The script then downloads the file chunk by chunk and writes each chunk to a new local file with the same name as the 'name' listed in the file's metadata. You can download this script from this link: downloadFile.py

Note: If a file already exists in the same folder as your script with the same name as the name value in the file's metadata, the local file will be overwritten by the file being downloaded from the server.
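The chunked download loop can be sketched like this (hypothetical IDs; assumes the chunk-list response is the array of chunk metadata described above, with each entry carrying an id field):

import base64
import requests

username = 'user@example.com'   # placeholder account email
password = 'your_password'      # placeholder password
user = 'Basic ' + str(base64.b64encode((f'{username}:{password}'
                                        ).encode('utf-8')).decode('utf-8'))

# Hypothetical IDs from workspaces.json, models.json, and files.json
wGuid = '8a81b09d5e8c6f27015ece3402487d33'
mGuid = 'C35B7F227EDB4A9EB2AF270EEB34D336'
fileID = '116000000020'

base = f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}'

# Fetch the chunk metadata; assumed to be an array of objects with an 'id' field
chunks = requests.get(f'{base}/files/{fileID}/chunks',
                      headers={'Authorization': user}).json()

# Download each chunk in order and append it to a single local file
with open('downloaded_file.csv', 'wb') as out:
    for chunk in chunks:
        data = requests.get(f'{base}/files/{fileID}/chunks/{chunk["id"]}',
                            headers={'Authorization': user,
                                     'Accept': 'application/octet-stream'})
        out.write(data.content)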
Deleting a File
You can delete the file contents of any file that the user has access to that exists on the Anaplan server.

Note: This only removes private content. Default content and the import data source model object will remain.

You can download this script from this link: deleteFile.py

Standalone Requests Code and Their Required Headers
In this section, I will list the code for each request detailed above, including the API URL and the headers necessary to complete the call. I will be leaving the content to the right of the Authorization: headers blank. Authorization header values can be either Basic encoded_username:password or AnaplanCertificate encoded_CommonName:PEM_Certificate_String (see Certificate-Authorization-Using-the-Anaplan-API for more information on encoded certificates).

Note: requests.get will only generate a response body from the server, and no data will be locally saved unless written to a local file.

Get Workspaces List
requests.get('https://api.anaplan.com/1/3/workspaces/', headers={'Authorization':})

Get Models List
requests.get('https://api.anaplan.com/1/3/models/', headers={'Authorization':})
or
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models', headers={'Authorization':})

Get Model Info
requests.get(f'https://api.anaplan.com/1/3/models/{mGuid}', headers={'Authorization':})

Get Files/Imports/Exports/Actions/Processes List
The get requests for files, imports, exports, actions, and processes are largely the same. Change files to imports, exports, actions, or processes to run each.
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files', headers={'Authorization':})

Get Chunk Data
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks', headers={'Authorization':})

Post Chunk Count
requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-type': 'application/json'}, json={fileMetaData})

Upload a Chunk of a File
requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local chunk file})

Mark an Upload Complete
requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/complete', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData})

Upload a File in a Single Chunk
requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local file})

Run an Import/Export/Process
The post requests for imports, exports, and processes are largely the same. Change imports to exports or processes to run each.
requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{Id}/tasks', headers={'Authorization': , 'Content-Type': 'application/json'}, data=json.dumps({'localeName': 'en_US'}))

Run an Action
requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/actions/{Id}/tasks', headers={'Authorization': , 'Content-Type': 'application/json'}, data=json.dumps({'localeName': 'en_US'}))

Get Task List for an Import/Export/Action/Process
The get requests for import, export, action, and process task lists are largely the same. Change imports to exports, actions, or processes to get each task list.
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{importID}/tasks', headers={'Authorization':})

Get Status for an Import/Export/Action/Process Task
The get requests for import, export, action, and process task statuses are largely the same. Change imports to exports, actions, or processes to get each task status. Note: Only imports and processes will ever generate a failure dump.
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{ID}/tasks/{taskID}', headers={'Authorization':})

Download a File
Note: You will need to get the chunk metadata for each chunk of a file you want to download.
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkID}', headers={'Authorization': , 'Accept': 'application/octet-stream'})

Delete a File
Note: This only removes private content. Default content and the import data source model object will remain.
requests.delete(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-type': 'application/json'})

Note: SFDC user administration is not covered in this article, but the same concepts from the scripts provided can be applied to SFDC user administration. For more information on SFDC user administration, see the apiary entry for SFDC user administration.
DocuSign SSO Authentication
This article describes how to use the Anaplan DocuSign integration with single sign-on (SSO).
ETL and ELT Overview

ETL Overview
Traditionally, the IT department has controlled and owned all the data in a given organization. The various functional areas within an organization (such as Finance, HR, and Procurement) have therefore provided reporting and analytical requirements to the IT department / Business Intelligence (BI) professionals and waited until the work corresponding to these business requirements was completed.

Historically, the approach taken by the BI professionals to meet these requirements was the standard Extract, Transform, and Load (ETL) process, which is depicted in the sketch below. The raw data from various data sources (cloud, .txt, databases, .csv, etc.) is first extracted to a staging area. This extracted data is then transformed per a pre-determined set of transformation rules and then loaded to a data repository. The business then consumes this transformed data for its reporting, analytics, and decision-making functions.

Figure 1 – ETL Process at a high level

The ETL process is considered somewhat rigid because all the requirements have to be shared with the BI professionals first, who then code the required transformation rules. In addition, any changes to these rules come at a higher cost to the business, both in terms of time and money. In some cases, this lost time may also result in opportunity cost to the business.

ELT Overview
Nowadays, given the increasing need for speed and flexibility in reporting, analytics, what-if analyses, and so on, businesses cannot afford to wait for an extended period of time while their requirements are worked on by BI professionals. This, coupled with relatively lower infrastructure (hardware) costs and the emergence of cloud technologies, has given rise to the Extract, Load, and Transform (ELT) process.

In the ELT process, the raw data from all data sources is extracted and then immediately loaded into a central data repository. The business can then get its hands on this raw data and transform it to suit its requirements. Once this transformation is done, the data is readily available for reporting, analytics, and decision-making needs. The sketch below illustrates the ELT process at a high level.

Figure 2 – ELT Process at a high level

The ELT process is similar to the data lake concept, where organizations dump data from various source systems into a centralized data repository. The format of the data in the data lake may be structured (rows and columns), semi-structured (CSV and logs), unstructured (emails and .pdfs), and sometimes even binary (images). Once organizations become familiar with the concept of a data lake / ELT process and see the benefits, they often rush to set one up. However, care must be taken to avoid the dumping of unnecessary and/or redundant data. In addition, an ELT process should also encompass data cleansing or data archival practices to maintain the efficiency of the data repository.

Comparison of ETL and ELT
The following points summarize and compare the two methodologies of data acquisition and preparation for warehousing and analytics purposes:
- Order of operations: ETL extracts and transforms the data before loading it; ELT loads the raw data first and transforms it afterwards in the repository.
- Who transforms the data: with ETL, IT / BI professionals code the transformation rules; with ELT, the business transforms the data itself.
- Flexibility: ETL is more rigid, since requirement changes must go back through IT; ELT is more flexible, since the raw data is already available to the business.
- Cost of change: changes to ETL rules come at a higher cost in time and money; with ELT, transformations can be adjusted without IT re-coding.

ELT vs ETL and the Anaplan Platform
As a flexible and agile cloud platform, Anaplan supports both methodologies. Depending on the method chosen, below are suggestions on the solution design approach. If choosing the ETL methodology, clients could utilize one of the many ETL tools available in the marketplace (such as Informatica, Mulesoft, Boomi, SnapLogic, etc.)
to extract and transform the raw data, which can then be loaded to the Anaplan platform. Although it is preferable to load huge datasets to a data hub model, the transformed data could also be loaded to the live or planning model(s). With the ELT approach, after the raw data extraction, it is recommended that the data be loaded to a data hub model, where the Anaplan modeling team will code the required transformation rules. The transformed data can then be loaded to the live or planning model(s) to be consumed by end users. Regardless of the approach chosen, note that the activities to extract raw data and load it to the Anaplan platform can be automated.

A final note
The content above gives a high-level overview of the two data warehousing methodologies and by no means urges clients to adopt one methodology over the other. Clients are strongly advised to evaluate the pros and cons of each methodology as they relate to their business scenario(s) and build a business case to select a methodology.
Anaplan Connect v1.3.3.5 is now available. 
Importing Data from PostgreSQL to Anaplan with Informatica Cloud (Anaplan Hyperconnect)

This guide assumes you have set up your runtime environment in Informatica Cloud (Anaplan Hyperconnect) and that the agent is up and running. It focuses solely on how to configure the ODBC connection and set up a simple synchronization task importing data from one table in PostgreSQL to Anaplan. Informatica Cloud has richer features that are not covered in this guide; the built-in help is contextual and helpful as you go along, should you need more information than is included here. The intention of this guide is to help you set up a simple import from PostgreSQL to Anaplan, so it is kept short and does not cover all related areas.

This guide assumes you have run an import using a CSV file, as this needs to be referenced when the target connection is set up, described under section 2.2 below. To prepare, I exported the data I wanted to use for the import from PostgreSQL to a CSV file. I then mapped this CSV file to Anaplan and ran an initial import to create the import action that is needed.

1. Set up the ODBC connection for PostgreSQL
In this example I am using the 64-bit version of the ODBC connection running on my local laptop. I have set it up for User DSN rather than System DSN, but the process is very similar should you need to set up a System DSN. You will need to download the relevant ODBC driver from PostgreSQL and install it to be able to add it to your ODBC Data Sources (click the Add... button and you should be able to select the downloaded driver).

Clicking the configuration button for the ODBC Data Source opens the configuration dialogue. The configurations needed are:
- Database: the name of your PostgreSQL database.
- Server: the address of your server. As I am setting this up on my laptop, it's localhost.
- User Name: the username for the PostgreSQL database.
- Password: the password for the PostgreSQL database.
- Port: the port used by PostgreSQL. You will find this if you open PostgreSQL.

Testing the connection should not return any errors.
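If you would like to verify the DSN outside Informatica before continuing, one option is a quick check from Python using the pyodbc package. This is an optional, illustrative sketch; the DSN name, user, and password below are placeholders that must match what you configured above.

import pyodbc  # assumes the pyodbc package is installed (pip install pyodbc)

# 'PostgreSQL35W' is a placeholder DSN name; use the data source name you configured
conn = pyodbc.connect('DSN=PostgreSQL35W;UID=postgres;PWD=your_password')
cursor = conn.cursor()
cursor.execute('SELECT 1')
print(cursor.fetchone())  # (1, ) confirms the DSN resolves and the credentials work
conn.close()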
2. Configuring source and target connections
After setting up the ODBC connection as described above, you will need to set up two connections: one to PostgreSQL and one to Anaplan. Follow the steps below to do this.

2.1 Source connection – PostgreSQL ODBC
Select Configure > Connection in the menu bar to configure a connection.
- Name your connection and add a description.
- Select type – ODBC.
- Select the runtime environment that will be used to run this. In this instance I am using my local machine.
- Insert the username for the database (the same as you used to set up the ODBC connection).
- Insert the password for the database (the same as you used to set up the ODBC connection).
- Insert the data source name. This is the name of the ODBC connection you configured earlier.
- The code page needs to correspond to the character set you are using.

Testing the connection should give you a confirmation. If so, you can click Done.

2.2 Set up target connection – Anaplan
The second connection that needs to be set up is the connection from Informatica Cloud to Anaplan.
- Name your connection and add a description if needed.
- Select type – AnaplanV2.
- Select the runtime environment that will be used to run this. In this instance I am using my local machine.
- Auth type – I am using Basic Auth, which requires your Anaplan user credentials.
- Insert the Anaplan username.
- Insert the Anaplan password.
- Certification Path location – leave blank if you use Basic Auth.
- Insert the workspace ID (open your Anaplan model and select Help > About).
- Insert the model ID (found in the same way as the workspace ID).
- I have left the remaining fields at their default settings.

Testing the connection should not return any errors.

3. Task wizard – Data synchronization
The next step is to set up a data synchronization task to connect the PostgreSQL source to the Anaplan target. Select Task Wizards in the menu bar and navigate to Data Synchronization.

This opens the task wizard, starting with defining the Data Synchronization task. Name the task and select the relevant task operation. In this example I have selected Insert, but other task operations are available, such as Update and Upsert.

Click Next for the next step in the workflow, which is to set up the connection to the source. Start by selecting the connection you defined above under section 2.1. In this example I am using a single table as the source and have therefore selected Single Source. With this connection you can select the source object with the Source Object dropdown. This gives you a data preview so you can validate that the source is defined correctly. The source object corresponds to the table you are importing from.

The next step is to define the target connection, using the connection that was set up under section 2.2 above. The target object is the import action that you created from the CSV file in the preparation step described at the start of this guide. This action is referred to below as the target object. The wizard shows a preview of the target module columns.

The next step in the process is Data Filters, which has both a Simple and an Advanced mode. I am not using any data filters in this example; please refer to the built-in help for further information on how to use this.

In the field mapping, you will either need to map the fields manually or have them automatically mapped, depending on whether the names in the source and target correspond. If you map manually, you will need to drag and drop the fields from the source to the target. Once done, select Validate Mapping to check that no errors are generated from the mapping.

The last step is to define whether to use a schedule to run the connection. You also have the option to insert pre-processing and post-processing commands and any parameters for your mapping. Please refer to the built-in help for guidance on this.

After running the task, the activity log will confirm whether the import ran without errors or warnings.

As mentioned initially, this is a simple guide to help you set up a simple, single-source import. Informatica Cloud does have more advanced options as well, both for mappings and transformations.
Workiva Wdesk Integration Is Now Available

Audience: Anaplan Internal and Customers/Partners

We are excited to announce the general availability of Anaplan's integration with Workiva's product, known as Wdesk. Wdesk easily imports planning, analysis, and reporting data from Anaplan to deliver integrated narrative reporting, compliance, planning, and performance management on the cloud. The platform is utilized by over 3,000 organizations for SEC reporting, financial reporting, SOX compliance, and regulatory reporting.

The Workiva and Anaplan partnership delivers enterprise compliance and performance management on the cloud. Workiva Wdesk, the leading narrative reporting cloud platform, and Anaplan, the leading connected-planning cloud platform, offer reliable, secure integration to address high-value use cases in the last mile of finance, financial planning and analysis, and industry-specific regulatory compliance.

GA Launch: March 5th

How does the Workiva Wdesk integration work?
Please contact Will Berger, Partnerships (william.berger@workiva.com) from Workiva to discuss how to enable the integration. Anaplan reports will feed into the Wdesk platform. Wdesk integrates with Anaplan via Wdesk Connected Sheets. This is a connection built and maintained by Workiva.

What use cases are supported by the Workiva Wdesk integration?
The Workiva Wdesk integration supports a number of use cases, including:
- Last mile of finance: Complete regulatory reporting and filing as part of the close, consolidate, report, and file process. Workiva automates and structures the complete financial reporting cycle and pulls consolidated actuals from Anaplan.
- Financial planning and analysis: Complex multi-author, narrative reports that combine extensive commentary and data, such as budget books, board books, briefing books, and other FP&A management and internal reports. Workiva creates timely, reliable narrative reports pulling actuals, targets, and forecast data from Anaplan.
- Industry-specific regulatory compliance and extensive support for XBRL and iXBRL: Workiva is used to solve complex compliance and regulatory reporting requirements in a range of industries. In banking, Workiva supports documentation processes such as CCAR, DFAST, and RRP, pulling banking stress-test data from Anaplan. Workiva is also the leading provider of XBRL software and services, accounting for more than 53% of XBRL facts filed with the SEC in the first quarter of 2017.
Anaplan Connect and JDBC

Anaplan Connect is a downloadable tool that empowers you to automate Anaplan actions. This lightweight tool relies on the same types of flat files that can be manually uploaded into Anaplan. Once the tool is installed on your computer, you can package that point-and-click process in a script (.bat or .sh files). These scripts work well with external scheduling tools, enabling you to schedule and automate a data upload/download from Anaplan's cloud platform. Most often, Anaplan Connect is used in conjunction with flat files, but it can also be used to connect to any relational database with JDBC.

JDBC
JDBC stands for Java Database Connectivity. It is the industry-standard API for database-independent connectivity between Java and a wide range of SQL databases, as well as other tabular data sources. A JDBC connection relies on Anaplan Connect to handle the Anaplan side of the integration; it has a separate category because this is the only type of Anaplan Connect script that contains an SQL query. As with any non-JDBC integration using Anaplan Connect, Anaplan must already have a template file stored as a data source. As long as this data source is available within the Anaplan model, a JDBC integration differs from a flat-file Anaplan Connect script only when it comes to selecting the file for import: with a JDBC integration, the import source is the result of an SQL query instead of the location of a flat file. The results of this query are passed directly to Anaplan without needing to store a file.

Learn more about Anaplan Connect and download the Anaplan Connect Quick Start Guide in Anapedia.
Tableau Connector for Anaplan

The Tableau Anaplan native integration provides an easy way to see and understand your Anaplan data using Tableau. Using the Tableau Connector for Anaplan, you can connect directly to Anaplan in a few easy steps. The connector is native to Tableau and built using the Anaplan API. It enables you to import Anaplan data into Tableau's in-memory query engine using export actions created and saved in Anaplan. With a direct connection to Anaplan, people within your organization can effectively work with Tableau and get actionable insights from their data. Users can publish their Anaplan extract as a data source to Tableau Online or Tableau Server and keep their data refreshed on a regular basis.

To start using the Tableau Connector for Anaplan, you need an Anaplan account with a workspace and model, and a license for Tableau Desktop. You will also need to configure, in Anaplan, the export actions that you plan to use with Tableau. Tableau supports only extract connections for Anaplan, not live connections; you can update the data by refreshing the extract.

To try the Tableau Connector for Anaplan, visit https://www.tableau.com/products/trial.

For an introduction to the Tableau - Anaplan integration, refer to the page below:
https://www.tableau.com/about/blog/2016/10/connect-directly-your-anaplan-data-tableau-61853

More details about configuring the connector in Tableau are here:
https://onlinehelp.tableau.com/current/pro/desktop/en-us/examples_anaplan.html

Information on configuring Anaplan to use the Tableau Connector, as well as frequently asked questions, is available on Anapedia.
We're excited to announce the general availability (GA) release of DocuSign for Anaplan, a new integration available to Premium and Enterprise customers with a Business DocuSign plan or higher. The integration enables you to generate and send DocuSign documents populated with Anaplan data, without the hassle of exporting data and managing distribution lists. You can use the integration to:

Generate hundreds of dynamic documents in just a few clicks.
Eliminate manual data entry.
Streamline document routing by creating DocuSign Workflows.
Get real-time updates for each sent document.

Once you've connected your DocuSign account, you can create, send, and track documents in five steps.

Access DocuSign for Anaplan

If the enhanced Launchpad has already been enabled for your organization, you can access the DocuSign for Anaplan integration from the Application menu, at top-left. If your organization still uses the Classic Launchpad, access the integration by typing https://docusign.anaplan.com/dsn into your browser's address bar. When prompted, enter the email address and password you use to log in to DocuSign. Accessing the integration using the above methods is only possible for Premium and Enterprise customers.

Useful Resources

Browse the DocuSign for Anaplan section of Anapedia.
Take the DocuSign Integration training course.

Known Issues

We've identified the following known issues with the initial GA release of the DocuSign for Anaplan integration. We're working hard to resolve these issues in the next release.

The status of a sent document does not change to Complete after all recipients have signed it in DocuSign and clicked Complete. Workaround: Refresh your browser to update the status.

When you send an envelope, the Sent Documents page opens automatically but does not show the correct Sent Date and Signed Date. Workaround: Refresh your browser to update the Sent Date and Signed Date to reflect when the envelope was sent.

The signing status of the envelope does not update in Anaplan in the following signing workflow: Recipient 1, with DocuSign role 1, signs the document; Recipient 2, with DocuSign role 2, declines the document. The Update Anaplan button is disabled and your view does not update to show the declined status. Workaround: Check the signing status on the Sent Documents page in the DocuSign for Anaplan integration.

If you reuse an envelope with a document template that has a greater number of recipients, step two of the Create a new Workflow wizard does not show the updated number of recipients for the envelope.

If two or more of your DocuSign accounts are part of the same group, DocuSign information (for example, envelopes) from both accounts is available to logged-in users who connect the integration to any account in the group. You'll only encounter this issue if you have multiple DocuSign accounts organized using groups.

When you preview an envelope by clicking Preview and Send, the preview table shows all data in the Anaplan view, regardless of any filters that are applied. However, filters are respected when the DocuSign Workflow is triggered, and documents are only sent to the filtered recipients.

Selecting an envelope in the Create a new workflow wizard can take up to 20 seconds if multiple envelopes exist.

Leading and trailing spaces are not excluded from search queries. For example, searching for "envelope one " doesn't return an envelope named "envelope one". Workaround: Check for leading and trailing spaces when searching for envelopes, DocuSign Workflows, or sent documents.

The column dropdowns in step 2 of the Create a new envelope wizard do not close as expected.

Users are unable to save DocuSign Workflows. Workaround: Create a new module in Anaplan and use that to configure the DocuSign integration.
This post summarizes the steps to convert your security certificate to PEM format and test it in a cURL command with Anaplan. The current production API version is v1.3. Authenticating with a certificate eliminates the need to update your script whenever you change your Anaplan password. To use a certificate for authentication with the API, it first has to be converted into a Base64-encoded string recognizable by Anaplan. Information on how to obtain a certificate can be found in Anapedia; this article assumes that you already have a valid certificate tied to your user name.

Steps:

1. To convert your Anaplan certificate for use with the API, you first need OpenSSL (https://www.openssl.org/). Once you have that, convert the certificate to PEM format. The PEM format uses the header and footer lines "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----".

2. If your certificate is not in PEM format, you can convert it with the following OpenSSL command, where "certificate-(certnumber).cer" is the name of the source certificate and "certtest.pem" is the name of the target PEM certificate:

openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem

View the PEM file in a text editor. It should be a Base64 string starting with "-----BEGIN CERTIFICATE-----" and ending with "-----END CERTIFICATE-----".

3. View the PEM file to find the CN (Common Name) using the following command:

openssl x509 -text -in certtest.pem

It should look something like "Subject: CN=(Anaplan login email)". Copy the Anaplan login email.

4. Use a Base64 encoder (e.g. https://www.base64encode.org/) to encode the CN and the PEM string, separated by a colon. For example, paste this in:

(Anaplan login email):-----BEGIN CERTIFICATE-----(PEM certificate contents)-----END CERTIFICATE-----

5. You now have the Base64-encoded string necessary to authenticate API calls. For example, using cURL to GET the list of Anaplan workspaces for the user the certificate belongs to:

curl -H "Authorization: AnaplanCertificate (encoded string)" https://api.anaplan.com/1/3/workspaces
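The manual steps above can also be strung together in a shell session. The following is a minimal sketch, assuming your login email is user@example.com and your certificate file is certificate-12345.cer (both placeholders); the tr call strips newlines so the result can travel as a single header value.

# Convert the DER certificate to PEM (file names are placeholders).
openssl x509 -inform der -in certificate-12345.cer -out certtest.pem

# Build the "CN:PEM" string and Base64-encode it, stripping newlines so it
# fits in one HTTP header value.
AUTH=$(printf '%s:%s' "user@example.com" "$(cat certtest.pem)" | base64 | tr -d '\n')

# Test the encoded string against the v1.3 API.
curl -H "Authorization: AnaplanCertificate ${AUTH}" https://api.anaplan.com/1/3/workspaces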
This release includes several fixes for the initial General Availability (GA) release of the DocuSign for Anaplan integration.

Fixed Issues

80171: Envelopes don't save if you have large numbers of documents, recipients, and tags.
- When you send an envelope, the Sent Documents tab automatically opens but does not show the correct Sent Date or Signed Date.
- If you reuse an envelope with a document template that has more recipients than previously, the updated number of recipients is not shown in step two of the Create a new Workflow wizard.
- When you preview an envelope, the preview table shows all data in the Anaplan view regardless of any filters that are applied.

76104, 76628, 78151, 78355, 78370, 79139: Users are unable to save DocuSign Workflows.

79139, 80171: The signing status of sent envelopes, when tracked from the integration or Anaplan, doesn't update if the document template contains multiple Text tags.

79379, 81581: If you delete a module whose views are linked to one or more envelopes, the Workflows and Envelopes tabs no longer display existing DocuSign Workflows and envelopes.

Known Issues

We're working hard to resolve the following known issues with the integration in the next release.

When editing an envelope, step two of the edit wizard doesn't show the saved mappings between Anaplan and DocuSign. Workaround: In step two, map every column to a document tag.

The status of a sent document does not change to Complete after all recipients have signed it in DocuSign and clicked Complete. Workaround: Refresh your browser to update the status.

If an Anaplan view linked to a DocuSign Workflow doesn't contain any data, the Sent Documents tab is empty and you see the error: "This run either has no details or does not exist". This issue occurs if all line-item data is filtered out of the view. Workaround: Add some data to the view, or remove one or more filters.

The Sent Documents tab is empty if a DocuSign Workflow is edited to use a different document template, module, or view. Workaround: None.

After editing an envelope to use a different view of the same module, you can't then edit the linked DocuSign Workflow to use the same view as the envelope. In this case, the Continue button in the edit wizard is disabled. Workaround: In the edit wizard, select the view you want and then map every Anaplan column header to a role or action. The Continue button is then enabled.

Envelope signing status is not updated in Anaplan if the first recipient declines the document; the Update Anaplan button is disabled. If the second recipient also declines, the signing status is updated as expected. Envelope signing status is also not updated in Anaplan if the first recipient signs the document and the second recipient declines it; the Update Anaplan button is disabled. Workaround: None.

Accessing the Integration

You can access the integration from the Application menu, at the top-left corner. The Application menu is part of the new header bar that is now available in both tiles view and in models. If the header bar isn't enabled for your organization, please continue to access the integration at https://docusign.anaplan.com/dsn

Useful Resources

Browse the DocuSign for Anaplan section of Anapedia.
Take the DocuSign Integration training course.
At Anaplan, our mission is to change the way companies around the world align people and plans to market opportunities. Central to achieving this goal is successfully integrating your data from various external systems into Anaplan, through native connectors as well as connectors for the most popular ETL tools on the market. Anaplan has built a connector that provides a graphical environment for connecting Anaplan with any of Boomi's library of connectors to other applications. Check out Anapedia for more information on performing basic functions with the Boomi connector; in this post, we demonstrate some advanced ways to use this tool.

Multi-Module Import

Often, a need arises to import a subset of data into a list before it is possible to fill in a module. This is easily achievable if you prepare your import actions with a single sample CSV with the exact formatting that Boomi's export will create. This is a useful technique for handling an export of Salesforce opportunities: a single pull from Salesforce returns a CSV containing opportunity IDs as well as other data you track in Anaplan. Two successive upsert calls with the Anaplan Connector can add new opportunity IDs to an Anaplan list (useful for model-wide data integrity) and then add the other information to a module. Following this process has several upsides: a simpler Boomi process, a single query to Salesforce, and increased model-wide data integrity, since any opportunity ID in the model can be matched to an item on the opportunity ID list.

Calling a Process

Anaplan's Boomi connector does not have native support for calling processes. Often, calling individual actions is all that's needed, but some integrations demand more. Calling an Anaplan process instead of a collection of actions can also reduce the maintenance burden for IT professionals by allowing actions to be renamed and reordered without requiring changes to the Boomi process. To call an Anaplan process within your Boomi workflow, you must skirt the Anaplan connector and instead use a Boomi HTTPS connector to call the process using our API; a sketch of such a call follows below. There is thorough documentation on Anaplan's RESTful API, and supplemental information in the knowledge base.
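For illustration, here is roughly what that HTTPS connector call looks like when expressed as cURL. This is a hedged sketch rather than a definitive recipe: the path follows the v1.3 REST API convention used elsewhere in this document, all IDs are placeholders, and the exact endpoint and any required request body should be verified against the API documentation.

# Hedged sketch: IDs are placeholders; verify the exact v1.3 path and any
# required request body against Anaplan's RESTful API documentation.
WORKSPACE_ID="YourWorkspaceId"
MODEL_ID="YourModelId"
PROCESS_ID="YourProcessId"

# Kick off the process by creating a new task for it.
curl -X POST \
  -H "Authorization: AnaplanCertificate (encoded string)" \
  "https://api.anaplan.com/1/3/workspaces/${WORKSPACE_ID}/models/${MODEL_ID}/processes/${PROCESS_ID}/tasks"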
Anaplan API: Communication failure <SSL peer unverified: peer not authenticated>

This is a common error when a customer server is behind a proxy or firewall. For firewall blocks, the solution is to have the customer whitelist '*.anaplan.com'. If the server is behind a proxy, use the '-via' or '-viauser' options in Anaplan Connect. The other very common cause of this error is that the security certificate isn't synced up with Java. If the whitelist or proxy options don't apply or don't resolve the error, uninstalling and reinstalling Java usually does the trick. Thanks to Jesse Wilson (jesse.wilson@anaplan.com) for the technical details.
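To illustrate the proxy route, here is a minimal sketch of an Anaplan Connect invocation using those options. The proxy address, credentials, and all IDs are placeholders, and the exact argument formats for -via and -viauser should be checked against the Anaplan Connect guide for your version.

# Hedged sketch: proxy address, credentials, and IDs are placeholders; check
# the Anaplan Connect guide for the exact -via / -viauser argument formats.
./AnaplanClient.sh \
  -via "proxy.example.com:8080" \
  -viauser "DOMAIN/proxyuser:proxypassword" \
  -user "user@example.com" \
  -workspace "YourWorkspaceId" -model "YourModelId" \
  -file "Data.csv" -put "/path/to/Data.csv" \
  -import "Data Import" -execute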
Recently, I used Anaplan Connect for the first time; I used it to import Workday and Jobvite data into my Anaplan model. This was my first serious data integration. After my experience, I put together some tips and tricks to help other first-timers succeed.

First, there are a few things you can do to set yourself up for success:

Download the most up-to-date version of Java.
Download Anaplan Connect from Anaplan's Download Center.
Make sure you can run Terminal (Mac) or the Command Prompt (Windows).
Make sure you have a plain-text editor to edit your script (TextEdit or Notepad are available by default, but I recommend Sublime Text).
Read through the Anaplan Connect User Guide in the "doc" folder of the Anaplan Connect folder you downloaded in step #2.

Once you have these items completed, you're ready to start writing your script. In the Anaplan Connect folder that you downloaded, there are some example script files: "example.bat" for Windows and "example.sh" for Mac. The best way to start is to copy the right example file for your operating system, then alter it. When you first navigate the example script, the top section contains what are called variables (e.g. ModelId, WorkspaceId, AnaplanUser). If you keep your variables at the top and then use them in your script, it's easier to edit those components because each lives in only one place. I highly recommend adding a variable for your Anaplan certificate; then you don't have to manually enter your password every time the script runs.

When you begin to piece together your own script, it will include some combination of Anaplan Connect commands (you can check out the full list in an appendix of the Quick Start Guide for Anaplan Connect, on Anapedia). Because my script was focused on importing data from an outside source into Anaplan, it included the following components: file, put, import, execute, and output. Each of these has a different function:

File identifies the file name (i.e. Workday.csv).
Put identifies the file path of the file you're importing (i.e. User/Admin/Documents/Workday.csv).
Import identifies the action Anaplan is supposed to run (i.e. Workday_Import).
Execute is what runs the process; nothing needs to follow this.
Output identifies what happens to errors. If you would like those to go to a file, then you include the location of the file following the output (i.e. User/Admin/Documents/ErrorLog.csv).

It's worth noting that you can have multiple actions behind a file. For instance, I can have a command sequence like this: file-put-import-execute-output-put-import-execute-output. I found this useful when I used a single file to update multiple lists and modules; it saved me from needing to upload the same file over and over again. When you are identifying the file path for the script, it is easiest to keep Terminal open: when you drag and drop a file into Terminal, it automatically populates the file path. This helps you avoid syntax errors, since you can copy and paste from Terminal into the script.

Once you assemble your commands, it's time to start testing your script! When you start testing, it is helpful to break the script into small pre-built test chunks that build on one another. That way, if something goes wrong, it won't take as long to find the error. Additionally, it makes the script more digestible in the event that it needs to be edited in the future. As you test each of these chunks, you may run into some errors; the sketch below shows a full script built from these commands, and a few troubleshooting tips follow it.
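Here is a minimal sketch of a complete import script built from those five commands. Everything in it is a placeholder, and the option names follow the example.sh that ships with Anaplan Connect, so verify them against your version's guide.

#!/bin/sh
# Minimal sketch of an Anaplan Connect import script; all IDs, file names,
# and paths are placeholders.
WorkspaceId="YourWorkspaceId"
ModelId="YourModelId"

# file -> put -> import -> execute -> output, as described above.
# (-certificate avoids typing a password on every run; swap in
# -user user@example.com for password-based login.)
Operation="-certificate MyAnaplanCert.cer \
 -workspace ${WorkspaceId} -model ${ModelId} \
 -file Workday.csv -put /Users/Admin/Documents/Workday.csv \
 -import Workday_Import -execute \
 -output /Users/Admin/Documents/ErrorLog.csv"

# Echo the assembled command, then run it.
Command="./AnaplanClient.sh ${Operation}"
echo "${Command}"
/bin/sh -c "${Command}"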
Now for the troubleshooting tips. If your terminal reports a syntax error, there is most likely a pesky apostrophe, a space, or some other special character in your script that is causing the error. Comb through the code, especially your file names, and find the error before attempting to run the script again. Second, you may run into a permissions error. These typically arise when your script file is not currently executable. When I encountered this error, changing the permissions on the file solved it. Overall, once you know these basics of Anaplan Connect, you can build a script, even a complicated one! When in doubt, see if somebody else has asked about a similar issue in the discussion section; if you don't find something there, you can always create your own question. Sometimes a second set of eyes is all you need, and our integrations site has some of the best in the biz contributing! Best of luck to the other rookies out there!
We're pleased to announce the latest release of the Anaplan Connector for Informatica Cloud, which includes the following features and enhancements:

The Connector now uses the settings in the proxy access configuration on the Secure Agent to send and receive data through a corporate firewall.

You can now specify an absolute file path to the error dump file in the Agent folder. Each Import Action generates a copy of the dump file (in .csv format) with details, including a date/time stamp, of all records that failed.

The following delimiters are now supported: comma, semi-colon, tab, and pipe. Other delimiters defined by an Anaplan admin in an import or export definition can also be used with the Connector.

You can now configure a DSS to upload a file without invoking an Action. This allows you to upload a file to Anaplan and run an import later.

Informatica DST session logs are now populated with the results of each task, regardless of whether the integration succeeds or fails.

You can now specify the chunk size for Imports (from 1 MB to 50 MB) to support fewer API calls. The chunk size for large data loads can be set to a higher value to help reduce load times and minimize errors during the load process.

Download the updated user guide from the Anaplan Connector for Informatica Cloud page in Anapedia. For more information, visit the Informatica Marketplace and search for 'Anaplan'.
Announcements
Hub Comes to You Paris

Hub Comes to You Paris is right around the corner! Learn more about the amazing lineup of speakers, training workshops, and networking opportunities we have planned.