If you have a multi-year model where the data range for different parts of the model varies (for example, history covering two years, current-year forecast, and three planning years), then Time Ranges can deliver significant gains in model size and performance. But before you rush headlong into implementing Time Ranges across all of your models, let me share a few considerations to ensure you maximize the value of the feature and avoid any unwanted pitfalls.

Naming Convention

As with all Anaplan models, there is no set naming convention; however, we do advocate consistency and simplicity. As with lists and modules, short names are good. I like to describe the naming convention as "as short as practical," meaning you need to understand what it means, but don't write an essay!

We recommend using the following convention: FYyy-FYyy. For example, FY16-FY18, or FY18 for a single year. The Time Ranges available span 1981 to 2079, so the "19" or "20" prefixes are not strictly necessary. Keeping the name this short has a couple of advantages:

It gives a clear indication of the boundaries of the Time Range
It is short enough to be visible in the module and line item blueprints

The aggregations available can differ for each Time Range and can also differ from the main model calendar. If you take advantage of this and have aggregations that differ from the model calendar, add a suffix to the name. For example:

FY16-FY19 Q (Quarter totals)
FY16-FY19 QHY (Quarter and Half Year totals)
FY16-FY19 HY (Half Year totals only)

Time Ranges are Static

Time Ranges can span from 1981 to 2079. As a result, they can exist entirely outside, within, or overlapping the model calendar. This means there will likely be some additional manual maintenance to perform when the year changes. Let's review a simple example:

Assume the model calendar is FY18 with two previous years and two future years, so the model calendar spans FY16-FY20. We have set up Time Ranges for historic data (FY16-FY17) and plan data (FY19-FY20), and we also have modules that use the model calendar to pull all of the history, forecast, and plan data together.

At year end, when we "roll over the model," we amend the model calendar simply by changing the current year to FY19, so the model calendar now spans FY17-FY21. The history and plan Time Ranges are now out of sync with the model calendar. How you change the history Time Range will depend on how much historical data you need or want to keep. Assuming you don't need more than two years' history, the history Time Range should be renamed FY17-FY18 and its start period advanced to FY17 (from FY16). Similarly, the plan Time Range should be renamed FY20-FY21 and advanced to FY20 (from FY19). FY18 is then available for history to be populated, and FY21 is available for plan data entry.

Time Range Pitfalls

Potential Data Loss

Time Ranges can bring massive space and calculation savings to your model(s), but be careful. In the example above, changing the Start Period of FY16-FY17 to FY17 would delete the FY16 data for all line items using FY16-FY17 as a Time Range. Before you implement a Time Range that is shorter than or lies outside the current model calendar, and especially when implementing Time Ranges for the first time, ensure that the current data stored in the model is not needed.
If in doubt, do some or all of the following:

Export the data to a file
Copy the existing data on the line item(s) to other line items that use the model calendar
Back up the entire model

Formula References

The majority of formulas will update automatically when you update a Time Range. However, if you have any hard-coded SELECT statements referencing years or months within the Time Range, you will have to amend or remove the formula before amending the Time Range. Hard-coded SELECT statements go against best practice for exactly this reason: they cause additional maintenance. We recommend replacing the SELECT with a LOOKUP against a Time Settings module. There are other cases where a formula may need to be removed or amended before the Time Range can be adjusted; see the Anapedia documentation for more details.

When to Use the Model Calendar

This is a good question, and one that we at Anaplan pondered during the development of the feature: do Time Ranges make the model calendar redundant? I think the answer is "no," but as with so many constructs in Anaplan, the answer probably is "it depends!" For me, a big advantage of the model calendar is that it is dynamic for the current year and the +/- years on either side. Change the current year and the model updates automatically, along with any filters and calculations you have set up to reference current-year periods, historical periods, future periods, etc. (You are using a central Time Settings module, aren't you?) Time Ranges don't have that dynamism, so any changes to the year will need to be made for each Time Range.

So, our advice before implementing Time Ranges for the first time is to review each module and assess the scope of the calculations. Think about the reduction Time Ranges will give in terms of space and calculation savings, but compare that with the annual maintenance. For example, if you have a two-year model with one history year (FY17) and the current year (FY18), you could set up a one-year Time Range for FY17 and another for FY18 and use these for the respective data sets. However, this would mean both Time Ranges would need to be updated every year.

We advocate building models logically, so it is likely that you will have groups of modules where Time Ranges fall naturally. The majority of the modules should reflect the model calendar. Once Time Ranges are implemented, you may be able to reduce the scope of the model calendar. If a potential Time Range matches either the current or future model calendar, leave the timescale as the default for those modules and line items; why make extra work?

SELECT Statements

As outlined above, we don't advocate hard-coded time SELECTs for the majority of time items because of the negative impact on maintenance (the exceptions being All Periods, YTD, YTG, and CurrentPeriod). When implementing Time Ranges for the first time, take the opportunity to review the line item formulas that contain time SELECTs. These formulas can be replaced with lookups using a Time Settings module.

Application Lifecycle Management (ALM) Considerations

As with the majority of Time settings, Time Ranges are treated as structural data. If you are using ALM, all changes must be made in the Development model and synchronized to Production. This makes it all the more important to heed the pitfalls noted above to ensure data is not inadvertently deleted.

Best of luck! Refer to the Anapedia documentation for more detail.
Please ask if you have any further questions and let us and your fellow Anaplanners know of the impact Time Ranges have had on your model(s).
Thinking through the results of a modeling decision is a key part of ensuring good model performance—in other words, making sure the calculation engine isn't overtaxed. This article highlights some ideas for how to lessen the load on the calculation engine.

Formulas should be simple; a formula that is nested, or uses multiple combinations, uses valuable processing time. Writing a long, involved formula makes the engine work hard, and seconds count when the user is staring at the screen. Simple is better. Breaking up formulas and using other options helps keep processing speeds fast. You must keep a balance when using these techniques in your models, so the guidance is as follows:

Break up the most commonly changed formulas
Break up the most complex formulas
Break up any formula whose purpose you can't explain in one sentence

Formulas with Many Calculated Components

The structure of a formula can have a significant bearing on the amount of calculation that happens when inputs in the model are changed. Consider a calculation for the Total Profit in an application. There are five elements that make up the calculation: Product Sales, Service Sales, Cost of Goods Sold (COGS), Operating Expenditure (Op Ex), and Rent and Utilities. Each of the different elements is calculated in a separate module, and a reporting module pulls the results together into a single Total Profit line item whose formula references all five source modules directly.

What happens when one of the components of COGS changes? Since all the source components are included in the formula, when anything within any of the components changes, the whole formula is recalculated. If there are a significant number of component expressions, this can put a larger overhead on the calculation engine than is necessary.

There is a simple way to structure the module to lessen the demand on the calculation engine: separate the input lines in the reporting module by creating a line item for each of the components, and add the Total Profit formula as a separate line item. This way, changes to the source data only cause the relevant line item to recalculate. For example, a change in the Product Sales calculation only affects the Product Sales and Total Profit line items in the reporting module; Service Sales, Op Ex, COGS, and Rent & Utilities are unchanged. Similarly, a change in COGS only affects COGS and Total Profit in the reporting module.

Keep the general guidelines in mind: it is not practical to have every downstream formula broken out into individual line items.

Plan to Provide Early Exits from Formulas

Conditional formulas (IF/THEN) present a challenge for the model builder: what is the optimal construction for the formula, without making it overly complicated and difficult to read or understand? The basic principle is to avoid making the calculation engine do more work than necessary. Try to set up the formula to finish the calculation as soon as possible, by putting the condition that is most likely to occur first. That way the calculation engine can quit processing the expression at the earliest opportunity.

As an example, consider a formula that evaluates seasonal marketing promotions. The summer promotion runs for three months and the winter promotion for two months. If the formula tests for the summer promotion first, then the winter promotion, and only falls through to zero for the remaining months, it is not optimal and takes longer to calculate: there are more months with no promotion than with either promotion, so the most frequent case is evaluated last. Reordering the conditions so that the most common outcome is tested first lets the formula exit after the first condition more often. There is an even better way to do this.
Following the principles from above, add another line item for no promotion, and test it first; the formula can then fall through to the summer promotion before the winter promotion. This is even better because the No Promo condition has already been calculated, and Summer Promo occurs more frequently than Winter Promo.

It is not always clear which condition will occur more frequently than others, but here are a few more examples of how to optimize formulas:

FINDITEM

The FINDITEM element of a formula works its way through the whole list looking for the text item, and if it does not find the referenced text, it returns blank. If the referenced text is blank, it also returns blank. Inserting a conditional expression at the beginning of the formula keeps the calculation engine from being overtaxed:

IF ISNOTBLANK(TEXT) THEN FINDITEM(LIST, TEXT) ELSE BLANK

or

IF ISBLANK(TEXT) THEN BLANK ELSE FINDITEM(LIST, TEXT)

Use the first expression if most of the referenced text contains data, and the second expression if there are more blanks than data.

LAG, OFFSET, POST, etc.

In some situations there is no need to lag or offset the data—for example, when the lag or offset parameter is 0, the value of the calculation is the same as for the period in question. Adding a conditional at the beginning of the formula helps eliminate unnecessary calculations:

IF lag_parameter = 0 THEN 0 ELSE LAG(Lineitem, lag_parameter, 0)

or

IF lag_parameter <> 0 THEN LAG(Lineitem, lag_parameter, 0) ELSE 0

The choice between the two depends on how likely 0s are to occur in the lag parameter.

Booleans

Avoid adding unnecessary clutter to line items formatted as Booleans. There is no need to include the TRUE or FALSE expression, as the condition itself evaluates to TRUE or FALSE. Use:

Sales > 0

instead of:

IF Sales > 0 THEN TRUE ELSE FALSE
PLANS is the new standard for Anaplan modeling—"the way we model." It covers more than just the formulas, and includes and evolves existing best practices around user experience and data hubs. It is a set of rules on the structure and detailed design of Anaplan models. This set of rules will provide both a clear route to good model design for the individual Anaplanner and common guidance on which Anaplanners and reviewers can rely when passing models amongst themselves.

In defining the standard, everything we do will consider or be based around:

Performance – Use the correct structures and formulas to optimize the Hyperblock
Logical – Build the models and formulas more logically – see D.I.S.C.O. below
Auditable – Break up formulas for better understanding, performance, and maintainability
Necessary – Don't duplicate expressions; store and calculate data and attributes once and reference them many times; don't have calculations on more dimensions than needed
Sustainable – Build with the future in mind, thinking about process cycles and updates

The standards will be based around three axes:

Performance – How do the structures and formulas impact the performance of the system?
Usability/Auditability – Is the user able to understand how to interact with the functionality?
Sustainability – Can the solution be easily maintained by model builders and support?

We will define the techniques to use that balance these three areas to ensure the optimal design of Anaplan models and architecture.

D.I.S.C.O.

As part of model and module design, we recommend categorizing modules as follows:

Data – Data hubs, transactional modules, source data; referenced everywhere
Inputs – Designed for user entry; minimize the mix of calculations and outputs
System – Time management, filters, list attribute modules, mappings, etc.; referenced everywhere
Calculations – Optimized for performance (turn summaries off, combine structures)
Outputs – Reporting modules; minimize data flow out

Why build this way?

Performance: fewer repeated calculations; optimized structures and formulas
Logical: data and calculations reside in logical places; model data flows can be easily understood
Auditable: model structure can be easily understood; simplified formulas (no need for complex expressions)
Necessary: formulas and structures are not repeated; data is stored and calculated once and referenced many times, leading to efficient calculations
Sustainable: models can be adapted and maintained more easily; expansion and scaling are simplified

Recommended content:

Performance: Dimension Order; Formula Optimization in Anaplan; Formula Structure for Performance
Logical: Best Practices for Module Design
Auditable: Formula Structure for Performance
Necessary: Reduce Calculations for Better Performance; Formula Optimization in Anaplan
Sustainable: Dynamic Cell Access Tips and Tricks; Dynamic Cell Access - Learning App; Personal Dashboards Tips and Tricks; Time Range Application; Ask Me Anything (AMA) sessions
Note: While all of these scripts have been tested and found to be fully functional, due to the vast number of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction.

Getting Started

Python 3 offers many options for interacting with an API. This article explains how you can use Python 3 to automate many of the requests that are available in our Apiary, which can be found at https://anaplan.docs.apiary.io/#. This article assumes you have the requests module (version 2.18.4) installed, along with the base64 and json modules, and that you are running Python 3.6.4. Please make sure you are installing these modules for Python 3, and not for an older version of Python. (If you are using a Python version other than 3.6.4, or a requests version other than 2.18.4, we cannot guarantee the validity of this article.) For more information on these modules, please see their respective websites: Python, Requests, Base Converter, and JSON (note: install instructions are not at that site, but are the same as for any other Python module).

Note: Please read the comments at the top of every script before use, as they detail more thoroughly the assumptions that each script makes.

Authentication

To start, let's talk about authentication. Every script that connects to our API must supply valid authentication. There are two ways to authenticate a Python script that I will be covering:

Certificate authentication
Basic encoded authentication

Certificate authentication requires a valid Anaplan certificate, which you can read more about here. Once you have your certificate saved locally, you will need openssl to convert it for use with the API. Convert the certificate to PEM format by running the following in your terminal:

openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem

If you are using certificate authorization, the scripts in this article assume you know the Anaplan account email associated with the certificate. If you do not know it, you can extract the common name (CN) from the PEM file by running the following in your terminal:

openssl x509 -text -in certtest.pem

To be used with the API, the PEM certificate string needs to be converted to base64, but the scripts we will be covering take care of that for you, so I won't cover it in this section. To use basic authentication, you will need to know the Anaplan account email that is being used, as well as the password. All scripts in this article have the following code near the top:

# Insert the Anaplan account email being used
username = ''

# If using cert auth, replace cert.pem with your pem converted certificate
# filename. Otherwise, remove this line.
cert = open('cert.pem').read()

# If using basic auth, insert your password. Otherwise, remove this line.
password = ''

# Uncomment your authentication method (cert or basic). Remove the other.
user = 'AnaplanCertificate ' + str(base64.b64encode((
    f'{username}:{cert}').encode('utf-8')).decode('utf-8'))

# user = 'Basic ' + str(base64.b64encode((f'{username}:{password}'
#     ).encode('utf-8')).decode('utf-8'))

Regardless of authentication method, you will need to set the username variable to the Anaplan account email being used.
If you are using a certificate to authenticate, you will need to have your PEM-converted certificate in the same folder or a child folder of the one you are running the scripts from. If your certificate is in a child folder, remember to include the file path when replacing cert.pem (e.g., cert/cert.pem). You can then remove the password line, its comments, and its respective user variable. If you are using basic authentication, you will need to set the password variable to your Anaplan account password, and you can remove the cert line, its comments, and its respective user variable.

Getting the Information Needed for Each Script

Most of the scripts covered in this article require you to know an ID or metadata for the file, action, etc., that you are trying to process. Each script that gets this information for its respective field is titled get_____.py. For example, if you want your files' metadata, you run getFiles.py, which writes the metadata for each file in the selected model and workspace to an array in a JSON file titled files.json. You can then open the JSON file, find the file you need to reference, and use the metadata from that entry in your other scripts. (Tip: if you open the raw data tab of the JSON file, it is much easier to copy the whole set of metadata.)

The following are the links to download each get____.py script. Each get script uses the requests.get method to send a GET request to the proper API endpoint:

getWorkspaces.py: writes an array to workspaces.json of all the workspaces the user has access to.
getModels.py: writes an array to models.json of all the models the user has access to (if wGuid is left blank), or of all the models the user has access to in a selected workspace (if a workspace ID was inserted).
getModelInfo.py: writes an array to modelInfo.json of all metadata associated with the selected model.
getFiles.py: writes an array to files.json of all metadata for each file the user has access to in the selected model and workspace. (Refer to the Apiary for more information on private vs. default files. It is generally recommended that all scripts be run via the same user account.)
getChunkData.py: writes an array to chunkData.json of all metadata for each chunk of the selected file in the selected model and workspace.
getImports.py: writes an array to imports.json of all metadata for each import in the selected model and workspace.
getExports.py: writes an array to exports.json of all metadata for each export in the selected model and workspace.
getActions.py: writes an array to actions.json of all metadata for all actions in the selected model and workspace.
getProcesses.py: writes an array to processes.json of all metadata for all processes in the selected model and workspace.

Uploads

A file can be uploaded to the Anaplan API endpoint either in multiple chunks or as a single chunk. Per our Apiary: we recommend that you upload files in several chunks. This enables you to resume an upload that fails before the final chunk is uploaded. In addition, you can compress files on the upload action; we recommend compressing single chunks that are larger than 50 MB. This creates a private file.

Note: To upload a file using the API, that file must already exist in Anaplan. If the file has not been previously uploaded, you must upload it initially using the Anaplan user interface. You can then carry out subsequent uploads of that file using the API.
Multiple Chunk Uploads

The reference script is built so that if the script is interrupted for any reason, or if any particular chunk of a file fails to upload, simply rerunning the script will resume the upload, starting at the last successful chunk. For this to work, the file must first be split using a standard naming convention, with the terminal command below:

split -b [numberofBytes] [path and filename] [prefix for output files]

You can store the file in any location as long as you use the proper file path when setting the chunkFilePrefix. For example, chunkFilePrefix = 'upload_chunks/chunk-' will look for file chunks named chunk-aa, chunk-ab, chunk-ac, and so on, up to chunk-zz, in the folder script_origin/upload_chunks/ (it is very unlikely that you will ever exceed chunk-zz). This lets the script know where to look for the chunks of the file to upload. You can download the script for running a multiple chunk upload from this link: chunkUpload.py

Note: The assumed naming conventions are only standard when using Terminal; they do not necessarily hold if the file was split using another method in Windows. If you are using Windows, you will need to either create a way to standardize the naming of the chunks alphabetically {chunkFilePrefix}(aa–zz) or run the script as detailed in the Apiary.

Note: The chunkUpload.py script keeps track of the last successful chunk by writing its name to a text file, chunkStop.txt. This file is deleted once the import completes successfully. If the file is modified between runs of the script, the script may not function correctly. Best practice is to leave the file alone, and delete it only if you want to restart the upload from the first chunk.

Single Chunk Upload

The single chunk upload should only be used if the file is small enough to upload in a reasonable time frame; if the upload fails, it has to start again from the beginning. If your file has a different name than the version on the server, you will need to modify line 31 ("name" : '') to reflect the name of the local file. This script runs a single PUT request to the API endpoint to upload the file. You can download the script for running a single chunk upload from this link: singleChunkUpload.py

Imports

The import.py script sends a POST request to the API endpoint for the selected import. You will need to set the importData value to the metadata for the import (see Getting the Information Needed for Each Script above). You can download the script for running an import from this link: Import.py

Once the import is finished, the script writes the metadata for the import task to an array in postImport.json, which you can use to verify which task you want to view the status of when running the importStatus.py script. The importStatus.py script returns a list of all tasks associated with the selected importID and their respective list indexes. If you want to check the status of the last-run import, check postImport.json to verify you have the correct taskID. Enter the index for the task, and the script writes the task status to an array in importStatus.json. If the task is still in progress, it prints the task status and progress. If the task finished and a failure dump is available, the script writes the failure dump in comma-delimited format to importDump.csv, which can be used to review the cause of the failure. If the task finished with no failures, you will get a message telling you the import has completed with no failures. You can download the script for importStatus.py from this link: importStatus.py

Note: If you check the status of a task with an old taskID for an import that has been run since you last checked it, the dump will no longer exist; importDump.csv will be overwritten with an HTTP error, and the status of the task will be 410 Gone.
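To make the request sequence concrete, here is a minimal sketch of starting an import and polling its task status with the requests module. The IDs and the Authorization header value are placeholders to take from the get scripts and the Authentication section above, and the JSON field names follow the Apiary documentation; verify them against the responses your model returns. This is an illustration of the pattern, not a replacement for the downloadable scripts.

import json
import time
import requests

# Placeholder values -- take the real IDs from workspaces.json, models.json,
# and imports.json produced by the get scripts above.
user = 'Basic <encoded_credentials>'
wGuid = '<workspace_id>'
mGuid = '<model_id>'
importId = '<import_id>'

base = f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}'

# Kick off the import task.
resp = requests.post(f'{base}/imports/{importId}/tasks',
                     headers={'Authorization': user,
                              'Content-Type': 'application/json'},
                     data=json.dumps({'localeName': 'en_US'}))
taskId = resp.json()['taskId']  # field name per the Apiary docs

# Poll until the task leaves the in-progress states.
while True:
    status = requests.get(f'{base}/imports/{importId}/tasks/{taskId}',
                          headers={'Authorization': user}).json()
    state = status.get('taskState', '')  # response shape may vary; check your JSON
    print(state)
    if state not in ('NOT_STARTED', 'IN_PROGRESS'):
        break
    time.sleep(5)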
Exports

The export.py script sends a POST request to the API endpoint for the selected export. You will need to set the exportData value to the metadata for the export (see Getting the Information Needed for Each Script above). You can download the script for running an export from this link: Export.py

Once the export is finished, the script writes the metadata for the export task to an array in postExport.json, which you can use to verify which task you want to view the status of when running the exportStatus.py script. The exportStatus.py script returns a list of all tasks associated with the selected exportID and their respective list indexes. If you want to check the status of the last-run export, check postExport.json to verify you have the correct taskID. Enter the index for the task, and the script writes the task status to an array in exportStatus.json. If the task is still in progress, it prints the task status and progress. It is important to note that no failure dump is generated if the export fails. You can download the script for exportStatus.py from this link: exportStatus.py

Actions

The action.py script sends a POST request to the API endpoint for the selected action (for use with actions other than imports or exports). You will need to set the actionData value to the metadata for the action (see Getting the Information Needed for Each Script above). You can download the script for running an action from this link: action.py

Processes

The process.py script sends a POST request to the API endpoint for the selected process. You will need to set the processData value to the metadata for the process (see Getting the Information Needed for Each Script above). You can download the script for running a process from this link: Process.py

Once the process is finished, the script writes the metadata for the process task to an array in postProcess.json, which you can use to verify which task you want to view the status of when running the processStatus.py script. The processStatus.py script returns a list of all tasks associated with the selected processID and their respective list indexes. If you want to check the status of the last-run process, check postProcess.json to verify you have the correct taskID. Enter the index for the task, and the script writes the task status to an array in processStatus.json. If the task is still in progress, it prints the task status and progress. If the task finished and a failure dump is available, the script writes the failure dump in comma-delimited format to processDump.csv, which can be used to review the cause of the failure. It is important to note that no failure dump is generated for the process itself, only for any imports within the process that failed. If the task finished with no failures, you will get a message telling you the process has completed with no failures.
You can download the script for processStatus.py from this link: processStatus.py

Downloading a File

Downloading a file from the Anaplan API endpoint downloads the file in however many chunks it exists in on the endpoint. It is important to note that you should set the fileName variable to the name the file has in the file metadata. First, the download's individual chunk metadata is written to an array in downloadChunkData.json for reference. The script then downloads the file chunk by chunk and writes each chunk to a new local file with the same name as the 'name' listed in the file's metadata. You can download this script from this link: downloadFile.py

Note: If a file already exists in the same folder as your script with the same name as the name value in the file's metadata, the local file will be overwritten by the file being downloaded from the server.

Deleting a File

You can delete the file contents of any file that the user has access to on the Anaplan server. Note: this only removes private content; default content and the import data source model object will remain. You can download this script from this link: deleteFile.py

Standalone Requests Code and Their Required Headers

In this section, I will list the code for each request detailed above, including the API URL and the headers necessary to complete the call. I will leave the content to the right of the Authorization: header blank. Authorization header values can be either Basic encoded_username:password or AnaplanCertificate encoded_CommonName:PEM_Certificate_String (see Certificate-Authorization-Using-the-Anaplan-API for more information on encoded certificates).

Note: requests.get only generates a response body from the server; no data is saved locally unless written to a local file.

Get Workspaces List

requests.get('https://api.anaplan.com/1/3/workspaces/', headers={'Authorization':})

Get Models List

requests.get('https://api.anaplan.com/1/3/models/', headers={'Authorization':})

or

requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models', headers={'Authorization':})

Get Model Info

requests.get(f'https://api.anaplan.com/1/3/models/{mGuid}', headers={'Authorization':})

Get Files/Imports/Exports/Actions/Processes List

The GET requests for files, imports, exports, actions, and processes are largely the same. Change files to imports, exports, actions, or processes to run each.
requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files', headers={'Authorization':})

Get Chunk Data

requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks', headers={'Authorization':})

Post Chunk Count

requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData})

Upload a Chunk of a File

requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local chunk file})

Mark an Upload Complete

requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/complete', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData})

Upload a File in a Single Chunk

requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local file})

Run an Import/Export/Process

The POST requests for imports, exports, and processes are largely the same. Change imports to exports or processes to run each.

requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{Id}/tasks', headers={'Authorization': , 'Content-Type': 'application/json'}, data=json.dumps({'localeName': 'en_US'}))

Run an Action

requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/actions/{Id}/tasks', data={'localeName': 'en_US'}, headers={'Authorization': , 'Content-Type': 'application/json'})

Get Task List for an Import/Export/Action/Process

The GET requests for import, export, action, and process task lists are largely the same. Change imports to exports, actions, or processes to get each task list.

requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{importID}/tasks', headers={'Authorization':})

Get Status for an Import/Export/Action/Process Task

The GET requests for import, export, action, and process task statuses are largely the same. Change imports to exports, actions, or processes to get each task status. Note: only imports and processes will ever generate a failure dump.

requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{ID}/tasks/{taskID}', headers={'Authorization':})

Download a File

Note: You will need to get the chunk metadata for each chunk of a file you want to download.

requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkID}', headers={'Authorization': , 'Accept': 'application/octet-stream'})

Delete a File

Note: This only removes private content. Default content and the import data source model object will remain.

requests.delete('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/json'})

Note: SFDC user administration is not covered in this article, but the same concepts from the scripts provided can be applied to SFDC user administration. For more information on SFDC user administration, see the Apiary entry for SFDC user administration.
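Pulling the header conventions above together, the following is a minimal, self-contained sketch that authenticates with basic auth and lists workspaces and models. The credentials are placeholders, and the 'id' and 'name' field names are taken from the Apiary documentation; treat this as an illustration of the request pattern rather than one of the downloadable scripts.

import base64
import json
import requests

# Placeholder credentials -- substitute a real Anaplan account email and password.
username = 'user@example.com'
password = 'your_password'
user = 'Basic ' + base64.b64encode(f'{username}:{password}'.encode('utf-8')).decode('utf-8')
headers = {'Authorization': user}

# List the workspaces the account can access and save them for later reference.
workspaces = requests.get('https://api.anaplan.com/1/3/workspaces/', headers=headers).json()
with open('workspaces.json', 'w') as f:
    json.dump(workspaces, f, indent=2)

# List the models in the first workspace returned.
wGuid = workspaces[0]['id']  # field name per the Apiary docs
models = requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models', headers=headers).json()
print([m['name'] for m in models])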
How do we keep our users in the Anaplan platform to do work that requires a high level of advanced customization, and make it faster and easier than their previous Excel environment? The solution is called "Smart Filters." Check it out!
Learn how small changes can lead to dramatic improvements in model calculations.
You can interact with the data in your models using Anaplan's RESTful API. This enables you to securely import and export data, as well as run actions, in any programmatic way you desire. The API can be leveraged in any custom integration, allowing for a wide range of integration solutions to be implemented. Completing an integration using the Anaplan API is a technical process that requires significant work by an individual with programming experience. Visit the links below to learn more:

API Documentation
Anaplan API Guide

You can also view demonstration videos to understand how to implement the API in your custom integration client. The videos give step-by-step guides to sequencing API calls for:

Uploading a file to Anaplan and running an import action
Running an export action and downloading a file from Anaplan
Running an Anaplan process and a delete action
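As a sketch of the first sequence (upload a file, then run an import), the calls look roughly like this using Python's requests library. The endpoint paths follow the v1.3 API; the workspace, model, file, and import IDs, and the Authorization header value, are placeholders you would look up for your own model.

import json
import requests

# Placeholders -- substitute your own IDs and a valid Authorization header value.
auth = 'Basic <encoded_credentials>'
base = 'https://api.anaplan.com/1/3/workspaces/<wGuid>/models/<mGuid>'

# 1. Upload the source file as a single chunk.
with open('Sample.csv', 'rb') as f:
    requests.put(f'{base}/files/<fileID>',
                 headers={'Authorization': auth,
                          'Content-Type': 'application/octet-stream'},
                 data=f.read())

# 2. Run the import action that reads the uploaded file.
task = requests.post(f'{base}/imports/<importID>/tasks',
                     headers={'Authorization': auth,
                              'Content-Type': 'application/json'},
                     data=json.dumps({'localeName': 'en_US'}))
print(task.json())  # includes the task ID to poll for status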
Overview

The Anaplan Optimizer aids business planning and decision making by quickly solving complex problems involving millions of combinations to provide a feasible solution. Optimization provides a solution for selected variables within your Anaplan model that matches your objective based on your defined constraints. The Anaplan model must be structured and formatted to enable Optimizer to produce the correct solution. You are welcome to read through the materials and watch the videos on this page, but Optimizer is a premium service offered by Anaplan (contact your Account Executive if you don't see Optimizer as an action on the Settings tab). This means that you will not be able to actually do the training exercises until the feature is turned on in your system.

Training

The training involves an exercise along with documentation and videos to help you complete it. The goal of the exercise is to set up the optimization for two use cases: network optimization and production optimization. To assist you in this process, we have created an optimization exercise guide document that walks you through each of the steps. To further help, we have created three videos you can reference:

An exercise walk-through
A demo of each use case
A demo of setting up dynamic time

Follow the order of the items listed below to assist with understanding how Anaplan's optimization process works:

Watch the use case video, which demos the Optimizer functionality in Anaplan
Watch the exercise walkthrough video
Review documentation about how Optimizer works within Anaplan
Attempt the Optimizer exercise
Download the exercise walkthrough document
Download the Optimizer model into your workspace

How to configure dynamic time within Optimizer:

Download the Dynamic Time document
Watch the Dynamic Time video
Attempt the Network Optimization exercise
Attempt the Production Optimization exercise
The Bring Your Own Key (BYOK) application now lets you take ownership of the encryption keys for your model data. If you have access to the Anaplan Administration tool, you can encrypt and decrypt selected workspaces using your own AES-256 keys. Unlike system master keys, the keys created with BYOK belong to you, and you are responsible for securing them. There is no mechanism by which Anaplan staff can access your keys. See the Bring Your Own Key (BYOK) User Guide. Bring Your Own Key (BYOK) is an add-on product that your organization can purchase if it has the Enterprise edition.
Note: The following information is also attached as a PDF for downloading and using offline.

Overview

The process of designing a model will help you:

Understand the customer's problem more completely
Bring to light any incorrect assumptions you may have made, allowing for correction before building begins
Provide the big-picture view for building (if you were working on an assembly line building fenders, wouldn't it be helpful to see what the entire car looked like?)

Step 1: Understand the requirements and the customer's technical ecosystem

When you begin a project, gather information and requirements using a number of tools. These include:

Statement of Work (SOW): definition of the project scope and project objectives/high-level requirements
Project Manifesto: the goal of the project—the big-picture view of what needs to be accomplished
IT ecosystem: Which systems will provide data to the model, and which systems will receive data from the model? What is the Anaplan piece of the ecosystem?
Current business process: if the current process isn't working, it needs to be fixed before design can start
Business logic: what key pieces of business logic will be included in the model?
Whether a distributed model is needed, due to: high user concurrency; security needs that call for a separate model; or regional differences that are better handled by a separate model
Whether the organization is using ALM, requiring split or similar models to effectively manage development, testing, deployment, and maintenance of applications (this functionality requires a premium subscription or above)
User stories: these have been written by the client—more specifically, by the subject matter experts (SMEs) who will be using the model

Why do this step? To solve a problem, you must completely understand the current situation. Performing this step provides this information and the first steps toward the solution.

Results of this step:

Understand the goal of the project
Know the organizational structure and reporting relationships (hierarchies)
Know where data is coming from and have an idea of how much data clean-up might be needed
Know whether any of the data is organized into categories (for example, product families) and what data relationships exist that need to be carried through to the model (for example, salespeople only sell certain products)
Know what lists currently exist and where they are housed
Know which systems the model will either import from or export to
Know what security measures are expected
Know what time and version settings are needed

Step 2: Document the user experience

Front-to-back design has been identified as the preferred method for model design. This approach puts the focus on the end user experience. We want that experience to align with the process so users can easily adapt to the model. During this step, focus on:

User roles: who are the users?
Identifying the business process that will be done in Anaplan
Reviewing and documenting the process for each role: the main steps; if available, utilize user stories to map the process

You can document this in any way that works for you. Here is a step-by-step process you can try:

What are the start and end points of the process?
What is the result or output of the process?
What does each role need to see/do in the process?
What are the process inputs and where do they come from?
What are the activities the user needs to engage in? Verb/object—approve request, enter sales amount, etc. Do not organize during this step; use post-its to capture them.
Take the activities you captured and put them in the correct sequence. Are there different roles for any of these activities? If yes, assign a role to each activity. Transcribe the process using PowerPoint® or Lucidchart; if there are multiple roles, use swim lanes to identify the roles. Check with SMEs to ensure accuracy.

Once the user process has been mapped out, do a high-level design of the dashboards. Include:

The information needed: what data does the user need to see?
What the user is expected to do, or the decisions that the user makes

Share the dashboards with the SMEs. Does the process flow align?

Why do this step? This is probably the most important step in the model design process. It may seem as though it is too early to think about the user experience, but ultimately the information or data that the user needs to make a good business decision is what drives the entire structure of the model. On some projects, you may be working with a project manager or a business consultant to flesh out the business process for the user. You may have user stories, or it may be that you are working on design earlier in the process and the user stories haven't been written. In any case, identify the user roles and the business process that will be completed in Anaplan, and create a high-level design of the dashboards. Verify those dashboards with the users to ensure that you have the correct starting point for the next step.

Results of this step:

List of user roles
Process steps for each user role
High-level dashboard design for each user role

Step 3: Use the designed dashboards to determine what output modules are necessary

Here are some questions to help you think through the definition of your output modules:

What information (and in what format) does the user need to make a decision?
If the dashboard is for reporting purposes, what information is required?
If the module is to be used to add data, what data will be added and how will it be used?
Are there modules that will serve to move data to another system? What data, and in what format, is necessary?

Why do this step? These modules are necessary for supporting the dashboards or exporting to another system. This is what should guide your design—all of the inputs and drivers added to the design are added with the purpose of providing these output modules with the information needed for the dashboards or exports.

Results of this step:

List of outputs and the desired format needed for each dashboard

Step 4: Determine what modules are needed to transform inputs into the data needed for outputs

Typically, the data at the input stage requires some transformation. This is where business rules, logic, and/or formulas come into play:

Some modules will be used to translate data from the data hub. Data is imported into the data hub without properties, and modules are used to import the properties.
Reconciliation of items takes place before importing the data into the spoke model.
These are driver modules that include business logic and rules.

Why do this step? Your model must translate data from the input to what is needed for the output.

Results of this step:

Business rules/calculations needed

Step 5: Create a model schema

You can whiteboard your schema, but at some point in your design process, your schema must be captured in an electronic format. It is one of the required pieces of documentation for the project and is also used during the Model Design Check-in, where a peer checks over your model and provides feedback.
Identify the inputs, outputs, and drivers for each functional area
Identify the lists used in each functional area
Show the data flow between the functional areas
Identify time and versions where appropriate

Why do this step? It is required as part of The Anaplan Way process. You will build your model design skills by participating in a Model Design Check-in, which allows you to talk through the tougher parts of the design with a peer. More importantly, designing your model using a schema means that you must think through all of the information you have about the current situation, how it all ties together, and how you will get to an experience that meets the exact needs of the end user without fuss or bother.

Result of this step: a model schema that provides the big-picture view of the solution. It should include imports from other systems or flat files, and the modules or functional areas that are needed to take the data from its current state to what is needed to support the dashboards identified in Step 2. Time and versions should be noted where required. Include the lists that will be used in the functional areas/modules. Your schema will be used to communicate your design to the customer, model builders, and others. While you do not need to include calculations and business logic in the schema, it is important that you understand the state of the data going into a module, the changes or calculations that are performed in the module, and the state of the data leaving the module, so that you can effectively explain the schema to others. For more information, check out 351 Schemas. This 10- to 15-minute course provides basic information about creating a model schema.

Step 6: Verify that the schema aligns with basic design principles

When your schema is complete, give it a final check to ensure:

It is simple. "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius—and a lot of courage—to move in the opposite direction." ― Ernst F. Schumacher. "Design should be easy in the sense that every step should be obviously and clearly identifiable. Simplify elements to make change simple so you can manage the technical risk." — Kent Beck
The model aligns with the manifesto.
The business process is defined and works well within the model.
Allowing model users to export data out of an Anaplan model on a large scale (e.g., many end-user-run exports) is not good practice. One approach is to create an "export model" in Anaplan that exists specifically for exporting purposes. This export model has the same data set and selective access definitions as the main model, but none of the data entry or reporting dashboards; it only has dashboards with buttons that run specific exports. To ensure a good user experience, provide a hyperlink to the export model from a dashboard in the main model. For example, users start from their usual main model, see a link named "Exports," and click it. The link redirects them to the export model, where they see a set of predefined buttons that run exports. It is important to explain to the customer and model users that:

Exports execute sequentially (first in, first served): users have to wait until previously executed exports finish before they can run their own export.
There will be data latency, as the export model will likely sync only once or twice a day from the main model.
The export from the main model to the export model is a blocking operation and should be run at times when it is least likely to disrupt operations. Users will need to understand the schedule and plan their exports accordingly.
When users run exports, there can be misalignment of data that causes issues in the business process. If users export out of a dashboard, it is most often for custom reporting purposes. During this, the user filters, sorts, creates sums via a pivot table, uses lookups for attributes, and displays additional data; all of these are certainly needed for reporting. In the worst case, the user creates additional KPIs or ratios that they could not find in the Anaplan model. Next, the user copies all of this data into a PowerPoint® deck, makes additional formatting changes, adds comments on the numbers or variances, and presents the deck to their meeting attendees.

The user has spent a few days doing the tasks described above, and within those few days the model has changed: data might have changed, structures might have changed, some calculations might have changed, new calculations may now be available, and maybe even user access has changed. Now this user's deck is misaligned; they are presenting data, analysis, and conclusions that may be irrelevant, or that conflict with those of another presenter at the same meeting who exported from the platform at a different time.

At this point, the executive sponsor may ask why they invested in a great platform like Anaplan if the organization is still having the same issues as before, with shadow processes frequently occurring—not to mention that the additional accurate and insightful comments included in the deck are now disconnected from the rest of the data, and will remain buried in people's emails instead of being available to all. Then the next week, or next month, the meeting happens again, and all the work of extracting, reformatting, recalculating, copying/pasting, and commenting needs to be done again.
Each time a user runs an import or an export, it affects platform performance: it blocks all other users of the model from performing any tasks while the import or export runs. It also creates what is called a toaster message: basically, a blue box at the top of the Anaplan screen that indicates to every connected user that the platform is processing an action. Anyone who frequently exports out of Anaplan will likely become very unpopular among the users of the model, especially if the exports last more than a few seconds. Users who are not workspace administrators can:

Export data out of a module within a dashboard
Run an import prepared by an administrator
Run a process that an administrator has prepared (a process can combine a number of imports and exports)
This article covers the steps necessary to migrate your Anaplan Connect (AC) 1.3.x.x script to Anaplan Connect 1.4. For more details and examples, refer to the Anaplan Connect User Guide v1.4. The changes are:

New connectivity parameters
Replacing references to the Anaplan certificate with Certificate Authority (CA) certificates, using new parameters
Optional chunk size and retry parameters
Changes to the JDBC configuration

New Connectivity Parameters

Add the following parameters to your Anaplan Connect 1.4 integration scripts. These parameters provide connectivity to Anaplan and the Anaplan authentication services. Both of the URLs listed below need to be whitelisted with your network team.

-service "https://api.anaplan.com/"
-auth "https://auth.anaplan.com"

Certificate Changes

As noted in our Anaplan-generated Certificates to Expire December 10, 2018 blog post, new and updated Anaplan integration options support Certificate Authority (CA) certificates for authentication. Basic authentication is still available in Anaplan Connect 1.4; however, the use of certificates has changed. In Anaplan Connect 1.3.x.x, the script references the full path to the certificate file. For example:

-certificate "/Users/username/Documents/AnaplanConnect1.4/certificate.pem"

In Anaplan Connect 1.4, the CA certificate must be stored in a Java KeyStore (JKS). Refer to this video for a walkthrough of the process of getting the CA certificate into the key store. You can also refer to the Anaplan Connect User Guide v1.4 for steps to create the Java KeyStore. Once you have imported the key into the JKS, make note of this information:

The path to the JKS (the directory path on the server where the JKS is saved)
The password of the JKS
The alias of the certificate within the JKS

For example:

KeyStorePath="/Users/username/Documents/AnaplanConnect1.4/my_keystore.jks"
KeyStorePass="your_password"
KeyStoreAlias="keyalias"

To pass these values to Anaplan Connect 1.4, use these command line parameters:

-keystore {KeystorePath}
-keystorealias {KeystoreAlias}
-keystorepass {KeystorePass}

Chunk Size

Anaplan Connect 1.4 allows for custom chunk sizes on files being imported. The -chunksize parameter can be included in the call, with the value being the size of the chunks in megabytes:

-chunksize {SizeInMBs}

Retry

Anaplan Connect 1.4 allows the client to retry requests to the server in the event that the server is busy. The -maxretrycount parameter defines the number of times the process retries the action before exiting. The -retrytimeout parameter is the time in seconds that the process waits before the next retry.

-maxretrycount {MaxNumberOfRetries}
-retrytimeout {TimeoutInSeconds}

Changes to JDBC Configuration

With Anaplan Connect 1.3.x.x, the parameters and query for using JDBC are stored within the Anaplan Connect script itself. For example:

Operation="-file 'Sample.csv' -jdbcurl 'jdbc:mysql://localhost:3306/mysql?useSSL=false' -jdbcuser 'root:Welcome1' -jdbcquery 'SELECT * FROM py_sales' -import 'Sample.csv' -execute"

With Anaplan Connect 1.4, the parameters and query for using JDBC have been moved to a separate file. The name of that file is then added to the AnaplanClient call using the -jdbcproperties parameter. For example:

Operation="-auth 'https://auth.anaplan.com' -file 'Sample.csv' -jdbcproperties 'jdbc_query.properties' -chunksize 20 -import 'Sample.csv' -execute"

To run multiple JDBC calls in the same operation, a separate jdbcproperties file is needed for each query.
Each set of calls in the operation should include the following parameters: -file, -jdbcproperties, -import, and -execute. For example:

Operation="-auth 'https://auth.anaplan.com' -file 'SampleA.csv' -jdbcproperties 'SampleA.properties' -chunksize 20 -import 'SampleA Load' -execute -file 'SampleB.csv' -jdbcproperties 'SampleB.properties' -chunksize 20 -import 'SampleB Load' -execute"

JDBC Properties File

Below is an example of the JDBC properties file. Refer to the Anaplan Connect User Guide v1.4 for more details on the properties shown below. If the query statement is long, it can be broken up over multiple lines by using the \ character at the end of each line. No \ is needed on the last line of the statement. The \ must be at the end of the line, and nothing can follow it.

jdbc.connect.url=jdbc:mysql://localhost:3306/mysql?useSSL=false
jdbc.username=root
jdbc.password=Welcome1
jdbc.fetch.size=5
jdbc.isStoredProcedure=false
jdbc.query=select * \
from mysql.py_sales \
where year = ? and month !=?;
jdbc.params=2018,04

Anaplan Connect Windows BAT Script Example (with Cert Auth)

@echo off
rem This example lists a user's workspaces
set ServiceLocation="https://api.anaplan.com/"
set Keystore="C:\Your Cert Name Here.jks"
set KeystoreAlias=""
set KeystorePassword=""
set WorkspaceId="Enter WS ID Here"
set ModelId="Enter Model ID here"
set Operation=-service "https://api.anaplan.com" -auth "https://auth.anaplan.com" -W
rem *** End of settings - Do not edit below this line ***
setlocal enableextensions enabledelayedexpansion || exit /b 1
cd %~dp0
set Command=.\AnaplanClient.bat -s %ServiceLocation% -k %Keystore% -ka %KeystoreAlias% -kp %KeystorePassword% -workspace %WorkspaceId% -model %ModelId% %Operation%
@echo %Command%
cmd /c %Command%
pause

Anaplan Connect Shell Script Example (with Cert Auth)

#!/bin/sh
KeyStorePath="/path/Your Cert Name.jks"
KeyStorePass=""
KeyStoreAlias=""
WorkspaceId="Enter WS ID Here"
ModelId="Enter Model Id Here"
Operation="-service 'https://api.anaplan.com' -auth 'https://auth.anaplan.com' -W"
#________________ Do not edit below this line __________________
# Build the credentials arguments when a keystore path is configured.
if [ "${KeyStorePath}" ]; then
    Credentials="-keystore ${KeyStorePath} -keystorepass ${KeyStorePass} -keystorealias ${KeyStoreAlias}"
fi
echo cd "`dirname "$0"`"
cd "`dirname "$0"`"
if [ ! -f AnaplanClient.sh ]; then
    echo "Please ensure this script is in the same directory as AnaplanClient.sh." >&2
    exit 1
elif [ ! -x AnaplanClient.sh ]; then
    echo "Please ensure you have executable permissions on AnaplanClient.sh." >&2
    exit 1
fi
Command="./AnaplanClient.sh ${Credentials} ${Operation}"
/bin/echo "${Command}"
exec /bin/sh -c "${Command}"
Anaplan Connect v1.3.3.5 is now available. 
Audience: Anaplan Internal and Customers/Partners

Workiva Wdesk Integration Is Now Available

We are excited to announce the general availability of Anaplan's integration with Workiva's product, Wdesk. Wdesk easily imports planning, analysis, and reporting data from Anaplan to deliver integrated narrative reporting, compliance, planning, and performance management on the cloud. The platform is utilized by over 3,000 organizations for SEC reporting, financial reporting, SOX compliance, and regulatory reporting.

The Workiva and Anaplan partnership delivers enterprise compliance and performance management on the cloud. Workiva Wdesk, the leading narrative reporting cloud platform, and Anaplan, the leading connected-planning cloud platform, offer reliable, secure integration to address high-value use cases in the last mile of finance, financial planning and analysis, and industry-specific regulatory compliance.

GA Launch: March 5th

How does the Workiva Wdesk integration work?

Please contact Will Berger, Partnerships (william.berger@workiva.com) from Workiva to discuss how to enable the integration. Anaplan reports will feed into the Wdesk platform. Wdesk integrates with Anaplan via Wdesk Connected Sheets, a connection built and maintained by Workiva.

What use cases are supported by the Workiva Wdesk integration?

The integration supports a number of use cases, including:

Last mile of finance: Complete regulatory reporting and filing as part of the close, consolidate, report, and file process. Workiva automates and structures the complete financial reporting cycle and pulls consolidated actuals from Anaplan.

Financial planning and analysis: Complex multi-author, narrative reports that combine extensive commentary and data, such as budget books, board books, briefing books, and other FP&A management and internal reports. Workiva creates timely, reliable narrative reports pulling actuals, targets, and forecast data from Anaplan.

Industry-specific regulatory compliance and extensive support of XBRL and iXBRL: Workiva is used to solve complex compliance and regulatory reporting requirements in a range of industries. In banking, Workiva supports documentation processes such as CCAR, DFAST, and RRP, pulling banking stress-test data from Anaplan. Workiva is also the leading provider of XBRL software and services, accounting for more than 53% of XBRL facts filed with the SEC in the first quarter of 2017.
This guide assumes you have set up your runtime environment in Informatica Cloud (Anaplan Hyperconnect) and that the agent is up and running. It focuses solely on how to configure the ODBC connection and set up a simple synchronization task importing data from one table in PostgreSQL to Anaplan. Informatica Cloud has richer features that are not covered here; the built-in help is contextual and helpful as you go along, should you need more information. The intention is to help you set up a simple import from PostgreSQL to Anaplan, so this guide is deliberately kept short and does not cover all related areas.

This guide also assumes you have already run an import using a CSV file, as this needs to be referenced when the target connection is set up (described under section 2.2 below). To prepare, I exported the data I wanted to use for the import from PostgreSQL to a CSV file, mapped this CSV file to Anaplan, and ran an initial import to create the import action that is needed.

1. Set up the ODBC connection for PostgreSQL

In this example I am using the 64-bit version of the ODBC connection running on my local laptop. I have set it up for User DSN rather than System DSN, but the process is very similar should you need to set up a System DSN. You will need to download the relevant ODBC driver from PostgreSQL and install it to be able to add it to your ODBC Data Sources (click the Add… button and you should be able to select the downloaded driver).

Clicking the configuration button for the ODBC data source opens the configuration dialogue. The configurations needed are:

Database: the name of your PostgreSQL database.
Server: the address of your server. As I am setting this up on my laptop, it is localhost.
User Name: the username for the PostgreSQL database.
Password: the password for the PostgreSQL database.
Port: the port used by PostgreSQL. You will find this if you open PostgreSQL.

Testing the connection should not return any errors.

2. Configuring source and target connections

After setting up the ODBC connection as described above, you will need to set up two connections: one to PostgreSQL and one to Anaplan. Follow the steps below to do this.

2.1 Source connection – PostgreSQL ODBC

Select Configure > Connection in the menu bar to configure a connection.

Name your connection and add a description.
Select type: ODBC.
Select the runtime environment that will be used to run this. In this instance I am using my local machine.
Insert the username for the database (the same one you used to set up the ODBC connection).
Insert the password for the database (the same one you used to set up the ODBC connection).
Insert the data source name. This is the name of the ODBC connection you configured earlier.
The code page needs to correspond to the character set you are using.

Testing the connection should confirm success. If so, you can click Done.

2.2 Set up target connection – Anaplan

The second connection that needs to be set up is the connection from Informatica Cloud to Anaplan.

Name your connection and add a description if needed.
Select type: AnaplanV2.
Select the runtime environment that will be used to run this. In this instance I am using my local machine.
Auth type: I am using Basic Auth, which requires your Anaplan user credentials.
Insert the Anaplan username.
Insert the Anaplan password.
Certification Path location: leave blank if you use Basic Auth.
Insert the workspace ID (open your Anaplan model and select Help > About).
Insert the model ID (found in the same way as the workspace ID).
I have left the remaining fields at their default settings.

Testing the connection should not return any errors.

3. Task wizard – Data synchronization

The next step is to set up a data synchronization task to connect the PostgreSQL source to the Anaplan target. Select Task Wizards in the menu bar and navigate to Data Synchronization.

This opens the task wizard, starting with defining the data synchronization task. Name the task and select the relevant task operation. In this example I have selected Insert, but other task operations are available, such as update and upsert.

Click Next for the next step in the workflow, which is to set up the connection to the source. Start by selecting the connection you defined above under section 2.1. In this example I am using a single table as the source and have therefore selected Single Source. With this connection you can select the source object with the Source Object drop-down. This gives you a data preview so you can validate that the source is defined correctly. The source object corresponds to the table you are importing from.

The next step is to define the target connection, using the connection that was set up under section 2.2 above. The target object is the import action created from the CSV file in the preparation step described at the start of this guide. The wizard shows a preview of the target module columns.

The next step in the process is Data Filters, which has both a Simple and an Advanced mode. I am not using any data filters in this example; please refer to the built-in help for further information on how to use them.

In the field mapping you will either need to map the fields manually or have them mapped automatically, depending on whether the names in the source and target correspond. If you map manually, drag and drop the fields from the source to the target. Once done, select Validate Mapping to check that no errors are generated from the mapping.

The last step is to define whether to use a schedule to run the task. You also have the option to insert pre-processing and post-processing commands and any parameters for your mapping. Please refer to the built-in help for guidance on this.

After running the task, the activity log will confirm whether the import ran without errors or warnings.

As mentioned initially, this is a simple guide to help you set up a simple, single-source import. Informatica Cloud does have more advanced options as well, both for mappings and transformations.
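As a side note, the CSV file used in the preparation step can be exported directly from PostgreSQL with psql rather than manually. A minimal sketch, assuming a hypothetical database named mydb and a table named sales_data:

psql -d mydb -c "\copy (SELECT * FROM sales_data) TO 'sales_data.csv' WITH (FORMAT csv, HEADER)"

The resulting file can then be mapped to Anaplan to create the import action referenced as the target object above.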
Summary

Anaplan Connect is a command-line client for the Anaplan cloud-based planning environment. It is a Java-based utility that can perform a variety of commands, such as uploading and downloading data files, executing relational SQL queries (for loading into Anaplan), and running Anaplan actions and processes. To enhance the deployment of Anaplan Connect, it is important to be able to integrate the trapping of error conditions, enable the ability to retry the Anaplan Connect operation, and integrate email notifications. This article provides best practices on how to incorporate these capabilities.

This article leverages the standard Windows command-line batch script and documents the various components and syntax of the script. In summary, the script has the following main components:

Set variable values such as exit codes, Anaplan Connect login parameters, and operations and email parameters
Run commands prior to running Anaplan Connect commands
Main loop block for multiple retries
Establish a log file based upon the current date and loop number
Run the native Anaplan Connect commands
Search for string criteria to trap error conditions
Branching logic based upon the discovery of any trapped error conditions
Send email success or failure notification of Anaplan Connect run status
Logic to determine if a retry is required
End main loop block
Run commands after running Anaplan Connect commands
Exit the script

Section #1: Setting Script Variables

The following section of the script establishes and sets variables that are used in the script. The first three lines perform the following actions:

Clears the screen
Sets the default to echo all commands
Indicates to the operating system that variable values are strictly local to the script

The variables used in the script are as follows:

ERRNO – Sets the exit code to 0 unless set to 1 after multiple failed retries
COUNT – Counter variable used for looping multiple retries
RETRY_COUNT – Counter variable to store the max retry count (note: the /a switch indicates a numeric value)
AnaplanUser – Anaplan login credentials in the format as indicated in the example
WorkspaceId – Anaplan numerical or named workspace ID
ModelId – Anaplan numerical or named model ID
Operation – A combination of Anaplan Connect commands. Note that a ^ can be used to enhance readability by indicating that the current command continues on the next line
Domain – Email base domain, typically in the format of company.com
Smtp – Email SMTP server
User – Email SMTP server user ID
Pass – Email SMTP server password
To – Target email address(es). To increase the email distribution, simply add additional -t switches and email addresses as in the example
From – From email address
Subject – Email subject line. Note that this is dynamically set later in the script
cls
echo on
setlocal enableextensions

REM **** SECTION #1 - SET VARIABLE VALUES ****
set /a ERRNO=0
set /a COUNT=0
set /a RETRY_COUNT=2

REM Set Anaplan Connect Variables
set AnaplanUser="<<Anaplan UserID>>:<<Anaplan UserPW>>"
set WorkspaceId="<<put your WS ID here>>"
set ModelId="<<put your Model ID here>>"
set Operation=-import "My File" -execute ^
-output ".\My Errors.txt"

REM Set Email variables
set Domain="spg-demo.com"
set Smtp="spg-demo"
set User="fpmadmin@spg-demo.com"
set Pass="1Rapidfpm"
set To=-t "fpmadmin@spg-demo.com" -t "gburns@spg-demo.com"
set From="fpmadmin@spg-demo.com"
set Subject="Anaplan Connect Status"

REM Set other types of variables such as file path names to be used in the Anaplan Connect "Operation" command

Section #2: Pre Custom Batch Commands

The following section allows custom batch commands to be added, such as running various batch operations like copying and renaming files or running stored procedures via a relational database command-line interface.

REM **** SECTION #2 - PRE ANAPLAN CONNECT COMMANDS ***
REM Use this section to perform standard batch commands or operations prior to running Anaplan Connect

Section #3: Start of Main Loop Block / Anaplan Connect Commands

The following section of the script is the start of the main loop block, as indicated by :START. The individual components break down as follows:

Dynamically set the name of the log file in the following date format, indicating the current loop number: 2016-16-06-ANAPLAN-LOG-RUN-0.TXT
Delete prior log and error files
Native out-of-the-box Anaplan Connect script, with the addition of outputting the Anaplan Connect run session to the dynamic log file via: cmd /C %Command% > .\%LogFile%

REM **** SECTION #3 - ANAPLAN CONNECT COMMANDS ***
:START
REM Dynamically set logfile name based upon current date and retry count.
set LogFile="%date:~-4%-%date:~7,2%-%date:~4,2%-ANAPLAN-LOG-RUN-%COUNT%.TXT"

REM Delete prior log and error files
del .\BAT_STAT.TXT
del .\AC_API.ERR

REM Out-of-the-box Anaplan Connect code with the exception of sending output to a log file
setlocal enableextensions enabledelayedexpansion || exit /b 1
REM Change the directory to the batch file's drive, then change to its folder
cd %~dp0
if not %AnaplanUser% == "" set Credentials=-user %AnaplanUser%
set Command=.\AnaplanClient.bat %Credentials% -workspace %WorkspaceId% -model %ModelId% %Operation%
@echo %Command%
cmd /C %Command% > .\%LogFile%

Section #4: Set Search Criteria

The following section of the script enables trapping of error conditions that may occur while running the Anaplan Connect script. The methodology relies upon searching for certain strings in the log file after the AC commands execute. The batch command findstr can search for string patterns based upon literal or regular expressions and echo any matched records to the file AC_API.ERR. The existence of this file is then used to determine whether an error has been caught. In the example below, two different patterns are searched in the log file. The output file AC_API.ERR is always produced, even if there is no matching string; in that case it will be an empty 0 KB file. Since the existence of the file determines whether an error condition was trapped, it is imperative that any 0 KB files are removed, which is the function of the final line in the example below.
REM **** SECTION #4 - SET SEARCH CRITERIA - REPEAT @FINDSTR COMMAND AS MANY TIMES AS NEEDED ***
@findstr /c:"The file" .\%LogFile% > .\AC_API.ERR
@findstr /c:"Anaplan API" .\%LogFile% >> .\AC_API.ERR

REM Remove any 0K files produced by previous findstr commands
@for /r %%f in (*) do if %%~zf==0 del "%%f"

Section #5: Trap Error Conditions

In the next section, logic is incorporated into the script to trap errors that might have occurred when executing the Anaplan Connect commands. The branching logic relies upon the existence of the AC_API.ERR file. If it exists, then the contents of the AC_API.ERR file are redirected to a secondary file called BAT_STAT.TXT and the email subject line is updated to indicate that an error occurred. If the file AC_API.ERR does not exist, then the contents of the Anaplan Connect log file are redirected to BAT_STAT.TXT and the email subject line is updated to indicate a successful run. Later in the script, the file BAT_STAT.TXT becomes the body of the email alert.

REM **** SECTION #5 - TRAP ERROR CONDITIONS ***
REM If the file AC_API.ERR exists then echo errors to the primary BAT_STAT log file
REM Else echo the log file to the primary BAT_STAT log file
@if exist .\AC_API.ERR (
@echo . >> .\BAT_STAT.TXT
@echo *** ANAPLAN CONNECT ERROR OCCURRED *** >> .\BAT_STAT.TXT
@echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
type .\AC_API.ERR >> .\BAT_STAT.TXT
@echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
set Subject="ANAPLAN CONNECT ERROR OCCURRED"
) else (
@echo . >> .\BAT_STAT.TXT
@echo *** ALL OPERATIONS COMPLETED SUCCESSFULLY *** >> .\BAT_STAT.TXT
@echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
type .\%LogFile% >> .\BAT_STAT.TXT
@echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
set Subject="ANAPLAN LOADED SUCCESSFULLY"
)

Section #6: Send Email

In this section of the script, a success or failure notification email is sent. The parameters for sending are all set in the variable section of the script.

REM **** SECTION #6 - SEND EMAIL VIA MAILSEND ***
@mailsend -domain %Domain% ^
-smtp %Smtp% ^
-auth -user %User% ^
-pass %Pass% ^
%To% ^
-f %From% ^
-sub %Subject% ^
-msg-body .\BAT_STAT.TXT

Note: Sending email via SMTP requires the use of a free and simple Windows program known as MailSend. The latest release is available here: https://github.com/muquit/mailsend/releases/. Once downloaded, unpack the .zip file, rename the file to mailsend.exe, and place the executable in the same directory where the Anaplan Connect batch script is located.

Section #7: Determine if a Retry is Required

This is one of the final sections of the script and determines whether the Anaplan Connect commands need to be retried. Nested IF statements are typically frowned upon, but are required here given the limited capabilities of the Windows batch language. The first IF test determines whether the file AC_API.ERR exists. If it does, then the logic drops in and tests whether the current value of COUNT is less than RETRY_COUNT. If the condition is true, then COUNT is incremented and the batch returns to the :START location (Section #3) to repeat the Anaplan Connect commands. If the condition of the nested IF is false, then the batch goes to the end of the script and exits with an exit code of 1.
REM **** SECTION #7 - DETERMINE IF A RETRY IS REQUIRED ***
@if exist .\AC_API.ERR (
@if %COUNT% lss %RETRY_COUNT% (
@set /a COUNT+=1
@goto :START
) else (
set /a ERRNO=1
@goto :END
)
) else (
set /a ERRNO=0
)

Section #8: Post Custom Batch Commands

The following section allows custom batch commands to be added, such as running various batch operations like copying and renaming files, or running stored procedures via a relational database command-line interface. Additionally, this would be the location to add functionality to bulk insert flat-file data exported from Anaplan into a relational target via tools such as Oracle SQL Loader (SQLLDR) or Microsoft SQL Server Bulk Copy (BCP).

REM **** SECTION #8 - POST ANAPLAN CONNECT COMMANDS ***
REM Use this section to perform standard batch commands or operations after running Anaplan Connect commands
:END
exit /b %ERRNO%

Sample Email Notifications

The following are sample emails sent by the batch script, based upon the sample script in this document. Note how the needed content from the log files is piped directly into the body of the email.

Success Mail:
Error Mail:
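As a closing note, because the script ends with exit /b %ERRNO%, a wrapping script or scheduler can branch on the outcome. A minimal sketch (the script and log file names are placeholders):

REM Run the Anaplan Connect batch script and branch on its exit code
call anaplan_load.bat
if errorlevel 1 (
    echo Anaplan Connect load failed after all retries >> scheduler.log
) else (
    echo Anaplan Connect load completed successfully >> scheduler.log
)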
Note that this article uses a planning dashboard as an example, but many of these principles apply to other types of dashboards as well.

Methodology

User stories

Building a useful planning dashboard always starts with getting a set of very clear user stories, which describe how a user should interact with the system. The user stories need to identify:

What the user wants to do
What data the user needs to see to perform this action
What data the user wants to change
How the user will check that changes made have taken effect

If one or more of the above is missing in a user story, ask the product owner to complete the description. Start the dashboard design, but use it to obtain the answers. It will likely change as more details arrive.

Product owners versus designers

Modelers should align with product owners by defining concrete roles and responsibilities for each team member. Product owners should state what data users expect to see and how they wish to interact with the data, not ask for specific designs (this is the role of the modelers/designers). Product owners are responsible for change management and should be extra careful when the dashboard/navigation is significantly different from what is currently being used (i.e., Excel®).

Pre-demo peer review

Have a usability committee that:

Is made up of modeling peers outside the project and/or project team members outside of the modeling team
Hosts a mandatory gate-check meeting to review models before demos to product owners or users

The committee is designed to ensure the best design by challenging modelers on:

Consistency between models
Whether the function is clear
Whether exceptions/calls to action are called out
The best first impression

Exception, call to action, measure impact

A planning dashboard will be successful if it allows users to highlight and analyze exceptions (issues, alerts, warnings), take action and plan to solve these, and always visually check the impact of the change against a target.

Dashboard structure

Example: A dashboard is built for these two user stories, which complement each other.

Story 1: Review all of my accounts for a specific region, manually adjust the goals, and enter comments
Story 2: Edit my account by assigning direct and overlay reps

The dashboard structure should be made of:

Dashboard header: a short name describing the purpose of the dashboard, at the top of the page in "Heading 1"
Groupings: a collection of dashboard widgets:
- Call to action
- Main grid(s)
- Info grid(s): specific to one item of the main grid
- Info charts: specific to one item of the main grid
- Specific action buttons: specific to one item of the main grid
- Main charts: cover more than one item of the main grid
- Individual line items: specific to one item of the main grid, usually used for commentary
- Light instructions

A dashboard can have more than one of these groupings, but all elements within a grouping need to answer the needs of the user story. Use best judgment to determine the number of groupings added to one dashboard. A maximum of two to three groupings is reasonable; past this range, consider building a new dashboard. Avoid having a "does it all" dashboard, where users keep scrolling up and down to find each section. If users ask for a table of contents at the top of a dashboard, it's a sign that the dashboard has too much functionality and should be divided into multiple dashboards.

Example:

General guidelines

Call to action

Write a short sentence describing the task to be completed within this grouping.
Use heading 2 format.

Main grid(s)

The main grid is the central component of the dashboard, or of the grouping. It's where the user will spend most of their time. This main grid displays the KPIs needed for the task (usually in the columns) and one or more other dimensions in the rows.

Warning: Users may ask for 20+ KPIs and need these KPIs broken down by many dimensions, such as by product, actual/plan/variance, or by time. It's critical to keep the main grid as simple and decluttered as possible. Avoid the "data jungle" syndrome. Users are accustomed to data jungles simply because that's what they had in Excel.

Tips to avoid data jungle syndrome:

Make a careful KPI election (KPIs are usually the line items of a module)
Display the most important KPIs ONLY, which are those needed for decision making. Hide the others for now.

A few criteria for electing a KPI for the main grid are:

The KPI is meant to be compared across the dimension items in the rows, or across other KPIs
Viewing the KPI values for all of the rows is required to make the decision
The KPI is needed for sorting the rows (except on row name)

A few criteria for not electing a KPI for the main grid (besides not matching the above criteria) are when the KPI is needed more in a drill-down mode: it provides valid extra info, but just for the selected row of the dashboard, and does not need to be displayed for all rows. These "extra info" KPIs should be displayed in a different grid, referred to as an "info grid" in this document. Take advantage of the row/column sync functionality to provide a ton of data in your dashboard, but only display data when requested or required.

Design your main grid in such a way that the user does not have to scroll left and right to view the KPIs:

Efficiently select KPIs
Use the column header wrap
Set the column size accordingly

Vertical scroll

It is OK to have users scroll vertically on the main grid. Only display 15 to 20 rows at a time when there are numerous rows, as well as other groupings and action buttons, to display on the same dashboard. Use sorts and a filter to display relevant data.

Sort your grid

Always sort your rows. Obtain the default sort criteria via user stories. If no special sort criteria are called out, use the alphanumeric sort on the row name. This will require a specific line item. Train end users to use the sort functionality.

Filter your grid

Ask end users or product owners what criteria to use to display the most relevant rows. It could be:

Those that make up 80 percent of a total. Use the RankCumulate function.
Those that have been modified lately. This requires a process to attach a last-modified date to a list item, updated daily via a batch mode. When the main grid allows item creation, always display the newly created items first.
A status flag.

If end users need to apply their own filter values on some attributes of the list items, such as filtering to show only those that belong to EMEA or those whose status is "in progress," envision building pre-set user-based filters.

Warning #1: This will have an impact on the module size and will require a few line items defined by a "user" dimension. Forget about filtering 500k+ accounts with 2,000 users; instead, it works well for 5,000 items by 400 users.

Warning #2: The grid will become read-only for the line item(s) that are not defined by users, as well as the other required dimensions. The workaround is to publish the subsidiary view, but this will prevent editing within the main grid.
This might be considered more for read-only grids or reporting dashboards. Display these user-based filters above the main grid, and have an "Apply filter" button that simply re-opens the same dashboard and applies the filter to the grid.

Color code your grid

Use colored cells to call attention to areas of a grid, such as green for positive and red for negative
Color code cells that specifically require data entry

Display the full details

If a large grid is required, something like 5k lines and 100 columns, then:

Make it available in a dedicated full-screen dashboard, reached via a button (such as an action button) from the summary dashboard
Do not add such a grid to a dashboard where KPIs, charts, or multiple grids are used for planning. These grids are usually needed for ad-hoc analysis, data discovery, or random verification of changes, and can create a highly cluttered dashboard.

Main charts

The main chart goes hand in hand with the main grid. Use it to compare one or more of the KPIs of the main grid across the different rows.

If the main grid contains hundreds or thousands of items, do not attempt to compare all of them in the main chart. Instead, identify the top 20 rows that really matter or that make up most of the KPI value, and compare these 20 rows for the selected KPI.

Location: directly below or to the right of the main display grid; should be at least partially visible with no scrolling
Synchronized with the selection of a KPI or row of the main display grid

Should be used for:

Comparison between row values of the main display grid
Displaying the difference when a user makes a change/restatement or inputs data

In cases where a chart requires 2–3 additional modules to be created:

Implement and test performance
If no performance issues are identified, keep the chart
If performance issues are identified, work with product owners to compromise

Info grid(s)

These are the grids that provide more details for an item selected on the main grid. If territories are displayed as rows, use an info grid to display as many line items as necessary for the selected territory. Avoid cluttering your main grid by displaying all of these line items for all territories at once. This is not necessary and will create extra clutter and scrolling issues for end users.

Location: below or to the right of the main display grid
Synced to the selection of a list item in the main display grid
Should read vertically to display many metrics pertaining to the selected list item

Info charts

Similar to info grids, an info chart is meant to compare one or more KPIs for a selected item in the rows of the main grid.

These should be used for:

Comparison of multiple KPIs for a single row
Comparison or display of KPIs that are not present on the main grid, but are on info grid(s)
Comparing a single row's KPI(s) across time

Place it to the right of the main grid, above or below an info grid.
Specific action buttons

Location: below the main grid, below the KPI that the action is related to, OR to the far left/right, similar to a "checkout"
Should be an action performed on the selected row of the main grid
Can be used for navigation as a drill-down to a detailed view of a selected row/list item
Should NOT be used for lateral navigation between dashboards; users should be trained to use the left panel for lateral navigation

Individual line items

Serve as a call-out of important KPIs or action opportunities (i.e., user setting container for explosion, Container Explosion status)
If actions taken by users require additional collaboration with other users, they should be published outside the main grid (giving particular emphasis by publishing the individual line item/s)

Light instructions

Call to action:
Serves as a header for a grouping
A short sentence describing what the user should be performing within the grouping
Formatted in "heading 2"

Action instructions:
Directly located next to a drop-down, input field, or button where the function is not quite clear
No more than 5–6 words
Formatted in "instructions"
Deal with monthly dashboards

Many FP&A dashboards need to display all 12 months in the current year, as well as Quarter, Half Year, and Total Year totals. Doing this is likely to create a very large grid, especially if more than one dimension is nested on the rows.

Avoid this: The grid displayed here is what may be requested when Anaplan is replacing a spreadsheet-based solution, the requirement being "at minimum, do what we could do in the spreadsheets." Avoid the trap of rebuilding this in Anaplan. Usually, this simply creates an extra requirement to export the data into Excel®, have users work offline, and then import the data back into Anaplan, which kills the value that Anaplan can bring.

Instead, build the dashboard as indicated below:

Have end users view the aggregated values on the cost center (the first nested dimension), which provides an overview of where most OPEX is spent
Have end users highlight a cost center and enter its detailed sub-accounts
Visualize the monthly trend using a line chart for the selected sub-account