We often see Anaplan Connect scripts created ad hoc as new actions are added, or existing scripts updated with these new actions. This works when there is a limited number of imports/exports/processes running, and when these actions are relatively quick. However, as models and actions scale up and grow in complexity, this approach becomes inefficient: you end up either scheduling dozens of scripts or managing large, difficult-to-read scripts. I prefer to design for scale from the outset. My solution uses batch scripts that call the relevant Anaplan Connect script, passing the action to run as a variable. There are a couple of ways I've accomplished this: dedicate a script to executing processes and pass in the process name, or pass in the action type (-action, -export, etc.) and name as the variable. I generally prefer the first approach, but be careful that your process doesn't become so large that it impacts model performance. Usually, I will create a single script to perform all file uploads to a model, then run the processes. In my implementations, I've written each Anaplan Connect script to be model specific, but you could pass the model ID as a variable as well.

To achieve this, I create a "controller" script that calls the Anaplan Connect script. It looks something like this:

@echo off
rem Read each process name from the text file and pass it to the Anaplan Connect script
for /F "tokens=* delims=" %%A in (Demand-Daily-Processes.txt) do (
    call "Demand - Daily.bat" "%%A" & TIMEOUT 300
)
pause

This reads each line of a file called Demand-Daily-Processes.txt, which contains the name of a process exactly as it appears in Anaplan, e.g.:

Load Master Data from Data Hub
...
Load Transactional Data from Data Hub

It then calls the Anaplan Connect script, passing this name as a variable. Once the script completes, the controller waits 300 seconds before reading the next line and calling the AC script again. This timeout gives the model time to recover after running the process and prevents potential issues when executing subsequent processes.

The Anaplan Connect script itself looks mostly as it normally does, except that in place of the process name, we use a variable reference:

@echo off
set AnaplanUser=""
set WorkspaceId=""
set ModelId=""
set timestamp=%date:~7,2%_%date:~3,3%_%date:~10,4%_%time:~0,2%_%time:~3,2%
set Operation=-certificate "path\certificate.cer" -process "%~1" -execute -output "C:\AnaplanConnectErrors\<Model Name>-%~1-%timestamp%"
rem *** End of settings - Do not edit below this line ***
setlocal enableextensions enabledelayedexpansion || exit /b 1
cd %~dp0
if not %AnaplanUser% == "" set Credentials=-user %AnaplanUser%
set Command=.\AnaplanClient.bat %Credentials% -workspace %WorkspaceId% -model %ModelId% %Operation%
@echo %Command%
cmd /c %Command%
pause

You can see that in place of declaring a process name, the script uses %~1. This tells the script to use the value of the first parameter provided (with surrounding quotes removed). You can pass up to nine parameters this way, allowing you to pass in workspace and model IDs as well. The script also creates a timestamp variable with the current system time when executed, then uses that and the process name to create a clearly labeled folder for error dumps, e.g., "C:\AnaplanConnectErrors\Demand Planning-Load Master Data from Data Hub-<timestamp>". With this solution, as you add processes to your model, you simply add them to the text file (keeping them in the order you want them executed), rather than editing or creating batch scripts.
Additionally, you need only schedule your controller script(s), making maintenance easier still. 
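If you do want to pass the model ID as well as the process name, one option (a sketch only; the file and script names here are hypothetical, not taken from the example above) is to keep both values on each line of the text file, separated by a comma, and split them in the controller:

@echo off
rem Each line of Daily-Processes.txt is: <model ID>,<process name>
rem tokens=1,* splits the line at the first comma; %%A = model ID, %%B = process name
for /F "tokens=1,* delims=," %%A in (Daily-Processes.txt) do (
    call "Run Process.bat" "%%A" "%%B" & TIMEOUT 300
)
pause

The called script would then use %~1 for the -model parameter and %~2 for the -process parameter, exactly as %~1 is used in the example above.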
Overview

A data hub is a separate model that holds an organization's data. Data can be shared with all your models, making expansions easier to implement and ensuring data integrity across models. The data hub model can be placed in a different workspace, allowing for role segregation. This allows you to assign administrator rights to users to manage the data hub without allowing those users access to the production models. The method for importing to the data hub (into modules, rather than lists) allows you to reconcile properties using formulas. One type of data hub can be integrated with an organization's data warehouse and hold ERP, CRM, HR, and other data, as shown in the Anaplan Data Architecture example. But this isn't the only type of data hub. Some organizations may require a data hub for transactional data, such as bookings, pipeline, or revenue. Whether you will be using a single data hub or multiple hubs, it is a good idea to plan your approach for importing from the organization's systems into the data hub(s), as well as how you will synchronize the imports from the data hub to the appropriate model. The graphic below shows best practices.

High level best practices

When building a data hub, the best practice is to import a list with properties into a module rather than directly into a list. Using this method, you set up line items to correspond with the properties and import them using the text data type. This imports all the data without errors or warnings. The data in the data hub module can then be imported to a list in the required model. The exception to importing into a module is if you are using a numbered list without a unique code (in other words, you are using a combination of properties). In that case, you will need to import the properties into the list.

Implementation steps

Here are the steps to create the basics of a hub and spoke architecture.

1) Create a model and name it master data hub

You can create the data hub in the same workspace where all the other models are, but a better option is to put the data hub in a different workspace. The advantage is role segregation; you can assign administrator rights to users to manage the hub and not provide them with access to the actual production models, which are in a different workspace. Large customers may require this segregation of duties. Note: This functionality became available in release 2016.2.

2) Import your data files into the data hub

Set up your lists. Identify the lists that are required in the data hub and create them using good naming conventions. Set up any needed hierarchies, working from the top level down. Import data into the list from the source files, mapping only the unique name, the parent (if the name rolls up into a hierarchy), and the code, if available. Do not import any list properties; these will be imported into a module. Create corresponding modules for those lists that include properties. For each list, create a module and name it [List Name] Properties. In the module, create a line item for each property and use the data type TEXT. Import the source file into the corresponding module. There should be no errors or warnings. Automate the process with actions. Each time you imported, an action was created. Name your actions using the appropriate naming conventions. Note: Indicate the name of the source in the name of the import action. To automate the process, you'll want to create one process that includes all your imports.
For hierarchies, it is important to get the actions in the correct order. Start with the import for the highest level of the hierarchy, then the next level, and so on down the hierarchy. Then add the module imports (the order of the module imports is not critical). Now, let's look at an example: you have a four-level hierarchy to load, where Employee rolls up to State, State to Region, and Region to Country.

Lists

Create lists with the right naming conventions. For this example, create these lists: G1 Country, G2 Region, G3 State, G4 Employee. Set the parent hierarchy to create the composite hierarchy. Import into each list from the source file(s), and only map name and parent. The exception is the employee list, which includes a code (employee ID) that should be mapped. Properties will be added to the data hub later.

Properties → Modules

Create one module for each list that includes properties, and name the module [List Name] Properties. For this example, only the Employees list includes properties, so create one module named Employee Properties. In each module, create as many line items as you have properties. For this example, the line items are Salary and Bonus. Open the Blueprint view of the module and, in the Format column, select Text. Pivot the module so that the line items are columns. Import the properties: in the grid view of the module, click on the property you are going to import into. Set up the source as a fixed line item. Select the appropriate line item from the Line Item tab and, on the Mapping tab, select the correct column for the data values. You'll need to import each property (line item) separately. There should be no errors or warnings.

Actions

Each time you run an import, an action is created. You can view these actions by selecting Actions from the Model Settings tab. The previous imports into lists and modules have created one import action per list. You can combine these actions into a process that will run each action in the correct order. Name your actions following the naming conventions; note that the source is included in the action name. Create one process that includes the imports and name it Load [List Name]. Make sure the order is correct: put the list imports first, starting with the top hierarchy level (numbered as 1) and working down, then the module imports in any order.

3) Reconcile

These list imports should run with zero errors, because the imports are going into text-formatted items. If some properties should match items in lists, it is recommended to use FINDITEM formulas to match text to list items. FINDITEM simply looks at the text-formatted line item and finds the match in the list that you specify. Every time data is uploaded into Anaplan, you just need to make sure all items from the text-formatted line item are being loaded into the list. This is useful because you can always compare the "raw data" to the "Anaplan data," and you do not have to load that data more than once if there are concerns about data quality in Anaplan. If the list of valid values for a property is not yet included in your data hub model, first create that list. Let's use the example of Territory. Add a line item to the module and select List as the format type, then select the name of your list of properties—in this case, Territory—from the drop-down. Add the FINDITEM formula, FINDITEM(x, y), where x is the name of your list (Territory in our example) and y is the text line item. You can then filter this line item to show all of the blank items.
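For example, a minimal reconciliation setup in the properties module might look like this (the line item names are illustrative; FINDITEM and ISBLANK are standard Anaplan functions):

Territory Text (Text): loaded from the source file, no formula
Territory (List: Territory) = FINDITEM(Territory, Territory Text)
Missing Territory? (Boolean) = ISBLANK(Territory)

Filtering the module view on Missing Territory? = TRUE then gives you exactly the source values that did not match an item in the Territory list.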
Correct the data in the source system. If you will be importing frequently, you may want to set up a dashboard to allow users to view the data so they can make corrections in the source system. Set up a saved view for the errors and add conditional formatting to highlight the missing (blank) items. You can also include a counter to show the number of errors and add that information to the dashboard.

4) Split models: Filter and set up saved views

If the architecture of your model includes spoke models by region, you need one master hierarchy that covers all regions and a corresponding module that stores the properties. Use that module and create as many saved views as you have spoke region models. For example, filter on Country G1 = Canada if you want to import only Canadian accounts into the spoke model. You will need to create a saved view for each hierarchy and spoke model.

5) Import to the spoke model

Use cross-workspace imports if you have decided to put your master data hub in a separate workspace. Create the lists that correspond to the hierarchy levels in each spoke model. There is no way to create a list via import for now. Create the properties in the list where needed. Keep in mind that the import of properties into the data hub as line items is an exception. List properties generally do not vary, unlike a line item in a module, which is often measured over time. Note: Properties can also be housed in modules, and there are some benefits to this. See Anapedia - Model Building (specifically, the "List Attributes" and "List attributes in a module" topics). If you decide to use a module to hold the properties, you will need to create a line item for each property type and then import the properties into the module. To simplify the mapping, make sure the property names in each spoke model match the line item names of the data hub model. In each spoke model, create an import from the filtered module view of the data hub model into the lists you created in step 1. In the Actions window, name your imports using naming conventions. Create a process that includes these actions (imports). Begin with the highest level in the hierarchy and work down to the lowest. Well done! You have imported your hierarchy from a data hub model.

6) Incremental list imports

When you are in the midst of your peak planning cycle and your large lists are changing frequently, you'll want to update the data hub and push the changes to the spoke models. Running imports of several thousand list members may cause performance issues and block users during the import activity. In a best-case scenario, your data warehouse provides a date field that shows when an item was added or modified and can deliver a flat file or table that includes only the changes. Your import into the hub model will then take just a few seconds, and you can filter on this date field to export only the changes to the spoke models. But in most cases, all you have is a full list from the data warehouse, regardless of what has changed. To mitigate this, we'll use a technique to export only the list items that have changed (edited, deleted, updated) since the last export, using logic in Anaplan.

Setting up the incremental loads, in the data hub model: Create a text-formatted line item in your module. Name it CHECKSUM, set the format as Text, and enter a formula that concatenates all the properties you want to track changes for. These properties will form the basis of the incremental import.
Example: CHECKSUM = State & Segment & Industry & Parent & Zip Code. Create a second line item, name it CHECKSUM OLD, set the format as Text, and create an import that imports CHECKSUM into CHECKSUM OLD, ignoring any other mappings. Name this import 1/2 im DELTA and put it in a process called "RESET DELTA". Create a line item named DELTA, set the format as Boolean, and enter this formula: IF CHECKSUM <> CHECKSUM OLD THEN TRUE ELSE FALSE. Update the filtered view that you created to export only the hierarchy for a specific region or geography: add the filter criterion DELTA = TRUE. You will only see the list items that differ from the last time you imported into the data hub. In the example above, we'll import into a spoke model only the list items that are in US East and that have changed since the last import. Execute the import from the source into the data hub and then into the spoke models: in the data hub model, upload the new files and run the import process; in the spoke models, run the import process that takes the list from the data hub's filtered view. Check the import logs and verify that only the items that have changed are actually imported. Back in the data hub model, run the RESET DELTA process (the 1/2 im DELTA import). This resets the changes, so you are ready for the next set of changes. Your source, data hub model, and spoke models are all in sync.

7) Import actuals (transaction data) into the data hub and then into the spoke models

Rather than importing actuals or transactions directly into a working model, import them into the data hub to make it easier for business users (with workspace admin rights) to select the imports they want to add to their spoke models. There is one requirement: the file must include a transaction or primary key (identification code) that uniquely identifies each transaction. If there is no transaction key, your options are as follows. Option 1: Work with the IT team to determine if it is possible to include a transaction ID in the source. This is the best option, but not always possible. Option 2: Create the transaction ID in Excel. Keep in mind there is a limit of 1 million rows in Excel. Also be careful about how you create the transaction ID in Excel, as some methods may delete leading zeros. Option 3: Create a numbered list in Anaplan.

Creating a numbered list and importing transaction IDs: Add a Transactions list (follow your naming conventions!) to the data hub model. In the General Lists window, select the Numbered option to change the Transactions list to a numbered list. In the Transactions list, create a property called Transaction ID and set the format to text. In the General Lists window, select Transaction ID in the Display Name Property field. Open the Transactions list and add the formula CODE(ITEM('Transactions')) to the Transaction ID property. It will be used as the display name of the numbered list. When importing into the Transactions list, set it up as indicated below: map the Transaction ID of the source file to the Code, and remove any selection from the Transactions drop-down list (first source field). If duplicates on the transaction ID are found, reject the import; otherwise, you will introduce corrupted data into the model. Import the transaction IDs into the Transactions list.

Import transactions: Create the Actuals module. Include the Transactions list and as many line items as you have fields (columns) in your source file. Set up the format of your line items.
They should be set up with format type Text, with the exception of columns that contain numeric values; for those, the format should be Number, with any further definitions needed (for example, decimal places or units). Add a line item called Transaction ID and set the format as Text. Enter the formula CODE(ITEM('Transactions')). This will be used when importing the numbered list into the spoke models. Run the import of the source file into the Actuals module. Name your two actions (imports): Import into Transactions (the import of the transaction IDs into the Transactions list) and Import into Actuals (the import from the source file into the Actuals module). Create a process that includes both imports: first Import into Transactions, then Import into Actuals.

Why a 2-dimensional module? It is important to understand that the Actuals module is a staging module with two dimensions only: transactions and line items. You can load many millions of these transactions and have 50+ line items, which correspond to the properties of each transaction, including version and time. Anaplan will scale without any issues. Do not create a multi-dimensional module at this stage. This will be done in the spoke models, where you will carefully pick which properties become dimensions; this choice will significantly impact the spoke model size if you have large lists. In the Actuals module, create a view that you will use for importing into the spoke model. Create as many saved views as required, based on how you have split the spoke models.

Reconcile

The import into the module will run without errors or warnings. This does not mean that everything is clean, as we have just loaded text. The reconciliation in the data hub consists of verifying that every field of the source system matches an existing item in the list of values for that field. In the module, create a list-formatted line item corresponding to each field, and use the FINDITEM() function to look up the actual item. If the name does not match, it will return a blank cell. These cells need to be tracked in a reconciliation dashboard, and the source file will need to be fixed until all transactions have a corresponding item in a list. If the list of values for a field is not yet included in your data hub model, first create that list. Add a line item to the module and select List as the format type, then select the name of your list of fields. Add the FINDITEM formula, FINDITEM(x, y), where x is the name of your list and y is the text line item. See the example below: transaction 0001 is clean, while transaction 0002 has an account code (A4) that does not match. Set up a dashboard to allow users to view the data so they can make corrections in the source system. Set up a saved view for the errors and add conditional formatting to highlight the missing (blank) items. You can also include a counter to show the number of errors and add that information to the dashboard.

Import into the spoke models

In the spoke models: Create the transactions numbered list. Import into this list from the transaction module saved view that you created in the data hub, filtered on any property you need to limit the transactions you want to push. Map the Code of the numbered list in the spoke model to the calculated Transaction ID of the master data hub model. Create a flat transaction module. Import into this module from the same saved view of the data hub transaction module, filtered on any property you need to limit the transactions you want to push.
Make sure you select the calculated Transaction ID as your source. Do not use the transaction name, as it will be different for the same transaction in the data hub model and the spoke model. Create a target multi-dimensional module, using SUM functions from the transactional module across the line items formatted as list or time (for example, a simple 2-dimensional module by Account and Product). Use SUM functions as much as possible, as this enables users to use the drill-to-transaction feature, which shows the transactions that make up an aggregated number (a sketch of such a SUM formula appears at the end of this article).

8) Incremental data load

The actuals transaction file might need to be imported several times into the data hub model, and from there into the spoke models, during the peak planning cycle. If the file is large, this can create performance issues for end users. Since not all transactions will change as the data is imported several times a day, there is a strong opportunity to optimize this process. In the data hub model transaction module, create the same CHECKSUM, CHECKSUM OLD, and DELTA line items. CHECKSUM should concatenate all the fields you want to track the delta on, including the values. The DELTA line item will catch new transactions as well as modified transactions. See 6) Incremental list imports above for more information. Filter the view using DELTA so that only changed transaction list items are imported into the list, and only changed actuals transactions into the module. Create an import from CHECKSUM to CHECKSUM OLD so you can reset the delta after the imports have run; name this import 2/2 im DELTA and add it to the RESET DELTA process created for the list. In the spoke model, import into the transaction list and into the transaction module from the filtered transaction view. Then run the RESET DELTA import or process.

9) Automation

You can semi-automate this process and have it run automatically on a frequent basis if incremental loads have been implemented. That provides immediacy of master data and actuals across all models during a planning cycle. It is semi-automatic because it requires a review of the reconciliation dashboards before pushing the data to the spoke models. There are a few ways to automate, all requiring an external tool: Anaplan Connect or the customer's ETL. The automation script needs to execute in this order:
1. Connect to the master data hub model.
2. Load the external files into the master data hub model.
3. Execute the process that imports the lists into the data hub.
4. Execute the process that imports actuals (transactions) into the data hub.
Manual step: Open your reconciliation dashboards and check that the data and the lists are clean. Again, these imports should run with zero errors or warnings.
5. Connect to the spoke model.
6. Execute the list import process.
7. Execute the transaction import process.
Repeat 5, 6, and 7 for all spoke models.
8. Connect to the master data hub model.
9. Run the RESET DELTA process to reset the incremental checks.

Other best practices

Create deletes for all your lists. Create a module called Clear Lists. In the module, create a Boolean line item that applies to the list (the one holding the list and its properties), call it CLEAR ALL, and set its formula to TRUE. In Actions, create a "delete from list using selection" action and set it up as shown below. Repeat this for all lists and create one process that executes all these delete actions.

Example of a maintenance/reconcile dashboard

Use a maintenance/reconcile dashboard when manual operations are required to update applications from the hub.
One method that works well is to create a module that highlights whether there are errors in each data source. In that module, create a message line item that displays on the dashboard if there are errors, for example: "There are errors that need correcting." A link on this dashboard to the error status page will make it easy for users to check on errors. A best practice is to automate the list refresh; combine this with a modeling solution that only exports what has changed.

Dev-test-prod considerations

There should be two saved views: one for development and one for production. That way, the hub can feed the development models with shortened versions of the lists, while the production models get the full lists. ALM considerations: The development (DEV) model will need the imports set up for both DEV and production (PROD) if the different-saved-view option is taken. The additional ALM consideration is that the lists imported into the spoke models from the hub need to be marked as production data.

Development

DATA HUB: The data hub houses all global data needed to execute the Anaplan use case. The data hub often houses complex calculations and readies data for downstream models.
DEVELOPMENT MODEL: The development model is built to the 80/20 rule. It is built upon a global process; region-specific functionality is added in the deployment phase. The model is built to receive data from the data hub.
DATA INTEGRATION: During this stage, Anaplan Connect or a 3rd-party tool is used to automate data integration. Data feeds are built from the source system into the data hub and from the data hub to downstream models.
PERFORMANCE TESTING: The application is put through rigorous performance testing, including automated and end-user testing. These tests mimic real-world usage and exceptionally heavy traffic to see how the system will perform.

Deployment

DATA HUB: The data hub is refreshed with the latest information from the source systems. The data hub readies data for downstream models.
DEPLOYMENT MODEL: The development model is copied and the appropriate data is loaded from the data hub. Region-specific functionality is added during this phase.
DATA INTEGRATION: Additional data feeds from the data hub to downstream models are finalized. The integrations are tested and timed to establish a baseline SLA. Automatic feeds are placed on timed schedules to keep the data up to date.
PERFORMANCE TESTING: The application is again put through rigorous performance testing.

Expansion

DATA HUB: The need for additional data for new use cases is often handled by splitting the data hub into regional data hubs. This helps the system perform more efficiently.
MODEL DEVELOPMENT: The models built for new use cases are developed and thoroughly tested. Additional functionality can be added to the original models deployed.
DATA INTEGRATION: Data integration is updated to reflect the new system architecture. Automatic feeds are tested and scheduled according to business needs.
PERFORMANCE TESTING: At each stage, the application is put through rigorous performance testing. These tests mimic real-world usage and exceptionally heavy traffic to see how the system will perform.
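To illustrate the SUM approach described in step 7 above, here is a hedged sketch of what the target module's formula might look like; the module and line item names are hypothetical, not taken from this article. Assume the staging module holds an Amount line item plus Account and Product line items that are list-formatted (via FINDITEM), and the target module is dimensioned by the Account and Product lists:

Revenue = 'TRA01 Transactions Staging'.Amount[SUM: 'TRA01 Transactions Staging'.Account, SUM: 'TRA01 Transactions Staging'.Product]

Because the aggregation uses SUM against list-formatted line items, users can drill from any aggregated cell down to the underlying transactions.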
Overview: A dashboard with grids that include large lists that have been filtered and/or sorted can take a long time to open. The opening action can also become a blocking operation; when this happens, you'll see the blue toaster box showing "Processing..." while the dashboard is opening. This article includes some guidelines to help you avoid this situation.

Rule 1: Filter large lists by creating a Boolean line item

Avoid the use of filters on text or non-Boolean formatted items for large lists on the dashboard. Instead, create a line item with the format type Boolean and add calculations to the line item so that the results return the same data set as the filter would. This is especially helpful if you implement user-based filters, where the Boolean is dimensioned by user and by the list to be filtered. The memory footprint of a Boolean line item is 8x smaller than that of other types of line items. Warning on a known issue: on an existing dashboard where a saved view is modified by replacing the filters with a Boolean line item for filtering, you must republish the view to the dashboard. Simply removing the filters from the published dashboard will not improve performance.

Rule 2: Use the default sort

Use sort carefully, especially on large lists. Opening a dashboard that has a grid where a large list is sorted on a text-formatted line item will likely take 10 seconds or more and may be a blocking operation. To avoid using the sort, make sure your list is sorted by default by the criteria you need. If it is not sorted, you can still make the grid usable by reducing the items shown using a user-based filter.

Rule 3: Reduce the number of dashboard components

There are times when the dashboard includes too many components, which slows performance. A reasonably large dashboard is no wider than 1.5 pages (avoiding too much horizontal scrolling) and 3 pages deep. Once you exceed these limits, consider moving the components into multiple dashboards. Doing so will help both performance and usability.

Rule 4: Avoid using large lists as page selectors

If you have a large list and use it as a page selector on a dashboard, that dashboard will open slowly; it may take 10 seconds or more, and the loading of the page selector takes more than 90% of the total time. Known issue / this is how Anaplan works: if a dashboard grid contains list-formatted line items, the contents of page selector drop-downs are automatically downloaded until the size of the list meets a certain threshold; once this size is exceeded, the download happens on demand, in other words, when a user clicks the drop-down. The issue is that when Anaplan requests the contents of list-formatted cell drop-downs, it also requests the contents of ALL other drop-downs, INCLUDING page selectors. Recommendation: Limit the page selectors on medium to large lists using the following tips: a) Make the page selector available in one grid and use the synchronized paging option for all other grids and charts. There is no need to allow users to edit the page in every dashboard grid or chart. b) A large list used as a page selector makes for a poor user experience, as there is no search available; it creates both a performance and a usability issue. Solution 1: Design a dashboard dedicated to searching a line item. From the original dashboard (where you wanted to include the large list page selector), the user clicks a custom search button that opens a dashboard where the large list is displayed as the rows of a grid. The user can then use a search to find the item needed.
If possible, implement user-based filters to help the user further reduce the list and quickly find the item. The user highlights the item found, closes the tab, and returns to the original dashboard, where all grids are set on the highlighted item. Alternate solution: If the dashboard elements don't require the use of the list, publish them from a module that doesn't contain this list. For example, floating page selectors for time or versions, or grids displayed as rows/columns only, should be published from modules that do not include the list. Why? The view definitions for these elements will contain all of the source module's dimensions, even if they are not shown, and so will carry the overhead of populating the large page selector if it is present in the source.
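Referring back to Rule 1 above, here is a minimal sketch of what such a Boolean filter line item could look like; the module and line item names are illustrative assumptions, not from this article. In a system module dimensioned by the large Product list:

Show in Dashboard? (Boolean) = Annual Sales > 0 AND NOT Discontinued?

The dashboard grid's saved view is then filtered on Show in Dashboard? = TRUE, instead of filtering directly on a text or numeric line item. For a user-based variant, dimension the module by Users as well, so each user controls which items appear in their own view.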
Master data hubs

Master data hubs are used within the Anaplan platform to house an organization's data in a single model. This hub imports data from the corporation's data warehouse. If no single source, such as a data warehouse, is available, the master data hub collects data from the individual source systems instead. Once all data is consolidated into a single master data hub, it may then be distributed to multiple models throughout an organization's workspace (see the Anaplan Data Architecture graphic).

Architecture best practices

One or more Anaplan models may make up the data hub. It is a good practice to separate the master data (hierarchies, lists, and properties) from the transactional data. The business Anaplan applications are synchronized from these data hub models using Anaplan's native model-to-model internal imports. As a best practice, implement incremental synchronization, which only synchronizes the data in the application that has changed since the last sync from the data hub. Doing this usually provides very fast synchronization. The graphic below displays best practices for doing this.

Another best practice when building a master data hub is to import a list with properties into a module rather than directly into a list. Using this method, line items are created to correspond with the properties and are imported using the text data type. This imports all of the data without errors or warnings and allows for very smart dashboards, built with sorts and filters, to highlight integration issues. Once imported, the data in the master data hub module can then be imported to a list in the required model.

Data hub best practices

The following are best practices for establishing data architecture:
Rationalize the metadata. Balanced hierarchies (not ragged) will ease reporting and security settings.
Be driver-based. Identify your metrics and KPIs and what drives them. Do not try to reconcile disconnected targets to bottom-up plans entered at line item level. Example: use cost per trip and number of trips for travel expenses, as opposed to inputting every line of travel expense.
Simplify the process. Reduce the number of approval levels (threshold-based). Implement rolling forecasts. Report within the planning tool; keep immediacy where needed. Think outcomes and options, not inputs. Transform your existing process; do not re-implement existing Excel-based processes in Anaplan.
Granularity. Aggregate transactions to SKU level or customer ID. Plan at a higher level and cascade down. Plan the number of TBH by role for TBH headcount expenses, as opposed to inputting every TBH employee. Sales: plan at sub-region level and cascade to rep level. Plan at profit center level; allocate at cost center level based on drivers.
The Anaplan Way. Always follow the phases of The Anaplan Way when establishing a master data hub, even in a federated approach: Pre-Release Phase, Foundation Phase, Implementation Phase, Testing Phase, Deployment Phase.
Overview: Imports are blocking operations. The model is locked for the duration of the import, concurrent imports run by end users have to run one after the other, and each one blocks the model for everyone else.

Rule 1: Carefully decide if you let end users import (and export) during business hours

Imports executed by end users should be carefully considered and, if possible, executed only once or twice a day. Customers readily accept a model freeze at scheduled hours for a predefined time, even if it takes 10+ minutes, but are frustrated when these imports are run randomly during business hours by anyone. Your first optimization is to adjust the process: have an admin run these imports at scheduled times and let the user base know the schedule.

Rule 2: Mapping objective = zero errors or warnings

Make sure your import returns no errors or warnings; every error takes processing time. The time to import into a medium-to-large list (>50k items) is significantly reduced if no errors have to be processed. Here are tips to reduce errors: Always import from a saved view—NEVER from the default view—and use the naming convention for easy maintenance. Hide the line items that are not needed for the import; do not bring extra columns that are not needed. In the import definition, always map all displayed line items (source → target) or use the "ignore" setting; don't leave any line item unmapped.

Rule 3: Watch the formulas recalculated during the import

If your end users encounter poor performance when clicking a button that triggers an import or a process, it is likely due to the recalculation triggered by the import, especially if the action creates or moves items within a hierarchy. You will likely need the help of Anaplan support (L3) to identify which formulas are triggered after the import is done, and to get a performance check on these formulas to identify which ones take most of the time. Usually, those fetching many cells, such as SUM, ANY, or FINDITEM(), are likely to be responsible for the performance impact. To solve such situations, you will need to challenge the need to recalculate the identified formulas each time a user calls the action. Often, for actions such as creations, moves, and assignments done in workforce planning or territory planning, many calculations used for reporting are triggered in real time after the hierarchy is modified by the import, and they are not necessarily needed immediately by users. The recommendation is to challenge your customer and see if these formulas couldn't be calculated only once a day, instead of each time a user runs the action. If so, you'll need to re-architect your modules so that these heavy formulas run through a different process, executed daily by an admin rather than by each end user.

Rule 4: Import list properties

Importing list properties takes more time than importing them as module line items. Review your model's lists impacted by imports, and consider replacing list properties with module line items when possible. Also, please refer to the data hub best practices, where we recommend uploading all list properties into a data hub module rather than into the list properties themselves.

Rule 5: Get your data hub

Hub and spoke: set up a hub data model, which will feed the other production models used by stakeholders. See the white paper on how to build a data hub. Performance benefits: it prevents production models from being blocked by a large import from an external data source.
But since data hub to production model imports are still blocking operations, carefully filter what you import and use the best practice rules listed above. All import and mapping/transformation modules required to prepare the data to be loaded into planning modules can now be located in a dedicated data hub model rather than in the planning model. The planning model will then be smaller and will work more efficiently.

Reminder of the other benefits not linked to performance:
Better structure, easier maintenance: a data hub helps keep all the data organized in a central location.
Better governance: whenever possible, put the data hub in a different workspace. That eases the separation of duties between production models and metadata management, at least for actuals data and production lists. The IT department will love the idea of owning the data hub and having no one else be an admin in that workspace.
Lower implementation costs: a data hub is a way to reduce the implementation time of new projects. Assuming IT can load the data needed by the new project into the data hub, business users do not have to integrate with complex source systems, but with the Anaplan data hub instead.

Rule 6: Incremental import/export

This can be the magic bullet in some cases. If you export on a frequent basis (daily or more often) from an Anaplan model into a reporting system, write back to the source system, or simply transfer data from one Anaplan model to another, you have ways to import/export only the data that has changed since the last export. Use the concatenation + change Boolean technique explained in the data hub white paper.
Problem to solve: As an HR manager, I need to enter the salary raise numbers for the multiple regions I'm responsible for. As a domain best practice, my driver-based model helps me enter raise guidelines, which then cascade down to the employee level.

Usability issue addressed: I have ten regions, with eight departments in each, and a total of 10,000+ employees. I need to align my bottom-up plan to the top-down target I received earlier. I need to quickly identify which regions are above or behind target and address the variance. My driver-based raise modeling is fairly advanced, and I need to see what the business rules are. I also need to quickly see how it impacts the employee level.

Call to action:
Step 1: Spot which region I need to address.
Step 2: Drill into the variances by department.
Steps 1 and 2 are analytics steps: "As an end user, I focus first on where the biggest issues are." This is a good usability practice that helps users.
Step 3: Adjust the guidelines (drivers). There are no lengthy instructions on how to build and use guidelines, which would have cluttered the dashboard. Instead, Anaplan added a "view guideline instruction" button. This button should open a dashboard dedicated to detailed instructions or link to a video that explains how the guidelines work. Impact analysis: the chart above the grid adjusts as guidelines are edited. That is a good practice for impact analysis: no scrolling or clicking is needed to view how the changes will impact the plan.
Step 4: Review a summary of the variance after changes are made. Putting steps 1–4 close to each other is a usable way of indicating to users that they need to iterate through these four steps to achieve their objective: to have every region and every department within the top-down target.
Step 5: A detailed impact analysis, placed directly below steps 3 and 4. This allows end users to drill into the employee-level details and view the granular impact of the raise guidelines.

Notice the best practices in step 5: The customer will likely ask to see 20 to 25 employee KPIs across all employees and will be tempted to display these as one large grid. This can quickly lead to an unusable grid made up of thousands of rows (employees) across 25 columns. Instead, we have narrowed the KPI list to only ten that display without left-right scrolling. The criterion for selecting these ten: the ability to chart and compare employees by these KPIs. The remaining KPIs are displayed as an info grid, which only displays values for the selected employee. Items like region, zip code, and dates are removed from the grid, as they do not need to be compared side-by-side with other KPIs or between employees.
I recently posted a Python library for version 1.3 of our API. With the GA announcement of API 2.0, I'm sharing a new library that works with these endpoints. Like the previous library, it supports certificate authentication; however, it requires the private key in a particular format (documented in the code, and below). I'm also pleased to announce that the use of a Java keystore is now supported. Note: While all of these scripts have been tested and found to be fully functional, due to the vast number of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction. This library is a work in progress and will be updated with new features once they have been tested.

Getting Started

The attached Python library serves as a wrapper for interacting with the Anaplan API. This article explains how you can use the library to automate many of the requests that are available in our Apiary, which can be found at https://anaplanbulkapi20.docs.apiary.io/#. This article assumes you have the requests and M2Crypto modules installed, as well as Python 3.7. Please make sure you are installing these modules with Python 3, and not for an older version of Python. For more information on these modules, please see their respective websites: Python (if you are using a Python version older or newer than 3.7, we cannot guarantee the validity of this article), Requests, and M2Crypto. Note: Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions that each script makes.

Gathering the Necessary Information

In order to use this library, the following information is required: Anaplan model ID, Anaplan workspace ID, Anaplan action ID, and a CA certificate key pair (private key and public certificate) or username and password. There are two ways to obtain the model and workspace IDs: while the model is open, go to Help > About, or select the workspace and model IDs from the URL.

Authentication

Every API request is required to supply valid authentication. There are two ways to authenticate: certificate authentication and basic authentication. For full details about CA certificates, please refer to our Anapedia article. Basic authentication uses your Anaplan username and password. To create a connection with this library, define the authentication type and details, and the Anaplan workspace and model IDs.

Certificate files:
conn = AnaplanConnection(anaplan.generate_authorization("Certificate","<path to private key>", "<path to public certificate>"), "<workspace ID>", "<model ID>")

Basic:
conn = AnaplanConnection(anaplan.generate_authorization("Basic","<Anaplan username>", "<Anaplan password>"), "<workspace ID>", "<model ID>")

Java keystore:
from anaplan_auth import get_keystore_pair
key_pair=get_keystore_pair('/Users/jessewilson/Documents/Certificates/my_keystore.jks', '<passphrase>', '<key alias>', '<key passphrase>')
privKey=key_pair[0]
pubCert=key_pair[1]
#Instantiate AnaplanConnection without workspace or model IDs
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", privKey, pubCert), "", "")

Note: In the above code, you must import the get_keystore_pair method from the anaplan_auth module in order to pull the private key and public certificate details from the keystore.

Getting Anaplan Resource Information

You can use this library to get the necessary file or action IDs.
This library builds a Python key-value dictionary, which you can search to obtain the desired information. Example:

list_of_files = anaplan.get_list(conn, "files")
files_dict = anaplan_resource_dictionary.build_id_dict(list_of_files)

This code builds a dictionary with the file name as the key. The following code will return the ID of the file:

users_file_id = anaplan_resource_dictionary.get_id(files_dict, "file name")
print(users_file_id)

To build a dictionary of other resources, replace "files" with the desired resource: actions, exports, imports, processes. You can use this functionality to easily refer to objects (workspace, model, action, file) by name, rather than ID. Example:

#Fetch the name of the process to run
process=input("Enter name of process to run: ")
start = datetime.utcnow()
with open('/Users/jessewilson/Desktop/Test results.txt', 'w+') as file:
    file.write(anaplan.execute_action(conn, str(ard.get_id(ard.build_id_dict(anaplan.get_list(conn, "processes"), "processes"), process)), 1))
file.close()
end = datetime.utcnow()

The code above prompts for a process name, queries the Anaplan model for a list of processes, builds a key-value dictionary based on the resource name, searches that dictionary for the user-provided name, executes the action, and writes the results to a local file.

Uploads

You can upload a file of any size and define a chunk size up to 50 MB. The library loops through the file or memory buffer, reading chunks of the specified size and uploading them to the Anaplan model.

Flat file:
upload = anaplan.file_upload(conn, "<file ID>", <chunkSize (1-50)>, "<path to file>")

"Streamed" file:
with open('/Users/jessewilson/Documents/countries.csv', "rt") as f:
    buf=f.read()
f.close()
print(anaplan.stream_upload(conn, "113000000000", buf))
print(anaplan.stream_upload(conn, "113000000000", "", complete=True))

The above code reads a flat file and saves the data to a buffer (this can be replaced with any data source; it does not necessarily need to read from a file). This data is then passed to the "streaming" upload method. This method does not accept the chunk size input; instead, it simply ensures that the data in the buffer is less than 50 MB before uploading. You are responsible for ensuring that the data you've extracted is appropriately split. Once you've finished uploading the data, you must make one final call to mark the file as complete and ready for use by Anaplan actions.

Executing Actions

You can run any Anaplan action with this script and define the number of times to retry the request if there's a problem. In order to execute an Anaplan action, its ID is required. To execute, all that is required is the following:

run_job = execute_action(conn, "<action ID>", "<retryCount>")
print(run_job)

This will run the desired action, loop until complete, then print the results to the screen. If failure dump(s) exist, they will also be returned. Example output:

Process action 112000000082 completed. Failure: True
Process action 112000000079 completed.
Failure: True Details: hierarchyName Worker Report successRowCount 0 successCreateCount 0 successUpdateCount 0 warningsRowCount 435 warningsCreateCount 0 warningsUpdateCount 435 failedCount 4 ignoredCount 0 totalRowCount 439 totalCreateCount 0 totalUpdateCount 435 invalidCount 4 updatedCount 435 renamedCount 435 createdCount 0 lineItemName Code rowCount 0 ignoredCount 435 Failure dump(s): Error dump for 112000000082 "_Status_","Employees","Parent","Code","Prop1","Prop2","_Line_","_Error_1_" "E","Test User 2","All employees","","101.1a","1.0","2","Error parsing key for this row; no values" "W","Jesse Wilson","All employees","a004100000HnINpAAN","","0.0","3","Invalid parent" "W","Alec","All employees","a004100000HnINzAAN","","0.0","4","Invalid parent" "E","Alec 2","All employees","","","0.0","5","Error parsing key for this row; no values" "W","Test 2","All employees","a004100000HnIO9AAN","","0.0","6","Invalid parent" "E","Jesse Wilson - To Delete","All employees","","","0.0","7","Error parsing key for this row; no values" "W","#1725","All employees","69001","","0.0","8","Invalid parent" [...] "W","#2156","All employees","21001","","0.0","439","Invalid parent" "E","All employees","","","","","440","Error parsing key for this row; no values" Error dump for 112000000079 "Worker Report","Code","Value 1","_Line_","_Error_1_" "Jesse Wilson","a004100000HnINpAAN","0","434","Item not located in Worker Report list: Jesse Wilson" "Alec","a004100000HnINzAAN","0","435","Item not located in Worker Report list: Alec" "Test 2","a004100000HnIO9AAN","0","436","Item not located in Worker Report list: Test 2 Downloading a File If the above code is used to execute an export action, the fill will not be downloaded automatically. To get this file, use the following: download = get_file(conn, "<file ID>", "<path to local file>") print(download) This will save the file to the desired location on the local machine (or mounted network share folder) and alert you once the download is complete, or warn you if there is an error. Get Available Workspaces and Models API 2.0 introduced a new means of fetching the workspaces and models available to a given user. You can use this library to build a key-value dictionary (as above) for these resources. #Instantiate AnaplanConnection without workspace or model IDs conn = AnaplanConnection(anaplan.generate_authorization("Certificate", privKey, pubCert), "", "") #Setting session variables uid = anaplan.get_user_id(conn) #Fetch models and workspaces the account may access workspaces = ard.build_id_dict(anaplan.get_workspaces(conn, uid), "workspaces") models = ard.build_id_dict(anaplan.get_models(conn, uid), "models") #Select workspace and model to use while True: workspace_name=input("Enter workspace name to use (Enter ? to list available workspaces): ") if workspace_name == '?': for key in workspaces: print(key) else: break while True: model_name=input("Enter model name to use (Enter ? to list available models): ") if model_name == '?': for key in models: print(key) else: break #Extract workspace and model IDs from dictionaries workspace_id = ard.get_id(workspaces, workspace_name) model_id = ard.get_id(models, model_name) #Updating AnaplanConnection object conn.modelGuid=model_id conn.workspaceGuid=workspace_id The above code will create an AnaplanConnection instance with only the user authentication defined. It queries the API to return the ID of the user in question, then queries for the available workspaces and models, and builds a dictionary with these results. 
You can then enter the name of the workspace and model you wish to use (or print to screen all available), then finally update the AnaplanConnection instance to be used in all future requests.
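Putting a couple of the pieces above together: once a connection (conn) has been created as shown in the Authentication section, a short sketch that runs an export action and then downloads the resulting file could look like the following. The action ID, file ID, and output path are placeholders, and only calls already shown in this article are used:

# Run an export action, retrying up to 3 times if the server is busy
results = anaplan.execute_action(conn, "<export action ID>", 3)
print(results)

# Download the file produced by the export to a local path
download = get_file(conn, "<file ID>", "/path/to/local/export_output.csv")
print(download)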
This article covers the necessary steps for you to migrate your Anaplan Connect (AC) 1.3.x.x script to Anaplan Connect 1.4. For more details and examples, refer to the Anaplan Connect User Guide v1.4. The changes are: new connectivity parameters; replacing references to an Anaplan certificate with Certificate Authority (CA) certificates using new parameters; optional chunk size and retry parameters; and changes to the JDBC configuration.

New Connectivity Parameters

Add the following parameters to your Anaplan Connect 1.4 integration scripts. These parameters provide connectivity to Anaplan and Anaplan authentication services. Both of the URLs listed below need to be whitelisted with your network team.

-service "https://api.anaplan.com/"
-auth "https://auth.anaplan.com"

Certificate Changes

As noted in our Anaplan-generated Certificates to Expire December 10, 2018 blog post, new and updated Anaplan integration options support Certificate Authority (CA) certificates for authentication. Basic authentication is still available in Anaplan Connect 1.4; however, the use of certificates has changed. In Anaplan Connect 1.3.x.x, the script references the full path to the certificate file. For example:

-certificate "/Users/username/Documents/AnaplanConnect1.4/certificate.pem"

In Anaplan Connect 1.4, the CA certificate must be stored in a Java Key Store (JKS). Refer to this video for a walkthrough of the process of getting the CA certificate into the key store. You can also refer to the Anaplan Connect User Guide v1.4 for steps to create the Java key store. Once you have imported the key into the JKS, make note of this information:
The path to the JKS (directory path on the server where the JKS is saved)
The password to the JKS
The alias of the certificate within the JKS
For example:

KeyStorePath ="/Users/username/Documents/AnaplanConnect1.4/my_keystore.jks"
KeyStorePass ="your_password"
KeyStoreAlias ="keyalias"

To pass these values to Anaplan Connect 1.4, use these command line parameters:

-keystore {KeystorePath}
-keystorealias {KeystoreAlias}
-keystorepass {KeystorePass}

Chunksize

Anaplan Connect 1.4 allows for custom chunk sizes on files being imported. The -chunksize parameter can be included in the call, with the value being the size of the chunks in megabytes:

-chunksize {SizeInMBs}

Retry

Anaplan Connect 1.4 allows the client to retry requests to the server in the event that the server is busy. The -maxretrycount parameter defines the number of times the process retries the action before exiting. The -retrytimeout parameter is the time in seconds that the process waits before the next retry.

-maxretrycount {MaxNumberOfRetries}
-retrytimeout {TimeoutInSeconds}

Changes to JDBC Configuration

With Anaplan Connect 1.3.x.x, the parameters and query for using JDBC are stored within the Anaplan Connect script itself. For example:

Operation="-file 'Sample.csv' -jdbcurl 'jdbc:mysql://localhost:3306/mysql?useSSL=false' -jdbcuser 'root:Welcome1' -jdbcquery 'SELECT * FROM py_sales' -import 'Sample.csv' -execute"

With Anaplan Connect 1.4, the parameters and query for using JDBC have been moved to a separate file. The name of that file is then added to the AnaplanClient call using the -jdbcproperties parameter. For example:

Operation="-auth 'https://auth.anaplan.com' -file 'Sample.csv' -jdbcproperties 'jdbc_query.properties' -chunksize 20 -import 'Sample.csv' -execute "

To run multiple JDBC calls in the same operation, a separate jdbcproperties file will be needed for each query.
Each set of calls in the operation should include the following parameters: -file, -jdbcproperties, -import, and -execute. In the code sample below, the two calls appear back to back within the same Operation string. For example:
Operation="-auth 'https://auth.anaplan.com' -file 'SampleA.csv' -jdbcproperties 'SampleA.properties' -chunksize 20 -import 'SampleA Load' -execute -file 'SampleB.csv' -jdbcproperties 'SampleB.properties' -chunksize 20 -import 'SampleB Load' -execute"

JDBC Properties File
Below is an example of the jdbcproperties file. Refer to the Anaplan Connect User Guide v1.4 for more details on the properties shown below. If the query statement is long, it can be broken up over multiple lines by using the \ character at the end of each line. No \ is needed on the last line of the statement. The \ must be at the end of the line and nothing can follow it.
jdbc.connect.url=jdbc:mysql://localhost:3306/mysql?useSSL=false
jdbc.username=root
jdbc.password=Welcome1
jdbc.fetch.size=5
jdbc.isStoredProcedure=false
jdbc.query=select * \
from mysql.py_sales \
where year = ? and month !=?;
jdbc.params=2018,04

Anaplan Connect Windows BAT Script Example (with Cert Auth)
@echo off
rem This example lists a user's workspaces
set ServiceLocation="https://api.anaplan.com/"
set Keystore="C:\Your Cert Name Here.jks"
set KeystoreAlias=""
set KeystorePassword=""
set WorkspaceId="Enter WS ID Here"
set ModelId="Enter Model ID here"
set Operation=-service "https://api.anaplan.com" -auth "https://auth.anaplan.com" -W
rem *** End of settings - Do not edit below this line ***
setlocal enableextensions enabledelayedexpansion || exit /b 1
cd %~dp0
set Command=.\AnaplanClient.bat -s %ServiceLocation% -k %Keystore% -ka %KeystoreAlias% -kp %KeystorePassword% -workspace %WorkspaceId% -model %ModelId% %Operation%
@echo %Command%
cmd /c %Command%
pause

Anaplan Connect Shell Script Example with Cert Auth
#!/bin/sh
KeyStorePath="/path/Your Cert Name.jks"
KeyStorePass=""
KeyStoreAlias=""
WorkspaceId="Enter WS ID Here"
ModelId="Enter Model Id Here"
Operation="-service "https://api.anaplan.com" -auth "https://auth.anaplan.com" -W"
#________________ Do not edit below this line __________________
if [ "${KeyStorePath}" ]; then
    Credentials="-keystore ${KeyStorePath} -keystorepass ${KeyStorePass} -keystorealias ${KeyStoreAlias}"
fi
echo cd "`dirname "$0"`"
cd "`dirname "$0"`"
if [ ! -f AnaplanClient.sh ]; then
    echo "Please ensure this script is in the same directory as AnaplanClient.sh." >&2
    exit 1
elif [ ! -x AnaplanClient.sh ]; then
    echo "Please ensure you have executable permissions on AnaplanClient.sh." >&2
    exit 1
fi
Command="./AnaplanClient.sh ${Credentials} ${Operation}"
/bin/echo "${Command}"
exec /bin/sh -c "${Command}"
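For reference, here is a minimal sketch, not taken from the official guide, of what a migrated Operation line can look like once the new parameters are combined in a Windows batch script. The file names, import action name, output path, and numeric values are placeholders, and -put (the standard Anaplan Connect upload operation carried over from 1.3.x.x, not covered in this article) is assumed for staging the local file; the keystore parameters stay on the Command line exactly as in the Windows example above.

set Operation=-service "https://api.anaplan.com" -auth "https://auth.anaplan.com" ^
 -chunksize 20 -maxretrycount 3 -retrytimeout 30 ^
 -file "Sample.csv" -put "C:\Data\Sample.csv" ^
 -import "Sample.csv" -execute -output "C:\AnaplanConnectErrors\Sample-errors.txt"

Everything else in the script (the Command assembly, cd %~dp0, and the cmd /c call) stays the same as in the example above.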
View full article
Summary
We explain here a dynamic way to filter specific levels of a hierarchy. This provides a better way to filter and visualize hierarchies.

Overview
This tutorial explains how to calculate the level of each item of a hierarchical list in order to apply specific calculations (custom summaries) or filters by level. In this example we have an organization hierarchy of four levels (Org L1 to Org L4). For each item in the hierarchy we want to calculate a filtering module value that returns the associated level.

Context and notes
This technique addresses a specific limitation within dashboards where a composite hierarchy's level cannot be selected if the list is synchronized to multiple module objects on the dashboard. We show the technique of creating a static filtering module based on the levels of the composite structure. The technique uses the Ratio summary method on line items corresponding to the list levels to define the value of the filtering line items. Note that it is not a formula calculation but a use of the Ratio summary method applied to the composite hierarchy.

Example list
In this example, we use a four-level list: Org L1 > Org L2 > Org L3 > Org L4.

Defining the level of each list item
In order to calculate the level of each item in the lists, we need to create a module that calculates it by:
Creating as many line items as there are levels in the hierarchy, plus one technical line item.
Changing the settings in the blueprint of those line items according to the following table:
Line Item: Technical | Formula: 1 | Applies to: (empty) | Summary method: Formula
Line Item: Level (or L4, lowest level) | Formula: 4 | Applies to: Org L4 | Summary method: Ratio* | Ratio: L3 / Technical
Line Item: L3 | Formula: 3 | Applies to: Org L3 | Summary method: Ratio | Ratio: L2 / Technical
Line Item: L2 | Formula: 2 | Applies to: Org L2 | Summary method: Ratio | Ratio: L1 / Technical
Line Item: L1 | Formula: 1 | Applies to: Org L1 | Summary method: Ratio | Ratio: L1 / Technical
*Note that the Technical line item uses the Formula summary method. The Minimum summary method could be used instead, but it will return an error when a level of the hierarchy does not have any children and the calculated level is blank.

We can now use the line item at the lowest level, "Level (or L4)" in the example, as the basis of filters or calculations.

Applying a filter on specific levels when synchronization is enabled
When synchronization is enabled, the option "Select levels to show" is not available. Instead, a filter based on the calculated level can be used to show only specific levels. In the example, we apply a filter on levels 4 and 1 to show only those levels.
View full article
Introduction
The new Anaplan APIs and integration connectors leverage Certificate Authority (CA)-issued certificates. These certificates can be obtained through your company's intermediary CA (typically issued by IT) or by purchasing one from a trusted Certificate Authority. Anaplan clients leveraging REST API v2.0 support both basic authentication and CA certificate-based authentication. Examples of these clients include Anaplan Connect 1.4, the Informatica Anaplan Connector, and MuleSoft 2.0.1. If you are migrating your Anaplan Connect scripts from v1.3 to v1.4, your available options for authentication are basic authentication or CA certificate-based authentication. This article outlines the steps to perform in preparation for CA certificate authentication.

Steps to prepare for CA certificate authentication
Obtain a certificate from a CA authority
Convert the CA certificate to either a p12 or pfx file
Import the CA certificate into Internet Explorer/Mozilla Firefox to convert it to a p12/pfx file
Export the CA certificate from Internet Explorer/Mozilla Firefox to convert it to a p12/pfx file
Optional: Install the openssl tool
Convert the p12/pfx file into a Java Keystore
Manage CA certificates in Anaplan Tenant Administrator
Validate CA certificate authentication via an Anaplan Connect 1.4 script

Obtain a certificate from a CA authority
You can obtain a certificate from a CA authority by submitting a request, or by submitting a request with a certificate signing request (CSR) containing your private key. Contact your IT or Security Operations organization to determine if your company already has an existing relationship with a CA or intermediary CA. If your organization has an existing relationship with a CA or intermediate CA, you can request that a client certificate be issued for your integration user. If your organization does not have an existing CA relationship, you should contact a valid CA to procure a client certificate.

Convert the CA certificate to either a p12 or pfx file

Import the CA certificate into IE/Firefox to convert it to a p12/pfx file
This section presents the steps to import the CA certificate into Internet Explorer and Mozilla Firefox. The CA certificate will be exported in the next section to either a p12 or pfx format. CA certificates may have .crt or .cer as file extensions.

Internet Explorer
Within Internet Explorer, click the Settings icon and select Internet options.
Navigate to the Content tab and then click Certificates.
Click Import to launch the Certificate Import Wizard.
Click Browse to search for and select the CA certificate file. This file may have a file extension of .crt or .cer.
If a password was used when requesting the certificate, enter it in this screen. Ensure that the "Mark this key as exportable" option is selected and click Next.
Select the certificate store in which to import the certificate and click Next.
Review the settings and click Finish.
The certificate should appear in the selected certificate store.

Mozilla Firefox
Within Firefox, select Options from the settings menu.
In the Options window, click Privacy & Security in the navigation pane on the left. Scroll to the very bottom and click the View Certificates… button.
In the Certificate Manager, click the Import… button, select the certificate to convert, and click Open.
If a password was provided when the certificate was requested, enter that password and click OK.
The certificate should now show up in the Certificate Manager.
Export the CA certificate from IE/Firefox to convert it to a p12/pfx file
This section presents the steps to export the CA certificate from Internet Explorer (pfx) and Mozilla Firefox (p12).

Internet Explorer (pfx)
Select the certificate imported above and click the Export… button to initiate the Certificate Export Wizard.
Select the option "Yes, export the private key" and click Next.
Select the option for Personal Information Exchange – PKCS #12 (.PFX) and click Next.
Create a password, enter it, and confirm it in the following screen. This password will be used later in the process. Click Next to continue.
Select a location to export the file and click Save.
Verify the file location and click Next.
Review the export settings and ensure that the Export Keys setting says "Yes"; if not, start the export over. If all looks good, click Next. A message will appear when the export is successful.

Mozilla Firefox (p12)
To export the certificate from Firefox, click the Backup… button in the Certificate Manager. Select a location and a name for the file. Ensure that "Save as type:" is "PKCS12 Files (*.p12)". Click the Save button to continue.
Enter a password to be used later when exporting the public and private keys. Click the OK button to finish.

Install the openssl tool (Optional)
If you haven't done so already, install the openssl tool for your operating system. A list of third-party binary distributions can be found on www.openssl.org. Examples in this article are shown for the Windows platform.

Convert the p12/pfx file into a Java Keystore
Execute the following commands to export the public and private keys from the certificate exported above. In the commands listed below, replace the customer-specific values (such as the certificate file name and alias) with your own. Examples in this article assume the location of the certificate as the working directory. If you are executing these commands from a different directory (for example, ...\openssl\bin), ensure you provide the absolute directory path to all of the files.

Export the Public Key
The public key is exported from the certificate (p12/pfx) using the openssl tool. The result is a .pem file (public_key.pem) that will be imported into Anaplan using Anaplan's Tenant Administrator client.
NOTE: The command below will prompt for a password. This password was created in the export steps above.
openssl pkcs12 -clcerts -nokeys -in ScottSmithExportedCert.pfx -out public_key.pem

Edit the public_key.pem file
Remove everything before the -----BEGIN CERTIFICATE----- line. Ensure that the emailAddress value is populated with the user that will run the integrations.

Export the Private Key
This command will prompt for a password. This password is the password created in the export above. It will then prompt for a new password for the private key and ask you to confirm that password.
openssl pkcs12 -nocerts -in ScottSmithExportedCert.pfx -out private_key.pem

Create P12 Bundle
This command will prompt for the private key password from the step above. It will then prompt for a new password for the bundle and ask you to confirm that password.
openssl pkcs12 -export -in public_key.pem -inkey private_key.pem -out bundle.p12 -name Scott -CAfile public_key.pem -caname Scott
In the command above, public_key.pem is the file that was created in the step "Export the Public Key".
This is the file that will be registered with Anaplan using Anaplan Tenant Administrator. private_key.pem is the file that was created in the step "Export the Private Key". bundle.p12 is the output file from this command, which will be used in the next step to create the Java Keystore. Scott is the keystore alias.

Add to Java Keystore (jks)
Using keytool (typically found in <Java8>/bin), create a .jks file. This file will be referenced in your Anaplan Connect 1.4 scripts for authentication. The command below will prompt for a new password for the entry into the keystore and ask you to confirm it. It will then prompt for the bundle password from the step above.
keytool -importkeystore -destkeystore my_keystore.jks -srckeystore bundle.p12 -srcstoretype PKCS12
In the command above:
my_keystore.jks is the keystore file that will be referenced in your Anaplan Connect 1.4 scripts.
bundle.p12 is the P12 bundle that was created in the last step.

Manage CA certificates in Anaplan Tenant Administrator
In this step, you will add the public_key.pem file to the list of certificates in Anaplan Tenant Administrator. This file was created and edited in the first two steps of the last section.
Log on to Anaplan Tenant Administrator.
Navigate to Administration --> Security --> Certificates --> Add Certificate.

Validate CA certificate authentication via an Anaplan Connect 1.4 script
Since you will be migrating to CA certificate-based authentication, you will need to upgrade your Anaplan Connect installation and associated scripts from v1.3 to v1.4. The Community article Migrating from Anaplan Connect 1.3.x.x to Anaplan Connect 1.4 will guide you through the necessary steps. Follow the steps outlined in that article to edit and execute your Anaplan Connect 1.4 script. The examples provided (Windows & Linux) at the end of that article validate authentication to Anaplan using CA certificates and return the list of the user's workspaces in a tenant.
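As an optional sanity check before referencing the .jks file in a script, you can list the keystore contents with the JDK keytool. This uses the same illustrative file name as above, and keytool will prompt for the keystore password you created:

keytool -list -v -keystore my_keystore.jks

The output should show one PrivateKeyEntry, and its alias name is the value you will pass to the -keystorealias parameter.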
View full article
PLANS is the new standard for Anaplan modelling; “the way we model”. This will cover more than just the formulas and will include and evolve existing best practices around user experience and data hubs. The initial focus is to develop a set of rules on the structure and detailed design of Anaplan models. This set of rules will provide both a clear route to good model design for the individual Anaplanner, and common guidance on which Anaplanners and reviewers can rely when passing models amongst themselves.  In defining the standard, everything we do will consider or be based around: Performance – Use the correct structures and formulae to optimize the Hyperblock Logical – Build the models and formulae more logically – See D.I.S.C.O below Auditable – Break up formulae for better understanding, performance and maintainability Necessary – Don’t duplicate expressions, store reference data and attributes once, no unnecessary calculations Sustainable – Build with the future in mind, think about process cycles and updates        The standards will be based around three axes: Performance - How do the structures and formulae impact the performance of the system? Usability/Auditability - Is the user able to understand how to interact with the functionality? Sustainability - Can the solution be easily maintained by model builders and support? We will define the techniques to use that balance the three areas to ensure the optimal design of Anaplan models and architecture       D.I.S.C.O As part of model and module design we recommend categorizing modules as follows: Data – Data hubs, transactional modules, source data; reference everywhere Inputs – Design for user entry, minimize the mix of calculations and output System – Time management, filters, mappings etc.; reference everywhere Calculations – Optimize for performance (turn summaries off, combine structures) Outputs -  Reporting modules, minimize data flows out
View full article
Dimension Order affects Calculation Performance
Ensuring consistency in the order of dimensions will help improve the performance of your models. This consistency is relevant for modules and individual line items.

Why does the order matter?
Anaplan creates and uses indexes to perform calculations. Each cell in a module where dimensions intersect is given an index number. Consider two simple modules dimensioned by Customer and Product: in the first module Product comes first and Customer second, while in the second module Customer is first and Product second. In this model, there is a third module that calculates revenue as Prices * Volumes. Anaplan assigns indexes to the intersections in each module. Some of the intersections are indexed the same for both modules (Customer 1 and Product 1, Customer 2 and Product 2, and Customer 3 and Product 3), while the remainder of the cells have a different index number. Customer 1 and Product 2, for example, is indexed with the value of 4 in the first module and the value of 2 in the second module.
The calculation is Revenue = Price * Volume. To run the calculation, Anaplan matches the index values from the two modules. Since the index values are not aligned, the processor must scan the index values to find a match before performing the calculation. When the dimensions in the module are reordered, the index values for the two modules are aligned. As line items of the same dimensional structure have an identical layout, the data is laid out linearly in memory, and the calculation process accesses memory in a completely linear and predictable way. Anaplan's microprocessors and memory sub-systems are optimized to recognise this pattern of access and to pre-emptively fetch the required data.

How does the dimension order become different between modules?
When you build a module, Anaplan uses the order in which you drag the lists onto the Create Module dialog. The order is also dependent on where the lists are added: the lists that you add to the pages area come first, then the lists that you add to the rows area, and finally the lists added to the columns area.
It is simple to re-order the lists and ensure consistency. Follow these steps:
1. On the Modules pane (Model Settings > Modules), look for lists that are out of order in the Applies To column.
2. Click the Applies To row that you want to re-order, then click the ellipsis.
3. In the Select Lists dialog, click OK.
4. In the Confirm dialog, click OK.
The lists will now be in the order that they appear in General Lists. When you have completed checking the list order in the modules, click the Line Items tab and check the line items. Follow steps 1 through 3 to re-order the lists.

Subsets and Line Item Subsets
One word of caution about subsets and line item subsets. If you add a subset and a line item subset to a module and then click the ellipsis, the dimensions are re-ordered so that the general lists are listed in order first, followed by subsets and then line item subsets. You can still re-order the dimensions by double-clicking in the Applies To column and manually copying or typing the dimensions in the correct order.

Other Dimensions
The calculation performance relates to the common lists between the source(s) and the target. The order of separate lists in one or the other doesn't have any bearing on the calculation speed.
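To make the index alignment described above concrete, here is an illustrative sketch. It is not taken from the product; it simply assumes three customers, three products, and a row-major index assignment, and shows how the same intersections can receive different index numbers when the dimension order differs:

Module ordered Product, Customer:  P1-C1=1  P1-C2=2  P1-C3=3  P2-C1=4  P2-C2=5  P2-C3=6  P3-C1=7  P3-C2=8  P3-C3=9
Module ordered Customer, Product:  C1-P1=1  C1-P2=2  C1-P3=3  C2-P1=4  C2-P2=5  C2-P3=6  C3-P1=7  C3-P2=8  C3-P3=9

Under this assumption, the diagonal intersections (Customer 1/Product 1, Customer 2/Product 2, Customer 3/Product 3) share the same index in both layouts, while Customer 1/Product 2 is index 4 in the first layout and index 2 in the second, so the engine has to re-map indexes before it can multiply Price by Volume.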
View full article
General recommendations
First, the bigger your model is, the more performance issues you are likely to experience. A best practice is therefore to use all of the available tools and features to make the model as small and dense as possible. This includes:
Line item checks: summary calculations, dimensionality used
Line item duplication
Granularity of hierarchies
Use of subsets and line item subsets
Numbered lists
More information on eliminating sparsity can be found in Learning Center courses 309 and 310.

Customer requirements
General recommendations also include, whenever possible, challenging your customer's business requirements when they require large lists (>1M items), long data history, and a high number of dimensions used at the same time for a line item (>5).

Other practices
Once these general and basic sparsity recommendations have been applied, you can further improve performance in different areas. The articles below expand on each subject:
Imports and exports and their effects on model performance
Rule 1: Carefully decide if you let end users import (and export) during business hours
Rule 2: Mapping objective = zero errors or warnings
Rule 3: Watch the formulas recalculated during the import
Rule 4: Import list properties
Rule 5: Get your Data Hub
Rule 6: Incremental import/export
Dashboard settings that can help improve model performance
Rule 1: Large lists = filter these on a Boolean, not on text
Rule 2: Use the default sort
Rule 3: Reduce the number of dashboard components
Rule 4: Watch large page drop-downs
Formulas and their effect on model performance
Model load, model save, model rollback and their effect on model performance
User roles and their effect on model performance
View full article
Overview
In many situations, enterprises need to split very large and complex models for various reasons, including:
Performance issues, including data volume and user concurrency
Security considerations
Metadata time cycle differences
Regional / business process differences

Performance issues
Anaplan is a platform designed to enable businesses to build models in almost endless configurations, so there is no pre-set size recommendation for when a model should be distributed. It is not uncommon for a 15-billion-cell model performing complex calculations to remain a single model, used by only a single person or just a few people. However, in contrast to that, it is also not uncommon to have a distributed model as small as 1 billion cells, with complex calculations and multiple people in multiple locations using the model. As a general guide, the following table takes into consideration the factors that influence a single-model or distributed-model solution.
Sample Model 1: Complex Calculations* = Yes | Large Data Volumes (> 10GB)* = No | High User Concurrency* = Yes | Solution = Single model
Sample Model 2: Complex Calculations* = Yes | Large Data Volumes (> 10GB)* = Yes | High User Concurrency* = Yes | Solution = Distributed
Sample Model 3: Complex Calculations* = No | Large Data Volumes (> 10GB)* = Yes | High User Concurrency* = Yes | Solution = Depends on actual volume
Sample Model 4: Complex Calculations* = No | Large Data Volumes (> 10GB)* = No | High User Concurrency* = No | Solution = Single model
Sample Model 5: Complex Calculations* = Yes | Large Data Volumes (> 10GB)* = Yes | High User Concurrency* = No | Solution = Depends on actual volume
Sample Model 6: Complex Calculations* = No | Large Data Volumes (> 10GB)* = No | High User Concurrency* = Yes | Solution = Depends on user concurrency
Sample Model 7: Complex Calculations* = No | Large Data Volumes (> 10GB)* = Yes | High User Concurrency* = No | Solution = Depends on actual volume
* As always, apply appropriate testing and tuning to optimize the model. Different combinations can have a dramatic effect on desired performance and experience.

Security considerations
Anaplan has robust security across its platform. In some cases, it's possible to achieve region-specific experiences using selective access. If this is the case, then distributed models are not necessary. But in mixed environments where model builders and end users operate in the same model, and where various business processes exist, at times it makes sense to separate or distribute models rather than have them in a single instance. For example, you may have different countries that all need access to a workforce planning application. You also have model builders from each country modeling and maintaining their section. By distributing the models and restricting access, this problem is abated. Note: Where there is a need to segregate administration (model builder) roles, the split models will need to be in different workspaces, as the admin role is by workspace, not by model.

Metadata time cycle differences
A single instance of a model serving the world across multiple time zones does not respect the different business cycles involved, and therefore updates to data and/or metadata of a model will affect the entire community, some of whom may be in the middle of their planning cycle. These changes may be small, but in many instances they are large-scale and frequent changes, which require pauses in the application cycle for end users. However, a configuration that does respect business cycles and time zones and distributes the model can be beneficial to the business, as business regions that are in down-time (e.g., in the middle of their night, when usage is very low) can independently carry out updates to data and metadata without affecting other regions.

ALM application: Metadata time cycle differences
Alternatively, ALM prevents pauses in the application cycle altogether by providing a development environment for each model. You may edit development models at any time without disrupting live production models for end users.
Then, once you have completed your edits on the development model, you may deploy them to live production models without any disruptions or down-time for end users. As a result, using ALM removes any risk for pauses in the application cycle for any user at any time. Regional / business process differences Similar to the workforce planning example above, regional differences may exist. It may not be practical to attempt to include all regional variances that exist across countries for workforce planning in a single instance. Much of the functionality would not be relevant to every region, and so confusion and frustration would occur, as well as complication of user interface. In this instance a distributed model would be the best solution. Another consideration is that of differing business processes. That is to say, both processes are intrinsically the same, but different enough to warrant separate treatment and business processes that are completely different. An example of this may be a process where a business updates a forecast. Perhaps they get to the same point in a revenue forecast, but how different parts or divisions of a business get to that point is different. One may do an initial bottom-up forecast, submit up to management for draft approval, and then do a final submit. Another may do a top-down approach where they set a target and that target needs to be validated. These are connected, yet separate, processes that may warrant separate instances of an application.   ALM application: Regional / business process differences If regional and business processes are similar between satellite models, and the metadata between them can be synced from a single development (primary) model, then ALM can be used to develop, test, and produce the single development model that feeds the satellite models. If the regional and/or business processes cannot conform to use the same metadata from a single development model, then multiple development models must be used. In this case, ALM would be used to update, test, and produce each development model, which would then feed into each respective satellite model.
View full article
A revision tag is a snapshot of a model's structural information at a point in time. Revision tags save all of the structural changes made in an application since the last revision tag was stored. By default, Anaplan allows you to add a title and description when creating a revision tag. This article covers:
Suggestions for naming revision tags
Creating a revisions tracking list and module
Note: For guidance on when to add revision tags, see When should I add revision tags?

Suggestions for naming revision tags
It's best to define a standard naming convention for your revision tags early in the model-building process. You may want to discuss with your Anaplan Business Partner or IT group whether there is an existing naming convention that would be best to follow. The following suggestions are designed to ensure consistency when there are a large number of changes or model builders, and to help the team choose the right revision tag when syncing a production application.
Option 1:
X.0 = Major revision/release
X.1, X.2, ... = Minor changes within a release
In this option, 1.0 indicates the first major release. As subsequent minor changes are tagged, they are noted as 1.1, 1.2, etc., until the next major release: 2.0.
Option 2:
YYYY.X = Major revision/release
YYYY.X.1, YYYY.X.2, ... = Minor changes within a release
In this option, YYYY indicates the year and X indicates the release number. For example, the first major release of 2017 would be 2017.1. Subsequent minor changes would be tagged 2017.1.1, 2017.1.2, etc., until the next major release of the year: 2017.2.

Creating a revisions tracking list and module
Revision tag descriptions are only visible from within Settings. That means it can be difficult for an end user to know what changes have been made in the current release. Additionally, there may be times when you want to store additional information about revisions beyond what is in the revision tag description. To provide release visibility in a production application, consider creating a revisions list and module to store key information about revisions.
Revisions list:
In your Development application, create a list called: Revisions
Do not set this list as Production. You want these list members to be visible in your production model.
Revisions details module:
In your Development application, create a module called: Revisions Details
Add your Revisions list
Remove Time
Add your line items
Since this module will be used to document release updates and changes, consider which of the following may be appropriate:
Details: What changes were made
Date: What date was this revision tag created
Model History ID: What was the Model History ID when this tag was created
Requested By: Who requested these changes?
Tested By: Who tested these changes?
Tested Date: When were these changes tested?
Approved By: Who signed off on these changes?
Note: Standard Selective Access rules apply to your production application. Consider who should be able to see this list and module as part of your application deployment.
View full article
In most use cases, a single model provides the solution you are seeking, but there are times when it makes sense to separate, or distribute, models rather than have them in a single instance. The following articles provide insight that can help you during the design process to determine if a distributed model is needed.
What is Application Lifecycle Management (ALM)?
What types of distributed models are there?
When should I consider a distributed model?
How do changes to the primary model impact distributed models?
What should I do after building a distributed model?
View full article
This article describes how to use the Anaplan DocuSign integration with single sign-on (SSO).
View full article
Summary
Anaplan Connect is a command-line client to the Anaplan cloud-based planning environment. It is a Java-based utility that can perform a variety of commands, such as uploading and downloading data files, executing relational SQL queries (for loading into Anaplan), and running Anaplan actions and processes. To enhance the deployment of Anaplan Connect, it is important to be able to trap error conditions, retry the Anaplan Connect operation, and integrate email notifications. This article provides best practices on how to incorporate these capabilities.
This article leverages the standard Windows command-line batch script and documents the various components and syntax of the script. In summary, the script has the following main components:
Set variable values such as exit codes, Anaplan Connect login parameters, and operations and email parameters
Run commands prior to running Anaplan Connect commands
Main loop block for multiple retries
Establish a log file based upon the current date and loop number
Run the native Anaplan Connect commands
Search for string criteria to trap error conditions
Branching logic based upon the discovery of any trapped error conditions
Send email success or failure notification of Anaplan Connect run status
Logic to determine if a retry is required
End main loop block
Run commands after running Anaplan Connect commands
Exit the script

Section #1: Setting Script Variables
The following section of the script establishes and sets the variables that are used in the script. The first three lines perform the following actions:
Clears the screen
Sets the default to echo all commands
Indicates to the operating system that variable values are strictly local to the script
The variables used in the script are as follows:
ERRNO – Sets the exit code to 0 unless set to 1 after multiple failed retries
COUNT – Counter variable used for looping multiple retries
RETRY_COUNT – Counter variable to store the max retry count (note: the /a switch indicates a numeric value)
AnaplanUser – Anaplan login credentials in the format as indicated in the example
WorkspaceId – Anaplan numerical or named Workspace ID
ModelId – Anaplan numerical or named Model ID
Operation – A combination of Anaplan Connect commands. Note that a ^ can be used to enhance readability by indicating that the current command continues on the next line
Domain – Email base domain. Typically in the format of company.com
Smtp – Email SMTP server
User – Email SMTP server user ID
Pass – Email SMTP server password
To – Target email address(es). To increase the email distribution, simply add additional -t switches and email addresses as in the example
From – From email address
Subject – Email subject line. Note that this is dynamically set later in the script.
cls echo on setlocal enableextensions REM **** SECTION #1 - SET VARIABLE VALUES **** set /a ERRNO=0 set /a COUNT=0 set /a RETRY_COUNT=2 REM Set Anaplan Connect Variables set AnaplanUser="<<Anaplan UserID>>:<<Anaplan UserPW>>" set WorkspaceId="<<put your WS ID here>>" set ModelId="<<put your Model ID here>>" set Operation=-import "My File" -execute ^ -output ".\My Errors.txt" REM Set Email variables set Domain="spg-demo.com" set Smtp="spg-demo" set User="fpmadmin@spg-demo.com" set Pass="1Rapidfpm" set To=-t "fpmadmin@spg-demo.com" -t "gburns@spg-demo.com" set From="fpmadmin@spg-demo.com" set Subject="Anaplan Connect Status" REM Set other types of variables such as file path names to be used in the Anaplan Connect "Operation" command Section #2: Pre Custom Batch Commands The following section allows custom batch commands to be added, such as running various batch operations like copy and renaming files or running stored procedures via a relational database command line interface. REM **** SECTION #2 - PRE ANAPLAN CONNECT COMMANDS *** REM Use this section to perform standard batch commands or operations prior to running Anaplan Connect Section #3: Start of Main Loop Block / Anaplan Connect Commands The following section of the script is the start of the main loop block as indicated by the :START. The individual components breakdown as follows: Dynamically set the name of the log file in the following date format and indicates the current loop number:   2016-16-06-ANAPLAN-LOG-RUN-0.TXT Delete prior log and error files Native out-of-the-box Anaplan Connect script with the addition of outputting the Anaplan Connect run session to the dynamic log file as highlighted here: cmd /C %Command% > .\%LogFile% REM **** SECTION #3 - ANAPLAN CONNECT COMMANDS *** :START REM Dynamically set logfile name based upon current date and retry count. set LogFile="%date:~-4%-%date:~7,2%-%date:~4,2%-ANAPLAN-LOG-RUN-%COUNT%.TXT" REM Delete prior log and error files del .\BAT_STAT.TXT del .\AC_API.ERR REM Out-of-the-box Anaplan Connect code with the exception of sending output to a log file setlocal enableextensions enabledelayedexpansion || exit /b 1 REM Change the directory to the batch file's drive, then change to its folder cd %~dp0 if not %AnaplanUser% == "" set Credentials=-user %AnaplanUser% set Command=.\AnaplanClient.bat %Credentials% -workspace %WorkspaceId% -model %ModelId% %Operation% @echo %Command% cmd /C %Command% > .\%LogFile% Section #4: Set Search Criteria The following section of the script enables trapping of error conditions that may occur with running the Anaplan Connect script. The methodology relies upon searching for certain strings in the log file after the AC commands execute. The batch command findstr can search for certain string patterns based upon literal or regular expressions and echo any matched records to the file AC_API.ERR. The existence of this file is then used to trap if an error has been caught. In the example below, two different patterns are searched in the log file. The output file AC_API.ERR is always produced even if there is no matching string. When there is no matching string, the file size will be an empty 0K file. Since the existence of the file determines if an error condition was trapped, it is imperative that any 0K files are removed, which is the function of the final line in the example below. 
REM **** SECTION #4 - SET SEARCH CRITERIA - REPEAT @FINDSTR COMMAND AS MANY TIMES AS NEEDED *** @findstr /c:"The file" .\%LogFile% > .\AC_API.ERR @findstr /c:"Anaplan API" .\%LogFile% >> .\AC_API.ERR REM Remove any 0K files produced by previous findstr commands @for /r %%f in (*) do if %%~zf==0 del "%%f" Section #5: Trap Error Conditions In the next section, logic is incorporated into the script to trap errors that might have occurred when executing the Anaplan Connect commands. The branching logic relies upon the existence of the AC_API.ERR file. If it exists, then the contents of the AC_API.ERR file are redirected to a secondary file called BAT_STAT.TXT and the email subject line is updated to indicate that an error occurred. If the file AC_API.ERR does not exist, then the contents of the Anaplan Connect log file is redirected to BAT_STAT.TXT and the email subject line is updated to indicate a successful run. Later in the script, the file BAT_STAT.TXT becomes the body of the email alert.  REM **** SECTION #5 - TRAP ERROR CONDITIONS *** REM If the file AC_API.ERR exists then echo errors to the primary BAT_STAT log file REM Else echo the log file to the primary BAT_STAT log file @if exist .\AC_API.ERR ( @echo . >> .\BAT_STAT.TXT @echo *** ANAPLAN CONNECT ERROR OCCURED *** >> .\BAT_STAT.TXT @echo -------------------------------------------------------------- >> .\BAT_STAT.TXT type .\AC_API.ERR >> .\BAT_STAT.TXT @echo -------------------------------------------------------------- >> .\BAT_STAT.TXT set Subject="ANAPLAN CONNECT ERROR OCCURED" ) else ( @echo . >> .\BAT_STAT.TXT @echo *** ALL OPERATIONS COMPLETED SUCCESSFULLY *** >> .\BAT_STAT.TXT @echo -------------------------------------------------------------- >> .\BAT_STAT.TXT type .\%LogFile% >> .\BAT_STAT.TXT @echo -------------------------------------------------------------- >> .\BAT_STAT.TXT set Subject="ANAPLAN LOADED SUCCESSFULLY" ) Section #6: Send Email In this section of the script, a success or failure email notification email will be sent. The parameters for sending are all set in the variable section of the script.  REM **** SECTION #6 - SEND EMAIL VIA MAILSEND *** @mailsend -domain %Domain% ^ -smtp %Smtp% ^ -auth -user %User% ^ -pass %Pass% ^ %To% ^ -f %From% ^ -sub %Subject% ^ -msg-body .\BAT_STAT.TXT Note: Sending email via SMTP requires the use of a free and simple Windows program known as MailSend. The latest release is available here:   https://github.com/muquit/mailsend/releases/ . Once downloaded, unpack the .zip file, rename the file to mailsend.exe and place the executable in the same directory where the Anaplan Connect batch script is located.  Section #7: Determine if a Retry is Required This is one of the final sections of the script that will determine if the Anaplan Connect commands need to be retried. Nested IF statements are typically frowned upon but are required here given the limited capabilities of the Windows batch language. The first IF test determines if the file AC_API.ERR exists. If this file does exist, then the logic drops in and tests if the current value of COUNT   is less than   the RETRY_COUNT. If the condition is true, then the COUNT gets incremented and the batch returns to the :START location (Section #3) to repeat the Anaplan Connect commands. If the condition of the nested IF is false, then the batch goes to the end of the script to exit with an exit code of 1.  
REM **** SECTION #7 - DETERMINE IF A RETRY IS REQUIRED ***
@if exist .\AC_API.ERR (
    @if %COUNT% lss %RETRY_COUNT% (
        @set /a COUNT+=1
        @goto :START
    ) else (
        set /a ERRNO=1
        @goto :END
    )
) else (
    set /a ERRNO=0
)

Section #8: Post Custom Batch Commands
The following section allows custom batch commands to be added, such as running various batch operations like copying and renaming files, or running stored procedures via a relational database command-line interface. Additionally, this would be the location to add functionality to bulk insert flat-file data exported from Anaplan into a relational target via tools such as Oracle SQL Loader (SQLLDR) or Microsoft SQL Server Bulk Copy (BCP).
REM **** SECTION #8 - POST ANAPLAN CONNECT COMMANDS ***
REM Use this section to perform standard batch commands or operations after running Anaplan Connect commands
:END
exit /b %ERRNO%

Sample Email Notifications
The success and error emails sent by the batch script are based upon the sample script in this document; the needed content from the log files is piped directly into the body of the email.
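Because the script ends with exit /b %ERRNO%, any wrapper script or scheduler can branch on the outcome. Here is a minimal sketch of a calling batch file; the .bat file name is a placeholder for whatever you name the script above:

@echo off
rem Run the Anaplan Connect wrapper script and react to its exit code
call "Anaplan-Load-With-Retry.bat"
if errorlevel 1 (
    echo Anaplan Connect run failed after all retries - review BAT_STAT.TXT and the run log.
    exit /b 1
)
echo Anaplan Connect run completed successfully.
exit /b 0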
View full article
This article provides the steps needed to create a basic time filter module. This module can be used as a point of reference for time filters across all modules and dashboards within a given model. The benefits of a centralized Time Filter module include:
One centralized point of governance for time filters.
Optimization of workspace, since the filters do not need to be re-created for each view; instead, you reuse the Time Filter module.
Step 1: Create a new module with two dimensions: time and line items. The example includes simple filters for Weeks Only, Months Only, Quarters Only, and Years Only.
Step 2: Line items should be Boolean formatted, and the time scale should be set in accordance with the scale identified in the line item name. The example also includes filters with and without summary methods, providing additional views depending on the level of aggregation desired. Once your preliminary filters are set, the module is ready to use.
Step 3: Use the pre-set time filters across various modules and dashboards. Simply click the filter icon in the toolbar, navigate to the Time tab, select your Time Filter module from the module selection screen, and select the line item of your choosing. You can use multiple line items at a time to filter your module or dashboard view.
View full article
Bring Your Own Key (BYOK) is now available. This enables designated Encryption Administrators to encrypt model data using your organization's encryption keys. For more information, see the BYOK User Guide. Note: Bring Your Own Key is an additional product that your organization can purchase if it has the Enterprise edition. Best practices This section contains some best practices to follow when using BYOK. Development Practices Identify or create a workspace that does not contain any essential model data. Encrypt the workspace to practice using BYOK.  After successfully encrypting the workspace: Run the tests on models in the workspace that you want. Follow the same procedure to encrypt your production workspace. If required, decrypt the development workspace. Ensure Workspaces are not in use Workspaces can't be encrypted when they are active. Ensure that your users are no longer using any models in the workspace before starting encryption. Do not start encryption until the workspace state is "Ready". Encrypting before loading data The first encryption is known as encryption in place. This is an offline event. To reduce the amount of time for this encryption, we recommend encrypting a workspace when it is first created or before significant data is loaded. Data added to models within the workspace after encryption is automatically encrypted. This is known as encryption on the fly. It's likely that this is sensitive data and it is more secure to load it after the workspace is encrypted. Identify users for key roles Identify users to be assigned the Encryption Admins role as early as possible. Identify users to be assigned the Tenant Auditor role. Encryption Admin role To maintain separation of duties, Encryption Admins should not have access to any model data. Ensure that Encryption Admins are added as members of at least one workspace with a model permission of "no access". Let your account representative know the email addresses of the Encryption Admins when you first order BYOK. Ideally, assign more than one person to the Encryption Admin role. Encryption Admin users can assign other users in their tenant the Encryption Admin role or remove it using the Access Control feature of the Administration app. Note: Only a limited set of users are eligible to be assigned the Encryption Admin role. Only users who were submitted to Anaplan as potential Encryption Admins appear in the Access Control section of the Administration app. If any users are missing, add them to the workspace in your tenant with the role 'No Access' then contact Anaplan Support and request that those users are added to the list of eligible Encryption Admins. Tenant Auditor role The Tenant Auditor role can access the BYOK audit logs. You might want to specify different users to the ones assigned the Encryption Admin role, but that’s your choice. Your Tenant Administrator can assign users to this role. Tenant Auditors need to be a user in at least one Anaplan workspace, ideally with a model permission of "no access". Wait When the "BYOK" status changes following a successful encryption or decryption action in a workspace, wait two minutes before running another operation on that workspace. This enables trailing processes to complete and helps to prevent unexpected errors. Features As an Encryption Admin, you can use the Reassign Key button on the Encrypted Workspaces page to easily apply key rotation on your workspaces. BYOK now has audit logging. You can use the Audit Service API to: Retrieve up to 30 days of logs. 
Get the BYOK history for your tenant.
Get the BYOK history for your tenant for specific dates that you specify.
Get information about who carried out an action in BYOK, when it was done, and what was done.
For more information, see the Audit Service API section of the BYOK User Guide.

Issues Resolved
As an Encryption Administrator, you can now assign or remove the Encryption Admin role.

Known Issues and Workarounds
Issue: When generating a key using the required values, but without waiting before entering values, key generation fails with the "Invalid Key Name" message. Workaround: Wait a few seconds before entering data on the Generate New Encryption Key popup.
Issue: When editing an encryption key, the Key Alias field is disabled and cannot be changed. Workaround: none.
View full article