This article provides the steps needed to create a basic time filter module. This module can be used as a point of reference for time filters across all modules and dashboards within a given model. The benefits of a centralized Time Filter module include:

- One centralized point of governance for time filters.
- Optimization of workspace, since the filters do not need to be re-created for each view; instead, use the Time Filter module.
- Conformance with the D.I.S.C.O. methodology as a 'System' module. More on D.I.S.C.O. can be found here.

Step 1: Create a new module with two dimensions—time and line items. The example below has simple examples for Weeks Only, Months Only, Quarters Only, and Years Only.

Step 2: Line items should be Boolean formatted, and the time scale should be set in accordance with the scale identified in the line item name. The example below also includes filters with and without summary methods, providing additional views depending on the level of aggregation desired. Once your preliminary filters are set, your module will look something like the screenshot below.

Step 3: Use the pre-set time filters across various modules and dashboards. Simply click the filters icon in the toolbar, navigate to the Time tab, select your Time Filter module from the module selection screen, and select the line item of your choosing. Use multiple line items at a time to filter your module or dashboard view.
If you have a multi-year model where the data range for different parts of the model varies (for example, history covering two years, current-year forecast, and three planning years), then Time Ranges should be able to deliver significant gains in terms of model size and performance. But before you rush headlong into implementing Time Ranges across all of your models, let me share a few considerations to ensure you maximize the value of the feature and avoid any unwanted pitfalls.

Naming Convention for Time Ranges

As with all Anaplan models, there is no set naming convention; however, we do advocate consistency and simplicity. As with lists and modules, short names are good. I like to describe the naming convention thus—as short as practical—meaning you need to understand what it means, but don't write an essay!

We recommend using the following convention: FYyy-FYyy. For example, FY16-FY18, or FY18 for a single year. Time Ranges available are from 1981 to 2079, so the "19" or "20" prefixes are not strictly necessary. Keeping the name as short as this has a couple of advantages:

- It gives a clear indication of the boundaries of the Time Range.
- It is short enough to see the name of the Time Range in the module and line items blueprint.

The aggregations available for Time Ranges can differ for each Time Range and also differ from the main model calendar. If you take advantage of this and have aggregations that differ from the model calendar, you should add a suffix to the description. For example:

- FY16-FY19 Q (to signify Quarter totals)
- FY16-FY19 QHY (Quarter and Half Year totals)
- FY16-FY19 HY (Half Year totals only), etc.

Time Ranges are Static

Time Ranges can span from 1981 to 2079. As a result, they can exist entirely outside, within, or overlapping the model calendar. This means that there may likely be some additional manual maintenance to perform when the year changes. Let's review a simple example:

- Assume the model calendar is FY18 with two previous years and two future years; the model calendar spans FY16-FY20.
- We have set up Time Ranges for historic data (FY16-FY17) and plan data (FY19-FY20).
- We also have modules that use the model calendar to pull all of the history, forecast, and plan data together.

At year end, when we "roll over the model," we amend the model calendar simply by amending the current year. The history and plan Time Ranges are now out of sync with the model calendar. How you change the history Time Range will depend on how much historical data you need or want to keep. Assuming you don't need more than two years' history, the Time Range should be renamed FY17-FY18 and the start period advanced to FY17 (from FY16). Similarly, the plan Time Range should be renamed FY20-FY21 and advanced to FY20 (from FY19). FY18 is then available for the history to be populated, and FY21 is available for plan data entry.

Time Ranges Pitfalls

Potential Data Loss

Time Ranges can bring massive space and calculation savings to your model(s), but be careful. In our example above, changing the Start Period of FY16-FY17 to FY17 would result in the data for FY16 being deleted for all line items using FY16-FY17 as a Time Range. Before you implement a Time Range that is shorter than or lies outside the current model calendar, and especially when implementing Time Ranges for the first time, ensure that the current data stored in the model is not needed.
If in doubt, do some or all of the suggestions below:

- Export the data to a file.
- Copy the existing data on the line item(s) to other line items that are using the model calendar.
- Back up the entire model.

Formula References

The majority of formulas will update automatically when updating Time Ranges. However, if you have any hard-coded SELECT statements referencing years or months within the Time Range, you will have to amend or remove the formula before amending the Time Range. Hard-coded SELECT statements go against best practice for exactly this reason: they cause additional maintenance. We recommend replacing the SELECT with a LOOKUP formula from a Time Settings module. There are other examples where the formula may need to be removed or amended before the Time Range can be adjusted. See the Anapedia documentation for more details.

When to Use the Model Calendar

This is a good question and one that we at Anaplan pondered during the development of the feature: do Time Ranges make the model calendar redundant? Well, I think the answer is "no," but as with so many constructs in Anaplan, the answer probably is, "it depends!" For me, a big advantage of using the model calendar is that it is dynamic for the current year and the +/- years on either side. Change the current year and the model updates automatically, along with any filters and calculations you have set up to reference current-year periods, historical periods, future periods, etc. (You are using a central Time Settings module, aren't you?) Time Ranges don't have that dynamism, so any changes to the year will need to be made for each Time Range.

So, our advice before implementing Time Ranges for the first time is to review each module and:

- Assess the scope of the calculations.
- Think about the reduction Time Ranges will give in terms of space and calculation savings, but compare that with the annual maintenance. For example, if you have a two-year model with one history year (FY17) and the current year (FY18), you could set up a one-year Time Range for FY17 and another one-year Time Range for FY18 and use these for the respective data sets. However, this would mean both Time Ranges would need to be updated each year.
- We advocate building models logically, so it is likely that you will have groups of modules where Time Ranges will fall naturally. The majority of the modules should reflect the model calendar. Once Time Ranges are implemented, it may be that you can reduce the scope of the model calendar.
- If you have a potential Time Range that reflects either the current or future model calendar, leave the timescale as the default for those modules and line items; why make extra work?

SELECT Statements

As outlined above, we don't advocate hard-coded time SELECTs for the majority of time items because of the negative impact on maintenance (the exceptions being All Periods, YTD, YTG, and CurrentPeriod). When implementing Time Ranges for the first time, take the opportunity to review the line item formulas with time SELECTs. These formulas can be replaced with LOOKUPs using a Time Settings module.

Application Lifecycle Management (ALM) Considerations

As with the majority of the Time settings, Time Ranges are treated as structural data. If you are using ALM, all of the changes must be made in the Development model and synchronized to Production. This gives increased importance to the pitfalls noted above, to ensure data is not inadvertently deleted.

Best of luck! Refer to the Anapedia documentation for more detail.
Please ask if you have any further questions and let us and your fellow Anaplanners know of the impact Time Ranges have had on your model(s).
Anaplan has built several connectors to work with popular ETL (Extract, Transform, and Load) tools. These tools provide a graphical interface through which you can set up and manage your integration. Each of the tools that we connect to has a growing library of connectors, providing a wide array of possibilities for integration with Anaplan. These ETL tools require subscriptions to take advantage of all their features, making them an especially appealing option for integration if you already have a subscription.

MuleSoft

Anaplan has a connector available in MuleSoft's community library that allows for easy connection to cloud systems such as NetSuite, Workday, and Salesforce.com, as well as on-premises systems like Oracle and SAP. Any of these integrations can be scheduled to recur on any period needed, easily providing hands-off integration. MuleSoft uses the open-source Anypoint Studio and Java to manage its integrations between any of its available connectors. Anaplan has thorough documentation relating to our MuleSoft connector on the Anaplan MuleSoft GitHub.

SnapLogic

SnapLogic has a Snap Pack for Anaplan that leverages our API to import and export data. The Anaplan Snap Pack provides components for reading data from and writing data to the Anaplan server using SnapLogic, as well as executing actions on the Anaplan server. This Snap Pack empowers you to connect your data and organization on the Anaplan platform without missing a beat.

Boomi

Anaplan has a connector available on the Boomi marketplace that will empower you to create a local Atom and transfer data to or from any other source with a Boomi connector. You can use Boomi to import or export data using any of your pre-configured actions within Anaplan. This technology removes any need to store files as an intermediate step, as well as facilitating automation.

Informatica

Anaplan has partnered with Informatica to build a connector on the Informatica platform. Informatica has connectors for hundreds of applications and databases, giving you the ability to leverage their integration platform for many other applications when you integrate them with Anaplan. You can search for the Anaplan Connector on the Informatica marketplace or request it from your Informatica sales representative.
NOTE: The following information is also attached as a PDF for downloading and using offline.

Overview

The process of designing a model will help you:
- Understand the customer's problem more completely.
- Bring to light any incorrect assumptions you may have made, allowing for correction before building begins.
- Provide the big-picture view for building. (If you were working on an assembly line building fenders, wouldn't it be helpful to see what the entire car looked like?)

Step 1: Understand the requirements and the customer's technical ecosystem when designing a model

When you begin a project, gather information and requirements using a number of tools. These include:
- Statement of Work (SOW): definition of the project scope and project objectives/high-level requirements.
- Project Manifesto: goal of the project – the big-picture view of what needs to be accomplished.
- IT ecosystem: Which systems will provide data to the model and which systems will receive data from the model? What is the Anaplan piece of the ecosystem?
- Current business process: if the current process isn't working, it needs to be fixed before design can start.
- Business logic: What key pieces of business logic will be included in the model?
- Is a distributed model needed, for example because of high user concurrency, security requirements that call for a separate model, or regional differences that are better handled by a separate model?
- Is the organization using ALM, requiring split or similar models to effectively manage development, testing, deployment, and maintenance of applications? (This functionality requires a premium subscription or above.)
- User stories: these have been written by the client—more specifically, by the subject matter experts (SMEs) who will be using the model.

Why do this step? To solve a problem, you must completely understand the current situation. Performing this step provides this information and the first steps toward the solution.

Results of this step:
- Understand the goal of the project.
- Know the organizational structure and reporting relationships (hierarchies).
- Know where data is coming from and have an idea of how much data clean-up might be needed.
- Know whether any of the data is organized into categories (for example, product families) and what data relationships exist that need to be carried through to the model (for example, salespeople only sell certain products).
- Know what lists currently exist and where they are housed.
- Know which systems the model will either import from or export to.
- Know what security measures are expected.
- Know what time and version settings are needed.

Step 2: Document the user experience

Front-to-back design has been identified as the preferred method for model design. This approach puts the focus on the end-user experience. We want that experience to align with the process so users can easily adapt to the model. During this step, focus on:
- User roles: who are the users?
- Identifying the business process that will be done in Anaplan.
- Reviewing and documenting the main steps of the process for each role. If available, utilize user stories to map the process.

You can document this in any way that works for you. Here is a step-by-step process you can try:
1. What are the start and end points of the process?
2. What is the result or output of the process?
3. What does each role need to see or do in the process? What are the process inputs and where do they come from?
4. What are the activities the user needs to engage in? (Verb/object—approve request, enter sales amount, etc.) Do not organize these during this step; use post-its to capture them.
5. Take the activities from step 4 and put them in the correct sequence.
6. Are there different roles for any of these activities? If no, continue with step 8.
7. If yes, assign a role to each activity.
8. Transcribe the process using PowerPoint® or Lucidchart. If there are multiple roles, use swim lanes to identify the roles.
9. Check with the SMEs to ensure accuracy.

Once the user process has been mapped out, do a high-level design of the dashboards. Include:
- The information needed: what data does the user need to see?
- What the user is expected to do, or the decisions that the user makes.

Share the dashboards with the SMEs. Does the process flow align?

Why do this step? This is probably the most important step in the model design process. It may seem as though it is too early to think about the user experience, but ultimately the information or data that the user needs to make a good business decision is what drives the entire structure of the model. On some projects, you may be working with a project manager or a business consultant to flesh out the business process for the user. You may have user stories, or it may be that you are working on design earlier in the process and the user stories haven't been written. In any case, identify the user roles and the business process that will be completed in Anaplan, and create a high-level design of the dashboards. Verify those dashboards with the users to ensure that you have the correct starting point for the next step.

Results of this step:
- List of user roles
- Process steps for each user role
- High-level dashboard design for each user role

Step 3: Use the designed dashboards to determine what output modules are necessary

Here are some questions to help you think through the definition of your output modules:
- What information (and in what format) does the user need to make a decision?
- If the dashboard is for reporting purposes, what information is required?
- If the module is to be used to add data, what data will be added and how will it be used?
- Are there modules that will serve to move data to another system? What data, and in what format, is necessary?

Why do this step? These modules are necessary for supporting the dashboards or exporting to another system. This is what should guide your design—all of the inputs and drivers added to the design are added with the purpose of providing these output modules with the information needed for the dashboards or export.

Results of this step:
- List of outputs and the desired format needed for each dashboard

Step 4: Determine what modules are needed to transform inputs into the data needed for outputs

Typically, the data at the input stage requires some transformation. This is where business rules, logic, and/or formulas come into play:
- Some modules will be used to translate data from the data hub. Data is imported into the data hub without properties, and modules are used to import the properties.
- Reconciliation of items takes place before importing the data into the spoke model.
- These are driver modules that include business logic and rules.

Why do this step? Your model must translate data from the input into what is needed for the output.

Results of this step:
- Business rules/calculations needed

Step 5: Create a model schema

You can whiteboard your schema, but at some point in your design process, your schema must be captured in an electronic format. It is one of the required pieces of documentation for the project and is also used during the Model Design Check-in, where a peer checks over your model and provides feedback.
In the schema:
- Identify the inputs, outputs, and drivers for each functional area.
- Identify the lists used in each functional area.
- Show the data flow between the functional areas.
- Identify time and versions where appropriate.

Why do this step? It is required as part of The Anaplan Way process. You will build your model design skills by participating in a Model Design Check-in, which allows you to talk through the tougher parts of design with a peer. More importantly, designing your model using a schema means that you must think through all of the information you have about the current situation, how it all ties together, and how you will get to that experience that meets the exact needs of the end user without fuss or bother.

Result of this step: a model schema that provides the big-picture view of the solution. It should include imports from other systems or flat files, and the modules or functional areas that are needed to take the data from the current state to what is needed to support the dashboards that were identified in Step 2. Time and versions should be noted where required. Include the lists that will be used in the functional areas/modules. Your schema will be used to communicate your design to the customer, model builders, and others. While you do not need to include calculations and business logic in the schema, it is important that you understand the state of the data going into a module, the changes or calculations that are performed in the module, and the state of the data leaving the module, so that you can effectively explain the schema to others. For more information, check out 351 Schemas. This 10 to 15 minute course provides basic information about creating a model schema.

Step 6: Verify that the schema aligns with basic design principles

When your schema is complete, give it a final check to ensure:
- It is simple. "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage to move in the opposite direction." ― Ernst F. Schumacher. "Design should be easy in the sense that every step should be obviously and clearly identifiable. Simplify elements to make change simple so you can manage the technical risk." — Kent Beck
- The model aligns with the manifesto.
- The business process is defined and works well within the model.
Overview

A data hub is a separate model that holds an organization's data. Data can be shared with all your models, making expansion easier to implement and ensuring data integrity across models. The data hub model can be placed in a different workspace, allowing for role segregation. This allows you to assign administrator rights to users to manage the data hub without allowing those users access to the production models. The method for importing to the data hub (into modules, rather than lists) allows you to reconcile properties using formulas.

One type of data hub can be integrated with an organization's data warehouse and hold ERP, CRM, HR, and other data, as shown in the Anaplan data architecture example. But this isn't the only type of data hub. Some organizations may require a data hub for transactional data, such as bookings, pipeline, or revenue. Whether you will be using a single data hub or multiple hubs, it is a good idea to plan your approach for importing from the organization's systems into the data hub(s), as well as how you will synchronize the imports from the data hub to the appropriate model.

High-level best practices

When building a data hub, the best practice is to import a list with properties into a module rather than directly into a list. Using this method, you set up line items to correspond with the properties and import them using the text data type. This imports all the data without errors or warnings. The data in the data hub module can then be imported to a list in the required model. The exception to importing into a module is if you are using a numbered list without a unique code (in other words, you are using a combination of properties). In that case, you will need to import the properties into the list.

Implementation steps

Here are the steps to create the basics of a hub and spoke architecture.

1) Create a model and name it master data hub

You can create the data hub in the same workspace where all the other models are, but a better option is to put the data hub in a different workspace. The advantage is role segregation; you can assign administrator rights to users to manage the hub and not provide them with access to the actual production models, which are in a different workspace. Large customers may require this segregation of duties. Note: This functionality became available in release 2016.2.

2) Import your data files into the data hub

Set up your lists:
- Identify the lists that are required in the data hub.
- Create these lists using good naming conventions.
- Set up any needed hierarchies, working from the top level down.
- Import data into the list from the source files, mapping only the unique name, the parent (if the name rolls up into a hierarchy), and the code, if available. Do not import any list properties; these will be imported into a module.

Create corresponding modules for those lists that include properties:
- For each list, create a module. Name the module [List Name] Properties.
- In the module, create a line item for each property and use the data type TEXT.
- Import the source file into the corresponding module. There should be no errors or warnings.

Automate the process with actions:
- Each time you imported, an action was created. Name your actions using the appropriate naming conventions. Note: Indicate the name of the source in the name of the import action.
- To automate the process, you'll want to create one process that includes all your imports.
For hierarchies, it is important to get the actions in the correct order. Start with the highest level of the hierarchy list import, then the next level list, and so on down the hierarchy. Then add the module imports. (The order of the module imports is not critical.)

Now, let's look at an example. You have a four-level hierarchy to load, such as 1) Employee → 2) State → 3) Region → 4) Country.

Lists

Create lists with the right naming conventions. For this example, create these lists:
- G1 Country
- G2 Region
- G3 State
- G4 Employee

Set the parent hierarchy to create the composite hierarchy. Import into each list from the source file(s), and map only name and parent. The exception is the employee list, which includes a code (employee ID) that should be mapped. Properties will be added to the data hub later.

Properties → Modules

Create one module for each list that includes properties. Name the module [List Name] Properties. For this example, only the Employees list includes properties, so create one module named Employee Properties. In each module, create as many line items as you have properties. For this example, the line items are Salary and Bonus. Open the Blueprint view of the module and, in the Format column, select Text. Pivot the module so that the line items are columns.

Import the properties. In the grid view of the module, click on the property you are going to import into. Set up the source as a fixed line item. Select the appropriate line item from the Line Item tab and, on the Mapping tab, select the correct column for the data values. You'll need to import each property (line item) separately. There should be no errors or warnings.

Actions

Each time you run an import, an action is created. You can view these actions by selecting Actions from the Model Settings tab. The previous imports into lists and modules have created one import action per list. You can combine these actions into a process that will run each action in the correct order. Name your actions following the naming conventions; note that the source is included in the action name.

Create one process that includes the imports. Name your process Load [List Name]. Make sure the order is correct: put the list imports first, starting with the top hierarchy level (numbered as 1) and working down, then the module imports in any order.

3) Reconcile

These list imports should be running with zero errors, because the imports are going into text-formatted items. If some properties should match with items in lists, it is recommended to use FINDITEM formulas to match text to list items: FINDITEM simply looks at the text-formatted line item and finds the match in the list that you specify. Every time data is uploaded into Anaplan, you just need to make sure all items from the text-formatted line item are being loaded into the list. This will be useful, as you will always be able to compare the "raw data" to the "Anaplan data," and not have to load that data more than once if there are concerns about the data quality in Anaplan.

If there is not a list of the properties included in your data hub model, first create that list. Let's use the example of Territory. Add a line item to the module and select List as the format type, then select the name of your list of properties (in this case, Territory) from the drop-down. Add the FINDITEM formula, FINDITEM(x, y), where x is the name of your list (Territory for our example) and y is the text line item. You can then filter this line item so that it shows all of the blank items.
Correct the data in the source system. If you will be importing frequently, you may want to set up a dashboard to allow users to view the data so they can make corrections in the source system. Set up a saved view for the errors and add conditional formatting to highlight the missing (blank) items. You can also include a counter to show the number of errors and add that information to the dashboard.

4) Split models: filter and set up saved views

If the architecture of your model includes spoke models by region, you need one master hierarchy that covers all regions and a corresponding module that stores the properties. Use that module and create as many saved views as you have spoke region models. For example, filter on G1 Country = Canada if you want to import only Canadian accounts into the spoke model. You will need to create a saved view for each hierarchy and spoke model.

5) Import to the spoke models

Use cross-workspace imports if you have decided to put your master data hub in a separate workspace. Create the lists that correspond to the hierarchy levels in each spoke model; there is no way to create a list via import for now. Create the properties in the list where needed. Keep in mind that the import of properties into the data hub as line items is an exception: list properties generally do not vary, unlike a line item in a module, which is often measured over time. Note: Properties can also be housed in modules, and there are some benefits to this. See Anapedia - Model Building (specifically, the "List Attributes" and "List attributes in a module" topics). If you decide to use a module to hold the properties, you will need to create a line item for each property type and then import the properties into the module. To simplify the mapping, make sure the property names in each spoke model match the line item names of the data hub model.

In each spoke model, create an import from the filtered module view of the data hub model into the lists you created in step 1. In the Actions window, name your imports using naming conventions. Create a process that includes these actions (imports). Begin with the highest level in the hierarchy and work down to the lowest. Well done! You have imported your hierarchy from a data hub model.

6) Incremental list imports

When you are in the midst of your peak planning cycle and your large lists are changing frequently, you'll want to update the data hub and push the changes to the spoke models. Running imports of several thousand list members may cause performance issues and block users during the import activity. In a best-case scenario, your data warehouse provides a date field that shows when an item was added or modified, and is able to deliver a flat file or table that includes only the changes. Your import into the hub model will then take just a few seconds, and you can filter on this date field to export only the changes to the spoke models. But in most cases, all you have is a full list from the data warehouse, regardless of what has changed. To mitigate this, we'll use a technique to export only the list items that have changed (edited, deleted, updated) since the last export, using logic in Anaplan.

Setting up the incremental loads, in the data hub model: Create a text-formatted line item in your module. Name it CHECKSUM, set the format as Text, and enter a formula to concatenate all the properties that you want to track changes for. These properties will form the base of the incremental import.
Example: CHECKSUM = State & Segment & Industry & Parent & Zip Code

Create a second line item, name it CHECKSUM OLD, set the format as Text, and create an import that imports CHECKSUM into CHECKSUM OLD; ignore any other mappings. Name this import "1/2 im DELTA" and put it in a process called "RESET DELTA".

Create a line item, name it DELTA, and set the format as Boolean. Enter this formula: IF CHECKSUM <> CHECKSUM OLD THEN TRUE ELSE FALSE.

Update the filtered view that you created to export only the hierarchy for a specific region or geography. Add the filter criterion "DELTA = true". You will only see the list items that differ from the last time you imported into the data hub. In the example above, we'll import into a spoke model only the list items that are in US East and that have changed since the last import.

Execute the import from the source into the data hub and then into the spoke models:
- In the data hub model, upload the new files and run the import process.
- In the spoke models, run the import process that takes the list from the data hub's filtered view. Check the import logs and verify that only the items that have changed are actually imported.
- Back in the data hub model, run the RESET DELTA process (the 1/2 im DELTA import). The RESET DELTA process resets the changes, so you are ready for the next set of changes.

Your source, data hub model, and spoke models are all in sync.

7) Incremental data load

The actuals transaction file might need to be imported several times into the data hub model, and from there into the spoke models, during the peak planning cycle. If the file is large, it can create performance issues for end users. Since not all transactions will change as the data is imported several times a day, there is a strong opportunity to optimize this process.

In the data hub model transaction module, create the same CHECKSUM, CHECKSUM OLD, and DELTA line items. CHECKSUM should concatenate all the fields you want to track the delta on, including the values. The DELTA line item will catch new transactions as well as modified transactions. See 6) Incremental list imports above for more information.

Filter the view using DELTA to import only changed transaction list items into the list, and the changed actuals transactions into the module. Create an import from CHECKSUM to CHECKSUM OLD to be able to reset the delta after the imports have run; name this import "2/2 im DELTA" and add it to the DELTA process created for the list. In the spoke model, import into the transaction list and into the transaction module from the transaction filtered view. Run the DELTA import or process.

8) Automation

You can semi-automate this process and have it run automatically on a frequent basis if incremental loads have been implemented. That provides immediacy of master data and actuals across all models during a planning cycle. It's semi-automatic because it requires a review of the reconciliation dashboards before pushing the data to the spoke models. There are a few ways to automate, all requiring an external tool: Anaplan Connect or the customer's ETL. The automation script needs to execute in this order:
1. Connect to the master data hub model.
2. Load the external files into the master data hub model.
3. Execute the process that imports the lists into the data hub.
4. Execute the process that imports actuals (transactions) into the data hub.
Manual step: Open your reconciliation dashboards and check that the data and the lists are clean. Again, these imports should run with zero errors or warnings.
5. Connect to the spoke model.
6. Execute the list import process.
7. Execute the transaction import process.
8. Repeat 5, 6, and 7 for all spoke models.
9. Connect to the master data hub model.
10. Run the RESET DELTA process to reset the incremental checks.
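If you choose to drive this sequence from a script rather than from Anaplan Connect or an ETL tool, the same order of operations can be expressed in a few lines of Python against the Anaplan integration API (the API used in the Python scripting article later on this page). The sketch below is illustrative only: the workspace, model, and process IDs are placeholders, run_process is a hypothetical helper around the "create task" endpoint shown in the Apiary, and the user variable is the Authorization header value built in that later article.

import requests

API = 'https://api.anaplan.com/1/3'
user = ''   # Authorization header value, built as in the authentication section of the Python article below
headers = {'Authorization': user, 'Content-Type': 'application/json'}

# Placeholder IDs: replace with values from getWorkspaces.py / getModels.py / getProcesses.py.
HUB_WS, HUB_MODEL = 'hubWorkspaceId', 'hubModelId'
LOAD_LISTS_PROCESS, LOAD_ACTUALS_PROCESS, RESET_DELTA_PROCESS = 'p1', 'p2', 'p3'
SPOKE_MODELS = [
    {'ws': 'spokeWorkspaceId', 'model': 'spokeModelId',
     'list_process': 'p4', 'transaction_process': 'p5'},
]

def run_process(workspace_id, model_id, process_id):
    # Hypothetical helper: start a process and return the task metadata.
    # Endpoint shape per the v1.3 Apiary documentation; adjust if your API version differs.
    url = f'{API}/workspaces/{workspace_id}/models/{model_id}/processes/{process_id}/tasks'
    return requests.post(url, headers=headers, json={'localeName': 'en_US'}).json()

# Steps 1-4: load files into the hub (see the upload scripts later on this page),
# then run the hub import processes.
run_process(HUB_WS, HUB_MODEL, LOAD_LISTS_PROCESS)
run_process(HUB_WS, HUB_MODEL, LOAD_ACTUALS_PROCESS)

# Manual step: review the reconciliation dashboards before pushing to the spokes.
input('Check the reconciliation dashboards, then press Enter to continue...')

# Steps 5-8: push lists and transactions to each spoke model.
for spoke in SPOKE_MODELS:
    run_process(spoke['ws'], spoke['model'], spoke['list_process'])
    run_process(spoke['ws'], spoke['model'], spoke['transaction_process'])

# Steps 9-10: reset the DELTA flags in the hub, ready for the next set of changes.
run_process(HUB_WS, HUB_MODEL, RESET_DELTA_PROCESS)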
Other best practices

Create deletes for all your lists. Create a module called Clear Lists. In the module where you have the list and properties, create a Boolean line item, call it "CLEAR ALL", and set its formula to TRUE. In Actions, create a "Delete from List using Selection" action based on that line item. Repeat this for all lists and create one process that executes all these delete actions.

Example of a maintenance/reconcile dashboard: Use a maintenance/reconcile dashboard when manual operations are required to update applications from the hub. One method that works well is to create a module that highlights whether there are errors in each data source. In that module, create a line item message that displays on the dashboard if there are errors, for example: "There are errors that need correcting." A link on this dashboard to the error status page will make it easy for users to check on errors. A best practice is to automate the list refresh, and to combine this with a modeling solution that only exports what has changed.

Dev-test-prod considerations

There should be two saved views: one for development and one for production. That way, the hub can feed the development models with shortened versions of the lists, and the production models will get the full lists. ALM considerations: the development (DEV) model will need the imports set up for both DEV and production (PROD) if the different-saved-view option is taken. The additional ALM consideration is that the lists that are imported into the spoke models from the hub need to be marked as production data.

Development

- Data hub: The data hub houses all global data needed to execute the Anaplan use case. The data hub often houses complex calculations and readies data for downstream models.
- Development model: The development model is built to the 80/20 rule. It is built upon a global process; region-specific functionality is added in the deployment phase. The model is built to receive data from the data hub.
- Data integration: During this stage, Anaplan Connect or a third-party tool is used to automate data integration. Data feeds are built from the source system into the data hub and from the data hub to downstream models.
- Performance testing: The application is put through rigorous performance testing, including automated and end-user testing. These tests mimic real-world usage and exceptionally heavy traffic to see how the system will perform.

Deployment

- Data hub: The data hub is refreshed with the latest information from the source systems. The data hub readies data for downstream models.
- Deployment model: The development model is copied, and the appropriate data is loaded from the data hub. Region-specific functionality is added during this phase.
- Data integration: Additional data feeds from the data hub to downstream models are finalized. The integrations are tested and timed to establish a baseline SLA. Automatic feeds are placed on timed schedules to keep the data up to date.
- Performance testing: The application is again put through rigorous performance testing.

Expansion

- Data hub: The need for additional data for new use cases is often handled by splitting the data hub into regional data hubs. This helps the system perform more efficiently.
- Model development: The models built for new use cases are developed and thoroughly tested. Additional functionality can be added to the original models deployed.
- Data integration: Data integration is updated to reflect the new system architecture. Automatic feeds are tested and scheduled according to business needs.
- Performance testing: At each stage, the application is put through rigorous performance testing. These tests mimic real-world usage and exceptionally heavy traffic to see how the system will perform.
Overview

The Anaplan Optimizer aids business planning and decision making by quickly solving complex problems involving millions of combinations to provide a feasible solution. Optimization provides a solution for selected variables within your Anaplan model that matches your objective based on your defined constraints. The Anaplan model must be structured and formatted to enable Optimizer to produce the correct solution. You are welcome to read through the materials and watch the videos on this page, but Optimizer is a premium service offered by Anaplan (contact your Account Executive if you don't see Optimizer as an action on the Settings tab). This means that you will not be able to actually do the training exercises until the feature is turned on in your system.

Training

The training involves an exercise along with documentation and videos to help you complete it. The goal of the exercise is to set up the optimization exercise for two use cases: network optimization and production optimization. To assist you in this process, we have created an optimization exercise guide document that will walk you through each of the steps. To further help, we have created three videos you can reference: an exercise walk-through, a demo of each use case, and a demo of setting up dynamic time.

Follow the order of the items listed below to assist with understanding how Anaplan's optimization process works:
1. Watch the use case video, which demos the Optimizer functionality in Anaplan.
2. Watch the exercise walkthrough video.
3. Review documentation about how Optimizer works within Anaplan.
4. Attempt the Optimizer exercise: download the exercise walkthrough document and download the Optimizer model into your workspace.
5. Learn how to configure Dynamic Time within Optimizer: download the Dynamic Time document and watch the Dynamic Time video.
6. Attempt the Network Optimization exercise.
7. Attempt the Production Optimization exercise.
Making sure that production data lists are correctly marked within a model is a  key step to setting up and using ALM . This guide will provide a solution to how someone can make revisions to their model to allow for the tagging of a list as a production data list. Please note: this solution doesn’t work if there are hard-coded references on non-composite summary items. For more information on working with production lists and ragged hierarchies, please visit Production lists and ragged hierarchies logic. The issue arises as a model administrator needs to tag a production data list, but there are hard-coded references in the model that won’t allow the person to do so. When this occurs and the model administrator tries to tag it as a production list, they will get a warning similar to this: See  Formula Protection  for more details. To fix this issue, all direct formula references to production data lists need to be changed to be indirect references to lists using either LOOKUPs or Boolean formatted conditional logic.  Below, you will find a step-by-step guide to replacing these formulas. Identify formulas with hard-coded references There is now an easy way to identify all of the formulas which are hard-coded to production data lists. Check the 'Referenced in Formula' column in the General Lists section. This will show the line items where the list is used. Check the respective formula for hard-coded references.  If there are no hard-coded references, then it is OK to check the list as a production data list.  This is the recommended approach, as just setting the lists without prior checking may lead to a rollback error being generated, which could be time-consuming for large models (as well as frustrating). It is possible to just export the General Lists grid to help where there are multiple references for the same list and then use formulas and filters to identify all offenders in the same effort. This option will save significant amounts of time if there are many line items that would need to be changed. You are looking for direct references on the list members: [SELECT: List Name.list member] ITEM(List Name) =List Name.List member The following constructs are valid, but not recommended, as any changes to the names or codes could change the result of calculations: IF CODE(ITEM(List Name))= IF NAME(ITEM(List Name))= After following those steps, you should have a list of all of the line items that need to be changed in the model in order for production data list to be open to being checked. Please note: There may still be list properties that have hard-coded references to items. You will need to take note of these as well, but as per D.I.S.C.O., (Best practice for Module design) we recommend that List Properties are replaced with Line Items in System Modules. Replacing model formulas: The next step is to replace these formulas within the model. For this, there are two recommended options. The first option (Option 1 below) is to replace your SELECT statements with a LOOKUP formula that is referencing a list drop-down. Use this option when there are 1:1 mappings between list items and your formula logic. For example, if you were building out a P&L variance report and needed to select from a specific revenue account, you might use this option.  The second option (Option 2 below) for replacing these formulas is to build a logic module that allows you to use Booleans to select list items and reference these Boolean fields in your formulas. 
Use this option when there is more complex modeling logic than a 1:1 mapping. For example, you might use this option if you are building a variance report by region and you have different logic for all items under Region 1 (ex: budget – actual) than the items under Region 2 (ex: budget – forecast).  (Option 1) Add List Selections module to be used in LOOKUPs for 1:1 mappings: From here you should make a module called List Selections, with no lists applied to it and a line item for each list item reference that you previously used in the formulas that will be changed. Each of these line items will be formatted as the list that you are selecting to be production data. Afterward, you should have a module that looks similar to this: An easy and effective way to stay organized is to partition and group your line items of similar list formats into the same sections with a section header line item formatted as No Data and a style of "Heading 1." After the line items have been created, the model administrator should use the list drop-downs to select the appropriate items which are being referenced. As new line items are created in a standard mode model, the model administrator will need to open the deployed model downstream to reselect or copy and paste the list formatted values in this module since this is considered production data. Remove hard-coding and replace with LOOKUPs: Once you have created the List Selections module with all of the correct line items, you will begin replacing old formulas, which you’ve identified in Excel, with new references. For formulas where there is a SELECT statement, you will replace the entire SELECT section of the formula with a LOOKUP to the correct line item in the list selections. Example: Old Formula = Full PL.Amount[SELECT: Accounts.Product Sales] New Formula = Full PL.Amount[LOOKUP: List Selections.Select Product Sales] For formulas where there is an IF ITEM (List Name) = List Name Item, you will replace the second section of the formula after the ‘=’ to directly reference the correct line item in the list selections. Example: Old Formula = If ITEM(Accounts) = Accounts.Product Sales THEN Full PL.Amount ELSE 0 New Formula = IF ITEM(Accounts) = List Selections.Select Product Sales THEN Full PL.Amount ELSE 0   (Option 2) Modeling for complex logic and many to many relationship: In the event that you are building more complex modeling logic in your model, you should start by building Boolean references that you can use in your formulas. To accomplish this, you will create a new module with Boolean line items for each logic type that you need. Sticking with the same example as above, if you need to build a variance report where you have different logic depending on the region, start by creating a module by region that has different line items for each different logic that you need similar to the view below: Once you have the Boolean module set up, you can then change your hard-coded formulas to reference these Boolean formatted line items to write your logic. The formula may look similar to this: IF Region Logic.Logic 1 THEN logic1 ELSE IF Region Logic.Logic 2 THEN logic2 ELSE IF Region Logic.Logic 3 THEN logic3 ELSE 0   Here is a screenshot of what the end result may look like:   This method can be used across many different use cases and will provide a more efficient way of writing complex formulas while avoiding hard-coding for production data lists. 
Selecting the production data list: After all of the hard-coded formulas have been changed in the model, navigate back to the Settings tab and open General Lists. In the Production Data column, check the box for the list that you want to set as a production data list.

Repeat for each list in the model that needs to be a production data list: for each such list, you can repeat the steps throughout this process to successfully remove all hard-coded list references.
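If the model has many line items, scanning the 'Referenced in Formula' column by hand can be slow. One option, in the spirit of the export-and-filter approach described earlier in this article, is to run a small script over an export of the line items. The sketch below is only an illustration: it assumes a CSV export with 'Line Item' and 'Formula' columns (adjust the column names to your own export), and its simple text match will need refining for lists whose names also appear in other contexts.

import csv

LIST_NAME = 'Accounts'   # the list you want to mark as a production data list

def hard_coded_references(export_path, list_name):
    # Flag formulas that reference list members directly, e.g. [SELECT: Accounts.Product Sales]
    # or "= Accounts.Product Sales", which must be replaced before the list can be tagged.
    patterns = [f'SELECT: {list_name}.', f'{list_name}.']
    offenders = []
    with open(export_path, newline='', encoding='utf-8') as f:
        for row in csv.DictReader(f):
            formula = row.get('Formula') or ''
            if any(p in formula for p in patterns):
                offenders.append((row.get('Line Item', ''), formula))
    return offenders

for line_item, formula in hard_coded_references('line_item_export.csv', LIST_NAME):
    print(f'{line_item}: {formula}')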
PLANS is the new standard for Anaplan modeling—"the way we model." It covers more than just the formulas and includes and evolves existing best practices around user experience and data hubs. It is a set of rules on the structure and detailed design of Anaplan models. This set of rules will provide both a clear route to good model design for the individual Anaplanner and common guidance on which Anaplanners and reviewers can rely when passing models amongst themselves.

In defining the standard, everything we do will consider or be based around:
- Performance – Use the correct structures and formulas to optimize the Hyperblock.
- Logical – Build the models and formulas more logically – see D.I.S.C.O. below.
- Auditable – Break up the formulas for better understanding, performance, and maintainability.
- Necessary – Don't duplicate expressions. Store and calculate data and attributes once and reference them many times. Don't have calculations on more dimensions than needed.
- Sustainable – Build with the future in mind, thinking about process cycles and updates.

The standards will be based around three axes:
- Performance – How do the structures and formulas impact the performance of the system?
- Usability/Auditability – Is the user able to understand how to interact with the functionality?
- Sustainability – Can the solution be easily maintained by model builders and support?

We will define the techniques to use that balance these three areas to ensure the optimal design of Anaplan models and architecture.

D.I.S.C.O.

As part of model and module design, we recommend categorizing modules as follows:
- Data – Data hubs, transactional modules, source data; referenced everywhere.
- Inputs – Designed for user entry; minimize the mix of calculations and outputs.
- System – Time management, filters, list attribute modules, mappings, etc.; referenced everywhere.
- Calculations – Optimized for performance (turn summaries off, combine structures).
- Outputs – Reporting modules; minimize data flow out.

Why build this way?
- Performance: fewer repeated calculations; optimized structures and formulas.
- Logical: data and calculations reside in logical places; model data flows can be easily understood.
- Auditable: model structure can be easily understood; simplified formulas (no need for complex expressions).
- Necessary: formulas and structures are not repeated; data is stored and calculated once and referenced many times, leading to efficient calculations.
- Sustainable: models can be adapted and maintained more easily; expansion and scaling are simplified.

Recommended content:
- Performance: Dimension Order; Formula Optimization in Anaplan; Formula Structure for Performance
- Logical: Best Practices for Module Design
- Auditable: Formula Structure for Performance
- Necessary: Reduce Calculations for Better Performance; Formula Optimization in Anaplan
- Sustainable: Dynamic Cell Access Tips and Tricks; Dynamic Cell Access - Learning App; Personal Dashboards Tips and Tricks; Time Range Application
- Ask Me Anything (AMA) sessions
Thinking through the results of a modeling decision is a key part of ensuring good model performance—in other words, making sure the calculation engine isn’t overtaxed. This article highlights some ideas for how to lessen the load on the calculation engine. Formulas should be simple; a formula that is nested, or uses multiple combinations, uses valuable processing time. Writing a long, involved formula makes the engine work hard. Seconds count when the user is staring at the screen. Simple is better. Breaking up formulas and using other options helps keep processing speeds fast. You must keep a balance when using these techniques in your models, so the guidance is as follows: Break up the most commonly changed formula Break up the most complex formula Break up any formula you can’t explain the purpose of in one sentence Formulas with many calculated components The structure of a formula can have a significant bearing on the amount of calculation that happens when inputs in the model are changed. Consider the following example of a calculation for the Total Profit in an application. There are five elements that make up the calculation: Product Sales, Service Sales, Cost of Goods Sold (COGS), Operating Expenditure (Op EX), and Rent and Utilities. Each of the different elements is calculated in a separate module. A reporting module pulls the results together into the Total Profit line item, which is calculated using the formula shown below. What happens when one of the components of COGS changes? Since all the source components are included in the formula, when anything within any of the components changes, this formula is recalculated. If there are a significant number of component expressions, this can put a larger overhead on the calculation engine than is necessary. There is a simple way to structure the module to lessen the demand on the calculation engine. You can separate the input lines in the reporting module by creating a line item for each of the components and adding the Total Profit formula as a separate line item. This way, changes to the source data only cause the relevant line item to recalculate. For example, a change in the Product Sales calculation only affects the Product Sales and the Total Profit line items in the Reporting module; Services Sales, Op EX, COGS and Rent & Utilities are unchanged. Similarly, a change in COGS only affects COGS and Total Profit in the Reporting module. Keep the general guidelines in mind. It is not practical to have every downstream formula broken out into individual line items. Plan to provide early exits from formulas Conditional formulas (IF/THEN) present a challenge for the model builder in terms of what is the optimal construction for the formula, without making it overly complicated and difficult to read or understand. The basic principle is to avoid making the calculation engine do more work than necessary. Try to set up the formula to finish the calculations as soon as possible. Always put first the condition that is most likely to occur. That way the calculation engine can quit the processing of the expression at the earliest opportunity. Here is an example that evaluates Seasonal Marketing Promotions: The summer promotion runs for three months and the winter promotion for two months. There are more months when there is no promotion, so this formula is not optimal and will take longer to calculate. This is better, as the formula will exit after the first condition more frequently. There is an even better way to do this. 
Following the principles above, add another line item for no promotion, and test that line item first in the formula. This is even better because No Promo has already been calculated, and Summer Promo occurs more frequently than Winter Promo. It is not always clear which condition will occur more frequently than the others, but here are a few more examples of how to optimize formulas.

FINDITEM formula

The FINDITEM element of a formula will work its way through the whole list looking for the text item, and if it does not find the referenced text, it will return blank. If the referenced text is blank, it will also return a blank. Inserting a conditional expression at the beginning of the formula keeps the calculation engine from being overtaxed:

IF ISNOTBLANK(TEXT) THEN FINDITEM(LIST, TEXT) ELSE BLANK
or
IF ISBLANK(TEXT) THEN BLANK ELSE FINDITEM(LIST, TEXT)

Use the first expression if most of the referenced text contains data, and the second expression if there are more blanks than data.

LAG, OFFSET, POST, etc.

In some situations there is no need to lag or offset data; for example, if the lag or offset parameter is 0, the value of the calculation is the same as the period in question. Adding a conditional at the beginning of the formula will help eliminate unnecessary calculations:

IF lag_parameter = 0 THEN 0 ELSE LAG(Lineitem, lag_parameter, 0)
or
IF lag_parameter <> 0 THEN LAG(Lineitem, lag_parameter, 0) ELSE 0

The choice between the first and second formula will depend on the most likely occurrence of 0s in the lag parameter.

Booleans

Avoid adding unnecessary clutter to line items formatted as Booleans. There is no need to include the TRUE or FALSE expression, as the condition itself will evaluate to TRUE or FALSE. Use:

Sales > 0

instead of

IF Sales > 0 THEN TRUE ELSE FALSE
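The early-exit idea is not Anaplan-specific; the same reasoning applies in any language. As a rough illustration in Python (with made-up promotion months), precomputing the most common outcome once (the equivalent of the No Promo line item) and testing it first means most evaluations finish after a single check:

MONTHS = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
          'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
SUMMER = {'Jun', 'Jul', 'Aug'}        # illustrative three-month summer promotion
WINTER = {'Dec', 'Jan'}               # illustrative two-month winter promotion
NO_PROMO = {m for m in MONTHS if m not in SUMMER and m not in WINTER}   # precomputed once

def promo_cost(month, summer_cost, winter_cost):
    # Most frequent case first: seven of twelve months exit on the first test.
    if month in NO_PROMO:
        return 0.0
    elif month in SUMMER:             # next most frequent case
        return summer_cost
    else:
        return winter_cost

print(promo_cost('Mar', 100.0, 80.0))   # 0.0, resolved by the first condition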
Note: While all of these scripts have been tested and found to be fully functional, due to the vast amount of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction.

Getting Started

Python 3 offers many options for interacting with an API. This article will explain how you can use Python 3 to automate many of the requests that are available in our apiary, which can be found at https://anaplan.docs.apiary.io/#. This article assumes you have the requests (version 2.18.4), base64, and JSON modules installed, as well as Python version 3.6.4. Please make sure you are installing these modules with Python 3, and not for an older version of Python. For more information on these modules, please see their respective websites: Python (if you are using a Python version older or newer than 3.6.4, or a requests version older or newer than 2.18.4, we cannot guarantee the validity of the article), Requests, Base Converter, and JSON. (Note: Install instructions are not at this site but will be the same as for any other Python module.)

Note: Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions that each script makes.

Authentication

To start, let's talk about authentication. Every script run that connects to our API will be required to supply valid authentication. There are two ways to authenticate a Python script that I will be covering:
1. Certificate authentication
2. Basic encoded authentication

Certificate authentication requires that you have a valid Anaplan certificate, which you can read more about here. Once you have your certificate saved locally, to properly convert your Anaplan certificate to be usable with the API, you will first need OpenSSL. Once you have that, you will need to convert the certificate to PEM format by running the following code in your terminal:

openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem

If you are using certificate authorization, the scripts we use in this article will assume you know the Anaplan account email associated with the certificate. If you do not know it, you can extract the common name (CN) from the PEM file by running the following code in your terminal:

openssl x509 -text -in certtest.pem

To be used with the API, the PEM certificate string will need to be converted to base64, but the scripts we will be covering take care of that for you, so I won't cover it in this section.

To use basic authentication, you will need to know the Anaplan account email that is being used, as well as the password. All scripts in this article will have the following code near the top:

# Insert the Anaplan account email being used
username = ''

# If using cert auth, replace cert.pem with your pem converted certificate
# filename. Otherwise, remove this line.
cert = open('cert.pem').read()

# If using basic auth, insert your password. Otherwise, remove this line.
password = ''

# Uncomment your authentication method (cert or basic). Remove the other.
user = 'AnaplanCertificate ' + str(base64.b64encode((
    f'{username}:{cert}').encode('utf-8')).decode('utf-8'))

# user = 'Basic ' + str(base64.b64encode((f'{username}:{password}'
#     ).encode('utf-8')).decode('utf-8'))

Regardless of the authentication method, you will need to set the username variable to the Anaplan account email being used.
If you are using a certificate to authenticate, you will need to have your PEM converted certificate in the same folder or a child folder of the one you are running the scripts from. If your certificate is in a child folder, please remember to include the file path when replacing cert.pem (e.g. cert/cert.pem). You can remove the password line and its comments, and its respective user variable. If you are using basic authentication, you will need to set the password variable to your Anaplan account password, and you can remove the cert line, its comments, and its respective user variable. Getting the Information Needed for Each Script Most of the scripts covered in this article will require you to know an ID or metadata for the file, action, etc., that you are trying to process. Each script that gets this information for their respective fields is titled get_____.py. For example, if you want to get your file's metadata, you'll run getFiles.py, which will write the file metadata for each file in the selected model, in the selected workspace, in an array to a JSON file titled files.json. You can then open the JSON file, find the file you need to reference, and use the metadata from that entry in your other scripts. TIP:   If you open the raw data tab of the JSON file it makes it much easier to copy the whole set of metadata. The following are the links to download each get____.py script. Each get script uses the requests.get method to send a get request to the proper API endpoint. getWorkspaces.py: Writes an array to workspaces.json of all the workspaces the user has access to. getModels.py: Writes an array to models.json of either all the models a user has access to if wGuid is left blank or all of the models the user has access to in a selected workspace if a workspace ID was inserted. getModelInfo.py: Writes an array to modelInfo.json of all metadata associated with the selected model. getFiles.py: Writes an array to files.json of all metadata for each file the user has access to in the selected model and workspace. (Please refer to   the Apiary   for more information on private vs default files. Generally, it is recommended that all scripts be run via the same user account.) getChunkData.py: Writes an array to chunkData.json of all metadata for each chunk of the selected file in the selected model and workspace. getImports.py: Writes an array to imports.json of all metadata for each import in the selected model and workspace. getExports.py: Writes an array to exports.json of all metadata for each export in the selected model and workspace. getActions.py: Writes an array to actions.json of all metadata for all actions in the selected model and workspace. getProcesses.py: Writes an array to processes.json of all metadata for all processes in the selected model and workspace. Uploads A file can be uploaded to the Anaplan API endpoint either in chunks or as a single chunk. Per our apiary: We recommend that you upload files in several chunks. This enables you to resume an upload that fails before the final chunk is uploaded. In addition, you can compress files on the upload action. We recommend compressing single chunks that are larger than 50MB. This creates a Private File. Note: To upload a file using the API that file must exist in Anaplan. If the file has not been previously uploaded, you must upload it initially using the Anaplan user interface. You can then carry out subsequent uploads of that file using the API. 
Multiple Chunk Uploads
The script we have for reference is built so that if the script is interrupted for any reason, or if any particular chunk of a file fails to upload, simply rerunning the script will start uploading the file again, starting at the last successful chunk. For this to work, the file must be initially split using a standard naming convention, using the terminal command below.

split -b [numberofBytes] [path and filename] [prefix for output files]

You can store the file in any location as long as you use the proper file path when setting the chunkFilePrefix (e.g., chunkFilePrefix = 'upload_chunks/chunk-' will look for file chunks named chunk-aa, chunk-ab, chunk-ac, etc., up to chunk-zz in the folder script_origin/upload_chunks/; it is very unlikely that you will ever exceed chunk-zz). This lets the script know where to look for the chunks of the file to upload. You can download the script for running a multiple chunk upload from this link: chunkUpload.py.

Note: The assumed naming conventions will only be standard if the file was split using Terminal, and they do not necessarily apply if the file was split using another method in Windows. If you are using Windows, you will need to either create a way to standardize the naming of the chunks alphabetically {chunkFilePrefix}(aa - zz) or run the script as detailed in the Apiary.

Note: The chunkUpload.py script keeps track of the last successful chunk by writing the name of the last successful chunk to a .txt file, chunkStop.txt. This file is deleted once the import completes successfully. If the file is modified in between runs of the script, the script may not function correctly. Best practice is to leave the file alone and delete it only if you want to start the upload again from the first chunk.

Single Chunk Upload
The single chunk upload should only be used if the file is small enough to upload in a reasonable time frame. If the upload fails, it will have to start again from the beginning. If your local file has a different name than the version on the server, you will need to modify line 31 ("name" : '') to reflect the name of the local file. This script runs a single put request to the API endpoint to upload the file. You can download the script for running a single chunk upload from this link: singleChunkUpload.py.

Imports
The import.py script sends a post request to the API endpoint for the selected import. You will need to set the importData value to the metadata for the import. See Getting the Information Needed for Each Script for more information. You can download the script for running an import from this link: Import.py. Once the import is finished, the script will write the metadata for the import task in an array to postImport.json, which you can use to verify which task you want to view the status of while running the importStatus.py script. The importStatus.py script will return a list of all tasks associated with the selected importID and their respective list index. If you want to check the status of the last-run import, make sure you are checking postImport.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status to an array in the file importStatus.json. If the task is still in progress, it will print the task status and progress. If the task finished and a failure dump is available, it will write the failure dump in comma-delimited format to importDump.csv, which can be used to review the cause of the failure.
If the task finished with no failures, you will get a message telling you the import has completed with no failures. You can download the script for importStatus.py from this link: importStatus.py.

Note: If you check the status of a task with an old taskID for an import that has been run since you last checked it, the dump will no longer exist and importDump.csv will be overwritten with an HTTP error, and the status of the task will be 410 Gone.

Exports
The export.py script sends a post request to the API endpoint for the selected export. You will need to set the exportData value to the metadata for the export. See Getting the Information Needed for Each Script for more information. You can download the script for running an export from this link: Export.py. Once the export is finished, the script will write the metadata for the export task in an array to postExport.json, which you can use to verify which task you want to view the status of while running the exportStatus.py script. The exportStatus.py script will return a list of all tasks associated with the selected exportID and their respective list index. If you want to check the status of the last-run export, make sure you are checking postExport.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status to an array in the file exportStatus.json. If the task is still in progress, it will print the task status and progress. It is important to note that no failure dump will be generated if the export fails. You can download the script for exportStatus.py from this link: exportStatus.py.

Actions
The action.py script sends a post request to the API endpoint for the selected action (for use with actions other than imports or exports). You will need to set the actionData value to the metadata for the action. See Getting the Information Needed for Each Script for more information. You can download the script for running an action from this link: actionStatus.py.

Processes
The process.py script sends a post request to the API endpoint for the selected process. You will need to set the processData value to the metadata for the process. See Getting the Information Needed for Each Script for more information. You can download the script for running a process from this link: Process.py. Once the process is finished, the script will write the metadata for the process task in an array to postProcess.json, which you can use to verify which task you want to view the status of while running the processStatus.py script. The processStatus.py script will return a list of all tasks associated with the selected processID and their respective list index. If you want to check the status of the last-run process, make sure you are checking postProcess.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status to an array in the file processStatus.json. If the task is still in progress, it will print the task status and progress. If the task finished and a failure dump is available, it will write the failure dump in comma-delimited format to processDump.csv, which can be used to review the cause of the failure. It is important to note that no failure dump will be generated for the process itself, only if one of the imports in the process failed. If the task finished with no failures, you will get a message telling you the process has completed with no failures. You can download the script for processStatus.py from this link: processStatus.py.
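The three status scripts follow the same pattern, so as an illustration, here is a minimal, hypothetical sketch of checking the status of an import task. The credentials and IDs are placeholders you would take from the get____.py output and from postImport.json, and the endpoint follows the task-status request listed in the standalone requests section below; the real importStatus.py script does more (task lists, progress printing, failure dumps) than this sketch.

import base64
import json
import requests

# Placeholder credentials (basic authentication).
username = 'user@example.com'
password = 'your_password'
user = 'Basic ' + str(base64.b64encode(
    f'{username}:{password}'.encode('utf-8')).decode('utf-8'))

# Placeholder IDs: take these from workspaces.json, models.json,
# imports.json, and postImport.json respectively.
wGuid = 'your_workspace_id'
mGuid = 'your_model_id'
importID = 'your_import_id'
taskID = 'your_task_id'

url = (f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}'
       f'/imports/{importID}/tasks/{taskID}')
response = requests.get(url, headers={'Authorization': user})

# Write the task status to importStatus.json, as importStatus.py does.
with open('importStatus.json', 'w') as f:
    json.dump(response.json(), f, indent=2)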
Downloading a File
Downloading a file from the Anaplan API endpoint will download the file in however many chunks it exists in on the endpoint. It is important to note that you should set the variable fileName to the name the file has in its metadata. First, the individual chunk metadata for the download will be written in an array to downloadChunkData.json for reference. The script will then download the file chunk by chunk and write each chunk to a new local file with the same name as the 'name' listed in the file's metadata. You can download this script from this link: downloadFile.py.

Note: If a file already exists in the same folder as your script with the same name as the name value in the file's metadata, the local file will be overwritten by the file being downloaded from the server.

Deleting a File
You can delete the file contents of any file that the user has access to that exists on the Anaplan server.

Note: This only removes private content. Default content and the import data source model object will remain.

You can download this script from this link: deleteFile.py.

Standalone Requests Code and Their Required Headers
In this section, I will list the code for each request detailed above, including the API URL and the headers necessary to complete the call. I will be leaving the content to the right of the Authorization: headers blank. Authorization header values can be either Basic encoded_username:password or AnaplanCertificate encoded_CommonName:PEM_Certificate_String (see Certificate-Authorization-Using-the-Anaplan-API for more information on encoded certificates).

Note: requests.get will only generate a response body from the server, and no data will be saved locally unless it is written to a local file.

Get Workspaces List
requests.get('https://api.anaplan.com/1/3/workspaces/', headers={'Authorization':})

Get Models List
requests.get('https://api.anaplan.com/1/3/models/', headers={'Authorization':})
or
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models', headers={'Authorization':})

Get Model Info
requests.get(f'https://api.anaplan.com/1/3/models/{mGuid}', headers={'Authorization':})

Get Files/Imports/Exports/Actions/Processes List
The get request for files, imports, exports, actions, or processes is largely the same. Change files to imports, exports, actions, or processes to run each.
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files', headers={'Authorization':})

Get Chunk Data
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks', headers={'Authorization':})

Post Chunk Count
requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData})

Upload a Chunk of a File
requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local chunk file})

Mark an Upload Complete
requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/complete', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData})

Upload a File in a Single Chunk
requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local file})

Run an Import/Export/Process
The post request for imports, exports, and processes is largely the same. Change imports to exports or processes to run each.
requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{Id}/tasks', headers={'Authorization': , 'Content-Type': 'application/json'}, data=json.dumps({'localeName': 'en_US'}))

Run an Action
requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/actions/{Id}/tasks', headers={'Authorization': , 'Content-Type': 'application/json'}, data=json.dumps({'localeName': 'en_US'}))

Get Task List for an Import/Export/Action/Process
The get request for import, export, action, and process task lists is largely the same. Change imports to exports, actions, or processes to get each task list.
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{importID}/tasks', headers={'Authorization':})

Get Status for an Import/Export/Action/Process Task
The get request for import, export, action, and process task statuses is largely the same. Change imports to exports, actions, or processes to get each task status.
Note: Only imports and processes will ever generate a failure dump.
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{ID}/tasks/{taskID}', headers={'Authorization':})

Download a File
Note: You will need to get the chunk metadata for each chunk of a file you want to download.
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkID}', headers={'Authorization': , 'Accept': 'application/octet-stream'})

Delete a File
Note: This only removes private content. Default content and the import data source model object will remain.
requests.delete(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/json'})

Note: SFDC user administration is not covered in this article, but the same concepts from the scripts provided can be applied to SFDC user administration. For more information on SFDC user administration, see the apiary entry for SFDC user administration.
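Putting a few of the standalone requests above together, the following minimal, hypothetical sketch uploads a file in a single chunk and then starts an import that reads it. It assumes the file already exists in the model (see the Uploads section above) and that the IDs come from files.json and imports.json; the credentials, IDs, and local filename are placeholders, and error handling and status polling are omitted for brevity.

import base64
import json
import requests

# Placeholder credentials (basic authentication).
username = 'user@example.com'
password = 'your_password'
user = 'Basic ' + str(base64.b64encode(
    f'{username}:{password}'.encode('utf-8')).decode('utf-8'))

# Placeholder IDs from the get____.py scripts.
wGuid = 'your_workspace_id'
mGuid = 'your_model_id'
fileID = 'your_file_id'
importID = 'your_import_id'

base = f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}'

# Upload the whole file as a single chunk (octet-stream PUT, as listed above).
with open('data.csv', 'rb') as f:
    requests.put(f'{base}/files/{fileID}',
                 headers={'Authorization': user,
                          'Content-Type': 'application/octet-stream'},
                 data=f.read())

# Start the import and keep the task metadata for later status checks.
task = requests.post(f'{base}/imports/{importID}/tasks',
                     headers={'Authorization': user,
                              'Content-Type': 'application/json'},
                     data=json.dumps({'localeName': 'en_US'}))
print(task.json())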
View full article
Learn how to organize your model into logical parts to give you a well-designed model that is easy to follow, understand, and amend at a later date.
View full article
Bring Your Own Key (BYOK) now lets you take ownership of the encryption keys for your model data. If you have access to the Anaplan Administration tool, you can encrypt and decrypt selected workspaces using your own AES-256 keys. Unlike the system master keys, the keys created through BYOK belong to you, and you are responsible for keeping them secure. There is no mechanism that allows Anaplan staff to access your keys. Bring Your Own Key (BYOK) - User Guide. Bring Your Own Key (BYOK) is an add-on product that your organization can purchase if it has the Enterprise edition.
View full article
Dimension Order Affects Calculation Performance
Ensuring consistency in the order of dimensions will help improve the performance of your models. This consistency is relevant for modules and individual line items.

Why does the order matter?
Anaplan creates and uses indexes to perform calculations. Each cell in a module where dimensions intersect is given an index number. Here are two simple modules dimensioned by Customer and Product. In the first module, Product comes first and Customer second; in the second module, Customer is first and Product is second. In this model, there is a third module that calculates revenue as Prices * Volumes. Anaplan assigns indexes to the intersections in the module. Here are the index values for the two modules. Note that some of the intersections are indexed the same for both modules (Customer 1 and Product 1, Customer 2 and Product 2, and Customer 3 and Product 3) and that the remainder of the cells have different index numbers. Customer 1 and Product 2, for example, is indexed with the value 4 in the top module and the value 2 in the bottom module.

The calculation is Revenue = Price * Volume. To run the calculation, Anaplan performs the following operations by matching the index values from the two modules. Since the index values are not aligned, the processor scans the index values to find a match before performing the calculation. When the dimensions in the module are reordered, these are the index values: The index values for each of the modules are now aligned. As line items with the same dimensional structure have an identical layout, the data is laid out linearly in memory, and the calculation process accesses memory in a completely linear and predictable way. The processors and memory sub-systems Anaplan runs on are optimized to recognize this pattern of access and to pre-emptively fetch the required data.

How does the dimension order become different between modules?
When you build a module, Anaplan uses the order in which you drag the lists onto the Create Module dialog. The order is also dependent on where the lists are added: the lists that you add to the 'pages' area come first, then the lists that you add to the 'rows' area, and finally the lists added to the 'columns' area. It is simple to re-order the lists and ensure consistency. Follow these steps:
1. On the Modules pane (Model Settings > Modules), look for lists that are out of order in the Applies To column.
2. Click the Applies To row that you want to re-order, then click the ellipsis.
3. In the Select Lists dialog, click OK.
4. In the Confirm dialog, click OK.
The lists will now be in the order in which they appear in General Lists. When you have completed checking the list order in the modules, click the Line Items tab and check the line items, following steps 1 through 3 to re-order the lists.

Subsets and Line Item Subsets
One word of caution about subsets and line item subsets. In the example below, we have added a subset and a line item subset to the module. The Applies To is as follows: Clicking on the ellipsis re-orders the dimensions so that the general lists are listed in order first, followed by subsets and then line item subsets. You can still reorder the dimensions by double-clicking in the Applies To column and manually copying or typing the dimensions in the correct order.

Largest vs. Smallest?
This is the normal follow-up question, and unfortunately, the answer is "it depends." Through research we have found that it all depends on the data within the module.
Also, it can get very confusing if subsets are used; the Customer list might be bigger than the Products list, but if a subset of Customers is used that is smaller than Products, then what?   Also, we don't advocate ordering the lists in the General Lists setting in size order; the lists should be ordered in hierarchical order top to bottom, so, by definition, that will be smallest to largest. So our advice is be consistent. Think about how you describe the problem. Does the business talk about Customer by Product, or Products for Customers? Agree to a convention, and stick to it. Other Dimensions The calculation performance only relates to the common lists between the source(s) and the target. The order of separate lists in one or other doesn’t have any bearing on the calculation speed.
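To make the indexing idea above concrete, here is a small, purely illustrative Python sketch; it is not Anaplan code and not Anaplan's actual internals, just a mimic of how intersections might be numbered under the two dimension orders described earlier. With the same order, the numbers match for every intersection; with the order reversed, only the diagonal intersections line up and the rest need matching before the multiplication can happen.

# Illustrative only: number the Customer/Product intersections for two dimension orders.
customers = ['Customer 1', 'Customer 2', 'Customer 3']
products = ['Product 1', 'Product 2', 'Product 3']

# Module A: Product first, Customer second (product-major numbering).
index_a = {(c, p): i for i, (p, c) in enumerate(
    ((p, c) for p in products for c in customers), start=1)}

# Module B: Customer first, Product second (customer-major numbering).
index_b = {(c, p): i for i, (c, p) in enumerate(
    ((c, p) for c in customers for p in products), start=1)}

for key in index_a:
    status = 'aligned' if index_a[key] == index_b[key] else 'needs matching'
    print(key, index_a[key], index_b[key], status)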
View full article
You can interact with the data in your models using Anaplan's RESTful API. This enables you to securely import and export data, as well as run actions, in any programmatic way you desire. The API can be leveraged in any custom integration, allowing for a wide range of integration solutions to be implemented. Completing an integration using the Anaplan API is a technical process that will require significant work by an individual with programming experience. Visit the links below to learn more: API Documentation, Anaplan API Guide. You can also view demonstration videos to understand how to implement the API in your custom integration client. The videos provide step-by-step walkthroughs of three call sequences: uploading a file to Anaplan and running an import action, running an export action and downloading a file from Anaplan, and running an Anaplan process and a delete action.
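As a text companion to the second of those sequences (export and download), here is a minimal, hypothetical sketch using the v1.3 endpoints shown earlier in this document. The credentials, IDs, and output filename are placeholders; the 'chunks' key and 'id' field used to walk the chunk list are assumptions about the response shape, so check the real chunk metadata (for example via getChunkData.py) before relying on them, and a production script would also poll the export task status before downloading.

import base64
import json
import requests

# Placeholder credentials (basic authentication).
username = 'user@example.com'
password = 'your_password'
auth = 'Basic ' + str(base64.b64encode(
    f'{username}:{password}'.encode('utf-8')).decode('utf-8'))

# Placeholder IDs.
wGuid, mGuid = 'your_workspace_id', 'your_model_id'
exportID, fileID = 'your_export_id', 'your_file_id'
base = f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}'

# 1. Start the export task.
requests.post(f'{base}/exports/{exportID}/tasks',
              headers={'Authorization': auth, 'Content-Type': 'application/json'},
              data=json.dumps({'localeName': 'en_US'}))

# 2. Once the task has finished, list the chunks of the exported file.
chunk_info = requests.get(f'{base}/files/{fileID}/chunks',
                          headers={'Authorization': auth}).json()

# 3. Download the chunks one by one into a local file.
# Assumption: the response holds a 'chunks' list whose entries carry an 'id'.
with open('export_output.csv', 'wb') as out:
    for chunk in chunk_info.get('chunks', []):
        data = requests.get(f"{base}/files/{fileID}/chunks/{chunk['id']}",
                            headers={'Authorization': auth,
                                     'Accept': 'application/octet-stream'})
        out.write(data.content)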
View full article
Personal dashboards are a great new feature that enables end users to save a personalized view of a dashboard. To get the most out of this feature, here are a few tips and tricks.

Tidy Up Dashboards
Any change to a master dashboard (using the Dashboard Designer) will reset all personal views of that dashboard, so before enabling personal dashboards, take some time to ensure that the current dashboards are up to date:
Implement any pending development changes (including menu options).
Turn on the Dashboard Quick Access toolbar (if applicable).
Check and amend all text box headings and comments for size, alignment, spelling, and grammar.
Delete or disable any redundant dashboards to ensure end users don't create personal views of obsolete dashboards.

Use Filters Rather Than Show/Hide
It's best practice to use a filter rather than show and hide for the rows and/or columns on a grid. This is now even more beneficial because amending the items shown or hidden on a master dashboard will reset the personal views. For example, suppose you want to display just the current quarter of a timescale. You could manually show/hide the relevant periods, but at quarter end, when the Current Period is updated, the dashboard will need to be amended, and all those personal views will be reset. If you use a filter referencing a time module, the filter criteria will update automatically, as will the dashboard. No changes are made to the master dashboard, and all the personal views are preserved.

Create a Communication and Migration Strategy
Inevitably there will be changes that must be made to master dashboards. To minimize the disruption for end users, create a communication plan and follow a structured development program. These can include the following:
Bundle up dashboard revisions into a logical set of changes.
Publish these changes at regular intervals (e.g., on a monthly cycle).
Create a regular communication channel to inform users of changes and the implications of those changes.
Create a new dashboard, and ask end users to migrate to the new dashboard over a period of time before switching off the old dashboard.

Application Lifecycle Management (ALM)
If you are using ALM, any structural changes to master dashboards will reset all personal views of dashboards.
View full article
Master data hubs
Master data hubs are used within the Anaplan platform to house an organization's data in a single model. This hub imports data from the corporation's data warehouse. If no single source, such as a data warehouse, is available, then the master data hub collects data from individual source systems instead. Once all data is consolidated into a single master data hub, it may then be distributed to multiple models throughout an organization's workspace.

Anaplan Data Architecture

Architecture best practices
One or more Anaplan models may make up the data hub. It is a good practice to separate the master data (hierarchies, lists, and properties) from the transactional data. The business Anaplan applications are synchronized from these data hub models using Anaplan's native model-to-model internal imports. As a best practice, users should only implement incremental synchronization, which only synchronizes the data in the application that has changed since the last sync from the data hub. Doing this usually provides very fast synchronization. The graphic below displays best practices for doing this:

Another best practice organizations should follow when building a master data hub is to import a list with properties into a module rather than directly into a list. Using this method, line items are created to correspond with the properties and are imported using the text data type. This imports all of the data without errors or warnings and allows for very smart dashboards, made of sorts and filters, to highlight integration issues. Once imported, the data in the master data hub module can then be imported to a list in the required model.

Data hub best practices
The following list consists of best practices for establishing data architecture:
Rationalize the metadata: balanced hierarchies (not ragged) will ease reporting and security settings.
Be driver-based: identify your metrics and KPIs and what drives them. Do not try to reconcile disconnected targets to bottom-up plans entered at line-item level. Example: use cost per trip and number of trips for travel expenses, as opposed to inputting every line of travel expense.
Simplify the process: reduce the number of approval levels (threshold-based), implement rolling forecasts, and report within the planning tool, keeping immediacy where needed. Think outcome and options, not input. Transform your existing process; do not re-implement existing Excel®-based processes in Anaplan.
Granularity: aggregate transactions to SKU level or customer ID, and plan at a higher level and cascade down. Plan the number of TBH by role for TBH headcount expenses, as opposed to inputting every TBH employee. Sales: plan at sub-region level and cascade to rep level. Plan at profit center level and allocate at cost center level based on drivers.

The Anaplan Way
Always follow the phases of The Anaplan Way when establishing a master data hub, even in a federated approach:
Pre-Release Phase
Foundation Phase
Implementation Phase
Testing Phase
Deployment Phase
View full article
Reducing the number of calculations will lead to quicker calculations and improve performance. However, this doesn't mean combining all your calculations into fewer line items, as breaking calculations into smaller parts has major benefits for performance. Learn more about this in the Formula Structure article. How is it possible to reduce the number of calculations? Here are three easy methods:
Turn off unnecessary Summary method calculations.
Avoid formula repetition by creating modules to hold formulas that are used multiple times.
Ensure that you are not including more dimensions than necessary in your calculations.

Turn off Summary method calculations
Model builders often include summaries in a model without fully thinking through whether they are necessary. In many cases, the summaries can be eliminated. Before we get to how to eliminate them, let's recap how the Anaplan engine calculates. In the following example we have a Sales Volume line item that varies by the following hierarchies:
Region Hierarchy: City, Country, Region, All Regions
Product Hierarchy: SKU, Product, All Products
Channel Hierarchy: Channel, All Channels

This means that from the detail values at SKU, City, and Channel level, Anaplan calculates and holds all 23 of the aggregate combinations shown below (24 blocks in total). With the Summary options set to Sum, when a detailed item is amended (represented in the grey block), all the other aggregations in the hierarchies are also re-calculated. Selecting the None summary option means that no calculations happen when the detail item changes. The varying levels of hierarchies are quite often only there to ease navigation, and the roll-up calculations are not actually needed, so there may be a number of redundant calculations being performed. The native summing of Anaplan is a faster option, but if all the levels are not needed, it might be better to turn off the summary calculations and use a SUM formula instead. For example, from the structure above, let's assume that we have a detailed calculation for SKU, City, and Channel (SALES06.Final Volume). Let's also assume we need a summary report by Region and Product, and we have a module (REP01) and a line item (Volume) dimensioned as such.

REP01.Volume = SALES06 Volume Calculation.Final Volume

is replaced with

REP01.Volume = SALES06.Final Volume[SUM: H01 SKU Details.Product, SUM: H02 City Details.Region]

The second formula replaces the native summing in Anaplan with only the required calculations in the hierarchy.

How do you know if you need the summary calculations? Look for the following:
Is the calculation or module user-facing? If it is presented on a dashboard, then it is likely that the summaries will be needed. However, look at the dashboard views used. A summary module is often included on a dashboard with a detail module below; effectively, the hierarchy sub-totals are shown in the summary module, so the detail module doesn't need the sum or all the summary calculations.
Detail to detail: is the line item referenced by another detailed calculation line item? This is very common, and if the line item is referenced by another detailed calculation, the summary option is usually not required. Check the Referenced by column and see if there is anything referencing the line item.
Calculation and staging modules: if you have used the D.I.S.C.O. module design, you should have calculation/staging modules. These are often not user-facing and have many detailed calculations included in them.
They also often contain large cell counts, which will be reduced if the summary options are turned off. Can you have different summaries for time and lists? The default option for Time Summaries is to be the same as the lists. You may only need the totals for hierarchies, or just for the timescales. Again, look at the downstream formulas. The best practice advice is to turn off the summaries when you create a line item, particularly if the line item is within a Calculation module (from the D.I.S.C.O. design principles). Avoid Formula Repetition An optimal model will only perform a specific calculation once. Repeating the same formula expression multiple times will mean that the calculation is performed multiple times. Model builders often repeat formulas related to time and hierarchies. To avoid this, refer to the module design principles (D.I.S.C.O.) and hold all the relevant calculations in a logical place. Then, if you need the calculation, you will know where to find it, rather than add another line item in several modules to perform the same calculation. If a formula construct always starts with the same condition evaluation, evaluate it once and then refer to the result in the construct. This is especially true where the condition refers to a single dimension but is part of a line item that goes across multiple dimension intersections. A good example of this can be seen in the example below: START() <= CURRENTPERIODSTART() appears five times and similarly START() > CURRENTPERIODSTART() appears twice. To correct this, include these time-related formulas in their own module and then refer to them as needed in your modules. Remember, calculate once; reference many times! Taking a closer look at our example, not only is the condition evaluation repeated, but the dimensionality of the line items is also more than required. The calculation only changes by the  day, as per the diagram below: But the Applies To here also contains Organization, Hour Scale, and Call Center Type. Because the formula expression is contained within the line item formula, for each day the following calculations are also being performed: And, as above, it is repeated in many other line items. Sometimes model builders use the same expression multiple times within the same line item. To reduce this overcalculation, reference the expression from a more appropriate module; for example, Days of Week (dimensioned solely by day) which was shown above. The blueprint is shown below, and you can see that the two different formula expressions are now contained in two line items and will only be calculated by day; the other dimensions that are not relevant are not calculated. Substitute the expression by referencing the line items shown above. In this example, making these changes to the remaining lines in this module reduces the calculation cell count from 1.5 million to 1500. Check the Applies to for your formulas, and if there are extra dimensions, remove the formula and place it in a different module with the appropriate dimensionality .
View full article
Note: This article is meant to be a guide on converting an existing Anaplan security certificate to PEM format for the purpose of testing its functionality via cURL commands. Please work with your developers on any more in-depth application of this process. The current Production API version is v1.3.

Using a certificate to authenticate will eliminate the need to update your script when you have to change your Anaplan password. To use a certificate for authentication with the API, it first has to be converted into a Base64-encoded string recognizable by Anaplan. Information on how to obtain a certificate can be found in Anapedia. This article assumes that you already have a valid certificate tied to your user name.

Steps:
1. To properly convert your Anaplan certificate to be usable with the API, you will first need OpenSSL (https://www.openssl.org/). Once you have that, you will need to convert the certificate to PEM format. The PEM format uses the header and footer lines "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----".

2. If your certificate is not in PEM format, you can convert it using the following OpenSSL command, where "certificate-(certnumber).cer" is the name of the source certificate and "certtest.pem" is the name of the target PEM certificate:

openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem

View the PEM file in a text editor. It should be a Base64 string starting with "-----BEGIN CERTIFICATE-----" and ending with "-----END CERTIFICATE-----".

3. View the PEM file to find the CN (common name) using the following command:

openssl x509 -text -in certtest.pem

It should look something like "Subject: CN=(Anaplan login email)". Copy the Anaplan login email.

4. Use a Base64 encoder (e.g., https://www.base64encode.org/) to encode the CN and PEM string, separated by a colon. For example, paste this in:

(Anaplan login email):-----BEGIN CERTIFICATE-----(PEM certificate contents)-----END CERTIFICATE-----

5. You now have the encoded string necessary to authenticate API calls. For example, using cURL to GET a list of the Anaplan workspaces for the user that the certificate belongs to:

curl -H "Authorization: AnaplanCertificate (encoded string)" https://api.anaplan.com/1/3/workspaces
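If you would rather build the encoded string programmatically than paste it into a web encoder, a minimal sketch along the lines below (assuming Python 3 with the requests and base64 modules, and that certtest.pem and the CN are the ones produced in steps 2 and 3) reproduces steps 4 and 5. The email shown is a placeholder.

import base64
import requests

# The CN extracted in step 3 (the Anaplan login email): placeholder value.
common_name = 'user@example.com'

# Read the PEM certificate produced in step 2.
pem = open('certtest.pem').read()

# Step 4: Base64-encode "CN:PEM string".
encoded = base64.b64encode(f'{common_name}:{pem}'.encode('utf-8')).decode('utf-8')

# Step 5: use the encoded string to authenticate an API call,
# e.g. listing the workspaces the certificate's user can access.
response = requests.get('https://api.anaplan.com/1/3/workspaces/',
                        headers={'Authorization': 'AnaplanCertificate ' + encoded})
print(response.json())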
View full article
Details of known issues

Challenge: Performance issues with long nested formulas, where a long formula on time is needed as a result of nested intermediate calculations.
Recommendation: If the model size does not prevent you from adding extra line items, it is better practice to create multiple intermediate line items and reduce the size of the formula, as opposed to nesting all intermediate calculations into one gigantic formula. This applies to summary formulae (SUM, LOOKUP, SELECT). Combining SUM and LOOKUP in the same line item formula can cause performance issues in some cases. If you have noticed a drop in performance after adding a combined SUM and LOOKUP to a single line item, then split it into two line items.

Challenge: RANKCUMULATE causes slowness.
Recommendation: A current issue with the RANKCUMULATE formula can mean that the time to open the model, including rollback, can be up to five times slower than it should be. There is currently no suitable workaround. Our recommendation is to stay within the constraints defined in Anapedia.

Challenge: SUM/LOOKUP with large cell count.
Recommendation: Separate the formulas into different line items to reduce calculation time (fewer cells need to recalculate parts of a formula that would only affect a subset of the data). A known issue with SUM/LOOKUP combinations within a formula can lead to slow model open and calculation times, particularly if the line item has a large cell count.
Example (no line items apply to time or versions):
Y = X[SUM: R, LOOKUP: R]
Y Applies To [A, B]
X Applies To [A, B]
R Applies To [B], list formatted [C]
Recommendation: add a new line item 'intermediate' that must have Applies To set to the format of R:
intermediate = X[SUM: R]
Y = intermediate[LOOKUP: R]
This issue is currently being worked on by Development and a fix will be available in a future release.

Challenge: Calculations are over non-common dimensions.
Recommendation: Anaplan calculates quicker if calculations are over common dimensions. Again, this is best seen in an example. If you have lists W and X, and:
Y = A + B
Y Applies To W, X
A Applies To W
B Applies To W
this performs slower than:
Y = Intermediate
Intermediate = A + B
Intermediate Applies To W
with all other dimensions the same as above. Similarly, you can substitute A and B above for a formula, e.g., SUM/LOOKUP calculations.

Challenge: Cell history truncated.
Recommendation: Currently, history generation has a time limit of 60 seconds. The history generation is split into three stages, with a third of the time allocated to each. The first stage is to build a list of columns required for the grid, which involves reading all the history. If this takes more than 20 seconds, the user receives the message "history truncated after X seconds - please modify the date range," where X is how many seconds it took, and no history is generated. If the first stage completes within 20 seconds, it goes on to generate the full list of history. In the grid only the first 1,000 rows are displayed; the user must export the history to get the full history, which can take significant time depending on volume. The same steps are taken for model and cell history. The cell history is generated by loading the entire model history and searching through it for the relevant cell information. When the model history gets too large, it is truncated to prevent performance issues. Unfortunately, this can make it impossible to retrieve the cell history that is needed.

Challenge: Make it real time only when needed.
Recommendation: Do not make it real time unless it needs to be. By this we mean, do not have line items where users input data be referenced by other line items unless they have to be. A way around this could be to give users their own data input sections that are not referenced anywhere, or as little as possible, and then, say, at the end of the day when no users are in the model, run an import that updates the cells where calculations are then done. This may not always be possible if the end user needs to see the calculations resulting from their inputs, but if you can limit these to just the calculations they need to see, and use imports during quiet times, this will still help. We see this often when not all reporting modules need to be recalculated in real time. In many cases, these modules are fine to be calculated the day after.

Challenge: Reduce dependencies.
Recommendation: Don't have line items that are dependent on other line items unnecessarily. This can prevent Anaplan from utilizing the maximum number of calculations it can do at once, and happens where a line item's formula cannot be calculated because it is waiting on the results of other line items. A basic example of this can be seen with line items A, B, and C having the formulas:
A - no formula
B = A
C = B
Here B would be calculated, and then C would be calculated after it. Whereas if the setup was:
A - no formula
B = A
C = A
here B and C can be calculated at the same time. This also helps because if line item B is not needed, it can be removed, further reducing the number of calculations and the size of the model. This needs to be considered on a case-by-case basis and is a trade-off between duplicating calculations and utilizing as many threads as possible. If line item B is referenced by a few other line items, it may indeed be quicker to keep it.

Challenge: Summary calculation.
Recommendation: Summary cells often take processing time even if they are not actually recalculated, because they must check all the lower-level cells. Reduce summaries to 'None' wherever possible. This not only reduces aggregations, but also the size of the model.
View full article
This guide assumes you have set up your runtime environment in Informatica Cloud (Anaplan Hyperconnect) and that the agent is up and running. It focuses solely on how to configure the ODBC connection and set up a simple synchronization task importing data from one table in PostgreSQL to Anaplan. Informatica Cloud has richer features that are not covered in this guide. The built-in help is contextual and helpful as you go along, should you need more information than is included here. The intention of this guide is to help you set up a simple import from PostgreSQL to Anaplan; it is therefore kept short and does not cover all related areas.

This guide assumes you have run an import using a csv file, as this needs to be referenced when the target connection is set up, described under section 2.2 below. To prepare, I exported the data I wanted to use for the import from PostgreSQL to a csv file. I then mapped this csv file to Anaplan and ran an initial import to create the import action that is needed.

1. Set up the ODBC connection for PostgreSQL
In this example I am using the 64-bit version of the ODBC connection running on my local laptop. I have set it up for User DSN rather than System DSN, but the process is very similar should you need to set up a System DSN. You will need to download the relevant ODBC driver from PostgreSQL and install it to be able to add it to your ODBC Data Sources as per below (click the Add… button and you should be able to select the downloaded driver).

Clicking the configuration button for the ODBC Data Source opens the configuration dialogue. The configurations needed are:
Database is the name of your PostgreSQL database.
Server is the address of your server. As I am setting this up on my laptop, it's localhost.
User Name is the username for the PostgreSQL database.
Password is the password for the PostgreSQL database.
Port is the port used by PostgreSQL. You will find this if you open PostgreSQL.
Testing the connection should not return any errors.

2. Configuring source and target connections
After setting up the ODBC connection as described above, you will need to set up two connections: one to PostgreSQL and one to Anaplan. Follow the steps below to do this.

2.1 Source connection – PostgreSQL ODBC
Select Configure > Connection in the menu bar to configure a connection.
Name your connection and add a description.
Select type – ODBC.
Select the runtime environment that will be used to run this. In this instance I am using my local machine.
Insert the username for the database (the same one you used to set up the ODBC connection).
Insert the password for the database (the same one you used to set up the ODBC connection).
Insert the data source name. This is the name of the ODBC connection you configured earlier.
Code page needs to correspond to the character set you are using.
Testing the connection should give you the confirmation below. If so, you can click Done.

2.2 Set up target connection – Anaplan
The second connection that needs to be set up is the connection from Informatica Cloud to Anaplan.
Name your connection and add a description if needed.
Select type – AnaplanV2.
Select the runtime environment that will be used to run this. In this instance I am using my local machine.
Auth type – I am using Basic Auth, which requires your Anaplan user credentials.
Insert the Anaplan username.
Insert the Anaplan password.
Certification Path location – leave blank if you use Basic Auth.
Insert the workspace ID (open your Anaplan model and select Help > About).
Insert the model ID (found in the same way as the workspace ID).
I have left the remaining fields at their default settings.

Testing the connection should not return any errors.

3. Task wizard – Data synchronization
The next step is to set up a data synchronization task to connect the PostgreSQL source to the Anaplan target. Select Task Wizards in the menu bar and navigate to Data Synchronization as per the screenshot below.

This will open the task wizard, starting with defining the Data Synchronization task as per below. Name the task and select the relevant task operation. In this example I have selected Insert, but other task operations are available, such as update and upsert.

Click Next for the next step in the workflow, which is to set up the connection to the source. Start by selecting the connection you defined above under section 2.1. In this example I am using a single table as the source and have therefore selected single source. With this connection you can select the source object with the Source Object drop-down. This will give you a data preview so you can validate that the source is defined correctly. The source object corresponds to the table you are importing from.

The next step is to define the target connection, using the connection that was set up under section 2.2 above.

The target object is the import action that you ran from the csv file in the preparation step described under section 1 above. This action is referred to below as the target object. The wizard will show a preview of the target module columns.

The next step in the process is Data Filters, which has both a Simple and an Advanced mode. I am not using any data filters in this example; please refer to the built-in help for further information on how to use them.

In the field mapping you will either need to map the fields manually or have them mapped automatically, depending on whether the names in the source and target correspond. If you map manually, you will need to drag and drop the fields from the source to the target. Once done, select Validate Mapping to check that no errors are generated from the mapping.

The last step is to define whether to use a schedule to run the connection or not. You will also have the option to insert pre-processing and post-processing commands and any parameters for your mapping. Please refer to the built-in help for guidance on this.

After running the task, the activity log will confirm whether the import ran without errors or warnings.

As I mentioned initially, this is a simple guide to help you set up a simple, single-source import. Informatica Cloud has more advanced options as well, both for mappings and transformations.
View full article