Overview
Landing dashboards are critical to the usability of a model: they are the first contact between end users and the model.

What SHOULD NOT be done in a landing dashboard:
- Display detailed instructions on how to use the model. See "Instruction Dashboard" instead.
- Use it for global navigation built with text boxes and navigation buttons. This creates maintenance challenges if different roles need different navigation paths, and it is not helpful once users know where to go.

What SHOULD be done in a landing dashboard:
- Display KPIs with a chart that highlights where users stand on those KPIs, and highlight gaps, errors, exceptions, and warnings.
- Provide a summary/aggregated view of the data in a grid to support the chart. The chart should be the primary element.
- Provide short instructions on the KPIs.
- Include a link to an instruction-based dashboard that contains guidance and video links.
- Include a generic instruction telling users to open the left-side sliding panel to discover the available navigation paths.
- Give users who perform data entry access to the same KPIs that executives see.

Landing dashboard example 1: Displays the main KPI that the planning model allows the organization to plan.
Landing dashboard example 2: Provides a view of how the process is progressing against the calendar.
Landing dashboard example 3: Created for executives who need to focus on escalation. Provides context and a call to action (could be a planning dashboard, too).
Details of known issues

PERFORMANCE ISSUES WITH LONG NESTED FORMULAS
Challenge: A formula becomes very long because all intermediate calculations are nested into it.
Recommendation: If model size does not prevent you from adding extra line items, it is better practice to create multiple intermediate line items and reduce the size of the formula, rather than nesting all intermediate calculations into one gigantic formula. This applies to summary formulas (SUM, LOOKUP, SELECT). Combining SUM and LOOKUP in the same line item formula can cause performance issues in some cases. If you have noticed a drop in performance after adding a combined SUM and LOOKUP to a single line item, split it into two line items. (A worked example follows this list of issues.)

RANKCUMULATE CAUSES SLOWNESS
Challenge: A current issue with the RANKCUMULATE formula can mean that model open times, including rollback, are up to five times slower than they should be.
Recommendation: There is currently no suitable workaround; stay within the constraints defined in Anapedia.

SUM/LOOKUP WITH LARGE CELL COUNT
Challenge: A known issue with SUM/LOOKUP combinations within a formula can lead to slow model open and calculation times, particularly if the line item has a large cell count.
Recommendation: Separate the formula into different line items to reduce calculation time (fewer cells need to recalculate the parts of a formula that affect only a subset of the data).
Example (none of the line items apply to time or versions):
Y = X[SUM: R, LOOKUP: R]
Y applies to [A, B]
X applies to [A, B]
R applies to [B], list formatted [C]
Recommendation: add a new line item 'intermediate' with 'Applies To' set to the format of 'R':
intermediate = X[SUM: R]
Y = intermediate[LOOKUP: R]
This issue is currently being worked on by Development and a fix will be available in a future release.

Calculations over non-common dimensions
Anaplan calculates more quickly when calculations are over common dimensions. Again, this is best seen in an example. With lists W and X:
Y = A + B
Y applies to W, X
A applies to W
B applies to W
This performs more slowly than:
Y = Intermediate
Intermediate = A + B
Intermediate applies to W
(all other dimensions as above). Similarly, you can substitute A and B above with a formula, e.g. SUM/LOOKUP calculations.

CELL HISTORY TRUNCATED
History generation currently has a 60-second time limit, split into three stages with one third of the time allocated to each. The first stage builds the list of columns required for the grid, which involves reading all of the history; if this takes more than 20 seconds, the user receives the message "history truncated after x seconds - please modify the date range", where x is how many seconds it took, and no history is generated. If the first stage completes within 20 seconds, the full history list is generated. The grid displays only the first 1,000 rows; the user must export the history to get the full history, which can take significant time depending on volume. The same steps apply to both model and cell history. Cell history is generated by loading the entire model history and searching it for the relevant cell information. When the model history gets too large, it is truncated to prevent performance issues; unfortunately this can make it impossible to retrieve the cell history that is needed.

Make it real time only when needed
Do not make calculations real time unless they need to be. By this we mean: do not have line items where users input data referenced by other line items, unless they have to be. A way around this is to give users a data input section that is not referenced anywhere (or as little as possible) and then, say, at the end of the day when no users are in the model, run an import that updates the cells where the calculations are done. This may not always be possible if the end user needs to see the calculations resulting from their inputs, but if you can limit real-time calculation to just what they need to see, and use imports during quiet times for the rest, this will still help. We often see this when not all reporting modules need to be recalculated in real time; in many cases these modules are fine to be calculated the day after.

Reduce dependencies
Do not have line items that depend on other line items unnecessarily; this can prevent Anaplan from running the maximum number of calculations it can perform at once. This happens when a line item's formula cannot be calculated because it is waiting on the results of other line items. A basic example can be seen with line items A, B, and C having the formulas:
A - no formula
B = A
C = B
Here B is calculated first, and then C is calculated after it. Whereas with the setup:
A - no formula
B = A
C = A
B and C can be calculated at the same time. This also means that if line item B is not needed, it can be removed, further reducing the number of calculations and the size of the model. This needs to be considered on a case-by-case basis and is a trade-off between duplicating calculations and utilizing as many threads as possible; if line item B is referenced by several other line items, it may indeed be quicker to keep it.

Summary calculation
Summary cells often take processing time even if they are not actually recalculated, because they must check all the lower-level cells. Reduce summaries to 'None' wherever possible. This reduces not only aggregations but also the size of the model.
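As a worked illustration of the first recommendation above, here is a minimal sketch of splitting a nested formula into intermediate line items. All names are hypothetical: Trans is an assumed transaction module, Trans.Region is a list-formatted line item used as the SUM mapping, and the target line items sit in a module dimensioned by the Region list. Instead of nesting the aggregations in one formula:
Margin % = (Trans.Revenue[SUM: Trans.Region] - Trans.Cost[SUM: Trans.Region]) / Trans.Revenue[SUM: Trans.Region]
create intermediate line items so each aggregation is calculated only once and each formula stays short:
Revenue by Region = Trans.Revenue[SUM: Trans.Region]
Cost by Region = Trans.Cost[SUM: Trans.Region]
Margin % = (Revenue by Region - Cost by Region) / Revenue by Region
Besides keeping each formula readable, this avoids evaluating the same SUM three times, and the intermediate results can be reused by other line items.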
Overview: A dashboard with grids that include large lists that have been filtered and/or sorted can take time to open. The opening action can also become a blocking operation; when this happens, you'll see the blue toaster box showing "Processing..." while the dashboard opens. This article includes some guidelines to help you avoid this situation.

Rule 1: Filter large lists by creating a Boolean line item
Avoid using filters on text or other non-Boolean formatted items for large lists on the dashboard. Instead, create a line item with the format type Boolean and add calculations to the line item so that the results return the same data set as the filter would. This is especially helpful if you implement user-based filters, where the Boolean is dimensioned by user and by the list to be filtered. The memory footprint of a Boolean line item is 8x smaller than that of other line item types.
Warning on a known issue: On an existing dashboard where a saved view is modified by replacing the filters with a Boolean filter line item, you must republish the view to the dashboard. Simply removing the filters from the published dashboard will not improve performance.

Rule 2: Use the default sort
Use sort carefully, especially on large lists. Opening a dashboard with a grid where a large list is sorted on a text-formatted line item will likely take 10 seconds or more and may be a blocking operation. To avoid needing a sort, check that the list is sorted by default on the criteria you need. If it is not, you can still make the grid usable by reducing the items shown with a user-based filter.

Rule 3: Reduce the number of dashboard components
Dashboards sometimes include too many components, which slows performance. A reasonably large dashboard is no wider than 1.5 pages (avoiding too much horizontal scrolling) and no more than 3 pages deep. Once you exceed these limits, consider moving components into multiple dashboards. Doing so helps both performance and usability.

Rule 4: Avoid using large lists as page selectors
If you use a large list as a page selector on a dashboard, that dashboard will open slowly. It may take 10 seconds or more, and loading the page selector can take more than 90% of the total time.
Known issue / this is how Anaplan works: If a dashboard grid contains list-formatted line items, the contents of page selector drop-downs are automatically downloaded until the size of the list exceeds a certain threshold; beyond that size, the download happens on demand, in other words when a user clicks the drop-down. The issue is that when Anaplan requests the contents of list-formatted cell drop-downs, it also requests the contents of ALL other drop-downs, INCLUDING page selectors.
Recommendation: Limit page selectors on medium to large lists using the following tips:
a) Make the page selector available in one grid and use the synchronized paging option for all other grids and charts. There is no need to allow users to edit the page in every dashboard grid or chart.
b) A large list makes for a poor user experience as a page selector, since no search is available; it creates both a performance and a usability issue.
Solution 1: Design a dashboard dedicated to searching a line item. From the original dashboard (where you wanted to include the large-list page selector), the user clicks a custom search button that opens a dashboard where the large list is displayed as the rows of a grid. The user can then use search to find the item needed. If possible, implement user-based filters to help the user further reduce the list and quickly find the item. The user highlights the item found, closes the tab, and returns to the original dashboard, where all grids are set on the highlighted item.
Alternate solution: If the dashboard elements don't require the use of the list, publish them from a module that doesn't contain this list. For example, floating page selectors for time or versions, or grids that are displayed as rows/columns only, should be published from modules that do not include the list. Why? The view definitions for these elements contain all of the source module's dimensions, even if they are not shown, and so carry the overhead of populating the large page selector if it is present in the source.
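As a minimal sketch of Rule 1, assuming a hypothetical filter module dimensioned by Users and by a Products list, the Boolean filter line item could be as simple as:
Show Product? = Revenue > User Threshold OR Is Favorite?
Here Revenue, User Threshold, and Is Favorite? are assumed line items whose dimensions are subsets of Users and Products (Anaplan matches common dimensions automatically), so the Boolean is pre-calculated per user and per product. Publish the saved view with "Show Product? is true" as the filter criterion instead of filtering on a text or numeric line item; the grid then only has to evaluate a compact Boolean when the dashboard opens.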
How can we help customers with this issue? Whenever possible, ask end users "Why do you need these exports?" and "What do you need to do with the exported files?" You will likely get answers such as:
1. Ad-hoc analysis (e.g., create my own filters and sorts, search for specific data, compare things)
2. Reformat and present (e.g., reformatting data for presentation purposes)
3. Recurrent reporting (e.g., always export the same data and put it in an Excel® or PowerPoint® file for distribution)
4. Export to other systems (e.g., put the Anaplan data in a different system, such as writing back to an operational tool or pushing to a reporting platform)
5. Simulation (e.g., run additional "what if" scenarios, such as editing the hierarchies and comparing the results, because users don't know how to do this in Anaplan and it seems easier and faster)
6. Urgent specific analysis (e.g., specific analyses are suddenly required but are not yet available in the Anaplan dashboards, or the user does not know how to build them in Anaplan)
7. Batch printing (e.g., printing large quantities of reports and data from Anaplan)
8. Reports with a high impact on model size

The following recommendations for each need above should be considered in your model design, in an effort to deliver a high-performing and collaborative model.

1. Ad-hoc analysis
Build dashboard features, such as user-based filtering, that cover most of the filtering requirements using item attributes. Train end users to use the native filter, search, and sort functions. This requires change management, as users will see a usability gap with Excel; focus on what each Anaplan filter can do that Excel filters cannot (and/or conditions, multi-dimensional filtering). Build decluttered dashboards that display only relevant information:
- Only display the items that make up 80% of the number (use the RANKCUMULATE function).
- Only display items below/above a specific threshold, and make the threshold value per user by using a user dimension.
- Allow sorting rows alphanumerically: have a line item with the item name so that alphanumeric sorting is possible, or set it by default.
- When grids show items created on the fly, find a way to always display newly created items on top.
- If manual line item selection is tedious due to the use of manual show/hide on 50+ line items, use a modeling solution based on line item subsets that provides a checkbox-based line item selector.

2. Reformat and present
Use the PowerPoint add-in. Build module views in your model to select the data you need to present, then create these views in PowerPoint via the add-in and edit the formatting once and for all (leverage the three key features: tables, text, and charts). Start by building a quick proof of concept to show your customer how it works and what the benefits are. Format the PowerPoint using the customer's colors and branding. Show how the data refresh is automated and how the same deck can be used week over week and month over month without having to re-export.

3. Recurrent reporting
Analyze the reporting requirement, envision building an equivalent dashboard, and perform a gap analysis between what is required and what the dashboard can offer. Work on change management and training to migrate users to the online dashboard. Emphasize the dynamic functionality of dashboards in Anaplan that differentiates them from a static export: row sync, chart sync, cell sync, level selection, and master-detail. If licensing is an issue, set up a meeting to review the 2.0 pricing model and compare the cost of extra Anaplan licenses against the cost of the manual export/reformat/distribute process. Emphasize the security risk of email distribution of Excel, PowerPoint, and/or PDF files: every user who receives such a file will have access to the same data set, even if selective access has been implemented in Anaplan, which may be a significant security risk.

4. Export to other systems
This is a very good reason for exporting. However, if the target system is a BI system, always check whether that report could live in Anaplan instead. Valid reasons to report outside of Anaplan are licensing and the use of data that is not meant to be in the Anaplan model. If you export to other systems, consider exporting either out of the master data hub, or from a dedicated export model that holds the same data as the main model. Also implement incremental exports so that only data that has changed since the last export is exported. Usually only administrators should export to other systems, and this should be done outside of normal business hours if possible. If end users are exporting, set up a filtered view focused on what they want to export; this avoids exporting a large data set that takes longer than necessary to process. Control the format of the numbers that get exported: use the ROUND() function to control the number of digits.

5. Simulation
We see customers doing hierarchy simulations in Excel by simply regrouping, splitting, and renaming nodes, recalculating all rollups, and comparing results. It is possible that the customer does not feel skilled enough to build a hierarchy simulation in the Anaplan platform. As a result, very few projects have implemented hierarchy versioning within Anaplan, though it is actually possible using standard modeling functionality.

6. Urgent specific analysis
When end users do not yet have the ability to build their own dashboards or analyses, they will ask for export functionality from dashboards so that they can create these analyses without depending on the modeling team. This is a good reason to enable exports, but it should only be allowed for specific end users. Since showing or hiding the export action is set per dashboard and not per role, you will need to duplicate the dashboard: one for standard end users who do not need to export and build their own analyses, and one for advanced users who do. In that case, always ask the users who create analyses from an export file to share the reports they have built with you (or the project team), and add those dashboards to the product backlog with a high priority. Your goal should be to eliminate all dashboards and reports built outside of the Anaplan platform.

7. Batch printing
The customer might need to distribute printouts of plans to the user community in preparation for a large planning summit where each user needs to be briefed ahead of time and prepared to discuss the plan. The Anaplan platform does not provide an easy way to generate reports in batch mode, so it is best to leverage a third-party BI platform for this activity. In that case, have an administrator run a set of exports that put the required planning data into a flat file that the BI platform can import, format, and distribute.

8. Reports with a high impact on model size
If a report requires dimensionality that was not planned to be added to the model and that could significantly increase the model's size, then allowing export and Excel reporting using a pivot table may be the best solution. This can be the case when end users need to export a flat transactional data set and re-create sums on some of the columns available in the flat transaction module. Doing this in Anaplan can require the creation of a large multi-dimensional module, depending on the size of the dimensions the end users need to combine, and creating that module may not be possible due to size limitations or user access reasons (non-administrators cannot create modules). In that case, allow exporting to a CSV file and let users run the export and sums in Excel or another system. Be aware that the performance of an exported report in Excel containing millions of cells can be poor, which can frustrate users. Always envision building the equivalent analysis in Anaplan if possible, either in the main model or in a model dedicated to reporting.
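As a small illustration of the last point under need 4, a dedicated export line item could round the source value so that the exported file carries a controlled number of digits. The line item names here are hypothetical:
Revenue for Export = ROUND(Revenue, 2)
ROUND(value, 2) limits the exported figure to two decimal places, avoiding long floating-point tails in the CSV while leaving the original Revenue line item untouched for calculations.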
Each time a user runs an import or an export, it affects platform performance: the action blocks all other users of the model from performing any tasks while it runs. While the action runs, a toaster message (a blue box at the top of the Anaplan screen) indicates to every connected user that the platform is processing an action. Anyone who frequently exports out of Anaplan will quickly become unpopular with the other users of the model, especially if the exports last more than a few seconds. Users who are not workspace administrators can:
- Export data out of a module within a dashboard
- Run an import prepared by an administrator
- Run a process that an administrator has prepared; a process can combine a number of imports and exports
Overview
Once a model is built, test user concurrency and data load levels, and then optimize the system for the specific use case and conditions. There are three main options for tuning for optimum performance:

1. Model design. Is the model designed correctly? Have you reduced sparsity and unnecessary complexity? Is the model too big? Have you neatly designed the model to have input, engine, and output modules? Have you cleaned up as you went? Problems often exist when you have added something to the model, tested it, found it did not work out, and then not removed it. Such a piece does not fulfill any requirement; we sometimes refer to it as model debt. Remember, Anaplan is a living, breathing model, so any line items that exist in the model, whether used or not, are used by the engine. A surplus piece (model debt) is an inefficient use of model space.

2. Model calculations. Check that calculations are as efficient as possible. Are you using standard functions where they would be more efficient?

3. Platform code. Do we need to engage L3 and/or engineering to look at code optimization?

Performance issues, including data volume and user concurrency
Performance, and the experience the end user has, is of critical importance when deploying applications to a wide audience. Several factors therefore need to be considered when deploying, in order to optimize performance and determine whether a single-instance or distributed-instance strategy is best:
- Model size
- Model complexity
- User concurrency

For end users to enjoy the best possible experience, with an average response of less than two seconds for the most popular requests, model size and concurrency must be managed appropriately. In many cases a base model is produced that contains all the dimensionality and calculation logic, and the model is then subjected to a series of tests that determine what the end user experience and model performance will be.

The first test is a load test, where data is loaded into the model to simulate actual production volumes. During this test, basic functions are performed such as data input, allocations, filtering, pivoting, sorting, and list-formatted item drop-down manipulation. This is done both in an automated fashion and via human intervention. If you determine that some or many functions are slow and server memory and CPU are used to the maximum, this is likely a case for distribution. If, however, the model is slow but user concurrency is minimal, this could form a case for a single model instance, as the system is merely processing numbers and not being accessed by a user community; otherwise, the model could also be split to provide a better user experience.

The size of a model, measured in number of cells or in memory, is a good indicator for splitting a model. Set the expectation that a model should not go beyond 15B cells or 120 GB of memory. Therefore, if an application requires 30B cells, it should be split into two models. Here is an example of how a split-model decision can be made.

First, estimate the size of the application: list the main dimensions that will be used in each application and define the expected number of cells for each of the valid combinations of dimensions (these will be modules).

Application name | Group (module) | Dimensions | Cells for the group
Application 1 | Group 1 | Customer: 80 (incl. hierarchy rollups); Product: 1,500 (incl. rollups); Time: 36 months, 12 quarters, 3 years, 3 YTD; Versions: 2 (Actual, Budget); Line items: 50 metrics | 650 million
Application 1 | Group 2 | Employee; Region; Time (same for each group); Versions (same for each group); Line items | 300 million
Application 1 | Group 3 | ... | ...
Total Application 1 | | | 12B cells

Then summarize how many models will be needed for each application.

Application name | Estimated size in cells | Estimated size in GB | Required models
Application 1 | 12B | 90 GB | 1
Application 2 | 40B | 280 GB | 3

The second test is user concurrency. If you have an application that requires a large user base to interact with it, a user concurrency test should be performed. As a general rule, user concurrency is approximately 10% of the total user community. Therefore, if you have a total user base of 1,000, around 100 people will be on the live system performing tasks at any given time; it is usually unlikely that many more would be accessing the system simultaneously. In some cases, though, applications follow a known high-concurrency pattern, and this needs to be taken into account. For example, a weekly sales forecast may have 1,000 users on the system, but each Sunday (if forecasts are due Monday) user concurrency will likely be quite high, perhaps as high as 50–60%. Your processes and experience will determine exact concurrency in high-traffic applications or periods.

The best approach to arriving at the right number of users in a model is to test concurrency with automated tests, and then with manual tests that include a large number of real users. First, start with User Acceptance Testing (UAT). In short, UAT involves human users simultaneously performing scripted tests inside the platform. During these tests, system behavior is monitored and reported by each of the human testers, which may be captured via a user survey launched after UAT. Then, automated testing can be performed in the platform. Automated testing simulates user actions across the platform; to do this, coordinate with the Anaplan QA team to schedule automated testing of load, performance, and concurrency. It is also important to monitor the server while automated testing is in progress to track memory and CPU usage; the Anaplan QA team can obtain server monitoring metrics as part of the model performance testing process. In either case, application tuning needs to happen to optimize for all required conditions.

Multi-model application optimization
The application tuning lifecycle includes a two-step, iterative tuning process that recurs during the model building process. Step 1 is carrying out the complete build. Step 2 is tuning at the application level (i.e., optimizing the design and the calculations or business rules) with Anaplan's L3 Support team and the solution architect. On rare occasions you may also make additional platform-level or code optimizations with the assistance of Anaplan's engineering department.
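To make the Group 1 estimate above concrete, the cell count is simply the product of the dimension sizes and the number of line items, treating the 36 months + 12 quarters + 3 years + 3 YTD periods as 54 time members:
80 customers × 1,500 products × 54 time members × 2 versions × 50 line items = 648,000,000 cells, i.e. roughly 650 million.
The same multiplication, applied module by module and summed, gives the application total used to decide whether the 15B-cell / 120 GB guideline forces a split.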
Overview
In many situations, enterprises need to split very large and complex models, for reasons including:
- Performance issues, including data volume and user concurrency
- Security considerations
- Metadata time cycle differences
- Regional / business process differences

Performance issues
Anaplan is a platform designed to enable businesses to build models in almost endless configurations, so there is no pre-set size at which a model should be distributed. It is not uncommon for a 15-billion-cell model performing complex calculations to remain a single model used by only one person or a few people. In contrast, it is also not uncommon to have a distributed model as small as 1 billion cells, with complex calculations and multiple people in multiple locations using it. As a general guide, this table takes into consideration the factors that influence a single-model or distributed-model solution.

Sample Model | Complex Calculations* | Large Data Volumes (> 10 GB)* | High User Concurrency* | Solution
Sample Model 1 | Yes | No | Yes | Single model
Sample Model 2 | Yes | Yes | Yes | Distributed
Sample Model 3 | No | Yes | Yes | Depends on actual volume
Sample Model 4 | No | No | No | Single model
Sample Model 5 | Yes | Yes | No | Depends on actual volume
Sample Model 6 | No | No | Yes | Depends on user concurrency
Sample Model 7 | No | Yes | No | Depends on actual volume
* As always, apply appropriate testing and tuning to optimize the model. Different combinations can have a dramatic effect on the desired performance and experience.

Security considerations
Anaplan has robust security across its platform. In some cases, it is possible to achieve region-specific experiences using selective access; if so, distributed models are not necessary. But in mixed environments where model builders and end users operate in the same model, and where various business processes exist, it sometimes makes sense to separate or distribute models rather than keep them in a single instance. For example, different countries may all need access to a workforce planning application, with model builders from each country modeling and maintaining their own section. By distributing the models and restricting access, this problem is abated. Note: Where there is a need to segregate administration (model builder) roles, the split models will need to be in different workspaces, as the admin role is granted by workspace, not by model.

Metadata time cycle differences
A single instance of a model serving the world across multiple time zones does not respect the different business cycles involved, so updates to a model's data and/or metadata affect the entire community, some of whom may be in the middle of their planning cycle. These changes may be small, but in many instances they are large-scale and frequent, requiring pauses in the application cycle for end users. A configuration that distributes the model and respects business cycles and time zones can benefit the business: regions that are in down-time (e.g., in the middle of their night, when usage is very low) can independently carry out updates to data and metadata without affecting other regions.

ALM application: Metadata time cycle differences
Alternatively, ALM prevents pauses in the application cycle altogether by providing a development environment for each model. You may edit development models at any time without disrupting live production models for end users. Then, once you have completed your edits on the development model, you may deploy them to live production models without any disruption or down-time for end users. As a result, using ALM removes the risk of pauses in the application cycle for any user at any time.

Regional / business process differences
Similar to the workforce planning example above, regional differences may exist. It may not be practical to include in a single instance all the regional variances that exist across countries for workforce planning; much of the functionality would not be relevant to every region, causing confusion and frustration as well as a complicated user interface. In this situation, a distributed model is the best solution. Another consideration is that of differing business processes: processes that are intrinsically the same, but different enough to warrant separate treatment, and business processes that are completely different. An example is a process where a business updates a forecast. Different parts or divisions of the business may arrive at the same point in a revenue forecast in different ways: one may do an initial bottom-up forecast, submit it to management for draft approval, and then do a final submission; another may take a top-down approach, setting a target that then needs to be validated. These are connected, yet separate, processes that may warrant separate instances of an application.

ALM application: Regional / business process differences
If regional and business processes are similar between satellite models, and the metadata between them can be synced from a single development (primary) model, then ALM can be used to develop, test, and deploy the single development model that feeds the satellite models. If the regional and/or business processes cannot conform to the same metadata from a single development model, then multiple development models must be used. In this case, ALM would be used to update, test, and deploy each development model, which would then feed its respective satellite model.
There is an easy way to see which dashboards a module has been published to. This can be particularly helpful when you are making changes to a module and need to know which dashboards the changes could impact. It can also help reduce model size by identifying modules that might not be needed: if a module is not used on any dashboard, check whether it is needed for anything else and, if it is not, eliminate it.
It is important to understand what Application Lifecycle Management (ALM) enables clients to do within Anaplan. In short, ALM enables clients to effectively manage the development, testing, deployment, and ongoing maintenance of applications in Anaplan. With ALM, it is possible to introduce changes without disrupting business operations: you can securely and efficiently manage and update your applications with governance across different environments, and quickly deploy changes into production as you test and release development work, leaving more time to run "what-if" scenarios in your planning cycles. Learn more here: Understanding model synchronization in Anaplan ALM. Training on ALM is also available in the Education section.
Have you ever wondered where, within a model, a list property is in use? The Referenced By property will tell you! Within Model Settings select the desired list and click on the Properties tab. From here just look for the column labeled Referenced By. It displays where the list is currently in use or being referenced. This is especially useful if you want to edit or delete a property but you don’t know if it’s being used. Please note this same feature is available for list subsets.        
Have you ever wondered where, within a model, a line item or line item subset is in use? The Referenced By property will tell you! Open the model which contains the line item. Toggle Blueprint mode on and look for the column labeled Referenced By. It displays where the line item is currently in use or being referenced.      
Functional areas should be sorted by grouping dashboards and modules separately. Doing this allows quick access to dashboards, as well as improved control over user access assignments to these areas. Use the Reorder button to sort the functional areas: select the rows that should be moved, then click the Reorder button and choose where to move them.
The Contents panel provides end users with links to dashboards and modules that are accessible by their user role. Workspace administrators should remove all unnecessary dashboards and modules for each role to keep the navigation options succinct. Always keep the Contents panel in line with the business process and the user role.      
You should create user roles for each business function, and then apply Selective Access to lists to control the access that each end user needs. Avoid creating different roles, with varying access rights, for the same type of end user; this avoids additional model maintenance. Sort roles in a sensible fashion using the Reorder button (e.g., most privileges, some privileges, least privileges). Consider using a module to control user access: this allows model builders to provide clear instructions on the roles and access rights in the model, along with the ability to change user access rights from a convenient dashboard. Additionally, you can create an import and run it as part of a process to import user access from this module; a sketch of such a module follows. Note that only model builders will have access to import data into the user list. More information on User Roles and Selective Access can be found in the Learning Center under Advanced Topics.
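As a minimal sketch of that access-control module, with entirely hypothetical names, the module could be dimensioned by the Users list with one line item per attribute you want to govern:
Module: User Access Control (applies to: Users)
  Role               (text, matching the role names defined in the model)
  Cost Center Access (list formatted: Cost Centers, for selective access)
  Read Only?         (Boolean)
An import action mapping these line items onto the users settings can then be bundled into a process that an administrator runs after the values are updated on a dashboard. The exact columns accepted by a users import depend on your model and its lists, so treat this layout as an assumption to adapt rather than a fixed recipe.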
When user roles are given edit access to lists, memory is pre-allocated for those users, which increases model size. Give each user role access only to the lists it could actually update through actions.