
Performance
Discussion List
- If an expression is repeated in a formula (or in other modules), put it on a separate line item: “calculate once, reference many times” (see the first sketch after this list)
- Don't combine Select Levels with filters on the same hierarchy. That is a double filter and inefficient, because both will be evaluated; use one or the other (see the filter sketch after this list). Setting a Boolean line item in a Syste…
- Keep time-based data in a separate module from the static, non-time-based attributes. Best Practice article: Data Hubs: Purpose and Peak Performance
- Ensure the dimension order is consistent among modules. Calculations are faster if the common dimensions are in the same order in Applies To. The size of the list is not as critical as the order B…
- Use line item subsets to create a different numeric formula for each version and avoid multiple IFs. Best Practice articles: Line Item Subsets Demystified; Decreasing the Length of Your Formulas; Variance A…
- Based on the code of the list, it should be possible to derive the attributes; calculating the values is more efficient than storing text fields (see the code-parsing sketch after this list). Best Practices article: Data Hubs: Purpose and Peak Perf…
- Clearing and reloading a list increases the structural changes within a model and raises the likelihood of a model save, thus increasing the import time. It also removes pre-allocated memory…
- Using codes is more efficient for loading and using lists, so strive to always have a code for lists. This is especially important for numbered lists. Exception: 1.05-02a Static non-hierarchy lists: For…
- Don't create a transaction list with a top level. The calculations will need to sum across all items in the list even if only a single item is added. Exception: 5.07-09a Check totals: If a check total of …
- If you need the totals for validation purposes, create intermediate subtotals within the transaction list. This will significantly reduce the calculation load. Related to Rule: 5.07-09 Avoid using a To…
- It is more efficient to aggregate the data in the hub and then export it than to aggregate it through imports to the downstream models (see the aggregation sketch after this list)
- Use the Data hub (or another reporting model) to keep detailed transactional data out of the main planning models. Large amounts of historic transactional data can inflate the size of the planning mod…
- Try to avoid creating master data in the hub; it should come from the source system(s)
- Use System modules for filtering data (current period, current fiscal year, etc.); see the filter sketch after this list. Related to Rule: 2.01-15 Filters in separate modules. Best Practice articles: Filter Best Practice; Data Hubs: Purpose and Pe…
- Get the data from IT in the correct format and at the correct granularity. Related to Rule: 5.04-06 Import the correct granularity
- Use flat list structures to create modules and views for downstream exports
- Try to keep analytical modules out of the hub
- There should be no need for composite list hierarchies in the hub. They can be built to "test" the actions, but after testing they should be deleted. Exceptions: 5.07-01a Validation purposes: If data i…
- Only use one filter criterion; if more are needed, combine them into one Boolean line item (see the filter sketch after this list). Related to Rule: 4.02-01 Use efficient filters. Best Practices article: Filter Best Practice
- Create two views from a source module: one for the import to the list (using name, code, and parent), and one for the attributes to the associated System module. Related to Rule: 5.05-02 Only include…
- Only include the line items that are required for the import. Create multiple views if the module is used for different imports. The number of columns in the view does affect the speed of the import. …
- Imports should always be run from a module view. This allows for filtering and for including only the elements required for each import
- Line items holding imported data should reflect the data type: list-formatted, number, or date. Don't use text (unless it is a true text field); see the typing sketch after this list
- Only import data at the granularity needed. There is no need to bring in transactional data at a weekly level when planning happens at the month level
- Wherever possible, aggregations should be done in the source system. This is likely to reduce the size of the import file, meaning faster imports
- Use the Ignore field if there are any unwanted fields in the source data
- The data file should have the key and values based on the dimensions. Non-dimensional data should be in a different file. Related to Rule: 5.04-01 From source system, create a code defining all attribu…
- Ideally, this would be a separate, unique file rather than the same file used for the transactional load. Exception: 5.04-01a Code is >60 characters: If the code is >60 characters, you will have to use…
- Each action in a process triggers a recalculation, so try to minimize the number of actions
- Critically review the need for user-driven actions and consider the effect on user concurrency. Attempt to use formulas instead; this may need additional modules, but the user experience could be improve…
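Example Sketches
The sketches below illustrate several of the rules above in Anaplan's formula language. All module and line item names are hypothetical, the layout simply mirrors the blueprint, and lines starting with # are annotations (Anaplan formulas have no comment syntax).
First, "calculate once, reference many times": a repeated sub-expression is moved to its own line item so the engine evaluates it once and downstream line items just reference it.
```
# Module: REV01 Margin Calc (hypothetical; Revenue and Cost are number line items)

# Before: the division is evaluated separately in every formula that repeats it
Margin %    = (Revenue - Cost) / Revenue
Low Margin? = (Revenue - Cost) / Revenue < 0.1

# After: calculate once, reference many times
Margin      = (Revenue - Cost) / Revenue
Margin %    = Margin
Low Margin? = Margin < 0.1
```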
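The filter sketch: hold Boolean filters in System modules dimensioned only by the relevant list, and combine multiple criteria into a single Boolean so each view applies exactly one filter. START() and CURRENTPERIODSTART() are standard Anaplan functions; the module names, and the Status and Volume attribute line items, are assumptions for illustration.
```
# Module: SYS01 Time Filters (dimensioned by Time only)
Current Period?  = START() = CURRENTPERIODSTART()

# Module: SYS02 Product Filters (dimensioned by Products only)
Active?          = Status = "Active"
Has Volume?      = Volume > 0
# One combined Boolean, so the view applies a single filter
Show in Reports? = Active? AND Has Volume?
```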
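The code-parsing sketch: deriving attributes from a structured list code instead of storing them as text fields. CODE, ITEM, LEFT, MID, and FINDITEM are standard Anaplan functions; the SKU list, the code format, and the target lists are assumptions.
```
# Module: SYS03 SKU Attributes (dimensioned by the SKU numbered list)
# Assumed code format RRRR-PPPP, e.g. "EMEA-P100"
SKU Code    = CODE(ITEM(SKU))
Region Code = LEFT(SKU Code, 4)
# Resolve to list-formatted results, not text
Region      = FINDITEM(Regions, Region Code)
Product     = FINDITEM(Products, MID(SKU Code, 6, 4))
```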
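The typing sketch: where a source genuinely arrives as text, convert it once into properly typed line items rather than carrying text downstream. VALUE, DATE, LEFT, MID, and RIGHT are standard Anaplan functions; the module, line item names, and the assumed YYYYMMDD date format are illustrative. Importing directly into number- and date-formatted line items is still preferable where the source allows it.
```
# Module: DAT01 Transaction Staging (dimensioned by a flat Transactions list)
# Amount Text and Date Text are text line items as loaded (Date Text assumed YYYYMMDD)

# Typed line items, derived once
Amount   = VALUE(Amount Text)
Txn Date = DATE(VALUE(LEFT(Date Text, 4)), VALUE(MID(Date Text, 5, 2)), VALUE(RIGHT(Date Text, 2)))
```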
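The aggregation sketch: summarizing transactional detail inside the hub and exporting the summary, rather than pushing the detail to downstream models. The SUM aggregation over a time-period-formatted line item is standard Anaplan syntax; the modules and line items are assumptions.
```
# Module: HUB01 Transaction Detail (dimensioned by a flat Transactions list)
# Amount is a number line item loaded from the source file
# Month is a time-period-formatted line item, e.g.
Month  = PERIOD(Txn Date)

# Module: HUB02 Monthly Summary (dimensioned by Time at month level)
Amount = 'HUB01 Transaction Detail'.Amount[SUM: 'HUB01 Transaction Detail'.Month]

# Export a view of HUB02, not HUB01, to the planning models
```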