Share your modeling tips and tricks — Best Practices Community Challenge
As we step into September, we’re thrilled to introduce a new and exciting addition to our Anaplan Community Challenges: the “Best Practices” challenge! Throughout the year, we’ll be inviting Community members to share best practices and tips around a specific topic. It’s a fantastic opportunity to showcase your expertise and learn from others in Community!
For our first Best Practices challenge, we’re focusing on modeling tips and tricks. Whether you’ve discovered a time-saving technique, a creative workaround, or a unique approach to building models in Anaplan, we want to hear from you! Or, it may be something that you think is important that many people forget — truly a “best practice”! This is your chance to share your insights with the community and help others enhance their Anaplan skills.
How to participate
- The Best Practices Challenge around modeling tips and tricks kicks off today, September 10, and concludes on September 30.
- Share your best practices related to modeling in Anaplan on this post. Whether it’s a detailed write-up, a short tip, or even a video, we welcome all formats!
- Explore the tips shared by your fellow Community members. You never know — you might discover a new trick that changes the way you work in Anaplan!
What’s in it for you?
- Recognition: Showcase your modeling expertise and stand out as a Community thought leader!
- Learn: Unlock new insights by checking out contributions from both newer and seasoned professionals in the Anaplan ecosystem.
- Earn a Badge: As a thank you for your participation, everyone who shares their best practices will receive an exclusive Community Challenge badge. It’s a fun way to show off your contribution!
- Earn a shout-out at our upcoming event: on October 23, we’ll be hosting an event discussing modeling best practices, and the best tips from this challenge will be shared!
Participate today
Whether you’ve got a quick tip or a game-changing strategy, your insights are extremely valuable. And, we hope you all come away with something new when you read Community members’ Challenge responses. Share your tips by commenting below — your contribution could be the spark that inspires someone else!
Comments
-
Sometimes you need to aggregate data for use in further calculations.
One basic technique is to turn on summary methods.
This technique has several drawbacks:
- It goes against the Planual / best practices (2.03-01 Turn Summary options off by default — Anaplan Community)
- Summary methods calculate at every level of every dimension, which can create a huge number of calculated cells and quickly inflate the model for a limited-scope calculation
- Summary methods can slow down your model (OEG Best Practice: Reduce calculations for better performance — Anaplan Community)
- Removing the summary methods later (for instance during model cleanup) can silently break your calculations unless you run a full regression test on all dependencies
To manage this, I strongly recommend the following:
- Apply the Planual best practice and always set the summary method to None in calculation modules (this advice doesn’t apply when you build a reporting module, where aggregations are a must most of the time and one of Anaplan’s key strengths)
- Use the following approach when you need to aggregate over a limited number of dimensions on a multi-dimensional line item:
- Create a new list "Custom Aggregation" with one parent "Total Aggregation" and one item "Aggregation"
- When you need to aggregate, use the following template
- In the aggregation module:
- The overall structure of the module (its Applies To) has the dimensions of the source line item you need to aggregate
- Create a line item "Aggregation member selection" formatted as the "Custom Aggregation" list and without any dimensions / versions / time
- Create a second line item "Aggregate on" with the same format, whose Applies To holds the dimensions you want to remove from your final aggregation
- Create a third line item "Aggregation result" with the final dimensions in Applies To, holding a formula like: Source Line Item[SUM / ANY… : Aggregate on, LOOKUP: Aggregation member selection]
- Once this is in place, check the results of your aggregation.
- Closing thoughts:
- It takes a bit of experience to use this technique, so force yourself to practice it at the beginning (a bit like when you went from VLOOKUP to MATCH / INDEX in Excel)
- Document the technique somewhere so that you have a reminder at hand when implementing it
- I strongly recommend putting the custom aggregation calculation in its own dedicated calculation module (OEG Best Practice: Best practices for module design — Anaplan Community)
- The aggregation module has subsidiary views to keep cell counts down, but each line item remains readable in the module
Here is a detailed view of how it is built (a worked example follows the steps below):
- First, create the technical list for Custom Aggregations (very simple: just one single element and no Top Level)
- 1) Create the custom aggregation module (one module per custom aggregation)
- Line items:
- 2) Your Aggregation element: formatted as the Aggregation list, points directly to the list element. No dimensions at all (i.e. the cell count should be 1)
- 3) Aggregate on: Aggregation list format, formula = the above line item, Applies To: the dimensions you need to aggregate over (and thus remove from the target calculation)
- 4) Target line item calculation: format as needed, formula = Source Line Item[SUM / ANY… : Aggregate on, LOOKUP: Your Aggregation element], Applies To / Time Scale and so on as needed for the result
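To make the template concrete, here is a hypothetical worked example (the module and line item names are my own illustration, not from the original post). Suppose Revenue sits in a module dimensioned by Product, Region, and Channel, and you need Revenue by Product and Region only (aggregating over Channel):
- Aggregation member selection (Custom Aggregation list format, no dimensions) = Custom Aggregation.Aggregation (or simply entered manually)
- Aggregate on (Custom Aggregation list format, Applies To: Channel) = Aggregation member selection
- Revenue excl. Channel (number format, Applies To: Product, Region) = Revenue[SUM: Aggregate on, LOOKUP: Aggregation member selection]
The SUM maps every Channel item onto the single "Aggregation" member and the LOOKUP reads that member back, so the result carries only Product and Region without any summary methods being switched on.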
-
Call me Marie Kondo, but I'm a sucker for organization! One of the most overlooked places for tidying up is your actions list; check out my quick tip on how you can add dummy actions to help stay organized.
-
Anaplan Champions!
Ok, so @Tiffany.Rice and @david.savarin inspired me to share one simple, hot tip that will significantly improve the supportability and reusability of your data and formulas.
Always create a system module after you create a list. The system module should, at a minimum, contain line items for the related lists, properties of the list item, and any filters you need.
Here's a quick video snapshot. Hope this is a helpful tip!
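In case the video doesn't come through here, a minimal sketch of what that could look like (the list and line item names are my own illustration, not from the video): for a Products list, a module SYS01 Products, dimensioned only by Products, might hold:
- Code (text) = CODE(ITEM(Products)) (assuming the list has codes)
- Product Family (list-formatted as the parent list) = PARENT(ITEM(Products))
- Active? (Boolean), maintained as a flag and reused as a filter in downstream views
Downstream modules and saved views then reference these line items instead of repeating ITEM, CODE, or PARENT calls.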
-
Below are some best practices I use:
- Naming conventions: modules, saved views, actions, and UX pages should always carry a unique code no longer than 5-6 characters. It's easier to reference a code in discussions.
- Avoid subsidiary views in modules. They are acceptable only if the line item uses the same lists as the module's dimensionality, in other words when the data of the subsidiary view is visible when the module is open. This makes the model more auditable. If the line item needs other dimensionality, create a separate module for it.
- Use "Admins Only" for import actions from files. This keeps the action editable and does not lose the file structure. Always store the files in an external place so you can fix issues if the file structure is not available in Anaplan (for example, when the model is copied).
- Re-order the import data sources and put all the files first, to easily identify external files imported into the model.
- Try to avoid multiple conditions in a saved view and create a filter line item that contains all the conditions (see the sketch after this list). Saved views will be more performant, and the line items used as conditions will be referenced in the filter line item, making it easier to audit where a line item is referenced.
- Make multiple revision tags during the development phase (every 2-3 days), even if ALM is not activated. This makes it easy to trace, over time, the structural modifications made between two revision tags.
- Publish only processes to UX pages, not actions. Always put an action in a process (even if the process will contain only one action). This prevents the action from being deleted and shows that the action is used in the model.
- Use the "Notes" columns in the various places they appear (actions, modules, line items, etc.) for additional explanations/comments to make the model auditable.
- Create a dummy role "Validate Security" in large models with multiple roles. The role is useful for identifying new modules, lists, and actions for which security has not been managed: if "Validate Security" shows "None", security was not managed for that module, list, or action, while any object where "Validate Security" is not None indicates security was managed.
- Always have a saved view with exactly the same layout as each export action, even though the saved view will not be dynamically connected to the export action. Be aware that the filter used in an export action cannot be traced back, and the corresponding saved view is the tool that lets you trace it.
- Always avoid deleting and recreating import actions, import data sources, or export actions. Always update these Anaplan objects instead, so as not to break any external integrations that could be connected to them. Deleting and re-creating these objects (even with the same name) will generate different internal IDs, and external integrations (if any) will stop working.
- Functions like ITEM, CODE, PARENT, and NAME should be used exclusively in SYSTEM modules. If they are needed elsewhere, reference the SYS module line items. This will increase performance, especially in CALCULATION module formulas that check a member of a list.
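To make the filter line item tip concrete, here is a minimal sketch (the module and line item names are illustrative, not Alex's): instead of stacking three separate conditions on the saved view, add a single Boolean line item in the relevant SYS or CALC module, for example:
- Export rows? (Boolean) = Is Active? AND Sales Region = Selected Region AND Net Revenue > 0
and apply only Export rows? as the saved view filter. The view then evaluates one pre-calculated Boolean, and searching for references on the underlying condition line items immediately shows where they are used.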
Hope it helps,
Alex
-
My models often deal with complex, non-standard calculations. I do my best to standardize as much as possible, but the reality of our business model is that we need to customize to be competitive. These customizations seem to grow continually as we grow our business. I find myself having to plot out more and more complicated "IF" statements over time, and as I think through the logic, I often just sketch it out to make it work and later go back and optimize the logic for performance. I have recently started using AI as a time saver to optimize my logic. Once I have a complex statement that is actually working, I copy and paste my logic into Copilot and ask it to simplify. So quick and easy! While it does not always get the Anaplan syntax right, it lets me see the logic in an easier-to-check format, which is often very helpful. As always with AI, you need to be knowledgeable enough about the topic (in this case, Anaplan model building) to detect a bogus answer. Hope this idea is helpful!
-
Best Practices for Handling Unique Identifiers and Manual Intervention in Anaplan
Introduction: When dealing with situations where unique identifiers are absent and manual intervention is required before importing data into Anaplan, it is crucial to follow best practices to ensure smooth and efficient processes. In this article, we will explore the recommended approaches and potential challenges associated with different methods.
Method 1: Adding a Consecutive Number Column. One simple approach is to add a column containing a series of consecutive numbers down to the last row. This provides a basic identifier for the data. However, it is important to note that this method may not be suitable for scenarios where the process needs to be automated with no manual intervention.
Method 2: Concatenating Columns for Unique Identifiers. Another option is to concatenate multiple columns to create a unique identifier. While this method can be effective, it is essential to consider the character limit: if the resulting identifier exceeds 60 characters, it cannot be accommodated in any list, including numbered lists, as both the code and the name of a list item have a maximum length of 60 characters.
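As a small, hypothetical illustration of Method 2 (the column names here are my own, not from the article): a transaction identifier built as
Region Code & "_" & Product Code & "_" & Invoice Number
might produce "EMEA_P1234_INV000789" (20 characters), safely under the 60-character limit, whereas concatenating long descriptive names instead of short codes could easily push the result past it and make the record impossible to load as a list code or name.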
The Best Practice: Utilizing a Property Combination for Importing. To avoid the limitations of the scenarios above and eliminate the need for manual intervention, the best possible approach is to use a combination of properties for importing data into the list. This method allows a seamless import process without compromising on the unique identifier requirement.
Importing Other Parameters/Columns into a Module. One might wonder how to import additional parameters or columns into a module without a unique identifier. In this situation, it may be necessary to deviate from the best practice of importing parameters directly into list properties. Instead, a solution can be implemented by importing the metadata into the properties and then referencing it in the module for modeling or calculation purposes.
Conclusion: When faced with situations lacking unique identifiers and requiring manual intervention before importing data into Anaplan, following best practices is crucial. By utilizing a combination of properties for importing and considering the limitations of character counts, organizations can ensure efficient and accurate data management within the Anaplan platform.
Hope this helps !
Arnab
-
Tips for tagging IDs to Processes, Imports, and Files in the model
Introduction -
Everyone, at some point in their model building / maintenance journey, has come across scenarios where they need to debug a failed integration or create new action / process definitions in the model. The IDs associated with these actions and file definitions are structural in nature and form the foundation for creating API-based triggers from any external system.
I have personally faced situations where not keeping or providing the correct file ID or process ID has led to broken integrations and additional issues. In this article I would like to share my approach on how a one-time activity can help us keep track of these important parameters.
Approach -
The first thing to understand is that Anaplan has a URL that can return this information on the fly, provided the workspace ID and model ID are correct.
Base URL (replace {wsid} and {mdid} with your workspace ID and model ID):
https://us1a.app.anaplan.com/2/0/workspaces/{wsid}/models/{mdid}/
Extensions:
/processes {lists all the process names and IDs}
/imports {lists all the import names and IDs}
/actions {lists all the delete action names and IDs}
/exports {lists all the export names and IDs}
/files {lists all the file names and IDs}
Here is a sample of how the response looks
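While the exact output depends on the endpoint, the /processes response is JSON of roughly this shape (the IDs and names below are purely illustrative, not from my model):
{ "processes": [ { "id": "118000000001", "name": "Load Transactions from Data Hub" }, { "id": "118000000002", "name": "Update Product Hierarchy" } ] }
Each entry gives the internal ID alongside the display name, which is exactly what you need for API calls.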
Similar information for imports, deletes, files, etc. can be obtained by just changing the extension.
Note - Try disabling your single sign-on in case the URL doesn't provide a response.
Tip -
We can add these IDs to the Notes section of the processes and actions, which makes them part of the structural information so they get copied over to the higher (deployed) environments as well.
[These IDs are static across environments, i.e. the ID of a given process remains the same in the Dev and the Prod instance.]
Here is an example of how I fetched the details from the URL mentioned above and updated the notes.
Conclusion-
Debugging external integration issues is generally difficult and quite stressful if we are talking about production instances with high impact and tight SLAs.
If the commonly shared / referenced key details like the process, import, and file IDs are kept in the model {especially Data Hubs}, it becomes easier to share and troubleshoot API calls with the integration team, and it saves the time and effort of repeatedly accessing that URL to obtain the IDs.
Happy Building!
Abhishek Roy
-
Visualize the build
Use a visualization tool to help break the build/logic down into pieces. Then walk it through with your team/business partners before you begin, to see if anything was left out or any requirements were forgotten, and refine as necessary. This will help you sequence the build activities better, and you will build faster. It is also great documentation and can be referenced again and again!
-
I'm all about keyboard shortcuts and hotkeys. Our business demands speed, so anything that helps me navigate faster and keeps my hands on the keyboard is number one on my list of to-learns. To that end, these are my top used shortcuts (apologies for the Windows bias!):
- Ctrl + Shift + F → Model search: Open the model search and immediately start typing what you're searching for
- Ctrl + Shift + S → Search: Similar to above, open the object search and immediately start typing whatever you're searching for
- Ctrl + Shift + Spacebar → Toggle Blueprint view: I've found this can be a bit intermittent depending on where your browser has the active cursor, but on the whole it's faster for me than clicking the icon (especially if, like me, you occasionally have a memory lapse and click Pivot by accident)
- (UX) Alt + R → Refresh page: I've also found this can be intermittent, but again, on the whole, it's faster for me than trying to click the little refresh button every time
- Changing line item format → Double-click the line item's format cell to open the line item format window. You can also click the cell and then start typing to open the format window. The bonus with the second method: since the browser's active cursor defaults to the Type dropdown of the format window, you can hit the first letter of your desired line item format to toggle through the format options. For example, click the format cell once, hit "L" on your keyboard to open the format window, then hit "L" again to select List (you can then Tab to the list field and type out your list name). Similarly, hit "T", then "T" twice to select Text. You still need your mouse to hit OK, but think of how much you don't need to drag your mouse all around the screen!
- [BONUS] Typing directly in the Formula cell: Admittedly, this isn't so much a "shortcut" (especially with Anaplan's improvements to predictive formula editing) but if you're confident enough about how your formula needs to be written, and have a bit of a reckless streak about you, you can type directly into the formula cell of a given line item in the Blueprint view instead of using the Formula bar. The main downside is if you type your formula incorrectly, you get an error message and lose what you've typed in, so you have to re-do your formula. Risky, error-prone, and ultimately not a whole lot faster, but you feel pretty cool when it works. 😎
If there are any that I'm missing, please let me know!
-
The first 10 very random tips that jump to mind:
- Use the Notes as much as possible, not only to explain things but also to add a keyword/flag for anything that is worth revisiting
- Housekeeping and a model maintenance/optimisation backlog. If there is no time to formally review and optimise your model(s), encourage the business to formally allocate time for this. Creating and prioritising a backlog that keeps track of the realised benefits can help obtain buy-in
- Self-document as much as possible in the UX. Avoid, as much as possible, ad-hoc documentation that hardly anyone reads and that requires additional maintenance overhead. Instead, self-document everything in the UX (admin pages available to Full Access/Workspace Admins only are OK)
- Post-deployment/sync step strategy. Did you ever find yourself in the situation where you spent a considerable amount of time and effort to develop and test something, involving end users in UAT and gaining great feedback, only to miss that one last silly step when deploying to Production, perhaps impacting end users? Force yourself/your team to adopt a simple but effective implementation strategy and look out for these common risk areas:
- DCA setup
- Lookups/System Module setup
- Filters setup (central and/or user based, if applicable)
- Subset setup
- Manual data cutover (only use for small datasets)
- Semi-automatic data cutover (to be used for the initial data setup of large datasets/restructuring. Usually done through temporary metadata and sequential synchronisation of individual revision tags)
- Validation reports. An additional mitigation to the risk above, and something that can be used to perform regression testing. There are several posts in the Community on this topic
- Consider creating semi-automated validation reports that reconcile as much data as possible against a previous snapshot. This will help you flag where a change has caused an unexpected result.
- Consider doing the above with the Excel Add-in; this often makes it easy to validate against the source system(s)
- Create key governance documentation. Start from the large amount of info you can find in the community and create your own
- Naming convention
- Change management
- Incident management
- etc
- Stay on top of new releases and Roadmap as much as possible
- Don't be afraid to ask and compare ideas
- Use the community
- Use your peers
- Create reports and logic for both high level and low level error checks and rejections of data/metadata
- Use the Idea Exchange! Upvote ideas, comment, and/or log your own
-
I've developed a simple yet effective solution that leverages the Anaplan API and Azure Logic Apps to automatically write a timestamp to Anaplan every X minutes. This can be used to track data uploads or any other process you’d like to monitor. It's a useful feature that enhances visibility for end users by ensuring they can easily track the timeliness of their data.
Additionally, Azure Logic Apps offers a straightforward way to extend Anaplan’s native capabilities. By utilizing the API without needing to write any code, you can build powerful custom solutions with ease.
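For anyone curious about the shape of such a flow, here is a generic sketch (my own outline, not necessarily the exact setup described above): a Recurrence trigger fires every X minutes, an HTTP action authenticates against the Anaplan API, another HTTP action uploads a small file containing the current timestamp, and a final HTTP action runs the import process that writes it into a "last refreshed" module surfaced on the UX page. All of this is configured through the Logic Apps designer, so no custom code is required.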
-
This is a great challenge!
Bringing all the modeling tips from experts together on one page is really helpful. This could be a great resource for training junior model builders.
I’m a big believer in training and documentation. I’ve learned that the best way to build models that are easy to understand and perform well in the long run is to:
- Document model building standards,
- Train new Anaplanners on best practices,
- Show them how to use the community to find answers,
- Encourage them to ask their own questions.
Seyma Tas
-
Introducing the Dynamic Error Message
Today, I'm excited to share an innovative solution for a common challenge: creating a user-specific input form for position creation, with effective error messages and conditional formatting.
The Challenge
When users create a position, they must fill in several mandatory fields. The conventional approach to validation often relies on complex nested IF-ELSE statements that return only one error message at a time.
Why This Method Falls Short
From the user's standpoint, this can be incredibly frustrating. With only one error displayed, they may feel lost, leading to a disjointed and inefficient experience. Additionally, creating an exhaustive series of conditional formatting conditions to highlight every possible error scenario is not only tedious but also error-prone.
The Solution: Dynamic Error Message
Allow me to introduce the Dynamic Error Message system—an adaptable, user-friendly approach to error messaging. This system makes it easy to add new error messages while ensuring that the visual formatting remains consistent and clear.
How It Works
As soon as users begin filling out the form, they receive a comprehensive error message that highlights all the mandatory fields they still need to fill in in order to proceed.
Once all mandatory fields are filled in, the relevant error messages are displayed automatically, without the use of complex nested IF-ELSE statements!
Moreover, once users have filled in all required fields, they'll see all error messages presented in a clear, multi-line format.
Technical Insights
I’ve created two sets of line items in the input module: those labeled "MF" for mandatory fields and "EM" for error messages. Initially, these will display a value of 0 if any mandatory fields remain unfilled, and switch to 1 when everything is complete.
Next, I organized these into line item subsets and built a config module.
By identifying line items through their prefixes, I can distinguish between mandatory fields and guardrails. Users have the option to update the error names when necessary; if not, the system automatically replaces "MF" and "EM" with blanks and uses the result as the error message name. There's also flexibility to relax guardrails when they are not needed.
Core Calculation Module
In the calculation module, I use a cross-section of the lists from the input module and the line item subsets. The Count line item uses the COLLECT() function to return either 0 or 1 based on the corresponding formulas. The Mandatory Field line item returns the names of the columns that are not yet filled in, whereas the Guardrail line item returns the error message whenever it applies.
Finally, I used the TEXTLIST function to consolidate the error messages and mandatory fields within the input module, compiling them into a final error message line item that gathers all error messages.
To improve clarity, I replace the commas in the consolidated error message with newline characters, giving end users a more organized and readable layout. Here's the data used for the break line item (a text line item holding only a line-break character):
"
"
To create the formula for my conditional formatting, I used the MF and EM line items. It's important to highlight that we don't need to rewrite the logic for conditional formatting; we're leveraging the existing fields that generate the error messages to also drive the highlights.
This connection is key, as it ensures the system remains cohesive and easy to maintain. Plus, the error messages are configurable, allowing you to easily update the error names or disable them if they're no longer necessary, which also makes it easy to add a new error message.
This is how we can revolutionize the user experience by creating an efficient, engaging, configurable, easy-to-maintain, and dynamic error messaging system for position creation!
Hope you all like it 🙏
-
Below are a few of the practices I follow:
- Proper naming conventions for modules.
- A big no to saved views.
- Maintaining notes for every line item; this will always be helpful in the future.
- Segregating actions with dummy actions.
- Maintaining documentation for every new build or change.
- A sanity check every week.
- Separate system modules for every list, as required.
Thanks for the opportunity.