This article is part of a series on Polaris best practices. Visit the Community for more content, or Anapedia for detailed technical guidance.
As Anaplan solution architects and model builders begin to develop in Polaris, it's important to consider how you approach the development workflow. Polaris models can scale to quintillions of addressable cells, so it's crucial to follow an iterative development approach that optimizes both performance and model builder productivity. The good news: the methods outlined here come from tried-and-true software development practices that are used widely in other technologies.
Iterative Development defined
Iterative development in Polaris introduces a key concept: separating syntax validation from performance validation. Said another way: build and unit test in one environment, evaluate scalability and performance in another, and iterate between the two throughout. Following this development approach in Anaplan helps you build models that are both correct and performant while maintaining model builder productivity.
Iterative Development in action
Planning your development timeline using the guidance below lets model builders validate structure and syntax quickly, maintaining productivity by avoiding long toaster times. Validation and performance optimization then happen in a separate model that grows progressively larger.
This approach utilizes our best practice recommendation of a DEV/TEST and eventually PROD environment, using Anaplan’s built-in Application Lifecycle Management (ALM) to promote code changes and establish deployment discipline from the start.
Iterative Development steps
Step 1: Quick feedback, fast development
The first step involves the rapid build of the model with small lists and a very small dataset to build the structure, formulas, and logic. This allows for structure validation and quick syntax error checking without long toaster times, as well as unit testing with a small sample dataset.
What to focus on:
- Build the model structure, focusing on your primary planning lists, modules, and actions. Populate only enough list items and data to allow for syntax and logic validation.
- Blueprint insights to focus on:
- Calculation Effort % and Populated Cell Count: These metrics give an early preview of how the model will behave at scale.
- Calculation Complexity: Review any formulas flagged as “All Cells,” which can lead to overly dense modules and performance degradation.
- Begin building the UX to allow for performance review of pages, grids, filters, and selectors.
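To make the “All Cells” point concrete, here is an illustrative sketch in Anaplan formula syntax (the line items `Units`, `Price`, `Revenue`, and `Adjusted Revenue` are hypothetical). In Polaris, a formula whose result is non-zero even where its inputs are empty forces every cell in the module to be populated:

```
' Sparse-friendly: the result is zero wherever Units or Price is zero,
' so only cells backed by real data are populated.
Revenue = Units * Price

' Dense: the ELSE branch returns a non-zero constant everywhere,
' so Blueprint flags this as "All Cells" and every cell is populated.
Adjusted Revenue = IF Units > 0 THEN Units * Price ELSE 1
```

Watching for patterns like the second one in Step 1, while the model is still tiny, is far cheaper than discovering them after full data is loaded.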
When to move to the next step: When you have functionality to test! Remember, this is a development methodology. The testing being done here is to ensure your development results are correct and performant. It is not best practice to build your entire model before moving to step two!
Step 2: Create a TEST model with fully populated lists
This is where ALM first comes into play. Create a TEST model, put it into deployed mode, and build the lists out in full. This is where you start to iterate: identify issues in TEST, fix them in DEV, promote back to TEST using ALM, and repeat.
What to focus on:
- Review formulas across all line items and see what the blueprint columns reveal. Focus on:
- Calculation Complexity: Identify formulas with high One-to-Many results, which can translate into a larger populated space and higher memory use once full data is loaded.
- Calculation Effort: Keep an eye on this, and remember it is only a guide to full-model performance. A formula with a high value may perform acceptably now, but what happens with more data?
- Review the UX: How is the end user experience with full lists?
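As an illustrative sketch of a One-to-Many pattern (all module and line item names here are hypothetical): a LOOKUP from a small dimension into a much larger one spreads each source value across many target cells, which can inflate the populated space:

```
' In a module dimensioned by SKU (a large list), each Product Family
' price maps to many SKUs: a One-to-Many result. Every SKU cell gets
' a value, even SKUs that carry no sales data.
Unit Price = 'Family Prices'.Price[LOOKUP: 'SKU Attributes'.Family]
```

Patterns like this are not wrong in themselves, but they are worth flagging now, because their memory cost only becomes visible once full data arrives in Steps 3 and 4.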
When to move to the next step: When you are confident the results being produced are correct, and you have completed the optimization suggestions above.
Step 3: Start introducing data
This step gives you a preview of performance and size, while minimizing the risk of potential model issues. Load one full period of data (or the equivalent) to the TEST model and review the insights provided. Move through the iterative steps again: identify issues in TEST, fix them in DEV, promote to TEST using ALM, and repeat.
What to focus on:
- Monitor model open time: With more data, this will increase.
- Performance of data loads: The time to load this volume of data will give insight into the load time for the full dataset. What opportunities are there for optimization?
- Blueprint insights to review:
- Calculation Complexity: Revisit high complexity items from Step 2. What do they reveal now?
- Cell Count: This now reflects the total potentially addressable cells.
- Populated Cell Count: This is the populated space based on the results of the formulas with one period of data. What will this grow to with full data?
- Memory Used: Start anticipating the actual size of the model based on the build.
- Review and refine the UX: How do grids and filters perform?
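A quick back-of-envelope check helps answer the growth question. Assuming density is roughly even across periods (the figures below are purely hypothetical):

```
Populated Cell Count with 1 month loaded   ≈ 40 million
Planning horizon                           = 24 months
Projected populated cells at full data     ≈ 40 million × 24 ≈ 960 million
```

If the projection looks uncomfortable, this is the moment to revisit the dense formulas and One-to-Many mappings identified in the earlier steps, before the full load in Step 4.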
When to move to the next step: When you have completed the optimization suggestions above and are comfortable with the projected performance and size after a full data load.
Step 4: Reality check with a full-scale test model
This step enables true performance testing and fine-tuning of actions and calculations by loading full data. Iterating between the small DEV model and the now fully sized TEST model becomes crucial: making logic or structural updates directly in TEST risks model rollbacks from syntax or formula errors, which could severely hinder productivity. This is why TEST is in deployed mode and changes are promoted through ALM.
What to focus on:
- Assess model open time, performance, and size with full data loads.
- Review the performance of time-based calculations specifically: think TIMESUM, LAG, and OFFSET.
- Re-review Polaris columns in Blueprint: what insights do they provide now?
- Identify high priority opportunities for rework.
- Review and refine the UX:
- Are pages and cards still performant?
- Where does it make sense to suppress zeros to reduce noise?
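As one example of the kind of time-based rework to look for (the line items `Revenue` and `Rolling 3M` are hypothetical): chained OFFSETs can often be replaced with a single time-aware function, which is easier to read and lets the engine evaluate the window once:

```
' A rolling three-month sum built from repeated OFFSETs:
Rolling 3M = Revenue + OFFSET(Revenue, -1, 0) + OFFSET(Revenue, -2, 0)

' The same result with one MOVINGSUM over the current and two prior periods:
Rolling 3M = MOVINGSUM(Revenue, -2, 0)
```

With full data loaded, comparing variants like these in TEST shows their real cost; fix the winner in DEV and promote it back through ALM.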
Keep in mind that these steps are not intended to be restrictive or exhaustive; they are a guide. As the title suggests, the process is iterative. As you move from the first step through the subsequent ones, make sure to:
- Identify points of improvement
- Apply improvements in DEV
- Push updates to TEST
- Review insights and repeat
Conclusion
The most important thing to remember is that the addressable universe in Polaris is massive, so following an approach that separates syntax validation from performance testing is critical. This measured, deliberate approach can greatly increase model builder productivity and ensures performance testing and optimization begin early in the development lifecycle.
Authors:
Anaplan’s Theresa Reid (@TheresaR), Architecture and Performance Director; Mike Henderson, Principal Solution Architect (@hendersonmj).
Special thanks to Stephen Rituper (@Stephen).