How to review models and applications to drive improvement


Author: Andrew Martin (@andrew_martin_1), Operational Excellence Director at Anaplan.

Overview

Many stakeholders ask how to review models and applications to drive improvement, each for different reasons. Examples include:

  • A CoE leader needing comfort that best practices have been followed in the build
  • A Solution Architect seeking affirmation that the architecture is the best design
  • A model builder tasked with improving performance or calculation logic
  • A page builder looking to improve the UX design in response to end user feedback
  • A project manager concerned about whether the user stories have been built out correctly and the business requirements have been fully met
  • A product owner keen to ensure that the business accepts and adopts the new deployment so that the business case benefits are realized
  • A business process owner challenging the effectiveness of the deployment in facilitating the process

The question can be asked at any time in the life cycle of an application, such as midway through a development project, in the weeks post go-live as a use case beds in, or when the ownership of a model changes. In other words, it can be applied to both new and mature models.

This article describes how our customers can partner with us to review their Anaplan use cases and drive best practices and continuous improvement.

1: Customer Discussion

Any review should begin with a general discussion about the application. For live use cases, your Customer Success Business Partner (CSBP) may already be leading these discussions as part of success review meetings. During development cycles, your project manager, CoE lead and professional services partners might seek an independent review. Other key stakeholders may be involved, such as those listed in the overview above.

This general discussion will often lead to key areas of focus for the review. For example:

  • Feedback on the UX design of specific pages.
  • Feedback on the navigation and usability for the app as a whole.
  • Performance – a huge area that might cover specific user interactions, model open times, model refreshes, data loads and exports.
  • The outputs of the process, whether the reporting itself or exported data. Accuracy is key here: are the numbers right?
  • Does the design best serve the business process itself?

Once a frame of reference for the review has been established, with a clear set of objectives, work can begin.

Aspects of an Anaplan deployment can be split into those that can be compared to best practices in a data-driven way (objective), and those that require more manual observation, experience, and opinion (subjective). PLANS, our proven framework for success in model building and design, speaks to both, and the Planual, our bible of best practices, informs both aspects of model building.

2: Subjective Review

Depending on the focus, a subjective review could look at the following:

Model Organization

Good housekeeping and logical organization make understanding a model so much easier. Examples of things a reviewer will look for are:

  • Clear, consistent naming conventions for modules, lists, actions, and processes
  • Well-ordered objects, with modules grouped into meaningful functional areas
  • Informative notes against line items

But perhaps the most important evidence of a well-organized model is that Anaplan’s best practice methodologies have been adhered to. DISCO is a key one here, part of the “Logical” piece in PLANS (see above), where each module belongs to one of five groups: Data, Input, System, Calculation, or Output. A strong naming convention for modules indicates a concerted attempt to follow the DISCO paradigm.
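As an illustration only, the Python sketch below shows how a simple, data-driven check of module names against the DISCO groups might look. The prefix convention and the example module names are assumptions; substitute whatever convention your model actually uses.

    import re

    # Hypothetical DISCO prefix convention, e.g. "SYS01 Time Settings".
    # Adjust these prefixes to match the convention actually used in your model.
    DISCO_PREFIXES = {
        "DATA": "Data",
        "INP": "Input",
        "SYS": "System",
        "CALC": "Calculation",
        "OUT": "Output",
    }

    def classify_module(name):
        """Return the DISCO group implied by a module's prefix, or 'Unclassified'."""
        match = re.match(r"([A-Z]+)\d*\b", name)
        if match and match.group(1) in DISCO_PREFIXES:
            return DISCO_PREFIXES[match.group(1)]
        return "Unclassified"

    # Example module names (invented for illustration).
    modules = ["DATA01 Actuals Staging", "SYS01 Time Settings",
               "CALC02 Revenue Phasing", "Margin Bridge"]
    for module in modules:
        print(f"{module:30} -> {classify_module(module)}")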

Architecture

As a flexible platform, Anaplan allows different architectural design decisions to be made for any use case. Architecture is a broad topic and can cover many considerations.

For example:

Deciding which development and test environments are required and how Application Lifecycle Management (ALM) will be used.

Designing a data hub, managing the associated structural and transactional data loads and the update processes for dependent models.

Deciding whether to split production models. This is often the approach for very large global implementations. The split could be by geography (Country or Region), Entity, Product Family, or another appropriate basis, with all of the split models serviced from one Dev model in a “Hub and Spoke” architecture.

Separating planning and reporting models. Retaining only the reporting required for planning, and building a separate reporting model that consolidates and formats the data for wider consumption, can remove much of the live, downstream calculation burden in high-participation planning models.

Considering a Hypermodel, where the much larger workspace may negate the need to split models.

For large, sparse use cases, deciding whether to use Polaris, Anaplan’s sparse calculation engine, on its own or in combination with Classic models.

Of course, there are commercial implications for some of these decisions.

Business Process

Sometimes a reviewer will challenge the way the business process is being facilitated by the models. There are situations where performance can be degraded by sub-optimal decisions in this area.

For example:

Giving end users the ability to add items to lists may be fine in smaller models with few users. However, for applications with a high number of users, all of whom will be adding items at the same time of the month, performance could be badly hit, since adding a list item often triggers a full recalculation of the model. Providing placeholders (new, blank items to which users can assign attributes and against which they can plan), added all at once in an overnight admin process, would prevent this without compromising the business process.

Having many users simultaneously exporting data on demand can block the model and frustrate other users. Scheduling exports in bulk, for use in downstream applications such as business intelligence tools, eliminates this.

Updating models with large structural and transactional datasets should be done at a time when the models are least likely to be in use.
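As an illustrative sketch, an overnight data load or bulk export might be triggered on a schedule through the Anaplan Integration API, along the lines below. The workspace, model, and process IDs and the authentication token are placeholders, and the endpoint path and headers should be verified against the current Anaplan API documentation rather than taken from this example.

    import requests

    BASE_URL = "https://api.anaplan.com/2/0"   # verify against current API docs
    WORKSPACE_ID = "your-workspace-id"         # placeholder
    MODEL_ID = "your-model-id"                 # placeholder
    PROCESS_ID = "your-process-id"             # placeholder, e.g. a nightly load process
    TOKEN = "your-auth-token"                  # obtained separately via Anaplan authentication

    def run_overnight_process():
        """Kick off the named process; schedule this (e.g. via cron) for off-peak hours."""
        url = (f"{BASE_URL}/workspaces/{WORKSPACE_ID}/models/{MODEL_ID}"
               f"/processes/{PROCESS_ID}/tasks")
        response = requests.post(
            url,
            headers={"Authorization": f"AnaplanAuthToken {TOKEN}",
                     "Content-Type": "application/json"},
            json={"localeName": "en_US"},
        )
        response.raise_for_status()
        print("Task started:", response.json())

    if __name__ == "__main__":
        run_overnight_process()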

User Experience (UX)

Even the best models will not be adopted by users who are subjected to a poor experience. Not only does an application need to look good, but it needs to be easy to navigate, intuitive to interact with, and fully facilitate the business process that each user is engaged in.

Designing an effective UX requires skills quite distinct from those needed for modeling, so the role of Page Builder is separate from that of Model Builder and is often performed by different people. The process requires input from end users of all personas, with pages designed specifically for their needs. A strong understanding of the business process as a whole, and of the user stories that support it, will help enormously here.

Anaplan has developed generic guidance in the form of our U.S.E.R. methodology, which offers best practice advice on how to frame the goals for the UX (Understand), wireframe the solution (Sketch), build out the pages (Execute), and iterate based on feedback (Repeat).

3: Objective Review

MAPS

MAPS (Model Analysis and Pro-Active Solutions) is a tool developed by Anaplan to appraise the quality of a model against the Planual rules that can be objectively identified as followed or breached. It periodically takes sanitized metadata from most live models across the Anaplan ecosystem, along with performance statistics from model opening. It analyzes line items, their dimensionality and formulae, lists, and modules, and detects infractions of Planual rules to calculate a weighted score at both object and model level, called Best Practice Infractions (BPI).

MAPS is itself an Anaplan model. Your CS Business Partner can pull a Management Report that details the findings. BPI levels range from low (few or no infractions) to high (many areas of concern) and can be a useful guide as to where to start when tuning a poorly performing model.
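MAPS’s actual rules and weights are internal to Anaplan, but the idea of a weighted infraction score rolling up from object level to model level can be illustrated with a purely hypothetical Python sketch such as the one below; the rule names, weights, and object names are invented for illustration.

    # Hypothetical rule weights -- the real MAPS rules and weights are internal to Anaplan.
    HYPOTHETICAL_WEIGHTS = {
        "formula-complexity": 3.0,
        "unused-dimension": 2.0,
        "missing-summary-setting": 1.0,
    }

    # Infractions detected per object (invented examples).
    object_infractions = {
        "CALC02 Revenue Phasing.Gross Margin": ["formula-complexity", "unused-dimension"],
        "OUT01 P&L Report.Net Revenue": ["missing-summary-setting"],
    }

    def object_score(rules):
        """Sum the weights of the rules an object breaks."""
        return sum(HYPOTHETICAL_WEIGHTS.get(rule, 1.0) for rule in rules)

    object_scores = {obj: object_score(rules) for obj, rules in object_infractions.items()}
    model_score = sum(object_scores.values())

    for obj, score in object_scores.items():
        print(f"{obj}: {score}")
    print("Model-level score:", model_score)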

MAPS should be interpreted with both caution and context before any remedial actions are taken. Work with your CS Business Partner or Professional Services team to utilize the reports most effectively. There are sometimes good reasons why a rule is deliberately broken. Similarly, a poorly constructed formula may not actually be materially degrading performance. It could also be that investing in fixing a badly built or poorly performing model is low on the priority list, for example if the model is only used by a handful of people once a year and it works.

In many circumstances, however, MAPS can be a powerful tool to help troubleshoot a model by guiding you to the right areas for attention. It can also be used before go-live to indicate the extent to which best practices have been adhered to, and perhaps pre-empt problems before they arise.

Model Optimization

Anaplan provides a service called Model Optimization, the focus of which is model performance. It is delivered by a highly skilled team that uses powerful analytical tools, not available elsewhere, to identify problem areas and make detailed recommendations for improvement.

The team takes a sanitized copy of your model, against which they run specialist software that captures detailed performance characteristics for every line item and calculation. At its simplest, this can be done to analyze what happens as the model opens. However, with guidance from you on specific user actions that are deemed slow, they can analyze the calculation chains triggered and identify any bottlenecks. From these they can assess what changes might improve things, and test the impact of making those changes.

Concurrency Testing

The optimization process above analyzes the “baseline” (i.e. single-user) performance of a model while analytical software writes detailed logs that enable us to understand what the calculation engine is doing. As such, it enables fine-tuning and rework to help achieve the best performance. It does not, however, simulate the real-world performance of a live application. For that, Anaplan offers a concurrency testing service.

To run this, you and your partners produce a number of realistic user journeys: scripts based on how the different personas in your organization will use the model on a regular basis. Using a sanitized copy of your model, our team uses specialist tools to establish baseline results for each user journey, then programmatically loads the system with more and more concurrent users going through those journeys, steadily increasing to the realistic maximum number of users at peak expected usage.

Detailed reports then indicate any inflection points, where the rate of concurrency causes performance to degrade to unacceptable levels. If all parties are satisfied that the assumptions for the test were realistic and accurate, a common way to proceed is to loop back through the model optimization process, focusing on the line items and calculations that the concurrency test has highlighted as needing attention, followed by a retest after any changes.
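Anaplan’s concurrency testing is performed by their team with specialist tooling, but the ramp-up concept itself can be illustrated with a generic Python sketch like the one below, in which run_user_journey is a hypothetical stand-in for one scripted persona and the sleep simply simulates work.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def run_user_journey(user_id):
        """Hypothetical stand-in for one scripted user journey; returns elapsed seconds."""
        start = time.perf_counter()
        time.sleep(0.1)  # stand-in for page loads, edits, and recalculations
        return time.perf_counter() - start

    def ramp_test(max_users, step=5):
        """Increase concurrent users in steps and report the average journey time."""
        for users in range(step, max_users + 1, step):
            with ThreadPoolExecutor(max_workers=users) as pool:
                timings = list(pool.map(run_user_journey, range(users)))
            average = sum(timings) / len(timings)
            print(f"{users:4d} concurrent users -> average journey {average:.2f}s")

    if __name__ == "__main__":
        ramp_test(max_users=50)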