Share your model design best practices – November 2024 Community Challenge

becky.leung
edited December 2 in Blog

We’re excited to announce our second Best Practices Challenge of the year! This Challenge series is open for Community members to share their expertise and tips on a specific topic. Join us to spotlight your knowledge and explore fresh ideas from other members in Community!

For our second Best Practices challenge, we’re focusing on model design. How do you plan successful model layouts, what steps do you go through, what do's and don'ts do you consider? How do you ensure minimal redesign throughout the project, and plan a model with the flexibility and scalability to expand in the future? This Challenge is a great opportunity for you to share your insights with the community and learn from other Anaplan pros!

How to participate

  • The Best Practices Challenge around model design kicks off today, November 7, and concludes on November 27.
  • Share your best practices related to model design in Anaplan on this post. Whether it’s a detailed write-up, a short tip, or even a video, we welcome all formats!
  • Explore model design tips shared by your fellow Community members.

What’s in it for you?

  • Recognition: Showcase your model design expertise and stand out as a Community thought leader! 
  • Learn: Check out contributions from newer and seasoned professionals in the Anaplan ecosystem.
  • Earn a Badge: As a thank you for your participation, everyone who shares their best practices will receive an exclusive Community Challenge badge. It’s a fun way to show off your contribution!
  • Earn a shout-out in our upcoming event: on December 4, we’ll be hosting a Challenge recap event discussing model design best practices. Participants' responses will be highlighted at this event.

Participate Today

This is a great opportunity to exchange insights, tips, and innovative ideas with fellow Anaplan professionals. Join the model design Best Practices Challenge to contribute your expertise and learn from others in the Community!

Comments

  • I agree that wireframing UX pages is key to the success of an Anaplan implementation. I've also learned that pairing UX wireframes with drawn-out user journeys helps a lot in designing for good usability. In my user journey maps, I connect the different UX wireframes along the user's intended journey through the model. From these maps it's easy to spot whether the wireframes are missing something or whether the user journey is at risk of becoming confusing.

  • The foundation on which Anaplan models are built is lists and their associated hierarchies. If you get the foundation wrong, the cost of remediation increases exponentially the later a problem is discovered in the project life cycle.

    Understanding your customer's master and transactional data, checking data quality and validating data assumptions as early as possible is an important exercise.

    Basic master data checks for your list hierarchy design, such as uniqueness of the codes being used, whether every list item has a parent, and whether any item has multiple parents, establish a framework for determining if your list design is on the right track and whether the quality of the customer's data is ready for Anaplanised planning. I've found on several projects that part of the customer's Anaplan journey is a need to remediate their data in order to truly reap the benefits of using Anaplan. Unfortunately, customers remediating their data while the model is being built happens more often than desired, necessitating a lot of rework. Generally, customers tend to tell you their master data is clean at the start of an engagement, an assumption which later fails during testing. It is for this reason that we now have a data profiling tool that does basic data quality checks at the start of an engagement.
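    These basic checks can be prototyped outside Anaplan before any lists are built. A minimal sketch in Python; the record layout, codes, and field names below are hypothetical examples, not from any real engagement or tool:

```python
# Illustrative sketch (not Anaplan code): basic master-data checks to run
# before designing list hierarchies. All records here are invented examples.
records = [
    {"code": "C001", "name": "Widgets", "parent": "P1"},
    {"code": "C002", "name": "Gadgets", "parent": "P1"},
    {"code": "C002", "name": "Gizmos",  "parent": "P2"},  # duplicate code
    {"code": "C003", "name": "Orphans", "parent": None},  # missing parent
]

def profile_master_data(records):
    """Return the issues that would undermine an Anaplan list hierarchy."""
    seen, duplicates, orphans = set(), set(), []
    parents_by_code = {}
    for r in records:
        code = r["code"]
        if code in seen:
            duplicates.add(code)          # codes must be unique within a list
        seen.add(code)
        if not r["parent"]:
            orphans.append(code)          # every item needs a parent
        parents_by_code.setdefault(code, set()).add(r["parent"])
    # an item appearing under two different parents is also a red flag
    multi_parent = {c for c, p in parents_by_code.items() if len(p) > 1}
    return {"duplicate_codes": sorted(duplicates),
            "missing_parent": orphans,
            "multiple_parents": sorted(multi_parent)}

issues = profile_master_data(records)
```

    Running checks like these on day one surfaces the data remediation conversation before the model build starts, rather than during testing.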

    On profiling transactional data, I've recently been loading transactional data into a Polaris environment to measure the level of sparsity and determine which types of data sets are best suited to Polaris vs. Classic. It's good that the most recent licensing model includes Polaris in its Enterprise license, as it was sometimes difficult to get permission from customers to load their transactional data into our Polaris environment for sparsity profiling.
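    As a rough illustration of this kind of sparsity profiling, here is a Python sketch: sparsity is estimated as the share of the full dimensional space that holds no data. The dimension names and values are made up:

```python
# Illustrative sketch (not Anaplan code): estimate sparsity of a transactional
# data set to judge Polaris vs. Classic suitability. Values are invented.
transactions = [
    ("SKU-1", "Store-A", "2024-01"), ("SKU-1", "Store-B", "2024-01"),
    ("SKU-2", "Store-A", "2024-02"), ("SKU-3", "Store-C", "2024-03"),
]

def sparsity(rows):
    """Share of the full dimensional space that holds no data."""
    dims = list(zip(*rows))                  # one tuple per dimension
    full_space = 1
    for dim in dims:
        full_space *= len(set(dim))          # distinct members per dimension
    populated = len(set(rows))               # distinct populated cells
    return 1 - populated / full_space

# 4 populated cells out of 3 SKUs x 3 stores x 3 months = 27 combinations
print(f"sparsity: {sparsity(transactions):.0%}")
```

    Very high sparsity figures are the signal that a natively sparse engine like Polaris may be the better fit for that data set.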

    Besides list design and verifying platform choice, another key design consideration is performance. Supply and demand planning applications tend to have a higher risk of performance degradation due to the size of the data sets and the complexity of the business processes they support. One thing to look at is ensuring that calculations are only "connected" where they need to be. For example, in a trade promotions application you have the following processes:

    • Promotion creation
    • Promotion review and approval
    • Promotion execution and tracking
    • Promotion financial accruals
    • Promotion claims
    • Promotion closure

    In a fully connected model, at the time you insert a new promotion list item, the model would be calculating what its approval status is, how much in sales it has generated, whether and how much in accruals need to be posted, and whether it is ready to be closed. All these calculations, and you've only created a new promotion list item. This slows down performance (as I've personally experienced): data entry for a new promotion would take several seconds just to update the start and end dates. Therefore, connect calculations only where required. When creating a new promotion, you don't yet know whether it will be approved, so there's no value in calculating what its financial accruals and claims should be. It therefore makes sense not to connect the promotion creation process to the rest of the processes until the promotion has been approved. There are several design patterns that can achieve this. One way is to create a "temporary" promotions list and, after the promotion has been approved, copy it across to the permanent or persisted promotions list, which is connected to the other processes.
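    The staging-list idea can be sketched in ordinary Python to show the principle. This is an illustration of the pattern, not Anaplan formula syntax; the promotion records and the accrual rate are invented:

```python
# Illustrative sketch (not Anaplan formulas): the "staging list" pattern.
# Downstream calculations only run over promotions copied from the temporary
# list to the persisted list, so creating a new promotion stays fast.
temporary_promotions = [
    {"id": "P1", "approved": True,  "sales": 1200.0},
    {"id": "P2", "approved": False, "sales": 0.0},   # still being created
]

def promote_approved(temporary, persisted):
    """Copy approved items across; only the persisted list is 'connected'."""
    for promo in temporary:
        if promo["approved"]:
            persisted.append(promo)
    return persisted

def calculate_accruals(persisted, rate=0.25):
    # the heavy downstream calculation touches only the persisted list
    return {p["id"]: p["sales"] * rate for p in persisted}

persisted_promotions = promote_approved(temporary_promotions, [])
accruals = calculate_accruals(persisted_promotions)  # {'P1': 300.0}
```

    The unapproved promotion P2 never reaches the accrual calculation, which mirrors keeping the creation process disconnected until approval.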

  • SIVAPRASADPERAM
    edited November 25

    Hi All,

    Below are a few suggestions from my side:

    1. Restricting a new user to one specific model when adding them to the workspace

    When adding a user to a model, avoid assigning them workspace admin rights right away.

    Instead, add them as a standard user with "full access" to the specific model. Once they are added, you can then update their role to a workspace admin if needed.

    This approach ensures that the user will only have access to the model where they have been granted "full access."

    2. Use flat lists to store metadata

    Use flat lists to store metadata (in system modules) and avoid having hierarchies in the data hub; hierarchies should only exist in the main planning models.

    3. Never delete and reload lists as a daily occurrence

    Use a unique key to update values rather than deleting and reloading. Also consider adding a TRUE (Boolean) field to the data source that lets you know which records have been imported.

    Clearing and reloading a list increases the structural changes within a model and increases the likelihood of a model save, which in turn increases the import time.
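    The upsert-by-key idea can be sketched in Python to show the contrast with delete-and-reload. The codes and field names here are hypothetical, and the "imported" flag mirrors the TRUE-field tip above:

```python
# Illustrative sketch (not Anaplan actions): update list values by a unique
# key instead of clearing and reloading. All data here is invented.
existing = {"A01": {"name": "North", "imported": False},
            "A02": {"name": "South", "imported": False}}

incoming = [("A02", "South-East"), ("A03", "West")]

def upsert(existing, incoming):
    """Update by key; no structural delete, so no forced model save."""
    for code, name in incoming:
        item = existing.setdefault(code, {})  # insert only genuinely new keys
        item["name"] = name
        item["imported"] = True   # flags which records this load touched
    return existing

result = upsert(existing, incoming)
```

    Untouched items (A01 here) keep their flag set to FALSE, so you can tell at a glance which records the latest load covered.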

    4. Avoid using emojis

    Emojis can cause issues with integrations. If you do use emojis, restrict them to pages that won't be used in an integration process.

    5: Time Range Naming

    Keep the naming short using the FYxx-FYyy format. This format allows the administrator to see the name of the Time Range in the module blueprint without referring to the Time Range itself.

    6: Workspace Separation

    Prepare development and production models in different workspaces to avoid user-specific data loss in the production model.

    7: Revision Tag naming

    Major.Minor.HistoryID, e.g. 2.1.123456, or Date.Version_HistoryID
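    A tag convention like this is easy to enforce mechanically. A small hypothetical Python check (it covers only the Major.Minor.HistoryID variant, and the function name is invented):

```python
import re

# Illustrative sketch: validate the Major.Minor.HistoryID revision tag
# convention suggested above, e.g. "2.1.123456".
TAG_PATTERN = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

def parse_revision_tag(tag):
    """Split a tag into (major, minor, history_id) or raise on bad format."""
    m = TAG_PATTERN.match(tag)
    if not m:
        raise ValueError(f"tag {tag!r} does not match Major.Minor.HistoryID")
    major, minor, history_id = (int(g) for g in m.groups())
    return major, minor, history_id

print(parse_revision_tag("2.1.123456"))  # (2, 1, 123456)
```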

    I hope this helps anyone who is starting to learn Anaplan.

  • Mixed module (multi-dimensional, combined hierarchy list) design:

    Multi-dimensional:
    Anaplan is designed to be simple to use, so we should always focus on keeping the modelling as simple as possible for flexibility and simplicity. We should aim to use native modelling design through multi-dimensional modules wherever possible.

    Combined hierarchy:
    For exceptional scenarios where data sparsity is huge, the combined hierarchy list approach can be considered, but only for specific data sets. As this increases the effort and complexity of the model (integration, data reconciliation, master data maintenance, etc.), we should aim to have fewer modules with combined hierarchies.

    Balance between NUX and add-ins (Anaplan XL / Excel Add-in) for easy user adoption:

    NUX:

    NUX can be considered for analyzing and processing huge data volumes. NUX pages provide better collaboration, security, and cloud capabilities, and can efficiently process large data sets.
    Because NUX enables data access and processing through the web client, no setup or installation is required on the client system, and extending model data access is quick with pre-defined authorizations.

    Excel Add-In:

    Excel Add-in components can be considered for low-volume data reconciliation tasks. It's also the best option when the requirement is to perform additional reconciliations by leveraging native Excel features to compare with other data already stored in Excel. Some end users prefer to view data in Excel and are more comfortable processing and analyzing data in an Excel interface.
    Currently we have the following add-ins to integrate with Anaplan model data:

    1. Anaplan XL (previously FluenceXL)
    2. Excel Add-in
    3. Anaplan for Microsoft 365
    4. Anaplan Add-on for Google Sheets

  • These are the learnings I want to share from my experience so far, although I'm still learning a lot in Anaplan, as every time I feel like a newbie to it.
    1. Analyze your inbound and outbound data to decide the time setup for the model; discuss with stakeholders early on to agree the model calendar type and range. Include only the precise future and past years needed, to save model size.
    2. Build dev and prod models in different workspaces to avoid any user-specific data loss in production.
    3. Whenever you create a dev model from scratch through a revision tag, be ready to run data loads in the dev model, because the tag copies the structure but the data needs to be loaded again.
    4. Avoid complex formulas that slow performance, for example when end users take a long time to open a page.
    5. Regularly clean up your list subsets to save space.
    6. Make your actions and modules auditable, and mark unused line items as No Data formats.
    7. Prefer model-to-model imports over file-based imports to improve performance.
    8. Use DISCO principles while designing any model: separate Data, Input, System, Calculation, and Output modules to reduce model design complexity.
    9. Avoid hardcoding. If anything needs to change, the design should let you change it in one place, with that line item or filter referenced everywhere and updating automatically.

    Hope this helps some of the folks who are starting to learn Anaplan.

    • Few small observations:

      1. Take a model backup every day and archive it, so that if any data mismatch issue comes up, we can compare with the existing data.
      2. If you have a UAT model for testing, never change the mode from Deployed to Standard; it will break the ALM chain and we won't be able to use that test model any longer.
      3. Use multiple Time Ranges instead of a longer model calendar period.
      4. Don't use Versions in your modules if they're not necessary, as this will increase the cell count of the module.
      5. For lists, always use the delete-and-load method. This helps to get accurate items in the lists and accurate data too.

  • Tiffany.Rice
    edited November 27

    Wow, so many great tips!

    Many of you have mentioned iterative design and that customers will change their minds over time, absolutely spot on! For those of us focused on transformation♻️, we are frequently building MVP solutions around evolving processes. In these circumstances it is important to know what you need for today (aka MVP) and where you are headed tomorrow. If there are critical components that will be needed to support the future state vision, these should be assessed within the initial design. For example, if the MVP doesn't address the long-term foundational needs, then you may find yourself in rebuild mode later. This isn't inherently a bad thing, but it can be incredibly frustrating if that path isn't well understood from the outset 🗺️.

    Eyes wide open is my philosophy, as much as you can try to predict the future🔮. Where is your organization headed? What are the core capabilities needed that they can't support today? Are there decisions being made because of upstream dependencies that will get resolved over time? In short, ask questions, lots of them. There may be a need to balance speed vs. completeness ⏰; be sure to understand where the external pressures to deliver "something" are creeping in and align the roadmap accordingly. The vision will change over time, that is without a doubt, but by helping your leaders understand the potential tradeoffs and iterative design considerations you will be seen as an invaluable partner.

  • I've been mulling my thoughts on this, and thankfully by coming in late in the day lots of people have already made comments similar to what I've been thinking.

    But sometimes you've just got to start. Like writing a novel (of which I have no experience, but my partner is currently writing one), you think you know where you're going, but over time areas evolve.

    Yes, you want to make things flow logically, but occasionally you need to do things slightly off-grid which you hadn't envisaged. Being multi-dimensional is great, but it soon eats space, and not everyone is blessed with Hypermodel workspaces or Polaris.

    Also, what problem is being solved here? Does the end user really need an all-singing, all-dancing model with automated data, management reporting, workflow, etc., or just the ability to load a CSV or two and then utilise the add-ins/export function for onward reporting? Just because you think it's cool doesn't mean it adds value to the people using the model.

    Finally, don't be afraid to cut your losses and start again; obviously if you're a long way down the path that can be more problematic. This is especially true of the MVP/POC: you've built something to get people interested, but then for longevity purposes you need to start again.

    And don't be afraid to say: well, you said you want requirement X, but how settled is that? What's the likelihood of it changing over the length of the project? If high, then park it as a box to be designed later, like a highway stopping at an interchange to nowhere, to be continued at a later date. (Anyone who's done anything on any of the changed international accounting standards recently will know that joy!)

  • Puneeth H P
    edited December 4

    Hi Everyone,

    The first thing I think about when I hear "best practices" in model building is how to solve the problem statement in a more optimized way. There is a lot of information about using functions, best practices to follow during model building, and the DISCO methodology on which Anaplan greatly relies.

    Model builders should always look for a solution which makes the model more feasible, sustainable, and change-efficient. To consistently build models this way, there are a few methods which can make the solution stand out.

    Firstly, for a problem statement, information gathering is a fundamental step in thinking of a solution. Gather all the information available, which includes understanding the business process, the output required for users, the availability of data, model size, performance, and how your solution integrates into the model's existing functionality. After information gathering, the next step is preparing a layout of the problem statement and solution, which helps us understand the gap analysis and gives a solution overview, in turn helping us avoid gaps in the solution.

    Secondly, consider which functions are available to provide a sustainable product without affecting model size or performance. As I mentioned earlier, we can always look to the DISCO methodology and the PLANUAL standard methodology for approaching a solution. It is always advisable to look for any possible threats which affect the current functionalities in the model. The solution provided should not be confusing or raise questions from end users. Try to keep it answering the required problem statement, and beyond that look to add value by making it easily understandable and accessible.
    Change methodologies: https://community.anaplan.com/discussion/156723/impressive-community-q-a-challenge-recap
    PLANUAL Ref: https://support.anaplan.com/planual-5731dc37-317a-49fa-a5ff-7fc3926972de

    Furthermore, an added step while laying out the plan to build particular functionality is to look from the end user's perspective to make it more appealing for users. This can be done by understanding the capability of the Anaplan NUX. Self-questioning has always helped me in these situations. Some of the questions are: How would the end user want to look at the particular data? What steps does the user need to carry out before seeing the final answer to their problem statement, if any? Is it sufficient? Is my solution adding value to the customer? etc.

    There is always the possibility of different approaches to the same problem. We need to analyse the possible solutions, document all of them, and build the most efficient one. Do not rush to a solution, making it vulnerable to redesign. Understanding the complete capability of the tool helps provide a better solution to a greater extent.

    Thanks,
    Puneeth HP



    Success is the Intersection of Dreams and Hardwork!

  • Late to the party here! This is a huge topic in itself, as the project's success or failure revolves around the design. So here is my take on it: to me, design is iterative, and with Anaplan, fortunately, it is very easy to change. With a changing world come changing demands and changing requirements, so it is important that we design the model in such a way that it can be pivoted quickly. To make this happen I would strongly recommend using the four cornerstones of TAW: Data, Model, Process and Deployment.

    1. Thorough understanding of the functional use case. It may sound obvious, but it is very important to understand what you are trying to design. The starting point is to check with the BA and understand the process/use case being implemented. Understand the pain points and see how Anaplan can help optimize them.
    2. Data is another extremely crucial point, as it can make or break the project. It is also a prerequisite for a very good model design. If the data does not land in Anaplan the way it should, you will end up using Anaplan as an ETL tool, and that will start messing up the whole model design and Anaplan's performance.
    3. You've got to be good at what you are building. There are many ways a problem can be solved in Anaplan, but you have to weigh your options and see what works best in your scenario.
    4. Last but not least, start building with the end user in mind. Your end goal has to be the dashboards that the end users interact with (keeping the Anaplan engine's performance in mind).

    Miz