AMA: Model Building Excellence & Application Lifecycle Management

edited January 2023 in Spotlight Series

David Smith has become a household name around here, well versed in all things Anaplan. For those who haven't had the pleasure of interacting with him yet, let us introduce you!

Tell us about your experience here at Anaplan. How long have you been here?

Four years. A lot has changed in that time!

Four years is a long time. What areas have you worked in? 

At Anaplan, I’ve worked in a variety of roles, but before that, I started life as an accountant.  However, I was never really interested in the technicalities of accounting; I always favored modeling and started my journey at the time Excel was taking off.  I moved from spreadsheets to the multi-dimensional world in 2000 with Adaytum (one of Michael Gould’s first products).  I’ve used a number of different platforms during that time and worked as a system administrator, trainer, pre-sales consultant, and solution architect, so I bring all of that experience to bear in my new role. My current role provides a bridge between product development and customer success, and I am currently leading an initiative to re-define and evolve a new standard of Anaplan modeling (PLANS). This will define the best practices of model building, balancing performance, usability, and sustainability.

What is your area of expertise, and how did you achieve it?

My passion is planning, and I always want to find the best, most efficient way to model. I am an expert in most things Anaplan, but my specialty has to be model design; I bring all of the techniques learned from the different platforms I have worked with to build efficient, well-structured models that are easy to maintain. I am also an expert in Application Lifecycle Management, having overseen the enablement of this fantastic feature since its launch in 2016.

What is your favorite thing about Anaplan?

I can model the way I think! I can build (and amend) models so much quicker than ever before. As an example, I recently worked with a colleague to re-configure a model by changing the dimensionality of all major modules.  We did that in four hours.  That’s just not possible in most other systems.  I’m also so excited about the forthcoming Dynamic Cell Access.  I’ve personally championed this feature, and it will change how we model (in a very, very good way).

Tell us one fun fact about yourself! 

I’m right-handed, but I deal cards left-handed. Apparently, I was left-handed as a baby until the age of four or five, and only when I started copying my elder brother did I switch; so I guess there are some things left over that I instinctively do “the wrong way round.”

Note: The live Q&A session is now closed.

Below you'll find a quick reference guide to specific questions featured in the video. Don't forget to check out the BONUS question from our expert to hear more about the DISCO planning methodology.

  • Introduction 00:05
  • Protecting data in a version when a formula changes 00:33
  • Application Lifecycle Management (ALM) and best practices 02:27
  • Balancing agility and governance using ALM 04:45
  • Formulas to avoid 07:43
  • Best practices in multi-tiered accounts 10:15
  • New features and functions and Dynamic Cell Access 11:24
  • Keeping models clean 13:32
  • BONUS: David’s DISCO 14:56


  • Hi David,

    In an environment where we will regularly need to:

    • Update switchover dates for versions (once a week)
    • Run larger model developments that will take multiple weeks to complete
    • Have the agility to implement minor fixes as required
    • Complete work with more than one model builder

    What do you think is the best approach to environment structure to support a good balance of agility and governance with Anaplan?

    E.g., how many models? How many workspaces? How is ALM used? Different user access to different workspaces?

  • Hayk

    Hi David,

    Would be interesting to get your opinion on which formulas to avoid.

    E.g., some of them can slow down performance, while others can corrupt calculations (as ROUND does), etc.



  • Hi David,

    For organizations that don't have ALM available, what is your opinion on best practices around continuing development and model transitions?



  • lin

    I am not a model-builder myself but liaise with a team of model-builders. I ask my question as the business owner of Forecasting data.

    We have closed previous Forecast versions in Anaplan (e.g., March FCST), but when a formula is changed (e.g., for a live April version), all past CLOSED versions automatically update and the $ in the closed versions change.

    This is not acceptable, as the older versions were published; if we now refer to them live in Anaplan for variance analysis, the $ differ from what was published for management reporting. Data has to be manually tweaked in Excel for prior versions.

    How do we avoid this?  

  • Hi David,

    In the US, Income Statements typically display both Revenue and Expenses positively.

    For example:

    Sales 100

    Cost of Goods Sold 50

    Gross Margin = 50

    Since Anaplan's default aggregation behavior is to sum all children to derive the parent, it becomes challenging to calculate totals correctly while also displaying values that conform to customer Reporting standards. Assuming a multi-tiered Accounts list (likely ragged), what are some best practices to work around this issue?

    I've used Line Items before for each Account, which allow for more flexibility around display and formulas; however, the line items are cumbersome to update as new Accounts are added to the source system that must also be included in Anaplan's account list.

    I've also seen instances where 3 different line items are used, dimensionalized by an Accounts list to come up with a proper display. Line Item 1 = where data is input, both Revenue and Expenses input as positive. Line Item 2 = Converts expenses from positive to negative. Line Item 3 = For Level 0 list members, take the value from Line Item 1 and for parents, calculate the proper total by subtracting Line Item 2 from Line Item 1. 

    Curious if you've seen anything in your experience that gets around these issues.
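The three-line-item pattern described in this question can be sketched in plain Python (a hypothetical illustration, not Anaplan syntax; the account names and figures come from the example above):

```python
# Hypothetical mini income statement: leaf accounts entered as positives,
# with a flag marking which ones are expenses.
accounts = {
    "Sales":              {"value": 100, "is_expense": False},
    "Cost of Goods Sold": {"value": 50,  "is_expense": True},
}

# Line Item 1: raw input -- everything positive, as users expect to see it.
input_value = {name: a["value"] for name, a in accounts.items()}

# Line Item 2: expenses converted to negative so the arithmetic works.
signed_value = {name: -a["value"] if a["is_expense"] else a["value"]
                for name, a in accounts.items()}

# Line Item 3: leaves display the positive input, while the parent total
# is aggregated from the signed values, giving the correct Gross Margin.
display = dict(input_value)                     # Sales 100, COGS 50
display["Gross Margin"] = sum(signed_value.values())

print(display["Gross Margin"])  # 50
```

The point of the separation is that the display values stay positive for reporting while the aggregation runs over the signed values, so parents total correctly without any special-casing per account.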

  • David-

    From a builder's perspective, what new features or functions on the roadmap are you most excited about? How can these improve model building in Anaplan?


  • There's still time to ask your questions! We're wrapping up tomorrow morning, so get them in today. 

  • I know that it's good to keep your Anaplan models clean so that they are easier to manage over time: removing fields that are no longer used, removing old import actions you no longer need, deleting old archived models, etc.

    In your job, do you follow a schedule to clean your models on a regular basis, or do you do it simply as you notice something that's no longer needed?


    Thank you.

  • My company is also interested in the upcoming Dynamic Cell Access feature, specifically around using it with Break Back to control which cells are updated. Is there a place you can recommend we look for more information on the feature? I don't see much about it on Anapedia yet.


    Thank you.

  • Hi David,

    What advice would you give for creating and managing lists and subsets of those lists (naming conventions, the order in which they are created, etc.)?

    I ask because when you have multiple lists and multiple subsets for each list, finding them in the "applies to" of a module is a nightmare. The order they appear in seems completely random.

    I am attaching a screenshot of a very simple example with 4 lists and 10 subsets to illustrate what I mean.


  • Hello,


    Most of the customers we talk to want to see almost all of their data in one model (e.g., they don't want to switch between models/workspaces to make a business decision). What design approach do you suggest when we have multiple models distributed across multiple workspaces, but the business still wants to see a combined set of data/information in one model without toggling between models?

  • Hello David,


    My questions are about the Building a Data Hub - Best Practices article:


    1. How literally should we take this? It makes complete sense with big lists; with a list of 5M records, it is a no-brainer. But where is the limit? If the list has only 100 items in it, would we still take the same approach? What if it only has 5 items in it?


    2. Is this approach recommended when importing data from one model to another? For example:

    • In the Data Hub, the customer master is imported from a CSV file into multiple hierarchical key lists and then into corresponding property modules as needed. Would we import the Data Hub key lists and property module(s) into a:
      • Demand Planning model's key lists and property module(s)?
      • Demand Planning normal list (not a key list)? If this is the case, is the property module created in the Demand Planning model?

    Thank you!

  • Thanks for all the great questions this week! Watch for the video to be posted by end of day on Friday. Any questions that didn't make the cutoff for the video will be answered directly in the forum. We'll also leave this topic open for a few more days for any follow-up questions. 

  • Remember to let us know what you thought about the AMA by taking the survey. We want to hear from you!

  • Hi Fabien

    Apologies for not being able to answer this on the session last week.


    Firstly, we recommend adding a "placeholder" dummy list at the bottom of the general lists, because subsets will always appear at the bottom. So, something like <-- Subsets -->

    Secondly, we recommend naming the subset with the name of the list as a prefix, e.g., Cost Centres:EMEA or Products P7:Active Products. This helps identify the lists (and their purpose).


    Finally, addressing the order. As mentioned, subsets will appear at the bottom of the list (below the Users list; see attached), and the order is made up of two elements:

    1. The order the lists were created in General Lists, and
    2. The order in the Subsets tab.

    I have raised a request with the product team that point 1 above be amended to reflect the current order in General Lists, as this will help with the ordering.


    I hope this helps, 

    If you have any further questions, please post a message on Community under the relevant topic (from the left-hand navigation bar).

  • Hi Ashish

    It is a good question, and the ideal situation is to have everything in one model. But in certain circumstances we do need to split models, for reasons of:

    1. Model Size
    2. Model Performance
    3. User Concurrency

    In these situations, I usually ask the following questions:

    1. Are there defined parts of the process that follow on from each other with different users?
    • Let's assume we have a 12-day S&OP process. The Demand plan is completed in working days 1-4 of the cycle. Its output feeds an Operations plan, completed in working days 5-8, which in turn feeds an Inventory plan in working days 9-12. The Inventory plan does not need to be updated in real time as the Demand plan is updated, so this is a good candidate for splitting the functionality into three models (Demand, Operations, and Inventory) and importing the data between them (maybe overnight, or on a schedule).
    2. Can you split the model by hierarchies?
    • It is important to minimise the number of models a user needs to access to complete their planning, so think about splitting the models along those lines of responsibility (maybe by Region, or by Product). For example, let's assume we now have three parts of a process: Contract Setup, Contract Rules, and Scenario Modelling. As described above, we could split the model by these three, but if the same user needs to move quickly from one process to another, switching between models is not a satisfactory solution. So in this case, it would be better to keep all of the process calculations and modules in one model and split the model based on the main contract hierarchy.
    3. Does all central reporting/analysis need to be real time?
    • Often the models pull everything together for central consolidation and reporting. As with the first example, in many cases this doesn't need to be updated in real time and can be split out into a separate model and updated regularly. Quite often these updates are overnight in existing systems, so we are often able to improve the latency of the reporting versus the existing state, even if not in real time.

    I hope this is helpful. If you have any follow-up questions, please post them in the relevant Topic forum (from the left-hand navigation panel).

  • Hi Terrie

    I presume your first question refers to the incremental data load.

    I think, as with most cases, common sense needs to prevail. The key point is to make the imports as efficient as possible, which is why we advocate the incremental load. But, as you point out, this sometimes doesn't make sense for all (small) lists and isn't really practical. As I mentioned in the video, most of the best practices are guidelines that will have exceptions.

    The really important thing to point out here is that it is BAD practice to clear out the list and re-load it each time it is imported, be that into the hub itself or the downstream models. There is a limit on the number of times this can be done before our Technical team needs to get involved to reset it. Please try to avoid this method of updating. I will be posting an article on Best Practice for List Imports soon.
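The idea behind an incremental load, as opposed to clear-and-reload, can be sketched in plain Python (a hypothetical illustration with invented customer codes; the real work happens via Anaplan imports):

```python
# Incremental load sketch: compare the incoming records against what is
# already loaded and import only the new or changed rows, instead of
# clearing the list and re-loading everything on every run.
existing = {"C001": "Acme Ltd", "C002": "Blue Sky Inc"}   # code -> name already in the hub
incoming = {"C001": "Acme Ltd",                           # unchanged -> skip
            "C002": "BlueSky Inc",                        # renamed   -> update
            "C003": "Corvid LLC"}                         # new       -> add

# Only rows whose value differs from (or is missing in) the existing set.
delta = {code: name for code, name in incoming.items()
         if existing.get(code) != name}

print(delta)  # only C002 and C003 need importing
```

The unchanged rows never travel, which is what keeps the import small and fast as the source list grows.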


    On your second question, if by a KEY list you mean a numbered list, then it will depend on the reason for the numbered list.

    If the numbered list was created in the hub because of a lack of a key, then creating the key in the hub and using that as the code in the downstream models negates the need for a numbered list. It is much easier to avoid using numbered lists when importing or exporting data between models, because of the #id that is used to identify each numbered-list entry. Using a code is always preferable.

    However, if the numbered list was used because of flexibility/limitations on the display name, then you can still have a numbered list in the downstream model. I would still try to use a code wherever possible (it is much easier to tie the lists together).
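Why a shared code beats an internal id can be shown with a small Python sketch (hypothetical data; the ids and codes are invented). Each model assigns its own internal identifiers, so the same item can carry a different id in the hub and downstream, while a code that travels with the data matches reliably:

```python
# The same two products, as seen by two different models. Each model has
# assigned its own internal id, but the code travels with the data.
hub_list        = [{"id": 101, "code": "P-001", "name": "Widget"},
                   {"id": 102, "code": "P-002", "name": "Gadget"}]
downstream_list = [{"id": 7,   "code": "P-002", "name": "Gadget"},
                   {"id": 8,   "code": "P-001", "name": "Widget"}]

# Matching on internal id finds nothing: the ids are model-specific.
downstream_ids = {d["id"] for d in downstream_list}
id_hits = [h for h in hub_list if h["id"] in downstream_ids]

# Matching on code pairs every item correctly.
downstream_by_code = {d["code"]: d for d in downstream_list}
code_hits = [h for h in hub_list if h["code"] in downstream_by_code]

print(len(id_hits), len(code_hits))  # 0 2
```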


    I would always advocate a properties or attributes module though.  As I mentioned as part of DISCO, creating a System module and referencing the details everywhere is key to good model performance, design and understanding.


    I hope this is helpful. If you have any follow-up questions, please post them in the relevant Topic forum (from the left-hand navigation panel).