OEG Best Practice: Data Hubs: Purpose and peak performance



  • Question regarding using Import to List, Trans, Calculate Attribute (multiple line items) for better performance: how do you achieve performance when the code can't hold all attributes within 60 characters? Do we use list properties, or one attribute that is loaded and then used to calculate all the other attributes? If so, what is the performance of having 1 list code, 1 loaded line item attribute, and the rest calculated (e.g., compared to loading multiple line item attributes)?

    Another question regarding delta loads: how do you perform the clearing of the "new data" Boolean if we have 4 or 5 spoke models that are "connected" to the data hub?

  • Thank you for the good article! Just a friendly reminder - "The Anaplan Way" link is not working.

  • @rob_marshall, a question on the scheduling of Data Hub-to-spoke integrations. If, for example, you have 3 spoke models that need to import data from the data hub, does each spoke's import action lock the data hub and prevent the other spoke integrations from running? This relates to the scheduling of imports from the spoke models: should I schedule the integrations of all 3 spoke models at the same time (e.g. 3 AM), or sequentially (spoke 1 first, then spoke 2, and so on)? Could you please advise?

  • @TristanS

    Great question, and the answer is no, because you are only reading the data from the data hub, and reading doesn't lock it (assuming you aren't writing anything back to the data hub). Models only lock when data is being written to them, so the spokes would be locked while their imports run, but not the data hub.
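    In lock-manager terms, this behaves like a reader/writer lock: spoke imports are "readers" of the hub, so any number can run concurrently, while only a write would take an exclusive lock. A minimal Python sketch of that analogy (class and method names are illustrative, not Anaplan internals):

    ```python
    # Reader/writer analogy for hub locking: reads (spoke imports) can
    # overlap; a write (loading the hub) needs exclusive access.
    class ModelLock:
        def __init__(self):
            self.readers = 0      # concurrent spoke imports in flight
            self.writing = False  # True while data is being written

        def try_read(self):
            # Reads are blocked only while a write is in progress.
            if self.writing:
                return False
            self.readers += 1
            return True

        def try_write(self):
            # A write needs exclusive access: no readers, no other writer.
            if self.writing or self.readers:
                return False
            self.writing = True
            return True

    hub = ModelLock()
    spoke1_ok = hub.try_read()  # spoke 1 imports from the hub
    spoke2_ok = hub.try_read()  # spoke 2 can read at the same time
    ```

    So scheduling all three spoke imports at 3 AM is fine from the hub's side; only a concurrent load *into* the hub would contend with them.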

  • rob_marshall
    edited October 2023


    Let me answer your questions in reverse order:

    Another question regarding delta loads: how do you perform the clearing of the "new data" boolean if we have like 4 or 5 spoke models that are "connected" to the data hub

    Answer: Make clearing the Boolean the first action of the process that loads the data hub. That way, every spoke that runs between hub loads can take advantage of the same delta.
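    The ordering described above can be sketched as follows; the structures, field names, and data are hypothetical, but the key point is that the flag is cleared at the start of the next hub load, never by the spokes reading it:

    ```python
    # Sketch of the delta-load pattern: clearing the "new data" flag is
    # the FIRST step of the hub load, so every spoke that runs before the
    # next hub load sees the same delta.

    def load_data_hub(hub, incoming):
        # Step 1: clear all "new data" flags from the previous load.
        for row in hub.values():
            row["new"] = False
        # Step 2: upsert incoming records and flag them as new.
        for code, amount in incoming.items():
            hub[code] = {"amount": amount, "new": True}

    def spoke_import(hub):
        # Each spoke reads only flagged rows; reading never clears the
        # flag, so any number of spokes can pull the same delta.
        return {c: r["amount"] for c, r in hub.items() if r["new"]}

    hub = {}
    load_data_hub(hub, {"A-001": 100, "A-002": 250})
    delta1 = spoke_import(hub)           # spoke 1 pulls the delta
    delta2 = spoke_import(hub)           # spoke 2 sees the same delta
    load_data_hub(hub, {"A-002": 300})   # next load clears flags first
    delta3 = spoke_import(hub)           # only the fresh change remains
    ```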

    Codes with greater than 60 characters

    Answer: There are a couple of ways of accomplishing this, but none are great.

    • What do the source codes look like? Are they a novel, or a real code that is simply long?
    • Can you create an Anaplan code within the source using mapping tables? You would also need to create the same mapping tables/modules within Anaplan if you are exporting back to the source system, so this is not ideal.
    • Are there members in the code that shouldn't be there, like Actuals, Budget, or Time?
    • Can you break out some of those members and use them as selectors (dimensions) of the module? Remember, the transactional data doesn't have to be loaded flat; it is great if you can, but it doesn't have to be. If you can do this, your transactional list codes will not only be shorter (fewer characters), the list itself will also be smaller in volume.
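    A minimal sketch of the last bullet, assuming a "|"-delimited source code (the field names and delimiter are illustrative): members such as version and period move out of the code and become module selectors, leaving a shorter transactional code:

    ```python
    # Splitting a long source code: version and period become module
    # selectors (Versions / Time), so only product|customer remains as
    # the transactional list code. Field names are hypothetical.

    def split_code(source_code):
        version, period, product, customer = source_code.split("|")
        # Only the parts that truly identify the transaction stay in
        # the code; the rest map to dimensions of the module.
        short_code = f"{product}|{customer}"
        return short_code, {"version": version, "period": period}

    short, selectors = split_code("ACTUALS|FY23 Jan|PROD-000123|CUST-9981")
    # "short" now fits comfortably within the 60-character code limit,
    # and the transactional list no longer repeats version/period members.
    ```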

  • OK, so about delta loads: how do you make sure that a spoke model got all the previous deltas correctly before the next delta load simply removes the Boolean?

    My understanding is that spoke models use the data hub as a central data source in a "pick the latest data" way, and since imports are pull-only, I can't think of anything that would guarantee data timeliness and consistency between the data hub and the spoke models with this delta feature.

    Sorry for the continuous questions, but we are looking into delta loads and I cannot see how Anaplan can achieve something resilient enough here.

  • @david.savarin

    Ok, good question. There are a couple of ways:

    • First, make sure you get all green checks from the actions/process, with no warnings or errors. If you get a warning or error, something is obviously wrong (remember the source should be a view, not a module or a list). Honestly, this is what about 99% of the folks do (monitor the warnings and errors). But if you want to take it a step further…
    • You can create a Totals module in the source and bring that summary data over into a module on the spoke, where you land the data; another line item then sums the members just loaded. Think of this as a validation module. To accomplish the "sum", I create a dummy list (call it Total) with only one member and dimensionalize the validation module by that dummy/Total list. In the modules where I want to do the sum, I have a hardcoded line item referencing the lone member of the dummy/Total list (referencing a SYS Global module with the one member defined/hardcoded). Again, this is usually overkill if you are monitoring the warnings/errors, but you can definitely do this.
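    A minimal sketch of the validation idea, with hypothetical names: compare the control total brought over from the hub's Totals module with the sum of the rows that actually landed in the spoke:

    ```python
    # Validation-module sketch: the hub exports a control total; the
    # spoke sums what it actually received (the SUM against the lone
    # dummy/Total member) and compares the two. Names are illustrative.

    def validate_load(hub_total, spoke_rows):
        landed_total = sum(spoke_rows.values())
        return landed_total == hub_total

    hub_total = 350.0  # control total from the hub's Totals module
    spoke_rows = {"A-001": 100.0, "A-002": 250.0}  # rows landed in the spoke
    ok = validate_load(hub_total, spoke_rows)  # True -> delta landed completely
    ```

    A mismatch (e.g. a dropped row) would make the comparison fail, flagging that the spoke should re-pull before the next hub load clears the Boolean.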

    Does that help?