@rob_marshall , @nicole.johnson
I agree that there is no need to flatten the data. The less data you move around, the better the performance.
On the point about pushing the data through the HUB, I just want to raise one more thing: it's always good to have a single point of truth for your data. You may well use the data in the future to feed another model (one that possibly doesn't exist in the environment yet), and then it will be easier simply to use the HUB rather than redesign the data flow, I guess. Moreover, if you schedule your imports between models carefully, you won't end up with an issue of missing updates.
Dear Community, I'm seeking your assistance with the following topic. In the model I'm contributing to, we have a mechanism that allows turning certain features on and off. It's done using a simple Boolean line item dimensioned by a list of features (to keep it simple). Let's say it looks like this:

Feature     Boolean (on/off)
feature1    T/F
feature2    T/F
...         ...
feature50   T/F

You probably all know the simple trick where you select multiple Boolean-formatted cells and hit the spacebar to toggle all of them at once. I used it recently on the table above and experienced strange model behaviour that I'm quite concerned about.

Test 1
All cells are 'False'. I select all of them and hit the spacebar.
Result: model recalc took about 85s on average.

Test 2
All cells are 'False'. I select the first half (1-25) and hit the spacebar.
Result: model recalc took about 175s on average.

Test 3
All cells are 'False'. I select the second half (26-50) and hit the spacebar.
Result: model recalc took about 40s on average.

Comparing test 2 with test 1 is very counterintuitive, isn't it? How can a smaller number of calculations result in a longer recalculation time?! But it doesn't end there. I ran another test that was even more interesting:

Test 4
The first half (1-25) is 'True', the second half (26-50) is 'False'. I select the second half (26-50) and hit the spacebar.
Result: model recalc took about 180s on average!

Well, I'm puzzled, same as you are, I believe. It's worth mentioning that the calculations driven by all those Booleans usually overlap. Some of them may be completely independent of the others, but in most cases they share some line items in the tree of references. They don't cover the whole model, though. I tried to find out whether one particular feature might be causing the issue.
I repeated the tests with different combinations of cells selected, and the most interesting results were the following:

Test 5
All cells are 'False'. I switch feature 7 (sic! just a single one) to 'True'.
Result: avg time about 2s.

Test 6
All cells are 'False'. I switch features 8-50 to 'True'.
Result: avg time about 39s.

Test 7
All cells are 'False'. I switch features 7-50 to 'True'.
Result: avg time about 84s.

I would have assumed that changing multiple cells at once could only be quicker, or at worst the same. That's certainly not the case here: 84s > 39s + 2s! Thinking about it, if you imagine a reference tree and you know that some of the branches/leaves overlap, then running multiple calcs at once should be quicker, since you don't force the overlapping formulas to recalculate multiple times, as happens when running them one by one. Also, if you look at the difference (84 - 39 - 2 = 43s), it's too big to attribute to data transfer or anything else network-related. Moreover, Hyperblock is clearly not calculating those Booleans one by one either, because if it were, the results of test 5 + test 6 should equal the result of test 7.

Obviously, I tried to limit the number of factors that might affect the results. I used a copy of the model that no one else was using, in a workspace that wasn't used for anything else, etc. I used Chrome's inspect mode to check the processing times. I repeated the tests multiple times and the results were consistent.

Can anyone help me explain this phenomenon? Can you think of any theoretical scenario that would cause Hyperblock to behave like that? Have you experienced the same in your models?
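To make the non-additivity argument concrete, here is a minimal sketch that tabulates the averages reported above and computes the unexplained gap. The numbers are the ones observed in the tests; nothing about Hyperblock's internals is assumed.

```python
# Observed average recalculation times from tests 5-7 (seconds).
timings = {
    "test5_feature7_only": 2,     # toggle feature 7 alone
    "test6_features_8_50": 39,    # toggle features 8-50
    "test7_features_7_50": 84,    # toggle features 7-50 in one action
}

# If the engine processed the two change sets independently (one by one),
# test 7 should take roughly test 5 + test 6.
expected_if_sequential = (
    timings["test5_feature7_only"] + timings["test6_features_8_50"]
)
unexplained_gap = timings["test7_features_7_50"] - expected_if_sequential

print(expected_if_sequential)  # 41
print(unexplained_gap)         # 43
```

The 43-second gap is larger than either individual change, which is why neither "sequential evaluation" nor "network overhead" seems to account for the behaviour described above.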
It's good to have such a tool in Anaplan.
Is there any technical documentation available for the tool? I'm curious what type of algorithm Optimizer uses, since that would say a lot about the tool's strengths and weaknesses.
Great guide, Paul, thank you! There is also one nice trick if you use the import option to clear all items prior to import. You can set up a kind of dummy action that moves a single cell value (say, a 'true' value from a system module) into the target module's line item you want to clear. The target line item would be in a subsidiary view with no dimensions as well. This way you limit the traffic in the model and improve the action's performance, since only a single Boolean is moved around. However, it only works if you want to clear the whole module. In some cases that may be useful, for example if you have just a single input-like line item.
Hey, great idea, I'm totally for it! I would prefer Python / R / Julia-type languages, but any form of scripting would be awesome. Just imagine what you could do with it! It's not only about custom functions but also about applying advanced algorithms. Proper forecasting (mathematically speaking) would no longer be an issue, and you wouldn't have to go outside Anaplan. Machine learning would be just next door. You could even try running complex AI programs inside Anaplan, and what more do you need when making complex decisions about managing your company? The only problem I can see is that it might cause some trouble for the Anaplan servers, since such algorithms are usually quite heavy on performance. Anyway, I can't wait to see it in Anaplan!