Multiselect booleans - how does the Hyperblock behave?

Dear Community, I'm seeking your assistance with the following topic.

 

In the model I'm contributing to, we have a mechanism that allows turning certain features on or off. It's done using a simple Boolean line item dimensioned by a list of features (to keep it simple). Let's say it looks like this:

 

Feature      Boolean (on/off)
feature1     T/F
feature2     T/F
...          ...
feature50    T/F

 

You all probably know the simple trick that you can select multiple Boolean-formatted cells and hit the spacebar to toggle all of them at once. I used it recently on the table above and experienced some strange model behaviour that I'm quite concerned about.

 

Test 1

  1. All cells are 'False'
  2. I select all of them and hit the spacebar

Result: model recalc took about 85 s on average

 

Test 2

  1. All cells are 'False'
  2. I select first half (1-25) and hit spacebar

Result: model recalc took about 175 s on average

 

Test 3

  1. All cells are 'False'
  2. I select second half (26-50) and hit spacebar

Result: model recalc took about 40 s on average

 

The comparison of test 2 vs test 1 is very counterintuitive, isn't it? How can a smaller number of calculations result in a longer recalculation time?! But that's not all. I ran further tests, which were even more interesting:

 

Test 4

  1. First half (1-25) is 'True', second half (26-50) is 'False'
  2. I select second half (26-50) and hit spacebar

Result: model recalc took about 180 s on average!

 

Well, I'm puzzled, same as you are, I believe.

It's worth mentioning that the calculations driven by those Booleans usually overlap. Some of them may be completely independent of the others, but in most cases they share some line items in their reference trees. They don't cover the whole model, though.

 

I tried to find out whether there was one particular feature causing the issue. I repeated the tests with different combinations of selected cells, and the most interesting results were the following:

 

Test 5

  1. All cells are 'False'
  2. I switch feature 7 (yes, just a single one) to 'True'

Result: about 2 s on average

 

Test 6

  1. All cells are 'False'
  2. I switch features 8-50 to 'True'

Result: about 39 s on average

 

Test 7

  1. All cells are 'False'
  2. I switch features 7-50 to 'True'

Result: about 84 s on average

 

I would have assumed that changing multiple cells at once could only be quicker, or at worst the same. That is clearly not the case here: 84 s > 39 s + 2 s! Thinking about it: if you picture a reference tree and you know that some of the branches/leaves overlap, then running multiple calculations at once should be quicker, because the overlapping formulas don't have to recalculate multiple times, as they do when you run the changes one by one. Also, the difference (84 - 39 - 2 = 43) is too big to be explained by data transfer or anything else network-related...
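To make the argument above concrete, here is a toy sketch (in Python, not Anaplan's actual engine; the graph, item names, and propagation logic are all hypothetical) of why toggling several flags in one batch should never re-evaluate a shared downstream formula more often than toggling them one by one:

```python
# Hypothetical dependency graph: two calculations that each depend on their
# own flag plus a 'shared' line item, which itself depends on both flags.
DEPENDS_ON = {
    "calc1": {"flag1", "shared"},
    "calc2": {"flag2", "shared"},
    "shared": {"flag1", "flag2"},
}

def recalc(changed_flags):
    """Count how many line items must be re-evaluated after a change,
    using naive dirty-flag propagation over the dependency graph."""
    dirty = set(changed_flags)
    evaluated = 0
    progressed = True
    while progressed:  # sweep until no new items become dirty
        progressed = False
        for item, deps in DEPENDS_ON.items():
            if item not in dirty and deps & dirty:
                dirty.add(item)
                evaluated += 1
                progressed = True
    return evaluated

# Toggling one by one re-evaluates the shared subtree twice...
one_by_one = recalc({"flag1"}) + recalc({"flag2"})
# ...while a batched change evaluates each dirty item only once.
batched = recalc({"flag1", "flag2"})

print(one_by_one, batched)  # prints: 6 3
```

Under this toy model the batched cost is always less than or equal to the sum of the individual costs, which is exactly why the observed 84 s > 39 s + 2 s is so surprising.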

Moreover, the Hyperblock is apparently not calculating those Booleans one by one either, because if that were the case, the result of test 5 + test 6 should equal the result of test 7.

 

Obviously, I tried to limit the number of factors that could affect the results. I used a copy of the model that no one else was using, in a workspace that wasn't being used for anything else, etc. I used Chrome's inspect mode to check the processing times. I repeated the tests multiple times and the results were consistent.

 

Can anyone help me explain this phenomenon? Can you think of any theoretical scenario that would cause the Hyperblock to behave like this? Have you experienced the same in your models?


Answers

  • Hi @DavidSmith, FYI, this post presents a very interesting perspective on HyperBlock behavior. It would be great to understand the reason behind these numbers.

  • @PiotrWeremczuk ,

     

    I agree, very interesting... What is the model ID (it can be found in the URL), and approximately what time were you doing these tests?

     

    Thanks,

     

    Rob

  • @rob_marshall 

     

    I was using model 1C63BDFF16F5419885F56E59D3BD665D and was doing tests for quite some time around 12 AM - 2 PM CEST today (10th Dec).

     

    Hope that helps

  • Hi Piotr, I'd like to get a copy of the model to analyse the calculations. I can't give you a reason for the differences based on the evidence here; I could take a guess, but I don't like to do that! I'll message you...

    thanks
  • I'd recommend restructuring this so that you have a separate dimensionless line item per setting, then repeating your tests.