DIMENSION INDEXING in POLARIS
If you attended the event on 4th Aug 22 by "Modelers Forum India", you will know by now that there is a concept called "Dimension Indexing" in the Polaris engine. In this article we will try to explain what we mean by Dimension Indexing.
If you remember, in Hyperblock any dimension (apart from Anaplan TIME and Anaplan VERSION) constitutes one BLOCK, meaning every item of the dimension sits within the same block, where you can perform all the functions like SELECT, SUM, LOOKUP, FINDITEM etc. While that still holds good in Polaris, there is an additional concept that was born with Polaris, and that is Dimension Indexing. It means that every member/item of the dimension is indexed, but the catch is that this indexing happens in groups (powers of 2) and not at an individual member/item level. Let's try to understand with an example.
If the list/dimension has one item in it, Polaris needs 1 index for it. If there are 2 list items, Polaris still needs only 1, because the grouping of items per index is capped at 2 (2 to the power of 1) at entry level. When the list grows to 3 or 4 items (2 to the power of 2), the indices required are 2. Moving on, if the list has 5 to 8 items (2 to the power of 3), the indices required are 3, and if the list has 9 to 16 items (2 to the power of 4), the indices required are, yes you guessed it right, 4. So how do you know how many indices Polaris requires to hold all the members/items of a list/dimension? Just apply the LOGARITHM function to base 2 and ROUND UP to the next integer; in other words, find the smallest power of 2 that is greater than or equal to the number of items/members in the list. I am attaching a sheet where I have tried to break it down for you. Please go through it and let me know if you have any questions.
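The rule above is easy to sketch in a few lines of Python. This is just an illustration of the arithmetic (ceiling of log base 2, with a floor of 1 index for a single-item list), not anything from the Polaris engine itself; the function name is my own.

```python
def indices_required(item_count: int) -> int:
    """Number of indices Polaris would need for a list of this size,
    per the power-of-2 grouping rule: ceil(log2(item_count)), min 1.

    (item_count - 1).bit_length() equals ceil(log2(item_count)) for
    integers >= 1, and avoids floating-point rounding issues that
    math.log2 can hit for very large lists.
    """
    if item_count < 1:
        raise ValueError("a list must have at least one item")
    return max(1, (item_count - 1).bit_length())


# Reproducing the examples from the article:
for n in [1, 2, 3, 4, 5, 8, 9, 16]:
    print(n, "items ->", indices_required(n), "indices")
```

Running this matches the walkthrough: 1 and 2 items need 1 index, 3 and 4 need 2, 5 through 8 need 3, and 9 through 16 need 4.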
Final Words:
So in order for us to work with Polaris it is absolutely necessary that our calculations stay under these two thresholds:
1. The maximum number of indices is 64 at each line item level.
2. The maximum number of cells is (2^64) - 1 at each line item level, which is equal to 18,446,744,073,709,551,615, i.e., roughly 18.4 quintillion cells.
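To see how the two thresholds interact, here is a small sketch that sums the per-dimension index counts for a line item and checks both limits. The function name and the idea of simply summing indices across dimensions are my own illustration of the rule, assuming each dimension contributes its own power-of-2 index count:

```python
MAX_INDICES = 64            # per line item
MAX_CELLS = 2**64 - 1       # per line item

def fits_polaris(dimension_sizes: list[int]) -> bool:
    """Illustrative check: would a line item dimensionalized by lists of
    these sizes stay under the 64-index and (2^64 - 1)-cell thresholds?
    Each list of n items needs max(1, ceil(log2(n))) indices."""
    total_indices = sum(max(1, (n - 1).bit_length()) for n in dimension_sizes)
    total_cells = 1
    for n in dimension_sizes:
        total_cells *= n
    return total_indices <= MAX_INDICES and total_cells <= MAX_CELLS


# A module with 1,000 SKUs x 500 customers x 200 regions:
# 10 + 9 + 8 = 27 indices, well under the cap.
print(fits_polaris([1000, 500, 200]))

# Five dimensions of 65,536 items each: 5 x 16 = 80 indices, over the cap.
print(fits_polaris([65536] * 5))
```

Note that the cell threshold follows from the index threshold: if the indices sum to at most 64, the cell count cannot exceed 2^64.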
Comments

Thank you @Misbah for sharing. Very good to have this!
Do you have any idea how to calculate density/sparsity for the total and subtotal levels of given raw data from a flat file with dimension attributes? It seems like a very complex problem to solve, as every parent and every child affects the density.
Without the answer to that, it's impossible to know ahead of time how sparse your data is and whether it makes sense for you to try Polaris, considering the 1/3 ratio for the numbered data format.
We are entering a realm of possibilities beyond imagination, but as of now it is quite uncertain how Polaris is going to behave with data that is, let's say, half dense and half sparse.
Calculating sparsity in Polaris looks extremely challenging to me as of now. It is absolutely necessary to understand the data in order to know where the sparsity lies. Sparsity at the summary levels will be lower than at the granular/child level; in other words, the data will be denser at higher levels.
By the way, this question was asked in the event and there is an answer in the recording as well (39:15 - 42:30).
Great question and very hard to answer... You have to remember, Polaris is a completely different engine with completely different rules. How is it different? Obviously, the engine itself is different, and that is an entirely different conversation, but what I want to focus on is HOW we model.
In the Classic engine, we had to concatenate lists together to make the data dense. In Polaris, we don't need to do that and can model more naturally, leading to more sparsity. This is where the fun begins, because now our hierarchies in Polaris can be completely different and architected for only what they truly are. We can now have SKUs as a separate list in a module dimensionalized by Regions or Geographies.
So determining the density will be a bit more difficult and we are still thinking about it.
Hope this helps,
Rob