Exciting new product releases are coming from Anaplan, announced at Anaplan Live! 2021. One of the things a lot of us are curious about and want to know more about is Polaris. What is the difference between Polaris and a Hypermodel?
- Hypermodel gives users the ability to have up to 700GB of space in a workspace
- Polaris is an enhanced calculation engine that can handle up to 10 quintillion cells
How does the Polaris calculation engine impact workspace size? Does your model need to be a Hypermodel in order to support Polaris?
I am the product manager for Polaris. Caveating with the usual roadmap statements - we haven't yet made Polaris generally available, so things are subject to change - I can explain the following differences between Polaris and Hypermodels.
Polaris uses a completely new, natively sparse underlying engine. That means the amount of memory/workspace used by a line item does not depend on its dimensionality; rather, it depends on the number of populated (non-zero for numeric) cells. So if I create a 10 billion cell line item in Polaris that is all zeros, it will require zero bytes of workspace. Note, though, that every populated (non-zero for numeric) cell in Polaris requires more memory/workspace (about 24 bytes) than every cell in the Classic engine (about 8 bytes). So there is a trade-off. Workspace size for a Polaris model is driven by the number of populated cells (including primary, aggregate, and calculated cells), not by the dimensionality.
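To make that trade-off concrete, here is a rough back-of-the-envelope sketch in Python, using only the approximate per-cell figures quoted above (~8 bytes per cell in Classic, ~24 bytes per populated cell in Polaris). The function names and the example cell counts are illustrative, not part of any Anaplan API, and fixed overheads are ignored.

```python
# Rough comparison of workspace usage for a single line item in the
# Classic engine vs. Polaris, using the approximate per-cell figures
# from the post above. Byte counts are illustrative assumptions.

CLASSIC_BYTES_PER_CELL = 8    # Classic is dense: every cell costs memory
POLARIS_BYTES_PER_CELL = 24   # Polaris is sparse: only populated cells cost memory

def classic_workspace_bytes(total_cells: int) -> int:
    """Classic engine: memory scales with total dimensionality."""
    return total_cells * CLASSIC_BYTES_PER_CELL

def polaris_workspace_bytes(populated_cells: int) -> int:
    """Polaris engine: memory scales with populated (non-zero) cells."""
    return populated_cells * POLARIS_BYTES_PER_CELL

# Hypothetical example: a 10-billion-cell line item where only
# 1 million cells are actually populated.
total = 10_000_000_000
populated = 1_000_000

print(classic_workspace_bytes(total) / 1e9, "GB")      # 80.0 GB in Classic
print(polaris_workspace_bytes(populated) / 1e9, "GB")  # 0.024 GB in Polaris
```

At these assumed byte costs, the break-even point is a density of one third: once more than a third of a line item's cells are populated, the 24-byte Polaris cells cost more than the 8-byte Classic cells would.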
Currently, Hypermodel means the current Classic engine with a workspace size greater than 130GB. That can be necessary when the amount of data being modelled is large enough - whether or not that business problem is highly sparse.
We do plan to support Polaris Hypermodels - that is, Polaris workspaces larger than 130GB - as well as standard Polaris workspaces (up to 130GB). But as we are still early in an early access (EA) phase with Polaris, we will need to see more data to understand best practice for those scenarios.
Thank you so much for that info. So that means there won't be a 2.1 billion cell limit on a particular line item.
Also, will there be an automatic switch from Hypermodel to Polaris when it becomes GA, or do clients have to ask and pay for it - since Polaris is natively a sparse engine? I believe clients would be super happy to adopt Polaris - can't wait to work on it.
While Polaris doesn't have the 2^31 - 1 limitation that the current engine does, there is a limit of 2^64 - 1. And since Polaris models should be architected differently from Hypermodels and Classic models - for instance, you will not need to use concatenations to flatten multiple lists into one - Polaris will require a rebuild of the model. Lastly, yes, clients will have to pay for it, but I am not aware of the costs.
To add slightly to what Rob said: the 2^64 (roughly 18 quintillion) cell limit in Polaris is *per line item*. There is nothing to stop you from having multiple 2^64 - 1 cell line items, and if each one has only a single populated cell, that is still only ~24 bytes (plus some overhead) each.
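The two per-line-item limits mentioned in this thread can be checked directly; this small Python snippet just evaluates the figures quoted above (the byte cost in the comment is the approximate figure from the earlier post, not an exact measurement):

```python
# Per-line-item cell limits quoted in the thread, shown explicitly.
classic_limit = 2**31 - 1   # Classic engine: ~2.1 billion cells
polaris_limit = 2**64 - 1   # Polaris: ~18.4 quintillion cells

print(f"Classic: {classic_limit:,}")  # Classic: 2,147,483,647
print(f"Polaris: {polaris_limit:,}")  # Polaris: 18,446,744,073,709,551,615

# Per the post above, a maximally dimensioned Polaris line item with a
# single populated cell still costs only ~24 bytes (plus some overhead).
```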
As I understand from all the white papers and material, you cannot just switch a model from Classic to Polaris; it needs a rebuild, because Polaris does not support some formulas such as FINDITEM.
I see that you stated "standard Polaris workspaces (up to 130GB)". Does this mean we need to have Polaris-specific workspaces and import our models into them, or can we keep our legacy workspaces?