Is there a way to migrate existing classic workspace model to a Polaris workspace?
Best Answer
@Sukh_Sandhu you had 2 questions there. One as part of the subject and another as part of body of the message.
- Can you migrate an existing Classic workspace to Polaris? It would be more about migrating the "model" from a Classic workspace into Polaris. The answer is "it depends", as there are functions in Classic that are not supported in Polaris at the moment. If your Classic model uses any of those functions, then you would not be able to import it into Polaris.
- Can we build model-to-model import processes between Classic and Polaris? Yes. I believe this is the question Rob was answering.
2
Answers
Hi @Sukh_Sandhu From my understanding there are limitations with the Polaris engine, like it doesn't support FINDITEM, so I don't think we can directly move a model from a Classic workspace to a Polaris workspace.
1 -
Yes
1 -
Thank You All !
0 -
Really, you shouldn't "migrate" a Classic model to Polaris, as the architecture of the new Polaris model should change (model more naturally vs. concatenating list members). Also, the way you create the logic should adapt to Polaris as well: the logic should look for data that is non-default vs. in Classic you just create it. In Polaris, you have to be careful about and understand the fan-out of the logic, and create it appropriately.
Hope this helps,
Rob
0 -
@rob_marshall good point. I was more responding to the question of whether you can vs. whether you should. Could you elaborate on your comment "the logic should look for data that is non-default vs. in Classic you just create it"? It may be easier for me to understand if you can provide an example.
0 -
Yes, happy to, and I should have done that. Also, I misspoke when I said the logic should look for data that is non-default; rather, it should look for data that is valid. Think about a subset of employees and whether they are Active or Not Active, where the subset is Active. In Classic, we just create it and are done with it. The vast majority of employees will be Active, so say you want to see the number of Non Active employees: in Classic, you create the logic as Not Active. In Polaris, you may want to flip that, so the Non Active employees are the only ones checked in a subset.
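Rob's point can be sketched outside Anaplan with some rough arithmetic. The head count below is invented for illustration, and it assumes a sparse engine only stores cells that differ from their default (for a Boolean, the default is FALSE):

```python
# Illustrative only: in a sparse engine, only non-default cells consume
# space. For a Boolean line item, FALSE is the default, so the fewer TRUE
# cells your logic creates, the less space it uses.

employees = 100_000
active = 97_000  # hypothetical: the vast majority of employees are Active

# Flag the majority ("Is Active?"): ~97,000 TRUE cells get stored.
stored_if_flag_active = active

# Flip it ("Is Not Active?"): only the ~3,000 exceptions get stored.
stored_if_flag_not_active = employees - active

print(stored_if_flag_active, stored_if_flag_not_active)
```

Same answer either way; flipping the flag just moves the stored cells from the majority to the minority, which is what makes it cheaper in a sparse engine.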
Also, something to be careful about. Say you want to do a count in a module. In Classic, it is very simple as you just create a line item that is hardcoded to 1. If you did that in Polaris in a module having 10 dimensions with sums turned on, you just made that MASSIVELY dense, and with line item formats being roughly 3x as large as the same format in Classic, you could crush the size of the model. I'm not saying you can't get counts in Polaris, you just have to think about how to do it differently.
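To put rough numbers on that fan-out: the ten dimension sizes below are invented purely for illustration, and the 8-byte cell size and 3x multiplier are the rough figures from this discussion, not official sizing:

```python
from math import prod

# Ten hypothetical dimensions (sizes made up for illustration).
dim_sizes = [100, 50, 20, 12, 10, 8, 5, 4, 3, 2]

# A line item hardcoded to 1 populates every combination, so nothing stays
# at its default and a sparse engine must store every single cell.
total_cells = prod(dim_sizes)

classic_bytes = total_cells * 8    # Classic: ~8 bytes per numeric cell
polaris_bytes = total_cells * 24   # Polaris: roughly 3x per populated cell

print(f"{total_cells:,} cells")
```

Even with these modest sizes, that is over 11 billion fully populated cells, and the sparse engine pays its ~3x per-cell premium on every one of them.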
Again, we don't advocate migrating models to Polaris because the engines are completely different and thus, the logic needs to be different. You can still get the same answers, just go about it differently.
Does that help?
0 -
@rob_marshall thanks. Makes sense
0 -
Hello! Two questions:
- Since I need to choose between a Classic workspace or a Polaris workspace based on how dense or sparse my data model will be, what should I do for mixed cases? I might face performance issues in both scenarios. Does anyone have experience with this?
- Is there any rule for understanding in advance which workspace to purchase? It's difficult and time-consuming to analyze the data of every possible dataset.
0 -
- It's not a question of either Classic or Polaris; you can actually have both. For example, if your Anaplan tenant has been allocated 100GB in total, you can split this between Classic and Polaris as you deem fit (e.g. a 70GB Classic workspace and 30GB for Polaris). By default you would get Classic; Polaris is an extra "add-on" to your initial license. The question becomes whether you would need Polaris as an extra "add-on".
- Whether you should get Polaris will depend on the level of sparsity in your data. In my opinion, the cost benefit of getting Polaris is directly proportional to the level of data sparsity. You need to consider the following to determine what level of sparsity makes buying Polaris as an add-on cost effective:
- Polaris takes 3 times the amount of space for populated cells. For example, say you have a module with one numeric line item (8 bytes per cell) dimensioned by one list with 100,000 list items. In Classic this will consume 800,000 bytes (8 bytes x 100,000 list members). But in Polaris, if only 20% of cells in the same module have a value other than 0 (80% sparsity), then this would only use 480,000 bytes (8 bytes x 3 x 20,000 populated list members). That's 40% less space than in Classic. But if 60% of cells have a non-zero value (40% sparsity), then this would use 1,440,000 bytes (8 bytes x 3 x 60,000 populated list members), which is 640,000 bytes more than in Classic. The break-even point is 66.6% sparsity (33.3% of cells have a value other than 0).
- Level of technical complexity that Polaris can reduce. In Classic, a lot of composite lists are created with the intention of minimising sparsity, such as the Product Customer list below, which uses 2 composite list hierarchies to generate an intersection between the Customer and Product hierarchies:
Product Customer Hierarchy
- P1 Brand
- P2 Sub Brand
- P3 Product
- P4 Product Customer

Customer Product Hierarchy
- C1 Region
- C2 Country
- C3 Territory
- C4 Customer Product
But in Polaris you can potentially have both Customer and Product as separate dimensions in a module, without requiring you to create composite lists, as long as you stay below the dimension limit of 64.
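The break-even arithmetic above can be double-checked with a short sketch. The 8-byte cell size and 3x Polaris overhead are the figures from the example above; treat them as rough planning numbers, not official Anaplan sizing:

```python
def classic_bytes(total_cells, cell_bytes=8):
    """Classic stores every cell in the module, populated or not."""
    return total_cells * cell_bytes

def polaris_bytes(total_cells, populated_fraction, cell_bytes=8, overhead=3):
    """Polaris stores only populated cells, at roughly 3x the cell size."""
    return int(total_cells * populated_fraction * cell_bytes * overhead)

cells = 100_000
print(classic_bytes(cells))        # all cells stored in Classic
print(polaris_bytes(cells, 0.20))  # 80% sparsity: Polaris wins
print(polaris_bytes(cells, 0.60))  # 40% sparsity: Polaris loses

# Break-even: Polaris is smaller while populated_fraction < 1 / overhead
# (~33.3% populated), i.e. sparsity above ~66.6%.
```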
Here are some of my "personal" rules for assessment:
1. Number of list hierarchies that need to be merged to generate a composite list. If you have 3 or more list hierarchies that need to be merged into a composite hierarchy, then it may be good to start assessing Polaris as a candidate. For example, say you need to create a composite hierarchy that covers the following list hierarchies:
- Customer Hierarchy
- Product Hierarchy
- Promotions Hierarchy
- General Ledger Hierarchy
The complexity I've experienced is that I ended up creating a lot of composite hierarchies to control space consumption in Classic. In hindsight, we should have gone with Polaris at the start. Given we started in Classic, we later ended up getting a Hyper Model.
2. Models that require global coverage. You will generally find that the intersection of certain dimensions, such as customers and products, across global entities has a high level of sparsity, as both products and customers have regional scopes. E.g. a global company with 1,000 products might only sell 100 products in a specific country/region, 150 in only 2 countries, etc. The same goes for customers.
3. List member life span. This is a term I just cooked up now lol. It is highly relevant for list hierarchies used in a lot of modules with a time dimension. So what do I mean by this? Let's use weekly supermarket promotions, like 50% off Coca Cola soft drinks for the 1st week of May only. So the list is a promotion hierarchy. Promotions are relatively short-lived, with the majority lasting less than two weeks. But the minimum time dimension you can have in Anaplan is one year, so you end up with a high level of sparsity because the list members have a short life span. I've seen a similar situation with large retailers that had a habit of switching product suppliers nearly every half year to get the best bargain.
Hope this assists.
1 -
@TristanS Hi there, I appreciate the thoroughness of your response, it's much clearer to me now.
However, I've read from official Anaplan sources that the workspace choice is binding: either Polaris or Classic needs to be selected.
Also, your byte calculations are clear. But how can I anticipate the evolution of my dataset over time, i.e. whether it will become more or less sparse in, say, x years? Once the data model is created, is it challenging or impossible to switch?
0 -
Let me attempt to answer your questions:
- Polaris is an extra cost, so if you didn't pay for it, then you don't have it.
- Anticipating the dataset - obviously, you can't see into the future, but you should base it on the use case (to a certain extent) as well as what kind of reports/outputs your users need/require. With that said, what is the use case being implemented?
- No, there is no "easy" button to switch from one engine to the other, because you would implement the same use case completely differently (think concatenating lists vs. a more natural way).
1