Dear gurus, I'd like to import raw accounting transaction data from our ERP. I created a module "Accounting Raw Data". We need to define a unique key for the module, such as Document No, so I created a new numbered list called "Sequence No". Does this mean I have to import the Document No values into the Sequence No list first, and then import the transaction data against that Document No? Am I correct? BR Yoshi
A couple of best practices:
1. It's a good idea to store the attributes of the raw data/transactions in a module as line items rather than as list properties.
2. It may be good to implement a data hub to hold the raw transactions, rather than keeping them in the actual user-facing model. The hub model transforms the raw data (e.g. rolls it up by GL account/cost center) and passes it on to the actual planning model. When the volume of raw transactions is huge, this is the better way to implement it.
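As a sketch of what that hub-side transformation might look like (plain Python standing in for the hub model's aggregation step; the field names here are made up for illustration):

```python
from collections import defaultdict

# Hypothetical raw ERP transactions held in the data hub.
raw_transactions = [
    {"doc_no": "D001", "gl_account": "4000", "cost_center": "CC10", "amount": 120.0},
    {"doc_no": "D002", "gl_account": "4000", "cost_center": "CC10", "amount": 80.0},
    {"doc_no": "D003", "gl_account": "5000", "cost_center": "CC20", "amount": 50.0},
]

# Roll up to GL account / cost center before passing to the planning model,
# so the planning model never has to hold transaction-level detail.
rolled_up = defaultdict(float)
for txn in raw_transactions:
    rolled_up[(txn["gl_account"], txn["cost_center"])] += txn["amount"]

print(dict(rolled_up))
# {('4000', 'CC10'): 200.0, ('5000', 'CC20'): 50.0}
```

The planning model then only receives one row per GL account/cost center combination instead of one per document.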
@MandarAnanda Wow, it's an old thread. Here are a couple of notes on it.
1. I agree that it's better to store attributes in a module rather than in list properties. I have found it useful to format the attribute line items as lists rather than as plain text, so that you catch import errors straight away (e.g. a missing cost center or account). Much depends on the source data (whether or not you have to update the lists from the fact data).
Unfortunately it's quite common that the source data does not include a unique identifier, so you must use a combination of properties to identify unique items. In that case I don't think you can store the attributes in a module without using the FINDITEM function against the list properties.
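To sketch the composite-key workaround I mean (plain Python illustrating the idea, not Anaplan syntax; in Anaplan you would build the same concatenated code in a line item and match it with FINDITEM; the field names are hypothetical):

```python
# Hypothetical source rows with no unique identifier of their own.
rows = [
    {"company": "C01", "doc_no": "1001", "line": "1", "amount": 250.0},
    {"company": "C01", "doc_no": "1001", "line": "2", "amount": 100.0},
]

DELIM = "_"  # pick a delimiter that never appears in the source fields

def composite_key(row):
    """Build a unique code from the combination of properties."""
    return DELIM.join([row["company"], row["doc_no"], row["line"]])

keys = [composite_key(r) for r in rows]
assert len(keys) == len(set(keys)), "composite key is not unique"
print(keys)  # ['C01_1001_1', 'C01_1001_2']
```

The important part is choosing a delimiter that cannot occur inside any of the concatenated fields, otherwise two different combinations can collide into the same key.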
2. This is an issue I have faced often: at what level should the data be transferred to the actual planning model without using lots of memory? The best practice is to use a data hub and consolidate the data, but at the same time users want to be able to drill down to transactions. I have used the data hub to keep the same data consistent across all models and consolidated it whenever that is sufficient. If drill-down to transactions must be enabled, I have tried to transfer only the relevant transactions from the data hub and/or used subsets on the transaction lists to keep the actuals modules as small as possible.
1. I agree that it is better to hold properties in a module. The only downside is that you need two separate import actions, but fortunately in Anaplan they are quick to create.
2. Often this is a good idea, but on the other hand it's very common that you are required to import transactions into the actual planning model. This is because users need to be able to drill down to transactions, and because you might have so many dimensions in the transaction data (profit center, product, customer, company, partner, salesperson, etc.) that it is not wise to build a huge multidimensional module holding the data, or multiple smaller modules holding the same data from different perspectives (with different dimensions). Of course the best method depends on the use case. Using subsets for transactions might also help in some cases.
Agreed that sometimes there is a requirement for transactional analysis, but an alternative approach is to create a set of filter line items dimensioned by the Users list. That can give users more flexible and comprehensive analysis than the limited single drill-to-transaction.
I would also advocate holding this analysis in a separate model (or keeping it in the data hub and restricting access for the end users). That way your planning model is just for planning and doesn't get cluttered up with large modules purely for historic analysis.
Remember, list items alone take up around 500 bytes each, so a large transaction list can add GBs to the overall model size (10 million transactions = 5 GB before you add any line items).
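The arithmetic behind that figure, as a quick sketch:

```python
BYTES_PER_LIST_ITEM = 500  # approximate structural cost of one list item
transactions = 10_000_000

size_bytes = transactions * BYTES_PER_LIST_ITEM
size_gb = size_bytes / 1_000_000_000
print(f"{size_gb} GB")  # 5.0 GB
```

And that is before any line items dimensioned by the transaction list, each of which multiplies against those 10 million items.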
The other benefit is that you do not need a bunch of SUM formulas in your planning model to aggregate the data; it can be imported directly into the multidimensional module from the source.
David
@DavidSmith One question related to the consolidation of numbers in the data hub. Consolidating the data with SUMs first and then using that module as the import source is the recommended way. How about using the flat transaction module as the source directly? Anaplan sums the numbers when multiple rows are imported into the same cell. I haven't used this method, but it would be interesting to hear whether it's not recommended and why (probably performance issues)?
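To illustrate the summing behaviour I mean (plain Python standing in for the import engine; the field names are made up):

```python
from collections import defaultdict

# Flat transaction rows used directly as the import source.
source_rows = [
    {"account": "4000", "month": "Jan", "amount": 10.0},
    {"account": "4000", "month": "Jan", "amount": 15.0},  # same target cell
    {"account": "4000", "month": "Feb", "amount": 7.0},
]

# Target module cells keyed by (account, month); rows mapping to the
# same cell are accumulated rather than overwritten.
target = defaultdict(float)
for row in source_rows:
    target[(row["account"], row["month"])] += row["amount"]

print(dict(target))  # {('4000', 'Jan'): 25.0, ('4000', 'Feb'): 7.0}
```

So the aggregation happens either way; the question is whether it is better done in the hub before export or by the import itself.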
Good question
There are, as always, different factors to consider, and it is a trade-off between the calculations in the data hub, the export/import processing time, the volume of transactions being exported/imported, and the size of the data hub.
Take the situation where the data in the hub changes infrequently as part of a cycle: the downstream import time is then less relevant. So if the import to the target takes a little longer to process because there are more "rows" to process and aggregate, does it matter? If you are exporting frequently (and especially if the exports run during the working day), then the import time should be optimised to minimise user disruption.
The same applies to the import into the data hub. Not consolidating in the hub saves model size as well as calculation time, meaning the import to the hub is quicker. Again, does it matter? Taking the above example, summing the data after the import to the hub will increase the overall processing time, but again, does it matter?
In most cases it will be more efficient to pre-aggregate and then export, as the data hub shouldn't have too many calculations and keeping the export/import time down is preferable. Increasing the data hub processing time is preferable to increasing the downstream import time.
However, in large data hubs we should look to optimise in the same way we would in a planning model, so in that case we may need to compromise on the export/import time in order to improve the performance of the hub itself.
I hope that helps
David