Import raw data of transaction

Dear gurus, I'd like to import raw accounting transaction data from our ERP. I created a module "Accounting Raw data". In this case, we need to define a unique key in the module, such as Document No, so I created a new list "Sequence No" as a numbered list. In this case, I have to import the Document No into the Sequence No list first, then import the transaction data with this Document No. Am I correct? BR Yoshi

Answers

  • Hi,

    Here's one way I have found useful:
    - Import your accounting raw data into a numbered list. Select the transaction number as the unique identifier. You will then have a numbered list with the same number of rows as your original data. Add the columns of your source data as properties of the numbered list. Select formats for your columns; usually I have used text, number and date at this point.
    - Create a module with this list only. Add line items for the columns you will use and read data into them from the properties of your numbered list. Use list formatted line items in this module instead of text. This way the fields are usable in formulas in other modules.

    For example you could have a list #Transactions with properties date (date formatted), cost center id (text formatted), account id (text formatted) and amount (number). In your module you would have the fields Transaction date, Cost center (list formatted), Account (list formatted) and Amount (number):
    Transaction date = #Transactions.date
    Cost center = FINDITEM(Name of cost center list, #Transactions.cost center id)
    Account = FINDITEM(Name of account list, #Transactions.account id)
    Amount = #Transactions.Amount

    Now you could easily read data from this module and drill down to transactions using a formula such as:
    Transactions Module.Amount[SUM: Transactions Module.Transaction date, SUM: Transactions Module.Cost center, SUM: Transactions Module.Account]

    You could also add boolean line items to this module for deleting data via selection, in case you don't want to keep lots of history in Anaplan or you want to delete parts of the data for some reason. I have also added logic to this kind of module for handling imperfect data. For example, we had actual sales data for products that were not planned in Anaplan, but the actuals were needed. I used a formula to determine whether the product was found in the product list and, if not, the field would contain the value "Other product".
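
    As a rough sketch of that fallback logic (the list and property names here are just placeholders), the list formatted product line item could be:
    Product = IF ISNOTBLANK(FINDITEM(Products, #Transactions.product id)) THEN FINDITEM(Products, #Transactions.product id) ELSE FINDITEM(Products, "Other product")
    This assumes an "Other product" item has been added to the product list to act as the catch-all bucket.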

    I'm not sure if I managed to answer your question but I hope you'll find this helpful! :)

    Br,
    Jaakko
  • Jaakko,

    Thanks for your feedback.
    Seems like a good idea.
    With your method, it seems I don't need to import any unique numbers in advance.
    I'll try.

    Yoshi
  • @PrevContributor , @Jaakko ,

    A couple of best practices:

    1. It's a good idea to store attributes of the raw data/transaction in a module as line items rather than as properties in a list.

    2. It may be good to implement a data hub to hold the raw transactions, rather than having them in the actual user-facing model. The hub model will transform the raw data (for example, rolling it up by GL account/cost center) and pass it on to the actual planning model; a sketch follows below. When the volume of raw transactions is huge, this is a better way to implement it.
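
    As a minimal sketch of the hub-side transformation (module and line item names are hypothetical), a consolidated module in the hub dimensioned by GL Account and Cost Center could be populated with:
    Amount = Raw Transactions.Amount[SUM: Raw Transactions.Account, SUM: Raw Transactions.Cost center]
    The planning model then imports from a saved view of this consolidated module instead of from the transaction list itself.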

  • @MandarAnanda Wow, it's an old thread. Here are a couple of notes related to this.

    1. I agree that it's better to store attributes in a module rather than in properties of a list. I have found it useful to have the attributes formatted as lists rather than text only; that way you will catch import errors straight away (e.g. a missing cost center or account). It depends a lot on the source data (whether you have to update lists from the fact data or not).

    Unfortunately it's quite common that the source data does not include a unique identifier and you must therefore use a combination of properties to identify unique items. I guess in that case you can't store the attributes in a module without using the FINDITEM function to reference the properties of the list.

    2. This is an issue that I have faced often: at what level should the data be transferred to the actual planning model without using lots of memory? The best practice is to use a data hub and consolidate the data, but at the same time users want to be able to drill down to transactions. I have used the data hub to hold the same data throughout all models and consolidated it whenever that is enough. If drilldown to transactions must be enabled, I have tried to transfer only the relevant transactions from the data hub (see the sketch below) and/or used subsets for transaction lists to keep the modules with actuals as small as possible.
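
    As a sketch of that "relevant transactions only" filter (names and the hard-coded date are just examples), add a boolean line item to the transaction module in the hub and use it to filter the saved view that acts as the import source:
    Transfer? = Transaction date >= DATE(2018, 1, 1)
    Only the rows where the boolean is TRUE are then transferred to the planning model.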

  • 1. I agree that it is better to hold properties in a module. The only downside is that you need two separate import actions, but fortunately with Anaplan they are quick to create.

    2. Often this is a good idea but on the other hand it's very common that you are required to import transactions to the actual planning model. This is because users need to be able to drill down to transactions and because you might have so many dimensions in the transaction data (profit center, product, customer, company, partner, sales person etc.) that it is not wise to build a huge multidimensional module holding the data or to build multiple smaller modules holding the same data from different perspectives (with different dimensions). Of course the best method is dependent on the use case. Using subsets for transactions might also be helpful in some cases.

  • @Jaakko 

    Agreed that sometimes there is a requirement for transactional analysis, but an alternative approach is to create a set of filter line items using the user list as a dimension. That can give the users a more flexible and comprehensive analysis than the limited single drill-down to transactions.
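
    A sketch of that pattern (module and line item names are just examples): a selection module dimensioned by Users with a list formatted line item Chosen Cost Center, and a boolean line item in a module dimensioned by Users and the transaction list:
    Show? = Transactions Module.Cost center = Cost Center Selection.Chosen Cost Center
    Each user then filters the transaction view with their own boolean rather than relying on a single drill-down path.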

    I would also advocate holding this analysis in a separate model (or keeping it in the data hub and restricting access for the end users). That way your planning model is just for planning and doesn't get cluttered up with large modules purely for historical analysis.

    Remember that list items alone take up 500 bytes of space each, so a large transaction list can add GBs to the overall model size (10 million transactions = 5 GB before you add the line items in).

    The other benefit of this is that you do not need a bunch of SUM formulae in your planning model to aggregate the data; it can be imported directly into the multidimensional module from the source.

    David

  • @DavidSmith  One question related to the consolidation of numbers in the data hub. Consolidating the data with SUMs first and then using that module as the import source is the recommended way. How about using the flat transaction module as the source directly? Anaplan sums numbers when multiple rows are imported into the same cell. I haven't used this method, but it would be interesting to hear whether it's not recommended and why (probably because of performance issues)?
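
    To illustrate what I mean (made-up rows): if a flat source view like this were imported straight into a module dimensioned by Account and Cost Center,
    Account  Cost Center  Amount
    A100     CC1          100
    A100     CC1          50
    A200     CC1          70
    the two A100/CC1 rows would end up as 150 in the same target cell.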

  • @Jaakko 

    Good question

    There are, as always, different factors to consider, though, and it is a trade-off between calculations in the data hub and the export/import process time, as well as the volume of transactions being exported/imported and the size of the data hub.

    If you take the situation where the data in the hub changes infrequently as part of a cycle, then the import time downstream is less relevant. So if the import to the target takes a little longer to process because there are more "rows" to process and aggregate, does it matter? If you are exporting frequently (and especially if the exports are during the working day), then the import time should be optimised to minimise user disruption.

    The same applies to the import to the data hub. Not consolidating in the hub saves model size as well as calculation time, meaning the import to the hub is quicker. Again, does it matter? Taking the above example, summing the data after import to the hub will increase the overall time to process, but again, does it matter?

    In most cases it will be more efficient to pre-aggregate and then export, as the data hub shouldn't have too many calculations and keeping the export/import time down is preferable. Increasing the data hub processing time is preferable to increasing the downstream import time.

    However, in large data hubs we should look to optimise in the same way we would in a planning model, so in that case we may need to compromise the export/import time in order to improve the performance in the hub itself.

    I hope that helps

    David

  • Hi

    It's always good to store your transactional data in a data hub (a separate model) and then use actions for the imports. As a best practice, one should always create a list for the unique identifiers and then use that list as a dimension in the module.

    Thanks

    Yash

  • What have you used as the unique identifier when the source data does not include one (a numbered transaction list created with unique items identified by a combination of properties)? I have concatenated the combination of properties as text, but I have faced a problem where 60 characters is not enough for concatenating all these properties. You can also parse the #-character away from the item name and use that, but I don't think that is an ideal way.
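
    For reference, the concatenated key I mean is built along these lines (assuming text line items or properties for the source columns; the names are just examples), which easily exceeds 60 characters:
    Unique Key = company id & "_" & cost center id & "_" & account id & "_" & document no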

  • We should always try and push to get a code, but obviously when the code is greater than 60 characters, you would have to use a numbered list.

    You should look carefully, though, at whether you really need all the attributes in a flat list. In a lot of cases, especially with time, it is faster and smaller to have some dimensionality in your transaction data; a rough illustration follows below.
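
    As an illustration (names are placeholders): instead of one flat list whose key combines date, cost center and account, you could hold a smaller Transaction Line list and put Time on the module:
    Transaction Detail module
    Dimensions: Transaction Line (flat list), Time (months)
    Line items: Cost center (list formatted), Account (list formatted), Amount (number)
    The date then no longer needs to be part of the unique key and the flat list stays much smaller.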

    We will shortly be publishing some findings on this subject

    David