I have a Line Item "Conversion %" in a module that applies to a fake time dimension list called "Aging Days" that ranges from 0 to 365.
Aging Days:    0    1      2        3        4     5     ...   365
Conversion %:  1%   1.5%   <blank>  <blank>  1.6%  1.6%  ...   26.5%
As the data above shows, this is a cumulative % curve over 365 days. Therefore, for several days the values remain constant, and in many cases they are blank (in which case we wish to assume the previous non-blank value).
I am trying to calculate an extrapolated curve that always has a value (i.e., give me the previous non-blank value wherever there are blanks - but over the "Aging Days" list, not the time dimension). In the above example, both blank values should be filled with 1.5%. Note that there is no control over how many consecutive blanks can occur, so using a "Previous Aging Day" list property on the fake time dimension will not help. Adding another line item to CUMULATE over the list doesn't help either.
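The fill-forward behavior being asked for can be sketched outside Anaplan. This is a plain Python illustration of the logic (not Anaplan syntax), with `None` standing in for a blank cell:

```python
def fill_forward(values):
    """Carry the previous non-blank value forward over the Aging Days list.

    Blanks are represented as None; any run of blanks, however long,
    takes the last non-blank value seen so far.
    """
    result = []
    last = None
    for v in values:
        if v is not None:
            last = v
        result.append(last)
    return result

# The example curve for days 0-5 (abbreviated):
curve = [0.01, 0.015, None, None, 0.016, 0.016]
filled = fill_forward(curve)  # both blanks become 0.015
```

Because the loop remembers only the last non-blank value, it handles any number of consecutive blanks, which is exactly what a single "Previous Aging Day" list property cannot do.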
Also, I wish to have another line item to calculate max, average, etc. if possible, over the "Aging Days" schedule/list.
Can someone help, please?
Here is a solution that works...
Here is my data module; it should be the same as yours:
Create a module called Aging Day Driver with only one line item, Start Date which is formatted as a Date. The date entered should be the first day of your time scale.
Create an attribute module from your Aging Days list (Aging Days as the only list in the Applies To) with two line items: Start Date and End Date with the following formulas:
Start Date: Aging Day Driver.Start Date + VALUE(NAME(ITEM(Aging Days)))
End Date: Start Date
Create a module dimensionalized by Time, but at the Day level. This way, you can use the Previous function which will help you out. Turn off all summaries.
Aging Days: OFFSET(Aging Days + 1, -1, 0)
Conversion: Aging Days Data.Conversion%[LOOKUP: Link to Aging Days]
Update Conversion: IF Conversion = 0 THEN PREVIOUS(Update Conversion) ELSE Conversion
Link to Aging Days: FINDITEM(Aging Days, TEXT(Aging Days))
Now we need to get the data (Update Conversion) from the "Time" module back to the original module, but you can't do a SUM on day or week data, so we need to use the TIMESUM function. In your original module, create a new line item called Conversion % New with the following formula: TIMESUM(Aging Days to Time.Update Conversion, Aging Days Attributes.Start Date, Aging Days Attributes.End Date)
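The module chain above can be summarized in one sketch: place each Aging Day on a real calendar day, fill blanks forward with PREVIOUS-style logic at the Day level, then read each day's value back (TIMESUM over a one-day Start Date/End Date range picks up a single value). A minimal Python illustration of that flow, not Anaplan code:

```python
from datetime import date, timedelta

def extrapolate_via_day_scale(conversion_by_aging_day, start_date):
    """Mirror the three-module chain:
    1. Aging Day attributes: Start Date = driver date + aging day number.
    2. Day-level Time module: fill blanks (zeroes) forward, like
       IF Conversion = 0 THEN PREVIOUS(Update Conversion) ELSE Conversion.
    3. Original module: TIMESUM over the one-day Start/End range,
       which reads back exactly one day's value per Aging Day.
    """
    day_for_aging = {d: start_date + timedelta(days=d)
                     for d in conversion_by_aging_day}
    filled = {}
    prev = 0.0
    for d in sorted(conversion_by_aging_day):
        v = conversion_by_aging_day[d]
        prev = prev if v == 0 else v
        filled[day_for_aging[d]] = prev
    return {d: filled[day_for_aging[d]] for d in conversion_by_aging_day}

curve = {0: 0.01, 1: 0.015, 2: 0.0, 3: 0.0, 4: 0.016}
result = extrapolate_via_day_scale(curve, date(2014, 1, 1))
```

Here `curve` and the start date are illustrative inputs; the point is that borrowing the real Time dimension is what makes the PREVIOUS-style carry-forward possible.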
Hope this helps,
Thank you for your quick and detailed response! The solution works wonderfully.
Not that I have a better solution, but I do feel that such things (especially with fake time lists as dimensions) are simple operations, yet they are hard to perform and need roundabout solutions in Anaplan (several modules here, for example).
Keeping manipulation of data inputs/cleanup/extrapolation tasks outside of Anaplan (via Excel or some small script) is the way to keep the model simple. Do you agree? Your thoughts are appreciated.
I guess it depends on what exactly you mean. I try to keep the business process within Anaplan, but for the "actual" creation of the data set (all data from the source system(s)), I totally agree those should be cleaned up prior to arriving in Anaplan. This means the proper aggregation, proper codes for the lists, etc. should be created at the source level instead of having Anaplan "figure" that out. Also, Anaplan is a very different tool from what I have played with in the past (Essbase/BPC): those solutions wanted/needed you to build everything in one "template", whereas Anaplan thrives on multiple modules.
Does this help or did I answer your question?
Yes, it does. I was trying to figure out whether things like extrapolation should live outside of Anaplan (in case data comes with blanks for genuine reasons - the original question I started this thread for), given that we need to depend on so many modules just to smooth the curve. But you are right about how to think about Anaplan - it thrives on many modules. Apart from that, I completely agree that only the business process should sit inside of Anaplan.
Thanks again, Rob.
As a follow up to this question: I now have a similar problem with (fortunately), time dimensioned data.
Assume a conversion curve like the one below, from 2014 to 2021 (up to 2018 is actuals, and forecast thereafter), with monthly data:
Time:          Jan 14   Feb 14   Mar 14   ...   Jan 15   Feb 15   Mar 15   ...   Jan 19    Feb 19    Mar 19   ...
Conversion %:  1.5%     1.6%     1.2%     ...   1.6%     1.0%     1.1%     ...   <blank>   <blank>   <blank>  ...
To extrapolate this curve, I need to consider monthly cohorts, i.e., seasonality. For Jan 19, I want the simple average of the last 5 years of JAN data; we adjust the Jan 19 projection for seasonality by taking only January values from the last 5 years (therefore, using PREVIOUS() will not work with monthly time-dimensioned data). We need a moving average over the last 5 years (i.e., if data is blank, we just carry over the latest non-blank January data and keep averaging the last 5 years for however far into the future we want).
Assume that this is a time dimensioned module.
I am trying an approach where I create 5 line items each one having data like:
LAG(Conversion, 12, 0) -- "1yr lag"
LAG(Conversion, 24, 0) -- "2yr lag" ... ...
and so on till LAG(Conversion, 60, 0) -- "5 yr lag"
Another line item to COUNT number of prior year lags that are non-zero (for denominator of average)
Conversion Extrapolated = SUM( all lag line items) / count
Is this the best approach? Any other ideas?
Can you not use the previously answered question but also use the function MovingSum()?
One of the reasons I could not use MOVINGSUM for this problem was because I have blanks/zeroes in the data that I wanted to ignore. Therefore, using a LAG and manually calculated COUNT was better.
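The LAG + manual COUNT approach can be sketched as follows. This is a Python illustration of the logic, not Anaplan code; the series is a flat list of monthly values with 0.0 standing in for a blank, and the year/period counts are the ones from the thread (5 one-year lags over monthly data):

```python
def seasonal_extrapolate(series, years=5, period=12):
    """Fill blanks with the average of the same calendar month over the
    previous `years` years, ignoring blank/zero lags - the LAG + manual
    COUNT approach. A plain MOVINGSUM would average the zeroes in,
    which is exactly the problem described above.
    """
    out = list(series)
    for t in range(len(out)):
        if out[t] == 0.0:
            # LAG(Conversion, 12, 0) ... LAG(Conversion, 60, 0)
            lags = [out[t - k * period]
                    for k in range(1, years + 1) if t - k * period >= 0]
            # COUNT of non-zero lags forms the denominator
            nonzero = [v for v in lags if v != 0.0]
            if nonzero:
                out[t] = sum(nonzero) / len(nonzero)
    return out
```

Because filled values are written back into `out` before later months are processed, each newly extrapolated January feeds the next year's average, so the carry-forward continues however far into the future the series extends, matching the behavior described in the question.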