Hi @Alessio_Pagliano Sorry for the delayed answer. In commercial use, predictive analytics can be applied to different use cases. Take territory and quota management: with more accurate, data-driven predictions, you can set better quotas and assign territories more insightfully. Algorithms are of course also helpful in sales forecasting and pricing use cases. I did a few projects where predictive analytics was used to understand the dynamic price elasticity of products in different regions for different customer segments. Knowing this, you can drive more effective promotions. There is no single approach I can recommend. Every project is different, every client infrastructure is different, so it's hard to come up with a standard approach. But starting with a PoC is definitely a good start; this is what I also do to give people an idea of how well such an approach can work. There are many online courses you can follow to get started with this. What is sure is that being a Master Anaplanner does not mean you know anything about predictive analytics. It's a different skill set, a different toolbox to understand, and a different way of thinking. I believe the role of us Master Anaplanners is to collaborate very closely with the data scientists in our organization and to help them make their insights (via predictive analytics) more digestible for our planners, our end users.
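To make the price-elasticity point concrete, here is a minimal sketch of how such an elasticity could be estimated per customer segment with a log-log regression, where the fitted slope is the elasticity. The file name and columns (segment, price, volume) are hypothetical, not from a real project.

```python
# Hypothetical sketch: price elasticity per customer segment via a log-log
# regression. The slope of log(volume) on log(price) is the elasticity.
# File and column names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

sales = pd.read_csv("sales_history.csv")   # assumed columns: segment, price, volume

for segment, grp in sales.groupby("segment"):
    X = sm.add_constant(np.log(grp["price"]))        # intercept + log(price)
    fit = sm.OLS(np.log(grp["volume"]), X).fit()
    print(f"{segment}: elasticity = {fit.params['price']:.2f}")
```

A slope of, say, -1.8 in one segment versus -0.4 in another would tell you where a price promotion moves volume the most.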
Hi @hendersonmj , thanks for your question - good point! One of the most important topics (and, by the way, where data scientists spend most of their time) is cleansing data. Once you start merging data from different data sources, you see this issue much more than when you stay within one data set. It depends a bit on the magnitude of the data and how complex your model is, but part of the algorithm's script is exactly about this process. There are also packages in R and Python that can help you with such data cleaning. I always tell my team: a data format that differs between data sources is not an issue; it's a different story when the format is not consistent within a source. Data scientists write rules to merge, transform, split, ... data so that the formats match. Every time the algorithm is triggered, the computer goes through this recipe and performs the same cleaning rules. It would be too nice if an algorithm were only about data mining and pattern recognition. I hope this helps!
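As an illustration, a minimal pandas sketch of such a repeatable cleansing "recipe" might look like the following. The file names, columns, and formats are hypothetical assumptions.

```python
# Hypothetical sketch of a repeatable cleansing "recipe": every run applies
# the same rules so two sources can be merged on a consistent key and format.
import pandas as pd

erp = pd.read_csv("erp_sales.csv")    # e.g. dates as "2023-01-31", SKU "A-001"
crm = pd.read_csv("crm_orders.csv")   # e.g. dates as "31/01/2023", SKU "a001"

# Rule 1: align date formats between the two sources
erp["date"] = pd.to_datetime(erp["date"], format="%Y-%m-%d")
crm["date"] = pd.to_datetime(crm["date"], format="%d/%m/%Y")

# Rule 2: transform SKU codes to one convention (upper case, no hyphen)
for df in (erp, crm):
    df["sku"] = df["sku"].str.upper().str.replace("-", "", regex=False)

# Rule 3: drop exact duplicates and rows without a usable key
erp = erp.drop_duplicates().dropna(subset=["sku", "date"])

merged = erp.merge(crm, on=["sku", "date"], how="inner")
```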
Hi @fabien.junod , great to hear you are eager to get started with an easy predictive model. You can get started very easily, but if you want to grow, some basic knowledge of a coding language (Python, R, ...) will be helpful. So yes, some technical know-how is needed, ideally in one of the two most used data science languages. Understanding the drivers is definitely possible; that is exactly where a good model differentiates itself from a bad one. To give the easiest example in predictive analytics, imagine a multiple regression model. You try to predict volumes (y) and you have four explanatory variables you want to use: 1) actual volume (a), 2) promotions (b), 3) pipeline/order book (c), and 4) the GDP of the country you operate in (d). If you create a multiple regression model you get something like: y = v·a + w·b + x·c + z·d + error. The drivers' sensitivities are then given by the coefficients v, w, x, and z. Imagine that x is 0.5: if your order book value (c) increases by 1, your predicted volume (y) increases by 0.5 (1 times 0.5). This interpretability applies to many predictive models - so build the model the right way, and you will understand which drivers are important. Feel free to clarify your question if this answer is not enough. Thanks,
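Here is a runnable sketch of exactly that model, with made-up numbers for the four drivers; the fitted coefficients correspond to v, w, x, and z above.

```python
# Minimal sketch of the multiple regression described above:
# y (volume) explained by four hypothetical drivers a, b, c, d.
# All numbers are illustrative.
import pandas as pd
import statsmodels.api as sm

data = pd.DataFrame({
    "a": [100, 110, 120, 130, 125, 140],    # actual volume last period
    "b": [0, 1, 0, 1, 1, 0],                # promotion flag
    "c": [80, 95, 100, 120, 110, 130],      # order book
    "d": [1.9, 2.0, 2.1, 2.2, 2.1, 2.3],    # GDP growth, %
    "y": [105, 118, 122, 138, 130, 148],    # volume to predict
})

X = sm.add_constant(data[["a", "b", "c", "d"]])
fit = sm.OLS(data["y"], X).fit()
print(fit.params)   # coefficients for a, b, c, d = v, w, x, z in the text
```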
Hi Chris, cool to have a question from you! Thanks for this! I believe you mean the different algorithms when you speak about types of predictive analytics? From my point of view it's clear: the most used time-series models make sense to start with. We can use predictive analytics for all kinds of use cases in Anaplan. In HR planning, churn prediction is very popular; in the commercial area, the optimal price you can set for a certain customer or contract is very interesting to understand. But the most used use case is what we call 'time-series' predictive analytics. Here a 'time' element plays a role (e.g., it's important that the model recognizes that Feb comes after Jan and 2019 comes before 2020). These models can be used across different metrics (e.g., volume, prices, sales, margin, cost, EBIT, trucks, syringes, FTEs, ...). The best-performing one is impossible to define, as it's different for every data set, but having the 5-10 most used ones natively integrated in Anaplan (like the optimizer) would make sense. I am thinking about: ARIMA, ARIMAX, multiple regression, vector autoregression, time-series gradient boosting, LSTMs, recurrent neural networks, ... Hope this helps!
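As a starting point for the time-series family, a hedged sketch of fitting an ARIMA model with statsmodels could look like this; the series and the (p, d, q) order are illustrative only, since in practice the order is selected per data set.

```python
# Illustrative sketch: a simple ARIMA fit on a made-up monthly volume series.
# The (p, d, q) order is an assumption, not a recommendation.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

volumes = pd.Series(
    [120, 132, 129, 141, 150, 148, 155, 162, 170, 168, 175, 182],
    index=pd.date_range("2019-01-31", periods=12, freq="M"),
)

result = ARIMA(volumes, order=(1, 1, 1)).fit()
print(result.forecast(steps=3))   # predicted volumes for the next three months
```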
Hi @Jared Dolich Thanks for your question. On the first point: the statistical models in Anaplan are only single regression models; from a statistical point of view, you need a more complex program to really be able to include multiple regression models. The difference between the two is simple: a single regression model can only take one explanatory variable into account, as in y = ax + b (+e) (where x is the actual sales in this case), while multiple regression models (the more traditional regression models used in econometrics) can include several explanatory factors, as in y = a·x1 + b·x2 + c·x3 + d (+e). The world is too complex to model with single regression models, so I personally try to avoid them. They can work if you have highly seasonal, linearly growing products - but how often is that the case? Besides that, we also know that 85% of business performance is explained by external data, so excluding external data from predictive solutions is not ideal. So yes, I would go for a Python or R solution. On the second point: there are different possibilities, described in more detail on the Community pages: REST APIs, Anaplan Connect, or ETL solutions (like Informatica) can help you integrate your Python script with Anaplan. Note that you always need a place - a server - to execute the algorithm. In 95% of the projects I do, there is no need for a 'self-service' update of the predictive model. This means that scheduling the export and the update of the algorithms overnight (you can program this easily) is sufficient. For the other 5%, we create a URL that is linked to the server; when you click the URL, the process (export data from Anaplan, run the algorithm, and send the forecast back to Anaplan) is triggered. Hope this helps - if not, let me know!
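Structurally, the overnight setup described here boils down to three steps. The sketch below shows the skeleton only; the three helper functions are placeholders that, in a real project, would wrap Anaplan Connect, the REST API, or an ETL tool.

```python
# Structural sketch of the overnight flow described above. The helper
# functions are placeholders, not real Anaplan API calls.
import pandas as pd

def export_actuals_from_anaplan() -> pd.DataFrame:
    """Placeholder: pull the latest actuals out of the Anaplan model."""
    raise NotImplementedError

def run_forecast(actuals: pd.DataFrame) -> pd.DataFrame:
    """Run the predictive model (e.g. a regression or ARIMA, as sketched above)."""
    raise NotImplementedError

def import_forecast_into_anaplan(forecast: pd.DataFrame) -> None:
    """Placeholder: push the new baseline forecast back into Anaplan."""
    raise NotImplementedError

def nightly_job() -> None:
    actuals = export_actuals_from_anaplan()
    forecast = run_forecast(actuals)
    import_forecast_into_anaplan(forecast)

# Scheduled on the server, e.g. with cron:
#   0 2 * * * /usr/bin/python3 /opt/forecast/nightly_job.py
```

For the 5% of cases that need an on-demand run, the same `nightly_job()` can sit behind a URL endpoint on that server instead of a schedule.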
Hi @timothybrennan , thanks for your question. You are making some very fair points. What we typically see is that companies have already onboarded data science teams in recent years, which means the analytics know-how is usually already in-house. At the desk next to the data scientist sits the planner/forecaster (from sales, finance, supply, HR, ...). The planner regularly asks: 'Hey data scientist, do you know the future better than I do?', to which the data scientist replies: 'No, but my machines can give you the most objective view of the future you can have, by looking at the past.' 'Cool, can you send me this objective view?' asks the planner. The planner then receives an Excel file with some numbers by category, product, or SKU. Analytics here is not embedded in the process; the two teams are not aligned. The planner receives a manually generated forecast: a black box with no clarity on how the data scientist came up with these numbers. He uses it as a starting point but changes the numbers in the input field because he believes, for one reason or another, they are over- or underestimated. You are right: Anaplan enables process design, and this also counts for the analytics process. In an ideal world, accuracy is measured (in Anaplan), there is a daily, weekly, or monthly interface with an analytical engine (in Python, R, C++, ...), and new insights are regularly updated in Anaplan as a baseline, as an objective starting point. This baseline should include internal and external data to maximize its accuracy. Afterwards, planners can make changes, but changes (up or down) should be recorded with a reason. At the end of the process, you look at the 'forecast value added' (the extra accuracy the planner added by changing the baseline). If this is consistently positive (which means the planner adds accuracy to the forecast), you should understand which data or insights this planner has that your machine does not include, and start including those factors. That's how we have very successfully embedded analytics in planning use cases:
1. Provide a transparent baseline (and integrate it with an automated feed from your analytics engine).
2. Include internal and external data.
3. Track changes and understand the forecast value added.
4. Optimize the algorithm.
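A minimal sketch of the forecast value added measurement, with illustrative numbers: FVA is simply the accuracy of the planner's final forecast minus the accuracy of the machine baseline (here measured as a MAPE difference; the error metric is a choice, not a given).

```python
# Minimal sketch of forecast value added (FVA): compare the planner's
# adjusted forecast against the machine baseline. Numbers are illustrative.
import numpy as np

actuals  = np.array([100, 120, 110, 130])
baseline = np.array([ 95, 125, 105, 128])   # machine-generated baseline
adjusted = np.array([102, 118, 112, 131])   # planner's final forecast

def mape(forecast, actual):
    return np.mean(np.abs((actual - forecast) / actual)) * 100

fva = mape(baseline, actuals) - mape(adjusted, actuals)
print(f"FVA: {fva:+.1f} percentage points")  # positive: the overrides add accuracy
```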
We are living in disruptive times driven by uncertainty, volatility, and ambiguity. People tend to forget quickly, but looking back, disruptive events have been increasing over time. Remember the real estate crisis in the ’90s and the dot-com bubble in the early 2000s? What about the events of 9/11 and the financial crisis in 2008? And more recently, the Chinese stock market crash in mid-2015 and Brexit, just to name a few. During each event, leaders across the globe experienced sleepless nights, and companies went bankrupt. Every time, a new and unseen element was part of the disruption, but many things remained the same.
Uncertainty will not disappear: markets are more and more volatile, and the speed at which ambiguous events arrive is only increasing. Companies must be proactive in assessing their capabilities to withstand disruption and the options they have to identify and respond to upcoming opportunities and risks. The need for dynamic scenario planning has never been greater. Our world is moving fast; planning processes and tools need to follow. I believe that when times are disrupted, companies are forced to completely rethink what they do and the way they do business. Technology and data will play a decisive role in future digitized planning processes.
The Importance of Technology During Uncertain Times
People agree: planning during disruption is important. Following Deloitte’s point of view across the resilience drivers of respond, recover, and thrive, planning during disruption matters in the short term (to respond to the situation), the mid term (to recover from it), and the long term (to thrive and adapt to the new normal). This includes a big focus on cash flow planning and liquidity management, supply and production planning to meet new or different demands, and collaborating very closely with suppliers. It also includes cutting all non-crucial OPEX, reviewing which CapEx investments can wait, analyzing the demand impact, adapting territory and quota plans, and planning promotions to recover and push demand. Companies should focus on new and innovative product launches to generate new revenue streams, and ensure they have the right resources available at the right place (workforce planning). And every single time, they should be running and re-running scenarios.
Running scenarios is easier said than done when it takes you a week to analyze the impact of one scenario on the top and bottom line of the P&L and the balance sheet. This is often the case in legacy solutions where each function works in silos. That’s a hard situation when you need to analyze the impact of an event across the organization. Companies need a collaborative way of working together: a modular approach with driver-based models, all connected to each other in real time. They need an agile Connected Planning solution with an in-memory calculation engine that can calculate the impact of a single change across an organization in seconds. Making a decision fast is important. Making a well-planned decision is critical.
In a few short days, a scenario planning process can be transformed into a stable modular solution, leading to a fully-connected plan in just weeks. This was true at Deloitte when the team was able to analyze 37 scenarios spanning workforce, revenue, margin, costs, and cash flow in just 10 days with Anaplan.
The Need for Data and Analytics
No modeling could have predicted the situation facing companies. Still, I believe data and analytics are needed more than ever. Many disruptions have impacted our models, but we still know that predictive analytics help reduce the time we spend on forecasts by learning from the past. Those who remember their statistics classes understand the need to think probabilistically: we work with distributions, and it is possible to see outcomes that do not fit within the confidence intervals.
Still, advanced analytics can bring massive value. During uncertain times, it’s very important to understand what is happening around you. You need to observe how the markets are reacting, recognize trends, and acknowledge how interventions are affecting the market. Many companies have a blind spot when it comes to understanding what will happen, as they only consider internal data. It’s external data - like macroeconomic indicators, consumer sentiment, buying power, exports, employment, IoT, and demographic data - that helps you understand how markets are evolving. You can monitor the most important external leading indicators yourself, or you can leverage machine-learning models that correlate your organizational data against all the available external leading indicators and provide a specific set of indicators for you to monitor.
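As a toy illustration of that screening idea, the sketch below correlates lagged versions of each (synthetic) external series with a sales series and keeps the lag with the strongest relationship; all of the data here is made up.

```python
# Synthetic sketch: screen external series as leading indicators by testing
# which lead time (lag) correlates most strongly with our own sales.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 36  # three years of monthly data
gdp = pd.Series(np.cumsum(rng.normal(0.2, 1.0, n)))
sentiment = pd.Series(rng.normal(0.0, 1.0, n))
# sales loosely follow GDP with a three-month lead (by construction)
sales = gdp.shift(3).fillna(0) + rng.normal(0.0, 0.5, n)

externals = pd.DataFrame({"GDP": gdp, "consumer_sentiment": sentiment})

for name, series in externals.items():
    best_corr, best_lag = max(
        (abs(series.shift(lag).corr(sales)), lag) for lag in range(1, 7)
    )
    print(f"{name}: |r| = {best_corr:.2f} at a lead of {best_lag} months")
```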
Predictive analytics supports companies’ recovery from a market disruption by helping them understand how economic, industry, and consumer trends will affect future business outcomes. Regression models are off when something extreme happens, but the underlying correlation and causation factors are not disconnected. Once markets are more stable and lead times are re-established, a machine-learning-driven time-series model will help the organization identify inflection points, headwinds, and tailwinds for each region and product. By embracing data and advanced analytical techniques, and embedding them in your Connected Planning process, you will predict where demand picks up and how quickly. Your supply chain and workforce will be prepared for recovery, and you also gain the ability to tap into big data from other regions. Especially in today’s disruption, we can learn a lot by incorporating daily or weekly sales data across different industries from China and Italy.
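As a toy illustration of spotting such an inflection point, one could flag the month in which a smoothed growth rate turns from negative back to positive; the demand series below is synthetic.

```python
# Illustrative sketch: flag a recovery inflection point as the month where
# smoothed month-over-month growth turns positive again. Data is synthetic.
import pandas as pd

demand = pd.Series(
    [100, 96, 90, 82, 78, 76, 77, 81, 88, 97, 105, 112],
    index=pd.date_range("2020-01-31", periods=12, freq="M"),
)

growth = demand.pct_change().rolling(2).mean()        # smoothed growth rate
turning = growth[(growth > 0) & (growth.shift(1) <= 0)]
print("recovery inflection point(s):", list(turning.index.strftime("%Y-%m")))
```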
Additionally, these models give us the ability to incorporate the recovery from recessions into our long-term planning forecasts. Gartner teaches us that macroeconomic leading indicators in particular are very helpful for predicting business performance over the next 18 months, while internal (microeconomic) data only explains the short term.
It will always be a challenge to plan and forecast during disruptive times. There is no single solution or approach that provides all the answers. However, the last thing companies should do is nothing at all. Appreciate the value that data and technology bring, and understand where in your organization they can create the most value. There is an opportunity in every challenge, and it’s up to everyone to take the opportunity and set the first steps towards more connected and data-driven enterprises.
Let us know what you think about this post in the comments below. To stay up-to-date on more valuable content like this, remember to subscribe to the blog!
Nick is leading the algorithmic and predictive planning offering and drives the Anaplan practice at Deloitte Switzerland. He is energized by transforming organizations into data-driven decision-makers, from vision to execution, and is a Master Anaplanner.
Hi both - I am also bumping into this issue today; I had not seen it for a very long time. 1) It's not due to a composite hierarchy issue (there is only one list, a flat list). 2) The name does not exceed 60 characters. The name I want to use is 'Reverse Logistics - Extension', so no special characters either. Does anyone have an idea?
The rules of planning are evolving daily. While we still steer businesses based on numbers, plans, budgets, and assumptions, thanks to developments in artificial intelligence (AI), computing power, and data, we are on the cusp of a major transformation in the way we operate our businesses. We are progressing from static, cycle-driven, labor-intensive, cumbersome planning exercises to dynamic, collaborative, and especially intelligent plans.
Many companies are too focused on seeing data as an end “goal” when they should be focusing on how to integrate data into all aspects of the business. However, I believe an end is coming for manual data mining processes thanks to the advances being made in artificial intelligence (AI). But how can AI actually help in your planning exercise?
With AI, a machine learns, improves, and continuously optimizes without bias, offering options and potential actions that can improve performance across an organization, allowing you to redirect employees onto tasks that truly align with and work toward the overall vision. We are great at breaking down complex cases and building out our visions, strategies, and goals. However, we don’t have the capacity to analyze the overload of data available to us quickly, but machines (AI) do. Machines can quickly mine data and find recurring patterns from the past, as well as estimate the impact of a previous event. This provides us time to lift the planning process to the next level, find tangible actions and wins, and take those actions accordingly.
Data is Not the Holy Grail
However, the increase in data over the years has started to do more harm than good. For instance, not all data equals correct information. The more information available, the more “noise” there is and the harder it becomes to sort out what is real, which makes it easier for people to cherry-pick patterns that fit their pre-existing positions.
Just ask Steven Sloman and Philip Fernbach who wrote the book The Knowledge Illusion. Within the book they go on to say, “individuals know very little, the key to our intelligence lies in the people and things around us.”
We’re consistently drawing on information and expertise stored outside of our heads, while AI is only influenced by factual data and can provide a pure, impartial look at patterns and forecasts. Our brains are what make us smarter than any computer, but at the same time, our minds can also be influenced by unreliable sources.
Humans have a difficult time looking at data objectively; we want the data to be in line with the stories we are trying to tell, while machines are only looking for the patterns that make the most sense. For example, two people with the same backgrounds and experiences will not create the same forecasts due to the influence of external factors.
Machines are bringing objectivity to planning by augmenting the available internal data with external data. AI is able to find patterns and make predictions that help organizations become more accurate, have better insights, and move faster than their competition.
The Collaborative, Connected, and Intelligent Self-Driving Enterprise
If you want to be prepared for the new age of Connected and augmented Planning, it is necessary to embrace artificial intelligence. As we have learned:
Planning cycles will disappear as both actuals and forecasts are generated in real-time.
Machine-based algorithms look at past internal and external data and produce a more accurate baseline forecast.
Confidence intervals are changing the tone of discussions and are reducing uncertainty.
New insights are created as the self-driving enterprise beats traditional data reading capabilities to discover patterns and drastically improve business and decision-making.
Self-service capabilities allow business partners to enter the solution, run simulations and scenarios, and reduce the time between planning and collaborative decision-making to near zero.
Machines will never replace Connected Planning, but your planning processes can grow in efficiency, effectiveness, and accuracy by augmenting them with AI.
The world is rapidly preparing for this; are you?
Nick is energized by transforming organizations into data-driven decision makers from vision to execution. He is designing, building and implementing advanced analytics platforms and digital strategies at major companies across the globe bridging strategies to true action points and insights.
Besides being a Master Anaplanner and a principal solution architect in the Deloitte EMEA CoE, Nick is recognized as a subject matter expert in algorithmic design and data-driven predictions, as well as Python and TensorFlow.
Connect with Nick on LinkedIn