Why Aren’t Demand Planners Adopting Machine Learning?
Olga is a forecaster with experience in predicting new product sales for large retail clients. She currently works at HAVI as Senior Manager, North America Forecasting. Her area of expertise is the study of what makes promotions successful: merchandise, media, digital advertising, price tactics and consumer preferences. She holds a bachelor's degree in Economics from the National Technical University of Ukraine and a master's in Financial Economics from Carleton University in Ottawa, Canada.
We all know that machine learning (ML) and AI get the analytics and data science community excited. Every self-respecting forecasting department is developing ML algorithms to predict who will click, buy, lie, or die (to borrow the title of Eric Siegel's seminal work on the subject). Every analytics conference and publication is filled with AI buzzwords.
But when it comes to real-life implementation, most demand forecasters are cautious about adopting machine learning. Why is that? Isn't machine learning all about prediction, which is literally a forecaster's job? Let's explore the opportunities and pitfalls of applying machine learning in forecasting.
Demand Forecasters & Data Scientists Define 'Prediction' Differently
There is a subtle difference in the way forecasting and ML define 'prediction'. When forecasters say 'prediction', we mean a prediction about the future. Traditional forecasting methods include time series modelling, algebraic equations, and qualitative judgement calls. As a result, traditional forecasting is somewhat manual and time consuming, and may be swayed by human judgement. However, the outputs are easily interpreted and the process is agile: the forecaster knows where the numbers come from and can easily make corrections as needed. Further, traditional forecasting can be done with limited data.
Machine learning or statistical model 'prediction' refers to predicting the past. This sounds counterintuitive, but the idea is to compare the model's 'prediction' with what actually happened and measure the difference, or error. These errors are used to fine-tune the model before it predicts the future. Consequently, model predictions are heavily driven by past performance and are almost impossible to adjust with judgement after the fact. The interpretability of these models is also very limited, and by design ML requires a lot of data. On the upside, machine learning is quick and automated, as well as objective, being free from human judgement.
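To make the distinction concrete, here is a minimal sketch of the ML sense of 'prediction': the model is fit on older history, asked to 'predict' a past period we already know, and the resulting error is what guides fine-tuning. The file name, column names, split date, and the choice of a random forest are illustrative assumptions, not a prescription.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error

# Illustrative store-week history (file and columns are assumptions for this sketch)
df = pd.read_csv("store_week_sales.csv")          # store_id, week, price, promo_flag, units
df["week"] = pd.to_datetime(df["week"])

train = df[df["week"] < "2022-07-01"]             # older history used for fitting
test = df[df["week"] >= "2022-07-01"]             # known past held out as the "prediction" target

features = ["price", "promo_flag"]
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(train[features], train["units"])

# "Predict the past" and measure how far off the model was
backtest_pred = model.predict(test[features])
error = mean_absolute_percentage_error(test["units"], backtest_pred)
print(f"Backtest MAPE: {error:.1%}")              # this error is what drives fine-tuning
```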
Machine learning and AI algorithms were created for a digital world with almost unlimited data on customer clicks, purchases, and browsing. As we know, these algorithms do an excellent job of luring us into making repeat purchases, buying complementary items, and signing up for loyalty programs. The sunk cost of a prediction error (a lost sale) is relatively low. In addition, every error is an opportunity for the machine learning algorithm to improve itself.
The real-world marketplace is quite different from the digital one, however. The data here might be limited to cash register sales, loyalty program data, or shipment data. The sunk cost of a prediction error can be quite high, as restaurants and retailers procure in bulk. Also, predictions cannot improve themselves because there is no automatic feedback loop. For these reasons, many brick-and-mortar retailers and their suppliers still rely on traditional forecasting methods. This does not mean machine learning cannot improve forecasting, but there are a few considerations to address before venturing into it.
Machine Learning Requires Much More Data Than Time Series
Any machine learning algorithm requires a lot of data. By a lot of data, I do not mean a long history or many variables. Machine learning models run on a defined observation level, such as customer or store, and you need at least a thousand of those observations (if not many thousands) for machine learning to work. If the sample is limited to only 10 stores, it is probably better to refrain from machine learning and use time series techniques instead. Another factor to consider is the cost of maintaining the data. Is it readily available, or does it need to be entered manually? Does the data need to be engineered? Would that be a one-time effort or an ongoing process requiring human and computing resources? And what would be the cost of storing the data over the years?
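As a rough check of whether there is enough data at the chosen observation level, something like the sketch below can help; the file name, columns, and the one-thousand-row rule of thumb are assumptions used for illustration.

```python
import pandas as pd

# Illustrative shipment history (file and columns assumed): store_id, week, units
df = pd.read_csv("shipments.csv")

# The observation level here is store-week: one row per store per week
obs = df.groupby(["store_id", "week"], as_index=False)["units"].sum()

print(f"Stores: {obs['store_id'].nunique()}, observations: {len(obs)}")
if len(obs) < 1_000:
    print("Too few observations for ML - consider time series methods instead.")
```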
Machine Learning Is Far Less Interpretable Than Time Series
By design, machine learning is a black box. For example, predictions may be generated by a vote of thousands of decision trees. You can use colorful bar charts to depict the weight of each factor in the model. These charts look very smart on presentation slides but are far from intuitive. If the cost of a wrong prediction is millions of dollars, companies might be more comfortable with time series and arithmetic they can understand than with a slick black-box algorithm. This especially applies to new products with no sales data or only limited test data.
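For a sense of what that 'weight of each factor' chart is built from, here is a minimal sketch using a random forest's feature importances; the data and feature names are assumed, and the chart shows how heavily the trees used each input, not why any individual forecast came out the way it did.

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor

# Illustrative training data (file and columns are assumptions for this sketch)
df = pd.read_csv("store_week_sales.csv")
features = ["price", "promo_flag", "holiday_flag", "temperature"]

# Predictions come from a "vote" of hundreds of decision trees
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(df[features], df["units"])

# The slide-ready chart: relative weight of each input across the trees
importance = pd.Series(model.feature_importances_, index=features).sort_values()
importance.plot.barh(title="Feature importance")
plt.tight_layout()
plt.show()   # tells you which inputs the trees leaned on, not why a forecast is what it is
```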
There are a few workarounds for understanding machine learning. Playing with parameters can be a good indicator of the robustness of the results: if one slight change to the model's inputs or specification produces a significant change in the predictions, that is a red flag.
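One way to 'play with parameters' is to nudge a single input slightly and see how far the predictions move. The sketch below reuses the fitted model, data frame, and feature list from the earlier sketches and applies a hypothetical 1% price perturbation.

```python
import numpy as np

# Baseline predictions from the previously fitted model (assumed to exist)
baseline = model.predict(df[features])

# Nudge one input slightly, e.g. a hypothetical 1% price change
perturbed = df[features].copy()
perturbed["price"] = perturbed["price"] * 1.01
shifted = model.predict(perturbed)

# If a tiny input change swings the forecast a lot, treat that as a red flag
swing = np.mean(np.abs(shifted - baseline) / np.maximum(baseline, 1e-9))
print(f"Average prediction shift from a 1% price nudge: {swing:.1%}")
```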
At the end of the day, a model's trustworthiness can be proven by testing it on new data. We don't necessarily need to understand the ins and outs of an algorithm if we are confident in the end result. How persuasive that argument is may depend on your audience: analytics professionals are typically comfortable using machine learning predictions as long as they have been tested, while supply chain leaders might be more cautious about making business decisions based on a black box. A good sanity check is to run traditional forecasting methods in parallel with machine learning. If there is a material difference between the results, either there is an issue with the model or an important consideration was left out of the traditional forecast.
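A minimal version of that sanity check might look like the sketch below: score the ML model on a held-out period and run a simple traditional benchmark, here a seasonal naive forecast ('same week last year'), in parallel. The data layout, store id, 13-week holdout, and 52-week seasonality are all assumptions, and the model is assumed to have been fit only on history before the holdout.

```python
from sklearn.metrics import mean_absolute_percentage_error

# Assumed layout: one store's weekly units, sorted by week, with over a year of history
store = df[df["store_id"] == 101].sort_values("week")
actuals = store["units"].to_numpy()
holdout = actuals[-13:]                         # last 13 weeks held out as "new" data

# Traditional benchmark: seasonal naive, i.e. the same weeks one year earlier
seasonal_naive = actuals[-13 - 52:-52]

# ML forecast for the same 13 weeks (model fit on pre-holdout history, as assumed above)
ml_forecast = model.predict(store[features].tail(13))

print("Seasonal naive MAPE:", mean_absolute_percentage_error(holdout, seasonal_naive))
print("ML MAPE:            ", mean_absolute_percentage_error(holdout, ml_forecast))
# A large gap between the two is a prompt to investigate, not an automatic verdict
```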
The Cost Benefit of Machine Learning Is Not Always Clear
It goes without saying that when machine learning is set up right, it is wonderfully efficient: all one needs to do is provide the inputs and press the button. The 'setting up right' piece might be relatively straightforward or extremely difficult, depending on the prediction goal and the data available. Repeat products with abundant history may be predicted easily, even with out-of-the-box ML packages such as SAS or Azure, as long as the data is readily available. New product predictions may require intricate proxy algorithms to compensate for limited data, which may mean developing ML algorithms from scratch. In addition, data from different sources may need to be engineered to feed the algorithm. This can require significant investment to hire contractors, expand the analytics team, or stretch existing resources. Before ramping up a data science crew, companies would be well advised to consider how often the algorithm will be used, the efficiency gains, and the computing resources required for the project.
Impacts on Overall Business Planning
Forecasting is the cornerstone of business planning, and any change to the forecasting process may affect other areas of the business such as Finance and Supply Chain. Traditional forecasting typically relies on a top-down approach: a forecast is created in aggregate and then broken down by store, time period, and so on, and these breakdowns may later be used for financial targets or store-level demand planning. By design, an ML forecast uses a bottom-up approach: a prediction is created at the store/time-period level and aggregated afterwards. When switching from traditional forecasting to ML, companies must ensure a smooth transition at all stages of business planning. If not done right, the transition may create discrepancies between the ML prediction and the financial targets and supply plans.
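The two directions can be sketched in a few lines; the national total, store shares, and store-level ML numbers below are made-up figures used only to show where such discrepancies can creep in.

```python
import numpy as np

# Top-down: an aggregate forecast is split using assumed historical store shares
national_forecast = 10_000                          # hypothetical aggregate units
store_shares = np.array([0.50, 0.30, 0.20])         # historical mix, assumed
top_down = national_forecast * store_shares         # [5000, 3000, 2000]

# Bottom-up: store-level ML predictions are summed up to the aggregate
ml_store_forecasts = np.array([5300, 2800, 2100])   # hypothetical ML output
bottom_up_total = ml_store_forecasts.sum()          # 10200

# The gap between the two totals is what Finance and Supply Chain will notice
print("Top-down store split:", top_down)
print("Bottom-up total:", bottom_up_total, "vs national target:", national_forecast)
```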
To summarize, ML is a great instrument for streamlining forecasting. As with any tool, it has its applications, benefits, costs, and risks. When using ML for forecasting, companies should consider their data, business need, decision-making culture, and planning workflow. A great place to start might be trying out ML on your own data using online, off-the-shelf solutions such as Azure and SAS. Most of these solutions have step-by-step training videos that will help you fit an ML algorithm to your data. Experimenting with them can help you decide whether ML is a good tool for your company's forecasting, and whether an off-the-shelf solution is sufficient or in-house development is needed. Even if it turns out that ML is not a good fit for your company, no investment is lost and some analytical knowledge will be gained.
This article first appeared in the Summer 2023 issue of the Journal of Business Forecasting. To access the Journal, become an IBF member and get it delivered to your door every quarter, along with a host of membership benefits including discounted conferences and training, exclusive workshops, and access to the entire IBF knowledge library.