### AI Forecast demand

Published: 2024-02-28 · Last updated: 2024-03-14

In the realm of road freight transportation, **forecast accuracy** is much more than just a metric; it is the **cornerstone of operational efficiency and strategic planning**. It is also an area with great potential for improvements that have a direct impact on business. In fact, in a McKinsey survey, 40% of CFOs said that their forecasts are not particularly accurate and take far too much time to produce.

For technology and operations leaders in this sector, understanding and improving forecast accuracy through technical means is crucial. To help you in this process, this article delves into the **technicalities of forecast accuracy**, highlighting its importance, how it can be measured with precision and what improvement strategies can be implemented.

High forecast accuracy indicates that the **predictions made by the model are reliable and can be trusted for making key logistical decisions**, such as demand or capacity planning. Achieving high forecast accuracy is essential for several reasons:

- **Operational efficiency**: It allows for more effective planning and use of resources by reducing waste and optimising operational processes.
- **Cost reduction**: Accurate forecasts help to minimise costs related to overcapacity/undercapacity and pricing.
- **Strategic decision-making**: It provides a solid foundation for making informed strategic decisions, thus enhancing productivity and competitiveness.

**Forecast accuracy represents the degree of closeness between predicted values and actual outcomes**. It is a quantitative measurement of how well a forecast model can predict future events.

Fundamentally, forecast accuracy is about **understanding and minimising the discrepancies between forecast predictions and actual observations**. These discrepancies are commonly known as **forecast errors**, which can be quantified as the difference between what was predicted and what actually happened. The formula to calculate this error is straightforward:

Forecast error = forecast value − actual value

For instance, let’s consider a **practical example of a logistics company forecasting demand for freight services**. They predict a total of 100 shipments on a particular day. By the day’s end, the actual number of shipments processed turns out to be 95. According to the error formula, the forecast error for that day is 100 − 95 = 5, meaning the forecast overestimated demand by 5 shipments for that specific day. Such an error highlights the forecast’s deviation from reality, serving as a critical metric for assessing the forecasting model’s accuracy and reliability in predicting future demand. **The smaller the error, the better**.

Measuring forecast accuracy presents unique challenges, particularly when forecasts extend over multiple future dates. Compared to evaluating the prediction for a single event – as in the example where 100 shipments were forecast and 95 were observed – assessing accuracy over a period involves a more complex metric. **Forecast accuracy means evaluating how well a forecasting model performs in the long term**, not just at a single point in time.

Any forecast accuracy evaluation must account for the cumulative performance of the model across all forecasted periods, whether that be daily, weekly or monthly forecasts. **The objective is to gauge the model’s consistency over time**, while keeping in mind that perfect accuracy for every single forecast is unrealistic.

To measure this effectively, businesses employ various statistical metrics that can aggregate forecast performance over multiple instances, providing a more comprehensive view of the model’s effectiveness in predicting future outcomes. **These metrics take into account the sum of errors over all forecast periods**, offering a **holistic assessment** of how close the forecasts are to the actual results over time.

Below, you will find a list of **commonly used metrics**:

**Mean absolute error (MAE)**

- **Explanation**: MAE calculates the average magnitude of errors (discrepancies between forecasts and actual outcomes), treating all of them equally.
- **When to choose**: Opt for MAE when you need a clear, straightforward measure that does not penalise major errors excessively. It is ideal for uniform error distributions without outliers.
- **Advantages**: Simplicity and clarity in representing forecast accuracy.
- **Potential issues**: Might not reflect the severity of occasional major errors, potentially giving a skewed perception of a model’s performance.
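As a quick illustration (using hypothetical shipment numbers, not a real dataset), MAE can be computed in a few lines of Python:

```python
def mean_absolute_error(actual, forecast):
    """Average absolute gap between observed and forecast values."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical daily shipment counts: observed vs. forecast
actual = [95, 102, 98, 110]
forecast = [100, 100, 100, 100]
print(mean_absolute_error(actual, forecast))  # → 4.75
```

Note how the one large miss (110 vs. 100) is averaged in on equal terms with the small ones.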

**Mean absolute percentage error (MAPE)**

- **Explanation**: MAPE expresses forecast errors as a percentage of the actual values, providing a relative measure of accuracy.
- **When to choose**: MAPE is useful for comparing forecasts across different scales or when the actual numbers vary greatly, making it easier to interpret forecast performance.
- **Advantages**: Facilitates comparisons and is easily understandable by non-technical stakeholders.
- **Potential issues**: Can be misleading if actual values are very close to zero, as the percentage error can become disproportionately large.
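On the same hypothetical shipment data, a minimal MAPE sketch looks like this:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, expressed as a percentage.
    Note: division by `a` blows up when actual values approach zero."""
    n = len(actual)
    return 100 / n * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast))

actual = [95, 102, 98, 110]
forecast = [100, 100, 100, 100]
print(round(mape(actual, forecast), 2))  # → 4.59 (per cent)
```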

**Symmetric mean absolute percentage error (sMAPE)**

- **Explanation**: sMAPE adjusts the MAPE formula to account for the symmetry between the forecast and actual values, aiming to provide a more balanced measure.
- **When to choose**: When seeking a percentage-based error metric that is more balanced and fair, especially in cases with low values.
- **Advantages**: Addresses some of the asymmetry issues in MAPE. Easier to interpret in terms of error percentage.
- **Potential issues**: The adjustment can sometimes lead to confusion in interpretation, especially with very low actual values.
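sMAPE has several variants in the literature; one common formulation divides by the average magnitude of the actual and forecast values:

```python
def smape(actual, forecast):
    """Symmetric MAPE: error relative to the mean magnitude of both values.
    This is one common formulation; other definitions exist."""
    n = len(actual)
    return 100 / n * sum(2 * abs(f - a) / (abs(a) + abs(f))
                         for a, f in zip(actual, forecast))

actual = [95, 102, 98, 110]
forecast = [100, 100, 100, 100]
print(round(smape(actual, forecast), 2))  # → 4.66 (per cent)
```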

**Mean squared error (MSE)**

- **Explanation**: MSE measures the average of the squares of the errors between the forecasted and actual values.
- **When to choose**: Ideal for highlighting major errors and when the goal is to penalise these errors heavily. Particularly useful in analytical and optimisation contexts, especially in machine learning model training.
- **Advantages**: Emphasises larger errors more than smaller ones. Straightforward in its application.
- **Potential issues**: Can disproportionately highlight major errors, sometimes overshadowing the model's performance on the majority of more minor errors.
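Squaring the errors makes the effect of the single large miss obvious; with the same hypothetical data:

```python
def mse(actual, forecast):
    """Mean of the squared errors -- large misses dominate the score."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

actual = [95, 102, 98, 110]
forecast = [100, 100, 100, 100]
# (25 + 4 + 4 + 100) / 4: the one 10-shipment miss contributes 100 of 133
print(mse(actual, forecast))  # → 33.25
```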

**Root mean squared error (RMSE)**

- **Explanation**: RMSE is the square root of MSE, providing a measurement of the average magnitude of the forecast error.
- **When to choose**: RMSE is your go-to when major errors are particularly undesirable and should be emphasised in the forecast evaluation. Useful for when you want a metric that is in the same units as the forecasted data, making it easier to interpret.
- **Advantages**: Maintains the scale of the data. Penalises major errors more heavily than minor ones.
- **Potential issues**: Like MSE, it may overemphasise the significance of major errors, which could distort the overall accuracy assessment if such errors are rare.
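Taking the square root brings the score back into the units of the data (shipments, in this hypothetical example):

```python
import math

def rmse(actual, forecast):
    """Square root of MSE, expressed in the same units as the data."""
    n = len(actual)
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)

actual = [95, 102, 98, 110]
forecast = [100, 100, 100, 100]
print(round(rmse(actual, forecast), 2))  # → 5.77 shipments
```

Compare this with the MAE of 4.75 on the same numbers: RMSE sits higher because the single 10-shipment miss is penalised more heavily.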

**Each of these metrics offers a unique lens through which forecast accuracy can be viewed.** Your choice of metric will depend on the specific requirements of your forecasting task, the nature of your data and the goals of your analysis. Often, employing a combination of these metrics will provide a comprehensive understanding of your forecasting model’s performance, enabling you to make informed decisions and strategic adjustments.

That is precisely what we do at Ontruck AI Tech: **we utilise a combination of some of the metrics described above to ensure a multi-faceted evaluation of our forecast models**. By leveraging tools like **MLflow**, we meticulously record our forecasting models and document several error metrics for each. This way, by examining a **broad spectrum of accuracy measures**, we are better placed to understand and compare the effectiveness of different models.

In practice, calculating forecast accuracy for a model involves selecting a set or sets of past data to run the forecast again and compare the resulting predictions with the actual observations. **Adopting a structured, methodical approach is fundamental in order to make sure the error metrics are reliable and effective**.

Techniques like cross-validation are crucial for training and testing forecast models across different periods and situations, so as to ensure that they can adapt to varying conditions and continue to make accurate predictions.

In the logistics sector, we often deal with **time series forecasting**. For this case, some of the best practices for calculating forecast accuracy metrics effectively are as follows:

- **Time series cross-validation**: Unlike standard cross-validation, time series cross-validation involves dividing the data into continuous chunks in chronological order. This method preserves the time dependencies within the data, which are critical for accurate forecasting in logistics.
- **Training and testing over different periods**: Using cross-validation, our forecast models are systematically trained on a designated ‘training’ set of data, then their performance is tested on a subsequent ‘testing’ set. This process is repeated for multiple continuous periods, allowing the model to learn from a broad range of conditions.
- **Calculating metrics for each period**: For every training-testing iteration, our chosen forecast accuracy metrics should be calculated. This is crucial for assessing how well the model performs in predicting future events in multiple situations, taking into account the specifics of each time period.
- **Averaging the metrics**: Average the metrics across all periods to obtain a comprehensive view of a model’s overall performance. This aggregate measure helps us to compare the model’s performance against others and select the one with the best results in all kinds of situations.
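The steps above can be sketched as a rolling-origin evaluation. This is a simplified illustration, with a naive last-value model standing in for a real forecaster and hypothetical shipment volumes as data:

```python
def rolling_origin_mae(series, min_train=3):
    """Walk forward chronologically: train on all data up to period t,
    forecast period t, score against the actual value, and finally
    average the per-period errors (steps 1-4 of the list above)."""
    errors = []
    for t in range(min_train, len(series)):
        train, actual = series[:t], series[t]
        forecast = train[-1]  # placeholder model: repeat the last observation
        errors.append(abs(actual - forecast))
    return sum(errors) / len(errors)  # average metric over all test periods

shipments = [95, 102, 98, 110, 105, 99, 108]  # hypothetical daily volumes
print(rolling_origin_mae(shipments))  # → 8.0
```

Because each test period only ever sees data that precedes it, the chronological dependencies in the series are preserved, which is exactly what standard shuffled cross-validation would break.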

Following these best practices ensures a thorough and nuanced evaluation of forecasting models, especially those dealing with **time series data, as we often encounter in the road freight transport sector**. Moreover, by employing this rigorous approach to calculate forecast accuracy metrics, we make sure that **our forecasting models are both theoretically sound and effective on a practical level**.

Optimising forecast accuracy requires not only a structured, methodical approach to measuring it, but also a combination of advanced statistical methods, machine learning techniques and continuous model evaluation to improve it, including:

- **Backtesting**: Backtesting tests the predictive model on historical data to assess its performance. This process helps identify the strengths and weaknesses of the model when it is confronted with different real scenarios.
- **Use of synthetic data**: Synthetic data generation can improve forecast accuracy by providing additional training data. This approach is particularly useful in scenarios where historical data is limited or biased and to test the model in all kinds of possible situations.
- **Machine learning and AI**: Employing more sophisticated algorithms like neural networks and ensemble methods to capture complex data patterns can potentially improve forecast accuracy. These models can also adapt and refine predictions over time as they are exposed to more and different data.
- **Continuous model refinement**: Regularly updating and retraining models with new data, using techniques such as cross-validation, will fine-tune model parameters, ensuring that forecast models stay accurate and relevant.
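As a minimal illustration of the synthetic-data idea (a toy jittering scheme, not a production pipeline), a historical demand series can be perturbed with noise to create extra plausible training samples:

```python
import random

def synthetic_series(history, n_samples=5, noise_sd=3.0, seed=42):
    """Create extra plausible demand series by adding Gaussian noise to a
    historical series; values are rounded and clamped to be non-negative."""
    rng = random.Random(seed)
    return [[max(0, round(x + rng.gauss(0, noise_sd))) for x in history]
            for _ in range(n_samples)]

history = [95, 102, 98, 110, 105]  # hypothetical daily shipment counts
for sample in synthetic_series(history):
    print(sample)
```

Real-world approaches are more sophisticated (bootstrapping, generative models, scenario simulation), but the principle is the same: enlarge and diversify the training set beyond what history alone provides.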

The importance of forecast accuracy in logistics cannot be overstated. Improving it is much more than a technical exercise; **it is the key to driving efficiency and strategic advantage in logistics and road freight transportation**. Understanding forecast accuracy involves not just knowing its definition – the degree of closeness between predicted values and actual outcomes – but also appreciating its **impact on decision-making processes**.

A precise forecast empowers organisations to allocate resources efficiently, manage demand proactively and **maintain a competitive edge in a dynamic logistics landscape**. Mastering the calculation of forecast accuracy through established metrics and best practices is crucial. By being able to measure and interpret these metrics effectively, organisations will enhance their expertise and predict future scenarios more accurately.

Implementing **strategies to enhance forecast accuracy** – such as adopting a structured, consistent approach to accuracy measurement, backtesting, leveraging synthetic data, utilising advanced machine learning and AI algorithms, and committing to continuous model refinement – ensures that forecasting methodologies remain robust and responsive to changing dynamics. Ultimately, investment in maximising forecast accuracy is investment in logistics operations’ future success and sustainability.
