“Look-Ahead” AI Powers Truck Dispatch: GEM Mining Consulting Presents Simulation Results

By Sebastián Faúndez, Practice Leader of Analytics at GEM Mining Consulting

Introduction

GEM Mining Consulting presented the results of a study that compares, within the same simulation environment, two fleet-dispatch strategies for an open-pit operation: a reinforcement learning (RL) policy and a traditional heuristic. The goal was to measure, under identical rules and without external interference, how far AI-based optimization can go when the binding constraint sits in the fleet rather than the plant.


What GEM Mining Consulting Did and Why It Matters

The consultancy designed a controlled experiment to answer a central question: can AI make better dispatch decisions than classic rules if everything else is held equal? When the bottleneck is in the fleet, due to availability, distances, or congestion, a better allocation policy can translate into more material moved with the same equipment, less idle time, and a smaller footprint per tonne.

Methodology

A digital mine was built using a discrete-event simulator that models routes, grades, shovels, trucks, crushers, and queues, as well as random failures and planned stoppages. Both approaches, AI and heuristic, operated with the same decision cadence, identical availability, and the same constraints on plant feed within predefined grade ranges. The reference heuristic was the “most-needy shovel / nearest truck” rule.
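To make the setup concrete, here is a minimal sketch of how a digital mine of this kind can be expressed in a discrete-event simulator. It uses the open-source simpy library; the fleet size, travel times, loading rates, and payload below are illustrative assumptions, not parameters of GEM Mining Consulting’s model, and the dispatch function is a simplified stand-in for the “most-needy shovel / nearest truck” baseline (it ignores the truck’s current position).

```python
# Minimal discrete-event sketch of a truck–shovel–crusher cycle (illustrative only).
# All names, travel times, loading rates, and capacities are assumptions chosen for
# demonstration, not parameters from GEM Mining Consulting's simulator.
# Requires: pip install simpy
import random
import simpy

RANDOM_SEED = 42
SIM_MINUTES = 8 * 60          # one illustrative shift
TRUCK_PAYLOAD_T = 200         # tonnes per truck load (assumed)

random.seed(RANDOM_SEED)
env = simpy.Environment()

# Each shovel is a resource with its own queue; travel times are assumed one-way minutes.
shovels = {
    "SH1": {"res": simpy.Resource(env, capacity=1), "travel": 8, "load_min": 3.0},
    "SH2": {"res": simpy.Resource(env, capacity=1), "travel": 14, "load_min": 3.5},
}
crusher = simpy.Resource(env, capacity=1)
tonnes_delivered = 0.0

def dispatch():
    """Simplified baseline rule: send the next truck to the 'most needy' shovel
    (longest queue), breaking ties by shortest travel time."""
    return min(shovels, key=lambda s: (-len(shovels[s]["res"].queue), shovels[s]["travel"]))

def truck(name):
    global tonnes_delivered
    while True:
        target = dispatch()
        sh = shovels[target]
        yield env.timeout(sh["travel"])                              # empty haul to shovel
        with sh["res"].request() as req:
            yield req                                                # wait in the shovel queue
            yield env.timeout(random.expovariate(1 / sh["load_min"]))  # loading
        yield env.timeout(sh["travel"])                              # loaded haul to crusher
        with crusher.request() as req:
            yield req
            yield env.timeout(1.5)                                   # dumping
        tonnes_delivered += TRUCK_PAYLOAD_T

for i in range(6):                                                   # assumed fleet of 6 trucks
    env.process(truck(f"T{i + 1}"))

env.run(until=SIM_MINUTES)
print(f"Tonnes delivered in one simulated shift: {tonnes_delivered:.0f}")
```

Conceptually, the head-to-head comparison amounts to swapping the dispatch function above for a learned policy while leaving the rest of the simulated mine untouched.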

Key Results

Performance ratios were evaluated across 10 scenarios and summarized statistically. The RL-based approach consistently outperformed the heuristic under varied operating conditions, including failures, maintenance windows, and grade distributions. On average, RL moved 23–33% more total tonnage, reduced queue lengths by 38–55%, and shortened cycle times by 19–24%, with the direction of the advantage holding in every scenario.
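For readers who want to see what “summarized statistically” can look like in practice, the sketch below computes per-scenario RL-to-heuristic ratios and a bootstrap confidence interval over scenarios. The tonnage figures it generates are synthetic placeholders (drawn inside the uplift band reported above), not the study’s data, and the procedure shown is a generic one rather than GEM Mining Consulting’s exact analysis.

```python
# Illustrative summary of paired per-scenario performance ratios (synthetic data only).
import numpy as np

rng = np.random.default_rng(7)
n_scenarios = 10

# Hypothetical paired tonnage totals per scenario (both policies run under the same conditions).
heuristic_t = rng.normal(100_000, 8_000, n_scenarios)
rl_t = heuristic_t * rng.uniform(1.20, 1.35, n_scenarios)   # assumed uplift band for illustration

ratios = rl_t / heuristic_t
print(f"Mean uplift: {ratios.mean() - 1:.1%}  "
      f"(min {ratios.min() - 1:.1%}, max {ratios.max() - 1:.1%})")

# Paired bootstrap over scenarios gives a confidence interval on the mean ratio.
boot = rng.choice(ratios, size=(10_000, n_scenarios), replace=True).mean(axis=1)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for mean ratio: [{lo:.3f}, {hi:.3f}]")
```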

Why RL-Powered AI Makes the Difference

The key, GEM Mining Consulting explains, is that the RL agent is trained to “look ahead.” Unlike rules that react to immediate distance or a shovel’s instantaneous need, the algorithm learns, over thousands of iterations, which decision sequences reduce congestion and keep the operation better balanced over time. Simply put, the AI does not just answer “what pays off right now,” but “what pays off now so the whole cycle runs better.”
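The toy example below illustrates that mechanism with tabular Q-learning, the simplest form of this kind of learning. The two-shovel environment, rewards, and hyperparameters are invented for the illustration and are not the agent or environment GEM Mining Consulting trained; the point is only to show how a one-step update propagates future congestion costs back into today’s dispatch choice.

```python
# Toy tabular Q-learning sketch of a "look-ahead" dispatch choice (illustrative only).
import random

random.seed(0)

ACTIONS = ("near_shovel", "far_shovel")   # which shovel receives the next truck
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1    # learning rate, discount, exploration rate

def step(queues, action):
    """Toy transition: the near shovel has a short haul but loads slowly, so its
    queue builds up if every truck is sent there. Reward is the negative of the
    travel plus waiting time incurred by this single assignment."""
    q = dict(queues)
    target = "near" if action == "near_shovel" else "far"
    travel = 5 if target == "near" else 12
    q[target] += 1
    wait = 4 * q[target]
    # The slow near shovel clears a queued truck only half the time; the far one always does.
    if q["near"] > 0 and random.random() < 0.5:
        q["near"] -= 1
    if q["far"] > 0:
        q["far"] -= 1
    return q, -(travel + wait)

def state_key(queues):
    # Bucket queue lengths so the state space stays tiny.
    return (min(queues["near"], 5), min(queues["far"], 5))

Q = {}
queues = {"near": 0, "far": 0}
for _ in range(20000):
    s = state_key(queues)
    Q.setdefault(s, {a: 0.0 for a in ACTIONS})
    # Epsilon-greedy choice between the two dispatch options.
    a = random.choice(ACTIONS) if random.random() < EPSILON else max(Q[s], key=Q[s].get)
    queues, r = step(queues, a)
    s2 = state_key(queues)
    Q.setdefault(s2, {a2: 0.0 for a2 in ACTIONS})
    # One-step Q-learning update: today's reward plus the discounted "look-ahead" value.
    Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2].values()) - Q[s][a])

# The greedy policy per state: with enough training it can learn to route trucks to the
# farther shovel once the near one is congested, trading immediate travel time for flow.
for s in sorted(Q):
    print(s, "->", max(Q[s], key=Q[s].get))
```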

At the Knowledge Frontier

This modeling sits at the frontier of mine-to-plant dispatch. The system is highly complex, with uncertainty and variability stemming from failures, route changes, blends, maintenance, and ore grades, and AI makes it possible to capture those dynamics and translate them into policies that coordinate resources more effectively.

Scope and Limits of the Results

The results come from a simulation environment with equal rules for both strategies and are not universal promises. When deployed on site, the model is expected to be refined iteratively: as the mine is digitally replicated with real data on routes, slopes, availabilities, resources, and maintenance windows, the AI agent can help the team understand the system and “look ahead” to suggest better dispatch decisions for each operation.

Sustainability and the Role of People

Less idle time and shorter cycles reduce diesel consumption per tonne and therefore emissions per unit produced. Equally important is the human role: the goal is not to replace human teams. The tool is designed so supervisors and control-room operators can oversee it, perform overrides when needed, and focus on tactical tasks where human judgment is key. The tool also helps optimize fleet sizing and prepare for uncertain scenarios such as weather events and equipment failures.

Local Mining Sector

Australia concentrates world-class mining operations spread across vast territories and challenging climates. There, long haul distances, advanced control centers, and rising expectations for productivity, safety, and decarbonization coexist. In that context, a dispatch AI that “looks ahead” adds value by anticipating operational changes, smoothing congestion, reducing idle time, and lowering consumption per tonne, with clear, traceable recommendations that the human team can supervise and override.

Conclusion

With the bottleneck in the fleet, AI-driven optimization showed clear advantages in the test bench built by GEM Mining Consulting. The head-to-head comparison between AI and a heuristic, under the same rules and conditions, offers a crisp signal to operations exploring their next productivity leap: move from static rules to intelligent, dynamic AI-based recommendations.

If you want to learn more about us and our services, visit our website at https://www.gem-mining-consulting.com/home-eng-2-2/ | [email protected]
