
Problems

AI-powered control to optimize a multi-stage crushing circuit for ore processing

Unit operations of the crushing circuit are controlled independently, which leads to sub-optimal operation of the entire circuit

The grizzly classifier is clogged over 25% of the time and is not monitored by the metallurgist, creating instabilities and ore coarsening

The secondary crushing circuit produces coarse and segregated ore, which makes the HPGR circuit operate less efficiently

Production volumes of the unit operations are not balanced, so the production capacity of the upstream circuits is not used efficiently

Results

Improved stability of the recirculating load of secondary crushing

-4.8% particle size reduction in the secondary crushing

+2.6% throughput increase of the crushing circuit

Case summary
Challenge
An ore-crushing circuit is the first stage in most mineral processing plants, and its operational efficiency has an impact on the performance of the entire plant. Such circuits typically consist of several unit operations, which are often operated and controlled independently due to the high control and modeling complexity caused by large ore-storing buffers between them. This disconnected control approach results in bottlenecks and a reduction in overall circuit throughput. These bottlenecks can be further exacerbated by the high variability of ore properties and ore segregation in buffers, making the optimization and control of the whole circuit even more challenging.

One of our clients operates a platinum processing plant in South Africa with a three-stage crushing circuit comprised of gyratory crushers, cone crushers, and high-pressure grinding rolls (HPGR) separated by stockpile and silo buffers with independent control logic for each unit operation. Conundrum was commissioned to identify dry circuit bottlenecks and eliminate them to improve the overall dry circuit throughput and, consequently, the productivity of the plant.


Solution and benefits
We deployed a new-generation AI-powered control system built upon the modeling, optimization, and deployment components of the Conundrum platform. The core components of the solution are physics-aware machine learning (hybrid) models of the various pieces of equipment and processes. These models are embedded into the Conundrum Plant-Wide Optimization Engine, which produces accurate and timely production setpoint changes for the circuit that are applied by the metallurgist. These components are deployed on the Conundrum platform, making it easy to move the models into production, make use of the produced setpoint recommendations, and monitor the application status.

The control system was tested in the recommendation mode and demonstrated a 2.6% throughput increase during a one-month trial. The system is currently operating in the plant and providing value to the customer 24/7. In this blog, we present the details of the client's operational case and the solution.
Multi-stage crushing optimization problem
A typical mineral processing plant consists of two major sections: a dry circuit and a wet circuit. As the name implies, a dry circuit does not require water to process ore, and its main purpose is to reduce the ore size until it can be further processed in milling and flotation circuits that form the wet circuit. Dry circuits may vary in terms of arrangement and configuration, but their main purpose remains the same: the gradual reduction of the ore along the circuit.

Dry circuits are often split into several sub-circuits (or unit operations), such as primary, secondary, and tertiary crushing. They are typically separated by large storage buffers, such as stockpiles and silos, which allow independent operation of the sub-circuits; for instance, maintenance can be performed on the secondary circuit while the tertiary circuit still produces the crushed product material.

An example of a dry circuit is shown in Figure 1. Most often, each sub-circuit has its own control layer, mainly consisting of base regulatory control, which controls each piece of equipment separately. In some cases, an advanced process control (APC) layer links the regulatory controllers into one system.
Figure 1 - Example of a dry circuit configuration
Despite the possible efficiency of an APC system within each sub-circuit, it is difficult to implement a control system that takes care of several sub-circuits. This is mainly due to a long, dynamically changing time lag that occurs in large buffers between the sub-circuits, making it hard to keep track of ore properties and their influence on the downstream process. As a result, such multi-stage circuit optimization is often skipped, which can result in a sub-optimal particle size distribution (PSD) being sent to the downstream circuit. This can lead to the downstream circuit struggling to effectively process ore, resulting in a low dry circuit performance as a whole, and the circuit throughput not reaching its maximum potential.

At Conundrum, we have developed readily available buffer models for stockpiles and silos that accurately propagate ore properties and their segregation and connect them to downstream sub-circuit performance via advanced data analytics and machine learning. Once the sub-circuits are linked, we perform de-bottlenecking analysis and use our process modeling and optimization models to optimize the entire dry circuit, rather than greedily optimizing at the equipment or sub-circuit level. In this blog post, we present an operational case with a proven economic effect, confirmed by commercial operational testing.

Client case overview
Description
Figure 2 shows an overview of the client’s crushing circuit, which is divided into primary, secondary, and tertiary sections. In the primary section, a gyratory crusher performs the primary reduction of the ore, which is then stored in the buffer stockpile. In the secondary section with cone crushers, the ore is split into undersize and oversize streams over the grizzly, and the oversize material goes to the coarse crusher (Crusher 1). The mixed crushed and undersize flow is then split over the screener, where the recycle stream goes to the fine crusher (Crusher 3). Crusher 2 can be used as either a coarse or a fine crusher, but for most of the operational time it is used as a coarse crusher, similar to Crusher 1, while some material from Crusher 3 can still partly go to Crusher 2.

The secondary crushing zone product is stored in the silo, which feeds the tertiary crushing zone, where the HPGR further reduces the ore size. The product ore is then separated over the screener and recycled back to the HPGR if the target product size is not achieved. The undersize ore goes to the milling and flotation sections.
Figure 2 - Overview of the client's dry circuit
As we can see, there are two large buffers between the sections: the stockpile and the silo. The average residence time through them is around 8-10 hours under active production of the downstream operations, depending on the currently filled volume. As such, each sub-circuit is controlled separately without a clear understanding of how the product of one part can influence the subsequent one.
Client objective
The client's objective was to identify opportunities to improve the overall circuit throughput, i.e. at the outlet of the HPGR sub-circuit. To achieve this, it was necessary to analyze the production capacity of each sub-circuit and to identify whether the ore produced by the primary and secondary circuits influences the HPGR circuit performance. This understanding would allow us to develop a control and optimization strategy for the entire dry circuit. In the next sections, we describe how the bottlenecks were revealed, as well as the solutions proposed for them.
Process bottlenecks identification
Through a detailed process bottleneck analysis, Conundrum found the following process performance issues and potential improvements:

  • Coarse and segregated ore produced by the secondary crushing circuit reduces the HPGR throughput and the overall dry circuit throughput. Better refining and stabilizing the crushed product ore can therefore improve overall throughput.

  • The grizzly is clogged (choked) for nearly 25% of the operation time, which causes significant operational instability, crushing throughput reduction, and PSD coarsening. On average, grizzly clogging events last 5 hours and reduce the throughput by 5-7%, while certain events may last up to 15 hours with a significant throughput reduction.

  • Control of the secondary crushing circuit closed-side setting (CSS) is extremely passive and does not respond properly to changes in inlet ore properties. More active CSS control can help stabilize the secondary circuit performance, better refine the secondary crushing ore, and optimize the overall dry circuit throughput.

  • The secondary crushing circuit has excess capacity that can be used for overall dry circuit optimization, including better ore refinement.

To identify such bottlenecks in the dry circuit, we followed highly efficient, standardized analysis steps that have proved effective across other relevant projects. Apart from statistical analysis, these steps involve the high-fidelity hybrid data-driven models available in the Conundrum platform.

For crushing circuits in general and for the case under consideration, we take the following steps:
Step 1: Accurate machine learning and physics-based modeling of critical circuit parts, such as the grizzly, screeners, crushers, and silos, that can cause bottlenecks, with subsequent identification of the production losses associated with this equipment.

In this particular case, the grizzly model played an essential role. It accurately models the undersize and oversize mass split, including clogging events. Clogging occurs when most of the feed particle sizes are close to the grizzly aperture size, so ore particles get locked within the grizzly, causing a sharp increase in the overflow. This destabilizes the circuit quite heavily: the circuit throughput starts decreasing and the crusher bin levels rise very quickly. Consequently, the circuit circulating load and product PSD become unstable.

The grizzly model in the Conundrum platform allows accurate identification of clogging events. This is not the case for standard grizzly models in a wide range of industry simulators: the Conundrum models fit the actual production data in real time by incorporating machine learning components, which significantly improves the accuracy of the analysis.
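To make the idea concrete, below is a minimal sketch of a hybrid grizzly model of the kind described here: a simple physics-based partition curve combined with a machine-learned residual correction fitted to plant data, plus a clogging score based on the deviation between the expected and measured undersize split. The feature set, class names, and thresholds are illustrative assumptions, not the production Conundrum model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def physics_undersize_fraction(bins_mm, mass_frac, aperture_mm, sharpness=4.0):
    """First-principles part: approximate undersize mass fraction from the feed PSD
    using a smooth partition curve around the grizzly aperture."""
    passing_prob = 1.0 / (1.0 + (bins_mm / aperture_mm) ** sharpness)
    return float(np.sum(mass_frac * passing_prob))

class HybridGrizzlyModel:
    """Illustrative hybrid model: physics estimate + ML residual fitted to plant data."""

    def __init__(self, aperture_mm):
        self.aperture_mm = aperture_mm
        self.residual_model = GradientBoostingRegressor()

    def _features(self, feed_records):
        phys = np.array([physics_undersize_fraction(f["bins_mm"], f["mass_frac"],
                                                    self.aperture_mm) for f in feed_records])
        # share of feed close to the aperture size -- the condition that drives clogging
        near_aperture = np.array([np.sum(f["mass_frac"][np.abs(f["bins_mm"] - self.aperture_mm)
                                                        < 0.2 * self.aperture_mm])
                                  for f in feed_records])
        moisture = np.array([f["moisture_pct"] for f in feed_records])
        return np.column_stack([phys, near_aperture, moisture]), phys

    def fit(self, feed_records, measured_undersize_frac):
        X, phys = self._features(feed_records)
        # the ML part learns the gap between plant measurements and the physics estimate
        self.residual_model.fit(X, np.asarray(measured_undersize_frac) - phys)
        return self

    def predict(self, feed_records):
        X, phys = self._features(feed_records)
        return phys + self.residual_model.predict(X)

    def clogging_score(self, feed_records, measured_undersize_frac):
        """Large positive score = far less undersize than expected -> likely clogging."""
        return self.predict(feed_records) - np.asarray(measured_undersize_frac)
```

Each feed record here is assumed to be a dictionary with the binned feed PSD (`bins_mm`, `mass_frac`) and a moisture estimate; in practice the features would come from the plant historian and the upstream soft-sensors.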

With the grizzly model fitted to the data, we estimated the production loss at the secondary crushing stage, both historically and in real time. More importantly, we also identified how grizzly clogging affects the stability of the circuit, measured by the variance of the recirculation stream and the circuit outlet P50 and P80 sizes (Figure 3).
Figure 3 - The hybrid machine learning grizzly model makes it possible to uncover hidden circuit bottlenecks such as product PSD coarsening and recirculation reduction and instabilities.
Step 2: Ore properties propagation between the circuits using the stockpile and silo models to identify how the final product of one circuit influences the downstream circuit.

Among the variety of models available in the Conundrum platform, the silo and stockpile models are the ones that best reveal the bottlenecks between the sub-circuits. Specifically, the silo model enables a dynamic shift of ore properties over the buffer: the residence time depends on the throughput rate as well as on the filled volume of the silo at any given point in time.

Again, the silo model in the Conundrum platform is not only physics-based but also data-driven, so it accurately describes the data. The model tracks the inlet ore properties, e.g. PSD, through the silo within each control volume (CV), depending on the throughput rate from the silo. Due to the segregation effect, the silo level changes differently in different control volumes. The segregation coefficients are learned from the historical level changes and the input/output mass changes. This hybrid approach of combining first principles with machine learning makes it possible to accurately estimate the propagation of ore properties and to reliably analyze the downstream circuit with respect to the propagated properties. In this particular case, it was possible to propagate ore properties from the secondary crushing to the HPGR zone and then identify the production losses caused by the coarse ore size (Figure 4).
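For intuition, here is a deliberately simplified sketch of how ore properties can be propagated through a buffer: the silo is treated as a queue of parcels, the residence time emerges from the filled mass and the discharge rate (the "dynamic shift" above), and segregation is crudely approximated by blending a learned fraction of fresher material into each discharge. The parcel structure and the single segregation coefficient are assumptions for illustration; the Conundrum silo model works on control volumes with learned segregation behavior.

```python
from collections import deque

class SiloPropagationSketch:
    def __init__(self, segregation_coeff=0.15):
        # fraction of each discharge drawn from the newest material instead of the
        # oldest (a stand-in for the learned segregation coefficients)
        self.segregation_coeff = segregation_coeff
        self.parcels = deque()  # each parcel: [mass_t, p80_mm]

    def feed(self, mass_t, p80_mm):
        """Add one parcel of secondary-crushing product to the top of the silo."""
        self.parcels.append([mass_t, p80_mm])

    def discharge(self, mass_t):
        """Remove mass_t tonnes and return the mass-weighted P80 of the drawn ore."""
        drawn, remaining = [], mass_t
        fresh = min(self.segregation_coeff * mass_t, self.parcels[-1][0]) if self.parcels else 0.0
        if fresh > 0:  # segregated short-circuit path from the newest parcel
            self.parcels[-1][0] -= fresh
            drawn.append((fresh, self.parcels[-1][1]))
            remaining -= fresh
        while remaining > 1e-9 and self.parcels:  # plug flow from the oldest parcels
            take = min(remaining, self.parcels[0][0])
            self.parcels[0][0] -= take
            drawn.append((take, self.parcels[0][1]))
            remaining -= take
            if self.parcels[0][0] <= 1e-9:
                self.parcels.popleft()
        total = sum(m for m, _ in drawn)
        return sum(m * p for m, p in drawn) / total if total else None

# Usage: feed hourly secondary-crushing product, discharge at the HPGR draw rate, and
# pair the returned P80 with the HPGR throughput measured over the same period.
silo = SiloPropagationSketch()
silo.feed(mass_t=600.0, p80_mm=38.0)
silo.feed(mass_t=600.0, p80_mm=45.0)
print(silo.discharge(mass_t=400.0))  # P80 of the ore currently leaving the silo
```

Pairing the propagated P80 with downstream measurements is what makes plots like Figure 4 possible: the HPGR throughput distribution can be drawn conditionally on the ore size that actually reached it.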
Figure 4 - The silo model dynamically propagates the ore size and makes it possible to draw the HPGR throughput distributions with respect to the propagated ore size and identify the room for throughput improvement
Step 3: Circuit stability analysis based on physics-aware machine learning modeling approaches

The stability analysis uncovers root causes of unstable circuit behavior such as throughput drops, grizzly and screen mass split disturbances, and HPGR gap skewness instability. The analysis is conducted by means of standardized approaches embedded in the platform models.
Step 4: Detailed analysis of the production volumes and equipment capacity of each sub-circuit, including a thorough review of plant stoppages and buffer near-overloading events.

This analysis is a must, especially in cases with high-volume buffers such as silos and stockpiles. Here, we identified extra capacity in the secondary crushing circuit that could be used to control the circuit product PSD, since coarse ore reduces the HPGR throughput. We therefore proposed using this extra capacity to better control the circulating load of the secondary crushing circuit, stabilizing and reducing the circuit outlet ore size.
Conundrum crushing optimization solution
Based on the bottleneck analysis, Conundrum identified that optimizing the secondary crushing product PSD would make it possible to increase the overall dry circuit production. The major control parameter that influences the product PSD at the circuit outlet is the CSS of the crushers. The onsite metallurgist adjusted the CSS manually and infrequently, often keeping it at a default constant value that did not adequately address ore property disturbances and recirculation instabilities.

Other parameters can be used to control and stabilize the product size, such as feeder control along the circuit; in this particular case, however, the client’s APC system already covered this. At Conundrum, we also provide highly advanced feeder control solutions combined with CSS control, delivering an end-to-end crushing optimization solution.
Predictive and optimization models
Taking the infrequent CSS adjustments and the results of the process bottleneck analysis into account, we delivered the following solution based on readily available, standard Conundrum platform blocks:

  1. The domain-based hybrid models (grizzly, screener, crushers) that produce soft-sensor estimates of the parameters across the circuit.
  2. The machine learning model that forecasts the ore class (coarse/medium/fine) in the next 30 mins at the circuit outlet.
  3. Conundrum Plant-Wide Optimization Engine that finds optimal values of CSS for each crusher such that the ore fragmentation is best refined and the recirculation load of the circuit is close to the setpoint.
  4. A machine learning model explainer that describes the "WHY" behind the ore class model prediction.
Figure 5 - Architecture of the AI-powered Recipe Adviser solution for crushing optimization delivered to the client
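To illustrate the shape of the optimization problem solved by the engine in item 3 above (not its actual implementation), the sketch below searches for CSS setpoints that refine the predicted circuit product P80 while keeping the predicted recirculating load close to its setpoint, within equipment limits. The predictor functions, CSS bounds, and weights are hypothetical placeholders for the hybrid crusher and screener models.

```python
import numpy as np
from scipy.optimize import minimize

CSS_BOUNDS = [(28.0, 40.0), (28.0, 40.0), (16.0, 24.0)]  # mm, Crushers 1-3 (hypothetical limits)
RECIRC_SETPOINT = 350.0                                   # t/h (hypothetical setpoint)

def predict_p80(css, feed_state):
    # placeholder for the hybrid-model call: finer CSS -> finer product,
    # modulated by how coarse the incoming feed is
    return 20.0 + 0.45 * float(np.mean(css)) + 0.1 * feed_state["feed_p80_mm"]

def predict_recirculation(css, feed_state):
    # placeholder: coarser CSS -> more oversize recycled back to the fine crusher
    return 200.0 + 6.0 * (float(np.mean(css)) - 30.0) + 2.0 * feed_state["feed_p80_mm"]

def objective(css, feed_state, w_recirc=0.02):
    # refine the product while keeping the recirculating load near its setpoint
    p80 = predict_p80(css, feed_state)
    recirc_dev = predict_recirculation(css, feed_state) - RECIRC_SETPOINT
    return p80 + w_recirc * recirc_dev ** 2

def recommend_css(feed_state, current_css):
    result = minimize(objective, x0=np.asarray(current_css, dtype=float),
                      args=(feed_state,), bounds=CSS_BOUNDS, method="L-BFGS-B")
    return result.x

print(recommend_css({"feed_p80_mm": 55.0}, current_css=[34.0, 34.0, 20.0]))
```

In the deployed solution the predictions come from the hybrid models and the 30-minute ore-class forecast, and the recommended CSS values are applied by the metallurgist rather than written directly to the control system.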
Solution deployment

Building offline models is not enough to deliver a solution that works in real time and at scale. At Conundrum, we deliver solutions within the platform, which covers the following production aspects:
  1. Getting data from the plant data storage, contextualizing, and storing it in a database in a structured form
  2. Distributing data across active applications upon request or schedule
  3. Monitoring data quality in real-time
  4. Containerization and smooth deployment of machine learning application pipelines
  5. Storing machine learning application predictions and associated artifacts
  6. Monitoring model metrics and retraining the models when required
  7. Visualizing the application results for the end user
  8. Monitoring of the entire system performance, health, and important operation metrics.
Each machine learning application is a microservice, deployed in a separate container, and can be run on any schedule. The application is constructed from the standard Conundrum pipeline blocks and can be set up from a configuration file to do the following:
  • Receive and combine all the necessary data sources
  • Clean the combined data and prepare the required features using domain hybrid models
  • Run the hybrid and machine learning models in the required sequence in real-time and in the training mode
  • Run the optimization engine to get the optimal control action
  • Postprocess the results to store in the database and present to the end user
  • Validate the data transformation results at any pipeline step
Figure 6 shows an example of a typical machine learning application pipeline in which we package our models and deliver them to our clients.

Figure 6 - Overview of the application pipeline for deploying machine learning and hybrid models in the Conundrum platform
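As a rough illustration of how such a configuration-driven pipeline could be wired together, here is a minimal sketch in which each block is a named callable registered once and then sequenced from a config. The block names, registry, and config layout are purely illustrative and are not the Conundrum platform API.

```python
from typing import Callable, Dict

BLOCK_REGISTRY: Dict[str, Callable[[dict], dict]] = {}

def block(name: str):
    """Register a pipeline block under a name that the configuration can refer to."""
    def register(fn):
        BLOCK_REGISTRY[name] = fn
        return fn
    return register

@block("fetch_plant_data")
def fetch_plant_data(ctx):
    ctx["raw"] = {"feed_p80_mm": 55.0, "recirc_tph": 310.0}  # stub for the historian query
    return ctx

@block("validate")
def validate(ctx):
    assert all(v is not None for v in ctx["raw"].values()), "missing plant tags"
    return ctx

@block("run_hybrid_models")
def run_hybrid_models(ctx):
    ctx["soft_sensors"] = {"screen_product_p80_mm": 0.6 * ctx["raw"]["feed_p80_mm"]}  # stub
    return ctx

@block("optimize_css")
def optimize_css(ctx):
    ctx["recommendation"] = {"crusher1_css_mm": 33.0}  # would call the optimization engine
    return ctx

def run_pipeline(config: dict) -> dict:
    ctx: dict = {}
    for name in config["blocks"]:  # the block sequence comes from the configuration file
        ctx = BLOCK_REGISTRY[name](ctx)
    return ctx

config = {"blocks": ["fetch_plant_data", "validate", "run_hybrid_models", "optimize_css"]}
print(run_pipeline(config)["recommendation"])
```

Packaging each such application as its own container keeps the blocks reusable across plants while the configuration stays plant-specific.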
An important part of an application is the User Interface (UI), which shows all the required information in a simple and clear form, so that at each point in time the user fully understands:
  • the recommendations and insights, visualized on the application dashboard, which is configurable via the built-in widget constructor,
  • the application status,
  • how well the models perform,
  • what input is required from the user's side,
  • if the application fails, the reasons for it and how it can be fixed; an automated quick restart of applications ensures continuity.
The Conundrum platform covers all the essential needs, and we continuously improve its features to fully satisfy user requests.
Figure 7 shows the UI of the crushing optimization application delivered to the client. The UI shows the following major components:
  1. The recommended CSS values. We show when the recommendations were given, when the next recommendation can be expected, and within which parameter limits we operate.
  2. The application status, shown in a log widget where the user can read error/warning messages if the solution fails to produce an estimate or suspicious data arrives from the plant.
  3. The hybrid models' predictions and associated confidence.
  4. The expected improvement from applying the recommendation, compared to the historical values.
  5. Explanation of the model prediction.
  6. Visualization of model predictions and the recommendation acceptance rate in a time series form.
  7. Statistical model metrics, computed in real time to monitor for model degradation (a minimal monitoring sketch is shown after Figure 7).

Figure 7 - UI overview of the deployed Conundrum crushing optimization solution
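As an aside on item 7 above, a real-time degradation check can be as simple as a rolling error metric compared against the error observed at deployment time; the sketch below is an assumed, simplified version of such a monitor, not the platform's built-in implementation.

```python
from collections import deque
import math

class ModelDegradationMonitor:
    """Rolling RMSE over the latest reference measurements, with a drift flag."""

    def __init__(self, baseline_rmse, window=200, drift_factor=1.5):
        self.baseline_rmse = baseline_rmse  # RMSE measured at deployment / last retrain
        self.drift_factor = drift_factor    # tolerated degradation before flagging
        self.sq_errors = deque(maxlen=window)

    def update(self, predicted, measured):
        """Call whenever a new reference measurement arrives from the plant."""
        self.sq_errors.append((predicted - measured) ** 2)
        return self.rolling_rmse()

    def rolling_rmse(self):
        return math.sqrt(sum(self.sq_errors) / len(self.sq_errors)) if self.sq_errors else 0.0

    def needs_retraining(self):
        return (len(self.sq_errors) == self.sq_errors.maxlen and
                self.rolling_rmse() > self.drift_factor * self.baseline_rmse)

monitor = ModelDegradationMonitor(baseline_rmse=2.0)
monitor.update(predicted=36.5, measured=38.1)
print(monitor.rolling_rmse(), monitor.needs_retraining())
```

In the UI this boils down to showing the live metric next to its baseline and raising a warning when the retraining condition is met.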

Figure 8 shows a supporting dashboard with the main soft-sensors produced by the physics-aware machine learning models of each piece of equipment, as well as the model outcomes that can be compared with the reference measurements installed in the circuit. Please note that the soft-sensors are not measured in the field, so the Conundrum solution creates additional insights for the metallurgist through the advanced modeling techniques available in the platform. For instance, in the figure we see that the P80 of all the circuit streams starts changing heavily when the grizzly starts clogging. As a consequence, the recirculation load decreases and becomes unstable, which negatively impacts the throughput and the screener product P80.

Figure 8 - UI of the physics-aware machine learning outcomes of the crushing optimization solution running in real time in the client’s environment

Operational testing results
In close collaboration with the client site team, we conducted operational testing of the solution, during which the client metallurgist applied the CSS recommendations produced by the Conundrum solution every 30 minutes. The testing results confirmed that:

  • The dry circuit throughput improvement is statistically significant and reaches 2.6%.

  • The Conundrum grizzly model identified clogging events, on average, 5.5 hours before the crushing circuit was stopped to unclog the grizzly.

  • The provided CSS recommendations improved the PSD reduction from the crushing circuit by 4.6%, which subsequently improved the dry circuit throughput.

  • The Conundrum platform produced estimates for over 99.7% of the operation time, showing highly reliable performance under various operational and data conditions.

When deploying and testing a solution, we consider the statistical analysis of the operational testing results as important as the solution development itself, to ensure that the solution will continue to deliver economic benefits to the client after the testing is complete. As such, for every deployed solution, we provide a comprehensive analysis that shows not only the estimated effect but also its statistical significance.

In this particular case, a univariate statistical test and a multivariate regression statistical test were performed to obtain a robust estimate of the throughput improvement. Both methods showed statistical significance, and their two estimates of the improvement agreed on a value of 2.6%. The throughput distributions with and without the Conundrum Recipe Adviser are shown in Figure 9.
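For readers who want to reproduce this kind of check on their own data, the sketch below shows the general shape of the two analyses: a univariate Welch t-test on throughput with and without the recommendations applied, and a multivariate OLS regression with a treatment indicator that controls for feed conditions. The column names, confounders, and the synthetic data are illustrative assumptions, not the client's dataset or the exact tests used.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 720  # roughly one month of hourly records (synthetic data for the sketch)
df = pd.DataFrame({
    "adviser_on": rng.integers(0, 2, n),     # 1 = CSS recommendations applied
    "feed_p80_mm": rng.normal(55, 5, n),     # assumed confounder
    "ore_hardness": rng.normal(12, 1.5, n),  # assumed confounder
})
df["throughput_tph"] = (900 - 2.0 * df["feed_p80_mm"] - 8.0 * df["ore_hardness"]
                        + 23.0 * df["adviser_on"] + rng.normal(0, 25, n))

# Univariate test: difference in mean throughput between the two operating modes
on = df.loc[df.adviser_on == 1, "throughput_tph"]
off = df.loc[df.adviser_on == 0, "throughput_tph"]
t_stat, p_value = stats.ttest_ind(on, off, equal_var=False)
print(f"uplift = {on.mean() - off.mean():.1f} t/h, p = {p_value:.4f}")

# Multivariate test: treatment effect while controlling for feed conditions
model = smf.ols("throughput_tph ~ adviser_on + feed_p80_mm + ore_hardness", data=df).fit()
print(model.params["adviser_on"], model.pvalues["adviser_on"])
```

When the two estimates agree, as they did here at 2.6%, the improvement can be reported with far more confidence than a raw before/after comparison would allow.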

Figure 9 - Throughput distributions with and without Conundrum Recipe Adviser solution and the difference between the means

Concluding remarks and way forward
In this blog post, we described a dry circuit throughput optimization case, from the bottleneck identification step to the delivery of a solution built upon the standard modeling and deployment blocks of the Conundrum platform. The solution was successfully tested in the plant, with an economic benefit of 2.6% measured by the throughput improvement.

At Conundrum, we continue improving the platform's readily available solutions and blocks. For crushing applications specifically, we keep improving the existing models and testing new control strategies that will allow us to build a more accurate and fully tunable digital twin simulation engine for optimizing the process in real time.

Moreover, we continue working on making it possible to build such solutions directly from the UI (i.e. low-code), so that even a user inexperienced in data analytics and machine learning will be able to create an accurate solution for a specific plant.