Conference papers and presentations


Forthcoming papers

2019 Process Development Symposium Europe (Frankfurt, Germany, September 25-26, 2019)

1. How Modern Digital Design Approaches can Help Realise the Potential of Process Intensification
Mark Matzopoulos, Process Systems Enterprise Ltd., London, UK

Process Intensification (PI), which aims to dramatically improve manufacturing processes through the application of novel process schemes and equipment, is not a new concept. PI goes beyond the incremental improvements achieved through optimising existing equipment and process schemes, by, for example, combining processing phenomena into fewer and more-integrated processing units in order to achieve step changes in energy efficiency, capital and operating costs and environmental impact. However, despite its obvious potential benefits, PI has yet to transform the process industries, partly because of the perceived risks of bringing new and unproven technologies to market in a conservative industry where mistakes can be costly.

A key challenge is that intensified processes are by definition novel and unproven, in contrast to less-efficient processes that have been well understood for many years and therefore carry less risk. The traditional approach to process development dictates that new processes require extensive construction of prototypes and pilots. However, even exhaustive pilot testing still leaves open questions of operability and reliability, a lack of systematic quantification of the effects of poor performance or failure, and the usual general technology risks (e.g. scale-up) associated with implementing new processes. There is also a perceived lack of design tools and data for developing intensified processes, and a lack of generalised workflows for dealing with the complexity of intensified, integrated modular systems.

All of this means that significant advantages can be realised from applying emerging digital design approaches that allow rapid and systematic exploration of the process decision space and rigorous quantification and management of technology risk. Digital design employs a model-based approach coupled closely with targeted experimentation. Experimentation is used to support the construction of a high-fidelity predictive model (or ‘digital twin’ in digital design terminology); once a model of sufficient accuracy is established, the digital twin, rather than the experimental data, is used to optimise the process design and operation.

This presentation describes, with brief illustrations, the established digital design techniques, technologies and workflows that can be applied across the intensified process development lifecycle to accelerate development and manage risk systematically. Specific topics include: capturing novel IP in high-fidelity models; validation of models against experimental and pilot data using integrated design and experimentation workflows that minimise experimentation time and cost; the application of global system analysis (GSA), a key technology for exploring the design and operational decision space, understanding sensitivity to key process parameters and quantifying and managing uncertainty and risk; and steady-state and dynamic optimisation for determining the optimal process design, taking into account operability issues or complex operating schedules.



Middle East Process Engineering Conference and Exhibition (Bahrain, October 14-16, 2019)

1. The Digital Refinery: Using Artificial Intelligence to Optimise Utilities Planning
Gerardo Sanchis, Process Systems Enterprise Ltd., London, UK

As large consumers of utilities, oil refineries have a real opportunity to become more efficient and to reduce their costs and emissions by optimising utilities management. Managing the utility systems is challenging due to the size and complexity of the networks, frequent changes in plant conditions and tightening emissions legislation. In this context, Industry 4.0 provides a unique opportunity to connect utility system operations to smart digital technologies such as machine learning, artificial intelligence and big data. The objective is to deliver insights that enable better and faster decisions and achieve Operational Excellence.

Utilities management involves making decisions about future operation. For example, refineries may need to inform electricity suppliers of the intended purchase volume hours or days in advance, and penalties are payable if the actual imported electricity differs from the nominated volume. Refineries also need to decide when to schedule maintenance of equipment such as boilers and turbines, and to prepare for shut-down/start-up of units, when finding the optimal utilities mix is most challenging. The conventional approach is to run a utility system optimiser to determine the optimal operation that will meet future utilities demands. However, this involves predicting future utilities demands and prices, typically using correlations based on historical data. This requires, first, processing and cleaning large data sets, removing outliers and correcting for missing data, and second, finding relationships between utilities demands and refinery production planning information such as crude composition and unit loads. Both tasks require a considerable amount of time, so simplifications are often made at the expense of accuracy, leaving potential savings on the table.

This paper proposes a new approach that uses artificial intelligence (AI) within a utilities planning application to predict future utilities demands and prices. AI overcomes both problems stated above. First, data cleaning is not essential, as neural networks are able to identify patterns despite data anomalies. Second, AI algorithms are able to identify relationships and build correlations that capture all interactions. This approach requires less time spent on data cleaning and analysis and provides more accurate predictions.
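
As a rough illustration of the kind of neural-network demand model described (not the application itself), the sketch below fits a small regressor that maps refinery planning inputs to a utility demand; the feature names, synthetic data and scikit-learn-based implementation are all illustrative assumptions.

    # Illustrative sketch only: a small neural-network regressor mapping assumed
    # planning inputs (crude rate, unit load, ambient temperature) to a steam demand.
    # In practice the inputs would come from the plant historian and planning system.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical historical records: [crude rate, CDU load fraction, ambient temperature]
    X = rng.uniform([100, 0.6, 5], [200, 1.0, 35], size=(1000, 3))
    # Hypothetical steam demand (t/h) with noise, standing in for historian data
    y = 0.4 * X[:, 0] + 30 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 2, 1000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = make_pipeline(
        StandardScaler(),   # scale inputs so training is well-conditioned
        MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
    )
    model.fit(X_train, y_train)
    print("R^2 on held-out data:", model.score(X_test, y_test))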

A Digital Refinery case study using AI, digital-twin and utilities optimisation technologies is presented. The application first predicts future electricity, fuel gas and steam demands and prices using an AI model previously trained on historical data. It then uses a digital twin of the utility system to perform a multi-period optimisation that minimises the operating costs of the future planned periods. The results of the study showed a reduction of more than 5% in the cost of purchased utilities, largely due to a much better electricity nomination strategy. Further cost reductions were identified as a consequence of better load allocation across boilers, cogeneration units and turbines and better redistribution of utilities across the networks.
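
To make the multi-period idea concrete, the toy linear programme below chooses, for each planning period, how much electricity to import versus generate on site so as to minimise cost; the demands, prices and capacity figures are invented and are not the case-study numbers.

    # Illustrative sketch only: a toy multi-period dispatch LP. Each period the plant
    # covers its electricity demand from grid imports plus on-site cogeneration.
    import numpy as np
    from scipy.optimize import linprog

    demand = np.array([40.0, 55.0, 70.0, 50.0])        # MW per planning period (assumed)
    grid_price = np.array([60.0, 90.0, 120.0, 80.0])   # $/MWh forecast (e.g. from the AI model)
    cogen_cost = 75.0                                   # $/MWh equivalent fuel cost (assumed)
    cogen_max = 45.0                                    # MW on-site generation capacity (assumed)
    T = len(demand)

    # Decision variables: [e_1..e_T (grid import), g_1..g_T (cogeneration)]
    c = np.concatenate([grid_price, np.full(T, cogen_cost)])

    # Demand coverage per period: e_t + g_t >= demand_t  ->  -e_t - g_t <= -demand_t
    A_ub = np.hstack([-np.eye(T), -np.eye(T)])
    b_ub = -demand

    bounds = [(0, None)] * T + [(0, cogen_max)] * T
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

    print("Grid import per period (MW):", res.x[:T].round(1))
    print("Cogeneration per period (MW):", res.x[T:].round(1))
    print("Total cost ($):", round(res.fun, 1))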

Key take-aways are: how AI sits in an Industry 4.0 site strategy; how to use AI to predict future utilities demands and prices; and an industrial case study showing the benefits that can be derived from an AI approach.



2. Predictive Maintenance and Asset Management by Combining High-Fidelity and Data-Driven Models
Luis Domingues, Steve Hall, Process Systems Enterprise Ltd., London, UK

The asset-intensive nature of the manufacturing industry is a key motivation for optimizing the tracking, utilization and management of assets under Industry 4.0 strategies. Recent advances in artificial intelligence, IoT, big data and machine learning have provided tremendous scope for the development of integrated networks of automation devices, services and enterprises. In the chemical industry, the smart plant represents an approach to delivering optimal performance of plant assets. It focusses on the intelligent use of plant data and performance predictions to make decisions that deliver operating benefits, which can be classified as economic, environmental, safety and quality benefits.

This paper presents our ideas and experiences of smart plant applications in refinery crude distillation units. Predictive operational control and real-time optimization technology, fuelled by artificial intelligence and machine learning, have the potential to revolutionize the operation of a refinery. Key applications of the technology include reducing quality and safety issues, optimizing raw material and energy consumption, and improving the operational uptime of equipment. The area of application explored in this work is the predictive maintenance and asset management of heat exchangers in a refinery. Heat exchanger fouling in the crude preheat train of modern refineries continues to be a major issue for both energy efficiency and environmental impact. It reduces preheat temperatures, necessitating higher heater duties and increased fuel consumption, which in turn leads to higher fuel costs and emissions. It can also reduce throughput and lead to reliability and maintenance issues.

Recent developments in our understanding of fouling initiation and propagation have led to the development of a predictive maintenance model. In this work, we show how an artificial neural network model is used to model fouling resistance in heat exchangers in the refinery crude preheat train, and how it is integrated with PSE’s gPROMS in order to simulate the preheat train exchangers under clean and fouled conditions, not only to check that current operation is as expected but also to predict future behaviour. Historical operating data from the refinery is obtained to build the neural network models; it is validated to ensure consistency and then trended to build the heat exchanger network model. Crude and product properties are predicted using gPROMS from the crude and product inlet temperatures together with crude blend information. The input variables of the neural network model consist of crude and product properties, inlet temperatures and flow rates; the output variables include the fouling resistance and the crude and product outlet temperatures.
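
As a simplified illustration of layering a data-driven fouling estimate on a first-principles calculation (the gPROMS model in this work is far more detailed), the sketch below feeds a hypothetical neural-network fouling-resistance prediction into a textbook effectiveness-NTU exchanger calculation; the stand-in predictor and all stream data are assumptions.

    # Illustrative sketch only: coupling a (hypothetical, pre-trained) data-driven
    # fouling-resistance estimate with a simple counter-current exchanger calculation.
    import numpy as np

    def predicted_fouling_resistance(features):
        """Stand-in for the trained neural network; returns R_f in m2.K/W."""
        # In the real application this would be something like model.predict(features)
        return 0.0008

    def counterflow_outlets(m_hot, cp_hot, T_hot_in, m_cold, cp_cold, T_cold_in,
                            U_clean, area, R_f):
        """Outlet temperatures of a counter-current exchanger with fouling resistance R_f."""
        U = 1.0 / (1.0 / U_clean + R_f)              # fouled overall heat-transfer coefficient
        C_hot, C_cold = m_hot * cp_hot, m_cold * cp_cold
        C_min, C_max = min(C_hot, C_cold), max(C_hot, C_cold)
        NTU, Cr = U * area / C_min, C_min / C_max
        eff = (1 - np.exp(-NTU * (1 - Cr))) / (1 - Cr * np.exp(-NTU * (1 - Cr)))
        Q = eff * C_min * (T_hot_in - T_cold_in)     # duty from effectiveness-NTU method
        return T_hot_in - Q / C_hot, T_cold_in + Q / C_cold

    R_f = predicted_fouling_resistance(features=None)
    T_prod_out, T_crude_out = counterflow_outlets(
        m_hot=30.0, cp_hot=2500.0, T_hot_in=280.0,     # product stream (assumed values)
        m_cold=50.0, cp_cold=2300.0, T_cold_in=120.0,  # crude stream (assumed values)
        U_clean=500.0, area=200.0, R_f=R_f)
    print(f"Crude outlet temperature with fouling: {T_crude_out:.1f} C")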

In the approach described, we convert all fouling effects into cost and environmental impacts. This allows the refinery’s operational and maintenance decisions to be based on both economic reality and full knowledge of the sustainability implications. In conclusion, this paper shows how a data-driven model layered on top of a high-fidelity model can be applied and how it gives better results. The approach has been applied successfully here to fouling, but it has wide application in improving asset management.



2019 AIChE Annual Meeting (Orlando, November 10-15, 2019)

1. Investigating Effects of Dynamic Process Variability in Continuous Direct Compression on Tablet Quality Attributes through a Science-Based Digital Twin
Dana Barrasso, Xin Li, Sean K. Bermingham, Process Systems Enterprise; Gavin Reynolds, AstraZeneca

Continuous direct compression (CDC) is emerging as a common platform for continuous tablet manufacturing. In a CDC line, several sources of process variability are introduced through individual unit operations, including feeder flow rates (both due to refill events and higher frequency noise), micromixing of the blend, and tablet press flow and die filling behavior. These dynamic disturbances can influence the final tablet composition, porosity, strength, and mass.

In this work, the combined effects of dynamic sources of process variability on tablet quality attributes will be investigated through a science-based digital twin encompassing loss-in-weight feeders, refill units, continuous blenders, a surge hopper, a feed frame, and a tablet press, shown as a flowsheet model in Figure 1. Model validation requirements and workflows for the loss-in-weight feeder and continuous blender will be presented, and the impact of feeder control modes and settings will be explored. The significance of the height of material in the surge hopper for disturbance propagation will be demonstrated. Dynamic variability in the tablet press will be introduced through variations in blend composition and bulk density in the die. Tablet-to-tablet variability will be quantified through the distributions of quality attributes predicted by the tablet tester model. The implications for process risk assessment and mitigation and for process control strategies will be discussed.

Figure 1: Flowsheet model of continuous direct compression line, including five feeders, two blenders, a surge hopper, feed frame, and tablet press and associated sensors. Results figures show fluctuations in feeder flow rates and blend composition throughout the system in response to disturbances.



2. The Effect of Particle Sedimentation on the Performance of Pressure Filters
I.S. Fragkopoulos, University of Leeds; N.A. Mitchell, Process Systems Enterprise (PSE) Ltd; C.S. MacLeod, AstraZeneca; S. Mathew, Pfizer; F.L. Muller, University of Leeds

Pressure cake filtration is commonly used in the pharmaceutical industry to separate solids from the crystallisation slurry. The applied pressure has a large impact on the particles within the cake and in turn affects the resistance to liquid flow through the cake [1]. It can cause the cake to compress and can clog the filter medium, both of which slow down the filtration process. Although pressure can be increased to maintain a sufficient rate of filtration, this further compresses the cake and/or leads to particle breakage. Filtration performance cannot currently be predicted accurately: process scale-up is based on extensive experimentation, yet scale-up surprises still occur between drug discovery and manufacturing. The main focus of this work was the development of a detailed mechanistic filtration model that takes experimental data and estimates filtration process parameters, so as to enable the prediction and control of pressure filter performance at scale. The pressure filtration scale-up strategy followed in this study is depicted in Figure 1.

Figure 1: Pressure filtration scale-up workflow.

Lab-scale filtration experiments were performed to investigate the effect of the size and shape of AZ and Pfizer materials on filtration performance. An improved t/V filtration analysis model [2], which considers the fraction of solids already settled before filtering starts, was used in conjunction with the lab-scale filtration curves to estimate filtration parameters such as the medium resistance, specific cake resistance and compressibility index of the cake.
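
For reference, t/V analysis of this kind builds on the classical constant-pressure cake filtration relation (a standard textbook result; the improved model of [2] extends it to account for pre-settled solids):

    \frac{t}{V} \;=\; \frac{\mu\,\alpha\,c}{2\,A^{2}\,\Delta P}\,V \;+\; \frac{\mu\,R_m}{A\,\Delta P}

where t is the filtration time, V the cumulative filtrate volume, μ the filtrate viscosity, α the specific cake resistance, c the mass of solids deposited per unit volume of filtrate, A the filter area, ΔP the applied pressure difference and R_m the medium resistance. Plotting t/V against V gives a straight line whose slope and intercept yield α and R_m.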

Conventional Ruth-equation based filtration models are currently capable of describing only two extreme cases, where:

a) There is no sedimentation prior to filtration and the cake builds up while filtering, assuming a well-dispersed slurry (see Fig. 2a).
b) Complete sedimentation is followed by filtration, i.e. permeation of liquid through a settled cake (see Fig. 2b).

However, in reality the cake is commonly already partly formed when filtration is initiated, and the slurry does not remain well-dispersed for the entire filtration duration (a layer of clear liquid forms at the top of the slurry due to particle settling, as in Fig. 2c).

Figure 2: Schematic representation of slurry in systems with (a) no crystal sedimentation prior to filtration, (b) entire crystal sedimentation prior to filtration and (c) partial crystal sedimentation prior to filtration.

For this reason, a lumped-parameter filtration-sedimentation model, currently being prototyped in PSE’s gPROMS FormulatedProducts suite, was developed and used to show the effect of sedimentation during filtration on process performance. Predicted filtration times at pilot-plant and manufacturing scales were found to increase by more than 10% and 15% respectively (for d50s of 10 µm) when particle settling is taken into account.

The support of the Advanced Manufacturing Supply Chain Initiative through the funding of the ‘Advanced Digital Design of Pharmaceutical Therapeutics’ (ADDoPT) project (Grant No. 14060) is gratefully acknowledged.



3. Achieving Particle Size and Impurity Control for a Continuous Crystallization Process Using a Digital Design Approach
Niall Mitchell, Filipe Calado, Process Systems Enterprise; Christopher L. Burcham, Steven Myers, Eli Lilly and Company

The attainable regions for critical quality attributes such as Particle Size Distribution (PSD) and impurity levels in the manufacture of Solid Oral Dosage Forms (SODF) can depend strongly on the type and operation of the crystallization process. In this work, we outline a step-wise workflow, consisting of model validation, model-based technology transfer and process optimisation, employed for the digital design of a continuous cascade cooling crystallization and wet milling process for manufacturing an Active Pharmaceutical Ingredient (API). Following this workflow, the mechanistic model was first validated using batch crystallization data. It was subsequently applied to describe the continuous crystallization and wet milling process and to explore the impact of varying process parameters and process configurations (namely the position of the wet mill unit in the continuous crystallization process, in recycle with either the first or the third crystallization stage) on the attainable processing regions for particle size and purity, subject to various process constraints.
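
For context, mechanistic crystallization models of this kind are typically built around a population balance; a generic form for a well-mixed continuous (MSMPR-type) stage with growth, nucleation and breakage terms is sketched below. This is a textbook formulation rather than the exact gPROMS implementation used here.

    \frac{\partial n(L,t)}{\partial t}
      = -\,\frac{\partial \big[ G\, n(L,t) \big]}{\partial L}
        \;+\; B_0\,\delta(L - L_0)
        \;+\; B_{\mathrm{break}}(L,t) \;-\; D_{\mathrm{break}}(L,t)
        \;+\; \frac{\dot{V}_{\mathrm{in}}\, n_{\mathrm{in}}(L,t) - \dot{V}_{\mathrm{out}}\, n(L,t)}{V}

where n(L,t) is the crystal number density as a function of characteristic size L, G the supersaturation-dependent growth rate, B_0 the nucleation rate of nuclei of size L_0, B_break and D_break the birth and death terms due to breakage (wet milling), and the final term accounts for the inlet and outlet flows of a continuous stage of volume V.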

The step-wise approach taken for model validation and its subsequent application was as follows:

1. Model validation of crystallization kinetics for the pure API solid phase from batch de-supersaturation experimental runs.
2. Application of the validated crystallization kinetics to describe the continuous crystallization of pure API in a three-stage cascade process, as shown in Figure 1 below (wet milling was not considered at this stage).
3. Refinement of the crystallization kinetic parameters for the pure API solid phase, utilising targeted data sets suggested by the model fitted to batch data in step 1.
4. Optimisation of the continuous crystallization process without wet milling to determine its feasibility.
5. Model validation of breakage kinetic parameters using batch data from a wet milling unit operated in a recycle with a batch crystallizer. A rotor-stator wet mill was utilised for this process, with a range of rotor frequencies and milling head generator configurations probed to assess the impact on the PSD over time and to fit the breakage kinetic parameters.
6. Optimisation of the combined crystallization and wet milling process for pure API to achieve a target product PSD. The configuration of the process, in terms of the position of the wet mill unit in a recycle with either the first or the third crystallization stage, was also probed.
7. Model validation of crystallization kinetics for an API dimer solid phase from batch de-supersaturation experimental runs. The API dimer forms as a separate solid phase that can crystallize during the process and is considered a process impurity.
8. Optimisation of the combined crystallization and wet milling process for the pure API and API dimer solid phases (impurity) to achieve the target product PSD and the desired impurity level.

Conclusions

The main conclusions of the work include the following:

The continuous crystallization process was unable to achieve the desired product PSD without wet milling, due to very low levels of secondary nucleation in the system. Optimal PSD quantiles (d10, d50 and d90) predicted for the product were significantly higher than the target PSD quantiles required to achieve the desired product performance for this compound.

Addition of a wet milling step in a recycle with the continuous crystallization significantly extended the attainable region towards lower PSD quantiles and impurity levels, allowing the process to comfortably achieve the desired range of product PSD quantiles.

In terms of configurations for the continuous process, it was found that having the wet mill placed in a recycle with the first crystallization stage was more effective at reducing product PSD and at decreasing the impurity level of the material produced, compared to having the wet mill placed in a recycle with the third stage.

Mechanistic modelling approaches can be utilised to significantly reduce development timelines and material consumption, and to improve the efficiency, of continuous API crystallization production processes.

Figure 1: Simulation flowsheet model used for optimisation of the continuous crystallization and wet milling process.



4. Digital Design and Operation of Continuous Crystallization Processes Via Mechanistic Modelling
Niall Mitchell, Process Systems Enterprise; John Mack, Furqan Tahir, Eduardo Lopez-Montero, Perceptive Engineering; Cameron Brown, Strathclyde Institute of Pharmacy and Biomedical Sciences; Tariq Islam, John Robertson, Alastair J. Florence, Strathclyde University

Mechanistic models are increasingly applied in Research and Development in the pharmaceutical sector to gain process understanding and to enable process design and operation. Traditionally, the output of this activity is a validated mechanistic model capable of quantitatively predicting the behaviour of the various Critical Quality Attributes (CQAs) of typical batch or continuous pharmaceutical processes over a wide range of Critical Process Parameters (CPPs). However, these tools are currently employed almost exclusively offline to enable digital design efforts, primarily aimed at assessing process robustness and variability, with very little subsequent online application of the mechanistic model for control or soft sensing.

Model Predictive Control (MPC) is an established technology in the process industries. It uses a statistical model of the process to capture the dynamic relationships between the inputs (CPPs) and outputs (CQAs) of the process. Using this statistical model, it predicts the impact of known disturbances on operation and controls the process through co-ordinated moves on multiple inputs. The MPC exploits all opportunities to reduce variability in the CQAs whilst compensating for measured and unmeasured disturbances.

The statistical process model is built from process response test data at production scale using techniques such as Pseudo-Random Binary Sequence (PRBS) testing or step tests. The PRBS test is a relatively non-invasive technique compared with traditional step tests, as it allows the product to remain within specification whilst generating statistically rich information for modelling. Although PRBS testing is suitable for many industries, it cannot be used in the pharmaceutical sector because product generated during testing cannot be used for clinical or commercial supply. Consequently, the cost of generating the statistical control model would be significant, presenting a barrier to uptake of the technology. This drawback can be overcome by integrating mechanistic models, developed using laboratory-scale data, with an MPC system such as Perceptive Engineering’s PharmaMV platform via a digital design approach.
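
As a rough illustration of the kind of excitation signal involved, the sketch below generates a simple pseudo-random binary test sequence around a hypothetical setpoint; a formal PRBS design would use a maximal-length shift register with a switching time matched to the process dynamics, and all numbers here are made up.

    # Illustrative sketch only: a simple pseudo-random binary test signal around an
    # assumed crystallizer jacket-temperature setpoint, with a minimum hold time.
    import numpy as np

    rng = np.random.default_rng(1)

    setpoint = 25.0        # deg C, nominal setpoint (assumed)
    amplitude = 0.5        # deg C, +/- perturbation kept small to stay within specification
    hold = 5               # samples each level is held for (related to the process time constant)
    n_switches = 40

    levels = rng.integers(0, 2, n_switches)              # random binary sequence {0, 1}
    signal = setpoint + amplitude * (2 * levels - 1)      # map {0, 1} -> {-A, +A} around the setpoint
    excitation = np.repeat(signal, hold)                  # enforce the minimum switching time

    print(excitation[:20])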

In this work we outline the application of an advanced process modelling tool, gPROMS FormulatedProducts, to describe a number of pharmaceutical crystallization processes. The mechanistic process model and its kinetic parameters were validated using process data gathered from the literature and from lab-based experiments. The lab-based mechanistic model was subsequently used to predict the behaviour of the full-scale production process.

The validated mechanistic model was subsequently integrated with PharmaMV to develop and tune the MPC against the mechanistic simulation of the process, by using the mechanistic model as a Digital Twin or Virtual Plant as follows:

gPROMS: Build mechanistic model
gPROMS: Small scale parameterisation experiments & mechanistic model validation
gPROMS + PharmaMV: Validate/check mech model against full scale data
gPROMS + PharmaMV: Build MPC using mechanistic model as a digital twin
PharmaMV: Transfer MPC to live process and test

With this approach, the MPC derived from the mechanistic model was utilized to accurately control the defined CQAs, such as final particle attributes (PSD, yield), for continuous crystallization processes, with reduced material wastage at the production scale.



5. Optimization of the Operation of Integrated, Multi-Plant Systems
Apostolos Giovanoglou, Constantinos C. Pantelides, Process Systems Enterprise Ltd.

The use of rigorous model-based techniques for optimizing the operation of process plants is now well established, and such techniques are implemented and deployed in both offline and online (“real-time optimization”) tools. However, attention is increasingly shifting towards the optimization of integrated systems involving several plants sharing raw materials, intermediates and products. This is primarily driven by the fact that decisions at the overall system level (e.g. feed allocation between plants) often have a much more significant effect on the system’s economic performance, and on its ability to fulfil its requirements, than decisions at the level of individual plants.

Optimization of integrated multi-plant systems is not new. In industry, it has traditionally been carried out using linear models, which greatly facilitates the solution of the underlying mathematical problem. However, this often results in solutions that actually fail to satisfy basic plant operability constraints. To some extent, this can be avoided by applying safety margins to various constraints and/or restricting the allowable range of variation of the decision variables. However, given the typical size of the money flows in these systems, the resulting sub-optimality of the solutions obtained often corresponds to significant loss of economic opportunity.

In this paper, we present a general framework for the optimization of multi-plant systems that addresses the characteristics and requirements of real industrial applications. To exploit the increasing availability of detailed physics-based models for individual plants, the proposed framework combines these detailed models with simpler (surrogate) models derived automatically from them.
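
A toy illustration of the surrogate idea (not the framework itself) is sketched below: two stand-in “detailed” plant models are sampled, simple quadratic surrogates are fitted to the samples, and the shared feed allocation is then optimised on the surrogates; all functions and numbers are invented.

    # Illustrative sketch only: surrogate-based optimization of a shared feed split
    # between two plants whose "detailed" models are stood in for by simple functions.
    import numpy as np
    from scipy.optimize import minimize

    def detailed_plant_a(feed):   # stand-in for a rigorous physics-based plant model
        return 4.0 * np.sqrt(feed) - 0.01 * feed**2

    def detailed_plant_b(feed):
        return 3.0 * np.log1p(feed)

    # Sample each detailed model and fit simple quadratic surrogates to the samples
    feeds = np.linspace(1.0, 100.0, 25)
    coef_a = np.polyfit(feeds, detailed_plant_a(feeds), deg=2)
    coef_b = np.polyfit(feeds, detailed_plant_b(feeds), deg=2)

    def neg_total_profit(x):
        fa, fb = x
        return -(np.polyval(coef_a, fa) + np.polyval(coef_b, fb))   # negated for minimization

    total_feed = 120.0
    res = minimize(neg_total_profit, x0=[60.0, 60.0],
                   bounds=[(1.0, 100.0), (1.0, 100.0)],
                   constraints=[{"type": "eq", "fun": lambda x: x[0] + x[1] - total_feed}])

    print("Optimal feed split on the surrogates:", res.x.round(1))
    # In the framework described, such a surrogate optimum would then be checked and the
    # surrogates refined against the detailed models before implementation.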



6. Systems-Based Pharmaceutics – an End-of-Decade Report
Costas Pantelides, Sean Bermingham, Process Systems Enterprise Ltd.

Systems-based Pharmaceutics (SbP) is a systems engineering methodology for the pharmaceutical industry that encompasses drug substance and drug product manufacturing, as well as in vivo drug performance. The approach aims to capture available relevant knowledge in the form of validated mathematical models. By integrating knowledge across the product lifecycle, SbP allows the quantification of the impact of critical process parameters and other decisions and environmental factors on the product’s critical quality attributes and the process key performance indicators.

This paper assesses the extent to which the above vision, originally formulated at the start of this decade, has been translated into reality. Aspects considered include the extent to which advances in scientific understanding have been incorporated into mathematical models that can be deployed in support of product and process design and process operations; the formal validation of these models; and the use of models in quantifying and managing the risk in model-based decisions, and in identifying practically important gaps in current knowledge. The paper also considers how current trends towards digitalization, and the associated advances in IT technologies, impact the SbP vision and its ongoing realization.



7. A General Digital Applications Platform
Costas Pantelides, Frances Pereira, Penny Stanger, Yiming Yan, Process Systems Enterprise Ltd.

Deep process knowledge captured within physics-based models is a key element for the successful digitalization of process industries. Digital applications making use of models derived from first principles are increasingly being used in industrial practice, particularly in the area of process operations. They include both open-loop applications, such as model-based soft sensing and monitoring of equipment degradation, and closed-loop ones such as Real-Time Optimization and nonlinear Model Predictive Control.

However, most non-trivial digital applications involve much more than a mathematical model being solved in simulation or optimization mode. They often require multiple model-based calculations to be scheduled over time and to exchange data with each other, while potentially being subject to occasional failures. They also involve extensive communication with external data servers, such as distributed control systems, plant historians, commercial databases and user dashboards. Moreover, the data involved in all such communications may be subject to systematic and/or random errors, and may occasionally become unavailable.
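
As a minimal sketch of the kind of resilient, scheduled calculation such a platform must manage (the actual platform architecture is not described here), the following loop wraps a model-based calculation with data-access stubs, retries and a fixed schedule; all function names are hypothetical.

    # Illustrative sketch only: a scheduled model-based calculation with simple
    # retry handling. The data-access and model functions are hypothetical stubs.
    import time

    def read_plant_data():
        """Stand-in for reading from a plant historian / DCS; may raise or return None."""
        return {"feed_rate": 120.0, "temperature": 355.0}

    def run_model_calculation(data):
        """Stand-in for executing a model-based calculation (e.g. a soft sensor)."""
        return {"estimated_conversion": 0.87}

    def publish(results):
        """Stand-in for writing results back to a dashboard or historian."""
        print("published:", results)

    def scheduled_run(period_s=60.0, max_retries=3):
        """Runs the calculation on a fixed period, retrying on failures (runs indefinitely)."""
        while True:
            for attempt in range(max_retries):
                try:
                    data = read_plant_data()
                    if data is None:
                        raise RuntimeError("plant data unavailable")
                    publish(run_model_calculation(data))
                    break
                except Exception as exc:   # log and retry rather than stopping the application
                    print(f"attempt {attempt + 1} failed: {exc}")
            time.sleep(period_s)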

This paper describes a recently developed general software platform for resilient and sustainable digital applications, taking account of the above considerations. The platform significantly reduces the cost and increases the reliability of development, testing, deployment and maintenance of diverse applications within a unified software architecture. Industrial examples illustrating the flexibility of the proposed design are presented.

