Middle East Process Engineering Conference and Exhibition (Bahrain, October 09-11, 2017)
1. Improving overall ethylene plant performance by optimising C2 hydrogenation
Nattawat Tiensai, Kritsada Chotiwiriyakun, Hattachai Aeowjaroenlap, SCG Chemicals
Alejandro Cano, Process Systems Enterprise, Inc., USA
Stepan Spatenka, Sreekumar Maroor, Amit Goda, Process Systems Enterprise Ltd., London, UK
The steam cracking process in an ethylene plant produces a certain amount of acetylene, which needs to be reacted with hydrogen to maintain ethylene product quality as well as to increase ethylene yield. The reaction is typically carried out in a two- or three-stage fixed-bed catalytic reactor with inter-stage cooling. Side reactions include the production of ethane, which needs to be recycled to the cracker, reducing overall plant throughput and increasing energy use. In tail-end C2 hydrogenation units, acetylene oligomerisation produces green oil, which deposits on the catalyst surface and causes catalyst deactivation and loss of selectivity. The challenge for C2 reactor operators is to maximise overall selectivity to ethylene while maintaining product quality.
This paper describes the application of advanced process modelling to a typical C2 hydrogenation tail-end process. The approach uses high-fidelity models of the catalyst bed to predict composition, temperatures and other important attributes to a high degree of accuracy.
Having validated the model against experimental data, it was used to determine optimal operating conditions. The objective of the optimisation was to maximise economic gain by maximising ethylene production. The decision variables used were the time-varying inlet temperature to each bed, the hydrogen flow to each bed and the run length, with constraints including the maximum bed temperatures and outlet C2H2 concentration. By adjusting the inlet conditions to alter the conversion in each of the three beds, it was possible to realise a 13% increase in ethylene gain and a 10% improvement in process economics, and at the same time reduce production of unwanted ethane and green oil. There is now potential to implement the model online for real-time monitoring of catalyst activity and green oil accumulation. A similar approach can be applied to C3 hydrogenation.
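The structure of such an optimisation (per-bed decision variables, constraints on maximum bed temperature and outlet acetylene) can be illustrated with a toy sketch. The surrogate bed model, rate and selectivity expressions, and all numbers below are invented for illustration; they are not the paper's high-fidelity kinetic model.

```python
import math
from itertools import product

def bed(x_in, T, h2):
    """Hypothetical surrogate for one tail-end C2 hydrogenation bed.

    x_in: inlet C2H2 mole fraction; T: inlet temperature (degC);
    h2: H2/C2H2 ratio. Returns outlet fraction, outlet T, ethylene gained.
    """
    k = math.exp(0.05 * (T - 60.0)) * h2              # lumped rate factor
    conv = 1.0 - math.exp(-k)                          # acetylene conversion
    # selectivity to ethylene falls with temperature and excess hydrogen
    sel = max(0.0, 0.95 - 0.004 * (T - 60.0) - 0.05 * max(h2 - 1.2, 0.0))
    x_out = x_in * (1.0 - conv)
    T_out = T + 1500.0 * x_in * conv                   # adiabatic exotherm
    return x_out, T_out, x_in * conv * sel

def evaluate(temps, h2s, x0=0.005):
    """Run three beds in series; return outlet C2H2, peak T, ethylene gain."""
    x, gain, t_max = x0, 0.0, 0.0
    for T, h2 in zip(temps, h2s):
        x, t_out, g = bed(x, T, h2)
        gain += g
        t_max = max(t_max, t_out)
    return x, t_max, gain

# Grid search over per-bed inlet temperatures and H2/C2H2 ratios, subject to
# outlet C2H2 <= 1 ppm and maximum bed temperature <= 110 degC.
best = None
for temps in product([50.0, 60.0, 70.0, 80.0], repeat=3):
    for h2s in product([1.0, 1.2, 1.5], repeat=3):
        x_out, t_max, gain = evaluate(temps, h2s)
        if x_out <= 1e-6 and t_max <= 110.0:
            if best is None or gain > best[0]:
                best = (gain, temps, h2s)

gain, temps, h2s = best
print(f"ethylene gain: {gain:.5f}, inlet T per bed: {temps}, H2 ratio per bed: {h2s}")
```

The real problem is of course dynamic (time-varying profiles over the run length) and solved with gradient-based optimisation rather than a grid, but the feasibility-then-objective pattern is the same.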
2. Multi-site optimisation of natural gas processing operations to maximise asset utilisation
Costas Pantelides, Maarten Nauta, Bart de Groot, Process Systems Enterprise Ltd., London, UK
Gathering significant volumes of natural gas and supplying processed gas and associated liquids such as liquefied petroleum gas (LPG) and natural gas liquids (NGL) to consumers usually involves connecting wells in different fields with an extensive network of processing and storage facilities that can be spread across vast geographical areas.
The use of network modelling and optimisation technologies for strategic decision-making can yield substantial benefits not only in economic and environmental terms but also in improved understanding of the interaction between the various components of the process and the overall business. Key benefits include increased profitability through better asset utilisation, improved reliability through the ability to rapidly reallocate production on equipment failure, better investment planning to reduce network bottlenecks, and flare reduction in order to minimise the environmental and economic penalties from inefficient operations.
Conventional approaches to the optimisation of large supply networks usually rely on rather simple models of the individual nodes (e.g. production facilities and processing plants) in these networks, often taking the form of simple, frequently linear, relations between the flowrates of the various materials entering and leaving each node. Whilst this greatly simplifies the solution of the underlying mathematical optimisation problem, it may lead to solutions that are unimplementable in practice; pragmatic adjustments to ensure feasibility almost always lead to sub-optimal solutions which, given the very substantial money flows in such large networks, may translate into significant loss of opportunity.
This paper describes an alternative approach to natural gas supply chain optimisation across distributed sites using a higher level of physical detail in describing the operation of the individual production and processing nodes, thereby ensuring that any solution obtained satisfies all important constraints on the operation of plant equipment. Until recently, this approach was considered to be impractical; however, the combination of modern equation-oriented modelling techniques and the continual evolution of computer processing power now allows large-scale models, comprising detailed models of individual equipment items within wide system envelopes, to be constructed and solved reliably with minimal user intervention. In practical terms, it is now possible to perform optimization of models comprising several hundreds of thousands of nonlinear equations subject to many tens of decision variables and process, equipment and product quality constraints.
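The value of nonlinear node models over linear yield relations can be seen even in a deliberately tiny example. The two-plant allocation below is entirely hypothetical (the recovery curves and capacities are invented, and bear no relation to the network in the paper): recovery per plant degrades as a plant is pushed toward capacity, which a constant-yield linear model would miss.

```python
def recovery(eff, load, capacity):
    """NGL recovered (arbitrary units); efficiency degrades near capacity."""
    util = load / capacity
    return eff * load * (1.0 - 0.3 * util ** 2)

FEED = 140.0                   # total gas to allocate
CAP_A, CAP_B = 100.0, 80.0     # plant capacities (hard equipment constraints)
EFF_A, EFF_B = 0.060, 0.055    # low-load recovery efficiencies (assumed)

# One-dimensional search over the split; infeasible splits are rejected
# rather than "pragmatically adjusted" after the fact.
best = None
steps = 1000
for i in range(steps + 1):
    a = FEED * i / steps
    b = FEED - a
    if a > CAP_A or b > CAP_B:
        continue
    ngl = recovery(EFF_A, a, CAP_A) + recovery(EFF_B, b, CAP_B)
    if best is None or ngl > best[0]:
        best = (ngl, a, b)

ngl, a, b = best
print(f"optimal split: plant A {a:.1f}, plant B {b:.1f}, NGL {ngl:.2f}")
```

A linear model with the low-load efficiencies would push everything to plant A; the nonlinear curves move the optimum to an interior split. The equation-oriented approach in the paper does the same thing at the scale of hundreds of thousands of equations.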
The paper is illustrated with an example involving a large-scale Middle Eastern gas processing network, which shows that gains of 5% on normal processing, representing tens or hundreds of millions of dollars annually, are now possible.
3. Refinery fouling management
Kumar Prashant, Steve Hall, Process Systems Enterprise Ltd., London, UK
Jose Griman, Miguel Angel Castro, Saudi Aramco, Ras Tanura, Saudi Arabia
Hani Al Saed, Saudi Aramco, Dhahran, Saudi Arabia
Heat exchanger fouling in modern refineries continues to be a major issue for both energy efficiency and environmental impact. Fouling is most acute in crude distillation units (CDUs). It reduces preheat temperatures, necessitating higher heater duties that incur increased fuel consumption. This in turn leads to increased fuel costs and higher emissions. It can also reduce throughput and lead to reliability and maintenance issues.
Although fouling itself is unavoidable, its effects can be divided into inevitable and avoidable ones. This abstract describes a method for identifying what is avoidable and how to manage fouling to minimize its effect on cost, the environment, reliability and maintenance. The approach combines advanced process modelling techniques with the latest data visualization techniques in an Operational Excellence (OE) platform. The OE platform allows stakeholders within the refinery to be informed and to participate in sustainable improvement activities through the use of appropriate dashboard technologies.
Recent developments in our understanding of how fouling is initiated and propagates help identify the important variables that can be influenced to mitigate fouling. A critical contribution is the emergence of new equations that enable us to model and predict the fouling of heat exchangers. Although fouling is pervasive within a crude unit heat exchanger network, its critical effects are often localized. This knowledge is exploited to assess how such local effects impact the overall thermal and hydraulic performance of the network, and hence to identify positive actions that minimize the fouling effects, ranging from deep or light cleaning through to flowrate changes.
Another key development is the use of process modelling tools to model network performance through time. By predicting future performance over an operating window, for example up to the next turnaround, the approach ensures that overall costs and environmental impact are properly calculated and informed decisions made. The approach results in heat exchanger networks which achieve a practical balance between thermal efficiency and operability, with all fouling effects presented both as costs and environmental impact.
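The idea of projecting exchanger performance through time can be sketched with a generic asymptotic (Kern-Seaton type) fouling model marched over an operating window. The parameters below are illustrative placeholders, not the refinery-specific fouling correlations referenced in the paper.

```python
import math

RF_INF = 0.0008   # asymptotic fouling resistance, m2.K/W (assumed)
TAU = 90.0        # fouling time constant, days (assumed)
U_CLEAN = 500.0   # clean overall heat transfer coefficient, W/m2.K (assumed)

def fouling_resistance(t_days):
    """Kern-Seaton asymptotic fouling: Rf(t) = Rf_inf * (1 - exp(-t/tau))."""
    return RF_INF * (1.0 - math.exp(-t_days / TAU))

def u_fouled(t_days):
    """Fouled overall coefficient: resistances in series, 1/U = 1/Uc + Rf."""
    return 1.0 / (1.0 / U_CLEAN + fouling_resistance(t_days))

# Project exchanger performance over an operating window up to a turnaround;
# the declining U translates directly into lost preheat and extra heater duty.
for t in (0, 30, 90, 180, 365):
    print(f"day {t:3d}: Rf = {fouling_resistance(t):.2e} m2.K/W, "
          f"U = {u_fouled(t):6.1f} W/m2.K "
          f"({100 * u_fouled(t) / U_CLEAN:.0f}% of clean)")
```

In a network model the same projection is applied exchanger by exchanger, so that the cost of cleaning now can be traded against the duty penalty accumulated between now and the turnaround.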
Another key feature of the approach is the visualization of results. The technology is deployed via various dashboards, depending on the user's role. One dashboard is used by engineering teams to study 'what-ifs', another by operations staff to ensure day-to-day performance is optimal, and another by the maintenance department to plan maintenance schedules which tie in to cost and environmental objectives. Current plant data and operational status are displayed in real-time in the OE platform.
A real-life case study is presented showing how the technology is used to predict the effects of temperature and flow on fouling, and also the effects of more significant process changes such as throughput increases, bypassing exchangers and removing heat exchangers. Associated cost and environmental benefits are presented.
In conclusion, this paper shows that, despite the presence of fouling in modern refineries, much can be done to mitigate its effects. By combining process modelling techniques with dashboards in an OE platform, we have found real benefit in addressing its cost and environmental consequences.
4. Next-generation utilities optimisation for refineries and large-scale chemical production sites
Penny Stanger, Gerardo Sanchis, Frances Pereira, Alfredo Ramos, Process Systems Enterprise Ltd., London, UK
Giel Pluijm, Willem Godlieb, Dorus van der Linden, DSM Ahead BV, Geleen, Netherlands
Refineries and chemical production sites are major consumers of energy in the form of electricity, steam and hydrocarbon feedstocks. Given that tariffs, costs and demands are constantly changing, there is much scope for optimising on-site production, conversion and distribution of energy to minimise cost and emissions, by managing the options available in the most cost-effective way while meeting all constraints of the system. However, many current 'optimisation' applications do not provide full capabilities for maximising economic value or minimising emissions, particularly in situations where demands from major consumers and the prices of the various components can change on an hourly basis.
This paper describes an advanced optimisation platform for managing and optimising utility operation that not only helps planners rapidly optimise equipment selection and load allocation to improve overall efficiency and reduce emissions and operating costs, but also presents operators with a ranked list of possible actions from which they can choose the best course of action given the situation ‘on the ground’.
The approach uses medium-fidelity models of the utilities system and major devices that are coupled with plant operating data via data validation and reconciliation facilities. An advanced optimisation system capable of both continuous and integer decisions determines the economically optimal operating point taking into account equipment and operational constraints, including availability. The resulting mathematical problem is solved within an equation-oriented framework, providing robustness and speed of execution well in advance of most current systems.
Because of the speed of solution, a key feature is the ability to run multiple optimisations and provide operators with a ranked list of potential combinations of changes and their corresponding benefits, within a dashboard tailored for the site. This allows operators to clearly evaluate and discuss which changes are best to apply and when, resulting in advice that is practical and easy to implement and verify. It addresses one of the biggest obstacles to practical realisation of the benefits of optimisation systems, which is gaining operator buy-in to proposed changes.
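The combination of integer (on/off) and continuous (load) decisions, and the ranking of feasible alternatives for operators, can be illustrated with a minimal dispatch sketch. The boiler data, cost structure and demand below are entirely hypothetical, and a real system would use a MILP/MINLP solver rather than exhaustive enumeration; the enumeration simply makes the ranked-list idea concrete.

```python
from itertools import product

BOILERS = {   # name: (min load t/h, max load t/h, cost per t, standing cost)
    "B1": (20.0, 100.0, 18.0, 50.0),
    "B2": (15.0,  80.0, 20.0, 40.0),
    "B3": (10.0,  60.0, 25.0, 30.0),
}
DEMAND = 130.0   # steam demand, t/h

def dispatch(on_names):
    """Cheapest continuous load allocation for a given on/off choice.

    Runs each selected boiler at least at minimum load, then fills the
    remaining demand cheapest-unit-first. Returns (cost, loads) or None
    if the combination cannot meet the demand.
    """
    units = sorted(on_names, key=lambda n: BOILERS[n][2])
    lo = sum(BOILERS[n][0] for n in units)
    hi = sum(BOILERS[n][1] for n in units)
    if not (lo <= DEMAND <= hi):
        return None
    loads, remaining = {}, DEMAND - lo
    for n in units:
        mn, mx, _, _ = BOILERS[n]
        extra = min(remaining, mx - mn)
        loads[n] = mn + extra
        remaining -= extra
    cost = sum(loads[n] * BOILERS[n][2] + BOILERS[n][3] for n in units)
    return cost, loads

# Enumerate the integer decisions and rank all feasible options by cost,
# mimicking the ranked list of actions presented to operators.
options = []
for flags in product([False, True], repeat=len(BOILERS)):
    on = [n for n, f in zip(BOILERS, flags) if f]
    result = dispatch(on)
    if result is not None:
        options.append((result[0], on, result[1]))
options.sort(key=lambda o: o[0])

for cost, on, loads in options:
    print(f"cost {cost:7.1f}: on={on}, loads={loads}")
```

Presenting the whole ranked list, rather than only the optimum, is what lets operators apply judgement about conditions 'on the ground' that the model does not capture.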
The paper is illustrated with an example of implementation on a major European chemical manufacturing site.
5. Changes to industry guidance for relief and blowdown system design and impact on existing infrastructure
Ryan Goggin, Process Systems Enterprise Ltd., London, UK
As the last line of defense for process and equipment integrity, relief, blowdown and flare systems need to be adequately designed and maintained. Systems need to be periodically re-assessed so that they operate safely throughout the life of the facility that they help protect. This talk highlights the importance of ensuring that flare and relief systems remain fit for purpose throughout their lifetime, maintaining accountability for plant modifications, changes to operations, and changes to industry guidance and regulation. Recalling high-profile incidents involving depressurization, relief and flare systems, including Westlake, Grangemouth and Piper Alpha, and others that have led to loss of containment due to excessive vibrations and brittle fracture of the flare piping, we show that the root cause can be linked to a failure to recognize hazards and perform adequate analysis, which is fundamental to Process Hazards Analysis programs and overpressure system verifications.
In recognition of these and other incidents, we describe how relief and blowdown adequacy assessment has tightened in recent years, reflecting significant changes to API 521, and in particular:
- verification of relief and flare systems to ensure they comply with the latest industry standards
- ensuring the suitability of materials of construction so as to avoid brittle fracture risks during depressurization
- adoption of the more rigorous analytical methodology for assessing vessel survivability under fire attack in blowdown system design
- assessment of the likelihood of failure due to acoustic and flow-induced vibration in flare piping
Through a number of case studies we describe a methodology for assessing and analyzing existing infrastructure for these risks and explain how a detailed model-based analysis can ensure an inherently safe relief and blowdown system design that is compliant with the latest industry guidance.
20th International Symposium on Industrial Crystallization (Dublin, Ireland, September 3-6, 2017)
1. Towards the Virtual Design of Experiments (vDoE) of a Cooling Crystallization Process via the Application of Population Balance Modelling
Girard, K.P., Rose, P.R., Chekal, B.P., Pfizer Inc., USA
Birch, M., Pfizer Inc., UK
Mitchell, N.A., Bermingham, S.K., Process Systems Enterprise (PSE) Ltd., UK
Crystallization is a unit operation that is part of almost every chemical process to produce Active Pharmaceutical Ingredients, owing to its ability to provide chemically and physically stable solids of high purity. The outcome of this operation, particularly the very important attribute of particle size distribution, can be highly dependent on a number of process parameters, such as cooling and anti-solvent addition profiles; seed size and quantity; and nucleation temperature. With a large number of potential handles, experimentally exploring a wide space of parameters consumes significant time and precious material while covering only a fraction of the potential design space. Modern process design aims to solve this problem and shrink development timelines by placing a larger emphasis on in-silico modelling of unit operations and even whole systems. By employing high-fidelity computational models of complex processes such as crystallization, researchers can quickly optimize their process with less material while covering a wider range of possible conditions. Even lower fidelity models find utility in guiding experimentation to areas of interest, often more quickly than using traditional experimental methods, such as Design of Experiments (DoE).
Further, computational models of crystallization operations can find utility in systems modelling and optimization. Linking crystallization models to models for secondary processing and even oral absorption models, all based on particle size distributions, can enable a process design team to optimize process parameters subject to constraints in order to achieve the desired product performance and patient outcomes. This is in contrast to current methods which seek to experimentally uncover which process parameters are most influential on the API attributes, which then become process parameters in a similar experimental foray during secondary processing. Ultimately, one can experimentally link crystallization process parameters to product performance, but the space rapidly becomes complex as the number of parameters increases. Clearly, there is a significant driver for crystallization engineers to develop process models that have utility in predicting their system.
In this work the seeded cooling crystallization process of a high-aspect-ratio API was described using population balance modelling. Kinetic mechanisms included in the model were activated secondary nucleation and crystal growth. With this model in mind, a targeted experimental plan of six experiments was conducted with a view to providing sufficient sensitivity in the experimental data (concentration and PSD) to estimate each crystallization kinetic parameter. Upon initial model validation, it was found that the non-idealities of the particle size measurement technique, namely laser diffraction, led to very poor fits of the model predictions to the measured PSD quantile data. The shape of the crystals was leading to an artificial broadening of the measured PSD, shown graphically in blue in figure 1, in comparison to that predicted by the model (shown in orange). Upon incorporation of these measurement effects into the model to account for this artificial broadening, the model fits to measured concentration and PSD data were significantly improved, as shown in table 1.
Figure 1: Accounting for artificial broadening in the measured PSD from laser diffraction. For illustrative purposes.
Table 1: Comparison of simulated and measured product PSD quantiles following shape correction.
Exp. No. | Measured Product PSD (µm) | Simulated Product PSD (µm)
The model was then used in a "virtual DoE (vDoE)" to predict the particle size for 26 sets of conditions where the seed size, seed quantity, seed temperature and temperature profile were varied. The model predictions for the vDoE were compared to experimental data from the conducted DoE, to demonstrate the capacity of the model to describe the wider design space of the process. It was found that the vDoE and experimental DoE results were in reasonable agreement. This work demonstrates how targeted model-driven experiments can be effectively used to characterise the system and subsequently utilised to conduct a vDoE in an efficient, cost-effective manner, with minimal experimental effort.
Figure 2: Comparison of the simulated validated model predictions (y-axes) and the measured values from the experimental verification (x-axes) for the (a) product d50’s and (b) product volume mean diameters.
2. Attainable regions for Critical Quality Attributes in drug manufacture and end performance
Burcham, C.L., Myers, S.S., Eli Lilly and Company, USA
Mitchell, N.A., Process Systems Enterprise (PSE) Ltd., UK
Systems-based Pharmaceutics (SbP) is a holistic approach to the development and optimization of drug manufacture and drug delivery. SbP utilizes a single model-based framework for mechanistic modelling of the end-to-end manufacturing process for formulated products, including:
- Drug substance manufacture
- Drug product manufacture
- Oral absorption and pharmacokinetics
As an example, this framework allows the impact of changes to the crystallization process to be considered on downstream unit operations and vice versa. This framework supports optimization across the complete manufacturing system and a rapid assessment of attainable processing regions subject to specified product performance constraints.
The attainable regions for critical quality attributes in the manufacture of a drug substance, such as Particle Size Distribution (PSD), and drug product performance, such as fraction of dose absorbed, of Solid Oral Dosage Forms (SODF) can be highly dependent upon the types and operation of the end-to-end production steps. This is the case for low-dose drug products, where other factors, such as Content Uniformity (CU) of SODF, can also be of major concern. This is also the case for dissolution-rate-limited compounds (i.e. BCS Class II), whose bioavailability can be affected. Both CU and bioavailability for SODF are known to be highly dependent upon the particle size distribution of the Active Pharmaceutical Ingredient (API), with control of API particle size being a common approach to ensure the content uniformity of a drug product. The allowable regions for particle size versus dose for high and low solubility drugs are shown in figure 1 below. The key product performance constraints of content uniformity and bioavailability vary in their impact in each case.
Figure 1: Allowable regions for the X50 of primary API PSD as a function of dose for high and low solubility drugs, due to content uniformity and bioavailability constraints.
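The shape of such a particle-size-versus-dose constraint can be sketched with an ideal-mixing content uniformity model (in the spirit of Yalkowsky-Bolton type analyses). The crystal density, the geometric standard deviation of the PSD and the 2% RSD target below are assumed for illustration only.

```python
import math

RHO_MG_PER_UM3 = 1.3e-12   # assumed crystal density (1.3 g/cm^3)
GSD = 1.8                  # assumed geometric standard deviation of the PSD

def content_rsd(dose_mg, d50_um, gsd=GSD):
    """Ideal-mixing RSD (%) of dose content for a lognormal PSD.

    Treating the number of particles per dose as Poisson-distributed,
    the dose variance is lam * E[m^2] with lam = dose / E[m], so
    RSD^2 = E[m^2] / (dose * E[m]) for single-particle mass m.
    """
    mu, sig = math.log(d50_um), math.log(gsd)
    k = RHO_MG_PER_UM3 * math.pi / 6.0
    e_m = k * math.exp(3 * mu + 4.5 * sig ** 2)        # mean particle mass
    e_m2 = k ** 2 * math.exp(6 * mu + 18 * sig ** 2)   # mean squared mass
    return 100.0 * math.sqrt(e_m2 / (dose_mg * e_m))

# Largest d50 keeping content RSD below the 2% target, as a function of dose;
# inverting the expression above gives d50_max proportional to dose**(1/3).
sig = math.log(GSD)
k = RHO_MG_PER_UM3 * math.pi / 6.0
for dose in (0.05, 0.5, 5.0):
    d50_max = ((0.02 ** 2) * dose / (k * math.exp(13.5 * sig ** 2))) ** (1.0 / 3.0)
    print(f"dose {dose:5.2f} mg: max d50 ~ {d50_max:5.1f} um "
          f"(check: RSD = {content_rsd(dose, d50_max):.2f}%)")
```

The cube-root scaling of allowable d50 with dose reproduces the qualitative shape of the CU boundary in figure 1; the models actually used in the paper [1-3] are more detailed.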
Previously, we examined a continuous cascade cooling crystallization process and explored the impact of varying process configurations, both drug substance and drug product, on the attainable processing regions, subject to various processing constraints. A continuous cooling crystallization (utilizing kinetic parameters for paracetamol in ethanol) was employed as an example case. Additional drug substance and drug product manufacturing steps were also considered in order to investigate their impact on the attainable regions, including filtration. In order to predict the content uniformity of the drug product a number of models [1-3] were employed. The predictions from these simulations are dependent upon the particle size distribution and the dosage level of the API present in final dosage form. The attainable region in terms of continuous seeding and dose versus residence time are shown in figure 2a and b, respectively.
Figure 2: Attainable regions for a two stage cascade continuous cooling crystallization of paracetamol. (a) Minimum dosage level below which continuous seeding must be employed and (b) attainable region for dose versus residence time as a function of seed concentration in the feed stream.
A similar methodology to that outlined above was applied in this work with an Eli Lilly compound. In this case, wet milling was also considered as part of a continuous crystallization process in order to investigate its impact on the attainable regions, for a similar compound with poor secondary nucleation, in order to widen the design space for the operation of the continuous crystallizer. Additionally, a number of optimization objectives were posed, which included increasing the throughput of the API, increasing the number of crystallizer stages and incorporating wet milling conditions. All cases were constrained to maintain a target bioavailability of the final product along with other processing constraints, in order to evaluate these attainable regions. Once the attainable region of a given CQA versus total production time was determined, the robustness within the attainable region was evaluated and explored via a Global Sensitivity Analysis (GSA).
3. Micro-scale process development and optimization for crystallization processes
N.A. Mitchell, Process Systems Enterprise (PSE) Ltd., UK
C. J. Brown, EPSRC Centre for Innovative Manufacturing in Continuous Manufacturing and Crystallisation, University of Strathclyde, UK
Initial experimental phases of crystallization process development are commonly carried out at very small scales, typically using 1-5 mL vessels. The aims of these early phases of process development are to select a solvent based on solubility and crystal solid state. These activities are commonly conducted in high-throughput reactor systems, such as the Crystal16® and Crystalline from Technobis Crystallization Systems, as shown in Figure 1(a) and 1(b), respectively. However, for the development, validation and optimization of crystallization process models this data is usually not utilized, and the selected solution system is instead probed experimentally, and more quantitatively, at much larger scales, typically between 100 and 1000 mL. A more quantitative use of the data generated at small scale for the development of process models, which could significantly reduce the number of larger-scale experiments required, would help address the increasing constraints on time and materials in pharmaceutical development.
Solubility and metastable zone width (MSZW) experiments are routinely conducted with both experimental systems shown in figure 1, with clear points (indicating the point of dissolution) and cloud points (indicating the onset of nucleation in solution) utilised to indicate the MSZW for a given cooling rate, agitation rate and solute composition. Through turbidity and temperature measurements, both experimental systems provide the ability to quantitatively determine the MSZW and, therefore, the primary nucleation kinetics of the solution system [2, 3]. In addition, the Crystalline system has the added measurement capabilities of particle visualization, providing a number-based representation of the particle size distribution (PSD), as shown in figure 2, and Raman modules for concentration and solid form monitoring. In-situ sizing via image analysis is also not prone to the sampling issues possible with offline PSD measurement techniques like laser diffraction. In addition, the in-situ image analysis capabilities can be enormously beneficial in terms of aiding process understanding, providing crucial information for crystallization mechanism and model discrimination activities. The lack of probes inserted into the reaction vessel also leads to no cross-contamination and no interference in the crystallization environment. All of these techniques are integrated into a small reactor with overhead stirring and refluxing capabilities, which can qualitatively mimic the likely vessel configurations at larger scales.
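The step from measured MSZW to apparent primary nucleation kinetics is often made with a Nyvlt-type analysis: the MSZW is recorded at several cooling rates, and the slope of ln(MSZW) versus ln(cooling rate) estimates the reciprocal of the apparent nucleation order. The sketch below uses synthetic clear/cloud-point data, not measurements from the systems described above.

```python
import math

# Synthetic MSZW data: (cooling rate b in K/h, metastable zone width in K).
data = [(2.0, 4.1), (5.0, 5.6), (10.0, 7.0), (20.0, 8.7)]

# Nyvlt-type relation: dTmax**m proportional to b, i.e.
# ln(dTmax) = const + (1/m) * ln(b); fit the slope by least squares.
xs = [math.log(b) for b, dT in data]
ys = [math.log(dT) for b, dT in data]
n = len(data)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
m = 1.0 / slope
print(f"apparent nucleation order m ~ {m:.1f}")
```

In practice the same regression is run per solute composition and agitation rate, and more mechanistic treatments [2, 3] replace the lumped Nyvlt exponent, but the data flow from cloud points to kinetics is as above.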
Figure 1: Images of the (a) Crystal16® and (b) Crystalline experimental systems from Technobis Crystallization Systems.
Figure 2: Particle visualization and calculated PSD based on images.
In this work, data from paracetamol in 3-methyl-1-butanol solutions at the micro-scale was utilized to estimate the crystallization kinetics of the model, including crystal growth and primary nucleation, enabling model development as well as mechanism discrimination. The final predictions of the developed process model were compared with a previously developed and validated process model, which employed larger 1 L scale experiments. Cross-validation with FBRM, laser diffraction and online solute concentration data from the larger 1 L scale was conducted. Although perfect agreement was not achieved, primarily due to scale- and reactor-dependent kinetic mechanisms, such as primary and secondary nucleation, the process model was in general qualitative agreement with the original model developed with larger-scale experimental data. A key outcome of this work is an adapted process development workflow for the design, model validation and optimization of processing models for crystallization systems. This workflow enables crystallization process development with an order of magnitude lower demand for materials, in particular raw API, which may not be available in the early stages of process development. In addition, the design space for the process, such as process robustness and the viability of continuous processing, can be assessed early on, with less dependence on larger-scale, more material-intensive experiments. As a result, by utilizing commonly available micro-scale process data, the material demands and requirements for larger-scale experiments can be significantly reduced, leading to a step change in pharmaceutical process development efficiency and productivity.
4. Case-study of an investigation of crystallization kinetics in a difficult system: dealing with fast kinetics
M. Iggland, E. Verdurand, DSM Nutritional Products AG, Switzerland
H. Mumtaz, N. Misailidis, N.A. Mitchell, Process Systems Enterprise, UK
The proper understanding of any chemical production process, in the sense of allowing full optimization and control, requires both thermodynamic and kinetic knowledge. This applies also to crystallizations – ideally, it should be known how each mechanism of crystallization acts, at what rate and how it can be influenced. Such knowledge would, in the best case, allow the tailoring of the process to produce the desired polymorph, morphology, size and amount of crystals.
This work presents a case study of a kinetic model created for a system which, due to its intrinsic behavior, does not allow for obtaining kinetic data from the large scale process or from a scaled-down version of this process. Thus, a special experimental methodology [1-3] is required in order to be able to obtain robust kinetic parameters which can be used for a prediction. These predictions need to cover both a wide temperature range and a range of solvent compositions.
The crystals produced during a crystallization process are affected by four basic mechanisms, namely nucleation, being the formation of new crystals; growth and dissolution, leading to an increase or decrease in size and change of shape of existing crystals; breakage or attrition, which is the fragmentation of crystals into smaller parts; and agglomeration, the combination of two or more crystals into one complex particle. Each mechanism affects the size, shape and number of particles in some way, and thus each mechanism is affected by the other mechanisms. The consequence of this is that kinetic measurements need to be carried out with well-controlled experiments in which only defined mechanisms are allowed to take place, so that kinetic parameters can be determined in a robust way. Even though the basic mechanisms of crystallization are general, how a certain substance behaves depends strongly on operating parameters such as supersaturation, temperature and fluid dynamics, on substance- or system-specific parameters such as the solvent and the presence of impurities, and on the substance's inherent behavior. In other words, how fast crystals of a substance nucleate, grow, break or agglomerate depends strongly on the substance, the solvent and the presence of impurities.
The most common mathematical framework used for kinetic models for suspension crystallizations is population balance modelling. A particle size distribution quantifies the amount and sizes of particles present in a suspension, and a set of partial differential equations describes changes over time. These PDEs are coupled with constitutive equations which describe the influence of growth, primary and secondary nucleation, and agglomeration. A mass balance takes the transfer of molecules from the liquid phase to the solid phase into account. In this work, we use gCRYSTAL to set up the model, estimate parameters and perform simulations. gCRYSTAL is an advanced process modelling flowsheeting tool for describing crystallization processes.
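For size-independent growth, the moment form of the population balance reduces to a small set of ODEs: the j-th moment obeys dmu_j/dt = j*G*mu_(j-1), with nucleation feeding mu_0 and the third moment carrying the mass balance. The forward-Euler sketch below is purely illustrative; all rate parameters are invented and this is not gCRYSTAL output.

```python
# Minimal method-of-moments sketch of a seeded isothermal batch PBE with
# constant growth rate G and nucleation rate B (hypothetical parameters).
KV = 0.5         # volume shape factor (assumed)
RHO = 1300.0     # crystal density, kg/m3 (assumed)
G = 1e-8         # growth rate, m/s
B = 1e6          # nucleation rate, #/m3/s

# Moments mu[j] = integral of L^j * n(L) dL, j = 0..3, per m3 of slurry,
# initialised for 1e10 seed crystals of 50 um characteristic size.
L_SEED = 50e-6
mu = [1e10, 1e10 * L_SEED, 1e10 * L_SEED ** 2, 1e10 * L_SEED ** 3]

dt, t_end = 1.0, 3600.0
t = 0.0
while t < t_end:
    # dmu0/dt = B (nuclei of negligible size); dmuj/dt = j * G * mu[j-1]
    dmu = [B] + [j * G * mu[j - 1] for j in (1, 2, 3)]
    mu = [m + dt * d for m, d in zip(mu, dmu)]
    t += dt

mass = KV * RHO * mu[3]        # solids mass per m3 (links to mass balance)
mean_size = mu[1] / mu[0]      # number-mean crystal size
print(f"solids: {mass:.2f} kg/m3, number-mean size: {mean_size * 1e6:.1f} um")
```

Real constitutive equations make G and B functions of supersaturation, which couples these ODEs to the liquid-phase mass balance; agglomeration and breakage additionally require solving for the full distribution rather than its moments.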
We set ourselves the goal of characterizing one of our crystallization processes, which involves both cooling and anti-solvent addition. In our attempt to characterize this system, we encountered a surprising behavior: regardless of the rate of cooling or the rate of addition of anti-solvent, desupersaturation was fast enough to allow the liquid concentration to be equal to the solubility during the whole experiment – two examples are shown in Figure 1, one with fast and one with slow cooling. From the resulting particle size distribution, it was clear that agglomeration and secondary nucleation were occurring, to different extents depending on the operating parameters.
Figure 1. Temperature profiles, solubility line and concentrations for two experiments: in (a) cooling was slow, and in (b) cooling was as fast as possible. Seeding points are marked. In both cases, the concentration does not deviate substantially from the solubility line.
This causes difficulties, since the mechanisms cannot be decoupled and the kinetics are governed by the cooling or anti-solvent dosing rate, covering up the true kinetics.
In order to avoid the difficulties caused by the behavior described above, we designed experiments that allow the mechanisms to be decoupled as much as possible. This involves exploiting meta-stability and careful experimentation, to first gain robust data for growth kinetics. These experiments yield kinetic parameters which are then used to explain other experiments, from which additional information on mechanisms affecting particle size, such as agglomeration and secondary nucleation, can be gained.
We present an industrial case-study of a detailed process model, where kinetic parameters have been obtained on the laboratory scale using specially designed experiments. These experiments are able to overcome the difficulties caused by the inherent behavior of the system, and allow mechanisms to be decoupled.
5. Impact of an Impurity on the Morphology and Growth Kinetics of an Investigational API
Christopher S. Polster, Christopher L. Burcham, Eli Lilly and Company, USA
Niall Mitchell, Process Systems Enterprise, UK
The observation of impurities having an effect on the growth rates of individual crystal faces, and thus impacting crystal habit, dates back to the work of Michaels and Colville in 1960. Since then, multiple investigations on the topic have been published. This work describes an industrially relevant example of an investigational active pharmaceutical ingredient (API) which was undergoing “unexplained” crystal habit changes. Covered here are the elucidation of the specific impurity that was impeding growth of a crystal face, confirmation of the impact of the impurity on crystal growth through spiking studies utilizing on-line analytical tools to confirm its activity, and the use of population balance modeling software (gCRYSTAL) to further explain impurity inclusion and growth kinetic effects.
Investigation of Crystal Habit Changes
When campaigning to produce 10 kg of API for drug product development purposes, the third batch unexpectedly resulted in a “plate-like” crystal habit, as compared to the “needle-like” habit observed in previous batches (Figure 1). Upon investigation of impurity trends, it was determined that one of the 17 impurities observed in the product correlated strongly with the crystal habit change. Crystal habit for this analysis was quantified using the aspect ratio as measured by image analysis of offline microscopy. The impurity was isolated by liquid chromatography and then identified by NMR to be a dimer of the API. The isolated dimer was then used in spiking studies to evaluate the impact of dimer level on crystal habit and growth rates.
The spiking studies were monitored by attenuated total reflectance infrared spectroscopy (ATR-IR), focused beam reflectance measurement (FBRM) and process vision and measurement (PVM) probes. PVM images confirmed the transition from needles at a 100 ppm dimer level to plates at a 700 ppm dimer level. FBRM measurements indicated a lower aspect ratio for the higher dimer level (as calculated by the ratio of the square-weighted mean to the non-weighted median). The habit change was hypothesized to be due to interference of the dimer with the fast-growing end faces. This hypothesis was corroborated by the ATR-IR data showing slower desupersaturation rates for the higher impurity load experiments.
gCRYSTAL Modeling of Impurity Effects
gCRYSTAL is a commercially available set of population balance model libraries built on the gPROMS modeling platform. A recent addition to the crystallizer model library is the ability to account for the effect of impurities on the growth rate of crystals in the system, and the inclusion of those impurities into the crystal phase. Crystal growth rates are modified by applying a growth rate correction factor for each of N impurities, ϑi, in the following way:

G(L) = G0(L) · ∏(i=1..N) ϑi (1)

Where G(L) is the modified growth rate and G0(L) is the pure growth rate calculated by the selected growth rate expression. The growth rate correction factor is calculated by a power-law expression by default, but can take any functional form defined by the user. The power-law expression used by default is:

ϑi = 1 − αi · Ci^gi (2)

Where Ci is the molar fraction of growth modifier i, gi is the order with respect to the concentration of growth modifier i and αi is a proportionality constant for growth modifier i. In gCRYSTAL, impurities can be treated as a separate crystal phase, or they can be included directly into the product crystal phase. In the latter case, a total crystal growth rate, GT(L), is calculated by adding the growth rate attributed to each impurity i, GI,i(L), to the pure crystal growth rate as in equation (3):

GT(L) = G0(L) + Σ(i=1..N) GI,i(L) (3)

Each impurity growth rate is calculated by a power-law expression by default, as in equation (4):

GI,i(L) = kg,i · G0(L)^Ai · Ci^Bi (4)

Where kg,i is a growth rate constant for each impurity i, and Ai and Bi are the orders with respect to the pure crystal growth rate and the molar fraction of impurity i, respectively. Again, the impurity growth rates can be customized to any user-defined expression in addition to the pre-defined power-law model.
The growth modification and impurity inclusion models defined in equations (1) through (4) are used in this work to successfully describe experimental observations.
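The multiplicative correction of equation (1) and the additive impurity growth of equations (3)-(4) can be sketched in a few lines of code. The snippet below follows the textual description only; function names and parameter values are made up for illustration, and the correction factors ϑi are supplied by the caller rather than hard-coding a particular default expression:

```python
# Sketch of the growth-modification relations as described in the text.
# Not gCRYSTAL code; names and numbers are illustrative only.

def modified_growth(G0, thetas):
    """Eq. (1): G = G0 * prod(theta_i) over the impurity correction factors."""
    G = G0
    for theta in thetas:
        G *= theta
    return G

def total_growth(G0, impurities):
    """Eqs. (3)-(4): G_T = G0 + sum_i k_i * G0**A_i * C_i**B_i.

    Each impurity is given as a tuple (k_i, A_i, B_i, C_i) of its growth rate
    constant, its two power-law orders, and its molar fraction."""
    return G0 + sum(k * G0**A * C**B for (k, A, B, C) in impurities)
```

With no impurities present, both functions reduce to the pure growth rate G0, as the text requires.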
6. Extending the design space of an anti-solvent and cooling crystallization process exhibiting a synergistic solubility to improve the physical properties of an API
M. Boukerche, Eli Lilly and Company, USA
W.E. Knabe, Eurofins Lancaster, USA
Niall Mitchell, Process Systems Enterprise, UK
The solubility behavior of a solute in a given solvent or mixture of solvents dictates the type of crystallization process (thermal, anti-solvent, reactive, evaporative…) that needs to be performed to robustly isolate a purified crystalline form with a suitable recovery yield. For that purpose, in a typical crystallization workflow, the first step of crystallization process design is to screen solvents and solvent mixtures at a couple of temperatures. Once the solvent system is selected, the physical attributes of the desired compound (PSD, polymorphic form, morphology, flowability…) are usually achieved via the determination of a robust seeding point. The subsequent development phase of the crystallization process then focuses on the control of the process parameters: seeding temperature, seed load, seed quality, initial supersaturation, supersaturation rate, and the need for a final thermal cycle to narrow the final PSD. For anti-solvent crystallization processes, the phase diagram of the desired compound is determined in a mixture of solvents of varying composition. Typically, increasing the anti-solvent content in the mixture leads to a monotonic decrease in the solubility of the solute. However, in certain solvent/anti-solvent systems the solubility behavior can be highly nonlinear, and the solubility curve reaches a maximum at a specific composition of the solvent mixture. For a given temperature, moving away from this solubility apex in either direction decreases the solubility. This behavior is known as synergistic solubility and reflects the impact of the solvents on the activity coefficient of the solute. Synergistic behavior of a solvent mixture is often encountered during process development but can easily be missed during the initial solvent screening, when only a couple of solubility data points are available.
The first manufacturing campaign of an Active Pharmaceutical Ingredient (API 1) delivered needle-like crystals with poor flowability and bulk/tapped densities. An anti-solvent and cooling seeded crystallization is designed and developed from ethanol/water to improve the API physical properties and control the final PSD of the API (Figure 1). This crystallization process is then scaled up successfully during the second manufacturing campaign (API 2).
Figure 1: Morphology of the API 1 product (left) and API 2 product at the same magnification
Based on the screening of solvents that could potentially improve the crystal morphology, the solvent system ethanol/water is chosen. The solubility is determined at three temperatures over a range of EtOH/H2O compositions. The experimental solubility data is then used to build a model that predicts the solubility at any temperature and composition.
Figure 2: Experimental and predicted API solubility from EtOH/H2O solvent system
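A minimal sketch of such an empirical solubility model: a least-squares fit of a surface quadratic in water fraction (so that it can capture an apex) and linear in temperature. The functional form and any data used with it are assumptions for illustration, not the model or measurements of this work:

```python
import numpy as np

# Illustrative empirical solubility surface S(w, T) = a0 + a1*w + a2*w^2 + a3*T,
# fitted by ordinary least squares. w = water mass fraction, T = temperature.
# This is a generic sketch, not the model developed in the abstract.

def fit_solubility(w, T, S):
    """Return coefficients [a0, a1, a2, a3] from solubility measurements."""
    X = np.column_stack([np.ones_like(w), w, w**2, T])
    coef, *_ = np.linalg.lstsq(X, S, rcond=None)
    return coef

def predict(coef, w, T):
    """Evaluate the fitted surface at composition w and temperature T."""
    return coef[0] + coef[1] * w + coef[2] * w**2 + coef[3] * T
```

For a synergistic system the fitted quadratic has a2 < 0, and the apex composition follows directly as −a1/(2·a2).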
This solvent system exhibits a synergistic solubility, with the maximum solubility reached at 35 ± 5 wt% water. Above this composition, water acts solely as an anti-solvent, as shown in Figure 2. It is noteworthy that below this composition EtOH, which is considered the ‘good’ solvent, actually plays the role of the anti-solvent. For the sake of clarity, the solvent which exhibits the lowest solubility in the pure solvent/solute system (i.e. water) is referred to as the anti-solvent. Based on the solubility data, a seeded anti-solvent crystallization is initially developed by determination of the seeding point. In addition to controlling the form of the desired polymorph, the aim of seeding in batch crystallization processes is to control or suppress activated nucleation mechanisms, promoting a purely growth-driven process and hence reducing batch-to-batch variability. When dealing with synergistic solubility, the typical approach is to seed at a composition close to the maximum solubility, to optimize process throughput. A portion of anti-solvent is then added in the region of the phase diagram where water acts solely as an anti-solvent, in order to create a supersaturated solution and avoid re-dissolution of the seed when the remaining anti-solvent is added (orange line in Figure 2). The seed is then aged to allow the formation of a seed bed, which is grown by addition of the last portion of anti-solvent. Finally, the slurry is cooled to the isolation temperature and held long enough to allow full desupersaturation of the system before isolation. Based on this procedure, various batches are prepared with different techniques and delivered to the Drug Product Design group in order to define targets for the physical attributes of the API that enable safety, efficacy and processability of the Drug Product.
In addition to this classical approach, an unusual process is developed in which the seeding point lies in the ethanol-rich region (below the apex). By carefully designing the initial conditions in terms of concentration and dilution, it is possible to maintain a slurry during the addition of the second portion of water without redissolving the entire seed bed. As the solvent composition progresses through the maximum-solubility region, only a portion of the seed bed is redissolved. The data generated during the initial experimental work are used to develop a validated kinetic model of the crystallization in gCRYSTAL, which is also used to regress crystal growth, dissolution and secondary nucleation parameters. The developed crystallization model is then utilised to track the pre-dissolution/conditioning of the initial seeds prior to the crystallization steps. This enables optimisation of the conditioning phase, yielding a narrower PSD by kinetic ripening before the final growth of the slurry by anti-solvent addition and cooling. The approach allows the use of a lower seed load and acts as an in situ conditioning of the seed bed population by redissolution of the fines, without the need for a thermal cycle.
7. Kinetic Modeling of a Pharmaceutical Crystallization Process: Mechanistic versus statistical modelling approaches
G. Taylor, GSK, UK
Niall Mitchell, Process Systems Enterprise, UK
During crystallisation process development in the pharmaceutical industry, a statistical approach using Design of Experiments (DoE) may be used to aid process understanding and develop a design space. Whilst this approach provides an understanding of how process parameters affect the critical quality attributes (CQAs) of the manufactured product, it does not answer the more fundamental question of why they affect the CQAs, and so such models may have poor predictive capabilities outside the ranges of conditions studied. Alternatively, crystallisation processes can be described via mechanistic modelling, which uses kinetic expressions to describe dynamic changes in the system. This can offer a more fundamental understanding of the system and deliver a model that is a powerful predictive tool. Mechanistic modelling can offer many benefits to pharmaceutical process development, and is already used frequently for continuous chemical reactions to identify potential failure modes, construct the design space and optimise processes. Crystallisation processes are notoriously more difficult to model and validate than homogeneous chemical reactions, but because of the highlighted benefits this capability is desirable both at GSK and within the ADDoPT project collaboration. The ADDoPT (Advanced Digital Design of Pharmaceutical Therapeutics) project is developing and implementing advanced digital design techniques that eliminate non-viable drug candidate formulations as early as possible, streamlining design, development and manufacturing processes.
A GSK compound (Molecule X) in late stage development was chosen as a case study in this work. The crystallisation of Molecule X is a simple seeded cooling crystallisation in ethanol; however the API particles exhibit complex agglomeration which makes it difficult to predict the effects of the various process parameters on particle size distribution (PSD) of the product, making this an ideal industrial case study. In addition, as a late stage asset was chosen, data on the process was already available, at both laboratory and plant scale, and so the predictive capability of the model could be effectively evaluated by comparing the simulated outputs of a variety of process conditions with experimental data sets.
Four experiments were utilised to generate the data required for validation of the mechanistic model: an initial data-rich experiment using PVM and FBRM to qualitatively elucidate the important crystallisation mechanisms and Raman spectroscopy to quantitatively monitor desupersaturation, followed by three further experiments monitoring desupersaturation at varying stirrer speeds and cooling ramp rates. Crystal growth, agglomeration and activated secondary nucleation were shown to be the dominant crystallisation mechanisms, and the desupersaturation curves provided online concentration data to fit the model and estimate the crystallisation kinetic parameters. Offline PSD measurements of the isolated API were also obtained for all experiments and employed in model validation. An incremental parameter estimation methodology was used to progressively include additional crystallisation mechanisms into the kinetic model while narrowing the kinetic parameter search space in tandem, represented by the boxes in Figure 1. A growth-only model was initially considered, followed by inclusion of agglomeration and finally secondary nucleation. Mersmann two-step kinetics were used to describe crystal growth, which accurately described the observed decay of supersaturation but not the particle size. Agglomeration was then considered, described using Mumtaz kinetics, which relate the rate of agglomeration to particle velocity, growth rate, energy dissipation and an empirically determined agglomeration constant. Finally, an activated secondary nucleation term was included to describe the formation of secondary nuclei via dendritic growth. Only three of the four experiments were used in parameter estimation, with the fourth held back to confirm the predictive capabilities. All three crystallisation mechanisms were required to accurately describe the system (Figure 2).
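The incremental idea, fitting the growth step against desupersaturation data first and freezing those parameters before adding further mechanisms, can be sketched as below. The first-order decay model, grid search and numbers are illustrative stand-ins for the Mersmann kinetics and the estimation machinery actually used:

```python
import math

# Illustrative stage 1 of incremental parameter estimation: fit a single
# growth-related rate constant k to a desupersaturation curve. Later stages
# would hold k fixed while estimating agglomeration and nucleation parameters.

def simulate_decay(k, s0, times):
    """Supersaturation decay ds/dt = -k*s for a growth-only toy model."""
    return [s0 * math.exp(-k * t) for t in times]

def fit_k(times, data, k_grid):
    """Pick the rate constant on a grid minimising the sum-of-squares misfit."""
    def sse(k):
        model = simulate_decay(k, data[0], times)
        return sum((m - d) ** 2 for m, d in zip(model, data))
    return min(k_grid, key=sse)
```

Holding back one experiment from the fit, as done in the abstract, then gives an unbiased check of the estimated parameters.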
The validated mechanistic model was used to predict the effect of key process parameters on the final PSD, and these simulated outputs were compared with experimental data. The model was able to predict the direction of the impact of process parameters on PSD (i.e. increase or decrease), with the exception of seed loading and seed size. Additionally, the prediction of scale-up from a 1 L laboratory experiment to a 630 L plant vessel matched the results seen when executed on plant (Table 1).
During this work we demonstrated the construction of a mechanistic model to describe a crystallisation process which required only a limited number of simple laboratory experiments. The effect of critical process parameters on PSD could be predicted, in addition to the scale up to a plant vessel. As an alternative to a DoE approach, the mechanistic model requires less experimental burden and has the potential to predict the effects of parameters that were not included in the model validation. Finally, it also has the potential to increase the efficiency of process development and improve the fundamental understanding of pharmaceutical crystallisation processes.
8. Workflow for the quantitative application of Chord Length Distribution sensor modelling for the development of crystallization processes
N.A. Mitchell, D. Slade, Process Systems Enterprise, UK
O.S. Agimelen, C.J. Brown, B. Ahmed, A. J. Florence, J. Sefcik, A.J., Mulholland, EPSRC Centre for Innovative Manufacturing in Continuous Manufacturing and Crystallisation, University of Strathclyde, UK
Chord Length Distribution (CLD) measurements from online Process Analytical Technology (PAT) equipment, such as Mettler-Toledo's Focused Beam Reflectance Measurement (FBRM) probe, have become ubiquitous in recent years for aiding process development, design and control activities, particularly for crystallization processes. The FBRM probe employs a laser beam which is rotated along a circular path at a speed of about 2 m/s. A chord is measured when this laser hits a particle in suspension and light is backscattered to the probe, with the signal measurement time and scan speed utilised to calculate the measured chord length. Thousands of chords can be measured per minute, providing a CLD measurement of the particles present. CLD measurement systems can provide an abundance of process data to aid qualitative process understanding. However, to facilitate quantitative usage of CLD data, some conversion to a Particle Size Distribution (PSD) measurement is required. This inverse problem is not well posed, as the PSD that fits a given measured CLD is not unique.
However, if a PSD is available, such as from a laser diffraction measurement, the forward problem (calculating a CLD from a PSD) can be solved. Various CLD models have been proposed to solve the forward problem, such as that of Kail et al., which accounts for the intensity profile of the laser beam and the optical aperture of the probe. In this work, two geometric approaches were utilised. The first, suggested by Li & Wilkinson, is a 2D model which approximates the shape of particles using an ellipse, and so is suitable for both spherical and non-spherical particles. The second approach, suggested by Vaccaro et al., models the particle as a long thin cylinder and considers all possible 3D orientations of each cylindrical particle. This model is applicable to particles with a low aspect ratio (ratio of width to length of the particle), such as needle- and rod-shaped particles. In applying both of these models, the aspect ratio is employed to tune the applied transformation from the PSD to the CLD, because the optimum aspect ratio for converting a given PSD to a CLD is not known in advance. Indeed, a key element of some inversion algorithms (predicting the PSD from CLD measurements) is the sequential tuning of the aspect ratio to improve the PSD prediction provided by the conversion. This CLD conversion model was integrated with gCRYSTAL 4.2.0, an advanced population balance modelling tool for crystallization processes based on the equation-oriented gPROMS platform. In order to apply and integrate the developed CLD conversion model with a mechanistic crystallization model for the critical validation phase, we developed the model validation workflow shown in figure 1.
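For the simplest geometry, a sphere of diameter D, the chord-length CDF is known analytically, F(c|D) = 1 − √(1 − (c/D)²), and a forward CLD can be built by mixing these over the PSD classes. The sketch below weights each class by n·D (hit probability proportional to projected size), a simplifying assumption, and is not the Li & Wilkinson or Vaccaro model used in this work:

```python
import math

# Forward problem sketch for spheres: mix the analytic per-diameter chord CDF
# over the PSD classes. Illustrative only; real FBRM models also account for
# particle shape, beam optics and detection thresholds.

def chord_cdf(c, D):
    """Probability that a random chord through a sphere of diameter D is <= c."""
    if c >= D:
        return 1.0
    return 1.0 - math.sqrt(1.0 - (c / D) ** 2)

def cld_from_psd(diams, counts, chords):
    """Mixture chord CDF at each chord length, classes weighted by n_i * D_i."""
    w = [n * D for n, D in zip(counts, diams)]
    W = sum(w)
    return [sum(wi * chord_cdf(c, D) for wi, D in zip(w, diams)) / W
            for c in chords]
```

Because several different PSDs can produce nearly the same mixture CDF, this forward map also illustrates why the inverse problem discussed above is ill posed.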
Workflow application case
In this work, we applied the developed workflow for the application of CLD sensor modelling to validate a process model for the batch cooling crystallization of paracetamol from 3-methyl-1-butanol solutions. In this case, online CLD measurements were utilised to complement laser diffraction data for the seed and product, as shown graphically in figure 2, with calibration of the CLD conversion model using the seed and product data. In addition, solute concentration evolution is monitored, via a calibrated online ATR-FTIR probe. It is also important to emphasize that a converted CLD data set could also be utilised to gauge the impact of sampling and preparation for offline particle size measurements, such as laser diffraction. This approach was successfully applied to validate a crystallization process model of the system using a handful of data rich experiments. Additional experiments were employed to verify the predictions of the model outside of the ranges of experimental conditions employed to estimate the kinetic parameters and aspect ratio for the CLD conversion model.
9. Multiscale approach for revamping an industrial continuous crystallization
M. Oullion, N. Perret, Solvay, France
N.A. Mitchell, S.K. Bermingham, H. Mumtaz, Process Systems Enterprise (PSE) Ltd., UK
There are many challenges in the design and operation of large-scale industrial crystallization processes. In this work we develop and apply a multi-scale methodology for the optimisation of an industrial crystallization process, with a view to increasing the capacity of a solid production line and the efficiency of downstream processes, such as filtration. A multi-scale approach was followed to design the equipment modifications and to optimize the operating conditions. It required the acquisition of data at lab and industrial scale. A crystallization model was developed coupling Population Balance Modeling and Computational Fluid Dynamics (CFD). The kinetics and the kinetic parameters were determined using the experimental data. Finally, this allowed us to debottleneck the plant.
A structured workflow for evaluating the necessity for CFD coupling was employed in this work for the modelling of a crystallization process. The workflow for a crystallization process may have the following steps:
- Configure a lumped (“well-mixed”) model of the lab crystallizer;
- Validate the model and characterize the crystallization kinetics using this lumped model;
- Apply the validated lumped model to describe behaviour at larger scale or in a different operation mode (batch to continuous, in this case);
- Optionally refine some kinetic parameters if the model is not sufficiently predictive. Growth and agglomeration tend to scale well, whereas mechanisms such as nucleation tend to display some scale- or reactor-dependent behaviour;
- If the model is still not sufficiently predictive after this refinement step, create a coarse non-CFD multi-zonal model from multiple lumped models to evaluate sensitivity to mixing conditions; and
- If sufficient sensitivity of the model predictions is observed, consider coupling to monophasic CFD simulations (ANSYS Fluent was employed in this case), using the application steps outlined in figure 1 below.
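The coarse multi-zonal step of this workflow can be pictured with a toy two-compartment model: two well-mixed zones exchanging a flow Q, with solute consumption (e.g. by crystal growth) confined to one zone. Everything here is illustrative, not the plant model:

```python
# Toy non-CFD multi-zonal sketch: two well-mixed zones of equal volume V
# exchange flow Q; a first-order sink (standing in for growth) acts in zone 1.
# Illustrative only; real multi-zonal models use many zones with flowrates
# estimated from turnover times or CFD.

def two_zone(c, Q, V, k_sink, dt, steps):
    """Euler integration of
       dc1/dt = (Q/V)*(c2 - c1) - k_sink*c1
       dc2/dt = (Q/V)*(c1 - c2)"""
    c1, c2 = c
    ex = Q / V
    for _ in range(steps):
        dc1 = ex * (c2 - c1) - k_sink * c1
        dc2 = ex * (c1 - c2)
        c1 += dt * dc1
        c2 += dt * dc2
    return c1, c2
```

Comparing such a model against a single lumped model for different exchange rates Q is the quick sensitivity check the workflow describes before committing to CFD coupling.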
Application at industrial scale
A mechanistic model considering secondary nucleation by means of attrition, crystal growth and agglomeration was developed in an incremental fashion, by progressively adding and validating the kinetic parameters of each new phenomenon. Lab-scale batch crystallization experiments were employed first, in stage one, using a lumped or well-mixed model to characterize the crystallization. Process data for concentration versus time and for intermediate and product Particle Size Distributions (PSD) were utilized to calibrate the mechanistic model. Furthermore, flow cell and single crystal growth experiments were employed to decouple the growth kinetics from the other active crystallization mechanisms. Model as well as mechanism discrimination was carried out using the lab-scale data, and the model was successfully validated at lab scale.
The methodology outlined above was applied in this case. Following the validation of the crystallization kinetics using the lab data (step 2 above), the lumped model was adapted to describe the industrial-scale crystallizer. In this case, even with the refinement of kinetic parameters as outlined in step 4, the lumped model was not capable of describing the experimental observations at the plant scale. At this stage, the impact of the hydrodynamics in the system was first assessed coarsely using lumped MSMPR models, with rough estimates of average power inputs and flowrates between zones based on turnover times. It was found that there was significant sensitivity to the flow field, in particular when specific crystallization mechanisms were considered in the model. For instance, for a model without agglomeration both the lumped and multizonal models of the industrial-scale crystallizer predicted almost the same product PSD, as shown in figure 2(a). It was only when agglomeration, which displays a highly non-linear relationship to the power input and hence the flow field, was considered that a significant difference was observed in the model predictions, as shown in figure 2(b).
The above approach was found to be very efficient at optimizing the existing crystallization, reducing the experimental effort required for process development. The approach also facilitated the design of new equipment and the modification of the existing crystallizer. In this case, a number of modifications to the crystallizer design and operation, including inlet pipe position, impeller geometry and average power input, were considered, with their impact assessed in silico. Using this approach a 25% improvement in filtration capacity was achieved, by producing a narrower PSD from the crystallization step, leading to the removal of the process bottleneck.
10. Development and Optimization of a Mixed Suspension Mixed Product Removal (MSMPR) Crystallization Process Incorporating Wet-milling
Y. Yang, C. Mitchell, C. D. Papageorgiou, Takeda Pharmaceuticals International Co., USA
N. A. Mitchell, and S. Bermingham, Process Systems Enterprise (PSE) Ltd., UK
In recent years, there has been increased interest in moving pharmaceutical manufacturing from batch to continuous processing. The potential to reduce cost, simplify production and improve product quality has driven significant investment in continuous manufacturing. While crystallization is an important separation and purification unit operation, continuous crystallization processes have not been widely studied within the pharmaceutical industry, primarily because of the design complexity and the difficulty of robust operation. The MSMPR crystallizer is arguably one of the most promising technologies, as its operation is conceptually simpler than that of e.g. tubular crystallizers and batch crystallizers are easily converted to continuous operation. However, issues such as fouling and encrustation, transfer line blockage, and classification due to limited nucleation are frequently experienced, with numerous reports in the literature.
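For reference, the textbook steady-state MSMPR with size-independent growth has the analytic population density n(L) = (B/G)·exp(−L/(Gτ)), often used as a first sanity check on a crystallizer model. The snippet below encodes this classical result, not the model of this work:

```python
import math

# Classical steady-state MSMPR population density for size-independent growth:
#   n(L) = (B/G) * exp(-L / (G * tau))
# B = nucleation rate, G = growth rate, tau = mean residence time.
# Standard textbook result, stated here for illustration only.

def msmpr_density(L, B, G, tau):
    """Number population density at size L."""
    return (B / G) * math.exp(-L / (G * tau))

def mean_size(G, tau):
    """Number-weighted mean crystal size of the exponential distribution."""
    return G * tau
```

The exponential form makes the slow-nucleation problem described next easy to see: if B is very small, the density n(L) is low at every size and the crystallizer washes out rather than sustaining a crystal population.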
The work presented herein solves the fouling and classification issues encountered for one of Takeda’s APIs by adding wet-milling to the first stage of an MSMPR crystallizer. The API studied was found to be characterized by slow nucleation kinetics, resulting in a prolonged wash-out period. During this period, large (>1 mm) particles were formed that could not be transferred, leading to classification issues, build-up of supersaturation and severe fouling.
gCRYSTAL 4.2 was utilized to develop a mechanistic model of the system, which uses the finite volume numerical method to solve the population balance equations (PBE). The crystallization model was validated using experimental data from batch seeded desupersaturation experiments, considering growth, secondary nucleation and agglomeration. The validated model was subsequently utilized to map the design space for continuous operation of the process, considering the number of stages, temperature set-points and residence time in each stage. The system was found to exhibit very low rates of secondary nucleation, insufficient on their own to achieve the target product PSD. Therefore, a number of options, including a wet mill, were considered to improve the operational design space in terms of the achievable PSDs. The wet mill model was validated using batch data from a rotor-stator wet mill, with the kinetic parameters for breakage estimated. The breakage kinetics were also linked to the process conditions, including rotor speed and the specifications of the mill.
The high shear imparted by the wet-mill was found to control particle size and increase secondary nucleation rates. This reduced supersaturation and therefore fouling and encrustation. The wet-mill was incorporated in recycle with the first stage of the cascade, which was then experimentally optimized investigating factors such as solvent composition, types and materials of construction of the impeller, wet-mill rotational speed, wet-mill turnover numbers, residence time and start-up methods. Finally, a multi-stage system was developed to optimize productivity.
Experimental Optimization of the Wet-Milling Stage
The first stage, incorporating wet-milling, was optimized experimentally. The effects of turnover number (TON) per residence time, operating temperature and wet-mill speed were investigated. Wet-mill TON was not found to have an effect on steady-state supersaturation or on the PSD (Fig. 1). A lower MSMPR operating temperature resulted in higher steady-state supersaturation and a smaller PSD, and the d50 value did not change significantly below 45 °C (Fig. 2). As expected, a higher wet-mill rotational speed afforded a smaller steady-state PSD. Interestingly, the MSMPR operated at higher supersaturation at 15k rpm, possibly due to the higher temperature generated in the mill as a result of higher friction and therefore greater dissolution of small-PSD material (Fig. 3).
Multi-stage MSMPR setup and Model based Optimization
A three-stage MSMPR continuous crystallization skid was developed, shown in Fig. 4. Periodic slurry transfer was accomplished via a dip pipe under pressure and during each transfer, less than 10% of the batch volume was removed, which has been shown not to affect steady state. All the valves of the pressure transfer system were controlled using a Siemens PLC. The feed pump and Coriolis flow meter were housed in a heated enclosure to avoid precipitation in the transfer line. The three stage system was run continuously for 12 hours (8 residence times) achieving steady state. Strategies for improving the overall robustness of the set-up will also be discussed.
11. Application of mechanistic models for the online control of crystallization processes
Y. Salman, C. Y. Ma, T. Mahmud, K. J. Roberts, School of Chemical and Process Engineering, University of Leeds, UK
J. Mack, Perceptive Engineering Ltd, UK
N. A. Mitchell, Process Systems Enterprise (PSE) Ltd., UK
Mechanistic models are becoming more commonly applied in Research and Development in the pharmaceutical sector. Traditionally, the output from this activity is a validated mechanistic model capable of quantitatively predicting the behaviour of the various Critical Quality Attributes (CQAs) of the crystallization process over a wide range of Critical Process Parameters (CPPs). However, these tools are currently employed almost exclusively offline, primarily to assess process robustness and variability, with very little subsequent online application of the model for control or soft-sensing purposes.
Model Predictive Control (MPC) is an established industrial technology that has only recently been applied in the pharmaceutical industry. Existing applications in batch and continuous crystallization processes provide tight control of supersaturation and of final particle properties at various scales. Optimized supersaturation control has also been demonstrated to improve batch yield and deliver consistent product. The MPC applications to date have characterized the crystallization process using statistical models and data-driven techniques. The data generated during these experiments are used to develop dynamic process models and calibration models, and to establish Meta-Stable Zone (MSZ) boundaries. The MSZ boundary information is combined with the concentration prediction and MPC to provide closed-loop control of supersaturation.
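The closed-loop idea can be illustrated with a deliberately simple sketch that holds a relative supersaturation setpoint between the solubility curve and the MSZ boundary by adjusting temperature. The solubility and MSZ fits, controller gain, setpoint and growth term are all hypothetical placeholders, not plant or PharmaMV values:

```python
# Hypothetical sketch of closed-loop supersaturation control between
# the solubility curve and the MSZ boundary. All numbers below are
# placeholders, not plant or PharmaMV values.

def solubility(T):                # g solute / kg solvent, hypothetical fit
    return 5.0 + 0.30 * T + 0.004 * T**2

def msz_limit(T):                 # hypothetical metastable-zone boundary
    return solubility(T) + 8.0

def control_step(c, T, s_set=1.10, gain=5.0, dT_max=0.5):
    """One control move: steer temperature so that the relative
    supersaturation S = c / c_sat(T) tracks the setpoint, while the
    concentration never crosses the MSZ boundary."""
    S = c / solubility(T)
    dT = max(-dT_max, min(dT_max, gain * (S - s_set)))  # S high -> warm up
    T_new = T + dT
    if c > msz_limit(T_new):      # move would enter the labile zone: hold T
        T_new = T
    return T_new, S

# crude closed loop: supersaturation is consumed by crystal growth
c, T = 30.0, 40.0
for _ in range(50):
    T, S = control_step(c, T)
    c -= 0.05 * max(S - 1.0, 0.0) * solubility(T)  # growth consumes solute
print(f"final supersaturation: {c / solubility(T):.3f}")
```

A mechanistic model would replace both the hypothetical curves and the crude growth term here, which is precisely the substitution the combined MPC approach proposes.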
The drawback of this approach is the time and material cost associated with executing the experimental tests. For batch systems, the product generated during these tests is typically discarded. Furthermore, a subset of the tests must be repeated during scale-up, as the MSZ is both product- and process-dependent. The experimental testing time during initial development and scale-up could be reduced by incorporating information from validated mechanistic models into the Model Predictive Control system. The model's Meta-Stable Zone boundaries and growth characteristics could then be used to update the control system during plant tests and commissioning.
Methodology and Results
In this work, as the first stage of development of an MPC approach as depicted in Fig. 1, we outline the application of an advanced process modelling tool, gCRYSTAL (PSE), to build a model of the seeded batch cooling crystallization of L-glutamic acid from aqueous solution. The process model, with crystallization kinetic parameters obtained from references [1, 2], was validated using process data gathered from laboratory experiments carried out in 0.5 L and 20 L agitated crystallizers at Leeds. The predicted solute concentration and supersaturation profiles as a function of time, and the temporal evolution of the crystal size distribution (CSD), in the 0.5 L crystallizer are illustrated in Figs. 2 and 3, respectively. The laboratory-scale crystallization process model was subsequently employed to predict the behaviour of a pilot-scale industrial crystallizer. To make the mechanistic model more predictive of the behaviour observed at the larger scale, some refinement of the kinetic parameters for secondary nucleation was required, using minimal experimental data from typical plant runs.
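To make the population-balance approach concrete, the sketch below integrates the first four moments of the CSD for a seeded batch cooling crystallization with size-independent growth and secondary nucleation. All kinetic parameters, the solubility fit and the seed load are illustrative assumptions, not the fitted L-glutamic acid values from this work:

```python
# Method-of-moments sketch of a seeded batch cooling crystallization
# with size-independent growth and secondary nucleation. All kinetics,
# the solubility fit and the seed load are illustrative assumptions,
# not the fitted L-glutamic acid parameters of this work.
import numpy as np

kv, rho = 0.5, 1540.0            # shape factor (-), crystal density (kg/m^3)
k_g, g = 2.0e-7, 1.5             # growth rate:    G = k_g*(S-1)^g         (m/s)
k_b, b = 1.0e8, 2.0              # 2nd nucleation: B = k_b*(S-1)^b * mu3   (#/kg s)

def c_sat(T):                    # solubility, kg solute/kg solvent (hypothetical)
    return 0.008 + 3.0e-4 * T

def simulate(t_end=7200.0, dt=1.0, T0=60.0, cool_rate=0.005):
    mu = np.array([1.0e7, 1.0e3, 1.0e-1, 1.0e-5])  # seed moments mu0..mu3 (per kg solvent)
    c = 1.05 * c_sat(T0)                           # start 5% supersaturated
    for k in range(int(t_end / dt)):
        T = max(T0 - cool_rate * k * dt, 25.0)     # linear cooling ramp to 25 C
        S = c / c_sat(T)
        G = k_g * max(S - 1.0, 0.0) ** g
        B = k_b * max(S - 1.0, 0.0) ** b * mu[3]
        dmu = np.array([B, G * mu[0], 2.0 * G * mu[1], 3.0 * G * mu[2]])
        c -= rho * kv * dmu[3] * dt                # solute deposited on crystals
        mu = mu + dmu * dt
    return mu, c, T

mu, c, T = simulate()
print("volume-weighted mean size d43 (m):", mu[3] / mu[2])
print("final concentration (kg/kg):", c)
```

Fitting parameters such as k_b and b against plant data, as described for the secondary nucleation refinement, is the step a tool like gCRYSTAL automates.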
The validated mechanistic model of the crystallizer will be integrated with a PharmaMV (Perceptive Engineering) Advanced Process Control system. PharmaMV provides the multivariate control and monitoring platform for the online laboratory- and pilot-scale crystallization processes. With this approach, the validated mechanistic model will be used to drive the successive control steps towards a target product crystal size distribution, defined by the D10, D50 and D90 of the final product CSD.
12. Measurement and Modeling of the Phase Transformation between Carbamazepine Crystals and Carbamazepine: Nicotinamide (1:1) Cocrystals
T. Suwannikom, A. Flood, Vidyasirimedhi Institute of Science and Technology, Thailand
N. A. Mitchell, Process Systems Enterprise (PSE) Ltd., London, UK
The phase transformation of carbamazepine crystals in solutions containing nicotinamide into carbamazepine:nicotinamide (1:1) cocrystals has been studied using a combination of Raman spectroscopy (using both a Raman microscope and a Raman spectrometer) for characterization of the crystal phase and in-situ optical reflectance measurement (Sequip ORM) for particle sizing and counting. The results were modeled within the population balance framework using the software gCRYSTAL. The experimental results, combined with the population balance framework, allow us to elucidate the mechanisms by which the phase transformation between the pure-species crystal and the cocrystal occurs in solutions containing the coformer.
Cocrystals are becoming increasingly promising in the development of pharmaceutical products, as cocrystal products may have advantages over single-component materials in terms of solubility, dissolution rate, bioavailability and stability, among others.
A recent consensus effort on the definition of cocrystals produced one definition stating that “cocrystals are solids that are crystalline single phase materials composed of two or more different molecular and/or ionic compounds generally in a stoichiometric ratio which are neither solvates nor simple salts”. So far, an enormous amount of work in the literature has been devoted to the design of cocrystals and the choice of coformer, the characterization and properties of cocrystals, pharmacokinetics, and formulation. However, there has been limited work on understanding the processes for industrial production of cocrystals. Cocrystals may be formed via solvent evaporation, by grinding (neat or liquid-assisted), or by solution crystallization. Notable work in this area includes that of the groups of Roberts, ter Horst and Wei. The last of these studies used a modification of the Avrami equation to model the transformation during a cocrystallization, and is perhaps the first work to systematically model a phase transformation to a cocrystal. However, crystallization processes and solution-mediated transformations between crystal species are typically modeled using the population balance framework, which is useful in quantifying the underlying mechanisms of the transformation, including the kinetics of dissolution, nucleation and growth.
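For context, an Avrami-type description of the kind cited above takes the form x(t) = 1 − exp(−(kt)ⁿ) for the transformed fraction, and can be fitted by linearisation. The sketch below uses synthetic data generated from assumed k and n values, not the carbamazepine/nicotinamide measurements:

```python
# Avrami-type sketch: x(t) = 1 - exp(-(k t)^n), fitted by linearising
# ln(-ln(1 - x)) = n ln k + n ln t. The data are synthetic, generated
# from assumed k and n, not the carbamazepine/nicotinamide measurements.
import math

def avrami(t, k, n):
    return 1.0 - math.exp(-((k * t) ** n))

t_data = [5.0, 10.0, 20.0, 40.0, 80.0]                 # min
x_data = [avrami(t, k=0.05, n=2.0) for t in t_data]    # synthetic fractions

# ordinary least squares on the linearised form
X = [math.log(t) for t in t_data]
Y = [math.log(-math.log(1.0 - x)) for x in x_data]
N = len(X)
sx, sy = sum(X), sum(Y)
sxx = sum(v * v for v in X)
sxy = sum(a * b for a, b in zip(X, Y))
n_fit = (N * sxy - sx * sy) / (N * sxx - sx * sx)      # slope = n
k_fit = math.exp((sy - n_fit * sx) / (N * n_fit))      # from intercept = n ln k
print(f"fitted n = {n_fit:.3f}, k = {k_fit:.4f} min^-1")
```

Such a lumped fit describes only the overall transformed fraction; it is precisely the mechanism-level detail (dissolution, nucleation, growth) that the population balance framework adds.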
The objective of the current work is to measure transformation rates of the pure crystal carbamazepine in solutions containing nicotinamide to the 1:1 cocrystal of these species, and to model the transformation using the population balance framework, in order to quantify the kinetics of the underlying mechanisms involved. This has led to a better understanding of this process.
Transformations were performed in pure and mixed solvents, including ethanol and ethyl acetate. The speed of the transformation was found to depend strongly on the solvent used. Experiments were performed within the region of the carbamazepine–nicotinamide phase diagram where the cocrystal is the stable species, and were initiated by adding pure carbamazepine crystals to a solution containing the solvent and an appropriate concentration of nicotinamide. Solutions were stirred at between 300 and 600 rpm, depending on the experiment. The cocrystal transformation was characterized by a Fourier transform Raman microscope (Bruker SENTERRA II) and a Raman spectrometer (Bruker MultiRAM). Particle size and counts were measured in-situ using optical reflectance measurement (SEQUIP ORM sensor) and confirmed with in-situ video microscopy (SEQUIP IVM sensor). Modeling of the system and fitting of population balance models to the experimental data were done with the software gCRYSTAL (PSE Ltd.).
The transformation between the carbamazepine crystals and the carbamazepine:nicotinamide cocrystals could be followed accurately using the ORM sensor (Fig. 1) and the Raman instruments (Fig. 2). The crystals could also easily be distinguished by microscopy (Figs. 3 and 4). The available data allowed the population balance model to be fitted to the experimental transformation data using gCRYSTAL. This allows the kinetics of the underlying mechanisms, namely dissolution of the carbamazepine and nucleation and growth of the cocrystal, to be evaluated, thus improving understanding of the mechanism of the transformation.
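The solution-mediated mechanism (dissolution of the more-soluble parent feeding deposition of the less-soluble cocrystal) can be sketched with a lumped mass-balance model. The rate constants and solubilities below are illustrative assumptions, not the kinetics fitted in gCRYSTAL:

```python
# Lumped mass-balance sketch of a solution-mediated transformation:
# dissolution of the more-soluble parent phase feeds solute that
# deposits as the less-soluble cocrystal. Rate constants and
# solubilities are illustrative assumptions, not fitted kinetics.
m_parent, m_cocrys, c = 10.0, 0.0, 0.0   # g/L: parent solid, cocrystal solid, solute
c_sat_parent, c_sat_co = 2.0, 1.0        # g/L: parent more soluble than cocrystal
k_d, k_g = 0.30, 0.20                    # dissolution / deposition constants (L/(g min))

dt, t = 0.1, 0.0
while m_parent > 1e-3 and t < 500.0:
    r_diss = k_d * m_parent * max(c_sat_parent - c, 0.0)      # parent dissolves
    # +0.1: small initial surface term so deposition can start from zero solid
    r_grow = k_g * max(c - c_sat_co, 0.0) * (m_cocrys + 0.1)  # cocrystal deposits
    m_parent -= r_diss * dt
    m_cocrys += r_grow * dt
    c += (r_diss - r_grow) * dt
    t += dt
print(f"t = {t:.1f} min: parent {m_parent:.4f} g/L, cocrystal {m_cocrys:.2f} g/L")
```

The full population balance model resolves the same competition by crystal size, which is what allows separate dissolution, nucleation and growth kinetics to be extracted from the ORM and Raman data.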