Conference papers and presentations


Past papers and presentations

AIChE Annual Meeting 2011 (Minneapolis Convention Center, Minneapolis, MN, October 16-21, 2011)

1. Modelling of a Spray Drying Process

Mark A. Pinto1, Martin Nørby2, Sean K. Bermingham1 and Poul Bach2, (1)Process Systems Enterprise, London W6 7HA, United Kingdom, (2)Solid Products Development, Novozymes A/S, DK-2880, Denmark

Session: tbc

Spray drying is a process in which a suspension or solution of a desirable product in a volatile solvent is converted to a largely dry solid product by contact with a drying medium. The process starts with an atomizer which creates droplets from the feed suspension or solution. These droplets are then mixed with a hot gas. Evaporation of the volatile solvent in the droplets takes place and a dry solid product is thus obtained.

Several models are available in the literature with varying levels of fidelity. The levels of fidelity differ both with respect to the modelling of the drying process and with respect to the flow of the droplets and hot gas.

In a recent Ph.D. thesis, Jakob Sloth developed a model of enzyme spray drying. The model considers the drying of a single particle containing a dissolved enzyme together with additives and predicts the change in droplet size, temperature, moisture content and enzyme inactivation as drying progresses.

In this paper, the work of Sloth is (i) implemented in gSOLIDS, a commercial flowsheeting environment for solids processes and (ii) extended to describe spray drying of droplet size distributions as they flow through an industrial scale unit.

The extended model captures the following phenomena:

  1. the evolution of the droplet size distribution as constant-rate drying takes place;
  2. the evolution of the distribution of temperature with respect to droplet size; and
  3. the effect of backmixing of droplets and gas.

This model is then used to construct a flowsheet that also contains models of downstream fluid bed drying and classification processes to determine the effects of key operating parameters on the overall performance of the process in steady state as well as transient periods.
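
To give a feel for the kind of single-droplet balance on which such models build, the sketch below integrates a heat-transfer-limited evaporation ("d²-law") model for one droplet. This is background illustration only, not Sloth's enzyme drying model or the gSOLIDS implementation, and all parameter values are hypothetical.

    # Single-droplet, heat-transfer-limited evaporation sketch
    # (illustrative only; not the model described in the paper).
    import numpy as np
    from scipy.integrate import solve_ivp

    k_g = 0.03      # gas thermal conductivity, W/m/K (hypothetical)
    rho_l = 1000.0  # droplet liquid density, kg/m^3
    lam = 2.3e6     # latent heat of vaporisation, J/kg
    T_gas = 420.0   # drying gas temperature, K
    T_wb = 320.0    # droplet (wet-bulb) temperature, K

    def drdt(t, r):
        # With Nu = 2 (stagnant film), h = k_g / r, and all convective
        # heat supplies latent heat: rho_l*lam*(-dr/dt) = h*(T_gas - T_wb)
        return -k_g * (T_gas - T_wb) / (rho_l * lam * r)

    def dried(t, r):
        return r[0] - 1e-6  # stop when the droplet is nearly gone
    dried.terminal = True

    sol = solve_ivp(drdt, (0.0, 1.0), [25e-6], events=dried, max_step=1e-3)
    print(f"time to shrink from 25 um to 1 um: {sol.t[-1]*1e3:.0f} ms")

Integrating this kind of shrinkage law over a droplet size distribution, rather than a single droplet, is what the population-balance extension described above adds.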

References

J. Sloth (2007) Formation of enzyme containing particles by spray drying. Ph.D. thesis, Technical University of Denmark.

2. Model-Based Characterisation of Organic and Aqueous Tablet Film Coating Processes: Parameter Estimation and Risk Management

Salvador García Muñoz1, Mary T. Am Ende2, Mark A. Pinto3 and Sean K. Bermingham3, (1)Process Modeling & Engineering Technology, Pfizer Global Research and Development, Groton, CT, (2)Pharmaceutical Development, Pfizer Global Research and Development, Groton, CT, (3)Process Systems Enterprise, London W6 7HA, United Kingdom

Session: tbc

Tablet coating is an important processing step in the pharmaceutical industry in which tablets are coated with an aqueous or organic film coating for both aesthetic and functional reasons.

Mathematical models of tablet film coating are important in pharmaceutical development as they aid in the design of experiments, scale-up and the determination of optimal process conditions as formulations change. A universal steady-state film coating model has previously been developed (am Ende, Berchielli, 2005) with the aim of providing process engineers with a means of predicting target operating conditions for optimization, scale-up and robustness studies. In that work, the model was validated against experimental data and its predictions were found to be in good agreement with data not used for model validation.

In this study, the steady-state model was first validated against steady-state plant data using gSOLIDS (Process Systems Enterprise, UK). Starting from a single experiment and adding experiments one at a time, the 95% confidence interval on the estimated parameter was found to decrease rapidly, falling below 10% once four or more experiments were used.

Figure 1: Change in estimated value of heat loss factor as experimental data is added

In order to quantify whether sufficient data was used for model validation, the 95% T-value was compared to the reference T-value. (If the 95% T-value is greater than the reference T-value, this indicates that sufficient data was used for model validation.) The results indicated that at least three experiments are needed to estimate the unknown model parameter accurately. This is consistent with the results obtained for the 95% confidence interval, which exceeded 50% when only two experiments were used.

Figure 2: Changes in reference and 95% T-values as experimental data is added

As indicated by the 95% confidence intervals, the estimated value of the model parameter is not a perfect estimate. Therefore, in order to determine a reasonable estimate of the model prediction, a Monte Carlo simulation was carried out: several instances of the model were run, each with the unknown model parameter sampled from a normal distribution centred at the parameter estimate and with a standard deviation derived from the 95% confidence interval. The results of these simulations were averaged to obtain an estimate of the model prediction, and were also used to construct a 95% confidence interval on the model predictions. The model predictions were found to be in good agreement with the experimental data used for parameter estimation. Further, given the confidence interval of the estimated parameter, the standard deviation of the model prediction was much smaller than that of the data, indicating that the predictions vary very little with respect to the uncertainty in the parameter estimate.
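
The Monte Carlo procedure described above can be sketched in a few lines. This is a minimal illustration with a hypothetical stand-in model and made-up numbers, not the actual film coating model or its estimated parameter values.

    # Monte Carlo propagation of parameter uncertainty (illustrative only).
    import numpy as np

    theta_hat = 0.35          # estimated heat loss factor (hypothetical)
    half_ci95 = 0.03          # half-width of its 95% confidence interval
    sigma = half_ci95 / 1.96  # implied standard deviation

    def model(theta):
        # Stand-in for the steady-state coating model, e.g. predicting
        # an exhaust temperature as a function of the heat loss factor.
        return 42.0 - 20.0 * theta

    rng = np.random.default_rng(0)
    thetas = rng.normal(theta_hat, sigma, size=10_000)
    preds = model(thetas)

    lo, hi = np.percentile(preds, [2.5, 97.5])
    print(f"prediction: {preds.mean():.2f} (95% CI {lo:.2f} to {hi:.2f})")

For a model that is nearly linear in the parameter, the spread of the predictions scales directly with the parameter's confidence interval, which is why a tight parameter estimate translates into prediction uncertainty much smaller than the scatter in the data.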

Figure 3: Comparison of model predictions with experimental data

A similar analysis was conducted using dynamic data. In this case, however, although the data used for parameter estimation was deemed sufficient, the quality of the fit was quantified as unsatisfactory even when several experiments were used. This indicates deficiencies in the model, which is to be expected, as a steady-state model was used to study dynamic plant data.

In summary, the results presented above indicate that with relatively few steady state experiments, an accurate estimate can be obtained of the unknown model parameter in the steady state film coating model. The model predictions were found to be in good agreement with the experimental data. The uncertainty in the model parameter was quantified and it was found that the model predictions were relatively unaffected given the uncertainty in the parameter estimate.

The analysis presented above is generic and can be used to quantify uncertainty in any mathematical model. This information is especially useful as it can be used to quantify the risk associated with decisions made using these mathematical models. Further, the rigour of the analysis helps determine whether more experimentation is needed, whether further model development is needed, or possibly a combination of both.

References

M. am Ende, A. Berchielli (2005) A thermodynamic model for organic and aqueous tablet film coating. Pharmaceutical Development and Technology, 1:47-58

 

3. Model-Based Scale-up of Impact Milling

Brian T. Gettelfinger1, Stephen R. Glassmeyer2, Mark A. Pinto3 and Sean K. Bermingham3, (1)Chemical Systems Modeling Section, Modeling and Simulation, Corporate R&D, Procter & Gamble, West Chester, OH, (2)Particle Processing Section, Process Technologies, Corporate Engineering, Procter & Gamble, West Chester, OH, (3)Process Systems Enterprise, London W6 7HA, United Kingdom

Session: tbc

The Vogel & Peukert model uses separate material properties and mill parameters, determined from benchtop experiments, to predict the particle size distribution of the output of impact mills. This turns mill modeling into a tool that can be used every day at P&G. This talk covers our implementation and use of this model within the gSOLIDS environment (Process Systems Enterprise, UK). We determined the material properties for an absorbent gelling material used in diapers from sieving and single-impact milling experiments. We successfully deployed in gSOLIDS the population balance model of Vogel and Peukert, which predicts the output of our benchtop pin mill. The gSOLIDS tool allowed us to perform parameter estimation on this highly nonlinear model, which gave us distinct material and mill properties. We then made successful model predictions of mill scale-up using these same parameters. Because the method can be generalized to any powder broken in an impact mill, it could potentially save millions annually in experimental costs. We have thus developed a model-based work process for impact mill scale-up that uses gSOLIDS at its core.
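
The single-particle breakage probability at the core of the Vogel & Peukert model can be sketched as follows; the functional form follows the 2005 paper cited below, but the parameter values here are hypothetical, not the ones estimated for our material.

    # Vogel & Peukert breakage probability:
    #   S = 1 - exp(-f_mat * x * k * (W_kin - W_min))
    # with material parameters f_mat and x*W_min, and mill parameters
    # k (number of impacts) and W_kin (mass-specific impact energy).
    import numpy as np

    def breakage_probability(x, k, w_kin, f_mat, x_w_min):
        # x: particle size (m); k: number of impacts
        # w_kin: mass-specific impact energy (J/kg)
        # f_mat: material strength parameter (kg/(J*m))
        # x_w_min: size-weighted threshold energy x*W_min (J*m/kg)
        return 1.0 - np.exp(-f_mat * k * (x * w_kin - x_w_min))

    # Hypothetical example: 500 um particle, one impact at 20 m/s
    v = 20.0                # impact velocity, m/s
    w_kin = 0.5 * v**2      # mass-specific impact energy, J/kg
    print(breakage_probability(500e-6, 1, w_kin, f_mat=5e2, x_w_min=1e-2))

In the full mill model this probability, together with a breakage (daughter-size) function, enters a population balance that is solved for the product particle size distribution.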

References:

L. Vogel, W. Peukert (2005) From single impact behaviour to modelling of impact mills. Chemical Engineering Science 60, 5164-5176.

4. A Model-Centric Solution to Link Content Uniformity Targets with API Particle Size Specifications and Process for a QbD Exercise

Salvador García Muñoz1, Weili Yu2, Mark A. Pinto3 and Sean K. Bermingham3, (1)Process Modeling & Engineering Technology, Pfizer Worldwide Research & Development, Groton, CT, (2)Pharmaceutical Development, Pfizer Worldwide Research & Development, Groton, CT, (3)Process Systems Enterprise, London W6 7HA, United Kingdom

Session: tbc

One of the contributing factors in determining targets for the particle size of an API can be the content uniformity of the dosage forms. This work presents a framework built around a model-based specification for the particle size of the API that can be used in a bi-directional mode to: a) determine center-point processing conditions for the API given a content-uniformity target for the dosage form, and b) propagate expected sources of variability in the API process onto the final quality of the product.

This work discusses the necessary considerations and advantages when a model-based specification is considered for the particle size distribution of the API and how this compares to common practices involving discrete points of the size distribution (such as D[10], D[50], D[90]).

Our discussion is centered on the need for a structural specification of the API particle size that can be used effectively to guide the design of the last stages of the drug substance manufacturing train. We also emphasize the need to standardize the descriptors of particle size in order to link the variability in the API process to its effects on the drug product train and, eventually, the quality of the dosage form.

5. Model-Based Decision Support for Design and Operation of Pharmaceutical Crystallization Processes: Efficient Workflows for Validation Against Experiments and Scale-up

Sean K. Bermingham, Process Systems Enterprise, London W6 7HA, United Kingdom and Ugo Cocchini, GlaxoSmithKline, Stevenage SG1 2NY, United Kingdom

Session: tbc

This paper describes GSK's assessment of available tools and techniques for model-based design and optimisation of crystallization processes and considers the potential for using the same models to quantify the design space of these processes.  In order for model-based decision support to be of practical value to the pharmaceutical industry, it is essential to have workflows that allow model development, experimentation, parameter estimation, process optimisation and scale-up to be done in a period of 4-8 weeks.

The batch cooling crystallization of an API from a solvent was selected as a case study. The process was seeded, and investigation of crystal images revealed that the dominant mechanisms are growth, attrition and, to a lesser extent, agglomeration. The key challenges from the process development perspective are to be able to predict attrition and agglomeration as a function of operating conditions, crystallizer/agitator type and equipment scale.

A number of experiments were conducted to investigate the final PSD and solute concentration over time as a function of seeding conditions (amount and PSD of the seeds) and cooling profile.  Slurry samples were taken at critical time points.  Solid samples were isolated by vacuum filtration and analysed for PSD by Malvern Mastersizer, whilst assay was performed on the liquid filtrate to measure residual solute concentration.

A model of the batch cooling crystallization set-up used for the experimental work was developed using a commercially available tool.  The developed model is based on a population balance framework that supports both steady-state and dynamic applications.  For the model validation, the unknown kinetic parameters were estimated against the solute concentration and PSD measurements of the performed experiments.  This was done for all experiments simultaneously.
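
For reference, the core of such a population balance framework is the familiar one-dimensional balance, written here in its generic textbook form rather than as the specific equations of the commercial tool:

    \frac{\partial n(L,t)}{\partial t} + \frac{\partial \left[ G(L,t)\, n(L,t) \right]}{\partial L} = B(L,t) - D(L,t)

where n is the crystal number density, L the characteristic crystal size, G the size-dependent growth rate, and B and D birth and death terms representing nucleation, attrition and agglomeration. The unknown kinetic parameters referred to above appear inside G, B and D.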

The validated model was subsequently combined with flow information from a CFD model to construct a so-called Multizonal model (see Figure below) that allowed prediction of local wall temperatures, primary and secondary nucleation rates, growth (and dissolution) rates as well as agglomeration rates.  This enables scale-up on a sound physical basis.

The value of a model-based approach to process development is to reduce the number of experiments required to understand and predict the crystallization behaviour.  The approach used here allows one to predict the behaviour at the same scale (e.g. to troubleshoot an existing process or to reduce batch time whilst satisfying PSD and purity constraints) as well as at different scales or different configurations at the same/similar scale (e.g. to aid scale-up).

 

Middle East Process Engineering Conference & Exhibition (MEPEC), (Gulf International Convention Centre, Gulf Hotel, Kingdom of Bahrain, 23-26 October 2011)

1. Advanced Model for Operational Optimization of Steam Crackers

Abduljelil Iliyas1, Munawar Saudagar1, Stepan Spatenka2, Zbigniew Urban2, Constantinos Pantelides2

Session: 2A 2 (DANA1) Topic: Performance Management 15:45-17:00

(1)Technology and Innovation Center, Saudi Basic Industries Corporation (SABIC), P.O. Box 42503, Riyadh 11551, KSA (2)Process Systems Enterprise (PSE) Limited, London W6 7HA, United Kingdom.

Steam cracking of light hydrocarbons to olefins has been a major contributor to the growth of the petrochemical industry for several decades. This is partly due to ethylene, one of the world's largest commodity chemicals, being the starting raw material for much of the petrochemical industry. Over 20 million MTA of new ethylene capacity will be added in the Middle East by 2016. At the same time, given price fluctuations, the need for profitable operation of ethylene plants has never been so critical.

It is well known that the heart of an ethylene plant is the thermal cracking furnace - its operation dictates the overall plant profitability. Steam cracker performance optimization involves an optimal trade-off between run-lengths that are too short (unnecessarily reducing the availability of the cracker through too-frequent decoking operations) and run-lengths that are too long (reducing the efficiency of operation as coking reduces heat transfer and causes excessive pressure drops). As a result, achieving truly optimal operation requires the solution of a complex dynamic optimization problem that determines the optimal time-varying profiles of all controls at the operator's disposal while maintaining the cracker within safe and operable limits. Such an optimization problem needs to be based on a mathematical model that provides a sufficiently accurate description of the processes that take place within the cracker tubes and the firebox.
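
Schematically, and under simplifying assumptions, such a run-length problem can be posed as maximising the time-averaged profit over a production run of length t_f followed by a decoking period:

    \max_{u(t),\, t_f} \;\; \frac{1}{t_f + t_{\mathrm{decoke}}} \int_0^{t_f} P\big(x(t), u(t)\big)\, \mathrm{d}t
    \quad \text{subject to} \quad \dot{x} = f(x, u), \quad g(x, u) \le 0

where x is the process state (including the growing coke layer), u the time-varying controls available to the operator, P the instantaneous operating profit, and g the safety and operability limits such as maximum tube metal temperature and pressure drop. This is a generic statement of the problem class, not the specific formulation used in the SABIC/PSE project.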

The work described in this paper is part of a joint SABIC/PSE project aiming to establish a comprehensive, world-class capability for modeling and operational optimization of steam cracking technology within SABIC and its affiliates. A general model of a thermal cracker is developed using PSE's state-of-the-art high-fidelity modeling platform, gPROMS®. The tube side phenomena, including both chemistry and heat/mass transfer phenomena, are modeled in detail. The tube models are coupled with a detailed model of the firebox making use of geometrical and other information extracted from a Computational Fluid Dynamics (CFD) model. Advanced dynamic optimization techniques are applied to the combined model for the determination of an optimal operating policy over an entire run at a SABIC cracker.

2. High-fidelity dynamic modelling of depressurising vessels helps improve safety and reduce CAPEX

Authors: James Marriott, Juan-Carlos Mani, Costas Pantelides, Zbigniew Urban, Process Systems Enterprise Ltd, London, United Kingdom.

Session: 3A 1 (DANA1) Topic: Process Modeling and Optimization 10:30-12:50

The use of dynamic modelling for relief system design can result in a considerable reduction in capital expenditure while simultaneously improving plant safety. This paper considers the application of rigorous dynamic analysis to vessel depressurisation (or "blowdown"), in order to accurately quantify relief loads and metal temperatures to enable informed safety and CAPEX decision support.

The detailed dynamic modelling and simulation of the rapid depressurisation ("blowdown") of high-pressure vessels, manifolds and pipelines (including during well start-up) are key elements in the safety analysis of oil & gas production plant and other high-pressure installations. This depressurisation not only determines the load imposed on the pressure relief system (e.g. flare network) but, more importantly, may result in significantly reduced temperatures of the vessel walls, which may lead to embrittlement and high thermal stresses. Models for blowdown have been proposed by several authors over the past two decades, and some of this work has been applied extensively in industrial applications. This paper presents a next-generation model for blowdown calculations. In contrast to earlier models in the literature, the model incorporates a 3-dimensional model of the metal walls taking account of the transfer of heat between regions of the wall in contact with different phases. This allows a more accurate estimation of the wall temperatures, and the direct computation of thermal stresses.

The model also incorporates a more accurate description of the non-equilibrium interactions among the various phases which does not rely on the use of adjustable parameters. The model has been validated against the set of experimental data obtained from a full-scale vessel.

The concepts described in the paper are illustrated with several industrial case studies.

3. Whole plant economic design optimisation using high-fidelity models

Authors: Hilario Martín Rodríguez, Repsol SA, Madrid, Spain; Alejandro Cano, Costas Pantelides, Juan-Carlos Mani, Rodrigo Blanco, Process Systems Enterprise Ltd, London, United Kingdom.

Session: 3A 1 (DANA1) Topic: Process Modeling and Optimization 10:30-12:50

During process design there are many trade-offs to consider. Some equipment decisions may improve the economics of the equipment being considered but have a negative impact on the economics - as well as the operability - of the plant as a whole. Whole plant design optimization techniques make it possible to undertake the design of complex reactor and separation sections simultaneously, in order to determine optimal values of design variables taking all relevant constraints into consideration, and thereby ensuring the overall best economics.

In the case presented in the paper the application of such optimization techniques to a new propylene oxide process resulted in the elimination of entire distillation columns from the original process design, saving significant capital and operating costs. The plant comprised a complex multitubular reactor and a separation section with many distillation columns (one an azeotropic distillation and two involving reaction), plus large recycles.

A simulation model was built of the integrated reactor and separation flowsheet, which was then optimized using an economic objective function representing annualised capital plus operating cost. The rigorous mathematical optimization considered 49 decision variables simultaneously. Reactor design variables included tube pitch, tube length, coolant velocity, feed reactant mass fraction, number of baffles, cooling water inlet temperature, the number of active reactors and numerous other quantities. Separation section design variables included condenser reflux ratios, temperatures, pressures and temperature approaches, column top pressures, reboiler boil ratios and temperatures, and concentrations of various products in distillate and bottoms streams. Configuration and topology decisions, such as the location of feed trays and column bypasses, were also included, allowing flowsheet alternatives to be considered as part of the optimisation; a schematic form of the problem is sketched below.
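
In schematic form (with generic notation, not the project's actual formulation), the optimisation solved is of the type:

    \min_{d,\, z} \;\; \phi\, C_{\mathrm{capital}}(d, z) + C_{\mathrm{operating}}(x, d, z)
    \quad \text{subject to} \quad h(x, d, z) = 0, \quad g(x, d, z) \le 0

where d are the continuous design variables, z the discrete configuration and topology decisions, x the flowsheet state determined by the model equations h, g the design and operating constraints, and φ an annualisation factor for capital cost.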

The optimal design represented large savings in operating and capital cost with respect to the base case. Two columns were eliminated entirely from the separation section. In addition, heat integration yielded significant operating cost savings with an attractive return on investment; payback was less than four months.

The methods used in this design are sufficiently general to be applied to any process plant and can be implemented using commercially available simulation and modelling tools.

 

Global Congress on Process Safety (Hyatt Regency, Chicago, IL, March 13-16, 2011)

High-fidelity dynamic modeling of depressurizing vessels and flare networks to improve safety and reduce CAPEX
James Marriott, Zbigniew Urban, Stepan Spatenka, Process Technology, Process Systems Enterprise

Session: New Relief System Solutions scheduled for Tuesday, March 15, 2011: 3:30-5:00 PM

The use of dynamic modeling for relief system design can result in a considerable reduction in capital expenditure while simultaneously improving plant safety. This paper considers the application of dynamic analysis to two areas, vessel depressurization (or "blowdown") and flare network design, in order to accurately quantify relief loads and metal temperatures to enable informed safety and CAPEX decision support.

The detailed dynamic modeling and simulation of the rapid depressurization ("blowdown") of high-pressure vessels is a key element of the safety analysis of oil & gas production plant and other high-pressure installations. This depressurization not only determines the load imposed on the pressure relief system (e.g. flare network) but, more importantly, may result in significantly reduced temperatures of the vessel walls, which may lead to embrittlement and high thermal stresses. Models for blowdown have been proposed by several authors over the past two decades, and some of this work has been applied extensively in industrial applications. This paper presents a next-generation model for blowdown calculations. In contrast to earlier models in the literature, the model incorporates a 3-dimensional model of the metal walls taking account of the transfer of heat between regions of the wall in contact with different phases. This allows a more accurate estimation of the wall temperatures, and the direct computation of thermal stresses.

The model also incorporates a more accurate description of the non-equilibrium interactions among the various phases which does not rely on the use of adjustable parameters. The model has been validated against the set of experimental data obtained from a full-scale vessel.

The flare networks for major plants represent a non-negligible part of the overall capital investment. Current industrial practice for their design is primarily based on steady-state analysis, as described in standards such as API 521 and supported by widely-used software tools. However, it is a widely recognized fact that the application of steady-state considerations to what is fundamentally a dynamic system inevitably requires the use of conservative assumptions which often result in significant oversizing of flare headers and other components of the network. Another major contributor to capital cost is the use of special materials for the parts of the system that may be exposed to low-temperature fluids - typically the tail pipes attached to high-pressure process vessels - and which are therefore at risk of embrittlement. The key to being able to limit the additional capital expenditure is the accurate estimation of the length of piping that is subject to "abnormal" temperatures.

This paper describes, in addition to the high-fidelity depressurisation capabilities, an advanced model-based system for flare system network design that addresses the above issues while being compatible with existing steady-state flare network technology. The system supports steady-state and dynamic analysis, wall temperature modeling and prediction of hydrate and ice formation within a single integrated framework.

 

AIChE Annual Meeting 2010 (Salt Lake City, UT, USA, 7-12 November 2010)

1. Detailed Dynamic Modeling and Simulation of Flare Networks
James Marriott, Applications Engineering, Process Systems Enterprise and Rodrigo Blanco-Gutierrez, Consulting, Process Systems Enterprise

Wednesday, November 10, 2010, Hall 1 (Salt Palace Convention Center)

The flare networks for major plants represent a non-negligible part of the overall capital investment. Current industrial practice for their design is primarily based on steady-state analysis, as described in standards such as API 521 and supported by widely-used software tools. However, it is a widely recognized fact that the application of steady-state considerations to what is fundamentally a dynamic system inevitably requires the use of conservative assumptions which often result in significant oversizing of flare headers and other components of the network. Another major contributor to capital cost is the use of special materials for the parts of the system that may be exposed to low-temperature fluids - typically the tail pipes attached to high-pressure process vessels and also the parts of the main flare headers connected to them, up to the point where heat transfer with the environment brings the temperature to a level that does not lead to the risk of embrittlement. Also, the relief of high-temperature fluids (e.g. from vessels under fire) may require the introduction of thermal expansion loops, which also add to the capital cost. In both the cold and the hot relief cases, the key to being able to limit the additional capital expenditure is the accurate estimation of the length of piping that is subject to "abnormal" temperatures.

This paper describes an advanced system for model-based flare system network design that addresses the above issues while being compatible with existing steady-state flare network technology. The system supports steady-state and dynamic analysis, wall temperature modeling and prediction of hydrate and ice formation within a single integrated framework.

2. Detailed 3-D Dynamic Analysis of Depressurizing Vessels
Zbigniew Urban, Process Technology, Process Systems Enterprise Limited

Tuesday, November 9, 2010: 8:50 AM 250 D Room (Salt Palace Convention Center)

The detailed dynamic modeling and simulation of the rapid depressurization ("blowdown") of high-pressure vessels is a key element of the safety analysis of oil & gas production plant and other high-pressure installations. This depressurization not only determines the load imposed on the pressure relief system (e.g. flare network) but, more importantly, may result in significantly reduced temperatures of the vessel walls, which may lead to embrittlement and high thermal stresses.

Models for blowdown have been proposed by several authors over the past two decades, such as Haque, Richardson and Saville (1992), Mahgerefteh and Wong (1999) and Speranza and Terenzi (2005). Some of this work has been applied extensively in industrial applications.

This paper presents a next-generation model for blowdown calculations. In contrast to earlier models in the literature, the model incorporates a 3-dimensional model of the metal walls taking account of the transfer of heat between regions of the wall in contact with different phases. This allows a more accurate estimation of the wall temperatures, and the direct computation of thermal stresses. The model also incorporates a more accurate description of the non-equilibrium interactions among the various phases which does not rely on the use of adjustable parameters.

Implemented in gPROMS, the model has been validated against the set of experimental data obtained from a full-scale vessel, as reported by Haque et al. (1992) and Szczepanski (1994).
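
To illustrate why wall temperatures matter, the sketch below integrates a single lumped wall-node energy balance against a prescribed, rapidly cooling bulk fluid. This is a zero-dimensional caricature for intuition only; the model described in this paper is 3-D and resolves wall regions in contact with different phases, and every value below is hypothetical.

    # Lumped vessel-wall energy balance during blowdown (illustrative only).
    import numpy as np
    from scipy.integrate import solve_ivp

    m_cp = 4.0e5   # wall mass x specific heat, J/K (hypothetical)
    A = 10.0       # inner wall area, m^2
    h_in = 500.0   # fluid-to-wall heat transfer coefficient, W/m^2/K
    h_out = 10.0   # ambient-to-wall coefficient, W/m^2/K
    T_amb = 288.0  # ambient temperature, K

    def T_fluid(t):
        # Stand-in for the bulk fluid temperature, which drops rapidly
        # as the inventory flashes and expands during depressurization.
        return 288.0 - 120.0 * (1.0 - np.exp(-t / 60.0))

    def rhs(t, Tw):
        q_in = h_in * A * (T_fluid(t) - Tw)   # heat lost to the cold fluid
        q_out = h_out * A * (T_amb - Tw)      # heat gained from ambient
        return (q_in + q_out) / m_cp

    sol = solve_ivp(rhs, (0.0, 900.0), [288.0], max_step=1.0)
    print(f"minimum wall temperature: {sol.y[0].min() - 273.15:.0f} C")

Even this caricature shows the key effect: the wall tracks the cold fluid with a lag set by its thermal mass, so low-temperature embrittlement risk depends on both the depressurization rate and the local heat transfer, which is what the full 3-D wall model resolves.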

References

M.A. Haque, S.M. Richardson, G. Saville, "Blowdown of Pressure Vessels. I - Computer Model", Transactions of the Institution of Chemical Engineers Part B: Process Safety and Environmental Protection, 70(B1), 1 (1992).

H. Mahgerefteh, S.M.A. Wong, "A numerical blowdown simulation incorporating cubic equations of state", Comput. Chem. Engng., 23, 1309 (1999).

A. Speranza, A. Terenzi, "Blowdown of Hydrocarbons pressure vessel with partial phase separation", Series of Advances in Mathematics, available from http://www.i2t3.unifi.it/upload/file/Articoli/animp_2004.pdf (2005).

R. Szczepanski, "Simulation programs for blowdown of pressure vessels." IChemE SONG Meeting (1994).

3. An Integrated Framework for Model-Based Solids Process Engineering
Mark Pinto1, Sean Bermingham1, Benjamin Weinstein2 and John Hecht3, (1)Process Systems Enterprise, London W6 7HA, United Kingdom, (2)Modeling and Simulation Department, Procter & Gamble, West Chester, OH, (3)Process Technologies, Corporate Engineering, Procter & Gamble, West Chester, OH

Wednesday, November 10, 2010: 9:45 AM Grand Ballroom D (Salt Palace Convention Center)

Solids processes are estimated to rarely reach more than 60% of design capacity and to require ten times longer to start up than processes involving only liquid and gas streams. The business costs associated with these staggering statistics are exacerbated by the capital- and energy-intensive nature of solids processes.

This contribution therefore focuses on the development of a model-based engineering tool that can help address these problems through a better understanding of the process and associated risks.

The first part of this contribution describes a new modelling framework for solids processes aimed at providing a step change in the facilities available to engineers responsible for the design and operation of industrial solids processes. This framework has the following characteristics:

  • Use of fully discretised population balances (particle size distributions and composition distributions).
  • Ability to address true dynamics.
  • Robust handling of large numbers of recycles.
  • Support for model validation (parameter estimation and experiment design).
  • Optimisation capability for design and operation of processes.
  • Interfacing to standard CFD packages.
  • Phenomenological approach to model development that facilitates rapid and consistent development of models.
  • An intuitive user interface.

The second part of this paper covers a number of typical case studies investigated with this new framework:

  1. Model discrimination and parameter estimation using experimental data from a fed-batch agglomeration process.
  2. Analysing the impact of uncertainty in model and design parameters on process performance in order to quantify risk associated with capital expenditure decisions.
  3. Determining the optimal trade-off between on the one hand low capital cost and reduced start-up times of the process and on the other hand the robustness of the process with respect to downstream disturbances (e.g. blockages).
  4. Determining how the operation of upstream units, in this case a crystallizer, impacts the capacity of downstream solids handling.
  5. Coupling information from lab-scale studies with CFD simulations of plant-scale equipment to aid plant-scale equipment design and optimisation.

4. Whole-Plant Design Optimization
Alejandro Cano, Process Systems Enterprise Inc and Hilario Martin Rodriguez, Repsol Centro Tecnológico, Madrid, Spain

Monday, November 8, 2010: 4:05 PM 250 E Room (Salt Palace Convention Center)

During process design there are many trade-offs to consider. Some equipment decisions may improve the economics of the equipment being considered but have a negative impact on the economics - as well as the operability - of the plant as a whole. Whole plant design optimization techniques make it possible to undertake the design of complex reactor and separation sections simultaneously, in order to determine optimal values of design variables taking all relevant constraints into consideration, and thereby ensuring the overall best economics.

In the case presented in the paper the application of such optimization techniques to a new propylene oxide process resulted in the elimination of entire distillation columns from the original process design, saving significant capital and operating costs. The plant comprised a complex multitubular reactor and a separation section with many distillation columns (one an azeotropic distillation and two involving reaction), plus large recycles.

A simulation model was built of the integrated reactor and separation flowsheet, which was then optimized using an economic objective function representing annualised capital plus operating cost. The rigorous mathematical optimization considered 49 decision variables simultaneously. Reactor design variables included tube pitch, tube length, coolant velocity, feed reactant mass fraction, number of baffles, cooling water inlet temperature, the number of active reactors and numerous other quantities. Separation section design variables included condenser reflux ratios, temperatures, pressures and temperature approaches, column top pressures, reboiler boil ratios and temperatures, and concentrations of various products in distillate and bottoms streams. Configuration and topology decisions, such as the location of feed trays and column bypasses, were also included, allowing flowsheet alternatives to be considered as part of the optimisation.

The optimal design represented large savings in operating and capital cost with respect to the base case. Two columns were eliminated entirely from the separation section. In addition, heat integration yielded significant operating cost savings with an attractive return on investment; payback was less than four months.

The methods used in this design are sufficiently general to be applied to any process plant and can be implemented using commercially available simulation and modelling tools.

5. On the Optimisation of Industrial Scale Oxidation Processes by Combining Fundamental Chemistry and Multi-Phase Hydrodynamics
Praveen Lawrence1, Alfredo Ramos1, Sujin Lee2, In Seon Kim2 and Sean Bermingham1, (1)Process Systems Enterprise Ltd, London, United Kingdom, (2)Process Systems Enterprise Korea Ltd, Daejeon, South Korea

Wednesday, November 10, 2010: 1:12 PM Grand Ballroom F (Salt Palace Convention Center)

Terephthalic acid (TPA) is an important industrial intermediate produced on a large scale as a raw material for polyethylene terephthalate (PET) and other polymers. TPA manufacturers face a challenging business environment and are constantly looking to improve process efficiency without compromising product quality specifications and production targets. The situation is the same for other products manufactured by means of oxidation, such as isophthalic acid, phenol and cresol.

TPA is commercially produced through the oxidation of p-xylene. Though the selectivity of the oxidation reaction is generally high (>96%), the high proportion of raw material cost in the plant economics provides impetus to look for further opportunities to reduce raw material consumption. In addition, the solvent (acetic acid) is lost (by burning and other reactions) in the process, and this loss needs to be reduced. Considering that these processes are already quite efficient after years of empirical optimisation, any improvement achieved is expected to be relatively marginal (of the order of 1%), but due to the scale of operation this is likely to yield a few million dollars in additional profit even for a medium-sized TPA plant.

Improving a process that is already efficient may involve fine-tuning of the process operating conditions, modifications to the reactor/equipment configurations and, where feasible, identifying new degrees of freedom to meet the objectives. Though one can tweak the operating conditions or modify the internal configurations based on experience, it is faster, lower-risk and cheaper (especially when changes to reactor configurations are involved) to carry out optimisation studies using model-based engineering tools. Considering that the improvement targets for these processes are modest, the models used for these types of studies must be sufficiently accurate and predictive in nature. To meet these requirements, a very detailed first-principles model framework for multiphase oxidation reactors has been developed in gPROMS, which accounts for the major physical and chemical phenomena taking place within the reactor:

  • Occurrence of vapour, liquid and solid phases
  • Mass transfer between the vapour and liquid phases based on the Maxwell-Stefan relationship
  • Vapour liquid equilibrium at the phase interface
  • Crystallization of the product within the reactor
  • Chemical reaction kinetics
  • Hydrodynamics of the reactor

As the volume of industrial reactors is of the order of hundreds of cubic metres, assuming well-mixed behaviour in the reactor is a gross simplification. The hydrodynamic behaviour in the reactor and its impact on the physical and chemical performance of the system is captured using a multi-zonal approach (Bezzo, 2004).

As the TPA process involves liquid recycle loops, it is essential to not only model the oxidation reactor but also the condenser systems, post-oxidation reactors and downstream crystallizers.

The above-mentioned model framework has been successfully used to optimise the performance of several TPA and IPA plants around the world. This contribution describes a further improvement of this framework related to enhancing the reaction kinetic model. Oxidation of hydrocarbons (e.g., p-xylene, m-xylene, cumene) using molecular oxygen is commonly termed autoxidation and is based on a radical chain reaction mechanism. This mechanism has been proposed for several catalysed and non-catalysed oxidation reactions and is reasonably well understood. The oxidation of both p-xylene and m-xylene is homogeneously catalysed, the catalyst often being a mixture of cobalt, manganese and bromine salts (MC-type catalyst). The primary role of the catalyst in these processes is to selectively decompose, at low activation energies, the intermediate peroxides formed during the reaction. Acetic acid is used as the solvent. As in most gas-liquid reaction systems, the performance of the oxidation reactors can be manipulated by changing the operating pressure, temperature and catalyst amount; in addition, the catalyst composition and the water concentration (Partenheimer, 1995) in the reactor are two further controls that can be manipulated to improve process performance. Interestingly, water is known to be a catalyst inhibitor at high concentrations, while it favours the reaction at low concentrations, and hence the overall process optimisation is a non-trivial task.
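
For reference, the backbone of such a radical chain (autoxidation) scheme is conventionally written as:

    RH + In*     ->  R* + InH                 (initiation)
    R* + O2      ->  RO2*                     (propagation)
    RO2* + RH    ->  ROOH + R*                (propagation)
    2 RO2*       ->  non-radical products     (termination)

where RH is the hydrocarbon substrate, the asterisk denotes a radical, and the Co/Mn/Br couple acts mainly to decompose the intermediate hydroperoxide ROOH and regenerate chain-carrying radicals, which is one reason catalyst composition and water concentration shift the observed rates.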

The reaction kinetic models available in the open literature (e.g. Cincotti et al., 1999 for TPA) are simple lumped models which generally ignore the catalyst and water concentration effects discussed above and are hence not predictive over wider operating conditions. Cheng et al. (2006) proposed a reaction kinetic model based on a radical chain mechanism that takes into account the role of the catalyst, but ignores the effect of water concentration. Furthermore, this kinetic model involves too many model parameters and is not predictive.

We have carried out extensive reviews of autoxidation processes and, based on these, have developed a detailed reaction kinetic model based on a fundamental radical chain reaction mechanism that captures all the chemical phenomena described earlier. The reaction kinetic model is also capable of accounting for colour-imparting species.

Very often, industrial manufacturers buy a formulated Co/Mn/Br catalyst mixture and hence the degrees of freedom associated with the catalyst composition are not available to the plant operator. The kinetic model based on the radical chain mechanism accounts for the individual active species in the catalyst mixture and the optimal catalyst composition can be identified for the given system. This enables the manufacturer to purchase the right catalyst for the system from the market or otherwise formulate a catalyst mixture, which maximises the performance for their system. Thus, the reaction kinetic model based on fundamental radical chain mechanism opens new avenues to optimise the process performance.

In this paper, we describe how the radical chain mechanism based kinetic model is employed to improve the performance of an industrial scale oxidation reactor. The model prediction will be compared against a simple kinetic model that does not account for the catalyst effects.

References

Qinbo Wang, Youwei Cheng, Lijun Wang, and Xi Li, "Semicontinuous Studies on the Reaction Mechanism and Kinetics for the Liquid-Phase Oxidation of p-Xylene to Terephthalic Acid", Ind. Eng. Chem. Res. 2007, 46, 8980-8992.

W. Partenheimer, "Methodology and scope of metal/bromide autoxidation of hydrocarbons", Catalysis Today 23 (1995) 69-158

Youwei Cheng, Xi Li, Lijun Wang, and Qinbo Wang, "Optimum Ratio of Co/Mn in the Liquid-Phase Catalytic Oxidation of p-Xylene to Terephthalic Acid", Ind. Eng. Chem. Res. 2006, 45, 4156-4162

Bezzo, F., Macchietto, S. and Pantelides, C.C. "A general methodology for hybrid multizonal/CFD models - Part I. Theoretical framework" Comput Chem Eng, 28, 501-511, 2004.

A. Cincotti, R. Orrù, G. Cao "Kinetics and related engineering aspects of catalytic liquid-phase oxidation of p-xylene to terephthalic acid" Catalysis Today 52 (1999) 331-347

 

FC Expo (Tokyo, Japan, 03-05 March 2010)

1. New high-fidelity predictive modelling accelerates FC development
How to solve challenges for PEMFC or SOFC including water balance, deactivation, optimisation of platinum load and power demand dynamics.
Zbigniew Urban, Process Systems Enterprise Limited

 

16th Roger Sargent Lecture

The 16th Roger Sargent Lecture, 4 December 2009

Process Modelling: A Progress Report

PSE MD Prof. Costas Pantelides's 16th Roger Sargent Lecture, delivered to a large audience of industrialists and academics, provided a wide-ranging review of the state of the art in process modelling and its industrial application. It covered important developments in handling modelling complexity and improving the efficiency of the modelling process itself, and identified a set of key priorities and directions for the future development of process modelling technology.

 

AIChE Annual Meeting


Multiscale Modelling of a Typical Industrial Oxidation Reactor for Terephthalic Acid Production Via a Hybrid Multizonal-CFD Approach

Maddalena Vernier, DIPIC - Department of Chemical Engineering Principles and Practice, University of Padova, Padova, Italy
In Seon Kim, Process Systems Enterprise Ltd, London, United Kingdom
Praveen Lawrence, Process Systems Enterprise Ltd, London, United Kingdom
Fabrizio Bezzo, DIPIC - Department of Chemical Engineering Principles and Practice, University of Padova, Padova, Italy

Detailed models of multiphase reactive systems are known to give a realistic representation of a "well-mixed" reactor, but face difficulty in describing systems with multiple scales, such as industrial-scale multiphase reactors. The governing physical and chemical phenomena act on different time and space scales, and modelling these systems involves tight interactions between several simultaneous phenomena such as reaction kinetics, mass transfer, heat transfer and mixing. As a consequence, the characterisation of the complex interactions between the fluid dynamics and the other phenomena should take into account the two-scale nature of these interactions.

In this work, we propose a multiscale approach, the Multizonal/CFD (Computational Fluid Dynamics) approach, that captures both the hydrodynamics and the complex physical/chemical phenomena occurring in a process (e.g., population balances). A fine spatial resolution is preserved in the CFD model to describe the hydrodynamics and the geometry of the system; on the other hand, a coarser grid is adopted by collecting sets of CFD cells to form "well-mixed", homogeneous compartments (or zones), within which a detailed set of modelling equations can be solved by a highly accurate discretisation scheme (e.g., Bezzo et al., 2004; Laakkonen et al., 2007). The Multizonal (MZ) model, which describes the whole process except the hydrodynamics, consists of a small number of interconnected compartments, each representing a spatial region of the process equipment. Each zone exchanges information (mass and/or energy flows) with the adjacent zones, and all zones contain the same set of equations. This multiscale approach therefore yields two different grids and two different subproblems, each solved by means of a specialised tool. The exchange of information between the two scales is achieved through aggregation and disaggregation procedures.
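
A minimal sketch of the zone-network idea is given below: a handful of well-mixed zones exchanging mass with flows that would, in the full framework, be aggregated from a CFD solution. This is an illustration of the concept only, not PSE's implementation, and the volumes and flow matrix are hypothetical (chosen here so that flows balance around each zone).

    # Multizonal compartment sketch: N well-mixed zones exchanging mass
    # with fixed volumetric flows (illustrative only; values hypothetical).
    import numpy as np
    from scipy.integrate import solve_ivp

    V = np.array([2.0, 1.5, 1.0])       # zone volumes, m^3
    # F[i, j] = volumetric flow from zone i to zone j, m^3/s
    # (in the full framework these come from aggregating the CFD field)
    F = np.array([[0.0, 0.4, 0.1],
                  [0.3, 0.0, 0.2],
                  [0.2, 0.1, 0.0]])

    def rhs(t, c):
        # Species balance per zone: accumulation = inflow - outflow
        inflow = F.T @ c                 # sum_i F[i, j] * c[i]
        outflow = F.sum(axis=1) * c      # c[j] * sum_k F[j, k]
        return (inflow - outflow) / V

    c0 = np.array([1.0, 0.0, 0.0])       # tracer starts in zone 1, mol/m^3
    sol = solve_ivp(rhs, (0.0, 60.0), c0)
    print(sol.y[:, -1])                  # approaches a uniform concentration

In the real framework each zone would additionally carry the reaction, mass transfer, population balance and energy equations described below, with the CFD solution supplying the inter-zone flows.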

The capabilities of this general-purpose MZ/CFD framework are demonstrated here by applying the technique to analyse the performance of a continuous, baffled, three-phase, agitated reactor for the production of terephthalic acid from p-xylene. Oxidation of p-xylene is a highly exothermic reaction carried out in the presence of acetic acid. The reaction takes place in the liquid phase and is typically mass transfer limited. As terephthalic acid is sparingly soluble in the reaction medium, it crystallises out and the product is drained from the reactor in the form of a slurry. In addition to the desired reaction, undesirable intermediates and combustion products are also produced.
A zone model capturing all the above-stated phenomena is implemented in the general-purpose modelling tool gPROMS® (Process Systems Enterprise Ltd.), describing the whole process through a set of equations covering the solid, liquid and gas phases. Mass transfer from the gas phase to the liquid phase is modelled rigorously by taking into account the effect of bubble sizes. Accordingly, a full bubble population balance is considered and solved. Crystal production and precipitation are derived directly from the reaction kinetics. Vapour-liquid equilibrium, a rigorous energy balance and heat transfer are also implemented. A number of control loops complete the system description.

The CFD model, implemented in the FLUENT® (by Ansys, Inc.) environment, describes the hydrodynamics of the vessel, solving only the momentum and total mass balances. An Eulerian-Eulerian approach is adopted to represent the gas and slurry phases. The slurry is described as a pseudo-homogeneous phase comprising both the liquid and the solid. The gas phase is represented in a simplified way, assuming an average diameter for the air bubbles. Turbulence is described by a standard k-ε model.
As this Multizonal modelling approach is computationally very efficient, the technique can be exploited to carry out process optimization studies such as identifying the optimal reactor internal configuration to maximise a process objective. The framework is applicable to other systems with strong interactions among several diverse phenomena, while conserving high modularity, flexibility and efficiency.

References:

Bezzo, F., Macchietto, S. and Pantelides, C.C., 2004. A general methodology for hybrid multizonal/CFD models. Part I. Theoretical framework. Comput Chem Eng, 28:501-511.

Laakkonen M., Moilanen P., Alopaeus V. and Aittamaa J., 2007. Modelling of local bubble size distributions in agitated vessels. Chem Eng Sci, 62:721-740.


AIChE Annual Meeting

Model-Based Design and Optimisation of Trichlorosilane (TCS) Reactors for Polysilicon Production

Zbigniew Urban, Process Technology, Process Systems Enterprise Ltd., London, United Kingdom
Stepan Spatenka, PSE Consulting, Process Systems Enterprise Ltd., London, United Kingdom

The trichlorosilane (TCS) process is a key step in the production of polycrystalline silicon, the main raw material for the manufacture of solar panels. The current process has been established for many years, but there is large scope for optimisation and scale-up, particularly in light of the significant current increase in polysilicon demand.
At the heart of the TCS process is a fluidised bed reactor where silicon (usually of metallurgical grade) is reacted with chlorine to produce TCS. Silicon particles are fed to the bed in an intermittent fashion and are consumed during reaction. This establishes a distribution of particle sizes inside the unit at semi-steady state operation.

We present a comprehensive model which couples a complete description of the fluidisation phenomena with the consumption of the solid particles by reaction with the surrounding gas. The model combines a population-balance approach for the reacting silicon particles with distributed models of the hydrodynamics of the dense and bubble phases co-existing in the fluidised bed. The reactions taking place both on the surface of each particle and in the gas phase, and the multicomponent heat and mass transfer between the gas phase and the particle surface, are modelled in detail. Mass, energy and population balances of the reacting system are rigorously implemented in the model.
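
As background to the population-balance treatment, the surface-reaction-controlled shrinkage of a single particle of diameter L_p can be written as (a generic sketch, with r_s the molar consumption rate of silicon per unit particle surface area):

    \frac{\mathrm{d}L_p}{\mathrm{d}t} = -\,\frac{2\, M_{\mathrm{Si}}}{\rho_{\mathrm{Si}}}\; r_s(T, c)

which follows from equating the rate of particle volume loss to the reacted volume per unit time. The particle size distribution in the bed therefore evolves under a negative "growth" term of this form, with the intermittent feed of fresh particles balancing consumption at the semi-steady state.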

The model is of sufficient fidelity to predict key information for process and detailed engineering design, such as the effect of feed particle size on selectivity, optimal bed operating temperatures and pressures, tube bundle heat transfer area, inventory, reactor weight for different particle sizes, and bed height and void fraction for different operations. The dynamic capability of the model allows simulation of start-up operation and emergency events.

It is demonstrated that a model-based optimisation approach can lead to significant benefits, including a substantial increase in selectivity to TCS as well as much better control of the temperature over the bed height.


AIChE Annual Meeting

High-Fidelity Modelling and Detailed Design of PEM Fuel Cell Stacks

Zbigniew Urban, Process Technology, Process Systems Enterprise Ltd., London, United Kingdom
Ying-Sheng Cheng, PSE Process Technology, Process Systems Enterprise Ltd., London, United Kingdom
Constantinos C. Pantelides, Process Systems Enterprise Ltd., London, United Kingdom

The optimal design of PEM fuel cell stacks is a particularly challenging task. On one hand, it is important to model in detail the numerous complex coupled phenomena that take place within each layer, including both the fluid mechanics in the fuel and air channels, and the electrochemical reactions and the multicomponent mass and heat diffusion within the electrolyte membrane. On the other hand, it is necessary to represent with reasonable accuracy stacks which involve tens or even hundreds of such layers. Moreover, determining optimal designs requires the examination of a large number of alternatives. Overall, the combination of these three factors leads to a formidable computational problem.

We present a hybrid modelling technique that combines Computational Fluid Dynamics (CFD) models of flow channel hydrodynamics with first-principles physics and chemistry models that have been validated against laboratory data. This fully-coupled approach has numerous advantages over either "pure" CFD or "pure" first-principles models; examples include the ability to predict very accurately the temperature and current density profiles across the anode-electrolyte-cathode assembly.

In principle, the above hybrid approach is applicable both to individual cell assemblies and to entire stacks. However, the computational load becomes extremely high for stacks involving large numbers of layers. This is a problem that also occurs with "pure" CFD models of the stack. The problem is, to some extent, alleviated by the availability of highly parallelised CFD codes. However, the simulation of stacks involving more than a few tens of layers remains problematic, and this is even more so when these simulations need to be repeated many times for the purposes of stack optimisation.
In view of the above, this paper presents novel and powerful hybrid modelling technology that enables multiple parallel processing of CFD and first-principles models in the context of large-scale fuel cells stacks involving many individual cell assemblies. The approach implements sophisticated model aggregation techniques that allow rapid computation of multi-layered stacks in short timescales without significant compromise on predictive accuracy.

The approach has been employed for the final design of PEM fuel cell stacks in order to optimise the detailed design of channels and manifolds while ensuring uniform temperature and pressure distribution over the stack.


AIChE Annual Meeting

Deployment of a gPROMS-Based Three-Phase Reactor Unit Operation within PRO/II Flowsheets through CAPE-OPEN Interfaces

Pierre Duchet-Suchaux, TOTAL, S.A., Paris, France
Sabine Savin, TOTAL, S.A., Paris, France
Alejandro Cano, Process Systems Enterprise, Inc., Cedar Knolls, NJ
Thomas H. Williams, Process Systems Enterprise Ltd, London, United Kingdom
Rodrigo Blanco, Process Systems Enterprise Ltd, London, United Kingdom
David H. Jerome, Invensys Process Systems, Lake Forest, CA
Krishna Penukonda, Invensys Process Systems, Lake Forest, CA

This paper describes the work done to deploy a three-phase slurry bubble column reactor model for Fischer-Tropsch synthesis within PRO/II flowsheets through the development of a CAPE-OPEN compliant unit operation using the gPROMS advanced process modeling platform.

TOTAL, S.A. ("TOTAL") has developed a steady-state model of a slurry reactor used for gas-to-liquids synthesis in collaboration with the group of Professor Faiçal Larachi of Laval University in Canada (1). The model was originally developed in Aspen Technology's ACM® equation-oriented environment using internally coded thermodynamic calculations based on a variety of models and correlations.
TOTAL wished to deploy the model within a flowsheet of the full gas-to-liquids process implemented in Invensys Process Systems' PRO/II® software, and approached Process Systems Enterprise ("PSE") to study the feasibility of converting the model to a CAPE-OPEN compliant unit operation developed in PSE's gPROMS® advanced process modeling platform.

PSE developed a CAPE-OPEN compliant unit operation through the following tasks:

  1. Translation of the original ACM model into gPROMS
  2. Full coupling of hydrodynamics with mass and energy balances, allowing the full reactor model to be solved simultaneously
  3. Reformulation of several equations to (i) ensure consistency of species holdups with gas-phase and liquid-phase concentrations calculated from the equations of state, and (ii) improve model robustness
  4. Implementation of a robust initialization procedure that allows the model to be initialized without a set of initial guesses
  5. Addition of calls to the Process Modelling Environment (PME) physical property package for the calculation of thermodynamic and transport properties
  6. Export of the gPROMS model as a CAPE-OPEN compliant unit operation
  7. Testing of interoperability with PRO/II flowsheets, including flowsheets with recycle loops that include the reactor unit operation

In the course of interoperability testing, PSE identified issues that needed to be addressed in the gPROMS and PRO/II software. The issues have now been addressed, and the interoperability of the gPROMS-based unit operation within the PRO/II PME has been demonstrated.

On the gPROMS side, the following enhancements to the CAPE-OPEN interface were implemented:

  1. Added the ability to explicitly control whether the gPROMS components use mass or mole basis for all calls to the PME physical property package
  2. Added the ability to map gPROMS selectors to CAPE-OPEN option parameters
  3. Made usability enhancements to the gPROMS "Export to CAPE-OPEN" procedure
  4. Substantially enhanced the logging capabilities of the gPROMS components, facilitating the tracing of the root causes of interoperability issues

On the PRO/II side, the following enhancements to the CAPE-OPEN interface were implemented:

  1. Reviewed and revamped the CAPE-OPEN integration architecture to provide better lifetime management of CAPE-OPEN objects and to eliminate memory leaks and errors
  2. Improved interoperability by allowing seamless use of mass/mole basis and by fixing thermodynamic property calculation and access issues
  3. Added logging capability to facilitate diagnosis and troubleshooting

In the resulting implementation, the user has the option to execute a robust initialization procedure the first time the reactor model is used. To increase computational efficiency, this procedure is not executed in subsequent iterations through the reactor unit operation. Detailed model results can be consulted through gRMS, the gPROMS graphical reporting interface, which is launched automatically when the reactor unit operation is first solved.
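
A minimal sketch of this first-call-only initialization pattern, again with hypothetical names and placeholder physics rather than the actual gPROMS mechanism:

```python
class ReactorWithRobustInit:
    """Illustrative pattern: run an expensive, guess-free initialization
    once, then warm-start later flowsheet iterations from the previously
    converged solution (hypothetical, simplified)."""
    def __init__(self):
        self._last_solution = None

    def _robust_initialize(self, inlet):
        # Stand-in for a homotopy/continuation-style procedure that needs
        # no user-supplied guesses; here just a crude physical estimate.
        return {"T": inlet["T"] + 10.0}

    def _solve_from_guess(self, inlet, guess):
        # Stand-in for the Newton solve of the full reactor model.
        return {"T": 0.5 * (guess["T"] + inlet["T"] + 25.0)}

    def solve(self, inlet):
        if self._last_solution is None:
            guess = self._robust_initialize(inlet)  # first call only
        else:
            guess = self._last_solution             # warm start thereafter
        self._last_solution = self._solve_from_guess(inlet, guess)
        return self._last_solution

unit = ReactorWithRobustInit()
for _ in range(3):  # repeated passes through a recycle loop
    out = unit.solve({"T": 480.0})
print(out)
```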

A strong dialogue between the end user (TOTAL), which defined its needs, and the software companies (Invensys and PSE), which identified and corrected the problems, was key to the success of this project and is a requirement for any CAPE-OPEN integration.
(1) Iliuta, I., Larachi, F., Anfray, J., Dromard, N., and Schweich, D., "Multicomponent multicompartment model for Fischer-Tropsch SCBR," AIChE Journal, Vol. 53, No. 8, 2007, pp. 2062-2083.


AIChE Annual Meeting

A Framework for Dynamic Modeling and Optimization of Multi-Stream Plate-Fin Heat Exchangers

Michael Baldea, Praxair, Inc., Tonawanda, NY
Richard Jibb, Praxair, Inc., Tonawanda, NY
Alejandro Cano, Process Systems Enterprise, Inc., Cedar Knolls, NJ
Alfredo Ramos, Process Systems Enterprise, Inc., London, United Kingdom

Modern cryogenic air separation relies on tightly integrated process designs to achieve a high degree of energy efficiency. The use of a narrowly pinched feed-effluent heat exchanger to cool and liquefy the air feed stream(s) by recovering refrigeration from the cold product streams represents a salient design feature, yielding significant efficiency gains. While such heat exchangers lower operating costs, they also contribute to a capital cost increase; their design therefore involves tradeoffs aimed at achieving economic optimality for the plant.

The most frequently encountered feed-effluent heat exchangers in air separation plants are of the multi-stream plate-fin type. The complex structure of Plate-Fin Heat Exchangers (PFHEs), comprising a stack of finned fluid-flow channels (layers) separated by parting sheets, offers a large number of design degrees of freedom, making design optimization subject to operating and manufacturing constraints a daunting, if not impossible, task to carry out manually. Several studies [1-6] have focused on solving the PFHE design optimization problem. However, their scope has been largely limited to developing and testing optimization algorithms tailored to simplified prototypes or specific applications. To our knowledge, the problem has not yet been addressed and solved in a generic manner.

In this paper, we present a general dynamic modeling and optimization framework for PFHEs recently developed by Praxair, Inc. and Process Systems Enterprise, Ltd. We introduce a novel model representation that can capture heat exchanger structures of arbitrary complexity and accommodate multi-phase and supercritical process streams, and discuss its implementation as a model library in gPROMS, PSE's system modeling environment. In our work, we exploit the dynamic component of the models in formulating a mixed-integer PFHE design optimization problem aimed at minimizing the lifetime cost of the exchanger, subject to operating and manufacturing constraints. Furthermore, we discuss the extension of this formulation to ensure optimality over a wide range of plant operating parameters. Finally, we present and analyze a case study, illustrating the robust dynamic simulation capabilities of the model library and demonstrating the significant economic benefits obtained using the proposed optimization framework.
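
The abstract does not state the formulation explicitly; in generic terms, a mixed-integer design problem of the kind described might be sketched as follows (our illustrative notation, not the authors'):

```latex
\min_{y \in \{0,1\}^m,\; d \in \mathbb{R}^n}\;
  C_{\mathrm{cap}}(y, d) \;+\; \sum_{s \in S} w_s\, C_{\mathrm{op}}(y, d, s)
\quad \text{s.t.} \quad
  \dot{x}_s(t) = f\bigl(x_s(t), u_s(t), y, d\bigr), \qquad
  g\bigl(x_s(t), y, d\bigr) \le 0, \qquad t \in [0, t_f],\; s \in S,
```

where y collects the discrete choices (number and stacking pattern of layers, fin types), d the continuous geometry, and S a set of weighted operating scenarios; the dynamic constraints are what allows transient operation to be included among the scenarios, as discussed above.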

References
1. Reneaume, J.-M., Pingaud, H., and Niclout, N., "Optimization of Plate Fin Heat Exchangers," Trans IChemE, Vol. 78, Part A, September 2000, pp. 849-859.
2. Reneaume, J.-M., and Niclout, N., "Optimal Design of Plate-Fin Heat Exchangers Using Both Heuristic-Based Procedures and Mathematical Programming Techniques," Third International Conference on Compact Heat Exchangers and Enhancement Technology for the Process Industries, Davos, Switzerland, July 2001, pp. 143-149.
3. Sunder, S., and Fox, V.G., "Multivariable Optimization of Plate Fin Heat Exchangers," AIChE Symposium Series, Vol. 89, No. 295, Atlanta, GA, 1993, pp. 244-252.
4. Reneaume, J.-M., and Niclout, N., "MINLP Optimization of Plate Fin Heat Exchangers," Chem. Biochem. Eng. Q., Vol. 17, No. 1, 2003, pp. 65-76.
5. Picón-Núñez, M., Polley, G.T., and Medina-Flores, M., "Thermal Design of Multi-Stream Heat Exchangers," Applied Thermal Engineering, Vol. 22, 2002, pp. 1643-1660.
6. Wang, L., and Sundén, B., "Design Methodology for Multi-stream Plate-Fin Heat Exchangers in Heat Exchanger Networks," Heat Transfer Engineering, Vol. 22, 2001, pp. 3-11.


AIChE Annual Meeting

Dynamic Simulation and Optimization for Flare System Design and Validation

Constantinos C. Pantelides, Process Systems Enterprise Ltd., London, United Kingdom
Rodrigo Blanco-Gutierrez, Process Systems Enterprise Ltd., London, United Kingdom
Mark Matzopoulos, Process Systems Enterprise Ltd., London, United Kingdom

The correct design of flare systems is a key aspect of the safety of process plants. Current industrial practice focuses primarily on steady-state calculations based on the maximum relief flowrates that could originate from the various potential sources connected to the system, on the assumption that these maximum flowrates represent the most challenging situation the system will ever have to handle.

In fact, it is relatively easy to envisage plausible combinations of events that would lead to worse situations, sometimes with serious safety-related implications. These more complex scenarios invariably arise from the dynamics of both the plant and the flare system, and are often associated with effects that relate not just to the pressures (e.g. flow reversal in parts of the flare system), but also to the temperatures of materials and equipment in the system (e.g. potential failures due to very low temperatures), and sometimes to the interactions between pressure, temperature and chemical composition (e.g. potential formation of solid phases in the flare system pipework). As a result, the analysis of such situations requires mathematical models that are not only dynamic, but also incorporate a degree of detail that is significantly higher than what is currently employed in such applications.

Even with the help of detailed dynamic models, the design of new flare systems or the validation of existing ones is hampered by the very large number of possible combinations of events that can potentially arise. Consequently, the identification of "worst-case scenarios" via repeated dynamic simulations may be neither efficient nor reliable. A more systematic alternative is offered by the use of dynamic optimization techniques with suitably formulated objectives. If necessary, any scenarios that are identified in this manner can then be used in the context of a multiscenario optimization approach to suggest appropriate modifications to the existing flare system design.
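
Although the abstract leaves the mathematics implicit, a worst-case search of this kind can be pictured as a dynamic optimization over the scenario space (a hedged sketch in our own notation):

```latex
\phi^{*} \;=\; \max_{\theta \in \Theta}\; \max_{t \in [0, t_f]}\; \max_{j}\;
  \frac{g_j\bigl(x(t; \theta)\bigr) - \bar{g}_j}{\bar{g}_j},
```

where θ parameterizes the combination and timing of relief events, x(t; θ) is the state of the dynamic plant-plus-flare-network model, and the g_j are the safety-relevant quantities (backpressures, metal temperatures, solid-formation margins) with limits ḡ_j; any θ yielding φ* > 0 is a scenario that a steady-state design basis would have missed.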
In this paper, we describe an advanced framework for flare system design and validation based on the above concepts and methodologies, incorporating examples that illustrate the key points made above.


AIChE Annual Meeting

Design Space, Models and Model Uncertainty

Constantinos C. Pantelides, Process Systems Enterprise Ltd., London, United Kingdom
Nilay Shah, Centre for Process Systems Engineering, Imperial College London, London, United Kingdom
Claire S. Adjiman, Centre for Process Systems Engineering, Imperial College London, London, United Kingdom

The concept of Design Space plays a central role in the current thinking on the regulation of pharmaceutical production processes. Most such processes involve large numbers of external disturbances and/or potential control manipulations. This results in multidimensional Design Spaces, whose effective determination and exploration require the use of process models. A range of different categories of models is currently being used, including empirical correlations, statistical non-parametric models, data-based models and first-principles models.

This paper focuses on first-principles models, of the kind that are finding increasing use in pharmaceutical applications. We consider the relation between the Design Space and the concept of "process flexibility" which has been studied extensively in the process systems engineering literature since the early 1980s. We are particularly interested in the extent to which existing flexibility analysis techniques can be applied to the determination of the Design Space, both in principle and in practice, given the transient and multistage nature of most pharmaceutical processes.

Design Spaces are supposed to provide assurance that the process will deliver products of the right quality provided operation is maintained within a given envelope. However, like the result of any other model-based computation, the reliability of a Design Space determined on the basis of a model depends on the accuracy of the model itself. In other words, the Design Space is not a sharply delineated region but a probabilistic one, with each point in the multivariable space of operation of the process characterized only by a probability of belonging to the Design Space. We consider techniques for computing probabilistic Design Spaces based on quantitative information on model uncertainty derived during the formal validation of such models against experimental data sets.
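
As a hedged illustration of what such a probabilistic Design Space computation can look like in practice, the following minimal Monte Carlo sketch estimates, on a grid of operating points, the probability of meeting a quality specification under an assumed parameter posterior; the model, numbers and names are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical process model: a quality attribute as a function of two
# operating variables (u1, u2) and an uncertain model parameter k.
def quality(u1, u2, k):
    return k * u1 * np.exp(-u2 / 50.0)

# Parameter uncertainty as quantified during model validation
# (assumed normal posterior; illustrative numbers).
k_samples = rng.normal(loc=1.0, scale=0.08, size=2000)

u1_grid = np.linspace(0.5, 2.0, 30)
u2_grid = np.linspace(10.0, 100.0, 30)
spec_lo, spec_hi = 0.8, 1.2

# Probability that each operating point delivers in-spec product,
# i.e. P(point belongs to the Design Space).
prob = np.empty((u1_grid.size, u2_grid.size))
for i, u1 in enumerate(u1_grid):
    for j, u2 in enumerate(u2_grid):
        q = quality(u1, u2, k_samples)
        prob[i, j] = np.mean((q >= spec_lo) & (q <= spec_hi))

# A "95% Design Space" is then the region where prob >= 0.95.
print(np.count_nonzero(prob >= 0.95), "of", prob.size, "grid points qualify")
```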


 

11th Grove Fuel Cell Symposium

Accelerating fuel cell development and managing design risk using a systems engineering approach

Zbigniew Urban, Process Systems Enterprise Ltd, United Kingdom

Fuel cell component and system development can be accelerated significantly using systems engineering approaches based on high-fidelity first-principles models that are rigorously validated against experimental data. Through the tight integration of experimental R&D and model-based engineering design, the approach covers the entire fuel cell development lifecycle, from optimising the basic membrane electrode assembly (e.g. with respect to catalyst utilisation) to the detailed design of the stack and the quantification and management of deactivation. The methodology can quantify, directly from models, key micro-scale effects that cannot be identified from experimentation alone, and it is being used to explore a wider design space more rapidly while reducing the need for physical testing. This presentation discusses the key elements of the approach, the prerequisites for its successful implementation, and the advantages that can be derived from it.

 

PSE '09 (Salvador, Brazil, 16-20 September 2009)

A performance study of dynamic RTO applied to a large-scale industrial continuous pulping process
P A Rolandi, Process Systems Enterprise; J A Romagnoli, University of Sydney


A performance study of dynamic RTO applied to a large-scale industrial continuous pulping process

P A Rolandi, Process Systems Enterprise Ltd, United Kingdom, J A Romagnoli, University of Sydney

In this work we present a novel framework and software platform for control and optimisation of industrial process systems that supports the use of different model types, solution methods, and control-and-optimisation strategies transparently and effectively. By decoupling the model formulation and implementation, the control-problem formulation, and the solution engine, this framework enables a transparent study of controller performance characteristics. The control-and-optimisation engine is based on the gPROMS Server (gSERVER).

In this work we present the architecture of the novel dynamic real-time optimisation platform and we discuss the formulation of industrial optimisation-and-control problems and its subsequent interpretation into a mathematical formalism. The solution of the control-and-optimisation problem incorporates a constraint-violation identification-and-relaxation mechanism for recovery of infeasibilities in an online setting.
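
The abstract does not detail the relaxation mechanism; one common way to realize such a scheme, shown here purely as an illustrative sketch, is to soften the violated path constraints with penalized slack variables:

```latex
\min_{u(\cdot),\; \epsilon \ge 0}\;
  J\bigl(x(\cdot), u(\cdot)\bigr) + \sum_{j} \rho_j\, \epsilon_j
\quad \text{s.t.} \quad
  g_j\bigl(x(t), u(t)\bigr) \le \epsilon_j \qquad \forall\, t,\; j,
```

so that when the nominal problem (all ε_j = 0) is infeasible at a given execution, the optimiser identifies which constraints to relax, and by how much, instead of failing online.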

A study of controller/optimiser performance using linear and nonlinear models for the optimal operation of an industrial continuous pulping system (virtual plant) is presented, and a number of alternative problem formulations and solution methods are explored. The study shows that both model-based controllers perform well and are able to drive the plant through production transitions satisfactorily: the nonlinear model handles constraints better, while the linear model results in much faster computation times.

 

ACHEMA 2009 (Frankfurt am Main) 11-15 May 2009

Special Session: Fuel Cell Technologies

Efficient multiscale modelling and detailed design of fuel cell stacks
Y-S. Cheng, C.C. Pantelides*, Z. Urban, J. Zeaiter, Process Systems Enterprise


Efficient multiscale modelling and detailed design of fuel cell stacks

The optimal design of fuel cell stacks is a particularly challenging task. On the one hand, it is important to model in detail the numerous complex coupled phenomena that take place within each layer, including the fluid mechanics in the fuel and air channels, the electrochemical reactions, and the multicomponent mass and heat diffusion within the electrolyte membrane. On the other hand, it is necessary to represent with reasonable accuracy stacks that involve tens or even hundreds of such layers. Moreover, determining optimal designs requires the examination of a large number of alternatives. The combination of these three factors leads to a formidable computational problem.

We present a hybrid modelling technique that combines Computational Fluid Dynamics (CFD) models of flow-channel hydrodynamics with first-principles physical and chemical models that have been validated against laboratory data. This fully coupled approach has numerous advantages over either "pure" CFD or "pure" first-principles models; examples include the ability to predict very accurately the temperature and current-density profiles across the anode-electrolyte-cathode assembly.

In principle, the above hybrid approach is applicable both to individual cell assemblies and to entire stacks. However, the computational load becomes extremely high for stacks involving large numbers of layers. This is a problem that also occurs with "pure" CFD models of the stack. The problem is, to some extent, alleviated by the availability of highly parallelised CFD codes. However, the simulation of stacks involving more than a few tens of layers remains problematic, and this is even more so when these simulations need to be repeated many times for the purposes of stack optimisation.

In view of the above, this paper presents a novel and powerful hybrid modelling technology that enables parallel processing of multiple CFD and first-principles models in the context of large-scale fuel cell stacks involving many individual cell assemblies. The approach implements model aggregation techniques that allow rapid computation of multi-layered stacks without significantly compromising predictive accuracy.
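
The aggregation scheme itself is not disclosed; one way to picture this kind of reduction, offered only as an illustrative sketch, is to solve a small number of representative layers in full detail and interpolate the inter-layer fields across the rest of the stack:

```python
import numpy as np

def detailed_layer_model(T_boundary):
    # Stand-in for a full CFD/electrochemistry solve of one cell layer;
    # returns, say, the layer heat release as a function of its boundary
    # temperature (placeholder physics, illustrative only).
    return 50.0 + 0.8 * (353.0 - T_boundary)

n_layers = 100
rep_idx = np.array([0, 24, 49, 74, 99])        # layers solved in detail
T_boundary = np.linspace(340.0, 360.0, n_layers)

q_rep = np.array([detailed_layer_model(T_boundary[i]) for i in rep_idx])

# Interpolate the detailed results over the remaining layers.
q_all = np.interp(np.arange(n_layers), rep_idx, q_rep)

print(f"detailed solves: {rep_idx.size} instead of {n_layers}")
```

Whatever the actual technique, the point is the same: the cost of a stack simulation grows with the number of detailed layer solves rather than with the number of physical layers.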

The approach has been employed for the final design of mobile fuel cell stacks in order to optimise the detailed design of channels and manifolds while ensuring uniform temperature and pressure distribution over the stack.

 

Special Session: Chemistry and Process Engineering for Power Supply

Model-based design and optimisation of trichlorosilane (TCS) reactors for polysilicon production
Zbigniew Urban and Stepan Spatenka, Process Systems Enterprise


Model-based design and optimisation of trichlorosilane (TCS) reactors for polysilicon production

The trichlorosilane (TCS) process is a key step in the production of polycrystalline silicon, the main raw material for the manufacture of solar panels. The process has been established for many years, but there is large scope for optimisation and scale-up, particularly in light of the current significant growth in polysilicon demand.

At the heart of the TCS process is a fluidised bed reactor in which silicon (usually of metallurgical grade) is reacted with hydrogen chloride to produce TCS. Silicon particles are fed to the bed in an intermittent (pulsed) fashion and are consumed by the reaction, establishing a distribution of particle sizes inside the unit at semi-steady-state operation.

We present a comprehensive model that couples a complete description of the fluidisation phenomena with the shrinkage of the solid particles as they are consumed by the surrounding gas. The model combines a population-balance approach for the reacting silicon particles with distributed models of the hydrodynamics of the dense and bubble phases co-existing in the fluidised bed. The reactions taking place both on the surface of each particle and in the gas phase, and the multicomponent heat and mass transfer between the gas phase and the particle surface, are modelled in detail. Mass, energy and population balances of the reacting system are rigorously implemented in the model.
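
For orientation, the population balance that such a model couples to the bed hydrodynamics can be written in its generic textbook form (our symbols, not necessarily the authors' exact equations):

```latex
\frac{\partial n(L,t)}{\partial t}
  + \frac{\partial}{\partial L}\bigl[G(L,t)\, n(L,t)\bigr]
  = \dot{n}_{\mathrm{in}}(L,t) - \dot{n}_{\mathrm{out}}(L,t),
```

where n(L, t) is the number density of particles of size L, G < 0 is the shrinkage rate set by the surface reaction and gas-particle mass transfer, and the right-hand side accounts for the pulsed feed and for entrainment or withdrawal.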

The model is of sufficient fidelity to predict key information for process and detailed engineering design, such as the effect of feed particle size on selectivity, optimal bed operating temperatures and pressures, tube-bundle heat transfer area, inventory, reactor weight for different particle sizes, and bed height and void fraction for different operations. The dynamic capability of the model allows simulation of start-up operation and emergency events.

It is demonstrated that a model-based optimisation approach can lead to significant benefits, including a substantial increase in selectivity to TCS as well as much better control of the temperature over the bed height.
Figure: Particle size distribution, showing the evolution of the particle size of the reactant pellets over the residence time.
