Optimal experiment design
Minimize time and cost, maximize information content
The initial stages of product and process design rely heavily on experimentation programmes. Such experiments can be costly and time-consuming, and often lie on the critical path to market.
Model-based experimentation techniques use detailed mathematical models of the experimental process as a means for analysing experimental data (for example, fitting of kinetic parameters), for validating models, and for optimizing the design of the experiments themselves.
Benefits of experiment design
Designing optimal experiments can have significant benefits:
- maximize the information content of experiments
- reduce experimentation time
- reduce experimentation cost
- integrate experimentation and engineering design
- provide accurate model parameters that increase the accuracy of process design information and reduce technology risk
All of these require validated models capable of describing process behaviour accurately over a wide range of conditions.
gPROMS' model-based experiment design capabilities lead to dramatic increases in the efficiency and effectiveness of laboratory experimentation.
This translates into fewer, well-designed experiments yielding more accurate estimates for model parameters, which in turn means lower design uncertainty and more accurate scale-up: overall, faster product and process development cycles.
The physical understanding gained by these experiments typically forms the basis for downstream model-based activities such as optimization of process design and operation.
In addition to the gPROMS® software, PSE's ModelCare® consulting service can provide expert assistance in applying model-based experiment design to challenging applications.
A key activity involved in building reliable and validated models is obtaining accurate values for the parameters characterising fundamental process phenomena, such as those relating to thermodynamics, chemical kinetics or mass and heat transfer.
gPROMS has powerful and well-proven parameter estimation capabilities, allowing the simultaneous estimation of multiple parameters from multiple steady-state and dynamic experiments.
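To illustrate the kind of calculation involved (a minimal sketch, not gPROMS itself), the following Python example estimates two parameters shared across two simulated experiments by nonlinear least squares. The first-order model, the parameter values and the noise level are all invented for the example.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy first-order model: c(t) = c_inf + (c0 - c_inf) * exp(-k * t).
# Two simulated experiments with different known initial concentrations c0
# share the unknown parameters theta = (k, c_inf).
def model(theta, t, c0):
    k, c_inf = theta
    return c_inf + (c0 - c_inf) * np.exp(-k * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 11)
true_theta = (0.5, 0.2)  # invented "true" values used to generate data
experiments = [
    (2.0, model(true_theta, t, 2.0) + 0.01 * rng.standard_normal(t.size)),
    (1.0, model(true_theta, t, 1.0) + 0.01 * rng.standard_normal(t.size)),
]

# Stack residuals from all experiments so both parameters are
# estimated simultaneously from the combined data.
def residuals(theta):
    return np.concatenate([data - model(theta, t, c0) for c0, data in experiments])

fit = least_squares(residuals, x0=[0.1, 0.0])
k_hat, c_inf_hat = fit.x
```

Because the residuals from both experiments enter one least-squares problem, each experiment constrains both parameters at once, which is the essence of simultaneous multi-experiment estimation.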
Designing experiments for maximum parameter precision
In order to obtain the data needed for parameter estimation, it may be necessary to perform many expensive and time-consuming experiments. The quality of the information obtained, and hence the parameter accuracy, depends strongly on the experimental settings.
Experiment design for parameter precision aims to determine optimal experimental settings and measurement times in order to maximize the information content of the measured data, while observing all pertinent safety or operability constraints.
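One common formalisation of "maximizing information content" (shown here as a sketch, not as gPROMS' internal algorithm) is D-optimal design: choose measurement times that maximize the determinant of the Fisher information matrix built from model sensitivities at the current parameter estimates. The model and parameter values below are invented for illustration.

```python
import itertools
import numpy as np

# Toy model c(t) = c0 * exp(-k * t) with assumed current parameter estimates.
k, c0 = 0.5, 1.0

def sensitivities(times):
    # Jacobian of the model response w.r.t. theta = (c0, k) at each time.
    t = np.asarray(times, dtype=float)
    e = np.exp(-k * t)
    return np.column_stack([e, -c0 * t * e])

def d_criterion(times):
    # Determinant of the Fisher information matrix (unit measurement variance).
    S = sensitivities(times)
    return np.linalg.det(S.T @ S)

# Brute-force search over pairs of candidate times for two measurements.
candidates = np.linspace(0.1, 10.0, 100)
best = max(itertools.combinations(candidates, 2), key=d_criterion)

# Equidistant measurement times, for comparison.
naive = (10.0 / 3.0, 20.0 / 3.0)
```

For this toy model the search places one sample as early as allowed, where the response is most sensitive to c0, and the second roughly 1/k later, where sensitivity to k peaks; the equidistant schedule scores far lower on the same criterion.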
The process can be iterative: having performed an experiment, the measured data can be used to generate better estimates of the unknown parameters, which in turn allows better experiments to be designed.
Example: Fed-batch fermentation experiments
The purpose of the simple fed-batch fermentation process shown here is to estimate the unknown parameters θᵢ, i = 1, ..., 4, as accurately as possible with the minimum number of experiments.
A simple design (constant feed and equidistant measurements) will give very weak estimates of the parameters.
1. Build a gPROMS model of the experiment set-up. Typically this includes material and energy balances and a representation of the kinetic equations. Provide preliminary guesses for the parameter values.
2. Use gPROMS' experiment design facilities to generate the optimal experiment procedure based on the current parameter values.
3. Execute the experiment following the optimized procedure from Step 2.
4. Fit the model parameters to the experimental data using gPROMS' parameter estimation capabilities.
5. Analyse the accuracy of the parameters (right) using gPROMS' extensive statistical analysis measures; if greater accuracy is required, return to Step 2 and repeat until the required parameter accuracy is obtained.
After just one optimally designed experiment, the 95% confidence intervals for the estimated parameters are reduced from an error of more than 140% to 62%.
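Confidence intervals of this kind are typically derived from the parameter covariance matrix at the optimum. A minimal sketch of that calculation, assuming a Gaussian approximation around a least-squares fit (the function name and the data in the usage below are invented for illustration):

```python
import numpy as np
from scipy import stats

def confidence_half_widths(jacobian, residuals, alpha=0.05):
    """Approximate (1 - alpha) confidence half-widths for least-squares
    parameter estimates, from the Jacobian of the residuals at the optimum."""
    n, p = jacobian.shape
    s2 = residuals @ residuals / (n - p)             # residual variance
    cov = s2 * np.linalg.inv(jacobian.T @ jacobian)  # parameter covariance
    t_val = stats.t.ppf(1.0 - alpha / 2.0, n - p)    # Student-t quantile
    return t_val * np.sqrt(np.diag(cov))
```

A half-width divided by the parameter estimate gives percentage errors of the kind quoted above; experiment design shrinks the half-widths by making the information matrix Jᵀ J larger, without requiring more experiments.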
The optimal experiment design keeps the system active over the whole time horizon (left), with measurements taken at the most informative points.