
Generate Data and Fit using a Two Compartment Model

In this example we demonstrate a Tut Script using the same two compartment model with absorption and bolus dosing, as used in Fitting a Two Compartment PopPK Model and Generate a Two Compartment PopPK Data Set, see Fig. 43:-


Fig. 43 Two compartment model with depot dosing for Tut Script.

This PoPy model is also used when Creating an example Tut Script.
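For readers who want to experiment outside PoPy, the dynamics of Fig. 43 can be sketched in plain Python. This is only an illustrative integration of the same ODEs using the true f[X] values from the GEN_EFFECTS section below; PoPy uses its own solver machinery:

```python
# Sketch of the two compartment model with first order absorption,
# integrated with a simple RK4 stepper. Illustrative only - not PoPy's
# actual ODE solver.

def derivs(a, ka, cl, v1, q, v2):
    depot, central, periph = a
    return (
        -ka * depot,                                # absorption out of depot
        ka * depot - (cl / v1) * central
        - (q / v1) * central + (q / v2) * periph,   # central compartment
        (q / v1) * central - (q / v2) * periph,     # peripheral compartment
    )

def simulate(amt=100.0, ka=0.2, cl=2.0, v1=50.0, q=1.0, v2=80.0,
             t_end=50.0, dt=0.01):
    """Return (times, central concentrations) after a bolus into the depot."""
    a = [amt, 0.0, 0.0]
    times, conc = [0.0], [0.0]
    steps = int(round(t_end / dt))
    for i in range(steps):
        # classic fixed-step RK4
        k1 = derivs(a, ka, cl, v1, q, v2)
        k2 = derivs([x + 0.5 * dt * k for x, k in zip(a, k1)], ka, cl, v1, q, v2)
        k3 = derivs([x + 0.5 * dt * k for x, k in zip(a, k2)], ka, cl, v1, q, v2)
        k4 = derivs([x + dt * k for x, k in zip(a, k3)], ka, cl, v1, q, v2)
        a = [x + dt / 6.0 * (p + 2 * r + 2 * s + t)
             for x, p, r, s, t in zip(a, k1, k2, k3, k4)]
        times.append((i + 1) * dt)
        conc.append(a[1] / v1)  # concentration = amount / V1
    return times, conc
```

Plotting the returned concentrations against time reproduces the characteristic absorption peak followed by a bi-exponential decline, as seen in the PK curves later on this page.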

Note

See First order absorption model with peripheral compartment for the reference results obtained by the PoPy developers for this example, including the input script and output data file.

A Tut Script can be used as a theoretical tool to quickly investigate identifiability of PK/PD models, because the true f[X] parameters and the structure of the data are known.

The PoPy Manual makes extensive use of tut_scripts to create examples to illustrate different Principles of Pharmacokinetics.

Running the Tutorial Script

This tutorial example requires a single input file:-

c:\PoPy\examples\builtin_tut_example.pyml

Open a PoPy Command Prompt to set up the PoPy environment in this folder:-

c:\PoPy\examples\

With the PoPy environment enabled, you can open the script using:-

$ popy_edit builtin_tut_example.pyml

Again, with the PoPy environment enabled, call popy_run on the Tut Script from the command line:-

$ popy_run builtin_tut_example.pyml

When the Tut Script has terminated, you can view the output of the fit using popy_view, by typing the following command:-

$ popy_view builtin_tut_example.pyml.html

Note the extra ‘.html’ extension in the above command. This command opens a local .html file in your web browser to summarise the results of the Tut Script run.

You can compare your local html output with the pre-computed documentation output, see First order absorption model with peripheral compartment. You should expect some minor numerical differences when comparing results with the documentation.

Syntax of Tut Script

The major structural difference between a Fit Script or Gen Script and a Tut Script is that the generating EFFECTS are encoded in a GEN_EFFECTS section and the fitting EFFECTS in a FIT_EFFECTS section. For example, the GEN_EFFECTS section for this tutorial example is as follows:-

GEN_EFFECTS:
    POP: |
        c[AMT] = 100.0
        f[KA] = 0.2
        f[CL] = 2.0
        f[V1] = 50
        f[Q] = 1.0
        f[V2] = 80
        f[KA_isv,CL_isv,V1_isv,Q_isv,V2_isv] = [
            [0.1],
            [0.01, 0.03],
            [0.01, -0.01, 0.09],
            [0.01, 0.02, 0.01, 0.07],
            [0.01, 0.02, 0.01, 0.01, 0.05],
        ]
        f[PNOISE] = 0.15
    ID: |
        c[ID] = sequential(50)
        t[DOSE] = 2.0
        t[OBS] ~ unif(1.0, 50.0; 5)
        # t[OBS] = range(1.0, 50.0; 5)
        r[KA, CL, V1, Q, V2] ~ mnorm([0,0,0,0,0], f[KA_isv,CL_isv,V1_isv,Q_isv,V2_isv])

And the FIT_EFFECTS section is as follows:-

FIT_EFFECTS:
    POP: |
        f[KA] ~ P1.0
        f[CL] ~ P1.0
        f[V1] ~ P20
        f[Q] ~ P0.5
        f[V2] ~ P100
        f[KA_isv,CL_isv,V1_isv,Q_isv,V2_isv] ~ spd_matrix() [
            [0.05],
            [0.01, 0.05],
            [0.01, 0.01, 0.05],
            [0.01, 0.01, 0.01, 0.05],
            [0.01, 0.01, 0.01, 0.01, 0.05],
        ]
        f[PNOISE] ~ P0.1
    ID: |
        r[KA, CL, V1, Q, V2] ~ mnorm([0,0,0,0,0], f[KA_isv,CL_isv,V1_isv,Q_isv,V2_isv])

The GEN_EFFECTS are passed to the Gen Script and the FIT_EFFECTS are passed to the Fit Script. From the examples above you can see that the GEN_EFFECTS->POP section has:-

f[X] = true_value

Whereas the FIT_EFFECTS->POP section has:-

f[X] ~ P starting_value

This reflects the fact that the f[X] are known constants in a Gen Script, but unknown values to be estimated in a Fit Script.

Summary of Tut Results

The Tut Script generates an output folder containing four new scripts:-

builtin_tut_example.pyml_output/
    builtin_tut_example_gen.pyml
    builtin_tut_example_fit.pyml
    builtin_tut_example_comp.pyml
    builtin_tut_example_tutsum.pyml

See Files Generated by Tut Script for more info. These four scripts are run in order. The Gen Script is described in Generate a Two Compartment PopPK Data Set and the Fit Script is described in Fitting a Two Compartment PopPK Model. Therefore here we focus on the Comp Script outputs, which compare the fitted f[X] and the generating (true) f[X], both as plots and as objective values. The simplest comparison output is a visual comparison of the true and fitted f[X] PK curves and the synthetic measurement data, see Table 32.

Table 32 Fitted model PK curves vs true model PK curves for first three individuals

The solid blue lines in Table 32 show the predicted PK curves for the fitted model f[X] values. The dotted blue lines show the PK curves for the true f[X] values that were used to generate the data set (in the Gen Script). The blue dots are the target c[DV_CENTRAL] values from the data file.

The target c[DV_CENTRAL] values have measurement noise added, so the blue dot data points do not lie on the true f[X] curves. The graphs show that the PK curves for the fitted f[X] are very similar to the true f[X] curves. The curves diverge most away from the data points and agree most closely near them; for example, for Individual 2 the fitted and true curves differ over the time period [0,30], but are very similar over [30,50]. Over [0,30], in the absence of actual data, the fitted model tends to impute the curve for the median individual.

Fit vs True f[X] values

If the Comp Script has been run, the TutSum Script output also contains a convenient table to compare the initial, fitted and true f[X] values, see Table 33.

Table 33 Comparison of main initial, fitted and true f[X] values
Name Initial Fitted True Abs. Error Prop. Error
f[KA] 1 0.11 0.2 8.95e-02 44.77%
f[CL] 1 2.29 2 2.94e-01 14.71%
f[V1] 20 31 50 1.90e+01 37.98%
f[Q] 0.5 1.66 1 6.60e-01 65.99%
f[V2] 100 44.8 80 3.52e+01 43.99%

Table 33 shows that the f[KA], f[CL], f[V1] and f[Q] parameters are recovered reasonably well, in the sense that the fitted values are much closer to the true values than the initial starting values are. However, the f[V2] fitted value is quite different from the true value.
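The error columns in Table 33 follow directly from the fitted and true values. As a quick sketch (using the rounded values as printed in the table, so the recomputed errors differ slightly in the last digits from the table):

```python
# Recompute the error columns of Table 33 from the fitted and true f[X]
# values (rounded as printed above, hence small differences vs the table).
fitted = {"KA": 0.11, "CL": 2.29, "V1": 31.0, "Q": 1.66, "V2": 44.8}
true_vals = {"KA": 0.2, "CL": 2.0, "V1": 50.0, "Q": 1.0, "V2": 80.0}

for name in fitted:
    abs_err = abs(fitted[name] - true_vals[name])
    prop_err = 100.0 * abs_err / true_vals[name]  # percentage of true value
    print(f"f[{name}]  abs={abs_err:.3g}  prop={prop_err:.2f}%")
```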

You can also compare the proportional noise estimate:-

Table 34 Comparison of fitted and true proportional noise f[X] values
Name Initial Fitted True Abs. Error Prop. Error
f[PNOISE] 0.1 0.141 0.15 8.89e-03 5.93%

Here f[PNOISE] starts at 0.1 and is estimated as 0.141, which can be compared with the true value of 0.15. Generally the proportional noise is identifiable in a PK/PD model, because every row of the data set contributes to the overall likelihood, and the current f[PNOISE] value influences each of those contributions.

The comparison of the covariance matrix estimates is quite long, as it displays a row for each of the 5*5 f[X] comparisons. However, only the diagonal elements are shown here:-

Table 35 Comparison of fitted and true isv variance diagonal f[X] values
Name Initial Fitted True Abs. Error Prop. Error
f[KA_isv] 0.05 0.21 0.1 0.11 110.44%
f[CL_isv] 0.05 0.029956 0.03 4.37E-05 0.15%
f[V1_isv] 0.05 0.0473 0.09 0.0427 47.47%
f[Q_isv] 0.05 0.0729 0.07 0.0029 4.14%
f[V2_isv] 0.05 0.0567 0.05 0.00673 13.45%

This shows that f[CL_isv] and f[Q_isv], the inter-subject variances of the two clearance parameters, are estimated reasonably well. The f[KA_isv] estimate is far too high, and the f[V1_isv] and f[V2_isv] estimates are worse than the starting values.

The fact that f[V1_isv] and f[V2_isv] are badly estimated compared to the true values is not that surprising, since volumes of distribution are harder to estimate in PK models than clearances, which are rates. The difficulty in estimating f[KA_isv] may be due to the relative lack of data before the cmax peak of the PK curve, there being only 5 data points sampled per individual.

There can be multiple reasons for the fitted values not agreeing with the true parameters, for example:-

  • Too few observations in the data set.
  • Too few individuals in the data set.
  • Too much noise added to the measurement data.
  • False minima on the likelihood surface.
  • A fundamental difficulty in identifying some PK parameters, for example if the model is over-parametrised relative to the data set or even unidentifiable.

Given the relatively small amount of data generated (250 data points), the results in Table 33 are adequate. As discussed in the next section, the objective function for the fitted f[X] solution is actually lower than for the true f[X] solution here.

Fit vs True Objective value

It is difficult to know, just by comparing the fitted f[X] and true f[X] in section Fit vs True f[X] values, whether the fitting method has done well or badly. We can run a form of sanity check by computing the objective function of the true f[X] and comparing it with the objective function of the fitted f[X]. This is done by optimising the r[X] for both solutions and using the synthetic data file to compute the likelihood.

The rationale is that the fitted f[X] objective value should always be lower than the true f[X] objective value, because the fitted model estimates can take advantage of correlations in the random measurement noise to get a better fit to the synthetic data. If the fitted objective function is higher, then the PoPy fitting method has ended up in a sub-optimal local minimum, because the known true values provide a better minimum.

In this example, the true model objective function is:-

-881.0027

Compared with the fitted model objective function:-

-896.8752

This indicates that the fitted f[X] pass the sanity check, and perhaps justifies the lack of agreement with parameters such as f[V2]. However, it does not say whether the global optimum solution was found, i.e. whether or not the fitted f[X] is a true global minimum of the likelihood surface.

The difference between the two objective values above depends partly on the amount of noise applied to the measurements, i.e. f[PNOISE], the number of individuals simulated, the number of observations per individual and the number of parameters in the model. More random noise in the synthetic data creates more likelihood minima away from the true f[X] solution.

If the model is practically identifiable, then increasing the number of data points and individuals should lead to convergence between the true and fitted f[X] values, and between the objective functions above.

Re-run the tutorial

You can familiarise yourself with PoPy’s various features by tweaking the tutorial example and re-running. A simple way of avoiding overwriting previous results is to do:-

$ copy builtin_tut_example.pyml builtin_tut_example_v2.pyml
$ popy_edit builtin_tut_example_v2.pyml

Then when you are happy with the edited file do:-

$ popy_run builtin_tut_example_v2.pyml

For example, you can adjust the amount of data generated, in this section:-

GEN_EFFECTS:
    ID: |
        c[ID] = sequential(50)
        t[DOSE] = 2.0
        t[OBS] ~ unif(1.0, 50.0; 5)
        # t[OBS] = range(1.0, 50.0; 5)

Note that the ‘range’ function samples time points evenly (instead of randomly). This usually makes the model fitting easier, as the data points span the time range better. You can also experiment with the number of doses, or make a random sample of dose times for each individual.

You can also edit the underlying model or compartment structure, see the MODEL_PARAMS or DERIVATIVES sections.

Another possibility is changing the fitted model only, i.e. take a copy of this file:-

builtin_tut_example.pyml_output/
    builtin_tut_example_fit.pyml

Name it ‘builtin_tut_example_fit_v2.pyml’ and change the model structure or compartments. Then do:-

$ popy_run builtin_tut_example_fit_v2.pyml

Using a different model from the underlying generative model should result in a worse fit (comparing fits using, say, the Akaike information criterion).
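As a sketch of such a comparison, assuming the PoPy objective value is on a -2 log-likelihood scale (an assumption here, not confirmed by this page), and using a purely hypothetical objective value and parameter count for the alternative fit:

```python
# Hedged sketch of an AIC comparison between two fitted models.
# Assumes objective = -2 * log-likelihood, so AIC = objective + 2 * n_params,
# where lower AIC is preferred.

def aic(objective_value, n_params):
    """AIC = objective (-2 log L) + 2 * number of fitted parameters."""
    return objective_value + 2 * n_params

# The fitted model above has 21 fitted f[X] values:
# 5 fixed effects + 15 covariance matrix elements + 1 noise parameter.
aic_full = aic(-896.8752, n_params=21)

# Hypothetical mis-specified model: fewer parameters but a worse objective
# (the -850.0 value is invented purely for illustration).
aic_misspec = aic(-850.0, n_params=17)

print(aic_full, aic_misspec)  # the model with the lower AIC is preferred
```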
