
Modelling Correlation in Random Effects

As shown in Inter-Subject Variation (ISV), we can use random effects to account for variability in model parameters between individuals in a population. That example demonstrated the principle for a single parameter; here we consider the case where two or more parameters vary between individuals. In particular, we are interested in what happens when pairs of parameters vary in similar ways, for example because they have a common underlying cause.

In the following examples we model inter-subject variability in both m[CL] and m[V], generate observations for 200 individuals using a combined proportional and additive noise model, and fit various models to the synthesized data.

Uncorrelated Effects

We can synthesize observations where parameters are uncorrelated simply by creating, for each parameter, an independent random effect with its own variance:

EFFECTS:
    POP: |
        f[KA] = 0.3
        f[CL] = 3
        f[V] = 20
        f[PNOISE_STD] = 0.1
        f[ANOISE_STD] = 0.05
        f[CL_isv] = 0.2
        # new: ISV on volume of distribution
        f[V_isv] = 0.1
    ID: |
        c[ID] = sequential(200)
        c[AMT] = 100.0
        t[RESET] = 0.0
        t[DOSE] = 1.0
        t[OBS] ~ unif(1.0, 50.0; 4)
        r[CL_isv] ~ norm( 0, f[CL_isv] )
        # new: ISV on volume of distribution
        r[V_isv] ~ norm( 0, f[V_isv] )

Mathematically, this is equivalent to modelling the two random effects jointly via a covariance matrix with the individual variances along the diagonal and zeros everywhere else. In PoPy this structure is enforced using the ~diag_matrix() notation:

EFFECTS:
    POP: |
        f[KA] = 0.3
        f[CL] = 3
        f[V] = 20
        f[PNOISE_STD] = 0.1
        f[ANOISE_STD] = 0.05
        # new: Joint ISV on both CL and V
        f[CL_isv, V_isv] ~ diag_matrix() [[0.2, 0.1]]
    ID: |
        c[ID] = sequential(200)
        c[AMT] = 100.0
        t[RESET] = 0.0
        t[DOSE] = 1.0
        t[OBS] ~ unif(1.0, 50.0; 4)
        # new: Joint ISV on both CL and V
        r[CL_isv, V_isv] ~ mnorm( [0,0], f[CL_isv, V_isv] )

Here ~diag_matrix() defines the diagonal covariance matrix, and the multivariate normal distribution, ~mnorm(), models the two random effects as a single vector (in this case with zero mean). Throughout this chapter we will continue to use this covariance matrix form rather than separate independent random effects.
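Written out in full, with η_CL and η_V standing for r[CL_isv] and r[V_isv], the ~mnorm() statement above draws

\begin{pmatrix} \eta_{CL} \\ \eta_{V} \end{pmatrix}
\sim \mathcal{N}\!\left(
\begin{pmatrix} 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0.2 & 0 \\ 0 & 0.1 \end{pmatrix}
\right)

where the diagonal entries are the variances f[CL_isv] = 0.2 and f[V_isv] = 0.1 from the generation script. Because the off-diagonal (covariance) entries are zero, sampling from this joint distribution is identical to sampling the two effects from independent univariate normal distributions, as in the first listing.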

With independent inter-subject variability in m[CL] and m[V], the generated curves (Fig. 37) vary widely, from individuals with a high CL and low V (i.e. a high KE) to individuals with a low CL and high V (i.e. a low KE).


Fig. 37 Simulated prediction curves for a population with independent inter-subject variability on parameters CL and V.

In all cases here, we ensure that the generated observations are reproducible by specifying a fixed rand_seed option in the METHOD_OPTIONS section of the tutorial script:

METHOD_OPTIONS: {py_module: gen, rand_seed: 314159}

Uncorrelated Fit to Uncorrelated Effects

Note

See the Diagonal matrix generation diagonal matrix fit for the Tut Script used to generate the results in this section.

We now consider fitting a model (i.e. estimating its parameters) to these observations, where the model also imposes a diagonal structure on the covariance matrix:

EFFECTS:
    POP: |
        f[KA] = 0.3
        f[CL] = 3
        f[V] = 20
        f[PNOISE_STD] = 0.1
        f[ANOISE_STD] = 0.05
        # new: Joint ISV on both CL and V
        f[CL_isv, V_isv] ~ diag_matrix() [[0.01, 0.01]]
    ID: |
        # new: Joint ISV on both CL and V
        r[CL_isv, V_isv] ~ mnorm( [0,0], f[CL_isv, V_isv] )

In this case, the only population parameters we estimate are the population variances, f[CL_isv] and f[V_isv]. The fitted parameter values,

f[KA] = 0.3000
f[CL] = 3.0000
f[V] = 20.0000
f[PNOISE_STD] = 0.1000
f[ANOISE_STD] = 0.0500
f[CL_isv,V_isv] = [
    [ 0.1977, 0.0000 ],
    [ 0.0000, 0.1167 ],
]

largely agree with the “true” values used to generate the observations (0.2 and 0.1), with the small discrepancies attributable to sampling variability in the 200 simulated individuals.

The final objective function value (OBJV) for this fit is

-2215.965838749517

Correlated Fit to Uncorrelated Effects

Note

See the Diagonal matrix generation full matrix fit for the Tut Script used to generate the results in this section.

We could, however, use a different covariance structure when fitting the model to the observations. For example, we could relax the constraint on the off-diagonal elements and allow any covariance matrix that is symmetric positive definite (SPD). Symmetry and positive semi-definiteness are required of every covariance matrix, and positive definiteness additionally ensures that the multivariate normal density is well defined.
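For illustration only (this is not part of the PoPy tutorial scripts), the SPD property can be checked numerically, for example via a Cholesky factorisation:

import numpy as np

def is_spd(matrix, tol=1e-12):
    """Return True if matrix is symmetric positive definite."""
    m = np.asarray(matrix, dtype=float)
    if not np.allclose(m, m.T, atol=tol):
        return False           # not symmetric
    try:
        np.linalg.cholesky(m)  # succeeds only for positive definite matrices
        return True
    except np.linalg.LinAlgError:
        return False

print(is_spd([[0.15, 0.05],
              [0.05, 0.15]]))  # True: a valid covariance matrix
print(is_spd([[0.15, 0.20],
              [0.20, 0.15]]))  # False: the off-diagonal term is too large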

In PoPy, we do this using the ~spd_matrix() notation:

EFFECTS:
    POP: |
        f[KA] = 0.3
        f[CL] = 3
        f[V] = 20
        f[PNOISE_STD] = 0.1
        f[ANOISE_STD] = 0.05
        # new: Model that links variability of
        # CL to that of V
        # note: an exactly zero off-diagonal initial value is rejected by
        # ~spd_matrix(), so the commented-out form below fails and a small
        # non-zero value is used instead
        # f[CL_isv, V_isv] ~ spd_matrix() [
        #    [ 0.01 ],
        #    [ 0.0, 0.01 ]
        # ]
        f[CL_isv, V_isv] ~ spd_matrix() [
            [ 0.01 ],
            [ 0.0001, 0.01 ]
        ]
    ID: |
        r[CL_isv, V_isv] ~ mnorm( [0,0], f[CL_isv, V_isv] )

Using the “full” covariance matrix, the fitted values,

f[KA] = 0.3000
f[CL] = 3.0000
f[V] = 20.0000
f[PNOISE_STD] = 0.1000
f[ANOISE_STD] = 0.0500
f[CL_isv,V_isv] = [
    [ 0.1928, -0.0166 ],
    [ -0.0166, 0.1155 ],
]

also largely agree with those fitted using the diagonally-constrained covariance; this is expected, since a diagonal matrix is simply a special case of an SPD matrix and the data contain no real correlation for the off-diagonal term to capture. The same can be seen in the fitted objective function value,

-2217.2437926530574

which is only fractionally lower than that of the diagonally-constrained model: the full covariance matrix has slightly more freedom with which to fit the data, but gains little from it here.
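Because the diagonal model is nested within the full SPD model (it is the special case with zero covariance), the two fits can be compared with a likelihood-ratio test, assuming the OBJV behaves like a -2 log-likelihood. A minimal Python sketch using the OBJV values quoted above:

from scipy.stats import chi2

objv_diag = -2215.965838749517   # diagonal covariance fit
objv_full = -2217.2437926530574  # full (SPD) covariance fit

delta = objv_diag - objv_full    # improvement of about 1.28 points
p_value = chi2.sf(delta, df=1)   # one extra parameter: the covariance term
print(delta, p_value)            # p is roughly 0.26, so the extra parameter
                                 # is not supported by this (uncorrelated) data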

Correlated Effects

Note

See the Full matrix generation diagonal matrix fit for the Tut Script used to generate the results in this section.

We now run the same experiments, this time using new observations synthesized from a model in which the variability in m[CL] and m[V] is correlated. This is a more realistic scenario, since, for example, both clearance and volume of distribution tend to increase with body weight.

EFFECTS:
    POP: |
        f[KA] = 0.3
        f[CL] = 3
        f[V] = 20
        f[PNOISE_STD] = 0.1
        f[ANOISE_STD] = 0.05
        # new: Data in which variability of CL is
        # linked to that of V
        f[CL_isv, V_isv] ~ spd_matrix() [
            [ 0.15 ],
            [ 0.05, 0.15 ],
        ]
    ID: |
        c[ID] = sequential(200)
        c[AMT] = 100.0
        t[RESET] = 0.0
        t[DOSE] = 1.0
        t[OBS] ~ unif(1.0, 50.0; 4)
        r[CL_isv, V_isv] ~ mnorm( [0,0], f[CL_isv, V_isv] )

Fig. 38 Simulated prediction curves for a population with correlated inter-subject variability on parameters CL and V.

The generated predictions (Fig. 38) are qualitatively different from those generated without correlation: the individual KE = CL/V values have a smaller spread, because correlated increases (or decreases) in CL and V partially cancel in the ratio.
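The following Python sketch (not part of the PoPy tutorial scripts) illustrates this effect. It assumes a log-normal style parameterisation, e.g. CL = f[CL] * exp(r[CL_isv]), which is not shown in the listings above, so the exact numbers are illustrative only:

import numpy as np

rng = np.random.default_rng(314159)

def log_ke_spread(cov, n=200):
    """Sample (r_CL, r_V) from N(0, cov) and return the spread of log(KE)."""
    eta = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    cl = 3.0 * np.exp(eta[:, 0])   # assumed: CL = f[CL] * exp(r[CL_isv])
    v = 20.0 * np.exp(eta[:, 1])   # assumed: V  = f[V]  * exp(r[V_isv])
    return np.std(np.log(cl / v))

# Same marginal variances, with and without a positive covariance term.
print(log_ke_spread([[0.15, 0.00], [0.00, 0.15]]))  # independent: wider spread of KE
print(log_ke_spread([[0.15, 0.05], [0.05, 0.15]]))  # correlated: narrower spread of KE

In the log domain, Var(log KE) = Var(r_CL) + Var(r_V) - 2 Cov(r_CL, r_V), so a positive covariance directly reduces the spread of KE across the population.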

Uncorrelated Fit to Correlated Effects

Note

See the Full matrix generation diagonal matrix fit for the Tut Script used to generate the results in this section.

We now run the fit using a model that prohibits correlation between CL and V by imposing a diagonal form on the covariance structure,

EFFECTS:
    POP: |
        f[KA] = 0.3
        f[CL] = 3
        f[V] = 20
        f[PNOISE_STD] = 0.1
        f[ANOISE_STD] = 0.05
        # new: Model in which variability of CL is
        # independent of that of V
        f[CL_isv, V_isv] ~ diag_matrix() [[0.01, 0.01]]
    ID: |
        r[CL_isv, V_isv] ~ mnorm( [0,0], f[CL_isv, V_isv] )

which is at odds with the data.

As a result, the fitted parameters

f[KA] = 0.3000
f[CL] = 3.0000
f[V] = 20.0000
f[PNOISE_STD] = 0.1000
f[ANOISE_STD] = 0.0500
f[CL_isv,V_isv] = [
    [ 0.1280, 0.0000 ],
    [ 0.0000, 0.1581 ],
]

no longer match the “true” values used to generate the observations so closely, and the correlation between CL and V cannot be captured at all because the off-diagonal elements are constrained to zero. The final objective function value (OBJV) for the fit,

-2090.7014893219766

cannot be compared with the values obtained from the uncorrelated data, because the two datasets are different, but it serves as a useful baseline for comparing different models fitted to these correlated observations.

Correlated Fit to Correlated Effects

Note

See the Full matrix generation full matrix fit for the Tut Script used to generate results in this section.

Finally, we run a fit with a model that does have the flexibility to capture correlations, again using the ~spd_matrix() notation:

EFFECTS:
    POP: |
        f[KA] = 0.3
        f[CL] = 3
        f[V] = 20
        f[PNOISE_STD] = 0.1
        f[ANOISE_STD] = 0.05
        # new: Model that links variability of
        # CL to that of V
        # (as before, a zero off-diagonal initial value fails, so a small
        # non-zero value is used instead)
        # f[CL_isv, V_isv] ~ spd_matrix() [
        #    [ 0.01 ],
        #    [ 0.00, 0.01 ]
        # ]
        f[CL_isv, V_isv] ~ spd_matrix() [
            [ 0.01 ],
            [ 0.001, 0.01 ]
        ]
    ID: |
        r[CL_isv, V_isv] ~ mnorm( [0,0], f[CL_isv, V_isv] )

This time, the fitted parameters

f[KA] = 0.3000
f[CL] = 3.0000
f[V] = 20.0000
f[PNOISE_STD] = 0.1000
f[ANOISE_STD] = 0.0500
f[CL_isv,V_isv] = [
    [ 0.1361, 0.0500 ],
    [ 0.0500, 0.1660 ],
]

are a better match to the generating model: the covariance (the off-diagonal element) recovers the true value of 0.05, which the diagonal structure could not represent at all, and the individual variances (along the diagonal) remain close to the true value of 0.15.

The improvement in fit is also apparent from the final objective function value (OBJV),

-2108.2792073976425

which is lower than for the diagonally-constrained fit by about 17.6 points, showing that the extra flexibility is genuinely beneficial when the data require it.
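Repeating the likelihood-ratio comparison from earlier (again treating the OBJV as a -2 log-likelihood) confirms that this improvement is far from marginal:

from scipy.stats import chi2

objv_diag = -2090.7014893219766  # diagonal fit to the correlated data
objv_full = -2108.2792073976425  # full (SPD) fit to the correlated data

delta = objv_diag - objv_full    # improvement of about 17.6 points
print(chi2.sf(delta, df=1))      # p is roughly 3e-5: the covariance term
                                 # is strongly supported by the data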

These examples illustrate the importance of including the flexibility to capture correlations in random effects where the model parameters vary together (for example, due to a common underlying property).

Where the observations do not come from a correlated source, having the flexibility to capture correlations has little real benefit to the model fit (a constrained covariance matrix would do just as well) but may require more data to estimate the greater number of parameters; the model should be as complex as it needs to be, but no more.

In practice, it is rare for PK parameters to be completely independent with a correlation of zero. Accounting for correlations tends to lead to better VPCs and more realistic simulations of future patients.
