Title: | Markov Models for Health Economic Evaluations |
---|---|
Description: | An implementation of the modelling and reporting features described in reference textbook and guidelines (Briggs, Andrew, et al. Decision Modelling for Health Economic Evaluation. Oxford Univ. Press, 2011; Siebert, U. et al. State-Transition Modeling. Medical Decision Making 32, 690-700 (2012).): deterministic and probabilistic sensitivity analysis, heterogeneity analysis, time dependency on state-time and model-time (semi-Markov and non-homogeneous Markov models), etc. |
Authors: | Kevin Zarca [aut, cre], Antoine Filipovic-Pierucci [aut], Matthew Wiener [ctb], Zdenek Kabat [ctb], Vojtech Filipec [ctb], Jordan Amdahl [ctb], Yonatan Carranza Alarcon [ctb], Vince Daniels [ctb] |
Maintainer: | Kevin Zarca <[email protected]> |
License: | GPL (>= 3) |
Version: | 1.0.2 |
Built: | 2024-11-10 03:00:34 UTC |
Source: | https://github.com/aphp/heemod |
Get a survival distribution reflecting the independent hazards from two or more survival distributions.
add_hazards(...)

add_hazards_(dots)
... |
Survival distributions to be used in the projection. |
dots |
Used to work around non-standard evaluation. |
A surv_add_haz
object.
dist1 <- define_surv_dist(distribution = "exp", rate = .125)
dist2 <- define_surv_dist(distribution = "weibull", shape = 1.2, scale = 50)
combined_dist <- add_hazards(dist1, dist2)
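A minimal check of this behaviour (a sketch, assuming compute_surv() works as documented later on this page): because the hazards are summed, the combined survival should equal the product of the individual survival functions.

s_comb <- compute_surv(combined_dist, time = 1:5, type = "survival")
s_sep <- compute_surv(dist1, time = 1:5, type = "survival") *
  compute_surv(dist2, time = 1:5, type = "survival")
all.equal(s_comb, s_sep)  # expected TRUE under independent hazards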
Proportionally increase or reduce the time to event of a survival distribution.
apply_af(dist, af, log_af = FALSE)
dist |
A survival distribution. |
af |
An acceleration factor to be applied. |
log_af |
If TRUE, the acceleration factor is taken to be on the log scale. |
A surv_aft
object.
dist1 <- define_surv_dist(distribution = "exp", rate = .25)
aft_dist <- apply_af(dist1, 1.5)
Proportionally reduce or increase the hazard rate of a distribution.
apply_hr(dist, hr, log_hr = FALSE)
dist |
A survival distribution. |
hr |
A hazard ratio to be applied. |
log_hr |
If TRUE, the hazard ratio is taken to be on the log scale. |
A surv_ph
object.
dist1 <- define_surv_dist(distribution = "exp", rate = .25)
ph_dist <- apply_hr(dist1, 0.5)
Proportionally increase or reduce the odds of an event of a survival distribution.
apply_or(dist, or, log_or = FALSE)
dist |
A survival distribution. |
or |
An odds ratio to be applied. |
log_or |
If TRUE, the odds ratio is taken to be on the log scale. |
A surv_po
object.
dist1 <- define_surv_dist(distribution = "exp", rate = .25)
po_dist <- apply_or(dist1, 1.2)
Apply a time shift to a survival distribution
apply_shift(dist, shift)
dist |
A survival distribution. |
shift |
A time shift to be applied. |
A positive shift moves the fit backwards in time. That is,
a shift of 4 will cause time 5 to be evaluated as time 1, and so on.
If shift == 0
, dist
is returned unchanged.
A surv_shift
object.
dist1 <- define_surv_dist(distribution = "gamma", rate = 0.25, shape = 3)
shift_dist <- apply_shift(dist1, 4)
compute_surv(dist1, 1:10)
compute_surv(shift_dist, 1:10)
Search for the appropriate value of unknown parameters to obtain specific model results.
calibrate_model(
  x,
  parameter_names,
  fn_values,
  target_values,
  initial_values = NULL,
  method = c("Nelder-Mead", "BFGS", "L-BFGS-B"),
  ...
)
x |
Result from run_model(). |
parameter_names |
Names of the parameters to calibrate. |
fn_values |
Function applied to the model that returns the values of interest as a numeric vector. |
target_values |
Values to match, same length as the
output from fn_values. |
initial_values |
Optional starting values. See details. |
method |
Optimisation method (Nelder-Mead, BFGS, or L-BFGS-B). |
... |
Optional arguments passed to
optimx::optimx() or stats::optimise(). |
Parameters not being optimized are unchanged from the
values in the model run. If initial_values
is NULL
,
the initial parameter values will also be taken from the
model run.
initial_values
can be a vector or a table. In the
second case each row corresponds to a set of initial
parameter values: the calibration will be run once per
set.
Passing in multiple initial values allows (among other things) the user to check whether the calibration gets the same results from different starting points.
Multi-dimensional problems are optimized with
optimx::optimx()
, 1-dimensional problems with
stats::optimise()
(except when a method
is given).
convcode
is always NA
with stats::optimise()
.
Running calibrate_model()
does not change the model
parameters; the user must create a new model and run it
if desired.
See also vignette("k-calibration")
.
A data frame in which each row has the calibrated
values of parameters given in parameter_names
, for
the corresponding row of initial_values
, along with
the convergence code for each run.
param <- define_parameters(p = 0.8)

mat <- define_transition(
  p, C,
  0, 1
)
mod <- define_strategy(
  transition = mat,
  A = define_state(cost = 10, effect = 0.5),
  B = define_state(cost = 5, effect = 0.8)
)

res_mod <- run_model(
  mod = mod,
  parameters = param,
  init = c(1000L, 0L),
  cycles = 10,
  cost = cost,
  effect = effect,
  method = "end"
)

f <- function(x) {
  dplyr::filter(
    get_counts(x),
    state_names == "A" & model_time == 10
  )$count
}
f(res_mod)

## Not run:
calibrate_model(
  res_mod,
  parameter_names = "p",
  fn_values = f,
  target_values = 130,
  initial_values = data.frame(p = c(0.5, 0.9)),
  lower = 0, upper = 1
)
## End(Not run)
heemod on a Cluster

These functions create or delete a cluster for heemod. When the cluster is created it is automagically used by heemod functions.
use_cluster(num_cores, cluster = NULL, close = TRUE)

status_cluster(verbose = TRUE)

close_cluster()
num_cores |
Number of cores. |
cluster |
A custom cluster. See details. |
close |
Close existing cluster before defining a new one? |
verbose |
Print cluster info. |
The usual workflow is to create the cluster with
use_cluster
, then run functions such as
run_psa()
that make use of the cluster. To
stop using the cluster run close_cluster()
.
The cluster status is given by status_cluster
.
A custom cluster can be passed to use_cluster
with
the cluster
argument. This custom cluster needs to
work with parallel::parLapply()
.
use_cluster
and close_cluster
return TRUE
invisibly in case of success.
status_cluster
returns TRUE
if a cluster
is defined, FALSE
otherwise.
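A minimal usage sketch (assuming at least two cores are available; the analysis run between the calls is a placeholder):

use_cluster(2)    # create a 2-core cluster, used automatically from now on
status_cluster()  # check that a cluster is defined
# ... run functions such as run_psa() here: they will use the cluster ...
close_cluster()   # stop using the cluster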
Given several independent probabilities of an event, return the final probability of the event.
combine_probs(...)
... |
Probability vectors. |
This function is only correct if the probabilities are independent!
A probability vector.
(p1 <- runif(5))
(p2 <- runif(5))
combine_probs(p1, p2)
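Under the independence assumption stated above, the result should match one minus the product of the complements (a sketch reusing p1 and p2 from the example):

1 - (1 - p1) * (1 - p2)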
Generate either survival probabilities or conditional probabilities of event for each model cycle.
compute_surv(x, time, cycle_length = 1, type = c("prob", "survival"), ...)
x |
A survival object |
time |
The times at which to compute the values. |
cycle_length |
The value of a Markov cycle in absolute time units. |
type |
Either "prob" (conditional event probabilities) or "survival" (survival probabilities). |
... |
arguments passed to methods. |
The results of compute_surv()
are memoised for
options("heemod.memotime")
(default: 1 hour) to
increase resampling performance.
Returns either the survival probabilities or conditional probabilities of event for each cycle.
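A minimal sketch of typical use (the exponential distribution below is an arbitrary example, not taken from the original documentation):

dist1 <- define_surv_dist(distribution = "exp", rate = .25)
# per-cycle conditional probabilities of event for 10 cycles
compute_surv(dist1, time = 1:10)
# survival probabilities at the same times
compute_surv(dist1, time = 1:10, type = "survival")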
Construct a survival object from a tabular specification.
construct_part_surv_tib(surv_def, ref, state_names, env = new.env())
surv_def |
a data frame with the specification. See details. |
ref |
data frame with information about the fits. |
state_names |
names of the model states |
env |
an environment |
This function is meant to be used only from within tabular_input.R. It won't work well otherwise, in that the environment is unlikely to have what you need.
Columns of surv_def: .strategy, .type, .subset, dist, until. dist can be either the name of a distribution along with its parameters (for example exp(rate = 0.5)), or a reference to a fit (for example fit('exp')).
a list with one element for each strategy. Each element
is in turn a part_surv
object, a list with two elements,
pfs and os. And those
elements are survival objects of various kinds, with the
commonality that they can be used in compute_surv()
.
Define a function to be passed to the fn_values
argument of calibrate_model()
.
define_calibration_fn(
  type,
  strategy_names,
  element_names,
  cycles,
  groups = NULL,
  aggreg_fn = sum
)
type |
Type of model values ("count" or "value"). |
strategy_names |
Names of strategies. |
element_names |
Names of states (for counts) or of state values (for values). |
cycles |
Cycles of interest. |
groups |
Optional grouping of values (values in a
same group are aggregated together by aggreg_fn). |
aggreg_fn |
A function to aggregate values in the same group. |
A numeric vector.
example("run_model") f <- define_calibration_fn( type = c("count", "count", "value"), strategy_names = c("I", "I", "II"), element_names = c("A", "B", "ly"), cycles = c(3, 5, 9), groups = c(1, 1, 2), aggreg_fn = mean )
example("run_model") f <- define_calibration_fn( type = c("count", "count", "value"), strategy_names = c("I", "I", "II"), element_names = c("A", "B", "ly"), cycles = c(3, 5, 9), groups = c(1, 1, 2), aggreg_fn = mean )
Not all correlations need to be specified for all variable combinations; unspecified correlations are assumed to be 0.
define_correlation(...)

define_correlation_(.dots)
... |
A list of parameter names and correlation
coefficients of the form var1, var2, correlation. |
.dots |
Used to work around non-standard evaluation. |
An object of class correlation_matrix
.
cm <- define_correlation(
  var1, var2, .4,
  var1, var3, -.2,
  var2, var3, .1
)
Define parameter variations for a Markov model sensitivity analysis.
define_dsa(...)

define_dsa_(par_names, low_dots, high_dots)
... |
A list of parameter names and min/max values
of the form var, low, high. |
par_names |
String vector of parameter names. |
low_dots , high_dots
|
Used to work around non-standard evaluation. |
A sensitivity
object.
define_dsa(
  a, 10, 45,
  b, .5, 1.5
)
Define Inflow for a BIA
define_inflow(...) define_inflow_(.dots)
... |
Name-value pairs of expressions defining inflow counts. |
.dots |
Used to work around non-standard evaluation. |
An object similar to the return value of
define_parameters()
.
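A minimal sketch (the state names A and B are placeholders and must match the states of the strategy the inflow is used with):

define_inflow(
  A = 25,  # 25 new individuals enter state A at each cycle
  B = 0
)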
Define Initial Counts
define_init(...) define_init_(.dots)
... |
Name-value pairs of expressions defining initial counts. |
.dots |
Used to work around non-standard evaluation. |
An object similar to the return value of
define_parameters()
.
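A minimal sketch (the state names A and B are placeholders and must match the states of the strategy):

define_init(
  A = 1000,
  B = 0
)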
Define parameters called to compute the transition matrix
or state values for a Markov model. Parameters can be
time dependent by using the model_time
parameter.
define_parameters(...)

define_parameters_(.dots)

## S3 method for class 'uneval_parameters'
modify(.OBJECT, ...)
... |
Name-value pairs of expressions defining parameters. |
.dots |
Used to work around non-standard evaluation. |
.OBJECT |
An object of class
uneval_parameters. |
Parameters are defined sequentially; parameters defined earlier can be called in later expressions.
Vector length should not be explicitly set, but should
instead be stated relative to model_time
(whose length depends on the number of simulation
cycles). Alternatively, dplyr
functions such as
dplyr::n()
can be used.
This function relies heavily on the dplyr
package.
Parameter definitions should thus mimic the use of
functions such as dplyr::mutate()
.
Variable names are searched first in the parameter
definition (only parameters defined earlier are visible)
then in the environment where define_parameters
was called.
For the modify
function, existing parameters are
modified, but no new parameter can be added. Parameter
order matters since only parameters defined earlier can
be referenced in later expressions.
An object of class uneval_parameters
(actually a named list of quosures).
# parameter 'age' depends on time:
# simulating a cohort starting at 60 yo
define_parameters(
  age_start = 60,
  age = age_start + model_time
)

# other uses of model_time are possible
define_parameters(
  top_time = ifelse(model_time < 10, 1, 0)
)

# more elaborate: risk function
define_parameters(
  rate = 1 - exp(- model_time * .5)
)

# don't explicitly state lengths
# define_parameters(
#   var = seq(1, 15, 2)
# )

# instead rely on model_time or dplyr
# functions such as n() or row_number()
define_parameters(
  var = seq(from = 1, length.out = n(), by = 3),
  var2 = seq(1, length(model_time), 2)
)

param <- define_parameters(
  age_start = 60,
  age = age_start + model_time
)

# modify existing parameters
modify(
  param,
  age_start = 40
)

# cannot add new parameters
# modify(
#   param,
#   const = 4.4,
#   age_2 = age ^ 2
# )
Define a partitioned survival model with progression-free survival and overall survival.
define_part_surv(
  pfs,
  os,
  state_names,
  terminal_state = FALSE,
  cycle_length = 1
)

define_part_surv_(pfs, os, state_names, cycle_length = 1)
pfs , os
|
Survival distributions or fitted survival models (e.g. results from define_surv_dist() or define_surv_fit()). |
state_names |
named character vector, length 3 or 4.
State names for progression-free state, progression,
(optionally terminal) and death respectively. Elements
should be named progression_free, progression, terminal (optional) and death. |
terminal_state |
Should a terminal state be included? Only used when state names are not provided. |
cycle_length |
The value of a Markov cycle in absolute time units. |
A part_surv
object.
dist_pfs <- define_surv_dist("exp", rate = 1)
dist_os <- define_surv_dist("exp", rate = .5)

define_part_surv(
  pfs = dist_pfs,
  os = dist_os,
  state_names = c(
    progression_free = "A",
    progression = "B",
    terminal = "C",
    death = "D"
  )
)

# identical to:
define_part_surv(
  pfs = dist_pfs,
  os = dist_os,
  terminal_state = TRUE
)
Define the properties of parameter distributions and their correlation structure for probabilistic uncertainty analysis of Markov models.
define_psa(..., correlation)

define_psa_(.dots = list(), correlation)
... |
Formulas defining parameter distributions. |
correlation |
A correlation matrix for parameters or
the output of define_correlation(). |
.dots |
Pair/values of expressions coercible to quosures. |
The distributions must be defined within heemod
(see distributions), or defined with
define_distribution()
.
If no correlation matrix is specified, parameters are assumed to be independent.
The correlation matrix need only be specified for correlated parameters.
An object of class resamp_definition
.
Contains list_qdist
, a list of quantile
functions and correlation
a correlation matrix.
mc <- define_correlation(
  age_init, cost_init, .4
)

define_psa(
  age_init ~ normal(60, 10),
  cost_init ~ normal(1000, 100),
  correlation = mc
)

# example with multinomial parameters
define_psa(
  rate1 + rate2 + rate3 ~ multinomial(10, 50, 40),
  a + b ~ multinomial(15, 30)
)
This function is meant to be used inside define_strategy()
and
define_state()
.
define_starting_values(...)

define_starting_values_(.dots)
... |
Name-value pairs of expressions defining starting values. The names must correspond to an existing state value. |
.dots |
Used to work around non-standard evaluation. |
The behaviour differs depending on the function that uses define_starting_values() as an argument.
When used inside define_strategy()
, the state values are modified for the
first cycle in each state
When used inside define_state()
, the state values are modified for counts
entering the state
An object similar to the return value of
define_parameters()
.
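A minimal sketch of use inside define_state() (the values are arbitrary): the cost value below is modified for counts entering the state, as described above.

define_state(
  cost = 100,
  utility = .8,
  starting_values = define_starting_values(
    cost = 500  # applied to counts entering the state
  )
)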
Define the values characterising a Markov Model state for 1 cycle.
define_state(..., starting_values = define_starting_values())

define_state_(x)

## S3 method for class 'state'
modify(.OBJECT, ...)
... |
Name-value pairs of expressions defining state values. |
starting_values |
Optional starting values defined
with define_starting_values(). |
x |
Used to work around non-standard evaluation. |
.OBJECT |
An object of class state. |
As with define_parameters()
, state values are
defined sequentially. Later state values can thus
only refer to values defined earlier.
For the modify
function, existing values are
modified, no new values can be added. Values order
matters since only values defined earlier can be
referenced in later expressions.
An object of class state
(actually a named
list of quosures).
st <- define_state(
  cost = 6453,
  utility = .876
)
st
Combine information on parameters, transition matrix and
states defined through define_parameters()
,
define_transition()
and define_state()
respectively.
define_strategy(
  ...,
  transition = define_transition(),
  starting_values = define_starting_values()
)

define_strategy_(transition, states, starting_values)
... |
Objects generated by define_state(). |
transition |
An object generated by
define_transition(). |
starting_values |
Optional starting values defined
with define_starting_values(). |
states |
List of states, only used by
define_strategy_(). |
This function checks whether the objects are compatible in the same model (same state names...).
State values and transition probabilities referencing
state_time
are automatically expanded to implicit
tunnel states.
An object of class uneval_model
(a list
containing the unevaluated parameters, matrix and
states).
mat <- define_transition(
  state_names = c("s1", "s2"),
  1 / c, 1 - 1 / c,
  0, 1
)

s1 <- define_state(
  cost = 234,
  utility = 1
)
s2 <- define_state(
  cost = 421,
  utility = .5
)

define_strategy(
  transition = mat,
  s1 = s1,
  s2 = s2
)
Define a parametric survival distribution.
define_surv_dist(
  distribution = c("exp", "weibull", "weibullPH", "lnorm", "llogis", "gamma",
    "gompertz", "gengamma", "gengamma.orig", "genf", "genf.orig"),
  ...
)
distribution |
A parametric survival distribution. |
... |
Additional distribution parameters (see respective distribution help pages). |
A surv_dist
object.
define_surv_dist(distribution = "exp", rate = .5)
define_surv_dist(distribution = "gompertz", rate = .5, shape = 1)
Define a fitted survival model from a Kaplan-Meier estimator or a parametric distribution.
define_surv_fit(x)
x |
a survfit or flexsurvreg object |
A surv_object
object.
library(survival)

define_surv_fit(
  survfit(Surv(time, status) ~ 1, data = colon)
)

define_surv_fit(
  flexsurv::flexsurvreg(Surv(time, status) ~ 1, data = colon, dist = "exp")
)
Define a restricted cubic spline parametric survival distribution.
define_surv_spline(scale = c("hazard", "odds", "normal"), ...)
scale |
"hazard", "odds", or "normal", as described in flexsurvspline. With the default of no knots in addition to the boundaries, these models reduce to the Weibull, log-logistic and log-normal respectively. The scale must be common to all times. |
... |
Additional distribution parameters (see respective distribution help pages). |
A surv_dist
object.
define_surv_spline(
  scale = "hazard",
  gamma = c(-18.3122, 2.7511, 0.2292),
  knots = c(4.276666, 6.470800, 7.806289)
)
define_surv_spline(
  scale = "odds",
  gamma = c(-18.5809, 2.7973, 0.2035),
  knots = c(4.276666, 6.470800, 7.806289)
)
Define a survival distribution based on explicit survival probabilities
define_surv_table(x)

## S3 method for class 'data.frame'
define_surv_table(x)

## S3 method for class 'character'
define_surv_table(x)
x |
a data frame with columns time and survival, or a file path to such a table. |
a surv_table
object, which can be used with compute_surv()
.
x <- data.frame(time = c(0, 1, 5, 10), survival = c(1, 0.9, 0.7, 0.5))
define_surv_table(x)
Define a matrix of transition probabilities. Probabilities
can depend on parameters defined with
define_parameters()
, and can thus be time-dependent.
define_transition(..., state_names)

define_transition_(.dots, state_names)

## S3 method for class 'uneval_matrix'
modify(.OBJECT, ...)

## S3 method for class 'uneval_matrix'
plot(x, relsize = 0.75, shadow.size = 0, latex = TRUE, ...)
... |
Name-value pairs of expressions defining matrix
cells. Can refer to parameters defined with
define_parameters(). |
state_names |
character vector, optional. State names. |
.dots |
Used to work around non-standard evaluation. |
.OBJECT |
An object of class uneval_matrix. |
x |
An uneval_matrix object to plot. |
relsize |
Argument passed to diagram::plotmat(). |
shadow.size |
Argument passed to
diagram::plotmat(). |
latex |
Argument passed to diagram::plotmat(). |
Matrix cells are listed by row.
Parameter names are searched first in a parameter object
defined with define_parameters()
and linked with the
matrix through define_strategy()
; then in the
environment where the matrix was defined.
The complementary probability of all other row
probabilities can be conveniently referred to as C
.
The matrix code can be re-indented for readability with
reindent_transition()
.
Only matrix size is checked during this step (the matrix must be square). Other conditions (such as row sums being equal to 1) are tested later, during model evaluation.
For the modify
function, existing matrix cells are
replaced with the new expression. Cells are referenced by
name. Cell naming follows the cell_x_y
convention, with
x
being the row number and y
the column number.
An object of class uneval_matrix
(actually a
named list of quosures).
# simple 3x3 transition matrix
mat_1 <- define_transition(
  .2, 0, .8,
  0, .1, .9,
  0, 0, 1
)
mat_1
plot(mat_1)

# referencing parameters
# rr must be present in a parameter object
# that must later be linked with define_strategy
mat_2 <- define_transition(
  .5 - rr, rr,
  .4, .6
)
mat_2
reindent_transition(mat_2)

# can also use C
define_transition(
  C, rr,
  .4, .6
)

# updating cells from mat_1
modify(
  mat_1,
  cell_2_1 = .2,
  cell_2_3 = .7
)

# only matrix size is checked, so it is possible
# to define an incorrect matrix
# this matrix will generate an error later,
# during model evaluation
define_transition(
  .5, 3,
  -1, 2
)
Returns different values depending on the strategy.
dispatch_strategy(.strategy, ...)
.strategy |
Optional strategy name. If not specified it is implicitly added. |
... |
Values taken by the parameter, named after the corresponding strategies. |
A vector of values.
define_parameters(
  val = 456,
  x = dispatch_strategy(
    strat_1 = 1234,
    strat_2 = 9876,
    strat_3 = val * 2 + model_time
  )
)
Define a distribution for PSA parameters.
normal(mean, sd)

lognormal(mean, sd, meanlog, sdlog)

gamma(mean, sd)

binomial(prob, size)

multinomial(...)

logitnormal(mu, sigma)

beta(shape1, shape2)

triangle(lower, upper, peak = (lower + upper)/2)

poisson(mean)

define_distribution(x)

use_distribution(distribution, smooth = TRUE)
mean |
Distribution mean. |
sd |
Distribution standard deviation. |
meanlog |
Mean on the log scale. |
sdlog |
SD on the log scale. |
prob |
Proportion. |
size |
Size of sample used to estimate proportion. |
... |
Dirichlet distribution parameters. |
mu |
Mean on the logit scale. |
sigma |
SD on the logit scale. |
shape1 |
First shape parameter of the beta distribution. |
shape2 |
Second shape parameter of the beta distribution. |
lower |
lower bound of triangular distribution. |
upper |
upper bound of triangular distribution. |
peak |
peak of triangular distribution. |
x |
A distribution function, see details. |
distribution |
A numeric vector of observations defining a distribution, usually the output from an MCMC fit. |
smooth |
Use gaussian kernel smoothing? |
These functions are not exported, but only used
in define_psa()
. To specify a user-made
function use define_distribution()
.
use_distribution()
uses gaussian kernel
smoothing with a bandwidth parameter calculated
by stats::density()
. Values for unobserved
quantiles are calculated by linear
interpolation.
define_distribution()
takes as argument a
function with a single argument, x
,
corresponding to a vector of quantiles. It
returns the distribution values for the given
quantiles. See examples.
define_distribution(
  function(x) stats::qexp(p = x, rate = 0.5)
)

# a mixture of 2 gaussians
x <- c(rnorm(100), rnorm(100, 6))
plot(density(x))
use_distribution(x)
Export the result of a PSA in a format compatible with Sheffield Accelerated Value of Information software.
export_savi(x, folder = ".")
x |
PSA result. |
folder |
The folder where the .csv files are saved. |
This function saves 3 files at the path given by folder: param.csv (the parameter values), cost.csv and effect.csv (the cost and effect results).
The official SAVI website can be found at this URL: https://savi.shef.ac.uk/SAVI/
Nothing. Creates 3 files.
Given a result from run_model()
, return
state membership counts for a specific strategy.
## S3 method for class 'updated_model'
get_counts(x, ...)

## S3 method for class 'combined_model'
get_counts(x, ...)

get_counts(x, ...)

## S3 method for class 'run_model'
get_counts(x, ...)

## S3 method for class 'eval_strategy'
get_counts(x, ...)

## S3 method for class 'list'
get_counts(x, ...)
x |
Result from run_model(). |
... |
further arguments passed to or from other methods. |
A data frame of counts per state.
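A minimal sketch with a toy two-state model (not taken from the original documentation):

mod <- define_strategy(
  transition = define_transition(
    .9, .1,
    0, 1
  ),
  define_state(cost = 10, ly = 1),
  define_state(cost = 0, ly = 0)
)

res <- run_model(
  mod,
  init = c(100, 0),
  cycles = 5,
  cost = cost,
  effect = ly
)

get_counts(res)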
Given a result from run_model()
, return
cost and effect values for a specific strategy.
## S3 method for class 'updated_model'
get_values(x, ...)

## S3 method for class 'combined_model'
get_values(x, ...)

get_values(x, ...)

## S3 method for class 'run_model'
get_values(x, ...)

## S3 method for class 'eval_strategy'
get_values(x, ...)

## S3 method for class 'list'
get_values(x, ...)
x |
Result from run_model(). |
... |
further arguments passed to or from other methods. |
A data frame of values per state.
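Reusing the toy model res built in the get_counts() sketch above:

get_values(res)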
Project survival from one or more survival distributions, switching between them at the specified cut points.
join(..., at)

join_(dots, at)
... |
Survival distributions to be used in the projection. |
at |
A vector of times corresponding to the cut point(s) to be used. |
dots |
Used to work around non-standard evaluation. |
A surv_projection
object.
dist1 <- define_surv_dist(distribution = "exp", rate = .5)
dist2 <- define_surv_dist(distribution = "gompertz", rate = .5, shape = 1)
join_dist <- join(dist1, dist2, at = 20)
Load a set of survival fits
load_surv_models(location, survival_specs, use_envir)
location |
base directory |
survival_specs |
information about fits |
use_envir |
an environment |
A list with two elements:
best_models
,
a list with the fits for each data file passed in; and
envir
,
an environment containing the models so they can be referenced to
get probabilities.
A convenience function to easily look for values in a data frame.
look_up(data, ..., bin = FALSE, value = "value")
data |
A reference data frame. |
... |
Individual characteristics, should be named
like the columns of data. |
bin |
Either logical: should all numeric variables be binned, or a character vector giving the names of the variables to bin (see examples). |
value |
The value to extract from the reference data frame. |
This function is mostly used to extract population information (such as mortality rates), given some individual characteristics.
If binning is activated, numeric individual characteristics are matched to the closest reference value that does not exceed them.
A vector of values, same length as ...
.
tempdf <- expand.grid(arg1 = c("A", "B", "C"), arg2 = 1:4, arg3 = 1:5)
tempdf$value <- 1:60

look_up(
  data = tempdf,
  value = "value",
  arg1 = c("A", "B", "C", "B", "A"),
  arg2 = c(1, 1, 3.2, 3.0, 5),
  arg3 = c(-1, 1, 1, 2, 3)
)

# binning doesn't catch values smaller than the smallest
# reference value
look_up(
  data = tempdf,
  value = "value",
  arg1 = c("A", "B", "C", "B", "A"),
  arg2 = c(1, 1, 3.2, 3.0, 5),
  arg3 = c(-1, 1, 1, 2, 3),
  bin = TRUE
)

# bin can also be given as a character vector
# to avoid binning all numeric variables
look_up(
  data = tempdf,
  value = "value",
  arg1 = c("A", "B", "C", "B", "A"),
  arg2 = c(1, 1, 3.2, 3.0, 5),
  arg3 = c(-1, 1, 1, 2, 3),
  bin = c("arg2")
)

age_related_df <- data.frame(age = 10 * 0:9, decade = 1:10)
look_up(age_related_df, age = c(0, 10, 20), value = "decade")

# binning might help in this situation
look_up(age_related_df, age = c(5, 15, 23.5), value = "decade")
look_up(age_related_df, age = c(5, 15, 23.5), value = "decade", bin = TRUE)
Mix a set of survival distributions using the specified weights.
mix(..., weights = 1)

mix_(dots, weights = 1)
... |
Survival distributions to be used in the projection. |
weights |
A vector of weights used in pooling. |
dots |
Used to work around non-standard evaluation. |
A surv_pooled
object.
dist1 <- define_surv_dist(distribution = "exp", rate = .5)
dist2 <- define_surv_dist(distribution = "gompertz", rate = .5, shape = 1)
pooled_dist <- mix(dist1, dist2, weights = c(0.25, 0.75))
This generic function allows the modification of various objects such as parameters, transitions matrix or states.
modify(.OBJECT, ...)
.OBJECT |
Various objects. |
... |
Modifications. |
More details are available on the respective help page of each object definition.
Same class as x
.
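A minimal sketch with a parameter object, mirroring the define_parameters() examples:

param <- define_parameters(
  age_start = 60,
  age = age_start + model_time
)

modify(param, age_start = 40)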
Convert saved fits to partitioned survival objects
part_survs_from_surv_inputs(surv_inputs, state_names)
surv_inputs |
a list of matrices of |
state_names |
names of states of the model |
surv_inputs is a tibble with columns type (PFS or OS, not case sensitive), treatment, set_name (for data subsets), dist (for survival distribution assumptions), fit (for the fitted survival object) and set_def (how the subset of data was defined, just to keep it around)
a tibble of partitioned survival objects, similar to the original tibble of survival fits, with all the columns except type and fit, and a new column part_surv.
Plot the results of a sensitivity analysis as a tornado plot.
## S3 method for class 'dsa'
plot(
  x,
  type = c("simple", "difference"),
  result = c("cost", "effect", "icer"),
  strategy = NULL,
  widest_on_top = TRUE,
  limits_by_bars = TRUE,
  resolve_labels = FALSE,
  shorten_labels = FALSE,
  remove_ns = FALSE,
  bw = FALSE,
  ...
)
x |
A result of run_dsa(). |
type |
Type of plot (see details). |
result |
Plot cost, effect, or ICER. |
strategy |
Name or index of strategies to plot. |
widest_on_top |
logical. Should bars be sorted so widest are on top? |
limits_by_bars |
logical. Should the limits used for each parameter be printed in the plot, next to the bars? |
resolve_labels |
logical. Should we resolve all labels to numbers instead of expressions (if there are any)? |
shorten_labels |
logical. Should we shorten the presentation of the parameters on the plot to highlight where the values differ? |
remove_ns |
Remove variables that are not sensitive. |
bw |
Black & white plot for publications? |
... |
Additional arguments passed to |
Plot type simple
plots variations of single strategy
values, while difference
plots incremental values.
A ggplot2
object.
Various plots for Markov models probabilistic analysis.
## S3 method for class 'psa'
plot(
  x,
  type = c("ce", "ac", "cov", "evpi"),
  max_wtp = 1e+05,
  n = 100,
  log_scale = TRUE,
  diff = FALSE,
  threshold,
  bw = FALSE,
  ...
)
x |
Result from run_psa(). |
type |
Type of plot, see details. |
max_wtp |
Maximal willingness to pay. |
n |
Number of CEAC points to estimate (values above 100 may take significant time). |
log_scale |
Show willingness to pay on a log scale? |
diff |
Logical, perform covariance analysis on strategy differences? |
threshold |
When |
bw |
Black & white plot for publications? |
... |
Additional arguments, depends on |
type = "ac"
plots cost-effectiveness acceptability
curves, type = "ce"
plots results on the
cost-efficiency plane, type = "cov"
to perform
covariance analysis on the results, type = "evpi"
for expected value of perfect information.
A ggplot2
object.
Various plots for Markov models.
## S3 method for class 'run_model'
plot(
  x,
  type = c("counts", "ce", "values"),
  panels = c("by_strategy", "by_state", "by_value"),
  values = NULL,
  strategy = NULL,
  states = NULL,
  free_y = FALSE,
  bw = FALSE,
  ...
)
x |
Result from run_model(). |
type |
Type of plot, see details. |
panels |
Should plots be faceted by model, by value or by state? |
values |
Names of values to be plotted. These can be any of the costs or effects defined in states. |
strategy |
Name or position of model(s) of interest. |
states |
Names of states to be included in the plot. |
free_y |
Should y limits be free between panels? |
bw |
Black & white plot for publications? |
... |
Additional arguments passed to the underlying plotting functions. |
A ggplot2
object.
## These examples require res_mod from the hip replacement model discussed in
## vignette("non-homogeneous", package = "heemod").

## Not run:
plot(res_mod)

plot(res_mod, model = "all")

plot(res_mod, model = "all", panels = "by_state")

plot(res_mod, model = "all", include_states = c("RevisionTHR", "SuccessR"))

plot(res_mod, model = "all", panels = "by_state",
     include_states = c("RevisionTHR", "SuccessR"))

plot(res_mod, model = 2, panel = "by_state",
     include_states = c("RevisionTHR", "SuccessR"))

## End(Not run)
Plot general survival models
## S3 method for class 'surv_object'
plot(
  x,
  times = seq.int(0, 30),
  type = c("surv", "prob"),
  psa,
  Nrep = 100,
  join_opts = list(join_col = "red", join_pch = 20, join_size = 3),
  ...
)
x |
a survival object of class |
times |
Times at which to evaluate and plot the survival object. |
type |
either "surv" or "prob". |
psa |
an object returned by define_psa(). |
Nrep |
The number of replications used to estimate the variability of the survival distribution. |
join_opts |
A list of 3 graphical parameters for points at which different survival functions are joined: join_col, join_pch and join_size. |
... |
additional arguments to pass to |
The function currently only highlights join points that are at
the top level; that is, for objects with class surv_projection
.
To avoid plotting the join points, set join_size to a negative number.
a ggplot2::ggplot()
object.
## Evaluation of the variability of the survival distribution
surv1 <- define_surv_dist("exp", rate = 0.1)
psa <- define_psa(surv1 ~ resample_surv(n = 100))
plot(surv1, psa = psa)

## plot surv_projection object
surv2 <- define_surv_dist("exp", rate = 0.5)
plot(join(surv1, surv2, at = 2), psa = psa, Nrep = 50)

## surv_fit object
library(survival)
km <- define_surv_fit(survfit(formula = Surv(time, status) ~ 1, data = aml))
fs <- flexsurv::flexsurvreg(formula = Surv(time, status) ~ 1, data = aml,
                            dist = "weibull") |>
  define_surv_fit()
psa2 <- define_psa(km ~ resample_surv(),
                   fs ~ resample_surv(),
                   surv1 ~ resample_surv(100))
plot(km, psa = psa2)
plot(join(km, surv1, at = 6), psa = psa2)
plot(join(fs, surv1, at = 6), psa = psa2)
These convenience functions make it easier to compute transition probabilities from incidence rates, OR, RR, or probabilities estimated on a different timeframe.
rescale_prob(p, to = 1, from = 1)

prob_to_prob(...)

rate_to_prob(r, to = 1, per = 1)

or_to_prob(or, p)

rr_to_prob(rr, p)
p |
Probability. |
to |
Compute probability for that timeframe. |
from |
Timeframe of the original probability. |
... |
For deprecated functions. |
r |
Rate. |
per |
Amount of person-time corresponding to the rate (e.g. 1000 person-years). |
or |
Odds ratio. |
rr |
Relative risk. |
A probability.
# convert 5-year probability
# to 1-year probability
rescale_prob(p = .65, from = 5)

# convert 1-year probability
# to 1-month probability
rescale_prob(p = .5, to = 1/12)

# convert rate per 1000 PY
# to 5-year probability
rate_to_prob(r = 162, per = 1000, to = 5)

# convert OR to probability
or_to_prob(or = 1.9, p = .51)

# convert RR to probability
rr_to_prob(rr = 1.9, p = .51)
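Assuming a constant rate over the period (the usual assumption behind these conversions), the helpers correspond to the standard closed forms sketched below; the two lines should match the first and third calls of the example above.

# rescale_prob(p, to, from) ~ 1 - (1 - p)^(to / from)
1 - (1 - .65)^(1 / 5)
# rate_to_prob(r, to, per) ~ 1 - exp(-r / per * to)
1 - exp(-162 / 1000 * 5)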
Reindent Transition Matrix
reindent_transition(x, print = TRUE)
x |
A transition matrix. |
print |
Print result? |
The reindented matrix as a text string, invisibly.
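A minimal sketch, following the same pattern as the define_transition() examples (rr is a parameter reference left undefined on purpose):

mat <- define_transition(
  .5 - rr, rr,
  .4, .6
)
reindent_transition(mat)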
Rescale a discount rate between two time frames.
rescale_discount_rate(x, from, to)
x |
Discount rate to rescale. |
from |
Original time period. |
to |
Final time period. |
Continuous discounting is assumed, i.e. when converting a long-term discount rate into a short-term rate, we assume that a partial gain from one short term is multiplicatively discounted in all following short terms. At the same time, we assume the short-term rate is time-invariant.
Rate rescaled under the assumption of compound discounting.
## 1% monthly interest rate to annual
rescale_discount_rate(0.01, 1, 12)

## 3% annual discount rate to (approximately) weekly
rescale_discount_rate(0.03, 52, 1)
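Assuming the compound relationship stated above, the conversion corresponds to (1 + x)^(to / from) - 1; the lines below should match the two example calls.

(1 + 0.01)^(12 / 1) - 1  # ~ rescale_discount_rate(0.01, 1, 12)
(1 + 0.03)^(1 / 52) - 1  # ~ rescale_discount_rate(0.03, 52, 1)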
Interfaces the output of run_psa()
into the BCEA package.
run_bcea(x, ...)
x |
Output from run_psa(). |
... |
Additional arguments passed to BCEA::bcea(). |
The BCEA package is needed for this function to work.
A BCEA analysis.
Run Sensitivity Analysis
run_dsa(model, dsa)
model |
An evaluated Markov model. |
dsa |
An object returned by
define_dsa(). |
A data.frame
with one row per model and
parameter value.
param <- define_parameters(
  p1 = .5,
  p2 = .2,
  r = .05
)

mod1 <- define_strategy(
  transition = define_transition(
    C, p1,
    p2, C
  ),
  define_state(
    cost = discount(543, r),
    ly = 1
  ),
  define_state(
    cost = discount(432, r),
    ly = .5
  )
)

mod2 <- define_strategy(
  transition = define_transition(
    C, p1,
    p2, C
  ),
  define_state(
    cost = 789,
    ly = 1
  ),
  define_state(
    cost = 456,
    ly = .8
  )
)

res2 <- run_model(
  mod1, mod2,
  parameters = param,
  init = c(100, 0),
  cycles = 10,
  cost = cost,
  effect = ly
)

ds <- define_dsa(
  p1, .1, .9,
  p2, .1, .3,
  r, .05, .1
)
print(ds)

## Not run:
x <- run_dsa(res2, ds)
plot(x, value = "cost")
## End(Not run)

# can be specified as a function of other parameters
ds2 <- define_dsa(
  p2, p1 - .1, p1 + .1
)

## Not run:
run_dsa(res2, ds2)
## End(Not run)
Run one or more strategies. When more than one strategy is provided, all strategies should have the same states and state value names.
run_model(
  ...,
  parameters = define_parameters(),
  init = c(1000L, rep(0L, get_state_number(get_states(list(...)[[1]])) - 1)),
  cycles = 1,
  method = c("life-table", "beginning", "end"),
  cost = NULL,
  effect = NULL,
  state_time_limit = NULL,
  central_strategy = NULL,
  inflow = rep(0L, get_state_number(get_states(list(...)[[1]])))
)

run_model_(
  uneval_strategy_list,
  parameters,
  init,
  cycles,
  method,
  cost,
  effect,
  state_time_limit,
  central_strategy,
  inflow
)
... |
One or more strategy objects (defined with define_strategy()). |
parameters |
Optional. An object generated by
define_parameters(). |
init |
numeric vector or result of define_init(). |
cycles |
positive integer. Number of Markov Cycles to compute. |
method |
Counting method. See details. |
cost |
Names or expression to compute cost on the cost-effectiveness plane. |
effect |
Names or expression to compute effect on the cost-effectiveness plane. |
state_time_limit |
Optional expansion limit for
state_time (see details). |
central_strategy |
character. The name of the strategy at the center of the cost-effectiveness plane, for readability. |
inflow |
numeric vector or result of
define_inflow(). |
uneval_strategy_list |
List of models, only used by
run_model_(). |
In order to compute comparisons, strategies must be similar (same states and state value names). Thus strategies can only differ through transition matrix cell values and values attached to states (but not state value names).

The initial number of individuals in each state and the number of cycles will be the same for all strategies.
state_time_limit
can be specified in 3 different ways:
1. As a single value: the limit is applied to all states in all strategies.
2. As a named vector (where names are state names): the limits are applied to the given state names, for all strategies.
3. As a named list of named vectors: the limits are applied to the given state names for the given strategies.
Counting method represents where the transition should occur, based on https://journals.sagepub.com/doi/10.1177/0272989X09340585: "beginning" overestimates costs and "end" underestimates costs.
A list of evaluated models with computed values.
# running a single model
mod1 <- define_strategy(
  transition = define_transition(
    .5, .5,
    .1, .9
  ),
  define_state(
    cost = 543,
    ly = 1
  ),
  define_state(
    cost = 432,
    ly = 1
  )
)

res <- run_model(
  mod1,
  init = c(100, 0),
  cycles = 2,
  cost = cost,
  effect = ly
)

# running several models
mod2 <- define_strategy(
  transition = define_transition(
    .5, .5,
    .1, .9
  ),
  define_state(
    cost = 789,
    ly = 1
  ),
  define_state(
    cost = 456,
    ly = 1
  )
)

res2 <- run_model(
  mod1, mod2,
  init = c(100, 0),
  cycles = 10,
  cost = cost,
  effect = ly
)
This function runs a model from tabular input.
run_model_tabular(
  location,
  reference = "REFERENCE.csv",
  run_dsa = TRUE,
  run_psa = TRUE,
  run_demo = TRUE,
  save = FALSE,
  overwrite = FALSE
)
location |
Directory where the files are located. |
reference |
Name of the reference file. |
run_dsa |
Run DSA? |
run_psa |
Run PSA? |
run_demo |
Run demographic analysis? |
save |
Should the outputs be saved? |
overwrite |
Should the outputs be overwritten? |
The reference file should have two columns, data and file. An optional additional column can be added, with value TRUE where an absolute file path is provided. data values must include state, tm, and parameters, and can also include options, demographics and data. The corresponding values in the file column give the names of the files (located in location) that contain the corresponding information - or, in the case of data, the directory containing the tables to be loaded.
A list of evaluated models (always), and, if appropriate input is provided, dsa (deterministic sensitivity analysis), psa (probabilistic sensitivity analysis) and demographics (results across different demographic groups).
Run Probabilistic Uncertainty Analysis
run_psa(model, psa, N, keep = FALSE)
model |
The result of run_model(). |
psa |
Resampling distribution for parameters defined
by define_psa(). |
N |
Number of simulations to run (> 0). |
keep |
logical; if TRUE, all models will be returned |
A list with the following elements
psa: a data.frame
with one row per model.
run_model: a data.frame
with mean cost and utility for each strategy
model: the initial model object
N: the number of simulations run
resamp_par: the resampled parameters
full: if keep
is TRUE, a list of each model objects created at each iteration
# example for run_psa

mod1 <- define_strategy(
  transition = define_transition(
    .5, .5,
    .1, .9
  ),
  define_state(
    cost = cost_init + age * 5,
    ly = 1
  ),
  define_state(
    cost = cost_init + age,
    ly = 0
  )
)

mod2 <- define_strategy(
  transition = define_transition(
    p_trans, C,
    .1, .9
  ),
  define_state(
    cost = 789 * age / 10,
    ly = 1
  ),
  define_state(
    cost = 456 * age / 10,
    ly = 0
  )
)

res2 <- run_model(
  mod1, mod2,
  parameters = define_parameters(
    age_init = 60,
    cost_init = 1000,
    age = age_init + model_time,
    p_trans = .7
  ),
  init = 1:0,
  cycles = 10,
  cost = cost,
  effect = ly
)

rsp <- define_psa(
  age_init ~ normal(60, 10),
  cost_init ~ normal(1000, 100),
  p_trans ~ binomial(.7, 100),
  correlation = matrix(c(
    1, .4, 0,
    .4, 1, 0,
    0, 0, 1
  ), byrow = TRUE, ncol = 3)
)

# with run_model result
# (only 10 resamples for speed)
ndt1 <- run_psa(res2, psa = rsp, N = 10)
Set the covariate levels of a survival model to be represented in survival projections.
set_covariates(dist, ..., data = NULL)

set_covariates_(dist, covariates, data = NULL)
dist |
a survfit or flexsurvreg object |
... |
Covariate values representing the group for which survival probabilities will be generated when evaluated. |
data |
An optional data frame representing multiple sets of covariate values for which survival probabilities will be generated. Can be used to generate aggregate survival for a heterogeneous set of subjects. |
covariates |
Used to work around non-standard evaluation. |
A surv_model
object.
fs1 <- flexsurv::flexsurvreg(
  survival::Surv(rectime, censrec) ~ group,
  data = flexsurv::bc,
  dist = "llogis"
)
good_model <- set_covariates(fs1, group = "Good")

cohort <- data.frame(group = c("Good", "Good", "Medium", "Poor"))
mixed_model <- set_covariates(fs1, data = cohort)
Summarise Markov Model Results
## S3 method for class 'run_model'
summary(object, threshold = NULL, ...)
object |
Output from run_model(). |
threshold |
ICER threshold (possibly several) for net monetary benefit computation. |
... |
additional arguments affecting the summary produced. |
A summary_run_model
object.
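A minimal sketch reusing the objects created by the run_model() examples, via the same example() trick used in the define_calibration_fn() example above:

example("run_model")  # should create res and res2
summary(res2)
summary(res2, threshold = c(1000, 5000, 15000))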
Summarize surv_shift objects
## S3 method for class 'surv_shift'
summary(object, summary_type = c("plot", "standard"), ...)
object |
a |
summary_type |
"standard" or "plot" - "standard"
for the usual summary of the underlying survival object. |
... |
other arguments |
A summary.
Given a table of new parameter values with one new parameter set per line, iteratively runs the Markov models over these sets.
## S3 method for class 'run_model'
update(object, newdata, ...)

## S3 method for class 'updated_model'
plot(
  x,
  type = c("simple", "difference", "counts", "ce", "values"),
  result = c("cost", "effect", "icer"),
  strategy = NULL,
  ...
)
object |
The result of run_model(). |
newdata |
A data.frame of new parameter values (see details). |
... |
Additional arguments passed to
|
x |
Updated model to plot. |
type |
Plot simple values or differences? |
result |
The result to plot (see details). |
strategy |
A model index, character or numeric. |
newdata
must be a data.frame
with the
following properties: the column names must be parameter
names used in define_parameters()
; and an
optional column .weights
can give the respective
weight of each row in the target population.
Weights are automatically scaled. If no weights are provided, equal weights are used for each stratum.
For the plotting function, the type
argument can
take the following values: "cost"
, "effect"
or "icer"
to plot the heterogeneity of the
respective values. Furthermore "ce"
and
"count"
can produce from the combined model plots
similar to those of run_model()
.
A data.frame
with one row per model/value.
Histograms do not account for weights; summary results, on the other hand, do.
mod1 <- define_strategy(
  transition = define_transition(
    .5, .5,
    .1, .9
  ),
  define_state(
    cost = 543 + age * 5,
    ly = 1
  ),
  define_state(
    cost = 432 + age,
    ly = 1 * age / 100
  )
)

mod2 <- define_strategy(
  transition = define_transition(
    .5, .5,
    .1, .9
  ),
  define_state(
    cost = 789 * age / 10,
    ly = 1
  ),
  define_state(
    cost = 456 * age / 10,
    ly = 1 * age / 200
  )
)

res <- run_model(
  mod1, mod2,
  parameters = define_parameters(
    age_init = 60,
    age = age_init + model_time
  ),
  init = 1:0,
  cycles = 10,
  cost = cost,
  effect = ly
)

# generating table with new parameter sets
new_tab <- data.frame(
  age_init = 40:45
)

# with run_model result
ndt <- update(res, newdata = new_tab)

summary(ndt)

# using weights
new_tab2 <- data.frame(
  age_init = 40:45,
  .weights = runif(6)
)

## Not run:
ndt2 <- update(res, newdata = new_tab2)

summary(ndt2)
## End(Not run)
Returns age- and sex-specific mortality probabilities for a given country.
get_who_mr_memo(
  age,
  sex = NULL,
  region = NULL,
  country = NULL,
  year = "latest",
  local = FALSE
)

get_who_mr(
  age,
  sex = NULL,
  region = NULL,
  country = NULL,
  year = "latest",
  local = FALSE
)
age |
age as a continuous variable. |
sex |
sex as "FMLE" or "MLE". |
region |
Region code. Assumed |
country |
Country code (see details). |
year |
Use data from that year. Defaults to
"latest". |
local |
Fetch mortality data from package cached data? |
Locally cached data is used in case of connection problems, or if local = TRUE. For memory space reasons, local data is only available for WHO high-income countries (pooled), and only for the latest year.
The results of get_who_mr
are memoised for
options("heemod.memotime")
(default: 1 hour) to
increase resampling performance.
This function should be used within
define_transition()
or define_parameters()
.
define_transition(
  C, get_who_mr(age = 50 + model_time, sex = "FMLE", country = "FRA"),
  0, 1
)