Title: Estimation of Model-Based Predictions, Contrasts and Means
Description: Implements a general interface for model-based estimations for a wide variety of models, used in the computation of marginal means, contrast analysis and predictions. For a list of supported models, see 'insight::supported_models()'.
Authors: Dominique Makowski [aut, cre]
Maintainer: Dominique Makowski <[email protected]>
License: GPL-3
Version: 0.10.0.9
Built: 2025-03-30 12:33:06 UTC
Source: https://github.com/easystats/modelbased
A sample data set from a course about the analysis of factorial designs, by Mattan S. Ben-Shachar. See the following link for more information: https://github.com/mattansb/Analysis-of-Factorial-Designs-foR-Psychologists
The data consists of five variables from 120 observations:
ID
: A unique identifier for each participant
sex
: The participant's sex
time
: The time of day the participant was tested (morning, noon, or afternoon)
coffee
: Group indicator, whether the participant drank coffee or not ("coffee" or "control").
alertness
: The participant's alertness score.
This function summarises the smooth term trend in terms of linear segments. Using the approximate derivative, it separates a non-linear vector into quasi-linear segments (in which the trend is either positive or negative). Each segment is characterized by its beginning, end, size (as a proportion of the total size), trend (the linear regression coefficient) and linearity (the R2 of the linear regression).
describe_nonlinear(data, ...)

## S3 method for class 'data.frame'
describe_nonlinear(data, x = NULL, y = NULL, ...)

estimate_smooth(data, ...)
data
: The data containing the link, as for instance obtained by estimate_relation().

...
: Other arguments to be passed to or from other functions.

x, y
: The names of the response (y) and predictor (x) variables, as they appear in the data.
A data frame containing the linear description of the non-linear terms.
# Create data
data <- data.frame(x = rnorm(200))
data$y <- data$x^2 + rnorm(200, 0, 0.5)
model <- lm(y ~ poly(x, 2), data = data)

link_data <- estimate_relation(model, length = 100)

describe_nonlinear(link_data, x = "x")
Selected variables from the EUROFAMCARE survey. Useful when testing on "real-life" data sets, including random missing values. This data set also has value and variable label attributes.
Run a contrast analysis by estimating the differences between each level of a factor. See also other related functions such as estimate_means() and estimate_slopes().
estimate_contrasts(model, ...)

## Default S3 method:
estimate_contrasts(
  model,
  contrast = NULL,
  by = NULL,
  predict = NULL,
  ci = 0.95,
  comparison = "pairwise",
  estimate = NULL,
  p_adjust = "none",
  transform = NULL,
  keep_iterations = FALSE,
  effectsize = NULL,
  iterations = 200,
  es_type = "cohens.d",
  backend = NULL,
  verbose = TRUE,
  ...
)
model
: A statistical model.

...
: Other arguments passed, for instance, to insight::get_datagrid() or to the backend's functions.

contrast
: A character vector indicating the name of the variable(s) for which to compute the contrasts, optionally including representative values or levels at which contrasts are evaluated (e.g., contrast = "x = c(1, 2)").

by
: The (focal) predictor variable(s) at which to evaluate the desired effect / mean / contrasts. Other predictors of the model that are not included here will be collapsed and "averaged" over (the effect will be estimated across them).

predict
: Is passed to the type argument of the backend's prediction function (e.g., emmeans::emmeans() or marginaleffects::avg_predictions()). See also the section Predictions on different scales.

ci
: Confidence Interval (CI) level. Defaults to 0.95 (95%).

comparison
: Specify the type of contrasts or tests that should be carried out (defaults to "pairwise").

estimate
: Determines how predictions are averaged ("marginalized") over the non-focal predictors, i.e., the type of target population predictions refer to. You can set a default option for the estimate argument via options(modelbased_estimate = <string>).

p_adjust
: The p-values adjustment method for frequentist multiple comparisons. Defaults to "none".

transform
: A function applied to predictions and confidence intervals to (back-) transform results, which can be useful in case the regression model has a transformed response variable (e.g., lm(log(y) ~ x)).

keep_iterations
: If TRUE, all iterations (draws) of bootstrapped or Bayesian models are kept as additional columns.

effectsize
: Desired measure of standardized effect size, one of "emmeans", "marginal", or "boot" (see the effect size section below). The default, NULL, computes no effect size.

iterations
: The number of bootstrap resamples to perform.

es_type
: Specifies the type of effect-size measure to estimate when using effectsize = "boot". Defaults to "cohens.d".

backend
: Whether to use "emmeans" or "marginaleffects" as a backend. You can set a default backend via options(modelbased_backend = "emmeans") or options(modelbased_backend = "marginaleffects").

verbose
: Use FALSE to silence messages and warnings.
The estimate_slopes(), estimate_means() and estimate_contrasts() functions form a group, as they are all based on marginal estimations (estimations based on a model). All three are built on the emmeans or marginaleffects package (depending on the backend argument), so reading their documentation (for instance emmeans::emmeans(), emmeans::emtrends(), or the marginaleffects website) is recommended to understand the idea behind these types of procedures.
Model-based predictions are the basis for all that follows. Indeed, the first thing to understand is how models can be used to make predictions (see estimate_link()). This corresponds to the predicted response (or "outcome variable") given specific values of the predictors (i.e., given a specific data configuration). This is why the concept of the reference grid is so important for direct predictions.
Marginal "means", obtained via estimate_means()
, are an extension
of such predictions, allowing to "average" (collapse) some of the predictors,
to obtain the average response value at a specific predictors configuration.
This is typically used when some of the predictors of interest are factors.
Indeed, the parameters of the model will usually give you the intercept value
and then the "effect" of each factor level (how different it is from the
intercept). Marginal means can be used to directly give you the mean value of
the response variable at all the levels of a factor. Moreover, it can also be
used to control, or average over predictors, which is useful in the case of
multiple predictors with or without interactions.
Marginal contrasts, obtained via estimate_contrasts(), are themselves an extension of marginal means, in that they allow investigating the difference (i.e., the contrast) between the marginal means. This is, again, often used to get all pairwise differences between all levels of a factor. It also works for continuous predictors: for instance, one could be interested in whether the difference at two extremes of a continuous predictor is significant.
Finally, marginal effects, obtained via estimate_slopes(), are different in that their focus is not values of the response variable, but the model's parameters. The idea is to assess the effect of a predictor at a specific configuration of the other predictors. This is relevant in the case of interactions or non-linear relationships, when the effect of a predictor variable changes depending on the other predictors. Moreover, these effects can also be "averaged" over other predictors, to get for instance the "general trend" of a predictor over different factor levels.
Example: Let's imagine the following model lm(y ~ condition * x), where condition is a factor with 3 levels (A, B and C) and x is a continuous variable (like age, for example). One idea is to see how this model performs by comparing the actual response y to the one predicted by the model (using estimate_expectation()). Another idea is to evaluate the mean at each of the condition's levels (using estimate_means()), which can be useful for visualization. Another possibility is to evaluate the difference between these levels (using estimate_contrasts()). Finally, one could also estimate the effect of x averaged over all conditions, or instead within each condition (using estimate_slopes()). A minimal sketch of this workflow is shown below.
A data frame of estimated contrasts.
By default, estimate_contrasts() reports no standardized effect size on
purpose. Should one be requested, some things should be kept in mind. As the
authors of emmeans write, "There is substantial disagreement among
authors of emmeans write, "There is substantial disagreement among
practitioners on what is the appropriate sigma to use in computing effect
sizes; or, indeed, whether any effect-size measure is appropriate for some
situations. The user is completely responsible for specifying appropriate
parameters (or for failing to do so)."
In particular, effect size method "boot"
does not correct for covariates
in the model, so should probably only be used when there is just one
categorical predictor (with however many levels). Some believe that if there
are multiple predictors or any covariates, it is important to re-compute
sigma adding back in the response variance associated with the variables that
aren't part of the contrast.
effectsize = "emmeans"
uses emmeans::eff_size with
sigma = stats::sigma(model)
, edf = stats::df.residual(model)
and
method = "identity"
. This standardizes using the MSE (sigma). Some believe
this works when the contrasts are the only predictors in the model, but not
when there are covariates. The response variance accounted for by the
covariates should not be removed from the SD used to standardize. Otherwise,
d will be overestimated.
effectsize = "marginal"
uses the following formula to compute effect
size: d_adj <- difference * (1- R2)/ sigma
. This standardizes
using the response SD with only the between-groups variance on the focal
factor/contrast removed. This allows for groups to be equated on their
covariates, but creates an appropriate scale for standardizing the response.
effectsize = "boot"
uses bootstrapping (defaults to a low value of
200) through bootES::bootES. Adjusts for contrasts, but not for covariates.
To define representative values for focal predictors (specified in by, contrast, and trend), you can use several methods. These values are internally generated by insight::get_datagrid(), so consult its documentation for more details.

You can directly specify values as strings or lists for by, contrast, and trend:

For numeric focal predictors, use examples like by = "gear = c(4, 8)", by = list(gear = c(4, 8)) or by = "gear = 5:10".

For factor or character predictors, use by = "Species = c('setosa', 'virginica')" or by = list(Species = c('setosa', 'virginica')).

You can use "shortcuts" within square brackets, such as by = "Sepal.Width = [sd]" or by = "Sepal.Width = [fivenum]".

For numeric focal predictors, if no representative values are specified, length and range control the number and type of representative values:

length determines how many equally spaced values are generated.

range specifies the type of values, like "range" or "sd".

length and range apply to all numeric focal predictors. If you have multiple numeric predictors, length and range can accept multiple elements, one for each predictor.

For integer variables, only values that appear in the data will be included in the data grid, independent of the length argument. This behaviour can be changed by setting protect_integers = FALSE, which will then treat integer variables as numeric (and possibly produce fractions).
See also this vignette for some examples.
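A brief sketch of the specification styles described above, applied to estimate_contrasts() (the same syntax works for the other estimate_*() functions):

model <- lm(Sepal.Width ~ Species * Petal.Width, data = iris)

# Direct values, as string or list
estimate_contrasts(model, contrast = "Species", by = "Petal.Width = c(1, 2)")
estimate_contrasts(model, contrast = "Species", by = list(Petal.Width = c(1, 2)))

# Shortcuts within square brackets
estimate_contrasts(model, contrast = "Species", by = "Petal.Width = [sd]")

# Let `length` generate equally spaced values instead
estimate_contrasts(model, contrast = "Species", by = "Petal.Width", length = 4)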
The predict argument allows generating predictions on different scales of the response variable. The "link" option does not apply to all models, and usually not to Gaussian models. "link" will leave the values on the scale of the linear predictors, while "response" (or NULL) will transform them to the scale of the response variable. Thus, for a logistic model, "link" will give estimations expressed in log-odds (probabilities on the logit scale) and "response" in terms of probabilities.

To predict distributional parameters (called "dpar" in other packages), for instance when using complex formulae in brms models, the predict argument can take the value of the parameter you want to estimate, for instance "sigma", "kappa", etc.

"response" and "inverse_link" both return predictions on the response scale. However, "response" first calculates predictions on the response scale for each observation and then aggregates them by the groups or levels defined in by, while "inverse_link" first calculates predictions on the link scale for each observation, then aggregates them by the groups or levels defined in by, and finally back-transforms the predictions to the response scale.

Both approaches have advantages and disadvantages. "response" usually produces less biased predictions, but confidence intervals might fall outside reasonable bounds (e.g., they can be negative for count data). The "inverse_link" approach is more robust in terms of confidence intervals, but might produce biased predictions. However, you can set bias_correction = TRUE to adjust for this bias.
In particular for mixed models, using "response"
is recommended, because
averaging across random effects groups is then more accurate.
## Not run:
# Basic usage
model <- lm(Sepal.Width ~ Species, data = iris)
estimate_contrasts(model)

# Dealing with interactions
model <- lm(Sepal.Width ~ Species * Petal.Width, data = iris)

# By default: selects first factor
estimate_contrasts(model)

# Can also run contrasts between points of numeric, stratified by "Species"
estimate_contrasts(model, contrast = "Petal.Width", by = "Species")

# Or both
estimate_contrasts(model, contrast = c("Species", "Petal.Width"), length = 2)

# Or with custom specifications
estimate_contrasts(model, contrast = c("Species", "Petal.Width = c(1, 2)"))

# Or modulate it
estimate_contrasts(model, by = "Petal.Width", length = 4)

# Standardized differences
estimated <- estimate_contrasts(lm(Sepal.Width ~ Species, data = iris))
standardize(estimated)

# Other models (mixed, Bayesian, ...)
data <- iris
data$Petal.Length_factor <- ifelse(data$Petal.Length < 4.2, "A", "B")
model <- lme4::lmer(Sepal.Width ~ Species + (1 | Petal.Length_factor), data = data)
estimate_contrasts(model)

data <- mtcars
data$cyl <- as.factor(data$cyl)
data$am <- as.factor(data$am)

model <- rstanarm::stan_glm(mpg ~ cyl * wt, data = data, refresh = 0)
estimate_contrasts(model)
estimate_contrasts(model, by = "wt", length = 4)

model <- rstanarm::stan_glm(
  Sepal.Width ~ Species + Petal.Width + Petal.Length,
  data = iris,
  refresh = 0
)
estimate_contrasts(model, by = "Petal.Length = [sd]", test = "bf")
## End(Not run)
After fitting a model, it is useful to generate model-based estimates of the response variable for different combinations of predictor values. Such estimates can be used to make inferences about relationships between variables, to make predictions about individual cases, or to compare predicted values against observed data.
The modelbased package includes 4 "related" functions, which mostly differ in their default arguments (in particular, data and predict):

estimate_prediction(data = NULL, predict = "prediction", ...)
estimate_expectation(data = NULL, predict = "expectation", ...)
estimate_relation(data = "grid", predict = "expectation", ...)
estimate_link(data = "grid", predict = "link", ...)
While they are all based on model-based predictions (using insight::get_predicted()), they differ in the type of predictions they make by default. For instance, estimate_prediction() and estimate_expectation() return predictions for the original data used to fit the model, while estimate_relation() and estimate_link() return predictions on a data grid (see insight::get_datagrid()). Similarly, estimate_link() returns predictions on the link scale, while the others return predictions on the response scale. Note that the relevance of these differences depends on the model family (for instance, for linear models, estimate_relation() is equivalent to estimate_link(), since there is no difference between the link scale and the response scale). A minimal comparison is sketched below.
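A minimal comparison, assuming a simple linear model (where the link and response scales coincide):

model <- lm(mpg ~ wt, data = mtcars)

head(estimate_expectation(model)) # expected values for the observed data
head(estimate_prediction(model)) # individual predictions, wider intervals
estimate_relation(model, length = 5) # expected values on a 5-point data grid
estimate_link(model, length = 5) # same grid, link scale (identical for lm)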
Note that you can run plot() on the output of these functions to get some visual insights (see the plotting examples).
See the details section below for details about the different possibilities.
estimate_expectation(
  model,
  data = NULL,
  by = NULL,
  predict = "expectation",
  ci = 0.95,
  transform = NULL,
  keep_iterations = FALSE,
  ...
)

estimate_link(
  model,
  data = "grid",
  by = NULL,
  predict = "link",
  ci = 0.95,
  transform = NULL,
  keep_iterations = FALSE,
  ...
)

estimate_prediction(
  model,
  data = NULL,
  by = NULL,
  predict = "prediction",
  ci = 0.95,
  transform = NULL,
  keep_iterations = FALSE,
  ...
)

estimate_relation(
  model,
  data = "grid",
  by = NULL,
  predict = "expectation",
  ci = 0.95,
  transform = NULL,
  keep_iterations = FALSE,
  ...
)
model
: A statistical model.

data
: A data frame with the model's predictors, used to estimate the response. If NULL (default), the data used to fit the model is used. Can also be "grid", in which case a reference grid is built via insight::get_datagrid().

by
: The predictor variable(s) at which to estimate the response. Other predictors of the model that are not included here will be set to their mean value (for numeric predictors), reference level (for factors) or mode (other types). The by argument is used to create a data grid via insight::get_datagrid(), which is then used as the data.

predict
: This parameter controls what is predicted (and gets internally passed to insight::get_predicted()).

ci
: Confidence Interval (CI) level. Defaults to 0.95 (95%).

transform
: A function applied to predictions and confidence intervals to (back-) transform results, which can be useful in case the regression model has a transformed response variable (e.g., lm(log(y) ~ x)).

keep_iterations
: If TRUE, all iterations (draws) of bootstrapped or Bayesian models are kept as additional columns.

...
: You can add all the additional control arguments from insight::get_datagrid() (used when data = "grid") and insight::get_predicted().
A data frame of predicted values and uncertainty intervals, with class "estimate_predicted". Methods for visualisation_recipe() and plot() are available.
The various types of response estimates differ most importantly in what quantity is being estimated and in what the uncertainty intervals mean. The major choice is between expected values, which carry uncertainty about the regression line, and predicted values, which carry uncertainty about individual-case predictions.
Expected values refer to the fitted regression line - the estimated average response value (i.e., the "expectation") for individuals with specific predictor values. For example, in a linear model y = 2 + 3x + 4z + e, the estimated average y for individuals with x = 1 and z = 2 is 11.
For expected values, uncertainty intervals refer to uncertainty in the estimated conditional average (where might the true regression line actually fall?). Uncertainty intervals for expected values are also called "confidence intervals".
Expected values and their uncertainty intervals are useful for describing the relationship between variables and for describing how precisely a model has been estimated.
For generalized linear models, expected values are reported on one of two scales:
The link scale refers to the scale of the fitted regression line, after transformation by the link function. For example, for a logistic regression (logit binomial) model, the link scale gives expected log-odds. For a log-link Poisson model, the link scale gives the expected log-count.
The response scale refers to the original scale of the response variable (i.e., without any link function transformation). Expected values on the link scale are back-transformed to the original response variable metric (e.g., expected probabilities for binomial models, expected counts for Poisson models).
In contrast to expected values, predicted values refer to predictions for individual cases. Predicted values are also called "posterior predictions" or "posterior predictive draws".
For predicted values, uncertainty intervals refer to uncertainty in the individual response values for each case (where might any single case actually fall?). Uncertainty intervals for predicted values are also called "prediction intervals" or "posterior predictive intervals".
Predicted values and their uncertainty intervals are useful for forecasting the range of values that might be observed in new data, for making decisions about individual cases, and for checking if model predictions are reasonable ("posterior predictive checks").
Predicted values and intervals are always on the scale of the original response variable (not the link scale).
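A small sketch of the distinction, assuming a simple linear model: the intervals from estimate_prediction() should be visibly wider than those from estimate_relation().

model <- lm(mpg ~ wt, data = mtcars)

estimate_relation(model, length = 3) # expected values, confidence intervals
estimate_prediction(model, data = "grid", length = 3) # predicted values, prediction intervals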
modelbased provides 4 functions for generating model-based response estimates and their uncertainty:

estimate_expectation():
Generates expected values (the conditional average) on the response scale. The uncertainty interval is a confidence interval. By default, values are computed using the data used to fit the model.

estimate_link():
Generates expected values (the conditional average) on the link scale. The uncertainty interval is a confidence interval. By default, values are computed using a reference grid spanning the observed range of predictor values (see insight::get_datagrid()).

estimate_prediction():
Generates predicted values (for individual cases) on the response scale. The uncertainty interval is a prediction interval. By default, values are computed using the data used to fit the model.

estimate_relation():
Like estimate_expectation(); useful for visualizing a model. Generates expected values (the conditional average) on the response scale. The uncertainty interval is a confidence interval. By default, values are computed using a reference grid spanning the observed range of predictor values (see insight::get_datagrid()).
If data = NULL, values are estimated using the data used to fit the model. If data = "grid", values are computed using a reference grid spanning the observed range of predictor values with insight::get_datagrid(). This can be useful for model visualization. The number of predictor values used for each variable can be controlled with the length argument. data can also be a data frame containing columns with names matching the model frame (see insight::get_data()). This can be used to generate model predictions for specific combinations of predictor values, as sketched below.
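A custom data frame only needs columns matching the model frame; the values chosen here are arbitrary, for illustration.

model <- lm(mpg ~ wt + am, data = mtcars)
newdata <- data.frame(wt = c(2, 3, 4), am = 1)
estimate_expectation(model, data = newdata)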
These functions are built on top of insight::get_predicted() and correspond to different specifications of its parameters. It may be useful to read its documentation, in particular the description of the predict argument, for additional details on the difference between expected vs. predicted values and link vs. response scales.
Additional control parameters can be used to control results from insight::get_datagrid() (when data = "grid") and from insight::get_predicted() (the function used internally to compute predictions).
For plotting, check the examples in visualisation_recipe(). Also check out the Vignettes and README examples for various examples, tutorials and use cases.
library(modelbased)

# Linear Models
model <- lm(mpg ~ wt, data = mtcars)

# Get predicted and prediction interval (see insight::get_predicted)
estimate_expectation(model)

# Get expected values with confidence interval
pred <- estimate_relation(model)
pred

# Visualisation (see visualisation_recipe())
plot(pred)

# Standardize predictions
pred <- estimate_relation(lm(mpg ~ wt + am, data = mtcars))
z <- standardize(pred, include_response = FALSE)
z
unstandardize(z, include_response = FALSE)

# Logistic Models
model <- glm(vs ~ wt, data = mtcars, family = "binomial")
estimate_expectation(model)
estimate_relation(model)

# Mixed models
model <- lme4::lmer(mpg ~ wt + (1 | gear), data = mtcars)
estimate_expectation(model)
estimate_relation(model)

# Bayesian models
model <- suppressWarnings(rstanarm::stan_glm(
  mpg ~ wt,
  data = mtcars, refresh = 0, iter = 200
))
estimate_expectation(model)
estimate_relation(model)
Extract the random parameters of each individual group in the context of mixed models, commonly referred to as BLUPs (Best Linear Unbiased Predictors). The output can be reshaped to the dimensions of the original data, which is useful for adding the random effects back to the original data.
estimate_grouplevel(model, ...)

## Default S3 method:
estimate_grouplevel(model, type = "random", ...)

## S3 method for class 'brmsfit'
estimate_grouplevel(
  model,
  type = "random",
  dispersion = TRUE,
  test = NULL,
  diagnostic = NULL,
  ...
)

reshape_grouplevel(x, indices = "all", group = "all", ...)
model
: A mixed model with random effects.

...
: Other arguments passed to or from other methods.

type
: "random" (default) returns the group-specific deviations from the population-level (fixed) effects; "total" returns the overall coefficients (the sum of fixed and random effects) for each group.

dispersion, test, diagnostic
: Arguments passed to parameters::model_parameters() for brmsfit models.

x
: The output of estimate_grouplevel().

indices
: A list containing the indices (i.e., which columns) to extract (e.g., "Coefficient").

group
: A list containing the random factors to select.
Unlike raw group means, BLUPs apply shrinkage: they are a compromise between the group estimate and the population estimate. This improves generalizability and prevents overfitting.
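A brief sketch of this shrinkage, comparing raw group means to the model-based group-level estimates (the grouping variable is chosen arbitrarily):

model <- lme4::lmer(mpg ~ 1 + (1 | cyl), data = mtcars)

aggregate(mpg ~ cyl, data = mtcars, FUN = mean) # raw group means
estimate_grouplevel(model) # shrunken deviations from the population intercept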
# lme4 model
data(mtcars)
model <- lme4::lmer(mpg ~ hp + (1 | carb), data = mtcars)
random <- estimate_grouplevel(model)

# Show group-specific effects
random

# Visualize random effects
plot(random)

# Reshape to wide data so that it matches the original dataframe...
reshaped <- reshape_grouplevel(random, indices = c("Coefficient", "SE"))

# ...and can be easily combined with the original data
alldata <- cbind(mtcars, reshaped)

# Use summary() to remove duplicated rows
summary(reshaped)

# Overall coefficients
estimate_grouplevel(model, type = "total")
Estimate the average value of the response variable at each factor level or at representative values of numeric predictors, as defined in a "data grid" or "reference grid". For plotting, check the examples in visualisation_recipe(). See also other related functions such as estimate_contrasts() and estimate_slopes().
estimate_means(
  model,
  by = "auto",
  predict = NULL,
  ci = 0.95,
  estimate = NULL,
  transform = NULL,
  keep_iterations = FALSE,
  backend = NULL,
  verbose = TRUE,
  ...
)
model
: A statistical model.

by
: The (focal) predictor variable(s) at which to evaluate the desired effect / mean / contrasts. Other predictors of the model that are not included here will be collapsed and "averaged" over (the effect will be estimated across them).

predict
: Is passed to the type argument of the backend's prediction function (e.g., emmeans::emmeans() or marginaleffects::avg_predictions()). See also the section Predictions on different scales.

ci
: Confidence Interval (CI) level. Defaults to 0.95 (95%).

estimate
: Determines how predictions are averaged ("marginalized") over the non-focal predictors, i.e., the type of target population predictions refer to. You can set a default option for the estimate argument via options(modelbased_estimate = <string>).

transform
: A function applied to predictions and confidence intervals to (back-) transform results, which can be useful in case the regression model has a transformed response variable (e.g., lm(log(y) ~ x)).

keep_iterations
: If TRUE, all iterations (draws) of bootstrapped or Bayesian models are kept as additional columns.

backend
: Whether to use "emmeans" or "marginaleffects" as a backend. You can set a default backend via options(modelbased_backend = "emmeans") or options(modelbased_backend = "marginaleffects").

verbose
: Use FALSE to silence messages and warnings.

...
: Other arguments passed, for instance, to insight::get_datagrid() or to the backend's functions.
The estimate_slopes(), estimate_means() and estimate_contrasts() functions form a group, as they are all based on marginal estimations (estimations based on a model). All three are built on the emmeans or marginaleffects package (depending on the backend argument), so reading their documentation (for instance emmeans::emmeans(), emmeans::emtrends(), or the marginaleffects website) is recommended to understand the idea behind these types of procedures.

Model-based predictions are the basis for all that follows. Indeed, the first thing to understand is how models can be used to make predictions (see estimate_link()). This corresponds to the predicted response (or "outcome variable") given specific values of the predictors (i.e., given a specific data configuration). This is why the concept of the reference grid is so important for direct predictions.

Marginal "means", obtained via estimate_means(), are an extension of such predictions: they allow one to "average" (collapse) some of the predictors, to obtain the average response value at a specific predictor configuration. This is typically used when some of the predictors of interest are factors. Indeed, the parameters of the model will usually give you the intercept value and then the "effect" of each factor level (how different it is from the intercept). Marginal means can directly give you the mean value of the response variable at all the levels of a factor. Moreover, they can also be used to control for, or average over, other predictors, which is useful in the case of multiple predictors with or without interactions.

Marginal contrasts, obtained via estimate_contrasts(), are themselves an extension of marginal means, in that they allow investigating the difference (i.e., the contrast) between the marginal means. This is, again, often used to get all pairwise differences between all levels of a factor. It also works for continuous predictors: for instance, one could be interested in whether the difference at two extremes of a continuous predictor is significant.

Finally, marginal effects, obtained via estimate_slopes(), are different in that their focus is not values of the response variable, but the model's parameters. The idea is to assess the effect of a predictor at a specific configuration of the other predictors. This is relevant in the case of interactions or non-linear relationships, when the effect of a predictor variable changes depending on the other predictors. Moreover, these effects can also be "averaged" over other predictors, to get for instance the "general trend" of a predictor over different factor levels.

Example: Let's imagine the following model lm(y ~ condition * x), where condition is a factor with 3 levels (A, B and C) and x is a continuous variable (like age, for example). One idea is to see how this model performs by comparing the actual response y to the one predicted by the model (using estimate_expectation()). Another idea is to evaluate the mean at each of the condition's levels (using estimate_means()), which can be useful for visualization. Another possibility is to evaluate the difference between these levels (using estimate_contrasts()). Finally, one could also estimate the effect of x averaged over all conditions, or instead within each condition (using estimate_slopes()).
A data frame of estimated marginal means.
To define representative values for focal predictors (specified in by, contrast, and trend), you can use several methods. These values are internally generated by insight::get_datagrid(), so consult its documentation for more details.

You can directly specify values as strings or lists for by, contrast, and trend:

For numeric focal predictors, use examples like by = "gear = c(4, 8)", by = list(gear = c(4, 8)) or by = "gear = 5:10".

For factor or character predictors, use by = "Species = c('setosa', 'virginica')" or by = list(Species = c('setosa', 'virginica')).

You can use "shortcuts" within square brackets, such as by = "Sepal.Width = [sd]" or by = "Sepal.Width = [fivenum]".

For numeric focal predictors, if no representative values are specified, length and range control the number and type of representative values:

length determines how many equally spaced values are generated.

range specifies the type of values, like "range" or "sd".

length and range apply to all numeric focal predictors. If you have multiple numeric predictors, length and range can accept multiple elements, one for each predictor.

For integer variables, only values that appear in the data will be included in the data grid, independent of the length argument. This behaviour can be changed by setting protect_integers = FALSE, which will then treat integer variables as numeric (and possibly produce fractions).
See also this vignette for some examples.
The predict argument allows generating predictions on different scales of the response variable. The "link" option does not apply to all models, and usually not to Gaussian models. "link" will leave the values on the scale of the linear predictors, while "response" (or NULL) will transform them to the scale of the response variable. Thus, for a logistic model, "link" will give estimations expressed in log-odds (probabilities on the logit scale) and "response" in terms of probabilities.

To predict distributional parameters (called "dpar" in other packages), for instance when using complex formulae in brms models, the predict argument can take the value of the parameter you want to estimate, for instance "sigma", "kappa", etc.

"response" and "inverse_link" both return predictions on the response scale. However, "response" first calculates predictions on the response scale for each observation and then aggregates them by the groups or levels defined in by, while "inverse_link" first calculates predictions on the link scale for each observation, then aggregates them by the groups or levels defined in by, and finally back-transforms the predictions to the response scale.

Both approaches have advantages and disadvantages. "response" usually produces less biased predictions, but confidence intervals might fall outside reasonable bounds (e.g., they can be negative for count data). The "inverse_link" approach is more robust in terms of confidence intervals, but might produce biased predictions. However, you can set bias_correction = TRUE to adjust for this bias.
In particular for mixed models, using "response"
is recommended, because
averaging across random effects groups is then more accurate.
modelbased_backend: options(modelbased_backend = <string>) will set a default value for the backend argument and can be used to set the package used by default to calculate marginal means. Can be "marginaleffects" or "emmeans".

modelbased_estimate: options(modelbased_estimate = <string>) will set a default value for the estimate argument.
Chatton, A. and Rohrer, J.M. 2024. The Causal Cookbook: Recipes for Propensity Scores, G-Computation, and Doubly Robust Standardization. Advances in Methods and Practices in Psychological Science. 2024;7(1). doi:10.1177/25152459241236149
Dickerman, Barbra A., and Miguel A. Hernán. 2020. Counterfactual Prediction Is Not Only for Causal Inference. European Journal of Epidemiology 35 (7): 615–17. doi:10.1007/s10654-020-00659-8
Heiss, A. (2022). Marginal and conditional effects for GLMMs with marginaleffects. Andrew Heiss. doi:10.59350/xwnfm-x1827
library(modelbased)

# Frequentist models
# -------------------
model <- lm(Petal.Length ~ Sepal.Width * Species, data = iris)
estimate_means(model)

# the `length` argument is passed to `insight::get_datagrid()` and modulates
# the number of representative values to return for numeric predictors
estimate_means(model, by = c("Species", "Sepal.Width"), length = 2)

# an alternative way to set up your data grid is to specify the values directly
estimate_means(model, by = c("Species", "Sepal.Width = c(2, 4)"))

# or use one of the many predefined "tokens" that help you create a useful
# data grid - to learn more about creating data grids, see help in
# `?insight::get_datagrid`.
estimate_means(model, by = c("Species", "Sepal.Width = [fivenum]"))

## Not run:
# same for factors: filter by specific levels
estimate_means(model, by = "Species = c('versicolor', 'setosa')")
estimate_means(model, by = c("Species", "Sepal.Width = 0"))

# estimate marginal average of response at values for numeric predictor
estimate_means(model, by = "Sepal.Width", length = 5)
estimate_means(model, by = "Sepal.Width = c(2, 4)")

# or provide the definition of the data grid as a list
estimate_means(
  model,
  by = list(Sepal.Width = c(2, 4), Species = c("versicolor", "setosa"))
)

# Methods that can be applied to it:
means <- estimate_means(model, by = c("Species", "Sepal.Width = 0"))
plot(means) # which runs visualisation_recipe()
standardize(means)

# grids for numeric predictors, combine range and length
model <- lm(Sepal.Length ~ Sepal.Width * Petal.Length, data = iris)

# create a "grid": value range for first numeric predictor, mean +/- 1 SD
# for remaining numeric predictors.
estimate_means(model, c("Sepal.Width", "Petal.Length"), range = "grid")

# range from minimum to maximum spread over four values,
# and mean +/- 1 SD (a total of three values)
estimate_means(
  model,
  by = c("Sepal.Width", "Petal.Length"),
  range = c("range", "sd"),
  length = c(4, 3)
)

data <- iris
data$Petal.Length_factor <- ifelse(data$Petal.Length < 4.2, "A", "B")
model <- lme4::lmer(
  Petal.Length ~ Sepal.Width + Species + (1 | Petal.Length_factor),
  data = data
)
estimate_means(model)
estimate_means(model, by = "Sepal.Width", length = 3)
## End(Not run)
Estimate the slopes (i.e., the coefficients) of a predictor over or within different factor levels, or alongside a numeric variable. In other words, assess the effect of a predictor at specific configurations of the data. This corresponds to the derivative and can be useful to understand where a predictor plays a significant role when interactions or non-linear relationships are present.
Other related functions based on marginal estimations include estimate_contrasts() and estimate_means().
See the Details section below, and don't forget to also check out the Vignettes and README examples for various examples, tutorials and use cases.
estimate_slopes(
  model,
  trend = NULL,
  by = NULL,
  ci = 0.95,
  p_adjust = "none",
  transform = NULL,
  keep_iterations = FALSE,
  backend = NULL,
  verbose = TRUE,
  ...
)
model
: A statistical model.

trend
: A character indicating the name of the variable for which to compute the slopes.

by
: The (focal) predictor variable(s) at which to evaluate the desired effect / mean / contrasts. Other predictors of the model that are not included here will be collapsed and "averaged" over (the effect will be estimated across them).

ci
: Confidence Interval (CI) level. Defaults to 0.95 (95%).

p_adjust
: The p-values adjustment method for frequentist multiple comparisons. Defaults to "none".

transform
: A function applied to predictions and confidence intervals to (back-) transform results, which can be useful in case the regression model has a transformed response variable (e.g., lm(log(y) ~ x)).

keep_iterations
: If TRUE, all iterations (draws) of bootstrapped or Bayesian models are kept as additional columns.

backend
: Whether to use "emmeans" or "marginaleffects" as a backend. You can set a default backend via options(modelbased_backend = "emmeans") or options(modelbased_backend = "marginaleffects").

verbose
: Use FALSE to silence messages and warnings.

...
: Other arguments passed, for instance, to insight::get_datagrid() or to the backend's functions.
The estimate_slopes(), estimate_means() and estimate_contrasts() functions form a group, as they are all based on marginal estimations (estimations based on a model). All three are built on the emmeans or marginaleffects package (depending on the backend argument), so reading their documentation (for instance emmeans::emmeans(), emmeans::emtrends(), or the marginaleffects website) is recommended to understand the idea behind these types of procedures.

Model-based predictions are the basis for all that follows. Indeed, the first thing to understand is how models can be used to make predictions (see estimate_link()). This corresponds to the predicted response (or "outcome variable") given specific values of the predictors (i.e., given a specific data configuration). This is why the concept of the reference grid is so important for direct predictions.

Marginal "means", obtained via estimate_means(), are an extension of such predictions: they allow one to "average" (collapse) some of the predictors, to obtain the average response value at a specific predictor configuration. This is typically used when some of the predictors of interest are factors. Indeed, the parameters of the model will usually give you the intercept value and then the "effect" of each factor level (how different it is from the intercept). Marginal means can directly give you the mean value of the response variable at all the levels of a factor. Moreover, they can also be used to control for, or average over, other predictors, which is useful in the case of multiple predictors with or without interactions.

Marginal contrasts, obtained via estimate_contrasts(), are themselves an extension of marginal means, in that they allow investigating the difference (i.e., the contrast) between the marginal means. This is, again, often used to get all pairwise differences between all levels of a factor. It also works for continuous predictors: for instance, one could be interested in whether the difference at two extremes of a continuous predictor is significant.

Finally, marginal effects, obtained via estimate_slopes(), are different in that their focus is not values of the response variable, but the model's parameters. The idea is to assess the effect of a predictor at a specific configuration of the other predictors. This is relevant in the case of interactions or non-linear relationships, when the effect of a predictor variable changes depending on the other predictors. Moreover, these effects can also be "averaged" over other predictors, to get for instance the "general trend" of a predictor over different factor levels.

Example: Let's imagine the following model lm(y ~ condition * x), where condition is a factor with 3 levels (A, B and C) and x is a continuous variable (like age, for example). One idea is to see how this model performs by comparing the actual response y to the one predicted by the model (using estimate_expectation()). Another idea is to evaluate the mean at each of the condition's levels (using estimate_means()), which can be useful for visualization. Another possibility is to evaluate the difference between these levels (using estimate_contrasts()). Finally, one could also estimate the effect of x averaged over all conditions, or instead within each condition (using estimate_slopes()).
A data frame of class estimate_slopes.
To define representative values for focal predictors (specified in by, contrast, and trend), you can use several methods. These values are internally generated by insight::get_datagrid(), so consult its documentation for more details.

You can directly specify values as strings or lists for by, contrast, and trend:

For numeric focal predictors, use examples like by = "gear = c(4, 8)", by = list(gear = c(4, 8)) or by = "gear = 5:10".

For factor or character predictors, use by = "Species = c('setosa', 'virginica')" or by = list(Species = c('setosa', 'virginica')).

You can use "shortcuts" within square brackets, such as by = "Sepal.Width = [sd]" or by = "Sepal.Width = [fivenum]".

For numeric focal predictors, if no representative values are specified, length and range control the number and type of representative values:

length determines how many equally spaced values are generated.

range specifies the type of values, like "range" or "sd".

length and range apply to all numeric focal predictors. If you have multiple numeric predictors, length and range can accept multiple elements, one for each predictor.

For integer variables, only values that appear in the data will be included in the data grid, independent of the length argument. This behaviour can be changed by setting protect_integers = FALSE, which will then treat integer variables as numeric (and possibly produce fractions).
See also this vignette for some examples.
library(ggplot2)

# Get an idea of the data
ggplot(iris, aes(x = Petal.Length, y = Sepal.Width)) +
  geom_point(aes(color = Species)) +
  geom_smooth(color = "black", se = FALSE) +
  geom_smooth(aes(color = Species), linetype = "dotted", se = FALSE) +
  geom_smooth(aes(color = Species), method = "lm", se = FALSE)

# Model it
model <- lm(Sepal.Width ~ Species * Petal.Length, data = iris)

# Compute the marginal effect of Petal.Length at each level of Species
slopes <- estimate_slopes(model, trend = "Petal.Length", by = "Species")
slopes

## Not run:
# Plot it
plot(slopes)
standardize(slopes)

model <- mgcv::gam(Sepal.Width ~ s(Petal.Length), data = iris)
slopes <- estimate_slopes(model, by = "Petal.Length", length = 50)
summary(slopes)
plot(slopes)

model <- mgcv::gam(Sepal.Width ~ s(Petal.Length, by = Species), data = iris)
slopes <- estimate_slopes(
  model,
  trend = "Petal.Length",
  by = c("Petal.Length", "Species"),
  length = 20
)
summary(slopes)
plot(slopes)
## End(Not run)
A sample data set, used in tests and some examples. Useful for demonstrating count models (with or without zero-inflation component). It consists of nine variables from 250 observations.
These functions are convenient wrappers around the emmeans and the marginaleffects packages. They are mostly intended for developers who want to leverage a unified API for getting model-based estimates; regular users should use the estimate_* set of functions.

The get_emmeans(), get_emcontrasts() and get_emtrends() functions are wrappers around emmeans::emmeans() and emmeans::emtrends().
get_emcontrasts(
  model,
  contrast = NULL,
  by = NULL,
  predict = NULL,
  comparison = "pairwise",
  keep_iterations = FALSE,
  verbose = TRUE,
  ...
)

get_emmeans(
  model,
  by = "auto",
  predict = NULL,
  keep_iterations = FALSE,
  verbose = TRUE,
  ...
)

get_emtrends(
  model,
  trend = NULL,
  by = NULL,
  keep_iterations = FALSE,
  verbose = TRUE,
  ...
)

get_marginalcontrasts(
  model,
  contrast = NULL,
  by = NULL,
  predict = NULL,
  ci = 0.95,
  comparison = "pairwise",
  estimate = NULL,
  p_adjust = "none",
  transform = NULL,
  keep_iterations = FALSE,
  verbose = TRUE,
  ...
)

get_marginalmeans(
  model,
  by = "auto",
  predict = NULL,
  ci = 0.95,
  estimate = NULL,
  transform = NULL,
  keep_iterations = FALSE,
  verbose = TRUE,
  ...
)

get_marginaltrends(
  model,
  trend = NULL,
  by = NULL,
  ci = 0.95,
  p_adjust = "none",
  transform = NULL,
  keep_iterations = FALSE,
  verbose = TRUE,
  ...
)
model
: A statistical model.

contrast
: A character vector indicating the name of the variable(s) for which to compute the contrasts, optionally including representative values or levels at which contrasts are evaluated (e.g., contrast = "x = c(1, 2)").

by
: The (focal) predictor variable(s) at which to evaluate the desired effect / mean / contrasts. Other predictors of the model that are not included here will be collapsed and "averaged" over (the effect will be estimated across them).

predict
: Is passed to the type argument of the backend's prediction function (e.g., emmeans::emmeans() or marginaleffects::avg_predictions()). See also the section Predictions on different scales.

comparison
: Specify the type of contrasts or tests that should be carried out (defaults to "pairwise").

keep_iterations
: If TRUE, all iterations (draws) of bootstrapped or Bayesian models are kept as additional columns.

verbose
: Use FALSE to silence messages and warnings.

...
: Other arguments passed, for instance, to insight::get_datagrid() or to the backend's functions.

trend
: A character indicating the name of the variable for which to compute the slopes.

ci
: Confidence Interval (CI) level. Defaults to 0.95 (95%).

estimate
: Determines how predictions are averaged ("marginalized") over the non-focal predictors, i.e., the type of target population predictions refer to. You can set a default option for the estimate argument via options(modelbased_estimate = <string>).

p_adjust
: The p-values adjustment method for frequentist multiple comparisons. Defaults to "none".

transform
: A function applied to predictions and confidence intervals to (back-) transform results, which can be useful in case the regression model has a transformed response variable (e.g., lm(log(y) ~ x)).
# Basic usage
model <- lm(Sepal.Width ~ Species, data = iris)
get_emcontrasts(model)

## Not run:
# Dealing with interactions
model <- lm(Sepal.Width ~ Species * Petal.Width, data = iris)

# By default: selects first factor
get_emcontrasts(model)

# Or both
get_emcontrasts(model, contrast = c("Species", "Petal.Width"), length = 2)

# Or with custom specifications
get_emcontrasts(model, contrast = c("Species", "Petal.Width=c(1, 2)"))

# Or modulate it
get_emcontrasts(model, by = "Petal.Width", length = 4)
## End(Not run)

model <- lm(Sepal.Length ~ Species + Petal.Width, data = iris)

# By default, 'by' is set to "Species"
get_emmeans(model)

## Not run:
# Overall mean (close to 'mean(iris$Sepal.Length)')
get_emmeans(model, by = NULL)

# One can estimate marginal means at several values of a 'modulate' variable
get_emmeans(model, by = "Petal.Width", length = 3)

# Interactions
model <- lm(Sepal.Width ~ Species * Petal.Length, data = iris)
get_emmeans(model)
get_emmeans(model, by = c("Species", "Petal.Length"), length = 2)
get_emmeans(model, by = c("Species", "Petal.Length = c(1, 3, 5)"), length = 2)
## End(Not run)

## Not run:
model <- lm(Sepal.Width ~ Species * Petal.Length, data = iris)
get_emtrends(model)
get_emtrends(model, by = "Species")
get_emtrends(model, by = "Petal.Length")
get_emtrends(model, by = c("Species", "Petal.Length"))
## End(Not run)

model <- lm(Petal.Length ~ poly(Sepal.Width, 4), data = iris)
get_emtrends(model)
get_emtrends(model, by = "Sepal.Width")

model <- lm(Sepal.Length ~ Species + Petal.Width, data = iris)

# By default, 'by' is set to "Species"
get_marginalmeans(model)

# Overall mean (close to 'mean(iris$Sepal.Length)')
get_marginalmeans(model, by = NULL)

## Not run:
# One can estimate marginal means at several values of a 'modulate' variable
get_marginalmeans(model, by = "Petal.Width", length = 3)

# Interactions
model <- lm(Sepal.Width ~ Species * Petal.Length, data = iris)
get_marginalmeans(model)
get_marginalmeans(model, by = c("Species", "Petal.Length"), length = 2)
get_marginalmeans(model, by = c("Species", "Petal.Length = c(1, 3, 5)"), length = 2)
## End(Not run)

model <- lm(Sepal.Width ~ Species * Petal.Length, data = iris)
get_marginaltrends(model, trend = "Petal.Length", by = "Species")
get_marginaltrends(model, trend = "Petal.Length", by = "Petal.Length")
get_marginaltrends(model, trend = "Petal.Length", by = c("Species", "Petal.Length"))
Global options from the modelbased package

For calculating marginal means:

options(modelbased_backend = <string>) will set a default value for the
backend argument and can be used to set the package used by default to
calculate marginal means. Can be "marginaleffects" or "emmeans".

options(modelbased_estimate = <string>) will set a default value for the
estimate argument, which modulates the type of target population that
predictions refer to.

For printing:

options(modelbased_select = <string>) will set a default value for the
select argument and can be used to define a custom default layout for
printing.

options(modelbased_include_grid = TRUE) will set a default value for the
include_grid argument and can be used to include data grids in the output
by default.

options(modelbased_full_labels = FALSE) will remove redundant (duplicated)
labels from rows.

For plotting:

options(modelbased_join_dots = <logical>) will set a default value for the
join_dots argument.

options(modelbased_numeric_as_discrete = <number>) will set a default value
for the numeric_as_discrete argument. Can also be FALSE.
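For instance, a minimal sketch of how these options might be set at the top
of a script (the particular values below are illustrative, not defaults):

# Use emmeans as the default backend and a compact table layout
options(
  modelbased_backend = "emmeans",
  modelbased_select = "minimal"
)

# Subsequent calls to estimate_means(), estimate_contrasts(), etc. pick up
# these defaults; setting an option to NULL restores package behavior
options(modelbased_backend = NULL, modelbased_select = NULL)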
This function "pools" (i.e., combines) multiple estimate_contrasts objects,
as returned by estimate_contrasts(), in a similar fashion as mice::pool().
pool_contrasts(x, ...)
x : A list of estimate_contrasts objects, as returned by estimate_contrasts().

... : Currently not used.
Averaging of parameters follows Rubin's rules (Rubin, 1987, p. 76).
A data frame with pooled comparisons or contrasts of predictions.
Rubin, D.B. (1987). Multiple Imputation for Nonresponse in Surveys. New York: John Wiley and Sons.
data("nhanes2", package = "mice") imp <- mice::mice(nhanes2, printFlag = FALSE) comparisons <- lapply(1:5, function(i) { m <- lm(bmi ~ age + hyp + chl, data = mice::complete(imp, action = i)) estimate_contrasts(m, "age") }) pool_contrasts(comparisons)
data("nhanes2", package = "mice") imp <- mice::mice(nhanes2, printFlag = FALSE) comparisons <- lapply(1:5, function(i) { m <- lm(bmi ~ age + hyp + chl, data = mice::complete(imp, action = i)) estimate_contrasts(m, "age") }) pool_contrasts(comparisons)
This function "pools" (i.e. combines) multiple estimate_means
objects, in
a similar fashion as mice::pool()
.
pool_predictions(x, transform = NULL, ...)

pool_slopes(x, transform = NULL, ...)
x : A list of estimate_means, estimate_predicted, or estimate_slopes objects.

transform : A function applied to predictions and confidence intervals to
(back-) transform results, which can be useful in case the regression model
has a transformed response variable (e.g., a log-transformed outcome).

... : Currently not used.
Averaging of parameters follows Rubin's rules (Rubin, 1987, p. 76).

Pooling is applied to the predicted values and is based on the standard
errors as they are calculated in the estimate_means or estimate_predicted
objects provided in x. For objects of class estimate_means, the predicted
values are on the response scale by default, and standard errors are
calculated using the delta method. Estimates are then pooled, and standard
errors for the pooled estimates are calculated based on Rubin's rules.
Predicted values are not back-transformed to the link scale before Rubin's
rules are applied.
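For intuition, here is a minimal sketch of Rubin's rules applied to a single
pooled quantity (the numbers are made up for illustration; this is not the
package's internal code):

# Hypothetical point estimates and standard errors from m = 5 imputations
estimates <- c(26.1, 25.8, 26.4, 26.0, 26.2)
ses <- c(0.90, 1.00, 0.80, 0.95, 0.85)
m <- length(estimates)

pooled_estimate <- mean(estimates)  # pooled estimate: average across imputations
within_var <- mean(ses^2)           # W: average within-imputation variance
between_var <- var(estimates)       # B: between-imputation variance

# Total variance T = W + (1 + 1/m) * B, so the pooled SE is sqrt(T)
pooled_se <- sqrt(within_var + (1 + 1 / m) * between_var)
c(estimate = pooled_estimate, se = pooled_se)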
A data frame with pooled predictions.
Rubin, D.B. (1987). Multiple Imputation for Nonresponse in Surveys. New York: John Wiley and Sons.
# example for multiple imputed datasets
data("nhanes2", package = "mice")
imp <- mice::mice(nhanes2, printFlag = FALSE)

# estimated marginal means
predictions <- lapply(1:5, function(i) {
  m <- lm(bmi ~ age + hyp + chl, data = mice::complete(imp, action = i))
  estimate_means(m, "age")
})
pool_predictions(predictions)

# estimated slopes (marginal effects)
slopes <- lapply(1:5, function(i) {
  m <- lm(bmi ~ age + hyp + chl, data = mice::complete(imp, action = i))
  estimate_slopes(m, "chl")
})
pool_slopes(slopes)
print() method for modelbased objects. Can be used to tweak the output of
tables.
## S3 method for class 'estimate_contrasts'
print(x, select = NULL, include_grid = NULL, full_labels = NULL, ...)
x : An object returned by the different estimation functions (e.g.,
estimate_means()).

select : Determines which columns are printed and the table layout. There
are two options for this argument: a string shortcut naming a predefined
layout (e.g., "minimal"), or a glue-like pattern defining the columns and
their format (e.g., "{estimate} ({se})"). Note: glue-like syntax is still
experimental in the case of more complex models (like mixed models) and may
not return expected results.

include_grid : Logical, if TRUE, the data grid is included in the output.

full_labels : Logical, if FALSE, redundant (duplicated) labels are removed
from rows.

... : Arguments passed to other methods.
Invisibly returns x.

Columns and table layout can be customized using options():
modelbased_select: options(modelbased_select = <string>) will set a default
value for the select argument and can be used to define a custom default
layout for printing.

modelbased_include_grid: options(modelbased_include_grid = TRUE) will set a
default value for the include_grid argument and can be used to include data
grids in the output by default.

modelbased_full_labels: options(modelbased_full_labels = FALSE) will remove
redundant (duplicated) labels from rows.
Use print_html() and print_md() to create tables in HTML or markdown
format, respectively.
model <- lm(Petal.Length ~ Species, data = iris)
out <- estimate_means(model, "Species")

# default
print(out)

# smaller set of columns
print(out, select = "minimal")

# remove redundant labels
data(efc, package = "modelbased")
efc <- datawizard::to_factor(efc, c("c161sex", "c172code", "e16sex"))
levels(efc$c172code) <- c("low", "mid", "high")
fit <- lm(neg_c_7 ~ c161sex * c172code * e16sex, data = efc)
out <- estimate_means(fit, c("c161sex", "c172code", "e16sex"))
print(out, full_labels = FALSE, select = "{estimate} ({se})")
Smoothing a vector or a time series. For data.frames, the function will smooth all numeric variables stratified by factor levels (i.e., will smooth within each factor level combination).
smoothing(x, method = "loess", strength = 0.25, ...)
x : A numeric vector.

method : Can be "loess" (default) or "smooth". A loess smoothing can be slow.

strength : This argument only applies when method = "loess", and controls
the degree of smoothing.

... : Arguments passed to or from other methods.
A smoothed vector or data frame.
x <- sin(seq(0, 4 * pi, length.out = 100)) + rnorm(100, 0, 0.2)
plot(x, type = "l")
lines(smoothing(x, method = "smooth"), type = "l", col = "blue")
lines(smoothing(x, method = "loess"), type = "l", col = "red")

x <- sin(seq(0, 4 * pi, length.out = 10000)) + rnorm(10000, 0, 0.2)
plot(x, type = "l")
lines(smoothing(x, method = "smooth"), type = "l", col = "blue")
lines(smoothing(x, method = "loess"), type = "l", col = "red")
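The examples above smooth plain vectors; the description also covers data
frames, where numeric variables are smoothed within each factor level. A
minimal sketch of that case (the data frame below is made up for
illustration):

# Hypothetical data: one numeric variable, one grouping factor
d <- data.frame(
  group = rep(c("a", "b"), each = 100),
  value = sin(seq(0, 4 * pi, length.out = 200)) + rnorm(200, 0, 0.2)
)
# Numeric columns are smoothed separately within each level of 'group'
smoothed <- smoothing(d)
head(smoothed)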
Most 'modelbased' objects can be visualized using the plot() function, which
internally calls the visualisation_recipe() function. See the examples below
for more information and examples on how to create and customize plots.
## S3 method for class 'estimate_predicted'
visualisation_recipe(
  x,
  show_data = FALSE,
  point = NULL,
  line = NULL,
  pointrange = NULL,
  ribbon = NULL,
  facet = NULL,
  grid = NULL,
  join_dots = NULL,
  numeric_as_discrete = NULL,
  ...
)

## S3 method for class 'estimate_slopes'
visualisation_recipe(
  x,
  line = NULL,
  pointrange = NULL,
  ribbon = NULL,
  facet = NULL,
  grid = NULL,
  ...
)

## S3 method for class 'estimate_grouplevel'
visualisation_recipe(
  x,
  line = NULL,
  pointrange = NULL,
  ribbon = NULL,
  facet = NULL,
  grid = NULL,
  ...
)
x : A modelbased object.

show_data : Logical, if TRUE, raw data points are added to the plot.

point, line, pointrange, ribbon, facet, grid : Additional aesthetics and
parameters for the geoms (see customization example).

join_dots : Logical, if TRUE, point estimates are connected by lines.

numeric_as_discrete : Maximum number of unique values in a numeric predictor
for that predictor to be treated as discrete. Can also be FALSE.

... : Not used.
The plotting works by mapping any predictors from the by argument to the
x-axis, colors, alpha (transparency) and facets. Thus, the appearance of the
plot depends on the order of the variables that you specify in the by
argument. For instance, the plots corresponding to
estimate_relation(model, by = c("Species", "Sepal.Length")) and
estimate_relation(model, by = c("Sepal.Length", "Species")) will look
different.

The automated plotting is primarily meant for convenient visual checks, but
for publication-ready figures, we recommend re-creating the figures using
the ggplot2 package directly.
There are two options to remove the confidence bands or error bars from the
plot. To remove error bars, simply set the pointrange geom to point, e.g.
plot(..., pointrange = list(geom = "point")). To remove the confidence bands
from line geoms, use ribbon = "none".
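For instance (a minimal sketch; the models are illustrative):

model <- lm(Sepal.Width ~ Species, data = iris)
out <- estimate_means(model, by = "Species")
# Show plain points instead of point ranges (drops the error bars)
plot(out, pointrange = list(geom = "point"))

# For predictions drawn as lines, drop the confidence bands instead
pred <- estimate_relation(lm(mpg ~ wt, data = mtcars))
plot(pred, ribbon = "none")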
Some arguments for plot() can get global defaults using options():

modelbased_join_dots: options(modelbased_join_dots = <logical>) will set a
default value for the join_dots argument.

modelbased_numeric_as_discrete: options(modelbased_numeric_as_discrete = <number>)
will set a default value for the numeric_as_discrete argument. Can also be
FALSE.
library(ggplot2)
library(see)

# ==============================================
# estimate_relation, estimate_expectation, ...
# ==============================================

# Simple Model ---------------
x <- estimate_relation(lm(mpg ~ wt, data = mtcars))
layers <- visualisation_recipe(x)
layers
plot(layers)

# visualisation_recipe() is called implicitly when you call plot()
plot(estimate_relation(lm(mpg ~ qsec, data = mtcars)))

## Not run:
# It can be used in a pipe workflow
lm(mpg ~ qsec, data = mtcars) |>
  estimate_relation(ci = c(0.5, 0.8, 0.9)) |>
  plot()

# Customize aesthetics ----------
plot(x,
  point = list(color = "red", alpha = 0.6, size = 3),
  line = list(color = "blue", size = 3),
  ribbon = list(fill = "green", alpha = 0.7)
) +
  theme_minimal() +
  labs(title = "Relationship between MPG and WT")

# Customize raw data -------------
plot(x, point = list(geom = "density_2d_filled"), line = list(color = "white")) +
  scale_x_continuous(expand = c(0, 0)) +
  scale_y_continuous(expand = c(0, 0)) +
  theme(legend.position = "none")

# Single predictors examples -----------
plot(estimate_relation(lm(Sepal.Length ~ Species, data = iris)))

# 2-ways interaction ------------
# Numeric * numeric
x <- estimate_relation(lm(mpg ~ wt * qsec, data = mtcars))
plot(x)

# Numeric * factor
x <- estimate_relation(lm(Sepal.Width ~ Sepal.Length * Species, data = iris))
plot(x)

# ==============================================
# estimate_means
# ==============================================

# Simple Model ---------------
x <- estimate_means(lm(Sepal.Width ~ Species, data = iris), by = "Species")
layers <- visualisation_recipe(x)
layers
plot(layers)

# Customize aesthetics
layers <- visualisation_recipe(x,
  point = list(width = 0.03, color = "red"),
  pointrange = list(size = 2, linewidth = 2),
  line = list(linetype = "dashed", color = "blue")
)
plot(layers)

# Two levels ---------------
data <- mtcars
data$cyl <- as.factor(data$cyl)
model <- lm(mpg ~ cyl * wt, data = data)
x <- estimate_means(model, by = c("cyl", "wt"))
plot(x)

# GLMs ---------------------
data <- data.frame(vs = mtcars$vs, cyl = as.factor(mtcars$cyl))
x <- estimate_means(glm(vs ~ cyl, data = data, family = "binomial"), by = c("cyl"))
plot(x)

## End(Not run)

# ==============================================
# estimate_slopes
# ==============================================
model <- lm(Sepal.Width ~ Species * Petal.Length, data = iris)
x <- estimate_slopes(model, trend = "Petal.Length", by = "Species")
layers <- visualisation_recipe(x)
layers
plot(layers)

## Not run:
# Customize aesthetics and add horizontal line and theme
layers <- visualisation_recipe(x, pointrange = list(size = 2, linewidth = 2))
plot(layers) +
  geom_hline(yintercept = 0, linetype = "dashed", color = "red") +
  theme_minimal() +
  labs(y = "Effect of Petal.Length", title = "Marginal Effects")

model <- lm(Petal.Length ~ poly(Sepal.Width, 4), data = iris)
x <- estimate_slopes(model, trend = "Sepal.Width", by = "Sepal.Width", length = 20)
plot(visualisation_recipe(x))

model <- lm(Petal.Length ~ Species * poly(Sepal.Width, 3), data = iris)
x <- estimate_slopes(model, trend = "Sepal.Width", by = c("Sepal.Width", "Species"))
plot(visualisation_recipe(x))

## End(Not run)

# ==============================================
# estimate_grouplevel
# ==============================================
## Not run:
data <- lme4::sleepstudy
data <- rbind(data, data)
data$Newfactor <- rep(c("A", "B", "C", "D"))

# 1 random intercept
model <- lme4::lmer(Reaction ~ Days + (1 | Subject), data = data)
x <- estimate_grouplevel(model)
layers <- visualisation_recipe(x)
layers
plot(layers)

# 2 random intercepts
model <- lme4::lmer(Reaction ~ Days + (1 | Subject) + (1 | Newfactor), data = data)
x <- estimate_grouplevel(model)
plot(x) +
  geom_hline(yintercept = 0, linetype = "dashed") +
  theme_minimal()
# Note: we need to use hline instead of vline because the axes are flipped

model <- lme4::lmer(Reaction ~ Days + (1 + Days | Subject) + (1 | Newfactor), data = data)
x <- estimate_grouplevel(model)
plot(x)

## End(Not run)
Find zero crossings of a vector, i.e., indices where the numeric variable crosses 0. This is useful for finding the points where a function changes direction, by looking at the zero crossings of its derivative.
zero_crossings(x)

find_inversions(x)
x : A numeric vector.
Vector of zero crossings or points of inversion.
Based on the uniroot.all function from the rootSolve package.
x <- sin(seq(0, 4 * pi, length.out = 100))
# plot(x, type = "b")
modelbased::zero_crossings(x)
modelbased::find_inversions(x)
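As a follow-up to the description above, a minimal sketch of using the zero
crossings of an approximate derivative to locate the extrema of a curve
(diff() is used here as a crude derivative):

x <- sin(seq(0, 4 * pi, length.out = 100))
# Zero crossings of the (approximate) derivative mark the maxima/minima of x
modelbased::zero_crossings(diff(x))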