# Post-Hoc Modification of Linear Models

Research output: Artistic and non-textual form › Software › Scientific

### Standard

**Post-Hoc Modification of Linear Models.** van Vliet, Marijn (Author). 2019.

### Harvard

*Post-Hoc Modification of Linear Models*, 2019, Software.

### RIS - Download

TY - ADVS

T1 - Post-Hoc Modification of Linear Models

AU - van Vliet, Marijn

PY - 2019/1/11

Y1 - 2019/1/11

N2 - Linear machine learning models are a powerful tool that can “learn” a data transformation by being exposed to examples of input with the desired output, thereby forming the basis for a variety of powerful techniques for analyzing neuroimaging data. However, their ability to learn the desired transformation is limited by the quality and size of the example dataset, which in neuroimaging studies is often notoriously noisy and small. In these cases, it is desirable to fine-tune the learned linear model using domain information beyond the example dataset. In the presence of collinearities in the data (which in neuroimaging is almost always the case), it is easier to formulate domain knowledge in terms of the pattern matrix [2] than in terms of the weights. For example, in a source estimation setting, the pattern matrix is the leadfield (i.e., the forward solution) and the weight matrix is the inverse solution. The post-hoc adaptation framework combines the insight of Haufe et al. [2] that a pattern matrix can be computed for any linear model with the insight from source estimation methods that priors formulated on the pattern matrix can be translated into priors on the weight matrix. The framework decomposes the weight matrix of a linear model into three subcomponents:

1. the covariance matrix of the data, which describes the scale of the input features and the relationships between them;
2. the pattern matrix, which describes the signal of interest (see Haufe et al. [2]);
3. the normalizer, which describes the scale of the result and the relationships between the outputs of the model.

UR - https://users.aalto.fi/~vanvlm1/posthoc/python

M3 - Software

ER -

ID: 34657795
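The three-part decomposition described in the abstract can be sketched in plain NumPy. This is an illustrative reconstruction under stated assumptions, not the `posthoc` package's actual API: following Haufe et al. [2], the pattern matrix is taken as `cov(X) @ W @ inv(cov(y_hat))`, and `cov(y_hat)` is used as the normalizer, so that the weights can be reassembled from the three subcomponents.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 5 correlated input features, 2 outputs.
n, m, k = 200, 5, 2
X = rng.standard_normal((n, m)) @ rng.standard_normal((m, m))
X -= X.mean(axis=0)
y = X @ rng.standard_normal((m, k))
y -= y.mean(axis=0)

# Ordinary least-squares weights of the linear model: y ≈ X @ W
W = np.linalg.lstsq(X, y, rcond=None)[0]

# 1. Covariance matrix of the data: scale of the input features
#    and the relationships between them.
cov_X = (X.T @ X) / n

# 2. Pattern matrix, computed as in Haufe et al. [2]:
#    A = cov(X) @ W @ inv(cov(y_hat))
y_hat = X @ W
cov_yhat = (y_hat.T @ y_hat) / n
pattern = cov_X @ W @ np.linalg.inv(cov_yhat)

# 3. Normalizer: scale of the result and the relationships between
#    the outputs (here taken to be cov(y_hat); an assumption for
#    this sketch).
normalizer = cov_yhat

# Reassembling the three subcomponents recovers the original weights:
# W = inv(cov(X)) @ pattern @ normalizer
W_reassembled = np.linalg.inv(cov_X) @ pattern @ normalizer
assert np.allclose(W, W_reassembled)
```

Because the reassembly is exact by construction, each subcomponent can be modified independently (e.g., replacing the pattern with a prior-informed one) before recomputing the weights, which is the core idea of the post-hoc framework.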