From Vivaldi to Beatles and back: Predicting lateralized brain responses to music

Research output: Contribution to journal › Article › Scientific › peer-review

Standard

From Vivaldi to Beatles and back: Predicting lateralized brain responses to music. / Alluri, Vinoo; Toiviainen, Petri; Lund, Torben E.; Wallentin, Mikkel; Vuust, Peter; Nandi, Asoke K.; Ristaniemi, Tapani; Brattico, Elvira.

In: NeuroImage, Vol. 83, 12.2013, pp. 627-636.


Harvard

Alluri, V, Toiviainen, P, Lund, TE, Wallentin, M, Vuust, P, Nandi, AK, Ristaniemi, T & Brattico, E 2013, 'From Vivaldi to Beatles and back: Predicting lateralized brain responses to music', NeuroImage, vol. 83, pp. 627-636. https://doi.org/10.1016/j.neuroimage.2013.06.064

APA

Alluri, V., Toiviainen, P., Lund, T. E., Wallentin, M., Vuust, P., Nandi, A. K., ... Brattico, E. (2013). From Vivaldi to Beatles and back: Predicting lateralized brain responses to music. NeuroImage, 83, 627-636. https://doi.org/10.1016/j.neuroimage.2013.06.064

Vancouver

Alluri V, Toiviainen P, Lund TE, Wallentin M, Vuust P, Nandi AK et al. From Vivaldi to Beatles and back: Predicting lateralized brain responses to music. NeuroImage. 2013 Dec;83:627-636. https://doi.org/10.1016/j.neuroimage.2013.06.064

Author

Alluri, Vinoo; Toiviainen, Petri; Lund, Torben E.; Wallentin, Mikkel; Vuust, Peter; Nandi, Asoke K.; Ristaniemi, Tapani; Brattico, Elvira. / From Vivaldi to Beatles and back: Predicting lateralized brain responses to music. In: NeuroImage. 2013; Vol. 83, pp. 627-636.

BibTeX

@article{4e5682a4e8684572b272b3f50b993f9c,
title = "From {Vivaldi} to {Beatles} and back: Predicting lateralized brain responses to music",
abstract = "We aimed to predict the temporal evolution of brain activity under naturalistic music-listening conditions using a combination of neuroimaging and acoustic feature extraction. Participants were scanned using functional Magnetic Resonance Imaging (fMRI) while listening to two musical medleys, including pieces from various genres with and without lyrics. Regression models were built to predict voxel-wise brain activations, which were then tested in a cross-validation setting to evaluate the robustness of the resulting models across stimuli. To further assess the generalizability of the models, we extended the cross-validation procedure by including another dataset, which comprised continuous fMRI responses of musically trained participants to an Argentinean tango. Individual models for the two musical medleys revealed that activations in several brain areas belonging to the auditory, limbic, and motor regions could be predicted. Notably, activations in the medial orbitofrontal region and the anterior cingulate cortex, relevant for self-referential appraisal and aesthetic judgments, could be predicted successfully. Cross-validation across musical stimuli and participant pools helped identify a region of the right superior temporal gyrus, encompassing the planum polare and Heschl's gyrus, as the core structure processing complex acoustic features of musical pieces from various genres, with or without lyrics. Models based on purely instrumental music were able to predict activation in the bilateral auditory cortices, parietal, somatosensory, and left-hemispheric primary and supplementary motor areas. The presence of lyrics, on the other hand, weakened the prediction of activations in the left superior temporal gyrus. Our results suggest spontaneous emotion-related processing during naturalistic listening to music and provide supporting evidence for hemispheric specialization for categorical sounds with realistic stimuli. We hereby introduce a powerful means of predicting brain responses to music, speech, or soundscapes across a large variety of contexts.",
keywords = "Auditory cortex, Computational feature extraction, Cross-validation, FMRI, Naturalistic stimulus, Orbitofrontal cortex",
author = "Vinoo Alluri and Petri Toiviainen and Lund, {Torben E.} and Mikkel Wallentin and Peter Vuust and Nandi, {Asoke K.} and Tapani Ristaniemi and Elvira Brattico",
year = "2013",
month = dec,
doi = "10.1016/j.neuroimage.2013.06.064",
language = "English",
volume = "83",
pages = "627--636",
journal = "NeuroImage",
issn = "1053-8119",

}
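
The modeling approach summarized in the abstract (voxel-wise regression from acoustic features, validated across stimuli) can be illustrated with a minimal sketch. The Python example below is illustrative only: the arrays (X_medley1, Y_medley1, and so on) are hypothetical stand-ins for the acoustic feature and fMRI time series, and ridge regression is used as a generic regularized regressor, not necessarily the authors' exact method.

# Minimal sketch of a voxel-wise encoding analysis in the spirit of the
# abstract: fit regression models from acoustic features to fMRI time
# series on one stimulus, then test them on another. All data here are
# random placeholders, not the study's features or recordings.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_time, n_features, n_voxels = 300, 25, 1000

# Hypothetical acoustic features (X) and BOLD responses (Y) for two stimuli.
X_medley1 = rng.standard_normal((n_time, n_features))
Y_medley1 = rng.standard_normal((n_time, n_voxels))
X_medley2 = rng.standard_normal((n_time, n_features))
Y_medley2 = rng.standard_normal((n_time, n_voxels))

# One regularized linear model per voxel, fit on the first stimulus.
model = Ridge(alpha=1.0).fit(X_medley1, Y_medley1)

# Cross-stimulus validation: predict responses to the second stimulus and
# score each voxel by the correlation between predicted and observed series.
Y_pred = model.predict(X_medley2)
r = np.array([np.corrcoef(Y_pred[:, v], Y_medley2[:, v])[0, 1]
              for v in range(n_voxels)])
print("voxels with prediction r > 0.3:", int((r > 0.3).sum()))

Extending this to cross-validation across stimuli and participant pools, as the study describes, would repeat the fit/predict step for each train/test pairing and retain the voxels whose predictions hold up across all pairings.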

RIS

TY - JOUR

T1 - From Vivaldi to Beatles and back

T2 - Predicting lateralized brain responses to music

AU - Alluri, Vinoo

AU - Toiviainen, Petri

AU - Lund, Torben E.

AU - Wallentin, Mikkel

AU - Vuust, Peter

AU - Nandi, Asoke K.

AU - Ristaniemi, Tapani

AU - Brattico, Elvira

PY - 2013/12

Y1 - 2013/12

N2 - We aimed to predict the temporal evolution of brain activity under naturalistic music-listening conditions using a combination of neuroimaging and acoustic feature extraction. Participants were scanned using functional Magnetic Resonance Imaging (fMRI) while listening to two musical medleys, including pieces from various genres with and without lyrics. Regression models were built to predict voxel-wise brain activations, which were then tested in a cross-validation setting to evaluate the robustness of the resulting models across stimuli. To further assess the generalizability of the models, we extended the cross-validation procedure by including another dataset, which comprised continuous fMRI responses of musically trained participants to an Argentinean tango. Individual models for the two musical medleys revealed that activations in several brain areas belonging to the auditory, limbic, and motor regions could be predicted. Notably, activations in the medial orbitofrontal region and the anterior cingulate cortex, relevant for self-referential appraisal and aesthetic judgments, could be predicted successfully. Cross-validation across musical stimuli and participant pools helped identify a region of the right superior temporal gyrus, encompassing the planum polare and Heschl's gyrus, as the core structure processing complex acoustic features of musical pieces from various genres, with or without lyrics. Models based on purely instrumental music were able to predict activation in the bilateral auditory cortices, parietal, somatosensory, and left-hemispheric primary and supplementary motor areas. The presence of lyrics, on the other hand, weakened the prediction of activations in the left superior temporal gyrus. Our results suggest spontaneous emotion-related processing during naturalistic listening to music and provide supporting evidence for hemispheric specialization for categorical sounds with realistic stimuli. We hereby introduce a powerful means of predicting brain responses to music, speech, or soundscapes across a large variety of contexts.

AB - We aimed to predict the temporal evolution of brain activity under naturalistic music-listening conditions using a combination of neuroimaging and acoustic feature extraction. Participants were scanned using functional Magnetic Resonance Imaging (fMRI) while listening to two musical medleys, including pieces from various genres with and without lyrics. Regression models were built to predict voxel-wise brain activations, which were then tested in a cross-validation setting to evaluate the robustness of the resulting models across stimuli. To further assess the generalizability of the models, we extended the cross-validation procedure by including another dataset, which comprised continuous fMRI responses of musically trained participants to an Argentinean tango. Individual models for the two musical medleys revealed that activations in several brain areas belonging to the auditory, limbic, and motor regions could be predicted. Notably, activations in the medial orbitofrontal region and the anterior cingulate cortex, relevant for self-referential appraisal and aesthetic judgments, could be predicted successfully. Cross-validation across musical stimuli and participant pools helped identify a region of the right superior temporal gyrus, encompassing the planum polare and Heschl's gyrus, as the core structure processing complex acoustic features of musical pieces from various genres, with or without lyrics. Models based on purely instrumental music were able to predict activation in the bilateral auditory cortices, parietal, somatosensory, and left-hemispheric primary and supplementary motor areas. The presence of lyrics, on the other hand, weakened the prediction of activations in the left superior temporal gyrus. Our results suggest spontaneous emotion-related processing during naturalistic listening to music and provide supporting evidence for hemispheric specialization for categorical sounds with realistic stimuli. We hereby introduce a powerful means of predicting brain responses to music, speech, or soundscapes across a large variety of contexts.

KW - Auditory cortex

KW - Computational feature extraction

KW - Cross-validation

KW - FMRI

KW - Naturalistic stimulus

KW - Orbitofrontal cortex

UR - http://www.scopus.com/inward/record.url?scp=84881261703&partnerID=8YFLogxK

U2 - 10.1016/j.neuroimage.2013.06.064

DO - 10.1016/j.neuroimage.2013.06.064

M3 - Article

C2 - 23810975

AN - SCOPUS:84881261703

VL - 83

SP - 627

EP - 636

JO - NeuroImage

JF - NeuroImage

SN - 1053-8119

ER -
