Generation of geometric interpolations of building types with deep variational autoencoders

Jaime de Miguel*, Maria Eugenia Villafane, Luka Piškorec, Fernando Sancho-Caparrini

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

This work presents a methodology for the generation of novel 3D objects resembling wireframes of building types. These result from the reconstruction of interpolated locations within the learnt distribution of variational autoencoders (VAEs), a deep generative machine learning model based on neural networks. The data set used features a scheme for geometry representation based on a ‘connectivity map’ that is especially suited to express the wireframe objects that compose it. Additionally, the input samples are generated through ‘parametric augmentation’, a strategy proposed in this study that creates coherent variations among data by enabling a set of parameters to alter representative features of a given building type. In the experiments described in this paper, more than 150,000 input samples belonging to two building types have been processed during the training of a VAE model. The main contribution of this paper is to explore parametric augmentation for the generation of large data sets of 3D geometries, showcasing its problems and limitations in the context of neural networks and VAEs. Results show that the generation of interpolated hybrid geometries is a challenging task. Despite the difficulty of the endeavour, promising advances are presented.
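The ‘connectivity map’ representation and ‘parametric augmentation’ strategy described in the abstract can be sketched as follows. This is a minimal illustration only: the gable-house geometry, the parameter names, and their ranges are assumptions chosen for the example, not the paper's actual encoding or data set.

```python
import random

def gable_house_wireframe(width, depth, height, roof_height):
    """Hypothetical wireframe for a gable-roofed building type:
    8 box vertices (floor + eaves) plus 2 roof-ridge vertices."""
    w, d, h = width / 2, depth / 2, height
    vertices = [
        (-w, -d, 0), (w, -d, 0), (w, d, 0), (-w, d, 0),      # floor corners
        (-w, -d, h), (w, -d, h), (w, d, h), (-w, d, h),      # eaves corners
        (0, -d, h + roof_height), (0, d, h + roof_height),   # ridge ends
    ]
    # 'Connectivity map': edges stored as index pairs into the vertex list,
    # so the topology stays fixed while vertex positions vary.
    edges = [
        (0, 1), (1, 2), (2, 3), (3, 0),          # floor
        (4, 5), (5, 6), (6, 7), (7, 4),          # eaves
        (0, 4), (1, 5), (2, 6), (3, 7),          # wall edges
        (4, 8), (5, 8), (6, 9), (7, 9), (8, 9),  # roof edges
    ]
    return vertices, edges

def parametric_augmentation(n_samples, seed=0):
    """Sample parameter ranges to create coherent variations of one
    building type, yielding many training samples for a VAE."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        params = dict(
            width=rng.uniform(4, 12),
            depth=rng.uniform(6, 20),
            height=rng.uniform(3, 9),
            roof_height=rng.uniform(1, 4),
        )
        samples.append(gable_house_wireframe(**params))
    return samples

samples = parametric_augmentation(1000)
```

Every sample shares the same edge topology and vertex count, which is what makes a fixed-size tensor encoding (and hence VAE training) straightforward for geometries of one type.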
Original language: English
Article number: 34
Number of pages: 35
Journal: Design Science
Volume: 6
Issue number: e34
DOIs
Publication status: Published - 28 Dec 2020
MoE publication type: A1 Journal article-refereed

