In this paper, we present a methodology for the generation, manipulation, and form finding of structural typologies using variational autoencoders, a machine learning model based on neural networks. We give a detailed description of the neural network architecture as well as of the data representation, which is based on the concept of a 3D canvas with voxelized wireframes. In this 3D canvas, the input geometry of the building typologies is represented through its connectivity map and subsequently augmented to increase the size of the training set. Our variational autoencoder model then learns a continuous latent distribution of the input data, from which we can sample to generate new geometry instances, essentially hybrids of the initial input geometries. Finally, we present the results of these computational experiments and lay out our conclusions as well as an outlook on future research in this field.
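The two latent-space operations named above, sampling from the learned distribution and blending input instances into hybrids, can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the function names, the latent dimension of 8, and the encoder outputs are all hypothetical, and the decoder that would map a latent code back to a voxelized wireframe is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) via z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def interpolate(z_a, z_b, t):
    """Linearly blend two latent codes; t in [0, 1]."""
    return (1.0 - t) * z_a + t * z_b

# Hypothetical encoder outputs for two input typologies (latent dim = 8).
mu_a, log_var_a = np.zeros(8), np.full(8, -2.0)
mu_b, log_var_b = np.ones(8), np.full(8, -2.0)

# Sample a latent code for each typology.
z_a = reparameterize(mu_a, log_var_a, rng)
z_b = reparameterize(mu_b, log_var_b, rng)

# A "hybrid" code halfway between the two; in the full pipeline this would
# be passed through the decoder to produce a new wireframe geometry.
z_hybrid = interpolate(z_a, z_b, 0.5)
print(z_hybrid.shape)
```

Because the latent distribution is continuous, any point along the interpolation path is a valid code to decode, which is what makes this kind of hybrid generation possible.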