As children, we first learn to speak and only later to read and write. And when we write, we use letters that correspond to sounds. Consequently, the reading system in your brain is strongly connected to other brain systems, such as those for speech and memory. I am going to study how these systems interact and how new systems are built on top of existing ones. Using deep learning, I will create computer models of the various visual and auditory systems in the brain. Recordings of brain activity from both adults and children will guide my designs. The goal is to create a computer model that can perform some basic tasks, such as recognizing written and spoken words, while the activity flowing through the model matches the brain activity of a human performing the same task. By experimenting with the training sequence and taking basic brain anatomy into account, I aim to develop a new theory of how your brain performs basic language processing.
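
To make the idea of matching model activity to brain activity a little more concrete, below is a minimal sketch of one common way such comparisons are made: representational similarity analysis. This is an illustration only, not the project's actual method or code; the network activations and brain recordings are random placeholder data, and all names in the sketch are hypothetical.

    # Minimal sketch (illustration only): comparing a model layer's activations
    # to recorded brain responses with representational similarity analysis.
    # Random data stands in for real model activations and real brain recordings.
    import numpy as np

    rng = np.random.default_rng(seed=0)
    n_stimuli = 20                                    # e.g. 20 written words
    model_acts = rng.normal(size=(n_stimuli, 128))    # one model layer, per stimulus
    brain_resp = rng.normal(size=(n_stimuli, 64))     # brain sensors, per stimulus

    def rdm(responses):
        # Representational dissimilarity matrix: 1 - correlation between the
        # response patterns evoked by each pair of stimuli.
        return 1.0 - np.corrcoef(responses)

    def rsa_score(rdm_a, rdm_b):
        # Correlate the upper triangles of two RDMs; a high value means the
        # model and the brain treat the stimuli as similar in the same way.
        upper = np.triu_indices_from(rdm_a, k=1)
        return np.corrcoef(rdm_a[upper], rdm_b[upper])[0, 1]

    score = rsa_score(rdm(model_acts), rdm(brain_resp))
    print(f"Model-brain similarity: {score:.3f}")     # near zero for random data

In the real project, the placeholders would be replaced by activations from trained models of the visual and auditory systems and by actual recordings of brain activity, so that a score like this can indicate how brain-like a given model design is.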