A recent trend in HCI has been the reuse of social media to augment face-to-face interactions amongst strangers, in which digital media are displayed during face-to-face encounters. Prior work has shown that displaying media when co-present with a stranger can help to support conversation. However, existing work treats social media as a raw resource, using algorithmic matching to identify shared topics between individuals and presenting these as text. We therefore do not know how users would choose digital media to represent themselves to others, or how they would wish it to be displayed. This matters because existing work fails to take into account the rich practices around how users choose to represent themselves online, and the implications if unwanted data are disclosed. Through a two-part study, 32 participants designed a digital representation of themselves that could be presented to strangers in face-to-face interaction; we then studied how these representations were employed. Our results show that users prefer more social, rich and ambiguous content, the majority of which comes from outside existing social and digital media services. Ambiguous content both helped to sustain conversation and served as a means of controlling the disclosure of information. By considering two display technologies (an HMD and a smartwatch), we are also able to decouple the role of the visualisation from how it is displayed, identifying how showing the visualisation can aid the conversation.