Learning Structure-Aware Representations of Dependent Types

Kokos Kogkalidis, Orestis Melkonian, Jean-Philippe Bernardy

Research output: Contribution to conference › Poster › Scientific › peer-review

Abstract

Agda is a dependently-typed programming language and a proof assistant, pivotal in proof formalization and programming language theory. This paper extends the Agda ecosystem into machine learning territory and, vice versa, makes Agda-related resources available to machine learning practitioners. We introduce and release a novel dataset of Agda program-proofs that is elaborate and extensive enough to support various machine learning applications, the first of its kind. Leveraging the dataset's ultra-high resolution, which details proof states at the sub-type level, we propose a novel neural architecture targeted at faithfully representing dependently-typed programs on the basis of structural rather than nominal principles. We instantiate and evaluate our architecture in a premise selection setup, where it achieves promising initial results, surpassing strong baselines.
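To make the premise selection setup concrete: the task is to rank candidate lemmas by their relevance to the current proof goal. The sketch below is purely illustrative and is not the paper's architecture; it substitutes a toy bag-of-token similarity for the learned structural encoder, and the lemma names and signatures are hypothetical.

```python
# Illustrative sketch only: premise selection framed as ranking candidate
# lemmas against a proof goal. A toy bag-of-token "embedding" stands in for
# the paper's neural encoder; the Agda-style lemmas below are hypothetical.
from collections import Counter
from math import sqrt


def embed(term: str) -> Counter:
    """Toy representation: a bag of tokens (stand-in for a learned encoder)."""
    return Counter(term.replace("(", " ").replace(")", " ").split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two token bags."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def rank_premises(goal: str, premises: list[str]) -> list[str]:
    """Order candidate lemmas by similarity to the current proof goal."""
    g = embed(goal)
    return sorted(premises, key=lambda p: cosine(g, embed(p)), reverse=True)


goal = "(n + m) ≡ (m + n)"
premises = [
    "*-assoc : (a b c : ℕ) → ((a * b) * c) ≡ (a * (b * c))",
    "+-comm : (a b : ℕ) → (a + b) ≡ (b + a)",
]
print(rank_premises(goal, premises)[0])  # the commutativity lemma ranks first
```

In the actual work, the scoring model is a neural network trained on the released dataset and operates on structural (sub-type-level) representations of terms rather than on surface tokens.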
Original language: English
Publication status: Published - 2024
MoE publication type: Not Eligible
Event: Conference on Neural Information Processing Systems - Vancouver, Canada
Duration: 10 Dec 2024 - 15 Dec 2024
Conference number: 38
https://neurips.cc/Conferences/2024

Conference

Conference: Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS
Country/Territory: Canada
City: Vancouver
Period: 10/12/2024 - 15/12/2024
Internet address: https://neurips.cc/Conferences/2024
