Learning Explainable Decision Rules via Maximum Satisfiability

Henrik Cao, Riku Sarlin, Alex Jung

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Decision trees are a popular choice for providing explainable machine learning, since they make explicit how different features contribute to a prediction. We apply tools from constraint satisfaction to learn optimal decision trees in the form of sparse k-CNF (conjunctive normal form) rules. We develop two methods offering different trade-offs between accuracy and computational complexity: an offline method that learns decision trees from the entire training dataset, and an online method that learns decision trees over a local subset of the training dataset, obtained from training examples near a query point. The developed methods are applied to a number of datasets in both online and offline settings. We find that our methods learn decision trees that are significantly more accurate than those learned by existing heuristic approaches. However, the global decision tree model tends to be computationally more expensive than heuristic approaches. The online method is faster to train and finds smaller decision trees with accuracy comparable to that of the k-nearest-neighbour method.
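To make the two-method setup concrete, the sketch below illustrates the general idea under stated assumptions: a greedy clause-selection heuristic stands in for the paper's MaxSAT-based optimization, features are assumed already binarized, and all function names (`local_subset`, `learn_k_cnf`, `predict`) are hypothetical, not from the paper. A learned rule is a conjunction of clauses (disjunctions of up to k literals); an example is classified positive when every clause is satisfied. The online variant simply restricts learning to the training examples nearest a query point.

```python
import numpy as np
from itertools import combinations

def clause_satisfied(clause, x):
    # clause: tuple of (feature_index, polarity); the literal holds
    # when x[i] == polarity, and a clause is a disjunction of literals
    return any(x[i] == p for i, p in clause)

def local_subset(X, y, query, m):
    # Online setting: keep the m training examples closest to the
    # query (Hamming distance, since features are binary)
    d = np.sum(X != query, axis=1)
    idx = np.argsort(d)[:m]
    return X[idx], y[idx]

def learn_k_cnf(X, y, k=2):
    # Greedy stand-in for a MaxSAT-optimal learner: keep only clauses
    # satisfied by every positive example, then repeatedly add the
    # clause that rejects the most not-yet-rejected negatives.
    n = X.shape[1]
    literals = [(i, p) for i in range(n) for p in (0, 1)]
    pos = [x for x, lab in zip(X, y) if lab == 1]
    neg = [x for x, lab in zip(X, y) if lab == 0]
    cands = []
    for size in range(1, k + 1):
        for clause in combinations(literals, size):
            if all(clause_satisfied(clause, x) for x in pos):
                cands.append(clause)
    cnf, remaining = [], list(neg)
    while remaining:
        best = max(cands, default=None,
                   key=lambda c: sum(not clause_satisfied(c, x)
                                     for x in remaining))
        if best is None or all(clause_satisfied(best, x) for x in remaining):
            break  # no candidate clause rejects any remaining negative
        cnf.append(best)
        remaining = [x for x in remaining if clause_satisfied(best, x)]
    return cnf

def predict(cnf, x):
    # Positive iff every clause of the CNF is satisfied
    return int(all(clause_satisfied(c, x) for c in cnf))
```

For example, on data labelled by `x0 OR x1` the greedy search recovers the single clause `(x0 ∨ x1)`; the actual paper replaces this greedy step with a MaxSAT encoding that finds provably optimal sparse rules.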
Original language: English
Article number: 9272729
Pages (from-to): 218180-218185
Number of pages: 6
Journal: IEEE Access
Volume: 8
Early online date: 2020
DOIs
Publication status: Published - 2020
MoE publication type: A1 Journal article-refereed

Keywords

  • Ethics in modelling
  • Explainable AI
  • decision aiding tools
  • Satisfiability checking
  • machine learning

