Transformers to Predict the Applicability of Symbolic Integration Routines

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding › peer-review


Abstract

Symbolic integration is a fundamental problem in mathematics: we consider how machine learning may be used to optimise this task in a Computer Algebra System (CAS). We train transformers that predict whether a particular integration method will be successful, and compare them against the existing human-made heuristics (called guards) that perform this task in a leading CAS. We find that the transformer can outperform these guards, gaining up to 30% accuracy and 70% precision. We further show that the inference time of the transformer is inconsequential, which makes it well-suited for inclusion as a guard in a CAS. Furthermore, we use Layer Integrated Gradients to interpret the decisions that the transformer is making. If guided by a subject-matter expert, the technique can explain some of the predictions based on the input tokens, which can lead to further optimisations.
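The Layer Integrated Gradients method named in the abstract builds on Integrated Gradients (Sundararajan et al., 2017), which attributes a model's prediction to its inputs by integrating gradients along a path from a baseline to the input. A minimal numerical sketch follows; the toy linear model and all names here are illustrative assumptions, not the paper's transformer or code.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=200):
    """Approximate IG_i = (x_i - b_i) * integral_0^1 df/dx_i(b + a*(x - b)) da
    via the midpoint rule.  grad_f maps an input vector to the gradient of the
    model's scalar output at that point."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of [0, 1] subintervals
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * (total / steps)

# Toy "model": a linear score f(x) = w . x, so gradients are constant and
# the attributions should recover w_i * (x_i - b_i) exactly.
w = np.array([2.0, -1.0, 0.5])
f = lambda x: float(w @ x)
grad_f = lambda x: w

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
attr = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline).
assert abs(attr.sum() - (f(x) - f(baseline))) < 1e-6
```

For a transformer, the "layer" variant applies the same path integral to a chosen layer's activations (e.g. the token embeddings), which is what yields per-token attributions of the kind the abstract describes.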
Original language: English
Title of host publication: Proceedings of the 4th Workshop on Mathematical Reasoning and AI (MATH-AI 2024) at NeurIPS 2024
Number of pages: 10
Publication status: Accepted/In press - 9 Oct 2024
Event: The 4th Workshop on Mathematical Reasoning and AI - Vancouver, Canada
Duration: 14 Dec 2024 – 14 Dec 2024
Internet address: https://mathai2024.github.io/

Conference

Conference: The 4th Workshop on Mathematical Reasoning and AI
Abbreviated title: MATH-AI 24
Country/Territory: Canada
City: Vancouver
Period: 14/12/24 – 14/12/24

Keywords

  • Transformers
  • Explainability
  • Computer Algebra
  • Symbolic Integration
