Abstract
Symbolic integration is a fundamental problem in mathematics: we consider how machine learning may be used to optimise this task in a Computer Algebra System (CAS). We train transformers that predict whether a particular integration method will be successful, and compare against the existing human-made heuristics (called guards) that perform this task in a leading CAS. We find that the transformer can outperform these guards, gaining up to 30% accuracy and 70% precision. We further show that the inference time of the transformer is negligible, which makes it well-suited for inclusion as a guard in a CAS. Furthermore, we use Layer Integrated Gradients to interpret the decisions that the transformer is making. If guided by a subject-matter expert, the technique can explain some of the predictions based on the input tokens, which can lead to further optimisations.
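To illustrate the attribution idea behind Layer Integrated Gradients (for transformers it is typically applied to an embedding layer, e.g. via the Captum library), here is a minimal sketch of plain Integrated Gradients on a toy differentiable function with an analytic gradient. The function `f` and the zero baseline are hypothetical stand-ins, not the paper's transformer or tokeniser; the sketch only demonstrates the path integral of gradients and the completeness property.

```python
def f(x):
    # Toy scalar "model": f(x) = sum of squares (hypothetical example).
    return sum(v * v for v in x)

def grad_f(x):
    # Analytic gradient of f: df/dx_i = 2 * x_i.
    return [2.0 * v for v in x]

def integrated_gradients(x, baseline, steps=50):
    """Approximate IG_i = (x_i - b_i) * integral_0^1 of
    df(b + a*(x - b))/dx_i da, using a midpoint Riemann sum."""
    n = len(x)
    total = [0.0] * n
    for k in range(steps):
        a = (k + 0.5) / steps  # midpoint of the k-th sub-interval
        point = [b + a * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        total = [t + gi for t, gi in zip(total, g)]
    return [(xi - b) * t / steps for xi, b, t in zip(x, baseline, total)]

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
ig = integrated_gradients(x, baseline)
# Completeness axiom: the attributions sum to f(x) - f(baseline),
# so each input's score is its share of the prediction change.
```

In the transformer setting of the abstract, the per-dimension attributions are summed over each token's embedding, giving a per-input-token relevance score that an expert can inspect.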
Original language | English
---|---
Title of host publication | Proceedings of the 4th Workshop on Mathematical Reasoning and AI (MATH-AI 2024) at NeurIPS 2024
Number of pages | 10
Publication status | Accepted/In press - 9 Oct 2024
Event | The 4th Workshop on Mathematical Reasoning and AI, Vancouver, Canada. Duration: 14 Dec 2024 → 14 Dec 2024. https://mathai2024.github.io/
Conference
Conference | The 4th Workshop on Mathematical Reasoning and AI
---|---
Abbreviated title | MATH-AI 24
Country/Territory | Canada
City | Vancouver
Period | 14/12/24 → 14/12/24
Internet address | https://mathai2024.github.io/
Keywords
- Transformers
- Explainability
- Computer Algebra
- Symbolic Integration