Predicting At-Risk Programming Students in Small Imbalanced Datasets using Synthetic Data

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding › peer-review

Abstract

This study is part of a larger project focused on measuring, understanding, and improving student engagement in programming education. We investigate whether synthetic data generation can help identify at-risk students earlier in a small, imbalanced dataset from an introductory programming module. The analysis used anonymised records from 379 students, of whom 15% were marked as failing, and applied several machine learning algorithms. Initial experiments showed poor recall for the failing group; however, applying synthetic data generation methods led to a significant improvement in performance. Our results suggest that, when combined with synthetic data, machine learning can help identify at-risk students early in programming courses. This research lays the groundwork for validating and using these models with live student cohorts in the future, enabling timely and effective interventions that can improve student outcomes. The study also includes a feature importance analysis to inform the refinement of formative tasks. Overall, it contributes to developing practical workflows that detect disengagement early and improve student success in programming education.
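The abstract does not name the specific synthetic data technique or classifiers used, so the sketch below is an illustration only of the general workflow it describes: train on a small imbalanced dataset, note the poor recall for the minority (failing) class, oversample that class with synthetic examples, retrain, and inspect feature importances. It assumes SMOTE from the imbalanced-learn library, a random forest classifier, and a hypothetical placeholder dataset that mimics the reported size (379 records, ~15% failing).

# Illustrative sketch only; SMOTE, the random forest, and the generated
# dataset are assumptions, not the paper's actual method or data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, recall_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Hypothetical stand-in for the anonymised student records:
# 379 samples, roughly 15% in the minority (failing) class 1.
X, y = make_classification(
    n_samples=379, n_features=10, n_informative=5,
    weights=[0.85, 0.15], random_state=42,
)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42,
)

# Baseline: train on the raw imbalanced data and check recall
# for the failing class, which is typically poor.
baseline = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print("Baseline recall (failing class):",
      recall_score(y_test, baseline.predict(X_test)))

# Oversample the failing class with synthetic examples, then retrain
# and compare per-class recall.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)
model = RandomForestClassifier(random_state=42).fit(X_res, y_res)
print(classification_report(y_test, model.predict(X_test)))

# Feature importance analysis, e.g. to inform refinement of formative tasks.
for name, importance in sorted(
        zip([f"feature_{i}" for i in range(X.shape[1])],
            model.feature_importances_),
        key=lambda item: item[1], reverse=True):
    print(f"{name}: {importance:.3f}")

In practice the placeholder features would be replaced by the module's engagement and assessment measures, and the recall of the failing class before and after resampling would be compared under cross-validation rather than a single split.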
Original language: English
Title of host publication: 26th International Conference on Artificial Intelligence in Education
Publisher: Springer Nature
Publication status: Submitted - 7 May 2025

Publication series

Name: Communications in Computer and Information Science
Publisher: Springer Nature
