From Scores to Decisions: Comparing Logistic Regression, Random Forest, and XGBoost for Calibrated, Cost‑Sensitive Credit Default Prediction
Research Article
Open Access
CC BY

Han Wu 1*
1 University of Nottingham
*Corresponding author: liyhw30@nottingham.ac.uk
Published on 11 November 2025
AEMPS Vol.240
ISSN (Print): 2754-1169
ISSN (Online): 2754-1177
ISBN (Print): 978-1-80590-527-1
ISBN (Online): 978-1-80590-528-8

Abstract

Predicting the likelihood that a borrower will default on a loan is a fundamental task in credit risk management. Traditional credit scoring relies on logistic regression, but the rise of machine learning has brought more flexible alternatives such as Random Forest and XGBoost. While these methods can yield higher predictive accuracy, they also raise concerns about probability calibration, cost-sensitive decision rules, and interpretability. This work compares Logistic Regression, Random Forest, and XGBoost on a publicly available credit risk dataset. After standardising numerical variables, encoding categorical variables, and handling missing values, this study trains each model with cross-validated hyper-parameters. It evaluates discrimination (Receiver Operating Characteristic Area Under the Curve and Precision–Recall Area Under the Curve; hereafter ROC AUC and PR AUC), calibration (Brier score and reliability curves), and derives cost-sensitive thresholds under the assumption that false negatives are five times more costly than false positives. Results show that XGBoost achieves the highest ROC AUC (≈ 0.95) and PR AUC (≈ 0.89) while maintaining good calibration. Appropriate threshold tuning reduces expected losses substantially: for example, lowering the logistic regression cut-off to 0.2 raises recall from 17% to 78%. A detailed discussion of feature importance and model interpretability is presented, and the research outlines implications for deploying modern scoring models under regulatory constraints. This paper aims to bridge the gap between algorithmic advances and their responsible application “from scores to decisions.”
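The evaluation and thresholding pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual code: the synthetic data, feature counts, and the threshold grid are assumptions; only the metrics (ROC AUC, PR AUC, Brier score) and the 5:1 false-negative-to-false-positive cost ratio come from the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score, brier_score_loss

# Synthetic stand-in for a credit dataset: ~15% of borrowers default (class 1).
X, y = make_classification(n_samples=4000, n_features=10, weights=[0.85],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Standardise features, then fit a logistic regression scorecard.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
p = model.predict_proba(X_te)[:, 1]  # predicted default probabilities

# Discrimination and calibration metrics used in the paper.
print("ROC AUC:", roc_auc_score(y_te, p))
print("PR AUC :", average_precision_score(y_te, p))
print("Brier  :", brier_score_loss(y_te, p))

# Cost-sensitive threshold: a missed default (FN) costs 5x a wrongly
# rejected good borrower (FP).
C_FN, C_FP = 5.0, 1.0

def expected_cost(threshold, y_true, scores):
    pred = scores >= threshold
    fn = np.sum((y_true == 1) & ~pred)
    fp = np.sum((y_true == 0) & pred)
    return C_FN * fn + C_FP * fp

# Search a grid of cut-offs for the one minimising expected cost.
thresholds = np.linspace(0.01, 0.99, 99)
best_t = min(thresholds, key=lambda t: expected_cost(t, y_te, p))
print("cost-minimising threshold:", round(best_t, 2))
```

For well-calibrated probabilities, the cost-minimising cut-off is approximately C_FP / (C_FP + C_FN) = 1/6 ≈ 0.17, which is consistent with the abstract's finding that lowering the logistic regression cut-off to 0.2 sharply increases recall.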

Keywords:

Credit scoring, logistic regression, random forest, XGBoost, cost-sensitive learning.


References

[1]. Zhang, X., Zhang, T., Hou, L., Liu, X., Guo, Z., Tian, Y., & Liu, Y. (2023). Data‑Driven Loan Default Prediction: A Machine Learning Approach for Enhancing Business Process Management. Systems, 13(7), 581.

[2]. Wang, H., Wong, S. T., Ganatra, M. A., & Luo, J. (2024). Credit Risk Prediction Using Machine Learning and Deep Learning: A Study on Credit Card Customers. Risks, 12(11), 174.

[3]. Yang, S., Huang, Z., Xiao, W., & Shen, X. (2025). Interpretable Credit Default Prediction with Ensemble Learning and SHAP. arXiv preprint arXiv:2505.20815.

[4]. Gunnarsson, B. R., vanden Broucke, S. K. L., Baesens, B., & Lemahieu, W. (2021). Deep learning for credit scoring: Do or don’t? European Journal of Operational Research, 295(1), 292–305.

[5]. Alonso‑Robisco, A., & Carbó, J. M. (2022a). Can machine learning models save capital for banks? Evidence from a Spanish credit portfolio. International Review of Financial Analysis, 84, 102372.

[6]. Zedda, S. (2024). Credit scoring: Does XGBoost outperform logistic regression? A test on Italian SMEs. Research in International Business and Finance, 70, 102397.

[7]. Xiao, J., Li, S., Tian, Y., Huang, J., Jiang, X., & Wang, S. (2025). Example‑dependent cost‑sensitive learning based selective deep ensemble model for customer credit scoring. Scientific Reports, 15, Article 6000.

[8]. Chen, Y., Calabrese, R., & Martin‑Barragán, B. (2024). Interpretable machine learning for imbalanced credit scoring datasets. European Journal of Operational Research, 312(1), 357–372.

[9]. Rudin, C. (2019). Stop explaining black box machine learning models for high‑stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215.

[10]. Alonso‑Robisco, A., & Carbó, J. M. (2022b). Measuring the model risk‑adjusted performance of machine learning algorithms in credit default prediction. Financial Innovation, 8, Article 70.

Cite this article

Wu, H. (2025). From Scores to Decisions: Comparing Logistic Regression, Random Forest, and XGBoost for Calibrated, Cost‑Sensitive Credit Default Prediction. Advances in Economics, Management and Political Sciences, 240, 71–80.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

About volume

Volume title: Proceedings of ICFTBA 2025 Symposium: Data-Driven Decision Making in Business and Economics

ISBN: 978-1-80590-527-1 (Print) / 978-1-80590-528-8 (Online)
Editor: Lukášak Varti
Conference date: 12 December 2025
Series: Advances in Economics, Management and Political Sciences
Volume number: Vol.240
ISSN: 2754-1169 (Print) / 2754-1177 (Online)