
JACIII Vol.30 No.1, pp. 258-266 (2026)
doi: 10.20965/jaciii.2026.p0258

Research Paper:

Minimalist Machine Learning: Classifying Patterns with a Single Attribute

Noé Oswaldo Rodríguez*, Yenny Villuendas-Rey**, Cornelio Yáñez Márquez*,†, and Antonio Alarcón-Paredes*

*Centro de Investigación en Computación, Instituto Politécnico Nacional
Av. Juan de Dios Bátiz S/N, Nueva Industrial Vallejo, Gustavo A. Madero, CDMX 07738, México

†Corresponding author

**Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional
Av. Juan de Dios Bátiz S/N, Nueva Industrial Vallejo, Gustavo A. Madero, CDMX 07738, México

Received: April 16, 2025
Accepted: September 8, 2025
Published: January 20, 2026

Keywords: minimalist machine learning, Mexican axolotl optimization, interpretable AI, binary classification, dimensionality reduction
Abstract

This article introduces MML-MAO (minimalist machine learning-Mexican axolotl optimization), a model belonging to the recently developed MML paradigm. The underlying premise of this paradigm is a version of Occam’s razor: in machine learning, effective, efficient, simple, and interpretable models are preferable. The conceptual and operational foundation of MML-MAO consists of selecting a single attribute from the full set of features in the dataset. The MAO metaheuristic then adds further features to this initial attribute. For each training instance, the arithmetic mean of the small feature set selected by MAO is computed; the class of the instance is then determined by comparing this mean with a threshold. The experiments used 5-fold cross-validation as the validation method and the F1-score as the performance metric. The results of MML-MAO were compared with those of eight state-of-the-art machine learning algorithms across 22 datasets. According to the Friedman ranking, MML-MAO achieved the best overall performance, and according to the Bonferroni–Dunn test, it is statistically indistinguishable from SVM and statistically outperforms decision trees. These results confirm that compact, white-box models can outperform more sophisticated and complex models in less time and without the need for post-hoc interpretability tests. It is concluded that minimalist approaches deserve consideration by practitioners, especially in domains where transparency, interpretability, and scalability are essential.
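The classification rule described in the abstract — averaging a small MAO-selected feature subset and comparing the mean against a threshold — can be sketched in a few lines of Python. The feature subset and threshold below are hypothetical stand-ins for the output of the MAO search, which the paper uses to choose both; this is a minimal illustration of the decision rule, not the authors' implementation.

```python
import numpy as np

def classify(X, selected, threshold):
    """Predict class 1 when the mean of the selected features exceeds the threshold.

    X        : (n_instances, n_features) array
    selected : indices of the small feature subset (in the paper, found by MAO)
    threshold: decision cutoff (in the paper, also tuned during the MAO search)
    """
    means = X[:, selected].mean(axis=1)  # arithmetic mean per instance
    return (means > threshold).astype(int)

# Toy data: class-1 instances have larger values in feature 0.
X = np.array([[0.9, 0.1],
              [0.8, 0.3],
              [0.2, 0.5],
              [0.1, 0.4]])
y = np.array([1, 1, 0, 0])

# Hypothetical MAO result: a single-feature subset and a threshold.
selected = np.array([0])
threshold = 0.5

pred = classify(X, selected, threshold)  # -> [1, 1, 0, 0]
```

Because the model is just a feature subset, a mean, and a threshold, its prediction for any instance can be verified by hand, which is the interpretability property the MML paradigm emphasizes.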

Cite this article as:
N. O. Rodríguez, Y. Villuendas-Rey, C. Yáñez Márquez, and A. Alarcón-Paredes, “Minimalist Machine Learning: Classifying Patterns with a Single Attribute,” J. Adv. Comput. Intell. Intell. Inform., Vol.30 No.1, pp. 258-266, 2026.

