1 Faculty of Industrial Engineering, Jordan University of Science and Technology, Irbid, Jordan.
2 Department of Computer Information Systems, Jordan University of Science and Technology, Irbid, Jordan.
International Journal of Science and Research Archive, 2025, 14(03), 1017-1025
Article DOI: 10.30574/ijsra.2025.14.3.0777
Received on 10 February 2025; revised on 17 March 2025; accepted on 20 March 2025
Coin classification is challenging but crucial for applications such as vending machines, cash registers, and self-service kiosks. Coins are handled daily in banks, grocery stores, malls, supermarkets, and ATMs, so the capability to recognize coins automatically with high accuracy is essential. Deep learning image-processing models have recently shown promise for the coin classification problem: they can learn to identify and classify coins from visual features such as shape, size, and texture. The task is not easy, however, because many coins look alike, making it difficult to distinguish between coin types and classify them accurately. This paper proposes a coin classification system built on TensorFlow, a popular open-source library well suited to image processing and computer vision. Our system is designed for multi-class classification, meaning it can recognize and classify multiple types of coins simultaneously. We tested the system on a dataset of coin types from different countries, and the results were promising: it achieves high accuracy in coin recognition. We use the Czech coins dataset to classify the coin images, initialize from a model pre-trained on ImageNet-21k to improve accuracy, and fine-tune a Vision Transformer (ViT).
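The abstract's approach rests on feeding coin images to a Vision Transformer, whose first step is splitting each image into fixed-size patches that become input tokens. The following is a minimal NumPy sketch of that patch-embedding step only; the 224x224 image size and 16-pixel patches match the common ViT-Base configuration pretrained on ImageNet-21k, and are illustrative assumptions rather than the paper's reported settings.

```python
import numpy as np

def image_to_patches(img, patch=16):
    """Split an H x W x C image into flattened, non-overlapping patches,
    the tokenization step at the front of a Vision Transformer (ViT)."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    rows, cols = h // patch, w // patch
    # Reshape into a grid of patches, then flatten each patch to a vector.
    grid = img.reshape(rows, patch, cols, patch, c)
    patches = grid.transpose(0, 2, 1, 3, 4).reshape(rows * cols, patch * patch * c)
    return patches

# Example: a 224x224 RGB coin image yields 14x14 = 196 tokens,
# each a flattened 16*16*3 = 768-dimensional patch.
img = np.random.rand(224, 224, 3)
tokens = image_to_patches(img)
print(tokens.shape)  # (196, 768)
```

In a full ViT, each patch vector is then linearly projected, combined with position embeddings, and passed through transformer encoder layers before a classification head predicts the coin class.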
Coin classification; ViT; Deep learning; Vision transformer; Countries
AlMo’men Bellah Alawnah and Ola Hayajnah. Coins multi-class classification using vision TensorFlow. International Journal of Science and Research Archive, 2025, 14(03), 1017-1025. Article DOI: https://doi.org/10.30574/ijsra.2025.14.3.0777.
Copyright © 2025 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.