King Bernard S. Saul¹, Joy B. Saul², Kristine T. Soberano³
¹Northern Iloilo State University, Estancia, Iloilo, Philippines
²Estancia National High School, Estancia, Iloilo, Philippines
³Northern Negros State College of Science and Technology, Old Sagay, Sagay City, Negros Occidental, Philippines
DOI: https://doi.org/10.47191/ijmra/v6-i4-03
ABSTRACT:
The traditional method of taking attendance on paper sheets is prone to errors such as impersonation, loss, and theft. To address these problems, automatic attendance systems have been built on identification technologies such as barcode badges, electronic tags, touch screens, magnetic stripe cards, and biometrics. Biometric technology identifies individuals by physiological or behavioral characteristics, but traditional biometric systems have limitations: traits can be damaged or altered over time, and variations in occlusion, pose, facial expression, and illumination degrade face recognition accuracy. Fingerprint identification relies on the distinctiveness of fingerprints and compares two impressions of the friction ridges on human fingers or toes to determine whether they belong to the same individual. Fingerprints fall into five primary categories: arch, tented arch, left loop, right loop, and whorl. Many recognition algorithms use minutiae-based matching, which identifies key features such as ridge endings and bifurcations. Deep learning algorithms, particularly convolutional neural networks, have improved identification accuracy by extracting features automatically from fingerprint images. As securing personal data becomes increasingly important, the Convolutional Neural Network (CNN) identification approach is recommended for improving accuracy and performance. This paper proposes a fingerprint identification system that combines three models: a CNN, a Softmax classifier, and a Random Forest (RF) classifier. The system uses the K-means and DBSCAN algorithms to separate the foreground and background regions, extracts features with a CNN regularized by dropout, and uses the Softmax layer as a recognizer. The proposed algorithm is evaluated on a public database and shows promising results, providing an accurate and efficient biometric identification system.
KEYWORDS: Fingerprint Identification, Convolutional Neural Network, Attendance Monitoring
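The pipeline outlined in the abstract (clustering-based foreground segmentation, learned features, a Softmax recognizer, and a Random Forest classifier) can be sketched in miniature with scikit-learn. This is an illustrative sketch under stated assumptions, not the authors' implementation: the CNN feature extractor is stood in for by synthetic toy features, the DBSCAN refinement step is omitted, and `LogisticRegression` plays the role of the Softmax recognizer (for two classes the two are equivalent). All data, block sizes, and variable names here are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Stage 1: foreground/background segmentation via K-means ---
# Fingerprint foreground blocks have high local variance (ridges);
# background blocks are nearly flat, so 1-D K-means on block variance
# separates the two regions.
def segment_foreground(img, block=8):
    h, w = img.shape
    blocks = img[:h - h % block, :w - w % block].reshape(
        h // block, block, w // block, block
    ).swapaxes(1, 2).reshape(-1, block * block)
    variances = blocks.var(axis=1, keepdims=True)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(variances)
    # The cluster with the higher mean variance is taken as foreground.
    fg = int(variances[labels == 1].mean() > variances[labels == 0].mean())
    return (labels == fg).reshape(h // block, w // block)

# Synthetic "fingerprint": a ridged sine texture in the centre, flat elsewhere.
img = np.zeros((64, 64))
img[16:48, 16:48] = np.sin(np.arange(32) * 0.8)[None, :] + rng.normal(0, 0.1, (32, 32))
mask = segment_foreground(img)          # 8x8 grid of foreground flags

# --- Stage 2: classification on extracted features ---
# Stand-in for CNN features: random 64-dim vectors with separable toy
# identity labels for two "subjects".
X = rng.normal(0, 1, (200, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
softmax_clf = LogisticRegression(max_iter=500).fit(X, y)   # Softmax recognizer
rf_clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

acc_soft = softmax_clf.score(X, y)
acc_rf = rf_clf.score(X, y)
```

Running the sketch segments the 16 textured centre blocks as foreground, and both classifiers fit the separable toy labels almost perfectly; in the actual system the toy features would be replaced by the dropout-regularized CNN's learned representation.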
VOLUME 06 ISSUE 04 APRIL 2023
This is an Open Access article, distributed under the terms of the Creative Commons Attribution – NonCommercial 4.0 International (CC BY-NC 4.0) license (https://creativecommons.org/licenses/by-nc/4.0/), which permits remixing, adapting, and building upon the work for non-commercial use, provided the original work is properly cited.