Published: 2025-04-01
Deep Learning for Classification of Mammalian Reproduction
DOI: 10.35870/ijsecs.v5i1.3376
Nur Mufidah, Yuliska Zakaria, Mutia Sari Zulvi
Abstract
This research classifies mammalian reproduction using deep learning, comparing a baseline Convolutional Neural Network (CNN), VGG16, and MobileNetV2. The baseline CNN extracts visual features from images related to mammalian reproductive systems, while the VGG16 and MobileNetV2 architectures are employed to improve classification accuracy and efficiency. The dataset consists of images of mammalian reproductive organs, and data augmentation (rotation, zoom, flipping, and brightness adjustment) is applied to enrich the dataset's variety and reduce the risk of overfitting. VGG16 achieves the best performance, with an accuracy of 90.97%. MobileNetV2, while considerably less accurate (65.97%), excels in computational efficiency, making it well suited to mobile and resource-constrained environments. The baseline CNN, at 61.11% accuracy, shows that simpler architectures are less effective at handling the complexity of the dataset. The implementation of this technology is expected to support more accurate and automated analysis and diagnosis in mammalian reproduction. The findings provide insight into the strengths and weaknesses of each architecture and the trade-off between accuracy and computational efficiency, and highlight the role of data augmentation in improving the quality and diversity of the dataset, which in turn enhances model performance.
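The augmentation pipeline named in the abstract (rotation, zoom, flipping, and brightness adjustment) can be sketched in plain NumPy. This is a minimal illustration only: the parameter ranges below (90° rotation steps, zoom up to 1.2x, brightness shift of ±25) are assumptions for demonstration, not the settings used in the study.

```python
import numpy as np

def augment(image, rng):
    """Apply the augmentation types from the abstract: rotation,
    zoom, flipping, and brightness adjustment. Parameter ranges
    are illustrative, not the study's actual settings."""
    img = image.astype(np.float32)

    # Random rotation in 90-degree steps (a simple stand-in for
    # arbitrary-angle rotation).
    img = np.rot90(img, k=int(rng.integers(0, 4)))

    # Random horizontal flip with probability 0.5.
    if rng.random() < 0.5:
        img = img[:, ::-1]

    # Center zoom: crop a central region, then resize back to the
    # original size with nearest-neighbour index sampling.
    h, w = img.shape[:2]
    zoom = rng.uniform(1.0, 1.2)
    ch, cw = int(h / zoom), int(w / zoom)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    rows = np.arange(h) * ch // h
    cols = np.arange(w) * cw // w
    img = crop[rows][:, cols]

    # Brightness adjustment, clipped to the valid pixel range.
    img = np.clip(img + rng.uniform(-25, 25), 0, 255)
    return img.astype(np.uint8)

# Example: generate several augmented variants of one image.
rng = np.random.default_rng(0)
sample = np.full((64, 64, 3), 128, dtype=np.uint8)
variants = [augment(sample, rng) for _ in range(4)]
```

In a training pipeline, transforms like these are typically applied on the fly each epoch, so the model rarely sees the exact same pixels twice; this is what reduces the overfitting risk the abstract refers to.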
Keywords
Deep Learning ; Mammalian Reproduction Classification ; CNN ; VGG16 ; MobileNetV2
Article Metadata
Peer Review Process
This article has undergone a double-blind peer review process to ensure quality and impartiality.
Article Information
This article has been peer-reviewed and published in the International Journal Software Engineering and Computer Science (IJSECS). The content is available under the terms of the Creative Commons Attribution 4.0 International License.
- Issue: Vol. 5 No. 1 (2025)
- Section: Articles
- Published: 2025-04-01
- License: CC BY 4.0
- Copyright: © 2025 Authors
- DOI: 10.35870/ijsecs.v5i1.3376
Nur Mufidah
Master of Applied Computer Engineering, Politeknik Caltex Riau, Pekanbaru City, Riau Province, Indonesia
Yuliska Zakaria
Master of Applied Computer Engineering, Politeknik Caltex Riau, Pekanbaru City, Riau Province, Indonesia

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
Authors who publish with this journal agree to the following terms:
1. Copyright Retention and Open Access License
Authors retain copyright of their work and grant the journal non-exclusive right of first publication under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
This license allows unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
2. Rights Granted Under CC BY 4.0
Under this license, readers are free to:
- Share — copy and redistribute the material in any medium or format
- Adapt — remix, transform, and build upon the material for any purpose, including commercial use
- No additional restrictions — the licensor cannot revoke these freedoms as long as license terms are followed
3. Attribution Requirements
All uses must include:
- Proper citation of the original work
- Link to the Creative Commons license
- Indication if changes were made to the original work
- No suggestion that the licensor endorses the user or their use
4. Additional Distribution Rights
Authors may:
- Deposit the published version in institutional repositories
- Share through academic social networks
- Include in books, monographs, or other publications
- Post on personal or institutional websites
Requirement: All additional distributions must maintain the CC BY 4.0 license and proper attribution.
5. Self-Archiving and Pre-Print Sharing
Authors are encouraged to:
- Share pre-prints and post-prints online
- Deposit in subject-specific repositories (e.g., arXiv, bioRxiv)
- Engage in scholarly communication throughout the publication process
6. Open Access Commitment
This journal provides immediate open access to all content, supporting the global exchange of knowledge without financial, legal, or technical barriers.