Title: Transfer Learning Approach for Classification of Beef Meat Regions with CNN
Authors: Alp, S.; Senlik, R.
Type: Conference Object
Date of Issue: 2023
Date Available: 2026-03-26
ISBN: 9798350306590
DOI: 10.1109/ASYU58738.2023.10296793
Scopus ID: 2-s2.0-85178294932
URI: https://doi.org/10.1109/ASYU58738.2023.10296793
URI: https://hdl.handle.net/20.500.14901/3563
Language: en
Access Rights: info:eu-repo/semantics/closedAccess
Keywords: Beef Component Classification; CNN; Non-Destructive; Red Meat Quality; Transfer Learning

Abstract: Accurate identification of beef components is crucial for the meat industry, encompassing consumer confidence, food safety, and quality control. This study addresses the challenge by developing a robust model for beef component classification using RGB images obtained from smartphones. A diverse dataset was collected outside a controlled laboratory environment, closely resembling real-world conditions. Three CNN-based models, EfficientNetV2S, ResNet101, and VGG16, were fine-tuned and evaluated on the dataset. The results demonstrated the effectiveness of the models in accurately classifying beef components. EfficientNetV2S achieved the highest performance, with precision, recall, and F1-score values of 0.92 for all classes. This research bridges the gap between non-destructive detection technologies and end users, providing a practical and reliable solution for beef component identification in various applications. © 2023 IEEE.
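
The abstract reports per-class precision, recall, and F1-score of 0.92 for EfficientNetV2S. As a minimal sketch of how such per-class metrics are computed in a one-vs-rest fashion (the class names and label sequences below are illustrative placeholders, not data from the paper):

```python
def per_class_metrics(y_true, y_pred, label):
    """Compute precision, recall, and F1 for one class (one-vs-rest)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical beef-region labels for illustration only.
y_true = ["rib", "rib", "loin", "shank", "loin", "shank"]
y_pred = ["rib", "loin", "loin", "shank", "loin", "shank"]
p, r, f = per_class_metrics(y_true, y_pred, "loin")
```

In practice such metrics would be averaged (e.g. macro-averaged) over all classes to summarize a classifier's performance, as the reported 0.92 figure suggests.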