Academic Paper

Evaluating Federated Dino’s performance on the segmentation task across diverse domains
Document Type
Conference
Source
2024 IEEE International Conference on Big Data (BigData), pp. 7784-7789, Dec. 2024
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
General Topics for Engineers
Robotics and Control Systems
Signal Processing and Analysis
YOLO
Training
Data privacy
Accuracy
Training data
Data models
Robustness
Security
Biomedical imaging
Automotive engineering
Federated Learning
Data sets
Evaluation
Image segmentation
Language
English
ISSN
2573-2978
Abstract
This study investigates the performance of the DINOv2 pre-trained model within Federated Learning (FL) environments, focusing on its application to segmentation tasks across diverse domains. While DINOv2 has demonstrated high efficacy in centralized training scenarios, its capabilities under FL conditions, where data privacy and security are paramount, remain underexplored. Utilizing data sets spanning industrial, medical, and automotive sectors, we evaluated DINOv2’s accuracy and generalization in decentralized settings. Our findings reveal that federated DINOv2 performs comparably to centralized models, effectively segmenting objects despite the decentralized and heterogeneous nature of the data. However, inherent biases in the pre-trained model posed challenges, affecting performance across different domains. These results highlight the need for domain-specific fine-tuning and bias mitigation strategies to enhance the robustness of pre-trained models in FL contexts. Future work should address these challenges to maximize the potential of FL in privacy-sensitive applications, ensuring high performance while maintaining data confidentiality.
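
Illustrative note: the abstract does not specify the aggregation scheme used in the federated setup, but a common baseline for this kind of study is FedAvg-style weight averaging of a segmentation head fine-tuned on top of a frozen pre-trained backbone such as DINOv2. The minimal sketch below shows only that averaging step; all names, shapes, and client sizes are assumptions made for illustration, not details taken from the paper.

    # Hypothetical sketch: FedAvg-style averaging of per-client segmentation heads.
    # Shapes (21 classes, 384-dim features, as in a DINOv2 ViT-S backbone) and the
    # client dataset sizes are assumed for illustration only.
    import numpy as np

    def fedavg(client_weights, client_sizes):
        """Average per-client parameter dicts, weighted by local dataset size."""
        total = sum(client_sizes)
        keys = client_weights[0].keys()
        return {
            k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
            for k in keys
        }

    # Toy example: three clients (e.g. industrial, medical, automotive), each holding
    # a locally fine-tuned linear segmentation head of identical shape.
    rng = np.random.default_rng(0)
    clients = [{"head.weight": rng.normal(size=(21, 384)),
                "head.bias": rng.normal(size=21)} for _ in range(3)]
    sizes = [1200, 800, 2000]  # assumed local dataset sizes

    global_head = fedavg(clients, sizes)
    print(global_head["head.weight"].shape)  # (21, 384)

In such a setup only the head parameters would travel to the server, which is one way the data confidentiality mentioned in the abstract can be preserved; the raw images never leave the clients.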