
PROJECT
| Anjali Rao SG | AUTHOR | ACTIVE |
| Dhruv | COORDINATOR | ACTIVE |

This project presents a novel, integrated framework for securing Federated Learning (FL) within next-generation 6G wireless networks. While FL preserves data privacy, its decentralized nature makes it highly vulnerable to malicious attacks. To address this, we implement a dynamic, trust-based security strategy using multi-metric behavioral analysis of clients. Built using the Flower framework in a simulated 6G setting, the system evaluates metrics such as gradient anomaly, parameter divergence, and performance inconsistency to compute real-time trust scores and dynamically adjust aggregation weights.
Simulation results demonstrate the system’s capability to identify and mitigate malicious client behavior. The global model's accuracy improved consistently across four rounds, achieving over 80% test accuracy, demonstrating that the proposed trust-based integration enhances FL security without hindering model convergence.
The system integrates three major components working together to ensure secure and efficient decentralized model training:
Implemented via Integrated6GTrustFedAvg, responsible for:
Federated learning enables decentralized model training without sharing raw data, supporting privacy compliance (GDPR, HIPAA). However, the server lacks visibility into client data, making FL vulnerable to:
These threats highlight the need for real-time anomaly detection and secure aggregation.
Traditional defenses include:
Limitations:
Trust-based systems provide a more adaptive defence by maintaining reputation scores computed from:
6G technologies—Terahertz communication, AI-driven networking, Massive MIMO—enable large-scale FL at the edge. However, large device counts and dynamic topology introduce new security challenges requiring adaptive, multi-metric approaches.
A multi-layered distributed system integrating:
Clients perform:
Provides:
Enabling fast, large-scale FL deployments.
Metrics:
These feed into
Synthetic dataset: synthetic_network_dataset.csv
Custom 6G-NET module simulating:
Overrides Flower’s standard aggregation to:
project/
├── flower_federated_ids_6g.py
├── artifacts_flower/
├── ids6g_synthetic.csv
Output Files:
global_model.ptclient_flags.json6g_network_results.jsonenhanced_trust_log.csvtraining_log.csvAcross 4 rounds with 7 clients, the system detected suspicious behavior early and reduced malicious contributions dynamically.
| Round | Test Accuracy | Test Loss |
|---|---|---|
| 1 | 0.7238 | 0.5050 |
| 2 | 0.7238 | 0.3894 |
| 3 | 0.7238 | 0.3473 |
| 4 | 0.8095 | 0.3083 |
Despite malicious clients, accuracy improved steadily, validating the strength of the trust-based model.
This project demonstrates a resilient, multi-layered approach to securing Federated Learning in 6G networks. The proposed trust-based aggregation enables real-time detection and mitigation of malicious client behavior without compromising FL’s decentralized and privacy-preserving nature. The results confirm that this approach maintains model accuracy and stability even in adversarial environments.
[1] FL privacy leakage studies [2] Adaptive trust-based aggregation literature [3] Multi-metric anomaly detection research [4] 6G network capability surveys