
PROJECT

Federated learning for intrusion detection in 6G networks

Anjali Rao S G (Author)
Dhruv (Coordinator)

Secure Federated Learning in 6G Networks – Project Report

Abstract

This project presents a novel, integrated framework for securing Federated Learning (FL) within next-generation 6G wireless networks. While FL preserves data privacy, its decentralized nature makes it highly vulnerable to malicious attacks. To address this, we implement a dynamic, trust-based security strategy using multi-metric behavioral analysis of clients. Built using the Flower framework in a simulated 6G setting, the system evaluates metrics such as gradient anomaly, parameter divergence, and performance inconsistency to compute real-time trust scores and dynamically adjust aggregation weights.

Simulation results demonstrate the system's capability to identify and mitigate malicious client behavior. The global model's test accuracy improved across four rounds, reaching over 80%, showing that the proposed trust-based integration strengthens FL security without hindering model convergence.


1. System Functionality Overview

The system integrates three major components working together to ensure secure and efficient decentralized model training:

1.1 6G Network Simulation

  • Device Management: Simulates multiple clients and tracks their connection states.
  • Base Station Topology: Models multiple base stations handling device connections and utilization.

1.2 Federated Learning Process

  • Training Rounds: Multi-round decentralized model training.
  • Client Selection: Server samples a subset of clients per round.
  • Local Training: Each selected client trains on its private data.
  • Global Aggregation: Server aggregates updates to improve the global model.
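The round structure above can be sketched in a few lines of plain Python. This is a minimal illustration, not the project's actual code: the toy `local_train` update rule, the client data, and the data-size weighting are all illustrative assumptions.

```python
import random

def local_train(global_model, client_data, lr=0.1):
    """Toy local step: nudge each parameter toward the client's data mean."""
    target = sum(client_data) / len(client_data)
    return [w + lr * (target - w) for w in global_model]

def aggregate(updates, weights):
    """Weighted average of client models (the FedAvg aggregation step)."""
    total = sum(weights)
    n_params = len(updates[0])
    return [
        sum(w * upd[i] for upd, w in zip(updates, weights)) / total
        for i in range(n_params)
    ]

# One simulated round: sample clients, train locally, aggregate.
random.seed(0)
clients = {cid: [random.gauss(0.5, 0.1) for _ in range(20)] for cid in range(7)}
global_model = [0.0, 0.0]
sampled = random.sample(sorted(clients), k=3)        # client selection
updates = [local_train(global_model, clients[c]) for c in sampled]
weights = [len(clients[c]) for c in sampled]         # weight by data size
global_model = aggregate(updates, weights)
```

In the real system the same loop runs for four rounds over seven Flower clients, with the aggregation weights further scaled by each client's trust score.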

1.3 Trust-Based Security Strategy

Implemented via Integrated6GTrustFedAvg, responsible for:

  • Gradient Anomaly Detection
  • Performance Inconsistency Detection
  • Parameter Divergence Measurement
  • Statistical Outlier Detection
  • Trust Score Calculation
  • Dynamic Weight Adjustment during model aggregation
  • Device Status Assignment (normal/suspicious/blocked)
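The last responsibility, device status assignment, amounts to thresholding the trust score. A minimal sketch follows; the specific thresholds (0.7 and 0.3) and the function name are illustrative assumptions, not values taken from the report.

```python
def device_status(trust_score, suspicious_below=0.7, blocked_below=0.3):
    """Map a trust score in [0, 1] to a device status label.
    Thresholds here are illustrative, not the report's actual values."""
    if trust_score < blocked_below:
        return "blocked"
    if trust_score < suspicious_below:
        return "suspicious"
    return "normal"

# Three example clients with descending trust scores.
statuses = {cid: device_status(score)
            for cid, score in {0: 0.95, 1: 0.55, 2: 0.20}.items()}
```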

2. Literature Survey

2.1 Federated Learning and Security Challenges

Federated learning enables decentralized model training without sharing raw data, supporting privacy compliance (GDPR, HIPAA). However, the server lacks visibility into client data, making FL vulnerable to:

  • Data Poisoning
  • Model Poisoning
  • Backdoor Attacks
  • Privacy Leakage via Gradient Inference

These threats highlight the need for real-time anomaly detection and secure aggregation.

2.2 Anomaly Detection & Trust-Based Defence Mechanisms

Traditional defenses include:

  • Krum, Trimmed Mean statistical filtering

Limitations:

  • Easily bypassed by coordinated attacks
  • Penalizes non-IID but benign clients
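For concreteness, coordinate-wise trimmed mean, one of the baselines named above, can be sketched as follows. The example data is hypothetical; it also illustrates the second limitation, since a benign non-IID client with unusual values would be trimmed just like the attacker.

```python
def trimmed_mean(updates, trim=1):
    """Coordinate-wise trimmed mean: for each parameter, drop the `trim`
    largest and smallest client values before averaging."""
    n_params = len(updates[0])
    out = []
    for i in range(n_params):
        vals = sorted(u[i] for u in updates)
        kept = vals[trim:len(vals) - trim]      # discard extremes
        out.append(sum(kept) / len(kept))
    return out

# Five clients report a one-parameter update; the last one is poisoned.
updates = [[0.9], [1.0], [1.1], [1.0], [50.0]]
robust = trimmed_mean(updates, trim=1)          # the 50.0 outlier is dropped
```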

Trust-based systems provide a more adaptive defence by maintaining reputation scores computed from:

  • Gradient deviation
  • Performance variation
  • Parameter divergence

2.3 Integration with 6G Networks

6G technologies—Terahertz communication, AI-driven networking, Massive MIMO—enable large-scale FL at the edge. However, large device counts and dynamic topology introduce new security challenges requiring adaptive, multi-metric approaches.


3. Proposed Architecture

A multi-layered distributed system integrating:

  • Central Server (orchestrator + trust engine)
  • Edge Devices (local training nodes)
  • 6G Network Layer (communication backbone)

3.1 Central Server

  • Maintains global model
  • Performs client sampling
  • Executes trust-based security analysis
  • Aggregates model updates dynamically

3.2 Edge Devices

Clients perform:

  1. Receive global model
  2. Train locally
  3. Send updated parameters to server
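The three client-side steps can be illustrated with a toy local training routine. This is a sketch under simplifying assumptions (a single-feature linear model trained with plain gradient descent on mean squared error), not the project's actual PyTorch training loop.

```python
def local_update(global_weights, features, labels, lr=0.05, epochs=5):
    """One client's local training: gradient descent on a single-feature
    linear model y = w*x + b with mean squared error loss."""
    w, b = global_weights                 # step 1: receive global model
    n = len(features)
    for _ in range(epochs):               # step 2: train locally
        grad_w = grad_b = 0.0
        for x, y in zip(features, labels):
            err = (w * x + b) - y
            grad_w += 2 * err * x / n
            grad_b += 2 * err / n
        w -= lr * grad_w
        b -= lr * grad_b
    return [w, b]                         # step 3: send parameters back

# Hypothetical client data following y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
updated = local_update([0.0, 0.0], xs, ys)
```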

3.3 6G Network

Provides:

  • Ultra-low latency
  • Massive device connectivity
  • High bandwidth

These capabilities enable fast, large-scale FL deployments.

3.4 Trust-Based Security Engine

Metrics:

  • Gradient Anomaly
  • Performance Inconsistency
  • Parameter Divergence

These metrics feed into a per-client Trust Score, which drives Dynamic Weighting during model aggregation.

4. Implementation

4.1 Simulation Environment

  • Implemented using Flower (flwr)
  • Parallelism enabled via Ray Virtual Client Engine
  • 7 simulated clients, 4 training rounds

4.2 Network & Data Modeling

  • Synthetic dataset: synthetic_network_dataset.csv

  • Custom 6G-NET module simulating:

    • Device–base station connections
    • Device status updates
    • Base station utilization

4.3 Custom FL Strategy: Integrated6GTrustFedAvg

Overrides Flower’s standard aggregation to:

  • Compute trust scores
  • Dynamically adjust client weights

4.4 Trust-Based Security Engine Workflow

  1. Compute metrics for each client
  2. Aggregate into detection score
  3. Compute trust score = 1 − detection score
  4. Adjust client aggregation weight
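The four steps above can be sketched as a single function. This is a simplified illustration: the equal weighting of the three metrics, the weight floor, and all numeric values are assumptions for the example, not the report's actual parameters.

```python
def trust_weights(metrics, base_weights, floor=0.1):
    """Steps 1-4: combine per-client anomaly metrics (assumed in [0, 1])
    into a detection score, derive trust = 1 - detection, and rescale
    each client's aggregation weight, with a small floor to avoid
    zeroing out clients entirely."""
    adjusted = {}
    for cid, m in metrics.items():
        detection = (m["gradient_anomaly"]
                     + m["performance_inconsistency"]
                     + m["parameter_divergence"]) / 3
        trust = 1.0 - detection
        adjusted[cid] = base_weights[cid] * max(trust, floor)
    return adjusted

# Hypothetical round: client_0 looks malicious, client_1 looks benign.
metrics = {
    "client_0": {"gradient_anomaly": 0.8, "performance_inconsistency": 0.7,
                 "parameter_divergence": 0.9},
    "client_1": {"gradient_anomaly": 0.1, "performance_inconsistency": 0.0,
                 "parameter_divergence": 0.2},
}
weights = trust_weights(metrics, {"client_0": 1.0, "client_1": 1.0})
```

Under these example numbers the suspicious client's weight falls to roughly 0.2 while the benign client keeps about 0.9, mirroring the 21%–25% weight reductions reported in Section 5.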

4.5 Project Structure

project/
 ├── flower_federated_ids_6g.py
 ├── artifacts_flower/
 ├── ids6g_synthetic.csv

Output Files:

  • global_model.pt
  • client_flags.json
  • 6g_network_results.json
  • enhanced_trust_log.csv
  • training_log.csv

5. Results

5.1 Summary

Across 4 rounds with 7 clients, the system detected suspicious behavior early and reduced malicious contributions dynamically.

5.2 Round-by-Round Summary

Round 1

  • Suspicious: Clients 0, 1, 4
  • Weight reductions: 21%–25%

Round 2

  • Suspicious: Clients 2, 4

Round 3

  • Suspicious: Clients 0, 3

Round 4

  • Suspicious: Clients 0, 1, 3

5.3 Performance Metrics

Round   Test Accuracy   Test Loss
1       0.7238          0.5050
2       0.7238          0.3894
3       0.7238          0.3473
4       0.8095          0.3083

Despite the presence of malicious clients, test loss decreased every round and accuracy rose to 80.95% by round 4, supporting the robustness of the trust-based aggregation.


Conclusion

This project demonstrates a resilient, multi-layered approach to securing Federated Learning in 6G networks. The proposed trust-based aggregation enables real-time detection and mitigation of malicious client behavior without compromising FL’s decentralized and privacy-preserving nature. The results confirm that this approach maintains model accuracy and stability even in adversarial environments.


UVCE,
K. R Circle,
Bengaluru 01