Hierarchical Federated Transfer learning and Digital Twin Enhanced Secure Cooperative Smart Farming

Lopamudra Praharaj, Maanak Gupta, Deepti Gupta

2023 · DOI: 10.1109/BigData59044.2023.10386345
BigData Congress [Services Society] · 5 Citations

TLDR

This research introduces a multi-layered architecture for cooperative smart farming that incorporates Digital Twins (DT), together with a hierarchical federated transfer learning framework designed to mitigate security threats in this collaborative setting.

Abstract

The agriculture industry is extensively utilizing AI and data-driven systems for efficiency and automation, with the goal of meeting the rising food demand. Individual farm owners can leverage agricultural cooperatives to consolidate resources, exchange data, and share domain knowledge. These cooperatives can enable the generation of AI-supported insights for their member farmers. However, this collaborative approach has raised concerns among individual smart farm owners regarding cybersecurity threats and privacy. A cybersecurity breach not only endangers the attacked farm but can also put at risk the entire network of member smart farms within the cooperative. In this research, we emphasize security challenges within cooperative smart farming and introduce a multi-layered architecture incorporating Digital Twins (DT). Further, we introduce a hierarchical federated transfer learning framework designed to address and mitigate the security threats in collaborative smart farming. Our approach leverages Federated Learning (FL) based Anomaly Detection (AD), which operates on edge servers, enabling the execution of AD models locally without exposing the farm’s data. This localized approach also generalizes well, which can greatly improve the detection of unknown cyber attacks. We employ a hierarchical FL structure that supports aggregation at various levels, fostering multi-party collaboration. Furthermore, we have devised an approach that integrates Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models, complemented by transfer learning, with the objective of reducing training time while maintaining high accuracy. To illustrate the effectiveness of our proposed architecture, we present a use case demonstrating our model’s capabilities. We also present a proof-of-concept implementation of our proposed architecture within the Amazon Web Services (AWS) environment, reflecting real-world feasibility.
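The hierarchical aggregation the abstract describes can be sketched as two levels of FedAvg-style weighted averaging: edge servers first aggregate the models of their member farms, and a higher-level server then aggregates the cooperative-level models. A minimal sketch, assuming models are represented as flat parameter vectors and weights are proportional to local dataset sizes; all names and numbers here are illustrative, not the paper's actual configuration:

```python
import numpy as np

def fedavg(param_vectors, sample_counts):
    """Weighted average of model parameter vectors (FedAvg-style)."""
    return np.average(np.stack(param_vectors), axis=0,
                      weights=np.asarray(sample_counts, dtype=float))

# Level 1: each cooperative's edge server aggregates its member farms'
# locally trained anomaly-detection models (illustrative parameters).
coop_a = fedavg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [100, 300])
coop_b = fedavg([np.array([2.0, 2.0])], [200])

# Level 2: a higher-level server aggregates the cooperative-level models,
# weighted by the total samples each cooperative contributed.
global_model = fedavg([coop_a, coop_b], [400, 200])
print(global_model)  # weighted two-level average of all farm models
```

In this scheme no raw farm data leaves the edge: only model parameters travel upward, which is what lets anomaly-detection models be trained collaboratively without exposing individual farms' data.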
