Adaptive Hybrid Gradient-based Particle Swarm Optimization for Enhanced Neural Network Training

Okundalaye Oluwaseun Olumide

2025 · DOI: 10.9734/arjom/2025/v21i8971
Asian Research Journal of Mathematics

Abstract

Optimizing deep neural networks (DNNs) presents significant challenges due to complex loss landscapes, hyperparameter sensitivity, and slow convergence. This study introduces a Hybrid Gradient-Based Particle Swarm Optimization (HG-PSO) framework that combines the global search capability of Particle Swarm Optimization (PSO) with the local refinement efficiency of gradient-based methods. The proposed approach dynamically balances exploration and exploitation, leading to faster convergence, reduced overfitting, and improved generalization. Experimental evaluations on benchmark datasets (Fashion-MNIST, SVHN, and Tiny ImageNet) demonstrate that HG-PSO outperforms traditional optimizers such as Stochastic Gradient Descent (SGD), Adam, and standalone PSO, achieving on average a 15% reduction in training error and a 12% increase in validation accuracy. The method also exhibits superior robustness to noisy gradients, making it well suited for real-world deep learning applications. These results establish HG-PSO as a powerful and efficient optimization strategy for neural network training.
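The abstract does not reproduce the HG-PSO update equations, so the sketch below only illustrates the general idea under common assumptions: a standard PSO velocity update (inertia, cognitive, and social terms) blended with a gradient-descent correction whose weight grows over the run, shifting from global exploration toward local exploitation. The function name hg_pso, all coefficients, the linear blending schedule, and the toy loss are hypothetical stand-ins, not the paper's actual method.

```python
# Minimal sketch of a hybrid gradient/PSO optimizer on a toy loss.
# The blending schedule and coefficients are illustrative assumptions;
# the paper's exact update rule is not given in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    """Toy non-convex loss standing in for a network's training loss."""
    return np.sum(w**2) + np.sin(3.0 * w).sum()

def grad(w):
    """Analytic gradient of the toy loss (backprop in a real network)."""
    return 2.0 * w + 3.0 * np.cos(3.0 * w)

def hg_pso(dim=10, n_particles=20, iters=200,
           w_inertia=0.7, c1=1.5, c2=1.5, lr=0.01):
    # Initialize particle positions (candidate weight vectors) and velocities.
    x = rng.normal(size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([loss(p) for p in x])
    g_idx = pbest_f.argmin()
    gbest, gbest_f = pbest[g_idx].copy(), pbest_f[g_idx]

    for t in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Standard PSO velocity update: inertia + cognitive + social terms.
        v = (w_inertia * v
             + c1 * r1 * (pbest - x)
             + c2 * r2 * (gbest - x))
        # Hybrid step: blend the swarm move with a gradient-descent
        # correction. alpha grows from 0 to 1, moving the balance from
        # exploration (pure PSO) toward exploitation (pure gradient).
        alpha = t / iters
        grads = np.array([grad(p) for p in x])
        x = x + (1.0 - alpha) * v - alpha * lr * grads

        # Track per-particle and global bests.
        f = np.array([loss(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if f.min() < gbest_f:
            gbest, gbest_f = x[f.argmin()].copy(), f.min()
    return gbest, gbest_f

best_w, best_f = hg_pso()
print(f"best loss found: {best_f:.4f}")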
