
ChatGPT and the Future of Generative AI: Architecture, Limitations, and Advancements in Large Language Models

Kingsley Olunosen Osobase, Oluwatoyin Olawale Akadiri, +3 authors, C. C. Ekechi

2025 · DOI: 10.38177/ajbsr.2025.7312

TLDR

The paper argues that ChatGPT should be viewed not merely as a product but as a living research instrument, one that highlights both the transformative potential and the societal risks of generative AI.

Abstract

This review examines ChatGPT as both a technological milestone and a research testbed for understanding the evolution of large language models (LLMs). Beginning with the transformer architecture and its scaling laws, the paper analyzes how pretraining, supervised fine-tuning, and reinforcement learning from human feedback (RLHF) collectively shape model performance. Empirical evidence from education, healthcare, business, and scientific research demonstrates that ChatGPT and domain-tuned variants can augment learning outcomes, accelerate professional productivity, and enable new forms of discovery. At the same time, persistent challenges, including hallucinations, bias, opacity, data limitations, and environmental costs, reveal the limits of scale-driven progress. Ongoing improvements, such as multimodality, retrieval-augmented generation, domain-specific alignment, interpretability research, and energy-efficient training, signal emerging solutions but also expose critical research gaps. Looking forward, the integration of LLMs with autonomous agents, data science workflows, symbolic reasoning, and governance frameworks will define the trajectory of generative AI. The paper argues that ChatGPT should be viewed not merely as a product but as a living research instrument, one that highlights both the transformative potential and the societal risks of generative AI.
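For reference, the two formal results the abstract invokes, scaling laws and RLHF, have standard formulations; a minimal LaTeX sketch of both is given below. The power-law constants shown are those reported by Kaplan et al. (2020), and the KL-regularized objective follows the InstructGPT-style formulation; whether this paper relies on exactly these forms is an assumption, not a claim from the abstract itself.

\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}

% Sketch 1: test loss as a power law in non-embedding parameter count N.
% Constants are Kaplan et al. (2020) fits, assumed here for illustration.
\[
L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N},
\qquad \alpha_N \approx 0.076,\quad N_c \approx 8.8 \times 10^{13}.
\]

% Sketch 2: KL-regularized RLHF objective. The policy \pi_\theta is tuned
% to maximize the reward-model score r_\phi while a KL penalty (weight
% \beta) keeps it close to the supervised fine-tuned reference \pi_{ref}.
\[
\max_{\theta}\;
\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}
\big[ r_\phi(x, y) \big]
- \beta\, D_{\mathrm{KL}}\!\big( \pi_\theta(\cdot \mid x) \,\big\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big).
\]

\end{document}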
