Voice and Text-Based AI Healthcare Chatbot Using Local Language Models
D. Venkateswarlu
Abstract
In recent years, the integration of Artificial Intelligence (AI) into healthcare systems has transformed the way users access medical knowledge, offering enhanced accessibility, personalization, and efficiency. This project presents the design and development of an AI-based healthcare chatbot that functions both offline and online, supports voice and text input/output, and is built entirely from free, open-source tools. Unlike most existing solutions, which depend on expensive or cloud-based APIs, this chatbot uses a locally hosted language model from Hugging Face (distilGPT2) to answer health-related queries. Users may either type or speak their questions: spoken input is transcribed with the SpeechRecognition library, and responses are synthesized with pyttsx3, creating a natural, conversational experience. Responses are generated using a hybrid approach: a rule-based system handles common queries such as flu symptoms or sleep advice, while transformer-based text generation provides contextually appropriate answers for general or ambiguous queries. All conversations are handled through an intuitive web interface built with Streamlit, which also maintains session memory to simulate human-like dialogue.
Key Words: Voice-enabled AI chatbot, Healthcare assistant, Natural language processing, Offline chatbot, Speech recognition, Text-to-speech synthesis, Local language models, Streamlit interface, Hugging Face transformers, distilGPT2, pyttsx3, PyTorch, Real-time voice interaction, Rule-based healthcare responses.
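The hybrid response strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the rule table, keywords, and helper names are hypothetical, and the transformer fallback is stubbed with a comment showing how the distilGPT2 pipeline from Hugging Face would plug in, so the snippet stays self-contained and runnable offline.

```python
# Hybrid response sketch: a rule-based lookup answers common health
# queries; anything unmatched falls through to a local language model
# (distilGPT2 in the real system -- stubbed here for self-containment).

# Hypothetical rule table for common queries (illustrative content only).
RULES = {
    "flu": "Common flu symptoms include fever, cough, sore throat, and fatigue.",
    "sleep": "Most adults need around 7 to 9 hours of sleep per night.",
}

def generate_with_model(query: str) -> str:
    # Placeholder for the transformer fallback. In the full system this
    # would resemble:
    #   from transformers import pipeline
    #   generator = pipeline("text-generation", model="distilgpt2")
    #   return generator(query, max_length=80)[0]["generated_text"]
    return f"[model-generated answer for: {query}]"

def respond(query: str) -> str:
    lowered = query.lower()
    for keyword, answer in RULES.items():
        if keyword in lowered:
            return answer  # rule-based path for well-known queries
    return generate_with_model(query)  # transformer path for the rest

print(respond("What are flu symptoms?"))
print(respond("Tell me about hydration."))
```

A keyword hit returns the canned answer immediately; any other query is routed to the generative model, which mirrors the abstract's division of labor between the rule-based system and transformer-based text generation.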

