6th International Conference on AI, Machine Learning
and Applications (AIMLA 2026)

March 21 ~ 22, 2026, Sydney, Australia

Accepted Papers


A Comparative Study of LLM-Powered Database Interfaces versus Traditional SQL Systems for Inventory Management

Menglong Guo 1, Yu Sun 2, 1 The Chinese University of Hong Kong, Ma Liu Shui, Hong Kong, 2 California State Polytechnic University, Pomona, CA 91768

ABSTRACT

This study presents a systematic comparison of LLM-powered database interfaces versus traditional SQL systems for inventory management, implementing two parallel Flask backends—a SQLite-based system using SQLAlchemy ORM and an LLM-based system using DeepSeek to process natural language commands against JSON storage—with identical REST API endpoints enabling controlled comparison [10]. Experimental results reveal significant trade-offs: the SQL backend achieved 12ms mean latency and 100% operational accuracy, while the LLM backend averaged 1,850ms latency (154x slower) with 88% accuracy that degraded to 72% for complex multi-step operations. These findings demonstrate that while LLM-powered databases offer unprecedented query flexibility and natural language accessibility, they currently incur substantial performance and reliability penalties; traditional SQL systems remain superior for mission-critical applications requiring deterministic behavior and ACID compliance, while LLM approaches suit scenarios prioritizing user accessibility and dynamic query capabilities over guaranteed correctness and response speed.
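The controlled-comparison setup the abstract describes can be illustrated with a minimal, stdlib-only sketch: two backends exposing the same lookup operation so latency and accuracy can be measured under identical requests. All names here are illustrative, and the "LLM backend" is a stand-in that parses a command naively rather than calling a real model API such as DeepSeek.

```python
import sqlite3
import time

def make_sql_backend():
    """SQL-style backend: SQLite in memory (the paper uses SQLAlchemy ORM)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE inventory (item TEXT PRIMARY KEY, qty INTEGER)")
    conn.execute("INSERT INTO inventory VALUES ('widget', 42)")

    def query(item):
        row = conn.execute(
            "SELECT qty FROM inventory WHERE item = ?", (item,)
        ).fetchone()
        return row[0] if row else None

    return query

def make_llm_backend(store):
    """Stand-in for an LLM translating natural language into a JSON-store
    lookup; here 'parsing' is just taking the last word of the command."""
    def query(command):
        item = command.rstrip("?").split()[-1]
        return store.get(item)

    return query

def timed(fn, *args):
    """Run a backend call and report its latency in milliseconds."""
    t0 = time.perf_counter()
    result = fn(*args)
    return result, (time.perf_counter() - t0) * 1000
```

With identical requests against both backends (e.g. `timed(sql_query, "widget")` versus `timed(llm_query, "how many widget")`), latency and answer correctness can be tallied in the same way the study reports.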

KEYWORDS

Large Language Models, SQL Databases, Natural Language Interfaces, Inventory Management.


Parameter-Efficient Fine-Tuning for Medical Text Summarization: A Comparative Study of LoRA, Prompt Tuning, and Full Fine-Tuning

Ulugbek Shernazarov 1, Rostislav Svitsov 1, Bin Shi 1, 1 Telecom SudParis, France

ABSTRACT

Fine-tuning large language models for domain-specific tasks such as medical text summarization typically demands substantial computational resources. Parameter-efficient fine-tuning methods offer a promising alternative by updating only a small fraction of model parameters while maintaining competitive performance. This paper presents a comprehensive comparison of three adaptation approaches—Low-Rank Adaptation (LoRA), Prompt Tuning, and Full Fine-Tuning—evaluated across the Flan-T5 model family on the PubMed medical summarization dataset. Our experiments reveal that LoRA consistently outperforms full fine-tuning across all model scales, achieving 43.67 ROUGE-1 on Flan-T5-Large with only 0.6% trainable parameters compared to 40.82 ROUGE-1 for full fine-tuning. This finding suggests that the low-rank constraint provides beneficial regularization for domain adaptation. We analyze the performance-efficiency trade-offs in detail and provide practical recommendations for deploying medical summarization systems under varying resource constraints.
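The low-rank adaptation the abstract evaluates can be sketched in a few lines: the frozen pretrained weight W is augmented with a trainable low-rank product B @ A, so only A and B are updated. The dimensions below are toy values for a single layer, not Flan-T5's, and the trainable fraction shown is per-layer, not the whole-model 0.6% the paper reports.

```python
import numpy as np

# Toy single-layer LoRA sketch (illustrative dimensions, not Flan-T5's).
d, k, r = 512, 512, 8
alpha = 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection (zero init)

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; with B initialized to
    # zero, the adapted layer exactly matches the pretrained one at start.
    return x @ (W + (alpha / r) * (B @ A)).T

# Only A and B count as trainable parameters.
trainable = A.size + B.size
total = W.size + trainable
print(f"trainable fraction for this layer: {trainable / total:.3%}")
```

The zero initialization of B is what makes LoRA safe to bolt onto a pretrained model: training starts from the pretrained function and the low-rank constraint limits how far adaptation can drift, which is consistent with the regularization effect the abstract observes.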

KEYWORDS

Parameter-efficient fine-tuning, Medical text summarization, Low-rank adaptation, Prompt tuning, Large language models


Deep Learning and Augmentation Architectures for Image Classification in Alzheimer's Diagnosis

1 Jiawei Zhang, 2 Xin Zhang, 3 Xinyin Miao, 1 Senior Investment Analyst, PRA Group (Nasdaq: PRAA), Norfolk, Virginia, USA, 2 Data Scientist, PRA Group (Nasdaq: PRAA), Norfolk, Virginia, USA, 3 Senior Data Analyst, American Airlines Group Inc (Nasdaq: AAL), Dallas, Texas, USA

ABSTRACT

This paper utilizes four cutting-edge deep learning architectures, namely VGG19, Xception, InceptionV3, and ResNet50, with transfer learning, image augmentation, and two layers of regularization to accurately predict Alzheimer's disease classes across 33,982 MRI images, achieving 0.9563 accuracy, 0.9972 ROC AUC, and a 0.9559 F1 score on the test set. By investigating the internal neural network structures and comparing prediction performance, this work provides insight into how different deep learning architectures behave under the same conditions, as well as the power of transfer learning and image augmentation in image-based classification and clinical diagnosis.
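The label-preserving image augmentations this abstract pairs with transfer learning can be sketched with NumPy alone; a real pipeline would apply these (plus rotations, zooms, and similar transforms) to MRI slices before feeding a pretrained backbone such as VGG19 or ResNet50. The function names and crop margins here are illustrative choices, not the paper's.

```python
import numpy as np

def horizontal_flip(img):
    """Mirror the image left-right; the class label is unchanged."""
    return img[:, ::-1]

def random_crop(img, out_h, out_w, rng):
    """Cut a random out_h x out_w window from the image."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - out_h + 1)
    left = rng.integers(0, w - out_w + 1)
    return img[top:top + out_h, left:left + out_w]

def augment(img, rng):
    """Apply a random flip, then a slightly smaller random crop."""
    if rng.random() < 0.5:
        img = horizontal_flip(img)
    return random_crop(img, img.shape[0] - 4, img.shape[1] - 4, rng)
```

Because each transform preserves the diagnostic label, applying them at training time effectively multiplies the 33,982-image dataset, which is the usual rationale for augmentation in medical imaging where labeled data is scarce.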

KEYWORDS

Deep learning, Transfer Learning, Image Classification, Neural Network Architecture, Regularization, Augmentation