LLM Experimentation through knowledge graphs: Towards improved management, repeatability, and verification

Authors
Erickson, John S.
Santos, Henrique
Pinheiro, Vládia
McCusker, Jamie P.
McGuinness, Deborah L.
Issue Date
2024-12-31
Type
Article
Abstract
Generative large language models (LLMs) have transformed AI by enabling rapid, human-like text generation, but they face challenges, including a tendency to generate inaccurate information. Strategies such as prompt engineering, Retrieval-Augmented Generation (RAG), and the incorporation of domain-specific Knowledge Graphs (KGs) aim to address these issues. However, challenges remain in achieving the desired levels of management, repeatability, and verification of experiments, especially for developers who access closed LLMs through web APIs, which complicates integration with external tools. To tackle this, we are exploring a software architecture that enhances LLM workflows by prioritizing flexibility and traceability while promoting more accurate and explainable outputs. We describe our approach and provide a nutrition case study demonstrating its ability to integrate LLMs with RAG and KGs for more robust AI solutions.
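
The following is a minimal illustrative sketch (Python, using rdflib) of the kind of KG-backed retrieval-augmented prompting the abstract describes; it is not the paper's implementation. The nutrition KG file, the build_kg_context and answer_with_kg helpers, and the call_llm stand-in for a closed-access LLM web API are all assumptions introduced here for illustration.

from rdflib import Graph, Literal
from rdflib.namespace import RDFS

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for whatever (possibly closed-access) LLM
    # web API client the workflow actually uses.
    raise NotImplementedError("plug in an LLM API client here")

def build_kg_context(kg_path: str, food_label: str) -> str:
    # Load a local nutrition KG (e.g., a Turtle file) and collect every
    # predicate/object pair attached to the food with the given rdfs:label.
    g = Graph()
    g.parse(kg_path)
    facts = []
    for food in g.subjects(RDFS.label, Literal(food_label)):
        for p, o in g.predicate_objects(food):
            facts.append(f"{p.n3(g.namespace_manager)} {o}")
    return "\n".join(facts)

def answer_with_kg(question: str, kg_path: str, food_label: str) -> str:
    # Ground the prompt in retrieved KG facts so the answer can be traced
    # back to, and verified against, the knowledge graph.
    context = build_kg_context(kg_path, food_label)
    prompt = (
        "Answer using only the facts below; reply 'unknown' if they are insufficient.\n\n"
        f"Facts:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

In a real workflow, answer_with_kg would be one step in a larger pipeline that also records the retrieved facts and the exact prompt, supporting the repeatability and verification goals described above.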
Full Citation
John S. Erickson, Henrique Santos, Vládia Pinheiro, Jamie P. McCusker, & Deborah L. McGuinness (2024). LLM Experimentation through knowledge graphs: Towards improved management, repeatability, and verification. Journal of Web Semantics, 100853.
Publisher
Elsevier
Journal
Journal of Web Semantics