LLMs Underperform Graph-Based Parsers on Supervised Relation Extraction for Complex Graphs

📰 ArXiv cs.AI

arXiv:2604.08752v1 (Announce Type: cross)

Abstract: Relation extraction is a fundamental step in building knowledge graphs, among other applications. Large language models (LLMs) have been adopted as a promising tool for relation extraction, in both supervised and in-context learning settings. However, in this work we show that their performance still lags behind that of much smaller architectures when the linguistic graph underlying a text is highly complex. To demonstrate…

Published 13 Apr 2026