LLM-Guided Semantic Bootstrapping for Interpretable Text Classification with Tsetlin Machines
📰 ArXiv cs.AI
arXiv:2604.12223v1 Announce Type: cross Abstract: Pretrained language models (PLMs) like BERT provide strong semantic representations but are costly and opaque, while symbolic models such as the Tsetlin Machine (TM) offer transparency but lack semantic generalization. We propose a semantic bootstrapping framework that transfers LLM knowledge into symbolic form, combining interpretability with semantic capacity. Given a class label, an LLM generates sub-intents that guide synthetic data creation.
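A minimal sketch of the bootstrapping pipeline the abstract describes: an LLM is prompted with a class label to produce sub-intents, those sub-intents seed synthetic training examples, and the text is binarized into Boolean bag-of-words features (the input format a Tsetlin Machine consumes). Everything here is an assumption for illustration: `query_llm` is a hypothetical stub for a real LLM API call, and the paraphrase templates stand in for LLM-written utterances.

```python
def query_llm(prompt):
    # Hypothetical stub: a real system would call an LLM API here.
    canned = {
        "sub-intents for class 'refund'": [
            "request money back",
            "report double charge",
        ],
    }
    return canned.get(prompt, [])

def generate_sub_intents(label):
    # Step 1 of the pipeline: ask the LLM for sub-intents of a class label.
    return query_llm(f"sub-intents for class '{label}'")

def synthesize_examples(sub_intents, per_intent=2):
    # Step 2: in the paper's pipeline an LLM would write utterances for each
    # sub-intent; trivial templates serve as placeholders in this sketch.
    return [f"{intent} please" for intent in sub_intents for _ in range(per_intent)]

def binarize(texts, vocab):
    # Step 3: Tsetlin Machines operate on Boolean features, so each synthetic
    # example is encoded as a bag-of-words presence vector over a vocabulary.
    return [[int(word in text.split()) for word in vocab] for text in texts]
```

The resulting Boolean matrix would then be fed to a TM trainer (e.g. a library such as pyTsetlinMachine), which learns interpretable clauses over the vocabulary literals.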