When Valid Signals Fail: Regime Boundaries Between LLM Features and RL Trading Policies
📰 ArXiv cs.AI
arXiv:2604.10996v1 Announce Type: cross

Abstract: Can large language models (LLMs) generate continuous numerical features that improve reinforcement learning (RL) trading agents? We build a modular pipeline where a frozen LLM serves as a stateless feature extractor, transforming unstructured daily news and filings into a fixed-dimensional vector consumed by a downstream PPO agent. We introduce an automated prompt-optimization loop that treats the extraction prompt as a discrete hyperparameter an…
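The pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the LLM call is replaced by a deterministic hash-based stub, and `select_prompt` stands in for the (truncated) prompt-optimization loop, treating each candidate prompt as a discrete hyperparameter scored by downstream performance. All function names and the scoring interface are assumptions.

```python
import hashlib
import numpy as np

def extract_features(news_text: str, prompt: str, dim: int = 8) -> np.ndarray:
    """Stand-in for the frozen, stateless LLM feature extractor.

    A real pipeline would send prompt + document to the model and parse
    continuous scores from its reply; here we hash the inputs to a seed so
    the mapping is fixed-dimensional and deterministic, as a frozen model's
    greedy decoding would be.
    """
    digest = hashlib.sha256((prompt + "\n" + news_text).encode()).digest()
    seed = int.from_bytes(digest[:4], "big")
    rng = np.random.default_rng(seed)
    return rng.standard_normal(dim)  # fixed-dimensional vector for the PPO agent

def select_prompt(candidates: list[str], score_fn) -> str:
    """Discrete prompt optimization: evaluate each candidate extraction
    prompt by a downstream score (e.g. the PPO agent's validation return)
    and keep the best one."""
    return max(candidates, key=score_fn)

# The RL agent would consume this vector as (part of) its observation.
obs = extract_features("ACME beats earnings estimates", prompt="Rate sentiment 0-1")
print(obs.shape)

# Hypothetical scoring: here just the mean feature value as a placeholder.
best = select_prompt(
    ["Rate sentiment 0-1", "List risk factors as numbers"],
    score_fn=lambda p: extract_features("ACME beats earnings estimates", p).mean(),
)
print(best)
```

Because the extractor is stateless, features can be precomputed once per trading day and cached, keeping the RL training loop independent of LLM inference cost.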