📰 Reads
114,258 articles · Updated every 3 hours
ArXiv cs.AI
📄 Paper
1d ago
pAI/MSc: ML Theory Research with Humans on the Loop
arXiv:2604.20622v1 Announce Type: new Abstract: We present pAI/MSc, an open-source, customizable, modular multi-agent system for academic research workflows. Ou
ArXiv cs.AI
📄 Paper
1d ago
CHORUS: An Agentic Framework for Generating Realistic Deliberation Data
arXiv:2604.20651v1 Announce Type: new Abstract: Understanding the intricate dynamics of online discourse depends on large-scale deliberation data, a resource th
ArXiv cs.AI
📄 Paper
1d ago
Large Language Models Outperform Humans in Fraud Detection and Resistance to Motivated Investor Pressure
arXiv:2604.20652v2 Announce Type: new Abstract: Large language models trained on human feedback may suppress fraud warnings when investors arrive already persua
ArXiv cs.AI
📄 Paper
1d ago
Participatory provenance as representational auditing for AI-mediated public consultation
arXiv:2604.20711v1 Announce Type: new Abstract: Artificial intelligence is increasingly deployed to synthesize large-scale public input in policy consultations
ArXiv cs.AI
📄 Paper
1d ago
Learning to Evolve: A Self-Improving Framework for Multi-Agent Systems via Textual Parameter Graph Optimization
arXiv:2604.20714v1 Announce Type: new Abstract: Designing and optimizing multi-agent systems (MAS) is a complex, labor-intensive process of "Agent Engineering."
ArXiv cs.AI
📄 Paper
1d ago
Interval POMDP Shielding for Imperfect-Perception Agents
arXiv:2604.20728v1 Announce Type: new Abstract: Autonomous systems that rely on learned perception can make unsafe decisions when sensor readings are misclassif
ArXiv cs.AI
📄 Paper
1d ago
AAC: Admissible-by-Architecture Differentiable Landmark Compression for ALT
arXiv:2604.20744v1 Announce Type: new Abstract: We introduce AAC (Architecturally Admissible Compressor), a differentiable landmark-selection module fo
ArXiv cs.AI
📄 Paper
1d ago
Where and What: Reasoning Dynamic and Implicit Preferences in Situated Conversational Recommendation
arXiv:2604.20749v1 Announce Type: new Abstract: Situated conversational recommendation (SCR), which utilizes visual scenes grounded in specific environments and
ArXiv cs.AI
📄 Paper
1d ago
V-tableR1: Process-Supervised Multimodal Table Reasoning with Critic-Guided Policy Optimization
arXiv:2604.20755v1 Announce Type: new Abstract: We introduce V-tableR1, a process-supervised reinforcement learning framework that elicits rigorous, verifiable
ArXiv cs.AI
📄 Paper
1d ago
SWE-chat: Coding Agent Interactions From Real Users in the Wild
arXiv:2604.20779v1 Announce Type: new Abstract: AI coding agents are being adopted at scale, yet we lack empirical evidence on how people actually use them and
ArXiv cs.AI
📄 Paper
1d ago
Automatic Ontology Construction Using LLMs as an External Layer of Memory, Verification, and Planning for Hybrid Intelligent Systems
arXiv:2604.20795v1 Announce Type: new Abstract: This paper presents a hybrid architecture for intelligent systems in which large language models (LLMs) are exte
ArXiv cs.AI
📄 Paper
1d ago
Diagnosing CFG Interpretation in LLMs
arXiv:2604.20811v1 Announce Type: new Abstract: As LLMs are increasingly integrated into agentic systems, they must adhere to dynamically defined, machine-inter
ArXiv cs.AI
📄 Paper
1d ago
AutoGraph-R1: End-to-End Reinforcement Learning for Knowledge Graph Construction
arXiv:2510.15339v3 Announce Type: cross Abstract: Building effective knowledge graphs (KGs) for Retrieval-Augmented Generation (RAG) is pivotal for advancing qu
ArXiv cs.AI
📄 Paper
1d ago
Coding with Eyes: Visual Feedback Unlocks Reliable GUI Code Generating and Debugging
arXiv:2604.19750v1 Announce Type: cross Abstract: Recent advances in Large Language Model (LLM)-based agents have shown remarkable progress in code generation.
ArXiv cs.AI
📄 Paper
1d ago
Soft-Label Governance for Distributional Safety in Multi-Agent Systems
arXiv:2604.19752v1 Announce Type: cross Abstract: Multi-agent AI systems exhibit emergent risks that no single agent produces in isolation. Existing safety fram
ArXiv cs.AI
📄 Paper
1d ago
WorkflowGen: an adaptive workflow generation mechanism driven by trajectory experience
arXiv:2604.19756v1 Announce Type: cross Abstract: Large language model (LLM) agents often suffer from high reasoning overhead, excessive token consumption, unst
ArXiv cs.AI
📄 Paper
1d ago
Transparent Screening for LLM Inference and Training Impacts
arXiv:2604.19757v1 Announce Type: cross Abstract: This paper presents a transparent screening framework for estimating inference and training impacts of current
ArXiv cs.AI
📄 Paper
1d ago
Explainable Speech Emotion Recognition: Weighted Attribute Fairness to Model Demographic Contributions to Social Bias
arXiv:2604.19763v1 Announce Type: cross Abstract: Speech Emotion Recognition (SER) systems have growing applications in sensitive domains such as mental health
ArXiv cs.AI
📄 Paper
1d ago
Can We Locate and Prevent Stereotypes in LLMs?
arXiv:2604.19764v1 Announce Type: cross Abstract: Stereotypes in large language models (LLMs) can perpetuate harmful societal biases. Despite the widespread use
ArXiv cs.AI
📄 Paper
1d ago
Do Hallucination Neurons Generalize? Evidence from Cross-Domain Transfer in LLMs
arXiv:2604.19765v1 Announce Type: cross Abstract: Recent work identifies a sparse set of "hallucination neurons" (H-neurons), less than 0.1% of feed-forward net
ArXiv cs.AI
📄 Paper
1d ago
OThink-SRR1: Search, Refine and Reasoning with Reinforced Learning for Large Language Models
arXiv:2604.19766v1 Announce Type: cross Abstract: Retrieval-Augmented Generation (RAG) expands the knowledge of Large Language Models (LLMs), yet current static
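For readers unfamiliar with the RAG pattern this abstract builds on, here is a minimal sketch of the generic retrieve-then-prompt step: score documents against the query, prepend the best match to the prompt, and let the LLM answer from that context. The bag-of-words scorer, document list, and prompt template are illustrative assumptions, not the paper's reinforced search/refine/reasoning loop.

```python
# Toy corpus standing in for a real document store.
docs = [
    "The capital of France is Paris.",
    "KV caching stores attention keys and values.",
    "Speculative decoding drafts tokens with a small model.",
]

def score(query, doc):
    # Toy relevance: number of shared lowercase words.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, k=1):
    # Return the k highest-scoring documents for the query.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query):
    # Prepend retrieved context so the model can ground its answer.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What is the capital of France?"))
```

Static pipelines like this retrieve once before generation; the paper's premise is that deciding *when* and *what* to retrieve can itself be learned.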
ArXiv cs.AI
📄 Paper
1d ago
Accelerating PayPal's Commerce Agent with Speculative Decoding: An Empirical Study on EAGLE3 with Fine-Tuned Nemotron Models
arXiv:2604.19767v1 Announce Type: cross Abstract: We evaluate speculative decoding with EAGLE3 as an inference-time optimization for PayPal's Commerce Agent, po
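As background for this entry, a minimal sketch of the draft-then-verify loop at the heart of speculative decoding: a cheap draft model proposes several tokens, the expensive target model checks them, and generation keeps the longest agreed prefix plus one target token. Both models here are deterministic stand-in functions, not the EAGLE3/Nemotron setup the paper evaluates.

```python
def draft_next(tokens, k):
    # Hypothetical cheap draft model: propose k greedy continuations
    # (here simply last token + 1, +2, ..., for illustration).
    out, last = [], tokens[-1]
    for _ in range(k):
        last += 1
        out.append(last)
    return out

def target_next(tokens):
    # Hypothetical expensive target model: its greedy next token.
    return tokens[-1] + 1

def speculative_step(tokens, k=4):
    """Accept the longest draft prefix the target agrees with,
    then append one token from the target itself."""
    proposal = draft_next(tokens, k)
    ctx = list(tokens)
    for tok in proposal:
        if target_next(ctx) == tok:  # target verifies the draft token
            ctx.append(tok)
        else:
            break
    # The target always contributes one token after the accepted prefix,
    # so each step advances even when every draft token is rejected.
    ctx.append(target_next(ctx))
    return ctx

print(speculative_step([0], k=4))  # [0, 1, 2, 3, 4, 5]
```

When draft and target agree, one target pass yields up to k+1 tokens instead of one, which is the latency win such studies measure.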
ArXiv cs.AI
📄 Paper
1d ago
Saying More Than They Know: A Framework for Quantifying Epistemic-Rhetorical Miscalibration in Large Language Models
arXiv:2604.19768v1 Announce Type: cross Abstract: Large language models (LLMs) exhibit systematic miscalibration with rhetorical intensity not proportionate to
ArXiv cs.AI
📄 Paper
1d ago
TTKV: Temporal-Tiered KV Cache for Long-Context LLM Inference
arXiv:2604.19769v1 Announce Type: cross Abstract: Key-value (KV) caching is critical for efficient inference in large language models (LLMs), yet its memory foo
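For context on why KV-cache memory matters, a minimal sketch of the idea behind KV caching in autoregressive attention: keys and values for past tokens are computed once and reused, so each new token costs one projection instead of recomputing the whole prefix. The single-head setup and shapes are illustrative assumptions, unrelated to TTKV's temporal tiering.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    # Scaled dot-product attention for one query over cached keys/values.
    scores = K @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

K_cache = np.empty((0, d))  # grows by one row per generated token;
V_cache = np.empty((0, d))  # this growth is the memory footprint at issue

for step in range(5):
    x = rng.standard_normal(d)               # embedding of the newest token
    K_cache = np.vstack([K_cache, x @ Wk])   # append this token's key once...
    V_cache = np.vstack([V_cache, x @ Wv])   # ...and its value once
    out = attend(x @ Wq, K_cache, V_cache)   # then attend over the whole cache

print(K_cache.shape)  # (5, 8): one cached key per generated token
```

Because the cache grows linearly with context length, long-context inference is dominated by this memory, motivating eviction and tiering schemes like the one this paper proposes.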