XAttnRes: Cross-Stage Attention Residuals for Medical Image Segmentation

📰 ArXiv cs.AI

arXiv:2604.03297v1 (announce type: cross)

Abstract: In the field of Large Language Models (LLMs), Attention Residuals have recently demonstrated that learned, selective aggregation over all preceding layer outputs can outperform fixed residual connections. We propose Cross-Stage Attention Residuals (XAttnRes), a mechanism that maintains a global feature history pool accumulating both encoder and decoder stage outputs. Through lightweight pseudo-query attention, each stage selectively aggregates fr…
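The abstract describes the mechanism only at a high level, so the following is a minimal, hypothetical sketch of the idea as stated: a learned pseudo-query attends over a pool of accumulated stage outputs, and the attention-weighted aggregate is added to the current stage output as a residual. All names (`xattn_residual`, `pseudo_query`, the flat-vector feature representation) are assumptions for illustration, not the paper's actual implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def xattn_residual(stage_out, history, pseudo_query):
    """Hypothetical cross-stage attention residual:
    - score each pooled stage feature against a learned pseudo-query,
    - softmax the scores into attention weights,
    - add the weighted sum of pooled features to the current stage output.
    Features are flat vectors here purely for readability."""
    scores = [sum(q * h for q, h in zip(pseudo_query, feat)) for feat in history]
    weights = softmax(scores)
    dim = len(stage_out)
    agg = [sum(w * feat[d] for w, feat in zip(weights, history))
           for d in range(dim)]
    return [s + a for s, a in zip(stage_out, agg)]

# Usage: two pooled stage outputs, a pseudo-query aligned with the first.
history = [[1.0, 0.0], [0.0, 1.0]]
result = xattn_residual([0.0, 0.0], history, pseudo_query=[1.0, 0.0])
```

In a real encoder-decoder segmentation network the pool would hold multi-scale feature maps and the pseudo-query would be learned per stage; the sketch only shows the selective-aggregation step that replaces a fixed skip connection.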

Published 7 Apr 2026