cs.AI, cs.CV, cs.LG

Saccade Attention Networks: Using Transfer Learning of Attention to Reduce Network Sizes

arXiv:2604.16485v1 Announce Type: new
Abstract: One of the limitations of transformer networks is the sequence length, due to the quadratic nature of the attention matrix. Classical self-attention uses the entire sequence length; however, the actual at…
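
To make the quadratic cost concrete: a minimal NumPy sketch of classical self-attention (not the paper's method), where the score matrix has shape (n, n) and so grows quadratically with the sequence length n. All names and sizes here are illustrative assumptions.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # X: (n, d) sequence of n token embeddings.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # The (n, n) attention matrix is the source of the quadratic cost.
    scores = Q @ K.T / np.sqrt(K.shape[1])
    # Row-wise softmax (numerically stabilized).
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

n, d = 8, 4  # toy sizes; doubling n quadruples the score matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (8, 4)
```

Reduced-attention schemes like the one the abstract alludes to aim to avoid materializing that full (n, n) matrix.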