Is an attention sink without Positional Encoding unavoidable? [D]
TL;DR: As soon as I remove Positional Encoding (PE) from self- or cross-attention, I start seeing vertical hot lines in the attention heatmaps. Is there any way to get query-conditioned attention without PE? So, I've been trying to p…
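
For anyone who wants to quantify the "vertical hot line" effect instead of eyeballing the heatmap: below is a minimal diagnostic sketch (PyTorch). It uses random Q/K as stand-ins for a real layer's projections, so it won't show a sink by itself; the idea is to run the same column-mass check on the attention weights from the actual model. All names here are illustrative, not from the original post.

```python
# Minimal sketch: a sink shows up as one key column receiving high attention
# mass from most queries, i.e. a column whose mean weight is far above the
# uniform 1/seq_len baseline.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

seq_len, d_model = 16, 64
# Hypothetical stand-ins for a layer's projected queries/keys (no PE added).
q = torch.randn(seq_len, d_model)
k = torch.randn(seq_len, d_model)

# Standard scaled dot-product attention weights, shape (seq_len_q, seq_len_k).
attn = F.softmax(q @ k.T / d_model**0.5, dim=-1)

# Average attention each key column receives across all queries.
col_mass = attn.mean(dim=0)
# Flag columns absorbing much more than the uniform baseline (threshold is arbitrary).
sink_cols = (col_mass > 3.0 / seq_len).nonzero(as_tuple=True)[0]
print("per-column mean attention:", col_mass.round(decimals=3).tolist())
print("columns that look like sinks:", sink_cols.tolist())

# To reproduce the heatmap view (vertical bright columns = sinks):
# import matplotlib.pyplot as plt
# plt.imshow(attn, aspect="auto"); plt.xlabel("key"); plt.ylabel("query"); plt.show()
```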