[D] Algebraic structure in the Mayan Tzolkin calendar has possible application to equivariant neural nets

I wrote a short paper analyzing the Mayan Tzolkin calendar as a 260-element cyclic system, using affine maps over \( \mathbb{Z}_{260} \), involutions, a Klein four-group action, and a non-abelian extension.
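For concreteness, here is a minimal sketch of the kind of orbit structure involved. The specific generators are my assumption, not necessarily the paper's: a Klein four-group on \( \mathbb{Z}_{260} \) generated by the affine involutions negation \( x \mapsto -x \) and the half-period shift \( x \mapsto x + 130 \).

```python
# Sketch: orbits of a Klein four-group acting on Z_260 by affine involutions.
# ASSUMPTION: generators are negation (x -> -x mod 260) and the half-period
# shift (x -> x + 130 mod 260); the paper's actual generators may differ.

N = 260
neg = lambda x: (-x) % N
shift = lambda x: (x + 130) % N

def orbit(x):
    """Orbit of x under {id, neg, shift, neg∘shift}."""
    return frozenset({x, neg(x), shift(x), neg(shift(x))})

orbits = {orbit(x) for x in range(N)}
sizes = sorted(len(o) for o in orbits)

# Generic orbits have 4 elements (the "quads"); the fixed-point orbits
# {0, 130} and {65, 195} degenerate to size 2.
print(len(orbits), set(sizes))  # → 66 {2, 4}
```

Under these assumed generators you get 64 four-element orbits plus two degenerate two-element ones, which is the kind of orbit decomposition a weight-sharing scheme would be built on.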

The main result is mathematical, but I think there may be a connection to equivariant machine learning: the induced 4-element orbits (“Harmonic Quads”) seem like a natural basis for weight sharing, and the larger operator group may offer a useful inductive bias for architectures on small, structured discrete domains.
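To make the weight-sharing idea concrete, here is a toy sketch (again assuming, hypothetically, that the quads come from the Klein four-group generated by negation and the half-period shift): a diagonal layer on signals over \( \mathbb{Z}_{260} \) whose weights are tied within each orbit commutes with the group action by construction.

```python
import numpy as np

# ASSUMPTION: quads arise from the Klein four-group generated by
# negation and the half-period shift; the paper's group may differ.
N = 260
neg = lambda x: (-x) % N
shift = lambda x: (x + 130) % N

def orbit(x):
    return frozenset({x, neg(x), shift(x), neg(shift(x))})

# Canonical orbit representative for each element, then an index per orbit.
orbit_id = {x: min(orbit(x)) for x in range(N)}
reps = sorted(set(orbit_id.values()))
idx = {r: i for i, r in enumerate(reps)}

rng = np.random.default_rng(0)
w = rng.standard_normal(len(reps))  # one free weight per orbit, not per element

def layer(f):
    """Diagonal layer with weights tied within orbits."""
    return np.array([w[idx[orbit_id[x]]] * f[x] for x in range(N)])

# Equivariance check for g = neg (self-inverse): layer(g·f) == g·layer(f),
# where (g·f)[x] = f[g(x)].
f = rng.standard_normal(N)
gf = f[[neg(x) for x in range(N)]]
assert np.allclose(layer(gf), layer(f)[[neg(x) for x in range(N)]])
```

This is only the simplest equivariant map (a pointwise one); a fuller toy architecture would also need orbit-respecting mixing between elements, which is where the irreps of the full group would come in.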

I’m posting here mainly to ask:

  • Does this seem genuinely relevant to geometric deep learning or group-equivariant modeling?
  • Has anyone seen something structurally similar used as an equivariance prior on small finite cyclic domains?
  • Would the better next step be to compute the irreps of the full group, or to build a toy equivariant architecture first?

Paper: https://zenodo.org/records/19420419

submitted by /u/Intelligent_Welder76
