[D] Runtime layer on Hugging Face Transformers (no source changes)

I’ve been experimenting with a runtime-layer approach to augmenting existing ML systems without modifying their source code.

As a test case, I took modeling_utils.py from Transformers (v5.5.0), kept it byte-for-byte intact, and built a separate execution layer around it, then dropped it back into a working environment to see whether behavior could be added externally.
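To make the idea concrete: a minimal sketch of what I mean by "runtime layer", assuming a simple monkeypatch-style wrapper applied at import time. The names (`wrap_callable`, the `before`/`after` hooks) are mine for illustration, not the actual API in the repo, and `json.loads` is just a stand-in for a Transformers entry point:

```python
import functools
import json

def wrap_callable(module, attr, before=None, after=None):
    """Replace module.attr with a wrapped version.

    The module's source file on disk is never touched; only the
    attribute binding in the live module object changes.
    """
    original = getattr(module, attr)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        if before:
            before(args, kwargs)          # e.g. input validation
        result = original(*args, **kwargs)
        if after:
            after(result)                 # e.g. execution-time observation
        return result

    setattr(module, attr, wrapper)
    return original                        # keep a handle to restore later

# usage with a stand-in module instead of Transformers
orig = wrap_callable(json, "loads",
                     before=lambda a, k: print("validating:", a[0]))
json.loads('{"ok": true}')
setattr(json, "loads", orig)               # restore the original binding
```

The point is that the wrapper lives entirely outside the wrapped file, so the original module stays byte-for-byte intact.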

What I tested:

• Input validation (basic injection / XSS pattern detection)
• Persistent state across calls
• Simple checkpoint / recovery behavior
• Execution-time observation
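For the first three bullets, here's roughly the shape I have in mind, as a self-contained sketch. The regex patterns, state fields, and JSON checkpoint format are my own assumptions for illustration, not what the linked repo actually does:

```python
import json
import os
import re
import tempfile

# Toy denylist; a real validator would be far more thorough.
SUSPICIOUS = [re.compile(p, re.I) for p in
              (r"<script\b", r"\bdrop\s+table\b", r"javascript:")]

class RuntimeState:
    """Validation + persistent state + checkpointing, all outside model code."""

    def __init__(self, path):
        self.path = path
        self.state = {"calls": 0, "blocked": 0}
        if os.path.exists(path):            # recovery from a prior checkpoint
            with open(path) as f:
                self.state = json.load(f)

    def validate(self, text):
        self.state["calls"] += 1
        if any(p.search(text) for p in SUSPICIOUS):
            self.state["blocked"] += 1
            return False                     # reject before the model sees it
        return True

    def checkpoint(self):
        with open(self.path, "w") as f:
            json.dump(self.state, f)

ckpt = os.path.join(tempfile.mkdtemp(), "runtime_state.json")
guard = RuntimeState(ckpt)
print(guard.validate("<script>alert(1)</script>"))   # malicious: blocked
print(guard.validate("translate this sentence"))     # normal: passes through
guard.checkpoint()                                    # state survives restarts
```

Because `RuntimeState` reloads its checkpoint on construction, call counts persist across process restarts without the model file knowing anything about it.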

Key constraint:

The original Transformers file is not modified at all.

What I observed:

• Malicious inputs triggered validation in the runtime layer
• Normal model usage was unaffected
• State persisted across calls without touching model code

What I’m trying to understand:

Is there any real advantage to doing this at a runtime layer vs:

• hooks inside the framework
• middleware-style patterns
• or modifying the source directly
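For comparison, the middleware-style alternative would look something like this (WSGI-style nesting, each layer wrapping the next; function names are illustrative):

```python
def model_call(text):
    # stand-in for the actual model invocation
    return f"output for: {text}"

def validation_middleware(next_fn):
    def handler(text):
        if "<script" in text.lower():
            raise ValueError("rejected input")
        return next_fn(text)
    return handler

def logging_middleware(next_fn):
    def handler(text):
        result = next_fn(text)
        print("observed call")   # execution-time observation
        return result
    return handler

# layers compose explicitly at call-site construction time
pipeline = validation_middleware(logging_middleware(model_call))
print(pipeline("hello"))
```

The obvious difference is that middleware requires the caller to build the pipeline, whereas the runtime-layer approach rebinds the callee so existing call sites pick up the behavior for free. Whether that's an advantage or a foot-gun is exactly what I'm asking.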

Looking for feedback on:

• Where this breaks in real-world usage
• Whether this is fundamentally different from existing patterns
• If there are better ways to achieve the same effect inside Transformers

Happy to help anyone run it locally if you want to verify behavior.

I recorded a short demo here:

https://youtu.be/n1hGDWLoEPw

Repo (full copy + runtime layer):

https://github.com/SweetKenneth/transformers-ascended-verified

submitted by /u/KennethSweet
