Got OpenAI’s privacy filter model running on-device via ExecuTorch

Been experimenting with running OpenAI's privacy filter model on mobile through ExecuTorch. Sharing in case it's useful to others working on similar problems.

Setup:
- Runtime: ExecuTorch
- Memory footprint: ~600 MB RAM
- Bridge: react-native-executorch

The model handles arbitrary text — emails, documents, chat logs, pasted notes, transcripts — and flags sensitive content reasonably well across all of them. Quality holds up better than I expected; it catches the kinds of PII and sensitive material you'd actually want flagged, not just trivial pattern matches.
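To make the "trivial pattern matches" contrast concrete: here's a sketch of the regex-only baseline a learned filter improves on (hypothetical helper, not part of the model or of ExecuTorch). It catches surface-level PII like emails and phone numbers but has no way to flag contextually sensitive text, which is exactly the gap a classifier model fills.

```python
import re

# Naive regex-based PII scan -- the baseline a learned privacy filter
# goes beyond. It only matches surface patterns; a sentence like
# "my diagnosis came back positive" has no regex signature at all.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def regex_pii_scan(text: str) -> dict:
    """Return pattern hits per category; an empty dict means 'looks clean'."""
    hits = {}
    for label, pattern in PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits
```

Running it on `"Reach me at jane.doe@example.com or 555-867-5309"` flags the email and phone number, but it returns nothing for sensitive prose with no structured identifiers in it, which is where the model-based approach earns its ~600 MB.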

Privacy filtering is one of those tasks where sending text to a cloud API to check whether it's sensitive has always been a bit backwards. The class of inputs this is most useful for — drafts, internal docs, exported chat history, scanned/OCR'd documents — is exactly the stuff people are most reluctant to send off-device. Running it locally lines up the privacy guarantee with the actual use case.

submitted by /u/K4anan
