TL;DR: Meta's New AI Can Predict Your Brain Better Than a Brain Scan.

Abstract:
Layman's Explanation: TRIBE v2 is a foundation model trained on 1,000+ hours of brain imaging data from 720 people. You feed it a video, sound clip, or text, and it predicts:
When tested on people it had never seen, the model's predictions were actually more accurate than most real brain scans (which get distorted by heartbeats, breathing, and movement). Researchers then replicated decades of classic neuroscience experiments entirely inside the software. No scanner, no human subjects. The model correctly identified the brain's face-recognition center, language network, and emotional-processing regions on its own.

My Thoughts: Look at what else Meta has been building:
There's no evidence these are all connected, but regardless, Meta now has a complete picture of attention, from the stimulus to the neural response.

Link to the Paper: https://ai.meta.com/research/publications/a-foundation-model-of-vision-audition-and-language-for-in-silico-neuroscience/
Link to the GitHub: https://github.com/facebookresearch/tribev2
Link to the Open-Sourced Weights: https://huggingface.co/facebook/tribev2
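If you want to poke at the released checkpoint yourself, here is a minimal sketch of pulling it down with the huggingface_hub client. It assumes the facebook/tribev2 repo id from the weights link above resolves and that you have huggingface_hub installed; the local directory name and the file-listing loop are just illustrative, not anything from the official repo docs.

```python
# Minimal sketch: download the open-sourced TRIBE v2 weights from the Hugging Face Hub.
# Assumes the repo id "facebook/tribev2" from the link above; local_dir and the
# listing loop are illustrative, not part of the official repo instructions.
from pathlib import Path

from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="facebook/tribev2",   # repo id taken from the post's weights link
    local_dir="tribev2_weights",  # hypothetical local folder
)

# List what actually shipped with the checkpoint before trying to load anything,
# since the post doesn't say which framework or file format the weights use.
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir), f"{path.stat().st_size / 1e6:.1f} MB")
```

From there, the GitHub repo linked above would be the place to look for the actual loading and inference code rather than guessing at the model API.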