DeepSeek, in collaboration with Peking University and Tsinghua University, has released the paper "Thinking with Visual Primitives" along with its open-source repository, introducing a new multimodal reasoning framework. The core idea is to treat spatial tokens (coordinate points and bounding boxes) as the minimal units of thought in the model's chain-of-thought: they are interleaved directly into the reasoning trace, so the model can "point" at specific locations in an image while it "thinks." https://github.com/deepseek-ai/Thinking-with-Visual-Primitives
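To make the idea concrete, here is a minimal sketch of what an interleaved reasoning trace might look like and how the spatial primitives could be pulled back out. The tag format (`<point x,y>`, `<box x1,y1,x2,y2>`) and the helper function are hypothetical illustrations, not the paper's actual serialization:

```python
import re

# Assumed (not official) serialization for spatial primitives
# interleaved in a chain-of-thought string.
POINT = re.compile(r"<point (\d+),(\d+)>")
BOX = re.compile(r"<box (\d+),(\d+),(\d+),(\d+)>")

def extract_primitives(cot: str):
    """Return the points and boxes a reasoning trace 'points' at."""
    points = [(int(x), int(y)) for x, y in POINT.findall(cot)]
    boxes = [tuple(int(v) for v in m) for m in BOX.findall(cot)]
    return points, boxes

# A toy trace in which the model grounds each claim in image space.
trace = ("The mug sits left of the laptop <box 40,120,180,260>; "
         "its handle is at <point 62,210>, so it faces away from us.")
points, boxes = extract_primitives(trace)
print(points)  # [(62, 210)]
print(boxes)   # [(40, 120, 180, 260)]
```

The point of the sketch is the data flow: because the coordinates live inside the chain-of-thought itself, each reasoning step stays verifiably grounded in a concrete image region rather than in free-floating text.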