Meta plans to track US employees’ mouse movements, clicks, keystrokes, and screen activity to train workplace AI agents, according to Reuters, offering an early look at how far major tech companies may go to build systems that can automate knowledge work.
The company plans to do so through a tool called the Model Capability Initiative, or MCI, which will run on work-related apps and websites and periodically capture screen snapshots, according to the report.
Meta reportedly told staff in internal memos that the data collected through MCI would be used to help train AI models in areas where they still struggle to mimic how humans interact with computers, such as navigating dropdown menus and using keyboard shortcuts. The company added that the data would not be used for performance reviews and would be limited to AI training.
The move is likely to intensify debate over worker surveillance and data governance as companies push to use AI for a growing share of workplace tasks.
In a separate memo, Meta CTO Andrew Bosworth told employees the company was expanding internal data collection as part of its broader “AI for Work” push, now rebranded as the Agent Transformation Accelerator, Reuters added. Bosworth said Meta’s goal was to build a future in which AI agents “primarily do the work” while employees “direct, review and help them improve.”
The initiative comes as the broader AI industry pushes toward agents that can operate software on behalf of users. Anthropic has already showcased computer-use capabilities, while OpenAI launched its Operator agent last year.
“Meta’s move signals a shift from automating discrete tasks to replicating entire human workflows by learning from real employee behavior,” said Pareekh Jain, CEO of Pareekh Consulting. “Enterprise systems will increasingly act as data exhaust pipelines, capturing how work actually gets done so AI agents can execute it.”
Sanchit Vir Gogia, chief analyst at Greyhound Research, said the shift is important because companies are no longer just documenting workflows for automation, but capturing how work is actually carried out inside systems. “Systems do not just learn from historical data anymore. They learn from the way people intervene, adjust, and get outcomes despite the system itself,” he said.
Governance and compliance concerns
For enterprise IT leaders, this kind of monitoring introduces a new category of behavioral data risk, because it captures not just business information but also how employees perform tasks inside enterprise systems.
“Privacy and compliance risks are significant, especially in Europe under GDPR and labor laws, where capturing keystrokes and screen activity may be restricted or require explicit consent,” Jain said. “Security risks increase because training datasets may contain credentials, IP, and sensitive workflows, making them high-value attack targets.”
Gogia said the risks should not be viewed in isolation. “These risks stack. They interact. They reinforce each other,” he said, adding that data gathered for AI training could also be repurposed over time for productivity monitoring or other employment-related decisions.
Jain added that governance could become more difficult because companies may struggle to trace what AI systems learned from specific employees. Employee awareness of monitoring could also affect the quality of the data itself. “People do not behave the same way when they know they are being observed,” Gogia said. Over time, that could mean systems are trained not on how work naturally happens, but on behavior shaped by observation.