Even Meta's employees are struggling to grasp AI. Who would have imagined that?
If you're looking for an example of what happens when a tech giant tries to impose an AI-driven future on its employees, Meta is a prime illustration right now. The company that built its success on understanding its users has turned that same playbook inward, and its staff are not happy about it. Last month, Meta quietly informed tens of thousands of its U.S. employees that their work laptops would begin monitoring their keystrokes, mouse movements, clicks, and screen activity. The stated aim was to use this behavioral data to improve Meta's AI models and learn how people actually use computers. The response was swift: within hours, internal comment threads overflowed with anger, confusion, and more than a hundred emoji reactions that made employees' feelings unmistakable.
When an engineering manager inquired about opting out, Meta’s chief technology officer, Andrew Bosworth, provided a straightforward response: there was no way to opt out, at least on a corporate laptop. This is the same company that is linking the use of AI tools to performance evaluations, conducting mandatory “AI Transformation Weeks” to retrain its workforce, and creating internal dashboards that gamify the number of AI tokens employees use daily—a metric so meticulously monitored that some workers started constructing AI agents to oversee their other AI agents. The entire situation began to resemble a self-consuming feedback loop.
The layoffs only worsened the situation
This isn’t happening in isolation. On April 17, it was revealed that Meta planned to lay off about 10% of its workforce—approximately 8,000 people—with the initial round scheduled for May 20. Employees who had been encouraged to embrace and train with AI, only to have their computer activities analyzed to improve AI, suddenly found themselves questioning whether their efforts were paving the way for their own replacements. The timing was, to say the least, dreadful. Internal messages described the atmosphere as “incredibly demoralizing.” At least three countdown sites emerged, keeping track of the days until the layoffs. Employees shared bleak memes, with one widely circulated internal post simply stating: “It does not matter.”
Mark Zuckerberg addressed the data collection during a company-wide meeting, presenting it not as surveillance but as a means of teaching AI how “intelligent individuals use computers to accomplish tasks.” He also remarked that AI is “likely one of the most competitive fields in history”—a statement that carried a different weight for those in the office, contemplating their job security in a matter of weeks.
This is merely a glimpse of what lies ahead everywhere
What is taking place at Meta is not confined to the company; it's just further advanced than elsewhere. Microsoft, Coinbase, and Block have all made comparable moves recently, reorganizing around AI, which has resulted in layoffs and internal strife. The distinction is that Meta is executing all of these actions concurrently and on a grand scale: retraining employees, monitoring their behavior, tying job security to AI adoption metrics, and reducing staff to support the entire initiative.
There’s no straightforward way to manage any of this. An employee backlash against keystroke tracking at one of the world’s most influential tech companies—one that is, among other things, actively developing AI systems to monitor and understand human behavior—brings its own sort of irony. Meta had devoted years to convincing billions to willingly share their data. Convincing its employees to do the same is proving far more challenging.
Shimul is a contributor at Digital Trends, with over five years of experience in the tech sector.
Rice grain-sized sensor could give robots a delicate touch and stop them from damaging things
Robots demonstrate remarkable precision, yet finesse is often not their strong suit. While a machine can assemble a car with near-perfect accuracy, it can still exert excessive pressure in sensitive scenarios, such as operating inside a human eye during delicate surgery. This problem led researchers at Shanghai Jiao Tong University to develop a novel force sensor that could let robots "perceive" their touch more accurately.

The sensor is tiny, roughly the size of a grain of rice at just 1.7 millimeters wide, making it suitable for advanced surgical instruments. What sets it apart is that it does not rely on traditional electronics; instead, it uses light to measure force from all directions, including pressure, sliding motions, and twisting.

Here's how it works: at the tip of an optical fiber sits a soft material that deforms slightly when it touches an object. That small deformation changes how light travels through the sensor. The altered light pattern passes through optical fibers to a camera, which captures it like an image. A machine learning model then analyzes these light patterns and converts them into accurate force measurements. Essentially, the system learns to "interpret" touch using only light, without needing numerous wires or multiple separate sensors packed into such a compact space.
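The learn-to-interpret-touch step described above can be sketched in a few lines of code. This is only a toy illustration, not the researchers' actual method: it assumes the light pattern responds roughly linearly to applied force and stands in for their machine learning model with a simple least-squares fit. All names, array shapes, and noise levels here are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "camera frame" of the light pattern is flattened into a feature
# vector; a learned map converts it into a 3-axis force reading
# (normal pressure, sliding, twisting).
n_samples, n_pixels = 500, 64            # 500 calibration touches, 8x8 frames
true_forces = rng.uniform(-1, 1, size=(n_samples, 3))

# Assumed (simplified) physics: force deforms the soft tip, which mixes
# linearly into the light pattern, plus a little camera noise.
mixing = rng.normal(size=(3, n_pixels))
frames = true_forces @ mixing + 0.01 * rng.normal(size=(n_samples, n_pixels))

# "Calibration": fit a linear map from frames to the known applied forces.
weights, *_ = np.linalg.lstsq(frames, true_forces, rcond=None)

# Inference: read force off a new, unseen frame.
test_force = np.array([0.3, -0.2, 0.5])
test_frame = test_force @ mixing
predicted = test_frame @ weights          # should closely match test_force
```

A real sensor of this kind would replace the linear fit with a richer model (the article only says "machine learning model"), but the pipeline shape is the same: known forces in, light patterns out, then invert that relationship.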