Can artificial intelligence actually make knowledge workers faster? A recent Microsoft-led experiment suggests the answer is yes — but with important nuances.
Artificial intelligence tools are quickly becoming part of everyday office workflows. From drafting emails to searching large document sets, AI assistants promise to save time and reduce manual effort. But beyond the hype, an important question remains: do these tools actually make information workers more productive?
A recent study by Benjamin G. Edelman, Donald Ngwe, and Sida Peng, economists at Microsoft’s Office of the Chief Economist, set out to answer this question using controlled experiments. Their research evaluates how Microsoft 365 Copilot affects the speed, accuracy, and overall experience of common workplace tasks. Overall, the study finds that AI assistance can meaningfully speed up common information-worker tasks while maintaining similar levels of accuracy.
The results matter because organizations worldwide are investing rapidly in AI copilots without clear causal evidence of their real productivity impact.
Why this research matters now
Many claims about AI productivity are based on personal experience or small case studies. While these insights are useful, they don’t always provide reliable evidence about real performance gains.
One reason this study stands out is its use of randomized controlled trials (RCTs), a rigorous research method designed to isolate cause and effect. Because participants were randomly assigned to AI and non-AI conditions, the researchers could estimate the causal impact of the tool directly, rather than relying on self-reported productivity improvements.
As organizations increasingly consider deploying AI tools at scale, this type of evidence becomes especially valuable.
How the researchers tested Microsoft Copilot
The research team conducted two randomized controlled experiments involving 310 participants recruited through Upwork. Participants were asked to perform realistic information-worker tasks in simulated workplace environments intended to resemble common office scenarios.
The tasks included:
- Retrieving information from company documents
- Searching emails and calendar entries
- Catching up after a missed online meeting
- Writing marketing content
Participants were divided into two groups:
- A Copilot group with access to Microsoft 365 Copilot
- A control group using standard tools only
Importantly, participants were rewarded for both speed and accuracy, encouraging behavior similar to real workplace conditions.
What the study found about speed
Across both experiments, the most consistent finding was a clear improvement in task completion speed for users with AI assistance.
In the first experiment:
- Copilot users completed tasks in 17.9 minutes on average
- The control group required 24.3 minutes
- This represents roughly 26.6% faster completion
The second experiment showed similar patterns:
- Overall task duration improved by about 29%
- Content creation tasks saw the largest gains, with speed improving by over 60%
These results suggest that AI assistance can significantly reduce the time required for routine information-processing work.
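As a quick sanity check, the headline percentage can be reproduced from the reported means. The sketch below uses the rounded figures quoted above (24.3 and 17.9 minutes); `pct_faster` is an illustrative helper, not part of the study. With rounded inputs it yields about 26.3%, slightly below the paper's 26.6%, which presumably reflects unrounded means.

```python
def pct_faster(control_min: float, treatment_min: float) -> float:
    """Percent reduction in completion time relative to the control group."""
    return (control_min - treatment_min) / control_min * 100

# Rounded mean completion times from the first experiment (minutes).
saving = pct_faster(24.3, 17.9)
print(f"{saving:.1f}% faster")  # prints: 26.3% faster
```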
Quick Performance Snapshot
| Measure | Control group | Copilot group | Difference |
|---|---|---|---|
| Overall task time (Experiment 1) | 24.3 minutes | 17.9 minutes | ~26.6% faster |
| Content-creation task time (Experiment 2) | Baseline pace | AI-assisted drafting | >60% faster |
| Accuracy (both experiments) | Baseline | Comparable | No significant change |
Did AI reduce accuracy?
Speed improvements often raise concerns about quality. Working faster is helpful only if accuracy remains stable.
In this study, the researchers found no statistically significant difference in accuracy between the Copilot group and the control group. In practical terms, users worked faster without meaningfully improving or reducing correctness. This suggests the primary benefit observed in the experiments was efficiency rather than quality enhancement.
However, the authors note that AI tools introduce new trade-offs. Depending on how users rely on AI assistance, they may choose to prioritize speed, accuracy, or a balance of both. This flexibility is part of what makes AI tools powerful — but it also requires thoughtful use.
How users perceived the AI tool
Beyond objective performance metrics, the researchers also measured user sentiment.
Participants who used Copilot reported strongly positive experiences:
- Most users said the tool saved noticeable time
- Many reported reduced effort during tasks
- A large majority indicated they would want to use the tool again
- Users who tried Copilot showed 35–40% higher willingness to pay compared with those who only heard about it
Interestingly, users often overestimated the amount of time saved. This gap between perceived and measured productivity points to an important behavioral factor: working with AI tools can feel more efficient than it objectively is. Even so, the positive sentiment suggests that AI assistance improves the overall user experience, not just raw speed.
Important limitations to keep in mind
As with any experimental research, the authors highlight several caveats.
First, the experiments used an early version of Microsoft 365 Copilot. Because AI systems evolve rapidly, future versions may perform differently.
Second, the tasks were conducted in simulated workplace environments, which may not fully capture the complexity of real organizational settings.
Third, participants were recruited from Upwork and represented a globally distributed sample. While diverse, this group may not perfectly reflect all information workers.
The researchers therefore frame their findings as strong early evidence rather than a definitive or universal measure of AI productivity gains, and the results should be interpreted in that context.
Other recent studies of generative AI have also noted that performance gains can vary significantly by task type, suggesting the productivity impact of AI may not be uniform across all knowledge-work activities.
What this means for everyday information workers
Despite the limitations, the study provides credible experimental evidence that AI assistants can meaningfully improve efficiency in common knowledge-work tasks.
The strongest benefits appear in areas such as:
- Drafting and summarizing content
- Navigating large information repositories
- Catching up on meetings
- Routine document and email analysis
For organizations, this suggests that AI tools may deliver the greatest value in repetitive, information-heavy workflows and first-draft content generation, rather than in highly specialized judgment tasks.
From our perspective at OurNetHelps, where we focus on tools that reduce manual friction in everyday workflows, the findings reinforce a broader trend: AI is becoming a practical productivity layer inside everyday software, rather than a standalone novelty. The study also aligns with emerging practitioner experience and early organizational adoption reports, which point to search, retrieval, and first-draft generation as the tasks where AI assistance delivers the most immediate value.
Final thoughts: AI productivity is becoming measurable
The Microsoft study adds rigorous experimental support to a growing consensus — well-designed AI assistants can materially improve the speed of information work without significantly harming accuracy.
While further research in real workplace settings is still needed, current evidence suggests that AI tools like Copilot are beginning to produce measurable productivity gains. Future enterprise-level studies will be important to confirm whether these experimental gains translate into sustained workplace improvements as the tools continue to evolve.
References
Edelman, B. G., Ngwe, D., & Peng, S. (2023). Measuring the Impact of AI on Information Worker Productivity. SSRN Working Paper. https://ssrn.com/abstract=4648686
Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at Work. arXiv preprint. https://arxiv.org/abs/2304.11771
Noy, S., & Zhang, W. (2023). Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence. Science. https://www.science.org/doi/10.1126/science.adh2586
Peng, S., Kalliamvakou, E., Cihon, P., & Demirer, M. (2023). The Impact of AI on Developer Productivity: Evidence from GitHub Copilot. arXiv preprint. https://arxiv.org/abs/2302.06590