AI Safety Newsletter #2: Investigating the “white-collar job automation” hype
A hard-numbers look at the reality of AI-driven job displacement
💡Key Takeaways (TL;DR)
Inside this newsletter:
Information about our upcoming AI Safety courses, starting June 30th.
A data-driven counterpoint to the AI job-automation hype, arguing that current AI limitations keep us in a phase of “human-AI augmentation”, while acknowledging the real risk of a future automation tipping point.
A recap of an AISCT community event, “Understanding the Promise and Peril of AI Progress: A Roundtable Discussion”, where founder Leo Hyams & community member Charl Botha were both guest speakers.
Recommended Substacks for further reading.
🧭 What’s on the go?
We're excited to announce our AI Safety courses starting June 30th, 2025!
Choose from three specialized tracks drawing from the BlueDot Impact curriculum:
AI Alignment
AI Governance
Economics of Transformative AI
Who this is for: professionals, policymakers, researchers, students, and anyone passionate about ensuring AI benefits humanity. No prior AI safety experience required!
Details:
Format: 8 weeks of expert-facilitated discussions + 4-week hands-on project
Locations: Cape Town, Johannesburg, Pretoria, Stellenbosch + Online
Time commitment: ~5 hours/week
Cost: Free!
Application deadline: June 17, 2025. Apply now!
This course is a great way to get started in the field. Several course alumni are now working at leading AI safety organizations. Learn More & Apply Here: https://tally.so/r/nPrAkP
🔍Topic Deep Dive
Investigating the “white-collar job automation” hype
This month, it’s been impossible to miss the viral articles on Twitter and Reddit. "Has the Decline of Knowledge Work Begun?" asked the New York Times. Another, titled "Behind the Curtain, a White Collar Bloodbath," echoed the sentiment. Both suggest the disruption is not coming in 3–5 years; it is happening now.
As a community that takes AI's long-term impact seriously, we had to ask: what does the data say?
1. Unpacking the numbers
Many articles point to a single ominous statistic: “the unemployment rate for college graduates in the US has risen 30% since September 2022”. It sounds dramatic. But context is everything.
The reality: This "30% rise" was a jump from 2.0% to 2.6%¹. For comparison, the overall US unemployment rate is 4.0%.
Historically: Zooming out, a 2.6% unemployment rate for this demographic is still incredibly low. It was 5.0% in 2010² and 3.5% in 1992³.
While any increase matters to those affected, declaring a “bloodbath” based on this data seems premature. The impact isn't “incredibly noticeable” in the aggregate… yet.
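For the curious, here is the arithmetic behind the headline figure, using the BLS numbers cited above. The "30% rise" is a relative change; the absolute change is far less dramatic:

\[ \text{relative change} = \frac{2.6 - 2.0}{2.0} = 0.30 \quad \text{(the headline ``30\% rise'')} \]
\[ \text{absolute change} = 2.6\% - 2.0\% = 0.6 \text{ percentage points} \]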
2. The limitations of current AI models
One should generally be skeptical of predictions from AI CEOs, but Anthropic's Dario Amodei has predicted AI "could wipe out half of all entry-level white collar jobs over the next 1 to 5 years."
It’s hard to argue against a "could" scenario. However, the necessary condition for this level of automation is accuracy and reliability. For two years, we've heard predictions that hallucinations would be solved. Sam Altman himself suggested we wouldn't be talking about them 18–24 months after GPT-4's release. Yet here we are in mid-2025, and a recent New Scientist article is titled "AI hallucinations are getting worse and they're here to stay."
As long as there's even a 1% chance a model will confidently invent a legal precedent or misread a financial statement, you need a human in the loop (the back-of-the-envelope calculation after the examples below shows why). Keeping humans in the loop leads to massive productivity gains, not mass layoffs. We're seeing this play out in the real world:
Klarna: Famously announced it was replacing 700 customer-service roles with AI, only to pull an uno reverse and rehire human agents because "customers like talking to people".
Duolingo: Also walked back plans to heavily rely on AI for content generation, re-engaging human contractors.
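To make that 1% figure concrete, here is a back-of-the-envelope calculation (assuming, as a simplification, that errors across tasks are independent) showing how quickly a small per-task error rate compounds over a real workload:

\[ P(\text{at least one error in } n \text{ tasks}) = 1 - (1 - p)^{n} \]
\[ p = 0.01,\; n = 100 \;\Rightarrow\; 1 - 0.99^{100} \approx 0.63 \]

In other words, a model that is 99% reliable per task still has roughly a 63% chance of making at least one confident error across 100 tasks, which is exactly why firms keep a human reviewer in the loop.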
3. A theory: “the calm before the storm”
All this leads to a theory for the rest of the 2020s:
The calm (now): As long as models require human oversight, they function as powerful tools for augmentation. This boosts productivity with limited effect on net employment. That success fosters a false sense of security, encouraging further investment in automation.
The storm (tipping point): Eventually, a model (or a system of models) will achieve a critical threshold of self-correction & reliability. The "human-in-the-loop" will no longer be a requirement for a vast array of cognitive tasks. That sense of security built up during "The Calm" could evaporate like cold water on hot tiles.
It's difficult to predict the timing of the automation tipping point. The persistence of AI hallucinations remains a key barrier to complete automation. However, focusing solely on current limitations overlooks the long-term trajectory of AI capability improvements. While timelines are uncertain, the need to prepare for this potentially radical shift remains urgent.
👪 Community Event
Written by community member Charl Botha
Our May 15th roundtable discussion, “AI Progress: Promise or Peril”, provided a forum for attendees to discuss the growing risks of AI progress and the future economic impacts of AGI. The event was structured around two presentations that served as a foundation for broader community discussions.
Charl Botha initiated the session by analysing the current rate of AI progress. He presented case studies of dangerous applications and stressed the dual-use nature of this technology, which necessitates proactive safety protocols.
Following this, Leo Hyams explored the speculative domain of Post-AGI Economics, examining scenarios where human labor is fundamentally displaced and outlining potential preparatory strategies for individuals.
The presentations were followed by group discussions and then a larger roundtable, where attendees brought different perspectives to the topics covered. It was encouraging to see such keen engagement from non-specialists; it helped us step a little way out of our research echo chamber. We look forward to hosting more of these events; subscribe to our Luma for updates!
📚 Recommended Resources
Substack newsletters are an invaluable resource for anyone looking to deepen their understanding of the AI landscape. Here are our personal favourite subscriptions, offering a curated blend of technical insights, industry analysis & forward-thinking commentary.
Ben's Bites: Ben curates the most important news, tools, and developments in the fast-paced world of artificial intelligence.
Latent Space: A technical newsletter and podcast for AI engineers and developers, focusing on in-depth analysis of new models, developer tools, and infrastructure.
One Useful Thing (Ethan Mollick): Written by a Wharton professor, this newsletter offers practical insights and experiments on how to effectively use AI in work, education, and business.
Luke Drago: This newsletter features deep, analytical essays exploring the profound long-term societal, economic, and philosophical consequences of developing artificial general intelligence (AGI).
🧠 References
While researching this newsletter, we found that the AI Explained YouTube channel perfectly captured many of these arguments in its latest video, “AI Accelerates: New Gemini Model + AI Unemployment Stories Analysed.”
The creator has a real talent for making complex topics accessible, and we can't recommend the channel enough.
1. U.S. Bureau of Labor Statistics. (3 May 2025). Employment Situation Summary, Table A-4: Employment status of the civilian population 25 years and over by educational attainment.
2. U.S. Bureau of Labor Statistics. (12 Feb 2020). Unemployment rate 2.0 percent for college grads, 3.8 percent for high-school grads in January 2020. The Economics Daily.
3. U.S. Bureau of Labor Statistics. (n.d.). Unemployment rates for people 25 years and older by educational attainment, seasonally adjusted [data series; 1992 point].