Why AI Safety is Possibly the Most Important Work of Our Time
An Experience of BlueDot’s Intro to Transformative AI Course

For some optional audio immersion, I've embedded a Spotify link to David August's revision of ‘Last Day’ below.
A few months ago, if you’d asked me about AI safety, I probably would’ve laughed. Why would anyone spend their time worrying about that? AI taking over the world? Sure—I’ll believe it when ChatGPT can tell me a half-decent joke. It all felt too distant, too theoretical—too far-fetched.
But then… the penny dropped. I had that moment. Like a cliff jump, there was no turning back; the genie was out of the bottle. For a brief moment, I was afraid of technology.
That moment came during BlueDot’s Intro to Transformative AI Safety Fundamentals, hosted by AI Safety Cape Town (led by founders Leo Hyams and Ben Sturgeon). Over the course of a week, a curious group of five came together to explore the rapid progress in AI capabilities, the competitive pressures driving companies to push innovation, and where this might all lead by 2050.
We imagined best-case scenarios—AI curing all diseases, tackling climate change, and helping lift billions out of poverty. But we also wrestled with the harder questions: What if these systems start pursuing goals that don’t align with our own? What if we build something so powerful we can’t control it?
These ain’t no boogeyman-under-your-bed kinds of sci-fi nightmares. They are real, urgent, and unresolved technical challenges—ones that some of the smartest people in the world are losing sleep over. Not because disaster is inevitable, but because the future is still ours to shape.
By the end, my perspective had flipped completely. I realised that ensuring AI develops safely may be some of the most important work of our time.
Because if we get it wrong, we may not get a second chance.
But if we get it right, AI could open the door to a world transformed: one of pure imagination.
Seriously… The “Most Important” Work of Our Time?
“The first ultraintelligent machine is the last invention that man need ever make.” (I. J. Good, as quoted in Aschenbrenner, 2024)
I used to think of AI as a tool—something to automate tasks, write a few emails, maybe help me write a cute little poem for my granny.
Then I came across the term superintelligence: the kind of AI that could surpass humans intellectually in every domain.
When I first heard it, I thought: “Yeah, maybe it’ll happen, but at least Donald Trump won’t be the one to administer it.”
Wrong.
Superintelligence isn’t just some dream cooked up by armchair theorists—it’s coming in hot, and we could be very close (Aschenbrenner, 2024).
Just a few years ago, GPT-2 was like a bright junior schooler.
GPT-3? A competent high schooler.
GPT-4? A top student—one who even gives top college students a run for their money.
The rate of progress is staggering. And the next jumps could be bigger than anything we’ve seen.
PhD-level reasoning. BOOM (OpenAI, 2024).
AI capable of making scientific breakthroughs. Double BOOM (Lu et al., 2024).
AI designing better AI—accelerating itself into what some call an intelligence explosion. Triple BOOOOOM (Sevilla et al., 2024).
Some predictions—based on trends in compute, scaling laws, and algorithmic improvements—suggest we could see AI systems as capable as human experts within two years (Sevilla et al., 2024). And it’s not just some random Twitter pundit saying this; many of the top AI scientists, including the “Godfather of Deep Learning” and Nobel laureate Geoffrey Hinton, are advocating ardently for AI safety (Hern, 2024).
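To make the compounding nature of those trends concrete, here is a minimal sketch of the kind of extrapolation behind such forecasts. The growth multiplier below is an assumption chosen purely for illustration; it is not Epoch AI's actual data or methodology.

```python
# A toy sketch of compute-trend extrapolation.
# The numbers are illustrative assumptions, not real measurements:
# they simply show how a steady annual multiplier compounds.

training_compute = 1.0   # arbitrary units for a frontier model today
annual_growth = 4.0      # assumed ~4x growth in training compute per year

for year in range(1, 6):
    training_compute *= annual_growth
    print(f"Year {year}: ~{training_compute:.0f}x today's training compute")

# Under this (assumed) trend, compute grows by roughly three orders of
# magnitude in five years, which is the intuition behind claims that
# capabilities could jump sharply on short timescales.
```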
This was when the penny dropped for me. A deep existential moment, yes, but what followed was something else, something deeper: responsibility. Because here’s the part no one talks about: this future is not inevitable. We can do something about it.
The danger isn’t that AI will simply “take over”. The danger is that we might build something so powerful, so fast, that we lose control—simply because we didn’t align its goals with ours.
These systems will be exceptionally capable, and without guardrails that capability could work against us. So, the first step is realising just how important this work is. Because once you see it—once you truly understand what’s at stake—there’s no looking away. And that’s where our future starts.
A future where AI doesn’t replace us—but empowers us.
Where intelligence isn’t a threat—but a partner.
Where the future isn’t something we fear—but something we build… together.
And this is what AI safety is all about.
My Experience of BlueDot’s Intro to Transformative AI Course
Day 1 – How AI Works
It’s just math… until it isn’t
We started with the basics: what AI actually is, how it learns, and why LLMs (large language models) like GPT-4 are so impressive. On the surface, it’s just math—statistics, neural networks, and billions of parameters tuning themselves based on massive amounts of input data.
But as we dug deeper, I realised that the complexity of these systems makes them unpredictable. AI doesn’t “think” like us. It finds solutions in ways we don’t always understand, and this unpredictability scales as models improve.
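To give a flavour of what “parameters tuning themselves” means, here is a minimal sketch of gradient descent fitting a single weight and bias to toy data. All of the numbers are made up for illustration; real LLMs do essentially this, just with billions of parameters and vastly more data.

```python
# A minimal sketch of learning by gradient descent: one weight, one bias,
# fitted to toy data. Values are illustrative only.

data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # inputs x and targets y (y = 2x + 1)
w, b = 0.0, 0.0                               # parameters start out knowing nothing
learning_rate = 0.05

for step in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in data) / len(data)
    # Nudge the parameters slightly in the direction that reduces the error.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # approaches w = 2, b = 1
```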
Key realisation:
AI isn’t magic, but it’s becoming powerful enough that it sometimes looks like it.
Day 2 – Why Are We Building This?
Spoiler: There’s a lot of money involved
If the first day was about what AI is, day two was about why it’s advancing so fast. The short answer? Incentives.
We now find ourselves in an “AI race”—pushing forward because the rewards for leading in AI development are massive. Productivity gains, new markets, the chance to reshape industries—everyone wants a piece of the ever-growing pie. And it’s not just corporations. Countries see AI as a strategic advantage, a way to gain economic and military power.
Key realisation:
AI isn’t just advancing because we can make it better. It’s advancing because everyone is competing to make it better—sometimes without stopping to ask if we should.
Day 3 – The Promise: What If We Get This Right?
A world of pure imagination…
This was the day that leaned into hope. Because while AI safety often focuses on risks, the potential upside of aligned AI is quite unbelievable.
Imagine a world where no one goes hungry, where corrupt governments reign no more, where energy is limitless, and where intelligence itself is no longer a bottleneck for human progress.
Key realisation:
AI safety isn’t just about preventing disaster—it’s about ensuring we don’t miss out on the greatest opportunity ever to grace Homo sapiens.
Day 4 – The Risks: What If We Get It Wrong?
And then came the hard part.
We’d spent the last few days seeing AI’s rapid advancement and its incredible potential. But what happens if it doesn’t go as planned?
People fear AI turning against us, staging an uprising and keeping us as pets. It’s the kind of narrative that makes for a good Rick and Morty episode, but it is not the real risk here.
AI doesn’t need to hate us to destroy us. Indifference is enough. A construction crew doesn’t bulldoze an anthill because the workers hate ants. They are simply building, and the ants happen to be in the way (Bostrom, 2014).
People ask, “Why would AI want to kill humans?” Wrong question. The better one is, “Why would AI care whether humans survive?” Default answer? It wouldn’t.
And when goals clash, history tells us the smarter opponent usually prevails. The unsettling part is that we might not even see the threat coming. If AI is smarter—truly, orders of magnitude smarter—we won’t be able to just turn it off. Not by default: these systems may be motivated to preserve their goals (Greenblatt, 2024), and we must consider that they could be capable of persuasion, manipulation, and outmanoeuvring us at every step (Russell, 2019).
Key realisation:
The problem isn’t evil AI. It’s that superintelligent AI may develop goals that aren’t aligned with ours, that we don’t yet know how to prevent this, and that this alone could be lethal.
Day 5 – So… What Can We Actually Do?
By the last day, having narrowly escaped existential angst, we were left with one big question: What now?
We explored the many ways people are contributing—and how we, personally, could position ourselves to contribute meaningfully.
We discussed the more conventional paths of technical research and policy, which are very important, but we also explored another, often overlooked avenue: communication, education, and organisation, ensuring AI safety isn’t just a niche debate but a global conversation.
Key realisation:
You don’t need to be a machine learning expert to make a difference.
But you do need to understand what’s happening—because regardless of what career path you take, AI is going to affect it, whether you like it or not.
So the first step you can take is simple:
EDUCATE YOURSELF
How?
Stay informed
Join this newsletter.
We also recommend signing up for the Center for AI Safety’s newsletter. It’s one of the most comprehensive and thoughtfully written newsletters out there, it’s accessible to laypeople, and it only arrives once a month.
You can find more resources on AISafety.com.
Join the AI Safety Cape Town community
Every Wednesday, we host AI Safety Discussions, where we break down AI news, readings, and papers.
We organize community events to bring together people working in and thinking about AI.
Keep an eye out for AI safety retreats—special educational opportunities with limited application windows (tuition is usually free if accepted).
Contact info@aisafetyct.com to join the community.
Apply to a BlueDot course
They offer more advanced programs once you’ve completed the intro course.
Conclusion: A Call to Action
The future of AI isn’t something we passively watch. It’s something we can choose to be actively involved in. The decisions we make today won’t just affect us, but generations to come.
The most important lesson I took away from these five days is that AI safety is more than idle pondering about the future.
It’s a call for those with courage, foresight, and a willingness to ask the hard questions.
This isn’t a far-off problem. It’s happening now.
And the future will be shaped by those who step up.
So the only question left is: will you?
References
Aschenbrenner, L. (2024). Situational Awareness: The Decade Ahead. Retrieved from https://situational-awareness.ai/from-agi-to-superintelligence/
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Greenblatt, R. (2024). Alignment Faking in Large Language Models. Anthropic. Retrieved from https://www.anthropic.com/research/alignment-faking
Hern, A. (2024, December 27). ‘Godfather of AI’ raises odds of the technology wiping out humanity over next 30 years. The Guardian. Retrieved from https://www.theguardian.com/technology/2024/dec/27/godfather-of-ai-raises-odds-of-the-technology-wiping-out-humanity-over-next-30-years
Lu, C., et al. (2024). The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. arXiv preprint. Retrieved from https://arxiv.org/abs/2408.06292
OpenAI (2024). OpenAI o3-mini System Card. Retrieved from https://openai.com/index/openai-o3-mini/
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking Press.
Sevilla, J., et al. (2024). Can AI Scaling Continue Through 2030? Epoch AI. Retrieved from https://epochai.org/blog/can-ai-scaling-continue-through-2030