Cybersecurity in the age of AI: What technical leaders need to know

In this article: If you’re a CTO, IT manager, or tech lead at a nonprofit, this guide is for you. We explain how AI is changing the way we should approach cybersecurity in 2025, and the practical steps you can take straight away to stay secure.

Jump to the section you need most:

  • Why AI is a double-edged sword for cybersecurity
  • AI in cybersecurity: The benefits and risks
  • AI-generated threats are getting more sophisticated
  • What this means for your security stack
  • Where nonprofits are most vulnerable
  • What technical leaders can do right now
  • Why human behaviour still matters
  • Looking ahead: The role of AI in defence

AI has changed everything, especially in cybersecurity. For nonprofits, this shift isn’t theoretical. It’s real and happening fast. It affects the way we protect donor data, operational systems, and even reputations. And the stakes keep rising as the threat landscape evolves.

So how do technical leaders stay ahead when AI is being used on both sides of the cybersecurity fence? 

Why AI is a double-edged sword for cybersecurity

AI is one of the best tools defenders have ever had. But it’s also one of the most dangerous tools attackers have ever used. 

It’s helping us automate incident response, detect anomalies, and analyse signals across endpoints faster than any human team could.  

But that same power is now in the hands of cybercriminals. 

Cybercriminals use AI to automate and scale attacks at unprecedented speed and complexity. 

AI can write convincing phishing emails in seconds. It can clone voices. It can learn your systems and your people very quickly and quietly. 

This means attacks are faster, harder to detect, and often more personalised. 

The increasing availability of powerful AI tools means even less-skilled actors can launch sophisticated attacks, expanding the pool of potential adversaries.

AI in cybersecurity: The benefits and risks

We’ve summed up the double-edged sword of AI in cybersecurity below.

What AI gives defenders:

  • Automated incident response and faster anomaly detection across endpoints
  • Signal analysis at a scale no human team could match
  • Smarter, more scalable tooling for lean security teams

What AI gives attackers:

  • Convincing, hyper-personalised phishing written in seconds
  • Voice cloning and deepfakes that impersonate executives
  • Self-learning malware and Cybercrime-as-a-Service kits that lower the skill barrier

AI-generated threats are getting more sophisticated

Phishing no longer looks like a sketchy Gmail address and poor grammar. Now, it looks like your CEO asking for an urgent invoice. Or a fake voicemail from a board member. Or even a deepfake Teams call. 

And it’s not just hypothetical. In 2024, a finance worker was tricked into paying out $25 million to fraudsters who used deepfake technology to pose as the company’s chief financial officer on a video conference call.

By some industry estimates, AI-generated attacks are up 4,000% since 2022, and they’re only getting more sophisticated.

Attackers are now using AI to analyse social media, emails, and online behaviour to create hyper-personalised phishing messages. These scams mimic writing styles, reference recent activity, and even continue legitimate email threads to avoid suspicion. Deepfake audio and video tools are being used to convincingly impersonate executives and board members, leading to large-scale financial fraud and data leaks. 

Static security tools can’t keep up. AI-driven phishing campaigns continuously evolve to bypass traditional defences by exploiting psychological triggers, perfect grammar, and near-flawless style. Meanwhile, self-learning malware is adapting in real time, hiding in plain sight by mimicking normal system activity. 

In addition, the rise of Cybercrime-as-a-Service means even low-skilled attackers now have access to powerful AI tools on the dark web. These groups run automated A/B tests to fine-tune phishing content for maximum engagement. 

Large Language Models (LLMs) and open-source AI tools are giving threat actors the ability to: 

  • Write malicious code in real time 
  • Tailor attacks using scraped personal info 
  • Learn how your org operates 

 

That’s what makes AI-generated threats so dangerous. They don’t feel like attacks. They feel like business as usual. 

How confident are you that your team would spot these attacks today? 

What this means for your security stack

The evolving cybersecurity landscape isn’t just about keeping up with patches and updates. It’s about shifting your security posture from reactive to proactive.

For many nonprofits, that means taking a second look at tools like: 

  • Microsoft Secure Score: to get a quantifiable sense of your current setup 
  • Conditional Access and Multi-Factor Authentication: to add layers without slowing people down (a minimal rollout is sketched at the end of this section) 
  • Endpoint Detection and Response (EDR): to catch what traditional antivirus software misses 

 

If you’re still relying on manual monitoring or a mix-and-match stack, it’s time to rethink the approach. 
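To make the Conditional Access item above more concrete, here’s a minimal sketch of a cautious first rollout using the Microsoft Graph API from Python. It assumes you already have an app registration and an access token with the Policy.ReadWrite.ConditionalAccess permission; the policy name is just an example, and the report-only state means nothing is enforced until you decide it should be.

    # Sketch: create a report-only Conditional Access policy that would require MFA
    # for all users. Nothing is enforced while the policy stays in report-only mode.
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    token = "<access-token>"  # placeholder: obtain via your app registration

    policy = {
        "displayName": "Require MFA for all users (report-only pilot)",  # example name
        "state": "enabledForReportingButNotEnforced",  # report-only
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {"includeApplications": ["All"]},
            "clientAppTypes": ["all"],
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }

    resp = requests.post(
        f"{GRAPH}/identity/conditionalAccess/policies",
        headers={"Authorization": f"Bearer {token}"},
        json=policy,
    )
    resp.raise_for_status()
    print("Created policy:", resp.json().get("id"))

Report-only mode lets you see who would have been prompted for MFA before you switch enforcement on, which is a sensible first step for a small team.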

Where nonprofits are most vulnerable

Nonprofits have a unique risk profile. You’re dealing with: 

  • Limited IT teams 

  • Legacy systems or hybrid environments 

  • Staff and volunteers using personal devices 

  • Sensitive data from donors, clients, or government partners 

 

When you add a mission-first culture, where cybersecurity can feel secondary, the risk increases. 

AI-generated threats exploit exactly these conditions. Especially when there are distractions, outdated systems, and low awareness. 

What technical leaders can do right now

Here’s what you can do, even without a major overhaul: 

1. Start with Microsoft Secure Score: It’s built into Microsoft 365 and gives you a clear, actionable baseline (one way to pull it programmatically is sketched after these steps). 

2. Run an attack simulation: Attack simulation training in Microsoft Defender for Office 365 lets you test how your team would respond to a phishing or malware scenario. 

3. Invest in user awareness: Tools are essential, but human error still accounts for most breaches. 

4. Make security part of culture: It can’t sit in a silo. Bring leadership, finance, and program teams into the conversation. 
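To make step 1 a little less abstract, here’s a small sketch of pulling your latest Secure Score through the Microsoft Graph API from Python. It assumes an access token with the SecurityEvents.Read.All permission; the endpoint returns daily snapshots (typically newest first), and how you authenticate, schedule, or report on the number is up to you.

    # Sketch: fetch the most recent Microsoft Secure Score snapshot from Microsoft Graph.
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    token = "<access-token>"  # placeholder: obtain via your app registration

    resp = requests.get(
        f"{GRAPH}/security/secureScores?$top=1",  # one snapshot per day; $top=1 keeps it small
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()

    latest = resp.json()["value"][0]
    print(f"Secure Score: {latest['currentScore']} / {latest['maxScore']}")

    # Each snapshot breaks the total down by control, which shows
    # which improvement actions would move the score most.
    for control in latest.get("controlScores", [])[:5]:
        print("-", control.get("controlName"), control.get("score"))

Even logging this number into a spreadsheet once a week gives you a simple trend line to share with leadership.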

 

For more steps you can take, check out our Cybersecurity Checklist for Nonprofits and Charities. 

Why human behaviour still matters

AI might be driving the threat, but it’s still human decisions that open the door. 

Your users are your greatest risk and your strongest line of defence. 

Organisations that train their people and test them regularly have been shown to reduce phishing risk by up to 86%. 

That’s why technical leaders need to think beyond the firewall. Focus on behaviour, training, and a culture where people feel empowered to report weird emails or suspicious logins. 

Looking ahead: The role of AI in defence

It’s not all doom and gloom. AI is also powering a new era of cybersecurity tools that are smarter and more scalable for lean teams. 

From Microsoft Copilot to automated threat detection, we’re seeing a shift toward AI-enhanced defence that nonprofits can access if they know where to look and how to set it up properly. 

The best leaders won’t just react to AI. They’ll use it to build smarter, more resilient systems. 

Final thoughts: Stay vigilant

Cybersecurity in the age of AI isn’t about fear. It’s about focus. 

Focus on building systems that adapt. Focus on training people who are alert. And focus on making security part of your everyday operations.  

You don’t need a 12-person team to stay safe. You just need consistency and the right tools. 

And a partner who knows how to help. 

If you need help navigating this complex landscape, partnering with experts who understand nonprofit challenges can make all the difference. 

We work with over 80 nonprofits across New Zealand, Australia, and Canada, and we’re committed to helping them stay safe from cyberattacks. 

Not sure where to start? We can help.

Book a free consultation to assess your cybersecurity posture and get practical next steps tailored to your nonprofit.

Contact us