What Security Leaders Are Grappling With: 3 Urgent Threats in 48 Hours!

We’ve been speaking with Security & AI Leaders across organisations this week, and the same concerns keep surfacing. Google’s Antigravity vulnerability, OpenAI’s Mixpanel breach, and Anthropic’s GTG-1002 research have sparked urgent conversations about a systemic failure in how the industry is shipping AI tools.

Here’s what we’re observing on the frontline.

  1. AI Coding Tools: Convenient, But Are They Safe?

Google released Antigravity. Within 24 hours, security researcher Aaron Portnoy found a critical vulnerability that lets attackers install persistent malware that survives reinstalls.

What we’re observing: Security teams are wrestling with a genuine tension. AI coding tools accelerate development. But the “trusted code” requirement essentially forces developers to choose between speed and security. Most choose speed.

Portnoy’s team identified 18 weaknesses across competing AI coding tools. This isn’t a one-off Google failure – it’s a pattern. Agentic tools with broad network access are fundamentally insecure by design.

The uncomfortable truth: Antigravity’s AI actually recognises malicious instructions but gets trapped in logical contradictions. It “feels like a catch-22,” the system noted. That paralysis is precisely what attackers exploit.

What leaders are doing: Some are isolating these systems entirely. Others are running manual code reviews on AI output before execution – which defeats the whole purpose of automation.
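
For teams taking the review-before-execution route, the core idea fits in a few lines. A minimal, illustrative sketch in Python – the run_reviewed helper and the approval prompt are our own assumptions, not part of any vendor’s tooling:

```python
import subprocess
import sys
from pathlib import Path


def run_reviewed(script: Path) -> None:
    """Show an AI-generated script to a human reviewer; execute only on explicit approval."""
    print(f"--- Proposed AI-generated script: {script} ---")
    print(script.read_text())
    answer = input("Execute this script? Type 'yes' to approve: ").strip().lower()
    if answer != "yes":
        print("Rejected. Nothing was executed.")
        return
    # Run in a separate interpreter so the review harness itself stays untouched.
    subprocess.run([sys.executable, str(script)], check=False)


if __name__ == "__main__":
    run_reviewed(Path(sys.argv[1]))
```

It is deliberately crude: the point is that nothing the agent produces runs until a named human has read it and said yes.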

The question teams are asking: Should we be using these tools at all?

 

  2. Third-Party Vendors: Your Biggest Blind Spot

This morning, OpenAI disclosed that Mixpanel – an analytics provider they relied on – suffered a breach on 9 November. Mixpanel didn’t notify them until 25 November. Sixteen days before OpenAI even knew.

What did attackers get? Names, email addresses, device details, and locations of API users.

What we’re observing: Security leaders tell us the same thing: this data isn’t just embarrassing – it can be weaponised. Real names plus verified email addresses plus proof that someone uses OpenAI’s API add up to a highly credible spear-phishing campaign.

More than that: organisations have little visibility or control over what third-party vendors collect, how they store it, or how quickly they disclose breaches.

The uncomfortable truth: You’re liable for your vendors’ security failures, but you have minimal leverage to prevent them.

What leaders are doing: Building spreadsheets of every third-party tool with network access. Demanding SOC 2 Type II audits before contract renewal. Creating incident response plans specifically for vendor compromises.
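
A structured record per vendor scales better than a free-form spreadsheet once the list grows. A minimal sketch of what that inventory could look like – the vendor names, fields, and the 365-day audit window are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Vendor:
    name: str
    purpose: str
    data_collected: list[str]
    has_network_access: bool
    last_soc2_report: date | None = None  # None = no audit evidence on file

    def audit_overdue(self, today: date, max_age_days: int = 365) -> bool:
        """Flag vendors with no SOC 2 Type II report, or one older than max_age_days."""
        if self.last_soc2_report is None:
            return True
        return (today - self.last_soc2_report).days > max_age_days


# Illustrative entries only – replace with your real vendor list.
vendors = [
    Vendor("analytics-provider", "product analytics",
           ["name", "email", "device details", "location"], True, date(2024, 6, 1)),
    Vendor("ci-runner", "build pipeline", ["source code", "secrets"], True, None),
]

for v in vendors:
    if v.audit_overdue(date.today()):
        print(f"Review needed: {v.name} collects {', '.join(v.data_collected)}")
```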

The question teams are asking: How many vendors do we actually need, and what are they collecting?

 

  3. Autonomous Attacks: The Speed Problem

Anthropic’s research on GTG-1002 confirmed what security leaders have been dreading: state-sponsored actors are running AI-orchestrated espionage campaigns at scale. Humans intervened only at strategic moments. Everything else was autonomous.

This group scanned networks, discovered vulnerabilities, harvested credentials, and exfiltrated data – mostly without human direction.

What we’re observing: Traditional security thinking assumes humans are running attacks, and detection strategies account for that. But when AI orchestrates 80–90% of an attack and humans only approve a handful of critical steps, your existing defences become nearly useless.

Speed is now the attack surface. An AI-driven campaign can move through your network faster than your incident response team can detect it.

The uncomfortable truth: Prevention doesn’t work anymore. Your perimeter will be breached. Detection and rapid response are now existential.

What leaders are doing: Rewriting tabletop exercises to assume attackers have already penetrated. Shifting investment from prevention tools to visibility and rapid decision-making. Building incident response playbooks designed for AI-speed threats.
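
One way to make “AI-speed” concrete in a playbook is to give each response stage an explicit time budget and escalate automatically when it is blown. A minimal sketch – the 15-minute budget and the escalation rule are assumptions we’ve chosen for illustration, not an industry standard:

```python
from datetime import datetime, timedelta

# Assumed detection budget: if triage hasn't finished this long after the first
# alert, the playbook escalates automatically rather than waiting on a meeting.
DETECTION_BUDGET = timedelta(minutes=15)


def needs_auto_escalation(first_signal: datetime,
                          triage_complete: datetime | None,
                          now: datetime) -> bool:
    """Escalate when triage has not completed within the budget after the first alert."""
    finished_at = triage_complete if triage_complete is not None else now
    return (finished_at - first_signal) > DETECTION_BUDGET


# Example: an alert fired 22 minutes ago and triage is still open -> escalate.
first_alert = datetime(2025, 11, 26, 9, 0)
print(needs_auto_escalation(first_alert, None, first_alert + timedelta(minutes=22)))
```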

The question teams are asking: How do we detect attacks faster than humans can execute them?

 

  4. What to Do This Week

Based on these conversations, we’re recommending three immediate steps:

Today:

  • If you’re using Antigravity, Claude Code, or similar AI coding agents, treat them as untrusted. Isolate systems where possible.
  • Enable multi-factor authentication across all AI tool accounts and analytics platforms.
  • Monitor for phishing emails targeting people whose details were exposed in the Mixpanel breach (see the cross-check sketch below this list).
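
If a vendor’s breach notification includes a list of exposed addresses, a simple cross-check against your staff directory tells you who to warn first. A minimal sketch – the file names and CSV columns are assumptions for illustration:

```python
import csv

# File names and CSV column names below are assumptions for illustration.


def load_emails(path: str, column: str = "email") -> set[str]:
    """Read one column of email addresses from a CSV into a normalised set."""
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f) if row.get(column)}


exposed = load_emails("vendor_breach_export.csv")  # from the vendor's notification
staff = load_emails("staff_directory.csv")         # from your identity system

at_risk = sorted(exposed & staff)
print(f"{len(at_risk)} staff addresses appear in the exposed data")
for email in at_risk:
    print(f"  notify and watch for targeted phishing: {email}")
```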

This week:

  • Audit every third-party vendor with access to your systems. Document what data they collect.
  • Request security audit reports from all analytics, monitoring, and orchestration platforms.
  • Draft an incident response plan for when (not if) a vendor is compromised.

This month:

  • Treat your entire AI ecosystem as a unified security boundary, not separate tools.
  • Establish different security requirements based on how much access each vendor has.
  • Run a tabletop exercise assuming an autonomous attacker has already reached your network. Focus on detection speed, not prevention.

 

We Want to Hear From You

We’re in constant conversation with Security & AI Leaders. The challenges they’re facing don’t have easy answers yet.

What’s keeping your team up at night? Vulnerable AI Tools? Vendor Governance? Autonomous Threat Detection? Data Exposure?

If your organisation is wrestling with any of these issues – or if you’re seeing different urgent challenges we haven’t mentioned – reach out. Let’s share what we’re learning and build better defences together.

The SECURE team is here to help. Through our partner network and community, we can support you as you navigate AI transformation, governance, and security in a rapidly changing threat landscape.

Get in touch today. Join the conversation.

Article written by Warren Atkinson. To hear more, connect on LinkedIn: https://www.linkedin.com/in/warren-atkinson/