

AI & MSP News
2 May 2026
9 min read

Australia Left Behind as Anthropic's Mythos AI Tool Boosts Cyber Risk


Australian Firms Face Rising AI Cybersecurity Risks Without Access to Mythos

Major United States corporations have been granted exclusive early access to Anthropic’s Mythos tool to prepare for advanced digital threats, leaving critical Australian infrastructure providers without the same defensive capabilities. This selective release means local banks, power providers, and essential services currently lack the access required to test their systems against these specific AI cybersecurity risks. As these non-public tools become more powerful, the inability to stress-test local networks creates a significant gap in our national defence.

The Growing Threat to Australian Infrastructure Security

The current cyber threat landscape is shifting as high-level AI capabilities move from theoretical research to practical, albeit restricted, applications. Without access to tools like Anthropic Mythos, Australian firms are effectively flying blind against the very technologies that could soon be used to automate large-scale hacking attempts. Expert Dimitri Vedeneev from CyberCX notes that this technology and its rapid development are "going to push the cybersecurity field forward" in unprecedented ways.

For Australian businesses, this delay in access represents a strategic disadvantage when trying to secure cloud solutions and physical assets. Essential service providers cannot afford to wait for a general release when the tools for potential disruption are already being refined behind closed doors. Maintaining a robust cybersecurity posture requires proactive testing that is currently impossible without the latest AI model capabilities.

Addressing the Gap in Local Defense Strategies

The speed at which these tools are being developed outpaces the current defensive protocols used by many domestic organisations. As automated hacking attempts become more sophisticated, the risk to Australian infrastructure security grows, particularly if local providers cannot simulate the attacks they are likely to face. This disparity highlights a growing divide between international AI developers and the security needs of Australian businesses.

Bridging this gap will require more than just updated software; it demands a comprehensive AI strategy that accounts for the evolving nature of automated threats. As the technology continues to mature, the focus must remain on ensuring that Australian firms are not left vulnerable while their global counterparts build more resilient defences. This lack of access is merely one part of a broader shift in how digital intelligence is reshaping the world of corporate security.

Analyzing the Rapid Evolution of AI Model Capabilities

New artificial intelligence models are "getting better and better in terms of their cyber capabilities," according to Saeed Akhlaghpour from UQ Business School. This evolution signifies a broader trend where AI model capabilities are becoming increasingly sophisticated, moving beyond simple text generation into complex digital security tasks. For many organisations, these advancements introduce significant AI cybersecurity risks that require a fundamental shift in defensive thinking. While the current focus is on specific releases like Anthropic Mythos, the underlying trajectory suggests a permanent change in how digital threats are constructed.

The Escalating Sophistication of the Cyber Threat Landscape

The emergence of high-level AI tools presents a double-edged sword for the modern business environment. While new technologies offer substantial benefits to consumers, such as improved efficiency and personalisation, they simultaneously cause cyber risks to escalate in both scale and sophistication. This shift in the cyber threat landscape means that attacks can be launched more frequently and with greater precision than ever before. Businesses must now balance the drive for innovation with a robust cybersecurity posture that can withstand automated, intelligent threats.

Experts note that the offensive capabilities of these models are advancing regardless of the specific vendor or software developer involved. As these engines grow more powerful, they become capable of identifying and exploiting vulnerabilities at a speed that human security teams struggle to match. Maintaining Australian infrastructure security in this environment requires a proactive approach to monitoring how these tools are being utilised globally. Relying on legacy systems is no longer a viable option when the tools used by adversaries are constantly learning and evolving.

Awaiting the Democratisation of Advanced AI Tools

Although access to certain advanced models is currently restricted to a few major corporations, experts emphasise that these capabilities will eventually be available to all actors. Saeed Akhlaghpour suggests that while this reality is sobering, now "is not the moment to panic or to go to bunkers." Instead, the focus should be on preparing for a future where high-level AI is a standard component of both digital offence and defence. A well-defined AI strategy can help businesses anticipate these shifts before they become critical vulnerabilities.

Acknowledging the eventual democratisation of these tools is the first step toward long-term resilience. Organisations that fail to recognise this trend risk being caught off guard when sophisticated hacking tools move from private testing into the public domain. Preparing for this inevitability ensures that local firms are not just reacting to threats, but are actively building the infrastructure needed to survive them. This shift in capability also brings new pressures from regulators who expect companies to stay ahead of these emerging digital dangers.

Defending Australian Infrastructure Security Against Global Threats

Alastair MacGibbon, former government adviser, argues that the federal government must immediately bring together infrastructure providers, AI developers, and security firms to build a unified national strategy. This call to action stems from concerns that relying solely on international software development leaves our domestic services vulnerable to sudden technological shifts. While major tech companies may engineer at a global scale, MacGibbon warns that this "doesn't mean the service delivery will continue to an Australian citizen." Developing robust Australian infrastructure security requires a localised approach that prioritises the continuity of our domestic networks over generic global benchmarks.

Bridging the Disconnect in the Cyber Threat Landscape

The lack of local testing for specialised tools like Anthropic Mythos highlights a growing disconnect between international development and domestic security needs. When Australian firms are excluded from early-access programs, they lose the critical ability to model how these advanced AI model capabilities might interact with local legacy systems. This visibility gap creates significant AI cybersecurity risks for organisations that manage our most sensitive data sets. By the time these tools are eventually released to the public, the cyber threat landscape may have already evolved beyond what our current manual defences can handle.

Without a coordinated domestic testing framework, Australian businesses are forced to react to threats rather than anticipating them. This reactive posture is particularly dangerous for providers of essential services who cannot afford downtime. MacGibbon’s proposal for a united front aims to bridge this gap by ensuring local cybersecurity experts have the same technical foresight as their international counterparts. Moving forward, the focus must shift from basic compliance to active participation in the global AI safety dialogue.

Ensuring Resilience for Critical Services

Coordinated strategies are essential to ensure that critical services, such as electricity and banking, remain resilient against AI-driven disruptions. The federal government must facilitate an environment where infrastructure providers can collaborate directly with AI developers to stress-test domestic grids. Protecting these sectors requires more than just standard software updates; it necessitates a proactive AI strategy tailored to the unique requirements of the Australian market. If these systems fail, the impact on daily life—from financial transactions to home heating—would be immediate and severe.

Ensuring uninterrupted service for the average citizen depends on how quickly we can align our national defence with global technological shifts. As the speed of development continues to accelerate, the distance between high-level engineering and local service delivery must be closed. This effort is not just about adopting new tools, but about building a framework that keeps the Australian public safe from automated threats. As these sophisticated models become the new standard, the regulatory burden on local firms is also beginning to change.

Maintaining Compliance Under ASIC’s Technology-Neutral Obligations

The Australian Securities and Investments Commission (ASIC) has warned that financial services licensees must remain "on the front foot every day" to ensure their customers are not exposed to harm through inadequate technical controls. This directive comes as AI cybersecurity risks grow in both scale and sophistication, challenging the traditional ways businesses protect sensitive financial data. ASIC has made it clear that while the tools may change, the responsibility to safeguard clients remains a constant requirement for every licensed entity.

The Role of Technology Neutrality in AI Model Capabilities

Existing consumer protection laws and director duties in Australia are fundamentally "technology neutral," meaning they apply to the use of artificial intelligence just as strictly as they do to legacy IT systems. Businesses cannot claim that the novelty or complexity of AI model capabilities absolves them of their legal obligations to provide secure services. Licensees must ensure that any implementation of automated tools does not breach current provisions or leave the cyber threat landscape unmanaged.

To meet these requirements, organisations are encouraged to conduct a thorough review of their current cybersecurity posture and operational resilience. ASIC expects firms to demonstrate that they have evaluated how new technologies might impact their ability to comply with existing financial services laws. This proactive approach is essential for maintaining trust as more powerful tools like Anthropic Mythos eventually become part of the standard corporate toolkit.

Securing Australian Infrastructure Security Through Proactive Governance

Director duties now involve a deeper understanding of how digital automation interacts with Australian infrastructure security and client privacy. Failing to implement a robust AI strategy that addresses these risks could result in significant regulatory scrutiny or legal consequences. As the speed of development accelerates, the gap between having the right technology and having the right controls becomes a critical vulnerability for local firms.

Ultimately, the burden of proof lies with the business to show that their use of AI has not increased the risk of harm to the public. By focusing on technology-neutral outcomes, firms can build a framework that accommodates future advancements while staying firmly within the bounds of Australian law. This focus on long-term stability ensures that even as the digital environment shifts, the core principles of consumer protection and corporate responsibility remain intact.

Frequently Asked Questions

What is Anthropic's Mythos and why is it significant?

Mythos is a powerful new AI tool developed by Anthropic that possesses advanced cybersecurity capabilities. It is significant because it has not been made public, yet major US firms are already using it to test their defenses, creating a gap for those without access.

Why are Australian firms currently unable to access Mythos?

While major US firms have been granted access to Mythos to prepare for AI-driven threats, the tool has not been released to Australian local banks, power providers, or infrastructure firms. This delay highlights the challenges Australian organisations face in keeping pace with global AI development.

What does ASIC require from businesses using AI?

ASIC expects businesses to ensure that their use of AI does not breach existing consumer protection laws or director duties. Because these obligations are technology neutral, firms must proactively manage AI-related risks to prevent their customers from being harmed by inadequate security controls.

Future-Proof Your Business with OnIT Solutions

Staying on top of AI and technology trends is critical for Australian SMBs. Our team helps you cut through the noise and implement the right solutions for your business. Talk to our AI Strategy team about what today's developments mean for your organisation — or explore our full range of Managed IT Services.
