New AI tools hit cyber defense stocks
By Alimat Aliyeva
Anthropic's unveiling of a new automated code-scanning service, powered by its chatbot Claude, triggered a sharp decline in the shares of major U.S. cybersecurity companies. Investors interpreted the new product as a potential competitive threat to traditional software security solutions, AzerNEWS reports.
Over two trading sessions, shares of CrowdStrike and Zscaler fell by about 10%, while Netskope and Tenable declined by more than 12%. CrowdStrike CEO George Kurtz attempted to reassure investors, writing on LinkedIn that automated code scanning cannot replace comprehensive cyber defense platforms, but his comments failed to halt the sell-off.
Exchange-traded funds tracking the cybersecurity sector — iShares Cybersecurity and Tech ETF and Global X Cybersecurity ETF — dropped to multi-month lows, reflecting broader investor anxiety.
The negative momentum quickly spread across the wider software industry and into private capital markets. Shares of Salesforce, ServiceNow, and Oracle lost around 4%, while IBM plunged 13%, its worst performance in more than two decades. It was the second major technology-sector sell-off in a month linked to AI-driven workflow automation announcements.
The situation was further exacerbated by comments from Jenny Johnson, CEO of Franklin Templeton, who warned that corporate software risks becoming a low-margin commodity amid the rapid advancement of neural network technologies. As a result, major private equity firms such as Blackstone, Apollo Global Management, and KKR — all active lenders to the tech sector — also came under pressure, with their shares falling more than 5% on concerns about slowing capital inflows due to what analysts describe as “AI-driven disruption.”
According to analysts at Bank of America, Anthropic’s new tools pose a direct competitive threat primarily to highly specialized platforms such as GitLab and JFrog, which focus on DevOps workflows and code management.
Meanwhile, the Financial Times reported that Anthropic itself disclosed cyberattacks targeting its AI models. According to the company, several Chinese AI laboratories — including DeepSeek, Moonshot AI, and MiniMax — allegedly attempted so-called model “distillation” in order to extract data for training their own AI systems.
Anthropic claims that approximately 24,000 fake accounts were used in the process, potentially circumventing U.S. export restrictions on advanced chips such as the Nvidia H200. The company argues that such practices undermine fair competition and pose national security risks, as resulting AI systems may lack safeguards preventing the development of biological weapons or the execution of cyberattacks.
Market analysts note that the episode highlights a broader structural shift: investors are increasingly pricing in a scenario in which AI does not merely complement traditional enterprise software but fundamentally reshapes — and in some cases replaces — its core functions. If this trend continues, the software industry could face a transformation comparable in scale to the cloud computing revolution of the early 2010s.