Microsoft wrote last week that its “investigations have not detected any other use of this pattern by other actors and Microsoft has taken steps to block related abuse.” But if the stolen signing key could have been used to breach other services, even if it wasn’t used this way in the recent incident, the finding has significant implications for the security of Microsoft’s cloud services and other platforms.
The attack “seems to have a broader scope than originally assumed,” the Wiz researchers wrote. They added, “This isn’t a Microsoft-specific issue—if a signing key for Google, Facebook, Okta, or any other major identity provider leaks, the implications are hard to comprehend.”
Microsoft’s products are ubiquitous worldwide, though, and Wiz’s Luttwak emphasizes that the incident should serve as an important warning.
“There are still questions that only Microsoft can answer. For example, when was the key compromised? And how?” he says. “Once we know that, the next question is, do we know it’s the only key that they had compromised?”
In response to China’s attack on US government cloud email accounts hosted by Microsoft—a campaign that US officials have described publicly as espionage—Microsoft announced this past week that it will make more of its cloud logging services free to all customers. Previously, customers had to pay for a license to Microsoft’s Purview Audit (Premium) offering to log the data.
The US Cybersecurity and Infrastructure Security Agency’s executive assistant director for cybersecurity, Eric Goldstein, wrote in a blog post also published this past week that “asking organizations to pay more for necessary logging is a recipe for inadequate visibility into investigating cybersecurity incidents and may allow adversaries to have dangerous levels of success in targeting American organizations.”
Since OpenAI revealed ChatGPT to the world last November, the potential of generative AI has been thrust into the mainstream. But it isn’t just text that can be created, and many of the emerging harms of the technology are only starting to be realized. This week, UK-based child safety charity the Internet Watch Foundation (IWF), which scours the web for child sexual abuse images and videos and removes them, revealed it is increasingly finding AI-generated abuse images online.
In June, the charity started logging AI images for the first time—saying it found seven URLs sharing dozens of images. These included AI generations of girls around 5 years old posing naked in sexual positions, according to the BBC. Other images were even more graphic. While generated content only represents a fraction of the child sexual abuse material available online overall, its existence is worrying experts. The IWF says it found guides on how people could create lifelike images of children using AI and that the creation of the images, which is illegal in many countries, is likely to normalize and encourage predatory behaviors toward children.
After threatening to roll out global password-sharing crackdowns for years, Netflix launched the initiatives in the US and UK at the end of May. And the effort seems to be going as planned. In earnings reported on Thursday, the company said that it added 5.9 million new subscribers in the past three months, a jump nearly three times higher than analysts predicted. Streaming subscribers have grown accustomed to sharing passwords and balked at Netflix’s strict new rules, which were prompted by stagnating new subscriber signups. But ultimately, at least a portion of account-sharers seem to have bitten the bullet and started paying on their own.