Microsoft's AI research team accidentally exposed tens of terabytes of internal data, including passwords and private keys. The data was exposed through a misconfigured Azure Storage URL published in a GitHub repository.
Users of the GitHub repository, in which Microsoft provided open-source code and AI models for image recognition, were directed to download the models from an Azure Storage URL.
According to cloud security startup Wiz, the URL granted access to the entire storage account rather than to the intended files alone, exposing 38 terabytes of sensitive information, including private keys and passwords.
The exposed storage also included disk backups of two Microsoft employees' personal workstations, as well as more than 30,000 internal Microsoft Teams messages from Microsoft employees.
The URL was intended to grant read-only access but was misconfigured to allow full control. Because of this mistake, anyone with the link could have deleted, added, or replaced files, or injected malicious content into the stored data.
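For context, access to Azure Storage via a shareable URL is typically granted through a shared access signature (SAS) token appended to the URL, and the token's permissions and expiry determine what a holder of the link can do. The following is a minimal sketch of generating a narrowly scoped, read-only, time-limited container SAS with the Python azure-storage-blob SDK; the account, container, and key values are placeholders for illustration, not details from the incident.

from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Placeholder values -- not the actual account or container involved in the incident.
ACCOUNT_NAME = "examplestorageaccount"
CONTAINER_NAME = "public-models"
ACCOUNT_KEY = "<account-key>"

# Grant read and list access only, with a short expiry, so the token
# cannot be used to modify or delete data in the container.
sas_token = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER_NAME,
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)

download_url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER_NAME}?{sas_token}"
print(download_url)

Scoping the signature to a single container with read-only permissions and an expiry date is what limits the blast radius if such a URL is shared publicly.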
In response to questions about the exposure, Microsoft stated that no customer data was exposed and that no other internal services were put at risk by the issue.