Microsoft has been steadily adding generative AI features to its products and branding them as "Copilots". From search assistance in Bing Chat to document-creation help in Microsoft 365 Copilot and more, the company's consistent message has been that these AI features assist humans rather than replace them. Today, that trend continues with the just-announced Microsoft Security Copilot.
The new service, like other Microsoft AI products, is based on OpenAI's GPT-4. It was introduced today as part of the company's first Microsoft Secure event. The company stated:
When Security Copilot receives a prompt from a security professional, it uses the full power of the security-specific model to deploy skills and queries that maximize the value of the latest large language model capabilities. And this is unique to a security use-case. Our cyber-trained model adds a learning system to create and tune new skills. Security Copilot then can help catch what other approaches might miss and augment an analyst’s work. In a typical incident, this boost translates into gains in the quality of detection, speed of response and ability to strengthen security posture.
Like other current-generation AI products, Microsoft acknowledges that Security Copilot can make mistakes. One showed up in the announcement video, where Copilot referred to "Windows 9", a version of Windows that does not exist. Even so, Microsoft believes the tool will ultimately help security professionals find and respond to threats much faster than conventional methods allow.
Since this is a security product, you would expect Microsoft Security Copilot itself to be safe and secure to use. Microsoft says companies that adopt the service will retain control of their own data, and that a customer's data won't be used to train outside AI models. The service is currently in private preview, and there's no word yet on when it will be generally available.