Today, Microsoft announced Copilot Vision, a new feature in Copilot Labs. Copilot Labs is a program through which Microsoft tests experimental AI features with a small subset of users to gather feedback and improve them before a wider release. Copilot Vision is similar to the ChatGPT video capabilities that OpenAI demoed earlier this year: it provides real-time assistance based on the content on your screen.
For now, Copilot Vision works only inside Microsoft Edge. Once enabled, you can ask Copilot questions about the content of a supported website as you browse. Copilot Vision can also suggest next steps and help with the task at hand, and you can speak your queries naturally instead of typing them. Think of it as an expert sitting next to you, looking at your screen and helping you along the way.
Microsoft designed Copilot Vision with privacy as a core principle. It is an entirely opt-in feature, enabled only when users explicitly choose to turn it on. Microsoft does not store or use any data from Copilot Vision sessions (audio, images, text, or conversations) for model training, and once a Vision session ends, all of its data is permanently deleted.
Additionally, Copilot Vision will not work on websites with paywalls or sensitive content. During this preview phase, Microsoft is enabling Copilot Vision only on a pre-approved list of websites. Microsoft will expand Copilot Vision to more websites in the future, prioritizing safety and responsibility.
Finally, it is important to note that Copilot Vision is not an agent. It cannot take actions on your behalf; it only answers questions about what you are viewing as you browse.
Copilot Vision is now rolling out to a subset of Copilot Pro users in the United States. It has the potential to revolutionize how we interact with the web, providing a more intuitive and helpful browsing experience. As Microsoft continues to develop and expand this feature, we can expect an even more useful Copilot experience in the future.