Meta, the owner of Facebook, has published a new artificial intelligence model that, when prompted, can identify individual objects within an image. It can also select objects based on text typed into the model.
The tool, named Segment Anything, works by identifying which image pixels belong to an object. It is being launched as a project that includes not only the model but also a task and a dataset, described in an accompanying research paper.
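The core output described above, which pixels belong to an object, is typically represented as a binary mask. This is a minimal sketch of that idea using NumPy, not Meta's code: a tiny synthetic image with a colored "object", and a boolean array of the same height and width marking the object's pixels.

```python
import numpy as np

# Tiny 4x6 RGB image, all black except a red rectangular "object".
image = np.zeros((4, 6, 3), dtype=np.uint8)
image[1:3, 2:5] = [255, 0, 0]  # the "object": 2 rows x 3 columns of red

# A segmentation mask: one boolean per pixel, True where the pixel
# belongs to the object. Here we select the red pixels by exact color;
# a model like SAM would instead predict this mask from a prompt.
mask = (image == [255, 0, 0]).all(axis=-1)

print(mask.shape)  # (4, 6) -- same spatial size as the image
print(mask.sum())  # 6 -- number of pixels assigned to the object
```

A segmentation model's job is to produce such masks for arbitrary objects in real photographs, rather than by matching a known color.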
The model itself, named the Segment Anything Model (SAM), is available under a permissive open license (Apache 2.0). The dataset, named the Segment Anything 1-Billion mask dataset (SA-1B), is touted by Meta as the largest segmentation dataset ever released, intended to enable a broad set of applications, and will be made available for research purposes.
Meta goes further, saying that SAM could power applications that require finding and segmenting any object in any image across numerous domains. The model could also become a component of a larger, more capable AI system in the future, and could be used to enhance AR and VR applications by selecting objects based on where the user is looking.
Technology based on SAM is already in use within Meta, particularly on Facebook and Instagram, where it helps users tag photos, moderates prohibited content, and feeds the algorithms that determine which posts to show users.
Meta has launched a demo of the model for anyone to try with their own images to see what the AI is capable of.