The Segment Anything Model (SAM) has demonstrated remarkable capability as a general segmentation model given visual prompts such as points or boxes. While SAM is conceptually compatible with text ...
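To make the point/box prompting interface concrete, here is a minimal sketch using Meta AI's `segment_anything` package. The checkpoint filename, image path, and prompt coordinates are illustrative placeholders, not values from the abstract above.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM backbone (checkpoint path is a placeholder).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# SAM expects an HxWx3 uint8 RGB image.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Visual prompts: one foreground point (label 1) and a bounding box
# in XYXY pixel coordinates (all values are illustrative).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    box=np.array([100, 80, 540, 400]),
    multimask_output=True,  # return several candidate masks
)
best_mask = masks[np.argmax(scores)]  # keep the highest-scoring mask
```

Passing both a point and a box is optional; either prompt type alone is also a valid input to `predict`.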
The performance gap between unsupervised segmentation models and SAM can be significantly reduced. UnSAM not only advances the state of the art in unsupervised segmentation by 10% but also achieves ...
Vision foundation models (VFMs), such as the Segment Anything Model (SAM), allow zero-shot or interactive segmentation of visual content; as a result, they are being rapidly applied across a variety of visual scenes.
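As a complement to the interactive example above, the zero-shot (prompt-free) mode can be sketched with SAM's automatic mask generator, which proposes masks for the whole image without any user input. Again, the checkpoint name and image path are placeholders.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Zero-shot use: no prompts; SAM proposes masks for everything it finds.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)

# Each entry is a dict with a binary 'segmentation' mask plus metadata
# such as 'area', 'bbox' (in XYWH format), and 'predicted_iou'.
print(len(masks), masks[0]["bbox"])
```

The contrast between the two sketches is the distinction the abstracts draw: interactive segmentation consumes visual prompts, while zero-shot segmentation runs prompt-free over the full image.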