In this work, we present two novel datasets for image scene understanding. Both datasets have annotations compatible with panoptic segmentation and, in addition, part-level labels for selected semantic classes. This report describes the format of the two datasets, the annotation protocols, and the merging strategies, and presents statistics for the datasets. The dataset labels, together with code for processing and visualization, will be published at https://github.com/tue-mps/panoptic_parts.
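To make the label format concrete, the sketch below shows how a per-pixel hierarchical label could be decoded into its semantic, instance, and part components. The digit-based packing used here (semantic id, optionally followed by an instance id and a part id) is an illustrative assumption, not the authoritative encoding; the actual format and decoding utilities are defined in the panoptic_parts repository linked above.

```python
# Minimal sketch of decoding hierarchical panoptic-parts labels.
# ASSUMPTION: the digit-based packing below (semantic, semantic*1000 + instance,
# semantic*10^5 + instance*100 + part) is illustrative only; consult the
# panoptic_parts repository for the actual label specification.
import numpy as np

def decode_uid(uid: int):
    """Split an illustrative per-pixel uid into (semantic, instance, part).

    Assumed packing (hypothetical):
      1-2 digits: semantic id only                        -> (sid, None, None)
      4-5 digits: semantic * 1000 + instance              -> (sid, iid, None)
      6-7 digits: semantic * 10^5 + instance * 100 + part -> (sid, iid, pid)
    """
    if uid <= 99:
        return uid, None, None
    if uid <= 99_999:
        return uid // 1_000, uid % 1_000, None
    return uid // 100_000, (uid // 100) % 1_000, uid % 100

# Toy label map: a stuff pixel, a thing pixel with an instance id,
# and a thing pixel with both an instance id and a part id.
label_map = np.array([[26, 26001, 2600104],
                      [24, 24002, 2400201]])
decoded = [decode_uid(int(u)) for u in label_map.ravel()]
print(decoded)
```

A map like this keeps scene-level, instance-level, and part-level information in a single array, so panoptic metrics and part-level evaluation can be computed from the same source without separate annotation files.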