Abstract

Video segmentation is a stepping stone to understanding video content: it represents a video by decomposing it into coherent regions that comprise whole objects or their parts. The challenge stems from the fact that most video segmentation algorithms must rely on unsupervised learning, owing to the expensive cost of pixelwise video annotation and the intra-class variability within similar unconstrained video classes. We propose a Markov Random Field model for unconstrained video segmentation that relies on a tight integration of multiple cues: vertices are defined from contour-based superpixels, unary potentials from temporally smooth label likelihoods, and pairwise potentials from the global structure of a video. This multi-cue structure is key to extracting coherent object regions from unconstrained videos in the absence of supervision. Our experiments on the VSB100 dataset show that the proposed model significantly outperforms competing state-of-the-art algorithms. Qualitative analysis illustrates that the video segmentation results of the proposed model are consistent with human perception of objects.

**1 Overview**

Figure 1: Overview of the framework. (a) Node potentials depend on the histogram of temporally smooth pixelwise labels in the corresponding frame. Spatial edge potentials: (b) gray intensity represents contour strength; (c) RGB color is displayed for better visualization; (d) color represents motion direction; (e) color represents the visual word identity of each dense SIFT feature. Temporal edge potentials depend on the correspondence ratio of long trajectories and on color affinity. (f) Superpixels for the corresponding vertices in frame *f* are illustrated by object contours. For visualization purposes, coarse-grained superpixels are shown. Best viewed in color.

**2 Contributions**

In this paper, we propose a novel hierarchical video segmentation model that integrates temporally smooth labels and global structure consistency while preserving object boundaries. Our contributions are as follows:

• We propose a video segmentation model that preserves the multi-cue structure of object boundaries and temporally smooth labels with global spatio-temporal consistency.

• We propose an effective pairwise potential to represent spatio-temporal structure, evaluated on object boundary, color, optical flow, texture and long-trajectory correspondence.

• Video hierarchy is inferred by controlling graph edge consistency, which generalizes traditional hierarchy-induction approaches.

• The proposed method infers precise coarse-grained segmentation, where a segment may represent one whole object.

**3 Proposed Model**

### 3.1 Multi-Cue Structure Preserving MRF Model

An overview of our framework for video segmentation is depicted in Figure 1. A video is represented as a graph *G*=(*V*,*E*), where the vertex set is defined on contour-based superpixels from all frames *f*∈{1,⋯,*F*} in the video. For each frame, an object contour map is obtained from the contour detector [1]. A region enclosed by a contour forms a superpixel. The edge set *E* = *E*_S ∪ *E*_T describes the relationship for each pair of vertices: spatial edges (*i*,*j*)∈*E*_S connect vertices within the same frame, and temporal edges (*i*,*j*)∈*E*_T connect vertices across different frames.

Video segmentation is obtained by MAP inference on a Markov Random Field over this graph *G*, with joint distribution *P*(y) = exp(−*E*(y))/*Z*, where *Z* is the partition function. Vertex *i* is labeled as y_i from the label set *L* of size |*L*|. MAP inference is equivalent to the following energy minimization problem:

*E*(y) = ∑_{*i*∈*V*} θ_i ⋅ y_i + ∑_{(*i*,*j*)∈*E*} ψ_ij : Y_ij.   (1)

In (1), θ_i represents the node potentials for a vertex *i*∈*V* and ψ_ij the edge potentials for an edge (*i*,*j*)∈*E*. As with the edge set *E*, edge potentials are decomposed into spatial and temporal edge potentials, ψ_ij = ψ^S_ij + ψ^T_ij. The vector y_i indicates the label of vertex *i*, and Y_ij = y_i y_j^T is the label-pair indicator matrix for y_i and y_j. Operators ⋅ and : represent the inner product and Frobenius product, respectively. Spatial edge potentials are defined for each edge that connects two vertices in the same frame *f*. In contrast, temporal edge potentials are defined for each pair of vertices in different frames. It is worth noting that the proposed model includes spatial edges between two vertices that are not spatially adjacent and, similarly, temporal edges that are not limited to consecutive frames.

The set of vertices of the graph is defined from contour-based superpixels such that the inferred region labels preserve accurate object boundaries. Node potential parameters are obtained from temporally smooth label likelihoods. Edge potential parameters aggregate appearance and motion features to represent the global spatio-temporal structure of the video. MAP inference of the proposed Markov Random Field (MRF) model infers region labels which preserve object boundaries, attain temporal smoothness and are consistent with the global structure. Details are described in the following sections.

### 3.2 Node Potentials

Unary potential parameters θ_i represent the cost of labeling vertex *i*∈*V* with each label from the set *L*. While edge potentials represent the global spatio-temporal structure of a video, node potentials in the proposed model strengthen temporal smoothness for label inference. The temporally smooth label set *L* is obtained from greedy agglomerative clustering [10]. The clustering algorithm merges two adjacent blobs in a video when their color difference is smaller than the variance of each blob. Node potential parameters θ_i represent the labeling cost of vertex *i* as the negative log label likelihood.

Each superpixel is evaluated by the pixelwise cluster labels from *L*, and the label histogram represents the label likelihood for vertex *i*. As illustrated in Figure 1 (a), a superpixel contains a mixture of pixelwise temporally smooth labels because the agglomerative clustering [10] merges unstructured blobs. Let *h*_i(*b*) be the number of pixels with temporally smooth label *b* in the superpixel corresponding to vertex *i*; the node potential for label *b* is then

θ_i(*b*) = −log( *h*_i(*b*) / ∑_{*b*′} *h*_i(*b*′) ).   (6)

As described in Section 3.1, a vertex is defined on a superpixel which is enclosed by an object contour. Arbelaez et al. [1] extract object contours such that taking different threshold values on the contours produces different granularity levels of enclosed regions. In our proposed model, we take the set of vertices of a video frame *f* by a single threshold on contours, which results in fine-grained superpixels.
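As a concrete illustration, the node potentials above can be sketched as follows; the array layout (a per-pixel label map and a per-pixel superpixel id map) and the small smoothing constant are our assumptions for illustration, not details from the paper:

```python
import numpy as np

def node_potentials(pixel_labels, superpixel_ids, num_labels):
    """Negative log label likelihood per superpixel (a sketch of the
    node potential). pixel_labels: temporally smooth label per pixel;
    superpixel_ids: superpixel index per pixel."""
    num_sp = superpixel_ids.max() + 1
    theta = np.zeros((num_sp, num_labels))
    for i in range(num_sp):
        # histogram of temporally smooth labels inside superpixel i
        counts = np.bincount(pixel_labels[superpixel_ids == i],
                             minlength=num_labels)
        # small constant avoids log(0) for labels absent from the superpixel
        likelihood = (counts + 1e-9) / (counts.sum() + 1e-9 * num_labels)
        theta[i] = -np.log(likelihood)
    return theta
```

The lowest-cost label of a superpixel is then simply the majority temporally smooth label inside it.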

### 3.3 Spatial Edge Potentials

Binary edge potential parameters ψ consist of two different types, spatial and temporal edge potentials, ψ^S and ψ^T, respectively. Spatial edge potentials model the pairwise relationship of two vertices *i* and *j* within a single video frame *f*. We define these pairwise potentials as follows:

ψ^S_ij(*l*,*l*′) = w_ij if *l* ≠ *l*′, and 0 otherwise, where w_ij = ¼ (ψ^b_ij + ψ^c_ij + ψ^o_ij + ψ^t_ij) and the edge (*i*,*j*) is included in *E*_S only if w_ij ≥ τ.   (9)

A spatial edge potential parameter ψ^S_ij(*l*,*l*′) is the (*l*,*l*′) element of the matrix ψ^S_ij, which represents the cost of labeling a pair of vertices *i* and *j* as *l* and *l*′, respectively. It takes the form of a Potts energy, where all pairs of differing labels take a homogeneous cost. Spatial edge potentials are decomposed into ψ^b, ψ^c, ψ^o and ψ^t, which represent pairwise potentials in the channels of object boundary, color, optical flow direction and texture. The pairwise cost of having different labels is high if the two vertices *i* and *j* have high affinity in the corresponding channel. As a result, edge potentials increase the likelihood of assigning the same label to vertices *i* and *j* during energy minimization.

The edge potentials take equal weights on all channels. The importance of each channel may depend on video context, and different videos have dissimilar contexts. Learning weights for each channel is challenging and prone to overfitting due to the high variability of video context and the limited number of labeled video samples in the dataset. Hence, the proposed model weights all channels equally.

The model controls the granularity of segmentation through a threshold τ. In (9), the pairwise potential is thresholded by τ. If τ is set to a high value, only edges with higher affinity are included in the graph. On the other hand, if τ is set to a low value, the number of edges increases and more vertices are assigned to the same label because they are densely connected by the edge set. We next discuss each individual potential type in the context of our video segmentation model.
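A minimal sketch of the Potts pairwise construction and the τ thresholding described above; the equal-weight averaging of the four channels follows the text, while the function and variable names are ours:

```python
import numpy as np

def potts_edge_potential(affinity, num_labels):
    """Potts matrix: zero cost on the diagonal (equal labels),
    homogeneous affinity-proportional cost off the diagonal."""
    return affinity * (1.0 - np.eye(num_labels))

def build_spatial_edges(channel_affinities, tau, num_labels):
    """channel_affinities maps an edge (i, j) to its per-channel
    affinities (boundary, color, flow, texture). Edges whose averaged
    affinity falls below tau are dropped; a higher tau therefore gives
    a sparser graph and finer-grained segmentation."""
    edges = {}
    for (i, j), channels in channel_affinities.items():
        w = float(np.mean(channels))  # equal weights on all channels
        if w >= tau:
            edges[(i, j)] = potts_edge_potential(w, num_labels)
    return edges
```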

**Object Boundary Potentials ψ^b.** Object boundary potentials evaluate the cost of two vertices *i* and *j* in the same frame being assigned to different labels, in terms of object boundary information. The potential parameters are defined as follows:

ψ^b_ij = exp( − *B*(*i*,*j*) / σ_b ),

where *B*(*i*,*j*) represents the minimum boundary path weight among all possible paths from vertex *i* to *j*. The potentials are obtained from a Gaussian Radial Basis Function (RBF) of *B*(*i*,*j*), with the mean of *B* serving as the normalization term σ_b.

If the two superpixels *i* and *j* are adjacent, their object boundary potential is decided by the shared object contour strength *w*_e, where *e* is the edge connecting vertices *i* and *j* and the boundary strength is estimated by the contour detector [1]. The boundary potentials can be extended to non-adjacent vertices *i* and *j* by evaluating a path weight from vertex *i* to *j*. For each path *p* from vertex *i* to *j*, the boundary weight of path *p* is evaluated by taking the maximum edge weight max_{*e*∈*p*} *w*_e, where *e* is an edge along the path *p*. The algorithm to calculate *B*(*i*,*j*) is described in Algorithm 1, which modifies the Floyd–Warshall shortest-path algorithm.

Typically, a path in a graph is evaluated by the sum of edge weights along the path. However, for the boundary strength between two non-adjacent vertices, the total sum of edge weights along a path is not an effective measure because it is biased toward the number of edges in the path. For example, a path consisting of many edges with weak contour strength may have a higher path weight than another path consisting of fewer edges with strong contours. Therefore, we evaluate a path by the maximum edge weight along it, so the path weight is governed by the edge with the strongest contour strength.
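The max-edge path weight admits a simple Floyd–Warshall-style dynamic program. The sketch below is our reading of this construction (we do not reproduce the paper's Algorithm 1 verbatim): for every pair it computes the minimum, over all paths, of the maximum contour strength along the path.

```python
import numpy as np

def minimax_boundary_weights(W):
    """W[i, j]: contour strength on edge (i, j), np.inf if no edge.
    Returns D with D[i, j] = min over all paths p from i to j of the
    maximum edge weight along p (the minimax path weight)."""
    D = W.copy().astype(float)
    np.fill_diagonal(D, 0.0)
    n = D.shape[0]
    for k in range(n):
        # a path routed through k costs the stronger of its two halves
        via_k = np.maximum.outer(D[:, k], D[k, :])
        D = np.minimum(D, via_k)
    return D
```

On the airplane example of Figure 2, a long path of weak contours scores lower than a short path crossing the strong airplane boundary, so *Path 1* is preferred.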

Figure 2 illustrates the two path weight models: max edge weight and sum of edge weights. Figure 2 (a) shows contour strength, where red represents high strength. Two vertices, indicated by white arrows, are selected in an airplane. In Figure 2 (b), two paths are displayed. *Path 2* consists of fewer edges, but it intersects a strong contour that represents the boundary of the airplane. If we evaluate the object boundary score between the two vertices, *Path 1* should be preferred, since it connects the vertices within the airplane. Figure 2 (c) shows the edge-sum path weight from the vertex at the tail to all other vertices; the minimum path weight between the two vertices is attained by *Path 2*. On the other hand, Figure 2 (d) illustrates that the max-edge path weight selects *Path 1* as the minimum-weight path, which agrees with human perception of object hierarchy.

Figure 2: Comparison of two types of path weight models.

**Color Potentials ψ^c.** The color feature of each vertex is represented by a histogram over the CIELab color space in the corresponding superpixel. The color potential between vertices *i* and *j* is evaluated on the two color histograms *h*^c_i and *h*^c_j:

ψ^c_ij = exp( − EMD(*h*^c_i, *h*^c_j) / σ_c ),

where EMD(*h*^c_i, *h*^c_j) is the Earth Mover's Distance (EMD) between the histograms of vertices *i* and *j*, and σ_c is the normalization parameter.

Earth Mover's Distance [16] is a distance measure between two probability distributions. EMD is typically more accurate than the χ² distance in the color space of superpixels. An issue with the χ² distance is that if two histograms on the simplex share no non-zero color bins, they are evaluated at the maximum distance of 1. Therefore, the χ² distance between vertices *i* and *j* is the same as the distance between *i* and *k* if *i*,*j*,*k* share no color bins. This occurs often when comparing the color features of superpixels, because a superpixel is intended to exhibit coherent color, especially at the fine-grained level. Superpixels on different objects or different parts of an object may have different colors. For example, under the χ² distance, the distance between red and orange superpixels equals the distance between red and blue superpixels, because neither pair shares color bins. However, this is not intuitive to human perception. In contrast, EMD accounts for the ground distance between color bins and is therefore able to distinguish non-overlapping color histograms.
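The red/orange/blue example can be made concrete with a tiny 1-D EMD, which for histograms on ordered, unit-spaced bins reduces to the L1 distance between their CDFs. The 5-bin layout below is a hypothetical illustration, not the paper's CIELab binning:

```python
import numpy as np

def emd_1d(p, q):
    """1-D Earth Mover's Distance between normalized histograms on
    ordered bins with unit spacing: the L1 distance between CDFs."""
    return np.abs(np.cumsum(p - q)).sum()

# Hypothetical 5-bin hue-like histograms: pure red, orange, blue.
red    = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
orange = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
blue   = np.array([0.0, 0.0, 0.0, 0.0, 1.0])

# A bin-to-bin distance such as chi-squared scores both pairs as
# maximally different, while EMD reflects the bin ordering:
# emd_1d(red, orange) = 1.0, emd_1d(red, blue) = 4.0
```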

**Optical Flow Direction Potentials ψ^o.** In each video frame, the motion direction feature of the *i*th vertex is obtained from a histogram of optical flow directions *h*^o_i. As with the color potentials, we use the EMD between the two histograms *h*^o_i and *h*^o_j to accurately estimate the difference in motion direction:

ψ^o_ij = exp( − EMD(*h*^o_i, *h*^o_j) / σ_o ),

where σ_o is the mean EMD on the optical flow histograms.

**Texture Potentials ψ^t.** Dense SIFT features are extracted for each superpixel, and a Bag-of-Words (BoW) model is obtained by K-means clustering on the D-SIFT features. We evaluate the SIFT features on multiple dictionaries of different sizes *K*. Texture potentials are calculated from an RBF on the χ² distance of the two BoW histograms *h*^t_i and *h*^t_j, a typical choice of distance measure for BoW models:

ψ^t_ij = exp( − χ²(*h*^t_i, *h*^t_j) / σ_t ),

where the parameter σ_t is the mean χ² distance on the D-SIFT word histograms.
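A sketch of the χ²-plus-RBF construction used for the texture channel; the ε guard and the names are ours, and σ would be the mean χ² distance over the D-SIFT word histograms, as the text states:

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two normalized BoW histograms;
    disjoint histograms reach the maximum distance of 1."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def texture_potential(h1, h2, sigma):
    """RBF on the chi-squared distance: identical histograms give
    potential 1, dissimilar ones decay toward 0."""
    return float(np.exp(-chi2_distance(h1, h2) / sigma))
```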

### 3.4 Temporal Edge Potentials

Temporal edge potentials define the correspondence of vertices at different frames. They rely on long trajectories, which convey long-range temporal dependencies and are more robust than optical flow:

ψ^T_ij = ( |*T*_i ∩ *T*_j| / min(|*T*_i|, |*T*_j|) ) ⋅ ψ^c_ij,   (14)

where *T*_i is the set of long trajectories which pass through vertex *i*. The pairwise potential represents the temporal correspondence of two vertices through the overlap ratio of the long trajectories that vertices *i* and *j* share, where *i*∈*V*_f, *j*∈*V*_f′ and *f*≠*f*′. In order to distinguish two different objects with the same motion, we integrate the color potential between the two vertices. Long trajectories are extracted from [18].
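The trajectory-overlap affinity can be sketched as below; the min-based normalization of the correspondence ratio is our assumption for illustration, and the function name is ours:

```python
def temporal_affinity(traj_i, traj_j):
    """Overlap ratio of the long-trajectory id sets passing through two
    superpixels in different frames (a sketch; the exact normalization
    in the paper may differ)."""
    set_i, set_j = set(traj_i), set(traj_j)
    if not set_i or not set_j:
        return 0.0  # no shared long-range evidence
    return len(set_i & set_j) / min(len(set_i), len(set_j))
```

This affinity would then be combined with the color potential of the two vertices to separate different objects that move with the same motion.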

### 3.5 Hierarchical Inference on Segmentation Labels

The proposed model attains hierarchical inference of segmentation labels by controlling the number of edges over a fixed set of vertices defined at the finest level of superpixels. As the edge set becomes denser, the energy function in (1) accrues higher penalties from the pairwise potentials. As a consequence, vertices connected by dense edges are assigned to the same label, which leads to coarse-grained segmentation.

In contrast, another approach that enables hierarchical segmentation is to define a hierarchical vertex set in the graph. A set of vertices at a finer level is connected to a vertex at a coarser level, introducing another set of edges that connect vertices at different levels of the hierarchy.

Our proposed approach to hierarchical inference has computational advantages over a graph representation with a hierarchical vertex set. Our graph representation has fewer vertices and edges because it keeps a single, finest level of hierarchy without additional vertices for coarser levels. This not only enables efficient graph inference, but also avoids the computation of node and edge potentials for additional vertex and edge sets.

**4 Experimental Evaluation**

### 4.1 Dataset

We evaluate the proposed model on the VSB100 video segmentation benchmark provided by Galasso et al. [9]. There are a few other video datasets with pixelwise annotation. The FBMS-59 dataset [15] consists of 59 video sequences and the SegTrack v2 dataset [13] consists of 14 sequences. However, both datasets annotate only a few major objects, leaving the whole background area as one label, which makes them more appropriate for object tracking or background subtraction tasks. In contrast, VSB100 consists of 60 test video sequences of up to 121 frames. For each video, every 20th frame is annotated with pixelwise segmentation labels by four annotators. The dataset contains the largest number of video sequences annotated with pixelwise labels, which allows quantitative analysis. The dataset also provides a set of evaluation measures.

**Volume Precision-Recall.** The VPR score measures the volume overlap between the segmentation result *S* of the proposed algorithm and the ground truths annotated by *M* annotators. Over-segmentation yields high precision with a low recall score.

**Boundary Precision-Recall.** The BPR score measures the overlap between the object boundaries of the segmentation result *S* and the ground-truth boundaries. Conversely to VPR, over-segmentation yields low precision with high recall.

### 4.2 MSP-MRF Setup

In this section, we present the detailed setup of our Multi-Cue Structure Preserving Markov Random Field (MSP-MRF) model for the unconstrained video segmentation problem. As described in Section 3.2, we take a single threshold on image contours, so that each frame contains approximately 100 superpixels. We assume that this granularity level is fine enough that no superpixel at this level overlays multiple ground-truth regions. The node potential (6) is evaluated for each superpixel with temporally smooth labels obtained by agglomerative clustering [10]. Although we chose the 11th fine-grained level of the hierarchy, Section 4.4 illustrates that the proposed method shows stable performance over different label set sizes |*L*| for the node potential. Finally, edge potentials are estimated as in (9) and (14). For color histograms, we used 50 bins for each CIELab color channel. In addition, 50 bins were used for the horizontal and vertical motion of optical flow. For the D-SIFT Bag-of-Words model, we used 5 dictionaries of *K*=100, 200, 400, 800, 1000 words. The energy minimization problem (1) for MRF inference is optimized using the FastPD algorithm [12].

Figure 3: Temporal consistency recovered by MSP-MRF.

Figure 4: Comparison of segmentation boundary on the same granularity levels on two videos.

### 4.3 Qualitative Analysis

Figure 3 illustrates the segmentation result on an *airplane* video sequence. MSP-MRF rectifies the temporally inconsistent segmentation result of [10]. For example, in the fourth column of Figure 3, the red bounding boxes show that MSP-MRF rectified labels from Grundmann's result such that labels across frames become spatio-temporally consistent.

In addition, the control parameter τ successfully obtains different granularity levels of segmentation. For MSP-MRF, the number of region labels decreases as τ decreases. Figure 4 compares video segmentation results of MSP-MRF with Grundmann's by displaying segmentation boundaries at the same granularity levels, where the two methods have the same number of segments in the video. MSP-MRF infers spatially smooth object regions, which illustrates that the proposed model successfully captures the spatial structure of objects.

Figure 5: PR curve comparison to other models.

Figure 6: PR curve on different size of label set *L*.

Table 1: Performance of MSP-MRF model compared with state-of-the-art video segmentation algorithms on VSB100.

### 4.4 PR Curve on High recall regions

We specifically consider the high-recall regions of segmentation, since we are typically interested in videos with relatively few objects. Our proposed method improves and rectifies the state-of-the-art video segmentation of greedy agglomerative clustering [10], because it makes use of the structural information of object boundary, color, optical flow, texture and temporal correspondence from long trajectories. Figure 5 shows that the proposed method achieves significant improvement over state-of-the-art algorithms. MSP-MRF improves both BPR and VPR scores, approaching the *Oracle*, which evaluates contour-based superpixels on the ground truth. Hence, it is worth noting that the *Oracle* gives the best accuracy that MSP-MRF could possibly achieve, because MSP-MRF takes its contour-based superpixels from [1] as well.

The proposed MSP-MRF model rectifies agglomerative clustering by merging two different labels of vertices if doing so reduces the overall cost defined in (1). Increasing the number of edges in the graph by lowering the threshold value leads to coarser-grained segmentation. As a result, MSP-MRF covers only the higher-recall regions of the precision-recall curves for the selected label set size |*L*| from [10]. A hybrid model that covers the high-precision regions is described in Section 4.5.

Figure 6 illustrates the PR curve of MSP-MRF for different granularity levels of the label set |*L*| in the node potential (6). The dashed green line is the result of greedy agglomerative clustering [10]. The solid green line is the result of MSP-MRF with the edge threshold τ set to 1, which leaves no edges in the graph. The figure shows that the results of MSP-MRF are stable over different sizes of |*L*|, particularly in the high-recall regions.

### 4.5 Hybrid Model for Over Segmentation

The proposed model effectively merges the labels of pairs of nodes according to the edge set *E*. As the number of edges increases, the size of the inferred label set decreases from |*L*|, which covers higher-recall regions. Although we are interested in high-recall regions, the model also needs to be evaluated on the high-precision regions of the PR curve. For this purpose, we adopt a hybrid model that takes the rectified segmentation results from MSP-MRF in the high-recall regions but retains the segmentation result of [10] in the high-precision regions as an unrectified baseline.

Table 1 shows a performance comparison with state-of-the-art video segmentation algorithms. The proposed MSP-MRF model outperforms state-of-the-art algorithms on most of the evaluation metrics. BPR and VPR are described in Section 4.1. The optimal dataset scale (ODS) aggregates F-scores at a single fixed scale of the PR curve across all video sequences, while the optimal segmentation scale (OSS) selects the best F-score at a different scale for each video sequence. All evaluation metrics follow the benchmark [9]. It is worth noting that our MSP-MRF model achieves the best ODS and OSS results for both the BPR and VPR measures, which are equivalent to the results of the *Oracle*. As described in Section 4.4, the *Oracle* is a model that evaluates contour-based superpixels on the ground truth.

MSP-MRF infers segmentation labels by integrating object boundaries, global structure and temporal smoothness based on [10]. The results show that incorporating boundary and global structure rectifies [10] by a significant margin. It should be noted that the result of [10] is higher than previously reported in [9]. We assume this is due to implementation updates of [10] over recent years. Qualitatively, we observe that the recent implementation of [10] detects objects whose appearance is less distinctive from the background, where the previous implementation could not identify objects under those circumstances.

**5 Conclusion**

In this paper, we have presented a novel video segmentation model that considers three important aspects of video segmentation. The model preserves object boundaries by defining the vertex set from contour-based superpixels. In addition, temporally smooth labels are inferred by providing unary node potentials from the agglomerative clustering label likelihood. Finally, global structure is enforced through pairwise edge potentials on object boundary, color, optical flow motion, texture and long-trajectory affinities. Experimental evaluation shows that the proposed model outperforms state-of-the-art video segmentation algorithms on most of the metrics.

References

[1] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. *IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)*, 33(5):898–916, May 2011.

[2] V. Badrinarayanan, F. Galasso, and R. Cipolla. Label propagation in video sequences. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2010.

[3] G. J. Brostow, J. Fauqueur, and R. Cipolla. Semantic object classes in video: A high-definition ground truth database. *Pattern Recognition Letters*, 2008.

[4] T. Brox and J. Malik. Large displacement optical flow: descriptor matching in variational motion estimation. *IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)*, 33(3):500–513, 2011.

[5] A. Elqursh and A. M. Elgammal. Online motion segmentation using dynamic label propagation. In *IEEE International Conference on Computer Vision (ICCV)*, pages 2008–2015, 2013.

[6] B. Fröhlich, E. Rodner, M. Kemmler, and J. Denzler. Large-scale Gaussian process multi-class classification for semantic segmentation and facade recognition. *Machine Vision and Applications*, 24(5):1043–1053, 2013.

[7] F. Galasso, R. Cipolla, and B. Schiele. Video segmentation with superpixels. In *Asian Conference on Computer Vision (ACCV)*, 2012.

[8] F. Galasso, M. Keuper, T. Brox, and B. Schiele. Spectral graph reduction for efficient image and streaming video segmentation. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2014.

[9] F. Galasso, N. S. Nagaraja, T. J. Cardenas, T. Brox, and B. Schiele. A unified video segmentation benchmark: Annotation, metrics and analysis. In *IEEE International Conference on Computer Vision (ICCV)*, December 2013.

[10] M. Grundmann, V. Kwatra, M. Han, and I. Essa. Efficient hierarchical graph based video segmentation. *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2010.

[11] A. Khoreva, F. Galasso, M. Hein, and B. Schiele. Learning must-link constraints for video segmentation based on spectral clustering. In *German Conference on Pattern Recognition (GCPR)*, 2014.

[12] N. Komodakis and G. Tziritas. Approximate labeling via graph cuts based on linear programming. *IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)*, 29(8):1436–1453, Aug. 2007.

[13] F. Li, T. Kim, A. Humayun, D. Tsai, and J. M. Rehg. Video segmentation by tracking many figure-ground segments. In *IEEE International Conference on Computer Vision (ICCV)*, 2013.

[14] B. Nadler and M. Galun. Fundamental limitations of spectral clustering methods. In B. Schölkopf, J. Platt, and T. Hoffman, editors, *Advances in Neural Information Processing Systems (NIPS)*, Cambridge, MA, 2007. MIT Press.

[15] P. Ochs, J. Malik, and T. Brox. Segmentation of moving objects by long term video analysis. *IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)*, 36(6):1187–1200, June 2014.

[16] O. Pele and M. Werman. Fast and robust earth mover’s distances. In *IEEE International Conference on Computer Vision (ICCV)*, 2009.

[17] P. Ochs and T. Brox. Object segmentation in video: a hierarchical variational approach for turning point trajectories into dense regions. In *IEEE International Conference on Computer Vision (ICCV)*, 2011.

[18] T. Brox and J. Malik. Object segmentation by long term analysis of point trajectories. In *European Conference on Computer Vision (ECCV)*, Lecture Notes in Computer Science. Springer, Sept. 2010.

[19] C. Zhang, L. Wang, and R. Yang. Semantic segmentation of urban scenes using dense depth maps. In *European Conference on Computer Vision (ECCV)*, pages 708–721, Berlin, Heidelberg, 2010. Springer-Verlag.