
Dirichlet-based Histogram Feature Transform

Histogram-based features, such as SIFT local descriptors, have contributed significantly to recent progress in image classification. In this paper, we propose a method to efficiently transform such histogram features to improve classification performance. The (L1-normalized) histogram feature is regarded as a probability mass function and modeled by a Dirichlet distribution. Based on this probabilistic modeling, we derive the Dirichlet Fisher kernel for transforming the histogram feature vector. The method operates on each individual histogram feature, enhancing its discriminative power at low computational cost. In the bag-of-features (BoF) framework, moreover, the Dirichlet mixture model over histogram-based local descriptors, e.g., SIFT, can be extended to a Gaussian mixture by transforming the descriptors, and we thereby propose the Dirichlet-derived GMM Fisher kernel.
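As a rough illustration of the idea (not the exact transform from the paper; the function name and the smoothing constant `eps` are our own), the data-dependent part of the Fisher score of a histogram under a Dirichlet model reduces to an element-wise logarithm, since the digamma terms depend only on the model parameters:

```python
import math

def dirichlet_fisher_transform(hist, eps=1e-6):
    """Transform an L1-normalized histogram via the Fisher score of a
    Dirichlet model: in
    d/d alpha_i log Dir(p; alpha) = log p_i - psi(alpha_i) + psi(sum alpha)
    only the element-wise log depends on the data."""
    total = float(sum(hist)) or 1.0
    p = [v / total for v in hist]          # probability mass function
    return [math.log(pi + eps) for pi in p]
```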

  1. T. Kobayashi, "Dirichlet-based Histogram Feature Transform for Image Classification," Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014 (to appear). pdf

matlab code

Image Feature by Histogram of Oriented p.d.f Gradients

We propose a novel feature extraction method for image classification. Following the BoF approach, a large number of local descriptors are first extracted from an image, and the proposed method is built upon the probability density function (p.d.f) formed by those descriptors. Since the p.d.f essentially represents the image, we extract features from it by means of its gradients. The gradients, especially their orientations, effectively characterize the shape of the p.d.f from a geometrical viewpoint. We construct the features as a histogram of the oriented p.d.f gradients, via orientation coding followed by aggregation of the orientation codes.
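A toy sketch of this pipeline on 2-D descriptors, assuming a Gaussian kernel density estimate and finite-difference gradients (grid size, bin count, and bandwidth defaults are illustrative, not the paper's settings):

```python
import math

def hopg_histogram(points, grid=8, bins=8, bw=0.15):
    """Histogram of oriented p.d.f gradients over [0,1]^2 (toy sketch)."""
    def pdf(x, y):
        # unnormalized Gaussian kernel density estimate of the descriptors
        return sum(math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * bw * bw))
                   for px, py in points)

    step = 1.0 / grid
    hist = [0.0] * bins
    for i in range(grid):
        for j in range(grid):
            x, y = (i + 0.5) * step, (j + 0.5) * step
            # finite-difference gradient of the p.d.f
            gx = (pdf(x + step, y) - pdf(x - step, y)) / (2 * step)
            gy = (pdf(x, y + step) - pdf(x, y - step)) / (2 * step)
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % (2 * math.pi)
            # orientation coding: magnitude-weighted vote into an angular bin
            hist[int(ang / (2 * math.pi) * bins) % bins] += mag
    return hist
```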


  1. T. Kobayashi, "BoF meets HOG: Feature Extraction based on Histograms of Oriented p.d.f Gradients for Image Classification," Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 747-754, 2013. pdf

matlab code

Bilinear Classification

We propose a low-rank bilinear classifier based on an efficient optimization formulated in a tractable convex form. In the proposed method, the classifier (matrix) is optimized by minimizing its trace norm, which promotes rank reduction for an efficient classifier without any hard constraint on the rank. In addition, by considering a kernel-based extension of the bilinear method, we derive a novel multiple kernel learning (MKL) scheme, called heterogeneous MKL. The method combines inter kernels between heterogeneous types of features and ordinary kernels within homogeneous features into a new discriminative kernel in a unified manner using the bilinear model.
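The bilinear scoring function itself can be sketched as follows, storing the low-rank classifier as factor vectors; this only illustrates the identity f(X) = tr(W^T X) + b with W = sum_k u_k v_k^T, not the paper's trace-norm optimization:

```python
def bilinear_score(X, U, V, b=0.0):
    """Rank-r bilinear classifier W = sum_k u_k v_k^T applied to a
    feature matrix X: f(X) = trace(W^T X) + b = sum_k u_k^T X v_k + b.
    U and V are lists of the r factor vectors u_k and v_k."""
    s = b
    for u, v in zip(U, V):
        # X v_k, then u_k^T (X v_k)
        Xv = [sum(row[j] * v[j] for j in range(len(v))) for row in X]
        s += sum(u[i] * Xv[i] for i in range(len(u)))
    return s
```

For example, a rank-1 classifier with u = (1, 0) and v = (0, 1) picks out the magnitude of the (1, 2) entry of X.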

  1. T. Kobayashi, N. Otsu, "Efficient Optimization For Low-Rank Integrated Bilinear Classifiers," Proc. European Conference on Computer Vision (ECCV), pp. 474-487, 2012. pdf  supplementary material

matlab code

Similarity

We propose a kernel-based formulation of transition probabilities by considering kernel least squares in a probabilistic framework. Similarities are then derived from these kernel-based transition probabilities, which are efficiently computed, and the resulting similarities are inherently sparse without applying k-NN. For multiple types of kernel functions, the multiple transition probabilities can be integrated with prior probabilities, i.e., linear weights, and we propose a computationally efficient method to optimize the weights in a discriminative manner, as in multiple kernel learning. The novel similarity is thereby constructed from the composite transition probability, and it benefits semi-supervised learning methods as well.
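A simplified sketch of the construction: here the transition probabilities are taken as a row-normalized Gaussian kernel (the paper instead derives them from kernel least squares, which is what yields the sparsity), and the similarity is the symmetrized transition probability:

```python
import math

def transition_similarity(points, gamma=1.0):
    """Similarity from symmetrized transition probabilities (toy sketch)."""
    n = len(points)
    # Gaussian kernel Gram matrix
    K = [[math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(points[i], points[j])))
          for j in range(n)] for i in range(n)]
    # row-normalize (zeroing the diagonal) to get transitions p(j | i)
    P = []
    for i in range(n):
        row = [K[i][j] if j != i else 0.0 for j in range(n)]
        z = sum(row) or 1.0
        P.append([v / z for v in row])
    # symmetrize the transition probabilities into a similarity
    return [[0.5 * (P[i][j] + P[j][i]) for j in range(n)] for i in range(n)]
```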

  1. T. Kobayashi, N. Otsu, "Efficient Similarity Derived From Kernel-based Transition Probability," Proc. European Conference on Computer Vision (ECCV), pp. 371-385, 2012. pdf

matlab code

Logistic Label Propagation (LLP)

We propose a novel method for semi-supervised learning, called logistic label propagation (LLP). The proposed method employs the logistic function to classify input pattern vectors, as in logistic regression. To cope with unlabeled samples as well as labeled ones in the semi-supervised learning framework, the logistic functions are learnt by using similarities between samples, in a manner similar to label propagation. In the proposed method, logistic regression and label propagation are effectively combined in terms of posterior probabilities.
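For context, the classic label-propagation iteration that LLP builds on can be sketched as below; LLP itself replaces these propagated scores with logistic posteriors, which this toy does not do:

```python
def label_propagation(S, Y, alpha=0.8, iters=50):
    """Iterate F <- alpha * S F + (1 - alpha) * Y on a row-normalized
    similarity matrix S; Y holds one-hot labels (zero rows if unlabeled)."""
    n, c = len(Y), len(Y[0])
    F = [row[:] for row in Y]
    for _ in range(iters):
        # propagate scores along similarities, then re-inject the labels
        F = [[alpha * sum(S[i][k] * F[k][j] for k in range(n))
              + (1 - alpha) * Y[i][j]
              for j in range(c)] for i in range(n)]
    return F
```

On a 3-node chain with the endpoints labeled with opposite classes, the propagated scores keep each labeled node on its own class while the middle node stays balanced.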

  1. T. Kobayashi, K. Watanabe, N. Otsu, "Logistic Label Propagation," Pattern Recognition Letters, Vol. 33, No. 5, pp. 580-588, 2012. pdf

matlab code

Spatio-Temporal Auto-Correlation of Gradients (STACOG)

We propose a novel motion feature extraction method, called spatio-temporal auto-correlation of gradients (STACOG). The method utilizes auto-correlations of space-time gradients of the three-dimensional motion shape in a video sequence, effectively exploiting the local relationships of the gradients that correspond to the space-time geometric characteristics of the motion. STACOG provides motion features of high discriminative power and enables fast motion recognition.
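A minimal sketch of the 0th-order term, histogramming quantized space-time gradients of a grayscale video; the full STACOG feature additionally auto-correlates the orientation codes over space-time offsets, which this sketch omits:

```python
import math

def stacog_0th(video, bins=8):
    """0th-order sketch: magnitude-weighted histogram of quantized
    spatial orientations of space-time gradients (video: T x H x W)."""
    T, H, W = len(video), len(video[0]), len(video[0][0])
    hist = [0.0] * bins
    for t in range(1, T - 1):
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                # central-difference space-time gradient
                gx = video[t][y][x + 1] - video[t][y][x - 1]
                gy = video[t][y + 1][x] - video[t][y - 1][x]
                gt = video[t + 1][y][x] - video[t - 1][y][x]
                mag = math.sqrt(gx * gx + gy * gy + gt * gt)
                ang = math.atan2(gy, gx) % (2 * math.pi)
                hist[int(ang / (2 * math.pi) * bins) % bins] += mag
    return hist
```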

  1. T. Kobayashi, N. Otsu, "Motion Recognition Using Local Auto-Correlation of Space-Time Gradients," Pattern Recognition Letters, Vol. 33, No. 9, pp. 1188-1195, 2012. pdf

matlab code

Gradient Local Auto-Correlation (GLAC)

We propose a method, gradient local auto-correlation (GLAC), for extracting image features that utilizes 2nd-order statistics, i.e., spatial and orientational auto-correlations of local gradients. It extracts richer information from images and obtains more discriminative power than standard histogram-based methods. The image gradients are sparsely described in terms of magnitude and orientation. From a geometrical viewpoint, the method captures information about not only the gradients but also the curvatures of the image surface. GLAC can also be viewed as a co-occurrence of oriented gradients (COG), a one-order extension of HOG.
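A sketch of a single 1st-order auto-correlation term at one spatial offset, accumulating magnitude-weighted co-occurrences of quantized gradient orientations (the bin count, offset, and min-magnitude weighting are illustrative choices, not the released implementation):

```python
import math

def glac_first_order(img, bins=4, offset=(1, 0)):
    """Co-occurrence of quantized gradient orientations at one offset."""
    H, W = len(img), len(img[0])
    dx, dy = offset
    # sparse orientation coding: (bin, magnitude) per interior pixel
    code = {}
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % (2 * math.pi)
            code[(x, y)] = (int(ang / (2 * math.pi) * bins) % bins, mag)
    # aggregate co-occurrences into a bins x bins feature matrix
    feat = [[0.0] * bins for _ in range(bins)]
    for (x, y), (b0, m0) in code.items():
        nb = code.get((x + dx, y + dy))
        if nb:
            b1, m1 = nb
            feat[b0][b1] += min(m0, m1)
    return feat
```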

  1. T. Kobayashi, N. Otsu, "Image Feature Extraction Using Gradient Local Auto-correlations," Proc. European Conference on Computer Vision (ECCV), pp. 346-358, 2008. pdf

matlab code

Support Vector Reduction

Kernel-based methods, e.g., the support vector machine (SVM), achieve high classification performance. However, classification becomes time-consuming as the number of vectors supporting the classifier increases. We propose a method for reducing the computational cost of classification by kernel-based methods while retaining their high performance. Using linear algebra on the kernel Gram matrix of the support vectors (SVs) at low computational cost, the method efficiently prunes redundant SVs that are unnecessary for constructing the classifier. The pruning is based on evaluating the performance (hinge loss) of the classifier formed by the reduced SV set.
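A naive sketch of the pruning criterion, dropping an SV only if the training hinge loss of the reduced classifier does not increase; the paper achieves this far more efficiently via linear algebra on the Gram matrix rather than by re-evaluating the loss:

```python
def prune_support_vectors(K, alpha, y, tol=1e-6):
    """Greedily drop an SV if the reduced classifier's hinge loss on the
    SVs stays within tol of the original. K: Gram matrix over the SVs,
    alpha: dual coefficients, y: labels in {-1, +1}."""
    def hinge(active):
        loss = 0.0
        for i in range(len(y)):
            f = sum(alpha[j] * y[j] * K[i][j] for j in active)
            loss += max(0.0, 1.0 - y[i] * f)
        return loss

    active = list(range(len(y)))
    base = hinge(active)
    # try to remove SVs with small |alpha| first
    for r in sorted(range(len(y)), key=lambda j: abs(alpha[j])):
        trial = [j for j in active if j != r]
        if trial and hinge(trial) <= base + tol:
            active = trial
    return active
```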

  1. T. Kobayashi, N. Otsu, "Efficient Reduction Of Support Vectors In Kernel-Based Methods," Proc. International Conference on Image Processing (ICIP), pp. 2077-2080, 2009. pdf

matlab code

Utility Software