
Pattern Recognition

Volume 77, In Progress (May 2018). Issue contains Open Access articles.

This issue is In Progress but contains articles that are final and fully citable. For recently accepted articles, see Articles in Press.

    Regular papers

    Features

    • Feature selection method with joint maximal information entropy between features and class

      Original Research Article
    • Pages 20-29
    • Kangfeng Zheng, Xiujuan Wang
    • Highlights

      A new metric, joint maximal information entropy (JMIE), is defined to score a feature subset.

      A feature selection method (FS-JMIE) is proposed that combines the joint maximal information entropy among features with the binary particle swarm optimization (BPSO) algorithm.

      Experimental results on five UCI datasets demonstrate the efficiency of the proposed feature selection method.

      The proposed method shows a clear advantage in feature selection with multiple classes.

      FS-JMIE shows higher consistency and better time-efficiency than the BPSO-SVM algorithm.

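The highlights above describe FS-JMIE only at a high level; as an illustration of the BPSO ingredient, a minimal binary particle swarm search over feature masks might look like the following (the fitness function here is a toy stand-in, not the paper's JMIE metric):

```python
import math
import random

def bpso_feature_select(num_features, fitness, n_particles=10, n_iters=30, seed=0):
    """Minimal binary PSO: each particle is a 0/1 mask over the features.
    `fitness` maps a mask (tuple of 0/1) to a score to be maximized."""
    rng = random.Random(seed)
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.randint(0, 1) for _ in range(num_features)] for _ in range(n_particles)]
    vel = [[0.0] * num_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(tuple(p)) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(num_features):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] += 2.0 * r1 * (pbest[i][d] - pos[i][d]) \
                           + 2.0 * r2 * (gbest[d] - pos[i][d])
                # Binary PSO: velocity drives the probability of the bit being 1.
                pos[i][d] = 1 if rng.random() < sigmoid(vel[i][d]) else 0
            f = fitness(tuple(pos[i]))
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

# Toy stand-in fitness: reward masks close to a known target subset.
target = (1, 0, 1, 0, 0)
fitness = lambda mask: -sum(a != b for a, b in zip(mask, target))
best_mask, best_score = bpso_feature_select(5, fitness)
```

In the actual method, the fitness would be the JMIE score of the candidate subset rather than this toy distance.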
  1. Clustering

    • Feature co-shrinking for co-clustering

      Original Research Article
    • Pages 12-19
    • Qi Tan, Pei Yang, Jingrui He
    • Highlights

      We propose a novel non-negative matrix tri-factorization model based on cosparsity regularization to enable co-feature selection for co-clustering. It aims to learn the inter-correlations among the multi-way features while co-shrinking the irrelevant ones by encouraging co-sparsity of the model parameters.

      We propose an efficient algorithm to solve the non-smooth optimization problem. It works in an iterative-update fashion and is guaranteed to converge.

      Experimental results on various data sets show the effectiveness of the proposed approach.

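The co-shrinking described above rests on a co-sparsity penalty; as a generic illustration (not the paper's tri-factorization model), the proximal operator of the ℓ2,1 norm shows how a row-wise penalty zeroes out whole feature rows at once:

```python
import math

def l21_shrink(rows, tau):
    """Proximal operator of tau * ||X||_{2,1}: scales each row toward zero,
    zeroing any row whose l2 norm falls below tau -- this is what makes
    whole features (rows) drop out together ("co-shrinking")."""
    out = []
    for row in rows:
        norm = math.sqrt(sum(v * v for v in row))
        if norm <= tau:
            out.append([0.0] * len(row))
        else:
            scale = (norm - tau) / norm
            out.append([v * scale for v in row])
    return out

X = [[3.0, 4.0],    # strong row: shrunk but kept
     [0.1, 0.1]]    # weak row: zeroed entirely
Y = l21_shrink(X, 1.0)
```

An update loop alternating this shrink step with factor updates is the usual way such non-smooth co-sparsity terms are handled.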
    • An adaptive graph learning method based on dual data representations for clustering

      Original Research Article
    • Pages 126-139
    • Tianchi Liu, Chamara Kasun Liyanaarachchi Lekamalage, Guang-Bin Huang, Zhiping Lin
    • Highlights

      We show that combining the original data with a proper nonlinear embedding provides a better basis for adaptive graph learning.

      We develop dual representations, i.e., the original data and a nonlinear embedding obtained by an Extreme Learning Machine-based neural network.

      We propose a novel adaptive graph learning method for clustering based on the dual representations.

      Extensive experiments on both synthetic and real-world benchmark datasets verify the effectiveness of the proposed method.

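As a minimal sketch of the dual-representation idea (the Gaussian kernel, the blending weight, and the toy "embedding" below are illustrative assumptions, not the paper's ELM construction or its adaptive learning scheme), a graph can be built by blending affinities computed from the original data and from an embedding:

```python
import math

def affinity_graph(points, sigma=1.0):
    """Dense Gaussian-kernel affinity: W[i][j] = exp(-||xi - xj||^2 / sigma^2)."""
    n = len(points)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            W[i][j] = math.exp(-d2 / sigma ** 2)
    return W

def combine_views(W_orig, W_embed, alpha=0.5):
    """Blend affinities from the original data and a nonlinear embedding,
    giving the clustering step a graph informed by both representations."""
    n = len(W_orig)
    return [[alpha * W_orig[i][j] + (1 - alpha) * W_embed[i][j]
             for j in range(n)] for i in range(n)]

X = [[0.0, 0.0], [0.0, 0.1], [5.0, 5.0]]   # original data
Z = [[0.0], [0.05], [3.0]]                 # toy stand-in for a learned embedding
W = combine_views(affinity_graph(X), affinity_graph(Z))
```

The adaptive part of the actual method would learn the graph weights jointly with the clustering rather than fixing a kernel as done here.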
  2. Classifiers and classification

    • A new probabilistic classifier based on decomposable models with application to internet traffic

      Original Research Article
    • Pages 1-11
    • Fatemeh Ghofrani, Alireza Keshavarz-Haddad, Ali Jamshidi
    • Highlights

      We propose a new algorithm, LDMLCS, to build a family of decomposable models suitable for the classification process.

      LDMLCS can generate numerous decomposable models, from simple ones such as Tree Augmented Naive Bayes (TAN) to complex models with large joint marginal distributions, including models well suited to classification on a given dataset.

      The algorithm addresses the problem of over-fitting and effectively captures the interactions between features; consequently, the obtained model is easily interpretable.

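For context on the "simple models such as TAN" mentioned above: naive Bayes is the simplest decomposable model, factorizing the joint as P(y) ∏ᵢ P(xᵢ | y). A minimal discrete version (purely a point of reference; the LDMLCS algorithm itself is not shown) looks like this:

```python
import math
from collections import Counter, defaultdict

def train_nb(samples, labels, alpha=1.0):
    """Fit a discrete naive Bayes classifier -- the simplest decomposable
    model, whose joint factorizes as P(y) * prod_i P(x_i | y).
    Laplace smoothing `alpha` avoids zero probabilities."""
    n_feat = len(samples[0])
    class_counts = Counter(labels)
    feat_counts = defaultdict(Counter)   # (feature index, class) -> value counts
    values = [set() for _ in range(n_feat)]
    for x, y in zip(samples, labels):
        for i, v in enumerate(x):
            feat_counts[(i, y)][v] += 1
            values[i].add(v)
    total = len(labels)

    def predict(x):
        best, best_score = None, float("-inf")
        for y, cy in class_counts.items():
            score = math.log(cy / total)          # log prior
            for i, v in enumerate(x):
                num = feat_counts[(i, y)][v] + alpha
                den = cy + alpha * len(values[i])
                score += math.log(num / den)      # log likelihood per factor
            if score > best_score:
                best, best_score = y, score
        return best

    return predict

X = [(1, 0), (1, 1), (0, 1), (0, 0)]
y = ["a", "a", "b", "b"]
predict = train_nb(X, y)
```

TAN and the richer decomposable models generated by LDMLCS relax the full-independence assumption by allowing additional edges between features.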
    • Classification of the emotional stress and physical stress using signal magnification and canonical correlation analysis

      Original Research Article
    • Pages 140-149
    • Kan Hong, Guodong Liu, Wentao Chen, Sheng Hong
    • Highlights

      The emotional state of a person can be obtained by extracting an individual stress index, which provides useful information to the health field.

      This paper advances the industrialization of stress recognition by establishing a set of non-contact, imaging-based classifications for emotional stress (ES) and physical stress (PS).

      The proposed algorithm successfully extracts weak signals from thermal imaging for stress classification.

      The classification algorithm is effective, with accuracy as high as 90%. This study can lead to a practical system for the noninvasive assessment of stress states.

    • Training neural network classifiers through Bayes risk minimization applying unidimensional Parzen windows

      Original Research Article
    • Pages 204-215
    • Marcelino Lázaro, Monson H. Hayes, Aníbal R. Figueiras-Vidal
    • Highlights

      The main contribution of this manuscript is a new training algorithm for binary classification using neural networks.

      The training algorithm is based on the minimization of an estimate of the Bayes risk.

      The Parzen window method is used to estimate the conditional distributions needed to compute the probabilities of error in the Bayes risk.

      A new set of training algorithms emerge from this Bayes risk minimization formulation using Parzen windows.

      Some interesting relationships with classical training methods are discovered.

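To make the Parzen-window ingredient concrete, here is a 1-D sketch under assumed toy samples and kernel width (not the authors' training algorithm): the class-conditional densities are estimated with Gaussian windows, and the Bayes error is the integrated overlap of the weighted densities.

```python
import math

def parzen_pdf(samples, h):
    """1-D Parzen window density estimate with a Gaussian kernel of width h."""
    norm = 1.0 / (len(samples) * h * math.sqrt(2 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)
    return pdf

# Class-conditional densities estimated from two small toy samples.
p0 = parzen_pdf([-2.0, -1.5, -1.0], h=0.5)
p1 = parzen_pdf([1.0, 1.5, 2.0], h=0.5)

def bayes_error_estimate(grid, prior0=0.5):
    """Numerically integrate min(P0*p0, P1*p1): the overlap mass that even a
    Bayes-risk-minimizing classifier cannot avoid."""
    step = grid[1] - grid[0]
    return sum(min(prior0 * p0(x), (1 - prior0) * p1(x)) for x in grid) * step

grid = [-6 + 0.01 * k for k in range(1201)]
err = bayes_error_estimate(grid)
```

A training algorithm in the spirit of the paper would differentiate such a smoothed risk estimate with respect to the network outputs rather than integrating it on a grid.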
  3. Human movements and activities

  4. Objects and image analysis

    • A large margin algorithm for automated segmentation of white matter hyperintensity

      Original Research Article
    • Pages 150-159
    • Chen Qin, Ricardo Guerrero, Christopher Bowles, Liang Chen, David Alexander Dickie, Maria del C. Valdes-Hernandez, Joanna Wardlaw, Daniel Rueckert
    • Highlights

      A novel large margin method for white matter hyperintensity segmentation is proposed.

      A supervised large margin algorithm is proposed to learn a global classifier.

      A semi-supervised large margin classifier is learned for refinement on the test subject.

      The proposed model shows competitive performance on subjects with vascular disease.

  5. Various applications

  6. Machine learning

    • Incremental feature extraction based on decision boundaries

      Original Research Article
    • Pages 65-74
    • Seongyoun Woo, Chulhee Lee
    • Highlights

      We developed a gradient-based decision boundary feature extraction algorithm for neural networks, together with an incremental version.

      The proposed method updates the decision boundaries from sequentially added samples and obtains discriminantly informative vectors based on the updated decision boundaries.

      When applied to real world databases, it showed noticeably better classification performance than some existing incremental algorithms.

    • Manifold constraint transfer for visual structure-driven optimization

      Original Research Article
    • Pages 87-98
    • Baochang Zhang, Alessandro Perina, Ce Li, Qixiang Ye, Vittorio Murino, Alessio Del Bue
    • Highlights

      We leverage the manifold structure of visual data in order to improve performance in general optimization problems subject to linear constraints.

      As the main theoretical result, we show that manifold constraints can be transferred from the data to the optimized variables if these are linearly correlated.

      We also show that the resulting optimization problem can be solved with an efficient alternating direction method of multipliers that can consistently integrate the manifold constraints during the optimization process.

      We obtain a simple approach that, instead of optimizing directly on the manifold, iteratively recasts the problem as a projection onto the manifold via an embedding method.

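The alternating-direction idea above can be seen on a toy problem (this is an illustrative sketch with the unit sphere standing in for the manifold, not the paper's formulation or its linear constraints): ADMM splits the smooth objective from the manifold constraint and handles the latter purely by projection.

```python
import math

def admm_sphere(a, rho=1.0, iters=100):
    """Toy ADMM: minimize ||x - a||^2 subject to x on the unit sphere.
    Split as f(x) + indicator_sphere(z) with x = z; the x-update is
    closed form, the z-update is a projection onto the 'manifold'."""
    n = len(a)
    z = [1.0] + [0.0] * (n - 1)   # start from an arbitrary point on the sphere
    u = [0.0] * n                 # scaled dual variable
    for _ in range(iters):
        # x-update: argmin ||x - a||^2 + (rho/2)||x - z + u||^2 (closed form)
        x = [(2 * a[i] + rho * (z[i] - u[i])) / (2 + rho) for i in range(n)]
        # z-update: project x + u onto the unit sphere
        w = [x[i] + u[i] for i in range(n)]
        nw = math.sqrt(sum(v * v for v in w)) or 1.0
        z = [v / nw for v in w]
        # dual ascent on the consensus residual x - z
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return z

a = [3.0, 4.0]
x_star = admm_sphere(a)   # converges to the projection a / ||a|| = [0.6, 0.8]
```

The paper's contribution is transferring such manifold constraints from the data onto linearly correlated optimization variables; this sketch only shows the projection-inside-ADMM mechanism.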
  7. Virtual Special Section on Deep Learning for Computer Aided Cancer Detection and Diagnosis with Medical Imaging; Edited by Jinshan Tang, Yongyi Yang, Sos Agaian and Lin Yang