Unsupervised Learning
Unsupervised learning is another common type of machine learning algorithm, used to draw inferences from unlabeled datasets (i.e., data with no ground truth). Unsupervised learning tends to be more subjective than supervised learning, as there is no simple goal (such as prediction) for the analysis (James et al. 2013, page 374)[F1]. Additionally, there are no universally accepted mechanisms for validating the results of unsupervised learning. Hence, this approach is more challenging in terms of estimation and evaluation than supervised techniques. Unsupervised learning algorithms are mainly used for exploratory data analysis, including density estimation, learning to draw samples from a distribution, learning to denoise data from some distribution, finding a manifold that the data lie near, or clustering the data into related subgroups (Goodfellow et al. 2016, page 142)[F2]. As discussed by Goodfellow et al. (2016), unsupervised learning is used to reveal the best representation of the data, one that preserves as much information about X as possible while remaining simpler and more accessible than X itself. In general, unsupervised learning algorithms can be classified into (1) parametric unsupervised learning, where the researcher assumes the data come from a population that follows a specific probability distribution, and (2) non-parametric unsupervised learning, where the researcher is not required to make any assumptions about the distribution of the population. The following is a short discussion of different members of the family of unsupervised algorithms:
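To make the parametric/non-parametric distinction concrete, here is a minimal density-estimation sketch (the synthetic data, the Gaussian assumption, and the library choices are illustrative, not from the text): fitting a Gaussian is parametric, while kernel density estimation assumes no particular distribution.

```python
import numpy as np
from scipy import stats

# Toy 1-D unlabeled sample (illustrative assumption, not from the text).
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=0.5, size=500)

# (1) Parametric: assume a Gaussian population and estimate its parameters.
mu, sigma = x.mean(), x.std(ddof=1)
parametric_pdf = stats.norm(mu, sigma).pdf

# (2) Non-parametric: kernel density estimation makes no distributional assumption.
kde = stats.gaussian_kde(x)

grid = np.linspace(0.0, 4.0, 5)
print("parametric density:", parametric_pdf(grid))
print("KDE density:       ", kde(grid))
```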
Clustering: The most common unsupervised learning method is clustering, used for exploratory data analysis and for finding hidden patterns or groupings in data. In simple terms, clustering refers to the process of organizing objects into groups whose members are similar in some way. In many cases, distance and similarity are the basis for constructing clustering algorithms. The classical clustering approaches can be divided into nine groups of methods: methods based on partitioning (e.g., k-means), hierarchy or hierarchical relationships (e.g., hierarchical clustering), fuzzy theory (e.g., FCM), mixture-density distributions (e.g., Gaussian Mixture Models), density-based spatial relationships (e.g., DBSCAN), graph theory (e.g., CLICK), grid structure (e.g., STING), fractal theory, and model-based algorithms. More recently, clustering algorithms have been extended with modern techniques, including clustering based on kernels, swarm intelligence, quantum theory, spectral graph theory, and affinity propagation. For more information, see Xu and Tian (2015)[F3].
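As a rough illustration of the partitioning family mentioned above, the following sketch runs k-means on synthetic data with scikit-learn; the synthetic blobs and the choice of k = 3 are assumptions made for the example, not part of the original discussion.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic unlabeled data with three loose groups (illustrative assumption).
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=42)

# Partition the points into k = 3 clusters by minimizing within-cluster distances.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

print("cluster sizes:  ", np.bincount(kmeans.labels_))
print("cluster centers:\n", kmeans.cluster_centers_)
```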
Dimensionality reduction/data compression: The idea behind this approach is to reduce redundancy and represent most of the information in the data with only a fraction of the original content. This technique is usually used to obtain better features for classification or regression tasks by decreasing multicollinearity and removing redundant features. In addition, dimensionality reduction techniques can help compress data and reduce the required storage space. There are three main groups of dimensionality reduction methods. The first group reduces the input data according to some statistical or information-theoretic criterion, and includes vector quantization and mixture models, principal component analysis (PCA), non-linear PCA, manifold analysis, generative topographic maps, self-organizing maps, elastic maps, kernel PCA, kernel entropy component analysis, independent component analysis, and factor analysis. Another group of reduction techniques is based on the decomposition of a matrix formed by all input data as columns; the transformation is a linear change of basis between the two variable sets. Singular Value Decomposition (SVD), Non-negative Matrix/Tensor Factorization, and Sparse SVD are the most popular decomposition methods. The last group of methods is based on projections, such as projections onto interesting directions and projections onto manifolds (Soranzo et al. 2014)[F4].
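A brief sketch of the first two groups, using scikit-learn's PCA (a statistical-criterion method) and truncated SVD (a matrix-decomposition method) on the standard digits dataset; the dataset and the choice of 10 components are illustrative assumptions.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, TruncatedSVD

# 64-dimensional handwritten-digit features (illustrative dataset choice).
X = load_digits().data  # shape (1797, 64)

# Statistical-criterion method: project onto the top 10 principal components.
pca = PCA(n_components=10).fit(X)
X_pca = pca.transform(X)
print("PCA-reduced shape:", X_pca.shape)
print("variance explained by 10 components: %.2f" % pca.explained_variance_ratio_.sum())

# Decomposition-based method: truncated SVD of the data matrix.
svd = TruncatedSVD(n_components=10, random_state=0)
X_svd = svd.fit_transform(X)
print("SVD-reduced shape:", X_svd.shape)
```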
Generative Model: Generative models are a newer member of the family of unsupervised learning, in which the model generates new samples from the same distribution from which the training data were sampled. One of the long-term benefits of generative models is their ability to automatically learn the features of the given data. A common use case for generative models is to generate a set of images similar to a given set. Variational autoencoders, Boltzmann Machines, and Generative Adversarial Networks are examples of this type of model.
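As a toy stand-in for the heavier generative models named above (VAEs, Boltzmann Machines, GANs), the sketch below fits a Gaussian mixture to unlabeled 2-D data and then draws new samples from the learned distribution; the dataset and the number of mixture components are assumptions made purely for illustration.

```python
from sklearn.datasets import make_moons
from sklearn.mixture import GaussianMixture

# Unlabeled 2-D training data (illustrative assumption).
X, _ = make_moons(n_samples=500, noise=0.08, random_state=0)

# Fit a mixture-density model to the data, then sample new points from it,
# mimicking the fit-then-generate workflow of generative models.
gmm = GaussianMixture(n_components=8, random_state=0).fit(X)
X_new, _ = gmm.sample(100)  # 100 new points drawn from the learned density

print("generated sample shape:", X_new.shape)
print("average log-likelihood of training data:", gmm.score(X))
```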