Clustering

Similarity Measures for Clusters

  • Compare the numbers of identical and unique item pairs appearing in two clustering results.
  • This is achieved by counting the number of item pairs found in both clustering sets (a), as well as the pairs appearing only in the first (b) or only in the second (c) set.
  • From these counts a similarity coefficient, such as the Jaccard index, can be computed. The latter is defined as the size of the intersection divided by the size of the union of the two sample sets: a/(a+b+c).
  • For partitioning results, the Jaccard index thus measures how frequently pairs of items are joined together in both clustering sets and how often pairs appear in only one of them.
  • Related coefficients are the Rand index and the adjusted Rand index. These indices additionally consider the number of pairs (d) that are joined together in neither set (see the sketch after this list).
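
A minimal sketch of this pair-counting scheme in Python is shown below. The function name pair_counts and the two example label vectors are illustrative assumptions, not part of the original text; the snippet derives a, b, c, and d from two clustering label vectors and computes the Jaccard index a/(a+b+c) and the Rand index (a+d)/(a+b+c+d).

  from itertools import combinations

  def pair_counts(labels1, labels2):
      """Tally item pairs joined in both clusterings (a), only the first (b),
      only the second (c), or neither (d)."""
      a = b = c = d = 0
      for i, j in combinations(range(len(labels1)), 2):
          same1 = labels1[i] == labels1[j]
          same2 = labels2[i] == labels2[j]
          if same1 and same2:
              a += 1
          elif same1:
              b += 1
          elif same2:
              c += 1
          else:
              d += 1
      return a, b, c, d

  # Two clusterings of the same five items (labels are arbitrary cluster IDs).
  a, b, c, d = pair_counts([0, 0, 1, 1, 2], [0, 0, 0, 1, 2])
  jaccard = a / (a + b + c)          # joined in both / joined in at least one
  rand = (a + d) / (a + b + c + d)   # agreeing pairs / all pairs
  print(jaccard, rand)               # 0.25 0.7

For real data, scikit-learn also ships a ready-made implementation of the adjusted variant (sklearn.metrics.adjusted_rand_score).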

[Video] Clustering Algorithms | Data Analysis in Genome Biology


Unsupervised Learning

The main types of clustering in unsupervised machine learning include k-means, hierarchical clustering, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and Gaussian Mixture Models (GMM).
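
As a hedged sketch of how these four algorithm families might be invoked, the snippet below runs each on the same toy data with scikit-learn; the blob data and all parameter settings (cluster counts, DBSCAN's eps and min_samples) are illustrative assumptions rather than recommended values.

  import numpy as np
  from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
  from sklearn.mixture import GaussianMixture

  # Toy data: two well-separated 2-D blobs of 50 points each.
  rng = np.random.default_rng(0)
  X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])

  # K-means: partitions the data around k centroids.
  km = KMeans(n_clusters=2, n_init=10).fit_predict(X)

  # Hierarchical (agglomerative) clustering: merges the closest clusters bottom-up.
  hc = AgglomerativeClustering(n_clusters=2).fit_predict(X)

  # DBSCAN: groups dense regions; points in sparse areas are labeled noise (-1).
  db = DBSCAN(eps=0.8, min_samples=5).fit_predict(X)

  # GMM: soft assignments from a mixture of Gaussians; take the likeliest component.
  gm = GaussianMixture(n_components=2, random_state=0).fit_predict(X)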

News Headlines With Text Clustering

One way to use unsupervised learning for text clustering of news headlines is to automatically extract latent topic information from news articles without relying on manually assigned labels. Such a model can use techniques such as Doc2vec to generate a document vector for each article. Afterward, a clustering algorithm such as spectral clustering can be applied to group the articles by similarity. This approach alleviates the need for humans to label news items manually. Another approach is to fine-tune pre-trained models in an unsupervised manner for text clustering, simultaneously learning text representations and cluster assignments using a clustering-oriented loss.
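
A minimal sketch of such a pipeline, assuming gensim (version 4 or later) for Doc2vec and scikit-learn for spectral clustering; the headlines, vector size, epoch count, and cluster count below are invented solely for illustration.

  import numpy as np
  from gensim.models.doc2vec import Doc2Vec, TaggedDocument
  from sklearn.cluster import SpectralClustering

  # Hypothetical headlines; a real corpus would be far larger.
  headlines = [
      "stocks rally as markets rebound",
      "central bank holds interest rates steady",
      "team wins championship in overtime",
      "star striker signs record transfer deal",
  ]

  # Doc2vec learns one vector per document (here, per headline).
  docs = [TaggedDocument(h.split(), [i]) for i, h in enumerate(headlines)]
  model = Doc2Vec(docs, vector_size=32, min_count=1, epochs=50)
  vecs = np.array([model.dv[i] for i in range(len(headlines))])

  # Spectral clustering groups the headline vectors by similarity.
  labels = SpectralClustering(n_clusters=2, random_state=0).fit_predict(vecs)
  print(labels)  # ideally, finance headlines in one cluster, sports in the other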


Feature Extraction

Feature extraction is an efficient approach for alleviating the curse of dimensionality in high-dimensional data. Unsupervised feature extraction projects high-dimensional data into a low-dimensional subspace while preserving similarity, generating low-dimensional features without requiring any explicit semantic labels. This can be done using unsupervised learning methods such as linear transformations (e.g., PCA, ICA, NMF), embeddings (e.g., t-distributed stochastic neighbor embedding, t-SNE), cluster-based methods (e.g., k-means), and kernel-based methods (e.g., kernel PCA).
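
As an illustrative sketch, the snippet below applies one method from each family named above using scikit-learn; the random input matrix and the choice of two output components are assumptions made only so that every method runs end to end.

  import numpy as np
  from sklearn.cluster import KMeans
  from sklearn.decomposition import PCA, FastICA, NMF, KernelPCA
  from sklearn.manifold import TSNE

  rng = np.random.default_rng(0)
  X = np.abs(rng.normal(size=(100, 20)))  # non-negative, so NMF is applicable too

  # Linear transformations: project the data onto a low-dimensional subspace.
  X_pca = PCA(n_components=2).fit_transform(X)
  X_ica = FastICA(n_components=2).fit_transform(X)
  X_nmf = NMF(n_components=2, init="nndsvda", max_iter=500).fit_transform(X)

  # Embedding: t-SNE preserves local neighborhood structure in 2-D.
  X_tsne = TSNE(n_components=2, perplexity=30).fit_transform(X)

  # Cluster-based features: distances of each sample to the k cluster centers.
  X_km = KMeans(n_clusters=2, n_init=10).fit_transform(X)

  # Kernel-based method: PCA in an implicit feature space via an RBF kernel.
  X_kpca = KernelPCA(n_components=2, kernel="rbf").fit_transform(X)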