- Embedding, Fine-tuning, RAG, Search, Clustering, Recommendation, Anomaly Detection (finding outliers), Classification, Dimensionality Reduction
- Singular Value Decomposition (SVD)
- Principal Component Analysis (PCA)
- Fuzzy C-Means (FCM)
- Association Rule Learning
- Mean-Shift Clustering
- Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
- Expectation–Maximization (EM) Clustering using Gaussian Mixture Models (GMM)
- Restricted Boltzmann Machine (RBM)
- Variational Autoencoder (VAE)
- OPTICS: Ordering Points To Identify the Clustering Structure
- Multidimensional Scaling (MDS)
- Hierarchical clustering
- Excel, documents, databases (vector, relational, and graph), LlamaIndex
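As a quick illustration of two algorithms from the list above, the sketch below applies PCA (dimensionality reduction) and DBSCAN (density-based clustering) to synthetic data. It assumes scikit-learn and NumPy are available; the data is made up for the example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two well-separated blobs in 5 dimensions (illustrative data).
X = np.vstack([rng.normal(0, 0.3, (30, 5)),
               rng.normal(5, 0.3, (30, 5))])

# PCA: project onto the 2 principal components.
X2 = PCA(n_components=2).fit_transform(X)
print(X2.shape)  # (60, 2)

# DBSCAN: density-based clustering; points in sparse regions get label -1 (noise).
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(X2)
print(len(set(labels) - {-1}))  # number of clusters found, ignoring noise
```

DBSCAN needs no preset cluster count, which is why it pairs well with a projection step like PCA when the raw data is high-dimensional.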
Similarity Measures for Clusters
- Compare the numbers of identical and unique item pairs appearing in two cluster sets.
- Achieved by counting the number of item pairs found in both clustering sets (a), as well as the pairs appearing only in the first (b) or only in the second (c) set.
- With these counts, a similarity coefficient such as the Jaccard index can be computed. The latter is defined as the size of the intersection divided by the size of the union of two sample sets: a/(a+b+c).
- In the case of partitioning results, the Jaccard index measures how frequently pairs of items are joined together in two clustering data sets and how often pairs are observed only in one set.
- Related coefficients are the Rand Index and the Adjusted Rand Index. These indices also consider the number of pairs (d) that are not joined together in any of the clusters in either set.
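The pair-counting scheme above can be sketched directly in a few lines. The two label vectors are hypothetical clustering results chosen for illustration; a, b, and c follow the definitions in the bullets.

```python
from itertools import combinations

def pair_sets(labels):
    """Return the set of item-index pairs placed in the same cluster."""
    return {(i, j) for i, j in combinations(range(len(labels)), 2)
            if labels[i] == labels[j]}

set1 = [0, 0, 0, 1, 1, 1]   # first clustering result (hypothetical)
set2 = [0, 0, 1, 1, 1, 1]   # second clustering result (hypothetical)

p1, p2 = pair_sets(set1), pair_sets(set2)
a = len(p1 & p2)            # pairs joined together in both clusterings
b = len(p1 - p2)            # pairs joined only in the first
c = len(p2 - p1)            # pairs joined only in the second
jaccard = a / (a + b + c)   # intersection over union of joined pairs
print(a, b, c, round(jaccard, 3))  # 4 2 3 0.444
```

For the Rand and Adjusted Rand indices, scikit-learn's `sklearn.metrics.adjusted_rand_score` computes the adjusted variant directly from the two label vectors.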
The main types of clustering in unsupervised machine learning include k-means, hierarchical clustering, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and Gaussian Mixture Models (GMM).
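A minimal sketch running all four clustering types on the same toy data, assuming scikit-learn; the blob centers are arbitrary choices for the example.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.mixture import GaussianMixture

# Three well-separated 2-D blobs (illustrative data).
X, _ = make_blobs(n_samples=150, centers=[[0, 0], [5, 5], [0, 8]],
                  cluster_std=0.5, random_state=42)

km = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
hc = AgglomerativeClustering(n_clusters=3).fit_predict(X)   # hierarchical
db = DBSCAN(eps=0.7, min_samples=5).fit_predict(X)          # -1 marks noise
gm = GaussianMixture(n_components=3, random_state=42).fit_predict(X)

for name, labels in [("k-means", km), ("hierarchical", hc),
                     ("DBSCAN", db), ("GMM", gm)]:
    print(name, len(set(labels) - {-1}))  # clusters found, ignoring noise
```

Note the differing interfaces: k-means, hierarchical clustering, and GMM require the cluster count up front, while DBSCAN infers it from density.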
News Headlines With Text Clustering
One way to use unsupervised learning for text clustering of news headlines is a model that automatically extracts latent topic information from news articles. Such a model can use techniques like Doc2vec to generate document vectors for each article; afterward, a clustering algorithm such as spectral clustering can be applied to group the articles by similarity. This approach alleviates the need for humans to label news items manually. Another approach is to fine-tune pre-trained models in an unsupervised way for text clustering, simultaneously learning text representations and cluster assignments using a clustering-oriented loss.
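A rough sketch of the headline-clustering pipeline described above, with TF-IDF standing in for Doc2vec to keep the example self-contained (Doc2vec would require gensim). It assumes scikit-learn; the headlines are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import SpectralClustering

headlines = [
    "stocks rally as markets close higher",
    "investors cheer as stock markets surge",
    "new vaccine shows promise in trials",
    "clinical trial results boost vaccine hopes",
]

# Vectorize headlines (TF-IDF here; Doc2vec in the approach described above).
X = TfidfVectorizer().fit_transform(headlines)

# Spectral clustering on cosine similarity between headline vectors.
labels = SpectralClustering(n_clusters=2, affinity="cosine",
                            random_state=0).fit_predict(X)
print(labels)
```

With real data, the number of clusters would itself need to be chosen, e.g. via silhouette scores, since no labels are available.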
Feature extraction is an efficient approach for alleviating the issue of dimensionality in high-dimensional data. Unsupervised feature extraction projects high-dimensional data into a low-dimensional subspace while preserving similarity, generating low-dimensional features without relying on any explicit semantic labels. This can be done using unsupervised learning methods such as linear transformations (e.g., PCA, ICA, NMF), embeddings (e.g., t-distributed stochastic neighbor embedding, t-SNE), cluster-based methods (e.g., k-means), and kernel-based methods (e.g., kernel PCA).
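A brief sketch contrasting a linear transformation (PCA) with a kernel-based method (kernel PCA), assuming scikit-learn; the data is random and purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))   # 100 samples, 20 features (illustrative)

# Linear projection onto the top 5 principal components.
X_pca = PCA(n_components=5).fit_transform(X)

# Nonlinear variant: PCA in an RBF-kernel-induced feature space.
X_kpca = KernelPCA(n_components=5, kernel="rbf").fit_transform(X)

print(X_pca.shape, X_kpca.shape)  # (100, 5) (100, 5)
```

Both produce low-dimensional features without any labels, which is the defining property of unsupervised feature extraction described above.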