Hierarchical Clustering; Agglomerative (HAC) & Divisive (HDC)

  
 
* [[AI Solver]]
** [[...find outliers]]
** [[...cluster]]
*** [[...no, I do not know the amount of groups/classes]]
* [[Hierarchical Cluster Analysis (HCA)]]
* [[Hierarchical Temporal Memory (HTM)]]

* [http://www.researchgate.net/publication/315966848_Exploreing_K-Means_with_Internal_Validity_Indexes_for_Data_Clustering_in_Traffic_Management_System Exploring K-Means with Internal Validity Indexes for Data Clustering in Traffic Management System | S. Nawrin, S. Akhter and M. Rahatur]
  


YouTube search... ...Google search

Hierarchical clustering algorithms fall into two categories: (1) Agglomerative (bottom-up) and (2) Divisive (top-down).


Agglomerative Clustering - Bottom Up

Bottom-up algorithms treat each data point as a single cluster at the outset and then successively merge (or agglomerate) pairs of clusters until all clusters have been merged into a single cluster that contains all data points. Bottom-up hierarchical clustering is therefore called hierarchical agglomerative clustering or HAC. This hierarchy of clusters is represented as a tree (or dendrogram). The root of the tree is the unique cluster that gathers all the samples, the leaves being the clusters with only one sample. The 5 Clustering Algorithms Data Scientists Need to Know | Towards Data Science
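As a concrete illustration, here is a minimal sketch of bottom-up merging using SciPy's hierarchical-clustering routines. The toy data, the 'ward' merge criterion, and the two-cluster cut are assumptions chosen for demonstration, not part of the article above.

<syntaxhighlight lang="python">
# Minimal sketch of agglomerative (bottom-up) clustering with SciPy.
# The toy data and the 'ward' merge criterion are illustrative assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, size=(10, 2)),   # one loose group...
               rng.normal(3, 0.5, size=(10, 2))])  # ...and another

# Every point starts as its own cluster; linkage() then repeatedly merges
# the closest pair of clusters until a single root cluster remains,
# recording each merge as one row of the tree (dendrogram) Z.
Z = linkage(X, method="ward")

# Cut the finished tree into a flat clustering, e.g. 2 clusters.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
</syntaxhighlight>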

Hierarchical clustering does not require us to specify the number of clusters up front, and because we are building a tree we can even select whichever number of clusters looks best after the fact. Additionally, the algorithm is not sensitive to the choice of distance metric; all of them tend to work equally well, whereas with other clustering algorithms the choice of distance metric is critical. A particularly good use case for hierarchical clustering methods is when the underlying data has a hierarchical structure and you want to recover that hierarchy; other clustering algorithms cannot do this. These advantages come at the cost of lower efficiency: hierarchical clustering has a time complexity of O(n³), unlike the linear complexity of K-Means and GMM.
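A minimal sketch of that post-hoc choice, reusing X and Z from the sketch above: the same merge tree is cut at several levels and the resulting flat clusterings compared. The candidate cluster counts and the use of silhouette score as the quality measure are assumptions for illustration.

<syntaxhighlight lang="python">
# Sketch: one merge tree, many flat clusterings. Reuses X and Z from the
# previous example; silhouette score is just one assumed way to compare cuts.
from scipy.cluster.hierarchy import fcluster
from sklearn.metrics import silhouette_score

for k in (2, 3, 4, 5):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(f"k={k}  silhouette={silhouette_score(X, labels):.3f}")
</syntaxhighlight>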


Divisive Clustering - Top Down