Abstract: Hierarchical Clustering is an important tool for unsupervised learning whose goal is to construct a hierarchical decomposition of a given dataset, describing relationships at all levels of granularity simultaneously. Despite its long history, Hierarchical Clustering remained underdeveloped from a theoretical perspective, partly because of the lack of suitable objectives and of algorithms with provable guarantees. In this talk, I will survey recent progress in the area, with an emphasis on connections to well-studied graph partitioning problems such as Sparsest Cut and Balanced Cut, present some hardness-of-approximation results, and highlight several interesting open problems.