Probabilistic Partitive Partitioning (PPP)
Clustering is an NP-hard problem; no efficient exact algorithm is known, so heuristics are applied to cluster the data. Heuristics can be very resource-intensive if not applied carefully. For substantially large data sets, computational efficiency can be gained by reducing the input space, provided the loss of information is minimal. Clustering algorithms, in general, face two common problems:
- They converge to different solutions under different initial conditions.
- The number of clusters has to be decided arbitrarily beforehand. This problem becomes critical in the realm of big data.
Recently, clustering algorithms have emerged that can speed up computation using parallel processing over a grid, but they still face the problems above.
Goals: Our goals are to find methods to cluster data which:
- Guarantee convergence to the same settings irrespective of the initial conditions.
- Eliminate the need to establish the number of clusters beforehand.
- Can be applied to cluster large datasets.
Methods: We introduce a method that combines probabilistic and combinatorial clustering to produce repeatable, compact clusters that are not sensitive to initial conditions. It harnesses the power of k-means (a combinatorial clustering method) to cluster/partition very high-dimensional datasets and uses a Gaussian Mixture Model (a probabilistic clustering method) to validate the k-means partitions.
Results: We show that this method produces very compact clusters that are not sensitive to initial conditions. It can be used to identify the most 'separable' subset of a dataset, which increases the 'clusterability' of the dataset, and it eliminates the need to specify the number of clusters in advance.
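To make the two-stage idea concrete, here is a minimal pure-Python sketch: k-means partitions the data, then a diagonal-covariance Gaussian is fitted to each partition and every point is re-assigned to its most likely Gaussian; high agreement between the two labelings indicates well-separated, validated partitions. This is an illustrative sketch, not the authors' implementation: the synthetic two-blob data, the diagonal-covariance simplification, the deterministic initialization, and all function names are assumptions made for the example.

```python
import math
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=100):
    """Lloyd's algorithm; returns (labels, centers).

    Deterministic initialization (centers spread over the input order)
    so repeated runs on the same data give the same partition.
    """
    n = len(points)
    centers = [points[i * (n - 1) // (k - 1)] for i in range(k)] if k > 1 else [points[0]]
    labels = [0] * n
    for _ in range(iters):
        new_labels = [min(range(k), key=lambda j: dist2(p, centers[j])) for p in points]
        if new_labels == labels:
            break  # assignments stable: converged
        labels = new_labels
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:  # keep the old center if a cluster empties out
                dims = len(members[0])
                centers[j] = tuple(sum(p[d] for p in members) / len(members)
                                   for d in range(dims))
    return labels, centers

def fit_diag_gaussians(points, labels, k):
    """Fit one diagonal-covariance Gaussian (mean, var, weight) per k-means cluster."""
    models = []
    for j in range(k):
        members = [p for p, l in zip(points, labels) if l == j]
        dims = len(members[0])
        mean = tuple(sum(p[d] for p in members) / len(members) for d in range(dims))
        var = tuple(max(sum((p[d] - mean[d]) ** 2 for p in members) / len(members), 1e-6)
                    for d in range(dims))
        models.append((mean, var, len(members) / len(points)))
    return models

def gmm_label(p, models):
    """Most likely cluster for p under the fitted mixture (log weight + log density)."""
    def score(m):
        mean, var, w = m
        ll = sum(-0.5 * ((x - mu) ** 2 / v + math.log(2 * math.pi * v))
                 for x, mu, v in zip(p, mean, var))
        return ll + math.log(w)
    return max(range(len(models)), key=lambda j: score(models[j]))

# Illustrative data only: two well-separated synthetic blobs.
random.seed(0)
points = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(30)]
          + [(random.gauss(10, 1), random.gauss(10, 1)) for _ in range(30)])

labels, centers = kmeans(points, k=2)
models = fit_diag_gaussians(points, labels, k=2)
agreement = sum(gmm_label(p, models) == l for p, l in zip(points, labels)) / len(points)
print(f"GMM/k-means agreement: {agreement:.2f}")  # near 1.0 for well-separated clusters
```

The agreement score acts as the validation signal: partitions whose Gaussian posteriors disagree with the k-means assignment are poorly separated, and sweeping k while monitoring this score is one way to avoid fixing the number of clusters in advance.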