K-medoids is a classical partitioning technique for clustering that splits a data set of $n$ objects into $k$ clusters, where the number of clusters $k$ is assumed to be known a priori. Unlike the K-means algorithm, however, **the K-medoids algorithm chooses points within the dataset itself as the centers (medoids) of the clusters**.

A naive implementation of the K-medoids algorithm is similar to the K-means algorithm and consists of the following steps:

- Select the initial medoids randomly (that is, select $k$ points from the dataset at random as the cluster centers).
- Iterate while the total cost (the sum of distances from each point to its medoid) decreases:
  - In each cluster, make the point that minimizes the sum of distances to the other points in the cluster the new medoid.
  - Reassign each point to the cluster of the closest medoid determined in the previous step.

Write a function in Python, MATLAB, or your preferred language to cluster an input dataset using the naive K-medoids method.

Test the functionality of your algorithm on this example dataset with $k = 6$ clusters.
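For reference, the steps above can be sketched in Python with NumPy. The function name `k_medoids`, its signature, and the distance choice (Euclidean) are illustrative assumptions, not a fixed API:

```python
import numpy as np

def k_medoids(X, k, max_iter=100, rng=None):
    """Naive K-medoids clustering (illustrative sketch).

    X : (n, d) array of points; k : number of clusters.
    Returns (medoid_indices, labels).
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    # Precompute the pairwise Euclidean distance matrix (assumed metric).
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

    # Step 1: pick k distinct points at random as the initial medoids.
    medoids = rng.choice(n, size=k, replace=False)
    labels = np.argmin(dist[:, medoids], axis=1)
    cost = dist[np.arange(n), medoids[labels]].sum()

    for _ in range(max_iter):
        # Step 2a: within each cluster, the new medoid is the member
        # minimizing the sum of distances to the other members.
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                continue  # keep the old medoid for an empty cluster
            within = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[np.argmin(within)]

        # Step 2b: reassign every point to its closest medoid.
        new_labels = np.argmin(dist[:, new_medoids], axis=1)
        new_cost = dist[np.arange(n), new_medoids[new_labels]].sum()

        # Stop once the total cost no longer decreases.
        if new_cost >= cost:
            break
        medoids, labels, cost = new_medoids, new_labels, new_cost

    return medoids, labels
```

Because this iteration only accepts updates that lower the total cost, it converges to a local optimum; running it several times from different random initializations and keeping the lowest-cost result is a common remedy.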