K-medoids is a classical partitioning technique of clustering that splits a data set of $n$ objects into $k$ clusters, where the number $k$ of clusters is assumed to be known a priori. Unlike the K-means algorithm, however, the K-medoids algorithm chooses points within the dataset itself as the centers (medoids) of the clusters.

A naive implementation of the K-medoids algorithm is similar to the K-means algorithm and requires the following steps:

  1. Select the initial medoids randomly (that is, select $k$ points from the dataset randomly as the cluster centers).
  2. Iterate while the cost decreases:
    1. In each cluster, select as the new medoid the point that minimizes the sum of distances to the other points in the cluster.
    2. Reassign each point to the cluster of the closest medoid determined in the previous step.

Write a function in Python, MATLAB, or your preferred language to cluster an input dataset using the naive K-medoids method.
Test the functionality of your algorithm with this example dataset using $k = 6$ clusters.
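For reference, here is a minimal NumPy sketch of the naive procedure above. It precomputes all pairwise Euclidean distances and alternates the two iteration steps until the total cost stops decreasing. Since the example dataset is not reproduced here, the usage below generates synthetic 2-D data purely for illustration.

```python
import numpy as np

def k_medoids(X, k, max_iter=100, rng=None):
    """Naive K-medoids clustering; returns (medoid indices, labels, cost)."""
    rng = np.random.default_rng(rng)
    n = len(X)
    # Step 1: select k distinct points from the dataset as initial medoids.
    medoid_idx = rng.choice(n, size=k, replace=False)
    # Pairwise Euclidean distances between all points.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    labels = np.argmin(dist[:, medoid_idx], axis=1)
    cost = dist[np.arange(n), medoid_idx[labels]].sum()
    for _ in range(max_iter):
        # Step 2.1: in each cluster, the new medoid is the member that
        # minimizes the sum of distances to the other cluster members.
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:
                within = dist[np.ix_(members, members)].sum(axis=1)
                medoid_idx[j] = members[np.argmin(within)]
        # Step 2.2: reassign each point to its closest medoid.
        labels = np.argmin(dist[:, medoid_idx], axis=1)
        new_cost = dist[np.arange(n), medoid_idx[labels]].sum()
        if new_cost >= cost:  # stop once the cost no longer decreases
            break
        cost = new_cost
    return medoid_idx, labels, cost

# Illustrative usage on synthetic data (stand-in for the example dataset):
data_rng = np.random.default_rng(0)
centers = data_rng.uniform(-10, 10, size=(6, 2))
X = np.concatenate([c + data_rng.normal(scale=0.5, size=(30, 2)) for c in centers])
medoids, labels, cost = k_medoids(X, 6, rng=0)
```

The $O(n^2)$ distance matrix keeps the medoid-update step simple, but limits this sketch to datasets that fit comfortably in memory.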