Question
Dears,
Kindly, I want answers to these questions (not handwritten please).
Q1) Define partitioning clustering approaches and hierarchical clustering approaches, and give a typical method for each type. Also state the main difference between these two approaches.
Q2) Compare the k-means and k-medoids algorithms. List the main differences between these two algorithms.
Q3) Cluster the following eight points into three clusters using the k-means clustering algorithm and Euclidean distance.
A1=(2,10), A2=(2,5), A3=(8,4), A4=(5,8), A5=(7,5), A6=(6,4), A7=(1,2), A8=(4,9).
1- Create the distance matrix by calculating the Euclidean distance between each pair of points. (0.5 mark)
2- Suppose that the initial centers of the clusters are A1, A4 and A7. Run the k-means algorithm for one iteration only and show:
- The new clusters (i.e. the examples belonging to each cluster) (1 mark)
- The centers of the new clusters (0.5 mark)
Show all your work.
Explanation / Answer
Q1)
Partitional clustering:
A partitional clustering is simply a division of the set of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset.
The clusters are formed using the notion of distance between the objects. The mean or medoid can be used to represent the cluster center.
This clustering is effective for small to medium sized datasets.
Examples: k-means, k-medoids.
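To make the idea concrete, here is a minimal k-means sketch in Python/NumPy. The sample points, initial centers and iteration count are arbitrary assumptions for illustration (not the assignment data): each iteration assigns every point to its nearest center by Euclidean distance and then recomputes each center as the mean of its assigned points.

# Minimal k-means sketch (illustrative only; the points, initial centers
# and iteration count are arbitrary assumptions, not the assignment data).
import numpy as np

def kmeans(points, centers, n_iters=10):
    for _ in range(n_iters):
        # distances[i, j] = Euclidean distance from point i to center j
        distances = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        # assign each point to its nearest center
        labels = distances.argmin(axis=1)
        # recompute each center as the mean of its assigned points
        centers = np.array([points[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

points = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [9.0, 9.5], [5.0, 1.0]])
initial_centers = points[[0, 2, 4]]   # use three of the points as initial centers
labels, centers = kmeans(points, initial_centers)
print(labels)    # cluster index assigned to each point
print(centers)   # final cluster means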
Hierarchical clustering:
Hierarchical clustering is obtained by forming a set of nested clusters organized as a tree, which can be visualized as a dendrogram.
Each node (cluster) in the tree (except for the leaf nodes) is the union of its children (subclusters), and the root of the tree is the cluster containing all the objects.
In hierarchical clustering, we do not have to assume the number of clusters, because any number of clusters can be obtained by cutting the dendrogram at a particular level.
Examples: Agglomerative Clustering, Divisive Clustering.
Main difference: a partitional approach produces a single flat division of the data into a pre-specified number of non-overlapping clusters, whereas a hierarchical approach produces a nested sequence of clusters organized as a tree, from which any number of clusters can be obtained by cutting the dendrogram.
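As an illustrative sketch only (the sample points and the choice of average linkage are assumptions, not part of the original answer), the following Python/SciPy snippet builds the full merge tree and then cuts the dendrogram at two different levels to obtain two and three clusters from the same tree.

# Minimal agglomerative-clustering sketch (illustrative only; the sample
# points and the average-linkage choice are assumptions).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [9.0, 9.5], [5.0, 1.0]])

# Build the full merge tree (dendrogram) using Euclidean distance and average linkage.
merge_tree = linkage(points, method="average", metric="euclidean")

# Cut the dendrogram at different levels: any number of clusters can be
# obtained from the same tree without re-running the algorithm.
print(fcluster(merge_tree, t=2, criterion="maxclust"))   # labels for 2 clusters
print(fcluster(merge_tree, t=3, criterion="maxclust"))   # labels for 3 clusters

The tree itself can be visualized with scipy.cluster.hierarchy.dendrogram.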