I would not advise using k-means with such a high number of clusters. Instead, try agglomerative clustering with Euclidean distance. Because it builds a hierarchy (a dendrogram), you can pick a cutoff height that yields the expected number of clusters.
Cutting it off at 5 would give you 4 clusters, while cutting it off at 2 would give you more.
Dummy code -
from sklearn.cluster import AgglomerativeClustering
import numpy as np

# Two well-separated groups: three points near x=1 and three near x=4
X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]])

# Defaults: n_clusters=2, Ward linkage on Euclidean distance
clustering = AgglomerativeClustering().fit(X)
clustering.labels_
# array([1, 1, 1, 0, 0, 0])
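If you would rather cut the tree at a height than fix the number of clusters up front, here is a minimal sketch, assuming a recent scikit-learn where the distance_threshold parameter is available (the threshold value 5 is just an illustration, matching the cutoff mentioned above):

from sklearn.cluster import AgglomerativeClustering
import numpy as np

X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]])

# Cut the dendrogram at merge distance 5 instead of requesting a
# fixed number of clusters; n_clusters must be None in that case.
clustering = AgglomerativeClustering(
    n_clusters=None, distance_threshold=5
).fit(X)

print(clustering.n_clusters_)  # number of clusters found at that cutoff
print(clustering.labels_)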
You can also pass a precomputed distance matrix to agglomerative clustering if you already have pairwise distances.
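A minimal sketch of that, assuming metric='precomputed' (named affinity='precomputed' in older scikit-learn versions); note that 'ward' linkage needs raw features, so a different linkage is used here:

from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import pairwise_distances
import numpy as np

X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]])

# Precompute the pairwise Euclidean distance matrix yourself
D = pairwise_distances(X, metric='euclidean')

# With a precomputed matrix, use a linkage such as 'average'
clustering = AgglomerativeClustering(
    n_clusters=2, metric='precomputed', linkage='average'
).fit(D)
print(clustering.labels_)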
See the scikit-learn documentation for AgglomerativeClustering for details.