I have an array of floats like this:
[1.91, 2.87, 3.61, 10.91, 11.91, 12.82, 100.73, 100.71, 101.89, 200]
Now, I want to partition the array like this:
[[1.91, 2.87, 3.61], [10.91, 11.91, 12.82], [100.73, 100.71, 101.89], [200]]
// [200] is treated as an outlier because it has too little cluster support
I have to find this kind of segmentation for several arrays, and I don't know in advance what the partition size should be. I tried hierarchical (agglomerative) clustering, and it gives satisfactory results for me. However, the issue is that I was advised not to use clustering algorithms for one-dimensional problems, since (so I was told) there is no theoretical justification for doing so, unlike for multidimensional data.
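Roughly, what I did amounts to the sketch below: single-linkage agglomerative clustering on sorted 1-D data degenerates to splitting wherever the gap between neighbouring values exceeds a distance threshold. The threshold of 5.0 here is a hand-picked value, which is exactly the kind of arbitrary choice I would like to avoid:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class GapPartition {
    // Single-linkage agglomerative clustering on 1-D data, cut at a fixed
    // distance threshold. On a sorted line this reduces to splitting
    // wherever the gap between neighbouring values exceeds the threshold.
    static List<List<Double>> partition(double[] values, double threshold) {
        double[] v = values.clone();
        Arrays.sort(v);
        List<List<Double>> clusters = new ArrayList<>();
        List<Double> current = new ArrayList<>();
        current.add(v[0]);
        for (int i = 1; i < v.length; i++) {
            if (v[i] - v[i - 1] > threshold) { // gap too large: close cluster
                clusters.add(current);
                current = new ArrayList<>();
            }
            current.add(v[i]);
        }
        clusters.add(current);
        return clusters;
    }

    public static void main(String[] args) {
        double[] data = {1.91, 2.87, 3.61, 10.91, 11.91, 12.82,
                         100.73, 100.71, 101.89, 200};
        // 5.0 is a hand-tuned threshold -- the arbitrary choice I want to avoid
        System.out.println(partition(data, 5.0));
        // [[1.91, 2.87, 3.61], [10.91, 11.91, 12.82],
        //  [100.71, 100.73, 101.89], [200.0]]
    }
}
```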
I spent a lot of time searching for a solution, but the suggestions I found contradict each other: this and this vs. this and this and this.
I also found a suggestion other than clustering, namely natural breaks optimization. However, like K-means, it too requires the number of partitions to be declared in advance (right? see the sketch below).
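If I understand it correctly, natural breaks boils down to a dynamic program like the following sketch (my own reading of the usual minimize-within-group-SSE formulation), and the number of groups k still has to be supplied up front:

```java
import java.util.Arrays;

public class NaturalBreaks {
    // Optimal partition of 1-D data into k contiguous groups minimising the
    // within-group sum of squared deviations (the objective behind Jenks
    // natural breaks / 1-D k-means). Returns the indices in the sorted
    // array where each new group starts. Note: k is an input.
    static int[] breaks(double[] values, int k) {
        double[] v = values.clone();
        Arrays.sort(v);
        int n = v.length;
        double[] s = new double[n + 1], sq = new double[n + 1]; // prefix sums
        for (int i = 0; i < n; i++) {
            s[i + 1] = s[i] + v[i];
            sq[i + 1] = sq[i] + v[i] * v[i];
        }
        double[][] cost = new double[k + 1][n + 1]; // cost[g][j]: best SSE of first j items in g groups
        int[][] arg = new int[k + 1][n + 1];        // arg[g][j]: start index of group g
        for (int j = 1; j <= n; j++) cost[1][j] = sse(s, sq, 0, j - 1);
        for (int g = 2; g <= k; g++) {
            for (int j = g; j <= n; j++) {
                cost[g][j] = Double.MAX_VALUE;
                for (int m = g - 1; m < j; m++) {
                    double c = cost[g - 1][m] + sse(s, sq, m, j - 1);
                    if (c < cost[g][j]) { cost[g][j] = c; arg[g][j] = m; }
                }
            }
        }
        int[] cuts = new int[k - 1]; // backtrack the chosen break positions
        for (int g = k, j = n; g >= 2; g--) { cuts[g - 2] = arg[g][j]; j = arg[g][j]; }
        return cuts;
    }

    // Sum of squared deviations of the sorted slice [i..j], in O(1) via prefix sums.
    static double sse(double[] s, double[] sq, int i, int j) {
        double sum = s[j + 1] - s[i];
        return (sq[j + 1] - sq[i]) - sum * sum / (j - i + 1);
    }

    public static void main(String[] args) {
        double[] data = {1.91, 2.87, 3.61, 10.91, 11.91, 12.82,
                         100.73, 100.71, 101.89, 200};
        System.out.println(Arrays.toString(breaks(data, 4))); // [3, 6, 9]
    }
}
```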
This is quite confusing, especially because I have to perform this kind of segmentation on several arrays, and it is impossible to know the optimal partition number in advance.
Is there any way to find partitions (minimizing the variance within partitions and maximizing the variance between them) that has some theoretical justification?
Any pointers to articles/papers (with C/C++/Java implementations, if available) that provide some theoretical justification would be very helpful.