I am creating an app that detects exercises. I trained the model using Create ML and got a 100% result in the Create ML app, but when I integrate it into the application using the Vision framework it always shows only one exercise. I followed the code exactly from Build an Action Classifier with Create ML for creating the model and requesting VNHumanBodyPoseObservation, and followed this for converting VNHumanBodyPoseObservation to MLMultiArray.
Here is the code I use:
func didOutput(pixelBuffer: CVPixelBuffer) {
    self.extractPoses(pixelBuffer)
}

func extractPoses(_ pixelBuffer: CVPixelBuffer) {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
    let request = VNDetectHumanBodyPoseRequest { (request, err) in
        if err == nil {
            if let observations =
                request.results as? [VNRecognizedPointsObservation], observations.count > 0 {
                if let prediction = try? self.makePrediction(observations) {
                    print("\(prediction.label), confidence: \(prediction.confidence)")
                }
            }
        }
    }
    do {
        // Perform the body pose-detection request.
        try handler.perform([request])
    } catch {
        print("Unable to perform the request: \(error).")
    }
}
func makePrediction(_ observations: [VNRecognizedPointsObservation]) throws -> (label: String, confidence: Double) {
    let fitnessClassifier = try PlayerExcercise(configuration: MLModelConfiguration())
    let numAvailableFrames = observations.count
    let observationsNeeded = 60
    var multiArrayBuffer = [MLMultiArray]()

    // Convert each pose observation into a [1, 3, 18] multiarray.
    for frameIndex in 0 ..< min(numAvailableFrames, observationsNeeded) {
        let pose = observations[frameIndex]
        do {
            let oneFrameMultiArray = try pose.keypointsMultiArray()
            multiArrayBuffer.append(oneFrameMultiArray)
        } catch {
            continue
        }
    }

    // If the pose window does not have enough frames (60) yet, pad it with zeros.
    if numAvailableFrames < observationsNeeded {
        for _ in 0 ..< (observationsNeeded - numAvailableFrames) {
            do {
                let oneFrameMultiArray = try MLMultiArray(shape: [1, 3, 18], dataType: .double)
                try resetMultiArray(oneFrameMultiArray)
                multiArrayBuffer.append(oneFrameMultiArray)
            } catch {
                continue
            }
        }
    }

    // Concatenate the single-frame arrays into one model input and predict.
    let modelInput = MLMultiArray(concatenating: multiArrayBuffer, axis: 0, dataType: .float)
    let predictions = try fitnessClassifier.prediction(poses: modelInput)
    return (label: predictions.label, confidence: predictions.labelProbabilities[predictions.label]!)
}
func resetMultiArray(_ predictionWindow: MLMultiArray, with value: Double = 0.0) throws {
    let pointer = try UnsafeMutableBufferPointer<Double>(predictionWindow)
    pointer.initialize(repeating: value)
}
I suspect the issue is happening while converting VNRecognizedPointsObservation to MLMultiArray. Please help me; I have been trying hard to achieve this. Thanks in advance.
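For reference, here is a minimal sketch of how I could test that conversion step in isolation (debugDumpPose is just a hypothetical helper for this check, not part of my app); if the printed values are non-zero, I assume the conversion itself is fine and the problem is elsewhere:

import CoreML
import Vision

// Hypothetical debug helper: dump the shape and values of the multiarray
// produced from a single pose observation by keypointsMultiArray().
func debugDumpPose(_ observation: VNRecognizedPointsObservation) {
    do {
        let array = try observation.keypointsMultiArray()
        print("shape: \(array.shape)")   // a body-pose observation should give [1, 3, 18]
        for index in 0 ..< array.count {
            // All-zero values would mean the keypoints are lost before the classifier sees them.
            print(array[index])
        }
    } catch {
        print("keypointsMultiArray() failed: \(error)")
    }
}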