ios - Why do I get the wrong color of a pixel with the following code?

I create a UIImage with a red background color:

let theimage:UIImage=imageWithColor(UIColor(red: 1, green: 0, blue: 0, alpha: 1) );

func imageWithColor(color: UIColor) -> UIImage {
    let rect = CGRectMake(0.0, 0.0, 200.0, 200.0)
    UIGraphicsBeginImageContext(rect.size)
    let context = UIGraphicsGetCurrentContext()

    CGContextSetFillColorWithColor(context, color.CGColor)
    CGContextFillRect(context, rect)

    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    return image
}

I am retrieving the color in the middle of the image as follows:

let h:CGFloat=theimage.size.height;
let w:CGFloat=theimage.size.width;


let test:UIColor=theimage.getPixelColor(CGPoint(x: 100, y: 100))

var rvalue:CGFloat = 0;
var gvalue:CGFloat = 0;
var bvalue:CGFloat = 0;
var alfaval:CGFloat = 0;
test.getRed(&rvalue, green: &gvalue, blue: &bvalue, alpha: &alfaval);


print("Blue Value : " + String(bvalue));
print("Red Value : " + String(rvalue));


extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {

        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)

        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4

        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)

        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}

As a result I get:

Blue Value : 1.0
Red Value : 0.0

Why is that? I can't find the mistake.

1 Answer


The problem is not the built-in getRed function, but rather the function that builds the UIColor object from the individual color components in the provider data. Your code is assuming that the provider data is stored in RGBA format, but it apparently is not. It would appear to be in ARGB format. Also, I'm not sure you have the byte order right, either.
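
As a quick sanity check, you could log the image's pixel-format metadata before assuming any particular layout. This is just a diagnostic sketch (not part of the original code); it assumes theimage is the red image built in the question:

if let cgImage = theimage.cgImage {
    // alphaInfo tells you where (and whether) the alpha component lives, e.g. .premultipliedFirst.
    print("alphaInfo:", cgImage.alphaInfo.rawValue)
    // byteOrderInfo tells you the endianness of each 32-bit pixel word, e.g. .order32Little.
    print("byteOrderInfo:", cgImage.byteOrderInfo.rawValue)
    print("bitsPerPixel:", cgImage.bitsPerPixel)
    // bytesPerRow can include padding, so don't assume it equals width * 4.
    print("bytesPerRow:", cgImage.bytesPerRow)
}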

When you have an image, there are a variety of ways the individual color components can be packed into the provider data. A few examples are shown in the Quartz 2D Programming Guide:

[Figure: example pixel formats from the Quartz 2D Programming Guide]

If you're going to have a getPixelColor routine that is hard-coded for a particular format, I'd at least check the alphaInfo and byteOrderInfo, like so (in Swift 4.2):

extension UIImage {
    func getPixelColor(point: CGPoint) -> UIColor? {
        guard let cgImage = cgImage,
            let pixelData = cgImage.dataProvider?.data
            else { return nil }

        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)

        let alphaInfo = cgImage.alphaInfo
        assert(alphaInfo == .premultipliedFirst || alphaInfo == .first || alphaInfo == .noneSkipFirst, "This routine expects alpha to be first component")

        let byteOrderInfo = cgImage.byteOrderInfo
        assert(byteOrderInfo == .order32Little || byteOrderInfo == .orderDefault, "This routine expects little-endian 32bit format")

        // Use bytesPerRow rather than width * 4, because rows can be padded.
        let bytesPerRow = cgImage.bytesPerRow
        let pixelInfo = Int(point.y) * bytesPerRow + Int(point.x) * 4

        // With alpha-first, little-endian 32-bit data, the bytes in memory are B, G, R, A.
        let a: CGFloat = CGFloat(data[pixelInfo+3]) / 255
        let r: CGFloat = CGFloat(data[pixelInfo+2]) / 255
        let g: CGFloat = CGFloat(data[pixelInfo+1]) / 255
        let b: CGFloat = CGFloat(data[pixelInfo  ]) / 255

        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
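
For example, with the 200×200 red image from the question, the check might look like this. This is just a usage sketch; it assumes the image's backing data really is in the little-endian, alpha-first layout that the assertions above guard against:

if let center = theimage.getPixelColor(point: CGPoint(x: 100, y: 100)) {
    var r: CGFloat = 0, g: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
    center.getRed(&r, green: &g, blue: &b, alpha: &a)
    print("Red Value : \(r)")    // now 1.0
    print("Blue Value : \(b)")   // now 0.0
}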

And if you're always going to build this image programmatically, and you have code that depends on the bitmap layout, I'd explicitly specify those details when creating the image:

func image(with color: UIColor, size: CGSize) -> UIImage? {
    let rect = CGRect(origin: .zero, size: size)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    guard let context = CGContext(data: nil,
                                  width: Int(rect.width),
                                  height: Int(rect.height),
                                  bitsPerComponent: 8,
                                  bytesPerRow: Int(rect.width) * 4,
                                  space: colorSpace,
                                  bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue) else { // 32-bit BGRA: alpha first, little-endian
        return nil
    }
    context.setFillColor(color.cgColor)
    context.fill(rect)
    return context.makeImage().flatMap { UIImage(cgImage: $0) }
}
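
Putting those two pieces together, a round trip might look like this (a small usage sketch; it relies on the image(with:size:) helper above producing exactly the layout that the earlier getPixelColor(point:) expects):

if let redImage = image(with: .red, size: CGSize(width: 200, height: 200)),
    let center = redImage.getPixelColor(point: CGPoint(x: 100, y: 100)) {
    var r: CGFloat = 0, g: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
    center.getRed(&r, green: &g, blue: &b, alpha: &a)
    print("Red Value : \(r)")    // 1.0
    print("Blue Value : \(b)")   // 0.0
}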

Perhaps even better, as shown in Technical Q&A 1509, you might want to have getPixelColor explicitly create its own context of a predetermined format, draw the image into that context, and then the code is no longer contingent upon the format of whatever image you apply it to.

extension UIImage {

    func getPixelColor(point: CGPoint) -> UIColor? {
        guard let cgImage = cgImage else { return nil }

        let width = Int(size.width)
        let height = Int(size.height)
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        guard let context = CGContext(data: nil,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width * 4,
                                      space: colorSpace,
                                      bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
            else {
                return nil
        }

        context.draw(cgImage, in: CGRect(origin: .zero, size: size))

        guard let pixelBuffer = context.data else { return nil }

        // On a little-endian host each 32-bit word reads as 0xAARRGGBB (bytes in memory: B, G, R, A).
        let pointer = pixelBuffer.bindMemory(to: UInt32.self, capacity: width * height)
        let pixel = pointer[Int(point.y) * width + Int(point.x)]

        let r: CGFloat = CGFloat(red(for: pixel))   / 255
        let g: CGFloat = CGFloat(green(for: pixel)) / 255
        let b: CGFloat = CGFloat(blue(for: pixel))  / 255
        let a: CGFloat = CGFloat(alpha(for: pixel)) / 255

        return UIColor(red: r, green: g, blue: b, alpha: a)
    }

    private func alpha(for pixelData: UInt32) -> UInt8 {
        return UInt8((pixelData >> 24) & 255)
    }

    private func red(for pixelData: UInt32) -> UInt8 {
        return UInt8((pixelData >> 16) & 255)
    }

    private func green(for pixelData: UInt32) -> UInt8 {
        return UInt8((pixelData >> 8) & 255)
    }

    private func blue(for pixelData: UInt32) -> UInt8 {
        return UInt8((pixelData >> 0) & 255)
    }

    private func rgba(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) -> UInt32 {
        return (UInt32(alpha) << 24) | (UInt32(red) << 16) | (UInt32(green) << 8) | (UInt32(blue) << 0)
    }

}

Clearly, if you're going to check a bunch of pixels, you'll want to refactor this (decouple the creation of the standardized pixel buffer from the code that checks the color), but hopefully this illustrates the idea.
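
One way that refactor might look is sketched below. This is a rough sketch only; standardizedPixelData() is a hypothetical helper name, not something from UIKit or the code above:

extension UIImage {

    // Hypothetical helper (not part of the answer above): render the image once into a
    // known BGRA little-endian buffer and hand back the raw 32-bit pixel words.
    func standardizedPixelData() -> (pixels: [UInt32], width: Int)? {
        guard let cgImage = cgImage else { return nil }

        let width = Int(size.width)
        let height = Int(size.height)
        var pixels = [UInt32](repeating: 0, count: width * height)
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        let drawn = pixels.withUnsafeMutableBytes { buffer -> Bool in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: width,
                                          height: height,
                                          bitsPerComponent: 8,
                                          bytesPerRow: width * 4,
                                          space: colorSpace,
                                          bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
                else { return false }
            context.draw(cgImage, in: CGRect(origin: .zero, size: size))
            return true
        }

        return drawn ? (pixels, width) : nil
    }
}

// Usage sketch: build the standardized buffer once, then inspect as many pixels as needed.
if let (pixels, width) = theimage.standardizedPixelData() {
    let pixel = pixels[100 * width + 100]              // row 100, column 100
    let red = CGFloat((pixel >> 16) & 255) / 255       // words read as 0xAARRGGBB on a little-endian host
    print("red at (100, 100): \(red)")
}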


For earlier versions of Swift, see the previous revision of this answer.

