
ios - Is this code drawing at the point or pixel level? How to draw retina pixels?

Consider this admirable code, which draws a (circular) gradient:

https://github.com/paiv/AngleGradientLayer/blob/master/AngleGradient/AngleGradientLayer.m

int w = CGRectGetWidth(rect);
int h = CGRectGetHeight(rect);

and then

angleGradient(data, w, h ..

and then it loops over all of those pixels,

for (int y = 0; y < h; y++)
for (int x = 0; x < w; x++) {

basically setting the color

    *p++ = color;

But wait: wouldn't this be working in points, not pixels?

How, really, would you draw to the physical pixels on dense screens?

Is it a matter of the following?

Let's say the density is 4 on the device. Draw just as in the above code, but on a bitmap four times as large in each dimension, and then put it in the rect?

That seems messy, but is that it?



1 Answer


[Note: the code in the github example does not calculate the gradient on a per-pixel basis; it calculates the gradient on a per-point basis. -Fattie]

The code is working in pixels. First, it fills a simple raster bitmap buffer with the pixel color data. That obviously has no notion of an image scale or unit other than pixels. Next, it creates a CGImage from that buffer (in a bit of an odd way). CGImage also has no notion of a scale or unit other than pixels.

The issue comes in where the CGImage is drawn. Whether scaling is done at that point depends on the graphics context and how it has been configured. There's an implicit transform in the context that converts from user space (points, more or less) to device space (pixels).
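You can see that implicit transform directly by querying the context. This snippet is only an illustrative check, not part of the original code, and the 2.0 value assumes a 2x device:

    // Inside -drawInContext:, ask the context how user space (points)
    // maps to device space (pixels). On a layer whose contentsScale is
    // 2.0, the scale components of this transform have magnitude 2.0
    // (the vertical one may be negative because the context is flipped),
    // i.e. one point covers two pixels in each dimension.
    CGAffineTransform t = CGContextGetUserSpaceToDeviceSpaceTransform(ctx);
    NSLog(@"user->device transform: %@", NSStringFromCGAffineTransform(t));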

The -drawInContext: method ought to convert the rect using CGContextConvertRectToDeviceSpace() to get the rect for the image. Note that the unconverted rect should still be used for the call to CGContextDrawImage().

So, for a 2x Retina display context, the original rect will be in points. Let's say 100x200. The image rect will be doubled in size to represent pixels, 200x400. The draw operation will draw that to the 100x200 rect, which might seem like it would scale the large, highly-detailed image down, losing information. However, internally, the draw operation will scale the target rect to device space before doing the actual draw, and fill a 200x400 pixel area from the 200x400 pixel image, preserving all of the detail.
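To make that concrete, here is a rough sketch, not the library's actual code, of a -drawInContext: in a CALayer subclass that follows this advice. angleGradient() stands in for whatever routine fills the pixel buffer, and a 32-bit RGBA pixel format is assumed:

    - (void)drawInContext:(CGContextRef)ctx
    {
        CGRect rect = self.bounds; // point-based rect, e.g. 100x200

        // Convert to device space to find how many physical pixels the
        // layer covers (200x400 on a 2x display).
        CGRect deviceRect = CGContextConvertRectToDeviceSpace(ctx, rect);
        size_t w = (size_t)CGRectGetWidth(deviceRect);
        size_t h = (size_t)CGRectGetHeight(deviceRect);

        // Fill a raw pixel buffer at full device resolution.
        uint32_t *data = calloc(w * h, sizeof(uint32_t));
        angleGradient((uint8_t *)data, (int)w, (int)h); // hypothetical fill routine

        // Wrap the buffer in a CGImage; neither the buffer nor the image
        // knows anything about points, only pixels.
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef bmp = CGBitmapContextCreate(data, w, h, 8, w * 4, space,
            (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        CGImageRef image = CGBitmapContextCreateImage(bmp);

        // Draw into the *unconverted*, point-based rect; the context's
        // implicit transform maps the 200x400-pixel image onto 200x400
        // device pixels, so no detail is lost.
        CGContextDrawImage(ctx, rect, image);

        CGImageRelease(image);
        CGContextRelease(bmp);
        CGColorSpaceRelease(space);
        free(data);
    }

Note that the unconverted rect is still what gets passed to CGContextDrawImage(), as described above; only the bitmap dimensions come from the device-space rect.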

