Is it possible to detect every pixel being touched? More specifically, when the user touches the screen, is it possible to track all the x-y coordinates of the cluster of points the user is touching? How can I tell the difference between when the user is drawing with a thumb and when drawing with a fingertip? I would like to reflect the brush difference depending on how the user touches the screen, and I would also like to track all of the pixels being touched.
I am currently using the following code from the GLPaint sample on the Apple developer site:
http://developer.apple.com/library/ios/#samplecode/GLPaint/Introduction/Intro.html
The sample code allows drawing with a predefined brush size and tracks the x-y coordinates along the way. How can I change the brush based on how the user touches the screen, and track all of the pixels being touched?
// Draws a line onscreen based on where the user touches
- (void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end
{
    NSLog(@"x:%f y:%f", start.x, start.y);

    static GLfloat*   vertexBuffer = NULL;
    static NSUInteger vertexMax = 64;
    NSUInteger        vertexCount = 0,
                      count,
                      i;

    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

    // Convert locations from Points to Pixels
    CGFloat scale = self.contentScaleFactor;
    start.x *= scale;
    start.y *= scale;
    end.x *= scale;
    end.y *= scale;

    // Allocate vertex array buffer
    if(vertexBuffer == NULL)
        vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));

    // Add points to the buffer so there are drawing points every X pixels
    count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) + (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1);
    for(i = 0; i < count; ++i) {
        if(vertexCount == vertexMax) {
            vertexMax = 2 * vertexMax;
            vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
        }
        vertexBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count);
        vertexBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count);
        vertexCount += 1;
    }

    // Render the vertex array
    glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
    glDrawArrays(GL_POINTS, 0, vertexCount);

    // Display the buffer
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
// Handles the start of a touch
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGRect bounds = [self bounds];
    UITouch* touch = [[event touchesForView:self] anyObject];
    firstTouch = YES;
    // Convert touch point from UIView referential to OpenGL one (upside-down flip)
    location = [touch locationInView:self];
    location.y = bounds.size.height - location.y;
}
// Handles the continuation of a touch.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGRect bounds = [self bounds];
    UITouch* touch = [[event touchesForView:self] anyObject];

    // Convert touch point from UIView referential to OpenGL one (upside-down flip)
    if (firstTouch) {
        firstTouch = NO;
        previousLocation = [touch previousLocationInView:self];
        previousLocation.y = bounds.size.height - previousLocation.y;
    } else {
        location = [touch locationInView:self];
        location.y = bounds.size.height - location.y;
        previousLocation = [touch previousLocationInView:self];
        previousLocation.y = bounds.size.height - previousLocation.y;
    }

    // Render the stroke
    [self renderLineFromPoint:previousLocation toPoint:location];
}
// Handles the end of a touch event when the touch is a tap.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGRect bounds = [self bounds];
    UITouch* touch = [[event touchesForView:self] anyObject];
    if (firstTouch) {
        firstTouch = NO;
        previousLocation = [touch previousLocationInView:self];
        previousLocation.y = bounds.size.height - previousLocation.y;
        [self renderLineFromPoint:previousLocation toPoint:location];
    }
}
// Handles the end of a touch event.
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    // If appropriate, add code necessary to save the state of the application.
    // This application is not saving state.
}
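As an aside on the "track all the pixels being touched" part of the question: the loop in renderLineFromPoint:toPoint: already interpolates a point every kBrushPixelStep pixels between the previous and current touch locations, so one way to keep every drawn coordinate is simply to record those interpolated points. A minimal sketch under that assumption; the touchedPoints property and the recordPointsFromPoint:toPoint: helper are hypothetical, not part of the GLPaint sample:

// Hypothetical storage for every interpolated point of the stroke.
@property (nonatomic, strong) NSMutableArray *touchedPoints;

// Uses the same interpolation as renderLineFromPoint:toPoint:, so every
// point that ends up on screen is also recorded.
- (void)recordPointsFromPoint:(CGPoint)start toPoint:(CGPoint)end
{
    if (self.touchedPoints == nil) {
        self.touchedPoints = [NSMutableArray array];
    }
    NSUInteger count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) +
                                       (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1);
    for (NSUInteger i = 0; i < count; ++i) {
        CGFloat x = start.x + (end.x - start.x) * ((CGFloat)i / (CGFloat)count);
        CGFloat y = start.y + (end.y - start.y) * ((CGFloat)i / (CGFloat)count);
        [self.touchedPoints addObject:[NSValue valueWithCGPoint:CGPointMake(x, y)]];
    }
}

Calling this helper from the same places that call renderLineFromPoint:toPoint: would give one CGPoint per drawn dot, which can later be inspected or replayed.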
Best Answer
AFAIK there is no API that gives you access to the touch area of a touch. Given the limitations of capacitive touch screens, I'm not even sure whether what you want is physically possible. I remember a recent CocoaHeads presentation showing that some of this information is available for trackpads on OS X (via private APIs), but not on iOS.
I believe this is one of the reasons graphics tablets use special styluses with built-in sensor technology.
For a drawing application, a partial workaround might be to simulate "ink bleed" the way some desktop applications do: if the user's touch lingers at a given spot, draw as if ink were flowing out of the "pen" and gradually spreading through the "paper".
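A very rough sketch of that ink-bleed idea, assuming a plain UIView subclass with a hypothetical drawDabAtPoint:radius: drawing method (in GLPaint this would map to rendering a point sprite with a larger brush); the inkTimer, inkRadius, and lastTouchPoint names are made up for illustration:

#import <UIKit/UIKit.h>

@interface InkBleedView : UIView
@property (nonatomic, strong) NSTimer *inkTimer;     // fires while the touch lingers
@property (nonatomic, assign) CGFloat inkRadius;     // current "spread" of the ink
@property (nonatomic, assign) CGPoint lastTouchPoint;
@end

@implementation InkBleedView

// Placeholder for the actual drawing; GLPaint would render a point sprite here.
- (void)drawDabAtPoint:(CGPoint)point radius:(CGFloat)radius
{
    NSLog(@"dab at (%f, %f) radius %f", point.x, point.y, radius);
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    self.lastTouchPoint = [touch locationInView:self];
    self.inkRadius = 1.0;   // start with a thin stroke

    // While the finger stays put, grow the dab every 1/20 s to simulate
    // ink soaking into the paper.
    self.inkTimer = [NSTimer scheduledTimerWithTimeInterval:0.05
                                                     target:self
                                                   selector:@selector(bleedInk:)
                                                   userInfo:nil
                                                    repeats:YES];
}

- (void)bleedInk:(NSTimer *)timer
{
    self.inkRadius = MIN(self.inkRadius + 0.5, 30.0);   // cap the spread
    [self drawDabAtPoint:self.lastTouchPoint radius:self.inkRadius];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    self.lastTouchPoint = [touch locationInView:self];
    self.inkRadius = 1.0;   // moving resets the spread to the normal stroke width
    [self drawDabAtPoint:self.lastTouchPoint radius:self.inkRadius];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [self.inkTimer invalidate];
    self.inkTimer = nil;
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    [self.inkTimer invalidate];
    self.inkTimer = nil;
}

@end

The timer keeps enlarging the dab while the finger stays put and is invalidated as soon as the touch ends, so a moving stroke keeps its normal width.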
Regarding objective-c - Getting individual pixels from a touch point, we found a similar question on Stack Overflow:
https://stackoverflow.com/questions/10015774/