If you don't need to draw the diagonal of each quad in your wireframe, and are fine with drawing only the edges of each quad, this gets much simpler. There's no need to worry about barycentric coordinates when you operate on quads instead of triangles. Instead of the 3 barycentric coordinates, use 2 coordinates for the relative position within the mesh:
0,2----1,2----2,2----3,2----4,2
 |      |      |      |      |
 |      |      |      |      |
 |      |      |      |      |
0,1----1,1----2,1----3,1----4,1
 |      |      |      |      |
 |      |      |      |      |
 |      |      |      |      |
0,0----1,0----2,0----3,0----4,0
This also allows you to share vertices across quads, cutting the total number of vertices in your model by approximately a factor of 4.
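As a sketch of how such a grid could be built on the CPU side (untested, and the function name buildGrid plus the flat attribute layout are my own choices, not from the article), each vertex's integer grid position doubles as the quad-coordinate attribute, and the index buffer reuses the shared corner vertices:

```javascript
// Build a grid of cols x rows quads with shared vertices.
// Each vertex carries its integer grid position (i, j), which is
// exactly the coordinate pair to feed to the vertex shader.
function buildGrid(cols, rows) {
  const positions = [];   // x, y pairs; height/z omitted for brevity
  const quadCoords = [];  // the (i, j) pair, later interpolated as vQC
  for (let j = 0; j <= rows; j++) {
    for (let i = 0; i <= cols; i++) {
      positions.push(i, j);
      quadCoords.push(i, j);
    }
  }
  // Two triangles per quad, referencing the shared vertices by index.
  const indices = [];
  const stride = cols + 1;
  for (let j = 0; j < rows; j++) {
    for (let i = 0; i < cols; i++) {
      const v0 = j * stride + i;
      indices.push(v0, v0 + 1, v0 + stride + 1,
                   v0, v0 + stride + 1, v0 + stride);
    }
  }
  return { positions, quadCoords, indices };
}
```

For the 4x2 grid in the figure this produces 15 shared vertices instead of 32 unshared ones; the saving approaches a factor of 4 as the grid grows.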
You then feed these coordinate pairs from the vertex shader through to the fragment shader, just as the article you linked describes for the barycentric coordinates.
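The vertex-shader side of that pass-through could look like this (an untested sketch; the attribute names aPosition and aQuadCoord and the uniform uMVP are my own, not from the article):

```glsl
attribute vec3 aPosition;
attribute vec2 aQuadCoord;  // the (i, j) grid position of this vertex
uniform mat4 uMVP;
varying vec2 vQC;

void main() {
    // Hand the quad coordinates through; the rasterizer interpolates
    // them across each triangle, just like the barycentric variant.
    vQC = aQuadCoord;
    gl_Position = uMVP * vec4(aPosition, 1.0);
}
```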
In the fragment shader, the code gets slightly more complicated, since it needs to test for values being close to either 0 or 1 after taking the fractional part. I haven't tested this, but it could look something like this, with vQC being the equivalent of vBC in the article:
varying vec2 vQC;
...
void main() {
    // Position within the current quad, in [0, 1) per axis.
    vec2 vRel = fract(vQC);
    // Near an edge if either coordinate is close to 0 or close to 1.
    if (any(lessThan(vec4(vRel, 1.0 - vRel), vec4(0.02)))) {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);  // edge: black
    } else {
        gl_FragColor = vec4(0.5, 0.5, 0.5, 1.0);  // interior: gray
    }
}
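To sanity-check the threshold logic without running a shader, the same edge test can be re-expressed in plain JavaScript (my own sketch mirroring the GLSL above; isEdge is a hypothetical name):

```javascript
// CPU-side mirror of the fragment shader's edge test.
// qx, qy are interpolated quad coordinates; width is the edge half-band
// in quad units, matching the 0.02 threshold used in the shader.
function isEdge(qx, qy, width = 0.02) {
  const fx = qx - Math.floor(qx);  // fract()
  const fy = qy - Math.floor(qy);
  // An edge pixel has a fractional coordinate near 0 or near 1.
  return fx < width || fy < width || 1.0 - fx < width || 1.0 - fy < width;
}
```

Note that a constant threshold like 0.02 gives lines whose thickness varies with the on-screen size of the quads; screen-space derivatives (fwidth) can make the width uniform, as the linked article discusses for the barycentric case.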