
Title: ios - Is refraction possible in SceneKit?

Author: 菜鸟教程小白    Time: 2022-12-12 15:48

Is it possible to make a shape that light can pass through, so that you can see through it while the light is bent by refraction? Like a lens or glass (or water)?



Best Answer


To implement refraction with SceneKit you need an SCNProgram. The built-in shaders cannot do refraction.

Based on the answer in this post (Which are the right Matrix Values to use in a metal shader passed by a SCNProgram to get a correct chrome like reflection), a refraction effect can be implemented with SceneKit as follows. (This example is based on ARKit.)

You need to:

Make a brand-new ARKit (SceneKit) project, then create or load a sphere-like geometry object and place it in space (an SCNSphere will do).
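
A minimal sketch of that setup step (e.g. inside viewDidLoad of the template's view controller, with ARKit and SceneKit imported at file scope); sceneView, sphereNode, myScene and the size/position values are placeholder assumptions:

// Create a sphere-like geometry object and place it in space.
let myScene = sceneView.scene                 // the ARSCNView's scene
let sphere = SCNSphere(radius: 0.15)          // 15 cm radius (arbitrary)
let sphereNode = SCNNode(geometry: sphere)
sphereNode.position = SCNVector3(0, 0, -0.6)  // 60 cm in front of the world origin
myScene.rootNode.addChildNode(sphereNode)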

Implement a skybox. Make sure your skybox consists of 6 individual images (a cube map) - do not use 2:1 spherical maps; they do not seem to work with the sampler in the Metal shader. A good source for skyboxes is https://www.humus.name

Create the six required UIImages that hold the individual skybox pictures, like this:

let skybox1 = UIImage(named: "art.scnassets/subdir/image-PX.png")
let skybox2 = UIImage(named: "art.scnassets/subdir/image-NX.png")
let skybox3 = UIImage(named: "art.scnassets/subdir/image-PY.png")
let skybox4 = UIImage(named: "art.scnassets/subdir/image-NY.png")
let skybox5 = UIImage(named: "art.scnassets/subdir/image-PZ.png")
let skybox6 = UIImage(named: "art.scnassets/subdir/image-NZ.png")

The images must be square and should be a power of two in size (for optimal mipmapping). 512x512 works fine; 1024x1024 already takes a lot of memory.
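
If you want to guard against a wrongly sized face, a quick check like the following can catch it at load time (a sketch; validateSkyboxFace is just a hypothetical helper, not part of the original answer):

import UIKit

// Hypothetical helper: returns true if a skybox face is square and power-of-two sized.
func validateSkyboxFace(_ image: UIImage) -> Bool {
    let w = Int(image.size.width * image.scale)   // pixel width
    let h = Int(image.size.height * image.scale)  // pixel height
    let isPowerOfTwo = w > 0 && (w & (w - 1)) == 0
    return w == h && isPowerOfTwo
}

assert(validateSkyboxFace(skybox1!), "Skybox faces must be square and power-of-two sized")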

Create an SCNMaterialProperty that holds the array of the skybox's individual images, like this:

// Cube-Map Structure:
//      PY
//  NX  PZ  PX  NZ
//      NY

// Array Order:
// PX, NX, PY, NY, PZ, NZ

let envMapSkyboxMaterialProperty = SCNMaterialProperty(contents: [skybox1!,skybox2!,skybox3!,skybox4!,skybox5!,skybox6!])

(P = positive, N = negative)

Then set up the skybox like this (we need it for the scene's reflections, for the refraction background and for the lighting*):

myScene.background.contents = envMapSkyboxMaterialProperty.contents

Also set the lighting environment**:

myScene.lightingEnvironment.contents = envMapSkyboxMaterialProperty.contents

Assuming you can now place your geometry object in space with the default material, we are ready to set up the SCNProgram together with the special Metal shader that does the light refraction.

Create the SCNProgram and configure it like this:

let sceneProgramRefract = SCNProgram()
sceneProgramRefract.vertexFunctionName   = "myVertexRefract"   // name of the vertex function in the shader
sceneProgramRefract.fragmentFunctionName = "myFragmentRefract" // name of the fragment function in the shader

Attach the SCNProgram to the material of your target geometry node, like this:

firstMaterial.program = sceneProgramRefract // this replaces the entire built-in SceneKit shading for that object
firstMaterial.setValue(envMapSkyboxMaterialProperty, forKey: "cubeTexture") // "cubeTexture" is the name used in the shader to access the skybox
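
For context, firstMaterial above is assumed to be the default material of the geometry created in the first step; a minimal sketch of how it might be obtained (sphereNode being the node that holds the SCNSphere):

// Fetch the default material of the target geometry node;
// the two lines above then operate on this material.
guard let firstMaterial = sphereNode.geometry?.firstMaterial else {
    fatalError("target node has no geometry or material")
}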

Add a new Metal file to your project and name it "shaders.metal".

Replace everything in that Metal file with this:

// Default Metal Header for SCNProgram
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

// Default Sampler for the Skybox
constexpr sampler cubeSampler;


// Nodebuffer (you only need the enabled Matrix floats)
struct MyNodeBuffer {
    // float4x4 modelTransform;
    // float4x4 inverseModelTransform;
    float4x4 modelViewTransform; // required
    // float4x4 inverseModelViewTransform;
    float4x4 normalTransform; // required
    // float4x4 modelViewProjectionTransform;
    // float4x4 inverseModelViewProjectionTransform;
};

// Input Struct
typedef struct {
    float3 position [[ attribute(SCNVertexSemanticPosition) ]];
    float3 normal   [[ attribute(SCNVertexSemanticNormal)   ]];
} MyVertexInput;

// Struct filled by the Vertex Shader
struct SimpleVertexRefract
{
    float4 position [[position]];
    float  k;
    float3 worldSpaceReflection;
    float3 worldSpaceRefraction;
};

// VERTEX SHADER
vertex SimpleVertexRefract myVertexRefract(MyVertexInput in [[stage_in]],
                                           constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                                           constant MyNodeBuffer& scn_node [[buffer(1)]])
{
    float4 modelSpacePosition(in.position, 1.0f);
    float4 modelSpaceNormal(in.normal, 0.0f);

    // We'll be computing the reflection in eye space, so first we find the eye-space
    // position. This is also used to compute the clip-space position below.
    float4 eyeSpacePosition   = scn_node.modelViewTransform * modelSpacePosition;

    // We compute the eye-space normal in the usual way.
    float3 eyeSpaceNormal     = (scn_node.normalTransform * modelSpaceNormal).xyz;

    // The view vector in eye space is just the vector from the eye-space position.
    float3 eyeSpaceViewVector = normalize(-eyeSpacePosition.xyz);

    float3 view_vec = normalize(eyeSpaceViewVector);
    float3 normal   = normalize(eyeSpaceNormal);

    const float ETA = 1.12f; // defines the intensity of the refraction (1.0 means no refraction)
    float c = dot(view_vec, normal);
    float d = ETA * c;
    float k = clamp(d * d + (1.0f - ETA * ETA), 0.0f, 1.0f); // k is used in the fragment shader

    // Reflection / refraction:
    // to find the reflection/refraction vectors, we reflect/refract the (inbound) view vector about the normal.
    float4 eyeSpaceReflection = float4(reflect(-eyeSpaceViewVector, eyeSpaceNormal), 0.0f);
    float4 eyeSpaceRefraction = float4(refract(-eyeSpaceViewVector, eyeSpaceNormal, ETA), 0.0f);

    // To sample the cube map, we want world-space vectors, so multiply
    // by the inverse view transform to go back from eye space to world space.
    float3 worldSpaceReflection = (scn_frame.inverseViewTransform * eyeSpaceReflection).xyz;
    float3 worldSpaceRefraction = (scn_frame.inverseViewTransform * eyeSpaceRefraction).xyz;

    // Fill the out struct
    SimpleVertexRefract out;
    out.position             = scn_frame.projectionTransform * eyeSpacePosition;
    out.k                    = k;
    out.worldSpaceReflection = worldSpaceReflection;
    out.worldSpaceRefraction = worldSpaceRefraction;
    return out;
}

// FRAGMENT SHADER
fragment float4 myFragmentRefract(SimpleVertexRefract in [[stage_in]],
                                  texturecube<float, access::sample> cubeTexture [[texture(0)]])
{
    // Since the reflection vector's length will vary under interpolation, we normalize it
    // and flip it from the assumed right-hand space of the world to the left-hand space
    // of the interior of the cubemap.
    float3 worldSpaceReflection = normalize(in.worldSpaceReflection) * float3(1.0f, 1.0f, -1.0f);
    float3 worldSpaceRefraction = normalize(in.worldSpaceRefraction) * float3(1.0f, 1.0f, -1.0f);

    float3 reflection = cubeTexture.sample(cubeSampler, worldSpaceReflection).rgb;
    float3 refraction = cubeTexture.sample(cubeSampler, worldSpaceRefraction).rgb;

    float4 color;
    color.rgb = mix(reflection, refraction, float3(in.k)); // this is where k is finally used
    color.a   = 1.0f;
    return color;
}

Compile and run. The effect should look like this:

Figure 1 (screenshot of the resulting refraction effect)

* If you are working with an AR scene: setting the skybox will override the current camera feed, so you may want to back up the AR feed before setting the skybox, like this. Make a global definition:

var originalARSource : Any? = nil // backup of the original scene background (the AR camera feed)
originalARSource = myScene.background.contents

You can jump back to the AR feed by setting myScene.background.contents back to originalARSource, for example:
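
A minimal sketch of that switch, using the names introduced above (myScene, originalARSource, envMapSkyboxMaterialProperty):

// Show the live AR camera feed again...
myScene.background.contents = originalARSource

// ...or switch back to the skybox later.
myScene.background.contents = envMapSkyboxMaterialProperty.contents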

** In ARKit, make sure environment texturing is set to .none in the tracking configuration while the skybox is active:

configuration.environmentTexturing = .none
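
In context, this would be part of the session setup; a sketch assuming a standard ARWorldTrackingConfiguration and the template's sceneView (both are assumptions, not part of the original answer):

import ARKit

// Run the AR session with environment texturing disabled, so ARKit's
// generated environment map does not interfere with the custom skybox.
let configuration = ARWorldTrackingConfiguration()
configuration.environmentTexturing = .none
sceneView.session.run(configuration)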

Regarding ios - Is refraction possible in SceneKit?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31276054/





