swift - LiDAR and RealityKit – Capture a Real World Texture for a Scanned Model

Task

I would like to capture a real-world texture and apply it to a 3D mesh produced with the help of a LiDAR scanner. I suppose that Projection-View-Model matrices should be used for that. The texture must be made from a fixed Point-of-View, for example, from the center of a room. However, it would be an ideal solution if we could apply the environmentTexturing data collected as a cube-map texture in the scene.

Look at 3D Scanner App. It's a reference app allowing us to export a model with its texture.

I need to capture the texture in one pass; I do not need to update it in real time. I realize that changing the PoV leads to incorrect texture perception, in other words, to texture distortion. I also realize that RealityKit applies dynamic tessellation and automatic texture mipmapping (texture resolution depends on the distance from which it was captured).

import RealityKit
import ARKit
import MetalKit
import ModelIO

class ViewController: UIViewController, ARSessionDelegate {
    
    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        arView.session.delegate = self
        arView.debugOptions.insert(.showSceneUnderstanding)

        let config = ARWorldTrackingConfiguration()
        config.sceneReconstruction = .mesh          // build a LiDAR-based mesh of the surroundings
        config.environmentTexturing = .manual       // environment probes are placed manually
        arView.session.run(config)
    }
}
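
To illustrate the Projection-View-Model idea mentioned in the task, here is a rough sketch (the function name and its inputs are hypothetical, and the vertices are assumed to come from the reconstructed ARMeshAnchor geometry): it projects world-space mesh vertices into a single captured ARFrame and turns the resulting pixel positions into UV coordinates for a texture taken from that fixed Point-of-View.

import ARKit
import UIKit

// A rough sketch, not a complete solution: project world-space mesh vertices
// into one captured frame and use the normalized image positions as UVs.
func textureCoordinates(for worldVertices: [SIMD3<Float>],
                        in frame: ARFrame,
                        viewportSize: CGSize) -> [SIMD2<Float>] {
    worldVertices.map { vertex in
        // projectPoint(_:orientation:viewportSize:) applies the camera's
        // view and projection transforms for us
        let point = frame.camera.projectPoint(vertex,
                                              orientation: .portrait,
                                              viewportSize: viewportSize)
        return SIMD2<Float>(Float(point.x / viewportSize.width),
                            Float(point.y / viewportSize.height))
    }
}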

Question

  • How can I capture a real-world texture and apply it to a reconstructed 3D mesh?



1 Answer


It's a pity, but I am still unable to capture a model's texture in real time using the LiDAR scanning process (at WWDC21 Apple didn't announce an API for that). However, there's good news: a new methodology has emerged at last. It allows developers to create textured models from a series of shots.

Photogrammetry

The Object Capture API, announced at WWDC 2021, provides developers with the long-awaited photogrammetry tool. As output we get a USDZ model with a corresponding texture. To use the Object Capture API you need Xcode 13, iOS 15, and macOS 12.


Let me share some tips on how to capture high-quality photos:

  • Lighting conditions must be appropriate
  • Use soft light with soft (not harsh) shadows
  • Adjacent images must have a 75% overlap
  • Do not use autofocus
  • Images with RGB + Depth channels are preferable
  • Images with gravity data are preferable
  • Higher resolution and RAW images are preferable
  • Do not capture moving objects
  • Do not capture reflective or refractive objects
  • Do not capture objects with specular highlights

Technically, the iPhone is capable of storing multiple channels as visual data, and data from any iOS sensor as metadata. In other words, we can implement digital compositing techniques. For each shot we should store the following channels: RGB, Alpha (segmentation), Depth with its Confidence, Disparity, etc., plus useful data from the digital compass. The depth channel can be taken from the LiDAR (where the precise distance is measured in meters) or from the two RGB cameras (whose disparity channels are of mediocre quality). We are able to save all this data in an OpenEXR file or in Apple's double four-channel JPEG. Depth data must be 32-bit.
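
As a rough illustration of the RGB + Depth + gravity idea, here is a sketch with assumed names (ShotCapturer, capturePhoto); it is not Apple's sample code, just one way to grab a photo together with a 32-bit depth map and the current gravity vector:

import AVFoundation
import CoreMotion

// A minimal sketch: capture an RGB photo with a 32-bit depth map and the
// device's gravity vector, to be stored as per-shot data for photogrammetry.
final class ShotCapturer: NSObject, AVCapturePhotoCaptureDelegate {

    let photoOutput = AVCapturePhotoOutput()    // assumed to be attached to a configured AVCaptureSession
    let motionManager = CMMotionManager()

    func capturePhoto() {
        motionManager.startDeviceMotionUpdates()  // gravity values arrive shortly after this call

        // Depth delivery must be enabled on the output before requesting it per photo
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

        let settings = AVCapturePhotoSettings()
        settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // Convert the depth map to 32-bit floats (meters when it comes from LiDAR)
        let depth32 = photo.depthData?.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)

        // Gravity vector from Core Motion, stored alongside the shot
        let gravity = motionManager.deviceMotion?.gravity

        // Persist photo.fileDataRepresentation(), depth32 and gravity here
        _ = (depth32, gravity)
    }
}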

Here's an Apple sample app where such a capture approach is implemented.


To create a USDZ model from a series of captured images, submit these images to RealityKit using a PhotogrammetrySession.

Here's a code snippet that sheds some light on this process:

import RealityKit
import Combine

let pathToImages = URL(fileURLWithPath: "/path/to/my/images/")

let url = URL(fileURLWithPath: "model.usdz")

let request = PhotogrammetrySession.Request.modelFile(url: url, 
                                                   detail: .medium)

var configuration = PhotogrammetrySession.Configuration()
configuration.sampleOverlap = .normal
configuration.sampleOrdering = .unordered
configuration.featureSensitivity = .normal
configuration.isObjectMaskingEnabled = false

guard let session = try? PhotogrammetrySession(input: pathToImages, 
                                       configuration: configuration)
else { return }

var subscriptions = Set<AnyCancellable>()

session.output.receive(on: DispatchQueue.global())
              .sink(receiveCompletion: { _ in
                  // errors
              }, receiveValue: { _ in
                  // output
              }) 
              .store(in: &subscriptions)

do {
    try session.process(requests: [request])
} catch {
    print("Can't process the images:", error)
}
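
For reference, PhotogrammetrySession also delivers its progress and results through its outputs async sequence; a sketch of consuming it without Combine could look like this:

// A sketch of reading the session's results via the outputs async sequence,
// as an alternative to the Combine subscription above
Task {
    do {
        for try await output in session.outputs {
            switch output {
            case .requestProgress(_, let fractionComplete):
                print("Progress:", fractionComplete)
            case .requestComplete(_, let result):
                print("Finished request:", result)   // e.g. the written .usdz file
            case .requestError(_, let error):
                print("Request failed:", error)
            case .processingComplete:
                print("Processing is complete")
            default:
                break
            }
        }
    } catch {
        print("Output stream error:", error)
    }
}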

A complete version of the code that allows you to create a USDZ model from a series of shots can be found inside this sample app.

