What are the supported pixel buffer input formats for the processFrame and enqueueCameraFrame SDK methods? I’m getting an “invalid pixel format” error with 420f capture straight from the camera (I would hope enqueueCameraFrame would work with that?).
I am also getting “Invalid image format!” from the default 420f format coming from my iPad camera. What format should we be using instead?
I’m no longer seeing the error after converting my pixel buffer to kCVPixelFormatType_32BGRA … however, frameAvailable is not being called. Any tips here would be greatly appreciated!
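For reference, one way to avoid the manual conversion step entirely is to ask AVFoundation for BGRA frames up front. A minimal sketch using standard AVCaptureVideoDataOutput (the delegate and queue are placeholders for your own objects):

```swift
import AVFoundation

// Request BGRA frames directly from the capture pipeline so no manual
// conversion from 420f (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
// is needed before handing frames to the SDK.
func makeBGRAVideoOutput(delegate: AVCaptureVideoDataOutputSampleBufferDelegate,
                         queue: DispatchQueue) -> AVCaptureVideoDataOutput {
    let output = AVCaptureVideoDataOutput()
    output.videoSettings = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]
    output.setSampleBufferDelegate(delegate, queue: queue)
    return output
}
```

Add the returned output to your AVCaptureSession with session.addOutput(_:) before starting the session.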
I’ve also tried using processFrameAndReturn; however, this function does not seem to properly set outputBuffer. I’ve tried it in both Swift and Objective-C.
The current workaround is to use version 5.4.1 with initializeOffscreen and consume the processed frames. (Note, however, that you may see thread-priority warnings [in purple] in Xcode. The app may not crash during use, but I can’t rule out performance issues.)
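For anyone following along, the offscreen path looks roughly like this. This is only a sketch against the 5.4.1 API as discussed in this thread; setLicenseKey and the exact signatures are assumptions to verify against the SDK headers:

```swift
import DeepAR

// Offscreen mode: no ARView; processed frames come back via the delegate.
let deepAR = DeepAR()
deepAR.setLicenseKey("your_license_key")  // placeholder key
deepAR.delegate = self                    // an object conforming to DeepARDelegate
deepAR.initializeOffscreen(withWidth: 1280, height: 720)

// Feed camera frames in (BGRA worked for the posters above):
// deepAR.processFrame(pixelBuffer, mirror: false)

// Processed frames then arrive in the delegate:
// func frameAvailable(_ sampleBuffer: CMSampleBuffer!) { ... }
```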
The second method is to use the MetalView or, unless there is a special reason not to, the UIView created by DeepAR, specifically via the createARView function, for rendering. This seems to be the best option at the moment, since it lets you use the latest DeepAR SDK.
Personally, the second method also feels faster to me.
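For reference, the createARView route can be as small as this (a sketch; `deepAR` is assumed to be an already-licensed DeepAR instance, and the exact factory-method signature should be checked against the SDK headers):

```swift
// Let DeepAR create and manage its own rendering UIView.
// `deepAR` is assumed to be an initialized DeepAR instance.
let arView = deepAR.createARView(withFrame: view.bounds)
view.addSubview(arView)
```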
Essentially, the camera delegate delivers a new frame each time, and we run it through DeepAR before displaying it on screen. So there is a passive path where the frame received in frameAvailable is shown on screen each time a new frame arrives, via the MetalView’s draw function.
By feeding the sample buffer received from the camera into enqueueCameraFrame or processFrame, the result is automatically rendered through the ARView that was registered initially via createARView.
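Concretely, that wiring happens in the camera’s sample-buffer delegate. A sketch, assuming `deepAR` is an already-initialized instance whose ARView was created via createARView, and assuming enqueueCameraFrame takes the same mirror: parameter as the processFrame call quoted elsewhere in this thread:

```swift
import AVFoundation

// AVCaptureVideoDataOutputSampleBufferDelegate callback: hand each camera
// frame to DeepAR, which renders the processed result into the ARView
// registered earlier. `deepAR` is assumed to be set up elsewhere.
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    deepAR.enqueueCameraFrame(sampleBuffer, mirror: false)
}
```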
In my opinion, there are two main options here: offscreen and createARView. By default you use createARView, and the reasons to use offscreen instead are:
1. When each frame needs the developer’s own filters or additional processing beyond what DeepAR applies.
2. To minimize resource usage and improve performance.
However, since version 5.4.1 still has issues, the second point is not certain. Even so, if the first point applies, it makes sense to use offscreen for direct control; otherwise, it’s appropriate to use createARView as-is.
Thanks! 5.4.1 is working for me using offscreen and processFrame; however, when I call switchEffect, the effects are not applied. I am trying different things, but any thoughts are appreciated!
And also, are you using processFrame to send frames and then displaying them on screen through frameAvailable? I suspect the frames coming from the camera are being rendered directly on screen, so the processed [= with DeepAR effects] images are essentially being discarded.
Please try converting the path of the DeepAR filter as follows:
extension String {
    /// Returns the device's internal path for a file name (with extension).
    var path: String? {
        return Bundle.main.path(forResource: self, ofType: nil)
    }
}
and for the slot, try changing “effect” to “face” or “background”.
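Putting the path helper and the slot together, a switchEffect call might look like this (a sketch; the withSlot:path: signature is an assumption to check against the SDK headers, and “myFilter.deepar” is a placeholder for an effect file bundled with your app):

```swift
// "myFilter.deepar" is a placeholder effect file bundled with the app;
// the String.path extension above resolves it to a bundle path.
if let effectPath = "myFilter.deepar".path {
    deepAR.switchEffect(withSlot: "effect", path: effectPath)
}
```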
Also, before executing deepAR.processFrame(pixelBuffer, mirror: false), try converting the pixelBuffer to an image and saving it to the gallery. If the applied filter does not appear at that point, there may be another problem.
func imageFromSampleBuffer(sampleBuffer: CMSampleBuffer) -> UIImage? {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()
        if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
            return UIImage(cgImage: cgImage)
        }
    }
    return nil
}
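To actually write the converted image to the gallery for inspection, plain UIKit works; note that this requires the NSPhotoLibraryAddUsageDescription key in Info.plist:

```swift
import UIKit

// Debug helper: dump a camera frame to the photo library so you can
// inspect whether the filter was applied at this stage. Uses the
// imageFromSampleBuffer(sampleBuffer:) function above.
func saveFrameToGallery(_ sampleBuffer: CMSampleBuffer) {
    if let image = imageFromSampleBuffer(sampleBuffer: sampleBuffer) {
        // Completion target/selector omitted for brevity.
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
    }
}
```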
Also, I’m curious where you are applying the sample buffer received in frameAvailable?
As for #2, I didn’t try it but I don’t think the filter would apply if the image was generated before processFrame?
To answer your last question: I am passing the sample buffer received in frameAvailable through further processing before it is rendered on screen. This is a “photo booth”-style app that already has several processing options built in, such as exposure, brightness, zoom, etc.
Almost everything is working really nicely now! Just one more question: any idea why processFrame might occasionally return an all-black image? It only happens during a photo capture and, honestly, may not be DeepAR-related. Just thought I’d float it here in case you have an idea off the top of your head.
Really appreciate all this help! And no worries if you have no idea regarding the black image … I’ll figure it out eventually.
Usually that happens if the input frame itself is black or empty.
If it’s consistently the first frame that does this, you can try changing this line:
deepAR.initializeOffscreen(withWidth: 1, height: 1)
to your most likely input resolution (e.g. deepAR.initializeOffscreen(withWidth: 1280, height: 720)) to see if that helps.
As @olly suggested you can try saving your input frames as images to check their content.