"Invalid image format!" error help

Hi,

I’d like to echo what @rafaelnobrekz said here: "Invalid image format!" is printed when call processFrame - #4 by rafaelnobrekz

What are the supported pixel buffer input formats for the processFrame and enqueueCameraFrame SDK methods? I’m getting an invalid pixel format error for 420f capture straight from the camera (I would hope enqueueCameraFrame would work with that?)

I am also getting “Invalid image format!” from the default 420f format coming from my iPad camera. What format should we be using instead?

I’m no longer seeing the error after converting my pixel buffer to kCVPixelFormatType_32BGRA … however, frameAvailable is not being called. Any tips here would be greatly appreciated!

I believe this might be related: A very serious issue has occurred. I urgently request a prompt response. "initializeOffscreen dosent work with new version of deepAR"

Please help… this is an urgent issue. I also can’t seem to find any older releases of the deepAR iOS sdk to use as a workaround.

I’ve also tried using processFrameAndReturn; however, this function does not seem to properly set outputBuffer. I’ve tried it with both Swift and Objective-C.

As seen in the linked topic (A very serious issue has occurred. I urgently request a prompt response. "initializeOffscreen dosent work with new version of deepAR"), versions 5.2.0 and below currently cannot be downloaded. Furthermore, if you check the changelog here https://docs.deepar.ai/deepar-sdk/platforms/ios/changelog, a thread-priority issue was resolved starting from version 5.4.2. However, this comes with a side effect: when initializing offscreen, frames do not come out through frameAvailable as they should.

The current workaround is to use version 5.4.1 with initializeOffscreen and consume the encoded frames (note, however, that you may see thread-priority warnings [in purple] in Xcode; the app may not crash during use, but I cannot rule out performance issues).

The second method is to use the metal view or, unless there is a special reason not to, the UIView created by DeepAR, specifically via the createARView function, for rendering. This seems to be the best option at the moment, since it allows using the latest DeepAR SDK.

Personally, I feel that using the second method seems to be faster.

Thanks for the tips! I didn’t realize 5.4.1 was available – I’ll try that first.

My app has a lot of custom work done on the video capture front, so being able to process frame-by-frame is ideal, as opposed to using createARView.

Can you elaborate on what you mean by using the metalview?

Essentially, through the camera delegate we receive each new frame and run it through DeepAR before displaying it on screen. So there is a "passive" approach where the frame received in frameAvailable is drawn to the screen, via the metal view's draw function, each time a new frame arrives.

Alternatively, by feeding the sample buffer received from the camera into enqueueCameraFrame or processFrame, DeepAR automatically renders through the createARView view that was registered initially.
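
As a rough sketch of that passive path (treat createARView(withFrame:) and the mirror argument of enqueueCameraFrame as assumptions to check against your SDK version; camera-session setup is omitted):

import AVFoundation
import DeepAR
import UIKit

// Sketch: let DeepAR create and drive the on-screen view, and simply forward camera frames.
final class PassiveRenderingController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    let deepAR = DeepAR()

    override func viewDidLoad() {
        super.viewDidLoad()
        deepAR.setLicenseKey("my-key-here")
        // DeepAR owns the rendering; we only add its view to the hierarchy.
        let arView = deepAR.createARView(withFrame: view.bounds)
        view.addSubview(arView)
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Each camera frame goes straight to DeepAR, which renders the processed
        // result into the view created above.
        deepAR.enqueueCameraFrame(sampleBuffer, mirror: false)
    }
}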

========================================================================

For now, please match the version to 5.4.1 and follow the guide below. It should work.

========================================================================

In my opinion, it comes down to choosing between offscreen rendering and createARView. Basically, you use createARView by default, and the reasons for using offscreen are:

  1. When each frame needs not only DeepAR but also the developer’s own filters or additional processing.
  2. To minimize resources and enhance performance.

However, since version 5.4.1 still has an issue, the second point is not certain. Even so, if the first point applies to you, it makes sense to use offscreen for direct control; otherwise, it’s appropriate to use createARView as is.

Of course, this is just a personal opinion.


Thanks! 5.4.1 is working for me using offscreen and processFrame; however, when I call switchEffect, the effects are not being applied. I am trying different things, but any thoughts are appreciated!

Can you show me how you are using switchEffect?

Also, are you using processFrame to send frames and then displaying them on screen through frameAvailable? I suspect the frames coming from the camera are just being rendered directly on screen, so essentially the encoded images [i.e., with DeepAR effects] are being discarded.

====================================================================

The order is as follows (see the short sketch after the list):

  1. Receive a new frame from the cameraDelegate.
  2. Use the frame received in step 1 with deepAR.processFrame(CMSampleBuffer).
  3. Receive the newly encoded frame from DeepAR in frameAvailable.
  4. Display the new frame received on the screen.
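
In code, steps 3 and 4 might look roughly like this (just a sketch; FrameConsumer and displayLayer are placeholder names, and the exact frameAvailable signature may differ between SDK versions):

import AVFoundation
import DeepAR

// Sketch of steps 3 and 4: receive the processed frame back from DeepAR and display it.
final class FrameConsumer: NSObject, DeepARDelegate {
    // One simple way to put frames on screen; a metal view's draw call works the same way.
    let displayLayer = AVSampleBufferDisplayLayer()

    // Step 3: DeepAR calls this with the newly encoded (effect-applied) frame.
    func frameAvailable(_ sampleBuffer: CMSampleBuffer!) {
        guard let sampleBuffer = sampleBuffer else { return }
        // Step 4: display the processed frame.
        displayLayer.enqueue(sampleBuffer)
    }
}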

I think I am following that order. I see the DeepAR watermark but no effects.

I initialize deepAR like this:

deepAR = DeepAR()
deepAR.setLicenseKey("my-key-here")
deepAR.changeLiveMode(false)
deepAR.initializeOffscreen(withWidth: 1, height: 1)
deepAR.delegate = self

deepAR.switchEffect(withSlot: "effect", path: "viking_helmet.deepar")

Then, in my AVCaptureVideoDataOutputSampleBufferDelegate:

if let ciImage = sampleBuffer.ciImage {
    if deepAR.renderingResolution.width != ciImage.extent.size.width {
        deepAR.setRenderingResolutionWithWidth(Int(ciImage.extent.size.width), height: Int(ciImage.extent.size.height))
    }
    if let pixelBuffer = ciImage.toPixelBuffer(
        using: context,
        pixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ]
    ) {
        deepAR.processFrame(pixelBuffer, mirror: false)
    }
}

I’m converting the sample buffer to a pixel buffer, avoiding 420f format (see the start of this post).
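
For completeness, the toPixelBuffer helper used above is essentially a CIContext render into a freshly created BGRA buffer, roughly like this (my own extension, not part of the DeepAR SDK):

import CoreImage
import CoreVideo

extension CIImage {
    // Renders this image into a new CVPixelBuffer with the given attributes (32BGRA here).
    func toPixelBuffer(using context: CIContext,
                       pixelBufferAttributes: [String: Any]) -> CVPixelBuffer? {
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         Int(extent.width),
                                         Int(extent.height),
                                         kCVPixelFormatType_32BGRA,
                                         pixelBufferAttributes as CFDictionary,
                                         &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }
        // Render the CIImage (whatever its source format) into the BGRA buffer.
        context.render(self, to: buffer)
        return buffer
    }
}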

Then in frameAvailable I pass along the sampleBuffer to my app’s further filtering.

I’ve also tried calling switchEffect in several other places.

  1. Please try converting the path of the deepAR filter as follows:
extension String {
    // Returns the device's internal path for a file name (with extension)
    var path: String? {
        return Bundle.main.path(forResource: self, ofType: nil)
    }
}

and the slot should be changed from “effect” to something like “face” or “background” (see the short usage example after these two suggestions).

  2. Before executing deepAR.processFrame(pixelBuffer, mirror: false), try converting the pixelBuffer to an image and saving it to the gallery. If the applied filter does not appear in this part, there may be another problem.
func imageFromSampleBuffer(sampleBuffer: CMSampleBuffer) -> UIImage? {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()
        if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
            return UIImage(cgImage: cgImage)
        }
    }
    return nil
}
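
Putting the first suggestion together, the call would look something like this (a sketch, assuming viking_helmet.deepar is bundled with the app and using the String.path extension above):

if let effectPath = "viking_helmet.deepar".path {
    deepAR.switchEffect(withSlot: "face", path: effectPath)
}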

Also, I’m curious where you are applying the sample buffer received in frameAvailable?


Converting the path did the trick! Thank you!

As for #2, I didn’t try it but I don’t think the filter would apply if the image was generated before processFrame?

To answer your last question - I am applying the sample buffer received in frameAvailable to further processing before it is rendered onscreen. This app is a “photo booth”-style app and already has several processing options built-in such as exposure, brightness, zoom, etc.

Almost everything is working really nicely now! Just one more question: any idea why processFrame might result in an all-black image? This only happens occasionally during a photo capture and, honestly, may not be DeepAR related. Just thought I’d float it here in case you have any idea off the top of your head.

Really appreciate all this help! And no worries if you have no idea regarding the black image … I’ll figure it out eventually.

Usually that would happen if the input frame was black or empty.

If it’s consistently the first frame that does this, you can try changing this line, deepAR.initializeOffscreen(withWidth: 1, height: 1), to anticipate the most likely input resolution and see if that helps.
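
For example (1280x720 is only a guess at your capture resolution):

// Initialize offscreen at the resolution the camera is expected to deliver
// (1280x720 here is just a guess) instead of a 1x1 placeholder.
deepAR.initializeOffscreen(withWidth: 1280, height: 720)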

As @olly suggested you can try saving your input frames as images to check their content.
