"Invalid image format!" is printed when call processFrame

[iOS]
When a CVPixelBufferRef obtained from WebRTC is passed to processFrame, "Invalid image format!" is printed to the console and the frame is not processed.

Below is my sample code. Can anybody help me, please?
Thank you.

  1. Initializing

    - (void)initDeepAR {
        _deepAR = [[DeepAR alloc] init];
        [_deepAR setLicenseKey:@"mykey"];
        _deepAR.delegate = self;
        [_deepAR changeLiveMode:NO];
        [_deepAR initializeOffscreenWithWidth:640 height:480];
        _deepAREffectString = [[NSBundle mainBundle] pathForResource:@"galaxy_background.deepar" ofType:@""];
    }

  2. didInitialize is called

    - (void)didInitialize {
        [_deepAR startCaptureWithOutputWidth:640 outputHeight:480 subframe:CGRectMake(0.0, 0.0, 1.0, 1.0)];
    }

  3. Processing the frame

    - (void)capturer:(nonnull RTCVideoCapturer *)capturer didCaptureVideoFrame:(nonnull RTCVideoFrame *)frame {
        RTCCVPixelBuffer *rtcPixelBuf = (RTCCVPixelBuffer *)frame.buffer;
        CVPixelBufferRef pixelBufferRef = rtcPixelBuf.pixelBuffer;
        [_deepAR processFrame:pixelBufferRef mirror:NO]; // logs "Invalid image format!"
    }

Below is the CVPixelBufferRef information. Is 420v not supported?

<CVPixelBuffer 0x283778dc0 width=640 height=480 pixelFormat=420v iosurface=0x28027c170 planes=2 poolName=CoreVideo>
<Plane 0 width=640 height=480 bytesPerRow=640>
<Plane 1 width=320 height=240 bytesPerRow=640>
<attributes={
PixelFormatDescription =     {
    BitsPerComponent = 8;
    ComponentRange = VideoRange;
    ContainsAlpha = 0;
    ContainsGrayscale = 0;
    ContainsRGB = 0;
    ContainsYCbCr = 1;
    FillExtendedPixelsCallback = {length = 24, bytes = 0x0000000000000000e8a0c4be010000000000000000000000};
    IOSurfaceCoreAnimationCompatibility = 1;
    IOSurfaceOpenGLESFBOCompatibility = 1;
    IOSurfaceOpenGLESTextureCompatibility = 1;
    OpenGLESCompatibility = 1;
    PixelFormat = 875704438;
    Planes =         (
                    {
            BitsPerBlock = 8;
            BlackBlock = {length = 1, bytes = 0x10};
        },
                    {
            BitsPerBlock = 16;
            BlackBlock = {length = 2, bytes = 0x8080};
            HorizontalSubsampling = 2;
            VerticalSubsampling = 2;
        }
    );
};
} propagatedAttachments={
CVImageBufferColorPrimaries = "ITU_R_709_2";
CVImageBufferTransferFunction = "ITU_R_709_2";
CVImageBufferYCbCrMatrix = "ITU_R_601_4";
MetadataDictionary =     {
    ExposureTime = "0.033331";
    LuxLevel = 325;
    NormalizedSNR = "21.63113965390955";
    SNR = "31.59735072970156";
    SensorID = 1300;
};
} nonPropagatedAttachments={
}>
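As an aside on reading this dump: `PixelFormat = 875704438` is the same information as `pixelFormat=420v` on the first line; CoreVideo pixel formats are FourCC codes packed into a 32-bit integer (here, `kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange`). A small sketch in plain C to decode such a code:

```c
#include <stdint.h>

/* Unpack a CoreVideo OSType pixel format code into its 4-character
   FourCC string (most significant byte first). */
void fourcc(uint32_t code, char out[5]) {
    out[0] = (char)((code >> 24) & 0xFF);
    out[1] = (char)((code >> 16) & 0xFF);
    out[2] = (char)((code >> 8)  & 0xFF);
    out[3] = (char)( code        & 0xFF);
    out[4] = '\0';
}
```

For example, `fourcc(875704438, s)` yields `"420v"`, the value shown in the buffer dump above.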

After changing the pixel format to BGRA, the frame is processed normally.
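For reference, converting a 420v (NV12, bi-planar video-range YCbCr) buffer to BGRA means applying the BT.601 video-range matrix, which matches the `CVImageBufferYCbCrMatrix = "ITU_R_601_4"` attachment in the dump above. On iOS you would normally let the capturer output BGRA directly, or use an Accelerate/vImage conversion rather than a scalar loop; the sketch below is just plain C to make the per-pixel math concrete (the function name and layout are illustrative, not SDK API):

```c
#include <stdint.h>

static uint8_t clamp8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

/* Convert one NV12 (420v, video-range BT.601) frame to BGRA.
   yPlane:    width*height luma bytes (Y in [16, 235])
   cbcrPlane: interleaved Cb/Cr pairs, (width/2)*(height/2) pairs
   out:       width*height*4 bytes, B G R A order */
void nv12_to_bgra(const uint8_t *yPlane, const uint8_t *cbcrPlane,
                  uint8_t *out, int width, int height) {
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int Y  = yPlane[y * width + x] - 16;         /* remove video-range offset */
            int ci = (y / 2) * (width / 2) + (x / 2);    /* 2x2 chroma subsampling   */
            int Cb = cbcrPlane[ci * 2]     - 128;
            int Cr = cbcrPlane[ci * 2 + 1] - 128;
            uint8_t *p = out + (y * width + x) * 4;
            p[0] = clamp8((int)(1.164 * Y + 2.017 * Cb + 0.5));              /* B */
            p[1] = clamp8((int)(1.164 * Y - 0.392 * Cb - 0.813 * Cr + 0.5)); /* G */
            p[2] = clamp8((int)(1.164 * Y + 1.596 * Cr + 0.5));              /* R */
            p[3] = 255;                                                      /* A */
        }
    }
}
```

In a real app the plane base addresses and strides would come from `CVPixelBufferGetBaseAddressOfPlane` / `CVPixelBufferGetBytesPerRowOfPlane` (after locking the buffer), and the per-row stride may exceed `width`.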

I want to know the supported pixel formats. Please let me know.
@jelena @Zak

Hi, the iOS SDK doesn’t support 420v as an output format. Here’s a list
https://s3.eu-west-1.amazonaws.com/sdk.developer.deepar.ai/doc/ios/_deep_a_r_8h.html#aeb712c6c6c0a8af0dfd79f451ecb9277

@jelena what are the supported pixel buffer input formats for the processFrame and enqueueCameraFrame SDK methods? I'm getting an invalid pixel format for 420f captured straight from the camera (I would hope enqueueCameraFrame would work with that?)


I would expect the SDK to report a proper error through the onError delegate callback with accurate details, i.e. the expected supported formats and the format that was provided. A print statement alone is not enough.