Hello, I hope you’re doing well.
I am trying to integrate the DeepAR SDK into my application, a random-calling app with both voice and video calls, so that users can apply face filters, masks, and background segmentation to preserve their anonymity while connecting with other people.
The platform I’m targeting is Android/Kotlin. I started from the demo project in your quickstart guide and adapted what I could from it, but I’ve run into a few blockers, so I came here to ask a few questions.
1: Why does `frameAvailable(Image image)` not get triggered unless I call `setOffscreenRendering`? I want to both show the DeepAR feed on screen and send those same frames to WebRTC. Is there a workaround, or is this by design? If it’s by design, could you let me know how I can feed those frames into a SurfaceView, for example, while simultaneously sending them over WebRTC?
2: Why does using the `DeepARImageFormat.YUV_420_888` format turn my preview green, or in some cases produce artifacts such as my feed being repeated several times, as if an Instagram-style filter had been applied? That format is what I currently use to send each frame to WebRTC.
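For context, this is roughly how I repack each camera `Image` into a tight I420 buffer before handing it on. It is a simplified sketch in plain Kotlin with my own helper names; the plane buffers and the `rowStride`/`pixelStride` values are the ones `Image.getPlanes()` reports on my device. I suspect my handling of the chroma planes’ `pixelStride` (which is 2 on my device, i.e. the U and V bytes are interleaved) or of the row padding is what causes the green tint and the repeated-feed artifacts:

```kotlin
import java.nio.ByteBuffer

// Copies one plane of a YUV_420_888-style Image into `out`,
// dropping row padding (rowStride) and de-interleaving (pixelStride).
fun copyPlane(
    src: ByteBuffer, rowStride: Int, pixelStride: Int,
    width: Int, height: Int, out: ByteBuffer
) {
    val row = ByteArray(rowStride)
    for (y in 0 until height) {
        src.position(y * rowStride)
        // The last row of a plane may be shorter than rowStride.
        val len = minOf(rowStride, src.remaining())
        src.get(row, 0, len)
        for (x in 0 until width) {
            out.put(row[x * pixelStride])
        }
    }
}

// Packs the three planes into a tightly packed I420 buffer (Y, then U, then V).
fun toTightI420(
    yBuf: ByteBuffer, yRowStride: Int,
    uBuf: ByteBuffer, uRowStride: Int, uPixelStride: Int,
    vBuf: ByteBuffer, vRowStride: Int, vPixelStride: Int,
    width: Int, height: Int
): ByteBuffer {
    val cw = width / 2   // chroma plane width
    val ch = height / 2  // chroma plane height
    val out = ByteBuffer.allocate(width * height + 2 * cw * ch)
    copyPlane(yBuf, yRowStride, 1, width, height, out)
    copyPlane(uBuf, uRowStride, uPixelStride, cw, ch, out)
    copyPlane(vBuf, vRowStride, vPixelStride, cw, ch, out)
    out.rewind()
    return out
}
```

If the strides need to be handled differently for the buffer that DeepAR (or WebRTC) expects, that would explain what I’m seeing, so any pointer there would help.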
3: Do you have any example apps or demos that do something similar to what I’m trying to achieve?
Thank you very much in advance. I hope my questions aren’t too much trouble, and that someone is able to answer them.