iOS implementation with existing camera setup

Hello good people,
I’m an iOS developer and I want to integrate this SDK into my application.
I tested every feature I wanted from DeepAR in the sample app and I’m happy to work with it.
I have some questions.

  1. My app runs on iPad, and my camera setup lets the user choose between builtInWideAngleCamera and builtInUltraWideCamera for the front camera. Does DeepAR work with that?
  2. And the main question: how do I integrate the SDK with my already working camera, i.e. without using the SDK’s built-in CameraController? Can someone help me with this?

Hi, you can write a custom camera controller that wraps your existing setup.

There is an example on our GitHub. It’s a bit older, but you should be able to use it as a reference.
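The essential piece is forwarding your session’s frames to DeepAR. A minimal sketch, assuming the DeepAR Swift API used in the sample apps (`DeepAR`, `enqueueCameraFrame(_:mirror:)`); check the names against the SDK version you ship:

```swift
import AVFoundation
import DeepAR

// Minimal sketch: feed frames from your own AVCaptureSession into DeepAR.
// DeepAR processes each raw camera frame and renders into its ARView.
final class FrameForwarder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let deepAR: DeepAR

    init(deepAR: DeepAR) {
        self.deepAR = deepAR
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        deepAR.enqueueCameraFrame(sampleBuffer, mirror: true)
    }
}
```

Set an instance of this as the sample buffer delegate of your `AVCaptureVideoDataOutput`, on a dedicated serial queue.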

I managed to get the implementation working the way shown below. I see the DeepAR watermark, but when I try to change the effect, nothing happens :confused:

import UIKit
import AVFoundation
import DeepAR

private enum Effects: String, CaseIterable {
    case four_Faces_Alian = "FourFacesAlian.deepar"
}

protocol CameraSessionDelegate: AnyObject {
    func didTakePhoto(_ photo: UIImage)
}

protocol CameraSession {
    func checkCameraPermissions(completion: @escaping () -> ())
    func installCamera(on view: UIView)
    func takePhoto()
    func endSession()
    func startSession()
    func switchCamera()
    func previouseEffect()
    func nextEffect()
    var delegate: CameraSessionDelegate? { get set }
    var deepAR: DeepAR! { get set }
}

class CameraSessionImpl: NSObject, CameraSession {
    //MARK: - Public Properties
    
    public var deepAR: DeepAR!

    //MARK: - Private Properties
    
    private var captureSession: AVCaptureSession?
    private var stillImageOutput: AVCaptureVideoDataOutput?
    private var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    
    private var effectIndex: Int = 0
    private var effectPaths: [String?] {
        return Effects.allCases.map { $0.rawValue.path }
    }
    
    private var videoDataOutputQueue: DispatchQueue = DispatchQueue(label: "ai.deepar.videoqueue")
    
    private var currentDevice: AVCaptureDevice? {
        guard let inputs = captureSession?.inputs else { return nil }
        for input in inputs {
            if let deviceInput = input as? AVCaptureDeviceInput {
                return deviceInput.device
            }
        }
        return nil
    }
    
    //MARK: - Delegate

    weak var delegate: CameraSessionDelegate?
    
    //MARK: - Public Methods
    
    func checkCameraPermissions(completion: @escaping () -> ()) {
        switch AVCaptureDevice.authorizationStatus(for: .video) {
        case .notDetermined:
            // Ask the user for camera access
            AVCaptureDevice.requestAccess(for: .video) { granted in
                guard granted else { return }
                DispatchQueue.main.async {
                    completion()
                }
            }
        case .restricted, .denied:
            break
        case .authorized:
            completion()
        @unknown default:
            break
        }
    }

    func installCamera(on view: UIView) {
        let captureSession = AVCaptureSession()
        captureSession.sessionPreset = .photo
        self.captureSession = captureSession

        guard let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front)
        else {
            assertionFailure("Unable to access front camera!")
            return
        }

        do {
            let input = try AVCaptureDeviceInput(device: frontCamera)

            let stillImageOutput = AVCaptureVideoDataOutput()
            stillImageOutput.alwaysDiscardsLateVideoFrames = true
            stillImageOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
            stillImageOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)

            // Check the same instances that are about to be added.
            if captureSession.canAddInput(input) && captureSession.canAddOutput(stillImageOutput) {
                self.stillImageOutput = stillImageOutput
                captureSession.addInput(input)
                captureSession.addOutput(stillImageOutput)
            }

            // startRunning() blocks, so start the session off the main thread.
            // videoPreviewLayer is unused here; DeepAR's ARView renders the frames.
            DispatchQueue.global(qos: .userInitiated).async { [weak self] in
                self?.captureSession?.startRunning()
            }
        }
        catch let error {
            debugPrint("Unable to initialize front camera: \(error.localizedDescription)")
        }
    }
    
    func switchCamera() {
        guard let session = self.captureSession else { return }
        
        guard let ultraWideDevice = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .front),
              let wideDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
              let currentDevice
        else {
            print("Cameras Not available")
            return
        }
        
        session.beginConfiguration()
        
        // Remove existing inputs
        session.inputs.forEach { session.removeInput($0) }
        
        // Add the new input
        do {
            if currentDevice == wideDevice {
                let input = try AVCaptureDeviceInput(device: ultraWideDevice)
                if session.canAddInput(input) {
                    session.addInput(input)
                } else {
                    print("Could not add input to session")
                }
            } else {
                let input = try AVCaptureDeviceInput(device: wideDevice)
                if session.canAddInput(input) {
                    session.addInput(input)
                } else {
                    print("Could not add input to session")
                }
            }
        } catch {
            print("Error setting up camera input: \(error.localizedDescription)")
        }
        
        session.commitConfiguration()
    }
    
    func previouseEffect() {
        effectIndex = (effectIndex - 1 < 0) ? (effectPaths.count - 1) : (effectIndex - 1)
        deepAR.switchEffect(withSlot: "face", path: effectPaths[effectIndex])
    }
    
    func nextEffect() {
        effectIndex = (effectIndex + 1 > effectPaths.count - 1) ? 0 : (effectIndex + 1)
        deepAR.switchEffect(withSlot: "face", path: effectPaths[effectIndex])
    }

    func endSession() {
        captureSession?.stopRunning()
    }

    func startSession() {
        captureSession?.startRunning()
    }

    func takePhoto() {
        // NOTE: stillImageOutput is an AVCaptureVideoDataOutput, which has no
        // capturePhoto(with:delegate:). With DeepAR rendering the frames,
        // deepAR.takeScreenshot() is the usual way to capture the processed image.
        let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        settings.flashMode = .auto
//        self.stillImageOutput?.capturePhoto(with: settings, delegate: self)
    }
}

extension CameraSessionImpl: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let imageData = photo.fileDataRepresentation(),
              let image = UIImage(data: imageData)
        else { return }

        delegate?.didTakePhoto(image)
    }
}

extension CameraSessionImpl: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        deepAR?.enqueueCameraFrame(sampleBuffer, mirror: true)
    }
}

extension String {
    var path: String? {
        return Bundle.main.path(forResource: "Effects", ofType: nil)
    }
}
And here is the view that uses it:

import UIKit
import Firebase
import SnapKit
import DeepAR

protocol CameraViewDelegate: AnyObject {
    func didTook(photo: UIImage)
}

class CameraView: UIView {
    weak var delegate: CameraViewDelegate?
    private var cameraSession: CameraSession?
    private let cameraView: UIView = UIView()
    private var arView: UIView = UIView()
        
    private enum FOVChangeButtonImage: String {
        case fovOn = "arrow.up.backward.and.arrow.down.forward"
        case fovOff = "arrow.down.forward.and.arrow.up.backward"
    }
    
    private enum changeEffectImage: String {
        case right = "arrowshape.forward"
        case left = "arrowshape.backward"
    }
    
    private let changeFOVButton: UIButton = {
        let button = UIButton(frame: CGRect(x: 0, y: 0, width: 40, height: 40))
        button.layer.cornerRadius = 20
        button.layer.borderWidth = 2
        button.layer.borderColor = UIColor.white.cgColor
        button.layer.backgroundColor = UIColor.clear.cgColor
        button.imageView?.tintColor = UIColor.white
        button.setImage(UIImage(systemName: FOVChangeButtonImage.fovOn.rawValue), for: .normal)
        return button
    }()
    
    private let previousEffectButton: UIButton = {
        let button = UIButton(frame: CGRect(x: 0, y: 0, width: 40, height: 40))
        button.layer.cornerRadius = 20
        button.imageView?.tintColor = UIColor.white
        button.setImage(UIImage(systemName: changeEffectImage.left.rawValue), for: .normal)
        return button
    }()
    
    private let nextEffectButton: UIButton = {
        let button = UIButton(frame: CGRect(x: 0, y: 0, width: 40, height: 40))
        button.layer.cornerRadius = 20
        button.imageView?.tintColor = UIColor.white
        button.setImage(UIImage(systemName: changeEffectImage.right.rawValue), for: .normal)
        return button
    }()

    override init(frame: CGRect) {
        super.init(frame: frame)
        initializeDeepAR()
        layout()
    }
    
    convenience init() {
        self.init(frame: CGRect.zero)
    }
    
    override func layoutSubviews() {
        super.layoutSubviews()
        arView.frame = cameraView.bounds
        bringSubviewToFront(arView)
    }
    
    private func initializeDeepAR() {
        cameraSession = CameraSessionImpl()
        cameraSession?.delegate = self
        cameraSession?.deepAR = DeepAR()
        cameraSession?.deepAR.delegate = self
        cameraSession?.deepAR.setLicenseKey("240431cb2485b77c92178ebc7d8893d6bcf7e11e0a45dbb4737e1c114418bf9a385452285d9309fb")
        cameraSession?.deepAR.initializeOffscreen(withWidth: 1080, height: 720)
    }
    
    func checkCameraPermissions(completion: @escaping () -> ()) {
        self.cameraSession?.checkCameraPermissions {
            completion()
        }
    }
    
    func installCamera() {
        self.cameraSession?.installCamera(on: self.arView)
    }

    // This func starts the countdown label, which in turn notifies self to take the picture.
    func takePhoto() {
        cameraSession?.takePhoto()
    }
    
    func stopRecording() {
        self.cameraSession?.endSession()
    }

    func startRecording() {
        background {
            self.cameraSession?.startSession()
        }
    }
    
    private func layout(){
        arView = (cameraSession?.deepAR.createARView(withFrame: cameraView.bounds))!
        cameraView.addSubview(arView)
        
        addSubview(cameraView)
        cameraView.snp.makeConstraints {
            $0.left.right.top.bottom.equalToSuperview()
        }
        
        addSubview(changeFOVButton)
        changeFOVButton.addTarget(self, action: #selector(didChangeFOVOption), for: .touchUpInside)
        changeFOVButton.snp.makeConstraints {
            $0.centerX.equalToSuperview()
            $0.width.height.equalTo(40)
            $0.bottom.equalTo(cameraView).inset(20)
        }
        
        addSubview(previousEffectButton)
        previousEffectButton.addTarget(self, action: #selector(didChangeToPreviouseEffect), for: .touchUpInside)
        previousEffectButton.snp.makeConstraints {
            $0.centerY.equalTo(changeFOVButton)
            $0.width.height.equalTo(40)
            $0.bottom.equalTo(cameraView).inset(20)
            $0.left.equalTo(changeFOVButton.snp.right).inset(100)
        }
        
        addSubview(nextEffectButton)
        nextEffectButton.addTarget(self, action: #selector(didChangeToNextEffect), for: .touchUpInside)
        nextEffectButton.snp.makeConstraints {
            $0.centerY.equalTo(changeFOVButton)
            $0.width.height.equalTo(40)
            $0.bottom.equalTo(cameraView).inset(20)
            $0.right.equalTo(changeFOVButton.snp.left).inset(100)
        }
    }
    
    private func changeFOVButtonImage() {
        DispatchQueue.main.async {
            // Comparing UIImage instances with == is unreliable, so track the
            // toggle state on the button itself.
            self.changeFOVButton.isSelected.toggle()
            let imageName = self.changeFOVButton.isSelected
                ? FOVChangeButtonImage.fovOff.rawValue
                : FOVChangeButtonImage.fovOn.rawValue
            self.changeFOVButton.setImage(UIImage(systemName: imageName), for: .normal)
        }
    }
    
    @objc
    private func didChangeFOVOption() {
        cameraSession?.switchCamera()
        changeFOVButtonImage()
    }
    
    @objc
    private func didChangeToPreviouseEffect() {
        cameraSession?.previouseEffect()
    }
    
    @objc
    private func didChangeToNextEffect() {
        cameraSession?.nextEffect()
    }
    
    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
    }
}

extension CameraView: CameraSessionDelegate {
    func didTakePhoto(_ photo: UIImage) {
        self.delegate?.didTook(photo: photo)
//        deepAR?.takeScreenshot()
    }
}

// MARK: - DeepARDelegate

extension CameraView: DeepARDelegate {
    func didTakeScreenshot(_ screenshot: UIImage!) {
        self.delegate?.didTook(photo: screenshot)
    }
    
    func frameAvailable(_ sampleBuffer: CMSampleBuffer!) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let image = UIImage(pixelBuffer: pixelBuffer)
        
        DispatchQueue.main.async {
            self.arView.layer.contents = image?.cgImage
        }
    }
}

extension UIImage {
    convenience init?(pixelBuffer: CVPixelBuffer) {
        // Create a CIImage from the pixel buffer
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        
        // Create a CIContext
        let context = CIContext(options: nil)
        
        // Render the CIImage to a CGImage
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
            return nil
        }
        
        // Initialize the UIImage with the CGImage
        self.init(cgImage: cgImage)
    }
}

Do you have any idea why? :frowning:

Can you log the contents of Effects?
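Something like the following, dropped in where the session is set up, shows what `effectPaths` actually resolves to (a sketch using the `effectPaths` property from the code above; `FileManager.fileExists(atPath:)` is standard Foundation):

```swift
// Debug helper: print each entry of effectPaths and whether a file
// actually exists at that location.
for path in effectPaths {
    if let path = path {
        print(path, FileManager.default.fileExists(atPath: path) ? "- exists" : "- MISSING")
    } else {
        print("nil path - Bundle.main could not resolve the resource")
    }
}
```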

I tried a direct switch, without any enums or anything:

        deepar.switchEffect(
            withSlot: "effects",
            path: "/private/var/containers/Bundle/Application/7DC2CE90-B218-4402-B2A7-FBC8A8467657/PhotoAppTest.app/FourFacesAlian.deepar"
        )

and nothing happens after I call this function :confused:

I suspect either the path or the effect is incorrect. You’ll need to check the path yourself, but you can share the effect if you’d like me to check it.

I used your custom camera project and rewrote it in Swift. The zipped project contains a file called FourFacesAlian.deepar; this is a filter I modified and added to the project. Could you check why I can’t enable this effect, or any effect, on the Swift side :confused: ? When switchEffect is triggered, nothing happens :frowning: I removed all the old Objective-C files just to concentrate on the Swift ones.
CustomCameraSwift.zip (2.7 MB)

Hey Jelena :innocent:
I figured it out and got everything working :metal::metal::metal: The problem was in the paths :metal: Thanks :star_struck:
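For anyone hitting the same wall: the `String.path` helper in the code above looks up the fixed resource name "Effects" and ignores `self`, so every effect resolves to the same (wrong) path. A corrected sketch, assuming the `.deepar` files are copied into the main bundle as resources:

```swift
extension String {
    // Resolve a bundled resource name (e.g. "FourFacesAlian.deepar") to its
    // full path inside the app bundle, or nil if it was not copied in.
    var path: String? {
        return Bundle.main.path(forResource: self, ofType: nil)
    }
}
```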

1 Like