
Creating a Camera OSS with AVCaptureMultiCamSession

Posted at 2019-12-18

Hello, I'm Nagata, a freelance developer. From the start of the new year I'll be working on a SwiftUI project. The key to winning projects is to keep sharpening your skills.

AVCaptureMultiCamSession

MovieSession

The recent APIs above can capture from both cameras at the same time. I started building with them in November and have released the result as OSS.

If you find it useful, a star on the repository would be much appreciated.

Features

  • Simultaneous front and back capture
  • Movable inner (picture-in-picture) view, with an adjustable display area
  • Capture with both views at the same aspect ratio
  • Portrait and landscape support (landscape right and landscape left)

Environment

  • Xcode 11.3
  • Swift 5.0
  • iOS 13+

Example in action

Key points

Putting every feature of the OSS into words would produce an enormous amount of text, and it would become hard to keep track of what you are looking at and where you are in the program, so I will concentrate on the key points.

The preview is composited by hand.

The preview is set up in BothSidesMixer.swift and BothSidesMixer.metal.

This is the relevant part of BothSidesMixer.swift.
If the code marked with the comment Fixed with memory measures were executed in the same method, it would run inside the high-frequency loop and the already high memory usage would climb even further, so it is called only when the flag changes, keeping memory growth to a minimum.

The getMtlSize method

It creates a new CVPixelBuffer.

The mix method

A CVPixelBuffer is created with CVPixelBufferPoolCreatePixelBuffer.
fullScreenTexture is the full-screen feed.
pipTexture is the inner feed.
outputTexture is the MTLTexture into which fullScreenTexture and pipTexture are drawn.
The write itself is done in BothSidesMixer.metal.
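
The outputPixelBufferPool that feeds this call is created elsewhere in the OSS and is not reproduced here. As a minimal sketch (my assumption; the actual configuration may differ), a pool matching the 32BGRA capture format could be built like this:

import CoreVideo

// Sketch only: the OSS may configure its pool differently.
func makeOutputPixelBufferPool(width: Int, height: Int) -> CVPixelBufferPool? {
    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
        kCVPixelBufferWidthKey as String: width,
        kCVPixelBufferHeightKey as String: height,
        kCVPixelBufferIOSurfacePropertiesKey as String: [:]
    ]
    var pool: CVPixelBufferPool?
    CVPixelBufferPoolCreate(kCFAllocatorDefault, nil, pixelBufferAttributes as CFDictionary, &pool)
    return pool
}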

inputImage is what gives the two views the same aspect ratio.
The trick for aligning the coordinates is a combination of CGAffineTransforms:

let inputImage = CIImage(cvImageBuffer: fullScreenPixelBuffer, options: nil).transformed(by: CGAffineTransform(scaleX: 0.5, y: 0.5).translatedBy(x: CGFloat(fullScreenTexture.width/2), y: 0))

fullScreenTexture is created as an MTLTexture.
The logic for building an MTLTexture is provided in makeTextureFromCVPixelBuffer.

guard let newfullScreenTexture = makeTextureFromCVPixelBuffer(pixelBuffer: pixelBuffer) else {
     print("AVCaptureMultiCamViewModel_mix")
     return nil
}
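
makeTextureFromCVPixelBuffer itself isn't reproduced in this article. A minimal sketch of how it can be built on CVMetalTextureCache, assuming a textureCache property created earlier with CVMetalTextureCacheCreate (not the exact OSS implementation):

import CoreVideo
import Metal

// Minimal sketch; assumes an optional `textureCache` property exists.
func makeTextureFromCVPixelBuffer(pixelBuffer: CVPixelBuffer?) -> MTLTexture? {
    guard let pixelBuffer = pixelBuffer, let textureCache = textureCache else { return nil }

    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)

    // Wrap the pixel buffer in a Metal texture without copying.
    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                           textureCache,
                                                           pixelBuffer,
                                                           nil,
                                                           .bgra8Unorm, // matches kCVPixelFormatType_32BGRA
                                                           width,
                                                           height,
                                                           0,
                                                           &cvTexture)
    guard status == kCVReturnSuccess,
          let unwrapped = cvTexture,
          let texture = CVMetalTextureGetTexture(unwrapped) else {
        CVMetalTextureCacheFlush(textureCache, 0)
        return nil
    }
    return texture
}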

Next, the parameters handed to BothSidesMixer.metal are prepared:

let pipPosition = SIMD2(Float(pipFrame.origin.x) * Float(fullScreenTexture.width),Float(pipFrame.origin.y) * Float(fullScreenTexture.height))

let pipSize = SIMD2(Float(pipFrame.size.width) * Float(pipTexture.width),Float(pipFrame.size.height) * Float(pipTexture.height))

var parameters = MixerParameters(pipPosition: pipPosition, pipSize: pipSize)

commandQueue drives the Metal program that writes to the GPU.
Looking at the commandEncoder.setTexture calls alongside the reporterMixer function in the Metal file makes this easy to follow, since the arguments correspond to each other.
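
For reference, a minimal sketch of how commandQueue and computePipelineState could be set up for the reporterMixer kernel (the OSS's actual initialization may differ):

import Metal

// Sketch of the one-time Metal setup; error handling trimmed.
let device = MTLCreateSystemDefaultDevice()!
let commandQueue = device.makeCommandQueue()

var computePipelineState: MTLComputePipelineState?
if let library = device.makeDefaultLibrary(),
   let kernel = library.makeFunction(name: "reporterMixer") {
    // Compiles the compute kernel defined in BothSidesMixer.metal.
    computePipelineState = try? device.makeComputePipelineState(function: kernel)
}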

Here is the full picture of the methods that build the composited frame.

    // Fixed with memory measures
    // Creates the shared CVPixelBuffer once; called only when the flag changes,
    // never from inside the per-frame loop.
    func getMtlSize(mtl: MTLTexture, sameRatio: Bool) {
        if sameRatio && pixelBuffer == nil {
            let options = [
                kCVPixelBufferCGImageCompatibilityKey as String: true,
                kCVPixelBufferCGBitmapContextCompatibilityKey as String: true,
                kCVPixelBufferIOSurfacePropertiesKey as String: [:]
                ] as [String : Any]

            cvReturn = CVPixelBufferCreate(kCFAllocatorDefault,
                                           Int(mtl.width),
                                           Int(mtl.height),
                                           kCVPixelFormatType_32BGRA,
                                           options as CFDictionary,
                                           &pixelBuffer)
        }
    }

    func mix(fullScreenPixelBuffer: CVPixelBuffer,
             pipPixelBuffer: CVPixelBuffer,
             _ sameRatio: Bool) -> CVPixelBuffer? {

        guard let outputPixelBufferPool = outputPixelBufferPool else { return nil }

        // Draw a fresh output buffer from the pool.
        var newPixelBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, outputPixelBufferPool, &newPixelBuffer)

        let outputPixelBuffer = newPixelBuffer
        let outputTexture = makeTextureFromCVPixelBuffer(pixelBuffer: outputPixelBuffer)
        guard var fullScreenTexture = makeTextureFromCVPixelBuffer(pixelBuffer: fullScreenPixelBuffer) else { return nil}
        guard let pipTexture = makeTextureFromCVPixelBuffer(pixelBuffer: pipPixelBuffer) else { return nil}

        if sameRatio {
            // Fixed with memory measures
            getMtlSize(mtl: fullScreenTexture, sameRatio: sameRatio)

            if cvReturn == kCVReturnSuccess {
                guard let pixelBuffer = pixelBuffer else {
                    print("AVCaptureMultiCamViewModel_mix")
                    return nil
                }
                // Scale the full-screen feed to half size and shift it so both
                // feeds share the same ratio, rendering into the reused buffer.
                let ciContext = CIContext()
                let inputImage = CIImage(cvImageBuffer: fullScreenPixelBuffer, options: nil).transformed(by: CGAffineTransform(scaleX: 0.5, y: 0.5).translatedBy(x: CGFloat(fullScreenTexture.width/2), y: 0))
                let colorSpace = CGColorSpaceCreateDeviceRGB()
                ciContext.render(inputImage, to: pixelBuffer, bounds: inputImage.extent, colorSpace: colorSpace)

                guard let newfullScreenTexture = makeTextureFromCVPixelBuffer(pixelBuffer: pixelBuffer) else {
                    print("AVCaptureMultiCamViewModel_mix")
                    return nil
                }
                fullScreenTexture = newfullScreenTexture
            }
        }

        // Convert the normalized pipFrame into pixel coordinates for the kernel.
        let pipPosition = SIMD2(Float(pipFrame.origin.x) * Float(fullScreenTexture.width), Float(pipFrame.origin.y) * Float(fullScreenTexture.height))
        let pipSize = SIMD2(Float(pipFrame.size.width) * Float(pipTexture.width), Float(pipFrame.size.height) * Float(pipTexture.height))
        var parameters = MixerParameters(pipPosition: pipPosition, pipSize: pipSize)

        guard let commandQueue = commandQueue,
            let commandBuffer = commandQueue.makeCommandBuffer(),
            let commandEncoder = commandBuffer.makeComputeCommandEncoder(),
            let computePipelineState = computePipelineState else {
                print("BothSidesMixer_computePipelineState")
                if let textureCache = textureCache { CVMetalTextureCacheFlush(textureCache, 0) }
                return nil
        }

        commandEncoder.setComputePipelineState(computePipelineState)
        commandEncoder.setTexture(fullScreenTexture, index: 0)
        commandEncoder.setTexture(pipTexture, index: 2)
        commandEncoder.setTexture(outputTexture, index: 3)
        commandEncoder.setBytes(UnsafeMutableRawPointer(&parameters), length: MemoryLayout<MixerParameters>.size, index: 0)

        // Size the grid so every pixel of the full-screen texture is covered.
        let width = computePipelineState.threadExecutionWidth
        let height = computePipelineState.maxTotalThreadsPerThreadgroup / width
        let threadsPerThreadgroup = MTLSizeMake(width, height, 1)
        let threadgroupsPerGrid = MTLSize(width: (fullScreenTexture.width + width - 1) / width,
                                          height: (fullScreenTexture.height + height - 1) / height,
                                          depth: 1)
        commandEncoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)

        commandEncoder.endEncoding()
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()

        return outputPixelBuffer
    }
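
A hypothetical call site for mix, for orientation only (latestBackPixelBuffer and latestFrontPixelBuffer are placeholder names, not OSS API; in the OSS the buffers arrive through the data-output delegate):

// Placeholder names, not OSS API.
if let fullBuffer = latestBackPixelBuffer,
   let pipBuffer = latestFrontPixelBuffer,
   let mixed = mixer.mix(fullScreenPixelBuffer: fullBuffer,
                         pipPixelBuffer: pipBuffer,
                         true) {
    // Hand `mixed` to an AVAssetWriterInputPixelBufferAdaptor for recording,
    // or draw it in the preview.
}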

The camera compositing class itself

These are the main methods of the compositing class. The two methods configureBackCamera and configureFrontCamera are indispensable for recording, and they unavoidably drive memory usage high: around 260 MB on an iPhone 11 Pro.

configureBackCamera configures the back camera.
configureFrontCamera configures the front camera.
configureMicrophone configures the audio.

For the details of each, reading the code itself is the clearest explanation.
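
Before the full listing, a hypothetical usage sketch (backPreviewLayer and frontPreviewLayer are assumed to be AVCaptureVideoPreviewLayers owned by the hosting view; the OSS's actual entry point may differ):

import AVFoundation

func startCapture(backPreviewLayer: AVCaptureVideoPreviewLayer,
                  frontPreviewLayer: AVCaptureVideoPreviewLayer) {
    // Multi-cam capture is only available on supported hardware.
    guard AVCaptureMultiCamSession.isMultiCamSupported else { return }

    let viewModel = BothSidesMultiCamViewModel()
    viewModel.configureBackCamera(backPreviewLayer, deviceType: .builtInWideAngleCamera)
    viewModel.configureFrontCamera(frontPreviewLayer, deviceType: .builtInWideAngleCamera)
    viewModel.configureMicrophone()
    viewModel.session.startRunning()
}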


final class BothSidesMultiCamViewModel: NSObject {

    var session                   = AVCaptureMultiCamSession()
    var aModel                    : BothSidesMultiCamSessionModel?

    var backCamera                             : AVCaptureDevice?
    var backDeviceInput                        : AVCaptureDeviceInput?
    var frontDeviceInput                       : AVCaptureDeviceInput?

    private var microphoneDeviceInput          : AVCaptureDeviceInput?

    private let backCameraVideoDataOutput      = AVCaptureVideoDataOutput()
    private let frontCameraVideoDataOutput     = AVCaptureVideoDataOutput()
    private let backMicrophoneAudioDataOutput  = AVCaptureAudioDataOutput()
    private let frontMicrophoneAudioDataOutput = AVCaptureAudioDataOutput()

    private let dataOutputQueue                = DispatchQueue(label: "data output queue")


    override init() {
        aModel = BothSidesMultiCamSessionModel()
        super.init()
        dataSet()
    }

    // Toggles the back camera's torch (flash) on and off.
    func pushFlash() {
        do {
            try backCamera?.lockForConfiguration()
            switch backCamera?.torchMode {
            case .off:
                backCamera?.torchMode = AVCaptureDevice.TorchMode.on
            case .on:
                backCamera?.torchMode = AVCaptureDevice.TorchMode.off
            default: break
            }
            backCamera?.unlockForConfiguration()
        } catch {
            print("not be used")
        }
    }

    func dataSet() {
        aModel?.dataOutput(backdataOutput: backCameraVideoDataOutput,
                           frontDataOutput: frontCameraVideoDataOutput,
                           backicrophoneDataOutput: backMicrophoneAudioDataOutput,
                           fronticrophoneDataOutput: frontMicrophoneAudioDataOutput)
    }

    func screenShot(call: @escaping() -> Void, orientation: UIInterfaceOrientation) { aModel?.screenShot(call: call, orientation: orientation) }

    // Removes the current back-camera input and output before switching device types.
    func changeDviceType() {
        guard let backDeviceInput = backDeviceInput else {
            print("AVCaptureMultiCamViewModel_session")
            return
        }
        session.removeInput(backDeviceInput)
        session.removeOutput(backCameraVideoDataOutput)
        backCamera = nil
    }

    func configureBackCamera(_ backCameraVideoPreviewLayer: AVCaptureVideoPreviewLayer?, deviceType: AVCaptureDevice.DeviceType) {
        session.beginConfiguration()
        defer {
            session.commitConfiguration()
        }

        backCamera = AVCaptureDevice.default(deviceType, for: .video, position: .back)

        guard let backCamera = backCamera else {
            print("BothSidesMultiCamViewModel_backCamera")
            return
        }

        // Camera support is limited.
        if deviceType == .builtInWideAngleCamera {
            do {
                try backCamera.lockForConfiguration()
                backCamera.focusMode = .continuousAutoFocus
                backCamera.unlockForConfiguration()
            } catch {
                print("not be used")
            }
        }

        do {
            backDeviceInput = try AVCaptureDeviceInput(device: backCamera)

            guard let backCameraDeviceInput = backDeviceInput,
                session.canAddInput(backCameraDeviceInput) else {
                    print("AVCaptureMultiCamViewModel_backCameraDeviceInput")
                    return
            }

            session.addInputWithNoConnections(backCameraDeviceInput)
        } catch {
            return
        }

        guard let backCameraDeviceInput = backDeviceInput,
            let backCameraVideoPort = backCameraDeviceInput.ports(for: .video,
                                                                  sourceDeviceType: backCamera.deviceType,
                                                                  sourceDevicePosition: backCamera.position).first else {
                                                                    print("AVCaptureMultiCamViewModel_backCameraVideoPort")
                                                                    return
        }

        guard session.canAddOutput(backCameraVideoDataOutput) else {
            print("AVCaptureMultiCamViewModel_session.canAddOutput")
            return
        }

        session.addOutputWithNoConnections(backCameraVideoDataOutput)
        backCameraVideoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
        backCameraVideoDataOutput.setSampleBufferDelegate(aModel, queue: dataOutputQueue)

        let backCameraVideoDataOutputConnection = AVCaptureConnection(inputPorts: [backCameraVideoPort], output: backCameraVideoDataOutput)
        guard session.canAddConnection(backCameraVideoDataOutputConnection) else {
            print("AVCaptureMultiCamViewModel_session.canAddConnection")
            return
        }

        session.addConnection(backCameraVideoDataOutputConnection)
        backCameraVideoDataOutputConnection.videoOrientation = .portrait

        guard let backCameraVideoPreviewLayer = backCameraVideoPreviewLayer else {
            print("AVCaptureMultiCamViewModel_backCameraVideoPreviewLayer")
            return
        }

        let backCameraVideoPreviewLayerConnection = AVCaptureConnection(inputPort: backCameraVideoPort, videoPreviewLayer: backCameraVideoPreviewLayer)
        guard session.canAddConnection(backCameraVideoPreviewLayerConnection) else {
            print("AVCaptureMultiCamViewModel_session.canAddConnection")
            return
        }
        session.addConnection(backCameraVideoPreviewLayerConnection)
    }

    func configureFrontCamera(_ frontCameraVideoPreviewLayer: AVCaptureVideoPreviewLayer?, deviceType: AVCaptureDevice.DeviceType) {
        session.beginConfiguration()
        defer {
            session.commitConfiguration()
        }

        guard let frontCamera = AVCaptureDevice.default(deviceType, for: .video, position: .front) else {
            print("AVCaptureMultiCamViewModel_frontCamera")
            return
        }

        do {
            frontDeviceInput = try AVCaptureDeviceInput(device: frontCamera)
            guard let frontCameraDeviceInput = frontDeviceInput,
                session.canAddInput(frontCameraDeviceInput) else {
                    print("AVCaptureMultiCamViewModel_frontCameraDeviceInput")
                    return
            }
            session.addInputWithNoConnections(frontCameraDeviceInput)
        } catch {
            return
        }

        guard let frontCameraDeviceInput = frontDeviceInput,
            let frontCameraVideoPort = frontCameraDeviceInput.ports(for: .video,
                                                                    sourceDeviceType: frontCamera.deviceType,
                                                                    sourceDevicePosition: frontCamera.position).first else {
                                                                        print("AVCaptureMultiCamViewModel_frontCameraVideoPort")
                                                                        return
        }
        guard session.canAddOutput(frontCameraVideoDataOutput) else {
            print("AVCaptureMultiCamViewModel_session.canAddOutput")
            return
        }

        session.addOutputWithNoConnections(frontCameraVideoDataOutput)
        frontCameraVideoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
        frontCameraVideoDataOutput.setSampleBufferDelegate(aModel, queue: dataOutputQueue)

        let frontCameraVideoDataOutputConnection = AVCaptureConnection(inputPorts: [frontCameraVideoPort], output: frontCameraVideoDataOutput)
        guard session.canAddConnection(frontCameraVideoDataOutputConnection) else {
            print("AVCaptureMultiCamViewModel_session.canAddConnection")
            return
        }

        session.addConnection(frontCameraVideoDataOutputConnection)
        frontCameraVideoDataOutputConnection.videoOrientation = .portrait
        frontCameraVideoDataOutputConnection.automaticallyAdjustsVideoMirroring = false
        frontCameraVideoDataOutputConnection.isVideoMirrored = true

        guard let frontCameraVideoPreviewLayer = frontCameraVideoPreviewLayer else {
            print("AVCaptureMultiCamViewModel_frontCameraVideoPreviewLayer")
            return
        }

        let frontCameraVideoPreviewLayerConnection = AVCaptureConnection(inputPort: frontCameraVideoPort, videoPreviewLayer: frontCameraVideoPreviewLayer)
        guard session.canAddConnection(frontCameraVideoPreviewLayerConnection) else {
            print("AVCaptureMultiCamViewModel_session.canAddConnection")
            return
        }

        session.addConnection(frontCameraVideoPreviewLayerConnection)
        frontCameraVideoPreviewLayerConnection.automaticallyAdjustsVideoMirroring = false
        frontCameraVideoPreviewLayerConnection.isVideoMirrored = true
    }

    func configureMicrophone() {

        session.beginConfiguration()
        defer {
            session.commitConfiguration()
        }

        guard let microphone = AVCaptureDevice.default(for: .audio) else {
            print("AVCaptureMultiCamViewModel_microphone")
            return
        }

        do {
            self.microphoneDeviceInput = try AVCaptureDeviceInput(device: microphone)

            guard let microphoneDeviceInput = microphoneDeviceInput,
                session.canAddInput(microphoneDeviceInput) else {
                    print("AVCaptureMultiCamViewModel_microphoneDeviceInput")
                    return
            }
            session.addInputWithNoConnections(microphoneDeviceInput)
        } catch {
            return
        }
        guard let microphoneDeviceInput = microphoneDeviceInput,
            let backMicrophonePort = microphoneDeviceInput.ports(for: .audio,
                                                                 sourceDeviceType: microphone.deviceType,
                                                                 sourceDevicePosition: .back).first else {
                                                                    print("AVCaptureMultiCamViewModel_microphoneDeviceInput")
                                                                    return
        }

        guard let frontMicrophonePort = microphoneDeviceInput.ports(for: .audio,
                                                                    sourceDeviceType: microphone.deviceType,
                                                                    sourceDevicePosition: .front).first else {
                                                                    print("AVCaptureMultiCamViewModel_frontMicrophonePort")
                                                                    return

        }

        guard session.canAddOutput(backMicrophoneAudioDataOutput) else {
            print("AVCaptureMultiCamViewModel_session.canAddOutput")
            return
        }

        session.addOutputWithNoConnections(backMicrophoneAudioDataOutput)
        backMicrophoneAudioDataOutput.setSampleBufferDelegate(aModel, queue: dataOutputQueue)

        guard session.canAddOutput(frontMicrophoneAudioDataOutput) else {
            print("AVCaptureMultiCamViewModel_session.canAddOutput")
            return
        }

        session.addOutputWithNoConnections(frontMicrophoneAudioDataOutput)
        frontMicrophoneAudioDataOutput.setSampleBufferDelegate(aModel, queue: dataOutputQueue)

        let backMicrophoneAudioDataOutputConnection = AVCaptureConnection(inputPorts: [backMicrophonePort], output: backMicrophoneAudioDataOutput)
        guard session.canAddConnection(backMicrophoneAudioDataOutputConnection) else {
            print("AVCaptureMultiCamViewModel_session.canAddConnection")
            return
        }

        session.addConnection(backMicrophoneAudioDataOutputConnection)

        let frontMicrophoneAudioDataOutputConnection = AVCaptureConnection(inputPorts: [frontMicrophonePort], output: frontMicrophoneAudioDataOutput)
        guard session.canAddConnection(frontMicrophoneAudioDataOutputConnection) else {
            print("AVCaptureMultiCamViewModel_session.canAddConnection")
            return
        }

        session.addConnection(frontMicrophoneAudioDataOutputConnection)
    }
}

That concludes my introduction to the OSS I built with AVCaptureMultiCamSession.

I hope everyone has a great year ahead.

Thank you very much for taking your valuable time to read this.
