Building a Real-Time Camera Processing App [Swift] [AVFoundation]

Posted at 2024-02-19

Camera → processing → saved video


This article shows how to apply arbitrary processing to each frame coming from the camera and write the result out as a video.
(As the example processing, Core ML is used to give each frame an anime-style look.)

  1. Grab camera frames with AVCaptureSession

  2. Process each frame

  3. Feed the processed frames into AVAssetWriter

Following these steps turns the frames into a video.

Steps

Set the app's camera and microphone usage permissions in Info.plist.

Info.plist
Privacy - Camera Usage Description
Privacy - Microphone Usage Description
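
Besides adding these keys, it can help to request access explicitly before starting the session. A minimal sketch (the helper name is my own, not from the original article):

import AVFoundation

// Hypothetical helper: request camera and microphone access up front.
// The Info.plist keys above are still required; without them the app
// terminates the first time capture is attempted.
func requestCaptureAuthorization(completion: @escaping (Bool) -> Void) {
    AVCaptureDevice.requestAccess(for: .video) { videoGranted in
        AVCaptureDevice.requestAccess(for: .audio) { audioGranted in
            completion(videoGranted && audioGranted)
        }
    }
}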

Properties

We use the following properties.

// capture
private var captureSession = AVCaptureSession()
private var captureVideoOutput = AVCaptureVideoDataOutput()
private let captureAudioOutput = AVCaptureAudioDataOutput()

// writing
private var videoWriter: AVAssetWriter!
private var videoWriterVideoInput: AVAssetWriterInput!
private var videoWriterPixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!
private var videoWriterAudioInput: AVAssetWriterInput!

private var videoSize: CGSize = .zero
private var processing: Bool = false
private var currentSampleBuffer: CMSampleBuffer!
private let ciContext = CIContext()
private var isRecording = false
private var startTime: Date!            // set when inference starts (used to log inference time)
private var recordingStartTime: CMTime?

// preview
private var imageView = UIImageView()

Setting up capture of camera video and audio

Add the inputs and outputs to the AVCaptureSession, then start the session.
(Configuration items)

  • Video Input
  • Video Output
  • Audio Input
  • Audio Output

Note: this is also where we read the camera video's width and height (used later when writing the video).

private func setupCaptureSession() {
    
    do {
        // video input
        let captureDevice:AVCaptureDevice = AVCaptureDevice.default(for: .video)!
        let videoInput = try AVCaptureDeviceInput(device: captureDevice)
        if captureSession.canAddInput(videoInput) {
            captureSession.addInput(videoInput)
        }
        
        // the device reports landscape dimensions; swap them for portrait output
        let dimensions = CMVideoFormatDescriptionGetDimensions(captureDevice.activeFormat.formatDescription)
        videoSize = CGSize(width: CGFloat(dimensions.height), height: CGFloat(dimensions.width))

        // video output
        captureVideoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        captureVideoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        if captureSession.canAddOutput(captureVideoOutput) {
            captureSession.addOutput(captureVideoOutput)
        }
        
        // audio input
        if let audioDevice = AVCaptureDevice.default(for: .audio),
           let audioInput = try? AVCaptureDeviceInput(device: audioDevice) {
            if captureSession.canAddInput(audioInput) {
                captureSession.addInput(audioInput)
            }
        }

        // audio output
        captureAudioOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "audioQueue"))
        if captureSession.canAddOutput(captureAudioOutput) {
            captureSession.addOutput(captureAudioOutput)
        }

        // start session
        DispatchQueue.global(qos: .userInitiated).async {
            self.captureSession.startRunning()
        }
    } catch {
        print("Error setting up capture session: \(error.localizedDescription)")
    }
}

Make the class that uses these delegates (here, ViewController) conform to them.
The delegate methods will then be called every time a camera frame or a chunk of microphone audio arrives.

  • AVCaptureVideoDataOutputSampleBufferDelegate (video)
  • AVCaptureAudioDataOutputSampleBufferDelegate (audio)
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate

Video writer setup

Configure writing to the destination file as a video.

  • Create the destination URL
    In this example, a file named "video.mov" is created in the app's Documents directory.
  • Initialize the AVAssetWriter
    Pass it the destination URL.
  • Configure the video input
    Pass the camera video's width and height.
  • Configure the PixelBufferAdaptor
    Used to write the processed pixel buffers.
  • Configure the audio input
private func setupVideoWriter() {
    recordingStartTime = nil
    // set writing destination url
    guard let outputURL = try? FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true).appendingPathComponent("video.mov") else { fatalError() }
    try? FileManager.default.removeItem(at: outputURL)
    
    // initialize video writer
    videoWriter = try! AVAssetWriter(outputURL: outputURL, fileType: .mov)
    
    // set video input
    let videoOutputSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: videoSize.width,
        AVVideoHeightKey: videoSize.height
    ]
    videoWriterVideoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoOutputSettings)
    videoWriterVideoInput.expectsMediaDataInRealTime = true
    
    // use adaptor for write processed pixelbuffer
    videoWriterPixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterVideoInput, sourcePixelBufferAttributes: [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)])
    
    // set audio input
    let audioOutputSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey: 2,
        AVSampleRateKey: 44100,
        AVEncoderBitRateKey: 128000
    ]
    videoWriterAudioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioOutputSettings)
    videoWriterAudioInput.expectsMediaDataInRealTime = true
    
    if videoWriter.canAdd(videoWriterVideoInput) {
        videoWriter.add(videoWriterVideoInput)
    }
    if videoWriter.canAdd(videoWriterAudioInput) {
        videoWriter.add(videoWriterAudioInput)
    }
}

Capturing, processing, and writing frames

  • Capturing
    captureOutput is the delegate method called every time a camera frame or audio sample arrives.
    Inside it we grab the frame, process it, and write it to the video file.
    Audio is written here as well.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    
    if output is AVCaptureVideoDataOutput,
       !processing {
        // video
        // Proceed only when the previous frame has finished processing.
        processing = true
        
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            // this is a video frame; process it here
            startTime = Date()
            guard let processedCIImage = predict(pixelBuffer: pixelBuffer) else {
                processing = false
                return
            }
            
            // Update preview
            updatePreview(processedCIImage: processedCIImage)
            
            if isRecording {
                let presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                // write
                writeProcessedVideoFrame(processedCIImage: processedCIImage, presentationTimeStamp: presentationTimeStamp)
            }
        }
        processing = false
        
    } else if output is AVCaptureAudioDataOutput {
        // audio
        // Audio and video arrive on different timings, so they are handled separately.
        if isRecording,
           let recordingStartTime = recordingStartTime {
            if videoWriterAudioInput.isReadyForMoreMediaData,
               videoWriter.status == .writing {
                // Rebase the sample's timestamp onto the recording start time,
                // then append a retimed copy of the buffer.
                var copyBuffer: CMSampleBuffer?
                var count: CMItemCount = 1
                var info = CMSampleTimingInfo()
                CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: count, arrayToFill: &info, entriesNeededOut: &count)
                info.presentationTimeStamp = CMTimeSubtract(info.presentationTimeStamp, recordingStartTime)
                CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault, sampleBuffer: sampleBuffer, sampleTimingEntryCount: 1, sampleTimingArray: &info, sampleBufferOut: &copyBuffer)
                
                videoWriterAudioInput.append(copyBuffer!)
            }
        }
    }
}
  • Processing
    Process the CVPixelBuffer (the camera image).
    Replace this with any processing you like.
    Here, Core ML gives each frame an anime-style look.
// initialize the ML model
private var coreMLRequest: VNCoreMLRequest?

func setupMLModel() {
    let mlModelConfig = MLModelConfiguration()
    do {
        let coreMLModel: MLModel = try animeganHayao(configuration: mlModelConfig).model
        let vnCoreMLModel: VNCoreMLModel = try VNCoreMLModel(for: coreMLModel)
        let coreMLRequest: VNCoreMLRequest = VNCoreMLRequest(model: vnCoreMLModel)
        coreMLRequest.imageCropAndScaleOption = .scaleFill
        self.coreMLRequest = coreMLRequest
    } catch let error {
        fatalError(error.localizedDescription)
    }
}

private func predict(pixelBuffer: CVPixelBuffer) -> CIImage? {
    guard let coreMLRequest = coreMLRequest else {
        processing = false
        return nil
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right, options: [:])
    do {
        try handler.perform([coreMLRequest])
        guard let result: VNPixelBufferObservation = coreMLRequest.results?.first as? VNPixelBufferObservation else {
            processing = false
            return nil
        }
        let end = Date()
        let inferenceTime = end.timeIntervalSince(startTime)
        print(inferenceTime)
        let pixelBuffer: CVPixelBuffer = result.pixelBuffer
        let resultCIImage = CIImage(cvPixelBuffer: pixelBuffer)
        // resize the model output back to the camera's video size
        let resizedCIImage = resultCIImage.resize(as: videoSize)
        return resizedCIImage
    } catch {
        print("Vision error: \(error.localizedDescription)")
        processing = false
        return nil
    }
}

While a frame is being processed, the processing flag keeps the next frame from entering the pipeline.
Without it, processing work piles up and can block the whole app.
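
As a complementary measure (my note, not from the original article), AVCaptureVideoDataOutput can also drop frames by itself while the delegate queue is busy. This is its default behavior, shown here for clarity:

// AVCaptureVideoDataOutput discards late frames while the delegate queue is
// still busy (true is the default); this complements the `processing` flag.
captureVideoOutput.alwaysDiscardsLateVideoFrames = true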

  • Showing the preview to the user
    The processed frames are displayed in a UIImageView.
    With AVCaptureSession you would normally preview through AVCaptureVideoPreviewLayer, but processed frames cannot be displayed that way, so we preview by pushing each frame into a UIImageView. (The full code below first renders the CIImage into a CGImage, since UIImageView does not reliably draw a UIImage backed directly by a CIImage.)
func updatePreview(processedCIImage: CIImage) {
    let processedUIImage = UIImage(ciImage: processedCIImage)
    DispatchQueue.main.async {
        self.imageView.image = processedUIImage
    }
}
  • Recording

Call AVAssetWriter's startWriting and startSession(atSourceTime:) to start recording.

if videoWriter.status == .unknown {
    self.videoWriter.startWriting()
    self.videoWriter.startSession(atSourceTime: .zero)
}
isRecording = true
  • Writing

Append the processed CVPixelBuffer through the AVAssetWriterInputPixelBufferAdaptor.
The processed frame is a CIImage, so it is rendered back into a CVPixelBuffer first.

Because the AVAssetWriter session starts at time zero, each frame's CMTime has the recording-start CMTime subtracted from it before writing.
(The audio path does the same timestamp arithmetic.)

func writeProcessedVideoFrame(processedCIImage: CIImage, presentationTimeStamp: CMTime) {
    
    if self.videoWriter.status == .writing,
       self.videoWriterVideoInput.isReadyForMoreMediaData {
        // remember when recording started so later frames can be rebased to zero
        if recordingStartTime == nil {
            self.recordingStartTime = presentationTimeStamp
        }
        
        // CIImage -> CVPixelBuffer
        var processedPixelBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, videoWriterPixelBufferAdaptor.pixelBufferPool!, &processedPixelBuffer)
        guard let processedPixelBuffer = processedPixelBuffer else { return }
        self.ciContext.render(processedCIImage, to: processedPixelBuffer)
        
        // rebase this frame's timestamp onto the recording start time
        let time = CMTimeSubtract(presentationTimeStamp, recordingStartTime!)
        // write
        self.videoWriterPixelBufferAdaptor.append(processedPixelBuffer, withPresentationTime: time)
    }
}
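
To make the timestamp arithmetic concrete, here is a small illustration with made-up numbers (not from the article):

import CoreMedia

// Illustration only: if recording starts when the capture clock reads 100.0 s
// and a frame arrives at 100.5 s, the frame is written at 0.5 s, because the
// writer session was started at .zero.
let recordingStart = CMTime(seconds: 100.0, preferredTimescale: 600)
let framePTS = CMTime(seconds: 100.5, preferredTimescale: 600)
let writeTime = CMTimeSubtract(framePTS, recordingStart)
print(writeTime.seconds) // 0.5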

Stopping the recording and saving the video

Call .markAsFinished() on both the video and audio AVAssetWriterInputs, then call AVAssetWriter's .finishWriting to end the write.

When writing finishes, the completion handler is called; there we take the video at the destination URL configured on the AVAssetWriter and save it to the photo library.

if videoWriter.status == .writing {
    videoWriterVideoInput.markAsFinished()
    videoWriterAudioInput.markAsFinished()
    videoWriter.finishWriting { [weak self] in
        guard let self = self else { return }
        let outputURL = self.videoWriter.outputURL
        self.saveVideoToPhotoLibrary(url: outputURL)
    }
}
func saveVideoToPhotoLibrary(url: URL) {
    PHPhotoLibrary.requestAuthorization { status in
        if status == .authorized {
            PHPhotoLibrary.shared().performChanges({
                PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
            }) { saved, error in
                DispatchQueue.main.async {
                    if saved {
                        print("Video saved in photo library")
                        self.setupVideoWriter()
                    } else {
                        print("Failed saving: \(String(describing: error))")
                    }
                }
            }
        } else {
            print("Access to photo library was denided")
        }
    }
}
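
On iOS 14 and later you can also request add-only access, which is all that saving a video needs. A minimal sketch (my addition, not from the article); it requires the "Privacy - Photo Library Additions Usage Description" key in Info.plist:

import Photos

// iOS 14+: add-only access is enough for saving a video to the photo library.
PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
    if status == .authorized {
        // run the same performChanges block as above
    }
}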

Tips

・Depending on when you call AVAssetWriter's startWriting, the first few seconds of the recording can come out as frozen frames.
Calling it inside the captureSession callback, or via DispatchQueue.global, produced the freeze for me.

In my experience, the best practice is to call it when the user taps the record button.

・When running the app from Xcode, AVAssetWriter's startSession sometimes blocked the whole app for a few seconds. Building once and then launching the app directly on the device resolves this.

・AVAssetWriter must be re-initialized for every video.

・When using a Core ML model, initializing the model takes a few seconds the first time.
Once it has been initialized, it instantiates quickly the next time the app opens.
Running from Xcode pays the few-second initialization every run; building once and launching directly on the device resolves this.
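
To keep the UI responsive during that first initialization, one option (my suggestion, not from the article) is to build the Core ML request off the main thread:

// Sketch: build the VNCoreMLRequest off the main thread so that first-launch
// model compilation does not block the UI. setupCoreMLRequest() is from the
// full code below; coreMLRequest stays nil (and frames are skipped) until set.
DispatchQueue.global(qos: .userInitiated).async { [weak self] in
    self?.setupCoreMLRequest()
}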

Full code

//
//  RealTimeCameraInferenceViewController.swift
//
//  Created by 間嶋大輔 on 2024/01/21.
//

import UIKit
import Vision
import AVFoundation
import Photos

class RealTimeCameraInferenceViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {
    
    private var coreMLRequest: VNCoreMLRequest?
    
    // Capture
    private var captureSession = AVCaptureSession()
    private var captureVideoOutput = AVCaptureVideoDataOutput()
    private let captureAudioOutput = AVCaptureAudioDataOutput()
    
    // Video Writing
    private var videoWriter:AVAssetWriter!
    private var videoWriterVideoInput:AVAssetWriterInput!
    private var videoWriterPixelBufferAdaptor:AVAssetWriterInputPixelBufferAdaptor!
    private var videoWriterAudioInput:AVAssetWriterInput!
    
    private var videoSize: CGSize = .zero
    private var processing:Bool = false
    private var currentSampleBuffer:CMSampleBuffer!
    private let ciContext = CIContext()
    private var isRecording = false
    private var startTime:Date!
    private var recordingStartTime:CMTime?
    
    // View
    private var imageView = UIImageView()
    private var descriptionLabel = UILabel()
    private var recordButton = UIButton()
    
    override func viewDidLoad() {
        super.viewDidLoad()
        setupCoreMLRequest()
        setupView()
        setupCaptureSession()
        setupVideoWriter()
    }
    
    func setupCoreMLRequest() {
        let mlModelConfig = MLModelConfiguration()
        do {
            let coreMLModel:MLModel = try animeganHayao(configuration: mlModelConfig).model
            let vnCoreMLModel:VNCoreMLModel = try VNCoreMLModel(for: coreMLModel)
            let coreMLRequest: VNCoreMLRequest = VNCoreMLRequest(model: vnCoreMLModel)
            coreMLRequest.imageCropAndScaleOption = .scaleFill
            self.coreMLRequest = coreMLRequest
        } catch let error {
            fatalError(error.localizedDescription)
        }
    }
    
    private func predict(pixelBuffer: CVPixelBuffer)->CIImage? {
        guard let coreMLRequest = coreMLRequest else {
            processing = false
            return nil
        }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,orientation: .right, options: [:])
        do {
            try handler.perform([coreMLRequest])
            guard let result:VNPixelBufferObservation = coreMLRequest.results?.first as? VNPixelBufferObservation else {
                processing = false
                return nil}
            let end = Date()
            let inferenceTime = end.timeIntervalSince(startTime)
            print(inferenceTime)
            let pixelBuffer:CVPixelBuffer = result.pixelBuffer
            let resultCIImage = CIImage(cvPixelBuffer: pixelBuffer)
            let resizedCIImage = resultCIImage.resize(as: videoSize)
            return resizedCIImage
        } catch {
            print("Vision error: \(error.localizedDescription)")
            processing = false
            return nil
        }
    }
    
    // MARK: -Video
    
    @objc func recordVideo() {
        if isRecording {
            if videoWriter.status == .writing {
                videoWriterVideoInput.markAsFinished()
                videoWriterAudioInput.markAsFinished()
                videoWriter.finishWriting { [weak self] in
                    guard let self = self else { return }
                    let outputURL = self.videoWriter.outputURL
                    self.saveVideoToPhotoLibrary(url: outputURL)
                }
            }
        } else {
            if videoWriter.status == .unknown {
                self.videoWriter.startWriting()
                self.videoWriter.startSession(atSourceTime: .zero)
            }
        }
        isRecording.toggle()
    }
    
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        
        if output is AVCaptureVideoDataOutput,
           !processing {
            processing = true
            
            // Proceed to processing only when the previous frame has finished processing
            // process a video frame here
            
            if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) { // this is a video frame
                currentSampleBuffer = sampleBuffer
                startTime = Date()
                guard let processedCIImage = predict(pixelBuffer: pixelBuffer) else {
                    processing = false
                    return
                }
                
                // Update preview
                updatePreview(processedCIImage: processedCIImage)
                
                if isRecording {
                    
                    let presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                    writeProcessedVideoFrame(processedCIImage: processedCIImage, presentationTimeStamp: presentationTimeStamp)
                }
            }
            processing = false
        } else if output is AVCaptureAudioDataOutput {
            if isRecording ,
               let recordingStartTime = recordingStartTime{
                if videoWriterAudioInput.isReadyForMoreMediaData,
                   videoWriter.status == .writing {
                    var copyBuffer : CMSampleBuffer?
                    var count: CMItemCount = 1
                    var info = CMSampleTimingInfo()
                    CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: count, arrayToFill: &info, entriesNeededOut: &count)
                    info.presentationTimeStamp = CMTimeSubtract(info.presentationTimeStamp, recordingStartTime)
                    CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault,sampleBuffer: sampleBuffer,sampleTimingEntryCount: 1,sampleTimingArray: &info,sampleBufferOut: &copyBuffer)
                    
                    videoWriterAudioInput.append(copyBuffer!)
                }
            }
            
        }
    }
    
    func writeProcessedVideoFrame(processedCIImage: CIImage, presentationTimeStamp: CMTime) {
        
        // get the time of this video frame.
        if self.videoWriter.status == .writing,
           self.videoWriterVideoInput.isReadyForMoreMediaData == true {
            if recordingStartTime == nil {
                self.recordingStartTime = presentationTimeStamp
            }
            
            // CIImage -> CVPixelBuffer
            var processedPixelBuffer: CVPixelBuffer?
            CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, videoWriterPixelBufferAdaptor.pixelBufferPool!, &processedPixelBuffer)
            guard let processedPixelBuffer = processedPixelBuffer else { return }
            self.ciContext.render(processedCIImage, to: processedPixelBuffer)
            let time = CMTimeSubtract(presentationTimeStamp, recordingStartTime!)
            // write
            self.videoWriterPixelBufferAdaptor.append(processedPixelBuffer, withPresentationTime: time)
        }
    }
    
    func updatePreview(processedCIImage: CIImage) {
        // Render to a CGImage first; UIImageView does not reliably display a
        // UIImage that is backed directly by a CIImage.
        guard let cgImage = ciContext.createCGImage(processedCIImage, from: processedCIImage.extent) else { return }
        
        let processedUIImage = UIImage(cgImage: cgImage)
        DispatchQueue.main.async {
            
            self.imageView.image = processedUIImage
        }
    }
    
    private func setupCaptureSession() {
        
        do {
            // video input
            let captureDevice:AVCaptureDevice = AVCaptureDevice.default(for: .video)!
            let videoInput = try AVCaptureDeviceInput(device: captureDevice)
            if captureSession.canAddInput(videoInput) {
                captureSession.addInput(videoInput)
            }
            
            let dimensions = CMVideoFormatDescriptionGetDimensions(captureDevice.activeFormat.formatDescription)
            videoSize = CGSize(width: CGFloat(dimensions.height), height: CGFloat(dimensions.width))
            
            // video output
            captureVideoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
            captureVideoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
            if captureSession.canAddOutput(captureVideoOutput) {
                captureSession.addOutput(captureVideoOutput)
            }
            
            // audio input
            if let audioDevice = AVCaptureDevice.default(for: .audio),
               let audioInput = try? AVCaptureDeviceInput(device: audioDevice) {
                if captureSession.canAddInput(audioInput) {
                    captureSession.addInput(audioInput)
                }
            }
            
            // audio output
            captureAudioOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "audioQueue"))
            if captureSession.canAddOutput(captureAudioOutput) {
                captureSession.addOutput(captureAudioOutput)
            }
            // start session
            DispatchQueue.global(qos: .userInitiated).async {
                self.captureSession.startRunning()
            }
        } catch {
            print("Error setting up capture session: \(error.localizedDescription)")
        }
    }
    
    
    private func setupVideoWriter() {
        recordingStartTime = nil
        // set writing destination url
        guard let outputURL = try? FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true).appendingPathComponent("video.mov") else { fatalError() }
        try? FileManager.default.removeItem(at: outputURL)
        
        // initialize video writer
        videoWriter = try! AVAssetWriter(outputURL: outputURL, fileType: .mov)
        
        // set video input
        let videoOutputSettings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: videoSize.width,
            AVVideoHeightKey: videoSize.height
        ]
        videoWriterVideoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoOutputSettings)
        videoWriterVideoInput.expectsMediaDataInRealTime = true
        
        // use adaptor for write processed pixelbuffer
        videoWriterPixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterVideoInput, sourcePixelBufferAttributes: [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)])
        
        // set audio input
        let audioOutputSettings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVNumberOfChannelsKey: 2,
            AVSampleRateKey: 44100,
            AVEncoderBitRateKey: 128000
        ]
        videoWriterAudioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioOutputSettings)
        videoWriterAudioInput.expectsMediaDataInRealTime = true
        
        if videoWriter.canAdd(videoWriterVideoInput) {
            videoWriter.add(videoWriterVideoInput)
        }
        if videoWriter.canAdd(videoWriterAudioInput) {
            videoWriter.add(videoWriterAudioInput)
        }
    }
    
    func saveVideoToPhotoLibrary(url: URL) {
        PHPhotoLibrary.requestAuthorization { status in
            if status == .authorized {
                PHPhotoLibrary.shared().performChanges({
                    PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
                }) { saved, error in
                    DispatchQueue.main.async {
                        if saved {
                            print("Video saved in photo library")
                            self.setupVideoWriter()
                        } else {
                            print("Failed saving: \(String(describing: error))")
                        }
                    }
                }
            } else {
                print("Access denided to photo library")
            }
        }
    }
    
    // MARK: -View
    private func setupView() {
        imageView.frame = view.bounds
        descriptionLabel.frame = CGRect(x: 0, y: view.center.y, width: view.bounds.width, height: 100)
        recordButton.frame = CGRect(x: view.center.x - 50, y: view.bounds.maxY - 150, width: 100, height: 100)
        
        view.addSubview(imageView)
        view.addSubview(descriptionLabel)
        view.addSubview(recordButton)
        
        imageView.contentMode = .scaleAspectFit
        descriptionLabel.text = "Core ML model is initializing./n please wait a few seconds..."
        descriptionLabel.numberOfLines = 2
        descriptionLabel.textAlignment = .center
        recordButton.setImage(UIImage(systemName: "video.circle.fill"), for: .normal)
        recordButton.addTarget(self, action: #selector(recordVideo), for: .touchUpInside)
    }
    
    
    // completion pattern (alternative approach, kept for reference)
    
    //    private func inference(pixelBuffer: CVPixelBuffer) {
    //        guard let coreMLRequest = coreMLRequest else {
    //            processing = false
    //            return
    //        }
    //        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,orientation: .right, options: [:])
    //        do {
    //            try handler.perform([coreMLRequest])
    //        } catch {
    //            print("Vision error: \(error.localizedDescription)")
    //            processing = false
    //
    //        }
    //    }
    
    //    func coreMLRequestCompletionHandler(request:VNRequest?, error:Error?) {
    //        guard let result:VNPixelBufferObservation = coreMLRequest?.results?.first as? VNPixelBufferObservation else {
    //            processing = false
    //            return }
    //        let end = Date()
    //        //        let inferenceTime = end.timeIntervalSince(startTime)
    //        let pixelBuffer:CVPixelBuffer = result.pixelBuffer
    //        let resultCIImage = CIImage(cvPixelBuffer: pixelBuffer)
    //        let resizedCIImage = resultCIImage.resize(as: videoSize)
    //        let resultUIImage = UIImage(ciImage: resizedCIImage)
    //        if isRecording {
    //            writeProcessedVideoFrame(processedCIImage: resizedCIImage, sampleBuffer: currentSampleBuffer)
    //        }
    //
    //        DispatchQueue.main.async { [weak self] in
    //            self?.imageView.image = resultUIImage
    //        }
    //        processing = false
    //
    //    }
}
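
The code above calls resize(as:) on CIImage, which is not part of Core Image and is not shown in the article. Here is a minimal sketch of what such an extension could look like, assuming a plain affine scale:

import CoreImage

// Minimal sketch (an assumption, not the author's implementation): scale a
// CIImage to the given size with an affine transform.
extension CIImage {
    func resize(as size: CGSize) -> CIImage {
        let scaleX = size.width / extent.width
        let scaleY = size.height / extent.height
        return transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
    }
}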

Any processing you like can be recorded this way.

🐣


I'm a freelance engineer.
I write articles about AI, so please take a look at my profile if you're interested.

If you have requests like the following, feel free to get in touch:
you want to build an AI service, integrate AI into your business to improve efficiency, develop a smartphone app that uses AI,
build an AR application, or you want to build a smartphone app but don't know who to ask...

I can take on any of these at reasonable rates, with no middleman costs.

For work inquiries:
rockyshikoku@gmail.com

I build applications using machine learning and AR technology.
I share information about machine learning and AR.

X
Medium
GitHub
