
Implementing a foreground-blur post-process with Unity's 2D Renderer (2D Renderer series, final part)

Posted at 2025-04-16

Table of contents

2D Renderer (Part 1)

2D Renderer (Part 2)

Overview

To get DOF (depth of field) in 3D, you can simply add the built-in DOF post-process. The principle is also simple: extract the areas to blur from the DepthTexture and blur them.
In 2D, however, this doesn't work. Depth is written as the distance from the camera to the vertices of the rendered mesh, and a 2D object such as a SpriteRenderer is just a quad mesh whose texture alpha is not taken into account, so no meaningful depth gets written.

So I decided to build a post-process that blurs only the foreground objects and, while I was at it, can cut holes in the result with a mask.
In the process I also gained some insight into what the 2D Renderer can and cannot do from a RendererFeature, which I'll share here.

Environment

  • Unity 6000.0.23f1

Implementation

Method 1

At first I tried to implement it with the following approach.

  1. In the normal render loop, use filtering to draw everything except the foreground objects and the mask.
  2. Use a RendererFeature to add a custom RenderPass after the 2D render passes. There, render the foreground objects and the mask into their own temporary RenderTextures. The foreground texture needs four channels, since its alpha is required when compositing back; one channel is enough for the mask.
  3. Blur the foreground RenderTexture (adapting Unity's built-in DOF post-process is a good fit here), then composite it back into the render loop's RenderTexture using the foreground texture's alpha and the mask RenderTexture.

In the end, however, Method 1 turned out to be impossible.

Why it is impossible
In 3D, URP has a filtering mechanism that, independently of the camera's CullingMask, lets you specify which layers each pass draws. You can therefore filter the render loop through the RendererData settings and use a different filter in your custom pass, drawing different layers in each.
The 2D RendererData, however, has no such filtering. You can supply a LayerMask when drawing in your own pass, but the render loop's drawing cannot be filtered, so there is no way to skip the foreground objects in the render loop and draw them only in a custom pass. The camera's CullingMask is not per-RenderPass, nor can it be changed within a single render loop, so layers excluded by the CullingMask can never be drawn by the same camera.
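To make the contrast concrete: in your own pass you are free to build a FilteringSettings from any LayerMask; it is only the built-in 2D loop that hardcodes -1. Below is a minimal sketch of such a filtered custom pass in the Unity 6 RenderGraph style. The class name, the "Universal2D" shader tag choice, and drawing straight into the active color target are my own assumptions for illustration, and whether lit 2D objects would render correctly this way is exactly the open question discussed under the difficult factors.

```
// Hypothetical minimal pass: draws only the renderers on _layerMask into the
// active color target. Note that cullResults still respects the camera's
// CullingMask, which is why culled layers can never reappear here.
class FrontObjectLayerPass : ScriptableRenderPass
{
    class PassData { public RendererListHandle rendererList; }

    readonly LayerMask _layerMask;
    public FrontObjectLayerPass(LayerMask layerMask) => _layerMask = layerMask;

    public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
    {
        var renderingData = frameData.Get<UniversalRenderingData>();
        var cameraData = frameData.Get<UniversalCameraData>();
        var lightData = frameData.Get<UniversalLightData>();
        var resourceData = frameData.Get<UniversalResourceData>();

        using var builder = renderGraph.AddRasterRenderPass<PassData>("FrontObjectLayerPass", out var passData);

        var drawSettings = RenderingUtils.CreateDrawingSettings(
            new ShaderTagId("Universal2D"), renderingData, cameraData, lightData,
            cameraData.defaultOpaqueSortFlags);
        // Unlike the built-in 2D loop, a custom pass can filter by layer freely.
        var filterSettings = new FilteringSettings(RenderQueueRange.all, _layerMask);

        passData.rendererList = renderGraph.CreateRendererList(
            new RendererListParams(renderingData.cullResults, drawSettings, filterSettings));
        builder.UseRendererList(passData.rendererList);
        builder.SetRenderAttachment(resourceData.activeColorTexture, 0);

        builder.SetRenderFunc((PassData data, RasterGraphContext context) =>
            context.cmd.DrawRendererList(data.rendererList));
    }
}
```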

Where the Renderer filters

The 3D case

com.unity.render-pipelines.universal\Runtime\UniversalRenderer.cs, line 377
When the RenderPass that draws transparent objects is created, it receives the filtering LayerMask from the RendererData.

m_RenderTransparentForwardPass = new DrawObjectsPass(URPProfileId.DrawTransparentObjects, false, RenderPassEvent.BeforeRenderingTransparents, RenderQueueRange.transparent, data.transparentLayerMask, m_DefaultStencilState, stencilData.stencilReference);

com.unity.render-pipelines.universal\Runtime\Passes\DrawObjectsPass.cs, lines 85-106
You can see that a FilteringSettings is built from the LayerMask received at initialization and used to filter at draw time.

internal void Init(bool opaque, RenderPassEvent evt, RenderQueueRange renderQueueRange, LayerMask layerMask, StencilState stencilState, int stencilReference, ShaderTagId[] shaderTagIds = null)
{
    if (shaderTagIds == null)
        shaderTagIds = new ShaderTagId[] { new ShaderTagId("SRPDefaultUnlit"), new ShaderTagId("UniversalForward"), new ShaderTagId("UniversalForwardOnly") };

    m_PassData = new PassData();
    foreach (ShaderTagId sid in shaderTagIds)
        m_ShaderTagIdList.Add(sid);
    renderPassEvent = evt;
    m_FilteringSettings = new FilteringSettings(renderQueueRange, layerMask);
    m_RenderStateBlock = new RenderStateBlock(RenderStateMask.Nothing);
    m_IsOpaque = opaque;
    m_ShouldTransparentsReceiveShadows = false;
    m_IsActiveTargetBackBuffer = false;

    if (stencilState.enabled)
    {
        m_RenderStateBlock.stencilReference = stencilReference;
        m_RenderStateBlock.mask = RenderStateMask.Stencil;
        m_RenderStateBlock.stencilState = stencilState;
    }
}

The 2D case

com.unity.render-pipelines.universal\Runtime\2D\Rendergraph\Renderer2DRendergraph.cs, lines 534-535
GetFilterSettings obtains the FilteringSettings, which are passed to the RenderPass at draw time.

LayerUtility.GetFilterSettings(m_Renderer2DData, ref m_LayerBatches[i], cameraSortingLayerBoundsIndex, out var filterSettings);
m_RendererPass.Render(renderGraph, frameData, m_Renderer2DData, ref m_LayerBatches, i, ref filterSettings);

com.unity.render-pipelines.universal\Runtime\2D\Passes\Utility\LayerUtility.cs, lines 198-211
In GetFilterSettings, layerMask is always -1 regardless of the arguments, so you can see nothing is filtered by layer.

public static void GetFilterSettings(Renderer2DData rendererData, ref LayerBatch layerBatch, short cameraSortingLayerBoundsIndex, out FilteringSettings filterSettings)
{
    filterSettings = FilteringSettings.defaultValue;
    filterSettings.renderQueueRange = RenderQueueRange.all;
    filterSettings.layerMask = -1;
    filterSettings.renderingLayerMask = 0xFFFFFFFF;

    short upperBound = layerBatch.layerRange.upperBound;

    if (rendererData.useCameraSortingLayerTexture && cameraSortingLayerBoundsIndex >= layerBatch.layerRange.lowerBound && cameraSortingLayerBoundsIndex < layerBatch.layerRange.upperBound)
        upperBound = cameraSortingLayerBoundsIndex;

    filterSettings.sortingLayerRange = new SortingLayerRange(layerBatch.layerRange.lowerBound, upperBound);
}

The difficult part
I wrote earlier that the 2D Renderer has four light textures, but strictly speaking a batch is created for each SortingLayer used by the 2D lights, and (number of batches) × 4 light textures are created.
Each batch is then drawn with its own sortingLayerRange and its own four light textures.
The light textures are stored in Universal2DResourceData, the 2D Renderer's ContextItem¹, so the foreground objects would have to be drawn with the appropriate sortingLayerRange and light textures.
I only read the code, so I can't say this for certain, but with proper verification it may be possible to draw lit objects from a custom RenderPass as well.

The relevant files are:
com.unity.render-pipelines.universal\Runtime\2D\Rendergraph\Renderer2DRendergraph.cs
com.unity.render-pipelines.universal\Runtime\2D\Rendergraph\DrawRenderer2DPass.cs
com.unity.render-pipelines.universal\Runtime\2D\Rendergraph\DrawLight2DPass.cs

Method 2

Since Method 1 was impossible for the reasons above, I implemented the following approach instead.

  1. Split the rendering across three cameras: one for the foreground objects, one for the mask, and the main camera.
    The main camera draws everything except the foreground objects and the mask, as in Method 1, and is where the blurred foreground gets composited back. Give it the highest Priority so it always renders last.
    The foreground camera only draws the foreground objects, so set its CullingMask to show just the foreground layer and the 2D lights.²
    The mask camera draws only the mask.
  2. Assign temporary RenderTextures to the Output Textures of the foreground and mask cameras, and hand the rendered results to a custom RendererFeature via a custom VolumeComponent.
  3. After the main camera's 2D render passes, add a blit pass that blits the received RenderTextures into RenderGraph TextureHandles and stores them in a custom ContextItem.
  4. As in Method 1, blur the foreground RenderTexture, then composite it back into the render loop's RenderTexture using the foreground alpha and the mask RenderTexture.

Splitting the cameras achieves what Method 1 could not, but the light textures are created twice unnecessarily, so this is not optimal performance-wise; for now there is no way around it.
I also wrote a final step that draws a Shadow texture to hide the parts I don't want seen.
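For reference, the camera split from step 1 could be wired up in code roughly as below. This is only a sketch: the layer names ("FrontObject", "FrontMask", "Light2D") and the specific depth values are assumptions of mine, and in practice this can equally well be configured in the inspector.

```
// Hypothetical setup for the three cameras from step 1.
public class FrontCameraSetup : MonoBehaviour
{
    [SerializeField] Camera _mainCamera;
    [SerializeField] Camera _frontObjectCamera;
    [SerializeField] Camera _frontMaskCamera;

    void Awake()
    {
        int frontLayer = LayerMask.GetMask("FrontObject");
        int maskLayer  = LayerMask.GetMask("FrontMask");
        int lightLayer = LayerMask.GetMask("Light2D"); // 2D lights must stay visible (see footnote 2)

        // Foreground camera: foreground objects + 2D lights only.
        _frontObjectCamera.cullingMask = frontLayer | lightLayer;
        // Mask camera: mask objects only.
        _frontMaskCamera.cullingMask = maskLayer;
        // Main camera: everything else; highest priority so it renders last.
        _mainCamera.cullingMask = ~(frontLayer | maskLayer);
        _mainCamera.depth = 10f;        // Camera.depth backs URP's "Priority"
        _frontObjectCamera.depth = 0f;
        _frontMaskCamera.depth = 1f;
    }
}
```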

FrontObjectManager

This class assigns RenderTextures to the Output Textures of the foreground and mask cameras and passes them to the custom FrontObjectVolume.
A RenderTexture assigned as an Output Texture spits out errors if it has no depth, so give it a depth buffer.
Only the mask is at half resolution here, but since the foreground gets blurred anyway, lowering its resolution too would probably be fine.

Code
public class FrontObjectManager : MonoBehaviour
{
    [SerializeField] Camera _frontObjectCamera;
    [SerializeField] Camera _frontMaskCamera;
    [SerializeField] Camera _frontShadowCamera;
    [SerializeField] Volume _baseVolume;
    private RenderTexture _frontObjectRT;
    private RenderTexture _frontMaskRT;
    private RenderTexture _frontShadowRT;

    private void Start()
    {
        Init();
    }

    public void Init()
    {
        UpdateRenderTextures();
    }

    public void UpdateRenderTextures()
    {
        UpdateRenderTexture(ref _frontObjectRT, _frontObjectCamera, "FrontObjectTexture", false);
        UpdateRenderTexture(ref _frontMaskRT, _frontMaskCamera, "FrontMaskTexture", true);
        UpdateRenderTexture(ref _frontShadowRT, _frontShadowCamera, "ShadowTexture", false);
        SetVolume(_baseVolume, _frontObjectRT, _frontMaskRT, _frontShadowRT);
    }

    private void UpdateRenderTexture(ref RenderTexture currentRenderTexture, Camera targetCamera, string renderTextureName, bool isMask)
    {
        if (currentRenderTexture != null)
        {
            currentRenderTexture.Release();
            Destroy(currentRenderTexture);
        }
        var scale = isMask ? 2 : 1;
        currentRenderTexture = new RenderTexture(Screen.width / scale, Screen.height / scale, 24)
        {
            name = renderTextureName,
            graphicsFormat = isMask ? GraphicsFormat.R8_SNorm : GraphicsFormat.B8G8R8A8_SRGB,
            useMipMap = false,
            autoGenerateMips = false
        };
        currentRenderTexture.Create();

        targetCamera.targetTexture = currentRenderTexture;
    }

    private void SetVolume(Volume volume, RenderTexture frontObjectRT, RenderTexture frontMaskRT, RenderTexture shadowRT)
    {
        if (volume.profile.TryGet<FrontObjectVolume>(out var component))
        {
            component.frontObjectRT.value = frontObjectRT;
            component.frontMaskRT.value = frontMaskRT;
            component.frontShadowRT.value = shadowRT;
        }
    }
}

FrontObjectRendererFeature

A custom RendererFeature that adds BlitBufferObjectPass, which blits the foreground and mask RenderTextures, and RenderFrontObjectPass, which blurs the foreground and composites it back.
The RenderTexture behind an RTHandle that becomes a TextureHandle must be either color-only or depth-only, which is why I go to the trouble of blitting again. One of RenderGraph's annoying quirks.

For the blur I reused the Gaussian variant of Unity's DOF almost as-is and added the mask and write-back handling.
The Bokeh variant uses the alpha channel, which made the write-back impossible, so I couldn't adopt it.

Code
public enum FrontBufferType
{
    FrontObject,
    FrontMask,
    FrontShadow
}

// Renders the FrontObject separately and applies a Gaussian blur, etc.
public class FrontObjectRendererFeature : ScriptableRendererFeature
{
    private static readonly string k_FrontObjectBlur = "FrontObjectBlur";

    public class FrontObjectData : ContextItem, IDisposable
    {           
        readonly string m_FrontBlurTextureName = "FrontBlurTexture";
        readonly string m_FrontObjectTextureName = "FrontObjectTexture";
        readonly string m_FrontMaskTextureName = "FrontMaskTexture";
        readonly string m_FrontShadowTextureName = "FrontShadowTexture";
        RTHandle m_FrontObjectRT;
        RTHandle m_FrontMaskRT;
        RTHandle m_FrontShadowRT;
        RTHandle m_FrontBlurTexture;
        RTHandle m_FrontObjectTexture;
        RTHandle m_FrontMaskTexture;
        RTHandle m_FrontShadowTexture;
        TextureHandle m_FrontBlurTextureHandle;
        TextureHandle m_FrontObjectTextureHandle;
        TextureHandle m_FrontMaskTextureHandle;
        TextureHandle m_FrontShadowTextureHandle;
        
        static Vector4 scaleBias = new Vector4(1f, 1f, 0f, 0f);

        public void Init(RenderGraph renderGraph, ContextContainer frameData)
        {
            var cameraData = frameData.Get<UniversalCameraData>();
            var descriptor = cameraData.cameraTargetDescriptor;
            descriptor.graphicsFormat = GraphicsFormat.B8G8R8A8_SRGB;
            descriptor.msaaSamples = 1;
            descriptor.depthStencilFormat = GraphicsFormat.None;
            
            m_FrontObjectRT = RTHandles.Alloc(m_FrontObjectVolume.frontObjectRT.value);
            m_FrontMaskRT = RTHandles.Alloc(m_FrontObjectVolume.frontMaskRT.value);
            m_FrontShadowRT = RTHandles.Alloc(m_FrontObjectVolume.frontShadowRT.value);
            RenderingUtils.ReAllocateHandleIfNeeded(ref m_FrontBlurTexture, descriptor, FilterMode.Bilinear, TextureWrapMode.Clamp, name: m_FrontBlurTextureName);
            RenderingUtils.ReAllocateHandleIfNeeded(ref m_FrontObjectTexture, descriptor, FilterMode.Bilinear, TextureWrapMode.Clamp, name: m_FrontObjectTextureName);
            RenderingUtils.ReAllocateHandleIfNeeded(ref m_FrontShadowTexture, descriptor, FilterMode.Bilinear, TextureWrapMode.Clamp, name: m_FrontShadowTextureName);

            descriptor.width /= 2;
            descriptor.height /= 2;
            descriptor.graphicsFormat = GraphicsFormat.R8_UNorm;
            RenderingUtils.ReAllocateHandleIfNeeded(ref m_FrontMaskTexture, descriptor, FilterMode.Bilinear, TextureWrapMode.Clamp, name: m_FrontMaskTextureName);
            m_FrontBlurTextureHandle = renderGraph.ImportTexture(m_FrontBlurTexture);
            m_FrontObjectTextureHandle = renderGraph.ImportTexture(m_FrontObjectTexture);
            m_FrontMaskTextureHandle = renderGraph.ImportTexture(m_FrontMaskTexture);
            m_FrontShadowTextureHandle = renderGraph.ImportTexture(m_FrontShadowTexture);
        }

        public override void Reset()
        {
            m_FrontBlurTextureHandle = TextureHandle.nullHandle;
            m_FrontObjectTextureHandle = TextureHandle.nullHandle;
            m_FrontMaskTextureHandle = TextureHandle.nullHandle;
            m_FrontShadowTextureHandle = TextureHandle.nullHandle;
        }

        public void Dispose()
        {
            m_FrontObjectRT?.Release();
            m_FrontMaskRT?.Release();
            m_FrontShadowRT?.Release();
            m_FrontBlurTexture?.Release();
            m_FrontObjectTexture?.Release();
            m_FrontMaskTexture?.Release();
            m_FrontShadowTexture?.Release();
        }

        private class BlitPassData
        {
            public RTHandle source;
            public TextureHandle destination;
        }

        public void RecordBlit(RenderGraph renderGraph, ContextContainer frameData, FrontBufferType frontBufferType)
        {
            using (var builder = renderGraph.AddRasterRenderPass<BlitPassData>($"{nameof(BlitBufferObjectPass)} ({frontBufferType})", out var passData))
            {
                var resourceData = frameData.Get<UniversalResourceData>();

                var buffers = frontBufferType switch
                {
                    FrontBufferType.FrontObject => (m_FrontObjectRT, m_FrontObjectTextureHandle),
                    FrontBufferType.FrontMask => (m_FrontMaskRT, m_FrontMaskTextureHandle),
                    FrontBufferType.FrontShadow => (m_FrontShadowRT, m_FrontShadowTextureHandle),
                    _ => throw new NotImplementedException(),
                };

                passData.source = buffers.Item1;
                passData.destination = buffers.Item2;

                builder.SetRenderAttachment(passData.destination, 0);

                builder.SetRenderFunc((BlitPassData passData, RasterGraphContext rgContext) => 
                {
                    Blitter.BlitTexture(rgContext.cmd, passData.source, scaleBias, 0, false);
                });
            }
        }

        private class BlurPassData
        {
            // Setup
            public int downsample;
            public Vector3 cocParams;
            public bool highQualitySamplingValue;
            // Inputs
            public TextureHandle sourceTexture;
            public TextureHandle maskTexture;
            public TextureHandle shadowTexture;
            public Material material;
            // Pass textures
            public TextureHandle pingTexture;
            public TextureHandle pongTexture;
            public RenderTargetIdentifier[] multipleRenderTargets = new RenderTargetIdentifier[2];
            // Output textures
            public TextureHandle destination;
            public TextureHandle cameraColorTexture;
        };

        public void RecordRenderFrontObject(RenderGraph renderGraph, ContextContainer frameData, ref Material blurMaterial)
        {
            var resourceData = frameData.Get<UniversalResourceData>();
            var cameraData = frameData.Get<UniversalCameraData>();
            
            int downSample = 2;
            var descriptor = cameraData.cameraTargetDescriptor;
            int wh = descriptor.width / downSample;
            int hh = descriptor.height / downSample;

            // Pass Textures
            var pingTextureDesc = GetCompatibleDescriptor(descriptor, wh, hh, GraphicsFormat.B8G8R8A8_SRGB);
            var pingTexture = UniversalRenderer.CreateRenderGraphTexture(renderGraph, pingTextureDesc, "_PingTexture", true, FilterMode.Bilinear);
            var pongTextureDesc = GetCompatibleDescriptor(descriptor, wh, hh, GraphicsFormat.B8G8R8A8_SRGB);
            var pongTexture = UniversalRenderer.CreateRenderGraphTexture(renderGraph, pongTextureDesc, "_PongTexture", true, FilterMode.Bilinear);

            using (var builder = renderGraph.AddUnsafePass<BlurPassData>(k_FrontObjectBlur + "Pass", out var passData))
            {
                // Setup
                float maxRadius = m_FrontObjectVolume.maxRadius.value * (wh / 1080f);
                maxRadius = Mathf.Min(maxRadius, 3f);

                passData.downsample = downSample;
                passData.cocParams = new Vector2(m_FrontObjectVolume.influence.value, maxRadius);
                passData.highQualitySamplingValue = true;

                passData.material = blurMaterial;

                // Inputs
                passData.sourceTexture = m_FrontObjectTextureHandle;
                builder.UseTexture(m_FrontObjectTextureHandle, AccessFlags.Read);

                passData.maskTexture = m_FrontMaskTextureHandle;
                builder.UseTexture(m_FrontMaskTextureHandle, AccessFlags.Read);

                passData.shadowTexture = m_FrontShadowTextureHandle;
                builder.UseTexture(m_FrontShadowTextureHandle, AccessFlags.Read);

                // Pass Textures
                passData.pingTexture = pingTexture;
                builder.UseTexture(pingTexture, AccessFlags.ReadWrite);

                passData.pongTexture = pongTexture;
                builder.UseTexture(pongTexture, AccessFlags.ReadWrite);

                // Outputs
                passData.destination = m_FrontBlurTextureHandle;
                builder.UseTexture(m_FrontBlurTextureHandle, AccessFlags.Write);

                passData.cameraColorTexture = resourceData.activeColorTexture;
                builder.UseTexture(resourceData.activeColorTexture, AccessFlags.Write);

                builder.SetRenderFunc(static (BlurPassData data, UnsafeGraphContext context) =>
                {
                    var dofMat = data.material;
                    var cmd = CommandBufferHelpers.GetNativeCommandBuffer(context.cmd);

                    RTHandle sourceTextureHdl = data.sourceTexture;
                    RTHandle dstHdl = data.destination;

                    // Setup
                    using (new ProfilingScope(new ProfilingSampler(k_FrontObjectBlur + "Setup")))
                    {
                        dofMat.SetVector(PropertyNameConst._CoCParams, data.cocParams);
                        CoreUtils.SetKeyword(dofMat, ShaderKeywordStrings.HighQualitySampling,
                            data.highQualitySamplingValue);

                        SetSourceSize(cmd, data.sourceTexture);
                        dofMat.SetVector(PropertyNameConst._DownSampleScaleFactor,
                            new Vector4(1.0f / data.downsample, 1.0f / data.downsample, data.downsample,
                                data.downsample));
                    }

                    // Downscale & prefilter color + CoC
                    using (new ProfilingScope(new ProfilingSampler(k_FrontObjectBlur + "DownscalePrefilter")))
                    {
                        Blitter.BlitCameraTexture(cmd, data.sourceTexture, data.pingTexture, dofMat, 0);
                    }

                    // Blur H
                    using (new ProfilingScope(new ProfilingSampler(k_FrontObjectBlur + "BlurH")))
                    {
                        Blitter.BlitCameraTexture(cmd, data.pingTexture, data.pongTexture, dofMat, 1);
                    }

                    // Blur V
                    using (new ProfilingScope(new ProfilingSampler(k_FrontObjectBlur + "BlurV")))
                    {
                        Blitter.BlitCameraTexture(cmd, data.pongTexture, data.pingTexture, dofMat, 2);
                    }

                    // Composite
                    using (new ProfilingScope(new ProfilingSampler(k_FrontObjectBlur + "Composite")))
                    {
                        dofMat.SetTexture(PropertyNameConst._ColorTexture, data.pingTexture);
                        Blitter.BlitCameraTexture(cmd, sourceTextureHdl, dstHdl, dofMat, 3);
                    }

                    // BlitBack
                    using (new ProfilingScope(new ProfilingSampler(k_FrontObjectBlur + "BlitBack")))
                    {
                        dofMat.SetTexture(PropertyNameConst._MaskTexture, data.maskTexture);
                        Blitter.BlitCameraTexture(cmd, dstHdl, data.cameraColorTexture, dofMat, 4);
                    }
                    // BlitBackShadow
                    using (new ProfilingScope(new ProfilingSampler(k_FrontObjectBlur + "BlitBackShadow")))
                    {
                        Blitter.BlitCameraTexture(cmd, data.shadowTexture, data.cameraColorTexture, dofMat, 5);
                    }
                });
            }
        }
    }

    private class BlitBufferObjectPass : ScriptableRenderPass
    {
        public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
        {
            var frontObjectData = frameData.Create<FrontObjectData>();
            frontObjectData.Init(renderGraph, frameData);
            frontObjectData.RecordBlit(renderGraph, frameData, FrontBufferType.FrontObject);
            frontObjectData.RecordBlit(renderGraph, frameData, FrontBufferType.FrontMask);
            frontObjectData.RecordBlit(renderGraph, frameData, FrontBufferType.FrontShadow);
        }
    }

    private class RenderFrontObjectPass : ScriptableRenderPass
    {
        public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
        {
            var frontObjectData = frameData.Get<FrontObjectData>();
            frontObjectData.RecordRenderFrontObject(renderGraph, frameData, ref blurMaterial);
        }
    }

    [SerializeField] private Shader m_BlurShader;
    private static FrontObjectVolume m_FrontObjectVolume;
    private static Material blurMaterial;
    private BlitBufferObjectPass m_BlitBufferObjectPass;
    private RenderFrontObjectPass m_RenderFrontObjectPass;
    
    public override void Create()
    {
        m_BlitBufferObjectPass = new BlitBufferObjectPass();
        m_RenderFrontObjectPass = new RenderFrontObjectPass();
        m_BlitBufferObjectPass.renderPassEvent = RenderPassEvent.AfterRenderingTransparents;
        m_RenderFrontObjectPass.renderPassEvent = RenderPassEvent.AfterRenderingTransparents;
        if (m_BlurShader)
        {
            blurMaterial = CoreUtils.CreateEngineMaterial(m_BlurShader);
        }

    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        if (!m_FrontObjectVolume)
        {
            var stack = VolumeManager.instance.stack;
            m_FrontObjectVolume = stack.GetComponent<FrontObjectVolume>();
            return;
        }
        if (m_BlurShader && !blurMaterial)
        {
            blurMaterial = CoreUtils.CreateEngineMaterial(m_BlurShader);
        }
        
        if (m_FrontObjectVolume.active && m_FrontObjectVolume.IsActive() && m_BlurShader)
        {
            renderer.EnqueuePass(m_BlitBufferObjectPass);
            renderer.EnqueuePass(m_RenderFrontObjectPass);
        }
    }

    private static RenderTextureDescriptor GetCompatibleDescriptor(RenderTextureDescriptor desc, int width, int height, GraphicsFormat format, GraphicsFormat depthStencilFormat = GraphicsFormat.None)
    {
        desc.depthStencilFormat = depthStencilFormat;
        desc.msaaSamples = 1;
        desc.width = width;
        desc.height = height;
        desc.graphicsFormat = format;
        return desc;
    }

    private static void SetSourceSize(RasterCommandBuffer cmd, RTHandle source)
    {
        float width = source.rt.width;
        float height = source.rt.height;
        if (source.rt.useDynamicScale)
        {
            width *= ScalableBufferManager.widthScaleFactor;
            height *= ScalableBufferManager.heightScaleFactor;
        }
        cmd.SetGlobalVector(PropertyNameConst._SourceSize, new Vector4(width, height, 1.0f / width, 1.0f / height));
    }

    private static void SetSourceSize(CommandBuffer cmd, RTHandle source)
    {
        SetSourceSize(CommandBufferHelpers.GetRasterCommandBuffer(cmd), source);
    }
}

FrontObjectBlur.shader

The shader for the blur and the write-back.
Passes 1-4 do the blur; passes 5 and 6 handle the write-back.

Code
Shader "Hidden/FrontObjectBlur"
{
    HLSLINCLUDE

        #pragma target 3.5

        #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl"
        #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Filtering.hlsl"
        #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
        #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareDepthTexture.hlsl"
        #include "Packages/com.unity.render-pipelines.core/Runtime/Utilities/Blit.hlsl"
        #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Color.hlsl"
        #include "Packages/com.unity.render-pipelines.universal/Shaders/PostProcessing/Common.hlsl"

        TEXTURE2D_X(_ColorTexture);
        TEXTURE2D_X(_MaskTexture);

        float4 _SourceSize;
        float4 _DownSampleScaleFactor;

        float3 _CoCParams;

        #define Influence        _CoCParams.x
        #define MaxRadius       _CoCParams.y

        #define BLUR_KERNEL 0

        #if BLUR_KERNEL == 0

        // Offsets & coeffs for optimized separable bilinear 3-tap gaussian (5-tap equivalent)
        const static int kTapCount = 3;
        const static float kOffsets[] = {
            -1.33333333,
             0.00000000,
             1.33333333
        };
        const static half kCoeffs[] = {
             0.35294118,
             0.29411765,
             0.35294118
        };

        #elif BLUR_KERNEL == 1

        // Offsets & coeffs for optimized separable bilinear 5-tap gaussian (9-tap equivalent)
        const static int kTapCount = 5;
        const static float kOffsets[] = {
            -3.23076923,
            -1.38461538,
             0.00000000,
             1.38461538,
             3.23076923
        };
        const static half kCoeffs[] = {
             0.07027027,
             0.31621622,
             0.22702703,
             0.31621622,
             0.07027027
        };

        #endif

        half4 FragPrefilter(Varyings input) : SV_Target
        {
            UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);
            float2 uv = UnityStereoTransformScreenSpaceTex(input.texcoord);

        #if _HIGH_QUALITY_SAMPLING

            // Use a rotated grid to minimize artifacts coming from horizontal and vertical boundaries
            // "High Quality Antialiasing" [Lorach07]
            const int kCount = 5;
            const float2 kTaps[] = {
                float2( 0.0,  0.0),
                float2( 0.9, -0.4),
                float2(-0.9,  0.4),
                float2( 0.4,  0.9),
                float2(-0.4, -0.9)
            };

            half4 colorAcc = 0.0;
            half farCoCAcc = 0.0;

            UNITY_UNROLL
            for (int i = 0; i < kCount; i++)
            {
                float2 tapCoord = _SourceSize.zw * kTaps[i] + uv;
                half4 tapColor = SAMPLE_TEXTURE2D_X(_BlitTexture, sampler_LinearClamp, tapCoord);
                half coc = Influence;

                // Pre-multiply CoC to reduce bleeding of background blur on focused areas
                colorAcc += tapColor * coc;
                farCoCAcc += coc;
            }

            half4 color = colorAcc * rcp(kCount);
            half farCoC = farCoCAcc * rcp(kCount);

        #else

            // Bilinear sampling the coc is technically incorrect but we're aiming for speed here
            half farCoC = Influence;

            // Fast bilinear downscale of the source target and pre-multiply the CoC to reduce
            // bleeding of background blur on focused areas
            half4 color = SAMPLE_TEXTURE2D_X(_BlitTexture, sampler_LinearClamp, uv);
            color *= farCoC;

        #endif

            return color;
        }

        half4 Blur(Varyings input, float2 dir, float premultiply)
        {
            UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);
            float2 uv = UnityStereoTransformScreenSpaceTex(input.texcoord);

            // Use the center CoC as radius
            int2 positionSS = int2(_SourceSize.xy * _DownSampleScaleFactor.xy * uv);
            half samp0CoC = Influence;

            float2 offset = _SourceSize.zw * _DownSampleScaleFactor.zw * dir * samp0CoC * MaxRadius;
            half4 acc = 0.0;
            half accAlpha = 0.0;

            UNITY_UNROLL
            for (int i = 0; i < kTapCount; i++)
            {
                float2 sampCoord = uv + kOffsets[i] * offset;
                half sampCoC = Influence;
                half4 sampColor = SAMPLE_TEXTURE2D_X(_BlitTexture, sampler_LinearClamp, sampCoord);

                // Weight & pre-multiply to limit bleeding on the focused area
                half weight = saturate(1.0 - (samp0CoC - sampCoC));
                acc += half4(sampColor.xyz, premultiply ? sampCoC : 1.0) * kCoeffs[i] * weight;

                accAlpha  += sampColor.a * kCoeffs[i] * weight;

            }
            acc.xyz /= acc.w + 1e-4; // Zero-div guard

            accAlpha /= acc.w + 1e-4; // Zero-div guard
            return half4(acc.xyz, accAlpha);
        }

        half4 FragBlurH(Varyings input) : SV_Target
        {
            return Blur(input, float2(1.0, 0.0), 1.0);
        }

        half4 FragBlurV(Varyings input) : SV_Target
        {
            return Blur(input, float2(0.0, 1.0), 0.0);
        }

        half4 FragComposite(Varyings input) : SV_Target
        {
            UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);
            float2 uv = UnityStereoTransformScreenSpaceTex(input.texcoord);

            half4 baseColor = LOAD_TEXTURE2D_X(_BlitTexture, _SourceSize.xy * uv);
            half coc = Influence;

        #if _HIGH_QUALITY_SAMPLING
            half4 farColor = SampleTexture2DBicubic(TEXTURE2D_X_ARGS(_ColorTexture, sampler_LinearClamp), uv, _SourceSize * _DownSampleScaleFactor, 1.0, unity_StereoEyeIndex);
        #else
            half4 farColor = SAMPLE_TEXTURE2D_X(_ColorTexture, sampler_LinearClamp, uv);
        #endif

            half4 dstColor = 0.0;
            half dstAlpha = 1.0;

            UNITY_BRANCH
            if (coc > 0.0)
            {
                // Non-linear blend
                // "CryEngine 3 Graphics Gems" [Sousa13]
                half blend = sqrt(coc * TWO_PI);
                dstColor = farColor * saturate(blend);
                dstAlpha = saturate(1.0 - blend);
            }

            half4 outColor = dstColor + baseColor * dstAlpha;
            // Preserve the original value of the pixels with zero alpha
            outColor.rgb = outColor.a > 0 ? outColor.rgb : baseColor.rgb;
            return outColor;
        }

        half4 BlitBack(Varyings input) : SV_Target
        {
            UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);
            float2 uv = UnityStereoTransformScreenSpaceTex(input.texcoord);
            half mask = SAMPLE_TEXTURE2D_X(_MaskTexture, sampler_LinearClamp, uv).x;
            half4 color = SAMPLE_TEXTURE2D_X(_BlitTexture, sampler_LinearClamp, uv);
            color *= 1 - mask;
            return GetLinearToSRGB(color);
        }

        half4 BlitBackShadow(Varyings input) : SV_Target
        {
            UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);
            float2 uv = UnityStereoTransformScreenSpaceTex(input.texcoord);
            half4 color = SAMPLE_TEXTURE2D_X(_BlitTexture, sampler_LinearClamp, uv);
            return color;
        }

    ENDHLSL

    SubShader
    {
        Tags { "RenderPipeline" = "UniversalPipeline" }
        LOD 100
        ZTest Always ZWrite Off Cull Off

        Pass
        {
            Name "Gaussian Depth Of Field Prefilter"

            HLSLPROGRAM
                #pragma vertex Vert
                #pragma fragment FragPrefilter
                #pragma multi_compile_local_fragment _ _HIGH_QUALITY_SAMPLING
            ENDHLSL
        }

        Pass
        {
            Name "Gaussian Depth Of Field Blur Horizontal"

            HLSLPROGRAM
                #pragma vertex Vert
                #pragma fragment FragBlurH
            ENDHLSL
        }

        Pass
        {
            Name "Gaussian Depth Of Field Blur Vertical"

            HLSLPROGRAM
                #pragma vertex Vert
                #pragma fragment FragBlurV
            ENDHLSL
        }

        Pass
        {
            Name "Gaussian Depth Of Field Composite"

            HLSLPROGRAM
                #pragma vertex Vert
                #pragma fragment FragComposite
                #pragma multi_compile_local_fragment _ _HIGH_QUALITY_SAMPLING
            ENDHLSL
        }

        Pass
        {
            Blend SrcColor OneMinusSrcAlpha, Zero Zero
            Name "CameraColorTexture BlitBack"

            HLSLPROGRAM
                #pragma vertex Vert
                #pragma fragment BlitBack
            ENDHLSL
        }

        Pass
        {
            Blend SrcColor OneMinusSrcAlpha, Zero Zero
            Name "CameraColorTexture BlitBackShadow"

            HLSLPROGRAM
                #pragma vertex Vert
                #pragma fragment BlitBackShadow
            ENDHLSL
        }
    }
}

Render results

Foreground objects
1.png

Mask
2.png

Shadow
3.png

Everything except the foreground objects, mask, and shadow
4.png

The blurred foreground (kept subtle, since quality degrades as the Gaussian radius grows)
5.png

The result after cutting out the mask and compositing back
6.png

The result after compositing the shadow back
7.png

Bloom, UI, and the rest are drawn after this.

Summary and thoughts on the 2D Renderer

The 2D Renderer has many nice dedicated features, starting with 2D lights, but when you try custom rendering through RendererFeatures, its lack of extensibility compared to 3D is fatal. It was inconvenient enough that, given you can retrieve additional lights in 3D, I'd sooner use 3D. I sincerely hope things like filtering support and multiple CameraSortingLayerTextures get added. Not being able to receive the light textures in VFX in particular seems like a real problem, so I'm thinking of filing a feature request.

  1. One of the RenderGraph mechanisms added in Unity 6, used to pass textures and other data between passes.
    Personally, I think it's the most convenient part of RenderGraph.

  2. Note that culling the 2D lights stops the light textures from being rendered.
