
A grouping function using the Oracle SQL MODEL clause

Preface

I had been hoping to mimic Ruby's each_slice behavior (which can be done with the ceil function), but, as usual, I got sidetracked and ended up building something different.
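For reference, the difference between Ruby's each_slice (fixed-size slices) and the count-limited grouping built in this article can be sketched in Python. This is a hypothetical illustration, not code from the article:

```python
def each_slice(items, n):
    """Fixed-size slices, like Ruby's Enumerable#each_slice."""
    return [items[i:i + n] for i in range(0, len(items), n)]

def slice_by_limit(counts, limit):
    """Group consecutive counts so each group's total stays within limit.
    A count that alone exceeds the limit forms a group by itself."""
    groups, cur, total = [], [], 0
    for c in counts:
        if cur and (total + c > limit or c > limit):
            groups.append(cur)          # close the current group
            cur, total = [], 0
        cur.append(c)
        total += c
    if cur:
        groups.append(cur)
    return groups

print(each_slice([1, 2, 3, 4, 5], 2))             # [[1, 2], [3, 4], [5]]
print(slice_by_limit([40, 50, 30, 90, 20], 100))  # [[40, 50], [30], [90], [20]]
```

The second function is the behavior the MODEL-clause function below implements in SQL.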

Environment

$sqlplus user01/ORACLE_PWD@ORCLPDB01

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Dec 7 14:36:55 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Last Successful login time: Sat Dec 07 2019 11:20:25 +09:00


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
USER01@ORCLPDB01>

Script

A function that takes a grouped collection and an upper-limit count as arguments, and returns a collection grouped so that each group stays within the upper limit.

The grouped collection passed in looks like this:

sort_key is given a unique value numbered from the date column.
dtm is given that date column.
cnt is given the row count for each date.

Adjust the date type to suit your needs; here I used the date type.

create or replace type mymap is object (sort_key number,cnt number,dtm date);
/

create or replace type mymap_liz is table of mymap;
/

The interface of the returned collection is as follows.

Adjust the date type to suit your needs; here I used the date type.

create or replace type grzz is object (
    sort_key          number
    , cnt          number
    , cum_cnt      number
    , over_flg     number
    , grp          number
    , mn_sort_key       number
    , mx_sort_key       number
    , mn_cnt       number
    , mx_cnt       number
    , mn_dtm       date
    , mx_dtm       date
    , mn_cum_cnt   number
    , mx_cum_cnt   number
);
/

create or replace type grp_liz is table of grzz;
/

The function in question.

In the rules clause of the MODEL clause, the logic is: if adding this row's count to the previous running total would exceed the limit, or this row's count by itself exceeds the limit, return this row's count on its own; otherwise, add it to the running total. The overflow-flag logic works the same way.

Think of the measures clause as setting the values for the first row.

Analytic functions can be used inside the MODEL clause, so I built the running aggregation into the rules as well. When using an analytic function, set the referenced cells on the left-hand side to any, because the running aggregation must include the first row. For the other rules, restrict the referenced cells to the second row onward and build in the logic there.
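The rules above can be sketched procedurally. This hypothetical Python loop mirrors the cum_cnt / over_flg / grp assignments (the names match the MODEL measures; the code itself is only an illustration, not the article's):

```python
def model_rules(rows, upper_limit_cnt):
    """rows: list of (sort_key, cnt) ordered by sort_key."""
    out, prev_cum, grp = [], None, 0
    for sort_key, cnt in rows:
        if prev_cum is None:
            cum, over = cnt, 1          # first row: measures defaults
        elif prev_cum + cnt > upper_limit_cnt or cnt > upper_limit_cnt:
            cum, over = cnt, 1          # start a new group with this row
        else:
            cum, over = prev_cum + cnt, 0
        grp += over                     # grp = running sum of over_flg
        out.append((sort_key, cnt, cum, over, grp))
        prev_cum = cum
    return out

# groups under a limit of 100: {60, 30}, {50}, {120}
for row in model_rules([(1, 60), (2, 30), (3, 50), (4, 120)], 100):
    print(row)
```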

create or replace function each_slice(tgt mymap_liz,upper_limit_cnt number)
return grp_liz
as
rt grp_liz;
begin
select
    cast(collect(grzz(sort_key, cnt, cum_cnt, over_flg, grp, mn_sort_key, mx_sort_key, mn_cnt, mx_cnt, mn_dtm, mx_dtm, mn_cum_cnt, mx_cum_cnt)) as grp_liz)
    into rt
from(
    select
        sort_key, cnt, cum_cnt, over_flg, grp
        ,min(sort_key)over(partition by grp) as mn_sort_key
        ,max(sort_key)over(partition by grp) as mx_sort_key
        ,min(cnt)over(partition by grp) as mn_cnt
        ,max(cnt)over(partition by grp) as mx_cnt
        ,min(dtm)over(partition by grp) as mn_dtm
        ,max(dtm)over(partition by grp) as mx_dtm
        ,min(cum_cnt)over(partition by grp) as mn_cum_cnt
        ,max(cum_cnt)over(partition by grp) as mx_cum_cnt
    from
        (
        select
            sort_key, dtm,cnt, cum_cnt, over_flg, grp
        from table(tgt)
        model
            dimension by (sort_key)
            measures (dtm as dtm,cnt as cnt,cnt as cum_cnt,1 as over_flg,cast(null as number) as grp)
            rules
            (
                cum_cnt[sort_key>1]=case
                                        when cum_cnt[cv()-1] + cnt[cv()] > upper_limit_cnt or cnt[cv()] > upper_limit_cnt then cnt[cv()]
                                        else cum_cnt[cv()-1] + cnt[cv()]
                                    end
                ,over_flg[sort_key>1]=case
                                        when cum_cnt[cv()-1] + cnt[cv()] > upper_limit_cnt or cnt[cv()] > upper_limit_cnt then 1
                                        else 0
                                    end
                ,grp[any]=sum(over_flg)over(order by sort_key)
            )
        )
    );
    return rt;
end;
/

Verification

The detail subquery mimics a table that has already been aggregated per day.
The summary subquery builds the argument to pass to the MODEL-clause grouping function.
The main query inner-joins the detail subquery with the unnested result of the function on sort_key. Finally, since the retrieval order is not guaranteed, the result is ordered by sort_key.

with detail as(
select
    sort_key
    , rsv_date
    , cnt
from(
    select
        row_number()over(order by to_date(sysdate+level)) as sort_key
        ,to_date(sysdate+level) as rsv_date
        ,trunc(dbms_random.value(1000,10000)) as cnt
    from dual connect by level<=28
    )
),summary as(
select cast(collect(mymap(sort_key,cnt,rsv_date)) as mymap_liz) as tgt from detail
)select
    s2.sort_key
    ,s2.rsv_date
    ,s2.cnt
    ,100000 as upper_limit_cnt
    ,s3.cum_cnt, s3.over_flg, s3.grp, s3.mn_sort_key, s3.mx_sort_key, s3.mn_cnt, s3.mx_cnt, s3.mn_dtm, s3.mx_dtm, s3.mn_cum_cnt, s3.mx_cum_cnt
from summary s1,detail s2,table(each_slice(s1.tgt,100000)) s3
where s2.sort_key=s3.sort_key
order by s3.sort_key
;

(execution result screenshot omitted)

Afterword

When you want to dynamically vary how many rows are loaded from a source table into a target table, keeping an upper load limit per source table would let you control this handily. The same applies when you want to dynamically vary how many rows go into each transfer file generated from a source table: keeping an upper send limit per source table could be handy.

I got sidetracked, but it was good practice with the MODEL clause. I don't normally use it, but this kind of logic is easy to build with it. ^^

That's all. Thank you for reading.

Addendum (2019-12-13)

After grouping, the minimum and maximum can be computed from the result, so I removed the unneeded columns from the argument and return collections.

Simpler now.

create or replace type args_map is object (sort_key number,cnt number);
/

create or replace type args_map_liz is table of args_map;
/

create or replace type rt_map is object (
    sort_key       number
    , cnt          number
    , cum_cnt      number
    , over_flg     number
    , grp          number
    , mn_cum_cnt   number
    , mx_cum_cnt   number
);
/

create or replace type rt_map_liz is table of rt_map;
/

create or replace function each_slice(tgt args_map_liz,upper_limit_cnt number)
return rt_map_liz
as
rt rt_map_liz;
begin
select
    cast(collect(rt_map(sort_key, cnt, cum_cnt, over_flg, grp, mn_cum_cnt, mx_cum_cnt)) as rt_map_liz)
    into rt
from(
    select
        sort_key, cnt, cum_cnt, over_flg, grp
        ,min(cum_cnt)over(partition by grp) as mn_cum_cnt
        ,max(cum_cnt)over(partition by grp) as mx_cum_cnt
    from
        (
        select
            sort_key,cnt, cum_cnt, over_flg, grp
        from table(tgt)
        model
            dimension by (sort_key)
            measures (cnt,cnt as cum_cnt,1 as over_flg,cast(null as number) as grp)
            rules
            (
                cum_cnt[sort_key>1]=case
                                        when cum_cnt[cv()-1] + cnt[cv()] > upper_limit_cnt or cnt[cv()] > upper_limit_cnt then cnt[cv()]
                                        else cum_cnt[cv()-1] + cnt[cv()]
                                    end
                ,over_flg[sort_key>1]=case
                                        when cum_cnt[cv()-1] + cnt[cv()] > upper_limit_cnt or cnt[cv()] > upper_limit_cnt then 1
                                        else 0
                                    end
                ,grp[any]=sum(over_flg)over(order by sort_key)
            )
        )
    );
    return rt;
end;
/

Trying it out

with detail as(
select
    sort_key
    , rsv_date
    , cnt
from(
    select
        row_number()over(order by to_date(sysdate+level)) as sort_key
        ,to_date(sysdate+level) as rsv_date
        ,trunc(dbms_random.value(1000,10000)) as cnt
    from dual connect by level<=28
    )
),summary as(
select cast(collect(args_map(sort_key,cnt)) as args_map_liz) as tgt from detail
)
--select * from summary;
select
    s2.sort_key
    ,s2.rsv_date
    ,min(s2.rsv_date)over(partition by s3.grp) as mn_dtm
    ,max(s2.rsv_date)over(partition by s3.grp) as mx_dtm
    ,min(s2.sort_key)over(partition by s3.grp) as mn_sort_key
    ,max(s2.sort_key)over(partition by s3.grp) as mx_sort_key
    ,min(s2.cnt)over(partition by s3.grp) as mn_cnt
    ,max(s2.cnt)over(partition by s3.grp) as mx_cnt
    ,s2.cnt
    ,100000 as upper_limit_cnt
    ,s3.cum_cnt, s3.over_flg, s3.grp,s3.mn_cum_cnt, s3.mx_cum_cnt
from summary s1,detail s2,table(each_slice(s1.tgt,100000)) s3
where s2.sort_key=s3.sort_key
order by s3.sort_key
;

(execution result screenshot omitted)

I wrote one more so it can also group per item. For multiple grouping columns, just concatenate them into a string and pass that.
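The per-item rules can be sketched the same way. In this hypothetical Python version (an illustration, not the article's code) the accumulator resets whenever the item changes, mirroring the item[cv()-1] <> item[cv()] branch, and the group counter is kept per item:

```python
def each_slice_byitem(rows, upper_limit_cnt):
    """rows: list of (sort_key, cnt, item) ordered by sort_key."""
    out, prev_item, prev_cum, grps = [], None, None, {}
    for sort_key, cnt, item in rows:
        if item != prev_item or prev_cum + cnt > upper_limit_cnt \
                or cnt > upper_limit_cnt:
            cum, over = cnt, 1          # item changed or limit exceeded
        else:
            cum, over = prev_cum + cnt, 0
        grps[item] = grps.get(item, 0) + over   # grp partitioned by item
        out.append((sort_key, cnt, item, cum, over, grps[item]))
        prev_item, prev_cum = item, cum
    return out

for row in each_slice_byitem([(1, 60, 'A'), (2, 30, 'A'),
                              (3, 80, 'A'), (4, 50, 'B')], 100):
    print(row)
```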

create or replace type args_map_byitem is object (sort_key number,cnt number,item varchar2(4000));
/

create or replace type args_map_byitem_liz is table of args_map_byitem;
/

create or replace type rt_map_byitem is object (
    sort_key       number
    , cnt          number
    , item         varchar2(4000)
    , cum_cnt      number
    , over_flg     number
    , grp          number
    , mn_cum_cnt   number
    , mx_cum_cnt   number
);
/

create or replace type rt_map_byitem_liz is table of rt_map_byitem;
/

create or replace function each_slice_byitem(tgt args_map_byitem_liz,upper_limit_cnt number)
return rt_map_byitem_liz
as
rt rt_map_byitem_liz;
begin
select
    cast(collect(rt_map_byitem(sort_key, cnt,item, cum_cnt, over_flg, grp, mn_cum_cnt, mx_cum_cnt)) as rt_map_byitem_liz)
    into rt
from(
    select
        sort_key, cnt,item, cum_cnt, over_flg, grp
        ,min(cum_cnt)over(partition by item,grp) as mn_cum_cnt
        ,max(cum_cnt)over(partition by item,grp) as mx_cum_cnt
    from
        (
        select
            sort_key,cnt,item,cum_cnt, over_flg, grp
        from table(tgt)
        model
            dimension by (sort_key)
            measures (cnt,item,cnt as cum_cnt,1 as over_flg,cast(null as number) as grp)
            rules
            (
                cum_cnt[sort_key>1]=case
                        when item[cv()-1]<>item[cv()] then cnt[cv()]
                        when item[cv()-1]=item[cv()] and cum_cnt[cv()-1] + cnt[cv()] > upper_limit_cnt or cnt[cv()] > upper_limit_cnt then cnt[cv()]
                        when item[cv()-1]=item[cv()] and cum_cnt[cv()-1] + cnt[cv()] <= upper_limit_cnt or cnt[cv()] <= upper_limit_cnt then cum_cnt[cv()-1] + cnt[cv()]
                    end
                ,over_flg[sort_key>1]=case
                        when item[cv()-1]<>item[cv()] then 1
                        when item[cv()-1]=item[cv()] and cum_cnt[cv()-1] + cnt[cv()] > upper_limit_cnt or cnt[cv()] > upper_limit_cnt then 1
                        when item[cv()-1]=item[cv()] and cum_cnt[cv()-1] + cnt[cv()] <= upper_limit_cnt or cnt[cv()] <= upper_limit_cnt then 0
                    end
                ,grp[any]=sum(over_flg)over(partition by item order by sort_key)
            )
        )
    );
    return rt;
end;
/

Trying it out

drop table test_stock purge;
create table test_stock (
    item          varchar2(8 byte)
    , whs_dtm       date
    , epr_dtm       date
    , stk_qty       number
    , rsv_stk_qty   number
);

set define off;
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('A0001',to_date('19-12-09','RR-MM-DD'),to_date('20-01-08','RR-MM-DD'),270,6);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('A0001',to_date('19-12-09','RR-MM-DD'),to_date('20-01-09','RR-MM-DD'),219,9);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('A0001',to_date('19-12-09','RR-MM-DD'),to_date('20-01-10','RR-MM-DD'),168,4);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('A0001',to_date('19-12-10','RR-MM-DD'),to_date('20-01-09','RR-MM-DD'),285,2);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('A0001',to_date('19-12-10','RR-MM-DD'),to_date('20-01-10','RR-MM-DD'),294,8);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('A0001',to_date('19-12-11','RR-MM-DD'),to_date('20-01-10','RR-MM-DD'),175,7);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('B0001',to_date('19-12-09','RR-MM-DD'),to_date('20-01-08','RR-MM-DD'),195,5);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('B0001',to_date('19-12-09','RR-MM-DD'),to_date('20-01-09','RR-MM-DD'),297,3);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('B0001',to_date('19-12-09','RR-MM-DD'),to_date('20-01-10','RR-MM-DD'),185,8);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('B0001',to_date('19-12-10','RR-MM-DD'),to_date('20-01-09','RR-MM-DD'),177,10);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('B0001',to_date('19-12-10','RR-MM-DD'),to_date('20-01-10','RR-MM-DD'),267,10);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('B0001',to_date('19-12-11','RR-MM-DD'),to_date('20-01-10','RR-MM-DD'),161,8);
commit;

with stock_detail as(
    select
        row_number()over(order by item,whs_dtm,epr_dtm) as sort_key
        ,item
        ,row_number()over(partition by item order by whs_dtm,epr_dtm) as grpseq
        ,whs_dtm
        ,epr_dtm
        ,stk_qty-rsv_stk_qty as enable_cnt
    from
        test_stock
),stock_args as(
    select cast(collect(args_map_byitem(sort_key,enable_cnt,item)) as args_map_byitem_liz) as tgt
    from stock_detail
)
select s2.sort_key, s2.cnt, s2.item, s2.cum_cnt,700 as upper_limit_cnt, s2.over_flg, s2.grp, s2.mn_cum_cnt, s2.mx_cum_cnt
from stock_args s1,table(each_slice_byitem(s1.tgt,700)) s2;

(execution result screenshot omitted)

Addendum (2019-12-22)

This version absorbs the difference between having and not having a grouping column. I also wrote a recursive WITH version so it can be ported to other databases.
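As a rough procedural counterpart to this variant (an illustration, not the article's code; the row_number() renumbering for null keys is omitted here), the accumulator and subgroup counter restart for each grp_key, and all-None keys collapse into one group, much like nvl(grp_key, 1) in the recursive join:

```python
from itertools import groupby

def each_slice_grouped(rows, limit):
    """rows: list of (grp_key, grpseq_key, enable_cnt).
    The accumulator and subgrp counter restart for each grp_key;
    all-None keys are treated as a single group."""
    out = []
    rows = sorted(rows, key=lambda r: (r[0] or 0, r[1]))
    for _, grp_rows in groupby(rows, key=lambda r: r[0] or 0):
        cum, subgrp = None, 0           # reset per grp_key
        for grp_key, seq, cnt in grp_rows:
            if cum is None or cum + cnt > limit or cnt > limit:
                cum, over = cnt, 1
            else:
                cum, over = cum + cnt, 0
            subgrp += over
            out.append((grp_key, seq, cnt, cum, over, subgrp))
    return out
```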

create or replace type args_grp_each_slice is object (
    grp_key            number
    , grpseq_key       number
    , enable_cnt       number
);
/

create or replace type args_grp_liz_each_slice is table of args_grp_each_slice;
/

create or replace type rt_grp_each_slice is object (
    grp_key            number
    , grpseq_key       number
    , enable_cnt       number
    , cum_cnt          number
    , over_flg         number
    , subgrp_key       number
    , mn_cum_cnt       number
    , mx_cum_cnt       number
);
/

create or replace type rt_grp_liz_each_slice is table of rt_grp_each_slice;
/

create or replace function each_slice(tgt args_grp_liz_each_slice,upper_limit_cnt number)
return rt_grp_liz_each_slice
as
rt rt_grp_liz_each_slice;
begin
select
    cast(collect(rt_grp_each_slice(grp_key,grpseq_key,enable_cnt,cum_cnt, over_flg, subgrp_key, mn_cum_cnt, mx_cum_cnt)) as rt_grp_liz_each_slice)
    into rt
from(
    select
        grp_key,grpseq_key,enable_cnt,cum_cnt, over_flg, subgrp_key
        ,min(cum_cnt)over(partition by grp_key,subgrp_key) as mn_cum_cnt
        ,max(cum_cnt)over(partition by grp_key,subgrp_key) as mx_cum_cnt
    from
        (
        select
            grp_key,grpseq_key,enable_cnt,cum_cnt, over_flg, subgrp_key
        from table(tgt)
        model
            partition by (grp_key)
            dimension by (case when grp_key is null then row_number()over(order by grpseq_key) else grpseq_key end as grpseq_key)
            measures (enable_cnt,enable_cnt as cum_cnt,1 as over_flg,cast(null as number) as subgrp_key)
            rules
            (
                cum_cnt[grpseq_key>1]=case
                        when cum_cnt[cv()-1] + enable_cnt[cv()] > upper_limit_cnt or enable_cnt[cv()] > upper_limit_cnt then enable_cnt[cv()]
                        when cum_cnt[cv()-1] + enable_cnt[cv()] <= upper_limit_cnt or enable_cnt[cv()] <= upper_limit_cnt then cum_cnt[cv()-1] + enable_cnt[cv()]
                    end
                ,over_flg[grpseq_key>1]=case
                        when cum_cnt[cv()-1] + enable_cnt[cv()] > upper_limit_cnt or enable_cnt[cv()] > upper_limit_cnt then 1
                        when cum_cnt[cv()-1] + enable_cnt[cv()] <= upper_limit_cnt or enable_cnt[cv()] <= upper_limit_cnt then 0
                    end
                ,subgrp_key[any]=sum(over_flg)over(partition by grp_key order by grpseq_key)
            )
        )
    );
    return rt;
end;
/


create or replace function each_slice_std(tgt args_grp_liz_each_slice,upper_limit_cnt number)
return rt_grp_liz_each_slice
as
rt rt_grp_liz_each_slice;
begin
    with sub as(
        select
            grp_key
            ,case
                when grp_key is null then row_number()over(order by grpseq_key)
                else grpseq_key
            end as grpseq_key
            ,enable_cnt
        from table(tgt)
    ),rec(grp_key,grpseq_key,enable_cnt,cum_cnt, over_flg) as(
        select grp_key,grpseq_key,enable_cnt,enable_cnt,1 from sub where grpseq_key=1
        union all
        select s2.grp_key,s2.grpseq_key,s2.enable_cnt
            ,case
                when s1.cum_cnt + s2.enable_cnt > upper_limit_cnt or s2.enable_cnt > upper_limit_cnt then s2.enable_cnt
                when s1.cum_cnt + s2.enable_cnt <= upper_limit_cnt or s2.enable_cnt <= upper_limit_cnt then s1.cum_cnt + s2.enable_cnt
            end
            ,case
                when s1.cum_cnt + s2.enable_cnt > upper_limit_cnt or s2.enable_cnt > upper_limit_cnt then 1
                when s1.cum_cnt + s2.enable_cnt <= upper_limit_cnt or s2.enable_cnt <= upper_limit_cnt then 0
            end
        from rec s1,sub s2
        where
            nvl(s1.grp_key,1)=nvl(s2.grp_key,1)
        and s1.grpseq_key+1=s2.grpseq_key
    )
    select
        cast(collect(rt_grp_each_slice(grp_key,grpseq_key,enable_cnt,cum_cnt, over_flg, subgrp_key, mn_cum_cnt, mx_cum_cnt)) as rt_grp_liz_each_slice)
        into rt
    from(
        select
            grp_key
            ,grpseq_key
            ,enable_cnt
            ,cum_cnt
            ,over_flg
            ,subgrp_key
            ,min(cum_cnt)over(partition by grp_key,subgrp_key) as mn_cum_cnt
            ,max(cum_cnt)over(partition by grp_key,subgrp_key) as mx_cum_cnt
        from(
            select
                grp_key
                ,grpseq_key
                ,enable_cnt
                ,cum_cnt
                ,over_flg
                ,sum(over_flg)over(partition by grp_key order by grpseq_key) as subgrp_key
            from(
                select
                    grp_key,grpseq_key,enable_cnt,cum_cnt, over_flg
                from rec
                )
            )
        );
    return rt;
end;
/

drop table test_stock purge;
create table test_stock (
    item          varchar2(8 byte)
    , whs_dtm       date
    , epr_dtm       date
    , stk_qty       number
    , rsv_stk_qty   number
);

set define off;
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('A0001',to_date('19-12-09','RR-MM-DD'),to_date('20-01-08','RR-MM-DD'),270,6);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('A0001',to_date('19-12-09','RR-MM-DD'),to_date('20-01-09','RR-MM-DD'),219,9);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('A0001',to_date('19-12-09','RR-MM-DD'),to_date('20-01-10','RR-MM-DD'),168,4);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('A0001',to_date('19-12-10','RR-MM-DD'),to_date('20-01-09','RR-MM-DD'),285,2);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('A0001',to_date('19-12-10','RR-MM-DD'),to_date('20-01-10','RR-MM-DD'),294,8);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('A0001',to_date('19-12-11','RR-MM-DD'),to_date('20-01-10','RR-MM-DD'),175,7);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('B0001',to_date('19-12-09','RR-MM-DD'),to_date('20-01-08','RR-MM-DD'),195,5);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('B0001',to_date('19-12-09','RR-MM-DD'),to_date('20-01-09','RR-MM-DD'),297,3);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('B0001',to_date('19-12-09','RR-MM-DD'),to_date('20-01-10','RR-MM-DD'),185,8);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('B0001',to_date('19-12-10','RR-MM-DD'),to_date('20-01-09','RR-MM-DD'),177,10);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('B0001',to_date('19-12-10','RR-MM-DD'),to_date('20-01-10','RR-MM-DD'),267,10);
insert into test_stock (item,whs_dtm,epr_dtm,stk_qty,rsv_stk_qty) values ('B0001',to_date('19-12-11','RR-MM-DD'),to_date('20-01-10','RR-MM-DD'),161,8);
commit;

Verification

with stock_detail as(
    select
        dense_rank()over(order by item) as grp_key
        ,dense_rank()over(partition by item order by whs_dtm,epr_dtm) as grpseq_key
        ,item
        ,whs_dtm
        ,epr_dtm
        ,stk_qty-rsv_stk_qty as enable_cnt
    from
        test_stock
),stock_args as(
    select cast(collect(args_grp_each_slice(null,grpseq_key,enable_cnt)) as args_grp_liz_each_slice) as tgt
    from stock_detail
)
select 
    s2.grp_key, s2.grpseq_key, s2.enable_cnt, s2.cum_cnt,700 as upper_limit_cnt, s2.over_flg, s2.subgrp_key, s2.mn_cum_cnt, s2.mx_cum_cnt
from stock_args s1
,table(each_slice(s1.tgt,700)) s2
order by s2.grp_key, s2.grpseq_key
;

(execution result screenshot omitted)

with stock_detail as(
    select
        dense_rank()over(order by item) as grp_key
        ,dense_rank()over(partition by item order by whs_dtm,epr_dtm) as grpseq_key
        ,item
        ,whs_dtm
        ,epr_dtm
        ,stk_qty-rsv_stk_qty as enable_cnt
    from
        test_stock
),stock_args as(
    select cast(collect(args_grp_each_slice(grp_key,grpseq_key,enable_cnt)) as args_grp_liz_each_slice) as tgt
    from stock_detail
)
select 
    s2.grp_key, s2.grpseq_key, s2.enable_cnt, s2.cum_cnt,700 as upper_limit_cnt, s2.over_flg, s2.subgrp_key, s2.mn_cum_cnt, s2.mx_cum_cnt
from stock_args s1
,table(each_slice(s1.tgt,700)) s2
order by s2.grp_key, s2.grpseq_key
;

(execution result screenshot omitted)

with stock_detail as(
    select
        dense_rank()over(order by item) as grp_key
        ,dense_rank()over(partition by item order by whs_dtm,epr_dtm) as grpseq_key
        ,item
        ,whs_dtm
        ,epr_dtm
        ,stk_qty-rsv_stk_qty as enable_cnt
    from
        test_stock
),stock_args as(
    select cast(collect(args_grp_each_slice(null,grpseq_key,enable_cnt)) as args_grp_liz_each_slice) as tgt
    from stock_detail
)
select 
    s2.grp_key, s2.grpseq_key, s2.enable_cnt, s2.cum_cnt,700 as upper_limit_cnt, s2.over_flg, s2.subgrp_key, s2.mn_cum_cnt, s2.mx_cum_cnt
from stock_args s1
,table(each_slice_std(s1.tgt,700)) s2
order by s2.grp_key, s2.grpseq_key
;

(execution result screenshot omitted)

with stock_detail as(
    select
        dense_rank()over(order by item) as grp_key
        ,dense_rank()over(partition by item order by whs_dtm,epr_dtm) as grpseq_key
        ,item
        ,whs_dtm
        ,epr_dtm
        ,stk_qty-rsv_stk_qty as enable_cnt
    from
        test_stock
),stock_args as(
    select cast(collect(args_grp_each_slice(grp_key,grpseq_key,enable_cnt)) as args_grp_liz_each_slice) as tgt
    from stock_detail
)
select 
    s2.grp_key, s2.grpseq_key, s2.enable_cnt, s2.cum_cnt,700 as upper_limit_cnt, s2.over_flg, s2.subgrp_key, s2.mn_cum_cnt, s2.mx_cum_cnt
from stock_args s1
,table(each_slice_std(s1.tgt,700)) s2
order by s2.grp_key, s2.grpseq_key
;

(execution result screenshot omitted)

ukijumotahaneniarukenia
I love SQL (Oracle). I've started a WordPress blog where I write up what I've looked into about Oracle SQL and other topics. https://github.com/ukijumotahaneniarukenia
https://ukijumotahaneniarukenia.site/