3 key points of SAP ABAP Parallel Processing

What I've learned from the last project

In the last project, I faced many issues with SAP parallel processing.

Many people simply parallelize ABAP jobs that were designed for, and have always run as, a single process. As a result, such thoughtless parallelization causes many problems. In this article, I'm going to share 3 key points of SAP ABAP Parallel Processing.

Parallel Processing is great!

First of all, I want to be clear that I'm not against parallel processing. Actually, I love it and rate it highly.
SAP BAPI processes tend to be slow, since SAP has implemented so much logic in them. We cannot modify SAP standard code, so there are limited ways to improve performance within a single process. In such cases, parallelization is an easy and powerful way to update large volumes of data.

SAP ABAP Parallel Processing methods

Though the ABAP server is not built for distributed computing like Apache Hadoop or Spark, it offers several solid options for parallel processing.
1. Simple Parallel Background Process
Firstly, I'll introduce a very simple way: just run background jobs in parallel via t-cd:sm37 or job scheduling software. It is so simple that no special knowledge is necessary. If you don't trust your engineers much, this way is the best, even though you won't get any additional advantages.
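Besides sm37, background jobs can also be scheduled programmatically. As a minimal sketch (the report name zmy_report, the p_pack parameter, and the job names are hypothetical), the standard function modules JOB_OPEN and JOB_CLOSE start several jobs, one per data package:

```abap
* Sketch: schedule 4 parallel background jobs, one per data package.
* zmy_report and p_pack are hypothetical names.
DATA: lv_jobname  TYPE tbtcjob-jobname,
      lv_jobcount TYPE tbtcjob-jobcount,
      lv_idx      TYPE i.

DO 4 TIMES.                          " 4 parallel jobs
  lv_idx = sy-index.
  lv_jobname = |ZPARALLEL_{ lv_idx }|.

  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname  = lv_jobname
    IMPORTING
      jobcount = lv_jobcount.

  SUBMIT zmy_report
    WITH p_pack = lv_idx             " each job processes its own package
    VIA JOB lv_jobname NUMBER lv_jobcount
    AND RETURN.

  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobcount  = lv_jobcount
      jobname   = lv_jobname
      strtimmed = 'X'.               " start immediately
ENDDO.
```

The degree of parallelism is fixed at scheduling time here, which is exactly the limitation that aRFC (below) avoids.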
2. aRFC
aRFC is an abbreviation of asynchronous RFC. I really like aRFC, since you can dynamically change the degree of parallelism.
Please see the article "ABAP aRFC Call Sample Program" for a sample program.
Note that aRFC uses dialog work processes, not background work processes.

  • You can configure server group settings via t-cd:rz12.

  • You can monitor aRFC resources via t-cd:sarfc.

Also see the SAP Help.
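A bare-bones aRFC dispatch loop looks like this. This is only a sketch: the function module Z_PROCESS_PACKAGE, the server group PARALLEL_GRP, and the table lt_packages (a table of work-package tables, assumed already filled) are all hypothetical names.

```abap
* Sketch: dispatch each work package as an aRFC call to a server group.
* Z_PROCESS_PACKAGE and PARALLEL_GRP are hypothetical names.
DATA: lv_task TYPE char8,
      gv_open TYPE i.

LOOP AT lt_packages INTO DATA(lt_package).
  lv_task = |TASK{ sy-tabix }|.
  CALL FUNCTION 'Z_PROCESS_PACKAGE'
    STARTING NEW TASK lv_task
    DESTINATION IN GROUP 'PARALLEL_GRP'    " group configured via t-cd:rz12
    PERFORMING on_end_of_task ON END OF TASK
    TABLES
      it_data = lt_package
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2
      resource_failure      = 3.           " no free dialog WP in the group
  IF sy-subrc = 0.
    gv_open = gv_open + 1.
  ELSEIF sy-subrc = 3.
    WAIT UP TO 1 SECONDS.                  " back off; retry in real code
  ENDIF.
ENDLOOP.

WAIT UNTIL gv_open = 0.    " on_end_of_task decrements gv_open

FORM on_end_of_task USING p_task TYPE clike.
  RECEIVE RESULTS FROM FUNCTION 'Z_PROCESS_PACKAGE'
    EXCEPTIONS OTHERS = 1.
  gv_open = gv_open - 1.
ENDFORM.
```

The RESOURCE_FAILURE branch is what makes the parallelism dynamic: when the server group has no free dialog work process, the caller simply waits and tries again, so throughput adapts to the load configured in rz12.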
3. Others (bgRFC, tRFC, qRFC)
I haven't used bgRFC, tRFC, or qRFC, so I can't comment on them here.

3 key points of Parallel Processing

You have to consider these points when parallelizing processes.

  1. Duplicate keys and time-series order
    Duplicate keys and time-series order are the most important concerns: when you divide the input data, each process knows nothing about the other processes' data and may update master or transaction records using stale values. So when duplicate keys exist, the first thing to do is to aggregate the rows that share the same key.

  2. Lock
    Locks are problematic for parallel processing, since parallelized processes may try to lock the same key data. So you must either restrict the job to inserts only (no updates) or collect all rows with the same key into a single process.

  3. UPDATE TASK LOCAL
    DB access usually becomes the bottleneck in parallel processing, so eliminating unnecessary DB work is one of the most important considerations. With "SET UPDATE TASK LOCAL", the update function modules registered for a commit run synchronously in the current work process instead of being queued through the update request (VB*) tables, so the database can process more, faster.
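For point 1, one simple approach is to sort the input by key and cut package boundaries only at a key change, so rows with the same key can never land in different processes. This is a sketch: lt_input, the key field matnr, and lt_packages (a table of package tables) are hypothetical names.

```abap
* Sketch: build work packages that never split one key across packages.
* lt_input, matnr, and lt_packages are hypothetical names.
SORT lt_input BY matnr.

DATA(lv_size) = 1000.                    " target package size
DATA lt_package LIKE lt_input.

LOOP AT lt_input INTO DATA(ls_row).
  APPEND ls_row TO lt_package.
  AT END OF matnr.                       " key boundary reached
    IF lines( lt_package ) >= lv_size.
      APPEND lt_package TO lt_packages.  " hand this package to one process
      CLEAR lt_package.
    ENDIF.
  ENDAT.
ENDLOOP.
IF lt_package IS NOT INITIAL.
  APPEND lt_package TO lt_packages.      " remainder
ENDIF.
```

Packages end up slightly uneven in size, but each key is processed by exactly one process, in the original sort order, which removes both the duplicate-key and the time-series hazard.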
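For point 3, the statement itself is tiny; the point is where the update function modules run at commit time. The function module Z_UPDATE_STOCK below is a hypothetical name.

```abap
* Sketch: run update function modules locally instead of queuing them
* through the VB* update tables. Z_UPDATE_STOCK is a hypothetical FM.
SET UPDATE TASK LOCAL.

CALL FUNCTION 'Z_UPDATE_STOCK' IN UPDATE TASK
  EXPORTING
    is_data = ls_data.

" With local update, COMMIT WORK executes the registered FMs
" synchronously in this work process - nothing is written to the
" update request tables, and no update work process is occupied.
COMMIT WORK.
```

Note that SET UPDATE TASK LOCAL only lasts until the next COMMIT WORK or ROLLBACK WORK, so it has to be set again for each logical unit of work.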
