
Memory usage balloons when using concurrent in Python


I wanted to push a fair amount of text through a fairly slow REST API, so I wrote the parallel-processing code below using concurrent. But memory usage balloons almost immediately and the process dies.

import concurrent.futures
import sys

executor = concurrent.futures.ThreadPoolExecutor(max_workers=5)
wait_executor = concurrent.futures.ThreadPoolExecutor(max_workers=5)

def proc(line):
    # Call the REST API
    ...

def wait_task(future):
    future.result()

for line in sys.stdin:
    future = executor.submit(proc, line)
    wait_executor.submit(wait_task, future)

Adding gc.collect() at the end of each iteration of the stdin loop brought memory usage down to a steady 50 MB or so, but I wonder what the correct solution is.
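The root cause is likely that `executor.submit` never blocks: ThreadPoolExecutor's internal work queue is unbounded, so the loop enqueues every stdin line (plus a Future per line) far faster than five workers can drain them. One common fix, sketched below under my own assumptions (the `proc` body and the `MAX_IN_FLIGHT` value are placeholders, not from the original post), is to bound the number of in-flight tasks with a semaphore that is released from a done-callback, so the producer loop blocks instead of piling work up in memory:

```python
import concurrent.futures
import threading

MAX_IN_FLIGHT = 10  # hypothetical cap on queued + running tasks; tune as needed

def proc(line):
    # Placeholder for the REST API call
    return line.upper()

def run(lines):
    semaphore = threading.BoundedSemaphore(MAX_IN_FLIGHT)
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=5)
    futures = []

    def on_done(future):
        semaphore.release()  # free a slot so the producer loop can submit again

    for line in lines:
        semaphore.acquire()  # blocks once MAX_IN_FLIGHT tasks are pending
        future = executor.submit(proc, line)
        future.add_done_callback(on_done)
        futures.append(future)

    executor.shutdown(wait=True)
    return [f.result() for f in futures]
```

In a true streaming setup you would also avoid retaining the `futures` list (e.g. write each result out in the callback); it is kept here only so the results can be collected. With the queue bounded this way, the second wait_executor and the gc.collect() workaround should both become unnecessary.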
