
Removing the total context-length limit when calling the GPT-3 API


Overview

When you hit the Completion endpoint of GPT-3, you get an error if the prompt plus the requested completion exceeds 2049 tokens.
This article is about how to get that limit removed.

Getting an error from the GPT-3 answers API

Suppose you call the Completion endpoint like this:

    import openai

    # engine / prompt / temperature / max_tokens are set elsewhere in the script;
    # shown here with placeholder values
    engine = "davinci"
    prompt = "..."  # the long prompt that triggers the error
    temperature = 0.7
    max_tokens = 500

    response = openai.Completion.create(
        engine=engine,
        prompt=prompt,
        temperature=float(temperature),
        max_tokens=int(max_tokens),
        top_p=1.0,
        frequency_penalty=0.0,
        presence_penalty=0.0,
        stop=["###"],
    )
    export_text = response["choices"][0]["text"]
    print(export_text)
The response then starts coming back with the following error text:

    You requested a completion of 2076 tokens
    (you supplied a prompt of length 1576 and requested to sample 500),
    but this model's maximum context length is 2049.
    If you would like us to add a feature to auto-truncate server-side,
    let us know at support@openai.com.
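
The numbers in the message add up: the 1576-token prompt plus the 500 sampled tokens requested makes 2076, over the 2049-token window shared by prompt and completion. Besides contacting support (the approach this article takes), a client-side workaround is to count the prompt's tokens yourself and clamp max_tokens so the total fits. Below is a minimal sketch of that idea; the helper name capped_completion and the use of the tiktoken package's GPT-2 encoding as a stand-in for the model's tokenizer are my assumptions, not something from the original article:

    import openai
    import tiktoken  # assumed installed; GPT-3 base engines use the GPT-2 tokenizer

    MAX_CONTEXT = 2049  # window shared by the prompt and the sampled completion

    def capped_completion(engine, prompt, max_tokens=500, **kwargs):
        # Count how many tokens the prompt already consumes
        enc = tiktoken.get_encoding("gpt2")
        prompt_tokens = len(enc.encode(prompt))
        budget = MAX_CONTEXT - prompt_tokens
        if budget <= 0:
            raise ValueError(f"Prompt alone is {prompt_tokens} tokens; over the window")
        # Clamp the requested sample size so prompt + completion <= MAX_CONTEXT
        return openai.Completion.create(
            engine=engine,
            prompt=prompt,
            max_tokens=min(int(max_tokens), budget),
            **kwargs,
        )

With the 1576-token prompt from the error above, this would request min(500, 2049 - 1576) = 473 tokens instead of 500, and the call would go through.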

Fix

Just as the message suggests: email support and ask them to lift the limit manually.

The email I sent

Hello. My name is ___, and GPT-3 works great. When I use the factual answering API, I get an error saying:
'''You requested a completion of 2076 tokens (you supplied a prompt of length 1576 and requested to sample 500), but this model's maximum context length is 2049. If you would like us to add a feature to auto-truncate server-side, let us know at support@openai.com.'''
Can you please turn off the auto-truncation on the server side?
Information about my account is as follows.
- mail: name@mail.com
- Organization title:
Best regards, 

Afterwards

I emailed them at 8:00 on a Monday; a reply came back at 12:30, about four hours later, and the limit was lifted.

Hi , we are not currently auto-truncating server-side so you should be all set. This message is asking for feedback if you would like us to add auto-truncation as a feature. I hope that helps!
