
How to make API calls to OCI Generative AI from Python

Posted at 2023-10-30

Introduction

This article was originally posted under the title "OCI Generative AI pre-GA (Beta) を (まだSDKが未対応の) Python から試すには" (how to try OCI Generative AI pre-GA (Beta) from Python before the SDK supported it), but Oracle Cloud Infrastructure (OCI) Generative AI is now generally available (GA), so please use the latest Python SDK.

Coding example using the Python SDK

import os, oci
from oci.generative_ai_inference import GenerativeAiInferenceClient
from oci.generative_ai_inference.models import \
  GenerateTextDetails, OnDemandServingMode, CohereLlmInferenceRequest, \
  GenerateTextResult

compartment_id = os.environ["COMPARTMENT_ID"]
config = oci.config.from_file()
config["region"] = "us-chicago-1"

client = GenerativeAiInferenceClient(config=config)

prompt = "Hello World!"

response = client.generate_text(
    GenerateTextDetails(
        compartment_id = compartment_id,
        inference_request = CohereLlmInferenceRequest(
            prompt=prompt,
            max_tokens=10
        ),
        serving_mode = OnDemandServingMode(model_id="cohere.command")
    )
).data # type: GenerateTextResult

print(response.inference_response.generated_texts[0].text)

""" output will be like ...
Hi there! How are you today?
"""

Please treat the code below as an example of calling the API from Python without using GenerativeAiInferenceClient. It has been updated to work with the GA release as well.

Main topic

Generative AI offers the following three kinds of models:

  • Generation: give instructions to generate text or to extract information from text.
  • Summarization: summarize text in the specified format, length, and tone.
  • Embedding: convert text into vector embeddings for use in applications such as semantic search, text classification, or text clustering.

There are two ways to use the models: use the ready-made pretrained models, or create and host a custom model on a dedicated AI cluster.
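For the dedicated-cluster case, the `servingMode` part of the request body switches from ON_DEMAND to DEDICATED and references an endpoint OCID instead of a model name. A minimal sketch of the two shapes; the endpoint OCID below is a hypothetical placeholder:

```python
# servingMode payloads for the two hosting options.
# On-demand mode names a pretrained model directly:
on_demand_mode = {
    "servingType": "ON_DEMAND",
    "modelId": "cohere.command"
}

# Dedicated mode points at the endpoint of a hosted custom model.
# The OCID below is a hypothetical placeholder, not a real endpoint.
dedicated_mode = {
    "servingType": "DEDICATED",
    "endpointId": "ocid1.generativeaiendpoint.oc1.us-chicago-1.example"
}

print(on_demand_mode["servingType"], dedicated_mode["servingType"])
```

Either dict can be passed as the `servingMode` argument of the functions defined later in this article.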

Note 1: As of this writing (January 2024), only Embed supports Japanese.
Note 2: As of this writing (January 2024), the service is offered only in the Chicago region, so use the following endpoint:
https://inference.generativeai.us-chicago-1.oci.oraclecloud.com
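The inference endpoint follows OCI's usual regional endpoint pattern, so if more regions become available it can be derived from the region identifier. A sketch; only us-chicago-1 is confirmed above:

```python
def generative_ai_endpoint(region: str) -> str:
    # Build the Generative AI inference endpoint from an OCI region identifier.
    return f"https://inference.generativeai.{region}.oci.oraclecloud.com"

print(generative_ai_endpoint("us-chicago-1"))
# https://inference.generativeai.us-chicago-1.oci.oraclecloud.com
```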

Trying the pretrained models from Python

We will build functions that call the Generative AI API (generation, embedding, summarization) using the SDK together with requests. This is the "Raw Request" approach, which is also described in the SDK documentation.

genai.py

""" 
OCI Generative AI を Python SDK (Generative AI) を使わずに実行 
requests で REST コール & 認証モジュールに oci.signer.Signer を使う
"""

import os, requests, oci
from oci.signer import Signer

# Endpoint is fixed to the Chicago region for now
endpoint = "https://inference.generativeai.us-chicago-1.oci.oraclecloud.com"

def get_config_signer(file_location=None, profile_name=None) -> Signer:
    """ ユーザー秘密鍵ベースの Signer を取得
      環境変数 OCI_CONFIG_FILE_LOC, OCI_CONFIG_PROFILE で読み込み先を変更可
    """
    
    default_file_loc = os.environ.get('OCI_CONFIG_FILE_LOC', '~/.oci/config')
    default_profile_name = os.environ.get('OCI_CONFIG_PROFILE', 'DEFAULT')
    file_loc = file_location if file_location else default_file_loc
    profile = profile_name if profile_name else default_profile_name
    config = oci.config.from_file(os.path.expanduser(file_loc), profile)
    signer = Signer(
        tenancy=config['tenancy'],
        user=config['user'],
        fingerprint=config['fingerprint'],
        private_key_file_location=config['key_file'],
        pass_phrase=config.get('pass_phrase')  # tolerate configs without a pass phrase
    )
    return signer

def get_instance_principal_signer() -> Signer:
    """ Instance Principal ベースの Signer 取得 """

    signer = oci.auth.signers.InstancePrincipalsSecurityTokenSigner()
    return signer

def generate_text(
        prompt:str, 
        compartment_id:str=None,
        signer:Signer=None,
        maxTokens:int=None, # Default: 20
        temperature:float=None, # Default: 1, 0-5
        frequencyPenalty:float=None, # Default: 0, 0-1
        presencePenalty:float=None, # Default: 0, 0-1
        numGenerations:int=None, # Default: 1, 1-5
        topP:float=None, # Default: 0.75, 0-1
        topK:int=None, # Default: 0, 0-500
        truncate:str=None, # NONE (default), START, END
        returnLikelihoods:str=None, # NONE (default), ALL, GENERATION
        stopSequences:list=None,
        isStream:bool=None,
        isEcho:bool=None,
        servingMode:dict=None, # Default: ON_DEMAND
        modelId:str=None # "cohere.command" (default), "cohere.command-light" 
        ) -> dict:
    """ テキストの生成 """

    path = "/20231130/actions/generateText"
    body = {
      "compartmentId" : compartment_id if compartment_id else os.environ["COMPARTMENT_ID"],
      "servingMode" : servingMode if servingMode else {
        "modelId": modelId if modelId else "cohere.command",
        "servingType" : "ON_DEMAND"
      },
      "inferenceRequest" : {
        "runtimeType": "COHERE" 
      }
    }

    inference_request = body["inferenceRequest"]
    inference_request["prompt"] = prompt
    if maxTokens: inference_request["maxTokens"] = maxTokens
    if temperature: inference_request["temperature"] = temperature
    if frequencyPenalty: inference_request["frequencyPenalty"] = frequencyPenalty
    if presencePenalty: inference_request["presencePenalty"] = presencePenalty
    if numGenerations: inference_request["numGenerations"] = numGenerations
    if topP: inference_request["topP"] = topP
    if topK: inference_request["topK"] = topK
    if truncate: inference_request["truncate"] = truncate
    if returnLikelihoods: inference_request["returnLikelihoods"] = returnLikelihoods
    if stopSequences: inference_request["stopSequences"] = stopSequences
    if isStream: inference_request["isStream"] = isStream
    if isEcho: inference_request["isEcho"] = isEcho
    
    response = requests.post(endpoint + path, json=body, auth=signer if signer else get_config_signer(), timeout=(3.0, 15.0))
    if not response.ok: raise Exception(response.text)
    return response.json()

def embed_text(
        inputs, 
        compartment_id:str=None,
        signer:Signer=None,
        isEcho:bool=None,
        truncate:str=None, # NONE, START, END
        servingMode:dict=None # Default: ON_DEMAND "cohere.embed-english-light-v2.0"
        ) -> dict:
    """ テキストの embedding """

    path = "/20231130/actions/embedText"
    input_list = [inputs] if isinstance(inputs, str) else inputs
    body = {
      "compartmentId" : compartment_id if compartment_id else os.environ["COMPARTMENT_ID"],
      "inputs" : input_list,
      "servingMode" : servingMode if servingMode else {
        "modelId": "cohere.embed-multilingual-v3.0",
        "servingType" : "ON_DEMAND"
      }
    }
    if isEcho: body["isEcho"] = isEcho
    if truncate: body["truncate"] = truncate
    response = requests.post(endpoint + path, json=body, auth=signer if signer else get_config_signer(), timeout=(3.0, 15.0))
    if not response.ok: raise Exception(response.text)
    return response.json()

def summarize_text(
        input:str, 
        compartment_id:str=None, 
        signer:Signer=None,
        additionalCommand:str=None,
        extractiveness:str=None, # AUTO (default), LOW, MEDIUM, HIGH
        format:str=None, # AUTO (default), PARAGRAPH, BULLETS
        length:str=None, # AUTO (default), SHORT, MEDIUM, LONG
        temperature:float=None, # Default: 1, 0-5
        isEcho:bool=None,
        servingMode:dict=None # Default: ON_DEMAND "cohere.command"
        ) -> dict:
    """ テキストの要約 """

    path = "/20231130/actions/summarizeText"
    body = {
      "compartmentId" : compartment_id if compartment_id else os.environ["COMPARTMENT_ID"],
      "input" : input,
      "servingMode" : servingMode if servingMode else {
        "modelId": "cohere.command",
        "servingType" : "ON_DEMAND"
      }
    }
    if additionalCommand: body["additionalCommand"] = additionalCommand
    if extractiveness: body["extractiveness"] = extractiveness
    if format: body["format"] = format
    if length: body["length"] = length
    if temperature: body["temperature"] = temperature
    if isEcho: body["isEcho"] = isEcho
    response = requests.post(endpoint + path, json=body, auth=signer if signer else get_config_signer(), timeout=(3.0, 15.0))
    if not response.ok: raise Exception(response.text)
    return response.json()

Now let's actually call the pretrained models using these functions.

"""
環境変数 COMPARTMENT_ID が設定されていることが前提
もしくは関数の引数で compartment_id を指定するよう修正すること
"""
import genai

# Generate text; no signer given, so the default ~/.oci/config file is used
prompt = "Hello World!"
response = genai.generate_text(prompt, temperature=1.5, maxTokens=100)
print(response["inferenceResponse"]["generatedTexts"][0]["text"])
""" 出力例
Hello! I'm your helpful AI friend. How can I assist you today?
"""

# Embed text, executed with an Instance Principal
texts = ["Hello World!", "What's up?"]
signer = genai.get_instance_principal_signer()
response = genai.embed_text(texts, isEcho=True, signer=signer)
print([ _ for _ in response["inputs"] ])
print([ len(_) for _ in response["embeddings"] ])
print([ _ for _ in response["embeddings"] ])
""" 出力例
['Hello World!', "What's up?"]
[1024, 1024]
[[-0.2199707, ... (omitted) ...  -1.7392578]]
"""

# Summarize text
input_text = """
Oracle Academy provides educational institutions with the resources they need to help educators develop core computing knowledge and skills aligned to industry standards and using current technologies—so they can teach students the skills they need to succeed.
At Oracle Academy, we know that educational institutions and teachers are invested in student success, and we share this core goal. It's critical that institutions and educators have access to the right resources to help ensure students achieve post-graduation success, whether moving onto college or graduate school, or into the job market.
Oracle Academy can help educators keep up with ever-changing technologies and software to help prepare students for their futures. We understand and value educators as collaborators who are empowered to facilitate innovative student learning in and outside the classroom.
Our free program offers institutions and educators access to a wide range of teaching and educational resources, including curriculum, classroom learning resources, software, cloud technology, practice environments, and much more.
"""
response = genai.summarize_text(input=input_text, extractiveness="LOW", length="SHORT")
print(response["summary"])
""" 出力例
Oracle Academy provides educational institutions with resources that help educators prepare students with industry-aligned skills.
"""

In closing

This article was written assuming prior knowledge of authentication and policies in OCI, so please bear with me.
