"Semi"-general-purpose image captioning using the CLIP model and the COCO dataset

Posted at 2024-04-28

Overview

Having heard that CLIP is a zero-shot image analysis model, I excitedly tried to have it do image captioning (describing an image in text), only to be disillusioned to find that I have to supply the candidate texts myself.

So I figured that making the candidate texts as general as possible would effectively yield a general-purpose image captioning model, and in this article I use the caption texts of the COCO dataset to build a "semi" general-purpose image captioning model.

Frankly, this is a lower-grade substitute for multimodal models such as GPT-4V, but I see value in the fact that it runs even in a CPU-only environment as long as you have the CSV.

Environment

OS: Windows 11
GPU: GeForce RTX 3060 Laptop
CPU: i7-10750H
Memory: 16 GB
Python: 3.10.11

* Google Colab is used for some of the steps.

Structure

The flow is as follows: the caption texts of the COCO dataset are vectorized with CLIP and saved to a CSV, and taking the similarity between that CSV and the embedding of an input image yields a "semi" general-purpose image caption.
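
At the heart of this flow is a cosine-similarity lookup over the caption vectors. The following is a minimal, self-contained sketch of that step, with random vectors standing in for the real CLIP embeddings (512 is the output dimension of ViT-B/32).

import numpy as np

# Cosine similarity between one image embedding and N caption embeddings
def cosine_similarity(text_vectors: np.ndarray, image_vector: np.ndarray) -> np.ndarray:
    text_norm = text_vectors / np.linalg.norm(text_vectors, axis=1, keepdims=True)
    image_norm = image_vector / np.linalg.norm(image_vector)
    return text_norm @ image_norm

# Toy example: random vectors stand in for CLIP embeddings
rng = np.random.default_rng(0)
caption_vectors = rng.normal(size=(5, 512))   # 5 caption embeddings
image_vector = rng.normal(size=512)           # 1 image embedding
scores = cosine_similarity(caption_vectors, image_vector)
print(scores.argmax(), scores)                # the argmax index selects the output caption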

Embedding the dataset captions on Google Colab

Download the dataset to Google Colab with the following commands.

!wget http://images.cocodataset.org/zips/val2017.zip
!wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
!unzip -q val2017.zip
!unzip -q annotations_trainval2017.zip

Execution result

--2024-04-27 13:21:28--  http://images.cocodataset.org/zips/val2017.zip
Resolving images.cocodataset.org (images.cocodataset.org)... 16.182.33.57, 52.216.134.99, 3.5.22.254, ...
Connecting to images.cocodataset.org (images.cocodataset.org)|16.182.33.57|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 815585330 (778M) [application/zip]
Saving to: ‘val2017.zip’

val2017.zip         100%[===================>] 777.80M  46.4MB/s    in 18s     

2024-04-27 13:21:46 (44.2 MB/s) - ‘val2017.zip’ saved [815585330/815585330]

--2024-04-27 13:21:46--  http://images.cocodataset.org/annotations/annotations_trainval2017.zip
Resolving images.cocodataset.org (images.cocodataset.org)... 3.5.0.101, 3.5.27.87, 52.217.232.89, ...
Connecting to images.cocodataset.org (images.cocodataset.org)|3.5.0.101|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 252907541 (241M) [application/zip]
Saving to: ‘annotations_trainval2017.zip’

annotations_trainva 100%[===================>] 241.19M  46.1MB/s    in 5.8s    

2024-04-27 13:21:52 (41.6 MB/s) - ‘annotations_trainval2017.zip’ saved [252907541/252907541]

Install the Python libraries required to use CLIP.

!pip install torch torchvision
!pip install git+https://github.com/openai/CLIP.git

Execution result

Requirement already satisfied: torch in /usr/local/lib/python3.10/dist-packages (2.2.1+cu121)
Requirement already satisfied: torchvision in /usr/local/lib/python3.10/dist-packages (0.17.1+cu121)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch) (3.13.4)
Requirement already satisfied: typing-extensions>=4.8.0 in /usr/local/lib/python3.10/dist-packages (from torch) (4.11.0)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch) (1.12)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch) (3.3)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch) (3.1.3)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from torch) (2023.6.0)
Collecting nvidia-cuda-nvrtc-cu12==12.1.105 (from torch)
  Using cached nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (23.7 MB)
Collecting nvidia-cuda-runtime-cu12==12.1.105 (from torch)
  Using cached nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (823 kB)
Collecting nvidia-cuda-cupti-cu12==12.1.105 (from torch)
  Using cached nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (14.1 MB)
Collecting nvidia-cudnn-cu12==8.9.2.26 (from torch)
  Using cached nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (731.7 MB)
Collecting nvidia-cublas-cu12==12.1.3.1 (from torch)
  Using cached nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (410.6 MB)
Collecting nvidia-cufft-cu12==11.0.2.54 (from torch)
  Using cached nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (121.6 MB)
Collecting nvidia-curand-cu12==10.3.2.106 (from torch)
  Using cached nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (56.5 MB)
Collecting nvidia-cusolver-cu12==11.4.5.107 (from torch)
  Using cached nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (124.2 MB)
Collecting nvidia-cusparse-cu12==12.1.0.106 (from torch)
  Using cached nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (196.0 MB)
Collecting nvidia-nccl-cu12==2.19.3 (from torch)
  Using cached nvidia_nccl_cu12-2.19.3-py3-none-manylinux1_x86_64.whl (166.0 MB)
Collecting nvidia-nvtx-cu12==12.1.105 (from torch)
  Using cached nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)
Requirement already satisfied: triton==2.2.0 in /usr/local/lib/python3.10/dist-packages (from torch) (2.2.0)
Collecting nvidia-nvjitlink-cu12 (from nvidia-cusolver-cu12==11.4.5.107->torch)
  Using cached nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (21.1 MB)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from torchvision) (1.25.2)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /usr/local/lib/python3.10/dist-packages (from torchvision) (9.4.0)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch) (2.1.5)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch) (1.3.0)
Installing collected packages: nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, nvidia-cusparse-cu12, nvidia-cudnn-cu12, nvidia-cusolver-cu12
Successfully installed nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.19.3 nvidia-nvjitlink-cu12-12.4.127 nvidia-nvtx-cu12-12.1.105
Collecting git+https://github.com/openai/CLIP.git
  Cloning https://github.com/openai/CLIP.git to /tmp/pip-req-build-nx0hsc47
  Running command git clone --filter=blob:none --quiet https://github.com/openai/CLIP.git /tmp/pip-req-build-nx0hsc47
  Resolved https://github.com/openai/CLIP.git to commit a1d071733d7111c9c014f024669f959182114e33
  Preparing metadata (setup.py) ... done
Collecting ftfy (from clip==1.0)
  Downloading ftfy-6.2.0-py3-none-any.whl (54 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.4/54.4 kB 1.7 MB/s eta 0:00:00
Requirement already satisfied: regex in /usr/local/lib/python3.10/dist-packages (from clip==1.0) (2023.12.25)
Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from clip==1.0) (4.66.2)
Requirement already satisfied: torch in /usr/local/lib/python3.10/dist-packages (from clip==1.0) (2.2.1+cu121)
Requirement already satisfied: torchvision in /usr/local/lib/python3.10/dist-packages (from clip==1.0) (0.17.1+cu121)
Requirement already satisfied: wcwidth<0.3.0,>=0.2.12 in /usr/local/lib/python3.10/dist-packages (from ftfy->clip==1.0) (0.2.13)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (3.13.4)
Requirement already satisfied: typing-extensions>=4.8.0 in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (4.11.0)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (1.12)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (3.3)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (3.1.3)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (2023.6.0)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (12.1.105)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (12.1.105)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (12.1.105)
Requirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (8.9.2.26)
Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (12.1.3.1)
Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (11.0.2.54)
Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (10.3.2.106)
Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (11.4.5.107)
Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (12.1.0.106)
Requirement already satisfied: nvidia-nccl-cu12==2.19.3 in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (2.19.3)
Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (12.1.105)
Requirement already satisfied: triton==2.2.0 in /usr/local/lib/python3.10/dist-packages (from torch->clip==1.0) (2.2.0)
Requirement already satisfied: nvidia-nvjitlink-cu12 in /usr/local/lib/python3.10/dist-packages (from nvidia-cusolver-cu12==11.4.5.107->torch->clip==1.0) (12.4.127)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from torchvision->clip==1.0) (1.25.2)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /usr/local/lib/python3.10/dist-packages (from torchvision->clip==1.0) (9.4.0)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch->clip==1.0) (2.1.5)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch->clip==1.0) (1.3.0)
Building wheels for collected packages: clip
  Building wheel for clip (setup.py) ... done
  Created wheel for clip: filename=clip-1.0-py3-none-any.whl size=1369499 sha256=ecd3ce0fec35a3053019fc315bc51b42a601f209d51d5fb0b892c1dbdead97e0
  Stored in directory: /tmp/pip-ephem-wheel-cache-prey6aqe/wheels/da/2b/4c/d6691fa9597aac8bb85d2ac13b112deb897d5b50f5ad9a37e4
Successfully built clip
Installing collected packages: ftfy, clip
Successfully installed clip-1.0 ftfy-6.2.0

Embed the image captions (texts) of the dataset with CLIP.

import clip
import torch
from pycocotools.coco import COCO
from PIL import Image
import pandas as pd
import numpy as np
from tqdm import tqdm

# Load the CLIP model
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Initialize the COCO dataset
dataDir = './'
dataType = 'val2017'
annFile = f'{dataDir}/annotations/captions_val2017.json'
coco = COCO(annFile)

# Get all image IDs
imgIds = coco.getImgIds()

# Batch size for text encoding
batch_size = 32
text_batch = []

# Partial DataFrames collected here are periodically appended to the CSV
df_list = []

# Batch-processing loop
for imgId in tqdm(imgIds):
    annIds = coco.getAnnIds(imgIds=[imgId])
    anns = coco.loadAnns(annIds)
    for ann in anns:
        text = ann['caption']
        text_batch.append(text)
        if len(text_batch) >= batch_size:
            text_inputs = clip.tokenize(text_batch).to(device)
            with torch.no_grad():
                text_features = model.encode_text(text_inputs).cpu().numpy()
            for text, vector in zip(text_batch, text_features):
                # Add a new row to the list
                df_list.append(pd.DataFrame({"caption": [text], "vector": [vector.tolist()]}))
            text_batch = []
            if len(df_list) >= 1000:  # Periodically flush to the CSV
                pd.concat(df_list).to_csv('text_embeddings.csv', mode='a', header=False, index=False)
                df_list = []  # Reset the list

# Process the remaining captions
if text_batch:
    text_inputs = clip.tokenize(text_batch).to(device)
    with torch.no_grad():
        text_features = model.encode_text(text_inputs).cpu().numpy()
    for text, vector in zip(text_batch, text_features):
        df_list.append(pd.DataFrame({"caption": [text], "vector": [vector.tolist()]}))

# Save the vectors as lists to the CSV file (append mode, no header)
pd.concat(df_list).to_csv('text_embeddings.csv', mode='a', header=False, index=False)

Execution result

100%|███████████████████████████████████████| 338M/338M [00:11<00:00, 30.6MiB/s]
loading annotations into memory...
Done (t=0.06s)
creating index...
index created!
100%|██████████| 5000/5000 [00:51<00:00, 96.60it/s] 
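
Before moving on, a quick sanity check on the generated CSV can be worthwhile. This is a small sketch that assumes the file was written exactly as above: no header, the caption in the first column and the vector literal in the second.

import ast
import pandas as pd

# Read the CSV back without parsing the vectors, just to check size and dimensionality
df_check = pd.read_csv('text_embeddings.csv', header=None, names=["caption", "vector"])
print(len(df_check), "captions saved")                      # roughly 5 captions per image
first_vector = ast.literal_eval(df_check["vector"].iloc[0])
print("vector length:", len(first_vector))                  # 512 for ViT-B/32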

Confirm that image captioning works on the following image (000000000139.jpg).
(Since this image is part of the COCO dataset itself, it should basically work well.)

000000000139.jpg

# Load and embed the image
image_path = './val2017/000000000139.jpg'
image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
with torch.no_grad():
    image_features = model.encode_image(image)

# Read the CSV back and parse each vector string into a list
df = pd.read_csv('text_embeddings.csv', header=None, names=["caption", "vector"],
                 converters={'vector': eval})

# Move the image features to the CPU as a NumPy array
image_features = image_features.cpu().numpy()

# Compute cosine similarity between the image and every caption vector
df['similarity'] = df['vector'].apply(lambda x: np.dot(x, image_features.reshape(-1)) / (np.linalg.norm(x) * np.linalg.norm(image_features)))

# Get the caption with the highest similarity
most_similar_caption = df.loc[df['similarity'].idxmax()]['caption']
print("Most similar caption:", most_similar_caption)

Execution result

Most similar caption: A large living room is seen in this image.

The output says a large living room is visible in the image, which matches the picture, so the image captioning works well.
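
If you want to see more candidates than just the best match, you can sort by the similarity column computed above. This small sketch reuses the df from the previous snippet.

# Show the five most similar captions together with their scores
top5 = df.sort_values("similarity", ascending=False).head(5)
for caption, score in zip(top5["caption"], top5["similarity"]):
    print(f"{score:.3f}  {caption}")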

Running image captioning locally

Build a Python virtual environment on your local PC with the following commands.

python -m venv clip_explain_venv
.\clip_explain_venv\Scripts\activate
pip install torch torchvision
pip install pandas
pip install git+https://github.com/openai/CLIP.git
deactivate
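
To confirm the environment works before going further, a quick check inside the activated venv looks like this (a minimal sketch).

import clip
import torch

# 'ViT-B/32' should appear in the list; CUDA being unavailable is fine, CPU also works
print(clip.available_models())
print("CUDA available:", torch.cuda.is_available())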

Download the CSV of the vectorized caption texts from Colab and place it in the clip_explain_venv folder.

For those who cannot use Colab, the CSV produced by the steps above is available below.

Output a description for the following image.

nemuru2.png

Activate the virtual environment (e.g. with ".\Scripts\activate") and run the following program in the folder where text_embeddings.csv is saved.

import clip
import torch
import numpy as np
from PIL import Image
import pandas as pd

# Load the CLIP model
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Load and embed the image
image_path = r'XXXXX\nemuru.png' # ★TODO: fix this path
image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
with torch.no_grad():
    image_features = model.encode_image(image)

# Read the CSV back and parse each vector string into a list
df = pd.read_csv('text_embeddings.csv', header=None, names=["caption", "vector"],
                 converters={'vector': eval})

# Move the image features to the CPU as a NumPy array
image_features = image_features.cpu().numpy()

# Compute cosine similarity between the image and every caption vector
df['similarity'] = df['vector'].apply(lambda x: np.dot(x, image_features.reshape(-1)) / (np.linalg.norm(x) * np.linalg.norm(image_features)))

# Get the caption with the highest similarity
most_similar_caption = df.loc[df['similarity'].idxmax()]['caption']
print("Most similar caption:", most_similar_caption)

Execution result

Most similar caption: The woman is walking carefully through the leaves.

This is close to what the image actually shows, so a reasonable caption was produced.
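
As an aside, applying the similarity row by row over roughly 25,000 captions is slow on a CPU. Stacking the vectors into one matrix does the same computation much faster; this sketch reuses df and image_features from the script above.

# Same cosine similarity, computed with a single matrix multiplication
text_matrix = np.array(df['vector'].tolist())   # shape (N, 512)
image_vec = image_features.reshape(-1)          # shape (512,)
scores = (text_matrix @ image_vec) / (np.linalg.norm(text_matrix, axis=1) * np.linalg.norm(image_vec))
print("Most similar caption:", df['caption'].iloc[int(scores.argmax())])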

The same check is run on the following image as well.

イラスト112.png

Execution result

Most similar caption: A red headed doll sitting next to a clock.

There is no clock anywhere in the image, and the person has been recognized as a doll, so this caption misses the mark.

The approach is not fully general-purpose and the results are far from perfect, but it clearly has a certain level of image-captioning capability.
