Fine-tuning Llama 2 7B Chat with the DeepSpeed distributor


In this article, I walk through this sample notebook.

What is the DeepSpeed distributor?

The DeepSpeed distributor is built on top of TorchDistributor and is the recommended solution for customers whose models require more compute power but are limited by memory constraints.

The DeepSpeed library is an open-source library developed by Microsoft and is available in Databricks Runtime 14.0 ML and above. With optimized memory usage, reduced communication overhead, and advanced pipeline parallelism, it makes it possible to scale models and training procedures that would not be feasible on standard hardware.


Downloading the model

The sample notebook above fine-tunes Llama 2 7B Chat from Hugging Face. Because Llama 2 7B Chat is gated, you need a Hugging Face token that has been granted access to the model. Even when I specified the token in the notebook, I hit a permission error during tuning, so here I download the model separately beforehand and fine-tune that local copy instead.

# Log in to Hugging Face (required to download the gated model)
from huggingface_hub import notebook_login

notebook_login()
import transformers
from transformers import (
  AutoConfig,
  AutoModelForCausalLM,
  DataCollatorForLanguageModeling,
  PreTrainedTokenizer,
  Trainer,
  TrainingArguments,
  AutoTokenizer
)

import torch

MODEL_URI = "meta-llama/Llama-2-7b-chat-hf"
TOKENIZER_PATH = '/Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf'
MODEL_PATH = '/Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf'

# Download the model and save it to the volume
model = transformers.AutoModelForCausalLM.from_pretrained(
    MODEL_URI,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True
  )
model.save_pretrained(MODEL_PATH) 

# Download the tokenizer and save it to the volume
tokenizer = AutoTokenizer.from_pretrained(MODEL_URI)
tokenizer.save_pretrained(TOKENIZER_PATH) 

The model and tokenizer are now saved to the volume.

('/Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf/tokenizer_config.json',
 '/Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf/special_tokens_map.json',
 '/Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf/tokenizer.model',
 '/Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf/added_tokens.json',
 '/Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf/tokenizer.json')
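
If you want to double-check what landed in the volume, you can list it from the notebook. This is a minimal optional sketch, assuming the standard Databricks notebook environment where dbutils and display are available:

# Optional sanity check: list the files saved to the Unity Catalog volume
display(dbutils.fs.ls("/Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf"))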

Fine-tuning Llama 2 7B Chat with the DeepSpeed distributor

This notebook provides an example of fine-tuning the Llama-2-7b-chat-hf model using Apache Spark's DeepspeedTorchDistributor and the Hugging Face transformers library.

Requirements

This notebook requires the following:

  • A multi-node GPU cluster. At the time of writing, DeepSpeed does not support running on CPUs.
  • Databricks Runtime 14.0 ML or above
  • g4dn.12xlarge (4x T4) on AWS

Since I have not yet been able to confirm that this works across multiple nodes, this walkthrough uses a single node with 4 GPUs instead of a multi-node cluster.
(Screenshot of the cluster configuration.)

(Optional) Defining the DeepSpeed configuration

You can optionally provide the distributor with a DeepSpeed configuration. If you don't specify one, the default configuration is applied.

The configuration can be passed either as a Python dictionary or as a string containing the path to a file that holds the JSON configuration (a sketch of the file-path variant follows the dictionary below).

deepspeed_config = {
  "fp16": {
    "enabled": True
  },
  "bf16": {
    "enabled": False
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": "auto",
      "eps": "auto",
      "weight_decay": "auto"
    }
  },
  "scheduler": {
    "type": "WarmupLR",
    "params": {
      "warmup_min_lr": "auto",
      "warmup_max_lr": "auto",
      "warmup_num_steps": "auto"
    }
  },
  "zero_optimization": {
    "stage": 3,
    "overlap_comm": True,
    "contiguous_gradients": True,
    "sub_group_size": 5e7,
    "reduce_bucket_size": "auto",
    "reduce_scatter": True,
    "stage3_max_live_parameters" : 1e9,
    "stage3_max_reuse_distance" : 1e9,
    "stage3_prefetch_bucket_size" : 5e8,
    "stage3_param_persistence_threshold" : 1e6,
    "stage3_gather_16bit_weights_on_model_save": True,
    "offload_optimizer": {
      "device": "cpu",
      "pin_memory": True
    }
  },
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "steps_per_print": 2000,
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "wall_clock_breakdown": False
}
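
As noted above, you can also pass the distributor a path to a JSON file instead of a dictionary. A minimal sketch of that variant (the file location used here is an arbitrary illustrative path, not one from the notebook):

import json

# Write the configuration out as JSON; the resulting path can be passed
# to the distributor instead of the Python dictionary.
deepspeed_config_path = "/local_disk0/deepspeed_config.json"  # illustrative path
with open(deepspeed_config_path, "w") as f:
    json.dump(deepspeed_config, f)

# Later, when creating the distributor:
#   DeepspeedTorchDistributor(..., deepspeedConfig=deepspeed_config_path)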

Creating the DeepSpeed distributor

When creating the distributor, you can specify the number of nodes to use and the number of GPUs per node.

import torch
 
NUM_WORKERS = int(spark.conf.get("spark.databricks.clusterUsageTags.clusterWorkers", "1"))
 
def get_gpus_per_worker(_):
  import torch
  return torch.cuda.device_count()
 
NUM_GPUS_PER_WORKER = sc.parallelize(range(4), 4).map(get_gpus_per_worker).collect()[0]

print("NUM_WORKERS:", NUM_WORKERS)
print("NUM_GPUS_PER_WORKER:", NUM_GPUS_PER_WORKER)

NUM_WORKERS: 0
NUM_GPUS_PER_WORKER: 4

from pyspark.ml.deepspeed.deepspeed_distributor import DeepspeedTorchDistributor

dist = DeepspeedTorchDistributor(
  numGpus=NUM_GPUS_PER_WORKER,
  #nnodes=NUM_WORKERS,
  #localMode=False,  # distribute training across the workers
  localMode=True,  # train on the driver node
  deepspeedConfig=deepspeed_config
  )

Defining the training function

This example uses Hugging Face's transformers package to fine-tune Llama 2.

from datasets import Dataset, load_dataset
import os
from transformers import AutoTokenizer

TOKENIZER_PATH = '/Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf'
MODEL_PATH = '/Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf'

#TOKENIZER_PATH = '/dbfs/llama2_models/Llama-2-7b-chat-hf'

#TOKENIZER_PATH = "meta-llama/Llama-2-7b-chat-hf"
#MODEL_PATH = "meta-llama/Llama-2-7b-chat-hf"

DEFAULT_TRAINING_DATASET = "databricks/databricks-dolly-15k"

INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
INSTRUCTION_KEY = "### Instruction:"
INPUT_KEY = "Input:"
RESPONSE_KEY = "### Response:"
PROMPT_NO_INPUT_FORMAT = """{intro}

{instruction_key}
{instruction}

{response_key}""".format(
  intro=INTRO_BLURB,
  instruction_key=INSTRUCTION_KEY,
  instruction="{instruction}",
  response_key=RESPONSE_KEY,
)

PROMPT_WITH_INPUT_FORMAT = """{intro}

{instruction_key}
{instruction}

{input_key}
{input}

{response_key}""".format(
  intro=INTRO_BLURB,
  instruction_key=INSTRUCTION_KEY,
  instruction="{instruction}",
  input_key=INPUT_KEY,
  input="{input}",
  response_key=RESPONSE_KEY,
)


def load_training_dataset(
  tokenizer,
  path_or_dataset: str = DEFAULT_TRAINING_DATASET,
) -> Dataset:
  print(f"Loading dataset from {path_or_dataset}")
  dataset = load_dataset(path_or_dataset, cache_dir='/dbfs/llama2-deepspeed')["train"]
  print(f"Found {dataset.num_rows} rows")

  def _reformat_data(rec):
    instruction = rec["instruction"]
    response = rec["response"]
    context = rec.get("context")

    if context:
      questions = PROMPT_WITH_INPUT_FORMAT.format(instruction=instruction, input=context)
    else:
      questions = PROMPT_NO_INPUT_FORMAT.format(instruction=instruction)
    return {"text": f"{{ 'prompt': {questions}, 'response': {response} }}"}
  
  dataset = dataset.map(_reformat_data)

  def tokenize_function(allEntries):
    return tokenizer(allEntries['text'], truncation=True, max_length=512,)
  
  dataset = dataset.map(tokenize_function)
  split_dataset = dataset.train_test_split(test_size=1000)
  train_tokenized_dataset = split_dataset['train']
  eval_tokenized_dataset = split_dataset['test']

  return train_tokenized_dataset, eval_tokenized_dataset

tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_PATH)
tokenizer.pad_token = tokenizer.eos_token
train_dataset, eval_dataset = load_training_dataset(tokenizer)

The following command defines the training function that DeepSpeed executes. The training function is sent to each worker for execution; it loads the model, uses the transformers library to set up the training arguments, and uses the HF Trainer for training.

from functools import partial
import json
import logging
import os
import numpy as np
from pathlib import Path
import torch

import transformers
from transformers import (
  AutoConfig,
  AutoModelForCausalLM,
  DataCollatorForLanguageModeling,
  PreTrainedTokenizer,
  Trainer,
  TrainingArguments,
)


os.environ['HF_HOME'] = '/local_disk0/hf'
os.environ['TRANSFORMERS_CACHE'] = '/local_disk0/hf'

LOCAL_OUTPUT_DIR = "/Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf/output"

def load_model(pretrained_model_name_or_path: str) -> AutoModelForCausalLM:
  print(f"Loading model for {pretrained_model_name_or_path}")
  model = transformers.AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True
  )
  config = AutoConfig.from_pretrained(pretrained_model_name_or_path)
  model_hidden_size = config.hidden_size
  return model, model_hidden_size

from torch.distributed.elastic.multiprocessing.errors import record                          
                                                                                                                
@record 
def fine_tune_llama2(
  *,
  local_rank: str = None,
  input_model: str = MODEL_PATH,
  local_output_dir: str = LOCAL_OUTPUT_DIR,
  dbfs_output_dir: str = None,
  epochs: int = 3,
  per_device_train_batch_size: int = 10,
  per_device_eval_batch_size: int = 10,
  lr: float = 1e-5,
  gradient_checkpointing: bool = True,
  gradient_accumulation_steps: int = 8,
  bf16: bool = False,
  logging_steps: int = 10,
  save_steps: int = 400,
  max_steps: int = 200,
  eval_steps: int = 50,
  save_total_limit: int = 10,
  warmup_steps: int = 20,
  training_dataset: str = DEFAULT_TRAINING_DATASET,
):
  os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

  model, model_hidden_size = load_model(input_model)
  
  deepspeed_config["hidden_size"] = model_hidden_size
  deepspeed_config["zero_optimization"]["reduce_bucket_size"] = model_hidden_size*model_hidden_size
  deepspeed_config["zero_optimization"]["stage3_prefetch_bucket_size"] = 0.9 * model_hidden_size * model_hidden_size
  deepspeed_config["zero_optimization"]["stage3_param_persistence_threshold"] = 10 * model_hidden_size

  fp16 = not bf16

  training_args = TrainingArguments(
    output_dir=local_output_dir,
    per_device_train_batch_size=per_device_train_batch_size,
    per_device_eval_batch_size=per_device_eval_batch_size,
    gradient_checkpointing=gradient_checkpointing,
    gradient_accumulation_steps=gradient_accumulation_steps,
    learning_rate=lr,
    num_train_epochs=epochs,
    weight_decay=1,
    do_eval=True,
    evaluation_strategy="steps",
    eval_steps=eval_steps,
    fp16=fp16,
    bf16=bf16,
    deepspeed=deepspeed_config,
    logging_strategy="steps",
    logging_steps=logging_steps,
    save_strategy="steps",
    save_steps=save_steps,
    max_steps=max_steps,
    save_total_limit=save_total_limit,
    local_rank=local_rank,
    warmup_steps=warmup_steps,
    report_to=[],
  )
  
  data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=False
  )

  trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
  )

  print("Training the model")
  trainer.train()

  print(f"Saving Model to {local_output_dir}")
  trainer.save_model(output_dir=local_output_dir)
  tokenizer.save_pretrained(local_output_dir)

  if dbfs_output_dir:
    print(f"Saving Model to {dbfs_output_dir}")
    trainer.save_model(output_dir=dbfs_output_dir)
    tokenizer.save_pretrained(dbfs_output_dir)

  print("Training finished.")

Running the distributor

dist.run(fine_tune_llama2, epochs=1, max_steps=1)

Fine-tuning runs with 4 processes.

Started local training with 4 processes
WARNING:__main__:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
*****************************************
/databricks/python/lib/python3.11/site-packages/transformers/utils/hub.py:124: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
/databricks/python/lib/python3.11/site-packages/transformers/utils/hub.py:124: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
/databricks/python/lib/python3.11/site-packages/transformers/utils/hub.py:124: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
/databricks/python/lib/python3.11/site-packages/transformers/utils/hub.py:124: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
2024-08-01 10:28:48.801206: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-08-01 10:28:48.801823: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-08-01 10:28:48.801824: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-08-01 10:28:48.807394: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-08-01 10:28:48.851522: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-08-01 10:28:48.851526: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-08-01 10:28:48.851523: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-08-01 10:28:48.855666: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Loading model for /Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf
Loading model for /Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf
Loading model for /Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf
Loading model for /Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf
Loading checkpoint shards: 100%|██████████| 2/2 [00:08<00:00,  4.43s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:08<00:00,  4.46s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:08<00:00,  4.47s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:08<00:00,  4.50s/it]
[2024-08-01 10:29:01,836] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-08-01 10:29:01,846] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-08-01 10:29:01,846] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-08-01 10:29:01,847] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-08-01 10:29:02,545] [INFO] [comm.py:637:init_distributed] cdb=None
[2024-08-01 10:29:02,545] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2024-08-01 10:29:02,546] [INFO] [comm.py:637:init_distributed] cdb=None
[2024-08-01 10:29:02,556] [INFO] [comm.py:637:init_distributed] cdb=None
[2024-08-01 10:29:02,564] [INFO] [comm.py:637:init_distributed] cdb=None
max_steps is given, it will override any value given in num_train_epochs
Training the model
max_steps is given, it will override any value given in num_train_epochs
Training the model
max_steps is given, it will override any value given in num_train_epochs
Training the model
max_steps is given, it will override any value given in num_train_epochs
Training the model
Using /root/.cache/torch_extensions/py311_cu121 as PyTorch extensions root...
Using /root/.cache/torch_extensions/py311_cu121 as PyTorch extensions root...
Using /root/.cache/torch_extensions/py311_cu121 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /root/.cache/torch_extensions/py311_cu121/cpu_adam/build.ninja...
Using /root/.cache/torch_extensions/py311_cu121 as PyTorch extensions root...
/databricks/python/lib/python3.11/site-packages/torch/utils/cpp_extension.py:1967: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. 
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module cpu_adam...
Time to load cpu_adam op: 2.498084306716919 seconds
Loading extension module cpu_adam...
Loading extension module cpu_adam...
Time to load cpu_adam op: 2.504244804382324 seconds
Time to load cpu_adam op: 2.497251510620117 seconds
Loading extension module cpu_adam...
Time to load cpu_adam op: 2.5208399295806885 seconds
Parameter Offload: Total persistent parameters: 266240 in 65 params
  0%|          | 0/1 [00:00<?, ?it/s]`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`.
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`.
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`.
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`.
/databricks/python/lib/python3.11/site-packages/torch/utils/checkpoint.py:464: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.4 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
/databricks/python/lib/python3.11/site-packages/torch/utils/checkpoint.py:464: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.4 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
/databricks/python/lib/python3.11/site-packages/torch/utils/checkpoint.py:464: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.4 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
/databricks/python/lib/python3.11/site-packages/torch/utils/checkpoint.py:464: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.4 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
{'train_runtime': 125.5082, 'train_samples_per_second': 2.55, 'train_steps_per_second': 0.008, 'train_loss': 2.365330219268799, 'epoch': 0.02}
100%|██████████| 1/1 [02:04<00:00, 124.09s/it]Saving Model to /Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf/output
Saving Model to /Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf/output
100%|██████████| 1/1 [02:04<00:00, 124.21s/it]
Saving Model to /Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf/output
Saving Model to /Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf/output
Training finished.
Training finished.
Training finished.
Training finished.
Finished local training with 4 processes

Once run() completes, you can load the model from the local output path (in this notebook, /Volumes/users/takaaki_yayoi/llama2_models/Llama-2-7b-chat-hf/output).

tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_PATH)
tokenizer.pad_token = tokenizer.eos_token
pipeline = transformers.pipeline(
    "text-generation",
    model= LOCAL_OUTPUT_DIR,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device_map="auto",
    return_full_text=False
)
pipeline("What is ML?")

[{'generated_text': '\n\nMachine learning (ML) is a subfield of artificial intelligence (AI) that involves the use of algorithms and statistical models to enable machines to learn from data, make decisions, and improve their performance on a specific task over time.\n\nIn simple terms, machine learning is a type of AI that allows software programs to learn and improve their performance on a task without being explicitly programmed for that task. It does this by analyzing data and identifying patterns, which can then be used to make predictions or decisions.\n\nMachine learning is used in a wide range of applications, including image and speech recognition, natural language processing, recommendation systems, fraud detection, and predictive analytics.\n\nThere are several types of machine learning, including:\n\n1. Supervised learning: In this type of machine learning, the algorithm is trained on labeled data, where the correct output is already known. The algorithm learns to map inputs to outputs based on the labeled data, and can then make predictions on new, unseen data.\n2. Unsupervised learning: In this type of machine learning, the algorithm is trained on unlabeled data, and must find patterns or structure in the data on its own.\n3. Semi-supervised learning: This type of machine learning combines elements of supervised and unsupervised learning, where the algorithm is trained on a mix of labeled and unlabeled data.\n4. Reinforcement learning: In this type of machine learning, the algorithm learns by interacting with an environment and receiving feedback in the form of rewards or penalties.\n\nMachine learning has many applications in various industries, including:\n\n1. Healthcare: Machine learning can be used to analyze medical images, diagnose diseases, and predict patient outcomes.\n2. Finance: Machine learning can be used to detect fraud, predict stock prices, and optimize investment portfolios.\n3. Retail: Machine learning can be used to personalize recommendations, optimize pricing, and improve supply chain management.\n4. Manufacturing: Machine learning can be used to predict equipment failures, optimize production processes, and improve product quality.\n5. Transportation: Machine learning can be used to improve autonomous vehicles, optimize traffic flow, and predict maintenance needs.\n\nOverall, machine learning is a powerful technology that can help organizations automate decision-making processes, improve efficiency, and drive innovation.'}]

It works, but the next step is to get this running on multiple nodes as well.
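
For reference, based on the parameters that are commented out in the distributor creation cell above, a multi-node run would presumably look something like the following (untested here, since multi-node operation has not been confirmed yet):

from pyspark.ml.deepspeed.deepspeed_distributor import DeepspeedTorchDistributor

# Hypothetical multi-node setup: distribute training to the worker nodes
# instead of running everything on the driver.
dist = DeepspeedTorchDistributor(
  numGpus=NUM_GPUS_PER_WORKER,
  nnodes=NUM_WORKERS,    # number of worker nodes in the cluster
  localMode=False,       # run on the workers rather than the driver
  deepspeedConfig=deepspeed_config
)

dist.run(fine_tune_llama2, epochs=1, max_steps=1)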
