

Resizing Cloud9


No space left on device error

Cloud9's default disk size is only 10 GB, so handling even moderately large files produces an error like the one below.

ERROR: Could not install packages due to an EnvironmentError: [Errno 28] No space left on device
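
Before resizing, it is worth confirming that the root filesystem really is full. A quick check (a general Linux command, not part of the original procedure):

$ df -h /

If Use% on / is at or near 100%, the resize script below should resolve the error.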

Resize script

Create the script below and pass the desired size (in GiB) as an argument.

#!/bin/bash

# Specify the desired volume size in GiB as a command-line argument. If not specified, default to 20 GiB.
SIZE=${1:-20}

# Get the ID of the environment host Amazon EC2 instance.
INSTANCEID=$(curl http://169.254.169.254/latest/meta-data/instance-id)

# Get the ID of the Amazon EBS volume associated with the instance.
VOLUMEID=$(aws ec2 describe-instances \
  --instance-id $INSTANCEID \
  --query "Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.VolumeId" \
  --output text)

# Resize the EBS volume.
aws ec2 modify-volume --volume-id $VOLUMEID --size $SIZE

# Wait for the resize to finish.
while [ \
  "$(aws ec2 describe-volumes-modifications \
    --volume-id $VOLUMEID \
    --filters Name=modification-state,Values="optimizing","completed" \
    --query "length(VolumesModifications)"\
    --output text)" != "1" ]; do
  sleep 1
done

# Check whether the root device uses Xen-style naming (/dev/xvda) or NVMe naming (/dev/nvme0n1).
# If /dev/xvda resolves to itself, we are on the Xen-style (non-NVMe) path.
if [ "$(readlink -f /dev/xvda)" = "/dev/xvda" ]
then
  # Rewrite the partition table so that the partition takes up all the space that it can.
  sudo growpart /dev/xvda 1

  # Expand the size of the file system.
  # Check if we are on AL2
  STR=$(cat /etc/os-release)
  SUB="VERSION_ID=\"2\""
  if [[ "$STR" == *"$SUB"* ]]
  then
    sudo xfs_growfs -d /
  else
    sudo resize2fs /dev/xvda1
  fi

else
  # Rewrite the partition table so that the partition takes up all the space that it can.
  sudo growpart /dev/nvme0n1 1

  # Expand the size of the file system.
  # Check if we're on AL2
  STR=$(cat /etc/os-release)
  SUB="VERSION_ID=\"2\""
  if [[ "$STR" == *"$SUB"* ]]
  then
    sudo xfs_growfs -d /
  else
    sudo resize2fs /dev/nvme0n1p1
  fi
fi
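
Note: the metadata call at the top of the script is an IMDSv1-style curl. On instances that enforce IMDSv2, that request is rejected with a 401 and INSTANCEID ends up empty. A minimal sketch of the token-based variant, assuming IMDSv2 is enabled on the instance:

# Fetch an IMDSv2 session token, then pass it when requesting the instance ID.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
INSTANCEID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id)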

Run

Let's run it.

$ sh resize.sh 50
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    19  100    19    0     0  11137      0 --:--:-- --:--:-- --:--:-- 19000
{
    "VolumeModification": {
        "TargetSize": 50, 
        "OriginalMultiAttachEnabled": false, 
        "TargetVolumeType": "gp2", 
        "ModificationState": "modifying", 
        "TargetMultiAttachEnabled": false, 
        "VolumeId": "vol-0a78ebbe02a4f40bc", 
        "TargetIops": 150, 
        "StartTime": "2023-02-13T14:28:58.000Z", 
        "Progress": 0, 
        "OriginalVolumeType": "gp2", 
        "OriginalIops": 100, 
        "OriginalSize": 10
    }
}
CHANGED: partition=1 start=4096 old: size=20967391 end=20971487 new: size=104853471 end=104857567
meta-data=/dev/nvme0n1p1         isize=512    agcount=6, agsize=524159 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0
         =                       reflink=0    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=2620923, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2620923 to 13106683

If the output looks like this, it succeeded. df -h confirms the disk is now 50 GB.

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        970M     0  970M   0% /dev
tmpfs           978M     0  978M   0% /dev/shm
tmpfs           978M  460K  977M   1% /run
tmpfs           978M     0  978M   0% /sys/fs/cgroup
/dev/nvme0n1p1   50G  9.5G   41G  19% /
tmpfs           196M     0  196M   0% /run/user/1000
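
For a block-device-level double check, lsblk (a standard Linux tool, not mentioned in the original procedure) also shows the grown partition:

$ lsblk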
