# Live Camera/CCTV Time Series Image Compression using Image Similarity

Introduction

Recently, many live cameras/CCTVs (closed-circuit television) have been installed in places such as offices, roads, and homes to support human work. CCTVs are installed for various purposes, such as security, surveillance, and data analysis. One of the challenges of operating CCTVs is data management. Most CCTVs capture images in near-realtime conditions: if a CCTV takes an image every minute and each image is 50 KB, we need 72 MB of disk capacity per day, or 26.28 GB per year. If the purpose of the CCTV is data analysis, we need to collect data over a long span. At this scale the data qualifies as big data, so we need to compress it.
Live camera data is commonly compressed in two phases. In the first phase, the CCTV compresses the data before delivering it to the data centre; this is commonly done with JPEG compression. In the second phase, the data is compressed again inside the data centre. This article explains a compression technique for the second phase. In this phase, images are commonly compressed with an image subtraction method followed by run-length encoding. Here we increase the compression rate of that method by calculating the similarity of images.

Required Application and Libraries

  • Python 3.5
  • OpenCV 3.1
  • scikit-image (compare_ssim) 0.13.0
  • PIL

Tested on macOS Sierra 10.12.2 and Linux (Ubuntu 16.04)

Implementation

1. Explanation about the compression method

We combine two methods: first the image subtraction method, then run-length compression. The image subtraction method increases the number of redundant pixels by subtracting the pixels of two similar images; we then compress the redundant pixels with run-length encoding. The formula for subtracting images is:

image_subtraction_formula
C = B - A

which is:
C is the subtracted (difference) image
B is the subtraction subject (image member)
A is the subtraction object (image key)
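As a minimal sketch of this formula: the subtraction used later by OpenCV's cv.subtract is saturating (negative results clip to 0), and that behavior can be reproduced with NumPy alone. The tiny 2x2 arrays below are made-up values for illustration only:

```python
import numpy as np

def subtract_images(member: np.ndarray, key: np.ndarray) -> np.ndarray:
    """C = B - A with saturation: negative differences clip to 0,
    matching cv.subtract's behavior for uint8 images."""
    diff = member.astype(np.int16) - key.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

# Two tiny 2x2 "images"; identical pixels produce 0 (black)
key    = np.array([[10, 200], [30, 40]], dtype=np.uint8)   # A (image key)
member = np.array([[10, 180], [90, 40]], dtype=np.uint8)   # B (image member)

C = subtract_images(member, key)
print(C)  # [[ 0  0] [60  0]]: 180-200 clips to 0, 90-30 -> 60
```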

The image subtraction algorithm iterates over the images, comparing the similarity of each image member to the current image key. If the similarity is less than the threshold, the image member becomes the new image key; otherwise, the image key stays the same until the iteration finds an image member whose similarity falls below the threshold. In the next step, the program compresses the image key and the image members using run-length encoding.

Below is the pseudocode:

compression_algorithm
image key = ""
while n < total images
 image member = ""

 if (image key == "")
  image key = image(n)
  image member = image(n+1)
 else
  image member = image(n)
 endif

 if similarity(image key, image member) < threshold
  if (image key == image(n))
   compress(image key)
  endif

  compress(image member)
  image key = image member

  continue
 endif

 image subtracted = image member - image key
 compress(image subtracted)

 if (image key == image(n))
  compress(image key)
 endif
end

Based on that algorithm, the compression rate depends on the total numbers of image keys and image members: the more image members and the fewer image keys that are created, the higher the compression rate we obtain.
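The algorithm can be sketched as runnable Python. Note two assumptions for illustration: the article computes similarity with scikit-image's compare_ssim, while this sketch substitutes a simple fraction-of-identical-pixels measure, and compress() is replaced by simply recording each stored frame:

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Stand-in similarity: fraction of identical pixels.
    The article uses compare_ssim from scikit-image instead."""
    return float(np.mean(a == b))

def compress_sequence(images, threshold=0.5):
    """Split a time series into key frames (stored whole) and member
    frames (stored as saturating differences from the current key).
    A real pipeline would run-length encode each stored array."""
    key = images[0]
    output = [("key", key)]
    for member in images[1:]:
        if similarity(key, member) < threshold:
            key = member                      # too different: start a new key
            output.append(("key", key))
        else:
            diff = np.clip(member.astype(np.int16) - key.astype(np.int16),
                           0, 255).astype(np.uint8)
            output.append(("diff", diff))     # mostly black, compresses well
    return output

# Three tiny 2x2 frames: f1 is close to f0, f2 is completely different
f0 = np.zeros((2, 2), dtype=np.uint8)
f1 = f0.copy(); f1[0, 0] = 5
f2 = np.full((2, 2), 9, dtype=np.uint8)

out = compress_sequence([f0, f1, f2], threshold=0.5)
print([kind for kind, _ in out])  # ['key', 'diff', 'key']
```

With a threshold of 0.5, f1 (75% identical to f0) is stored as a difference, while f2 (0% identical) triggers a new key frame, mirroring the pseudocode's branch structure.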

2. How does image subtraction work?

The image subtraction method uses the OpenCV subtract function. If a pixel is identical in the two images, the resulting pixel becomes black, or (0, 0, 0) in RGB. Below is a sample of image subtraction using OpenCV:

subtraction.py
import cv2 as cv

img1 = cv.imread("./img1.jpeg")
img2 = cv.imread("./img2.jpeg")
img3 = cv.subtract(img2, img1)  # saturating per-pixel subtraction
cv.imwrite("./data/img3.jpeg", img3)

Here we will compare the original image and the subtracted image, counting black pixels with PIL's Image.load():

calculatepixel.py
from PIL import Image

im = Image.open("./data/img1.jpeg")
pix = im.load()
w, h = im.size
total_ori = 0
for i in range(0, w):
    for j in range(0, h):
        if pix[i, j] == (0, 0, 0):  # count pure-black pixels
            total_ori = total_ori + 1

im2 = Image.open("./data/img3.jpeg")
pix = im2.load()
w, h = im2.size
total_subtraction = 0
for i in range(0, w):
    for j in range(0, h):
        if pix[i, j] == (0, 0, 0):
            total_subtraction = total_subtraction + 1

print("original image 0 pixel: %d" % total_ori)
print("subtraction image 0 pixel: %d" % total_subtraction)

Result:

result
original image 0 pixel: 25
subtraction image 0 pixel: 112833

Based on the result above, it is clear that the image subtraction method can increase the number of consecutive pixels of identical color. Then the next step is to compress the subtracted images by run-length encoding compression.

3. How does the run-length encoding work?

Run-length encoding compresses runs of repeated values. There are many variants of run-length encoding; here we use PackBits. Below is a sample code:

packbit_compression
from PIL import Image

foo = Image.open(filein)
foo.save(fileout, compression="packbits")  # PackBits is available for TIFF output

How does it actually work?

Let us explain the method using an example.
Here is a 16-pixel image (one byte per pixel):

pixel_image
W W W W
W B B W
W B B W
W W W W

Compression Result:

sample_result
5W 2B 2W 2B 5W

The 16-byte image above is written to a file as the following sequence: "W W W W W B B W W B B W W W W W". From the sequence it is easy to see that there are redundant pixels. The PackBits algorithm encodes these by storing each run of identical pixel values as a count and a value. In our case, the sequence "W W W W W B B W W B B W W W W W" is encoded as "5W 2B 2W 2B 5W". Thus, the total number of bytes decreases from 16 to 10.
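The counting idea can be sketched in a few lines of Python. (Real PackBits emits a binary header byte per run rather than a decimal count; this textual form is only for illustration.)

```python
def run_length_encode(pixels: str) -> str:
    """Encode each run of identical symbols as "<count><symbol>",
    the counting idea behind PackBits."""
    runs = []
    i = 0
    while i < len(pixels):
        j = i
        while j < len(pixels) and pixels[j] == pixels[i]:
            j += 1                       # extend the current run
        runs.append(f"{j - i}{pixels[i]}")
        i = j
    return " ".join(runs)

# The 16-pixel image from the example, written row by row
encoded = run_length_encode("WWWWWBBWWBBWWWWW")
print(encoded)  # -> 5W 2B 2W 2B 5W
```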

Result

| image similarity | original size (KB) | compressed size (KB) | compression rate (%) |
|---|---|---|---|
| neighbor image | 1542076 | 1183892 | 23.22738957 |
| 90% | 1542076 | 1393184 | 15.53101144 |
| 80% | 1542076 | 1155136 | 25.09214851 |
| 70% | 1542076 | 1037132 | 32.74443024 |
| 60% | 1542076 | 1015576 | 34.14228611 |
| 50% | 1542076 | 993840 | 35.55181457 |
| 45% | 1542076 | 1013148 | 33.52675225 |
| 40% | 1542076 | 1025068 | 34.2997362 |
| 35% | 1542076 | 1120552 | 27.33483953 |

The best compression rate (35.55%) is obtained when the image similarity threshold is 50%.

Summary

The key to a higher compression rate with this method is to increase the number of image members and decrease the number of image keys. However, we have to maintain the similarity between image keys and image members: the more similar the images, the higher the compression rate we get from PackBits.
