Introduction
I'm not used to reading crate documentation, so I went through the imageproc docs and called every function I could find. Some of the results are clearly wrong, but I'm not worrying about that. There are parts I don't really understand either, and I'm not digging into them.
Results
The calling code is all in this post, but it's also collected as a project in image_process.
module | description |
---|---|
binary_descriptors | Pattern matching. I got it to run, but the result is a mystery |
contours | Contour extraction |
contrast | Adjusting contrast; binarization also lives here |
corners | Enumerating corner points (points of sharp change) |
definitions | No functions defined |
distance_transform | Seems to smooth things according to distance. A mystery. |
drawing | Drawing lines and other shapes |
edges | Canny |
filter | Filtering such as Gaussian and median |
geometric_transformations | Rotation, scaling, translation |
geometry | Shape approximation (the example below uses convex_hull) |
gradients | Sobel filters and the like |
haar | Face detection (supposedly) |
hog | Mostly a mystery |
hough | Hough transform |
integral_image | Mostly a mystery |
local_binary_patterns | Bit-shift style operations |
map | Color-channel manipulation |
math | Norm calculations |
morphology | Dilation, erosion, and friends |
noise | Adding noise |
pixelops | Merging pixel values |
point | No functions defined |
property_testing | No functions defined |
rect | No functions defined |
region_labelling | A mystery. Looks like foreground extraction, but I'm not sure. |
seam_carving | Seam handling. Not sure when you would use it. |
stats | Image statistics |
suppress | Non-maximum suppression of pixels (a mystery) |
template_matching | Template matching. The result sort of makes sense, but when to use it is a mystery |
union_find | No functions defined |
utils | Building benchmark images, taking pixel diffs, and so on |
Calling code and results for each imageproc module
binary_descriptors
Code
pub fn run() {
let img = image::open("lena.png").expect("failed to load image");
let img_base = img.to_luma8();
let keypoints = get_keypoints(&img_base);
let length = 128usize;
let override_test_pairs = None;
let base_brief = imageproc::binary_descriptors::brief::brief(
&img_base,
&keypoints,
length,
override_test_pairs,
);
let img = image::open("lena_face.png").expect("failed to load image");
let img_parts = img.to_luma8();
let keypoints = get_keypoints(&img_parts);
let length = 128usize;
let override_test_pairs = None;
let parts_brief = imageproc::binary_descriptors::brief::brief(
&img_parts,
&keypoints,
length,
override_test_pairs,
);
match (base_brief, parts_brief) {
(Ok(base), Ok(parts)) => {
log::info!(
"base descriptor len: {}, test pair len: {}",
base.0.len(),
base.1.len()
);
log::info!(
"parts descriptor len: {}, test pair len: {}",
parts.0.len(),
parts.1.len()
);
let base_descriptor = base.0;
let parts_descriptor = parts.0;
let threshold = 300u32;
let seed = Some(44u64);
let match_result = imageproc::binary_descriptors::match_binary_descriptors(
&base_descriptor,
&parts_descriptor,
threshold,
seed,
);
let img = image::open("lena.png").expect("failed to load image");
let img_base = img.to_rgba8();
let (base_width, base_height) = img_base.dimensions();
let img_parts = image::open("lena_face.png").expect("failed to load image");
let img_parts = img_parts.to_rgba8();
let (parts_width, parts_height) = img_parts.dimensions();
let mut result_pixels: Vec<image::Rgba<u8>> =
Vec::with_capacity((base_width * base_height) as usize);
for y in 0..base_height {
for x in 0..base_width {
if y < parts_height && x < parts_width {
result_pixels.push(*img_parts.get_pixel(x, y));
} else {
result_pixels.push(*img_base.get_pixel(x, y));
}
}
}
let result_pixels = result_pixels
.into_iter()
.map(|rgba| vec![rgba.0[0], rgba.0[1], rgba.0[2], rgba.0[3]])
.flatten()
.collect::<Vec<u8>>();
let mut img_result =
image::ImageBuffer::from_raw(base_width, base_height, result_pixels).unwrap();
let red = image::Rgba([255u8, 0u8, 0u8, 255u8]);
match_result.iter().for_each(|pair| {
let (base, parts) = pair;
let start = (base.corner.x as i32, base.corner.y as i32);
let end = (parts.corner.x as i32, parts.corner.y as i32);
let radius = 2;
imageproc::drawing::draw_hollow_circle_mut(&mut img_result, start, radius, red);
imageproc::drawing::draw_hollow_circle_mut(&mut img_result, end, radius, red);
let start = (start.0 as f32, start.1 as f32);
let end = (end.0 as f32, end.1 as f32);
imageproc::drawing::draw_line_segment_mut(&mut img_result, start, end, red);
});
img_result
.save("./results/binary_descriptors_match_binary_descriptors.png")
.unwrap();
}
(base, parts) => {
if let Err(base_error) = base {
log::error!("base descriptor error: {}", base_error);
}
if let Err(parts_error) = parts {
log::error!("parts descriptor error: {}", parts_error);
}
}
}
}
fn get_keypoints(img: &image::GrayImage) -> Vec<imageproc::point::Point<u32>> {
let (width, height) = img.dimensions();
let (max_width, max_height) = (width - 16, height - 16);
let corners = imageproc::corners::corners_fast9(&img, 80);
corners
.iter()
.map(|corner| imageproc::point::Point {
x: corner.x,
y: corner.y,
})
.filter(|point| point.x > 16 && point.x < max_width && point.y > 16 && point.y < max_height)
.collect::<Vec<_>>()
}
Matching two images using BRIEF descriptors.
This image:
and this one:
matched against each other gave:
this result, so I must be messing something up somewhere.
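Looking at it again, my best guess at the culprit (based on how BRIEF works and my reading of the docs, so take it as a guess): descriptors are only comparable when both images are described with the same set of intensity test pairs, and with override_test_pairs set to None each call to brief generates its own random pairs. The pairs returned by the first call can be reused for the second. A sketch of that idea, as a hypothetical helper that reuses the get_keypoints function above:

```rust
// Sketch (hypothetical helper): build both descriptor sets from the SAME test pairs.
// The exact type accepted by `override_test_pairs` should be checked against the docs.
fn brief_with_shared_pairs(img_base: &image::GrayImage, img_parts: &image::GrayImage) {
    let length = 128usize;
    let keypoints_base = get_keypoints(img_base);
    let keypoints_parts = get_keypoints(img_parts);
    // First call: let `brief` generate random test pairs and hand them back.
    let (base_descriptors, test_pairs) =
        imageproc::binary_descriptors::brief::brief(img_base, &keypoints_base, length, None)
            .expect("brief failed on base image");
    // Second call: reuse those pairs so the two descriptor sets are comparable.
    let (parts_descriptors, _) = imageproc::binary_descriptors::brief::brief(
        img_parts,
        &keypoints_parts,
        length,
        Some(&test_pairs), // same test pairs as the base image
    )
    .expect("brief failed on parts image");
    let matches = imageproc::binary_descriptors::match_binary_descriptors(
        &base_descriptors,
        &parts_descriptors,
        24,       // Hamming-distance threshold: a guess, tune as needed
        Some(44), // RNG seed
    );
    log::info!("matches with shared test pairs: {}", matches.len());
}
```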
contours
Code
pub fn run() {
log::info!("imageproc contours module");
let img = image::open("contour_base.png").expect("failed to load image");
let img_gray = img.clone().to_luma8();
let threshold = 100u8;
let img_gray = imageproc::contrast::threshold(
&img_gray,
threshold,
imageproc::contrast::ThresholdType::Binary,
);
img_gray
.save("./results/contours_find_contours_threshold.png")
.unwrap();
log::info!("contours find_contours");
let contours = imageproc::contours::find_contours::<u32>(&img_gray);
log::info!("contours length: {}", contours.len());
let red = image::Rgb([255u8, 0u8, 0u8]);
let mut img_result = img.clone().to_rgb8();
contours.iter().for_each(|contour| {
let mut prev: Option<imageproc::point::Point<u32>> = None;
contour.points.iter().for_each(|point| {
if let Some(prev_point) = prev {
let start = (prev_point.x as f32, prev_point.y as f32);
let end = (point.x as f32, point.y as f32);
imageproc::drawing::draw_line_segment_mut(&mut img_result, start, end, red);
}
prev = Some(point.clone());
});
if let Some(prev_point) = prev {
let start = (prev_point.x as f32, prev_point.y as f32);
let end = (contour.points[0].x as f32, contour.points[0].y as f32);
imageproc::drawing::draw_line_segment_mut(&mut img_result, start, end, red);
}
});
img_result
.save("./results/contours_find_contours.png")
.unwrap();
log::info!("contours find_contours_with_threshold");
let contours = imageproc::contours::find_contours_with_threshold::<u32>(&img_gray, threshold);
log::info!("contours length: {}", contours.len());
let mut img_result = img.clone().to_rgb8();
contours.iter().for_each(|contour| {
let mut prev: Option<imageproc::point::Point<u32>> = None;
contour.points.iter().for_each(|point| {
if let Some(prev_point) = prev {
let start = (prev_point.x as f32, prev_point.y as f32);
let end = (point.x as f32, point.y as f32);
imageproc::drawing::draw_line_segment_mut(&mut img_result, start, end, red);
}
prev = Some(point.clone());
});
if let Some(prev_point) = prev {
let start = (prev_point.x as f32, prev_point.y as f32);
let end = (contour.points[0].x as f32, contour.points[0].y as f32);
imageproc::drawing::draw_line_segment_mut(&mut img_result, start, end, red);
}
});
img_result
.save("./results/contours_find_contours_with_threshold.png")
.unwrap();
}
Starting from this:
extracting the contours and drawing them in red looks like this:
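If what you actually want is a bounding box per contour rather than the raw point list, taking the min/max of each contour's points and handing a Rect to draw_hollow_rect_mut gets you there. A sketch (the helper name is made up for this post):

```rust
// Sketch (hypothetical helper): draw an axis-aligned bounding box around each contour.
fn draw_contour_bounding_boxes(
    img: &image::RgbImage,
    contours: &[imageproc::contours::Contour<u32>],
) -> image::RgbImage {
    let green = image::Rgb([0u8, 255u8, 0u8]);
    let mut out = img.clone();
    for contour in contours {
        if contour.points.is_empty() {
            continue;
        }
        let min_x = contour.points.iter().map(|p| p.x).min().unwrap();
        let max_x = contour.points.iter().map(|p| p.x).max().unwrap();
        let min_y = contour.points.iter().map(|p| p.y).min().unwrap();
        let max_y = contour.points.iter().map(|p| p.y).max().unwrap();
        let rect = imageproc::rect::Rect::at(min_x as i32, min_y as i32)
            .of_size(max_x - min_x + 1, max_y - min_y + 1);
        imageproc::drawing::draw_hollow_rect_mut(&mut out, rect, green);
    }
    out
}
```

Calling it as draw_contour_bounding_boxes(&img.to_rgb8(), &contours) after find_contours would draw one green box per contour.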
contrast
Code
fn threshold_mut() {
// imageproc::contrast::threshold_mut();
log::debug!("contrast threshold_mut");
let img = image::open("lena.png").expect("failed to load image");
let mut img_gray = img.to_luma8();
imageproc::contrast::threshold_mut(
&mut img_gray,
100u8,
imageproc::contrast::ThresholdType::Binary,
);
img_gray
.save("./results/contrast_threshold_mut.png")
.expect("failed to save threshold_mut image");
}
This:
binarized, comes out like this:
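The threshold of 100 is hand-picked; the module can also choose one for you via Otsu's method. A sketch using the same ThresholdType as above (output path is made up):

```rust
// Sketch: let Otsu's method choose the threshold instead of hard-coding 100.
fn threshold_with_otsu() {
    let img = image::open("lena.png").expect("failed to load image");
    let img_gray = img.to_luma8();
    let level = imageproc::contrast::otsu_level(&img_gray);
    log::info!("otsu level: {}", level);
    let result = imageproc::contrast::threshold(
        &img_gray,
        level,
        imageproc::contrast::ThresholdType::Binary,
    );
    result
        .save("./results/contrast_threshold_otsu.png")
        .expect("failed to save otsu threshold image");
}
```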
corners
Code
fn corners_fast9() {
log::debug!("corners corners_fast9");
let mut img = image::open("lena.png").expect("failed to load image");
let img_gray = img.clone().to_luma8();
let result = imageproc::corners::corners_fast9(&img_gray, 100);
// https://docs.rs/imageproc/0.25.0/imageproc/corners/struct.Corner.html
let red = image::Rgba([255u8, 0u8, 0u8, 255u8]);
result.iter().for_each(|corner| {
imageproc::drawing::draw_hollow_circle_mut(
&mut img,
(corner.x as i32, corner.y as i32),
10i32,
red,
);
});
img.save("./results/corners_corners_fast9.png").unwrap();
}
Finding the corner points of this:
gives something like this:
distance_transform
Code
fn distance_transform() {
log::debug!("distance_transform");
let img = image::open("lena.png").expect("failed to load image");
let mut img_gray = img.clone().to_luma8();
imageproc::contrast::threshold_mut(
&mut img_gray,
100u8,
imageproc::contrast::ThresholdType::Binary,
);
// https://docs.rs/imageproc/0.25.0/imageproc/distance_transform/enum.Norm.html
let norm = imageproc::distance_transform::Norm::L1;
// https://docs.rs/imageproc/0.25.0/imageproc/distance_transform/fn.distance_transform.html
let result = imageproc::distance_transform::distance_transform(&img_gray, norm);
result
.save("./results/distance_transform_distance_transform.png")
.unwrap();
}
Binarizing this and running the distance transform:
looks like this:
A mystery.
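My guess at the mystery: distance_transform replaces each non-zero pixel with its distance to the nearest zero pixel, clamped to u8, so most of the image ends up squeezed into a narrow value range. Rescaling by the maximum distance before saving makes it much easier to see. A sketch (hypothetical helper; call it on the result above before saving):

```rust
// Sketch (hypothetical helper): rescale the distance map so the largest distance maps to 255.
fn visualize_distance_transform(distances: &image::GrayImage) -> image::GrayImage {
    let max = distances.pixels().map(|p| p.0[0]).max().unwrap_or(1).max(1) as f32;
    imageproc::map::map_colors(distances, |p| {
        image::Luma([(p.0[0] as f32 / max * 255.0) as u8])
    })
}
```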
drawing
Code
imageproc::drawing::draw_text_mut(
&mut image,
white,
400,
20,
scale,
&font,
"draw_line_segment_mut",
);
imageproc::drawing::draw_line_segment_mut(&mut image, (400f32, 60f32), (600f32, 60f32), white);
// Drawing still happens even if the line runs outside the image bounds
imageproc::drawing::draw_line_segment_mut(
&mut image,
(400f32, -120f32),
(600f32, 90f32),
white,
);
imageproc::drawing::draw_line_segment_mut(
&mut image,
(400f32, 120f32),
(600f32, -120f32),
white,
);
You can do quite a lot here.
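The snippet above is only an excerpt (image, white, scale, and font are set up earlier in the project). For reference, a self-contained sketch that sticks to shapes and skips the font handling (function name and output path are made up):

```rust
// Sketch: basic shape drawing on a blank RGB canvas.
fn draw_shapes() {
    let mut canvas = image::RgbImage::new(300, 200);
    let white = image::Rgb([255u8, 255u8, 255u8]);
    let red = image::Rgb([255u8, 0u8, 0u8]);
    imageproc::drawing::draw_hollow_rect_mut(
        &mut canvas,
        imageproc::rect::Rect::at(10, 10).of_size(120, 80),
        white,
    );
    imageproc::drawing::draw_filled_circle_mut(&mut canvas, (200, 100), 40, red);
    imageproc::drawing::draw_cross_mut(&mut canvas, white, 150, 150);
    imageproc::drawing::draw_line_segment_mut(&mut canvas, (0.0, 0.0), (299.0, 199.0), red);
    canvas.save("./results/drawing_shapes.png").unwrap();
}
```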
edges
Code
fn canny() {
log::debug!("edges canny");
let img = image::open("lena.png").expect("failed to load image");
let img_gray = img.clone().to_luma8();
let result = imageproc::edges::canny(&img_gray, 30f32, 240f32);
result.save("./results/edges_canny.png").unwrap();
}
This:
becomes this:
filter
Code
let image_buffer = img.clone().into_rgb8();
let filter_result = imageproc::filter::median_filter(&image_buffer, 3u32, 3u32);
filter_result
.save("./results/filter_median_filter.png")
.expect("failed to save median_filter image");
This:
can be turned into something like this:
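A Gaussian blur lives in the same module. A minimal sketch (the sigma of 3.0 is arbitrary, and the function name and output path are made up):

```rust
// Sketch: Gaussian blur with sigma = 3.0.
fn gaussian_blur() {
    let img = image::open("lena.png").expect("failed to load image");
    let image_buffer = img.to_rgb8();
    let result = imageproc::filter::gaussian_blur_f32(&image_buffer, 3.0f32);
    result
        .save("./results/filter_gaussian_blur_f32.png")
        .expect("failed to save gaussian_blur_f32 image");
}
```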
geometric_transformations
Code
let projection =
imageproc::geometric_transformations::Projection::translate(center_x, center_y)
* imageproc::geometric_transformations::Projection::scale(0.5f32, 0.5f32)
* imageproc::geometric_transformations::Projection::rotate(45.0f32);
let default = image::Rgba([0f32, 0f32, 0f32, 0f32]);
let result = imageproc::geometric_transformations::warp(
&image_buffer,
&projection,
interpolation,
default,
);
let result = image::DynamicImage::ImageRgba32F(result);
result
.into_rgba8()
.save("./results/geometric_transformation_warp.png")
.expect("failed to save warp image");
This:
shifted, rotated, and so on:
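Note that Projection::rotate takes radians, so the 45.0f32 above means roughly 45 radians rather than 45 degrees. For a plain rotation, rotate_about_center in the same module is simpler. A sketch (function name and output path are made up):

```rust
// Sketch: rotate 45 degrees (note: the angle argument is in radians) around the image center.
fn rotate_45_degrees() {
    let img = image::open("lena.png").expect("failed to load image");
    let image_buffer = img.to_rgb8();
    let result = imageproc::geometric_transformations::rotate_about_center(
        &image_buffer,
        std::f32::consts::FRAC_PI_4, // 45 degrees in radians
        imageproc::geometric_transformations::Interpolation::Bilinear,
        image::Rgb([0u8, 0u8, 0u8]),
    );
    result
        .save("./results/geometric_transformations_rotate_about_center.png")
        .expect("failed to save rotate_about_center image");
}
```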
geometry
Code
let mut current_point: Option<imageproc::point::Point<u32>> = None;
result.iter().for_each(|point| {
if let Some(prev) = current_point {
imageproc::drawing::draw_line_segment_mut(
&mut img,
(prev.x as f32, prev.y as f32),
(point.x as f32, point.y as f32),
green,
);
}
current_point = Some(point.clone());
});
let start = result[0].clone();
let end = result.iter().last().clone().unwrap();
imageproc::drawing::draw_line_segment_mut(
&mut img,
(start.x as f32, start.y as f32),
(end.x as f32, end.y as f32),
green,
);
img.save("./results/geometry_convex_hull.png").unwrap();
Red: the original, green: the approximation (convex hull).
gradients
Code
fn sobel_gradient_map() {
log::debug!("gradients sobel_gradient_map");
let img = image::open("lena.png").expect("failed to load image");
let (width, height) = img.dimensions();
let img = img.to_rgb8();
let result = imageproc::gradients::sobel_gradient_map(&img, |rgb| rgb);
let max_value = result
.clone()
.pixels()
.into_iter()
.map(|pixel| pixel.0)
.flatten()
.max()
.unwrap() as f32;
let rgbs = result
.clone()
.pixels()
.into_iter()
.map(|pixel| {
let r = (pixel.0[0] as f32 / max_value * u8::MAX as f32) as u8;
let g = (pixel.0[1] as f32 / max_value * u8::MAX as f32) as u8;
let b = (pixel.0[2] as f32 / max_value * u8::MAX as f32) as u8;
[r, g, b]
})
.flatten()
.collect::<Vec<u8>>();
image::RgbImage::from_raw(width, height, rgbs)
.unwrap()
.save("./results/gradients_sobel_gradient_map.png")
.unwrap();
}
Taking this:
and running a Sobel filter over it gives something like this:
haar
This should be face detection, but it seems unusable unless the image is small enough (enumerate_haar_features takes the width and height as u8, so anything larger than 255 pixels per side is out). Mostly a mystery.
Code
pub fn run() {
let img = image::open("lena.png").expect("failed to load image");
let (width, height) = img.dimensions();
let image_buffer = img.to_rgba32f();
let scale = 50.0 / width.max(height) as f32;
log::debug!("haar resize scale: {}", scale);
let projection = imageproc::geometric_transformations::Projection::scale(scale, scale);
let interpolation = imageproc::geometric_transformations::Interpolation::Nearest;
let default = image::Rgba([0f32, 0f32, 0f32, 0f32]);
let (new_width, new_height) = (
(scale * width as f32) as u32,
(scale * height as f32) as u32,
);
let mut resized_image_buffer = image::Rgba32FImage::new(new_width, new_height);
imageproc::geometric_transformations::warp_into(
&image_buffer,
&projection,
interpolation,
default,
&mut resized_image_buffer,
);
let (width, height) = resized_image_buffer.dimensions();
log::debug!("haar scaled size: ({}, {})", width, height);
log::debug!("haar number_of_haar_features");
let result = imageproc::haar::number_of_haar_features(width, height);
log::info!("number_of_haar_features result: {}", result);
log::debug!("haar enumerate_haar_features");
// https://docs.rs/imageproc/0.25.0/imageproc/haar/fn.enumerate_haar_features.html
let features = imageproc::haar::enumerate_haar_features(width as u8, height as u8);
log::info!("features size : {:?}", features.len());
let img = image::DynamicImage::ImageRgba32F(resized_image_buffer);
let mut gray_image = img.clone().to_luma8();
let result = imageproc::haar::draw_haar_feature(&gray_image, features[0]);
result.save("./results/haar_draw_haar_feature.png").unwrap();
features[0..10].iter().for_each(|feature| {
imageproc::haar::draw_haar_feature_mut(&mut gray_image, *feature);
});
gray_image
.save("./results/haar_draw_haar_feature_mut.png")
.unwrap();
}
This:
after a fair bit of fiddling turned into:
this, so: a mystery.
hog
Code
pub fn run() {
let img = image::open("lena.png").expect("failed to load image");
let (width, height) = img.dimensions();
let image_buffer = img.to_rgba32f();
let (scale_x, scale_y) = (400f32 / width as f32, 400f32 / height as f32);
let projection = imageproc::geometric_transformations::Projection::scale(scale_x, scale_y);
let interpolation = imageproc::geometric_transformations::Interpolation::Nearest;
let default = image::Rgba([0f32, 0f32, 0f32, 0f32]);
let (new_width, new_height) = (
(scale_x * width as f32) as u32,
(scale_y * height as f32) as u32,
);
let mut resized_image_buffer = image::Rgba32FImage::new(new_width, new_height);
imageproc::geometric_transformations::warp_into(
&image_buffer,
&projection,
interpolation,
default,
&mut resized_image_buffer,
);
let (width, height) = resized_image_buffer.dimensions();
log::debug!("hog width, height: ({}, {})", width, height);
let img = image::DynamicImage::ImageRgba32F(resized_image_buffer);
let img_gray = img.clone().to_luma8();
let orientations = 15;
let signed = false;
let cell_side = 100usize; // evenly divide width & height
let block_side = 2;
// (width, height) = (400, 400)
let cell_wide = width as usize / cell_side;
let cell_height = height as usize / cell_side;
log::debug!(
"(cell_wide, cell_height) = ({}, {})",
cell_wide,
cell_height
);
let block_stride = 2; // evenly divide (cells high(= height / cell_side) - block side)
let options =
imageproc::hog::HogOptions::new(orientations, signed, cell_side, block_side, block_stride);
let spec = imageproc::hog::HogSpec::from_options(width, height, options);
match spec {
Ok(spec) => {
log::debug!("hog cell_histograms");
let mut array_3d = imageproc::hog::cell_histograms(&img_gray, spec);
let view = array_3d.view_mut();
log::debug!("hog render_hist_grid");
let result = imageproc::hog::render_hist_grid(10u32, &view, signed);
result.save("./results/hog_render_hist_grid.png").unwrap();
}
Err(e) => {
log::error!("HogSpec::from_options error: {:?}", e);
}
}
log::debug!("called hog");
match imageproc::hog::hog(&img_gray, options) {
Ok(vectors) => {
log::info!("hog result: {:?}", vectors);
}
Err(e) => {
log::error!("imageproc::hog::hog error: {:?}", e);
}
}
}
This:
after a fair bit of fiddling turned into:
this, so: a mystery.
hough
Code
log::debug!("hough draw_polar_lines");
let image_buffer = img.to_rgb8();
let result = imageproc::hough::draw_polar_lines(&image_buffer, &result, red);
result.save("./results/hough_draw_polar_lines.png").unwrap();
This:
becomes this!
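The snippet above is only the drawing half (img and result come from earlier code in the project). End to end, the pipeline looks roughly like this: Canny first, then detect_lines with a vote threshold and suppression radius. A sketch (both option values are guesses to tune, function name and output path are made up):

```rust
// Sketch: full Hough pipeline — edge detection, line detection, then drawing.
fn hough_lines() {
    let img = image::open("lena.png").expect("failed to load image");
    let img_gray = img.to_luma8();
    let edges = imageproc::edges::canny(&img_gray, 30f32, 240f32);
    let options = imageproc::hough::LineDetectionOptions {
        vote_threshold: 100,    // minimum number of votes for a line (tune)
        suppression_radius: 10, // suppress nearby duplicate lines (tune)
    };
    let lines = imageproc::hough::detect_lines(&edges, options);
    log::info!("detected {} lines", lines.len());
    let red = image::Rgb([255u8, 0u8, 0u8]);
    let result = imageproc::hough::draw_polar_lines(&img.to_rgb8(), &lines, red);
    result.save("./results/hough_detect_lines.png").unwrap();
}
```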
integral_image
Code
fn integral_squared_image() {
log::debug!("integral_image integral_squared_image");
let img = image::open("lena.png").expect("failed to load image");
let img_gray = img.clone().to_luma8();
let result = imageproc::integral_image::integral_squared_image::<_, u32>(&img_gray);
parse_to_lumau8(&result)
.save("./results/integral_image_integral_squared_image.png")
.unwrap();
}
This:
becomes this!
A mystery!
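My reading of the mystery: in an integral image every pixel holds the sum of all pixels above and to its left, so the values blow past the u8 range almost immediately and the saved image comes out nearly white. The point of the module is constant-time rectangle sums afterwards, e.g. with sum_image_pixels. A sketch (argument order and return type follow my reading of the docs, so double-check; function name is made up):

```rust
// Sketch: use the integral image for a constant-time rectangle sum.
fn rectangle_sum() {
    let img = image::open("lena.png").expect("failed to load image");
    let img_gray = img.to_luma8();
    let integral = imageproc::integral_image::integral_image::<_, u32>(&img_gray);
    // Sum over the 100x100 box whose top-left corner is at (50, 50).
    let sum = imageproc::integral_image::sum_image_pixels(&integral, 50, 50, 149, 149);
    log::info!("pixel sum over the box: {:?}", sum);
}
```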
local_binary_patterns
Code
pub fn run() {
log::debug!("local_binary_patterns count_transitions");
let value = 0b10110010;
let result = imageproc::local_binary_patterns::count_transitions(value);
log::info!(
"bit value {:08b} count_transitions result is {}",
value,
result
);
let img = image::open("lena.png").expect("failed to load image");
let gray_image_buffer = img.to_luma8();
let (x, y) = (100, 100);
let result = imageproc::local_binary_patterns::local_binary_pattern(&gray_image_buffer, x, y);
log::info!(
"local_binary_pattern ({}, {}) result is {:08b}",
x,
y,
result.unwrap()
);
let value = 0b10110100;
let result = imageproc::local_binary_patterns::min_shift(value);
log::info!("bit value {:08b} min_shift result is {:08b}", value, result);
}
This one works on bit patterns rather than on images.
map
Code
fn as_blue_channel() {
log::debug!("map as_blue_channel");
let img = image::open("lena.png").expect("failed to load image");
let img_gray = img.to_luma8();
let result = imageproc::map::as_blue_channel(&img_gray);
result.save("./results/map_as_blue_channel.png").unwrap();
}
From this:
you can do things like map the grayscale image into just the blue channel:
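Besides the as_*_channel helpers, map_colors lets you rewrite every pixel with a closure, for example keeping only the red channel. A sketch (function name and output path are made up):

```rust
// Sketch: zero out the green and blue channels with map_colors.
fn keep_red_channel() {
    let img = image::open("lena.png").expect("failed to load image");
    let image_rgb = img.to_rgb8();
    let result = imageproc::map::map_colors(&image_rgb, |p| image::Rgb([p.0[0], 0u8, 0u8]));
    result.save("./results/map_map_colors_red_only.png").unwrap();
}
```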
math
Code
fn l1_norm() {
log::debug!("math l1_norm");
let value = vec![5f32, 10f32, 25f32];
let result = imageproc::math::l1_norm(&value);
log::info!("value {:?} l1 norm is {}", value, result);
}
Not image processing as such, just norm calculations.
morphology
Code
fn dilate_mut() {
log::debug!("morphology dilate_mut");
let img = image::open("morphology_model.png").expect("failed to load image");
let mut img_gray = img.to_luma8();
imageproc::morphology::dilate_mut(&mut img_gray, imageproc::distance_transform::Norm::L2, 20u8);
img_gray
.save("./results/morphology_dilate_mut.png")
.unwrap();
}
This:
dilated, looks like this:
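Erosion, opening, and closing take the same kind of arguments as dilate. A sketch (radii picked arbitrarily, function name and output paths are made up):

```rust
// Sketch: erosion and closing with the same kind of arguments as dilate_mut.
fn erode_and_close() {
    let img = image::open("morphology_model.png").expect("failed to load image");
    let img_gray = img.to_luma8();
    let eroded = imageproc::morphology::erode(
        &img_gray,
        imageproc::distance_transform::Norm::L2,
        5u8,
    );
    eroded.save("./results/morphology_erode.png").unwrap();
    let closed = imageproc::morphology::close(
        &img_gray,
        imageproc::distance_transform::Norm::L2,
        5u8,
    );
    closed.save("./results/morphology_close.png").unwrap();
}
```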
noise
Code
fn salt_and_pepper_noise() {
log::debug!("noise salt_and_pepper_noise");
let img = image::open("lena.png").expect("failed to load image");
let image_buffer = img.to_rgb8();
let rate = 0.2f64; // range: 0.0 to 1.0
let seed = 10u64;
let result = imageproc::noise::salt_and_pepper_noise(&image_buffer, rate, seed);
result
.save("./results/noise_salt_and_pepper_noise.png")
.unwrap();
}
To this:
you can add noise:
pixelops
Code
pub fn run() {
log::debug!("pixelops interpolate");
let left = image::Rgb([10u8, 20u8, 30u8]);
let right = image::Rgb([100u8, 80u8, 60u8]);
let result = imageproc::pixelops::interpolate(left, right, 0.7);
log::info!("interpolate result: {:?}", result);
log::debug!("pixelops weighted_sum");
let result = imageproc::pixelops::weighted_sum(left, right, 0.7, 0.3);
log::info!("weighted_sum result: {:?}", result);
}
Not image processing as such; more of a utility for merging pixel values.
region_labelling
Code
pub fn run() {
log::debug!("region_labelling connected_components");
let img = image::open("lena.png").expect("failed to load image");
let image_gray = img.to_luma8();
let conn = imageproc::region_labelling::Connectivity::Four;
let background = image::Luma([0u8]);
let result = imageproc::region_labelling::connected_components(&image_gray, conn, background);
parse_to_lumau8(&result)
.save("./results/region_labelling_connected_components.png")
.unwrap();
}
This:
becomes this:
A mystery.
seam_carving
Code
let img = image::open("lena.png").expect("failed to load image");
let image_gray = img.clone().to_luma8();
log::debug!("seam_carving find_vertical_seam");
let seam = imageproc::seam_carving::find_vertical_seam(&image_gray);
log::debug!("seam_carving draw_vertical_seams");
let seams = vec![seam];
let image_result = imageproc::seam_carving::draw_vertical_seams(&image_gray, &seams);
image_result
.save("./results/seam_carving_draw_vertical_seams.png")
.unwrap();
For this image:
the seam runs here:
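The usual use case is content-aware resizing: repeatedly removing the lowest-energy seam to narrow an image while keeping the interesting parts intact. If I read the docs right, shrink_width wraps that loop. A sketch (target width is arbitrary, function name and output path are made up):

```rust
// Sketch: content-aware shrink of the image width down to 400 pixels.
fn shrink() {
    let img = image::open("lena.png").expect("failed to load image");
    let image_rgb = img.to_rgb8();
    let result = imageproc::seam_carving::shrink_width(&image_rgb, 400);
    result
        .save("./results/seam_carving_shrink_width.png")
        .unwrap();
}
```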
stats
Code
fn histogram() {
log::debug!("stats histogram");
let img = image::open("lena.png").expect("failed to load image");
let image_rgb = img.clone().to_rgb8();
let result = imageproc::stats::histogram(&image_rgb);
log::info!("histogram result: {:?}", result.channels);
}
Computes statistics about an image.
suppress
Code
fn suppress_non_maximum() {
log::debug!("suppress suppress_non_maximum");
let img = image::open("lena.png").expect("failed to load image");
let img_gray = img.clone().to_luma8();
let radius = 50u32;
let result = imageproc::suppress::suppress_non_maximum(&img_gray, radius);
result
.save("./results/suppress_suppress_non_maximum.png")
.unwrap();
}
Taking this:
and applying non-maximum suppression (every pixel that is not the maximum within the given radius gets zeroed out) gives this:
Still somewhat of a mystery.
template_matching
Code
fn match_template_with_mask_parallel() {
log::debug!("template_matching match_template_with_mask_parallel");
let img = image::open("lena.png").expect("failed to load image");
let img_base = img.clone().to_luma8();
let img = image::open("lena_eyes.png").expect("failed to load image");
let img_eye = img.clone().to_luma8();
let mean = 100f64;
let stddev = 20f64;
let seed = 10u64;
let img_eye_blur = imageproc::noise::gaussian_noise(&img_eye, mean, stddev, seed);
// https://docs.rs/imageproc/0.25.0/imageproc/template_matching/enum.MatchTemplateMethod.html
let result = imageproc::template_matching::match_template_with_mask_parallel(
&img_base,
&img_eye,
imageproc::template_matching::MatchTemplateMethod::SumOfSquaredErrors,
&img_eye_blur,
);
parse_to_lumau8(&result)
.save("./results/template_matching_match_template_with_mask_parallel.png")
.unwrap();
}
Starting from this:
and searching for this:
you get something like this:
It doesn't seem to hand you the matched rectangle directly, though.
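To get a location out of it, you read the score map yourself: with SumOfSquaredErrors the best match is the minimum, and find_extremes tells you where that is, so you can frame it with the template's size. A sketch (the Extremes field names follow my reading of the docs, and the helper name is made up):

```rust
// Sketch (hypothetical helper): find the best-scoring location in the SSE score map
// and draw a box of the template's size around it on the original image.
fn draw_best_match(
    score_map: &image::ImageBuffer<image::Luma<f32>, Vec<f32>>,
    base: &image::GrayImage,
    template: &image::GrayImage,
) -> image::RgbImage {
    let extremes = imageproc::template_matching::find_extremes(score_map);
    // For SumOfSquaredErrors, smaller is better.
    let (x, y) = extremes.min_value_location;
    let (tw, th) = template.dimensions();
    let mut out = image::DynamicImage::ImageLuma8(base.clone()).to_rgb8();
    let red = image::Rgb([255u8, 0u8, 0u8]);
    let rect = imageproc::rect::Rect::at(x as i32, y as i32).of_size(tw, th);
    imageproc::drawing::draw_hollow_rect_mut(&mut out, rect, red);
    out
}
```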
utils
Code
fn rgb_bench_image() {
log::debug!("utils rgb_bench_image");
let width = 100u32;
let height = 255u32;
let result = imageproc::utils::rgb_bench_image(width, height);
result.save("./results/utils_rgb_bench_image.png").unwrap();
}
You can build benchmark images and the like.
Retrospective
Until I had called my way through the functions, definitions like filter3x3's gave me a mild allergic reaction. Faced with a signature like the one below, Clamp<K> + Primitive left me going "what even is this?".
pub fn filter3x3<P, K, S>(
image: &Image<P>,
kernel: &[K],
) -> Image<ChannelMap<P, S>>
where
P::Subpixel: Into<K>,
S: Clamp<K> + Primitive,
P: WithChannel<S>,
K: Num + Copy,
After doggedly chasing the documentation, the allergy went away and I got a rough sense of how to deal with these signatures. In the end it really just comes down to reading each definition in the docs carefully.
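For the record, once the generics click, an actual call looks harmless enough. A sketch of a 3x3 sharpen on a grayscale image, where the GrayImage annotation pins down P = Luma<u8>, K = f32, and S = u8 (function name and output path are made up):

```rust
// Sketch: 3x3 sharpening kernel; the GrayImage annotation fixes S = u8.
fn sharpen() {
    let img = image::open("lena.png").expect("failed to load image");
    let img_gray = img.to_luma8();
    let kernel: [f32; 9] = [
        0.0, -1.0, 0.0, //
        -1.0, 5.0, -1.0, //
        0.0, -1.0, 0.0,
    ];
    let result: image::GrayImage = imageproc::filter::filter3x3(&img_gray, &kernel);
    result.save("./results/filter_filter3x3_sharpen.png").unwrap();
}
```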
I now know imageproc can handle a lot of image processing. Still, it stings a little that I couldn't get haar, brief, or template_matching to do what I wanted.