1. Overview
This article introduces a Python script that converts a MIDI file into an electronic music-box sound.
The timbre is reproduced with decaying sine waves and harmonic synthesis, and an **8-bit ADC quantization step (N = 8)** is added on top to reproduce the loss of quality caused by a real A/D conversion.
- Technical elements: MIDI parsing, harmonic synthesis, decay model, quantization error
- Output file: musicbox_adc8.wav
- Environment: Google Colab (Python 3.10 or later)
2. Theory
(1) Decaying Sine
A music-box tone can be modeled as the resonance of a vibrating body whose amplitude decays exponentially over time:
x(t) = A * e^(-k t) * sin(2π f t)
where
A: amplitude
k: decay coefficient
f: fundamental frequency
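The decay model can be sketched directly with NumPy; the values of k and f below are illustrative (k = 4.0 matches the `decay_base` used later in the script):

```python
import numpy as np

# Decaying sine: x(t) = A * exp(-k t) * sin(2*pi*f*t)
fs = 44100                   # sample rate [Hz]
A, k, f = 1.0, 4.0, 440.0    # amplitude, decay coefficient, frequency (illustrative)
dur = 1.0                    # duration [s]

t = np.linspace(0, dur, int(fs * dur), endpoint=False)
x = A * np.exp(-k * t) * np.sin(2 * np.pi * f * t)

# After 1 s the envelope has fallen to A * e^(-k), about 1.8% of A for k = 4
print(x.shape)  # (44100,)
```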
(2) Harmonics
The resonant sound of a metal bar or string contains components at integer multiples of the fundamental frequency f (harmonics):
x_total(t) = Σ [ g_n * e^(-k t) * sin(2π f_n t) ],  f_n = n · f
harmonics = [1, 2, 3]
harmonic_gain = [1.0, 0.4, 0.25]
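Using the harmonic numbers and gains above, the summation can be written out as a short sketch (f and k are illustrative values, not taken from the script):

```python
import numpy as np

fs = 44100
f = 440.0                         # fundamental frequency [Hz]
harmonics = [1, 2, 3]             # harmonic numbers n
harmonic_gain = [1.0, 0.4, 0.25]  # gains g_n
k = 4.0                           # decay coefficient
t = np.linspace(0, 1.0, fs, endpoint=False)

# x_total(t) = sum of g_n * exp(-k t) * sin(2*pi*n*f*t)
x_total = np.zeros_like(t)
for n, g in zip(harmonics, harmonic_gain):
    x_total += g * np.exp(-k * t) * np.sin(2 * np.pi * n * f * t)

# Normalize to [-1, 1], as the main script does per note
x_total /= np.max(np.abs(x_total)) + 1e-9
```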
(3) ADC Quantization
An A/D converter rounds the analog voltage to discrete levels; this rounding is called quantization.
Quantization step width:
Δ = Vref / 2^N
For N = 8 bit and Vref = 1.0 V: Δ ≈ 0.0039 V
Theoretical SNR:
SNR ≈ 6.02N + 1.76 ≈ 49.9 dB
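These two values are easy to verify numerically:

```python
N = 8        # ADC resolution [bits]
Vref = 1.0   # reference voltage [V]

delta = Vref / (2 ** N)      # quantization step width Δ
snr_db = 6.02 * N + 1.76     # theoretical SNR for a full-scale sine

print(delta)              # 0.00390625 (≈ 0.0039 V)
print(round(snr_db, 2))   # 49.92 (≈ 49.9 dB)
```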
3. Implementation
```python
# ======================================================
# Program Name: midi_musicbox_adc8_complete_colab.py
# Creation Date: 20251102
# Overview: MIDI → decaying harmonic sine synthesis → 8-bit ADC quantization
# Usage: Run each cell sequentially in Google Colab.
# ======================================================

# =========================
# Install required libraries
# =========================
!pip install mido numpy scipy matplotlib IPython
```
```python
# =========================
# Import required libraries
# =========================
import numpy as np
from mido import MidiFile
from scipy.io.wavfile import write
from google.colab import files
import matplotlib.pyplot as plt
from IPython.display import Audio, display
```
```python
# =========================
# Parameters
# =========================
fs = 44100                        # sampling rate [Hz]
decay_base = 4.0                  # base decay coefficient k
max_channels = 80                 # maximum simultaneous notes (reserved; not used below)
amplitude = 0.5                   # overall note amplitude
tempo_default = 120               # fallback tempo [BPM]
harmonics = [1.0, 2.0, 3.0]       # harmonic numbers n
harmonic_gain = [1.0, 0.4, 0.25]  # harmonic gains g_n
N_bits = 8                        # ADC resolution [bits]
Vref = 1.0                        # ADC reference voltage [V]
bass_gain = 0.8                   # gain for notes below 220 Hz
melody_gain = 1.2                 # gain for notes at or above 220 Hz
```
```python
# =========================
# Upload the MIDI file
# =========================
uploaded = files.upload()
midi_path = list(uploaded.keys())[0]
```
```python
# =========================
# Decaying harmonic sine generator
# =========================
def decaying_harmonic_sine(f, dur, amp):
    t = np.linspace(0, dur, int(fs * dur), endpoint=False)
    wave = np.zeros_like(t)
    for h, g in zip(harmonics, harmonic_gain):
        if f * h < fs / 2:  # keep partials below the Nyquist frequency
            decay = decay_base * (1.0 + 0.0005 * f)  # higher notes decay faster
            wave += g * np.exp(-decay * t) * np.sin(2 * np.pi * f * h * t)
    wave = wave / (np.max(np.abs(wave)) + 1e-9)  # normalize per note
    return amp * wave
```
```python
# =========================
# Parse the MIDI file (Note On / Off)
# =========================
midi = MidiFile(midi_path)
ticks_per_beat = midi.ticks_per_beat
tempo = tempo_default
score = []
note_on_times = {}
for track in midi.tracks:
    time_accum = 0
    for msg in track:
        time_accum += msg.time
        if msg.type == 'set_tempo':
            tempo = 60_000_000 / msg.tempo  # microseconds per beat -> BPM
        if msg.type == 'note_on' and msg.velocity > 0:
            start_time = time_accum / ticks_per_beat * 60 / tempo
            note_on_times[msg.note] = (start_time, msg.velocity)
        if (msg.type == 'note_off') or (msg.type == 'note_on' and msg.velocity == 0):
            if msg.note in note_on_times:
                start_time, vel = note_on_times[msg.note]
                end_time = time_accum / ticks_per_beat * 60 / tempo
                length = max(0.1, end_time - start_time)
                freq = 440.0 * 2 ** ((msg.note - 69) / 12)  # MIDI note -> Hz
                score.append((start_time, freq, length, vel))
                del note_on_times[msg.note]
```
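The note-to-frequency conversion in the parser follows equal temperament, with MIDI note 69 defined as A4 = 440 Hz. Isolated as a helper (`midi_to_freq` is a name introduced here for illustration):

```python
def midi_to_freq(note: int) -> float:
    """Equal-tempered frequency for a MIDI note number (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

print(midi_to_freq(69))            # 440.0
print(midi_to_freq(81))            # 880.0  (one octave up)
print(round(midi_to_freq(60), 2))  # 261.63 (middle C)
```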
```python
# =========================
# Split into bass and melody
# =========================
bass_notes = [n for n in score if n[1] < 220.0]
melody_notes = [n for n in score if n[1] >= 220.0]
```
```python
# =========================
# Waveform synthesis
# =========================
total_duration = max(t + l for t, f, l, v in score) + 1
output = np.zeros(int(fs * total_duration))

def synth_part(score_part, gain):
    out = np.zeros_like(output)
    for note in score_part:
        start_time, freq, length, vel = note
        wave = decaying_harmonic_sine(freq, length, amplitude * vel / 127 * gain)
        idx_start = int(start_time * fs)
        idx_end = min(idx_start + len(wave), len(out))  # truncate at the buffer end
        out[idx_start:idx_end] += wave[: idx_end - idx_start]
    return out

bass_wave = synth_part(bass_notes, bass_gain)
melody_wave = synth_part(melody_notes, melody_gain)
output = bass_wave + melody_wave
output /= np.max(np.abs(output)) + 1e-9  # normalize the mix to [-1, 1]
```
```python
# =========================
# 8-bit ADC quantization
# =========================
signal_analog = (output + 1) / 2 * Vref              # map [-1, 1] to [0, Vref]
delta = Vref / (2 ** N_bits)                         # quantization step width Δ
quantized = np.floor(signal_analog / delta) * delta  # round down to the grid
output_adc = (quantized / Vref) * 2 - 1              # map back to [-1, 1]
output_adc = np.clip(output_adc, -1.0, 1.0)
```
The script parses the MIDI data, converts each note into a decaying harmonic sine wave, then quantizes the output waveform with the 8-bit ADC model and plays it back.
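The listing above stops after quantization; a final cell along the following lines would save and play the result, using the already-imported `write` and the file name `musicbox_adc8.wav` from the overview. A short synthetic stand-in for `output_adc` keeps this sketch self-contained; in the actual notebook you would use the arrays from the cells above.

```python
import numpy as np
from scipy.io.wavfile import write

# Stand-in for the real `output_adc` and `fs` from the cells above
fs = 44100
t = np.linspace(0, 0.5, int(fs * 0.5), endpoint=False)
output_adc = 0.5 * np.sin(2 * np.pi * 440.0 * t)

# Scale [-1, 1] floats to 16-bit PCM and save under the name from the overview
wav_int16 = (output_adc * 32767).astype(np.int16)
write("musicbox_adc8.wav", fs, wav_int16)

# In Colab, play back inline and offer the file for download:
# display(Audio("musicbox_adc8.wav"))
# files.download("musicbox_adc8.wav")
```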