MIDI as a "Decaying Sine Wave"

Posted at 2025-10-15

1. Introduction: The Formula Itself Becomes Sound

The trigonometric function sin θ is a formula describing a wave that varies over time, i.e. the vibration of sound.
Sound is the phenomenon of air pressure rising and falling periodically.
Reproduce that waveform electronically, and the mathematical formula itself becomes sound.

The script introduced in this article uses decaying sine waves
to play back a MIDI file in a "music box" (orgel) style.


2. Program Overview

Program name: midi_to_decay_musicbox_colab.py
Purpose: read a MIDI file and resynthesize every note as a decaying sine wave
Output: musicbox.wav (audio file), generated and played back automatically in Colab


3. How It Works

(1) Basic equation of the decaying sine wave

The waveform of each note is generated by the following equation:

x(t) = A * e^(−k t) * sin(2π f t)
Symbol | Meaning | Unit | Description
A | amplitude | [V] | loudness of the sound
k | decay coefficient | [1/s] | how quickly the sound fades
f | frequency | [Hz] | pitch of the sound
t | time | [s] | elapsed time

The exponential factor e^(−k t) makes the sound fade away naturally over time,
reproducing the soft ring of a metal tine or a string.
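
As a minimal sketch of this equation in NumPy (the parameter values A = 0.5, k = 4.0, f = 440 Hz, dur = 1 s are illustrative, not taken from the full scripts below):

import numpy as np

fs = 44100                                   # sampling rate [Hz]
A, k, f, dur = 0.5, 4.0, 440.0, 1.0          # amplitude, decay [1/s], frequency [Hz], duration [s]

t = np.linspace(0, dur, int(fs * dur), endpoint=False)   # time axis [s]
x = A * np.exp(-k * t) * np.sin(2 * np.pi * f * t)       # x(t) = A e^(-k t) sin(2π f t)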


(2) MIDI parsing and frequency conversion

Each MIDI note number is converted to a frequency with the following formula:

f = 440 × 2^((note − 69) / 12)
  • note = 69 → A4 = 440 Hz
  • note = 60 → C4 (middle C) ≈ 261.6 Hz

In this way, the score information in the MIDI file becomes mathematically defined sound waves.
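
A one-line helper implementing this conversion (the name note_to_freq is ours, for illustration; the full scripts below inline the same expression):

def note_to_freq(note: int) -> float:
    # Equal temperament, A4 (note 69) = 440 Hz
    return 440.0 * 2 ** ((note - 69) / 12)

print(note_to_freq(69))   # 440.0
print(note_to_freq(60))   # 261.625... (C4)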


(3) Polyphonic synthesis over multiple channels

Because a MIDI file can sound several notes at once (chords),
the script allocates up to 80 channels (max_channels = 80).

Once a channel has finished sounding its note, the next note reuses that channel.
This step, called the channel analyzer here, corresponds to the free-channel search find_free_channel in the code.
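
The same logic as in the full scripts, shown here in isolation with 4 channels instead of 80 for brevity:

# Each entry stores the time [s] at which that channel's current note ends
active_channels = [0.0] * 4

def find_free_channel(active_channels, t_now):
    # Return the first channel whose note has already finished, else None
    for i, ch_end in enumerate(active_channels):
        if t_now >= ch_end:
            return i
    return None

ch = find_free_channel(active_channels, t_now=0.0)   # -> 0 (all channels start free)
active_channels[ch] = 0.0 + 0.5                      # mark it busy until t = 0.5 s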


(4) Synthesis procedure

The program synthesizes all notes in the following steps (a minimal sketch of the overlay step follows below):

  1. Detect note_on events
  2. Generate each note as a decaying sine wave
  3. Overlay all notes on a common time axis
  4. Normalize the result and write it to WAV

As a result, the MIDI file is reconstructed as a sum of waves,
and the music emerges from a pure superposition of trigonometric functions.
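
A minimal sketch of steps 2 to 4 (generate decaying sine tones, overlay-add them into one buffer, peak-normalize), with a hand-written two-note "score" standing in for the parsed MIDI data:

import numpy as np

fs = 44100

def tone(f, dur, k=4.0):
    # One decaying sine note
    t = np.linspace(0, dur, int(fs * dur), endpoint=False)
    return np.exp(-k * t) * np.sin(2 * np.pi * f * t)

# (start_time [s], waveform) pairs standing in for the parsed MIDI score
notes = [(0.0, tone(440.0, 0.5)), (0.25, tone(554.37, 0.5))]

output = np.zeros(int(fs * (max(s for s, _ in notes) + 1.0)))
for start, wave in notes:
    i0 = int(start * fs)
    output[i0:i0 + len(wave)] += wave[: len(output) - i0]   # truncate if it runs past the end

output /= np.max(np.abs(output))                             # peak-normalize to [-1, 1]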


4. Characteristics of the Output Sound

Sound element | Mathematical meaning | Perceived character
sine wave | single-frequency oscillation | pure tone
decay e^(−k t) | decay of energy | the sound fades naturally
frequency f | speed of oscillation | pitch
chord (sum of sines) | superposition of frequencies | resonance, thickness, beats
MIDI note sequence | sequence of sine waves | musical melody

In particular, summing many sine waves lets you experience the sum-to-product identity
sin A + sin B = 2 sin((A+B)/2) cos((A−B)/2) as sound.
When two tones of slightly different frequencies are superposed, you can also hear the "wah-wah" beating interference.
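
A minimal sketch of this beating effect (440 Hz and 442 Hz, chosen for illustration, pulse at 442 − 440 = 2 Hz):

import numpy as np

fs = 44100
t = np.linspace(0, 3.0, int(fs * 3.0), endpoint=False)
beat = 0.5 * (np.sin(2 * np.pi * 440.0 * t) + np.sin(2 * np.pi * 442.0 * t))
# By the identity above this equals sin(2π·441·t)·cos(2π·1·t):
# a 441 Hz tone whose amplitude swells and fades twice per second.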


5. How to Run (Google Colab)

  1. Open a new Colab notebook
  2. Paste the script into a code cell and run it
  3. Upload your MIDI file when prompted
  4. musicbox.wav is generated and played back automatically

6. Full Code (four variants: decaying harmonic sine, FM synthesis, FluidSynth + SoundFont guitar, analog-style piano)

# Program Name: midi_to_electronic_musicbox.py
# Creation Date: 20251015
# Overview: Generate electronic music box sound from MIDI file using harmonic decaying sine synthesis
# Usage: Run in Google Colab → upload MIDI file → playback "musicbox.wav"

!pip install mido numpy scipy

import numpy as np
from mido import MidiFile
from scipy.io.wavfile import write
from google.colab import files
from IPython.display import Audio

# =========================
# パラメータ管理 / Parameters
# =========================
fs = 44100          # Sampling rate [Hz]
decay_rate = 4.0    # Decay coefficient
max_channels = 80   # Polyphony
amplitude = 0.5     # Base amplitude
tempo_default = 120 # Default BPM
harmonics = [1.0, 2.0, 3.0]   # Overtone ratio
harmonic_gain = [1.0, 0.4, 0.25] # Overtone amplitude ratio

# =========================
# MIDIファイルアップロード / Upload
# =========================
uploaded = files.upload()
midi_path = list(uploaded.keys())[0]

# =========================
# 減衰サイン波+倍音 / Decaying harmonic sine
# =========================
def decaying_harmonic_sine(f, dur, amp):
    t = np.linspace(0, dur, int(fs * dur), endpoint=False)
    wave = np.zeros_like(t)
    for h, g in zip(harmonics, harmonic_gain):
        wave += g * np.exp(-decay_rate * t) * np.sin(2 * np.pi * f * h * t)
    return amp * wave / np.max(np.abs(wave))

# =========================
# 空きチャンネル探索 / Find free channel
# =========================
def find_free_channel(active_channels, t_now):
    for i, ch_end in enumerate(active_channels):
        if t_now >= ch_end:
            return i
    return None

# =========================
# MIDI解析 / Parse MIDI
# =========================
midi = MidiFile(midi_path)
ticks_per_beat = midi.ticks_per_beat
tempo = tempo_default
score = []

for track in midi.tracks:
    time_accum = 0
    for msg in track:
        time_accum += msg.time
        if msg.type == 'set_tempo':
            tempo = 60_000_000 / msg.tempo
        if msg.type == 'note_on' and msg.velocity > 0:
            start_time = time_accum / ticks_per_beat * 60 / tempo
            freq = 440.0 * 2 ** ((msg.note - 69) / 12)
            length = 0.5  # note duration
            score.append((start_time, freq, length, msg.velocity))

# =========================
# 波形合成 / Synthesis
# =========================
total_duration = max(t + l for t, f, l, v in score) + 1
output = np.zeros(int(fs * total_duration))
active_channels = [0] * max_channels

for note in score:
    start_time, freq, length, vel = note
    ch = find_free_channel(active_channels, start_time)
    if ch is None:
        continue
    active_channels[ch] = start_time + length
    wave = decaying_harmonic_sine(freq, length, amplitude * vel / 127)
    idx_start = int(start_time * fs)
    idx_end = idx_start + len(wave)
    output[idx_start:idx_end] += wave[: len(output) - idx_start]

# =========================
# 正規化・出力 / Normalize and export
# =========================
output /= np.max(np.abs(output))
write("musicbox.wav", fs, (output * 32767).astype(np.int16))
print("✅ Saved as 'musicbox.wav'. Playing now...")

Audio(output, rate=fs)

# Program Name: midi_to_fm_musicbox.py
# Creation Date: 20251015
# Overview: Generate FM-synthesized electronic music box sound from MIDI file using basic two-operator FM synthesis
# Usage: Run in Google Colab → upload MIDI file → playback "fm_musicbox.wav"

!pip install mido numpy scipy

import numpy as np
from mido import MidiFile
from scipy.io.wavfile import write
from google.colab import files
from IPython.display import Audio

# =========================
# パラメータ管理 / Parameters
# =========================
fs = 44100           # Sampling rate [Hz]
decay_rate = 3.5     # Decay coefficient
max_channels = 80    # Polyphony
amplitude = 0.5      # Base amplitude
tempo_default = 120  # Default BPM

# FM音源パラメータ / FM synthesis parameters
mod_index = 2.0      # Modulation index (depth)
mod_ratio = 2.0      # Modulator-to-carrier frequency ratio

# =========================
# MIDIファイルアップロード / Upload MIDI
# =========================
uploaded = files.upload()
midi_path = list(uploaded.keys())[0]

# =========================
# FM波生成関数 / FM waveform generator
# =========================
def fm_tone(f, dur, amp):
    t = np.linspace(0, dur, int(fs * dur), endpoint=False)
    # 減衰包絡 / Exponential decay envelope
    env = np.exp(-decay_rate * t)
    # FM合成式 / FM synthesis formula
    mod = np.sin(2 * np.pi * f * mod_ratio * t)
    wave = np.sin(2 * np.pi * f * t + mod_index * mod)
    return amp * env * wave

# =========================
# 空きチャンネル探索 / Find free channel
# =========================
def find_free_channel(active_channels, t_now):
    for i, ch_end in enumerate(active_channels):
        if t_now >= ch_end:
            return i
    return None

# =========================
# MIDI解析 / Parse MIDI
# =========================
midi = MidiFile(midi_path)
ticks_per_beat = midi.ticks_per_beat
tempo = tempo_default
score = []

for track in midi.tracks:
    time_accum = 0
    for msg in track:
        time_accum += msg.time
        if msg.type == 'set_tempo':
            tempo = 60_000_000 / msg.tempo
        if msg.type == 'note_on' and msg.velocity > 0:
            start_time = time_accum / ticks_per_beat * 60 / tempo
            freq = 440.0 * 2 ** ((msg.note - 69) / 12)
            length = 0.5
            score.append((start_time, freq, length, msg.velocity))

# =========================
# 波形合成 / FM synthesis rendering
# =========================
total_duration = max(t + l for t, f, l, v in score) + 1
output = np.zeros(int(fs * total_duration))
active_channels = [0] * max_channels

for note in score:
    start_time, freq, length, vel = note
    ch = find_free_channel(active_channels, start_time)
    if ch is None:
        continue
    active_channels[ch] = start_time + length
    wave = fm_tone(freq, length, amplitude * vel / 127)
    idx_start = int(start_time * fs)
    idx_end = idx_start + len(wave)
    output[idx_start:idx_end] += wave[: len(output) - idx_start]

# =========================
# 正規化と出力 / Normalize and save
# =========================
output /= np.max(np.abs(output))
write("fm_musicbox.wav", fs, (output * 32767).astype(np.int16))
print("✅ Saved as 'fm_musicbox.wav'. Playing now...")

Audio(output, rate=fs)

# Program Name: midi_to_guitar_fluidsynth.py
# Creation Date: 20251015
# Overview: Play MIDI file as realistic guitar sound using FluidSynth and SoundFont (.sf2)
# Usage: Run in Google Colab → upload MIDI + SoundFont → playback "guitar.wav"

!apt-get install -y fluidsynth
!pip install midi2audio mido

from midi2audio import FluidSynth
from mido import MidiFile
from google.colab import files
from IPython.display import Audio

# =========================
# パラメータ管理 / Parameters
# =========================
sf2_default = "acoustic_guitar.sf2"  # SoundFont file name
output_wav = "guitar.wav"            # Output file name

# =========================
# ファイルアップロード / Upload files
# =========================
print("Upload your MIDI file:")
uploaded_midi = files.upload()
midi_path = list(uploaded_midi.keys())[0]

print("Upload your SoundFont (.sf2) file (e.g., FluidR3_GM.sf2 or acoustic_guitar.sf2):")
uploaded_sf2 = files.upload()
sf2_path = list(uploaded_sf2.keys())[0]

# =========================
# MIDI→WAV変換 / Convert MIDI to WAV
# =========================
# FluidSynth を利用して SoundFont を適用 / Use SoundFont with FluidSynth
fs = FluidSynth(sf2_path)
fs.midi_to_audio(midi_path, output_wav)

print(f"✅ Saved as '{output_wav}'. Playing now...")
Audio(output_wav)

# Program Name: midi_to_analog_piano.py
# Creation Date: 20251015
# Overview: Generate warm analog-style piano tone from MIDI using detuned harmonic synthesis with natural decay
# Usage: Run in Google Colab → upload MIDI file → playback "analog_piano.wav"

!pip install mido numpy scipy

import numpy as np
from mido import MidiFile
from scipy.io.wavfile import write
from google.colab import files
from IPython.display import Audio

# =========================
# パラメータ管理 / Parameters
# =========================
fs = 44100           # Sampling rate [Hz]
decay_rate = 2.8     # Decay coefficient (slow, warm decay)
max_channels = 80    # Polyphony
amplitude = 0.6      # Base amplitude
tempo_default = 120  # Default BPM
detune_cents = 3.0   # Detune width for analog warmth
attack_time = 0.03   # Attack envelope [s]
harmonics = [1.0, 2.0, 3.0, 4.0]   # Harmonic frequencies
harmonic_gain = [1.0, 0.4, 0.25, 0.1]  # Relative gains

# =========================
# MIDIファイルアップロード / Upload MIDI
# =========================
uploaded = files.upload()
midi_path = list(uploaded.keys())[0]

# =========================
# アナログ風ピアノ波形生成 / Analog-style piano waveform
# =========================
def analog_piano_tone(f, dur, amp):
    # 時間軸生成 / Time vector
    t = np.linspace(0, dur, int(fs * dur), endpoint=False)
    wave = np.zeros_like(t)
    # アナログ揺らぎのための微小デチューン / Slight detuning for analog warmth
    detune = f * (2 ** (detune_cents / 1200) - 1)
    # 各倍音を合成 / Sum harmonics with detuning
    for h, g in zip(harmonics, harmonic_gain):
        base = np.sin(2 * np.pi * f * h * t)
        slight = np.sin(2 * np.pi * (f * h + detune) * t)
        wave += g * (base + slight) * 0.5
    # アタック+減衰包絡 / Attack + decay envelope
    env = np.exp(-decay_rate * t)
    attack_len = int(fs * attack_time)
    env[:attack_len] *= np.linspace(0, 1, attack_len)
    wave *= env
    return amp * wave / np.max(np.abs(wave))

# =========================
# 空きチャンネル探索 / Find free channel
# =========================
def find_free_channel(active_channels, t_now):
    for i, ch_end in enumerate(active_channels):
        if t_now >= ch_end:
            return i
    return None

# =========================
# MIDI解析 / Parse MIDI
# =========================
midi = MidiFile(midi_path)
ticks_per_beat = midi.ticks_per_beat
tempo = tempo_default
score = []

for track in midi.tracks:
    time_accum = 0
    for msg in track:
        time_accum += msg.time
        if msg.type == 'set_tempo':
            tempo = 60_000_000 / msg.tempo
        if msg.type == 'note_on' and msg.velocity > 0:
            start_time = time_accum / ticks_per_beat * 60 / tempo
            freq = 440.0 * 2 ** ((msg.note - 69) / 12)
            length = 1.0
            score.append((start_time, freq, length, msg.velocity))

# =========================
# 波形合成 / Synthesis
# =========================
total_duration = max(t + l for t, f, l, v in score) + 1
output = np.zeros(int(fs * total_duration))
active_channels = [0] * max_channels

for note in score:
    start_time, freq, length, vel = note
    ch = find_free_channel(active_channels, start_time)
    if ch is None:
        continue
    active_channels[ch] = start_time + length
    wave = analog_piano_tone(freq, length, amplitude * vel / 127)
    idx_start = int(start_time * fs)
    idx_end = idx_start + len(wave)
    output[idx_start:idx_end] += wave[: len(output) - idx_start]

# =========================
# 正規化・出力 / Normalize and export
# =========================
output /= np.max(np.abs(output))
write("analog_piano.wav", fs, (output * 32767).astype(np.int16))
print("✅ Saved as 'analog_piano.wav'. Playing now...")

Audio(output, rate=fs)