Baseline CAD2 Task2#
Rebalance Classical Ensemble#
Image by Port(u*o)s or Phil Ortenau from Wikimedia Commons
In a pilot study, we found listeners with hearing loss liked the ability to rebalance the different instruments in an ensemble.
In this second round of Cadenza Challenges, we are presenting a challenge where entrants need to process and rebalance the levels of the instruments in an ensemble of two to five instruments.
More details about the challenge can be found on the Cadenza website.
This tutorial walks you through the process of running the Rebalance Classical Ensemble baseline using the shell interface.
Create the environment#
We first need to install the Clarity package. For this, we use the version tagged for the challenge, v0.6.1.
Setting the Location of the Project#
For convenience, we are setting an environment variable with the location of the root working directory of the project. This variable will be used in various places throughout the tutorial. Please change this value to reflect where you have installed this notebook on your system.
import os
os.environ["NBOOKROOT"] = os.getcwd()
os.environ["NBOOKROOT"] = f"{os.environ['NBOOKROOT']}/.."
%cd {os.environ['NBOOKROOT']}
from IPython.display import clear_output
import os
import sys
print("Cloning git repo...")
!git clone --depth 1 --branch v0.6.1 https://github.com/claritychallenge/clarity.git
clear_output()
print("Installing the pyClarity...\n")
%cd clarity
%pip install -e .
sys.path.append(f'{os.getenv("NBOOKROOT")}/clarity')
clear_output()
print("Repository installed")
Repository installed
%cd {os.environ['NBOOKROOT']}/clarity/recipes/cad2/task2
!pip install -r requirements.txt
clear_output()
Get the demo data#
The next step is to download a demo data package that will help demonstrate the process. This package has the same structure as the official data package, so it will help you understand how the files are organized.
Before continuing, it is recommended that you familiarize yourself with the data structure and content, which you can find on the website.
Now, let’s download the data…
%cd {os.environ['NBOOKROOT']}
!gdown 1UqiqwYJuyC1o-C14DpVL4QYncsOGCvHF
!tar -xf cad2_demo_data.tar.xz
clear_output()
print("Data installed")
Data installed
Changing Working Directory#
Next, we change the working directory to the location of the shell scripts we wish to run.
%cd {os.environ['NBOOKROOT']}/clarity/recipes/cad2/task2/baseline
/home/gerardoroadabike/Extended/Projects/cadenza_tutorials/clarity/recipes/cad2/task2/baseline
Let’s save the path to the dataset in root_data
root_data = f"{os.environ['NBOOKROOT']}/cadenza_data_demo/cad2/task2"
!ls -l {root_data}
total 8
drwxr-xr-x 3 gerardoroadabike gerardoroadabike 4096 Aug 22 15:35 audio
drwxr-xr-x 2 gerardoroadabike gerardoroadabike 4096 Sep 3 16:52 metadata
Running the Baseline#
The enhancement baseline employs ConvTasNet models, one per instrument, to separate the individual instruments from the mixture. The models were trained for both causal and non-causal cases. The pre-trained models are stored on Hugging Face. The causality is defined in the `config.yaml`.
The config parameters#
The parameters of the baseline are defined in the `config.yaml` file.
First, it configures the paths to the metadata and audio files, and the location for the output files.
path:
  root: ???  # Set to the root of the dataset
  metadata_dir: ${path.root}/metadata
  music_dir: ${path.root}/audio
  gains_file: ${path.metadata_dir}/gains.json
  listeners_file: ${path.metadata_dir}/listeners.valid.json
  enhancer_params_file: ${path.metadata_dir}/compressor_params.valid.json
  music_file: ${path.metadata_dir}/music.valid.json
  scenes_file: ${path.metadata_dir}/scenes.valid.json
  scene_listeners_file: ${path.metadata_dir}/scene_listeners.valid.json
  exp_folder: ./exp  # folder to store enhanced signals and final results
- `path.root`: must be set to the location of the dataset.
- `path.exp_folder`: by default, the folder name uses the causality parameter, but this can be changed according to your requirements.
The next parameters set the different sample rates:
input_sample_rate: 44100 # sample rate of the input mixture
remix_sample_rate: 32000 # sample rate for the output remixed signal
HAAQI_sample_rate: 24000 # sample rate for computing HAAQI score
The HAAQI sample rate is used when computing the HAAQI scores in the evaluation.
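To make the relationship between these rates concrete, here is a minimal sketch of a resampling step using the same `resample` helper that appears later in this tutorial; the stereo signal is random data, purely for illustration:

import numpy as np
from clarity.utils.signal_processing import resample

input_sample_rate = 44100   # rate of the input mixture
remix_sample_rate = 32000   # rate of the output remixed signal

# One second of a hypothetical stereo remix at the input rate
enhanced_signal = np.random.randn(input_sample_rate, 2)

# The remixed output is written at 32 kHz
output_signal = resample(enhanced_signal, input_sample_rate, remix_sample_rate)
print(output_signal.shape)  # expected: (32000, 2)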
The next parameters are related to the separation and how it operates:
separator:
  force_redownload: True
  add_residual: 0.1
  causality: noncausal
  device: ~
  separation:
    number_sources: 2
    segment: 6.0
    overlap: 0.1
    sample_rate: ${input_sample_rate}
- `separator.force_redownload`: whether to force redownloading the models.
- `separator.add_residual`: proportion (a value between 0 and 1) of the rest of the instruments to add back to the estimated instrument.
- `separator.causality`: this is where we set the causality.
- `separator.separation`: these parameters are used to separate long signals using fades and overlaps.
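To illustrate `separator.add_residual`, here is a minimal sketch under the assumption that the residual is simply the mixture minus the estimated source; the actual logic lives inside `decompose_signal`, so treat this as a conceptual example only:

import numpy as np

def add_residual_back(estimate: np.ndarray, mixture: np.ndarray, add_residual: float) -> np.ndarray:
    # Assumption for this sketch: the residual is everything in the
    # mixture that is not the estimated source.
    residual = mixture - estimate
    return estimate + add_residual * residual

mixture = np.random.randn(1000, 2)   # hypothetical stereo mixture
estimate = np.random.randn(1000, 2)  # hypothetical estimated source
softened = add_residual_back(estimate, mixture, add_residual=0.1)
print(softened.shape)  # (1000, 2)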
The `enhancer` parameters are common parameters used by the multiband dynamic range compressor.
enhancer:
  crossover_frequencies: [ 353.55, 707.11, 1414.21, 2828.43, 5656.85 ]  # [250, 500, 1000, 2000, 4000] * sqrt(2)
  attack: [ 11, 11, 14, 13, 11, 11 ]
  release: [ 80, 80, 80, 80, 100, 100 ]
  threshold: [ -30, -30, -30, -30, -30, -30 ]
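The comment in the config shows where the crossover frequencies come from; the short check below reproduces them:

import numpy as np

centre_frequencies = np.array([250, 500, 1000, 2000, 4000])
crossover_frequencies = centre_frequencies * np.sqrt(2)
print(crossover_frequencies.round(2))  # [ 353.55  707.11 1414.21 2828.43 5656.85]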
You are free to change these parameters if you believe they may improve the signal for the listener panel. However, the objective evaluation will use the parameters given above, and your changes may result in lower HAAQI scores, as this metric is based on the correlation between the enhanced and reference signals.
The last parameters set some of the evaluation configurations:
evaluate:
  set_random_seed: True
  small_test: False
  batch_size: 1  # Number of batches
  batch: 0  # Batch number to evaluate
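One plausible reading of `batch` and `batch_size` is a round-robin split of the scene-listener pairs across parallel evaluation jobs, as sketched below with invented pairs; check `evaluate.py` for the exact logic:

# Hypothetical scene-listener pairs, for illustration only
pairs = [("S50027", "L5008"), ("S50084", "L5079"), ("S50099", "L5001")]

batch_size = 2  # number of batches
batch = 0       # which batch this run evaluates

my_pairs = pairs[batch::batch_size]
print(my_pairs)  # [('S50027', 'L5008'), ('S50099', 'L5001')]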
Running enhance.py#
The process is as follows:

- Load the different metadata files into dictionaries.
- Load the causal or non-causal separation models into a dictionary using the `load_separation_model()` method.
- Create an instance of a `MultibandCompressor`.
- Load the scenes and the listeners per scene.
Then, the script processes one scene-listener pair at a time.
Processing a scene goes as follows:
Get the compressor parameters for the listener.
# Get the listener's compressor params
mbc_params_listener: dict[str, dict] = {"left": {}, "right": {}}

for ear in ["left", "right"]:
    mbc_params_listener[ear]["release"] = config.enhancer.release
    mbc_params_listener[ear]["attack"] = config.enhancer.attack
    mbc_params_listener[ear]["threshold"] = config.enhancer.threshold

mbc_params_listener["left"]["ratio"] = enhancer_params[listener_id]["cr_l"]
mbc_params_listener["right"]["ratio"] = enhancer_params[listener_id]["cr_r"]
mbc_params_listener["left"]["makeup_gain"] = enhancer_params[listener_id]["gain_l"]
mbc_params_listener["right"]["makeup_gain"] = enhancer_params[listener_id]["gain_r"]
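For a single listener, the resulting structure looks roughly like the sketch below; the `ratio` and `makeup_gain` values are invented for illustration, since in practice they come from the listener's entry in the compressor params file:

mbc_params_listener = {
    "left": {
        "attack": [11, 11, 14, 13, 11, 11],
        "release": [80, 80, 80, 80, 100, 100],
        "threshold": [-30, -30, -30, -30, -30, -30],
        "ratio": [1.5, 1.8, 2.0, 2.5, 3.0, 3.5],        # illustrative cr_l values
        "makeup_gain": [0.0, 1.0, 2.0, 3.0, 4.0, 5.0],  # illustrative gain_l values
    },
    "right": {
        "attack": [11, 11, 14, 13, 11, 11],
        "release": [80, 80, 80, 80, 100, 100],
        "threshold": [-30, -30, -30, -30, -30, -30],
        "ratio": [1.4, 1.7, 2.1, 2.4, 2.9, 3.4],        # illustrative cr_r values
        "makeup_gain": [0.5, 1.5, 2.5, 3.5, 4.5, 5.5],  # illustrative gain_r values
    },
}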
Get the instruments composing the mixture.
source_list = {
    f"source_{idx}": s["instrument"].split("_")[0]
    for idx, s in enumerate(songs[song_name].values(), 1)
    if "Mixture" not in s["instrument"]
}
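To make the comprehension concrete, here is a sketch with hypothetical song metadata; the instrument naming (e.g. violin_1) is an assumption for illustration:

# Hypothetical metadata for one song
song_tracks = {
    "track_1": {"instrument": "violin_1"},
    "track_2": {"instrument": "violin_2"},
    "track_3": {"instrument": "viola_1"},
    "mixture": {"instrument": "Mixture"},
}

source_list = {
    f"source_{idx}": s["instrument"].split("_")[0]
    for idx, s in enumerate(song_tracks.values(), 1)
    if "Mixture" not in s["instrument"]
}
print(source_list)  # {'source_1': 'violin', 'source_2': 'violin', 'source_3': 'viola'}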
Load the signal to process and select the requested segment.
mixture_signal, mix_sample_rate = read_flac_signal(
    filename=Path(config.path.music_dir) / songs[song_name]["mixture"]["track"]
)
assert mix_sample_rate == config.input_sample_rate

start = songs[song_name]["mixture"]["start"]
end = start + songs[song_name]["mixture"]["duration"]
mixture_signal = mixture_signal[
    int(start * mix_sample_rate) : int(end * mix_sample_rate),
    :,
]
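As a worked example of the slicing above, with invented start and duration values:

start, duration, sample_rate = 12.5, 30.0, 44100  # invented values

first_sample = int(start * sample_rate)
last_sample = int((start + duration) * sample_rate)
print(first_sample, last_sample)  # 551250 1874250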
Estimate the stems of the mixture. The arguments to `decompose_signal` are:

- `model`: dictionary with the separation models.
- `signal`: the original mixture with channels first.
- `signal_sample_rate`: sample rate of the original mixture.
- `device`: cpu or cuda.
- `sources_list`: dictionary with the instruments in the mixture.
- `listener`: the listener to process.
- `add_residual`: proportion of the rest of the instruments to add back to the estimated instrument.
stems: dict[str, ndarray] = decompose_signal(
    model=separation_models,
    signal=mixture_signal.T,
    signal_sample_rate=config.input_sample_rate,
    device=device,
    sources_list=source_list,
    listener=listener,
    add_residual=config.separator.add_residual,
)
Apply the gains. The baseline cannot separate two lines of the same instrument, i.e., it cannot separate a violin 1 and a violin 2 in the same mixture. Therefore, when two lines of the same instrument are present in the same mixture, the gain for each of those lines becomes the average between them.
Example:

original_gains = {'violin 1': 3, 'violin 2': 10, 'viola': 6}
violin_avg = (3 + 10) / 2 = 6.5
new_gains = {'violin 1': 6.5, 'violin 2': 6.5, 'viola': 6}
# Apply gains to sources
gain_scene = check_repeated_source(gains[scene["gain"]], source_list)
stems = apply_gains(stems, config.input_sample_rate, gain_scene)
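The sketch below illustrates the averaging behaviour described above, assuming gains keyed by names such as 'violin 1'; the real `check_repeated_source` helper also consults the source list, so treat this as a simplified illustration:

from collections import defaultdict

def average_repeated_sources(gains: dict[str, float]) -> dict[str, float]:
    # Group the gain entries by instrument, dropping the trailing line number
    groups: dict[str, list[str]] = defaultdict(list)
    for name in gains:
        groups[name.rsplit(" ", 1)[0]].append(name)

    # Replace the gains of repeated lines with their average
    averaged = dict(gains)
    for names in groups.values():
        if len(names) > 1:
            avg = sum(gains[n] for n in names) / len(names)
            for n in names:
                averaged[n] = avg
    return averaged

print(average_repeated_sources({"violin 1": 3, "violin 2": 10, "viola": 6}))
# {'violin 1': 6.5, 'violin 2': 6.5, 'viola': 6}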
Remix the estimated sources back to stereo at the requested levels.
# Downmix to stereo
enhanced_signal = remix_stems(stems)
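A minimal sketch of this step, assuming the remix is simply the sum of the gain-adjusted stereo stems (see `remix_stems` in clarity for the actual implementation):

import numpy as np

stems = {
    "source_1": np.random.randn(1000, 2),  # hypothetical stereo stems
    "source_2": np.random.randn(1000, 2),
}
enhanced_signal = np.sum(list(stems.values()), axis=0)
print(enhanced_signal.shape)  # (1000, 2)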
Adjust the level of the new mixture to roughly -40 dB and apply the compressor.
# adjust levels to get roughly -40 dB before compressor
enhanced_signal = adjust_level(enhanced_signal, gains[scene["gain"]])

# Apply compressor
enhanced_signal = process_remix_for_listener(
    signal=enhanced_signal,
    enhancer=enhancer,
    enhancer_params=mbc_params_listener,
    listener=listener,
)
Save the enhanced signal in FLAC format. The signals are saved in the `enhanced_signals` directory within the experiment path `path.exp_folder` defined in the `config.yaml`.
We can call the `enhance.py` script now. When calling this script, make sure that you are loading the correct files. In the shell, we can call the enhancer using the demo data as:
python enhance.py \
path.root={root_data} \
'path.listeners_file=${path.metadata_dir}/listeners.demo.json' \
'path.enhancer_params_file=${path.metadata_dir}/compressor_params.demo.json' \
'path.scenes_file=${path.metadata_dir}/scenes.demo.json' \
'path.scene_listeners_file=${path.metadata_dir}/scene_listeners.demo.json' \
'path.music_file=${path.metadata_dir}/music.demo.json'
!python enhance.py path.root={root_data} path.listeners_file={root_data}/metadata/listeners.demo.json path.enhancer_params_file={root_data}/metadata/compressor_params.demo.json path.scenes_file={root_data}/metadata/scenes.demo.json path.scene_listeners_file={root_data}/metadata/scene_listeners.demo.json path.music_file={root_data}/metadata/music.demo.json
[2024-09-05 10:18:36,071][__main__][INFO] - Loading model cadenzachallenge/ConvTasNet_Bassoon_NonCausal
config.json: 100%|█████████████████████████████| 204/204 [00:00<00:00, 1.74MB/s]
model.safetensors: 100%|███████████████████| 26.4M/26.4M [00:00<00:00, 30.9MB/s]
[2024-09-05 10:18:38,013][__main__][INFO] - Loading model cadenzachallenge/ConvTasNet_Cello_NonCausal
config.json: 100%|█████████████████████████████| 204/204 [00:00<00:00, 1.19MB/s]
model.safetensors: 100%|████████████████████| 26.4M/26.4M [00:00<00:00, 104MB/s]
[2024-09-05 10:18:39,108][__main__][INFO] - Loading model cadenzachallenge/ConvTasNet_Clarinet_NonCausal
config.json: 100%|█████████████████████████████| 204/204 [00:00<00:00, 2.07MB/s]
model.safetensors: 100%|████████████████████| 26.4M/26.4M [00:00<00:00, 106MB/s]
[2024-09-05 10:18:40,283][__main__][INFO] - Loading model cadenzachallenge/ConvTasNet_Flute_NonCausal
config.json: 100%|█████████████████████████████| 204/204 [00:00<00:00, 2.17MB/s]
model.safetensors: 100%|████████████████████| 26.4M/26.4M [00:00<00:00, 110MB/s]
[2024-09-05 10:18:41,247][__main__][INFO] - Loading model cadenzachallenge/ConvTasNet_Oboe_NonCausal
config.json: 100%|█████████████████████████████| 204/204 [00:00<00:00, 2.00MB/s]
model.safetensors: 100%|████████████████████| 26.4M/26.4M [00:00<00:00, 104MB/s]
[2024-09-05 10:18:42,349][__main__][INFO] - Loading model cadenzachallenge/ConvTasNet_Sax_NonCausal
config.json: 100%|█████████████████████████████| 204/204 [00:00<00:00, 2.09MB/s]
model.safetensors: 100%|████████████████████| 26.4M/26.4M [00:00<00:00, 108MB/s]
[2024-09-05 10:18:43,440][__main__][INFO] - Loading model cadenzachallenge/ConvTasNet_Viola_NonCausal
config.json: 100%|█████████████████████████████| 204/204 [00:00<00:00, 2.12MB/s]
model.safetensors: 100%|████████████████████| 26.4M/26.4M [00:00<00:00, 111MB/s]
[2024-09-05 10:18:44,421][__main__][INFO] - Loading model cadenzachallenge/ConvTasNet_Violin_NonCausal
config.json: 100%|█████████████████████████████| 204/204 [00:00<00:00, 1.94MB/s]
model.safetensors: 100%|████████████████████| 26.4M/26.4M [00:00<00:00, 110MB/s]
[2024-09-05 10:18:46,617][__main__][INFO] - [001/002] Processing S50027: song op1_1_002 for listener L5008
[2024-09-05 10:20:31,040][clarity.utils.flac_encoder][WARNING] - Writing enhanced_signals/valid/S50027_L5008_remix.flac: 31 samples clipped
[2024-09-05 10:20:31,071][__main__][INFO] - [002/002] Processing S50084: song sq7123582_2_006 for listener L5079
/home/gerardoroadabike/anaconda3/envs/tutorials/lib/python3.11/site-packages/numpy/core/fromnumeric.py:3504: RuntimeWarning: Mean of empty slice.
return _methods._mean(a, axis=axis, dtype=dtype,
/home/gerardoroadabike/anaconda3/envs/tutorials/lib/python3.11/site-packages/numpy/core/_methods.py:129: RuntimeWarning: invalid value encountered in scalar divide
ret = ret.dtype.type(ret / rcount)
[2024-09-05 10:22:11,175][__main__][INFO] - Done!
!ls {os.environ['NBOOKROOT']}/clarity/recipes/cad2/task2/baseline/exp/enhanced_signals/valid
S50027_L5008_remix.flac S50084_L5079_remix.flac
Let’s listen to these signals.
from pathlib import Path
from clarity.utils.flac_encoder import read_flac_signal
from clarity.utils.signal_processing import resample
import IPython.display as ipd
audio_path = Path(os.environ['NBOOKROOT']) / "clarity/recipes/cad2/task2/baseline/exp/enhanced_signals/valid"
audio_files = [f for f in audio_path.glob('*') if f.suffix == '.flac']
for file_to_play in audio_files:
    signal, sample_rate = read_flac_signal(file_to_play)
    signal = resample(signal, sample_rate, 16000)
    print(file_to_play.name)
    ipd.display(ipd.Audio(signal.T, rate=16000))
S50084_L5079_remix.flac
S50027_L5008_remix.flac
Running evaluate.py#
Now that we have enhanced audio signals, we can use the `evaluate.py` script to generate HAAQI scores for them. The evaluation should run with the same parameters as the enhancement.
! python evaluate.py path.root={root_data} path.listeners_file={root_data}/metadata/listeners.demo.json path.enhancer_params_file={root_data}/metadata/compressor_params.demo.json path.scenes_file={root_data}/metadata/scenes.demo.json path.scene_listeners_file={root_data}/metadata/scene_listeners.demo.json path.music_file={root_data}/metadata/music.demo.json
[2024-09-05 10:59:15,986][__main__][INFO] - Evaluating from enhanced_signals directory
[2024-09-05 10:59:15,995][__main__][INFO] - [001/002] Evaluating S50027 for listener L5008
[2024-09-05 10:59:53,569][__main__][INFO] - [002/002] Evaluating S50084 for listener L5079
[2024-09-05 11:00:29,695][__main__][INFO] - Done!
The evaluation scores are saved in `path.exp_folder`/scores.csv.
import pandas as pd
pd.read_csv(f"{os.environ['NBOOKROOT']}/clarity/recipes/cad2/task2/baseline/exp/scores.csv")
|   | scene  | song            | listener | left_haaqi | right_haaqi | avg_haaqi |
|---|--------|-----------------|----------|------------|-------------|-----------|
| 0 | S50027 | op1_1_002       | L5008    | 0.696125   | 0.720061    | 0.708093  |
| 1 | S50084 | sq7123582_2_006 | L5079    | 0.596429   | 0.578859    | 0.587644  |
The HAAQI scores are computed for the left and right ears, and the average is also saved.
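We can confirm this from the scores file itself. The small sanity check below (a sketch, reusing the path from the cell above) verifies that avg_haaqi is the mean of the per-ear scores:

import os
import pandas as pd

scores = pd.read_csv(f"{os.environ['NBOOKROOT']}/clarity/recipes/cad2/task2/baseline/exp/scores.csv")

# avg_haaqi should equal the mean of the left and right scores
assert ((scores["left_haaqi"] + scores["right_haaqi"]) / 2 - scores["avg_haaqi"]).abs().max() < 1e-6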