{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "tC4jbs5yqzYw" }, "source": [ "# Baseline CAD2 Task1\n", "\n", "## Lyrics Intelligibility\n", "\n", "```{image} ../_static/figures/pop_rock_band.jpg\n", ":alt: hostile band rock guitar show\n", ":class: bg-primary mb-1\n", ":width: 700px\n", ":align: center\n", "```\n", "Image by Marcísio Coelho Mac Hostile from Pixabay\n" ] }, { "cell_type": "markdown", "metadata": { "id": "ffET3AfZFKPt" }, "source": [ "Mishearings of lyrics are very common, and one can find numerous examples on the internet, from websites dedicated to misheard lyrics to stand-up comedies that exploit this in a humorous way.\n", "\n", "However, this is a significant issue for those with hearing loss {cite}`greasley2020music`.\n", "\n", "In this second round of Cadenza Challenges, we are presenting a challenge where entrants need to process a pop/rock music signal and increase its intelligibility with minimal loss of audio quality.\n", "\n", "More details about the challenge can be found on the [Cadenza website](https://cadenzachallenge.org/docs/cadenza2/intro). \n", "\n", "This tutorial walks you through the process of running the lyrics intelligibility baseline using the shell interface." ] }, { "cell_type": "markdown", "metadata": { "id": "pajpylpbFud6" }, "source": [ "## Create the environment\n", "\n", "We first need to install the Clarity package. The tag version for CAD2 is **v0.6.1**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setting the Location of the Project\n", "\n", "For convenience, we are setting an environment variable with the location of the root working directory of the project. This variable will be used in various places throughout the tutorial. Please change this value to reflect where you have installed this notebook on your system." 
] }, { "cell_type": "code", "execution_count": 4, "metadata": { "ExecuteTime": { "end_time": "2024-09-03T14:50:06.356397Z", "start_time": "2024-09-03T14:50:06.350656Z" } }, "outputs": [ { "data": { "text/plain": [ "'/home/gerardoroadabike/Extended/Projects/cadenza_tutorials/cad2/..'" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import os\n", "os.environ[\"NBOOKROOT\"] = os.getcwd()\n", "os.environ[\"NBOOKROOT\"] = f\"{os.environ['NBOOKROOT']}/..\"\n", "os.environ['NBOOKROOT']" ] }, { "cell_type": "code", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "zL0yLZUvFr9P", "outputId": "7434b909-ac71-4f3f-c163-2d5870317e9c" }, "source": [ "from IPython.display import clear_output\n", "\n", "import os\n", "import sys\n", "\n", "print(\"Cloning git repo...\")\n", "!git clone --depth 1 --branch v0.6.1 https://github.com/claritychallenge/clarity.git\n", "\n", "clear_output()" ], "execution_count": 1, "outputs": [] }, { "cell_type": "code", "execution_count": 6, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Repository installed\n" ] } ], "source": [ "print(\"Installing pyClarity...\\n\")\n", "%cd clarity\n", "%pip install -e .\n", "\n", "sys.path.append(f'{os.getenv(\"NBOOKROOT\")}/clarity')\n", "\n", "clear_output()\n", "print(\"Repository installed\")" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "scrolled": true }, "outputs": [], "source": [ "%cd {os.environ['NBOOKROOT']}/clarity/recipes/cad2/task1\n", "!pip install -r requirements.txt\n", "clear_output()" ] }, { "cell_type": "markdown", "metadata": { "id": "8pXJVSt-F-NN" }, "source": [ "## Get the demo data\n", "\n", "The next step is to download a demo data package that will help demonstrate the process. This package has the same structure as the official data package, so it will help you understand how the files are organized.\n", "\n", "Before continuing, it is recommended that you familiarize yourself with the data structure and content, which you can find on the [website](https://cadenzachallenge.org/docs/cadenza2/Lyric%20Intelligibility/lyric_data).\n", "\n", "Now, let's download the data..." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "TpG-bGs8Fzgl", "outputId": "438666a1-ac21-4532-a340-baefee85202d" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Data installed\n" ] } ], "source": [ "%cd {os.environ['NBOOKROOT']}\n", "!gdown 1UqiqwYJuyC1o-C14DpVL4QYncsOGCvHF\n", "!tar -xf cad2_demo_data.tar.xz\n", "\n", "clear_output()\n", "print(\"Data installed\")" ] }, { "cell_type": "markdown", "metadata": { "id": "a0gQKwx8GnfD" }, "source": [ "## Changing Working Directory\n", "\n", "Next, we change the working directory to the location of the shell scripts we wish to run."
] }, { "cell_type": "code", "execution_count": 16, "metadata": { "ExecuteTime": { "end_time": "2024-09-03T14:51:40.107328Z", "start_time": "2024-09-03T14:51:40.098930Z" }, "colab": { "base_uri": "https://localhost:8080/", "height": 53 }, "id": "fopV37z6GSoO", "outputId": "791191b3-6176-427c-bc62-63cd45196b57" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "/home/gerardoroadabike/Extended/Projects/cadenza_tutorials/clarity/recipes/cad2/task1/baseline\n" ] } ], "source": [ "%cd {os.environ['NBOOKROOT']}/clarity/recipes/cad2/task1/baseline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's save the path to the dataset in `root_data`" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "ExecuteTime": { "end_time": "2024-09-03T14:51:44.722025Z", "start_time": "2024-09-03T14:51:44.024071Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "total 8\n", "drwxr-xr-x 3 gerardoroadabike gerardoroadabike 4096 Aug 23 08:37 audio\n", "drwxr-xr-x 2 gerardoroadabike gerardoroadabike 4096 Sep 3 16:11 metadata\n" ] } ], "source": [ "root_data = f\"{os.environ['NBOOKROOT']}/cadenza_data_demo/cad2/task1\"\n", "!ls -l {root_data}" ] }, { "cell_type": "markdown", "metadata": { "id": "jWFEskB3HgGM" }, "source": [ "## Running the Baseline\n", "\n", "The `enhancement` baseline employs a ConvTasNet model to separate the lyrics from the background accompaniment. The model was trained for both the causal and non-causal cases, and the pre-trained models are stored on Hugging Face. The causality is defined in the `config.yaml`.\n", "\n", "### The config parameters\n", "\n", "The parameters of the baseline are defined in the `config.yaml` file.\n", "\n", "---\n", "\n", "First, it configures the paths to the metadata, the audio files and the location of the output files.\n", "\n", "```yaml\n", "path:\n", " root: ??? # Set to the root of the dataset\n", " metadata_dir: ${path.root}/metadata\n", " music_dir: ${path.root}/audio\n", " musics_file: ${path.metadata_dir}/music.valid.json\n", " alphas_file: ${path.metadata_dir}/alpha.json\n", " listeners_file: ${path.metadata_dir}/listeners.valid.json\n", " enhancer_params_file: ${path.metadata_dir}/compressor_params.valid.json\n", " scenes_file: ${path.metadata_dir}/scene.valid.json\n", " scene_listeners_file: ${path.metadata_dir}/scene_listeners.valid.json\n", " exp_folder: ./exp_${separator.causality}\n", "```\n", "\n", "* `path.root`: must be set to the location of the dataset.\n", "* `exp_folder`: by default, the folder name includes the causality parameter,
but this can be changed to suit your requirements.\n", "\n", "---\n", "\n", "The next parameters are the different sample rates:\n", "\n", "```yaml\n", "input_sample_rate: 44100 # sample rate of the input mixture\n", "remix_sample_rate: 44100 # sample rate for the output remixed signal\n", "HAAQI_sample_rate: 24000 # sample rate for computing HAAQI score\n", "```\n", "\n", "The HAAQI sample rate is used when computing the HAAQI scores in the evaluation.\n", "\n", "---\n", "The next parameters are related to the separation model and how it operates:\n", "\n", "```yaml\n", "separator:\n", " causality: causal\n", " device: ~\n", " separation:\n", " number_sources: 2\n", " segment: 6.0\n", " overlap: 0.1\n", " sample_rate: ${input_sample_rate}\n", "```\n", "\n", "* `separator.causality`: this is where we set the causality.\n", "* `separator.separation`: these parameters are used to separate long signals in overlapping segments with fades.\n", "\n", "--- \n", "The `enhancer` parameters are the amplification parameters used by the multiband dynamic range compressor that do not directly depend on the listener.\n", "\n", "```yaml\n", "enhancer:\n", " crossover_frequencies: [ 353.55, 707.11, 1414.21, 2828.43, 5656.85 ] # [250, 500, 1000, 2000, 4000] * sqrt(2)\n", " attack: [ 11, 11, 14, 13, 11, 11 ]\n", " release: [ 80, 80, 80, 80, 100, 100 ]\n", " threshold: [ -30, -30, -30, -30, -30, -30 ]\n", "```\n", "\n", "You are free to change these parameters if you believe this may improve the signals for the listener panel. However, take into consideration that the objective evaluation uses these parameters. This means that any changes may result in lower objective HAAQI scores, as this metric is based on the correlation between the enhanced and reference signals.\n", "\n", "---\n", "\n", "The last parameters are the evaluation configurations:\n", "\n", "```yaml\n", "evaluate:\n", " whisper_version: base.en\n", " set_random_seed: True\n", " small_test: False\n", " save_intermediate: False\n", " equiv_0db_spl: 100\n", " batch_size: 1 # Number of batches\n", " batch: 0 # Batch number to evaluate\n", "```\n", "\n", "`whisper_version` indicates which version of Whisper is used for the intelligibility metric. The objective evaluation will employ the `base.en` version." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Running enhance.py\n", "\n", "The first steps in the script are:\n", "\n", "1. Load the different metadata files into dictionaries.\n", "2. Load the causal or non-causal separation model using the method `load_separation_model()`.\n", "3. Create an instance of a `MultibandCompressor`.\n", "4. Load the scenes and the scene listeners." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, the script processes one scene-listener pair at a time." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1. Load the original mixture and select the requested segment\n", "```Python\n", "input_mixture, input_sample_rate = read_flac_signal(\n", " Path(config.path.music_dir)\n", " / songs[scene[\"segment_id\"]][\"path\"]\n", " / \"mixture.flac\"\n", ")\n", "start_sample = int(\n", " songs[scene[\"segment_id\"]][\"start_time\"] * config.input_sample_rate\n", ")\n", "end_time = int(\n", " (songs[scene[\"segment_id\"]][\"end_time\"]) * config.input_sample_rate\n", ")\n", "```\n", "\n", "2. Normalise the mixture to -40 dB LUFS. This is an important step, as it affects how the compressor behaves later (a hedged sketch of this step is given further below).\n", "\n", "\n", "3. 
Separate the vocals from the background.\n", "```Python\n", "est_sources = separate_sources(\n", " separation_model,\n", " input_mixture.T,\n", " device=device,\n", " **config.separator.separation,\n", ")\n", "vocals, accompaniment = est_sources.squeeze(0).cpu().detach().numpy()\n", "```\n", "\n", "4. Remix the sources into a stereo signal using alpha as an input parameter. You are free to modify the `downmix_signal` function according to your approach. \n", "```Python\n", "enhanced_signal = downmix_signal(vocals, accompaniment, beta=alpha)\n", "```\n", "\n", "5. Load the compressor parameters for the listener and compress the signal using the multiband compressor and the listener's audiograms\n", "```Python\n", "# Get the listener's compressor params\n", "mbc_params_listener: dict[str, dict] = {\"left\": {}, \"right\": {}}\n", "\n", "for ear in [\"left\", \"right\"]:\n", " mbc_params_listener[ear][\"release\"] = config.enhancer.release\n", " mbc_params_listener[ear][\"attack\"] = config.enhancer.attack\n", " mbc_params_listener[ear][\"threshold\"] = config.enhancer.threshold\n", "mbc_params_listener[\"left\"][\"ratio\"] = enhancer_params[listener_id][\"cr_l\"]\n", "mbc_params_listener[\"right\"][\"ratio\"] = enhancer_params[listener_id][\"cr_r\"]\n", "mbc_params_listener[\"left\"][\"makeup_gain\"] = enhancer_params[listener_id][\n", " \"gain_l\"\n", "]\n", "mbc_params_listener[\"right\"][\"makeup_gain\"] = enhancer_params[listener_id][\n", " \"gain_r\"\n", "]\n", " \n", "enhancer.set_compressors(**mbc_params_listener[\"left\"])\n", "left_enhanced = enhancer(signal=enhanced_signal[0, :])\n", "\n", "enhancer.set_compressors(**mbc_params_listener[\"right\"])\n", "right_enhanced = enhancer(signal=enhanced_signal[1, :])\n", "\n", "enhanced_signal = np.stack((left_enhanced[0], right_enhanced[0]), axis=1)\n", "```\n", "\n", "6. Save the enhanced signals in FLAC format. These are saved in the directory `enhanced_signals` within the experiment path `path.exp_folder` defined in the `config.yaml`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can call the `enhance.py` script now.
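\n", "\n", "Before running it, note that step 2 above (the loudness normalisation) was described without a code snippet, so here is a minimal, hedged sketch of what normalising a signal to -40 dB LUFS could look like. It assumes the third-party `pyloudnorm` package purely for illustration; the baseline may rely on its own utilities, so treat this as a sketch rather than the baseline's implementation.\n", "\n", "```python\n", "# Illustrative only: normalise a (samples, channels) array to -40 dB LUFS.\n", "# Assumes the optional pyloudnorm package; the baseline's own code may differ.\n", "import numpy as np\n", "import pyloudnorm as pyln\n", "\n", "def normalise_to_lufs(signal: np.ndarray, sample_rate: int, target_lufs: float = -40.0) -> np.ndarray:\n", "    meter = pyln.Meter(sample_rate)               # ITU-R BS.1770 loudness meter\n", "    loudness = meter.integrated_loudness(signal)  # current integrated loudness in LUFS\n", "    return pyln.normalize.loudness(signal, loudness, target_lufs)\n", "```\n", "\n", "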
\n", "When calling this script, make sure you are loading the correct files.\n", "\n", "From the shell, we can call the enhancer on the demo data as follows:\n", "```bash\n", "python enhance.py \\\n", " path.root={root_data} \\\n", " 'path.listeners_file=${path.metadata_dir}/listeners.demo.json' \\\n", " 'path.enhancer_params_file=${path.metadata_dir}/compressor_params.demo.json' \\\n", " 'path.scenes_file=${path.metadata_dir}/scene.demo.json' \\\n", " 'path.scene_listeners_file=${path.metadata_dir}/scene_listeners.demo.json' \\\n", " 'path.musics_file=${path.metadata_dir}/music.demo.json'\n", "```" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "ExecuteTime": { "end_time": "2024-09-03T14:58:38.876351Z", "start_time": "2024-09-03T14:57:47.266135Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "config.json: 100%|██████████████████████████████| 204/204 [00:00<00:00, 519kB/s]\n", "model.safetensors: 100%|████████████████████| 43.4M/43.4M [00:00<00:00, 106MB/s]\n", "[2024-09-04 15:33:41,281][__main__][INFO] - [0001/0002] Processing scene-listener pair: ('S50009', 'L5086')\n", "[2024-09-04 15:34:03,629][__main__][INFO] - [0002/0002] Processing scene-listener pair: ('S50077', 'L5042')\n", "[2024-09-04 15:34:24,233][__main__][INFO] - Enhancement completed.\n" ] } ], "source": [ "!python enhance.py path.root={root_data} path.listeners_file={root_data}/metadata/listeners.demo.json path.enhancer_params_file={root_data}/metadata/compressor_params.demo.json path.scenes_file={root_data}/metadata/scene.demo.json path.scene_listeners_file={root_data}/metadata/scene_listeners.demo.json path.musics_file={root_data}/metadata/music.demo.json" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check the output path" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "ExecuteTime": { "end_time": "2024-09-03T14:58:39.580882Z", "start_time": "2024-09-03T14:58:38.880629Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "total 1776\n", "-rw-rw-r-- 1 gerardoroadabike gerardoroadabike 1024842 Sep 4 15:34 S50009_L5086_A0.4_remix.flac\n", "-rw-rw-r-- 1 gerardoroadabike gerardoroadabike 790055 Sep 4 15:34 S50077_L5042_A0.8_remix.flac\n" ] } ], "source": [ "!ls -l {os.environ['NBOOKROOT']}/clarity/recipes/cad2/task1/baseline/exp_causal/enhanced_signals" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "Let's listen to these signals."
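, "\n", "\n", "Before listening, note that the `A0.4` and `A0.8` parts of the file names above are the alpha values used in the remix of step 4. As a purely illustrative aside, below is a minimal sketch of an alpha-weighted downmix; this is an assumption for illustration and not necessarily how the baseline's `downmix_signal` combines the sources.\n", "\n", "```python\n", "# Illustrative only: a simple alpha-weighted remix of the separated stems.\n", "# The baseline's downmix_signal may use a different weighting or rescaling.\n", "import numpy as np\n", "\n", "def simple_downmix(vocals: np.ndarray, accompaniment: np.ndarray, alpha: float) -> np.ndarray:\n", "    # In this sketch, a larger alpha gives more weight to the vocals.\n", "    remix = alpha * vocals + (1 - alpha) * accompaniment\n", "    peak = np.max(np.abs(remix))\n", "    return remix if peak <= 1.0 else remix / peak  # only rescale if the remix would clip\n", "```"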
] }, { "cell_type": "code", "execution_count": 20, "metadata": { "ExecuteTime": { "end_time": "2024-09-03T15:08:06.071877Z", "start_time": "2024-09-03T15:08:05.990435Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "S50077_L5042_A0.8_remix.flac\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "S50009_L5086_A0.4_remix.flac\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from pathlib import Path\n", "from clarity.utils.flac_encoder import read_flac_signal\n", "from clarity.utils.signal_processing import resample\n", "import IPython.display as ipd\n", "\n", "audio_path = Path(os.environ['NBOOKROOT']) / \"clarity/recipes/cad2/task1/baseline/exp_causal/enhanced_signals\" \n", "audio_files = [f for f in audio_path.glob('*') if f.suffix == '.flac']\n", "\n", "for file_to_play in audio_files:\n", " signal, sample_rate = read_flac_signal(file_to_play)\n", " signal = resample(signal, sample_rate, 16000)\n", " print(file_to_play.name)\n", " ipd.display(ipd.Audio(signal.T, rate=16000))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### Running evaluate.py\n", "Now that we have enhanced the signals we can use the `evaluate.py` script to generate the HAAQI and Whisper scores for the signals. It is important to run the evaluation using the same parameters as the enhancement." ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "ExecuteTime": { "end_time": "2024-09-03T14:59:09.644066Z", "start_time": "2024-09-03T14:58:39.819318Z" }, "id": "BFvYiF15LJEu" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[2024-09-04 15:37:56,150][__main__][INFO] - Evaluating from enhanced_signals directory\n", "[2024-09-04 15:37:57,373][__main__][INFO] - [0001/0002] Processing scene-listener pair: ('S50009', 'L5086')\n", "[2024-09-04 15:38:18,446][root][INFO] - Severity level - SEVERE\n", "[2024-09-04 15:38:18,446][root][INFO] - Processing {len(chans)} samples\n", "[2024-09-04 15:38:18,453][root][INFO] - tracking fixed threshold\n", "[2024-09-04 15:38:18,724][root][INFO] - Rescaling: leveldBSPL was 90.5 dB SPL, now 90.5 dB SPL. Target SPL is 90.5 dB SPL.\n", "[2024-09-04 15:38:18,724][root][INFO] - performing outer/middle ear corrections\n", "[2024-09-04 15:38:21,237][root][INFO] - performing outer/middle ear corrections\n", "[2024-09-04 15:38:23,266][root][INFO] - Severity level - MODERATE\n", "[2024-09-04 15:38:23,266][root][INFO] - Processing {len(chans)} samples\n", "[2024-09-04 15:38:23,272][root][INFO] - tracking fixed threshold\n", "[2024-09-04 15:38:23,524][root][INFO] - Rescaling: leveldBSPL was 89.1 dB SPL, now 89.1 dB SPL. Target SPL is 89.1 dB SPL.\n", "[2024-09-04 15:38:23,524][root][INFO] - performing outer/middle ear corrections\n", "[2024-09-04 15:38:26,288][root][INFO] - performing outer/middle ear corrections\n", "[2024-09-04 15:38:28,199][__main__][INFO] - [0002/0002] Processing scene-listener pair: ('S50077', 'L5042')\n", "[2024-09-04 15:38:48,559][root][INFO] - Severity level - SEVERE\n", "[2024-09-04 15:38:48,560][root][INFO] - Processing {len(chans)} samples\n", "[2024-09-04 15:38:48,566][root][INFO] - tracking fixed threshold\n", "[2024-09-04 15:38:48,663][root][INFO] - Rescaling: leveldBSPL was 88.1 dB SPL, now 88.1 dB SPL. 
Target SPL is 88.1 dB SPL.\n", "[2024-09-04 15:38:48,663][root][INFO] - performing outer/middle ear corrections\n", "[2024-09-04 15:38:50,988][root][INFO] - performing outer/middle ear corrections\n", "[2024-09-04 15:38:52,896][root][INFO] - Severity level - SEVERE\n", "[2024-09-04 15:38:52,896][root][INFO] - Processing {len(chans)} samples\n", "[2024-09-04 15:38:52,902][root][INFO] - tracking fixed threshold\n", "[2024-09-04 15:38:52,974][root][INFO] - Rescaling: leveldBSPL was 88.1 dB SPL, now 88.1 dB SPL. Target SPL is 88.1 dB SPL.\n", "[2024-09-04 15:38:52,974][root][INFO] - performing outer/middle ear corrections\n", "[2024-09-04 15:38:55,448][root][INFO] - performing outer/middle ear corrections\n", "[2024-09-04 15:38:57,393][__main__][INFO] - Evaluation completed\n" ] } ], "source": [ "! python evaluate.py path.root={root_data} path.listeners_file={root_data}/metadata/listeners.demo.json path.enhancer_params_file={root_data}/metadata/compressor_params.demo.json path.scenes_file={root_data}/metadata/scene.demo.json path.scene_listeners_file={root_data}/metadata/scene_listeners.demo.json path.musics_file={root_data}/metadata/music.demo.json" ] }, { "cell_type": "markdown", "metadata": { "id": "K9yB6XdlOCRp" }, "source": [ "The evaluation scores are saved in `scores.csv` within the experiment folder `path.exp_folder`." ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "ExecuteTime": { "end_time": "2024-09-03T14:59:09.923967Z", "start_time": "2024-09-03T14:59:09.646968Z" }, "id": "uEgjJQd6N655" }, "outputs": [ { "data": { "text/html": [ "
<table>\n", " <thead>\n", " <tr><th></th><th>scene</th><th>song</th><th>listener</th><th>haaqi_left</th><th>haaqi_right</th><th>haaqi_avg</th><th>whisper_left</th><th>whisper_rigth</th><th>whisper_be</th><th>alpha</th><th>score</th></tr>\n", " </thead>\n", " <tbody>\n", " <tr><th>0</th><td>S50009</td><td>Actions - One Minute Smile</td><td>L5086</td><td>0.93910</td><td>0.947933</td><td>0.943517</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.4</td><td>0.56611</td></tr>\n", " <tr><th>1</th><td>S50077</td><td>Clara Berry And Wooldog - Waltz For My Victims</td><td>L5042</td><td>0.59874</td><td>0.571855</td><td>0.585298</td><td>0.5</td><td>0.5</td><td>0.5</td><td>0.8</td><td>0.51706</td></tr>\n", " </tbody>\n", "</table>
" ], "text/plain": [ " scene song listener \\\n", "0 S50009 Actions - One Minute Smile L5086 \n", "1 S50077 Clara Berry And Wooldog - Waltz For My Victims L5042 \n", "\n", " haaqi_left haaqi_right haaqi_avg whisper_left whisper_rigth \\\n", "0 0.93910 0.947933 0.943517 0.0 0.0 \n", "1 0.59874 0.571855 0.585298 0.5 0.5 \n", "\n", " whisper_be alpha score \n", "0 0.0 0.4 0.56611 \n", "1 0.5 0.8 0.51706 " ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import pandas as pd\n", "pd.read_csv(\"exp_causal/scores.csv\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The HAAQI scores are computed for the left and right ears and their average is saved in `haaqi_avg`.\n", "The intelligibility scores are computed for the left and right ears and the better-ear value is saved in `whisper_be`.\n", "The final `score` combines the two metrics weighted by alpha, i.e. `score = alpha * whisper_be + (1 - alpha) * haaqi_avg`, which is consistent with the values in the table above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "colab": { "provenance": [] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.13" } }, "nbformat": 4, "nbformat_minor": 4 }