Quality Metrics Tutorial
After spike sorting, you might want to validate the 'goodness' of the sorted units. This can be done using the qualitymetrics submodule, which computes several quality metrics of the sorted units.
import spikeinterface.core as si
import spikeinterface.extractors as se
from spikeinterface.postprocessing import compute_principal_components
from spikeinterface.qualitymetrics import (
compute_snrs,
compute_firing_rates,
compute_isi_violations,
calculate_pc_metrics,
compute_quality_metrics,
)
First, let’s download a simulated dataset from the repo ‘https://gin.g-node.org/NeuralEnsemble/ephy_testing_data’
local_path = si.download_dataset(remote_path="mearec/mearec_test_10s.h5")
recording, sorting = se.read_mearec(local_path)
print(recording)
print(sorting)
MEArecRecordingExtractor: 32 channels - 32.0kHz - 1 segments - 320,000 samples - 10.00s
float32 dtype - 39.06 MiB
file_path: /home/docs/spikeinterface_datasets/ephy_testing_data/mearec/mearec_test_10s.h5
MEArecSortingExtractor: 10 units - 1 segments - 32.0kHz
file_path: /home/docs/spikeinterface_datasets/ephy_testing_data/mearec/mearec_test_10s.h5
Create SortingAnalyzer
For quality metrics, we first need to create a SortingAnalyzer.
analyzer = si.create_sorting_analyzer(sorting=sorting, recording=recording, format="memory")
print(analyzer)
/home/docs/checkouts/readthedocs.org/user_builds/spikeinterface/checkouts/3072/src/spikeinterface/core/job_tools.py:103: UserWarning: `n_jobs` is not set so parallel processing is disabled! To speed up computations, it is recommended to set n_jobs either globally (with the `spikeinterface.set_global_job_kwargs()` function) or locally (with the `n_jobs` argument). Use `spikeinterface.set_global_job_kwargs?` for more information about job_kwargs.
warnings.warn(
SortingAnalyzer: 32 channels - 10 units - 1 segments - memory - sparse - has recording
Loaded 0 extensions
Depending on which metrics we want to compute, we first need to compute the required extensions (if an extension is missing, an error will be raised).
analyzer.compute("random_spikes", method="uniform", max_spikes_per_unit=600, seed=2205)
analyzer.compute("waveforms", ms_before=1.3, ms_after=2.6, n_jobs=2)
analyzer.compute("templates", operators=["average", "median", "std"])
analyzer.compute("noise_levels")
print(analyzer)
SortingAnalyzer: 32 channels - 10 units - 1 segments - memory - sparse - has recording
Loaded 4 extensions: random_spikes, waveforms, templates, noise_levels
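As an aside on what the noise_levels extension measures: per-channel noise is commonly estimated with a robust, median-absolute-deviation (MAD) based scaling rather than a plain standard deviation, so that spikes in the trace do not inflate the estimate. A pure-NumPy sketch of the idea (not the library's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 10.0, size=100_000)  # synthetic noise with std = 10

# MAD scaled by ~1.4826 is a robust estimate of a Gaussian's std.
mad = np.median(np.abs(trace - np.median(trace)))
noise_level = mad * 1.4826
print(noise_level)  # close to 10 for pure Gaussian noise
```

Unlike np.std, this estimate barely moves if a few large spikes are added to the trace, which is exactly the situation in extracellular recordings.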
The spikeinterface.qualitymetrics submodule has a set of functions that allow users to compute metrics in a compact and easy way. To compute a single metric, one can simply run one of the quality metric functions as shown below. Each function has a variety of adjustable parameters that can be tuned.
firing_rates = compute_firing_rates(analyzer)
print(firing_rates)
isi_violation_ratio, isi_violations_count = compute_isi_violations(analyzer)
print(isi_violation_ratio)
snrs = compute_snrs(analyzer)
print(snrs)
{'#0': 5.3, '#1': 5.0, '#2': 4.3, '#3': 3.0, '#4': 4.8, '#5': 3.7, '#6': 5.1, '#7': 11.1, '#8': 19.5, '#9': 12.9}
{'#0': 0.0, '#1': 0.0, '#2': 0.0, '#3': 0.0, '#4': 0.0, '#5': 0.0, '#6': 0.0, '#7': 0.0, '#8': 0.0, '#9': 0.0}
{'#0': 23.688985566223263, '#1': 25.562668727351635, '#2': 13.83221989734586, '#3': 21.930928721211213, '#4': 7.437540176388606, '#5': 7.451767727584651, '#6': 20.824094505903993, '#7': 7.341507851287954, '#8': 8.092695641281358, '#9': 8.95652685734799}
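Each of these functions returns a plain dict keyed by unit id, as shown above, so ad-hoc thresholding can be done with ordinary Python. A minimal sketch with made-up values and arbitrary thresholds (not recommendations):

```python
# Toy dicts shaped like the outputs above (values are made up).
snrs = {"#0": 23.7, "#4": 7.4, "#9": 2.1}
firing_rates = {"#0": 5.3, "#4": 0.3, "#9": 12.9}

# Keep units that pass both (arbitrary) thresholds.
good_units = [u for u in snrs if snrs[u] > 5.0 and firing_rates[u] > 1.0]
print(good_units)  # ['#0']
```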
To compute more than one metric at once, we can use the compute_quality_metrics function and indicate which metrics we want to compute. This will return a pandas dataframe:
metrics = compute_quality_metrics(analyzer, metric_names=["firing_rate", "snr", "amplitude_cutoff"])
print(metrics)
/home/docs/checkouts/readthedocs.org/user_builds/spikeinterface/checkouts/3072/src/spikeinterface/qualitymetrics/misc_metrics.py:908: UserWarning: Some units have too few spikes : amplitude_cutoff is set to NaN
warnings.warn(f"Some units have too few spikes : amplitude_cutoff is set to NaN")
firing_rate snr amplitude_cutoff
#0 5.3 23.688986 <NA>
#1 5.0 25.562669 <NA>
#2 4.3 13.83222 <NA>
#3 3.0 21.930929 <NA>
#4 4.8 7.43754 <NA>
#5 3.7 7.451768 <NA>
#6 5.1 20.824095 <NA>
#7 11.1 7.341508 <NA>
#8 19.5 8.092696 <NA>
#9 12.9 8.956527 <NA>
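Because the result is a regular pandas dataframe, units can be filtered with standard pandas operations. A sketch using made-up values and arbitrary thresholds:

```python
import pandas as pd

# Toy dataframe mimicking compute_quality_metrics output (values made up).
metrics = pd.DataFrame(
    {"firing_rate": [5.3, 0.2, 12.9], "snr": [23.7, 2.1, 8.9]},
    index=["#0", "#1", "#9"],
)

# Arbitrary thresholds -- tune for your own data.
keep = metrics.query("snr > 5 and firing_rate > 1")
print(keep.index.tolist())  # ['#0', '#9']
```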
Some metrics are based on the principal component scores, so the principal_components extension needs to be computed first. For instance:
analyzer.compute("principal_components", n_components=3, mode="by_channel_global", whiten=True)
metrics = compute_quality_metrics(
analyzer,
metric_names=[
"isolation_distance",
"d_prime",
],
)
print(metrics)
isolation_distance d_prime snr amplitude_cutoff firing_rate
#0 111631078842116592.0 29.314857 23.688986 <NA> 5.3
#1 22503.96533 27.426984 25.562669 <NA> 5.0
#2 126067352822193792.0 24.75911 13.83222 <NA> 4.3
#3 895298620985875968.0 30.90995 21.930929 <NA> 3.0
#4 167417577384036896.0 28.448716 7.43754 <NA> 4.8
#5 86612678156279200.0 20.730191 7.451768 <NA> 3.7
#6 2142994535281708800.0 37.228947 20.824095 <NA> 5.1
#7 8878.390636 28.968743 7.341508 <NA> 11.1
#8 5827.4073 20.999115 8.092696 <NA> 19.5
#9 7100.39125 30.249492 8.956527 <NA> 12.9
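For intuition about the d_prime column: d-prime measures how well a unit's cluster separates from its neighbors along a discriminating projection; in one dimension it reduces to the difference of the cluster means over their pooled spread. A NumPy sketch of that one-dimensional case (not the library's implementation, which works in the full PC space):

```python
import numpy as np

rng = np.random.default_rng(1)
pc_a = rng.normal(0.0, 1.0, 5000)  # synthetic 1-D PC scores, unit A
pc_b = rng.normal(4.0, 1.0, 5000)  # synthetic scores of a neighboring unit

# 1-D d-prime: difference of means over the pooled standard deviation.
d_prime = abs(pc_a.mean() - pc_b.mean()) / np.sqrt(0.5 * (pc_a.var() + pc_b.var()))
print(d_prime)  # ~4 for these well-separated clusters
```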
Total running time of the script: (0 minutes 5.745 seconds)