How to use the spikeinterface.extractors.example_datasets module in spikeinterface

To help you get started, we’ve selected a few spikeinterface examples, based on popular ways it is used in public projects.


From SpikeInterface/spikeinterface: examples/modules/comparison/generate_erroneous_sorting.py
import numpy as np

import spikeinterface.extractors as se


def generate_erroneous_sorting():
    rec, sorting_true = se.example_datasets.toy_example(num_channels=4, duration=10, seed=10)
    
    sorting_err = se.NumpySortingExtractor()
    sorting_err.set_sampling_frequency(sorting_true.get_sampling_frequency())
    
    # sorting_true has 10 units
    np.random.seed(0)
    
    # units 1 and 2 are perfect
    for u in [1, 2]:
        st = sorting_true.get_unit_spike_train(u)
        sorting_err.add_unit(u, st)

    # units 3 and 4 have medium agreement, unit 10 has low agreement
    for u, score in [(3, 0.8),  (4, 0.75), (10, 0.3)]:
        st = sorting_true.get_unit_spike_train(u)
        st = np.sort(np.random.choice(st, size=int(st.size*score), replace=False))
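        # NOTE: the snippet is truncated here. In the full example the subsampled
        # spike train is presumably added back to the erroneous sorting, mirroring
        # the add_unit() call used for units 1 and 2 above:
        sorting_err.add_unit(u, st)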
From SpikeInterface/spikeinterface: examples/getting_started/plot_getting_started.py
# - :code:`toolkit` : processing toolkit for pre-, post-processing, validation, and automatic curation
# - :code:`sorters` : Python wrappers of spike sorters
# - :code:`comparison` : comparison of spike sorting output
# - :code:`widgets` : visualization


import spikeinterface.extractors as se
import spikeinterface.toolkit as st
import spikeinterface.sorters as ss
import spikeinterface.comparison as sc
import spikeinterface.widgets as sw

##############################################################################
# First, let's create a toy example with the :code:`extractors` module:

recording, sorting_true = se.example_datasets.toy_example(duration=10, num_channels=4, seed=0)

##############################################################################
# :code:`recording` is a :code:`RecordingExtractor` object, which extracts information about channel ids, channel locations
# (if present), the sampling frequency of the recording, and the extracellular traces. :code:`sorting_true` is a
# :code:`SortingExtractor` object, which contains spike-sorting related information, including unit ids,
# spike trains, etc. Since the data are simulated, :code:`sorting_true` has ground-truth information about the spiking
# activity of each unit.
#
# Let's use the :code:`widgets` module to visualize the traces and the raster plots.

w_ts = sw.plot_timeseries(recording, trange=[0,5])
w_rs = sw.plot_rasters(sorting_true, trange=[0,5])

##############################################################################
# This is how you retrieve info from a :code:`RecordingExtractor`...
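# The snippet is cut off at this point. A minimal sketch of the kind of retrieval
# calls the tutorial goes on to show, using the standard extractor getters:

channel_ids = recording.get_channel_ids()
fs = recording.get_sampling_frequency()
num_chan = len(channel_ids)

print('Channel ids:', channel_ids)
print('Sampling frequency:', fs)
print('Number of channels:', num_chan)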
From SpikeInterface/spikeinterface: examples/modules/toolkit/plot_1_preprocessing.py
"""
Before spike sorting, you may need to preprocess your signals in order to improve the spike sorting performance.
You can do that in SpikeInterface using the :code:`toolkit.preprocessing` submodule.

"""

import numpy as np
import matplotlib.pylab as plt
import scipy.signal

import spikeinterface.extractors as se
import spikeinterface.toolkit as st

##############################################################################
# First, let's create a toy example:

recording, sorting = se.example_datasets.toy_example(num_channels=4, duration=10, seed=0)

##############################################################################
# Apply filters
# -----------------
#  
# Now apply a bandpass filter and a notch filter (separately) to the
# recording extractor. Filters are also RecordingExtractor objects.

recording_bp = st.preprocessing.bandpass_filter(recording, freq_min=300, freq_max=6000)
recording_notch = st.preprocessing.notch_filter(recording, freq=1000, q=10)

##############################################################################
# Now let's plot the power spectrum of non-filtered, bandpass filtered,
# and notch filtered recordings.

f_raw, p_raw = scipy.signal.welch(recording.get_traces(), fs=recording.get_sampling_frequency())
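# The snippet is truncated after the raw spectrum. A sketch of how the spectra of the
# filtered recordings could be computed and overlaid (variable and label names below
# are illustrative, not taken from the original file):
f_bp, p_bp = scipy.signal.welch(recording_bp.get_traces(), fs=recording_bp.get_sampling_frequency())
f_notch, p_notch = scipy.signal.welch(recording_notch.get_traces(), fs=recording_notch.get_sampling_frequency())

fig, ax = plt.subplots()
ax.semilogy(f_raw, p_raw[0], label='raw')
ax.semilogy(f_bp, p_bp[0], label='bandpass')
ax.semilogy(f_notch, p_notch[0], label='notch')
ax.legend()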
From SpikeInterface/spikeinterface: examples/modules/extractors/plot_4_handle_probe_info.py
'''
Handling probe information
===========================

In order to properly spike sort, you may need to load information related to the probe you are using.
You can easily load probe information with the :code:`spikeinterface.extractors` module.

Here's how!
'''

import numpy as np
import spikeinterface.extractors as se

##############################################################################
# First, let's create a toy example:

recording, sorting_true = se.example_datasets.toy_example(duration=10, num_channels=32, seed=0)

###############################################################################
# Probe information may be required to:
#
# - apply a channel map
# - load 'group' information
# - load 'location' information
# - load arbitrary information
#
# Probe information can be loaded either using a '.prb' or a '.csv' file. We recommend using a '.prb' file, since it
# allows users to load several pieces of information at once.
#
# A '.prb' file contains a Python dictionary. Here is the content of a sample '.prb' file (eight_tetrodes.prb), which splits
# the channels into 8 channel groups, applies a channel map (reversing the order of each tetrode), and loads a 'label'
# for each electrode (arbitrary information):
#
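# The snippet is truncated before the file contents are shown. A sketch of what such a
# channel_groups dictionary might look like (illustrative values, not the verbatim
# eight_tetrodes.prb):
#
# .. parsed-literal::
#
#     channel_groups = {
#         0: {
#             'channels': [3, 2, 1, 0],
#             'label': ['t0_e0', 't0_e1', 't0_e2', 't0_e3'],
#         },
#         # ... one entry per tetrode, up to group 7
#     }
#
# Such a file can then presumably be attached to the recording, e.g.
# recording_prb = recording.load_probe_file('eight_tetrodes.prb')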
From SpikeInterface/spikeinterface: examples/modules/sorters/plot_2_using_the_launcher.py
"""
Use the spike sorting launcher
==============================

This example shows how to use the spike sorting launcher. The launcher allows you to parameterize the sorter name and
to run several sorters on one or multiple recordings.

"""

import spikeinterface.extractors as se
import spikeinterface.sorters as ss

##############################################################################
# First, let's create the usual toy example:

recording, sorting_true = se.example_datasets.toy_example(duration=10, seed=0)

##############################################################################
# The launcher lets you call any spike sorter through the same functions: :code:`run_sorter` and :code:`run_sorters`.
# For running multiple sorters on the same recording extractor or a collection of them, the :code:`run_sorters`
# function can be used.
#
# Let's first see how to run a single sorter, for example, Klusta:

# The sorter name can now be a parameter, e.g. chosen with a command line interface or a GUI
sorter_name = 'klusta'
sorting_KL = ss.run_sorter(sorter_name_or_class='klusta', recording=recording, output_folder='my_sorter_output')
print(sorting_KL.get_unit_ids())

##############################################################################
# This will launch the klusta sorter on the recording object.
#
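# The snippet stops before the multi-sorter case. A rough sketch of how :code:`run_sorters`
# might be called on a dictionary of recordings; the argument names below are an assumption
# for this SpikeInterface version, so check the :code:`sorters` documentation:
#
# sorting_outputs = ss.run_sorters(sorter_list=['klusta', 'tridesclous'],
#                                  recording_dict_or_list={'rec0': recording},
#                                  working_folder='all_sorters_output')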
From SpikeInterface/spikeinterface: examples/modules/sorters/plot_3_sorting_by_group.py
import time

import spikeinterface.extractors as se

##############################################################################
#  Sometimes, you might want to sort your data depending on a specific property of your recording channels.
#  
# For example, when using multiple tetrodes, a good idea is to sort each tetrode separately. In this case, channels
# belonging to the same tetrode will be in the same 'group'. Alternatively, for long silicon probes, such as
# Neuropixels, you could sort different areas separately, for example hippocampus and thalamus.
#  
# All this can be done by sorting by 'property'. Properties can be loaded onto the recording channels either manually
# (using the :code:`set_channel_property` method), or by using a probe file. In this example we will create a 16-channel
# recording and split it into four channel groups (tetrodes).
#
# Let's create a toy example with 16 channels:

recording_tetrodes, sorting_true = se.example_datasets.toy_example(duration=10, num_channels=16)

##############################################################################
# Initially there is no group information ('location' is loaded automatically when creating toy data):

print(recording_tetrodes.get_shared_channel_property_names())

##############################################################################
# The file tetrode_16.prb contains the channel group description
#
# .. parsed-literal::
#
#     channel_groups = {
#         0: {
#             'channels': [0,1,2,3],
#         },
#         1: {
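#             'channels': [4,5,6,7],
#         },
#         2: {
#             'channels': [8,9,10,11],
#         },
#         3: {
#             'channels': [12,13,14,15],
#         },
#     }
#
# (The snippet is truncated above; the remaining groups shown here are a sketch that simply
# continues the four-tetrode pattern described in the text. The file would then presumably be
# attached with recording_tetrodes.load_probe_file('tetrode_16.prb').)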
From SpikeInterface/spikeinterface: examples/modules/toolkit/plot_4_curation.py
"""
Curation Tutorial
======================

After spike sorting and computing validation metrics, you can automatically curate the spike sorting output using the
quality metrics. This can be done with the :code:`toolkit.curation` submodule.

"""

import spikeinterface.extractors as se
import spikeinterface.toolkit as st
import spikeinterface.sorters as ss

##############################################################################
# First, let's create a toy example:

recording, sorting = se.example_datasets.toy_example(num_channels=4, duration=30, seed=0)

##############################################################################
# and let's spike sort using klusta

sorting_KL = ss.run_klusta(recording)

print('Units:', sorting_KL.get_unit_ids())
print('Number of units:', len(sorting_KL.get_unit_ids()))

##############################################################################
# There are several functions that allow you to keep only the units that satisfy certain rules. For example,
# let's automatically curate the sorting output so that only units with SNR > 10 and mean firing rate > 2.3 Hz are
# kept:

sorting_fr = st.curation.threshold_firing_rate(sorting_KL, threshold=2.3, threshold_sign='less')
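# The snippet is truncated here. To also apply the SNR rule mentioned above, the example
# presumably chains a second curation step; a sketch, assuming a threshold_snr function with
# the same threshold/threshold_sign convention:
#
# sorting_snr = st.curation.threshold_snr(sorting_fr, recording, threshold=10, threshold_sign='less')
# print('Curated units:', sorting_snr.get_unit_ids())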
From SpikeInterface/spikeinterface: examples/modules/widgets/plot_1_rec_gallery.py
'''
RecordingExtractor Widgets Gallery
===================================

Here is a gallery of all the available widgets using RecordingExtractor objects.
'''

import spikeinterface.extractors as se
import spikeinterface.widgets as sw

##############################################################################
# First, let's create a toy example with the `extractors` module:

recording, sorting = se.example_datasets.toy_example(duration=10, num_channels=4, seed=0)

##############################################################################
# plot_timeseries()
# ~~~~~~~~~~~~~~~~~

w_ts = sw.plot_timeseries(recording)

w_ts1 = sw.plot_timeseries(recording, trange=[5, 8])

recording.set_channel_groups(channel_ids=recording.get_channel_ids(), groups=[0, 0, 1, 1])
w_ts2 = sw.plot_timeseries(recording, trange=[5, 8], color_groups=True)

##############################################################################
# **Note**: each function returns a widget object, which gives you access to the figure and axis.

w_ts.figure.suptitle("Recording by group")
From SpikeInterface/spikeinterface: examples/modules/widgets/plot_2_sort_gallery.py
'''
SortingExtractor Widgets Gallery
===================================

Here is a gallery of all the available widgets using SortingExtractor objects.
'''

import spikeinterface.extractors as se
import spikeinterface.widgets as sw

##############################################################################
# First, let's create a toy example with the `extractors` module:

recording, sorting = se.example_datasets.toy_example(duration=60, num_channels=4, seed=0)

##############################################################################
# plot_rasters()
# ~~~~~~~~~~~~~~~~~

w_rs = sw.plot_rasters(sorting)

##############################################################################
# plot_isi_distribution()
# ~~~~~~~~~~~~~~~~~~~~~~~~
w_isi = sw.plot_isi_distribution(sorting, bins=10, window=1)

##############################################################################
# plot_autocorrelograms()
# ~~~~~~~~~~~~~~~~~~~~~~~~
w_ach = sw.plot_autocorrelograms(sorting, bin_size=1, window=10, unit_ids=[1, 2, 4, 5, 8, 10, 7])
From SpikeInterface/spikeinterface: examples/modules/comparison/plot_4_ground_truth_study.py
import seaborn as sns

import spikeinterface.extractors as se
import spikeinterface.widgets as sw
from spikeinterface.comparison import GroundTruthStudy

##############################################################################
#  Setup study folder and run all sorters
# ------------------------------------------------------
# 
#  We first generate the study folder.
#  This can take some time because recordings are copied inside the folder.


rec0, gt_sorting0 = se.example_datasets.toy_example(num_channels=4, duration=10, seed=10)
rec1, gt_sorting1 = se.example_datasets.toy_example(num_channels=4, duration=10, seed=0)
gt_dict = {
    'rec0': (rec0, gt_sorting0),
    'rec1': (rec1, gt_sorting1),
}
study_folder = 'a_study_folder'
study = GroundTruthStudy.create(study_folder, gt_dict)

##############################################################################
#  Then just run all sorters on all recordings with a single function call.

#  sorter_list = st.sorters.available_sorters()  # this gets all sorters.
sorter_list = ['klusta', 'tridesclous', 'mountainsort4']
study.run_sorters(sorter_list, mode="keep")

##############################################################################
#  You can re-run **run_study_sorters** as many times as you want.