nems.preprocessing module
- nems.preprocessing.generate_average_sig(signal_to_average, new_signalname='respavg', epoch_regex='^STIM_', mask=None)[source]
Returns a new signal created by replacing every epoch matched by epoch_regex with the average across every occurrence of that epoch. This is often used to make a response-average signal that is the same length as the original signal_to_average, usually for plotting.
- Optional arguments:
signal_to_average – The signal from which to create the average signal. It will not be modified.
new_signalname – The name of the new, averaged signal.
epoch_regex – A regex matching which epochs to average across.
- nems.preprocessing.add_average_sig(rec, signal_to_average='resp', new_signalname='respavg', epoch_regex='^STIM_')[source]
Returns a recording with a new signal created by replacing every epoch matched by epoch_regex with the average across every occurrence of that epoch. This is often used to make a response-average signal that is the same length as the original signal_to_average, usually for plotting.
- Optional arguments:
signal_to_average – The signal from which to create the average signal. It will not be modified.
new_signalname – The name of the new, averaged signal.
epoch_regex – A regex matching which epochs to average across.
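A minimal usage sketch for add_average_sig, assuming a recording that contains a 'resp' signal and STIM_-prefixed epochs (the file path below is hypothetical, and load_recording availability may vary by NEMS version):

```python
from nems import recording
from nems import preprocessing as preproc

# hypothetical recording with a 'resp' signal and STIM_* epochs
rec = recording.load_recording('/path/to/my_recording.tgz')

# add a 'respavg' signal: same length as 'resp', but every STIM_* epoch
# is replaced by the average across all occurrences of that epoch
rec = preproc.add_average_sig(rec, signal_to_average='resp',
                              new_signalname='respavg',
                              epoch_regex='^STIM_')
```

generate_average_sig does the same thing at the signal level, returning just the averaged signal rather than a new recording.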
- nems.preprocessing.average_away_epoch_occurrences(recording, epoch_regex='^STIM_', use_mask=True)[source]
Returns a recording with _all_ signals averaged across epochs that match epoch_regex, shortening them so that each epoch occurs only once in the new signals. i.e. unlike ‘add_average_sig’, the new recording will have signals 3x shorter if there are 3 occurrences of every epoch.
This has advantages:
1. Averaging the value of a signal (such as a response) across occurrences makes it behave more like a linear variable with Gaussian noise, which is advantageous in many circumstances.
2. Less computation is needed because the signal is shorter.
It also has disadvantages:
1. Stateful filters (FIR, IIR) will be subtly wrong near epoch boundaries.
2. Any ordering of epochs is essentially lost, unless all epochs appear in a perfectly repeated order.
To avoid accidentally averaging away differences in responses to stimuli that are based on behavioral state, you may need to create new epochs (based on stimulus and behavioral state, for example) and then match the epoch_regex to those.
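A sketch of typical usage (rec is assumed to be a previously loaded NEMS recording with STIM_-prefixed epochs):

```python
from nems import preprocessing as preproc

# collapse the recording so that each STIM_* epoch occurs exactly once,
# with every signal averaged across that epoch's occurrences
avg_rec = preproc.average_away_epoch_occurrences(rec, epoch_regex='^STIM_')
```

With use_mask=True (the default), a 'mask' signal in the recording is presumably used to exclude masked-out occurrences from the average, consistent with the rec['mask'] convention described for other functions in this module.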
- nems.preprocessing.remove_invalid_segments(rec)[source]
Currently a specialized function for removing incorrect trials from data collected using baphy during behavior.
TODO: Migrate to nems_lbhb or make a more generic version
- nems.preprocessing.mask_all_but_correct_references(rec, balance_rep_count=False, include_incorrect=False, generate_evoked_mask=False, exclude_partial_ref=True)[source]
Specialized function for removing incorrect trials from data collected using baphy during behavior.
exclude_nans: remove any REF epoch with NaNs in the response
TODO: Migrate to nems_lbhb and/or make a more generic version
- nems.preprocessing.mask_keep_passive(rec, max_passive_blocks=2)[source]
Mask out all times that don’t fall in PASSIVE_EXPERIMENT epochs.
TODO: Migrate to nems_lbhb and/or make a more generic version
- nems.preprocessing.mask_late_passives(rec)[source]
Mask out all times that aren't in active blocks or the first passive block.
TODO: Migrate to nems_lbhb and/or make a more generic version
- nems.preprocessing.mask_all_but_targets(rec, include_incorrect=True)[source]
Specialized function for removing incorrect trials from data collected using baphy during behavior.
TODO: Migrate to nems_lbhb and/or make a more generic version
- nems.preprocessing.mask_incorrect(rec, include_ITI=True, ITI_sec_to_include=None, **context)[source]
Specialized function for removing incorrect trials from data collected using baphy during behavior.
- nems.preprocessing.nan_invalid_segments(rec)[source]
Currently a specialized function for removing incorrect trials from data collected using baphy during behavior.
- TODO: Complete this function (replicate remove_invalid_segments logic) or delete it.
TODO: Migrate to nems_lbhb or make a more generic version
- nems.preprocessing.generate_stim_from_epochs(rec, new_signal_name='stim', epoch_regex='^STIM_', epoch_shift=0, epoch2_regex=None, epoch2_shift=0, epoch2_shuffle=False, onsets_only=True)[source]
- nems.preprocessing.integrate_signal_per_epoch(rec, sig='stim', sig_out='stim_int', epoch_regex='^STIM_')[source]
Calculates the integral of a signal over each epoch.
If rec['mask'] exists, uses rec['mask'] == True to determine valid epochs.
- nems.preprocessing.normalize_epoch_lengths(rec, resp_sig='resp', epoch_regex='^STIM_', include_incorrect=False)[source]
For each set of epochs matching epoch_regex, determine the minimum length and truncate all occurrences to that length.
- Parameters
rec – NEMS recording
resp_sig – name of the response signal
epoch_regex – regex matching epochs to normalize
include_incorrect – (False) not used
- nems.preprocessing.generate_psth_from_resp(rec, resp_sig='resp', epoch_regex='^(STIM_|TAR_|CAT_|REF_)', smooth_resp=False, channel_per_stim=False, mean_zero=False)[source]
Estimates a PSTH from all responses to each regex match in a recording
Subtracts the spontaneous rate, estimated from pre-stim silence, from ALL estimation data.
If rec['mask'] exists, uses rec['mask'] == True to determine valid epochs.
Problem: not all Pre/Dur/Post lengths are the same across reps of a stimulus. Everything is shortened to the minimum of each; if Dur is variable, the post-stim silence is thrown away.
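A hedged usage sketch (rec is assumed to be a loaded recording; depending on the NEMS version, the return value may be the updated recording itself or a dict wrapping it for xforms compatibility):

```python
from nems import preprocessing as preproc

# estimate a per-stimulus PSTH from 'resp'; the spontaneous rate
# (estimated from pre-stim silence) is subtracted from estimation data
out = preproc.generate_psth_from_resp(
    rec, resp_sig='resp',
    epoch_regex='^(STIM_|TAR_|CAT_|REF_)',
    smooth_resp=False, mean_zero=False)
# assumption: `out` may be either the updated recording or a dict
# such as {'rec': updated_rec}; check the source for your version
```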
- nems.preprocessing.smooth_signal_epochs(rec, signal='resp', epoch_regex='^STIM_', **context)[source]
xforms-compatible wrapper for smooth_epoch_segments
- nems.preprocessing.smooth_epoch_segments(sig, epoch_regex='^STIM_', mask=None)[source]
Wonky function that "smooths" signals by computing the mean of the pre-stim silence, onset, sustained, and post-stim silence segments. Used in PSTH-based models. Duration of the onset is hard-coded to 2 bins.
- Returns
(smoothed_sig, respavg, respavg_with_spont)
smoothed_sig – smoothed signal
respavg – smoothed signal, averaged across all reps of matching epochs
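An illustrative call (rec is assumed to be a loaded recording; indexing rec['resp'] follows the same convention as rec['mask'] above):

```python
from nems import preprocessing as preproc

# "smooth" the response by replacing each pre-stim, onset, sustained and
# post-stim segment with its mean, as used by PSTH-based models
smoothed_sig, respavg, respavg_with_spont = preproc.smooth_epoch_segments(
    rec['resp'], epoch_regex='^STIM_')
```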
- nems.preprocessing.generate_psth_from_est_for_both_est_and_val(est, val, epoch_regex='^STIM_', mean_zero=False)[source]
Estimates a PSTH from the est set and returns two signals, based on est and val, in which each repetition of a stimulus uses the est PSTH.
Subtracts the spontaneous rate, estimated from pre-stim silence, from ALL estimation data.
- nems.preprocessing.generate_psth_from_est_for_both_est_and_val_nfold(ests, vals, epoch_regex='^STIM_', mean_zero=False)[source]
Calls generate_psth_from_est_for_both_est_and_val for each (e, v) pair in (ests, vals).
- nems.preprocessing.resp_to_pc(rec, pc_idx=None, resp_sig='resp', pc_sig='pca', pc_count=None, pc_source='all', overwrite_resp=True, compute_power='no', whiten=True, **context)[source]
Generates a PCA transformation of a signal. If overwrite_resp==True, replaces the (multichannel) reference with a single PC channel.
- Parameters
rec – NEMS recording
pc_idx – subset of pcs to return (default all)
resp_sig – signal from which to compute PCs
pc_sig – name of signal to save PCs (if not overwrite_resp)
pc_count – number of PCs to save
pc_source – what to compute PCs of (all/psth/noise)
overwrite_resp – (True) if True replace resp_sig with PCs, if False, save in pc_sig
whiten – whiten before PCA
context – NEMS context for xforms compatibility
- Returns
copy of rec with PCs
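A sketch of a non-destructive call that keeps 'resp' intact and stores the PCs in a separate 'pca' signal (rec is assumed to be a loaded recording; because of the **context argument, the result may be wrapped in a dict for xforms compatibility rather than returned as a bare recording):

```python
from nems import preprocessing as preproc

# project the multichannel 'resp' signal onto its first two PCs,
# whitening first, and save them as a new 'pca' signal
out = preproc.resp_to_pc(rec, resp_sig='resp', pc_sig='pca',
                         pc_count=2, pc_source='all',
                         overwrite_resp=False, whiten=True)
# assumption: `out` is the "copy of rec with PCs" described above,
# possibly packaged as {'rec': ...} depending on version
```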
- nems.preprocessing.make_state_signal(rec, state_signals=['pupil'], permute_signals=[], generate_signals=[], new_signalname='state', sm_win_len=180)[source]
generate state signal for stategain.S/sdexp.S models
- Valid state signals include (incomplete list):
pupil, pupil_ev, pupil_bs, pupil_psd, active, each_file, each_passive, each_half, far, hit, lick, p_x_a
TODO: Migrate to nems_lbhb or make a more generic version
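A usage sketch, assuming make_state_signal returns the updated recording and that 'pupil' and 'active' state variables are available in rec:

```python
from nems import preprocessing as preproc

# build a 'state' signal from pupil diameter plus an active/passive
# indicator, for use with stategain.S / sdexp.S models
rec = preproc.make_state_signal(rec,
                                state_signals=['pupil', 'active'],
                                permute_signals=[],
                                new_signalname='state')
```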
- nems.preprocessing.concatenate_state_channel(rec, sig, state_signal_name='state', generate_baseline=True)[source]
- nems.preprocessing.add_noise_signal(rec, n_chans=None, T=None, noise_name='indep', ref_signal='resp', chans=None, rep_count=1, rand_seed=1, distribution='gaussian', sm_win=0, est=None, val=None, **context)[source]
- nems.preprocessing.split_est_val_for_jackknife(rec, epoch_name='TRIAL', modelspecs=None, njacks=10, IsReload=False, **context)[source]
Takes a single recording (est) and defines njacks est/val sets using jackknife logic. Returns lists est_out and val_out of the corresponding jackknife subsamples. Removed timepoints are replaced with NaN.
- nems.preprocessing.mask_est_val_for_jackknife(rec, epoch_name='TRIAL', epoch_regex=None, modelspec=None, njacks=10, allow_partial_epochs=False, IsReload=False, **context)[source]
Takes a single recording (est) and defines njacks est/val sets using jackknife logic. Returns lists est_out and val_out of the corresponding jackknife subsamples. Removed timepoints are replaced with NaN.
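A sketch of the jackknife split described above (rec is assumed to be a loaded recording with TRIAL epochs; per the docstring the function returns matching est/val collections, though newer versions may package the result differently for xforms compatibility):

```python
from nems import preprocessing as preproc

# define 10 est/val jackknife sets based on TRIAL epochs
est_out, val_out = preproc.mask_est_val_for_jackknife(
    rec, epoch_name='TRIAL', njacks=10)

# per the docstring above, est_out and val_out are corresponding
# jackknife subsamples; removed timepoints are replaced with NaN
```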
- nems.preprocessing.mask_est_val_for_jackknife_by_time(rec, modelspec=None, njacks=10, IsReload=False, **context)[source]
Takes a single recording (est) and defines njacks est/val sets using jackknife logic. Returns lists est_out and val_out of the corresponding jackknife subsamples. Removed timepoints are replaced with NaN.