Python package for analyzing behavioral data for Brain Observatory: Visual Behavior

Overview

Allen Institute Visual Behavior Analysis package

This repository contains code for analyzing behavioral data from the Allen Brain Observatory: Visual Behavior 2P Project.

This code is an important part of the internal Allen Institute code base, and we are actively using and maintaining it. Issues are encouraged, but because this tool is so central to our mission, pull requests might not be accepted if they conflict with our existing plans.

Quickstart

Before installing, it's recommended to set up a new Python environment. For example, using Conda:

conda create -n visual_behavior_analysis python=3.7

Then activate the environment:

conda activate visual_behavior_analysis

and install with pip (Allen Institute internal users only):

pip install git+https://github.com/AllenInstitute/visual_behavior_analysis.git

Installation

This package is designed to be installed using standard Python packaging tools. For example,

python setup.py install

If you are using pip to manage packages and versions (recommended), you can also install using pip:

pip install ./

If you plan to contribute to the development of the package, we recommend installing in "editable" mode:

pip install -e ./

This ensures that Python uses the current, active files in the folder (even while switching between branches).
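
To confirm that the install worked, try importing the package (visual_behavior is the top-level module name used in the examples below):

python -c "import visual_behavior; print(visual_behavior.__file__)"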

To ensure that the newly created environment is visible in Jupyter:

Activate the environment:

conda activate visual_behavior_analysis

Install ipykernel:

pip install ipykernel

Register the environment with Jupyter:

python -m ipykernel install --user --name visual_behavior_analysis
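
You can then confirm that the kernel was registered:

jupyter kernelspec list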

Use

First, load a Foraging2 output file:

import pandas as pd
data = pd.read_pickle(PATH_TO_FORAGING2_OUTPUT_PKL)

Then, we create the "core" data structure: a dictionary with licks, rewards, trials, running, visual stimuli, and metadata.

from visual_behavior.translator.foraging2 import data_to_change_detection_core

core_data = data_to_change_detection_core(data)
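
For orientation, you can inspect the returned dictionary. The key names noted in the comment below are the ones referenced elsewhere in this README:

print(core_data.keys())
# keys referenced in this README include: 'metadata', 'time', 'licks',
# 'trials', 'rewards', 'running', and 'visual_stimuli'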

Finally, we create an "extended" dataframe for use in generating trial-level plots and analysis.

from visual_behavior.translator.core import create_extended_dataframe

extended_trials = create_extended_dataframe(
    trials=core_data['trials'],
    metadata=core_data['metadata'],
    licks=core_data['licks'],
    time=core_data['time'],
)
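
As a quick sanity check, you can tabulate trials by type. This is a minimal sketch: 'trial_type' is a column referenced in the issues later in this document, but available columns may vary by dataset.

print(extended_trials['trial_type'].value_counts())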

Testing

Before committing and/or submitting a pull request, you should run the tests.

Tests are currently run against Python 3.6.12 and 3.7.7 on GitHub using CircleCI. You can replicate those tests locally as follows:

Creating test virtual environments

cd {your local VBA directory}
conda create -n VBA_test_36 python=3.6.12
conda activate VBA_test_36
pip install .[DEV]

Then deactivate VBA_test_36 and create the 3.7 virtual environment:

conda deactivate
conda create -n VBA_test_37 python=3.7.7
conda activate VBA_test_37
pip install .[DEV]

Basic testing (external users): Baseline tests are those that can be run from outside the Allen Institute and do not require access to any internal databases such as LIMS. The -m "not onprem" argument skips all tests that can only be run on internal Allen Institute servers and are marked as onprem. To run these tests, do the following:

cd {your local VBA directory}
conda activate VBA_test_36
pytest -m "not onprem"
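
For reference, onprem tests are selected with a standard pytest marker. A marked test looks roughly like this (a schematic sketch, not a file from this repo):

import pytest

@pytest.mark.onprem  # hypothetical example: runs only on internal servers with LIMS access
def test_requires_lims():
    ...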

On Premises Testing + Basic Testing (internal Allen Institute users): Some tests can only be run on premises (at the Allen Institute) because they must access internal databases such as LIMS. Internal Allen Institute users can call pytest without any marker argument, which runs ALL tests. To run these tests, do the following:

cd {your local VBA directory}
conda activate VBA_test_36
pytest

Linting / CircleCI Testing (all users):

CircleCI also checks that all files meet PEP 8 style requirements using the Flake8 module, a process referred to as 'linting'. Linting can be performed locally before committing using Flake8 as follows:

flake8 {FILE_TO_CHECK}
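
To lint the whole package at once, run Flake8 against the package directory (assuming you are in the repository root; visual_behavior is the package folder used in the imports above):

flake8 visual_behavior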

Running a subset of tests: You can run a subset of tests by doing the following.

All tests in a subdirectory:

cd {subfolder of VBA that contains the tests you'd like to run}
conda activate VBA_test_36
pytest {add -m "not onprem" as necessary}

All tests in a single .py file:

cd {subfolder of VBA that contains the file with the tests you'd like to run}
conda activate VBA_test_36
pytest fileWithTests.py {add -m "not onprem" as necessary}

Contributing

Pull requests are welcome.

  1. Fork the repo
  2. Create a feature branch
  3. Commit your changes
  4. Create a pull request
  5. Tag @dougollerenshaw, @matchings to review


Comments
  • [WIP] adds Foraging2 support

    this PR is a major refactor of this repository which implements the following...

    • formalizes a "core" set of behavioral data items that can be loaded from a Change Detection task & fixes #29 & #34
    • adds support for loading these core data structures from both legacy stimulus_code pkl files and Foraging2 output files, fixes #24
    • refactors much of the repo to isolate functions based on the type of data they manipulate and the types of manipulations they perform
    • removes functions that are not critical to data transformations, daily summary plots, or mtrain analysis to visual_behavior_research (see https://github.com/AllenInstitute/visual_behavior_research/pull/2)

    Outstanding items

    These must be resolved before merging into master

    • [x] @mochic808 noted multiple pieces of data that need to be loaded from Foraging2 output files that do not appear to be present and/or are not computable from existing fields. These currently get filled with nulls, but will need to be populated with real data once Foraging2 meets our needs.
    • [ ] build out remaining data columns with new foraging2 data
    • [x] the legacy loading needs to be updated to fully conform to the core data structure
    • [x] bumping the version of this repo to 0.2.0

    cc @dougollerenshaw @matchings @mochic808 @nicain @ryval

    opened by neuromusic 23
  • Major Bug: multiple segmentation directories

    https://github.com/AllenInstitute/visual_behavior_analysis/blob/8d6766218de9ec89281a15060dfac263e2d001f9/visual_behavior/ophys/io/convert_level_1_to_level_2.py#L135

    This line will select the "first" directory, but there could be multiple of these.

    bug 
    opened by nicain 20
  • cannot assign timestamps to all encoder values

    from visual_behavior.translator.core import create_extended_dataframe
    from visual_behavior.schemas.extended_trials import ExtendedTrialSchema
    from visual_behavior.translator.foraging2 import data_to_change_detection_core
    import pandas as pd
    
    foraging_file_name = "/allen/programs/braintv/production/neuralcoding/prod0/specimen_651725156/behavior_session_703485615/180530092658_363894_81c53274-e9c7-4b94-b51d-78c76c494e9d.pkl"
    
    
    data = pd.read_pickle(foraging_file_name)
    assert data['platform_info']['camstim'] == '0.3.2'
    core_data = data_to_change_detection_core(data)
    df = create_extended_dataframe(
        trials=core_data['trials'],
        metadata=core_data['metadata'],
        licks=core_data['licks'],
        time=core_data['time'],
    )
    

    Error message:

    Traceback (most recent call last):
      File "/home/nicholasc/projects/mtrain_api/scripts/debug.py", line 13, in <module>
        core_data = data_to_change_detection_core(data)
      File "/home/nicholasc/projects/visual_behavior_analysis/visual_behavior/translator/foraging2/__init__.py", line 45, in data_to_change_detection_core
        "running": data_to_running(data),
      File "/home/nicholasc/projects/visual_behavior_analysis/visual_behavior/translator/foraging2/__init__.py", line 218, in data_to_running
        speed_df = get_running_speed(data)[["speed (cm/s)", "time"]]  # yeah...it's dumb i kno...
      File "/home/nicholasc/projects/visual_behavior_analysis/visual_behavior/translator/foraging2/extract.py", line 713, in get_running_speed
        raise ValueError("dx and time must be the same length")
    ValueError: dx and time must be the same length
    
    bug foraging2 mtrain_upload 
    opened by nicain 18
  • catch frequency on stage0 autorewards?

    What is the catch frequency being set to in stage0 autorewards? I don't see any reference to catch frequency in params, but it is in the top level of core_data['metadata'] (=0.125). Interrogating the trials, it doesn't appear that there are any catch trials (which is the correct behavior), so it would appear that this parameter isn't being applied.

    question 
    opened by dougollerenshaw 14
  • Can't read any files from foraging2 commit 0a4a96a

    A new batch of foraging2 files started showing up this evening with commit hash '0a4a96a'. Visual_behavior can't open any of them.

    Minimum code to replicate error:

    import pandas as pd
    from visual_behavior.translator.foraging2 import data_to_change_detection_core
    
    datapath= r'/users/dougo/dropbox/sampledata/stage_4/doc_images_0a4a96a_ObstinateDoCMouse.pkl'
    
    data=pd.read_pickle(datapath)
    
    core_data=data_to_change_detection_core(data)
    

    Traceback:

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-1-796296e37cd2> in <module>()
          6 data=pd.read_pickle(datapath)
          7 
    ----> 8 core_data=data_to_change_detection_core(data)
    
    /Users/dougo/Dropbox/PythonCode/visual_behavior/visual_behavior/translator/foraging2/__init__.pyc in data_to_change_detection_core(data)
         35         "licks": data_to_licks(data),
         36         "trials": data_to_trials(data),
    ---> 37         "running": data_to_running(data),
         38         "rewards": data_to_rewards(data),
         39         "visual_stimuli": None,  # not yet implemented
    
    /Users/dougo/Dropbox/PythonCode/visual_behavior/visual_behavior/translator/foraging2/__init__.pyc in data_to_running(data)
        197     - the index of each time is the frame number
        198     """
    --> 199     speed_df = get_running_speed(data)[["speed (cm/s)", "time"]]  # yeah...it's dumb i kno...
        200 
        201     n_frames = len(speed_df)
    
    /Users/dougo/Dropbox/PythonCode/visual_behavior/visual_behavior/translator/foraging2/extract.pyc in get_running_speed(exp_data, smooth, time)
        715 
        716     if len(time) != len(dx):
    --> 717         raise ValueError("dx and time must be the same length")
        718 
        719     speed = calc_deriv(dx, time)
    
    ValueError: dx and time must be the same length
    
    bug foraging2 
    opened by dougollerenshaw 14
  • multiple pickle files.

    @matchings @wbwakeman @NileGraddis when there are multiple pickle files, which one should we pick? I see all sorts of names for the pkl file: xxx.pkl, xxx_stimulus.pkl, xxx_session.pkl.

    Currently the convert code is using the first one:

    pkl_file = [file for file in os.listdir(pkl_dir) if file.startswith(expt_date)][0]

    However, this fails for a session like session_id 790910226, where the valid pkl file name is "xxx_stim.pkl" but the convert code picks "xxx_session.pkl".

    What's your recommendation? Should we just do a "try and except" loop and pick the one that works?

    Thanks.

    opened by farznaj 13
  • Remove dependency on computer list in `devices`

    Currently, visual behavior relies on a hard-coded dictionary linking computer name to 'Rig ID'. The dictionary lives in 'devices': https://github.com/AllenInstitute/visual_behavior_analysis/blob/master/visual_behavior/devices.py

    MPE is maintaining a list of computers and rig IDs in a network location. We should use that list instead. I'll follow up with a link to the MPE location in a comment.

    good first issue task 
    opened by dougollerenshaw 13
  • KeyError: 'auto_reward_vol'

    I'm getting key errors when trying to process 2P6 pkl files using this package. There is no auto_reward_vol in core_data['metadata']. There is a rewardvol key, however. Is there something missing in the foraging translator?

    I suspect commit 7061e37 brought this to light.

    bug 
    opened by ryval 12
  • change time distribution needs to be fixed

    Two validation functions are currently failing due to issues with the change distribution. We need to find a way to deal with these. The most likely solution is to switch from drawing change times on a continuous exponential distribution to drawing on a discrete distribution based on the expected number of stimulus flashes in the stimulus window.

    Failing validation functions:

    validate_max_change_time: if the change falls on the last flash, it can fall slightly outside of the stimulus window (in the example below, the max change time is 8.256 seconds and the stimulus window is 8.250 seconds).

    validate_monotonically_decreasing_number_of_change_times: the mapping from the continuous distribution to the discrete flashes distorts the exponential function. See the example below.

    Histogram of change times from \allen\aibs\mpe\Software\data\behavior\validation\stage_4\doc_images_9364d72_PerfectDoCMouse.pkl:

    [change time distribution histogram]

    foraging2 
    opened by dougollerenshaw 12
  • foraging2 translator is missing change times

    @neuromusic @mochic: No catch trials are being identified when loading Foraging2 data with the master branch. When I revert to the 'fix/load_licks' branch and reload the same PKL file, the problem resolves, so it would appear not to be an issue with the underlying Foraging2 data.

    Minimum code to replicate error (on master branch):

    from visual_behavior.translator.foraging2 import data_to_change_detection_core
    from visual_behavior.change_detection.trials.extended import create_extended_dataframe
    from visual_behavior.visualization.extended_trials.daily import make_daily_figure
    import pandas as pd
    
    datapath=r"\\allen\programs\braintv\workgroups\neuralcoding\Behavior\Data\M347745\output\180430100756650000.pkl"
    
    data=pd.read_pickle(datapath)
    
    core_data=data_to_change_detection_core(data)
    
    trials = create_extended_dataframe(
        trials=core_data['trials'], 
        metadata=core_data['metadata'], 
        licks=core_data['licks'], 
        time=core_data['time'],
    )
    
    assert len(trials[trials['trial_type']=='catch'])>0
    
    bug 
    opened by dougollerenshaw 12
  • Lick time/number mismatch between core_data['trials'] and extended dataframe

    There seem to be some extra licks showing up in the extended dataframe. These extra licks trigger False from a validation function designed to ensure that any pre-change licks lead to aborted trials.

    datapath = r'//allen/aibs/mpe/Software/data/behavior/validation/stage_1\doc_gratings_8910798_StupidDoCMouse.pkl'

    bug 
    opened by dougollerenshaw 11
  • Circular import problem

    import visual_behavior.plotting as vbp
    

    Throws an AttributeError: "module 'visual_behavior' has no attribute 'utilities'".

    This is because visual_behavior.utilities imports visual_behavior.visualization.behavior, which circularly imports visual_behavior.utilities.

    opened by alexpiet 0
  • 8 behavior session NWBs are missing from the platform_paper_cache and do not download properly from AWS

    There are 8 behavior sessions in the platform paper experiments table that do not have NWB files in the platform paper cache in the directory below, and they don't download from AWS when attempting to load the dataset object. Attempting to load them results in the error below, indicating that the files are truncated; however, the files simply don't exist.

    These sessions will not be included in any platform paper analysis until the issue is resolved.

    behavior_session_ids = [1002520823, 1002956042, 1003249011, 814545306, 815045874, 818007489, 818825644, 875471358]

    platform paper cache dir = \allen\programs\braintv\workgroups\nc-ophys\visual_behavior\platform_paper_cache\2.12.4\visual-behavior-ophys-1.0.1\behavior_sessions

    [error screenshot]

    This probably also requires an SDK GitHub issue, but I am logging it here first for record-keeping purposes and visibility.

    opened by matchings 0
  • update dependencies

    Starting from a fresh conda install, I get errors associated with h5py, pytables, and umap when I try to run VBA loading functions, so I believe they need to be added as dependencies.

    opened by matchings 1
  • calls to visual_behavior.database only work with specific pymongo versions

    Calls I made to visual_behavior.database only work with pymongo==3.12.3 or below (pymongo>=4.0 does not work).

    MongoClient.database_names() was removed in the migration to version 4, so list_database_names() must be used instead, as documented here: https://pymongo.readthedocs.io/en/stable/migrate-to-pymongo4.html#mongoclient-database-names-is-removed
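
    For example, a minimal sketch of the replacement call (per the migration guide linked above; assumes a reachable MongoDB instance):

    from pymongo import MongoClient

    client = MongoClient()
    # MongoClient.database_names() was removed in pymongo 4.0;
    # list_database_names() is the replacement:
    print(client.list_database_names())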

    opened by saakethmm 1