This repo contains functionality to train deep neural decoders for any behavioural variable that can be expressed as a time series. The network architectures are inspired by those described by Frey et al. (2021). We use a PyTorch implementation for added network versatility.
This repo also contains functionality for using these decoders to perform BrainSLAM - a SLAM algorithm designed to operate on neural LFP data (as opposed to the camera or lidar input used by traditional SLAM algorithms).
This repo was used in both https://arxiv.org/abs/2402.00588 and https://www.biorxiv.org/content/10.1101/2024.02.01.578423v1.abstract
Though one could use this repo to decode any variable, it has been built with spatial variables in mind (namely position, direction, and speed).
Install PyTorch and the given requirements.
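For example (assuming the requirements are listed in a requirements.txt at the repo root; adjust the torch install to your CUDA setup):

pip install torch
pip install -r requirements.txt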
To train networks, you need at least one HDF5 file in the data/ directory.
The file hierarchy inside this HDF5 file should look like the following (a sketch for building such a file is shown after the tree):
-- inputs
---- wavelets
---- fourier_frequencies
-- outputs
---- e.g. position
---- e.g. another behavioural variable
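If you need to assemble such a file yourself, the sketch below shows one way to do it with h5py. The group and dataset names follow the tree above; the shapes (time x frequency x channel for the wavelets, time x 2 for position) are illustrative assumptions, so match them to whatever your own preprocessing produces.

```python
# Minimal sketch (using h5py) for building a compatible HDF5 file.
# Dataset shapes are illustrative assumptions; use the shapes produced
# by your own preprocessing (e.g. wavelet-transformed LFP).
import numpy as np
import h5py

n_timepoints, n_frequencies, n_channels = 10000, 26, 32

with h5py.File("data/Elliott_train.h5", "w") as f:
    inputs = f.create_group("inputs")
    # Wavelet-transformed LFP: time x frequency x channel (assumed layout)
    inputs.create_dataset(
        "wavelets",
        data=np.random.rand(n_timepoints, n_frequencies, n_channels).astype("float32"),
    )
    # Centre frequencies of the wavelet decomposition
    inputs.create_dataset(
        "fourier_frequencies",
        data=np.logspace(0, 2, n_frequencies).astype("float32"),
    )

    outputs = f.create_group("outputs")
    # One dataset per behavioural variable you want to decode
    outputs.create_dataset("position", data=np.random.rand(n_timepoints, 2).astype("float32"))
    outputs.create_dataset("speed", data=np.random.rand(n_timepoints).astype("float32"))
```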
To train a network:
- First, edit the training hyperparameters in deepinsight/options.py
- Edit the loss keys and their corresponding loss functions and loss weights in train_CNN.py (a hypothetical sketch of this configuration is shown after this list)
- If you'd like to plot losses using wandb, make sure it is set up on your machine (i.e. wandb.init() runs successfully)
- Run train_CNN.py to train and save the model, e.g.
python train_CNN.py --h5files data/Elliott_train.h5 --mod_name Elliott --use_wandb True
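The loss configuration in train_CNN.py might look roughly like the sketch below. The variable names (loss_functions, loss_weights) and the choice of MSE losses are assumptions for illustration only; use whatever names and criteria the script actually defines.

```python
# Hypothetical sketch of the per-output loss configuration in train_CNN.py.
# One entry per decoded behavioural variable; keys should match the dataset
# names under outputs/ in the HDF5 file.
import torch.nn as nn

loss_functions = {
    "position": nn.MSELoss(),   # squared error on (x, y) coordinates
    "direction": nn.MSELoss(),  # a cyclical loss may be preferable for angles
    "speed": nn.MSELoss(),
}

# Relative weighting of each term in the total training loss
loss_weights = {
    "position": 1.0,
    "direction": 0.5,
    "speed": 0.5,
}
```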
To test a network:
- First, edit the loss keys in test_CNN.py depending on what is being decoded
- Run test_CNN.py to test the model (the built-in plots assume position, direction, and speed have been decoded), e.g.
python test_CNN.py --h5file data/Elliott_test.h5 --model_path models/Elliott_0.pt
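If you want to inspect a saved checkpoint outside of test_CNN.py, a minimal loading sketch is given below. It assumes the file was written with torch.save(); whether it contains the full model object or just a state_dict depends on how train_CNN.py saves it.

```python
# Minimal sketch: peek at a checkpoint produced by train_CNN.py.
# Assumes torch.save() was used; exact contents depend on the script.
import torch

checkpoint = torch.load("models/Elliott_0.pt", map_location="cpu")
print(type(checkpoint))
if isinstance(checkpoint, dict):
    # Likely a state_dict: parameter name -> tensor
    print(list(checkpoint.keys())[:5])
```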
To run BrainSLAM:
- You MUST be decoding variables named "position", "direction", and "speed" (a quick sanity check is sketched after this list)
- Run run_brainSLAM.py e.g.
python run_brainSLAM.py --h5file data/Elliott_test.h5 --model_path models/Elliott_0.pt
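As a hypothetical sanity check before running BrainSLAM, you can confirm that the test file's outputs group contains the three variables it requires; the group and dataset names below follow the hierarchy described above.

```python
# Optional check: confirm the HDF5 test file contains the behavioural
# variables BrainSLAM expects under the outputs/ group.
import h5py

required = {"position", "direction", "speed"}

with h5py.File("data/Elliott_test.h5", "r") as f:
    available = set(f["outputs"].keys())
    missing = required - available
    if missing:
        raise KeyError(f"outputs group is missing: {sorted(missing)}")
    print("All required output variables found:", sorted(required))
```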
