Commit 2603ce64 authored by Jong, Stefan1 de

Update README.md

parent 7da923a1
@@ -79,7 +79,7 @@ If you want to use the pretrained models on APPLE_MOTS, download it [here][pretr
## Training
When using pretrained models, this step can be omitted.
- To train a model, run `main.py` with a corresponding configuration file e.g. `python main.py configs/3dconv8_24` (`cd algorithms/TrackR-CNN`). Seven models (3d_conv) use two 3D conv layers where the model eight (lstm) uses two stacked lstm layers. The description behind the model corresponds to respectively the batch size and number of epochs. Note that the number of epochs in the configuration files are calculated by the total number of epochs / batch size. Because for every epoch all possible batches of e.g. 8 consecutive frames are processed.
+ To train a model, run `main.py` with the corresponding configuration file, e.g. `python main.py configs/3dconv8_24` (after `cd algorithms/TrackR-CNN`). Seven models (3d_conv) use two 3D convolutional layers, while the eighth model (lstm) uses two stacked LSTM layers. The suffix of each configuration name gives the batch size and the number of epochs, respectively. Note that the epoch count in a configuration file is the total number of epochs divided by the batch size, because each epoch processes all possible batches of e.g. 8 consecutive frames.
### APPLE_MOTS
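For clarity, here is a minimal sketch of the relation between the epoch value in a configuration name and the total number of training epochs, assuming the `<model><batch size>_<epochs>` naming convention described in the diff above. The helper functions and names below are hypothetical illustrations and are not part of the TrackR-CNN code.

```python
# Hypothetical sketch (not part of TrackR-CNN): how the epoch value in a
# config name such as "3dconv8_24" relates to the total number of epochs,
# assuming epochs_in_config = total_epochs / batch_size as described above.


def config_epochs(total_epochs: int, batch_size: int) -> int:
    """Epoch count to put in the config: total epochs divided by batch size,
    since one config 'epoch' already covers every possible batch of
    `batch_size` consecutive frames."""
    if total_epochs % batch_size != 0:
        raise ValueError("total_epochs should be a multiple of batch_size")
    return total_epochs // batch_size


def config_name(model: str, batch_size: int, total_epochs: int) -> str:
    """Build a name following the '<model><batch size>_<epochs>' pattern."""
    return f"{model}{batch_size}_{config_epochs(total_epochs, batch_size)}"


if __name__ == "__main__":
    # 192 total epochs with batches of 8 consecutive frames -> '3dconv8_24'
    print(config_name("3dconv", batch_size=8, total_epochs=192))
```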