This is the code for our research 'Stepping towards real-time detection and tracking of apples using deep learning'. We provide scripts and tutorials for applying multi-object tracking and segmentation (MOTS) in an apple orchard using the [TrackR-CNN][TrackR-CNN_MOTS] algorithm.
To use the **PointTrack** algorithm instead of **TrackR-CNN**, go to the [PointTrack](algorithms/PointTrack/) folder for instructions.
## Video of results
```
kittiRoot
│   │   │   ...
```
## Config.py file
Change the file directories in `config.py` accordingly; `kittiRoot` is the directory where the dataset is stored.
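As a minimal sketch, the change amounts to pointing the path variables at your local copies; the value below is a placeholder, and any other paths in `config.py` should be adjusted the same way:

```python
# config.py -- adjust to your environment (the path below is a placeholder)
kittiRoot = '/home/user/data/kittiRoot'  # dataset root, laid out as shown above
```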
### Pretrained weights
Fine-tuned models on KITTI MOTS (for cars):
- SpatialEmbedding for cars.
- PointTrack for cars.
You can download them via [Baidu Disk](https://pan.baidu.com/s/1Mk9JWNcM1W08EAjh).
For apples, models will be uploaded soon.
The test-set segmentations can be found in the folder `apple_testset_segs`.
## Train on a dataset from scratch
To train on a dataset from scratch, follow the instructions on the master branch of this repository for labelling a custom dataset.
Next, arrange the dataset folders to match the `kittiRoot` structure above.
Then create a new environment and install all the dependencies.
The steps for training from scratch are as follows:
1. Train the instance segmentation model used for detection (SpatialEmbedding in this case). The output is a pth file called **checkpoint.pth**.
2. Train the PointTrack model, which performs the instance association. The output is also a pth file called **checkpoint.pth**, not to be confused with the SpatialEmbedding checkpoint (see the sketch after this list).
3. Evaluate the models: first generate the instance segmentation results (forwarding), then run the tracking.
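Since both stages write a file named **checkpoint.pth**, it can help to peek inside one to confirm which model it belongs to. A minimal sketch, assuming the checkpoints are ordinary `torch.save` dictionaries (the exact keys depend on the training scripts):

```python
# sketch: inspect a checkpoint to tell the two models apart
# (assumes a dict written by torch.save; exact keys may differ per script)
import torch

ckpt = torch.load('checkpoint.pth', map_location='cpu')
print(type(ckpt))
if isinstance(ckpt, dict):
    # layer/key names usually reveal whether this is the
    # SpatialEmbedding or the PointTrack checkpoint
    print(list(ckpt.keys()))
```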
## Training of SpatialEmbedding
For the training of SpatialEmbedding, we follow the original training settings of [SpatialEmbedding](https://github.com/davyneven/SpatialEmbeddings).

1. First, run:
```
$ python -u datasets/MOTSImageSelect.py
```
2. Before generating the crops, make sure that the crop sizes are correct in the code (see the note after the command below). Then do the following:
```
$ python -u utils/generate_crops.py
```
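The crop size is a constant set inside `utils/generate_crops.py` itself; a hedged sketch of the kind of line to check (the name and values are illustrative assumptions, not the repository's exact code):

```python
# illustrative only -- near the top of utils/generate_crops.py; actual names may differ
CROP_SIZE = (512, 512)  # hypothetical width/height of the generated crops
```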
3. Then start training on the crops:

```
$ python -u train_SE.py car_finetune_SE_crop
```
You can change the desired parameters inside the config file.
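As a hedged illustration of what such a config file typically contains (keys modelled on common SpatialEmbedding-style configs; they are assumptions, not the repository's exact file):

```python
# e.g. config_mots/car_finetune_SE_crop.py -- illustrative sketch; real keys may differ
args = dict(
    save_dir='./car_finetune_SE_crop',  # where checkpoint.pth will be written
    resume_path=None,                   # checkpoint to resume from, if any
    lr=5e-4,                            # learning rate
    n_epochs=200,                       # training length
    batch_size=2,
    crop_size=(512, 512),               # should match the crops from step 2
)
```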
4. Afterwards, start fine-tuning on KITTI MOTS with BN fixed. SpatialEmbedding runs this fine-tuning with a crop size 2x larger than in the first training; this can be changed in the file from step **2**.
Create a folder in the repository root with the name **resume_SE**, then copy the **checkpoint.pth** file into it.
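A minimal sketch of that bookkeeping step (the source path is a placeholder for wherever the previous step wrote its checkpoint):

```python
# sketch: stage the first-stage checkpoint for fine-tuning
import os
import shutil

os.makedirs('resume_SE', exist_ok=True)             # folder in the repository root
shutil.copy('car_finetune_SE_crop/checkpoint.pth',  # placeholder source path
            os.path.join('resume_SE', 'checkpoint.pth'))
```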
## Training of PointTrack

1. First, save the forwarded segmentation results:

```
$ python -u test_mots_se.py car_test_se_to_save
```
2. Next, run:

```
$ python -u datasets/MOTSInstanceMaskPool.py
```
3. Change the line in **car_finetune_tracking.py** which loads the weights to the default path, as follows:
```
checkpoint_path='./car_finetune_tracking/checkpoint.pth'
```
4. Afterwards, start training:
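Following the pattern of the other steps, this presumably means running the tracker training script with the tracking config, i.e. `$ python -u train_tracker_with_val.py car_finetune_tracking` (an assumption based on the script and config names above).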
The segmentation result will be saved according to the config file **repoRoot**/...
To produce the tracking results, run:

```
$ python -u test_tracking.py car_test_tracking_val
```
## Cite us
We borrow some code from [SpatialEmbedding](https://github.com/davyneven/SpatialEmbeddings).
```
@inproceedings{xu2020Segment,
title={Segment as Points for Efficient Online Multi-Object Tracking and Segmentation},
author={Xu, Zhenbo and Zhang, Wei and Tan, Xiao and Yang, Wei and Huang, Huan and Wen, Shilei and Ding, Errui and Huang, Liusheng},
booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
year={2020}
}
@inproceedings{xupointtrack++,
title={PointTrack++ for Effective Online Multi-Object Tracking and Segmentation},
author={Xu, Zhenbo and Zhang, Wei and Tan, Xiao and Yang, Wei and Su, Xiangbo and Yuan, Yuchen and Zhang, Hongwu and Wen, Shilei and Ding, Errui and Huang, Liusheng},
booktitle={CVPR Workshops},
year={2020}
}
```
## Contact
If you find problems in the code, please open an issue or contact Hilmy (hilmy.baja@wur.nl).
## License
This software is released under a Creative Commons license that allows personal and research use only. For a commercial license, please contact the authors. You can view a license summary [here](http://creativecommons.org/licenses/by-nc/4.0/).