cs:vision:object_detection:start [2018/03/31 20:01] (current), James Irwin
===== Getting a Model =====
Instructions were adapted from [[https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/configuring_jobs.md|here]].
One of the advantages of using TFODA is that it is really easy to try different network architectures (models) and compare their speed vs. accuracy tradeoffs. Example config files can be found [[https://github.com/tensorflow/models/tree/master/research/object_detection/samples/configs|here]]. You'll need to modify these config files a bit for your own use. In addition, most of the models have network weights that have been pretrained on some dataset. Starting from these checkpoints is usually much faster than training from scratch. You can locate them [[https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md|here]].

Place the model config file and the pretrained model in the models/ directory of your workspace.
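As a rough guide, the fields you usually have to edit in a sample config are the class count, the pretrained checkpoint path, and the dataset paths. The excerpt below is a sketch only: the field names come from the TFODA pipeline config format, but the model type and all paths are placeholders you must replace with your own workspace files.

```
model {
  ssd {
    num_classes: 3  # set to the number of classes in your label map
  }
}
train_config {
  # point this at the pretrained checkpoint you downloaded from the model zoo
  fine_tune_checkpoint: "models/model.ckpt"
}
train_input_reader {
  tf_record_input_reader {
    input_path: "data/train.record"  # placeholder: your training TFRecord
  }
  label_map_path: "data/label_map.pbtxt"  # placeholder: your label map
}
```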
  
===== Start Training =====
The following commands assume your current working directory is the root of the workspace you created earlier.

To start training, run:
  python ~/.local/tensorflow_object_detection_api/research/object_detection/train.py \
    --logtostderr \
    --pipeline_config_path=<model config file> \
    --train_dir=output/train

To evaluate the performance of the network, run:
  python ~/.local/tensorflow_object_detection_api/research/object_detection/eval.py \
    --logtostderr \
    --pipeline_config_path=<model config file> \
    --checkpoint_dir=output/train/ \
    --eval_dir=output/eval/
The eval.py script will notice every time the train.py script saves a new checkpoint, and will evaluate its performance on the test dataset.
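The way eval.py picks up new checkpoints amounts to scanning the train directory for the highest-numbered model.ckpt-* files. A minimal Python sketch of that selection logic (the helper name here is ours, not part of the API):

```python
import glob
import os


def latest_checkpoint(train_dir):
    """Return the newest checkpoint prefix in train_dir, or None.

    train.py saves checkpoints as model.ckpt-<step>.index (plus .meta and
    .data files); the prefix without the .index suffix is what eval.py and
    the export script expect.
    """
    index_files = glob.glob(os.path.join(train_dir, "model.ckpt-*.index"))
    if not index_files:
        return None

    def step(path):
        # "model.ckpt-2500.index" -> 2500
        return int(path.rsplit("-", 1)[1].split(".")[0])

    newest = max(index_files, key=step)
    return newest[: -len(".index")]
```

This is only an illustration of the checkpoint naming convention; eval.py does its own polling internally.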

To visualize the training process, start up TensorBoard:
  tensorboard --logdir=output
TensorBoard is a little web server; you can access it at localhost:6006 in your browser.
===== Exporting a trained model for inference =====
To export a trained checkpoint to the format used by ''%%robosub_object_detection%%'', follow [[https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/exporting_models.md|these]] instructions, or run:
  python object_detection/export_inference_graph.py \
      --input_type image_tensor \
      --pipeline_config_path ${PIPELINE_CONFIG_PATH} \
      --trained_checkpoint_prefix ${TRAIN_PATH} \
      --output_directory output_inference_graph.pb

At this point, you should upload the label_map.pbtxt and frozen_inference_graph.pb files into a uniquely named folder inside [[http://robosub.eecs.wsu.edu/data/vision/trained_models/]], so it's easy for other members to access the models.
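The label map file you upload alongside the frozen graph is a plain-text protobuf mapping class ids to names. If you ever need to read it outside of TensorFlow (for example, to turn detection class ids back into names), a small parser like the sketch below is enough for the simple item { id, name } layout. This is our own helper for illustration, not the official label_map_util from the API:

```python
import re


def parse_label_map(text):
    """Parse a simple label map (item { id: N name: '...' }) into {id: name}."""
    mapping = {}
    # Each "item { ... }" block holds one class entry.
    for block in re.findall(r"item\s*\{(.*?)\}", text, re.DOTALL):
        id_match = re.search(r"id:\s*(\d+)", block)
        name_match = re.search(r"name:\s*['\"]([^'\"]+)['\"]", block)
        if id_match and name_match:
            mapping[int(id_match.group(1))] = name_match.group(1)
    return mapping
```

Example: a file containing ''item { id: 1 name: 'gate' }'' parses to ''{1: 'gate'}''.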