MMDet to TensorRT
[!NOTE]
The main branch supports model conversion for MMDetection>=3.0. If you want to convert models from an older MMDetection, please switch to the corresponding branch.
News
- 2024.02: Support MMDetection>=3.0
Introduction
This project aims to support End2End deployment of models in MMDetection with TensorRT.
Mask support is experimental.
Features:
- fp16
- int8 (experimental)
- batched input
- dynamic input shape
- combination of different modules
- DeepStream
Requirements
- install MMDetection:
  pip install openmim
  mim install mmdet==3.3.0
- install torch2trt_dynamic:
  git clone https://github.com/grimoire/torch2trt_dynamic.git torch2trt_dynamic
  cd torch2trt_dynamic
  pip install -e .
- install amirstan_plugin:
  - install TensorRT
  - clone the repo and build the plugin:
    git clone --depth=1 https://github.com/grimoire/amirstan_plugin.git
    cd amirstan_plugin
    git submodule update --init --progress --depth=1
    mkdir build
    cd build
    cmake -DTENSORRT_DIR=${TENSORRT_DIR} ..
    make -j10
[!NOTE]
Don't forget to set the environment variable (e.g. in ~/.bashrc):
export AMIRSTAN_LIBRARY_PATH=${amirstan_plugin_root}/build/lib
A quick way to verify that the plugin library can be loaded is sketched below.
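A minimal sanity check, assuming the build produced libamirstan_plugin.so under the exported path (the library filename is an assumption; adjust it to whatever your build/lib folder contains):

import ctypes
import os

# AMIRSTAN_LIBRARY_PATH should point at amirstan_plugin/build/lib
lib_dir = os.environ['AMIRSTAN_LIBRARY_PATH']
# assumed library name; check the build/lib folder if the load fails
ctypes.CDLL(os.path.join(lib_dir, 'libamirstan_plugin.so'))
print('amirstan plugin loaded from', lib_dir)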
Installation
Host
git clone https://github.com/grimoire/mmdetection-to-tensorrt.git
cd mmdetection-to-tensorrt
pip install -e .
Docker
Build docker image
sudo docker build -t mmdet2trt_docker:v1.0 docker/
Run (this will show the help of the CLI entrypoint):
sudo docker run --gpus all -it --rm -v ${your_data_path}:${bind_path} mmdet2trt_docker:v1.0
Or, if you want to open a terminal inside the container:
sudo docker run --gpus all -it --rm -v ${your_data_path}:${bind_path} --entrypoint bash mmdet2trt_docker:v1.0
Example conversion:
sudo docker run --gpus all -it --rm -v ${your_data_path}:${bind_path} mmdet2trt_docker:v1.0 ${bind_path}/config.py ${bind_path}/checkpoint.pth ${bind_path}/output.trt
Usage
Create a TensorRT model from an MMDetection model. Details can be found in getting_started.md.
CLI
# conversion might take a few minutes.
mmdet2trt ${CONFIG_PATH} ${CHECKPOINT_PATH} ${OUTPUT_PATH}
Run mmdet2trt -h for help on optional arguments.
Python
import torch

from mmdet2trt import mmdet2trt

# dynamic shape ranges of the input tensor: [batch, channels, height, width]
shape_ranges = dict(
    x=dict(
        min=[1, 3, 320, 320],
        opt=[1, 3, 800, 1344],
        max=[1, 3, 1344, 1344],
    )
)

trt_model = mmdet2trt(cfg_path,
                      weight_path,
                      shape_ranges=shape_ranges,
                      fp16_mode=True)

# save the converted model
torch.save(trt_model.state_dict(), save_model_path)

# save the serialized engine if you want to use it with the c++ api
with open(save_engine_path, mode='wb') as f:
    f.write(trt_model.state_dict()['engine'])
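If you want to consume the saved engine outside of the PyTorch wrapper (mirroring what the C++ API does), a minimal sketch of deserializing it with the TensorRT Python API is shown below; libamirstan_plugin.so is an assumed library filename, and save_engine_path is carried over from the snippet above:

import ctypes
import os

import tensorrt as trt

# the custom plugin library must be loaded before deserialization,
# otherwise TensorRT cannot resolve the amirstan ops inside the engine
ctypes.CDLL(os.path.join(os.environ['AMIRSTAN_LIBRARY_PATH'],
                         'libamirstan_plugin.so'))  # assumed filename

logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, '')

with open(save_engine_path, 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())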
[!NOTE]
The input of the engine is the preprocessed image tensor. The outputs of the engine are num_dets, bboxes, scores and class_ids. If you enable the enable_mask flag, there will be an additional mask output. The bboxes output of the engine is not divided by scale_factor.
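A minimal sketch of mapping the raw engine boxes back to original-image coordinates; outputs and scale_factor are placeholder names (the four engine outputs and the width/height scales applied during preprocessing), not part of the API:

num_dets, bboxes, scores, class_ids = outputs

# bboxes are x1, y1, x2, y2 in the resized-input coordinate frame;
# dividing by the preprocessing scale factors recovers original-image coordinates
bboxes[..., [0, 2]] /= scale_factor[0]  # x coordinates / width scale
bboxes[..., [1, 3]] /= scale_factor[1]  # y coordinates / height scale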
How to perform inference with the converted model:
from mmdet.apis import inference_detector
from mmdet2trt.apis import create_wrap_detector
# create wrap detector
trt_detector = create_wrap_detector(trt_model, cfg_path, device_id)
# the result shares the same format as mmdetection
result = inference_detector(trt_detector, image_path)
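Since the result follows the MMDetection>=3.0 format, it can be unpacked like the output of any other detector; a short sketch, assuming a single image was passed so result is one DetDataSample:

pred = result.pred_instances

# tensors of shape (num_boxes, 4), (num_boxes,) and (num_boxes,)
bboxes = pred.bboxes
scores = pred.scores
labels = pred.labels

# keep only confident detections
keep = scores > 0.3
print(bboxes[keep], labels[keep])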
Try the demo in demo/inference.py, or demo/cpp if you want to do inference with the C++ API.
Read getting_started.md for more details.
How does it work?
Most other projects take the PyTorch => ONNX => TensorRT route. This repo converts PyTorch => TensorRT directly, avoiding the unnecessary ONNX IR. Read how-does-it-work for details.
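The conversion is driven by torch2trt_dynamic-style converters that map intercepted PyTorch calls directly onto TensorRT network layers. A schematic sketch of that idea, written against the decorator/context API used by torch2trt-style converters (treat the exact import path and names as assumptions; see torch2trt_dynamic for the real registry):

import tensorrt as trt
from torch2trt_dynamic import tensorrt_converter  # import path is an assumption

@tensorrt_converter('torch.nn.functional.relu')
def convert_relu(ctx):
    # ctx carries the intercepted call: its arguments, its return value
    # and the TensorRT network being built
    inp = ctx.method_args[0]
    out = ctx.method_return

    # add the equivalent TensorRT layer and remember its output tensor,
    # so downstream converters can keep wiring the network together
    layer = ctx.network.add_activation(inp._trt, trt.ActivationType.RELU)
    out._trt = layer.get_output(0)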
Supported Models/Modules
[!NOTE]
Some models have only been tested on MMDet<3.0. If you find a model that fails to convert, please report it in the issues.
- Faster R-CNN
- Cascade R-CNN
- Double-Head R-CNN
- Group Normalization
- Weight Standardization
- DCN
- SSD
- RetinaNet
- Libra R-CNN
- FCOS
- Fovea
- CARAFE
- FreeAnchor
- RepPoints
- NAS-FPN
- ATSS
- PAFPN
- FSAF
- GCNet
- Guided Anchoring
- Generalized Attention
- Dynamic R-CNN
- Hybrid Task Cascade
- DetectoRS
- Side-Aware Boundary Localization
- YOLOv3
- PAA
- CornerNet(WIP)
- Generalized Focal Loss
- Grid RCNN
- VFNet
- GROIE
- Mask R-CNN(experiment)
- Cascade Mask R-CNN(experiment)
- Cascade RPN
- DETR
- YOLOX
Tested on:
- torch=2.2.0
- tensorrt=8.6.1
- mmdetection=3.3.0
- cuda=11.7
FAQ
Read this page if you encounter any problems.
License
This project is released under the Apache 2.0 license.