Loading /home/inference/Amplitudemode_AI/all_model_and_pred/xxx/segment/train3/weights/last.onnx for ONNX OpenCV DNN inference...
[ERROR:0@3.062] global onnx_importer.cpp:1051 handleNode DNN/ONNX: ERROR during processing node with 2 inputs and 2 outputs: [Split]:(onnx_node!/model.22/Split) from domain='ai.onnx'
Traceback (most recent call last):
  File "/home/inference/Amplitudemode_AI/all_model_and_pred/AI_Ribfrac_ths/onnx_test_seg/infer-seg.py", line 167, in <module>
    model = AutoBackend(weights="/home/inference/Amplitudemode_AI/all_model_and_pred/xxx/segment/train3/weights/last.onnx", dnn=True)
  File "/home/inference/miniconda3/envs/yolov8/lib/python3.10/site-packages/ultralytics/nn/autobackend.py", line 124, in __init__
    net = cv2.dnn.readNetFromONNX(w)
cv2.error: OpenCV(4.7.0) /io/opencv/modules/dnn/src/onnx/onnx_importer.cpp:1073: error: (-2:Unspecified error) in function 'handleNode'
> Node [Split@ai.onnx]:(onnx_node!/model.22/Split) parse error: OpenCV(4.7.0) /io/opencv/modules/dnn/src/layers/slice_layer.cpp:274: error: (-215:Assertion failed) splits > 0 && inpShape[axis_rw] % splits == 0 in function 'getMemoryShapes'

The above is the error reported when trying to load the model with OpenCV.
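The failing assertion in the last line can be illustrated in a few lines of Python. This is a rough sketch of the divisibility check only (an assumption paraphrased from the assertion text, not OpenCV's actual C++ code in slice_layer.cpp), and the 144-channel head tensor used below is hypothetical:

```python
# Rough sketch of the assertion OpenCV raises in getMemoryShapes:
#   splits > 0 && inpShape[axis_rw] % splits == 0
# When OpenCV lowers an ONNX Split into equal-sized slices, the length of the
# split axis must divide evenly by the number of outputs.
def equal_split_ok(inp_shape, axis, splits):
    """True iff the axis length divides evenly into `splits` equal parts."""
    return splits > 0 and inp_shape[axis] % splits == 0

# The /model.22/Split node has 2 outputs. With a hypothetical head tensor of
# 144 channels, an equal two-way split would pass the check ...
print(equal_split_ok([1, 144, 8400], 1, 2))  # True
# ... but with an odd channel count, or a shape the importer failed to infer,
# the assertion fires and produces the error shown above.
print(equal_split_ok([1, 145, 8400], 1, 2))  # False
```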

Next, I searched the issue tracker of the official YOLOv8 project on GitHub. After some trial and error, the keywords that finally turned up the answer were:

ONNX DNN splits > 0 && inpShape[axis_rw] % splits == 0 in function 'getMemoryShapes'

The matching issue: Exported ONNX cannot be opened in OpenCV (ultralytics/ultralytics#226): https://github.com/ultralytics/ultralytics/issues/226

The solution found there: set the following at export time (the key is adding opset=11):

yolo mode=export model=runs/detect/train/weights/best.pt imgsz=[640,640] format=onnx opset=11

The actual conversion code:

from ultralytics import YOLO

model = YOLO(
    "/home/inference/Amplitudemode_AI/all_model_and_pred/xxx/segment/train3/weights/last.pt")
success = model.export(format="onnx", opset=11, simplify=True)  # export the model to onnx format
assert success

Inference through the official API with the converted ONNX model:

from ultralytics import YOLO
model = YOLO("/home/inference/Amplitudemode_AI/all_model_and_pred/xxx/segment/train3/weights/last.onnx")  # load the model
results = model.predict(
    source='/home/inference/tt', imgsz=640, save=True, boxes=False)  # save plotted images

Inference works correctly.

P.S. 2024-02-22

The official-API inference above was missing the crucial dnn=True option; once it is added, an error appears.
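Why dnn=True matters: below is a simplified, hypothetical sketch of the backend choice (the real logic lives in ultralytics/nn/autobackend.py and is more involved). With dnn=True an .onnx file is loaded through cv2.dnn.readNetFromONNX, otherwise through onnxruntime, so the successful run above never actually exercised the OpenCV DNN path:

```python
# Hypothetical, simplified dispatch; pick_backend is illustrative only and is
# not a real ultralytics function.
def pick_backend(weights: str, dnn: bool) -> str:
    if not weights.endswith(".onnx"):
        return "native"            # e.g. a .pt checkpoint loaded by torch
    return "opencv-dnn" if dnn else "onnxruntime"

print(pick_backend("last.onnx", dnn=False))  # onnxruntime
print(pick_backend("last.onnx", dnn=True))   # opencv-dnn
```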

So I went back to the official model to test, first converting it to ONNX with the following code:

# -*-coding:utf-8-*-
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n-seg.pt')  # load an official model
# Export the model
model.export(format='onnx', opset=12)

opset=12 is used here because I noticed the official example has moved up to 12: https://github.com/ultralytics/ultralytics/tree/main/examples/YOLOv8-OpenCV-ONNX-Python

Command-line test inference then fails with the following error:

yolo predict task=segment model=yolov8n-seg.onnx imgsz=640 dnn
WARNING ⚠️ 'source' is missing. Using default 'source=/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/assets'.
Ultralytics YOLOv8.1.17 🚀 Python-3.9.18 torch-1.12.1+cu102 CUDA:0 (Tesla T4, 14927MiB)
Loading yolov8n-seg.onnx for ONNX OpenCV DNN inference...
WARNING ⚠️ Metadata not found for 'model=yolov8n-seg.onnx'

Traceback (most recent call last):
  File "/home/inference/miniconda3/envs/yolov8v2/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/cfg/__init__.py", line 568, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/engine/model.py", line 429, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
  File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/engine/predictor.py", line 213, in predict_cli
    for _ in gen:  # noqa, running CLI inference without accumulating any outputs (do not modify)
  File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 43, in generator_context
    response = gen.send(None)
  File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/engine/predictor.py", line 290, in stream_inference
    self.results = self.postprocess(preds, im, im0s)
  File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/models/yolo/segment/predict.py", line 30, in postprocess
    p = ops.non_max_suppression(
  File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/utils/ops.py", line 230, in non_max_suppression
    output = [torch.zeros((0, 6 + nm), device=prediction.device)] * bs
RuntimeError: Trying to create tensor with negative dimension -837: [0, -837]

I found the matching issue:

Error while inferencing with DNN module using CLI and ONNX export · Issue #2178 · ultralytics/ultralytics

No one in that thread has solved it. Incidentally, detection models do run DNN inference normally.
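One plausible reading of the negative-dimension error (an assumption on my part, not confirmed by the issue thread): non_max_suppression derives the mask-coefficient count nm from the prediction's channel count and the class count, and the "Metadata not found" warning above means the class count has to be guessed. The arithmetic below is a hedged sketch paraphrased from ultralytics/utils/ops.py, not the library's exact code:

```python
# Per-detection output width = 6 fixed columns (box xyxy, conf, cls) plus nm
# mask coefficients, where nm = channels - num_classes - 4.
def output_width(num_channels, num_classes):
    nm = num_channels - num_classes - 4  # mask-coefficient count
    return 6 + nm

# yolov8n-seg head: 4 box + 80 class + 32 mask channels = 116
print(output_width(116, 80))  # 38 -> torch.zeros((0, 38)) is fine
# If the class count is mis-guessed because the metadata is missing, nm goes
# negative and torch.zeros((0, 6 + nm)) raises the "negative dimension" error.
print(output_width(116, 953))
```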

Some sources suggest it may be a torch version issue. I tried 2.0.2, 1.12.1, and 1.11.0 without success (along the way I also followed the in-code comment "WARNING: DNN inference with torch>=1.12 may require do_constant_folding=False", which did not help either). It may also be an OpenCV version issue, but 4.9, 4.8, and 4.7 all failed as well. This feels like a deep pitfall.
