Nvidia Deepstream in Detail: 3. Deepstream Python RTSP Video Output
This post walks through the official sample `deepstream_test_1_rtsp_out.py`. Its main purpose is to output the processed video as an RTSP stream: when the Python file runs successfully, no video appears on screen; instead, the system publishes an RTSP address. Reading that address with VLC, or with Flask as in this example, shows the video annotated by the object detector. Some modules were already explained in earlier posts of this series and are only touched on briefly here.
If you find this helpful, please remember to like and bookmark.
1. How to Run
First, the question everyone cares about most: how to get this Python program running. If you have not yet installed DeepStream 6.0 and DeepStream Python 1.1.0, please see the first post in this series: Nvidia Deepstream in Detail: Installing Deepstream 6.0 and Deepstream Python 1.1.0.
As for running `deepstream_test_1_rtsp_out.py`: start with the `README` file in the official folder. As always, if you can follow the official documentation, I strongly recommend doing so; a blog post inevitably omits details or goes out of date.
First install the dependencies as given in the official README:

```
$ sudo apt update
$ sudo apt-get install libgstrtspserver-1.0-0 gstreamer1.0-rtsp
```

For gst-rtsp-server (and other GStreamer stuff) to be accessible in Python through gi.require_version(), it needs to be built with gobject-introspection enabled (libgstrtspserver-1.0-0 already is). We still need to install the introspection typelib package:

```
$ sudo apt-get install libgirepository1.0-dev
$ sudo apt-get install gobject-introspection gir1.2-gst-rtsp-server-1.0
```
Then the terminal commands to run the demo are:

```
cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps
python3 deepstream_test1_rtsp_out.py -i /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_qHD.h264
```
Two things need attention here. The first is the model paths in `dstest1_pgie_config.txt`. The official config uses relative paths:

```
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
```

If you move `dstest1_pgie_config.txt` somewhere else, though, use absolute paths instead, for example:

```
model-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/cal_trt.bin
```

Note that all of these paths live under `samples/`. A wrong `model-engine-file` path (for example, one missing the `samples` directory) is exactly what produces the "Deserialize engine failed ... open error" warning in the log below, after which DeepStream rebuilds the engine from the caffemodel, which takes a while.
Also, `deepstream_test_1_rtsp_out.py` contains the line `sys.path.append('../')`. This is needed so that it can import helper functions from the `common` package one directory up.
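For reference, the corresponding imports at the top of the sample look roughly like this (paraphrased from the official code; module names may vary slightly between versions of deepstream_python_apps):

```python
import sys
sys.path.append('../')  # make the sibling "common" package importable

from common.is_aarch_64 import is_aarch64  # platform check (Jetson vs. x86)
from common.bus_call import bus_call       # standard GStreamer bus message handler
```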
Once the code is running, here is the terminal output:
```
Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
Creating H264 Encoder
Creating H264 rtppay
Playing file /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_qHD.h264
Adding elements to Pipeline
Linking elements in the Pipeline
*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Starting pipeline
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:00.720879172 15042 0x2996b530 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/models open error
0:00:02.190228549 15042 0x2996b530 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/models failed
0:00:02.190342602 15042 0x2996b530 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/models failed, try rebuild
0:00:02.190377035 15042 0x2996b530 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
0:00:35.550098071 15042 0x2996b530 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:35.607265055 15042 0x2996b530 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
Frame Number=0 Number of Objects=6 Vehicle_count=5 Person_count=1
H264: Profile = 66, Level = 0
NVMEDIA_ENC: bBlitMode is set to TRUE
Frame Number=1 Number of Objects=6 Vehicle_count=5 Person_count=1
```
Essentially, apart from the RTSP output, this example is no different from the previous post's deepstream_test_1.py. The published RTSP address is rtsp://localhost:8554/ds-test, which you can open directly in VLC.
2. The Pipeline
The Gst pipeline consists of the following elements (those already covered in the previous post are only mentioned briefly; a creation sketch follows the list):

- filesrc: reads the video file in;
- h264parse: parses the H.264 video file;
- nvv4l2decoder: decoding. Run `gst-inspect-1.0 nvv4l2decoder` for detailed parameter documentation;
- nvstreammux: batches the incoming video streams together;
- nvinfer: invokes the CNN model;
- nvvideoconvert: converts the image format from NV12 to RGBA for the nvosd element;
- nvdsosd: draws bounding boxes and text onto the frames;
- nvvideoconvert: converts the image format once more (RGBA to I420) for encoding;
- capsfilter: "The element does not modify data as such, but can enforce limitations on the data format.";
- nvv4l2h264enc/nvv4l2h265enc: encoding, in preparation for the RTSP output. Run `gst-inspect-1.0 nvv4l2h264enc` for detailed parameter documentation;
- rtph264pay/rtph265pay: packs the encoded video into RTP packets (see Section 5);
- udpsink: the output.
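Each of these elements is created with `Gst.ElementFactory.make` before being added to the pipeline. A few representative lines, paraphrased from the official sample (the factory names are the real GStreamer plugin names; the second argument is just a per-instance nickname):

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline()

# Gst.ElementFactory.make(factory_name, instance_nickname)
source = Gst.ElementFactory.make("filesrc", "file-source")
h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
```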
Once all of the above plugin elements are instantiated, we add them to the pipeline:
```python
pipeline.add(source)
pipeline.add(h264parser)
pipeline.add(decoder)
pipeline.add(streammux)
pipeline.add(pgie)
pipeline.add(nvvidconv)
pipeline.add(nvosd)
pipeline.add(nvvidconv_postosd)
pipeline.add(caps)
pipeline.add(encoder)
pipeline.add(rtppay)
pipeline.add(sink)
```
and link them:
```python
source.link(h264parser)
h264parser.link(decoder)

sinkpad = streammux.get_request_pad("sink_0")
if not sinkpad:
    sys.stderr.write(" Unable to get the sink pad of streammux \n")
srcpad = decoder.get_static_pad("src")
if not srcpad:
    sys.stderr.write(" Unable to get source pad of decoder \n")
srcpad.link(sinkpad)

streammux.link(pgie)
pgie.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(nvvidconv_postosd)
nvvidconv_postosd.link(caps)
caps.link(encoder)
encoder.link(rtppay)
rtppay.link(sink)
```
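After linking, the sample attaches a handler to the pipeline's bus and spins a main loop; roughly like this (paraphrased from the official code, which uses `GLib.MainLoop` in recent versions of the bindings):

```python
from gi.repository import GLib

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)  # bus_call comes from the common package

# Start playback and block until EOS or an error arrives on the bus.
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except Exception:
    pass
pipeline.set_state(Gst.State.NULL)
```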
3. nvvideoconvert and capsfilter
The `nvvideoconvert` element converts between image formats. It is used more than once in this pipeline: first from NV12 to RGBA for the OSD element, and later from RGBA to I420 for video encoding. `capsfilter` is generally used together with `nvvideoconvert` to declare which format the image should be converted to, as in this example: `caps.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420"))`. This link gives a similar explanation: "Caps filters are often placed after converter elements like audioconvert, audioresample, videoconvert or videoscale to force those converters to convert data to a specific output format at a certain point in a stream."
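Put together, the post-OSD conversion stage looks roughly like this (paraphrased from the sample; the instance nicknames are arbitrary):

```python
# Convert the OSD output (RGBA) into I420 in NVMM memory for the encoder.
nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", "convertor_postosd")

# The capsfilter pins the negotiated format at this point in the pipeline.
caps = Gst.ElementFactory.make("capsfilter", "filter")
caps.set_property("caps",
                  Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420"))
```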
4. nvv4l2h264enc/nvv4l2h265enc
These two elements are the V4L2 H.264/H.265 video encoders. Run `gst-inspect-1.0 nvv4l2h264enc` for detailed parameter documentation; some of these parameters also appear in the pipeline graph. Note that `nvv4l2h264enc`/`nvv4l2h265enc` take `I420` as input and produce a `byte-stream` as output, as the pad templates show:
```
Pad Templates:
  SRC template: 'src'
    Availability: Always
    Capabilities:
      video/x-h264
          stream-format: byte-stream
              alignment: au

  SINK template: 'sink'
    Availability: Always
    Capabilities:
      video/x-raw(memory:NVMM)
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
                 format: { (string)I420, (string)NV12, (string)P010_10LE, (string)NV24 }
              framerate: [ 0/1, 2147483647/1 ]
```
The code sets a few of these parameters:

```python
encoder.set_property('bitrate', bitrate)
if is_aarch64():
    encoder.set_property('preset-level', 1)
    encoder.set_property('insert-sps-pps', 1)
    encoder.set_property('bufapi-version', 1)
```
Their meanings, from `gst-inspect-1.0`:

- bitrate: Set bitrate for v4l2 encode. Unsigned Integer. Range: 0 - 4294967295. Default: 4000000;
- preset-level: HW preset level for encoder. Flags: readable, writable. Enum "GstV4L2VideoEncHwPreset". Default: 1, "UltraFastPreset":
  - (0): DisablePreset - Disable HW-Preset
  - (1): UltraFastPreset - UltraFastPreset for high perf
  - (2): FastPreset - FastPreset
  - (3): MediumPreset - MediumPreset
  - (4): SlowPreset - SlowPreset
- insert-sps-pps: Insert H.264 SPS, PPS at every IDR frame. Default: false;
- bufapi-version: Set to use new buf API. Default: false.
I am not especially familiar with these parameters; readers who know them well are welcome to leave a comment. For `bufapi-version`, the material I have found suggests it must be set to True when the encoder runs inside a DeepStream pipeline.
5. rtppay and udpsink
The job of `rtppay` is to "Make the payload-encode video into RTP packets." I am not entirely clear on this element either; interested readers can refer to this link.
For an explanation of RTP and UDP, I find this site quite good; if you are interested in the underlying concepts, it is worth a read. In short, RTP sits between the transport layer and the application layer, and uses UDP as its default transport protocol. UDP, the User Datagram Protocol, is a simple datagram-oriented transport-layer protocol: it provides no reliability, it simply sends the datagrams handed down from the application to the IP layer, with no guarantee they will reach their destination. Because UDP establishes no connection between client and server before transmitting, and has no timeout/retransmission machinery, it is very fast. In the TCP/IP protocol suite, RTP lives at the application layer, UDP at the transport layer, and IP at the network layer, and data is encapsulated and transmitted down through those layers in turn. This is why, in this pipeline, the data is first converted to I420 by `nvvideoconvert` and `capsfilter`, turned into a byte-stream by `nvv4l2h264enc`/`nvv4l2h265enc`, then packetized by rtppay and sent out through udpsink, finally surfacing as an RTSP address.
The relevant part of the code:
```python
if codec == "H264":
    rtppay = Gst.ElementFactory.make("rtph264pay", "rtppay")
    print("Creating H264 rtppay")
elif codec == "H265":
    rtppay = Gst.ElementFactory.make("rtph265pay", "rtppay")
    print("Creating H265 rtppay")
if not rtppay:
    sys.stderr.write(" Unable to create rtppay")
```
Finally, the UDP sink. The part of the code involved:
```python
# Make the UDP sink
updsink_port_num = 5400
sink = Gst.ElementFactory.make("udpsink", "udpsink")
if not sink:
    sys.stderr.write(" Unable to create udpsink")

sink.set_property('host', '224.224.255.255')
sink.set_property('port', updsink_port_num)
sink.set_property('async', False)
sink.set_property('sync', 1)
```
For the full documentation of `udpsink`, run `gst-inspect-1.0 udpsink` in the terminal.
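One piece the snippets above do not show is how those UDP packets become the rtsp://localhost:8554/ds-test address. The sample starts a GstRtspServer whose media factory pulls the RTP packets back in from udpsink's port via udpsrc. Paraphrased from the official code (`updsink_port_num` and `codec` are the variables defined above):

```python
import gi
gi.require_version('GstRtspServer', '1.0')
from gi.repository import GstRtspServer

rtsp_port_num = 8554

server = GstRtspServer.RTSPServer.new()
server.props.service = "%d" % rtsp_port_num
server.attach(None)

# The media factory launches a small receive pipeline that reads the RTP
# packets from udpsink's port and serves them to RTSP clients on /ds-test.
factory = GstRtspServer.RTSPMediaFactory.new()
factory.set_launch(
    '( udpsrc name=pay0 port=%d buffer-size=524288 '
    'caps="application/x-rtp, media=video, clock-rate=90000, '
    'encoding-name=(string)%s, payload=96 " )' % (updsink_port_num, codec))
factory.set_shared(True)
server.get_mount_points().add_factory("/ds-test", factory)

print("Launched RTSP Streaming at rtsp://localhost:%d/ds-test" % rtsp_port_num)
```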
The parameters that are actually most confusing are `sync` and `async`. Quoting the explanation from this site:
Gstreamer sets a timestamp for when a frame should be played, if sync=true it will block the pipeline and only play the frame after that time. This is useful for playing from a video file, or other non-live source. If you play a video file with sync=false it would play back as fast as it can be read and processed. Note that for a live source this doesn't matter, because you are only getting frames in at the capture rate of the camera anyway.

Use sync=true if:
- There is a human watching the output, e.g. movie playback

Use sync=false if:
- You are using a live source
- The pipeline is being post-processed, e.g. neural net

As to your other question, async=false tells the pipeline not to wait for a state change before continuing. Seems mostly useful for debugging.
So if low latency matters to us, both of these parameters should be set to False (note that the sample code above sets `sync` to 1).
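Following that advice, the tweak would be (my suggestion, not the sample's defaults):

```python
# Favor low latency over clock-accurate playback.
sink.set_property('sync', False)
sink.set_property('async', False)
```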
6. Reading the Video Stream with Flask
Finally, we read the video stream with Flask. The code is very simple, and I wrote it up in a separate post: Reading an RTSP Video Stream with Flask, a Very Simple Example. A minimal sketch of the idea follows; the result is the original video playing in the browser with the detection annotations overlaid.
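This is a sketch of my own (not the exact code from that post), assuming OpenCV is built with FFmpeg or GStreamer support so that `cv2.VideoCapture` can open RTSP URLs:

```python
import cv2
from flask import Flask, Response

app = Flask(__name__)
RTSP_URL = "rtsp://localhost:8554/ds-test"

def gen_frames():
    cap = cv2.VideoCapture(RTSP_URL)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, buf = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        # Multipart MJPEG: the browser replaces each frame as it arrives.
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + buf.tobytes() + b"\r\n")
    cap.release()

@app.route("/video")
def video():
    return Response(gen_frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Run it while the DeepStream pipeline is live, then open http://localhost:5000/video in a browser.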