deepstream python
Git repo: https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
Overview:
Python accesses the DeepStream C libraries through Python bindings (pyds); the bindings are built with the third-party library pybind11.
pybind11 is a lightweight header-only library that exposes C++ types in Python.
Environment setup:
First install and build the required dependency libraries; see: deepstream_python_apps/bindings at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub
Building the bindings:
deepstream_python_apps/bindings at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub
The main steps are:
3.1.1 Quick build (x86-ubuntu-20.04 | python 3.8 | Deepstream 6.1)
cd deepstream_python_apps/bindings
mkdir build
cd build
cmake ..
make
4.1 Installing the pip wheel
apt install libgirepository1.0-dev libcairo2-dev
pip3 install ./pyds-1.1.2-py3-none*.whl
Test steps:
python3 deepstream_test_1.py ../../../../samples/streams/sample_720p.h264
runtime_source_add_delete
This application demonstrates how to:
* Add and delete sources at runtime.
* Use a uridecodebin so that any type of input (e.g. RTSP/File), any GStreamer supported container format, and any codec can be used as input.
* Configure the stream-muxer to generate a batch of frames and infer on the batch for better resource utilization.
* Configure the tracker (referred to as nvtracker in this sample) using the config file dstest_tracker_config.txt.
def add_sources(data):
    ...
    # Once the maximum number of sources is reached, start deleting sources.
    if g_num_sources == MAX_NUM_SOURCES:
        GObject.timeout_add_seconds(10, delete_sources, g_source_bin_list)
        return False  # returning False stops this timeout callback from repeating
    ...

def main():
    ...
    GObject.timeout_add_seconds(10, add_sources, g_source_bin_list)  # add one source every 10 seconds
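The timeout-driven add/delete flow above can be sketched as a plain-Python simulation (a sketch only: `MAX_NUM_SOURCES` and the return-value convention follow the sample, but `run_timeout` here is a hypothetical stand-in for the GLib main loop so the control flow runs without GStreamer installed):

```python
# Minimal simulation of the sample's timeout-driven add/delete logic.
# GObject.timeout_add_seconds reschedules a callback while it returns True
# and stops once it returns False; run_timeout mimics that contract.

MAX_NUM_SOURCES = 4

def add_sources(state):
    """Add one source per tick; stop once the maximum is reached."""
    state["num_sources"] += 1
    state["source_bins"].append("source-bin-%02d" % state["num_sources"])
    if state["num_sources"] == MAX_NUM_SOURCES:
        # In the real app this is where delete_sources gets scheduled:
        # GObject.timeout_add_seconds(10, delete_sources, g_source_bin_list)
        return False
    return True

def delete_sources(state):
    """Remove one source per tick; stop when none are left."""
    if state["source_bins"]:
        state["source_bins"].pop()
        state["num_sources"] -= 1
    return state["num_sources"] > 0

def run_timeout(callback, state, max_ticks=100):
    """Stand-in for the GLib main loop: call until callback returns False."""
    for _ in range(max_ticks):
        if not callback(state):
            break

state = {"num_sources": 0, "source_bins": []}
run_timeout(add_sources, state)     # adds sources until MAX_NUM_SOURCES
assert state["num_sources"] == MAX_NUM_SOURCES
run_timeout(delete_sources, state)  # then removes them one by one
print(state["num_sources"])
```

The key detail it illustrates is the GLib convention: a timeout callback that returns True keeps firing, and returning False cancels it, which is how the sample switches from adding to deleting sources.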
deepstream-imagedata-multistream
The purpose of this example:
* Access imagedata in a multistream source
* Modify the images in-place. Changes made to the buffer will reflect in the downstream but
color format, resolution and numpy transpose operations are not permitted.
* Make a copy of the image, modify it and save to a file. These changes are made on the copy
of the image and will not be seen downstream.
* Extract the stream metadata, imagedata, which contains useful information about the
frames in the batched buffer.
* Annotating detected objects within certain confidence interval
* Use OpenCV to draw bboxes on the image and save it to file.
* Use multiple sources in the pipeline.
* Use a uridecodebin so that any type of input (e.g. RTSP/File), any GStreamer
supported container format, and any codec can be used as input.
* Configure the stream-muxer to generate a batch of frames and infer on the
batch for better resource utilization.
The highlighted items are what is unique to this example; the rest also appear in the other samples. In summary: 1. Modify the source buffer in place, e.g. draw on it with OpenCV, but the buffer's color format, resolution, etc. cannot be changed. 2. Make a copy of the source buffer, then modify and save it; this does not affect the source buffer.
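The in-place vs. copy semantics can be illustrated with a plain NumPy array standing in for the frame buffer (a sketch under assumptions: in the real sample the array is obtained from the batched buffer via pyds and drawn on with OpenCV; here a dummy array replaces both):

```python
import numpy as np

# A dummy 4x4 RGBA "frame" standing in for the buffer-backed array.
frame = np.zeros((4, 4, 4), dtype=np.uint8)

# 1. In-place modification: writing into the same memory is what the
#    downstream elements see (the sample does this with cv2.rectangle).
frame[0, 0] = (255, 0, 0, 255)

# 2. Copy, then modify: edits to the copy never reach the source buffer,
#    so they are safe to save to a file without affecting the pipeline.
frame_copy = np.array(frame, copy=True, order='C')
frame_copy[1, 1] = (0, 255, 0, 255)

print(frame[1, 1])       # still all zeros: the copy's edit did not propagate
print(frame_copy[0, 0])  # carries the in-place edit made before the copy
```

This mirrors the sample's two rules: draws on the mapped array propagate downstream, while anything done to a copied array stays local to the copy.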