Hi,

It's quite common to use GStreamer for camera-related work across the Jetson series.
Do you have any working GStreamer pipeline snippet for the GMSL cameras, e.g., a camera preview display?

I tried to follow
https://docs.nvidia.com/jetson/l ... ated_gstreamer.html
> camera capture with gstreamer-1.0
but couldn't get a working pipeline.
If GStreamer can't be used with the GMSL cameras, is there a Python SDK available instead?
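
For reference, this is roughly what I tried, going through OpenCV's GStreamer backend; the device node and the UYVY format are assumptions on my part, adapted from the accelerated GStreamer docs:

# Minimal preview sketch via OpenCV's GStreamer backend; /dev/video0 and the
# UYVY format are assumptions -- substitute whatever the GMSL driver exposes.
import cv2

pipeline = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,format=UYVY,width=1280,height=720 ! "
    "videoconvert ! video/x-raw,format=BGR ! "
    "appsink drop=true max-buffers=1"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("failed to open GStreamer pipeline")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()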


17 replies in this thread; last reply on 2020-11-9 23:50

米米米 (Moderator) posted on 2020-10-16 09:24
Hi

This is a good question, but currently we only provide a native SDK; the GStreamer demo code is not ready yet.
The reason is that the Apex's GMSL solution is based on the MAX9286, which combines 4 video streams into 1 device.
That means you can't simply open the video device and read images: the data read from /dev/video* needs extra processing, which we do in the SDK.
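
To make "combined into one device" concrete, here is a rough sketch of the splitting step, assuming the deserializer tiles the four frames side by side; the real layout and pixel format are defined inside our SDK, so treat this as illustration only:

# Sketch: splitting one combined MAX9286 frame into four per-camera images.
# Assumes the four 1280x720 streams are tiled horizontally into one
# 5120x720 buffer; the real layout/format is whatever the SDK defines.
import numpy as np

CAM_W, CAM_H, NUM_CAMS = 1280, 720, 4

def split_combined_frame(combined: np.ndarray) -> list:
    # combined: CAM_H x (CAM_W * NUM_CAMS) x channels, as read from /dev/video*
    assert combined.shape[1] == CAM_W * NUM_CAMS
    return [combined[:, i * CAM_W:(i + 1) * CAM_W] for i in range(NUM_CAMS)]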

The SDK usage and sample code are described in the following document:
https://docs.miivii.com/product/ ... mon/05.EN_gmsl.html

But your suggestion is absolutely right; we will raise its priority and let you know the result (through the OTA service, which will go online in December).
youngtaek (Member) posted on 2020-10-16 10:28
In reply to 米米米 (2020-10-16 09:24):

Okay, I see.
I'm not sure how you do the post-processing in the SDK,
but its implementation matters a lot to me.
It would be better if you open-sourced the SDK.
The GStreamer pipeline matters a lot as well.
Thanks for taking care of this issue; I'd appreciate it very much if you could get it done as soon as possible.
米米米 (Moderator) posted on 2020-10-16 11:48
In reply to youngtaek (2020-10-16 10:28):

May I ask whether you need timestamp information in your GStreamer pipeline?

The reason we use an SDK is that it provides the timestamp of the moment the CMOS shutter was triggered, which cannot be obtained through the V4L2 stack.
If the application needs to synchronize all sensors, neither GStreamer nor V4L2 can do this.
So in the SDK, all the pre- and post-processing uses the low-level hardware APIs on Jetson to reduce CPU and GPU cost.

Supporting GStreamer would lose this feature, which is why it has not been supported.
Your use case will help us improve this.
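
To illustrate the difference: a plain GStreamer pipeline only exposes the buffer PTS that v4l2src assigns when the kernel hands over the frame, which is already downstream of the shutter trigger. A minimal sketch (the device node and caps are assumptions):

# Sketch: the timestamp a plain GStreamer pipeline exposes is the buffer PTS
# set from the kernel capture time, not the CMOS shutter trigger time.
# /dev/video0 and the UYVY caps are assumptions.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import GLib, Gst

Gst.init(None)
pipe = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! video/x-raw,format=UYVY ! "
    "appsink name=sink emit-signals=true max-buffers=1 drop=true"
)

def on_sample(sink):
    buf = sink.emit("pull-sample").get_buffer()
    print("buffer PTS (ns):", buf.pts)  # pipeline running time, not shutter time
    return Gst.FlowReturn.OK

pipe.get_by_name("sink").connect("new-sample", on_sample)
pipe.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()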
youngtaek (Member) posted on 2020-10-16 12:05
In reply to 米米米 (2020-10-16 11:48):

AFAIK it's possible to pass metadata, but I'm not sure, since I've never tried to implement it:
https://forums.developer.nvidia. ... ta-is-missing/77676
Doesn't the metadata mentioned there cover the timestamp you described?
A precise timestamp from the sensor/shutter helps a lot, since we take the sensor-to-controller delay into account when estimating the next actions.

Using the ISP/accelerators in the Xavier is essential for us: we already fully use the CPU and GPU, which makes additional image processing on them impossible.
(Until now we've used FLIR cameras, whose built-in ISP chip handled resizing and every other kind of processing the image needed.)

But I can't agree that supporting GStreamer would necessarily lose such features.
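
As a thought experiment, the source element (or a pad probe, as below) could attach the shutter time to each buffer as a GstReferenceTimestampMeta; everything in this sketch (the caps label, where the timestamp comes from) is hypothetical:

# Hypothetical sketch: attaching a shutter timestamp to buffers as
# GstReferenceTimestampMeta from a pad probe, so downstream elements could
# read it. The caps label and the timestamp source are made up.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
SHUTTER_CAPS = Gst.Caps.from_string("timestamp/x-shutter-trigger")

def read_shutter_time_ns():
    return 0  # placeholder: would have to come from the driver/SDK

def attach_shutter_meta(pad, info):
    buf = info.get_buffer()
    buf.add_reference_timestamp_meta(
        SHUTTER_CAPS, read_shutter_time_ns(), Gst.CLOCK_TIME_NONE)
    return Gst.PadProbeReturn.OK

# usage: srcpad.add_probe(Gst.PadProbeType.BUFFER, attach_shutter_meta)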
youngtaek (Member) posted on 2020-10-16 12:44
In reply to 米米米 (2020-10-16 11:48):

I also saw the comment in the SDK's header file.
It mentions that it uses the GPU for conversion.
Would it use the GPU even if the image is not resized?
Does it support pixel binning in the ISP?
米米米 (Moderator) posted on 2020-10-16 13:23
In reply to youngtaek (2020-10-16 12:44):

You are correct. It's not limited by GStreamer itself, but by the default source plugin.

Regarding the ISP: since we don't manufacture the camera module, we don't use the Xavier's ISP for these operations, not even resizing.
The CMOS raw data is processed by the ISP inside the camera module; that is the current interface between the camera-module vendor and us.

Since you know the Jetson platform in detail, that makes this discussion easy.
The SDK uses the low-level APIs in the Multimedia API package, which use the VIC (Video Image Compositor) to do resizing and conversion, to save time for less experienced users.

You can check the 12_camera_v4l2_cuda sample in NVIDIA's Multimedia API package for details.
Meanwhile, I will discuss the open-source question with the dev team.
Maybe we can release the code to you.
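
Just to make the trade-off concrete, here is roughly what the CPU-side fallback would look like in Python; this per-frame cost is what offloading to the VIC avoids (the UYVY input format is an assumption):

# For contrast: the CPU-side equivalent of the conversion/resize the SDK
# offloads to the VIC. Doing this per frame on the CPU is exactly the cost
# the SDK avoids. UYVY input is an assumption.
import cv2
import numpy as np

def cpu_convert_and_resize(uyvy: np.ndarray, out_w: int, out_h: int) -> np.ndarray:
    bgr = cv2.cvtColor(uyvy, cv2.COLOR_YUV2BGR_UYVY)  # burns CPU cycles
    return cv2.resize(bgr, (out_w, out_h), interpolation=cv2.INTER_LINEAR)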
youngtaek (Member) posted on 2020-10-16 13:52
In reply to 米米米 (2020-10-16 13:23):

Oh... now I'm confused.
I thought there was no ISP chip in the camera: the camera sends raw Bayer data directly, the deserializer turns it into a CSI stream,
and that way the ISP in the Xavier, which probably allows more control (though it depends), can be used.
Could you explain a bit more about the current GMSL camera pipeline in the Apex?

A somewhat different question: it seems I can't set anything manually, e.g., partial use of the sensor, exposure/gain control, white balance, gamma, etc.
How can I do that?


米米米 (Moderator) posted on 2020-10-16 14:16
In reply to youngtaek (2020-10-16 13:52):

Do you have the camera's datasheet? I will have Ricky send you a copy.
Currently all of the cameras output YUV data,
and the camera's parameters do not appear to be settable.

First, I think we should get the basic camera information in place.
Then we can introduce our camera vendor to you.
As for the camera functions, we can support them.
youngtaek (Member) posted on 2020-10-16 14:32
In reply to 米米米 (2020-10-16 14:16):

Okay, got it.

Sending YUV from the camera means it bypasses the ISP in the Xavier, right?
Does the image-format conversion (to the formats the SDK supports) rely on the CPU/GPU?

For the specific camera-function requirements, what I think is important is:
1) an ROI for white balance;
2) upper/lower limits for auto gain / exposure time.
There are many features we set manually on our existing cameras,
but those two are essential and affect the system critically. (A sketch of the kind of control interface I mean is below.)
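
Purely for illustration, this is the shape of the control interface I have in mind; every name here is hypothetical, and nothing like it exists in the current GMSL driver/SDK as far as I can tell:

# Purely illustrative: the shape of the manual-control API we'd want.
# Every name here is hypothetical -- nothing like this exists in the
# current GMSL driver/SDK as far as I know.
from dataclasses import dataclass

@dataclass
class CameraControls:
    # 1) ROI for white balance, as (x, y, width, height) in pixels
    wb_roi: tuple = (0, 0, 1280, 720)
    # 2) bounds the auto-gain/auto-exposure algorithms may not exceed
    max_analog_gain_db: float = 12.0
    min_exposure_us: int = 100
    max_exposure_us: int = 10000

def apply_controls(dev: str, ctrls: CameraControls) -> None:
    raise NotImplementedError("needs driver/SDK support")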