SENSING Speech at NVIDIA GTC 2024

2024-04-01 18:41


NVIDIA GTC 2024, the world's premier developer conference in the field of artificial intelligence, was held at the San Jose Convention Center in San Jose, California, U.S.A., from March 18 to 21, 2024. It attracted a large number of industry leaders, technologists, and innovators from around the world, who gathered to discuss in depth the future of technologies such as AI, computing, and automotive intelligence.


At the recently concluded NVIDIA GTC 2024, SZ SENSING TECH Co., Ltd (hereinafter referred to as SENSING) was invited to participate as an elite partner of NVIDIA. Steven DING, co-founder of SENSING, delivered an online keynote speech on "Principles and Applications of Multi-Sensor Synchronization for Autonomous Machines", sharing SENSING's multi-camera synchronization technology and several of its typical applications in autonomous machines, and discussing AI technological innovations and future development trends with industry experts and representatives of technology innovation companies.



In this speech, Steven DING introduced the application of SENSING's multi-camera synchronization technology in autonomous machines. He explained in detail how SENSING provides customers with efficient multi-camera synchronization solutions through three typical cases: the unmanned delivery trolley, the industrial robot controller vision sensor solution upgrade, and the humanoid robot. He further described how SENSING's innovative multi-camera synchronization technology satisfies autonomous machines' demand for multi-camera combined sensing and precise synchronization of data from different cameras.

Multi-camera application for unmanned delivery trolleys

Unmanned delivery carts, a typical L4 autonomous driving scenario, commonly use the NVIDIA Jetson Orin platform as the core compute processor. The platform must simultaneously process information from a variety of sensors, such as lidar, cameras, GPS, and IMUs, in order to navigate autonomously, recognize traffic lights, and avoid obstacles.
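To make this fusion concrete, a minimal sketch of how messages from sensors running at different rates can be paired by timestamp is shown below. This is an illustrative example only, not SENSING's actual pipeline; the rates and tolerance are assumptions.

```python
# Hypothetical sketch: pairing messages from two sensors by timestamp.
# Rates and tolerance are illustrative, not taken from a real system.

def align_to_reference(ref_stamps, other_stamps, tolerance):
    """For each reference timestamp, find the closest timestamp in
    other_stamps; keep only pairs within `tolerance` seconds.
    Both inputs are assumed sorted ascending."""
    pairs = []
    j = 0
    for t in ref_stamps:
        # Advance j while the next candidate is at least as close to t.
        while j + 1 < len(other_stamps) and \
                abs(other_stamps[j + 1] - t) <= abs(other_stamps[j] - t):
            j += 1
        if abs(other_stamps[j] - t) <= tolerance:
            pairs.append((t, other_stamps[j]))
    return pairs

# Camera at 30 Hz, lidar at 10 Hz: match each lidar sweep to the
# nearest camera frame over one second of data.
camera = [i / 30.0 for i in range(30)]
lidar = [i / 10.0 for i in range(10)]
matches = align_to_reference(lidar, camera, tolerance=0.005)
```

With exact clock alignment every lidar sweep finds a frame; in practice the tolerance absorbs jitter, which is why hardware-synchronized exposure (discussed below in the delivery-cart case) makes downstream fusion far simpler.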

In unmanned vehicle applications, cameras with different functions have different requirements. Traffic light cameras must identify traffic lights, so the requirements for resolution, HDR, and auto white balance (AWB) are very high: the camera must accurately reproduce the contours and colors of traffic lights under different lighting conditions. Perception cameras are used to recognize the environment around the vehicle, so the requirements for HDR, contrast, and clarity are relatively high. Surround-view cameras are used for parallel driving; they must not only show the vehicle's surroundings clearly but also image clearly at night to ensure 24-hour operation, so the low-illumination requirements are very high.

To improve the accuracy of the perception algorithm, SENSING uses precise timing technology on the NVIDIA Jetson-based autonomous driving controller to synchronize the clocks of the entire system and achieve synchronized exposure across multiple cameras.


Industrial Robot Controller Solution Upgrade

With the development of AI, industrial robots need to connect multiple cameras at the same time when upgrading their algorithms, and they require synchronized camera exposure to complete photographing, identification, grasping, and other operations on items.

Since most traditional industrial robots use USB or Ethernet cameras, synchronized camera exposure cannot be achieved. To solve this problem, SENSING provides a complete multi-camera synchronization solution based on the NVIDIA Jetson Orin platform.

First, coaxial cameras (such as GMSL/RLINC/VbyOne/FPDLink) are connected to the Jetson Orin controller. Jetson Orin issues trigger signals to achieve precise synchronized exposure across the cameras, performs pre-processing such as stitching the multi-camera data, and then connects to the industrial robot's main controller through a single interface.
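The stitching step can be pictured with a small sketch. The model below is an assumption for illustration: frames are row-major lists of pixel rows, and same-height frames are concatenated side by side into one composite buffer, the way a real pipeline might build a combined image before handing it downstream.

```python
# Hypothetical sketch of the pre-processing step: stitch frames from
# several synchronized cameras side by side into one composite buffer.
# Frames are modeled as row-major nested lists; a real pipeline would
# operate on GPU image buffers instead.

def hstitch(frames):
    """Concatenate frames of equal height horizontally, row by row."""
    height = len(frames[0])
    assert all(len(f) == height for f in frames), "frame heights must match"
    return [sum((f[r] for f in frames), []) for r in range(height)]

cam0 = [[1, 1], [1, 1]]   # 2x2 frame from camera 0
cam1 = [[2, 2], [2, 2]]   # 2x2 frame from camera 1
composite = hstitch([cam0, cam1])   # one 2x4 composite frame
```

Presenting the robot's main controller with one stitched stream over one interface is what lets legacy controllers benefit from multiple synchronized cameras without changing their own input handling.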

In this way, multiple cameras can share edge computing resources, and Jetson Orin's handling of multi-camera synchronization solves the precise synchronized-exposure problem that the original USB cameras could not address.


Multi-sensor applications on humanoid robots

The humanoid robot, a current industry hot spot, is also a typical autonomous machine that depends heavily on cameras for perception and recognition. To work autonomously, the robot must accurately perceive its surrounding environment and targets, which places high demands on its cameras.

A common humanoid robot camera arrangement is roughly as follows: a binocular camera in the head, similar to human eyes; two cameras on the chest and the back, covering the robot's front and rear fields of view; and cameras at the joints that cover blind spots, better supporting target recognition and other operational processes.

In addition, the humanoid robot must perceive complex scenes while working. Precise positioning of targets requires computing depth information, which in turn requires the different cameras to be strictly synchronized.
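A short worked example shows why synchronization is critical for depth. For a stereo pair with baseline B and focal length f (in pixels), depth follows Z = f * B / d, where d is the disparity. The numbers below are illustrative assumptions, not measurements from a real robot.

```python
# Sketch: stereo depth and the error introduced by unsynchronized
# exposure. All parameter values are illustrative.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

f_px, B_m = 700.0, 0.06                 # 700 px focal length, 6 cm baseline
z_true = depth_from_disparity(f_px, B_m, 21.0)   # 21 px disparity -> 2.0 m

# If the two cameras expose 10 ms apart while the target moves laterally
# at 2 m/s, the target shifts 0.02 m between the two exposures. Projected
# at 2 m range, that is f * 0.02 / Z = 7 px of spurious disparity:
shift_px = f_px * 0.02 / z_true
z_bad = depth_from_disparity(f_px, B_m, 21.0 + shift_px)   # 1.5 m
```

A 10 ms exposure mismatch turns a 2.0 m measurement into 1.5 m in this toy setup, which is why strict multi-camera synchronization is a hard requirement rather than a refinement.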

In summary, humanoid robot cameras require long-distance transmission, highly stable transmission, and multi-camera time synchronization. SENSING's synchronization technology based on the NVIDIA Jetson platform, combined with multi-channel digital coaxial cameras (such as GMSL/RLINC/VbyOne/FPDLink), meets the needs of these machine vision applications well.


SENSING Launches Series of Products Based on NVIDIA Jetson Orin

As a leading global supplier of imaging and perception products and services, SENSING has focused on imaging and vision technologies since its founding, providing first-class services in the fields of intelligent driving, unmanned vehicles, autonomous machines, and vehicle-road coordination through excellent camera modules and solutions.

SENSING has been cooperating with NVIDIA for a long time. It has launched a series of development kits based on the NVIDIA Jetson Orin platform, including coaxial cameras (e.g., GMSL/RLINC/VbyOne/FPDLink), MIPI cameras, and EVS bionic event cameras.

