NVIDIA's TX1 SoC now comes on a credit-card sized module

NVIDIA's press release states that "Jetson TX1 is the first embedded computer designed to process deep neural networks -- computer software that can learn to recognize objects or interpret information." The 3.4 x 2 inch module includes a Tegra X1 SoC with a quad-core ARM Cortex-A57 CPU and 256-core NVIDIA Maxwell graphics, 4GB of LPDDR4 memory, 16GB of eMMC storage, 802.11ac WiFi and Bluetooth, and Gigabit Ethernet support.

AnandTech Article: http://www.anandtech.com/show/9779/nvidia-announces-jetson-tx1-tegr...

The Jetson TX1 Development Kit will be available for preorder starting Nov. 12 for $599 in the United States. The kit includes the Jetson TX1 module, a carrier board (pictured below), and a 5MP camera. The stand-alone module will be available in early 2016 (for $299 in bulk).

The Jetson TK1 (not TX1) was released in 2014 to encourage the development of products based on the Tegra K1 processor. However, according to AnandTech, developers were using the Jetson TK1 outright as a production board, choosing to focus on peripheral and software development instead of system hardware development. With the new TX1, all of the I/O connectivity is provided on a carrier board, enabling rapid development on the credit-card sized TX1 module. After development is finished, the TX1 module can be directly deployed in products, such as drones. 

NVIDIA used a drone application to promote the Jetson TX1

https://twitter.com/NVIDIATegra/status/664238535096926208


Comment by Jurgen Stelbrink on July 27, 2016 at 5:35am

Hi Randy, the design of the 6-camera interface board for the TX1 dev kit has been completed. It is currently in PCB fabrication. It has 6 CSI-2 camera connectors (15-pin, 2 lanes, 1 mm pitch). The pinout matches the Raspberry Pi's, so you can connect six B101 modules or six camera modules. Both Pi camera modules should work hardware-wise; they just need the driver to be developed or ported.

Will it work if all modules have the same I2C address? The answer is yes, it will. We took care of that. First, we use 3 I2C buses. Next, we use I2C address translation chips (LTC4316) to modify the I2C address of the second module on each bus. All I2C buses are level-translated from 1.8V (TX1) to 3.3V (CSI-2 connector).
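The address-translation scheme above can be sketched in a few lines of Python. The LTC4316 translates an address by XORing selected bits of the incoming 7-bit address with a pin-strapped mask; the mask value below and the use of the Raspberry Pi camera's 0x36 address are illustrative assumptions, not the board's actual strap configuration.

```python
def translate_addr(addr7, xor_mask):
    """Model of an I2C address translator that XORs the 7-bit address
    with a hardware-strapped mask (the mechanism the LTC4316 uses)."""
    if not 0 <= addr7 <= 0x7F:
        raise ValueError("7-bit I2C address expected")
    return addr7 ^ xor_mask

# Every Raspberry Pi camera module answers at the same address (0x36
# for the OV5647), so two identical modules would collide on one bus.
CAM_ADDR = 0x36
MASK = 0x40  # hypothetical strap value: flip bit 6 of the address

buses = {}
for bus in range(3):                         # three I2C buses on the board
    first = CAM_ADDR                         # module 1: wired directly
    second = translate_addr(CAM_ADDR, MASK)  # module 2: behind an LTC4316
    buses[bus] = (first, second)
```

With three buses and one translator per bus, six cameras with identical addresses become six unique (bus, address) targets.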

Power: each CSI-2 connector has 3.3V power, produced efficiently on-board by a 5V-to-3.3V DC/DC converter.

A quick update on the J130. The PCBs just came in. Next week we plan to build the first 10 prototypes.

Comment by Jurgen Stelbrink on August 4, 2016 at 3:03pm

Today we built the first batch of J130 carrier boards with two HDMI to CSI-2 bridges based on the TC358840 device. One is connected with 8 CSI-2 lanes for 2160p30 (4K x 2K) input.

Comment by Global Innovator on August 4, 2016 at 5:02pm

Thank you Jurgen,

could you explain

"Jetson TX1 is the first embedded computer designed to process deep neural networks -- computer software that can learn to recognize objects or interpret information."

what objects can be recognized and what information can be interpreted?

Do you mean 3D obstacle avoidance in 3D indoor mapped environments?

How to track TX1 to learn more about the implemented applications?

Comment by Dustin Franklin on August 8, 2016 at 2:15pm

Hello, you may be interested in the Deep Learning resources listed under the TX1 wiki:

http://elinux.org/Jetson_TX1#Deep_Learning

> what objects can be recognized and what information can be interpreted?

Objects or signals that are in your training dataset can be recognized.
For example, the popular GoogLeNet and AlexNet networks, found in the BVLC Model Zoo, come pretrained on the ImageNet database, which includes 1000 types of natural and man-made objects.  By customizing the database, you can get the network to recognize specific things for your application.
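The "recognized only if it's in your training set" point can be illustrated with a minimal top-1 classification sketch: a network emits one score (logit) per trained label, softmax turns them into probabilities, and the prediction is whichever of those labels scores highest. The 5-label list below is a made-up stand-in for ImageNet's 1000 classes.

```python
import math

# Hypothetical 5-class label set standing in for ImageNet's 1000 labels.
LABELS = ["drone", "bicycle", "dog", "traffic light", "coffee mug"]

def softmax(logits):
    """Convert raw network scores into probabilities that sum to 1."""
    m = max(logits)                     # subtract max for numeric stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top1(logits, labels=LABELS):
    """Return the highest-probability label and its confidence."""
    probs = softmax(logits)
    i = max(range(len(probs)), key=probs.__getitem__)
    return labels[i], probs[i]

label, confidence = top1([4.1, 0.2, 1.3, 0.5, -0.7])
```

An object absent from `LABELS` can never be predicted; the network will simply assign its probability mass to whatever trained classes look most similar, which is why the database customization mentioned above matters.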

> Do you mean 3D obstacle avoidance in 3D indoor mapped environments?

Sure, segmentation networks (e.g. SegNet) are becoming popular for obstacle detection.
Also, reinforcement learning for intuitive autonomous navigation (see rovernet).

> How to track TX1 to learn more about the implemented applications?

If you are interested in tracking, see this Object Detection tutorial in DIGITS 4.   DIGITS is an open-source interactive web interface released by NVIDIA for training Caffe/Torch networks.   Its current GitHub repo is located here.

Comment by Jurgen Stelbrink on September 7, 2016 at 11:34am

Today we built the first prototype of the 38216 debug board for the TX1. It plugs in between the TX1 module and the carrier board and makes certain interfaces accessible for debugging.


Developer
Comment by Randy on September 8, 2016 at 12:05am

Very nice Jurgen.

As a side note, I used your J120 on my EnRoute EX700 with a TX1 to do vision-based precision landing.

By the way, I'm not posting much on DIYDrones anymore.  I've updated the ArduPilot.org wiki (see Community section) to show where the other ArduPilot developers and I are these days, including Facebook and Gitter.

Comment by Dustin Franklin on September 14, 2016 at 6:44pm

NVIDIA has released JetPack 2.3, the latest update for Jetson TX1 including
upgrades to Ubuntu 16.04 aarch64, CUDA toolkit 8.0, and TensorRT with up to
twice the deep learning inference runtime performance and power efficiency.

See the Parallel ForAll article for the latest benchmarks and features in JetPack,
and visit this GitHub repo for examples of deploying realtime networks with TensorRT.
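The FP16 speedup mentioned above comes from storing weights and activations as 16-bit floats (half the memory traffic of FP32, at the cost of precision). This is a generic illustration using Python's built-in `struct` half-precision format, not TensorRT code: round-tripping a value through FP16 shows both the 2-byte footprint and the rounding error you accept in exchange.

```python
import struct

def to_fp16(x):
    """Round-trip a Python float through IEEE-754 half precision
    (struct format 'e'), mimicking FP16 weight storage."""
    return struct.unpack('e', struct.pack('e', x))[0]

w = 0.1234567            # a typical small network weight
w16 = to_fp16(w)         # nearest representable half-precision value
err = abs(w - w16)       # rounding error from the 10-bit mantissa

fp16_bytes = struct.calcsize('e')   # 2 bytes per value
fp32_bytes = struct.calcsize('f')   # 4 bytes per value
```

FP16's 10-bit mantissa keeps roughly 3 decimal digits, which is generally tolerable for inference (the trained network's outputs barely change) even though it would be too coarse for training gradients.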

Comment by Mike Isted on July 1, 2017 at 11:40pm
I'm currently using a Raspberry Pi 3 communicating with the Pixhawk using DroneKit, with OpenCV for feature recognition. I want to move towards the use of neural networks and consider the Pi somewhat limited for this purpose, despite its great flexibility.

The Jetson is the natural choice, but I'm not sure what software combination to use. Dronekit or ROS? Caffe or Tensorflow?

I need to retain the flexibility of my current setup in terms of the programmatic control that Dronekit provides, but just want to add the AI component.

All thoughts welcome.
Comment by Dustin Franklin on July 5, 2017 at 3:45pm

Hi Mike, Jetson comes with the JetPack BSP, which includes NVIDIA cuDNN and the TensorRT inference accelerator.  TensorRT uses graph optimizations, kernel fusion and architecture tuning, and half-precision FP16 to deploy networks with superior performance at runtime (inference).  It currently supports Caffe models along with a custom layer API.  And since cuDNN is included in JetPack, you can run deep learning frameworks on Jetson like Caffe, Torch, PyTorch, and TensorFlow.  See this wiki entry on deep learning install guides for Jetson.  You may also be interested in this GitHub tutorial, which includes training guides and vision primitives.

ROS support for aarch64 is also available, although I haven't heard of Dronekit integration yet.

Comment by Fnoop on December 20, 2017 at 10:31am

Hi, the J120 manual from last year says 'Support for the Raspberry Pi cameras is planned'.  Was this ever implemented?


© 2019   Created by Chris Anderson.
