
NVIDIA's press release states that "Jetson TX1 is the first embedded computer designed to process deep neural networks -- computer software that can learn to recognize objects or interpret information." The 3.4 x 2 inch module includes a Tegra X1 SoC pairing an ARM Cortex-A57 CPU with 256-core NVIDIA Maxwell graphics, 4GB of LPDDR4 memory, 16GB of eMMC storage, 802.11ac WiFi and Bluetooth, and Gigabit Ethernet support.

AnandTech Article: http://www.anandtech.com/show/9779/nvidia-announces-jetson-tx1-tegra-x1-module-development-kit

The Jetson TX1 Development Kit will be available for preorder starting Nov. 12 for $599 in the United States. The kit includes the Jetson TX1 module, a carrier board (pictured below), and a 5MP camera. The stand-alone module will be available in early 2016 (for $299 in bulk).

[Image: Jetson TX1 Development Kit carrier board]

The Jetson TK1 (not TX1) was released in 2014 to encourage the development of products based on the Tegra K1 processor. However, according to AnandTech, developers were using the Jetson TK1 outright as a production board, choosing to focus on peripheral and software development instead of system hardware development. With the new TX1, all of the I/O connectivity is provided on a carrier board, enabling rapid development on the credit-card-sized TX1 module. After development is finished, the TX1 module can be deployed directly in products, such as drones.

NVIDIA used a drone application to promote the Jetson TX1

https://twitter.com/NVIDIATegra/status/664238535096926208



Comments

  • Hi, the J120 manual from last year says 'Support for the Raspberry Pi cameras is planned'.  Was this ever implemented?

  • Hi Mike, Jetson comes with the JetPack BSP, which includes NVIDIA cuDNN and the TensorRT inference accelerator.  TensorRT uses graph optimizations, kernel fusion, architecture tuning, and half-precision FP16 to deploy networks with superior runtime (inference) performance.  It currently supports Caffe models along with a custom layer API.  And since cuDNN is included in JetPack, you can run deep learning frameworks on Jetson like Caffe, Torch, PyTorch, and TensorFlow (a minimal sketch follows at the end of this comment).  See this wiki entry on deep learning install guides for Jetson.  You may also be interested in this GitHub tutorial, which includes training guides and vision primitives.

    ROS support for aarch64 is also available, although I haven't heard of DroneKit integration yet.

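    For example, a minimal sketch of running one of those frameworks on the TX1 (PyTorch here; the model choice, image path, and preprocessing values are illustrative assumptions, not part of JetPack itself):

        # Minimal sketch: classify one image with a pretrained ImageNet
        # network in PyTorch. Model and image path are placeholders.
        import torch
        from torchvision import models, transforms
        from PIL import Image

        model = models.alexnet(pretrained=True).eval()
        if torch.cuda.is_available():      # use the TX1's Maxwell GPU via cuDNN
            model = model.cuda()

        preprocess = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

        img = preprocess(Image.open('frame.jpg').convert('RGB')).unsqueeze(0)
        if torch.cuda.is_available():
            img = img.cuda()

        with torch.no_grad():
            scores = model(img)[0]
        print(scores.topk(5))              # top-5 ImageNet class indices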
  • I'm currently using a Raspberry Pi 3 communicating with the Pixhawk, using DroneKit and OpenCV for feature recognition. I want to move towards the use of neural networks and consider the Pi somewhat limited for this purpose, despite its great flexibility.

    The Jetson is the natural choice, but I'm not sure which software combination to use. DroneKit or ROS? Caffe or TensorFlow?

    I need to retain the flexibility of my current setup in terms of the programmatic control that DroneKit provides, but just want to add the AI component (a rough skeleton of the current setup follows below).

    All thoughts welcome.
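
    For concreteness, my current setup is roughly the skeleton below (the connection string, baud rate, and camera index are from my wiring; the recognition step is where I'd like the neural network to go):

        # Rough skeleton of the Pi 3 setup: DroneKit for vehicle control,
        # OpenCV for frames. The AI component would slot in where noted.
        import cv2
        from dronekit import connect

        vehicle = connect('/dev/ttyAMA0', wait_ready=True, baud=57600)
        cap = cv2.VideoCapture(0)

        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                # ...feature recognition today; a neural network would go here...
                print(vehicle.location.global_relative_frame, vehicle.mode.name)
        finally:
            cap.release()
            vehicle.close()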
  • NVIDIA has released JetPack 2.3, the latest update for the Jetson TX1, including upgrades to Ubuntu 16.04 aarch64, the CUDA 8.0 toolkit, and TensorRT, with up to twice the deep learning inference performance and power efficiency.

    See the Parallel Forall article for the latest benchmarks and features in JetPack, and visit this GitHub repo for examples of deploying real-time networks with TensorRT.

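    TensorRT itself is a C++ runtime on Jetson, but as a rough illustration of where the FP16 gain comes from, here is a half-precision timing sketch in PyTorch (the model and iteration count are arbitrary placeholders, not TensorRT code):

        # Time the same network in FP32 and FP16 on the GPU. This only
        # illustrates the half-precision idea; TensorRT adds graph
        # optimizations and kernel fusion on top of it.
        import time
        import torch
        from torchvision import models

        def bench(net, x, iters=50):
            with torch.no_grad():
                net(x)                      # warm-up
                torch.cuda.synchronize()
                t0 = time.time()
                for _ in range(iters):
                    net(x)
                torch.cuda.synchronize()
            return (time.time() - t0) / iters

        net = models.alexnet(pretrained=True).cuda().eval()
        x = torch.randn(1, 3, 224, 224, device='cuda')

        print('FP32: %.1f ms/frame' % (1000 * bench(net, x)))
        print('FP16: %.1f ms/frame' % (1000 * bench(net.half(), x.half())))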
  • Very nice Jurgen.

    As a side note, I used your J120 on my EnRoute EX700 with a TX1 in order to do vision-based precision landing.

    By the way, I'm not posting much on DIYDrones anymore.  I've updated the ArduPilot.org wiki (see the Community section) to show where the other ArduPilot developers and I are these days, including Facebook and Gitter.

  • Today we built the first prototype of the 38216 debug board for the TX1. It plugs in between the TX1 module and the carrier board and makes certain interfaces accessible for debugging.


  • Hello, you may be interested in the Deep Learning resources listed under the TX1 wiki:

    http://elinux.org/Jetson_TX1#Deep_Learning

    > what objects can be recognized and what information can be interpreted?

    Objects or signals that are in your training dataset can be recognized.
    For example, the popular GoogLeNet and AlexNet networks, found in the BVLC Model Zoo, come pretrained on the ImageNet database, which includes 1000 classes of natural and man-made objects.  By customizing the dataset, you can get the network to recognize things specific to your application (see the pycaffe sketch at the end of this comment).

    > Do you mean 3D obstacle avoidance in 3D indoor mapped environments?

    Sure, segmentation networks (e.g. SegNet) are becoming popular for obstacle detection.
    There is also reinforcement learning for intuitive autonomous navigation (see rovernet).

    > How to track TX1 to learn more about the implemented applications?

    If you are interested in tracking, see this Object Detection tutorial in DIGITS 4.   DIGITS is an open-source interactive web interface released by NVIDIA for training Caffe/Torch networks.   Its current GitHub repo is located here.
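
    To make the GoogLeNet example concrete, here is a minimal pycaffe sketch of classifying a frame with a Model Zoo network (the prototxt/caffemodel paths and the image file are placeholders for your own copies):

        # Classify one image with a BVLC Model Zoo network via pycaffe.
        import numpy as np
        import caffe

        caffe.set_mode_gpu()                    # TX1's integrated GPU
        net = caffe.Classifier(
            'deploy.prototxt', 'bvlc_googlenet.caffemodel',
            mean=np.array([104.0, 117.0, 123.0]),  # BGR ImageNet mean
            channel_swap=(2, 1, 0),                # RGB -> BGR
            raw_scale=255,
            image_dims=(224, 224))

        img = caffe.io.load_image('frame.jpg')     # HxWx3, RGB, [0, 1]
        probs = net.predict([img])[0]
        top5 = probs.argsort()[-5:][::-1]
        print(top5, probs[top5])                   # IDs from the 1000 ImageNet classes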

  • Thank you Jurgen,

    could you explain

    "Jetson TX1 is the first embedded computer designed to process deep neural networks -- computer software that can learn to recognize objects or interpret information."

    what objects can be recognized and what information can be interpreted?

    Do you mean 3D obstacle avoidance in 3D indoor mapped environments?

    How to track TX1 to learn more about the implemented applications?

  • Today we built up the first batch of J130 carrier boards with two HDMI-to-CSI-2 bridges based on the TC358840. One is connected with 8 CSI-2 lanes for 2160p30 (4K x 2K) input.


  • Hi Randy, the design of the 6-camera interface board for the TX1 dev kit board has been completed. It is currently in PCB fabrication. It has six CSI-2 camera connectors (15-pin, 2 lanes, 1mm pitch) with the Raspberry Pi pinout, so you can connect six B101 modules or six camera modules. Both Pi camera modules should work hardware-wise; they just need the driver to be developed or ported.

    Will it work if all modules have the same I2C address? The answer is yes, it will; we took care of that. First, we use three I2C buses. Next, we use I2C address translation chips (LTC4316) to modify the I2C address of the second module on each bus. All I2C buses are level-translated from 1.8V (TX1) to 3.3V (CSI-2 connector). A minimal bus-probe sketch follows at the end of this comment.

    Power: each CSI-2 connector has 3.3V power. This is produced efficiently on-board by a 5V to 3.3V DC/DC converter.

    A quick update on the J130. The PCBs just came in. Next week we plan to build the first 10 prototypes.

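    For anyone probing this from software, a minimal Python sketch might look like the following (the bus numbers and camera addresses are assumptions for illustration; check the device tree and camera datasheet for the real values):

        # Probe the three I2C buses for camera modules, where the LTC4316
        # shifts the second module on each bus to a translated address.
        import smbus

        CAM_ADDR = 0x36        # assumed native camera address
        XLATED_ADDR = 0x26     # assumed LTC4316-translated address
        BUSES = (0, 1, 2)      # assumed TX1 I2C bus numbers

        for busno in BUSES:
            bus = smbus.SMBus(busno)
            for addr in (CAM_ADDR, XLATED_ADDR):
                try:
                    bus.read_byte(addr)   # ACK test, like i2cdetect
                    print("bus %d: device at 0x%02x" % (busno, addr))
                except IOError:
                    print("bus %d: nothing at 0x%02x" % (busno, addr))
            bus.close()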
