
https://devblogs.nvidia.com/parallelforall/jetson-tx2-delivers-twice-intelligence-edge/

NVIDIA is pleased to announce Jetson TX2, the world's preeminent embedded computing platform for deploying deep learning, computer vision, and advanced artificial intelligence solutions to edge devices in the field. Jetson TX2 delivers server-grade performance that fits in the palm of your hand and runs on less than 7.5W of power.

Driven by an integrated NVIDIA Pascal GPU delivering more than 1 TFLOP/s of performance and a hex-core CPU complex (dual-core NVIDIA Denver 2 plus quad-core ARM Cortex-A57) with 8GB of 128-bit LPDDR4, Jetson TX2 includes user-tunable energy profiles (Max-Q and Max-P) and is built from the ground up for ultimate compute efficiency. Jetson TX2 is packed with hardware multimedia engines for streaming high-bandwidth data and 4K inputs/outputs, including up to six simultaneous MIPI CSI cameras, Image Signal Processors (ISPs), video encoders and decoders supporting H.264/H.265/VP8/VP9, and additional high-speed I/O like PCIe and USB 3.0.
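
As a quick sanity check of the integrated GPU, here is a minimal CUDA sketch (my own, not part of the original announcement) that queries the Pascal GPU's compute capability, SM count, and memory using the standard CUDA runtime API; the sm_62 build flag is my assumption for the TX2's GPU architecture.

```cuda
// Minimal sketch: enumerate the integrated Pascal GPU from a CUDA program.
// Assumed build line (CUDA 8.0 from JetPack):
//   nvcc -arch=sm_62 query_gpu.cu -o query_gpu
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA-capable device found\n");
        return 1;
    }

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // device 0 is the integrated GPU on Jetson

    printf("Device            : %s\n", prop.name);
    printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    printf("Multiprocessors   : %d\n", prop.multiProcessorCount);
    printf("Global memory     : %.1f GB (shared with the CPU on Tegra)\n",
           prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```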

NVIDIA JetPack 3.0 AI SDK provides comprehensive software support out of the box, including CUDA Toolkit 8.0, cuDNN 5.1, TensorRT, Linux kernel 4.4, and Ubuntu 16.04 LTS. Jetson TX2 is the ideal platform for deploying deep learning frameworks like Caffe, Torch, TensorFlow, and others into an embedded environment.
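
To confirm which JetPack components ended up on the board, a small sketch like the one below (again mine, not part of JetPack) can print the CUDA runtime, driver, and cuDNN versions via the standard cudaRuntimeGetVersion, cudaDriverGetVersion, and cudnnGetVersion calls; the build line assumes the default JetPack install locations for the cuDNN headers and library.

```cuda
// Minimal sketch: verify the JetPack-installed CUDA runtime and cuDNN versions.
// Assumed build line:
//   nvcc check_versions.cu -lcudnn -o check_versions
#include <cstdio>
#include <cuda_runtime.h>
#include <cudnn.h>

int main()
{
    int runtimeVersion = 0, driverVersion = 0;
    cudaRuntimeGetVersion(&runtimeVersion);   // e.g. 8000 for CUDA 8.0
    cudaDriverGetVersion(&driverVersion);

    printf("CUDA runtime: %d.%d\n", runtimeVersion / 1000, (runtimeVersion % 100) / 10);
    printf("CUDA driver : %d.%d\n", driverVersion / 1000, (driverVersion % 100) / 10);
    printf("cuDNN       : %zu\n", cudnnGetVersion());   // e.g. 5110 for cuDNN 5.1.10
    return 0;
}
```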

Developers can get started deploying AI with NVIDIA's Two Days to a Demo, a tutorial on GitHub that provides neural network models and deep learning vision primitives like image recognition, object detection, and segmentation, and teaches the workflow for re-training the models with the NVIDIA DIGITS interactive training system. New to Two Days to a Demo for TX2 are segmentation models geared for drones and an aerial training dataset to encourage development of autonomous flight control systems. To learn more, see my in-depth technical Parallel Forall blog on the NVIDIA site.
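
To give a flavor of the kind of GPU vision primitive these inference demos rely on, here is an illustrative CUDA sketch (not taken from Two Days to a Demo) of a common preprocessing step: converting a packed RGBA camera image into the planar, mean-subtracted float tensor a Caffe-style classifier expects. The 224x224 input size and the mean values are illustrative assumptions.

```cuda
// Illustrative sketch: pack uchar4 RGBA pixels into planar float CHW
// with per-channel mean subtraction, a typical pre-inference step.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void rgbaToPlanarFloat(const uchar4* in, float* out,
                                  int width, int height, float3 mean)
{
    const int x = blockIdx.x * blockDim.x + threadIdx.x;
    const int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    const uchar4 px  = in[y * width + x];
    const int    n   = width * height;
    const int    idx = y * width + x;

    out[0 * n + idx] = px.x - mean.x;   // first plane
    out[1 * n + idx] = px.y - mean.y;   // second plane
    out[2 * n + idx] = px.z - mean.z;   // third plane
}

int main()
{
    const int width = 224, height = 224;   // assumed network input size
    uchar4* d_in  = nullptr;
    float*  d_out = nullptr;
    cudaMalloc(&d_in,  width * height * sizeof(uchar4));
    cudaMalloc(&d_out, 3 * width * height * sizeof(float));
    cudaMemset(d_in, 128, width * height * sizeof(uchar4));   // dummy grey image

    const dim3 block(16, 16);
    const dim3 grid((width + block.x - 1) / block.x,
                    (height + block.y - 1) / block.y);
    rgbaToPlanarFloat<<<grid, block>>>(d_in, d_out, width, height,
                                       make_float3(104.f, 117.f, 123.f));
    cudaDeviceSynchronize();
    printf("preprocess kernel: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```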


Jetson TX2 is available as a developer kit and as a module. The module is $399 in 1K quantities, and the Jetson TX2 Developer Kit is $599, or $299 with the Jetson Educational Discount for those at academic institutions. In addition, the price of the Jetson TX1 Developer Kit has been reduced to $499. Preorders are available now, with shipments to North America and Europe beginning March 14.

It has been an exciting time at NVIDIA experiencing the vast increases in compute horsepower, key to the recent resurgence in artificial intelligence, being put at the fingertips of developers everywhere. With Jetson comes the ability to deploy advanced deep learning capabilities into embedded environments, onboard remote edge nodes. We hope you'll join us to begin developing your own AI-powered smart devices and computer vision solutions. See our full blog post with the details to learn more.


Comments

  • Really nice upgrade to the TX1, Dustin, pretty much twice the performance (per watt, anyway) and finally with the Pascal architecture.

    I would also like to see wider availability of open tools to work with this board, but I do understand it is a very different architecture from conventional microcontrollers and is going to require some serious software to be developed for it.

    I do think your price for the development board is quite reasonable especially considering the performance offered.

    Best Regards,

    Gary

  • They are very close.  Here's the TX1/TX2 Interface Migration Guide.

    I sent a pre-release TX2 to Jurgen from Auvidea before the launch.  It checked out OK on the J120.  There was a patch to the device tree; however, it's just a file you can copy into the /boot directory.

  • @JB I think they are physically and binary identical.

  • Overall I liked the TX1. But is the TX2 backwards compatible with TX1 boards like the J120 from Auvidea, or will it need a new carrier?

  • MR60

    I personally do not like, first, the price, which puts the board out of reach for most, and second, a design that is not a fully integrated, usable SBC. In addition, if a dev has to learn NVIDIA specifics rather than the Linux distros they are used to, no major adoption of this product will happen.

    Sell a TX2 for under $100 and that would be a guaranteed revolution.

  • Hi Jerry, personally I've noticed it is sometimes helpful to take the module off the devkit and get it onto a mini carrier; it gets the wheels turning, and then I can get it off my desk and use it practically anywhere. Last night ConnectTech launched their new ultra-compact Sprocket carrier for only $99, woohoo!


  • Hi Dustin, thanks for the links. I am still interested in developing on this board; compared to Qualcomm, which is a total black box, NVIDIA has done excellent documentation. I used to develop on TI SoCs, and that level of open documentation is what brought the popularity of the BeagleBone, PandaBoard, etc. I have noticed the latest u-boot and kernel, but some distro-cooking tutorials or samples would help a lot; that's how startup tech companies evaluate the possibility of product development. The proprietary nature of silicon companies is everywhere, and I don't mean to be critical; on the contrary, NVIDIA is doing well.

    This year I will put more energy into embedded SDR rather than UAV stuff, and the TX1 is the best candidate.

  • Also, we provide all the u-boot and Linux kernel sources, in addition to the several-thousand-page Technical Reference Manuals (TRMs), which document the Tegra registers for custom support.  In the SoC industry it's typical for NDA/NRE to be required for that level of info, if it's available at all; however, anyone can access it for Jetson instantly.

  • Debian devs have upstream Debian on TX1:  http://elinux.org/Jetson_TX1#Linux_Distributions

    And e-con Systems has expertise in porting Android.  We don't purposely block anyone and have also contributed to Tegra Nouveau.  If you want the full support of the deep learning stack, including cuDNN, TensorRT, etc., then yes, JetPack-L4T and Ubuntu are recommended; however, others have been able to port them to ARM-portable Linux distros.  Others have tested additional distros on TK1, and the install recipes are quite similar across Tegra.

  • It's ironic that after 10+ months I still haven't seen a buildroot project targeting the TX1, and this fancy board is gathering dust on my desk. This year I am very interested to see if some folks could port some Python math libs or VOLK to it. Writing code for CUDA is too much work to put in for a freelancer or a weekend hack...

    And NVIDIA is purposely blocking you from running a clean Android on it; if AOSP had this target, I would have built a VR headset to play with it a lot...
