
NVIDIA's press release states that "Jetson TX1 is the first embedded computer designed to process deep neural networks -- computer software that can learn to recognize objects or interpret information." The 3.4 x 2 inch module includes a Tegra X1 ARM Cortex-A57 processor with 256-core NVIDIA Maxwell graphics, 4GB of LPDDR4 memory, 16GB of eMMC storage, 802.11ac WiFi and Bluetooth, and Gigabit Ethernet support.

AnandTech Article: http://www.anandtech.com/show/9779/nvidia-announces-jetson-tx1-tegra-x1-module-development-kit

The Jetson TX1 Development Kit will be available for preorder starting Nov. 12 for $599 in the United States. The kit includes the Jetson TX1 module, a carrier board (pictured below), and a 5MP camera. The stand-alone module will be available in early 2016 (for $299 in bulk).


The Jetson TK1 (not TX1) was released in 2014 to encourage the development of products based on the Tegra K1 processor. However, according to AnandTech, developers were using the Jetson TK1 outright as a production board, choosing to focus on peripheral and software development instead of system hardware development. With the new TX1, all of the I/O connectivity is provided on a carrier board, enabling rapid development on the credit-card sized TX1 module. After development is finished, the TX1 module can be directly deployed in products, such as drones. 

NVIDIA used a drone application to promote the Jetson TX1

https://twitter.com/NVIDIATegra/status/664238535096926208



Comments

  • Hi JB,

    The Jetson TK1 is on a special promo for Makezine readers (but open to everybody).

    http://diydrones.com/profiles/blogs/jetson-tk1-promo-offer

    Just follow the link in the Blog post and use the promo code: MAKEJTK1 

    They are on back order, but at that price it's hard to beat: develop on it now, then maybe move to the TX1 later.

    I looked at the Parallella too, cool board, and possibly more versatile, but for vision-based apps the array of GPU cores in the TK1 / TX1 is just what I am looking for, and I believe the firmware development system is a lot more mature and usable.

    Best,

    Gary

  • Yeah...

    For sure, that's a development platform, so it comes with a lot of software and probably a year of support from the brains that conceived this gem.

    But it is still a bargain compared to other ADAS development platforms, like this one based on a Xilinx Zynq-7000:

    http://www.logicbricks.com/Products/logiADAK.aspx


  • @ Patrick, I was thinking the same when NVIDIA announced the DRIVE PX dev kit (in March?) ... bummer, it was about $10,000.

    http://blogs.nvidia.com/blog/2015/03/16/live-gtc/

  • While you guys are trying to integrate the Jetson with peripherals and stuff, I'll be flying with my fully integrated and functional autopilot that I just ordered from my car dealer!!!


    NVIDIA DRIVE PX Auto-Pilot Development Platform:

    "The DRIVE PX platform supports up to twelve camera inputs, and each Tegra X1 processor on the platform can access the data from the twelve onboard cameras and process it in real-time. Assuming each of these twelve cameras is a 1 Megapixel (1280x800) HD camera outputting at 30 fps, DRIVE PX will have to process 360 Mega-pixels per second of total video data. Since DRIVE PX has the ability to process 1.3 Gigapixels per second, it is capable of handling even higher resolution cameras outputting at higher frame rates. For computer vision-based applications, having higher resolution camera data at higher frame rates allows for faster and more accurate detection of objects in the incoming video streams."

    (A quick sanity check of that throughput arithmetic appears after the source link below.)

    Source: http://international.download.nvidia.com/pdf/tegra/Tegra-X1-whitepaper-v1.0.pdf
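
    A quick back-of-the-envelope check of those numbers (a minimal Python sketch; the camera count, resolution, frame rate and the 1.3 Gigapixel/s budget are all taken from the whitepaper excerpt above):

    ```python
    # Rough sanity check of the DRIVE PX throughput figures quoted above.
    cameras = 12
    width, height = 1280, 800     # roughly 1 megapixel per frame
    fps = 30

    pixels_per_second = cameras * width * height * fps
    print(f"Input load: {pixels_per_second / 1e6:.0f} MP/s")   # ~369 MP/s; the quote rounds to 360

    capacity = 1.3e9              # claimed processing budget, pixels per second
    print(f"Headroom: {capacity / pixels_per_second:.1f}x")    # ~3.5x spare for higher-res or faster cameras
    ```
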
  • +1

    There is no comparison between a generic ARM CPU with a slow GPU and a 256-core latest-generation (Maxwell) GPU with optimized memory buses, etc. And there is also a decent generic ARM core on there just to manage the OS and such.

    GPU architecture is exactly what you want for computer vision applications, since the high-bandwidth parallelism and matrix/vector-based math are a perfect fit.

  • The Odroid XU4 GPU has 6 cores @ 142 GFLOPS; the X1 has 256 cores @ 1 TFLOPS and twice the memory bandwidth - it's not even in the same ballpark.  The X1 handles 4K 60 fps video codecs in hardware.  It's the same size as the XU4, more power efficient, and costs $299.  NVIDIA has extensive, mature libraries and development environments, and tons of people out there who know how to program it.  It's perfectly suited to SLAM, collision avoidance, CV, etc. - all the things we need to take the next step in UAV autonomy.  This could be a complete game changer.

  • In fact, using a GPU for imaging essentially runs it in reverse of what it normally does to create on-screen graphics, so it should be well suited to the task.

  • Hey Gary

    Which TK1 did you manage to pick up for $99 from a previous post - was it the Jetson or the Percepto? Can you link it please? I'm keen on trying one out for that price! ;-)

    Have you seen the Parallella board? It is interesting for parallel processing using their Epiphany multicore, and it also has a Zynq FPGA and dual ARM core for Linux, all at a reasonable cost on a credit-card sized board. With the Parallella there's also an option for SDR integration, so it could handle both imaging and comms. PM me for more if you're interested. ;-)

    -

    @ Hugues

    I'm a fan of Odroid devices and have quite a few of the U3, C3, XU4 and even RasPi 1s and 2s. Even used them in the OBC etc. But as Gary said, parallel processing is better for computer vision, at least IMHO, because it's possible to divvy up each image and process smaller image segments on multiple cores at the same time, resulting in near real-time processing and control outputs (there's a small sketch of the idea at the end of this comment). Doing this on multiple pieces of hardware to save cost is even more complicated, not to mention that using a SOM (system on module) means you don't have to carry unnecessary cables, plugs and sockets for LAN, USB, etc., which would decrease useful payload. Besides, I'd rather spend my money on better imaging hardware than extra batteries so I can lift multiple SBCs. ;-)

    Regards

    JB
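
    To make the divide-and-conquer point above concrete, here is a minimal sketch (plain Python with numpy and multiprocessing, not TX1-specific code; the strip count and the threshold "filter" are just illustrative assumptions) of splitting a frame into strips and processing them on several cores at once:

    ```python
    # Illustrative only: split a frame into horizontal strips and let several
    # worker processes filter them in parallel. On a GPU like the TX1 the same
    # idea is pushed much further, with one thread per pixel rather than one
    # process per strip.
    import numpy as np
    from multiprocessing import Pool

    def process_strip(strip):
        # Stand-in per-pixel work: a simple brightness threshold.
        return (strip > 128).astype(np.uint8) * 255

    def process_frame(frame, workers=4):
        strips = np.array_split(frame, workers, axis=0)   # divvy up the image
        with Pool(workers) as pool:
            results = pool.map(process_strip, strips)     # strips processed concurrently
        return np.vstack(results)                         # stitch the result back together

    if __name__ == "__main__":
        frame = np.random.randint(0, 256, (800, 1280), dtype=np.uint8)  # fake ~1 MP grayscale frame
        print(process_frame(frame).shape)
    ```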

  • Sounds overpriced compared to a Qualcomm DragonBoard 410, which looks very competitive. I wonder if anyone has tried the DragonBoard 410 on a drone setup?
  • It seems the real ability of drones to avoid hitting things will depend on computing power.  The video of the flying wing avoiding trees was great, but obviously the distance of recognition will need to improve 4 to 10 times and the area of vision will need to include 360 degrees.  Then we will have very capable beyond visual line of sight machines.  In Canada the only option I've heard of is to operate your own radar station to ensure other aircraft won't hit your drone.  Cheap cameras and super computers will be a much cheaper solution once this finally matures.
