Being able to see what the autopilot is thinking in real time is a valuable tool for tuning autopilot behaviour. FPV is normally used for this, but I can't use it on my aircraft. A good alternative is to run minimOSD on the ground driving a video monitor. I have had a few flights with this setup and it works very nicely. minimOSD is amazing, but still limited in what it can do.
The Raspberry Pi is very cheap, has a composite output and a good graphics accelerator. I need the composite video for the pirateeye monocle I use; a laptop with an HDMI->composite converter is unreliable and gives very poor quality. To get a reasonable frame rate (~20fps), the OpenGL ES2 3D graphics capability of the RPi needs to be enabled. The amazing pi3d python library was the obvious choice. It also runs on an x86 linux machine running X, so development is quick.
It took a bit of optimising to get reasonable performance. The resulting python module is a functional proof of concept, but it is still some way from finished. At 20fps it consumes 70% of the processor time. That leaves plenty of capacity for drawing all of the status items, and some capacity for drawing more 3D objects.
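As a back-of-the-envelope check on that headroom claim, the figures above can be turned into a per-frame time budget. This is just arithmetic on the numbers quoted in the post (20fps target, ~70% CPU), assuming the 70% is spent evenly across frames:

```python
# Rough per-frame time budget from the figures quoted above.
# Assumptions: 20 fps target, ~70% of processor time used by the HUD loop.
TARGET_FPS = 20
CPU_FRACTION = 0.70  # fraction of processor time consumed at 20 fps

frame_time_ms = 1000.0 / TARGET_FPS      # time available per frame: 50 ms
used_ms = frame_time_ms * CPU_FRACTION   # time spent drawing: ~35 ms
headroom_ms = frame_time_ms - used_ms    # spare time per frame: ~15 ms

print(f"frame budget: {frame_time_ms:.0f} ms")
print(f"used: {used_ms:.0f} ms, headroom: {headroom_ms:.0f} ms")
```

So there are roughly 15 ms spare per frame for extra status items and 3D objects before the frame rate would start to drop.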
I do have my own plans to add to the raspiHUD. What would you put on the display? Proximity warnings with 3d position? Ground elevation map? Local weather conditions?
With some advanced fiddling, the RPi might be encouraged to put its camera output in the background, at which point it would become a proper (but large and heavy) in-flight FPV HUD. I don't plan to do this myself.
The pirateeye monocle from paya technology. HUD flying without the need for a buddy.