Would someone knowledgeable care to comment on this Rohm video encoder IC, which claims to "integrate the industry's first fog reduction function along with a unique, hardware-based Adaptive Image Enhancer (AIE) that provides real-time image correction for significantly improved visibility in dark, harshly backlit, or unclear/foggy environments."

In particular, the "before" and "after" pics just seem too good to be true...

I cannot think of an algorithm or process where so much lost detail could be restored.  Does anyone know something more about what they're doing here, and has anyone adapted this to their UAV platform for clearer video on cloudy or foggy days?


Replies to This Discussion

It definitely won't pull out "missing" detail.

If you look closely at the image, reducing contrast and increasing the green colour would easily give you the "corrected" image. I guess the benefit is it does this in real time for FPV flying.

I would think it's pushing it a bit to say it works well in dark or foggy environments; it makes the image more "viewable", but not really a better image.

Looks interesting though.

I would think incorporating IR data might allow you to "see" through fog and in dark environments, but I don't think this does that.
Mmhmm, the fog picture seems strange, to say the least.
It doesn't fade the way a real fog image does; it's very uniform instead. Strange that you can see the far background. Strange that the background does not change between the two pictures. Strange that the fog is so sharply delimited.
The cabin interior is strange as well.

If you try to improve contrast on a low-contrast picture, the result is far from this one.

Could it be simply a reduced contrast image (the fog one)?

Is it really possible to recreate contrast so well when it isn't present at all?

Best regards,
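[Editor's note: the question above about the fog being a simple contrast reduction is worth testing. The standard atmospheric scattering model treats fog as exactly that: a per-pixel blend of the true scene value toward a uniform "airlight" colour. Here is a minimal Python sketch; the transmission factor t and pixel values are invented illustration numbers, not anything from Rohm's documentation.]

```python
def add_fog(pixel, airlight=255, t=0.2):
    """Atmospheric scattering model: observed = J*t + A*(1-t).
    t is the transmission: t=1 means no fog, t -> 0 means thick fog."""
    return round(pixel * t + airlight * (1 - t))

def remove_fog(observed, airlight=255, t=0.2):
    """Invert the model: J = (observed - A*(1-t)) / t.
    Quantisation means the recovered value is only approximate."""
    return round((observed - airlight * (1 - t)) / t)

# With 80% fog (t=0.2), a dark pixel (20) and a bright one (200)
# end up only 36 shades apart: most of the range is "airlight".
foggy_dark = add_fog(20)     # -> 208
foggy_bright = add_fog(200)  # -> 244
```

Under this model, "fog reduction" is just inverting the blend, which explains both why a reduced-contrast demo image can pass for fog and why the inversion cannot restore detail that quantisation has already destroyed.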

A dedicated chip for auto levels for $12? It can't even do HD. Maybe if it did unsharp mask, image stacking, & stabilization.
Page 3 of the spec sheet for the BU6521KV has a block diagram, which reveals that the image data is first fed to a "fog reduction function" before being fed to the AIE (Adaptive Image Enhancer) function. So the "fog reduction" function is a separate block, not just standard image enhancement. You'll also note the BU6520KV (as opposed to the 21KV) on the same page, which has AIE but lacks "fog reduction." This supports my assumption that the "fog reduction" is more than just standard AIE techniques.

Now, the question is, does anyone have a hunch as to what this function may be doing? As I said, it's more than just AIE. Wow... I just noticed how redundant I'm being by repeating myself.
I think the demo image is intentionally so ludicrously small that it tricks our minds to fill in the blanks and make us think that the IC actually does miracles. Reality is going to look worse.

Even if we assume a whole 8 bits per color channel ("True Color") - which we know we never pull out of our FPV sets - the quality would not be anywhere near what the image shows. A fog that obscures 50% of the incoming light effectively reduces the available shade space to half, so instead of 8 bits (256 shades) we end up with 7 bits (128 shades). The fog in the image looks like 80% fog (verified in 'shop). That leaves us with only about 51 shades after removing the fog and stretching the contrast back to full range, i.e. crappy colors and banding.
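[Editor's note: the shade-count arithmetic above is easy to sanity-check. A minimal sketch in pure Python; the 50% and 80% fog fractions are the ones estimated in the post.]

```python
def remaining_shades(total_shades, fog_fraction):
    """After fog obscures fog_fraction of the incoming light, only
    the remaining fraction of the shade range survives quantisation.
    Stretching the contrast back to full range cannot reinvent the
    lost levels; it just spreads the survivors out (banding)."""
    return round(total_shades * (1 - fog_fraction))

# 50% fog halves an 8-bit channel (256 -> 128 shades);
# 80% fog leaves roughly 51 shades per channel.
```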

But it will bring out the details that would otherwise be difficult to spot. And it will also amplify all the CMOS, RF, and other noise.

Riccardo, the IC could be doing some kind of a nonlinear reduction, e.g. reducing the middle of the image a little bit more than the sides to make the colors uniformish. But the demo image is faked with a simple contrast reduction, you're right about that.
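[Editor's note: the "nonlinear reduction" idea above can be sketched with a spatially varying blend, where the middle of the frame is fogged harder than the edges. The linear falloff profile and the strength values here are invented for illustration; nothing in Rohm's datasheet specifies this.]

```python
def fog_strength(x, width, center_strength=0.8, edge_strength=0.5):
    """Interpolate fog strength linearly from the frame edge to its
    centre, so the middle of the image is blended harder."""
    # distance from centre, normalised to 0 (centre) .. 1 (edge)
    d = abs(x - (width - 1) / 2) / ((width - 1) / 2)
    return center_strength * (1 - d) + edge_strength * d

def foggy_pixel(pixel, x, width, airlight=255):
    """Blend one pixel toward the airlight by its local fog strength."""
    s = fog_strength(x, width)
    return round(pixel * (1 - s) + airlight * s)

# Centre pixel (x=5 of an 11-wide row) gets 80% fog; edge gets 50%.
```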

This I guess is a little bit closer to reality, with real fog and all.

Thank you for the great example. That's exactly what I meant and what I would expect from a reworked picture.
Better contrast where you can honestly have it (the foreground), and no reinventing of what does not exist in the original.

I'm curious to see how it works on video too. A still picture is one thing; video (through goggles) is another story. As an FPV pilot I would like to see how it works in real time. But now I feel like I'm holding up the discussion. Please let it go further; I'm very interested in whether this is really possible or only a dream.

Okay guys, let's not assume we're using FPV sets or some low-res video camera. In other words, let's not impose artificial limitations in order to try and attack the technology; but rather, let's investigate the technology itself.

Check out this video demonstration. Note the segments with the cars, and how the only thing left is a "swirl" of fog. It looks too good to be true, but Rohm isn't some fly-by-night company. They're big, established, and have an excellent industry reputation. Very, very curious...
I've placed an order for two (2) BU6521KV video encoder chips. Both Future and Digikey are out of stock, but the Rohm representative has assured me they have plenty in stock and would supply them to distributors as needed. I intend to use the first one for destructive testing - that's where you inadvertently hook something up backward and blow out the chip. The second one is to do the actual testing on, once I correct my inevitable mistake.
I've done some more research and now I'm leaning somewhat closer to "possible" than before. My statements about the loss of color definition stand, but it looks like I overestimated their effects. Pics look fine even with just 20 shades per channel. I've written a simple GLSL shader that selects areas with low contrast and increases it. The resolution is not that great (e.g. the lamp post on the left gets bloomed out because it's smaller than the 64x64 contrast kernel), but it does "reveal" details previously lost to the eye.
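[Editor's note: for anyone who wants to try the same idea without GLSL, here is a rough Python equivalent of a block-wise local contrast stretch. The 1-D scanline, grayscale input, and small block size are simplifications for illustration, not Martin's actual shader.]

```python
def stretch_block(block, lo_out=0, hi_out=255):
    """Stretch one block's values to the full output range.
    Low-contrast blocks get amplified the most - along with any
    sensor noise they contain."""
    lo, hi = min(block), max(block)
    if hi == lo:
        return [lo_out] * len(block)  # flat block: nothing to reveal
    scale = (hi_out - lo_out) / (hi - lo)
    return [round(lo_out + (v - lo) * scale) for v in block]

def local_contrast(row, block_size=8):
    """Apply the stretch independently to each block of a grayscale
    scanline. A real kernel would be 2-D (e.g. 64x64), which is why
    features smaller than the kernel can bloom out."""
    out = []
    for i in range(0, len(row), block_size):
        out.extend(stretch_block(row[i:i + block_size]))
    return out
```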

I'd like to see an image coming from that IC that's larger than 80 pixels though. I mean, what is this, the 19th century?
What I am seeing is a loss of detail and clarity. Look at the trees on the left by the power pole. In the unedited picture you can actually see the individual branches; the same goes for the line in the middle of the road. Can't say the same for the picture on the right. I do agree it brightens things up, but there is no doubt a loss of detail.

Neat find.

That might have been caused by my crude filtering. But yeah, the idea of these methods is to enhance the details subjectively, to make them more visible to humans. Even if the cost is a real loss of information.
That was a pretty impressive little test you did there, Martin. I imagine if both images get stitched back together using some HDRI (high dynamic range imaging) software, you might end up with the best of both worlds (the details that were lost in conversion, as well as the details that were revealed in conversion).
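[Editor's note: a crude stand-in for that stitching idea is a per-pixel weighted average of the original and enhanced images, weighting each value by how close it is to mid-gray (i.e. how well exposed it is). Real HDRI/exposure-fusion software does this multi-scale with smooth weight maps; this is only a toy sketch.]

```python
def fuse_pixels(original, enhanced, mid=128):
    """Blend two versions of the same pixel, favouring whichever
    value is better exposed (closer to mid-gray, where detail is
    most visible)."""
    w_orig = mid - abs(original - mid)  # weight in 0..mid
    w_enh = mid - abs(enhanced - mid)
    total = w_orig + w_enh
    if total == 0:
        return (original + enhanced) // 2  # both fully black/white
    return round((original * w_orig + enhanced * w_enh) / total)

# A near-blown-out enhanced value (250) gets little weight, so the
# fused result stays close to the original (100).
```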

I still think this video demonstration of their chip is pretty impressive, especially the parts where the various vehicles are coming toward you on a mountain road. You can sometimes see wisps of fog in the enhanced image where before there was a thick cloud of it. I want to do that! It would be a great way of lifting the cloud of fog that frequently surrounds me.

PS - The last one with someone in a lecture hall sort of puzzles me... there must have been some heavy-duty smoking to get the room that cloudy!



© 2018   Created by Chris Anderson.