Sunday, March 11, 2012

Figures and FG/BG-selective filtering

Some figures for the write-up:


The next two figures show the results of flash/no-flash integration (still heavily hacked, since hardware support for timing the flash isn't available in the current FCam release). The alternating flash/no-flash shots are used to separate the foreground (FG) from the background (BG); the filter is then applied only to the FG or the BG.
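As a rough illustration of the FG/BG separation idea (not the actual shader code), a foreground mask can be built from the brightness difference of a flash/no-flash pair; the threshold value below is an arbitrary placeholder:

```python
import numpy as np

def fg_mask(flash, noflash, thresh=0.1):
    """Pixels brightened noticeably by the flash are taken as foreground:
    flash falloff leaves the distant background mostly unchanged.
    `thresh` is a placeholder value, not a tuned parameter."""
    diff = flash.astype(np.float32) - noflash.astype(np.float32)
    return diff.mean(axis=-1) > thresh  # boolean FG mask

def apply_selective(filtered, original, mask):
    """Composite: filtered pixels on the FG, untouched pixels on the BG."""
    out = original.copy()
    out[mask] = filtered[mask]
    return out
```

Swapping `mask` for `~mask` gives the BG-selective variant.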



Some more figures:

The remaining figure will be the performance measurements.

Monday, March 5, 2012

Live flash/no-flash integration

We implemented a very hacked version of flash/no-flash integration. The difficulty is in maintaining proper synchronization between successive flash and no-flash shots.

Nevertheless, with the hacked flash-toggle implementation, a joint bilateral filter was set up. It would also be interesting to do foreground/background detection from flash/no-flash difference images. This technique enables many neat effects, e.g. cartoonization applied only to the foreground (or background), synthetic blurring of the background, etc.
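For reference, here is a brute-force CPU sketch of the joint (cross) bilateral filter on a grayscale image: the range weights come from a guide image (e.g. the sharp flash shot) rather than from the image being filtered, so the noisy no-flash image gets smoothed along the guide's edges. The radius and sigma values are illustrative, not the parameters used in the shader:

```python
import numpy as np

def joint_bilateral(image, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Joint bilateral filter (grayscale, brute force).
    Spatial weights as usual; range weights computed from `guide`."""
    h, w = image.shape
    pad = np.pad(image, radius, mode='edge')
    gpad = np.pad(guide, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gpatch = gpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights penalize guide-value differences, so the kernel
            # does not average across the guide's edges.
            rng = np.exp(-((gpatch - guide[y, x])**2) / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```

The O(radius²) inner loop is exactly what the recursive approximation in the shader avoids.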

[March 6th]: Alas, Nvidia just confirmed that both methods of flash control are lacking: (1) FireAction does not support timing in the current FCam implementation; (2) TorchAction is completely asynchronous, so it is not really a means of getting an alternating flash/no-flash stream.

If/when Nvidia releases the stereo camera, we might try shuffling the two stereo cams into the two SharedBuffers.

Friday, March 2, 2012

More pictures (3)

My walk to Stanford campus this morning. Will take more shots later when the sun is in a more favorable spot relative to the Memorial Church:








From the Hoover tower:





Thursday, March 1, 2012

More pictures (2)

Was inspired to take a short walk outside today:












Random stuff inside:







Wednesday, February 29, 2012

More pictures

Pictures of just the bilateral filter (two passes) -- no edges:




The bilateral filter result is beautiful, but the edge-detection algorithm really needs to be improved.

Made a slight tweak to the edge-detection so that we don't use hard thresholding:
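A soft threshold along the lines of GLSL's smoothstep is one way to avoid a hard cutoff; the edge values below are placeholders, not the parameters actually used:

```python
def smoothstep(e0, e1, x):
    """GLSL-style smoothstep: 0 below e0, 1 above e1,
    with a smooth Hermite ramp in between (instead of a hard step)."""
    t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)
```

Applied to the edge magnitude, weak responses fade out gradually rather than flickering across a single threshold.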


Comparison of stylized and non-stylized images:





Continued agenda

From my email:

0) Obviously we can tweak the current shader algorithm. Before we do this, I would like us to do item #1 below.

1) I would like to run a detailed benchmark on all shader operations. For instance, I run a recursive approximation to the bilateral filter and a basic Laplacian edge detection. The filters have many hard-coded parameters (such as spatial extent), and I would like to know how the shader runtime depends on them. I think this information would be generally useful to the class and to Nvidia.

2) We can further work on flash integration. I have all of the necessary framework to copy flash and no-flash images into toggled destinations, as we discussed on Sunday. However, we need to work out the details of flash timing: I find that the no-flash image retains lingering flash from the preceding flash shot.

3) We can also think about stereo integration, even independently of flash integration. Here, we need some idea of what we can achieve with stereo shots. Do you have any estimated results on depth calculations from stereo images (say, just by taking the difference of two images)?
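As a sketch of the simplest approach beyond a plain image difference, disparity (inversely proportional to depth) could be estimated per scanline with naive block matching; the window size and disparity range here are arbitrary, and this is only a CPU toy, not a proposed implementation:

```python
import numpy as np

def disparity_1d(left, right, patch=3, max_disp=8):
    """Naive block matching along scanlines: for each pixel in the left
    image, find the horizontal shift into the right image minimising the
    sum of absolute differences (SAD). Depth ~ focal_length * baseline / disparity."""
    h, w = left.shape
    r = patch // 2
    lp = np.pad(left, r, mode='edge')
    rp = np.pad(right, r, mode='edge')
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            lpatch = lp[y:y + patch, x:x + patch]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                rpatch = rp[y:y + patch, x - d:x - d + patch]
                sad = np.abs(lpatch - rpatch).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

The per-pixel search is embarrassingly parallel, so a shader version would map onto the GPU naturally.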

4) I have been interested in the NPR application to augmented reality. I may code up a virtual object that you can put on top of the viewfinder stream.