0) Obviously we can tweak the current shader algorithm, but before we do, I would like us to tackle item #1 below.
1) I would like to run a detailed benchmark of all our shader operations. For instance, I currently run a recursive approximation to the bilateral filter and a basic Laplacian edge detection. The filters have many hard-coded parameters (such as spatial extent), and I would like to know how the shader runtime depends on these parameters. I think this information would be generally useful to the class, and to Nvidia as well. (A sketch of the harness I have in mind appears after this list.)
2) We can do further work on flash integration. I have all of the framework needed to copy flash and non-flash images into toggled destinations, as we discussed on Sunday. However, we still need to work out the details of flash timing: at the moment the non-flash image carries lingering illumination from the flash shot. (One possible guard is sketched after this list.)
3) We can also think about stereo integration, even independently of flash integration. Here we need some idea of what stereo shots can actually achieve. Do you have any estimated results on depth calculations from stereo images (say, just by taking the difference of two images)? (A crude block-matching sketch follows the list.)
4) I have been interested in the NPR application to augmented reality. I may code up a virtual object that you can place on top of the viewfinder stream. (A minimal compositing sketch appears below.)
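
For item 1, here is a minimal sketch of the parameter-sweep harness I have in mind. The real filters run as shaders, so GPU timer queries would replace the CPU clock; the `boxBlurRow` stand-in and the image size are placeholders of mine, not our actual kernels. The point is just the structure: sweep the spatial-extent parameter, time each run, and log the pair.

```cpp
// Sketch of a parameter-sweep benchmark (hypothetical harness; the real
// filters run as shaders, so GPU timer queries would replace std::chrono).
#include <chrono>
#include <cstdio>
#include <vector>

// Stand-in for one shader operation: a horizontal box blur whose cost
// depends on the spatial-extent parameter we want to sweep.
static void boxBlurRow(const std::vector<float>& src, std::vector<float>& dst,
                       int w, int h, int radius) {
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int count = 0;
            for (int dx = -radius; dx <= radius; ++dx) {
                int xx = x + dx;
                if (xx >= 0 && xx < w) { sum += src[y * w + xx]; ++count; }
            }
            dst[y * w + x] = sum / count;
        }
    }
}

int main() {
    const int w = 640, h = 480;  // placeholder viewfinder resolution
    std::vector<float> src(w * h, 0.5f), dst(w * h);
    // Sweep the spatial extent and record runtime at each setting.
    for (int radius = 1; radius <= 32; radius *= 2) {
        auto t0 = std::chrono::steady_clock::now();
        boxBlurRow(src, dst, w, h, radius);
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::printf("radius=%2d  %.2f ms\n", radius, ms);
    }
}
```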
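For item 2, one possible guard against the lingering flash, assuming the capture framework lets us read back viewfinder frames and measure their brightness (the `Frame` type and `meanLuma` here are placeholders, not our actual API): skip post-flash frames until the mean luminance settles back to the pre-flash baseline, and only then grab the non-flash image.

```cpp
// Hypothetical guard against flash bleed-through: after the flash shot,
// skip frames until brightness returns to the pre-flash baseline.
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Frame { std::vector<uint8_t> luma; };  // placeholder frame type

// Mean luminance of a frame, used as a cheap "flash has decayed" signal.
static double meanLuma(const Frame& f) {
    double sum = 0.0;
    for (uint8_t v : f.luma) sum += v;
    return f.luma.empty() ? 0.0 : sum / f.luma.size();
}

// Returns the index of the first post-flash frame whose brightness is
// within `tol` of the pre-flash baseline; capture the non-flash image there.
static int firstSettledFrame(const std::vector<Frame>& frames,
                             double baseline, double tol) {
    for (std::size_t i = 0; i < frames.size(); ++i)
        if (std::fabs(meanLuma(frames[i]) - baseline) < tol)
            return static_cast<int>(i);
    return -1;  // flash never decayed within the window; widen the gap
}
```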
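For item 3, a pure difference of two images mostly measures parallax plus illumination change; a slightly stronger but still crude depth proxy is per-pixel block matching along the horizontal epipolar line, where the best shift acts as an inverse-depth estimate. A sketch, assuming rectified grayscale inputs (the window size and disparity range are arbitrary choices of mine):

```cpp
// Minimal block-matching sketch (assumes rectified grayscale pairs).
// For each pixel, find the horizontal shift minimizing the sum of absolute
// differences over a small window; the shift is a crude inverse-depth map.
#include <climits>
#include <cstdint>
#include <cstdlib>
#include <vector>

static std::vector<uint8_t> disparity(const std::vector<uint8_t>& left,
                                      const std::vector<uint8_t>& right,
                                      int w, int h, int win, int maxD) {
    std::vector<uint8_t> disp(w * h, 0);
    for (int y = win; y < h - win; ++y) {
        for (int x = win + maxD; x < w - win; ++x) {
            int bestD = 0, bestCost = INT_MAX;
            for (int d = 0; d <= maxD; ++d) {
                int cost = 0;  // SAD over the (2*win+1)^2 window
                for (int dy = -win; dy <= win; ++dy)
                    for (int dx = -win; dx <= win; ++dx)
                        cost += std::abs(left[(y + dy) * w + (x + dx)] -
                                         right[(y + dy) * w + (x + dx - d)]);
                if (cost < bestCost) { bestCost = cost; bestD = d; }
            }
            disp[y * w + x] = static_cast<uint8_t>(bestD);
        }
    }
    return disp;
}
```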
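For item 4, the simplest starting point would be alpha-compositing a rendered sprite over each viewfinder frame before the NPR shading runs. A sketch with a premultiplied-alpha sprite standing in for the virtual object (placement and bounds handling are left out for brevity):

```cpp
// Sketch of compositing a virtual object (a premultiplied-alpha sprite)
// over a viewfinder frame before NPR shading; the sprite and its placement
// are placeholders for a real rendered object. Assumes the sprite lies
// entirely inside the frame.
#include <cstdint>
#include <vector>

struct RGBA { uint8_t r, g, b, a; };

static void overlay(std::vector<RGBA>& frame, int fw,
                    const std::vector<RGBA>& sprite, int sw, int sh,
                    int ox, int oy) {
    for (int y = 0; y < sh; ++y) {
        for (int x = 0; x < sw; ++x) {
            const RGBA& s = sprite[y * sw + x];
            RGBA& d = frame[(oy + y) * fw + (ox + x)];
            // "over" blend, premultiplied alpha
            d.r = static_cast<uint8_t>(s.r + d.r * (255 - s.a) / 255);
            d.g = static_cast<uint8_t>(s.g + d.g * (255 - s.a) / 255);
            d.b = static_cast<uint8_t>(s.b + d.b * (255 - s.a) / 255);
        }
    }
}
```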