- Push the fragment shader to a known location in the tablet.
- Declare shader via "setup_shader" on the push path; nv_set_attrib_by_name (?)
- Declare SharedBuffers sbuf_in, sbuf_out
- Initialize sbuf_in by sbuf_in->map
- glUseProgram(shader)
- sbuf_in->bindAsTexture2D()
- sbuf_out->makeCurrentSurface()
- glDrawArrays
- glFinish
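The steps above can be sketched roughly as follows. This is a sketch only: SharedBuffer, setup_shader, and nv_set_attrib_by_name are Nvidia/Tegra-specific names from the notes above, the SharedBuffer constructor arguments are my guess, and an EGL context is assumed to already be current on this thread.

```cpp
// Sketch of the "manual shader" dispatch sequence (platform-specific, not standalone).
GLuint shader = setup_shader("feature.frag");      // compile + link on the push path

SharedBuffer *sbuf_in  = new SharedBuffer(w, h);   // hypothetical constructor signature
SharedBuffer *sbuf_out = new SharedBuffer(w, h);

void *pixels = sbuf_in->map();                     // get a CPU pointer to the input buffer
// ... copy the input image data into 'pixels' here ...

glUseProgram(shader);
sbuf_in->bindAsTexture2D();                        // input buffer becomes a GL texture
sbuf_out->makeCurrentSurface();                    // output buffer becomes the render target

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);             // draw a full-screen quad
glFinish();                                        // block until the GPU has finished
```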
I've now integrated the above framework into the FCam main loop. However, I haven't yet worked out how to convert the FCam frame data into SharedBuffer memory. Nevertheless, I've learned a few things:
- Running the "manual" shader competes with the shader that is already in place for the Viewfinder. Thankfully, nothing really jarring happens: the viewfinder simply stops updating for the period during which you've hijacked the GPU, which is only a fraction of a second with the test shader I'm using ("feature.frag" from Nvidia).
- At the same time, I found that CameraView::onDrawFrame is doing basically all the steps in Java that I've now integrated into the C++ loop. Given that it is feasible to parametrize the shader at runtime, doing the image processing in this "Viewfinder shader" directly seems like the way to go.
- Importantly, Timo's example shows that it is possible to hook up two texture buffers to a single shader, to do some fusion-based image processing. We should aim to implement this in the Viewfinder shader.
- Yes, that is the most natural way to do things. Let's set up a cascade (if needed) of programmable shaders in the CameraView::onDrawFrame routine.
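As a sketch of what a two-input Viewfinder shader could look like, here is a minimal GLSL fragment shader that blends two bound textures. The sampler names and the blend uniform are my own invention, not taken from Timo's example:

```glsl
// Fusion sketch: two input textures blended by a CPU-controlled weight.
// Texture units 0 and 1 must be bound before drawing.
precision mediump float;

uniform sampler2D texA;   // e.g. the current frame
uniform sampler2D texB;   // e.g. a second exposure for fusion
uniform float weight;     // blend factor in [0,1], set from the CPU

varying vec2 vTexCoord;   // interpolated coordinates from the vertex shader

void main() {
    vec4 a = texture2D(texA, vTexCoord);
    vec4 b = texture2D(texB, vTexCoord);
    gl_FragColor = mix(a, b, weight);
}
```

On the CPU side, each sampler uniform has to be pointed at its texture unit with glUniform1i after the textures are bound via glActiveTexture/glBindTexture.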
What do the GLSL keywords mean?
- Uniform: read-only in the shader, constant over a draw call. Set from the CPU side as follows:
- int location = glGetUniformLocation(shaderIdx, "uniformName");
- glUniform4fv(location, 1, value);
- Attribute: per-vertex input data; read-only and available only in the vertex shader.
- Varying: interpolated data passed from the vertex shader to the fragment shader; read-only in the fragment shader.
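Putting the three keywords together, a minimal vertex/fragment pair might look like this (a sketch; the attribute, varying, and uniform names are arbitrary):

```glsl
// Vertex shader: 'attribute' inputs are per-vertex; 'varying' outputs
// are interpolated and handed to the fragment shader.
attribute vec4 aPosition;   // read-only, per-vertex, vertex shader only
attribute vec2 aTexCoord;
varying vec2 vTexCoord;     // written here, read (only) in the fragment shader

void main() {
    vTexCoord = aTexCoord;
    gl_Position = aPosition;
}
```

```glsl
// Fragment shader: 'uniform' values are constant per draw call and set
// from the CPU with glUniform*; the 'varying' arrives interpolated.
precision mediump float;
uniform vec4 tint;          // set via glGetUniformLocation + glUniform4fv
varying vec2 vTexCoord;

void main() {
    gl_FragColor = tint * vec4(vTexCoord, 0.0, 1.0);
}
```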