
Low level graphics and you

Name: Anonymous 2012-04-18 17:21

After a long time of meditating on OpenGL I've come down with a bad case of "I don't know how to do anything and only use tools made by someone better than me". Since the only cure for this common programmer illness is more meditating I've decided I want to learn graphics on an even lower level than OpenGL, though finding a guide for this route has proven to be nigh impossible.

I'd appreciate it if a guru could give me a friendly pointer in the right direction or an insight on the things that take place between an OpenGL call and a pixel being displayed onscreen.

At the end of my journey I desire the ability of displaying but a single pixel without the aid of instruments given to me by the masters.

Name: Anonymous 2012-04-21 12:38

Wow, this thread has been amazingly useless to you. Sorry OP; now I remember why I don't come around here anymore.

To the people telling you to just use OpenGL or that you have to go through video card drivers: shut up. You're missing the point. Of course a software renderer is going to be 1000x slower than doing it in hardware. The point is just to learn how it works.

OP, what it sounds like you want to implement is a software triangle rasterizer. First, get a good grasp of linear algebra; you need a good understanding of how vectors and matrices are used in 3D graphics:

http://www.wildbunny.co.uk/blog/vector-maths-a-primer-for-games-programmers/

With this you should understand how to take a triangle mesh and transform the triangles to on-screen coordinates. From there you need to rasterize the triangles. Here's a not entirely terrible article on doing this:

http://joshbeam.com/articles/triangle_rasterization/

You basically calculate a bounding box for the triangle, and then calculate the span of each row to determine what pixels are covered by the triangle.

To sample the color of each pixel from a texture, you need to interpolate between the UV coordinates of the vertices. The way to do this is with barycentric coordinates: you express the pixel's xy position as a weighted sum of the vertices' xy positions, and then reuse those same weights to combine the vertices' texture coordinates into the pixel's texture position. I can't find a good reference for this, but here's the Wikipedia article on it; it gives the formula explicitly for a conversion to barycentric coordinates in 2D:

http://en.wikipedia.org/wiki/Barycentric_coordinate_system_%28mathematics%29

That's really all there is to it. Add a depth buffer and you should be able to render any old textured triangle mesh.
