Hey /prog/, I'm writing a cross-platform OpenGL application which does some off-screen rendering. I've found two ways of doing that: pbuffers and FBOs, of which the latter is preferable. I want old video cards to be supported too.
Now tell me, which technique should I use? Do older video cards support FBOs? Do they support pbuffers?
Thanks.
Name:
Anonymous 2012-09-06 15:20
FBO is supported on any video card made in the last 12 or so years, I'd guess. Unless the drivers are really old too.
Depends how old you mean, really. I wouldn't use pbuffers personally.
>>3
I don't think the pain of pbuffers over FBOs is worth it just to support people who haven't updated their graphics drivers in 10 years.
Name:
Anonymous 2012-09-06 18:46
FBOs are supported on anything that does GL 2.0 or above (and of course on a few cards below that level). I wish the GL and D3D folks would give up on that shit, join forces and make a single glorious API.
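One way to act on this at runtime is to probe the extension string before touching any FBO entry points, and only fall back to pbuffers when GL_EXT_framebuffer_object is absent. A sketch in C; `has_extension` is an illustrative helper (not part of GL), and the GL calls appear only in the comment since they need a live context:

```c
#include <stddef.h>
#include <string.h>

/* Return 1 if `name` appears as a whole space-separated token in the
 * extension string returned by glGetString(GL_EXTENSIONS), else 0.
 * A plain strstr() is not enough: "GL_EXT_framebuffer_object" is a
 * prefix of "GL_EXT_framebuffer_object_blit". */
int has_extension(const char *extensions, const char *name)
{
    const char *p;
    size_t len;

    if (extensions == NULL || name == NULL || name[0] == '\0')
        return 0;
    len = strlen(name);
    p = extensions;
    while ((p = strstr(p, name)) != NULL) {
        int starts_ok = (p == extensions) || (p[-1] == ' ');
        int ends_ok   = (p[len] == ' ') || (p[len] == '\0');
        if (starts_ok && ends_ok)
            return 1;
        p += len;
    }
    return 0;
}

/* With a live context (not compiled here), the decision would look like:
 *   const char *ext = (const char *)glGetString(GL_EXTENSIONS);
 *   if (has_extension(ext, "GL_EXT_framebuffer_object"))
 *       ...render into an FBO...
 *   else
 *       ...fall back to a GLX 1.3 pbuffer...
 */
```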
Name:
Anonymous 2012-09-06 21:42
Better tell me why SDL_Surface.pixels has a different channel order on different OSes. Why not just RGBA? Is it that hard to make a consistent API?
Name:
Anonymous 2012-09-07 4:35
>>10
Because different graphics adapters and operating systems prefer different endianness.
Also, forget about SDL, for it is garbage (most notably the SDL_main retardation, and a very poorly designed video subsystem even in SDL 2). You really should use each platform's preferred windowing toolkit (Xlib, Windows API, Cocoa) plus OpenGL.
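Concretely: when a 32-bit pixel is kept as one packed integer, which memory byte each channel lands in depends on the host's endianness, which is why SDL reports per-platform channel masks instead of promising one fixed byte order. A small endian-neutral C illustration (`pack_rgba` and `red_byte_offset` are illustrative helpers, not SDL API):

```c
#include <stdint.h>
#include <string.h>

/* Pack R,G,B,A into a single 32-bit value, logically 0xRRGGBBAA. */
uint32_t pack_rgba(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    return ((uint32_t)r << 24) | ((uint32_t)g << 16) |
           ((uint32_t)b << 8)  | (uint32_t)a;
}

/* Find which in-memory byte the red channel actually lands in:
 * 0 on a big-endian host, 3 on a little-endian one. */
int red_byte_offset(void)
{
    uint32_t px = pack_rgba(0xAA, 0x00, 0x00, 0x00);
    uint8_t bytes[4];
    int i;

    memcpy(bytes, &px, 4);
    for (i = 0; i < 4; i++)
        if (bytes[i] == 0xAA)
            return i;
    return -1;
}
```

So the same packed value reads back as R,G,B,A on one machine and A,B,G,R on another, and an API that hands you raw pixel memory has to expose that somehow.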
Name:
Anonymous 2012-09-07 5:21
>>11 Because different graphics adapters and operating systems prefer different endianness.
What retards designed them? RGBA should be the standard, like in SVGA.
You really should use each platform's preferred windowing toolkit (Xlib, Windows API, Cocoa) and OpenGL.
These ain't portable and they're hard to use. I want a dumbed-down API which gives me a pointer to the framebuffer + keyboard scancodes, just like in the good old DOS times with 0A000h.
I think DOS had the best API possible. No stupid memory protection, windows or toolkits, just raw machine power.
#include <stdio.h>
#include <GL/glx.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include <X11/extensions/xf86vmode.h>
/* stuff about our window grouped together */
typedef struct {
Display *dpy;
int screen;
Window win;
GLXContext ctx;
XSetWindowAttributes attr;
Bool fs;
Bool doubleBuffered;
XF86VidModeModeInfo deskMode;
int x, y;
unsigned int width, height;
unsigned int depth;
} GLWindow;
/* attributes for a single buffered visual in RGBA format with at least
* 4 bits per color and a 16 bit depth buffer */
static int attrListSgl[] = {GLX_RGBA, GLX_RED_SIZE, 4,
GLX_GREEN_SIZE, 4,
GLX_BLUE_SIZE, 4,
GLX_DEPTH_SIZE, 16,
None};
/* attributes for a double buffered visual in RGBA format with at least
* 4 bits per color and a 16 bit depth buffer */
static int attrListDbl[] = { GLX_RGBA, GLX_DOUBLEBUFFER,
GLX_RED_SIZE, 4,
GLX_GREEN_SIZE, 4,
GLX_BLUE_SIZE, 4,
GLX_DEPTH_SIZE, 16,
None };
GLWindow GLWin;
/* function called when our window is resized (should only happen in window mode) */
void resizeGLScene(unsigned int width, unsigned int height)
{
if (height == 0) /* Prevent A Divide By Zero If The Window Is Too Small */
height = 1;
glViewport(0, 0, width, height); /* Reset The Current Viewport And Perspective Transformation */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, (GLfloat)width / (GLfloat)height, 0.1f, 100.0f);
glMatrixMode(GL_MODELVIEW);
}
/* general OpenGL initialization function */
int initGL(GLvoid)
{
glShadeModel(GL_SMOOTH);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClearDepth(1.0f);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
/* we use resizeGLScene once to set up our initial perspective */
resizeGLScene(GLWin.width, GLWin.height);
glFlush();
return True;
}
/* Here goes our drawing code */
int drawGLScene(GLvoid)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
if (GLWin.doubleBuffered)
{
glXSwapBuffers(GLWin.dpy, GLWin.win);
}
return True;
}
/* function to release/destroy our resources and restoring the old desktop */
GLvoid killGLWindow(GLvoid)
{
if (GLWin.ctx)
{
if (!glXMakeCurrent(GLWin.dpy, None, NULL))
{
printf("Could not release drawing context.\n");
}
glXDestroyContext(GLWin.dpy, GLWin.ctx);
GLWin.ctx = NULL;
}
/* switch back to original desktop resolution if we were in fs */
if (GLWin.fs)
{
XF86VidModeSwitchToMode(GLWin.dpy, GLWin.screen, &GLWin.deskMode);
XF86VidModeSetViewPort(GLWin.dpy, GLWin.screen, 0, 0);
}
XCloseDisplay(GLWin.dpy);
}
/* this function creates our window and sets it up properly */
/* FIXME: bits is currently unused */
Bool createGLWindow(char* title, int width, int height, int bits,
Bool fullscreenflag)
{
XVisualInfo *vi;
Colormap cmap;
int dpyWidth, dpyHeight;
int i;
int glxMajorVersion, glxMinorVersion;
int vidModeMajorVersion, vidModeMinorVersion;
XF86VidModeModeInfo **modes;
int modeNum;
int bestMode;
Atom wmDelete;
Window winDummy;
unsigned int borderDummy;
GLWin.fs = fullscreenflag;
/* set best mode to current */
bestMode = 0;
/* get a connection */
GLWin.dpy = XOpenDisplay(0);
GLWin.screen = DefaultScreen(GLWin.dpy);
XF86VidModeQueryVersion(GLWin.dpy, &vidModeMajorVersion,
&vidModeMinorVersion);
printf("XF86VidModeExtension-Version %d.%d\n", vidModeMajorVersion,
vidModeMinorVersion);
XF86VidModeGetAllModeLines(GLWin.dpy, GLWin.screen, &modeNum, &modes);
/* save desktop-resolution before switching modes */
GLWin.deskMode = *modes[0];
/* look for mode with requested resolution */
for (i = 0; i < modeNum; i++)
{
if ((modes[i]->hdisplay == width) && (modes[i]->vdisplay == height))
{
bestMode = i;
}
}
/* get an appropriate visual */
vi = glXChooseVisual(GLWin.dpy, GLWin.screen, attrListDbl);
if (vi == NULL)
{
vi = glXChooseVisual(GLWin.dpy, GLWin.screen, attrListSgl);
GLWin.doubleBuffered = False;
printf("Only Singlebuffered Visual!\n");
}
else
{
GLWin.doubleBuffered = True;
printf("Got Doublebuffered Visual!\n");
}
glXQueryVersion(GLWin.dpy, &glxMajorVersion, &glxMinorVersion);
printf("glX-Version %d.%d\n", glxMajorVersion, glxMinorVersion);
/* create a GLX context */
GLWin.ctx = glXCreateContext(GLWin.dpy, vi, 0, GL_TRUE);
/* create a color map */
cmap = XCreateColormap(GLWin.dpy, RootWindow(GLWin.dpy, vi->screen),
vi->visual, AllocNone);
GLWin.attr.colormap = cmap;
GLWin.attr.border_pixel = 0;
>>12
You're stuck in the past. The framebuffer-pointer paradigm is utterly inefficient and definitely doesn't give you access to "raw machine power". Nowadays, for most purposes, even a lousy Intel GMA will give you more performance than real-mode-style framebuffer access. And in the cases where you really need it, there's still offscreen rendering or glReadPixels.
>>13
I like how you're comparing a single statement for pixel plotting (with a typo) against a double-buffered window initialisation procedure with mode-setting and OpenGL context creation. It's totally the same thing!
Name:
Anonymous 2012-09-07 5:43
>>14 You're stuck in the past.
You have to agree, the past was much nicer and more newbie-friendly.
double-buffered window initialisation procedure with mode-setting and OpenGL context creation. It's totally the same thing!
What if I just want to put some pixels on screen and don't care about windows/double-buffering/mode-setting/contexts and other bloated crap?
Name:
Anonymous 2012-09-07 5:43
Hey guys, OP here. Old drivers aren't an issue, since they can easily be updated. Ideally I'd like even old Riva TNT and Voodoo cards to be supported, but it's not that important.
I guess it'd be better to make it work through FBOs, and probably use pbuffers as a fallback, right?
Name:
Anonymous 2012-09-07 5:45
>>14 The framebuffer pointer paradigm is utterly inefficient and definitely doesn't give you access to "raw machine power".
It should be made efficient, because it is easy to use.
Name:
14 2012-09-07 6:01
>>15 You have to agree, the past was much nicer and more newbie-friendly.
I agree with the latter, though I would argue that the present is nicer because it lets you do more, and many things (e.g. scrolling, blending, rotozoom, 3D) are easier.
What if I just want to put some pixels on screen and don't care about windows/double-buffering/mode-setting/contexts and other bloated crap?
I honestly don't know. There's a bunch of stuff like SVGAlib and DirectFB, but I doubt any of it is as well-maintained, well-engineered and lean as real mode. If I were you I would probably write small operating systems or use DOSBox.
>>17
The state of the art of direct framebuffer access is to render everything in CPU memory and then copy it to the GPU for scan-out.
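From the program's point of view that workflow looks just like the DOS one: keep a pixel buffer in system memory, poke pixels into it, and push the whole buffer to the GPU once per frame. A minimal C sketch; the resolution and helper names are made up for illustration, and the per-frame GL upload (one common route is glTexSubImage2D into a streaming texture) is shown only as a comment since it needs a live context:

```c
#include <stdint.h>

#define FB_W 320
#define FB_H 200

/* A software framebuffer: one 32-bit RGBA pixel per element,
 * row-major, top-left origin -- the modern stand-in for 0xA000. */
static uint32_t framebuffer[FB_W * FB_H];

/* Plot one pixel; writes outside the buffer are silently ignored. */
void put_pixel(int x, int y, uint32_t rgba)
{
    if (x >= 0 && x < FB_W && y >= 0 && y < FB_H)
        framebuffer[y * FB_W + x] = rgba;
}

/* Read one pixel back (no bounds check, caller's responsibility). */
uint32_t get_pixel(int x, int y)
{
    return framebuffer[y * FB_W + x];
}

/* Per frame, with a live GL context (not compiled here):
 *   glBindTexture(GL_TEXTURE_2D, tex);
 *   glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, FB_W, FB_H,
 *                   GL_RGBA, GL_UNSIGNED_BYTE, framebuffer);
 *   ...then draw one textured quad covering the window.
 */
```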
Name:
Anonymous 2012-09-07 6:04
>>18 The state of the art of direct framebuffer access is to render everything in CPU memory and then copy it to the GPU for scan-out.
This won't free you from...
#include <stdio.h>
#include <GL/glx.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include <X11/extensions/xf86vmode.h>
#include <X11/keysym.h>
/* stuff about our window grouped together */
typedef struct {
Display *dpy;
int screen;
Window win;
GLXContext ctx;
XSetWindowAttributes attr;
Bool fs;
Bool doubleBuffered;
XF86VidModeModeInfo deskMode;
int x, y;
unsigned int width, height;
unsigned int depth;
} GLWindow;
/* attributes for a single buffered visual in RGBA format with at least
* 4 bits per color and a 16 bit depth buffer */
static int attrListSgl[] = {GLX_RGBA, GLX_RED_SIZE, 4,
GLX_GREEN_SIZE, 4,
GLX_BLUE_SIZE, 4,
GLX_DEPTH_SIZE, 16,
None};
/* attributes for a double buffered visual in RGBA format with at least
* 4 bits per color and a 16 bit depth buffer */
static int attrListDbl[] = { GLX_RGBA, GLX_DOUBLEBUFFER,
GLX_RED_SIZE, 4,
GLX_GREEN_SIZE, 4,
GLX_BLUE_SIZE, 4,
GLX_DEPTH_SIZE, 16,
None };
GLWindow GLWin;
/* function called when our window is resized (should only happen in window mode) */
void resizeGLScene(unsigned int width, unsigned int height)
{
if (height == 0) /* Prevent A Divide By Zero If The Window Is Too Small */
height = 1;
glViewport(0, 0, width, height); /* Reset The Current Viewport And Perspective Transformation */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, (GLfloat)width / (GLfloat)height, 0.1f, 100.0f);
glMatrixMode(GL_MODELVIEW);
}
/* general OpenGL initialization function */
int initGL(GLvoid)
{
glShadeModel(GL_SMOOTH);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClearDepth(1.0f);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
/* we use resizeGLScene once to set up our initial perspective */
resizeGLScene(GLWin.width, GLWin.height);
glFlush();
return True;
}
/* Here goes our drawing code */
int drawGLScene(GLvoid)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
if (GLWin.doubleBuffered)
{
glXSwapBuffers(GLWin.dpy, GLWin.win);
}
return True;
}
/* function to release/destroy our resources and restoring the old desktop */
GLvoid killGLWindow(GLvoid)
{
if (GLWin.ctx)
{
if (!glXMakeCurrent(GLWin.dpy, None, NULL))
{
printf("Could not release drawing context.\n");
}
glXDestroyContext(GLWin.dpy, GLWin.ctx);
GLWin.ctx = NULL;
}
/* switch back to original desktop resolution if we were in fs */
if (GLWin.fs)
{
XF86VidModeSwitchToMode(GLWin.dpy, GLWin.screen, &GLWin.deskMode);
XF86VidModeSetViewPort(GLWin.dpy, GLWin.screen, 0, 0);
}
XCloseDisplay(GLWin.dpy);
}
/* this function creates our window and sets it up properly */
/* FIXME: bits is currently unused */
Bool createGLWindow(char* title, int width, int height, int bits,
Bool fullscreenflag)
{
XVisualInfo *vi;
Colormap cmap;
int dpyWidth, dpyHeight;
int i;
int glxMajorVersion, glxMinorVersion;
int vidModeMajorVersion, vidModeMinorVersion;
XF86VidModeModeInfo **modes;
int modeNum;
int bestMode;
Atom wmDelete;
Window winDummy;
unsigned int borderDummy;
GLWin.fs = fullscreenflag;
/* set best mode to current */
bestMode = 0;
/* get a connection */
GLWin.dpy = XOpenDisplay(0);
GLWin.screen = DefaultScreen(GLWin.dpy);
XF86VidModeQueryVersion(GLWin.dpy, &vidModeMajorVersion,
&vidModeMinorVersion);
printf("XF86VidModeExtension-Version %d.%d\n", vidModeMajorVersion,
vidModeMinorVersion);
XF86VidModeGetAllModeLines(GLWin.dpy, GLWin.screen, &modeNum, &modes);
/* save desktop-resolution before switching modes */
GLWin.deskMode = *modes[0];
/* look for mode with requested resolution */
for (i = 0; i < modeNum; i++)
{
if ((modes[i]->hdisplay == width) && (modes[i]->vdisplay == height))
{
bestMode = i;
}
}
/* get an appropriate visual */
vi = glXChooseVisual(GLWin.dpy, GLWin.screen, attrListDbl);
if (vi == NULL)
{
vi = glXChooseVisual(GLWin.dpy, GLWin.screen, attrListSgl);
GLWin.doubleBuffered = False;
printf("Only Singlebuffered Visual!\n");
}
else
{
GLWin.doubleBuffered = True;
printf("Got Doublebuffered Visual!\n");
}
glXQueryVersion(GLWin.dpy, &glxMajorVersion, &glxMinorVersion);
printf("glX-Version %d.%d\n", glxMajorVersion, glxMinorVersion);
/* create a GLX context */
GLWin.ctx = glXCreateContext(GLWin.dpy, vi, 0, GL_TRUE);
/* create a color map */
cmap = XCreateColormap(GLWin.dpy, RootWindow(GLWin.dpy, vi->screen),
vi->visual, AllocNone);
GLWin.attr.colormap = cmap;
GLWin.attr.border_pixel = 0;
>>18 I would argue that the present is nicer because it lets you do more and many things (i.e. scrolling, blending, rotozoom, 3D, etc.) are easier.
The first Quake did all this without any problem. Besides, all this graphics bloat just distracts programmers from gameplay. A good game, like Dwarf Fortress, doesn't need graphics at all.
Name:
Anonymous 2012-09-07 6:11
>>18 If I were you I would probably write small operating systems or use DOSBox.
Can you run Ruby in DOSBox, with a few gigabytes of memory for the game world?
Name:
Anonymous 2012-09-07 6:11
>>12 What retards designed them? RGBA should be the standard, like in SVGA. I think DOS had the best API possible. No stupid memory protection, windows or toolkits, just raw machine power.
LoseThos
>>20 First Quake did this all without any problem.
I'm no Quake expert, but didn't they cut corners everywhere, most notably in the texture sampling?
Can you run Ruby in DOSBox, having a few gigabytes of memory for game world?
I don't know. Is there a Ruby port for DOS?
Name:
Anonymous 2012-09-07 6:29
>>23 I don't know. Is there a Ruby port for DOS?
The last one dates from 1999.
Name:
Anonymous 2012-09-07 6:39
>>24
That is why I don't understand why people complain that "Ruby is slow", when Ruby was already fast enough in 1999.
Name:
Anonymous 2012-09-07 8:10
>>23
Every game ever made cuts corners. It doesn't matter, though, if the result is visually pleasing. Case in point: S3TC, aka DXTn. It's a noticeably lossy compression scheme that everyone uses for textures.
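The trade-off is easy to quantify: DXT1/BC1 stores every 4x4 block of texels in 8 bytes (two 16-bit endpoint colors plus 2 index bits per texel), i.e. 4 bits per pixel versus 32 for raw RGBA8, an 8:1 saving, which is why people put up with the loss. A quick C sketch of the arithmetic (helper names are illustrative):

```c
#include <stddef.h>

/* Size in bytes of a width x height texture compressed as DXT1/BC1:
 * 8 bytes per 4x4 block, dimensions rounded up to whole blocks. */
size_t dxt1_size(size_t width, size_t height)
{
    size_t blocks_w = (width + 3) / 4;
    size_t blocks_h = (height + 3) / 4;
    return blocks_w * blocks_h * 8;
}

/* Size in bytes of the same texture stored as uncompressed RGBA8. */
size_t rgba8_size(size_t width, size_t height)
{
    return width * height * 4;
}
```

For a 256x256 texture that's 32 KiB compressed against 256 KiB raw, and the ratio holds for any block-aligned size.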
Name:
Anonymous 2012-09-07 8:26
>>26
Texture compression isn't used that much outside of lousy indie games and the mobile segment. And it's still much higher quality than fully CPU-side rendering.
Name:
Anonymous 2012-09-07 18:53
>>27
On iPhone/iPad they use compression even for 2D sprites, which makes them look like they came out of the anus.