
Software Rendering and OpenGL Textures

Name: Anonymous 2011-11-12 8:54

I have been told that the bus between main and video memory is too slow to allow for constant updates to large textures. This seems like it would rule out, say, rendering a screen of a game in software then dumping it to the video card via OpenGL for postprocessing.

However, it is my reasoning that an emulator developer should want to do exactly this. An array of pixels would be easiest for an emulator to work on. Most emulators offer a DirectX or OpenGL mode for better sync, which makes me wonder what's going on underneath.

I'll get to it, then: Would it be feasible for a 320x240 game to be rendered in software, with the screen updates pushed to video memory on VBLANK for postprocessing (upscale, scanlines, etc)?

Name: Anonymous 2011-11-12 12:03

>emulated computers are going to have similar or identical interfaces to the hardware

Who said anything about computers?

Emulators for pretty much anything after the Nintendo 64 take this approach. Most games were programmed for these systems using variants of OpenGL, so the video emulation can be quickly approximated by translating OpenGL API calls into whatever the machine running the emulator uses.

Any system released prior to the N64 (as well as portables), I assure you, has nothing in common with a modern computer in terms of hardware design. There's a very important reason why you couldn't just, say, map the emulated system's sprites to GL Rects:

Scanline interrupts.
 
Modern games (and video hardware) are created with the assumption that an entire frame is going to be rendered at once during VBLANK. Old games, on the other hand, often worked on the output image as it was being drawn to the screen. Scanline interrupts (generated whenever the screen finishes drawing a line of pixels) were a common way to achieve this. If the system only allows 60 sprites on screen at once, for example, a clever programmer could set the video memory to display 60 sprites on the top half of the display, then change all their positions halfway down the screen, effectively doubling the number of sprites they could use. A common technique on the Sega Genesis to fake water translucency was to modify the palettes of all sprites and background layers after a certain scanline (i.e. "height") had been passed. So partway down the frame, everything becomes bluer, making it look like you're submerged in water.

What about systems that didn't have scanline interrupts? Very clever programmers could achieve similar effects by counting the exact number of cycles their code ran for, so that they could line up things like backgrounds that scrolled at different speeds to fake depth. Most of the time, if you see an NES game with a background that scrolls slower in the "distance" than it does "up close", the programmer had to exactly time whatever code ran while the display was being drawn. If he was off by a microsecond, the effect would be ruined.

I can go on and on about these techniques, but the point is, emulating these old systems that depended on crazy techniques is waaay more complicated than most people would assume. Even simple stuff, like using the x86 ADD instruction as a direct replacement for whatever instruction the emulated system uses for addition, is going to fuck up any game that depends on that instruction taking a certain number of cycles. It's hard enough with one processor, but then what do you do with systems like the Sega Genesis, which has 2 processors that many games need kept perfectly in sync for sound to work properly?

For reference, BSNES, the only accurate SNES emulator, is unlikely to run at 60fps on an i7 machine in the accuracy profile.
