A detailed look at implementing graphically demanding games in OpenGL with Java, using the JOGL library


Tuesday, December 23, 2008

MultiTexturing and shaders

Textures are nice and all, but one at a time is a bit boring. There are lots of situations that call for multiple textures on a triangle, or just having multiple textures available on separate units to reduce the amount of swapping.
There are many multitexturing effects you can achieve with the fixed function pipeline, but things start getting really cool when you add shaders. These are programs that take over the tasks of transforming vertices and calculating pixel colours in arbitrary ways, allowing techniques such as bump mapping, environment mapping, blurs, bloom and more. Shaders and multitexturing go hand in hand so I'll tackle them both in one post. Source is available as Tutorial 4

Following on from the simple texturing tutorial, we'll make some minor changes. Starting at the beginning:
public void init(GL gl) {
    GLHelper glh = new GLHelper();
    try {
        shader = glh.LoadShaderProgram(gl, "/tutorial4/vertex.shader", "/tutorial4/fragment.shader");
    } catch (IOException ex) {
        Logger.getLogger(TutorialObject.class.getName()).log(Level.SEVERE, null, ex);
    } catch (GLHelperException ex) {
        Logger.getLogger(TutorialObject.class.getName()).log(Level.SEVERE, null, ex);
    }
    gl.glGenTextures(2, textures, 0);
    gl.glBindTexture(GL.GL_TEXTURE_2D, textures[0]);
    ByteBuffer b = genTexture(32);
    gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA8, 32, 32, 0, GL.GL_RGB, GL.GL_BYTE, b);

    gl.glBindTexture(GL.GL_TEXTURE_2D, textures[1]);
    b = genTexture(32);
    gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA8, 32, 32, 0, GL.GL_RGB, GL.GL_BYTE, b);
}
GLHelper is a utility class that hides some of the complexity of loading a shader. We then ask for two texture handles, bind each one in turn, and fill it with data. This all goes on in texture unit zero, as we don't need to render them in parallel yet.

Then when we draw -

public void display(GL gl, float time) {
    int[] i = new int[1];
    gl.glGetIntegerv(GL.GL_CURRENT_PROGRAM, i, 0);
    if (i[0] != shader) {
        gl.glUseProgram(shader);
    }

    gl.glEnable(GL.GL_TEXTURE_2D);
    gl.glActiveTexture(GL.GL_TEXTURE0);
    gl.glBindTexture(GL.GL_TEXTURE_2D, textures[0]);
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
    gl.glActiveTexture(GL.GL_TEXTURE1);
    gl.glBindTexture(GL.GL_TEXTURE_2D, textures[1]);
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);

    gl.glUniform2f(gl.glGetUniformLocation(shader, "offset"), time, time);
    gl.glUniform1i(gl.glGetUniformLocation(shader, "tex1"), 0);
    gl.glUniform1i(gl.glGetUniformLocation(shader, "tex2"), 1);
    gl.glUniform1f(gl.glGetUniformLocation(shader, "r"), time);

    gl.glBegin(GL.GL_TRIANGLES);
    {
        gl.glVertex3f(0, 1, -3);
        gl.glVertex3f(-1, -1, -3);
        gl.glVertex3f(1, -1, -3);
    }
    gl.glEnd();
}
The first block checks to see if the shader program is active, and if not, activates it.

The second block enables texturing and binds the textures to the first two units. It also sets up bilinear filtering.

The third block sets some variables that are used by the shaders themselves. These are all uniform variables - they cannot be changed between a glBegin/glEnd pair, but there are also attribute variables that can be applied per vertex. Lighthouse 3d has some good background info.

The drawing itself is actually simpler than in the last tutorial - just three vertices. That's possible because the vertex shader calculates the texture coordinates from the vertex coordinates, so they don't have to be specified. Obviously this won't work for every situation, but it's a good example of how the whole fixed function pipeline has been replaced. Those shaders in full:

Vertex:
uniform float r;

void main(void)
{
    vec4 v = gl_Vertex;
    v.y += 0.3*cos(r+v.x);
    gl_Position = gl_ModelViewProjectionMatrix * v;
    gl_TexCoord[0].xy = gl_Vertex.xy;
    gl_TexCoord[1].xy = gl_Vertex.xy;
}
Fragment:
uniform sampler2D tex1;
uniform sampler2D tex2;
uniform vec2 offset;

void main(void)
{
    vec4 c;
    c = texture2D(tex1, gl_TexCoord[0].st);
    c += texture2D(tex2, offset + gl_TexCoord[1].st);
    gl_FragColor = c / 2.0;
}
The vertex shader applies a cosine wave to the y coordinate of each vertex - and the fragment shader blends two textures and applies an offset. The overall effect is a wavy, shimmering triangle. You'll have to run this one to see it in motion, I think!
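If you want to check the math without firing up a GL context, here is the vertex shader's wave and the fragment shader's blend redone in plain Java (a throwaway sketch of mine, not part of the tutorial source):

```java
// CPU-side sketch of the shader math above - hypothetical helper class,
// not in the tutorial source.
public class WaveSketch {
    // Vertex shader: v.y += 0.3*cos(r + v.x)
    public static float displaceY(float x, float y, float r) {
        return y + 0.3f * (float) Math.cos(r + x);
    }

    // Fragment shader: gl_FragColor = (tex1 + tex2) / 2.0, for one channel
    public static float blend(float a, float b) {
        return (a + b) / 2.0f;
    }
}
```

As r advances each frame, the cosine term sweeps along x, which is exactly the ripple you see when you run the tutorial.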

Texture mapping

Sooner or later everyone gets bored of plain coloured triangles. Texturing in OpenGL is easy enough once you've mastered a few concepts:
  • Texture names
  • Filtering
This post (and the accompanying source code) also builds slightly on the previous tutorial - the rendering we're interested in is now in TutorialObject.java. The Canvas object holds a list of all objects and calls each one to render itself.

Texture names are actually numbers. The numbers are generated by glGenTextures and returned into an array like so:
int[] textures = new int[1];
gl.glGenTextures(1, textures, 0);
The numbers returned in textures[] are effectively handles to textures. At the moment, they are not attached to any data - they are not even declared as 1 or 2 dimensional. To attach actual data, we must bind the texture:
gl.glBindTexture(GL.GL_TEXTURE_2D, textures[0]);
then we can load our image data in with
gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA8, 32, 32, 0, GL.GL_RGB, GL.GL_BYTE, b);
Hang on a minute, where do those parameters come from? The full details are at the OpenGL man pages, but the key ones are the 32s and b. They mean we are defining a 32×32 texture whose data comes from the buffer b. In this example, b is just full of random data, so the texture looks like disco lights.
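genTexture itself isn't shown in the listing; a minimal version (my sketch - the real one just needs to produce size×size×3 bytes to match the GL_RGB format above) could look like this:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Random;

// Sketch of a genTexture-style helper: size*size texels, 3 random bytes each.
// Hypothetical implementation, not the tutorial's actual code.
public class TextureData {
    public static ByteBuffer genTexture(int size) {
        ByteBuffer b = ByteBuffer.allocateDirect(size * size * 3)
                                 .order(ByteOrder.nativeOrder());
        Random rand = new Random();
        for (int i = 0; i < size * size * 3; i++) {
            b.put((byte) rand.nextInt(256));
        }
        b.rewind(); // glTexImage2D reads from the buffer's current position
        return b;
    }
}
```

The rewind() at the end matters: JOGL reads from the buffer's position, so forgetting it hands OpenGL an empty buffer.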

If we tried to render a triangle now, it still wouldn't be textured. We need to enable texturing and choose a filtering mode. So at the beginning of our object rendering code, we make these calls:
gl.glEnable(GL.GL_TEXTURE_2D);
gl.glBindTexture(GL.GL_TEXTURE_2D, textures[0]);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
Note that I re-bind the texture. This is because in a realistic program, different objects will need different textures and you won't know which texture is active at the beginning of an object's display method.
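For reference, GL_LINEAR selects bilinear filtering: each sample blends the four texels nearest the sample point, weighted by distance. The weighting works like this (a single-channel sketch of mine, just to illustrate what the hardware does):

```java
// Sketch of bilinear filtering over one channel of a 2x2 texel neighbourhood.
// fx, fy in [0,1] are the sample's fractional position between texel centres.
public class Bilinear {
    public static float sample(float c00, float c10, float c01, float c11,
                               float fx, float fy) {
        float top    = c00 * (1 - fx) + c10 * fx; // blend along x, upper row
        float bottom = c01 * (1 - fx) + c11 * fx; // blend along x, lower row
        return top * (1 - fy) + bottom * fy;      // blend the rows along y
    }
}
```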
Now we can render vertices and expect to see our texture.

gl.glBegin(GL.GL_TRIANGLES);
{
    gl.glTexCoord2f(0.0f, 0.0f);
    gl.glVertex3f(0.0f, 1.0f, -3.0f);

    gl.glTexCoord2f(1.0f, 0.0f);
    gl.glVertex3f(-1.0f, -1.0f, -3.0f);

    gl.glTexCoord2f(0.0f, 1.0f);
    gl.glVertex3f(1.0f, -1.0f, -3.0f);
}
gl.glEnd();
We need to pass the texture coordinates before the vertex coordinates - when a vertex is defined it picks up the last specified texture coordinates, color, normal etc.

Friday, December 19, 2008

Basic OpenGL set up

So you have a skeleton application compiling and running, you've got a subversion repository to commit your code to, and you're ready to start rendering some 3D graphics to see what your game could look like. Great! Let's start at the beginning.

The init method

The first opportunity to execute any OpenGL calls is your canvas's init method (technical note: the method is actually part of the GLEventListener interface, not the GLCanvas class). It is called when the underlying OpenGL context is created, and could conceivably be called again if the native context were lost and re-created, but it's generally OK to consider it a one-time deal. You can use this method to set up anything that will stay constant over the lifetime of the application. Tutorial 2 calls gl.setSwapInterval(1), which makes GL wait for the vertical blank interval before swapping the front and back buffers. This reduces tearing, allows for silky-smooth animation, and prevents your game from trying to render frames more frequently than the monitor can physically display them.
init is also a good place to load textures, set up display lists and vertex arrays, and set global lighting parameters like the ambient light level.

The reshape method

The second opportunity to execute GL calls is in reshape. This is called whenever the viewport changes, and once before the first call to display. This is where you set up the viewport and the GL_PROJECTION matrix, and it's as good a place as any to set the default GL_MODELVIEW matrix. Combined, these three settings are like defining the camera in your scene, but it can do things that no physical camera can.

The viewport is the 2D window that OpenGL will draw to. reshape is called with width and height parameters, and in most cases these are just passed straight through to glViewport. The only exception I can think of is if you are managing multiple views in the same context and wish to render to several different viewports - but I would look into multiple GLCanvas instances first.

On to the matrices. The GL_PROJECTION matrix doesn't have to hold a perspective transformation at all. Though a perspective view makes sense in a lot of games, if you're planning an RTS or other top-down view, or even a totally 2D game (OpenGL turns out to be great for 2D too), then an orthogonal transform is what you want. There are two helper functions to create these matrices, as otherwise the math can get tricky:
  • glFrustum. A frustum is a shape like an Incan pyramid, and defines the viewing volume for a perspective view, i.e. things further away are smaller. Suitable for first person shooters, many third person games, and anything where realism is key.
  • glOrtho. Ortho is short for orthogonal, meaning that vectors at right angles before the transformation are still at right angles afterwards. (Take a moment to think about why perspective transforms are not orthogonal.) This function sets up the camera for a parallel projection, i.e. things further away do not get smaller. This is handy for 2D games, top-down god games, RTSs, and a surprising number of other gameplay styles.
Both functions produce a matrix that defines the viewing volume. This is the volume of space that is visible to the camera. Note that it has a front and back - like a real camera, if something is too close to the lens you can't see it very well, and unlike a real camera there is a maximum distance beyond which OpenGL will ignore objects. There is a problem with setting the near plane to 0.00001 and the far plane to 9999999 though - depth buffer precision. Depth buffering is a fantastically useful technique we'll cover in detail later, but for now let's just note that we don't want the ratio of near:far to get too large, or you'll see rendering artifacts.
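To put numbers on that warning, here is a quick sketch of mine (not from the tutorial) of the normalized depth a glFrustum projection produces for a point at eye-space distance d. With sane planes, nearby distances map to clearly distinct depths; with extreme planes they collapse together and the depth buffer can no longer separate them:

```java
// Depth produced by a glFrustum projection for a point at eye-space
// distance d, mapped to normalized device coordinates in [-1, 1].
public class DepthPrecision {
    public static double ndcDepth(double d, double near, double far) {
        return (far + near) / (far - near)
             - 2.0 * far * near / ((far - near) * d);
    }

    public static void main(String[] args) {
        // Sane planes: distances 100 and 200 get clearly distinct depths.
        System.out.println(ndcDepth(100, 1, 500) - ndcDepth(200, 1, 500));
        // Extreme planes: distances 1000 and 2000 land so close together
        // that a 24-bit depth buffer cannot tell them apart -> z-fighting.
        System.out.println(ndcDepth(1000, 0.00001, 9999999)
                         - ndcDepth(2000, 0.00001, 9999999));
    }
}
```

Notice that the formula spends most of its precision near the near plane - which is exactly why dragging the near plane down to 0.00001 is so costly.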

glFrustum takes six parameters. The first five define the position of the front face of the viewing volume - left, right, bottom, top and near. The sixth sets the far distance. The shape of the front face should be the same as the shape of your viewport - if the user drags out a tall, skinny window, then to prevent distorting your graphics, the view volume should be tall and skinny too, like in the following code:
float aspect = (1.0f) * height / width;
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustum(-.5f, .5f, -.5f * aspect, .5f * aspect, 1.f, 500.f);
This sets up a pretty standard perspective view, where a square of side 1 would fill the screen if it was centered one unit in front of the camera - as long as the screen is square.

glOrtho takes six parameters too - in fact, the same ones. But the near face of this viewing volume is the same size as the far face, so it probably needs to be much larger to be able to view the same objects - e.g.:
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrtho(-width / 64.0f, width / 64.0f, -height / 64.0f, height / 64.0f, 1, 100);
This view would show each unit square as a square 32 pixels across (the visible world is width/32 units wide), no matter how large the screen is. Resizing the window would show more of the world, rather than scaling up the same view. It's the camera matrix that JavaPop uses.
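The pixels-per-unit figure falls straight out of the parameters: for a call of the form glOrtho(-width/d, width/d, -height/d, height/d, ...), the scale works out to d/2 pixels per world unit regardless of window size. A quick sketch of that arithmetic (hypothetical helper of mine):

```java
// Pixels-per-world-unit for an ortho projection of the form
// glOrtho(-width/d, width/d, -height/d, height/d, ...) on a width-pixel
// viewport. Hypothetical helper, just to show the arithmetic.
public class OrthoScale {
    public static double pixelsPerUnit(int widthPx, double d) {
        double worldWidth = 2.0 * widthPx / d; // right - left, in world units
        return widthPx / worldWidth;           // = d / 2, window size cancels
    }
}
```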

Now as for GL_MODELVIEW, all we want to do is set it to the identity matrix, to make sure it's initialised:
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glLoadIdentity();
Now the OpenGL context is set up and ready for you to draw your geometry in the display method. We'll get to that very soon.

Wednesday, December 17, 2008

Fullscreen or windowed?

Quick answer - both! I think that Java provides the most transparent way of handling fullscreen graphics that I've come across. Whether we're fullscreen or windowed, we need to set up an application window (JFrame) and put an OpenGL canvas (GLCanvas) in it. To go fullscreen, we use the GraphicsDevice class, which can be used with OpenGL or without (just in case you want a fullscreen accounting package?)

There are a few steps to turn your app into a fullscreen one:
  1. Remove decorations from your window. If you don't do this, you'll still see a title bar, and users will be able to move the window about on a black background. Not very pro.
  2. Set the full screen window to your JFrame.
  3. Choose a display mode - ideally you'll provide a way for the user to choose this, but to start with there's nothing wrong with hard-coding a resolution you know your machine can handle. Note - you can't change the display mode before setting the full screen window.
  4. Provide a way to get out of fullscreen mode! This should really be item zero on the list. As a last resort, you should be able to Alt-F4 or ⌘-Q out of it, but debugging in fullscreen mode can be a pain unless you have a second computer. Fortunately you can debug in windowed mode, and just switch to fullscreen for serious playing.
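For step 3, choosing a mode usually means scanning the candidates from GraphicsDevice.getDisplayModes() for the best match. Here is the selection logic on its own, modeled on plain int arrays so it runs without a display (a hypothetical helper - substitute real DisplayMode objects in your app):

```java
// Hypothetical helper for step 3: pick a display mode matching a requested
// resolution, preferring the highest bit depth. Candidates are modeled as
// {width, height, bitDepth} arrays; in a real app they would come from
// GraphicsDevice.getDisplayModes().
public class ModeChooser {
    public static int[] choose(int[][] modes, int wantW, int wantH) {
        int[] best = null;
        for (int[] m : modes) {
            if (m[0] == wantW && m[1] == wantH
                    && (best == null || m[2] > best[2])) {
                best = m;
            }
        }
        return best; // null if nothing matches the requested resolution
    }
}
```

Falling back to windowed mode when choose returns null is kinder than crashing.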
Let's implement it. The full code can be found at
svn checkout "http://javagamedev.googlecode.com/svn/trunk/Tutorial 2"
and we'll end up with a spinning shape - more exciting than a grey screen, no? Let's look at the important details. There are two classes this time - TutorialFrame and TutorialCanvas. The Frame class is the top level window. It controls whether the application runs full screen or not -
private void init(boolean bFullScreen) {
    fullscreen = bFullScreen;
    setUndecorated(bFullScreen);
    setSize(800, 600);
    setVisible(true);
    GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice().setFullScreenWindow(bFullScreen ? this : null);
    canvas.requestFocus();
}
By default the frame starts up windowed, but a quick call to
public void setFullscreen(boolean bFullscreen) {
    if (fullscreen != bFullscreen) {
        this.dispose();
        init(bFullscreen);
    }
}

switches between windowed or fullscreen. Note the call to dispose() - without this, the frame is "displayable", and any call to setUndecorated() will fail.

The Canvas class actually performs the rendering, and apart from reacting to a reshape, carries on blissfully ignorant of whether it's full screen or windowed. The Canvas implements KeyListener so that it can call the Frame's setFullscreen method:
public void keyPressed(KeyEvent e) {
    switch (e.getKeyCode()) {
        case KeyEvent.VK_F:
            tFrame.setFullscreen(!tFrame.fullscreen);
            break;
        case KeyEvent.VK_ESCAPE:
            tFrame.setFullscreen(false);
    }
}
Nice and easy. Practise checking it out, and try to change the spinning shape being rendered - maybe a cube? Hint: you might need a depth buffer!