Sunday 19 January 2014

UOIT Game Dev - Development Blog 2

What we've learned this week...

This week we've been covering the graphics pipeline, along with VBOs and FBOs.

The graphics pipeline consists of several stages. You create your vertices (or load them from an object loader), store them in an array, and send them off to the vertex shader. The vertex shader transforms the vertices from object space into clip space by multiplying them with the model, view, and projection matrices. Once that's done, the vertices are assembled into primitives forming a shape, which is then rasterized into a 2D image. Next, the fragment shader samples textures with UV coordinates to fill in the pixels with RGB color values. Finally, the result is displayed on the screen framebuffer.

What's beautiful about this pipeline is the flexibility it gives the programmer: you can change the resulting image by modifying how the pipeline handles an object's vertices or its lighting, using mathematical operations on the vertex attributes (position, normals, UVs).

Our framework uses custom shaders to render various lighting techniques like Phong, diffuse, and Lambert shading.

VBOs (Vertex Buffer Objects) allow vertex data to be stored in video memory for quick access, without having to send it over every frame. There are many ways to store vertex data in these buffers; a common one (the way our framework currently does it) is to create 3 separate buffers for the positions, normals, and UVs.

// Positions
glGenBuffers(1, &tempMesh.vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, tempMesh.vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, tempMesh.vertices.size() * sizeof(glm::vec3), &tempMesh.vertices[0], GL_STATIC_DRAW);

// Normals
glGenBuffers(1, &tempMesh.normalbuffer);
glBindBuffer(GL_ARRAY_BUFFER, tempMesh.normalbuffer);
glBufferData(GL_ARRAY_BUFFER, tempMesh.normals.size() * sizeof(glm::vec3), &tempMesh.normals[0], GL_STATIC_DRAW);

// UVs
glGenBuffers(1, &tempMesh.uvbuffer);
glBindBuffer(GL_ARRAY_BUFFER, tempMesh.uvbuffer);
glBufferData(GL_ARRAY_BUFFER, tempMesh.uvs.size() * sizeof(glm::vec2), &tempMesh.uvs[0], GL_STATIC_DRAW);

However, after hearing about the benefits during Dan's tutorial, I've decided to interleave the data and store it in a single buffer. Soon enough I should have 1 buffer that contains the vertex data in this fashion: [position1, normal1, uv1, position2, normal2, uv2, position3, ...]
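Here's a rough sketch of what that interleaved setup might look like. The Vertex struct, the interleavedBuffer member, and the attribute locations 0/1/2 are just placeholder assumptions for illustration, not our framework's actual names:

// Hypothetical interleaved vertex layout: [position | normal | uv] per vertex
struct Vertex
{
    glm::vec3 position;
    glm::vec3 normal;
    glm::vec2 uv;
};

// Pack the three separate arrays into one interleaved array
std::vector<Vertex> interleaved(tempMesh.vertices.size());
for (size_t i = 0; i < interleaved.size(); ++i)
{
    interleaved[i].position = tempMesh.vertices[i];
    interleaved[i].normal   = tempMesh.normals[i];
    interleaved[i].uv       = tempMesh.uvs[i];
}

// One buffer instead of three
glGenBuffers(1, &tempMesh.interleavedBuffer);
glBindBuffer(GL_ARRAY_BUFFER, tempMesh.interleavedBuffer);
glBufferData(GL_ARRAY_BUFFER, interleaved.size() * sizeof(Vertex), &interleaved[0], GL_STATIC_DRAW);

// At draw time the stride becomes sizeof(Vertex), with a byte offset per attribute
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, uv));

The idea is that each vertex's attributes end up next to each other in memory, which should be friendlier to the GPU's memory access than jumping between 3 separate buffers.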

FBOs (Frame Buffer Objects) are essentially render targets in video memory that store the output of a rendering pass (color data, and optionally depth). These can be used for many shader effects like shadow mapping, as well as all the cool post-processing effects like bloom, motion blur, and glow.
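For reference, here's a minimal sketch of setting one up, assuming a color texture attachment plus a depth renderbuffer so the offscreen pass still depth-tests correctly (fbo, colorTexture, depthBuffer, width, and height are placeholder names):

// Color attachment: a texture we can later sample in a post-processing shader
GLuint colorTexture;
glGenTextures(1, &colorTexture);
glBindTexture(GL_TEXTURE_2D, colorTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Depth attachment so geometry rendered into the FBO still sorts properly
GLuint depthBuffer;
glGenRenderbuffers(1, &depthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

// Tie both attachments to the framebuffer object
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTexture, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBuffer);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    printf("FBO is incomplete!\n");

glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default screen framebuffer

Anything drawn while the FBO is bound ends up in colorTexture instead of on screen, and that texture can then be fed into a post-processing shader.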

I plan on using the post-processing method discussed in this article to create a glow effect for our night city level: http://www.gamasutra.com/view/feature/2107/realtime_glow.php
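Roughly, the passes I have in mind look something like the outline below. This is only a sketch of the general multi-pass idea; sceneFBO, glowFBO, sceneColorTexture, glowTexture, blurShader, drawScene(), and drawFullscreenQuad() are all hypothetical names standing in for whatever our framework ends up using:

// Pass 1: render the night city into an offscreen FBO (set up like the one above)
glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawScene();

// Pass 2: blur the bright/glowing parts of that image into a second (usually smaller) FBO
glBindFramebuffer(GL_FRAMEBUFFER, glowFBO);
glClear(GL_COLOR_BUFFER_BIT);
blurShader.bind();
glBindTexture(GL_TEXTURE_2D, sceneColorTexture);
drawFullscreenQuad();

// Pass 3: draw the scene to the screen, then additively blend the blurred glow on top
glBindFramebuffer(GL_FRAMEBUFFER, 0);
drawScene();
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE); // additive blending lets the glow "bleed" past object edges
glBindTexture(GL_TEXTURE_2D, glowTexture);
drawFullscreenQuad();
glDisable(GL_BLEND);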

On the development side...

My challenge for the week was blending 2 different animations into 1. Our shooter, like many 3rd person shooters, needs the upper body to animate independently from the legs, much like Nathan Drake in Uncharted. Basically, we need to let the character reload, shoot, or switch weapons at any time while running or walking, without interrupting either animation.

This was not an easy task, as we can't simply replace the upper torso bone transformations with those from the second animation's skeleton. If the 2 animations move the spine bone differently, the torso ends up unnaturally offset from the legs.

Instead, I decided to replace the torso bones' local transforms with the new animation's bones, then perform forward kinematics again each frame to reconstruct the skeleton. This allows the animator to make any upper body animation they'd like and still have the spine connect with the legs without an offset.
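A rough sketch of the idea, with a placeholder Bone struct and function names used purely for illustration:

// Bones are assumed to be sorted so a parent always appears before its children
struct Bone
{
    int       parentIndex;     // -1 for the root
    bool      isUpperBody;     // flagged ahead of time: spine, arms, head, etc.
    glm::mat4 localTransform;  // transform relative to the parent bone
    glm::mat4 worldTransform;  // accumulated transform used for skinning
};

void BlendUpperBody(std::vector<Bone>& skeleton,
                    const std::vector<glm::mat4>& lowerPose,   // local transforms from the run/walk animation
                    const std::vector<glm::mat4>& upperPose)   // local transforms from the shoot/reload animation
{
    // Each bone takes its local transform from whichever animation owns that part of the body
    for (size_t i = 0; i < skeleton.size(); ++i)
        skeleton[i].localTransform = skeleton[i].isUpperBody ? upperPose[i] : lowerPose[i];

    // Forward kinematics: rebuild world transforms from the root down, so the upper body
    // stays attached to wherever the lower body's animation puts the pelvis
    for (size_t i = 0; i < skeleton.size(); ++i)
    {
        if (skeleton[i].parentIndex < 0)
            skeleton[i].worldTransform = skeleton[i].localTransform;
        else
            skeleton[i].worldTransform = skeleton[skeleton[i].parentIndex].worldTransform * skeleton[i].localTransform;
    }
}

Since the world transforms are always rebuilt through the parent chain, the upper body inherits the leg animation's pelvis placement automatically, which is what removes the offset.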
