Sunday, 16 February 2014

UOIT Game Dev - Development Blog 6 - Size Matters

This post will be a bit different from my usual posts about computer graphics; instead, it will be more of a journal entry about our team's experience developing a 48-hour game for The Great Canadian Appathon, and the things I've learned.

When the theme "Fantasy" was announced, I had no clue what to think. Other than dragons, fairies, mages and knights, there was very little I could come up with for a mobile game. So after class we headed over to Jord's place to figure out what we were going to make.

There were a couple of ideas thrown around, very indie ideas like entering a white spacious room and whatnot. After a couple of minutes we decided to brainstorm interesting mechanics rather than story. Okay, so the theme was fantasy, and we knew the majority of the teams were going to make quest games, or games where you fight a dragon, but we wanted to come up with some mechanic that very few games have.

We also took into consideration that this was going to be played on mobile devices, so we agreed on some sort of infinite runner. Infinite runners are quite popular; I personally enjoyed games like "Robot Unicorn Attack", "Subway Surf" and "Temple Run". These types of games also let us focus on the mechanics while only making 1 continuous level, so the scope was easy to manage. We essentially wanted to make a very simple game with 1 fun mechanic.

After 20 minutes of brainstorming, Josh came up with the brilliant idea of scaling the level while running. Both Jord and I thought he was talking about scaling the character instead; however, we all loved that idea too. We thought of ways the dynamics of the game could change when the character is in 3 different size states:

Small: jumps highest, fits through tunnels, falls slower

Regular: jumps 1 block high, falls at normal speed

Large: breaks through blocks, walks over small holes, falls faster

That's all we thought of really, but we were certain that there were at least a couple of games out there with this main mechanic already. To our surprise, we couldn't find or think of any endless runner that had it.

So instantly we knew what we wanted to make: an infinite runner where you can scale the character. Simple.

Soon after, we headed over to the room where they were hosting the event. After a short introduction, Jord and Josh went ahead and discussed art styles. It wasn't too long until Harry showed up to join our team, since his previous group had lacked programmers.

The day was pretty productive. However, since I was new to programming in Unity, I ran into a bunch of annoyances and script errors. But by the end of the day I had managed to get a basic random level sequence generator working.

The numbers represent the indices for the different types of blocks that will scroll past while the character is running.
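As a rough illustration of what a sequencer like that does (the chunk contents and indices below are made up for this sketch, not from our actual project), the idea is just to keep appending randomly chosen pre-built tile sets:

```python
import random

# Each "chunk" is a short list of block-type indices; the runner
# endlessly appends random chunks as the character moves forward.
CHUNKS = [
    [0, 0, 1, 0],   # flat ground with one obstacle
    [0, 2, 2, 0],   # a gap the player must jump over
    [1, 0, 0, 1],   # obstacles on both ends
]

def next_chunk():
    """Pick the next chunk of blocks to scroll in."""
    return random.choice(CHUNKS)

def generate(n_chunks):
    """Build a level sequence n_chunks long."""
    level = []
    for _ in range(n_chunks):
        level.extend(next_chunk())
    return level
```

The real trick, as it turned out later, is making sure any chunk can safely follow any other chunk.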

On the second day, I woke up to find that Jord had finished the scaling mechanic and was able to run it on the phone. The full character running animation was also completed. After implementing my random level sequencing along with some basic placeholder tiles, I began to play with the mechanics.

I discovered right away how many dynamics this mechanic can offer. For instance, I would sometimes jump too early and would normally crash into a block, but if I changed the mouse's size to the smallest amount at the last second, I could still make the jump. Just playing around with the sizing mechanic alone felt pretty enjoyable, and I immediately believed that this game had a lot of potential.

By the third day we had almost all of our assets completed; all that was left was the tile arrangements and level design. We wanted to get all of the assets and functionality in as soon as possible to allow more time for polishing and play testing.

During this process there was a lot of iteration on the tile arrangements. We didn't want the levels to feel too difficult, nor did we want them to be too simple. This was probably the most tedious process: not only did I have to arrange tiles in a way that kept the difficulty balanced, but I also had to make sure that every single set could be randomly sequenced behind any other without creating sections that were impossible for the player to get past.

A couple of hours later, we had a working game with a points system. There were about 2-3 hours left before the submission deadline, and we immediately ported what we had over to the phone for others to test out. The response was quite positive; people seemed to like the idea and picked up the mechanic rather quickly.

We even had the founder of XMG Studio, Ray Sharma, come play our game in the last hour during his surprise visit to UOIT (What are the odds? lol). I remember him saying that our game was one of the better ones he had played.

After further polishing and adding in all of the menus, we finally submitted the game and celebrated. It felt good making a whole game in only 48 hours, and we were pretty proud of what we'd accomplished.

It wasn't until a week later that we found out we had made the top 25 list. 3 UOIT teams managed to make it in total, which was pretty amazing. We were all excited and looked forward to the following week, when they would reveal the top 15.

On the Thursday of the following week, I was pretty anxious to find out if we had made the top 15 list. That alone would have satisfied me at the time, since the top 15 teams get a special invite to the "Canadian Open Data Experience" (CODE). Also, making it that far would already be a great achievement to put on our resumes.

It took a while for XMG to release the list, but as soon as they did I immediately began to look for our team. I scrolled down, finally found our name, and was relieved to see it on there.

...After a closer glance, I was surprised to find that we had not only made the top 15, but were among the 3 finalists! I really couldn't believe it at the time; it was quite shocking, and I didn't think we'd make it this far. It was one of the most exciting moments I've had in a long time. My group mates were overjoyed at the news, and we were all super excited.

Next week...

We prepared our 5 minute pitch presentation about the game and headed straight to Toronto. It was a long ride there, but we made it.

The venue was like some sort of VIP party: there were XMG employees, industry people, photographers and people from the media. Phones and tablets were on display with the top 15 games on them for people to play. Free food and drinks were offered to everyone, and there was also a DJ playing music.

It was simply amazing.

We also met two other students from UOIT, including Kevin, whose game won an award for most challenging gameplay.

After a good hour of networking, everybody headed over to the other room to prepare for the top 3 teams to pitch their games.

The host, along with the founder Ray Sharma, gave an introduction speech and showed a brief video about the history of The Great Canadian Appathon.

After some cheers and applause, it was time for us 3 teams to present.

Both teams from UofT had pretty interesting games.

Team Last Minute's "Wild Fire" is a game where you use the touch screen to drag villagers away from the dragons and into a safe home.

Team Cool Beans' "Kingdom Crushers" had a cool concept where you mine 3 different types of ores with 3 different types of miners; each miner must be matched to their specific type of rock, earning resources to defend your castle.

Then it was our turn to present our game "Size Matters".

After our 5 minute pitch, the judges asked why we had a giant worm chasing a mouse... There wasn't really a good explanation behind that random design decision, so we did our best to answer and laughed it off. We sat back down, and the judges left the room to decide on the winners. This was the most suspenseful moment of the night.

Tony Clement finally came on stage and began delivering a speech.

It wasn't long until we were up there with the President of the Treasury Board, holding a big check for 25 grand. It all happened really fast, and I was still in shock about everything. I don't think I've ever felt this successful before, and I really couldn't have done it without our team.

It's amazing how something you make in 48 hours can bring you so much and take you so far. 2 days was all it took, and the amount I've learned from this and the Game Jam alone is unimaginable.

Thank you for reading :)

Saturday, 8 February 2014

UOIT Game Dev - Development Blog 5 - Shadows

In terms of visuals, shadows play a very important role in games. They can make a simple yet dull scene look beautiful. They add depth, letting the player judge how far away or how tall an object is, and they can give the player a sense of elevation through the shadow cast by a floating object.



I could go on and on, but the main point is that shadows add so much more detail and realism to a game that they can make all the difference visually. So there are a few ways games approach shadows and shading in real time:

Screen Space Ambient Occlusion


This technique takes the rendered depth of the scene, samples it, and darkens areas in which objects occlude one another. Essentially, if two surfaces are close to one another, light around that area will bounce extra times, losing intensity and creating a slight darkening. For each pixel of the rendered screen, we sample a set of neighboring pixels (potential occluders), rotating and reflecting the sample offsets with a noise texture to reduce banding. This technique is quite effective at adding detail to a scene while sampling 16 or fewer times per pixel.
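To make the occlusion idea concrete, here's a heavily simplified CPU-side Python sketch. Real SSAO samples in a hemisphere around the surface normal and runs in a shader; this toy version just compares a pixel's depth against its neighbours, which is the core of the darkening test:

```python
def ssao_factor(depth, x, y, radius=2, bias=0.01):
    """Return an occlusion factor in [0, 1] for pixel (x, y).

    depth is a 2D list of depth values; smaller = closer to camera.
    A neighbour closer to the camera than us (beyond a small bias)
    counts as an occluder, and more occluders mean more darkening.
    """
    center = depth[y][x]
    samples, occluded = 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            sy, sx = y + dy, x + dx
            if 0 <= sy < len(depth) and 0 <= sx < len(depth[0]):
                samples += 1
                if depth[sy][sx] < center - bias:
                    occluded += 1
    return occluded / samples if samples else 0.0
```

A flat wall produces a factor of 0 (no darkening), while a pixel sitting in a corner or next to a closer surface gets a positive factor that the lighting pass would use to dim the ambient term.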



Radiosity

While global illumination looks stunning, it simply can't be done in real time due to the number of rays needed to compute such a scene. Radiosity can be a solution, as it minimizes the amount of computation needed to create a decently lit scene.

I think this link does a very good job of explaining the algorithm.

Basically, there are several passes to this technique. During each pass we have to account for all the polygons in the scene and render from each one's perspective. The more light a polygon sees, the more lit it will be, and after a few passes it can also collect light bounced from other lit polygons.
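The multi-pass gathering can be sketched very roughly in Python. This assumes we already have "form factors" (how much of patch j is visible from patch i), which in a real implementation come from those per-polygon perspective renders; the numbers and structure here are illustrative only:

```python
def radiosity(emission, reflectance, form, passes=3):
    """Iteratively gather light between patches.

    emission[i]    -- light the patch emits on its own
    reflectance[i] -- fraction of incoming light it re-radiates
    form[i][j]     -- visibility/geometry factor from patch i to j
    """
    n = len(emission)
    light = list(emission)            # pass 0: only emitters are lit
    for _ in range(passes):
        gathered = []
        for i in range(n):
            # Light arriving at patch i from every other patch.
            incoming = sum(form[i][j] * light[j] for j in range(n) if j != i)
            gathered.append(emission[i] + reflectance[i] * incoming)
        light = gathered
    return light
```

After the first pass only patches that directly see an emitter are lit; each extra pass lets light bounce one step further, which is exactly the "collecting light from other lit polygons" behaviour described above.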

Shadow Mapping

This is probably the most widely used technique. It is most easily done with directional lights: we render the scene depth from the light's point of view, creating a depth texture. This depth texture is then used to determine which pixels of the scene, from the camera's point of view, are in shadow. During the scene pass we convert each scene pixel into the light's coordinate space and check whether anything is in front of it. This 2-pass algorithm gives us a realistic shadow projection from objects in the scene.
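The two passes boil down to a min-depth render followed by a per-pixel comparison. Here's a stripped-down Python sketch of just that logic (the texel bookkeeping and light-space transform are abstracted away; the small bias is the standard trick to avoid "shadow acne" self-shadowing artifacts):

```python
def build_shadow_map(occluder_depths_per_texel):
    """Pass 1: keep the closest depth the light 'sees' in each texel."""
    return [min(ds) if ds else float("inf")
            for ds in occluder_depths_per_texel]

def in_shadow(shadow_map, texel, pixel_light_depth, bias=0.005):
    """Pass 2: a pixel is shadowed if something sits in front of it
    from the light's point of view."""
    return pixel_light_depth - bias > shadow_map[texel]
```

A pixel whose light-space depth matches the stored depth is the surface the light hits directly (lit); a pixel that is farther away than the stored depth has an occluder between it and the light (shadowed).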

Previous tests

I've played with some shadow mapping in the past, and I quickly realized the limitations of this algorithm. In the scene above you can see 2 things right away: the shadow is a bit pixelated, and there is a cut-off point near the top of the image. This is because shadow maps are stored in a texture with a limited resolution. While it worked fine for a small area of the scene, 1 shadow map wouldn't work for a larger scene, as individual shadows would lose a lot of detail and become fuzzy.

In the prototype submitted last semester, I essentially clipped the light projection to the character's position so there would always be a crisp shadow no matter where he goes.
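The "clip to the character" trick amounts to fitting a small orthographic box around the character each frame so every shadow-map texel is spent near him instead of across the whole level. A minimal sketch of that bound computation (function and parameter names here are illustrative, not from the actual project):

```python
def light_ortho_bounds(character_pos, half_size):
    """Return (left, right, bottom, top) for an orthographic light
    projection centered on the character, in the light's view plane."""
    x, y = character_pos
    return (x - half_size, x + half_size, y - half_size, y + half_size)
```

The smaller half_size is, the more texels per world unit the character's shadow gets, at the cost of nothing outside the box casting or receiving shadows.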

I don't know much about cascaded shadow mapping yet, but I assume it's something similar, where you have a shadow projection clipped to different regions of the view and compute the final scene through multiple shadow maps.

Saturday, 1 February 2014

UOIT Game Dev - Development Blog 4 - HDR and frustrations with Deferred Shading

This week...

So we've been talking about blurring techniques and how they are applied in computer graphics. Conveniently, some of the techniques were already mentioned in my previous blog, so the material wasn't new to me, but what really caught my attention was the down-sampling technique.


It's a pretty neat trick. All you really need to do is scale down the image, losing pixel information, then scale it back up, creating a more pixelated/blurred version. I might give this a try sometime in the near future.
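A toy version of the trick, operating on a 2D list of grayscale values (real implementations do this on the GPU with bilinear filtering, which smooths rather than pixelates the upsample; this sketch uses plain averaging and pixel duplication to keep the idea visible):

```python
def downsample(img):
    """Halve the image by averaging each 2x2 block (assumes even dims)."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x+1] + img[y+1][x] + img[y+1][x+1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def upsample(img):
    """Scale back up by duplicating each pixel into a 2x2 block."""
    out = []
    for row in img:
        doubled = [v for v in row for _ in range(2)]
        out.append(doubled)
        out.append(list(doubled))
    return out
```

Round-tripping an image through downsample and upsample throws away the high-frequency detail, which is exactly what reads as blur; repeating the round trip (or going down more than one level) blurs more for very little cost.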

I've always had mixed feelings about HDR bloom. When used correctly, games can really benefit from it and look a lot brighter; however, sometimes it can be too bright or too blurry (like the one below), which can outright distract the player and ruin the experience.


But I'm really interested in figuring out how dynamic light exposure is done in games. When playing certain maps with a lot of daylight in Counter-Strike: Source, I would walk out of a dark area (like a tunnel) and my whole screen would be blinded by light for a split second before going back to normal. Similarly, I could walk into a darker area and have my screen dimmed before adjusting to the darkness. This just seems so cool and natural; hopefully I will have some time in the future to implement it.
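One common way this eye-adaptation effect is achieved is to ease the current exposure toward a target derived from the scene's average luminance each frame. The sketch below is my guess at the general shape, with made-up constants (the 0.18 "key" is the classic middle-gray target from tone-mapping literature):

```python
def adapt_exposure(exposure, avg_luminance, speed=0.1, key=0.18):
    """One frame of eye adaptation.

    A brighter scene implies a lower target exposure, and the current
    exposure only moves a fraction of the way there per frame, so
    stepping out of a tunnel briefly overexposes the screen before
    the value settles.
    """
    target = key / max(avg_luminance, 1e-4)
    return exposure + (target - exposure) * speed
```

Because the exposure lags the target, walking from a dark tunnel (high exposure) into daylight leaves the screen blown out for a handful of frames, then it converges, which matches the blinding-then-normal behaviour described above.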


What I've been working on...

My main focus was trying to get multiple lights to render in our game. I looked up a lighting technique called "deferred shading", which was brought up in the first week or two of class. To be honest, I didn't really understand this technique at first; calculating and rendering lights in a frame buffer really confused me.

But after reading a couple of web pages on the topic, I finally understood the concept. We essentially need to create something called the G-buffer, whose job is to store per-pixel data (position, color, normal, etc.) from the rendered geometry. We then bind those textures in the light pass, using the appropriate pixel data to calculate the light intensity. This is a lot more efficient, since we are only calculating light data for the visible pixels in a frame.

So I created the G-buffer and the passthrough shader to output the appropriate values:

No problems there; it works fine. I then passed the textures as uniforms into my light pass shader, calculating a simple directional light.
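Conceptually, the light pass just walks the screen pixels, reads the stored G-buffer values, and evaluates the lighting there instead of per object. A CPU-side Python sketch of a Lambert directional light over flattened G-buffer arrays (a directional light doesn't need the stored position; a point light would read it too):

```python
def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def light_pass(g_diffuse, g_normal, light_dir):
    """Shade every screen pixel from G-buffer data.

    g_diffuse -- per-pixel albedo (r, g, b) tuples
    g_normal  -- per-pixel surface normals, assumed normalized
    light_dir -- direction toward the light, assumed normalized
    """
    out = []
    for albedo, normal in zip(g_diffuse, g_normal):
        intensity = max(dot(normal, light_dir), 0.0)  # Lambert term
        out.append(tuple(c * intensity for c in albedo))
    return out
```

The win over forward rendering is that this loop runs once per visible pixel per light, regardless of how many overlapping triangles were drawn into the G-buffer.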

It seemed to work fine: the correct values were being passed over, and the final result looked rather accurate. But there was 1 thing in particular that bothered me, and that was the noticeable drop in frame rate.

Why was this happening?

Isn't deferred shading supposed to be more efficient?

I spent hours trying to figure out what was wrong, and the frame rate was simply unacceptable. While I was unable to fix the issue, I discovered something in the geometry pass that was causing the lag.

fragPosition = position;
fragDiffuse = texture(myTextureSampler, UV).xyz;
fragNormal = normalize(normal);

For some reason, outputting the position and normal values creates a strain on processing. I simply commented out the following:

//fragPosition = position;
fragDiffuse = texture(myTextureSampler, UV).xyz;
//fragNormal = normalize(normal);

and it ran smoothly, but of course I wasn't able to compute the lighting since I had missing values. After a few more hours I finally gave up and decided to go with forward rendering for the multiple lights. It's really unfortunate, but I guess I will seek help next week in hopes of resolving the issue.

Right before writing this blog, I spent the entire day working on the lighting for our night level. This is what I have so far:

It mainly consists of point lights and glowing objects for now. I plan on adding a lot more to the scene next week; hopefully it will look a lot better by then.