Antimatter Instance Dev Log Entry #2: Using Microsoft’s Graphic Debugger

Introduction

When the DreamBuildPlay 2017 challenge was announced, I immediately put aside all personal projects I was working on at the time (they were all rather boring anyway). I created a new instance of my game engine, copied some animations from my previous games, baked some draft 3D models and used all that to build a quick prototype. Despite them being placeholders, when the main character started kicking butts (literally), I got pretty excited. As I went on, I created more animation sequences as well as a new “combo attack” manager, and I was in the middle of adding some fancy graphics… when I noticed a small stutter every now and then. I got somewhat alarmed, since having performance problems this early in the development phase is pretty much a dark omen.

Wondering what on Earth I had done wrong this time, I ran the Graphic Debugger feature included in Visual Studio. The numbers I got were unexpected, but they pointed out what my problem was. I fine-tuned my code and soon enough I had the performance issue under control – I hope. The process was so useful and smooth that I thought about writing a quick blog about it to share the experience.

Microsoft’s Graphic Debugger

To give a bit of history, the Graphic Debugger is a feature of Microsoft’s Visual Studio, introduced in the 2012 Professional Edition, which became widely available with the Visual Studio 2015 Community Edition. The early version was a little bit confusing. However, the new version is so easy to use that, in just a few clicks, I got an analysis of the tasks that were run by the computer’s GPU, along with the time it took to execute each and every one of them.

Diagnosis Procedure

So, without further ado, let me give a quick account of what was done, in the hope that this same process could be useful to other indie devs: With my project loaded in Visual Studio, I went to the “Debug” pull-down menu, then “Graphics” and then “Start Graphics Debugging”.

GraphicDebugger01

The Graphic Debugger application requested to run with elevated permissions, and then my prototype was launched. As the game session went by, a number in the top left corner kept me informed of the time it took to draw the current frame. This number never went below 17 milliseconds (roughly equivalent to 60 frames per second), which is consistent with the fact that my game engine synchronizes with the display’s refresh rate (as we all should, for there is no point in presenting frames that the player will never see). I pressed the “Print Screen” button a couple of times to take snapshots of the frame at hand, and then I quit the prototype.

The next screen showed me a summary of the collected data.

GraphicDebugger02

In short:

  • The first graphic is a plot of the time that the GPU took to draw each requested frame. Overall, each frame took around 17 milliseconds to draw, with the exception of six instances:
    • The biggest chunk (which can be interpreted as a drop in performance) happens at the beginning of the session. This is expected since that is when the game loads most of the assets in memory, making the game a little bit unresponsive for about 10 seconds, just before the main menu is shown.
    • The second biggest chunk happens when the game loads all assets needed to execute the first level of the game. Likewise, this drop is expected.
    • The next two chunks, linked to an orange triangle on top, are the screenshots taken for the frame analysis. These two drops are also expected.
    • The last two chunks, and this is something I need to work on, happen when the game compiled, at run-time, the animation sequences for the main character. In other words, this is a problem, and I’d better add some code to cache those animations.
  • The second graphic on the Report tab shows the same information as the previous plot, but as the number of frames per second. In other words, it is the mathematical inverse of the previous plot.
  • The bottom section has a list of individual frames that were captured by the diagnostic tool.
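That inverse relationship is one line of arithmetic; here is a quick sketch (the function name is mine, not from any tool):

```cpp
#include <cassert>
#include <cmath>

// Frame time and frame rate are reciprocals: a frame that takes
// t milliseconds to draw corresponds to 1000 / t frames per second.
double framesPerSecond(double frameTimeMs) {
    return 1000.0 / frameTimeMs;
}
```

So a steady 17 ms frame time reads as roughly 59 fps, which is why the two graphics in the report carry exactly the same information.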

From the frames listed at the bottom, I clicked on the second frame (the first one is usually not that accurate). The next screen provided a detailed list of the steps executed by the GPU for that given frame.

GraphicDebugger03

Tasks that have a black paintbrush icon have a screenshot attached to them, so clicking on them will show, on the right, what was being drawn at the time.

To the right of the “Render Target” tab, there is a tab called “Frame Analysis”. On that tab, there are two options: “Quick Analysis” and “Extended Analysis”. The latter option crashes every time I try it, but the former is good enough for most diagnoses.

After a quick analysis, the application shows a report of the collected data for the given frame.

GraphicDebugger04

The first section of this output report presents the collected data per task in a bar graph, focusing on the time it took to process each of them. When seen like this, it is very easy to spot problematic draws. The second part of the report shows the collected information in numbers. Likewise, the potentially problematic draws are highlighted in orange (actually, it’s salmon; however, this is a technical post – the graphic design ones come later).

On the screenshot shown, in both sections, two tasks pop out: tasks 467 and 470. Going back to the “Render Target” tab and clicking on the suspect tasks, I found out that the little trees shown in the background were hogging my game’s GPU time. Altogether, these two tasks alone were consuming a third of the 17 ms threshold for a 60 fps game.

Data Analysis

Although I did implement a custom routine to draw these trees in different colors (part of the fancy graphics feature I was trying to implement), the shaders I created were not in any way complex. Moreover, these trees have a very low polygon count (as seen on the report, each one has about 174 polygons), and that is why I was so surprised about them being the culprit.

Anyway, long story short, the problem can be summarized as follows: these trees are huge. It’s not quite evident on the screenshot, but these trees are instances of a 3D model, which is why they look different based on the angle of view. This means that some branches are drawn on top of each other, using a transparency effect, meaning that the color of a given pixel is not final until all branches are drawn (not to mention that these trees overlap each other at some points). Now, given that a pixel shader is executed at least once for every pixel it covers on the screen, and that the current resolution of the screen is 1680 x 1050 pixels (making the trees about 500 x 500 pixels each), this is A LOT of calculations, especially for texture sampling. Moreover, as the report shows, the GPU tried to draw 18 trees (9 on each pass), yet only 8 of them are actually visible, which means that almost half of the time spent was pretty much wasted (I say almost half because I still need two pairs of trees on each side in case the player needs to go side to side).
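To put numbers on that claim, here is a back-of-envelope sketch using the figures from the report (18 trees of roughly 500 x 500 pixels on a 1680 x 1050 screen); the struct and function names are illustrative, not from the engine:

```cpp
#include <cassert>

// Rough fill-cost estimate: how many pixel-shader invocations the trees
// request, compared with the number of pixels actually on the screen.
struct FillEstimate {
    long long shaded;  // pixel-shader invocations submitted for the trees
    long long screen;  // total pixels on the screen
};

FillEstimate estimateTreeFill(int treesDrawn, long long pixelsPerTree,
                              long long screenW, long long screenH) {
    return { treesDrawn * pixelsPerTree, screenW * screenH };
}
```

With 18 trees at about 250,000 pixels each, the shader runs some 4.5 million times per frame, more than two and a half times the 1,764,000 pixels the screen actually holds, before any other draw is even counted.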

Conclusion

Usually, every time I work with custom shaders I run the Graphic Debugger to see how many things I broke in the process. Knowing where the time is allocated when drawing a given frame helps me focus my attention on specific shaders. In this case, after reviewing what I had typed, I did find a way to optimize the background trees’ pixel shader, and that took care of the stuttering – for now. To be fair, my development system has an NVIDIA GeForce 8500 GT, which performs pretty poorly (a PassMark score of 139 puts it far down the list), so, if my game can perform well on this system, then it should run fine on most computers out there.

I will continue to use the Graphic Debugger as often as I can. However, if in the final product you see brick walls instead of trees, then you know what happened.


Antimatter Instance Dev Log Entry #1: The Game is Afoot

When I saw that the DreamBuildPlay 2017 Challenge was announced, I was speechless. It was like the good old days were coming back, but this time re-engineered for a multi-platform approach.

Back in 2011, this very same competition was the one that motivated me to really learn about creating video games, and ever since then I’ve been absorbed by this hobby of mine. I mean, creating games is great, but without a strong motivation, everything I’d done pretty much never went beyond the status of a “doodle”. Ugly doodles, to be honest. Instead, the DBP challenge was the “carrot” that kept me pushing myself to publish the best project I could ever deliver. I got the chance to participate a second time in 2012, and I was really looking forward to the next one. Unfortunately, no new competitions were announced in the following years and, instead, Microsoft announced the “sun-setting process” for the XNA Creators Club.

After that, a few contests were announced for specific third-party technologies, but none of them were attractive enough to be a real motivation. Things were really slow for a while until the DreamBuildPlay 2017 Challenge was announced.

The dates are a little bit unforgiving, though: The announcement was in July and the due date is in December, which gives us about 6 months to create an entry. Those of us with experience publishing games know that this time frame is pretty much the biggest challenge to overcome, since it usually takes about a year to create a good, playable game. That said, the most likely strategy that most of us will follow is to grab and “package” several assets in one single prototype, thoroughly test it, certify it as per the Windows Store standards and submit it at least a week prior to the due date (Internet bandwidth can be really mischievous during the Christmas holidays). The game gallery could be reduced to a list of pretty proofs of concept… unless the contestants already have something to start with.

Here is where one of the biggest advantages of using a game engine comes into play: flexibility. As a developer, if you already have a game published using a flexible game engine, you can create a completely different game by re-configuring this very same engine in such a way that it will perform different tasks. I’m not talking about “re-skinning” an already published game – that would be cheating. What I mean is that the same game engine could be used to create, say, a “Platformer” or a “First-Person Shooter”, depending on how it is configured. Of course, creating two different games still requires a lot of effort. However, the basic tasks would already be taken care of.

Most 3D game engines are quite flexible in this regard: Use a different set of animation sequences, configure a different camera angle, then just implement a new game logic and a brand new game is created – all drawing, event handling, asset loading, object caching and even game states are taken care of, automatically.

So, that’s pretty much the strategy I’ve decided to follow: The same game engine that I used for my “Third-Person Shooter” and my “Sports” game, will be the core of a Side Scrolling Brawler that I have titled “Antimatter Instance”.

Here is where things get tricky: under the hood, the biggest difference between these three genres is the massive amount of animation sequences to implement in a fighting game. It’s not just about strikes and combos, but also about hurt animations from different angles, times two (the original and the “mirrored”), times the number of game characters to implement. Even though I was able to create a working prototype within the last month, the sheer amount of work ahead just in animation sequences will be massive, and could be the factor that prevents me from delivering on time.

I mean, the main character uses a technique inspired by the martial art of Taekwondo. The henchman currently running in the prototype uses a technique inspired by Boxing (not Olympic-level boxing, which is horrible to look at, but something closer to a professional level – thanks, YouTube!!!). The character in the current cover art is one of the bosses, so the use of a Bo as a weapon is yet another set of animations to implement… and the list goes on and on.

Now here is a question that most readers are wondering: How come a brawler is named after an astrophysics concept? Well, to answer that, we just need to play the game.

Using a Pixel Shader for Facial Expressions

I’ve been playing around with shaders and I thought about making a quick work-in-progress video to show the visual effects that I have been working with. In a nutshell, I have enhanced the human expression engine that I have implemented for my 3D models. The assumption here is that all characters will always be at a certain distance, and there will never be any kind of close-up. Granted, the video is a close-up itself, but that is only for the purpose of this article.

Here is the video:


The effects shown were implemented within the Pixel Shader. That’s a rather big gamble because a complex Pixel Shader may cause a catastrophic drop in performance. The advantage that I have is that my choice of design is towards a cartoon style, which makes pretty much everything much easier and cheaper than what would have been if I had chosen a realistic approach. That allows me to get away with a simplistic algorithm: The simpler the better.

In layman’s terms, a pixel shader is a program that “paints” a three-dimensional model according to a given image called a “texture”. Models are made of triangles. Each triangle has three corners or “vertices”; each corner has a coordinate that points to a pixel in our texture, and all three pixels delimit a triangle in this texture. A pixel shader draws this very same triangle on the screen by interpolating these coordinates, pixel by pixel, sampling the color to display from our texture.
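As a tiny sketch of that last step (the names are mine, and real GPUs do this in hardware): the coordinate handed to the pixel shader is a weighted blend of the three vertex coordinates, with weights that sum to one inside the triangle.

```cpp
#include <cassert>
#include <cmath>

// A texture coordinate, as stored at each triangle vertex.
struct UV { float u, v; };

// The rasterizer interpolates the three vertex coordinates for every
// covered pixel; the pixel shader then samples the texture at the result.
UV interpolateUV(UV a, UV b, UV c, float wa, float wb, float wc) {
    return { wa * a.u + wb * b.u + wc * c.u,
             wa * a.v + wb * b.v + wc * c.v };
}
```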

Now, the features shown in the video work by pretty much messing with these sampling coordinates. Not all the texture is always shown and, instead, the pixel shader input specifies which section of the texture should be drawn.

I. Eye movement

When the shader draws the upper half of the face, if the color of the texture sample is white, then it’s drawing the eyes. Then it samples again, but this time adds an offset that points to an area of our texture where the irises are located. Messing with this offset gives the effect of eye movement. This offset is a parameter, a member of the pixel shader input. Granted, the irises are not a perfect circle, but then again, they are round enough at a distance.
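In C++ terms, the decision the shader makes per pixel looks roughly like this (a sketch with made-up names, not the actual HLSL):

```cpp
#include <cassert>
#include <cmath>

struct Color { int r, g, b; };
struct UV { float u, v; };

bool isWhite(Color c) { return c.r == 255 && c.g == 255 && c.b == 255; }

// Returns the coordinate the shader should actually sample: if the base
// sample is white (the white of the eye), jump to the iris area of the
// texture, shifted further by the per-frame "gaze" input parameter.
UV eyeSampleCoord(UV base, Color baseSample, UV irisOffset, UV gaze) {
    if (isWhite(baseSample))
        return { base.u + irisOffset.u + gaze.u,
                 base.v + irisOffset.v + gaze.v };
    return base;  // everywhere else, sample as usual
}
```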

II. Blinking

The texture has three pairs of eyelids in it: open, half closed, and closed. A shader input field specifies which of these pairs is drawn at a given time. My animation engine computes which one should be shown (the fewer operations at the shader level, the better), following the pattern “open – half closed – closed – half closed – open”.

A special condition applies: the animation engine half closes the eyelids when the character is looking down. That makes the overall effect a little bit more natural.
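A minimal CPU-side sketch of that pattern, with made-up timing constants (the real engine drives this from its animation data):

```cpp
#include <cassert>
#include <cmath>

// Returns which eyelid pair to draw: 0 = open, 1 = half closed, 2 = closed.
// Follows the pattern open - half closed - closed - half closed - open.
int eyelidFrame(double seconds, bool lookingDown) {
    const double period   = 3.0;  // assumed: one blink every 3 seconds
    const double blinkLen = 0.2;  // assumed: a blink lasts 0.2 seconds
    double phase = std::fmod(seconds, period);
    int frame = 0;                               // eyes open by default
    if (phase < blinkLen) {
        double t = phase / blinkLen;             // progress through the blink
        frame = (t < 0.25 || t >= 0.75) ? 1 : 2; // half closed at the edges
    }
    if (lookingDown && frame == 0) frame = 1;    // the special condition above
    return frame;
}
```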

III. Smile and Frown

The tips of the mouth can be crooked up or down. Originally, on the texture, the mouth is flat. Crooking the tips of the mouth can create a smile or a frown. Only the tips of the mouth are crooked, in order to give the effect of full, fleshy lips.

This one is a little bit of a gamble. DirectX experts may argue here that this effect could perform better if done within the vertex shader. They would be right, were it not for the fact that I’m running very close to the 39-bone limit imposed by the size of the memory buffer in the average PC video adapter.
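A sketch of the coordinate trick (hypothetical names and thresholds; the real shader works against the texture’s own layout): only samples beyond the mouth tips get their vertical coordinate displaced.

```cpp
#include <cassert>
#include <cmath>

// Crook the mouth tips by bending the sampling v-coordinate.
// u/v are texture coordinates; uLeft/uRight bound the flat middle of the
// mouth; bend > 0 pulls the tips one way (frown), bend < 0 the other (smile).
float mouthSampleV(float u, float v, float uLeft, float uRight, float bend) {
    if (u < uLeft)  return v + bend * (uLeft - u);   // left tip
    if (u > uRight) return v + bend * (u - uRight);  // right tip
    return v;                                        // middle stays flat
}
```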

IV. Eyelashes

Two thirds of the eyelashes can be tilted up and down, expressing the character’s emotional state: sad, angry, evil mischief. The secret is knowing at which point the tilt starts. The fact that this is a cartoon character makes this quite easy.

#AssetGameChallenge Entry “Doomsday for a Spider Nest”: Let the games begin!

I’m quite fortunate to be part of a couple of thriving indie game development communities. Although we don’t always agree, I’ve always enjoyed chatting with people that are as passionate as I am about our craft. As it happens, they are coordinating a game development contest, and I’m feeling like joining them, just for kicks. After all, there’s not much on TV during the holidays.

#AssetGameChallenge is the name of the event, and “Doomsday for a Spider Nest” is the name of my entry. This will be my first attempt to create and publish a 2D game using C++ and my own engine, so I feel I’m going to be at a disadvantage. It’s all right, though, as it will be fun, no matter if I finish on time or not – let alone if I get good feedback, or not.

It’s going to be a shooter game, best played with a touch screen. I really want to use my HP Stream 7 tablet for this project. It will have one scene only, though, as there is not much time to do anything else. More than a game, it is going to be like a performance benchmark tool, exploiting particle systems as much as I can. I’m inclined to believe that will give me some advantage over managed languages… then again, I’m not that good at coding, so let’s see what performance I can get.

The contest has a prior phase about the publication of an “asset”. I don’t have a membership of any asset store, so I don’t think I’ll be able to participate in that phase. That’s quite fine, though, as the game entry itself will consume most of my free time.

So, to recapitulate, this post marks the start of a new project, and the checklist stands as follows:

  • Announce the entry,
  • Set the expectation, and
  • Put it back in the icebox while I finish porting my Third-Person Shooter as a Universal Windows Platform game.

Here is the link to the contest

A Snowy Slalom - Danika

“A Snowy Slalom”: Leading the way towards the new era of gaming technologies

In an era where technology is always evolving, he who stands still will be left behind. On the other hand, he who follows it too closely runs the risk of being led to a dead end. Yup, during my professional career, I have had a good share of surprises with technologies and third-party frameworks that all of a sudden get discontinued, leaving the investment made in custom applications dead in the water. I have also been scolded for not keeping frameworks up to date, losing a sale because our technology did not support this or that requirement that other products did.

As an “indie”, I made the decision to branch out as a way of reaching a bigger audience. I have to admit, the pile of options to choose from is rather overwhelming, as each one offers features second to none, most of them highlighted with a “wow” factor using multimedia assets widely distributed over the Internet. Crowds of followers are not shy about chanting, over and over, the benefits that each framework has, and some of them are happy to share their knowledge by publishing detailed tutorials for free.

Regardless, the thought of venturing into an unknown landscape, using a different framework, in an unfamiliar development environment, programming for hardware that I don’t even have, can be daunting. Therefore, it was decided that the best approach to embrace this quest was to port an existing project rather than to start anew. The champion chosen for this challenge was our game “A Snowy Slalom”.

Starting Point

The project “A Snowy Slalom” started in the winter of 2011 – 2012. It was a time when I had already published my first game on the XBLIG channel, and I believed that my 3D engine had evolved enough to take on a wide variety of projects. I still had the strong belief that game developers should be conscious of our influence, so I wanted to try and create something not violent at all, something that would stand in contrast to the mass-murdering shooters that were in vogue. While going through the standard brainstorming process, I remembered the first time I tried a wave pool: It was a cute experience when dodging waves at the shallow end, and yet it was rather intense when doing so at the deep end, where the water was literally up to the neck and where no floor can help you jump. I recalled that experience while watching a winter sports event on TV (I really need to get cable), and that is where I got the idea of creating a skiing game. Not just the average “dodge and be happy” or “jump & stunt” kind of skiing, but a project in which the overall experience changed completely once the player was right in it, an experience that would suggest that the elements of nature should not be taken so lightly.

Back then, there were two main design styles for sports games: On one side, there was the “realistic” approach, in which all efforts were spent towards mimicking reality at all costs. On the other side, there was the “cartoon” style, featuring characters with over-sized heads and wobbly movements, aiming for a cute gaming experience. The design chosen, however, was something between these two: The style is based on a colorful environment (even the snow has a tone of blue), and yet the character rigs (or character proportions) are close to reality, allowing the execution of natural human movements. Overall, the design is heavily influenced by the masterpieces of leading American animation studios.

By June 2012, the development of the main components of “A Snowy Slalom” using XNA was complete. The game was featured in the “Dream, Build, Play 2012” contest, and it was officially released in September of that same year on the XBLIG marketplace.

The Technology to Use

Over the last couple of years, the indie market got flooded with new engines, technologies, frameworks, programming languages and asset stores. The new generation of gaming consoles had all gone through a promising debut, the id@xbox program was gaining popularity, and everybody started talking about game development. By June of 2014, after taking a well-deserved break, I was ready to choose the technology to use for my next project. So, I started my research and weighed all available options based on my current needs, knowledge, experience and resources. While learning their benefits and restrictions, one by one I scratched these options off my list until, suddenly, my list was empty. This was an unexpected surprise, contrary to what was promised by all the marketing material widely spread through electronic media. Shockingly, I found myself with a set of tools and assets that I could no longer use for the brand I wanted to work with, and that was a fact that was hard to swallow.

For starters, of all the options available back in 2014, there were only three frameworks supported by Microsoft’s new gaming console: Unity3D, DirectX11 and Unreal Engine (this last one was not free, so it got discarded by default). Of these, none supported a type of file for 3D models called the “X-File”. This was an important requirement, since I had worked with a game engine that used this kind of 3D file and, for years, I had enhanced it and enriched it on every game I released. I had chosen to work with this file format back in 2011 because it was by far the better documented of the two 3D model file formats supported by XNA. The other option was Autodesk’s “FBX”, and it is not easy to create one of those files programmatically. In my defense, the X-File had been the standard used by DirectX for almost a decade, and never in my life could I have imagined that Microsoft would drop support for its own 3D model file format. About a year before, I had dropped a line in a DirectX11 forum requesting support for this type of file, and the answer I got was not encouraging.

My situation was a little bit more complicated than an unsupported file: I had my own game engine based on an event-driven animation language of my own invention that pretty much allowed me to create any type of game I desired. I even had quite a set of tools to assist during game development. These tools created the game characters by mixing features like hair, skin color, body type and clothes (to mention a few), compiling them into an x-file ready to use. I had proven the efficiency of this engine with a third-person shooter, “Battle for Demon City” (released in April 2014), which had earned good comments (at least when it comes to the animation used) from gamers and critics around the world. To give some perspective, “Battle for Demon City” has 230 different animation sequences for 10 different models (some animations were applied to more than one model), for a grand total of over 2300 individual frames, all of them loaded, compiled and executed at run-time almost instantly. The best part was that the installer was 22 Mbytes, well below the 50 Mbyte limit imposed by the XBLIG marketplace.

The importance of this animation engine resided in the fact that it had been the tool that allowed me to create games that were different from all others. This is paramount in a market where there are literally thousands of games available. The results I got with this algorithm had become my style’s “signature”, and I was not at all keen on the idea of discarding everything and starting all over from scratch.

I was just about to give up on consoles and focus on games for PC using Monogame, when a thought crossed my mind: If I was able to create x-files programmatically, then I might just as well be able to read and load them, even in an unfamiliar framework. So, I gave it a try. Of the two options remaining, Unity3D is more a “commercial game engine” than a development platform, so the implementation of my own animation engine was not going to be so easy to achieve. On the other hand, DirectX is a collection of APIs, thus it welcomes any kind of implementation I desire… although it was clear that I had to do it myself, without any guarantee that a custom development would be supported in future versions. I jumped in, and in two weeks’ time I had the routine to read, load and animate a 3D model from the x-file format (I think it has taken me more time to write this article). At that point I decided to give DirectX a try, hoping for the best.

I know, this document is sounding more like a charade than a serious article, since I started out talking about new technologies and the buzz around them, and I ended up with the oldest API available and the one framework that nobody seems to cheer for. To give credit to all those who disagree with my choice, I am very well aware that the extra effort invested in this technology is the result of refusing to let go of an “outdated” file format. For those familiar with IT management, in-house development of proprietary technology carries a great risk of incurring a high cost in the long term. Still, I’m willing to take that risk, just to make sure that my games are different from all others available on the market.

The Migration

Not everything went smoothly. Once I had migrated all the code to C++ and DirectX11 using Visual Studio 2012, I started to suffer from performance problems. The same game in C# and XNA had better performance, which was a cold reminder that the programming language is not always the decisive factor for overall performance. I was just about to throw everything out and start working from scratch with Unity3D (this time for good), when I read something about “instancing”. The more I researched it, the more excited I got. After implementing this feature in full, all the performance problems were gone. I got so “hyped” about it that I extended it to all 2D graphics as well.

By early December the migration of “A Snowy Slalom” was complete. The next steps were to ensure that it complied with Microsoft’s Windows Store policies, including support for touch screen. By the end of February, the game was released in the Windows Store as a “desktop app” game.

What Worked

  • The migration project was a success. Overall, the XNA and DirectX versions of the game deliver the same gaming experience, although the latter has better performance, as well as some camera enhancements applied during collision detection, a feature that was recently added to my game engine.
  • The music I wrote for this game is, by far, the best thing I have ever done. It’s an up-beat, old-style midi file, played with childish instruments, yet the tone is quite dramatic. It really fits the overall game style.
  • DrawIndexedInstanced(): Everybody who works with DirectX should read about this function. This is the one that fixed all my performance problems.
  • Fonts have “Intellectual Property”. A project released to the public usually has to pay royalties to font creators. Depending on the package, it is a $50 to $100 expense. Still, I was having so many problems displaying text that I decided to implement my own font. I mean, I was hitting a dead end, and the thought of still having to pay for it after all that pain was upsetting me. So, I took my stylograph pen out and created my own font. This was a stroke of luck, as at the time I didn’t know that DirectX has some serious performance issues when it comes to text.
  • I had problems identifying a system that would represent the minimum hardware requirements to support my game engine. However, at the beginning of 2015, I got an HP Stream 7 tablet for only $99 USD. This is the best device I could ever find for performance testing (not to mention the useful touch screen). I highly recommend using one of these devices for QA and load-test.
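As a hedged illustration of why DrawIndexedInstanced matters (the names below are mine, and the D3D11 plumbing of buffers and input layouts is omitted): all per-instance data is packed into one buffer, and a single call draws every copy.

```cpp
#include <cassert>
#include <vector>

// One entry per instance; here, a world transform per background tree
// (column-major 4x4 matrix flattened to 16 floats).
struct InstanceData { float world[16]; };

std::vector<InstanceData> buildTreeInstances(int count, float spacing) {
    std::vector<InstanceData> instances(count);
    for (int i = 0; i < count; ++i) {
        InstanceData& d = instances[i];
        for (int j = 0; j < 16; ++j)
            d.world[j] = (j % 5 == 0) ? 1.0f : 0.0f;  // start from identity
        d.world[12] = i * spacing;                    // translate along X
    }
    return instances;
}

// With D3D11, the buffer above is uploaded once and drawn with:
//   context->DrawIndexedInstanced(indexCountPerTree, instanceCount, 0, 0, 0);
// so 18 trees cost one draw call instead of 18.
```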

What Caused Pain

  • The main problem with DirectX is content management. I had to convert all images (png, jpg, bmp) to “dds” files because that is pretty much the only format that I was able to load.
  • If the graphics part of DirectX is complex, the audio part takes the winning prize, by far. I understand that XAudio2 is a low-level API, but it really took a big effort just to play a sound. Likewise, all sounds had to be converted from “wma” to “wav”. This almost quadrupled the installer size (from 16 Mbytes for the Xbox 360 to 63 Mbytes for the Windows Store).
  • I couldn’t make the C++ XML reader features work. That was a little bit of a shocker, as all my animations are stored in this format. I mean, normally, in a standard custom application, I would have linked the XML reader included in the .NET Framework. However, for Windows Store applications, it’s not that easy to identify what can be used and what should be avoided; it’s not documented anywhere. As a work-around, I had to create my own XML parser.
  • Managed classes in C++ can become a real pain. In most cases, it was much easier to use native structs and keep them loaded for the entire session rather than create, on every call, instances of managed classes and rely on the garbage collector to clean the mess.
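A work-around along those lines can be sketched as follows (a toy parser for a single well-formed tag, far simpler than anything production-ready; the names and the sample tag are mine):

```cpp
#include <cassert>
#include <cctype>
#include <map>
#include <string>

// Extract name="value" attribute pairs from a single XML tag such as
// <frame time="0.10" bone="LeftArm"/>. Handles only this simple,
// well-formed shape: no nesting, entities, or error recovery.
std::map<std::string, std::string> parseAttributes(const std::string& tag) {
    std::map<std::string, std::string> attrs;
    size_t i = tag.find(' ');  // skip past the element name
    while (i != std::string::npos) {
        while (i < tag.size() && std::isspace((unsigned char)tag[i])) ++i;
        size_t eq = tag.find('=', i);
        if (eq == std::string::npos) break;
        std::string name = tag.substr(i, eq - i);
        size_t q1 = tag.find('"', eq);       // opening quote of the value
        size_t q2 = tag.find('"', q1 + 1);   // closing quote
        if (q1 == std::string::npos || q2 == std::string::npos) break;
        attrs[name] = tag.substr(q1 + 1, q2 - q1 - 1);
        i = q2 + 1;
    }
    return attrs;
}
```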

What Didn’t Work

  • The Windows Store is not exactly crowded. The more I announced “A Snowy Slalom” for Windows, the more sales I got at the XBLIG marketplace. I’m starting to consider going back to the Xbox 360 platform.
  • The pricing scheme may not have helped at all. “A Snowy Slalom” sells for a dollar at the Windows Store, just like in the XBLIG marketplace. However, the audience at the Windows Store seems to be keener on downloading games for free, even if they have to deal with marketing ads or in-app content purchases.
  • I could not meet the deadline of December 2014. I published my game a couple of months after that, when winter was already halfway through. This is a serious drawback for a seasonal game. Maybe it would have been faster to focus on migrating this game from the beginning, instead of first migrating my game engine and then re-applying it to the game.
  • As an indie developer, I lost a little bit of popularity among my peers. It seems that not many people welcome the idea of creating one’s own game engine, and some of them are prone to defending their position a little bit too passionately. It’s really not that big of a deal; many people before me have created their own game engines as well. Also, I understand that what works for me may not work for somebody else.

2015

It is rather late to celebrate the start of a new year. However, on this day, I do commemorate the start of a new era, a new phase that comes after accepting that the good ol’ ways should be left in the past, and embracing new technologies that, even if they don’t make much sense at times, are what everybody is following. Let’s face it: An artist without an audience is pretty much a human who has wasted a lot of resources in vain. I want to make it clear, though, that it is not the spotlight that I crave. In fact, following everybody for the sake of going where everybody is, is not an attractive plan at all. But I have to admit that hearing every now and then that one of my creations has raised a smile, even for a moment, is a thought that encourages me to continue doing what I do.

For four years now, I have used XNA to bring dreams to life. It’s been quite a journey, and I’ve enjoyed each and every step. I will not be one of those naysayers who yells to all four winds that XNA is dead, running from stem to stern, carrying the lifeless head of the XNA framework in a morbid scene, attempting to move the masses towards my wallet’s convenience. Instead, I can attest that the projects I’ve created using this technology continue to work. The funny thing is that, the newer the system that runs them, the better they behave.

Still, the Xbox Live Indie Games market has shrunk so much that it has reached a point where it is no longer sustainable. Simply put, it just doesn’t make any sense to invest more in a channel that nobody is listening to anyway. Even the most loyal critics have moved on, looking for other sources to feast on.

Don’t get me wrong, though, as for months we have been working hard to jump to the next platform, to the point that we are close to unveiling our next release. It hasn’t come easy, as every new framework requires a different set of tools, and none of them come easy, or cheap. Regardless, stay tuned for our first step in the new beginning. It will not bring riches and fame, but it will surely raise some eyebrows, and maybe some witty comments from critics and fellow peers.