Wednesday, November 29, 2017

Making Santa’s Snowball part 1 - Sprite sheets and Preloading

My making of Santa’s Snowball was originally going to be a single article, but as I was covering so many of the various Create.js libraries, the article grew much longer than I had anticipated. This means it will be broken into multiple parts, which will be immediately followed by the making of game 2 in my Santa trilogy (to be announced next week), and that will probably be followed by the making of game 3. As a result, I will not be able to get back to my assembler until late December, though I will post progress reports if significant progress is made.

Santa's Snowball was an interesting project. I wanted a Christmas game for this year and noticed that three of my games formed a trilogy, so I decided to port all three to get into the Christmas spirit as well as to practice my Create.js skills, as I am working on a book on that subject and the more familiar I am with the library the better. More about the book in February.

I had performed a quick port of the game to Create.js using Animate, only to discover that the assets were huge when converted to JavaScript. The only asset I bothered keeping was the snowball. For everything else, I created an image atlas that held everything. An image atlas is simply a large image that contains other images, including sprite sheets, with the idea that you only load the single image. When dealing with 3D APIs such as OpenGL this is extremely desirable, as switching textures is very costly, so having everything on a single texture can greatly boost speed. The key thing to remember when creating an image atlas is to leave space between images, otherwise you may end up with fringes. I did not remember this, so there are some minor fringing issues in Santa’s Snowball, but nothing overly concerning.

The sprite sheet support in Create.js is great with the ability to specify complex animation sequences. The class even has support for spreading your images across multiple image atlases which could be an effective way of organizing complex character animation but not something I had to worry about.
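As a rough sketch of how those animation sequences are described, here is a hypothetical sprite sheet built on the shared atlas. The frame rectangles and animation names below are invented for illustration; they are not the actual Santa's Snowball data.

```javascript
// Hypothetical sprite sheet data built on the shared image atlas; the
// frame rectangles and animation names are invented for illustration.
const santaSheetData = {
    images: ["images/SantaSnowball_atlas.png"],
    // each frame is an [x, y, width, height] rectangle into the atlas
    frames: [[0, 0, 64, 64], [64, 0, 64, 64], [128, 0, 64, 64]],
    animations: {
        stand: 0,                          // a single frame
        walk: [0, 2, "walk", 0.25],        // frames 0-2, loops, quarter speed
        toss: { frames: [2, 1, 0], next: "stand" }
    }
};

// Only touch the library when it is actually loaded (i.e. in the browser).
if (typeof createjs !== "undefined") {
    const sheet = new createjs.SpriteSheet(santaSheetData);
    const santa = new createjs.Sprite(sheet, "stand");
    santa.gotoAndPlay("walk");
}
```

Because the frames all index into the same atlas image, any number of sprite sheets defined this way share the one preloaded file.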

Having just a single image that multiple sprite sheets would access meant that I could preload the image. As I had a bunch of sounds to load as well, the preload.js library would be useful. Sound.js is the Create.js sound library, and while sounds do not need to be preloaded, they will not play the first time they are called if they have not been. Preload.js is the preloading library and simply takes a manifest array to determine what gets preloaded. Events are generated for things such as loading a file, problems with a file, and completing the manifest. I simply captured events for the completion of the loading and for errors. It is important to point out that a file that cannot be loaded does not block completion of the preloading, so if there are files that you require, make sure to detect errors in loading. Here is my code for setting up the preloading.

function init() {
    // set up preload.js, registering the Sound plugin so audio is handled properly
    preload = new createjs.LoadQueue(true);
    preload.installPlugin(createjs.Sound);
    // preload.addEventListener("fileload", handleFileLoaded);
    preload.addEventListener("error", handleQueueError);
    preload.addEventListener("complete", handleQueueComplete);
    preload.loadManifest([
        {src:"images/SantaSnowball_atlas.png", id:"SantaSnowball_atlas"},
        {src:"sounds/SnowSplat1wav.mp3", id:"SnowSplat"},
        {src:"sounds/WishTest.mp3", id:"WishTest"},
        {src:"sounds/woosh1wav.mp3", id:"woosh"}
    ]);
}


Notice that there is an additional step that I did not mention in the prior paragraph: a plugin needs to be registered for sound, and not registering it can result in pounding your head against a wall trying to figure out why sounds won’t play. Other plugins can be installed for different types of data, so it is conceivable that you could create your own data type and write a plugin that handles it. A good example of where this would be handy is a game that uses maps or some other type of level data.
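For completeness, here is a sketch of what the two handlers referenced earlier might look like. The logging and the loadFailed flag are my own illustration, not the game's actual code; getResult() is how preloaded assets are fetched from the queue by their manifest id.

```javascript
// Illustration of the queue handlers; "complete" fires even when files
// fail to load, so the error handler records that something went wrong.
let loadFailed = false;

function handleQueueError(event) {
    loadFailed = true;
    console.log("preload error:", event);
}

function handleQueueComplete(event) {
    if (loadFailed) {
        console.log("loading finished, but some files did not load");
    }
    // In the browser, preloaded assets can now be fetched by manifest id.
    if (typeof createjs !== "undefined" && typeof preload !== "undefined") {
        const atlas = preload.getResult("SantaSnowball_atlas");
        // ... build the sprite sheets from the atlas and start the game
    }
}
```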
Another thing that could be done when creating image atlases is to break things into two types of atlases. Sprites, which have transparent areas, can be placed into PNG files, while solid backdrops can be placed into JPEG files, allowing for much better compression. This is not something I did with Santa’s Snowball, but it will be a consideration for future projects.

Next week we will take a look at how I assembled the seven scenes that make up the game.

Wednesday, November 22, 2017

Why Write an Assembler?

With the disassembler coming together so easily, and a nice table of op codes and their associated mnemonics already in place, it seemed to me that I had everything needed to write an assembler as well. This was not part of my original plans for the emulator, but having an emulator that can take source code and let you edit it while running the program would be nice. When I implement the interpreter I am going to need to write a lot of small chunks of assembly language and step over them to make sure things are working. A built-in assembler would greatly aid this work and would even be very handy for game development.

Looking at the structure of my table, I realized that looking up mnemonics and finding the associated OP codes would be trivial to implement. A map of mnemonics can be created, with each mnemonic having a list of the different variants of the instruction and their associated OP codes. If you know the mnemonic and the address mode, finding the OP code is simply a traversal of that list. Here is the code for generating the map.

for (inst in m6502.commands) {
    if (mapOfOpCodes.containsKey(inst.OPString))
        mapOfOpCodes[inst.OPString]!!.add(inst)
    else
        mapOfOpCodes.put(inst.OPString, arrayListOf<M6502Instruction>(inst))
}

Traditional 6502 assembly language is not that difficult to parse. Or at least it does not appear to be. It should be possible to write a simple tokenizer and a simple parser to break down the assembly language and determine which address mode is being used. Of course, there are other aspects to assembling, such as directives and labels, but how hard can they be? As it turns out, a bit harder than I expected, but not really that hard.
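As a rough illustration of that idea (in JavaScript rather than the Kotlin the project actually uses, and with simplified rules that ignore labels and zero page detection), classifying the address mode can be little more than pattern matching on the operand text:

```javascript
// Toy address-mode classifier: the mnemonic is the first token and the
// shape of the operand text determines the mode. A real assembler would
// also check the operand size to pick zero page variants.
function classifyOperand(operand) {
    if (operand === "")  return "IMPLIED";
    if (operand === "A") return "ACCUMULATOR";
    if (operand[0] === "#") return "IMMEDIATE";
    if (/^\(\$?[0-9A-F]+,X\)$/i.test(operand)) return "INDIRECT_X";
    if (/^\(\$?[0-9A-F]+\),Y$/i.test(operand)) return "INDIRECT_Y";
    if (/^\(\$?[0-9A-F]+\)$/i.test(operand))   return "INDIRECT";
    if (/,X$/i.test(operand)) return "ABSOLUTE_X";
    if (/,Y$/i.test(operand)) return "ABSOLUTE_Y";
    return "ABSOLUTE";
}

function parseLine(line) {
    const trimmed = line.trim();
    const space = trimmed.indexOf(" ");
    const mnemonic = space < 0 ? trimmed : trimmed.substring(0, space);
    const operand = space < 0 ? "" : trimmed.substring(space + 1).replace(/\s+/g, "");
    return { mnemonic: mnemonic.toUpperCase(), mode: classifyOperand(operand) };
}
```

So parseLine("LDA #$10") classifies as immediate while parseLine("JMP ($1234)") classifies as indirect, which together with the mnemonic is enough to walk the map above and find the OP code.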

While the assembler was not in my original design goals, with the above code coming to mind while I was still writing the disassembler, I managed to convince myself that this would be an interesting path to explore. As I write this I have the assembler functioning and a partially implemented solution for labels, so it is well on its way. Development on it is now slow, as I am using most of my spare time to finish my Santa trilogy. The first of the three games will be posted this Friday since, even in Canada, Black Friday is when the Christmas season begins. Or at least the shopping part.

Next week will be a look at the game, which was developed without Adobe Animate CC, though still using the Create.js libraries. If you don’t have money to spend on Creative Cloud but have ActionScript or Flash experience, then this is actually a viable path. Even without Flash experience, Create.js is a good library to use for JavaScript games. More on that next week.

Wednesday, November 15, 2017

Disassembling the Disassembler

Writing the disassembler turned out to be even simpler than I expected. I had expected the work to be a bit time-consuming, as no matter which route I went with I would need to deal with 56 different instructions, many of them supporting several address modes. There are various approaches that can be taken for disassembling instructions. For processor architectures such as the SPARC, there are very specific bit patterns that make up the instructions. A look over the instructions shows that this is probably true of the 6502 as well, but with 56 valid instructions and only 256 possible values, a simple table approach seemed to be the way to go.

The table approach sets up all the information as a table. By having a function pointer or lambda function in the table, you could also set it up to be able to do the interpretation as well. This isn’t really that inefficient either as it is a simple table lookup which then calls a function that does the interpretation work. The bit approach would be a lot messier and with so few possible outcomes it is not overly cumbersome to create. A more complex processor would be a different story but for this project I will go with the table. Here is the format of the table:

OP Code
The number assigned to this operation. While not technically needed here, it is a good idea to have to make sure the table is complete and it will be needed if an assembler is desired in the future.
Op String
The mnemonic or 3 letter word used to describe the instruction.
Size
How many bytes (1 to 3) the instruction uses.
Address Mode
How memory is addressed.
Cycles
The base number of cycles for the instruction. Things such as crossing page boundaries or whether a branch is taken will add to this value.
Function
The code that handles the interpretation of this instruction.
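In JavaScript (the project itself is written in Kotlin, and the field names here are my own), one entry of such a table might look like the following; the whole table is just 256 of these, with null marking unofficial OP codes:

```javascript
// Sketch of a table-driven decoder: one entry per possible op code,
// carrying the fields described above plus an interpretation function.
const commands = new Array(256).fill(null);

commands[0xA9] = {
    opCode: 0xA9,
    opString: "LDA",
    size: 2,            // opcode byte plus one immediate byte
    mode: "IMMEDIATE",
    cycles: 2,
    execute: (cpu, operand) => { cpu.a = operand; }
};

// Decoding is then just an index into the table.
function decode(opCode) {
    return commands[opCode]; // null would mean an unofficial instruction
}
```

Both the disassembler and the interpreter can share this one table: the disassembler uses opString, size, and mode, while the interpreter calls execute.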

Disassembling then becomes simply the matter of looking up the instruction then based on the address mode printing out the value or address that it is working with. There are 14 address modes that I came up with as follows:


The meanings of the individual values in the enumeration are outlined in the following table. This will become important when the interpreter portion of our emulator starts getting implemented.
Absolute: Specifies the address that will be accessed directly.
Absolute,X: The address specified with an offset of the value in the X register.
Absolute,Y: The address specified with an offset of the value in the Y register.
Accumulator: The value in the Accumulator is used for the value.
Future: Unknown address mode as the instruction is not official. For the instructions that I end up having to implement, this will be changed as necessary.
Immediate: The value to be used is the next byte.
Implied: The instruction tells you what register(s) it uses and those are what get used.
Indirect: Use the address located in the address this points to. So if this was JMP (1234) then the value at 1234 and 1235 would be the address to jump to.
Indirect,X: The next byte is a zero page address. The X register is added to this value. That byte and the one following it are then used to form the address to jump to.
Indirect,Y: The next byte is a zero page address. It is the low byte and the following zero page byte is the high byte to form the address. The value in the Y register is then added to this address.
Relative: An offset to jump to (relative to the next instruction) if the branch is taken.
Zero Page: Use a zero page address (0 to 255, so only one byte is needed).
Zero Page,X: Zero page address with the value of the X register added to it.
Zero Page,Y: Zero page address with the value of the Y register added to it.

Calculating the addresses is easy, but may seem strange to people used to big endian architectures. For addresses, the first byte is the low order byte, followed by the high order byte. This means that the address is first + 256 * second. For branching (relative addressing), the target is the start of the next instruction plus the value passed (-128 to 127).
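As a quick sketch, the two calculations described above look like this (the function names are mine):

```javascript
// Little-endian absolute address: the low byte comes first in memory,
// so the address is first + 256 * second.
function absoluteAddress(low, high) {
    return low + 256 * high;
}

// Relative branch: a signed byte (-128 to 127) added to the start of
// the next instruction, which is 2 bytes past the branch instruction.
function branchTarget(branchAddress, offsetByte) {
    const signed = offsetByte < 128 ? offsetByte : offsetByte - 256;
    return branchAddress + 2 + signed;
}
```

So the bytes 34 12 after an op code form the address $1234, and a branch offset byte of $FE (-2) loops a branch back onto itself.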

Next week will be a look at my assembler decision with some hindsight about the process, as I have nearly finished the assembler.

Wednesday, November 8, 2017

Test Driven Disassembly

When I first started programming, the procedure was simple: you would write the program and then you would test it. The testing was generally manual, simply making sure that the program did what you wanted. This is fine when working on small or personal projects, but when projects get larger it is not a good way of doing things. Changing things can cause code to break, but as the breakage is not tested for, it can go unnoticed for a long time, and when discovered it requires a lot of effort to find and fix.

The idea of automated testing helps solve this problem by making the testing process easy: one just needs to run the tests after making changes to see if anything is broken. This does require that the tests exist, which can be a problem, as writing tests after the code has been completed makes writing them a chore that can be skipped if one is behind schedule. It also has the problem that the tests only cover what is already known to work.

Test driven development moves the testing to the top of the development loop. This has the advantage that the tests are written before the code so code always has test code. You then make sure that the tests fail and then write the code and get the code to pass the tests. You also have the advantage of thinking about how exactly you are going to test things and may uncover issues before you have even started writing the code. A comparison of the three methods is shown in the flowcharts below.

As with pretty much every approach to programming, dogmatism can take over, and the advantages of test driven development can quickly be replaced by useless burdens. If you find yourself having to write tests for basic getters and setters, then you have fallen into the dogmatism rabbit hole. I have been taking a middle ground with my work, coming up with tests before writing code. As some things are simply too difficult to write automated tests for, especially non-deterministic programs such as many games, manual testing is an option as long as you have a clear test plan. Automated testing is the preference, as those tests are always run, so problems are detected earlier.

For my disassembler, the test is simply being able to disassemble known code into the proper instructions. My original plan for the assembly code was to write some test assembly language that would cover all the instructions with the various address modes for the instructions. The code wouldn’t have to perform anything useful, just cover the broad range of assembly instructions. This got me thinking about non-standard instructions.

The 6502 has future use operation codes (OP codes) that will actually do things when used. As this functionality is unofficial, using such instructions is not wise since their presence is not guaranteed, but some programmers would use these instructions if doing so saved memory or cycles. As I do want my emulator to work with at least some real cartridges, which may use unofficial instructions, I need my disassembler to detect these instructions and alert me to their use so I can figure out what each instruction does and implement it in the future.

This means that all 256 possible OP codes need to be tested. As I want to be able to disassemble from any arbitrary point in memory, the test could simply be generated procedurally. My test memory was filled with the numbers 0 to 255, so if I set the disassembly address in sequence I would have all the instructions, with the subsequent bytes being the address or value to be used. The fact that the instructions are of different lengths was not a big deal, as I would be manually controlling the disassembly address. The list of instructions is something I already have, so creating the expected-result list to compare against was very simple.
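The scheme can be sketched as follows; disassemble() and the expected list here are stand-ins for the project's real disassembler and hand-checked result list:

```javascript
// Test memory holds the bytes 0..255 repeating, so disassembling at
// address n always starts at op code n, with the following bytes
// serving as the operand for that instruction.
const memory = new Uint8Array(65536);
for (let i = 0; i < 65536; i++) memory[i] = i & 255;

// Run the disassembler at each of the 256 starting points and compare
// the text produced against the hand-built expected list.
function runDisassemblerTest(disassemble, expected) {
    const failures = [];
    for (let opCode = 0; opCode < 256; opCode++) {
        const text = disassemble(memory, opCode);
        if (text !== expected[opCode]) {
            failures.push(opCode + ": got '" + text + "', expected '" + expected[opCode] + "'");
        }
    }
    return failures; // empty means every op code disassembled as expected
}
```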

When running the test, if there is an error it is still possible that my expected list is incorrect, but that is easy enough to determine. Once the disassembler can handle the complete list it is probably working, so as far as tests are concerned this is a good approach. Once I wrote my disassembler I did have to fix the list, but I also found some real issues, so overall the test did its job. Next week I will go into the disassembler, which was surprisingly easy to write.

Wednesday, November 1, 2017

Halloween Scratch Postmortem

I have made huge progress on the emulator, finishing the preliminary disassembler and starting work on an assembler, so the next few months’ worth of blog posts will be catching up with where I am at with the project. The assembler was not part of my original plans, but after getting the disassembler working it didn’t seem like it would be hard. It turns out to be a bit more complex than I expected, but still worth doing. As soon as I get the assembler to a stable state (hopefully by next Wednesday’s post) I will post the code that I have written. I haven’t decided where the code will be hosted yet. Since I am so far ahead, I will spend a bit of time on my Christmas trilogy, so I may be posting three games over the next couple of months. But this week is the postmortem of my Halloween game.

Halloween Scratch was not originally going to be this year’s Halloween port. While porting the vector assets of most of my Flash games into Create.js compatible JavaScript, however, it became apparent that some games benefit from being developed in Adobe Animate CC (hereafter referred to as Animate) while others only benefit from having their vector graphics converted into Create.js code. Animate is not the cheapest of software, so with my license running out this month I decided that I would not be renewing it for a while, as I don’t really need it. Halloween Scratch was very animation oriented and a simple game, so finishing it in Animate while I still had the software made sense.

Halloween Scratch is a lottery ticket game where instead of winning a prize, you win a monster. There were other lottery ticket games at the time that were just click to reveal, and I wanted to demonstrate to a potential client that you could have a much closer feel to scratching a real lottery ticket.

What Went Right

The animations worked pretty much flawlessly, so very little work had to be done there, other than for the alien, whose transport effect used color filters. Color filters in Animate are costly, so they are cached, which means you need to either force the image to be re-cached or come up with some other way of doing it. Simply creating images in the colored states (create a copy of the image, then apply the color effect) was all that was needed. If your game revolves around animation effects, then using Animate is helpful. Most of my games are more code oriented, so I am not sure it is worth it.

What Went Wrong

I had some strange issues with children in a container. I am not sure if this behavior stemmed from Animate, from Create.js, or from JavaScript itself. What was happening was that I used the target of the event listener to hide the dots that make up the scratch cover. There was a reset cover method that should have reset all the children to visible but even though it thought it was doing so, nothing was happening on the screen so already scratched off areas remained invisible. I am not sure why this was happening but was able to get the display list to properly reflect the visibility of a dot by accessing the dot through the dots array instead of the event target. It should not matter which reference of the object has its visibility changed yet in this case it does. I suspect this is one of the “this” pointer related issues that I always run across in JavaScript.
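In simplified form (plain objects standing in for the Animate-generated display objects, and the function name is mine), the fix amounts to resetting visibility through the original array rather than through references captured from event targets:

```javascript
// Reset the scratch cover by walking the same dots array that was used
// to build it, rather than relying on references saved from events.
function resetCover(dots) {
    for (let i = 0; i < dots.length; i++) {
        dots[i].visible = true; // same property the scratch handler hides
    }
}
```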

Mixed Blessings

I think the original game did an excellent job of representing the feel of a lotto ticket. Unfortunately, this was originally an ActionScript 1 game, so the scratch code had to be pretty much re-written from scratch. I had hoped that roll-over events would be detectable on tablets, allowing tablet users to scratch the ticket. This was not the case, with the browser window being scrolled around instead. To solve this I added click support, so touching a tile reveals a block of the image. Not the best solution, but it does allow tablet users to play the game. An interesting side effect is that a tile can only be clicked if it has not been removed yet, so computer users are pretty much forced to scratch the ticket.

Overall, Animate is a good tool for porting animations and vector artwork to JavaScript, but once that is done the coding work is easier to do in other tools, making Animate more of a utility than a development environment. It is still a really good tool for creating animations, so it would not surprise me if I end up renting it again in the future, but for my porting work I am pretty much finished with it. Create.js is a workable solution for porting Flash games to HTML5, but ultimately you are working with JavaScript, which is not the greatest of languages to work with.