When I program, I try to follow this pattern: get it working, get it working correctly, and then, if necessary, get it running fast. Premature optimization is one of the bigger problems that programmers face. Optimized code is often hard to read and is an ideal spot for bugs to lurk. What makes premature optimization an even worse habit is that far too often you end up spending time optimizing the wrong code (not the thing that is actually making the program slow) or optimizing code that you are going to replace later. This is why optimizing after you have finished something makes the most sense.
After writing my C++ memory system for my emulator project, I realized that I really didn’t like the code. Several professors that I have had would call this a code smell. The thing is, I really didn’t know why I didn’t like the code, just that it didn’t feel right. The subconscious is really good at determining when something isn’t right, but it feeds this information to the conscious mind in the form of vague feelings. I have learned that it is best to listen to these feelings and work out what your subconscious is trying to tell you.
My initial thought on the problem was that the code was not going to be efficient. With an emulator this could be a big concern, as poor performance on memory operations would drag down the overall performance of the emulator. This led me to think about other ways I could handle the memory, and I realized that I was prematurely optimizing the problem. This, however, may be a case where that is a good thing. The memory subsystem will be used by everything in the emulator, so making sure the interface is locked down is important.
The big issue with 2600 memory management is that I need to track reads and writes. If I only had to track writes, then memory could be a fast global array with only writes needing to go through a function. This got me researching the various 2600 bank switching schemes to see whether any of them need to handle switching on a read. The most common bank switching scheme does the switch on an LDA instruction, so that approach will not work. Since tools for refactoring code have improved immensely, making such drastic changes to the code later may not be that big of a deal, so I decided to leave things alone and port the existing code to Kotlin.
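To make concrete why a plain global array will not cut it, here is a minimal Kotlin sketch of read-triggered bank switching, assuming the common F8-style scheme with hotspots at $1FF8/$1FF9; the class and names are illustrative, not the emulator’s actual code.

```kotlin
// Illustrative only: a minimal F8-style cartridge where merely *reading*
// a hotspot address flips the active 4K bank. Because something like
// LDA $1FF9 must trigger the switch, reads have to go through a function
// rather than indexing a raw array.
class BankedRom(private val rom: ByteArray) {   // expects an 8K image
    private var bankOffset = 0                  // start in the first 4K bank

    fun read(address: Int): Int {
        when (address and 0x1FFF) {             // cartridge space is mirrored
            0x1FF8 -> bankOffset = 0x0000       // hotspot: select bank 0
            0x1FF9 -> bankOffset = 0x1000       // hotspot: select bank 1
        }
        return rom[bankOffset + (address and 0x0FFF)].toInt() and 0xFF
    }
}
```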
While re-writing the code in Kotlin, I realized that I may have been over-complicating things. In C++, the cartridge loader class would be passed to the A2600 class (the machine), which would then call the cartridge loader's install code, which would tell the A2600 which memory manager to use. The A2600 - specifically the TIA emulator and the 6502 emulator - would access memory by calling the MMU class, and if the access resulted in a bank switch, the MMU would call the cartridge loader to adjust the banks. By having the memory accesses go through the cartridge and having the MMU built into the cartridge (it could still be a separate class, but I don’t think that is necessary at this point), things are much easier, as this picture shows. This change should make any future optimization easier, relieving me of most of my concerns.
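A rough sketch of what that simplification might look like in Kotlin; the interface and method names here are my assumptions for illustration, not the project’s actual API. The cartridge owns its own bank-switching logic, and the machine simply delegates cartridge-space accesses to it:

```kotlin
// Hypothetical sketch of the simplified wiring: the cartridge handles its
// own bank switching internally, so no separate MMU callback is needed.
interface Cartridge {
    fun read(address: Int): Int             // may switch banks as a side effect
    fun write(address: Int, value: Int)     // may switch banks as a side effect
}

class A2600(private val cartridge: Cartridge) {
    // On the 2600, address line A12 high selects the cartridge; RAM, TIA,
    // and RIOT dispatch is omitted to keep the sketch short.
    fun readMemory(address: Int): Int =
        if ((address and 0x1000) != 0) cartridge.read(address)
        else 0  // TODO: RAM / TIA / RIOT dispatch

    fun writeMemory(address: Int, value: Int) {
        if ((address and 0x1000) != 0) cartridge.write(address, value)
        // TODO: RAM / TIA / RIOT dispatch
    }
}
```

Keeping the bank switching inside the cartridge means the TIA and 6502 emulators never need to know which scheme is in use, which should also make swapping in a faster implementation later much less painful.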
While I am now starting my disassembler, or at least writing the tests for it, next week will be a postmortem of my Halloween game, which will be released this weekend. It is a port of a really old “game” that I did, and even though it is pretty low in the poll results, it is a cute game and would be easier to port now than later (more on why next week).