Tired of jumping through hOOPs

Everybody who writes code, whether they’re self-taught or took classes in school, encounters the concepts of object-oriented programming. It’s in every book, it’s behind most APIs, and it’s been the focus of language design for the past 20-25 years. In both academia and industry, the conventional wisdom is that OOP is the best, maybe the only, way to manage a large codebase. But that hasn’t been my experience, and it hasn’t been the experience of many others, and I think many coders, especially in game programming, are starting to look for alternative approaches to code organization. What I want to do here is go over a couple of aspects of OOP that I have personally found to be roadblocks to getting things done, and then look at some alternative approaches.

OOP vs Experience

The problems engendered by OOP vary from person to person, but in my experience there are two main tenets of OOP that keep causing problems when I use them for the construction and organization of code: 1) modelling after real-world objects, and 2) the insular functioning of objects. The first is problematic because aiming to create code that models real-world objects almost always leads to unnecessary abstractions that conceptually muddle the aims and functioning of the program. Our code doesn’t work with “real-world” objects. It works with data: inputting data and outputting data. Of course, wrapping data into abstractions is almost always beneficial at some point, e.g. when working with a Color object is preferable to working with independent RGBA floats, but the abstractions should come about naturally from the needs of the code, rather than being designed up front from what we expect the code to be. I feel like starting a project with a bunch of UML diagrams ignores the fact that I don’t know what the final state of the code will be, even if I know what the program will do.

The second tenet that I find problematic probably has a better technical name, but what I mean by “insular functioning of objects” is that typical OOP takes the mindset of working on one object at a time, usually in a hierarchy. I’m thinking of a scene graph, for example: I take one entity and update it. Then I take each child entity, update those, and then follow their child entities, and so on, until I’ve covered the entire graph. This approach has some advantages, but I think that when I’ve followed this design, I’ve done so because that’s the abstraction that I was starting with, not because the data I was working with was hierarchical. In fact, what I usually find is that the code only needs to work with parts of entities at one time. For example, I need just the position and velocity to update position. Or I just need the timekeeping data and the textures to do the animation. But the code almost never needs the entire entity with all its member data. This approach is closer to how we normally handle particle systems. And the big reveal here, at least for me, is that the easiest, most direct thing to do is almost always the particle system approach, while typical OOP would have us starting with a scene graph.
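To make the particle-system mindset concrete, here’s a minimal sketch (the array names are illustrative, not from any real codebase): the update touches only the data it actually needs, positions and velocities, and never pulls in a full entity object.

```javascript
// "Particle system" style update: operate on just the data the step needs.
function updatePositions(positions, velocities, dt) {
  for (let i = 0; i < positions.length; i++) {
    positions[i].x += velocities[i].x * dt;
    positions[i].y += velocities[i].y * dt;
  }
}

const positions = [{ x: 0, y: 0 }, { x: 10, y: 5 }];
const velocities = [{ x: 1, y: 2 }, { x: -1, y: 0 }];

updatePositions(positions, velocities, 0.5);
// positions[0] is now { x: 0.5, y: 1 }
```

Nothing here knows or cares whether the entities also have textures, health, or children in a graph; the position step sees positions and velocities, full stop.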


Component-Entity-System: The component-entity-system (CES) approach was my first step away from the traditional BaseClass → SubClass → SubSubClass inheritance model, and it has saved me many, many hours of frustration. Adding data & functionality through inheritance leads to tightly coupled systems, and this makes growing the code virtually impossible. CES, on the other hand, keeps entities as plain containers of components, adds data through components, and adds functionality through systems. Components can then be added to and removed from entities as needed. Systems typically keep track of components and do the work of updating them. Plenty of people are writing about CES; I found this to be a great introduction, and this to be really useful for getting into more details.
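A minimal sketch of the idea, with invented names: entities are just ids, components are plain data keyed by entity id, and a system iterates over exactly the components it cares about.

```javascript
// Entities are ids; components live in maps keyed by entity id.
let nextId = 0;
const createEntity = () => nextId++;

const positions = new Map();  // entity id -> { x, y }
const velocities = new Map(); // entity id -> { x, y }

// The movement "system": updates every entity that has BOTH components.
function movementSystem(dt) {
  for (const [id, vel] of velocities) {
    const pos = positions.get(id);
    if (pos) {
      pos.x += vel.x * dt;
      pos.y += vel.y * dt;
    }
  }
}

const player = createEntity();
positions.set(player, { x: 0, y: 0 });
velocities.set(player, { x: 3, y: 4 });

const scenery = createEntity();          // has a position but no velocity,
positions.set(scenery, { x: 7, y: 7 }); // so the movement system skips it

movementSystem(1);
// player moves to (3, 4); scenery stays at (7, 7)
```

Notice that giving an entity new behavior is just `positions.set(...)` / `velocities.set(...)`, with no subclassing anywhere.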

Data-Oriented Design: Data-oriented design (not to be confused with data-driven design) starts with the premise that the purpose of writing code is to move and manipulate data, and that any abstractions not needed for that purpose are unnecessary at best, and time-consuming distractions at worst. Data-oriented design typically has a two-fold benefit: it clarifies the purpose of the code you’re writing, and it (potentially) makes the code much faster through optimized CPU cache usage. If you remember from my last post, we’re currently working in JavaScript and PHP, so optimizing for the CPU cache isn’t something I can do at present. But I do find thinking in terms of “get this data in” and “get this data out” to be useful. It keeps me from writing code for the sake of better abstractions. For more information, I like this article and this talk. In the talk, Mike Acton spends a lot of time going over ways to think better about cache usage, which is (really) useful if you’re working in C/C++, but he also spends some time on modelling real-world objects vs modelling the data.
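Even without control over the cache, “data in, data out” can be a plain transform with no classes in sight. A sketch, with invented names; the typed array is about as close as JavaScript gets to the contiguous layouts the talk discusses.

```javascript
// A pure "data in, data out" transform: raw values in, raw values out.
function scaleDamage(baseDamage, multiplier) {
  const out = new Float32Array(baseDamage.length);
  for (let i = 0; i < baseDamage.length; i++) {
    out[i] = baseDamage[i] * multiplier;
  }
  return out;
}

const damage = scaleDamage(new Float32Array([10, 20, 30]), 1.5);
// damage is Float32Array [15, 30, 45]
```

The function states exactly what data goes in and what comes out; there’s no object whose internal state has to be reasoned about.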

Compression Driven Programming: Compression driven programming is also known as semantic compression, and it is founded on the idea that the structures in your code should emerge from their usage, as opposed to being designed up front. The concept of compression here is the same as in GZIP compression. Start by writing your code to do what needs to be done, without functions or data structures. When you find that some code gets used more than two or three times, that’s a candidate for “compression”: pull it out and make it a function. Likewise, when you find that some data always gets passed around together (like the RGBA floats of a color), that’s also a good candidate for “compression”: pull it out and make a struct or class. Following this approach ensures that you only have the abstractions that make sense in your code. The article that everyone references is this one by Casey Muratori. I think this commentary on that article does a good job of pointing out that there are good and bad ways to compress and come up with abstractions. And finally, in this video (also by Casey Muratori), he talks about how to use compression driven programming throughout a project to work towards an unknown goal. (That video is kind of long, so I deep-linked the relevant discussion, which lasts about 10-12 minutes.)
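Using the RGBA example, a sketch of what one round of compression looks like (all names here are hypothetical): the Color abstraction is pulled out only after the same four floats keep travelling together.

```javascript
// Before compression: r, g, b, a get passed around as loose values.
function blendLoose(r1, g1, b1, a1, r2, g2, b2, a2, t) {
  return [r1 + (r2 - r1) * t, g1 + (g2 - g1) * t,
          b1 + (b2 - b1) * t, a1 + (a2 - a1) * t];
}

// After compression: the repetition itself tells us a Color
// abstraction has earned its keep.
function makeColor(r, g, b, a) {
  return { r, g, b, a };
}

function blend(c1, c2, t) {
  return makeColor(c1.r + (c2.r - c1.r) * t,
                   c1.g + (c2.g - c1.g) * t,
                   c1.b + (c2.b - c1.b) * t,
                   c1.a + (c2.a - c1.a) * t);
}

const mid = blend(makeColor(0, 0, 0, 1), makeColor(1, 1, 1, 1), 0.5);
// mid is { r: 0.5, g: 0.5, b: 0.5, a: 1 }
```

The point is the order of events: the abstraction was earned by the code that existed, not sketched in a diagram beforehand.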


The alternatives here certainly aren’t mutually exclusive. In fact, I think we can take key ideas from each to become more effective and efficient at creating code. A simplistic but useful way to look at these would be to say that Semantic Compression tells us how to identify code and data that’s worth abstracting. A Component-Entity-System approach gives us a flexible framework for organizing those abstractions without getting tangled up in an inheritance hierarchy. And Data-Oriented Design gives guidance on how to work with the abstractions in a straightforward way that works well with memory.

OOP probably has its place somewhere. Maybe in GUIs? The bottom line, of course, is to do what works best for you. Just don’t let conventional wisdom lead you down a rathole of theoretical correctness when clean, working abstractions can be had much more simply.

A Daring Move:
Rewriting the Prototype from Java to JavaScript


The Gambit

This week we decided to rewrite the client prototype and change its run target. The prototype isn’t even done, and we’re already rewriting. That’s usually taken as a bad sign in a project, but I think the rewrite is going to pay off. That’s the gambit.

I wrote the initial code in Java, and it was targeting the console. When the prototype was ready, I would send a JAR file to Andrew, and we could start work on refining and balancing game mechanics. Now the prototype will be in JavaScript, and the text output will be written to a <div> instead of a terminal. It’s the same basic idea, so why the rewrite? These are the pros and cons.


The Pros

  • Faster Playable Iterations
    I know there will be bugs that need to be fixed. And I know we’ll want to try different settings for the game. Making changes playable in the JAR means: building the JAR, finding it on the filesystem, sending it through email or Dropbox, and waiting for Andrew to download and run. Making changes playable on the web means hitting save and telling Andrew to reload the page.
  • More Configurable Output
    Doing a console version was supposed to focus on game mechanics, as opposed to graphics, animations, UI, etc. Good idea in theory, but I ran into some problems.
    For example, there’s no universal way to clear the terminal in Java. The best you can do is check the OS at runtime and manually send the command: Runtime.getRuntime().exec("cls") for Windows, Runtime.getRuntime().exec("clear") for everyone else. But that works neither in the IDE output nor in mintty. It’s not a huge deal to have infinitely scrolling text, but compare that to document.getElementById('id').innerHTML = ''. Advantage HTML.
  • A Weak, Dynamic Type System
    This is something that usually drives me crazy about JavaScript, but this early in the process, I can see the advantage. The server-side code is changing every day, and with JavaScript I can just take whatever JSON string the server sends back and start using the values. In Java, whenever the server sends back new data, I need to change class definitions, track fields that get renamed, parse String to int, etc.
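Here’s the flexibility in miniature; the field names are made up, but the pattern is exactly why the loose typing helps while the server is in flux.

```javascript
// Whatever fields the server adds, the client can read them immediately —
// no class definition to update, no String-to-int parsing.
const response = '{"playerId": 7, "gold": 120, "newField": "added today"}';
const state = JSON.parse(response);

console.log(state.gold);     // 120, already a number
console.log(state.newField); // usable the moment the server sends it
```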


The Cons

  • The Rewrite Itself
    The rewrite took about 2 days, and that time could have been spent on new functionality.
  • A Weak, Dynamic Type System
    What’s an advantage in the earliest stages is also a drawback once the code starts getting a little complex. There’s no way to see what’s really in an object until runtime. This gets so much worse with 3rd party libraries. What’s in the jqXHR object, for example? The jQuery docs here are really frustrating. I kind of just want a list of member variables and member functions, and it’s easier to get that from Chrome’s Dev Tools than from that documentation.
    A stronger type system lets a Java IDE provide most of this information, but I haven’t found one that can do the same for JavaScript. Maybe this is why people like WebStorm so much—I haven’t tried it. If it can do that, then it’s probably worth the $50. Anyway, </rant>.


So we obviously thought the move would be beneficial enough that we decided to commit the time to it. Fortunately nothing unforeseen popped up, and the rewrite didn’t take too long. I think the trickiest part was adapting the logic from Java’s synchronous (new URL(url)).openConnection().getInputStream() to jQuery’s asynchronous $.getJSON(url, callbackFunction).
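The shape of that change, sketched with a stand-in for jQuery’s $.getJSON (no network here; fakeGetJSON just invokes the callback with canned data): everything that needs the result has to move into the callback.

```javascript
// Java-style synchronous flow, for contrast:
//   InputStream in = (new URL(url)).openConnection().getInputStream();
//   GameState state = parse(in); // next line can use the result directly

// JavaScript-style asynchronous flow — a stand-in for $.getJSON:
function fakeGetJSON(url, callback) {
  callback({ turn: 3, phase: "draw" }); // pretend server response
}

let latestTurn = null;
fakeGetJSON("/game/state", function (state) {
  latestTurn = state.turn; // logic that used to FOLLOW the call goes HERE
});
```

Adapting the logic mostly meant turning straight-line sequences into these nested continuations.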

So what do you think? Are there pros or cons that I missed? Should I have just used Java Web Start instead? (Just kidding.) Feedback is welcome. Hit us up in the comments, or @robotfriendgamz.


Making It Multiplayer

The Prince is going to be multiplayer. It is, by its nature, a multiplayer game. This is part of what makes the game work well, but it’s also going to be one of the more challenging aspects of development, at least from my perspective as the coder. And since multiplayer is central, it’s going to need to be in the prototype from the get-go.

Actual server-side PHP

Adding multiplayer changes the game from being a complex, interactive application to being a complex, interactive application running on top of a state machine. Any action taken by the local player needs to be sent to the server, and any action taken by the remote player needs to be read from the server. The game is based on time-limited turns, so this back and forth needs to be coordinated throughout the entire game.

Since the game is turn-based, I won’t need to deal with the usual multiplayer difficulties, like serializing packets, using dead reckoning, optimizing my protocol, etc. Instead, I can just use TCP to ensure packet delivery and structure the data as JSON strings. Then my main concerns are synchronizing time and getting the logic right.
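As a sketch of what one turn’s worth of data might look like on the wire — the actual protocol isn’t settled yet, so every field here is invented:

```javascript
// A hypothetical turn message: TCP handles delivery, JSON handles structure.
const outgoing = {
  gameId: 42,
  player: "andrew",
  turn: 5,
  actions: [{ type: "move", unit: 3, to: { x: 2, y: 1 } }],
  sentAt: 1700000000000 // client timestamp, for the time-sync bookkeeping
};

const wire = JSON.stringify(outgoing); // what actually travels
const incoming = JSON.parse(wire);     // what the other side reconstructs
// incoming.actions[0].to is { x: 2, y: 1 }
```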

I don’t have too many implementation specifics to offer yet because the work isn’t complete. What has been done on the server side is written in PHP, with MySQL backing it for storage. I realize that going to the DB on every call may not be fast enough in the future, but I’ll cross that bridge when we come to it. In the meantime, it’s familiar, (relatively) easy to work with, and it’s more than enough for these early prototyping days.