16 October 2019

Find Your Car Development Log 1: Wait What Now?

Any recent visitors to this blog may have noticed that I haven't posted anything in a few months. To oversimplify, I don't pride myself on my time management skills and I've had more than enough on my plate to overwhelm what little skill I do have without adding blogging to the mix.
ShipBasher is still in "active" development, i.e. not abandonware at this time, but I've flitted temporarily to another project.
Have you ever been lost in a parking garage, perhaps late at night, searching for your car? Did you walk for what felt like hours, through seemingly endless levels and zones, all starting to blend together and look the same, and none giving any clear indication whether you were getting closer to your car or just more lost?
While playing with a test build of my Procedural Universe chunk system, I and some of my contacts noted that it evoked a massive parking garage. Before long, an idea took shape to capitalize on this in an art game that would double as a more entertaining demo, which for lack of a better candidate I have given the self-explanatory name "Find Your Car."
Originally I considered naming the game "The Interview" for reasons that become evident below, but unfortunately the name was taken by a game actually about an interview.
In "Find Your Car," the player is to spawn in a car (wow!) within an endless, procedurally generated parking garage. The advertised goal at this point is to find a nice parking space, anywhere the player likes. Once parked, the player leaves the car.
Upon leaving the car, the player is alerted that an important job interview is about to start and must hurry to it, racing the other candidates on the way. These take the form of simple NPCs (i.e. zombies) that chase the player while moving toward the interview location: an unknown point some distance away from the player's parking space, determined by the game's difficulty setting (the higher the difficulty, the farther away the interview).
Once the location is reached, lo and behold! The player gets the job immediately - but don't congratulate yourself just yet. In order to get home, you must find your car. This is when the "real" game starts: the player must now wander through the near-infinite parking garage, perhaps retracing steps, in search of the car. The player no longer has to worry about enemies, but will encounter a variety of distractions including the all-too-tropey "scenic vistas," various other cars, and Easter eggs such as perhaps exotic vehicles (how did a train get in here?), the remains of past wanderers who failed to find their cars, etc.
(We ride eternal on the highways of Valhalla! WITNESS ME)
So that's the design. So far a closed alpha build of the game exists, in which there is a fully functional procedural parking garage, basic (if unstable) controls for a car and player character, pop-up messages about goals, and a victory message indicating how long it took to find the player's car.
Still to do are adding "rival" NPCs, all the cool Easter eggs I mentioned above, and a bunch of polish. There are also a few playability issues to solve: bad luck can leave the player spawning trapped in a small area or physically obstructed from reaching the interview location, and unpredictable generation conditions can mean a walk of several kilometers to the interview even on the easiest difficulty... or, at the other extreme, finding it right smack next to where you parked:
In this image the interview location is indicated by the brown door in the background.
Did I mention needing a lot of polish? As it turns out, a humongous modular building comprises a lot of meshes and in turn a lot of polygons, which give my pitiable six-year-old integrated graphics chip a hard time unless I make everything as simple as possible. But if you join my Patreon... At some point, hopefully soon, I'll be able to treat myself to a more powerful GPU that can handle prettier graphics at this scale.
All the same, whilst playtesting this I was surprised to find how well it captured the mood of my past experiences wandering despondently through parking garages, so in a way the project is already a success.

Next time I hope to have an open alpha (or beta) build available and perhaps do a post-mortem on what I learned in the course of making this, how my chunk system matured during the process, what I might do going forward, etc.

12 July 2019

ShipBasher Development Log 1: So Far

Way back when I was in game development school I had an assignment for which I built a clunky prototype of a game I, for lack of a better name, dubbed "StarBlast!" (no affiliation with the later and much more successful starblast.io). The basic concept was a sandbox in which players could open up a "Ship Builder" minigame, construct a space warship out of a variety of predefined sections, and then switch to either a "Scenario Editor" or the game's Campaign or Sandbox modes to spawn a bunch of warships either of their own design or pre-built and pit them in battle against each other. I even made a cute little trailer for it, despite never making a proper public release, let alone trying to sell the thing:
One of my pet projects, which I've given a cursory mention previously and worked on off and on in the years since, is to rebuild this game with entirely my own assets, more robust code, a more polished general design, and lots of user customizability and mod support.
Here's what I have so far:

It's rather barebones at this point, but it counts as progress nonetheless. I've also mocked up a ship editor complete with the ability to spawn some cubes and move them around (whoa!) and with some pretty buttons to click that don't do anything useful.
There isn't much else to say at this point aside from expounding on the details of the design of the game, but nobody wants to hear about my Totally Original Idea™ with no implementation in sight, so instead I mean to reveal the workings of the game piecemeal over the course of its development logs.
My current hurdle is figuring out what sort of system to use to save and load ships, and eventually other data - XML? Unity's built-in JsonUtility? A third-party JSON library? My own homebrewed lexer and parser? Hmmmmmmm... Expect the next update to involve my answer to this puzzle.

12 March 2019

Procedural Chunk-Based Universe Part 5: Deterministic Pseudorandom Worley Voronoi Implementation with Cellular Spatial Partitioning for Computational Optimization, a.k.a. geeky math stuff

Here and there in the saga of my procedural universe thingy, I've made noteworthy use of phrases such as "Worley noise" and "cell noise" and the word "Voronoi," and I noted my intent to disambiguate and demystify them.

To start, and to explain what led me to look into them: I wanted to create a starfield, but with some important characteristics. First, it had to be pseudorandom, i.e. appear random but in fact be deterministic: any given input, such as a random seed, would produce exactly the same output. The upshot would be that I could spot an area of interest, leave it to voyage vast distances, and upon return find the same area of interest right where and how I had left it, turning a pseudorandom smattering of features into what in my experience, as a player, was a massive persistent universe. Second, in order to have areas of interest at all, it had to have local structure, which a truly random distribution hardly has at all, let alone to a stimulating degree.
The result was that I had to do some research into the field of pseudorandom noise. This page proved useful by providing a number of reference images of the basic results of various noise generation methods, including the popular Perlin noise and its variations as well as Worley noise and strategies such as "texture bombing." I could not only easily compare these against the result I wanted but compare them against the results my code was producing to see whether I was doing something wrong (or at least different from what I thought I was doing). I ended up settling on this:
The page describes it as "Worley Voronoi f2-f1, Euclidean distance metric." Decoded into more digestible terms, it means the following:
  • Worley's algorithm was used.
  • The result resembles a Voronoi diagram.
  • The terms used in the algorithm were "f1" and "f2," and the result value was based on the difference between f2 and f1 (more below).
  • Distances used in the algorithm were computed as Euclidean distances, i.e. "real" geometric distances in a straight line as calculated using the Pythagorean Theorem.
To start implementing this, first one needs to understand the foundations of Voronoi diagrams. A fun Voronoi diagram generator exists here and illustrates the important concepts interactively, producing images like this one:
Random "seed" points are arranged in space (the black dots). Each point has a colored cell in the diagram all to itself, which represents all positions in space that are closer to that seed point than to any other.
The important feature for my purposes was the edges of these cells: along each edge, points are equidistant from two seeds, and points near those edges are nearly the same distance from two seeds - in other words, the difference between the distance to the closest seed point (which one could name "f1") and the distance to the second closest ("f2") is very small. Worley noise is the pattern generated by taking values such as this difference (or some other relationship between the nearest few seed points) and using them as result color values, probabilities, heights, etc. Worley noise is sometimes also cited as "Voronoi noise," "cell noise," or "cellular noise" (due to its production of cells that can resemble biological cells).
The simplest form of Worley noise is of course Worley f1 - just color the result based on how far away the nearest seed point is. I started with this just to make sure I was on the right track, but it naturally produced suboptimal results: just a pattern of round voids that gradually transitioned into dense areas with distance. So shortly thereafter I set about trying a variety of computations such as the f2 - f1 technique illustrated above. In the end I discovered that I needed to in fact make use of the four nearest points so as to create a threadlike structure with "clumps" near where multiple cell boundaries intersected.
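To make the f-terms concrete, here's a minimal sketch in Python (illustrative only - my actual implementation lives in Unity): compute the distances from a sample position to every seed point, sort them, and combine the nearest few.

```python
import math

def worley(sample, seeds, k=2):
    """Return the k smallest Euclidean distances from sample to the seed points."""
    return sorted(math.dist(sample, s) for s in seeds)[:k]

def f2_minus_f1(sample, seeds):
    """Worley f2 - f1: near zero along cell boundaries, larger deep inside cells."""
    f1, f2 = worley(sample, seeds, k=2)
    return f2 - f1
```

Swapping `f2 - f1` for some other combination of the nearest distances (for instance, of the four nearest, as I ended up doing) changes the character of the pattern without changing the overall machinery.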

How I created this structure specifically is somewhat of a brute-force method: the noise values represent the probability that a galaxy could exist at that point. Random positions are chosen in 3D space, the noise value is calculated for that location, and then a random value is generated and compared against the noise value - if the noise value is higher than the random value, a galaxy is placed at that location, and if not the algorithm tries again until it either finds a valid spot or gives up.
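That brute-force loop can be sketched roughly like this (Python for brevity; the function and parameter names are placeholders, not my actual code):

```python
import random

def place_galaxies(noise, bounds_min, bounds_max, count, max_tries=20, rng=None):
    """Rejection-sample positions: a candidate point survives with probability
    equal to the noise value there (assumed to lie in [0, 1])."""
    rng = rng or random.Random()
    placed = []
    for _ in range(count):
        for _ in range(max_tries):
            pos = tuple(rng.uniform(lo, hi)
                        for lo, hi in zip(bounds_min, bounds_max))
            if rng.random() < noise(pos):  # higher noise -> more likely to keep
                placed.append(pos)
                break  # found a valid spot; on to the next galaxy
        # after max_tries failures the algorithm gives up on this galaxy
    return placed
```

Since high-noise regions (near cell boundaries) pass the check far more often, galaxies naturally cluster along the Worley structure.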
The cell boundaries are perfectly straight lines, so with an infinite number of infinitely small galaxies, an unnaturally precise geometrical structure would be visible. With a more conservative density, though, the random variation in chosen positions, plus the ability of galaxies to occasionally pop up far from cell boundaries, lent enough variation that I considered the results good enough:
Note how clumps of galaxies form but have diffuse filaments connecting them to one another, and that there are large "voids" with very few galaxies in them.
Now, one might note, particularly if armed with knowledge about programming and algorithm performance, that checking the distances to all of the seed points every time a new galaxy is generated, multiple times or otherwise, is (I think) an O(n²) operation - the number of calculations grows with the square of the number of items involved. This is generally regarded in the programming community as "very inefficient and you should be ashamed of yourself."
Naturally I did notice the high performance cost when there were a lot of seed points involved, and I saw a need to optimize. As it turned out, my later efforts to convert this starfield into a collection of cubic chunks proved very useful: each chunk could be made to contain only a few seed points and only concern itself with seed points within itself or the regions that would be occupied by its 26 neighbors. The operation retained its O(n²) performance, but dealing with only 27*[a small number, say 3 or 4] = maybe around 100 seed points meant that the performance cost was low, especially compared to the performance cost that might have been involved in implementing other sorts of optimizations.
A drawback of this is that only so much local complexity can exist within any given chunk - 3 or 4 seed points means only 3 or 4 voids or superclusters can be present - but for my purposes this was more than sufficient, especially considering that, should I need more complexity, I can simply add more chunks, and should I need a huge number of chunks, many of them would be very far from the player, where only a few galaxies apiece would suffice. Programming is often an art of finding a local optimum: something that works well enough for your needs without being too complicated to build, maintain, and troubleshoot.
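The chunk-local seed lookup can be sketched like so (Python, illustrative only; the spatial-hash constants and per-chunk counts here are placeholder choices, not the values in my code):

```python
import itertools
import random

def chunk_seeds(cx, cy, cz, chunk_size=10.0, per_chunk=3, world_seed=1337):
    """Deterministically derive this chunk's Worley seed points from its integer
    coordinates: the same chunk always yields the same seeds, no matter when
    or in what order it is visited."""
    # classic spatial-hash mix of the coordinates into one integer seed
    h = (world_seed * 73856093) ^ (cx * 19349663) ^ (cy * 83492791) ^ (cz * 2971215073)
    rng = random.Random(h)
    return [(chunk_size * (cx + rng.random()),
             chunk_size * (cy + rng.random()),
             chunk_size * (cz + rng.random())) for _ in range(per_chunk)]

def nearby_seeds(cx, cy, cz, **kw):
    """Collect seeds from a chunk and its 26 neighbors: at most 27 * per_chunk
    points to test, instead of every seed point in the universe."""
    seeds = []
    for dx, dy, dz in itertools.product((-1, 0, 1), repeat=3):
        seeds.extend(chunk_seeds(cx + dx, cy + dy, cz + dz, **kw))
    return seeds
```

The O(n²) shape remains, but n is capped at a few dozen seeds per query regardless of how large the universe grows.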
Of note is that Worley noise happens to be the same technique implemented in Space Engine, as illustrated above, although with a few differences: SpaceEngineer notes that he "distorted" the results using "FBM (fractional Brownian motion) noise" to raise the quality that little bit higher, and the octree-based structure of the game meant that optimization had to be done rather differently from the simple chunk-based strategy I used. Nevertheless I felt proud of independently discovering and exploiting the same computational principles and creating something that yielded similar results.

18 February 2019

Procedural Chunk-Based Universe Part 4 - Cyberchunks

In previous parts I outlined the basic architecture of my procedural universe system and how chunks come into being and are managed and refreshed. Toward the end I mentioned getting curious about how well my system would lend itself to another very different but similarly tempting idea I had.

I'm a fan of the cyberpunk genre and have been for several years, since long before things like Cyberpunk 2077 or Blade Runner 2049 took the spotlight and heralded a resurgence of the genre (sorry, I'm being a bit of a hipster for a minute here). My first introduction was The Matrix and its less celebrated but still entertaining and visually stunning sequels, and what made me fall in love with the genre was when I discovered and immediately binge-read the manga Battle Angel Alita in... uhh... 2011 at latest based on what records I can find (yes I mean I drew bad fanart). I highly recommend this manga especially in light of the movie adaptation I patiently awaited for eight years. Whatever you may think of the movie if and when you see it, I maintain that the manga is a true literary masterpiece. I fell for cyberpunk even more deeply when I discovered and read Tsutomu Nihei's "BLAME!" in 2014, especially with the artwork of massive brutalist architecture extending horizontally and vertically as far as the eye can see:

https://killscreen.com/articles/influence-blame-videogame-architecture-rising/
(Click the last image to visit a page with more pretty pictures. Or go here for even more. Or, I dunno, read the actual manga. ;P )
My idea is probably easy to guess at this point: use my procedural universe chunk system for an infinite building rather than an infinite open space. So I made this:
You can actually play an interactive WebGL demo of the system at this point.
I found that even an extremely simple chunk behavior gave surprisingly pleasing results. Yes, it's extremely basic compared to Nihei's beautiful artwork, but the underlying code is even more basic. All it really does is decide whether a chunk contains a room or is empty, and, if it does, spawn a simple pre-built room with a random 0, 90, 180, or 270 degree rotation.
The simplicity of this part opened me up to seeing just how many chunks my system could handle, and it was... actually not as much as I'd hoped. I could work with what I had, but it seemed wrong that I couldn't render as many chunks as Minecraft even when the chunks themselves required practically no computation. Implementing a system for preserving existing chunks, and making it run efficiently, had become very important and a surprisingly powerful source of confusion and work. I had actually already begun to address this and had designed my system to store lists of existing chunks and use multiple loops to process them and match them up to points in space. Some of those decisions turned out to be suboptimal, so it wouldn't do much good to expound on the design as it was at the time, but it did give rise to my current design for the chunk management system:
LOOP through all points, skipping invalid points
    old chunks may exist at some, so LOOP through all old chunks
        if one matches, skip it (unless updateAll) and remove the chunk from oldChunks
    if no chunk matches, some old chunk probably can go there, but it is not yet known which are available, so list the point as empty

now all points either matched or are listed as empty
now all chunks either matched a point or didn't and are thus available to reassign

LOOP through all empty points
    if any old chunk is left, pop an old chunk and assign it
    if not (all old chunks are used up), points currently outnumber chunks: the universe grew; only in this case, GENERATE a new chunk
    pop the point as it now must have been filled

now all points have been filled
now some old chunks may be left if chunks currently outnumber points: the universe shrank

LOOP through any leftover chunks and tell them to deactivate or destroy them

now all points have been filled and all old chunks have been addressed.
As can be seen here, this is another blob of text copied directly out of my personal notes for myself and thus, my apologies, it's poorly formatted and refers to a lot of things I had floating around in my head without providing context. As before I've included it here in full for the edification of others who may be struggling with the logic of managing world chunks in their own game systems. In other words, yes, you can try to follow along and copy it if you want. It's not like I invented the idea of a chunk system anyway.
As can also be seen here, by the time the most recent version was written down I had learned to be very conscientious about the current state of the chunk list and the needed chunk positions. I also capitalized "loop" and "generate" to make it very clear how many loops I needed to have and where in the code chunks actually got generated. It turned out that for this to run properly, chunks should only be generated under a single very specific set of conditions.
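For anyone who'd rather read code than my notes, the same logic can be sketched in Python (heavily simplified: chunks here are just values in a dictionary keyed by position, with none of the refresh-queue machinery):

```python
def update_chunks(chunks, points, make_chunk, update_all=False):
    """Reassign existing chunks to the currently needed points, generating a
    brand-new chunk only when demand outgrows supply."""
    old = dict(chunks)              # position -> chunk; candidates for reuse
    result, empty = {}, []
    for p in points:
        if p in old and not update_all:
            result[p] = old.pop(p)  # chunk is already where it needs to be
        else:
            empty.append(p)         # needs a chunk; source not yet known
    leftovers = list(old.values())  # chunks whose positions are no longer needed
    for p in empty:
        # recycle an old chunk if any remain; only GENERATE when none are left
        result[p] = leftovers.pop() if leftovers else make_chunk()
    # anything still in leftovers is surplus: the visible set shrank
    return result, leftovers
```

This is the single "very specific set of conditions" in action: a new chunk is only ever generated when an empty point exists and no recyclable chunk remains.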
The upshot is that my chunk system, as it matured, became able to handle a huge number of chunks without making weird mistakes (like leaving a trail of old chunks behind the player because it forgot they existed or that they should have been reassigned instead of new chunks being made) and without creating tons and tons of garbage in the process. Both my building and my universe could grow to truly massive proportions:

Each of these images shows over 1000 chunks on screen, and at this point the main bottleneck to showing more is rendering (I don't have a high-end graphics card). The universe shown here, based on matching it up to some images of things like the Sloan Digital Sky Survey and trying my best to eyeball it, represents a radius somewhere in the ballpark of 3 billion light years:
In future posts I intend to discuss more particulars of the 3D noise functions I used and give an overview of the functionality that currently exists in the (hopefully) almost finished product.

16 February 2019

Procedural Chunk-Based Universe Part 3 - Software Development Buzzwords

In previous parts I explained how I set out to make a chunk system for huge 3D game worlds, how I experimented with using it to generate a universe using 3D noise to produce a cosmic web, and how I expanded it to be able to use chunks and visible radii whose values were independent. At the end of Part 2 I recounted how a need arose to be able to keep chunks in the world persistent and unchanged when they didn't really need to be updated.

The architecture of my chunk system had to be pervasively rekerjiggered once again, so much so that I decided I wanted to be a pro and do it "properly" this time with inheritance and encapsulation and other software development buzzwords. I synergized logistical paradigms and leveraged strategical methodologies for performant serviceability throughout the product lifespan to maximize utility and market penetration.
In other words I got out a text editor and typed out a new vision for what would be the fifth version of the project... and then a mere four days of derping around later, went back to the drawing board... typing window... whatever, and I wrote down the sixth version:
VertexArrayMesh script
    updates MeshFilter with procedural mesh based on Vector3[]
    includes vertex colors, with alpha channel for luminosity / scale

DummyMesh script
    renders mesh (e.g. from MeshFilter) at Vector3 location

Starfield shader
    takes in starfield mesh with vertex positions and colors
    draws textured quad at each vertex
    colors quad with vertex color rgb
    scales quad with vertex color a

Procedural Universe script
    creates and manages chunks
        chunks exist in WORLD space
    collects points from live chunks and pushes to VertexArrayMesh or other scripts as necessary (e.g. for unpacking)
    Chunk class
        points, random seed, Vector3 position (not ints! not worth it)
        persists at world space position, holding points
        Refresh()
            pushes position, size (localScale Vector3), random seed, and density to NoiseProvider
            obtains points Vector3[]
    Refresh(force, updateAll)
        clear refresh queue and wait for RefreshQueue to stop
        if !force, check position and compare with oldPosition
        edgeSize Vector3 = min(3, ceil(visibleRadius / localScale.xyz)).xyz
        triple loop edgeSize.x, y, z
            generate Vector3 xyz and skip if not touching sphere
            store as linked list
        loop chunks
            if !updateAll and chunk touches visibleRadius sphere (double check is faster than extra looping), loop possible positions
                match with chunk position
                if match found, remove position, break back into chunks loop, and continue to next chunk
            if loop completes (no match or updateAll), chunk is dead; add to deadPool linked list   
        if positions left, loop leftover positions
            pop dead chunk if available, or spawn new chunk
            position chunk at position
            if player is within chunk, refresh chunk now
            else add chunk to refresh queue
        if dead chunks left, delete
    RefreshQueue coroutine or thread
        refresh chunks one at a time

NoiseProvider script/function
    Takes in random seed and point count
    Outputs points Vector3[] within AABB with center and size Vector3
    Multiple variants:
        Voronoi - generates seeds Vector3[] once when called and then generates points based on these
        Spiral Galaxy - generates points based on spiral algorithm
            sphere -> ellipsoid -> spiral
            sphere -> spheroid -> disk
Please don't copy this - not because it's my special valuable IP but because I later, as I mean to explain later on, found all the ways this is wrong and bad and made a different and much better one. Also it mentions some "deadPool" entity or person and I feel like there should only ever be one of whoever that is. I've just included it here so you can see where I was at this point.
Also it's my personal notes for myself copied verbatim, so I have no expectations it'll be digestible what with all the references to variables I had bouncing around my head, abbreviations, jargon, shorthand, etc. Cooooooode...
Time for a pretty picture.
Oh hey! This is where we were before (but with the GUI not hidden this time). It was during this phase that I actually implemented the outline gizmo thing I kept mentioning.
Now that I have a visual, I can explain what all that stuff above means a bit better.
Each blue cube is the outline of a single chunk. Chunks don't actually have to be cubes any more - they can be varying amounts of tall, wide, and long for various purposes like maybe making a big wide flat galaxy or whatever. Since the algorithm generates the universe by comparing the chunk dimensions and visible radius during runtime, it works fine even if the chunks are oblong.
Each of these chunks is a GameObject, attached to which are the actual chunk script, a script for providing procedural noise to the chunk, and scripts for using that procedural noise to render starfields. My specific implementation led me to craft a script for rendering meshes that didn't bother with rotation and scaling, plus a geometry shader that could take in a single mesh and simply use its vertices to draw star sprites, so I didn't need tons of objects or particles eating up computing resources.
Because the starfield and noise generation features had moved out to their own scripts, I was free to replace these with specially tailored versions that lent themselves to more effective diagnostics and debugging, for instance the Debug Noise script I teased earlier, which inherits from NoiseProvider. This allowed me to put serious work into improving the stability and performance of the system; one by one bugs were exposed and overcome, and efficiency was added, until I was able to produce screenshots like this:
Another example is the current background image for this blog. I not only had a cosmic web working, but I tailored the galaxy sprite colors to produce an imitation of "cosmic latte", the average color of all light sources in the known universe, and I used the power of my geometry shader to add redshift based on the distances to the various galaxies (note in the second image how some galaxies remain bright and nearly white while others, dim due to distance, also have an increasingly strong reddish orange tint). I was quite proud of myself at this point.
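The actual redshift effect lives in my geometry shader, but the gist can be expressed as a tiny function (Python here, and the blend constants are made up for illustration, not taken from my shader):

```python
def redshift_tint(base_rgb, distance, max_distance, shift_rgb=(1.0, 0.45, 0.2)):
    """Blend a galaxy's color toward a reddish-orange tint, and dim it, as its
    distance approaches the far limit of visibility."""
    t = min(distance / max_distance, 1.0)   # 0 = right next door, 1 = edge of visibility
    brightness = 1.0 - 0.8 * t              # distant galaxies also dim
    return tuple(brightness * ((1.0 - t) * b + t * s)
                 for b, s in zip(base_rgb, shift_rgb))
```

Nearby galaxies keep their bright near-white color, while far ones come out both dimmer and increasingly orange, matching the effect described above.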
Also because the starfield and noise generation features had moved out to their own scripts, I was no longer constrained to using my procedural universe for starfields - any large volume that could be represented as a collection of chunks could be recreated using this system. I continued to iterate on my starfield version, gradually adding improvements, but I began to also poke around at using it for things like flattened disk-shaped starfields as one might find in galaxies, asteroid belts, or ring systems and for things that weren't starfields at all. In theory this could be used for such things as cellular automata or terrains, although I still didn't have much interest in being yet another developer who's made a terrain generator by using Perlin noise to add a heightmap to a plane. But there was another application that was tempting me...
In Part 4 I aim to explain how pursuing that idea brought me to my current design for the system and led to much, much better performance for all implementations of it including the existing starfield generator.

15 February 2019

Procedural Chunk-Based Universe part 2 - the Universe Within the Universe

In Part 1 I explained how I set out to make a chunk system for representing huge game worlds and detailed how I started to tackle the problem.
So far I had planned out what I wanted to do in a vague sense and created a minimal implementation using only a fixed-size cuboid of 3x3x3=27 chunks. I went on to spend a while making these ever fancier and prettier, but a major constraint was making itself known: I couldn't render enough objects within my 27 chunks to represent anything like a fair imitation of a whole visible universe.

The real universe contains some objects that are very bright and visible at great distances, and other objects that are dimmer and only visible up close. To handle this practically, I realized that I would benefit greatly from being able to stack my universe generators, with "small universes" with small visible radii inside "large universes" with large visible radii. The small ones could handle nearby galaxies (currently represented by "star" sprites) and the large ones could handle distant, bright galaxies and galaxy clusters, rendering fewer of them.
The universe has a structure, however. It is believed to be "homogeneous" on the largest of scales, meaning any given huge volume of it looks roughly like any other given huge volume of it - galaxies aren't aligned to a grid and don't follow a gradient toward somewhere they all bunch up at the center of the universe (you may have heard of the Great Attractor, but while that is huge by our standards, cosmologists believe that other large gravity wells exist far away throughout the universe and that the universe as a whole isn't being pulled into any one place). On smaller (but still huge) scales, though, there is a distinct pattern called the "cosmic web":
Naturally I wanted to replicate this (and already had, by implementing Worley noise, a.k.a. Voronoi noise or cell noise, in my "star" placement code, as I mean to detail in a separate post), meaning I couldn't just spam stars at random locations in chunks of random scales. My noise function not only had to have structure, but that structure had to be independent of whatever size I set for the visible universe. This would be far easier to accomplish if the chunks were a consistent size and stacked universes with variable visible radii could simply have variable numbers of chunks.
In conclusion, chunks had to be responsible for sampling 3D noise in a way that's consistent and deterministic regardless of how many of them have been spawned, what the visible radius of their parent universe is, where and in what order they have spawned, and whatever history the player may have of being in other places. Adapting my existing noise to this idea was fairly straightforward, but it marked a paradigm shift in how I approached the project from this point forward.
As I've mentioned a redundant number of times, I had a 3x3x3 block of chunks, but if I wanted to make a universe of a different size whilst maintaining the same chunk size (and I could go no smaller), I had to develop a procedural tactic for generating a variable number of chunks around the player and arranging them in a ball. A cuboid would work, but its boundaries would be at highly varied distances from the player, with some too close, causing "pop-in," and some too far, causing redundant stuff to be spawned despite being excessively far away.
What I ended up producing was this.
The universe has a visible radius (indicated by the brown wireframe sphere) and a chunk size, each set by the developer, and the two can have any values independent of one another. The program calculates how many chunks it takes on each axis to meet or exceed the visible radius - with a bit of padding so that it includes all chunks that would touch the visible universe in addition to those whose centers are inside it, and a minimum of 3 chunks on each axis - and uses this information to iterate through points in space that together form a large cuboid. In the screenshot here, each of these points has been indicated with a small red line. The cuboid itself measures 5x5x5 chunks.
Chunks outside the visible radius, i.e. in the corners, won't be necessary, so while iterating through these points, the algorithm validates whether they fall within the (padded) visible radius and if so marks them with a small green line. The checks for the visible radius produce the faint green lines, and the padding added to the visible radius is shown by the faint blue lines.
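The counting and validation described above can be sketched in Python (illustrative only; my real code works in Unity with Vector3s and per-axis chunk sizes, and the padding value here is a stand-in):

```python
import math

def chunk_grid(visible_radius, chunk_size, padding=0.5):
    """Enumerate chunk offsets (in whole chunks from the player) covering a
    sphere of visible_radius, discarding cuboid corners that fall outside it."""
    half = max(1, math.ceil(visible_radius / chunk_size))  # 2*half+1 >= 3 per axis
    reach = visible_radius / chunk_size + padding          # padded radius, in chunks
    points = []
    for x in range(-half, half + 1):
        for y in range(-half, half + 1):
            for z in range(-half, half + 1):
                if math.sqrt(x * x + y * y + z * z) <= reach:
                    points.append((x, y, z))
    return points
```

With `visible_radius=2` and `chunk_size=1` this walks the same 5x5x5 cuboid shown in the screenshot and keeps only the offsets inside the padded sphere.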
I had the algorithm spit out some chunks on each of these validated points and got this:
Okay not this exactly - I'm using an anachronistic screenshot from a later version again, but it looks the same as what I had at this point.
By this point my derping around with 3D noise functions had given rise to a "Debug Noise" script that didn't bother with the actual Voronoi cells and instead just put some stars in a cube and gave them all one color based on the current chunk's random seed (which was based on its world-space position). That cube could be scaled down a bit to display the boundaries between chunks more clearly, as I still hadn't built the fancy outlining feature I showed earlier.
Note the blue line indicating each chunk that actually got spawned, the player icon in the center, and the blocky sphere measuring 5 chunks in diameter.
I was now able to make visible universes with arbitrary numbers of chunks, although unsurprisingly huge numbers of chunks led to huge performance drops. This was especially the case while my system didn't fully respect the persistent nature of chunks and would refresh every single chunk in the universe every time the player moved too far and a refresh was called. In the event of a player teleporting 80 billion light years into a completely new visible universe, this would be necessary anyway, but in more mundane cases like flying from one chunk into the next, most of the universe would remain unchanged and only a few chunks would fall out of visibility or come into view. That meant I could leave most of the actual chunks untouched and only refresh a few. I made sure to retain the option of refreshing all of them in case I ever wanted to (note the buttons to force a full refresh and to "re-seed" the universe with a new random seed and rebuild all of the chunks based on it), but my next task would clearly be to create a mechanism for keeping track of which chunks existed and whether they really needed to be changed.
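The bookkeeping that "only touch what changed" requires boils down to a set difference between the chunks that exist and the chunks the new position needs. A hypothetical Python sketch (the names and callback shape are illustrative, not the project's actual API):

```python
def refresh(existing, needed, spawn, despawn):
    """Incrementally update a universe's chunk table.

    `existing` maps grid coordinates to live chunk objects; `needed` is the
    set of coordinates the player's new position requires. Only the
    difference is touched - chunks that stay visible are left alone.
    """
    for coord in set(existing) - needed:
        despawn(existing.pop(coord))   # fell out of visibility
    for coord in needed - set(existing):
        existing[coord] = spawn(coord)  # newly came into view
    return existing
```

A full refresh is then just the degenerate case where `needed` shares nothing with `existing`, which is exactly what a long-range teleport produces.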
In Part 3 I plan to explain how I managed this and how it made my algorithm much more sophisticated.

12 February 2019

Procedural Chunk-Based Universe part 1 - It's full of stars!

As a long-time fan of Space Engine, even before the infamous No Man's Sky was announced, I've been big on humongous procedural game worlds. The universe is a big place, after all, and someday I dream of exploring it, so in the meantime it's fun to pretend by immersing myself in exploration games like this.
Now SpaceEngine is being built by a sorcerer with years of experience in graphics programming. SpaceEngineer has created the entire engine in C++ and GLSL, taking advantage of shiny concepts like GPU terrain generation, raymarching, etc. It's all an inspiration to me, but much of it is more than I can chew if I try to bite it off in its current, advanced state. Like any noob, I learn by starting with the simple parts - WASD to move, click another player's head to shoot it, write and compile a few easily understood lines of code at a time to see if I'm doing it right.
So where professionals like SpaceEngineer use an octree to discretize 3D space, I, being mediocre at data structures, have instead gone the way of constant-sized chunks. I don't think it's a bad approach anyway - a number of games use chunks for their worlds and it serves them very well. There's that one about punching trees that I hear a few hundred or so people played...


Now Minecraft's chunk architecture is based on a 2D square grid. Each chunk measures 16 one-meter blocks on a side horizontally and extends the full 256-block height of the game world (some evolution has occurred in the data storage format, but this basic formula still presents itself in the game logic). This works great for a game about people living on the surface of a large world and using mainly hand tools to make changes to the landscape. They are only able to tunnel a few dozen meters at most into the ground before encountering indestructible bedrock, and they are only able to build pillars a few dozen meters high, which is more than reasonable for a medieval house or even a respectable castle.
But I don't want to spend my life confined to within a short elevator ride from a 2D surface, and that goes for game worlds too. Minecraft's ability to feed the player gratuitous amounts of chunks must for me extend to the vertical axis, not merely so I can build stupidly tall towers or carve Aperture Laboratories into the land, but so I can take off into the great beyond and visit new worlds without end.
So that's my two-paragraph sermon about why I went with cubes.
Now, chunk-based game worlds naturally don't need to be limited to voxel-based landscapes. Those cubes can be as big as I want and contain whatever I want - I could make a cube with a galaxy in it, or even a whole cluster of galaxies.
Okay those are just dots, but they could be anything. The cube doesn't care what they are, only that they belong to it (and in most use cases, are inside it... I didn't have that hammered down by this point).
One cube obviously doesn't constitute a chunk system for the world. I also needed ways to make and control a number of chunks, for them to do useful things to the things inside them, and so on.
So first up was finding a way to put a whole mess of chunks in the world. Actually, even before that was thinking up overall logic for when and in what fashion chunks were placed in the world. My decision ended up being that the "universe" would attach to a game object, thereby having a world-space position. This game object could be the player, the main camera, or a special "Universe" object that follows the player or main camera or even remains stationary. For my earliest tests it remained stationary, but once work was solidly underway I made it follow the player - or at least move around by key presses as if following a player. At all times, the player or universe object would be contained within one chunk, and if it wandered too far (in the beginning this was a distance check, but later evolved into a different and more appropriate sort of check), new chunks would get generated around its new position so that it remained inside the universe (leaving the universe would be bad after all).
Since I was working in 3D, I had to account for the "center" chunk occupied by the player, the eight chunks forming a square around it, and the two other square layers of nine chunks each above and below it for a total of 27 chunks at minimum. This was the smallest I could go while ensuring that at all times the player remained inside the universe and had a new chunk in front no matter which way the player happened to move, including diagonally. Technically I could get away with only seven chunks forming a sort of 3D plus sign (the center plus its six face neighbors), but that would have allowed the player to come very close to the edges of the universe before actually leaving the center chunk, and the visible consequence would be seeing a big wall of nothingness a short distance away only for a wad of stars to suddenly pop in right in front of the camera.
I don't have an illustration of exactly how the system looked at this point, but I do have an illustration from a later edition that highlights the basic 27-chunk cuboid idea:
The player and universe object are represented by a white circle and square (I forgot which is which) and currently, since the universe follows the player, they occupy the same location, which is at the center of the center chunk, which has a faint blue highlight. It and the other 26 chunks all have a blue outline drawn around them and a little blue line pointing at their centers. In the early phases I hadn't engineered a shiny system for drawing cube gizmos to outline chunks, so I only had the blue lines given by the Debug.DrawRay function.
The idea here was that if the player left that center chunk, whatever new chunk it occupied would become the center chunk and a new cuboid would be formed around it.
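Recentering like this reduces to two tiny operations: find which chunk a world-space position falls in, and enumerate the 27 coordinates around that chunk. A hedged Python sketch of the idea (names are my own, not the project's):

```python
import math

def chunk_of(pos, chunk_size):
    """Map a world-space position (x, y, z) to its chunk's grid coordinates."""
    return tuple(math.floor(c / chunk_size) for c in pos)

def neighborhood(center):
    """The 3x3x3 block of 27 chunk coordinates around a center chunk."""
    cx, cy, cz = center
    return {(cx + dx, cy + dy, cz + dz)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)}
```

Conveniently, when the player steps one chunk sideways, the old and new neighborhoods share a 2x3x3 slab of 18 chunks, which is exactly why incremental refreshing (described in the part-2 post above) pays off.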

11 February 2019

Oops I Just Accidentally a Roguelike

I've created a simple level generator based largely on Lived 3D's random level generator.
Two of my pet projects are a game about building spaceships and a system for procedural game worlds using 3D chunks (imagine Minecraft's chunks, but as cubes so that they can generate an unlimited world vertically as well as horizontally).
The spaceship game needs a system with which the user can select, attach, and change parts on a ship, and one of the best ways I've seen for doing this is a "node attachment" system that connects parts to one another based on attachment nodes, each of which has a position and orientation (and other optional properties). If an object contains one or more of these, when requested (e.g. when spawned by a player or when the player presses a "connect" button), it will match itself with an attachment node on another object and cause its own assigned object to be moved and rotated so as to attach to the other rather like Legos:
I've shamelessly borrowed this image from Bac9 as it's just such an awesome illustration of the concept. I also take no credit for the concept itself, as it's been done beautifully before by games such as Kerbal Space Program (a hardcore addiction of mine for much of the last few years, and part of the inspiration for my spaceship game).
Had I the forethought, I should rightly have made a post about my adventures trying to get this system running and the silly math problems I encountered (turns out the best way to make a quaternion facing the opposite direction from another quaternion is... to rotate it 180 degrees! Wow!). In any case, this system is functional (though not without room for improvement) and allows the player to slap together arbitrary collections of objects, provided they've been set up with attachment nodes, be they pipes, dungeon rooms, spaceship parts, etc.
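For anyone curious what that 180-degree trick looks like in practice, here's a hand-rolled Python sketch of the quaternion math (deliberately not Unity's Quaternion API; the helper names and the choice of flipping about the local up axis are my own illustration):

```python
import math

def quat_multiply(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate_vector(q, v):
    """Rotate vector v by unit quaternion q: q * v * q^-1."""
    w, x, y, z = q
    qv = quat_multiply(quat_multiply(q, (0.0, *v)), (w, -x, -y, -z))
    return qv[1:]

def opposite(q, up=(0.0, 1.0, 0.0)):
    """Face the opposite way: rotate q by 180 degrees about its local up axis."""
    half = math.pi / 2  # half of 180 degrees, as quaternions use half-angles
    flip = (math.cos(half), *(math.sin(half) * u for u in up))
    return quat_multiply(q, flip)
```

Flipping the identity orientation this way turns "forward" (0, 0, 1) into (0, 0, -1), which is exactly what an attachment node needs so that two connected parts face each other.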
Now my other project, the procedural world system, lends itself happily to generating terrains, starfields, and even cellular automata. I mean to go into more detail about it later, but for now here's more eye candy:
Beautiful. Such terrain, very procedural. Wow. I should make a whole game about driving Thomas the Tank Engine a cool sci-fi rover around on an alien landscape. It'll be the next Minecraft and I'll make 7 billion dollars and retire to Sweden in a mansion with a cola fountain.
So naturally I had procedural generation on the brain and got to thinking "what if my dungeon builder could run automatically?"
And that's the story of how I accidentally made a roguelike.
It turned out that iterating through all the attachment nodes was a fairly straightforward process; the only real obstacles were my own shortsighted design choices, all easily surmountable. Within a day I had a neat system that could slap new rooms onto the doorways of existing ones as many times as I wanted until I had a huge level to explore, rich in exciting features such as "zombies" (capsules that follow the player and make the "HP" number go down), "chests" (cubes that make the "score" number go up), and... unreachable areas due to some rooms growing straight through existing ones.
I knew what the solution was even before getting this far: when preparing to place a new room, the algorithm must first make sure it can fit in the desired location, i.e. any solid objects comprising it, such as walls, don't intersect (except maybe a smidgen) with any solid pieces of another room. Lived 3D showed in beautiful gif form how to do this with bounding boxes, but I'm overly ambitious and wanted to do it without boxes. What if my rooms are noodley cave segments?
What I needed was a way to check in advance whether a collider (a "hit box" or "collision mesh" except not always a box or a mesh) would intersect with another collider if placed in a given location. Fortunately my old friend the Unity engine has added a feature that happens to contain this functionality (although to my very minor disappointment it comes with a bunch of other overhead I don't need for this purpose) in the form of Physics.ComputePenetration. Come to think of it I should probably make a post serving as a tutorial for how to use this thing. The fact that I discovered it and want to help make sure the word gets around is like half the reason for this post, after all. Also I had a whole other set of adventures making dumb mistakes relating to checking for collisions in the right places.
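To illustrate the pattern rather than the real thing - Physics.ComputePenetration handles arbitrary collider shapes, which I can't reproduce outside Unity - here's a Python sketch using spheres as stand-in colliders. The shapes, names, and tolerance parameter are all my own simplifications:

```python
def would_overlap(candidate, placed, tolerance=0.0):
    """Check a candidate room's colliders against already-placed rooms
    without spawning anything into the world.

    Each collider is approximated as a sphere (center, radius) - a crude
    stand-in for the richer shape tests a physics engine can perform.
    `tolerance` permits the small intentional overlap between shared walls.
    """
    for (c1, r1) in candidate:
        for room in placed:
            for (c2, r2) in room:
                dist = sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5
                if dist < r1 + r2 - tolerance:
                    return True
    return False
```

The generator only instantiates a room when this kind of check comes back clean; otherwise it discards that candidate placement and tries another doorway.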
I had to do a lot of debugging ^^;
Now why, one might ask, did I insist on using cerebral physics queries instead of something basic like OnCollisionEnter that every competent Unity developer knows?
OnCollisionEnter is called on any object that already exists in the scene if, between the previous physics timestep and the current one, the object went from not touching some other collider to touching some other collider. Developers can configure their code to accept information about what colliders were involved, how much they intersected and where, what was attached to them, etc. which is very useful for doing things in response to things bumping into each other, say for example lowering the player's HP if it touches a zombie. Theoretically, this function even gets called if the object in question wasn't touching anything the previous timestep because it didn't exist.
But one might see what I saw as a problem at this point: A room has to be made to exist, then it can be fed information by the physics engine about what has come into contact with it - and in most cases it even has to wait for a physics update, which is roughly the same interval as a "normal" frame update. The long and short of it is that the room has to actually exist in the world for as much as an entire frame before OnCollisionEnter would be useful to it.
Now I as a developer could account for this by designing all of my room prefabs to not do anything when they get spawned and instead wait for their location to be confirmed. But on one hand that's something I'll have to be careful to do when I get creative about room prefabs (spawning hordes of zombies on load, loading saved items from a file, etc.), and on the other hand, I want to be able to distribute this system to other developers without having to deal with them not following this extra rule. So what I needed was a way for the generator to, in a sense, imagine the room being in place without actually sticking anything there just yet. I was pleased to find that despite its apparent complexity, ComputePenetration ran surprisingly quickly, and only when piles and piles of rooms had been spawned did it incur any noticeable performance cost (you might notice that the last few frames of the gif up top show the generator slowing down due to the number of rooms in play).
What I hope to figure out next is how to link up connections between existing rooms so that the level can contain loops and how to make level sections generated in one place connect their exits to the entrances of sections generated somewhere else in the world so that I can combine this with my chunk system...

09 February 2019

Introduction

After graduating game design school with a mediocre GPA (hey, I was listed as "in good standing" with the college, okay? Not my fault you need a 3.5 to get anywhere these days. I had some hard classes!) and absolutely failing to hook a proper programming job out of the gate (I blame the economy), I've filled my days in classic millennial fashion with an unsatisfying part-time job and my best attempts to balance playing video games and trying to develop my own in my spare time.
Henceforth -
ahem.
Henceforth and herein shall I be documenting my (mis)adventures in such topics as procedural generation, immersion, shader programming, physics, and saving up for a computer that has a discrete graphics card.

Being on a tight budget, my knowledge of most new AAA titles is secondhand and I've instead focused my attention on the growing indie scene. Perennial favorite games of mine include Kerbal Space Program, Space Engine, and the ubiquitous Minecraft; and I'm currently following projects such as Yandere Simulator and Manifold Garden and trying to learn what I can from others' successes and failures.
Someday I hope to not only follow in these people's footsteps but blaze a legendary trail of my own. Maybe I'll make a fortune for myself, but if I can at least make a living out of doing what I love I'll consider the endeavor a huge success.
As one may guess from my game preferences, my soft spot is for 3D exploration and sandbox games. I've played intense PVP experiences from Halo to EVE Online to League of Legends, but to paraphrase a proverb in the EVE community: there are people who build sand castles, and there are people who kick them down. I'm an unashamed sand castle builder. I respect playstyles focused on defeating other players, but where I like to involve myself is in building something new, whether competitively, cooperatively, or for its own merit. This applies not only to the games I play, but to the games I try my hardest to make.

Sorry this isn't a real post :C

I've entered graduate school and, as I expected, suddenly become far busier than I had been before, leaving no time to maintain my blog,...