
Week of January 12, 2014

#1
Sunday, January 12, 2014

♫ ♪ They're climbin in your headers, they're snatchin' yo codebase up, tryin to clean it so y'all need to hide ya structs, hide ya strings, hide ya structs, hide ya strings, hide ya structs, hide ya strings...and hide ya functions, cause they're factorin everybody out here ♬♬

:shifty:

Yes..well...what I'm trying to say is that the messaging system has already allowed me to senselessly snatch a few hundred innocent lines of code today :ugeek: The most exciting of these being the header files for weapons and thrusters. Previously, other pieces of code required knowledge of these subsystems, because they needed to be triggered in certain ways. A thruster would need to know how much to thrust, a weapon where to aim and when to fire, etc. But now, using the message system, I can broadcast a general "aim at" message to the entire ship, and any component that feels it necessary to take action can do so. This allows me to literally remove all knowledge of weapons and thrusters from everywhere else in the engine. There is only one file that knows what a weapon is, and it's the same file in which the actual weapon implementation lives. That's dependency elimination at its finest! Can you hear the squeaky-clean-ness? ;)
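For the code-curious, the pattern looks roughly like this (purely an illustrative sketch with made-up names, not the actual engine code):

[code]
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical bus: maps a message name to whoever subscribed to it.
struct MessageBus {
    std::unordered_map<std::string,
        std::vector<std::function<void(const Vec3&)>>> handlers;

    void subscribe(const std::string& msg,
                   std::function<void(const Vec3&)> fn) {
        handlers[msg].push_back(std::move(fn));
    }

    void broadcast(const std::string& msg, const Vec3& arg) {
        for (auto& fn : handlers[msg]) fn(arg);
    }
};

// The weapon subscribes to 'AimAt' inside its own implementation file;
// no other part of the engine ever needs to know that weapons exist.
struct Weapon {
    void attach(MessageBus& bus) {
        bus.subscribe("AimAt", [this](const Vec3& p) { aimAt(p); });
    }
    void aimAt(const Vec3& target) { /* swivel toward target */ }
};
[/code]

The ship just calls something like bus.broadcast("AimAt", enemyPosition) and never knows who, if anyone, is listening.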

I've got a lot on my plate for this week, with scanner theory and the new warp lane concept both due. Yet, there are still some final strings to tie up from last week :| You know, I seem to have comparatively little trouble tackling the meat of most problems, but that last 5-10% really trips me up...probably because, by the time the problem is "solved," I'm not so interested anymore, since what remains is just boring little stuff. That's how I feel about both the final threads of research and this new turret / weapon separation. The hard parts and heavy lifting are done, the problems are solved, and now I yearn to move on to bigger fish...but I really should fry all the small ones first! :monkey:

Here's to hoping for a week of frying all sizes of fish, both big and small :thumbup:

♫ ♪ You don't have to come and confess, we lookin for you, we gonna find you, we gonna find you ~ So you can run and tell that, run and tell that, run and tell that, code bloat, code code code bloat ♬♬

(What the heck is the point of working alone if you can't occasionally break into song and dance? :| )

Re: Week of January 12, 2014

#2
Monday, January 13, 2014

Pushed deeper into scanner theory today, and I want to talk a little bit about my ideas! :geek:

Most of my ideas concerning scanners revolve around 'signatures.' A 'signature' is just a function in 3D space that has some meaning. Heat, of course, is a signature. But we can have other signatures too, like 'radioactivity' (missiles, perhaps jump holes), or 'crystalline' (certain minerals in asteroids), 'metallic' (ore), etc. Now, a scanner is just your mechanism for discovering that function. The question is: how can we view it? Exactly what information does the scanner give us that allows us to understand the function?
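In code terms, I picture a signature as nothing more than a scalar field over space (sketch only; the names are invented):

[code]
#include <functional>

struct Vec3 { float x, y, z; };

// A signature is conceptually just a function f : R^3 -> R.
using Signature = std::function<float(const Vec3&)>;

// E.g., a ship's engines might contribute a heat signature that
// falls off with squared distance from the source.
Signature heatSignature(Vec3 source, float intensity) {
    return [=](const Vec3& p) {
        float dx = p.x - source.x, dy = p.y - source.y, dz = p.z - source.z;
        return intensity / (1.0f + dx*dx + dy*dy + dz*dz);
    };
}
[/code]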

The answer lies in sampling. Think of your scanner as a device that can sample the value of a signature function at certain locations in space, with a certain pattern and degree of accuracy. It can then relay this information to your HUD. I'm exploring a number of options for how that sampling pattern would look. At first, I was thinking something along the lines of a screen-space aligned grid, which would have the nice benefit that you could use interpolation in-between the grid points to create a heat map visualization of the signature. Cool, but I'd also like to give the player control over the scanning pattern. What if I want to concentrate all power on a teensy little slice of the sky, where I thought I just saw a bit of movement? I should be able to adjust the scanning pattern to focus on that area. Same thing applies when prospecting rocks. My next thought was a cone of varying angle.

A cone would just project to a series of circles on the screen, which makes a lot of sense - move the circle to the region you want to scan. You can increase or decrease the angle of the cone, which causes the circle to grow or shrink, and the sampling pattern to become tighter or looser. Tighter sampling patterns are more likely to pick up signatures within the volume of space that they enclose. What happens when a signature is detected? I'm not positive yet - with the grid / heat map idea, it was straightforward. With the cone idea, I'm thinking maybe you get a sort of 'ping' on one of the circles, depending on how far away the signature is. As you tighten the pattern and home in on the signature, the pings will get faster and faster, since you'll sample the hot zone more frequently.
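To make the ping idea concrete, a single scan tick might look something like this (an untested sketch building on the Signature type above; the sampling distribution, noise threshold, and constants are all placeholders):

[code]
#include <cmath>
#include <random>

// One scan tick: throw N samples into the cone (aligned with +Z in
// scanner-local space here) and 'ping' if any sample of the signature
// field clears the detection threshold.
bool scanTick(const Signature& sig, float halfAngle, float range,
              int samples, float threshold, std::mt19937& rng) {
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    for (int i = 0; i < samples; ++i) {
        // direction uniform over the cone's solid angle
        float cosT = 1.0f - u01(rng) * (1.0f - std::cos(halfAngle));
        float sinT = std::sqrt(1.0f - cosT * cosT);
        float phi  = 6.2831853f * u01(rng);
        float dist = u01(rng) * range;   // sample distance along the ray
        Vec3 p = { dist * sinT * std::cos(phi),
                   dist * sinT * std::sin(phi),
                   dist * cosT };
        if (sig(p) > threshold) return true;   // ping!
    }
    return false;
}
[/code]

Note how tightening halfAngle with the same sample budget automatically concentrates the samples, which is exactly why a tight pattern pings a hot zone more often.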

Scanners will come in a wide variety, having different abilities with respect to detecting specific types of signatures, different ranges, different min and max sampling angles, and even different sampling frequencies. A prospecting scanner will be finely-tuned to detect even the faintest of metal signatures, and with a tight pattern (so that miners can determine precisely where to drill for high yield). A scout's scanner would probably have a much greater range, an extreme sensitivity to heat, and much higher min and max pattern angles, since they attempt to sweep a broad volume.
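All of that variety could plausibly boil down to a small bundle of numbers per scanner (again, just a sketch):

[code]
// Hypothetical per-scanner tuning parameters.
struct ScannerSpec {
    float range;            // how far out samples can reach
    float minConeAngle;     // tightest allowed pattern (radians)
    float maxConeAngle;     // loosest allowed pattern
    float samplesPerSecond; // sampling frequency
    float heatGain;         // per-signature sensitivities...
    float metalGain;
    float radioGain;
};

// Prospector: short range, tiny cone angles, huge metalGain.
// Scout: long range, wide cone angles, huge heatGain.
[/code]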

Something that I love about this theory is that it really cleanly separates concerns: we have the 'data' (signature function), the 'discovery' (scanner), and the 'visualization' (HUD).

In totally separate news, I always find it interesting how the simple act of renaming a concept in the engine can be a remarkable and enlightening experience. Tonight, I renamed the concept of a 'hardpoint' to 'socket,' and the concept of a 'subsystem' to 'plug.' Now, first of all, don't panic - socket and plug don't sound particularly spacey, so it's likely that the game will still call these things hardpoints and subsystems when it presents them to you. But within the code, they are called plugs and sockets, and I just love the way it feels. Hardpoint & subsystem don't say much. But plug & socket? Immediately, you understand that 1) there is a connection between these entities, 2) that connection may have the ability to transfer resources at a certain bandwidth, and 3) that connection probably has a certain 'shape' that restricts the compatibility of the plug and the socket. Isn't it so much more conceptually powerful than 'hardpoint'? I know, I know. Nothing groundbreaking at all. But I'm kind of crazy like that. Those simple little moments of clarity excite me so much. For some reason, all of the code surrounding hardpoints and subsystems now feels so much more...tractable! All just because the metaphor changed to plugs and sockets :D
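To show what I mean about the metaphor carrying information, here's where it naturally leads in code (hypothetical types, not the real ones):

[code]
enum class Shape { Small, Medium, Large, Utility };  // made-up categories

struct Plug;

struct Socket {
    Shape shape;                // restricts which plugs are compatible
    float bandwidth;            // max resource transfer rate of the connection
    Plug* occupant = nullptr;   // what's currently plugged in, if anything
};

struct Plug {
    Shape shape;
    bool fits(const Socket& s) const { return shape == s.shape; }
};
[/code]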

Fun fact: I knew it was time to go to bed tonight when I thought, 'ah, I'll get myself a bowl of rice krispies.' And I poured the milk first :| :o Yeah. I know. Who does that. Go home Josh, go home.

Re: Week of January 12, 2014

#3
Tuesday, January 14, 2014

Finally did it! ;) I finished that last 5% of the turret implementation (which, naturally, turned out to be more like 25%). Looks like I'm capable of finishing things after all :)

So now we've officially got full support for "recursive" hardpoints, including recursive power distribution, articulation, etc. When you equip a weapon, the engine actually first mounts the base of the turret onto the ship, then mounts the actual weapon onto that base (I call the base a "turret," but I'm sure that's an abuse of language :roll: ).

One of the nicest things about this (and one of the big motivational factors for doing it) is that I can drop a lot of code related to articulating the weapons. Previously, a weapon consisted of a base and a muzzle. The base was only allowed to swivel (rotate about a vertical axis), and the muzzle was only allowed to hinge (rotate about a horizontal axis). This gives a nice sense of realism while still allowing the gun to aim in any direction - but it took a lot of code. Now I simply have a turret object that looks like the gun's base and is attached to the ship on a Y-axis constraint, and then the weapons get attached to the turret with an X-axis constraint, and everything just happens automatically! :) No extra code.
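In sketch form, the whole articulation setup now reduces to something like this (the constraint API is made up, but it captures the structure):

[code]
struct Vec3 { float x, y, z; };
struct Object;   // stand-in for any mountable engine object

// Two chained one-axis hinges give full aiming for free:
// yaw comes from the turret base, pitch from the weapon mount.
struct Attachment {
    Object* parent;
    Vec3    hingeAxis;   // the single axis this child may rotate about
    float   angle = 0;   // current rotation about that axis
};

// turret.attachment = { &ship,   {0, 1, 0} };  // swivel (yaw)
// weapon.attachment = { &turret, {1, 0, 0} };  // hinge  (pitch)
[/code]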

Now, I haven't tried it yet, but this development also means that we could do multi-barrel guns - e.g., multiple guns fixed onto the same swivel mount, which means they would always be aiming in the same general region. Could be cool, but I don't know how much I want to expose the turret / weapon separation in the game. Might get just a bit "too detailed" if we start doing that.

In other news I fixed the particle system :) Woo.

In further news, I have a ton of stuff that needs to get done tomorrow. Wish me luck :wave:

Re: Week of January 12, 2014

#4
Wednesday, January 15, 2014

Solid day - I'm pushing closer to having a working scanner prototype (which also means one step closer to "real" mining!)

What I hadn't realized is that the bulk of the effort in building a scanner actually goes into the HUD. That's something I haven't solved yet - creating a general framework for showing things on the HUD, and understanding exactly how information gets displayed. So that's what I started working on today! :)

As you probably already guessed, the HUD is going to use the nodal interface just like everything else ;) But the HUD will be special in that it will use a fixed, projected view to position the nodes. Furthermore, HUD nodes are generally going to be more visually informative than your average node. Earlier this week I came up with the idea of advanced/custom nodes, which basically just boiled down to the idea that a node should be able to draw any number of visual parts. Right now nodes are just a circle, a ring, and some text. For the sake of unification, that's nice. But it's not quite powerful enough - nodes should be able to draw multiple rings (or none at all)! Nodes should be able to draw bars and gauges and all manner of graphical doodads if it helps them to convey whatever it is they need to convey. This is especially important for HUD nodes. They're going to need a lot of visual flexibility to effectively present information. Of course, we also just want things to look shiny, right? :D
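Concretely, the shift is from a hardcoded circle + ring + text to a node owning an arbitrary list of parts, something like this (sketch):

[code]
#include <string>
#include <vector>

// Illustrative part kinds; the real set would grow as needed.
struct NodePart {
    enum Kind { Circle, Ring, Bar, Gauge, Text } kind;
    float x, y, size;   // placement relative to the node
    float value;        // e.g., fill fraction for bars and gauges
    std::string text;   // used by Text parts
};

struct Node {
    std::vector<NodePart> parts;   // as many (or few) as the node likes
};
[/code]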

With that in mind, I started writing a new "node renderer" today, which will handle the efficient drawing of nodes while at the same time providing significantly greater flexibility by allowing nodes to draw as many pieces as they like. I'm very excited about this! :) I've said many times before that this interface is conducive to ultra-fast rendering, and that someday I would take advantage of that. The day has finally come! But my motive now is not just performance. I can't wait to be able to give nodes as many visual components as I want...it should really take the UI to the next level, breaking up the monotony of circle-and-ring nodes everywhere and allowing more information to be presented faster and more elegantly :geek: I'd also be lying if I said I wasn't excited to see just how many nodes I can pack onto the screen at once when the new renderer is finished..... :monkey: :squirrel:

Exciting times ahead for the HUD! Once it's in place, the snowball is just going to keep rolling...because HUD means scanners, scanners mean mining, mining means...getting filthy rich :D

PS ~ Oh, and I stripped like...over a thousand lines of code today by making the scene graph more elegant and hierarchical. Increased performance as well. You know, all in a day's work :lol: :geek:

Re: Week of January 12, 2014

#5
Thursday, January 16, 2014

Listen carefully folks. We have a very serious problem here :|

The nodal renderer. It's too fast. It's too. Bloody. Fast. It's going to hurt somebody. We need to take action, and we need to take action quickly. Add some Sleep() calls? Revert to the old, less-efficient system? Throw 100x the number of nodes at it? I don't know, but we need to act sooner rather than later. This ridiculously-fast UI rendering is endangering FPS hits as we know them. What will life be like without any FPS hit when we open complex interfaces!!! :cry:

No but seriously. It's fast. We all knew it would be. Here's the thing about a UI that comprises only a few basic elements: you can easily batch those elements together and draw them all with a single call. That's what I did today. Much like a particle system, those circles and rings and miscellaneous doohickeys that we call nodes can be grouped together as raw vertex data for massive efficiency gains. When I say massive, I mean massive. Not like, kind of massive. Not like, mildly massive. Like...massively massive.
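In sketch form (assuming an OpenGL-style backend with a vertex buffer already bound, and reusing the Node/NodePart shape from Wednesday; the tessellation helper is hypothetical):

[code]
#include <vector>
// (OpenGL function-loader include omitted)

struct UIVertex { float x, y; float r, g, b, a; };

// Hypothetical: turns one node part into triangles appended to the batch.
void appendTriangles(std::vector<UIVertex>& batch,
                     const Node& node, const NodePart& part);

void renderNodes(const std::vector<Node>& nodes) {
    // Tessellate every part of every node into one big triangle list...
    std::vector<UIVertex> batch;
    for (const Node& node : nodes)
        for (const NodePart& part : node.parts)
            appendTriangles(batch, node, part);

    // ...then upload and draw the entire UI in a single call.
    glBufferData(GL_ARRAY_BUFFER, batch.size() * sizeof(UIVertex),
                 batch.data(), GL_DYNAMIC_DRAW);
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)batch.size());
}
[/code]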

Massive as in, I can open about 8 or 9 split-screen copies of the system map with maybe 2-3ms extra render time. Compare that to....I mean, I don't even know. It probably would have been well over 100ms with the old system. Like I said, it's too fast and we need to act quickly to curb this reckless power :D

Needless to say, I'm excited. Not only is this UI faster and more responsive than ever, it's also a heck of a lot cleaner. The code is so small. Nodes are now capable of rendering as many pieces as they like. All in all, it's just a really, really nice state of affairs :) I'm so happy to finally have the nodal UI standing alone, not resting on the shoulders of the old UI. Previously, all the drawing was done via the old UI, which meant a very ugly-looking layer between the nodes and the screen. As of today, the nodal UI uses its own dedicated rendering layer, which is vastly more efficient and simple than the old UI ever dreamed of being (gee, have I hammered that point home yet?? :lol: ). It's clean, it's conceptually simpler, it's more direct, and, as you've come to expect by now, it will absolutely make you breakfast and walk your cat ;)

Now comes the fun part. Using it. Next step is to cleanly separate the view from the UI data so that I can use the same code for displaying normal nodal UIs as well as the projected HUD version. I'm not positive how this is going to work yet, but I can't wait to have it finished, because being able to use all of this raw nodal power to build UI functionality is just going to be...way, way too much fun. Like, stop-that-guy-he's-having-too-much-fun-at-work fun :)

Re: Week of January 12, 2014

#6
Friday, January 17, 2014

To be honest, it wasn't the day I was looking for...just can't find the breakthrough that I need for the HUD at the moment. There are a lot of questions and not enough answers! :|

There are two main points of confusion for right now. The first is the fact that some nodes are fixed in screen-space. How do we deal with them? What exactly are they? Most nodes are positioned in 3D space, but some of the HUD nodes are fixed in screen space. How does a node specify that it wants to do so? More importantly, why exactly is it that some nodes need to do so? Are they a special type of node? Will this pattern appear elsewhere or is it unique to the HUD?

Next question: how do we account for the limited screen real estate (for screen-space nodes)? Not a problem for a traditional game, but when you don't even know the maximum number of weapons / subsystems / fleet members / etc. that you'll need to display on the HUD, how can you guarantee that you can present it all on the screen (and, what's more, at any resolution)? Hmm :think:

I'm sure answers will come in time, but they'll require more brainpower. Burn all the neurons!! :D :ghost:

Honestly, this is one isolated case in which doing things with a "real" 3D cockpit would be a whole lot easier. With a real cockpit, we wouldn't have to answer these questions, because the HUD would be drawn onto surfaces in 3D space. Well...LT2, right? :)

Interestingly, the UI is now a big step closer to being ready for the Rift, as it now performs projection to 2D as one of the very last steps of rendering - everything up to that point is in 3D, so it should be fairly simple to render twice from two different angles as the Rift requires. It's not exactly a priority, but it sure will be a cool day when we get to see that thing in true 3D :)
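Conceptually, it would boil down to little more than this (hand-wavy sketch; Matrix and the helpers are invented names):

[code]
// Because 2D projection is the last step, stereo is just the same
// 3D UI pass rendered twice with offset eye transforms.
void renderStereo(const Matrix& cameraView, const Matrix& projection) {
    for (int eye = 0; eye < 2; ++eye) {
        Matrix view = eyeOffset(eye) * cameraView;  // small left/right shift
        setEyeViewport(eye);                        // left/right half of screen
        renderUI(view, projection);                 // identical 3D node pass
    }
}
[/code]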

Hoping for some conceptual breakthroughs on the HUD tomorrow, as I'm really ready to get scanning, mining, etc. underway!

Re: Week of January 12, 2014

#7
Saturday, January 18, 2014

Pretty much finished with HUD theory! :clap: Implementation is on the way.

Two big questions raised yesterday, two big answers proposed today.

First question was why HUD nodes are fixed in screen-space when others aren't. Answer: because HUD nodes are not nodes, they are viewports into nodes. Viewports are, by their very nature, screen-space entities. The minimap is a viewport into a system node, the subsystem widget is a viewport into your ship node, etc.
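In code terms, it might be as simple as this (sketch with invented types):

[code]
struct Rect { float x, y, w, h; };
struct Node;   // the nodal-UI node type
struct View;   // hypothetical description of how contents are projected

// A HUD widget is not a node - it's a screen-space window onto one.
struct NodeViewport {
    Rect  screenRect;   // fixed in screen space
    Node* source;       // the node being viewed (system node, ship node, ...)
    View* view;         // how the source's contents map into the rect
};
[/code]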

Second question: how to deal with a potentially-infinite amount of data on the HUD. Answer (sure, a bit cheap, but...): scrollable nodes. Today I figured out how to use the layout mechanism of the UI to implement scrolling quite elegantly. I haven't done it yet, but I'm excited to try it out :)
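The core of it would be something like this (untested sketch with a stand-in layout interface):

[code]
#include <vector>

struct ScrollItem {   // stand-in for a node's layout interface
    float height;
    float y;
    bool  visible;
};

// Lay items out in a column, shift them by the scroll offset, and hide
// whatever falls outside the viewport's vertical extent.
void layoutScrolled(std::vector<ScrollItem>& items,
                    float viewTop, float viewBottom, float scroll) {
    float y = viewTop - scroll;
    for (auto& it : items) {
        it.y = y;
        it.visible = (y + it.height > viewTop) && (y < viewBottom);
        y += it.height;
    }
}
[/code]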

One of the things that makes me most excited about this new HUD theory is the configurability. If HUD widgets are all just little nodal viewports, I see no reason to not allow them to be resized, moved, enabled and disabled as the player pleases. Everybody can build their own HUD that suits them perfectly! Personally, I don't find minimaps to be of tremendous use, so I'll probably just disable mine ;) Some players will want to make their subsystems viewport very large, so as to be able to see all details of their systems at once. Some will want a minimal, clean HUD with only one or two small pieces of information displayed unobtrusively. Play how you like! :) I can easily make a bunch of different HUD viewports, and then allow you to mix and match as you please to suit your play style.

The list of tasks for the month grows ever longer, yet the hours seem to grow ever shorter. A paradox of the cruelest kind! Ahh well, it is intense pressure that turns the ordinary mineral into a diamond, is it not? ;) :monkey:

Re: Week of January 12, 2014

#8
Summary of the Week of January 12, 2014
  • Implemented a dedicated renderer for the nodal UI - she's fast enough for you, old man!! :)
  • Continued work on scanner theory, including 'signatures' and 'scanning cones'
  • Finished support for nested plugs / sockets, allowing clean turret / weapon functionality
  • Major simplifications and performance improvements to scene graph code
  • Started and made good progress on HUD theory
