Week of December 22, 2013

#1
Sunday, December 22, 2013

Happy one-year anniversary!!!

One year ago on this day, the Limit Theory Kickstarter campaign ended, culminating in 5.5 thousand people donating more than three times what was asked, to help a 21-year-old college student bring his dream of a vibrant, procedural universe to life. One year later, I still couldn't possibly be more grateful that you all gave me the chance to do so. It's been a heck of a year. There have been more lines of code than I can count...but more importantly, more conceptual revelations, gameplay ideas, algorithmic triumphs, ridiculously-long-and-detailed discussion threads, and more deeply-satisfying "aha" moments than I can count! And of course, more ghosts, monkeys, and squirrels than any of us care to admit :monkey: :ghost:

It's been a long trek, and it's obviously not over yet. But there's no question that we're far beyond the point of no return. We've come too far to ever look back. Limit Theory is becoming, and will be, the radiant reality that we all wanted it to be. Thanks to you all :) :clap:

---

As for today, I'm still dropping my hours into the interface. I'm gearing up to get "serious" about the rendering piece of it. I mentioned a while back that I believe this type of interface can be rendered significantly more efficiently than a standard one, thanks to a few of its key properties. I'm looking forward to seeing just how snappy I can get it to feel, especially on huge sets of nodes (like a regional map). Currently, the region map cuts framerate roughly in half (which isn't all that bad, considering that thousands upon thousands of objects are being mapped to those nodes in real-time). I think we can do better ;)
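
For the technically curious, one plausible reading of those "key properties" is that nodes are simple, mostly-static quads. Here's a minimal C++ sketch of the batching idea (made-up types, not actual LT code): flatten the node tree into one instance buffer so the GPU can draw the whole UI in a single instanced call, instead of paying one draw call per node:

    #include <vector>

    struct NodeInstance {
        float x, y, scale;   // node position and size in UI space
        float r, g, b, a;    // tint color
    };

    struct Node {
        float x, y, scale;
        float r, g, b, a;
        std::vector<Node*> children;
    };

    // Flatten the node tree into a contiguous instance array; the GPU
    // then renders every node with one instanced draw call.
    void gatherInstances(const Node& node, std::vector<NodeInstance>& out) {
        out.push_back({node.x, node.y, node.scale,
                       node.r, node.g, node.b, node.a});
        for (const Node* child : node.children)
            gatherInstances(*child, out);
    }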

I'm also working on integrating the holographic 3D shader into the interface. Ultimately, I'd like for the nodal UI to look somewhat like the command interface used to look, except perhaps a bit more structured. Yes, I do plan on doing away with the command interface. It was cool and served its purpose, but I believe that the system map in the new UI will basically end up being the same thing (or at least, that's the plan!)...except that it'll be totally integrated with the rest of the UI, and more functional than ever! The great thing about this is that, if you want a dedicated "command interface," you can just open up your system map. But, since the nodes in the system map are the same as the nodes that will be drawn on the primary render view as an overlay (when I get the whole projected overlay thing implemented), you'll also be able to do all your commanding from first-person perspective! I love this unification. It reminds me of wayyyyy back during the KS when I showed the "tactical interface" in gameplay demo 3. With the nodal UI, the tactical interface is really just a system map that's projected into first-person, in the same way that the command interface is just a system map viewed from an arbitrary, movable camera.
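
A minimal sketch of the projected-overlay math, assuming a column-major view-projection matrix (illustrative types, not LT's actual code): each node tied to a world object just needs its object's position run through the camera's view-projection to find where to draw on screen:

    #include <array>

    using Mat4 = std::array<float, 16>;  // column-major 4x4

    struct Vec2 { float x, y; };
    struct Vec3 { float x, y, z; };

    // Project a world-space object position into pixel coordinates so an
    // overlay node can be drawn on top of it. Returns false if the point
    // is behind the camera.
    bool worldToScreen(const Vec3& p, const Mat4& vp,
                       float screenW, float screenH, Vec2& screen) {
        float cx = vp[0]*p.x + vp[4]*p.y + vp[8]*p.z  + vp[12];
        float cy = vp[1]*p.x + vp[5]*p.y + vp[9]*p.z  + vp[13];
        float cw = vp[3]*p.x + vp[7]*p.y + vp[11]*p.z + vp[15];
        if (cw <= 0.0f) return false;                  // behind the camera
        float ndcX = cx / cw, ndcY = cy / cw;          // [-1, 1]
        screen.x = (ndcX * 0.5f + 0.5f) * screenW;
        screen.y = (1.0f - (ndcY * 0.5f + 0.5f)) * screenH;  // y-down UI
        return true;
    }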

Once I get the holographic stuff integrated, I'm really looking forward to seeing what I can build in terms of a hardpoint / ship subsystems interface. It'll be so cool to see the ship in full holographic 3D, and be able to just click on the subsystem nodes, which will be overlaid / projected as they're actually positioned on the ship. It'll be exactly what a ship subsystem interface should be :geek:

:squirrel:

(Did you think I forgot about him? :roll: )
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford

Re: Week of December 22, 2013

#2
Monday, December 23, 2013

Sigh. Matrices. Those bloody matrices :(

Sometimes I like to think I'm pretty good at math. And then, I start dealing with anything involving projective spaces (aka, Mr. Projection Matrix), and all of that goes out the window :oops: It took me a solid six hours just to get the UI nodes and holographic models positioned and rendering in the same space (and it's still not perfect) :shock: That's pretty embarrassing. Really have no idea why that stuff trips me up so badly :ghost: Anyway, life happens I guess...and I'm very close to having the holographic stuff integrated. Really hope to have a sample hardpoint interface ready in time for the update :)
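
For the record, here's one way to keep the two worlds agreeing (a sketch only, not necessarily exactly what I ended up with): render the nodes and the holographic models with the same orthographic UI projection, so a hologram placed at a node's coordinate lands exactly on that node:

    #include <array>

    using Mat4 = std::array<float, 16>;  // column-major 4x4

    // Orthographic projection mapping UI space ([0,w] x [0,h], y-down)
    // to clip space, with a depth range so 3D holograms sort correctly.
    Mat4 orthoUI(float w, float h, float zNear, float zFar) {
        Mat4 m{};  // zero-initialized
        m[0]  =  2.0f / w;
        m[5]  = -2.0f / h;                        // flip y for y-down UI
        m[10] = -2.0f / (zFar - zNear);
        m[12] = -1.0f;
        m[13] =  1.0f;
        m[14] = -(zFar + zNear) / (zFar - zNear);
        m[15] =  1.0f;
        return m;
    }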

I'm still pushing towards NPC research, and I've come to the point where I need to load in the "default" metaparameters for different branches of technology (e.g., what is the "baseline" mass / integrity / capacity for a fighter, etc.). To do so I'll need my data editor back, which means I'll need editing widgets for the nodal UI! That should be fun :) As soon as I get those done, I'll have a bunch of menus for free - settings, for example, comes to mind. Can't wait to be able to play with the graphics options using the nodal UI :D
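
To make "baseline metaparameters" concrete, here's the rough shape of the data (a hypothetical structure with completely made-up numbers, just for illustration):

    #include <map>
    #include <string>

    // Baseline metaparameters per technology branch; the data editor
    // loads these defaults and lets me tweak them with editing widgets.
    struct TechBaseline {
        double mass;        // baseline hull mass
        double integrity;   // baseline structural integrity
        double capacity;    // baseline cargo / equipment capacity
    };

    std::map<std::string, TechBaseline> defaultBaselines() {
        return {
            {"Fighter",   {  25.0,  100.0,   10.0}},
            {"Frigate",   { 400.0,  900.0,  200.0}},
            {"Transport", { 900.0,  500.0, 2000.0}},
        };
    }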

I could really use a big, fat epic code day tomorrow...but with family Christmas Eve gatherings, I somewhat doubt the viability of it. Still, one can always hope for...a Christmas miracle, right? :roll: :D

Re: Week of December 22, 2013

#3
Tuesday, December 24, 2013

Well...no miracle :D But that's ok, right? :3

I hope those of you who celebrate Christmas have a wonderful Christmas Eve, and hope everyone else is enjoying the Holiday Season!!! :D

:wave: :wave: :wave:

Re: Week of December 22, 2013

#4
Wednesday, December 25, 2013

Merry Christmas! :D

I spent the greater part of the day on the road, headed to my Dad's side of the holiday celebrations. Luckily, there was still some quality work time to be had once everyone went to bed ;) Not a lot, but still some.

I fixed lens flare occlusion, which had been disabled ever since I changed techniques last week. Actually, I'm not even sure that I wrote about that upgrade...but I moved to screen-space flares, which is the way it should be (rather than world-space billboards, which are subject to tilting and projection-matrix weirdness, especially at the corners of the screen). This required a different approach to occlusion, but I fixed it today :)
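
For those wondering how occlusion works with screen-space flares, here's the shape of one common approach (a sketch, not necessarily the exact technique in LT): sample the depth buffer in a small region around the light's screen position and fade the flare by how much of it the scene covers:

    #include <functional>

    // 'sampleDepth' is a stand-in for however the depth buffer is read.
    // Returns 0 when the flare is fully occluded, 1 when fully visible.
    float flareVisibility(float lightX, float lightY, float lightDepth,
                          const std::function<float(float, float)>& sampleDepth) {
        const int   kGrid   = 4;      // 4x4 sample pattern
        const float kRadius = 8.0f;   // pixels around the light
        int visible = 0;
        for (int i = 0; i < kGrid; ++i) {
            for (int j = 0; j < kGrid; ++j) {
                float dx = kRadius * (2.0f * i / (kGrid - 1) - 1.0f);
                float dy = kRadius * (2.0f * j / (kGrid - 1) - 1.0f);
                if (sampleDepth(lightX + dx, lightY + dy) >= lightDepth)
                    ++visible;        // scene is behind the light here
            }
        }
        return float(visible) / float(kGrid * kGrid);
    }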

I'm having trouble figuring out exactly how I want to approach node-based "editing" widgets. Specifically, I'm trying to figure out the most elegant architecture for defining how a node can interact. I need to be very careful and intentional about it, since this will influence the whole shebang. Much like a widget in a "typical" UI can react to messages like mouseover, keypress, etc...I need to figure out exactly what kinds of messages a node can receive, and, in turn, how it can turn those messages into functionality. For example, the widget for modifying a continuous numeric value - what should it look like? Typically this is done with a "slider" widget. I'm thinking maybe we could have a parent node, and then a connected child that can be dragged along one axis. Pretty much a slider...but with nodes :) Dragging all the way on top of the parent corresponds to the minimal value, dragging all the way out corresponds to the maximal. Maybe the parent also has a gauge-like radial effect around it to give visual feedback. But how does dragging work with other input devices? And what about more complex widgets like a text field? How would that work?
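
Just to make the question concrete, here's the kind of minimal message set I'm weighing (a sketch; every name here is hypothetical):

    struct Vec2f { float x, y; };

    // The open question: which of these does a node need, and how does
    // each message get translated into functionality per node type?
    struct NodeWidget {
        virtual ~NodeWidget() = default;
        virtual void OnMouseOver() {}
        virtual void OnMouseOut()  {}
        virtual void OnClick()     {}
        virtual void OnKeyPress(int key) { (void)key; }
        // Drag delta in UI space: a slider projects it onto its axis, a
        // color picker maps it over a gradient, a 3D position widget
        // applies it in the hologram's plane.
        virtual void OnDrag(Vec2f delta) { (void)delta; }
    };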

Lots of fun questions like that to be answered, but they're the kinds of questions that I love - because it feels like, with each step forward, I kill hundreds of birds with one stone :D I'm just so in love with the concept of a generic interface! :)

Tomorrow should be a better work day now that I've arrived and am all set up with my desktop (although I'm definitely still looking forward to getting back to "the cave" in a few days) :thumbup:

Re: Week of December 22, 2013

#5
Thursday, December 26, 2013

Finally, a great day again! :)

After too many hours of embarrassment, I finally managed to tackle the whole issue of different coordinate spaces (I have conquered the feared projection matrix :ugeek:), and now have my holographic renderings sitting happily among the nodes :) After some refinement and enhancements to the old hologram shader, it's looking more futuristic and shiny than ever! It's absolutely too cool seeing the model and the nodes in the same space. For a spatial thinker like myself, it's just the best :thumbup: Actually having a sense of where a certain item in my cargo is in relation to the 3D model of my ship is such a great thing! As you've probably figured out, I love the idea of giving spatial structure and layout to things that aren't spatial in nature ;)

I'm working on the hardpoint viewer now, although I don't have it looking too great yet. Actually, I haven't managed to get the hardpoint nodes aligned to the model :roll: At each step, it's challenging to keep the node code ( :lol: ) (as in, the code required to define a new type of node) as simple as possible while still allowing everything to look good across all of the different interfaces. But that's the cool challenge of developing a unified solution...it's got to be carefully crafted to solve each problem as thoroughly as possible while still maintaining a minimal solution. The pinnacle of design problems :) The hardpoint viewer is far enough along, though, that it should certainly be ready for the update.

As of this moment I'm working on the "drag" code, as I've decided that I want to use dragging as much as possible for node interactions. I think I can get a lot of mileage out of allowing nodes to respond to a simple OnDrag event. That would immediately solve a lot of widgets like the slider, color picker, and even a 3D position widget. Furthermore, I think it will map naturally to other input devices, since the drag direction can be equated to the primary axis of a joystick or gamepad. It's also very minimal and simple. Let's see how much we can squeeze out of it! I should have the code working in a few minutes and am looking forward to exploring the possibilities.
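
Here's roughly what the node slider looks like on top of OnDrag (a toy sketch with made-up fields, not the real node code): dragging the child fully onto the parent gives the minimum, fully out gives the maximum:

    #include <algorithm>

    struct Vec2f { float x, y; };

    struct SliderNode {
        Vec2f axis{1.0f, 0.0f};   // drag direction in UI space (unit length)
        float length = 100.0f;    // UI-space distance from min to max
        float minVal = 0.0f, maxVal = 1.0f;
        float value  = 0.0f;

        // Project the drag onto the slider's axis; equating that axis
        // with a joystick or gamepad axis is what lets the same widget
        // work across input devices.
        void OnDrag(Vec2f delta) {
            float t = (delta.x * axis.x + delta.y * axis.y) / length;
            value = std::clamp(value + t * (maxVal - minVal), minVal, maxVal);
        }
    };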

I need to get back to AI ASAP, but I'm making a lot of headway on the UI at the moment...maybe I can split it half-and-half today :geek:

Re: Week of December 22, 2013

#6
Friday, December 27, 2013

Not the perfect split day that I was hoping for, but still a productive one. I didn't get around to AI, as that sneaky old UI drew me in for another full day :geek:

I finished the implementation of dragging, so that nodes can now respond to it. This immediately opened up the door for some cool stuff: world editing! It took only three lines of code to modify the object nodes to respond to dragging by actually modifying the position of their corresponding objects. This allowed me to pop into the system map and literally rearrange the objects however I pleased. I must admit, being able to drag the jump holes right in front of me is a nice ability :lol: If nothing else it was a fun test of the dragging capabilities ;) This capability also immediately solves a lot of other problems. For example: the formation editor!
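
For flavor, here's a hypothetical reconstruction of what that handler amounts to (not the literal three lines, but the same spirit):

    struct Vec3f { float x, y, z; };
    struct Object { Vec3f position; };

    struct ObjectNode {
        Object* object = nullptr;

        // Dragging the node in the map moves the real object, so
        // rearranging the system map literally rearranges the system.
        void OnDrag(const Vec3f& worldDelta) {
            object->position.x += worldDelta.x;
            object->position.y += worldDelta.y;
            object->position.z += worldDelta.z;
        }
    };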

I worked a lot more on improving the 3D-ness of the interface. It's now using full 3D matrix math to represent node orientations and relationships, as opposed to some half-baked scale + translation stuff. Getting more precise about the coordinate spaces allowed me to finally get those hardpoints lined up! The hardpoint viewer is looking pretty great :) It also opens the door for more complex layouts, for example, using 3D rotations.
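
In sketch form (illustrative column-major matrices, not the actual node code), the upgrade amounts to each node carrying a full local transform and composing it with its parent's:

    #include <array>
    #include <vector>

    using Mat4 = std::array<float, 16>;  // column-major 4x4

    Mat4 mul(const Mat4& a, const Mat4& b) {      // returns a * b
        Mat4 r{};
        for (int c = 0; c < 4; ++c)
            for (int row = 0; row < 4; ++row)
                for (int k = 0; k < 4; ++k)
                    r[c*4 + row] += a[k*4 + row] * b[c*4 + k];
        return r;
    }

    struct UINode {
        Mat4 local;   // rotation + scale + translation, all in one matrix
        Mat4 world;
        std::vector<UINode*> children;

        // Full 3D parent-relative layout: rotations now compose properly,
        // unlike the old scale + translation shortcut.
        void updateWorld(const Mat4& parentWorld) {
            world = mul(parentWorld, local);
            for (UINode* c : children)
                c->updateWorld(world);
        }
    };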

Tomorrow I'll finally get back to the coding cave, and I'm looking forward to an epic codesprint to the end of this month!! :D

Re: Week of December 22, 2013

#7
Saturday, December 28, 2013

Spent most of the day driving, and spent the other portion of the day tearing up delegation :ugeek:

I've almost arrived. I'm like, in the parking lot of delegation. Or something like that. Here's the deal: you create a plan to achieve something. Along the way, you have a certain number of goal nodes. Based on the metrics involved in the goals, they can be delegable or not. If they're marked as delegable, then those nodes are considered "parallel," and can be offloaded to other people. I'm not yet 100% positive that this loose a notion of parallelism is rigorous enough to work in all cases, but it seems to work well for the situations that I'm imagining. So far, the only metrics that I've been able to identify as delegable are "ItemCount" and "Health." As far as I can tell, those are totally parallel. More will come in the future, of course. If at some point you'll need some item to carry out your plan, then it's safe to delegate acquiring it to someone else at any point, right?
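
In data terms, the structure I'm describing looks something like this (a minimal sketch; names illustrative):

    #include <string>
    #include <vector>

    // So far only ItemCount and Health look safe to delegate.
    enum class Metric { ItemCount, Health, Position /* , ... */ };

    bool isDelegable(Metric m) {
        return m == Metric::ItemCount || m == Metric::Health;
    }

    struct GoalNode {
        std::string description;
        Metric metric;
        std::vector<GoalNode> subgoals;

        // "Parallel" goals can be offloaded to someone else; the rest
        // must be executed serially by the planner itself.
        bool parallel() const { return isDelegable(metric); }
    };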

This leads me back to the idea I had a few weeks ago with respect to "resource scheduling." I really do think that's the right way to think of delegation / parallel reasoning. Take a plan - now execute the serial nodes as you normally would, but "schedule" the parallel nodes with some kind of best-fit algorithm. E.g., look at all the tasks in your plan that are independent of the serial nodes, and then match them up with the best resources for executing them. You've got some subordinates who are particularly good at combat, so you fit them to the nodes that require combat. You've got some couriers, so you get them to perform some deliveries. So on and so forth. This gets into another issue that I've been wondering about for a while: the "efficiency" of task performance is influenced by certain state variables, but how do we encode that knowledge? Trading is influenced by cargo capacity, combat by offensive and defensive capabilities, navigation by engine capacity, etc. To really allocate resources efficiently, one must have an understanding of this "efficiency," such that resources can be fitted to their most appropriate task. That's a bit far off - right now I just need to get basic resource scheduling working - but it's good to be thinking about it already :)
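
A toy version of the best-fit idea (made-up fields and efficiency function, purely to show the shape of the algorithm): score every agent/task pair and repeatedly take the best remaining fit:

    #include <cstddef>
    #include <vector>

    struct Task  { float combatNeed, cargoNeed; };
    struct Agent { float combatSkill, cargoCapacity; bool busy = false; };

    // Efficiency estimate: in LT terms this would be derived from state
    // variables like cargo capacity or offensive/defensive capability.
    float efficiency(const Agent& a, const Task& t) {
        return a.combatSkill * t.combatNeed + a.cargoCapacity * t.cargoNeed;
    }

    // Greedy best-fit: assign each parallel task to the most efficient
    // agent still available; returns one agent index (or -1) per task.
    std::vector<int> schedule(std::vector<Agent>& agents,
                              const std::vector<Task>& tasks) {
        std::vector<int> assignment(tasks.size(), -1);
        for (std::size_t ti = 0; ti < tasks.size(); ++ti) {
            float best = -1.0f;
            for (std::size_t ai = 0; ai < agents.size(); ++ai) {
                if (agents[ai].busy) continue;
                float e = efficiency(agents[ai], tasks[ti]);
                if (e > best) { best = e; assignment[ti] = int(ai); }
            }
            if (assignment[ti] >= 0)
                agents[std::size_t(assignment[ti])].busy = true;
        }
        return assignment;
    }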

There's an interesting question that I ran into today concerning delegation, which I call 'faith.' That is, how much do you trust that someone else can get something done? Previously (and somewhat currently), my plan was to 'have no faith' :lol: As in, NPCs need to have a full plan for accomplishing something; if someone else can solve some part of the plan for them in parallel, then it's just a bonus. That works well for a lot of small scenarios, but not necessarily for a faction leader, who should be dreaming of big goals that he can't necessarily accomplish on his own. At some point, I think one must have some degree of faith in one's associates. Perhaps this is something that builds along with reputation. Perhaps it's based on some historical metric of how good the person is at completing tasks. At any rate, it's an interesting question :)
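
One speculative way to model it (nothing more than a sketch of the idea): keep a running trust score per associate, nudged by each task outcome:

    // Purely speculative sketch; the post above only raises the question.
    struct TrustRecord {
        float faith = 0.5f;   // start out neutral

        // outcome: 1.0 = task completed, 0.0 = failed or abandoned.
        // Exponential moving average: recent history counts most.
        void recordOutcome(float outcome, float rate = 0.1f) {
            faith += rate * (outcome - faith);
        }

        // Delegate only the goals we have enough faith to hand over.
        bool willDelegate(float requiredFaith) const {
            return faith >= requiredFaith;
        }
    };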

Tomorrow will undoubtedly be a long day of AI :geek:

Re: Week of December 22, 2013

#8
Week Summary

(Here's something new...written a whopping 2 weeks later!! :shock: :? )

I'm not sure anyone expected Christmas week to be the most productive in LT development, but with delegation theory finally coming to an end and a whopping number of new features in the UI, it certainly wasn't a week to sneeze at ;) I'm happy to keep pushing the features of the UI, because it feels...well, it feels so right to have all of this powerful, generic functionality at my disposal.

Something (and definitely not my two weeks of future knowledge :roll: ) tells me you guys will end up liking the holographic node effects in the dev update :D :clap:

Accomplishments
  • Implemented holographic rendering in UI
  • Implemented dragging in UI
  • Implemented a slider widget node
  • Developed a theory of delegation via "resource scheduling"
  • Fixed lens flare occlusion
  • Had a nice Christmas with family and friends :)