
Week of June 16, 2013

#1
Sunday, June 16, 2013

Summary

It hung before me. Shiny. Dazzling, rather. It teased me. Promises of riches. Promises of a new life. A home for my family. School for my children. It was all right there. A shiny - no, dazzling - chunk of rock. No doubt others had passed by here. Others had searched. But in the darkness of the sector, they had missed what stood before me now. They thought the field clean of all riches. But it was not. Before me - right before my very eyes - hung a melange of rare minerals that would see me and my family through the next few years, at least. As a nearby planet poised itself to drape the field in an icy shade, blocking all primary light for the next sixteen hours, I gently tapped the control panel of my humble vessel, readying the mining equipment. I prepared for what would surely be a joyous night in the field.

Yep, it's true, exaggerated as it may be: I mined and sold my first 30 units of "unknown ore" today! It sure wasn't pretty. The mechanic right now is just a simple test, based solely on collision. Hit an ore-laden asteroid with something - anything (a pulse, a beam, even your ship's hull...) - and it will emit a piece of ore. Scoop it up with your ship, and it's yours! That'll get more interesting; it was primarily a test scenario :) But a successful one nonetheless! I was very pleased to see that some of the nicer chunks of "unknown ore" sold for 1000 credits a pop. One of the only serious parts of this test was the graphics. I'm toying with various graphical effects at the moment to convey asteroid richness, and I've come to a fairly decent little variation on the standard asteroid. It looks slightly shinier...not highly obvious - you'd have to know what you're looking for. I like this, as I want to maintain an interesting balance of naked-eye prospectability for some asteroids. Of course, I haven't even embedded visual chunks of ore in the exterior yet, so it will only get better. But ideally, it will be a subtle effect that one could learn to identify with practice, as per the original KS pitch - "You can spot the shimmering reflection off of asteroids a lightyear away. You know the color and composition of each and every raw ore that the generous galaxy provides." It would be very compelling to have a somewhat unique visual style for every ore, such that prospecting could be, for those who choose to shun the technology, a matter of sheer skill and experience. Naturally, fancy scanning equipment would usually do a better job than the eyeball. But it could prove fun even still. I would probably prospect without the equipment :) "Let go, Luke."

I solved several memory errors today that have been plaguing the engine for too long. One was causing a crash at exit on occasion, the other preventing me from writing a certain function in the most natural way. I'm particularly happy about the former, as crashes always make me uneasy. Overall, though, I'm feeling pretty good these days about memory in the LT engine. There was a time when I was quite scared that everything would go wrong and break when I extended out into world simulation. With all the cleaning and simplifying that I've been doing over the past few months, though, things are finally becoming conceptually simple enough for me to keep in my head without any issues. I feel as though I have a very solid grasp on all of the various memory paradigms that I'm using to power the engine. That's a good thing, because with a hundred or so systems being simulated at once and a few thousand AI actors touching them all, it's going to be crucial that I understand exactly how each piece of memory is being handled.

I figure I'll start bottom-up on the economy. Build the most basic resources that are at the core of everything (ore), build the AI to handle it, then move up to the next level of the economy (processed materials), rinse and repeat until I'm dealing with entire factions and a global economy of every type of good imaginable. A lot of the details are still fuzzy, but I'm certain the next few months will bring about some fantastic opportunities for forum discussions on the various mechanics.

Now, if I could just get in gear here for the rest of this month, maybe we would actually see a living economy happening soon. Someone get the cattle prod. We've got work to do.

PS ~ Still haven't written the weekly summary. I'm so tired! Rawrr. Maybe in the morning...not that it's highly important :squirrel:

[ You can visit devtime.ltheory.com for more detailed information on today's work. ]
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford

Re: Week of June 16, 2013

#2
Monday, June 17, 2013

Summary

Well, I grinded pretty darn hard all day, and still didn't get an economy set up... :?

Truth is, this move to the AMD card is hitting me pretty hard. It takes 4-5 seconds just to start up the game on this computer, and that makes me really uncomfortable. I'm going to have to get a little fancier with my resource management and scheduling. Some more profiling today has confirmed, yet again, that driver calls are killing me. I estimate a very rough 50% or so of startup time being spent in the call to glLinkProgram. Again, I am faced with a decision that I don't want to make. I could spend some time improving the way that scalar fields are written out to code, in the hopes that holding the driver's hand a little more tightly would help it compile faster. But that would just be like shooting squirrels in the dark**. Not only do I not like shooting squirrels, but I also don't like handling firearms in the dark. So overall it doesn't seem to be an appealing solution :squirrel:

On the other hand, my other option is to do it in another thread. Back in metaphor land, this would be about like cleaning plankton off of a shark's molars with a toothpick while simultaneously holding one's bleeding hand up to the shark's nose (interesting question: do sharks use noses to smell blood??) Perhaps, in less absurd terms, you might say that I'm wedged between a rock and a hard place. But, I did some more research on this hard place today, and it may turn out better than I was hoping. I can get another GL context set up in another thread without a problem. Sharing resources between GL contexts is platform-dependent, hence, not really an option without large amounts of death and suffering. BUT, I realized today, that it doesn't really matter! I've been worried about this because it would mean that, even if I do all of my shader linking in another thread, I still wouldn't be able to use the shaders in my main thread. Ok, derp, Josh, think a bit harder there buddy. Why are you linking super-intense shaders in the first place? ...to generate procedural stuff. And what is procedural stuff? ...meshes, textures. And can those be transferred around wherever you like? ....ermm yeah, if you put them on the CPU. Ok, good, now tell me, is it expensive to transfer stuff between the CPU and GPU? ...well, that depends on in what context you're tal...no no Josh, I mean, compared to glLinkProgram on a massive shader, is it expensive? ..well, no, not at all, it's basically instant compared to that. ....!!! You're right! It doesn't matter if we can't access the GPU resources, the only reason we need them in the first place is to generate assets, which are easy enough to transfer back and over to the primary context! Why was I ever worried about it?? (Ok well don't get too excited there buddy, you still have to worry about the fact that none of your framebuffer code is thread-safe...)

So it may be the case that scrubbing the shark's teeth turns out easier than expected. However, to do so, I have to get a suitable toothpick first :think: Which means that I need to make sure that all assets can be transferred seamlessly between the CPU and GPU. I was clever when I wrote the mesh code, and it seems that meshes will have no problem with this, because they are automatically uploaded to the GPU. However, for textures, I was not quite as smart, and used only GPU memory. Maybe I was thinking about optimization. Nonetheless, I started upgrading textures today so that they can support living on either the CPU or GPU without any fuss. I pretty much finished, but there is still an issue with the deferred uploading that is causing some textures to go pretty wonky (neon green, glowing/radioactive ship plating..."it's a feature, not a bug!!") Once I get that fixed, though, we'll be one step closer to a threaded asset generator, which is one step closer to an engine that can handle smooth generation of a massive universe. And, maybe just maybe, I can get the game to load in 1-2 seconds :P (But seriously, I promise this isn't all just for a few seconds. Having a threaded asset generator should be a big win for the overall smoothness of the game, and will also likely mean fewer loading screens in general :thumbup: )
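To make the plan concrete, here's a minimal sketch of the hand-off I have in mind (all names here - `CpuAsset`, `AssetQueue`, `generateAsset` - are hypothetical stand-ins for the real engine types; the worker thread would own a second GL context for linking and generation, but only CPU-resident buffers ever cross the thread boundary):

```cpp
#include <cassert>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Hypothetical CPU-side asset: the worker produces plain memory buffers;
// only the main thread ever touches the primary GL context.
struct CpuAsset {
    std::string name;
    std::vector<unsigned char> pixels;  // e.g. procedural texture data
};

class AssetQueue {
public:
    void push(CpuAsset asset) {
        std::lock_guard<std::mutex> lock(mutex_);
        ready_.push(std::move(asset));
    }

    // The main thread drains finished assets each frame and uploads them
    // to the GPU (the cheap part, compared to glLinkProgram).
    bool tryPop(CpuAsset& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (ready_.empty()) return false;
        out = std::move(ready_.front());
        ready_.pop();
        return true;
    }

private:
    std::mutex mutex_;
    std::queue<CpuAsset> ready_;
};

// Worker-side generation: in the real engine this would link and run shaders
// in the worker's own context; here a stand-in just fills a buffer.
inline void generateAsset(AssetQueue& queue, const std::string& name, int size) {
    CpuAsset asset;
    asset.name = name;
    asset.pixels.assign(static_cast<std::size_t>(size) * size * 4, 128);
    queue.push(std::move(asset));
}
```

The important property is simply that nothing GL-flavored crosses the thread boundary - just bytes.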

Now then, did I do anything non-technical today? Yes, as it turns out! I finally implemented varying fog density! Along with it, I improved the fog shaders a bit and, in general, beat the whole fog system back into shape. There's still one big piece missing: when you're in a non-foggy part of the world, you need to be able to see the fog in foggy parts of the world. Er, that made sense..right? Like in Freelancer, you could see the big dust clouds before you ever entered them. I still need to do that. It's interesting, because the thing you're trying to fake is really an incredibly expensive operation..volumetric integration of the fog density. Yet, it would seem that one can pull this off in a very convincing way with billboards, which, IIRC, is what Freelancer did. I may need to go back and play a little bit to get a feel for it ;)

[ **No squirrels were harmed in the making of today's dev log. ]

[ You can visit devtime.ltheory.com for more detailed information on today's work. ]

Re: Week of June 16, 2013

#3
Tuesday, June 18, 2013

Summary

Overall a great day! I did some miscellaneous stuff including working on asteroid field dust clouds, improving particle effects, and upgrading the random number generator, but I would prefer to talk about the main theoretical excitement that came today.

It was certainly a good day for thinking, what with rain pouring down outside. It just felt like a good time to be quiet and think, so I did. And I'm glad I did! I had a mini-revelation today about AI reasoning and out-of-system simulation. The revelation was: they're the same thing :shock: Yeah. I'm excited about this.

So, in AI reasoning, you have an AI actor who is thinking about the world. The way this works in LT is that the AI is thinking about, hypothetically, what actions it could take, how these would affect the world, and how favorable the result would be. Classic AI stuff. But that middle part - how actions affect the world - that's kind of the crux of everything. Figuring out what actions are available is easy, and an opportunity to employ any number of clever tricks / heuristics. Figuring out how much you like the results is also easy. Every actor has its own set of values that influences the way it values the world. But the act of valuing a world is just evaluating some kind of function on that world. Easy. The hard part is in the middle. If you take an action, what will happen? This requires a combination of intelligence, intuition, experience, etc. Nontrivial.

Now let's think about out-of-system simulation. The idea of out-of-system simulation is that we don't want to run the universe simulation at full granularity in every system. We can't, it's just too expensive. Besides, if a tree falls in the forest and no one is around to hear it, does it really need to fall?? Nah. Just replace the tree with a log, pad the grass down a little bit, snap a few branches off of surrounding trees, and call it a day. No one will know that it wasn't the real deal. You might say that OOS simulation is an "abstract" simulation. It does not deal in concrete details. Your laser missed by two inches? Who cares. Not OOS simulation. Your laser had an 80% probability of hitting given your angular velocity with respect to the target? That's more like it! OOS simulation is all about abstract actions acting on an abstract world. When the player enters that world, all of the abstractions are then resolved to concrete details, and it appears to the player as though time had been passing normally.

Hold up. Abstract actions acting on an abstract world. Sounds familiar. Doesn't that sound a bit like what an AI does when it tries to understand the consequences of an action? ...! Yes, actually, it sounds exactly the same. There you have it. AI reasoning and OOS simulation are the same thing. They are the application of abstract actions to an abstract world. In the case of the AI, this world is purely a conceptual model that remains in the AI's head. In the case of OOS simulation, this world is actually the real deal, and the results of the simulation are used to modify the concrete world. In either case, the exact same code can be used.
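In code terms, the shared core might look something like this toy sketch (everything here - `AbstractWorld` as a map of scalar attributes, `simulate`, the action type - is an illustrative simplification, not the actual LT representation):

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <string>

// An abstract world: just named scalar attributes. The real model is richer,
// but the point is that it deals in aggregates, not concrete details.
using AbstractWorld = std::map<std::string, double>;

// An abstract action: "mine some amount of a resource and sell it."
struct AbstractAction {
    std::string resource;
    double amount;        // how much to extract
    double pricePerUnit;  // what it sells for
};

// The single simulation step shared by both systems: apply an abstract
// action to an abstract world, producing a new abstract world.
AbstractWorld simulate(AbstractWorld world, const AbstractAction& action) {
    double mined = std::min(world[action.resource], action.amount);
    world[action.resource] -= mined;
    world["credits"] += mined * action.pricePerUnit;
    return world;
}

// AI reasoning: run the step on a *copy* of the world - a thought - and
// score the hypothetical result with the actor's value function.
double valueOfAction(const AbstractWorld& world, const AbstractAction& action) {
    AbstractWorld hypothetical = simulate(world, action);
    return hypothetical.at("credits");  // this actor happens to value credits
}

// OOS simulation: run the *same* step, but the result becomes the real world.
void advanceOutOfSystem(AbstractWorld& world, const AbstractAction& action) {
    world = simulate(world, action);
}
```

Same `simulate`, two callers: one keeps the result in its head, the other commits it.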

This is beautiful for so many reasons. It creates a coupling between two of the most conceptually difficult pieces of this game. Two of the hardest problems have just become one! Can't beat that. Perhaps even better, it reduces the problem of building highly-accurately-reasoning AI to the problem of building a highly-accurate OOS simulation. Here's the kicker: the latter is far more measurable than the former. How do you test an AI to know if you've done a good job? Hard. How do you test an OOS simulation to know if you've done a good job? Run a real sim and an OOS sim on the same system, compare. The degree of similarity of the results tells you how accurate your OOS sim is. You have a very real, very measurable metric for the OOS sim quality. But this translates directly into the AI's reasoning ability!!! All you need to do is build a great OOS sim, and the AI gets intelligence for free.

Concretely, think of it this way. You have two AI commanders about to engage in a large-scale battle. In their minds, they are both weighing the odds. In their minds, they try to understand what the outcome will be. They do so via the exact same process that the game will use to determine the outcome if this clash happens in another sector. This gives them both perfect knowledge of the probable outcome, allowing them to make very informed decisions, basically by definition!!

One note. "Perfect knowledge" might scare some people. It scared me when I thought about it. But perfect knowledge is different from the ability to predict the future. I have "perfect knowledge" that when I flip the quarter sitting on my table up in the air, it will come back down on heads with .5 probability, and on tails with .5. I have perfect knowledge of that action. But note that this is different from knowing the future - I DON'T know what the result will be. In the same way, an AI commander would be able to accurately evaluate the probability of success in battle. This is NOT the same thing as looking into the future and using it to cheat! So there's nothing to be afraid of there. The AIs will not be gods, just very good judges of probability.
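Said differently: the commander's "knowledge" is just the ability to roll the abstract dice many times before committing. A toy sketch of what that might look like (the combat model and all names here are made up for illustration):

```cpp
#include <cassert>
#include <random>

// Toy abstract battle: each volley lands with some probability - the "80%
// chance of hitting given your angular velocity" level of abstraction.
// Returns true if the attacking fleet wins this particular roll of the dice.
bool abstractBattle(int attackers, int defenders, double hitChance, std::mt19937& rng) {
    std::bernoulli_distribution hit(hitChance);
    while (attackers > 0 && defenders > 0) {
        if (hit(rng)) --defenders;                   // attacker volley
        if (defenders > 0 && hit(rng)) --attackers;  // defender volley
    }
    return defenders == 0;
}

// "Perfect knowledge" = knowing the distribution, not the outcome: estimate
// the win probability by running the abstract sim many times.
double estimateWinProbability(int attackers, int defenders, double hitChance,
                              int trials, unsigned seed = 42) {
    std::mt19937 rng(seed);
    int wins = 0;
    for (int i = 0; i < trials; ++i)
        if (abstractBattle(attackers, defenders, hitChance, rng)) ++wins;
    return static_cast<double>(wins) / trials;
}
```

The commander sees a probability, never a certainty - exactly the quarter-flip situation.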

Of course, this is all very theoretical. We all know how AI tends to end up in practice :roll: One can hope that the translation from theoretical to practical goes a little more smoothly in my case :geek:

PS ~ Keep it secret. Keep it safe :silent:

[ You can visit devtime.ltheory.com for more detailed information on today's work. ]

Re: Week of June 16, 2013

#4
Wednesday, June 19, 2013

Summary

Not as big of a day as I wanted. I'm having a really hard time getting rolling on "everything"...it's like everything from here on out is really tightly interconnected. A massive problem to solve, and it's hard to see how to solve sub-problems without having the whole thing done :shock: I want to move forward with AI, but what does the AI do if not perform operations that are deeply dependent on the economy? I want to move forward with the economy, but the economy is created and driven by the AI. I want to move forward with the world and the foundations for resources, but how do I generate it all to respect balance? The only way to measure balance would be to run a simulation. So it's like one gigantic, interconnected web of things to solve...I am lost in it! :geek: In a good way, of course. I guess I'll just kind of round-robin these problems until one of them starts to break :D

That being said, I focused on AI today. In particular, I'm trying to grasp time, and how time plays into the AI's reasoning. It's a little bit of a tricky subject, because there are really two distinct types of actions, thanks to time. At least I think there are. There's an action like "buy X from Y." This takes some amount of time, and then it's done. But what about mining? "Mine X" ..how long does it take? That's up to you. It introduces a continuous degree of freedom into the reasoning. You can no longer just pick a discrete action - you now have to fill in a parameter. But the parameter is continuous, so it's a nasty ordeal. Introducing any continuous parameter into the AI's reasoning space is a bad move, because it explodes the amount of "space" that the AI needs to search in order to find a good life plan. Maybe you can quantize it...what would happen if I were to mine for 15 minutes? 30? 60? Etc. Or maybe you can quantize it "intelligently" - "Mine until cargo is full," etc. But then you reduce the overall freedom of the AI to reason. It's a tough call, and I can't quite figure out how to deal with it yet. My intuition is to go with intelligent quantization. I'll just need to try it, I guess :think:
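To make the two flavors of quantization concrete, a quick sketch (all names hypothetical):

```cpp
#include <cassert>
#include <string>
#include <vector>

// A candidate action with its continuous parameter already resolved.
struct MineAction {
    std::string description;
    double minutes;
};

// Fixed quantization: sample the continuous time axis at arbitrary points.
std::vector<MineAction> fixedQuantization() {
    std::vector<MineAction> actions;
    for (double m : {15.0, 30.0, 60.0})
        actions.push_back({"mine for a fixed duration", m});
    return actions;
}

// "Intelligent" quantization: derive durations from meaningful world states
// (cargo half full, cargo full) instead of arbitrary clock times.
std::vector<MineAction> intelligentQuantization(double cargoCapacity,
                                                double currentCargo,
                                                double oreRatePerMinute) {
    std::vector<MineAction> actions;
    double freeSpace = cargoCapacity - currentCargo;
    if (freeSpace <= 0.0 || oreRatePerMinute <= 0.0) return actions;
    actions.push_back({"mine until cargo half full",
                       (freeSpace / 2.0) / oreRatePerMinute});
    actions.push_back({"mine until cargo full", freeSpace / oreRatePerMinute});
    return actions;
}
```

The second version always yields a small, situation-aware candidate set, at the cost of hiding durations that don't correspond to any "natural" stopping point.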

I worked for a while on the AI's abstract model of the world today. I think it's going to need a better / more powerful representation as we move forward and prepare for "real" AI. Right now it's pretty cumbersome to introduce new attributes into the world. Today I started doing some rewriting, taking some inspiration from my recent endeavor into functional programming! I've got a much more elegant solution for reasoning about abstract / hypothetical worlds, but I'm not totally convinced that it will be performant enough. It also requires a bit of C++ heavy-lifting - in particular, solving the "polymorphic equality" problem. It took me a while, but I came up with a solution that I really like. The only solutions I could find hanging around the internet involve dynamic_cast, and I understand that this is a very slow operation. Can't have slow operations in the core of the AI! My solution should be significantly faster. I'm not 100% positive that it's portable, so I'll need to keep an eye on it and do some more tests, but I think it will work nicely :thumbup:
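For context, the standard dynamic_cast-free pattern for polymorphic equality compares the dynamic types once via typeid, then uses a static_cast, which is safe after the check. A generic sketch - not necessarily the exact solution I ended up with:

```cpp
#include <cassert>
#include <typeinfo>

// Base class for attributes in an abstract world model.
struct Attribute {
    virtual ~Attribute() = default;

    // Polymorphic equality: if the dynamic types differ, the attributes can't
    // be equal; if they match, static_cast to the derived type is safe.
    bool equals(const Attribute& other) const {
        return typeid(*this) == typeid(other) && equalSameType(other);
    }

protected:
    // Precondition: `other` has the same dynamic type as `*this`.
    virtual bool equalSameType(const Attribute& other) const = 0;
};

struct OreRichness : Attribute {
    explicit OreRichness(double r) : richness(r) {}
    double richness;

protected:
    bool equalSameType(const Attribute& other) const override {
        return richness == static_cast<const OreRichness&>(other).richness;
    }
};

struct FactionAllegiance : Attribute {
    explicit FactionAllegiance(int id) : factionId(id) {}
    int factionId;

protected:
    bool equalSameType(const Attribute& other) const override {
        return factionId == static_cast<const FactionAllegiance&>(other).factionId;
    }
};
```

On most implementations the typeid comparison boils down to a pointer (or string) compare, which is far cheaper than a dynamic_cast walk of the hierarchy.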

I'm excited to see where tomorrow will take me....economy? AI? World? Who knows :geek: I just hope one of these pieces starts showing some cracks soon so that I can attack it in full-force!

[ You can visit devtime.ltheory.com for more detailed information on today's work. ]

Re: Week of June 16, 2013

#5
Thursday, June 20, 2013

Summary

Spent a lot of the day drawing up ideas for the economy, production, blueprints / technology, and a few other things. Overall, I'm feeling really good about some things that were fuzzy in my mind before. Unfortunately, I've also come to the realization that I did items the wrong way again. Will I ever get it right? I think I have it right this time. Sadly (and I mean...very sadly), I need to undo some of the work I did last week. But not all will be lost! Items will be simpler than ever. I can drop all the mathematical hackery that I was planning to use to produce coherence. The new system is going to be clean and coherent :clap:

Now a few words about blueprints! They've been on my mind lately. What do you require to build something? Obviously some form of resources, the right production equipment, and then...the idea / technology. I was sort of assuming until today that this would just mean having a blueprint item for each object. Exploring this concept a bit further revealed a strong technical limitation with respect to the mapping between blueprints and the items that they represent. The limitation is a result of the infinite nature of the items in the game. This got me thinking about alternative possibilities with different types of mappings, rather than a one-to-one item-to-blueprint correspondence. The natural next thought is to have blueprints that enable a variety of construction. What if the blueprint is not so much a blueprint, but rather a technology! I.e., "particle physics level 5," or what have you, which is required to construct certain energy weapons. Yes!! Just like a tech tree in an RTS. I like this a lot. For one, it solves the technical problem. For two, it seems like a natural concept, especially considered in the context of an RTS or classic 4X game. And for three, it simplifies everything a bit. I could imagine it being a serious burden to have to find a blueprint for every single item you wanted to build. But having technologies that unlock a range of items is much more tractable!
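As a sketch, the shape of the idea is easy to see in code (hypothetical names throughout - this is the concept, not the real data model):

```cpp
#include <cassert>
#include <map>
#include <string>

// An item names the technology (and level) it requires; one technology can
// unlock an unbounded family of items, so no per-item blueprint is needed.
struct TechRequirement {
    std::string tech;
    int level;
};

class TechState {
public:
    // Learning a technology raises (never lowers) its known level.
    void learn(const std::string& tech, int level) {
        int& current = known_[tech];
        if (level > current) current = level;
    }

    bool canBuild(const TechRequirement& req) const {
        auto it = known_.find(req.tech);
        return it != known_.end() && it->second >= req.level;
    }

private:
    std::map<std::string, int> known_;  // tech name -> highest level learned
};
```

Regional technologies fall out for free: "Kilydellian Engineering Principles" is just another key in the map, one that happens to be hard to come by outside Kilydellian space.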

Perhaps even more interesting is that you can also play with the notion of "universal" vs "regional" technology. Certain technologies are applicable all over the universe, like particle physics, for example. But you might also find very specialized pieces of knowledge that are exploited in certain regions to produce the "regional" equipment. The Kilydellian Lavablaze, for example (I think that was the name of a gun in LTP), might require Kilydellian Engineering Principles, and would be applicable to several Kilydellian pieces of equipment.

I really love this system :D Both the notion of universal vs regional, as well as the notion of technologies rather than blueprints. It also occurs to me that ultra-high-level technologies might be something that the big factions would go to war over. Perhaps you yourself will even need to conquer the Kilydellians if you wish to learn how to build a Kilydellian Star Imploder-class battleship!

Probably also worth mentioning that some new programming insights today allowed me to cut many of the "type" code files in half (armor type, weapon type, ship type, etc.) :D Many of them are now ridiculously simple. Exactly how it should be :thumbup:

Who else is ready for a big Friday :wave:

[ You can visit devtime.ltheory.com for more detailed information on today's work. ]

Re: Week of June 16, 2013

#6
Friday, June 21, 2013

Summary

That was definitely the Friday I was looking for :D Super-intense with a side order of fun. Not sure why, but I'm really in the zone at the moment. I think it may be because I've moved back to the kitchen table, which feels like a proper desk, instead of stooping over to code on the coffee table (which may have been affecting my motivation more than I realized :? ). I should really get a desk though :wtf:

Upgraded the item system (for the last time, I swear!) for basically the whole day. Dealing with that thing is no small task, as it's integrated into virtually every piece of the game code. But we're ready to roll now, and the road is paved for hierarchical, coherent item generation. I'm excited, this feels so right! Sadly, it will force me to face a few problems from which I've been hiding. Real serialization of items is going to be a necessity, which is going to require serialization of templatized types...not totally convinced that my current serialization engine is ready for this, but I do have a solution in mind for fixing that. I don't know why I ever thought that the old item system would work, though, honestly. Items were point-evaluated (each represented as a single number in 1-D space), which means, while they were super easy to load and save (because they were just single numbers), there was really no chance of coherence. Worse, there was no chance of custom-defined items...which raises the question of how I ever intended to support the ship editor on the old system. Alright, so I make mistakes, we've heard :roll:

Back on track now, the spic..er, code is flowing, and I can feel the universe coming together. There's already a class for it! Pretty scary / epic thing to type "class Universe { .. "...it's like...handle with care :shock:

At some point today I watched the RSI unveil of the 300 series, which basically gave me several heart attacks in a row followed by a fit of ragecrying. It was too beautiful. So I had to start working on the LT graphics again. The nonlinear color blooming in CryEngine is a must. I've implemented my attempt at it...but there is definitely still work to be done on the HDR / Bloom in LT. And then there's that gorgeous chromatic aberration that RSI has used in every video I've seen so far. It's so beautiful, I think it really sells the reality of the scenes. Seeing the subtle aberration on the edges of bloomed lighting is just :squirrel: Sadly, I have not found any papers on this effect...I can't seem to find any good, modern information on chromatic aberration :( Of course, I know how to implement a simple one, but it is nowhere near the same quality. CryEngine seems to be doing something fancy...I think. Finally, the circular bokeh / DoF. It's also perfect. I'm not sure it's appropriate for in-game, but it sure makes the cinematics look breathtaking. I think I need to implement it!!! That effect is a classic, so hopefully I can handle it.

With this trusty kitchen table in hand, I am hoping for another big day :wave:

[ You can visit devtime.ltheory.com for more detailed information on today's work. ]

Re: Week of June 16, 2013

#7
Saturday, June 22, 2013

Summary

A slightly more graphical Saturday than anticipated...but I guess that's never really a bad thing!

Got fairly side-tracked (if you can call it that) for several hours on the chromatic aberration effect that I mentioned yesterday. After some major fails, the correct solution finally hit me, and I figured out how to do it!! An hour or two later, the LT Engine was shining with the glory of a new (and particularly nice) post-process effect. Granted, I haven't matched CryEngine quality yet. Their CA implementation is just flawless. Mine still has some problems, but I definitely figured out the general technique. This got me pretty excited, since I've never seen any details on the technique before, so it was a fun and rewarding challenge to try to reproduce. Like all things, it's conceptually simple once you get it :) I have to say, this subtle effect does wonders for the "reality" of the scene. It makes me wonder..why? Our eyes don't seem to suffer from chromatic aberration...do they? I certainly don't see little rainbows everywhere when I look at bright objects (or do I? Maybe I just don't notice them?) So why does it seem so much more "realistic" when we implement these in a game? Is it because we've gotten so used to seeing this effect on cameras? Interesting questions to think about :geek:
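For anyone curious, the simple version of the effect (not the CryEngine-quality one) boils down to sampling each color channel at a slightly different radial scale about the screen center, so the channels separate more toward the edges. In the engine this lives in a post-process shader; here's a CPU-side sketch of the same math with made-up types:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Rgb { double r, g, b; };
using Image = std::vector<std::vector<Rgb>>;  // image[y][x]

// One output pixel: red is sampled slightly outward from center, blue
// slightly inward, green untouched - the separation grows with radius.
Rgb chromaticAberration(const Image& img, int x, int y, double strength) {
    const int w = static_cast<int>(img[0].size());
    const int h = static_cast<int>(img.size());
    const double cx = (w - 1) / 2.0;
    const double cy = (h - 1) / 2.0;

    auto sampleScaled = [&](double scale) -> const Rgb& {
        int sx = static_cast<int>(std::lround(cx + (x - cx) * scale));
        int sy = static_cast<int>(std::lround(cy + (y - cy) * scale));
        sx = std::max(0, std::min(w - 1, sx));  // clamp to image bounds
        sy = std::max(0, std::min(h - 1, sy));
        return img[sy][sx];
    };

    return Rgb{
        sampleScaled(1.0 + strength).r,  // red pushed outward
        sampleScaled(1.0).g,             // green at the true position
        sampleScaled(1.0 - strength).b,  // blue pulled inward
    };
}
```

At the exact center the three samples coincide, so the effect vanishes there and strengthens toward the edges - which is most of what sells it.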

After finishing that up, I moved on to a UI graphics overhaul. The UI is pretty bland at the moment...I think any LTPers can attest to that. I'm trying to figure out how to spice it up and make it feel more spacey (spicy & spacey!) The ultimate goal is to have everyone feeling like a commander in Star Wars when they interact with the UI ;) I'm definitely not there yet, but I improved a lot of things today. I started playing with nonlinear color gradients for the icons, and this already makes a big difference. I actually used the same formulae that I've used before in GLSL demos. It's a lovely blue-green nonlinear gradient that does wonders on the eyes. I am also playing with outer glow on the panels instead of outer shadow. Finally, a flickering/animated background for panels to make them look more exciting! The new interface is certainly more jarring in some respects, but in time, I think I will be able to smooth it over and home in on a nice, futuristic look. My general approach to interface design at this point is basically - "if LT were an operating system that I used 24/7, how would I want it to look / function?"

Still making (slow) progress on universe and item generation :? Hoping for some conceptual clarity or excitement soon. Generating all the items in the universe is a bit overwhelming I suppose, and, although I see the simplicity in it, I'm still skeptical of everything to the point that I keep questioning my design time and time again. I suppose this is better than picking and rolling with the wrong design, which I seem to have done quite frequently in the past :oops: At least I'm learning ;)

It was a good week, definitely picking up speed near the end (thank you, kitchen table!) This week was a bit more conceptual than usual, so I'm hoping that it implies next week will be more concrete than usual. That would be nice!

[ You can visit devtime.ltheory.com for more detailed information on today's work. ]

Re: Week of June 16, 2013

#8
Week Summary

General Sentiment

Loads of thinking this week, for better or for worse :geek: I think future Josh will agree that it was for better. The whole world / items shebang is starting to come together, and I don't think it will be long before we see some nice, coherent items populating the world. I'm also really looking forward to playing with this idea of technology rather than blueprints - I think there's a lot of potential in terms of coherent generation and trying to imbue items / technology with "meaning." The AMD card is still struggling more than I'd like it to on resource loading, so I imagine there will be some technical brouhaha (!?) surrounding threaded resource loading going on fairly soon. Well, will I have anything to show for this month? Stay tuned to find out... :squirrel:

Major Accomplishments
  • "Reasoning == Simulation" philosophical AI revelation
  • Overhaul of type / item system ("for the last time")
  • Improved quality of HDR / Bloom (again!)
  • Improved UI graphics
  • Implemented variable fog density (i.e., foggy in asteroid fields), started working on long-distance graphics but will need a faster solution
  • Implemented chromatic aberration effect
  • Major conceptual progress on technology / research, ditching blueprints
  • Started implementing new coherent item generation scheme