
Week of July 7, 2013

#1
Sunday, July 7, 2013

Summary

While I did explore a few potentially valuable lines of thought in the car today, I didn't manage to come up with anything particularly inspiring. If you'll allow me, I'd like to blame this on the overwhelming amount of hostile rain that was trying to blind me and throw me off the road :problem:

I did get pretty excited today when I realized that abstract copies of objects can be created for free if you have a translation layer (without duplicating any code!). The trick is to run all of the pieces of an object through the translator (which is presumably hooked up to an "abstract" object). For example, if you have a ship, then you would run all of the hardpoints through the translation layer via AddObject(hardpoint_i). The translation layer would say "ok, that's a weapon and it does 550 DPS, so I'll add 550 to the abstract ship's firepower score." It would be exactly the same as if an abstract ship had just equipped that same weapon. This is a neat trick for enforcing consistency between the concrete-to-abstract conversion and the abstract simulation itself. Nothing groundbreaking, but neat, and it will definitely save some code.
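
To make that concrete, here's a minimal sketch of what such a translation layer might look like (all names here - Hardpoint, AbstractShip, Translator - are hypothetical illustrations, not actual LT code):

```cpp
#include <vector>

// Hypothetical concrete piece: a weapon hardpoint with a damage rating.
struct Hardpoint { float dps; };

// Hypothetical abstract counterpart: a ship reduced to aggregate scores.
struct AbstractShip { float firepower = 0.0f; };

// The "translation layer": it receives concrete pieces and folds them into
// the abstract object, exactly as if the abstract ship had equipped them.
struct Translator {
    AbstractShip& target;

    void AddObject(const Hardpoint& h) {
        // "Ok, that's a weapon and it does X DPS, so add X to the
        // abstract ship's firepower score."
        target.firepower += h.dps;
    }
};

// The abstract copy comes for free: just run every piece of the concrete
// ship through the same translator that the abstract simulation uses.
AbstractShip MakeAbstract(const std::vector<Hardpoint>& hardpoints) {
    AbstractShip ship;
    Translator translator{ship};
    for (const Hardpoint& h : hardpoints)
        translator.AddObject(h);
    return ship;
}
```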

There were a few other interesting ideas, but I feel like my definition of "interesting" is drifting further and further away from reality as time goes on, so I think I'll keep them to myself until I feel more certain that they're worth sharing :roll:

Well, I'm back in the coding cave, and very much ready for a big day back at the "office" tomorrow :D Also looking forward to being more active on the forums again, as I've hardly had any time to read / post during my stay at home.

[ You can visit devtime.ltheory.com for more detailed information on today's work. ]
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford

Re: Week of July 7, 2013

#2
Monday, July 8, 2013

Summary

A day of zero code :o A rare occurrence indeed! But not unproductive. Not at all ;)

I spent the whole day working on pages and pages of notes, ideas, questions, answers, more questions, pros, cons, etc. It felt very good. I have a lot of ideas floating around in my head, but I don't write them down enough. I usually keep a pen and paper next to me when I work, but it's just not as fast as typing - so I finally started taking electronic notes today (in vim, of course ;) ), and they're already piling up.

I just can't bring myself to touch the code until these problems are solved. I'm making great progress, but every time I answer a question five more seem to appear. How will we unify simulation? We'll use events. How will events deal with varying levels of object abstractness? We'll use virtual interfaces. How will virtual interfaces be created? How will they be attached to their parent (and what happens when the parent goes out of scope?) How will an AI take an "action" (create an event) - and what about the player? Speaking of AI - how does a faction relate to an individual? And speaking of individuals...how does an AI know whether he's supposed to be thinking abstractly or concretely? And if he's supposed to be thinking abstractly, how does he reason about himself?

Yep, just a few of the questions that I wrote down today and am trying to understand. I really wish I could say "huge day of code" again, but I don't see that happening until I clear away these questions.

Honestly, I think I've hit the hardest part of LT. This is it - the crux of everything, and it's all entangled into a giant mess of questions. Simulation LOD, AI, factions. If I can just push through, I'll have conquered what seems to be the biggest conceptual challenge in sight.

Just gotta keep at it :geek:

[ You can visit devtime.ltheory.com for more detailed information on today's work. ]
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford

Re: Week of July 7, 2013

#3
Tuesday, July 9, 2013

Summary

Powering through. It's not easy, but I'm getting closer every day. Today brought fewer questions and more answers than yesterday - definitely a trend that I hope to see continue!

A few answers. A faction is a hierarchy of persons. That's all there is to it. The "root" is the "leader," and we can use the leader to run the factional AI. But factional AI does not exist. It is the same as individual AI, except that this particular individual has many, many assets and persons at his disposal. I mean, yes, this is obvious from a logical standpoint - people run companies. But from the game standpoint, it wasn't so obvious to me, and in the beginning I thought it would be a good idea to treat factions as something different, something higher-level than a single person. But that's not the case. The AI just needs to be capable of handling a wide range of levels of ownership. If he owns a ship, he should be thinking about the small things. If he owns a corporate empire, he should be thinking at a much higher level (more of a management position).
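
Here's a toy sketch of what that unification might look like (hypothetical names and thresholds, not LT's actual classes):

```cpp
#include <vector>

struct Asset;  // ships, stations, money... details irrelevant here

// A faction is nothing more than a hierarchy of persons: every person may
// own assets and command subordinates. The "root" person is the "leader."
struct Person {
    std::vector<Asset*>  assets;
    std::vector<Person*> subordinates;

    // Everything owned by this person's entire chain of command.
    int TotalAssets() const {
        int n = static_cast<int>(assets.size());
        for (const Person* s : subordinates)
            n += s->TotalAssets();
        return n;
    }

    // There is no separate "factional AI": the same Think() runs for a lone
    // trader and for a faction leader; only the scale of concerns changes.
    void Think() {
        if (TotalAssets() > 100) {   // arbitrary illustrative threshold
            // think like a manager: set high-level goals, delegate to
            // subordinates, track relations with other factions
        } else {
            // think like an individual: pilot the ship, pick targets, trade
        }
    }
};
```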

So I guess, in theory, we could actually go to the incredible level of realism where factions arise naturally from NPCs who manage to scrape together a lot of assets. In this sense, NPC factions would work exactly the same as player factions (remember that these will come post-release). Can you imagine? A game in which you might meet an ordinary trader, get his contact info, then get in touch with him a few years later only to find out that he is now running the largest faction in your local region...! Let's not get crazy. I'm not saying that will actually happen in LT, only that it's a very real theoretical possibility when you unify factions and persons :geek:

A few more answers concerning virtual interfaces and "folding," but these won't make sense since I still haven't really described much about that. As you can probably tell, I'm keeping some of the more critical parts of the simulation engine a "secret," because I think the simulation engine is going to be something that sets LT apart ;)

The biggest remaining question for me is how to determine the layout of abstract objects. Some things are obvious - an abstract ship should clearly have attributes like 'total DPS,' 'total thermal damage mitigation factor,' etc. These would be used for an OOS combat simulation, for example. But why do we choose these particular attributes? Is there a mechanism for automatically doing so? If not, then I actually have to go through the whole game and hand-craft an abstract version of each object. Ew. Surely there is a smarter way? But at the same time, this stuff is so far out there on the edge of crazy that any "smarter way" would probably end up with me shivering in a padded white room, screaming "simulation....virtual...UNIFY...exponential falloff....SCALAR FIELD!!" If I blow a fuse in my brain, it'll all be for naught :shock:

Hmm, now that I mention it...scalar field...I wonder if there is a way to unify this with fields? :roll:

Nah...

...

:shock:

PS ~ I know these dev logs are getting more boring by the day, and for that I am truly sorry! We're in the thick of conceptual work at the moment and it's just not that fun to talk about. It'll get more exciting in a few months ;)

[ You can visit devtime.ltheory.com for more detailed information on today's work. ]
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford

Re: Week of July 7, 2013

#4
Wednesday, July 10, 2013

Summary

I actually wrote some code today!!! Huzzah!! :ugeek:

First, ripping away factions as mentioned yesterday and implementing "person hierarchies," such that we now have chains of command. Everyone can have a boss and subordinates. In practice, I'm not sure whether we'll actually see a legitimate hierarchical faction structure, or whether it will just be a single leader with everyone else reporting directly to him - it seems like the AI would be a bit easier that way. I did run into some interesting questions while overhauling the factions, though. Namely, what happens when a leader dies? Can a leader die? If the leader of the faction is actually a real person like everyone else, then he can be killed like everyone else. But what's very interesting about this is that the leader is the one who imbues the faction with a personality, the one who keeps track of relations with other factions, etc. When the leader dies, someone else will presumably take command. But this means the personality of the faction will change! Of course, companies always change when they change leadership. But that's interesting!

If a faction leader gets killed, it may end a war - or start one. Relationships with other factions (and people) will be changed by a change of leadership. You might even imagine missions being generated for this very purpose! Faction B is being crushed by faction A, which is led by a warmonger who doesn't like B. B posts up an assassination mission, hoping that if A's leader is taken care of, the war may come to an end. You accept the mission and manage to take out A's leader; a new leader rises to power, and, sure enough, the new leader has no beef with B, so the war ends. You stopped a war! Very interesting possibilities! :)
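
As a rough illustration (again purely hypothetical, not actual LT code), succession might be as simple as promoting a subordinate, with the faction's personality and relations then deriving from the new leader:

```cpp
#include <vector>

// Hypothetical personality traits; the faction effectively inherits these
// from whoever currently sits at the root of the hierarchy.
struct Personality { float aggression = 0.5f; };

struct Person {
    Personality          personality;
    Person*              boss = nullptr;   // chain of command
    std::vector<Person*> subordinates;
};

// When the leader dies, someone else takes command - and because relations
// and the "faction personality" derive from the leader, they change with him.
Person* SucceedLeader(Person* deadLeader) {
    if (deadLeader->subordinates.empty())
        return nullptr;  // nobody left; the faction dissolves

    // Simplest possible succession rule: the first subordinate takes over
    // and inherits command of everyone else.
    Person* heir = deadLeader->subordinates.front();
    heir->boss = nullptr;
    for (Person* p : deadLeader->subordinates)
        if (p != heir) {
            p->boss = heir;
            heir->subordinates.push_back(p);
        }
    return heir;
}
```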

You might want a faction to have some continuity, though. This could be handled in a number of ways. I think my favorite is that the faction leader will only accept new members that are "relatively aligned" with his own personality. That way, if you have a warmonger leader, most members of the faction will also be somewhat aggressive. Still, there would be enough variance that you could imagine a war ending because of a leadership change. But what really interests me in this scenario is: how do you judge the player's personality? It would be really cool if the game actually tried to compute a "personality" for the player based on play style. Then, this personality could be used for faction membership just like the personalities of other AIs! "You? No way kid, you're a rosy-cheeked wimp, I don't want you flying ships with my men." "I'm sorry sir, but we don't accept members with a history of violence." "I've reviewed your history and you seem like a great match! We'd love to have an experienced trader such as yourself on board!" That would be so cool (in my opinion)!
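
Here's a toy sketch of how a play-style-derived personality check could work (entirely hypothetical traits, counters, and thresholds):

```cpp
#include <cmath>

// A hypothetical personality vector, inferred from play style for the
// player and authored/generated for NPCs.
struct Personality {
    float aggression;  // e.g. fraction of encounters the player started
    float lawfulness;  // e.g. bounties earned vs. crimes committed
};

// Derive the player's personality from simple play-style counters.
Personality InferPlayerPersonality(int fightsStarted, int encounters,
                                   int crimes, int bounties) {
    Personality p;
    p.aggression = encounters > 0 ? float(fightsStarted) / float(encounters)
                                  : 0.0f;
    p.lawfulness = float(bounties + 1) / float(bounties + crimes + 2);
    return p;
}

// A leader only accepts members "relatively aligned" with his own
// personality: reject anyone who differs by more than some tolerance.
bool AcceptsMember(const Personality& leader, const Personality& applicant,
                   float tolerance = 0.3f) {
    float distance = std::fabs(leader.aggression - applicant.aggression)
                   + std::fabs(leader.lawfulness - applicant.lawfulness);
    return distance <= tolerance;
}
```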

I also unified human and computer players, but that was less exciting.

So many interesting possibilities now that we're really getting into the meat of factions / AI. If all of this stuff really works, it's going to be one heck of an experience :D

[ You can visit devtime.ltheory.com for more detailed information on today's work. ]
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford

Re: Week of July 7, 2013

#5
Thursday, July 11, 2013

Summary

Finally a really good day!! Nice mixture of code and concepts. Feels good to have the code flowing again :monkey:

I'm starting to implement the AI side of events. It's really interesting seeing the AI and simulation code colliding like this. Mmmm...the smell of fresh unification in the morning :)

My biggest struggle today came back to the usual problem - one of the biggest that I've faced so far: AIs reasoning about abstract worlds, and how to keep track of those worlds. I've got a nice model for abstract worlds, but there's a separate issue that's even harder to solve: how to allow NPCs to change the world in their heads as they are thinking.

This is an absolutely critical concept, as the ability to consider "hypothetical" situations is one of the most fundamental capabilities of a truly intelligent actor. Previously, I used a moderately clever solution that amounted to a "world" object consisting of zero or more "changes" chained together in sequence. It worked very well, and is about as memory-efficient as you can get. Time-wise, it's not very efficient: queries are linear in the number of changes that the actor has made to the world in his head, regardless of whether the query is relevant to those changes. Bad. But that's not the worst part. The deal breaker is that all queries have to be specially routed through this "world" object. Moreover, the world object must know how to wrap every single query that you might want to make on the world. This leads to a whole lot of code, and would also seem to seriously muddy the simulation code.
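
For the curious, here's approximately what that chained-changes approach looks like, and why it becomes a code burden (a hypothetical sketch, not the actual LT implementation):

```cpp
#include <unordered_map>

// Every query has to go through an interface like this - and that's the
// problem: the wrapper must know about every query the simulation makes.
struct IWorld {
    virtual float QueryPosition(int objectId) const = 0;
    virtual ~IWorld() = default;
};

// The real world.
struct RealWorld : IWorld {
    std::unordered_map<int, float> positions;
    float QueryPosition(int objectId) const override {
        return positions.at(objectId);
    }
};

// One hypothetical change layered on top of another world; chaining these
// is memory-cheap, but queries are linear in the number of chained changes.
struct ChangedWorld : IWorld {
    const IWorld* base;
    int           changedId;
    float         newPosition;

    ChangedWorld(const IWorld* b, int id, float pos)
        : base(b), changedId(id), newPosition(pos) {}

    float QueryPosition(int objectId) const override {
        if (objectId == changedId) return newPosition;  // our change
        return base->QueryPosition(objectId);           // walk the chain
    }
    // ...and the same wrapping would be needed for QueryHealth, QueryOwner,
    // QueryCargo, and every other query - the real deal breaker.
};
```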

This is another one of those problems that arises purely because of our need to optimize, and, as always, I hate having to solve such problems because they tell me nothing about the structure of the real world. They're contrived solutions trying to meet the needs of contrived hardware limitations. In a world with no limitations, we would simply create a deep copy of the entire universe, and let the AI perform its reasoning on that. Obviously, for efficiency reasons, we can't copy the entire universe every time the AI wants to consider his options :think:

The solution I started implementing today is effective, but it scares me a bit. It's basically an "undo" system. It allows the actor to actually make real changes to the world. It keeps track of these changes via some interesting code machinery, and can then undo all of them when the actor is done reasoning. This scares me a lot, because the AI is using the real world as the hypothetical one :shock: Still, it is the most efficient and tractable solution that I have found. It allows all queries to remain unmodified - nothing has to intercept them, which means far less code. Time will tell whether this is the best answer, but for now, I'm starting the implementation!
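
A minimal sketch of how such an undo scope could work (hypothetical names; the real machinery is presumably far more involved):

```cpp
#include <functional>
#include <vector>

struct Ship { float hull = 100.0f; int system = 0; };

// Records an "undo" action for every real change the AI makes while
// reasoning, then rolls them all back when the reasoning is done.
class UndoScope {
public:
    // Mutate the world through this so the old value is remembered.
    template <typename T>
    void Set(T& slot, T newValue) {
        T old = slot;
        undos_.push_back([&slot, old] { slot = old; });
        slot = newValue;
    }

    // The AI is finished thinking: put the real world back exactly as it
    // was, reverting the changes in reverse order.
    ~UndoScope() {
        for (auto it = undos_.rbegin(); it != undos_.rend(); ++it) (*it)();
    }

private:
    std::vector<std::function<void()>> undos_;
};

// Usage: the AI "pretends" by really changing the ship, evaluates the
// outcome with ordinary, unmodified queries, then everything reverts.
float EvaluateJumpPlan(Ship& ship) {
    UndoScope scope;
    scope.Set(ship.system, 7);     // hypothetically jump to system 7
    scope.Set(ship.hull, 60.0f);   // hypothetically take some damage
    return ship.hull;              // any normal query sees the change
}  // scope destructs here; the ship is back to its original state
```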

Sadly, LT will be badly broken for a while (as in, won't compile). The changes happening right now run very deep, and the AI isn't going to compile until I finish the event system integration, as well as the undo / changes system. The good news is, when it does compile again...the heaviest of heavy machinery will be in place for deep AI and unified simulation!!

[ You can visit devtime.ltheory.com for more detailed information on today's work. ]
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford

Re: Week of July 7, 2013

#6
Friday, July 12, 2013

Summary

Yet another grinding conceptual day, for better or for worse :geek:

One of the final pieces of the event system that I am trying to understand is: how do events translate into actual AI maneuvers? In the previous system, there was a concept of an "action," which was a high-level operation that an AI knew it could apply to the world to produce some change. A "strategy," as I call it, is an implementation of an action. A strategy is the most concrete version of the action. For example, "attack" is an action, and the "attack" strategy is where the AI's actual attack maneuvering code is located - the code that makes it fly around the target, aiming and firing at it along the way.
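
In code, the old action/strategy split might look roughly like this (a hypothetical sketch of the idea, not LT's actual classes):

```cpp
struct Vec3 { float x = 0, y = 0, z = 0; };
struct Ship { Vec3 pos; };

// A "strategy" is the concrete implementation of a high-level "action".
struct Strategy {
    virtual void Update(Ship& self, const Ship& target, float dt) = 0;
    virtual ~Strategy() = default;
};

// The "attack" action's strategy: the actual maneuvering code that flies
// the ship toward / around the target and fires at it.
struct AttackStrategy : Strategy {
    void Update(Ship& self, const Ship& target, float dt) override {
        // Drift toward the target (a stand-in for real attack maneuvers).
        self.pos.x += (target.pos.x - self.pos.x) * dt;
        self.pos.y += (target.pos.y - self.pos.y) * dt;
        self.pos.z += (target.pos.z - self.pos.z) * dt;
        // FireIfInArc(self, target);  // firing logic would live here too
    }
};
```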

Interestingly, events are leading me in somewhat of an opposite direction. Events are a more global concept than actions. They say "this is happening in the world," not "I am doing this to the world." As such, I am exploring the possibility of a "reactive" AI - an AI that identifies what's happening in the world and reacts to it, rather than proactively operating directly on the world. As usual, it's just two ways of framing the same thing, but it's interesting to me that the core issue seems to come down to action vs. reaction. Still, I'm having a difficult time understanding how AI strategies interact with events. Does the AI iterate over all events in the vicinity and run a corresponding strategy for each? Does it receive notifications from an event in which it is involved, and hence react via notification? Does it receive notifications from other AIs?

Today I explored the idea that an AI reacts to the events that A) it can perceive and B) it deems "important." This line of thought seemed very natural. Reacting to the things you perceive and deem worthy of reaction is an intuitive concept. It suggested a redesign of the current knowledge engine to allow a more powerful representation of knowledge (such that knowledge can be iterated over). I succeeded in upgrading the knowledge system today, and am excited about the possibilities. I think it's going to make a lot of sense for the knowledge system to be the middleman between events and strategies.
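
A rough sketch of that knowledge-as-middleman loop (hypothetical names and structure; a real version would presumably dispatch a different strategy per event type):

```cpp
#include <vector>

// Something happening in the world, with a rough importance score.
struct Event {
    float importance = 0.0f;
    virtual ~Event() = default;
};

// What one AI currently knows: only events it has actually perceived make
// it in here (perception / sensing is handled elsewhere).
struct Knowledge {
    std::vector<const Event*> knownEvents;
};

struct Strategy {
    virtual void React(const Event& e) = 0;
    virtual ~Strategy() = default;
};

// Reactive AI: iterate over known events, ignore the unimportant ones,
// and run the corresponding strategy for each of the rest.
void Think(const Knowledge& knowledge, Strategy& strategy,
           float importanceThreshold = 0.5f) {
    for (const Event* e : knowledge.knownEvents)
        if (e->importance >= importanceThreshold)
            strategy.React(*e);
}
```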

One detail that kept bugging me is "if one fleet ambushes another, how and when does the ambushed party react?" If the event sends out notifications to involved parties, then the ambushed party receives immediate information concerning the ambush. Not much of an ambush, is it? But if we use the knowledge system as a middleman, then the ambushed party will only react once it detects the ambush event (which may or may not be sooner than the ambushing party anticipates, depending on the quality of the sensors involved). Whoa, treating events as detectable objects? Strange. But powerful!! I will say no more, for within lies too much power to be disclosed publicly :D

[ You can visit devtime.ltheory.com for more detailed information on today's work. ]
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford

Re: Week of July 7, 2013

#7
Saturday, July 13, 2013

Summary

Aghhhh I'm trying so hard but I just cannot seem to find the final bits of clarity that I need here! :crazy:

I realized today - a rather nasty realization - that the translation layer used by events to drive the world simulation must also be used by the low-level AI to do everything that it needs to do. This one is causing me huge angst, because low-level AI is such a precise and concrete thing. To think about running it through a layer that doesn't even know whether any real objects exist or not...it's so scary. I don't even know if it's possible.

I should have seen it coming, but I didn't. If you REALLY want to unify simulation, you need to adapt everything that operates on game state to understand that the world is not necessarily a concrete one. To have a perfect coarse simulation, it needs to be built from the exact same building blocks as the fine one. It's the logical consequence of this whole architecture that I've built. But it scares even me to think about the full ramifications :eh:
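
One way to picture that consequence (a purely hypothetical sketch): every piece of logic, events and low-level AI alike, operates through an interface that both the concrete and the abstract representations implement, so neither can tell which one it's driving:

```cpp
// Both representations answer the same questions; code built on this
// interface works for a fully simulated ship and a coarse stand-in alike.
struct IShip {
    virtual float Firepower() const = 0;
    virtual void  ApplyDamage(float amount) = 0;
    virtual ~IShip() = default;
};

struct ConcreteShip : IShip {
    // ...real hardpoints, hull sections, etc. would live here
    float hull = 100.0f, dps = 550.0f;
    float Firepower() const override { return dps; }
    void  ApplyDamage(float amount) override { hull -= amount; }
};

struct AbstractShip : IShip {
    float firepower = 550.0f, strength = 100.0f;
    float Firepower() const override { return firepower; }
    void  ApplyDamage(float amount) override { strength -= amount; }
};

// Coarse and fine simulation share the exact same building block.
void ExchangeFire(IShip& a, IShip& b, float dt) {
    a.ApplyDamage(b.Firepower() * dt);
    b.ApplyDamage(a.Firepower() * dt);
}
```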

Well, I hope I don't scare anyone! No one expected simulating an infinite universe to be easy, especially not me :) But the reward will be so great. I have a few tricks up my sleeve that I'm going to try to pull today to see if I can collapse the complexity of everything a bit further. It's been a while since I've done something beautiful and simple at the same time. Maybe today will be the day. Maybe this week. I hope soon. We need some simplicity 'round here! :)

[ You can visit devtime.ltheory.com for more detailed information on today's work. ]
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford

Re: Week of July 7, 2013

#8
Week Summary

General Sentiment

Last Christmas, I asked for an intense week of simulation. This Christmas, I got one :shock:

Not going to lie, it's one of the more depressing "Major Accomplishments" lists that I've posted...but this is the hardest work I've ever had to do in LT. Jeez, I can remember the simple old days of writing collision algorithms and scalar field asset systems. Everything was so easy :monkey:

I'm dead-set on having the simulation architecture in place by the end of the month. With SIGGRAPH coming in the last week of this month, that places an awfully large burden on this coming week. Bunker down, boys, it's going to get ugly :ugeek:

Major Accomplishments
  • Ripped out 'actions' and replaced with 'events'
  • "Hierarchy-of-persons" theory for more tractable factions
  • "Abstract interface" theory for scalable simulation
  • "Memory Diff" theory for efficient hypothetical AI reasoning
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford
