

Re: [Josh] Monday, May 21, 2018

#31
For something stationary like trade stations, the game could keep track of a "heatmap".
A station that cannot win back its building cost via revenue within a (to-be-balanced) time period will be sold for scrap.
The heatmap at that location then gets negative value added, making it less attractive.

The next AI (observing previous economic enterprises) will be less inclined to build a station there.

The heatmap slowly adjusts back to a zero point over time by a (to-be-balanced) factor.

The same goes for profitable enterprises: stations that outperform expectations, e.g. reaching break-even within a (to-be-balanced) time, will increase the heatmap there.
This will make either the owner AI want to upgrade the station, or another AI to build another competing station nearby.

The same logic can be applied to mining locations: the heatmap gains value at minefields that made the traders a good profit (gold rush),
or it deters miners from going there after many failed mining missions.

For pirates, a heatmap can indicate dangerous areas where many attacks happened. The pathfinding will then put a higher cost on those areas, making the route-planner circumvent them.
Or patrol ships could try to favor those areas...

Heatmaps get dynamically updated via positive or negative events, and slowly regress toward the zero point.

The AIs will ultimately try out new random locations for trade, encounters, etc., but they use the heatmap data to adjust the likelihood of a location being chosen.
Over time, collective wisdom will bring about patterns of what works best in the economy, but changing circumstances will reshape those heatmaps.

(Heatmaps are also easy to debug visually, and scripted values can be added to them, such as marking the area around a pirate station as permanently dangerous.)
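
A rough sketch of how such a decaying heatmap could work (my own illustration in Python; the class, the sector keys, and every number are placeholders, not anything from LT itself):

  # Sketch: a per-sector heatmap that records economic events and slowly
  # regresses toward the zero point. All values are illustrative.
  class Heatmap:
      def __init__(self, decay_per_tick=0.01):
          self.values = {}              # sector id -> score
          self.decay = decay_per_tick   # the "to be balanced" regression factor

      def record_event(self, sector, delta):
          """Positive delta = success (e.g. station broke even early);
          negative delta = failure (e.g. station sold for scrap)."""
          self.values[sector] = self.values.get(sector, 0.0) + delta

      def tick(self):
          """Each game tick, every score drifts back toward zero."""
          for sector in self.values:
              self.values[sector] *= (1.0 - self.decay)

      def attractiveness(self, sector):
          """Used to weight the chance of an AI picking this sector."""
          return self.values.get(sector, 0.0)

  trade_heat = Heatmap()
  trade_heat.record_event("sector_a", -10.0)   # station sold for scrap here
  trade_heat.record_event("sector_b", +6.0)    # station beat expectations here
  trade_heat.tick()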

Here's a real-world heatmap you could use to find good jogging paths: https://www.strava.com/heatmap#11.67/-9 ... 58/hot/run
(Interestingly, joggers avoid the northern neighborhoods...)

Re: [Josh] Monday, May 21, 2018

#32
I think this is my first post? Been following since the very beginning :)

I have a couple of thoughts on this topic:

1) Quanta of capital investment seem like they should be thought of as shares in an enterprise
2) This opens a way for capital investment to be valued based on the market, which Josh has already solved to some extent
- The AI answers the simple question of "how valuable is this to me now" and decides whether or not to invest in shares based on that

3) How does a cargo ship with valuable ore create a "sink" in the "flow economy"? How does information play into this?
4) It seems like there will need to be different heuristics for determining risk, trade benefit... some other categories. I guess what I'm trying to get at is that it might not be possible to find one "solution to everything" but maybe breaking types of investment down into categories would be alright.

Re: [Josh] Monday, May 21, 2018

#33
While I have been unable to come up with a conclusive, complete design, I have come across some guiding principles which may help with this challenge.

In thinking about this over the last few days, I believe that the imperfect knowledge an AI has of their environment is key to guiding their understanding of what is and is not the optimal action/investment. A particular AI may believe that they are placing a trade station in the ideal location, but that's only because they're unaware of the rich asteroid field about 3AU away in another direction. Once that field is discovered, it could massively change the calculation, but then they have to ask themselves whether it would be better to make a new station to replace the old one, a small station along the way to supplement the main station, or to just keep growing their main station and build out some warp rails to the new discovery.

Other guiding principles can be found in the fields of Theory of Mind and Behavioral Economics. A perfectly rational economy may in fact be impossible, as we are unaware of any economic actors that behave perfectly rationally. So perhaps we need to turn to the distortions brought about by the irrational processes in the brain.

First up: Loss Aversion
Humans, other primates, and non-primates have all been demonstrated to have a greater emotional response to losing what they already have than to what they could potentially gain. In humans specifically, our aversion to loss is about 2.25 times as strong as our excitement over our gains, meaning that if given a choice with an 80% chance of losing 1 unit of value and a 20% chance of gaining 2 units of value, around 3/4 of people would not take it, whereas if it were an 80% chance of losing 1 unit vs. a 20% chance of gaining 3 units, about 3/4 of people would be willing to take it.

Additionally, consider a scenario where you have a choice between a 100% chance of getting 3000 units of value, or an 80% chance of getting 4000 units and a 20% chance of getting nothing: only about 20% of people would take the gamble, and 80% would go for the sure gain. This is despite the fact that, on average, taking the gamble would be objectively better.
Conversely, if the choice is between a 100% chance of losing 3000 units of value, or an 80% chance of losing 4000 units and a 20% chance of losing nothing, 75% of people would now take the gamble, even though, again, on average they'd lose more.
In other words, a sure gain is more appealing than a larger potential gain, and a sure loss is more distressing than a potentially greater loss, even though both preferences are illogical.
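
For NPC purposes, that asymmetry can be boiled down to a crude "perceived value" function. A minimal sketch in Python, using only the 2.25 figure quoted above; the gamble at the end is my own illustrative example:

  # Sketch: loss-averse evaluation of outcomes. Losses are weighted ~2.25x
  # more heavily than equivalent gains (the figure cited above).
  LOSS_AVERSION = 2.25

  def subjective_value(outcome):
      return outcome if outcome >= 0 else LOSS_AVERSION * outcome

  def evaluate_gamble(outcomes):
      """outcomes: list of (probability, payoff) pairs -> perceived value."""
      return sum(p * subjective_value(x) for p, x in outcomes)

  # A sure +100 vs. a 50/50 gamble between +300 and -100: both have an
  # expected value of +100, but the loss-averse agent rates the gamble lower,
  # so the sure gain wins ("a sure gain is more appealing").
  print(subjective_value(100))                        # 100.0
  print(evaluate_gamble([(0.5, 300), (0.5, -100)]))   # 37.5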

Second: Expected vs Actual gains
Another aspect of human and primate behavior is that when making a decision, we come up with an estimate of how much we will gain from a particular action, called a reference level. Whether we consider the outcome a success or a failure is not dependent on our absolute gains, but on the relation between our actual gains and that reference level. If our actual gains exceed the reference level, we consider the action a success and subsequently raise our expectations in proportion to how much better the outcome was. However, even if we make an absolute gain, if it is not at the level we expected, we will consider it a failure. Depending on the level of failure, we will subsequently adjust our behavior and future actions to try to meet that level: if it is only a small failure, where we got close to the reference, we're likely to adjust our behavior and try again, whereas if it is a significant failure, we're more likely to try something else entirely.
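
A crude way an NPC could implement that reference-level logic; the adaptation rate and the "small vs. significant failure" cutoff are numbers I invented, and drifting the reference downward after a failure is my own added assumption:

  # Sketch: judge outcomes against an expected reference level, not in
  # absolute terms, then adapt expectations or switch strategy.
  class ReferenceLevel:
      def __init__(self, expected_gain, adapt_rate=0.5, give_up_ratio=0.5):
          self.expected = expected_gain
          self.adapt_rate = adapt_rate        # how fast expectations move
          self.give_up_ratio = give_up_ratio  # "significant failure" cutoff

      def judge(self, actual_gain):
          """Returns 'success', 'retry', or 'switch_strategy'."""
          if actual_gain >= self.expected:
              verdict = "success"              # met or beat the reference
          elif actual_gain >= self.give_up_ratio * self.expected:
              verdict = "retry"                # small miss: tweak and try again
          else:
              verdict = "switch_strategy"      # big miss: try something else
          # Expectations drift toward what actually happened (an assumption).
          self.expected += self.adapt_rate * (actual_gain - self.expected)
          return verdict

  ref = ReferenceLevel(expected_gain=1000)
  print(ref.judge(actual_gain=400))            # -> 'switch_strategy'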

Third: Absolute vs Relative success
Summed up in the phrase "keeping up with the Joneses": it is not our actual level of prosperity that guides our behavior, but our level of prosperity compared to our peers and those around us. Even though a middle-class American is far more prosperous than the global average, they don't judge their prosperity, and thus their decisions, against the global average, but against those in their neighborhood, their city, and those they see in the media. There's nothing so pathetic (imho) as an upper-middle-class family despairing that they only have a single 12,000 sq ft house and 4 cars, when those they aspire to have 3 houses and 10 cars, a yacht, and an entourage.



In terms of LT, this means that an AI's decision doesn't have to consider the effects of its action on the global economy, but only on the world it is aware of, in relation to those it is aware of, and in relation to the success or failure of its past actions.


In regards to the issue of authority and subordination, we can again turn to what exists in nature. In non-humans, authority comes in a single form: Authority of the Individual. This is the classic King of the Hill scenario: if you want to become the one on top, you have to beat the one already there and depose them. In most primates, this hierarchy seems to come in multiple levels: you have the Alpha, Beta, Delta, Gamma, Omega, etc. If you are, say, the Gamma in your group and want to challenge the Alpha, you might be able to take them on your own, but it's possible that the Beta and Delta will join the Alpha to maintain the status quo and not be outdone by a lowly Gamma. Or if the Delta, Gamma, and Omega all team up on the Alpha and Beta, should they succeed, the Delta, Gamma, and Omega will then also have to fight each other for the new Alpha, Beta, and Delta positions.

Humans also adhere to Authority of the Individual, but also uniquely adhere to Authority of the Position. It doesn't matter whether you're Pharaoh of Upper and Lower Egypt or President of the United States, you occupy a position which is transferable from one individual to another, and respect for the position's authority is maintained almost regardless of who the individual in the position is. This position of authority can of course be transferred peacefully, or with violence. Modern democracies tend to have peaceful and regular transitions of power, where one leader steps down after a certain time rather than only leaving the office by dying. However if you're an Emperor of China or Persia, the peaceful transition of power is far less certain, as there may be a contest of claimants, or conquest from an outside dynasty. However, regardless of whether it's a victorious heir or a new dynasty, the new ruler will take on the position of Emperor, gaining the Authority of Position in addition to their de facto Authority of the Individual.


I'm not sure how useful these principles are, but they may be worth considering.

Re: [Josh] Monday, May 21, 2018

#34
Silverware wrote:
Tue May 22, 2018 3:18 pm
I propose dealing with the AI at three different scales.

A) Individual Scale. This AI is one guy, with maybe as many as 5 ships that follow him, and spends mostly to improve what he has, or to replace things with better versions. He doesn't care about building stations or system infrastructure; he only cares about himself.
These guys do only what is most expedient at the moment, and will happily split into two AIs when they get particularly rich.

B) Small Group Scale. This AI controls a dozen to a hundred ships. He is focused either on finding and developing a single system, or on exploiting a few systems while maintaining a single base, and will often spend to convert a good place into a better place. But he will happily move on from an area if opportunity lies elsewhere, or if he is forced out.
These guys tend to own a single station or pair of stations, and will often bulk-produce a small quantity of goods.
These guys split only when their system is fully exploited, sending out a new small AI with a dozen ships and some cash to continue the expansion cycle.

C) Large Group Scale. This AI spans 100+ ships. He tends to focus on controlling space. He will grab systems and attempt to brutally control everything in them, letting systems go only when they become particularly bad. He is barely a step down from an Empire, just lacking the scale to be one. He is quite interested in policing and taxing any activity in his area. These guys are the factions that vie for actual control of the universe, and they will tend to control a vast number of assets, but only a few will be upgraded to a high enough level to be useful for high-end players.
These guys only split once their number of systems gets too large, splitting in half and forming an alliance with their new half. Eventually these alliances will degrade and new empires will properly form.

Fascinating.

You might even give these different levels of activity particular descriptive names, such as -- oh, I don't know -- Tactical, Operational, and Strategic, just for example.

;)

Jstorm wrote:
Tue May 22, 2018 6:25 pm
How are you considering Macro mechanics? Because the game will become limited and un-enjoyable if we must micro-manage the acquisition, production, assignment, armament, maintenance, etc, of every..single...asset. Which from what has been demonstrated, will be many, many, assets.

Hi, jstorm -- great first post! What you're saying here is very close to an observation I've been making for some time now, so I think we're in agreement on this.

Getting things working enjoyably in a single star system is important; that's what the player will see visually at any one moment. But things must also be in motion in every system in the game universe. And the features that will make this dynamism enjoyable to interact with are not the same features as what's fun in a single star system. So this aspect of the game definitely is going to require some specialized attention... but I don't recall hearing much from Josh about this important part of developing and testing Limit Theory yet.

That's why this is one of the items on my bullet-point list of "Milestones I'm Looking Forward to Seeing Accomplished" toward shipping the game.

Jstorm wrote:
Tue May 22, 2018 6:25 pm
How will a management AI decide to expand and where to put things? Will the player have to assign it to place every single station or tell it to guard an asteroid field or can it, if the player allows it, to use whatever mechanic the other AI's use to determine where to put things? Or can it be a mixture of both?
...
My last question about the Macro mechanics is if the player wants, can they just let the AI manage literally everything of the player made faction and just play in the aftermath of telling the AI to destroy the universe at all costs?

My impression is that these questions really sum up all your sphere-of-activity questions because they have (I think) the same architectural answer, which is Projects + Delegation.

That is, I believe the idea is to have logic that knows how to:

  1. Define a properly-leveled goal for the NPC's available resources (e.g., Build_Widget, Conquer_Star_System, Expand_To_Weaken_Adjacent_Sector).
  2. Define a Project theoretically capable of achieving this goal.
  3. Decompose the Project into sub-projects and delegate the sub-projects to the NPC's next-level-down subordinates (who may do the same down to atomic actions).
  4. Monitor the progress of the Project, adjusting as necessary for failed sub-projects, until the Project is either complete or likely unachievable.

If this is reasonably close to where Josh and Adam are going, then decision-making that's more complex than purely tactical -- "where to put things" and "who to attack" and so on -- is basically implemented as project management. That covers Military, Economic, Industrial, Research, etc. And the Delegation competency is what would let the player define a project then hand it off to an NPC who breaks it down and delegates it, and so on, allowing the player at the top of a faction to express a really high-level goal and then watch how the faction's NPCs try to accomplish it.
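
If I'm reading the Projects + Delegation idea right, the skeleton might look something like this; to be clear, this is my own speculation about the shape of it, not Josh's or Adam's actual design, and every name is invented:

  # Speculative skeleton: a goal becomes a Project, which is decomposed into
  # sub-projects and delegated to next-level-down subordinates.
  class Project:
      def __init__(self, goal, owner):
          self.goal = goal              # e.g. "Conquer_Star_System"
          self.owner = owner            # the NPC responsible for it
          self.subprojects = []
          self.done = False

      def decompose(self, subgoals):
          """Split the goal into sub-projects handed to subordinates."""
          for subgoal, subordinate in subgoals:
              self.subprojects.append(Project(subgoal, subordinate))

      def monitor(self):
          """Re-check progress; failed pieces would be re-planned here."""
          self.done = bool(self.subprojects) and all(p.done for p in self.subprojects)
          return self.done

  # A faction leader expresses one high-level goal...
  campaign = Project("Conquer_Star_System", owner="faction_leader")
  # ...and delegates the pieces downward.
  campaign.decompose([("Blockade_Jump_Gates", "fleet_admiral"),
                      ("Destroy_Defense_Stations", "strike_commander"),
                      ("Occupy_Planetside_Colony", "ground_commander")])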

Jstorm wrote:
Tue May 22, 2018 6:25 pm
I just want you to know that Macro tools and mechanics will be absolutely critical parts of gameplay. With procedural generation meaning we can play on massive scales and, with a good engine meaning large amounts of stuff; we are going to need extensive and deep Macro tools to be able to oversee it all while still having the game be playable and enjoyable.

Right there with you on the need for appropriate tools to show enough of the changing patterns of the game universe to allow for enjoyable high-level play. If you've watched the development update videos, the Josh from several years ago showed that he's got an outstanding grasp of how to represent necessary tactical information in a way that's useful and attractive. Operational- and Strategic-level information handling is a little different in content, but just speaking for myself, I'm confident Josh and Adam understand these needs and will deliver a UI that is functional and gorgeous at every level.

Cornflakes_91 wrote:
Wed May 23, 2018 4:34 am
communicating some intentionality doesnt make it more fun or interesting (in ways that arent shades of disaster recovery) when i have to handle a hundred dudes deciding at the same time that their mission is suicide and break off all on their own.
...
How is my military supposed to work if i get deserters every time my forces arent sure if they can win?
Because thats whats going to happen if they have intelligence and any kind of self preservation and no overriding cause to act opposing to it.

If you need a highly dutiful faction, why would you (or your NPC delegates) hire NPCs who aren't high in the Lawful trait, which would minimize going rogue?

The system I've described, which incorporates NPC personality traits affecting behavior, helps you have the kind of gameplay you enjoy.

Where are your ideas to help people who enjoy autonomous NPCs have the kind of gameplay experience they enjoy?

Hyperion wrote:
Wed May 23, 2018 7:59 pm
First up: Loss Aversion
Humans, other primates, and non-primates have all been demonstrated to have a greater emotional response to losing what they already have as compared to what they can potentially gain. In humans specifically, Our aversion to loss is 2.25 times as strong as our excitement over our gains

This is useful when thinking about making LT's NPCs "think" like humans... but remember that the 2.25 number you cited (I haven't looked up the source, so I'll assume it's accurate) is only an average over a lot of people. Individual variation in loss tolerance is likely to be significant (i.e., this bell curve will have a relatively large standard deviation). So if we're modeling this aspect of NPCs on Real Live Human Beings, then it's not unreasonable to expect a range of preferences for tolerance of loss, and behaviors appropriate to this variation.

(This is actually one of the things I wrap into my version of temperament theory. Serotonin-sensitive Guardians tend to have a higher response to loss, making them tend toward security-seeking in their behaviors: they want to be members of a stable hierarchy; they're risk-averse; they emphasize following the rules in order to accumulate wealth and status tokens. Other temperaments, less sensitive to serotonin than to dopamine [excitement-seeking Manipulators], testosterone [pattern-seeking Rationals], and oxytocin/estrogen [identity-seeking Idealists], will be less loss-averse than Guardians. The risk-enjoying Manipulators in particular are likely to come in way under that 2.25 number! I'm not suggesting LT's NPCs should have their preference systems based on temperament theory... but maybe it's a useful framing for the suggestions you're making.)
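
To make that "large standard deviation" point concrete, an NPC's personal loss-aversion coefficient could simply be sampled around a temperament-dependent mean. The per-temperament numbers below are pure invention on my part; only the 2.25 population average comes from Hyperion's post:

  import random

  # Invented per-temperament means; the spread is deliberately wide.
  LOSS_AVERSION_MEAN = {
      "Guardian":    3.0,   # most security-seeking, most loss-averse
      "Rational":    2.25,
      "Idealist":    2.0,
      "Manipulator": 1.2,   # risk-enjoying, well under the average
  }

  def roll_loss_aversion(temperament, spread=0.5):
      """Sample an individual NPC's loss-aversion coefficient (>= 1.0)."""
      return max(1.0, random.gauss(LOSS_AVERSION_MEAN[temperament], spread))

  npc_lambda = roll_loss_aversion("Manipulator")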

Hyperion wrote:
Wed May 23, 2018 7:59 pm
Humans also adhere to Authority of the Individual, but also uniquely adhere to Authority of the Position. It doesn't matter whether you're Pharaoh of Upper and Lower Egypt or President of the United States, you occupy a position which is transferable from one individual to another, and respect for the position's authority is maintained almost regardless of who the individual in the position is. This position of authority can of course be transferred peacefully, or with violence. Modern democracies tend to have peaceful and regular transitions of power, where one leader steps down after a certain time rather than only leaving the office by dying. However if you're an Emperor of China or Persia, the peaceful transition of power is far less certain, as there may be a contest of claimants, or conquest from an outside dynasty. However, regardless of whether it's a victorious heir or a new dynasty, the new ruler will take on the position of Emperor, gaining the Authority of Position in addition to their de facto Authority of the Individual.

Again, temperament theory suggests that your description here is most applicable to those favoring the Guardian temperament. There's quite a bit of organization theory -- see Charles Handy's Understanding Organizations -- that reveals some patterns in organization form and behavior:

  • Organizations tend to coalesce into particular styles based on the personality of their founders.
  • Handy, citing Harrison, describes four common organizational patterns:
    • Power: linked spider-webs of personal influence
    • Role: hierarchies with well-defined rules for ascension
    • Task: flexible networks good at reconfiguring to achieve specific outcomes
    • People: consensus-oriented circles that emphasize participation
  • Over time, unless actively resisted, all organizations tend to become Role-based.

You may or may not be surprised that the four typical organization patterns tend to line up remarkably well with the four temperaments. :D Sensation-seeking tactical Manipulators create Power organizations that adapt quickly to opportunities; security-seeking operational Guardians establish hierarchical Role organizations that are optimal for survival situations; pattern-seeking strategic Rationals build Task organizations that pick a thing and do it really well; and identity-seeking visionary Idealists generate People organizations that see a human need and inspire people to come together to satisfy that need.

All of this might apply to LT factions: in the structure they initially take (based on the personality traits of the NPCs who found the faction), in the authority style of the NPC at the top of the faction (which affects the kinds of outputs it prefers to produce and how well it's suited to create them), and in how the faction changes over time into a Role-based organization that expects every member to follow the rules (high Lawful trait?) and that wants a monopoly on making the one thing it currently creates that made it money to start with.

Imagine the fun when a Role-based organization replaces its top-level authority with a tactical, action-oriented, risk-tolerant, opportunity-seeking Manipulator NPC, who immediately begins trying to remake the organization into a Power-focused structure that's quick to try new ideas. ;)



...I seem to have said a little more than I intended in this post. Um... thanks, Josh, for an update that encourages ideas from so many folks, including first-time posters? :lol:

Re: [Josh] Monday, May 21, 2018

#35
Further ideas on using the heatmap data:

When several sector heatmaps are combined, they can produce more specific input for AI decisions:

#1: a heatmap tracking past economic success (regular profits)
#2: a heatmap tracking competitor density
----
Subtracting #2 from #1 creates a heatmap indicating mid-term unsupplied market opportunities.

That can give an AI with currently open investment funds (cash) an indication of where to erect a certain type of station: in an area that is profitable, but has not yet been overcrowded by competitors.
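
Sketched in code, that combination step is just a per-sector subtraction (structures and weights are placeholders of mine):

  # Sketch: derive an "opportunity" heatmap from success minus competition.
  def combine_heatmaps(success, competition, competition_weight=1.0):
      """Both inputs map sector -> score. Higher result = a profitable,
      not-yet-overcrowded market opportunity."""
      sectors = set(success) | set(competition)
      return {s: success.get(s, 0.0) - competition_weight * competition.get(s, 0.0)
              for s in sectors}

  success_map     = {"sector_a": 8.0, "sector_b": 6.0, "sector_c": 1.0}
  competition_map = {"sector_a": 7.5, "sector_b": 1.0}
  opportunity = combine_heatmaps(success_map, competition_map)
  best = max(opportunity, key=opportunity.get)   # -> "sector_b"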

-----

Investment personality types (a rough parameterization is sketched after the list):

# risk-taking
-favors a random area with low previous economic activity, but with an as-yet-unsupplied supply-demand gap
-will invest around 75% of free capital into a new project

# entrepreneurial
-will invest in areas that have shown recent profitable endeavors, but will avoid regions with already-high competition
-will invest around 50% of free funds into new projects

# market follower
-will invest in areas where there is already a lot of competition and activity (if it works for others, copy it)
-will invest around 50% of free funds into new projects


# conservative
-will only invest in regions that have shown a stable profit long-term (even if it's not the most profitable area)
-will invest around 25% of free funds into new projects
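
One way those four types could boil down to a few numbers per personality (the capital fractions are the guesses above; the other fields and their values are mine):

  # Rough parameterization of the investor personalities listed above.
  # "novelty" = willingness to try areas with no track record,
  # "crowd_tolerance" = willingness to build where competitors already are.
  INVESTOR_TYPES = {
      "risk_taking":     {"capital_fraction": 0.75, "novelty": 0.9, "crowd_tolerance": 0.2},
      "entrepreneurial": {"capital_fraction": 0.50, "novelty": 0.6, "crowd_tolerance": 0.3},
      "market_follower": {"capital_fraction": 0.50, "novelty": 0.2, "crowd_tolerance": 0.9},
      "conservative":    {"capital_fraction": 0.25, "novelty": 0.1, "crowd_tolerance": 0.5},
  }

  def investment_budget(personality, free_capital):
      return INVESTOR_TYPES[personality]["capital_fraction"] * free_capital

  budget = investment_budget("conservative", free_capital=200000)   # -> 50000.0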


------

trading information:

Heatmaps are basically historical (past or very recent) market, activity, and event observations that provide a value.
Events at certain locations update the heatmaps of AIs in that area.

Every corporation could build up its own version of such data over time.

A corporation can then purchase heatmaps from others to combine with its own, improving the range covered and reducing random noise / increasing detail.
Unknown areas just have a default value; the AI will neither favor them (investment) nor avoid them (danger), due to the lack of knowledge.

Older heatmap data could be outdated and thus has a lower value. The same goes for data that is widespread and thus "common knowledge".
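
A possible way to fold purchased data into a corporation's own map, discounting by age (the half-life figure is arbitrary and just for illustration):

  # Sketch: merge a purchased heatmap into our own, weighting it by how old
  # it is. Stale intel counts for less; unknown sectors keep the default 0.
  def age_weight(age_in_days, half_life=30.0):
      return 0.5 ** (age_in_days / half_life)

  def merge_heatmaps(own, purchased, purchased_age_days):
      w = age_weight(purchased_age_days)
      merged = dict(own)
      for sector, value in purchased.items():
          merged[sector] = merged.get(sector, 0.0) + w * value
      return merged

  combined = merge_heatmaps({"sector_a": 4.0},
                            {"sector_a": 2.0, "sector_d": 6.0},
                            purchased_age_days=60)   # 60-day-old data, weight 0.25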


-----------

Lucrative pirate ambush sites.

One heatmap tracks the paths of trading ships, and reduces the local value when armed patrol ships are present
(basically indicating unprotected high-traffic trade lanes).
Pirates will naturally favor such spots with high trade but low defense.

Traders in turn will map pirate activity and avoid such locations.

-> over time there will be a constant shift in the location of high pirate activity.

-------------

Supply-Demand gaps
#1: a heatmap tracks the supply of a certain good, let's say Titanium
#2: a heatmap tracks the open demand for Titanium (open demand, as in repeatedly having a lower stock than the amount the station plans to use up in production)

An AI looking for a new trade opportunity can then look for supply hotspots in #1 and demand hotspots in #2.
The difference (adjusted by the cost of travel, e.g. time) will then indicate a potentially good profit.

Stations with a high demand or supply can advertise this publicly (public heatmaps), increasing the likelihood of traders coming to do business.
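
The trader's lookup could then be as simple as scanning both maps and penalizing travel cost (everything here is placeholder structure of my own):

  # Sketch: pick the best supply -> demand pairing for a good (e.g. Titanium).
  # A pair is scored by how much could actually move, minus travel cost.
  def best_trade_route(supply_map, demand_map, travel_cost, cost_weight=1.0):
      best, best_score = None, float("-inf")
      for src, supply in supply_map.items():
          for dst, demand in demand_map.items():
              score = min(supply, demand) - cost_weight * travel_cost(src, dst)
              if score > best_score:
                  best, best_score = (src, dst), score
      return best, best_score

  supply = {"sector_a": 50.0}                      # Titanium piling up here
  demand = {"sector_b": 40.0, "sector_c": 80.0}    # open demand hotspots
  route, score = best_trade_route(
      supply, demand,
      travel_cost=lambda a, b: 10.0 if b == "sector_c" else 2.0)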

Re: [Josh] Monday, May 21, 2018

#36
I'm a bit late on replying here.
Jstorm wrote:
Tue May 22, 2018 6:25 pm
Awesome Work. Truly.

The flow economy is going to be like nothing else we've seen in a game.
The ability for first-person dogfighting or RTS control will be truly fascinating.

However, and I may be mistaken, but for the year I have been following the development of this amazing game I have not come across a discussion about the macro side of gameplay.
And since Josh is starting work on factions I believe it might be time to discuss such things. Macro gameplay is a very, very critical part of this game concept. The flow economy and the dynamic ability for the A.I. to develop means that the environment the team is developing will be ripe for player intervention.
Wow, what a first post. :lol: Welcome to the forum! Very glad you're posting! :D Let me see if I can try to answer a few questions.

First off, there has been a little discussion of macro mechanics here and there, but not overmuch. Most of the "macro" talk has been in terms of job management. Some of the things you seem to suggest (AI combat macro-management) haven't ever come up at all, I think - at least, not that I personally know of.


Jstorm wrote:
Tue May 22, 2018 6:25 pm
When or if the player chooses to create a faction/corp/etc will there be an AI similar to what runs all the other factions created for the player? I know that some sort of AI will have to be involved in order to run things. But I'm curious as to how much of the tasks it will run.

In my mind I think the AI will have to accomplish the same tasks as any of the other Factions, however, it will have to do so with player guidance. Which are where my questions come from. How much will the AI do? Will the player be able to dictate what it does? Give it direction? Decide on how aggressive it is or how passive?
As I understand it, when you create a new faction, you're required to take care of it in the same way the AI would - you have to figure out what to do, give orders, make alliances, plan where to build stations, etc. At some point it might be advantageous to give parts of it to the AI, as you've mentioned, but I'm not sure we really got far enough to think about that yet. If we want to be able to handle an infinitely-sized military, there are certainly things we want to take care of - for instance, you don't want to have to constantly purchase ships for your growing empire. Perhaps you should be able to set up "worker AIs" that you can funnel a set percentage of your income into, for them to try to handle themselves - and then you can have them send new ships places at their own discretion, unless you want to build a massive fleet or something similar.

For instance, you simply set a setting: Funnel 15% of income into AI #3,
and this AI is tied to a set of star systems and can purchase or reassign anything within them. This would massively reduce player-side micromanagement, I think, although if you really want to keep your empire at its best, micromanaging might always be the best option... there isn't much way around that. :)
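
In config terms, my own guess at how such a setting could be expressed (none of these names or fields are confirmed for LT):

  # Hypothetical "funnel income to a regional manager AI" setting.
  manager_settings = {
      "manager_id": 3,
      "income_fraction": 0.15,                       # 15% of faction income
      "assigned_systems": ["system_a", "system_b"],  # may buy/reassign here
      "may_purchase_ships": True,
  }

  def route_income(faction_income, settings):
      """Credits handed to the manager AI each pay period."""
      return settings["income_fraction"] * faction_income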

It's important to note, though, that ships have their own AI. Every ship you build can be told to either follow you, or do its own thing. You can set "missions" such as "deposit X ores at Station Y" but they decide whether they take those missions. The AI cannot "force" the player to take an action, and likewise, except for perhaps fleet combat, I don't think the player can (or has to) "force" the AI to take actions. So, for a large part, your ships should manage themselves, removing that level of micromanagement.
Jstorm wrote:
Tue May 22, 2018 6:25 pm
How will a management AI decide to expand and where to put things? Will the player have to assign it to place every single station or tell it to guard an asteroid field or can it, if the player allows it, to use whatever mechanic the other AI's use to determine where to put things? Or can it be a mixture of both?

Lastly Industry. Say the management AI places a factory. Can it automatically assign traders or transports to go to the warehouses to pick up the materials necessary or go to the market and buy the materials then transport it back to the factory? Or will the player have to make the trade itinerary for every...single...logistic ship (trader, Miner, Transport, etc). Can the AI decide how to distribute the products produced by the factory? Like, say the factory was made to produce weapon X for construction on Fighter Y. Will the AI be able to manage the logistics necessary to ferry the weapons from the weapons factory to the ship factory? Or will the player have to assign and coordinate every single ship, factory, and distribution channel of all factory or trade/transport systems? Will the AI be able to make the factory chains totally on its own if instructed or allowed to by the player?
This will never come up. NPCs have full control over their actions. It's not an RTS per se, although the combat can play out like one. I think you might be able to give forced orders to the AI, but by default, they will do things completely on their own. They want to make money too, after all. :)
Jstorm wrote:
Tue May 22, 2018 6:25 pm
My last question about the Macro mechanics is if the player wants, can they just let the AI manage literally everything of the player made faction and just play in the aftermath of telling the AI to destroy the universe at all costs?
That's an interesting question, and one I don't actually know the answer to. Maybe? :P


Jstorm wrote:
Tue May 22, 2018 6:25 pm
How will procedural generation be controlled? Will you tell it to generate X number of systems on start? or will it actively generate the frontier as the player and NPCs explore and expand? If so can we tell it to stop generating if we feel we have reached a certain size we want?

Are Factions dynamic? Meaning can they come into existence mid-game. Like if an NPC builds enough wealth can it found a new faction. Or will the number of factions be set in the beginning and only be destroyed until one is left?

Will diplomacy be a consideration in gameplay? like when two weaker factions form an alliance to face a big faction. Will those factions be able to split if one wants the others resources after the war?

And this one is just pure curiosity, but will machine learning be moddable into the AI?

And I know that you guys may be theorizing on a lot of the Macro gameplay and not know what you are going to do just yet. I just want you to know that Macro tools and mechanics will be absolutely critical parts of gameplay. With procedural generation meaning we can play on massive scales and, with a good engine meaning large amounts of stuff; we are going to need extensive and deep Macro tools to be able to oversee it all while still having the game be playable and enjoyable.

Love reading the dev-logs by the way. It always excites me to see a new one posted.
1. Procedural generation will, as I understand it, start with X systems, and then the universe will expand as the player explores. I don't think you'll be able to tell it to stop generating by default, but that should be moddable.

2. Factions are dynamic, as I understand it. They can spring into existence mid-game and, in fact, are really just formed by ships gathering enough ships to be recognized. By that token, you might say every unallianced ship is its own faction too - and as they purchase more assets, their faction grows. They can leave factions or join them, too, iirc.

3. Diplomacy will absolutely be a consideration, as I understand it - but I think in the form of "reputation" and not hard-set "alliances". You can raise your reputation with other factions, or lower it. Raising it lets them trust you, lowering it makes them expect you to attack, so they may try to strike first.

4. Machine learning will be moddable into the AI.

Again, excellent first post! :D


NGimbal wrote:
Wed May 23, 2018 1:34 pm
I think this is my first post? Been following since the very beginning :)

[...]

I guess what I'm trying to get at is that it might not be possible to find one "solution to everything" but maybe breaking types of investment down into categories would be alright.
A warm welcome to you! Welcome to having a post count. :D I've seen you pop in every now and then, so it's nice to see you finally post!

This is personally my thought too. I don't think Josh is going to find his "42" here - his "solution to life, the universe, and everything". I think he'll have to settle for a number of systems that interact cleanly. The way it sounds like he's trying to set it up right now, it'll pepper a star system with a thousand tiny stations. :lol: And that just won't work over the long term at all.

Re: [Josh] Monday, May 21, 2018

#37
Flatfingers wrote:
Thu May 24, 2018 12:19 am
Cornflakes_91 wrote:
Wed May 23, 2018 4:34 am
communicating some intentionality doesnt make it more fun or interesting (in ways that arent shades of disaster recovery) when i have to handle a hundred dudes deciding at the same time that their mission is suicide and break off all on their own.
...
How is my military supposed to work if i get deserters every time my forces arent sure if they can win?
Because thats whats going to happen if they have intelligence and any kind of self preservation and no overriding cause to act opposing to it.
If you need a highly dutiful faction, why would you (or your NPC delegates) hire NPCs who aren't high in the Lawful trait, which would minimize going rogue?

The system I've described, which incorporates NPC personality traits affecting behavior, helps you have the kind of gameplay you enjoy.
i'd like for a common faction to be at least reasonably stable.

a common miner is likely to be very much inclined to just step out of the way (and out of my faction) when threatened, if they have a modicum of self preservation.

they'll just run, even if there's a patrol on the way and they don't look around enough,
leaving me with a dozen fewer miners for no real reason.

Flatfingers wrote:
Thu May 24, 2018 12:19 am
Where are your ideas to help people who enjoy autonomous NPCs have the kind of gameplay experience they enjoy?
being subservient to getting basic faction mechanics workable first.

you don't need a universe populated by 100% optimising, self-preserving agents to have dynamism in there.

NPCs can do retreat maneuvers without having to be independent agents.


basically all i want is that factions don't disintegrate from basic interactions.

factions disintegrating should take some serious stimulus.

Talvieno wrote:
Thu May 24, 2018 11:15 am
It's important to note, though, that ships have their own AI. Every ship you build can be told to either follow you, or do its own thing. You can set "missions" such as "deposit X ores at Station Y" but they decide whether they take those missions. The AI cannot "force" the player to take an action, and likewise, except for perhaps fleet combat, I don't think the player can (or has to) "force" the AI to take actions. So, for a large part, your ships should manage themselves, removing that level of micromanagement.
yeah, if i can't command my own ships around i'm going to become very very annoying.

outside of special conditions, if i tell my ship to jump, the only acceptable counter-question is "how high" and not "why"

Re: [Josh] Monday, May 21, 2018

#38
Cornflakes_91 wrote:
Thu May 24, 2018 3:11 pm
Talvieno wrote:
Thu May 24, 2018 11:15 am
It's important to note, though, that ships have their own AI. Every ship you build can be told to either follow you, or do its own thing. You can set "missions" such as "deposit X ores at Station Y" but they decide whether they take those missions. The AI cannot "force" the player to take an action, and likewise, except for perhaps fleet combat, I don't think the player can (or has to) "force" the AI to take actions. So, for a large part, your ships should manage themselves, removing that level of micromanagement.
yeah, if i cant command my own ships around im going to become very very annoying.

outside of special conditions if i tell my ship to jump the only acceptable counter question is "how high" and not "why"
Well no, of course, of course, but I mean on a particular level, you see - a particular "scale", if you will. For instance, you wouldn't want the kind of micromanaging required to say "mine this asteroid", "take it to this station", "sell it for this price", "refuel and otherwise empty your cargo hold", "fly back", "mine this other particular asteroid", and so on and so forth - and I don't think you'll have to. I think you can tell them to mine and set "jobs" for them to accomplish, much in the same manner as Dwarf Fortress - but also like Dwarf Fortress, I think it's up to them to decide whether they actually want to do it at that precise moment (except perhaps fleet/distance orders, again - keeping people in a fleet is fairly important, as would be relocating to an entirely new system or perhaps even a particular point in that system) - but they, or someone else, will get around to it eventually.

My point here being, the AI can manage itself, unlike in an RTS - because this isn't supposed to be an RTS. It may have RTS elements, but that doesn't in any way mean it actually is a proper RTS game. I'm sure you can have a fair level of micromanagement if you want, but for the most part, things should handle themselves with minimal player intervention unless the player chooses otherwise. It's important that it works that way, too, because the multi-system and three-dimensional nature of the game makes a true RTS interface somewhat difficult and cumbersome.

Re: [Josh] Monday, May 21, 2018

#39
Talvieno wrote:
Thu May 24, 2018 3:33 pm
Well no, of course, of course, but I mean on a particular level, you see - a particular "scale" if you will. For instance you wouldn't want the kind of micromanaging required to say "mine this asteroid" "take it to this station" "sell it for this price" "refuel and otherwise empty your cargo hold" "fly back" "mine this other particular asteroid" and so on and so forth - and I don't think you'll have to. I think you can tell them to mine, and set "jobs" for them to accomplish, much in the same manner as Dwarf Fortress - but also like Dwarf Fortress, I think it's up to them to decide whether they actually want to do it at that precise moment (except perhaps fleet/distance orders, again - keeping people in a fleet is fairly important, as would be relocating to an entirely new system or perhaps even a particular point in that system) - but they, or someone else, will get around to it eventually.
yeah, not acceptable. my ships go where i tell them, when i tell them.
and they organise themselves when i tell them to sort stuff out themselves (eg by assigning them to a mining project)

i can't use a transporter that just goes "nah, i first have to do this other shit" when i order him to rearm my personal ship.
or when i see an opportunity for a good trade and the transport just loiters around doing other stuff, and i miss the trade because of that.
or when some good rock gets snagged away in front of my miner because something else was earlier in their queue than my direct order.

of course i'd want ways to automate stuff, but they shouldn't be the only way.

Re: [Josh] Monday, May 21, 2018

#40
Currently, what you're describing is implemented in the game - you can give any sort of custom order to any ship under you. I can personally confirm that. I just don't know if that's staying, as it seems to be something Josh doesn't particularly like or enjoy. I remember him talking excitedly about jobs and missions and free choice for NPCs. You'll have to ask him about that. If he has it in now, there's a good chance he may decide to keep it in if we nudge him in that direction... although I would still think he'll probably make "free choice" the default option.

Re: [Josh] Monday, May 21, 2018

#41
Thanks for the detailed answers!
Flatfingers wrote:
Thu May 24, 2018 12:19 am
If this is reasonably close to where Josh and Adam are going, then decision-making that's more complex than purely tactical -- "where to put things" and "who to attack" and so on -- is basically implemented as project management. That covers Military, Economic, Industrial, Research, etc. And the Delegation competency is what would let the player define a project then hand it off to an NPC who breaks it down and delegates it, and so on, allowing the player at the top of a faction to express a really high-level goal and then watch how the faction's NPCs try to accomplish it.
I see what you mean. Projects would be a very good way of giving the AI direction and having it achieve a certain end goal - even if that goal is just to destroy the universe. But I wonder what would be the best way of expressing a goal. The desired end result, like "win this war"? Or an open-ended one, like "wage this war"?
Talvieno wrote:
Thu May 24, 2018 11:15 am
At some point, it might be advantageous to give parts of it to the AI, as you've mentioned
So by "give" parts of it to the AI are you talking like a how a usual government works where you have a top dog, however, he'll assign certain duties away to be carried how? If this what you mean, would you be giving, say, operational control of certain things to the AI like ship production; or geographical control over certain regions like a feudal type system?
Talvieno wrote:
Thu May 24, 2018 3:33 pm
I think you can tell them to mine, and set "jobs" for them to accomplish
When you say "set jobs", do you mean telling the actual miners to just fill cargo and fulfill demand? Or do you mean something like, if you had a factory, you would have two jobs on it: one job for ore import, and one job for product export?

Re: [Josh] Monday, May 21, 2018

#42
Yeaaaah, I think we may need a ruling from Josh or Adam on this question of NPC autonomy.

I thought your responses were very reasonable, Tal, with a special emphasis on "this isn't supposed to be an RTS." That phrase right there, with respect to NPC autonomy, tells me that I should not expect NPCs to just float aimlessly in space doing nothing until told exactly what to do, and then they do that and only that regardless of anything happening around them -- as in a conventional RTS. How would implementing NPCs in LT so that their only "AI" is pathfinding be a good fit with the rest of the kind of game Josh has said he wants to make?

At the same time, even I'm not sure about trying to play a game in which there's a high probability that any NPC I try to give a project or action to will tell me to bugger off. I don't necessarily want a world of mindless slaves as Cornflakes insists on having, but I do think his perspective ought to be taken seriously. Not just because I want him to be able to enjoy LT, but because I guarantee there'll be lots of Cornflakeses who buy LT expecting to be able to play it as a conventional RTS game.

So how can both these interests be satisfied?

Are we talking about a global "Level of NPC Autonomy" slider that's basically the percentage chance that any individual NPC will refuse an assignment?

Or, as I've suggested, is it good enough to expect players who want their factional NPCs to obey every command without question to have to find and hire NPCs with a very high Lawful trait? I personally strongly prefer this approach because: 1) it provides a means to have NPCs who do what they're told; 2) it allows some level of autonomy for players who like that sort of thing; 3) it makes good use of an NPC personality trait; 4) the need to find and hire NPCs with a high Lawful trait adds an interesting gameplay element; and 5) it helps to distinguish LT from other games that are RTSs or contain RTS gameplay.
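
In the crudest possible terms, the Lawful-trait approach could reduce to something like this; a sketch of the idea only, not a claim about how LT actually decides:

  import random

  # Chance an NPC refuses an order, driven by its Lawful trait and how risky
  # the order looks to it. Both inputs range from 0.0 to 1.0.
  def refuses_order(lawful, perceived_risk):
      refusal_chance = (1.0 - lawful) * perceived_risk
      return random.random() < refusal_chance

  # A 0.95-Lawful recruit almost never balks, even at a risky assignment;
  # a 0.2-Lawful free spirit will bail on dangerous orders most of the time.
  print(refuses_order(lawful=0.95, perceived_risk=0.8))
  print(refuses_order(lawful=0.20, perceived_risk=0.8))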

(Note: anticipating the "I Do Not Want To Have To Waste My Playing Time Looking For High-Lawful NPCs To Hire!!" objection, what if part of the game -- maybe LT v1.0, maybe a mod -- was a kind of hiring agency? When you're ready to build a faction, you "tell" the local agency what kind of NPCs you'll be wanting to hire. Then, when you're ready to add an NPC to your faction, you just bring up the interface to the hiring agency and hire one of the NPCs of the correct type that they've already located for you. Would this really feel painfully more complicated than... whatever the non-agency method of hiring an NPC into a faction will be?)

Or is there some other/better approach for implementing NPCs who can be either highly autonomous or highly dutiful?

Jstorm wrote:
Thu May 24, 2018 10:49 pm
Projects would be a very good way of giving the AI direction and having it achieve a certain end goal. Even if it is just to destroy the universe. But I wonder what would be the best way of expressing a goal. The desired end result? Like, win this war? Or an open-ended one? Like, wage this war?

I have some thoughts on this. :D First, though, let me suggest a few previous posts/threads where we knocked around some of these concepts:

  • AI Task Delegation (Dec. 29, 2013): Early thoughts on defining projects in terms of high-level Goals and atomic-level Tasks.
  • Project Types (May 2, 2014): Includes a specific post from Josh about projects. (Warning: also includes theorycrafting from ThymineC. :D )
  • The "Game" in LT (July 25, 2017): Specific thoughts on implementing Projects, followed by excellent ideas from Hyperion.

And yet, I'm not sure these directly address in useful detail the question you're asking here, which I think is something like, "What does the mechanic for picking a goal to give to an NPC look like?" And inherent in that question is this one: How do you give an NPC a goal that 1) is within its capabilities, and 2) usefully leverages its unique personality traits?

Just off the top of my head (meaning I expect others here can do a lot better), I can see two design decisions that need to be made. First: how are goals to be expressed? Is there a giant Master List of goals that the game understands, and you pick one goal from that list? (Presumably this list would have some internal organization so it's not a hundred pages long.) Or can players (and NPCs) create their own goals by stringing together several constraints, sort of like Mad-Libs where you pick a goal type from a list and then fill in the blanks with the specifics you want?

That's the first thing. For the second, let's just assume there are goals that can be expressed: how do you ensure that you can give NPCs goals that make sense? For this question, I can imagine picking a goal being either top-down or bottom-up. Bottom-up is (relatively) simple: the list of goals you are shown contains only the goals that this particular NPC is capable of accomplishing given its current resources; your fun comes from picking a goal for that NPC that makes maximum use of its particular personality traits. The top-down approach is more open-ended: you can pick any goal you want for an NPC, but this means you're responsible for choosing a goal that the NPC is capable of carrying out successfully, as well as for optimizing the goal to the NPC's personality.
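
The "Mad-Libs" option might be as simple as a goal type plus fill-in-the-blank parameters, with the bottom-up filter just checking the NPC's resources. Pure speculation on my part; every template, threshold, and field name below is invented:

  # Speculative goal templates: a goal type plus blanks to fill in, and a
  # bottom-up filter that only offers goals the NPC could plausibly attempt.
  GOAL_TEMPLATES = {
      "Build_Station":   {"blanks": ["station_type", "sector"],
                          "min_credits": 50000, "min_ships": 1},
      "Blockade_System": {"blanks": ["system"],
                          "min_credits": 10000, "min_ships": 20},
  }

  def available_goals(npc_credits, npc_ships):
      """Bottom-up: list only the goal types this NPC can take on right now."""
      return [name for name, t in GOAL_TEMPLATES.items()
              if npc_credits >= t["min_credits"] and npc_ships >= t["min_ships"]]

  # Top-down use: the player fills in the blanks themselves and takes
  # responsibility for whether the NPC can actually pull it off.
  goal = {"type": "Build_Station", "station_type": "refinery", "sector": "sector_b"}
  print(available_goals(npc_credits=75000, npc_ships=3))   # -> ['Build_Station']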

Here, it might be important to note that Josh has said in the past that he doesn't really want to get into crew management features/gameplay. If that's still the case, then I wouldn't expect this goal creation/assignment system to be too complex in LT v1.0.

Re: [Josh] Monday, May 21, 2018

#43
If the player's own AI ships are doing something other than what was commanded, there needs to be some clear feedback on why they chose to, like "I have to go to a repair station first" or "this sounds too risky for me".
If there is no clear feedback to the player, the decisions will feel confused, random, or even buggy.

(As in the first FEAR game, an important trick to make the AI actors feel conscious was them shouting out "thoughts" about their intent.)

When their reason for not following orders is clear, the player has a chance to "help" them do the task they were assigned, or to avoid giving them those commands in those circumstances.
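
Mechanically, that is just attaching a reason string to every declined order, e.g. (illustrative only, with made-up fields):

  # Sketch: an order response always carries a human-readable reason, so a
  # refusal reads as a decision rather than a bug.
  def respond_to_order(ship, order):
      if ship["hull"] < 0.3:
          return {"accepted": False, "reason": "Have to go to a repair station first."}
      if order.get("risk", 0.0) > ship["risk_tolerance"]:
          return {"accepted": False, "reason": "This sounds too risky for me."}
      return {"accepted": True, "reason": "On my way."}

  print(respond_to_order({"hull": 0.2, "risk_tolerance": 0.5}, {"risk": 0.1}))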

Re: [Josh] Monday, May 21, 2018

#44
Damocles wrote:
Fri May 25, 2018 12:01 am
If the players own AI ships are doing something else than commanded, there needs to be some clear feedback why they chose to, like "have to go to repair station first", or "this sounds too risky for me". If there is no clear feedback to the player, the decisions would feel confused, random or even buggy.
...
When their reason not to follow orders is clear, then the player has a chance to "help" them doing the task they where assigned, or avoid assigning them those commands in those circumstances.

I agree with this, within reason. If I'm commanding a magnificent fleet of a thousand ships, I don't really want to be getting spam from 30 of them all the time telling me "nope."

But clear "yes" or "nope" responses from immediate subordinates sounds very useful.

Damocles wrote:
Fri May 25, 2018 12:01 am
(As in the first FEAR game, an important trick to make the AI actors feel conscious was them shouting out "thoughts" about their intent.)

I said the same thing about Thief, but yes: the first FEAR game felt really satisfying in this respect.

"He's toooo fffffaaaassssstttttt!"

Re: [Josh] Monday, May 21, 2018

#45
Flatfingers wrote:
Thu May 24, 2018 11:58 pm
At the same time, even I'm not sure about trying to play a game in which there's a high probability that any NPC I try to give a project or action to will tell me to bugger off. I don't necessarily want a world of mindless slaves as Cornflakes insists on having, but I do think his perspective ought to be taken seriously. Not just because I want him to be able to enjoy LT, but because I guarantee there'll be lots of Cornflakeses who buy LT expecting to be able to play it as a conventional RTS game.
I have no idea why you claim i want mindless drones.

I want employees who are about as obedient as any of my highly skilled and minded coworkers.
Self-organising, not dependent on detailed instructions, and capable of working down lists according to priorities.
And "this has to get done now" tends to be faaaar up the priority list.

I don't know how this is handled at your workplace, but where i'm from it's this way:
if your boss tells you to do something now, you don't turn around and go on a coffee break, unless you want the boss not to stay your boss.

(Minus contractually defined responsibility limits. But i don't expect corporate bureaucracy jungles to be included...)
