

Re: AI Morality

#16
Grumblesaur wrote:
ThymineC wrote: Could you elaborate more on how morality and personality are inter-linked, though? Let's assume we're using Josh's AEGIS system for personality:
  • Aggressive
  • Explorative
  • Greedy
  • Intellectual
  • Sociable
How would an AI's morality differ if, say, it were in a region where the "Intellectual" characteristic is particularly dominant? What kind of differences would there be in its behaviour because of that?
I think it'll work something along these lines:

Let's say we have an NPC with the following set of characteristics on the AEGIS personality mapping.

Passive------------------------|------Aggressive
Unadventurous--|------------------------------Explorative
Charitable--------------------------------|---Greedy
Unintelligent------------------------|----------Intelligent
Avoidant--|----------------------------Sociable

This NPC is fairly assertive, very interested in expanding its own wealth, and reasonably clever. This NPC is also not likely to venture far from its home area, and isn't terribly interested in cooperating with other NPCs. Since "Greed" is this NPC's most prevalent trait, let's assume it's most interested in making money. This gives us some lawful and unlawful possibilities:

Lawfully, there's mining. It doesn't usually step on people's toes and can be a solitary activity. It pays well, and doesn't necessarily require the NPC to go far from home, so long as there's a mineral field in the area. This is compatible with the E, G, and S traits of this NPC. Its Aggression is a bit of a wildcard: there's no way to aggressively mine (unless you put all your mining lasers on the "turbo" setting), but it could act aggressively should someone enter its prospecting territory. Intelligence doesn't play into it much, since you don't have to be a genius to shoot rocks.

Unlawfully, there's piracy. Piracy is an aggressive act that fulfills the NPC's want for money, and it doesn't require working with other pirates. The NPC wouldn't even need to talk to its targets much, if it didn't mind losing some cargo to ship explosions (though, in service of its greed, the NPC might learn to be more "sociable", even if that sociability is only an act to get a trader to drop his load). It also requires a certain degree of cleverness to think ahead of the local authorities and stay out of the way of the law. However, the NPC may have trouble with this due to its lack of willingness to explore and look for more hunting grounds.

So if this NPC were more adventurous (with all other personality stats the same), piracy may seem the better option. For now, mining is probably a better choice. (emphasis added)
That's pretty neat, actually. To combine this with Mordakai's idea of NPCs basing their decisions on the "rules of the land", where each region has a standard personality vector and NPCs are more likely to exhibit behaviours in accordance with it: an NPC with traits like the ones you gave that entered or settled into a region whose standard personality vector had a high adventurous component might be more inclined to commit acts of piracy in that region, whereas the same NPC entering or settling into a region where low aggression was the norm would be more inclined to mine.

That's at least how I'm interpreting Mordakai's idea, but it seems strange that piracy would become "less risky" in a system just because the adventurous component of the standard personality vector was high. :think:
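A minimal sketch of how a trait-weighted choice like the mining-vs-piracy one above might be scored, assuming an AEGIS vector on a 0-10 scale; the trait values, the per-activity weights, and the choose_activity helper are hypothetical illustrations, not anything from Josh's actual code:

```python
# Hypothetical AEGIS values on a 0..10 scale, roughly matching the bars above:
# Aggressive, Explorative, Greedy, Intelligent, Sociable.
npc = {"A": 7, "E": 2, "G": 9, "I": 6, "S": 2}

# Made-up appeal model: a base appeal per activity plus per-trait weights.
activities = {
    "mining": {"base": 5.0, "A": 0.0, "E": 0.1, "G": 0.7, "I": 0.1, "S": -0.2},
    "piracy": {"base": -2.0, "A": 0.5, "E": 1.0, "G": 0.6, "I": 0.3, "S": 0.0},
}

def choose_activity(traits, options):
    """Return the option with the highest trait-weighted appeal."""
    def appeal(weights):
        return weights["base"] + sum(weights[t] * traits[t] for t in traits)
    return max(options, key=lambda name: appeal(options[name]))

print(choose_activity(npc, activities))  # "mining"; raise E above ~3 and piracy wins
```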

Re: AI Morality

#17
I think it's sort of an extension of the principle of the path of least resistance, where the NPC will go for something that's in its "comfort zone" based on its personality. But that leaves no room for an inherent regard (or disregard) for socially acceptable behavior, which is itself unrealistic: the environments that people are in during their development certainly impress upon them a certain range of "socially acceptable" behaviors, as well as a particular mindset toward authority and legality. So I think adding Morality traits to the AEGIS spectra might be a good idea, despite the fact that it kind of messes with the handy acronym.

But yeah, piracy in a region where NPCs have a greater tendency to explore and be comfortable off the grid would certainly be difficult for a pirate looking to exploit trade lanes, since they might not even find a large volume of traders on those lanes.

So for decision making, NPCs would consider:
  • Their own AEGIS-US* traits, for deciding a range of preferred methods to reach their goal(s).
  • The AEGIS-US traits of actors relevant to something they're planning to do, unless they're far to the right side of the Empathic-Sociopathic scale, in which case they'll likely act without considering this at all.
  • The regional AEGIS-US values.

    *Aggressive-Explorative-Greedy-Intellectual-Sociable-Unstructured-Sociopathic
The last one, the regional values, is a great way to influence the AI's decision-making even if no other actors are present. You might not want to mine in a sector full of aggressive sociopaths (as you probably don't want to end up the victim of a random act of artificially-intelligent terrorism), and likewise you probably don't want to engage in piracy in a system with AI who are generally unstructured, as there's a good chance you'll end up on the wrong side of some good old fashioned vigilante justice.
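A minimal sketch of the regional part of that, assuming traits on the same 0-10 scale as above; the conformity knob and the blending formula are illustrative guesses at how "rules of the land" values could nudge an individual NPC, not anything from the actual game:

```python
def effective_traits(personal, regional, conformity=0.3):
    """Blend an NPC's own traits with the regional norm.

    conformity is a hypothetical 0..1 knob: 0 means the NPC ignores local
    norms entirely, 1 means it behaves exactly like the regional average.
    """
    return {
        trait: (1.0 - conformity) * personal[trait] + conformity * regional[trait]
        for trait in personal
    }

# A greedy loner settling in a highly explorative, sociable region:
npc = {"A": 7, "E": 2, "G": 9, "I": 6, "S": 2}
region = {"A": 3, "E": 9, "G": 4, "I": 5, "S": 6}
print(effective_traits(npc, region))  # E rises from 2 to ~4.1, nudging it toward piracy
```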

Re: AI Morality

#18
Grumblesaur wrote:I think it's sort of an extension of the principle of the path of least resistance, where the NPC will go for something that's in its "comfort zone" based on its personality. But that leaves no room for an inherent regard (or disregard) for socially acceptable behavior, which is itself unrealistic: the environments that people are in during their development certainly impress upon them a certain range of "socially acceptable" behaviors, as well as a particular mindset toward authority and legality. So I think adding Morality traits to the AEGIS spectra might be a good idea, despite the fact that it kind of messes with the handy acronym.
If you replace "Intellectual" with "Professorly" or "Professorial", you get PEGASUS.
  • Professorly
  • Explorative
  • Greedy
  • Aggressive
  • Sociable
  • Unstructured
  • Sociopathic
Professorly is the best synonym I could come up with after an hour of thinking/searching. :(

Sociopathic is a bit weird alongside Sociable, and it would be nice to avoid negative prefixes for these things. It would be nicer to have Empathetic and Structured instead, but that'll ruin the acronym again.

Re: AI Morality

#19
ThymineC wrote:Professorly is the best synonym I could come up with after an hour of thinking/searching. :(
Just off the top of my head:

perspicacious
precocious
perceptive
proficient

ThymineC wrote:Sociopathic is a bit weird alongside Sociable, and it would be nice to avoid negative prefixes for these things. It would be nicer to have Empathetic and Structured instead, but that'll ruin the acronym again.
Sociopathy could change to Sentimentality, maybe, or Simpatico, Sympathy or Sensitivity. Puts the scale back to measuring positive attributes with higher numbers at least.

Re: AI Morality

#20
Just_Ice_au wrote:
ThymineC wrote:Professorly is the best synonym I could come up with after an hour of thinking/searching. :(
Just off the top of my head:

perspicacious
precocious
perceptive
proficient
Yeah, I've considered all of those as well, but none of them are really synonyms of "intellectual" or "academic". I've been thinking of switching other stuff about to make it work - can you think of a synonym of "aggressive" that begins with "p", or of "empathetic" that begins with "u"? For the latter, I'm going with "understanding" for now, but I've got nothing for "aggressive" with "p".
Just_Ice_Au wrote:
ThymineC wrote:Sociopathic is a bit weird alongside Sociable, and it would be nice to avoid negative prefixes for these things. It would be nicer to have Empathetic and Structured instead, but that'll ruin the acronym again.
Sociopathy could change to Sentimentality, maybe, or Simpatico, Sympathy or Sensitivity. Puts the scale back to measuring positive attributes with higher numbers at least.
What's wrong with just Empathy? That's the positive aspect of the spectrum that Grumblesaur's already decided on. Although if you can give me a good synonym of "aggressive" beginning with "p" then Understanding might work there. :think:

Re: AI Morality

#21
How's this actually?
  • Personable (formerly Sociable)
  • Explorative
  • Greedy
  • Aggressive
  • Smart (formerly Intellectual)
  • Understanding (formerly Empathic)
  • Structured
This keeps the original AEGIS-traits together (with their modified names and a different ordering) with the morality traits Grumblesaur proposed appended as the last two. Additionally, all of these are "positive" traits in that there's no negative prefixing involved.

Re: AI Morality

#23
TanC wrote:I think we should include a bit of AEGIS with PEGASUS with a little of AEGISUS thrown in for good measure. :crazy:
May I add yet another acronym to this fracas? :)

To go back to what Josh originally said about NPC traits, these are the five things he suggested that NPCs would need to decide about "in the NPC's evaluation of the world" (followed by the one side of his AEGIS axes related to each of those five kinds of actions):
  • Death (aggressive)
  • Information (explorative)
  • Money (greedy)
  • Technology (intellectual)
  • Relationships (sociable)
It's been a month now since he posted that, so his plans for world-gameplay actions might have changed since then. But assuming they're still valid, I hope it's OK if I offer a modified version of what I described in this post. I'd now like to suggest the following six axes for personality traits:
  • Feeling (prefers to understand the meaning of things) / Thinking (prefers to know how things work)
  • Reserved (prefers to have a few close allies) / Sociable (prefers to have many acquaintances)
  • Aggressive (prefers quick action) / Cautious (prefers to assess situations before acting)
  • Creative (prefers to add new resources) / Conserving (prefers to minimize usage of existing resources)
  • Acquisitive (prefers to keep the profits of winning) / Charitable (prefers to help others do well)
  • Sociopathic (cares only about themselves) / Empathic (cares about other people)
AEGIS and PEGASUS are nice acronyms, to which I offer FRACAS. :)

These six trait axes apply in the following ways to (my versions of) Josh's five action areas plus the proposed "morality" action area:
  • Research (Technology)
    • Feeling NPCs trust their emotions for making decisions, not cold technology
    • Thinking NPCs prefer to solve problems through new inventions
  • Relationships [Factions?]
    • Reserved NPCs like to operate independently of other NPCs
    • Sociable NPCs are comfortable belonging to organizations
  • Combat (Death)
    • Aggressive NPCs are willing to take survival risks for immediate gain
      • Primary: tends to attack directly; ignores damage to their assets
      • Secondary: willing to take big risks for potentially big financial gains
    • Cautious NPCs avoid survival risks and generate plans before acting
      • Primary: tends to use local environment to snipe; runs if damaged
      • Secondary: prefers to build capital slowly through investment in "sure things"
  • Resource Management (Information)
    • Creative NPCs seek to expand resources through discovery of the unknown
      • Primary: exploration of unknown space
      • Secondary: prefers researching new base technologies
    • Conserving NPCs seek to preserve scarce resources by emphasizing what's known
      • Primary: defense of existing assets (e.g., ships, territory, structures, people)
      • Secondary: prefers to research modifiers to existing base technologies
  • Commerce (Money)
    • Acquisitive NPCs seek to maximize their financial gains
      • Primary: prefers actions that improve production capabilities
      • Secondary: explores in order to find new resources to exploit
    • Charitable NPCs seek to do well enough by helping others do well
      • Primary: prefers actions that improve trading capabilities
      • Secondary: explores to make connections to new civilizations
  • Morality
    • Sociopathic NPCs don't consider the reactions of other characters when making choices
    • Empathic NPCs carefully weigh the reactions of other characters when making choices
The above formulation does a few things I believe are useful:

1. It changes some of the words from Josh's AEGIS terms -- and their opposites -- so that none of them are clearly negative traits ("greedy," "close-minded," "primitive," etc.). Instead, the words I suggest for both sides of each axis are all generally positive. They only become negative when taken to extremes.

To explain this, suppose each axis goes from -5 to 0 to +5. A -3 on the Aggressive/Cautious axis would denote an NPC who is willing to take some reasonable chances for a potentially valuable reward, while a -5 would indicate an NPC who has no sense of self-preservation and will almost always choose the most aggressive option available.

By generating NPCs according to a bimodal curve, with peaks around -2.5 and +2.5, you get a universe where most characters are pretty reasonable -- they have distinct interests but aren't raving lunatics. In other words, most NPCs you'll encounter aren't all "0" on every axis (boring!), but they aren't a -5 or +5, either. (Although some will be. :twisted: ) Most NPCs will have normal-strength preferences, allowing the universe to function but still permitting some outliers to keep things interesting.

As usual, it would be great if this curve is something players could tweak at world-generation time. I'd like to be able to select:
  • a "boringverse" derived from a 0-centered bell curve
  • a bimodal distribution with peaks around -2.5 and +2.5
  • a flat line (no curve), meaning that all possibilities are equally likely
  • a "crazyverse" from an inverse bell curve with peaks at -5 and +5
(Note: "Sociopathic/Empathic" is the exception to the attempt above to use positive terms to describe each trait axis according to the "normal"-level preferences on both sides of the axis. The Sociopathic NPC is a -5 to the Empathic NPC's +5. The -2.5 and +2.5 positions on this axis might be something like "Independent" and "Caring" respectively.)

2. I believe the "Structured / Unstructured" preference is actually pretty important for real people. (My Gamasutra article on gamer play styles explains why.)

But I don't think it's as useful for Limit Theory. It seems like what Josh is after in the Relationships area is just a way of deciding whether an NPC is likely to join a faction or to prefer to operate solo. I think the Reserved / Sociable axis covers that pretty well.

3. Defining morality as "considering the reactions of others" reduces the performance cost of letting NPCs test the reactions of other NPCs to their planned actions.

Josh feels it would be too expensive for every NPC to recursively assess the reactions of all affected NPCs to their planned actions. Defining morality as "considering the reactions of others" instantly reduces the number of NPCs who would do this kind of assessment, as only the more Empathic NPCs would need to check reactions. Average NPCs would only check one level deep among their closer allies, while Sociopathic NPCs wouldn't need any reaction checking at all.

If any amount of testing what another character might think is just always going to be unacceptably expensive, then morality is still useful if it's implemented as a number indicating how much the NPC cares about what happens to other characters. To decide whether to take some action, that static number could be multiplied by the likely damage to another character if the planned action is carried out successfully -- a high result reduces the value of that action in the NPC's planning. Some algorithm for guessing at the "likely damage" of an action will be needed, but I believe that's mandatory if indirectly peeking into another character's brain is off-limits.
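A minimal sketch of that cheaper, static version; the 0-1 empathy rescaling and the harm estimate are hypothetical stand-ins for whatever "likely damage" heuristic the game would actually use:

```python
def moral_action_value(base_value, empathy, estimated_harm):
    """Score a planned action for one NPC without polling other NPCs.

    base_value     - how useful the action is to the NPC itself
    empathy        - the Sociopathic/Empathic axis rescaled to 0.0 .. 1.0
                     (0.0 = fully Sociopathic, 1.0 = fully Empathic)
    estimated_harm - guessed damage to other characters if the action succeeds,
                     from whatever "likely damage" heuristic is available
    """
    return base_value - empathy * estimated_harm

print(moral_action_value(10.0, 0.0, 8.0))  # fully Sociopathic: 10.0, harm ignored
print(moral_action_value(10.0, 0.9, 8.0))  # strongly Empathic: 2.8, heavily discounted
```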

4. Three of the personality trait axes are specific to one (each) of the gameplay action areas of Limit Theory, while three have a primary effect on one area of play and a secondary effect on another area.

Research, Relationships and Morality are simple gameplay areas; their traits apply only to those areas. You either like new tech or you don't; you like to be around people or you don't; and you care what other people think or you don't.

Combat, Resource Management and Commerce are more complex. The trait axes for these gameplay action areas do apply directly to those areas, but they also apply in a secondary way to three other areas. Specifically:
  • Combat preferences have a secondary effect on Commerce (how money gets made)
  • Resource Management preferences have a secondary effect on Research (base techs or modifiers)
  • Commerce preferences have a secondary effect on Resource Management (explore for resources or to meet new people)
The value of this is twofold. First, it provides a well-defined way to calculate an NPC's preferences for certain specializations within a main gameplay area. For example, the preference for Creative versus Conserving is mostly about Resource Management -- it determines whether the NPC prefers to explore to find new stuff or make the most of what they've already got. But that preference also pretty neatly (I think) explains whether an NPC for whom Research is the most important action area would rather look for new base technologies or expand on the base techs they've already unlocked.
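One way to express that primary/secondary structure as data (a minimal sketch; the axis names match the list above, but the 0.5 secondary weight and the area_preferences helper are made-up illustrations):

```python
# Hypothetical mapping from each FRACAS axis to the action areas it influences,
# with a guessed weight of 0.5 for the secondary effects described above.
AXIS_EFFECTS = {
    "Feeling/Thinking":       [("Research", 1.0)],
    "Reserved/Sociable":      [("Relationships", 1.0)],
    "Sociopathic/Empathic":   [("Morality", 1.0)],
    "Aggressive/Cautious":    [("Combat", 1.0), ("Commerce", 0.5)],
    "Creative/Conserving":    [("Resource Management", 1.0), ("Research", 0.5)],
    "Acquisitive/Charitable": [("Commerce", 1.0), ("Resource Management", 0.5)],
}

def area_preferences(traits):
    """Sum each axis's contribution, weighted by how strong the preference is."""
    prefs = {}
    for axis, effects in AXIS_EFFECTS.items():
        for area, weight in effects:
            prefs[area] = prefs.get(area, 0.0) + weight * abs(traits[axis])
    return prefs
```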

The second virtue of this minor complexification is that it makes NPCs not quite as one-dimensional. People don't usually focus monomaniacally on just one goal. By letting trait preferences have effects on more than one gameplay action area (such as Research being affected by both Feeling/Thinking and Creative/Conserving), most NPCs become a little more plausible as people and a little less easy to manipulate.

I think this feels right for a game like Limit Theory. Others, though, may feel that being able to easily manipulate NPCs is an important gameplay element. If something like this were to be implemented, it would need to be tested to make sure it's fun for the majority of the intended players of Limit Theory.

Re: AI Morality

#24
Flatfingers wrote:
TanC wrote:I think we should include a bit of AEGIS with PEGASUS with a little of AEGISUS thrown in for good measure. :crazy:
May I add yet another acronym to this fracas? :)
<snip>
That sounds like a good approach, but I have a question - since you're no longer defining spectrums to be between a wholly disadvantageous trait (e.g. dumb) and a wholly advantageous trait (e.g. smart), how do we represent things like simple, stupid NPCs in your system, or NPCs that are just plain inferior to other NPCs in some respect?

Re: AI Morality

#25
ThymineC wrote:since you're no longer defining spectrums to be between a wholly disadvantageous trait (e.g. dumb) and a wholly advantageous trait (e.g. smart), how do we represent things like simple, stupid NPCs in your system, or NPCs that are just plain inferior to other NPCs in some respect?
To which I reply... why would you want that? ;)

In other words, what gameplay value is provided by NPCs whose main feature is a complete lack of any interesting personality trait? I'm not seeing one, but I'm sincerely open to the possibility that Limit Theory needs the equivalent of cannon fodder.

To look at this from a slightly different angle, all the stuff I said above isn't about getting rid of "bad" traits. It's still possible to have NPCs who are really nasty pieces of work. A "greedy, reckless, egg-headed, overly-sensitive, party-animal reactionary" is absolutely a person who can exist in the system I suggested. Not likely, but completely possible, with all the disadvantages that come with such a severely excessive personality.

The point of framing the "bad" traits as excessive amounts of an otherwise useful personal preference is twofold. One, it fixes (what I see as) the problem of dull NPCs whose personality trait is that they don't have one. ("Stupid" as the zero-point on the "Intellectual" spectrum, for example.)

And two, making both mid-points generally positive traits, with the "bad" versions being those same traits taken to extremes, helps to create a universe full of NPCs who follow the sensible rule in meaningful literature that no one ever sees themselves as the villain. People always think they have perfectly valid reasons for the things they do. The trait system I'm suggesting allows NPCs to give what are -- to them -- entirely reasonable explanations for why they needed to wipe out that entire colony or ram most of their fleet into a space station.

Having said all this, I'll note that it's conceptual. I think it would create a more enjoyable universe of NPCs than straight-up "bad guys," but there may be technical challenges that make it unfeasible, or other parts of the game designed so far that make it undesirable. As always, if someone has a better idea, I'll happily endorse it.

Re: AI Morality

#26
Flatfingers wrote:
ThymineC wrote:since you're no longer defining spectrums to be between a wholly disadvantageous trait (e.g. dumb) and a wholly advantageous trait (e.g. smart), how do we represent things like simple, stupid NPCs in your system, or NPCs that are just plain inferior to other NPCs in some respect?
To which I reply... why would you want that? ;)

In other words, what gameplay value is provided by NPCs whose main feature is a complete lack of any interesting personality trait? I'm not seeing one, but I'm sincerely open to the possibility that Limit Theory needs the equivalent of cannon fodder.
That's just it. To promote realism and immersion, you'd likely need a fair proportion of NPCs that were just plain, boring sods. Well, not boring, but just average. Typical. A world in which everyone were equally intelligent but just thought in different ways would seem too artificial, in my opinion. Some people are just plain dumb, and others are undeniably brilliant. It's more than just a difference along the Thinking-Feeling spectrum that differentiates Albert Einstein from this guy.

Sure, under this system a lot of the NPCs you meet might not be as interesting as they otherwise would be, but it would also allow for the possibility of meeting very interesting NPCs as well: NPCs that are truly exceptional, and who stand out all the more in contrast with the typical NPC pleb. A world in which every single NPC I meet is peculiar and zany in some way doesn't appeal to me as much as a world in which there were a bunch of unexceptional plebs, a fair number of peculiar and zany people, and a small number of truly brilliant NPCs, good or bad.

What I suggest is to keep the spectrums that you've proposed as they are, since you explain the benefits of them pretty well. But along with them, add a few extra ones orthogonal to these that help differentiate between run-of-the-mill NPCs and exceptional NPCs.

For instance, along with having the Thinking-Feeling spectrum, include an Intelligence scale as well. With this kind of modification, you still retain the benefits you list: "badness" would involve being at one extreme or the other of your spectrums, with "mediocrity" being determined by the orthogonal spectrum. Likewise, you'd still have NPCs able to justify their motives based on their position along your spectrums, e.g. the Thinking-Feeling spectrum, and this would seem more natural. No one would see themselves as the villain. For example, on the Myers-Briggs spectrum I'm more "Thinking"-oriented than "Feeling"-oriented, and I'd kill every living thing on the planet if I could; I would regard this as an extremely moral action and wouldn't think of myself as a villain at all, whereas other people have said that this is a villain-like thought process. It's not that either of us are "bad", it's just that we think differently and they're all misguided. This is the kind of behaviour you'd want to see out of NPCs in Limit Theory, and this is the kind of behaviour I'd hope to retain with these modifications to your system. All that would change is that some NPCs would be exceptional and many of them would be run-of-the-mill.

So, to briefly summarise: I'm not proposing any "zero-point" along your existing spectrums; I'm proposing orthogonal spectrums that would act more like dampeners or amplifiers. Having a zero value along the mediocrity/exceptionality spectrum would dampen the corresponding personality traits down to be completely boring, but other than that I think it would work nicely.
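A minimal sketch of that dampener/amplifier idea, reusing the -5..+5 axes from Flatfingers' proposal; the exceptionality range and the clamping are made up for illustration:

```python
def expressed_traits(traits, exceptionality):
    """Scale an NPC's trait axes by an orthogonal "exceptionality" value.

    exceptionality is a hypothetical 0.0 .. 2.0 scalar: 0 flattens every
    preference to the boring midpoint, 1 leaves the rolled traits as they
    are, and values above 1 amplify them (clamped back into -5..+5).
    """
    return {axis: max(-5.0, min(5.0, value * exceptionality))
            for axis, value in traits.items()}

npc = {"Feeling/Thinking": -3.0, "Aggressive/Cautious": 2.5}
print(expressed_traits(npc, 0.2))  # a run-of-the-mill pleb:   {-0.6, 0.5}
print(expressed_traits(npc, 1.8))  # an exceptional character: {-5.0, 4.5}
```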

Re: AI Morality

#27
O Ancient Thread, in your deep abyss of long slumber, I invoke thee! ARISE! :ghost:

With today's release of Development Video Update #20, we see that Josh's proposed AEGIS model of NPC (and now colony) personality has been replaced by this new model:

Aggressive
Creative
Explorative
Greedy
Intellectual
Lawless
Sociable

Firstly, I note that the first letters of these don't form a decent acronym. "SLAG ICE" isn't all that appealing, and it's the best of the lot. ;)

Secondly, and less silly, would there be any functional difference in how these facets of personality worked if they were cast less judgmentally, and as endpoints on a spectrum of innate motivations? Maybe something like this:

Creative <---|---> Conserving
Aggressive <---|---> Cautious
Reclusive <---|---> Explorative
Practical <---|---> Intellectual
Acquisitive <---|---> Charitable
Lawless <---|---> Lawful
Sociable <---|---> Independent

I like this form better because all of these two-valued traits, and all their combinations, are things that people could accept as descriptions of themselves. This version, with descriptive terms at both ends of each trait, is also IMO easier to understand and use than one where the low end of a singly-named scale is just a lack of something.

To address the obvious objections: "Greedy" (which no one thinks they are) becomes "Acquisitive"; the other side of Intellectual becomes Practical (since "stupid" is not very interesting for NPCs at the low end of an Intellectual scale); and Sociable and Reclusive are not incompatible -- people can easily prefer to stay home with family and friends instead of going off on risky solo explorations.

Oh, and this can now be acronymized to CARPALS. :)
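A minimal sketch of what naming both ends of each axis buys you in practice; the Axis class, the ±1 "balanced" band, and the CARPALS list are illustrative, not from the actual game:

```python
from dataclasses import dataclass

@dataclass
class Axis:
    low: str   # label for the -5 end of the spectrum
    high: str  # label for the +5 end of the spectrum

    def describe(self, value):
        """Return the endpoint label the value leans toward, or 'balanced'."""
        if value <= -1.0:
            return self.low
        if value >= 1.0:
            return self.high
        return "balanced"

CARPALS = [Axis("Creative", "Conserving"), Axis("Aggressive", "Cautious"),
           Axis("Reclusive", "Explorative"), Axis("Practical", "Intellectual"),
           Axis("Acquisitive", "Charitable"), Axis("Lawless", "Lawful"),
           Axis("Sociable", "Independent")]

print(CARPALS[3].describe(4.0))   # "Intellectual"
print(CARPALS[5].describe(-4.5))  # "Lawless"
```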

Finally, can we infer that, because these traits look pretty human-normal, there are no alien lifeforms in LT?

Re: AI Morality

#28
I am wondering how easy it would be to put in a new global culture/behavior variable... or take one out :think:



Also, Lawful vs lawless makes me a bit uneasy... I certainly hope there will not be any objective "These are the laws of the universe, follow them or everyone will hate you" nonsense... I want to be a freedom fighter to one group and a terrorist/war criminal to another, a good businessman to my peers but a fatcat to my employees. When I blow up the deathstar, I want it to make me absolutely hated by some and loved by others, neither group is right, neither group is wrong. Morality should be relative.

Re: AI Morality

#29
Hyperion wrote:I am wondering how easy it would be to put in a new global culture/behavior variable... or take one out :think:



Also, Lawful vs lawless makes me a bit uneasy... I certainly hope there will not be any objective "These are the laws of the universe, follow them or everyone will hate you" nonsense... I want to be a freedom fighter to one group and a terrorist/war criminal to another, a good businessman to my peers but a fatcat to my employees. When I blow up the deathstar, I want it to make me absolutely hated by some and loved by others, neither group is right, neither group is wrong. Morality should be relative.
Personally, I feel as though the Lawless level should be decided by the nearby empire. :ghost:

Re: AI Morality

#30
Thanks for the resurrection! This is my first post in a while; I haven't been very active on the forums in the past months except to follow the devlogs and updates. I graduated from college, moved to a new city, and started a new job recently. Life has been busy!

I would suggest the addition of a new trait:

Risk Tolerance

This would act as a modifier of the AI's defined traits. For instance, we would assume that the base traits determine what actions an AI is likely to take. An AI that is more creative than anything else will likely undertake research as its chosen field.

If we modify that with its risk tolerance, we would see something like this:
High creativity, high risk: These are the AI players who are likely to explore new fields of technology with lower chances of success. They will also be more likely to research those technologies for which there aren't already markets, in the hope of building a new market.
High creativity, low risk: These are the AI players who are likely to continue to build on already existing technologies, attempting to refine them. They will also look for defined markets in which they can research new iterations of high-demand technologies.

Take the secondary trait risk tolerance and apply it to any of the suggested primary traits, or any of the traits Josh has defined, and you can see how it will allow for all of the actions we want AI to be able to take. I would suggest that all the AI behaviors we want can be achieved using fewer than seven total traits, if we have primary traits and this single secondary trait to modify them.
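A minimal sketch of risk tolerance acting as that secondary modifier, using the high/low-creativity research example above; the 0-1 scales, the scoring formula, and the project tuples are all made up for illustration:

```python
def pick_research_project(creativity, risk_tolerance, projects):
    """Pick a project by weighing novelty against its chance of success.

    creativity and risk_tolerance are hypothetical 0..1 values; each project
    is (name, novelty, success_chance). High risk tolerance shrinks the
    penalty for a low success chance, so risky-but-novel work floats up.
    """
    def score(project):
        _, novelty, success_chance = project
        risk_penalty = (1.0 - success_chance) * (1.0 - risk_tolerance)
        return creativity * novelty - risk_penalty
    return max(projects, key=score)[0]

projects = [("brand-new field of technology", 0.9, 0.3),
            ("refinement of existing technology", 0.4, 0.9)]
print(pick_research_project(0.8, 0.9, projects))  # high risk -> the new field
print(pick_research_project(0.8, 0.1, projects))  # low risk  -> the refinement
```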
