I have been thinking about generalized deception (and its detection) for a while, and about ways of integrating it into LT. I had in fact written a 7,000-word treatise on it that no one would want to read; it still exists in its partially completed second-draft form, but I realized I was only suggesting a few calculated metrics and a few concepts. Sparing y'all that painful read, I figured I would just present the concepts in their stripped-down form here. Still long, but much denser.
Deception happens for 5 reasons:
Avoiding harm/danger, gaining a reward, protecting others, inflicting harm, and improving/maintaining relationships. In every case of deception, a player should be trying to accomplish one of these tasks in as economical and safe a way as possible.
Not all false information is deception, and players can be given a trust metric describing how likely they are to trust a piece of information. This metric grows with "clean" interactions and declines with detection of contradicting information.
[Trust: 0-1; grows by 0.001 per clean interaction; declines by 0.001 per contradiction encountered]
Contradictions must actually be encountered to cause the decline; a lie never detected has no effect. This allows many small lies without bringing down everyone's trust level.
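The trust rule above is simple enough to sketch directly. A minimal Python sketch, with the function name and signature assumed for illustration:

```python
def update_trust(trust, clean_interactions=0, contradictions=0, step=0.001):
    """Adjust a 0-1 trust value: +step per clean interaction,
    -step per contradiction actually detected (undetected lies do nothing)."""
    trust += step * clean_interactions
    trust -= step * contradictions
    return max(0.0, min(1.0, trust))  # clamp to the 0-1 range
```

Note that only detected contradictions are ever passed in; a lie that no one catches simply never reaches this function.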
Deception requires knowledge of when and how to employ a given tactic. In reality this requires a theory of mind, but since in LT all minds are ultimately known, rather than having to guess what the other player is thinking, a player can just read the other's mind, with more or less accuracy depending primarily on intelligence (how much brain they have) and on how closely their personalities line up; the further apart their personalities, the less likely they will understand each other well enough to reliably pull off a successful deception. In the case of reading the human player's mind, they can read your personality and expect decision-making results similar to those of other players with your personality, capabilities, and known assets. I also think that rather than each player doing this for themselves, they could have varying levels of access to the disembodied superbrain of LT itself.
Deception can be through word or action; this assumes there will be ship-to-ship communications as well as more active ways of fooling others. These boil down to
omissions (dissimulations),
commissions (simulations), and mixed messages. Omissions are when a player hides some information in their communications. Dissimulation is when a player hides some detectable property or value (via value and property suppressors), or lures a target into complacency by working in an intentionally suboptimal way only to strike when the target is exposed. Commissions are when a player creates a false piece of information in their communications. Simulations are when a player creates or amplifies a detectable property (via value and property spoofers), or pretends to do one thing when their real intention is something else.
Examples of Active Simulation and Dissimulation
Active Simulation
- Mimicry: the spoofing of signals, behavior, and appearances to look like something else. This can be done by making yourself seem larger and stronger than you are, or smaller and weaker. Mimicry also opens up the possibility of false flags, such as flying the flag of a neutral party (which may or may not seriously anger said third party, and be more hassle than it's worth) or of the enemy themselves
- Either to hide among them, or to attack yourself and gain justification for going to war in the eyes of outside observers; not casus belli in the CK2/EU sense, but in the sense that other parties will care less when you make a counterstrike. Attacking yourself may also lower the opinions of those third parties toward the faction whose banner you fly
- Spoofing properties of goods to artificially inflate their value, either to sell watered-down stock (sell low-grade Xium as mid-grade or high-grade Xium) or to produce counterfeits. Much like a signal spoofer, a property spoofer would give an object a temporary property, based on the quality of the spoofer, so that it appears to sensors as having a similar (presumably, but not necessarily, better) property for a period of time or until it is used.
- Of course, doing so not only angers the purchaser; selling counterfeit goods also angers the manufacturer (if there is one) of the real thing. These would be counteracted by point-of-purchase scanners; getting caught selling counterfeits may incline the purchaser to offer you even less than the goods are really worth, and since they now know you are a scammer, they may blackmail you or possibly ruin your reputation.
- Fabrication: the use of dummies. These are essentially signal transmitters that can rebroadcast any signal you have stored. My idea is that you could place these transmitters in any object that Josh determines can accept them (containers, drones, ships, wrecks, asteroids, etc.). On a received signal, these transmitters activate and broadcast until turned off or until the battery dies. With these dummies, you can appear in greater numbers and/or in more locations, which can trick an enemy into thinking they are surrounded, or can serve as a trap to lure unsuspecting victims.
- Feints: making an initial move that is different from what you are really going to do, or perhaps even just broadcasting that you are going to do one thing when you are actually going to do another. It is a simple misdirection aimed at anyone watching and listening to your actions.
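The property-spoofer idea above (a temporary apparent property, lasting a limited time or until the good is used) could look something like the following Python sketch. All names here are hypothetical, and the "10 seconds per quality point" duration is an invented placeholder for whatever balancing Josh would choose:

```python
class PropertySpoofer:
    """Hypothetical spoofer: overrides a good's apparent grade for a
    limited duration determined by the spoofer's quality."""

    def __init__(self, quality):
        self.quality = quality  # higher quality -> longer-lasting spoof

    def apply(self, good, fake_grade, now=0.0):
        # The spoof is temporary: it expires after quality * 10 time units
        # (placeholder rate), or implicitly whenever the good is consumed.
        good["apparent_grade"] = fake_grade
        good["spoof_expires"] = now + self.quality * 10


def sensed_grade(good, now=0.0):
    """What a point-of-purchase scanner reports, absent a spoof detector."""
    if now < good.get("spoof_expires", float("-inf")):
        return good["apparent_grade"]
    return good["true_grade"]
```

A point-of-purchase scanner with a spoof detector would then subtract its detection rating from the spoofer's quality before deciding whether to believe the apparent grade.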
Active Dissimulation
- Hide information signals by turning off your ID tag and becoming a generic "object of size x, moving at speed y, at distance z"
- Appear harmless by disguising yourself as something weak, or by hiding some or all of your weapons or allies. If weapon ratings are simple numbers, a detection suppressor could be used to hide the real strength of a weapon (a laser with strength 500 plus a suppressor of strength 300 appears to sensors as having a strength of 200). Of course, a sufficiently advanced sensor can see through a suppressor by a given amount as well.
- Appear less capable by performing suboptimally, waiting for the opponent to take greater risks, then striking with full force when their guard is down. This is primarily an AI dogfighting tactic, and the ability to perform more or less optimally depends on the dogfighting mechanics. However, this could also apply to industry or to higher-level AIs; so long as initial performance is intentionally poor with the express purpose of moving with full force later, it would apply... but it may not always be a wise tactic.
- Feign a retreat: before things turn really bad, run away, straight toward your waiting allies or a larger force that can overpower (or try to) the detachment sent to kill what looked like a much smaller force. Primarily a military tactic, but it could also be used in industry: let the opposition build the infrastructure while you prepare a hostile takeover.
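The suppressor arithmetic described in the "appear harmless" bullet above (laser 500, suppressor 300, apparent strength 200; advanced sensors counter suppression point for point) can be sketched as follows. Function and parameter names are assumptions for illustration:

```python
def apparent_strength(true_strength, suppressor=0, detector=0):
    """Strength a sensor reports: a suppressor hides up to its rating,
    and a counter-suppression detector cancels it point for point.
    Apparent strength never drops below zero, and detection never
    inflates a reading above the true value."""
    net_suppression = max(0, suppressor - detector)
    return max(0, true_strength - net_suppression)
```

So a strength-500 laser with a strength-300 suppressor reads as 200 to a plain sensor, as 300 to a sensor with a 100-point detector, and as its true 500 once the detector rating matches or exceeds the suppressor.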
Deceptions can be detected by validation, either by checking the information out yourself or by listening to a trusted source. Validation is the comparison of two values of a calculated metric I call "Credibility", written as
C = X(S + D)TI(1 + N - F)
X: Number of sources
S: Quantifiable sum of sensor data’s reliability (Better sensors are more trusted)
Reliability is a rate per second of observation value
D: Quantifiable metric for detecting false information (spoof detector/counter suppressor)
Spoofing, suppression, and detection thereof are objects with positive and negative values (e.g. a +500 spoofer meets a -300 detector = net 200 spoofing; when detection is greater than spoofing, net = 0)
T: Trust Value; 0-1
I: Integrity value; 0-1 (described below)
N: Number of clean interactions with individual (no contradictory information found)
F: Contradictory information flags, can be multiple per statement or broadcast
Example: known sleazy merchant that you haven’t had problems with before
C = 1(500 + 100)(0.72)(0.35)(1 + 16 - 9) = 1209.6, a fairly low number when compared to a trusted friend or a known honest man.
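The credibility formula and the sleazy-merchant example work out as follows; a direct Python transcription of the definitions above, with the function name assumed:

```python
def credibility(X, S, D, T, I, N, F):
    """C = X * (S + D) * T * I * (1 + N - F)

    X: number of sources
    S: summed sensor reliability    D: false-information detection metric
    T: trust (0-1)                  I: integrity (0-1)
    N: clean interactions           F: contradictory-information flags
    """
    return X * (S + D) * T * I * (1 + N - F)

# The sleazy merchant from the text: one source, sensors worth 500,
# detection worth 100, trust 0.72, integrity 0.35, 16 clean
# interactions, 9 contradiction flags.
merchant_c = credibility(1, 500, 100, 0.72, 0.35, 16, 9)  # ~1209.6
```

Comparing this against the credibility of your own observations (or of a trusted friend relaying the same claim) is what decides which version of the information wins.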
Validation of information can be expensive, but real lies open up the possibility of real detectives.
Integrity is twin to Trust: whereas Trust is how willing you are to accept information from others, Integrity is how trustworthy you are. It is calculated simply: whenever you have a good opportunity to deceive someone and you don't, you gain integrity; when you take the opportunity, you lose it. Whenever you expose someone else's deception, you gain even more integrity, as the gain is multiplied by 1 + the jeopardy you put yourself in for the exposure, as determined by the risk modelling above.
[Integrity: 0-1; deception lowers I by 0.001, honesty raises I by 0.001, exposure raises I by 0.001 × (1 + Jeopardy)]
Deceptions can more easily avoid detection when they contain truth. Bald-faced lies are inadvisable unless you are quite sure the other party won't find out. Mixing in truth to create half-truths, exaggerations, or an economical use of the truth will make detection harder. If a statement contains 10 pieces of information and all are lies, that's ten potential conflicts of information. It may be worth it, but future business could be much more profitable if you told only 2 or 3 lies and built up a higher credibility.
Consequences for being caught can vary; they can come in the form of reputation hits, a bounty on your head, or a fine. They should scale with how trusted you were and how grievous the betrayal of that trust, written as Q = Value of Lie × (1 + 0.X × Credibility), where X is a balancer number that can vary from place to place, simply increasing or decreasing the risk of lying. I am unsure how reputation and bounties are calculated, but this should work for monetary penalties.
The value of a lie can vary by situation: if trying to pass off counterfeit goods, it can be the difference between the stated value of the goods and their real value; if it is an omission of collateral damage, it can be the value of that damage; etc.
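Putting the penalty formula and the counterfeit example together; a Python sketch with assumed names, where `balancer` stands in for the per-locale "0.X" knob:

```python
def penalty(value_of_lie, credibility, balancer=0.1):
    """Q = value_of_lie * (1 + balancer * credibility).
    The balancer (the '0.X' number) is set per locale to raise or
    lower the local risk of lying."""
    return value_of_lie * (1 + balancer * credibility)


def counterfeit_lie_value(stated_value, real_value):
    """For counterfeit goods, the lie's value is the claimed markup."""
    return stated_value - real_value
```

So a highly credible player caught in even a modest lie pays far more than an already distrusted one, which is exactly the "betrayal of trust" scaling described above.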
Thoughts?
Challenging your assumptions is good for your health, good for your business, and good for your future. Stay skeptical but never undervalue the importance of a new and unfamiliar perspective.
Imagination Fertilizer
Beauty may not save the world, but it's the only thing that can