I'm feeling pretty fired up after overcoming a few technical hurdles back in Graph Theory of Social Networks. In the process of thinking things out in that post and Back to Square one with Clay-stage, I have written some scripts to test out what an interaction system will need. I currently have a script that takes 20 simulated men and 20 simulated women and walks them through the process of meeting each other and becoming friends, enemies, or lovers based on random interactions. The code is pretty long, but you can read it here. I also provide a sample output.
If you want to run it on your own, snag a copy of my Clay library distribution.
There's nothing overly explicit in the code, but it's not exactly politically correct. It does cover things like a simplified handling of heterosexuality, homosexuality, bisexuality, and asexuality. In the model, sexuality is simply an attribute set at the agent's creation, and all it does is select who that person is sexually attracted to (if anyone).
For a first effort, it's somewhat satisfying. It needs a lot of work, but it does show that I can put a bunch of agents in a simulated room and they will plausibly interact. What I'm doing currently with random number generators could be replaced with more complex rules that produce results indistinguishable from random numbers, but I digress.
This script is using an early concept of my relationship system. One data structure/object represents both sides of a relationship. If you can't tell from the code, this is getting pretty messy.
The current model doesn't handle cases where a relationship is largely one-sided, or where the two sides of the relationship are tracking vastly different data. So for my next effort, each relationship will be two different structures/objects. I think I also need to add a collective memory store where all of the relationships can pool data about a person, organization, etc.
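The split can be sketched roughly like this (a minimal Python sketch, not my actual Clay code; the attribute names `affection` and `trust` are placeholders): each direction of a relationship is its own object, and a separate shared pool collects what everyone collectively knows about a subject.

```python
from dataclasses import dataclass, field

@dataclass
class Relationship:
    """One agent's view of another. The reverse direction is a
    separate object, so one-sided relationships fall out naturally."""
    owner: str
    subject: str
    affection: float = 0.0  # how the owner feels about the subject
    trust: float = 0.0

@dataclass
class SharedMemory:
    """Collective pool where all relationships involving a subject
    can deposit what is commonly known about that person or group."""
    subject: str
    facts: dict = field(default_factory=dict)

# A adores B; B barely registers A. Two structures, two stories.
a_to_b = Relationship("A", "B", affection=0.9, trust=0.7)
b_to_a = Relationship("B", "A", affection=0.1, trust=0.2)
```

With one object per direction, the two sides are free to track entirely different attributes without stepping on each other.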
While this may seem like an over-complication, remember that people interact on more than one level at a time, and the rules of these different levels lead to internal conflict that more or less defines our human condition. Two people may know one another from a circle of friends who would not approve of them having a relationship. The two would act differently in the presence of that group.
In this case, A and B are in the early stages of dating. A, being the supervisor of the department they all work in, wants to ensure that C and D are kept in the dark until things become serious, so they don't suspect her of favoring B because they are schlepping. A especially wants to keep things from C because, despite A being the effective head of the department, C has been gunning for her job and would use any tool at his disposal to create a scandal. But, of course, B and D are good friends, so eventually D finds out just through normal social interaction. D and C are classmates... but D remembers enough of C's antics to know that he's an asshole, really prefers A in charge, and decides on their own to keep A and B's secret.
This, of course, now brings up the problem of agents needing to know what other people know. I ran into a problem along these lines on a research project at work years ago. The ultimate answer is for each agent to not only track its own state, but also maintain a copy of what it has learned about every other agent.
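The idea can be sketched as follows (a hypothetical Python shape, assuming facts are simple strings): each agent carries its own fact set plus a per-agent model of what it believes everyone else knows.

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.facts = set()   # what I actually know
        self.models = {}     # other agent's name -> what I believe THEY know

    def learn(self, fact):
        self.facts.add(fact)

    def observe_learning(self, other, fact):
        """I saw or heard `other` learn this; update my model of them."""
        self.models.setdefault(other, set()).add(fact)

    def believes_knows(self, other, fact):
        return fact in self.models.get(other, set())

# D knows the secret, but models C as ignorant of it,
# so D can deliberately choose to keep quiet around C.
d = Agent("D")
d.learn("A and B are dating")
```

Note the model is only a belief: what D thinks C knows can be wrong, which is exactly the gap that lies and secrets live in.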
Now, full disclosure, the project at work was much simpler. To make the output as repeatable as we could, we eliminated the idea that people would lie or equivocate. One agent needed to know what another agent knew so it could generate a message to pass discovered information up the chain of command. A seaman going from one compartment to another may discover smoke. They would realize "hey, smoke is important." Having an important bit of data in their mind would generate a behavior to communicate that fact to their superior. And when their superior got that fact, they would pass it on to their superior (or to wherever that agent thinks someone would act on that data). Once the information was delivered, the agent would mark off "I communicated this fact" and resume whatever else they were doing. Some facts were important enough to drop everything and report; others were "mention it when you see this person." And I will grant you it was mostly theater so we could drive communication in our model. But they were important first steps.
For Iliad-07, I need a system that can handle lies and equivocation. Thus, I need agents to not only model what other people don't know, but also model what the other person's reaction would be to learning it, with some sort of metric to let them decide whether that reaction is a good thing or a bad thing.
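A crude version of that metric might look like this (a sketch with made-up tag names, not a committed design): score the listener's predicted reaction from the speaker's model of them, then weigh it against the speaker's own stake.

```python
def predicted_reaction(listener_model, fact_tags):
    """Crude valence: sum of how the listener (as the speaker models
    them) scores each of the fact's tags. Positive = welcome news."""
    return sum(listener_model.get(tag, 0.0) for tag in fact_tags)

def should_share(my_stake, reaction):
    # Share only if both I and (my model of) the listener come out ahead.
    return my_stake > 0 and reaction > 0

# D's model of C: C dislikes A but loves a scandal.
c_model = {"about:A": -0.5, "scandal": 0.8}
fact_tags = ["about:A", "scandal"]
reaction = predicted_reaction(c_model, fact_tags)
# C would actually enjoy the news (reaction > 0), but D's own stake in
# spreading it is negative, so D keeps the secret anyway.
```

Even this toy version captures the office example: the decision turns on the speaker's stake, not just the listener's appetite.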
For my project at work we were using C and data structures, eventually logging data to SQL for analysis. I'm thinking for stage we should do the opposite: log things in SQL, then spawn a few objects to evaluate the effect of that interaction, write new data to SQL, rinse, and repeat. This means that instead of objects that need to track the entire state of the model all of the time, I mainly have a handful of objects centered around one individual agent running at any given time.
The workflow will be like a board game. Each round, an agent gets a turn. The program loads that agent's data, and may load a few other agents that are on that agent's mind. That agent works through all of the interactions that have involved it during the previous round. In the process, it modifies state, generates interactions, and all of that gets written back into the database.
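A minimal sketch of that turn loop, using SQLite and an invented two-table schema (`agent_state`, `interactions`) purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE agent_state  (name TEXT PRIMARY KEY, mood REAL);
CREATE TABLE interactions (round INT, sender TEXT, receiver TEXT, payload TEXT);
INSERT INTO agent_state VALUES ('A', 0.0), ('B', 0.0);
INSERT INTO interactions VALUES (1, 'B', 'A', 'compliment');
""")

def take_turn(conn, agent, rnd):
    """Load one agent, process interactions addressed to it from the
    previous round, modify state, and write everything back."""
    rows = conn.execute(
        "SELECT sender, payload FROM interactions WHERE receiver=? AND round=?",
        (agent, rnd - 1)).fetchall()
    for sender, payload in rows:
        if payload == "compliment":
            conn.execute("UPDATE agent_state SET mood = mood + 0.1 WHERE name=?",
                         (agent,))
            conn.execute("INSERT INTO interactions VALUES (?,?,?,?)",
                         (rnd, agent, sender, "thanks"))  # reply lands next round
    conn.commit()

take_turn(conn, "A", 2)
```

Only the agent taking its turn is in memory; everything else sleeps in the database until someone interacts with it.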
Sometimes an agent is too distracted by a more pressing concern to process other interactions. Depending on how an interaction was sent, it is either ignored or put into a queue to be processed later. A person screaming "LOOK OUT" is either immediately recognized, or not. Written correspondence may take some time to arrive, can be stored for later processing, and the reply can also take some time to arrive. We also have concepts like Fridge Logic and Fridge Horror, where catching up on mental notes can cause you to reevaluate the world.
In the end... the only things that are constant are facts. We just need a way to express who knows which facts, and how. Let's say that we pick a fact and we want to spread it. Say... someone's favorite color.
Someone's favorite color is a nice squishy fact. Some people have one. Some people don't. Some people will say their favorite color is X, but their entire wardrobe is Y. Yes, we have to leave room in our system for people to believe a fact about themselves that differs from what one would conclude from outside scrutiny.
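One way to leave that room (a sketch, with hypothetical source labels) is to never store "the" fact, only claims tagged with where they came from, so the self-report and the wardrobe can disagree side by side:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """A claim about a subject, tagged with its source, so a
    self-report can coexist with contradicting outside observation."""
    subject: str
    attribute: str
    value: str
    source: str  # 'self-report', 'observation', 'hearsay', ...

beliefs = [
    Belief("Pat", "favorite color", "blue", "self-report"),
    Belief("Pat", "favorite color", "green", "observation"),  # the wardrobe
]

def views(beliefs, subject, attribute):
    """All recorded values for one attribute, keyed by where they came from."""
    return {b.source: b.value for b in beliefs
            if b.subject == subject and b.attribute == attribute}
```

Which value an agent repeats then becomes a choice about which source it trusts, not a lookup.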
At the same time as we are developing this scheme, we do need to think about how we are going to store it. For my research project, the facts themselves were all shared: I would simply link each fact to the various agents when they observed the fact or the fact was communicated to them. In that project, veracity was always 100% for a fact. We did have a knob for entropy: a fact with high entropy would trigger a "go communicate" behavior, as well as break ties for which facts to communicate first.
I'm using entropy in the sense of recording exactly how novel a fact may be. Now, with deception and equivocation, I also need a mechanism to calculate when an agent would withhold information. The simplest case is data with a low entropy score, i.e. data that isn't actually information. Stock prices are random. But if I bought a particular stock when its price was lower, suddenly that little number means something. Likewise if I bought a stock at one price and now it is lower.
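The stock example can be made concrete (a toy scoring function, with an arbitrary threshold, not a real valuation): the same price carries zero information for an agent with no position, and a signed gain/loss signal for one that holds the stock.

```python
def information_value(price, purchase_price=None):
    """A raw stock price is noise for an agent with no position; once
    the agent holds the stock, the same number becomes a gain/loss
    signal whose size and sign drive interest."""
    if purchase_price is None:
        return 0.0
    return (price - purchase_price) / purchase_price

def would_withhold(value, threshold=0.05):
    # Below the threshold, the "fact" isn't actually information worth passing on.
    return abs(value) < threshold
```

The point is that novelty is relative to the receiver's context, so the entropy score has to be computed per agent, not stored on the fact itself.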
In the end, whatever system I create has to reduce all of this to something simple enough for a computer to evaluate and make decisions on. And those sorts of systems generally run on numbers: positive numbers good, negative numbers bad, usually. Or, for simple cases, a "yes or no."
We need to know how interested the other person would be in a fact, and how interested we would be in that person learning that fact. If I got an A on my report card and Mom doesn't know yet, you bet I'd be chasing her down to tell her when I got home. If there was an F, you bet I would not be looking forward to it. If it was a subject I was having trouble with, and the F is just a logical outcome of the story so far, it's actually not that negative. If the reason for the F is that some interest of mine took up my time instead of study, and I know she will eliminate that distraction, I suddenly have a vested interest in keeping her in the dark.
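Those two interests can be folded into one number as a first cut (the weights here are made-up illustrations, not calibrated values):

```python
def telling_interest(their_interest, my_stake):
    """Net urgency to volunteer a fact: positive means chase the
    listener down to tell them; negative means keep them in the dark."""
    return their_interest + my_stake

# An A on the report card: Mom cares, and it reflects well on me.
good_news = telling_interest(1.0, 1.0)
# An expected F in a hard subject: Mom cares; my stake is only mildly negative.
expected_f = telling_interest(1.0, -0.2)
# An F caused by a distraction Mom would eliminate: my stake is strongly negative.
hidden_f = telling_interest(1.0, -2.0)
```

The report card scenarios all share the same listener interest; it's the speaker's stake that flips the sign.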
Decisions can also be multidimensional. One thing I learned as a child was that no matter how bad a bit of news I had to tell my Mom, if she found out that I was hiding something, or worse... lying, whatever punishment I got was going to be WAY worse than anything I would have gotten for speaking the truth.
But, that's not a universal experience. In some cultures, there are just some lies that everyone lives with. With some truths, the ability for another party to comprehend the fact may be in question. We could be concerned with how that other person would spread the fact once they know it. Or perhaps we suspect the other party might misuse the information, or use it for blackmail. Or perhaps the truth itself would utterly shatter whatever relationship you had with that person anyway. The calculus humans use to figure out what we tell to whom is messy and personal.
Part of my system is going to have to include a recipe for the rules of how different agents actually work, and, possibly, how OTHER agents think they work. We've all had that mentor who surprised us with an enlightened reaction to what we thought was going to be a disaster. We've all had that best friend who never spoke to us again after we told the wrong joke about a subject we had no idea was important.
As far as how this all works at the machine level, in particular how to store behaviors in a database, see Implementing Expert Systems in Clay. I'm going to leave this entry off here.