This version of the model implements a partition- and market-based scenario (instead of a grid-distance-based one). The classification mechanism works with the agents' probabilities of getting a potential partner's intended behaviour wrong (instead of a partner's tendency to be trustworthy).
The model is intended to improve the understanding of how the division of labour develops in large groups. This process is complex since it is driven by several conflicting forces. On the one hand, there is an incentive for division of labour among specialised agents; on the other hand, exchange is risky, so an agent might be better off solving only its own problem. Furthermore, there is an incentive to search for partners on a global market, since the set of potential partners is larger there than in the agents' neighbourhoods. On the other hand, the agents' knowledge of each other's trustworthiness is more reliable in small groups.
2. State variables and scales
There is only one type of agent and a very simple environment. We have N agents distributed among an externally given number of neighbourhoods. In each time step a randomly determined subset of the agents (50%) receives one of P problems. Agents that have a problem are called P-agents. P-agents can solve their problem on their own or hire an agent without a problem, a so-called S-agent, to solve it. Hiring an S-agent is risky since a prepayment has to be made, and only afterwards does the S-agent decide whether to deliver a solution or keep the prepayment without solving the problem. Each agent has a vector of competencies containing a value for each type of problem. Furthermore, each agent has a decision vector containing four probabilities of deciding for a certain action in the respective situation. Probabilities 1 and 2 determine the agent's probability of entering the market as a P-agent or an S-agent, respectively. Probabilities 3 and 4 determine whether or not the agent is trustworthy if it is an S-agent and was hired to solve a problem. An agent's probability of being trustworthy in market interaction can differ from its probability of being trustworthy in local interaction.
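A minimal sketch of this agent state in Python (the attribute names and the value of P are illustrative assumptions, not taken from the model's code):

    import random

    P = 10  # number of problem types (assumed value; P is an external parameter)

    class Agent:
        def __init__(self, neighbourhood):
            self.neighbourhood = neighbourhood
            # one competence value per problem type, initialised to 1/P
            self.competencies = [1.0 / P] * P
            # the four decision probabilities described above:
            # [0] enter the market as a P-agent, [1] enter the market as an S-agent,
            # [2] be trustworthy on the market,  [3] be trustworthy locally
            self.decision_vector = [random.random() for _ in range(4)]
            self.aggregated_payoff = 0.0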
Each time step starts with the distribution of problems, i.e. half of the agents receive a randomly chosen problem. Afterwards, agents decide whether they want to search for a partner on the market or within their neighbourhood. Then the S-agents decide whether they are trustworthy in the current time step. After this, agents are matched and their payoffs are determined according to the agents' decisions and competencies. Finally, the agents' decision vectors and competencies are updated.
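This schedule might be outlined as follows (a non-authoritative sketch assuming the Agent class above; the matching, payoff, and update steps are only indicated here and sketched further below):

    def time_step(agents, rng):
        # 1. a random half of the agents receive a problem and become P-agents
        p_agents = rng.sample(agents, k=len(agents) // 2)
        for a in p_agents:
            a.problem = rng.randrange(P)
        s_agents = [a for a in agents if a not in p_agents]

        # 2. market-entry decisions (decision probabilities 1 and 2)
        for a in p_agents:
            a.on_market = rng.random() < a.decision_vector[0]
        for a in s_agents:
            a.on_market = rng.random() < a.decision_vector[1]

        # 3. S-agents decide whether to be trustworthy this step (probabilities 3 and 4)
        for a in s_agents:
            p = a.decision_vector[2] if a.on_market else a.decision_vector[3]
            a.trustworthy = rng.random() < p

        # 4. matching and payoff determination (see the matching sketch below)
        # 5. update of decision vectors and competencies (see Submodels)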
Emergence of agents that are trusting and trustworthy in market interactions.
The probabilities in the agents’ decision vectors (whether they enter the market and are trustworthy) as explained above.
They compare their discounted aggregated payoff with that of their most successful neighbour.
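One plausible reading of "discounted aggregated payoff" (the exact formula is not given here, so the update rule and the discount factor are assumptions):

    def update_aggregated_payoff(agent, current_payoff, discount=0.95):
        # exponentially discounted running sum, updated once per time step
        agent.aggregated_payoff = discount * agent.aggregated_payoff + current_payoff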
Prediction is not explicitly modelled.
Agents have an imperfect knowledge of the trustworthiness of other agents.
The exchange situation described above.
Yes, random numbers are very important, especially for the matching algorithm. This algorithm produces matches that real agents could plausibly bring about, i.e. the matching is better than random matching but worse than what a perfectly informed social planner could achieve.
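The matching rule itself is not spelled out here; the following is one hypothetical mechanism with the stated property: each P-agent inspects a small random sample of the still-available S-agents and hires the most competent of them, which beats uniform random matching but falls short of a perfectly informed planner's optimum.

    def match(p_agents, s_agents, rng, sample_size=3):
        available = list(s_agents)
        matches = []
        for p in p_agents:
            if not available:
                break
            # inspect a few candidates only: imperfect but plausibly attainable information
            candidates = rng.sample(available, k=min(sample_size, len(available)))
            best = max(candidates, key=lambda s: s.competencies[p.problem])
            available.remove(best)
            matches.append((p, best))
        return matches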
Agents are situated in neighbourhoods. It is assumed that the knowledge of each other's trustworthiness is better there than on the global market.
During the initialization, N agents are randomly distributed among an externally given number of neighbourhoods. All competencies are set to 1/P, where P is the number of different problem types. All values in the agents' decision vectors are set to uniformly distributed random numbers between 0 and 1.
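A sketch of the initialization, reusing the Agent class from above (the number of neighbourhoods and the seed are assumed values; the only documented parameter value is the number of agents):

    import random

    def initialise(n_agents, n_neighbourhoods, rng):
        # Agent.__init__ already sets competencies to 1/P and draws the four
        # U(0, 1) decision probabilities; only neighbourhoods are assigned here
        return [Agent(rng.randrange(n_neighbourhoods)) for _ in range(n_agents)]

    rng = random.Random(42)            # assumed seed
    agents = initialise(500, 10, rng)  # 10 neighbourhoods is an assumed value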
number of agents: 500
Specialisation: If an agent a solves a problem of type i, then a certain value (parameter: momentum of specialisation) is added to component i of a's vector of competencies. Afterwards the vector of competencies is normalised so that all components again sum to one.
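As a sketch (a direct transcription of the rule above):

    def specialise(agent, problem_type, momentum_of_specialisation):
        # add the momentum to the solved problem's component ...
        agent.competencies[problem_type] += momentum_of_specialisation
        # ... then re-normalise so the components sum to one again
        total = sum(agent.competencies)
        agent.competencies = [c / total for c in agent.competencies]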
Learning: Each agent takes a subset of the agents in its neighbourhood (of size size_of_learning_pool * number_of_agents_in_neighbourhood) as its learning pool. From this pool, the agent with the highest aggregated payoff serves as the role model. Each component of the role model's decision vector is copied with probability probability_to_adopt_propensity.
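A sketch of this rule (rounding the pool size to at least one agent and breaking payoff ties arbitrarily are assumptions):

    def learn(agent, neighbourhood_agents, rng,
              size_of_learning_pool, probability_to_adopt_propensity):
        pool_size = max(1, round(size_of_learning_pool * len(neighbourhood_agents)))
        pool = rng.sample(neighbourhood_agents, k=pool_size)
        # the most successful agent in the pool serves as the role model
        role_model = max(pool, key=lambda a: a.aggregated_payoff)
        # copy each decision-vector component independently with the given probability
        for i, p in enumerate(role_model.decision_vector):
            if rng.random() < probability_to_adopt_propensity:
                agent.decision_vector[i] = p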