Decentralised Model in Detail - Society Operating System
Next: Decision Making Models.
Society Operating System (SOS)
We’ve covered how the basic necessities (energy, food, water, waste handling, infra) work in a decentralised manner, how this is enabled by open-sourcing digital blueprints, and how basically everything can be digitalised.
How do we organise work and activity around the repos?
An organisation is basically a common purse, a decision-making model, and participants with their rights and roles. The resulting diagram is very similar to an autonomous organisation.
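As a bare-bones sketch in code (the field names and defaults are illustrative, not from any particular implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str
    roles: set[str]       # e.g. {"contributor", "maintainer"}
    rights: set[str]      # e.g. {"vote", "propose", "spend"}

@dataclass
class Organisation:
    treasury: float = 0.0                     # the common purse
    decision_model: str = "simple-majority"   # stand-in for the voting rules
    participants: list[Participant] = field(default_factory=list)
```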
In subsequent posts we’ll go through the key aspects in more detail, but let’s start with the helicopter view:
When a project is started, the starting team needs to decide the initial conditions: the rules and the decision-making model. Who is able to join, what new users are able to do initially, and how their journey progresses as they become more involved in the project and their contributions grow.
Typically projects allow anyone to join (they are permissionless), but nothing prevents you from starting one that is invitation-only. There must also be a definition of what happens when people leave. On the internet you cannot prevent anyone from becoming passive, i.e. de facto leaving, so the rules should state what happens when someone has been inactive for a very long time.
Ideally a project also has a dissolution mechanism. What if the project never takes off? Do you just leave a digital ghost behind as people move on to other activities? What happens to any funds the project holds? How are they divided between contributors? What if the sum due to each one is smaller than the transaction fee for paying it out?
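As an illustration, a membership lifecycle with an inactivity cutoff and a dissolution payout could be sketched like this; the 180-day limit and the equal split are assumptions for the example:

```python
from dataclasses import dataclass
from enum import Enum

class State(Enum):
    ACTIVE = "active"
    INACTIVE = "inactive"   # de facto left: no activity for too long

INACTIVITY_LIMIT_DAYS = 180  # hypothetical cutoff

@dataclass
class Member:
    name: str
    days_since_activity: int
    state: State = State.ACTIVE

def sweep_inactive(members: list[Member]) -> None:
    """Mark members inactive once they pass the inactivity cutoff."""
    for m in members:
        if m.state is State.ACTIVE and m.days_since_activity > INACTIVITY_LIMIT_DAYS:
            m.state = State.INACTIVE

def dissolve(treasury: float, members: list[Member], tx_fee: float) -> dict[str, float]:
    """On dissolution, split remaining funds equally among active members,
    but skip payouts smaller than the transaction fee (fees would eat them)."""
    active = [m for m in members if m.state is State.ACTIVE]
    if not active:
        return {}
    share = treasury / len(active)
    return {m.name: share for m in active} if share > tx_fee else {}
```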
The actual operational aspects are programmed as a set of smart contracts. They make it possible to implement different organisational models and to define how distributed decision making works and how disputes are resolved. Different communities and cultures are likely to have different preferences, leading to different models. Smart contracts make it easy to experiment and find automated governance models that work well for different users and in different domains.
Part of the rules are rules for changing the rules themselves. These higher-level rules are typically made more difficult to change. In a sense, communities have foundational rules (a ‘constitution’) that require a very large majority to change, operational policies that are much easier to change, and, on the lowest level, small parameters that fine-tune some existing rule or procedure.
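A minimal sketch of such tiered change thresholds; the tier names and percentages are assumptions for illustration, not a prescribed design:

```python
# Hypothetical vote thresholds per rule tier; the real values would be
# part of the project's founding decisions.
THRESHOLDS = {
    "constitution": 0.90,  # foundational rules: near-consensus required
    "policy": 0.66,        # operational policies: supermajority
    "parameter": 0.50,     # fine-tuning parameters: simple majority
}

def change_passes(tier: str, votes_for: int, votes_total: int) -> bool:
    """A proposed change passes if its support exceeds the tier threshold."""
    return votes_total > 0 and votes_for / votes_total > THRESHOLDS[tier]

# Example: 70 of 100 voters approve a policy change -> passes (70% > 66%),
# but the same support would not suffice for a constitutional change.
assert change_passes("policy", 70, 100)
assert not change_passes("constitution", 70, 100)
```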
An initial injection is needed to get any project started; after that, new funds come in when people use services or download the project’s assets through a charging mechanism. As a contributor, you get a cut of this income. The reward depends on the value of the asset. We discussed one option for estimating this earlier: TechRank.
The project implements some asset or organises some service in a crowdsourced fashion. Users pay for the service or assets in some manner, for example a monthly fee or pay-as-you-go. There may also be a freemium model (use is free up to a point), or the assets may be free while people offer paid services around them, with the services paid in the project token through the project’s smart contracts (this allows the project to take part of the income for future development). Some verticals like health may have their own mechanisms where data sharing is compensated, and this income could also be used to fund open medicine.
The income goes into a common, shared purse (the Treasury). Any development activity requires funding, so the most important decisions in the project are about distributing and allocating the funds in the Treasury. The decision-making model drives this.
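As a rough sketch, a payment could be split between the Treasury and the contributors of the assets involved; the cut and the weights below are illustrative assumptions, not a prescribed scheme:

```python
# Hypothetical split of incoming payments: a fixed cut to the Treasury,
# the rest distributed to contributors in proportion to a value weight
# (e.g. a TechRank-style score). All numbers are illustrative.
TREASURY_CUT = 0.30

def split_payment(amount: float, weights: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return (treasury_share, per-contributor rewards)."""
    treasury_share = amount * TREASURY_CUT
    pool = amount - treasury_share
    total = sum(weights.values())
    rewards = {who: pool * w / total for who, w in weights.items()}
    return treasury_share, rewards

treasury, rewards = split_payment(100.0, {"alice": 3.0, "bob": 1.0})
# treasury == 30.0; alice gets 52.5, bob gets 17.5
```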
A large project can be divided into smaller projects and teams, each with its own sub-purse, all following similar ways of working.
Token Engineering for Verification
How should we design the rules of the system?
Token engineering is an emerging discipline for simulating, for example, token-based systems, but the tools work with any non-linear system. It is ideally suited for verifying that a proposed token model (a token issuance and distribution model combined with predicted user behaviours and expected rates and types of transactions) does not produce unexpected outcomes before it is implemented.
The core concept is to put the rules of the system into a model and use discrete simulation to detect weak points in the system. Discrete means that you put in the rules and the expected behaviour of participants and run a large number of steps to see how the system evolves. You might, for example, track how many people join and leave, how the token value evolves, what reserve accumulates in the treasury, and how many developments can be funded. It’s up to the designer to decide what is important.
To use it in a meaningful way, one should not assume any particular behaviour from participants: do not expect all people to be well informed, make good decisions, or act in good faith, but allow some segment to perform every action the system allows. A fleet of simulations is then run, varying the starting conditions and the probabilities of how people behave. The outcomes tell you whether there are ways to abuse the system and gain undue benefits, and how sensitive the proposed system is to small changes in initial conditions.
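A deliberately tiny sketch of this style of simulation in Python (the behaviour mix, the refund exploit, and all the numbers are invented for illustration; real token-engineering toolkits are far richer):

```python
import random

def run_once(seed: int, honest_fraction: float, steps: int = 1000) -> dict:
    """One discrete run: agents join/leave, pay fees, or abuse a
    hypothetical refund rule. All rules and numbers are illustrative."""
    rng = random.Random(seed)
    treasury, members = 100.0, 10
    for _ in range(steps):
        if rng.random() < 0.05:                 # occasional join/leave
            members = max(1, members + rng.choice([-1, 1]))
        for _ in range(members):
            if rng.random() < honest_fraction:
                treasury += 1.0                 # honest use pays a fee
            elif rng.random() < 0.5:
                treasury -= 2.0                 # exploit drains a refund
    return {"treasury": treasury, "members": members}

# A fleet of runs across seeds and behaviour mixes, looking for dark corners.
results = [run_once(seed, honest)
           for seed in range(100)
           for honest in (0.5, 0.8, 0.95)]
print("worst-case treasury:", min(r["treasury"] for r in results))
```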
It is a way of programmatically verifying that the proposed model is sound.
As such, the concept is extremely simple, as shown in the image below, but modeling the environment correctly is the difficult part.
Token engineering can be used to put the rules of the system into a simulation environment and run statistical simulations to understand if the resulting system has some dark corners where it can produce totally unacceptable results.
Token engineering is not foolproof, but it is a big improvement. If something does not work in simulation, it will not work in the real world. But if the simulation works, that does not prove there won’t be corner cases. Simulations are always abstractions, removing details, and details tend to hit back. On a more fundamental level, economic systems are non-linear, and there is no known method for solving them analytically. Non-linear systems exhibit the butterfly effect: the most minuscule change in starting conditions results in totally different outcomes after a while. This is true of other real-world systems as well.
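The classic demonstration is the logistic map: run it from two starting values that differ by one part in a billion and the trajectories end up completely different. This is a standard textbook example, not specific to token systems:

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4.0).
# Two starting points differing by 1e-9 end up nowhere near each other.
def logistic(x0: float, r: float = 4.0, steps: int = 50) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.300000000)
b = logistic(0.300000001)
print(a, b, abs(a - b))  # difference is of order 1 despite a 1e-9 perturbation
```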
TechRank for Value Attribution
How should rewards for creators be calculated?
TechRank is one approach; it borrows its logic from web search engines. Web pages link to each other, and the importance of a page can be estimated by calculating how valuable the pages that link to it are. This is a recursive method, as the value of those pages in turn depends on the pages linking to them. The recursion converges at some point (the weights no longer change).
Technology is organized in an analogous manner. Any product is composed of subcomponents that in turn are made of other subcomponents, and so on. The designs of those components, and the manufacturing methods used to make them, rely on different technologies. Often the same components are used in many different types of products and industries. One can view this as a web of interconnections, much like links on the web, and calculate the value of each contribution. This is TechRank. The same concept is already used in scientific publications, where people follow references between studies (and widely misused, as many papers seem to have more authors than text…).
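To make the mechanics concrete, here is a minimal power-iteration sketch of this kind of ranking over a component dependency graph. The graph, damping factor, and dangling-node handling are the standard PageRank recipe applied to hypothetical components, not the exact TechRank formula:

```python
# PageRank-style iteration over a dependency graph of components.
# An edge A -> B means product/component A builds on component B,
# so value flows from A to the things it depends on.
graph = {
    "phone": ["battery", "screen", "chip"],
    "laptop": ["battery", "screen", "chip"],
    "chip": ["lithography"],
    "battery": [], "screen": [], "lithography": [],
}

def techrank(graph: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):  # iterate until the weights settle
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, deps in graph.items():
            if deps:
                share = damping * rank[src] / len(deps)
                for d in deps:
                    new[d] += share
            else:
                # dangling nodes spread their weight evenly
                for n in nodes:
                    new[n] += damping * rank[src] / len(nodes)
        rank = new
    return rank

print(sorted(techrank(graph).items(), key=lambda kv: -kv[1]))
# foundational components like "lithography" end up ranked high
```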
This model needs some adjustments as such. Almost all value comes from old work and old tech, but it is the new work that needs funding and rewards. Should rewarding follow principles similar to patents? What about applying existing tech to a new field? Such reuse can be very valuable. Much of new technology today has decades of university research behind it that is not monetarily rewarded. But university researchers do get paid and gain status, and many move to industry later. University research consists of quick trials to validate ideas, with a nice paper as the output; from there it is a long way to a working product, and it often turns out to be impossible. Valuing incremental innovation may prove hard.
The impact a service has can also be factored in. Some services are used almost daily; others you only need once in a lifetime, but they are critical at that moment. Figuring out the right allocation is difficult.
I leave figuring these out as a small exercise for the gentle reader.
https://marttiylikoski.substack.com/p/techrank-as-a-contribution-counter