Public organisations - ensuring quality automations (bug bounties etc.)
In the previous post we presented the idea of Automation Budgets for private citizens and enterprises. It's time to look at how to ensure they are of good quality.
We’ll look at three different options:
Using part of automation rewards to fund bug bounties or bespoke testing
Safety Investigation Board
Citizens’ right to audit government
Testing and Bug Bounties
The problem with automations is that they can contain errors. People are likely to report errors quickly when an automation fails to deliver the benefits they are owed, but reporting will be slow when an automation hands out too much. A hedging strategy is needed.
One option is to allow third parties to write test cases that seek out errors. They could construct data sets with known results and test the current automation logic in production. When errors are found, these test companies would be paid a share of the money going out as Automation Rewards. Automation budgets would be split into two parts: one as direct compensation, the other reserved for various types of quality testing. The positive side of this scheme is that only successful test cases are rewarded.
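To make the scheme concrete, here is a minimal sketch of such a third-party test case. Everything here is hypothetical: the decide_benefit function stands in for the production automation logic, and the benefit rule (50 units per child, capped at 150) is an assumed example, not a real regulation.

```python
def decide_benefit(applicant: dict) -> float:
    """Stand-in for the production automation logic (hypothetical rule)."""
    # Assumed rule: 50 units per dependent child, capped at 150.
    return min(applicant["children"] * 50.0, 150.0)

# Data set the tester constructed, with results worked out by hand from the law.
known_cases = [
    ({"children": 0}, 0.0),
    ({"children": 2}, 100.0),
    ({"children": 4}, 150.0),  # the cap applies here
]

def run_test_suite(decide, cases):
    """Return the cases where the automation disagrees with the known result."""
    errors = []
    for applicant, expected in cases:
        actual = decide(applicant)
        if actual != expected:
            errors.append((applicant, expected, actual))
    return errors

# Under the proposed scheme, the tester is paid only when this list is non-empty.
errors = run_test_suite(decide_benefit, known_cases)
```

The key design point is that the expected results are produced independently of the automation, so a passing suite says the two implementations of the law agree, and a failing case is itself the deliverable that earns the bounty.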
Another option is to take a share of the automation rewards and use that money to purchase test cases from test providers.
Nothing prevents using both methods at the same time. They have slightly different incentives: paid testers verify that the functionality is correct, whereas independents might opt to check the boundaries of the system by feeding its interface all kinds of false and unrealistic values.
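The independent, boundary-probing style can be sketched as a simple fuzzing loop. Again everything is hypothetical: decide_benefit is a stand-in with assumed input validation, and the list of bogus values is illustrative.

```python
import random

def decide_benefit(applicant: dict) -> float:
    """Stand-in for the automation, with assumed input validation."""
    children = applicant["children"]
    if not isinstance(children, int) or children < 0 or children > 20:
        raise ValueError("implausible number of children")
    return min(children * 50.0, 150.0)

def fuzz(decide, trials=100):
    """Feed implausible inputs to the interface; collect any that slip through."""
    slipped = []
    for _ in range(trials):
        bogus = random.choice([-5, 10**9, None, "seven", 3.14])
        try:
            decide({"children": bogus})
            slipped.append(bogus)   # accepted a bogus value -> candidate bug report
        except (ValueError, TypeError):
            pass                    # correctly rejected
    return slipped
```

An automation that pays out on a negative or absurd number of children would show up in the slipped list, which is exactly the kind of finding an independent boundary tester would submit for a bounty.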
This concept of paying independent third parties a reward when they find errors in technical systems is called a bug bounty.
One can also imagine various groups, such as associations for patients or the unemployed, collecting and aggregating their members’ automation budgets to fund teams that find errors in the implementations, ensuring that their members’ interests are protected.
The testing does not need to be limited to automated decisions, and bug bounties could be extended so that testing also covers manual services.
Safety Investigation Authority
In aviation there is a practice that after every plane crash an extensive investigation is launched to find the root cause, and as a result comprehensive recommendations are made to prevent it from happening again. The recommendations can cover anything from interaction inside the cockpit to technical or legislative issues. This means that each crash makes air travel safer. A similar procedure is in place for all road accidents leading to loss of life. Each accident makes the whole system safer - cars, infrastructure, training, decision makers and so on.
The Safety Investigation Authority concept needs to be extended. For example, after the 2008 financial crisis there was never a comprehensive investigation into the root causes, with recommendations to address issues like moral hazard in the system. Instead, strong measures were taken to solve the acute problem with taxpayers’ money.
Many other areas are likewise off the radar, not just the financial sector - for example, industrial espionage or attempts to influence elections.
The safety investigation concept in aviation is reactive, after the fact. In many other areas safety is at such a low level that proactive action is rather easy - for example, analysing the fragility of international supply chains. Today it is very easy for a large nation holding control points, for example in medicine manufacturing, to extort other countries into obeying it in political and trade issues.
Citizens’ Right to Audit the Government
The two systems above are models of working orchestrated by the public sector. A decentralised approach would be to look for ways to do bottom-up quality assurance.
Automation means efficiency, but it also means that any fault in the process produces errors efficiently and in volume. Citizens need a way to ensure that the decisions made about them are correct. In a complex legal landscape it is unrealistic to expect everyone to be on top of the legal jungle.
When my data becomes available via an open API, the landscape changes and validation becomes possible.
Technically this works so that citizens have the right to grant third parties access to their data. This allows independent organisations to build programs that check and validate that a decision follows the law. If I am unhappy with some decision, I would grant such an app access to my personal data in governmental data repositories. The validator app would access my data inside the sandbox and run a number of tests to check that the decision follows the law and is in balance with similar decisions (i.e., that I am not being unfairly treated).
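The validation step could look something like the sketch below. Everything in it is an assumption for illustration: the benefit formula, the statutory minimum, and the list of anonymised similar decisions the sandbox is assumed to provide.

```python
LEGAL_MINIMUM = 320.0  # assumed statutory minimum benefit (illustrative)

def recompute_by_law(data: dict) -> float:
    """The validator's independent implementation of the legal rule (assumed)."""
    base = 500.0 - 0.5 * data["monthly_income"]
    return max(base, LEGAL_MINIMUM)

def validate(decision_amount: float, data: dict, similar_amounts: list) -> list:
    """Return human-readable findings; an empty list means no fault was found."""
    findings = []
    # Legality check: does the decision match an independent reading of the law?
    expected = recompute_by_law(data)
    if abs(decision_amount - expected) > 0.01:
        findings.append(f"decision {decision_amount} != recomputed {expected}")
    # Fairness check: am I treated in line with similar (anonymised) cases?
    if similar_amounts:
        avg = sum(similar_amounts) / len(similar_amounts)
        if decision_amount < 0.8 * avg:
            findings.append(f"decision far below average of similar cases ({avg:.0f})")
    return findings
```

The essential property is that the validator re-implements the rule independently rather than re-running the state’s own code, so the two implementations can disagree and expose a fault in either one.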
If a fault is found, an automated correction request (appeal) is made to the right authority. Any fault found is a possible indication of a systematic error in the automation - though not necessarily, as validators will contain mistakes too.
When the fault is verified, the automated decision-making system is corrected, and everyone who has received a wrongful decision can be tracked down in the databases, their cases re-evaluated and their benefits paid retrospectively. The company or organisation that found the error is given a monetary reward. You could think of this as giving a fine to the state. The money to correct the error is taken either from the organisation that implemented the automation (their automation reward) or from a general fund gathered from the overall savings made through automation.
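The retrospective sweep described above can be sketched in a few lines, assuming a toy in-memory list stands in for the decision database and the corrected rule has already been deployed; the records and amounts are invented for illustration.

```python
def retroactive_corrections(decisions, corrected_rule):
    """Find every past case where the corrected rule gives a different result."""
    corrections = []
    for case in decisions:
        owed = corrected_rule(case["data"])
        if owed != case["paid"]:
            corrections.append({"id": case["id"], "paid": case["paid"], "owed": owed})
    return corrections

# Toy database of past automated decisions (hypothetical records).
decisions = [
    {"id": 1, "data": {"children": 2}, "paid": 100.0},
    {"id": 2, "data": {"children": 4}, "paid": 120.0},  # underpaid by the old logic
]

# The corrected rule: 50 units per child, capped at 150 (assumed example).
corrected = lambda d: min(d["children"] * 50.0, 150.0)

to_fix = retroactive_corrections(decisions, corrected)
```

Because every automated decision is in the database with its input data, finding the affected citizens is a single pass over the records rather than waiting for each person to appeal individually.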
Part of the state budget could be allocated to organisations for writing these automated apps that test the state. For example, unemployment organisations could be given funds to develop apps that ensure the unemployed receive their benefits according to the law, and so on.
Auditing the government is really just one more level of testing that things work well.