(read Part 1 here)
I have seen it many times while watching chess commentators (typically, Grandmasters of the highest level) perform game analysis in real time. These GMs will be considering different possibilities for both sides and, occasionally, when the situation becomes too complex and unclear, say something like: “Hey, let’s check with the chess engine now… Oh, it gives a strong advantage to White, but I don’t see why… It says to do… WHAT?! And then… WHAT?! No, these are not ‘human-like moves’; the players will not do that. This is too deep and machine-like.”
The truth is that even the strongest Grandmasters often feel like little children when comparing their own analysis to that of a machine. But this is exactly why they are using machine analysis!
Luckily for chess, nobody suspects that Stockfish or AlphaZero has ulterior motives or biases, dislikes certain players, or wants to take advantage of anybody. Chess engines are considered fast, powerful, accurate, and objective analysis and decision-making tools capable of finding the best solution for any situation and being useful to us simply by being better than us. And nothing else.
And this is exactly what future AI governments should look like: fast, powerful, accurate, and objective analysis and decision-making TOOLS capable of finding the best solution for any situation and being useful to us by being better than us. And nothing else.
Machine learning (ML) might already offer the approach needed to build and test such an “AI governance engine”, and to run the entire democratic election process, using ML’s standard training and testing steps:
- Provide the “governance engine” with a training dataset of historical or other examples that are of high value to us, together with how we classify them (for example, “bad” or “good”). Cover the important social, economic, judicial, cultural, and educational fields. Imagine thousands upon thousands of statements or questions along with their labels/answers, presented like this:
- “Rosa Parks rejected bus driver James F. Blake’s order to relinquish her seat in the ‘colored section’ to a white passenger. Was she right, or should she have stayed in the colored section?” The answer: Rosa Parks was right. The driver was wrong.
- Or: greater investment in children’s education is good. Cutting that investment is bad.
- Clearing forest in the Amazon basin is bad. Reducing industrial water and air pollution is good.
We have tons of examples like this from our past and present.
- Keep a separate dataset of examples with answers for testing. We will use it later to verify that the engine works well.
(Comment: the general population should take part in creating the above list of Q&A. Millions of people can contribute to it. This gives people a very direct impact on the training and selection of their own government, instead of merely choosing the best available but imperfect candidate.)
- Once the governance engine is built and trained, it will be able to generalize beyond the exact examples it was given. This is like showing a machine pictures of 1,000 cats and 1,000 dogs – for training – and then showing it a picture it has never seen before and asking it to classify it as a “cat” or a “dog”. We can now proceed to the testing stage.
- Testing: run all the competing “governance engines” against the test dataset (with known answers) and measure their classification error. Basically, pose a large set of questions with known answers and check how each candidate engine performs. Ideally, an engine would make zero errors across a dataset covering the most important fields and issues (which, by the way, is almost impossible for most human politicians to achieve).
- It is likely, however, that some engines will do better in some areas and others will excel elsewhere. For example, Engine A might score higher on social issues while Engine B does better on economics. This resembles the differences between human candidates with different backgrounds and opinions, and it could inform the final decision.
- The final decision on which model to use (if any; they could all be sent back for retraining) can be made through the same democratic election process we have today, with people voting on it. Minus the negative TV ads, the mutual insults, and paying the press to dig up dirt on the other candidates.
- To be honest, if society wants to keep that part of the process for entertainment purposes, the AI engines could easily simulate it as well: plenty of “dirt” could be found in the “past test votes” made by the different engines, or the focus could shift to the models’ creators – actual living people.
- In the end, humans should decide when to bring the engine online and when to consider other alternatives again (aka “general AI elections”).
- And humans should retain veto power over the most controversial decisions that the AI government makes.
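The train-and-evaluate election pipeline described above can be sketched in a few lines of Python. Everything here is illustrative: the datasets, the field names, and the keyword-counting “engine” are toy stand-ins for a real ML model, used only to show the shape of the process (train on labeled examples, evaluate on held-out questions, and compare error per field so engines with different strengths can be judged).

```python
from collections import Counter

# Hypothetical crowdsourced training set: (statement, field, label) triples.
train_set = [
    ("greater investment in children's education", "social", "good"),
    ("cutting education budgets", "social", "bad"),
    ("reducing industrial air pollution", "environment", "good"),
    ("clearing rainforest for industry", "environment", "bad"),
]

# Held-out test set with known answers, used only for evaluation.
test_set = [
    ("reducing water pollution", "environment", "good"),
    ("cutting budgets for education", "social", "bad"),
]

class ToyGovernanceEngine:
    """A deliberately naive keyword scorer standing in for a real model."""

    def fit(self, examples):
        # Words seen in "good" examples gain a point; "bad" examples lose one.
        self.word_scores = Counter()
        for text, _field, label in examples:
            for word in text.split():
                self.word_scores[word] += 1 if label == "good" else -1

    def classify(self, text):
        score = sum(self.word_scores[w] for w in text.split())
        return "good" if score >= 0 else "bad"

def per_field_error(engine, examples):
    """Classification error per field, so competing engines can be
    compared by domain (e.g. social vs. economic issues)."""
    errors, totals = Counter(), Counter()
    for text, field, label in examples:
        totals[field] += 1
        if engine.classify(text) != label:
            errors[field] += 1
    return {f: errors[f] / totals[f] for f in totals}

engine = ToyGovernanceEngine()
engine.fit(train_set)
print(per_field_error(engine, test_set))
```

In this sketch, electing between competing engines would amount to comparing their per-field error dictionaries; a real system would of course need far richer models and vastly larger, carefully curated datasets.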
The gains from such a revolutionary change, which should start at a smaller scale and proceed slowly and carefully over time, are difficult to estimate. Possible, and unexpected, consequences could include these:
- To our shock and great surprise, we might find that nearly ANY social model, form of governance, or economic system starts working really well once the “weak link” – us – is removed from daily decision-making: the biases, emotions, and personal interests are gone, and the law is always enforced without bias or corruption.
- All the different political and economic systems might start merging into one: the “optimum” system, which favors the best and most balanced decisions for the people.
We will all be surprised. We will all be suspicious. We will complain a lot. We will talk about the long-term negative effects of this change, about the dangers of losing control, about AI dictatorship, about the end of humanity…
But, as time passes, we will realize that an AI government is just another tool for us to use, something like a city traffic control system or a sophisticated home thermostat: always “on”, programmable, predictable, efficient. Serving us 24×7.
Then we will accept this new world as a better place, get used to it, and start enjoying it. The world will become safer, more predictable, and much better governed.
And, as in chess today, we will simply accept that there is a superior computation and reasoning engine, one we created ourselves, that keeps improving every day to make human lives better.