Knowledge in a market economy

We can split knowledge (a Bayesian prior) into independent components. Using Catastrophe Bonds (already employed by the World Bank, for instance), we can also split the economy into independent components.

Market failure is an inefficient distribution of goods and services in the free market. In an ideally functioning market, the forces of supply and demand balance each other out, with a change on one side of the equation leading to a change in price that maintains the market's equilibrium.

And market failures are common. Moreover, scientists are producers of knowledge and thus market agents. They (and the knowledge they produce) are affected by the forces of demand. Biased public knowledge is a leading cause of market failures (and of scientific failures), which may further amplify the forces of demand acting on the scientists.

While both the ideal market and the scientific process may seem stable at first sight, once we consider that public knowledge is itself affected by supply and demand, we conclude that they are often unstable: many small deviations from balance can be amplified by scientists and then by the forces of demand, in an iterative process.

The way out is to decouple the economy from the scientific process as much as possible. But since scientific activity is part of the economy and vice versa, what we need to do is to decouple some sectors of the economy (and of public knowledge) from each other, so that they become as isolated from each other as possible.

This would lead to a more resilient economy, since a crisis in one sector would not imply a crisis in another sector. It would also lead to more scientific progress, since statistically independent priors decrease the cost of a crisis (it only affects one sector), allowing the crisis to occur when it is needed rather than when a catastrophe forces it. Note that there are no unbiased Bayesian priors, thus crises (when one prior is replaced with another) are often required for scientific progress to occur.
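As a toy illustration, consider the following Python sketch (the `Sector` class and all numbers are our own illustrative assumptions, not taken from any source): each component of the factorized prior is updated independently, so a crisis, i.e. replacing a discredited prior, stays confined to one sector while the posteriors of the others are untouched.

```python
# A prior factorized into independent components, one per "sector":
# p(theta) = p(theta_1) * ... * p(theta_n).

class Sector:
    """One independent component of the prior: a Beta(alpha, beta)."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta

    def update(self, successes, failures):
        # Conjugate Bayesian update, local to this sector only.
        self.alpha += successes
        self.beta += failures

    def mean(self):
        return self.alpha / (self.alpha + self.beta)

sectors = [Sector() for _ in range(3)]
sectors[1].update(successes=40, failures=10)   # healthy sector
sectors[2].update(successes=2, failures=48)    # sector in trouble

# A "crisis" (replacing a discredited prior) stays confined to one
# component; the posteriors of the other sectors are untouched.
if sectors[2].mean() < 0.1:
    sectors[2] = Sector()                      # replace the prior

print([round(s.mean(), 2) for s in sectors])   # [0.5, 0.79, 0.5]
```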

While there is a cost to organizing the economy and public knowledge in such a way, the relevant question is whether this is a sustainable investment. A Bayesian prior which is the product of many statistically independent priors is merely convenient, since it leads to a more peaceful society (including among scientists); it is as biased as any other prior, and there is no evidence that the posterior should have a similar structure (the real world is messy, so the posterior is very different from the prior). Consequently, we need a lot of energy to make predictions about the real world.

Most functions are uncomputable, and for most functions we cannot say a priori whether they are computable. In the real world, prioritizing survival is the only viable strategy. But when considering probabilities we can work with computable functions. This does not guarantee survival (because there are many things we do not know), but it increases our probability of survival.

Such probabilities allow us to compute predictions (with some intrinsic uncertainty) from many independent assumptions. Predictions can be approximated by a small number of terms, which guarantees that at least the approximation is computable (and avoids dependency hell [@abate20]).

There are still cases where the error does not diminish fast enough with the number of terms, that is, where we cannot find a good enough model. In that case, we may need to sacrifice some statistical independence and customize the independent assumptions (using human work, a neural network, or another flexible Bayesian model, for instance). But this is a very old approach (not a new one) and it comes at a cost: the theory behind the model has less knowledge that can easily be applied to this and other problems. All solutions to real-world problems have always included both theory [@naur85] and Bayesian inference (since theories are always idealizations, which only serve as approximations to the final solutions), with more or less inference needed depending on how much knowledge about the solution is already in the theory part.

We conclude that it is sustainable to split knowledge (a Bayesian prior) into independent components. Since the economy is increasingly based on knowledge, it is also possible to split the economy into independent components, at least in principle. An explicit way of doing so is already widely used (by the World Bank, for instance): Catastrophe Bonds [@worldbank24].

The "Catastrophe" in the name "Catastrophe Bonds" refers to the fact that these Bonds are often used as a means of insurance against extreme events (catastrophes), because they are mostly isolated from the rest of the economy, so isolated that even in a Catastrophe the Bonds still likely work. But they can be applied in other contexts. The fact that the risk in these Bonds is mostly independent (from most of the economy and from other Catastrophe Bonds) is their main advantage. Note that the Central Limit theorem implies that the overall risk in a Portfolio approximately decreases with frac1sqrtnfrac{1}{sqrt{n}}, where nn is the number of statistically independent investments of similar risk).

According to the World Bank [@worldbank24]:

> In a typical catastrophe bond structure, the entity exposed to the risk (known as the "sponsor") enters into an insurance contract with a [special purpose vehicle, such as the World Bank, for instance] SPV that issues the bonds to investors. The SPV invests the proceeds of the bond issuance in highly rated securities that are held in a collateral trust, and it transfers the return on this collateral, together with the insurance premiums received from the sponsor, to the investors as periodic coupons on the bonds.
>
> If a specified natural disaster occurs during the term of the bond, some or all of the assets held as collateral are liquidated and that money is paid to the sponsor as a pay-out under its insurance contract with the SPV. If no specified event occurs, the collateral assets are liquidated on the maturity date of the bonds and the money is paid to the investors.

We stress that "a specified natural disaster" can be replaced by many other specified events in this kind of bond structure. What is crucial is that the trigger of the bond is based on a credible measure or entity, mostly independent from the remaining economy.
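The quoted structure can be summarized as a payoff function. The following Python is a stylized sketch under our own simplifying assumptions (a binary trigger and a fixed liquidation fraction), not the World Bank's actual terms:

```python
def cat_bond_payout(trigger_occurred: bool, collateral: float,
                    payout_fraction: float = 1.0):
    """Stylized resolution of the structure quoted above (illustrative).

    If the specified event occurs, `payout_fraction` of the collateral is
    liquidated and paid to the sponsor; otherwise the investors recover
    the collateral at maturity. Coupons (collateral return plus premiums)
    are assumed to have been paid to the investors along the way.
    """
    if trigger_occurred:
        sponsor = payout_fraction * collateral
        investors = collateral - sponsor
    else:
        sponsor = 0.0
        investors = collateral
    return sponsor, investors

# Trigger fires: the sponsor is indemnified, investors lose principal.
print(cat_bond_payout(True, collateral=100.0))   # (100.0, 0.0)
# No trigger: investors recover the full collateral at maturity.
print(cat_bond_payout(False, collateral=100.0))  # (0.0, 100.0)
```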

For instance, many investments in information technology are made under the assumption that $P \neq NP$. But the problem P vs. NP is already the subject of a Millennium Prize by the Clay Institute (a credible measure, mostly independent from the remaining economy). So we could create an SPV that would trigger a Catastrophe Bond in the event that the Clay Institute declared $P = NP$ proved (within some time limit). The existence of such a bond would decrease the risk of many investments in information technology, while at the same time increasing the funding available to solve the problem P vs. NP (since a researcher working on the problem could make use of his privileged information).

In fact, one of the main problems in the funding of science is the problem of incentives: how to distribute funds fairly between scientists while at the same time making sure that most scientists have an incentive to really solve real problems. It is often the case that a scientist who is recognized as an expert in a specific problem (P vs. NP, for instance) has no incentive for someone else to really solve it, since his field would then change in an unpredictable way, which he does not want because he is already recognized as an expert on this problem (and not on other problems). When trying to solve a hard problem himself, it is much more likely that the expert will only make an indirect contribution to its solution, so the incentive for an expert to try to solve the hard problem (which would likely help someone else solve it) can be null or even negative.

On the other hand, once there is a Catastrophe Bond associated with P vs. NP, every new relevant contribution is worth money (in a very direct way), since it changes how the public evaluates the odds of a solution to the problem and thus the value of the bond itself. Someone who made such a contribution can buy the bond before announcing the contribution to the public and then sell it again (or sell before and buy after the announcement, depending on which contribution he made). Such bonds can be implemented using existing platforms such as polymarket.com; note that a bond can be implemented as a bet by accounting for the interest rates in the price of the bet.
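A back-of-the-envelope sketch of that last remark (all probabilities, rates and payoffs below are made-up examples): the fair price of such a bet is the probability-weighted payoff discounted at the prevailing interest rate, which is how a zero-coupon bond paying out on the same event would be priced.

```python
def bet_price_from_bond(probability: float, payoff: float,
                        annual_rate: float, years: float) -> float:
    # A zero-coupon "P = NP declared proved" bond paying `payoff` at
    # resolution is equivalent to a binary bet once the time value of
    # money is accounted for: discount the expected payoff at the
    # prevailing interest rate. All numbers here are illustrative.
    expected_payoff = probability * payoff
    return expected_payoff / (1.0 + annual_rate) ** years

# A 2% perceived chance that P = NP is declared proved within 10 years,
# paying 100 on trigger, with a 4% interest rate:
print(round(bet_price_from_bond(0.02, 100.0, 0.04, 10.0), 4))

# A researcher whose private information raises the probability to 10%
# values the same position five times higher before the announcement.
print(round(bet_price_from_bond(0.10, 100.0, 0.04, 10.0), 4))
```

On this toy pricing, the value of a position scales linearly with the perceived odds, which is exactly the monetization mechanism for private contributions described above.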

Apprenticeships/interviews in the internet era

Apprenticeships/interviews can be paid to uni-personal non-profit entities which have a fiscal host through opencollective.com. The fiscal host guides a project (the apprenticeship) which is sponsored by a private company (which also guides the project) but which aligns with the mission of the fiscal host (as happens in some universities); it is thus tax-deductible for the private company. Moreover, the money can then be managed by the apprentice in such a way that he too may save significantly in taxes. For instance, he can reinvest it and pay himself for new independent work (aligned with the mission of the fiscal host) many years later, when he is temporarily out of a job and this income will therefore be taxed lightly. Note that it is better for the general public that the apprenticeships produce public results, which justifies the fiscal advantages of doing so.

On the belief in a superintelligence

We quote Dr. Sean Maguire's monologue about superintelligence, addressed to Will Hunting in the film Good Will Hunting [@will97]:

"I look at you; I don't see an intelligent, confident man; I see a cocky, scared shitless kid. But you're a genius, Will. No one denies that. No one could possibly understand the depths of you. But you presume to know everything about me because you saw a painting of mine and you ripped my fuckin' life apart. You're an orphan right? Do you think I'd know the first thing about how hard your life has been, how you feel, who you are because I read Oliver Twist? Does that encapsulate you? > > &#xNAN;Personally, I don't give a shit about all that, because you know what? I can't learn anything from you I can't read in some fuckin' book. Unless you wanna talk about you, who you are. And I'm fascinated. I'm in. But you don't wanna do that, do you, sport? You're terrified of what you might say. Your move, chief."

While it is possible in principle to find ways to replace human work in virtually any task imaginable (because our brains essentially do Bayesian inference, which can be done outside the human brain, faster and at a larger scale), that does not imply that there exists a superintelligence (let alone that we can create one): a machine that would autonomously solve problems that could not qualitatively be solved by humans.

Note that there are problems that take much more time for humans to solve than for a machine, and so a machine can certainly speed up scientific progress. But this has been so since the first computers. The belief in a superintelligence is different, in that the machine is qualitatively more intelligent than any other life form on Earth. The existing evidence is inconsistent with such a belief.

For precisely the same reason that we can create a machine that replaces human computations (of which there is now abundant evidence), we can in principle replace machines with humans (or many other animals). In fact, there are now biocomputers based on living brain cells. No one doubts that any machine can be replaced by a biocomputer, and that a biocomputer can be replaced by a group of humans working together. There are different speeds and costs, but qualitatively there is no difference.

So the belief in a superintelligence is equivalent to the belief that a group of humans is qualitatively more intelligent than a single human. Again, a group of humans can speed up progress, but there is nothing that we can understand collectively that we cannot understand as individuals. For instance, a health provider and an engineer know more together, but this is a quantitative difference, not a qualitative one. In particular, authoritarian regimes (such as China nowadays, where many strategic decisions are made by one or a few individuals) can have fast scientific progress.

Why is this so? The reason is that, no matter the computational resources and data available, a choice must be made about which concrete computations lead to qualitative scientific progress. And this no one knows, no group of humans and no machine, because if someone or something already knew it, it would be part of current knowledge and it would not be scientific progress. It is true that some computations are more likely to lead to scientific progress than others, and that a machine might be faster at estimating probabilities and thus make better decisions. But it is often the case that the improvement in the estimate of probabilities does not scale well with the amount of resources invested; that is, the probabilities estimated by a powerful supercomputer with access to vast amounts of data are only marginally better than those estimated by a single human being.

Most people believe that $P \neq NP$, that is, brute force does not work in general. Scientific breakthroughs are often driven by very biased (in comparison with the mainstream), stubborn individuals, who have access to fewer resources than the elite scientists but for some reason believe that they know something that everyone else doesn't. And it is this different knowledge that allows them to solve a few important problems, not their superior intellectual ability; otherwise they could spend all their lives producing breakthroughs. The breakthroughs stop once the advantage given by their different knowledge stops. The seed of this different knowledge is randomness, since people naturally have different priors (which is a form of knowledge), but it is much amplified by what they learn during their lives interacting with the real world. This requires some amount of intellectual ability, but that is not a decisive factor by itself; the decisive factor is how they interact with the real world (differently from their peers), which is a consequence of how they adapt to their environment. Machines can also interact with the real world in ways that humans cannot, but by and large there is no clear superiority of any machine over any single human when adapting to a specific real environment or when trying to solve a sufficiently complex problem.

Looking at animals, including other human species (such as the Neanderthals) and within our own human species, there is no evidence that more powerful brains are always a decisive advantage in the fight for survival. The dinosaurs had small brains, yet they were successful. Elephants, dolphins and whales have brains as powerful as ours and face extinction risk, while the mouse is abundant. The Neanderthals were as intelligent as we are. And there is no evidence that human evolution favors more intelligent individuals, in the past or even nowadays. Even from a strictly economic point of view, only some contexts/environments are more favourable to very intelligent people.

There is more evidence that intellectual ability is not a decisive factor in [@kahan_motivated_2013].

Another way to see it is that simulation (which is based on current knowledge of the world) is often not enough for scientific progress. If it were, then there would be no meaningful scientific progress, since we would already know enough about the world. We need feedback from the real world for the same reason that progress is possible: because we don't know enough about the real world. And to improve feedback from the real world, intelligence or economic resources are not a decisive factor. Living beings (made of carbon, not silicon) are adapted to the real world in a way that machines cannot be.

Certainly there are machines that complement our senses (a microscope, for instance), but we only know which ones in hindsight: there is no known method to innovate or to make innovations significantly more likely (otherwise it would not be called an innovation). Innovations happen by chance, as life (and its byproducts, including machines) progressively adapts to a dynamic environment. A single person can innovate more than the rest of the world combined (machines included); we do not know what we do not know.

That's why Waymo can have more success than Tesla in autonomous driving, with less data but more sensors (and thus a better adaptation to the environment), and why narrow AI only works on some specific problems. The solution is not general AI (since there are limited resources and brute force does not work), nor an AI that can create other AIs; otherwise these new AIs would also solve the problem by creating other AIs and so on, in an infinite recursion that would never really solve the problems we cannot solve. Fundamental theorem of software engineering: we can solve any problem by introducing an extra level of abstraction... except for the problem of too many levels of abstraction [@nqgbbwdycvw].

Mathematical proofs as a subcase of quantum-time evolution

A theorem is a logical statement, the mathematical theory defines the Hamiltonian (and thus the basis) implicitly or explicitly, and the proof is the calculation of the wave-function amplitude corresponding to the logical statement: the (absolute value of the) result is 1 if the statement is true and 0 if it is false. The advantage of this formulation is that it can be approximated iteratively and in infinitesimal steps (which favors optimization using differential calculus).
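A minimal formalization of this correspondence (the notation is ours and the construction is only a sketch of the idea described above, not a standard result):

```latex
% Sketch only; notation ours. The theory fixes a Hamiltonian H and a
% basis with one state |s> per logical statement s; |psi_T> encodes
% the axioms. The "proof" of s is the computation of the amplitude
\[
  A(s) = \left\langle s \,\middle|\, e^{-iHt} \,\middle|\, \psi_T \right\rangle,
  \qquad
  |A(s)| =
  \begin{cases}
    1 & \text{if } s \text{ is true in the theory,} \\
    0 & \text{if } s \text{ is false.}
  \end{cases}
\]
% The evolution factorizes into infinitesimal steps,
% e^{-iHt} = (e^{-iH\,\delta t})^{t/\delta t},
% so the amplitude can be approximated iteratively, which is what
% makes optimization by differential calculus possible.
```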

Undecidability corresponds to statements for which a Hamiltonian cannot be found, that is, the mathematical theory is incomplete (and it may not be possible to complete it in a consistent way). Inconsistency is when a Hamiltonian is defined implicitly and no Hamiltonian exists that satisfies the conditions, which implies that the probability space is empty (all statements are undecidable).

An AI that can find proofs for theorems is, after all, an algorithm to iteratively compute wave-function amplitudes corresponding to some statement. Thus it can be implemented by this algorithm.

Overfitting and survival of the fittest

The survival of the fittest is amplified by natural selection due to a changing environment, over many reproduction cycles. That is, often the chances of survival of the fittest organisms are not much larger than the chances of survival of most other organisms. However, even a small difference can become very large after many reproduction cycles.

A similar effect happens in Bayesian statistics: the maximum likelihood is often not useful. We must average (or at least select randomly, as in neural networks) over many possibilities with a reasonable likelihood, to obtain a reasonable solution (avoiding overfitting).

Only when we consider many independent, identical random experiments does the maximum likelihood become useful. But running independent experiments consumes more resources than a single experiment, which by itself is often incompatible with maximizing the likelihood. Thus there is often a tension: if we run few experiments, we can maximize the likelihood but the result is not a useful solution; if we run many independent experiments, then maximizing the likelihood becomes useful, but we cannot do it fully since we are consuming resources on running the many independent experiments.
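A minimal numerical sketch of the overfitting point (the model, polynomial degree and noise level are illustrative assumptions): with few data points, the maximum-likelihood fit of a flexible model chases the noise, while the Bayesian posterior mean, which averages over all weight vectors with reasonable likelihood, generalizes better.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression: a few noisy points, a flexible degree-9 polynomial.
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.shape)
grid = np.linspace(0, 1, 200)

def design(t, deg=9):
    # Polynomial feature matrix [1, t, t^2, ..., t^deg].
    return np.vander(t, deg + 1, increasing=True)

Phi, Phi_grid = design(x), design(grid)

# Maximum likelihood: plain least squares, interpolates the noise.
w_ml = np.linalg.lstsq(Phi, y, rcond=None)[0]

# Bayesian averaging: Gaussian prior w ~ N(0, tau^2 I), noise sigma^2.
# The posterior is Gaussian, and its mean already averages over all
# weight vectors with reasonable likelihood.
sigma2, tau2 = 0.2 ** 2, 1.0
S_inv = Phi.T @ Phi / sigma2 + np.eye(Phi.shape[1]) / tau2
w_bayes = np.linalg.solve(S_inv, Phi.T @ y / sigma2)

truth = np.sin(2 * np.pi * grid)
print("ML error:      ", round(np.abs(Phi_grid @ w_ml - truth).mean(), 3))
print("Averaged error:", round(np.abs(Phi_grid @ w_bayes - truth).mean(), 3))
```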

Analogously, the organisms that survive natural selection often have a will to maximize the survival odds of their offspring at all costs, even though this survival strategy is not fully compatible with the diverse set of organisms required by the natural selection mechanism that selects precisely such individuals.

There is an analogous tension in international relations: war tends to select states that seek maximum power at all costs, but such a strategy (realism) has no justification once the number of independent states in the world is low (we mean independent in the military sense; for instance, Portugal and Spain are both part of NATO, thus they are not independent from each other in the military sense). Note that the standard deviation in the central limit theorem is proportional to $\frac{1}{\sqrt{n}}$, so we need 100 (militarily) independent states to achieve a 10-fold reduction, which we do not have even if we consider a multi-polar world order.

There are many definitions of Artificial Intelligence. But one which by definition (and thus at all costs) allows the USA (which has only a small fraction of the world's population, with its wealth and power mostly concentrated in an even smaller fraction of that population) to continue to be much more powerful militarily than any other country, now and in the future, is logically inconsistent. There is no rational incentive for that, even from the point of view of the USA's military. China certainly does not have such an incentive: it has been growing extraordinarily in the last decades, quantitatively and qualitatively, now being the leader in most of the relevant technologies, while having a military much weaker than that of the USA.

"Only 10 years ago, most people in the West, particularly in Europe, and to some extent in the US, were saying that China would never catch up in technology, that the US had an unassailable lead, that Europe had an unassailable lead. Now, it is certainly true that China is ahead of the US in technology and way ahead of Europe in technology. I was there just a few months ago in Shenzhen, looking at the extraordinary Chinese technology companies there, and the speed of innovation in Chinese technology, not to mention the enormous size of China's supply chain, means that, in my view, China's not only ahead in most areas of technology with the key exception of computer chips, it's also in an unassailable lead, in my view. So DJI drones are far ahead of any other country in the world, and people need to really wake up and smell the coffee. Because this is a huge global trend that is affecting all of our economies and all of our prospects going forward." --- James Kynge Europe-China correspondent, Financial Times

Even if such an incentive existed, it would also apply to China's military, and then someone (the USA or China) would have to solve the problem of how to keep under control an adversary with a more powerful military; that would be the relevant problem leading to peace. Moreover, since China is on a path which risks leading it to become the most powerful military in the future, we in the West would have every incentive to help them solve that problem now, in a way acceptable to them (avoiding war now), and perhaps to us in the West in the future (avoiding war in the future). But we in the West are not doing it, and we are not doing it because the incentive to be much more powerful militarily than any other country, now and in the future, is logically inconsistent.

To significantly increase the chances of survival for most people, what we need are many, many independent domains of knowledge and of society. Then it makes sense for each of these domains to try to become very powerful. For instance, it is legitimate for the USA (or any other country) to try to become very powerful militarily at any given time, as long as that does not affect the independence of most domains of knowledge and of society. In that case, many other countries have a real chance of also becoming very powerful militarily in the future, since a powerful civil society (including in the USA) is mostly independent from the military (including the USA's military) and thus can create a new military in the future.

This would not be a perfect world: wars, conflicts, death, even genocides would still happen. And although there would not be much we could do to significantly reduce the rate of occurrence of such disasters, their scale would likely be smaller, because it would be a much more resilient world than one with few independent domains of knowledge. It would be a world that at least makes some sense, from the statistical, biological, military and rational points of view.

What happened in 2017

At the end of 2016, Donald Trump (an uncommon candidate) won the elections for the Presidency of the USA and immediately made public that he had called the Taiwanese leader, abruptly changing the decades-old USA policy with respect to China [@allen-ebrahimian_special_2021]. Note that up to 2017, the USA's strategy for economic growth was dependent on China's own growth and vice versa, through economic cooperation (which included the exchange of public and private information; for instance, the Chinese would sell smartphones to the whole world adapted to a mostly open-source operating system that by default accesses many important American web services, such as Google Search). In short, China would provide the economic support that the USA needed to become the (only) police of the world, with the USA providing the safety across the world that China needed to develop and to trade with the world. So 2017 was not only a geopolitical change, but mainly a change of strategy for future economic growth.

And the USA has been testing ever since multiple ways (through AI, for instance) to grow faster than China, which by itself is legitimate. That is what changed in 2017: AI became successful by necessity, because access to better computer chips was one of the few relevant technological advantages that the USA still had over China. As a consequence, the investment in AI grew a lot and went in large part to transformers, because transformers were the state of the art at the time, not because transformers allowed much more than previous designs of artificial neural networks. Of course, when we invest a disproportionate amount of resources in developing something, we usually get something useful as a result (but the investment does not necessarily pay off).
