*Krzysztof Zawisza*

**Introduction**

What I present in this text will be completely unbelievable and absolutely obvious at the same time. It will therefore be reasonable. Reasoning is based on obviousness, yet it always leads to unbelievable conclusions, because reason is higher than faith and the results of logical thinking always transcend beliefs.

Even at the dawn of science, the random, aleatory aspects of the Universe were noticed by the Greeks. According to Democritus, atoms move chaotically and randomly, and their collisions generate our visible world of changeable things[i]. However, the rigorous mathematical theory of probability that we currently use to describe random phenomena began to emerge only in the 17th century, in the considerations of Blaise Pascal and Pierre de Fermat. Work in this field was continued in later centuries by such famous mathematicians as Jacob Bernoulli, Abraham de Moivre and Thomas Bayes. Carl Friedrich Gauss, Peter Gustav Lejeune Dirichlet and Pierre-Simon Laplace used probability theory to analyze random variables and stochastic processes. However, it was not until the 20th century that probability theory was fully formalized, by Andrei Kolmogorov[ii].

Meanwhile, in the 19th century, important literary works appeared that testify to an emerging fascination with the interconnections of concepts such as randomness, destiny and fate. In Alexander Pushkin’s well-known novella “The Queen of Spades” (1834), the main character tries to acquire a system that will allow him to win at cards, while Prosper Mérimée’s “The Dice Game” (1830) is a story about a young man who decides to risk everything in a game of dice with the devil. In turn, in the 20th century, the theme of chance and fate is explored in works such as the short story “The Lottery” (1948) by Shirley Jackson or the novel “The Dice Man” (1971) by Luke Rhinehart. All these literary works allow us, in one way or another, to experience chance and randomness as one of the greatest mysteries of the Universe.

As Persi Diaconis, a Stanford professor of statistics, puts it:

“Our brains are just not wired to do probability problems very well”[iii].

However, solving probabilistic problems has nothing to do with our biological brain. You have to use your rational mind for this.

Below I will shed light on this age-old issue from a purely rational perspective.

**1 The so-called random events, or about the unity of the Universe and the meaning of our lives**

You probably know this probabilistic joke. Even if you do, read it:

*I used to make omelets with four eggs but the FDA said that one in four eggs may contain salmonella, so now I make omelets with three eggs*.

This probabilistic jest illustrates *implicite* quite well what probability theory is. Namely, it orders what can never be ordered, and knowledge of this order allows us to predict events that are not predictable. What we can predict, however, is the degree of their unpredictability.

Deterministic order is unity (in/of) multiplicity (uniformity in diversity), i.e. a whole composed of parts, while probabilistic order is the inversion of deterministic order: probabilistic order is a part separated from the whole. Probabilistic order is a multiplicity (in/of) unity, e.g. many different possible ways of realizing one event. Probabilistic order is a partial order or, to put it another way, a false order, i.e. an imitation of the true order. No falsehood is yet true, but every falsehood already contains grains of truth. Therefore, falsehood is an image of truth (falsehood is imaginary truth), and probability is probing the ability for truth.[iv]

Do you know the Polish joke below? Even if you do, read it:

A math teacher in a high-school probability class presses a student for an answer. The student has no idea, so the teacher takes pity on him and asks:

– What is the probability of rolling a 6 on a die?

– One – the student replies.

– But think about it. After all, the die has 6 sides – the teacher helps him.

– One – the student insists.

The teacher, getting nervous, hands the student a die:

– Take it. Throw!

The student rolls a 6.

The teacher is astonished and says:

– Throw it again!

The student rolls a 6 again. The teacher’s eyes widen; he takes the die and throws it himself: the result is a six.

– You’ve got a six[v] – he says, writing a grade of 6 in the class register. – Sit down!

What do you think about the probability theory exam presented above? Improbable, right? It is very difficult to roll the same number three times in a row with a die (and exactly the desired number at that). This is what Edgar Allan Poe wrote about it in one of his most mysterious novellas, *The Mystery of Marie Rogêt*:

“Nothing, for example, is more difficult than to convince the merely general reader that the fact of sixes having been thrown twice in succession by a player at dice, is sufficient cause for betting the largest odds that sixes will not be thrown in the third attempt. A suggestion to this effect is usually rejected by the intellect at once. It does not appear that the two throws which have been completed, and which lie now absolutely in the Past, can have influence upon the throw which exists only in the Future. The chance for throwing sixes seems to be precisely as it was at any ordinary time—that is to say, subject only to the influence of the various other throws which may be made by the dice. And this is a reflection which appears so exceedingly obvious that attempts to controvert it are received more frequently with a derisive smile than with anything like respectful attention. The error here involved—a gross error redolent of mischief—I cannot pretend to expose within the limits assigned me at present; and with the philosophical it needs no exposure. It may be sufficient here to say that it forms one of an infinite series of mistakes which arise in the path or Reason through her propensity for seeking truth *in detail*”.[vi]

Poe is an American master of horror literature, but also (what his readers may not always know) a thinker and scientist who, among other things, anticipated the solution to Olbers’s famous astronomical paradox[vii]. In the passage quoted above he is also clearly trying to solve something, although at first it is not clear what exactly he means. But when we take a closer look, we can see that Poe simply rejects the possibility of the existence of so-called *independent events*:

“I repeat, then, that I speak of these things only as of coincidences. And farther: in what I relate it will be seen that between the fate of the unhappy Mary Cecilia Rogers, so far as that fate is known, and the fate of one Marie Rogêt up to a certain epoch in her history, there has existed a parallel in the contemplation of whose wonderful exactitude the reason becomes embarrassed. I say all this will be seen. But let it not for a moment be supposed that, in proceeding with the sad narrative of Marie from the epoch just mentioned, and in tracing to its dénouement the mystery which enshrouded her, it is my covert design to hint at an extension of the parallel, or even to suggest that the measures adopted in Paris for the discovery of the assassin of a grisette, or measures founded in any similar ratiocination, would produce any similar result. For, in respect to the latter branch of the supposition, it should be considered that the most trifling variation in the facts of the two cases might give rise to the most important miscalculations, by diverting thoroughly the two courses of events; very much as, in arithmetic, an error which, in its own individuality, may be inappreciable, produces, at length, by dint of multiplication at all points of the process, a result enormously at variance with truth. And, in regard to the former branch, we must not fail to hold in view that the very Calculus of Probabilities to which I have referred, forbids all idea of the extension of the parallel:—forbids it with a positiveness strong and decided just in proportion as this parallel has already been long-drawn and exact. This is one of those anomalous propositions which, seemingly appealing to thought altogether apart from the mathematical, is yet one which only the mathematician can fully entertain”[viii].

The author of *The Mystery of Marie Rogêt* clearly suggests that even such “independent” events as separate dice rolls are somehow related to each other, and that is why their successive results cannot be completely arbitrary, and successive throws cannot be considered (as today’s probabilists would like) in isolation from the previous ones.

A modern mathematician will tell us that Poe is clearly wrong and that the chance of getting a six when rolling a die is always the same, regardless of how many sixes came up in previous rolls. Interestingly, however, the outstanding Enlightenment scholar Jean le Rond d’Alembert, who lived before Poe, was of a similar opinion.

D’Alembert – a famous physicist, mathematician, philosopher and music theorist – noted in the fourth volume of his *Opuscules mathématiques*, published in 1768, that **if two events A and B are equally probable and A has occurred several times in a row, then the occurrence of event B is now physically more probable**.[ix]

Views of this kind are rejected *a priori*, as I have already noted, by most modern mathematicians (and physicists). But if we take a closer look at these views, their correctness (and concreteness) will become obvious. Well, the fact that “the throws which have been completed, and which lie now absolutely in the Past, can have influence upon the throw which exists only in the Future” [E.A. Poe] is a(n) (onto)logical derivative of another fact, namely that the present, separating the past and the future, is *ex definitione* a synthesis of the past and the future and mutually replaces them. The past seen in the present is the future of the future, and the future is the past of the past. Every past was, in the past, a future, and every future will, in the future, become a past. Therefore, the past is the reverse (i.e. reversal) of the future and *vice versa*. Therefore, if the obverse came up during a coin toss, its reverse will now have a greater chance of being thrown. And *vice versa*. Looking into the future from the present, we see the past reversing (that is why the wheel of life keeps turning). Therefore, if in the past heads had the ontological advantage when tossing a coin, now tails will have such an advantage (and *vice versa*).

This is what the Law of Large Numbers actually says. It is well known to all probabilists, but usually interpreted one-sidedly. Meanwhile, every correct (i.e. complete) interpretation – like every complete, correct coin – has two different sides.

The Law of Large Numbers adopted by mathematics states that if we repeat a random experiment a “large number” of times, the relative frequency of a given result will agree more and more closely with the theoretically calculated probability of that particular result.

This regularity was noticed by many researchers who performed long series of coin tosses. Buffon flipped a coin 4,040 times and got heads 2,048 times, so the frequency of heads was m/n = 0.50693. Pearson flipped a coin 24,000 times and got a heads frequency of 0.5005. It is quite clear that the observed frequencies cluster around the number 0.5[x].
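The convergence that Buffon and Pearson observed is easy to reproduce in a short simulation. The sketch below is only an illustration (plain Python with a fixed seed for reproducibility), not part of the historical record:

```python
import random

random.seed(1)  # fixed seed so the illustrative run is reproducible

def heads_frequency(n_tosses: int) -> float:
    """Toss a fair virtual coin n_tosses times; return the observed frequency of heads."""
    heads = sum(random.randint(0, 1) for _ in range(n_tosses))
    return heads / n_tosses

# As with Buffon's 4,040 tosses and Pearson's 24,000, the observed
# frequency settles near 0.5 as the number of tosses grows.
for n in (100, 4040, 24000, 1_000_000):
    print(n, round(heads_frequency(n), 5))
```

The longer the series, the smaller the typical deviation of the frequency from 0.5, exactly as the Law of Large Numbers predicts.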

This is because if I toss a fair, symmetrical coin (an example of a so-called *random experiment*), the probability of getting heads is 50%. However, if I toss a coin only 10 times, I may get, say, 6 heads and 4 tails (i.e. 60% heads). This is quite a possible outcome: mathematics tells us that the chance of getting at least 6 heads or at least 6 tails in 10 tosses is approximately 0.75, so it is very high. If I flip a coin 100 times, the probability of getting at least 60% of one type of outcome (heads or tails) is still relatively high: it amounts to ca. 0.0569, i.e. over 5%. But if I toss the same coin very many times – let’s say 10,000 times – getting about 6,000 heads and 4,000 tails (or 6,000 tails and 4,000 heads) is practically impossible. The chance that at least 60% of the results will be of one type, i.e. heads (or tails), is then approximately 5.5 ∙ 10^{-89}. This is what is called a negligible chance. When a symmetrical coin is properly tossed 10,000 times, the highest practically admissible number of outcomes of one type (heads or tails) is usually taken to be 5,150. The chance of getting more than 5,150 (or fewer than 4,850) heads or tails is less than 0.003. As the Polish probabilist Jerzy Zabczyk wrote:

“With *n* = 10,000 tosses of a symmetrical coin, with probability practically equal to 1 the number of heads should be between 4,850 and 5,150. If such a situation did not happen, we would have grounds to assume that the coin tossed was not symmetrical”[xi].
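The figures above can be checked directly. The sketch below computes the exact binomial tails for 10 and 100 tosses, and uses the normal (Gaussian) approximation for 10,000 tosses, which reproduces the ≈ 5.5 ∙ 10^{-89} figure:

```python
from fractions import Fraction
from math import comb, erfc, sqrt

def prob_one_side_at_least(n: int, k: int) -> float:
    """Exact probability that heads OR tails occurs at least k times in n fair
    tosses (for k > n/2 the two events are disjoint, so we may just double)."""
    tail = sum(comb(n, j) for j in range(k, n + 1))
    return float(Fraction(2 * tail, 2 ** n))

print(prob_one_side_at_least(10, 6))    # ≈ 0.754 ("approximately 0.75")
print(prob_one_side_at_least(100, 60))  # ≈ 0.0569 (over 5%)

# For 10,000 tosses a 60% share is 20 standard deviations out (sigma = 50),
# and the two-sided normal tail gives the quoted negligible chance:
print(erfc(20 / sqrt(2)))               # ≈ 5.5e-89
```

At 20 sigma the exact binomial tail and the normal approximation differ by a constant factor, but both are negligibly tiny, which is all the argument in the text needs.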

To show how the probability of percentage deviation from the predicted mean changes with increasing number of coin tosses, we have compiled the following two tables:

**Tab. 1** Series of 10,000 coin tosses

| N | Nσ | Probability that the number of heads/tails falls within [5000 – Nσ; 5000 + Nσ] |
|---|------|------------------------------------------------------------------------------|
| 1 | 50 | 0.683 |
| 2 | 100 | 0.955 |
| 3 | 150 | 0.997 |
| 4 | 200 | 0.99994 |
| 5 | 250 | 0.9999994 |
| 6 | 300 | 0.999999998 |
| 7 | 350 | 0.999999999997 |
| 8 | 400 | 0.9999999999999988 |
| 9 | 450 | 0.9999999999999999998 |
| 10 | 500 | 0.99999999999999999999998 |
| 20 | 1000 | 1 – 5.507248237212467390152e-89 |
| 40 | 2000 | 1 – 7.311787081830059407498e-350 |
| 60 | 3000 | 1 – 1.237573028643398497399e-784 |
| 80 | 4000 | 1 – 1.8048460032409472178188e-1392 |

The *Keisan Online Calculator* at keisan.casio.com was used to perform the calculations.
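The probability column of Tab. 1 can also be reproduced from the normal approximation alone: for *n* tosses the standard deviation is σ = √n/2, and the chance of landing within ±Nσ of n/2 is erf(N/√2). A quick check with Python’s `math` module (for the large-N rows the complement erfc is numerically clearer):

```python
from math import erf, erfc, sqrt

n = 10_000
sigma = sqrt(n) / 2                     # = 50 for 10,000 tosses
for N in (1, 2, 3, 4, 5):
    print(N, int(N * sigma), round(erf(N / sqrt(2)), 7))
# first line: 1 50 0.6826895

# For the extreme rows, print the complement 1 - P instead:
for N in (10, 20):
    print(N, erfc(N / sqrt(2)))         # ≈ 1.5e-23 and ≈ 5.5e-89
```

Beyond roughly N = 27 the complement underflows IEEE double precision, which is why an arbitrary-precision calculator such as Keisan is needed for the last rows of the table.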

**Tab. 2** Series of 20,000 coin tosses

| N | Nσ | Probability that the number of heads/tails falls within [10000 – Nσ; 10000 + Nσ] |
|---|------|--------------------------------------------------------------------------------|
| 1 | 71 | 0.683 |
| 2 | 141 | 0.955 |
| 3 | 212 | 0.997 |
| 4 | 283 | 0.99994 |
| 5 | 354 | 0.9999994 |
| 6 | 424 | 0.999999998 |
| 7 | 495 | 0.999999999997 |
| 8 | 566 | 0.9999999999999988 |
| 9 | 636 | 0.9999999999999999998 |
| 10 | 707 | 0.99999999999999999999998 |
| 20 | 1414 | 1 – 5.507248237212467390152e-89 |
| 40 | 2828 | 1 – 7.311787081830059407498e-350 |
| 60 | 4243 | 1 – 1.237573028643398497399e-784 |
| 80 | 5657 | 1 – 1.8048460032409472178188e-1392 |

It is also worth noting that the probability of tossing 10^4 = 10,000 heads (or tails) in a row is (1/2)^(10^4) ≈ 5 ∙ 10^{-3011}. Given such an abstractly low probability, we obviously assume that, with proper throwing, it is impossible to toss 10,000 heads in succession with a correct coin. At the same time, however, it should be noted that the a priori probability of each particular sequence of 10,000 tosses is exactly the same: ≈ 5 ∙ 10^{-3011}. We accept some outcomes as possible (those containing approximately equal numbers of heads and tails), while rejecting others as impossible, despite their equal probability.
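Since (1/2)^10000 underflows ordinary floating point, the quoted ≈ 5 ∙ 10^{-3011} is easiest to verify through logarithms; a minimal check:

```python
from math import floor, log10

log_p = -10_000 * log10(2)       # log10 of (1/2)**10_000
e = floor(log_p)                 # decimal exponent
m = 10 ** (log_p - e)            # mantissa in [1, 10)
print(f"(1/2)**10000 ≈ {m:.2f}e{e}")   # ≈ 5.01e-3011
```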

But what does this mean in practice? Well, if we toss a symmetrical coin many times and at the beginning we get many more heads than tails, then in the further course of the tosses we will almost certainly start getting fewer heads than at the beginning (the percentage advantage of heads must decrease). Otherwise the laws of probability would be violated.

Similarly, if (as in the joke quoted at the beginning of this section) we throw a six-sided die and at the beginning of the experiment we roll two sixes in succession, then at some point the sixes must stop coming up, and only once results other than 6 have appeared can a 6 appear again. As the theoretical laws of statistics say in practice, **the probabilistic (i.e. partial) nature of reality strives for balance (i.e. for the wholeness of all parts)**. This is what E.A. Poe clearly had in mind.

Of course, the question arises whether the laws of statistics have to hold true in practice. They do. Probabilists and statisticians themselves point out that the laws of probability theory apply only with a certain probability, and therefore there are accidental deviations from forecasts made on the basis of the laws governing accidents. However, as the famous statistician Jean Martin wrote:

“The use of statistical calculations actually allows us to predict the future to some extent. The success of insurance company predictions is proof of this”[xii].

As noted by the German thinker and gnostic Thorwald Dethlefsen:

«There is no such thing as chance. Behind every event stands a law. We cannot always recognize this law at first sight, but that does not entitle us to deny its existence. Stones fell downward according to law even in those times when people had not yet discovered the law of falling bodies.

It is probably once again the irony of fate that the professional advocates of chance, the statisticians, insist on proving with their own hands, with methodical meticulousness, the untenability of their concept of chance. For a statistician believes that when a die is thrown it can show a 3, a 5 or some other number only by chance. Yet if one throws the die long enough, the totality of all the numbers yields a law-governed curve, called the normal distribution. What a miracle reveals itself here! The summation of non-law-governed individual events yields a regularity. After all, the law-governed trajectory of a body is not composed of random individual segments either. If the statisticians were right, the following proposition would also have to hold: the more often one miscalculates in a computation, the more correct the result becomes».

As well as:

«A cosmos, however, is governed by laws and has no place for chance.

Chance, as an incalculable and non-law-governed occurrence, would transform any cosmos into chaos. If we build a computer, it constitutes in itself a small cosmos: it is constructed according to laws, and its functioning depends on adherence to these laws. If one arbitrarily solders into its circuits a few transistors, capacitors and resistors that do not belong to the lawful circuit diagram, these built-in representatives of chance transform the entire cosmos into chaos, and the computer no longer works meaningfully. The same applies to our world. At the very first chance event our world would cease to exist»[xiii].

It is worth mentioning G.W. Leibniz’s Principle of Universal All-Union from his *Théodicée*[xiv]:

“(…) determination does not eliminate (…) randomness (…)”,

and:

“(…) randomness is perfectly compatible with the inclinations or reasons that contribute to determination”.

Baruch Spinoza’s statements from his *Ethics* have a similar meaning:

“Nothing in the universe is contingent, but all things are conditioned to exist and operate in a particular manner by the necessity of the divine nature”[xv].

Let us now consider how all this relates to Poe’s probabilistic problem (and to the content of the joke presented previously), i.e. to the practical chance of rolling a six for the third time in a row with a six-sided die. According to classical probability theory, the chance of this result is supposedly the same as on the first throw and is still exactly 1/6 = 0.1(6). We would therefore have to take the possibility of obtaining this result as seriously as we did the first time. From a statistical point of view, if we obtain the third six in a row, we can perform the a posteriori reflection often practiced in stochastics, as a result of which we will conclude that an event with an initial probability of (1/6)^{3} = 1/216 ≈ 0.0046 [i.e. almost half a percent] has occurred. This probability is still high enough that we will not have (contrary to Poe’s statement) strong enough grounds to assume that some “pathology” has occurred: we do sometimes observe events with a chance of half a percent. However, if we now roll the die a fourth time and get a six again, we will find *a posteriori* that an event with an *a priori* probability of (1/6)^{4} = 1/1296 ≈ 0.0008 has occurred. This probability is extremely low, several times lower than the value of 0.003 assumed in stochastics as the threshold of what we still consider “possible to occur in practice”. We will therefore conclude that something “extraordinary” has happened, which, in accordance with the accepted rules of statistics, makes us assume that the die we are throwing is not fair (e.g. loaded on one side). Thinking the other way around: if we are sure of our die and of the correctness of the throws, we should assume that a six will not fall a fourth time in a row. As a result, after three consecutive throws in which sixes were rolled, we will be ready to “bet the largest odds” [as Poe put it] that the fourth time a six will not be rolled.
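The arithmetic in this paragraph can be checked in a couple of lines, using exact fractions first and decimals second:

```python
from fractions import Fraction

# A priori chance of k sixes in a row with a fair die:
for k in (3, 4):
    p = Fraction(1, 6) ** k
    print(k, p, float(p))
# k=3 -> 1/216  ≈ 0.0046  (still above the 0.003 working threshold)
# k=4 -> 1/1296 ≈ 0.0008  (below it)
```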

And such a bet will work well in practice. “The success of insurance company predictions is proof of this.” This means, however, that we treat the fourth successive throw of the die completely differently from the first, and the results of all four throws are *de facto* treated not as “independent” but as related to each other, or (which in this case comes to the same thing) as forming an indivisible whole.

Mathematicians – like probably all reasonable people – treat the elements of a series of dice rolls in similar cases the same way: not as independent, but as related. Therefore, after a fourth roll that produces the fourth six in a row, even the staunchest supporters of the concept of independent events will begin to suspect that “something is wrong with the die”, even though they would not suspect it after the first roll (and probably not after the second one either). They will therefore treat the fourth successive throw differently from the first or second, even while loudly declaring that each successive throw is independent of the others and must be treated “as if it were the first”. Mathematicians do not admit to anyone (not even to themselves) that they treat “independent events” as dependent. For there is no physical connection between the results of successive throws; the only relationship here can therefore be a meta-physical (ontological) relationship. And a positivist or postmodern scientist will not openly admit that metaphysics [*meta ta physica*] is a logically necessary continuation of physics, or that ontology is the (onto)logical basis on which a correct science of physics (if it is to be a science) must rest.

The problem is that *a priori* probability (i.e. probing the ability) concerns the future (refers to future events), while calculating *a posteriori* probability is a reflection on past events, i.e. events already realized (and therefore real). The future is the time that has not come true yet, i.e. it is still false for now. Every falsehood contains only grains of truth. Therefore, the future is not yet a full whole: the future has only loose parts, and there is no connection between them yet. However, if certain events have already occurred, then the relationship between them has come true, i.e. become real (actual), and it will have a necessary, i.e. determining, influence on the future. The past, once it is determined, is capable of determining. Therefore, from the perspective of the future, the relationship between successive throws of the die does not yet exist; but when some of these rolls have already been made and have become the past, this part of the rolls will partially determine future relationships. For falsehood does not determine truth, but truth determines falsehood. For no truth depends on falsehood, but falsehood always refers to some truth and is determined by it.

G.W. Leibniz wrote that truth, or reality, is an inseparable ontological whole, all elements of which are interconnected, which his contemporary Polish commentator, Jerzy Perzanowski, summarizes as follows:

„Brak związku wprost jest także związkiem”[xvi] [The lack of a direct relationship is also a relationship].

Therefore, there are two types of relationships in the universe (the universal relationships): direct relationships (cause-effect relationships) and indirect relationships (cross-relationships).

Do you know the joke below?

A mathematician is afraid of flying because of the small risk of a terrorist attack. So, on every flight, he takes a bomb in his hand luggage. “The probability of there being a bomb on a plane is very low”, he reasons, “and the probability of there being two bombs on the same plane is virtually zero”.

The ridiculousness of the above joke lies, as you can immediately see, in the confusion of two orders – the probabilistic and the deterministic. Or, to put it another way, it is a mixture of two types of relationships: a direct, physical relationship (the bomb is in the mathematician’s luggage because he personally, physically placed it there) and an “indirect”, metaphysical relationship (a bomb is in the luggage compartment of the particular plane in which our mathematician is flying, because “it could have happened – although it didn’t have to”). Of course, the fact that the mathematician placed a bomb in his luggage has no influence on whether a terrorist placed his bomb in the cargo hold of the plane.

If the mathematician placed a bomb in his luggage, then the fact that a bomb is now in the hold of the plane is, from the mathematician’s perspective, a certainty, not a probability. Truth is always certain, but what is merely probable is only probing the ability for certainty, i.e., it is a possibility. Necessity is something which must be, while possibility “must be able to be”. Therefore, possibility is the basis of certainty, and probability is the basis of truth. But truth (i.e. reality) is the realization of probability. Therefore, reality is a fulfilled (complete) probability, and what is only probable (i.e. what is only probing the ability) is only an empty shell of reality, something that may or may not be true. That is why we usually refer probability to the future, because the future is a time that has not yet come true, i.e. has not yet been fulfilled. Therefore, certainty and truth are holistic, while probability is only partial. Probability is a part of certainty. If something is somewhat probable, it is only that certain.

**2 Three possible interpretations of probability as three aspects of time**

From everything we have seen so far, it follows that there are three main possible perspectives on probability.

a. The perspective of the past, i.e. the perspective of experience. If, in the experiment of multiple tosses of a symmetrical coin, we get heads several times in a row, then, based on this previous experience, we predict that the next toss will also come up heads. If in this perspective we get *n* – 1 heads in a row, then the probability of getting heads on the *n*-th toss is (by Laplace’s rule of succession):

P_a(*n*) = *n*/(*n* + 1).

This probability tends to 1 as *n* goes to infinity.

b. The future time perspective, i.e. the predictive perspective. Looking at our experiment of repeatedly flipping a coin from the perspective of the future, we see the possible outcomes of this experiment as mutually equal sets of heads and tails. If so far we have had only heads, we have fewer and fewer heads left to throw and, relatively, more and more tails. Therefore the chance of getting heads, classically associated with the ratio of the number of favorable events to the number of possible events, is (with *n* – 1 heads already thrown out of a balanced whole of 2*n* tosses, one head remains among the *n* + 1 outstanding tosses):

P_b(*n*) = 1/(*n* + 1).

This probability tends to zero as *n* approaches infinity.

c. The current perspective, i.e. the transitional or temporary one. It is a present-time perspective that is a synthesis of the future and past perspectives. The probability of getting tails/heads in a single toss, seen from this perspective, can be calculated – by analogy to the way we calculate the probability of two events A and B occurring together, P(A ∩ B) = P(A) ∙ P(B|A) – as the conditional probability of an *n*-th head given the *n* – 1 heads already thrown:

P_c(*n*) = (1/2)^{*n*} / (1/2)^{*n* – 1} = 1/2 [for every natural *n*].

As you can see, the *current probability* is constant and always equals ½. Its constancy, however, results from limiting the perspective to the temporal horizon of the present moment.

The above threefold distinction shows that the method of calculating the probability of events such as a single coin toss changes depending on the time perspective. This is understandable, because the very essence of the concept of probability is related to time. Accordingly, if we are dealing with the shortest (i.e. instantaneous) time perspective (we are interested only in this one, single coin toss), we should count the probability of throwing heads/tails as probability c). If we have a longer time perspective (several tosses), then if, for example, heads has just come up, it is worth betting on heads again (i.e. in practice, formula a) will be the adequate method of counting). In fact, identical outcomes of coin tosses tend to cluster rather than to be interleaved (the law of series). The probability that in the course of 100 coin tosses (or even just 10) we will get tails after every head and heads after every tail is extremely small – equal to the probability that we will throw only heads or only tails in succession. However, if we look at the experiment of multiple tosses of a symmetrical coin from a very long time perspective, then the probability of throwing heads when *n* heads have already been thrown is worth calculating from formula b). This could be called the law of balance (harmony). It says that all good luck (but also all bad luck) must come to an end, and the longer it lasts, the greater the chance of its ending.
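The claim that a perfectly alternating run is exactly as improbable as an unbroken run is easy to verify by brute force for a small *n*: each pattern corresponds to exactly 2 of the 2^*n* equally likely sequences.

```python
from itertools import product

n = 10
seqs = list(product("HT", repeat=n))  # all 2**10 = 1024 equally likely sequences

# Perfectly interleaved: every toss differs from the previous one.
alternating = [s for s in seqs if all(a != b for a, b in zip(s, s[1:]))]
# Unbroken run: all heads or all tails.
constant = [s for s in seqs if len(set(s)) == 1]

print(len(alternating), len(constant), len(seqs))  # 2 2 1024
```

Both counts equal 2 (the alternating pattern can start with heads or with tails, just as the constant pattern can be all heads or all tails), so both patterns have probability 2/2^*n*.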

The past perspective (the law of series) is undoubtedly a derivative of the existence of a certain inertia (i.e. unity) of phenomena, while the future (predictive) perspective is a consequence of the existence of multiplicity (the existence of various states and things), which implies the tendency to variability.

Of course, in any long run, the future perspective is the most forward-looking, i.e. the most prospective.

In this sense, both d’Alembert (and Poe) and modern mathematicians are right. Each of them simply approaches the issue from a different time perspective.

In every long term, our entire world is an (onto)logically interconnected whole. This results directly from the generally accepted laws of statistics; or rather, it is a hidden obverse of the accepted interpretation of these laws. We will show this below in examples in which (*nomen est omen*) an obverse and a reverse will appear.

These subsequent examples of the application of probability theory may seem more and more improbable as you read them. Truth, however, is never probable. Since truth is certainty, it surpasses all probability.

**3 Examples and “very far-reaching” conclusions**

**Example 1**

As we have already stated, for 10,000 tosses of a symmetrical coin the practically permissible deviation from 50%, i.e. in this case from 5,000, is 150 in the number of heads/tails thrown. Therefore, in this experiment we should get no more than 5,150 heads or tails and no fewer than 4,850. For 20,000 consecutive coin tosses, such a “practically possible” deviation is 212. So we should get neither more than 10,212 heads or tails nor fewer than 9,788 of them.

Now we come to the experiment. Alice correctly tosses a symmetrical coin 10,000 times. After performing the entire series of throws, she obtains the following result: 5,142 heads and 4,858 tails. This result is within the “practically possible deviation” of 150 for 10,000 throws. As a result of a posteriori reflection, Alice has no grounds to revise her belief that the coin is symmetrical and that the tosses were performed correctly. At that moment, Bob enters the room, sees the coin that was used in Alice’s experiment, and announces his intention to toss it another 10,000 times. Bob does not know Alice’s result and treats his series of throws as an isolated, single experiment. Alice, however, naturally treats Bob’s experiment as the second half of a single experiment consisting of 20,000 coin tosses. The thrower – provided the tosses are performed correctly – cannot influence the course and result of the experiment. Therefore, Alice knows that if Bob throws more than 5,070 heads, the total result of the entire experiment will not fall within the “practically acceptable range”: it will then amount to more than 10,212 heads in 20,000 throws. Knowing this, she offers Bob the following bet: she will stake 20:1 on the outcome that Bob throws no more than 5,070 heads. Is this bet fair? From the future (partly a posteriori) perspective, it can easily be calculated that, from Alice’s point of view, the expected value of her winnings is positive, so Alice has the right to assume that her bet is profitable. From Alice’s point of view, the chance of Bob throwing more than 5,070 heads is equal (a posteriori) to the chance of throwing more than 10,212 heads in 20,000 throws: it is 0.0015 (i.e. only one and a half per mille). Meanwhile, from Bob’s point of view, the probability that the number of heads in his 10,000 coin tosses will exceed 5,070 is 0.081 (over eight percent) – already the probability of an event “that has a real chance of happening”.
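The probabilities quoted in this example can be checked with the normal approximation to the binomial (the helper names are my own). The approximation gives about 0.0014 for Alice’s tail, which the text rounds to 0.0015, and about 0.081 for Bob’s:

```python
from math import erf, sqrt

def normal_tail(z: float) -> float:
    """P(Z > z) for a standard normal random variable."""
    return 0.5 * (1 - erf(z / sqrt(2)))

def p_heads_above(n: int, k: int, p: float = 0.5) -> float:
    """Normal approximation to P(Binomial(n, p) > k)."""
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    return normal_tail((k - mu) / sigma)

# Alice's a-posteriori view: more than 10,212 heads in 20,000 tosses.
p_alice = p_heads_above(20_000, 10_212)   # ~0.0014 (the text rounds to 0.0015)
# Bob's isolated view: more than 5,070 heads in his 10,000 tosses.
p_bob = p_heads_above(10_000, 5_070)      # ~0.081

# Alice stakes 20 units against 1 that Bob throws at most 5,070 heads;
# from her viewpoint the expected value of the bet is clearly positive.
ev_alice = (1 - p_alice) * 1 - p_alice * 20
print(p_alice, p_bob, ev_alice)
```

The same tails can be computed exactly from the binomial distribution; the normal approximation is used here only because it matches the three-sigma reasoning of the example.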

So which of them, damn it, is right?

Good question.

Of course, it is always more probable that the one who has a broader perspective, i.e. a broader view of a given issue, is right, because having a broader view of an issue means having more complete knowledge about it. And he (or she) who knows more is wrong less often. Therefore, since we are moving in the sphere of probability (the realm of likelihood), Alice will be right, because her being right (due to her greater knowledge) is more likely than her not being right.

**Example 2** (another version of the previous bet)

Alice tossed a coin 10,000 times in London, and Bob is about to toss a coin 10,000 times in New York. Both coins used in the two experiments come from the same mint series, have been repeatedly checked for symmetry, etc. Alice’s result is the same as in Example 1, but Bob still does not know it. Alice calls Bob and offers him the same bet as before. Is Alice’s proposal fair? The solution is of course the same as above. Whether Alice carried out her part of the experiment in London, or in New York, or even on the planet Venus can have no significance for the course of the experiment if the symmetrical coin is tossed correctly; otherwise it would negate our notion of stochastic independence. However, as we can see, our common-sense intuitions related to this concept cannot be sustained anyway.

Common sense (i.e. intellect) – unlike reason – always ends up contradicting itself. Until common sense becomes reason (i.e. until consciousness becomes self-consciousness), it does not judge its own judgments and is therefore not in harmony with itself.

Worthy of note is the opinion of Edmund Husserl, who wrote:

“In relation to the natural system of thought of the sciences, one falls into promising theories which, however, always end in contradiction or absurdity”[xvii].

**Example 3**

Alice conducts an experiment by tossing a symmetrical coin 20,000 times. After tossing it 10,000 times and getting 5,142 heads and 4,858 tails, when she tried to toss again, the coin fell into a crack in the floor and was lost forever. Alice has no similar (or any other) coin at home. However, she wants to complete the experiment. She still has a raw disc from the same mint that issued the lost coin; the mint, however, did not have time to strike heads and tails on this disc. But Alice has a minting press at home, and she strikes heads on one side of the disc and tails on the other. Then she continues the experiment. Symmetrical, correct coins are not stochastically distinguishable, so the further, probabilistically predicted course of the experiment should be the same as in the previous examples: Alice should not get more than 5,070 heads in the series of the last 10,000 tosses (i.e. in the series of tosses of the new coin she minted, stochastically indistinguishable from the old one). The problem is that which side of the new disc Alice chose to make heads and which side she made tails was the result of her *arbitrary decision*.

In other words, if before the new coin was minted Alice had a significant excess of heads in a series of 10,000 tosses of the old coin, then from the point of view of the whole experiment it is now more likely, i.e. in the course of tossing the new coin, to see an excess of tails or – at worst – a relative balance of heads and tails. However, it was Alice’s arbitrary decision which side of the coin became heads and which became tails, and this choice should have no impact on which side the coin falls on. Therefore, if the coin now falls more often on the “left” side, which Alice decided to mark as heads, we will be dealing with a growing excess of heads over tails, which, according to interpretations commonly accepted in stochastics today, makes us suspect some abnormality. However, if Alice had decided, before continuing the experiment with the new coin, that the “left” side of this coin would be tails, we would now be dealing with the elimination of the excess of heads over tails – exactly what was to be expected in a properly performed experiment, and what should not lead us to suspect any anomalies.

It can be seen that in this case Alice’s free will (i.e. her choice) is crucial for the further course of this phenomenon of multiple coin tosses – that is, Alice’s belief as to which side of the coin is heads and which is tails. In other words, Alice’s imagination (i.e. representation) of the above is of fundamental importance for the course of the phenomenon. Therefore, the fact is that – at least in random phenomena, and these are what we de facto mostly deal with in the Universe – “faith moves mountains”: our ideas (i.e. faith) about some physical reality shape that physical reality. This fact will probably also explain the previously mysterious dependence of the course of many quantum phenomena on the knowledge (i.e. faith) of the observer. The name “conscious observer effect” in quantum mechanics should simply be replaced with the name “believing (or knowing) observer effect”. What matters here is the state of our will, i.e. knowledge, not consciousness (logical intellect). On the contrary, consciousness itself cannot (at least so far) explain the above effect, even when fully aware of it.

On closer consideration, this situation appears trivial. Such things as the sides of a symmetrical coin are, in the course of the above-mentioned stochastic processes, physically indistinguishable. They are distinguishable only in the sphere of our knowledge, i.e. our opinions or imagination. The course of such a stochastic process therefore depends not on the objective laws of physics, but on our subjective imagination. And since imagination is the same as will (if we dream about something, we have the will to achieve it), the course of certain experiments and random processes must depend on our will. That is what logic says.

**Example 4**

As in the previous example, except that now one side of the “raw” coin disc is painted black and the other side white. Alice decides in her mind (scil. in her will, i.e. in her imagination) to identify the white side of the coin with tails and the black side with heads. Whether Alice now mints or does not mint the images of heads and tails (respectively) on the corresponding sides of the coin cannot influence the frequency of the coin falling on one side or the other. Alice therefore no longer makes these “mintages”, content with her internal belief as to which side is tails and which is heads. Alice’s experiment is observed by Bob, who in turn decides to consider the white side of the disc as heads and the black one as tails. According to everything we have established so far, and according to logic, which side of the coin will now be stochastically preferred in the further course of the experiment must depend on which of the beliefs (Alice’s or Bob’s) is “more real”, i.e. actually stronger.

It follows that the look of the accidental (i.e. most common, most universal) part of the material universe is shaped by the strength of our belief in what this reality looks like [Mt 17:20]. Common sense may blindly deny this fact at first. However, it will thereby contradict itself, because it will base its denial of the power of faith not on rational grounds, but solely on the strength of its faith in the lack of the causative power of faith.

**Example 5**

The coin that Alice was tossing fell into a mouse hole. Alice has no more coins in the house, nor raw discs, but she does have a six-sided die, so she decides to continue her experiment of repeatedly tossing a coin by assuming that rolling 1, 2 or 3 means throwing heads, and rolling 4, 5 or 6 means throwing tails. If so far Alice has thrown heads more often in a long series of coin tosses, then there is obviously a greater chance of rolling a 6 than a 1. However, if Alice had decided the opposite (i.e. that rolling 1, 2 or 3 would mean getting tails), then there would now be a greater chance of rolling a 1 than a 6.
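Alice’s die-for-coin substitution can be written down as a small sketch (the helper name and labels are my own). Note that the code only encodes the labelling convention the example describes, not the contested dependence on past tosses:

```python
import random

def toss_via_die(low_faces_are_heads: bool = True) -> str:
    """One 'coin toss' simulated with a fair six-sided die.
    With low_faces_are_heads=True, rolling 1-3 counts as heads and
    4-6 as tails; passing False swaps the two labels."""
    roll = random.randint(1, 6)
    if low_faces_are_heads:
        return "heads" if roll <= 3 else "tails"
    return "tails" if roll <= 3 else "heads"

# Only the chosen labelling decides which physical faces count as heads;
# under standard theory either labelling yields heads with probability 1/2.
results = [toss_via_die() for _ in range(10)]
print(results)
```

Swapping the flag relabels the very same physical outcomes, which is precisely the arbitrary choice the example turns on.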

**Example 6**

If Bob assumes (as a result of an act of will, i.e. faith) that, while driving around the city, hitting a red light at a given (specific) intersection is equivalent to getting heads in a coin toss (and hitting a green light is equivalent to getting tails), then the probability that Bob will get to work on time may be related, for example, to the results of a coin toss performed by someone in Enlightenment-era Switzerland. For if Bob chooses to think that his current “traffic-light experiment” is a continuation of a Bernoulli coin-tossing experiment that ended with an excess of heads, then, from the predictive perspective, the chance that Bob hits a green light (i.e. “gets tails”) should currently be greater than the chance that he hits a red light (i.e. that he “gets heads”).

**4 Conclusion**

Reality, as you can see, is a logical network of internal connections, a structure much richer than our common sense would admit (common sense is not yet reason, but mere sense). So-called ordinary people (as opposed to “scholars”) have always known this. The truth about the logical unity of the Universe is universally felt (until it begins to be suppressed by common sense, which is able only to analyze, not to synthesize, and therefore only breaks into parts what is in fact one). This is reflected in literature. For example, in Stephen King’s novel *From a Buick 8*, one of its main characters reflects on this topic as follows:

“You can call that a coincidence if you want to, but I […] don’t believe in coincidences, only chains of event […]”[xviii].

Another popular author, Dean R. Koontz, declares at the beginning of one of his books a belief in a mysterious bond connecting the multitude of all things:

“I believe about the uncanny interconnectedness of things and about the profound and mysterious meaning in all our lives”[xix].

Paulo Coelho, in turn, writes in his famous *Alchemist*, among other things:

“There’s no such thing as coincidence, […] The mysterious chain that links one thing to another”[xx].

The non-existence of pure chance in the world is also the central, expressis verbis thesis of the well-known film *Signs* by M. Night Shyamalan, starring Mel Gibson[xxi]. It is also our strong belief in the purposefulness of the Whole Universe, and in the lack of room for chance in It, that produces the commonly repeated claim about one of the greatest contemporary scientific authorities: “Albert Einstein believed that everything has its cause and purpose, nothing happens by chance.” Attributing such opinions to Einstein (apart from some obvious colloquial simplifications) is not far from the truth. After all, the German scientist defended until the end of his life the view that *der Herrgott würfelt nicht* (“the Lord God does not play dice”):

«Die Quantenmechanik ist sehr achtung-gebietend. Aber eine innere Stimme sagt mir, daß das doch nicht der wahre Jakob ist. Die Theorie liefert viel, aber dem Geheimnis des Alten bringt sie uns kaum näher. Jedenfalls bin ich überzeugt, daß *der* nicht würfelt».[xxii]

[“Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory yields much, but it hardly brings us closer to the secret of the Old One. In any case, I am convinced that *He* does not play dice.”]

Already over two thousand years ago, the timid interlocutors of Socrates in the Platonic dialogues claimed that reason and logic are useful and important, and that we should use them to learn about reality, but that we must remember “to stop in these considerations where necessary” and must avoid going “too far” in the search for truth. This attitude always meets, of course, with the unequivocal disagreement of Socrates, who truly seeks the truth.

The attitude of today’s scholars is most often that of Socrates’ interlocutors. The Polish mathematician Tomasz Downarowicz, who in cooperation with the French mathematician Yves Lacroix discovered an important new fact in probability theory[xxiii] at the end of the last decade, writes about this attitude as follows:

„[…] żadne szanujące się czasopismo matematyczne nie chciało opublikować naszej pracy i była ona systematycznie odrzucana z adnotacją, że jest ona ‘bardzo interesująca matematycznie, jednak autorzy wyciągają zbyt śmiałe i zbyt daleko idące wnioski’”[xxiv]

[“no respectable mathematical journal wanted to publish our work, and it was systematically rejected with the note that it is ‘very interesting mathematically, but the authors draw too bold and too far-reaching conclusions’”].

As you can see, ‘boldness’, i.e. courage in searching for the truth, is today considered a defect in science, not a virtue. Likewise, it is now considered a defect when someone in search of the truth goes not merely near but “far” – far beyond the previously “accepted findings” in which, for various reasons, average scientists would like to believe.

Albert Schweitzer once gave a beautiful reflection on this subject:

“To become ethical means to begin to think sincerely. Thinking is the argument between willing and knowing which goes on within me. Its course is a naïve one, if the will demands of knowledge to be shown a world which corresponds to the impulses which it carries within itself, and if knowledge attempts to satisfy this requirement. This dialogue, which is doomed to produce no result, must give place to a debate of the right kind, in which the will demands from knowledge only what it really knows”[xxv].

Of course, in our stochastic considerations and conclusions we too have gone “too far” for common sense, i.e. for common opinion. However, these conclusions are logically necessary. This is what science and the search for truth are all about: our only limitation is reason and logic, not the fear of going “too far” beyond common opinions. For while in common sense it is opinions (acts of faith) that limit the logic of our thinking, in true thinking (i.e. reason) it is logic that must limit, i.e. shape, our opinions.

Ancient societies perceived reality in an ontological way (today often dismissed as “magical thinking”), similar to the one presented above in six probabilistic examples. The ancients saw the world as one interconnected whole. Today, this way of perceiving the world is considered “archaic”. Yet it was the representatives of those societies who created great and extremely sophisticated cultures and civilizations, and we are often only their epigones.

It’s time to get back to building a great, sophisticated civilization. This is what rational beings would do.

To answer the question posed at the beginning: the Universe as a whole is wholly deterministic; however, randomness is what determines it. The Universe is one, i.e. whole, i.e. complete, and therefore it is entire and defined in its entirety. And from this (holistic, therefore complete) perspective, the Universe is completely determined. But every complete (holistic) whole is the complex of all its parts and must consist of parts. For the part is the opposite, that is, the end, and therefore the limit, of the whole. Therefore, the parts of the complete whole complete its overall shape. And the shape of the whole is the place of the whole in reality (and it is the whole of reality). Therefore, every whole manifests itself in its parts, and the boundaries of the Universe are the parts of this Universe. Parts are aspects, or accidents. Having accidents necessarily as Its external features, the Universe is formally accidental, i.e. aleatory, and therefore random. However, this external (i.e. formal) randomness contains deterministic content.

We can say that the Universe is *absolutely* deterministic, but *relatively* random.

In other words, the Universe appears random in its inconstancy, but randomness itself is not random in its purpose. This idea is well illustrated by the *Rule of Chance* that I discovered. It says that in every time-space distribution of elements a rule can be found; it was already G.W. Leibniz who spoke of this explicitly. I found the mathematical rule that governs coincidences in the Universe, i.e. coincidentally rules the Universe. This rule is both simple and complex (i.e. quite simple in its complexity and very complex in its simplicity). It turned out that, despite the very good opinions this work received from various Polish professors, no scientific journal dared to publish it; it was rejected with the note that the work was very important, but that it should be published by “someone else, not us.”

The paper in question can be found here.

[i] This was written about by, among others, Sextus Empiricus in *Adversus mathematicos* and Epicurus in his *Epistulae*. The latter in particular emphasizes the impact of the randomness of atomic motion on our understanding of the Universe and life.

[ii] Kolmogorov, A. *Foundations of the Theory of Probability*, New York, US: Chelsea Publishing Company, 1950.

[iii] Mlodinow, L. *The Drunkard’s Walk: How Randomness Rules Our Lives*, Pantheon Books, 2008.

[iv] The ontological nature of certainty is different from that of probability, just as the ontological nature of truth and falsehood is different. While falsehood can only be falsified, verity can only be verified. Therefore, statistical (probabilistic) tests can only be used to reject a hypothesis, but cannot confirm it with certainty.

[v] Grade 6 in the Polish education system corresponds to the grade A in English-speaking countries.

[vi] Poe, E.A. “The Mystery of Marie Roget”, in: *Tales*, Wiley & Putnam, London, 1846.

[vii] Poe, E.A. *Eureka. A Prose Poem*, Geo. P. Putnam, New York, 1848.

[viii] Ibid.

[ix] d’Alembert, J. *Opuscules mathématiques*, Chez Briasson, Paris, 1768.

[x] Fisz, M. *Rachunek prawdopodobieństwa i statystyka matematyczna*, PWN, Warszawa 1967.

[xi] Zabczyk, J. „Teoria prawdopodobieństwa”, in: *Leksykon matematyczny*, Warszawa 1993 (my own transl.).

[xii] Martin, J. *Notions de base en mathématiques et statistiques: à l’usage des médecins, pharmaciens et biologistes*, Gauthier-Villars, 1963 (my own translation).

[xiii] Dethlefsen, T. *Schicksal als Chance*, Wilhelm Goldmann Verlag, München, September 1998, pp. 32-33.

[xiv] Leibniz, G.W. *Theodicy*, transl. by E.M. Huggard, Open Court Publishing Company, Peru, Illinois, 1985.

[xv] Spinoza, Benedict de, *Ethics*, Translated from the Latin by R. H. M. Elwes, 1883.

[xvi] Perzanowski, J. *Teofilozofia Leibniza*, in: G.W. Leibniz, *Pisma z teologii mistycznej*, Kraków 1994.

[xvii] Claesges, U. [ed.] & Husserl, E. *Husserliana*, Vol. XVI: *Ding und Raum. Vorlesungen 1907*, Den Haag 1973 (my own translation).

[xviii] King, S. *From a Buick Eight*, Scribner, 2002.

[xix] Koontz, D. *From the Corner of his Eye*, Bantam Publishing, 2000.

[xx] Coelho, P. *The Alchemist*, Harper Torch, 1993.

[xxi] Empiremovies, August 2, 2002.

[xxii] Albert Einstein, letter to his Berlin colleague Max Born, 4 December 1926; «Einstein und die Quantenmechanik im Licht neuerer Forschungen», in: *Einstein*, pp. 107-130, Ulm: Humboldt-Studienzentrum.

[xxiii] Downarowicz, T., Lacroix, Y. “The Law of Series”, *Ergodic Theory and Dynamical Systems*, Volume 31, Issue 2, April 2011, pp. 351-367.

[xxiv] Downarowicz, T. „Prawo serii w ujęciu matematycznym”, *Wiadomości Matematyczne*, Tom 47, Nr 1 (2011), pp. 1-16.

[xxv] Schweitzer, A. *The Philosophy of Civilization*, transl. by C.T. Campion, Actonian Press, Boston 2008.