(writing started 16th Sep 2019)
I want to start a series about what has been going on lately in French research circles around sustainability and resilience, because a lot of intelligent debate is happening there without the rest of the world participating, even though it concerns everyone. I will attempt to present all of it to you in English here.
However, this subject introduces unsettling knowledge that is delicate to present in a manner that won’t have the reader brush it off too quickly.
For the sake of credibility, let us start with the base: the method.
Epistemology
I want to introduce you to my personal favorite YouTube channel: Hygiène Mentale. I suppose you don’t need the translation (just swap the adjective and the noun and you get the English).
It presents elements of critical thinking. Notably, the video maker is a member of the «Observatoire zététique». Get the definition here: Zététique (wiki).
The channel is more or less a crash course in spotting fake news. Let’s list the points it covers.
Principle of non-prejudice
When encountering new elements of knowledge, for example a conference or an article, you must turn off all a-priori beliefs about the content, and especially all prejudice about the speaker. Dismissing elements because of the speaker is a classic ad-hominem fallacy. Equivalent unfortunate judgment biases will creep in if any prejudice isn’t silenced for the duration of the study. You can reactivate them at the end of the presentation, to update your judgment.
For example, in Episode 1, he joins a conference of mediums; the a-priori would be “it will be a pile of manipulative crap served to poor desperate idiots”, but you shut down that prejudice, listen with honest, innocent ears, and at the end confront the content with the evidence. One of the talks was about the awakening of human superpowers in recent years, thanks to babies being born with 3 DNA strands. The source seemed to be a British article. After a simple search: the baby was disabled, and it had nothing to do with psychic powers. This is just an example of cherry-picking and sensationalizing. The woman was selling consultations to discover children with psychic abilities.
Fallacies
Speaking of fallacies, some are treated in Episode 12. The most common are the “old pot” fallacy (ancient cultures are necessarily wiser); the “ad populum” fallacy (everybody knows / appealing to the agreement of crowds); the “appeal to authority” (it’s not because the dude wears a white lab coat that his speech is automagically true); and the “appeal to nature”, with the example of pseudo-medicinal plants.
He gives an example from marketing: on the back of a bottle of lemonade, the text is a pile of fallacies. “Ancient recipe!” “Loved by all the connoisseurs!”
A good list can be found on this website that I recommend: https://yourlogicalfallacyis.com/.
Knowledge versus belief
In episode 19 he speaks about logic applied to debate.
The legend of his diagram:
Black cloud: theories / models / explanations
Dots: observations
Orange cloud: the possibly knowable field
Grey zone: the never-knowable, e.g. outside of the observable universe, god…
[My personal 2 cents after losing too much time on philosophy.stackexchange: logical propositions can only be stated while the discussion stays inside the domain of knowledge. In other words: logic doesn’t apply to beliefs about unicorns or flying teapots.]
Next he presents a plane, with the axis of knowledge (proofs) vertical (Y) and the axis of belief horizontal (X).
He notes that the word “belief” has many senses, like “I believe I forgot my keys”, or “I believe in you”. But here it means: reconciled mental model / things accepted / “I think that“…
One is rational only when one holds the rigorous intellectual position of situating oneself on the central diagonal: «I believe because I know» – «I have no belief because I don’t know» – «I believe that not, because I know that not».
X: belief, Y: knowledge. On the left: ‘denial’, on the right ‘leap of faith’; and in green: the rational stances.
Later, he uses that 2D plane, cut into a 3×3 grid, to place beliefs about theism, and makes sure to precisely define theism, atheism and deism. He also corrects a common abuse of the word “agnostic”, which does not refer to the axis of belief, as commonly misused, but to the axis of knowledge.
In the above figure, belief is horizontal, knowledge is vertical, and the position of rationality is X=Y (the center diagonal). The filled-in cells are an example of applying this graph to religion.
Note that «I don’t believe» (having no belief) is very different from «I believe that not». “Not believing” means the absence of belief, which is the rational position when there is no proof. He states multiple times: «no proof = no reason to believe». He says this is a zetetician’s proverb.
Humans can live with false beliefs, because what matters for survival is the cohesion of the group. This is explained in the article “This Article Won’t Change Your Mind” by Julie Beck in The Atlantic. It is a very long article, so just two quotes:
«people wear information like team jerseys. Especially because a lot of false political beliefs have to do with issues that don’t really affect people’s day-to-day lives.»
«Having social support, from an evolutionary standpoint, is far more important than knowing the truth.»
(Farhad Manjoo is quoted a lot)
Proof-claim weight equivalence
“An extraordinary claim requires more than ordinary evidence”, also called the Sagan standard (but really from Pierre-Simon Laplace). It leads to “the burden of proof“, notably as explained by Bertrand Russell. And finally two principles: Occam’s razor, and Hanlon’s razor. I’ll let you google all that if you’re interested; one hint: the Invisible Pink Unicorn (IPU) is the same thing as Russell’s teapot.
The YouTube channel “la statistique expliquée à mon chat” (which received the 2017 Wernaers Prize for science popularization, and sometimes cross-references Hygiène Mentale) gives the following example:
if my friend claims to own a frog at home, I believe him, because the claim is not extraordinary.
But if my friend claims to possess a frog-who-speaks, I have serious doubts.
One of the interesting graphs Christophe presents:
This is titled “the levels of proof“, from weakest to strongest:
– lowest: “popular wisdom, rumors”
– next to lowest: “reported testimony” (I know someone who…)
– low: “personal anecdote” (it works for me)
————- scientific threshold ————–
– one scientific study (randomized, blinded experiment)
– replications (scientific consensus)
Media chain deformation
In episode 7 he asks «why are there so many falsehoods on the internet?» and «why are they shared so much without verification?».
When people are comforted in what they already believe they become much less critical.
He takes the example of a news story titled “serious revelations: 600 UK soldiers are coaching ISIS soldiers”, and turns it into a demonstration of his verification method.
The first thing to do: go back to the source.
In the article we read “the daily star said…”, with a link to the source at the bottom of the page, but the link never points directly to the Daily Star. Following the trail, the original finally seems to be the blog of a political activist. Finding the real Daily Star article required looking it up independently. The title was clickbait, but the body of the article was nuanced. It said: «”We know there are people with English accents and a military background training members of IS”». The original phrase with the figure 600 is a catastrophic example of bad writing; it makes so little sense I won’t even reproduce it here. Anyway, here is Christophe’s source analysis:
Imagine tracing backward from right to left. Your first encounter is a retweet.
He calls that the «peau de chagrin» effect (after Balzac’s ever-shrinking skin), where substance disintegrates between your fingers as you trace back.
Don’t hesitate to take a pen and reconstruct the graph of information flow on paper when doing that. Learn how to use Google reverse image search, or the date limiter in the query. Also use Google Scholar to limit results to scientific research when looking for linked studies.
Disinformation
In that same episode 7 he presents echo chambers, and how the above-mentioned sites all copy each other without checking the origin of the information. This can give a false impression of consensus to people drowned in the meanders of such networks.
Example of an illustration to serve nationalist propaganda:
With Google Chrome you can right-click the image and do “search web for this image“. Or manually copy the image link and go to reverse image search.
You’ll immediately find that this image was used by an insurance company in Australia for an advert campaign.
He suggests making your own blacklist: any site that didn’t bother verifying an article that is provably fake news, like this pole story, should go straight into your bin.
Later he peeks into a documentary whose makers managed to score an interview with the originator of one of those fake news stories. The man is casually confronted with two factual manipulations of his: a picture of a building with Algerian flags at the windows, claimed to be in Paris but actually taken in Algiers; and the same tactic with a crowded bus. He defends himself saying it’s “for illustration”. Another classic is a spreadsheet with phony, mistaken calculations comparing the earnings of a working couple versus an unemployed couple on full welfare, ending up with comparable incomes. Immediately shared in the tens of thousands by angry nationalists waiting for any crumb that would go their way. The message: look how those lazy migrants profit off the system.
A lie can travel halfway around the world while the truth is putting on its shoes
In modern times we prefer to invoke what has become “Brandolini’s law”, or the “bullshit asymmetry principle”, which states:
The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it
Think fast and slow
In episode 20 he introduces the two speeds of thought, referring to the book “Thinking, Fast and Slow” by Nobel laureate Daniel Kahneman, who describes a model with two systems of thought. The fast: innate, low-energy. And the slow: methodical, scientific, and tiring!
Christophe’s introduction aims to present that duality in order to solve a paradox that surprised his viewers: why should we grant more importance to an expert than to an anonymous source, when we were told that this is the fallacy of argumentum ad verecundiam (argument from authority)? It seems contradictory, but only because this precept hasn’t been placed within the two-speeds-of-thought system.
Kahneman explains that, like any evolved machine, we have energy-saving systems, and the fast thinking track is one of them. When we’re not awake or critical enough, that’s the active system. The second system, the slow track, is the costly one: taking significant amounts of time to research the subject, inspect the source of the data, compare hypotheses, read scientific publications…
When you don’t have time to switch to system 2, the energy-saving system 1 will have to do, and it is in this condition that dismissing the anon’s theory and giving credit to the expert is rational.
To decide when to use the costly system, Christophe says he built himself a little track switcher at the entrance: the bullshit-o-meter.
If he sees the usual signs, message formats that fake news has taken in the past, his warning lights flash: mentions of conspiracies, dramatic music, scary messages. When a warning light flashes, it is time to switch to the slow system, what he calls the “analytic” system.
In the analytic system, indeed, the messenger doesn’t matter, only the content of the message, because you can take the time to analyze the claim: cross-check it with other publications, read the bibliography’s history, etc.
He mentions cases where the fast system can make mistakes. For example: «there is aluminum in this vaccine, aluminum is toxic, therefore this vaccine is toxic». The reasoning sounds appealing, but there should be a warning light: “isn’t this essentialist reasoning? It’s the dosage we should be looking at, the risk/benefit balance”. And this takes time.
Perhaps the most important plug we should wire the bullshit-o-meter to is our own “confirmation bias” trigger. Indeed, we are all too quick to validate and share when the information goes our way. Christophe says that in fast thinking it is natural to confront new information with our a-priori knowledge; it’s Bayesian. But there is an asymmetry of tolerance for bullshit when the belief update goes OUR way. That’s why we must be doubly critical when a piece of information is too beautiful for us, before rushing for the share button. Say you have a liberal capitalist core in you, and some data about the failure of a parecon experiment seems like godsend news to comfort you and your group in your position: that’s when you should take a step back, switch to the slow system, and verify the source and the quality of the data. Likewise, if you have an anarchist core in you, and you come across a report of violent crowd-control police beating up demonstrators, it’s the perfect opportunity to prove you were right all along. No: it’s the perfect opportunity to verify first whether it’s fake news.
Inversely, picture news that hurts your convictions. Say you’re an environmentalist, and there is a headline like “NASA reveals there has never been more green”: the switch to the analytic thinking system will work very well. But with a title like “municipal water contaminated with tritium for 6 million people!”, you’ll be tempted to share very swiftly, to show your friends you’re right to think what you think. That’s exactly when the risk of sharing bullshit is highest.
Christophe goes on to say that the innate (fast) system is otherwise pretty good. It serves as a Judge Dredd against bullshit: an expeditious judge of true and fake. The fast system is very deeply rooted in us, the result of millions of years of evolution, and it made us a social species.
The slow (analytic) system, however, needs to be learned. It is not intuitive; it requires learning logical axioms, statistics, blind-experiment methodology, scientific language, epistemology, and suspension of judgment (the principle of non-prejudice).
Bayesian thinking
In episode 26 Christophe mentions how he stopped bickering with friends and family once he came to realize that each person’s knowledge base is an iterative evolution from proof to proof, leading to a certain position of beliefs. And of course, when confronting them, even if you present evidence, your “opponent” doesn’t change his or her mind right away. This is frustrating, but completely obvious in the wake of Bayesian thinking.
history
He introduces it by mentioning that for over a century, scientists from two clans, the frequentists and the Bayesians, have been clashing over the scientific method.
Frequentists who say: «probability of events according to a certain theory»
Bayesians who say: «plausibility of theories according to certain events»
He presents the history of the original formula: Thomas Bayes writes it in the margin of personal notes in the form of an equality of set intersections: P(A)·P(B|A) = P(B)·P(A|B). Laplace finds it posthumously and divides by P(A) to express it in the form known today: P(B|A) = P(A|B)·P(B) / P(A).
He publishes it in a memoir called «probability of the causes by the events».
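Laplace’s rearrangement is easy to sanity-check numerically; here is a minimal sketch with an arbitrary toy joint distribution (the numbers mean nothing, they just exercise the identity):

```python
# Toy joint distribution over two binary events A and B (arbitrary numbers).
p_joint = {(True, True): 0.12, (True, False): 0.28,
           (False, True): 0.18, (False, False): 0.42}

p_A = sum(p for (a, _), p in p_joint.items() if a)    # P(A)
p_B = sum(p for (_, b), p in p_joint.items() if b)    # P(B)
p_B_given_A = p_joint[(True, True)] / p_A             # P(B|A)
p_A_given_B = p_joint[(True, True)] / p_B             # P(A|B)

# Bayes' margin note: P(A)·P(B|A) = P(B)·P(A|B)
assert abs(p_A * p_B_given_A - p_B * p_A_given_B) < 1e-12

# Laplace's form, after dividing by P(A): P(B|A) = P(A|B)·P(B) / P(A)
assert abs(p_B_given_A - p_A_given_B * p_B / p_A) < 1e-12
```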
Plato’s cave & science
Christophe constructs a thought experiment using “which die” as a placeholder for “which hypothesis”: a game master secretly picks one of several dice we don’t see, rolls it, and announces a result. Which die was used?
The frequentist has to answer that every die still compatible with the result is an equally probable source → 33%. Christophe asserts that this is a case where the frequentist is wrong. Let’s demonstrate.
The analogy by construction here is: one die, one explanation, or one model, as such:
He says it is directly linked to critical thinking, because it determines the “foundations of decision making”, which we need in order to decide whether we believe this or that hypothesis.
Example: there is a noise coming from the ceiling. “What is that noise?”
– It’s a ghost. I believe at 2%.
– It’s the cold contracting the wood, making it crack. I believe at 32%.
– It’s my cat walking in the attic. I believe at 66%.
Here we reason in terms of plausibility of hypotheses; we evaluate and compare them.
The frequentist refuses to reason this way; he takes one hypothesis, and if the results of the experiment are too implausible under it, the hypothesis can be rejected. This method is the norm in scientific research. He explains that it was suggested by Ronald Fisher in the thirties, and that Fisher found the Bayesian practice too complex and too subjective. Christophe agrees, but argues that “complex” is not an argument anymore: it’s 3 multiplications, 1 division and 1 addition. And “subjective”, indeed! Because of the presence of P(B), the a-priori probability of the hypothesis being tested. But that’s not a bad thing at all, since we already have plenty of subjectivity everywhere if you take a global view of science within society:
I would translate it, but it’d be letter-to-letter redundant. You can perfectly read this. Just remove some silent endings –e | change –f with –ve | and –ie with –y | and -ique by -ic.
What we retain is that science is already used in a subjective way. What Christophe argues is that he would rather incorporate this whole system INTO science itself, were we to be Bayesians!
there is a lot to gain by incorporating this subjectivity into the heart of the method, transparently, instead of trying to hide it under the carpet.
Back to the dice experiment, the real answer was:
On the left you’ll see the frequentist probabilities, and they will be our Bayesian a-priori. A-priori, all dice are equal: 20% chance each (5 dice → 1/5). A 7 is drawn. What’s the update to our beliefs? Well, we can infer that D4 and D6 are impossible. We can say that for D20, a 7 is just one face out of twenty → 5%, etc. In Bayes’ formula the numerator is the surface created by the rectangles, and the denominator is the sum of these surfaces, effectively acting as a normalizer. The result is on the right: the answer to the original problem was not 33% but something like 19%, 32% and 48%. D8 was more plausible. If you had to bet, there was a clear probable winner: D8. Whereas the frequentist would not have been able to pick a clear die.
And this can be iterated: imagine all draws from now on return a number never over 8; your confidence in D8 will grow and grow, reaching 99% after a few dozen iterations.
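The whole update can be sketched in a few lines of Python (my own reconstruction of the puzzle, assuming the five dice are D4, D6, D8, D12 and D20):

```python
# Bayesian update for the dice puzzle: a hidden die among
# D4, D6, D8, D12, D20 is rolled and the result announced.
dice = {4: 0.2, 6: 0.2, 8: 0.2, 12: 0.2, 20: 0.2}  # flat prior: 5 dice, 20% each

def update(prior, roll):
    """One Bayes step: posterior ∝ prior × likelihood of the roll."""
    numer = {faces: p * (1 / faces if roll <= faces else 0.0)
             for faces, p in prior.items()}
    total = sum(numer.values())  # the normalizer (denominator of Bayes' formula)
    return {faces: x / total for faces, x in numer.items()}

post = update(dice, 7)  # a 7 is drawn
# D4 and D6 become impossible; D8 is the most plausible (~48%),
# then D12 (~32%) and D20 (~19%) — not a flat 33% over the three survivors.

# Iterating: a few dozen more rolls that never exceed 8 push D8 toward certainty...
for roll in [3, 7, 5, 8, 2, 6, 1, 7, 4, 8] * 3:
    post = update(post, roll)
# ...while a single 10 at any point would instantly drop D8 to exactly 0.
```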
Replace D8 with Einstein’s gravity, or the second law of thermodynamics, and you see that Bayes is indeed there; it’s just under the carpet, not quantified.
You’ll note that if, say, after 230 draws, a 10 comes out, 99% falls to 0% instantly. That’s exactly what science is. #allmodelsarewrongsomeareuseful
The rational open mind
This sequential effect applies to the evolution of our understanding of the world: we have “belief cursors” that we tune as we are presented with evidence along the course of our lives.
The part of subjectivity fades out as we update our belief cursors
Indeed in episode 28 Christophe presents a scale of «belief cursors».
Which represents the logarithm of the P(B) value in the Bayes formula. He says:
The pragmatic bayesian, manipulates orders of magnitude rather than percentages
For the math people, the level of conviction is the logarithm of the ratio of conviction percentages: level = log10(P / (1 − P)). This way you can add the levels of proof to your current cursor position instead of multiplying percentages (log is a morphism from products to sums).
And similarly, a “level of proof” is the log of the ratio of plausibilities of the tested hypothesis.
As such, if you are now at “level -3” on, say, cold healing by basil tea, and I present you a study which shows with certainty +4 (10,000 times more likely than not) that it works, you should now believe, very little (level +1), in cold healing by basil tea.
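The cursor arithmetic can be sketched like this (my own reconstruction, taking “level” to mean the base-10 log of the odds):

```python
import math

def level(p):
    """Belief cursor position: log10 of the odds p/(1-p)."""
    return math.log10(p / (1 - p))

def probability(lvl):
    """Inverse: cursor position back to a probability."""
    return 10 ** lvl / (1 + 10 ** lvl)

# Level -3 on "basil tea cures colds" is roughly a 0.1% belief:
p_prior = probability(-3)            # ≈ 0.000999

# A study worth +4 (a 10,000:1 likelihood ratio) just ADDS to the cursor:
p_post = probability(level(p_prior) + 4)
# New position: level +1, i.e. odds of 10:1 (probability ≈ 0.91).
```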
lessons
People stuck in their convictions would be at level ±infinity, which breaks the Bayesian machine. This position is what we call a dogma, and it is not very healthy: P(B)=0 or P(B)=1 should be banned. At the same time, it is not mandatory to grant at least 1% or at most 99% to a position; e.g. 0.00001% is perfectly reasonable, that’s position -5.
On the other hand, most rational persons are not going to be convinced the same way you are in the light of the same evidence. They can be perfectly open-minded and rational, and yet still appear obtuse to your demonstration, because the update you caused to their cursor is just not enough to reach your current position, given where they started. Everyone starts with different a-prioris.
One point Monsieur Phi makes (Mr Phi, yet another of those French YouTubers) in his coverage of Bayesianism is that Bayes is the fix for the hardest, wrongest, dirtiest fallacy of all, my all-time first, the undethroned: the “base rate fallacy“. In modern times, where articles, studies, figures and occurrences are thrown at you from all directions, the base rate fallacy is indeed what most causes people to be irrational. Bayes comes to the rescue, provided you remember to think of him; if you wire his voice into the back of your head, you’ll ask yourself: «what is the a-priori probability?».
Example: a test for the disease trollititis is reliable at 99%. You take the test and it comes back positive. What are your chances of having contracted trollititis? You’d think “well… 99%? but I know there is a trap, right?” You’d be right. What is the a-priori probability? Trollititis has an occurrence rate of 1 person in 200 in the population. Therefore, among 200 people, the test will wrongly flag about 199 × 1% ≈ 2 healthy people, and most certainly be right for the 1 really sick person. Your positive result is one of this bag of 3. Therefore, your probability of being sick is about one in 3: 33%. Not 99% at all.
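The same computation, done exactly with Bayes’ formula rather than the “bag of 3” shortcut:

```python
# Base-rate check for the trollititis test: 99% reliable, prevalence 1/200.
prevalence = 1 / 200          # a-priori probability of being sick
sensitivity = 0.99            # P(positive | sick)
false_positive_rate = 0.01    # P(positive | healthy), the 1% failure

# Bayes: P(sick | positive) = P(positive | sick)·P(sick) / P(positive)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_sick_given_positive = sensitivity * prevalence / p_positive

print(round(p_sick_given_positive, 3))  # → 0.332 — about 33%, not 99%
```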
The conclusion of this paragraph, the longest in this article, is that Bayesian thinking is (probably) the most important chapter in critical thinking. It shows that everything has subjectivity in it; it re-demonstrates mathematically that an extraordinary claim requires extraordinary evidence; it is a great fallacy repellent; and it is a medium of peace, since it explains why people have different opinions. It is even a constant reminder of humility: how many of you got the trollititis question wrong? As history painfully showed with Marilyn vos Savant and her demonstration that, in a TV show with “2 goats and 1 car behind 3 curtains”, you should switch your initial choice when the TV host opens an unrelated curtain to reveal a goat (which he usually does to raise suspense). 10,000 angry readers, 1,000 of them with PhDs, wrote that she was “dead wrong” in aggressively toned letters. Turns out, 1,000 PhDs were dead stupid (looking at you Scott Smith, Charles Reid, E. Ray Bobo, Paul Erdős…) and believed their a-priori intuition in a frequentist method, which doesn’t work. Wiki link of the story. And if you find it as confusing as I did, this much better article by Zachary Crockett is fascinating.
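If the goats still feel wrong, a simulation settles it; a minimal sketch of the game (assuming, as in the show, that the host always opens a goat curtain that isn’t yours):

```python
import random

def monty_hall(trials, switch, seed=0):
    """Simulate the 3-curtains game; the host always opens a goat curtain."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        choice = rng.randrange(3)
        # Host opens a curtain that is neither your choice nor the car.
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:  # switch to the remaining closed curtain
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == car)
    return wins / trials

stay = monty_hall(100_000, switch=False)   # ≈ 1/3
swap = monty_hall(100_000, switch=True)    # ≈ 2/3 — Marilyn was right
```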
Christophe’s sources come from the book “la formule du savoir” by Lê Nguyên Hoang, who also happens to make the YouTube channel science4all. He also has a website and presents very interesting series about democracy (and the Condorcet voting system) and data science.
And from his peer YouTuber Julia Galef, from whom he got the inspiration for his interactive diagrams about Bayesian probabilities (visual guide to Bayesian thinking).
The prosecutor’s fallacy
In episode 28 Christophe gives an example of statistics misuse by a zealous prosecutor in Great Britain’s Sally Clark case (1998).
It goes like this: “according to this study, SIDS (sudden infant death syndrome) occurs once in 8,500 births; the chances that both her kids died from SIDS are one in 73 million. If you are rational, you must be convinced of her guilt.”
And convinced they were: the jury, the judge, the media and the public. She was jailed for the murder of her two kids, released in 2003, and died in 2007 from excessive drinking.
The reasoning was a sophism. It was false. 1/73M is the likelihood of two subsequent SIDS deaths; this is the “probability of the 2 deaths given her innocence”. Didn’t anybody think to evaluate the likelihood of her killing them? If you think in a Bayesian way, you will reverse the formula and try to compute the “probability of her innocence given the 2 deaths”.
For that, you have to know one a-priori probability: P(innocence), which is 1 − P(guilt). The a-priori probability of a double infanticide is about 1 in 500 million.
Now what we must do is compare the two hypotheses (guilt vs innocence), and the astronomical gap between the original “frequentist” thinking and the Bayesian thinking is mind-boggling. The actual probability of guilt was really (1/500) / (1/500 + 1/73) ≈ 13%.
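The comparison can be sketched numerically (using the article’s order-of-magnitude figures, and assuming the two deaths are certain if she is guilty):

```python
# Prosecutor's fallacy, redone the Bayesian way (orders of magnitude only).
p_deaths_given_innocent = 1 / 73e6   # two SIDS deaths: the "1 in 73 million"
p_guilty_prior = 1 / 500e6           # a-priori rate of double infanticide
p_innocent_prior = 1 - p_guilty_prior

# P(deaths | guilty) ≈ 1: a double murderer's children are certainly dead.
p_guilty = (1.0 * p_guilty_prior) / (
    1.0 * p_guilty_prior + p_deaths_given_innocent * p_innocent_prior)

print(round(p_guilty, 2))  # → 0.13 — nowhere near "beyond reasonable doubt"
```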
Because of this abyssal mistake, which took 4 years to notice, the UK then banned this use of statistics in courts.
more on wikipedia: Prosecutor’s fallacy ; Sally Clark
Which leads me to more statistical analysis with…
Overfitting
For this paragraph, I will refer to the channel science4all by Lê Nguyên Hoang (a mathematician with a PhD in mechanism design and a published book about Bayesianism).
Of particular interest here is his series about artificial intelligence. He has 54 episodes on it, and episode 11 (in collaboration with Christophe) talks about overfitting.
To simplify: machine learning is the field that studies databases of samples and tries to fit models that would explain them. If you want to be 90% confident that your explanation is universal, you can’t use it before having observed 100 (unbiased) samples that confirm this explanation. The figure of 100 comes from the «fundamental theorem of statistical learning», which gives a sufficient condition:
One of the rare English-spoken videos by these French YouTubers is actually available, around the explanation of that theorem: https://youtu.be/RkWuLtFPBKU
Here a graph directly taken from episode 13 of the AI series:
The dashed bar is the region where just enough “explanations” are used to explain a dataset of a certain size. Fewer “explanations” means under-fitting; more “explanations” means over-fitting.
More details are introduced in subsequent videos, notably the bias–variance tradeoff: stick too closely to the data, or stick too closely to the model. This is the subject of a whole 300-page book by statistician and philosopher Nassim Taleb, The Black Swan, with the data represented by Fat Tony and the model represented by Dr. John.
Because there is a philosophical aspect to this, it is more than just a mathematics and statistics matter. Lê Nguyên Hoang argues that this applies to how we interpret news: because of our cognitive biases, we are naturally inclined to perform overfitting.
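A minimal sketch of overfitting in code, assuming only numpy: fit polynomials of increasing degree to a handful of noisy samples of a sine, and compare the error on the training points versus held-out points:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)                    # 10 training samples
x_test = np.linspace(0.05, 0.95, 10)             # held-out points in between
truth = lambda t: np.sin(2 * np.pi * t)
y = truth(x) + rng.normal(0.0, 0.3, x.size)      # noisy observations
y_test = truth(x_test) + rng.normal(0.0, 0.3, x_test.size)

def rmse(degree):
    """Fit a degree-d polynomial 'explanation'; return (train, test) error."""
    model = np.polynomial.Polynomial.fit(x, y, degree)
    err = lambda xs, ys: float(np.sqrt(np.mean((model(xs) - ys) ** 2)))
    return err(x, y), err(x_test, y_test)

train3, test3 = rmse(3)   # few parameters: a reasonable explanation
train9, test9 = rmse(9)   # one parameter per sample: pure memorization
# The degree-9 curve passes through every noisy point (train error ~0),
# yet it typically predicts the held-out points WORSE than the degree-3 fit:
# it has learned the noise, not the phenomenon.
```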
I’ll drop in a possible illustration from my personal experience. Over the last year, the newspaper japantoday repeatedly ran articles about elderly drivers causing traffic accidents. Of course, after two or three articles, the internet commenters were all «what is the government doing! we need to forbid these geezers from driving!». If they applied critical thinking, they’d look for an academic source, in sociology, to get objective statistics on elderly drivers, and they would find this graph:
No, there is no boom of accidents caused by the elderly. They are in fact decreasing steadily. Yes, it is possible to interpret these 2 counts (orange & yellow) as “the percentage of elderly drivers implicated in accidents is rising”, though. But the boom was perceived not as relative but as absolute, simply from the occurrences in the news. Occurrence in the news doesn’t demonstrate or reflect anything about the frequency of a phenomenon.
In the comments of the video, a remark I found interesting emerged: isn’t it fundamentally overfitting to do a close literary analysis of a text in French class (replace French with your native language)? That is, trying to find all possible meanings and hidden meanings in an essay, a painting, even an advert? This was posed as a question, but a teacher replying seemed interested by the possibility.
Another natural deviation attributable to overfitting is generalization, e.g. “all Arabs are terrorists”. Just because your experience comes from Fox News doesn’t make your generalization a good model that fits the data well. See the Nas Daily video about that sadness.
I will conclude on overfitting by mentioning the Earth’s Children novels by Jean M. Auel. On many occasions the heroes find some spiritual explanation for each and every event that occurs in their lives. This is of course superstition. But superstition itself sounds a great deal like overfitting!
For the lulz, I’ll encumber your browser’s tabs yet again, with this funny example of overfitting by xkcd: https://xkcd.com/1122/
(the idea came from this article: elitedatascience.com/overfitting-in-machine-learning)
Simpson
Edward Simpson, a statistician from Cambridge, described Simpson’s paradox. It is covered on the channel Science étonnante, in this video.
You have a tumor that needs treatment; there is chemotherapy or surgery. Which one do you want? A-priori you don’t know, so you ask for more data, and the doctor gives you a study with these numbers:
The choice seems obvious: just pick chemotherapy?
After you divulge your choice to your physician, he seems skeptical and presents a different view of the data, because he remembers that for small tumors, surgery was better:
In this view, the healing chances are separated between big tumors and small tumors. So surgery is better for small tumors, but… surprise, it’s also better for big tumors. What? Surgery is always better! That seems to contradict the original study. But looking closer, both results come from the same dataset. So what’s happening?
Indeed when you make the weighted total you find the same figures.
The explanation of this paradox comes from the presence of «facteurs de confusion», or in the English literature: confounding factors. If you look at surgery, big tumors are numerous but they are also more difficult to treat, which lowers its total success percentage. That’s because «which one do you choose?» doesn’t depend only on people’s choices (which depend on the statistics); it also depends on the fact that we give the difficult cases to surgery more often. A confounding factor is a factor that acts on both the output of the statistics and the input. Second example: «students who have repeated a school year perform below average at the baccalaureate (HS diploma); therefore repeating school years doesn’t work». Did you spot the confounding factor? It’s because they performed badly before that they repeated; the factor is performance. Third example (invented): people seem to have more libido after drinking beer. Hard to spot? Confounding factors can indeed be very subtle. Here it’s sex: separate the data between male and female and no correlation can be observed anymore.
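The reversal is easy to reproduce; a sketch using the classic kidney-stone figures from the statistics literature as stand-ins (not the exact numbers from the video), relabeled to match the tumor story:

```python
# Simpson's paradox: (healed, treated) per treatment and tumor size.
# Illustrative numbers (the classic kidney-stone dataset, relabeled),
# not the figures used in the video.
data = {
    "surgery": {"small": (81, 87),   "big": (192, 263)},
    "chemo":   {"small": (234, 270), "big": (55, 80)},
}

rates = {}
for treatment, groups in data.items():
    healed = sum(h for h, _ in groups.values())
    treated = sum(t for _, t in groups.values())
    rates[treatment] = {size: h / t for size, (h, t) in groups.items()}
    rates[treatment]["overall"] = healed / treated

# Surgery wins in EACH subgroup (~93% vs ~87% on small, ~73% vs ~69% on big),
# yet chemo wins overall (~83% vs ~78%): the confounding factor is that
# surgery is given the difficult (big) cases far more often.
```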
Motivated Numeracy
This paragraph is here for completeness and does not originate from a French source. «Motivated reasoning» is a thing, and in particular I found the following study very interesting for critical thinking: motivated numeracy and enlightened self-government.
This echoes, on certain points, the article by Julie Beck in The Atlantic that I mentioned above. Let’s show the shocking curve:
The question asked in the lower half panel was “what happens after passing gun-control laws in a city?”, a question meant to trigger partisanship. The upper half was about a skin cream treatment. In all 4 cases they present controlled data and ask for your judgment.
To me this is scary. It means that even if you are proficient with mathematics, you’re as likely as anybody else to misinterpret data to fit your views in presence of strong motivation to do so.
I wish there were a “rank 10” in this figure, where people read this article!
Conspiracies
Finally, we’ll make a footnote about conspiracies, as a conclusion. Christophe says conspiracy theories are pits of despair for the critical mind: they put thinking in a place on the “Bayesian scale” which doesn’t allow for updates. He calls that position being dogmatic. When your thinking is no longer updatable by rational open-mindedness, it’s because you have “divided by zero” in a way; you’ve put yourself in a 10^-∞ position.
79% of French citizens believe in at least one conspiracy theory, according to a January 2019 poll of 1,700 people by Ifop (a long-established polling institute). I translated one page of the report for you:
Education about critical thinking is severely lacking in the national curriculum of the education ministry.
Conclusion
If there is one meta course that should be taught before anything else, it’s critical thinking.
So pass this along, share it!
Comment on Twitter (@zabigfrench1)