Jonathan - thank you for this wonderful piece. Just bought your book. I'm very glad that Persuasion exists as a place to support such writing. Keep up the good work.
You say: "No final say" insists that to be knowledge, a statement must be checked.
It follows then that the fact that I hate the taste of over-cooked peas is not knowledge.
Now that is true according to your definition of knowledge, but that shows that your definition is not the common definition. Now I agree that you get to define the words you use.
But it is common practice in math and science (both of which routinely redefine common words) to warn people that they are changing the definition on us. (See the definitions of charm, color, flavor, and strangeness used by particle physicists. Or "open" and "closed" used in math.)
It's also considered polite not to use personal definitions when there is no need to (as the woke so often do). So I would suggest that you use the term "scientific knowledge" or "checkable knowledge" and leave the term "knowledge" to its common meaning. That way we don't need to argue over whether or not I know I don't like over-cooked peas. I say I know that. You say I don't. A pretty silly argument, but I think many people would side with me -- because that's the common English definition of "know" and they trust me on over-cooked peas.
Why does that matter? Because we are in the midst of a huge political fight, which you discuss. Some say, "I know that hearing the sound of the word 'nigger' affects me just like physical violence." And some say that is not knowledge. That's no different epistemologically from the question of over-cooked peas.
Of course, you could insist that your definition of "knowledge" is known to be the true definition, but I don't think that's checkable. So you would be trapped in self-contradiction.
So I'm not convinced you have really solved that problem, which is too bad because it's an important one.
I don't think you have to distinguish a common view of "knowledge" from "scientific knowledge" in order to understand what the author is trying to show. He isn't switching definitions on us.
It's more like a warning to consider if we actually "know" something we think we know; or if we are just accepting something as fact because someone said it -- particularly because we like the person who has told us the information.
Courts use similar approaches all the time without getting scientific about it. Is something an "established" fact (corroborated, for example)? Or is it merely "hearsay" or "opinion" (unsubstantiated)?
Great, you tell us you hate the taste of over-cooked peas. That can be taken at face value, as an article of faith (or trust in your word)... until it can't. In a sense, the veracity is not an issue unless we are discussing it, and perhaps something else starts to hinge upon it.
Sorry, but I think that is what we are talking about here -- unconsciously moving from the "insignificant" to the "significant" and not recognising how we bring personal biases and assumptions along with us in the process.
For example, to use your illustration of someone hating something:
Let's say, "Trump proclaims he hates fraud". Well, a lot of people who support him apparently believe that is an undisputed fact. After all, he is leaving no stone unturned. But, a lot of other people think he has been, and is, guilty himself of plenty of fraud of all sorts. He doesn't walk the walk in this matter. Rather, many might contend that it is closer to the truth to say, "Trump hates losing".
Now, all this matters, in a way that your hating peas doesn't matter (again, sorry), because, well, Trump's actions have been a matter of heated universal discussion, and it affects the realities people do or do not accept as they talk past each other (i.e., the proposition that the election was stolen).
The more an issue is purported to be "important" outside of our own heads, and the more we expect our reality to be accepted by others, the more rigorously it needs to be tested and "peer reviewed". That just follows. Instead, many people tend to go the other way and get even laxer -- as if being merely loud or repetitive were a substitute for rigour.
I think I agree with you, but I'm not sure why we couldn't at least attempt to test the veracity of the proposition "Steven Stoft hates the taste of overcooked peas." We have lie detectors (as imperfect as they are), we can interview others who know Steven, and engage in double-blind taste tests with Steven to see if he knows the difference, and which he likes better or worse. We might even be able to watch how his brain reacts to each taste to see if there's a difference.
Well, you just made my point for me. Thank you. Those are the kind of things necessary to solve the problem of internal truths. And as I'm sure you noticed, they were not mentioned in the post by Rauch.
I'm not claiming there is no solution. In fact, I end by saying that Rauch's problem of "personal authority" is "an important one," by which I meant we should look for a solution.
I don't think the answer lies in the direction Rauch is headed (I think your direction is more logical, but out of reach at present). I would suggest looking at what claim "personal authority" has on others.
I can't prove you do not have knowledge if you say "your words just did me a million dollars of damage." Perhaps you do know that. But that doesn't mean I owe you a million dollars. The point is that only checkable knowledge should be accepted by society as the basis of substantial claims on others.
Now here's why I think your approach is not yet a solution. You could definitely "attempt to test." And I agree that a lie detector could provide some evidence, but not nearly the level required by real science.
And consider the real problem, those who claim terrible harm from hearing contrary opinions. All of their friends would vouch for their claim.
And double-blind tests are impossible because I can tell if I'm tasting peas or something that tastes like them. So I, the subject, am not blind to the experimental conditions. And the same goes for hearing trigger words. Someday brain scans might do it, but not yet.
How do we go about mitigating the profit motive in the Constitution of Knowledge? How do we prevent an entity from putting their thumb on the scale at the marketplace of ideas?
The social network is ripe for incentivization. Upton Sinclair put it like this: "It is difficult to get a man to understand something, when his salary depends on his not understanding it." That difficulty increases when "salary" is defined in a broader modern-day sense, in which subscribers and followers are a coveted currency in the flourishing influencer market. Clicks and shares are the gold standard of the digital exchange rate. As long as selling people a bill of goods made of whole cloth remains profitable, performance capitalists will remain riveted to giving them whatever reality they want.
Returning to my question of mitigating the profit motive, I wonder if one solution might be a kind of "Consumer Reports" outfit. By that I mean a cooperative enterprise dealing in good faith that assesses the level of fulfillment a Reality™ brings. That assessment would explore why-when-where a manufactured vehicle of truth brings satisfaction, and point out any potential hazards that would impact the quality of life of a passenger riding inside the vehicle of truth.
UPDATE: my wife just read the last paragraph and said, "Why do you need a cooperative? That's what family and friends and co-workers are for." She makes an excellent point. Let's hope the relaxation of pandemic rules leads to more interpersonal interaction and less reinforced tribalism.
The big problem with the "Consumer Reports" idea, François, is that judging putative knowledge by the fulfillment it brings (at least in the short term) is not the same as judging its truthfulness. Contingent on the specific interests of the "knower", the two goals can often even be diametrically opposed. As the old saying goes, "the truth hurts".
You are right to point out the problem here, and I think it gets at the core of why our present situation looks so bleak. The incentives are misaligned with the goal of establishing a shared reality, and it isn't clear how to re-align them properly.
What works to our advantage is that most people do, in principle, want to believe that they are looking at the truth and not being deceived. The drawback is that most people resolve conflicts between their own beliefs and those of mainstream society by heavily critiquing and seeking out ways to discredit society's beliefs, not by seriously scrutinizing their own. Ultimately, people have to want this cultural divide to end, and realize that commodifying ideas like products in a market, to be evaluated solely by their short-term utility, is what's largely contributing to it.
Knowledge is best viewed as a shared ecosystem or collection of many ecosystems. It is evolving organically, not mechanistically. Rules and institutional roles are important ways to describe and understand the relationships, and values inform the various perspectives that can help guide how knowledge is shared and used, but the corpus of knowledge develops independently of our ability to control it. "The more I learn, the less I know" is an aphorism I use to describe the Zen of Knowledge.