What can I know? What does it mean to know anything at all? Or more radically, could it be that I am just dreaming or trapped in ‘the Matrix’? Traditionally, philosophical epistemology has dealt with questions such as these primarily from the individual’s perspective. Social epistemology, in turn, puts the ‘social’ into knowing.
As its name suggests, social epistemology takes a much broader view of the process of gaining knowledge. After all, almost everything that we know, or think we know, we have learnt through others. This deeply social aspect of the creation and spread of knowledge gives rise to a multitude of dependencies between individuals, and these dependencies can be unilateral, reciprocal, or networked in various ways. The rapid rise of social epistemology is directly related to the ever-advancing intellectual and scientific division of labour in the pursuit of knowledge.
In addition, today’s communication platforms and social media channels create many new kinds of knowledge-related problems—for instance, just think of the very serious question of how to distinguish real expertise and epistemic authority from mere pretences thereof. By examining everything that currently goes under the heading of disinformation, misinformation, fake news and even truth decay, social epistemology investigates some of the most pressing problems faced by modern democracies. This, together with the computational approach it adopts, explains why we teach social epistemology in the Bachelor of Science in Management, Philosophy & Economics (MPE).
Social epistemology's topicality is also reflected in the methods it uses. Indeed, it is probably the first philosophical sub-discipline to have fruitfully applied computational modelling and simulation to philosophically pressing questions. I myself embarked on the path of agent-based modelling three decades ago. At that time, pursuing this path was considered rather absurd. Today, it makes you a bit of a pioneer. 😉
My starting point was a model now known as the bounded-confidence model (call this the ‘BC model’). This model concerns how opinions are revised, modified and updated in processes of social exchange. It rests on a simple idea: if others’ diverging opinions are close enough to your own, you take these other opinions into account when revising your beliefs. If, in turn, others’ opinions are too far removed from your own, you discard them. Of course, for modelling and subsequent simulations, this informal idea must be made precise, translated into a mathematical language and then programmed.
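To give a flavour of what that translation looks like, here is a minimal sketch of the BC update rule in plain Python. The number of agents, the confidence bound `eps` and the number of update rounds are chosen purely for illustration, not taken from any particular study:

```python
import random

def bc_step(opinions, eps):
    """One synchronous bounded-confidence update: each agent adopts the
    average of all opinions (including its own) within distance eps of
    its current opinion; opinions further away are simply ignored."""
    new = []
    for x in opinions:
        peers = [y for y in opinions if abs(y - x) <= eps]
        new.append(sum(peers) / len(peers))
    return new

# Illustration: 50 agents with opinions drawn uniformly from [0, 1].
random.seed(0)
opinions = sorted(random.uniform(0, 1) for _ in range(50))
for _ in range(20):
    opinions = bc_step(opinions, eps=0.15)

# After repeated updates the population typically splits into a few
# internally homogeneous opinion clusters.
clusters = sorted({round(x, 2) for x in opinions})
print(clusters)
```

Running such a sketch already shows the model's characteristic behaviour: depending on the confidence bound, the dynamics end in consensus, polarisation, or a handful of stable opinion clusters.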
Once this translation work has been completed, though, the simple BC model can be further extended. Here is a short video showing the opinion dynamics of a society that has access to two different therapies for fighting a disease, say, but does not yet know their respective objective probabilities of success:
While individuals start from purely subjective guesses about the therapies’ respective effectiveness, they soon try to discover the objective probabilities by conducting experiments and exchanging views with others. The vertical green line shows the first therapy’s objective probability of success (0.2); the horizontal green line denotes the second therapy’s objective probability of success (0.8). However, this is a two-class society. The blue individuals are active truth seekers: period by period, they experiment with one of the two therapies and systematically evaluate their own experiences, while also taking into account—within certain limits and by assigning particular weights—what the others think. The red individuals, in turn, are copycats: their updated opinions are simply the average of what is believed by all others within a circle of a certain radius around their present position in the two-dimensional opinion space. The right part of the video shows statistical analyses. The top chart tracks the distances to the truth (the intersection of the green lines). The lower chart shows, for both classes, the share of successes lost compared with having chosen the better therapy from the beginning and throughout.
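The core of this two-class dynamic can be sketched in a few dozen lines. This is not the model from the video, only a simplified illustration: the confidence radius, the evidence weight, the population size, and the assumption that seekers pick a therapy at random each period are all choices made here purely for demonstration:

```python
import random

P_TRUE = (0.2, 0.8)  # objective success rates of therapy 0 and therapy 1
EPS = 0.25           # confidence radius in the 2-D opinion space (assumed)
ALPHA = 0.5          # weight seekers give their own evidence (assumed)

def local_mean(op, opinions):
    """Average opinion of everyone within radius EPS (self included)."""
    peers = [q for q in opinions
             if ((q[0] - op[0])**2 + (q[1] - op[1])**2) ** 0.5 <= EPS]
    return (sum(q[0] for q in peers) / len(peers),
            sum(q[1] for q in peers) / len(peers))

def step(opinions, is_seeker, trials, wins):
    """One period: seekers experiment and mix their empirical success
    rates with nearby opinions; copycats adopt the local average."""
    new = []
    for i, op in enumerate(opinions):
        social = local_mean(op, opinions)
        if is_seeker[i]:
            t = random.randrange(2)              # pick a therapy to test
            trials[i][t] += 1
            wins[i][t] += random.random() < P_TRUE[t]
            evid = list(op)
            evid[t] = wins[i][t] / trials[i][t]  # empirical success rate
            new.append(tuple(ALPHA * e + (1 - ALPHA) * s
                             for e, s in zip(evid, social)))
        else:                                    # copycat: pure averaging
            new.append(social)
    return new

random.seed(1)
n = 40
is_seeker = [i < n // 2 for i in range(n)]       # half seekers, half copycats
opinions = [(random.random(), random.random()) for _ in range(n)]
trials = [[0, 0] for _ in range(n)]
wins = [[0, 0] for _ in range(n)]
for _ in range(200):
    opinions = step(opinions, is_seeker, trials, wins)

avg = [sum(opinions[i][k] for i in range(n) if is_seeker[i]) / (n // 2)
       for k in (0, 1)]
print([round(a, 2) for a in avg])
```

Even in this stripped-down version, the seekers’ experimental evidence pulls their average estimate towards the objective probabilities, while the copycats’ fate depends entirely on whether a seeker happens to be within their confidence radius.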
Overall, this model allows us to analyse the reasonableness, efficiency and also the costs of certain policies for forming opinions and trying to gain knowledge. For example, is it reasonable for truth seekers to try, with a certain probability, the therapy that is currently considered to be worse? Or should one simply give up this therapy at some point? These are the kinds of questions to which, thanks to its employment of computational methods, social epistemology can now provide answers!
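The first of these policy questions can be made concrete with a small numerical experiment. The sketch below is not taken from the model above; it isolates a single decision-maker and compares a never-explore policy with one that, with an assumed probability of 10%, deliberately tries the therapy currently believed to be worse:

```python
import random

P_TRUE = (0.2, 0.8)  # objective success probabilities, as in the example

def final_choice(explore_prob, rounds=500, seed=0):
    """Try each therapy once; then, each period, use the empirically
    better therapy, except that with probability `explore_prob` the
    believed-worse therapy is tried instead. Returns the therapy
    believed better at the end."""
    rng = random.Random(seed)
    trials = [1, 1]
    wins = [rng.random() < P_TRUE[0], rng.random() < P_TRUE[1]]
    for _ in range(rounds):
        est = [wins[t] / trials[t] for t in (0, 1)]
        better = 0 if est[0] >= est[1] else 1
        t = 1 - better if rng.random() < explore_prob else better
        trials[t] += 1
        wins[t] += rng.random() < P_TRUE[t]
    est = [wins[t] / trials[t] for t in (0, 1)]
    return 0 if est[0] >= est[1] else 1

# Fraction of 1000 runs that end up believing in the truly better therapy:
runs = 1000
greedy   = sum(final_choice(0.0, seed=s) for s in range(runs)) / runs
explorer = sum(final_choice(0.1, seed=s) for s in range(runs)) / runs
print(f"never explore: {greedy:.2f}   explore 10%: {explorer:.2f}")
```

The qualitative lesson is robust: an agent who never revisits the seemingly worse therapy can lock in on a misleading early experience forever, while a little systematic exploration almost always identifies the better therapy eventually, at the price of some foregone successes along the way.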