Why Talk About Meaning as Exclusion? from LogicTutorial.com
Why talk about language, logic and deduction in terms of exclusion? After all, our diagrams contain both black and white areas: black areas mark sets of situations that are excluded as impossible (if the proposition is true), while white areas mark situations that might still be the case so far as that particular statement is concerned.
Well, one argument for the importance of exclusion is that (as should be clear from working with the clickable diagrams) how much we say depends on how much is excluded. The more small areas excluded from consideration, marked off as not being the case, the more has been said; the more white areas are left, the less specific our statement. The contradiction, the all-black statement, says too much: any statement at all can be deduced from a contradiction, precisely because all possible states of affairs have been excluded. In contrast, the all-white statement, the tautology, famously says nothing at all.
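The point can be checked mechanically. In this minimal sketch (my own illustration, not code from the tutorial), a proposition over two atoms p and q is just a function from truth-table rows to True/False, and its "black areas" are the rows it excludes:

```python
from itertools import product

# The four truth-table rows for two atoms -- the four primitive areas.
ROWS = list(product([False, True], repeat=2))

def excluded(prop):
    """The rows (p, q) where the proposition is false -- its black areas."""
    return {row for row in ROWS if not prop(*row)}

contradiction = lambda p, q: p and not p   # false on every row
tautology = lambda p, q: p or not p        # true on every row

# A entails B exactly when every row A leaves white, B also leaves white.
def entails(a, b):
    return all(b(*row) for row in ROWS if a(*row))

print(len(excluded(contradiction)))   # 4 -- all areas black: says "too much"
print(len(excluded(tautology)))       # 0 -- all areas white: says nothing
print(entails(contradiction, lambda p, q: q))  # True -- anything follows
```

Because the contradiction leaves no white rows at all, the entailment check ranges over an empty set and succeeds vacuously for any conclusion, which is exactly why "anything follows from a contradiction".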
Another argument is that once a small area within an Exclusion Diagram has been excluded, once we have accepted as true a proposition that excludes some specific area (or, if you like, group of possible worlds), that area remains excluded, no matter what is said later on, or what may be deduced from whatever propositions are accepted. But this isn't true for white areas: leaving a small area white in one statement doesn't mean that our conclusion will also have that area white. This can be seen from playing with the "Deduction with And" page here.
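The asymmetry is easy to demonstrate with the same truth-table representation (again a sketch of my own, not the tutorial's code): conjoining a further proposition with "and" can only turn white areas black, never turn a black area white.

```python
from itertools import product

# The four truth-table rows for two atoms p and q.
ROWS = list(product([False, True], repeat=2))

def excluded(prop):
    """The rows where the proposition is false -- its black areas."""
    return {row for row in ROWS if not prop(*row)}

a = lambda p, q: p                          # excludes the rows where p is False
b = lambda p, q: q                          # excludes the rows where q is False
a_and_b = lambda p, q: a(p, q) and b(p, q)  # accepting both propositions

# Every black area of "a" is still black in "a and b" ...
print(excluded(a) <= excluded(a_and_b))     # True
# ... but a white area of "a" -- the row (p=True, q=False) -- has turned black:
print((True, False) in excluded(a))         # False: white for "a" alone
print((True, False) in excluded(a_and_b))   # True: black once "b" is accepted
```

The subset check holds for any propositions, not just these: a row is black in the conjunction whenever it is black in either conjunct, so exclusion accumulates monotonically while whiteness does not.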
Here, when I say "areas" or "small areas", I mean the single units of area that make up our diagrams, which are what Wittgenstein called "primitives", each one a cell of a truth table describing a statement in propositional logic. So we might call them "primitive areas" of the Exclusion Diagrams.
If the principle that meaning is always by exclusion has significance for language and logic, determining their underlying structure, is this how our minds work? Not necessarily, as I'll discuss in the next page. The internal workings of computers bear a very close relationship to formal logic because they are an outgrowth of studies of formal logic, and of the work of Boole, Peano and Russell in reducing arithmetic to formal logic. A philosopher who understood that work was an essential part of the team that designed the physical "logic circuits" that made up the first computer, ENIAC, and logicians were also important in designing Britain's slightly earlier computer. But human brains aren't computers. Everything we know about how human brains work, and how they approach problems, suggests that they work by association rather than symbolic deduction, and that neural networks aren't much like the individual logic circuits that are fundamental to computers.
It's fair to say, then, that language is vastly simpler than human thought, merely the best tool we have to convey some parts of our thoughts and experience. Logic is simpler still than human language. In keeping with the Western tradition of analysis that goes back to Galileo and to Descartes' "Discourse on Method", it is a tool of analysis, allowing us to break the very complex phenomenon of human language down into components so simple that they can be studied. Enquiry has to begin somewhere, and for logicians, the exact study of language begins with the (somewhat artificial, but precise) definition and examination of the use of only a very few words abstracted from language, such as "and", "or", "not" and "then", and what follows from that.
To quote Joseph Campbell, "Words are always qualifications and limitations."