Help Me, Captain Philosophy!
[Aug. 16th, 2006|04:51 pm]
I took a relatively interesting philosophy quiz (here) that characterizes me as a metaphysical Realist, an epistemological Subjectivist, and an ethical Utilitarian.
The corresponding viewpoints aren't a horrible match -- they're better than the subcategories listed under the polar-opposite "Reductionist/Absolutist/Relativist" type -- but still, there were a bunch of questions where none of the answers really fit what I believe. For many of them, my answer is really mu: either the question is ill-posed, or there's not enough context to give a proper answer.
So, not that I actually expect anybody on my flist can answer the question, but: Can you help me find a label for my philosophical outlook?
In a nutshell, here's what I think. There are two kinds of thing in the world: physical things, and informational things. A rock is physical; a 30-60-90 triangle is informational. Your mind is software (informational) that runs on the hardware of your body (physical). Part of your mind is a model of the objective physical universe; this model is imperfect, being fed by your imperfect perceptions of the universe, but there's an isomorphism between model and reality.
Here's the part that seems to be unconventional: I've come to believe that statements about physical things are qualitatively different from statements about informational things. In particular, boolean truth is applicable only to purely informational propositions. Statements about physical things evaluate to what I'll call "floating-point truth".
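Since my mind-as-software metaphor is already half a program, here's a toy sketch of the distinction in Python. The function names, the particular degree-of-truth formula, and the numbers are all mine, invented purely for illustration -- this isn't a claim about any standard fuzzy-logic formalism:

```python
# A toy contrast between "boolean truth" and "floating-point truth".
# All names and formulas here are illustrative inventions.

def informational_truth() -> bool:
    # A purely informational proposition gets a crisp boolean:
    # "the angles of a 30-60-90 triangle sum to 180 degrees".
    return 30 + 60 + 90 == 180

def physical_truth(measured_angle_sum: float) -> float:
    # A physical drawing of that triangle only approximates the ideal,
    # so "this drawing is a 30-60-90 triangle" gets a degree of truth
    # in [0.0, 1.0], here based on how far its measured angles deviate.
    deviation = abs(measured_angle_sum - 180.0)
    return max(0.0, 1.0 - deviation / 180.0)

print(informational_truth())   # exactly True, no measurement involved
print(physical_truth(179.2))   # close to 1.0, but never a crisp boolean
```

The point of the sketch is just that the second function can never return `True`; the best a measurement can do is approach 1.0.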
So what is that? Property dualism? Fuzzy-logic Aristotelianism? Any ideas?
(side note: if this stops being interesting, stop at any time, no offense taken)
This is interesting in itself: the idea of people dealing with the fuzziness of things in their world by "invoking ... their mental model of the universe."
Can you describe how this is done? Does this allow people to resolve fuzziness into distinction, or does it allow them to trick themselves that they resolve fuzziness when really they have no idea, or does it allow people to think "fuzzily" without the need to resolve the borders? In your example, it seems that it allows them to decide border cases, which seems to point toward an apparent resolution of the fuzziness of the borders based on these larger models.
Feel free to use or drop the three books example, whatever is useful.
Well, let me start by saying this is nothing strange or special. It's what you do all the time whenever you need to resolve an ambiguity. You just take the cloud of features that makes up the definition of a conceptual category, compare it to the features you observe on a physical thing, and see how well they match.
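That "cloud of features" comparison can be sketched in code. The feature sets and the overlap measure (Jaccard similarity, shared features over all features in play) are my choices for illustration, not a claim about how minds actually score the match:

```python
# A minimal sketch of matching observed features against the feature
# clouds that define two conceptual categories. Feature lists invented.

RAT      = {"rodent", "small", "scaly tail", "ground-dweller"}
SQUIRREL = {"rodent", "small", "bushy tail", "climbs trees"}

def match(observed: set, category: set) -> float:
    # Fraction of features shared, relative to all features in play
    # (Jaccard similarity): 1.0 is a perfect match, 0.0 is no overlap.
    return len(observed & category) / len(observed | category)

seen = {"rodent", "small", "bushy tail"}  # the thing in the kitchen
print(match(seen, RAT))       # weaker match: wrong kind of tail
print(match(seen, SQUIRREL))  # stronger match: it's a squirrel
```

In the easy, obvious cases one score dominates the others, which is why the fuzziness usually doesn't matter.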
So (to blatantly rip off my example from someone else's blog), if you see something skittering through your kitchen out of the corner of your eye, and you think "is that a rat?" you take a better look at it (after pausing the TiVo) and compare: it's a small rodent, but look, it's got the wrong kind of tail. It's a squirrel, not a rat. The vast majority of the time, the matching is easy and obvious, so fuzziness doesn't matter.
Now, consider a fuzzier case: there's a big juniper in your yard. Is it a bush, or is it a tree? It's hard to tell. We could probably make a determination, but we don't need to if all we want to do is trim the thing; whether it's a tree or a bush is irrelevant. So we can handle fuzziness without addressing it in some cases.
In other cases, we have to resolve the ambiguity in order to make a decision or some such. And the need provides important context for the resolution, because if we didn't have a need to resolve it, we could just leave it ambiguous and address it in its naturally fuzzy state, right?
The stack of Hamlet pages on the coffeetable can be matched against any number of patterns. The ambiguity in matching it against the "book" pattern only matters if the question "is it a book?" has somehow been asked, implicitly or explicitly, in a way that does not allow "sort of" as a valid answer.
[Aside: I'm starting to think that the correct answer to many (most?) philosophical conundrums is "that's a bad question, because it depends on context that hasn't been provided".]
So let's consider this situation: Alan says to Bob "hey, would you go and grab the books off my coffeetable?" Bob sees two regular books and a stack of loose pages. Did Alan mean to grab that, too? Is that "a book"? The answer depends on context. If Bob knows that Alan is editing a manuscript, he might decide yes. If Alan is shelving things on a bookshelf, he might decide no. It depends on the circumstances surrounding the question.
If the immediate circumstances don't help, then Bob will start pulling on broader context, drawing on his personal understanding of the universe. If he works at a bookstore, the aspect of books as "things that are bound" comes more easily to mind for him, and he decides that the unbound pages are not a book. If he's been sorting through letters and forms all day, the aspect that is "a lengthy collection of text" might be prominent, and he decides yes, it is.
Now, in neither case has the fuzziness gone away. All that's happened is that Bob was constrained to deal with a borderline case in an all-or-nothing way, so he evaluated it in that context to determine whether the mapping between physical object and informational object held. When exterior context was insufficient to resolve the ambiguity, he called on his mental model of the universe to get more context so that he could make a subjective decision. In a different context, he might come to a different decision. A different person in the same context might come to a different decision.
The fuzziness remains; all Bob has gained is a way of treating it as if resolved in a particular context.