correlation and identification

In the formation of what we call consciousness or awareness, is there a prioritizing going on in acquiring data, recording that data, and analyzing it, in that order of formation?

Is there a structural sense of coherence in which identification may occur as correlations among kinds of sensory data become more frequently channeled between objects and similar objects?

Or is it legitimate to argue in reverse: that similar objects tend to be correlated with a variety of sub-types of objects, and those sub-types reduced to sub-sub-types, ultimately deriving the idea of what that object is?

May a hypothesis of a pre-lingual idea of an object be presupposed?

I propose that even if the answer to this is only 50%, a utilitarian approach to idea formation can validate the hypothesis on the basis of conjectured likelihood in certain situations, on probabilistic, perspectival, or logical grounds. Can the problem of idea formation be framed within the context of the type of approach used? And finally, is the type of approach used not itself a covariant of that perspective?

I realize this theme has been taken up in many contexts concerning an idea and its representation, but I am looking at the relation between a pre-idea and the object, one level down. Are these two processes similar and parallel, dissimilar and overlapping, or both?

A pre-idea can be interpreted as a state of anomaly between a preceding and a succeeding state, where the exact moment of ideation can only be looked at postscriptively, with a sense of psychologically indeterminate time. It is the prescriptive notions of time which really apply here.

This OP, if at all worthwhile, begs pre-conceptual hypotheses at the least, or substantial concepts at the most.

Have you ever run into something so new to you that you have no clue what it could be? What did your brain do?

The fact is, I don't remember. It must have processed it and filed it away somewhere. Utilisation occurs after that. But this type of awareness is probably not on a conscious level, at least not in the storing phase of it.

Then the test is to be given a new thing and document as much of the process as possible. This should be done with at least 20 people; more would be better.
For me, senses come into play, then comparison, then tests, then more of the above.

Ok. Sounds like a way. But so? 20 or more. Why not 50? Or 80? Do the numbers matter? I guess the more the merrier. But Kris, I am just kidding. I think I know where this is going.

I remember thinking about “maximum specificity” in language once. You just keep adding modifiers and criteria and what have you until you’ve eliminated all the worlds in which a thing might not exist (or in which a “proposition might not obtain,” as they say). And I’ve thought about how you might use a counterfactual conditional, and then another, and then another, starting with things you see in front of your face and can feel with your hands, using the rules of things as they can only exist in the abstract, and boiling down your words to mean just one thing, or the fewest things possible.

I’ve also thought about how not all snowflakes can really be different. It’s not impossible for two of them to form identical physical properties, barring location in space and time.

Why do we only need a certain number of points of identity to determine that a fingerprint belongs to someone?

So how do we identify objects? By relating them to things…yeah. By ruling out the possibility that they are other objects, leaving a smaller and smaller set of possible objects that the one in question might be. By comparing them to things we think we know to be true and seeing how they hold up in what we know should be a proper structure of some language. We might do this with objects themselves as they’re presented to our senses, or we could do the same with a description of an object as it’s presented to us in a language, if we use tricky enough language to encapsulate all we need to about the simpler versions of a multitude of kinds of descriptions and objects. Maybe…I dunno. But I think it’s so.
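That elimination idea can be made concrete with a toy sketch. This is my own illustration, not anyone's actual model of cognition: the candidate objects, their feature sets, and the matching rule are all hypothetical, chosen just to show how each observed feature shrinks the set of possible identifications.

```python
# Hypothetical feature inventory for a handful of everyday objects.
CANDIDATES = {
    "apple":  {"red", "round", "edible"},
    "ball":   {"red", "round", "bounces"},
    "tomato": {"red", "round", "edible", "soft"},
    "brick":  {"red", "heavy"},
}

def identify(observed_features):
    """Return the candidates consistent with every observed feature."""
    remaining = set(CANDIDATES)
    for feature in observed_features:
        # Binary acceptance/elimination: keep only candidates
        # that possess the observed feature.
        remaining = {c for c in remaining if feature in CANDIDATES[c]}
    return remaining

print(identify({"red", "round"}))             # several candidates survive
print(identify({"red", "round", "bounces"}))  # narrowed to a single object
```

Each new feature is a small counterfactual test: a candidate that could lack the feature is ruled out, and identification is whatever survives all the tests.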

If you compare an object “you know to be true”, that is, you know its identity, by propositional maximal specificity, by adding modifiers and criteria (your words), then the counterfactual conditions are arrived at by a binary process of acceptance or elimination. The identity of established modes (a given identity) meets this condition. Right?

Your description begs a model, if I understand you correctly, and that model links the two processes of correlation and identification. The specificity, or the degree of offset of similarity, is conditional upon the correspondence of the model.

The model begs maximum and minimum propositional specificity, in order to maximise the correspondence between the propositional specificity (truth) and the factual condition (of the correlation between the two).
I think this is true, if I understand you correctly.

(The one in question (your quote) is the model paradigm.)

I think it starts with something like the idea that eventually, upon observation of enough things, we find persistent threads of similarity, such that one might conclude these are the result of the structure of the way in which we take things in, empirically so to speak.

So you have that as somewhat of a framework. All eyeballs and nerve endings, (barring special cases), function in physically similar ways. Because of that, there will be a similarity that is superimposed on objects that may not share them beyond our representation of them. We can’t know because there’s no getting to that thing in itself.

So we got the fundamental problem of distinguishing objects and identifying objects with one another, on some kind of gradient scale based on whether they are more like, or less like other objects.

About the counterfactuals and the binary thing. Stalnaker thinks that we only need one possible world in order to do a proper counterfactual analysis. Lewis thinks that we need infinite ones. In my mind, this makes Stalnaker a computer programmer logician, and it makes Lewis a philosopher logician. You gotta have room for everything in there, and because that’s necessary, Occam’s razor doesn’t apply. You avoid the problem of having an overall binary system by leaving space for an infinite number of counterfactual analyses over time. Just like in science, we observe, then we keep observing, and sometimes we realize we were wrong. Except in this schema, instead of having a paradigm shift, we just realize we’re in the wrong world and move our understanding of what world we’re in closer to what the world we’re actually in is like through this repeated processing of information.
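The "moving our understanding closer to the actual world" picture can be sketched as repeated filtering of candidate worlds. This is my own rough illustration, not Stalnaker's or Lewis's formalism: the worlds, propositions, and update rule are hypothetical, meant only to show how each observation eliminates worlds inconsistent with it, so the surviving set converges toward the world we are actually in.

```python
# Hypothetical possible worlds, each assigning truth values to propositions.
WORLDS = [
    {"snow_is_white": True,  "swans_are_white": True},
    {"snow_is_white": True,  "swans_are_white": False},
    {"snow_is_white": False, "swans_are_white": True},
    {"snow_is_white": False, "swans_are_white": False},
]

def update(candidates, proposition, observed_value):
    """Drop every world where the proposition disagrees with observation."""
    return [w for w in candidates if w[proposition] == observed_value]

candidates = WORLDS
candidates = update(candidates, "snow_is_white", True)    # two worlds remain
candidates = update(candidates, "swans_are_white", False) # one world remains
print(candidates)
```

Instead of a single paradigm shift, each observation is a small revision: the space of worlds we might be in keeps shrinking as information is reprocessed over time.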

Which is the model paradigm? How we identify objects?

Depends which end the analysis is started from. The paradigm shift occurs when the degree of specificity changes, I suppose. As you go farther from specific to general concepts, the factual conditions change, and there is a more general correlation with the object.

We identify objects by correlating the paradigm with the specific objects.

A blurb of words just went through my head about this. I remember a guy named Hintikka who wrote about identifying objects via acquaintance vs. via stipulation. Lewis wrote about de dicto, de re and de se. I'll think and come back.