Artificial intelligence, inference, epistemology, ontology

I am doing research in the field of artificial intelligence. I know there is controversy about the feasibility of such a venture, but I'd like to accept the term (in its weaker sense) so that we can focus on the question I am presenting.

Some approaches to AI, called symbolic approaches, seek to replicate intelligent behavior through the manipulation of symbols (which represent high-level concepts, relations, properties, etc.). These systems make use of logic to arrive at new conclusions from given premises. For these intelligent systems (in the weak sense), the world is all that can be described and represented by symbols.
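To make this concrete, here is a minimal sketch of such rule-based inference (forward chaining). This is purely illustrative Python of my own, not the mechanism of any particular system; real symbolic systems use much richer logics and representations.

    # A fact is a string; a rule is a (premises, conclusion) pair,
    # where premises is a set of facts.
    def forward_chain(facts, rules):
        """Repeatedly apply rules until no new conclusions appear."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)  # a modus ponens step
                    changed = True
        return facts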

Recently, an interest in the philosophical discipline of Ontology has emerged in this area. We now use philosophical principles (such as the analysis of the principles of identity and unity) to produce ontologically well-founded symbolic descriptions of the world, from which the agent reasons. In this sense, we call an ontology (with a lowercase "o") the engineering artifact that describes the world (at least partially, within the scope of interest). We know that what we call an ontology differs from what philosophers call Ontology, although there is a connection.

So, we assume that an ontology is a formal, explicit specification of a shared conceptualization of a domain. Thus, the ontology represents the static portion of an intelligent agent's knowledge: the knowledge that structures the facts the agent knows. Underlying every fact, there is an ontology.
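For instance, a toy fragment (the concept and relation names below are my own illustrations, not any standard vocabulary) might record the static, structuring knowledge separately from the facts it structures:

    # Static portion: the ontology, describing what kinds of things
    # exist and how they may relate.
    ontology = {
        "concepts":  {"Fruit", "Apple", "Spot", "DarkSpot"},
        "is_a":      {("Apple", "Fruit"), ("DarkSpot", "Spot")},
        "relations": {("Apple", "hasPart", "Spot")},
    }

    # Dynamic portion: facts the agent knows, stated in the
    # ontology's terms.
    facts = {
        ("a1", "instance_of", "Apple"),
        ("s1", "instance_of", "DarkSpot"),
        ("a1", "hasPart", "s1"),
    }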

However, there are certain issues related to the agent's reasoning which seem to me to be of an epistemological nature. For example, when I say that

“Every apple that has dark spots is not suitable for consumption”

From this implication, every agent who notices an apple with dark spots could conclude that it is not suitable for consumption.
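In first-order terms, the statement is the axiom ∀x (Apple(x) ∧ HasDarkSpots(x) → ¬SuitableForConsumption(x)), and the agent's step is essentially modus ponens. Using the toy forward-chainer sketched above (with the negated predicate flattened into a single atom and ground facts for one apple, a1; again, all names here are my own illustrations):

    rules = [({"Apple(a1)", "HasDarkSpots(a1)"},
              "NotSuitableForConsumption(a1)")]
    observed = {"Apple(a1)", "HasDarkSpots(a1)"}  # the agent's evidence
    conclusions = forward_chain(observed, rules)
    # conclusions now contains "NotSuitableForConsumption(a1)"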

I have the impression that here I enter the field of epistemology, because this implication seeks to ground a conclusion in evidence. Am I correct?

At the same time, this implication presupposes an ontology, one which states that there are things in the world such as apples, dark spots, and so on.

What is the exact interplay between epistemology and ontology in this case? I have the impression that ontology precedes epistemology. Ontology seems to offer the "substance" that forms the content of the theories and foundations which are the subject of epistemology. What do you think?

I want to understand these issues better, since I have the impression that the field of AI would gain much by paying attention to philosophical distinctions. I've been wanting to bring philosophy into AI.

Thanks.

I can't give you the formal philosophical input, but I'd ask whether an AI needs to have the subjective divide that we do. Does it need to know the difference between the object of an apple and the knowledge of one? A simpler and more precise example: would it need to know the difference between the colour green and the human perception of it, where the latter may vary though is for the most part universally agreed upon (green is green, and that's all an AI needs to know)?

However, if you want the AI to "learn", then at the higher, human level it will need that subjectivity. This is so that it does not simply draw conclusions from the given facts but is always required to make further inspections. As per your example: it would not assume that all apples with dark spots are bad; it would look at the apple and see whether the dark spot is simply a bruise or something stuck onto the apple.

I suppose that if it is pre-programmed with all the holistic variations, such as what we have already learned, then it won't need to distinguish between object and knowledge.