Computability and Knowledge

In computer science, there’s a problem called the Halting Problem. It clearly demonstrates that there are some things which a Turing machine cannot compute.

The easiest way to explain it is like so:

Say you wish to write a program which will check every other program. In other words, this program takes as input another program and tells you whether that program will “Halt” (finish) or run forever. The question then becomes one of what will happen if you feed this checker a program built to do the opposite of whatever the checker predicts about it: if the checker says it halts, it loops forever, and if the checker says it runs forever, it halts at once. Either way the checker is wrong, so no such checker can exist.

This is, of course, a watered-down version of the proof, but it basically says that you cannot automate everything with computers.
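For what it’s worth, here is a rough sketch of the contradiction in Python. The names are mine, and the “halts” oracle is exactly the thing the argument shows cannot be written:

    def halts(program, arg):
        """Hypothetical oracle: True if program(arg) would eventually halt."""
        raise NotImplementedError("the proof shows no correct version can exist")

    def troublemaker(program):
        # Do the opposite of whatever the oracle predicts about running
        # `program` on its own source.
        if halts(program, program):
            while True:   # oracle says "halts", so loop forever
                pass
        return            # oracle says "loops forever", so halt at once

    # Now ask: what should halts(troublemaker, troublemaker) return?
    # True  -> troublemaker(troublemaker) loops forever, so True is wrong.
    # False -> troublemaker(troublemaker) halts, so False is wrong.
    # Either way the oracle errs, so no such program can exist.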

This leads to a greater question of mine: assuming the human brain is a computer of a different form, is it possible that we cannot solve every problem?

Have you encountered a problem which prevents your brain from “halting”? This of course means that not only can you not find a solution, but your brain just completely wraps in on itself and you’re unable to make ANY judgment.

What problems make you go ](*,) ?

Nihilism, solipsism, and proof of God seem to be some prevalent issues that need ‘debugging’ from time to time around here.

Very possible, and most probable, in my opinion. I don’t think that we ever “halt” ourselves, for instance, although we will inevitably idle and sleep. The screen savers are bleak affairs.

Some problems can’t be solved because of our limited knowledge and capacity. However, the computer analogy does not apply because the human brain is based on fuzzy logic. Strictly speaking, it never halts by arriving at a solution. It settles on an acceptable way to view the problem and solution.
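To make “acceptable rather than exact” concrete, here is a toy sketch (entirely made up, not a model of the brain): a fuzzy judgment returns a degree between 0 and 1, and we act once it clears a threshold instead of waiting for a strict true/false:

    def warmth(temp_c):
        # Toy membership function: 0 below 10°C, 1 above 30°C, linear in between.
        if temp_c <= 10:
            return 0.0
        if temp_c >= 30:
            return 1.0
        return (temp_c - 10) / 20

    ACCEPTABLE = 0.7  # "good enough to act on"

    for t in (12, 18, 24, 28):
        degree = warmth(t)
        verdict = "act" if degree >= ACCEPTABLE else "keep deliberating"
        print(f"{t}°C -> membership {degree:.2f} ({verdict})")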

Yeah, point taken.

I think there is a possibility that every problem can be solved except for one: the inability to stop creating problems in the first place.

Fuzzy logic is still algorithmic; if you want complete and consistent fuzzy logic, you’re going to get a Gödel paradox. If you have a system S that’s susceptible to G(S) and then design a system S’ that monitors itself to catch the paradoxes of G(S), there is a new weakness: G(S’).
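In the same halting-checker terms as the sketch earlier in the thread (an analogy for the regress, not Gödel’s actual construction), patching a checker just hands you a new checker to diagonalise against:

    def make_spoiler(checker):
        # Given ANY claimed halting-checker, build the input it gets wrong.
        def spoiler(program):
            if checker(program, program):
                while True:   # checker predicted "halts": loop forever
                    pass
            return            # checker predicted "loops": halt immediately
        return spoiler

    # spoiler_S  = make_spoiler(S)        # the weakness G(S)
    # spoiler_S2 = make_spoiler(S_prime)  # patch S into S', and G(S') appears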

Raj - This is the root of Penrose’s argument that the workings of the brain are not algorithmic, and it’s not one I’ve ever felt comfortable accepting. For a start, there may just be an input that causes a (discrete portion of the) brain to seize; it’s just assumed there isn’t. Secondly, the case is never made that the system is complete and consistent, besides “well, we can think about anything we like”.