It’s not solely that there is at least one. There is a way to go from “there is at least one, and everyone knows it” to “everyone knows there are not just 2”, “everyone knows there are not just 3”, etc.
Yes, and that’s what the N days are for.
Yeah.
So even if you think what the guru says was useless, it still somehow turns out that waiting N days communicates something.
Yes. But hey, at least I tried to quixotically take on AI all by myself!
And in doing so, I came to see what it couldn’t make me see by itself, I think because it didn’t think it necessary: why it was right.
I think you could stand to learn how to spell things out better. When someone asks you to spell out the blue-eyed people’s reasoning, they’re not asking you to quote yourself and tell us “originally I replaced originally with etc etc, but then I replaced etc etc with originally originally”. Nobody knows what that shit means; we just want to know what the blue-eyed people’s reasoning process is, lol. You made this way more difficult than it had to be to figure out what you think.
I don’t know if the AI we have freely available to us can figure this one out. If it can, it’s probably only because similar problems, and the answers to them, appear in its training data.
This is a common puzzle so they already all know the answer just because they’ve seen it
You certainly have a point. In this case, there were multiple reasons for that. The particular reason was that I didn’t want to spoil things for people who want to figure it out themselves. And the general reason was that I’ve developed my own form of the art of esoteric writing, again for multiple reasons. In fact, one reason is (again) that I don’t want to spoil things for people who could figure it out for themselves. Another is that I don’t think getting a message across is better than not getting it across—though no worse, either.
Here’s Google Gemini 2.5 Pro’s full “Show thinking” tab for your OP, by the way:
Summary
Dissecting The Puzzle
I’ve been analyzing the core mechanics of the blue-eyed islanders puzzle. My focus is on understanding the deductive process, especially the role of perfect logic within the islanders. I’m exploring how shared knowledge and the lack of a known shared trait contribute to the problem’s solution. Currently, I’m examining scenarios to identify crucial initial assumptions.
Exploring Initial Conditions
I’m now zeroing in on the initial state before the Guru’s announcement, attempting to quantify what each islander knows and doesn’t know. The challenge lies in translating the shared ignorance into actionable logical steps. I’m focusing on crafting the crucial mental steps, from Bob’s viewpoint, before the pivotal midnight. This approach will help me understand the dynamics before the Guru changes the island state.
Refining the Core Logic
I’m now zeroing in on the importance of common knowledge within the puzzle’s structure. Understanding this shared awareness and the information’s impact on everyone is crucial. My focus now is how Bob perceives the situation after hearing the Guru’s statement, and the implications of it. I’m actively simplifying scenarios to facilitate deductive reasoning.
Analyzing Initial Assumptions
I’m now focusing on how the Guru’s statement, seemingly redundant, alters the islanders’ shared knowledge. Each islander, hearing it, must now re-evaluate their beliefs based on the assumption that everyone else is a perfect logician. I’m dissecting Bob’s thought process, specifically the impact of common knowledge on his initial uncertainty.
Deciphering Shared Implications
I’m now focusing on how the Guru’s announcement shifts the islanders’ initial uncertainty into collective, shared knowledge. The seemingly redundant statement alters the baseline, enabling each islander to reason inductively. My latest efforts involve analyzing how the absence of any early departures triggers a chain of deductions within the blue-eyed population.
Deciphering Island Dynamics
The island dynamics have captivated me. Each islander’s deduction now relies on the absence of departures after the Guru’s announcement. It becomes a critical signal. I’m actively building a framework to understand how the waiting period influences the deduction process. My approach seeks to formalize the deductive logic within this complex island setting.
Deconstructing Island Logic
I’m synthesizing the pattern that reveals how shared knowledge and the seemingly redundant statement impact the islanders. My focus is now the crucial role that absence of departure plays in driving deductions, leading me to formulate the inductive step. The application of induction on this puzzle is proving to be a rewarding exercise.
Deconstructing Island Logic
I’m now zeroing in on the crucial inductive leap that the absence of departures after the Guru’s announcement enables. My current focus centers on how the blue-eyed islanders’ collective shared knowledge dictates their deduction strategy. The inductive argument, rooted in shared awareness and perfect logic, unlocks the core of the islanders’ riddle.
Analyzing Inductive Proof
I’m now fully immersed in the inductive proof, meticulously examining each step to ensure its robustness. The key here is to confirm the common knowledge assumption’s validity and see if there are any subtle flaws in the chain. I’m actively testing the case for one or two blue-eyed islanders.
I don’t know why the AI is calling it inductive. It should be purely deductive.
So when someone asks you to spell something out, you think “it doesn’t really matter if I say the most incomprehensible thing imaginable”? They’re asking you to make your thoughts clear and you just… don’t care?
I need to remember that you’re like that so I can avoid this ever happening in the future. I hope I’m never tempted to ask you for your thoughts again.
That made me laugh out loud. I take it you’re not laughing, though…
Look, at first I thought you were just asking me to spell out the ‘[if]’ part. But yeah:
They’re asking you to make your thoughts clear and you just… don’t care?
I half don’t care (nihilist) and half think I shouldn’t have to rephrase.
I need to remember that you’re like that so I can avoid this ever happening in the future. I hope I’m never tempted to ask you for your thoughts again.
It seems I really pissed you off. That was not my intention. However, it could also be an opportunity. I appreciate your presence on this site, both as a user and as a moderator. I hereby apologize for my eccentricities.
You don’t need to apologise. I really just wanted you to spell out how a blue-eyed person would know if the guru said nothing; I just got a bit flabbergasted that you were talking about original this and etc etc that instead of just laying out the blue-eyed person’s reasoning.
This is a common puzzle so they already all know the answer just because they’ve seen it
Oh, you’re right. AI knows the name of the puzzle and can give a full clear explanation in about 2 seconds.
I have tested it on difficult math problems, though, and not only does it not get them right, it makes up completely arbitrary fake answers.
It does make me wonder, however, what the expensive models that cost like $1000 per query can do. Apparently they can answer some PhD-level questions.
Yeah I’ve heard on the grapevine that their reasoning abilities seem to just be getting better and better.
My boss thinks super intelligence is coming within 5 years.
I’m not one to make that kind of prediction, but I could see an argument for it. Things are progressing rapidly.
I would bet on it within 20 years. But no one can be sure until they know exactly what it takes.
We do know, however, that if we have the hardware capabilities, we can simply scan a human brain and create a digital copy that runs thousands of times faster. Hypothetically, that’s a straightforward solution.
I don’t think we’re anywhere close to a stage where we can simulate the physics and chemistry inside a brain faster than it can actually happen irl. In fact I’d wager that might never happen.
Yeah there would be all kinds of complications too technical for us to get into. It’s a quintessential “easier said than done”.
The problem with simulating any piece of the universe inside of the universe is, you have two options:
Simulate a far simplified version of it
Or simulate it fully, but it takes you orders of magnitude more matter and orders of magnitude more time to calculate any little piece of it.
It’s vaguely possible we could simulate a simplified brain in the future, but it might be missing a lot of features that turn out to be important.
I don’t know why the AI is calling it inductive. It should be purely deductive.
It is a proof by mathematical induction, in which you prove that something is true for all x by showing that
- if it is true for x, it is also true for x+1; and
- it is true for x = 1
Which is how this proof works:
Spoiler
- If x blue-eyed people would leave on day x, then x+1 blue-eyed people could conclude from their not leaving that the only eyes they can’t see (their own) are also blue, and so they would leave on day x+1; and
- The guru’s statement makes it a given that on an island with 1 blue-eyed person, that person would leave on day 1.
Therefore 100 blue-eyed people leave on day 100.
Which is kind of what @Zeroeth_Nature was getting at here:
Spoiler
In the scenario where there are three blue-eyed people, for example, not everyone knows everyone knows everyone knows there’s at least one blue-eyed person.
The guru’s statement establishes the base case that lets the inductive proof work.
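The inductive proof can be encoded directly as a short recursion. Here's a minimal sketch (the function name `leaves_on_day` is my own, not anything from the puzzle):

```python
def leaves_on_day(n):
    """Day on which n blue-eyed islanders all leave, after the Guru's
    announcement that at least one islander has blue eyes."""
    if n == 1:
        # Base case: a lone blue-eyed islander sees no other blue eyes,
        # so the Guru can only mean them; they leave on day 1.
        return 1
    # Inductive step: each of the n islanders sees n-1 blue-eyed people.
    # If those n-1 were all of them, they'd have left on day leaves_on_day(n-1).
    # When nobody leaves that day, each concludes their own eyes are blue
    # and leaves the following day.
    return leaves_on_day(n - 1) + 1

print(leaves_on_day(100))  # prints 100
```

The two branches mirror the two bullet points exactly: the `n == 1` branch is the base case the Guru's statement provides, and the `+ 1` is the inductive step.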