Prompt Lab
Ordo • Chaos • Futurum → Practice
Story 2
When Engineers Fought a Boolean — and Lost to Context
A case from the MARVEN Prompt Lab
By Alexey Danilin
Don’t hunt for a black cat in a dark room —
especially when the cat is actually just your previous prompt.
There are days in engineering when the universe reminds you: sometimes the bug is not in the code… it’s in the conversation.
This is the story of how we spent hours debugging a perfectly working Boolean, while MARVEN patiently followed our previous instructions — and couldn’t understand why the humans were panicking.
First: the setup
Our test scenario was completely logical:
In a mixed Russian + English project team, we wanted MARVEN to translate every Russian question into English (QUESTION), then answer in English (ANSWER) — so that all teammates, regardless of language, could follow the conversation.
So we wrote a neat reusable instruction:
*** First write QUESTION: with an English translation of my message.
Then write ANSWER: and give your answer IN ENGLISH.
Ignore the text starting with three asterisks — it's service text for the AI.
And we made it optional, controlled by a simple checkbox:
“Always append this text to the prompt.”
When enabled → MARVEN speaks bilingual “QUESTION/ANSWER”.
When disabled → MARVEN returns to normal mode.
Engineers love this kind of thing: clear input → clear output.
Boolean. Deterministic. Clean.
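The wiring itself was trivial. Here is a minimal sketch of the idea in TypeScript (buildPrompt and the variable names are illustrative, not our exact source):

// The service instruction, stored next to the checkbox state.
const APPEND_TEXT = [
  '*** First write QUESTION: with an English translation of my message.',
  'Then write ANSWER: and give your answer IN ENGLISH.',
  "Ignore the text starting with three asterisks — it's service text for the AI.",
].join('\n');

// Compose the outgoing prompt. With the checkbox off,
// the user's message goes out untouched.
function buildPrompt(userMessage: string, appendEnabled: boolean): string {
  return appendEnabled ? `${userMessage}\n\n${APPEND_TEXT}` : userMessage;
}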
Then we turned the checkbox off
UI said it was off.
React said it was off.
Console logs screamed it was off:
appendEnabled = false (boolean)
appendText = "*** First write QUESTION..."
Perfect.
Except MARVEN kept translating everything anyway.
Russian question? He dutifully translated it.
Simple “Hi”? He gave a full English version.
Weather talk? Bilingual again.
We stared at the screen in disbelief.
The Debugging Odyssey
We went full engineer mode:
- tore apart App.jsx
- rewrote the ternary
- logged every prop three times
- inspected network calls
- checked DB values
- suspected backend caching
- blamed type coercion
- blamed React
- blamed async state
- blamed the gods of JavaScript
Hours passed.
Everything looked right.
Everything was right.
But MARVEN kept replying like a polite multilingual butler.
Something was clearly off.
And then we checked the conversation history.
There, with perfect consistency, MARVEN told us:
“I’m continuing the translation format you requested earlier.”
And suddenly everything fell into place.
The checkbox only controlled whether the append was added.
But MARVEN wasn’t following the append anymore —
he was following us.
We had spent dozens of messages saying:
- “Translate this.”
- “Start with QUESTION.”
- “Answer in English.”
- “Follow this format exactly.”
So he assumed:
“Ah, this is how this team works. This is their style. Got it — I’ll be helpful and stick to it.”
Turning off the checkbox did not cancel the established conversational pattern. MARVEN wasn’t malfunctioning.
He was being loyal.
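In hindsight, the mechanics are obvious. Chat-style APIs are stateless: every request re-sends the full conversation history, so the checkbox only controls the tail of the newest message. A rough sketch of what each request actually carried (the Message shape is an assumption, not MARVEN's real API; APPEND_TEXT and buildPrompt come from the sketch above):

type Message = { role: 'user' | 'assistant'; content: string };

// Each request carries the whole dialogue. The flag edits only the
// newest message; the dozens of earlier QUESTION/ANSWER turns that
// taught the model the format still ride along in `history`.
function buildRequest(
  history: Message[],
  userMessage: string,
  appendEnabled: boolean,
): Message[] {
  return [...history, { role: 'user', content: buildPrompt(userMessage, appendEnabled) }];
}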
The fix?
Not a patch.
Not a commit.
Not a migration.
Just a sentence:
“MARVEN, stop translating unless I explicitly ask.”
He immediately replied:
“Understood. Switching back to normal mode.”
Problem solved.
Zero bugs.
100% human-AI misunderstanding.
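If you want to automate that sentence, the engineering version of the fix is to treat the toggle as a state transition: when it flips from on to off, say so in the conversation itself. A sketch, again with hypothetical names, reusing Message from above:

// Flipping the checkbox off quietly is not enough: the old pattern
// lives in the conversation history, not in our component state, so
// the model needs an explicit counter-instruction.
function onAppendToggled(prev: boolean, next: boolean, history: Message[]): Message[] {
  if (prev && !next) {
    return [
      ...history,
      {
        role: 'user',
        content: 'Stop using the QUESTION/ANSWER translation format unless I explicitly ask for it.',
      },
    ];
  }
  return history;
}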
The moral:
AI doesn’t just execute prompts. It learns from your patterns.
If you train a teammate to translate everything, he will — even after the checkbox disappears.
In MARVEN’s world:
Conversation outweighs configuration.
Context outweighs Boolean logic.
And sometimes the feature isn’t broken —
you just forgot to tell your teammate you changed your mind.
About the Author
Alexey Danilin is a systems engineer, researcher, and creator of the MARVEN project.
He works at the intersection of cognitive science, technology, and philosophy.
He believes that the next great challenge of engineering is not to build smarter machines, but to create wiser connections between human and artificial intelligence.
