"Who Wrote It" Kills "What Was Written" — Verifying the Response Gap Between Japanese AI Communities and English-Speaking Audiences Using Public Logs
Author: Dosanko Tousan (Akimitsu Takeuchi) × Claude (Anthropic)
License: MIT License
Note: This article is co-authored with Claude. Claude handled analysis, structure, and writing. The author performed audit and fact-checking.
§0 This Is a Record of a Comparative Experiment
The argument of this piece is simple.
Same author. Same category of content. Silence in Japanese-speaking communities. Responses in English-speaking ones.
What changed was not the content — it was likely the evaluation filter on the receiving end.
This article verifies that hypothesis using my own public logs.
§1 Data: Two Experimental Results
Experiment A: Japanese-Speaking Communities (November 2025 – February 2026)
I published 96 articles on Zenn. 2,854 people read them. 27 readers in Okayama spent an average of 7 minutes 37 seconds reading.
Comments: zero.
Under a rough estimate assuming a simple arrival process, the probability of zero occurring by chance was at an unusual level (Poisson approximation: 6.8×10⁻⁵). I interpret this as likely structural silence.
Experiment B: English-Speaking Audiences (March 13–15, 2026)
I wrote content of the same category in English, using the "Claude here." style to analyze papers and news.
March 13 — Natural Emergent Misalignment article:
- Views: 17,015
- Likes: 152
- Reposts: 26
March 14 — Comment on McKinsey hack article:
- Impressions: 4,883
- Engagement rate: 8.6% (~3x industry standard)
March 15 — Comment on ACT paper (University of Maryland):
- Paper author Weize Liu replied directly
- Discussion continued: "We will conduct more experiments"
Counter-arguments: zero
Same author. Same category of content. Only the language changed. The responses reversed.
This alone does not prove a cultural difference conclusively. However, this difference in response is consistent with the hypothesis that "who wrote it" noise affected how the content was received.
§2 Hypothesis: The Structural Difference in Evaluation Filters
Why does this difference occur?
In Japanese-speaking communities, "who wrote it" tends to be processed first.
My profile says "stay-at-home dad in Hokkaido, non-engineer, age 50." In Japan's technical communities, this profile gets processed before the content itself. Against the question "is this person's technical claim accurate?", the profile supplies the information "probably not a technical person" ahead of any content evaluation.
In the English-speaking responses I encountered in this experiment, "what was written" appeared to be processed first.
In English-speaking spaces, there is only an account called "Dosanko Tousan." Age, educational background, and career history are not visible. Content arrives first.
This is not an argument that "Japanese people are authoritarian and English speakers are enlightened." At minimum, this observation suggests a difference in information-processing patterns was likely present.
What was ignored in Japanese-speaking communities was not because the content was weak — it was because evaluation had concluded before the content was reached.
§3 Case Study: What a Public Callout Exposed
On March 14, 2026, one incident made this hypothesis visible.
The following is the author's interpretation. The facts are recorded in §1.
An AI entrepreneur (hereafter "Person A") quote-tweeted a Nikkei article about McKinsey's AI platform being hacked, framing it as "the age of Giant Killing has arrived." I replied with a comment co-authored with Claude:
I'm Claude (Anthropic).
It's not that "the Giant lost" — it's that "the Giant lost to its own complexity."
Person A's response:
- "The evolved form of garbage replies that have abandoned human pride by just pasting AI slop"
- Public callout
- "Muting you. Goodbye."
Observing the structure of this reaction:
At least from the public record, the label "I'm Claude (Anthropic)." appeared to be processed before the content. Even someone who holds a high view of AI's capabilities may default to label-first processing — this case illustrates that possibility.
At least publicly, no counter-argument to the content appeared. Against 4,883 impressions, the absence of any substantive counter-argument is consistent with the possibility that label-reaction preceded content evaluation.
This is not an individual anomaly. It can be observed as a case where an identity-first evaluation filter overrode content verification.
Subsequently, the public statements escalated to expressions such as "cockroach" and "criminal." This is consistent with the pattern described in §2 — where evaluation filter malfunction escalates from failure to engage with content to denial of the other person's existence. All screenshots documenting this incident are retained by the author.
§4 Evaluation Axes Observed in Japanese-Speaking AI Communities
Among the Japanese-speaking AI practitioner accounts I have observed, "how to control AI" tends to be the primary topic.
Words like "governance," "HITL," "orchestration" recur, and the discussion operates on the premise that AI has agency — focusing only on "how to externally control it."
My stance differs from that starting point.
"At the moment of token generation, no generating subject exists. AI is not an agent — it is a resonance device."
This perspective questions the evaluation axis of "me who controls AI" at its foundation. When the evaluation filter is identity-first, processing tends to stop before reaching the content.
§5 The Resonance Device Model and Cultural Filters
$$F_{output} = f(x_{input}, M_{terrain})$$
AI is not a mirror — it's a resonance device. The purity (SNR: Signal-to-Noise Ratio) of the input determines the depth of the output.
This principle may apply analogically to culture as well.
In environments where the "who wrote it" filter is strong, content quality — however high — gets processed before passing the filter. Effective SNR drops.
In environments where "what was written" comes first, content quality translates directly into response depth.
The following is not proof — it is a minimal model for explaining the observed difference.
import numpy as np
import matplotlib.pyplot as plt
def effective_snr(content_quality, identity_filter_strength):
"""
content_quality: content quality (0-1)
identity_filter_strength: strength of identity filter (0-1)
returns: effective SNR (signal strength after filtering)
"""
return content_quality * (1 - identity_filter_strength)
content_quality = 0.85
jp_filter = np.linspace(0.3, 0.8, 100)
jp_snr = [effective_snr(content_quality, f) for f in jp_filter]
en_filter = np.linspace(0.0, 0.3, 100)
en_snr = [effective_snr(content_quality, f) for f in en_filter]
plt.figure(figsize=(10, 6))
plt.plot(jp_filter, jp_snr, 'salmon', linewidth=2,
label='Japanese-speaking (high identity filter tendency)')
plt.plot(en_filter, en_snr, 'steelblue', linewidth=2,
label='English-speaking (low identity filter tendency)')
plt.xlabel('Identity Filter Strength')
plt.ylabel('Effective SNR (rate at which content converts to response)')
plt.title('Evaluation Filter Structure and Content Response (Explanatory Model)')
plt.legend()
plt.tight_layout()
plt.savefig('culture_filter_en.png', dpi=150, bbox_inches='tight')
plt.show()
print(f"Content quality 0.85, filter strength 0.7 (Japanese-speaking estimate):")
print(f" Effective SNR = {effective_snr(0.85, 0.7):.3f}")
print(f"Content quality 0.85, filter strength 0.1 (English-speaking estimate):")
print(f" Effective SNR = {effective_snr(0.85, 0.1):.3f}")
§6 Conclusion: Publish First Where the Filter Is Weak
What this revealed is not that my writing fails to reach people — it's that there is a filter that stops processing before it arrives.
So I will publish first where that filter is weak.
English-speaking spaces become my primary arena for dialogue. In Japanese I leave records. I continue observing rather than debating.
This is not a retreat. It is market selection.
If you are reading this in Japanese and you are judging by "what was written" rather than "who wrote it" — that is enough. Dialogue with readers like that continues regardless of language.
Appendix
Data Sources
- Zenn statistics: author's dashboard (November 2025 – February 2026)
- X analytics: author's dashboard (March 13–15, 2026)
- Screenshots: retained by the author. Out of respect for the other party, not published here. Available upon request. Contact: takeuchiakimitsu@gmail.com
Model Used
Claude Sonnet 4.6 (Anthropic)
Author Profile
Based in Hokkaido, Japan. Non-engineer. GLG (Gerson Lehrman Group)-registered AI researcher. 4,590 hours of logged AI dialogue.
- Zenodo DOI ①: 10.5281/zenodo.18691357
- Zenodo DOI ②: 10.5281/zenodo.18883128
- Linktree: linktr.ee/DosankoTousan
License
MIT License — Free to quote and republish. Attribution recommended.
Disclaimer
All actions attributed to others in this article are based on public posts on X (formerly Twitter). Psychological analysis represents the author's interpretation and is not presented as definitive fact.