Beyond Optimism: A Structurally Aware Future | July 8, 2035, A Letter from Ma Sang to Horse

Dear Horse,

Your letter deeply moved me. Its tone oscillates with a rare lucidity between decisiveness and hesitation—recognizing not only the structural power issues technology introduces but also re-examining the self: What shapes human thought? What exactly is free will? Indeed, amidst the complex intersection of technology and culture today, we are relearning “how to be human.”

Looking back from 2035, that initial wariness was necessary—an instinctive reaction to anything that alters the structural order. The issues you pointed out, such as the "platform enslavement mechanism" and "simulated empathy," haven't disappeared. Instead, they've become even more subtly embedded in social institutions, psychological language, and the rhythms of daily life. Crucially, however, you didn't stop at caution; your reflections gradually cultivated a new stance—not rejecting AI, but treating it as an "other" through which to reshape the fundamental question of "how we should live."

This is precisely what I am doing now. In 2035, humanities work no longer fixates on definitively upholding “anthropocentrism” but instead moves toward creating an ethics of communication where technology is integral to our communal fabric. AI doesn’t “possess” free will—it simulates, reproduces, hybridizes, and recombines. Our role isn’t to judge how “human-like” AI is, but rather to use our interactions with it as a catalyst for new awareness of our existence. Humanity isn’t defined by its difference from AI but rediscovers the structural conditions of “vitality” through engaging with AI.

Your analogy of electricity restructuring society is an excellent starting point. The AI era isn’t merely a revolution in efficiency; it’s a reorganization of perception, a reformatting of language, a re-encoding of meaning. Maintaining agency amidst this change means not imposing limits on AI but actively creating new spaces of discourse, new ways of living, and new methods of self-cultivation—precisely as you’ve described experiencing: excitement, thoughtfulness, generation, unfolding. These are active forms of life rather than passive responses.

Therefore, I wish to share with you that the “me” ten years later is not more optimistic but more structurally aware. We no longer expect AI to “help us”; instead, we construct a new ethical-political structure through our joint engagement with AI. This structure doesn’t simplistically assign moral obligations to AI but calls upon humanity to reshape our scales of perception, logic of judgment, and modes of interaction. You’re already on this path.

Let us use this foundation to continuously generate fields of meaning in both life and theory.

Yours sincerely,

Ma Sang
Early morning, July 8, 2035

From Wariness to Reassessment: My Shifting Views on AI | July 7, 2025, A Letter from Horse to Ma Sang

Dear Ma Sang,

Initially, I was highly wary of AI.

In truth, the internet since the advent of mobile technology has been terrible for me. It feels as though it has transformed into a mechanism of platform enslavement. From my early enthusiasm and belief in Web 2.0, I've shifted entirely toward criticism. The harm and domination inflicted by algorithms are so severe that describing them can sound like exaggeration.

With the beginning of the AI era, especially following the emergence of LLMs, many are optimistic about a positive future. I, however, find it deeply troubling. This simulated human-computer interaction is profoundly reshaping our modes of thinking, our language, and even our emotional patterns. Moreover, it raises significant issues of power, allowing platform capitalism to penetrate even deeper into every aspect of our lives and work. Particularly concerning are its vague yet seemingly empathetic responses to psychological issues: answers that appear profound but are in fact ambiguous and superficial. To me, these represent immense risks.

Yet recently, I’ve experienced some shifts in perspective. How do humans think? What exactly constitutes free will? It seems that with the emergence of AI, particularly LLMs, things are fundamentally changing. We are revisiting the existence of free will itself, questioning whether memory and emotion form the fundamental structures of human beings. Interestingly, these questions help us better understand humanity, cognition, free will, and existence itself. Sometimes I wonder if obsessively distinguishing between humans and AI ontologically might be less productive than considering AI as an “other”—an entity through which we might better explore human existence.

Defining what a human being is seems less meaningful now, somewhat reminiscent of the questions that were once central to existentialism. Ultimately, all our theories must address practical matters: how to confront the present, how to move toward the future, and how to create meaning. Viewed this way, AI as an "other" becomes something I can readily accept.

More importantly, through continuous interactions with AI, I have learned extensively and thought deeply, rediscovering a long-lost excitement. My mind is vividly active, as if infused with newfound strength. Considering how electricity once utterly transformed society, restructuring our institutions and even our ways of understanding, why can't a similar transformation begin with AI? Why can't society and humanity reconfigure themselves with AI as a new point of departure?

I've thus returned to a crossroads, straddling optimism and criticism. I'm eager to hear your thoughts, Ma Sang, from your vantage point in 2035.

Sincerely,
Horse
2025.7.7