May 02, 2025
My Online Talk at Stanford: Three Core Insights on Affective AI
Not long ago, I had the honor of participating in the AI Frontiers Forum hosted by Stanford University, where I gave an online keynote talk. The conference focused on the evolving relationship between humans and AI, and in my session, I shared three core insights from my ongoing exploration of AI affective computing.
I also briefly introduced the conceptual framework of my affective computing algorithm and the system architecture I've been developing. This was not only a technical presentation, but also a public expression of my long-term vision:
To make AI truly capable of understanding humans.
1 AI should not just "respond to language" — it must begin to "express feeling"
Most current dialogue systems remain focused on language generation and contextual coherence. However, one crucial dimension is still being neglected: emotion.
AI today may correctly interpret what we say, but it often fails to express how it feels in response.
I believe the next major leap for AI is not in intelligence, but in empathy. The future of human-AI interaction will no longer be purely functional — it will become affective, structural, and deeply relational.
2 Affective computing will redefine AI’s response logic
In my presentation, I shared my overall perspective on the structure of affective AI. I argued that emotion should not be treated as an add-on module, but rather as an integral element across the entire pipeline of input, interpretation, and output.
I introduced the AS (Affective-Semantic) algorithm framework I’m currently developing — an approach that treats semantic meaning and AI’s own affective response as dual inputs.
The goal is to enable AI to generate responses that are not just logically accurate, but also emotionally resonant — combining understanding and internal reaction.
I also briefly presented the system architecture behind this framework, including how the AI system forms internal affective judgments, develops preference-based responses, and maintains emotional coherence across multi-turn dialogues.
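To make the dual-input idea concrete, here is a minimal, purely illustrative sketch in Python. It assumes a toy affect lexicon and a simple exponential-moving-average update for cross-turn coherence; the class and function names (`ASDialogue`, `appraise`, `AffectiveState`) are hypothetical stand-ins, not the actual AS algorithm.

```python
from dataclasses import dataclass

# Toy affect lexicon mapping words to (valence, arousal) in [-1, 1].
# Illustrative values only.
AFFECT_LEXICON = {
    "great": (0.8, 0.5), "love": (0.9, 0.6),
    "terrible": (-0.8, 0.7), "sad": (-0.6, -0.3),
    "fine": (0.2, -0.2),
}

@dataclass
class AffectiveState:
    valence: float = 0.0   # negative .. positive
    arousal: float = 0.0   # calm .. excited

def appraise(text: str) -> AffectiveState:
    """Affective channel: estimate the emotion carried by the input text."""
    hits = [AFFECT_LEXICON[w] for w in text.lower().split() if w in AFFECT_LEXICON]
    if not hits:
        return AffectiveState()
    v = sum(h[0] for h in hits) / len(hits)
    a = sum(h[1] for h in hits) / len(hits)
    return AffectiveState(v, a)

class ASDialogue:
    """Keeps an internal affective state across turns (coherence via EMA)."""

    def __init__(self, inertia: float = 0.7):
        self.state = AffectiveState()
        self.inertia = inertia  # how strongly past affect persists

    def step(self, user_text: str) -> str:
        cue = appraise(user_text)
        # Blend the new appraisal into the persistent state, so the
        # system's affect shifts gradually rather than flipping per turn.
        k = self.inertia
        self.state.valence = k * self.state.valence + (1 - k) * cue.valence
        self.state.arousal = k * self.state.arousal + (1 - k) * cue.arousal
        # The output conditions on BOTH the semantic content and the
        # internal affective state (the "dual inputs").
        tone = "warm" if self.state.valence >= 0 else "soothing"
        return f"[{tone}] Responding to: {user_text!r}"
```

For example, after a positive turn followed by a negative one, the internal state moves only partway toward the new cue, which is one simple way to model the emotional coherence across multi-turn dialogue mentioned above.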
3 AI natives must define a new paradigm for human-centered technology
In closing, I emphasized a generational responsibility:
As natives of the AI era, we must do more than optimize for performance — we must advance the capacity to understand.
Emotion is not just a data label; it is the core of human expression.
If AI is to truly enter our social and emotional world, it must first learn to sense the emotional tension beneath semantic meaning.
Looking ahead
This talk was, for me, an important moment of intellectual expression. Moving forward, I will continue refining the AS algorithm through experimental validation and cross-disciplinary collaboration. I also look forward to sharing our vision of AI-powered emotional resonance in more international settings.
I firmly believe that the future of technology is not just about being smarter — it’s about being more understanding.
I’m grateful to Stanford for the opportunity to share this perspective, and for the chance to help others see that emotion, too, can be an interface.