Ecologies of Languaging, AI, and Humanity in Education
- Angel M.Y. Lin
- Aug 15
- 19 min read
Prof. Angel M. Y. Lin, Chair Professor of Language, Literacy, and Social Semiotics in Education, The Education University of Hong Kong
Prof. Angel M. Y. Lin is a leading scholar in the fields of English language education and critical literacies. In this interview, she shares her insights with Mobina Sahraee Juybari on the evolution of her research journey, the urgent challenges and opportunities in the field of language and literacy education, and the need for human-centered approaches to AI and assessment.
Early encounters and philosophical inspirations
Mobina: Thanks very much, Angel, for joining us today. It's such a pleasure to have you in the Leveraging Languages website interview series.
To start off, I would love to know more about your journey. One of the articles through which I got to know about you was your co-authored article about "Appropriating English, Expanding Identities, and Revisioning the Field ....." It was really interesting and powerful for me to see how your personal journey, your life trajectories, actually connected to what you're doing in your research nowadays. So, could you tell us a little bit about your research journey and how these life experiences inform your work in applied linguistics?
Angel: Right, that article was almost 20 years ago, and I co-authored that with my colleagues at OISE; we were fellow PhD students. All of us were from different parts of the world—Nobu from Japan, me from Hong Kong, Mehdi from Iran, and Wendy from mainland China. So we were telling our stories of how we became English teachers, how we became interested in English learning, becoming confident. And then, when we encountered the ideologies about English language teaching, like English native speakers, monolingual approaches, we felt we needed to do something to contribute to reshaping, rethinking, and challenging these ideologies. So that was an early start for all of us in the 2000s. We graduated in 1996.
Mobina: It's actually very inspiring to see how these experiences that you had at the beginning have informed what you're doing at the moment. I'm particularly interested to know about any specific philosophers or philosophical traditions that inform your thinking.
Angel: We were so lucky and privileged to have our professors at OISE, the Ontario Institute for Studies in Education, at the University of Toronto in the 1990s. There was Monica Heller, who first introduced me to Pierre Bourdieu's theories of symbolic violence and different kinds of capital—symbolic capital—and the idea of a linguistic market. That helped me to understand why students' most familiar, intimate language for learning and for establishing relationships is so often denied its presence and value in schooling spaces.
And I was so lucky to have my PhD supervisor, Professor James Heap. He was in the sociology of education. He was a classroom conversation analyst—CA, conversation analysis—and also an ethnomethodologist. This is what they call micro-sociology because it offers fine-grained analysis of teacher-student and student-student interactions and conversations. Doing a fine-grained analysis of classroom interactions helps us to understand the so-called macro—the larger structural, economic, socio-political, socio-cultural structures—and how they actually operate in moment-to-moment interaction processes.
The divide between macro analysis, like political economy, political structural analysis, and the micro-analysis—moment-to-moment classroom interactions—the divide is an analytical one, because they are two sides of the same coin. The structures are manifested, and can be reinforced or transformed in our everyday interactions.
So, Professor James Heap introduced me to Harvey Sacks' conversation analysis and Garfinkel's ethnomethodology. We can broadly say this is a phenomenological approach. Phenomenology, you could say, is an antidote to the positivist, scientistic approach, which assumes that data are simply out there for us to collect, and that human beings are just like physical entities that you can study and do experiments on, as in a laboratory.
For example, if you think a student is not motivated, and then you administer a questionnaire to measure student motivation, and then you say, "Oh, the student has very low motivation," this is just an artifact of your research method. You created the questionnaire, and how the student interpreted the questionnaire and answered the questionnaire is unknown to you. So, the research results are artifacts created by the researcher’s instrument, and the researcher is totally unaware of, and uninterested in the student’s interpretive processes that lead to the scores which you take as an “objective” finding.
But students are fluid, dynamic living organisms. They might be motivated when they encounter an encouraging teacher who knows how to mobilise students' home languages, familiar resources, and suddenly you might see the same student classified as unmotivated by a questionnaire survey becoming highly engaged. So, that's the difference between positivist approaches, which treat everything as study entities—like physical entities that you can manipulate and measure in a laboratory—versus a phenomenological approach, where you center human meanings, human interpretations: what it feels like from the teacher's perspective, or from a student's perspective. We impose a lot of researcher categories—self-efficacy, motivation—and they're useful to a certain extent, but we need to be mindful of their constructedness. Participants might or might not totally see the world through the lens of these analytical categories imposed on them by the researcher or theory. Phenomenology and critical sociological theories like Pierre Bourdieu’s are approaches which center human meanings in our research process.
Then, there’s Allan Luke, the founding scholar of critical literacies. I'm so privileged, so lucky to have him informally as my mentor. He's brilliant; he could see through the workings of societal ideologies and larger structures—political, economic structures, societal structures—and how they operate through everyday educational classroom practices. Allan Luke was very good at helping me gain an overall, meta-perspective because I had been trained in classroom conversation analysis and ethnography. Even with Pierre Bourdieu’s ideas of symbolic violence, I was still analysing everyday interactional data—how everyday interactions are shaped by these ideological dominant structures. Allan helped me to have a meta-view of different cultural ideologies, different national ideologies. He's really a philosopher of education.
Then, the last philosopher or scholar who has influenced me a lot is Jay Lemke, also my mentor for so many years. Jay Lemke is a scientist, a quantum physicist by training, but also a social semiotics scholar. He's incredibly insightful and intelligent. I've learned so much from him in terms of seeing our reality as dynamic processes. He gave me this idea of process-based ontology. Simply put, we tend to think of reality as made up of entities—more or less stabilised (that’s an entity-based ontology)—but in fact reality is made up of dynamic processes (that’s a process-based ontology). Importantly, we are all embedded in our material, social ecologies—we are material beings and sociocultural beings. We cannot just subscribe to this mind-body divide.
But this mind-body split is deeply rooted in modern disciplines. We are part of our socio-ecological systems, ecosystems, and ecosystems are dynamic, not static. Every little change is like a butterfly effect; any tiny change can trigger feedback loops. So that's why when researching classroom processes, we need to bear in mind that these are dynamical systems with lots of feedback loops.
We need a radical departure from research instruments which view students and teachers as static entities and simply administer surveys, questionnaires, and conduct statistical correlations. We need to understand human beings as living organisms embedded and participating in dynamical socio-material, cultural-material ecosystems. Then, the methodology becomes much more complicated, not as straightforward as just doing a questionnaire survey or even an experiment.
In education, you can seldom do a true experiment. Most experiments are quasi-experiments. You cannot have randomised controlled trials (RCTs) like those in medical research testing new drugs. In medicine, you really can have double-blind groups—one using the drug, one not. In education, many research publications claim to be experiments, but they are quasi-experiments. Most studies labeled quantitative are actually questionnaire surveys that run lots of statistics on the questionnaire results, using correlation analysis to model so-called causal relationships. But all they have is correlation. Causal relationships can only be established by true experiments, and that cannot be done in many educational contexts.
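To make the correlation-versus-causation point concrete, here is a minimal Python sketch (not from the interview, using purely simulated data): an unobserved confounder drives both a "motivation" score and a test result, producing a strong correlation even though neither variable causes the other.

```python
# Minimal illustrative sketch with simulated data: a hidden confounder
# (e.g., a supportive home environment) drives both "motivation" and test scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 500
confounder = rng.normal(size=n)                                # unobserved third factor
motivation = 0.8 * confounder + rng.normal(scale=0.5, size=n)  # survey score
test_score = 0.8 * confounder + rng.normal(scale=0.5, size=n)  # exam result

r, p = pearsonr(motivation, test_score)
print(f"r = {r:.2f}, p = {p:.2g}")  # strong correlation, yet neither variable causes the other
```

A regression or structural model fitted to such survey data would happily report a "significant effect" of motivation on test scores, which is exactly the kind of artifact the interview warns against.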
I see lots of misconceptions or myths in our published literature. It's concerning because it takes up a lot of research and publication space and also misinforms policies. It reinforces the domination of certain policies and practices that go against the well-being of teachers and students.
Integrating ecological and biological foundations: evolving understanding of trans/languaging
Mobina: That was such a brilliant and powerful insight. As a leading voice in the theoretical development of translanguaging and trans-semiotising, I would love to know more about your perspective, particularly how your views on these concepts have evolved over time.
Angel: It's actually a natural continuation. At first, I was skeptical of translanguaging because I'd been working on code-mixing and code-switching, and Monica Heller is one of the key scholars in classroom code-switching in educational contexts. So, I brought a lot of questions to Jay Lemke. In 2016, I went to see him in San Diego—he was retired but still a research scientist at UC San Diego at that time. I brought all my questions, asking, "Why invent a new term—translanguaging—when there is already a huge body of research on code-mixing, code-switching, basically arguing similar ideas, advocating for allowing students to use their familiar languages, home languages, and community languages?”

I brought all these questions to Professor Lemke in 2016 at his home in San Diego. He sat down and explained to me very patiently because, like everyone else, I wasn't initiated into process-based ontology, and the perspective of humans as living organisms embedded in dynamical ecological systems—material, social, and cultural. He patiently explained all of this. I had the privilege of asking many basic questions—someone might even say silly questions—but he never treated them as silly.
Eventually, we published that conversation. I engaged my doctoral student at that time, Yanming (Amy) Wu, and we published it as a book chapter titled “‘It Takes a Village to Research a Village’: Conversations Between Angel Lin and Jay Lemke on Contemporary Issues in Translanguaging”. In that chapter, we unpack a lot of those questions. We contributed to deepening the theorization of translanguaging, drawing on a process-based ontology—seeing the world as dynamic processes—rather than an entity-based ontology. That explains the difference between the theoretical notion of a language code—Code A, Code B, Language A, Language B—and the theoretical assumptions of translanguaging. It’s process-based, and that eventually led to our 2023 article in the journal Educational Linguistics, titled "Translanguaging and Flows."
That article is quite theoretical but basically lays out our attempt and effort to deepen the theorizing of translanguaging. Now, trans-semiotising is also an evolving concept. It originated from Michael Halliday. When he visited Hong Kong in 2013—I was teaching at the University of Hong Kong at that time—Ken Hyland invited Michael Halliday to give a public lecture. He was talking about “small” languages and the pool of semantic potential. He said, if we've lost a small language, it is not just a language that is lost; it is a loss of part of humanity, because the semantic meaning potential is lost. Different languages, different cultures, have different ways of making meaning, and languages carry different meaning potentials.
Halliday also used the word trans-semiotic to emphasise that it's more than just linguistic resources; it's about all bodily resources—aural, visual, gestural—the whole body. Over the years, through the connection with Jay Lemke, I got to know Paul Thibault, an ecological linguist and semiotics scholar who was also a PhD graduate of Michael Halliday's from the 1970s. Paul Thibault ventured further into ecological psychology—James Gibson's ecological psychology—and evolutionary biology.
Actually, the term "languaging" was first used by Humberto Maturana—a Chilean evolutionary biologist. He was the first scholar, as far as we know, to use the term languaging, even though he's not a linguist but a biologist. This grounds our linguistic explorations in our ecology as living organisms. We are grounded in our ecology. We are not just talking heads.
So Professor Paul Thibault gave me the idea of whole-body sense-making. That helped me to further develop the notion of the trans-semiotic, which I took from Michael Halliday, into trans-semiotising. Many multimodality studies tend to become textual studies, basically analysing the visual grammar of multimodal texts—whether spoken or written. But we still need to conduct a lot more research on the actual physical embodiment of our sense-making practices. We don't want multimodal research reduced merely to visual grammar research—that would be quite unfortunate.
Gunther Kress and Carey Jewitt never intended multimodality to be reduced simply to visual grammar analysis. They were very much on board with the idea of whole-body sense-making and meaning-making. So, we need to return to the original ideas of these founding scholars, including Gunther Kress and Michael Halliday. It's not just textual. The recent development of generative AI in the past three years has brought into sharp relief the importance of emphasising embodiment, whole-body sense-making, and seeing humans as grounded in our ecology, sharing this ecology with other living organisms. We're not just a chatbot. This is very important for us as language education researchers and teacher educators.
Mobina: Absolutely. Your point about the loss of semantic meaning potential as a loss of humanity resonates strongly because, unfortunately, with the emergence of AI, there's a growing tendency towards unnatural understandings of interactions, languages, and cultures.
Angel: Yes, with generative AI, there is both great potential and significant risk. Generative AI and large language models are primarily trained on Anglo-European internet text data—books and texts mostly in English and other major European languages. This training data inherently represents particular cultural values, epistemologies, and knowledge systems. Even though ChatGPT can perform translations, these translations often fail to carry the deeper cultural nuances embedded in what Michael Halliday referred to as "small languages". And people are using ChatGPT or generative AI without realizing that, even though it might seem like it's speaking your language, it's not truly speaking your language. It doesn’t genuinely reflect the values, knowledge systems, aesthetic perspectives, and wisdom carried in indigenous languages, smaller languages, and cultures.
Of course, we should use AI, but we should also alert ourselves and our students to the epistemic bias, the cultural bias of generative AI. Whatever content AI generates for you isn't neutral. It represents dominant cultural and epistemic views—particularly those of Anglo-European cultures—which form the basis of the data used to train these large language models (LLMs) powering chatbots and AI applications used extensively in language teaching. We must raise awareness among ourselves. I myself have only recently become alert to these dangers and risks.
Creating space for intergenerational dialogue: TL-TS
Mobina: Thanks so much for sharing your concerns. In my final question, I’m going to revisit your research around AI. But now, I'd like to move on to another question about your TL-TS YouTube channel. Could you share your vision behind it and what you hope to offer to people through this platform?
Angel: Yes, it’s simple—I want to be a bridge between all these mentors I've been privileged to interact with or be trained by, and the new generation of scholars and researchers in our field. How could I do that? Actually, it started before COVID—in 2019. At that time, I was at Simon Fraser University, located on Burnaby Mountain. It's very difficult to travel up there. Whenever we had seminars, we only had around nine, maybe ten people attending.
So, we leveraged technology and the YouTube platform. Because I have greatly benefited from interacting with these great scholars, I wanted the next generation of scholars to benefit from first-hand interactions with them as well. Life is short—we might not always have these brilliant scholars with us for as long as we would wish. But then we also want to feature emerging scholars. We want these cross-generational conversations—the older generation, the middle generation like me (though I'm getting older now), and the younger generation, the next generation. This continuity is crucial because it's concerning to see how neoliberalism has impacted the research and publication culture in our field, often negatively, resulting in a kind of fast-food publication culture. I understand the competitive pressures—the real-life pressures to get jobs, promotions, and to publish continuously—but this doesn't allow enough time to really delve deeply into theory. Theory can help us avoid mere replication; it opens us up to new insights, new research questions, and genuine impact through our research rather than just replicating existing ideas.
To sum it up, my vision is to maintain this continuity—generations passing on their wisdom so the new generation can build upon, extend, and innovate further, ultimately surpassing the older generation. The next generation must be better; otherwise, we won’t see progress. So, it's a simple idea: continuity.
Urgent questions and research directions for scholars
Mobina: Thanks so much for creating this platform and bringing these different generations together. It enables young scholars like us to learn from the long-standing works that senior scholars have contributed to the field.
So, building on what you've shared so far, and also your earlier concerns regarding rapid changes affecting education and its digital landscape, I'm very curious to know what you think are some of the urgent questions scholars within the field of language and literacy education need to address moving forward?
Angel: Yes, I see two broad and urgent research directions, under which there could be numerous studies conducted across different contexts and research sites. The first direction is assessment. Despite the popular reception of translanguaging, plurilingualism, translingual ideologies, theories, practices, and pedagogies, I want to echo Ryuko Kubota—my dear friend and fellow Sister Scholar (Sister Scholars, 2023)—who emphasises that such scholarship isn't substantially impacting students' real-life opportunities. Students still must take monolingual tests. This scholarship may often just benefit academics themselves—helping them secure tenure, promotions, and symbolic capital.
Kubota reminds us that while there may be some impact in classrooms, assessment has such a significant washback effect that if we don't tackle assessment directly, talking about translanguaging pedagogy becomes essentially futile. This is particularly true in Asian countries, but we see this concerning movement even in Western countries—this 'back to basics' approach in Australian, UK, and US contexts, these conservative assessment ideologies. For the past two years, I've leveraged generous research funding from the Education University of Hong Kong to build a transnational research team. We have Paul Thibault, who’s visiting the Education University of Hong Kong, Constant Leung from King's College London, Li Wei from UCL, Eunice Chang from OISE at the University of Toronto—an educational assessment specialist—Yongyan Zheng from Fudan University in China, Andy Gao from UNSW Australia, and several others. Our team consists of ten people, and we call ourselves the New Territories Group since the Education University is located in Hong Kong's New Territories district.
We gathered in February this year for a two-day intensive symposium to craft our Cambridge Elements book draft, titled Ecological Languaging Competencies, focusing on the implications of Ecological Languaging Competencies for assessment. Returning to our earlier discussion, language is not simply a code or system of rules. "Languaging" differs fundamentally from "language." Language as a noun represents a body of knowledge—patterns like lexico-grammatical or genre conventions—but languaging as a verb, quoting Paul Thibault, refers to action, to human behaviour. Languaging is ecological; it's what we do daily, navigating our journeys within ecologies, whether speaking in a lecture hall, interacting with students in classrooms, or going about our everyday business. It is action-oriented, co-actional, and dialogic—I cannot simply talk by myself. I might have inner speech, but as Vygotsky and others emphasise, its origins lie in actual languaging interactions within ecologies.
In contrast, ChatGPT is like a mirror reflecting our actions—our languaging—but lacks substance. These chatbots and generative AIs don't possess embodied feelings or genuine participation in ecologies. Still, because they're such powerful mirrors, we might mistakenly perceive them as genuine languaging agents. They simulate languaging but aren't true agents since real languaging involves embodied participation in real ecologies. Thus, the development of generative AI helps us to appreciate the unique role of human languaging in our ecologies. We're embedded in these ecologies, shaping them even as they shape us. It’s a dynamic process, always changing, never static. This notion of Ecological Languaging Competencies—how to teach, learn, and assess—is a critical research direction. The upcoming publication of our Cambridge Elements book on Ecological Languaging Competencies (ELC) will provide theoretical resources for the next generation of researchers, teacher educators, and assessment specialists to build upon.
If we don't pursue this, assessment will remain simplistic, focusing merely on grammatical rules, vocabulary knowledge, genre patterns, or textbook-based pragmatic scenarios, which are poor simulations of genuine human languaging in real-life ecologies. Professor Constant Leung provides a rich discussion in our forthcoming book, highlighting how curricularised textbook notions of communicative competence offer inadequate simulations of authentic human languaging.
Thus, returning to ecology, moving beyond merely analysing lexico-grammatical patterns (something ChatGPT and generative AI excel at), is imperative. We need research into real-life languaging, exploring how it truly takes place. This ecological focus is the first urgent research direction. It will require numerous studies across various contexts, schools, and populations—truly a global village is needed to explore ecological languaging competencies globally.

The second urgent research direction is influencing and shaping the development of generative AI. Currently, its trajectory is controlled mainly by Silicon Valley and big tech companies. I've been influenced by a recent book by Karen Hao, a technology journalist, titled Empire of AI. From it, I realised that large language models (LLMs) are extremely energy-intensive, resulting in significant negative ecological impacts.
The data centers required to power computing for LLMs like ChatGPT consume tremendous electricity and vast amounts of fresh water for cooling. These data centers, built globally by big tech companies, often end up in poorer countries where local communities lack sufficient fresh water even for their basic daily needs. As educators, we need to be aware of this. AI is beneficial, but it doesn't have to be developed through these methods—scaling at all costs, data-greedy approaches that scrape data from the internet, social media, and are energy-intensive, making them ecologically unsustainable. They’re also educationally unsustainable, especially in poorer countries and rural schools that may lack stable Wi-Fi or internet access, thus excluding them from these technological developments.
AI doesn't need to follow this model. Instead, we can develop small, lightweight language models—AI in your pocket—that can operate offline, powered by just one computer or a simple mobile device. We need to influence AI engineering to create sustainable, eco-friendly, domain-specific AI models and applications.
At the Education University of Hong Kong, my colleague, Dr. Lau Chaak Ming, has spent the past 20 years—even before the ChatGPT era—using symbolic approaches to AI. There are two main approaches to AI: symbolic (algorithm-based) and statistical (large-scale data-driven models). The optimal path is a combination of both. Dr. Lau has been creating small-scale AI applications to support the learning of minoritised languages in Hong Kong, such as village languages, and assisting minoritised students in learning Cantonese. These applications require minimal energy resources and can simply be downloaded and used on edge devices effectively.
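As a purely illustrative sketch (not Dr. Lau's actual applications, and with hypothetical lexicon entries), the following Python example shows the general idea of combining a symbolic component (a small, hand-curated lexicon) with a lightweight statistical component (character-bigram counts over a tiny local corpus) in an offline tool small enough to run on a single edge device.

```python
# Hypothetical sketch: a tiny offline helper for a minoritised language that
# combines a symbolic layer (curated lexicon) with a statistical layer
# (character bigrams from a small local corpus). No internet, no GPU required.
from collections import defaultdict

# Symbolic layer: hand-built lexicon mapping learner input (romanisation) to target forms.
LEXICON = {
    "nei5 hou2": "你好",   # illustrative entries only
    "m4 goi1": "唔該",
}

def train_bigrams(corpus):
    # Statistical layer: count character bigrams in a small local corpus.
    counts = defaultdict(int)
    for sentence in corpus:
        for a, b in zip(sentence, sentence[1:]):
            counts[(a, b)] += 1
    return counts

def bigram_score(candidate, counts):
    # Higher score = candidate fits the local corpus better.
    return sum(counts.get((a, b), 0) for a, b in zip(candidate, candidate[1:]))

def suggest(query, corpus_counts):
    # 1) exact symbolic match; 2) otherwise rank lexicon entries statistically.
    if query in LEXICON:
        return LEXICON[query]
    ranked = sorted(LEXICON.values(),
                    key=lambda c: bigram_score(c, corpus_counts),
                    reverse=True)
    return ranked[0] if ranked else None

corpus = ["你好嗎", "唔該晒", "你好"]      # tiny, community-owned corpus
counts = train_bigrams(corpus)
print(suggest("nei5 hou2", counts))   # exact symbolic match -> 你好
print(suggest("nei5 hou", counts))    # statistical fallback over lexicon entries
```

The point of the sketch is the architecture rather than the toy data: the symbolic rules encode community knowledge explicitly, while the statistical layer needs only a small, locally held corpus, so the whole tool can stay lightweight, offline, and community-owned.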
Similarly, scholars working on Aboriginal language revival in South Australia have developed low-resource apps to support learning endangered Aboriginal languages. We must collaborate—linguists, language educators, applied linguists—with AI developers and engineers, actively shaping AI development to produce energy-efficient, sustainable, and ecologically friendly AI tools for language education. We cannot simply yield control to big tech companies like Microsoft. Just two weeks ago, Microsoft aggressively began pushing AI applications into classrooms. Colleagues in Thai school communities reported Microsoft hosting professional development workshops and seminars, essentially to dominate the educational market and push their AI products into classrooms. I'm not denying potential benefits, but this approach limits our options, confining us to solutions offered solely by big tech.
We need community-owned AI options. Data should belong to communities, applications should be developed collaboratively, and their deployment should reflect genuine community needs. The second broad research direction for applied linguists and educational linguists is thus to collaborate closely with AI developers and computational linguists who share our mission to develop smaller, ecologically-friendly AI language models.
AI, assessment, and keeping humanity in education
Mobina: Absolutely! As you've said, bringing together engineers, technologists, applied linguists, and educational experts can ensure we utilise AI efficiently, sustainably, and in ways that don't limit the capacities, knowledge, values, and everything that humans uniquely bring to education. Indeed, as you've emphasised, AI isn’t truly a languaging agent—it doesn't genuinely participate in real ecologies.
Angel: Exactly. AI isn't a languaging agent; it merely simulates languaging. The big tech companies, like OpenAI, excel at simulating. Recently, Grok released an AI companion that's particularly concerning—it magnifies existing gender ideologies and sexism in human gender relations. This companion AI, inspired by the Japanese "waifu" concept—almost a half-naked AI avatar—reinforces and amplifies sexist ideologies already present in our cultures. It represents the objectification and commodification of female bodies and human relationships, which is incredibly troubling.
Big tech companies operate under KPIs driven by profit optimisation, not by optimising human well-being. Therefore, as educators, we must raise our voices amid the AI hype, actively participating in shaping AI development toward humanistic goals. Along with my postdocs and colleagues, I've initiated a Facebook group called "Humanistic AI and Wayfinding Research Lab and Network". This idea of "wayfinding," drawn from ecological philosophy and Indigenous communities, captures our journey as humans—constantly navigating life, where open-endedness and uncertainty must not be foreclosed by rigid AI systems. We need to find our paths together with colleagues, collaboratively developing humanistic AI models.
Key concepts here include languaging, co-action, co-agency, ecological views, energy-efficient, ecologically friendly AI models, and Ecological Languaging Competencies (ELC). These keywords represent crucial directions. Our forthcoming Cambridge Elements book aims to provide theoretical resources, laying groundwork for extensive future research translating the theory of Ecological Languaging Competencies (ELC) into teacher-actionable practices. I remain hopeful and confident in the new generation of scholars like yourself, Mobina, who will carry our work to even greater heights.
Mobina: Absolutely. Thank you so much for clearly presenting this vision—highlighting both the positive potential and the darker aspects of AI and what is happening in education and what we can do next within research and scholarship. I’d like to revisit your keynote at the recent AFMLTA conference, specifically your points about AI and assessment—redesigning assessments considering AI’s limitations. Could you share more about how your work addresses the gap in current AI and education conversations?
Angel: Yes, this is an essential and urgent gap. Many efforts focus on developing frameworks for integrating AI in pedagogy, such as my colleague Dr. Lucas Kohnke’s recent TESOL Quarterly article, which is immensely timely. However, equally important is bringing back humanity into this discussion. Ironically, we should thank ChatGPT and generative AI for highlighting what humanity is not. Without these models, we wouldn't clearly see what real human languaging and embodied activities are. While digital avatars and digital interactions simulate languaging, they cannot replace the embodied interactions that build common ground and mutual understanding, despite cultural, linguistic, and ethnic differences.

The PAA model—Plurilingualism, Agency, and Affect—is central. Plurilingualism encompasses pluriculturalism; you can't genuinely represent all cultures only in English. AI can simulate other cultures, translating knowledge and wisdom, but true understanding requires the languages and wisdom of those cultures. Indigenous scholars stress the importance of indigenous participation in revitalising indigenous languages. Similarly, plurilingualism must underpin pluriculturalism. Canada, for instance, often claims multiculturalism, yet it mainly centres English and French, marginalising numerous other community languages. The PAA model emphasises plurilingualism in all education-related decision-making processes—pedagogy, assessment, syllabus, and course development—centering teacher/student agency, emotional labour, and wellbeing.
The Four-T lenses integrate translanguaging, transknowledging (Kathleen Heugh’s work), trans-semiotising (evolving into whole-body ecological sense-making under Paul Thibault), transculturalism, and trauma-informed intercultural communication. Trauma-informed research, borrowed from healthcare scholarship, critically examines power dynamics and invisible privileges in intercultural interactions. Simply acknowledging surface-level cultural differences—such as greetings or food—is insufficient; we must understand deeper power imbalances and traumas that affect intercultural communication. Together, the Four-T lenses (Lin & Chen, 2025) inform our daily educational decisions—lesson plans, teaching materials, assessment tasks—reminding educators to keep humanity central in our educational practices.
Closing remarks: asserting agency
Mobina: Thank you for this essential work, filling a critical gap in current AI discussions in education. Your insights help clarify boundaries between AI and humanity, especially regarding assessment. We've reached the end of our interview—are there any final points you'd like to share?
Angel: My final point is to assert our agency—as teachers, students, researchers, educators. We must not leave AI development solely to engineers; we must shape their research directions towards benefiting humanity rather than harming it. Don't fear learning new things—I’ve learned significant technical knowledge. AI, including generative AI, can assist us, and learning enough technical knowledge allows meaningful collaboration with AI engineers. We educators understand our students’ needs best. Engineers, though technically skilled, often lack educational expertise, making it crucial that we actively shape AI development. Don’t fear engaging with new technologies and theories, such as Bayesian statistics, which I'm currently exploring with colleagues—it's powerful and ecologically valid. Embrace new learning confidently, always remembering we must direct new technologies and ideas to serve humanity.
Mobina: Brilliant! Thank you for these profound final thoughts. They'll undoubtedly resonate with our audience—scholars both young and senior. Lastly, I noticed today is your birthday!
Angel: Oh, yes! [laughs]
Mobina: It's really special having this interview on your birthday. We deeply appreciate your time today—it means a lot to me and the Leveraging Languages website team. Happy birthday from all of us, and we wish you the very best.
Angel: Thank you, thank you!
Biography:
Dr. Angel M. Y. Lin is a leading scholar in English language education and critical literacies, and her work has helped to transform approaches to language teaching and learning globally. Currently serving as Chair Professor of Language, Literacy and Social Semiotics in Education at the Education University of Hong Kong, she previously held the prestigious Tier 1 Canada Research Chair in Plurilingual and Intercultural Education at Simon Fraser University (2018-2024). She is also Co-Editor-in-Chief of Language Policy.
Her research spans several interconnected domains, including discourse analysis, trans/languaging (TL), trans-semiotising (TS), Content and Language Integrated Learning (CLIL), and critical media literacies. Dr. Lin’s recent research has also expanded into exploring Human-AI relationality as well as dynamic, adaptive assessment through a Bayesian lens, leveraging AI chatbot technologies to offer innovative, personalized approaches to language learning in plurilingual contexts.
Further reading:
Sister Scholars (2023). Strategies for sisterhood in the language education academy. Journal of Language, Identity & Education, 22(2), 105–120.
Lin, A. M. Y., & Chen, Q. (2025). Towards ethical and responsible engagement of generative AI in education: The PAA Model and 4T Lenses in action. To appear in Lim, F. V., & Pun, J. (Eds.), Designing learning with multimodality in English medium education (EME) classrooms across Asia (pp. 235-258). London: Bloomsbury.