On October 27–28, 2025, the 2nd Asia-Pacific Symposium on Sustainable Development in Higher Education takes place in Singapore. Dr. Tetiana Gorokhova, guest researcher at CAIS, together with her co-authors Prof. Dr. Zaneta Simanaviciene (professor and director at Mykolas Romeris University in Lithuania), Kateryna Polupanova (project coordinator of the HEI-TRAIN project, also at Mykolas Romeris University), and Inga Fedotova, presents their research on artificial intelligence and responsibility, titled “CSR-Driven AI or AI-Driven CSR? A Course-Based Design Experiment on Responsible Campus Chatbots.”

The presentation examines whether sustainability commitments and ethical principles drive the systems universities build (CSR-driven AI), or whether platform defaults gradually define what “sustainability” and responsibility come to mean (AI-driven CSR). In a multi-week course at Mykolas Romeris University, student teams developed a sustainability advisory chatbot, recorded user concerns, and created a compact knowledge base with safeguards such as warnings, bias checks, and handover to a human operator in sensitive cases. The prototypes were tested in “Wizard-of-Oz” sessions, in which a human reviewed the responses. The study examined learning, usability, and social friction; course materials, pilot instruments, and notes on typical failure modes are available.
The research is part of the HEI-TRAIN project and appears in the Springer Nature book series “World Sustainability Series.”
Abstract
Universities are adopting AI at speed, and one question keeps returning: do our sustainability commitments and ethics steer the systems we build (CSR-Driven AI), or do platform defaults quietly steer what “sustainability” becomes (AI-Driven CSR)? We take this lens into a short, hands-on course at a Lithuanian university where student teams co-create a sustainability advisory chatbot. Over 4–6 weeks, they map real user intents (e-waste, low-carbon purchasing, repair options), compile a compact, source-anchored knowledge base, and add basic guardrails – clear disclaimers, spot checks for bias, and one-click handover to a human when issues are sensitive or contested. Instead of a campus-wide launch, we run small, supervised “Wizard-of-Oz” sessions (a human verifies or drafts replies behind the scenes) and look at three areas: learning (pre/post surveys of autonomy, competence, relatedness; a rubric for responsible-AI skills), usability (System Usability Scale and answerability with cited sources), and social frictions (an error/conflict log plus short interviews). Early classroom evidence suggests that making guardrails explicit nudges the prototype toward the CSR-Driven AI mode: answers become more transparent, handovers faster, and disputes de-escalate. We share the course materials, pilot instruments, and a short note on typical failure modes and how we addressed them.
About the HEI-TRAIN project
HEI-TRAIN is part of the Higher Education Initiative of the European Institute of Innovation and Technology (EIT). The project combines artificial intelligence, entrepreneurship, and social inclusion to make universities future-ready. Target groups include students, university staff, and underrepresented groups such as women, migrants, and internally displaced persons. Planned measures include establishing International Freelancer Schools (IFS), developing tailored curricula, and offering mentoring and start-up programs to foster digital and entrepreneurial skills and create long-term societal benefit.
Further information about the HEI-TRAIN project can be found on the website:
https://eit-hei.eu/projects/hei-train/
