Introduction
The University of Cape Town (UCT) was among the first universities in South Africa, and possibly on the African continent, to disable the Artificial Intelligence (AI) detection feature in the plagiarism checker Turnitin.¹ UCT acknowledges that generative AI (GenAI) technologies such as ChatGPT, Claude, and DALL-E, which are designed to produce new content, including text, audio, code, images, simulations, and videos, have significant potential to enhance productivity for students, academics, and staff. At the same time, the university recognises that, without ethical guardrails, these tools can pose serious challenges to assessment, teaching, and learning. In response, UCT implemented the Framework for Artificial Intelligence in Education: Generative and Other AI in Teaching, Learning and Assessment in 2025.²
Reframing the Debate
This reflective piece does not aim to examine the details of the UCT AI policy framework. Rather, it explores how African universities can move beyond scepticism and empower students, staff, and academics to engage responsibly and productively with generative AI. To this end, UCT has developed guidelines to help undergraduate students, postgraduate researchers, and academics incorporate ethical GenAI use into teaching, learning, and assessment.³
Research on the use of artificial intelligence among social scientists is polarised. Critics highlight risks such as data fabrication and misinformation;⁴ over-reliance and automation bias;⁵ and ethical and epistemological concerns,⁶ fuelling scepticism. Conversely, supporters stress that human judgment⁷ remains vital for critical thinking, ethics, and theory development, regardless of AI involvement.⁸ They view AI and humans as collaborators,⁹ particularly for managing large datasets¹⁰ and for tasks such as literature summarisation and coding,¹¹ which can speed up processes and free researchers to focus on interpretation. Chakravorti et al.¹² emphasise that responsible AI integration matters more than technical fixes, requiring a rethink of research values, human-centred design, and institutional support. Consequently, stakeholders should prioritise explainability, transparency, ethics, and sustainability in AI tools and their application.
Amidst these global debates on AI’s potential and pitfalls, it is essential that African universities actively engage with these developments rather than lag behind. The imperative now is to shift from a reactive posture, driven by misunderstanding or anxiety about what teaching and learning might become, to a proactive one. Rather than policing students and researchers, institutions should invest in sound policies, practical guidelines, and dedicated support structures that equip the entire academic community to use GenAI tools effectively and ethically.
As tools such as ChatGPT, Claude, and Copilot have grown more prominent in recent years, many academics, particularly in the social sciences, have become concerned that the twin pillars of their disciplines, critical thinking and academic writing, are under serious threat. This has given rise to a broadly pessimistic stance, with scholars questioning the ethical foundations of these technologies. I argue, however, that the social sciences and humanities must move beyond such scepticism, embrace generative AI, and innovatively rethink how they approach teaching, learning, and assessment. Pedagogical strategies should evolve away from a surveillance-based model that instils fear towards a more flexible approach that encourages students to disclose their AI usage and take genuine ethical responsibility for the research they produce.
The concern raised by sceptics that scholars are ceding their critical thinking and writing to machines carries some intuitive weight. Yet generative AI is here to stay, and everyone, including those same sceptics, stands to benefit when its use is grounded in ethical intention. In the paragraphs that follow, I address two commonly raised objections to GenAI in academic contexts and propose ethical approaches to producing rigorous, critical social science research.
The first concern is that generative AI is displacing critical thinking, as evidenced by student submissions containing fabricated citations and shallow arguments that lack an authentic student voice. To address this, universities should consider equipping students and researchers with the emerging critical skill of effective AI prompting: the ability to guide these tools purposefully and discerningly rather than accepting their output uncritically.
A second concern is that, given existing struggles with plagiarism, GenAI will make students lazier and encourage unethical behaviour that ultimately impedes learning. This reservation is understandable. However, just as Africa leapfrogged into the digital telecommunications era in the early 2000s, leaving organisations that failed to innovate obsolete, the same logic applies to academia: programmes that refuse to adapt risk irrelevance. While traditional learning strategies, such as engaging with scholarly texts, remain valuable, GenAI can meaningfully complement them. The more pressing question for educators is: how do our lessons empower the 21st-century student navigating a technologically advanced world?
If GenAI operates within structured frameworks that require deliberate prompting, educators may consider developing the capacity to guide students in using these tools responsibly. As social scientists, we value reflexivity as a means of understanding the learning process. Rather than designing assessments solely to measure outcomes, we should assess the process, encouraging students to declare and critically reflect on their use of GenAI. Creating an open environment for such disclosure will cultivate a generation of socially conscious researchers who can navigate traditional ethical boundaries while working with creativity, efficiency, and purpose.
Conclusion
Ultimately, African social science researchers are strongly encouraged to think both innovatively and ethically about how to incorporate GenAI across every phase of the research workflow, from conceptualisation and literature reviews through fieldwork and data analysis to the writing and dissemination of findings. UCT’s AI policy enabled the Centre for Innovation in Learning and Teaching to develop the Researcher Guide: Ethical Use of Generative Artificial Intelligence (AI) for Research Purposes.¹³ This guide identifies appropriate AI tools for each research phase, explains how to use them effectively, and highlights the opportunities and ethical risks researchers should consider. Crucially, it is grounded in the recognition that research is a fundamentally human endeavour, demanding critical thinking, intellectual rigour, and scholarly integrity. In this way, UCT offers not only a policy but a model for how African universities can lead in shaping an ethical, empowered relationship with GenAI in academic life.
Endnotes
- “UCT ends use of AI detection tools for student assessments,” IOL News, accessed April 16, 2026, https://iol.co.za/capeargus/news/2025-07-30-uct-ends-use-of-ai-detection-tools-for-student-assessments/.
- “UCT Framework for Artificial Intelligence in Education: Generative and Other AI in Teaching, Learning and Assessment,” Centre for Innovation in Learning and Teaching, accessed April 16, 2026, https://cilt.uct.ac.za/sites/default/files/media/documents/cilt_uct_ac_za/2486/uct-ai-in-education-framework-june-2025-final.pdf.
- “Artificial Intelligence for Teaching and Learning,” Centre for Innovation in Learning and Teaching, accessed April 16, 2026, https://cilt.uct.ac.za/teaching-resources/artificial-intelligence-teaching-learning.
- Hua, Hong-Uyen. “Scientific Data Fabrication and AI—Pandora’s Box.” JAMA Ophthalmology 143, no. 6 (2025): 522-523.
- Grossmann, Igor, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, and William A. Cunningham. “AI and the transformation of social science research.” Science 380, no. 6650 (2023): 1108-1109.
- Greene, Catherine. “AI and the Social Sciences: Why All Variables Are Not Created Equal.” Res Publica 29, no. 2 (2023): 303-319.
- Mseer, Ismail, and Ahmad Abdelhafiz Ali Samhan. “Collaboration Between Humans and AI.” In The Paradigm Shift from a Linear Economy to a Smart Circular Economy: The Role of Artificial Intelligence-Enabled Systems, Solutions and Legislations, pp. 1299-1308. Cham: Springer Nature Switzerland, 2025.
- Shahid, Ayesha, and Naeem Fatima. “Use of AI in social sciences research.” (2024).
- Huang, Yi-Chih. “From Labor to Collaboration: A Methodological Experiment Using AI Agents to Augment Research Perspectives in Taiwan’s Humanities and Social Sciences.” arXiv preprint arXiv:2602.17221 (2026).
- Bail, Christopher A. “Can generative AI improve social science?” Proceedings of the National Academy of Sciences 121, no. 21 (2024): e2314021121.
- Janiūnienė, Erika, Fausta Kepalienė, Elena Macevičiūtė, and Thomas D. Wilson. “How social science academics use AI–Activities and tools.” Journal of Information Science (2026): 01655515251396892.
- Chakravorti, Tatiana, Xinyu Wang, Pranav Narayanan Venkit, Sai Koneru, Kevin Munger, and Sarah Rajtmajer. “Social Scientists on the Role of AI in Research.” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, vol. 8, no. 1, pp. 528-540. 2025.
- “Researcher Guide: Ethical Use of Generative AI for Research Purposes,” Centre for Innovation in Learning and Teaching, accessed April 16, 2026, https://docs.google.com/document/d/14XaTVheTtr7XpDWX33OthT4piMHnYUfl/edit.
