EVALUATING THE EFFECTIVENESS OF AN AI-BASED FEEDBACK SYSTEM IN IMPROVING STUDENT LEARNING OUTCOMES
DOI: https://doi.org/10.51878/edutech.v4i3.3142
Keywords: AI-based feedback, learning outcomes, educational technology
Abstract
This study aims to evaluate the effectiveness of an AI-based feedback system in enhancing student learning outcomes at the Faculty of Education and Psychology, Mandalika University of Education. A quantitative survey method with a descriptive approach was employed on 238 students to measure various aspects of learning, including material comprehension, learning motivation, and system accessibility. Data were analyzed using descriptive statistics, Shapiro-Wilk normality test, and Pearson correlation analysis. The results indicate that the AI-based feedback system significantly facilitates material comprehension (mean = 3.134) and enhances students' learning motivation (mean = 3.067), with data distribution approximating normality (p < 0.001). However, the instrument reliability measured by Cronbach's alpha was very low (0.001), indicating a need for improvements in instrument design. This study concludes that the AI-based feedback system is effective in improving various aspects of learning, but further evaluation is needed to enhance the instrument's internal consistency and optimize its implementation in higher education contexts. These findings provide significant contributions to the development of educational technology and the formulation of policies supporting AI integration in the learning process.
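The analysis pipeline described in the abstract (descriptive statistics, a Shapiro-Wilk normality test, Pearson correlation, and Cronbach's alpha for reliability) can be sketched as follows. This is a minimal illustration on synthetic data, since the survey responses are not available; the variable names, item counts, and 4-point response format are assumptions, not details from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical 4-point Likert responses from 238 students,
# 5 items each for two scales (comprehension, motivation)
n = 238
comprehension = rng.integers(1, 5, size=(n, 5)).astype(float)
motivation = rng.integers(1, 5, size=(n, 5)).astype(float)

# Descriptive statistics: per-respondent composite scores and their means
comp_score = comprehension.mean(axis=1)
mot_score = motivation.mean(axis=1)
print(f"comprehension mean = {comp_score.mean():.3f}")

# Shapiro-Wilk test of normality on a composite score
w_stat, p_norm = stats.shapiro(comp_score)

# Pearson correlation between the two composites
r, p_r = stats.pearsonr(comp_score, mot_score)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

alpha = cronbach_alpha(comprehension)
```

Note that a Cronbach's alpha near 0 (as the study reports) indicates that the scale items barely covary, so the composite score should be interpreted cautiously; alpha approaches 1 only when items are strongly intercorrelated.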
References
Afzaal, M., Zia, A., Nouri, J., & Fors, U. (2024). Informative Feedback and Explainable AI-Based Recommendations to Support Students’ Self-regulation. Technology, Knowledge and Learning, 29(1), 331–354. https://doi.org/10.1007/s10758-023-09650-0
Bagunaid, W., Chilamkurti, N., & Veeraraghavan, P. (2022). AISAR: Artificial Intelligence-Based Student Assessment and Recommendation System for E-Learning in Big Data. Sustainability, 14(17), Article 17. https://doi.org/10.3390/su141710551
Bhutoria, A. (2022). Personalized education and Artificial Intelligence in the United States, China, and India: A systematic review using a Human-In-The-Loop model. Computers and Education: Artificial Intelligence, 3, 100068. https://doi.org/10.1016/j.caeai.2022.100068
Carifio, J., & Perla, R. (2008). Resolving the 50-year debate around using and misusing Likert scales. Medical Education, 42(12), 1150–1152. https://doi.org/10.1111/j.1365-2923.2008.03172.x
Carless, D. (2022). From teacher transmission of information to student feedback literacy: Activating the learner role in feedback processes. Active Learning in Higher Education, 23(2), 143–153. https://doi.org/10.1177/1469787420945845
Gavião, L. O., Sant’Anna, A. P., Lima, G. B. A., & Garcia, P. A. de A. (2023). Composition of Probabilistic Preferences in Multicriteria Problems with Variables Measured in Likert Scales and Fitted by Empirical Distributions. Standards, 3(3), Article 3. https://doi.org/10.3390/standards3030020
Hsia, L.-H., Hwang, G.-J., & Hwang, J.-P. (2023). AI-facilitated reflective practice in physical education: An auto-assessment and feedback approach. Interactive Learning Environments, 0(0), 1–20. https://doi.org/10.1080/10494820.2023.2212712
Kochmar, E., Vu, D. D., Belfer, R., Gupta, V., Serban, I. V., & Pineau, J. (2022). Automated Data-Driven Generation of Personalized Pedagogical Interventions in Intelligent Tutoring Systems. International Journal of Artificial Intelligence in Education, 32(2), 323–349. https://doi.org/10.1007/s40593-021-00267-x
Ouyang, F., Wu, M., Zheng, L., Zhang, L., & Jiao, P. (2023). Integration of artificial intelligence performance prediction and learning analytics to improve student learning in online engineering course. International Journal of Educational Technology in Higher Education, 20(1), 4. https://doi.org/10.1186/s41239-022-00372-4
Rad, H. S., Alipour, R., & Jafarpour, A. (2023). Using artificial intelligence to foster students’ writing feedback literacy, engagement, and outcome: A case of Wordtune application. Interactive Learning Environments, 32(1), 1–21. https://doi.org/10.1080/10494820.2023.2208170
Shaik, T., Tao, X., Li, Y., Dann, C., McDonald, J., Redmond, P., & Galligan, L. (2022). A Review of the Trends and Challenges in Adopting Natural Language Processing Methods for Education Feedback Analysis. IEEE Access, 10, 56720–56739. https://doi.org/10.1109/ACCESS.2022.3177752
Sharma, A., Lin, I. W., Miner, A. S., Atkins, D. C., & Althoff, T. (2023). Human–AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support. Nature Machine Intelligence, 5(1), 46–57. https://doi.org/10.1038/s42256-022-00593-2
Thomas, D. M., Siegel, B., Baller, D., Lindquist, J., Cready, G., Zervios, J. T., Nadglowski Jr., J. F., & Kyle, T. K. (2020). Can the Participant Speak Beyond Likert? Free-Text Responses in COVID-19 Obesity Surveys. Obesity, 28(12), 2268–2271. https://doi.org/10.1002/oby.23037
Voutilainen, A., Pitkäaho, T., Kvist, T., & Vehviläinen-Julkunen, K. (2016). How to ask about patient satisfaction? The visual analogue scale is less vulnerable to confounding factors and ceiling effect than a symmetric Likert scale. Journal of Advanced Nursing, 72(4), 946–957. https://doi.org/10.1111/jan.12875
Westland, J. C. (2022). Information loss and bias in likert survey responses. PLOS ONE, 17(7), e0271949. https://doi.org/10.1371/journal.pone.0271949
Wu, R., & Yu, Z. (2024). Do AI chatbots improve students learning outcomes? Evidence from a meta-analysis. British Journal of Educational Technology, 55(1), 10–33. https://doi.org/10.1111/bjet.13334
Yang, H., Gao, C., & Shen, H. (2024). Learner interaction with, and response to, AI-programmed automated writing evaluation feedback in EFL writing: An exploratory study. Education and Information Technologies, 29(4), 3837–3858. https://doi.org/10.1007/s10639-023-11991-3
Copyright (c) 2024 EDUTECH : Jurnal Inovasi Pendidikan Berbantuan Teknologi
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.