Kualitas Butir dan Estimasi Kemampuan Matematika Siswa SMP pada Soal Ujian Sekolah [Item Quality and Estimation of Junior High School Students' Mathematics Ability on School Exam Questions]

Authors

  • Novi Indriyani Kones, Yogyakarta State University, http://orcid.org/0000-0003-3321-2301
  • Raden Rosnawati, Master's Program in Educational Research and Evaluation, Universitas Negeri Yogyakarta

DOI:

https://doi.org/10.29408/jel.v7i2.3054

Keywords:

item characteristics, online mathematics assessment, student ability

Abstract

Mathematics school exams, as summative assessments, are expected to be of good quality so that the resulting description of students' abilities reflects their actual condition. However, due to COVID-19, mathematics school exams were administered online, which risks degrading instrument quality, so the exam items need to be analyzed to obtain information about their quality and an overview of students' abilities. This study aimed to analyze the quality of the items, covering item characteristics in terms of difficulty level, discrimination, and distractor effectiveness, and to estimate students' abilities with the item response theory (IRT) approach. The study employed a quantitative method with an exploratory, descriptive approach. The subjects were 758 grade 9 students of state junior high schools in Gunung Jati Sub-district, Cirebon Regency, who took the mathematics school exams. The data were students' responses to multiple-choice mathematics exam questions: 40 items for the first school, 35 for the second, and 20 for the third. The data were first verified to satisfy the assumptions of IRT. The analysis of the online assessment, conducted with SPSS, R, and Microsoft Excel, showed that in terms of difficulty level there were still items that needed to be revised, or removed if not needed. Item discrimination in the three schools could be considered good, although it was not evenly distributed. Distractor analysis was dominated by poorly functioning distractors, with percentages of 100% and 65.7%, and the ability estimates showed that the highest-ability students differed across the three schools.
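A minimal sketch of the analysis pipeline described above, written in Python for illustration (the study itself used SPSS, R, and Microsoft Excel): it computes classical item difficulty and discrimination indices and then maximum-likelihood ability estimates under a two-parameter logistic (2PL) IRT model. All responses and item parameters below are randomly generated stand-ins, not the study's data; in practice the item parameters are themselves estimated from the responses, e.g., with the R ltm package compared in Uyar (2020).

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Illustrative 0/1 response matrix: 758 students (rows) x 40 items
    # (columns), mirroring the first school's exam size. Random, not real data.
    rng = np.random.default_rng(0)
    responses = rng.integers(0, 2, size=(758, 40))

    # Classical indices: difficulty = proportion answering correctly;
    # discrimination = point-biserial correlation of an item with the
    # rest-score (total score excluding that item).
    difficulty = responses.mean(axis=0)
    rest_score = responses.sum(axis=1, keepdims=True) - responses
    discrimination = np.array([
        np.corrcoef(responses[:, j], rest_score[:, j])[0, 1]
        for j in range(responses.shape[1])
    ])

    # 2PL item characteristic function:
    # P(correct | theta) = 1 / (1 + exp(-a * (theta - b))),
    # with a = item discrimination, b = item difficulty.
    def p_correct(theta, a, b):
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    # Maximum-likelihood ability estimate for one response pattern x, given
    # (here: assumed) item parameters. The bounded search also keeps
    # perfect-score patterns, whose MLE diverges, at the interval edge.
    def estimate_theta(x, a, b):
        def neg_log_lik(theta):
            p = np.clip(p_correct(theta, a, b), 1e-9, 1 - 1e-9)
            return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
        return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

    a = np.full(40, 1.0)          # assumed discriminations
    b = rng.normal(0.0, 1.0, 40)  # assumed difficulties
    theta_hat = np.array([estimate_theta(x, a, b) for x in responses])
    print(theta_hat[:5])          # ability estimates for the first 5 students

In this framing, items with extreme difficulty values or near-zero discrimination are the ones the abstract flags as candidates for revision or removal.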

Author Biography

Novi Indriyani Kones, Yogyakarta State University

Master's Program in Educational Research and Evaluation

References

Alkharusi, H. (2015). An evaluation of the measurement of perceived classroom assessment environment. International Journal of Instruction, 8(2), 45–54. https://doi.org/10.12973/iji.2015.824a.

Aricak, O. T., Avcu, A., Topcu, F., & Tutlu, M. G. (2020). Use of item response theory to validate cyberbullying sensibility scale for university students. International Journal of Assessment Tools in Education, 7(1), 18–29. https://doi.org/10.21449/ijate.629584.

Bichi, A. A., & Talib, R. (2018). Item response theory: An introduction to latent trait models to test and item development. International Journal of Evaluation and Research in Education (IJERE), 7(2), 142. https://doi.org/10.11591/ijere.v7i2.12900.

Bulut, O. (2015). Applying item response theory models to entrance examination for graduate studies: Practical issues and insights. Eğitimde ve Psikolojide Ölçme ve Değerlendirme Dergisi, 6(2). https://doi.org/10.21031/epod.17523.

Dirlik, E. M. (2019). The comparison of item parameters estimated from parametric and nonparametric item response theory models in case of the violation of local independence assumption. International Journal of Progressive Education, 15(4), 229–240. https://doi.org/10.29329/ijpe.2019.203.17.

Ferber, R. (1977). Research by convenience. [Editorial]. The Journal of Consumer Research, 4(June), 57–58. https://doi.org/10.1086/208679.

Friatma, A., & Anhar, A. (2019). Analysis of validity, reliability, discrimination, difficulty and distraction effectiveness in learning assessment. Journal of Physics: Conference Series, 1387(1), 012063. https://doi.org/10.1088/1742-6596/1387/1/012063.

Hambleton, R. K., & Swaminathan, H. (1985). Item response theory: Principles and applications. New York: Kluwer-Nijhoff Publishing. https://doi.org/10.1007/978-94-017-1988-9.

Jamal, F. (2018). Kompetensi pedagogik guru matematika sekolah pahlawan kabupaten Aceh Barat [The pedagogical competence of mathematics teachers at Sekolah Pahlawan, West Aceh Regency]. Maju: Jurnal Ilmiah Pendidikan Matematika, 5(1), 108–119.

Karabatsos, G., & Sheu, C. F. (2004). Order-constrained Bayes inference for dichotomous models of unidimensional nonparametric IRT. Applied Psychological Measurement, 28(2), 110–125. https://doi.org/10.1177/0146621603260678.

Köse, A., & Doğan, C. D. (2019). Parameter estimation bias of dichotomous logistic item response theory models using different variables. International Journal of Evaluation and Research in Education (IJERE), 8(3), 425–433. https://doi.org/10.11591/ijere.v8i3.19807.

Kurniawan, N. I. A. (2019). Analysis of the quality of test instrument and students' accounting learning competences at vocational school. Jurnal Penelitian Dan Evaluasi Pendidikan, 23(1), 68–75. https://doi.org/10.21831/pep.v23i1.22484.

Lailiyah, S., Hayat, S., Urifah, S., & Setyawati, M. (2020). Levels of students' mathematics anxieties and the impacts on online mathematics learning. Cakrawala Pendidikan, 40(1), 107–119. https://doi.org/10.21831/cp.v39i2.28173.

Liu, Y., Yin, Y., & Wu, R. (2020). Measuring graduate students' global competence: Instrument development and an empirical study with a Chinese sample. Studies in Educational Evaluation, 67, 100915. https://doi.org/10.1016/j.stueduc.2020.100915.

Mahmud, J., Sutikno, M., & Naga, D. (2016). Variance difference between maximum likelihood estimation method and expected a posteriori estimation method viewed from number of test items. Educational Research and Reviews, 11(16), 1579–1589.

Mardapi, D. (2012). Pengukuran, penilaian, dan evaluasi pendidikan [Measurement, assessment, and evaluation of education]. Yogyakarta: Nuha Medika.

Ningsih, K. (2016). Kemampuan guru MIPA membuat penilaian pengetahuan [The ability of mathematics and natural science teachers to construct knowledge assessments]. Jurnal Pendidikan Matematika dan IPA, 7(2), 44–54. https://doi.org/10.26418/jpmipa.v7i2.17691.

Osarumwense, J. H., & Duru, C. P. (2019). Assessment of model fit for 2016 and 2017 biology multiple choice test items of the national business and technical examination. International Journal for Innovation Education and Research, 7(4). https://doi.org/10.31686/ijier.vol7.iss4.1319.

Retnawati, H. (2014). Teori respons butir dan penerapannya: Untuk peneliti, praktisi pengukuran dan pengujian, mahasiswa pascasarjana [Item response theory and its application: For researchers, measurement and testing practitioners, and graduate students]. Yogyakarta: Nuha Medika.

Şahin, M. G., & Boztunç Öztürk, N. (2019). Analyzing the maximum likelihood score estimation method with fences in CA-MST. International Journal of Assessment Tools in Education, 6(4), 555–567. https://doi.org/10.21449/ijate.634091.

Titin. (2015). Deskripsi kompetensi guru SMP mata pelajaran matematika dan IPA [A description of the competence of junior high school mathematics and science teachers]. Jurnal Pendidikan Matematika dan IPA, 6(2), 39–48. https://doi.org/10.26418/jpmipa.v6i2.17338.

Uyar, Ş. (2020). Item parameter estimation for dichotomous items based on item response theory: Comparison of BILOG-MG, Mplus and R (ltm). Eğitimde ve Psikolojide Ölçme ve Değerlendirme Dergisi, 11(1), 27–42. https://doi.org/10.21031/epod.591415.

Wardhani, S. (n.d.). Modul matematika SMP program BERMUTU: Instrumen penilaian hasil belajar matematika SMP: Belajar dari PISA dan TIMSS [Junior high school mathematics module for the BERMUTU program: Instruments for assessing junior high school mathematics learning outcomes: Learning from PISA and TIMSS]. Jakarta: Pusat Pengembangan Pemberdayaan Pendidik dan Tenaga Kependidikan Matematika.

Yang, F., Ren, H., & Hu, Z. (2019). Maximum likelihood estimation for three-parameter Weibull distribution using evolutionary strategy. Mathematical Problems in Engineering, 2019. https://doi.org/10.1155/2019/6281781.


Published

13-07-2021

How to Cite

Kones, N. I., & Rosnawati, R. (2021). Kualitas Butir dan Estimasi Kemampuan Matematika Siswa SMP pada Soal Ujian Sekolah. Jurnal Elemen, 7(2), 280–294. https://doi.org/10.29408/jel.v7i2.3054

Issue

Vol. 7 No. 2 (2021)

Section

Articles
