Comparison of different ways of computing grades in continuous assessment into the final grade

  1. Marin-Garcia, Juan A.
  2. Maheut, Julien
  3. Garcia Sabater, Julio J.
Journal:
WPOM

ISSN: 1989-9068

Year of publication: 2017

Volume: 8

Pages: 1-12

Type: Article

DOI: 10.4995/WPOM.V8I0.7242 (open access)

Abstract

We present the results of comparing various ways of calculating students' final grades from continuous assessment grades. Traditionally, the weighted arithmetic mean has been used; we compare this method with other alternatives: the arithmetic mean, the geometric mean, the harmonic mean, and the multiplication of the percentage of achievement of each activity. Our objective is to verify whether any of the alternative methods agrees with the student performance expected by the teacher of the subject, discriminating more sharply between high and low learning outcomes and reducing the number of opportunistic passes. [Comparación del efecto de diferentes modos de agregar las calificaciones de evaluación continua en la nota final]
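The aggregation methods compared in the abstract can be illustrated with a short sketch. This is not the article's code; the grades, weights, and 0-10 scale below are hypothetical, chosen only to show how each method treats a low grade among otherwise good ones.

```python
# Illustrative sketch (hypothetical data): five ways to aggregate
# continuous-assessment grades on a 0-10 scale into a final grade.
from statistics import mean, geometric_mean, harmonic_mean
from math import prod

grades = [8.0, 6.0, 9.0, 4.0]    # hypothetical activity grades
weights = [0.3, 0.2, 0.3, 0.2]   # hypothetical weights (sum to 1)

# Traditional method: weighted arithmetic mean.
weighted = sum(g * w for g, w in zip(grades, weights))

# Alternatives compared in the article.
arith = mean(grades)
geo = geometric_mean(grades)
harm = harmonic_mean(grades)
# Multiplicative aggregation: product of each activity's fraction of
# the maximum grade, rescaled back to the 0-10 range.
multiplicative = 10 * prod(g / 10 for g in grades)

for name, value in [("weighted", weighted), ("arithmetic", arith),
                    ("geometric", geo), ("harmonic", harm),
                    ("multiplicative", multiplicative)]:
    print(f"{name:>14}: {value:.2f}")
```

On this example the geometric and harmonic means pull the final grade below the arithmetic mean, and the multiplicative rule penalizes the single low grade drastically, which is the kind of discrimination between methods the article studies.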

Bibliographic references

  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7-74. doi:10.1080/0969595980050102
  • Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139-148.
  • Bliuc, A. M., Ellis, R. A., Goodyear, P., & Piggott, L. (2011). A blended learning approach to teaching foreign policy: Student experiences of learning through face-to-face and online discussion and their relationship to academic performance. Computers and Education, 56(3), 856-864. doi:10.1016/j.compedu.2010.10.027
  • Bliuc, A. M., Ellis, R., Goodyear, P., & Piggott, L. (2010). Learning through face-to-face and online discussions: Associations between students' conceptions, approaches and academic performance in political science. British Journal of Educational Technology, 41(3), 512-524. doi:10.1111/j.1467-8535.2009.00966.x
  • Dalziel, J. (1998). Using marks to assess student performance, some problems and alternatives. Assessment and Evaluation in Higher Education, 23(4), 351-366. doi:10.1080/0260293980230403
  • Gatfield, T. (1999). Examining student satisfaction with group projects and peer assessment. Assessment & Evaluation in Higher Education, 24(4), 365-377.
  • Gibbs, J. C., & Taylor, J. D. (2016). Comparing student self-assessment to individualized instructor feedback. Active Learning in Higher Education, 17(2), 111-123. doi:10.1177/1469787416637466
  • González-Marcos, A., Alba-Elías, F., Navaridas-Nalda, F., & Ordieres-Meré, J. (2016). Student evaluation of a virtual experience for project management learning: An empirical study for learning improvement. Computers and Education, 102, 172-187. doi:10.1016/j.compedu.2016.08.005
  • Green, R. A., Farchione, D., Hughes, D. L., & Chan, S. P. (2014). Participation in asynchronous online discussion forums does improve student learning of gross anatomy. Anatomical Sciences Education, 7(1), 71-76. doi:10.1002/ase.1376
  • Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1995). Multivariate data analysis (4th ed.). New Jersey: Prentice Hall.
  • Kane, M., & Trochim, W. M. K. (2007). Concept mapping for planning and evaluation (Vol. 50). London: SAGE.
  • Knight, P. T. (2002). Summative assessment in higher education: Practices in disarray. Studies in Higher Education, 27(3), 275-286. doi:10.1080/03075070220000662
  • Knight, P. T., & Banks, W. M. (2003). The assessment of complex learning outcomes. Global Journal of Engineering Education, 7(1), 39-49.
  • Losilla, J. M., Navarro, J. B., Palmer, A., Rodrigo, M. F., & Ato, M. (2005). Análisis de datos. Del contraste de hipótesis al modelado estadístico. Barcelona: Edicions a Petició.
  • Marin-Garcia, J. A. (2009). Los alumnos y los profesores como evaluadores. Aplicación a la calificación de presentaciones orales. Revista Espanola De Pedagogia, 67(242), 79-97.
  • Marin-Garcia, J. A. (2017). Protocol: Inter-rater and intra-rater consistency validation of a rubric to assess oral presentation skills for university students. WPOM-Working Papers on Operations Management, 7(2), (in press).
  • Marin-Garcia, J. A., & Santandreu-Mascarell, C. (2015). What do we know about rubrics used in higher education? Intangible Capital, 11(1), 118-145. doi:http://dx.doi.org/10.3926/ic.
  • Marin-Garcia, J. A., Aragonés Beltran, P., & Melón, G. (2014). Intra-rater and inter-rater consistency of pair wise comparison in evaluating the innovation competency for university students. WPOM-Working Papers on Operations Management, 5(2), 24-46. doi:10.4995/wpom.v5i2.3220
  • Marin-Garcia, J. A., Garcia-Sabater, J. P., Morant Llorca, J., & Conejero, J. A. (2016). Passam: Peer assessment and monitoring system. Paper presented at the Congreso Nacional de Innovación Educativa y Docencia en Red- Universitat Politècnica de València-Valencia 07/07/16 al 08/07/16.
  • Marin-Garcia, J. A., Martínez-Gómez, M., & Giraldo-O'Meara, M. (2014). Redesigning work in university classrooms: Factors related to satisfaction in engineering and business administration students. Intangible Capital, 10(5), 1026-1051.
  • Marin-Garcia, J. A., Ramirez Bayarri, L., & Atares-Huerta, L. (2015). Protocol: Comparing advantages and disadvantages of rating scales, behavior observation scales and paired comparison scales for behavior assessment of competencies in workers. A systematic literature review. WPOM-Working Papers on Operations Management, 6(2), 49-63. doi:10.4995/wpom.v6i2.4032
  • Medina-López, C., Alfalla-Luque, R., & Marin-Garcia, J. A. (2011). Research in operations management teaching: Trends and challenges. Intangible Capital, 7(2), 507-548.
  • Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9(0), 129-144.
  • Perello-Marin, M. R., Vidal-Carreras, P. I., & Marin-Garcia, J. A. (2016). What do undergraduates perceive about teamwork? International Journal of Engineering Education, 32(3), 1171-1181.
  • Perez-Benedito, J. L., Perez-Alvarez, J., & Casati, M. J. (2015). Pbl in the teaching of design in aeronautical engineering: Application and evolution of a consolidated methodology. International Journal of Engineering Education, 31(1), 199-208.
  • Potgieter, M., Ackermann, M., & Fletcher, L. (2010). Inaccuracy of self-evaluation as additional variable for prediction of students at risk of failing first-year chemistry. Chemistry Education Research and Practice, 11(1), 17-24. doi:10.1039/c001042c
  • Pratten, M. K., Merrick, D., & Burr, S. A. (2014). Group in- course assessment promotes cooperative learning and increases performance. Anatomical Sciences Education, 7(3), 224-233. doi:10.1002/ase.1397
  • Sanna, A., Lamberti, F., Paravati, G., & Demartini, C. (2012). Automatic assessment of 3d modeling exams. IEEE Transactions on Learning Technologies, 5(1), 2-10. doi:10.1109/tlt.2011.4
  • Tejeiro, R. A., Gómez-Vallecillo, J. L., Romero, A. F., Pelegrina, M., Wallace, A., & Emberley, E. (2012). Summative self-assessment in higher education: Implications of its counting towards the final mark. Electronic Journal of Research in Educational Psychology, 10(2), 789-812.
  • Trotter, E. (2006). Student perceptions of continuous summative assessment. Assessment & Evaluation in Higher Education, 31(5), 505-521. doi:10.1080/02602930600679506
  • Trullas, I., & Enache, M. (2011). Theoretical analysis of the antecedents and the consequences of students' identification with their university and their perception of quality. Intangible Capital, 7(1), 170-212. doi:10.3926/ic.2011.v7n1.p170-212
  • Valle, A. R. A., Gonzalvo, M. J. M., & Abril, F. S. (2011). Is there an alternative to master classes? An ocular physiology experience as part of an optics and optometry degree course. Arbor, 187(EXTRA 3), 189-194. doi:10.3989/arbor.2011.Extra-3n3143
  • Viles Diez, E., Zárraga-Rodríguez, M., & Jaca García, C. (2013). Tool to assess teamwork performance in higher education. Intangible Capital, 9(1). doi:10.3926/ic.399
  • Walker, D. J., & Palmer, E. (2011). The relationship between student understanding, satisfaction and performance in an Australian engineering programme. Assessment and Evaluation in Higher Education, 36(2), 157-170. doi:10.1080/02602930903221451
  • Watts, F., García-Carbonell, A., & Llorens, J. (2006). Introducción a la evaluación compartida: Investigación multidisciplinar. In F. Watts & A. García-Carbonell (Eds.), La evaluación compartida: Investigación multidisciplinar (1st ed., pp. 1-9). Valencia: Editorial de la UPV.
  • Yorke, M. (1998). The management of assessment in higher education. Assessment & Evaluation in Higher Education, 23(2), 101-116.
  • Yorke, M. (2010). How finely grained does summative assessment need to be? Studies in Higher Education, 35(6), 677-689. doi:10.1080/03075070903243118
  • Yorke, M. (2011). Summative assessment: Dealing with the 'measurement fallacy'. Studies in Higher Education, 36(3), 251-573. doi:10.1080/03075070903545082