[Free food] 3/2 Student Seminar: Twin talks by Jean Salac

Monday, March 2

12:30-1:20pm

JCL 390

Comprehending Code: Understanding the Relationship between Reading and Math Proficiency, and 4th-Grade CS Learning Outcomes

Patterns in Elementary-Age Student Responses to Personalized & Generic Code Comprehension Questions

Lunch will be from A Taste of the Philippines.

Unlike a leap day, the Student Seminar happens about once every two weeks. This week, Jean Salac (@salac) will be presenting two practice talks for the upcoming SIGCSE. The abstracts are reproduced below.

Comprehending Code: Understanding the Relationship between Reading and Math Proficiency, and 4th-Grade CS Learning Outcomes

As many school districts nationwide continue to incorporate Computer Science (CS) and Computational Thinking (CT) instruction at the K-8 level, it is crucial that we understand the factors and skills, such as reading and math proficiency, that contribute to the success of younger learners in a computing curriculum and are typically developed at this age. Yet, little is known about the relationship between reading and math proficiency and the learning of key CS concepts at the elementary level. This study focused on 4th-grade students (ages 9-10) who were taught events, sequence, and repetition through an adaptation of the Creative Computing Curriculum. While all students benefited from access to such a curriculum, there were statistically significant differences in learning outcomes, especially between students whose reading and math proficiency is below grade level and students whose proficiency is at or above grade level. This performance gap suggests the need for curricular improvements and CS-specific learning strategies for students who struggle with reading and math.

Patterns in Elementary-Age Student Responses to Personalized & Generic Code Comprehension Questions

The CS community has struggled to assess student learning at the K-8 level, with techniques ranging from one-on-one interviews to written assessments. While scalable, automated techniques exist for analyzing student code, a scalable method for assessing student comprehension of their own code has remained elusive. This study is a first step in bridging the gap between the knowledge gained from interviews and the time efficiency and scalability of written assessments and automated analysis. The goal of this study is to understand how student answers on various types of questions differ depending on whether they are asked about their own code or about generic code. We find that while there were no statistically significant differences in overall scores, questions about generic and personalized code of comparable complexity are far from equivalent. Our qualitative analyses revealed interesting patterns in student responses, inviting further research into this assessment technique. In particular, when presented with individual blocks taken from their own code out of context and placed into different code snippets, students answered differently from students shown generic code, and on Explain in Plain English (EiPE) questions, students answered in a way that demonstrates a functional, rather than structural, understanding.
