This article presents a protocol for conducting online think-aloud interviews as well as the reflections of the participants and interviewer on this process. The interviewer and participants commenced the interviews in person but then shifted to an online mode partway through the study. Thus, their reflections provide a comparison of the two different modes. These reflections are situated within a study of novice database designers. This computer science context, to some extent, influences both the protocol and the experiences of the study participants. Recommendations based on these experiences are provided for future computer science education researchers interested in using an online mode for conducting think-alouds, and suggestions for the use of aspects of the protocol in teaching are presented.

Introduction

The think-aloud data collection research method in essence involves the verbalization of thought processes while performing a task and was initially developed to study human problem-solving strategies [28]. Using this technique, researchers aim to acquire, as directly as possible, data that supports an analysis of the mental or cognitive processes involved when a human undertakes a specific task.

Think-aloud protocols in education research typically involve asking students, often novices, to perform an exercise or task while speaking aloud their thoughts. Using this method allows us to gain insight into our students and their learning and consequently allows us to adjust and improve our teaching. Thus, understanding the think-aloud protocol and the potential for online think-alouds is of value for us as computing educators engaged in the scholarship of teaching and learning (SOTL).

Think-aloud interviews have been used to study novice computer scientists in many different contexts, the most common being the observation of novice computer programmers. The following examples, while not exhaustive, illustrate the importance of the method to computer science education research (CSER).

  • Novice Programming
    • Novice programmers performing code comprehension and writing tasks [29,34,39]
    • Novice programmers performing debugging tasks [13,20,40]
    • Role of socioemotional traits and self-regulation in learning programming [22]
  • Database modeling and design
    • Novice database designers developing conceptual models [4,5,31,32]
    • Novice database designers on SQL tasks [25]
  • Software testing [30]

Most think-aloud CSER studies aim to understand the way in which students perform a task and to gain insight into the processes and strategies the students are using. Others aim to understand how the students learn and what triggers a learning event. Some studies explore aspects of self-regulation and the socioemotional aspects of learning in the context of computer science courses.

An assumption in the analysis of data gathered through a verbal protocol is that thinking aloud does not interfere with the thought process [12]. However, its robustness depends on a participant's ability to think out loud while attempting a task. Despite this acknowledged limitation, the think-aloud method is regarded as one of the more effective ways to gain insight into problem-solving processes [12,31].

Traditionally the think-aloud interview has relied on in-person interactions between the interviewer and interviewee [33]. However, since 2020, researchers have reported on the COVID-19 pandemic forcing their transition to virtual and online research methods [33,35,36].

In this article, we examine the transition from an in-person think-aloud interview to an online one in a computer science context and how the participants and interviewer perceive this change. This look at the virtualization of think-alouds is situated in a case study of novices learning to design and model relational databases. We present the protocol that we used to gain insights into students' progress through thinking out loud. While the COVID pandemic was the driver that forced our transition to a virtual online method, the change, in the end, proved a positive experience that challenged our preconceptions of the online format. Such opportunities challenge the status quo and open new avenues in research methods, with the potential both to broaden participation and to address some limitations of data-gathering approaches. Herein, we discuss our participants' experiences navigating this change. We describe the virtual think-aloud approach we developed, the challenges we faced, and the lessons we learned. Finally, we outline some take-home messages for researchers considering an online protocol and for educators who may wish to use this method to gain insights into their students' learning.

Developing the Think-Aloud Protocol

The research protocols discussed in this article were designed in late 2020, and ethics approval was sought through the peer-review process of an institutional ethics committee. At this time, the country had reopened to some extent after the first wave of COVID-19 lockdowns. As a precaution, while we proposed a face-to-face protocol, we made provision for conducting the think-aloud interviews online should future lockdowns interrupt our time-critical data-gathering activities.

In developing our protocol, we first investigated what others had written about conducting virtual and remote interviews. Several researchers have published articles on conducting one-on-one interviews, not think-alouds, and their experiences using various tools (e.g., Zoom and Skype) to conduct such interviews [15,23,26]. The only relevant paper on think-alouds, by Trate et al. [35], was situated in chemistry education and described an online semi-structured think-aloud protocol. In their study, participants were asked to verbalize their problem-solving process while answering some general chemistry questions. The cornerstones of Trate et al.'s protocol included using Doodle polls [11] to schedule interviews; in-person meetings with the participant to pass on interview artifacts and gather consents; minimal intervention during the think-aloud sessions; and use of interviewer and participant webcams with the session conducted and recorded using Zoom. They used a second webcam to record student interaction with worksheets and other items external to their computer, such as their calculators. Their motivation for using online interviews was to enable data gathering for a multi-institutional study. The authors highlighted some of the advantages of online interviews, stating that:

Remote interviews allow data collection from a more representative, multi-institution sample, allow a single researcher to conduct interviews consistently across the population, are more cost- and time-effective as there is no time lost to travel, and provide flexibility for the researcher to continue other professional duties. [35: 2422]

To the best of our knowledge, this work was the first study to appear in the education literature that reports conducting a think-aloud study online. Despite the popularity of the method in CSER, no researchers had yet reported their experience of applying it virtually. The recent studies by Hassan and Zilles [16,17] mention using think-alouds online during COVID; however, other than identifying the software they used, the authors do not discuss how the protocol was implemented or its implications.


The following sections describe our study context and the in-person think-aloud protocol. We then describe the changes needed to transition to a remote online format. Finally, we report on the participants' and interviewer's perceptions of the move to online think-alouds before making recommendations for future online think-aloud data collection protocols.

The Think-Aloud Context

The participants whose views and experiences of think-alouds are reported in this article were recruited from students enrolled in an undergraduate first-year introductory database course through a process of informed and voluntary consent. The course covers database design and modeling, database implementation, and SQL. The institute's research ethics committee approved all the research instruments and protocols used.

Each participant undertook a series of four think-aloud sessions throughout their semester of study in the latter half of 2021. Each session's task was scheduled at least a week after the relevant topic had been covered in lectures and the practical tutorial/laboratory sessions. Each session was intended to last approximately one hour, including a retrospective interview. The tasks were all typical novice database modeling tasks, including developing business rules, developing a conceptual model, transforming the conceptual model into a logical model, and normalization. The interviewer who conducted the think-aloud interviews was not part of the teaching team for the course.
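
To give a concrete sense of the kinds of tasks involved, the following is a minimal sketch of a normalization exercise of the sort described above. It is an illustrative example of ours, not one of the actual study instruments, and the table and column names are invented for this sketch.

    -- Illustrative sketch only; not an actual study instrument.
    -- Starting point: a single report-style table in which customer
    -- details are repeated on every order row.
    CREATE TABLE order_report (
        order_id      INTEGER PRIMARY KEY,
        order_date    DATE,
        customer_id   INTEGER,
        customer_name VARCHAR(100), -- depends on customer_id, not order_id
        customer_city VARCHAR(100)  -- transitive dependency: violates 3NF
    );

    -- A typical target solution: move the attributes that describe a
    -- customer into a table of their own and reference it with a foreign key.
    CREATE TABLE customer (
        customer_id   INTEGER PRIMARY KEY,
        customer_name VARCHAR(100) NOT NULL,
        customer_city VARCHAR(100)
    );

    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        order_date  DATE NOT NULL,
        customer_id INTEGER NOT NULL REFERENCES customer (customer_id)
    );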

The first interview was conducted in person with the interviewer and participant in the same room. Written consent to participate was sought at the beginning of this interview, which meant that we did not need to include a provision for obtaining the participant's informed consent remotely. We note that other researchers may need to consider this process. Trate et al. [35] also managed to gather consent from their students before shifting to their remote protocol but mentioned that it could be managed in the same way that consent is gathered for online surveys. However, our experience has been that the best source of guidance is provided at the institutional level by the ethics board or committee, as requirements may differ between countries and institutions. For example, our institute's ethics committee has developed an oral consent protocol for interviews conducted via video conference. In this protocol, a separate recording is required of each participant providing oral consent.

Participants completed a simple practice exercise at the start of the first think-aloud interview to familiarize themselves with the think-aloud process before the session's tasks began. The remaining three sessions were conducted online during the lockdown. Sessions were scheduled ahead of time with each participant using a Doodle poll, for both the in-person and remote sessions. Email invitations for each agreed session, including a reminder to the recipient that they were free to opt out of the study at this point, were then sent to confirm the interview time and location.

• The In-Person Protocol

At the beginning of each session, participants were briefed on the purpose of the session and asked to say everything they thought as they worked on the database modeling tasks. They were also asked to verbalize anything they were reading as they read it, although this was not strictly enforced. If a participant appeared to be reading silently, they were asked to indicate what they were reading. If a participant made notes or highlighted parts of a question or a solution to a prior task, the interviewer noted this and explored the reasons behind it during the retrospective interview. These retrospective interviews were held once each task was completed. A final reflective discussion at the end of each session allowed the interviewer to explore and confirm (or not) their understanding of the challenges the students faced working on the session's tasks. It also provided an opportunity for the participants to reflect on the session and ask any questions.

Interactions between the participant and the interviewer were kept to a minimum, but if the participants were silent for more than one minute, the interviewer prompted them to remember to say out loud what they were thinking. If it appeared a participant was finding it hard to think and speak or was struggling to progress on a task, the interviewer asked specific questions intended to trigger progress or continue the think-aloud process. These interventions included providing hints or clarification on a term, concept, or aspect of the task. In cases where the participants could not progress, an alternative exercise designed to act as a scaffold was provided. Here we are not focusing on the tasks or the prompts beyond the impact these might have had on a participant's ability to think aloud when interviews were online.

If a participant provided an incorrect answer or struggled even after help had been given, a post-task tutoring phase followed in which the concepts, ideas, and processes needed to solve the task were explained. In some cases, the interviewer also modeled how to perform the task and walked the participant through the model answer.

The tasks in the in-person session were provided to the students on paper. The students worked on the tasks using a Livescribe™ Echo 2 smartpen [21] and Livescribe™ SmartPaper (see Figure 1). Their interactions with the paper (i.e., writing, annotations, and doodles) were recorded along with their verbalizations.

As the participants worked on a task, the interviewer took observational notes recording any critical events, periods of silence, or lack of progress. The retrospective interviews were used to explore instances of silence and to discuss any challenges the participants faced, in order to identify the reasons for any difficulties encountered.

• Transitioning to a Remote Protocol

After the first think-aloud session, we moved to online interviews during the lockdown period of the second wave of the pandemic. Pragmatically, the most critical requirement was to find an alternative to using the Livescribe™ smart pen and paper for data collection. We considered several video conferencing platforms and additionally consulted our participants about the tools that would suit them best for the online sessions. After this consultation, we decided to go ahead with Microsoft (MS) Teams. One main reason was that recordings could be held on the institute's servers, allowing us to meet data privacy and storage requirements; in addition, MS Teams was freely available to all the students via the institute.

We were also concerned about the hardware and software that our participants had at home. Initially, we consulted the participants about whether they owned a tablet and were willing to use it during the sessions. We considered this mainly to keep the number of tools involved to a minimum, as dealing with multiple tools could have added to the cognitive load of the tasks and reduced the participants' ability to think aloud. Using a tablet with a smart pen also had the advantage of preserving the nature of the task performance, as it was consistent with the original in-person digital pen and paper approach. A pen and tablet would also have allowed participants to readily underline and annotate key aspects of the task problem description and make doodles to support their problem-solving process, if this was a mechanism they usually used. However, not all the students owned a tablet, and we couldn't arrange to provide them with one within the short timeframe we had to make the data collection changes. Our country had restrictions that meant we were unable to access resources kept on campus, like our store of tablets. In the end, we let the participants with tablets use them if this was their preference. For the others, we used Visual Paradigm's [38] community edition for the second and third sessions and MS OneNote [24] for the last session.

The tasks and the nature of the data collected remained the same, but the collection mechanism changed. The participants performed the tasks on their machines at home while sharing their screen and microphone. The shared screen and audio were recorded using MS Teams. The participants themselves were not recorded, in line with our ethics consent requirements. Because we were exploring student problem-solving/modeling processes rather than socioemotional factors related to learning, we believed that not being able to see the participants in the recorded data would not affect our analysis. We had not considered that the interviewer would then be unable to see the participant during the interview and how that might affect the interview process.


Because the participants were either continuing with a task or working on a new problem that built on an earlier one, for the online sessions the interviewer emailed each participant's previous session's workings and/or solutions to them ahead of the session.

We also needed to address how we distributed the tasks or problems. We did not want to email the task specifications to participants ahead of time, as we wanted the session to be the first time they encountered the problem. Providing the tasks ahead of time would have allowed the participants to think about the tasks without the researcher capturing those thoughts. In the end, the interviewer opted to share their screen and display the task sheet while delivering the session briefing; at that time, the sheet was also provided to the participants as a file in the MS Teams meeting chat channel. The participants were then able to download and view a local copy of the task/problem.

Because of the change to remote online interviews, we asked the participants if they were still willing and able to participate. At this point, some participants opted not to continue due to the lockdown. These students volunteered that online learning was placing a significant burden on their daily lives and that they did not have the time to devote to the research study.

The protocol otherwise remained largely the same. The interviews were one-on-one, Doodle polls were used to schedule interview times, and the same prompts and interventions used in person were used in the virtual interviews. After the session briefing, the interviewer asked the participants to share their screens and open the tool they would use to record their workings and solutions. In contrast to the in-person protocol, we placed more emphasis on the need for the participants to verbalize as they read, mainly so that the interviewer could follow the interviewee's actions. Without such utterances, it was at times challenging for the interviewer to monitor and interpret what the participants were doing. The interviewer relied more heavily on the participant's ability to think aloud because many of the visual cues usually used to guide the interview were absent. This meant we prompted the participants to continue speaking aloud more often. Finally, at the end of the session, once the recording had stopped, the researcher asked the participants to send a copy of their solutions and any other notes they took during the task, including photos of any pen and paper notes, to the researcher.

The entire session was recorded, including the retrospectives. After the session, the video recordings were uploaded to the institution's cloud storage. These recordings were downloaded to a secure storage device as soon as possible, and the versions stored on the servers were deleted. An unintended side effect of using MS Teams was that a transcription was generated in real time as the session was recorded. When sessions were in person, the data had to be transcribed in a separate step using a transcription tool, Descript [10]. Figure 2 shows the details of the online aspects of the think-aloud data collection process.

Reflections on the Online Think-Aloud Process

The COVID-19 pandemic has highlighted the importance of robust online protocols for qualitative research. Even though this work was undertaken as an "emergency" transition from an in-person to an online protocol, we believe that such an approach is viable as a means of conducting remote think-aloud data collection in CSER. However, a few caveats that fellow CSER researchers should be aware of are noted in the following sections, along with suggestions for avoiding these pitfalls and improving the experience of both participants and the interviewer.

• The Interviewer/Researcher Perspective

Our ethics requirements for in-person interviews stipulated that we would not record the participants themselves. This meant that the interviewer could not see the participants in the online format, which led to one of the main disadvantages from the interviewer's perspective: not being able to see the participants' facial expressions. Knowing when to intervene and help the participants was therefore more difficult than in person, as the interviewer relied solely on audio cues. In addition, it was not possible to see what the participants were doing offline. In better circumstances, where participants can be supplied with equipment ahead of time and the study is intended to be remote from the outset, we recommend using a second camera to record any pen and paper notes the participant makes. We also recommend that the consent process provided in any ethics application includes a protocol that allows the participant to be recorded. However, there is a caveat: we believe that our participants would have been reluctant to be video recorded, and this may have affected their think-aloud performance and even their decision to continue participating when the study went online.

Dealing with several different tools and technologies is always a challenge when working virtually, and having to think aloud simultaneously could have added cognitive load. However, we did not encounter many issues, possibly because our participants were tech-savvy computer science students. The main issue encountered was a lack of familiarity with sharing a screen in MS Teams, and that was quickly overcome with help from the interviewer. Trate et al. [35], in their online interviews, also report not encountering any notable technical issues. As we mentioned earlier, transcribing audio was done using two different tools, Descript and the MS Teams live transcriber. While both tools provided reasonably accurate transcriptions, we favored Descript due to features such as its ability to assign speaker roles in distinct colors, which made the transcriptions more readable and easier to correct where necessary.

One of our participants used a tablet during the sessions, and we noticed a slight difference in the richness of the data gathered when compared to the data from other tools like Visual Paradigm and OneNote. This student tended to doodle and scribble to visualize their steps in the task. However, we cannot confirm that using a tablet device helps students conduct think-alouds online. It may be that this student had a natural predilection to annotate and doodle as a tactic supporting their modeling process, while others may not find this tactic useful.

Maintaining the emotional stability of the participant is extremely important when conducting virtual sessions of any form [13]. Establishing rapport with participants can be a time-consuming process; in our study, however, the participants had already worked with the interviewer in person, so the interviewer only had to ease them into the virtual mode of the think-aloud sessions. Nevertheless, we found it was important to be mindful of the need to acknowledge and respond to the participant and to provide encouragement to continue the task. The participants sought more reassurance that the interviewer was attending to their progress on the task than in the in-person format.

While the interviewer spent time building a connection with the participants in both modes, she found that in the online version it was more important to take time before and after the session to check in with the students and establish a more personal connection. All the participants noted their appreciation of the extra time and support the interviewer gave them online. From the interviewer's perspective, it was hard to know if this need was a result of the isolation and fatigue students were experiencing during lengthy lockdown periods or if it was related more to using an online interview format. Both factors likely contributed, and when conducting online think-alouds, more time needs to be included in the protocol for building a rapport with the participants than for an in-person protocol. Unlike the students, the interviewer experienced no notable additional fatigue in moving from in-person to online.


During our think-aloud sessions, we provided different prompts and hints to trigger progress if the participants struggled when attempting the tasks. We didn't observe any notable difference in the way these interventions helped the students in-person vs. online.

Despite our challenges, this shift allowed us to identify ways to conduct think-aloud interviews virtually. Conducting these sessions online allowed us to work on a more flexible schedule, saving the participants time and money (the travel costs of attending in person). Further, we observed that online sessions led to richer notetaking, since the interviewer could record timestamps that allowed them to revisit critical incidents later without disturbing the participants.

• The Participant Perspective

After completing the sessions, we asked the participants about their perceptions of the transition from an in-person to an online mode. The participants had mixed feelings about the remote think-aloud sessions. Their main concern was having less interaction with the interviewer. Interestingly, this mirrors the experience of the interviewer. The participants missed the human element and the acknowledgement and visual cues they received in person in response to their verbalizations, struggles, and achievements.

"Personally, for me I can sort of adjust to anything and get through something like this quite easily. So it didn't affect me, but it definitely did sort of slow me down and sort of have me a bit more…you know it's the feeling of just talking to yourself and hoping that the person on the other side of the screen is nodding their head and agreeing with you."

Another participant also mentioned the importance of being able to see the interviewer:

"But the thing with online is like I'm quite a visual person so I'm like where are you? What are you writing now, you know? That was bit of a trouble I had but that was it."

While the interviewer did have their camera on, the participants were often unable to keep the interviewer in view because they were working with a single screen. One participant particularly struggled with having only one screen on which to open several windows.

"Here we go. I was…I was just trying to reduce the size of the windows…so I can see both screens the question and the visual paradigm. It's harder otherwise."

Another participant, by contrast, explained how having two screens was helpful during the session:

"I got two screens. It's very handy to have something like this so we can, I guess work from here easily."

One advantage of being online that a participant noted was that the interviewer seemed to have more time for them:

"If it were done like face to face our time will be quite limited. Whereas it's been done online allows us more time to you know like cover the content and discuss."

Another participant noted that they didn't really feel they lost any interaction with the interviewer, but that doing the think-alouds online was harder than in person:

"I feel like I've gotten maybe as much as I would have gotten if it was in person. There's just…it doesn't feel I was able to do it as efficiently or as comfortably."

A few participants indicated that they actually felt more comfortable thinking aloud online. In the interviewer's observation, this could have been due to there being less pressure to perform well in the online mode.

"I didn't really feel much of an impact moving the sessions online, although naturally felt a bit more comfortable being at home thinking out loud."

Interestingly, even those who noted not liking the online format overall seemed to be largely unaffected by the transition.

• Take Homes for Conducting Online Think-Alouds

The experience of transitioning to an online protocol has been quite interesting for us, and we hope that our protocol and lessons learned help others to avoid the pitfalls we encountered. Because we worked with computer science undergraduates, no notable barriers related to the use of technologies were encountered. One issue we did encounter was participants not knowing how to share their screen(s) in MS Teams. To mitigate such issues, we recommend a practice session focused on the online think-aloud setup to eliminate any potential hardware and software issues prior to conducting the actual data gathering. We also recommend that this session be in addition to the practice session that gives participants a chance to familiarize themselves with the process of thinking aloud.

We found that participants with only one screen struggled more with the organization of their workspace, and this caused an unnecessary distraction that in turn affected their ability to think aloud. This pitfall is more likely to be encountered within the computer science discipline, as participants are more likely to be working within specialized applications, such as an integrated development environment (IDE) or a Computer Aided Software Engineering (CASE) tool like Visual Paradigm, and to require multiple applications to be open at the same time. Anecdotally, the ability to organize a workspace was also noted in a programming think-aloud study conducted in person by one of the authors [39].

While we were acting in an emergency setting, for those with more time to plan their online think-alouds we recommend ensuring that participants are provided ahead of time with an additional screen and camera. The additional camera can be set up to focus on the interviewee's desk and their pen and paper, allowing participants to highlight and annotate printed artifacts and allowing researchers to record these actions. Our participants tended to make such annotations in person, but we did not capture these interactions online except where the participant had a touch screen and digital pen. In some of the online think-alouds, students mentioned writing something down with pen and paper, but their actions were not visible to the interviewer. An alternative to an extra camera might be to ask the participants to take a photo or scan of their notes, although this limits the data and its interpretability, especially if the student was not speaking at the time, as we see only the product rather than the process of the notes being generated.


One consideration is online fatigue [14]. In a recent survey, 70% of learners reported difficulty staying focused and feeling exhausted when attending online classes [2]. Fatigue was the most prominent challenge highlighted by the participants in our study. The interviewer also noted more participant fatigue in the online think-alouds. In part, this was because the in-person sessions were shorter: the same work was done in less time. We also believe that the online format demands more attention and so causes additional fatigue. While conducting the online sessions, we found it was important to schedule regular breaks. We recommend a short break every 15 minutes, or at the end of a task if that task takes 15 minutes or less. It is also important to establish a rapport with the participants and have conversations that are not directly related to the task at hand; indeed, a break may simply involve a social conversation about the weather. In our context, the purpose of these breaks was to provide the participants with a "mental break." However, for longer sessions it may be necessary to schedule breaks that allow participants to get away from the screen.

It is accepted that the process of thinking aloud is cognitively demanding and can hinder a participant's ability to successfully undertake a task [12]. It is therefore important not to underestimate the cognitive load of the tasks you set, to which the effort of talking about what you are thinking is added. One way to make a task manageable is to provide the research instrument as a scaffolded set of sub-tasks or activities. Of course, this must be balanced against not making the tasks so small that you lose the insights you wanted to gain from getting your students to think aloud.

When collecting informed consent and data online, it is also important to understand the storage mechanisms of the tools you are using and the implications for ethical data storage and data privacy.

Finally, learn from others engaged in SOTL in computer science as methods are refined and evolve. In our case, some insights from online teaching are also useful and can inform the development of online think-aloud protocols. Some of the recent CSER articles that emerged because of the COVID-19 pandemic and the shift to online learning and virtual student teams give insights into further potential challenges faced by participants working in virtual teams that might factor into the choice to conduct online think-alouds [8,18,26]. In this context, the interviewer and interviewee might be considered a virtual team, albeit a small one. Tony Clear, in his article, notes the challenges faced by students learning online at home, including shared home workspaces and inadequate internet connections [8]. Unsurprisingly, access to appropriate resources and suitable environments for online learning depends on a student's socioeconomic status. This is an aspect that we in CSER should consider when choosing to adopt online data collection methods [8,26].

• The Potential of Think-Alouds as a Teaching Tool

While this article is intended for those readers engaged in SOTL who are undertaking research with the intent of publication, there are other reasons to engage in SOTL. Getting students to think aloud about a problem using a formalized online protocol involves self-explanation. Research has found that self-explanation helps learners with problem-solving and progresses their learning [6,9]. A positive correlation between self-explanation and learning gain has been reported in fields like physics [6,9], biology [7], and geometry [1]. In computing, self-explanation has been considered as a way of helping students learn to program [3]. However, one research study that looked at self-explanation as a way of scaffolding students undertaking database modeling exercises found that self-explanation did not help their learners [19]. In contrast, we noted that our participants benefited from articulating their thought processes to the interviewer as they developed their database designs. Our participants also perceived a benefit in verbalizing their thinking:

"It's good to talk about what I'm thinking about which made me sometimes realize ah…that's wrong. But also having sort of feedback and having someone to bounce off of that, that's been very useful. I can throw my idea out there, get it criticized. That helped me realize what is wrong and from there what I need to do. I can actually improve instead of just getting an idea of what I need to do."

We suggest that a more structured process with scaffolded exercises and prompts, such as those used in our think-aloud protocol, might result in more effective self-explanation that leads even less proficient students to progress their learning.

During our think-aloud study, several interventions were used to help the participants who struggled to complete a task. Among these, hints and redirected tasks appeared to help the participants overcome difficulties and progress the most.

Hints were helpful when the participants knew what to do but needed a little extra help to move forward. One participant came up with an almost correct solution when deriving business rules and was aware that something was missing but couldn't work out what: they were missing the "may" or "must" qualifier. The interviewer hinted at this by asking whether or not the relationship was mandatory. The student realized what they were missing and was able to fix their mistake.
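
To illustrate the distinction the participant was missing, consider a hypothetical rule pair of our own (not the actual study exercise): "must" expresses mandatory participation and "may" optional participation, a difference that later surfaces in the logical model, for example as the nullability of a foreign key.

    -- Hypothetical illustration; not the actual study exercise.
    -- Rule: Each student MUST be assigned to exactly one advisor.
    -- Rule: Each advisor MAY advise zero or more students.
    CREATE TABLE advisor (
        advisor_id INTEGER PRIMARY KEY,
        name       VARCHAR(100) NOT NULL
    );

    CREATE TABLE student (
        student_id INTEGER PRIMARY KEY,
        name       VARCHAR(100) NOT NULL,
        -- NOT NULL encodes the mandatory ("must") side of the rule;
        -- making this column nullable would encode "may" instead.
        advisor_id INTEGER NOT NULL REFERENCES advisor (advisor_id)
    );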

We know that database modeling tasks are hard for novices because they involve multiple abstract concepts. Redirected tasks were used only when other prompts failed. They were developed in our study to reduce this complexity and to focus on the aspect of the task that the interviewer believed was preventing the participant from progressing. The fact that redirected tasks helped suggests that tools built around them could support students' learning of database modeling. If such a system could predict the source of the issue, students could be presented with a task that focuses on that concept alone, reducing the complexity of the task and creating a learning opportunity. That learning could then be taken forward to complete the original task. Of course, there are three challenges in this: firstly, predicting the source of a student's lack of progress on a task; secondly, designing effective redirected tasks that assess the student's knowledge of single atomic concepts to check the prediction; and thirdly, designing a series of scaffolded exercises that build the student's knowledge so that they are able to return to the original problem and solve it, and variants of it, independently.

Concluding Thoughts

To the best of our knowledge, ours is the first reported protocol for conducting think-aloud data collection online in the CSER domain. In this article, we have shared our experience conducting think-aloud interviews online. While we acknowledge that our protocol might not be suitable in all scenarios, it is a starting point for addressing the issues in qualitative research brought on by the pandemic. We had an advantage in certain respects because this study started in person before transitioning to the online mode: managing the ethics process, obtaining the necessary consent, and building a rapport with the participants were all done in person. However, this is an avenue that can be developed further to conduct think-aloud interviews entirely online. Even though we did not encounter many difficulties with the online setup, we believe the protocol can be further explored and enhanced to run successful online think-aloud interviews that collect richer data. Reflecting on our work, we believe that adopting the online think-aloud technique for data collection offers the CSER community an approach that is more accessible and convenient for many participants. It could even serve as a pedagogical approach that helps novice designers use self-explanation as a learning intervention. However, as noted in our take-homes for future researchers, we need to be cognizant that both the nature of the study and the participants' circumstances may affect the usefulness of such an approach.

References

1. Aleven, V.A.W.M.M., and Koedinger, K.R. An effective metacognitive strategy: learning by doing and explaining with a computer-based Cognitive Tutor. Cognitive Science, 26, 2 (2002), 147–179; https://doi.org/10.1207/s15516709cog2602_1.

2. Asgari, S., Trajkovic, J., Rahmani, M., Zhang, W., Lo, R.C., and Sciortino, A. An observational study of engineering online education during the COVID-19 pandemic. PloS One, 16, 4 (2021), 1–17; https://doi.org/10.1371/journal.pone.0250041.

3. Aureliano, V. C. O., Tedesco, P. C. D. A. R., and Caspersen, M. E. Learning programming through stepwise self-explanations. in Proceedings of Iberian Conference on Information Systems and Technologies, (Gran Canaria: IEEE, 2016), 1–6; https://doi.org/10.1109/CISTI.2016.7521457.

4. Batra, D., and Antony, S. R. Novice errors in conceptual database design. European Journal of Information Systems, 3, 1 (1994), 57–69; https://doi.org/10.1057/ejis.1994.7.

5. Batra, D., and Davis, J. G. Conceptual data modelling in database design: similarities and differences between expert and novice designers. International Journal of Man-Machine Studies, 37, 1 (1992), 83–101; https://doi.org/10.1016/0020-7373(92)90092-Y.

6. Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., and Glaser, R. Self-Explanations: How Students Study and Use Examples in Learning to Solve Problems. Cognitive Science, 13, 2 (1989), 145–182; https://doi.org/10.1207/s15516709cog1302_1.

7. Chi, M. T. H., De Leeuw, N., Chiu, M.-H., and Lavancher, C. Eliciting Self-Explanations Improves Understanding. Cognitive Science, 18, 3 (1994), 439–477; https://doi.org/10.1207/s15516709cog1803_3.

8. Clear, T. THINKING ISSUES: Loosening Ties: Permanently Virtual Teams and the Melting Iceberg of Relationship. ACM Inroads, 12, 3 (2021), 6–8; https://doi.org/10.1145/3479419.

9. Conati, C. Commentary on: Toward Computer-Based Support of MetaCognitive Skills: a Computational Framework to Coach Self Explanation. International Journal of Artificial Intelligence in Education, 26, 1 (2016), 183–192; https://doi.org/10.1007/s40593-015-0074-8.

10. Descript; https://www.descript.com/. Accessed 2022 October 18.

11. Doodle; https://doodle.com/meeting/organize/groups. Accessed 2022 October 18.

12. Ericsson, K. A. Protocol Analysis. in A Companion to Cognitive Science, edited by W. Bechtel and G. Graham (New Jersey: Blackwell Publishing Ltd., 2017), 425–432; https://doi.org/10.1002/9781405164535.ch33.

13. Fitzgerald, S., Lewandowski, G., McCauley, R., Murphy, L., Simon, B., Thomas, L., and Zander, C. Debugging: finding, fixing and flailing, a multi-institutional study of novice debuggers. Computer Science Education, 18, 2 (2008), 93–116; https://doi.org/10.1080/08993400802114508.

14. Fosslien, L., and Duffy, M. W., How to Combat Zoom Fatigue. Harvard Business Review. (April 2020); https://hbr.org/2020/04/how-to-combat-zoom-fatigue. Accessed 2022 July 29.

15. Gray, L., Wong-Wylie, G., Rempel, G., and Cook, K. Expanding Qualitative Research Interview Strategies: Zoom Video Communications. The Qualitative Report, 25, 5 (2020), 1292–1301; https://doi.org/10.46743/2160-3715/2020.4212.

16. Hassan, M., and Zilles, C. Exploring 'reverse-tracing' Questions as a Means of Assessing the Tracing Skill on Computer-based CS 1 Exams. in Proceedings of the 17th ACM Conference on International Computing Education Research, (Virtual: ACM, 2021), 115–126; https://doi.org/10.1145/3446871.3469765.

17. Hassan, M., and Zilles, C. On Students' Ability to Resolve their own Tracing Errors through Code Execution. in Proceedings of the 53rd ACM Technical Symposium on Computer Science Education, (Rhode Island: ACM, 2022), 251–257; https://doi.org/10.1145/3478431.3499400.

18. Hodges, C., Moore, S., Lockee, B., Trust, T., and Bond, M. The Difference Between Emergency Remote Teaching and Online Learning. EDUCAUSE Review. 3 (March 2020); http://hdl.handle.net/10919/104648. Accessed 2022 November 9.

19. Lin, M.-H., Chen, M.-P., and Chen, C.-F. Effects of Question Prompts and Self-explanation on Database Problem Solving in a Peer Tutoring Context. in Intelligent Information and Database Systems: Lecture Notes in Computer Science. (Switzerland: Springer, Cham, 2015), 180–189; https://doi.org/10.1007/978-3-319-15705-4_18.

20. Liu, Z., Zhi, R., Hicks, A., and Barnes, T. Understanding problem solving behavior of 6–8 graders in a debugging game. Computer Science Education, 27, 1 (2017), 1–29; https://doi.org/10.1080/08993408.2017.1308651.

21. Livescribe; https://us.livescribe.com/pages/echo-set-up. Accessed 2023 May 7.

22. Loksa, D., and Ko, A. J. The role of self-regulation in programming problem solving process and success. in Proceedings of the 2016 ACM Conference on International Computing Education Research, (New York: ACM, 2016), 83–91; https://doi.org/10.1145/2960310.2960334.

23. Lo Iacono, V., Symonds, P., and Brown, D. H. K. Skype as a Tool for Qualitative Research Interviews. Sociological Research Online, 21, 2 (2016), 103–117; https://doi.org/10.5153/sro.3952.

24. Microsoft OneNote; https://www.microsoft.com/en-nz/microsoft-365/onenote/digital-note-taking-app. Accessed 2023 February 25.

25. Miedema, D., Aivaloglou, E., and Fletcher, G. Identifying SQL misconceptions of novices. ACM Inroads, 13, 1 (2022), 52–65; https://doi.org/10.1145/3514214.

26. Moore, S., Trust, T., Lockee, B., Bond, A., and Hodges, C. One Year Later…and Counting: Reflections on Emergency Remote Teaching and Online Learning. EDUCAUSE Review. (November 2021); https://er.educause.edu/articles/2021/11/one-year-later-and-counting-reflections-on-emergency-remote-teaching-and-online-learning. Accessed 2022 November 9.

27. Nehls, K., Smith, B. D., and Schneider, H. A. Video-Conferencing Interviews in Qualitative Research. in Enhancing Qualitative and Mixed Methods Research with Technology. (Hershey, PA: IGI Global Publishing, 2015), 140–157; https://doi.org/10.4018/978-1-4666-6493-7.ch006.

28. Newell, A., and Simon, H. A. Human problem solving. (New Jersey: Prentice-Hall, 1972).

29. Prather, J., Pettit, R., Becker, B. A., Denny, P., Loksa, D., Peters, A., Albrecht, Z., and Masci, K. First Things First: Providing Metacognitive Scaffolding for Interpreting Problem Prompts. in Proceedings of the 50th ACM Technical Symposium on Computer Science Education, (New York: ACM, 2019), 531–537; https://doi.org/10.1145/3287324.3287374.

30. Rojas, J. M., Fraser, G., and Arcuri, A. Automated unit test generation during software development: a controlled experiment and think-aloud observations. in Proceedings of the 2015 International Symposium on Software Testing and Analysis, (New York: ACM, 2015), 338–349; https://doi.org/10.1145/2771783.2771801.

31. Rosenthal, K., and Strecker, S. Toward a taxonomy of modeling difficulties: A multi-modal study on individual modeling processes. in 40th International Conference on Information Systems, (December 2019), 1–25; https://aisel.aisnet.org/icis2019/learning_environ/learning_environ/12/. Accessed 2022 July 29.

32. Rosenthal, K., Ternes, B., and Strecker, S. Understanding individual processes of conceptual modeling. in Modellierung 2020, (Germany: Gesellschaft für Informatik e.V., 2020), 77–92; http://dl.gi.de/handle/20.500.12116/31848. Accessed 2022 July 26.

33. Santana, F. N., Hammond Wagner, C., Berlin Rubin, N., Bloomfield, L. S. P., Bower, E. R., Fischer, S. L., Santos, B. S., Smith, G. E., Muraida, C. T., and Wong-Parodi, G. A path forward for qualitative research on sustainability in the COVID-19 pandemic. Sustainability Science, 16, 3 (2021), 1061–1067; https://doi.org/10.1007/s11625-020-00894-8.

34. Teague, D., Corney, M., Ahadi, A., and Lister, R. A Qualitative Think Aloud Study of the Early Neo-Piagetian Stages of Reasoning in Novice Programmers. in Proceedings of the Fifteenth Australasian Computing Education Conference, (Australia: Australian Computer Society, Inc., 2013), 87–95; https://dl.acm.org/doi/10.5555/2667199.2667209.

35. Trate, J. M., Teichert, M. A., Murphy, K. L., Srinivasan, S., Luxford, C. J., and Schneider, J. L. Remote interview methods in chemical education research. Journal of Chemical Education, 97, 9 (2020), 2421–2429; https://doi.org/10.1021/acs.jchemed.0c00680.

36. Tremblay, S., Castiglione, S., Audet, L.A., Desmarais, M., Horace, M., and Peláez, S. Conducting Qualitative Research to Respond to COVID-19 Challenges: Reflections for the Present and Beyond. International Journal of Qualitative Methods, 20 (2021); https://doi.org/10.1177/16094069211009679.

37. Vihavainen, A., Miller, C. S., and Settle, A. Benefits of self-explanation in introductory programming. in Proceedings of the 46th ACM Technical Symposium on Computer Science Education, (New York: ACM, 2015), 284–289; https://doi.org/10.1145/2676723.2677260.

38. Visual Paradigm Community Edition; https://www.visual-paradigm.com/download/community.jsp. Accessed 2023 February 22.

39. Whalley, J., and Kasto, N. A qualitative think-aloud study of novice programmers' code writing strategies. in Proceedings of the 2014 Conference on Innovation & Technology in Computer Science Education, (New York: ACM, 2014), 279–284; https://doi.org/10.1145/2591708.2591762.

40. Whalley, J., Settle, A., and Luxton-Reilly, A. Novice Reflections on Debugging. in Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, (New York: ACM, 2021), 73–79; https://doi.org/10.1145/3408877.3432374.

Authors

Asanthika Imbulpitiya
Computer Science and Software Engineering Department
Auckland University of Technology
St Paul Street, Auckland Central, 1010, New Zealand
[email protected]

Jacqueline Whalley
Computer Science and Software Engineering Department
Auckland University of Technology
St Paul Street, Auckland Central, 1010, New Zealand
[email protected]

Mali Senapathi
Computer Science and Software Engineering Department
Auckland University of Technology
St Paul Street, Auckland Central, 1010, New Zealand
[email protected]

Figures

Figure 1. Overview of the 'in-person' think-aloud setup from the student and interviewer perspectives. A: Livescribe pen and paper used by the student; B: a close-up of the task provided to the student; C: Echo desktop used by the interviewer to play and view the recording of the student's think-aloud session

Figure 2. A: Student's shared screen while attempting the task; B: interviewer's shared screen while introducing the task

Copyright held by authors. Publication rights licensed to ACM.
