Improving computer science education requires that transformative innovations in teaching become everyday practices within the larger community of CS educators. Increasing usage requires substantial time and energy, intentional planning, and evidence-based practices for propagation. To that end, this column presents another installment of ongoing efforts to capture knowledge by interviewing prominent propagators [1,2,3], individuals who have successfully spread educational innovations within the CS education community.

In this column, we interview Dr. Monica McGill, who founded the new nonprofit CSEdResearch.org, which grew out of her work co-leading the K-12 Computer Science Education Research Resource Center, available at csedresearch.org. Funded by the NSF over the last five years, the Center provides over 900 article summaries, more than 150 high-quality evaluation instruments specific to computing education (surveys, questionnaires, and interview protocols), and tips for conducting K-12 CS education research [7]. Dr. McGill has co-authored several research studies on the state of CS education research [6,8,9,11,12]. She has also worked on broadening participation in computing [10] and game design curricula [4,5]. She serves as an Associate Editor for ACM Transactions on Computing Education (TOCE) and chaired the inaugural ACM-W North America committee.

Below are highlights of the interview, which ran approximately an hour. The transcript has been edited for clarity and style.

Q: HOW DID THE RESOURCE CENTER GET STARTED?

MM: My background is as a practitioner. I first worked for the government for a few years, then moved into industry as a computer scientist. I then took a few years away from my career when my kids were very young. When I decided to go back to work, I took a job teaching computer science as a lecturer at a local university. While I was doing that, I decided to get my doctorate in education as a non-traditional student: a single parent working full-time and going to school part-time.

My doctoral training was very eye-opening. When I saw the research being produced in other education fields, I started noticing a gap between the CS education research being produced and what I was learning in my formal education research training. Part of that, as I and others have hypothesized, is that computer science faculty are mostly trained in computer science, not in education. So, we have this huge number of very smart people who no doubt can grasp education research, but many haven't been formally trained in it. For probably 12 or 13 years I've had this idea in the back of my mind that it would be great to address this gap and have higher quality, more meaningful data produced by the CS education research community.

Adrienne Decker was my collaborator on a project designed to address the lack of girls and historically marginalized students studying computer science. We received an NSF grant, and a website was part of that: a resource center to share the data we were collecting for our study. We were already going to collect data on this subject, so why not share it with everybody else as well? That's when we thought the site could become a place for sharing resources for conducting high-quality CS education research.

We manually curated over 40 data points from each article and roughly 20 from each evaluation instrument. That gives users a quick way to filter through articles and survey instruments they wouldn't normally have on hand, and it also produces a dataset that we can go back to and use to see trends over time. It lets us compare against other theories and work outside of CS education to see where we're doing well and where there may be room for growth.
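
To make that concrete, here is a minimal sketch of the kind of quick filtering that structured curation enables. The record fields (grade_band, tags) and the sample entries are hypothetical illustrations for this column, not the Resource Center's actual schema:

    from dataclasses import dataclass, field

    # Hypothetical record holding a few of the curated data points per article.
    @dataclass
    class ArticleSummary:
        title: str
        year: int
        grade_band: str                              # e.g., "K-5", "6-8", "9-12"
        tags: set[str] = field(default_factory=set)  # e.g., {"intervention", "equity"}

    articles = [
        ArticleSummary("Sample intervention study", 2019, "6-8",
                       {"intervention", "equity"}),
        ArticleSummary("Sample experience report", 2021, "9-12",
                       {"experience report"}),
    ]

    # A quick query of the kind curation makes possible:
    # middle-school intervention studies only.
    for a in articles:
        if a.grade_band == "6-8" and "intervention" in a.tags:
            print(a.year, a.title)

Because every article carries the same fields, the same query can be rerun as the dataset grows, which is what makes spotting trends over time possible.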

Q: HOW DO YOU CURATE MATERIALS, AND ARE THERE THINGS YOU'VE DECIDED NOT TO INCLUDE?

MM: We were selective at the very beginning because we knew that we had only so many resources to commit to this. As far as the articles themselves, we had a set of core conferences and journals specific to CS education research that we were reviewing and curating data from. People can also submit their own articles if we don't include them. But we had certain criteria that we were looking at because we wanted it to be more intervention-based. Of course, it's grown since then because we have position papers and experience reports and other articles in the dataset as well.

The evaluation instruments side is the most used resource on our center. With that, we're a little bit more open. I have people who are submitting instruments to me: they'll email them to me and then we can vet them and post them, as long as they're education-related. We also tag them, whether it's computing or cybersecurity or whatever, so users can quickly find them.

Q: DO YOU HAVE A SENSE OF HOW MANY PEOPLE ARE USING THE RESOURCE CENTER?

MM: We released it in October 2018, and since then our user base has doubled every year. Last year, we had over 14,000 unique visitors to the site. The resource center is also starting to become more well-known internationally. Then we have social media as well: on Twitter, we have over 1,600 followers and that number keeps rising. Those numbers still surprise me, and it just motivates me to produce more high-quality resources for the community.

And then on the back end, of course, we're collecting data when people download instruments and when people review articles. So, we can start to see which ones are the most viewed and downloaded, and then potentially used.

Q: HOW DO YOU ATTRACT USERS?

MM: It takes time, and it takes going to conferences and speaking at webinars when you can. At NSF meetings, even if we're not speaking, it's being able to tell somebody who's looking for something, "Hey, we have this site that you may not know about." Some of it is word of mouth: people who use the resource center regularly and consistently have been passionate about sharing it with other people.

Some of it is our social media. When we post new articles and instruments, some of them generate an automatic tweet, so the community can see that we're active and adding information. We've also started to use social media to share events in the community, like when something comes up on the SIGCSE listserv that is relevant to researchers or faculty in general. I think it helps draw attention to the resource center, but more importantly it helps grow the community.

Q: HOW DID YOU IDENTIFY PEOPLE TO REACH OUT TO? WERE THERE SPECIFIC EVENTS YOU ATTENDED BECAUSE OF OUTREACH OR WAS IT A MATTER OF TAKING ADVANTAGE OF EVENTS YOU WERE ATTENDING ANYWAY?

MM: It's definitely a combination. Strategically, it means presenting a special session on resources for the community and being one part of that; presenting alongside six or seven other people who are also sharing resources broadens our reach. It means being involved and active with the Computer Science Teachers Association (CSTA) and CSforALL communities, among many others, and working with collaborators and partners to make sure they know what we're doing.

Q: HOW HAS USER FEEDBACK PLAYED A ROLE IN YOUR DEVELOPMENT PROCESS?

MM: Before it launched, we held a user focus group with several key education researchers, then went through concept testing and alpha testing with users, and then made a beta version that we also tested with users. So, we got feedback very early on to understand the community's needs. Now that it's been out there for three years, we're going back and having outside eyes go through that process again. Over the summer, a graduate capstone team at the University of Washington's iSchool interviewed users to find out what they like best, what they don't like, and what they would like to see as part of the resource center. I think it's going to be very beneficial for us.

Also, we're always looking for feedback. People can email us directly from the website, and I'm always open to hearing what can be improved or what users specifically need. I'm happy to walk people through the site as well. That helps me learn whether something technical on the site can be fixed, whether something in the design can be improved, or whether there's something missing that we should add. Helping people understand how to use the resource center to find what they're looking for is important.

Q: WHAT TYPE OF CHANGES HAVE YOU MADE SO FAR?

MM: One thing is making sure that instruments have evidence of reliability and validity. There are quite a few ad hoc instruments used in CS education research. When people give pre- and post-intervention surveys, for example, they are more likely to create their own than to choose an instrument that has already been through a validation process. I don't remember the exact number, but I think researchers in our community create their own instruments about two-thirds of the time. Of those, many don't take the next step to actually test whether the instrument is reliable or has any evidence of validity. That's where the resource guides came in, because we saw that there was a need.
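
For readers unfamiliar with what testing reliability involves, a common first step is Cronbach's alpha, which measures the internal consistency of a multi-item survey. The sketch below is a generic illustration with made-up response data, not an instrument or procedure from the Resource Center:

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for a respondents-by-items score matrix."""
        k = scores.shape[1]                         # number of items
        item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Five respondents answering a four-item Likert-scale survey (illustrative data).
    responses = np.array([
        [4, 5, 4, 5],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 3],
        [4, 4, 5, 4],
    ])
    print(f"alpha = {cronbach_alpha(responses):.2f}")  # ~0.91 here; 0.7+ is often taken as acceptable

Reliability alone is not validity: an instrument can be internally consistent and still measure the wrong thing, which is one reason reusing validated instruments matters.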


Q: WHAT UNEXPECTED CHALLENGES HAVE YOU ENCOUNTERED?

MM: You always forget how long it takes to run a website! We have a pretty complex system on the back end. I've trained over 30 undergraduate students, both in software development and in the curation process for the articles and instruments. That takes a lot of time as well. It's part of the broader perspective of training the next generation of researchers.

Q: WHAT WERE SOME OF THE GOOD SURPRISES WHERE THINGS WENT BETTER THAN YOU EXPECTED?

MM: I think one of the good surprises is when ACM reached out and said they'd be willing to basically open their paywall. On our site, you can now click through and bypass the paywall on any articles from ACM. That's huge for us because now we have another entryway for people to get that important research when they're trying to do a lit review or search for research. It's a really great service for the community.

Q: WHAT ADVICE WOULD YOU GIVE TO OTHER PEOPLE WHO WANT TO GET THEIR INNOVATION OR RESEARCH INTO THE HANDS OF OTHER PEOPLE?

MM: We found organizations like ECEP (Expanding Computing Education Pathways), CSTA, and CSforALL that work with communities that can benefit from our resources, and we spoke to their communities in the early stages of designing the resource center. So I would say, first, identify the people, organizations, or platforms of connected people that could use your tools or resources, and talk to them early in the process. It's much easier to do that than to build your own community, at least at first.

I also take what I've learned working with other groups to model what I think would be good for our users, and I only address the things that may be useful for them, especially as a small nonprofit. Leverage what other people have done and grow from that, intentionally and purposefully.

It's important to establish your own identity. What is it that you want to do? What is your mission? How are you fulfilling that mission intentionally with collaborators and partners?

Q: WHAT ARE YOUR THOUGHTS ON SUSTAINING THE RESOURCE CENTER BEYOND THE INITIAL FUNDING?

MM: It's important to think about how things can be sustained long-term and grow, not only with the help of the NSF but other potential funders as well. We have had some donations from organizations including Google and IBM, and others are in the works. That can be coupled with a way to streamline all the processes around the resource center. I'm also looking at the resource center as a service: when I have other future grants, I can help sustain the site by including work that those projects produce.

Q: HOW MUCH DO YOU THINK WE CAN OR SHOULD LEARN FROM THE OTHER SCIENCES OR OTHER FIELDS THAT ARE DOING EDUCATION RESEARCH?

MM: The core research methodologies and many of the general theories are very applicable. We may need to interpret them and tweak them, but in general, I think that there's a lot to learn by looking outside of the CS Ed research community. Likewise, we should make sure that we're not just publishing in CS venues. CS Ed research doesn't really get out to the general education research community, which is a shame.

We're actually in the process of creating a new page that will show where the articles in our dataset come from: the publication venues, each with a link and description, on one page for everybody to see. It's at least a starting point for finding other places where people can publish.


Q: IT MAY BE A CHALLENGE IF YOU'RE A CS ACADEMIC: PUBLICATIONS IN AN EDUCATION JOURNAL ARE TOUGHER TO JUSTIFY TO YOUR ADMINISTRATION.

MM: Yeah. Although, we now have PhDs coming out of programs where they've been specifically trained in CS education research, and I think that will help make more people aware that there are high-quality venues to publish in besides ACM. And I love ACM, and I'm not saying go away from ACM, but maybe do both.

Q: WHERE DO YOU SEE YOUR RESOURCE CENTER IN RELATION TO INCREASING THE NUMBER OF PEOPLE WITH CS PHDS IN CS EDUCATION?

MM: When I think about the scope of the resource center and why it's needed in the US, there are 55 million children in K-12 and only a fraction of them are being taught computer science. The intent in many states and nationally is to grow this number, so that every K-12 student is learning computational thinking or computer science. That's a lot of research that can be done and that will need to be done. I think this is an area that's definitely poised to grow, and in the interim, we hope to help people learn best research practices.

Q: WHAT DOES SUCCESS LOOK LIKE?

MM: That's a good question. That's one of the things that's very difficult to measure, because we're part of this larger ecosystem of resources for researchers. We're specifically focused on computer science, and I don't think anybody else is doing that right now with a resource center like ours. I know that there are evaluation instrument repositories within STEM and for education research more generally.

Ultimately, success would mean that the quality of CS education research would be higher than it is now. How do you measure that? I'm not sure. There isn't really an end point to it that I can see, and I'd like to keep maintaining this and growing it over the next decade or so.

Q: WHAT ARE YOUR GOALS FOR CS EDUCATION IN GENERAL?

MM: I think what I'd really like to see is not only that we get better, but we move ahead of the other fields. As a field, we're looking at all the data behind education research. As computer scientists, we know how to build tools for that and make them useful, and that would go beyond specifically CS Ed research. Ideally in the next ten years, I'd like to see us not only improve the research that we produce and disseminate, but also turn our attention to how we can improve education research in general using computer science as a test bed.

Q: IS THERE ANYTHING ELSE THAT YOU WANTED TO SAY?

MM: It's worth recognizing how generous the community is in giving their time to support efforts like ours. You just can't do this isolated in your own office and then hope everybody uses it; that's just not the way it works. We had people share their advice and provide mentoring; people who recognized the value of what we were trying to do helped bring it to fruition, including our advisory board. I just couldn't have founded CSEdResearch.org without a number of people along the way, including a strong, supportive staff. We have a great community, and it's collaborative, and it's great to be a part of it.

References

1. Bunde, D.P., Butler, Z., Hovey, C.L. and Taylor, C. CONVERSATIONS: Conversation with a prominent propagator: Beth Quinn and Stephanie Weber, EngageCSEdu. ACM Inroads, 12, 4 (2021), 6–9. DOI: https://doi.org/10.1145/3490178.

2. Bunde, D.P., Butler, Z., Hovey, C.L. and Taylor, C. CONVERSATIONS: Conversation with a prominent propagator: Mark Guzdial. ACM Inroads, 13, 1 (2022), 6–9. DOI: https://doi.org/10.1145/3497877.

3. Bunde, D.P., Butler, Z., Hovey, C.L. and Taylor, C. CONVERSATIONS: Conversation with a prominent propagator: Tim Bell. ACM Inroads, 12, 3 (2021), 14–17. DOI: https://doi.org/10.1145/3457774.

4. McGill, M. Critical skills for game developers: an analysis of skills sought by industry. Proceedings of the 2008 Conference on Future Play: Research, Play, Share (New York, NY, USA, 2008), 89–96.

5. McGill, M.M. Defining the expectation gap: a comparison of industry needs and existing game development curriculum. Proceedings of the 4th International Conference on Foundations of Digital Games (New York, NY, USA, 2009), 129–136.

6. McGill, M.M. and Decker, A. A Gap Analysis of Statistical Data Reporting in K-12 Computing Education Research: Recommendations for Improvement. Proceedings of the 51st ACM Technical Symposium on Computer Science Education (2020), 591–597.

7. McGill, M.M. and Decker, A. Defining Requirements for a Repository to Meet the Needs of K-12 Computer Science Educators, Researchers, and Evaluators. 2018 IEEE Frontiers in Education Conference (FIE) (2018), 1–9.

8. McGill, M.M. and Decker, A. Supporting Research on Inclusion in K-12 Computer Science Education using CSEdResearch.org. 2020 Research on Equity and Sustained Participation in Engineering, Computing, and Technology (RESPECT) (2020), 1–2.

9. McGill, M.M. and Decker, A. Tools, Languages, and Environments Used in Primary and Secondary Computing Education. Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education (New York, NY, USA, 2020), 103–109.

10. McGill, M.M., Decker, A. and Settle, A. Undergraduate Students' Perceptions of the Impact of Pre-College Computing Activities on Choices of Major. ACM Transactions on Computing Education. 16, 4 (2016), 15:1–15:33. DOI: https://doi.org/10.1145/2920214.

11. Petre, M., Sanders, K., McCartney, R., Ahmadzadeh, M., Connolly, C., Hamouda, S., Harrington, B., Lumbroso, J., Maguire, J., Malmi, L., McGill, M.M. and Vahrenhold, J. Mapping the Landscape of Peer Review in Computing Education Research. Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education (New York, NY, USA, 2020), 173–209.

12. Upadhyaya, B., McGill, M.M. and Decker, A. A Longitudinal Analysis of K-12 Computing Education Research in the United States: Implications and Recommendations for Change. Proceedings of the 51st ACM Technical Symposium on Computer Science Education (2020), 605–611.

Authors

David P. Bunde
Knox College
2 E. South St
Galesburg, Illinois 61401 USA
[email protected]

Zack Butler
Rochester Institute of Technology
Rochester, NY 14623 USA
[email protected]

Christopher L. Hovey
University of Colorado Boulder
1045 18th Street, UCB 315
Boulder, CO 80309
[email protected]

Cynthia Taylor
Oberlin College
10 N Professor St
Oberlin, OH 44074 USA
[email protected]

Footnotes

Interested in submitting your survey instruments to the Resource Center?

Visit: https://csedresearch.org/resources/submit-to-repository/

Figures

Figure. Monica McGill

Copyright held by authors/owners.
