Feedback on our Research
- Jiah Hwang
- Jun 18
- 4 min read
Updated: Jun 19

Once we had analyzed the data, specifically the survey results collected from students at the beginning and end of the workshops, using multiple linear regression, we compiled our findings into a research article.
This is our abstract:
"Aventura Digital was a ten-week series of workshops held in Asunción, Paraguay, designed to promote digital literacy among high school upperclassmen. Through quantitative benchmarks and student self-assessment, this research examines the effectiveness of utilizing educational technology to bridge the digital divide in Paraguay's education system. Asuncion is an important case study for MERCOSUR more broadly because of its rapid urbanization, infrastructure deficits, multiculturalism, issues with corruption, and patterns of agribusiness and re-exportation. This study concludes that integrating education technology into classroom lessons is sufficient to increase student engagement with technology more broadly. This increased technological engagement is particularly critical for MERCOSUR because the region is shifting to integrate technology into its economy and infrastructure. However, further research is needed to determine the long-term impact of this increased technological literacy on students’ academic and creative growth."
You can view the full research paper here:
Professor Mark Chin of Peabody College at Vanderbilt University was the first expert I was able to contact for feedback on the first draft of the research, and he was an optimal choice as he is an Assistant Professor of Education Policy and Inequality. In his email, he questioned the extent to which the main research question had been answered. In his words, "In the abstract, it seems like the primary purpose was to 'investigate the difference in students’ knowledge, enthusiasm, and overall attitude when they have digital tools compared to when they do not'- I couldn’t identify in the method where this question was answered."
He asked how I could effectively assess differences if this group of high schoolers from Paraguay was indeed a pilot group being exposed to digital tools and the research design included no comparison group. He further stated that the results read more like an implementation assessment (how did those who were in the program feel about the program?). In that case, the issue would be that access to the digital tools was not randomly assigned. He therefore suggested that I explore the limitations of this initial research, such as determining the extent to which responses on the final survey were predicted by responses on the initial survey using multiple regression (e.g., was satisfaction predicted by having access to computer equipment at baseline?). His final remark concerned the low follow-up rate (25%) on the final survey handed out to the participants.
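To make Dr. Chin's suggestion concrete, here is a minimal sketch of that kind of regression in Python: predicting a final-survey outcome from baseline survey responses via ordinary least squares. The data and variable names (`had_computer`, `baseline_confidence`, `satisfaction`) are entirely hypothetical stand-ins for real survey items, not our actual dataset.

```python
import numpy as np

# Hypothetical data: one row per student who completed both surveys.
had_computer = np.array([1, 0, 1, 1, 0, 0, 1, 0])          # baseline: computer access at home (0/1)
baseline_confidence = np.array([3, 2, 4, 3, 1, 2, 5, 2])   # baseline: self-rated confidence, 1-5
satisfaction = np.array([4, 3, 5, 4, 2, 3, 5, 2])          # final survey: satisfaction, 1-5

# Design matrix with an intercept column plus the two baseline predictors.
X = np.column_stack([
    np.ones_like(satisfaction, dtype=float),
    had_computer,
    baseline_confidence,
])

# Ordinary least squares fit: one coefficient per column of X.
coefs, residuals, rank, _ = np.linalg.lstsq(X, satisfaction.astype(float), rcond=None)

# coefs = [intercept, effect of baseline computer access, effect of baseline confidence]
print(coefs)
```

In practice a statistics package that also reports standard errors and p-values (rather than a bare least-squares fit) would be the natural choice, but the idea is the same: each coefficient estimates how much a baseline response predicts the final-survey outcome, holding the other predictors fixed.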
Because of this feedback, I fixed what I could at that point; there was no way to go back and ask all the students to complete the final survey, as doing so would create a large gap in the time elapsed since the end of the workshop between the students who originally completed the reflection survey and those completing it late. However, this mistake will be recorded in the final blog post about the pilot project, where we will compile all of our learnings and mistakes into plans for implementation in our future project. What we could fix (and did) was the quality of our research article. Originally, the "Limitations" and "Implications" sections of the research article were grouped together, leaving the discussion overgeneralized and giving too little focus to each. Because there were obvious limitations in this initial research, the "Limitations" section was separated into a section of its own, addressing all of the points Dr. Chin had raised.
Another big change resulting from his feedback was in the abstract. The abstract is one of the most important parts of a research article: it is not only the first section most readers will see, but, as Dr. Chin pointed out, it is the section that outlines the objectives of the research and is therefore what readers will use to judge whether those objectives were met by the end of the article. Initially, the abstract did not look like the one above; instead, it looked like this:
"The project Aventura Digital was a long-term series of workshops in Asuncion, Paraguay. By
partnering with and donating eight laptops to the educational organization Centro Melodia,
various schools could rotate the laptops and have willing students partake in a ten-step
workshop that would target specific digital alphabetization skills necessary for maximum
understanding. This research investigates the difference in students’ knowledge, enthusiasm,
and overall attitude when they have digital tools compared to when they do not. The project
aimed to uncover the promising elements of taking steps to close the digital gap in learning in the Paraguay education system."
The revised abstract addresses the confusion that Dr. Chin identified in the original version above, better conveying the focus of the research and the direction it ultimately took: as Dr. Chin noted, the study ended up being an analysis of implementation rather than a comparison between this pilot group and a control or comparison group.



