Neither does the referred literature:

- `[Jordan et al., 2018]` shows the results of surveys from
Data Carpentry workshops.
Here too, it is not described how the evaluation questions
came to be, nor with what reasoning the best ones were selected:
the paper contains no references to the literature.

Taking a closer look at the evaluation questions of `[Jordan et al., 2018]`,
we see that some of its questions were not used.
The reasoning behind this omission is absent in `[Gurwitz et al., 2020]`.

One such removed evaluation question asks learners to self-assess
their confidence in the learning outcomes.

???- question "What does such a question look like?"

    Here we can see the results of learners self-assessing
    their competences before and after the teaching session,
    in a figure from `[Jordan et al., 2018]`:

    ![Figure from Jordan et al., 2018](jordan_et_al_2018_fig.png)

    Here we can see similar results from an earlier
    paper, `[Raupach et al., 2011]`:

    ![Figure from Raupach et al., 2011](raupach_et_al_2011_fig.png)

We know that this self-assessment does not relate to actual
skill `[Liaw et al., 2012]` (that paper cites further studies showing the same).
However, there is some evidence that self-assessment
does correlate with teacher effectiveness `[Raupach et al., 2011]`,
although other studies argue that more measurements are needed to
properly assess teacher effectiveness `[Darling‐Hammond et al., 2010]`.
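
To make this concrete, below is a minimal sketch of one way such
pre/post confidence ratings could be summarized into a single gain score.
The 1-5 Likert scale, the example question, and the gain formula are
illustrative assumptions, not the exact instruments or analyses of the
papers cited above.

```python
# Minimal sketch: summarizing pre/post self-assessed confidence.
# The 1-5 scale, the example question, and the gain formula are
# illustrative assumptions, not taken from the cited papers.


def normalized_gain(
    pre: list[int], post: list[int], max_score: int = 5
) -> float:
    """Mean gain as a fraction of the gain that was still possible.

    Positive values indicate increased self-assessed confidence,
    negative values a drop. Assumes the mean pre-rating is below
    max_score, else this divides by zero.
    """
    pre_mean = sum(pre) / len(pre)
    post_mean = sum(post) / len(post)
    return (post_mean - pre_mean) / (max_score - pre_mean)


# Ratings for a hypothetical item such as
# 'I am confident I can write a for loop in R',
# collected before and after a teaching session
# (1 = strongly disagree, 5 = strongly agree).
pre_ratings = [2, 3, 2, 1, 3]
post_ratings = [4, 4, 3, 3, 5]
print(f"Gain: {normalized_gain(pre_ratings, post_ratings):.2f}")
```

As noted above, such a gain in self-assessed confidence should not be
mistaken for a gain in actual skill `[Liaw et al., 2012]`.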

## References

- `[Brazas et al., 2016]`
Brazas, Michelle D., and BF Francis Ouellette.
"Continuing education workshops in bioinformatics positively impact
research and careers." PLoS computational biology 12.6 (2016): e1004916.
- `[Darling‐Hammond et al., 2010]`
Darling‐Hammond, Linda, Xiaoxia Newton, and Ruth Chung Wei.
"Evaluating teacher education outcomes: A study of the Stanford Teacher
Education Programme." Journal of education for teaching 36.4 (2010): 369-388.
- `[Gurwitz et al., 2020]`
Gurwitz, Kim T., et al.
"A framework to assess the quality and impact of bioinformatics training
Expand All @@ -131,6 +157,14 @@ The reasoning behind this is absent in `[Gurwitz et al., 2020]`.
"Analysis of Software and Data Carpentry’s pre-and post-workshop surveys."
Software Carpentry. Retrieved April 13 (2018): 2023.
[PDF](papers/jordan_et_al_2018.pdf)
- `[Liaw et al., 2012]`
Liaw, Sok Ying, et al. "Assessment for simulation learning outcomes: a
comparison of knowledge and self-reported confidence with observed clinical
performance." Nurse education today 32.6 (2012): e35-e39.
- `[Raupach et al., 2011]`
Raupach, Tobias, et al. "Towards outcome-based programme evaluation:
using student comparative self-assessments to determine teaching
effectiveness." Medical teacher 33.8 (2011): e446-e453.
- `[Roxå et al., 2021]`
Roxå, Torgny, et al.
"Reconceptualizing student ratings of teaching to support quality discourse
on student learning: a systems perspective." Higher Education (2021): 1-21.
