Post
by SilverBreeze » January 8th, 2021, 11:41 pm
Skink wrote: ↑January 8th, 2021, 6:19 pm
Interesting discussion. A few points:
1. I signed on to search for thoughts on the (troubling) fact that events seem entirely "open note" this season. What's the point in building a binder or cheat sheet when teams are going to have phones, tablets, or other computers available for looking up answers, a substantial fraction of which will be within reach (especially for the strongest competitors, who know what they're looking for and are able to find it)?
The average ES's tests are neither challenging nor lengthy, so I fear my teams are walking into something extremely high-scoring, where the differences between scores may very well be determined by the quality of the open notes.
I agree with the underlined portion; however, to my knowledge, the only official competition with open-note events is MIT, which is known for test quality and difficulty. (correct me if I'm wrong)
Skink wrote: ↑January 8th, 2021, 6:19 pm
2. Whether questions are objective, subjective, or both, length must be there to separate scores adequately. The rule of thumb is the number of registered teams doubled is the minimum number of points there should be, but even this is limiting depending on the subject matter. The real problem here, as I see it, is the scoring system's demands for breaking all ties. See, at an invite of forty teams, there is no theoretical reason why Skink MS JV3 should score uniquely compared against some other team that came (very) inadequately prepared or not at all. Our goal is to separate the medalists and a few more places from there for the sake of the team score. There is no meaningful difference between a rank of 25 and 30, for example, but the scoring system pretends that there is. Discarding the assumption that low scores need to be separated, we shift focus to medalists, which are less of an issue.
I agree, and it's also important to give less-prepared teams a good test experience, even if they get similar scores. (you weren't implying the opposite; just wanted to clarify in case people begin attacking this as elitist)
Skink wrote: ↑January 8th, 2021, 6:19 pm
3. I saw someone report a two hundred question test.
There should almost never be a two-hundred-question test, because this goes against the program's goals. The goal is for the ES to present a limited number of problems for teams to think critically about and solve; this requirement places a (fairly low) ceiling on how many questions can be completed in fifty minutes. Even the most brilliant and Scioly-famous participants squirm under pressure.
I'm not sure how much the "almost" is meant to encompass, but high-quality, difficult Orni tests regularly have 150-200 questions. When done well, this tests participants on interesting and useful information about birds while still engaging critical thinking. Perhaps it is the ID events themselves that need reworking to be less about only gathering, organizing, and retrieving information, but I think the underlined portion might be too broad a generalization.
Troy SciOly 2019 - now
Suzanne SciOly 2016 - 2019
Events this season: Water Quality, Forensics, Ornithology, (some experience in DP)
The team may be gone, but the program lives on...