Poorly Run Event Stories

For anything Science Olympiad-related that might not fall under a specific event or competition.
Cherrie_Lan
Member
Posts: 18
Joined: April 19th, 2016, 5:04 pm
Division: C
State: NY
Has thanked: 0
Been thanked: 0

Re: Poorly Run Event Stories

Post by Cherrie_Lan »

Yesterday, at SOUP Invitational, the room setup for the different events was inconvenient, which is understandable given the amount of space available at the college. However, the herpetology exam I took wasn't only poorly run; the test was also a poor representation of the event itself.

Standing outside the event room, I wondered why it was starting 15 minutes late. When I walked in, I could see why.

The test was out of 585 questions. The time per station was fair, but given the setup of the room (a lecture hall full of seats with tiny attached tables), the space for writing and propping up a binder was far from satisfactory. Additionally, the way the stations were laid out made it difficult to find the next one, costing teams time while switching between stations. Since we were allowed to start as soon as we reached the next station, some teams may have started before others; teams that began in certain positions didn't have to run around half the room figuring out where their next station was. A good way to fix this would be to have everyone start and stop each station at a set time.

Beyond that, 15 questions per station (often covering multiple specimens from the list) is too many for one station. Furthermore, there was an extremely high percentage of trivia questions. Writers for Herpetology exams should ask more questions about the diet, habitat, and status of a specimen, not about the Greek god associated with a particular specimen or the name of a Chinese divination practice that used tortoise bones. If the top team doesn't even score higher than 25% on the test, the score ceiling was set much too low, and the results aren't an accurate representation of a team's skill level.

Another issue was the first part of the test, a 96-question section with both multiple choice and short answer. We were given 3 stations in total to work on it (7.5 minutes). But since the questions came in packets, teams wasted time searching through the packet just to find where they had left off. On top of that, many teams likely didn't finish even half the questions in that amount of time.
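As a rough check of that pace, here is a quick sketch using the post's own figures (96 questions, three 2.5-minute rotations):

```python
# Rough pace check for Part A, using the figures quoted in the post above:
# 96 questions across 3 station rotations totaling 7.5 minutes.
questions = 96
total_seconds = 7.5 * 60

print(f"{total_seconds / questions:.1f} seconds per question")  # ~4.7
```

At under five seconds per question, finishing the packet would require near-instant recall.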

Although SOUP should be a relatively difficult tournament, this test went beyond that standard, and not in a good way. The number of questions, the poor setup (although that may have been unavoidable), and the types of questions weren't an accurate representation of the event.

Hopefully the test writer will consider these comments when writing their next test, or the test for this invitational next year.
Ward Melville High School 10th Grade
Unome
Moderator
Posts: 4343
Joined: January 26th, 2014, 12:48 pm
Division: Grad
State: GA
Has thanked: 240 times
Been thanked: 95 times

Re: Poorly Run Event Stories

Post by Unome »

Cherrie_Lan wrote: […] The test was out of 585 questions. […] 15 questions per station (often covering multiple specimens from the list) is too many for one station. […]
...wait there were 39 stations?

what even
Userpage

Opinions expressed on this site are not official; the only place for official rules changes and FAQs is soinc.org.
NeilMehta
Wiki Moderator
Posts: 318
Joined: August 27th, 2016, 5:27 am
Division: C
State: NY
Pronouns: He/Him/His
Has thanked: 0
Been thanked: 0

Re: Poorly Run Event Stories

Post by NeilMehta »

Cherrie_Lan wrote: […] If the top team doesn't even score higher than 25% on the test, the score ceiling was set much too low. […]
Although I personally did not attend SOUP, I can certainly agree that, looking at the raw scores, things seemed out of place. A top score in the 130s out of a maximum of 585 seems extremely unreasonable for any team, even the talented and experienced powerhouses that attended. The score distribution made it evident that the test tried to raise the ceiling in ways that weren't ideal, resulting in a shockingly low cap. Instead of raising the difficulty of the questions, it seems the test writer used less representative methods to challenge the incredibly skilled teams in attendance. Simply increasing the depth or variety of the questions would have been far preferable to asking questions that diverge from the topic and reducing the time available.
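For reference, a quick sketch of the percentage those raw scores imply; the exact top score here is an assumption, since the post only says "in the 130s":

```python
# Top-score percentage implied by the raw scores discussed above.
# A top score of exactly 130 is an assumption ("in the 130s").
top_score = 130
max_points = 585

print(f"Top score: {top_score / max_points:.1%} of the available points")  # ~22.2%
```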
i can't feel my arms wtf i think i'm turning into a lamp

voted least likely to sleep 2018, most likely to sleep in class 2017+2018, biggest procrastinator 2018
Cherrie_Lan
Member
Posts: 18
Joined: April 19th, 2016, 5:04 pm
Division: C
State: NY
Has thanked: 0
Been thanked: 0

Re: Poorly Run Event Stories

Post by Cherrie_Lan »

Unome wrote:
Cherrie_Lan wrote: […]
...wait there were 39 stations?

what even
I don't think there were 39 stations, because of those 96 questions from Part 1; I think it was about 30 stations, although I can't remember the exact number off the top of my head.
Ward Melville High School 10th Grade
pb5754
Exalted Member
Posts: 518
Joined: March 5th, 2017, 7:49 pm
Division: C
State: NJ
Pronouns: He/Him/His
Has thanked: 45 times
Been thanked: 85 times

Re: Poorly Run Event Stories

Post by pb5754 »

Cherrie_Lan wrote:
Unome wrote: ...wait there were 39 stations?
[…] I think it was about 30 stations, although I can't remember the exact number off the top of my head.
wow
585 questions is actually monstrous :shock: :shock: :shock:
West Windsor-Plainsboro High School South '21
2021 Nationals: Astronomy - 1st, Geologic Mapping - 1st, Team - 6th
varunscs11
Member
Posts: 163
Joined: March 14th, 2015, 9:02 pm
Division: Grad
State: PA
Has thanked: 0
Been thanked: 0

Re: Poorly Run Event Stories

Post by varunscs11 »

Cherrie_Lan wrote: […] the herpetology exam I took wasn't only poorly run; the test was also a poor representation of the event itself. […]
Firstly, sorry about the format of the room. The event supervisors have no control over that; in fact, we weren't even told what the room would be like, so that really couldn't be helped.

Secondly, the event started late because we had only 10 minutes between the two events. In those 10 minutes, WIDI had to be cleaned up and Herpetology set up, so yes, some things did slip through the cracks, like explaining the rotation direction. And based on what happened in the last block, it wouldn't have mattered even if the rotations had been explained. The stations were organized in the standard way - a snake pattern.

Thirdly, I can tell you with 100% certainty that allocating a specific amount of time to rotations would not have changed the fact that teams who got to their stations first would start first.

Fourthly, content-wise, the rules were followed. Sure, there were some trivia questions, but they accounted for at most 10% of the exam. Yes, I could have written an exam that asked diet, range, and predators for every specimen, but that's not what is to be expected at the national level, and it's not something that would let the team with the most knowledge win. Furthermore, with 10-15 questions per station, speed becomes a large factor, meaning that teams not only need to have the knowledge but need to know it cold or be able to find it quickly. And the spread of teams was actually pretty good at the top. Would you rather have had a 10-way tie for 1st place, or places differentiated by minuscule point values?

Fifthly, I was actually planning to give everyone more time to work on the MC section (like it was at MIT), but due to financial constraints I was limited in the number of color copies I could make.

And finally, Herpetology is a binder event, meaning you are expected to know a lot more information than in an event with a single page of notes. If you felt the timing was tight or the questions too hard, maybe that is something you should work on or research further. Ultimately, my goal was to create an exam that would serve as useful practice material for future competitions (e.g. one with a lot of questions to test knowledge leading up to state or nationals). If you think the test didn't select for the teams that deserved to win just because you didn't win, you are completely wrong. WWPN recently got 3rd at Princeton, and they are extremely good at biology events in general. LASA got 10th at MIT, while Columbia got 8th. Lower Merion got 5th and 6th at Princeton. Harriton got 2nd at MIT (although they were split at SOUP). These are all teams that have shown on a national scale, on difficult exams, that they know what they are doing. So no, I don't think the exam was an inaccurate representation of a team's skill level. You also say the top score was too low - yes, it was very low, but MIT has extremely low percentages as well, and sometimes event exams don't really capture the essence of the event. Yet the teams that do well on those exams still do well at Nationals, because they don't cling to the notion that all exams will be similar in format and question type. They recognize that these things vary, prepare for the breadth of information out there, and seek to incorporate as much as they can into their heads and binders.

By the way, it was 585 points, not questions, and the test was actually out of 542 because station 4 was omitted.

One final thing: no team got all the ID right, or even came close. So even if the rest of the exam had been simple diet, range, reproduction, etc., most teams would still have missed most of the questions. Responding to Neil's comment: you can't say teams were highly "skilled" if they didn't even get the identification component right in an identification event.
Liberal Arts and Science Academy 2015-2017
University of Pennsylvania 2021
MIT Rocks and Minerals 2018, Fossils 2019

varunscs11's Userpage
Unome
Moderator
Posts: 4343
Joined: January 26th, 2014, 12:48 pm
Division: Grad
State: GA
Has thanked: 240 times
Been thanked: 95 times

Re: Poorly Run Event Stories

Post by Unome »

varunscs11 wrote: […] Sure, there were some trivia questions, but they accounted for at most 10% of the exam. […] Furthermore, with 10-15 questions per station, speed becomes a large factor […] And the spread of teams was actually pretty good at the top. […]
15 questions that aren't long-response or the like in 1 minute 20 seconds wouldn't be too bad - a team of two people who both really know what they're doing could probably finish fine. Though I doubt there are more than 3 or 4 teams nationwide at that level, even at the end of the season - maybe half a dozen if I'm being generous.
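A quick sketch of the per-question time under the 1:20-per-station figure assumed above (the supervisor later puts stations at 2.5 minutes):

```python
# Per-question time under the assumed 1-minute-20-second station length.
station_seconds = 80
questions_per_station = 15

print(f"{station_seconds / questions_per_station:.1f} seconds per question")  # ~5.3
```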

I obviously can't comment on the test questions or whether they contained too much trivia. 10% does sound a little high though.

In my opinion, ~30 stations is way too many (if that was actually the number of stations). The only reason I would ever consider more than ~20 stations a good idea would be if the number of teams per session required it.

With the way the schedule was, the tournament definitely needed a longer break between the 3rd and 4th sessions.
Userpage

Opinions expressed on this site are not official; the only place for official rules changes and FAQs is soinc.org.
varunscs11
Member
Posts: 163
Joined: March 14th, 2015, 9:02 pm
Division: Grad
State: PA
Has thanked: 0
Been thanked: 0

Re: Poorly Run Event Stories

Post by varunscs11 »

There were only 17 stations, 14 of which had specimens to identify, and most answers were only a few words (originally I had 15 stations but had to add 2 more because there are 17 teams per block). You had 2.5 minutes to complete each station. 10% is approximately 1 question per station (some stations had more). Also, there were a lot of bonus points built in for extra knowledge; I think New Trier was the only team to get any of them.

A mistake lots of teams made on the exam was not splitting up the MC portion. I know you couldn't take the exam apart, but there are many ways to make it work. There were only 7.5 minutes total to work on Part A, but a lot of the questions were easy points that few teams got.
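A quick sketch cross-checking the figures in this post; the 15-questions-per-station count is an assumption taken from the upper end of the 10-15 cited earlier in the thread:

```python
# Cross-check of the station figures given in the post above.
stations = 17
minutes_per_station = 2.5
questions_per_station = 15  # assumed: upper end of the 10-15 cited earlier

total_minutes = stations * minutes_per_station
seconds_per_question = minutes_per_station * 60 / questions_per_station

print(f"Total rotation time: {total_minutes:.1f} minutes")       # 42.5
print(f"Time per question: {seconds_per_question:.0f} seconds")  # 10
```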
Liberal Arts and Science Academy 2015-2017
University of Pennsylvania 2021
MIT Rocks and Minerals 2018, Fossils 2019

varunscs11's Userpage
User avatar
Unome
Moderator
Posts: 4343
Joined: January 26th, 2014, 12:48 pm
Division: Grad
State: GA
Has thanked: 240 times
Been thanked: 95 times

Re: Poorly Run Event Stories

Post by Unome »

varunscs11 wrote: There were only 17 stations […] 10% is approximately 1 question per station (some stations had more). […]
17 stations makes a lot more sense, and also sounds more like what you would do. For the MC, I'm personally not a fan of separate packets of questions unrelated to the stations, because it's just more paper to keep track of, but I can understand the appeal.

I still think even 1 in, say, 15 questions being trivia is too many, though we may be working from different definitions of "trivia".
Userpage

Opinions expressed on this site are not official; the only place for official rules changes and FAQs is soinc.org.
ehuang
Member
Posts: 0
Joined: February 18th, 2018, 8:22 pm
Has thanked: 0
Been thanked: 0

Re: Poorly Run Event Stories

Post by ehuang »

Unome wrote: […] I still think even 1 in, say, 15 questions being trivia is too many, though we may be working from different definitions of "trivia".
Hi! I have a few things to say to this:

Firstly, I don't know about the last time block, but rotation instructions would've been extremely helpful for me, and shouldn't have "slipped through the cracks" regardless. We weren't even told in the beginning which way the stations rotated - a lot of people went the wrong way because they assumed the rotation was to the right when it was actually to the left.

Secondly, even if the supervisors have no control over the room, why were the stations taped to the chairs?? This made it extremely difficult to balance a binder, bend over and look at the station, and try to write on the answer sheet at the same time, especially with the tiny space between the desks and the seats below.

Thirdly, point differences being minuscule is obviously not good, but neither is the top team getting only 20% of the points. Both are extremes that should be avoided. And about the length - an alum from my school peer-reviewed your test and told you it was too hard, but apparently that meant nothing, because you said you didn't care and that it was your "style". Maybe you should listen to your peers next time - 10-15 short-answer questions is too many for a station that was only, what, 2 minutes long? My partner couldn't even write at the same time because she was busy balancing and using the binder, so that left me frantically scribbling down what I could. At that point, it becomes a matter of who's the fastest writer, not who knows their stuff. 6-8 short-answer questions? Probably fine. 10-15? Way too many, so many that it even became a nuisance to find which question I was answering (since I skipped around, as I'm sure most people did).

Fourthly, let's talk about the content! As Cherrie said, there was too much fringe information - things that were tangentially related but not really relevant to the event. Additionally, some of the ID was really pointless - asking us to identify 12 range maps at one station, or 4 different skeletons. This isn't me whining like "oh no, range maps and skeletons!!!" It's just that the sheer amount of it, combined with the questions, made it pointless. Sure, maybe some of the content was things that top schools would have known, but a majority of it was so high-level that the test probably could've made an actual herpetologist light-headed. I mean, pictures of blood cells and skin sections? How were we supposed to know that?

And finally, maybe you're right! Maybe Cherrie just /sucks/ at herpetology. But maybe next time, before you indirectly call my teammate not smart enough or not hard-working enough, you should consider the extremely valid criticisms of your event. I heard complaints after Rocks at MIT, and I heard complaints after this one as well. No one is "clinging" to the notion that a test has to be a certain format with certain questions. We just want a test that gives a more valid representation of how good we are at the event. And if your goal was to help people practice for future competitions, I have to say, I didn't find it that valuable as practice. In fact, it felt like a complete waste of time. But that's just my opinion!