Poorly Run Event Stories

For anything Science Olympiad-related that might not fall under a specific event or competition.
ehuang
Member
Posts: 0
Joined: February 18th, 2018, 8:22 pm

Re: Poorly Run Event Stories

Postby ehuang » February 18th, 2018, 9:07 pm

There were only 17 stations, with 14 having specimens to identify and most answers were a few word answers (originally I had 15 but had to add 2 more because there are 17 teams per block). And you had 2.5 minutes to complete each station. 10% is approximately 1 question per station (some had more). Also there were a lot of bonus points built in for knowing extra knowledge, I think New Trier was the only team to get any of them.

A mistake lots of teams made on the exam was not working separately on the MC portion. I know you couldn't take the exam apart, but there are many ways to make it work. I know there were only 7.5 minutes total to work on Part A, but a lot of the questions were easy points that few teams got.
17 stations makes a lot more sense, and also sounds more like what you would do. For the MC, personally I'm not a fan of separated packets of questions unrelated to the stations because it's just more papers to keep track of, but I can understand the appeal.

I still think even 1 in e.g. 15 questions being trivia is too much, though we may be working off of different definitions of "trivia".
Hi! I have a few things to say to this:

Firstly, I don't know about the last time block, but rotation instructions would've been extremely helpful for me, and shouldn't have "slipped through the cracks" regardless. We weren't even told in the beginning which way the stations rotated - a lot of people went the wrong way because they assumed the rotation was to the right when it was actually to the left.

Secondly, even if the supervisors have no control over the room, why were the stations taped to the chairs?? This made it extremely difficult to balance a binder, bend over and look at the station, and try to write on the answer sheet at the same time, especially with the tiny space between the desks and the seats below.

Thirdly, point differences being minuscule is obviously not good, but neither is having the top team only getting 20% of the points. Both are extremes that should be avoided. And about the length - an alum from my school peer reviewed your test and told you it was too hard but apparently that meant nothing because you said you didn't care and that it was your "style". Maybe you should listen to your peers next time - 10-15 short answer questions is too much for a station that was only, what, 2 minutes long? My partner couldn't even write simultaneously because she was trying to balance and use the binder so that left me frantically scribbling down what I could. At that point, it becomes a matter of who's the fastest writer, not who knows their stuff. 6-8 short answer questions? Probably fine. 10-15? Way too much, so much that it even became a nuisance to find which question I was answering (since I skipped around as I'm sure most people did).

Fourthly, let's talk about the content! As Cherrie said, there was too much fringe information - things that were relevant but weren't really central to the event. Additionally, some of the ID was really pointless - asking us to identify 12 range maps at a station, or 4 different skeletons. This isn't me whining like "oh no range maps and skeletons!!!" It's just that the sheer amount of it made it pointless, when combined with the questions. Sure, maybe some of the content was things that top schools would have known, but a majority of it was so high level that that test probably could've made an actual herpetologist light-headed. I mean pictures of blood cells and skin sections? How were we supposed to know that?

And finally, maybe you're right! Maybe Cherrie just /sucks/ at herpetology. But also maybe next time, before you indirectly call my team member not smart enough or not hard-working enough, you should consider the extremely valid criticisms of your event. I heard complaints after Rocks at MIT, and I heard complaints after this one as well. No one is "clinging" to the notion that a test has to be a certain format with certain questions. We just want a test that gives a more valid representation of how good we are at the event. And if your goal was to help people practice for future competitions, I have to say, I didn't really find it that valuable as practice. In fact, it felt like a complete waste of time. But that's just my opinion!

varunscs11
Member
Posts: 163
Joined: March 14th, 2015, 9:02 pm
Division: Grad
State: PA

Re: Poorly Run Event Stories

Postby varunscs11 » February 18th, 2018, 9:49 pm

1. Touché. You're right, it shouldn't have slipped through the cracks, but I also had execs breathing down my neck to get started.
2. Stations could have been taped to the tables, but that would've meant putting the binder in your lap and the answer sheet on top of the stations. Again, a valid point, but it was not in my control.
3. The top team actually got 25% of the points, and when 1st place in Dynamic at MIT was 30%, that's not a crazy difference between the two. And I never said that it was my style; in fact, I got no comments that said such things. Once again, stations were 2.5 minutes. And no, it does not become a matter of who is the faster writer, because teams that have memorized the information won't even have to look at the binder, which saves a lot of time.
4. Skeletons are within the scope of the rules; in fact, rule 3b explicitly gives "skeletal material" as an example. Furthermore, rules 3c and 3d state that competitors are expected to show knowledge of biogeography and distribution of specimens. Asking to identify range maps fits that, and in fact, I drew inspiration from the 2009 Nationals Herpetology test, so there is a precedent. Blood cells and skin sections fit within rule 3d as well, and that wouldn't have made a real herpetologist (or vet) light-headed.
5. I never called your team member "not smart" or "not hard-working". I was simply pointing out areas of weakness that, as a team, you may want to improve upon.
6. I don't know who you heard complaints from about Rocks at MIT, but the responses I got were, on average, positive, with people noting the specimen quality, the exam difficulty, and application-based questions as positives and the test length as a negative. And that exam had a great distribution, and in general, teams that did well deservedly did so.
7. I'm not going to apologize for the length or the "fringe" information. Some of the "fringe" information is related to questions that have shown up on previous exams (for example, MIT asking about what animal the Egyptian god Sobek is, or children walking in a line). I will, however, apologize for the logistics. If you believe that it was a complete waste of time, then so be it - that's just your opinion. If you are looking for exams that ask you to identify from standard images and a standard set of questions, you aren't going to find those exams at the top-tier invitationals.
Liberal Arts and Science Academy 2015-2017
University of Pennsylvania 2021
MIT Rocks and Minerals 2018, Fossils 2019

varunscs11's Userpage

pb5754[]
Member
Posts: 387
Joined: March 5th, 2017, 7:49 pm
Division: C
State: NJ

Re: Poorly Run Event Stories

Postby pb5754[] » February 18th, 2018, 9:51 pm

Woah... :o
West Windsor-Plainsboro High School South '21

lumosityfan
Exalted Member
Posts: 320
Joined: July 14th, 2012, 7:00 pm
Division: Grad
State: TX
Location: Houston, TX

Re: Poorly Run Event Stories

Postby lumosityfan » February 18th, 2018, 9:53 pm

I just wanted to note that it's absurd to require teams to answer 542 questions. I understand you want to make the test hard to differentiate the teams, and that's great. However, there comes a certain point where the test is literally impossible to take fully. At that point, it becomes useless in preparing and differentiating teams since no one's getting points. That's why I aim for a top score of about 60-75%; low enough to prevent ties but high enough to, well, prevent ties. Also, saying that "you just need to prepare more" is an insult to the competitor because, I don't know, THEY ARE TRYING MORE!!!!! What are they supposed to do, win nats? (Oh it's convenient you won nats in that event. Interesting.) How about you stop this elitism that everyone can just act like the top teams and understand that THE MAJORITY OF TEAMS CANNOT?????!!!!! (And that goes for all of scioly; let's reach out to "lesser" and newer teams and try not to make tests stupidly hard. Not everyone is as inherently good as you are. Let's not crush them before they can get up.)
John P. Stevens Class of 2015 (Go Hawks!)
Columbia University Class of 2019 (Go Lions!)
2016-19 UCC Regs Astronomy ES, 2018 NJ States Astronomy ES, 2017 Princeton Helicopters ES, 2018 Princeton WGYN ES

birdylayaduck08
Member
Posts: 23
Joined: December 21st, 2013, 10:06 am
Division: Grad
State: TX

Re: Poorly Run Event Stories

Postby birdylayaduck08 » February 18th, 2018, 9:55 pm

I've always thought that the purpose of this thread was for tests that took 30 minutes to write, inexperienced proctors, and simple carelessness. Maybe some people might think Varun's test is over the top, but he definitely put many hours into writing and formatting the test. It seems almost unfair that people can get angrier over a long test than over a short, low-quality test that asks you what an amphibian is.

I personally didn't take this test, but in my experience, long, difficult tests are always the best kind. They force you to rely less on your binder and more on what you actually remember, and they let you realize that there's so much more of the event that you have yet to learn. Besides, if the test is that long, no one is going to finish, so what's the point in worrying about that? The rest of the test is just more information that you can easily add to your binder later. At the end of the day, maybe you think his test is too long, or the room orientation was painful, but Varun is a busy college student who dedicated a ton of time to writing such a long and difficult test. Don't hate someone for putting too much effort in when there are so many others who put in none.
Seven Lakes High School '19

Adi1008
Moderator
Posts: 478
Joined: December 6th, 2013, 1:56 pm
Division: Grad
State: TX
Location: Austin, Texas

Re: Poorly Run Event Stories

Postby Adi1008 » February 18th, 2018, 10:01 pm

I just wanted to note that it's absurd to require teams to answer 542 questions.
I think the test was 542 points, not questions.
University of Texas at Austin '22
Seven Lakes High School '18
Beckendorff Junior High '14

PM2017
Member
Posts: 495
Joined: January 20th, 2017, 5:02 pm
Division: Grad
State: CA

Re: Poorly Run Event Stories

Postby PM2017 » February 18th, 2018, 10:10 pm

As I was reading this thread (which honestly is getting out of hand; everyone should just chill out about it a little), I was thinking the same thing as birdylayaduck08. Give the guy credit for the amount of effort that he put in, and make criticism a tad more constructive and less "oh-you-did-such-and-such-and-so-the-test-is-terrible."

On the other hand, I feel as though criticism could have been received better, but that's just my two cents.
West High '19
UC Berkeley '23

Go Bears!

fanjiatian
Member
Posts: 243
Joined: March 16th, 2010, 6:46 pm
Division: Grad

Re: Poorly Run Event Stories

Postby fanjiatian » February 18th, 2018, 10:19 pm

Please don't complain that a test is too long and/or difficult :P It seems like the Herpetology test will be extremely good study material post-competition, and that is something to be grateful for. It takes a lot of time and effort to run a good event, and there are many other things college students could be doing, that aren't related to organizing SciOly tournaments. Some of you probably remember the good ol' times when taking an invitational test not pulled off the test exchange was something to be excited about in itself xD
Last edited by fanjiatian on February 18th, 2018, 10:21 pm, edited 1 time in total.

varunscs11
Member
Posts: 163
Joined: March 14th, 2015, 9:02 pm
Division: Grad
State: PA

Re: Poorly Run Event Stories

Postby varunscs11 » February 18th, 2018, 10:20 pm

Chill, there were not 542 questions; that's just how many points there were. Most questions were worth 2-3 points. And again, I literally NEVER said that the competitors aren't working hard enough. Just keep in mind that all you east coast teams are privileged when it comes to Science Olympiad. In Texas, we get little if any support from our schools (monetary or otherwise), no one cares about Science Olympiad in general, we have one of the worst state competitions in the country with arguably the worst 3-way deadlock (maybe Solon-Centerville-Mentor is worse, but not really, considering their national bids aren't determined by one grader in one event choosing to give partial credit to her half of the stack), and a general lack of high-quality competitions with national-level competitors (although that's slowly changing). Given these facts, I wrote my exam to benefit those who were flying out to come to the competition (e.g. LASA, New Trier, Clements, and the school from CA, although the latter 2 dropped). Your test philosophy might be different, and I'm not saying it's invalid in any way, but mine is just different from yours, and I'm not going to apologize for that or accept any claims that yours is better than mine. You may have a duty to lesser and newer teams, but my duty is to make the competition worth it for teams driving long hours or flying in (on money that is usually raised through fundraisers).

And note, it wouldn't have mattered if I had made the exam shorter and cut out all the "fringe information", because that doesn't change the fact that most teams missed the identification questions, which are the foundation of the event. In fact, if I had made the exam more standard, teams that missed the first question would likely have missed the rest of the station. In my exam, there were some questions that teams could answer without identifying the specimens, but apparently those too are "fringe" questions.

Also, I don't see why there's so much flak for the exam content when similar stuff happens at MIT.
Liberal Arts and Science Academy 2015-2017
University of Pennsylvania 2021
MIT Rocks and Minerals 2018, Fossils 2019

varunscs11's Userpage

Joycegu99
Member
Posts: 7
Joined: September 6th, 2016, 8:13 am
Division: Grad
State: PA
Location: Being Invasive

Re: Poorly Run Event Stories

Postby Joycegu99 » February 18th, 2018, 10:21 pm

As an invite test writer this year and a recently turned Science Olympiad alum, I know that last year I would have really appreciated tests that paid attention to details that not every team would know. Teams that have put their time into studying and working hard for an event don't want to go into a competition and take a test that isn't a challenge.
At the same time, I think a huge part of ID events is the teamwork and the time management aspect. A well-prepared team will either know their stuff or know exactly how to get to that information quickly with the resources they have. A well-prepared team will have put the time into gathering all the information they might need (we as test writers make sure to base these questions off readily available sources). And a well-prepared team will know how to approach a stations test in order to maximize the number of points they can attain.
Those are the kinds of considerations I put into my own tests, and I know Varun did the same for his. His test rewarded those who could access their resources well and who could attack the test in the most efficient way. He put a lot of thought into this test and into trying to cover every aspect of Herpetology that could be relevant, and that's very admirable. Don't blame him or his test for the logistics of things or the setup of the room.
Harriton High School

2016-17:
UGA/Forsyth Central/Brookwood/MIT/Regionals/State/Nationals
Team Spirit (Individual): 1/3/2/14/3//
Crave the Minerals: 11/-/1/13/-/-/
Wright it Do it: 3/1/-/2/2//
Invasive Rocks: -/-/4/11/5//
Remote Planet: 8/1/-/13/1//
Picture That: 12/2/18/16/4//

