Science Olympiad at Penn Invitational 2019

heterodon
Member
Posts: 2
Joined: May 10th, 2018, 2:20 pm
Division: C
State: PA
Has thanked: 0
Been thanked: 0

Re: Science Olympiad at Penn Invitational 2019

Post by heterodon »

GoldenKnight1 wrote: This is again why I wish teams had been able to separate the tests.
For all the events I competed in, I was able to take apart the tests and staple them back together.
Synaptic_Cleft
Member
Posts: 5
Joined: February 26th, 2014, 6:03 pm
Division: C
Has thanked: 0
Been thanked: 0

Re: Science Olympiad at Penn Invitational 2019

Post by Synaptic_Cleft »

Just a couple of thoughts.

On printing: GoldenKnight makes an excellent point: printing test packets in two halves is one of those great practical ideas that has been overlooked. However, for what it's worth, as far as I know, most (if not all) events allowed competitors to split the test, just not to write on it. With that said, my perspective on printing tests has changed. I made the unusual transition from the executive level of SOUP to the event supervisor level this year. As a board member, it made perfect sense to save costs by printing only 20 test packets, since our tournament tends to be crazy expensive (another key reason is that we are required to hire security officers at overtime rates, whereas other tournaments are not). However, after being an ES this year, I think it's probably worth printing one test per team, because no matter how often you tell people not to write on the test, they still do. The added stress of checking test packets during time-block transitions to make sure they are blank, plus the cost of all the extra copies that need to be printed when people inevitably write on them, isn't worth the saved printing costs. I do think it's practical to print colored images separately.

On tests that are not possible to finish: to me, this is about what you consider important about Science Olympiad. If the purpose is just to learn in order to compete, then I agree that there is no point in making such a test. However, to me it's about something else. As a test writer, my goal is always two-fold: 1) write a test that is within the rules and that will effectively separate teams for the competition, and 2) even more importantly, write a test that will challenge students' problem-solving abilities and that is based on topics relevant to modern research related to the event, within the rules. This is especially important at an invitational, where students get the tests back. Almost all of us come into college with the mentality that science is an immense set of "facts" and "truths" to be internalized, and we miss out on the most thrilling part (and really the essence) of doing science: using something you know to figure out something you don't. By making tests that are "not possible to finish" and that have tough and interesting questions, you give students an opportunity to get excited about a problem they know they can solve using what they know, which hopefully encourages them to read more about it. In my mind, the best tests have a solid set of standard Science Olympiad questions, which help separate teams for the competition, plus a large set of challenging and stimulating questions that you don't expect students to finish in 50 minutes but hope they take the time to work through, understand, and get excited about after the fact.
syo_astro
Exalted Member
Posts: 620
Joined: December 3rd, 2011, 9:45 pm
Division: Grad
State: NY
Has thanked: 3 times
Been thanked: 20 times

Re: Science Olympiad at Penn Invitational 2019

Post by syo_astro »

GoldenKnight1 wrote:
zcgolf16 wrote: My one comment would be that I don't see the point of a test that is "not possible to finish". It prevents teams from getting in competition practice on all aspects and topics of the event.
...I have read so many times about hardworking division D SO members who make fantastic test questions but often don't know how to edit their tests. If the top teams are only seeing about half your test because they're running out of time, then what is the point of including as much as you did...
<3 these points. I've always wished I could get decent test editors, but it seems like nobody wants to edit much... various invites have boards for this or something, but I find it very tough to edit multiple events even with experience (I've seen this with others too).

I'll also add that part of the issue isn't just editing but also intent: most writers don't know what competitors want to, or should, get out of tests. Or they don't consider the different backgrounds of those taking the tests... or takes on the issue other than their own and a select few's >.>. To be fair, though, I feel like these issues (editing and intent) apply to many, MANY test writers.

Let's take
winchesetr wrote: ...The nationals test is long. In fact, many teams do not finish the Nats exam. That doesn't make the exam useless

...The test allows for a good separation of scores. Part of Science Olympiad is also learning how to take a test well (which is a very useful skill for college). I could show you the distribution if you are interested, but it's approximately normal...The length also allows teams that are less familiar with the event to still acquire points and be competitive.

...The point of the length (and the difficulty) is to also be able to take the exam home and learn from it...Tests are supposed to be a resource in this manner, especially for teams that may not know as much.

...Despite the length, teams were able to do well and accumulate a good amount of points. The length of the exam was approx. the same as last year's exam...
I'll preface this by saying that I didn't take the test, so I'm not strictly saying "you're wrong" or aiming my issues directly at you (or whoever wrote the test, sorry, I'm not sure >.>), but I see room for some healthy argument based on what you've said. Point by point:
-Just because teams don't finish the nationals exam doesn't mean teams at an invite should have difficulty finishing. They're different tournaments. Say an invite is practice for a regional or state competition for some (even many) teams. Does that mean they should get floored with a national-level exam? It definitely wouldn't make the exam useless (I don't think so, and I hope nobody thinks that), but it definitely limits the test's usefulness.

-Just because a test is long doesn't mean it helps separate scores. Again, I'm not trying to knock you specifically; I'm sure you put in a variety of question difficulties and such. But some teams definitely get floored if they see a bunch of difficult questions about material they've never seen. Let me put it this way: if students don't even answer a question, then you can't evaluate them on that question at all, which is part of why sheer length doesn't drive score separation so much as being "long enough" does (accounting for question difficulty, of course). Science Olympiad test taking is also different from regular test taking because you have a partner, so I don't see standard test-taking skill as a relevant goal to shoot for... I always saw it more like tackling a big project and learning to split the work; it's definitely non-standard.
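
(If it helps to see that point concretely, here's a toy sketch with made-up numbers, nothing from any real test: a question that no team reaches has zero score variance, so it can't separate anyone, no matter how well-written it is.)

```python
import numpy as np

# Toy illustration (hypothetical data): an item nobody reaches adds
# nothing to score separation, however good the question is.
rng = np.random.default_rng(42)
n_teams = 30

item_a = rng.integers(0, 2, n_teams)      # reached by everyone, mixed results
item_b = np.zeros(n_teams)                # reached by no one: all scored 0
rest = rng.normal(50.0, 10.0, n_teams)    # the rest of the test

total = item_a + item_b + rest

# A common discrimination index is the item-total correlation.
print(np.corrcoef(item_a, total)[0, 1])   # nonzero: item A separates teams
print(item_b.var())                       # 0.0: item B has zero variance, so
                                          # its item-total correlation is
                                          # undefined; it separates no one
```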

-So this is an interesting point, and one I've been thinking about more recently (looking at tests from the "learning side" and not just as "diagnostics")! I have read that difficult tests do help learning (that's right, I learn about learning, I'm a nerd :P), but I think it depends on A LOT of factors. One of the main reasons I've read is that students have to prepare more broadly for such tests. I've also read that it's important to "make mistakes and learn from them": students are supposed to give a serious effort on a question or a test first and then learn from those mistakes. But I'm not sure how that applies if you can't even try the questions in the first place. This part is definitely an opinion, but I don't think getting students to make mistakes needs to be taken TOO far, or even done all the time, to aid their learning. It's quite the balancing act; it's been interesting to read about!

As for the last point, no comment, good to hear. Anyway, probably a different thread for this, apologies for the interruption.

Edit: One extra point I see a lot, based on the last post, is that tests should involve challenges or critical analysis similar to doing "real science". While I agree that would be great, it can be practically difficult to execute on every question, and I have yet to read about how those questions actually perform on the diagnostic / learning side of things (I'll get to it soon enough :P). It can sometimes even be difficult to see how a question on a test relates to... well, "real science". And again, there's a diversity of teams competing anyway, and some don't compete as intensely... just stuff to keep in mind.
B: Crave the Wave, Environmental Chemistry, Robo-Cross, Meteo, Phys Sci Lab, Solar System, DyPlan (E and V), Shock Value
C: Microbe Mission, DyPlan (Fresh Waters), Fermi Questions, GeoMaps, Grav Vehicle, Scrambler, Rocks, Astro
Grad: Writing Tests/Supervising (NY/MI)
lumosityfan
Exalted Member
Posts: 418
Joined: July 14th, 2012, 7:00 pm
Division: Grad
State: TX
Pronouns: He/Him/His
Has thanked: 192 times
Been thanked: 84 times

Re: Science Olympiad at Penn Invitational 2019

Post by lumosityfan »

My philosophy on test writing is that while a test should be challenging and should make teams want to learn more, it is at the same time practice for further competitions. A test is there for competitors to get used to what's up ahead. The point of a good invite test is not to crush participants before they even begin; that does no one any good if they don't know how to answer any of the questions. If anything, it'll scare teams off because they realize they've got no chance. (Note: this does not mean dumbing down tests, but grading difficulty across the test so there are some easy, some medium, and some hard questions, with the ratios adjusted based on the type of tournament.)

In addition, invites should not be used as "nationals-prep" tournaments; they should be used as states/regionals practice tournaments. (That being said, I think there should be a big nationals-prep tournament, but after all states have finished.) Thus, the difficulty should be adjusted accordingly. syo's right that it doesn't do you any good to scare off half the teams, most of which are fairly new to Science Olympiad in general. If one of our goals is to spread science and SO to more schools, then we can't do that by slamming college-level material at them, because then they'll just think that it's not for them and walk away, which does us no good.
John P. Stevens Class of 2015 (Go Hawks!)
Columbia University Class of 2019 (Go Lions!)
primitivepolonium
Member
Posts: 53
Joined: August 3rd, 2013, 9:00 am
Division: Grad
State: CA
Has thanked: 0
Been thanked: 0

Re: Science Olympiad at Penn Invitational 2019

Post by primitivepolonium »

syo_astro wrote: I'll also add that part of the issue isn't just editing but also intent: most writers don't know what competitors want to, or should, get out of tests. Or they don't consider the different backgrounds of those taking the tests... or takes on the issue other than their own and a select few's >.>. To be fair, though, I feel like these issues (editing and intent) apply to many, MANY test writers.
There is certainly a school of writers who hold that it's better for an invite exam to be too long, since it provides teams with study content after the competition. It's also shaped by the fact that it's still fairly common to get plagiarized, off-topic, and/or insultingly easy exams at official tournaments; people aim to be better than that.

It's a very difficult balancing act. I've seen decently long, very on-topic but not difficult exams get put down because they had highs of 90% and were expected to be "harder", and I've also seen difficult, long, and educational exams get put down because they're "excessive" and "no one saw the last half of the exam anyway". It's often a matter of "pigeon if you do, pigeon if you don't" unless you're extremely experienced at exam writing (and a lot of ESs are college freshmen or sophomores) or very lucky. (Though of course, one should always consider accessibility and constructive criticism!)

At the end of the day, the supervisor has to gauge the competition they are writing for (a regionals exam might be easier on average, but really prioritize score separation). One must decide what one's goals and priorities are and stick to them, as long as they are reasonable and remain within the scope of the rules. For instance, some supervisors prioritize testing breadth of knowledge, while others test how good you are at problem-solving and adapting to challenges. And I'm sure you've all seen ID supervisors who try to wean competitors off binder dependence. More personally (soapbox time): as a Chem Lab writer, I try to get kids to use chemical intuition by posing problems that seem hard but can be solved by examining fundamental chemistry concepts taught in high school classes. That's a much better alternative, IMO, to rewarding kids for rote memorization of question formats and information, since there's already a pervasive, false, and harmful perception of chemistry as "memorization" when it really isn't. My aim is to write interesting questions with keywords that kids recognize, so that they 1) are intrigued by the subject and 2) at least have something concrete to study as they prepare for competition.
syo_astro wrote: While I agree that would be great, it can be practically difficult to execute on every question, and I have yet to read about how those questions actually perform on the diagnostic / learning side of things (I'll get to it soon enough :P). It can sometimes even be difficult to see how a question on a test relates to... well, "real science".
I know what you mean. Sometimes, in the process of trying to test "problem-solving", event supervisors write contrived and ambiguous questions that actively try to lead you towards a "right answer". I do think that for a lot of events, the "case format", where each big FRQ question builds off a scenario, is very helpful. It's how a lot of college exams are written, and as long as the scenarios aren't super dumb, it gives the kids context for how the event fits into real life. That said, I'd much prefer a useful recall question to a leading "problem-solving" question.
Div D! I really like chem, oceanography, and nail polish--not in that order.

Troy HS, co2016.

Feel free to PM me about SciOly or college or whatever! I really enjoy making online friends.
Unome
Moderator
Posts: 4336
Joined: January 26th, 2014, 12:48 pm
Division: Grad
State: GA
Has thanked: 235 times
Been thanked: 85 times

Re: Science Olympiad at Penn Invitational 2019

Post by Unome »

On the subject of printing, I've been trying out paperclipped test packets instead of staples, which seems to be working alright.

On the subject of test length, etc. (not going to quote half a dozen people :P ): I personally fall into the camp that would prefer the top teams be able to finish my tests, if usually only barely. That doesn't always work out, especially if I misjudge the strength of the teams, as I continually seem to do with Geomaps... somehow. That said, I know plenty of great test writers who follow the opposite philosophy and prefer to make their tests very long to maximize the post-tournament benefits (part of the reason I don't do this is that I'm pretty sure most teams will never look at the test again anyway, no matter what I do). I've definitely had to rely a bit on lead-in questions, mainly because I have no idea how to write Geomaps math questions easy enough for teams to answer at a decent rate (the three-point problem is the most fundamental structural geology math question in existence; what am I supposed to do if I can't even get a 10% success rate on that?).
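
(For anyone who hasn't seen it, here's a rough sketch of what the three-point problem asks; my own illustrative implementation with made-up numbers, not anything from an actual test key. Given three points of known map position and elevation on a planar bed, you recover the bed's strike and dip. It assumes x = east, y = north, z = elevation, and the right-hand-rule convention for strike.)

```python
import numpy as np

def strike_and_dip(p1, p2, p3):
    """Three-point problem: given three (x, y, z) points on a planar bed
    (x = east, y = north, z = elevation), return (strike, dip) in degrees,
    using the right-hand-rule convention for strike."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)   # normal vector to the bed
    if n[2] < 0:                     # orient the normal upward
        n = -n
    nx, ny, nz = n
    dip = np.degrees(np.arctan2(np.hypot(nx, ny), nz))  # tilt from horizontal
    dip_dir = np.degrees(np.arctan2(nx, ny)) % 360      # azimuth of steepest descent
    strike = (dip_dir - 90) % 360    # dip direction is 90 deg clockwise of strike
    return strike, dip

# Example: three outcrops on the same bed, which here is the plane
# z = 100 - 0.1x - 0.2y, so the dip works out to about 12.6 degrees.
print(strike_and_dip((0, 0, 100), (100, 0, 90), (0, 100, 80)))
```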
Userpage

Opinions expressed on this site are not official; the only place for official rules changes and FAQs is soinc.org.
ScienceTurtle314
Member
Posts: 6
Joined: January 14th, 2016, 5:55 pm
Division: C
State: PA
Has thanked: 0
Been thanked: 0

Re: Science Olympiad at Penn Invitational 2019

Post by ScienceTurtle314 »

syo_astro wrote: I'll preface this by saying that I didn't take the test, so I'm not strictly saying "you're wrong" or aiming my issues directly at you (or whoever wrote the test, sorry, I'm not sure >.>), but I see room for some healthy argument based on what you've said. Point by point:
-Just because teams don't finish the nationals exam doesn't mean teams at an invite should have difficulty finishing. They're different tournaments. Say an invite is practice for a regional or state competition for some (even many) teams. Does that mean they should get floored with a national-level exam? It definitely wouldn't make the exam useless (I don't think so, and I hope nobody thinks that), but it definitely limits the test's usefulness.

-Just because a test is long doesn't mean it helps separate scores. Again, I'm not trying to knock you specifically; I'm sure you put in a variety of question difficulties and such. But some teams definitely get floored if they see a bunch of difficult questions about material they've never seen. Let me put it this way: if students don't even answer a question, then you can't evaluate them on that question at all, which is part of why sheer length doesn't drive score separation so much as being "long enough" does (accounting for question difficulty, of course). Science Olympiad test taking is also different from regular test taking because you have a partner, so I don't see standard test-taking skill as a relevant goal to shoot for... I always saw it more like tackling a big project and learning to split the work; it's definitely non-standard.

-So this is an interesting point, and one I've been thinking about more recently (looking at tests from the "learning side" and not just as "diagnostics")! I have read that difficult tests do help learning (that's right, I learn about learning, I'm a nerd :P), but I think it depends on A LOT of factors. One of the main reasons I've read is that students have to prepare more broadly for such tests. I've also read that it's important to "make mistakes and learn from them": students are supposed to give a serious effort on a question or a test first and then learn from those mistakes. But I'm not sure how that applies if you can't even try the questions in the first place. This part is definitely an opinion, but I don't think getting students to make mistakes needs to be taken TOO far, or even done all the time, to aid their learning. It's quite the balancing act; it's been interesting to read about!

As for the last point, no comment, good to hear. Anyway, probably a different thread for this, apologies for the interruption.

Edit: One extra point I see a lot, based on the last post, is that tests should involve challenges or critical analysis similar to doing "real science". While I agree that would be great, it can be practically difficult to execute on every question, and I have yet to read about how those questions actually perform on the diagnostic / learning side of things (I'll get to it soon enough :P). It can sometimes even be difficult to see how a question on a test relates to... well, "real science". And again, there's a diversity of teams competing anyway, and some don't compete as intensely... just stuff to keep in mind.

Sorry to quote so much...

The SOUP disease exam was not incredibly difficult; it was actually filled with typical regionals/states-level material and only some nationals-level material... especially given that in Disease, the big difference between nationals and states is the amount of common-sense understanding of public health in general you need to have. I know you weren't directing your comments at this exam, but let me defend it, because I appreciated the test. For the record, I agree with a lot of what you said in general. However, your comments were relatively abstract, which makes them hard to apply to specific tests... I'm trying to break them down a bit in this post.

There are four reasons that I believe people go to invites: (1) for fun, (2) for practice taking an exam in realistic settings, (3) for the content of the practice exam itself, and (4) for an evaluation of where you are. I think the first one is largely independent of the actual tests. The disease test at SOUP was certainly good practice at taking a realistic exam; it was a mix of case study and general knowledge, and the length made test-taking strategies important. I agree with you, though, that if a test is super hard and 'floors' competitors, it's not so realistic/useful for regionals/states. But much of the test was not that difficult. I think that if you moved at the speed expected of someone who couldn't answer the hardest questions, you should have had no problem finding enough easy questions to fill the exam period, with basic test-taking strategies.

Moving on to (3): as you said, there's a lot to be said for an exam you can take home and learn from. The knowledge needed for the test was not super difficult, except for 3-5 scattered questions worth a small fraction of the total points. This is a little specific to Disease Detectives, but most hard points come not from obscure knowledge but from conceptual questions. Tests like this one ask a lot of conceptual questions that you learn to answer easily only by having been in similar situations many times before. For example (not specifically on this test, but questions like this come up on regionals, states, nats, and SOUP tests): "What measures would you first take to stop the spread of outbreak X in this restaurant?" This is the 'hard' part of Disease; that's why preparation for the event emphasizes practice over textbook knowledge.

That raises the question of evaluation. Is it fair to evaluate all teams on a test so difficult that they don't finish? I agree that saying 'you won't finish nationals' is not a fair argument, because many teams aren't planning to go to nationals (although many are, and SOUP has some discretion... they used nationals rules for Mousetrap, which I did not agree with at all because most vehicles weren't prepared). My defense of the length, though, centers on the idea of fluency. The event is designed to evaluate how proficient competitors are at really understanding public health and its challenges and approaches. It's a test of your fluency with the concepts at hand. Behavioral studies tend to show that fluent behavior is not only accurate but also fast. Someone is fluent in public health knowledge if they take 5 seconds to recognize that mosquito nets should be used to prevent malaria; if they take 5 minutes to come to the same conclusion, then they probably hadn't thought about the situation before seeing it on the test, and a lot of public health really is about pattern recognition, i.e., fluency. While a team moving more slowly will still get points for accuracy, I believe an ideal test should be just barely finishable by a top national team; that way, not all the really good teams will finish, and their fluency with the material can be compared. In my experience with the exam, an ideal set of partners could finish it... we came tantalizingly close :(.

Best,
Sam

EDIT: I said nothing about the quality of the case studies themselves, which I think make or break a Disease test; a good test has realistic, plausible outbreak investigations. This test was very good quality, although the first section could maybe have handled some of the ecological study questions better (9/10 for content).
primitivepolonium
Member
Posts: 53
Joined: August 3rd, 2013, 9:00 am
Division: Grad
State: CA
Has thanked: 0
Been thanked: 0

Re: Science Olympiad at Penn Invitational 2019

Post by primitivepolonium »

Unome wrote: On the subject of test length, etc. (not going to quote half a dozen people :P ): I personally fall into the camp that would prefer the top teams be able to finish my tests, if usually only barely. That doesn't always work out, especially if I misjudge the strength of the teams, as I continually seem to do with Geomaps... somehow.
This is SO true. I've done that thing where I compare the length of my exam with exams I've taken in the past and finished (but didn't medal highly in), with the logic that, "Hey, if I got 3rd and finished it, there are at least 2 other people who can finish".

Yeah, nope; the teams still sometimes don't finish. I think where you are in the season when you're writing also matters; even between mid-January and early February, there's a lot of improvement.
Div D! I really like chem, oceanography, and nail polish--not in that order.

Troy HS, co2016.

Feel free to PM me about SciOly or college or whatever! I really enjoy making online friends.
Unome
Moderator
Posts: 4336
Joined: January 26th, 2014, 12:48 pm
Division: Grad
State: GA
Has thanked: 235 times
Been thanked: 85 times

Re: Science Olympiad at Penn Invitational 2019

Post by Unome »

primitive_polonium wrote:
Unome wrote: On the subject of test length, etc. (not going to quote half a dozen people :P ): I personally fall into the camp that would prefer the top teams be able to finish my tests, if usually only barely. That doesn't always work out, especially if I misjudge the strength of the teams, as I continually seem to do with Geomaps... somehow.
This is SO true. I've done that thing where I compare the length of my exam with exams I've taken in the past and finished (but didn't medal highly in), with the logic that, "Hey, if I got 3rd and finished it, there are at least 2 other people who can finish".

Yeah, nope; the teams still sometimes don't finish. I think where you are in the season when you're writing also matters; even between mid-January and early February, there's a lot of improvement.
Heh, it took me a while to start trimming down my Fossils tests... and even then it feels weird to write a test that short. I think with Geomaps I might have just overestimated the top teams, because when I competed we were nowhere near being one of the top teams in Geomaps (nationally, at least), so I'm kind of guessing at the difficulty level.
ScienceTurtle314 wrote:I believe that an ideal test should be just barely finishable by a top national team, because that way, not all the really good teams will finish, and their fluency with the material can be compared. In my experience with the exam, an ideal set of partners could finish it...
Wow, that's exactly how I describe my goal when I try to write high-level tests...
Userpage

Opinions expressed on this site are not official; the only place for official rules changes and FAQs is soinc.org.
windu34
Staff Emeritus
Posts: 1383
Joined: April 19th, 2015, 6:37 pm
Division: Grad
State: FL
Has thanked: 2 times
Been thanked: 40 times

Re: Science Olympiad at Penn Invitational 2019

Post by windu34 »

primitive_polonium wrote: It's how a lot of college exams are written
I think this is a super important point worth mentioning. One of the things that has helped me the most in college is knowing how to take a "bad" and/or "outrageously hard" test, which I learned through my experiences in Science Olympiad. I think that even if the only benefit of outrageously long and hard tests is that students are better prepared for college, there is enough merit to justify writing those kinds of exams. A lot of this boils down to the following question: what is the purpose of Science Olympiad?
If you ask NSO, it's to inspire kids to pursue STEM. If you ask parents, it's to get into college. If you ask students, you'll get some combination of challenging themselves, exploring and learning more about science, and getting into college. If you ask alumni, I think many of them will talk about how Science Olympiad not only helped them get into college but prepared them for it. I believe that no matter which of these categories your motivations fall into, long, difficult tests better fit your own self-defined purpose of Science Olympiad than easy, short ones.

All test writers have their own personal goals for their exams, as primitive_polonium mentioned earlier, but in the end, the goal they all most definitely have in common is to write an exam that is within the rules. As far as regionals/states practice goes, what else really matters? The really hard part is finding good practice for nationals, as well as finding good ways to compare yourselves to other teams to figure out where you stand. This is where the difficulty and length of exams come into play. While I understand the futility of writing an exam of which the vast majority of teams can't finish more than half, I think this actually rarely happens. Perhaps teams can't answer half the questions, but it is my experience that teams usually read all of them. This argument changes for ID events. I am one of the supervisors who strongly believes in extremely fast-paced and challenging ID exams because, as primitive_polonium suggested, I want to reward the teams that actually know the information and understand how to apply it. I also secretly want to punish the teams with large, cumbersome binders that are inefficient, but that point has become moot with the new 2-inch rule.

One final point I would like to make: not every team gets to attend nationals, but why should that mean that not every team gets to take a nationals-level test? The new wave of event supervisors has addressed this point, and many of them write their exams as similarly to the nationals exams as they can (if the nationals exam for that event is typically written well, which is another story for another thread) to give more teams the opportunity to "experience" what it feels like to compete in a high-quality setting. I think there is great value in this, and teams that only attend regionals/states should recognize it.
Boca Raton Community High School Alumni
University of Florida Science Olympiad Co-Founder
Florida Science Olympiad Board of Directors
[email protected] || windu34's Userpage