National Test Discussion

ScottMaurer19
Member
Posts: 592
Joined: January 5th, 2017, 9:39 am
Division: Grad
State: OH
Has thanked: 0
Been thanked: 1 time

Re: National Test Discussion

Post by ScottMaurer19 »

efeng wrote:
Flavorflav wrote:
chalker wrote:
No there isn't. There are bigger factors that come into play, like corporate sponsorships, ease of running / participating, and committee foci. We can't standardize / control too much, or else we'll cause all kinds of work and problems at the ~400 tournaments that occur each year around the country.

WIDI is a signature event, that goes back almost to the beginning of SO. We've tweaked it a little bit over the years, but it's so integral to the organization I doubt we'll change it significantly anytime soon.

As a side note (and I readily admit this is a bit of a humblebrag), I can personally attest to the fact that it's possible to perform consistently in WIDI. Way back when I was a competitor, I got 1st place in the event at Nationals in 1992 and then again in 1993. In 1994 I got 5th place.
Just to add a view from below - if you guys ever did decide to give WIDI a well-earned rest for a few years, I suspect that you would hear a chorus of Hosannahs ringing out across the country. WIDI certainly has its fans, but in my experience it is far and away the most dreaded of all SciO events from a coach's perspective.
As somebody who has never done WIDI, I do have an opinion on this. Personally, I think that the low correlation coefficient for WIDI may be due to the fact that some schools just don't focus on it. Instead of focusing on the inquiry events, many schools focus on the more "sciencey" events. For example, my school has the same set of partners for Anatomy and Physiology. From what I observed, they did not work on WIDI nearly as much as they focused on A&P. The results are clearly visible, as they got 28th in WIDI and 2nd in A&P. Coincidentally, we roomed on the same floor as the people from the Fulton Science Academy (GA), and they had somebody who also did A&P and WIDI. They, however, have an excellent WIDI coach, as mentioned earlier on Unome's userpage. Fulton ended up receiving 1st in WIDI, as they have for the past couple of years, and got 39th in A&P. This may not prove much, but it goes to show that WIDI can be consistent, and that there are strategies and methods to the event. It also shows that many teams may underperform due to a lack of effort and preparation, rather than because the event itself is inconsistent, though I'm sure it is still one of the more inconsistent events.
Being from one of these schools, I can personally attest that this (at least for our team) is false. The students in WIDI spent hours practicing, both at team practices and in their own study sessions. I would estimate they did a build every day or two after states. Even with all of this preparation, and despite the pairs from both Div B and Div C feeling that they did well after discussing the build with their partners, they still didn't break the top 12.
Solon '19 Captain, CWRU '23
2017 (r/s/n):
Hydro: 3/5/18
Robot Arm: na/1/1
Rocks: 1/1/1

2018 (r/s/n):
Heli: 2/1/7 
Herp: 1/4/4
Mission: 1/1/6
Rocks: 1/1/1
Eco: 6/3/9

2019 (r/s/n):
Fossils: 1/1/1
GLM: 1/1/1
Herp: 1/1/5
Mission: 1/1/3
WS: 4/1/10

Top 3 Medals: 144
Golds: 80
pikachu4919
Moderator
Posts: 716
Joined: December 7th, 2012, 2:30 pm
Division: Grad
State: IN
Pronouns: She/Her/Hers
Has thanked: 89 times
Been thanked: 167 times

Re: National Test Discussion

Post by pikachu4919 »

I'm just curious to know, but how much did Woz recycle old trivia questions this year on Forensics C?
Carmel HS (IN) '16
Purdue BioE '21? reevaluating my life choices
Nationals 2016 ~ 4th place Forensics


"It is important to draw wisdom from different places. If you take it from only one place, it becomes rigid and stale." -Uncle Iroh

About me || Rate my tests!
Opinions expressed on this site are not official; the only place for official rules changes and FAQs is soinc.org.

MY CABBAGES!
Unome
Moderator
Posts: 4338
Joined: January 26th, 2014, 12:48 pm
Division: Grad
State: GA
Has thanked: 235 times
Been thanked: 85 times

Re: National Test Discussion

Post by Unome »

ScottMaurer19 wrote:
efeng wrote:
Flavorflav wrote: Just to add a view from below - if you guys ever did decide to give WIDI a well-earned rest for a few years, I suspect that you would hear a chorus of Hosannahs ringing out across the country. WIDI certainly has its fans, but in my experience it is far and away the most dreaded of all SciO events from a coach's perspective.
As somebody who has never done WIDI, I do have an opinion on this. Personally, I think that the low correlation coefficient for WIDI may be due to the fact that some schools just don't focus on it. Instead of focusing on the inquiry events, many schools focus on the more "sciencey" events. For example, my school has the same set of partners for Anatomy and Physiology. From what I observed, they did not work on WIDI nearly as much as they focused on A&P. The results are clearly visible, as they got 28th in WIDI and 2nd in A&P. Coincidentally, we roomed on the same floor as the people from the Fulton Science Academy (GA), and they had somebody who also did A&P and WIDI. They, however, have an excellent WIDI coach, as mentioned earlier on Unome's userpage. Fulton ended up receiving 1st in WIDI, as they have for the past couple of years, and got 39th in A&P. This may not prove much, but it goes to show that WIDI can be consistent, and that there are strategies and methods to the event. It also shows that many teams may underperform due to a lack of effort and preparation, rather than because the event itself is inconsistent, though I'm sure it is still one of the more inconsistent events.
Being from one of these schools, I can personally attest that this (at least for our team) is false. The students in WIDI spent hours practicing, both at team practices and in their own study sessions. I would estimate they did a build every day or two after states. Even with all of this preparation, and despite the pairs from both Div B and Div C feeling that they did well after discussing the build with their partners, they still didn't break the top 12.
Yeah it's definitely not a lack of practice here either. The difference seems to be in the method of preparation (which I never figured out).
Userpage

Opinions expressed on this site are not official; the only place for official rules changes and FAQs is soinc.org.
MIScioly1
Member
Posts: 128
Joined: April 30th, 2017, 12:27 pm
Division: Grad
State: MI
Has thanked: 0
Been thanked: 0

Re: National Test Discussion

Post by MIScioly1 »

Unome wrote:
ScottMaurer19 wrote:
efeng wrote:
As somebody who has never done WIDI, I do have an opinion on this. Personally, I think that the low correlation coefficient for WIDI may be due to the fact that some schools just don't focus on it. Instead of focusing on the inquiry events, many schools focus on the more "sciencey" events. For example, my school has the same set of partners for Anatomy and Physiology. From what I observed, they did not work on WIDI nearly as much as they focused on A&P. The results are clearly visible, as they got 28th in WIDI and 2nd in A&P. Coincidentally, we roomed on the same floor as the people from the Fulton Science Academy (GA), and they had somebody who also did A&P and WIDI. They, however, have an excellent WIDI coach, as mentioned earlier on Unome's userpage. Fulton ended up receiving 1st in WIDI, as they have for the past couple of years, and got 39th in A&P. This may not prove much, but it goes to show that WIDI can be consistent, and that there are strategies and methods to the event. It also shows that many teams may underperform due to a lack of effort and preparation, rather than because the event itself is inconsistent, though I'm sure it is still one of the more inconsistent events.
Being from one of these schools, I can personally attest that this (at least for our team) is false. The students in WIDI spent hours practicing, both at team practices and in their own study sessions. I would estimate they did a build every day or two after states. Even with all of this preparation, and despite the pairs from both Div B and Div C feeling that they did well after discussing the build with their partners, they still didn't break the top 12.
Yeah it's definitely not a lack of practice here either. The difference seems to be in the method of preparation (which I never figured out).
An event like WIDI is obviously not going to correlate as well as a pure study event because less is in the control of the students. However, there are people who perform fairly consistently well in WIDI with a lot of practice. I don't think there is anything wrong with it in principle - while occasionally every team is probably going to have it blow up, that is a problem that everyone has to deal with. It's not "unfair" to certain teams but fair to others.
University of Michigan Science Olympiad Executive Board
Skink
Exalted Member
Posts: 948
Joined: February 8th, 2009, 12:23 pm
Division: C
State: IL
Has thanked: 0
Been thanked: 5 times

Re: National Test Discussion

Post by Skink »

ScottMaurer19 wrote: Being from one of these schools, I can personally attest that this (at least for our team) is false. The students in WIDI spent hours practicing, both at team practices and in their own study sessions. I would estimate they did a build every day or two after states. Even with all of this preparation, and despite the pairs from both Div B and Div C feeling that they did well after discussing the build with their partners, they still didn't break the top 12.
Do note that A) that preparation may have been unproductive, and B) practical events offer less assurance of results for a given amount of preparation (and there's nothing wrong with this!) than builds or straight take-a-test events. I imagine Optics has been frustrating for some teams because they just don't perform on-site with the laser shoot despite hours and hours of prep work.
Magikarpmaster629
Exalted Member
Posts: 578
Joined: October 7th, 2014, 3:03 pm
Division: Grad
State: MA
Has thanked: 0
Been thanked: 1 time

Re: National Test Discussion

Post by Magikarpmaster629 »

I've had some time to think about these now, so here are my ratings:

Forensics (11th)- Brilliant. This event was amazing in just about every way (besides the microscopes). My partner and I discovered, as pikachu pointed out, that the Woz recycles many questions from previous nationals tests, so our strategy was to go through all the ID really fast and accurately, then use the answers to the recycled questions on our cheatsheet in order to medal. We recognized that this was not in the spirit of the event, but we really wanted to do well, so we went into the event with this plan. Needless to say, it didn't work. Instead of ~150 questions with 50-75 recycled trivia questions, there were 110 questions with ~10 trivia questions total, about half of them recycled. This meant that the test was entirely ID and doing the chemical tests correctly. We went through as fast as we could, and still ended up having to skip hairs, glass, and blood despite having 55 minutes total. I'm proud of my placement in this event, and I also think it's a good thing that the Woz decided to remove the trivia questions, as they don't really add anything to the event besides maybe giving a cushion to teams that don't know how to do the tests. On another note, the room this year was about twice as large as last year's, and they had enough materials for every two teams to share a water bath, microscope, and Bunsen burner. Again, the microscopes didn't work, which was about the only bad thing about this event. 9.5/10

Rocks and Minerals (9th)- Also amazing. Going in I was a bit worried about the difficulty of the test, because I had looked at the 2014 test and read previous years' R&M threads, and it sounded like it would be painfully easy. My fears went unrealized. First of all, the specimens used were just about the highest quality I've seen outside the Harvard Natural History Museum's, and that's hard to compete with. Next, the way they incorporated the nationals specimens was very good: not only did they ask for identification of kyanite and other minerals, but they also asked questions on their mineral environments, economic uses, and so on. I also liked how they included rutilated quartz, which is quartz with rutile inclusions. The station setup was also very good. Rather than have an obligatory calcite vs. aragonite station, they grouped specimens based on physical properties (cleavage and hardness), modes of formation, polymorphs other than CaCO3 (the quartz/chalcedony/opal station), and so on. Some of the stations were pretty typical (igneous rocks, sedimentary rocks), but they asked thoughtful questions about them. For example, the sedimentary rock station had specimens A-G and a chart with empty boxes and descriptions of each rock, and each letter had to be matched with a box; the problem was that there were more boxes than specimens, so you had to figure out which ones were missing. Many of the questions were set up like this, and although most were multiple choice, they all required identification before answering, so you had to know both the ID and facts about the mineral/rock. Best test I've taken all season IMO, 10/10.

Now for my...less positive reviews:

Hydrogeology (22nd)- I wouldn't say this was a bad test; in fact, it was the best Hydrogeo test I've taken in person, though not the best I've ever seen. I'll start with the positives. It was well paced: most hydrogeology tests are very short, leaving us with about 20 minutes to spare after finishing everything. On this test we definitely had to check the clock frequently; with 40 questions on part 1 and I think 20 on part 3, it was definitely long enough, which as I just mentioned is much better than many hydrogeology tests. Second, there were no technical issues with the computers; everything went smoothly. I also liked how they made the hydrogeology challenge about as interesting as possible, with something like 10 wells. This introduced some difficulty in figuring out part 3, and I actually had to mess around with the scenario to find the wells in danger of contamination. Now for the bad stuff. Part 1 was mostly a trivia test, mainly asking for obscure vocab and statistics (what is the term for a geological report on the soil layers where a well is to be dug? what fraction of household water usage goes to flushing the toilet?), which while typical isn't really desirable; this is nationals, and they should be able to come up with something better (perhaps labeling the parts of a well; I admit I don't have many alternatives, but I think it could have been improved upon). I also disliked how they asked for cost in the remediation techniques section: cost answers vary widely, and it was good that they did not ask for costs last year; they should have kept that approach. Overall, I think (again) it was not a bad test, but there are definitely some improvements that could be made. The supervisor was friendly and made a lot of cynical jokes- "You enjoyed this event? I think you need to get some mental help." 7.5/10.

I really enjoyed hydrogeology as an event. It was one of my favorite events to compete in, and my partner was one of my favorite people to work with. I'm a bit sad we didn't get top 15 or so, which I think would have reflected how much we worked on the event, but no matter. When it comes back (and I hope it does) I'd like to see the rules expanded: as of now they are very short and pretty vague about what to test, which I think is one of the causes of the bland tests overall.

And finally... Astronomy (15th)- I have a lot of problems not just with this test, but with failures all over the place for this event this season. As for the test specifically, it was...pretty bad, as far as nationals astronomy tests go. I mean, it wasn't a bad test, but it wasn't great like last year's (and pretty much every previous year's) was. One thing I did like was how they made the DSO section harder. Last year I didn't even need to open the binder for my DSO questions; they were so easy I could do them without checking, and I still can. This year I definitely needed it, and my partner and I fought over the binder on more than one occasion (we really dislike laptops). That's about the only thing I liked about the test, however. There were a few things that were not only dumbed down or simplified but ENTIRELY WRONG. I have two examples off the top of my head. The first was the question about the future of double degenerate white dwarf systems based on initial mass: it asked for the fate of white dwarf binary systems with system mass below 1.4 solar masses, between 1.4 and 2.4 solar masses, and above 2.4 solar masses. Since I saw this on the webinar, I knew the answers they wanted were a more massive white dwarf, a neutron star, and a SN Ia respectively; the problem is that there is no real basis for these answers. A SN Ia can occur as long as carbon is detonated in the center of a WD; this does not require meeting the Chandrasekhar limit. In addition, neutron star formation occurs under accretion-induced collapse, which results from carbon igniting off-center in a WD, and is also not dependent on system mass. This question was totally arbitrary, and the test writers should have known this. Another distinct issue I have is with the radial velocity problem. I did like how they gave us redshift vs. time rather than simply velocity, meaning you had to know what z is, but there was still a major failure in this question: they gave us radial velocity, period, and semi-major axis. To solve for all the variables in an orbital problem (assuming an inclination of 90 degrees), one needs two of: the velocity of each component, the mass of each component, the center-of-mass separation of each component, and the period. They gave us three, meaning there were two possible solutions for each variable. We assumed they wanted us to use vanilla Kepler's third law to find the total mass and the velocity ratio to find the individual masses, but there's no way to tell which approach they intended. Speaking of math, this test had very little. Instead of asking about the surface gravities or densities of white dwarfs and neutron stars, the expansion rate and energy of supernovae, the luminosity and temperature of accretion disks, or any of the other very interesting and fun math questions, it was just a few repetitive Phillips relation, distance modulus, and Hubble's law problems. Math is extremely important in this event, and it was largely ignored. My last problem will probably get me a lot of flak, but I really dislike the frequent use of "why"-type questions. While these problems are good for thinking about and are interesting, they are very poor at separating teams because of how difficult they are to grade. Often one question can have many answers, and a correct answer may not be the one the test writer thought of (one could argue, for example, that SN 2011fe is decidedly double degenerate or definitely not, depending on which studies are used as evidence). In these ways, I think this test was much poorer than astronomy tests in previous years, and my placement does not reflect the amount of time I poured into this event. 6/10.
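For readers unfamiliar with the quantitative side of the event, the distance modulus and Hubble's law problems mentioned above boil down to calculations like the following. This is only an illustrative sketch, not anything from the actual test: the SN Ia peak absolute magnitude (~-19.3) and the Hubble constant (70 km/s/Mpc) are typical textbook values I've assumed here.

```python
# Distance modulus: m - M = 5*log10(d / 10 pc)
def distance_pc(apparent_mag, absolute_mag):
    """Distance in parsecs implied by the distance modulus."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

C_KM_S = 299_792.458  # speed of light in km/s
H0 = 70.0             # Hubble constant in km/s/Mpc (assumed value)

# Hubble's law: v = H0 * d, with v ~ c*z for small redshift z
def hubble_distance_mpc(z):
    """Distance in Mpc from a small redshift via v = c*z = H0*d."""
    return C_KM_S * z / H0

# Example: a SN Ia observed at peak apparent magnitude 19.2,
# treated as a standard candle with absolute magnitude M = -19.3
d_mpc = distance_pc(19.2, -19.3) / 1e6   # distance in Mpc (~500)
z = d_mpc * H0 / C_KM_S                  # redshift Hubble's law would predict
```

Running the same relation the other way (measure z, infer a distance, compare against the distance modulus) is essentially how SNe Ia are used to anchor Hubble's law, which is why these two formulas show up together so often on astronomy tests.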

This is taking me a long time to write, so I'll write up why I thought tests were bad this season later.

Overall (Team 11th(!!!!!))- while this was only my second nationals, it was by far better in nearly every way than last year. The volunteers, along with a few undergrads, were very helpful, and although events were spread across many different buildings, directions between buildings were very clear. The opening ceremony was pretty boring, but I loved the music and effects at the closing ceremony, and the swap meet was run pretty well too (I would have liked to see team labels on the tables again, though). I'm kinda sad that I'm leaving Science Olympiad without having medalled at nationals despite how much time I put into it, but I'm really happy with how my team placed; we did not expect at all to do that well. Best experience of my life with minimal flaws, 9.9/10.
Ladue Science Olympiad (2014ish-2017)

A wild goose flies over a pond, leaving behind a voice in the wind.
A man passes through this world, leaving behind a name.
nicholasmaurer
Coach
Posts: 422
Joined: May 19th, 2017, 10:55 am
Division: Grad
State: OH
Has thanked: 1 time
Been thanked: 22 times

Re: National Test Discussion

Post by nicholasmaurer »

Skink wrote:
ScottMaurer19 wrote: Being from one of these schools, I can personally attest that this (at least for our team) is false. The students in WIDI spent hours practicing, both at team practices and in their own study sessions. I would estimate they did a build every day or two after states. Even with all of this preparation, and despite the pairs from both Div B and Div C feeling that they did well after discussing the build with their partners, they still didn't break the top 12.
Do note that A) that preparation may have been unproductive, and B) practical events offer less assurance of results for a given amount of preparation (and there's nothing wrong with this!) than builds or straight take-a-test events. I imagine Optics has been frustrating for some teams because they just don't perform on-site with the laser shoot despite hours and hours of prep work.
Obviously events with a practical component (and builds too, where you don't know distances etc. in advance) can go horribly wrong on the day of competition, either due to bad luck or stupid mistakes. I'm not arguing this is unfair; all teams have to deal with the same challenges. My objection is to the arbitrary nature of the grading, which makes it even harder to adequately prepare and work out a system. The Inquiry committee made some big changes to Game On this year that greatly improved the quality of the event and the consistency of its scoring. I'm simply arguing that similar steps could be taken to improve WIDI.

This wouldn't guarantee that preparation would translate to good results. However, it would reduce the risk that a team who has studied and prepared really well will completely bomb the event.
Assistant Coach and Alumnus ('14) - Solon High School Science Olympiad
Tournament Director - Northeast Ohio Regional Tournament
Tournament Director - Solon High School Science Olympiad Invitational

Opinions expressed on this site are not official; the only place for official rules changes and FAQs is soinc.org.
EastStroudsburg13
Admin Emeritus
Posts: 3203
Joined: January 17th, 2009, 7:32 am
Division: Grad
State: PA
Pronouns: He/Him/His
Has thanked: 48 times
Been thanked: 204 times
Contact:

Re: National Test Discussion

Post by EastStroudsburg13 »

alli_burnett wrote:I never said that it was a bad test. In fact, it was a really difficult test, in my opinion. I would also say it wasn't like anything I had ever seen before. I've experienced PowerPoint invasives tests before but never anything that was that fast paced. Honestly, we should've done more preparation, especially with identifying all of the species by only their scientific names. However, expecting us to identify the biological controls for ~10 different species in 1.5 minutes was pretty extreme. We definitely could've prepared more, and with this knowledge I'll focus on doing well in herpetology next year. My score out of 10 was based on my experience with the event and the test, not just the questions. The test itself wasn't bad or poorly written, but I thought the test was difficult for everyone, not just my team. Clearly that wasn't the case.
I think most of the people here are rating the tests based on the quality of the tests, so giving it a 4/10 on most people's rating systems would indicate that they thought it was a bad test, rather than just a rough experience.
nicholasmaurer wrote: Obviously events with a practical component (and builds too, where you don't know distances etc. in advance) can go horribly wrong on the day of competition, either due to bad luck or stupid mistakes. I'm not arguing this is unfair; all teams have to deal with the same challenges. My objection is to the arbitrary nature of the grading, which makes it even harder to adequately prepare and work out a system. The Inquiry committee made some big changes to Game On this year that greatly improved the quality of the event and the consistency of its scoring. I'm simply arguing that similar steps could be taken to improve WIDI.

This wouldn't guarantee that preparation would translate to good results. However, it would reduce the risk that a team who has studied and prepared really well will completely bomb the event.
This is basically my position on it. WIDI is of course a signature SO event, but I think it could use some updates and improvements. We should always be looking for ways to improve different aspects, since I think that striving for excellence is what defines the competition of Science Olympiad.
East Stroudsburg South Class of 2012, Alumnus of JT Lambert, Drexel University Class of 2017

Helpful Links
Wiki
Wiki Pages that Need Work
FAQ and SciOly FAQ Wiki
Chat (See IRC Wiki for more info)
BBCode Wiki


So long, and thanks for all the Future Dictator titles!