I apologize for being so late to this forum, but I thought it would be best as a graduating senior to first take the time to look back at my entire Science Olympiad career and the experiences I've had. I've enjoyed every minute of competition over the years, and I plan to continue being involved in SO in any way I can. With that said, I hope that those involved with SO and the National Tournament will seriously consider the thoughts that I and many others on this forum put out so that SO can continue to improve at the highest level. I apologize in advance for the length!
I'll start with my events!
Anatomy & Physiology:
(21) - I'll be completely honest - this was not good at all. This test failed to represent the rules manual, as many topics in each system were completely left out. For the most part, the stations were far too easy, but there were also questions that were either too random or too irrelevant to the heart of the event (e.g. pink puffers and blue bloaters). Although the case study was a good idea, it just wasn't executed well enough, and the patient gave too many contradictory responses.
One of the beautiful parts of an event like Anatomy & Physiology is the multitude of ways in which an event supervisor can test the competitors' knowledge. I am always impressed by invitational tests like MIT's that are able to cover all of the rule book and really separate teams based on their knowledge
of A&P. This test was truly embarrassing for a national-level tournament in its inability to accomplish that. I thought last year's A&P test was better - it had easy, medium, and hard parts, as well as sections that forced competitors to dig into the knowledge of A&P they had built over many months. You could tell that last year's test was written by someone with a doctor's perspective and a solid grasp of the rules. I cannot say the same about Ms. Palmietto's exams from both 2015 and 2018. It was frustrating to end on such a sour note, especially for someone like me who made this event my first true love over the years and was fortunate enough to attend the SfN Conference through SO last year. However, I know I would've had the same complaints even if I had placed in the top 6. Congrats to Mason on a phenomenal job in this event and as a team all year.
Disease Detectives:
(4) - The CDC does a good job with this event on a yearly basis. The exams are always long and challenging, and they test the competitors' ability to adapt to different scenarios and use critical thinking. All in all, I thought this year's event was done well, but probably not as well as in previous years. The general knowledge section at the beginning of the test was an interesting twist (especially the bias and confounding part), but it was a little too straightforward for a national test. The second section on plague had some new questions that I enjoyed answering and thinking through, but it still felt too easy overall (e.g. the Bradford Hill criteria listing). Although I think topics like occupational health and plague are interesting inclusions in Nationals case studies, it's still strange that foodborne disease was hardly addressed.
The test was definitely doable in the time we had, but the national supervisors need to do a much better job of checking teams in and handing out booklets so that all teams get the full 50 minutes to work, not 40 or fewer. This test was OK overall, but I'd point to the tests from the last three years as much better examples of how a Nationals DD test should be structured. Congrats to TJHSST for winning this event.
Materials Science:
(25) - Just like how an NBA game can be a "tale of two halves", this MatSci exam was a tale of two parts. I'll start with the test. For the most part, I thought it was quite good for a Nationals test. It had varied multiple-choice questions that covered most, if not all, of the rule book. For example, I really liked the questions on materials testing equipment because it's something mentioned in the rules that takes a good amount of research to fully understand. The matching section could've been written and spaced out better, and there was probably too much on certain topics like IR. Like Unome said earlier, there was also probably too much focus on individual polymers rather than on how polymers behave as a class.
The lab, on the other hand, was quite flawed. It asked competitors to melt some cheese and then extrude it through a syringe to form a string that had to be carried to a station for judging. Next, it asked us to make a hashtag from the extruded cheese that also had to be carried. Seriously? I understand that challenging labs can be expensive to create and difficult to complete in the allotted time, but I've seen many interesting and intuitive labs at tournaments all year that blew these out of the water. I was definitely expecting better from the event supervisor because my partner had a blast doing the metals-based labs at Wright State Nationals.
Also, the scoring system for the cheese lengths was just bad. I don't remember the exact numbers, but the scoring ranges were unreasonably large. For example, a team with a 21 cm string would get the same number of points as a team with a 99 cm string, a significant difference considering how much cheese we were initially given. It just didn't capture the essence of the event's focus on polymers and seemed rather silly in the grand scheme of things. Frankly, it seemed like the event supervisor felt the need to force cheese into the event in some way because he's from Wisconsin. The rest of the lab section dealt with opacity and general polymer questions, both of which were OK. This event was decent, but it had the potential to be great overall with a better lab component. Hats off to Troy for winning this event and the overall tournament.
Again, I don't want to give the impression that I'm complaining or suffering from sour grapes because I didn't do as well as I had hoped in all my events. I've thought long and hard about each one, and I feel that these events and others should be analyzed in an unbiased manner that accounts for all the factors involved.
With that said, I thought the overall tournament
was run pretty well. The event staff and volunteers were great as usual, and the builders on my team didn't seem to have any glaring problems with how events were run. Colorado State had a great campus, and I didn't really have any issues traveling between events. I will say that there seemed to be very few things for competitors to do on Friday and Saturday other than walk around and/or compete. The Noosa was great, however, and I haven't eaten yogurt since then because other brands are just a letdown in quality.
I'd rank this tournament as better than Nebraska (for obvious reasons) and around the same as Wright State and UW-Stout. In my opinion, the UCF Nationals from 2014 and 2012 were just awesome and unparalleled in quality.
I'll preface my final part by first saying that Science Olympiad does a LOT of things right. I've always loved picking up a variety of events that have allowed me to grow in so many ways. SO is also incredibly rewarding, and its benefits obviously go far beyond a few medals at States and Nationals. However, the National Tournament and SO have some pressing issues that need to be tackled. Nationals should reflect the highest standards that SO has to offer. I think a lot of people on these forums believe that the quality of Nationals, especially in the bio events, has been declining a bit over the years. It's far too common to see tests at invitationals like MIT, GGSO, and SOUP in January and February that are significantly
better than Nationals tests. That shouldn't happen, plain and simple. I understand that the National event supervisors have responsibilities outside of SO, but it's simply unacceptable when a college student or professor writes a test that demonstrates a greater understanding of the event and all it entails than the National supervisor does. This is especially concerning since the event supervisors have almost an entire year to prepare an exam that they know students have been diligently preparing for. There are too many exams like Ecology last year and Herpetology and Anatomy & Physiology this year that leave competitors asking for much, much more. The National supervisor committee as a whole should honestly reconsider their approach, especially since there's a solid argument that tests from invites like MIT are better nearly across the board.
I've been truly honored to attend 7 national tournaments, and as a result, it feels like SO has become ingrained in my DNA. I'm a little sad that my team and I were not able to do as well as we had wished in my final competition ever. In the end, however, I'll remember all the incredible memories I've created from SO. I love my teammates to death, and I know they'll be back next year with a vengeance. We all have a great deal of respect for teams like Troy, Solon, and Harriton that consistently churn out winning teams that capture what SO is all about. Another shoutout goes to all the people on these forums for their awesome performances and dedication to Science Olympiad. You guys rock.
I'd love to hear what others think. Feel free to reply or PM me if you'd like to discuss.