
Re: Science Olympiad at MIT Invitational 2019

Posted: January 13th, 2019, 7:54 pm
by windu34
antoine_ego wrote:Will MIT watermark their tests this year?
They will not. On the website, it says all tests will be released. Additionally, ESes have been given permission to post their exams and keys online.

Re: Science Olympiad at MIT Invitational 2019

Posted: January 13th, 2019, 8:18 pm
by windu34
Unome wrote:
Raleway wrote:Gonna be a spicy hot take here;

This being my last year competing (and therefore my last MIT Invitational), and having competed at many of the other top competitions such as Princeton and SOUP, I strongly disliked MIT. I competed here last year and also disliked it, but this year felt much worse. I competed in the last block of Sounds of Music, and I waited 50 minutes in line just to do my 5-minute build portion. Having to stand in the hallway with your instrument while people walk by with everything going on is terrible. Luckily, my other build portions went early in the day and finished right on time - big props to the Wright Stuff event supervisors, who I know are great at what they do based on personal experience. That said, I definitely feel the flying arena was poor and created inherent disadvantages, but that's nothing against the event supervisors, since the choice of arena is fully on the MIT organizers.

It's not just that: knowing that my Fermi exam was scored incorrectly, as well as my Codebusters exam (after only glancing through it for about 30 minutes on our long bus ride home), really irks me. One was in my favor, but I strongly dislike any error in grading, even understanding the time crunch. My team also mentioned how in Thermodynamics two teams were even handed the answer key... and those two teams mentioned it only 5 minutes after they got it. Completely unacceptable in more ways than one. On top of that, every team in the first period of that event was at a disadvantage, getting only 30 minutes rather than 50 because of issues.

It is simply my opinion that an invitational that is run smoothly and simply is the best. All these small irksome issues pile up and really create an uncomfortable invitational. Having attended PUSO and SOUP, I felt each was better run than MIT - especially PUSO. It's simple, is exactly what it says it is, and we actually got a homeroom that was a real "room" with a men's bathroom on the same floor. SOUP has a slightly confusing campus layout, but not as bad as MIT's. Even so, they had volunteers out in the cold directing people and answering questions to keep competitors in the right place.

I appreciate that MIT tried to expand its team list and numbers, but inevitably there comes a point where it is infeasible, and that's what happened here. I hope each following invitational can read the feedback and apply it so the same mistakes do not happen again. Many teams travel long distances and put up with pretty bad complications to compete, and it's just very disheartening to see mistakes that are avoidable. Congratulations to our Mason 2.0 from MIT this year and to everyone who competed!

*This simply represents my own opinion, given my experience and thoughts*
I noticed many events had delays in device testing. There seems to have been a general shortage of volunteers this year, which may have been part of it - for example, I heard that Mission ran with something like half the expected staff, and I ran with 2 people, one of whom arrived halfway into the second session and left immediately after the last one.

I don't think the size is a serious problem. Some builds definitely needed more people and more device testing setups, but grading was not a problem for me, despite what people tell me was an extraordinarily long and difficult test. I had an assistant grade the multiple choice (85 questions at a point each) while I graded the rest of the test (195 points, divided into thematically appropriate chunks), and we had no problems at all with grading. We finished grading each session's tests during the next one, with time to spare and at a downright leisurely pace (at least, leisurely by my standards). I finished my final grading by 4:30 and left for scoring with a fully tidied-up room by 4:45. I'd like to think my grading is accurate, and I was particularly careful with summations because those are such dangerous errors. Honestly, I probably could have graded the entire test myself and still been done in time to make it to awards by the official start time.

The choice of rooms is more like 25% the organizers and 75% what the administration will allow. This is likely the reason I was in the same room as Disease - I assume they had something else in mind but ran into problems with the administration or another organization that caused them to lose a room.

I don't find MIT confusing, but I'm generally good at navigation and maps, so I can't really comment on that.
I believe the primary reason for the shortage of volunteers was that 25% or so of people weren't on campus and volunteer recruitment started kind of late. I honestly thought I would be in trouble needing more volunteers for Mission, but I overestimated how many I needed and was actually fine with the 4-5 that I had.
I think some of the problems with builds taking so long are mostly the ESes' fault. Many of the build event supervisors were first-time supervisors at MIT and although we had experience supervising and almost all of us are national medalists, supervising for 76 teams is something that is hard to even imagine until you have done it. I think the best way to improve that would be to have ESes submit "Supervising plans" that outline exactly how many volunteers they need and what they will be doing so that the planning committee can look it over and make sure it seems reasonable. I think that could help a lot with grading as well, but it may still be difficult for some events like Forensics that involve long essays.

As for rooms, Unome is right. We have some say, but not a ton.

Re: Science Olympiad at MIT Invitational 2019

Posted: January 13th, 2019, 8:25 pm
by nicholasmaurer
windu34 wrote: Many of the build event supervisors were first-time supervisors at MIT and although we had experience supervising and almost all of us are national medalists, supervising for 76 teams is something that is hard to even imagine until you have done it.
This is an important point. For test or lab events, having alumni with national experience is often the best way to ensure quality exams. MIT has excelled at recruiting these individuals. For build events, I would argue that past supervising experience at large tournaments is far more important than being an alumnus. Experience as a competitor doesn't translate as directly into quality supervising for these events. This is exactly why we focused on recruiting national and state event supervisors for these events at the Solon HS Invitational this year.

Re: Science Olympiad at MIT Invitational 2019

Posted: January 13th, 2019, 9:00 pm
by ptabraham_nerd01
Hey guys. Went to the MIT Invitational this past weekend and here are my event reviews:

Anatomy (12): The test was really difficult from a content perspective (I honestly didn't recognize many words on the test, but that may be because I only studied for about a week), but I felt the difficulty was appropriate for MIT. The primary issue was that we were unable to divide the test because there were only 13 copies printed, which made completing the test way more difficult. Also, the heart model in the stations portion wasn't difficult because of content, but because finding small numbers on the model was time-consuming (my partner and I didn't get close to finishing that station). Maybe diagrams should have been used instead, or fewer questions should have been asked about it. Reflecting on the multiple choice, the questions were well written, but the directions for the 2-point problems should have been better articulated before the test began. 6/10

Disease Detectives (25): I didn't study much for this event, so I was expecting the test to seem extremely difficult. Indeed, it was. However, the difficulty arose from novel types of questions (e.g., ELISA, specificity/sensitivity problems) that I thought were appropriate for a Disease test. 9/10

Experimental Design (13): A normal experiment. 9/10

Fossils (22): For this event, I depended completely on my partner's knowledge and binder. Based on what I saw, the questions were well written and covered the content well, and there were real samples to interact with (always a +). 9/10

Re: Science Olympiad at MIT Invitational 2019

Posted: January 13th, 2019, 9:02 pm
by dragonfly
TheSquaad wrote:MIT this year was one of the best competitions I’ve ever been to in terms of content. The build testing rigs/facilities were great. The tests I took were a challenge unlike anything I’d seen. Overall a great tournament.

Except for one major issue. I had 3 scheduled build events, and I wasn't able to test any of them in the block I scheduled. The build testing facilities (Mission Possible tables, boomi rig, sounds room) could accommodate far too few people. For example, boomi testing normally takes ~6 minutes per team, and MIT has six 60-minute blocks. But there are 70 teams at MIT, and they had 1 boomi testing rig. It doesn't add up.

Build tests were constantly backed up; my mission test was pushed into my boomi block, which forced me to test it after block six, and my sounds build test also ran after its scheduled block 6.

Each of the actual testing facilities was great, but MIT needs more of them if they want to stay this big.
Congrats to the teams from MIT's invite Saturday!! Despite this being my third year as MIT's balsa ES, nothing quite compared to the insanity of running this year's Boom event. With 76 teams, 1 functioning (but messy, sand-spraying) rig, and belated volunteer recruitment, you're right, the minutes didn't compute. We ended up racing through testing the entire day, stayed open an extra two hours past the end of the last event slot just to get everyone in, and *still* had folks backed up in lines basically all day. I'm sorry I didn't have much chance to talk with every team or give them the proper time they deserved, but thanks to you all for being as patient and understanding as possible. Despite the drawbacks, MIT still runs one of the best invitationals around and has some of the best talent there is. Hopefully next year we'll learn from this year's mistakes and keep up the caliber of competition we always hoped for as competitors ourselves.
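To put rough numbers on why the minutes didn't compute - a back-of-the-envelope sketch, assuming the ~6 minutes per team and six 60-minute blocks quoted above, plus this year's 76 teams and a single rig:

\[
\begin{aligned}
\text{testing time needed} &= 76 \text{ teams} \times 6 \text{ min/team} = 456 \text{ min},\\
\text{rig time available} &= 1 \text{ rig} \times 6 \text{ blocks} \times 60 \text{ min/block} = 360 \text{ min}.
\end{aligned}
\]

Even with zero turnaround time between teams, a single rig comes out roughly 96 minutes short, which lines up with testing spilling about two extra hours past the last block; two rigs (or shorter per-team slots) would have been needed to fit everyone into the scheduled blocks.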

Re: Science Olympiad at MIT Invitational 2019

Posted: January 13th, 2019, 9:47 pm
by Adi1008
poonicle wrote:Disease Detectives - the test was unlike any other Disease test I've taken before, and quite difficult as well. I'm impressed with how the test writer managed to ask questions on material that doesn't show up on other tests at all. Scores were pretty low for a Disease test. Coming out of this event, my partner said something along the lines of, "I think everyone finished this test, but I don't know whether any of it is correct." I definitely did not feel good about this test walking out of it.
One of my friends was the Disease Detectives ES and I know he put an insane amount of effort into writing the exam and asking good conceptual questions. Given his talent for Disease Detectives (complete with both MIT and national medals!) and his experience, I think you'd be hard-pressed to find someone more capable for the job!
Name wrote:Astro was well run, and very difficult. Unfortunately, since I took the DSO questions, I can't really speak for the math questions (which are probably my favorite), but this was a very well-written test. We didn't even get close to finishing, and just from glancing at the last page, those questions looked ridiculous. 10/10 well done
antoine_ego wrote:Shoutout to adi1008 who wrote what was almost certainly the most difficult astronomy test I've ever taken
CrayolaCrayon wrote:That Astro test was amazingly written.
Thank you all for the kind words! I spent months researching and writing my questions, so I'm happy that they were well received by the competitors. I didn't write this test alone; Donna Young (the national event supervisor) wrote Section A and dkarkada wrote Section B, while I only wrote Section C. We are currently working on an expanded answer key with more in-depth explanations, along with analyzing some interesting data from our feedback survey of the competing teams. We'll post all of this on scioly.org in a few days, so stay tuned if you are interested!

Lastly, a big thank you to everyone at MIT Science Olympiad who made my experience truly unforgettable; I loved every moment of it, from helping with the preparation the day before to the thrill of watching the awards ceremony. MIT is not perfect, as others in this thread have pointed out, but it is still amazing. I'm immensely grateful to have gotten the chance to experience this tournament as an event supervisor this weekend and to meet so many thoughtful and amazingly smart people who share the same love for Science Olympiad.
jaah5211 wrote:
antoine_ego wrote:Will MIT watermark their tests this year?
Hopefully not...
Galahad wrote:Anybody know when they'll release the tests?
MIT is planning on releasing all exam materials publicly. It'll probably be after Wednesday, January 16th, since some event supervisors are fixing typos in their documents, making expanded answer keys with more details for competitors, etc. before they're released to the public.

Re: Science Olympiad at MIT Invitational 2019

Posted: January 13th, 2019, 10:05 pm
by Raleway
Adi1008 wrote: MIT is planning on releasing all exam materials publicly. It'll probably be after Wednesday, January 16th, since some event supervisors are fixing typos in their documents, making expanded answer keys with more details for competitors, etc. before they're released to the public.
WOAH THIS IS HUGE!!! I've always been a huge fan of publicly released tests - maybe we can prove Nash equilibria incorrect! Disregarding the nerd that just came out of me, this is truly a step in the right direction. Also, despite my dislike of some aspects of MIT, I strongly applaud their decision to come around on this. So many of us appreciated that PUSO (Princeton) was the first to truly open up their tests and keys, and it's great to know more will follow in their footsteps. SOUP, GG, and all the other invitationals, please take note! The whole Scioly community would truly appreciate it. Bravo to MIT!

Re: Science Olympiad at MIT Invitational 2019

Posted: January 13th, 2019, 11:54 pm
by pikachu4919
Raleway wrote:Gonna be a spicy hot take here;

This being my last year competing (and therefore my last MIT Invitational), and having competed at many of the other top competitions such as Princeton and SOUP, I strongly disliked MIT. I competed here last year and also disliked it, but this year felt much worse. I competed in the last block of Sounds of Music, and I waited 50 minutes in line just to do my 5-minute build portion. Having to stand in the hallway with your instrument while people walk by with everything going on is terrible. Luckily, my other build portions went early in the day and finished right on time - big props to the Wright Stuff event supervisors, who I know are great at what they do based on personal experience. That said, I definitely feel the flying arena was poor and created inherent disadvantages, but that's nothing against the event supervisors, since the choice of arena is fully on the MIT organizers.

It's not just that: knowing that my Fermi exam was scored incorrectly, as well as my Codebusters exam (after only glancing through it for about 30 minutes on our long bus ride home), really irks me. One was in my favor, but I strongly dislike any error in grading, even understanding the time crunch. My team also mentioned how in Thermodynamics two teams were even handed the answer key... and those two teams mentioned it only 5 minutes after they got it. Completely unacceptable in more ways than one. On top of that, every team in the first period of that event was at a disadvantage, getting only 30 minutes rather than 50 because of issues.

It is simply my opinion that an invitational that is run smoothly and simply is the best. All these small irksome issues pile up and really create an uncomfortable invitational. Having attended PUSO and SOUP, I felt each was better run than MIT - especially PUSO. It's simple, is exactly what it says it is, and we actually got a homeroom that was a real "room" with a men's bathroom on the same floor. SOUP has a slightly confusing campus layout, but not as bad as MIT's. Even so, they had volunteers out in the cold directing people and answering questions to keep competitors in the right place.

I appreciate that MIT tried to expand its team list and numbers, but inevitably there comes a point where it is infeasible, and that's what happened here. I hope each following invitational can read the feedback and apply it so the same mistakes do not happen again. Many teams travel long distances and put up with pretty bad complications to compete, and it's just very disheartening to see mistakes that are avoidable. Congratulations to our Mason 2.0 from MIT this year and to everyone who competed!

*This simply represents my own opinion, given my experience and thoughts*
Long rant (emphasis on the fact that it is a 3 a.m. rant) coming, brace yourselves.

I'll just put my (more than) two cents in here as one of the supervisors. Sure, the constructive criticism helps, but there are so many things that, as a competitor, you don't see why they are the way they are until you actually become an event supervisor or a tournament director yourself. Some of the biggest of those involve scoring, rooms, and manpower, and many of the things competitors complain about are not necessarily completely in tournament executives' control.

First, volunteers. Getting them is hard. I know because I'm in charge of volunteer management for my university's own regional tournament. I literally blast out our volunteer form link EVERYWHERE (my own social media, our Facebook page, multiple Facebook groups, multiple GroupMe chats, etc.), and I even bother our people to broadcast the form as much as they can, multiple times, and I'm still not sure I've gotten as many personnel as I would ideally have wanted. But I do know why - when you hit college, it can be tough to give up that much of your time. I remember being severely burnt out from doing so much tournament traveling last year; some of my grades took a slight hit from the overall exhaustion and from constantly working on nothing but SciOly in whatever free time I could squeeze out between classes, homework, projects, and other things college throws at ya. While general volunteers aren't always involved to this extent, many of them still prioritize other things over helping out, even with bribes of free food and free T-shirts (which are decently effective), which is entirely understandable because we are students first. I would say that getting volunteers is a bit easier for us than for MIT, since our tournament is during the school year when people are here, while MIT's is during IAP when school isn't in session, which makes it harder to recruit since many students aren't on campus at the time, including members of the planning committee. But if they didn't hold it then, there's no way it would be possible to host the tournament in the first place. The fact that they can get as many volunteers as they do, even ones who have never done SciOly and are just interested in helping out high schoolers passionate about science, is impressive in itself. And again, being understaffed sucks, and on top of that, we supervisors and our volunteers are not perfect human beings - we may make mistakes, and if so, we recognize the point you brought up as a valid concern and are genuinely sorry for that. But honestly, we really sacrificed a lot to be there for you, and it does hurt us supervisors a little to hear words that harsh (even though I know you didn't take the event I was running), given all the effort we and the planning committee put in.

A second point about volunteers - sometimes they do questionable stuff without ES permission. I know what it's like to be "one of those" annoying general volunteers who tries too hard to overcorrect mistakes that ESes make, because sometimes I do take on that archetype (mostly ONLY if the event is REALLY being screwed up, and I always make sure to ask for ES permission before grading something in a way that is technically right but not in the key). But this weekend I definitely experienced what it's like to have those kinds of volunteers, especially ones that have a bit of an ego clash with the supervisors. Some of our general volunteers this past weekend seemed to think they could grade how they saw fit and not according to the way we structured the key, and my co-supervisor and I were quite annoyed that they went rogue on us, especially since it would contribute to inconsistent grading - we ended up regrading everything we told them not to grade but they graded anyway, which put us behind on scoring (historically I'm always one of the last people to finish grading, but this was a whole new situation). Sometimes volunteers really do help, but in some situations they can hinder grading, which leads to panic, and that's never good. We're all human, and all humans make mistakes.

A point about rooms: sometimes it sucks, depending on the host institution's policies - that's by far the biggest factor in getting rooms for any university-hosted tournament. As a tournament executive at my own university's regional, I can say that many of our room choices this year were not ideal at all, but some recent drastic changes to campus policies on which rooms student organizations are even allowed to reserve really boxed us in. In more cases than you'd think, it can be hard for students, especially undergraduates, to protest these kinds of policy changes. Sometimes we're basically powerless against these circumstances, depending on the strictness of the school's administrators. These things happen.

I want to reemphasize: we always tell people to keep in mind that EVERYONE (barring maybe a handful of exceptions every now and then) on tournament staff for university-run competitions is a volunteer. No one is getting paid for their service (unless you count travel reimbursement, which only nonlocal ESes like myself get, as "being paid"). As hard as you prepare to take our tests, and as rightfully frustrated as you are if we make grading errors, you also have to think about how much time we spent tirelessly writing your tests and preparing the materials to run them (COUGH COUGH FORENSICS), basically for free, and then taking a whole weekend out of our busy college lives to be there for you. Many of the tournament staff become exhausted in the lead-up making sure everything's in order, and then tournament day itself is a whole other animal to deal with. It's crazy and, as I've realized from having served on all sides of Division D (as a general volunteer, an event supervisor, and a tournament executive), something I probably never could have seen as a competitor back in those days. While having been competitors ourselves does give us insight into how to improve the running of tournaments once we reach that point, being on the other side definitely makes you realize that as much work as competitors put in to prepare for tournaments, a few more orders of magnitude's worth of effort goes into organizing it all. No tournament can be run perfectly. It's just impossible. For the scale that MIT ran at this year, I can say it was one of the best tournaments I've ever been a part of internally - one that was definitely run much better than several national tournaments, one definitely worth praise, and one definitely worth returning to.

These are my personal opinions and may not necessarily reflect the views of MIT Science Olympiad.

Re: Science Olympiad at MIT Invitational 2019

Posted: January 14th, 2019, 12:07 am
by dkarkada
Thank you everyone for your compliments on the Astronomy exam! And big thanks to Adi1008 for writing those awesome questions and helping out so much before and during the tournament - definitely couldn't have done it alone.

There's no doubt about it, this exam was much harder (and much longer) than in previous years. I underestimated the difficulty of the exam, which led to many teams feeling frustrated by it, so I apologize for that.
Adi1008 wrote:We'll post all of this in a few days on scioly.org, so stay tuned if you are interested!
I couldn't wait that long lol so I've gone ahead and uploaded the exam/key here. I've also written a longer reflection, and posted some basic statistics. I'll update the site as the week goes on and there's more stuff to post.

Most importantly -- I thought it would be a good idea to write up a much more thorough solution guide for the questions I wrote (section B). I basically walk through how you should think about the problems, what I would do, and also point out some references for further reading. I think Adi1008 will be writing something similar for his section, and I'll update the document as soon as it's ready. For now, the section B walkthrough is also posted at the link above. We hope that you'll find it helpful! As always, let us know if you have any questions. :)

Thank you to everyone who came and competed, and thanks to MIT for having us!

Re: Science Olympiad at MIT Invitational 2019

Posted: January 14th, 2019, 4:26 am
by Unome
windu34 wrote:I think the best way to improve that would be to have ESes submit "Supervising plans" that outline exactly how many volunteers they need and what they will be doing so that the planning committee can look it over and make sure it seems reasonable.
This sounds nice; I actually tried this for a bit at Chattahoochee's invitational when recruiting ESes from our own team (it didn't work too well there because the tournament was so small).
nicholasmaurer wrote:
windu34 wrote: Many of the build event supervisors were first-time supervisors at MIT and although we had experience supervising and almost all of us are national medalists, supervising for 76 teams is something that is hard to even imagine until you have done it.
This is an important point. For test or lab events, having alumni with national experience is often the best way to ensure quality exams. MIT has excelled at recruiting these individuals. For build events, I would argue that past supervising experience at large tournaments is far more important than being an alumnus. Experience as a competitor doesn't translate as directly into quality supervising for these events. This is exactly why we focused on recruiting national and state event supervisors for these events at the Solon HS Invitational this year.
I'll second this. I ran builds a few times earlier this year, and there's no substitute for experience.