
Re: MIT Invitational 2018

Posted: January 22nd, 2018, 7:46 pm
by shoujolivia
we always love to hear more feedback and answer any questions you guys might have! Here are our email addresses again:
jmg120@duke.edu
royce.lee@yale.edu

If we get the okay from the MIT gods, I will post the raw score distribution.
Wow, thank you!!! I'm really curious about the top raw scores, and I'll definitely send an email after I study for/make up the calc test I missed.

Re: MIT Invitational 2018

Posted: January 22nd, 2018, 7:52 pm
by pikachu4919
Oh yeah, if anyone has questions about or feedback on the Forensics exam, feel free to contact me! PM on scioly.org or email me at the address I provided on the answer sheet (liu1841@purdue.edu)!

(I may also post a raw score distribution if it's OK to do so and if enough people are interested) <-- they already said no :(

Re: MIT Invitational 2018

Posted: January 22nd, 2018, 8:02 pm
by slowpoke
If anyone has questions, feedback, etc. for Materials Science I am also available by PM or at the email on the cover page!

Re: MIT Invitational 2018

Posted: January 22nd, 2018, 8:05 pm
by varunscs11
Finally, a question to anyone who helped write the MIT tests: would anybody else be open to answering questions, like with Rocks? I know that many others on my team and I have questions about material that couldn't be answered by the resources available through our school (such as imaging on Remote).
Yes, anyone interested in getting answers for Rocks and Minerals can email me at varunscs@sas.upenn.edu

Re: MIT Invitational 2018

Posted: January 22nd, 2018, 8:27 pm
by alleycat03
On another note, I've kinda been awaiting some spicy event ratings; would anyone like to start?
Ecology (31): This test was difficult, possibly harder than the test at nationals last year. I think it would have been a lot better as just a straight-up test instead of stations; I didn't really see the need for the stations. Otherwise, the content was solid and difficult.
I'm glad it was a challenging test! I debated the merits of stations vs. a regular test. Ultimately, I decided upon stations for two reasons. First, it limits the amount of paper that needs to be printed (2 copies of 15 stations rather than 70+ copies of an exam). Second, I think the pressure and time crunch of stations really forces people to think quickly and not rely too heavily on their notes.
That definitely makes sense. You’re saving some trees by doing stations ;) The time crunch was difficult, but not impossible. There were definitely a few times where I was like “oh shoot, I have this in the notes but I don’t have time to look for it!” Overall, it was a well-run test. Thank you for working so hard on it!

Re: MIT Invitational 2018

Posted: January 22nd, 2018, 8:29 pm
by alleycat03
Oh yeah, if anyone has questions about or feedback on the Forensics exam, feel free to contact me! PM on scioly.org or email me at the address I provided on the answer sheet!

(I may also post a raw score distribution if it's OK to do so and if enough people are interested)
I would definitely want to see a raw score distribution if it’s allowed! (please, MIT gods, please)

Re: MIT Invitational 2018

Posted: January 22nd, 2018, 8:31 pm
by JShap
If we get the okay from the MIT gods, I will post the raw score distribution.
Can confirm MIT does not want the raw scores shared.

Re: MIT Invitational 2018

Posted: January 22nd, 2018, 8:57 pm
by jkang
If anyone has any questions about Fermi (I think I did my best explaining everything in the answer key, although the physical copies I was given in my box were incorrect, so you might have gotten incorrect ones), feel free to send me an email at justin.kang23@utexas.edu.

Re: MIT Invitational 2018

Posted: January 22nd, 2018, 9:30 pm
by windu34
At what point does a test become too long? (paraphrased from Unome)
As a supervisor trying to write a challenging test that can separate not only the top teams but also the lower teams, I find this "middle ground" extremely difficult to hit. At some point you just keep adding more and more questions, both easy and hard, in the hope that having more points available to teams will spread out the distribution. I think many new-era event supervisors who dominated at the national level as competitors are going to struggle with separating the lower teams, because what they mostly remember is the frustration of not being able to separate themselves from other top-tier teams when given a test that was too short or too easy.

IMO, the top 5-10% of teams should be able to score above the 66th percentile (maybe even the 75th). I think the goal as far as test length goes is to have only the very top team able to finish in time, because that roughly means that someone with a full understanding of the material could theoretically get 100%. That is way too difficult to gauge, though, so I think it is safe to aim for the top team being able to finish ~80-90% of the test. It is certainly easier to separate the top teams than the bottom teams, and I don't currently have any idea what general statistic I would want for the bottom tier (hopefully that will change after I analyze raw scores from the test I will be administering at Princeton).

Just kinda making these statistics up based on what I aimed for when writing my test. Not actually sure if this is correct or realistic. I guess I'll find out :P
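
As a rough illustration of the kind of targets described above, here is a minimal sketch (not from the original post) of how one might sanity-check a raw score distribution against them. The score list and the 200-point maximum are made-up numbers, and I'm reading "above the 66th percentile" as roughly two-thirds of the available points, which is an assumption on my part.

```python
# Illustrative sketch only: hypothetical raw scores and point total,
# used to check how many teams clear ~66% / ~75% of the available points
# and how much spread exists in the top vs. bottom half of the field.
import statistics

max_points = 200  # hypothetical total points on the exam
raw_scores = [38, 52, 61, 70, 74, 88, 95, 103, 110, 118,
              126, 131, 140, 152, 167, 181]  # hypothetical team raw scores


def fraction_above(scores, cutoff_fraction, max_points):
    """Fraction of teams scoring above a given fraction of the available points."""
    cutoff = cutoff_fraction * max_points
    return sum(s > cutoff for s in scores) / len(scores)


print("above 66% of points:", fraction_above(raw_scores, 0.66, max_points))
print("above 75% of points:", fraction_above(raw_scores, 0.75, max_points))

# A quick look at separation: standard deviation within the bottom and top halves.
ordered = sorted(raw_scores)
mid = len(ordered) // 2
print("bottom-half stdev:", round(statistics.stdev(ordered[:mid]), 1))
print("top-half stdev:", round(statistics.stdev(ordered[mid:]), 1))
```

With numbers like these you can see at a glance whether only a handful of teams are clearing the upper cutoffs and whether the lower half of the field is bunched together, which is the separation problem the post is describing.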

Re: MIT Invitational 2018

Posted: January 22nd, 2018, 9:47 pm
by Wallytowers
While it's great that they are trying to protect the security of the exams, teams with upcoming competitions are now left wasting valuable days of preparation time without the exams to look through. There has to be a better way to handle exam distribution. If they knew this would be the distribution method, the exams should have been prepared in advance for timely release. Will definitely reconsider attending next year, because exam distribution is ALWAYS a problem.