R/S Crime Busters: 1/2 Disease Detectives: 1/5 Hovercraft (both builds failed, rip): 11/9
GGSO/R/S Disease Detectives: 6/3/10 Circuit Lab: 11/4/- Sounds: 3/2/1 Protein: -/-/6
Is it possible that calculus will be in this event? Neither my partner nor I have taken precalc yet (not offered at our school for our grade level), but we both know basic trig functions and basic logarithms.

If I recall correctly, MIT had one instance of minor calculus (a single trig function). Considering that MIT is MIT and they often go overboard in difficulty, I wouldn't worry about calculus too much.

Don't worry about calculus.
The rule clarification on 10/11 changing "best" to "average" opened up a whole host of issues. First, it is not specified how the average is calculated, i.e., how frequently the app should be set to take a reading, and whether 0 Hz readings should be included in the data set.

Second, the official event supervisor guide on soinc.org refers to two suggested apps: the Accord Chromatic Tuner (Android only) and Google Science Journal. From our team's experience, the Accord tuner only includes sounds made by the instrument in the average, not background noise or silence between "attacks" on the instrument. Google Science Journal picks up all background noise and silence, and in doing so distorts the average to the point that the same instrument can get a perfect pitch score with the Accord Chromatic Tuner and a pitch score of 0 with Google Science Journal. For example, with Google Science Journal measuring 10x per second and a C4 note (262 Hz) that plays for 4.8 s with 0.2 s of silence, the two zero readings drag the average down to about 251.5 Hz, a difference of roughly 70 cents and a pitch score of 0.

It seems unlikely that the switch from "best" to "average" was made to over-emphasize playing sound over the entire 5 s interval, although that seems to be a major issue now.

Has anyone been to any invitationals and can speak to how the average was calculated and what app was used?

It's more a fear of inconsistency between different tournaments. Do you change the instrument so that 0 Hz isn't part of the average, or do you find a fix to the problem so that all tournaments are the same? To me, having 0 Hz as part of the average doesn't seem fair, because it also depends on how well an ES's microphone picks up the frequency. There are just a lot of factors that should be figured out and mentioned officially by Science Olympiad.

My bad, I thought you were asking from a competitor's point of view. If you're just trying to calibrate your own build, then doing the crop thing works. From a proctor's point of view, I don't think there's anything they can really do short of taking the average over a time less than 5 seconds, which is clearly against the rules. No app will just magically not record 0 or strange overtones; it'll be up to students to build a device that does its best to avoid this by sustaining the note for the entire 5 seconds, without having to repeat it, while keeping it at a volume the app will register consistently.

The rules state that "The pitch measurement will be the average value during the 5 seconds." To me that means the average will be taken from the entirety of the 5 seconds, not from what looks like the average fundamental.

MIT used Sci Journal. I'd just prepare for the worst and use that as your tuning app.
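Under the "0 Hz counts" reading of the rule, that example is easy to check directly. A minimal sketch (the 1200·log2 cents formula is the standard one; the exact cents cutoff for a zero pitch score is not specified in this thread, so treating ~70 cents as "scores 0" is an assumption):

```python
import math

def cents_off(measured_hz: float, target_hz: float) -> float:
    """Distance between two pitches in cents (100 cents = 1 semitone)."""
    return abs(1200 * math.log2(measured_hz / target_hz))

# Simulate an app sampling 10x per second for 5 seconds:
# a C4 (262 Hz) sustained for 4.8 s, with 0.2 s of silence read as 0 Hz.
readings = [262.0] * 48 + [0.0] * 2

avg_with_silence = sum(readings) / len(readings)             # 251.52 Hz
avg_without_zeros = sum(r for r in readings if r > 0) / 48   # 262.0 Hz

print(cents_off(avg_with_silence, 262.0))   # ~70.7 cents off
print(cents_off(avg_without_zeros, 262.0))  # 0.0 cents off
```

The point of the sketch is that the instrument's pitch never changed; only the averaging convention did, which is exactly the inconsistency between the two suggested apps.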
Do any of you have suggestions about the binder? This is my first year doing a binder event, so how do I make sure my binder has everything I need without printing every single thing?

I'd suggest printing everything you could conceivably need to know, as long as it's still organized into different sections and doesn't just repeat the same thing. After some time using the massive binder, you can start to take stuff out, since a lot of it will eventually be memorized. Then, for the last binder you make, type as much of it as you can (in your own words) so that you memorize even more and it makes sense to you.
The UNSW website is pretty long.
I feel like this issue would be way less pronounced if they changed "average" to "mode." I'm guessing they originally changed it from "best" to make sure the instrument is primarily hitting the desired pitch, and "mode" fixes that while also not having background noise or overtones screw up the readings.

You can't really use mode with continuous data, because you'd have to round it so that there are pitches with the same frequency, which might not be preferable.
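The rounding objection can be made concrete. With hypothetical tuner readings (these numbers are made up for illustration), raw continuous values almost never repeat, so a "mode" only becomes meaningful after binning:

```python
from collections import Counter

# Hypothetical raw tuner readings for a C4 (~262 Hz): continuous data
# jitters, so no two raw readings are likely to repeat exactly.
raw = [261.93, 262.41, 261.87, 262.08, 0.0, 262.15, 261.96, 262.33]

# Mode of the raw floats: every value appears exactly once, so the
# "most common" reading is arbitrary.
print(Counter(raw).most_common(1))

# Rounding (binning) to the nearest Hz creates ties the mode can latch
# onto, and it happens to ignore an isolated 0 Hz dropout.
binned = [round(r) for r in raw]
print(Counter(binned).most_common(1))  # [(262, 7)]
```

Whether to bin to the nearest Hz, the nearest cent, or something else would itself need to be specified in the rules, which is the "might not be preferable" part.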