I had the pleasure of supervising ED at the MIT invite this last weekend, so thought I'd give a few pieces of unsolicited advice now that I've read ~70 labs from this year (including from a bunch of top teams):
- The Statement of Problem should, generally, take the form of "How does [IV] affect [DV]?" It makes things so much easier on your scorer if it is immediately clear from the very beginning what you tested. If the supervisor has to skip ahead to the procedure or the charts to understand what the lab was about, it's a lot harder to give a good score. Expanding on that statement of problem, either by adding additional detail in words or showing an extra diagram (if it's less straightforward what you did), can be a worthwhile investment, even though there isn't a point allocated there on the rubric.
- In your hypothesis, don't just say "We hypothesize that as [IV] increases, [DV] will [increase/decrease]." Also describe the shape of the relationship -- as in, follow that up with "We expect this to follow a [direct/indirect] [linear/exponential/power/quadratic/inverse/etc.] relationship."
- Add more precision, especially in the first sections. Next to zero teams got full credit for operationally defining their variables, and similarly for "enough information to repeat procedure". It needs to be entirely unambiguous what you're doing. For instance, variable definition shouldn't just be "the angle of incidence" nor even "the angle of incidence as measured in degrees with a protractor," it should be "the angle of incidence, operationally defined as the angle between the center of a mirror's face and a theoretical straight line to the emitter of a laser pointer as measured in degrees by a protractor with its measurement origin matched over the center of the mirror."
- For your controlled variables, don't just throw in the kitchen sink. List variables you're intentionally not varying for the purpose of isolating the effect you're hoping to see. Using the same-strength laser when you're evaluating what it looks like after refracting through water is clearly important; the air pressure of the room is less so.
- The "all data discussed and interpreted" piece of the analysis section does not, in my mind, mean you should be rewriting the data tables in words. Writing a lot very quickly just to try to snag those points that way makes me think less of the lab overall -- it's better to pick out and discuss the important findings from throughout the experiment (e.g. list the averages of all of the trials in order, or note that the first trial at each level was the highest). The commentary on trends should almost always include an interpretation of the trend line or z-tests you show between treatments (e.g. "a slope of M means that for every unit increase in X, we expect an M change in Y") and a discussion of summary statistics (e.g. "an R^2 of G means that G% of the variation in Y can be explained by this regression on X").
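If it helps to see it concretely, here's a quick Python sketch of the kind of interpretation I'm talking about -- the numbers are made up (a hypothetical angle-of-incidence experiment), but the slope and R^2 sentences at the end are exactly the sort of commentary I want to read in an analysis section:

```python
import numpy as np

# Hypothetical data: angle of incidence (degrees) vs. spot displacement (cm)
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
y = np.array([3.1, 6.0, 9.2, 11.8, 15.1])

# Least-squares trend line: y ~ m*x + b
m, b = np.polyfit(x, y, 1)

# R^2 = 1 - (residual sum of squares) / (total sum of squares)
residuals = y - (m * x + b)
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# These two sentences are the "interpretation" the rubric is after:
print(f"For every 1-degree increase in angle, displacement increases by about {m:.2f} cm.")
print(f"About {100 * r_squared:.1f}% of the variation in displacement is explained by angle.")
```

The point isn't the code -- it's that the slope and R^2 each get translated into a plain-English sentence about your actual variables.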
- For statistics, don't give the overall means/modes -- those are usually meaningless! Give the means across trials for each level.
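To make the distinction concrete, here's a tiny sketch with made-up numbers (three trials at each of three IV levels). The per-level means show the trend; the grand mean is a single number that hides the relationship entirely:

```python
# Hypothetical raw data: IV level -> DV measurements across three trials
data = {
    10: [3.0, 3.2, 3.1],
    20: [5.9, 6.1, 6.0],
    30: [9.1, 9.3, 9.2],
}

# One mean per level -- this is the statistic worth reporting
level_means = {level: sum(trials) / len(trials) for level, trials in data.items()}
print(level_means)

# One overall mean across everything -- usually meaningless for a trend
all_values = [v for trials in data.values() for v in trials]
grand_mean = sum(all_values) / len(all_values)
print(grand_mean)
```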
Of course, continuing to nail the basics and getting creative in your labs are both important, but those were areas where I saw even the top teams fall down. Hope that helps!
Harriton '10, UVA '14
Event Supervisor in MA (prev. VA and NorCal)