Yup, it's a complete headache trying to make sense of all the studies out there and what, overall, they mean. Usually some helpful person does what's called a meta-analysis: they trawl the published scientific literature using well-defined inclusion criteria, combine multiple studies together, adjust as best they can for confounding factors, and then see what the evidence as a whole suggests. Doing this requires a knowledge of statistics I'm not terribly au fait with, but these meta-analyses are usually the most conclusive, because individual studies in isolation rarely give unambiguously conclusive evidence. It's the type of study organisations like the WHO rely on to form their opinions and recommendations.

Obviously, some individual studies are better than others, and when it comes to population-based studies there are a few questions I always ask when reading about them in newspapers or journals. I can't claim this is comprehensive, so I'm sure others can suggest things worth considering.

The best design is a randomised double-blind study, which means patients are divided amongst groups randomly (although the distribution of age and sex should be equal) and neither the patients nor the scientists know who is being given a placebo and who the real treatment. With dietary studies this is quite hard, so they are often done by tracking highly selected groups of people with food diaries, by relying retrospectively on recall, or (in the case of the China study) by assuming dietary conditions based on geographical variation.

So, with studies published in journals I'd look at:

(a) What's the study design - Is it controlled and blinded, or open to selection bias?

(b) Is the control group suitable - Do they vary considerably in age, sex, location or some other factor that could skew the results? Have they selected a 'normal' control population or compared extremes?

(c) How is the data collected - Does it rely on recall or self-reporting?
(d) Is the sample size appropriate - Are they claiming significant results applicable to the entire world based on just a dozen people?

(e) Are the authors open about confounding factors - Do they recognise the limitations (which every study has) themselves?

(f) Are the results actually statistically significant - Is woolly language used to suggest their results are significant when they may not be?

(g) How large is the effect they're actually reporting - A 100% increase in the risk of disease [x] may not mean a great deal if your risk of disease [x] was only 1 in 1,000,000 to start with. Do they use natural frequencies (e.g. increased risk from 1 in 100 to 2 in 100) or rely on obfuscation with percentages?

With anything reported in the media, you're often a few steps away from the original research. A good piece of science reporting should include enough information to cover most of the points above, but I'd add one or two extra points for science reported in the media:

(h) Is the data actually published in a peer-reviewed journal - Or is it fed directly to the media?

(i) Has the work actually been done - Or is it a press release announcing a study which hasn't even started yet? (Some 'studies' have their results announced before they even commence - see the schools fish-oil debacle for an example of this.)

I can't recommend Bad Science, Ben Goldacre's blog, enough for some great discussion on the representation of science by the media. His latest entry is diet-related, so people may get a kick out of reading it: Health Warning: Exercise Makes You Fat Bad Science
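Point (g) above can be made concrete with a short sketch. The numbers here are the hypothetical ones from the example (a 1-in-a-million baseline risk that doubles), not from any real study:

```python
# Hypothetical numbers for point (g): a "100% increased risk" headline
# can describe a vanishingly small absolute change.
baseline_risk = 1 / 1_000_000   # risk in the unexposed group
exposed_risk = 2 / 1_000_000    # risk in the exposed group

relative_increase = (exposed_risk - baseline_risk) / baseline_risk
absolute_increase = exposed_risk - baseline_risk

print(f"Relative risk increase: {relative_increase:.0%}")    # the headline figure
print(f"Absolute risk increase: {absolute_increase:.7f}")    # the figure that matters to you
```

The relative figure is 100%, while the absolute figure is one extra case per million people, which is why natural frequencies ("from 1 in 100 to 2 in 100") are so much harder to obfuscate with.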
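Points (d) and (f) can also be illustrated with a sketch. This uses an ordinary two-proportion z-test on an invented dozen-person trial (all counts are made up for illustration) to show how a dramatic-sounding difference can still fail to reach significance with so few people:

```python
import math

# Invented counts: 4 of 6 improved on treatment vs 2 of 6 on placebo -
# the treatment group's improvement rate is double the placebo group's.
improved_treatment, n_treatment = 4, 6
improved_control, n_control = 2, 6

p1 = improved_treatment / n_treatment
p2 = improved_control / n_control
pooled = (improved_treatment + improved_control) / (n_treatment + n_control)

# Standard two-proportion z-test with a pooled proportion
se = math.sqrt(pooled * (1 - pooled) * (1 / n_treatment + 1 / n_control))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.2f}")
# Despite a doubled improvement rate, p is well above the usual 0.05
# threshold - a dozen people simply can't support a strong claim.
```

A normal-approximation test on groups this small is itself dubious (an exact test would be more appropriate), which rather underlines the point.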