It’s been a while since I’ve been to a conference, but this week I was at the Society for Integrative and Comparative Biology meeting in Portland. Here are some things I noticed while looking at the posters.
Fabric posters are still a minority, but I think you can always count on seeing a few. I finally saw a fabric poster made by Spoonflower. I’ve blogged about this service, but hadn’t seen one “in the wild,” so to speak. The presenter was generally happy with how it looked, although they were putting in quite a bit of effort to make it hang right. It is a very stretchy fabric, almost like spandex, so it tends to sag. If you are going to have a fabric poster, remember to iron it before bringing it to the session.
I ran across multiple posters that tried to say something about differences that were not statistically significant. I read text like, “The experimental group was slightly higher than the control (p = 0.07).” No! If the difference is not significant, saying anything more about the relative values of the averages is meaningless. If the difference is not statistically significant, you are saying the difference is due to chance, which means that the difference you are describing could just as easily have been in the opposite direction.
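To make that concrete, here is a minimal simulation sketch in Python (my illustration, nothing from the posters themselves): both groups are drawn from the same distribution, so every observed difference is noise, and among the non-significant comparisons the “experimental” mean lands above the control only about half the time.

```python
# Minimal sketch (hypothetical, not from any poster): simulate many
# experiments where the experimental and control groups come from the
# SAME distribution, then ask how often the non-significant differences
# point in each direction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments, n_per_group = 1000, 20
nonsig_total = nonsig_higher = 0

for _ in range(n_experiments):
    control = rng.normal(loc=10.0, scale=2.0, size=n_per_group)
    experimental = rng.normal(loc=10.0, scale=2.0, size=n_per_group)
    _, p = stats.ttest_ind(experimental, control)
    if p >= 0.05:  # not significant at the usual threshold
        nonsig_total += 1
        if experimental.mean() > control.mean():
            nonsig_higher += 1

# With no real effect, the direction of a non-significant difference
# is essentially a coin flip:
print(f"'Experimental' higher in {nonsig_higher}/{nonsig_total} "
      "non-significant runs")
```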
I referred multiple people to this blog post, “Still not significant.”
Too many titles were hard to read from a distance. The poster sessions are busy, with a lot of browsers, so your title should be visible from the moon.
I bugged many presenters about their error bars. Most posters I saw had at least one bar graph with error bars, and about 80-90% of those had no indication anywhere on the poster of whether the bars were standard deviation, standard error, or something else. This matters a lot for interpretation.
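As a quick illustration (hypothetical data, not from any poster at the meeting), the sketch below plots the same sample twice, once with standard deviation bars and once with standard error bars. The SEM is the SD divided by √n, so with n = 30 the second set of bars is more than five times shorter, and an unlabeled graph can look far more precise than the data warrant.

```python
# Hypothetical illustration: the same data shown with standard deviation
# bars vs. standard error bars. Without a label, a reader cannot tell
# which one they are looking at.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=30)

mean = data.mean()
sd = data.std(ddof=1)              # sample standard deviation
sem = sd / np.sqrt(len(data))      # standard error of the mean

fig, ax = plt.subplots()
ax.bar([0, 1], [mean, mean], yerr=[sd, sem], capsize=8,
       tick_label=["mean ± SD", "mean ± SEM"])
ax.set_ylabel("Measurement (arbitrary units)")
ax.set_title("Same data, different error bars")
fig.savefig("error_bars.png")
```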
Update, 8 January 2016: My efforts to make a graphic for this post backfired. I’m leaving the image here, but several people busted me on an insufficiently nuanced quote about p-values. I’ll point to this blog post from Scientist Sees Squirrel for further discussion.
While the image here could be better, I think the larger point still stands: if your model says your results are probably due to chance (however you set that model up), it doesn’t make sense to describe one experimental condition as larger or smaller than another.