
The Grad Student Who Never Said "No"

11/21/2016


 

     Here's What We're Doing -- See blog "Statistical Heartburn and Long-term Lessons"

Addendum I

 
Good discussion on this post. Here are two key clarifications, about data analysis and about the stressed-out workloads of post-docs.
 
P-hacking and MTurk-iterating aren't helpful to science, and they're one of the reasons our lab seldom cites online studies. However, p-hacking shouldn't be confused with deep data dives – with figuring out why our results don't look as perfect as we want.
 
With field studies, hypotheses usually don't "come out" on the first data run. But instead of dropping the study, a person contributes more to science by figuring out when the hypo worked and when it didn't. This is Plan B. Perhaps your hypo worked during lunches but not dinners, or with small groups but not large groups. You don't change your hypothesis; you map out where it held and where it didn't. Cool data contains cool discoveries. If a pilot study didn't precede the field study, a lab study can follow -- either we do it or someone else does.
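Concretely, a subgroup re-analysis like this might look as follows in Python; the DataFrame and its columns ("meal", "price_paid", "slices_eaten") are hypothetical stand-ins rather than our actual variables, and any p-value surfaced this way is exploratory until it is corrected and confirmed.

```python
# Minimal sketch of a "Plan B" subgroup re-analysis, using hypothetical
# column names ("meal", "price_paid", "slices_eaten") rather than any
# real study variables.
import pandas as pd
from scipy import stats

def test_by_subgroup(df: pd.DataFrame, group_col: str) -> dict:
    """Re-run the half-price vs. full-price comparison within each subgroup."""
    results = {}
    for level, sub in df.groupby(group_col):
        half = sub.loc[sub["price_paid"] == "half", "slices_eaten"]
        full = sub.loc[sub["price_paid"] == "full", "slices_eaten"]
        results[level] = stats.ttest_ind(half, full, equal_var=False).pvalue
    return results

# e.g. test_by_subgroup(buffet_df, "meal") might return
# {"lunch": 0.03, "dinner": 0.41} -- but every extra subgroup tested
# inflates the false-positive rate, so exploratory p-values like these
# need correction and a confirmatory follow-up study.
```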
 
About post-doc workloads: academia is impatient for publications. It's the reason most professors don't get tenure at their first school (I didn't get it until my 3rd school). For post-docs, publishing is make-or-break – it determines whether they stay in academia or struggle to stay in it. Metaphorically, if they can't publish enough to push past the academic gravitational pull as a post-doc, they'll have to unfairly fight gravity until they find the right fit. Some post-docs are willing to make huge sacrifices for productivity because they think it's probably their last chance. For many others, these sacrifices aren't worth it.
 
What follows is a tale of two young researchers.

 
------------------
 

Addendum II

There’s been some good discussion about this post, and some useful points of clarification and correction will be made to these papers. All of the editors were contacted when we learned of some of the inconsistencies, and a non-coauthor stats pro is redoing the analyses. We’ll publish any changes as errata (and we’ll have an analysis script). This will also give us a chance to fix oversights such as not cross-citing the papers (they all came out within a year of each other, which led to that slipping between the cracks. Sorry.)
 
Sharing data can be very useful – as with lab studies and large secondary data sets – and in some journals a willingness to share (or a good reason not to) is a precondition for publishing. When we collected the data for this study, our agreement with the small business and with the IRB was that the data would be labeled as proprietary and would not be shared, because it contained data sensitive to the small-town company (sales data and traffic data) and data sensitive to the small-town customers (names, identifying characteristics, how many drinks they had, the names of the people they were sitting with, and so on). This is data that cannot be depersonalized, since sales, gender, and companions were central to some analyses. (We had explained this when someone requested the data.) At the time we published these papers, none of the journals had a mandatory data-sharing policy, or we would have published these papers elsewhere.

 
Upon learning of these inconsistencies, we contacted the editors of all four journals so we could deal with them swiftly and squarely. We told them that we would have the data reanalyzed, and that we would write addendums.
 
Importantly, this field study was not intended to test specific, pre-registered hypotheses. Instead, it was intended to provide initial answers to some interesting unanswered questions about eating in a real-life restaurant. Does your first bite of a meal influence your attitude more than your last bite? Do you regret eating expensive food less than eating cheap food? Does the person you're eating with influence how much you eat or how much you like the food?
 
A PhD econometrician (not a coauthor) is now reanalyzing the data to confirm or refute the published results, and the reanalyses will be sent to the journal editors. Following this, we will make the addendums, the data-analysis scripts, and the data (if we can sufficiently anonymize it to protect our research subjects) available at a link we will add here.

 
In the end, I think the biggest contribution of bringing this to attention (van der Zee, Anaya, and Brown 2017) will be in improving data collection, analysis, and reporting procedures across many behavioral fields. In our Lab, a rapidly revolving set of researchers, lab and field studies, and ongoing analyses led us to be sloppier in the reporting of some studies (such as these) than we should have been. This past Thursday we met to start developing new standard operating procedures (SOPs) that tighten up field-study data collection (e.g., registering on trials.gov), analysis (e.g., saving analysis scripts), reporting (e.g., specifying hypothesis testing vs. exploration), and data sharing (e.g., writing consent forms less absolutely). When we finish these new SOPs (and test and revise them), I hope to publish them (along with implementation tips) as an editorial in a journal so that they can also help other research groups. Again, in the end, the lessons learned here should raise us all to a higher level of efficiency, transparency, and cooperation.



---------------------
 
A PhD student from a Turkish university called to interview to be a visiting scholar for 6 months.  Her dissertation was on a topic that was only indirectly related to our Lab's mission, but she really wanted to come and we had the room, so I said "Yes."

When she arrived, I gave her a data set of a self-funded, failed study which had null results (it was a one month study in an all-you-can-eat Italian restaurant buffet where we had charged some people ½ as much as others).  I said, "This cost us a lot of time and our own money to collect.  There's got to be something here we can salvage because it's a cool (rich & unique) data set."  I had three ideas for potential Plan B, C, & D directions (since Plan A had failed).  I told her what the analyses should be and what the tables should look like.  I then asked her if she wanted to do them.  

Every day she came back with puzzling new results, and every day we would scratch our heads, ask "Why," and come up with another way to reanalyze the data with yet another set of plausible hypotheses.  Eventually we started discovering solutions that held up regardless of how we pressure-tested them.  I outlined the first paper, and she wrote it up, and every day for a month I told her how to rewrite it and she did.  This happened with a second paper, and then a third paper (which was one that was based on her own discovery while digging through the data).

At about this same time, I had a second data set that I thought was really cool, which I had offered to one of my paid post-docs (again, the woman from Turkey was an unpaid visitor). Just as this post-doc had originally declined to analyze the buffet data because they weren't sure where it would be published, they also declined this second data set. They said it would have been a "side project" for them, and they didn't have the personal time to do it. Boundaries. I get it.

Six months after arriving, the Turkish woman had one paper accepted, two papers with revision requests, and two others that were submitted (and were eventually accepted -- see below). In comparison, the post-doc left after a year (and also left academia) with 1/4 as much published (per month) as the Turkish woman.  I think the person was also resentful of the Turkish woman.

Balance and time management has its place, but sometimes it's best to "Make hay while the sun shines."  

About the third time a mentor hears a person say "No" to a research opportunity, a productive mentor will almost instantly give it to a second researcher -- along with the next opportunity. This second researcher might be less experienced, less well trained, from a lesser school, or from a lesser background, but at least they don't waste time by saying "No" or "I'll think about it." They unhesitatingly say "Yes" -- even if they are not exactly sure how they'll do it.

Facebook, Twitter, Game of Thrones, Starbucks, spinning class . . . time management is tough when there are so many shiny alternatives that are more inviting than writing the background section or doing the analyses for a paper.

Yet most of us will never remember what we read or posted on Twitter or Facebook yesterday.  In the meantime, this Turkish woman's resume will always have the five papers below.


-----
  • Sigirci, Ozge, Marc Rockmore, and Brian Wansink (2016), “How Traumatic Violence Permanently Changes Shopping Behavior,” Frontiers in Psychology, 7:1298. doi: 10.3389/fpsyg.2016.01298.
  • Siğirci, Ozge and Brian Wansink (2015), “Low Prices and High Regret:  How Pricing Influences Regret at All-You-Can-Eat Buffets,” BMC Nutrition, 1:36, 1-5, doi:10.1186/s40795-015-0030-x.
  • Kniffin, Kevin, Ozge Sigirci and Brian Wansink (2015), “Eating Heavily: Men Eat More in the Company of Women,” Evolutionary Psychological Science, 1-9. doi: 10.1007/s40806-015-0035-3.
  • Just, David R., Ozge Siğirci, and Brian Wansink (2015), “Peak-end Pizza: Prices Delay Evaluations of Quality,” Journal of Product & Brand Management, 24:7, 770-778, doi:10.1108/JPBM-01-2015-0802.
  • Just, David R., Ozge Sigirci, and Brian Wansink (2014), “Lower Buffet Prices Lead to Less Taste Satisfaction,” Journal of Sensory Studies, 29:362-370.
67 Comments
Paul Kirschner
12/15/2016 06:06:05 am

Brian - Is this a tongue-in-cheek satire of the academic process or are you serious? I hope it's the former.

Reply
Brian
12/15/2016 03:31:23 pm

Hi Paul,

I meant it as serious, and I hope I didn't misrepresent anything.

This woman did really well while she was here, and it seems to be a pattern I've seen in the past.

It's also a pattern I wish I had followed as a PhD student. If I had been driving lots of projects forward that a more experienced mentor was directing toward the basket, I would have had more in the pipeline and wouldn't have been turned down for tenure twice.

Sincerely,

Brian

Reply
Sasha
12/16/2016 12:59:44 pm

Brian, it seems your arrogance has no bounds. You can't call someone "this woman," "the Turkish woman," etc. You may call her "this lady," "the lady," or, to make it simple, just "this person." Your so-called science means nothing to me. All I care about is basic manners. It's a shame that you are allegedly a famous person in your field.

Robin Kok
12/15/2016 06:47:16 am

You push an unpaid PhD student into salami-slicing null results into 5 p-hacked papers, and you shame a paid postdoc for saying 'no' to doing the same.

Because more worthless, p-hacked publications = obviously better....? The quantity of publications is the key indicator of an academic's value to you?

I really hope this story is a joke. If not, your behaviour is one of the biggest causes of the proliferation of junk science in psychology and you are the one who should be shamed, not the postdoc.

Reply
Brian Wansink
12/15/2016 03:38:32 pm

Hi Robin,

I understand the good points you make. There isn't always a quantity and quality trade-off, but it's just really important to make hay while the sun shines. If a person doesn't want to do one, they need to do the other. Unfair as it is, academia is really impatient.

Sincerely,

Brian

Reply
Clare
12/17/2016 01:15:58 pm

Low quality, high quantity. Got it.

I'm a grad student in a different field working through its own problems. I hope you encourage your students to look through the reaction to this post, and to understand why people have reacted so negatively to the research practices (personal and statistical) presented here.






Clare
12/18/2016 12:27:51 pm

To temper my previous comment a bit - I don't want this to be an attack. There are many students being trained in labs that practice 'p-hacking'. But this training leaves them unprepared for the new standards that are creeping over every scientific field.

What may work now to get published isn't going to work well in the future (see 2006 psychology vs 2016 psychology). The earlier they know that this is a problem, the better it will be for them in the long term.

What should a student do when they realize their esteemed mentor's work is built on a faulty technique? Some leave academia or get disheartened from compromising their ideal of scientific rigor. Some say no. It's best for the mentor to get the message, change practices, and bring the students along with them.

Micah Allen
12/15/2016 06:51:33 am

I sincerely hope this is satire because otherwise it is disturbing.

Reply
Christian Battista
12/15/2016 08:37:28 am

The papers are real, and so I fear the worst here.

Reply
Jeff Rouder
12/15/2016 09:52:17 am

Hi Brian,

Thanks for sharing. As you can see, your post has generated a critical response. The field of psychology has been going through methodological turmoil, and many of us have lost confidence in researchers' ability to deploy statistical inference in a transparent, honest, and self-critical way. We think people are inadvertently fooling themselves. Your research as described above is concerning on several accounts, including the hunting for effects. I also note your dismissal of null effects and the priority you give to counting publications.

There is a loose collection of scholars that are attempting to change the methodological culture and rigor in psychology. New ideas include preregistration, open data, and the usage of statistical methods that allow researchers to state evidence for competing propositions including the null.

My hope is that you take these critiques seriously and consider how these new methodological practices can improve your science.

Reply
Brian Wansink
12/15/2016 03:44:28 pm

Hi Jeff,

Thanks for your thoughtful comments. We've been experimenting with some new ideas in this area (like crowdsourcing ideas and hypotheses). I hope what is going on also helps move researchers past trying to make generalizations from really limited contexts (MTurk studies, terminal-based studies, and so on).

Thanks for your reply,

Brian



Reply
Lazy Postdoc
12/15/2016 10:37:33 am

My sister lost a job opportunity because she refused her supervisor's offer/request to analyse data during Christmas week. Serves her right, getting distracted by the shiny alternatives of spending the holidays with her children and husband!

Reply
Brian Wansink
12/15/2016 03:47:03 pm

Dear Good Sister,

I'm sorry to hear of your sister's lost job opportunity. That might have been a good bullet for her to have dodged.

Sincerely,

Brian

Reply
Matthew McBee
12/15/2016 01:54:28 pm

This is a great piece that perfectly sums up the perverse incentives that create bad science. I'd eat my hat if any of those findings could be reproduced in preregistered replication studies. The quality of the literature takes another hit, but at least your lab got 5 papers out.

Reply
Brian Wansink
12/15/2016 03:51:07 pm

Hi Matthew,

Usually as academics we suffer from people saying, "Well, that seems obvious." Hopefully the more useful of these findings will be replicated and extended.

Sincerely,

Brian

Reply
Beau Gamble
12/15/2016 03:18:01 pm

Hi Brian,

This may sound snarky, but I am genuinely curious. How many of your other 500 publications resulted from similar data fishing expeditions?

You may be unfamiliar with the heightened risk of false positives when conducting multiple comparisons. I highly recommend Daniel Lakens' Coursera course, "Improving Your Statistical Inferences": https://www.coursera.org/learn/statistical-inferences

Reply
Brian Wansink
12/15/2016 04:01:43 pm

Hi Beau,

Thanks for your suggestions about false positives. That's one reason why we often try to have a secondary lab study to confirm what we find externally.

I looked for an article he might have written on the topic, but if you have an idea of a core article in the syllabus, I'd love to know about it.

Thanks,

Brian

Reply
Alex Reinhart
12/15/2016 05:14:34 pm

I can't speak for Daniel Lakens, but I have written quite a bit about this kind of thing on my website, with references linked to a number of papers, if you'd like more detail.

In particular, the sections on multiple comparisons and researcher freedom are directly applicable here:

https://www.statisticsdonewrong.com/p-value.html
https://www.statisticsdonewrong.com/freedom.html

The paper that most directly addresses the research methods in your posts is "False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant", which discusses exactly the kinds of tactics you described in your post, and why they are likely to lead to meaningless results.

https://www.ncbi.nlm.nih.gov/pubmed/22006061

It's disappointing to hear these kinds of methods being advocated openly. Our goal should not be increasing our publication counts by slicing and dicing data until we get results. It should be improving our understanding of the world. That means analyzing experimental data rigorously, reporting our analysis fully (even parts that didn't produce significant results), and admitting that null results have value too.

George Henderson
12/15/2016 04:13:39 pm

I'm very grateful to you for exposing how these valueless and misleading studies are generated.
If a hypothesis is sound, you should be able to predict the result of an experiment. Predict as in beforehand.
You might pick up interesting information as well that needs discussing and retesting, but I'm guessing that if there had been anything really interesting in these cases, it wouldn't have taken multiple attempts to generate new hypotheses, and the lead experimenters would have made the connections, assuming that the subject is really one to inspire intellectual curiosity.

Reply
Brian Wansink
12/15/2016 11:26:52 pm

Hi George,

You are right on target. Good hypotheses will have shown up in the lab, and we typically only take things into the field after they have been proven in the lab. But when they work on undergrads and not as well on lunch-goers at an Italian buffet, it's better for intellectual curiosity to figure out where it worked and where it didn't.

You make a good point and I hope I addressed it in the addendum.

Reply
Gordon
12/16/2016 04:03:19 am

Hi Brian.

This pretty much sums up what is wrong in psychology. 1. Do many lab experiments. 2. Report only significant findings (hypotheses that worked). 3. Run a replication on a similar task, a new population, or in the field. 4. P-hack the replication. Boom: false-positive support for a false positive...

It also took me a long time to understand the consequences. But it is very important!

Best,
Gordon

Leonid Schneider
12/15/2016 04:39:20 pm

Dear Brian,
I only have one brief question: does your Cornell University approve of this post?
Leonid

Reply
Gary McDowell
12/15/2016 09:23:53 pm

One conclusion appears to be that in your lab, you judge success to be an unpaid inexperienced graduate student doing exactly what you told them, and failure to be the paid postdoc demonstrating independence and a work-life balance, and then leaving academia.

However, I struggle to draw conclusions from anecdotal data such as this, and you let your lack of objectivity affect your judgment to make a sweeping generalization about academia that no-one should be able to make from such a dataset.

These people are not at the same career stage. They have different experience and expertise about the data which may have influenced the choice of one to work on it (indeed, it sounds like the grad student didn't really have a choice). You make the nationality of the grad student clear and make no statement about the postdoc's, but they are implied to be different.

The grad student was working only on this data, whereas you state for the postdoc this would have been a side-project. In which case, even if the postdoc did take it, there is every likelihood that they could not have produced at the same rate as an otherwise unoccupied grad student in a foreign country.

In addition, the data you provide in this post is also misleading. The implication from reading this is that the grad student was in your lab for 6 months, whereas it took only a quick search to see that the student claims to have been in your lab for 11 months. So that would then suggest that the postdoc only produced 1/2 as much as the graduate student. And by produced, all we have to go on is papers, because that is the only metric of scientific output or indeed their work that you mention here, other than they already had a project. You do not mention what else the postdoc was doing, other than implying they were watching Game of Thrones, checking Facebook and going to spin class. Still, they produced 2-3 papers in 11 months based on the data you provide, and applying the correction for 11 months.

Now your data begins to take on a new light, because you also state that you met with the graduate student every day. So, now what is appearing is a graduate student in a foreign country with no extra funding for 11 months with daily attention from you working on first one, then another, dataset, publishing 5 papers, and a postdoc working on another project with unclear amounts of input from you but seemingly working independently producing half as many papers, who then leaves academia.

You do not state whether the postdoc originally intended to leave academia, or chose to leave academia after being in your lab. But the conclusion that you chose to support and nurture the graduate student and not the postdoc – while assuming the postdoc was distracted by "shiny things" – and thereby drove the postdoc out of academia yourself with your attitude, could be just as valid as the conclusion/moral you provide.

The information you provide is sparse and anecdotal, so a conclusion cannot be drawn from it. I merely provide this alternative as a conclusion that is just as valid from the sparse information you provide as your conclusion that you should "Make hay while the sun shines".

Advice is freely given and free to take, and so I return the favor: fewer academics making sweeping generalizations about academic success based on personal anecdotes, and instead making conclusions based on rigorous data, or reading the work of those who do so, would make improvements to a clearly imperfect system easier.

I have added your post to my collection of "things academics actually say seriously about academia using anecdotes," which happily is just in time for inclusion in a manuscript currently in revision.

Reply
Brian Wansink
12/15/2016 11:39:55 pm

Dear Gary,

Thank you for your very thoughtful and constructive post. You make great points, and they give me something to think about.

You are right on target. The person from Turkey was a 100% worker because we met every MWF. Also, she was younger and really was working to "prove" herself.

Also, to be accurate, the grad student did have a 20-hour/week assignment on another administrative project, and she was not really certain she wanted to stay in academia. At the least, she was really unrealistic ("I want to be a professor at one of these 3 schools").

I've added an addendum to this post. One of the things I wanted to underscore is that a time period like this is a pretty brief window of opportunity. It might be easier to put the pedal to the metal now than later when a spouse and kids are around.

Thanks for your very thoughtful comments. I would love to take a look at your manuscript if you need comments, or to use it in my graduate course when it's in a working-paper phase. Send it to foodandbrandlab@cornell.edu and I'll get it.

Thanks again,

Brian

Reply
Unbelievable
12/16/2016 02:24:26 am

You have basically admitted pushing this student to p-hack and do intensive HARKing (and you keep praising HARKing in the addendum). Simply put, you're telling us that these five papers have no scientific value.

Reply
David Moore
12/16/2016 08:49:37 am

Based on your addendum, I do not think you understand what p-hacking is. You basically said you don't p-hack, and then you gave an example to prove this (finding a result at only some times but not others), which is exactly what p-hacking is!

Reply
Brian Wansink
1/12/2017 08:23:31 pm

Hi David,

Much of social science (at least when it comes to eating) looks at finding boundary conditions (where an effect no longer works, for instance) or finding moderating effects (such as the types of people it works best with based on individual differences).

For instance, loud ambient noise in a restaurant might cause people to eat more food at night (because there are more opportunities to order appetizers, desserts, and drinks), but this might not influence people at lunch (because most people only order a lunch entree and not chocolate lava cake and a martini). Also, loud music might have more of an effect on young people (who eat more) than on people over 70 (who tend to eat less).

In an ideal world, one would have had hypotheses about this ahead of time. With field studies, you often figure it out while asking yourself what might have moderated the brilliant hypo you had when you first designed the study.

Uncovering these boundary conditions and moderating effects is really valuable in the social sciences. They don't always come "full blown from the head of Zeus" in a theory; they come after analyzing the data.

Perhaps there are different definitions of p-hacking.

Best,

Brian

Reply
Tina Issac
12/16/2016 09:35:30 am

+1 to David Moore's comments above.

There is nothing wrong with "deep data diving" IF it is declared as such in the paper, preferably under a label such as "Data snooping" or "Exploration", clearly delineated from any hypothesis-conforming research, with stringent Type I error correction.

What you describe, Brian, does sound like p-hacking and HARKing. The problem is that you probably would not have done all these sub-group analyses and deep data dives if your original hypothesis had p < .05. What you engage in is a data-driven walk down the garden of forking paths, as Andrew Gelman likes to put it.

1) Test main hypothesis
2) if p < .05, stop,
3) if not, explore subgroup,
4) repeat from 2).

This is in fact p-hacking, and it does increase the Type I error rate, and it does lead to false-positive, and hence unreplicable, research. And yes, it is damaging to science in the long run: it litters the field with false research, leads other researchers down wrong paths, and wastes the time and resources of other labs. Of course, at the same time it is beneficial in the short and medium run for the researcher and the lab that engage in it (more funding, more cites, more prestige). The incentive system of academia clearly steers this in the wrong direction.
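To make the inflation concrete, here is a small simulation of exactly that stopping rule, run on pure noise; the sample sizes and subgroup count are illustrative, not taken from the papers under discussion.

```python
# Simulate the stopping rule above on pure noise: test the whole sample,
# and if p >= .05, test subgroup after subgroup until something comes
# out "significant". All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_subgroups, alpha = 5_000, 5, 0.05
hits = 0

for _ in range(n_sims):
    x = rng.normal(size=200)   # "treatment" -- the true effect is zero
    y = rng.normal(size=200)   # "control"
    labels = rng.integers(0, n_subgroups, size=200)   # lunch/dinner, etc.
    significant = stats.ttest_ind(x, y).pvalue < alpha
    g = 0
    while not significant and g < n_subgroups:
        significant = stats.ttest_ind(x[labels == g], y[labels == g]).pvalue < alpha
        g += 1
    hits += significant

# Prints a false-positive rate in the 0.2-0.26 range with these settings,
# far above the nominal 0.05:
print(hits / n_sims)
```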

Fortunately, psychology has done some real soul-searching about these topics, and we now have tools such as p-curve analysis, test of insufficient variance, R-index, and others that help to unearth such methods post-publication.

I admit that it is a bit difficult to end on a positive note. I have always been a big fan of your research and reading this blog post was like a major punch in the gut.

Reply
Brian Wansink
1/12/2017 08:51:03 pm

Hi Tina,

Thank you for your very thoughtful note. You make a really important point. Stating a post hoc finding as a hypothesis inflates the chances of a Type I error.

The best way to approach something like that is to have the first paragraph of the analysis look at main effects of what was generally hypothesized. The second and third paragraphs (or sections of the analyses) then look at these sub-group analyses or boundary conditions.

Lab data is usually tight, but field-study data is so messy that using it for hypothesis testing is sometimes less useful than using it for demonstration, exploration, or evaluation research.

Although the review process isn't always perfect, it does provide a useful check and balance. Effects that might have been found in a secondary analysis usually need to be explained or supported in the discussion section. When reviewing papers, I often ask authors to provide more theoretical support for these boundary conditions or mediations (though most of them seem somewhat obvious – but only after the fact).

Thanks again for the helpful post,

Brian


Reply
Sampa Mabaudi
12/16/2016 12:14:44 pm

As a grad student from a "lesser university", I found this article amusing, disturbing and sad at the same time.

I am aware that the author needed to make a point, but I am curious as to why it was necessary to basically reveal the identity of the "obedient" grad student but not that of the "liberal/independent" post-doc?
My immediate concern for the grad student was this: if Turkey is anything like my country of origin, this article will follow her around, suggesting that she basically needed to be spoon-fed into publishing and may be incapable of independent work.
Further, choosing to show the papers she published but not those of the post doc might actually be because the author knows that it would not be acceptable to undermine someone from a "higher university" or maybe even "higher country"?

Respectfully yours.

Reply
Brian Wansink
1/12/2017 09:07:45 pm

Hi Sampa,

Thank you for your excellent comment.

I mentioned her because I really, really admire and respect her. She's a hero. She made an opportunity for herself that most people wouldn't have had the wisdom, energy, or insight to make. She's one of 3 or 4 examples I always hold up of how to brilliantly seize the day.

Academia is lonely, and the right mentors are hard to find. Even then, they might seldom have time for you. Her ability to find a mentor, get advice, and really leap-frog her research experience is amazing. She deserves a lifetime of credit for taking a risk that 99% of people would never have taken, and then capitalizing on it like another 99% wouldn't have thought to do.

What she did could serve as a great set of insights for other people who might identify with her situation.
1. Find a mentor outside your school who might be willing to work with you.
2. Try to take a 3-12 month leave of absence as a PhD student (and see if your adviser, a parent, or the lottery can support you).
3. Move there for 3-12 months and try to add as much value to as many existing projects (and eventually new ones) as possible.
4. Go back, graduate, and get a great job.

If you decide to go this route and need advice, feel free to contact me.

As for mentioning the woman's name, in retrospect, I probably shouldn't have done that. I just admire her too much (and my daughters adore her).

Thanks again and let me know how I can help,

Brian

Reply
Anon
12/16/2016 12:36:15 pm

Wow. Sampa took the words out of my mouth. It seems you are implying that this student was somehow academically inferior at the outset, which is kind of condescending. But then, to call her out by name through the articles you list ... Does she know about this post? Did she agree to be identified? But I guess this model student wouldn't say "No" to you, would she...

Reply
Brian Wansink
1/12/2017 09:58:22 pm

Check out the post to Sampa:

"I mentioned her because I really, really admire and respect her. She's a hero. She made an opportunity that most people wouldn't have had the wisdom, energy, or insight to have made. She's one of 3 or 4 examples I always hold up as how to brilliantly seize the day."

Reply
Diabolecule
12/16/2016 11:57:23 pm

Hi Brian,

As a post-doc with a bad attitude, I'd like to ask: how many other projects was your post-doc working on? At one point, my boss had me working on two different projects and supervising five students; he had volunteered me for, but not told me about, two further collaborative projects (even when I asked him directly why people kept giving me chemicals without an explanation); I was running service samples; plus he had me writing papers for students I was not responsible for but was working with unofficially.

I was hoping that you could help me to understand what more I should have been doing, given that my boss was unhappy with my paper output during this time.

Reply
Brian Wansink
1/12/2017 09:20:04 pm

Dear Diabolecule,

I'm sorry to hear about your experience. It must have hurt to be spread so thin and to still have your supervisor be unhappy. It might help to view it as a chapter of your life that will be over soon.

With volunteer visitors (we have 4-8 each year), I try to make sure they only work on things that will get them publications or helpful skills. Post-docs are in a different situation, and they can often get strung across too many projects -- some of which won't really pay off. Typically we try to say that 50% of a post-doc's time should be spent on their job description and the other 50% on pure research (or, I guess, on what they want).

One thing to keep in mind is that not all academic positions need to leave one as stressed out as your adviser might have been. There are great gigs at balanced schools that do not demand as much research, and there are teaching schools.

Think broadly about what options are out there for you. Try to keep your spirits up. Focus on finding a good fit (and not on what someone else might approve of).

Thanks for your note and best wishes on your next chapter,

Brian

Reply
Sergio Graziosi
12/17/2016 02:37:18 pm

Brian,
you are getting so much flak that I'm almost tempted to sympathise. I'll resist the temptation, but will offer a tale of one young PG student (me, ages ago) and a few links.
The tale: after graduation I got the exceptional chance of doing a funded PhD in a well-respected institution. It didn't go well: within-team competition was at its highest (i.e. fellow students and postdocs were all expected to compete with one another, and the environment was often toxic as a result), and the pressure to produce publishable results was relentless.
I could not cope: I was there to find out how stuff works, but that didn't count at all. If all the emphasis is on publishing, only publishable results are considered results; all the rest is, as you write, a failure. The problem is that this turns science into a race: people are tempted to cut corners, and we end up rewarding selfish ambition and discouraging genuine curiosity. At the time I could barely decipher what was going on, but I was simply unable to put all my energies into a game which I knew wasn't producing trustworthy results (not to mention that it wasn't even producing anything potentially interesting).
I ended up quitting and making a career shift. I was lucky and am now in professional heaven (have been for almost 10 years), so I doubt that I'm saying all this out of spite. Why am I spilling my beans, then? Because I have the feeling that you are trying to do the right thing: your OP looks like a genuine attempt to provide useful advice (if it's not clever satire! It does smell of sexism, however).
What I'm suggesting is that perhaps your post-doc has suffered a similar situation: for some people, if they find themselves unable to believe in what they're doing, it will be hard or impossible to put all their energies into it. That's to say: if you sort out the methodological side, people working for you may suddenly rediscover their own motivation. I suspect that you may regret writing this post, but it doesn't need to be recorded as a mistake: it may still be useful.

So, onto methods. There is something catastrophically wrong in the genesis of the papers you describe, and it seems possible that you simply didn't know it (as per David Moore and Tina Issac). Fine, mistakes are there to be corrected. Others (Alex Reinhart in primis) have provided valuable literature, I'll add my own suggestions and a brief explanation.
Let's say we do an experiment, manipulating a variable that doesn't make any difference whatsoever. We measure 5 outcomes. Because of internal variability, all measures differ from our controls. So, for measure 1, we have a 5% probability that the difference will be "statistically significant", right? Same for measure 2, 3, 4 and 5. Question: what's the probability that at least one measure will turn out to be statistically significant? A bit more than 5%, I'd say.
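Worked out explicitly, assuming the five measures are independent:

```latex
% Chance that at least one of five independent tests at alpha = .05 comes
% out "significant" when the manipulated variable truly does nothing:
P(\text{at least one false positive}) = 1 - (1 - \alpha)^{5} = 1 - 0.95^{5} \approx 0.226
```

So "a bit more than 5%" is, for five independent measures, already about 23%.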
This is all there is: the moment you start exploring your data, the p-value becomes worse than worthless -- it is positively misleading. It misled you, the entire field of psychology, and many more.
To be honest, it's even worse than this: p-values can be misleading even without actively exploring your data, as the usual Gelman has famously explained here: http://www.americanscientist.org/issues/pub/2014/6/the-statistical-crisis-in-science/99999. It doesn't even end there: if you dig enough, you'll find that it is extremely hard to avoid unintentional p-hacking. Wicherts et al. (2016) (http://journal.frontiersin.org/article/10.3389/fpsyg.2016.01832/full) have produced a list of 34 (34!) things that people should watch out for, and that's assuming they are pre-registering their experiments. Looks daunting to me. Conclusion: the pressure to publish is real, but people are trying to find workable alternatives. As others suggest in here, it's up to every PI to decide whether to play it safe (and be part of the problem) or take some risks and help out. I trust you'll choose wisely ;-).

Reply
Brian Wansink
1/12/2017 09:35:55 pm

Dear Sergio,

Outstanding! Thank you so much for pointing out those two papers. (I downloaded the draft form and will be making it required reading for my team.) You make outstanding points.

Also, I appreciate what you shared about your experience. I am sorry you experienced it, and I'm glad things have gone well in the past 10 years.

Although I wouldn't put this group anywhere near "publish at any cost," I think the pressure and pace of the combination of outreach, translation, and publishing are really frenetic. People can still thrive in an environment like that, but it also needs to be really supportive, and it sounds like yours was instead almost cannibalistic.

You are insightful, and what you say is exactly right about this person. They were soul-searching for the right path. It wasn't a good fit to put her in that position.

You were very thoughtful to take that time to share your story and to share those two papers. I'll make good use of them.

All my best on the next great 10 years,

Brian

Reply
Sergio Graziosi
2/14/2017 08:21:50 am

Brian,
apologies for the late reply, I've either forgotten to tick the "notify" option or it didn't work. I've arrived back here serendipitously via (Twitter) Anne Scheel (@annemscheel).
I've noticed your latest post on the subject and the addendum #2 above, and I'm commenting now to show a little appreciation and encouragement. Nobody says (or should say) that what you're trying to do is easy, it isn't: it's costly (money and more), effortful, probably thankless and certainly time consuming.
So, just a little "thank you" token for trying to make things better, knowing there is no guarantee you'll succeed.
I wasn't thinking of this debate at all, but the other week I published the first blog post from and for my "professional heaven". I mention it because it's supposed to be a handy little guide (a relatively low-cognitive-cost heuristic) to help do more-reproducible science, so (if I did it right) it might be useful in this context. URL is: http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3681&articleType=ArticleView&articleId=170 (no surprise that I mention the same Wicherts et al. paper in there :-/)
Enjoy, please keep trying, and forgive me if I sound patronising (not my intention)!

Anuja
12/17/2016 11:09:04 pm

I read through the post and all the comments. I wonder how different perspectives analyse a simple incident from so many negative angles. My conclusion: NEVER SAY NO TO ANY OPPORTUNITY, however big or small it is. And BELIEVE IN YOUR MENTOR. Your guide thinks best for you -- an ancient philosophy in our Indian books. Thanks.

Reply
Brian Wansink
1/12/2017 09:46:13 pm

Hi Anuja,

Thanks for your nice comment.

You are exactly right. Some of us imagined getting a PhD would be like a great partnership or apprenticeship, but that's not what it's like for most people. Opportunities come and go; the fewer a person takes, the fewer there seem to be.

Assuming you're a graduate student, there are two regrets I had as a graduate student, and I'll share them in case they may be relevant to you. First, I wish I had chosen the advisor who really, really liked me and not the advisor who was the most famous. I chose the famous guy and lost my funding in the PhD program (and fortunately ended up with the one who really liked me).

Second, I wish I had started as many projects as possible with faculty (and finished them or pushed them far, far forward) when I was still a student. I wouldn't have slept, but I would have learned a lot and gotten a bunch of jump-started publications. I probably also wouldn't have gotten turned down for tenure at my first two schools.

Grab those opportunities and, as a midwestern expression goes, "make hay while the sun shines."

Thanks again for your post,

Brian

Reply
Waseem
12/19/2016 11:40:00 am

Anuja - "Your guide thinks best for you"? and if the mentor/leading sheep jumps off the cliff?

https://www.youtube.com/watch?v=4TMMllw0tOo

Reply
Brian Wansink
1/12/2017 09:50:00 pm

Hi Waseem,

Thanks for your comment, and you make an important point.

As I mentioned to Anuja, a key thing is to make sure your advisor really likes you. My first advisor was the famous dude who "divorced" me after 2 years, and a wise older friend said, "I should have told you what I tell all of my PhD students: Make your best friend your advisor."

Good luck finding that "best friend" advisor,

Brian

Reply
matti heino
1/12/2017 12:55:40 pm

There are some nice posts about this post (and why the addendum doesn't really address the problems) by

- Andrew Gelman: http://andrewgelman.com/2016/12/15/hark-hark-p-value-heavens-gate-sings/ and
- Ana Todorović: http://neuroanatody.com/2016/12/too-good-to-be-true/

Reply
Brian Wansink
2/1/2017 07:51:08 am

Hi Matti,

Interesting posts, and thanks for pointing these out. Hopefully we'll have all of this addressed really soon.

Thanks,

Brian

Reply
Robin Niels Kok
1/26/2017 04:16:22 am

Nick Brown and colleagues made hay while the sun shone and have pre-published a re-analysis of the papers mentioned in the blog post.

"Statistical heartburn: An attempt to digest four pizza publications from the Cornell Food and Brand Lab" is available here:

https://peerj.com/preprints/2748/

Reply
Brian Wansink
2/1/2017 07:49:16 am

Hi Robin,

Thanks for sharing this. When it came to our attention this weekend that there were some inconsistencies, we contacted the editors and asked if we could redo all of the analyses and publish an erratum. This will also give us a chance to address some other oversights (like not cross-citing the papers, partly because they all came out at the same time).

Best,

Brian

Reply
Anthony St. John
2/1/2017 03:19:24 am

"With field studies, hypotheses usually don’t “come out” on the first data run. But instead of dropping the study, a person contributes more to science by figuring out when the hypo worked and when it didn’t."

I suggest you read this xkcd comic carefully: https://xkcd.com/882/

It provides a great example of learning from a "deep dive".

Reply
Brian Wansink
2/1/2017 07:46:19 am

Hi Anthony,

I like it. Thanks for the link. (Makes me grateful I'm more of a purple jelly bean guy).

Best,

Brian

Reply
Tim van der Zee
2/1/2017 11:36:28 am

Thank you for this update. You say that "This is data that cannot be depersonalized since sales, gender, and companions were central to some analyses."

This is inconsistent with what is reported in the papers. There are no analyses which require the *names* of the people, so the data can certainly be anonymized. Gender data is not personally identifiable information, nor is height or weight. There are many people with the same gender, height, and weight combination, meaning that you cannot trace this information back to an individual. Even if you could argue that identification would be likely, taking out either the height or the weight would still allow us to re-do all the critical analyses. As regards the sales: the paper already states how much the participants ate, so the data cannot add more information to this.

To rephrase: it seems perfectly possible to anonymize the dataset in such a way that allows others to re-do the analyses as described in the paper.

Reply
Brian Wansink
2/5/2017 01:46:34 am

Hi Tim,

There were a load of hypos we were exploring in this, and one was the cross-perceptions of people eating at a table together. That is, is there "groupthink" about the quality of a food at a table? To account for this, we need to know the names of all the people eating with each other at the same table. We know that Bob is eating with Cindy (and that he also had 2 beers), and so forth.

This was collected in a town of less than 1000, and knowing gender and age and BMI (or height and weight) might not be so anonymous at the extreme. That's why we wrote the consent form to read:

"The records of this study will be kept private. In any sort of report we make public we will not include any information that will make it possible to identify you. Research records will be kept in a locked file; only the researchers will have access to the records."

If something along this line changes in the future, I will let you know.

Sincerely,

Brian

Reply
Tim van der Zee
2/1/2017 12:06:54 pm

You state that "the data is sensitive" for various reasons. I will respond to them individually.

1. "The is sensitive for the company because it entails sales figures"
It seems highly unlikely that the data contains sales figures or anything that could be interpret as such. It is already reported in the paper how many people participated in the study (although this number continuously changes, as it discussed in the PeerJ article) and at which cost. Given that it is already clear how many people paid (at least) the price of the buffet, what else could be in the data that constitutes 'sales figures'? Even if the data would contain sales figures, that could be removed from the data set before it is shared. As such, this is not an argument why the data cannot be shared.

2. "The data is sensitive for the company because it entails traffic"
Again, the paper mentions all kinds of sample sizes, which constitute the traffic of the restaurant. Sharing the data could only be sensitive if it contained *additional* information about traffic, but that could simply be removed from the data set before sharing. As such, this is not an argument why the data cannot be shared. Instead, sharing the data could hopefully settle the question of how many people actually participated in the study.

3. "The data is sensitive to the diners because it contains the names of everyone they were dining with"
Why would you record the full name of the participants and store this electronically? Either way, it is straightforward to anonymize a dataset before sharing. A request for data does not constitute a request for sensitive data, but a request for the data which underlie the analyses. Considering that the names of the participants are not part of any analysis, they do not need to be shared. As such, this is not an argument why the data cannot be shared.

4. "The data is sensitive to the diners because it contains information on how much alcohol they ordered"
When the participants are anonymized, this cannot constitute a problem. Drinking alcohol is a socially acceptable practice, so knowing that there were people who at some point were drinking alcohol at an all-you-can-eat dinner is hardly sensitive information. As such, this is not an argument why the data cannot be shared.

5. "The data is sensitive to the diners because it contains information on how much food they ate"
Frankly, I do hope that the data contain this information, as this appears to be the entire point of the study. The papers already explicitly mention how much the participants ate, although these numbers appear to change across the articles, as is discussed in the PeerJ article. Either way, this also does not constitute a problem because the participants can be anonymized in the dataset. As such, this is not an argument why the data cannot be shared.

6. "The data is sensitive to the diners because it contains other identifying data (a 46 year old woman 5'9 and 136 lbs dining with a 49 year old man how is 6'3 and . . . )."
It is unclear why this would be sensitive information. This could only be a problem assuming that the study was performed in a very tight community where everyone is somehow aware of how much people weighted a few years ago, and whether they did or did not have lunch at this specific restaurant. Should this be deemed a likely possibility it still an easy option to simply anonymize the data further and remove either the age, weight, or height. As such, this is not an argument why the data cannot be shared.

----------

All of these concerns can be very easily dealt with by properly anonymizing the dataset. Anonymizing data is standard procedure. It thus surprises me that you appear to be either unaware of this procedure or think it is not an option.

I kindly ask you again to simply anonymize the data and share it with the scientific community, which is concerned about the veracity of these papers. As can be seen at https://peerj.com/preprints/2748/, the PeerJ preprint has had over 4000 views and 2800 downloads within a few days, which signifies that a substantial group of people is interested in the scope of the noted inconsistencies. Sharing the data can help the scientific community shed light on these inconsistencies.
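To illustrate how routine this is, here is a minimal sketch of such an anonymization; the DataFrame and column names ("name", "companions", "height_in", "weight_lb") are hypothetical stand-ins for whatever the real schema is.

```python
# Minimal anonymization sketch; "name", "companions" (a list of names per
# row), "height_in", and "weight_lb" are hypothetical column names, not
# the study's actual schema.
import pandas as pd

def anonymize(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Replace names with opaque IDs so table groupings survive de-identification.
    ids = {name: i for i, name in enumerate(pd.unique(out["name"]))}
    out["diner_id"] = out["name"].map(ids)
    out["companion_ids"] = out["companions"].apply(lambda ns: [ids[n] for n in ns])
    # Coarsen near-identifying continuous fields into bands.
    out["height_band"] = pd.cut(out["height_in"], bins=[0, 63, 67, 71, 100])
    out["weight_band"] = pd.cut(out["weight_lb"], bins=range(50, 401, 25))
    return out.drop(columns=["name", "companions", "height_in", "weight_lb"])
```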

Reply
Brian Wansink
2/5/2017 02:03:03 am

Hi Tim,

As I noted in the post you made a half hour earlier, one of the hypotheses we had was whether people at the same table tend to converge on their evaluations of food -- do they tend to think it tastes the same? This is why the data is interlinked: we know that Bob is eating with Cindy (and that he also had 2 beers), and so forth.

Is that sensitive information? In a town of less than 1000, maybe it would be to Cindy's husband. For this reason, we wrote the consent form to read:

"The records of this study will be kept private. In any sort of report we make public we will not include any information that will make it possible to identify you. Research records will be kept in a locked file; only the researchers will have access to the records."

Again, if something along this line changes in the future, I will let you know.

Sincerely,

Brian

Reply
Malte Elson
2/6/2017 10:31:13 am

Dear Brian

Thank you for taking the time to respond to the many comments here. I'm glad you're taking this seriously and that you're considering a substantial change in your research practices (as outlined in Addendum II).

Regarding your comment above, though, I'm not sure in what way it responds to the points raised by Tim. He explained in great detail which variables he is interested in, and provided some guidance on how potentially sensitive information could be removed to protect the privacy of your study participants.

If the quote in your comment above is taken directly from the consent form, then I don't see any problem at all with sharing parts of your dataset. It is true that your participants need to be protected, and that all sensitive information, or information that could be used to identify them, needs to be kept private. But there is a host of other variables that, when detached from that information, will still be valuable to Tim (and others).

Removing sensitive variables from your dataset might make it impossible to reproduce certain analyses -- but that is not a problem. Anybody who has ever worked with sensitive information understands that you can't share everything all the time. But there are probably other analyses that could be easily reproduced without the sensitive information present in the same dataset.

Sharing
1) a full list of *all* variables measured (marking those that are "sensitive") AND
2) a partial dataset of only the non-sensitive variables
would still be completely in accordance with the statement from your consent form that says "we will not include any information that will make it possible to identify you" as it does not specify anything about sharing information that cannot be used to identify someone.

I really hope you will reconsider your previous statements not to share any of the data from the studies discussed here.

Kind regards
Malte

Julia M Rohrer
2/3/2017 03:52:37 am

Hi Brian,
maybe I didn't read your Addenda properly, but it looks to me as if you did not acknowledge the work of Tim, Jordan, and Nick at any point. This puzzles me quite a bit, because I think it is common practice in science to actually reference relevant sources instead of talking about impersonal "suggestions" that were made.

I'd also like to add that your group's work cannot possibly be accurate to the "3rd decimal point" because numbers normally only contain one decimal point, although that slip certainly adds some ironic flair.

Best,
Julia

Reply
Brian Wansink
2/5/2017 01:13:32 am

Hi Julia,

You're absolutely correct, and that has now been done in a modification (with a link to their heartburn article) that I made an hour ago, where I mention some of the specific lessons I think all behavioral researchers can take away from this.

Thank you,

Brian

Reply
Julia M Rohrer
2/6/2017 03:19:40 am

Thanks :)!

Blake Suttle
2/3/2017 12:58:48 pm

Hi Brian,
The key point here, which many have alluded to, is the meaning of P = 0.05.
That threshold means the scientific community accepts a 1-in-20 chance of a false positive in any single test of a true null hypothesis. That understanding, the language we've all agreed upon, becomes inoperable as soon as you make multiple comparisons from a single data set.
This is a real challenge, and there's no single 'right' answer to it, because we obviously want to learn as much as we can from our data. But the risk compounds quickly: make five comparisons of true null hypotheses and the chance that at least one comes out 'statistically significant' by luck alone is already about 23%; make 10 and it is about 40%; make 20 and it is about 64%, with one spurious significant result expected on average. This follows directly from our very definition of what statistical significance is.
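As a quick check, the family-wise error rate for k independent tests of true nulls at alpha = 0.05 is 1 - (1 - 0.05)^k; a few lines of Python (my own illustration, not anything from the papers under discussion) reproduce the figures above:

    alpha = 0.05
    for k in (1, 5, 10, 20):
        fwer = 1 - (1 - alpha) ** k
        print(f"{k:2d} comparisons: P(>=1 false positive) = {fwer:.0%}, "
              f"expected false positives = {k * alpha:.2f}")
    # 1 comparison: 5%; 5: 23%; 10: 40%; 20: 64%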
It gets worse: it is never possible, statistically, to identify *which* of your significant results are the false positives. You're left only with conjecture.
With no single right answer to this quandary, there are only three paths for such data that fall within good scientific practice:
1 - Limit your analysis to the single, planned comparison conceived a priori as your hypothesis test;
2 - Make multiple comparisons but adjust your critical threshold for statistical significance accordingly, so that an appropriate statistical 'penalty' is incurred to reflect the multiple comparisons. With a Bonferroni correction, you divide 0.05 by the number of comparisons you make, and nothing is significant unless it falls under that much lower threshold. With a sequential Bonferroni (Holm) correction, you sort the p-values from smallest to largest: the smallest must fall under 0.05 divided by the number of comparisons, the next smallest under 0.05 divided by that number minus 1, the next under 0.05 divided by that number minus 2, and so on, stopping as soon as one fails (see the sketch after this list).
3 - Make multiple comparisons but keep a careful tally of how many you undertake, and simply acknowledge them and the challenge they pose to interpreting the P values reported in your paper (see Matthew Moran's 2002 Oikos paper for compelling advocacy of this approach).
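For the corrections in option 2, here is a minimal, self-contained sketch on made-up p-values; in practice one might instead call a library routine such as statsmodels' multipletests, but the logic is short enough to spell out (everything below is my own illustration):

    def bonferroni(pvals, alpha=0.05):
        # Compare every p-value against alpha divided by the number of tests.
        m = len(pvals)
        return [p <= alpha / m for p in pvals]

    def holm(pvals, alpha=0.05):
        # Sequential Bonferroni: sort p-values ascending and compare the
        # smallest to alpha/m, the next to alpha/(m-1), and so on,
        # stopping at the first failure.
        m = len(pvals)
        order = sorted(range(m), key=lambda i: pvals[i])
        reject = [False] * m
        for rank, i in enumerate(order):
            if pvals[i] <= alpha / (m - rank):
                reject[i] = True
            else:
                break  # all larger p-values fail too
        return reject

    pvals = [0.001, 0.012, 0.018, 0.041, 0.049]  # hypothetical results
    print(bonferroni(pvals))  # [True, False, False, False, False]
    print(holm(pvals))        # [True, True, False, False, False]

Note that Holm rejects one more hypothesis than plain Bonferroni on the same inputs; that extra power at the same family-wise error rate is why the sequential variant is often preferred.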

Without one of these, your P values are meaningless, literally in defiance of what a P value is, and you severely mislead your readers about the workings of the real world you are purportedly studying.
Whatever approach you take, it is imperative that you are up front about it. That up-front admission of multiple comparisons needs to be the erratum statement on each of the five papers referenced. Do you see? It doesn't matter whether you go back and re-analyze, try something new, whatever: the P values you report do not serve the very point of what a P value is. They are meaningless with respect to the hypothesis tests for which you present them. It may have been inadvertent, but you have misled your editors and readers, and you need to address that regardless of what subsequent analyses show. Your papers, just by the meaning of our statistics, report false inferences today. Certainly. By definition.
I hope this is helpful. Good on you for following through despite the unexpected tone and content of the responses. There's probably a better lesson in that for your grad students and postdocs than the one you originally intended to convey.

Reply
Brian Wansink
2/5/2017 01:24:07 am

Dear Blake,

Thank you for your very well thought-out and thorough reply. I like your three-path framework, and thank you for including the Moran (2002) reference. For field studies like these, which aim either to confirm a finding from the lab or to suggest a new relationship, I think your second and third suggestions are great.

In addition to doing the types of tests you mentioned, I think there are other key things that can be done. When visiting with a stats PhD earlier today, he also suggested being very explicit about whether a conclusion was based on an a priori hypothesis or on exploration. I've also met with my lab, and for big studies like this we're going to start registering them on ClinicalTrials.gov (which I just learned about). (There are some other SOP "lessons learned" I mentioned up in my addendum post.)

Thanks for taking the time to write such a thoughtful post that is both specific and useful.

Best wishes,

Brian

Reply
jack
2/6/2017 02:17:15 am

Congrats on acknowledging the need to improve research practice; this is how things move forward.

Reply
Hannah C.
2/7/2017 01:24:23 am

Why so much emphasis on the gender or nationality of the graduate student?

Reply
C'mon
2/7/2017 03:15:15 pm

Oh please

Reply
Eric
2/7/2017 07:19:59 am

Brian,

I'm worried. This doesn't add up -

Brian says 'a non-coauthor Stats Pro is redoing the analyses',

But Brian also said he couldn't share the data because of his consent forms....

"The records of this study will be kept private. In any sort of report we make public we will not include any information that will make it possible to identify you. Research records will be kept in a locked file; only the researchers will have access to the records."

If something along this line changes in the future, I will let you know.

So this non-coauthor Stats Pro isn't one of the original researchers... yet presumably he or she must have access to the data to redo the analyses. So why not share whatever you share with the Stats Pro with Tim et al. and whoever else wants it?

Here is the solution. You can share the data.

Alternatively, your actions with the Stats Pro have invalidated your consent agreement with the participants in this study.

Which one is it? I'm confused about all of this.

Eric Robinson
University of Liverpool

Reply
Kah
2/8/2017 09:45:29 pm

I stopped reading as soon as I read "lessor background." That spoke volumes about what kind of 'scientist' you are. What a shame to see what was once a Utopia of mine turn into a game of charlatans.

Reply
'Stats Pro'
2/9/2017 10:56:50 pm

From R. A. Fisher, 1938:

"To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of."

Reply
Henry Thibodeau
2/12/2017 11:32:11 am

I have been following the developments on this blog and on Andrew Gelman's blog. After reading through everything, I have these questions: if just one project and the four papers resulting from it contained this many mistakes, what about the rest of the work this lab and its researchers have done? How can readers or other academics trust any work or papers this group of researchers has written?

Henry Thibodeau
Independent Researcher

Reply
Cassandra
2/12/2017 11:40:06 am

Well, there's one answer to the question: "Why has the obesity crisis ballooned in the United States, and why does it seem incurable?"

I expect to publish at least three papers on this:

"When Academic profile and the scientific principle is inverted to foster a career" Practical Ethics

"Consumer Brands, Ethics and Government Policy: the 'do's and dont's' of faux policy" Ash Center for Democratic Governance and Innovation

and

"Casual sexism and racism within hetero-normative patriarchal power structures, a lesson from 'a Turkish lady'" Feminist Theory


As many others have stated: as satire it was a bit on the nose. As reality, well: you have the leader you deserve, it would seem.

Reply
Cassandra
2/12/2017 12:11:57 pm

And, sorry to do this, but:

*I mentioned her because I really, really admire and respect her. She's a hero. She made an opportunity that most people wouldn't have had the wisdom, energy, or insight to have made. She's one of 3 or 4 examples I always hold up as how to brilliantly seize the day... Her ability to find a mentor, get advice, and really leap-frog her research experience is amazing.* ... etc.

I'm left wondering if the author actually realizes just how creepy and unprofessional this sounds, especially in light of a mentoring position over an unpaid, foreign, female PhD student, whom he then 'mentored' into publishing five (!) papers from a single data set that wasn't even that good to start with.

For the record, this waxing ode to the Perfect Woman PhD Student reads *very* much like a couple of other email exchanges I've read.

In court.

In sexual harassment cases.

Reply


