Our Blog

Things We’re Talking About


Read the latest installment of Tom Harvey’s Quarterly Contrarian email. Every three months, QCmail presents a view on a current business issue that differs from the prevailing consensus. You can take it or leave it, but hopefully QCmail will sharpen your thinking or trigger another take on conventional business wisdom. This quarter, Tom Harvey features an article by Jason Dana of the Yale School of Management. Professor Dana challenges us all to rethink how we approach and handle job interviews, whichever side of the table we are on. Read on and be prepared to be surprised!


A friend of mine once had a curious experience with a job interview. Excited about the possible position, she arrived five minutes early and was immediately ushered into the interview by the receptionist. Following an amicable discussion with a panel of interviewers, she was offered the job. Afterward, one of the interviewers remarked how impressed she was that my friend could be so composed after showing up 25 minutes late to the interview. As it turned out, my friend had been told the wrong start time by half an hour; she had remained composed because she did not know she was late.


My friend is not the type of person who would have remained cool had she known she was late, but the interviewers reached the opposite conclusion. Of course, they also could have concluded that her calm reflected a flippant attitude, which is also not a trait of hers. Either way, they would have been wrong to assume that her behavior in the interview was indicative of her future performance at the job.


This is a widespread problem. Employers like to use free-form, unstructured interviews in an attempt to “get to know” a job candidate. Such interviews are also increasingly popular with admissions officers at universities looking to move away from test scores and other standardized measures of student quality. But as in my friend’s case, interviewers typically form strong but unwarranted impressions about interviewees, often revealing more about themselves than the candidates.


People who study personnel psychology have long understood this. In 1979, for example, the Texas Legislature required the University of Texas Medical School at Houston to increase its incoming class size by 50 students late in the season. The additional 50 students the school admitted had reached the interview phase of the application process but had initially been rejected following their interviews. A team of researchers later found that these students did just as well as their classmates in terms of attrition, academic performance, clinical performance (which involves rapport with patients and supervisors) and honors earned. The judgment of the interviewers, in other words, added nothing of relevance to the admissions process.


Research that my colleagues and I have conducted shows that the problem with interviews is worse than irrelevance: They can be harmful, undercutting the impact of other, more valuable information about interviewees. In one experiment, we had student subjects interview other students and then predict their grade point averages for the following semester. The prediction was to be based on the interview, the student’s course schedule and his or her past G.P.A. (We explained that past G.P.A. was historically the best predictor of future grades at their school.) In addition to predicting the G.P.A. of the interviewee, our subjects also predicted the performance of a student they did not meet, based only on that student’s course schedule and past G.P.A. In the end, our subjects’ G.P.A. predictions were significantly more accurate for the students they did not meet. The interviews had been counterproductive.


It gets worse. Unbeknownst to our subjects, we had instructed some of the interviewees to respond randomly to their questions. Though many of our interviewers were allowed to ask any questions they wanted, some were told to ask only yes/no or this/that questions. In half of these interviews, the interviewees were instructed to answer honestly. But in the other half, the interviewees were instructed to answer randomly. Specifically, they were told to note the first letter of each of the last two words of any question, and to see which category, A-M or N-Z, each letter fell into. If both letters were in the same category, the interviewee answered “yes” or took the “this” option; if the letters were in different categories, the interviewee answered “no” or took the “that” option.
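
For readers who want to see the rule spelled out, here is a minimal sketch, in Python, of the random-response procedure described above. The function name random_answer and the example questions are illustrative inventions, not part of the study materials.

    def random_answer(question: str, options=("yes", "no")) -> str:
        """Answer a yes/no or this/that question with the arbitrary rule
        described above: take the first letter of each of the question's
        last two words, bucket each letter into A-M or N-Z, and answer
        according to whether the two buckets match."""
        # Drop punctuation so it doesn't interfere with the letters.
        words = [w.strip('?.,!"\'') for w in question.split() if w.strip('?.,!"\'')]
        first, second = words[-2], words[-1]

        def in_first_half(word: str) -> bool:
            # True if the word's first letter falls in A-M, False for N-Z.
            return word[0].upper() <= "M"

        same_bucket = in_first_half(first) == in_first_half(second)
        # Same bucket -> "yes"/"this"; different buckets -> "no"/"that".
        return options[0] if same_bucket else options[1]

    # Illustrative questions (invented for this sketch):
    print(random_answer("Do you prefer working alone or in teams?"))      # "no"
    print(random_answer("Would you describe yourself as a morning person?",
                        ("this", "that")))                                # "that"

So, for instance, a question ending in “in teams” pairs an A-M letter (“i”) with an N-Z letter (“t”), so the scripted answer would be “no” or “that,” no matter what the interviewee actually thought.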


Strikingly, not one interviewer reported noticing that he or she was conducting a random interview. More striking still, the students who conducted random interviews rated the degree to which they “got to know” the interviewee slightly higher on average than those who conducted honest interviews. The key psychological insight here is that people have no trouble turning any information into a coherent narrative. This is true when, as in the case of my friend, the information (i.e., her tardiness) is incorrect. And this is true, as in our experiments, when the information is random. People can’t help seeing signals, even in noise.


There was a final twist in our experiment. We explained what we had done, and what our findings were, to another group of student subjects. Then we asked them to rank the information they would like to have when making a G.P.A. prediction: honest interviews, random interviews, or no interviews at all. They most often ranked no interview last. In other words, a majority felt they would rather base their predictions on an interview they knew to be random than on background information alone.


So great is people’s confidence in their ability to glean valuable information from a face-to-face conversation that they feel they can do so even when they know they are not being dealt with squarely. But they are wrong. What can be done? One option is to structure interviews so that all candidates receive the same questions, a procedure that has been shown to make interviews more reliable and modestly more predictive of job success. Alternatively, you can use interviews to test job-related skills rather than idly chatting or asking personal questions.


Realistically, unstructured interviews aren’t going away anytime soon. In the meantime, we should be humble about the likelihood that our impressions will provide a reliable guide to a candidate’s future performance.


Jason Dana is an assistant professor of management and marketing at the Yale School of Management.


EDITOR: Tom Harvey teaches entrepreneurial marketing at Ohio State’s Fisher College of Business. For ten years he was CEO of a multinational insurance and risk management organization. He started his career with Xerox and later joined the venture arm of a technology development firm, where he worked with more than twenty emerging companies. He has been CEO of five companies, including two start-ups and a joint venture with a Fortune 100 company. He received a BA from Georgetown and, after serving as an artillery officer, an MBA. Harvey is a director or adviser with several Ohio-based firms. (614) 309-8757