We recently looked at our hiring process to determine what works and what doesn't. By "works" I mean what gives us insight into future performance. The first thing we realized was that we had no process.
Step 1: Create a process
The first thing we did was look at data from other companies. Google has published some information about their hiring process, and in particular about what didn't work: brain teasers were out, but there wasn't much good information about what does work. The problem is that without a lot of data you really don't know how good your process is. You certainly know how well the people you hired have performed, but you have no way to know how good the people you didn't hire were, and without a highly structured process it's hard to match interview performance with real-world performance.
The next thing we did was have everyone come up with a list of questions they typically ask candidates, and for each one we asked, "What does the answer to this question tell us?" Does it tell us how well the candidate will perform as an engineer, does it tell us how good the candidate is at interviewing, or does it give us some vague sense of a quality we think an engineer should have that may or may not translate to real-world performance? It's hard to know which questions fall into the first category, but it's a little easier to filter out questions that fall into the last two.
We started by not presuming any relationships we didn't have data to support. Does performance under stress in an interview translate into performance under stress in the real world? I don't know, so presuming that relationship doesn't give me useful information. Does the ability to "stand out" in an interview correlate with being a good engineer? Probably not. For each question, we tried to determine how and why the answer should affect our decision to hire someone. Is specialized technical knowledge critical, important, or just nice to have? What kinds of questions will tell us whether a candidate is a good fit with the team?
After we came up with what we felt was a good set of questions, we looked at the environment of the interview. Since we have no data to support the hypothesis that being good at interviewing correlates with being good at software engineering, we wanted to create an atmosphere that was as relaxed as possible. We actually explain our process and the thinking behind it to candidates, because we don't want them to feel that stumbling over an answer out of nervousness will reflect negatively on them. We wanted to give them every opportunity to show us what they've got, because we don't want to miss out on someone really good over one bad interview.
The last thing we looked at was our filter for candidates. This one was tough because we don't want to waste time on someone who is obviously not going to work out, but at the same time we don't want a filter so restrictive that we miss out on good candidates. What we settled on was FizzBuzz: a very simple test to see whether a candidate can program at all, or at least can figure out how to use Google. We try to impress on recruiters that we don't want them to apply their own filters, because frankly we don't trust them to make better decisions than we would, but we haven't met with a lot of success on that front.
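For anyone who hasn't seen it, FizzBuzz asks the candidate to print the numbers 1 through 100, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. A passing answer can be as minimal as this sketch (Python here, but the language choice is incidental):

```python
# FizzBuzz: print 1 through 100, replacing multiples of 3 with "Fizz",
# multiples of 5 with "Buzz", and multiples of both with "FizzBuzz".
for n in range(1, 101):
    if n % 15 == 0:       # divisible by both 3 and 5
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```

That's the whole point of the filter: anyone who can write a loop and a conditional passes it.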
So, how well is it working? I don't know. We certainly removed things that were unlikely to be useful and set priorities for the process that everyone understands and agrees on. I'd like to say time will tell, but we just don't hire enough engineers to collect the kind of data that a Google or a Facebook could. Oh, and if you're looking for a job with some challenging problems, we are hiring.