Few in recruitment need reminding that best practice is to read, review and respond to every applicant. Yet how many businesses or departments can claim a clean record here? From my experience (10 years as a recruiter, and now a rec-tech Product Manager), I’d bet none. Our analytics suite shows plenty of unread applications across our user base. So, what’s the problem?
I believe it’s this: the dichotomy of unread applications is that while reviewing every application benefits the company, it doesn’t benefit the individual recruiter.
The counter-argument runs thus: if every applicant gets a response, the recruiter benefits long-term with a stronger, more engaged talent pool to work with. I wouldn’t disagree. However, this long-term benefit offers no guarantees – especially if there are lots of unsuitable applicants for their sector. Perhaps they will benefit if everyone else does the same – but without collective agreement and commitment in practice, this still falls flat.
The issue lies in the realities of prioritisation and execution for an individual recruiter. The key objective is to make placements, and make them as quickly as possible. This applies in both agency and in-house, though the internal drivers behind that objective differ.
For a recruiter, when vacancy A is filled, the next priority is to fill vacancy B. At the same time, however, vacancy A may still have unread applications. These are likely to have accrued during the final stages of the process. Whilst candidates are at final-stage interview, advertising remains switched on as the money has been spent, and if no-one is successful at that stage, the recruiter will consider those stored applications. But if the vacancy is filled, those applications become redundant, and the drive to review them is compromised by the greater drive to fill the next vacancy.
Some may point to better time management as the answer. Well, we can all improve on that. However, keep in mind the number of vacancies and related priorities a recruiter has to juggle to achieve maximum placement output. A higher-priority task will almost always be there, over and above going through those unread applications.
An individual recruiter is often asked to report on – and justify – their activity, especially when failing to meet or exceed placement targets. But even when smashing those targets, the mechanics of recruitment dictate that any daily flex should be spent on activity that directly results in more placements. Which would a manager or owner prefer?
The bottom line is that recruiters respond to how their activity is measured. But how they’re measured doesn’t just refer to the measurement itself; it’s how that data is handled and driven that counts. The greater the penalty for missing placement and revenue targets, the less likely it is that activity benefiting the ‘community’ of the business gets prioritised. Think how hard it can be in agencies to encourage lead generation and sharing across teams. If that’s a challenge, taking time to process unread applications (potentially for others) is an even greater one.
Something else to consider is the earlier point about quality. Say you have a hundred unread applications from a recently closed vacancy. A large percentage are unlikely to be relevant to your desk or the wider business – so how strong is the incentive to review them? Does that realistically feel like a priority compared with other tasks more likely to result in a placement? Probably not. So, what’s the answer?
A one-size-fits-all approach is unlikely to work, but could a reasonable trade-off be found in adjusting placement/revenue targets? Broadly speaking, this works by reducing X from those targets in return for all applications being processed during that month. Of course, the driver will always be greater revenue to improve bonus earnings – so ultimately this may still leave a gap. Then again, the reduced pressure could mean more applications get reviewed, as a trade-off.
Is this an answer? I’m not sure. It’s an idea, and like any idea that hasn’t been knocked around and discussed, it will have flaws.
At idibu, we’re in a good place to explore how to solve this problem through our tech. But I want to talk more about these challenges and generate other ideas first. If you’d like to share any thoughts, I’d be really keen to hear them.