A Brief Look into the Culture of Recruitment

Fungai Mutsiwa
Aug 23, 2020

We peer behind the veil of obscurity that is the recruitment industry, to understand how it works and how it will take shape in the future. The sector has mostly been the responsibility of line managers and recruiters, whether internal or external. Naturally, we have assumed that these two groups understand the models used to assess candidates, and that control measures are in place to mitigate inherent bias throughout the recruitment process.

We are in an epoch where Artificial Intelligence (AI) and Machine Learning (ML) are entering various business sectors, with the general assumption that they will improve efficiency by tackling mundane tasks, freeing people for more strategic, value-adding work. We'll assess the potential impact of bias and AI/ML on recruitment, but only after we gain a perspective on the industry's current reality. After all, how can we improve the future of a system from within without understanding its past or present state?

Inherent Bias

Every individual carries conscious or unconscious bias. In the context of recruitment, I, the subject, exist within myself as a candidate, but the recruiter constructs a perception of who I am based on their own ideas. This view comes naturally to the recruiter because those ideas are underlying attributes of their character. The bias is man-made, which tells us little except that it is something we can control and improve on.

With that, we have to question whether our recruiters are aware of the bias within them. Are any control measures in place to mitigate it? To put this into perspective for the candidate, here are a few indications of how or when it may have impeded your job opportunities.

  • Applications are at times filtered by candidate address, profile picture, or name
  • International candidates are not considered, whether because English is a second language or because prior overseas experience is disregarded
  • Offering migrants lesser opportunities
  • Using racial stereotypes to assign opportunities to certain candidates

It needs to be understood that, as a recruiter, being aware of any such practices gives you the opportunity to learn and improve. This can be done through courses or tools designed to help you assess your bias and that provide practical strategies to reduce its level and impact.

A few malpractices beyond the above also need to be mentioned:

  • Preference is given to referrals, and those applications are selected on that basis without assessing the candidate on merit. There is a failure to realise that referrals also lead to homogeneity, a problem for organisations with a workforce diversity strategy
  • At times, when an application is rejected, no constructive feedback is provided to the candidate
  • Recruiters post “phantom” job advertisements on platforms like LinkedIn, Indeed or Seek in an attempt to collect candidate data to store in their databases
  • Recruiters prep potential candidates to pass the culture-fit, personality or psychometric tests, because they just want the commission, even when the individual may not actually be a good fit for the role or organisation

The Influence of Predictive AI and Machine Learning

When recruiters are placed in a position to influence the decision on a job candidate's viability, a level of accountability and due diligence is expected: enough, at least, to assume that when they perform their tasks they fully understand not just the output of the models they use, but also the inner workings and feedback loops of these job-decision algorithms.

As we usher in an epoch of disruptive technology, we need to assess the risks of implementing predictive AI and Machine Learning in an industry that has been heavily dependent on human capital. These mathematical models tend to be opaque and, in some cases, unregulated, raising concerns about discrimination as they reinforce the biases carried by their modellers or by recruiters. To begin to understand this proverbial black box, we need to at least consider the following issues:

  • The common misconception is that the algorithm is objective and free of bias, ignoring that the engineers of the underlying code hold biases of their own
  • Matching algorithms can perpetuate pre-existing bias, conscious or unconscious, within the data set used. Any learned patterns or behaviours are therefore in question, since the data set is not only historical but also limited to the candidate pool that applies through the system. It is important to note that AI cannot be "aware" of any unintended bias; the result is a filter bubble created through algorithmic editing
  • The models have to develop proxies for the ideal attributes of a candidate for a particular role. By mining data from social media accounts, a model can form a footprint of your personality to make an assessment. The first concern is data privacy; the second is that the proxies developed are not an accurate representation of the target pool. Potentially, this determines whether the model even advertises the job opportunity to you
  • Companies such as HireVue use AI to assess candidates through a video-interview platform that evaluates an individual's responses and facial gestures to preselected questions. Facial-recognition algorithms are not uniformly accurate, as shown by the Coded Gaze project, which illustrated software failing to detect a Black person's face. This is not limited to race: tools like IBM Watson have shown inaccuracies when distinguishing people by gender, or whether they have disabilities
  • Some companies do not provide full disclosure of the methodologies they use within their models, showing the lack of accountability and transparency
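The second point above can be made concrete. Here is a minimal sketch, using entirely hypothetical data, of how a naive matching model trained on historical hiring decisions simply re-learns the recruiters' past preferences:

```python
# Toy illustration (hypothetical data): a naive screening model trained on
# historical hiring outcomes encodes the bias present in those outcomes.

from collections import defaultdict

# Biased history: candidates from group "B" were hired far less often,
# despite comparable skill levels.
history = [
    {"group": "A", "skill": 7, "hired": True},
    {"group": "A", "skill": 6, "hired": True},
    {"group": "A", "skill": 5, "hired": True},
    {"group": "B", "skill": 7, "hired": False},
    {"group": "B", "skill": 6, "hired": False},
    {"group": "B", "skill": 8, "hired": True},
]

def train_hire_rates(records):
    """Learn P(hired | group) from past decisions — bias and all."""
    totals, hires = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hires[r["group"]] += r["hired"]
    return {g: hires[g] / totals[g] for g in totals}

def score(candidate, rates):
    # The "match score" mixes genuine skill with the learned group prior.
    return candidate["skill"] * rates[candidate["group"]]

rates = train_hire_rates(history)
a = score({"group": "A", "skill": 7}, rates)  # 7 * 1.0 = 7.0
b = score({"group": "B", "skill": 7}, rates)  # 7 * (1/3) ≈ 2.33
# Identical skill, very different scores: the old bias is now in the model.
```

Real matching systems are far more complex, but the failure mode is the same: a model fitted to biased outcomes treats those outcomes as ground truth.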

Again, we find ourselves in the same predicament as with data sharing and protection, where the entities involved are not entirely held to scrutiny. The regulatory efforts are either government-run (the type of government that held the 2018 Facebook congressional hearing while unaware of how the platform works, despite being heavy users of the site) or run by not-for-profits. This, coupled with old legislation, translates to limited funding and an antitrust fight that has so far been ineffective.

The above points are not meant as a deterrent, but to raise awareness of the impacts AI and Machine Learning have. The modeller and the user share responsibility for eliminating bias and ensuring the model does what it is meant to do. Below are some points to consider for improving the implementation of AI and Machine Learning in recruitment:

  • It is integral to remember that behind every piece of code is an individual with personal beliefs and values, and these become embedded in the code
  • AI serves to eliminate mundane or repetitive tasks and create opportunities for more strategic work. However, there should still be human intervention in determining the appropriate model, along with control measures such as third-party audits or model testing to ensure the integrity of the results
  • The model's results need to be supervised to provide a feedback loop that determines whether the model is effective
  • In some cases, these models should not be used at all. One such instance is where facial-recognition software used for pre-screening disadvantages a candidate with a disability, because their physical attributes may differ from the training data set
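The supervision point above can take a concrete form. As a sketch (the log format and group labels are hypothetical, and the four-fifths rule used here is a rule of thumb from US employment guidance, not a universal standard), a periodic audit could compare selection rates across candidate groups and flag disparities:

```python
# Sketch of a periodic fairness audit over a log of (group, selected) outcomes.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected_bool). Returns rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += ok
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the 'four-fifths' rule of thumb)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical audit log: group A selected 8/10, group B selected 4/10.
outcomes = [("A", True)] * 8 + [("A", False)] * 2 \
         + [("B", True)] * 4 + [("B", False)] * 6
flags = adverse_impact(outcomes)
# B's rate (0.4) is half of A's (0.8), below the 0.8 threshold, so B is flagged.
```

A flag like this is a trigger for human review of the model and its inputs, not an automated verdict in itself.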

The impact technology has on recruitment needs to be managed by identifying and assessing the potential risks, with the aim of developing controls and making better decisions. The job is never to attract a high quantity of candidates for a role, but to find the most suitable. Recruiters and hiring managers, as well as the engineers, need to develop the habit of adding a margin of error, assuming that no component of the models they use is perfectly reliable. Bias within a model will always be present, not because of the algorithm but because of the human element. What we need to be looking at is how to mitigate it.


“Reliance on a solitary vantage point fails to illuminate the whole picture.” — N.Sousanis