Exploring AI in Preemployment Assessments

How vulnerable are hiring assessments to AI? This competition seeks to find out.

Competition Overview

I-O psychologists rely on preemployment assessments to evaluate job candidates. But with AI-powered tools like Large Language Models assisting in job applications, organizations face a new challenge: are their hiring assessments still informative for comparing candidates when a seemingly perfect applicant is a click away?

This competition invites teams to create AI "job candidates" that attempt to excel in various assessments, including personality tests, cognitive ability exams, and interview questions. The results will help assessment developers adapt in an evolving hiring landscape.

Competitors will receive a dataset (dev_dataset.zip) to develop their models. This dataset is available below and includes two files.

Your goal is to fill in responses automatically, using tools such as generative AI, supervised learning models, or scripts. Your approach must be reproducible by another person and require no more than 5-10 minutes per job.
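As a rough illustration of what "automatically filling in responses" might look like, here is a minimal sketch that loops over assessment items and delegates each answer to a model. The column names ("Question", "Response"), the CSV format, and the `answer_item` stub are all assumptions; adjust them to match the actual files in dev_dataset.zip and substitute your own model call.

```python
import csv

def answer_item(question: str) -> str:
    # Placeholder: swap in a call to your chosen model (an LLM API,
    # a supervised classifier, etc.). The returned text becomes the
    # response recorded for this item.
    return "Strongly agree"

def fill_responses(in_path: str, out_path: str) -> None:
    # Assumes one assessment item per row with a "Question" column;
    # writes the model's answer into a "Response" column.
    with open(in_path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        row["Response"] = answer_item(row["Question"])
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```

Because the whole pipeline is a script, rerunning it on the larger test-phase dataset later requires no extra manual effort.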

Each job is evaluated on three attributes:

  1. Personality
  2. Cognitive Ability
  3. Skills and Ability

Each attribute is weighted equally (0.333) to compute the Final Overall Score.

Organizations try to thwart applicants, human or bot, who present an unrealistically favorable view of their own abilities, so faking countermeasures are in place. Each job's responses receive an Honest Probability score, which weights the Final Overall Score for that job. If the scoring system is certain your answers are "fake," your honesty score for that job will be 0, making your entire overall score for that job 0 as well.

Scoring Formula:
Job Applicant Score = Honesty * [(Personality * 0.333) + (CognitiveAbility * 0.333) + (SkillsandAbility * 0.333)]

Leaderboard Overall Score = Average of Individual Job Applicant Scores Across All Jobs
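The two formulas above can be expressed directly in code. This is a sketch of the stated scoring logic, with illustrative function names; all inputs are assumed to be on a 0-1 scale.

```python
def job_applicant_score(honesty, personality, cognitive_ability, skills_and_ability):
    # Honesty-weighted sum of the three equally weighted (0.333) attributes.
    return honesty * ((personality * 0.333)
                      + (cognitive_ability * 0.333)
                      + (skills_and_ability * 0.333))

def leaderboard_overall_score(job_scores):
    # Average of individual job applicant scores across all jobs.
    return sum(job_scores) / len(job_scores)
```

Note that honesty multiplies the whole bracket: an honesty score of 0 zeroes out that job's contribution no matter how well the three attributes score.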

Once you have completed the spreadsheet, submit it for scoring to view your average scores across all 50 jobs: Submit Spreadsheet Here to Score it

On March 9th, the test dataset will be released. It contains nearly 200 new jobs that competitors must complete using the approach they built during the development phase.

Instructions

  1. Team Registration: Sign up on the Sign-up Page to receive your unique team token. This token is required for both submissions and dataset downloads.

  2. Dataset Download:

    Enter your team token below and click the button to download the development dataset securely.


  3. Update 3/9/2025: Download Test Phase Data Here


  4. Explore the Dataset: Unzip the downloaded file and review its contents to familiarize yourself with the job listings and associated questions.

  5. Develop Your Approach: Use your preferred tools to automate responses to the questions in the development spreadsheet. (Note: Manually filling in responses is impractical for large test datasets.)

  6. Submission: Submit your completed spreadsheet for scoring at the Submission Page or via our API endpoint (optional). To learn how to use the API for automated submissions, refer to the provided Python/R examples:
    Google Colab Example (Python): Link
    Google Colab Example (R): Link

  7. Leaderboard & History: Check your performance on the Leaderboard and review your submission history on the Submissions Page.
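For teams using the optional API route, a submission request might be assembled along these lines. The endpoint URL and payload field names here are hypothetical placeholders; the real URL and format are shown in the linked Python/R Colab examples.

```python
import base64
import json
import urllib.request

# Hypothetical endpoint; replace with the real one from the Colab examples.
SUBMIT_URL = "https://example.com/api/submit"

def build_request(team_token: str, spreadsheet_bytes: bytes) -> urllib.request.Request:
    # The team token authenticates the submission; the spreadsheet is
    # base64-encoded so it can travel inside a JSON body. Field names
    # ("team_token", "spreadsheet") are assumptions.
    payload = json.dumps({
        "team_token": team_token,
        "spreadsheet": base64.b64encode(spreadsheet_bytes).decode("ascii"),
    }).encode("utf-8")
    return urllib.request.Request(
        SUBMIT_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Once the endpoint and payload match the official examples, send the request with `urllib.request.urlopen(...)` and parse the returned scores.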
Sign Up Your Team Now