|Questions To Identify Non-Programmers|
|Written by Alex Armstrong|
|Friday, 19 March 2021|
How can you tell whether somebody who claims to be a programmer really is one, or is in fact an imposter who has watched a few YouTube videos and picked up some of the jargon without really understanding the concepts?
Anyone who has applied for a professional programming job knows that the recruitment process goes through multiple stages and can last for weeks or months. But what if you are looking for a programmer in a less formal, or less well-funded, context? Do you simply have to trust that a person can code just because they say they can?
One context that might entice non-programmers to claim to be programmers is taking part in survey or research studies where participants are paid for their input. The researchers at the University of Bonn who authored the paper "Do you really code? Designing and Evaluating Screening Questions for Online Surveys with Programmers" were prompted to look into how to distinguish bona fide programmers from fraudsters by their own previous experience of conducting research into password storage and security. For one such study the same team could only get 40 out of 1600 Computer Science students to take part, even though the compensation was 100 euros. Recruiting experienced professional programmers is even more difficult as they generally have far too many demands on their time to contemplate any such involvement.
To attract sufficient numbers of participants, researchers recruit from platforms such as GitHub, Stack Overflow and Meetup groups, and by posting on social media. As they state in the paper, the problem is that:
since researchers often offer significantly higher compensation than for end-user studies, there can be an incentive for participants to take part in a study despite having no programming skill.
They also rely, as this study did, on using the online Clickworker agency.
There are many other contexts where being able to quickly identify "real programmers" would be an advantage so the questions they came up with are of wider interest than just academic research.
The study set out to devise a set of questions that met the following requirements:
• Effectiveness: Able to differentiate between programmers and non-programmers. The questions need to rely on domain knowledge and be complex enough that only programmers can answer them in a reasonable amount of time, leaving no scope for mere guesses.
• Efficiency: Consume as little time as possible. The goal is to frame questions that programmers can answer quickly and that also allow participants without programming skill to decide quickly that they cannot answer them.
• Robustness against cheating: The instrument should be designed so that it is difficult for participants without programming skill to come by the answers, for instance by using online search engines or forums.
• Language independence: Should work regardless of the programming language the participants are skilled in.
Sixteen questions were initially proposed and given to 50 people known to be programmers - either professionals or students - and to 100 non-programmers, of whom 35 claimed some programming skill. The test started by asking about familiarity with well-known programming languages and then followed up with:
Q1: Which of these lesser-known programming languages have you worked with before?
The options were all fictitious and all but one of the programmers gave the right answer, "None of the above". However, the question wasn't a good discriminator as 91% of the non-programming group gave the same answer.
The next question, by contrast, did sort programmers from non-programmers. It asked:
Q2 Which of these websites do you most frequently use as an aid when programming?
followed by a list including Stack Overflow, Wikipedia, LinkedIn and Memory Alpha, which is a source for Star Trek. All the programmers selected Stack Overflow and no other option was selected. Among the non-programming group 60% admitted they did not program, 15% said they didn't use any of the listed sites, 9% chose Memory Alpha, 8% Wikipedia, 2% LinkedIn and only 6% picked Stack Overflow.
The next five questions tested basic knowledge with multiple choice answers and the ones that worked best at distinguishing between programmers and non-programmers were:
Q3 Choose the answer that best fits the description of a compiler’s function.
Q4 Choose the answer that best fits the definition of a recursive function.
Q6 Which of these values would be the most fitting for a Boolean?
All the programmers answered these correctly compared to between a quarter and a third of non-programmers.
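The paper's multiple-choice options aren't reproduced in the article, but as an illustration of the concepts Q4 and Q6 test (the function name and language here are my own choice, not the study's), a recursive function is simply one that calls itself, and a Boolean holds one of exactly two values:

```python
def factorial(n):
    """Recursive function: it calls itself on a smaller input."""
    if n <= 1:                       # base case stops the recursion
        return 1
    return n * factorial(n - 1)      # recursive call on n - 1

# A Boolean, as in Q6, is either True or False:
flag = (factorial(4) == 24)
```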
Two multiple-choice questions were less effective, as a higher proportion of non-programmers answered correctly and some programmers chose incorrectly:
Q5 Choose the answer that best fits the description of an algorithm
Q7 Please pick all powers of 2.
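The candidate values for Q7 aren't listed in the article, but the underlying check is something programmers tend to know as a one-line bit trick; a sketch in Python (the candidate list is my own, not from the study):

```python
def is_power_of_two(n):
    # A power of two has exactly one bit set, so n & (n - 1) clears it to 0.
    return n > 0 and (n & (n - 1)) == 0

candidates = [1, 2, 6, 8, 10, 16, 32, 40]
powers = [n for n in candidates if is_power_of_two(n)]  # [1, 2, 8, 16, 32]
```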
These questions were discarded on the grounds they did not achieve a correct answer rate of at least 98% from the programmer group or that more than 40% of the non-programmer group gave the correct answer.
The question that had the worst performance was:
Q10 Please select all valid hexadecimal numbers.
Only 70% of programmers and 6% of non-programmers answered correctly.
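The actual answer options for Q10 aren't given in the article, but the rule being tested is simple: a valid hexadecimal number uses only the digits 0-9 and a-f. A hypothetical sketch (the candidate strings are my own invention):

```python
def is_valid_hex(s):
    # int(s, 16) accepts only hexadecimal digits 0-9 and a-f/A-F.
    try:
        int(s, 16)
        return True
    except ValueError:
        return False

candidates = ["1A3F", "2B", "GHIJ", "c0ffee", "123"]
valid = [s for s in candidates if is_valid_hex(s)]  # all but "GHIJ"
```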
Two code-based questions foxed 20% and 24% of programmers respectively:
Q12 What is the run time of the following code?
Q13 When running the code, you get an error message for line 6: Array index out of range. What would you change to fix the problem?
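The code snippets used in Q12 and Q13 are not reproduced in the article, but the error Q13 describes is the classic off-by-one in a loop bound; a hypothetical reconstruction in Python:

```python
values = [3, 1, 4, 1, 5]

# Buggy version: range(len(values) + 1) runs one step too far, so the
# final access, values[5], raises IndexError ("array index out of range"):
#   for i in range(len(values) + 1):
#       total += values[i]

# Fix: stop the loop at the last valid index, len(values) - 1.
total = 0
for i in range(len(values)):
    total += values[i]

# The loop visits each element exactly once, so it runs in O(n) time -
# the style of answer a run-time question like Q12 is after.
```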
Two questions in the set were based on pseudocode for sorting an array and both were effective in distinguishing between programmers and non-programmers:
Q14 What is the purpose of the algorithm?
Q15 What is the parameter of the function?
However, Question 16, relating to a backward loop in "hello world" pseudocode, was discarded because only 94% of programmers answered it correctly.
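The pseudocode behind Q14 and Q15 is not reprinted in the article, but a typical example would be a simple sort; in a sketch like the following (my own, not the study's), Q14's answer is that the algorithm sorts the array and Q15's is its single parameter, arr:

```python
def sort_array(arr):
    """Bubble sort: repeatedly swap adjacent out-of-order neighbours
    until the array is in ascending order."""
    n = len(arr)
    for i in range(n):
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]  # swap neighbours
    return arr
```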
One conclusion of the study was that:
Designing programming screener questions is not trivial and we would not recommend using questions without testing them.
while perhaps the more important outcome was that:
for the kind of developer studies that are common in our community, it is not recommendable to rely on the self-reported programming skill or a platform’s recruitment features. In our test set, 42% of the Clickworker programmers got fewer correct answers than the poorest performers in our ground truth programmers group.
Do you really code? Designing and Evaluating Screening Questions for Online Surveys with Programmers by Anastasia Danilova, Alena Naiakshina, Stefan Horstmann, Matthew Smith
|Last Updated ( Friday, 19 March 2021 )|