One of the best things about AutoGradr is that it shortens the feedback loop between instructors and students. Students get feedback from AutoGradr in just a few seconds. If they pass the test cases, they can be confident that they at least did what the question asked them to do. On the other hand, if they fail the test cases, AutoGradr tells them exactly where they went wrong. They can then debug their program and make multiple attempts to fix the issue and pass the test cases.
When students fail test cases, AutoGradr tells them exactly what went wrong by showing them a 'diff' between their program's output and the expected output.
In the example above, the test case failed because 'Stdout did not match expected output'. Simply put, this means the program's output was not the same as the output specified in the test case. The feedback also includes a diff.
The diff highlights in red and crosses out everything the program was not supposed to output. It also adds, highlighted in green, the text the program was supposed to output but did not.
The program above was supposed to prompt the user by printing "Say something:". Instead, it printed "Tell me something:". As a result, the diff crossed out 'Tell me' and put 'Say' in green. Now, the student can fix their program's prompt and make a new attempt to pass the test case.
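AutoGradr's internal diff engine isn't shown here, but the same kind of output comparison can be sketched with Python's standard `difflib` module. The strings below are hypothetical, reconstructed from the example: lines prefixed with `-` correspond to the red, crossed-out text, and lines prefixed with `+` correspond to the green, expected text.

```python
import difflib

# Hypothetical outputs from the example above
actual = "Tell me something:\nYou said: hi\n"
expected = "Say something:\nYou said: hi\n"

# unified_diff marks the program's wrong lines with '-' and the
# expected lines with '+'; unchanged lines get a leading space.
diff = difflib.unified_diff(
    actual.splitlines(keepends=True),
    expected.splitlines(keepends=True),
    fromfile="your output",
    tofile="expected output",
)
print("".join(diff))
```

Running this prints a `-Tell me something:` line and a `+Say something:` line, mirroring the red/green feedback the student sees.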
Yes. The program's output must exactly match the output in the test case. Fixing formatting errors imposes very little overhead on students if their program otherwise behaves correctly. On the other hand, the benefits are huge. As instructors, you'll get uniform results from all your students. Their way of achieving the output may differ, but the output itself must be the same. This makes grading far easier. There is also an important lesson for students: computers care about the small details. When students go into industry and build systems that must follow certain protocols, this lesson will serve them well.
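Exact matching boils down to a strict string comparison, so even differences a human would read past will fail a test case. A minimal sketch, using hypothetical outputs:

```python
expected = "Hello, world!\n"

# Hypothetical program outputs; only a character-for-character match passes.
attempts = [
    "Hello, world!\n",   # identical: passes
    "Hello, World!\n",   # capitalization differs: fails
    "Hello, world! \n",  # trailing space: fails
    "Hello, world!",     # missing final newline: fails
]
results = ["pass" if out == expected else "fail" for out in attempts]
print(results)
```

The first attempt passes and the other three fail, which is why students are encouraged to copy prompt text exactly as it appears in the question.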