Week 3 Retrospective
I continue to not know where all my time is going!
What went well:
An appropriately difficult and lengthy quiz! Well, actually, I'm not sure whether the fact that most students scored very well means they had enough time for a quiz of the right difficulty, or too much time for a quiz that was too easy. The good part is they all scored significantly better than last week, even though the material is supposedly more difficult.
I started leaving code comments on their projects and assignments, which I think is an industry practice I can bring to the students. It's a lot of work, though, and I might scale it back later, especially once the code becomes much more complex. I find that because I leave comments and try to give at least a hint for problems that didn't pass all the tests, I end up essentially debugging their code, which I don't really want to do. That's what the tests are for! The tests check whether the code is correct, isolate where it's incorrect, and catch mistakes in the code that I might not catch otherwise.
I trimmed down my testing code quite a bit, and it's great! I realized I could write I/O tests and still mock out the functions I need. I also shared these scripts with the wider GIR instructors team, and I do hope other people use them too! I feel quite accomplished with my scripts for testing students' unit tests (although we haven't gotten there yet), as well as my scripts that mock out functions and store print output in a variable to check for correctness.
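For anyone curious what I mean by "mock out functions and store print statements in a variable," the core trick can be sketched in a few lines with the standard library. The function names here are made-up stand-ins, not my actual scripts:

```python
import io
from contextlib import redirect_stdout
from unittest.mock import patch

# Hypothetical stand-in for a student's assignment function.
def greet(name):
    print(f"Hello, {name}!")

def capture_output(func, *args):
    """Run func and return everything it printed as one string."""
    buffer = io.StringIO()
    with redirect_stdout(buffer):
        func(*args)
    return buffer.getvalue()

# The print output lands in a variable we can check for correctness.
assert capture_output(greet, "Ada") == "Hello, Ada!\n"

# Another hypothetical student function, this one reading from input().
def ask_name():
    return "Hi, " + input("Name? ")

# Mocking the built-in input() lets an I/O test run without a live user.
with patch("builtins.input", return_value="Ada"):
    assert ask_name() == "Hi, Ada"
```

The nice part is that both pieces compose: you can patch `input` and capture `print` in the same test, so a fully interactive program becomes an ordinary unit test.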
Earlier in the week, when students told me their tests weren't passing for the project, my first reaction was that my tests might be wrong. But after a few of these reports, it turned out my tests were never wrong; it was always the students' code. That gave me more confidence in my testing scripts, and I learned to actually trust them, because I've validated them against my own solution code as well as on Mimir.
What can be improved:
Explaining while loops is hard, and I suspect some students did not understand how to construct one. I'm planning on rectifying that tomorrow.
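For the record, the skeleton I want to walk them through again is the standard three-part one: initialize, test, update. The example values here are mine, not from any particular lab:

```python
# A while loop needs three pieces: initialize, test, update.
total = 0           # 1. initialize the loop variable(s) before the loop
n = 1
while n <= 5:       # 2. the condition is checked BEFORE every iteration
    total += n
    n += 1          # 3. update the variable, or the loop never ends
# total is now 1 + 2 + 3 + 4 + 5 = 15
```

My hunch is that the piece students miss most often is the update step, since forgetting it produces an infinite loop rather than an error message.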
I might need to rethink how labs are run. Right now, I'm reviewing and finishing the explanation of material from lecture, giving 20ish minutes to do a coding problem, then going over answers. I don't know if 20 minutes is enough for a problem, and it seems like maybe not, depending on what the problem is. I go over the answers in class so I don't have to grade them (and write unit tests for them).
I continue to miss little details here and there when explaining and introducing concepts, but I still think that will come with time and repetition of teaching them. I'm bound to forget things; the important part is to realize it and explain the concept again with those details included.
Thinking of different ways to explain a concept more quickly is a skill I really want to pick up. On Friday, I spent two or three hours leading a student to the conclusion that a condition that always evaluates to True, inside an expression with a logical operator, does not need to be there. We went in circles quite a few times until I realized that explaining that particular concept by saying "this has no impact on the code, so we don't need it" makes a lot of sense. It took me too long to get there, but I did try other ways to explain the same idea.
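The "no impact" explanation boils down to a two-line demonstration. The variables here are invented for illustration, not from the student's actual code:

```python
temperature = 72

# `X and True` always has the same value as `X`, so the always-True
# condition has no impact on the result and can simply be dropped.
verbose_check = temperature > 70 and True
simple_check = temperature > 70
# verbose_check and simple_check are always equal

# The same simplification applies inside a loop condition:
count = 0
while count < 3 and True:   # behaves exactly like `while count < 3:`
    count += 1
# count is now 3
```

Showing the two versions side by side and letting the student confirm they always agree might have gotten us there faster than talking through the logic abstractly.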
One of my students starts projects really early and he catches all the mistakes or questions in my project and homework code. I’m pretty grateful for him because he’s not egotistical about it at all and he’s quite respectful. I’m also really impressed at how fast he can finish homework / projects because he’s always the first one done. Pretty cool! Anyway, I need to release more unit tests to my students and make sure that the examples I release are a representative sample of the unit tests I have behind the scenes.
For quizzes, I'm not sure whether I want to let students have tests to run. I don't really want to take points off for code that doesn't run, because, well, if the exams were on paper, that wouldn't be a requirement. But a student told me during a quiz that he couldn't run the code, so I released one test for it. I still don't know whether to do the same for the next quiz, because on an exam I don't really care whether the code runs. It just makes my life easier when I can rely on tests passing or failing to determine correctness, but it's a bit unrealistic to expect running code during a timed test, especially for the more complex code coming later. Also, for this quiz I wrote both a rubric and tests, and I realized that if a student makes one mistake, it gets counted twice: once by the tests and once in the rubric. That's probably not good.
All in all, there’s a lot that’s going well but also a lot to think about and probably improve on, as usual!