I have finally submitted my Evaluation report for marking. I will post a link to it once it is marked. It ended up being longer than I planned, and, like Herve, I found it difficult to reduce. Certainly a relief to get it done. Bronwyn's comments to me were to remember to keep asking ourselves whether we have made the connections between the findings, the evaluation questions, the decisions, and the purpose of the evaluation. These comments really helped in the end.
She also advised focusing on explaining fully whether the evaluation questions have been answered or not, and why. Sometimes I got so caught up in the results that it was good to take a step back and remind myself what it was we originally set out to achieve. I think part of the reason for that was that the responses to some questions would often answer other sub-questions. That is not a bad thing, actually; it just means we have been thorough in our methods, I guess.
After completing the evaluation, I was able to make some changes to the programme immediately, since, in 'real time', the programme development was still at the pilot stage. Those changes have now mostly been made, and it is basically ready for students to enrol in. This is possibly why it took me so long to complete the report. However, I thought that once I had some "real students" participating, I would also ask them to complete the User Review questionnaire, and I think it will be interesting to see whether the responses differ at all (with a supposedly more refined version of the programme now available).
One of the most interesting aspects I found was that the responses under 'general comments' often made the strongest impression. I expanded this area of the evaluation after feedback from either Joy or Adrienne, which was great. I was also fortunate to be able to talk to each of the users, either as they worked through the questionnaires or soon afterwards. Some of them made extra comments as well, outside the scope of the evaluation. This was a great opportunity. I initially thought that my limited sample size for the User Review (n=5) was going to be a disadvantage, but combined with the chance to talk to the users and take away their comments, I believe the small number was not really much of a limitation. Of course, it made statistical analysis of the data impossible, meaning that the results were perhaps better presented as 'trends'. However, this was noted in the discussion, along with any alternative interpretations, and since this was a formative evaluation, that approach was acceptable. It also meant that I did not use many graphs in the final report. I did create some plots of means, but in the end did not include them (since n=5). I hope that was the right choice!!!
Wednesday, July 15, 2009