Saturday, April 18, 2009

Weeks 5 and 6: Evaluation Methods

Progress on the new programme (my evaluation project) is a bit slower than I thought it would be, but given that I am not working on the project full time, we are not too far behind. After having had the last week off work it seems achievable now, while I am still at home, but I might feel differently when I go back to work next week. The study/activity guide has been written and just needs formatting onto Moodle, and my colleague and I have thought of some areas where we could introduce discussion forums, hopefully (as Bronwyn suggested) encouraging the students to engage with each other and therefore improve interaction. The study guide will be in the form of an 'activity guide' ("what to do and when to do it") as they work through the programme. Topics for the forums include theory topics such as thinking about their own working and learning styles, adding that to a thread, and sharing with other students why these styles matter for job seeking; and communication issues when under stress, the difficulties these may cause in the workplace, and sharing their own experiences. These are a couple of examples of discussion topics. We are also in the process of developing some quizzes so that we can monitor (or attempt to monitor) understanding of specific topics.
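As a rough sketch of how we might keep an eye on the quiz data once the programme is running, the little Python script below simply averages the score for each question from a results export, so the topics students are struggling with stand out. The file name and column names ("student", "question", "score", "max_score") are purely my assumptions for illustration; Moodle's real export will need its own handling.

# Sketch only: summarise a hypothetical export of quiz results (CSV) to see
# which questions, and therefore which topics, students are struggling with.
# Assumed columns: "student", "question", "score", "max_score".
import csv
from collections import defaultdict

def question_averages(csv_path):
    totals = defaultdict(float)   # sum of fractional scores per question
    counts = defaultdict(int)     # number of attempts per question
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            fraction = float(row["score"]) / float(row["max_score"])
            totals[row["question"]] += fraction
            counts[row["question"]] += 1
    return {q: totals[q] / counts[q] for q in totals}

if __name__ == "__main__":
    for question, average in sorted(question_averages("quiz_results.csv").items()):
        print(f"{question}: {average:.0%} average")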

In terms of the evaluation model/paradigms and methods that I am pursuing:
It's become clear that the type of model best suited is the mixed-multiple-methods model, simply because of the types of data collection available. The programme will be up and running very soon, albeit in pilot form (it will still be possible to make running additions to the learning activities as needed, since they will be hosted on a UCOL Moodle site). I may not have made it clear in the previous posting, but one of the final assessment tasks on the programme is to complete a face-to-face interview with the student, based on the CV they will have completed, as well as on the details of a research project (the job-seek process) that they have to complete. So there is an opportunity to evaluate with 'interview' methods as well as questionnaire methods, since we will be in e-mail contact with the students as well as in contact through the Moodle LMS.

We have discussed paradigms at length now and it's been an ongoing learning journey for me to see and read so much about 'them'. Referring to Phillips' discussion of popular research paradigms, there are so many different 'worldviews of inquiry' that, rather than finding a traditional paradigm that fits with e-learning, it was easier to settle on another multiple approach: the fourth paradigm (as described by Reeves et al.), the critical-realist-pragmatic one, which fits in well with the multiple methods of data collection. Hence the (borrowed) term 'eclectic' as used by Reeves et al., with 'pragmatic' meaning a practical, problem-solving method. I believe this fourth model/paradigm suits my project. It may be quite difficult to measure all the factors that contribute to improvements in 'learning' with the other, more scientific methods, since I am looking at interaction as a quality issue, as well as at the relevance of the added course content. Feedback should be able to be incorporated quite easily since UCOL has developed the extra learning material and the associated Moodle site (with activities etc.).

Why this model?
I see my project as a formative type of evaluation, and some of my ideas from the eLearning guidelines also fit under effectiveness (summative) evaluation. While the programme is essentially already developed, the 'online delivery' is being 'fine tuned', and the new content added by UCOL is, at present, untested. So in terms of the e-learning guidelines already identified, I would be asking something along the lines of 'are the activities effective and matching the L.O.?' Key to this is evaluating 'usability' (tested by users) and 'effectiveness' (is it working?). So it is a bit like a 'user review'. While not a formal pilot launch of the programme, the initial release will be to existing students with whom we already have a relationship (well, we are planning for this approach at least).

Reading Chapter 8 of Reeves and Hedberg helped confirm why I have chosen an effectiveness evaluation, rather than an impact evaluation. The ultimate measure of 'success' would be to ask "did the student get a job after completing the programme and creating a CV?" But this is looking too far into the future to evaluate the programme, with too many variables out of our control; perhaps a more appropriate short-term effectiveness question would be "did the Virtual Building Tour enable a more descriptive (and therefore effective) CV to be completed?" This sort of data would be easy to gather at the interview, I imagine.

Articles that I have been reading:
Mehlenbacher, B., Bennett, L., Bird, T., Ivey, M., Lucas, J., Morton, J., & Whitman, L. (2005). Usable e-learning: A conceptual model for evaluation and design. International Conference on Human-Computer Interaction, Vol. 4: Theories, Models and Processes in HCI. Retrieved 29 March 2009 from http://www.pedagogy.ir/images/pdf/usable-e-learning05.pdf

This article arose from a search relating to "evaluation and usability". It was of interest to me because of the authors' oversimplified statement that 'usability is the study of the intersection between tools, tasks, users and expectations in the context of use'. Since my evaluation project is on effectiveness based on the quality of interaction in an e-learning environment, I persevered with reading it.
It's a conference paper (from the International Conference on Human-Computer Interaction). The aim was to outline the challenges faced by researchers merging the (numerous) theories of usability and evaluation with e-learning developments. The approach considered necessary for evaluating e-learning environments is more task-orientated than other types of usability research, so a different model is suggested. I have interpreted the model to be more in line with the fourth paradigm described above, because of its task-orientated perspective on e-learning instruction. The group suggests that 'usability evaluation' is founded on theories stemming from early Human-Computer Interaction work: the social science research side versus the design of tools for human interaction. "Understanding audience is where usability begins". The paper describes 10 heuristics from Jakob Nielsen (1994) (http://www.useit.com/papers/heuristic/heuristic_list.html) for usable design (the ones that stand out for me: match between system and real world; and recognition rather than recall). The secret, it seems, is to match the design heuristics with the development of the evaluation and testing theories of e-learning usability. So, with the group's emphasis on users and tasks, the model they propose is outlined by a set of usability heuristic tools for designers evaluating e-learning environments; the questions to ask are defined under the following headings (a rough sketch of how these could be used as a simple checklist follows the list):
Learner Background and Knowledge (e.g. accessibility, support and feedback)
Social Dynamics (e.g. communication protocols)
Instructional Content (e.g. examples and case studies)
Interaction Display (e.g. appeal, consistency and layout)
Instructor Activities (e.g. authority and authenticity)
Environment and Tools (e.g. organisation and information relevance)
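Purely as my own illustration (not anything proposed in the paper), here is a small Python sketch of how these headings, with a few example questions, could be held as a review checklist: a reviewer rates each question from 1 to 5 and the script averages per heading. The questions are paraphrased from the examples above and the ratings are made up.

# Sketch: the headings above as a simple review checklist. Questions are
# paraphrased examples; ratings (1 = poor, 5 = good) are hypothetical.
checklist = {
    "Learner Background and Knowledge": [
        "Is the site accessible to our students?",
        "Are support and feedback easy to find?",
    ],
    "Social Dynamics": [
        "Are communication protocols (forums, e-mail) clear?",
    ],
    "Instructional Content": [
        "Are the examples and case studies relevant to job seeking?",
    ],
    "Interaction Display": [
        "Is the layout consistent and appealing?",
    ],
    "Instructor Activities": [
        "Does the material feel authoritative and authentic?",
    ],
    "Environment and Tools": [
        "Is the information organised and relevant?",
    ],
}

def heading_averages(ratings):
    """Average the 1-5 rating given to each question, per heading."""
    return {
        heading: sum(ratings[q] for q in questions) / len(questions)
        for heading, questions in checklist.items()
    }

if __name__ == "__main__":
    made_up = {q: 4 for qs in checklist.values() for q in qs}  # placeholder scores
    for heading, average in heading_averages(made_up).items():
        print(f"{heading}: {average:.1f}/5")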

These headings (tools) have been drawn from the five dimensions of all instructional situations, modified for e-learning. The paper covers these dimensions in great detail, so I won't repeat them. I guess they are saying that further development is needed, and that their ideas step away from those of Reeves et al. (who directly apply these heuristics to e-learning evaluation) towards a more 'synergistic collaboration between usability and e-learning research'. Hence the task- and interaction-orientated emphasis of the questioning.

Another paper I was reading:
I have not fully summarized it yet, sorry.
Developing a Usability Evaluation Method for E-learning Applications: From Functional Usability to Motivation to Learn (http://www.dmst.aueb.gr/en2/diafora2/Phd_thesis/Zaharias.pdf)

The paper describes the development of a questionnaire-based usability evaluation method (formative) for e-learning applications. As with the previous paper, the focus of the study is the 'poor usability of e-learning applications'. The development of the method was based upon a very well-known methodology in HCI research and practice.
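Since the method is questionnaire based, here is a very small Python sketch of how I might tally that kind of data once I collect it; the dimensions, items and 1-5 Likert responses are entirely made up for illustration and are not Zaharias' actual instrument.

# Sketch: tallying hypothetical 1-5 Likert responses to a usability
# questionnaire, grouped into dimensions. All names and data are invented.
from statistics import mean

dimensions = {
    "navigation": ["nav1", "nav2"],
    "content": ["con1", "con2"],
    "motivation to learn": ["mot1", "mot2"],
}

responses = [  # made-up answers from three imaginary students
    {"nav1": 4, "nav2": 3, "con1": 5, "con2": 4, "mot1": 3, "mot2": 4},
    {"nav1": 5, "nav2": 4, "con1": 4, "con2": 4, "mot1": 4, "mot2": 5},
    {"nav1": 3, "nav2": 3, "con1": 4, "con2": 3, "mot1": 2, "mot2": 3},
]

for dimension, items in dimensions.items():
    scores = [response[item] for response in responses for item in items]
    print(f"{dimension}: mean {mean(scores):.2f} over {len(scores)} ratings")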
More to follow.

References:
Phillips, R. (2004). We can't evaluate e-learning if we don't know what we mean by evaluating e-learning. Retrieved 29 March 2009 from http://www.tlc.murdoch.edu.au/staff/phillips/PhillipsV2Final.doc

Reeves & Hedberg, Chapter 8 (already referenced).

1 comment:

Bronwyn Hegarty said...

You are going well Debra, so please do not be worried about slowness. I can see some excellent ideas emerging. You have also given the others some great material to digest.

Some suggestions about the evaluation method you are proposing and your eLearning guidelines for your plan, as emailed as well.

Re the formative and summative approaches you have described on your blog.

First of all you need to be clear about your eLearning guidelines, which will form the big-picture questions, that is, the over-arching things you want to find out. The three I can see on your blog, in your week 3 post, are below. As you say further on in the weeks 5 & 6 post (quoted below), you have a dilemma regarding formative and effectiveness (summative) evaluation.

Let's be clear on this - you are conducting a formative evaluation if the online materials are still in the development phase and the evaluation is being used to improve the development. With regard to this, I believe you can get peers to look at "are the activities relevant for the L.O." (more explanation below). I believe you can check the usability of the materials with the student group you mention - access, navigation etc. - plus you can look at the effectiveness of the design; this includes asking about the content and activities, learning outcomes and assessment. This is all formative.

However, if you look at "are the activities effective and matching the L.O.", this is summative and will require you to look at student success, e.g. pass rates, and to ask students how their learning was influenced etc. I do not believe you are ready for this yet.

Part of the weeks 5 & 6 post - "I see my project as a formative type of evaluation, and some of my ideas from the eLearning guidelines also fit under effectiveness (summative) evaluation. While the programme is essentially already developed, the 'online delivery' is being 'fine tuned', and the new content added by UCOL is, at present, untested. So in terms of the e-learning guidelines already identified, I would be asking something along the lines of 'are the activities effective and matching the L.O.?' Key to this is evaluating 'usability' (tested by users) and 'effectiveness' (is it working?). So it is a bit like a 'user review'. While not a formal pilot launch of the programme, the initial release will be to existing students with whom we already have a relationship (well, we are planning for this approach at least)."

Guidelines from your post in week 3
SD 3: Do students gain knowledge relevant to employment and/or current thinking in their field? This could be re-worded to exclude "knowledge relevant to employment", with something like the following (a bit clumsy, but you can decide):
- Are students provided with relevant information and current thinking in their field?

SD 5: "Do students aquire the learning skills for successfully completing the course?" This is useful but replacing aquire with have would be better.

ST 9: Do the technologies employed help students successfully meet the learning outcomes? I would not use this one, as it is for a summative evaluation. Perhaps something like: Do the technologies employed successfully help students to participate and learn? The word 'successfully' is broad, but could also be a bit of a misnomer as it depends on the definition you are using.