Sunday, April 26, 2009

Weeks 7-8: Draft ideas for Sampling Methods/Tools

As outlined in the last post, it looks like I should be concentrating on a 'formative evaluation', given the timing and the scale of the project (both for this assessment and for the pending release of the programme to students). These comments are therefore based on Bronwyn's feedback (thanks, Bronwyn). A slight rewording of my original intention, from evaluating this summative point: "Are the activities effective and matching the L.O.?" to this formative point: "Are the activities relevant for the L.O.?"

E-Learning Guidelines to evaluate against:

Given that we need to evaluate against two e-learning guidelines (and I have previously been discussing three), my draft plan will now include the following:

ST9 - reworded for a user review - to investigate usability (access, navigation, ease of use) and effectiveness (content, activities, L.O. and assessment):
"Do the technologies employed successfully help the students participate and learn?"

e.g. Does the use of interactive tools and forums help the student complete (engage with) the course successfully?


SD3 - reworded for a peer review - to investigate whether the activities are relevant for the L.O.:
"Are students provided with relevant information and current thinking in their field?"
e.g. Can the students 'successfully' engage with the activities provided with no f2f contact?


I see that the next step is to work on further sub-questions; however, I will make contact with Bronwyn first.


Paradigm/Model:
Definitely the multiple-methods model / eclectic-mixed-methods-pragmatic paradigm for a formative evaluation of the interactive web-based programme, as outlined in the previous blog postings.


Sampling Methods: I have not completely thought through the specifics of the triangulation and bracketing yet.

But I would like to discuss further and negotiate the following:

Peer review - discussion forum - qualitative - small groups
Questionnaires - for the students while using the course - limitations on numbers - qualitative
Checklists - ditto, for the students
Some paper-based, some email/online questionnaire/evaluation in Moodle.

Saturday, April 18, 2009

Weeks 5 and 6: Evaluation Methods

Progress on the new programme (my evaluation project) is a bit slower than I thought it would be, but given that I am not working on the project full time, we are not too far behind. After having had the last week off work, it seems achievable now while I am still at home, but I might feel differently when I go back to work next week. The study/activity guide has been written; it just needs formatting in Moodle, and my colleague and I have thought of some areas where we could introduce discussion forums, hopefully, as Bronwyn suggested, encouraging the students to engage with each other (and therefore improving interaction). The study guide will be in the form of an 'activity guide' ("what to do and when to do it") as they work through the programme. Topics for the forums include theory topics such as thinking about their own working and learning styles, adding that to a thread, and sharing with other students the importance of these styles for job-seeking purposes; and communication issues to deal with when under stress, the difficulties these may cause in the workplace, and sharing their own experiences. These are a couple of examples of discussion topics. We are also in the process of developing some quizzes, so that we can monitor (or attempt to monitor) understanding of specific topics.

In terms of the evaluation model/paradigms and methods that I am pursuing:
It's become clear that the type of model best suited is the mixed/multiple-methods model, simply because of the types of data collection available. The programme will be up and running very soon, albeit in a pilot form (it will still be possible to make running additions to the learning activities as needed, since they will be hosted on a UCOL Moodle site). I may not have made it clear in the previous posting, but one of the final assessment tasks on the programme is to complete a face-to-face interview with the student, based on the CV they will have completed, as well as on the details of a research project (job-seek process) that they have to complete. So there is an opportunity to evaluate with 'interview' methods as well as questionnaire methods, as we will be in email contact with the students as well as in contact through the Moodle LMS.

We have discussed paradigms at length now, and it's been an ongoing learning journey for me to see and read so much about 'them'. Referring to Phillips' discussion of popular research paradigms, there are so many different 'world views of inquiry' that, rather than finding a traditional paradigm that fits with e-learning, it was easier to adopt another multiple approach: the fourth paradigm (after Reeves et al.), critical-realist-pragmatic, which fits in well with the multiple methods of data collection. Hence the term 'eclectic' (borrowed) as used by Reeves et al., with 'pragmatic' meaning a practical, problem-solving method. I believe this fourth model/paradigm suits my project. It may be quite difficult to measure all the factors that contribute to improvements in 'learning' with the other, more scientific methods, since I am looking at interaction as a quality issue, as well as the relevance of added course content. Feedback should be able to be incorporated quite easily, since UCOL has developed the extra learning material and the associated Moodle site (with activities etc.).

Why this model?
I see my project as a formative type of evaluation, and some of my ideas from the e-learning guidelines also fit under effectiveness (summative) evaluation. While the programme is essentially already developed, the 'online delivery' is being 'fine tuned', and the new content added by UCOL is, at present, untested. So in terms of the e-learning guidelines already identified, I would be asking something along the lines of 'are the activities effective and matching the L.O.?' Key to this is evaluating 'usability' (tested by users) and 'effectiveness' (is it working?). So it is a bit like a 'user review'. While not a formal pilot launch of the programme, the initial release will be to existing students with whom we already have a relationship (well, we are planning for this approach at least).

Reading Chapter 8 of Reeves and Hedberg helped confirm why I have chosen an effectiveness evaluation rather than an impact evaluation. The ultimate measure of 'success' would be to ask "did the student get a job after completing the programme and creating a CV?" But this is looking too far into the future to evaluate the programme, with too many variables out of our control; perhaps a more appropriate short-term effectiveness question would be "Did the Virtual Building Tour enable a more descriptive (and therefore effective) CV to be completed?" This sort of data would be easy to gather at the interview, I imagine.

Articles that I have been reading:
Mehlenbacher, B., Bennett, L., Bird, T., Ivey, M., Lucas, J., Morton, J. and Whitman, L. (2005). Usable E-Learning: A Conceptual Model for Evaluation and Design. Interaction, Vol 4: Theories, Models and Processes in HCI. Retrieved 29 March 2009 from http://www.pedagogy.ir/images/pdf/usable-e-learning05.pdf

This article arose from a search relating to "evaluation and usability". It was of interest to me because of the authors' (over)simplified statement that 'usability is the study of the intersection between tools, tasks, users and expectations in the context of use'. Since my evaluation project is on effectiveness based on the quality of interaction in an e-learning environment, I persevered with reading it.
It's a conference paper (International Conference on Human-Computer Interaction). The aim was to outline the challenges faced by researchers merging the (numerous) theories of usability and evaluation with e-learning developments. The approach considered necessary for evaluating e-learning environments is more task-orientated than other types of usability research, so a different model is suggested. I have interpreted the model to be more in line with the fourth paradigm described above, because of the task-orientated perspective on e-learning instruction. The group suggests that 'usability evaluation' is founded on theories stemming from early Human-Computer Interaction theories: the social science research side versus the design of the tools for human interaction. "Understanding audience is where usability begins". The paper describes 10 heuristics from Jakob Nielsen (1994) (http://www.useit.com/papers/heuristic/heuristic_list.html) for usable design
(the ones that stand out for me: match between system and real world; and recognition rather than recall). The secret, it seems, is to match the design heuristics with the development of the evaluation/testing theories of e-learning usability. So, with the group's emphasis on users and tasks, the model they propose is outlined by a set of usability heuristic tools for designers evaluating e-learning environments; the questions to ask are defined under the following headings:
Learner Background and Knowledge (e.g. accessibility, support and feedback)
Social Dynamics (e.g. communication protocols)
Instructional Content (e.g. examples and case studies)
Interaction Display (e.g. appeal, consistency and layout)
Instructor Activities (e.g. authority and authenticity)
Environment and Tools (e.g. organisation and information relevance).

These headings (tools) have been drawn from the five dimensions of all instructional situations, modified for e-learning. The paper covers these dimensions in great detail, so I won't repeat them. I guess they are saying that further development is needed, and that their ideas step away from those of Reeves etc. (who directly apply these heuristics to e-learning evaluation) towards a more 'synergistic collaboration between usability and e-learning research'. Hence the task- and interaction-orientated emphasis in the questioning.

Another paper I was reading:
I have not fully summarized it yet, sorry.
Developing a Usability Evaluation Method for E-learning Applications: From Functional Usability to Motivation to Learn (http://www.dmst.aueb.gr/en2/diafora2/Phd_thesis/Zaharias.pdf)

The paper describes the development of a questionnaire-based (formative) usability evaluation method for e-learning applications. As with the previous paper, the focus of the study is the 'poor usability of e-learning applications'. The development of the method was based upon a well-known methodology in HCI research and practice.
More to follow.

References:
Phillips, R. (2004). We can't evaluate e-learning if we don't know what we mean by evaluating e-learning. Retrieved 29 March 2009 from http://www.tlc.murdoch.edu.au/staff/phillips/PhillipsV2Final.doc

Reeves - Chapter 8 (already referenced).