First, a question. With such a rigorous system in place, what do you do to keep training and learning FUN? I've always felt that training should be fun and tackled willingly if lasting learning is to occur. With so many requirements, does this diminish the intrinsic motivation and reward for students? Or do you ensure this motivation remains through the design of your courses and the way your trainers work with students?
Second, your concept of blended assessment is a big and important one. I understand your need for outside certification of competency, but that seems only a small piece of what you are after at ROMATSA. I really like that you have a goal to keep on-the-job training and assessment an integral part of your system. To me it seems that this is the only way to fully assess the impact of training and whether it actually transfers to the job. You seem to apologize for the fact that this assessment is somewhat subjective, but I think that makes it all the richer. Standardized and objective assessments can only measure the quantities and qualities they are designed to measure. But on-the-job subjective assessments, when done by someone skilled and open-minded, can uncover gaps (and skills!) that no one would have anticipated beforehand.
Third, you mention using COMET modules as tools in the assessment process. At COMET, we've traditionally hoped that our modules are seen as valuable components within an overall, more comprehensive training system. We've not traditionally seen them as assessment tools, and we don't invest the same effort in creating assessment instruments for them as we do in creating the education and training content. Have you considered using COMET modules as "elective" training tools that forecasters can use for their ongoing refresher (and extension) training? Have you considered developing your own assessment techniques that "localize" the content in the modules? For example, case studies or assessment questions that ask learners to apply the conceptual or procedural knowledge to local events using local tools?
Fourth, what other assessment techniques have you considered for the potential "toolbox" of blended assessment approaches? Have you considered customizing assessment for individual students' needs, or is the desire for standardization too strong?
Fifth, it seems to me that the goal of any assessment system is not just proof to outside or inside authorities that competencies are met and that training is working, but also to strengthen the learning process. As you pointed out in your lecture, assessment can be used to identify gaps that point to what additional training might be useful. But assessment also lets learners self-assess and test their knowledge and skill to focus their learning efforts, which is integral to the learning process--not outside it or at the end. Can these smaller mini-assessments that are part of the teaching process be made to count toward the total required assessment process, or is the idea that only the "final" assessment matters too ingrained? (The portfolio assessment approach, which you don't mention, is a step back from the bias toward "final" assessment.)
Ok, so ends my long-winded posting to the forum. I hope others join. Let's keep this one open for some time if you agree. With summer holidays, some people may not see your presentation or read the forums until later this summer. :-)
I'm sorry I wasn't able to attend Paul's Discussion Forum.
Pat's first question about keeping training fun (in the previous post) reminded me of the safety video of my national airline:
Regards,
Chris
Hi Pat, first of all, thank you for your kind words.
I will try to structure my reply the way you did… so:
First – you're asking about some of our deepest secrets! Well… we try all the time to find something new, to encourage trainees to find challenging questions or problems for themselves, and to find different approaches to the compulsory curricula. For observers, one might not find things as challenging, but we take a very similar approach as for the forecasters: we use their errors, and also their experiences. We also have some "dry" topics, such as coding or rules and regulations… for these, the challenge is to find the logic, the need, and the implications. Most of them are also based on sharing experiences. But without the imagination and dedication of the trainers designing the course, all of this might come to nothing.
Well, for the rest of the aspects you highlighted, the problems are quite sensitive… we have just started to use the Moodle platform for quizzes and short tests. Our big problem is that we need more authentic online assessment structures. We have to develop these, but it takes a lot of time. For this objective, I strongly believe that open-source templates (perhaps developed in cooperation between states) that can be tailored for national use might be the solution. That was the reason I suggested that COMET and EUMETCAL, as leading organizations, might develop, or help develop, some templates for international use.
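(As a concrete aside on what such shared templates could look like: Moodle can import question banks written in its plain-text GIFT format, so a question authored once could be exchanged between states and then translated or localized. Below is a minimal sketch; the question content is purely a hypothetical example of mine, not an actual item from our bank:

    // A shareable question in Moodle GIFT format.
    // Lines starting with // are comments; = marks the correct answer, ~ the distractors.
    ::METAR decoding::In a METAR report, what does "CAVOK" indicate? {
    =Visibility 10 km or more, no cloud below 5000 ft or MSA, no CB, and no significant weather
    ~Completely clear skies with unlimited visibility at all levels
    ~Ceiling and visibility were not observed
    }

Because the file is plain text, the question and answer strings can be translated for national use while the structure is reused unchanged.)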
In Romania, as in many other non-English-speaking countries, the use of those modules is somewhat limited, at least because of the language. But, as you once said, some of the COMET modules might be translated into other languages, so they are open in this respect.
As I've tried to explain in a bit more depth in the other topic, each AMP has a personal file which describes their identified strengths and weaknesses. The problem is to achieve some traceability of their evolution over time… the issue is extremely complex, and the aspects you've raised still have to be considered and solved.
For the very beginning, we're using in Moodle some random multiple-choice question tests and quizzes – just to get the AMPs used to the platform. We hope that in the next year we'll develop a test simulation that is as flexible as possible.
Regarding the tailored assessments… this is also delicate – we always prefer to check some basic knowledge and the way AMPs apply it, instead of assuming that someone who once proved they know something will know it forever. We're human – so we forget! So, we periodically check aspects that are connected to the basic competencies for the job.
We're trying to have our AMPs not only highly qualified, but also highly competent in their jobs. We can't afford to assume anything; we have to have them prove everything. The mini-assessments you've mentioned are, of course, part of the picture we have of their competencies, and they are extremely important for us in filling the gaps. The external assessment – the final examination, which is based on spot checks – can't take into account all the fine-tuning we have to do.
Thanks again for all the ideas you gave me/us!