
30/10/2017

CODUR workshop at EMEMITALIA 2017

Towards the recognition of the ‘e-learning’ dimension in the University ranking systems

The CODUR workshop took place at the EMEMITALIA 2017 Conference in Bolzano (Italy) on August 31st. As part of the activities of the CODUR project, the main purpose of the workshop was to share with a wider audience the intellectual outputs developed so far within the project. In particular, the workshop aimed to promote debate and exchange about the evaluation of the online dimension within university ranking systems.


The workshop was organized in two phases, for a total duration of approximately four hours. In the morning, a Round Table was held in which six invited experts gave their contributions on the topic, to the great interest of the participating audience. In the afternoon, a working session was organized as a continuation of the morning session, in which all the workshop participants (the experts and the morning audience) could actively contribute and provide their opinions and feedback.


The Round Table discussion centered on university rankings and the e-learning dimension. Prior to the event, the experts had been provided with the provisional list of the CODUR criteria and indicators and were asked to comment on them during their speeches.


The panel brought together six well-known policy makers in the Italian e-learning context:


  • Floriana Falcinelli, from the University of Perugia, is University Deputy for e-Learning.
  • Patrizia Ghislandi, from the University of Trento, is Director of the University Lab for Educational Innovation.
  • Pier Paolo Limone, from the University of Foggia, is University Deputy for e-Learning.
  • Tommaso Minerva, from the University of Modena and Reggio, is a member of the University Committee for e-learning.
  • Pier Giuseppe Rossi, from the University of Macerata, is University Deputy for e-Learning and Director of the University Centre for e-learning (CELFI).
  • Marina Rui, from the University of Genova, is University Deputy for e-Learning.


The Round Table was introduced by Francesca Pozzi (ITD-CNR) and chaired by Donatella Persico (ITD-CNR). The discussion turned out to be quite lively, demonstrating that the proposed topic is a hot one within the Italian university context. Through the discussion among the six experts participating in the Round Table, some interesting themes emerged and significant feedback on the CODUR proposal was collected.


During the second phase, all the workshop participants, including both the experts and the audience of the Round Table, were actively engaged in a collaborative, group-based decision-making session, through which it was possible to collect further input and feedback concerning the CODUR criteria and indicators.


Workshop Outcome


Overall, taking into account the feedback collected at the workshop, the present list of CODUR criteria and indicators can be considered a valuable piece of work and a good starting point for defining a set of criteria and indicators for online Higher Education institutions.


Evaluation, accreditation and ranking

Evaluation, accreditation and ranking are critical aspects within the Higher Education community and, even if much has been said about these topics so far, we are far from being able to say a final word on them. In the ever-changing and competitive world we live in, defining indicators that can adequately measure the quality of universities is still perceived as fundamental. However, even though evaluation, accreditation and ranking are in some sense sides of the same coin, it is important not to mix these terms up, as they point to different actions, each with different aims. In CODUR we have decided to focus on the ranking area.


Position with regard to university rankings

Another aspect that has clearly emerged from the work done so far is that existing ranking systems are controversial: while some recognize the need to compare and classify universities, others claim that ranking systems are not academic tools but rather marketing ones. Among the main weaknesses often mentioned, many say that existing ranking systems are not solid enough, especially with regard to the validity of their indicators, their methodological correctness, and the transparency of their sources of information and algorithms.


Levels of quality analysis to be addressed

When we use the term 'online dimension' within the Higher Education (HE) context, we may refer to many different situations, ranging from completely online institutions (such as the Open Universities) to traditional universities running only a few courses or entire programmes through the Internet. Defining criteria and indicators for these situations (or any possible variant) is extremely delicate, and one should choose the exact focus of the work. In CODUR, we have chosen to focus on the evaluation of online Higher Education institutions, rather than on courses or programmes. Some of the present CODUR indicators seem to refer to the institution as a whole (independently of the online dimension), while others are specific to the online dimension. This may derive from the fact that, within the project, it is not yet clear whether the final output will be integrated into an existing ranking system (such as, for example, U-Multirank) or will remain a stand-alone set of indicators. Consequently, some indicators that in principle should already be present in any existing system have also been included in the CODUR set; these should definitely be removed, and not replicated, once the set is integrated into an existing system.


Criteria and indicators for measuring the quality of online learning

The lack of specific indicators for measuring the quality of online learning is felt as an urgent gap to be filled by the HE community. In this sense, CODUR has identified a real need, i.e. to define specific criteria and indicators for the online dimension. The present indicators are not 'actual indicators', as most of them cannot be straightforwardly operationalized. Besides, some indicators are quite high-level while others are more fine-grained; in the future, the project should try to make them more homogeneous by choosing the level to focus on. Furthermore, it is not clear whether we are looking for quantitative or qualitative indicators. Presently the indicators are mixed, and even if a mixed approach is in principle not negative, this choice should be made explicit and clearly stated. According to the feedback collected so far, some of the present criteria and indicators (even if closely related) should be merged and clustered: some criteria are very much intertwined, and the boundaries between them are often blurred. CODUR should also consider adding other relevant indicators with the intention of capturing and measuring the "student voice" dimension.
