Páginas: 184-195. Recibido: 2021-10-01. Revisado: 2021-11-12. Aceptado: 2022-04-04. Preprint: 2022-03-15. Publicación final: 2022-05-31

Instrument to analyse communication in a Community of Inquiry when using emerging methodologies
Instrumento para analizar la comunicación en una comunidad de aprendizaje cuando se usan metodologías emergentes

Keidy García Lira
Elba Gutiérrez-Santiuste
Abstract
There is a growing interest in learning in Higher Education using flipped classroom and m-learning. This study constructs an original instrument to obtain information on the levels perceived by students of the three presences of the Community of Inquiry model when these emerging methodologies are used. The instrument consists of 21 items, based on the instrument developed by Arbaugh et al. (2008), which were adapted to flipped classroom and m-learning. The instrument was distributed to 121 students from two different universities. Cochran's Q test was run to verify whether there was agreement between the opinions of five experts. Student's t-test results for independent samples indicate similarity in the opinions of the two groups of students. Information analysis techniques, exploratory factor analysis, and reliability tests were also used to validate the instrument. The analysis revealed three factors coinciding with cognitive presence, social presence and teaching presence as proposed by the theoretical model. Cronbach's Alpha confirmed the reliability of the tool as a whole.
Resumen
Existe un creciente interés en el aprendizaje en la Educación Superior utilizando el aula invertida y el aprendizaje móvil. Este estudio construye un instrumento original para obtener información sobre los niveles percibidos por el alumnado de las tres presencias del modelo CoI cuando se hace uso de estas metodologías emergentes. El instrumento está formado por 21 ítems, construidos a partir del instrumento desarrollado por Arbaugh et al. (2008), que fueron adaptados al uso del aula invertida y el aprendizaje móvil. Este instrumento se distribuyó a 121 estudiantes de dos universidades diferentes. Se ejecutó la prueba Q de Cochran para comprobar si existía concordancia entre las opiniones de los expertos. Los resultados de la prueba t de Student para muestras independientes indican similitud en las opiniones de los dos grupos de estudiantes. Para validarlo se utilizaron técnicas de análisis de información, análisis factorial exploratorio y pruebas de confiabilidad. El análisis reveló tres factores que coinciden con la presencia cognitiva, social y docente tal y como propone el modelo teórico. El Alpha de Cronbach confirmó la fiabilidad de la herramienta en su conjunto.
Palabras clave / Keywords
Community of Inquiry model, flipped classroom, mobile learning, blended learning, validity, reliability.
Comunidad de aprendizaje, aula invertida, aprendizaje móvil, enseñanza semipresencial, validez, fiabilidad.
1. Introduction
Various models have attempted to systematize communication in virtual educational environments; they deal with dissimilar elements such as the social aspect, the development of high-level cognitive functions, and the actions of teachers to facilitate student learning, among others. Over the last 20 years, the theoretical and methodological Community of Inquiry (CoI) model proposed by Garrison et al. (2000) has been used to analyse the interactions and typology of communication in virtual communities of learning and inquiry in Higher Education.
However, it was not until 2008 that an instrument was developed to obtain information on the levels perceived by students in relation to the three presences of the CoI model (Arbaugh et al., 2008). To the best of our knowledge, no further instruments have been developed to measure the dimensions of the CoI model validly, reliably and effectively. Researchers such as Diaz et al. (2010) and Swan et al. (2008) found supporting evidence for the three different constructs (Castellanos-Reyes, 2020) but did not design a new instrument. The instrument designed by Arbaugh et al. (2008) has been accepted in many studies and has been used to evaluate both blended learning and MOOC courses, but its adoption when using FC or ML has been limited. The study by Kim et al. (2014) used FC together with a version of the CoI model that includes a fourth presence, learning presence. Although the CoI instrument has been validated in many cases, new emerging technologies make it necessary to re-validate it. As Lowenthal and Dunlap (2014) argue, the CoI instrument should be "revisited and adjusted over time" (p. 26).
One of the benefits of m-learning (ML) is precisely to facilitate communication regardless of the time and geographical location of the participants in the teaching-learning process. ML also enables personalized, flexible and context-based teaching and learning, which in turn provides interactivity, mobility and opportunity (Jou et al., 2016). At the same time, ML "can accommodate both formal and informal learning in collaborative or individual learning modes, and within almost any context" (Y. A. Zhang, 2015, p. 43). Meanwhile, Ireri and Omwenga (2016) suggest introducing ML in a flipped classroom (FC) model to help students overcome the distance from their teachers and improve their performance. In this model, learning begins individually online and then moves to the classroom or virtual group space, where teachers guide students as they apply concepts and actively participate in knowledge creation (Ireri & Omwenga, 2016). The work of the teaching staff in this case is associated with the design of the activities prior to the study and those carried out in the classroom, as well as with the facilitation of learning and its evaluation. Outside the classroom, students must assume a leading role in the construction of their own learning, at their own pace, from the proposed teaching materials.
We believe that emerging methodologies and new ways of communication are well suited to today's society. Hence, it is necessary to describe the development and validation of an instrument to measure communication when applying FC combined with ML (FC-ML). The objectives that guided this research are:
· Build a valid and reliable measuring instrument on FC-ML.
· Explore the relationships between the dimensions that make up the instrument.
1.1 Community of Inquiry model
The theoretical foundations of the CoI model explain that high-level learning can take place in collaborative communities where individual meaning and socially constructed knowledge interact (Garrison et al., 2000). This model, as illustrated in Figure 1, is structured around three elements that are present in communication in education: cognitive presence, social presence, and teaching presence.
Figure 1. Community of Inquiry model. Source: Garrison (2017, p. 25)
Cognitive presence is defined as the extent to which students are able to construct and confirm meaning through sustained reflection and discourse in a CoI (Garrison et al., 2000). In summary, it is a process model that describes the development of higher-order thinking rather than individual learning outcomes; it is associated with perceived and real learning outcomes (Akyol & Garrison, 2011). For this presence, the model identifies four categories: triggering event, exploration, integration and resolution (Garrison, 2017).
Social presence is described as the participants' ability to identify with the community (e.g., course and group), communicate openly in an environment of trust, and develop personal and affective relationships through the projection of their individual personalities (Garrison, 2017). This presence contains three categories: group cohesion, open communication and personal/affective (Garrison, 2017).
Teaching presence is presented as the action of designing, facilitating, and directing cognitive and social processes in order to obtain learning outcomes that have personal meaning and are worthwhile from an educational point of view (Anderson et al., 2001). The term teaching presence, rather than teacher presence, has recently been proposed to reflect the roles and responsibilities shared by all participants in a CoI, in line with e-learning approaches (Garrison, 2017). This dimension covers three categories: design and organization, facilitating discourse, and direct instruction (Garrison, 2017).
1.2 Community of Inquiry instrument
The instrument developed by Arbaugh et al. (2008) to measure the levels perceived by students of the three presences of the CoI model consists of 34 five-point Likert-type items. Arbaugh et al. (2008) validated the instrument with a sample of 287 participants from institutions in the United States and Canada enrolled in graduate-level courses in either Education or Business. Cronbach's alpha supported the instrument's reliability for each of the three presences. Since then, the instrument has been used:
· to examine participants' perceptions of the three presences (Lawrence-Benedict et al., 2019; Mills et al., 2016; Sun et al., 2017).
· to explore the relationship between presences (Bangert, 2009; Garrison et al., 2010; Gutiérrez-Santiuste et al., 2015; Kozan & Richardson, 2014; Sen-Akbulut et al., 2022).
· to explore the relationship between presences and their categories (Caskurlu, 2018; Heilporn & Lakhal, 2020).
· to validate the theoretical model in different languages (Ballesteros et al., 2019; Heilporn & Lakhal, 2020; Olpak & Kiliç Çakmak, 2018; Yu & Richardson, 2015).
· to validate the theoretical model in different disciplines (Carlon et al., 2012; Heilporn & Lakhal, 2020; Lau et al., 2021).
2. Method
2.1 Research design and sample
A prior analysis was made of studies on the levels perceived by students of the three presences of the CoI model. The design of the instrument was based on the instrument developed by Arbaugh et al. (2008), modified to incorporate other research (Al-Emran et al., 2016; Kim et al., 2014) and to adapt the items to FC and ML.
An item pool of 44 candidate questions was built. After an analysis of redundancy, ambiguity, length, adaptation to the construct and corrections (DeVellis, 2017), version 1 of the instrument was obtained, consisting of 34 items. This version was reviewed by five judges to verify content validity. The judges were researchers in areas related to communication and/or the use of technologies in education with extensive research experience. Through a review sheet, they had to evaluate each question (correct/incorrect) and, in the latter case, indicate the reason in terms of clarity, appropriateness or wording, with a space for observations. A total of eight comments were made during the review, of which six (75%) were accepted, although several were repeated. The judges' responses were reviewed and Cochran's Q test was used to check the equality of several related samples on one dichotomous variable (Cochran, 1950). This test verified that there were no significant differences between the opinions of the experts.
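For illustration, Cochran's Q statistic used for this inter-judge agreement check can be computed directly from its definition. The following is a minimal Python sketch with hypothetical correct/incorrect ratings, not the study's actual data (the analyses reported here were run in SPSS):

```python
def cochrans_q(data):
    """Cochran's Q for k related dichotomous samples (here, k judges)
    over b blocks (here, the items each judge rated correct=1/incorrect=0).
    Q is referred to a chi-square distribution with k - 1 degrees of freedom."""
    k = len(data[0])                                        # number of judges
    col = [sum(row[j] for row in data) for j in range(k)]   # per-judge totals
    row_tot = [sum(row) for row in data]                    # per-item totals
    n = sum(row_tot)                                        # grand total
    num = (k - 1) * (k * sum(c * c for c in col) - n * n)
    den = k * n - sum(r * r for r in row_tot)
    return num / den

# Hypothetical ratings: 4 items x 5 judges (1 = item judged correct)
ratings = [[1, 1, 1, 1, 1],
           [1, 1, 0, 1, 1],
           [1, 1, 1, 1, 0],
           [0, 1, 1, 1, 1]]
q = cochrans_q(ratings)   # 2.0, well below the chi-square(4) critical value of 9.49
```

A non-significant Q, as in the hypothetical data above, is read as no systematic disagreement among the judges.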
Following the criteria of DeVellis (2017), the 22 items were statements whose answers were recorded on four-level scales (1 = I strongly disagree with the statement, 2 = I partially disagree with the statement, 3 = I partially agree with the statement, 4 = I strongly agree with the statement). A pilot test was carried out with 16 students from the first semester of the degree in Computer Science Engineering, taking the course Introduction to Computer Science. Among the respondents, 62.5% were men and 37.5% women; 56.25% were between 20 and 29 years old, while 43.75% were under 20 years old. Respondents were asked to comment on the clarity and duplicity of the items; as a result, the instrument was kept unchanged. This test was also applied to check the relevance and effectiveness of the instrument, as well as the conditions of application and the procedures involved (Hernández-Sampieri et al., 2014).
The instrument was developed and applied in Spanish. For seven days in the academic year 2017-2018, the instrument was published online (in Moodle v.2 at UGR and v.3 at UCI). The time required for completion was approximately 15 minutes. The quantitative data were analyzed with the statistical programs SPSS v.24 and SPSS Amos v.22. Students were in the first semester of their degree program in Computer Engineering and enrolled in the subject Introduction to Computer Science (at the University of Computer Sciences, UCI, Cuba) or Software Fundamentals (at the University of Granada, UGR, Spain). Ours was a convenience sample of 121 students. Table 1 shows the sample profile.
Table 1
Sample profile

Variables               Frequency   Percentage (%)
Gender
  Female                     26        21.5
  Male                       95        78.5
Age
  <20 years                  42        34.7
  20–29 years                65        53.7
  30–39 years                13        10.7
  ≥40 years                   1         0.8
University of origin
  UCI                        83        68.6
  UGR                        38        31.4
2.2 Data collection and analysis procedures
It was necessary to verify the equality of averages between the two groups (UCI and UGR) because they came from different samples. Following the criteria of Tabachnick and Fidell (2013), the data were examined for unanswered items, which amounted to less than 5% and were replaced by the mean. Data were checked for outliers, but no cases had to be removed. Finally, the normality of the data was assessed using skewness and kurtosis coefficients. The data exhibited skewness and kurtosis outside the recommended range of -1 to +1 (Ferrando & Anguiano-Carrasco, 2010). Most of the data did not follow a normal distribution, due in part to the size of the sample (Field, 2009). According to Tabachnick and Fidell (2013), transformations can improve "the statistical evaluation of data" (p. 98). Appropriate transformations (e.g. square root and logarithmic) were carried out without producing significant improvements. However, as the scales used are four-level, it is difficult to assume normality for this type of scale (Wu, 2007). Nevertheless, it was considered that, with the size of the sample used, violation of the normality assumption does not cause problems (Pallant, 2007). Therefore, although violating the normality assumption weakens the solution, it "may still be worthwhile" (Tabachnick & Fidell, 2013, p. 618).
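The normality screening described above can be reproduced with simple moment-based coefficients. This is a sketch on hypothetical scores; SPSS reports bias-corrected variants of these statistics, so its values differ slightly:

```python
def skew_kurtosis(data):
    """Moment-based skewness and excess kurtosis (both 0 for a normal
    distribution). SPSS reports bias-corrected variants, so its values
    differ slightly from these."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3

def within_range(data, limit=1.0):
    """Check the +/-1 rule of thumb cited above (Ferrando &
    Anguiano-Carrasco, 2010)."""
    skew, kurt = skew_kurtosis(data)
    return abs(skew) <= limit and abs(kurt) <= limit

# Hypothetical item scores on the four-level scale
scores = [4, 4, 4, 4, 3, 4, 2, 4, 4, 1]
skew, kurt = skew_kurtosis(scores)
normal_enough = within_range(scores)
```

Items failing the check, as most did here, motivate the non-parametric tests and extraction method used in the rest of the analysis.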
Once the non-normality of the data was assumed, the Mann-Whitney U test was carried out with the scores of the responses to the items in order to evaluate whether there were significant differences between the opinions expressed by the UCI-Cuban and UGR-Spanish students. The results of the test show no significant differences in any of the answers in relation to the variable university of origin.
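The group comparison just described can be illustrated with a pure-Python Mann-Whitney U. This is a sketch on hypothetical scores (the study used SPSS, and a complete test would also derive a p-value from U):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples, with
    average ranks for ties. The smaller of U1 and U2 is returned and
    would be compared against critical values (or a normal approximation)."""
    n1, n2 = len(x), len(y)
    pooled = x + y
    order = sorted(range(n1 + n2), key=lambda i: pooled[i])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < n1 + n2:                      # walk runs of tied values
        j = i
        while j + 1 < n1 + n2 and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        for t in range(i, j + 1):           # average rank over the tie run
            ranks[order[t]] = (i + j) / 2 + 1
        i = j + 1
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)

# Hypothetical item scores from the two groups of students
uci = [4, 3, 4, 4, 2, 3]
ugr = [3, 4, 3, 4, 4]
u = mann_whitney_u(uci, ugr)
```

A U close to n1*n2/2, as here, is consistent with the two groups answering similarly.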
Descriptive analyses were performed and Cronbach's Alpha was calculated. The KMO measure of sampling adequacy and Bartlett's sphericity test were computed in order to use EFA to validate the construct. Correlations were carried out to explore the relationships between the dimensions that make up the instrument.
3. Results
The purpose of using EFA was to determine how and to what extent observed variables are linked to latent variables or factors (Byrne, 2016); in our case, to check the extent to which the items were related to the three theoretically proposed dimensions. First, the behavior of the items was evaluated through descriptive statistics measuring central tendency and dispersion. Average item scores (with standard deviations) ranged from 3.30 (.666) to 3.74 (.629) for cognitive presence, 3.04 (.995) to 3.65 (.642) for social presence, and 3.50 (.709) to 3.59 (.679) for teaching presence. This suggests that "the online learning environment studied may have comprised an effective learning community based on learner perceptions" (Kozan & Richardson, 2014, p. 42).
The sample size is a factor that interacts with several aspects of the analysis, among which the input matrix to the EFA stands out (Lloret-Segura et al., 2014). For this matrix, a distinction is made between the product-moment correlation matrix and the polychoric correlation matrix. As previously mentioned, the items had four response categories and followed a non-normal distribution. For that reason, the items should in principle be analyzed according to their ordinal measurement level, i.e., using the polychoric correlation matrix (Lloret-Segura et al., 2014). However, since the sample size is small and the distributions are adequate, it was decided to perform the EFA based on Pearson's product-moment correlation matrix (Lloret-Segura et al., 2014). Correlation analyses were conducted to determine the internal consistency of items within each dimension. The resulting correlations ranged from .441 to .952, except for item PC_HD02 (CP_TE02. The problems raised by the students through the videos, and their associated resources, increased my interest in the course topics), which had indices lower than .30 and was therefore not taken into account in subsequent analyses.
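The item-screening step above (dropping items whose correlations fall below .30) can be sketched with a corrected item-total correlation in pure Python. The scores are hypothetical, and this simplifies the full inter-item correlation analysis reported in the study:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def screen_items(items, cutoff=0.30):
    """Return indices of items whose corrected item-total correlation
    (item vs. the total of the remaining items) reaches the cutoff."""
    keep = []
    for i, scores in enumerate(items):
        rest_total = [sum(resp) - resp[i] for resp in zip(*items)]
        if pearson_r(scores, rest_total) >= cutoff:
            keep.append(i)
    return keep

# Hypothetical scores: three items x four respondents; the third item
# runs against the others and is flagged for removal
items = [[1, 2, 3, 4], [1, 2, 3, 5], [4, 1, 3, 2]]
kept = screen_items(items)   # [0, 1]
```

An item dropped by this screen plays the same role as PC_HD02 in the analysis above.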
Subsequently, the analysis for selecting the most appropriate factor extraction method was carried out. Given the ordinal measurement level of the items, an estimation method based on the polychoric correlation matrix would in principle be indicated. However, such a method could not be chosen here (Morata-Ramirez et al., 2015), as it may have convergence problems if the sample is small (López-Aguado & Gutiérrez-Provecho, 2019). On the other hand, one of the most suitable methods for factor extraction is based on ordinary least squares, especially the principal axis factoring method (López-Aguado & Gutiérrez-Provecho, 2019). In addition, principal axis factoring has been the recommended option when the normality assumption is not met (Lloret-Segura et al., 2014).
A first EFA was performed using the principal axis factoring extraction method with oblimin rotation to extract the factors "in consideration of the theoretical interdependence of the presences" (Garrison et al., 2010, p. 33), in other words, on the basis of the theoretical assumption that the factors are correlated. All communalities obtained values above 0.6 with only three factors and six or seven indicators per factor, meeting the criteria of MacCallum et al. (1999) that even samples with fewer than 100 cases can be sufficient when communalities are consistently high. Bartlett's sphericity test was significant and the KMO measure confirmed the adequacy of the sample for factor analysis.
Figure 2. Principal Component Analysis plot
The total variance explained by the three resulting factors was 69.96%. The first factor represented the largest share of variance (57.99%), followed by the second factor (7.98%) and the third (3.99%). These three factors were named teaching presence (TP), cognitive presence (CP) and social presence (SP) respectively, coinciding with the CoI model. Table 2 shows the factor loadings for each of the items of the instrument.
Table 2
Factor loadings for EFA with oblimin rotation (English translations of the original Spanish-language items)

PC_HD01 (CP_TE01). The problems raised by the teacher through the videos (and their associated resources) increased my interest in the course topics. | CP: .757
PC_EXP01 (CP_EXP01). The use of videos (and their associated resources) has facilitated the exchange of information about the content of the course. | CP: .739
PC_EXP02 (CP_EXP02). Collaborative work has helped me to exchange information about the content of the course. | CP: .800
PC_INT01 (CP_INT01). The use of videos (and their associated resources) has helped me to associate ideas related to the content of the course. | CP: .797
PC_INT02 (CP_INT02). Collaborative work has helped me to associate ideas related to the content of the course. | CP: .782
PC_RES01 (CP_RES01). The use of videos (and their associated resources) has helped me to apply new ideas. | CP: .813
PC_RES02 (CP_RES02). Collaborative work has helped me to apply new ideas. | CP: .796
PS_AFE01 (SP_AFE01). By working collaboratively, I have been able to express my emotions. | SP: .610
PS_AFE02 (SP_AFE02). By working collaboratively, I have been able to show gratitude to a member of the group. | SP: .723
PS_CA01 (SP_OC01). By working collaboratively, I have been able to express myself freely. | SP: .622
PS_CA02 (SP_OC02). By working collaboratively, I have felt comfortable interacting with other course participants. | SP: .637
PS_COH01 (SP_COH01). By working collaboratively, I have felt united with the group. | SP: .609
PS_COH02 (SP_COH02). I felt that my point of view was recognized by other participants of the course. | SP: .573
PD_DO01 (TP_DO01). The videos (and their associated resources) have clearly expressed the contents of the course. | TP: .922
PD_DO02 (TP_DO02). The videos (and their associated resources) have clearly expressed the organization of the course. | TP: .945
PD_DO03 (TP_DO03). I have obtained information about the contents of the course through collaborative work. | TP: .937
PD_DO04 (TP_DO04). I have obtained information about the organization of the course through collaborative work. | TP: .943
PD_FD01 (TP_FAC01). The videos (and their associated resources) have encouraged me to consult the contents of the course and external sources to generate knowledge together. | TP: .931
PD_FD02 (TP_FAC02). The construction of knowledge has been promoted through collaborative work. | TP: .943
PD_ED01 (TP_DI01). Through the videos (and their associated resources) I have been given explicit guidance to focus on the contents. | TP: .968
PD_ED02 (TP_DI02). I have obtained explicit guidance through collaborative work to focus on the contents of the course. | TP: .949
As Table 2 shows, the TP factor included eight items that focus on the efforts made in relation to design and organization, facilitation of discourse and direct instruction to obtain results in correspondence with the needs of the student body; items in this factor showed strong loadings, ranging from .922 to .968. The CP factor included seven items that refer to the extent to which students are able to construct meaning in a CoI; these items obtained loadings ranging from .739 to .813. The SP factor included six items reflecting participants' ability to engage socially in a CoI; items in this factor showed loadings ranging from .573 to .723.
On the other hand, the CP factor had high positive correlations with the TP factor.
One of the assumptions of Cronbach's Alpha coefficient is the continuous nature of the variables (Elosua & Zumbo, 2008). When this assumption is not met, a valid alternative is the ordinal alpha (Espinoza & Novoa-Muñoz, 2018). However, the ordinal alpha is based on the polychoric correlation matrix (Elosua & Zumbo, 2008) and, for that reason, could not be chosen in this study. The difference between the values of these two coefficients may be due to high values of skewness and kurtosis (González Alonso & Pazmiño Santacruz, 2015). In this study the values of skewness and kurtosis are not considered high, as they ranged between -2.5 and +2.5; in fact, some authors consider the range of -2 to +2 acceptable (Muthen & Kaplan, 1992).
Finally,
Cronbach's Alpha turned out to be .96 for the entire instrument while TP was
.98, SP .84 and CP .92. These values did not improve if any item was removed,
indicating that all questions were relevant.
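Cronbach's Alpha and the "alpha if item deleted" check reported above follow directly from the classical formula. The following sketch uses hypothetical scores, not a reproduction of the study's data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Classical Cronbach's Alpha. `items` is a list of item-score lists,
    one list per item, one score per respondent."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]            # per-respondent totals
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

def alpha_if_deleted(items):
    """Alpha recomputed with each item removed in turn (needs >= 3 items);
    a value above the full-scale alpha flags a candidate for removal."""
    return [cronbach_alpha(items[:i] + items[i + 1:]) for i in range(len(items))]

# Hypothetical scores for three items on the four-level scale
demo = [[4, 3, 2, 4], [4, 3, 1, 4], [3, 3, 2, 4]]
full_alpha = cronbach_alpha(demo)
per_item = alpha_if_deleted(demo)
```

As in the study, a scale where no deletion raises alpha indicates that every item contributes to reliability.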
4. Discussion
The objective that guided this research was to develop an FC-ML instrument based on the CoI model (Arbaugh et al., 2008), adapted to learning experiences using FC and ML. Specifically, the validity and reliability of the instrument and the relationships between its dimensions were analyzed, and it was examined whether there were significant differences between these dimensions in relation to gender and age. Satisfactory results were obtained for each of the three presences that underpin the CoI model.
The results of the descriptive statistics, Bartlett's sphericity test and the KMO value confirmed the suitability of the sample for carrying out EFA. As the data did not follow a normal distribution, the appropriate extraction method (as in the Carlon et al. (2012) and Kovanović et al. (2018) investigations) was principal axis factoring with oblimin rotation. This differs from previous research that implemented principal component analysis with oblimin rotation (Arbaugh et al., 2008; Bangert, 2009; Garrison et al., 2010). The sample size of 121 is consistent with the recommendation of five to ten participants per item, with an absolute minimum of 100 subjects (Kass & Tinsley, 1979), as well as with the recommendation of MacCallum et al. (1999) that, with all communalities above 0.6 and a high overdetermination of factors, such a sample may be sufficient. The first factor to load was teaching presence, explaining 57.99% of the variance, followed by cognitive presence, explaining 7.98%, and the third was social presence, explaining 3.99%. The loading order of the factors coincides with the studies of Garrison et al. (2010), Kozan and Richardson (2014), Yu and Richardson (2015) and Kovanović et al. (2018), but not with that of Carlon et al. (2012).
Based on the results of the EFA, this study found a three-factor model coincident with the instrument developed by Arbaugh et al. (2008) and with the research of Bangert (2009), Garrison et al. (2010), Carlon et al. (2012) and Caskurlu (2018). In the present study, cognitive presence is made up of four categories, while social presence and teaching presence are each made up of three categories. These results are consistent with what is proposed in the CoI model and also with the findings of Caskurlu (2018). The FC-ML instrument that fits the data has a three-factor structure composed of seven items for CP, six items for SP and eight items for TP. The reliability of the instrument is high for each of the three presences.
On the other hand, our results indicate a strong correlation between cognitive and teaching presence.
5. Conclusions
Finally, the validity and reliability of the FC-ML instrument are important from a practical point of view, as the instrument can be applied at the end of a course in which FC and ML are used in a CoI. This is particularly true if we think of FC not as a technology but as a way of using different digital resources to enrich teaching and learning. At the same time, we consider ML a learning technology that constitutes an important advance in educational technology and that may be adopted in Higher Education within a year or less (Alexander et al., 2019). Therefore, the results obtained provide an opportunity to examine how advances in the use of emerging technologies, and the context or discipline in which they are applied, influence virtual communication, specifically the levels perceived by students of the cognitive, social and teaching presences of the CoI model.
However, there are still limitations to this study that should be noted. The sample was a convenience sample, coming from one Cuban university and one Spanish university. In the future, larger and more representative samples will be needed to assess the extent to which the results are applicable to other population groups and to confirm the conclusions of the study. When applying the instrument to such samples, the researchers recommend performing confirmatory factor analysis to determine the extent to which the data support the proposed model.
References
Akyol, Z., & Garrison, D.
R. (2011). Assessing metacognition in an online community of inquiry. Internet
and Higher Education, 14(3), 183–190.
https://doi.org/10.1016/j.iheduc.2011.01.005
Al-Emran, M., Elsherif, H. M., &
Shaalan, K. (2016). Investigating attitudes towards the use of mobile learning
in higher education. Computers in Human Behavior, 56, 93–102.
https://doi.org/10.1016/j.chb.2015.11.033
Alexander, B., Ashford-Rowe, K.,
Barajas-Murphy, N., Dobbin, G., Knott, J., McCormack, M., Pomerantz, J.,
Seilhamer, R., & Weber, N. (2019). Horizon report 2019 higher education
edition. EDU19. EDUCAUSE. https://tinyurl.com/wyjnnbvn
Anderson, T., Rourke, L., Garrison,
D. R., & Archer, W. (2001). Assessing teaching presence in a computer
conferencing context. Journal of Asynchronous Learning Network, 5(2),
1–17. https://doi.org/10.1.1.95.9117
Arbaugh, J. B., Cleveland-Innes, M.,
Diaz, S. R., Garrison, D. R., Ice, P., Richardson, J. C., & Swan, K. P.
(2008). Developing a community of inquiry instrument: Testing a measure of the
community of inquiry framework using a multi-institutional sample. Internet
and Higher Education, 11(3–4), 133–136.
https://doi.org/10.1016/j.iheduc.2008.06.003
Ballesteros, B., Gil-Jaurena, I.,
& Morentin, J. (2019). Validation of the Spanish version of the “Community
of Inquiry” survey. Revista de Educación a Distancia, 59(4),
1–26.
Bangert, A. W. (2009). Building a
validity argument for the community of inquiry survey instrument. Internet
and Higher Education, 12(2), 104–111.
https://doi.org/10.1016/j.iheduc.2009.06.001
Byrne, B. M. (2016). Structural
equation modeling with AMOS: Basic concepts, applications, and programming
(3rd ed.). Routledge.
Carlon, S., Bennett-Woods, D., Berg,
B., Claywell, L., LeDuc, K., Marcisz, N., Mulhall, M., Noteboom, T., Snedden,
T., Whalen, K., & Zenoni, L. (2012). The community of inquiry instrument:
Validation and results in online health care disciplines. Computers &
Education, 59, 215–221.
https://doi.org/10.1016/j.compedu.2012.01.004
Caskurlu, S. (2018). Confirming the
subdimensions of teaching, social, and cognitive presences: A construct
validity study. Internet and Higher Education, 39, 1–12.
https://doi.org/10.1016/j.iheduc.2018.05.002
Castellanos-Reyes, D. (2020). 20
years of the community of inquiry framework. TechTrends, 64(4),
557–560. https://doi.org/10.1007/s11528-020-00491-7
Chen, R. H. (2022). Effects of
deliberate practice on blended learning sustainability: A community of inquiry
perspective. Sustainability, 14(3), 1785.
https://doi.org/10.3390/su14031785
Cochran, W. G. (1950). The comparison
of percentages in matched samples. Biometrika, 37(3/4), 256–266.
https://doi.org/10.2307/2332378
DeVellis, R. F. (2017). Scale
development: Theory and applications (4th ed.). SAGE Publications.
Diaz, S. R., Swan, K., Ice, P., &
Kupczynski, L. (2010). Student ratings of the importance of survey items, multiplicative
factor analysis and the validity of the community of inquiry survey. Internet
and Higher Education, 13(1–2), 22–30.
https://doi.org/10.1016/j.iheduc.2009.11.004
Elosua, P., & Zumbo, B. (2008). Coeficientes de fiabilidad para escalas de respuesta
categórica ordenada. Psicothema, 20(5), 896–901.
https://bit.ly/3IZFHw3
Espinoza,
S. C., & Novoa-Muñoz, F. (2018). Ventajas del alfa ordinal respecto al alfa
de Cronbach ilustradas con la encuesta AUDIT-OMS. Revista Panamericana de
Salud Publica, 42(e65), 1–6. https://doi.org/10.26633/RPSP.2018.65
Ferrando,
P. J., & Anguiano-Carrasco, C. (2010). El análisis factorial como técnica
de investigación en psicología. Papeles del Psicólogo, 31(1), 18–33. https://bit.ly/372PZyq
Field, A. (2009). Discovering
statistics using SPSS (3rd ed.). SAGE Publications.
Floyd, F. J., & Widaman, K. F.
(1995). Factor analysis in the development and refinement of clinical
assessment instruments. Psychological Assessment, 7(3), 286–299.
https://doi.org/10.1037/1040-3590.7.3.286
Garrison, D. R. (2017). E-learning
in the 21st century: A framework for research and practice (3rd ed.).
Routledge.
Garrison, D. R., Anderson, T., &
Archer, W. (1999). Critical inquiry in a text-based environment: Computer
conferencing in higher education. The Internet and Higher Education, 2(2–3),
87–105. https://doi.org/10.1016/S1096-7516(00)00016-6
Garrison, D. R., Cleveland-Innes, M.,
& Fung, T. S. (2010). Exploring causal relationships among teaching, cognitive
and social presence: Student perceptions of the community of inquiry framework.
Internet and Higher Education, 13(1–2), 31–36.
https://doi.org/10.1016/j.iheduc.2009.10.002
González Alonso, J., & Pazmiño
Santacruz, M. (2015). Cálculo e interpretación
del Alfa de Cronbach para el caso de validación de la consistencia interna de
un cuestionario, con dos posibles escalas tipo Likert. Revista Publicando,
2(2), 62–67.
Gutiérrez-Santiuste,
E., Rodríguez-Sabiote, C., & Gallego-Arrufat, M. J. (2015). Cognitive presence through
social and teaching presence in communities of inquiry: A correlational–predictive
predictive study. Australasian Journal of Educational Technology, 31(3),
349–362. https://doi.org/10.14742/ajet.1666
Heilporn, G., & Lakhal, S.
(2020). Investigating the reliability and validity of the community of inquiry
framework: An analysis of categories within each presence. Computers &
Education, 145, 1–20. https://doi.org/10.1016/j.compedu.2019.103712
Henson, R. K., & Roberts, J. K.
(2006). Use of exploratory factor analysis in published research: Common errors
and some comment on improved practice. Educational and Psychological
Measurement, 66(3), 393–416.
https://doi.org/10.1177/0013164405282485
Hernández-Sampieri, R., Fernández,
C., & Baptista, P. (2014). Metodología
de la investigación (6th ed.). McGraw-Hill.
Ireri, B. N., & Omwenga, E. I.
(2016). Mobile learning: A bridging technology of learner entry behavior in a
flipped classroom model. In J. Keengwe & G. Onchwari (Eds.), Handbook of
research on active learning and the flipped classroom model in the digital age
(pp. 106–121). Idea Group, U.S.
Jou, M., Tennyson, R. D., Wang, J.,
& Huang, S. Y. (2016). A study on the usability of E-books and APP in engineering
courses: A case study on mechanical drawing. Computers and Education, 92–93,
181–193. https://doi.org/10.1016/j.compedu.2015.10.004
Kaiser, H. F. (1970). A second
generation little jiffy. Psychometrika, 35(4), 401–415.
https://doi.org/10.1007/BF02291817
Kass, R. A., & Tinsley, H. E. A.
(1979). Factor analysis. Journal of Leisure Research, 11(2),
120–138. https://doi.org/10.1080/00222216.1979.11969385
Kim, M. K., Kim, S. M., Khera, O.,
& Getman, J. (2014). The experience of three flipped classrooms in an urban
university: An exploration of design principles. Internet and Higher
Education, 22, 37–50. https://doi.org/10.1016/j.iheduc.2014.04.003
Kovanović, V., Gašević, D.,
Joksimović, S., Hatala, M., & Adesope, O. (2015). Analytics of communities
of inquiry: Effects of learning technology use on cognitive presence in
asynchronous online discussions. Internet and Higher Education, 27,
74–89. https://doi.org/10.1016/j.iheduc.2015.06.002
Kovanović, V., Joksimović, S.,
Poquet, O., Hennis, T., Čukić, I., de Vries, P., Hatala, M., Dawson, S.,
Siemens, G., & Gašević, D. (2018). Exploring communities of inquiry in
Massive Open Online Courses. Computers & Education, 119,
44–58. https://doi.org/10.1016/j.compedu.2017.11.010
Kozan, K., & Richardson, J. C.
(2014). New exploratory and confirmatory factor analysis insights into the
community of inquiry survey. Internet and Higher Education, 23,
39–47. https://doi.org/10.1016/j.iheduc.2014.06.002
Lau, Y., Tang, Y. M., Chau, K. Y.,
Vyas, L., & Sandoval-Hernandez, A. (2021). COVID-19 crisis: Exploring
community of inquiry in online learning for sub-degree students. Frontiers
in Psychology, 12, 679197. https://doi.org/10.3389/fpsyg.2021.679197
Lawrence-Benedict, H., Pfahl, M.,
& Smith, S. J. (2019). Community of Inquiry in online education: Using
student evaluative data for assessment and strategic development. Journal of
Hospitality, Leisure, Sport & Tourism Education, 25, 100208.
https://doi.org/10.1016/j.jhlste.2019.100208
Lloret-Segura,
S., Ferreres-Traver, A., Hernández-Baeza, A., & Tomás-Marco, I. (2014). Exploratory Item Factor
Analysis: A practical guide revised and updated. Anales de Psicología, 30(3),
1151–1169. https://doi.org/10.6018/analesps.30.3.199361
López-Aguado,
M., & Gutiérrez-Provecho, L. (2019). Cómo realizar e interpretar un
análisis factorial exploratorio utilizando SPSS. REIRE Revista d’Innovació i
Recerca En Educació, 12(2), 1–14. https://doi.org/10.1344/reire2019.12.227057
Lowenthal, P. R., & Dunlap, J. C.
(2014). Problems measuring social presence in a community of inquiry. E–Learning
and Digital Media, 11(1), 19–30.
https://doi.org/10.2304/elea.2014.11.1.19
MacCallum, R. C., Widaman, K. F., Zhang,
S., & Hong, S. (1999). Sample size in factor analysis. Psychological
Methods, 4(1), 84–99. https://doi.org/10.1037/1082-989X.4.1.84
Mills, J., Yates, K., Harrison, H.,
Woods, C., Chamberlain-Salaun, J., Trueman, S., & Hitchins, M. (2016).
Using a community of inquiry framework to teach a nursing and midwifery
research subject: An evaluative study. Nurse Education Today, 43,
34–39. https://doi.org/10.1016/j.nedt.2016.04.016
Morata-Ramirez, M. Á., Holgado Tello,
F. P., Barbero-García, M. I., & Mendez, G. (2015). Análisis factorial confirmatorio. Recomendaciones sobre
mínimos cuadrados no ponderados en función del error Tipo I de Ji-Cuadrado y
RMSEA. Acción Psicológica, 12(1), 79–90.
https://doi.org/10.5944/ap.12.1.14362
Muthén,
B., & Kaplan, D. (1992). A comparison of some methodologies for the factor analysis of
non‐normal Likert variables: A note on
the size of the model. British Journal of Mathematical and Statistical
Psychology, 45(1), 19–30.
https://doi.org/10.1111/j.2044-8317.1992.tb00975.x
Olpak, Y. Z., & Kiliç Çakmak, E.
(2018). Examining the reliability and validity of a Turkish version of the
community of inquiry survey. Online Learning, 22(1), 147–161.
https://doi.org/10.24059/olj.v22i1.990
Pallant, J. (2007). SPSS survival
manual: A step by step guide to data analysis using SPSS for Windows (3rd
ed.). McGraw Hill.
Richardson, J. C., Maeda, Y., Lv, J.,
& Caskurlu, S. (2017). Social presence in relation to students’
satisfaction and learning in the online environment: A meta-analysis. Computers
in Human Behavior, 71, 402–417.
https://doi.org/10.1016/j.chb.2017.02.001
Sen-Akbulut, M., Umutlu, D., Oner,
D., & Arikan, S. (2022). Exploring university students’ learning
experiences in the covid-19 semester through the community of inquiry
framework. Turkish Online Journal of Distance Education, 23(1),
1–18. https://bit.ly/3IZqgoo
Sidiropoulou, Z., & Mavroidis, I.
(2019). The relation between the three dimensions of the Community of Inquiry
and the learning styles of students in a distance education programme. International
Journal of Emerging Technologies in Learning, 14(23), 180–192.
https://doi.org/10.3991/ijet.v14i23.11564
Sun, Y., Franklin, T., & Gao, F.
(2017). Learning outside of classroom: Exploring the active part of an informal
online English learning community in China. British Journal of Educational
Technology, 48(1), 57–70. https://doi.org/10.1111/bjet.12340
Swan, K. P., Richardson, J. C., Ice,
P., Garrison, D. R., Cleveland-Innes, M., & Arbaugh, J. B. (2008).
Validating a measurement tool of presence in online communities of inquiry. E-Mentor, 2(24), 1–12.
https://bit.ly/3JVdesE
Tabachnick,
B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Pearson.
Wu, C. H. (2007). An empirical study
on the transformation of Likert-scale data to numerical scores. Applied
Mathematical Sciences, 1(58), 2851–2862.
https://tinyurl.com/4zcm85wa
Yu, T., & Richardson, J. C.
(2015). Examining reliability and validity of a Korean version of the Community
of Inquiry instrument using exploratory and confirmatory factor analysis. Internet
and Higher Education, 25, 45–52.
https://doi.org/10.1016/j.iheduc.2014.12.004
Zhang, R. (2020). Exploring blended
learning experiences through the community of inquiry framework. Language
Learning & Technology, 24(1), 38–53. https://hdl.handle.net/10125/44707
Zhang, Y. A. (2015). Handbook of
mobile teaching and learning. Springer-Verlag.
https://doi.org/10.1007/978-3-642-54146-9