Development and validation of writing models with a focus on languages with transparent orthographies: insights and challenges
Writing models provide a theoretical framework for understanding the (meta)cognitive, (meta)linguistic and (grapho)motor processes involved in writing and for describing the factors that determine the quality of the final product (the written text). Several writing models have been developed since the 1970s, with each new model building on previous ones by addressing points of criticism or expanding components in light of new findings. Today's models encompass numerous factors and provide a theoretical framework for understanding writing; however, their validity has not been sufficiently tested, especially outside the languages and orthographic systems in which they were developed, most notably English with its opaque orthography. For writing models to explain writing universally, they must be applicable across different linguistic contexts. One of the biggest challenges in validating writing models is the role of orthographic transparency, on which there is little research. The aim of this paper is to present important milestones and significant changes in the development of writing models that have resulted from criticisms of previous models and efforts to overcome them, and to highlight the remaining uncertainties regarding the components and relationships within writing models, especially in the context of their generalization to different languages and scripts (orthographies). The literature review reveals a lack of studies testing the validity of writing models in languages with transparent orthographies compared to English and other languages with opaque orthographies. Existing findings for transparent orthographies such as Croatian, whose features facilitate the acquisition and automatization of writing skills compared to opaque orthographies such as English, indicate that transcription plays a lesser role in the quality and fluency of written discourse. Finally, interdisciplinary and longitudinal studies in different languages and populations are needed to test the validity of existing models and to identify universal elements as well as those that depend on the language, the writing system and the wider context.