The Unicode cookbook for linguists: Managing writing systems using orthography profiles
Steven Moran
Michael Cysouw
Copyright Year:
ISBN 13: 9783961100903
Publisher: Language Science Press
Language: English
Conditions of Use: Attribution (CC BY)
Reviews
The textbook has an easy-to-follow structure that includes both a theoretical and a practical component, as also evidenced in its table of contents. The theoretical component discusses topics in text coding/encoding and principles of writing systems, while also presenting a comprehensive historical background on the development and advances of the Unicode Standard and the International Phonetic Alphabet (IPA). What this text really does is wed the perspectives and practices of linguistics and information technology in a serious effort to bridge the gap between them, for the purpose of informing interested but less computer-savvy linguists.
The text illustrates analytical thinking and a well-thought-out approach, with accurate information.
The content is up to date. However, because the topic is one that continually evolves, some of the suggested practical proposals may need to be updated in due time.
The text is written in lucid, accessible prose, using easy-to-comprehend terms and providing plain-language definitions followed by the corresponding technical terms along the way.
The text follows a clear, succinct and logical argument that builds in intensity and depth as the chapters succeed each other.
The textbook has a clear structure with several subdivisions.
The theoretical component discusses topics in text coding/encoding and principles of writing systems, while also presenting a comprehensive historical background on the development and advances of the Unicode Standard and the International Phonetic Alphabet (IPA). What this text really does is to wed the perspectives and practices of linguistics and information technology in a serious effort to bridge the gap between them for the purpose of informing interested but less computer-savvy linguists.
The practical component proposes specific procedures on how to use the Unicode Standard in the ‘daily practice of (comparative) linguistics’, making actual recommendations to linguists and computer programmers struggling with these tasks, outlining the likely challenges, and lastly introducing two open-source libraries, in Python and R, as a reference for tackling linguistic data and phonetic and orthographic profiles.
The textbook follows the intuitive and lucid structure outlined in its table of contents.
The text does not have observable interface issues. It is easy to navigate and it includes clear, error-free tables that do not distract or confuse the reader.
None that I have noticed.
The text is not culturally offensive; its topic does not lend itself to such concerns.
If a page full of gibberish like dkjP~oiu._io&ORE@Qds=Sask,jklutiud~zxs2&3r/fv@ or □□□□□?| □□□□□ ?| is something you have encountered but are clueless as to the workings involved, this is the book for you!
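For readers who do want a peek at the workings involved, here is a minimal sketch (not taken from the book; Python standard library only) of how such gibberish typically arises: text written out under one character encoding and read back under another.

```python
# A minimal sketch (not from the book) of how mojibake arises:
# text encoded under one character encoding, decoded under another.
text = "phonème [fɔnɛm]"

utf8_bytes = text.encode("utf-8")

# Reading UTF-8 bytes as if they were Windows-1252 "succeeds" but yields mojibake.
print(utf8_bytes.decode("cp1252"))                    # phonÃ¨me [fÉ”nÉ›m]

# ASCII cannot represent the non-ASCII bytes at all; each one surfaces as the
# replacement character � (fonts lacking a glyph show empty boxes instead).
print(utf8_bytes.decode("ascii", errors="replace"))   # phon��me [f��n��m]
```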
If you are a linguist working in today’s computerized world, you are already familiar with the exasperation involved in having orthographic and phonetic symbols represented in digital documents in a way that is consistently and universally readable across software, computers, languages, offices, and campuses, at an international level.
Universals in digital writing systems in our globally multilingual world are as hard to grasp, categorize, make sense of, and tame as the universal patterns in the world’s languages themselves. Learning to resolve these troubles is an ongoing effort for those using language, those using computers and, overwhelmingly more so, for those whose work combines the two.
The present open textbook is a resource on computer multilingualism. It is a manual written by two scientists who have combined expertise in human language and information technology to create this resource, aiming to inform and guide readers on the notion of encoding in terms of Unicode, i.e. the digital code that permits interoperability between human and computer language. One could say that Unicode is an attempt at a lingua franca for computers; just as a lingua franca is powerful in that it allows speakers of different languages to communicate, while also being limiting depending on the context(s) of its use(s) and user(s), Unicode is a universal tool that facilitates linguistic interoperability across computers … enduring pitfalls notwithstanding.
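To make the lingua-franca idea concrete, here is a tiny illustration (standard-library Python, not from the book): a single IPA character has one Unicode code point, which can then be written out in several interchangeable byte encodings.

```python
# Sketch (standard library only): one character, one Unicode code point,
# and several interchangeable byte encodings of that code point.
import unicodedata

ch = "ə"  # IPA schwa
print(f"U+{ord(ch):04X}", unicodedata.name(ch))  # U+0259 LATIN SMALL LETTER SCHWA
print(ch.encode("utf-8"))      # b'\xc9\x99'
print(ch.encode("utf-16-be"))  # b'\x02Y'
```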
The practical value of the manual is that the authors provide recommendations on how to tackle global linguistic diversity (orthographies, grammars, phonetic representations in the languages of the world) and the inevitable IT shortcomings that come with the process of translating such linguistic diversity into computer-readable form.
The presence of the term ‘cookbook’ in the title is somewhat disorienting at first glance, but as one reads the book it becomes increasingly clear that the authors mean it either as a jest or as suggestive of the need for an instruction manual, a collection of ‘recipes’ that practically address difficult tasks, in the same way that a cookbook lists the necessary ingredients and instructions for turning recipes into hands-on meals.
The intended readership is wide in scope: the manual is straightforward enough to enlighten the interested layperson, student, or non-expert, while also being sufficiently detailed and comprehensive to serve more experienced academics in these fields. The text follows a clear, succinct, and logical argument that builds in intensity and depth as the chapters succeed one another, illustrating analytical thinking and a well-thought-out approach, using easy-to-comprehend terms, and providing plain-language definitions followed by the corresponding technical terms along the way.
Although a complete answer toolkit is beyond the capabilities of present research, technological advances, and academic instruction, the manual freely put forth in this open textbook format is certainly an excellent starting point for those interested in the topics covered, and I am glad for its availability, for having read it, and for the opportunity to review it. I certainly recommend this textbook to others.
Table of Contents
- Chapter 1: Writing systems
- Chapter 2: The Unicode approach
- Chapter 3: Unicode pitfalls
- Chapter 4: The International Phonetic Alphabet
- Chapter 5: IPA meets Unicode
- Chapter 6: Practical recommendations
- Chapter 7: Orthography profiles
- Chapter 8: Implementation
About the Book
This text is a practical guide for linguists and programmers who work with data in multilingual computational environments. We introduce the basic concepts needed to understand how writing systems and character encodings function, and how they work together at the intersection between the Unicode Standard and the International Phonetic Alphabet. Although these standards are often met with frustration by users, they nevertheless provide language researchers and programmers with a consistent computational architecture needed to process, publish and analyze lexical data from the world's languages. Thus we bring to light common, but not always transparent, pitfalls which researchers face when working with Unicode and IPA. Having identified and overcome these pitfalls involved in making writing systems and character encodings syntactically and semantically interoperable (to the extent that they can be), we created a suite of open-source Python and R tools to work with languages using orthography profiles that describe author- or document-specific orthographic conventions. In this cookbook we describe a formal specification of orthography profiles and provide recipes using open source tools to show how users can segment text, analyze it, identify errors, and transform it into different written forms for comparative linguistics research.
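The concrete tools the authors describe are their open-source Python and R libraries; the following is only an illustrative sketch of the underlying idea, not the book's actual API. An orthography profile can be thought of as a list of author- or document-specific graphemes (possibly multi-character) that a segmenter matches longest-first; the grapheme inventory below is hypothetical.

```python
# Toy illustration of orthography-profile segmentation (NOT the book's library):
# greedy longest-match over a hypothetical, document-specific grapheme inventory.
profile = ["tsʰ", "kʰ", "ts", "a", "i", "k", "t", "s", "ʰ"]
profile.sort(key=len, reverse=True)  # try longer graphemes before shorter ones

def segment(text: str) -> list[str]:
    """Split `text` into the graphemes defined by the profile, longest match first."""
    out, i = [], 0
    while i < len(text):
        for g in profile:
            if text.startswith(g, i):
                out.append(g)
                i += len(g)
                break
        else:
            out.append("�" + text[i])  # flag characters missing from the profile
            i += 1
    return out

print(segment("tsʰakii"))  # ['tsʰ', 'a', 'k', 'i', 'i']
```

In the book's formal specification, profiles are delimited text files whose rows can also map each grapheme to IPA or other transliterations; the sketch above covers only the segmentation step.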
About the Contributors
Authors
Steven Moran is a postdoctoral researcher in the Department of Comparative Linguistics, University of Zurich. He has a broad background in computational linguistics and works with data from under-resourced and endangered languages to answer research questions regarding worldwide linguistic diversity and language evolution. He focuses on phonology, language acquisition, and historical-comparative linguistics. He also does linguistic fieldwork in West Africa.
Michael Cysouw is Professor of language typology at the Philipps University Marburg. His research interests are broad-scale investigations of the world's linguistic diversity, with a particular fondness for unusual structures from a worldwide perspective. He focusses not only on the content of linguistic diversity, but also on the methodological aspects of doing cross-linguistic research. In his research, language comparison is taken both as a window into language universals (focussing on aspects on which languages do not differ) as well as historical reconstruction (by interpreting differences between languages as the result of historical processes).