This textbook gives a systematized and compact summary, providing the most essential types of modern models for languages and computation together with their properties and applications.
Top researchers explore the nature of stress and accent patterns in languages, especially the nature of their representations and how people learn them.
This book focuses mainly on logical approaches to computational linguistics, but also discusses integrations with other approaches, presenting both classic and newly emerging theories and applications.
In a world in which advanced communication technologies have made the reporting of disasters and conflicts (also in the form of breaking news) a familiar and 'normalised' activity, the information we present here about television news reporting of the 2003 war in Iraq has implications that go beyond this particular conflict.
This book covers key issues related to the Geospatial Semantic Web, including geospatial web services for spatial data interoperability; geospatial ontology for semantic interoperability; ontology creation, sharing, and integration; querying knowledge and information from heterogeneous data sources; interfaces for the Geospatial Semantic Web; VGI (Volunteered Geographic Information) and the Geospatial Semantic Web; challenges of the Geospatial Semantic Web; and development of Geospatial Semantic Web applications.
This case study-based textbook in multivariate analysis for advanced students in the humanities emphasizes descriptive, exploratory analyses of various types of datasets from a wide range of sub-disciplines, promoting the use of multivariate analysis and illustrating its wide applicability.
This book presents an investigation of lexical bundles in native and non-native scientific writing in English, whose aim is to produce a frequency-derived, statistically- and qualitatively-refined list of the most pedagogically useful lexical bundles in scientific prose: one that can be sorted and filtered by frequency, key word, structure and function, and includes contextual information such as variations, authentic examples and usage notes.
This book provides a systematic, empirical account of the language typically presented in English as a Foreign Language (EFL) textbooks, based on a large corpus of EFL textbooks used in secondary schools.
This book addresses the research, analysis, and description of the methods and processes that are used in the annotation and processing of language corpora in advanced, semi-advanced, and non-advanced languages.
The aim of this book is to present a comprehensive picture of the current state of Spanish learner corpus research (SLCR), which makes it unique, since no other monograph has focused on collecting research dealing with learner corpora of any language other than English.
This book explains how to create information extraction (IE) applications that are able to tap the vast amount of relevant information available in natural language sources: Internet pages, official documents such as laws and regulations, books and newspapers, and the social web.
In this book we address robustness issues at the speech recognition and natural language parsing levels, with a focus on feature extraction and noise robust recognition, adaptive systems, language modeling, parsing, and natural language understanding.
The two-volume set LNCS 13396 and 13397 constitutes revised selected papers from the CICLing 2018 conference which took place in Hanoi, Vietnam, in March 2018.
This book enables readers to interrogate the technical, rhetorical, theoretical, and socio-ethical challenges and opportunities involved in the development and adoption of augmentation technologies and artificial intelligence.
This book provides both basic and advanced information on digital audio watermarking, its applications, and its evaluation for copyright protection of audio signals.
This book focuses on the multifarious aspects of 'fuzzy boundaries' in the field of discourse studies, a field that is marked by complex boundary work and a great degree of fuzziness regarding theoretical frameworks, methodologies, and the use of linguistic categories.
This book brings together a variety of approaches to English corpus linguistics and shows how corpus methodologies can contribute to the linking of diachronic and synchronic studies.
The origins of this book arise from the highly successful second SIGdial Workshop on Discourse and Dialogue that was held in September 2001 in conjunction with Eurospeech 2001.
The renewed focus on the evidential base of linguistics in general, and on syntax in particular, is to a large degree dependent on technological developments: computers, electronic storage, and transmission.
This edited book represents the first cohesive attempt to describe the literary genres of late-twentieth-century fiction in terms of lexico-grammatical patterns.
Meaning is a fundamental concept in Natural Language Processing (NLP), in the tasks of both Natural Language Understanding (NLU) and Natural Language Generation (NLG).
It has been estimated that over a billion people are using or learning English as a second or foreign language, and the numbers are growing not only for English but for other languages as well.
This book describes effective methods for automatically analyzing a sentence, based on the syntactic and semantic characteristics of the elements that form it.
Audio Signal Processing for Next-Generation Multimedia Communication Systems presents cutting-edge digital signal processing theory and implementation techniques for problems including speech acquisition and enhancement using microphone arrays, new adaptive filtering algorithms, multichannel acoustic echo cancellation, sound source tracking and separation, audio coding, and realistic sound stage reproduction.