
Ministry of Education and Science of the Russian Federation

Federal State Budgetary Educational Institution of Higher Education

"Perm National Research Polytechnic University"

Ya.N. Ronzhina

METHODS OF LINGUISTIC ANALYSIS

Approved by the university's Editorial and Publishing Council as a study guide

Perm National Research Polytechnic University Press

2016

UDC 811.111'42 (072.8)
BBK Sh 143.21-923.7
R71

Reviewers:

N.L. Myshkina, Doctor of Philology, Professor, Department of Foreign Languages, Linguistics and Translation (Perm National Research Polytechnic University);

E.L. Slovikova, Candidate of Philology, Associate Professor, Department of Linguistics and Translation (Perm State National Research University)

Ronzhina, Ya.N.

R71  Methods of Linguistic Analysis: a study guide / Ya.N. Ronzhina. Perm: Perm National Research Polytechnic University Press, 2016. 25 p.

ISBN 978-5-398-01626-0

The guide presents excerpts from original texts together with accompanying assignments.

It is intended for use in practical classes in the course "Linguistic Analysis of Scientific and Technical Texts" within the bachelor's degree program in 45.03.02 Linguistics, specialization "Translation and Translation Studies". It may also be used by senior undergraduate students, master's students, and postgraduate students when writing their qualification theses and dissertations in linguistics.

© PNRPU, 2016

 

CONTENTS

Preface
1. Foundations of Quantitative (Statistical) and Experimental Methods of Linguistic Analysis
2. Areas of Application of Quantitative (Statistical) and Experimental Methods of Linguistic Analysis
3. Methods of Collecting, Processing, and Analyzing Linguistic Data
References

PREFACE

This study guide is intended for use in practical classes in the course "Linguistic Analysis of Scientific and Technical Texts" within the bachelor's degree program in 45.03.02 Linguistics, specialization "Translation and Translation Studies". It was compiled to help students build a methodological foundation for independent research and to develop an analytical approach to reading and studying primary sources, which requires appropriate training. To this end, the guide contains excerpts from original texts covering the topics of the practical classes in Section 3 (Module 2) of the course syllabus, namely quantitative (statistical) and experimental methods in linguistics, as well as assignments that direct readers toward extracting the relevant information.

1. FOUNDATIONS OF QUANTITATIVE (STATISTICAL) AND EXPERIMENTAL METHODS OF LINGUISTIC ANALYSIS

ASSIGNMENT 1. Read excerpts /1/–/8/ carefully and, on their basis, draw conclusions about: 1) the differences between qualitative and quantitative methods of linguistic analysis; 2) the main goals of quantitative analysis; 3) the first questions a researcher faces when designing a study based on a quantitative approach; 4) the main aspects of quantitative research methods; 5) the kinds of observation (quantitative and qualitative); 6) the types of data distribution.

/1/ “Put briefly, qualitative research is concerned with structures and patterns, and how something is; quantitative research, however, focuses on how much or how many there is/are of a particular characteristic or item. The great advantage of quantitative research is that it enables us to compare relatively large numbers of things/people by using a comparatively easy index. For example, when marking student essays, a lecturer will first look at the content, the structure and coherence of the argument, and the presentation, that is, analyse it qualitatively, but will ultimately translate this into a mark (i.e. a number), which allows us to compare two or more students with each other: a student gaining a 61% did better than a student achieving a 57%, because 61 is larger than 57 – we do not need to look at the essays per se once we have the numerical, quantitative value indicating their quality. Quantitative data can be analysed using statistical methods, that is, particular mathematical tools which allow us to work with numerical data” [Rasinger 2010: 52].

/2/ “There is another fundamental difference between qualitative and quantitative studies. Qualitative studies are, by their very nature, inductive: theory is derived from the results of our research. A concrete example: Rampton (1995) in his study on linguistic ‘crossing’ was interested in how South Asian adolescents growing up in the United Kingdom use code-switching between English and Punjabi to indicate their social and ethnic identity. Using interview data from interaction between teenagers of South Asian descent, he identified particular patterns behind code-switches, and was able to infer what the underlying ‘rules’ with regard to use of a particular language and construction of identity were; as such, he used an inductive qualitative approach: theory was derived from (textual) data.

Quantitative research, however, is deductive: based on already known theory we develop hypotheses, which we then try to prove (or disprove) in the course of our empirical investigation. Hypotheses are statements about the potential and/or suggested relationship between at least two variables, such as ‘the older a learner, the less swear words they use’ (two variables) or ‘age and gender influence language use’ (three variables). A hypothesis must be proven right or wrong, and hence, it is important for it to be well defined. In particular, hypotheses must be falsifiable and not be tautological: the hypothesis ‘age can either influence a person’s language use or not’ is tautological – independent from our findings, it will always be true. A good hypothesis, however, must have the potential of being wrong. For a detailed discussion of hypotheses (and laws, and how they can be combined to form theories), see Scott and Marshall (2005)” [Rasinger 2010: 52-53].

/3/ “Quantitative analyses are all about counting something. <…> In order for something to be counted, two conditions are normally considered to be necessary: (a) what you want to count must itself be ‘countable’ (i.e. quantifiable), and (b) what you want to count must have the potential to be variable (i.e. be able to change). Imagine, for example, that you were conducting a poll on which issues most affected voters’ choice of candidate in recent parliamentary elections. The condition of quantifiability requires that you operationalize the possible set of responses so that they can be counted in a clear and coherent way (see section 3.2). You may, for instance, decide that you will group responses into categories, such as ‘environment’, ‘economy’ and ‘education’, such that you give a certain structure to the diversity of responses you receive (this is typically called coding). It is this structure that will then allow you to quantitatively analyse the results, by, for example, counting how many responses fall into each of your predetermined categories” [Levon 2010: 68-69].
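The coding-and-counting procedure Levon describes can be sketched in a few lines of Python; the categories and responses below are invented for illustration:

```python
from collections import Counter

# Hypothetical coded poll responses: each answer has already been
# assigned to one of the predetermined categories ("coding").
responses = ["environment", "economy", "environment", "education",
             "economy", "environment"]

# Counting how many responses fall into each category.
counts = Counter(responses)
print(counts["environment"])  # → 3
```

Once responses are coded, the tallies themselves become quantified observations open to statistical treatment.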

/4/ “Quantitative analysis takes some time and effort, so it is important to be clear about what you are trying to accomplish with it. Note that “everybody seems to be doing it” is not on the list. The four main goals of quantitative analysis are:

1) data reduction: summarize trends, capture the common aspects of a set of observations such as the average, standard deviation, and correlations among variables;

2) inference: generalize from a representative set of observations to a larger universe of possible observations using hypothesis tests such as the t-test or analysis of variance;

3) discovery of relationships: find descriptive or causal patterns in data which may be described in multiple regression models or in factor analysis;

4) exploration of processes that may have a basis in probability: theoretical modeling, say in information theory, or in practical contexts such as probabilistic sentence parsing” [Johnson 2008: 3].
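The data-reduction goal in Johnson's list can be sketched with the Python standard library; the scores below are invented numbers, not data from any study:

```python
import statistics

# Hypothetical essay scores from two groups of learners.
group_a = [61, 57, 64, 59, 62]
group_b = [55, 58, 52, 56, 54]

# Goal 1, data reduction: summarize each set of observations.
mean_a = statistics.mean(group_a)   # central tendency: 60.6
sd_a = statistics.stdev(group_a)    # spread around the mean

# Correlation between the two variables (Pearson's r), computed directly.
mean_b = statistics.mean(group_b)
cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(group_a, group_b))
r = cov / (sum((a - mean_a) ** 2 for a in group_a) ** 0.5
           * sum((b - mean_b) ** 2 for b in group_b) ** 0.5)

print(round(mean_a, 1), round(sd_a, 2), round(r, 2))
```

Goal 2, inference, would go a step further and ask whether the group difference generalizes, for example with an independent-samples t-test (`scipy.stats.ttest_ind` in a statistics library).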

/5/ “Researchers adopting a quantitative approach seek to investigate phenomena by collecting numerical data and analyzing those data statistically. To facilitate this statistical analysis and to control for extraneous variables, quantitative researchers typically recruit a large number of participants and carefully design all aspects of the study before collecting data. In this design process, the quantitative researcher faces a number of questions, including: Do I need more than one group? If so, how many groups are needed to address the research question(s)? How should participants be placed into groups? How will data be collected from the participants, and how often? If an experimental approach is adopted – for example, observations or measurements to be collected under relatively controlled conditions – what will the treatment consist of (e.g., stimuli, timed response, feedback)? How will extraneous variables be addressed?

Asking and answering these questions is important for ensuring three general desiderata of quantitative research in any discipline: validity, reliability, and replicability. A study possessing internal validity is one where the researcher can, with some degree of confidence, conclude that it was the stimulus or treatment that was responsible for observed effects and not chance or some other factor, such as practice, maturation, or measurement problems. In addition, the results of an externally valid study can be generalized beyond the immediate sample. That is, if a study possesses external validity, the results should hold true not only for the participants in the study, but for a larger population as well. Reliability refers to the consistency of measurement, both by different raters (inter-rater reliability) and by different instruments (instrument reliability) (Abbuhl and Mackey 2008). The final component, replicability, is also an essential component of quantitative research. A replicable study refers to one whose results can be repeated with other subject populations and in other contexts. As Porte (2012) notes, a study that cannot be replicated should be treated cautiously by the field” [Schütze, Sprouse 2013: 116–117].
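The inter-rater reliability mentioned in /5/ can be illustrated with the simplest possible measure, raw percent agreement; the codings below are invented, and published studies normally report chance-corrected measures such as Cohen's kappa instead:

```python
def percent_agreement(rater1, rater2):
    """Inter-rater reliability in its crudest form: the share of items
    that two raters coded identically."""
    matches = sum(a == b for a, b in zip(rater1, rater2))
    return matches / len(rater1)

# Hypothetical codings of five utterances by two independent raters.
r1 = ["formal", "informal", "formal", "formal", "informal"]
r2 = ["formal", "informal", "informal", "formal", "informal"]
print(percent_agreement(r1, r2))  # → 0.8
```

A low agreement score signals that the coding scheme is not being applied consistently, which undermines the reliability Schütze and Sprouse describe.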


/6/ “An observation can be obtained in some elaborate way, like visiting a monastery in Egypt to look at an ancient manuscript that hasn’t been read in a thousand years, or renting an MRI machine for an hour of brain imaging. Or an observation can be obtained on the cheap – asking someone where the shoes are in the department store and noting whether the talker says the /r/’s in “fourth floor”.

Some observations can’t be quantified in any meaningful sense. <…> However, if you were to observe that the form was used 15 times in this manuscript, but only twice in a slightly older manuscript, then these frequency counts begin to take the shape of quantified linguistic observations that can be analyzed with the same quantitative methods used in science and engineering. <…>

Each observation will have several descriptive properties – some will be qualitative and some will be quantitative – and descriptive properties (variables) come in one of four types:

Nominal: Named properties – they have no meaningful order on a scale of any type. <…>

Ordinal: Orderable properties – they aren’t observed on a measurable scale, but this kind of property is transitive so that if a is less than b and b is less than c then a is also less than c. <…>

Interval: This is a property that is measured on a scale that does not have a true zero value. <…>

Ratio: This is a property that we measure on a scale that does have an absolute zero value. <…>” [Johnson 2008: 3–5].
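Johnson's four variable types can be made concrete with a small sketch; every value below is an invented example:

```python
# Hypothetical observations illustrating the four variable types.
nominal = "Punjabi"               # named category: no order, no scale
ordinal = ["low", "mid", "high"]  # orderable, but the gaps are not measurable
interval_c = 20.0                 # Celsius temperature: no true zero, so
                                  # 20 °C is not "twice as warm" as 10 °C
ratio_ms = 350.0                  # reaction time in ms: true zero, so
                                  # 700 ms really is twice 350 ms

# Ordinal transitivity: "low" < "mid" and "mid" < "high" implies "low" < "high".
rank = {level: i for i, level in enumerate(ordinal)}
assert rank["low"] < rank["mid"] < rank["high"]

# Ratios are meaningful only on a ratio scale.
print(ratio_ms * 2 / ratio_ms)  # → 2.0
```

The type of a variable determines which statistics are legitimate: a mean makes sense for interval and ratio data, a median for ordinal data, and only frequency counts for nominal data.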

/7/ “The condition of variability, however, is a more abstract and basic one. It requires, simply, that the possibility of variation exist in your response set. In your poll of voter motivations, this condition is met, since all voters are presumably not motivated by the same things. Now, you may find in conducting your poll that in fact all voters do claim to be motivated by the same issue, the ‘environment’, for example.


This result, however, does not mean that the condition of variability is violated since they could have been motivated by other things, and it just so happens that they are all motivated by the same thing. The condition of variability is therefore a requirement about the possible existence of variation, and does not mean that variation will actually be found.

Because of this variability requirement, the things that we count in quantitative analyses are called variables” [Levon 2010: 69].

/8/ “Data come in a variety of shapes of frequency distributions (Figure 1.6). For example, if every outcome is equally likely then the distribution is uniform. This happens for example with the six sides of a dice – each one is (supposed to be) equally likely, so if you count up the number of rolls that come up “1” it should be on average 1 out of every 6 rolls.

In the normal – bell-shaped – distribution, measurements tend to congregate around a typical value and values become less and less likely as they deviate further from this central value. As we saw in the section above, the normal curve is defined by two parameters – what the central tendency is (µ) and how quickly probability goes down as you move away from the center of the distribution (σ).

If measurements are taken on a scale (like the 1–9 grammaticality rating scale discussed above), as we approach one end of the scale the frequency distribution is bound to be skewed because there is a limit beyond which the data values cannot go. We most often run into skewed frequency distributions when dealing with percentage data and reaction time data (where negative reaction times are not meaningful).

The J-shaped distribution is a kind of skewed distribution with most observations coming from the very end of the measurement scale. For example, if you count speech errors per utterance you might find that most utterances have a speech error count of 0. So in a histogram, the number of
