


The trials in this word-sharing task were generally the same as in the previous homophonous naming task. The first difference is that the carriers were presented sequentially rather than in pairs. The other is that we used “百 [paj21]” (hundred) to substitute for the color term “白 [paj35]” (white) in this section, whereas no substitute character for white was recruited in the earlier task. In this tonal syllable-sharing test, we wanted a visual difference between the color term and the visual character in order to create linguistic competition, so another carrier had to be sought to avoid the sound overlap. In this section, there were 20 trials in which subjects named the colors of the visual carriers.

3.2.7. General Procedure of Shared Unit Naming Task

Another 20 subjects participated in Experiment 3. Before the naming phase, subjects were asked to recognize the substituting characters; only after it was confirmed that every carrier could be recognized and pronounced correctly were subjects allowed to begin the experiment. At the beginning of the testing phase, a star mark appeared in the center of the screen for 2000 ms, reminding subjects to focus on the coming trial. Then the visual stimulus was displayed, and subjects named the color of each carrier without any time limit. During the answering period, a SONY IC recorder was placed nearby to collect subjects' answers. After finishing the naming, they pushed the response button on the serial response box, which recorded the reaction time for that trial and initiated the next one. When answering a trial, subjects were asked to look attentively at a single carrier at a time and to process the carriers serially, one by one; changing, skipping, reversing, or omitting the processing order of carriers within a trial was prohibited. This constituted the complete course of answering one trial. In the shared unit naming task, six types of phonological units were tested: onset, vowel,


rhyme, syllable, tone, and tonal syllable. Twenty trials were designed for each target unit, recruited from the previous color naming task. Among these trials, 10 belonged to the high phonological similarity group and the other 10 to the low group. Therefore, each subject answered 20 trials in each shared-unit section, which means that 120 trials in total were presented across the six sections. Twenty subjects were recruited for this experiment, so 2,400 trials were observed and 19,200 tokens (visual carriers) in total were tested and analyzed.
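For reference, these figures fit together as follows (the eight carriers per trial are implied by the reported totals rather than stated here):

$$6 \text{ units} \times 20 \text{ trials} = 120 \text{ trials per subject}, \qquad 120 \times 20 \text{ subjects} = 2{,}400 \text{ trials}, \qquad 2{,}400 \times 8 \text{ carriers} = 19{,}200 \text{ tokens}.$$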

After the experiment, all the sound files were transcribed, and the speech errors were detected and collected for later analysis. The criteria for error detection, recording, and categorization were the same as in Experiments 1 and 2. Three types of target-error relationship were also considered in this shared unit test: phonological errors, semantic errors, and mixed errors. For the phonological and mixed errors (both of which are phonologically related), six types of phonological structural relation were analyzed: onset, vowel, rhyme, syllable, tone, and tonal syllable. The phonological similarity of the errors was also graded in this section.

E-Prime was used to record the response time of each trial. The response times of the six shared-unit groups will be compared, and the temporal pattern will then be set against the pattern of speech errors for further discussion.


Chapter 4

Results and Discussions

From the five experiments above, we recruited 22 subjects for the color naming, reading, Stroop naming, and homophonous naming tests, and another 20 subjects for the shared unit test. In total, 1,056 speech errors were collected: 96 in the color naming test (Test 1), 78 in the color reading test (Test 2), 257 in the Stroop color naming test (Test 3), 249 in the homophonous naming test (Test 4), and 376 in the shared unit test (Test 5). The influence of the independent factors, phonological similarity and phonological unit, on the number of speech errors and on response time will be discussed in the following sections.
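These per-test counts sum to the reported totals:

$$96 + 78 + 257 + 249 = 680 \text{ errors in Tests 1–4}, \qquad 680 + 376 = 1{,}056 \text{ errors overall}.$$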

This chapter is organized as follows. Section 4.1 compares and discusses the structure of the lexical errors and the reaction times across the tests, along with the role of phonological similarity and of modality in the color naming, reading, and Stroop naming tasks. Under the independent factor of phonological similarity, we examine the linguistic effects reviewed in Chapter 2 and their relation to the number of speech errors and to the temporal data. The factor of phonological unit, including initial, rhyme, syllable structure, tone, and phonotactic constraint, is analyzed and discussed in Section 4.2. Finally, with respect to the shared unit test, the question of the possible units in lexical encoding is examined and discussed in Section 4.3.

4.1. The Structure of Speech Errors and Reaction Time: Task 1 ~ Task 4

In this study, 22 subjects participated in Tests 1 to 4. There were 40 trials in each test, and 8 visual words appeared in each trial. In what follows, the data are divided into two measures: trial error frequency and error number. Trial error frequency, shown as “Trial F” below, is the number of errors subjects made in the 40 trials of each test, counted separately for the high and the low phonological similarity trials. After data collection, we obtain a set of target-error pairs from each test, and each pair is graded for phonological similarity. This yields counts of high, medium, and low phonological similarity pairs, shown as “Error N” below. For example, suppose subjects made four errors in a trial from the high phonological similarity group. Trial F is then counted as 4 in the high group and 0 in the low group.

Among these four errors, we obtain four target-error pairs. In terms of the phonological criteria in Table 3-2, one of them might be attributed to the high phonological similarity group, another to the medium group, and the remaining two to the low group. Therefore, in the Error N column, we mark 1 in the high group, 1 in the medium group, and 2 in the low group. All counts are also shown as percentages. A minimal sketch of this counting scheme appears below; Table 4-1 then presents the structure of these speech errors in the four tests among the 22 subjects.
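The following sketch reproduces the worked example above; the data structure and field names are hypothetical and serve only to make the two counting schemes concrete:

```python
# Minimal sketch of the two counting schemes described above (hypothetical data).
# "Trial F" counts errors by the similarity group of the trial they occurred in;
# "Error N" classifies each target-error pair by its graded phonological similarity.
from collections import Counter

# One hypothetical trial: it came from the high-similarity group and produced
# four target-error pairs, graded per Table 3-2 into high/medium/low.
trials = [
    {"trial_group": "high", "pair_grades": ["high", "medium", "low", "low"]},
]

trial_f = Counter()   # errors per trial similarity group
error_n = Counter()   # target-error pairs per graded similarity level

for trial in trials:
    trial_f[trial["trial_group"]] += len(trial["pair_grades"])
    error_n.update(trial["pair_grades"])

print(dict(trial_f))  # {'high': 4} -> Trial F: 4 in the high group, 0 in the low group
print(dict(error_n))  # {'high': 1, 'medium': 1, 'low': 2} -> Error N
```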

Table 4-1. The Structure of Speech Errors (N=680)

Speech Errors Test 1 Test 2 Test 3 Test 4 Total

We collected 680 errors across the four tests. With regard to Trial F, subjects produced 374 errors in the 20 trials of high phonological similarity and 306 errors in the trials of low similarity. Except for test 4, the trials of high


phonological similarity yielded more speech errors (52:44 in test 1; 52:26 in test 2; 152:105 in test 3). Test 4 shows the opposite pattern: the trials of low phonological similarity produced more speech errors (118:131 in test 4).
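These per-test figures sum to the totals just given:

$$52 + 52 + 152 + 118 = 374 \text{ (high-similarity trials)}, \qquad 44 + 26 + 105 + 131 = 306 \text{ (low-similarity trials)}.$$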

The phonological effect is the average phonological similarity score of the target-error pairs collected under Error N. The average across the four naming and reading tests is 2.59, which serves as an anchor point for comparing the phonological effect of any single test against the overall mean. The phonological effect is 2.40 in the color naming test, 2.44 in the homophonous naming test, 2.72 in the Stroop naming test, and 2.79 in the color reading test.
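If the overall figure is read as the unweighted mean of the four test means (an assumption, since a mean weighted by the number of pairs per test could differ slightly), the numbers are consistent:

$$(2.40 + 2.44 + 2.72 + 2.79)/4 \approx 2.59.$$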

The phonological effect thus appears to weigh most heavily in the color reading task, followed by the Stroop naming task; it weighs least in the color naming test, with the homophonous naming test second least. This suggests not only that the phonological relation between target and error may differ across the four tests, but also that the respective visual tasks may involve different degrees of phonological dependency.

As for Error N, we graded the error pairs and divided them into three groups according to their phonological similarity. The pairs tend to fall into the high or the low phonological similarity group (625 in total), with only 55 pairs in between. Some examples are given in (1)–(4) below:

(1) xoŋ35 → xwɑŋ35  紅 (red) → 黃 (yellow)

Example (1) is classified as a lexical error with high phonological similarity. The criteria for grading similarity are based on Table 3-2. The phonological relation between xong2 (red) and xuang2 (yellow) scores 5 points, since the two forms share syllable number, syllable structure (CGVN in the deep structure), the initial [x], the 35 tone, and


the nasal coda [ŋ]. This is a typical example of a speech error with both a semantic and a high phonological relation to its target; 266 errors of this kind were observed in this study.
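A minimal sketch of this kind of feature counting, assuming one point per shared property as described for Table 3-2 above, might look as follows (the property inventory and the representations are simplified for illustration only):

```python
# Illustrative feature-count grading for example (1), hong2 'red' -> huang2 'yellow'.
# One point is counted per shared property, following the description of Table 3-2;
# the property list and values below are simplified, not the thesis's actual coding.

def similarity_score(target, error):
    """Count how many of the listed phonological properties the two forms share."""
    return sum(1 for prop in target if target[prop] == error.get(prop))

hong2  = {"syllables": 1, "structure": "CGVN", "initial": "x", "tone": "35", "coda": "ŋ"}
huang2 = {"syllables": 1, "structure": "CGVN", "initial": "x", "tone": "35", "coda": "ŋ"}

print(similarity_score(hong2, huang2))  # 5 -> graded as high phonological similarity
```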

(2) ly51 → xej55  綠 (green) → 黑 (black)

Example (2) is a lexical substitution with little phonological relation: lü4 (green) and xei1 (black) share only syllable number. This case can be attributed to a pure lexical semantic error, because the Chinese color terms can be used in monosyllabic form, as they were in the experiments, so sharing syllable number is a trivial outcome in this study. We therefore categorize this kind of error as a pure semantic error. There are 187 errors in total attributable to this case.

(3) xwɑŋ35 → pa35  黃 (yellow) → 拔 (to pull out)

Example (3) is attributed to the case of a pure phonological error: there is no semantic relation between the target and the error. As to the error unit, the two forms share only the tone, while parts of the syllable are substituted. Only three such errors involving the syllable occurred across the tests.

(4) xej55 → lan35  黑 (black) → 蘭 (orchid)

Example (4) is a case that shares no semantic relation and little phonological similarity (except for syllable number) with its target. There are 173 errors of this kind, and it is quite unusual to find such a high proportion (25.44%) of errors that are neither semantically nor phonologically related to their targets.

In tests 2 and 3, the number of pairs with high similarity exceeds the number of low-similarity pairs (45:21 in test 2; 150:97 in test 3). In test 4, the two groups are nearly equal (116:116), while in test 1 the number of low-similarity pairs exceeds the number of high-similarity pairs (37:43). It is still difficult to judge the phonological effect on error generation from the error distribution alone.
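These per-test counts are consistent with the totals reported above:

$$(37 + 45 + 150 + 116) + (43 + 21 + 97 + 116) = 348 + 277 = 625 \text{ high- or low-similarity pairs},$$

which, together with the 55 medium pairs, gives the 680 errors of Table 4-1.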

From the error distribution, it seems that trials with high phonological similarity tended to induce more speech errors, and that subjects tended to produce errors phonologically similar to their targets. We need to submit these counts to a crosstab (chi-square) test to examine whether subjects show a similar pattern in each test.

Table 4-2. Homogeneity of Proportions Among Subjects in Each Test

Chi-Square    N    χ²    df    Sig.

Note: *, ** are significant at the .05 and .01 levels respectively.

According to the chi-square tests in Table 4-2, the Trial Fs in tests 1 and 4 pass the homogeneity test, which tells us that subjects show a congruous pattern of error distribution when reacting to the test trials. In test 1, the error frequency of the trials among subjects does not reach significance (χ²=25.6, df=20, p>.05), while Error N is significant at the .05 level (χ²=34.19, df=20, p<.05).

The result is concordant with that of the homophonous naming test. In test 4, Trial F among subjects does not reach significance (χ²=30.38, df=20, p>.05), but Error N is significant at the .05 level (χ²=36.24, df=21, p<.05). In the square naming and homophonous naming tasks, then, subjects seem to have been sensitive to the phonological similarity of the trials, but the target-error pairs did not show a congruous phonological distribution across subjects. In tests 2 and 3, subjects showed a different pattern for Trial F and Error N: both are significant in test 2 (Trial F: χ²=32.22, df=18, p<.05; Error N: χ²=30.66, df=18, p<.05), and likewise in test 3 (Trial F: χ²=52.42, df=21, p<.01; Error N: χ²=59.97, df=21, p<.05). These results show that, in the color reading and Stroop naming tests, subjects differed from one another in their trial error frequencies with respect to the phonological similarity of the trials, and their speech errors did not tend toward a common pattern of high or low phonological similarity. It seems that phonological similarity did not have the same apparent effect on every subject.
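In principle, each of these homogeneity tests corresponds to a chi-square test on a subjects × similarity-group table of error counts. A minimal sketch with made-up counts (not the actual data) is given below:

```python
# Sketch of a homogeneity-of-proportions test across subjects (illustrative counts only).
# Rows are subjects; columns are error counts in high- vs. low-similarity trials.
# A non-significant chi-square, as for Trial F in tests 1 and 4, indicates that the
# high/low proportion is congruous across subjects.
import numpy as np
from scipy.stats import chi2_contingency

counts = np.array([
    [3, 1],   # subject 1: 3 errors in high-similarity trials, 1 in low
    [2, 2],   # subject 2
    [4, 1],   # subject 3
    [1, 2],   # subject 4
])

chi2, p, df, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, df = {df}, p = {p:.3f}")
```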

In order to determine whether phonological similarity caused subjects to react differently across the four tasks, we applied a one-way ANOVA to the Trial F distribution, as shown in Table 4-3.

Table 4-3. One-Way ANOVA for Phonological Similarity and Trial Frequency

Trial

Note: *, ** are significant at the .05 and .01 levels respectively.

With regard to phonological similarity and Trial F, the computed F-ratio (14.76) exceeds the critical F value (2.67; the critical value is read with the n1 degrees of freedom for the greater mean square), so we reject the null hypothesis and accept the alternative hypothesis that differences in phonological similarity affected the error frequency of the trials that subjects made. In other words, phonological similarity produced different Trial F patterns across the four tasks [F(3, 172)=14.76, p<.01]. Table 4-4 presents the post-hoc test (Scheffé) and shows that test 1 (M=2.18, SD=1.88) and test 3 (M=5.84, SD=5.52) differ at the .01 level, as do test 1 (M=2.18, SD=1.88) and test 4 (M=5.66, SD=4.29). In addition, test 2 (M=1.77, SD=2.14) and test 3 (M=5.84, SD=5.52) differ at the .01 level, and so do test 2 (M=1.77, SD=2.14) and test 4 (M=5.66, SD=4.29). However, the pairs “test 1 × test 2” and “test 3 × test 4” do not reach the .05 significance level.

Table 4-4. Post-hoc Analysis for Table 4-3 (Scheffé)

Post-hoc Pairs Test 1

Note: *, ** are significant at the .05 and .01 levels respectively.

According to the one-way ANOVA, the Trial F distribution in the naming task (test 1) differs from that in the Stroop naming task (test 3), and it also differs from the pattern in the homophonous naming task (test 4). In addition, the Trial F pattern in the reading task (test 2) differs from the patterns in the Stroop naming and homophonous naming tasks. However, the naming task did not differ from the reading task, which implies that the techniques of color naming and term reading did not cause different phonological sensitivity in subjects. The Stroop naming task and the homophonous naming task likewise show a congruous pattern in their Trial F distributions. Taken together, these results imply that subjects showed a phonological sensitivity different from that in the naming and reading tasks when the stimuli came with visual competition (tests 3 and 4).
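For reference, the comparison summarized above corresponds to a standard one-way ANOVA over per-subject trial error counts grouped by task. A minimal sketch with hypothetical counts follows; it also recovers the critical F value cited above from the F distribution for the study's degrees of freedom:

```python
# Sketch of the one-way ANOVA comparing Trial F across the four tasks
# (the per-subject error counts below are hypothetical, not the study's data).
from scipy.stats import f, f_oneway

test1 = [2, 1, 3, 2, 4, 1]   # Trial F per subject, test 1 (illustrative)
test2 = [1, 2, 2, 0, 3, 1]   # test 2
test3 = [6, 5, 8, 4, 7, 5]   # test 3
test4 = [5, 6, 7, 4, 6, 5]   # test 4

F_ratio, p_value = f_oneway(test1, test2, test3, test4)
print(f"F = {F_ratio:.2f}, p = {p_value:.4f}")

# Critical F at alpha = .05 for the study's df (3, 172): roughly 2.66,
# in line with the 2.67 used in the text.
print(f.ppf(0.95, 3, 172))
```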

Since the phonological information of the trials appears to affect subjects' error frequency, we further examined the phonological relation of the errors to their targets.

Table 4-5. One-Way ANOVA for Phonological Similarity and Error Number

Error

Note: *, ** are significant at the .05 and .01 levels respectively.

Table 4-5 reports a one-way ANOVA. Because the computed F-ratio (15.56) exceeds the critical F value (2.67), we reject the null hypothesis and accept the alternative hypothesis that phonological similarity influenced the errors that subjects made. That is, different degrees of phonological similarity induced different error numbers across the four tasks [F(3, 172)=15.56, p<.01]. Table 4-6 presents the post-hoc test (Scheffé). The result shows that the pair of test 1 (M=1.82, SD=1.80) and test 3 (M=5.61, SD=5.49) is significant at the .01 level, as is the pair of test 1 (M=1.82, SD=1.80) and test 4 (M=5.27, SD=4.14). In addition, the pair of test 2 (M=1.50, SD=1.98) and test 3 (M=5.61, SD=5.49) reaches the .01 level, and so does the pair of test 2 (M=1.50, SD=1.98) and test 4 (M=5.27, SD=4.14). However, the pairs “test 1 × test 2” and “test 3 × test 4” do not reach the .05 significance level.

Table 4-6. Post-hoc Analysis for Table 4-5 (Scheffé)

Post-hoc Pairs Test 1

Note: *, ** are significant at the .05 and .01 levels respectively.

Based on the one-way ANOVA, phonological similarity seems to induce different error distributions among the tests. First, the error numbers in the square naming test differ from those in the Stroop naming and homophonous naming tests, but the error number distributions in the Stroop naming and homophonous naming tests do not differ from each other. Second, the error numbers in the reading test differ significantly from those in the Stroop naming and homophonous naming tests.

This result accords with that for Trial F in Tables 4-3 and 4-4. The naming task did not differ much from the reading task, nor did the pair of Stroop naming and homophonous naming tests. On the other hand, when visual competition was present, Error N showed a pattern different from those in the square naming and term reading tasks. Based on the Trial F and Error N results, phonological similarity appears to induce diverse patterns of error frequency and of target-error phonological relation when the four tests are cross-compared.
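For completeness, the Scheffé comparisons in Tables 4-4 and 4-6 follow the usual textbook criterion (the thesis does not spell the computation out, so this is the standard form): a pairwise contrast between tests $i$ and $j$ is significant when

$$\frac{(\bar{x}_i - \bar{x}_j)^2}{MS_{\text{within}}\left(\frac{1}{n_i} + \frac{1}{n_j}\right)} > (k-1)\,F_{\alpha;\,k-1,\,N-k},$$

with $k = 4$ tasks here, so the right-hand side at $\alpha = .05$ is approximately $3 \times 2.67 \approx 8.0$.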

Besides speech errors, subjects' reaction time (abbreviated RT below) for each trial was also examined in this study. The RT in each test was recorded and logged by the E-Prime experimental software. RT here refers to the time span that started when a trial was displayed and ended when the key was pushed to finish the trial. The time span therefore includes the subject's answering, repetitions, self-corrections, and pauses during the trial. Table 4-7 reports a one-way ANOVA examining whether phonological similarity affected subjects' reaction times within each of the four tests, which helps us determine whether phonological similarity caused different RT patterns in the respective tests.
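Before turning to Table 4-7, a minimal sketch of how such a per-trial RT can be derived from logged timestamps is given below (the log format and field names are hypothetical, not E-Prime's actual output fields):

```python
# Illustrative computation of per-trial RT as defined above:
# from the onset of the trial display to the key press that ends the trial,
# so any repetitions, self-corrections, or pauses fall inside the measured span.
# These log records are made up; real E-Prime logs use different field names.
trial_log = [
    {"trial": 1, "display_onset_ms": 10000, "keypress_ms": 15530},
    {"trial": 2, "display_onset_ms": 18000, "keypress_ms": 24210},
]

for entry in trial_log:
    rt = entry["keypress_ms"] - entry["display_onset_ms"]
    print(f"trial {entry['trial']}: RT = {rt} ms")
```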

Table 4-7. Homogeneity of RTs Among Subjects in Each Test (one-way ANOVA)

RT            Test 1     Test 2     Test 3     Test 4     Average
High Group    5384.94    3724.34    6565.01    6183.85    5464.53
Low Group     5619.85    3559.03    6674.17    6419.07    5568.03
Average       5502.39    3641.69    6619.59    6301.46    5516.28

RT F ratio Mean

Note: *, ** are significant at the .05 and .01 levels respectively.

According to the results in Table 4-7, the F ratios for the tests (1.10 in test 1; .49 in test 2; .11 in test 3; .42 in test 4) do not exceed the critical F values (3.96 in test 1; 4.07 in tests 2 to 4). Therefore, we cannot reject the null hypothesis, nor can we accept the alternative hypothesis that phonological similarity influenced subjects' reaction time within these tests. Furthermore, the significance values in these tests do not reach the .05 level. It seems that, within each test, phonological similarity did not cause a significant difference in RTs. On the other hand, the overall result accords with Stroop's (1935) finding that term reading is processed faster than color naming.

Although we cannot infer that phonological similarity affected RT within each test, differences still emerge when the four tests are compared in pairs. The results are shown in Tables 4-8 and 4-9.

Table 4-8. One-Way ANOVA for Phonological Similarity and RT in Tests

RT F ratio Mean

When the four tests are compared, the F ratio is 71.13, which far exceeds the critical F value of 2.67, and the data also pass Levene's test of homogeneity of variance. We can therefore reject the null hypothesis and accept the alternative hypothesis that phonological similarity produced significantly different RTs across the four tests [F(3, 172)=71.13, p<.01].

Table 4-9 provides the Scheffé post-hoc comparisons for the test pairs. When a test with visual competition is compared with one without it, test 3 (M=6.62, SD=1095.54) and test 1 (M=5.47, SD=1060.27) differ significantly, as do test 3 and test 2 (M=3.64, SD=805.01). In addition, the pair of test 4 (M=6.30, SD=1200.06) and test 1 reaches significance, as does the pair of test 4 and test 2. Phonological similarity thus seems to induce different RT patterns between the Stroop naming and square naming tests, the Stroop naming and term reading tests, the homophonous naming and square naming tests, and the homophonous naming and term reading tests.

On the other hand, if we compare the pairs that are both without visual competition (tests 1 and 2) or both with visual competition (tests 3 and 4), we get contrasting results. Test 1 (M=5.47, SD=1060.27) and test 2 (M=3.64, SD=805.01) differ significantly, while test 3 (M=6.62, SD=1095.54) and test 4 (M=6.30, SD=1200.06) do not. That is, phonological similarity induced an effect between the naming and reading tests, but not between the Stroop naming and homophonous naming tests.

Table 4-9. Post-hoc Analysis for Table 4-8 (Scheffé)

Post-hoc Pairs Test 1

Based on the results in Tables 4-7 and 4-8, phonological similarity did not induce a significant RT difference within any individual test. When the RT data are compared across the tests in pairs, however, significant differences emerge depending on the task that subjects performed.

When we cross-compare the results in Table 4-2 (the Trial F and Error N distributions) and Table 4-7 (RT), we can only see that phonological similarity in tests 1 and 4 caused subjects to produce different Trial F distributions, which indicates that subjects might have had apparently different error frequencies according to the
