
Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27, 379-423, 623-656.

Shepard, R. N., Hovland, C. I., & Jenkins, H. M. (1961). Learning and memorization of classifications. Psychological Monographs: General and Applied, 75(13, Whole No. 517).

Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119, 3-22.

Sloutsky, V. M. (2003). The role of similarity in the development of categorization. Trends in Cognitive Sciences, 7, 246-251.

Sloutsky, V. M. (2010). From perceptual categories to concepts: What develops? Cognitive Science, 34, 1244-1286.

Smith, E. E., & Medin, D. L. (1981). Categories and concepts. Cambridge, MA: Harvard University Press.

Smith, J. D., Murray, M. J., & Minda, J. P. (1997). Straight talk about linear separability. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 659-680.

Sowell, E. R., Thompson, P. M., Holmes, C. J., Batth, R., Jernigan, T. L., & Toga, A. W. (1999). Localizing age-related changes in brain structure between childhood and adolescence using statistical parametric mapping. NeuroImage, 9, 587-597.

Spellman, B. A. (1993). Implicit learning of base rates. Psycoloquy, 4(61). Retrieved June 9, 2012, from http://www.cogsci.ecs.soton.ac.uk/cgi/psyc/newpsy?4.61

Stanton, R. D., & Nosofsky, R. M. (2007). Feedback interference and dissociations of classification: Evidence against the multiple-learning-systems hypothesis. Memory & Cognition, 35, 1747-1758.

Verguts, T., Ameel, E., & Storms, G. (2004). Measures of similarity in models of categorization. Memory & Cognition, 32, 379-389.

Wickens, T. D. (1989). Multiway contingency tables analysis for the social sciences. Hillsdale, NJ: Lawrence Erlbaum.

Zeithamova, D., & Maddox, W. T. (2006). Dual-task interference in perceptual category learning. Memory & Cognition, 34, 387-398.

Examples of calculating statistical density

To illustrate the procedure of calculating density more clearly, consider a simple example.

Suppose that the materials have two dimensions (e.g., shape and color) with two levels on each dimension (black and white; square and circle), so there are four possible combinations of shape and color. In this example, all the target items are black squares, while all the contrasting items are white circles. In addition, assume that the number of target items is the same as the number of contrasting items.

To calculate the density, the probability of each possible combination in the within- and between-category must be known. The within-category covers the target items only, which are all black squares in this example; therefore, the probability of black-square is 1.0. The between-category covers both the target and contrasting items; therefore, the probability of black-square is 0.5, as is the probability of white-circle (see Table 10).

Table 10. Matrix of within-category and between-category probability in example 1

                   Black Square   Black Circle   White Square   White Circle
Within-category        1.0             0              0              0
Between-category       0.5             0              0              0.5
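As a sanity check on Table 10, the following minimal Python sketch derives the within- and between-category distributions directly from explicit item lists; the helper name `distribution` is hypothetical, not from the text.

```python
from collections import Counter

# Target category: black squares only; contrasting category: white circles.
# Equal numbers of target and contrasting items, as assumed in the text.
targets = [("black", "square")] * 4
contrasts = [("white", "circle")] * 4

def distribution(items):
    """Relative frequency of each (color, shape) combination."""
    counts = Counter(items)
    return {combo: n / len(items) for combo, n in counts.items()}

print(distribution(targets))              # {('black', 'square'): 1.0}
print(distribution(targets + contrasts))  # black-square: 0.5, white-circle: 0.5
```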

After obtaining the probability of each feature combination, we can compute $H_{dim}$, the dimensional entropy. Dimensional entropy has two parts, within-category and between-category (equations 4-1 and 4-2). To calculate them, it is necessary to consider the probability matrices from the aspect of dimensions first. There is only one kind of stimulus in the target category (black squares only), so the probability matrix of color in the within-category ($p_{black}$ and $p_{white}$) is $[1.0 \; 0]$, and the probability matrix of shape in the within-category ($p_{square}$ and $p_{circle}$) is also $[1.0 \; 0]$. One more combination (white circle) appears when we consider the between-category probability, so the matrix is $[0.5 \; 0.5]$ on both the color and shape dimensions (Table 11). Putting these matrices into equations 4-1 (for within-category) and 4-2 (for between-category) respectively, we get $H_{dim}^{within} = 0$ and $H_{dim}^{between} = 2$ (the attention weight has not been considered yet).

Table 11. Probability matrix of within- and between-category from the aspect of dimensions in example 1

                        Color                Shape
                   Black     White     Square     Circle
Within-category     1.0        0         1.0         0
Between-category    0.5       0.5        0.5        0.5

Next is $H_{rel}$, the relational entropy. Before it is computed, the number of relations needs to be considered. According to equation 5, there is only one relation between the two dimensions. To calculate the relational entropy, we use the matrices of feature combinations displayed in Table 10: the matrix for the within-category is $[1.0 \; 0 \; 0 \; 0]$ and the matrix for the between-category is $[0.5 \; 0 \; 0 \; 0.5]$. Putting these probabilities into equations 5-1 and 5-2 respectively gives $H_{rel}^{within} = 0$ and $H_{rel}^{between} = 1$ (the attention weight has not been considered yet). Finally, we set $w_i$ to 1 and $w_k$ to 0.5, so $H_{within} = 1 \times 0 + 0.5 \times 0 = 0$ and $H_{between} = 1 \times 2 + 0.5 \times 1 = 2.5$, and the density is $1 - H_{within}/H_{between} = 1 - 0/2.5 = 1$ in this example.
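Since equations 4-1 through 5-2 are not reproduced in this appendix, the sketch below is a minimal Python rendering under the assumption that they reduce to Shannon entropies, $-\sum_j p_j \log_2 p_j$, summed over dimensions and relations and combined with the weights $w_i = 1$ and $w_k = 0.5$ used above; the helper names (`shannon_entropy`, `weighted_entropy`) are hypothetical. Under that assumption it reproduces the example-1 numbers:

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)), skipping zero-probability cells."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

W_DIM, W_REL = 1.0, 0.5  # attention weights w_i and w_k from the text

def weighted_entropy(dim_matrices, rel_matrix):
    """Weighted sum of dimensional and relational entropies."""
    h_dim = sum(shannon_entropy(m) for m in dim_matrices)  # eq. 4 analogue
    h_rel = shannon_entropy(rel_matrix)                    # eq. 5 analogue
    return W_DIM * h_dim + W_REL * h_rel

# Example 1: color and shape matrices from Table 11, relation matrix
# over the four feature combinations from Table 10.
h_within = weighted_entropy([[1.0, 0.0], [1.0, 0.0]], [1.0, 0.0, 0.0, 0.0])
h_between = weighted_entropy([[0.5, 0.5], [0.5, 0.5]], [0.5, 0.0, 0.0, 0.5])

print(h_within, h_between, 1 - h_within / h_between)  # 0.0 2.5 1.0
```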

Suppose that the feature combinations are more complicated. Now the target items are composed of half black squares and half white circles, while the contrasting items are composed of black circles and white squares. The probability matrix of within- and between-category is shown in Table 12, and the probability matrix from the aspect of dimensions is shown in Table 13.

Table 12. Matrix of within-category and between-category probability in example 2

                   Black Square   Black Circle   White Square   White Circle
Within-category        0.5             0              0              0.5
Between-category       0.25           0.25           0.25           0.25

Table 13. Probability matrix of within- and between-category from the aspect of dimensions in example 2

                        Color                Shape
                   Black     White     Square     Circle
Within-category     0.5       0.5        0.5        0.5
Between-category    0.5       0.5        0.5        0.5

In this example, there are two kinds of stimuli in the target items (black squares and white circles). Assuming they are equal in number, the probability matrix of color in the within-category ($p_{black}$ and $p_{white}$) is $[0.5 \; 0.5]$, as is that of shape ($p_{square}$ and $p_{circle}$), when calculating $H_{dim}^{within}$ (see Table 13). Therefore, $H_{dim}^{within}$ is 2 (the attention weight has not been considered yet). On the other hand, there are four kinds of stimuli when considering the between-category, namely black squares, black circles, white squares, and white circles, but the probability matrix is still $[0.5 \; 0.5]$ on each dimension: half of the stimuli are black on the color dimension, and half are squares on the shape dimension (see Table 13). Therefore, $H_{dim}^{between}$ is also 2 (the attention weight has not been considered yet). Compared to the calculation of dimensional entropy, computing $H_{rel}$ is more straightforward: simply use the matrix $[0.5 \; 0 \; 0 \; 0.5]$ for the within-category and $[0.25 \; 0.25 \; 0.25 \; 0.25]$ for the between-category, as displayed in Table 12. Therefore, $H_{rel}^{within}$ is 1 and $H_{rel}^{between}$ is 2. Applying the attention weights, $H_{within} = 1 \times 2 + 0.5 \times 1 = 2.5$ and $H_{between} = 1 \times 2 + 0.5 \times 2 = 3$, so the density is $1 - 2.5/3 \approx 0.167$ in this example.
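Re-running the same hedged sketch with the example-2 matrices from Tables 12 and 13 reproduces the numbers above (the hypothetical helpers are repeated so the snippet runs on its own):

```python
import math

def shannon_entropy(probs):
    # H = -sum(p * log2(p)), skipping zero-probability cells.
    return sum(-p * math.log2(p) for p in probs if p > 0)

def weighted_entropy(dim_matrices, rel_matrix, w_dim=1.0, w_rel=0.5):
    return (w_dim * sum(shannon_entropy(m) for m in dim_matrices)
            + w_rel * shannon_entropy(rel_matrix))

# Example 2: dimensional matrices (Table 13), relation matrices (Table 12).
h_within = weighted_entropy([[0.5, 0.5], [0.5, 0.5]], [0.5, 0.0, 0.0, 0.5])
h_between = weighted_entropy([[0.5, 0.5], [0.5, 0.5]], [0.25, 0.25, 0.25, 0.25])

print(h_within, h_between, 1 - h_within / h_between)  # 2.5 3.0 ~0.1667
```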

As can easily be observed in the two examples above, density is markedly higher when there is less variability in the target items (within-category): in example 1, where all target items are black squares, the density is 1, whereas the greater variability of the target items in example 2 lowers the density to 0.167. Therefore, the variability of the target items is one of the dominant factors affecting density. The salience of features and the similarity of stimuli share some ideas with density, but they are not the same (Sloutsky, 2010).

The salience of each feature would affect density because the density equations are composed of weighted entropies, but the formulation simplifies the treatment of salience: it assumes that the saliences of all features are equal, and uses a single ratio to represent the relation between the attentional weights of dimensional and relational entropy instead of calculating an attentional weight for every feature. Similarity is also part of the concept of density, since high similarity within the target category tends to produce high density. However, the density equations include both dimensional entropy and relational entropy, which means that high density can be generated not only by high similarity but also by low relational entropy.
