
CHAPTER 6 EVALUATION

6.3 Controlled Experiment B with Customers

6.3.1 Design and Objective of Experiment B


To justify our conceptual framework and test the propositions concerning influence on customers, we recruit 33 customers for the four campaigns created in the small business enterprise (SME) campaign experiment. The four campaigns, developed in the previous experiments, are distributed equally among the customers, and each customer evaluates only one campaign. Each customer goes through a one-on-one controlled experiment (see Figure 6.3.1) lasting about one hour.

Figure 6.3.1 The Procedure of Controlled Experiment for SME

Phase 1: Choose a random campaign sequence to show to the customer

In the search engine part, we choose our campaigns from the previous SME experiments (see Table 6.2.1) to show to the customers. Since we have three different media to show the 33 customers, we set the customer experience journey so that 11 customers each visit the search engine in the 1st, 2nd, and 3rd positions of the sequence. We give customers about 5-10 minutes to read through the brand-alliance campaign (see Figure 6.3.2).

The customer then chooses a level of the campaign on the image-related site.
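The counterbalancing above can be expressed as a short sketch. The random shuffle and the function name are our own assumptions; the thesis does not state how customers were assigned to sequence positions.

```python
# Hypothetical sketch: split the 33 customers into three groups of 11,
# one group per position (1st, 2nd, or 3rd) of the search engine in
# the three-media sequence. The shuffle is an assumption for illustration.
import random

def assign_positions(n_customers=33, n_groups=3, seed=0):
    rng = random.Random(seed)
    customers = list(range(n_customers))
    rng.shuffle(customers)
    size = n_customers // n_groups
    return {pos + 1: customers[pos * size:(pos + 1) * size]
            for pos in range(n_groups)}

groups = assign_positions()
# Each of the three positions now has exactly 11 customers.
```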

國立政治大學 National Chengchi University

Figure 6.3.2 Brand alliance campaign information

Phase 2: Measure the targetability of the campaign

In the search engine part, we test two kinds of targetability. The first is keyword targetability: whether the kind of keywords used as the query can save the user's time without the need for further searching. The second is inlink targetability: customers usually search for information on certain sites, so when we place our campaign on a blog related to the campaign image (e.g., one recommended by our link-building module), we test whether customers think the image-related site, compared to an image-unrelated site, shortens their search cost for the information they want.

Figure 6.3.3 Measure the targetability of keywords and links

Phase 3: Measure the campaign and brand partner consistency

The customers select three images in order (see Figure 6.3.7); we also provide detailed adjectives for the 14 images to empower the customers to tell the differences among the 14 images (Appendix A).

Operational definition for the calculation of Campaign and Brand Partner Consistency:

Since each customer selects three images in order for each brand, site, campaign, and favorite, we can calculate the consistency between brands, between brand and site, between brand and campaign, and between site and favorite.

The consistency calculation compares two ordered rows of three images: a matched image adds +3/+2/+1 for occupying the 1st/2nd/3rd place of a row, counted for its place in each of the two rows, so the score ranges from 0 to 12. We give two examples: between-brand consistency (see Figure 6.3.4) and brand-and-campaign consistency (see Figure 6.3.5).
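The scoring rule can be sketched in Python. The function name and the list representation of a row are our own, and the both-rows weighting is our reading of the 0-12 score range shown in the figures:

```python
# Sketch of the consistency score: a matched image adds +3/+2/+1 for
# sitting in the 1st/2nd/3rd place, counted once for its place in each
# of the two rows being compared (maximum 3+3+2+2+1+1 = 12).
WEIGHTS = [3, 2, 1]  # weight of the 1st, 2nd, and 3rd ordered image

def consistency_score(row_a, row_b):
    """row_a, row_b: lists of three image labels in ranked order."""
    score = 0
    for i, image in enumerate(row_a):
        if image in row_b:
            score += WEIGHTS[i] + WEIGHTS[row_b.index(image)]
    return score

# E.g., matches at (1st, 1st) and (2nd, 3rd) give (3+3) + (2+1) = 9.
```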

Figure 6.3.4 Between-Brand Consistency Score

Figure 6.3.5 Brand and Campaign Consistency Score

Operational definition for the calculation of Image Distinguishing Ability:

We define a customer with high distinguishing ability as one whose selections show a high degree of consistency with everybody's opinions on the different brands' campaigns and so on. We give an example of subject 1's image-distinguishing-ability score on the brand 原鄉物語 (see Figure 6.3.6). The calculation counts matches without regard to order, because we think it is acceptable that people recognize an image from their own angle; thus the calculation only counts whether a selection matches everybody's voted images or not.
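This order-free matching can be sketched as follows; the function name and the adjective labels in the example are illustrative, not taken from the thesis:

```python
def image_distinguish_score(subject_images, voted_images):
    """Count how many of a subject's three selected images appear among
    everybody's three voted images, ignoring order (score range 0-3)."""
    return len(set(subject_images) & set(voted_images))

# E.g., if two of the subject's images appear among everybody's voted
# images, the score is 2.
```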

Figure 6.3.6 Image Distinguishing Ability Score for Subject 1 (subject 1's three selected images for the brand 原鄉物語 are matched against everybody's #1, #2, and #3 voted images; each match adds +1, for a score range of 0-3, giving subject 1 a score of 2)

Figure 6.3.7 Image selection for campaign, brands, site, and favorite

Phase 4: Test the engagement level of the campaign on image-related and image-unrelated sites

Through the four scenarios we set up (see Figure 6.3.8), SMEs simulate the customer journey in the search engine media. Customers can then report the engagement level they hold during the journey.

We take an image-related site as an example. ZABU 雜鋪 is a site with a natural image, the same image as the campaign's. The customers follow the link to ZABU 雜鋪 and imagine themselves as active users of the inlink site.

They can spend some time looking at what the site posts and what kind of style it has. We then ask the customers how, under the scenario, they intend to engage on the inlink site, and what their further engagement level toward the small business enterprise would be (see the sample in Figure 6.3.8).

Figure 6.3.8 Four scenarios to test the engagement level of customers

Figure 6.3.9 Image-related inlink site engagement level

From the search engine with different keywords:

Scenario 1: The customer searches with the SME-original-chosen keywords to reach the campaign.

Scenario 2: The customer searches with the SME-revised long-tail keywords to reach the campaign.

From a different website:

Scenario 3: The customer reaches the campaign through an image-related site (links recommended for the campaign image).

Scenario 4: The customer reaches the campaign through an image-unrelated site (links recommended for another image).

Phase 5: Test the engagement level of SME-revised and SME-original-chosen keywords

The SME-revised versus SME-original-chosen keyword engagement questionnaire is designed to contrast the two situations: how customers react and engage differently in each (see the sample in Figure 6.3.10).

Figure 6.3.10 SME-revised inlink long tail keyword