
DOI 10.1007/s10796-006-9005-4

Parameter learning of personalized trust models in broker-based distributed trust management

Jane Yung-jen Hsu · Kwei-Jay Lin · Tsung-Hsiang Chang · Chien-ju Ho · Han-Shen Huang · Wan-rong Jih

Published online: 5 December 2006

© Springer Science + Business Media, LLC 2006

Abstract Distributed trust management addresses the challenges of eliciting, evaluating and propagating trust for service providers on the distributed network. By delegating trust management to brokers, individual users can share their feedbacks for services without the overhead of maintaining their own ratings. This research proposes a two-tier trust hierarchy, in which a user relies on her broker to provide reputation rating about any service provider, while brokers leverage their connected partners in aggregating the reputation of unfamiliar service providers. Each broker collects feedbacks from its users on past transactions. To accommodate individual differences, personalized trust is modeled with a Bayesian network. Training strategies such as the expectation maximization (EM) algorithm can be deployed to estimate both server reputation and user bias. This paper presents the design and implementation of a distributed trust simulator, which supports experiments under different configurations. In addition, we have conducted experiments to show the following. 1) Personal rating error converges to below 5% consistently within 10,000 transactions regardless of the training strategy or bias distribution. 2) The choice of trust model has a significant impact on the performance of reputation prediction. 3) The two-tier trust framework scales well to distributed environments. In summary, parameter learning of trust models in the broker-based framework enables both aggregation of feedbacks and personalized reputation prediction.

Keywords Distributed trust management · Reputation mechanism · Probabilistic trust model · Personalized feedback rating · Parameter learning · Expectation maximization

This research is supported in part by the National Science Council of Taiwan #NSC-94-2218-E-002-057, Institute for Information Industry #94-CS-0457, UC MICRO project #04-0511 and GeoSpatial Technologies, Inc.

J. Y.-j. Hsu (✉) · T.-H. Chang · C.-j. Ho · W.-r. Jih
Computer Science and Information Engineering, National Taiwan University, Taipei 106, Taiwan
e-mail: yjhsu@ntu.edu.tw

K.-J. Lin
Electrical Engineering and Computer Science, University of California, Irvine, CA 92697, USA
e-mail: klin@uci.edu

H.-S. Huang
Institute of Information Science, Academia Sinica, Taipei 115, Taiwan

1 Introduction

Trust is an important relationship between individual entities engaging in any transaction. Each individual holds beliefs about certain attributes of the other. In addition to identification, who the subject entity is, and qualification, whether the subject entity is capable of performing the requested service, the trust relationship gauges consistency, that is, how well and how reliably the subject entity is able to deliver a service or a result (Lin, Lu, Yu, & Tai, 2005).

In the online world, the ability to identify the trustworthiness of a target partner/server has become critically important. For example, how does an eBay buyer decide which seller will deliver the requested item as promised? Similarly, how does an enterprise application select the web services to help achieve its goals? While it is relatively easy to be deceitful online due to the lack of physical contact, deceptive behaviors will be discouraged in repeated interactions if experiences can be captured. The rapid growth of online transactions and e-business activities suggests that traditional encryption and authentication mechanisms are no longer sufficient to adequately address the trust issue. It is imperative to provide trust management through a reputation mechanism based on user feedbacks from past transactions, as in Zacharia and Maes (2000).

Amazon and eBay are successful examples of centralized reputation systems, which help foster trust for vendors. With a single trust authority controlling all reputation information, such systems may be vulnerable, inflexible, and difficult to scale up. When the centralized reputation system is owned by a single business entity, one may also raise issues about subjectivity. In contrast, a software agent working on behalf of its users may choose to maintain a reputation rating for every service provider. Building up a distributed trust relationship can facilitate collaboration in a multi-agent system. However, eliciting reputation information for each agent individually can be challenging. Moreover, when the number of feedbacks collected is small, the ratings can be easily skewed by potentially biased agents.

This research proposes a two-tier trust hierarchy, in which a user relies on its trust broker to provide reputation information about any service provider, while brokers leverage their connected partners in aggregating the reputation of unfamiliar services. The software brokers act as trusted domain experts that manage the trust relationship for general web users. Trust brokers are independently maintained and operated; users are free to choose among many brokers available, much like people can choose their own CPAs and lawyers. Each broker is in charge of collecting and aggregating feedbacks from its users. The broker-based trust framework avoids the pitfalls of the centralized approach, while ensuring meaningful trust ratings in standard operations.

Even though all users belonging to the same broker are assumed to share certain characteristics, e.g. membership, locations, or common interests, they are not without personal differences. To accommodate individual differences, this paper presents a probabilistic approach to modeling personalized feedback with a Bayesian network. To tease apart the subjective user bias from the objective server performance, a broker may learn the trust model given only the observable user feedback data. In our design, model fitting training strategies such as the expectation maximization (EM) algorithm (Dempster, Laird, & Rubin, 1977) are used to approximate both the server performance and user bias by searching for the local maximum of the likelihood function based on a probabilistic trust model and observed user feedbacks.

To evaluate the performance of the proposed trust framework, we have implemented a distributed trust simulator and conducted extensive simulations. In particular, we performed experiments to compare the performance of two training strategies under shifted bias distributions; to examine how different trust models affect the reputation prediction accuracy; and to illustrate the scalability of personalized reputation rating using broker-based trust management in a distributed environment.

This paper is organized as follows. Section 2 provides an overview of related research on trust management. The broker-based distributed trust framework is introduced in Section 3. In Section 4, we define the probabilistic trust model and describe the procedure for training such a model with EM. The design of a distributed trust simulator and the simulation process are detailed in Section 5, followed by experimental results in Section 6 and the conclusion in Section 7.

2 Related work

In recent years, the design of trust management frameworks has gained much attention in e-commerce, online auctions, peer-to-peer systems, web services and multi-agent systems. Trust management generally relies on a reputation mechanism based on user feedbacks from past transactions. In a comprehensive survey by Dellarocas and Resnick (2003), approaches to online reputation mechanisms are classified into centralized (Zacharia and Maes, 2000) and distributed (Kamvar, Schlosser, & Garcia-Molina, 2003; Yu & Singh, 2000). For example, Amazon computes the average of product ratings according to customer reviews in a centralized fashion. Similarly, eBay utilizes a centralized server to keep track of trust scores based on a simple accumulation of positive, negative, or neutral user feedbacks. Buyers and sellers have the opportunity to rate each other after each transaction, and ratings (with specific comments) over the last six months are maintained.

For peer-to-peer networked environments, trust ratings are usually collected locally. The EigenTrust algorithm proposed by Kamvar et al. (2003) is based on the notion of transitive trust, in which all peers in the file-sharing network cooperate to compute and store the global trust vector using power iteration. The approach, similar to the idea of PageRank in Google search, was shown to be resistant to various attacks. In addition to combining feedbacks using a simple weighted average in Zacharia and Maes (2000), a Dempster–Shafer evidential model based on the word-of-mouth topology is proposed by Yu and Singh (2002). The model distinguishes between uncertainty and negative feedbacks to provide more accurate ratings. A Bayesian network model is proposed by Wang and Vassileva (2003) to combine ratings on different aspects of a server.

A broker framework for web applications was introduced by Lin et al. (2005), where service brokers manage trust information for their respective users. The framework combines three levels of trust and utilizes a security broker, a trust network, and a reputation authority at the respective levels. By delegating trust management to brokers, individual users only need to ask their brokers about the reputation of a service before engaging in any transaction. Each user only needs to share her feedback with her broker. Experiments were conducted to evaluate the performance of the proposed broker framework. While the broker framework performed effectively with low computational overhead, there is no guarantee of error convergence, which motivated the current research.

One important challenge in any reputation mechanism is the difficulty in soliciting feedbacks. In addition to the general lack of incentives for the users, people are reluctant to share information for fear that it will give competitive advantage to others. Rewards are provided in Fernandes, Kotsovinos, Ostring, and Dragovic (2004) as an incentive for honest participation. In Jurca and Faltings (2003), an incentive-compatible protocol is proposed based on the upper bound of deception probability from game theory. Pavlov, Rosenschein, and Topol (2004) proposed supporting privacy as an incentive for truthful feedbacks, while Jurca and Faltings (2004) designed a broker-based protocol to elicit truthful feedbacks.

Instead of deception, this research focuses on the problem of potentially biased feedbacks due to individual differences. Some people tend to give negative feedbacks, while others are more positive. Some people tend to have extreme opinions, while others are more moderate. In fact, the general distributed trust framework that will be introduced below can be extended to model other factors affecting user feedbacks.

3 Distributed trust framework

This research proposes a two-tier broker-based distributed trust management framework for online service transaction systems. Figure 1 shows the overall structure of the proposed framework, consisting of two types of agents: the brokers and the users. A broker typically works for multiple users who share (localized) common features and are willing to share information among the group.

Each user may function as either a service provider (e.g. server) or a service requester (e.g. client) in a transaction. In e-commerce scenarios, a client user is often called a buyer, and a server user is called a seller. When a user acts as a seller, its server performance is dictated by a consistency factor (CF), or server reputation, that controls the probability of a successful service delivery. We assume CF is an inherent and consistent property of the server (similar to the reliability of a hardware component), with a real value ranging from 0 to 1.

Fig. 1 Users and brokers in a two-tier distributed trust framework


For example, a CF value of 0.8 means that the server has an 80% chance to deliver a satisfactory service as requested.

In this framework, a client user relies on its respective broker to maintain server reputation ratings for all service providers that have engaged in transactions with any user managed by the same broker. For each transaction, the client user u_j requests its broker Broker_i to provide a reputation rating about a specific server user u_k, and initiates the transaction provided that the rating is above a configurable threshold representing u_j's risk-taking attitude. When the transaction is over, Broker_i collects the feedback rating f on the server user u_k from the client user u_j, based on the success or failure of the current transaction.

Figure 2 shows the components of a broker, in which the Reputation Manager collects all feedback ratings generated by all its users, and the Trust Manager exchanges reputation ratings with other connected brokers when necessary. By aggregating feedbacks from all its users, a broker has the opportunity to accumulate enough rating information about any server. When a broker finds its local trust database inadequate for making a confident recommendation, it will request additional reputation information from neighboring brokers.

Fig. 2 Trust broker architecture

In the proposed framework, we assume that the brokers are connected according to some pre-defined properties, e.g. physical proximity, social connections or business relationships. Brokers communicate through standard protocols (e.g. SOAP) in a peer-to-peer fashion. Based on its past performance, each broker is assigned a trust rating, which will be used in aggregating the server reputation ratings from multiple brokers. A simple weighted sum of all ratings from neighboring brokers will be returned to the requesting client user, who then decides whether to carry out the transaction with the specific server. Given that trust brokers are independently operated, they may not always be cooperative and truthful in providing reputation information and feedback when requested by other brokers. The EigenTrust mechanism is deployed to reward good behaviors and minimize malicious attacks by some brokers (Kamvar et al., 2003).
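As an illustration of this aggregation step, the following minimal Python sketch combines neighboring brokers' ratings weighted by their trust values; the function name, the dictionary-based inputs, and the normalization by total trust weight are our assumptions rather than part of the implementation described in Section 5.

def aggregate_reputation(neighbor_ratings, broker_trust):
    """Trust-weighted aggregation of server reputation ratings reported by
    connected brokers.

    neighbor_ratings: {broker_id: reputation rating in [0, 1]}
    broker_trust:     {broker_id: trust value assigned to that broker}
    """
    total_weight = sum(broker_trust.get(b, 0.0) for b in neighbor_ratings)
    if total_weight == 0.0:
        return None  # no trusted information available for this server
    weighted = sum(broker_trust.get(b, 0.0) * r for b, r in neighbor_ratings.items())
    return weighted / total_weight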

The reputation and trust management broker framework introduced in Lin et al. (2005) assumes users and brokers to be diligent in providing honest feedbacks. Its ad hoc aggregation methods can compute trust ratings efficiently, but there is no guarantee of convergence. On the other hand, the computation defined by EigenTrust converges nicely to a global trust vector, and it meets the demand at the broker level satisfactorily. However, a global trust value may not be the right choice at the individual user level. For example, a small retailer may provide speedy delivery and great service in its local geographical area, but it may be limited in logistics and may not perform satisfactorily globally. In addition to variation in ratings due to locality or other factors, a client user may have her own personal bias, either positive or negative. As a result, a broker needs to learn both the server performance and the user bias from feedback data collected over time. In the following section, we present a Bayesian network trust model for biased user feedback, and explain how the EM algorithm can be adopted to train the probabilistic trust model.

4 Probabilistic trust model

One of the most important and challenging issues in trust management is the problem of trust rating prediction. A broker needs to provide accurate predictions to keep its users informed, even with a small rating database at the beginning of the broker's operation (e.g. during startup time) or when a new service has recently become available. What makes trust rating prediction especially hard is that the rating database may consist of subjective ratings from various users. Consequently, the predictive trust rating must be personalized to fit the subjective views of various users while maximizing the satisfaction of each individual.

Much previous work uses graphical models to represent existing interactions among users. In these models, each user is represented as a node. A feedback rating f that client/buyer u_j gives server/seller u_k is recorded as a directed link from node u_j to node u_k with weight f. To predict the trust of u_j in another server u_l, those models compute the weighted average rating based on all the ratings on all paths from u_j to u_l. The common underlying assumption for computing the weights is that, if u_j trusts u_k based on the performance of u_k, then u_j and u_k are likely to give similar ratings to the same servers.


However, those methods do not provide satisfactory solutions to our concerns. First, it is hard to predict the ratings between client and server at startup time or for a new server: the number of links is usually not enough to compose paths between two arbitrary nodes except for some special topologies. Second, although the weighted average mechanism gives personalized predictions, it does not adequately account for subjectivity. For example, a server that always performs perfectly may give strict ratings to others, using its own performance as the rating standard. In this case, the ratings tend to be underestimated by the strict user.

This research proposes handling these problems by modeling server performance and client subjectivity explicitly. The rating that client u_j would give to server u_k comes from a function that takes the subjectivity of u_j and the performance of u_k as parameters. Such a model addresses the subjectivity issue and also alleviates the startup problem, since ratings can be computed even if there is no direct path from u_j to u_k.

A Bayesian network trust model is adopted by the proposed distributed trust management system. Our trust model is distinct from previous work in that it models the relation between a client and a server within a transaction. Section 4.1 describes the details of the model. In this work, the subjectivity of rating bias is considered, namely, a client's tendency to give strict or generous ratings. Section 4.2 shows how the model computes biased trust. Finally, we present the EM algorithm (Dempster et al., 1977) for training our model.

4.1 A sample model

Fig. 3 Bayesian network trust model with bias

Figure 3 shows the Bayesian network trust model consisting of five random variables: the client user (C), server user (S), user bias (B), server reputation (R) and the feedback rating (F) for a given transaction. The links in Fig. 3 represent causal dependencies among the random variables. Given the natural variations in server reputation and user bias, we introduce hidden nodes R and B as intermediate random variables to determine the rating. For each transaction, the server reputation is decided by the objective server performance (link from S to R), while the user bias is decided by the subjective view of the client (link from C to B). For any transaction, the feedback rating a seller receives is based on both server performance and client bias (links from B and R to F). Throughout this paper, we sometimes use buyers to refer to client users, and sellers to refer to servers.

Let us examine the variables and distributions in more detail. The values of C and S are the unique user IDs. Multinomial distributions P(C) and P(S) are used to represent their transaction frequencies. Each buyer has its own bias distribution P(B|C), and each seller has its own performance distribution P(R|S). While the buyer, seller, and rating for each transaction are observable, the actual server performance and user bias are unknown to the trust brokers. That is, R and B are latent variables. Intuitively, performance and bias can be expressed as real values. To reduce the computational complexity, our implementation discretizes performance and bias into n_R and n_B bins, respectively. The degree of user bias or server performance is approximated by its expected value. The value of B falls within a lower bound b_l and an upper bound b_u. The value of R is within 0.0 and 1.0. It follows that P(B|C) and P(R|S) are also multinomial distributions.

Assume B and R are instantiated as b and r in a specific transaction involving client c and server s; we could then define the feedback rating to be f = b + r. However, the real-numbered rating cannot be obtained by summing up the discretized performance and bias directly. In order to handle both discrete and real-numbered ratings with a single model, we define P(F|B, R) as a normal distribution with μ = B + R:

P(F | B, R) ∝ exp( −(F − (B + R))² / (2σ²) ),    (1)

where σ is a constant. The rating f = b + r has the highest probability of appearing under this distribution, which is adaptive to the model training.
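To make the discretization concrete, the sketch below tabulates the distribution of Eq. (1) over discretized bias bins, reputation bins, and candidate feedback levels. This is only an illustrative reading of the model: the bin counts, the bias bounds, the feedback discretization, σ = 0.1, and the function name are assumed values chosen for the example.

import numpy as np

def feedback_distribution(bias_bins, rep_bins, f_levels, sigma=0.1):
    """Tabulate P(F | B, R) ∝ exp(-(F - (B + R))^2 / (2 sigma^2)) of Eq. (1)
    over discretized bias values, reputation values, and feedback levels."""
    B = np.asarray(bias_bins)[:, None, None]   # shape (n_B, 1, 1)
    R = np.asarray(rep_bins)[None, :, None]    # shape (1, n_R, 1)
    F = np.asarray(f_levels)[None, None, :]    # shape (1, 1, n_F)
    p = np.exp(-((F - (B + R)) ** 2) / (2.0 * sigma ** 2))
    return p / p.sum(axis=2, keepdims=True)    # normalize over the feedback axis

# Example: bias discretized into 5 bins over [-0.2, 0.2], reputation into
# 10 bins over [0, 1], and feedback ratings into 11 levels over [0, 1].
P_F_given_BR = feedback_distribution(np.linspace(-0.2, 0.2, 5),
                                     np.linspace(0.0, 1.0, 10),
                                     np.linspace(0.0, 1.0, 11))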

4.2 Biased trust

The probabilistic trust model can be used to make predictions about a buyer's biased trust, or subjective trust, in a given seller. We compute the estimated reputation (i.e. consistency factor or expected performance) of seller s as follows:

R̂_s = Σ_r P(R = r | S = s) · r.    (2)

Then, the estimated bias B̂_c of client/buyer c can be calculated in a similar way:

B̂_c = Σ_b P(B = b | C = c) · b.    (3)

Finally, B̂_c + R̂_s is returned as the personalized reputation rating back to client c.
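Equations (2) and (3) are simple expectations over the learned multinomial distributions. A minimal sketch, assuming the distributions are stored as row-per-user arrays over the discretized bins (the array layout and names are ours):

import numpy as np

def estimated_reputation(P_R_given_S, rep_bins, s):
    """R_hat_s = sum_r P(R = r | S = s) * r, Eq. (2)."""
    return float(np.dot(P_R_given_S[s], rep_bins))

def estimated_bias(P_B_given_C, bias_bins, c):
    """B_hat_c = sum_b P(B = b | C = c) * b, Eq. (3)."""
    return float(np.dot(P_B_given_C[c], bias_bins))

def personalized_rating(P_R_given_S, P_B_given_C, rep_bins, bias_bins, c, s):
    """Personalized reputation rating B_hat_c + R_hat_s returned to client c."""
    return (estimated_bias(P_B_given_C, bias_bins, c)
            + estimated_reputation(P_R_given_S, rep_bins, s))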

4.3 Model training

Given a set of T transactions D = {d_t | 1 ≤ t ≤ T}, the parameters of a trust model can be trained using a model fitting method. The t-th transaction d_t consists of a client c_t, a server s_t, and the feedback rating f_t for the specific transaction.

For any transaction, the participants, client C and server S, are observable. As a result, the probability distributions P(C) and P(S) can be estimated easily by normalizing their frequencies of occurrence. Let N_c and N_s be the numbers of transactions with client c and server s, respectively. We have P(C = c) = N_c / T and P(S = s) = N_s / T. The conditional distribution P(F|B, R) is defined for all circumstances by Eq. (1), and no update is needed. The distributions of the latent variables B and R are estimated by employing the EM algorithm.

The EM algorithm is a general algorithmic framework that searches for local maxima of the data likelihood function in the parameter space of probabilistic models. It consists of repeated applications of the E-step and the M-step. The E-step estimates the posterior distributions of the latent variables using the current model. The M-step updates the current model with the results from the E-step. The EM algorithm terminates at a local maximum of the likelihood function or after the maximum number of iterations.

In our work, the E-step computes P(B, R | d_t), the joint probability distribution of the latent variables given each transaction:

P(B = b, R = r | d_t) = P(b, r, c_t, s_t, f_t) / Σ_{b,r} P(b, r, c_t, s_t, f_t).

Let p_{b,t} denote Σ_r P(b, r | d_t), and p_{r,t} denote Σ_b P(b, r | d_t). In the M-step, P(B|C) and P(R|S) are updated by calculating the expected numbers of occurrences as follows:

P(B = b | C = c) = Σ_{t | c_t = c} p_{b,t} / Σ_b Σ_{t | c_t = c} p_{b,t},

P(R = r | S = s) = Σ_{t | s_t = s} p_{r,t} / Σ_r Σ_{t | s_t = s} p_{r,t}.

It should be noted that there are a number of methods for searching the parameter space to maximize the data likelihood function. For example, simple averaging or gradient ascent may work fine for finding a solution, even in the presence of latent variables. Different methods may stop at different local maxima, and so will the same search method started from different initial points. In general, a good model design may have a stronger impact on the overall search result than the choice of search method. The most important advantages of adopting EM are its ease of implementation (or flexibility) in dealing with new models, and its ability to learn from a relatively small set of data.
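The E-step and M-step above can be written compactly for the discretized model. The sketch below is one possible reading, not the authors' implementation: it assumes a tabulated P(F | B, R) (as in the earlier sketch), a fixed number of iterations instead of a likelihood-based stopping rule, and that every client and server appears in at least one transaction.

import numpy as np

def em_train(transactions, n_clients, n_servers, P_F_given_BR, f_index, n_iter=50):
    """Fit P(B|C) and P(R|S) by EM from observed (client, server, feedback) triples.

    transactions: list of (c, s, f); f_index maps a feedback value to its
    discretized index in the P(F|B,R) table of shape (n_B, n_R, n_F)."""
    n_B, n_R, _ = P_F_given_BR.shape
    P_B_given_C = np.full((n_clients, n_B), 1.0 / n_B)   # uniform initialization
    P_R_given_S = np.full((n_servers, n_R), 1.0 / n_R)

    for _ in range(n_iter):
        B_counts = np.zeros((n_clients, n_B))
        R_counts = np.zeros((n_servers, n_R))
        for c, s, f in transactions:
            # E-step: posterior P(B, R | d_t); P(c) and P(s) cancel out in the
            # normalization, so the joint is proportional to the product below.
            joint = (P_B_given_C[c][:, None] * P_R_given_S[s][None, :]
                     * P_F_given_BR[:, :, f_index(f)])
            joint /= joint.sum()
            B_counts[c] += joint.sum(axis=1)   # p_{b,t}
            R_counts[s] += joint.sum(axis=0)   # p_{r,t}
        # M-step: renormalize the expected counts per client and per server.
        P_B_given_C = B_counts / B_counts.sum(axis=1, keepdims=True)
        P_R_given_S = R_counts / R_counts.sum(axis=1, keepdims=True)
    return P_B_given_C, P_R_given_S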

5 Implementation

We have designed and implemented a working prototype of the trust broker, which can run in either simulation mode or deployment mode. For performance evaluation of the proposed broker-based trust framework, a configurable trust simulator has been implemented. This section presents the details of both designs, with a detailed description of the simulation process. An updated and robust implementation in Python was produced to replace the previous Java implementation used in Lin et al. (2005). This implementation adopts a completely distributed design, in which all brokers and users can run independently on any networked machines. Each broker or user is assigned a unique Universal Resource Identifier (URI), and all communications use standard SOAP APIs for enhanced interoperability and flexibility. A series of experiments using the trust brokers and simulator have been conducted, and the results are reported in Section 6.

5.1 Trust broker

Trust brokers form the core of the proposed two-tier trust management framework. In particular, a trust broker performs the following functions:

1. Collecting feedback from its user after each transaction.


Fig. 4 Trust broker UML diagram

2. Maintaining ratings of all servers engaged in transactions with its users.

3. Aggregating seller reputation ratings from connected brokers.

4. Managing personal bias of its users.

5. Providing personalized estimation of server performance with confidence to its client user.

As was shown in Fig. 2, there are two main components in a trust broker: the reputation manager and the trust manager. Fig. 4 depicts a more detailed design of the trust broker processes and APIs.

The broker module provides two interfaces to the users: GetUserReputation and SendFeedback. A client user checks the target server's reputation from its broker using GetUserReputation. A transaction is initiated only if the client is satisfied with the server rating. After each transaction, the client user is obligated to provide feedback on the server performance to its broker via SendFeedback. Our implementation handles two types of feedback rating: continuous, which is a real value between 0.0 and 1.0; and binary, which is either 0 or 1.

A broker utilizes ReputationManager to manage user information based on feedbacks collected from past transactions. Given that server reputation and client bias are not directly observable to the broker, the TrainingStrategy module attempts to find the optimal fitting of the data collected to the probabilistic user model. In particular, we have implemented two model fitting strategies, ExpectationMaximization and Simple Method for our experiments. The EM algorithm has been detailed in Section 4.3. The Simple Method es-timates the reputation (or CF) of a specific seller by computing the average of all feedbacks on the seller

(8)

collected by a broker, and it estimates the bias of a user by computing the average over the difference of any feedback with the seller’s estimated CF.
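A minimal rendering of the Simple Method as described above (the function name, the triple-based input format, and the dictionary outputs are ours):

from collections import defaultdict

def simple_method(transactions):
    """Estimate each seller's CF as the mean of the feedbacks it received, and
    each buyer's bias as the mean difference between its feedbacks and the
    corresponding sellers' estimated CFs.  transactions: list of (c, s, f)."""
    feedbacks_by_seller = defaultdict(list)
    for c, s, f in transactions:
        feedbacks_by_seller[s].append(f)
    cf = {s: sum(fs) / len(fs) for s, fs in feedbacks_by_seller.items()}

    diffs_by_buyer = defaultdict(list)
    for c, s, f in transactions:
        diffs_by_buyer[c].append(f - cf[s])
    bias = {c: sum(ds) / len(ds) for c, ds in diffs_by_buyer.items()}
    return cf, bias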

Additionally, the TrustManager module maintains the trust values of all brokers. The EigenTrust mechanism (Kamvar et al., 2003) has been adopted to perform peer-to-peer trust management among brokers. A broker aggregates reputation ratings from connected brokers by summing the ratings weighted by their trust values. EigenTrust offers the advantages of global convergence as well as resistance to attacks by malicious, incompetent, or disagreeing neighbors.

5.2 Trust simulator

In order to validate the proposed probabilistic user model and to evaluate the performance of specific parameter learning strategies, e.g. EM, a trust simulator has been designed to conduct experiments under various system and environmental configurations. The trust simulator is in charge of starting the broker processes, initializing each simulated user with an inherent CF and bias, generating the set of simulated transactions, configuring the training strategy and schedule, and sampling and calculating the prediction errors in reputation and bias. Figure 5 shows the overall structure and the sequence of operations performed by the trust simulator.

The simulator synthesizes each transaction by randomly selecting a buyer and a seller, and the transaction proceeds just like a real transaction. All communications among buyers, sellers, brokers, and the simulator are conducted with SOAP APIs. The seller performance (or client bias) is sampled from a Gaussian distribution centered at the user's inherent CF (or bias). For example, given a seller CF of 0.8 and a buyer bias of +0.05, the probability of a successful transaction is 0.8, with an expected value of 0.85 for the feedback rating. Continuous feedbacks range from 0.0 to 1.0, and binary feedbacks can be generated with the continuous rating as the probability of a positive feedback.
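The feedback generation step can be sketched as below. The exact sampling scheme is not fully specified in the text, so the noise level and the clipping of continuous ratings to [0, 1] are our assumptions; the sketch simply draws the rating around the sum of the seller's CF and the buyer's bias.

import random

def synthesize_feedback(seller_cf, buyer_bias, noise_sigma=0.05, binary=False):
    """Sample one feedback rating for a simulated transaction.

    A seller CF of 0.8 and a buyer bias of +0.05 give an expected continuous
    rating of 0.85; a binary feedback uses the continuous rating as the
    probability of a positive (1) feedback."""
    rating = random.gauss(seller_cf + buyer_bias, noise_sigma)
    rating = min(max(rating, 0.0), 1.0)   # keep the continuous rating in [0, 1]
    if binary:
        return 1 if random.random() < rating else 0
    return rating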

The simulation process is summarized in Algorithms 1 and 2. For each simulation run, a global configuration object Config is specified to define the number of brokers n, the number of users m, the total number of transactions T, the distribution (uniform or normal) and parameters for server reputation and user bias, the confidence threshold θ_c, the reputation threshold θ_r, the training size, and the sampling points for prediction error calculation. Each user u_i is initialized with a reputation r_i and a bias b_i, which are uniformly sampled between 0.0 and 1.0 for the former, and between a lower bound b_l and an upper bound b_u for the latter. Such user parameters remain constant throughout a simulation run.

Algorithm 1 Trust Simulation

Require: Config: a global configuration object
1: Initialize a set of brokers O = {o_1, ..., o_n}, each with a feedback repository F_i ← ∅;
2: Initialize a set of users U = {u_1, ..., u_m}, each with an inherent reputation r_i and bias b_i;
3: for t = 1 to T do
4:   c_t ← u_j, where j ← Random(m); {randomly select a client u_j}
5:   s_t ← u_k, where k ← Random(m); {randomly select a server u_k}
6:   Simulate the transaction by Broker(i, c_t, s_t); {c_t = u_j is managed by Broker_i}
7:   if t is a sampling point then
8:     Record the current R̂_u and B̂_u for all u ∈ U;
9:   end if
10: end for
11: Compute and output the average errors of the R̂ and B̂ estimates;

Algorithm 2 Broker

Require: a transaction t with client c_t and server s_t
1: if GetUserReputation(c_t, s_t, θ_c) ≥ θ_r then
2:   Calculate the user feedback rating r_t;
3:   Update the feedback repository F_i ← F_i ∪ {r_t};
4: end if
5: if |F_i| == σ then {σ is the training size}
6:   Perform EM training with the data in the feedback repository: EM(F_i);
7:   Reset the feedback repository F_i;
8: end if

6 Experiments

This section presents the results of three sets of simulation experiments designed to evaluate the performance of the proposed trust framework. The first set compares the performance of two training strategies, EM and Simple, under shifted bias distributions. The second set examines how different trust models affect the reputation prediction accuracy. The third set illustrates the scalability of personalized reputation rating using broker-based trust management in a distributed environment.

In each simulation, every user is initialized with an inherent CF and bias, which are uniformly sampled from a given value range. All simulation experiments reported in this paper set the range for CF to be between 0.0 and 1.0. Each simulation is repeated under shifted bias distributions. The range for "Bias shift β" is defined as [0.β − 0.2, 0.β + 0.2], where 0.β denotes the value β/10. For example, the bias ranges over [−0.2, 0.2] for "Bias shift 0", [0.0, 0.4] for "Bias shift 2", and [−0.4, 0.0] for "Bias shift −2".

Both CF and bias remain constant throughout a given simulation run. The confidence threshold, reputation threshold, and training size are constants empirically selected for the experiments. While they have impacts on the speed of convergence, the overall performance trend remains the same regardless of their specific values.

Fig. 6 Performance with different training strategies (n = 1, m = 100, T = 35,000): personal rating error (%) versus transaction number for (a) the Simple method and (b) EM, under bias shifts 0, 0.2, and −0.2.

Fig. 7 Bayesian network trust model without bias

Performance is measured in terms of the prediction error, which is defined as the root mean-squared error of the estimation. One of the most commonly used measures of success for numeric prediction, the mean-squared error is computed by taking the average of the squared differences between the estimated and correct values. Taking the square root gives the error value the same dimensionality as the actual values. In particular, we use reputation error to denote the error in predicting server reputation, and personal rating error to denote the error in predicting the sum of server reputation and client bias. In the following experiments, performance is measured at exponentially growing numbers of transactions; namely, the sampling points are {128, 256, ..., 32768}.
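For reference, both error measures reduce to a root mean-squared error over the sampled users; a short sketch (the function name and the percentage scaling are ours):

import math

def rmse_percent(estimated, actual):
    """Root mean-squared error between estimates and true values, in percent."""
    mse = sum((e - a) ** 2 for e, a in zip(estimated, actual)) / len(estimated)
    return 100.0 * math.sqrt(mse)

# Reputation error: rmse_percent over (R_hat_s, true CF_s) for sampled sellers.
# Personal rating error: rmse_percent over (B_hat_c + R_hat_s, bias_c + CF_s).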

6.1 Training strategies

This experiment explores the effectiveness of learning under shifted bias distributions. The simulations compare the performance of two training strategies, Simple and EM. A broker uses the Simple method to estimate server reputation by computing the average rating of all feedbacks collected about the server, and to estimate client bias by computing the average difference between any feedback rating by the user and the corresponding estimated server reputation. Alternatively, a broker may use EM to fit the trust model for predicting server reputation and client bias, as described in Section 4.3.

The simulations are set up with a single broker, 100 simulated users, and 35,000 transactions. The results in Figs. 6(a) and 6(b) show that both strategies performed reasonably well, and the personal rating error converges to less than 5% around 10,000 transactions under different bias distributions. With the relatively simple trust model in Fig. 3, we do not observe much performance advantage for EM, except that it is less sensitive to bias shift. Given a good trust model, this experiment demonstrates that the choice of training strategy does not have a significant impact on performance.

6.2 Performance due to trust models

A major benefit of EM is its flexibility in handling different models. While the Simple method needs to be re-coded, EM can take a new model without much effort. In this experiment, the Bayesian network trust model without bias in Fig. 7 is adopted.

Fig. 8 Performance based on trust model without bias (n = 1, m = 100, T = 35,000): reputation error (%) versus transaction number for Simple and EM under bias shifts 0, 2, and −2.

Figure 8 shows the simulation results for both Simple and EM under different bias distributions. The reputation error converges to less than 5% within 1,000 transactions when there is no bias shift. However, the error remains above 17% when the bias shift is either 0.2 or −0.2. This experiment highlights the importance of adopting the right model. Performance suffers when the notion of bias is not included in the model.

6.3 Distributed trust management

The last experiment evaluates the performance of the two-tier broker-based trust framework in a real distributed environment. The simulations are set up with 10 brokers, 1,000 simulated users, and 10,000 transactions.

Fig. 9 Performance in a distributed environment (n = 10, m = 1,000, T = 10,000): personal rating error (%) versus transaction number for (a) the Simple method and (b) EM, under bias shifts 0, 0.2, and −0.2.

Each broker manages 100 users, and the processes can run on any number of networked machines. The trust model in Fig. 3 is used. The EigenTrust mechanism is deployed at the broker level with a threshold of 0.05 and a maximum of 50 iterations.
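The broker-level EigenTrust computation can be sketched as a standard power iteration over the normalized local trust matrix. This is a generic rendering rather than the implementation used in the experiments; in particular, interpreting the 0.05 threshold as the convergence tolerance is our assumption.

import numpy as np

def eigentrust(local_trust, eps=0.05, max_iter=50):
    """Power iteration for the global trust vector over n brokers.

    local_trust[i, j] holds broker i's normalized local trust in broker j
    (each row sums to 1).  Returns a global trust vector that sums to 1."""
    C = np.asarray(local_trust, dtype=float)
    n = C.shape[0]
    t = np.full(n, 1.0 / n)                    # start from the uniform vector
    for _ in range(max_iter):
        t_next = C.T @ t                       # t_{k+1} = C^T t_k
        if np.abs(t_next - t).sum() < eps:     # stop when the update is small
            return t_next
        t = t_next
    return t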

Figures 9(a) and 9(b) show the simulation results of both Simple and EM under different bias distributions. This experiment illustrates the scalability of trust management to multiple brokers running in a real distributed environment. As in the single-broker case, the personal rating error falls below 5% within 10,000 transactions. The two-tier trust management works well in a distributed environment.

7 Conclusion

This paper presents a two-tier broker-based framework for distributed trust management. A Bayesian network is defined to model the combined trust from objective server performance and subjective user bias. The probabilistic personalized trust model can be learned using expectation maximization or alternative training strategies.

The trust brokers aggregate feedbacks from local users while supporting personalized services. When the number of feedbacks collected by a given broker is insufficient to make justifiable recommendations, the broker may request additional information from trusted brokers. Instead of combining trust ratings from multiple brokers as in Lin et al. (2005), the EigenTrust mechanism (Kamvar et al., 2003) is adopted to compute a global trust vector. At the broker level, such a P2P trust mechanism avoids malicious attacks from uncooperative brokers. At the individual level, the broker-based trust mechanism fosters the user community and its willingness to share feedbacks.

Research presented in this paper has improved over previous work in several ways. First, a general two-tier broker-based trust management framework is proposed. Second, Bayesian networks can be used to model personalized trust. Third, a robust implementation of the distributed trust simulator, which is configurable for a wide range of simulations, has been built. In addition, our experiments have demonstrated that the personal rating error converges to below 5% consistently within 10,000 transactions (i.e. 10 transactions per user or 1,000 transactions per broker) regardless of the specific training strategy or bias distribution. However, the choice of trust model has a significant impact on the performance of reputation prediction. We have also shown that the two-tier trust framework scales well to distributed environments without much overhead.

Acknowledgements This research is supported in part by the National Science Council of Taiwan #NSC-94-2218-E-002-057, Institute for Information Industry #94-CS-0457, UC MICRO project #04-0511 and GeoSpatial Technologies, Inc. The authors would like to thank Chia-en Tai and Haiyin Lu for sharing earlier versions of the previous trust simulator and valuable lessons gleaned from earlier experiments. Special thanks go to the anonymous reviewers for their constructive suggestions that helped improve the paper.

References

Dellarocas, C., & Resnick, P. (2003, April). Online reputation mechanisms: A roadmap for future research. In Summary Report of the First Interdisciplinary Symposium on Online Reputation Mechanisms. Retrieved from http://ccs.mit.edu/dell/papers/symposiumreport03.pdf
Dempster, A., Laird, N., & Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39, 1–37.
Fernandes, A., Kotsovinos, E., Ostring, S., & Dragovic, B. (2004). Pinocchio: Incentives for honest participation in distributed trust management. In Proceedings of the Second International Conference on Trust Management (pp. 63–77). Oxford, UK.
Jurca, R., & Faltings, B. (2003). An incentive compatible reputation mechanism. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 1026–1027). New York: ACM.
Jurca, R., & Faltings, B. (2004). Eliciting truthful feedback for binary reputation mechanisms. In Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence (pp. 214–220). Beijing, China.
Kamvar, S. D., Schlosser, M. T., & Garcia-Molina, H. (2003). The EigenTrust algorithm for reputation management in P2P networks. In Proceedings of the Twelfth International World Wide Web Conference. Budapest, Hungary.
Lin, K. J., Lu, H., Yu, T., & Tai, C. e. (2005). A reputation and trust management broker framework for web applications. In Proceedings of the IEEE International Conference on e-Technology, e-Commerce, and e-Service (pp. 262–269). Hong Kong, China.
Pavlov, E., Rosenschein, J. S., & Topol, Z. (2004). Supporting privacy in decentralized additive reputation systems. In Proceedings of the Second International Conference on Trust Management. Oxford, UK.
Wang, Y., & Vassileva, J. (2003). Bayesian network trust model in peer-to-peer networks. In Proceedings of the Second International Workshop on Peers and Peer-to-Peer Computing (pp. 23–34). Berlin Heidelberg New York: Springer.
Yu, B., & Singh, M. P. (2000). A social mechanism of reputation management in electronic communities. In Proceedings of the 4th International Workshop on Cooperative Information Agents IV, The Future of Information Agents in Cyberspace (pp. 154–165). Boston, Massachusetts.
Yu, B., & Singh, M. P. (2002). An evidential model of distributed reputation management. In Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 294–301). New York: ACM.
Zacharia, G., & Maes, P. (2000). Trust management through reputation mechanisms. Applied Artificial Intelligence Journal.

Jane Hsu received her PhD in Computer Science from Stanford University in 1991. She is an associate professor of Computer Science and Information Engineering at National Taiwan University. Her research interests include intelligent multi-agent systems, data mining, service-oriented computing and web technology. Prof. Hsu is on the editorial board of the International Journal of Service Oriented Computing and Applications (published by Springer). She has served on the editorial board of Intelligent Data Analysis–An International Journal (published by Elsevier and IOS Press) and the executive committee of the IEEE Technical Committee on E-Commerce. She is a Program co-Chair for the 2005 IEEE International Conference on e-Technology, e-Commerce, and e-Service, as well as the 2004 Conference on Artificial Intelligence and Applications. In addition, she is actively involved in many key international conferences as an organizer and member of the program committee. She is a member of AAAI, IEEE, ACM, Phi Tau Phi, and has been an executive committee member of TAAI.

Tsung-Hsiang Chang is a graduate student in the Department of Computer Science and Information Engineering at National Taiwan University. His research interests include intelligent systems, software engineering, and human-computer interaction.

Kwei-Jay Lin received the BS in Electrical Engineering from National Taiwan University, and the MS and PhD in Computer Science from the University of Maryland, College Park. He is a Professor in the Department of Electrical Engineering and Computer Science at the University of California, Irvine. Prior to joining UCI, he was an Associate Professor in the Computer Science Department at the University of Illinois at Urbana-Champaign. His research interests include service-oriented systems, e-commerce and enterprise computing, real-time systems, scheduling theory, and distributed computing. Dr. Lin is an Editor-in-Chief of the International Journal of Service Oriented Computing and Applications (published by Springer), and the Editor-in-Chief of the Software Publication Track, Journal of Information Science and Engineering (published by Academia Sinica, Taiwan). He has served on the editorial boards of IEEE Transactions on Parallel and Distributed Systems and IEEE Transactions on Computers. He has been a Co-Chair of the IEEE Technical Committee on E-Commerce since 2004. He has chaired many international conferences, including serving as Conference Chair for the 2006 IEEE Conference on E-Commerce Technology in San Francisco, the 2004 IEEE Conference on e-Technology, e-Commerce and e-Service in Taipei, the 2003 IEEE Conference on E-Commerce in Newport Beach, CA, and the 1998 IEEE Real-Time Systems Symposium in Madrid, Spain.

Chien-Ju Ho is a graduate student in the Department of Computer Science and Information Engineering at National Taiwan University. His research interests include intelligent learning systems, machine learning, and human-computer interaction.

Han-Shen Huang is a postdoctoral research fellow at the Institute of Information Science of Academia Sinica in Taiwan. His research interests include machine learning and applications of probabilistic models.

Wan-rong Jih is a PhD candidate in the Department of Computer Science and Information Engineering at National Taiwan University. She has conducted research on multi-agent systems, optimization algorithms for dynamic vehicle routing, and protein secondary structure prediction. Her current research focuses on context-aware technology and service-oriented computing.
