
Contents lists available at SciVerse ScienceDirect

The Journal of Systems and Software

journal homepage: www.elsevier.com/locate/jss

A novel approach to collaborative testing in a crowdsourcing environment

Yuan-Hsin Tung a,b,*, Shian-Shyong Tseng a,c,*

a Department of Computer Science and Information Engineering, National Chiao Tung University, Taiwan, ROC
b Telecommunication Laboratory, Chunghwa Telecom Co., Ltd., Taiwan, ROC
c Department of Applied Informatics and Multimedia, Asia University, Taiwan, ROC

Article info

Article history:
Received 7 April 2012
Received in revised form 1 January 2013
Accepted 21 March 2013
Available online 6 April 2013

Keywords:
Crowdsourcing
Cloud computing
Software testing
Collaborative testing
Integer linear programming

Abstract

Software testing processes are generally labor-intensive and often involve substantial collaboration among testers, developers, and even users. However, considerable human resource capacity exists on the Internet in social networks, expert communities, or internet forums, referred to as crowds. Effectively using crowd resources to support collaborative testing is an interesting and challenging topic. This paper defines the collaborative testing problem in a crowd environment as an NP-Complete job assignment problem and formulates it as an integer linear programming (ILP) problem. Although package tools can be used to obtain the optimal solution to an ILP problem, computational complexity makes these tools unsuitable for solving large-scale problems. This study uses a greedy approach with four heuristic strategies to solve the problem. This is called the crowdsourcing-based collaborative testing approach. This approach includes two phases: a training phase and a testing phase. The training phase transforms the original problem into an ILP problem. The testing phase solves the ILP using heuristic strategies. A prototype system, called the Collaborative Testing System (COTS), is also implemented. The experiment results show that the proposed heuristic algorithms produce good quality approximate solutions in an acceptable time frame.

© 2013 Elsevier Inc. All rights reserved.

1. Introduction

As Web applications continue to proliferate, ensuring that they are high quality and reliable is critical (Andrews et al., 2005; Ricca and Tonella, 2001, 2006; Homma et al., 2011). Low reliability software can negatively affect businesses, consumers, and governments as they increasingly depend on the Internet for daily operation. The labor- and resource-intensive nature of software testing makes producing reliable software difficult. Many approaches (Bertolino, 2007; Weyers et al., 2011; Souza et al., 2007; Whitehead, 2007; Held and Blochinger, 2009; Abdullah et al., 2009; Shukla and Redmiles, 1996) have recently been proposed to address collaborative testing in the software engineering domain, such as Web application testing, open-source testing, and game beta testing, but they have only focused on workflow design, testing assistance, testing process improvement, and generating bug reports. The Internet is an extensive source of experienced human resources. Howe (Crowdsourcing Wikipedia, 2011; Howe, 2006) proposed the term crowdsourcing, a combination of the words crowd and outsourcing. Crowdsourcing leverages large groups of people or communities on the Internet to solve problems. Amazon Mechanical Turk (Amazon Mechanical Turk, 2011; Amazon Mechanical Turk Wikipedia, 2011) uses this concept to achieve business goals through mass collaboration enabled by Web 2.0 technologies. Crowdsourcing collaborative testing can be used for functional test, user acceptance test, contract acceptance test, user experience test, and beta test. Effectively and efficiently using crowd resources from the Internet to assess software functionalities in limited time frames and sharing test results with others is important for collaborative testing.

* Corresponding authors at: Telecommunication Laboratory, Chunghwa Telecom Co., Ltd., Taiwan, ROC. Tel.: +886 928327053.
E-mail addresses: yhdong@cht.com.tw (Y.-H. Tung), sstseng@asia.edu.tw (S.-S. Tseng).

Web applications typically involve complex, multi-tiered, heterogeneous architectures consisting of Web sites, applications, database servers, and clients. Targeted Web application functionalities should be assessed by several test cases for collaborative testing. Therefore, the appropriate assignment of test cases to corresponding testers by accounting for individual and assembled test results should be considered. This study explores how collaborative testing can be effectively conducted in a crowdsourcing environment (Lucca and Fasolino, 2006; Miao et al., 2008; Benedikt et al., 2002; Wang et al., 2008). Collaborative testing requires testers to verify software functions and examine software outputs. The process is inherently cooperative, requiring the coordinated efforts of many testers. However, because testers from a crowdsourcing environment may not be as experienced as professional testers, their experience, abilities, and degrees of involvement should be expressed in terms of various quantitative values. This study uses these values to create a trustworthiness indicator. Collaborative testing test case assignment should be based on trustworthiness in a crowdsourcing environment.

This paper defines the collaborative testing problem in a crowdsourcing environment as a job assignment problem and formulates it as an integer linear programming (ILP) problem. Although an ILP optimization package tool can be used to calculate an optimal solution, long execution times make it unsuitable for solving large-scale instances. Therefore, this study uses a greedy approach based on four proposed heuristic strategies, called the crowdsourcing-based collaborative testing approach. The proposed approach includes two phases, training phase and testing phase.

The training phase represents Web application systems, test cases, and testers on five matrices: (1) a test case page coverage matrix (A), (2) a test case execution time matrix (T), (3) a page threshold matrix (TH), (4) a tester trustworthiness matrix (W), and (5) a tester availability matrix (H). The ILP transformer algorithm transforms the matrices into an ILP formulation. In the testing phase, test cases are assigned to participating testers in a collaborative testing environment and testers are guided while performing tests, potentially reducing required testing effort. A prototype system, called the Collaborative Testing System (COTS), was implemented to perform collaborative testing. Three experiments were conducted to evaluate the performance of the approach. The experimental results show that although linear programming tool CPLEX (CPLEX, 2011) can optimally solve the ILP formulation for small-scale problems, the proposed heuristic algorithms always produce good quality approximate solutions within an acceptable time.

This paper is organized as follows: Section 2 discusses related studies; Section 3 describes the collaborative testing scenario and formulates the problem as an ILP; Section 4 explains the crowdsourcing-based collaborative testing approach; Section 5 describes the experimental design and results; and Section 6 provides a conclusion.

2. Related work

2.1. Crowdsourcing and Amazon Mechanical Turk

Crowdsourcing (Collective intelligence Wikipedia, 2011; Crowdsourcing Wikipedia, 2011; Howe, 2006), the act of leveraging mass collaboration to achieve business goals, has become popular on the Internet. Howe (Crowdsourcing Wikipedia, 2011; Howe, 2006) first used the term crowdsourcing, a combination of crowd and outsourcing, to describe the practice of solving complex problems and contributing relevant and novel ideas through an open request on the Internet. Amazon Mechanical Turk (Amazon Mechanical Turk, 2011; Amazon Mechanical Turk Wikipedia, 2011) is a crowdsourcing Web service that enables computer programmers (called Requesters) to co-ordinate human intelligence to perform tasks which computers cannot complete. Requesters set human intelligence tasks (HITs) and Workers (or Providers) can browse, select, and complete tasks for a monetary payment set by a Requester. A Requester can set Worker qualification requirements for a task and can create tests to verify these qualifications. Requesters can also accept or reject a task completed by a Worker, which affects the Worker's reputation. Workers can be located anywhere in the world. TopCoder (TopCoder, 2011) proposed another crowdsourcing business model to administer software programming contests. Clients pay TopCoder to design software applications based on their requirements. TopCoder then creates a contest for community members to develop software and uses this contest-based system with a rating mechanism to evaluate programmer performance relative to other participants. For example, if a new coder beats an established coder, the new coder is ranked higher. This paper uses the variable trustworthiness to reflect tester performance in a crowdsourcing test environment by referring to crowdsourcing Web sites, such as Amazon Mechanical Turk and TopCoder.

Crowdsourcing can create an interesting dynamic when testing in a cloud environment (Crowdsourcing Wikipedia, 2011; Howe, 2006; Buyya et al., 2009). Thus, vendors of cloud testing services must find methods of constantly encouraging testers to compete effectively. Riungu et al. (2010) investigated how crowdsourcing supports cloud testing and different crowdsourcing models that could be practical for cloud testing. Testing service provider uTest (Utest, 2011) proposed the crowdsourcing Test-as-a-Service model for providing testing services to customers using several heterogeneous, resourceful, and skilled testers. Another crowdsourcing model uses a community of users or an interest group to test specific software, for example Amazon Mechanical Turk, TopCoder, and AppStori (AppStori, 2012). Feedback from the community is then used to improve the tested application. Incorporating crowdsourcing and collaborative testing in the cloud raises social issues such as trust and communication. Hence, research should investigate the role and impact of trust in collaborative testing activities in the cloud.

2.2. Collaborative testing

In software engineering, software testing often involves substantial collaboration among testers, developers, and users. A wide range of communication and collaboration technologies are used to coordinate project work. Many studies (West et al., 2011; Shahriar and Zulkernine, 2011; Held and Blochinger, 2009) have focused on collaborative testing. However, most proposed collaborative testing tools only focus on testing processes and generating bug reports. No tools support collaborative testing in a crowdsourcing environment while accounting for practical communication requirements and testing resource constraints. In Tsai et al. (2004) and Bai et al. (2007a,b), collaborative verification and validation (CVV) of an application requires contributions from many parties in a collaborative manner. The CVV framework publishes and ranks test cases based on their potency. The most potent test cases are first used for testing new software to reduce testing efforts. Whitehead (2007) classified software engineering collaboration tools into four categories: model-based collaboration tools, process support tools, awareness tools, and collaboration infrastructure. The main collaborative tool used to manage the interface between testers and developers is the bug tracking tool (Shukla and Redmiles, 1996), which assists with generating and storing or recording an initial error report, prioritization, adding follow-on comments and error data, linking similar reports, and assigning a developer to repair the software. Once a bug is fixed, it is recorded in the bug tracking system. Software inspection involves many engineers reviewing a specific software artifact. Therefore, software inspection tools (Macdonald and Miller, 1999) are historically collaborative. Based on tool features, Hedberg (2004) divided software inspection history into four phases: early tools, distributed tools, asynchronous tools, and Web-based tools. Early tools were designed to support engineers in face-to-face meetings, while distributed tools allowed engineers to participate remotely in inspection meetings. Asynchronous tools meant that inspection participants did not have to meet simultaneously and Web-based tools supported inspection processes on the Web.

3. Collaborative testing problem definition

This section introduces the proposed approach to support collaborative testing for Web applications. To simplify discussion of the collaborative testing formulation, this study assumes that the dependence graph, test cases, and tester profiles were designed by test engineers and that the tester profiles contain tester availability, execution time for each test case, and collaborative testing trustworthiness. These assumptions are based on Web user behavior evaluations.

Fig. 1. An excerpt of the dependence graph for the Web application BookStore.

Collaborative testing was defined, with assumptions, as a job assignment problem using the motivating example. The problem was then modeled as an ILP formulation with resource constraints. The job assignment problem was reduced to an NP-Complete problem (Garey and Johnson, 1979). The proposed crowdsourcing-based approach for collaborative testing was then introduced.

3.1. Motivating example

To conveniently collect test cases for Web application testing, test engineers produced test cases using a user-session based approach (Liu et al., 2000; Benedikt et al., 2002; Elbaum et al., 2005; Deng et al., 2004), where each test case was a set of execution paths and input attribute-value pairs. The Web application can be represented as a dependence graph. The notations used for the dependence graph and test case are as follows:

G: dependence graph; a dependence graph for a Web application is a directed graph, G = (P, E)
P: set of nodes in the dependence graph
E: set of edges in the dependence graph
pi: Web page, pi ∈ P
eij: a direct link from pi to pj, where eij = (pi, pj), eij ∈ E, and ∀ pi, pj ∈ P
Mj^k: input attribute-value pairs; Mj^k = ((attj^k(1), valj^k(1)), (attj^k(2), valj^k(2)), ...), where attj^k(x) is the input attribute and valj^k(x) is the input attribute value
TCj: test case; TCj is a test path with input attribute-value pairs, TCj = ((pj1, Mj1), (pj2, Mj2), (pj3, Mj3), ...), ∀ pj ∈ P

Fig. 1 shows the dependence graph that describes the control flow of the target Web application for collaborative testing, obtained by analyzing its program structures.

To illustrate this definition, a test case of the Web application BookStore (Open Source Web Applications, 2011), an open-source Web project, is shown in Fig. 2. In this application, testers select "Advanced Search" on page "Default.aspx" to connect to "AdvSearch.aspx" and search for books by typing "Databases" in the title field. Testers are then directed to "Books.aspx," which displays the search results. Testers select a book and are then directed to "BookDetail.aspx," showing their selection details. The test case is collected from each tester session log and represented as:

TC1 = <(Default.aspx, (categoryid='2', title='Databases')), (AdvSearch.aspx, (title='Databases', author='Jim Buyens', categoryid='2', pricemin='15.99', pricemax='39.99')), (Books.aspx, (categoryid='2', title='Databases')), (BookDetail.aspx, (itemid='1', categoryid='2', quantity='3', rating='48', ratingcount='14'))>
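To make the path-plus-inputs notation concrete, the sketch below (in Python, which the paper itself does not use; the prototype is ASP.NET) shows one possible in-memory representation of the BookStore excerpt and of TC1. The structure names and the edge set are illustrative assumptions, not part of the COTS implementation.

```python
# Illustrative sketch only: holding the dependence graph G = (P, E) and a
# user-session-based test case TC_j as plain Python structures.

# Pages (nodes P) of the BookStore excerpt; the edges E are assumed here from
# the TC1 navigation path, since Fig. 1 is not reproduced in this text.
pages = {"Default.aspx", "AdvSearch.aspx", "Books.aspx", "BookDetail.aspx"}
edges = {
    ("Default.aspx", "AdvSearch.aspx"),
    ("AdvSearch.aspx", "Books.aspx"),
    ("Books.aspx", "BookDetail.aspx"),
}

# A test case is a test path: a sequence of (page, attribute-value pairs M).
TC1 = [
    ("Default.aspx",    {"categoryid": "2", "title": "Databases"}),
    ("AdvSearch.aspx",  {"title": "Databases", "author": "Jim Buyens",
                         "categoryid": "2", "pricemin": "15.99", "pricemax": "39.99"}),
    ("Books.aspx",      {"categoryid": "2", "title": "Databases"}),
    ("BookDetail.aspx", {"itemid": "1", "categoryid": "2", "quantity": "3",
                         "rating": "48", "ratingcount": "14"}),
]

# Pages covered by the test case (used later to build the coverage matrix A).
covered_pages = {page for page, _ in TC1}
```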

Fig. 3 shows a collaborative testing example in a crowdsourcing environment. The Web application with four Web pages and four test cases was extracted from BookStore (Open Source Web Applications, 2011) and three online testers participated in the collaborative test without interacting. With no coordination, testers usually execute specific popular application functions. Fig. 3 uses the threshold and trustworthiness variables to describe specific situations in a crowdsourcing test. The threshold variable is used as a testing criterion for testing a target based on page complexity. A higher threshold means that a testing target is more complex and requires more testing effort. The trustworthiness variable reflects a tester's experiences, abilities, and degrees of involvement by using a crowdsourcing Web site rating mechanism (AppStori, 2012; TopCoder, 2011). Higher trustworthiness reflects more reliable testing results. In this scenario, the collaborative testing problem is transformed into a job assignment problem with resource constraints.

Fig. 3. An example of collaborative testing with dependence graph, test cases, and tester profiles in the crowdsourcing environment.

The notations used for the collaborative testing formulation are as follows:

i: number of test cases
j: number of testers
k: number of Web pages
xij: xij = 1 means that the ith test case is assigned to the jth tester; otherwise, xij = 0
aik: aik = 1 means that the ith test case covers the kth page; otherwise, aik = 0
A: test case page coverage matrix, A = [aik]
tij: time taken for the jth tester to execute the ith test case
T: test case execution time matrix, T = [tij]
wj: trustworthiness of the jth tester
W: tester trustworthiness matrix, W = [wj]
LoCk: lines of code for the kth Web page, a page complexity evaluation
thk: threshold of the kth Web page, thk = LoCk / Σm LoCm
TH: page threshold matrix, TH = [thk]
hj: the jth tester's available time
H: tester availability matrix, H = [hj]

The five matrices of the collaborative testing problem were then formulated in the training phase. Table 1 shows a test case page coverage matrix of the relationships between test cases and pages. For example, test case 1 (TC1) in Fig. 3 covers three pages, P1, P2, and P3, and is written as (P1 → P2 → P3). Table 1 shows the test case page coverage matrix based on the set of test cases. Three testers participated in the test and the test case execution time for each tester was estimated by adding the execution times of the corresponding pages. Tester A executed TC1 in 8 min. Table 2 shows the test case execution time matrix, which summarizes the tester execution times.

Two factors that may affect collaborative testing were also considered: page complexity and tester trustworthiness. Because different pages may have different complexities, the threshold was defined as the testing criterion by referring to page complexity. In the page threshold matrix in Table 3, the test threshold was defined with lines of code, which are complexity indicators (Albrecht and Gaffney, 1983; Low and Jeffery, 1990). More complex pages require more testing. Because inexperienced and even malicious testers are likely to participate in testing, users must decide whether to trust the testing results. To make this decision, a tester trustworthiness matrix was created based on tester prior participation, as shown in Table 4. In Table 5, the tester availability matrix shows the time that testers can dedicate to testing.

Table 1
Test case page coverage matrix, A = [aik].

      P1  P2  P3  P4
TC1   1   1   1   0
TC2   1   0   0   1
TC3   0   1   1   1
TC4   1   1   1   1

Table 2
Test case execution time matrix, T = [tij].

      Tester A  Tester B  Tester C
TC1   8         24        14
TC2   22        18        18
TC3   12        18        24
TC4   16        8         16

Table 3
Page threshold matrix, TH = [thk].

                                        P1    P2    P3    P4    Summary
Page complexity (lines of code, LoCk)   533   476   765   435   2209
Test threshold, thk                     4.82  4.31  6.93  3.94  20

Table 4
Tester trustworthiness matrix, W = [wj].

                              Tester A  Tester B  Tester C
Experiences of participation  3000      1500      2100
Trustworthy, wj               1         0.5       0.7

This study aims to find the minimal-time test case combination that covers the whole Web application and achieves the testing criteria. When testing began, Testers A, B, and C executed pages individually. To complete testing, testers must be coordinated to conduct the test cases and work quickly. The test case executions for the case solutions are as follows:

TC1: Tester A executed the case in 8 min.
TC2: Tester B executed the case in 18 min.

To achieve collaborative testing with minimal total testing time, test cases must be appropriately assigned to testers. The Web application testing structures were described by constructing a dependence graph. Page coverage was calculated during testing using the dependence graph. Crowd testers were then guided by test case assignment and they then performed test cases on their own, that is, Tester A: P1 → P2 → P3 → P4, Tester B: P1 → P2 → P3, or Tester C: P1 → P4.

Table 5
Tester availability matrix, H = [hj].

hj    Tester A  Tester B  Tester C
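The matrices from Tables 1-4 can be transcribed directly as small arrays. The availability values of Table 5 are not legible in this excerpt, so the H entries below are placeholders chosen only to make the sketch self-contained; everything here is illustrative Python, not the COTS data model.

```python
# Matrices from Tables 1-4 of the motivating example (4 test cases, 3 testers, 4 pages).
A = [[1, 1, 1, 0],        # test case page coverage matrix, A = [a_ik]
     [1, 0, 0, 1],
     [0, 1, 1, 1],
     [1, 1, 1, 1]]
T = [[8, 24, 14],         # test case execution time matrix, T = [t_ij] (minutes)
     [22, 18, 18],
     [12, 18, 24],
     [16, 8, 16]]
TH = [4.82, 4.31, 6.93, 3.94]   # page threshold matrix, TH = [th_k]
W = [1.0, 0.5, 0.7]             # tester trustworthiness matrix, W = [w_j]
# Table 5 values are not given in this excerpt; these availabilities (minutes)
# are placeholders for illustration only.
H = [60, 60, 60]
```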


This is how test cases were assigned to testers and how testers were guided to the required regions to achieve testing and minimize total testing time. This is a job assignment problem with resource constraints.

3.2. Problem definition: collaborative testing job assignment

This section defines the proposed job assignment problem using a mathematical formula. Based on the matrices, the job assignment problem was formulated as an ILP problem. The job assignment problem was used to calculate a sub-optimal representative set (Garey and Johnson, 1979). Calculating the optimal representative set is a minimal representative set problem, or an NP-Complete problem. The ILP formulation was developed as follows:

Objective function [OBJ]:

$$\min \sum_{i}\sum_{j} x_{ij} t_{ij},$$

where $x_{ij}$ is binary for test case $i = 1, \ldots, n$ and tester $j = 1, \ldots, m$.

Subject to [CONSTRAINTS]:

1. $\sum_{i}\sum_{j} a_{ik} x_{ij} \ge 1$ for each page $k$ [CC1: Coverage Constraint 1], where $a_{ik} = 1$ if the ith test case covers the kth page, otherwise $a_{ik} = 0$.
2. $\sum_{k} a_{ik} \ge 1$ for each test case $i$ [CC2: Coverage Constraint 2].
3. $0 \le \sum_{i} x_{ij} t_{ij} \le h_j$ for each tester $j$ [TAC: Tester Available Constraint], where $t_{ij} > 0$ and $h_j > 0$.
4. $\sum_{i}\sum_{j} a_{ik} x_{ij} w_j \ge th_k$ for each tested page $k$ [TC: Trustworthiness Constraint], where $0 < w_j \le 1$ and $th_k > 0$.
5. $x_{ij}$ is binary [BC: Binary Constraint].

According to this ILP formulation, the objective function is used to calculate the total collaborative testing execution time. To calculate the optimal execution time, the objective function minimizes the total execution time with variables xij and tij. If binary variable xij = 1, the ith test case is assigned to the jth tester; otherwise xij = 0. Variable tij reflects the testing time of the jth tester for the ith test case. To model the collaborative testing characteristics, the coverage, availability, and trustworthiness constraints were constructed as Constraints 1-4. Table 1 shows that aik = 1 if the ith test case covers the kth Web page. Constraint 1 ensures that all Web pages are covered by test cases and Constraint 2 ensures that each test case covers at least one page. Coverage measurements are important for software testing. Constraint 3 ensures that tester resources are limited to prevent work overloading by referring to Table 2, T = [tij], and Table 5, H = [hj]. In Table 2, variable tij is the execution time of the ith test case for the jth tester. In Table 5, hj represents tester work time. Constraint 4 models trustworthiness and testing support by considering tester trustworthiness from Table 4, W = [wj]. Each tester gains a trustworthy weighting when he or she executes testing work. Threshold value thk in Table 3 is a Web page testing criterion and wj represents the trustworthiness of the jth tester. A more complex page with higher threshold thk requires more testing. The sum of aik xij wj means that the assigned test cases must cover testing threshold thk of each page. If covered, the page is marked as tested; if not, the page is assigned to another tester. In Constraint 5, variable xij can only equal 0 or 1.
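As a rough illustration of this formulation (not the authors' CPLEX setup), the ILP can be sketched with the open-source PuLP package; using PuLP here is an assumption made purely for the example, and the trustworthiness constraint is written as a lower bound on trustworthiness-weighted coverage, following the description above.

```python
# Sketch of the ILP formulation using the open-source PuLP package as a stand-in
# for the ILOG CPLEX tool reported in the paper (illustrative, not the authors' code).
import pulp

def build_collaborative_testing_ilp(A, T, TH, W, H):
    """A=[a_ik] coverage, T=[t_ij] times, TH=[th_k] thresholds,
    W=[w_j] trustworthiness, H=[h_j] availability."""
    n_tc, n_tester, n_page = len(A), len(W), len(TH)
    prob = pulp.LpProblem("collaborative_testing", pulp.LpMinimize)
    # BC: binary decision variables x_ij (test case i assigned to tester j).
    x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(n_tester)]
         for i in range(n_tc)]
    # OBJ: minimize the total execution time of the assigned test cases.
    prob += pulp.lpSum(x[i][j] * T[i][j]
                       for i in range(n_tc) for j in range(n_tester))
    # CC2 is a property of the test case base itself: each test case covers a page.
    assert all(sum(row) >= 1 for row in A)
    for k in range(n_page):
        # CC1: each page is covered by at least one assigned test case.
        prob += pulp.lpSum(A[i][k] * x[i][j]
                           for i in range(n_tc) for j in range(n_tester)) >= 1
        # TC: trustworthiness-weighted coverage must reach the page threshold.
        prob += pulp.lpSum(A[i][k] * x[i][j] * W[j]
                           for i in range(n_tc) for j in range(n_tester)) >= TH[k]
    for j in range(n_tester):
        # TAC: assigned work may not exceed the tester's available time.
        prob += pulp.lpSum(x[i][j] * T[i][j] for i in range(n_tc)) <= H[j]
    return prob, x

# Usage (feasibility of the small Tables 1-5 instance depends on the threshold
# scale and on the placeholder availabilities):
# prob, x = build_collaborative_testing_ilp(A, T, TH, W, H); prob.solve()
```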

4. Crowdsourcing-based approach for collaborative testing

4.1. Overview

Fig. 4 shows that the proposed crowdsourcing-based approach consists of two main phases. In the training phase, the ILP transformer was constructed to transform the collaborative testing problem into an ILP formulation. The required settings include a dependence graph, test cases, and tester profiles. A dependence graph (Jeffrey and Gupta, 2005; Leon et al., 2005) was leveraged to represent the Web applications. Test case bases were converted into a coverage matrix to represent the relationship between Web applications and test cases. A tester profile was created for each tester based upon his or her participation. In the testing phase, participating testers conducted tests. The test case assignment algorithm was designed based on the greedy approach for assigning appropriate test cases to individual testers. This approach continues to assign test cases to testers until all pages are finished.

4.2. Training phase

In the training phase, five matrices based on the dependence graph, test case base, and tester profiles were used to represent the job assignment problem. Algorithm 1, the ILP transformer algorithm, was proposed to convert the original problem into an ILP problem according to the problem definition in Section 3.2. Algorithm 1 consists of three steps. In Step 1, the objective function calculates the minimal time required to execute the test cases using the available testers. If variable xij = 1 in the objective function, this means that the ith test case is assigned to the jth tester. The number of variables in the objective function is (i*j). In Step 2, Coverage Constraint 1 (CC1) and Coverage Constraint 2 (CC2) were constructed by considering the test case page coverage matrix (A). The Tester Available Constraint (TAC) and Trustworthiness Constraint (TC) were constructed by considering the test case execution time matrix (T) and tester availability matrix (H). The binary constraint for variable xij is the Binary Constraint (BC). There are (k + i) constraints for CC1 and CC2, j constraints for TAC, k constraints for TC, and i*j constraints for BC. The proposed model consists of (i*j + i + j + 2k) constraints; a worked count for the running example is given after Algorithm 1 below. Step 3 returns the ILP formulation.

Algorithm 1. ILP transformer algorithm

Input: test case page coverage matrix, A = [aik]; test case execution time matrix, T = [tij]; page threshold matrix, TH = [thk]; tester trustworthiness matrix, W = [wj]; tester availability matrix, H = [hj].
Output: collaborative testing in the ILP formulation, ILP = {OBJ, CONSTRAINTS}.

Step 1. According to matrices A and T, let objective function OBJ = Σi Σj xij tij to minimize the test case combination execution time.
Step 2. Generate constraints.
  Step 2.1. Referring to test case page coverage matrix A, add CC1 and CC2 to CONSTRAINTS.
  Step 2.2. Referring to test case execution time matrix T and tester availability matrix H, add TAC to CONSTRAINTS.
  Step 2.3. Referring to matrices A, W, and TH, add TC to CONSTRAINTS.
  Step 2.4. Add BC to CONSTRAINTS for variable xij.
Step 3. Return the ILP formulation, ILP = {OBJ, CONSTRAINTS}.
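As a worked check of the constraint count (not taken from the paper), the motivating example of Fig. 3 has i = 4 test cases, j = 3 testers, and k = 4 pages:

```python
# Constraint count i*j + i + j + 2k for the Fig. 3 example (i=4, j=3, k=4).
i, j, k = 4, 3, 4
print(i * j + i + j + 2 * k)   # 12 + 4 + 3 + 8 = 27 constraints
```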

4.3. Testing phase

The test case assignment algorithm was proposed in the testing phase and the linear programming tool ILOG CPLEX was used to evaluate algorithm performance. The algorithm calculates the representative set of test cases with the minimal execution time and satisfies the objective function and the constraints. The algorithm identifies one candidate test case in each iteration. As shown in Algorithm 2, the proposed algorithm consists of three steps.

Algorithm 2. Test case assignment algorithm

Input: collaborative testing in the ILP formulation, ILP = {OBJ, CONSTRAINTS}; proposed heuristic strategy, H.
Output: test case assignments, xij; objective function, OF.
Initialize: Set all variables xij = 0, OF = 0.

Step 1. Calculate page support uk for each page k.
Step 2. Identify target page k with the maximal page gain.
  Step 2.1. Calculate page gain Evalk for each page k with page support uk and threshold thk.
  Step 2.2. Identify target page k with maximal page gain Evalk.
Step 3. Identify test case t for target page k based on TCS(H).
  Step 3.1. Use proposed heuristic strategy H from H1 to H4 for test case selection.
  Step 3.2. Select test case t from candidate test case set T. Calculate TCS(H) for each candidate test case t and identify the test case t with maximum TCS(H).
  Step 3.3. Assign test case t to tester j, set xij = 1, and update OF based on OBJ.
  Step 3.4. If all page supports uk are greater than threshold thk, EXIT; otherwise, GO TO Step 1.

In Step 1, the page support was used to monitor all test cases performed for each page. In Eq. (1), the page support was calculated by referring to the test case page coverage matrix and tester trustworthiness.

Page support vector: U = {uk | page support for the kth Web page}, where page support

$$u_k = \sum_{i}\sum_{j} a_{ik} x_{ij} w_j. \qquad (1)$$

In Step 2, page gain Evalk was calculated with page support uk and threshold thk for each node using Eq. (2). The node with maximal Evalk is the target page for testing. Once the target page is selected, the candidate test case set is produced, where each test case contains the target page.

$$\text{Page gain: } Eval_k = \frac{th_k - u_k}{th_k}, \qquad (2)$$

where thk is the threshold of the kth page and uk is the page support of the kth page.

In Step 3 the most essential test case was selected from the set of candidate test cases. Four heuristic strategies can be selected in the proposed algorithm. The greedy algorithm is based on four heuristic strategies for test case selection, TCS(H), defined in Eqs. (3.1)-(3.4). Eq. (3.1), TCS(H1), selects the test case with maximal coverage and Eq. (3.2), TCS(H2), selects the test case with the minimal execution time. Eq. (3.3), TCS(H3), selects the test case with maximal tester trustworthiness and Eq. (3.4), TCS(H4), is a compound heuristic, consisting of minimal execution time and maximal coverage.

Heuristic 1 (H1): maximal coverage heuristic

$$TCS(H1) = \max \sum_{k} a_{ik}, \qquad (3.1)$$

where aik = 1 means that the ith test case contains the kth page.

Heuristic 2 (H2): minimal time heuristic

$$TCS(H2) = \min t_{ij}, \qquad (3.2)$$

where tij refers to the execution time of the ith test case by the jth tester.

Heuristic 3 (H3): maximal-trustworthiness heuristic

$$TCS(H3) = \max w_j, \qquad (3.3)$$

where wj represents the trustworthiness of tester j.

Heuristic 4 (H4): compound heuristic

$$TCS(H4) = \min\left(\frac{t_{ij}}{\max \sum_{k} a_{ik}}\right). \qquad (3.4)$$

The selected test case is used to update the current page support. Testing is complete once the testing criterion is met; otherwise, proceed to Step 1.
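The testing phase can be sketched compactly; the Python below is a minimal, assumed implementation of Steps 1-3 with heuristic H2 (minimal execution time). Tie-breaking and the availability bookkeeping are simplifications and do not claim to match the COTS code.

```python
# Minimal sketch of the test case assignment algorithm (Algorithm 2) with the
# minimal-time heuristic H2. Illustrative only.
def assign_test_cases_h2(A, T, TH, W, H):
    n_tc, n_tester, n_page = len(A), len(W), len(TH)
    x = [[0] * n_tester for _ in range(n_tc)]    # assignments x_ij
    remaining = list(H)                          # remaining availability per tester
    objective = 0.0
    suggestions = []
    while True:
        # Step 1: page support u_k = sum_i sum_j a_ik * x_ij * w_j.
        u = [sum(A[i][k] * x[i][j] * W[j]
                 for i in range(n_tc) for j in range(n_tester))
             for k in range(n_page)]
        if all(u[k] >= TH[k] for k in range(n_page)):
            break                                # testing criterion met
        # Step 2: page gain Eval_k = (th_k - u_k) / th_k; pick the maximal one.
        gains = [(TH[k] - u[k]) / TH[k] for k in range(n_page)]
        target = max(range(n_page), key=lambda k: gains[k])
        # Step 3 (H2): among unassigned (test case, tester) pairs covering the
        # target page, pick the smallest execution time the tester can afford.
        candidates = [(T[i][j], i, j)
                      for i in range(n_tc) if A[i][target] == 1
                      for j in range(n_tester)
                      if x[i][j] == 0 and T[i][j] <= remaining[j]]
        if not candidates:
            break                                # no feasible assignment left
        t, i, j = min(candidates)
        x[i][j] = 1
        remaining[j] -= t
        objective += t
        suggestions.append((i, j))
    return x, objective, suggestions
```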

The crowdsourcing-based collaborative testing approach contains two algorithms. Algorithm 1 is an ILP transformer with linear time complexity to transform the proposed matrices into an ILP formulation. Therefore, the time complexity of Algorithm 2, which dominates the complexity of the computation of the proposed approach, must be analyzed.

In Step 1, all aik xij wj are summarized as page support uk for each page. Page support time complexity is calculated by multiplying the number of test cases (m) by the number of pages (n), i.e., O(m × n).

In Step 2, the page gain calculation, Evalk, requires subtraction and division. In Step 2.1, the time complexity of all page gain calculations is O(n). Step 2.2 uses time complexity O(n) to identify the maximal page gain. The total time complexity of Step 2 is 2O(n).

In Step 3, the greedy algorithm with four heuristic strategies is based on a selection sort algorithm. Therefore, the average time complexity of the selection sort algorithm is O(n × log(n)), and the time complexity of the worst case is no worse than O(n²). In Eq. (3.4), the compound heuristic is a min-max function and its time complexity is 2O(n²). Combining Steps 1-3, Algorithm 2 is represented by

$$T = O(m \times n) + 2O(n) + \max\left(O(n^2),\ O(n \log n),\ 2O(n^2)\right) = O(n^2).$$

Example:

Table 6 shows an example illustrating the collaborative testing approach processes. The top of the table shows page thresholds and the bottom shows the proposed algorithm processes. According to the proposed algorithm, in Step 1 the page support vector, U(r) = [uk], is calculated for each page using Eq. (1). Test case 1 is assigned to Tester A in iteration r = 1 and the page support vector U is updated from U(0) = [0, 0, 0, 0] to U(1) = [0, 0.7, 0.7, 0.7]. In Step 2, the target page is selected by calculating the page gain, Evalk, for each page. As shown in Eq. (2), page support uk represents the testing situation of the kth page. The maximal Evalk value means that the kth page is the target page in the Web application. The Evalk calculation selected Page 3, with a maximal value of 0.557. In Step 3, a test case is selected based on the algorithm test case selection strategy, TCS. In this case, the collaborative testing algorithm was performed with test case selection strategy H2 (the minimal time heuristic), as shown in Eq. (3.2). The test case containing Page 3 was given to testers and testing was performed iteratively until page Evalk was ≤ 0. The proposed test case was assigned to testers and testing was performed iteratively until testing of all Web application pages was complete. The proposed algorithm identifies candidate test cases in each iteration. The test case combination is suggested by the sequence TC3, TC4, TC1, TC2, TC1, TC3.

Table 6
An example of the results of the test case assignment algorithm with heuristic H2.

Threshold (thk):   Page 1: 2.41   Page 2: 2.15   Page 3: 3.47   Page 4: 1.97

Page support (uk) and page gain (Evalk) per iteration, with the suggested test case:

Iteration  P1 uk  P1 Evalk  P2 uk  P2 Evalk  P3 uk  P3 Evalk  P4 uk  P4 Evalk  Suggestion
Initial    0      1         0      1         0      1         0      1         TC3
r=1        0      1         0.7    0.67      0.7    0.8       0.7    0.64      TC4
r=2        0.7    0.71      1.4    0.35      1.4    0.6       1.4    0.29      TC1
r=3        1.2    0.5       1.9    0.12      1.9    0.45      1.4    0.29      TC2
r=4        1.7    0.29      1.9    0.12      1.9    0.45      1.9    0.03      TC1
r=5        2.7    0         2.9    0         2.9    0.16      1.9    0.03      TC3
r=6        2.7    0         3.9    0         3.9    0         2.9    0         –
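Running the earlier H2 sketch on the example data (A, T, W as in Tables 1, 2 and 4, the Table 6 thresholds, and placeholder availabilities) exercises the same iteration pattern; because the exact sequence depends on tie-breaking and on the availability values, it need not reproduce the Table 6 suggestion sequence exactly.

```python
# Exercising the H2 sketch on the example data; availabilities are placeholders.
TH_example = [2.41, 2.15, 3.47, 1.97]          # thresholds as listed in Table 6
x, obj, seq = assign_test_cases_h2(A, T, TH_example, W, [120, 120, 120])
print(obj, [("TC%d" % (i + 1), "ABC"[j]) for i, j in seq])
```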

5. Implementation and experiment

This study implements the prototype COTS to support collaborative testing using the proposed crowdsourcing-based approach. To evaluate the performance of the proposed approach, a series of computational simulations were designed and conducted. The experiments consisted of three parts: (1) converting the collaborative testing problem to the ILP formulation; (2) comparing the heuristic algorithm and the ILP formulation; and (3) investigating collaborative testing using the proposed approach and the COTS.

5.1. System implementation

The COTS is a Web-based application that was developed using ASP.NET and MS SQL Server. This study applies the proposed approach to support test case assignment of collaborative testing using the COTS. The COTS also collects and analyzes bug reports for collaborative testing. Fig. 5 shows that the system contains five main modules: the Testing User Interface, Bug Report System, Test Case Assignment, Web Application Transformer, and System Admin modules. As shown in Fig. 6, the Testing User Interface and Bug Report System modules provide interactive interfaces to testers. The Bug Report System module allows testers to report any bugs or glitches. The COTS records tester-reported bugs and stores the information in the Bug Report Database. The Test Case Assignment module uses the proposed approach to provide testing objectives to testers. Fig. 7 shows that the System Admin module provides various administrative functions for system administrators to monitor testing. To evaluate the proposed algorithm performance, test engineers performed collaborative testing on the COTS in various experiments.

Fig. 5. System architecture of COTS.

5.2. Experiments and results

5.2.1. Experiment 1: converting the collaborative testing problem to the ILP formulation

Experiment 1 determines the minimal-time test case combination using the ILP formulation. The ILP formulation was used to identify optimal collaborative testing solutions. It was implemented using the linear programming tool ILOG CPLEX Interactive Optimizer 11.2.1, on a Linux server with Intel Core 2 Extreme 2.66 GHz CPUs and 4 GB RAM. The Web application BookStore (Open Source Web Applications, 2011), from the Web site www.gotocode.com, was used. The application was developed in the ASP.NET language. To simulate a real testing environment, several combinations of test cases and testers were constructed for collaborative testing. Testing engineers selected the test cases according to the features of the Web application BookStore. As shown in Table 1, i represents test cases and j represents testers. The parameters for tester profiles, user trustworthiness weights, participating testers, and testing thresholds were configured based on the preliminary settings in Section 3.

Fig. 7. System Admin module of the COTS for monitoring the collaborative testing.

The experiment results suggest that the execution times increase sharply as the number of variables (i*j) increases. For example, when the number of variables (i*j) is 1800, the average execution time is more than 2651 s, and when it is 2145, the average execution time is 2851 s. To verify the correlation between the number of variables and execution time, the R-squared value (coefficient of determination, R²) (Pearson, 1987) for the two indicators was calculated. When the coefficient is greater than 0.75, the correlation between the two indicators is regarded as high and one of the indicators is eliminated based on the next criterion. An R-squared value of 0.8035 reflects a significant positive correlation between the number of variables and execution time. When the number of variables increases, the execution time increases. In this case, CPLEX failed to produce results when the number of variables was more than 2800 (see Table 7).
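The reported coefficient can be recomputed from the two Table 7 columns; the small check below assumes numpy, which the paper does not mention.

```python
# R-squared between number of variables and ILP execution time (Table 7 data).
import numpy as np

variables = np.array([525, 720, 945, 1200, 1485, 1800, 2145])
exec_time = np.array([282.75, 530.79, 752.28, 707.25, 2991.73, 2651.95, 2851.46])
r = np.corrcoef(variables, exec_time)[0, 1]
print(round(r ** 2, 4))   # approximately 0.8035, as reported above
```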

5.2.2. Experiment 2: comparisons between the heuristic algorithm and ILP formulation

Experiment 2 compares the performance of the proposed algorithm with that of the CPLEX ILP formulation. Collaborative testing experiments designed by experienced test engineers were conducted on the target Web application, BookStore. Tables 8-10 show the experiment results with test case and tester configurations.

The solutions to the corresponding ILP formulation were used as a baseline for comparisons. In Tables 8-10, the first three columns show the configuration of test cases, testers, and number of variables. The ILP column shows the two execution results, execution time and optimal objective. The four heuristic algorithm sub-columns summarize the computational objectives of the proposed algorithm and heuristic strategies.

In Table 8, the number of test cases increases with the number of testers. In Table 9, the number of testers is constant and the number of test cases increases. In Table 10, the number of test cases is set at 70 and the number of testers increases in increments of 5.

The following observations are based on the numerical results.

(1) Tables 8-10 show that the ILP formulation execution time increases rapidly as the number of variables increases, demonstrating that the problem quickly becomes more complex when the number of variables increases. When the number of variables is large, the corresponding execution time becomes prohibitively long. However, the proposed approach is almost unaffected by increasing problem complexity. The proposed algorithm achieved averages of approximately 90% (ILP over Heuristic H2 in Table 8), 79% (ILP over Heuristic H2 in Table 9), and 89.5% (ILP over Heuristic H2 in Table 10) of the optimal solution. This shows that the proposed approach is suitable for collaborative testing in a real-time environment, without requiring complex computations.

(2) Because the proposed approach uses a greedy algorithm, this ensures that a feasible solution is obtained within an acceptable time. This is reflected by the experiment results. However, because the collaborative testing problem is NP-Complete, the CPLEX ILP formulation execution time increases significantly as the number of variables increases. In Table 10, when the number of variables exceeds 2800, the solution cannot be found in 7200 s.

(3) The experiment results show that the objective decreases as the number of test cases and testers increases. In Tables 8-10, the objective optimal column suggests that when there are few test cases and testers, the number of solution combination choices is relatively small. This results in higher cost solutions.

(4) The experiments propose four heuristic strategies to solve the ILP formulation. Heuristics 2 and 4 perform better than the other two heuristics. The average objectives of Heuristic 2 are 2510.71, 2854.44, and 2335, and the average objectives of Heuristic 4 are 2555, 2898.89, and 2375.83. The two heuristics include costs in the equations. Heuristics 1 and 3 may perform better in the long term when learning curves are considered, because tester capability increases with experience.
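Observation (1)'s figure of roughly 90% for Table 8 can be recovered directly from the ILP objective optimal and Obj. H2 columns; the snippet below is a quick check, not part of the paper.

```python
# Average ratio of the ILP optimum to the heuristic H2 objective (Table 8 rows).
ilp_opt = [2705, 2355, 2275, 2195, 2150, 2100, 2095]
h2_obj = [3225, 2690, 2595, 2470, 2235, 2190, 2170]
ratios = [o / h for o, h in zip(ilp_opt, h2_obj)]
print(round(sum(ratios) / len(ratios), 3))   # roughly 0.91, i.e. about 90%
```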

5.2.3. Experiment 3: collaborative testing with the proposed approach in the COTS

Experiment 3 evaluates the performance of the proposed approach and the prototype COTS. Crowdsourcing Web sites TopCoder and Amazon Mechanical Turk were consulted when designing the crowdsourcing test experiment. Collaborative testing was performed using the Web application BookStore and Internet crowd testers. Experienced test engineers embedded 10 bugs into the Web application. Crowd testers were then requested to participate in the test. Participating testers were divided into two groups, a guided group and a random group, based on page supports. A new tester registered by completing a questionnaire to estimate their trustworthiness. In Experiment 3, the COTS assigned test cases to testers according to heuristic H1 criteria in the guided group (minimum cost first strategy) and randomly assigned test cases to testers in the random group. The experiment ended when all page supports in BookStore reached their thresholds.

Over a 6-day testing period, 144 testers participated in the experiment. The COTS dispatched incoming crowd testers to the random or guided group, until each page support in the two groups exceeded its threshold. The guided group test ended 3 days before the random group. Fig. 8 shows that the guided group required less testing time than the random group. Total execution time comparisons between the two groups show that the testing effort reduction rate is almost 53%. This indicates that the approach improves testing effectiveness by guiding testers with test case assignment. Table 11 shows the bugs reported by participating testers. The groups reported 55 and 97 bugs and both groups of testers discovered all 10 defects. This shows that although both groups exhibit the same defect revealing ability, the guided group only required 56% of the testing effort of the random group.


Table 7
Execution results of ILP for different test case and tester combinations in Experiment 1.

         Test case (i)  Tester (j)  Variables (i×j)  Execution time (s)  Objective optimal
ex1-1    35             15          525              282.75              2705
ex1-2    40             18          720              530.79              2355
ex1-3    45             21          945              752.28              2275
ex1-4    50             24          1200             707.25              2195
ex1-5    55             27          1485             2991.73             2150
ex1-6    60             30          1800             2651.95             2100
ex1-7    65             33          2145             2851.46             2095
Average  50             24          1260             1538.32             2267.86

R-squared (number of variables, execution time) = 0.8035.

Table 8
Execution results of different test case and tester combinations in Experiment 2.

       Test case (i)  Tester (j)  Variables (i×j)  ILP exec. time (s)  ILP objective optimal  Obj. H1   Obj. H2   Obj. H3   Obj. H4
ex1-1  35             15          525              282.75              2705                   4265      3225      3805      3315
ex1-2  40             18          720              530.79              2355                   4295      2690      3755      2745
ex1-3  45             21          945              752.28              2275                   4125      2595      4240      2670
ex1-4  50             24          1200             707.25              2195                   3755      2470      3475      2470
ex1-5  55             27          1485             2991.73             2150                   3360      2235      3250      2395
ex1-6  60             30          1800             2651.95             2100                   3270      2190      3240      2185
ex1-7  65             33          2145             2851.46             2095                   3230      2170      3000      2105
Avg.   50             24          1260             1538.32             2267.86                3757.14   2510.71   3537.86   2555

Table 9
Execution results of different numbers of test cases in Experiment 2.

       Test case (i)  Tester (j)  Variables (i×j)  ILP exec. time (s)  ILP objective optimal  Obj. H1   Obj. H2   Obj. H3   Obj. H4
ex2-1  30             20          600              2.46                2485                   3140      3520      4760      3705
ex2-2  35             20          700              5.48                2395                   3930      3385      4100      3505
ex2-3  40             20          800              23.53               2300                   3945      2770      3570      2810
ex2-4  45             20          900              40.99               2270                   3930      2920      3960      2780
ex2-5  50             20          1000             19.33               2225                   3795      2675      3870      2725
ex2-6  55             20          1100             80.11               2340                   3375      2680      3740      2675
ex2-7  60             20          1200             84.37               2150                   3300      2660      3930      2700
ex2-8  65             20          1300             1001.86             2240                   3480      2535      3925      2605
ex2-9  70             20          1400             2599.27             2130                   3390      2545      4265      2585
Avg.   50             20          1000             428.6               2281.67                3587.22   2854.44   4013.33   2898.89

Table 10
Execution results of different numbers of testers in Experiment 2.

       Test case (i)  Tester (j)  Variables (i×j)  ILP exec. time (s)  ILP objective optimal  Obj. H1   Obj. H2   Obj. H3   Obj. H4
ex3-1  70             15          1050             402                 2245                   3515      2670      4990      2760
ex3-2  70             20          1400             682                 2045                   3515      2585      4660      2485
ex3-3  70             25          1750             970                 2035                   3250      2245      3930      2325
ex3-4  70             30          2100             1837                1990                   3220      2255      3905      2345
ex3-5  70             35          2450             2671                2125                   3310      2275      4160      2340
ex3-6  70             40          2800             7200 (a)            2110 (b)               3195      1980      3120      2000
Avg.   70             27.5        1925             2293.67             2091.67                3334.17   2335      4127.5    2375.83

(a) In this experiment, the testing criterion was set as 7200 s.
(b) Current best solution under the time limit.

Table 11
Statistics of bug reports in Experiment 3.

              Crowd testers  Execution time (min)  Avg. execution time (min)  Number of bug reports  Defect coverage
Random group  85             16,740                196.94                     97                     100%


Fig. 8. Total execution time of the guided group and random group in Experiment 3.


6. Conclusion

This study models collaborative testing in a crowdsourcing environment as a job assignment problem and represents the problem using an ILP formulation. A greedy algorithm with four heuristic strategies is proposed to solve a large-scale collaborative testing problem. Based on the proposed algorithm, the COTS is implemented to evaluate the proposed model. The experiment results show that the approach is effective and performs well at testing and revealing defects. The experiment results also show that the average objective solution of the proposed algorithm is approximately 90% of the optimal solution. The approach can be applied in a real-time crowdsourcing environment and reduces testing efforts by almost 53%.

Crowdsourcing is a process where networked people collaborate to complete a cloud computing task. Different collaborative scenarios mean that various factors and constraints must be considered. Future studies should apply this approach to other phases of the software development life cycle, such as requirement development, software design, and code implementation, to accelerate system development in a crowdsourcing environment. Various factors, such as crowd expertise and knowledge, should also be incorporated into the collaborative testing approach.

Acknowledgments

This work was partially supported by the National Science Council of the Republic of China under grants NSC101-2511-S-468-007-MY3 and NSC101-2511-S-468-001.

References

Abdullah, R., Lakulu, M., Ibrahim, H., Selamat, M.H., Nor, M.Z.M., 2009. The challenges of open source software development with collaborative environment. In: International Conference on Computer Technology and Development.

Albrecht, A.J., Gaffney Jr., J.E., 1983. Software function, source lines of code, and development effort prediction: a software science validation. IEEE Transactions on Software Engineering SE-9 (6), 639–648.

Amazon Mechanical Turk, https://www.mturk.com/mturk/welcome (last accessed 01.12.11).

Amazon Mechanical Turk Wikipedia, http://en.wikipedia.org/wiki/Amazon_Mechanical_Turk (last accessed 01.12.11).

Andrews, A.A., Offutt, J., Alexander, R.T., 2005. Testing web applications by modeling with FSMs. Software and Systems Modeling, 326–345, July.

AppStori, http://www.appstori.com/ (last accessed 01.09.12).

Bai, X., Cao, Z., Chen, Y., 2007a. Design of a trustworthy service broker and dependence-based progressive group testing. International Journal of Simulation and Process Modelling 3 (1), 66–79.

Bai, X., Wang, Y., Dai, G., Tsai, W.-T., Chen, Y., 2007b. A framework for contract-based collaborative verification and validation of web services. In: Component-Based Software Engineering, vol. 4608. Springer, Berlin/Heidelberg, pp. 258–273.

Benedikt, M., Freire, J., Godefroid, P., 2002. VeriWeb: automatically testing dynamic websites. In: Proceedings of the 11th International World Wide Web Conference, Honolulu, HI, USA, pp. 654–668, May.

Bertolino, A., 2007. Software testing research: achievements, challenges, dreams. In: Briand, A., Wolf, L. (Eds.), Future of Software Engineering. IEEE-CS Press, IEEE Computer Society, Washington, DC, USA.

Buyya, R., Yeo, C.S., Venugopal, S., Broberg, J., Brandic, I., 2009. Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility. Future Generation Computer Systems 25, 599–616.

Collective intelligence Wikipedia, http://en.wikipedia.org/wiki/Collective_intelligence (last accessed 01.12.11).

CPLEX, IBM ILOG CPLEX Optimizer, http://www-01.ibm.com/software/integration/optimization/cplex-optimizer/ (last accessed 01.12.11).

Crowdsourcing Wikipedia, http://en.wikipedia.org/wiki/Crowdsourcing (last accessed 01.12.11).

Deng, Y., Frankl, P., Wang, J., 2004. Testing web database applications. SIGSOFT Software Engineering Notes 29 (5), 1–10.

Elbaum, S., Rothermel, G., Karre, S., Fisher II, M., 2005. Leveraging user session data to support web application testing. IEEE Transactions on Software Engineering 31 (3), 187–202, March.

Garey, M.R., Johnson, D.S., 1979. In: Klee, V. (Ed.), Computers and Intractability, a Guide to the Theory of NP-Completeness. Freeman, New York.

Hedberg, H., 2004. Introducing the next generation of software inspection tools. In: Product Focused Software Process Improvement (LNCS 3009), pp. 234–247.

Held, M., Blochinger, W., 2009. Structured collaborative workflow design. Future Generation Computer Systems 25, 638–653.

Homma, K., Izumi, S., Takahashi, K., Togashi, A., 2011. Modeling, verification and testing of web applications using model checker. IEICE Transactions on Information & Systems E94-D (5), May.

Howe, J., 2006. The rise of crowdsourcing. Wired Magazine 14 (06), 17–23.

Jeffrey, D., Gupta, N., 2005. Test suite reduction with selective redundancy. In: Proceedings of the 21st IEEE International Conference on Software Maintenance, pp. 549–558.

Leon, D., Masri, W., Podgurski, A., 2005. An empirical evaluation of test case filtering techniques based on exercising complex information flows. In: Proceedings of the 27th International Conference on Software Engineering, pp. 412–421.

Liu, C.-H., Kung, D.C., Hsia, P., 2000. Object-based data flow testing of web applications. In: Proceedings of the First Asia-Pacific Conference on Quality Software, pp. 7–16.

Low, G.G., Jeffery, D.R., 1990. Function points in the estimation and evaluation of the software process. IEEE Transactions on Software Engineering 16 (1), 64–71.

Lucca, G.A.D., Fasolino, A.R., 2006. Testing web-based applications: the state of the art and future trends. Information and Software Technology 48, 1172–1186.

Macdonald, F., Miller, J., 1999. A comparison of computer support systems for software inspection. Automated Software Engineering 6 (3), 291–313.

Miao, H., Qian, Z., Song, B., 2008. Towards automatically generating test paths for Web application testing. In: International Symposium on Theoretical Aspects of Software Engineering, 2nd IFIP/IEEE.

Open Source Web Applications with Source Code in ASP, JSP, PHP, Perl, ColdFusion, ASP.NET C#, http://www.gotocode.com/ (last accessed 01.12.11).

Pearson, K., 1987. Mathematical contributions to the theory of evolution. On the law of ancestral heredity. Proceedings of the Royal Society of London 62, 386–412.

Ricca, F., Tonella, P., 2001. Analysis and testing of web applications. In: Proceedings of the 23rd International Conference on Software Engineering, Toronto, ON, Canada, pp. 25–34.

Ricca, F., Tonella, P., 2006. Detecting anomaly and failure in web applications. IEEE MultiMedia 13 (2), 44–51.

Riungu, L.M., Taipale, F.O., Smolander, K., 2010. Research issues for software testing in the cloud. In: Third International Conference on Software Testing, Verification, and Validation Workshops (ICSTW 2010).

Shahriar, H., Zulkernine, M., 2011. Trustworthiness testing of phishing websites: a behavior model-based approach. Future Generation Computer Systems, http://dx.doi.org/10.1016/j.future.2011.02.001.

Shukla, S.V., Redmiles, D.F., 1996. Collaborative learning in a software bug-tracking scenario. In: Workshop on Approaches for Distributed Learning through Computer Supported Collaborative Learning, Boston, MA.

Souza, C.R., Quirk, S., Trainer, E., Redmiles, D.F., 2007. Supporting collaborative software development through the visualization of socio-technical dependencies. In: Proceedings of the 2007 International ACM Conference on Supporting Group Work. ACM, New York, NY, USA.

TopCoder, http://www.topcoder.com/ (last accessed 01.12.11).

Tsai, W.T., Chen, Y., Paul, R., Liao, N., Huang, H., 2004. Cooperative and group testing in verification of dynamic composite web services. In: Proceedings of the 28th Annual Int. Computer Software and Applications Conf. (COMPSAC 2004), vol. 2, pp. 170–173.

Utest, What we test, http://www.utest.com/what-we-test (last accessed 01.12.11).

Wang, M., Yuan, J., Miao, H., Tan, G., 2008. A static analysis approach for automatic generating test cases for Web applications. In: International Conference on Computer Science and Software Engineering.

West, A.G., Chang, J., Venkatasubramanian, K.K., Lee, I., 2011. Trust in collaborative web applications. Future Generation Computer Systems, http://dx.doi.org/10.1016/j.future.2011.02.007.

Weyers, B., Luther, W., Baloian, N., 2011. Interface creation and redesign techniques in collaborative learning scenarios. Future Generation Computer Systems 27, 127–138.

Whitehead, J., 2007. Collaboration in software engineering: a roadmap. In: Future of Software Engineering (FOSE '07).

Yuan-Hsin Tung is working toward the Ph.D. degree in the Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan. He received a BS degree in Transportation Communication Management from the National Cheng-Kung University in Taiwan, and an MS in Information Management from the National Sun Yat-Sen University in Taiwan. He has been an Associate Researcher at the Test Center of Telecommunication Labs of Chunghwa Telecom in Taoyuan, Taiwan, since 2000. His main research interests are cloud computing, knowledge engineering, data-mining technology, software testing, and software engineering.

Shian-Shyong Tseng received his Ph.D. degree in Computer Engineering from the National Chiao Tung University in 1984. He has been with the Department of Computer and Information Science at National Chiao Tung University since 1983. From 1988 to 1991, he was the director of the Computer Center, National Chiao Tung University. From 1991 to 1992 and 1996 to 1998, he acted as the chairman of the Department of Computer and Information Science. From 1992 to 1996, he was the Director of the Computer Center at the Ministry of Education and the Chairman of the Taiwan Academic Network (TANet) management committee. In December 1999, he founded the Taiwan Network Information Center (TWNIC) and was the chairman of the board of directors of TWNIC from 2000 to 2005. He is now the Dean of the College of Computer Science, Asia University. His current research interests include data mining, expert systems, computer algorithms, and Internet-based applications.
