
CHAPTER 2 RELATED WORK

2.3 Related Technologies

2.3.8 Web Services

Web services, in the general meaning of the term, are services offered via the Web.

It is a consequence of a natural evolution. Over time, applications have become loosely coupled and split into multiple components. This has allowed an application to be distributed across many different machines, so that the resources of multiple computers can be brought to bear on it. To better understand why Web services are needed, and where the WIS solution discussed in later sections fits in, the evolution of distributed computing and component technology is reviewed here.

The architecture that grew out of this need became known as the client-server model. This model involves a central server that contains the database or other central data store that all clients access. The client handles the user-interface screens and some or all of the business logic before sending the data to the server. This frees the server to concentrate on data storage and access, while making full use of the resources of the client machines. But there are problems with this approach. The major obstacle is the maintenance effort required when new applications are shipped out to hundreds or thousands of desktops. The requirements associated with these fat-client applications are an obstacle to deploying them successfully: if all the required DLLs and library files are not installed and registered correctly, the application will not function. Another problem is that scaling up to a large number of users can become extremely costly. Fat clients consume database connections and other resources in a way that makes it difficult to increase the number of client machines without dramatically increasing server power.

The solution to this architectural problem came with the broad acceptance of the World Wide Web. Web applications could now be built so that the only requirement for the client was an Internet browser. These applications became known as thin clients, because they used far fewer resources on the client machine. Web applications are built using dynamic Web pages accessing a database or middle-tier components.

In this model, the client machine is used less, because the required resources for HTML pages are much less than for a standard application that runs locally.

The final result is a model that allows for the distribution of the application at the server level. A Web server is used to process all HTTP requests, while one or more application servers can be used to run and manage all middle-tier components. A database server contains the actual data store for the application. With so many tiers to the application, the total work is spread across many computers.

There are many benefits to this model, as the huge growth in Internet companies and applications in the last few years has shown. Clients are no longer forced to install an application locally; applications can be developed for large numbers of people and, when functionality has to be upgraded or replaced, the central Web server can be updated without ever having to change the client.

While this architecture is often used in building enterprise-level Web applications, it is not tied exclusively to the Web. This solution, the n-tier model, is designed to be flexible and able to support any type of user interface, from dynamic Web pages to pocket devices.

At the same time that the client-server revolution was taking place, interest in component technology was also growing. In the beginning, developers used simple code reuse. This involved sharing common functions and subroutines. This had many problems, one being that if code was not designed very carefully to be portable to another project, it was not very useful and had to be re-implemented.

When the world moved to the object-oriented programming model, the class was expected to be the solution to this problem. Object-oriented programming hides the implementation of a class in private methods, allowing a client to access only the header, or definition, file. For enterprise developers this still presented many problems as they tried to carry classes forward from one project into new ones. They found that differences between compilers, the need for access to the uncompiled source code, and dependence on a specific programming language made this form of reuse very difficult to achieve.

The next solution lay in components. The concept behind this technology was interface-based programming. A component would publish a well-defined interface that could be used to interact with it. This interface was a contract that was guaranteed to remain in place. Other developers could, therefore, develop against these interfaces, confident that future changes to the component would not break their code.

The component interfaces were defined in a binary standard, giving developers the choice of different programming languages for the component and the client. For Microsoft, there are COM and DCOM; for Java, there are JavaBeans. Both are very successful solutions.

Web services are not necessarily going to replace component development; components make a lot of sense in an intranet solution where the environment is controlled, and it does not make sense to expose purely internal objects through a less efficient Web service interface. Web services make interoperating easy and effective, but they are not as fast as a binary proprietary protocol such as DCOM.

The problem with components lies in distributing them across the Internet. Before Web services, if a company created a great COM component for performing some functionality, they could sell and distribute that COM component. Your company might buy this component, install it on every server that needed the functionality, and use the component in your custom solution. With Web services this model changes.

Now the third-party vendor exposes a Web service to provide the functionality previously offered by the component. Your company accesses this Web service in your custom application and you no longer need to install any component on your servers. As upgrades are made to the functionality of the Web service, you have access to them immediately, because the Web service is centrally located. Instead of having to redistribute new components to all servers when upgrades are required, nothing has to be done to take advantage of changes in the internal working of the Web service.

This new model allows for “functionality reuse”. This is a fundamentally new concept. It is not interface-based, though it uses many of the concepts related to interfaces. It is not object-oriented, although systems can be built using object-oriented and component-oriented concepts. What functionality-reuse programming allows is the ability to use other systems to perform specific functionality in your application. This allows you to concentrate on the real business problems, while taking advantage of third-party expertise and experience in those areas you choose to access via Web services. Instead of forcing developers to choose between certain technologies when looking for functionality, Web services allow them to choose the correct functionality, not the correct technology. This is because only the interface is defined; the application performing the actual functionality can be written in any language. This frees architects and developers by allowing them to choose functionality based solely on the requirements of the system, not on technological constraints.

A Web service is accessible through standard Internet protocols such as HTTP. This means that any client can use the Internet to make RPC-like (Remote Procedure Call) calls to any server on the Internet and receive the response back as XML. The messages sent back and forth between the client and server are encoded in a special XML dialect called Simple Object Access Protocol, or SOAP for short. This protocol defines a standardized way of accessing functionality on remote machines.
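The shape of such an RPC-like call can be sketched with nothing but a standard XML library. The following Python sketch builds a SOAP envelope for a hypothetical stock-quote service; the Envelope/Body framing comes from the SOAP specification, while the method name, parameter, and namespace `urn:example:stockquote` are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Namespace defined by the SOAP 1.1 specification.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(method, params, service_ns):
    """Wrap a method call and its parameters in a SOAP envelope."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    # The element inside Body names the remote method being called.
    call = ET.SubElement(body, f"{{{service_ns}}}{method}")
    for name, value in params.items():
        param = ET.SubElement(call, f"{{{service_ns}}}{name}")
        param.text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# Hypothetical service namespace and method, for illustration only.
message = build_soap_request("GetQuote", {"symbol": "MSFT"},
                             "urn:example:stockquote")
print(message)
```

In practice this string would be sent as the body of an HTTP POST to the service's endpoint URL; the server's reply is another SOAP envelope carrying the XML-encoded result.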

The fundamental concept driving Web services is that clients and servers can use any technology, any language, and any device. These choices are left to the developer and the capabilities of the device; only the interface is explicitly defined for each service. The way every client accesses a Web service is the same: SOAP over HTTP. With Internet access built into everything these days, all that is needed to access sophisticated server applications is a basic XML text processor to encode and decode the SOAP messages.
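The decoding side really does require no more than a basic XML processor. As a sketch, the Python code below extracts the result values from a SOAP response using only the standard library; the response body shown is invented for illustration, and only the Envelope/Body framing is prescribed by SOAP.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

# A hypothetical response from the stock-quote service used above.
sample_response = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuoteResponse xmlns="urn:example:stockquote">
      <price>27.45</price>
    </GetQuoteResponse>
  </soap:Body>
</soap:Envelope>"""

def decode_soap_response(xml_text):
    """Return {local-name: text} for every leaf element inside the SOAP Body."""
    root = ET.fromstring(xml_text)
    body = root.find(f"{{{SOAP_NS}}}Body")
    result = {}
    for elem in body.iter():
        if len(elem) == 0 and elem is not body:  # leaf element = a result value
            local_name = elem.tag.split("}")[-1]  # strip the namespace prefix
            result[local_name] = elem.text
    return result

print(decode_soap_response(sample_response))  # → {'price': '27.45'}
```

Nothing in this client depends on what technology the server is built with; any platform that can parse XML and speak HTTP can consume the service.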

Many distributed component technologies, such as Distributed COM (DCOM), the Common Object Request Broker Architecture (CORBA), and Remote Method Invocation (RMI) for Java, work very well in an intranet environment. These technologies allow components to be invoked over network connections, thus facilitating distributed application development. Each of them works well in a pure environment, but none is very successful at interoperating with the other protocols.

Java components cannot be called using DCOM, and COM objects cannot be invoked via RMI. Attempting to use these technologies over the Internet presents an even more difficult problem: firewalls often block access to the required TCP/IP ports, and because the formats are proprietary, both the client and the server must be running compatible software.

The advantage of SOAP is that it is sent over HTTP. Most firewalls allow HTTP traffic so that end users can browse the Internet. Web services operate through the same ports in the firewall (port 80 for HTTP and port 443 for HTTPS), allowing server applications to remain securely protected while still exposing business functionality. SOAP is not a proprietary protocol; instead, it is an XML standard that defines the messages sent between the client and the Web service. Any Web service can therefore interact with, and be used from, any technology solution. This increases the ability to distribute systems without relying on a single technology such as DCOM, CORBA, or RMI.

What makes Web services so revolutionary is the application of Internet standards. All messages are sent via the HTTP protocol. The messages passing between Web services and clients are encoded in XML. How a request for a Web service is encoded is specified by the Simple Object Access Protocol, or SOAP for short. This standard, which was developed by a consortium of companies including Microsoft, IBM, DevelopMentor, and Userland Software, has been submitted to the W3C and is under review. SOAP messages are specified in a well-defined XML format.

SOAP makes it possible to access any Web service using well-known calls and responses. The actual system residing behind the Web service could be any proprietary system. As long as it sends and receives valid SOAP messages, any system can be called, and any system can call any Web service on the Internet.

In addition to SOAP, a handful of other standards are required to make Web services a viable solution:

• XML: This provides a standard and unified way to represent data and messages across all Web services.

• WSDL (Web Services Description Language): This specifies the interface of a Web service: what each method is called and what parameters it accepts and returns. From this document, the valid SOAP messages that can be sent to the Web service can be established.

• DISCO (Discovery Protocol): This acts as a pointer to all the Web services located on a particular Web site. It enables dynamic discovery of the Web services published by an organization.

• UDDI (Universal Description, Discovery and Integration): This acts as a central repository of available Web services. Applications and developers can access the UDDI registry to learn what Web services are available on the Internet.

Figure 2-10: The Web Service Solution
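The role WSDL plays for a client can be sketched concretely: from the WSDL document alone, a client can discover which operations a service exposes. The Python sketch below parses a hypothetical, minimal WSDL fragment; real WSDL documents additionally describe the messages, bindings, and endpoint addresses.

```python
import xml.etree.ElementTree as ET

# Namespace defined by the WSDL 1.1 specification.
WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"

# A deliberately minimal, hypothetical WSDL fragment for illustration.
sample_wsdl = """<?xml version="1.0"?>
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
             name="StockQuoteService">
  <portType name="StockQuotePortType">
    <operation name="GetQuote"/>
    <operation name="GetHistory"/>
  </portType>
</definitions>"""

def list_operations(wsdl_text):
    """Return the operation names declared in a WSDL portType."""
    root = ET.fromstring(wsdl_text)
    return [op.get("name")
            for op in root.iter(f"{{{WSDL_NS}}}operation")]

print(list_operations(sample_wsdl))  # → ['GetQuote', 'GetHistory']
```

From each operation's message definitions (omitted here), a client toolkit can then construct the exact SOAP envelopes the service will accept.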
