
Based on the execution results, it can be stated with confidence that both the server and the client systems work successfully and meet the proposed expectations. In particular, the server system can acquire audiovisual contents from various types of cameras and transcode them into MPEG-4 format in real time. It can provide streaming services to multiple clients, which may request either live-captured contents or stored media files, and the data transmission can take place over either UDP or TCP, as designated by the client. If the requested content is a stored media file, the server can stream it in a smoother manner, since the bandwidth smoothing technique is embedded into the server. Finally, the server is fully compliant with both the RTP and the RTSP protocols.

On the client side, the client system can connect to the server to request the desired media contents. Upon receiving MPEG-4 data, it can decode and render them in real time. Lastly, the proposed system is flexible enough that the user can even use Windows® Media Player® as the client program.

The implemented client and server systems seem to work fine until the measurements mentioned above are taken. The resulting inter-departure times are not equal to the expected values when the server is streaming to a single client, and the problem gets worse when the server is streaming to multiple clients, as described earlier. The possible reasons for this discrepancy are listed below (a minimal measurement sketch follows the list):

• The TCP and UDP socket kernel programs (Windows® Socket Library) do not function in the expected way. That is, when “send()” or “sendto()” is called to send a packet via TCP or UDP, respectively, the packet is not transmitted immediately.

• The bars in the histogram of the second case (Figure 48) spread out because:

- The socket kernel programs are too busy handling outbound packets destined for six clients. Therefore, packets cannot be injected into the network on time.

- The CPU is exhausted because too many packetizers (“StreamerletIs()” threads) execute concurrently, and as a result the “send()” or “sendto()” functions are not called at the exact times.
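
To make these hypotheses easier to check, the following is a minimal measurement sketch (not part of the actual server code) that timestamps consecutive “sendto()” calls with QueryPerformanceCounter; the destination address, port, packet size, and intended 10-millisecond spacing are placeholder values.

// Minimal sketch: timestamp consecutive sendto() calls to observe
// inter-departure times. Address, port, payload size, and spacing are
// placeholders. Note that the timestamp is taken when sendto() returns,
// not when the packet actually leaves the network card, which is exactly
// why the socket-buffering hypothesis above is hard to rule out.
#include <winsock2.h>
#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

int main() {
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    sockaddr_in dst = {};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5004);                       // placeholder RTP port
    dst.sin_addr.s_addr = inet_addr("192.168.0.10");  // placeholder client

    char payload[1400] = {0};                         // dummy RTP-sized packet
    LARGE_INTEGER freq, prev, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&prev);

    for (int i = 0; i < 100; ++i) {
        Sleep(10);  // intended 10 ms spacing between packets
        sendto(s, payload, sizeof(payload), 0, (sockaddr*)&dst, sizeof(dst));
        QueryPerformanceCounter(&now);
        double ms = 1000.0 * (now.QuadPart - prev.QuadPart) / freq.QuadPart;
        printf("inter-departure time: %.3f ms\n", ms);  // compare against 10 ms
        prev = now;
    }

    closesocket(s);
    WSACleanup();
    return 0;
}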

This problem may become more serious when wireless transmission comes into play, whether at the client side or the server side, because:

• The IEEE 802.11 DCF mechanism delays a station's transmission when the medium is in use. Thus, the inaccuracy of the inter-departure times will probably be enlarged.

• When multiple stations share the same medium (i.e., the air), the overall transmission rate may eventually saturate [23] and cannot reach the maximum transmission rate supported by IEEE 802.11. Consequently, the delayed inter-departure times might be extended even further.

Finally, the packet-based MVBA algorithm is successfully embedded into the server system. This feature is useful when streaming stored media files. However, since MVBA is not intended for online or real-time applications, such as sports events or video conferences, this feature cannot be used to improve the transmission of live-captured audiovisual contents.
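
The packet-based MVBA algorithm itself is specified in [24]; the sketch below only illustrates the feasibility constraint that any smoothed transmission plan for stored media must respect, namely that the cumulative amount of data transmitted never falls below what the client has consumed and never exceeds it by more than the client buffer size. The frame sizes, plan, and buffer size are placeholder values.

// Minimal sketch of the smoothing feasibility constraint behind MVBA-style
// algorithms: D(k) <= S(k) <= D(k) + b for every frame k, where D is
// cumulative consumption, S is cumulative transmission, and b is the client
// buffer size. All values below are placeholders.
#include <vector>
#include <cstdio>

bool IsFeasiblePlan(const std::vector<long>& frameBytes,   // per-frame sizes
                    const std::vector<long>& sendBytes,    // per-interval plan
                    long clientBufferBytes) {
    long consumed = 0, sent = 0;
    for (size_t k = 0; k < frameBytes.size(); ++k) {
        sent += sendBytes[k];
        consumed += frameBytes[k];
        if (sent < consumed) return false;                     // buffer underflow
        if (sent > consumed + clientBufferBytes) return false; // buffer overflow
    }
    return true;
}

int main() {
    std::vector<long> frames = {12000, 4000, 4000, 16000, 4000}; // bursty VBR trace
    std::vector<long> plan   = {12000, 8000, 8000, 8000, 4000};  // smoother plan
    std::printf("feasible: %d\n", IsFeasiblePlan(frames, plan, 16000));
    return 0;
}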

7 Conclusions

To reduce the time that people have to wait before they can start to enjoy multimedia contents, streaming technology has been developed and widely accepted. The mainstream streaming protocols are RTP and RTSP. The former is designed for streaming data with real-time characteristics and has a companion part named RTCP, which allows the reception quality to be monitored. The latter acts like a “network remote control”: it provides bi-directional communication between the server and the client. Many commercial streaming solutions that adopt these two protocols are available. However, they are incapable of acquiring media contents from various types of cameras and of smoothing bursty traffic.

In this work, a real-time interactive RTP/RTSP multimedia streaming monitoring system with a bandwidth smoothing technique is designed, developed, and implemented successfully. In short, unlike other commercial solutions that can merely stream stored video or audio, the proposed system can stream either live-captured audiovisual contents that are encoded into MPEG-4 format in real time or stored ones.

As shown, the server system is capable of acquiring captured images from various types of cameras. In addition, to smooth the burstiness of VBR contents, which may cause inefficient utilization of network resources, the proposed system is further enhanced with the packet-based MVBA smoothing algorithm, which can compute a smoother transmission plan. Moreover, in the client system, a simple and easy-to-use image capturing software interface is designed and implemented. With this interface, one can acquire the received media frames effortlessly and perform any kind of post-processing on them. For example, if automatic image processing functions are needed to allow automatic monitoring of unattended target zones, the system developer can obtain the received frames from the image capturing software interface without any difficulty and, in turn, feed them to the image processing functions in a graceful way.
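
The exact signatures of the image capturing software interface are not reproduced here, so the following is only a hypothetical sketch of how such an interface could hand each decoded frame to a user-supplied post-processing routine; all type and function names in it are illustrative, not the actual API.

// Hypothetical sketch of a frame-callback style image capturing interface.
// Names and signatures are illustrative only; the actual interface in the
// client system may differ.
#include <cstdint>
#include <cstdio>

struct DecodedFrame {
    int width;              // e.g. 320
    int height;             // e.g. 240
    const uint8_t* pixels;  // decoded full-color frame data
};

// User-supplied post-processing hook, e.g. an automatic monitoring routine.
typedef void (*FrameCallback)(const DecodedFrame& frame, void* userData);

class ImageCaptureInterface {
public:
    void SetCallback(FrameCallback cb, void* userData) {
        callback_ = cb;
        userData_ = userData;
    }
    // Called by the client's MPEG-4 decoder whenever a frame has been decoded.
    void OnFrameDecoded(const DecodedFrame& frame) {
        if (callback_) callback_(frame, userData_);
    }
private:
    FrameCallback callback_ = nullptr;
    void* userData_ = nullptr;
};

// Example hook: here it only reports the frame size, but it could just as
// well feed the frame to automatic image processing functions.
static void MonitorZone(const DecodedFrame& frame, void*) {
    std::printf("received %dx%d frame for post-processing\n",
                frame.width, frame.height);
}

int main() {
    static uint8_t dummy[320 * 240 * 3] = {};    // placeholder pixel data
    ImageCaptureInterface capture;
    capture.SetCallback(MonitorZone, nullptr);
    capture.OnFrameDecoded({320, 240, dummy});   // simulate one decoded frame
    return 0;
}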

Finally, the execution results are presented. These results indicate that the proposed system is implemented successfully, and that the goals of this work are achieved.

However, some issues do arise. The inter-departure times are not equal to the expected values, and this discrepancy becomes more obvious when more clients are present. Without a doubt, if an IEEE 802.11 wireless connection is in use at either the client side or the server side, this problem will be even more pronounced.

From the system designer’s point of view, it is worth concluding which factor limits the capability of the server system. If the main threads used by the server are divided into groups based on their functions, they can be sorted into the following units (a rough sketch of this thread layout is given after the list):

• Transcoding unit

- It includes the “Transcoder()” thread.

- The server system contains only a single instance of this unit.

- It is responsible for acquiring images from various types of cameras, transcoding these images into MPEG-4, and placing the resulting bitstreams into the buffer that the planning unit can access.

- The number of cameras this unit can support for real-time transcoding depends on the computation power of the server machine.

• Transmission unit

- It includes the “PacketScheduler()” thread.

- The server system contains only a single instance of this unit.

- It is responsible for handing over all the RTP packets destined for each connected client to the network card, which in turn injects these packets into the network.

• Planning unit

- It includes the “StreamerletIs()” and “RTSPProcessorThread()” threads.

- Whenever the server accepts a new connection from a client, it creates a dedicated instance of this unit to deal with all the RTP and RTSP packets destined for, or received from, that client.

- “StreamerletIs()” is in charge of packetizing frames, acquired either from the storage device or from the buffer where the transcoding unit places the MPEG-4 bitstreams, into RTP packets, and of putting these packets into the buffer that the transmission unit can access.

- “RTSPProcessorThread()” is in charge of exchanging RTSP messages with the client and performing the necessary RTSP state transitions.
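
As a rough illustration of how these units cooperate, the sketch below wires one “Transcoder()” thread, one “PacketScheduler()” thread, and one “StreamerletIs()”/“RTSPProcessorThread()” pair per client around two shared buffers; the buffer types and the empty thread bodies are placeholders, not the actual server implementation.

// Rough sketch of the server's thread layout. Thread names follow the text;
// the queue types and bodies are placeholders, not the real implementation.
#include <thread>
#include <vector>
#include <queue>
#include <mutex>

struct EncodedFrame {};  // MPEG-4 bitstream produced by the transcoding unit
struct RtpPacket   {};   // packet produced by a planning-unit packetizer

std::queue<EncodedFrame> frameBuffer;   // transcoding unit -> planning units
std::queue<RtpPacket>    packetBuffer;  // planning units -> transmission unit
std::mutex frameMutex, packetMutex;

void Transcoder() {                    // single instance: capture + MPEG-4 encode
    /* acquire camera images, encode them, push EncodedFrames into frameBuffer */
}

void StreamerletIs(int clientId) {     // one per client: packetize into RTP
    /* pop frames (or read a stored file), build RtpPackets, push them */
}

void RTSPProcessorThread(int clientId) {  // one per client: RTSP state machine
    /* exchange RTSP messages and perform state transitions */
}

void PacketScheduler() {               // single instance: hand packets to the NIC
    /* pop RtpPackets from packetBuffer and send() / sendto() them */
}

int main() {
    std::thread transcoder(Transcoder);
    std::thread scheduler(PacketScheduler);
    std::vector<std::thread> perClient;
    for (int client = 0; client < 3; ++client) {   // e.g. three connected clients
        perClient.emplace_back(StreamerletIs, client);
        perClient.emplace_back(RTSPProcessorThread, client);
    }
    transcoder.join();
    scheduler.join();
    for (auto& t : perClient) t.join();
    return 0;
}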

Obviously, among these three units, the most CPU-consuming one is the transcoding unit. This is because the MPEG-4 encoding task requires substantial computation power, especially its motion search part, which accounts for over eighty percent of the overall MPEG-4 computation. In other words, if the transcoding unit is not taken into consideration, the server can easily stream media data to more than ten clients concurrently: the transmission unit consumes little computation power, and even though the number of planning-unit instances grows with the number of clients, their low computational requirements mean the overall server performance is not affected greatly.

Therefore, the key factor that limits the server’s capability is the transcoding unit.

After numerous measurements, it is found that about 7 milliseconds on average are needed by the MPEG-4 encoder to encode a single 320x240 full-color frame. When images acquired from a single camera are encoded at a frame rate of 25 frames per second, which is equivalent to an inter-frame time of 40 milliseconds, the server system takes about 10 milliseconds of this period to perform transcoding and about 1 millisecond to transmit a group of RTP packets (around 5 to 6 packets) to each connected client. Thus, in this case, the server can support up to 30 clients simultaneously. If the server streams three different videos acquired from three cameras, it takes 30 milliseconds out of the 40-millisecond inter-frame time just to transcode the captured contents into MPEG-4, and the number of clients that the server can support decreases to 10.
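
These capacity figures follow from simple arithmetic: whatever remains of each 40-millisecond inter-frame period after transcoding can be spent sending one group of RTP packets (about 1 millisecond) to each client. The sketch below reproduces that calculation with the measured values quoted above.

// Back-of-the-envelope capacity check using the figures quoted in the text:
// 40 ms inter-frame time at 25 fps, ~10 ms of transcoding per camera, and
// ~1 ms to transmit one group of RTP packets to each connected client.
#include <cstdio>

int MaxClients(int interFrameMs, int camerasStreamed,
               int transcodeMsPerCamera, int sendMsPerClient) {
    int remaining = interFrameMs - camerasStreamed * transcodeMsPerCamera;
    return remaining > 0 ? remaining / sendMsPerClient : 0;
}

int main() {
    std::printf("1 camera : up to %d clients\n", MaxClients(40, 1, 10, 1)); // 30
    std::printf("3 cameras: up to %d clients\n", MaxClients(40, 3, 10, 1)); // 10
    return 0;
}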

In the future, the system may be enhanced by adding the following features to it:

• The feedback mechanism provided by RTCP can be added to the system so that the server can adapt its transmission behavior to the network conditions.

• The proposed packet-based MVBA smoothing algorithm is designed for stored-media applications and cannot be applied to live events such as live baseball games (i.e., cases where a small start-up delay is tolerable). Therefore, online smoothing techniques should be studied and realized in a future version of the system.

• For some applications, no noticeable delay is allowed at all. Thus, real-time smoothing algorithms should be researched and implemented in the upcoming system.

• Since RTSP is extensible, other useful functions can be added to the system. For example, for a client system that contains automatic image processing functions, if RTSP is extended to provide a means for the client to send messages to the server, then when the client detects that the monitored area is in danger, it can report the abnormal event to the server via the RTSP channel (a hypothetical message of this kind is sketched after this list). In turn, the server can either record the event or notify the related personnel.

• Since the current transcoding unit is CPU-intensive, the server system can be improved either by upgrading the CPU or by replacing the encoder with a more efficient one.
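
As a concrete illustration of the suggested RTSP extension, a client could report an abnormal event with the standard SET_PARAMETER method; the server address, session identifier, and the “alarm-event” parameter name below are invented for this sketch.

// Hypothetical RTSP SET_PARAMETER request a monitoring client might send to
// report an abnormal event. SET_PARAMETER is a standard RTSP method; the
// parameter name "alarm-event", the URL, and the session ID are placeholders.
#include <cstdio>

const char* kAlarmReport =
    "SET_PARAMETER rtsp://server.example.com/camera1 RTSP/1.0\r\n"
    "CSeq: 12\r\n"
    "Session: 47112344\r\n"
    "Content-Type: text/parameters\r\n"
    "Content-Length: 26\r\n"
    "\r\n"
    "alarm-event: zone-breach\r\n";

int main() {
    std::printf("%s", kAlarmReport);  // in the real client this string would be
    return 0;                         // sent over the existing RTSP channel
}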

References

[1] Jan Krikke, “Streaming video transforms the media industry,” IEEE Computer Graphics and Applications, vol. 24, no. 4, July/Aug. 2004.

[2] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson, “RTP: A Transport Protocol for Real-Time Applications”, Audio Visual Working Group Request for Comment RFC 3550, IETF, July 2003.

[3] H. Schulzrinne, A. Rao, R. Lanphier, M. Westerlund, and A. Narasimhan, “Real Time Streaming Protocol (RTSP),” Internet Draft (update to RFC 2326), Internet Engineering Task Force, October 27, 2003.

[4] M. Handley and V. Jacobson, “SDP: Session Description Protocol”, Request for Comments: 2327, Internet Engineering Task Force, April, 1998.

[5] D. Anderson, Y. Osawa, and R. Govindan, “A file system for continuous media,” ACM Trans. Comput. Syst., pp. 311–337, Nov. 1992.

[6] P. Lougher and D. Shepard, “The design of a storage server for continuous media,” Comput. J., vol. 36, pp. 32–42, Feb. 1993.

[7] D. Gemmell, J. Vin, D. Kandlur, P. Rangan, and L. Rowe, “Multimedia storage servers: A tutorial,” IEEE Comput. Mag., vol. 28, pp. 40–49, May 1995.

[8] I. Dalgic and F. A. Tobagi, “Performance evaluation of ATM networks carrying constant and variable bit-rate video traffic,” IEEE J. Select. Areas Commun., vol. 15, pp. 1115–1131, Aug. 1997.

[9] T. V. Lakshman, A. Ortega, and A. R. Reibman, “Variable bitrate (VBR) video: Tradeoffs and potentials,” Proc. IEEE, vol. 86, pp. 952–973, May 1998.

[10] O. Rose, “Statistical properties of MPEG video traffic and their impact on traffic modeling in ATM systems,” in Proc. Conf. Local Computer Networks, Oct. 1995, pp. 397–406.

[11] Wu-chi Feng and Jennifer Rexford, “Performance Evaluation of Smoothing Algorithms for Transmitting Prerecorded Variable-Bit-Rate Video,” IEEE Transactions on Multimedia, vol. 1, no. 3, September 1999.

[12] W. Feng and S. Sechrest, “Smoothing and buffering for delivery of prerecorded compressed video,” in Proc. IS&T/SPIE Symposium on Multimedia Computing and Networking, pp. 234–242, Feb. 1995.

[13] J. D. Salehi, Z.-L. Zhang, J. F. Kurose, and D. Towsley, “Supporting stored video: Reducing rate variability and end-to-end resource requirements through optimal smoothing,” IEEE/ACM Trans. Networking, vol. 6, pp. 397–410, Aug. 1998.

[14] W. Feng, F. Jahanian, and S. Sechrest, “Optimal buffering for the delivery of compressed prerecorded video,” ACM Multimedia System Journals, pp. 297–309, Sept. 1997.

[15] W. Feng, “Rate-constrained bandwidth smoothing for the delivery of stored video,” in Proc. IS&T/SPIE Multimedia Networking and Computing, Feb. 1997, pp. 58–66.

[16] J. M. McManus and K. W. Ross, “Video on demand over ATM: Constant-rate transmission and transport,” IEEE J. Selected Areas Communications, vol. 14, pp. 1087–1098, Aug. 1996.

[17] Jean M. McManus and Keith W. Ross, “A dynamic programming methodology for managing prerecorded VBR sources in packet-switched networks,” Telecommunication Systems, vol. 9, 1998.

[18] J. Zhang and J. Hui, “Traffic characteristics and smoothness criteria in VBR video transmission,” in Proc. IEEE Int. Conf. Multimedia Computing and Systems, June 1997.

[19] “DLLs, Processes, and Threads,” available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dllproc/base/dlls_processes_and_threads.asp.

[20] W. Richard Stevens, “Unix Network Programming, Networking APIs: Sockets and XTI”, Prentice Hall PTR, vol. 1, 2nd Edition, pp. 44-46, 1998.

[21] Ferguson P., Huston G., Quality of Service: Delivering QoS on the Internet and in Corporate Networks, John Wiley & Sons, Inc., 1998.

[22] “WinPcap, the Packet Capture and Network Monitoring Library for Windows,” available at http://www.winpcap.org/default.htm.

[23] Shihong Zou, Haitao Wu, and Shiduan Cheng, “A New Mechanism of Transmitting MPEG-4 Video in IEEE 802.11 Wireless LAN with DCF,” in Proc. IEEE ICCT, 2003.

[24] Chan-Wei Lin, “Smoothing Algorithm for Video Streaming in Packet Based Transmission System,” Master’s Thesis, Department of Communication Engineering, National Chiao Tung University.
