
DISCUSSIONS AND RELATED WORK

The subject addressed in this thesis relates our work to a wide variety of research areas in software engineering and programming languages. For example, concepts such as context and polymorphism have been foundations of programming language research since the field's inception. Here we concentrate on handling software complexity, or more specifically on controlling the coupling and cohesion of software modules.

Complexity handling has been the central theme of fundamental software engineering principles documented over the years; [1] organizes and explains many of them systematically. Our work differs from this more traditional approach in that, instead of basing the discussion on general notions of software modules or on concrete code fragments and their dependencies, we develop an abstract model that tries to capture the principles in a more structural and semantic manner. We also deliberately narrow the scope of our evaluation criteria and focus on the notion of maximal reusability. While we believe this approach to complexity handling gives software developers a simpler roadmap for guiding their design activities, it does not replace the need to understand those well-established software engineering principles.

Existing component models and technologies often emphasize mechanisms and abstractions for component specification and assembly. In contrast, we attempt to capture the essence of components and their reuse orientation across different levels of abstraction. We are more interested in how to apply the component principles in any OO language, and believe that discipline around interfaces and black-box reuse is not sufficient; like the encapsulation and inheritance principles and mechanisms supported by OO languages, it is necessary but too primitive. Our approach focuses more on the static structure of software designs and provides a natural organization scheme.

There are also extensive research efforts directly targeted at the complexity and implications of various kinds of component correlation. For example, [8] discusses the problem of extraneous embedded knowledge – a component knowing information not conceptually required for what it does – and proposes the use of implicit context to hide this information while communicating with the component. In [9], language constructs are proposed for specifying collaboration interfaces that support bidirectional communication among components. A component can declare both its provided and expected services in its interface, much like the domain and polymorphism in our model, except that the emphasis is placed more on complex forms of collaboration. With support at the language level, the binding mechanism between two components can be simplified, avoiding the excessive wrappers also pointed out there. In [10], various forms of (primitive) object-oriented composition are analyzed and classified (e.g., overriding, transparent redirection, acquisition, subtyping, and polymorphism). The paper proposes language support for expressing these different types of composition linguistically, which coincides with our objective despite the difference in approach.
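The idea of a component declaring both its provided and its expected services can be sketched in plain Java. This is our own illustrative encoding with hypothetical names, not the dedicated language constructs proposed in [9]:

```java
// A collaboration interface split into two plain Java interfaces:
// what the component provides, and what it expects from its
// environment. All names here are hypothetical illustrations.

interface Provided {               // services offered to collaborators
    String render(String key);
}

interface Expected {               // services the component relies on
    String fetch(String key);
}

// The component implements its provided side and is parameterized by
// its expected side, so both directions of the collaboration are
// explicit in its declaration and no wrapper layer is needed.
class Formatter implements Provided {
    private final Expected env;

    Formatter(Expected env) { this.env = env; }

    @Override
    public String render(String key) {
        return "[" + env.fetch(key) + "]";
    }
}

public class CollaborationDemo {
    public static void main(String[] args) {
        Expected store = key -> key.toUpperCase(); // stub environment
        Provided comp = new Formatter(store);
        System.out.println(comp.render("abc"));    // prints [ABC]
    }
}
```

Binding the component to a different environment only requires supplying another `Expected` implementation; the component itself is untouched.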

Complexity handling is also a main subject of the reengineering and reverse engineering communities, which have produced a wealth of research results. Many approaches provide visualization tools that let software developers analyze dependencies and other program structures. For example, a recent approach [7] proposes a way of visualizing the internal structure of classes, called the class blueprint, and provides guidelines and hints for spotting bad designs. Although the objective is similar, namely to help developers evaluate and thereby improve designs, we argue that approaches in this direction often gloss over important information embedded in the various artifacts, such as the roles those artifacts play in the design. Although this may be all we can expect given the raw materials reverse engineering tools have to work with, we believe our work can pave the way toward more semantics-unveiling techniques in this area.

One current trend in refactoring research concerns when and where refactoring should be done.

The starting point here, as mentioned before, is Fowler's classic book on refactoring [22]. It lists 22 bad smells in an informal manner, each with a short description and a set of suggested refactorings to improve the code. Tool support is needed to make this human intuition efficient and effective. The tool presented in [27] provides a generic, distance-based cohesion approach to generate visualizations that help the developer identify candidates for refactoring. To produce the distance matrix, the user creates a repository for the whole project and then extracts the relevant symbols, such as classes and methods, into the tool, which is implemented as a relational database. The user selects the classes to be analyzed; after the distances are calculated and rearranged, the tool computes positions and other information from the repository and displays them with a VRML client. The TTCN-3 environment [28] is an Eclipse-based development environment from Motorola that provides suitable metrics and refactorings to enable the assessment and automatic restructuring of test suites. It uses Eclipse's refactoring wizard, which displays a preview of all resulting changes, and Eclipse's Java Development Tools (JDT) to parse Java source code into ASTs for metric calculation.
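A minimal sketch can illustrate what a distance-based cohesion measure looks like. This is our own simplification, not the implementation of the tool in [27]: one common choice is the Jaccard distance between the sets of class attributes each method uses, so methods touching disjoint attributes are maximally distant:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of a distance-based cohesion measure: the Jaccard distance
// between the sets of attributes two methods use. Illustrative only;
// the tool in [27] may use a different distance function.
public class CohesionDistance {

    // d(A, B) = 1 - |A ∩ B| / |A ∪ B|; 0 = identical usage, 1 = disjoint.
    static double distance(Set<String> a, Set<String> b) {
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        if (union.isEmpty()) return 0.0;   // two empty methods: identical
        Set<String> inter = new HashSet<>(a);
        inter.retainAll(b);
        return 1.0 - (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        Set<String> m1 = Set.of("balance", "owner");
        Set<String> m2 = Set.of("balance", "rate");
        Set<String> m3 = Set.of("logFile");
        System.out.println(distance(m1, m2)); // share 1 of 3 attributes
        System.out.println(distance(m1, m3)); // disjoint: 1.0
    }
}
```

Feeding such pairwise distances into a layout algorithm yields the kind of visualization [27] produces: methods that cluster together are cohesive, while outliers are refactoring candidates.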

This system is in many ways similar to ours. However, it neither supports bad smells directly and intuitively, nor provides further explanations of the refactoring methods it suggests or of where the bad smells occur. Our tool lists bad smells sorted by priority (based on their corresponding metrics after proper normalization), so that the more “urgent” design flaws are shown before the rest.
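The priority ordering described above can be sketched as follows. The smell names, metric values, and the min-max normalization are illustrative assumptions, not the exact scheme used in our tool:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: sort detected bad smells by a normalized metric so the
// most "urgent" design flaws appear first. Smell names, metric
// values, and the min-max normalization are illustrative only.
public class SmellRanking {

    record Smell(String name, double metric) {}

    // Min-max normalize each metric to [0, 1], then sort in
    // descending order of the normalized score.
    static List<Smell> rank(List<Smell> smells) {
        double min = smells.stream().mapToDouble(Smell::metric).min().orElse(0);
        double max = smells.stream().mapToDouble(Smell::metric).max().orElse(1);
        double range = (max - min == 0) ? 1 : max - min;
        List<Smell> ranked = new ArrayList<>();
        for (Smell s : smells) {
            ranked.add(new Smell(s.name(), (s.metric() - min) / range));
        }
        ranked.sort(Comparator.comparingDouble(Smell::metric).reversed());
        return ranked;
    }

    public static void main(String[] args) {
        List<Smell> ranked = rank(List.of(
            new Smell("Long Method", 120),
            new Smell("Feature Envy", 45),
            new Smell("Large Class", 300)));
        for (Smell s : ranked) {
            System.out.printf("%s (%.2f)%n", s.name(), s.metric());
        }
        // Large Class first (score 1.00), Feature Envy last (0.00)
    }
}
```

Normalizing before sorting keeps smells whose raw metrics live on very different scales (line counts versus coupling degrees, say) comparable in a single priority list.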
