Chapter 7: Quality Attributes

J.D. Meier, Alex Homer, David Hill, Jason Taylor, Prashant Bansode, Lonnie Wall, Rob Boucher Jr, Akshay Bogawat

Objectives

  • Learn the key quality attributes, and how they apply to applications.
  • Learn the key issues, decisions, and techniques associated with each quality attribute.

Overview

Quality attributes are the cross-cutting concerns that affect run-time behavior, system design, and user experience. They are important for the overall usability, performance, reliability, and security of software applications. The quality of an application is measured by the extent to which it possesses a desired combination of these attributes. When designing an application to meet any quality attribute requirement, it is necessary to consider the potential impact on other requirements: you must analyze the tradeoffs between multiple quality attributes. The importance or priority of each quality attribute differs from system to system; for example, in a line of business (LOB) system, performance, scalability, security, and usability will be more important than interoperability, while in a packaged application, interoperability will be very important.

How to Use This Chapter

This chapter lists and describes the quality attributes that you must consider when you design your application. To get the most out of this chapter, first review the objectives and overview above, and then use the table in the section How the Quality Attributes are Organized to gain an understanding of how quality attributes map to system and application quality factors. Next, look at the Quality Attribute Frame table, which describes each of the quality attributes. Finally, for each quality attribute, review the list of key issues for that attribute, the decisions you must make to address those issues, and the key techniques you can use to implement solutions for that attribute. Keep in mind that the list of quality attributes in this chapter is not exhaustive, but it provides a good starting point for asking appropriate questions about your architecture.

How the Quality Attributes are Organized

Quality attributes represent areas of concern that have the potential for application-wide impact across layers and tiers. Some of these attributes are related to the overall system design, while others are specific to run-time, design-time, or user-centric issues. Use the following table to gain an understanding of the quality attributes and the scenarios they are most likely to affect.

Type: Quality attributes
System Qualities: Supportability, Testability
Run-time Qualities: Availability, Interoperability, Manageability, Performance, Reliability, Scalability, Security
Design Qualities: Conceptual Integrity, Flexibility, Maintainability, Reusability
User Qualities: User Experience / Usability

Quality Attribute Frame

The following table describes the quality attributes covered in this chapter. Use this table to understand what each of the quality attributes means in terms of your application design.

Quality attribute Description
Availability Availability defines the proportion of time that the system is functional and working. It can be measured as the percentage of total time that the system is operational over a predefined period. Availability is affected by system errors, infrastructure problems, malicious attacks, and system load.
Conceptual Integrity Conceptual integrity defines the consistency and coherence of the overall design. This includes the way that components or modules are designed, as well as factors such as coding style and variable naming.
Flexibility Flexibility is the ability of a system to adapt to varying environments and situations, and to cope with changes in business policies and rules. A flexible system is one that is easy to reconfigure or adapt in response to different user and system requirements.
Interoperability Interoperability is the ability of diverse components of a system or different systems to operate successfully by exchanging information, often by using services. An interoperable system makes it easier to exchange and reuse information internally as well as externally.
Maintainability Maintainability is the ability of a system to undergo changes to its components, services, features, and interfaces as may be required when adding or changing the functionality, fixing errors, and meeting new business requirements.
Manageability Manageability defines how easy it is to manage the application, usually through sufficient and useful instrumentation exposed for use in monitoring systems and for debugging and performance tuning.
Performance Performance is an indication of the responsiveness of a system to execute any action within a given time interval. It can be measured in terms of latency or throughput. Latency is the time taken to respond to any event. Throughput is the number of events that take place within a given amount of time.
Reliability Reliability is the ability of a system to remain operational over time. Reliability is measured as the probability that a system will not fail to perform its intended functions over a specified time interval.
Reusability Reusability defines the capability for components and subsystems to be suitable for use in other applications and in other scenarios. Reusability minimizes the duplication of components and also the implementation time.
Scalability Scalability is the ability of a system to function well when there are changes to the load or demand. Typically, the system will be able to be extended over more powerful or more numerous servers as demand and load increase.
Security Security defines the ways that a system is protected from disclosure or loss of information, and the possibility of a successful malicious attack. A secure system aims to protect assets and prevent unauthorized modification of information.
Supportability Supportability defines how easy it is for operators, developers, and users to understand and use the application, and how easy it is to resolve errors when the system fails to work correctly.
Testability Testability is a measure of how easy it is to create test criteria for the system and its components, and to execute these tests in order to determine if the criteria are met. Good testability makes it more likely that faults in a system can be isolated in a timely and effective manner.
Usability Usability defines how well the application meets the requirements of the user and consumer by being intuitive, easy to localize and globalize, and able to provide good access for disabled users and a good overall user experience.

Availability

Availability defines the proportion of time that the system is functional and working. It can be measured as the percentage of total time that the system is operational over a predefined period. Availability is affected by system errors, infrastructure problems, malicious attacks, and system load. Use the techniques listed below to maximize availability for your application.

Key Issues

  • A physical tier such as the database server or application server can fail or become unresponsive, causing the entire system to fail.
  • Security vulnerabilities can allow Denial of Service (DoS) attacks, which prevent authorized users from accessing the system.
  • Inappropriate use of resources can reduce availability. For example, resources acquired too early and held for too long cause resource starvation and an inability to handle additional concurrent user requests.
  • Bugs or faults in the application can cause a system-wide failure.
  • Frequent updates, such as security patches and user application upgrades, can reduce the availability of the system.
  • A network fault can cause the application to be unavailable.

Key Decisions

  • How to design failover support related to different tiers in the system.
  • How to decide if there is a need for a geographically separate redundant site to failover to in case of natural disasters such as earthquakes or tornados.
  • How to design for run-time upgrades.
  • How to design for proper exception handling in order to reduce application failures.
  • How to handle unreliable network connections.

Key Techniques

  • Use Network Load Balancing (NLB) for Web servers in order to distribute the load and prevent requests from being sent to a server that is down.
  • Use a Redundant Array of Independent Disks (RAID) to mitigate system failure in the event that a disk fails.
  • Deploy the system at geographically separate sites and balance requests across all sites that are available. This is an example of advanced networking design.
  • To minimize security vulnerabilities, reduce the attack surface area, identify malicious behavior, use application instrumentation to expose unintended behavior, and implement comprehensive data validation.
  • Design clients with occasionally connected capabilities, such as a rich client (see the sketch following this list).
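
The following is a minimal Python sketch (the technique itself is platform-neutral) of the occasionally connected approach in the last bullet: requests that cannot reach the service are queued locally and replayed later, so a network or server outage does not translate directly into lost work. The class name and endpoint are hypothetical.

    import queue
    import urllib.request
    import urllib.error

    class OccasionallyConnectedClient:
        """Queues outgoing requests locally and replays them when the service is reachable."""

        def __init__(self, endpoint):
            self.endpoint = endpoint          # hypothetical service URL
            self.pending = queue.Queue()      # local store for requests made while offline

        def send(self, payload: bytes):
            try:
                self._post(payload)
            except urllib.error.URLError:
                # Service is unreachable: keep the request so it is not lost.
                self.pending.put(payload)

        def replay_pending(self):
            """Call periodically (or when connectivity returns) to drain the local queue."""
            while not self.pending.empty():
                payload = self.pending.get()
                try:
                    self._post(payload)
                except urllib.error.URLError:
                    self.pending.put(payload)  # still offline; stop and retry later
                    break

        def _post(self, payload: bytes):
            req = urllib.request.Request(self.endpoint, data=payload, method="POST")
            with urllib.request.urlopen(req, timeout=5) as response:
                return response.read()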

Conceptual Integrity

Conceptual integrity defines the consistency and coherence of the overall design. This includes the way that components or modules are designed, as well as factors such as coding style and variable naming. A coherent system makes it easier to resolve issues because you know what is consistent with the overall design. Conversely, a system without conceptual integrity suffers from constantly changing interfaces, frequently deprecated modules, and inconsistency in how tasks are performed.

Key Issues

  • Mixing different areas of concern together within your design.
  • Lack of a development process, or inconsistent use of one.
  • Lack of collaboration and communication between the different groups involved in the application lifecycle.
  • Lack of design and coding standards.
  • Existing (legacy) system demands that prevent both refactoring and progression toward a new platform or paradigm.

Key Decisions

  • How to identify areas of concern and group them into logical layers.
  • How to manage the development process.
  • How to facilitate collaboration and communication throughout the application lifecycle.
  • How to establish and enforce design and coding standards.
  • How to create a migration path away from legacy technologies.
  • How to isolate applications from external dependencies.

Key Techniques

  • Use published guidelines to help identify areas of concern and group them into logical layers within the design.
  • Perform an Application Lifecycle Management (ALM) assessment.
  • Establish a development process integrated with tools to facilitate process workflow, communication, and collaboration.
  • Establish published guidelines for design and coding standards.
  • Incorporate code reviews into your development process to ensure guidelines are being followed.
  • Use the Gateway design pattern for integration with legacy systems.
  • Provide documentation to explain the overall structure of the application.

Flexibility

Flexibility is the ability of a system to adapt to varying environments and situations, and to cope with changes in business policies and rules. A flexible system is one that can be easily modified in response to different user and system requirements.

Key Issues

  • The code base is large, unmanageable, and fragile.
  • Refactoring is burdensome due to regression requirements for a large and growing code base.
  • The existing code is over-complex.
  • The same logic is implemented in many different ways.

Key Decisions

  • How to handle dynamic business rules, such as changes related to authorization, data, or process.
  • How to handle a dynamic user interface (UI), such as changes related to authorization, data, or process.
  • How to respond to changes in data and logic processing.
  • How to ensure that components and services have well-defined responsibilities and relationships.

Key Techniques

  • Use business components to implement the rules if only the business rule values tend to change.
  • Use an external source, such as a business rules engine, if the business decision rules tend to change (see the sketch following this list).
  • Use a business workflow engine if the business process tends to change.
  • Design systems as well-defined layers, or areas of concern, that clearly delineate the system’s UI, business processes, and data access functionality.
  • Design components to be cohesive and loosely coupled to maximize flexibility and facilitate replacement and reusability.
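
As a concrete illustration of the first two techniques, the Python sketch below (with a hypothetical discount_rules.json file) keeps the rule logic in a business component while loading the rule values from an external source, so the values can change without recompiling or redeploying the component. A full rules engine or workflow engine would go further, but the separation principle is the same.

    import json

    # discount_rules.json is a hypothetical external source of business rule values, e.g.:
    #   { "gold_discount": 0.15, "silver_discount": 0.05, "free_shipping_threshold": 100 }

    class DiscountRules:
        """Business component whose rule *values* come from an external source,
        so they can change without recompiling or redeploying the component."""

        def __init__(self, path="discount_rules.json"):
            with open(path) as f:
                self.values = json.load(f)

        def discount_for(self, customer_tier: str) -> float:
            return self.values.get(f"{customer_tier}_discount", 0.0)

        def ships_free(self, order_total: float) -> bool:
            return order_total >= self.values["free_shipping_threshold"]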

Interoperability

Interoperability is the ability of diverse components of a system, or of different systems, to operate successfully by exchanging information, often by using services. An interoperable system allows you to exchange and reuse information internally as well as externally. Communication protocols, interfaces, and data formats are the key considerations for interoperability. Standardization is also an important consideration when designing an interoperable system.

Key Issues

  • Interaction with external or legacy systems that use different data formats.
  • Boundary blurring, which allows artifacts from one layer, tier, or system to diffuse into another.

Key Decisions

  • How to handle different data formats from external or legacy systems.
  • How to enable systems to interoperate while evolving separately or even being replaced.
  • How to isolate systems through the use of service interfaces.
  • How to isolate systems through the use of mapping layers.

Key Techniques

  • Use orchestration with adaptors to connect with external or legacy systems and translate data between systems.
  • Use a canonical data model to handle interaction with a large number of different data formats (see the sketch following this list).
  • Expose services using interfaces based on XML or standard types in order to support interoperability with other systems.
  • Design components to be cohesive and have low coupling in order to maximize flexibility and facilitate replacement and reusability.
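
The sketch below illustrates the adapter-plus-canonical-model idea in Python, assuming a hypothetical legacy XML format; the element names (CUST_NO, CUST_NAME, EMAIL_ADDR) are invented for the example. Downstream components depend only on the canonical type, so supporting another external format means writing another adapter rather than changing business logic.

    from dataclasses import dataclass
    from xml.etree import ElementTree

    @dataclass
    class CanonicalCustomer:
        """Canonical data model: one internal representation, regardless of source system."""
        customer_id: str
        name: str
        email: str

    class LegacyCrmAdapter:
        """Adapter that translates a hypothetical legacy XML format into the canonical model."""

        def parse(self, xml_text: str) -> CanonicalCustomer:
            root = ElementTree.fromstring(xml_text)
            return CanonicalCustomer(
                customer_id=root.findtext("CUST_NO"),
                name=root.findtext("CUST_NAME"),
                email=root.findtext("EMAIL_ADDR"),
            )

    # Usage: business components work only with CanonicalCustomer.
    legacy_xml = "<CUSTOMER><CUST_NO>42</CUST_NO><CUST_NAME>Ada</CUST_NAME><EMAIL_ADDR>ada@example.com</EMAIL_ADDR></CUSTOMER>"
    customer = LegacyCrmAdapter().parse(legacy_xml)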

Maintainability

Maintainability is the ability of a system to undergo changes to its components, services, features, and interfaces as may be required when adding or changing functionality, fixing bugs, and meeting new business requirements. Maintainability can be measured in terms of the time it takes to restore the system to operational status following a failure or removal from operation for an upgrade. Improving system maintainability increases efficiency and reduces run-time defects.

Key Issues

  • Excessive dependencies between components and layers prevent easy replacement, updates, and changes.
  • Use of direct communication prevents changes to the physical deployment of components and layers.
  • Reliance on custom implementations of features such as authentication and authorization prevents reuse and hampers maintenance.
  • Mixing the implementation of cross-cutting concerns with application-specific components makes maintenance harder and reuse difficult.
  • Components are not cohesive, which makes them difficult to replace and causes unnecessary dependencies on child components.

Key Decisions

  • How to reduce dependencies between components and layers.
  • How to implement a pluggable architecture that allows easy upgrades and maintenance, and improved testing capabilities.
  • How to separate the functionality for cross-cutting concerns from application-specific code.
  • How to choose an appropriate communication model, format, and protocol.
  • How to create cohesive components.

Key Techniques

  • Design systems as well-defined layers, or areas of concern, that clearly delineate the system’s UI, business processes, and data access functionality.
  • Design components to be cohesive and have low coupling in order to maximize flexibility and facilitate replacement and reusability.
  • Design interfaces that allow the use of plug-in modules or adapters to maximize flexibility and extensibility (see the sketch following this list).
  • Provide good architectural documentation to explain the structure of the application.
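
A pluggable design can be illustrated with a small Python sketch; the PaymentProvider interface and provider names are hypothetical. The high-level component depends only on the abstraction, so providers can be replaced, upgraded, or stubbed out for testing without modifying the component that uses them.

    from abc import ABC, abstractmethod

    class PaymentProvider(ABC):
        """Plug-in interface: the application depends only on this abstraction."""

        @abstractmethod
        def charge(self, amount: float, account: str) -> bool: ...

    class InvoiceProvider(PaymentProvider):
        def charge(self, amount: float, account: str) -> bool:
            print(f"Invoicing {account} for {amount:.2f}")
            return True

    class CheckoutService:
        """High-level component; swapping providers requires no change here."""

        def __init__(self, provider: PaymentProvider):
            self.provider = provider

        def complete_order(self, amount: float, account: str) -> bool:
            return self.provider.charge(amount, account)

    # A new provider (card processor, test stub, etc.) plugs in without touching CheckoutService.
    service = CheckoutService(InvoiceProvider())
    service.complete_order(19.99, "ACME-001")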

Manageability

Design your application to be easy to manage, by exposing sufficient and useful instrumentation for use in monitoring systems and for debugging and performance tuning.

Key Issues

  • Lack of diagnostic information
  • Lack of troubleshooting tools
  • Lack of performance and scale metrics
  • Lack of tracing ability
  • Lack of health monitoring

Key Decisions

  • How to enable the system behavior to change based on operational environment requirements, such as infrastructure or deployment changes.
  • How to enable the system behavior to change at run time based on system load; for example, by queuing requests and processing them when the system is available.
  • How to create a snapshot of the system’s state to use for troubleshooting.
  • How to monitor aspects of the system’s operation and health.
  • How to create custom instrumentation to provide detailed operational reports.
  • How to discover details of the requests sent to the system.

Key Techniques

  • Consider creating a health model that defines the significant state changes that can affect application performance, and use this model to specify management instrumentation requirements.
  • Implement instrumentation, such as events and performance counters, that detects state changes, and expose these changes through standard systems such as Event Logs, Trace files, or Windows Management Instrumentation (WMI) (see the sketch following this list).
  • Capture and report sufficient information about errors and state changes in order to enable accurate monitoring, debugging, and management.
  • Consider creating management packs that administrators can use in their monitoring environments to manage the application.
  • Consider monitoring the health of your application, or of specific functions, for debugging.
  • Consider logging and auditing information that may be useful for maintenance and debugging, such as request details or module outputs and calls to other systems and services.
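
The sketch below shows, in schematic Python form, the kind of instrumentation the second bullet describes: counters that a monitoring system can scrape, state-change events written to a standard logging channel, and a point-in-time snapshot for troubleshooting. On the Microsoft platform the equivalents would be performance counters, Event Logs, or WMI; the counter and event names here are hypothetical.

    import logging
    import threading
    import time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("orders")   # stands in for an Event Log or WMI provider

    class Instrumentation:
        """Minimal counters plus state-change events that a monitoring system can read."""

        def __init__(self):
            self._lock = threading.Lock()
            self.counters = {"requests": 0, "failures": 0}

        def record(self, name: str):
            with self._lock:
                self.counters[name] += 1

        def snapshot(self) -> dict:
            """Point-in-time health snapshot for troubleshooting."""
            with self._lock:
                return dict(self.counters, timestamp=time.time())

    instr = Instrumentation()

    def process_order(order_id: str):
        instr.record("requests")
        log.info("state_change order=%s status=received", order_id)
        try:
            # ... application work ...
            log.info("state_change order=%s status=completed", order_id)
        except Exception:
            instr.record("failures")
            log.exception("state_change order=%s status=failed", order_id)
            raise

    process_order("A-100")
    print(instr.snapshot())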

Performance

Performance is an indication of the responsiveness of a system to execute specific actions in a given time interval. It can be measured in terms of latency or throughput. Latency is the time taken to respond to any event. Throughput is the number of events that take place in a given amount of time. Factors affecting system performance include the demand for a specific action and the system’s response to the demand.

Key Issues

  • Increased client response time, reduced throughput, and server resource over-utilization.
  • Increased memory consumption, resulting in reduced performance, cache misses, and increased data store access.
  • Increased database server processing may cause reduced throughput.
  • Increased network bandwidth consumption may cause delayed response times, and increased load for client and server systems.
  • Inefficient queries, or fetching all of the data when only a portion is displayed, may incur unnecessary load on the database server, failure to meet performance objectives, and costs in excess of budget allocations.
  • Poor resource management can result in the creation of multiple instances of resources, with the corresponding connection overhead, and can increase the application’s response time.

Key Decisions

  • How to determine a caching strategy.
  • How to design high-performance communication between layers.
  • How to choose effective types of transactions, locks, threading, and queuing.
  • How to structure the application.
  • How to manage resources effectively.

Key Techniques

  • Choose the appropriate remote communication mechanism.
  • Design coarse-grained interfaces that require the minimum number of calls (preferably just one) to execute a specific task.
  • Minimize the amount of data sent over the network.
  • Batch work to reduce calls over the network (see the sketch following this list).
  • Reduce transitions across boundaries.
  • Consider asynchronous communication.
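
The effect of coarse-grained, batched calls can be seen in a small Python sketch that fakes a remote service with a fixed per-call latency; FakePricingService and its 50 ms delay are invented for the example. Twenty fine-grained calls pay the round-trip cost twenty times, while the batched call pays it once.

    import time

    class FakePricingService:
        """Stand-in for a remote service; each call simulates one network round trip."""
        LATENCY = 0.05  # 50 ms per round trip

        def fetch_one(self, product_id):
            time.sleep(self.LATENCY)
            return {"id": product_id, "price": 9.99}

        def fetch_many(self, product_ids):
            time.sleep(self.LATENCY)                      # one round trip for the whole batch
            return [{"id": p, "price": 9.99} for p in product_ids]

    service = FakePricingService()
    ids = range(20)

    start = time.perf_counter()
    fine_grained = [service.fetch_one(p) for p in ids]    # 20 round trips
    chatty = time.perf_counter() - start

    start = time.perf_counter()
    coarse_grained = service.fetch_many(ids)              # 1 round trip
    batched = time.perf_counter() - start

    print(f"chatty: {chatty:.2f}s  batched: {batched:.2f}s")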

Reliability

Reliability is the ability of a system to continue operating as expected over time. Reliability is measured as the probability that a system will not fail and that it will perform its intended function for a specified time interval. Improving the reliability of a system may lead to a more secure system because it helps to prevent the types of failures that a malicious user may exploit.

Key Issues

  • System may crash.
  • System becomes unresponsive at times.
  • Output is inconsistent.
  • The system fails because external dependencies, such as other systems, networks, or databases, are unavailable.

Key Decisions

  • How to handle unreliable external systems.
  • How to detect failures and automatically initiate a failover.
  • How to redirect load under extreme circumstances.
  • How to take the system offline but still queue pending requests.
  • How to handle failed communications.
  • How to handle failed transactions.

Key Techniques

  • Implement instrumentation, such as events and performance counters, that detects poor performance or failures of requests sent to external systems, and expose information through standard systems such as Event Logs, Trace files, or WMI.
  • Log performance and auditing information about calls made to other systems and services.
  • Consider implementing configuration settings that change the way the application works, such as using a different service, failing over to another system, or accessing a spare or backup system should the usual one fail.
  • Consider implementing code that uses alternative systems when it detects a specific number of failed requests to an existing system (see the sketch following this list).
  • Implement store-and-forward or cached message-based communication systems that allow requests to be stored when the target system is unavailable, and replayed when it is online.
  • Consider using Windows Message Queuing or Microsoft BizTalk® Server to provide a reliable once-only delivery mechanism for asynchronous requests.
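
A minimal Python sketch of the "switch to an alternative system after repeated failures" technique is shown below; FailoverCaller and the stand-in systems are hypothetical, and a production implementation would also need timeouts and a way to probe whether the primary has recovered (as in the circuit-breaker pattern).

    class FailoverCaller:
        """Routes requests to a primary system and, after a configurable number of
        consecutive failures, switches to an alternative system."""

        def __init__(self, primary, fallback, failure_threshold=3):
            self.primary = primary              # callables standing in for external systems
            self.fallback = fallback
            self.failure_threshold = failure_threshold
            self.consecutive_failures = 0

        def call(self, request):
            if self.consecutive_failures >= self.failure_threshold:
                return self.fallback(request)   # primary considered down; use the spare system
            try:
                result = self.primary(request)
                self.consecutive_failures = 0
                return result
            except Exception:
                self.consecutive_failures += 1
                return self.fallback(request)

    # Usage with stand-in systems:
    def flaky_primary(request):
        raise ConnectionError("primary unavailable")

    caller = FailoverCaller(flaky_primary, fallback=lambda r: f"fallback handled {r}")
    print(caller.call("order-1"))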

Reusability

Reusability is the degree to which a component can be used in other components or scenarios to add new functionality with little or no change. Reusability minimizes the duplication of components and also the implementation time. Identifying the common attributes of various components is the first step in building small reusable components within a larger system.

Key Issues

  • Using different code or components to achieve the same result in different places.
  • Using multiple similar methods instead of parameters to implement tasks that vary slightly.
  • Using several systems to implement the same feature or function.

Key Decisions

  • How to reduce duplication of similar logic in multiple components.
  • How to reduce duplication of similar logic in multiple layers or subsystems.
  • How to reuse functionality in another system.
  • How to share functionality across multiple systems.
  • How to share functionality across different subsystems within an application.

Key Techniques

  • Examine the application design to identify cross-cutting concerns such as validation, logging, and authentication, and implement these functions as separate components (see the sketch following this list).
  • Examine the application design to identify common functionality, and implement this functionality in separate components that you can reuse.
  • Consider exposing functionality from components, layers, and subsystems through service interfaces that other layers and systems can use.
  • Consider using platform-agnostic data types and structures that can be accessed and understood on different platforms.
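
As a small illustration of factoring a cross-cutting concern into a reusable component, the Python sketch below centralizes e-mail validation so that both a front-end function and a back-office import job reuse the same logic; the function names and the (deliberately simple) e-mail pattern are hypothetical.

    import re

    # A single, reusable validation component instead of ad hoc checks scattered through the code.
    EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def require_email(value: str) -> str:
        """Shared cross-cutting validation used by the UI layer, the service layer, and batch jobs."""
        if not EMAIL_PATTERN.match(value or ""):
            raise ValueError(f"invalid email address: {value!r}")
        return value

    # Reused from two different subsystems:
    def register_user(email: str):             # web front end
        return {"email": require_email(email)}

    def import_customers(rows):                 # back-office import job
        return [require_email(r["email"]) for r in rows]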

Scalability

Scalability is the ability of a system to function well as load or demand changes. Typically, the system should be able to handle increases in size or volume. The aim is to maintain the system's availability, reliability, and performance even when the load increases. There are two methods for improving scalability: scaling vertically (scaling up) and scaling horizontally (scaling out). To scale vertically, you add more resources, such as CPU, memory, and disks, to a single system. To scale horizontally, you add more machines to the farm that serves the application.

Key Issues

  • Applications cannot handle increasing load.
  • Users incur delays in response and longer completion times.
  • The system fails.
  • The system cannot queue excess work and process it during periods of reduced load.

Key Decisions

  • How to design layers and tiers for scalability.
  • How to scale up or scale out an application.
  • How to scale the database.
  • How to scale the UI.
  • How to handle spikes in traffic and load.

Key Techniques

  • Avoid stateful components and subsystems where possible to reduce server affinity.
  • Consider locating layers on the same physical tier to reduce the number of servers required while maximizing load-sharing and failover capabilities.
  • Consider implementing configuration settings that change the way the application works, such as using a different service, failing over to another system, or accessing a spare or backup system in case the usual system fails.
  • Consider implementing code that uses alternative systems when it detects a specific number of failed requests to an existing system.
  • Consider implementing code that uses alternative systems when it detects a predefined service load or a number of pending requests to an existing system.
  • Implement store-and-forward or cached message-based communication systems that allow requests to be stored when the target system is unavailable, and replayed when it is online.
  • Consider partitioning data across more than one database server to maximize scale-up opportunities and allow flexible location of data subsets.
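
The last technique, partitioning data across database servers, can be sketched as a key-to-shard mapping; the shard names below are placeholders, and a simple hash-modulo scheme like this one requires data redistribution when shards are added, so consistent hashing or a lookup/directory service is often used instead.

    import hashlib

    # Hypothetical shard identifiers; in practice these would point at separate database servers.
    SHARDS = [
        "db-server-0/customers",
        "db-server-1/customers",
        "db-server-2/customers",
    ]

    def shard_for(customer_id: str) -> str:
        """Deterministically map a customer to one shard so data and load are spread across servers."""
        digest = hashlib.sha1(customer_id.encode()).hexdigest()
        index = int(digest, 16) % len(SHARDS)
        return SHARDS[index]

    print(shard_for("customer-42"))   # always routes this customer to the same shard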

Security

Security is the capability of a system to prevent the disclosure or loss of information and to withstand malicious attack. Securing a system aims to protect assets and to prevent unauthorized modification of information. The factors affecting system security are confidentiality, integrity, and availability. Features used to secure systems include authentication, encryption, auditing, and logging.

Key Issues

  • Spoofing of user identity
  • Tampering with data
  • Repudiation
  • Information disclosure
  • Denial of service (DoS)

Key Decisions

  • How to address authentication and authorization.
  • How to protect against malicious input.
  • How to protect sensitive data.
  • How to protect against SQL injection.
  • How to protect against cross-site scripting.

Key Techniques

  • Identify the trust boundaries, and authenticate and authorize users crossing a trust boundary.
  • Validate input for length, range, format, and type using the constrain, reject, and sanitize principles, and encode output (see the sketch following this list).
  • Do not reveal sensitive system or application information.
  • Use application instrumentation to expose behavior that can be monitored.
  • Partition the site into anonymous, identified, and authenticated users.
  • Reduce session timeouts.
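
A minimal sketch of the constrain-reject-sanitize and output-encoding techniques is shown below in Python; the whitelist pattern and function names are illustrative only, and the closing comment points at parameterized queries as the standard defense against SQL injection.

    import html
    import re

    USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,20}$")    # constrain: length, range, format, type

    def validate_username(value: str) -> str:
        if not USERNAME_PATTERN.match(value):
            raise ValueError("username rejected")               # reject anything outside the whitelist
        return value

    def render_greeting(display_name: str) -> str:
        # Encode output so user-supplied text cannot inject markup (cross-site scripting).
        return f"<p>Hello, {html.escape(display_name)}</p>"

    print(render_greeting("<script>alert('xss')</script>"))

    # For SQL, prefer parameterized queries over string concatenation, e.g. with sqlite3:
    #   cursor.execute("SELECT * FROM users WHERE name = ?", (name,))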

Supportability

Supportability is the ease with which operators, developers, and users can understand and use the application, and can diagnose and resolve problems when the system fails to work correctly.

Key Issues

  • Lack of diagnostic information
  • Lack of troubleshooting tools
  • Lack of performance and scale metrics
  • Lack of tracing ability
  • Lack of health monitoring

Key Decisions

  • How to monitor system activity.
  • How to monitor system performance.
  • How to implement tracing.
  • How to provide troubleshooting support.
  • How to design auditing and logging.

Key Techniques

  • Consider a system monitoring application, such as Microsoft System Center.
  • Use performance counters to monitor system performance.
  • Enable tracing in Web applications in order to troubleshoot errors.
  • Use common components to provide tracing support in code.
  • Use Aspect Oriented Programming (AOP) techniques to implement tracing.
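
The last two techniques, a common tracing component applied in an AOP-like way, can be sketched as a Python decorator; the traced function and log category names are hypothetical. Wrapping functions once keeps entry, exit, duration, and error traces consistent instead of scattering trace statements through the code.

    import functools
    import logging
    import time

    logging.basicConfig(level=logging.DEBUG)
    trace_log = logging.getLogger("trace")

    def traced(func):
        """Common tracing component: wraps any function with entry/exit/duration logging."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            trace_log.debug("enter %s args=%s kwargs=%s", func.__name__, args, kwargs)
            start = time.perf_counter()
            try:
                result = func(*args, **kwargs)
                trace_log.debug("exit %s in %.1f ms", func.__name__, (time.perf_counter() - start) * 1000)
                return result
            except Exception:
                trace_log.exception("error in %s", func.__name__)
                raise
        return wrapper

    @traced
    def calculate_shipping(weight_kg: float) -> float:
        return 4.50 + 1.25 * weight_kg

    calculate_shipping(2.0)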

Testability

Testability is a measure of how well a system or its components allow you to create test criteria and execute tests to determine whether those criteria are met. Good testability allows faults in a system to be isolated in a timely and effective manner.

Key Issues

  • Complex applications with many processing permutations are not tested consistently.
  • Automated or granular testing cannot be performed because the application has a monolithic design.
  • Lack of test planning.
  • Poor test coverage—manual as well as automated.
  • Input inconsistencies; for the same input, the output is not the same.
  • Output inconsistencies—output does not fully cover the output domain, even though all known variations of input are provided.

Key Decisions

  • How to ensure an early start to testing during the development life cycle.
  • How to automate user interaction tests.
  • How to handle test automation and detailed reporting for highly complex functionality, rules, or calculations.
  • How to separately test each layer or tier.
  • How to make it easy to specify and understand system inputs and outputs to facilitate the construction of test cases.
  • How to clearly define component and communication interfaces.

Key Techniques

  • Use mock objects during testing (see the sketch following this list).
  • Construct simple, structured solutions.
  • Design systems to be modular to support testing.
  • Provide instrumentation or implement probes for testing.
  • Provide mechanisms to debug output and ways to specify inputs easily.
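
The mock-object technique is sketched below with Python's unittest.mock; the billing and mailer collaborators are hypothetical. Because the code under test receives its dependencies as parameters, the test can replace them with mocks, verify the interaction, and run without any real billing system or mail server.

    import unittest
    from unittest import mock

    def notify_overdue(account_id, billing_service, mailer):
        """Code under test: its collaborators are passed in, which makes mocking easy."""
        balance = billing_service.get_balance(account_id)
        if balance > 0:
            mailer.send(account_id, f"You owe {balance:.2f}")
            return True
        return False

    class NotifyOverdueTests(unittest.TestCase):
        def test_sends_mail_when_balance_is_positive(self):
            billing = mock.Mock()
            billing.get_balance.return_value = 25.0        # mock replaces the real billing system
            mailer = mock.Mock()

            self.assertTrue(notify_overdue("A-1", billing, mailer))
            mailer.send.assert_called_once_with("A-1", "You owe 25.00")

    if __name__ == "__main__":
        unittest.main()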

User Experience / Usability

The application interfaces must be designed with the user and consumer in mind so that they are intuitive, can be localized and globalized, provide access to disabled users, and provide a good overall user experience.

Key Issues

  • Too much interaction (excessive number of “clicks”) is required for a task.
  • There is an incorrect flow to the interface.
  • Data elements and controls are poorly grouped.
  • Feedback to the user is poor, especially for errors and exceptions.
  • The application is unresponsive.

Key Decisions

  • How to leverage effective interaction patterns.
  • How to determine user experience acceptance criteria.
  • How to improve responsiveness for the user.
  • How to determine the most effective UI technology.
  • How to enhance the visual experience.

Key Techniques

  • Design the screen and input flows and user interaction patterns to maximize ease of use.
  • Incorporate workflows where appropriate to simplify multi-step operations.
  • Choose appropriate control types (such as option groups and check boxes) and lay out controls and content using the accepted UI design patterns.
  • Implement technologies and techniques that provide maximum user interactivity, such as Asynchronous JavaScript and XML (AJAX) in Web pages and client-side input validation.
  • Use asynchronous techniques for background work, such as populating controls or performing long-running operations, so that the UI remains responsive (as shown in the sketch following this list).
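
The asynchronous technique in the final bullet is sketched below in Python using a thread pool; load_report and the show_status/show_result callbacks stand in for a slow operation and for whatever UI update mechanism the application uses. The event handler returns immediately, so the interface stays responsive while the work completes in the background.

    import concurrent.futures
    import time

    def load_report(report_id):
        time.sleep(2)                       # stands in for a slow query or remote call
        return f"report {report_id} ready"

    executor = concurrent.futures.ThreadPoolExecutor(max_workers=2)

    def on_button_click(report_id, show_status, show_result):
        """UI event handler: start the long-running work in the background and return at once;
        a callback updates the screen when the work finishes."""
        show_status("Loading...")
        future = executor.submit(load_report, report_id)
        future.add_done_callback(lambda f: show_result(f.result()))

    on_button_click(7, show_status=print, show_result=print)
    time.sleep(3)   # keep the script alive long enough for the callback to fire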
