November 21, 2010

Cellular System Design

Kansas City has the following population and spectrum band:
  • Population - 50,000 – 500,000
  • Bandwidth - 900 MHz – 905 MHz, i.e. 5 MHz
  • GSM carrier separation - 200 kHz

This system is designed as a 7-cell cluster system.

The following assumptions are made in order to decide the number of base stations needed.
  • Aimed Grade of Service - 2%
  • Call duration - 5 min
  • Reference distance from base station - 1 m
  • Transmitter power - 1 mW
  • Receiver power - -100 dBm
  • Path loss exponent - 3

A group of N cells which collectively use the complete set of available frequencies is called a cluster.

Cluster Size
If i = 2 and j = 1, then
N = i² + ij + j²
N = 2² + (2 × 1) + 1²
N = 4 + 2 + 1
N = 7
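
As a quick sanity check (not part of the original notes), a few lines of Python can enumerate the cluster sizes the formula admits for small shift parameters i and j; 7 is one of the valid values, which is why 7-cell clusters are so common:

    # Valid cluster sizes N = i^2 + i*j + j^2 for small shift parameters i, j.
    sizes = sorted({i*i + i*j + j*j for i in range(4) for j in range(4)} - {0})
    print(sizes)  # [1, 3, 4, 7, 9, 12, 13, 19, 27]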

Co-Channel Reuse Distance

The reuse distance D2 is estimated here as the distance at which the transmitted power decays to the receiver threshold, assuming the simple power-law path loss model P2 = P1 × (D1 / D2)^n, with:

  • D1 - 1 m
  • P1 - 1 mW
  • P2 - -100 dBm
  • n (path loss exponent) - 3
  • D2 - ?

Converting P2 to mW:

10 log10(P) = -100
log10(P) = -100 / 10 = -10
P = 10^-10 mW


Cell Radius

Solving the path loss model for D2: 10^-10 = 1 × (1 / D2)^3, so D2 = 10^(10/3) ≈ 2154 m.

Assuming the standard hexagonal-geometry relation D / R = sqrt(3N) with N = 7, the cell radius is roughly R = 2154 / sqrt(21) ≈ 470 m.
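The two calculations above can be reproduced with a short Python sketch. It assumes the simple power-law path loss model and the hexagonal D/R = sqrt(3N) relation used above; the variable names are illustrative only:

    P1_MW  = 1.0      # transmit power, mW
    P2_DBM = -100.0   # receiver power threshold, dBm
    D1_M   = 1.0      # reference distance, m
    n      = 3        # path loss exponent
    N      = 7        # cells per cluster

    p2_mw  = 10 ** (P2_DBM / 10)                # -100 dBm -> 1e-10 mW
    d2     = D1_M * (P1_MW / p2_mw) ** (1 / n)  # reuse distance, ~2154 m
    radius = d2 / (3 * N) ** 0.5                # D / R = sqrt(3N) -> ~470 m
    print(round(d2), round(radius))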

Number of Carriers Available = Bandwidth / Carrier separation
                             = 5 MHz / 200 kHz
                             = 25

Assuming each cell gets an equal number of carriers,

Number of Carriers per Cell = Number of Carriers Available / 7
                            = 25 / 7
                            = 3.571
                            = 3 (rounding down)

Therefore,

Number of Channels per Cell = Number of Carriers per Cell × 8
                            = 3 × 8
                            = 24

Two channels are reserved for the transmission of control signals. Therefore,

Number of Usable Channels per Cell = Number of Channels per Cell - 2
                                   = 24 - 2
                                   = 22



Busy Hour Traffic

  • Aimed Grade of Service - 2%
  • Usable Channels per Cell - 22

Therefore, from the Erlang B table,

Busy Hour Traffic = 14.90 Erlang
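
The 14.90 Erlang figure matches what the Erlang B formula gives for 22 channels at 2% blocking. A minimal Python sketch, using the standard recurrence B(0) = 1, B(m) = A·B(m-1) / (m + A·B(m-1)) and a bisection search for the offered traffic:

    def erlang_b(traffic: float, channels: int) -> float:
        """Blocking probability for `traffic` Erlangs offered to `channels` trunks."""
        b = 1.0
        for m in range(1, channels + 1):
            b = traffic * b / (m + traffic * b)
        return b

    def offered_traffic(channels: int, gos: float) -> float:
        """Largest offered traffic (Erlangs) whose blocking stays within `gos`."""
        lo, hi = 0.0, 2.0 * channels   # blocking at `hi` comfortably exceeds any sane GoS
        for _ in range(60):            # bisection; erlang_b is increasing in traffic
            mid = (lo + hi) / 2
            if erlang_b(mid, channels) < gos:
                lo = mid
            else:
                hi = mid
        return lo

    print(round(offered_traffic(22, 0.02), 2))   # -> ~14.9 Erlang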

Number of Users per Cell

Assuming each user makes one call of 5 minutes' duration during the busy hour,

Traffic = Number of users per cell × traffic per user
14.9 E = N × (5 / 60)
N = (14.9 × 60) / 5
N = 178.8

Number of Users per Cell = 178 (approximately)

Therefore, a cell can accommodate 178 users.

Number of Base Stations Required = Total Population / Number of Users per Cell
                                 = 500,000 / 178
                                 = 2808.98
                                 = 2809 (approximately)
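
Chaining all the steps together, a small illustrative Python script reproduces the full dimensioning chain from bandwidth to base station count, with the constants taken from the assumptions above:

    import math

    BANDWIDTH_HZ      = 5_000_000  # 905 MHz - 900 MHz
    CARRIER_SEP_HZ    = 200_000    # GSM carrier separation
    CLUSTER_SIZE      = 7
    SLOTS_PER_CARRIER = 8          # GSM TDMA time slots per carrier
    CONTROL_CHANNELS  = 2
    BUSY_HOUR_TRAFFIC = 14.9       # Erlang, from the Erlang B step above
    CALL_MINUTES      = 5
    POPULATION        = 500_000

    carriers_per_cell = (BANDWIDTH_HZ // CARRIER_SEP_HZ) // CLUSTER_SIZE          # 25 // 7 = 3
    usable_channels   = carriers_per_cell * SLOTS_PER_CARRIER - CONTROL_CHANNELS  # 22
    users_per_cell    = int(BUSY_HOUR_TRAFFIC / (CALL_MINUTES / 60))              # 178
    base_stations     = math.ceil(POPULATION / users_per_cell)                    # 2809
    print(usable_channels, users_per_cell, base_stations)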


The competitive outlook of the industry has to go beyond cost cutting and find underlying ways to improve efficiency while delivering value-added services for mobile end users.

In the GSM 900 system there are 125 channels in both the uplink and the downlink, and these channels span the available GSM 900 bandwidth. Frequency is a scarce resource in a GSM system, so its reuse must be carefully planned. The frequency reuse factor is defined as the number of base stations that can be deployed between the current base station and the next one using the same frequency. Antenna height also influences the reuse factor: the higher the antenna, the farther its signal reaches and the greater the interference it can cause. Frequency planning is done using one of the previously mentioned optimization algorithms, by setting an adequate cost function that maximizes the capacity of the network while minimizing the number of frequency sub-bands used.

When considering spectrum management, no single reform in isolation can solve the increasing pressures placed on it. However, in combination with regulatory, process-oriented and technical reforms, economic methods of spectrum management can help to create an improved spectrum management system. There are several methods of spectrum pricing.

The simplest method, administrative cost recovery pricing, already adopted in many countries, is based on an estimate of the funding required to recover the yearly costs incurred by the government agency managing the spectrum resource. A number of options for spectrum price determination based on system performance have also been developed. The price can be built up from separate elements based on any or all of various criteria, such as the amount of spectrum used, the number of channels or links used, the degree of congestion, the efficiency of the radio equipment, transmitter power and coverage area, geographical location, and so forth. The basic principle of this approach is to identify technical parameters that measure the spectrum volume used, or that define the "pollution area" of a radio system, as a common basis for establishing spectrum fees. Another method bases the spectrum fee on the costs of spectrum refarming. The so-called "differential rent" spectrum price exploits the difference between equipment costs for systems providing the same service but using different spectrum ranges.

The aims of differential rent spectrum prices are to establish equal opportunities in the market among all operators using different bands and access media, and to stimulate operators to use higher frequency ranges or alternative wired technology, resulting in more efficient spectrum use. Such a method can only succeed to the extent that suitable alternative frequencies or technologies exist.

 

Network Dimensioning

What is Network Dimensioning?

The dimensioning exercise is to identify the equipment and the network type (i.e. technology employed) required in order to cater for the coverage and quality requirements, apart from seeing that capacity needs are fulfilled for the next few years (generally 3–5 years).[1] In other words, network dimensioning is based on the coverage and capacity requirements. The main objective is to optimize the network in a cost effective way.



System Design & Network Dimensioning

Network planning and design is an iterative process, encompassing topological design, network-synthesis, and network-realization, and is aimed at ensuring that a new network or service meets the needs of the subscriber and operator.[1] The process can be tailored according to each new network or service.[2]
This is an extremely important process which must be performed before the establishment of a new telecommunications network or service.
The network planning process consists of several phases, which can be grouped into main phases that differ depending on the planning logic.

The preplanning phase covers the assignments and preparation before the actual network planning is started. The network planning criteria are agreed with the customer. As specified earlier, the requirements depend on many factors, the main criteria being the coverage and quality targets. Several limitations also exist, such as the limited frequency band and the investment budget. The priorities among the planning parameters come from the customer; because the network plan cannot be optimized with regard to all parameters at once, these priorities need to be agreed with the customer throughout the whole process.
The network planning criteria are used as an input for network dimensioning. The following are listed as the basic inputs for dimensioning (a toy sketch of how two of them combine follows the list):
  • coverage requirements, the signal level for outdoor, in-car and indoor with the coverage probabilities;
  • quality requirements, drop call rate, call blocking;
  • frequency spectrum, number of channels, including information about possible needed guard bands;
  • subscriber information, number of users and growth figures;
  • traffic per user, busy hour value;
  • services
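
As a toy illustration (the numbers are invented, not from any source), the Python sketch below shows how two of these inputs interact: dimensioning must satisfy both the coverage-driven and the capacity-driven site count, so the larger of the two wins:

    import math

    AREA_KM2        = 800      # assumed planning area
    CELL_AREA_KM2   = 0.7      # per-site coverage from the link budget (assumed)
    SUBSCRIBERS     = 200_000  # subscriber forecast (assumed)
    TRAFFIC_PER_SUB = 0.025    # Erlang per user in the busy hour (assumed)
    SITE_CAPACITY_E = 14.9     # Erlang per site at the target GoS (assumed)

    coverage_sites = math.ceil(AREA_KM2 / CELL_AREA_KM2)
    capacity_sites = math.ceil(SUBSCRIBERS * TRAFFIC_PER_SUB / SITE_CAPACITY_E)
    print(max(coverage_sites, capacity_sites))   # the binding constraint decides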
During the process of network planning and design, it is necessary to estimate the expected traffic intensity and thus the traffic load that the network must support.[2] If a network of a similar nature already exists, then it may be possible to take traffic measurements of such a network and use that data to calculate the exact traffic load.[3] However, as is more likely in most instances, if there are no similar networks to be found, then the network planner must use telecommunications forecasting methods to estimate the expected traffic intensity.[2]
The forecasting process involves several steps as follows:[2]
  • Definition of problem
  • Data acquisition
  • Choice of forecasting method
  • Analysis/Forecasting
  • Documentation and analysis of results

Necessity & Importance

The purpose of dimensioning a new network is to determine the minimum capacity requirements that will still allow the Teletraffic Grade of Service (GoS) requirements to be met.[2][3] To do this, dimensioning involves planning for peak-hour traffic, i.e. that hour during the day during which traffic intensity is at its peak.[3]
The dimensioning process involves determining the network’s topology, routing plan, traffic matrix, and GoS requirements, and using this information to determine the maximum call handling capacity of the switches, and the maximum number of channels required between the switches.[3] This process requires a complex model that simulates the behavior of the network equipment and routing protocols.
A dimensioning rule is that the planner must ensure that the traffic load never approaches 100 percent.[2][3] To comply with this rule, the planner must take ongoing measurements of the network's traffic and continuously maintain and upgrade resources to meet the changing requirements.[3][4] Another reason for "over-provisioning" is to make sure that traffic can be rerouted in case a failure occurs in the network.
Because of the complexity of network dimensioning, it is typically done using specialized software tools. Whereas researchers typically develop custom software to study a particular problem, network operators typically use commercial network planning software (e.g. OPNET Technologies, SevOne, WANDL, VPISystems, Cariden, Aria Networks). There is also one notable open-source network planning tool, TOTEM (TOolbox for Traffic Engineering Methods).

Impact upon Network Operators & the Network Users

Network dimensioning is based on subscriber and traffic forecasts. The objective of dimensioning is to model an operator's network from those forecasts to produce a technically optimum model of the network.[3] It is also typically based on a traffic matrix that contains the bandwidth demand for each source–destination pair.
Dimensioning gives a preliminary network plan as an output, which is then supplemented in the coverage and parameter planning phases to create a more detailed plan. The preliminary plan includes the number of network elements needed to fulfill the quality of service requirements set by the operator, e.g. in GSM the number of BTSs and TRXs (transceivers). Note that dimensioning is repeated whenever the network is extended. The dimensioning result has two aspects: it gives the minimum number of base stations required for coverage reasons and the minimum required for capacity reasons. Both aspects need to be analyzed against the original planning targets.

It is also important to understand the forecasts for subscriber growth and for the services that are going to be deployed; this has posed a particular challenge to mobile network operators. The dimensioning result is an average capacity requirement per area type, such as urban or suburban. More detailed capacity planning, i.e. capacity allocation for individual cells, can be done using a planning tool with digital maps and traffic information.

The dimensioning results are an input for coverage planning, which is the next step in the network planning process. The radio network configuration plan also provides information for preplanning of the transmission network, whose topology can be sketched based on the initial configuration and network design criteria.
The radio network planning criteria are defined at the beginning of the network planning process. The customer requirements form the basis of the negotiations, and the final criteria are agreed between the customer and the radio network planner. The network operator has performance quality targets for the cellular network, and these quality requirements are also related to how the end user experiences the network. The network planner's main target is to build as high-quality a network as possible. On the other hand, there must be cost-efficiency: the operator can only spend so much on investments if the business is to remain financially profitable. The two factors, network quality and investment, are both connected to profit. The link is not straightforward, but if the end user's perception of network quality is good, it has an impact on profits. This explains the complexity of network planning: sufficient cellular network coverage and capacity must be created with as low an investment as possible.
The coverage targets include the geographical coverage, coverage thresholds for different areas and the coverage probability. A typical coverage probability is in the range 90–95%. The geographical coverage is case-specific and can be defined in steps according to network roll-out phases.
The quality targets are those agreed between the customer and network planning. The main quality parameters are call success or drop call rate, handover success, congestion or call attempt success, and the downlink (DL) quality observed by the customer. DL quality is measured as BER as defined in the GSM specifications and mapped to RXQUAL values. Normally downlink RXQUAL classes 0 to 5 are considered sufficient call quality for the end user, while classes 6 and 7 represent poor performance and need to be avoided. The target value for RXQUAL can be, for example, equal to or better than 5 for 95% of the time.
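For reference, the commonly quoted GSM 05.08 mapping from measured BER to RXQUAL classes (each class roughly doubles the BER range of the previous one) can be written as a small lookup; the function name is mine, not from any specification:

    def rxqual(ber_percent: float) -> int:
        """Map a measured bit error rate (in %) to a GSM RXQUAL class 0-7."""
        bounds = [0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8]  # upper BER % of classes 0..6
        for klass, upper in enumerate(bounds):
            if ber_percent < upper:
                return klass
        return 7

    print(rxqual(2.5))   # -> 4, acceptable quality (classes 0-5)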
The coverage and quality targets need to be considered in connection with the network evolution strategy. The subscriber forecast predicts the need and pace of network enlargement. Due to this obvious connection it is important to verify the subscriber forecast from time to time and keep it up to date. The coverage and quality targets need to be adjusted for the different network evolution phases. Interference probability becomes more important as the network capacity enlarges and has to be added as part of the quality targets.
The network features that are used have an effect on the dimensioning phase. The capacity and quality requirements need to be adjusted according to the features in use.
Some parameters that affect network planning cannot be controlled, so it is important to know what they are and take them into account. Topology and morphology are always area-specific, and the area-related planning parameters are therefore case-specific. An accurate digital map is needed in network planning; it is used together with the propagation model to calculate coverage areas in the planning phase, and the propagation model is customized for the planning area with propagation measurements.
Population data are needed when estimating subscriber numbers. Population data as a layer on the map are useful in planning tasks, for covering densely populated areas and allocating the needed capacity.
The available bandwidth is a critical network planning parameter; basic decisions such as BTS configuration and frequency planning depend on it.
The quality of the cellular network is highly dependent on the quality of the network plan. The network performance will be measured and analyzed to prove that it is working according to the planning requirements.



Does Network Dimensioning Pose a Challenge in Particular to Mobile Network Operators?

Network dimensioning poses a great challenge to mobile network operators.
Competition is leading increasing numbers of operators to identify network relocation by equipment vendors as an efficient way of solving existing problems, improving network capacity and maintaining competitiveness. However, doing so creates numerous challenges for both operators and vendors.
Firstly, new equipment must enable better network performance in order to solve existing problems. Increasing subscriber numbers and restricted frequency resources mean that operators require base stations with larger capacity and higher spectrum utilization. To reduce CAPEX and OPEX, operators need to minimize the number of sites via technical support from equipment vendors. The new equipment must also provide a rich array of services so as to increase operators' profits.
Secondly, each equipment vendor must possess sufficient relocation project management and implementation capabilities. Outstanding equipment is a precondition for relocation, while project management is the core. Each relocation project consists of three phases: preparation, cutover implementation, and network optimization. To implement large-scale network relocation, effective project management systems must monitor all the procedures involved. To avoid affecting service, cutover occurs in the evenings, and hundreds or thousands of base stations must be cut over in a very limited timeframe while stability and reliability guarantees remain of paramount importance. This places considerable demands upon equipment vendors.
Thirdly, given tight deadlines that involve thousands of TRXs, an equipment vendor must display strong network planning and optimization capabilities to ensure relocation success. The replacement network must exhibit improved quality and resolve network congestion, restricted frequency resources and coverage blind spots.
Fourthly, newly adopted equipment must enable long-term evolution, especially towards 3G and IP networks, to make operators' investments long-term and sustainable and obviate the need for significant reinvestment.

Challenges of Network Dimensioning

The network dimensioning challenges are as follows:
  • A complex propagation environment: severely fluctuating signals and large multipath variations caused by man-made structures make theoretical prediction of the coverage area difficult.
  • Severe interference: besides man-made noise, adjacent-channel interference, intermodulation interference and other radio interference must be considered and kept within the permitted limits during engineering design.
  • Limited frequency resources, a constraint that becomes more serious as subscriber numbers grow.
  • Strict rules govern the cell structure and the cell splitting designed for frequency reuse, yet site placement can rarely be carried out exactly as planned in a real project, for various practical reasons.
  • Investment control, the technical and economic side of network construction, can by no means be ignored.
References

[1] Second-generation Network Planning and Optimization (GSM).

[2] Penttinen A., Chapter 10 – Network Planning and Dimensioning, Lecture Notes: S-38.145 – Introduction to Teletraffic Theory, Helsinki University of Technology, Fall 1999.

[3] Farr R.E., Telecommunications Traffic, Tariffs and Costs – An Introduction for Managers, Peter Peregrinus Ltd, 1988.

[4] Advanced Cellular Network Planning and Optimisation, Wiley, 2007.

[5] http://books.google.lk/books?id=ynyG9TB-tJ0C&pg=PA30&lpg=PA30&dq=network+pre-planning+challenges&source=bl&ots=HoImfCjKmu&sig=8K6JWYiSOUSFDrmllsCChu6hHg0&hl=en&ei=0eK5TKCJHeWP4ga2vqWLDg&sa=X&oi=book_result&ct=result&resnum=1&ved=0CBMQ6AEwAA#v=onepage&q&f=false

[6] http://www.scribd.com/doc/22066142/Chapter1-Network-Planning-Overview

November 20, 2010

Relevance of the issues and solutions addressed in Brooks' classic paper "No Silver Bullet: Essence and Accidents of Software Engineering"

In this article, Brooks mentions several problems related to software engineering and the approaches that can be taken to resolve the issues that arise. He believes the hard part of building software is not the labor of representing it and testing the fidelity of the representation; it is the specification, design, and testing of the conceptual construct.

Furthermore, he classifies the inherent properties of modern software systems into four sections.
  • Complexity

According to Brooks, scaling up a software entity is an increase in the number of different elements, not just a repetition of the same elements in larger sizes. In most scenarios the elements interact with each other in a non-linear manner, so the complexity of the whole grows much more than linearly; for example, n elements can have up to n(n-1)/2 pairwise interactions, so doubling the element count roughly quadruples the potential interactions.

The table below lists the problems arising from the complexity of software, together with their effects.

Problem: Communication
  • Product flaws
  • Cost overruns
  • Schedule delays

Problem: Understanding all possible states of a program
  • Unreliability

Problem: Functional complexity
  • Reduced usability of the program

Problem: Complexity of structure
  • Difficulty in extending programs to new functions
  • Unanticipated side effects
  • Unvisualized states that constitute security trapdoors

Problem: Management
  • Makes an overview hard to form
  • Impedes conceptual integrity
  • The learning and understanding burden makes personnel turnover a disaster

The areas Brooks mentions are still applicable in today's context. In the software development industry, many earlier products failed due to complexity issues, and the complexity of a system's structure also affects its maintenance. Today these complexity problems are reduced by agile development concepts. For example, some agile methodologies use small, manageable teams and effective communication with customers, which reduces many of the communication problems Brooks mentions. Functional complexity and complex software structure are also kept in check by releasing weekly or twice a week, as agile practice suggests.

  • Conformity

Brooks says the software must conform because it is perceived as the most conformable and because it is the most recent arrival on the scene. In all cases, much complexity comes from conformation to other interfaces, and this complexity cannot be simplified away by any redesign of the software alone. This observation is still applicable in today's context.

  • Changeability

Brooks also states that software is constantly subject to pressure for change, whereas manufactured goods are infrequently altered after manufacture. When a software product is found useful, people try to use it in new ways and in new domains, and this demands changeability. Furthermore, he points out that as new computers, disks, and printers appear, existing software must be changed to adapt to the new hardware.

This changeability is still applicable today. Most of the time customers do not know exactly what they need, so they keep changing the requirements. Moreover, if a product is developed and turns out not to be usable, there was no point in developing it; change gives the product more value.


  • Invisibility


Software products differ from real-world products mainly in that they cannot be visualized. Brooks states that software has no ready geometric representation in the way that land has maps, silicon chips have diagrams, and computers have connectivity schematics. When we try to diagram software structure, we generally construct several directed graphs representing the flow of data, dependency patterns, time sequences, and name-space relationships. These graphs are barely hierarchical; indeed, one way of establishing conceptual control over such structure is to enforce link cutting until one or more of the graphs becomes hierarchical. In my view, this point of Brooks' applies at any time, today as well as in the future.

Next, we consider the three past developments, described by Brooks, that attacked difficulties in building software which were accidental, not essential.

  • High-level languages

One technology that made significant progress against accidental complexity was the invention of high-level languages. The progressive use of high-level languages increases software productivity, reliability, and simplicity. A high-level language frees a program from much of its accidental complexity and can furnish all the constructs the programmer imagines in the abstract program. Language development has also moved closer and closer to the sophistication of users.

Today's languages, such as C, C++, C# and Java, are considered improvements, but not improvements of an order of magnitude, just as Brooks predicted. These languages are more portable across platforms. Greater abstraction and hiding of detail are generally intended to make a language user-friendly, as it includes concepts from the problem domain instead of those of the machine. With the growing complexity of modern microprocessor architectures, well-designed compilers for high-level languages also frequently produce efficient code.

  • Timesharing


Even though it did not have as large an effect as high-level languages, time-sharing brought a major improvement in the productivity of programmers and in the quality of their products. Time-sharing preserves immediacy, and thereby enables one to maintain an overview of complexity.

The Internet has brought the general concept of time-sharing back into popularity. Expensive corporate server farms costing millions host thousands of customers all sharing the same common resources. As with the early serial terminals, websites operate primarily in bursts of activity followed by periods of idle time. This bursty nature permits the service to be used by many website customers at once, with none of them noticing any delays in communications until the servers start to get very busy.

  • Integrated Program Development Environments (IDE)

These provide a platform that ties individual programs together through integrated libraries, unified file formats, and pipes and filters, improving productivity by integral factors.

In today's context IDEs have continued to improve, and there is growing interest in visual programming. Visual IDEs allow users to create new applications by moving programming building blocks or code nodes to create flowcharts or structure diagrams, which are then compiled or interpreted. These flowcharts are often based on the Unified Modeling Language. Some IDEs support multiple languages, such as Eclipse and NetBeans, both based on Java, or MonoDevelop, based on C#.

This article also describes several solutions to the problems that were discussed above. Brooks has identified the below technical developments as solutions that are most often advanced as potential silver bullets.

  • Ada and other high-level language advances
  • Object-oriented programming (OOP)
  • Artificial Intelligence (AI)
  • Expert systems
  • Automatic programming
  • Graphical programming
  • Program verification
  • Environments and tools
  • Workstations

These solutions have had a major impact in addressing software engineering issues over the past 20 years.
The 1980s brought advances in programming language implementation such as Ada and other high-level languages. In the 1990s many "rapid application development" (RAD) languages emerged, which usually came with an IDE and garbage collection and were descendants of older languages. All such languages were object-oriented, including Object Pascal, Visual Basic, Java and C#. Java in particular received much attention. More radical and innovative than the RAD languages were the new scripting languages. Many consider scripting languages more productive than even the RAD languages, often because of choices that make small programs simpler but large programs harder to write and maintain. Nevertheless, scripting languages came to be the most prominent languages used in connection with the Web.

OO programming was not commonly used in mainstream software application development until the early 1990s. Many modern programming languages now support OOP. The most commercially important recent object-oriented languages are Visual Basic.NET (VB.NET) and C#, both designed for Microsoft's .NET platform, and Java, developed by Sun Microsystems. Both frameworks show the benefit of using OOP by creating an abstraction from implementation in their own way. VB.NET and C# support cross-language inheritance, allowing classes defined in one language to subclass classes defined in the other. Java runs in a virtual machine, making it possible to run on many different operating systems. VB.NET and C# use the Strategy pattern to accomplish cross-language inheritance, whereas Java uses the Adapter pattern.

In the 1990s and early 21st century, Artificial Intelligence achieved its greatest successes. The success was due to several factors: the incredible power of computers today, a greater emphasis on solving specific sub-problems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.

Expert systems are most valuable to organizations that have a high level of know-how and expertise that cannot be easily transferred to other members. They are designed to carry the intelligence and information found in the intellect of experts and provide this knowledge to other members of the organization for problem-solving purposes.

Automatic programming has been successful in addressing software engineering issues over the past 20 years. IDEs such as Eclipse, Interface Builder and Microsoft Visual Studio have advanced forms of source code generation, with which the programmer can interactively select and customize "snippets" of source code, increasing programmer productivity.

Current developments try to integrate the graphical programming approach with dataflow programming languages to either have immediate access to the program state resulting in online debugging or automatic program generation and documentation. Dataflow languages also allow automatic parallelization, which is likely to become one of the greatest programming challenges of the future.

At present, program verification is used by most leading hardware companies, but its use in the software industry is still languishing. This could be attributed to the greater need in the hardware industry, where errors have greater commercial significance. Because of the potential subtle interactions between components, it is increasingly difficult to exercise a realistic set of possibilities by simulation.

Workstations have changed over the past 20 years. A workstation now resembles a high-end PC with some of the following features: support for ECC memory, a larger number of memory sockets using registered (buffered) modules, multiple processor sockets, sophisticated CPUs (for Intel, a server-derived Xeon instead of the Core chips typical of PCs), multiple displays, a reliable operating system with advanced features, and a high-performance graphics card (e.g. Nvidia's professional Quadro instead of the games-oriented GeForce).

Moreover, Brooks describes attacks that address the essence of the software problem: the formulation of these complex conceptual structures.

  • Buy versus build

Many project teams have faced the time when they need to make a major decision. Should one try to custom build a solution or buy an off-the-shelf product and customize it? Frequently, a wrong decision can result in cost overruns, project delays, or a solution that does not fit business needs very well. In my experience, I have seen two extremes of behavior among teams charged with making the "build vs. buy" decision. One believes that they can build everything needed and that no off-the-shelf solution will fit their needs. The other side of the coin is a belief that an off-the-shelf package will be much cheaper and will be able to fit one's needs. Unfortunately, both paths frequently can lead to disappointments if not carefully considered.
  • Requirement refinement and rapid prototyping

Brooks states that the most important function the software builder performs for the client is the iterative extraction and refinement of the product requirements. He also notes that the customer does not know exactly what he needs, so it is better to combine requirement refinement with rapid prototyping. Rapid prototyping is now entering the field of rapid manufacturing, and many experts believe it is a "next level" technology.

  • Incremental developments – grow, don’t build software

Brooks says that the brain alone is powerful because it was grown, not built. He argues that developers should start from the basic functions of a system and move forward incrementally; this also increases the developers' enthusiasm.

  • Great designers

Since the question of how to improve the software art centers, as it always has, on people, Brooks proposes that organizations should reward and nurture their great designers.

The table below contrasts exciting products (the work of great designers) with useful but unexciting products.

Exciting Products      Unexciting Products
LINUX                  MS-DOS
WIN 7                  APL
SharePoint             Pascal
Vista                  Fortran

 
Even though Brooks stated the above positions in the late 1980s, his perceptions are still applicable. As a whole, this report has highlighted the problems and issues related to the software engineering process and its products as described by Brooks, the solutions presented to overcome those difficulties, and how successful those methods have been during the last 20 years. Though Brooks examined several proposed solutions to the problems, he concluded that no silver bullet can be found among them.

 

Why might a brilliant programmer / designer not necessarily be a good software architect?

A brilliant developer may know what is to be developed, the module he is supposed to build and the requirements to implement, but if he lacks the big picture of the system, he cannot be a good software architect. Most of the time even the best programmer may be unable to sell the vision throughout the entire software development lifecycle, evolving it through the project where necessary and taking responsibility for ensuring that it is delivered successfully. A brilliant programmer may develop and deliver whatever he is asked to build, yet be unable to ensure that the project as a whole is delivered successfully.

A good software architect owns the bigger picture. Technical leadership is only one aspect; other duties arise during the delivery phase of a software project, including taking responsibility, providing technical guidance, making technical decisions and having the authority to make those decisions. An architect needs to exercise technical leadership to ensure everything is taken care of and that the team is being steered in the right direction on a continuous basis. A brilliant programmer may be an expert in one or two development environments but not in most of them. A good developer may also lack technical leadership skills, and since many project teams depend on their architect for the technical leadership they need, putting such a programmer in the architect's role can cause the entire project to fail.

We have seen many brilliant developers who are not good mentors or coaches. In most software projects coaching and mentoring are overlooked activities, and many team members do not get the support they need. A good architect uses coaching and mentoring to enhance people's skills and help them advance their careers, where a merely good developer might fail.

Most architects are experienced coders, so it makes sense to keep those skills up to date. In addition, an architect who codes experiences the same pain as everybody else on the team, which in turn helps them understand how their architecture is viewed from a development perspective. A developer who is selfish and makes no effort to understand others' needs cannot be a good software architect, however good his code.

Considering all of this, a good software architect does not just happen. He possesses a set of qualities, and his experience makes him a good software architect. A brilliant programmer who lacks that experience and those key qualities may not necessarily be a good software architect.

Key Attributes of a Software Architect

Who is a software architect? Is it simply the person responsible for the software architecture? The answer is no; he is a person who owns more than that. According to the Rational Unified Process, "The software architect must be well-rounded, possess maturity, vision, and a depth of experience that allows for grasping issues quickly and making educated, critical judgment in the absence of complete information."
An architect plays several roles, including driving major technical decisions expressed as the software architecture. This includes identifying and documenting the architecturally significant aspects of the system, covering the requirements, design, implementation, and deployment views. He is also responsible for providing the rationale for these decisions, balancing the concerns of the various stakeholders, driving down technical risks, and ensuring that decisions are effectively communicated, validated, and adhered to.
An architect is not only responsible for performing multiple roles, but also for holding multiple skills. The following are the key attributes possessed by a good software architect.
• Visionary
An architect is a person with a clear, distinctive vision of the future, usually connected with advances in technology. He is a person who constantly explores novel and modern approaches and always makes time to learn. Great architects are great "sponges" with great memory and powers of assimilation: they can look at a new piece of knowledge (e.g. a new API), understand what it does and how it fits, and 18 months later remember to use it when the opportunity arises.

• Manager
An architect is responsible for planning and directing the work of a group of individuals, monitoring their work, and taking corrective action when necessary. Architects direct team members directly, or they may direct several leads who direct the other team members. The architect also coordinates with all stakeholders to formulate and communicate a practical blueprint. Architects have to explain and advise on technical issues to business stakeholders, and they must also be able to advise delivery teams on how to build.

• Developer
An architect helps develop a proof of concept or pilot to validate solutions, patterns, practices, and principles. He is a person concerned with all facets of the software development process. Ultimately the buck stops at the architect's desk: they have to be able to solve problems (or help others solve them) regardless of where they occur – client, database, middle tier, network, and so on.

• Leader
An architect leads in the sense of making decisions and ensuring things get done, but also in the sense of empowering other people to make decisions and mentoring them to make better ones. He also defines processes and criteria: how files are checked in and out, which changes are risky and which are not, and when changes should be integrated.

• Coach
An architect should be able to motivate and inspire others, gaining buy-in while ensuring that each person gets the challenge they want, in line with the end goal. He mentors and coaches others on the effective application of industry best practices.

• Governor
As a governor, an architect is responsible for establishing architectural standards and guidelines. The architect looks at problems in a different way, applying analogies from other fields or projects. A true architect must not be parochial, and this means gaining experience in different roles and fields, probably with different employers. Having and using experience is more about attitude than years.