
The Mythical Man-Month

Posted by federico Oct 24, 2018

The Mythical Man-Month is a book from the 1970s in which Brooks (famous for Brooks's law) puts forward a number of ideas about software engineering.


A summary (from Wikipedia) of those 15 ideas:


The mythical man-month

See also: Combinatorial explosion

Brooks discusses several causes of scheduling failures. The most enduring is his discussion of Brooks's law: Adding manpower to a late software project makes it later. Man-month is a hypothetical unit of work representing the work done by one person in one month; Brooks' law says that the possibility of measuring useful work in man-months is a myth, and is hence the centerpiece of the book.

Complex programming projects cannot be perfectly partitioned into discrete tasks that can be worked on without communication between the workers and without establishing a set of complex interrelationships between tasks and the workers performing them.

Therefore, assigning more programmers to a project running behind schedule will make it even later. This is because the time required for the new programmers to learn about the project and the increased communication overhead will consume an ever increasing quantity of the calendar time available. When n people have to communicate among themselves, as n increases, their output decreases and when it becomes negative the project is delayed further with every person added.

  • Group intercommunication formula: n(n − 1) / 2
  • Example: 50 developers give 50 · (50 – 1) / 2 = 1225 channels of communication.
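The intercommunication formula and the example above can be checked with a few lines of Python:

```python
def communication_channels(n: int) -> int:
    """Number of pairwise communication channels among n people: n(n - 1) / 2."""
    return n * (n - 1) // 2

# Coordination cost grows quadratically, not linearly, with team size.
for team_size in (5, 10, 50):
    print(team_size, communication_channels(team_size))
# 5 -> 10, 10 -> 45, 50 -> 1225
```

Doubling a team roughly quadruples the number of channels, which is why adding people to a late project buys so little.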

No silver bullet

Main article: No Silver Bullet


Brooks added "No Silver Bullet — Essence and Accidents of Software Engineering" (and further reflections on it, "'No Silver Bullet' Refired") to the anniversary edition of The Mythical Man-Month. Brooks insists that there is no single silver bullet: "there is no single development, in either technology or management technique, which by itself promises even one order of magnitude [tenfold] improvement within a decade in productivity, in reliability, in simplicity." The argument relies on the distinction between accidental complexity and essential complexity, similar to the way Amdahl's law relies on the distinction between "strictly serial" and "parallelizable".
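The Amdahl's-law analogy can be made concrete with a short Python sketch; the 50% split between accidental and essential complexity is purely illustrative, not a figure from the book:

```python
def amdahl_speedup(improvable_fraction: float, factor: float) -> float:
    """Overall speedup when only a fraction of the work can be improved by `factor`."""
    return 1.0 / ((1.0 - improvable_fraction) + improvable_fraction / factor)

# If accidental complexity were 50% of the effort, removing it entirely
# (factor approaching infinity) would at best double productivity,
# far short of the tenfold improvement a "silver bullet" would require.
print(amdahl_speedup(0.5, 1e9))
```

The un-improvable (essential) fraction bounds the total gain, just as the strictly serial fraction bounds parallel speedup.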

The second-system effect

Main article: Second-system effect


The second-system effect proposes that, when an architect designs a second system, it is the most dangerous system they will ever design, because they will tend to incorporate all of the additions they originally did not add to the first system due to inherent time constraints. Thus, when embarking on a second system, an engineer should be mindful that they are susceptible to over-engineering it.

The tendency towards irreducible number of errors

The author makes the observation that in a suitably complex system there is a certain irreducible number of errors. Any attempt to fix observed errors tends to result in the introduction of other errors.

Progress tracking

Brooks wrote "Question: How does a large software project get to be one year late? Answer: One day at a time!" Incremental slippages on many fronts eventually accumulate to produce a large overall delay. Continued attention to meeting small individual milestones is required at each level of management.
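The "one day at a time" arithmetic is easy to sketch; the milestone slips below are hypothetical:

```python
def cumulative_delay(slips_per_milestone: list[int]) -> int:
    """Total schedule delay: small per-milestone slips simply add up."""
    return sum(slips_per_milestone)

# Hypothetical project: twelve monthly milestones, each slipping only a few days.
slips = [2, 0, 3, 1, 4, 0, 2, 5, 1, 3, 2, 4]
print(cumulative_delay(slips))  # 27 working days late in total
```

No single slip looks alarming, which is exactly why each small milestone has to be watched.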

Conceptual integrity

To make a user-friendly system, the system must have conceptual integrity, which can only be achieved by separating architecture from implementation. A single chief architect (or a small number of architects), acting on the user's behalf, decides what goes in the system and what stays out. The architect or team of architects should develop an idea of what the system should do and make sure that this vision is understood by the rest of the team. A novel idea may not be included if it does not fit seamlessly with the overall system design. In fact, to ensure a user-friendly system, a system may deliberately provide fewer features than it is capable of. The point is that if a system is too complicated to use, many of its features will go unused because no one has the time to learn them.

The manual

The chief architect produces a manual of system specifications. It should describe the external specifications of the system in detail, i.e., everything that the user sees. The manual should be altered as feedback comes in from the implementation teams and the users.

The pilot system

When designing a new kind of system, a team will design a throw-away system (whether it intends to or not). This system acts as a "pilot plant" that reveals techniques that will subsequently cause a complete redesign of the system. This second, smarter system should be the one delivered to the customer, since delivery of the pilot system would cause nothing but agony to the customer, and possibly ruin the system's reputation and maybe even the company.

Formal documents

Every project manager should create a small core set of formal documents defining the project objectives, how they are to be achieved, who is going to achieve them, when they are going to be achieved, and how much they are going to cost. These documents may also reveal inconsistencies that are otherwise hard to see.

Project estimation

When estimating project times, it should be remembered that programming products (which can be sold to paying customers) and programming systems are each about three times as hard to write as simple, independent in-house programs. It should also be kept in mind how much of the work week will actually be spent on technical issues, as opposed to administrative or other non-technical tasks such as meetings, especially "stand-up" or "all-hands" meetings.
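Brooks's rules of thumb for scaling an estimate can be sketched as follows; the 3x factors come from the text, the 9x combination is simply their product, and the base figure is hypothetical:

```python
def estimate_effort(base_months: float, is_product: bool, is_system: bool) -> float:
    """Scale a raw in-house-program estimate by the ~3x rules of thumb above;
    a programming systems product (both flags set) comes out at ~9x."""
    effort = base_months
    if is_product:
        effort *= 3  # productization: testing, documentation, generalization
    if is_system:
        effort *= 3  # system integration: interfaces, system-level testing
    return effort

print(estimate_effort(2.0, is_product=True, is_system=True))  # 18.0 person-months
```

A two-month prototype can therefore imply well over a year of effort once it must become a tested, integrated product.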


To avoid disaster, all the teams working on a project should remain in contact with each other in as many ways as possible—e-mail, phone, meetings, memos etc. Instead of assuming something, implementers should ask the architect(s) to clarify their intent on a feature they are implementing, before proceeding with an assumption that might very well be completely incorrect. The architect(s) are responsible for formulating a group picture of the project and communicating it to others.

The surgical team

Much as a surgical team during surgery is led by one surgeon performing the most critical work, while directing the team to assist with less critical parts, it seems reasonable to have a "good" programmer develop critical system components while the rest of a team provides what is needed at the right time. Additionally, Brooks muses that "good" programmers are generally five to ten times as productive as mediocre ones.

Code freeze and system versioning

Software is invisible. Therefore, many things only become apparent once a certain amount of work has been done on a new system, allowing a user to experience it. This experience will yield insights, which will change a user's needs or the perception of the user's needs. The system should, therefore, be changed to fulfill the changed requirements of the user. This can only occur up to a certain point, otherwise the system may never be completed. At a certain date, no more changes should be allowed to the system and the code should be frozen. All requests for changes should be delayed until the next version of the system.

Specialized tools

Instead of every programmer having his own special set of tools, each team should have a designated tool-maker who may create tools that are highly customized for the job that team is doing, e.g., a code generator tool that creates code based on a specification. In addition, system-wide tools should be built by a common tools team, overseen by the project manager.
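As a toy illustration of the kind of team-specific tool Brooks describes, here is a minimal code generator that turns a hypothetical field specification into a Python class; the spec format is invented for this sketch:

```python
def generate_class(name: str, fields: list[str]) -> str:
    """Emit Python source for a simple data-holder class from a field list."""
    lines = [f"class {name}:", f"    def __init__(self, {', '.join(fields)}):"]
    lines += [f"        self.{field} = {field}" for field in fields]
    return "\n".join(lines)

source = generate_class("User", ["name", "email"])
namespace = {}
exec(source, namespace)  # compile and load the generated class
user = namespace["User"]("ada", "ada@example.com")
print(user.email)  # ada@example.com
```

A real tool-maker would of course generate code in whatever form the team's job demands; the point is that one customized generator can replace repetitive hand-written boilerplate.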

Lowering software development costs

There are two techniques for lowering software development costs that Brooks writes about:

  • Implementers may be hired only after the architecture of the system has been completed (a step that may take several months, during which time prematurely hired implementers may have nothing to do).
  • Another technique Brooks mentions is not to develop software at all, but simply to buy it "off the shelf" when possible.

Quality in a product or service is seldom achieved by chance. It can only be achieved through detailed planning and careful execution. Despite the effort put in, outside of an ideal scenario there are still bound to be glitches. The thing about Quality Assurance (QA), however, is that it paves the way for higher efficiency and better performance through incessant testing.

Through the implementation of Agile and DevOps methods, teams now collaborate more than ever before. Developers are merged into a testing cycle right from the early stages. Formerly, testing happened at definite intervals, and testers had to wait for the product to be completely built before they set out to find the bugs and glitches. Admittedly, far too much time was spent waiting for the code to land in a tester's lap. With the drastic change in the role of a tester, however, the scope of QA has broadened by leaps and bounds. Be it a software development engineer in test (SDET) or a QA engineer, as a member of a collaborative team, a modern-day tester has a cadence similar to that of a software development engineer.

An SDET, or a QA engineer, would be able to easily validate fields being coded in order to avoid data loss. The QA engineer would also be able to write some basic checks and test ideas for an application programming interface (API), all of which inherently improves the design. Having teams that test earlier in the application lifecycle helps the QA engineers feel far more at ease with tooling and technology. Being able to look through server processes, work with API tools, or just walk through code quickly with a bit of help, is rapidly becoming a desired (and a much-required) skill.
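A hedged sketch of the kind of basic field check such a tester might automate; the record format and the validation rules here are hypothetical, not drawn from any particular application:

```python
def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    # Hypothetical rule: an email field must contain exactly one '@'.
    if record.get("email", "").count("@") != 1:
        errors.append("email must contain exactly one '@'")
    # Hypothetical rule: age must be a non-negative integer.
    age = record.get("age")
    if not isinstance(age, int) or age < 0:
        errors.append("age must be a non-negative integer")
    return errors

print(validate_record({"email": "ada@example.com", "age": 36}))  # []
print(validate_record({"email": "bad-address", "age": -1}))      # two errors
```

Checks like this, written alongside the fields being coded, are exactly what catches data-loss bugs before they reach a later stage.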

When speaking of quality software based on DevOps processes, there are two essential parameters (among a few others) one should consider:

  • Shift-Left Testing: Where testing is part and parcel of continuous integration (CI)
  • Shift-Right Testing: Where the horizon of testing is broadened after receiving feedback from the end-users


Shift-Left Testing

The examples that are given while stating the features of a product need to meet the product acceptance criteria, and the assumptions that are waiting to be validated need to meet the business acceptance criteria. In short, defining tests even before the features are completely built is the shift-left testing approach. Shift-left testing is key to delivering quality software at speed.

In any organization, deploying a new patch of an application can get quite challenging. Rigorous testing needs to be conducted throughout, in the form of regression and functional testing, to ensure that patch updates do not destabilize a system. When testing starts earlier in the cycle, teams are more focused on quality and have a "let's get the coding right the first time" outlook. This saves tremendous amounts of time and reduces the number of iterations a software development team has to perform on a particular piece of code.
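Shift-left testing in miniature: the acceptance checks are written before the feature exists, and the implementation is then written to satisfy them. The `apply_discount` function and its rules are hypothetical stand-ins for a real specification:

```python
# Tests defined first, straight from the (hypothetical) acceptance criteria.
def test_discount_basic():
    assert apply_discount(price=100.0, percent=20) == 80.0

def test_discount_never_negative():
    assert apply_discount(price=10.0, percent=150) == 0.0

# The implementation is written afterwards, to make the checks above pass.
def apply_discount(price: float, percent: float) -> float:
    return max(0.0, price * (1 - percent / 100))

test_discount_basic()
test_discount_never_negative()
print("all checks pass")
```

In a real project these checks would live in a CI suite (e.g. under pytest) and run on every commit, which is what makes the testing "continuous".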


A few reasons to adopt the Shift-Left Testing approach:

  • Improved design: Through continuous Shift-Left testing and arduous brainstorming sessions, roadblock areas, bottlenecks, and possible performance failures are identified in advance. Even though these discoveries may lead to new design alternatives, they are improved versions of the original idea.
  • Bugs are Fixed Early-On: When we stop and think about how often organizational executives admitted that they “should have” dealt with the issue early on when it was identified, we realize the importance of Shift-Left testing. It gives more breathing room to tackle mistakes immediately after they are spotted, removing the, “let’s come back to this once we finish the critical stuff” (which seldom happens) outlook.
  • Massive Time and Effort Saved: When talking about improvement of efficiency and increase in quality, it would be ironic (and impractical) to hope to achieve those objectives without saving our own time and effort! This is another compelling reason to shift your testing left.


Shift-Right Testing

While testing early in the application lifecycle is essential and highly recommended, it is not enough. Continuously obtaining feedback from users is equally significant, and this is where the Shift-Right approach to testing comes in. Some things are outside the test engineer's purview: a server can have downtime, for instance, yet the application simply cannot afford such a failure. The performance and usability of an application are continuously monitored, and the application is modified accordingly. Even while gathering requirements, a tester should be aware of how users would feel about the functionality of a particular application. The important thing is to cover more ground with testing: testing should have the right mix at the right time for a given business framework. Such an approach quickly helps engineers understand how a product or feature update was received by its intended users.
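A minimal sketch of the shift-right idea of monitoring post-release feedback; the rating format and the threshold are assumptions for illustration, not part of any standard tooling:

```python
def flag_for_review(ratings: list[int], threshold: float = 3.5) -> bool:
    """Flag a feature when its average post-release rating drops below threshold."""
    return bool(ratings) and sum(ratings) / len(ratings) < threshold

print(flag_for_review([5, 4, 4, 5]))     # False: users are happy
print(flag_for_review([2, 3, 1, 2, 3]))  # True: feed this back to the team
```

In practice the signal would come from production monitoring, crash reports, or user surveys, but the loop is the same: measure after release, then feed the result back into the next cycle.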


A few reasons to adopt the Shift-Right Testing approach:

  • Enhancing Customer Experience: By shifting testing right, customer issues are carefully collected. Upon obtaining the feedback, the collection of issues is then translated into technical and business language. This helps isolate each issue and address it, thereby enhancing the overall customer experience.
  • More Scope for Automation: Automation saves time, plain and simple. When patches and features are being built into an application, automating large parts of the process, or even the whole of it, saves precious time. User interface (UI) automation, once the application is stable at a core-functional level, is crucial for testing with speed. Shifting testing to the right enables you to do just that!
  • High Test Coverage: A Shift-Right approach to testing empowers the test engineers to test more, test on time, and test late. That translates to fewer bugs (at a basic stage), better quality (at an elevated stage), and a delightful customer experience.
