
Amdahl's Law

Amdahl's law is a principle in computer science that predicts the theoretical maximum speedup of a system when only part of it is optimized. It is widely used in parallel computing to illustrate that the overall performance gain is limited by the portion of a program that cannot be parallelized. The law was formulated by computer architect Gene Amdahl in 1967. It highlights that even with a massive increase in processing resources, the sequential, non-parallelizable part of a program will eventually become a bottleneck, capping the maximum achievable speedup.

The formula

The formula for Amdahl's law calculates the speedup ($S$) based on the fraction of the program that can be enhanced ($P$) and the number of parallel units ($N$) used.

$$S=\frac{1}{(1-P)+\frac{P}{N}}$$

Where:

  • $S$ is the theoretical speedup of the entire task.
  • $P$ is the proportion of the program's execution time that can be parallelized (between 0 and 1).
  • $1-P$ is the sequential portion of the program that cannot be parallelized.
  • $N$ is the number of processors or enhanced resources.
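The formula can be expressed directly as a small function. This is a minimal sketch; the function name and parameter names are illustrative, not from any particular library:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Theoretical speedup from Amdahl's law.

    p: proportion of execution time that can be parallelized (0 to 1)
    n: number of processors or enhanced resources
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be between 0 and 1")
    if n < 1:
        raise ValueError("n must be at least 1")
    # Sequential fraction (1 - p) runs at the original speed;
    # the parallel fraction p is divided across n processors.
    return 1.0 / ((1.0 - p) + p / n)
```

Note that `amdahl_speedup(p, 1)` is always 1.0: with a single processor there is no speedup regardless of how parallelizable the program is.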

Example of Amdahl's law

Imagine a program where 75% of the execution time can be parallelized, and you want to run it on a system with four processors.

  • $P=0.75$
  • $1-P=0.25$
  • $N=4$
$$S=\frac{1}{(1-0.75)+\frac{0.75}{4}}=\frac{1}{0.25+0.1875}=\frac{1}{0.4375}\approx 2.28$$

In this case, the theoretical speedup is limited to approximately 2.28 times, even though you have four processors. The 25% of the program that must run sequentially creates a performance bottleneck: as $N$ grows without bound, the speedup approaches $\frac{1}{1-P}=4$, no matter how many processors are added.
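The diminishing returns can be seen numerically. The self-contained sketch below tabulates the speedup for $P=0.75$ as the processor count grows; the speedup can never exceed $\frac{1}{1-P}=4$:

```python
# Amdahl's law for P = 0.75: speedup is capped at 1 / (1 - P) = 4.
p = 0.75
for n in [1, 2, 4, 16, 256, 65536]:
    s = 1.0 / ((1.0 - p) + p / n)
    print(f"N = {n:6d}  ->  speedup = {s:.3f}")
```

Even with 65,536 processors, the speedup only creeps toward 4, while going from 1 to 4 processors already gets you past 2.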

Limitations of Amdahl's law

While Amdahl's law provides a crucial insight into parallel processing, it has several important limitations:

  • Fixed workload: The law assumes that the problem size remains constant, even as the number of processors increases. In practice, programmers often use more processors to tackle larger problems.
  • Ignores overhead: It does not account for real-world factors like communication overhead between processors, synchronization costs, or resource contention, all of which can reduce actual performance.
  • Assumes homogeneity: The model typically assumes that all processors are identical, which may not be the case in modern, heterogeneous systems.
  • Contrast with Gustafson's law: Amdahl's law focuses on speedup for a fixed problem size (fixed-load speedup). Gustafson's law offers an alternative perspective, focusing on how speedup can increase for larger problem sizes (scaled-load speedup).
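The contrast in the last point can be made concrete. Gustafson's law in its standard form gives a scaled speedup of $S=(1-P)+P\cdot N$, since the parallel portion of the workload grows with the processor count. The sketch below (function names are illustrative) compares the two models for the same $P$:

```python
def amdahl(p: float, n: int) -> float:
    """Fixed-workload speedup (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson(p: float, n: int) -> float:
    """Scaled-workload speedup (Gustafson's law): problem size grows with n."""
    return (1.0 - p) + p * n

# With P = 0.75 and 64 processors, the two models diverge sharply:
# Amdahl stays below 1 / (1 - P) = 4, while Gustafson grows linearly in N.
print(f"Amdahl:    {amdahl(0.75, 64):.2f}")
print(f"Gustafson: {gustafson(0.75, 64):.2f}")
```

The divergence reflects the different question each law answers: how much faster can I solve this fixed problem (Amdahl), versus how much larger a problem can I solve in the same time (Gustafson).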