CPI vs MIPS: Key Differences Explained
This page describes the differences between CPI and MIPS.
CPI
CPI stands for Clock Cycles Per Instruction. A program is composed of a number of instructions, and these instructions can be of different types: ALU, load, store, branch, and so on.
In practice, it is easier to count the number of instructions a program executes than to count the number of CPU cycles it takes to run. Hence, the average number of clock cycles per instruction is used as an alternative measure of performance.
The following is the CPI equation:
CPI = CPU clock cycles for the program / Instruction count
In other words:
CPU Time = (Instruction count X CPI) / Clock rate
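The CPU time equation can be sketched as a short calculation. The values below are hypothetical, chosen only to illustrate the formula:

```python
# Hypothetical example values (not from the text): a program with
# 2 million instructions, an average CPI of 2.5, and a 1 GHz clock.
instruction_count = 2_000_000
cpi = 2.5
clock_rate_hz = 1e9  # 1 GHz

# CPU clock cycles for the program = instruction count * CPI
cpu_clock_cycles = instruction_count * cpi

# CPU Time = (instruction count * CPI) / clock rate
cpu_time_s = cpu_clock_cycles / clock_rate_hz

print(cpu_time_s)  # 0.005 seconds
```

Note that doubling the clock rate halves the CPU time only if the instruction count and CPI stay the same.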
If the CPI for each instruction is noted, then the overall CPI can be calculated as follows:
CPI = Σ (CPIi X Ii) / Instruction count
Where:
- Ii = Number of times an instruction of type i is executed
- CPIi = Average number of cycles to execute instruction of type i
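The weighted-average CPI formula can be sketched with a hypothetical instruction mix (the counts and per-type CPIs below are illustrative, not from the text):

```python
# Hypothetical instruction mix: counts (Ii) and per-type cycle
# costs (CPIi) for three instruction classes.
mix = {
    "ALU":    {"count": 500_000, "cpi": 1},
    "load":   {"count": 300_000, "cpi": 2},
    "branch": {"count": 200_000, "cpi": 3},
}

# Instruction count = sum of Ii over all instruction types
instruction_count = sum(t["count"] for t in mix.values())

# Overall CPI = sum of (CPIi * Ii) / instruction count
total_cycles = sum(t["count"] * t["cpi"] for t in mix.values())
overall_cpi = total_cycles / instruction_count

print(overall_cpi)  # (500k*1 + 300k*2 + 200k*3) / 1M = 1.7
```

The overall CPI is pulled toward the CPI of whichever instruction types dominate the mix, which is why compilers that reduce expensive instructions can lower the average CPI.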
MIPS
MIPS stands for Million Instructions Per Second. It is another measure of performance: the rate of instruction execution per unit time.
MIPS can be expressed as per the following equation:
MIPS = Instruction count / (Execution time X 10^6) = Clock rate / (CPI X 10^6)
MIPS cannot be used to compare machines with different instruction sets. The same program compiles to a different instruction count on each machine, and individual instructions do different amounts of work, so a higher MIPS rating does not necessarily mean a shorter execution time.
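The two forms of the MIPS equation can be checked against each other with a short sketch. The values below are hypothetical, reusing the kind of numbers from the CPI examples:

```python
# Hypothetical example values (not from the text).
instruction_count = 2_000_000
cpi = 2.5
clock_rate_hz = 1e9  # 1 GHz

# Execution time = (instruction count * CPI) / clock rate
execution_time_s = instruction_count * cpi / clock_rate_hz

# Form 1: MIPS = instruction count / (execution time * 10^6)
mips_from_time = instruction_count / (execution_time_s * 1e6)

# Form 2: MIPS = clock rate / (CPI * 10^6)
mips_from_cpi = clock_rate_hz / (cpi * 1e6)

print(mips_from_time)  # 400.0
print(mips_from_cpi)   # 400.0 -- both forms agree
```

Both forms give the same result because execution time is itself derived from the instruction count, CPI, and clock rate.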