Side effect (computer science)

In computer science, a function or expression is said to have a side effect if it modifies some state or has an observable interaction with calling functions or the outside world. For example, a particular function might modify a global variable or static variable, modify one of its arguments, raise an exception, write data to a display or file, read data, or call other side-effecting functions. In the presence of side effects, a program's behaviour may depend on history; that is, the order of evaluation matters. Understanding and debugging a function with side effects requires knowledge about the context and its possible histories.[1][2]
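
As a concrete illustration, the following minimal Python sketch (the names are hypothetical) shows a function whose behaviour depends on history because it mutates a global variable and writes to the display:

counter = 0

def increment_and_report():
    # Side effects: mutates the global counter and writes to the display.
    global counter
    counter += 1
    print("counter is now", counter)
    return counter

increment_and_report()  # prints "counter is now 1" and returns 1
increment_and_report()  # same call, different result: prints "counter is now 2" and returns 2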

Side effects are the most common way that a program interacts with the outside world (people, filesystems, other computers on networks). But the degree to which side effects are used depends on the programming paradigm. Imperative programming is known for its frequent use of side effects.

In functional programming, side effects are rarely used. The lack of side effects makes it easier to do formal verifications of a program. Functional languages such as Standard ML, Scheme and Scala do not restrict side effects, but it is customary for programmers to avoid them.[3] The functional language Haskell expresses side effects such as I/O and other stateful computations using monadic actions.[4][5]
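
As a rough sketch of this style in Python (illustrating the general idea rather than Haskell's monadic mechanism; the names are hypothetical), a side-effect-free function returns a new value instead of mutating its argument:

def append_pure(items, value):
    # No side effect: the input list is left untouched and a new list is returned.
    return items + [value]

def append_impure(items, value):
    # Side effect: the caller's list is modified in place.
    items.append(value)

xs = [1, 2]
ys = append_pure(xs, 3)   # xs is still [1, 2]; ys is [1, 2, 3]
append_impure(xs, 3)      # xs is now [1, 2, 3]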

Assembly language programmers must be aware of hidden side effects: instructions that modify parts of the processor state which are not mentioned in the instruction's mnemonic. A classic example is an arithmetic instruction that implicitly modifies condition codes (a hidden side effect) while explicitly modifying a register (the overt effect). One potential drawback of an instruction set with hidden side effects is that, if many instructions have side effects on a single piece of state, such as condition codes, the logic required to update that state sequentially may become a performance bottleneck. The problem is particularly acute on processors designed with pipelining (since 1990) or with out-of-order execution. Such a processor may require additional control circuitry to detect hidden side effects and stall the pipeline if the next instruction depends on the results of those effects.
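
As a loose software analogy (hypothetical names, not actual processor behaviour), an operation's overt result can be accompanied by a hidden update to shared state, much as an arithmetic instruction implicitly updates condition codes:

zero_flag = False  # stands in for a condition code

def add_with_flags(a, b):
    global zero_flag
    result = a + b             # overt effect: the sum that is returned
    zero_flag = (result == 0)  # hidden effect: a shared flag updated as a by-product
    return result

add_with_flags(2, -2)
print(zero_flag)  # True, even though the caller only asked for a sum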

Referential transparency

Absence of side effects is a necessary, but not sufficient, condition for referential transparency. Referential transparency means that an expression (such as a function call) can be replaced with its value; this requires that the expression has no side effects and is pure (always returns the same results on the same input).
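
A minimal Python sketch of the distinction (the names are hypothetical): a pure call can be replaced by its value, while a call with a side effect cannot:

def square(n):
    # Referentially transparent: no side effects, same result for the same input.
    return n * n

total = square(3) + square(3)   # may be rewritten as 9 + 9 without changing behaviour

log = []

def square_logged(n):
    # Not referentially transparent: each call also appends to a log.
    log.append(n)
    return n * n

total_logged = square_logged(3) + square_logged(3)  # rewriting as 9 + 9 would skip the logging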

Temporal side effects

Side effects caused by the time taken for an operation to execute are usually ignored when discussing side effects and referential transparency. There are some cases, such as hardware timing or testing, where operations are inserted specifically for their temporal side effects, e.g. Sleep(5000) or for (int i = 0; i < 10000; i++) {}. These instructions do not change state other than taking an amount of time to complete.
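
Rough Python counterparts of these snippets (assuming the intent is only to consume time) might look like the following, with time.sleep standing in for Sleep:

import time

def wait_five_seconds():
    # The only effect is temporal: no program state is changed.
    time.sleep(5)

def busy_wait():
    # Consumes CPU time without modifying any visible state.
    for _ in range(10000):
        pass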

Idempotence

Main article: Idempotence

A function with side effects is said to be idempotent under sequential composition (f; f) if, when called with the same argument twice, the second call returns the same value and has no side effects which can distinguish it from the first call. For instance, consider:

x = 0

def xSetter(n):
    # Side effect: rebinds the global variable x.
    global x
    x = n

xSetter(5)
xSetter(5)  # The second call with the same argument changes nothing observable.

Here, xSetter is idempotent under sequential composition because the second call to xSetter (with the same argument) returns the same value (None) and does not change the visible program state: x is already 5. Note that this is distinct from idempotence under function composition (f ∘ f): xSetter is not idempotent under function composition, because xSetter returns None, so xSetter(xSetter(5)) sets x to None, whereas xSetter(5) sets x to 5.
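
Continuing the sketch above, the two notions of composition can be contrasted directly:

# Sequential composition (f; f): x is 5 after both calls; the second changes nothing.
xSetter(5)
xSetter(5)

# Function composition (f ∘ f): xSetter returns None, so the outer call sets x to None.
xSetter(xSetter(5))
print(x)  # None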

Example

One common demonstration of side-effect behaviour is the assignment operator in C++. An assignment expression evaluates to the assigned value and has the side effect of storing that value in its left operand. This allows for syntactically clean multiple assignment:

int i, j;
i = j = 3;

Because the operator is right-associative, this is equivalent to

int i, j;
i = (j = 3); // j = 3 returns 3, which then gets assigned to i

The result of assigning 3 to "j" is then assigned to "i". This presents a potential pitfall for novice programmers, who may confuse

while (b == 10) {} // Tests whether b equals 10

with

// The assignment expression evaluates to 10,
// which converts to true,
// so the loop condition always holds (and b is set to 10 on every test)
while (b = 10) {}

References

  1. Hughes, John. "Why Functional Programming Matters" (PDF). In Research Topics in Functional Programming, ed. D. Turner, Addison-Wesley, 1990, pp. 17–42.
  2. Collberg. CSc 520 Principles of Programming Languages, Department of Computer Science, University of Arizona.
  3. Felleisen, Matthias, et al. How to Design Programs. MIT Press.
  4. Haskell 98 Report, http://www.haskell.org.
  5. Peyton Jones, Simon; Wadler, Philip. "Imperative Functional Programming". Conference Record of the 20th Annual ACM Symposium on Principles of Programming Languages, pp. 71–84, 1993.