Power law








An example power-law graph illustrating the ranking of popularity: to the right is the long tail, and to the left are the few that dominate (also known as the 80–20 rule).


In statistics, a power law is a functional relationship between two quantities, where a relative change in one quantity results in a proportional relative change in the other quantity, independent of the initial size of those quantities: one quantity varies as a power of another. For instance, considering the area of a square in terms of the length of its side, if the length is doubled, the area is multiplied by a factor of four.[1]




Contents

  • 1 Empirical examples
  • 2 Properties
    • 2.1 Scale invariance
    • 2.2 Lack of well-defined average value
    • 2.3 Universality
  • 3 Power-law functions
    • 3.1 Examples
      • 3.1.1 Astronomy
      • 3.1.2 Physics
      • 3.1.3 Biology
      • 3.1.4 Meteorology
      • 3.1.5 General science
      • 3.1.6 Mathematics
      • 3.1.7 Economics
    • 3.2 Variants
      • 3.2.1 Broken power law
      • 3.2.2 Power law with exponential cutoff
      • 3.2.3 Curved power law
  • 4 Power-law probability distributions
    • 4.1 Graphical methods for identification
    • 4.2 Plotting power-law distributions
    • 4.3 Estimating the exponent from empirical data
      • 4.3.1 Maximum likelihood
      • 4.3.2 Kolmogorov–Smirnov estimation
      • 4.3.3 Two-point fitting method
      • 4.3.4 R function
  • 5 Validating power laws
  • 6 See also
  • 7 References
  • 8 External links

Empirical examples


The distributions of a wide variety of physical, biological, and man-made phenomena approximately follow a power law over a wide range of magnitudes: these include the sizes of craters on the Moon and of solar flares,[2] the foraging pattern of various species,[3] the sizes of activity patterns of neuronal populations,[4] the frequencies of words in most languages, frequencies of family names, the species richness in clades of organisms,[5] the sizes of power outages, criminal charges per convict, volcanic eruptions,[6] human judgements of stimulus intensity[7][8] and many other quantities.[9] Few empirical distributions fit a power law for all their values; instead, they typically follow a power law only in the tail.
Acoustic attenuation follows frequency power-laws within wide frequency bands for many complex media. Allometric scaling laws for relationships between biological variables are among the best known power-law functions in nature.



Properties



Scale invariance


One attribute of power laws is their scale invariance. Given a relation f(x) = ax^{−k}, scaling the argument x by a constant factor c causes only a proportionate scaling of the function itself. That is,

f(cx) = a(cx)^{−k} = c^{−k} f(x) ∝ f(x),

where ∝ denotes direct proportionality. That is, scaling by a constant c simply multiplies the original power-law relation by the constant c^{−k}. Thus, it follows that all power laws with a particular scaling exponent are equivalent up to constant factors, since each is simply a scaled version of the others. This behavior is what produces the linear relationship when logarithms are taken of both f(x) and x, and the straight line on the log–log plot is often called the signature of a power law. With real data, such straightness is a necessary, but not sufficient, condition for the data following a power-law relation. In fact, there are many ways to generate finite amounts of data that mimic this signature behavior, but, in their asymptotic limit, are not true power laws (e.g., if the generating process of some data follows a log-normal distribution).[citation needed] Thus, accurately fitting and validating power-law models is an active area of research in statistics; see below.
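The scale-invariance identity f(cx) = c^{−k} f(x) can be checked numerically; in the following Python sketch, the values of a, k, and c are arbitrary illustrative choices:

```python
def f(x, a=2.0, k=1.5):
    """Illustrative power law f(x) = a * x^(-k); a and k are arbitrary."""
    return a * x ** (-k)

c = 10.0  # arbitrary scale factor
for x in [0.5, 1.0, 3.0, 100.0]:
    lhs = f(c * x)             # scale the argument by c
    rhs = c ** (-1.5) * f(x)   # scale the output by c^(-k)
    assert abs(lhs - rhs) <= 1e-12 * abs(rhs)
print("f(cx) = c^(-k) f(x) holds up to floating-point error")
```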



Lack of well-defined average value


A power law x^{−k} has a well-defined mean over x ∈ [1, ∞) only if k > 2, and it has a finite variance only if k > 3; most identified power laws in nature have exponents such that the mean is well-defined but the variance is not, implying they are capable of black swan behavior.[10] This can be seen in the following thought experiment:[11] imagine a room with your friends and estimate the average monthly income in the room. Now imagine the world's richest person entering the room, with a monthly income of about US$1 billion. What happens to the average income in the room? Income is distributed according to a power law known as the Pareto distribution (for example, the net worth of Americans is distributed according to a power law with an exponent of 2).


On the one hand, this makes it incorrect to apply traditional statistics that are based on variance and standard deviation (such as regression analysis).[citation needed] On the other hand, this also allows for cost-efficient interventions.[11] For example, given that car exhaust is distributed according to a power-law among cars (very few cars contribute to most contamination) it would be sufficient to eliminate those very few cars from the road to reduce total exhaust substantially.[12]


The median does exist, however: for a power law x^{−k}, with exponent k > 1, it takes the value 2^{1/(k − 1)} x_min, where x_min is the minimum value for which the power law holds.[13]
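The median formula can be sanity-checked against the inverse CDF evaluated at 1/2, since for the normalized density p(x) = (k − 1) x_min^{k−1} x^{−k} the CDF is F(x) = 1 − (x/x_min)^{1−k}. A Python sketch (parameter values are illustrative):

```python
def median_power_law(k, x_min):
    """Closed-form median 2^(1/(k-1)) * x_min quoted in the text."""
    return 2 ** (1 / (k - 1)) * x_min

def inverse_cdf(q, k, x_min):
    """Inverse of the CDF F(x) = 1 - (x / x_min)^(1 - k)."""
    return x_min * (1 - q) ** (1 / (1 - k))

# The closed form agrees with the inverse CDF at q = 1/2 for several exponents.
for k in [1.5, 2.0, 3.5]:
    assert abs(median_power_law(k, 1.0) - inverse_cdf(0.5, k, 1.0)) < 1e-12
print("median formula agrees with the inverse CDF")
```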



Universality


The equivalence of power laws with a particular scaling exponent can have a deeper origin in the dynamical processes that generate the power-law relation. In physics, for example, phase transitions in thermodynamic systems are associated with the emergence of power-law distributions of certain quantities, whose exponents are referred to as the critical exponents of the system. Diverse systems with the same critical exponents—that is, which display identical scaling behavior as they approach criticality—can be shown, via renormalization group theory, to share the same fundamental dynamics. For instance, the behavior of water and CO2 at their boiling points falls in the same universality class because they have identical critical exponents.[citation needed][clarification needed] In fact, almost all material phase transitions are described by a small set of universality classes. Similar observations have been made, though not as comprehensively, for various self-organized critical systems, where the critical point of the system is an attractor. Formally, this sharing of dynamics is referred to as universality, and systems with precisely the same critical exponents are said to belong to the same universality class.



Power-law functions


Scientific interest in power-law relations stems partly from the ease with which certain general classes of mechanisms generate them.[14] The demonstration of a power-law relation in some data can point to specific kinds of mechanisms that might underlie the natural phenomenon in question, and can indicate a deep connection with other, seemingly unrelated systems;[15] see also universality above. The ubiquity of power-law relations in physics is partly due to dimensional constraints, while in complex systems, power laws are often thought to be signatures of hierarchy or of specific stochastic processes. A few notable examples of power laws are Pareto's law of income distribution, structural self-similarity of fractals, and scaling laws in biological systems. Research on the origins of power-law relations, and efforts to observe and validate them in the real world, is an active topic of research in many fields of science, including physics, computer science, linguistics, geophysics, neuroscience, sociology, economics and more.


However, much of the recent interest in power laws comes from the study of probability distributions: the distributions of a wide variety of quantities seem to follow the power-law form, at least in their upper tail (large events). The behavior of these large events connects these quantities to the theory of large deviations (also called extreme value theory), which considers the frequency of extremely rare events like stock market crashes and large natural disasters. It is primarily in the study of statistical distributions that the name "power law" is used.


In empirical contexts, an approximation to a power law o(x^k) often includes a deviation term ε, which can represent uncertainty in the observed values (perhaps measurement or sampling errors) or provide a simple way for observations to deviate from the power-law function (perhaps for stochastic reasons):

y = ax^k + ε.

Mathematically, a strict power law cannot be a probability distribution, but a distribution that is a truncated power function is possible: p(x) = Cx^{−α} for x > x_min, where the exponent α (Greek letter alpha, not to be confused with the scaling factor a used above) is greater than 1 (otherwise the tail has infinite area), the minimum value x_min is needed because otherwise the distribution has infinite area as x approaches 0, and the constant C is a scaling factor that ensures the total area is 1, as required of a probability distribution. More often one uses an asymptotic power law – one that is only true in the limit; see power-law probability distributions below for details. Typically the exponent falls in the range 2 < α < 3, though not always.[9]
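For the functional relation y = ax^k + ε, a linear least-squares fit on log-transformed data is a common first pass for estimating a and k (for probability distributions, by contrast, such regression is biased, as discussed later in the article). The following Python sketch uses synthetic data; all parameter values are illustrative:

```python
# Recover a and k from noisy samples of y = a * x^k + eps by linear least
# squares on log-transformed data: log y ~ log a + k * log x.
import math
import random

random.seed(0)
a_true, k_true = 3.0, 2.0
xs = [1 + 0.5 * i for i in range(1, 50)]
ys = [a_true * x ** k_true + random.gauss(0, 0.01) for x in xs]

lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
# Ordinary least-squares slope and intercept in log-log coordinates.
k_hat = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
a_hat = math.exp(my - k_hat * mx)
print(f"k estimate: {k_hat:.3f}, a estimate: {a_hat:.3f}")
```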



Examples


More than a hundred power-law distributions have been identified in physics (e.g. sandpile avalanches), biology (e.g. species extinction and body mass), and the social sciences (e.g. city sizes and income).[16] Among them are:



Astronomy



  • Kepler's third law

  • The initial mass function of stars

  • The differential energy spectrum of cosmic-ray nuclei



Physics



  • The Angstrom exponent in aerosol optics

  • The frequency-dependency of acoustic attenuation in complex media

  • Stevens's power law of psychophysics

  • The Stefan–Boltzmann law

  • The input-voltage–output-current curves of field-effect transistors and vacuum tubes approximate a square-law relationship, a factor in "tube sound".


  • Square-cube law (ratio of surface area to volume)

  • A 3/2-power law can be found in the plate characteristic curves of triodes.

  • The inverse-square laws of Newtonian gravity and electrostatics, as evidenced by the gravitational potential and electrostatic potential, respectively.


  • Self-organized criticality with a critical point as an attractor

  • Model of van der Waals force

  • Force and potential in simple harmonic motion

  • The M-sigma relation


  • Gamma correction relating light intensity with voltage


  • Behaviour near second-order phase transitions involving critical exponents

  • The safe operating area relating to maximum simultaneous current and voltage in power semiconductors.

  • Supercritical state of matter and supercritical fluids, such as supercritical exponents of heat capacity and viscosity.[17]

  • The Curie–von Schweidler law in dielectric responses to step DC voltage input.

  • The damping-force-versus-speed relation used in the design calculations of antiseismic dampers



Biology




  • Kleiber's law relating animal metabolism to size, and allometric laws in general

  • The two-thirds power law, relating speed to curvature in the human motor system.

  • Taylor's law, relating the mean and variance of population sizes in ecology

  • Neuronal avalanches[4]

  • The species richness (number of species) in clades of freshwater fishes[18]



Meteorology


  • The size of rain-shower cells,[19] energy dissipation in cyclones,[20] and the diameters of dust devils on Earth and Mars[21]


General science




  • Exponential growth and random observation (or killing)[22]

  • Progress through exponential growth and exponential diffusion of innovations[23]

  • Highly optimized tolerance

  • Proposed form of experience curve effects

  • Pink noise

  • The law of stream numbers, and the law of stream lengths (Horton's laws describing river systems)[24]

  • Populations of cities (Gibrat's law)[citation needed]


  • Bibliograms, and frequencies of words in a text (Zipf's law)[citation needed]


  • 90–9–1 principle on wikis (also referred to as the 1% rule)[citation needed]

  • Richardson's Law for the severity of violent conflicts (wars and terrorism)[25]

  • The relationship between a CPU's cache size and the number of cache misses follows the power law of cache misses.

  • The spectral density of the weight matrices of deep neural networks[26]



Mathematics



  • Fractals


  • Pareto distribution and the Pareto principle also called the "80–20 rule"


  • Zipf's law in corpus analysis and population distributions, among others, where the frequency of an item or event is inversely proportional to its frequency rank (i.e., the second most frequent item/event occurs half as often as the most frequent item, the third most frequent item/event occurs one third as often as the most frequent item, and so on).


  • Zeta distribution (discrete)


  • Yule–Simon distribution (discrete)


  • Student's t-distribution (continuous), of which the Cauchy distribution is a special case

  • Lotka's law

  • The scale-free network model



Economics


  • Distribution of artists by the average price of their artworks.[27]

  • Distribution of income in a market economy.

  • Distribution of degrees in banking networks.


Variants



Broken power law




Some models of the initial mass function use a broken power law; here Kroupa (2001) in red.


A broken power law is a piecewise function, consisting of two or more power laws, combined with a threshold. For example, with two power laws:[28]




f(x) ∝ x^{α₁} for x < x_th,

f(x) ∝ x_th^{α₁ − α₂} x^{α₂} for x > x_th.
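A minimal Python sketch of a two-segment broken power law (exponents and threshold are illustrative choices) shows how the prefactor x_th^{α₁ − α₂} makes the two pieces meet at the threshold:

```python
def broken_power_law(x, alpha1=-0.5, alpha2=-2.0, x_th=10.0):
    """x^alpha1 below x_th; x_th^(alpha1 - alpha2) * x^alpha2 above."""
    if x < x_th:
        return x ** alpha1
    return x_th ** (alpha1 - alpha2) * x ** alpha2

# The prefactor is chosen so that the two segments agree at x = x_th,
# giving a continuous function with a kink in the exponent.
just_below = broken_power_law(10.0 - 1e-9)
at_threshold = broken_power_law(10.0)
assert abs(just_below - at_threshold) < 1e-6
print("the two power-law segments meet continuously at x_th")
```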



Power law with exponential cutoff


A power law with an exponential cutoff is simply a power law multiplied by an exponential function:[29]


f(x) ∝ x^{α} e^{βx}.
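A minimal Python sketch (exponent and cutoff values are illustrative; β < 0 gives a decaying cutoff):

```python
import math

def power_law_with_cutoff(x, alpha=-1.5, beta=-0.1):
    """f(x) proportional to x^alpha * e^(beta x); beta < 0 cuts off the tail."""
    return x ** alpha * math.exp(beta * x)

# For small x the power-law factor dominates; for large x the exponential
# factor (with beta < 0) suppresses the tail below the pure power law.
assert power_law_with_cutoff(100.0) < 100.0 ** -1.5
print("exponential factor suppresses the tail at large x")
```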


Curved power law



f(x) ∝ x^{α + βx}[30]


Power-law probability distributions


In a looser sense, a power-law probability distribution is a distribution whose density function (or mass function in the discrete case) has the form, for large values of x,[31]

P(X > x) ∼ L(x) x^{−(α + 1)}

where α > 0, and L(x) is a slowly varying function, which is any function that satisfies lim_{x→∞} L(rx)/L(x) = 1 for any positive factor r. This property of L(x) follows directly from the requirement that p(x) be asymptotically scale invariant; thus, the form of L(x) only controls the shape and finite extent of the lower tail. For instance, if L(x) is the constant function, then we have a power law that holds for all values of x. In many cases, it is convenient to assume a lower bound x_min from which the law holds. Combining these two cases, and where x is a continuous variable, the power law has the form


p(x) = ((α − 1)/x_min) (x/x_min)^{−α},

where the pre-factor (α − 1)/x_min is the normalizing constant. We can now consider several properties of this distribution. For instance, its moments are given by

⟨x^m⟩ = ∫_{x_min}^∞ x^m p(x) dx = ((α − 1)/(α − 1 − m)) x_min^m

which is only well defined for m < α − 1. That is, all moments m ≥ α − 1 diverge: when α ≤ 2, the average and all higher-order moments are infinite; when 2 < α < 3, the mean exists, but the variance and higher-order moments are infinite, etc. For finite-size samples drawn from such a distribution, this behavior implies that the central moment estimators (like the mean and the variance) for diverging moments will never converge – as more data is accumulated, they continue to grow. These power-law probability distributions are also called Pareto-type distributions, distributions with Pareto tails, or distributions with regularly varying tails.
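The divergence of moments with m ≥ α − 1 can be seen in simulation. The following Python sketch (seed and sample sizes are arbitrary choices) draws Pareto-type samples by inverse-transform sampling:

```python
import random

random.seed(42)

def pareto_sample(n, alpha, x_min=1.0):
    # Inverse-transform sampling from p(x) = (alpha-1)/x_min * (x/x_min)^(-alpha):
    # the CDF is 1 - (x/x_min)^(1-alpha), so x = x_min * (1-u)^(-1/(alpha-1)).
    return [x_min * (1.0 - random.random()) ** (-1.0 / (alpha - 1))
            for _ in range(n)]

n = 200_000
finite = pareto_sample(n, alpha=3.5)  # alpha > 3: mean and variance both exist
heavy = pareto_sample(n, alpha=1.5)   # alpha <= 2: the mean diverges

mean_finite = sum(finite) / n  # settles near (alpha-1)/(alpha-2) = 5/3
mean_heavy = sum(heavy) / n    # dominated by rare huge draws; grows with n
print(f"alpha=3.5 sample mean: {mean_finite:.3f} (theory 5/3)")
print(f"alpha=1.5 sample mean: {mean_heavy:.1f} (no finite theoretical mean)")
```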


A modification, which does not satisfy the general form above, is a power law with an exponential cutoff:[9]

p(x) ∝ L(x) x^{−α} e^{−λx}.

In this distribution, the exponential decay term e^{−λx} eventually overwhelms the power-law behavior at very large values of x. This distribution does not scale and is thus not asymptotically a power law; however, it does approximately scale over a finite region before the cutoff. (Note that the pure form above is the subset of this family with λ = 0.) This distribution is a common alternative to the asymptotic power-law distribution because it naturally captures finite-size effects.


The Tweedie distributions are a family of statistical models characterized by closure under additive and reproductive convolution as well as under scale transformation. Consequently, these models all express a power-law relationship between the variance and the mean. These models have a fundamental role as foci of mathematical convergence similar to the role that the normal distribution has as a focus in the central limit theorem. This convergence effect explains why the variance-to-mean power law manifests so widely in natural processes, as with Taylor's law in ecology and with fluctuation scaling[32] in physics. It can also be shown that this variance-to-mean power law, when demonstrated by the method of expanding bins, implies the presence of 1/f noise and that 1/f noise can arise as a consequence of this Tweedie convergence effect.[33]



Graphical methods for identification


Although more sophisticated and robust methods have been proposed, the most frequently used graphical methods of identifying power-law probability distributions using random samples are Pareto quantile–quantile plots (or Pareto Q–Q plots),[citation needed] mean residual life plots[34][35] and log–log plots. Another, more robust graphical method uses bundles of residual quantile functions.[36] (Recall that power-law distributions are also called Pareto-type distributions.) It is assumed here that a random sample is obtained from a probability distribution, and that we want to know whether the tail of the distribution follows a power law (in other words, whether the distribution has a "Pareto tail"). Here, the random sample is called "the data".


Pareto Q–Q plots compare the quantiles of the log-transformed data to the corresponding quantiles of an exponential distribution with mean 1 (or to the quantiles of a standard Pareto distribution) by plotting the former versus the latter. If the resultant scatterplot suggests that the plotted points "asymptotically converge" to a straight line, then a power-law distribution should be suspected. A limitation of Pareto Q–Q plots is that they behave poorly when the tail index α (also called the Pareto index) is close to 0, because Pareto Q–Q plots are not designed to identify distributions with slowly varying tails.[36]


On the other hand, in its version for identifying power-law probability distributions, the mean residual life plot consists of first log-transforming the data, and then plotting the average of those log-transformed data that are higher than the i-th order statistic versus the i-th order statistic, for i = 1, ..., n, where n is the size of the random sample. If the resultant scatterplot suggests that the plotted points tend to "stabilize" about a horizontal straight line, then a power-law distribution should be suspected. Since the mean residual life plot is very sensitive to outliers (it is not robust), it usually produces plots that are difficult to interpret; for this reason, such plots are usually called Hill horror plots.[37]




A straight line on a log–log plot is necessary but insufficient evidence for a power law; the slope of the straight line corresponds to the power-law exponent.


Log–log plots are an alternative way of graphically examining the tail of a distribution using a random sample. Caution has to be exercised, however, as a straight line on a log–log plot is necessary but not sufficient evidence for a power-law relationship: many non-power-law distributions also appear as approximately straight lines on a log–log plot.[38][39] This method consists of plotting the logarithm of an estimator of the probability that a particular number of the distribution occurs versus the logarithm of that particular number. Usually, this estimator is the proportion of times that the number occurs in the data set. If the points in the plot tend to "converge" to a straight line for large numbers on the x axis, then the researcher concludes that the distribution has a power-law tail. Examples of the application of these types of plot have been published.[40] A disadvantage of these plots is that, in order for them to provide reliable results, they require huge amounts of data. In addition, they are appropriate only for discrete (or grouped) data.


Another graphical method for the identification of power-law probability distributions using random samples has been proposed.[36] This methodology consists of plotting a bundle for the log-transformed sample. Originally proposed as a tool to explore the existence of moments and the moment generation function using random samples, the bundle methodology is based on residual quantile functions (RQFs), also called residual percentile functions,[41][42][43][44][45][46][47] which provide a full characterization of the tail behavior of many well-known probability distributions, including power-law distributions, distributions with other types of heavy tails, and even non-heavy-tailed distributions. Bundle plots do not have the disadvantages of Pareto Q–Q plots, mean residual life plots and log–log plots mentioned above (they are robust to outliers, allow visually identifying power laws with small values of α{displaystyle alpha }alpha , and do not demand the collection of much data).[citation needed] In addition, other types of tail behavior can be identified using bundle plots.



Plotting power-law distributions


In general, power-law distributions are plotted on doubly logarithmic axes, which emphasizes the upper tail region. The most convenient way to do this is via the (complementary) cumulative distribution (cdf), P(x) = Pr(X > x),

P(x) = Pr(X > x) = C ∫_x^∞ p(X) dX = ((α − 1)/x_min^{−α + 1}) ∫_x^∞ X^{−α} dX = (x/x_min)^{−α + 1}.

Note that the cdf is also a power-law function, but with a smaller scaling exponent. For data, an equivalent form of the cdf is the rank-frequency approach, in which we first sort the n observed values in ascending order, and plot them against the vector [1, (n − 1)/n, (n − 2)/n, …, 1/n].
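The rank-frequency construction can be sketched in a few lines of Python (the data values below are placeholders, not from any cited source):

```python
# Build the empirical complementary CDF via the rank-frequency approach:
# sort ascending and pair with [1, (n-1)/n, ..., 1/n].
data = [5.1, 1.2, 9.7, 2.4, 1.8, 3.3, 14.0, 1.1]

xs = sorted(data)                       # ascending order
n = len(xs)
ccdf = [(n - i) / n for i in range(n)]  # 1, (n-1)/n, ..., 1/n

# Plotting log(xs) against log(ccdf) (e.g. with matplotlib's loglog) would
# show the straight-line signature if the tail is power-law-like.
for x, p in zip(xs, ccdf):
    print(x, p)
```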


Although it can be convenient to log-bin the data, or otherwise smooth the probability density (mass) function directly, these methods introduce an implicit bias in the representation of the data, and thus should be avoided.[48][49] The cdf, on the other hand, is more robust to (but not without) such biases in the data and preserves the linear signature on doubly logarithmic axes. Though a cdf representation is favored over that of the pdf when fitting a power law to the data with the linear least-squares method, it is not devoid of mathematical inaccuracy. Thus, when estimating the exponent of a power-law distribution, the maximum likelihood estimator is recommended.



Estimating the exponent from empirical data


There are many ways of estimating the value of the scaling exponent for a power-law tail; however, not all of them yield unbiased and consistent answers. Some of the most reliable techniques are often based on the method of maximum likelihood. Alternative methods are often based on making a linear regression on either the log–log probability, the log–log cumulative distribution function, or on log-binned data, but these approaches should be avoided as they can all lead to highly biased estimates of the scaling exponent.[9]



Maximum likelihood


For real-valued, independent and identically distributed data, we fit a power-law distribution of the form

p(x) = ((α − 1)/x_min) (x/x_min)^{−α}

to the data x ≥ x_min, where the coefficient (α − 1)/x_min is included to ensure that the distribution is normalized. Given a choice for x_min, the log-likelihood function becomes:


L(α) = log ∏_{i=1}^n ((α − 1)/x_min) (x_i/x_min)^{−α}

The maximum of this likelihood is found by differentiating with respect to the parameter α and setting the result equal to zero. Upon rearrangement, this yields the estimator equation:

α̂ = 1 + n [∑_{i=1}^n ln(x_i/x_min)]^{−1}

where {x_i} are the n data points x_i ≥ x_min.[2][50] This estimator exhibits a small finite-sample-size bias of order O(n^{−1}), which is small when n > 100. Further, the standard error of the estimate is σ = (α̂ − 1)/√n + O(n^{−1}). This estimator is equivalent to the popular[citation needed] Hill estimator from quantitative finance and extreme value theory.[citation needed]
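The continuous-case estimator is simple to implement and to check against synthetic Pareto data; in this Python sketch the seed, sample size, and true exponent are arbitrary choices:

```python
import math
import random

random.seed(7)

def alpha_mle(data, x_min):
    """Continuous-case MLE: alpha_hat = 1 + n / sum_i ln(x_i / x_min)."""
    tail = [x for x in data if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

def alpha_stderr(alpha_hat, n):
    """Leading-order standard error (alpha_hat - 1) / sqrt(n)."""
    return (alpha_hat - 1.0) / math.sqrt(n)

# Synthetic Pareto data with a known exponent, via inverse-transform sampling.
alpha_true, x_min, n = 2.5, 1.0, 100_000
data = [x_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
        for _ in range(n)]

a_hat = alpha_mle(data, x_min)
print(f"alpha_hat = {a_hat:.3f} +/- {alpha_stderr(a_hat, n):.3f} (true 2.5)")
```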


For a set of n integer-valued data points {x_i}, again where each x_i ≥ x_min, the maximum likelihood exponent is the solution to the transcendental equation

ζ′(α̂, x_min)/ζ(α̂, x_min) = −(1/n) ∑_{i=1}^n ln(x_i/x_min)

where ζ(α, x_min) is the incomplete zeta function. The uncertainty in this estimate follows the same formula as for the continuous equation. However, the two equations for α̂ are not equivalent, and the continuous version should not be applied to discrete data, nor vice versa.

Further, both of these estimators require the choice of x_min. For functions with a non-trivial L(x) function, choosing x_min too small produces a significant bias in α̂, while choosing it too large increases the uncertainty in α̂ and reduces the statistical power of our model. In general, the best choice of x_min depends strongly on the particular form of the lower tail, represented by L(x) above.


More about these methods, and the conditions under which they can be used, can be found in the review cited here.[9] This comprehensive review article also provides usable code (Matlab, Python, R and C++) for estimation and testing routines for power-law distributions.



Kolmogorov–Smirnov estimation


Another method for the estimation of the power-law exponent, which does not assume independent and identically distributed (iid) data, uses the minimization of the Kolmogorov–Smirnov statistic, D, between the cumulative distribution functions of the data and the power law:

α̂ = argmin_α D_α

with

D_α = max_x |P_emp(x) − P_α(x)|

where P_emp(x) and P_α(x) denote the cdfs of the data and the power law with exponent α, respectively. As this method does not assume iid data, it provides an alternative way to determine the power-law exponent for data sets in which the temporal correlation can not be ignored.[4]
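A minimal Python sketch of this approach, using a crude grid search over candidate exponents on synthetic data (all parameter values are illustrative):

```python
import random

random.seed(3)

# Synthetic power-law sample with a known exponent, by inverse-transform
# sampling; values are illustrative choices.
alpha_true, x_min, n = 2.5, 1.0, 20_000
data = sorted(x_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
              for _ in range(n))

def ks_distance(alpha):
    """D_alpha = max_x |P_emp(x) - P_alpha(x)| over the observed points."""
    d = 0.0
    for i, x in enumerate(data):
        model = 1.0 - (x / x_min) ** (-alpha + 1.0)  # power-law cdf
        emp = (i + 1) / n                            # empirical cdf at x
        d = max(d, abs(emp - model))
    return d

# Crude grid search standing in for a proper one-dimensional optimizer.
grid = [2.0 + 0.01 * j for j in range(101)]          # candidates 2.00 .. 3.00
alpha_hat = min(grid, key=ks_distance)
print(f"alpha_hat = {alpha_hat:.2f} (true 2.5)")
```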



Two-point fitting method


This criterion[clarification needed] can be applied for the estimation of the power-law exponent in the case of scale-free distributions and provides a more convergent estimate than the maximum likelihood method.[51] It has been applied to study probability distributions of fracture apertures.[51] In some contexts the probability distribution is described not by the cumulative distribution function but by the cumulative frequency of a property X, defined as the number of elements per meter (or area unit, second, etc.) for which X > x applies, where x is a variable real number. As an example,[51] the cumulative distribution of the fracture aperture, X, for a sample of N elements is defined as the number of fractures per meter having aperture greater than x. Use of cumulative frequency has some advantages, e.g. it allows one to put on the same diagram data gathered from sample lines of different lengths at different scales (e.g. from outcrop and from microscope).



R function


The following function estimates the exponent in R, and plots the data together with the fitted line on log–log axes.


    pwrdist <- function(u, ...) {
      # u is a vector of event counts, e.g. how many crimes
      # a given perpetrator was charged with by the police
      fx <- table(u)
      i <- as.numeric(names(fx))
      y <- rep(0, max(i))
      y[i] <- fx
      # quasi-Poisson regression of counts on log(rank) gives the exponent
      m0 <- glm(y ~ log(1:max(i)), family = quasipoisson())
      print(summary(m0))
      sub <- paste("s =", round(m0$coef[2], 2), "lambda =", sum(u), "/", length(u))
      plot(i, fx, log = "xy", xlab = "x", sub = sub, ylab = "counts", ...)
      grid()
      lines(1:max(i), fitted(m0), type = "b")
      return(m0)
    }


Validating power laws


Although power-law relations are attractive for many theoretical reasons, demonstrating that data do indeed follow a power-law relation requires more than simply fitting a particular model to the data.[23] This is important for understanding the mechanism that gives rise to the distribution: superficially similar distributions may arise for significantly different reasons, and different models yield different predictions, for instance when extrapolated beyond the observed range.


For example, log-normal distributions are often mistaken for power-law distributions:[52] on a log–log plot, a data set drawn from a lognormal distribution will be approximately linear for large values (corresponding to the upper tail of the lognormal being close to a power law), but for small values the lognormal will drop off significantly (bowing downwards), corresponding to the lower tail of the lognormal being small (there are very few small values, rather than the many small values of a power law).[citation needed]


For example, Gibrat's law of proportional growth produces distributions that are lognormal, although their log–log plots look linear over a limited range. An explanation is that although the logarithm of the lognormal density function is quadratic in log(x), yielding a "bowed" shape on a log–log plot, if the quadratic term is small relative to the linear term the result can appear almost linear; the lognormal behavior becomes visible only when the quadratic term dominates, which may require significantly more data. Therefore, a log–log plot that is slightly "bowed" downwards can reflect a log-normal distribution, not a power law.
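The quadratic-in-log(x) argument can be checked numerically. The sketch below (with illustrative parameters, not drawn from any cited data) evaluates the log-density of a wide lognormal over a limited range of log(x) and fits a straight line to it; the leftover curvature is small, which is exactly how a lognormal masquerades as a power law:

```python
import numpy as np

# Log-density of a lognormal in log coordinates is quadratic in log(x):
# log f(x) = -(log x - mu)^2 / (2 sigma^2) - log x - log(sigma * sqrt(2*pi))
mu, sigma = 0.0, 3.0                 # large sigma -> tiny quadratic coefficient 1/(2 sigma^2)
logx = np.linspace(0.5, 4.0, 100)    # a limited range of log(x)
logf = (-(logx - mu) ** 2 / (2 * sigma ** 2)
        - logx
        - np.log(sigma * np.sqrt(2 * np.pi)))

# Fit a straight line (an apparent power law) to the log-log curve
lin = np.polyfit(logx, logf, 1)
resid = logf - np.polyval(lin, logx)
# max |resid| is small over this range: the bow is barely visible,
# and the fitted negative slope looks like a power-law exponent
```

With σ = 3 the quadratic coefficient is 1/18, so over 3.5 units of log(x) the deviation from a straight line stays well under the scatter of typical empirical data.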


In general, many alternative functional forms can appear to follow a power-law form over some range.[53] Stumpf[54] proposed plotting the empirical cumulative distribution function in the log–log domain and claimed that a candidate power law should cover at least two orders of magnitude. Also, researchers usually have to face the problem of deciding whether or not a real-world probability distribution follows a power law. As a solution to this problem, Diaz[36] proposed a graphical methodology based on random samples that allows one to visually discern between different types of tail behavior. This methodology uses bundles of residual quantile functions, also called percentile residual life functions, which characterize many different types of distribution tails, including both heavy and non-heavy tails. However, Stumpf[54] argued that both a statistical and a theoretical justification are needed to support a power law in the underlying mechanism driving the data-generating process.


One method to validate a power-law relation tests many orthogonal predictions of a particular generative mechanism against data; simply fitting a power-law relation to a particular kind of data is not sufficient. As such, the validation of power-law claims remains a very active field of research in many areas of modern science.[9]



See also












References


Notes





  1. ^ Yaneer Bar-Yam. "Concepts: Power Law". New England Complex Systems Institute. Retrieved 18 August 2015.


  2. ^ ab Newman, M. E. J. (2005). "Power laws, Pareto distributions and Zipf's law". Contemporary Physics. 46 (5): 323–351. arXiv:cond-mat/0412004. Bibcode:2005ConPh..46..323N. doi:10.1080/00107510500052444.


  3. ^ Humphries NE, Queiroz N, Dyer JR, Pade NG, Musyl MK, Schaefer KM, Fuller DW, Brunnschweiler JM, Doyle TK, Houghton JD, Hays GC, Jones CS, Noble LR, Wearmouth VJ, Southall EJ, Sims DW (2010). "Environmental context explains Lévy and Brownian movement patterns of marine predators". Nature. 465 (7301): 1066–1069. Bibcode:2010Natur.465.1066H. doi:10.1038/nature09116. PMID 20531470.


  4. ^ abc Klaus A, Yu S, Plenz D (2011). Zochowski, Michal, ed. "Statistical Analyses Support Power Law Distributions Found in Neuronal Avalanches". PLoS ONE. 6 (5): e19779. Bibcode:2011PLoSO...619779K. doi:10.1371/journal.pone.0019779. PMC 3102672. PMID 21720544.


  5. ^ Albert, J. S.; Reis, R. E., eds. (2011). Historical Biogeography of Neotropical Freshwater Fishes. Berkeley: University of California Press.


  6. ^ Cannavò, Flavio; Nunnari, Giuseppe (2016-03-01). "On a Possible Unified Scaling Law for Volcanic Eruption Durations". Scientific Reports. 6: 22289. Bibcode:2016NatSR...622289C. doi:10.1038/srep22289. ISSN 2045-2322. PMC 4772095. PMID 26926425.


  7. ^ Stevens, S. S. (1957). On the psychophysical law. Psychological Review, 64, 153-181


  8. ^ Staddon, J. E. R. (1978). Theory of behavioral power functions. Psychological Review, 85, 305-320.


  9. ^ abcdef Clauset, Shalizi & Newman 2009.


  10. ^ Newman, M. E. J.; Reggiani, Aura; Nijkamp, Peter (2005). "Power laws, Pareto distributions and Zipf's law". Cities. 30 (2005): 323–351. arXiv:cond-mat/0412004. doi:10.1016/j.cities.2012.03.001.


  11. ^ ab 9na CEPAL Charlas Sobre Sistemas Complejos Sociales (CCSSCS): Leyes de potencias, https://www.youtube.com/watch?v=4uDSEs86xCI


  12. ^ Malcolm Gladwell (2006), Million-Dollar Murray; "Archived copy". Archived from the original on 2015-03-18. Retrieved 2015-06-14.


  13. ^ Newman, Mark EJ. "Power laws, Pareto distributions and Zipf's law." Contemporary physics 46.5 (2005): 323-351.


  14. ^ Sornette 2006.


  15. ^ Simon 1955.


  16. ^ Andriani, P.; McKelvey, B. (2007). "Beyond Gaussian averages: redirecting international business and management research toward extreme events and power laws". Journal of International Business Studies. 38 (7): 1212–1230. doi:10.1057/palgrave.jibs.8400324.


  17. ^ Bolmatov, D.; Brazhkin, V. V.; Trachenko, K. (2013). "Thermodynamic behaviour of supercritical matter". Nature Communications. 4: 2331. arXiv:1303.3153. Bibcode:2013NatCo...4E2331B. doi:10.1038/ncomms3331. PMID 23949085.


  18. ^ Albert, J. S., H. J. Bart, & R. E. Reis (2011). "Species richness & cladal diversity". In Albert, J. S., & R. E. Reis. Historical Biogeography of Neotropical Freshwater Fishes. Berkeley: University of California Press. pp. 89–104.


  19. ^ Machado L, Rossow, WB (1993). "Structural characteristics and radial properties of tropical cloud clusters". Monthly Weather Review. 121 (12): 3234–3260. doi:10.1175/1520-0493(1993)121<3234:scarpo>2.0.co;2.


  20. ^ Corral, A, Osso, A, Llebot, JE (2010). "Scaling of tropical cyclone dissipation". Nature Physics. 6 (9): 693–696. arXiv:0910.0054. Bibcode:2010NatPh...6..693C. doi:10.1038/nphys1725.


  21. ^ Lorenz RD (2009). "Power Law of Dust Devil Diameters on Earth and Mars". Icarus. 203 (2): 683–684. Bibcode:2009Icar..203..683L. doi:10.1016/j.icarus.2009.06.029.


  22. ^ Reed W. J.; Hughes B. D. "From gene families and genera to incomes and internet file sizes: Why power laws are so common in nature". Phys Rev E, 2002, 66, 067103


  23. ^ ab Hilbert, Martin (2013). "Scale-free power-laws as interaction between progress and diffusion". Complexity (Submitted manuscript). 19 (4): 56–65. Bibcode:2014Cmplx..19d..56H. doi:10.1002/cplx.21485.


  24. ^ "Horton's Laws – Example". www.engr.colostate.edu. Retrieved 2018-09-30.


  25. ^ Lewis Fry Richardson (1950). The Statistics of Deadly Quarrels.


  26. ^ Martin, Charles H.; Mahoney, Michael W. (2018-10-02). "Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning". arXiv:1810.01075 [cs.LG].


  27. ^ Etro, F.; Stepanova, E. (2018). "Power-laws in art". Physica A: Statistical Mechanics and its Applications. 506: 217–220. Bibcode:2018PhyA..506..217E. doi:10.1016/j.physa.2018.04.057.


  28. ^ Jóhannesson, Gudlaugur; Björnsson, Gunnlaugur; Gudmundsson, Einar H. (2006). "Afterglow Light Curves and Broken Power Laws: A Statistical Study". The Astrophysical Journal. 640 (1): L5. arXiv:astro-ph/0602219. Bibcode:2006ApJ...640L...5J. doi:10.1086/503294. Retrieved 2013-07-07.


  29. ^ Clauset, Aaron (2009). "POWER-LAW DISTRIBUTIONS IN EMPIRICAL DATA". SIAM Review. 51 (4): 661–703. arXiv:0706.1062. Bibcode:2009SIAMR..51..661C. doi:10.1137/070710111.


  30. ^ "Curved-power law". Retrieved 2013-07-07.


  31. ^ N. H. Bingham, C. M. Goldie, and J. L. Teugels, Regular variation. Cambridge University Press, 1989


  32. ^ Kendal, WS; Jørgensen, B (2011). "Taylor's power law and fluctuation scaling explained by a central-limit-like convergence". Phys. Rev. E. 83 (6): 066115. Bibcode:2011PhRvE..83f6115K. doi:10.1103/physreve.83.066115. PMID 21797449.


  33. ^ Kendal, WS; Jørgensen, BR (2011). "Tweedie convergence: a mathematical basis for Taylor's power law, 1/f noise and multifractality". Phys. Rev. E. 84 (6): 066120. Bibcode:2011PhRvE..84f6120K. doi:10.1103/physreve.84.066120. PMID 22304168.


  34. ^ Beirlant, J., Teugels, J. L., Vynckier, P. (1996a) Practical Analysis of Extreme Values, Leuven: Leuven University Press


  35. ^ Coles, S. (2001) An introduction to statistical modeling of extreme values. Springer-Verlag, London.


  36. ^ abcd Diaz, F. J. (1999). "Identifying Tail Behavior by Means of Residual Quantile Functions". Journal of Computational and Graphical Statistics. 8 (3): 493–509. doi:10.2307/1390871. JSTOR 1390871.


  37. ^ Resnick, S. I. (1997). "Heavy Tail Modeling and Teletraffic Data". The Annals of Statistics. 25 (5): 1805–1869. doi:10.1214/aos/1069362376.


  38. ^ "So You Think You Have a Power Law — Well Isn't That Special?". bactra.org. Retrieved 27 March 2018.


  39. ^ Clauset, Aaron; Shalizi, Cosma Rohilla; Newman, M. E. J. (4 November 2009). "Power-law distributions in empirical data". SIAM Review. 51 (4): 661–703. arXiv:0706.1062. doi:10.1137/070710111.


  40. ^ Jeong, H; Tombor, B. Albert; Oltvai, Z.N.; Barabasi, A.-L. (2000). "The large-scale organization of metabolic networks". Nature. 407 (6804): 651–654. arXiv:cond-mat/0010278. Bibcode:2000Natur.407..651J. doi:10.1038/35036627. PMID 11034217.


  41. ^ Arnold, B. C.; Brockett, P. L. (1983). "When does the βth percentile residual life function determine the distribution?". Operations Research. 31 (2): 391–396. doi:10.1287/opre.31.2.391.


  42. ^ Joe, H.; Proschan, F. (1984). "Percentile residual life functions". Operations Research. 32 (3): 668–678. doi:10.1287/opre.32.3.668.


  43. ^ Joe, H. (1985), "Characterizations of life distributions from percentile residual lifetimes", Ann. Inst. Statist. Math. 37, Part A, 165–172.


  44. ^ Csorgo, S.; Viharos, L. (1992). "Confidence bands for percentile residual lifetimes". Journal of Statistical Planning and Inference. 30 (3): 327–337. doi:10.1016/0378-3758(92)90159-p.


  45. ^ Schmittlein, D. C.; Morrison, D. G. (1981). "The median residual lifetime: A characterization theorem and an application". Operations Research. 29 (2): 392–399. doi:10.1287/opre.29.2.392.


  46. ^ Morrison, D. G.; Schmittlein, D. C. (1980). "Jobs, strikes, and wars: Probability models for duration". Organizational Behavior and Human Performance. 25 (2): 224–251. doi:10.1016/0030-5073(80)90065-3.


  47. ^ Gerchak, Y (1984). "Decreasing failure rates and related issues in the social sciences". Operations Research. 32 (3): 537–546. doi:10.1287/opre.32.3.537.


  48. ^ Bauke, H. (2007). "Parameter estimation for power-law distributions by maximum likelihood methods". The European Physical Journal. 58 (2): 167–173. arXiv:0704.1867. doi:10.1140/epjb/e2007-00219-y.


  49. ^ Clauset, A., Shalizi, C. R., Newman, M. E. J. (2009). "Power-Law Distributions in Empirical Data". SIAM Review. 51 (4): 661–703. doi:10.1137/070710111.


  50. ^ Hall, P. (1982). "On Some Simple Estimates of an Exponent of Regular Variation". Journal of the Royal Statistical Society, Series B. 44 (1): 37–42. JSTOR 2984706.


  51. ^ abc Guerriero, V. (2012). "Power Law Distribution: Method of Multi-scale Inferential Statistics". Journal of Modern Mathematics Frontier (JMMF). 1: 21–28.


  52. ^ Mitzenmacher 2004.


  53. ^ Laherrère & Sornette 1998.


  54. ^ ab Stumpf, M.P.H. (2012). "Critical Truths about Power Laws". Science. 335 (6069): 665–666. Bibcode:2012Sci...335..665S. doi:10.1126/science.1216142. PMID 22323807.



Bibliography



  • Bak, Per (1997) How nature works, Oxford University Press
    ISBN 0-19-850164-1


  • Clauset, A.; Shalizi, C. R.; Newman, M. E. J. (2009). "Power-Law Distributions in Empirical Data". SIAM Review. 51 (4): 661–703. arXiv:0706.1062. Bibcode:2009SIAMR..51..661C. doi:10.1137/070710111.


  • Laherrère, J.; Sornette, D. (1998). "Stretched exponential distributions in nature and economy: "fat tails" with characteristic scales". The European Physical Journal B. 2 (4): 525–539. arXiv:cond-mat/9801293. Bibcode:1998EPJB....2..525L. doi:10.1007/s100510050276.


  • Mitzenmacher, M. (2004). "A Brief History of Generative Models for Power Law and Lognormal Distributions" (PDF). Internet Mathematics. 1 (2): 226–251. doi:10.1080/15427951.2004.10129088.

  • Alexander Saichev, Yannick Malevergne and Didier Sornette (2009) Theory of Zipf's law and beyond, Lecture Notes in Economics and Mathematical Systems, Volume 632, Springer (November 2009),
    ISBN 978-3-642-02945-5


  • Simon, H. A. (1955). "On a Class of Skew Distribution Functions". Biometrika. 42 (3/4): 425–440. doi:10.2307/2333389. JSTOR 2333389.


  • Sornette, Didier (2006). Critical Phenomena in Natural Sciences: Chaos, Fractals, Self-organization and Disorder: Concepts and Tools. Springer Series in Synergetics (2nd ed.). Heidelberg: Springer. ISBN 978-3-540-30882-9.

  • Mark Buchanan (2000) Ubiquity, Weidenfeld & Nicolson
    ISBN 0-297-64376-2


  • Stumpf, M.P.H.; Porter, M.A. (2012). "Critical Truths about Power Laws". Science. 335 (6069): 665–6. Bibcode:2012Sci...335..665S. doi:10.1126/science.1216142. PMID 22323807.



External links



  • Zipf's law

  • Zipf, Power-laws, and Pareto – a ranking tutorial

  • Stream Morphometry and Horton's Laws


  • Clay Shirky on Institutions & Collaboration: Power law in relation to the internet-based social networks


  • Clay Shirky on Power Laws, Weblogs, and Inequality


  • "How the Finance Gurus Get Risk All Wrong" by Benoit Mandelbrot & Nassim Nicholas Taleb. Fortune, July 11, 2005.


  • "Million-dollar Murray": power-law distributions in homelessness and other social problems; by Malcolm Gladwell. The New Yorker, February 13, 2006.

  • Benoit Mandelbrot & Richard Hudson: The Misbehaviour of Markets (2004)

  • Philip Ball: Critical Mass: How one thing leads to another (2005)


  • Tyranny of the Power Law from The Econophysics Blog


  • So You Think You Have a Power Law – Well Isn't That Special? from Three-Toed Sloth, the blog of Cosma Shalizi, Professor of Statistics at Carnegie-Mellon University.


  • Simple MATLAB script which bins data to illustrate power-law distributions (if any) in the data.


  • The Erdős Webgraph Server visualizes the distribution of the degrees of the webgraph on the download page.


