Talk:Probability density function


Inconsistency

This article is really confusing:

First it says

the density of X with respect to a reference measure ...

and then

Note that it is not possible to define a density with reference to an arbitrary measure (e.g. one can't choose the counting measure as a reference for a continuous random variable).

Well, what keeps you from choosing the measure like this? What keeps you from defining it? The question is whether there exists an object that satisfies the definition or not. Let us continue:

Not every probability distribution has a density function: the distributions of discrete random variables do not.

Well, without reference to the fact that we agreed to choose the Lebesgue measure on the reals, this sentence is just wrong. As noted above, the probability mass function x ↦ P(X = x) *is* the density function of a discrete random variable with respect to the counting measure.

Suggested edits:

  1. Make the structure like in every usual math book: first paragraph stays as is, second paragraph is the formal, general, unambiguous definition and then we specialize to the reals and give some examples. Why should one 'generalize' the definition to the 'measure theoretic definition of probability'? What other definition of 'probability' is there apart from the 'measure theoretic' one?
  2. 'Note that it is not possible to define a density with reference to an arbitrary measure (e.g. one can't choose the counting measure as a reference for a continuous random variable).' is deleted and something like the following is added: 'Note that it is possible for a random variable to have a density with respect to one measure while having no density with respect to some other measure on the same space. Example: for a random variable X with a finite image, X has a density with respect to its pushforward measure but no density with respect to the usual Lebesgue measure on the reals. So it is important to be precise about the measure in question when one talks about densities.' — Preceding unsigned comment added by Fryasdf (talkcontribs) 09:04, 5 June 2015 (UTC)[reply]

Also, the rest is not really 'clean'. Everywhere one can read 'it is possible to define density for ...' *NO!* We *have* already defined it. It should read 'In this case, the density is given by ...'. — Preceding unsigned comment added by Fryasdf (talkcontribs) 09:40, 5 June 2015 (UTC)[reply]

Well, for a mathematician, you are right (except for one mistake, see below). The problem is that most users of this notion are far from being mathematicians, have no idea of measure theory, and are not interested in ever learning such math.
And here is the mistake: the density with respect to the pushforward measure is just 1, anyway. Boris Tsirelson (talk) 07:12, 6 June 2015 (UTC)[reply]
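For reference, the standard fact behind this exchange, spelled out with the counting-measure example mentioned above: if X takes values in a countable set S with p_s = P(X = s), then, taking μ to be the counting measure on S,

    P(X \in A) = \sum_{s \in A} p_s = \int_A f \, d\mu \quad \text{with } f(s) = p_s,

so the probability mass function is a density of X with respect to μ; the same X has no density with respect to Lebesgue measure on \mathbb{R}, and with respect to its own distribution (the pushforward measure) the density is the constant 1, as noted just above.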

Continuous univariate distributions

I got two problems with this section.

The first is that I don't see (because it is not explained) why F'(x) = f(x) should hold, i.e. why F is differentiable in the first place.

Second, the title of the section is not explained. What is a univariate distribution?

Please fix this. Quiet photon (talk) 17:01, 2 March 2010 (UTC)[reply]

Yeah, it shouldn't be true in general that F'(x) = f(x). If f is continuous at x then it will hold by the Fundamental Theorem of Calculus, but even if F is differentiable it won't necessarily be true, since the pdf is only unique almost everywhere. Not defining a univariate distribution seems okay though, since there is a link to the page in the section. --Theodds (talk) 18:52, 29 June 2010 (UTC)[reply]
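For reference, the relation under discussion, with its hypotheses made explicit (standard calculus, not a statement about the article's current wording): if X has density f and cumulative distribution function

    F(x) = P(X \le x) = \int_{-\infty}^{x} f(t)\, dt,

then F'(x) = f(x) at every point x where f is continuous (Fundamental Theorem of Calculus). Since a density is only determined up to a set of Lebesgue measure zero, redefining f at a single point leaves the distribution unchanged but breaks the pointwise identity at that point.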

"Intuitive"[edit]

Intuitively, if a probability distribution has density f(x), then the infinitesimal interval [x, x + dx] has probability f(x) dx.

Arrgh, now you tell me this has something to do with ∫_A f(x) dx?

Certainly. An integral is intuitively thought of as the sum of infinitely many infinitely small quantities ƒ(x) dx, each equal to the area below the graph of ƒ above the interval from x to x + dx, as x runs through the set of all numbers in the set A. That applies to integrals generally, not just those in probability theory. Michael Hardy (talk) 00:50, 12 June 2008 (UTC)[reply]
I completely agree that this view is not intuitive if one is used to the proper definition, because then dx is just some syntactical delimiter capturing the variable x, not some kind of second variable. I recognize many people do find your view intuitive, but I feel it's kind of like making up an interpretation of the symbols distinct from their actual definition. Thus there should at least be a link explaining this viewpoint (can this stuff be made rigorous with differentials?). Suprahili (talk) 22:14, 19 May 2009 (UTC)[reply]

dx is not just a syntactical delimiter. It does in fact bind a variable, but consider the situation where f(x) is in meters per second and x (and so also dx) is in seconds. Multiply them and get meters. That's not just syntactical delimiting. Moreover, the "proper definitions" were obviously not what Leibniz had in mind when he introduced this notation in the 17th century. The intuitive explanation given is in line with the way Leibniz did it. And it is useful. Was Leibniz "making up an interpretation distinct from the actual definition", when in fact the "actual definition" came two centuries later in the 1800s? Suprahili, have you ever heard that the world existed before the 21st century? Michael Hardy (talk) 00:48, 20 May 2009 (UTC)[reply]
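A small numerical illustration of the ƒ(x) dx reading above (a Python sketch assuming the standard normal density; the helper names phi and Phi are purely illustrative):

import math

def phi(x):
    """Standard normal probability density function."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

x = 1.0
for dx in (0.1, 0.01, 0.001):
    exact = Phi(x + dx) - Phi(x)   # P(x < X <= x + dx)
    approx = phi(x) * dx           # f(x) dx
    print(f"dx={dx}: interval probability={exact:.6f}, f(x) dx={approx:.6f}")

The two columns agree to more and more digits as dx shrinks, which is exactly the Leibniz-style reading defended above.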

pdf

It would be nice to have a picture of a PDF, of, say, the normal distribution. -iwakura
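In the meantime, a minimal sketch of how such a figure could be produced (assuming numpy and matplotlib are available; not tied to any particular image in the article):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4, 4, 400)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # density of the standard normal N(0,1)

plt.plot(x, pdf)
plt.xlabel("x")
plt.ylabel("f(x)")
plt.title("Probability density function of the standard normal distribution")
plt.show()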


Hi all:

Can someone help me in computing

Indefinite Integral (f(x)^((1/r)+1)) dx where r>=1

in terms of Indefinite Integral (f(x)) dx

Here, f(x) is an arbitrary probability density function.

Partho

What is a multimodal pdf? The article should touch on this topic. - rodrigob.

Simple English

In simple English: the probability density function is any function f(x) that describes the probability density in terms of the input variable x, subject to two further conditions:

  • f(x) is greater than or equal to zero for all values of x
  • The total area under the graph is 1; refer to the equation below.

The actual probability can then be calculated by taking the integral of the function f(x) over the integration interval of the input variable x.

For example: the variable x being within the interval 4.3 < x < 7.8 would have the actual probability of

P(4.3 < x < 7.8) = \int_{4.3}^{7.8} f(x)\, dx.

And why oh why say "However special care should be taken around this term, since it is not standard among probabilists and statisticians and in other sources “probability distribution function” may be used when the probability distribution is defined as a function over general sets of values, or it may refer to the cumulative distribution function, or it may be a probability mass function rather than the density." That is an awful sentence. And a probabilist is a statistician. Worik (talk) 02:25, 26 April 2010 (UTC)[reply]
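A small numerical sketch of the example above (the exponential density with rate 0.2 is a purely hypothetical choice, used only because its integral has a closed form):

import math

# Hypothetical density: f(x) = 0.2 * exp(-0.2 * x) for x >= 0 (exponential, rate 0.2).
# It is nonnegative and its total area over [0, infinity) is 1, as required above.
rate = 0.2
a, b = 4.3, 7.8

# P(a < x < b) = integral of f over [a, b] = exp(-rate*a) - exp(-rate*b)
prob = math.exp(-rate * a) - math.exp(-rate * b)
print(prob)   # about 0.213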

what is a probability density?

Given that it is a common mistake to interpret the y-axis of the probability density function as representing probability (as is often done with the normal curve), it would be helpful to have a common-sense description of what probability *density* is. It's clearly related to actual probability, but does it have a "real-world" correlate? How should "density" be interpreted? --anon

Answer: If "probability" is equivalent to "distance travelled" then "probability density" is equivalent to "speed". So the "probability density function of input variable x " is equivalent to "speed function of input variable t" where t stands for time. -ohanian

One more answer: If you have a random variable, then it can take many values. But, most of the time those values are not equally likely; some of them occur more often than others. So, if a value of this variable is more likely, the density of that variable is higher at that value. If a certain value does not occur at all, the density at that value is zero. This explanation is not at all rigorous, but it might drive the point home. Oleg Alexandrov 15:03, 2 Jun 2005 (UTC)

A simple example would help: failure probability vs. failure rate. Any device fails after some time (failure probability == 1, which is the integral from 0 to infinity), but the failure rate is high in the beginning (infant mortality) and late (as the device wears out), but low in the middle during its useful lifetime, forming the famous bathtub curve. Ralf-Peter 20:44, 20 March 2006 (UTC)[reply]

The only description that made any sense to me was the paragraph beginning "In the field of statistical physics". I gather that somehow while the y-axis values do not represent probabilities of corresponding x-axis values, the y-axis values do represent probabilities of the interval from the corresponding x-axis value to that value plus an infinitely small amount. While this statement is easy to understand in the reading of it, I'm still puzzled about how an infinitely small amount can make any difference if we limit ourselves to the real number system. 207.189.230.42 05:38, 12 October 2007 (UTC)

OTHER Answer: A density is "one divided by something", that is, the inverse of something. Since dx has the physical units of the random variable in question, the pdf f(x) has units that are the inverse of the random variable's physical units. As they multiply inside the integral, we obtain a dimensionless quantity (whose integrated value lies between zero and one, depending on the integration limits of the random variable). From the shape of f(x) we see which values of the random variable have more density of probability than others, but the probability of any single point (any single value of the random variable) is always zero for a continuous pdf; the proof is integration with both limits at the same point. Probability is the integral of the pdf. Thinking of the normal pdf N(0,1) of a random variable in meters: at the value zero meters it has 0.4/meter of density and at the value one meter it has 0.243/meter of density. That is 0.4/0.243 = 1.65 times more density at the value zero meters than at the value one meter of that random variable, but the probability at zero meters is equal to the probability at one meter, both equal to zero (dimensionless). [1] Rferreira1204 197.242.10.103 (talk) 15:53, 30 May 2013 (UTC)[reply]

None of these answers help me to appreciate the literal understanding. There is a Khan Academy video that gives a very nice example using the pdf for rainfall, and helps explain why the y-axis doesn't represent a probability for a single point, but for a range, and why, unlike a cdf, the y-axis doesn't have to be 1 at the top. https://www.khanacademy.org/math/probability/random-variables-topic/random_variables_prob_dist/v/probability-density-functions In particular, his reminder that the "area" of the range is the probability. Reference to the discrete integral usually helps people understand what this concept means. Raiteria (talk) 08:25, 14 September 2013 (UTC)[reply]
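A small numerical companion to the answers above (a Python sketch using the standard normal density; phi and Phi are illustrative helper names):

import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

print(phi(0.0))                # density at 0: about 0.399 (per meter, if x is in meters)
print(phi(1.0))                # density at 1: about 0.242
print(phi(0.0) / phi(1.0))     # the ratio of roughly 1.65 mentioned above
print(Phi(0.0) - Phi(0.0))     # probability of the single point 0: exactly 0
print(Phi(0.05) - Phi(-0.05))  # probability of a small interval around 0: about 0.0399

The density at a point is a finite number with units 1/meter (if x is in meters), while the probability of hitting any exact point is zero; only intervals carry probability.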

References

Probability Density

I don't have time to correct it now, but the page Probability Density links to Probability Amplitude, which is about quantum mechanics. I think that should be a disambiguation page. --anon

I redirected Probability Density to Probability density, which I made into a disambig. Oleg Alexandrov 18:04, 18 August 2005 (UTC)[reply]

Self Inconsistency

The article begins by saying that a random variable has a probability density function only when its distribution function is absolutely continuous, but then it leaps into the PDF of discrete distributions, using the Dirac delta "function"! I don't think this is consistent :-) Albmont 18:14, 9 November 2006 (UTC)[reply]

I believe you are confusing absolute continuity of a function with absolute continuity of a measure. The article is in reference to the latter. See a probability and measure text such as Billingsley for a detailed treatment of the subject. Essentially a distribution is called absolutely continuous (AC) if there is a positive bounded measure which is absolutely continuous w.r.t. the probability measure (and thus, by the Radon-Nikodym Theorem, ensures us a density exists). Another reference is the beginning of chapter 10 in Resnick's "A Probability Path", which is a bit more readable than Billingsley. Ageofmech (talk) 00:44, 18 March 2013 (UTC)[reply]

marginal density function

If we know the joint density function f(x,y), how do we get the marginal density functions fX(x) & fY(y)? Jackzhp 01:37, 1 December 2006 (UTC)[reply]

I think you integrate over y to get fX and integrate over x to get fY. But I am not sure. Ask Michael Hardy, he should know. Oleg Alexandrov (talk) 16:17, 1 December 2006 (UTC)[reply]
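For reference, the standard formulas in the notation of the question: if X and Y have joint density f(x,y), each marginal is obtained by integrating out the other variable,

    f_X(x) = \int_{-\infty}^{\infty} f(x, y)\, dy, \qquad f_Y(y) = \int_{-\infty}^{\infty} f(x, y)\, dx.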

addition

We know fX(x) for X, and fY(y) for Y; X & Y are independent, Z = X + Y. Then what is the density function for Z? And Z2 = kX + Y. Thanks. Jackzhp 18:00, 1 December 2006 (UTC)[reply]

It's the convolution of the two densities. Michael Hardy 19:44, 5 December 2006 (UTC)[reply]

I've added a short section on this. Michael Hardy 19:59, 5 December 2006 (UTC)[reply]
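For reference, the formula in question (the standard result for independent X and Y with densities f_X and f_Y):

    f_Z(z) = (f_X * f_Y)(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(z - x)\, dx,

and, by the same substitution argument, for Z_2 = kX + Y with k \ne 0,

    f_{Z_2}(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(z - kx)\, dx.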

Standard deviation

It would be helpful to add the standard deviation formula for completeness Gp4rts 19:02, 5 December 2006 (UTC)[reply]

I've added this at the bottom (the variance, not the SD, but close). Michael Hardy 19:44, 5 December 2006 (UTC)[reply]
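For completeness, the formulas referred to here (standard definitions, assuming the integrals exist): with mean \mu = \int_{-\infty}^{\infty} x f(x)\, dx,

    \operatorname{Var}(X) = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\, dx, \qquad \sigma = \sqrt{\operatorname{Var}(X)}.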

Reals?

Is it true that all continuous random variables have to take on real values? What about a random variable that represents a colour? Can it not have a probability density function over R^3? Perhaps we could alter the formal definition to talk about ranges instead of intervals? MisterSheik 21:34, 27 February 2007 (UTC)[reply]

That is covered in the section "Probability function associated to multiple variables". -Icek 05:47, 17 April 2007 (UTC)[reply]

By definition a random variable has real values as its range. It maps from the sample space of the probability space to the reals. However, random elements of a metric space can map to a set besides the reals. (Also, by definition an n-dimensional random vector maps to n-dimensional R-space). These are just naming conventions, but they are pretty universally used throughout texts. Ageofmech (talk) 00:49, 18 March 2013 (UTC)[reply]

Generality

I do not think the "formal" definition of a PDF given in the beginning of the article is the widely accepted general definition.

Given elements x in an abstract set Ω equipped with a measure μ (usually but not necessarily the Lebesgue measure), one can define a PDF f over Ω so that

P(X ∈ A) = ∫_A f dμ for all measurable subsets A ⊆ Ω.

Ω is not necessarily a subset of R or even R^n. Any measure can also be used as reference; it is for instance perfectly possible to define a PDF with respect to the area measure but spanned by polar coordinates.

I think it's wrong and misleading to present the case of a PDF over R with respect to the Lebesgue measure as the "definition" of a PDF. Winterfors (talk) 23:54, 14 February 2008 (UTC)[reply]

Well put. This article needs a complete rewrite; what's defined as a density here is just a special case for univariate random variables. And densities can't be "informally" thought of as a smoothed histogram. Etcetera. And while we're at it, cut out all the peripheral stuff, like transformations and expectations of random variables, which belong on their own pages. Anyone with a thick skin?
While I'm ranting... why are nearly all stats articles on wikipedia such badly written junk? Compared to math articles, for example, which tend to be short, precise and to the point. --Zaqrfv (talk) 03:57, 1 September 2008 (UTC)[reply]
Because when statisticians retire they spend their time refuting global warming instead of contributing to Wikipedia? (I'm a retired computer science logician, and spent a long time writing Introduction to Boolean algebra instead of (ok, as well as) ranting about Boolean logic.) --Vaughan Pratt (talk) 05:01, 13 January 2011 (UTC)[reply]

Uses of PDF vs. distribution function

Would it be useful to explain in simple terms the use of this function, and contrast that with cumulative distribution functions? I gather that one is the integral of the other but beyond that I am having trouble. Boris B (talk) 03:29, 13 April 2008 (UTC)[reply]

Math mode versus html encodings

I've replaced most of the html or wiki encodings of equations with explicit calls to LaTeX math mode. This looks better if the preferences are set to "always png", otherwise the font changes between some of the expressions. I didn't fix all of the variable or functions that stood alone, but may do so. One that I wasn't sure how to fix was "ƒf" in Further details. How should that render? --Autopilot (talk) 21:40, 28 December 2008 (UTC)[reply]


Loosely, one may think of f(x) dx as...

Loosely, one may think of f(x) dx as the probability that a random variable whose probability density function is ƒ is in the interval from x to x + dx, where dx is an infinitely small increment.

I've deleted the sentence above because I think it is confusing. f(x) dx would go to zero as dx went to an infinitely small number. The probability of the value being in an infinitely small interval is zero (if you are using a PDF with finite values). Richard Giuly (talk) 18:35, 18 February 2009 (UTC)[reply]

I find that deletion unfortunate. The probability of being between x and x + dx is infinitely small, and f(x) dx is also. When BOTH approach zero, their ratio may approach a finite nonzero value. This is standard use of the dx notation in calculus, except that the way calculus has been taught to freshmen in recent years has been influenced by undue squeamishness. Michael Hardy (talk) 00:43, 20 May 2009 (UTC)[reply]
I also find it unfortunate. This is how scientists and mathematicians actually think of PDFs. If the infinitesimal nature was bothersome to you, a finite increment could have been discussed. The word "Loosely" covers the technicality of the probability going to zero and should have covered that concern. Jason Quinn (talk) 23:03, 22 April 2011 (UTC)[reply]
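One way to state the "loose" reading precisely (a standard restatement, not a quotation from the article): at every point where the cumulative distribution function F is differentiable,

    f(x) = \lim_{\Delta x \to 0^{+}} \frac{P(x < X \le x + \Delta x)}{\Delta x} = F'(x),

so f(x) Δx approximates the probability of the small interval even though that probability itself tends to zero.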

Terminology

Are "probability distribution function" and "probability density function" synonymous? To me a "probability distribution function" is the distribution function, not the probability density function. 131.175.127.242 (talk) 08:52, 21 October 2009 (UTC)[reply]

See Probability distribution function ... this seems correct in saying that different sources use different meanings. Melcombe (talk) 09:06, 21 October 2009 (UTC)[reply]

Probability distribution function

Why would somebody put incorrect information into the lead and then insist on keeping it there? I'm talking about the “… often referred to as the probability distribution function” piece. First of all, it's clearly not often. Second, the majority of textbooks which do employ that term use it in the “cdf” meaning. In fact, I cannot find any published source which would have defined probability distribution function as density, although some of them seem to use the term in the “density” sense without ever defining it.

The confusion seems to originate from physics, where the term “distribution function” was used by Maxwell to describe the probability density function of gas particles multiplied by the physical density of those particles.

Stating in the very first sentence that “probability density function” is the same as “probability distribution function” is at least misleading.  … stpasha »  20:27, 26 November 2009 (UTC)[reply]

That's easy. You gave the justification for deleting it that it is "deprecated" ... which is no reason for deleting something from an encyclopedia. After all, the use of the term was given a citation as part of what you deleted. If an alternative term for "probability density function" has been used then it can/should certainly be included. Just because you don't like the term is no reason for deleting it. Reverting the deletion was a quick answer to an ill-considered edit, in exactly the same spirit as the deletion. Of course, discussion of the term could be moved later on and given proper context if necessary. On following the citation given, I see that it does not actually seem to lead to anything directly related to equivalence to "probability density function". As to established usage, the Oxford Dictionary of Statistical Terms says: "It is customary, but not the universal practice, to use 'probability distribution' to denote the probability mass or probability density of either discontinuous or continuous variable and some such expression as 'cumulative probability distribution' to denote the probability of values up to and including the argument x." Melcombe (talk) 10:57, 27 November 2009 (UTC)[reply]
And here I thought that not liking something is a sufficient reason to delete it :) As for the term “probability distribution” in Oxford Dictionary — guess what — we have a separate article for it, quite distinct from both the pdf and cdf.  … stpasha »  20:05, 27 November 2009 (UTC)[reply]

Codomain of a random variable: observation space?

See also Wikipedia talk:WikiProject Mathematics#Codomain of a random variable: observation space?. Boris Tsirelson (talk) 16:54, 27 March 2010 (UTC)[reply]

Link between discrete and continuous distributions

"The definition of a probability density function at the start of this page makes it possible to describe the variable associated with a continuous distribution using a set of binary discrete variables associated with the intervals [a; b] (for example, a variable being worth 1 if X is in [a; b], and 0 if not)." — Does anyone understand it? I do not. Boris Tsirelson (talk) 07:43, 20 September 2010 (UTC)[reply]

I think the general idea that section tries to convey is that a discrete r.v. can be represented by the "density" (and btw. it’s improper to call it simply density, without the quotation marks) that is a weighted sum of Dirac delta functions. It is clear the entire section has to be rewritten to make it comprehensible to a person who does not know that already.  // stpasha »  16:28, 20 September 2010 (UTC)[reply]
Maybe you can turn it into that, but for now, rereading the paragraph quoted above I see something very different. Your version does not explain why "the variable associated with a continuous distribution" and why the indicators of intervals. Boris Tsirelson (talk) 17:27, 20 September 2010 (UTC)[reply]
Anyway, I have deleted it. Boris Tsirelson (talk) 18:55, 23 September 2010 (UTC)[reply]

Probability is dimensionless

The edit by Rferreirapt is not well done, but it should be improved rather than deleted, I think.

"Probability is not dimensionless: it is outcomes per trial" — no, sorry, I disagree; "number of outcomes" is dimensionless, and "number of trials" is dimensionless; if in doubt ask a physicist or see Dimensional analysis. Boris Tsirelson (talk) 18:54, 23 September 2010 (UTC)[reply]

A density is "one divided by something"; this is the inverse of something. Since dx has the physical units of the considered random variable, the pdf f(x) has units that are the inverse of the random variable's physical units. As they multiply inside the integral, we obtain a dimensionless quantity (whose integrated value is between zero and one, depending on the integration limits of the random variable). From the shape of f(x) we see which values of the random variable have more density of probability than others, but the probability of any point (any value of the random variable) is always zero in the continuous pdf; proof is the integration with both limits at the same point. Probability is the integral of the pdf. Thinking, as an example, of the normal pdf N(0,1) of a random variable in meters: at the value zero meters it has 0.4/meter of density and at the value one meter it has 0.243/meter of density. This is 1.65 (= 0.4/0.243) times more density at the value zero meters than at the value one meter of that random variable, but the probability at zero meters is equal to the probability at one meter, equal to zero (dimensionless). [1] Rferreirapt, as I answered above. — Preceding unsigned comment added by 37.60.184.8 (talk) 18:07, 30 May 2013 (UTC)[reply]

OK, I agree; but I have two questions to you: (a) why do you insist on stating it twice (both in "#what is a probability density?" and here), and (b) so what? do you propose some change to the article? or just use this page as a forum? Boris Tsirelson (talk) 19:49, 30 May 2013 (UTC)[reply]
Maybe you want to restore your old text:
  • Units for the pdf
As seen below, inside the integral the pdf is multiplied by dx.
The unit for the random variable X is that of the entity measured: meters, seconds, liters, etc.
This implies the unit for a variation in X, the dx, is the same as the unit of X.
The pdf is a density, with units inverse of the X units: 1/meter, 1/second, 1/liter, etc.
This explains why probability is dimensionless.
It is important to note that the values of the pdf are not the probability. Probability is the integral of the pdf. This is why, in a continuous pdf, the point probability is zero at any point (the integral from a point to the same point is zero), but the pdf has values at those points. The pdf values are also used for ratios (with the standard normal, Φ(0) = 0.4 and Φ(1) = 0.24, so the density of probability at zero is 1.66 times greater than at one, with different conclusions at the same points for other continuous pdfs).
If we deal with a bidimensional pdf, f(X,Y), the unit of that density is 1/(unitX·unitY).
And the units of E[X]? Inside that integral is x·f(x)·dx. This is Unit·(1/Unit)·Unit, and the conclusion is that E[X] has the same unit as X.
Lastly, the units of E[(X - E[X])^2]?
That is a problem, since your text is an essay that is not well written and, more important, is your original research (unless you find a reference); it will not survive here, even though it is basically correct.
Mathematical theories are always presented in the unitless form; that is, units are assumed to be chosen once and for all; dimensional analysis is left to physicists. The general theory of probability density is also like that.
Thus I think that your text is helpful but regretfully not suitable for Wikipedia; try something else, like Wikibook (or find a reliable source for your statements).
You see, also the article "Derivative" does not mention (physical) dimensions.
Maybe your thoughts could appear as examples to "Dimensional analysis"? But this is also problematic. Boris Tsirelson (talk) 20:07, 30 May 2013 (UTC)[reply]

"Relative probability"[edit]

I'm uncomfortable with the following phrase in the opening sentence: "a function that describes the relative chance for this random variable to occur at a given point in the observation space." First, the passage only refers to "a given point", but a relative chance has to relate two different points' chances to each other; so I think the passage should be reworded to reflect that. Second and more important, in what sense does the density describe the "relative chance"? Is it f(x1) / f(x2) = P(x = x1) / P(x = x2)? No, since the right side of this is zero over zero. So the intended meaning must be something about the limit as ε goes to zero of P(x1 < x < x1 + ε) / P(x2 < x < x2 + ε), or something like that. That interpretation is unfamiliar to me (albeit intuitive as a counterpart to the interpretation of discrete probability functions).

I would suggest two things: (1) Move this statement out of the intro, since it is likely, without further explanation that would be too detailed for the intro, to give the wrong impression that f(x1) / f(x2) = P(x = x1) / P(x = x2); and (2) put it later in the article, with a careful statement of what is meant and with a citation. Comments, anyone? Duoduoduo (talk) 15:38, 14 November 2010 (UTC)[reply]

Yes, one may say that it is about the limit as ε goes to zero of P(x1 < x < x1 + ε) / P(x2 < x < x2 + ε). This limit is equal to the ratio of the densities, under natural assumptions. (Namely, that both points are points of continuity of the density, and the denominator does not vanish.) Boris Tsirelson (talk) 16:12, 14 November 2010 (UTC)[reply]
It used to say “relative likelihood”, but then somebody got offended at the word “likelihood” and changed it to “relative chance”. Anyways, there is no need to specify a second point since the statement is valid for any other point. As for your second remark, note that the lead sentence is intended to serve as an informal introduction. Of course, the technical meaning of the density is that f(x) = dPr[ x < X < x+dx ] / dx, but informally it is perfectly reasonable to say that higher values of the density function are where the random variable is more likely to occur.  // stpasha »  18:08, 14 November 2010 (UTC)[reply]
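For reference, the precise version of the "relative likelihood/chance" reading, under the assumptions Boris states above (both points are continuity points of f and the denominator does not vanish):

    \frac{f(x_1)}{f(x_2)} = \lim_{\varepsilon \to 0^{+}} \frac{P(x_1 < X < x_1 + \varepsilon)}{P(x_2 < X < x_2 + \varepsilon)}.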

Families of densities and domains?

The section Probability_density_function#Families_of_densities is very unclear, I think due to various meanings of the word "domain". There's "the domain of a family of densities", then "the domain is the actual random variable...", then "variables in the domain"--and in the following section, "any domain D in the n-dimensional space". I think the last of these is a domain in the sense of Domain (mathematical analysis). I thought on a first reading that "the domain of a family of densities" should be the domain of a function--specifically the function sending the parameter values to the corresponding distribution--but this doesn't seem consistent with what follows. Can anyone clarify this? Jowa fan (talk) 12:08, 9 November 2012 (UTC)[reply]

Yes... "the domain of a family of densities" is the sample space; these densities describe how probability is distributed over the sample space. "The domain is the actual random variable" probably should be "the sample space is the domain of the actual random variable". "Variables in the domain" could be "points of the sample space" or "possible values of random variables". And in the following section, "any domain D in the n-dimensional space" could rather be "any measurable subset of the n-dimensional space". Anyway, the text needs to be rewritten for clarity. And maybe all that is just not important enough for this article. --Boris Tsirelson (talk) 14:09, 9 November 2012 (UTC)
Thanks. I've made some small changes based on your comments. Probably the paragraph should be rewritten entirely, but I'll leave that for someone with more expertise in the subject. Jowa fan (talk) 05:19, 10 November 2012 (UTC)[reply]

Generalized probability density functions don't exist

Although the section "Link between discrete and continuous distributions" is useful and intuitive, I have not been able to find any sources that support its claims. I spent the day searching online and checking my university's library for any references to generalized PDFs, and I have found none. Of the dozens of books on generalized functions (aka distributions), not one mentioned their applicability to PDFs. Similarly, none of the books on probability made mention of generalized PDFs.

Specifically, if you allow f to be a generalized function, like this section suggests, you run into problems. How can you enforce that the PDF integrates to one? The statement

\int_X f(x) dx

has no meaning if f is a generalized function.

I have flagged this section as disputed, and unless someone has supporting evidence for it, I will remove it in a few weeks. — Preceding unsigned comment added by 174.63.120.183 (talk) 20:11, 28 February 2013 (UTC)[reply]

This generalized function is a probability measure, a special case of a generalized function. (In fact, a positive generalized function is well-known to be a locally finite measure.) The integral is well-defined.
On the other hand, maybe indeed this section is so-called "original research by synthesis". I mean, it is a correct application of the notion of generalized function to the theory of distributions; but the problem is whether this possibility was mentioned in the literature (outside Wikipedia). I guess such formal manipulations with delta-functions were made by physicists; after all, the delta-function was invented by Dirac! The question is where to look for a source. Boris Tsirelson (talk) 20:58, 28 February 2013 (UTC)[reply]
It may be a probability measure, but this section words this as though it is a PDF that gives "The density of probability associated with this variable [...]". Saying it is a probability measure is different from saying that it is a PDF. — Preceding unsigned comment added by 174.63.120.183 (talk) 21:36, 28 February 2013 (UTC)[reply]
Surely, a generalized function is not a function, thus, not a PDF if we insist that PDF means "probability density function" (rather than "probability density generalized function" - PDGF?). On the other hand, for a physicist or engineer this rigor is of little interest. And the very term "generalized function" indicates clearly that it is meant to be also a kind of function, in some sense. The probability measure really is the derivative of the cumulative distribution function, provided that (a) derivative is understood according to the theory of generalized functions, and (b) measures are treated as special case of generalized functions. Boris Tsirelson (talk) 06:49, 1 March 2013 (UTC)[reply]

There are various publications that use generalized functions in something that looks like and is treated like a probability density function, as a convenience tool: for example this American Statistician article. Additionally this other paper, which is similar in topic, uses the term (and defines) "generalized probability density function". No doubt there are others ... the notion has been around since the 70's at least, often as a way of treating characteristic functions via a single simple formula that looks like a simple Fourier transform. Also, in the case of the differential equations for the pdf of a stochastic diffusion equation, the initial condition can be conveniently represented as a delta function. 81.98.35.149 (talk) 23:13, 28 February 2013 (UTC)[reply]

Thank you! Now we see that this is not "OR by synthesis". Boris Tsirelson (talk) 06:53, 1 March 2013 (UTC)[reply]
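For reference, the delta-function formalism the section relies on (whether the result deserves to be called a "density" is exactly the point under discussion): for a discrete variable taking values x_1, x_2, \dots with probabilities p_1, p_2, \dots,

    f(x) = \sum_i p_i\, \delta(x - x_i), \qquad \int_{-\infty}^{\infty} f(x)\, dx = \sum_i p_i = 1,

and this f is the derivative, in the sense of generalized functions, of the step-shaped cumulative distribution function.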

Dependent variables and change of variables: Multiple variables

The formula

looks wrong to me: the denominator should look like a Jacobian, not the square root of a sum. I am not too sure about the correct formula though, so help with that would be welcome. Garfl (talk) 15:46, 16 October 2013 (UTC)[reply]

Jacobian? No, you cannot make a Jacobian out of a single function g of n variables. The square root of the sum is the norm of the gradient vector. It is in the denominator since it is inversely proportional to the thickness of an infinitesimal layer. The formula is correct as far as the reader understands that the differential in it is an infinitesimal element of (n-1)-dimensional volume on the hypersurface, which is hinted at in the (rather unclear) explanatory phrase after the formula. When using the parametrization, indeed, Jacobians are relevant. Boris Tsirelson (talk) 17:06, 16 October 2013 (UTC)[reply]
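As I read that explanation, the formula under discussion has the shape (a reconstruction under the stated assumptions: g maps \mathbb{R}^n to \mathbb{R}, its gradient does not vanish on the level set, and dV denotes the (n-1)-dimensional volume element on that hypersurface):

    f_Y(y) = \int_{\{x : g(x) = y\}} \frac{f_X(x)}{|\nabla g(x)|}\, dV,

i.e. the density of Y = g(X) is obtained by integrating f_X over the level hypersurface, weighted by the reciprocal of the gradient norm (the "thickness of an infinitesimal layer" mentioned above).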

Could you please add where the formula for the non-continuous g function comes from? Some book or a source you took this from? --Hanator123 (talk) 16:02, 4 January 2016 (UTC)[reply]

Recent long (and possibly tedious) addition on the derivation of a pdf.

This recent edit [1] is very long and, I suggest, tedious. I propose that it be reverted. Isambard Kingdom (talk) 18:08, 16 November 2016 (UTC) I suggest that the material be moved to "Simple Wikipedia" at: [2]. Isambard Kingdom (talk) 18:17, 16 November 2016 (UTC)[reply]

Or to Wikiversity? Boris Tsirelson (talk) 18:52, 16 November 2016 (UTC)[reply]
Agree. Even if the pedagogy and style were perfect (and I do have complaints), it's way too long. Nobody will read it, nor will they read anything else in the article. Wikiversity makes sense to me, simple wikipedia not so much. --Steve (talk) 22:48, 16 November 2016 (UTC)[reply]
Agreed. McKay (talk) 02:40, 17 November 2016 (UTC)[reply]

Reverted change to opening sentence

I reverted the change to the opening sentence that didn't like the term "relative likelihood". It was fine as it was -- to give an intuitive sense for what a PDF is, not as a technical definition. Moreover, the way it was changed, saying the density is a function that describes the local density, is unhelpful. There's probably some room for improvement over the way it is now, but it should still convey the same overall gist. Deacon Vorbis (talk) 00:31, 2 December 2016 (UTC)[reply]

One problem with this formulation is that "the probability for this random variable to take on a given value" is just zero (since we mean continuous distributions), and "likelihood for this random variable to take on a given value" is an attempt to legalize the illegal notion by using a different word; or has it a different meaning? Very vague. It should be about (at best) infinitesimal intervals, not values. And then "metric dependence" manifests itself: we should consider two infinitesimal intervals of the same length. In other words: "intervals of equal length" makes sense, while "points of equal size" does not. Boris Tsirelson (talk) 06:14, 2 December 2016 (UTC)[reply]
Also, "relative likelihood" sounds like "likelihood ratio"; but the latter is about a single possible value of two random variables (distributed differently), not about two possible values of a single random variable. Boris Tsirelson (talk) 07:00, 2 December 2016 (UTC)[reply]
On the other hand I agree that saying "the density is a function that describes the local density" is unhelpful. Boris Tsirelson (talk) 07:04, 2 December 2016 (UTC)[reply]
"...is a function that quantifies how likely it is for this random variable be very close to a given value" or something like that?? --Steve (talk) 19:28, 3 December 2016 (UTC)[reply]
I wonder, why this density is described so differently from, say, Density, Charge density, Energy density. On the level of lead, they could (or should?) look very similar. Boris Tsirelson (talk) 20:43, 3 December 2016 (UTC)[reply]
I like a text in “The elements of continuum biomechanics” by Marcelo Epstein, quoted here (pages 50-51). Regretfully, it is too long for a lead... Boris Tsirelson (talk) 20:49, 3 December 2016 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Probability density function. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 22:56, 26 July 2017 (UTC)[reply]