Talk:Wiener filter

From Wikipedia, the free encyclopedia

Lack of derivation[edit]

The article lacks a derivation of the different Wiener filters. --Fredrik Orderud 16:24, 8 May 2005 (UTC)[reply]

discrete time[edit]

Hi there. I would like to contribute to this article, especially about the discrete-time case, its solutions, and the LMS and RLS algorithms... But I'm kinda confused, because the way the filter is introduced is different from what I'm used to (which is a good thing, actually).

I believe this article needs some structural changes, but I don't want to simply start moving things around and piss off whoever wrote the current stuff.

I can't say how I think the whole structure of the article should look, because I'm still trying to understand what is there :) . I would just like a place to write specifically about the discrete-time case, with a cross-correlation matrix. Also, I think we should cover the equalisation case, where the "noise" is the signal passed through a channel, and filtering it means finding an inverse filter for the channel!... A hot topic, IMHO. -- NIC1138 05:19, 6 December 2005 (UTC)[reply]

What is S?[edit]

In this article what does the capital S stand for?

It is the Laplace transform of small s(t). --Memming 18:30, 2 July 2006 (UTC)[reply]
I think S of both types gets a little overused here. In any case diving right into S with double subscripts requires a little explanation (by contrast, R gets defined quite well). As it stands now, the whole solution section is very unclear because of this notation. —Preceding unsigned comment added by 128.30.6.11 (talk) 17:22, 26 October 2007 (UTC)[reply]
It is not the Laplace transform of small s(t). It stands for spectral density in the s-domain, which is the Laplace transform of correlation functions for stochastic signals. In this case, is the Laplace transform of , and is the Laplace transform of .--Yangli2 (talk) 21:58, 10 February 2008 (UTC)[reply]
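The spectral-density reading of S described above can be checked numerically through the Wiener–Khinchin relation it rests on. Here is a minimal numpy sketch (my own illustration, not from the article), using the discrete, circular form of the theorem, where the DFT of the circular autocorrelation equals the power spectrum exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=256)

# Periodogram: squared magnitude of the DFT of the signal.
P = np.abs(np.fft.fft(x)) ** 2

# Circular autocorrelation r[k] = sum_m x[m] * x[(m + k) mod n].
r = np.array([np.dot(x, np.roll(x, -k)) for k in range(len(x))])

# Wiener-Khinchin (circular form): the DFT of the autocorrelation
# is the power spectrum.
S = np.fft.fft(r).real

assert np.allclose(S, P)
```

The identity is exact for the circular autocorrelation; with the usual biased linear estimate it holds only approximately.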
It seems to me that this article overuses the symbols S and s. How many different meanings can this one article assign to one symbol? It uses s to mean both the signal to be estimated, as in s(t), and to represent the complex angular frequency in the Laplace domain, as in G(s). Then, as if that's not confusing enough, the article has S(s) to represent the power spectral density as a function of the complex angular frequency, not to be confused with the Laplace transform of the original signal s(t). For anyone who is new to the topic, surely this article is extremely confusing. 71.233.125.212 (talk) 03:42, 27 June 2008 (UTC)[reply]
There might be something wrong in the usage of S in the 'Causal Solution' section. I'll have to look into it when I have more time. Yangli2 (talk) 23:01, 6 May 2009 (UTC)[reply]

What is α?[edit]

In this article, what is the α which suddenly appears in the section entitled "stationary solution"?

Also I think a discrete time treatment (as mentioned by NIC1138) would be useful. Encyclops 16:27, 13 August 2006 (UTC)[reply]

Alpha may be the lead/lag time, which is called d earlier in the article Encyclops 21:45, 13 August 2006 (UTC)[reply]
Alpha is the delay introduced by the filter g(t); since it was established that the filter is going to be causal, it needs a delay. You are right, though: it should have been introduced first.
Getting to discrete time is rather simple in this case: just change all t's to n's and all integrals to sums, and voila, you can put it on a DSP. Kidding, but it's pretty straightforward. Yangli2 (talk) 22:45, 6 May 2009 (UTC)[reply]
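For what it's worth, the discrete-time recipe alluded to above can be made concrete. This is a minimal sketch (my own example, with made-up AR(1) signal parameters) of the FIR Wiener filter obtained by solving the Wiener–Hopf normal equations R g = p, with R and p estimated from data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Desired signal: an AR(1) process; observation x = s + white noise.
s = np.zeros(n)
for i in range(1, n):
    s[i] = 0.95 * s[i - 1] + 0.1 * rng.normal()
x = s + 0.5 * rng.normal(size=n)

# FIR Wiener filter of length L: solve the Wiener-Hopf equations R g = p,
# with R the Toeplitz autocorrelation matrix of x and p the x-s
# cross-correlation, both estimated from the samples.
L = 16
r_xx = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(L)])
p = np.array([np.dot(x[: n - k], s[k:]) / n for k in range(L)])
R = r_xx[np.abs(np.arange(L)[:, None] - np.arange(L)[None, :])]
g = np.linalg.solve(R, p)

# Apply the filter: s_hat[m] = sum_k g[k] * x[m - k].
s_hat = np.convolve(x, g)[:n]

# Filtering should beat using the raw noisy observation.
assert np.mean((s_hat - s) ** 2) < np.mean((x - s) ** 2)
```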
Alpha is not introduced to enforce a causal filter. The filter is always causal. Alpha is a tunable parameter that allows you to switch between prediction, filtering, and smoothing. See 4.3 of 'Introduction to Random Signals and Applied Kalman Filtering'. I will edit the page to discuss this. — Preceding unsigned comment added by Jaimevrl1 (talkcontribs) 12:32, 3 July 2022 (UTC)[reply]

Isn't there an error in the expected value ?[edit]

To avoid introducing errors into this page, and given my present understanding, I prefer to discuss here before making any modification.

The author wrote: Taking the expected value of the squared error results in

where

  • is the autocorrelation function of
  • is the autocorrelation function of
  • is the cross-correlation function of and


but according to Brown and Hwang [1] (page 164), shouldn't it be the following?


Taking the expected value of the squared error results in

where

  • is the autocorrelation function of
  • is the autocorrelation function of
  • is the cross-correlation function of and


doesn't equal


  • ^ [1] Brown, Robert Grover and Patrick Y. C. Hwang (1996). Introduction to Random Signals and Applied Kalman Filtering. 3rd ed. New York: John Wiley & Sons. ISBN 0-471-12839-2.

Touriste (usurped) 14:14, 25 July 2007 (UTC)[reply]

Hmmm! You may have a valid point here. Encyclops 00:30, 27 July 2007 (UTC)[reply]

Thanks to Michael Hardy, who didn't note here what he did, the page is now corrected. In fact, the assumption that the noise and the signal are uncorrelated had been forgotten. Touriste (usurped) 22:29, 21 August 2007 (UTC)[reply]

Yep, I think Touriste is right[edit]

I was just looking at the expectation formula, which does not make any sense whatsoever; it needs correction, and I believe Touriste's update is correct. Please, Touriste, overwrite the current formula with your correction next time you log in. I believe the cause of the error is the original author's confusion of the variable names. He must have had multiple sources that used different naming conventions, and in his transcription mismatched them. --Yangli2 (talk) 21:34, 10 February 2008 (UTC)[reply]


I just fixed the article; it did follow two different naming conventions. The top half, up to just before the expectation formula, used as the output of the Wiener filter, which I replaced with ; and the bottom half from there on used as the observed signal, . After my replacement, the naming conventions have been reconciled. I added a line of explanation of what stands for right after it appears in the expectation formula. --Yangli2 (talk) 22:44, 10 February 2008 (UTC)[reply]

Did this fix get removed somehow? The current page seems to still use in the explanation of the expectation formulation. Very confusing as there is no clue as to what actually is. —Preceding unsigned comment added by 203.12.172.254 (talk) 02:41, 21 August 2008 (UTC)[reply]

No, it's still there. I kept x(t) to stand in for s(t) + n(t), which I explained right underneath the expectation formula:
where
x(t) = s(t) + n(t) is the observed signal
Hope this helps. Yangli2 (talk) 22:49, 6 May 2009 (UTC)[reply]

Uncorrelated noise[edit]

For most applications, the assumption of uncorrelated signal and noise is reasonable because the source of the noise (e.g. sensor noise or quantization noise) does not depend upon the signal itself.

Not true - quantization noise is a notable example of noise that does depend on the signal. --Spitschan (talk) 23:27, 6 June 2009 (UTC)[reply]

If it's not true (and I agree with you there), then why is it still in the article? I'm removing it. -- Jalanpalmer (talk) 22:22, 6 March 2010 (UTC)[reply]
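The signal dependence of quantization noise discussed above is easy to demonstrate numerically. A small sketch (my own toy example: a sine through a deliberately coarse one-step rounding quantizer):

```python
import numpy as np

# A sine sampled densely, then quantized with a coarse uniform step.
t = np.linspace(0.0, 2 * np.pi, 10001)
x = np.sin(t)
step = 1.0
e = step * np.round(x / step) - x  # quantization error

# The error is a deterministic function of the signal, and for a coarse
# quantizer it is noticeably correlated with the signal itself, so the
# "uncorrelated noise" assumption fails here.
corr = np.corrcoef(x, e)[0, 1]
assert abs(corr) > 0.1
```

With a fine step and a busy signal the correlation shrinks, which is why the uncorrelated-noise model is still a common approximation.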

astronaut photos look the same[edit]

The two astronaut photos look the same to me. The image files are different sizes, so they do appear to be different images. I wonder whether these two photos are the best example of using a Wiener filter on an image. Billsmithaustin (talk) 15:56, 15 August 2013 (UTC)[reply]

They are also not suitable illustrations of the theory given, which is for the one-dimensional Wiener filter, whereas the photos are for a two-dimensional filter. JFB80 (talk) 21:11, 15 August 2013 (UTC)[reply]
Later: I have deleted the photos for these reasons. They are out of place here. JFB80 (talk) 20:34, 13 September 2014 (UTC)[reply]

Linear algebra formulation?[edit]

The Wiener filter presented in this article looks very different from the version I learned (and much more complicated, too). Perhaps it's just a difference in formalism? Here's a sketch of the presentation I'm used to (for Gaussian-distributed variables).

Consider a vector of observations d = s + n being a sum of signal and noise (that is, s has covariance S, and n has covariance N). We wish to recover a maximum-likelihood estimate of the signal given the data, i.e. to maximize P(s|d) ∝ P(d|s)P(s). By completing the square for s we see that this is Gaussian with mean S(S + N)^{-1} d. So our maximum-likelihood estimate of s is ŝ = S(S + N)^{-1} d, and the factor S(S + N)^{-1} in front of d is the Wiener filter. It tends to 1 for signal-dominated modes and to S/N for noise-dominated modes. I've also made an example of applying this filter. Amaurea (talk) 17:56, 3 October 2014 (UTC)[reply]
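If it helps other readers, the diagonal (per-mode) case of this formulation, the generalized Wiener filter ŝ = S(S + N)^{-1} d with diagonal covariances, can be sketched in a few lines of numpy. The spectra below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Per-mode signal and noise power (diagonal covariances S and N).
nmodes = 1000
S = np.linspace(10.0, 0.1, nmodes)  # signal power, strong to weak
N = np.ones(nmodes)                 # white noise power

s = rng.normal(scale=np.sqrt(S))
n = rng.normal(scale=np.sqrt(N))
d = s + n

# Generalized Wiener filter, applied mode by mode: s_hat = S (S + N)^-1 d.
W = S / (S + N)
s_hat = W * d

# The filter tends to 1 where S >> N and to S/N where N >> S,
# and it reduces the mean squared error relative to the raw data.
assert W[0] > 0.9 and W[-1] < 0.15
assert np.mean((s_hat - s) ** 2) < np.mean((d - s) ** 2)
```

For non-diagonal S and N the same expression holds with matrix inverses, which is why working in a basis that diagonalizes both (e.g. Fourier modes for stationary statistics) is so convenient.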

Your version is not the Wiener filter but just standard maximum likelihood theory. The characteristic properties of the Wiener filter are that it acts only on past observations and that it uses the spectra of signal and noise. Your example is inappropriate too and is also 2-dimensional whereas the article only gives theory for the usual 1-dimensional case. Why don't you write your own article on your method? JFB80 (talk) 05:50, 4 October 2014 (UTC)[reply]
It is not "my version". I didn't invent it. It is just standard maximum likelihood theory, but that doesn't mean it's not a Wiener filter. To be more specific it is called a Generalized Wiener Filter. It's used so many places in my field that it's hard to choose which reference to give, but see for example this article (and the many references within). Processing 2D images is a very common use of Wiener filters, and an example is even given in the current version of the article (though not a very impressive example). So I don't agree that only the historical 1-D version of the filter is appropriate. "The article only gives theory for the usual 1-dimensional case" is not an argument for not discussing the generalized case. Also, what do you mean by "your own article"? I can't write my own article because one can't own Wikipedia articles. But I can start a new article, on generalized Wiener filters, if you think that would be clearer. Amaurea (talk) 13:54, 4 October 2014 (UTC)[reply]
I looked up your reference and see that the term 'generalized Wiener filter' is commonly used, but I still think it is an incorrect name for the method, as it has no connection with what Wiener did. Maybe someone used the term and then everyone else copied it without checking what the Wiener filter actually was. That happens. Yes, I did mean to suggest that you should start a new article on the method, which also connects with maximum likelihood and Bayesian methods. It would be clearer that way. JFB80 (talk) 19:10, 4 October 2014 (UTC)[reply]
Voila. Amaurea (talk) 16:16, 5 October 2014 (UTC)[reply]
OK, that's fine. There has been work on Wiener filters both for a limited interval of observation and for two dimensions which followed the thinking of the usual Wiener filter. (I don't have the references to hand just now.) You use a different method and make no reference to previous work. JFB80 (talk) 05:31, 6 October 2014 (UTC)[reply]
I do reference previous work: The Pratt article from 1972. Would you mind briefly clarifying what you mean by the "thinking of the usual Wiener filter", and how the method I wrote about doesn't follow that? Is it the interpretation of one of the data axes as time that is the issue? I'm also curious about those other two-dimensional methods. Amaurea (talk) 15:56, 19 October 2014 (UTC)[reply]

Astronaut photos[edit]

The links associated with the two photos lead back to a NASA image of significantly higher quality. Has noise been added for the purpose of this article, and if so, is this appropriate? I'd suggest that something like the examples from the Gimp refocus plugin would be more appropriate, since the history of those images is known, provided that the author was prepared to make them available. MarkMLl (talk) 13:06, 29 May 2015 (UTC)[reply]

I think this is a much better example of Wiener filtering of an image. The improvement is actually visible, and the assumptions about the statistical properties of the signal and noise are made explicit. Amaurea (talk) 15:41, 29 May 2015 (UTC)[reply]

Sections 2 and 3 Need better explanations of variables[edit]

Current discussion is very unclear. — Preceding unsigned comment added by Zhermes (talkcontribs) 18:08, 21 August 2016 (UTC)[reply]

In Wikipedia, the spectrum is variously called power spectrum, spectral density, and power spectral density. JFB80 (talk) 03:39, 22 August 2016 (UTC)[reply]

Low Quality Wikipedia Math Articles Should Not Appear in Google Search Results[edit]

Undefined terms and symbols; verbose jargon filled prose; no insight, intuition, or understanding. The biggest problem is that these low quality articles rank highly in Google search results. This article is the first hit when I Google "Wiener Filter". This kind of crap should not be presented as the best the human race has to offer on the subject.

Is there some way we can mark this article as low quality and therefore also discourage search engines from linking to it? — Preceding unsigned comment added by 73.15.52.144 (talk) 16:15, 21 June 2018 (UTC)[reply]

The article quality scale is part of the problem: a "Start-class" quality rating is not an adequate description of the problems with this article. The problem is not a shortage of information, but that the information is not provided in a clear or meaningful way. This is not a Harry Potter article that can be improved by correcting typos and adding more trivia. We need a "Problem-class" quality rating for articles that fail to meet the clarity standard. Articles about concepts are necessarily more challenging in this regard.

In any case, Google should not link to articles that are rated Start-class. — Preceding unsigned comment added by 73.15.52.144 (talk) 16:44, 21 June 2018 (UTC)[reply]

Relationship to the Least Squares Filter[edit]

I think a section on the relationship to the least squares filter is currently missing from the article.

A more accurate derivation and relation is given at How Is the Formula for the Wiener Deconvolution Derived?

--Royi A (talk) 11:48, 25 August 2018 (UTC)[reply]
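Since this thread is about Wiener deconvolution, here is a hedged numerical sketch of the standard frequency-domain formula G = H* S / (|H|^2 S + N). It is my own toy example (circular blur, oracle signal spectrum for simplicity), not taken from the linked derivation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1024
sigma = 0.1

# Ground-truth signal (centered random walk), circular moving-average
# blur kernel h, and additive white noise.
s = np.cumsum(rng.normal(size=n))
s -= s.mean()
h = np.zeros(n)
h[:9] = 1.0 / 9.0
x = np.fft.ifft(np.fft.fft(s) * np.fft.fft(h)).real
x += rng.normal(scale=sigma, size=n)

H = np.fft.fft(h)
Ss = np.abs(np.fft.fft(s)) ** 2 / n  # signal power spectrum (oracle, for the demo)
Sn = sigma ** 2 * np.ones(n)         # white-noise power spectrum

# Wiener deconvolution: G = conj(H) Ss / (|H|^2 Ss + Sn).
G = np.conj(H) * Ss / (np.abs(H) ** 2 * Ss + Sn)
s_wiener = np.fft.ifft(G * np.fft.fft(x)).real

# Plain inverse filtering blows up at frequencies where |H| is small;
# the Wiener solution regularizes those modes via the noise term.
s_naive = np.fft.ifft(np.fft.fft(x) / H).real

assert np.mean((s_wiener - s) ** 2) < np.mean((s_naive - s) ** 2)
```

In practice Ss is not known and must be modelled or estimated, which is where the least-squares connection in this section's title comes in: with Sn/Ss replaced by a constant, the formula reduces to Tikhonov-regularized least squares.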