
Please bear with me if this seems like a very basic question. Let's say you want to detect a signal by measuring a variable $x\in[0,\infty)$. Let's say you know the variable $x$ follows a probability distribution $p(x|\mu)$ where $\mu$ is some parameter that characterizes the mean of $x$.

Usually, if there were no signal and you only have noise, you say $\mu=0$. Then you define a detection threshold $T(\alpha)$ corresponding to a chosen false detection rate $\alpha$, meaning that $$P(x>T|\mu=0) = \int_{T}^\infty p(x|0)\,dx = \alpha $$ and if you measure a value $x>T$ then you can claim a detection with significance $\alpha$. If you don't measure above the threshold you can calculate exclusion limits on $\mu$.
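As a minimal sketch of this standard procedure: assuming, purely for illustration, that the noise-only distribution $p(x|0)$ is a unit-mean exponential (the choice of distribution is an assumption here, not part of the question), the threshold is the inverse survival function evaluated at $\alpha$:

```python
from scipy.stats import expon

# Illustrative assumption: noise-only distribution p(x|0) is Exponential(1).
alpha = 0.05

# Threshold T such that P(x > T | noise only) = alpha.
T = expon.isf(alpha)  # inverse survival function, T = -ln(alpha) here

# Sanity check: the survival function at T recovers the false-alarm rate.
assert abs(expon.sf(T) - alpha) < 1e-12
```

Any other assumed noise-only distribution works the same way; only the `isf` being inverted changes.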

A colleague of mine was recently doing some work in which they assume that the noise-only distribution doesn't necessarily correspond to $\mu=0$, but rather to some $\mu'\geq0$, and I'm guessing they assume that a present signal would correspond to $\mu''>\mu'\geq0$.

I just want to ask people's opinions on whether this is okay to do, or if you've encountered similar situations? It feels a little strange to me because if you don't already know the value of $\mu'$ for noise, then you can't set a fixed threshold. But I guess if you somehow already know the value of $\mu'$, then maybe it's okay?

  • The measurement you are describing clearly only makes sense if the noise level is a priori known, in which case it is just a trivial shift from $0$ to $\mu'$. – Commented Aug 27 at 16:46
  • This is a classical problem in radar detection theory, associated with fluctuating target models; the standard reference is rand.org/pubs/research_memoranda/RM1217.html – Commented Aug 27 at 22:37

1 Answer


What your colleague is doing is not inherently wrong, although it is easy to get the implementation details wrong.

The key is that, since $x\in[0,\infty)$, it is simply not true that a noise-only measurement has mean $\mu=0$.

The quintessential example is the magnitude of a complex signal with Gaussian noise in both the real and imaginary channels. In that case, the magnitude is not described by a normal distribution but by a Rician distribution, $$x \sim p(x|\nu,\sigma)=\frac{x}{\sigma^2} \exp \left( -\frac{x^2+\nu^2}{2\sigma^2} \right) I_0\left( \frac{x\nu}{\sigma^2} \right),$$ where, roughly speaking, $\nu$ is your signal and $\sigma$ is your noise standard deviation. So in this case we find the noise-only distribution by setting $\nu=0$, which reduces to a Rayleigh distribution and gives $$E(x|0,\sigma)=\mu=\sqrt{\frac{\pi}{2}}\,\sigma\ne 0.$$
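The nonzero noise-only mean is easy to verify numerically. A short sketch (the sample size and $\sigma=1$ are arbitrary choices for illustration): the magnitude of zero-mean complex Gaussian noise is Rayleigh distributed, its mean is $\sigma\sqrt{\pi/2}$ rather than $0$, and the detection threshold must be set from that noise-only (Rayleigh) distribution.

```python
import numpy as np
from scipy.stats import rayleigh

sigma = 1.0  # noise standard deviation per channel (illustrative)
rng = np.random.default_rng(0)

# Magnitude of zero-mean complex Gaussian noise (nu = 0): Rayleigh(sigma).
re = rng.normal(0.0, sigma, 100_000)
im = rng.normal(0.0, sigma, 100_000)
x = np.hypot(re, im)

# The noise-only mean is sigma * sqrt(pi/2) ~ 1.25, not 0.
print(x.mean(), sigma * np.sqrt(np.pi / 2))

# So the threshold for a 5% false-alarm rate comes from the Rayleigh
# survival function, not from a zero-mean assumption.
T = rayleigh.isf(0.05, scale=sigma)
```

The same logic carries over to any strictly non-negative measurement: compute the threshold from the actual noise-only distribution, whatever its mean turns out to be.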

You should speak with your colleague in more depth about this; it is an area that is easy to get wrong. It would be beneficial for you to understand the statistical distribution they are using to model the data-generation process, why it is a good model for your process, and how the noise-only distribution behaves. However, if your measurement is strictly non-negative, then it is essentially guaranteed that your noise-only measurement will not have zero mean, even when the underlying noise itself is zero-mean.

  • Thanks for the clarification. I was too quick to relate the parameter $\mu$ to the mean. We are indeed using a Rician, actually. Their motivation seems to be that they wanted to take a background that looks like a signal and treat it as noise. – Commented Aug 27 at 21:36
