Please bear with me if this seems like a very basic question. Let's say you want to detect a signal by measuring a variable $x\in[0,\infty)$. Let's say you know the variable $x$ follows a probability distribution $p(x|\mu)$ where $\mu$ is some parameter that characterizes the mean of $x$.
Usually, if there is no signal and you have only noise, you say $\mu=0$. Then you define a detection threshold $T(\alpha)$ corresponding to a chosen false-alarm rate $\alpha$, meaning that $$P(x>T|\mu=0) = \int_{T}^\infty p(x|0)\,dx = \alpha $$ and if you measure a value $x>T$, you can claim a detection at significance level $\alpha$. If you don't measure above the threshold, you can instead compute exclusion limits on $\mu$.
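To make this concrete, here is a small sketch of the threshold calculation. The model is my own assumption purely for illustration (the question doesn't specify $p(x|\mu)$): take $x = |n|$ with $n \sim \mathcal{N}(\mu, 1)$, so the noise-only ($\mu=0$) distribution of $x$ is half-normal on $[0,\infty)$ and the threshold has a closed form, $T = \Phi^{-1}(1-\alpha/2)$.

```python
from statistics import NormalDist

# Toy model (my assumption): x = |n| with n ~ Normal(mu, 1), so the
# noise-only (mu = 0) distribution of x is half-normal on [0, inf).
alpha = 0.01                      # chosen false-alarm rate
std_normal = NormalDist(0.0, 1.0)

# Solve P(x > T | mu = 0) = 2 * (1 - Phi(T)) = alpha
# => T = Phi^{-1}(1 - alpha / 2)
T = std_normal.inv_cdf(1.0 - alpha / 2.0)

# Sanity check: the survival probability at T under mu = 0 recovers alpha
p_false_alarm = 2.0 * (1.0 - std_normal.cdf(T))
print(T, p_false_alarm)
```

For $\alpha = 0.01$ this gives $T \approx 2.58$, the familiar two-sided 1% critical value of the standard normal.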
A colleague of mine was recently doing some work in which they assume the noise-only distribution doesn't necessarily correspond to $\mu=0$, but rather to some $\mu'\geq0$, and I'm guessing they assume that a signal, if present, would correspond to $\mu''>\mu'\geq0$.
I'd like to hear people's opinions on whether this is okay to do, or whether you've encountered similar situations. It feels a little strange to me: if you don't already know the value of $\mu'$ for the noise, you can't set a fixed threshold. But if you somehow do know $\mu'$, then maybe it's fine?
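To illustrate why this bothers me, here is a sketch showing that the threshold depends on the assumed noise-only mean. Again the model is my own assumption, not from the question: $x = |n|$ with $n \sim \mathcal{N}(\mu', 1)$, so the noise-only distribution is folded normal, and I solve $P(x>T \mid \mu') = \alpha$ numerically by bisection.

```python
from statistics import NormalDist

N = NormalDist(0.0, 1.0)

def tail(T, mu):
    # P(|n| > T) for n ~ Normal(mu, 1): folded-normal survival function
    return (1.0 - N.cdf(T - mu)) + N.cdf(-T - mu)

def threshold(alpha, mu, lo=0.0, hi=50.0):
    # Bisection for T such that tail(T, mu) = alpha
    # (the tail probability is strictly decreasing in T)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tail(mid, mu) > alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = 0.01
# The threshold grows with the assumed noise-only mean mu'
for mu_prime in (0.0, 0.5, 1.0, 2.0):
    print(mu_prime, round(threshold(alpha, mu_prime), 3))
```

In this toy model every choice of $\mu'$ gives a different $T(\alpha)$, which is exactly why a fixed threshold seems to require $\mu'$ to be known (or bounded) in advance.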