
Deadtime-3 Pulse Height Analysis Theory

In previous posts, I discussed the mean and variance and the energy spectrum of photon counting data with deadtime. In this post, I will describe the statistics of pulse height analysis (PHA) data as a function of the detector deadtime. I will analyze the idealized case of perfect energy bins with zero transition width, no overlap, and no added electronic noise. With these assumptions and no deadtime, the number of counts in each bin is Poisson distributed with a mean equal to the expected number of incident photons in the bin, and the data in different bins are independent. With deadtime, the mean and variance of the PHA data are smaller than with no deadtime. In addition, the data in different bins become negatively correlated.
In my next post, I will describe a Monte Carlo simulation to validate the formulas derived here.
The derivation here is taken from a recent paper by Wang et al. [2]. The approach is first to derive the statistics with a fixed number of recorded counts and then to use conditional expectation, variance, and covariance to derive the statistics with a random number of recorded counts. With the recorded counts fixed, the counts in the PHA bins form a multinomial random vector. So, first I will derive the basic properties of a multinomial random vector. Then I will summarize the formulas for conditional statistics and apply them to the PHA count data.

The multinomial distribution

The multinomial distribution is a generalization of the binomial random variable that is discussed in most probability textbooks. You will recall that a binomial is the number of successes in $n$ independent trials if the probability of success in each trial is $p$. With a multinomial, there are more than two possible outcomes of each trial (see Ross [1] Section 6.1). If there are $n$ trials with $r$ possible outcomes with probabilities $p_j$, $j = 1\ldots r$, and $X_j$, $j = 1\ldots r$, is the number of trials with outcome $j$, then the probability mass function of the joint distribution of the $X_j$ is
(1) $\mathrm{Prob}(X_1 = k_1, X_2 = k_2, \ldots, X_r = k_r) = \dfrac{n!}{k_1!\,k_2!\cdots k_r!}\, p_1^{k_1} p_2^{k_2} \cdots p_r^{k_r}$
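As a quick numerical check of Eq. 1↑, the sketch below compares the formula with scipy's multinomial probability mass function. It assumes numpy and scipy are available; the values of $n$, the $p_j$, and the $k_j$ are arbitrary illustrative choices.

```python
import numpy as np
from math import factorial
from scipy.stats import multinomial

n = 10
p = np.array([0.2, 0.5, 0.3])     # r = 3 outcome probabilities, sum to 1
k = np.array([2, 5, 3])           # one set of counts with k1 + k2 + k3 = n

# Eq. 1: multinomial coefficient times the product of the probabilities
pmf_eq1 = factorial(n) / np.prod([factorial(int(ki)) for ki in k]) * np.prod(p**k)

# the same probability from scipy's multinomial distribution
pmf_scipy = multinomial.pmf(k, n=n, p=p)

print(pmf_eq1, pmf_scipy)         # the two agree to rounding error
```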
I will derive the moments of the multinomial distribution using the moment generating function, which, for a joint distribution, is defined to be (see Ross [1] Section 7.6)
$M_X(t_1, \ldots, t_r) = \left\langle e^{X_1 t_1 + \cdots + X_r t_r} \right\rangle$
In my notation, $\langle \cdot \rangle$ is the expected value. This function can be derived by using the definition of expected value with the probability mass function in Eq. 1↑
(2) $\left\langle e^{X_1 t_1 + \cdots + X_r t_r} \right\rangle = \sum_{\{k_i : \sum k_i = n\}} \dfrac{n!}{k_1!\,k_2!\cdots k_r!}\, p_1^{k_1} p_2^{k_2} \cdots p_r^{k_r}\, e^{k_1 t_1 + \cdots + k_r t_r} = \sum_{\{k_i : \sum k_i = n\}} \dfrac{n!}{k_1!\,k_2!\cdots k_r!}\, \left(p_1 e^{t_1}\right)^{k_1} \left(p_2 e^{t_2}\right)^{k_2} \cdots \left(p_r e^{t_r}\right)^{k_r}$
The multinomial theorem from elementary algebra gives the expansion of a multinomial to the n’th power
(3) $\left(x_1 + x_2 + \cdots + x_r\right)^n = \sum_{\{k_i : \sum k_i = n\}} \dfrac{n!}{k_1!\,k_2!\cdots k_r!}\, x_1^{k_1} x_2^{k_2} \cdots x_r^{k_r}$
Comparing this to Eq. 2↑, we see that by substituting $x_k = p_k e^{t_k}$,
(4) $M_X(t_1, \ldots, t_r) = \left\langle e^{X_1 t_1 + \cdots + X_r t_r} \right\rangle = \left(p_1 e^{t_1} + p_2 e^{t_2} + \cdots + p_r e^{t_r}\right)^n$
We can use the moment generating function in Eq. 4↑ to derive the moments using the general relation
$\left\langle X_1^{j_1} \cdots X_r^{j_r} \right\rangle = \left.\dfrac{\partial^{\,j_1 + \cdots + j_r} M_X(t_1, \ldots, t_r)}{\partial t_1^{j_1} \cdots \partial t_r^{j_r}}\right|_{t_1, \ldots, t_r = 0}$
Taking the first derivative, the expected value of any component of the multinomial is
$\left\langle X_k \right\rangle = \left[ n \left(p_1 e^{t_1} + p_2 e^{t_2} + \cdots + p_r e^{t_r}\right)^{n-1} p_k e^{t_k} \right]_{t_1, \ldots, t_r = 0} = n p_k$
since $\sum_{k=1}^{r} p_k = 1$.
Taking another derivative, the second moment is $\left\langle X_k^2 \right\rangle = n(n-1)p_k^2 + n p_k$, so the variance is
$\mathrm{Var}(X_k) = \left\langle X_k^2 \right\rangle - \left\langle X_k \right\rangle^2 = n p_k (1 - p_k)$
Notice that the individual counts have the same mean and variance as a binomial random variable. This makes sense since, in any trial, outcome $k$ occurs with probability $p_k$.
Taking derivatives with respect to $t_j$ and $t_k$, the covariance for $j \neq k$ is
$\mathrm{Cov}(X_j, X_k) = -n p_j p_k$
Of course, if $j = k$, $\mathrm{Cov}(X_k, X_k) = \mathrm{Var}(X_k)$.
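These moment formulas are easy to verify by simulation. The sketch below, assuming numpy and arbitrary values of $n$ and the $p_k$, draws a large number of multinomial vectors and compares the sample mean, variance, and covariance with $n p_k$, $n p_k(1-p_k)$, and $-n p_j p_k$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
p = np.array([0.2, 0.5, 0.3])
ndraws = 200_000

X = rng.multinomial(n, p, size=ndraws)    # each row is one multinomial random vector

print("means:", X.mean(axis=0), "theory:", n * p)                    # n p_k
print("vars: ", X.var(axis=0),  "theory:", n * p * (1 - p))          # n p_k (1 - p_k)
print("cov(X1,X2):", np.cov(X[:, 0], X[:, 1])[0, 1], "theory:", -n * p[0] * p[1])
```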

Conditional expectation and variance

Ross [1] discusses conditional expectation in Section 7.4. He proves the following result
$\left\langle X \right\rangle = \left\langle \left\langle X | Y \right\rangle \right\rangle$
In addition, I will give a proof of the conditional covariance formula (see Wikipedia)
(5) $\mathrm{cov}(X, Y) = \left\langle \mathrm{cov}(X, Y | Z) \right\rangle + \mathrm{cov}\!\left(\left\langle X | Z \right\rangle, \left\langle Y | Z \right\rangle\right)$
This can be proved by starting with the computational formula for covariance
$\mathrm{cov}(X, Y) = \left\langle X Y \right\rangle - \left\langle X \right\rangle \left\langle Y \right\rangle$
Rewriting the right hand side by using conditional expectation with the variable Z,
(6) $\mathrm{cov}(X, Y) = \left\langle \left\langle X Y | Z \right\rangle \right\rangle - \left\langle \left\langle X | Z \right\rangle \right\rangle \left\langle \left\langle Y | Z \right\rangle \right\rangle$
Using the covariance computational formula again, $\left\langle X Y | Z \right\rangle = \mathrm{cov}(X, Y | Z) + \left\langle X | Z \right\rangle \left\langle Y | Z \right\rangle$. Substituting in the first term on the right hand side of (6↑),
(7) $\mathrm{cov}(X, Y) = \left\langle \mathrm{cov}(X, Y | Z) + \left\langle X | Z \right\rangle \left\langle Y | Z \right\rangle \right\rangle - \left\langle \left\langle X | Z \right\rangle \right\rangle \left\langle \left\langle Y | Z \right\rangle \right\rangle = \left\langle \mathrm{cov}(X, Y | Z) \right\rangle + \left\langle \left\langle X | Z \right\rangle \left\langle Y | Z \right\rangle \right\rangle - \left\langle \left\langle X | Z \right\rangle \right\rangle \left\langle \left\langle Y | Z \right\rangle \right\rangle$
where I have used the fact that the expectation of a sum is the sum of the expectations. The last two terms of (7↑) are the computational formula for $\mathrm{cov}\!\left(\left\langle X | Z \right\rangle, \left\langle Y | Z \right\rangle\right)$, so we have derived the conditional covariance formula, Eq. 5↑.
Since $\mathrm{var}(X) = \mathrm{cov}(X, X)$, we can use (5↑) to show that the formula for conditional variance is
(8) $\mathrm{var}(X) = \left\langle \mathrm{var}(X | Y) \right\rangle + \mathrm{var}\!\left(\left\langle X | Y \right\rangle\right)$
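The conditional covariance and variance formulas can also be checked numerically. The sketch below, assuming numpy, uses a hypothetical two-bin model, $N$ Poisson distributed, $X | N$ binomial, and $Y = N - X$, and compares the simulated variance and covariance with the predictions of Eqs. 5↑ and 8↑.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, prob, ndraws = 20.0, 0.3, 500_000

N = rng.poisson(lam, ndraws)          # plays the role of Z in Eqs. 5 and 8
X = rng.binomial(N, prob)             # X | N is binomial(N, prob)
Y = N - X                             # the remaining counts

# Eq. 8: var(X) = <var(X|N)> + var(<X|N>) = <N> p(1-p) + p^2 var(N)
var_pred = lam * prob * (1 - prob) + prob**2 * lam          # var(N) = <N> = lam for Poisson
# Eq. 5: cov(X,Y) = <cov(X,Y|N)> + cov(<X|N>, <Y|N>)
#                 = -<N> p(1-p) + p(1-p) var(N) = 0 for Poisson N
cov_pred = -lam * prob * (1 - prob) + prob * (1 - prob) * lam

print(X.var(), "theory:", var_pred)
print(np.cov(X, Y)[0, 1], "theory:", cov_pred)
```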

Statistics of PHA data with deadtime

Now we are ready to derive the statistics of PHA data with deadtime. The derivation here will follow that in Wang et al. [2]. As I discussed in my previous post, the spectrum of the measured energies with deadtime is
(9) $S_{\mathrm{deadtime}}(E) = N_0 \sum_{k=0}^{\infty} \dfrac{(\lambda\tau)^k}{k!} e^{-\lambda\tau} \left(s^{(k)*} s\right)$
where $s(E) = S(E)/N_0$, $S(E)$ is the incident energy spectrum, and $N_0 = \int S(E)\,dE$. Each term in the sum corresponds to $k$ additional photons arriving during the deadtime, so $s^{(k)*} s$ is the normalized spectrum convolved with itself $k$ times and, by definition, $s^{(0)*} s = s$.
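Eq. 9↑ can be evaluated by direct numerical convolution. The sketch below, assuming numpy, uses a made-up triangular incident spectrum and arbitrary rate and deadtime values purely to illustrate the calculation; it is not the code from my other posts.

```python
import numpy as np
from math import factorial

E = np.arange(0.0, 500.0, 0.5)                   # energy grid (keV), wide enough to hold pileup
dE = E[1] - E[0]
S = np.where(E < 100.0, E * (100.0 - E), 0.0)    # made-up incident spectrum S(E)
N0 = S.sum() * dE                                # N0 = integral of S(E) dE
s = S / N0                                       # normalized spectrum s(E)

lam, tau = 5.0e6, 100e-9                         # illustrative incident rate (1/s) and deadtime (s)
ltau = lam * tau

Sdead = np.zeros_like(E)
term = s.copy()                                  # k = 0 term: s^(0)* s = s
for k in range(8):                               # truncate the Poisson-weighted sum over k
    Sdead += N0 * np.exp(-ltau) * ltau**k / factorial(k) * term
    term = np.convolve(term, s)[:len(E)] * dE    # convolve with s once more for the next term

print(Sdead.sum() * dE / N0)                     # close to 1: each term integrates to its weight
```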
With PHA, we group the measured energies into $r$ bins corresponding to the energy ranges $[E_{k-1}, E_k]$, $k = 1\ldots r$. Suppose we have a fixed number of recorded counts, $M$. Then, since the photons have random energies, each bin count $M_k | M$, $k = 1\ldots r$, is the number of photons with energies in the bin's energy range. The probability for each bin is
$p_k = \dfrac{\int_{E_{k-1}}^{E_k} S_{\mathrm{deadtime}}(E)\, dE}{\int S_{\mathrm{deadtime}}(E)\, dE}$
and the counts are the components of a multinomial random vector. By the results above, the expected value and the variance of the counts in a bin for fixed $M$ are
$\left\langle M_k | M \right\rangle = M p_k$
$\mathrm{var}(M_k | M) = M p_k (1 - p_k).$
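Before moving on, here is a minimal sketch, assuming numpy and a placeholder spectrum, of how the bin fractions $p_k$ can be computed from a spectrum sampled on an energy grid:

```python
import numpy as np

E = np.arange(0.0, 200.0, 0.5)                   # energy grid (keV)
dE = E[1] - E[0]
Sdead = np.exp(-((E - 60.0) / 30.0) ** 2)        # placeholder distorted spectrum
edges = [0.0, 40.0, 60.0, 80.0, 200.0]           # r = 4 PHA bin edges E_0 ... E_r

total = Sdead.sum() * dE
p = np.array([Sdead[(E >= lo) & (E < hi)].sum() * dE
              for lo, hi in zip(edges[:-1], edges[1:])]) / total

print(p, p.sum())                                # bin fractions p_k; they sum to ~1
```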
Applying the conditional expectation formula, the mean value for each bin is
(10) $\left\langle M_k \right\rangle = \left\langle \left\langle M_k | M \right\rangle \right\rangle = \left\langle M p_k \right\rangle = \left\langle M \right\rangle p_k$
We can apply the laws of conditional variance Eq. 8↑ and covariance Eq. 5↑ to derive the statistics of the PHA data. First the variance
(11) $\mathrm{var}(M_k) = \left\langle \mathrm{var}(M_k | M) \right\rangle + \mathrm{var}\!\left(\left\langle M_k | M \right\rangle\right) = \left\langle M p_k (1 - p_k) \right\rangle + \mathrm{var}(M p_k) = p_k (1 - p_k)\left\langle M \right\rangle + p_k^2\, \mathrm{var}(M) = \left\langle M \right\rangle p_k + \left(\mathrm{var}(M) - \left\langle M \right\rangle\right) p_k^2$
The covariance can be derived similarly
(12) $\mathrm{cov}(M_j, M_k) = \left\langle \mathrm{cov}(M_j, M_k | M) \right\rangle + \mathrm{cov}\!\left(\left\langle M_j | M \right\rangle, \left\langle M_k | M \right\rangle\right) = -p_j p_k \left\langle M \right\rangle + p_j p_k\, \mathrm{var}(M) = p_j p_k \left(\mathrm{var}(M) - \left\langle M \right\rangle\right)$
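Eqs. 10↑, 11↑, and 12↑ can be checked with a small simulation even before the full Monte Carlo of my next post. The sketch below, assuming numpy, draws the total counts $M$ from an arbitrary distribution with $\mathrm{var}(M) < \left\langle M \right\rangle$ (a binomial stand-in, loosely mimicking deadtime losses), splits each total into bins with a multinomial, and compares the sample statistics with the formulas.

```python
import numpy as np

rng = np.random.default_rng(2)
p = np.array([0.3, 0.5, 0.2])                      # PHA bin fractions
ndraws = 100_000

M = rng.binomial(1000, 0.8, ndraws)                # total recorded counts, var(M) < <M>
Mk = np.array([rng.multinomial(m, p) for m in M])  # split each total into the bins

Mmean, Mvar = M.mean(), M.var()
print("means:", Mk.mean(axis=0), "theory:", Mmean * p)                          # Eq. 10
print("vars: ", Mk.var(axis=0),  "theory:", Mmean * p + (Mvar - Mmean) * p**2)  # Eq. 11
print("cov(M1,M2):", np.cov(Mk[:, 0], Mk[:, 1])[0, 1],
      "theory:", p[0] * p[1] * (Mvar - Mmean))                                  # Eq. 12
```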

Discussion

The expected value in Eq. 10↑ is straightforward, but $p_k$ is the fraction of the distorted spectrum with non-zero deadtime, Eq. 9↑, in each energy bin. In a typical x-ray imaging system, this spectrum, and therefore the fractions, can change markedly since the incident count rate can vary by a factor of 100 from air to the interior of the object.
If the deadtime is zero, then the total recorded counts $M$ are Poisson distributed with mean $\left\langle M \right\rangle = \lambda T$ equal to the variance $\mathrm{var}(M) = \lambda T$. Therefore, by Eq. 11↑, the variance of the counts in each bin is also equal to its expected value. With non-zero deadtime, my previous post shows that the expected value and variance for large counts are
$\left\langle M \right\rangle = \dfrac{\lambda T}{1 + \lambda\tau}$
$\mathrm{var}(M) = \dfrac{\lambda T}{(1 + \lambda\tau)^3}$
where $\lambda$ is the incident count rate, $T$ is the integration time, and $\tau$ is the deadtime. Note that in the formula for the spectrum with deadtime, Eq. 9↑, above, $N_0 = \lambda T$. In this case, $\mathrm{var}(M) < \left\langle M \right\rangle$, so from Eq. 11↑ the variance of the PHA bin counts is also less than expected from Poisson statistics, $\mathrm{var}(M_k) < \left\langle M \right\rangle p_k$. This makes sense since photons that arrive during the deadtimes do not change the counts, so the variance is reduced.
Also, Eq. 12↑ shows that with zero deadtime, so that $\mathrm{var}(M) = \left\langle M \right\rangle$, the covariance is 0. With non-zero deadtime, $\mathrm{var}(M) < \left\langle M \right\rangle$ and the covariance is negative.
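To put some numbers on this, the short sketch below plugs illustrative values of $\lambda$, $T$, $\tau$, and bin fractions into the large-count formulas and Eqs. 11↑ and 12↑; the bin variances come out below the Poisson value $\left\langle M \right\rangle p_k$ and the covariance is negative.

```python
import numpy as np

lam, T, tau = 1.0e6, 1.0e-3, 200e-9              # incident rate (1/s), integration time (s), deadtime (s)
p = np.array([0.3, 0.5, 0.2])                    # illustrative bin fractions of the distorted spectrum

Mmean = lam * T / (1 + lam * tau)                # <M>, about 833 counts for these values
Mvar  = lam * T / (1 + lam * tau) ** 3           # var(M), about 579, less than <M>

var_bins = Mmean * p + (Mvar - Mmean) * p**2     # Eq. 11: sub-Poisson bin variances
cov_12   = p[0] * p[1] * (Mvar - Mmean)          # Eq. 12: negative covariance

print(Mmean, Mvar)
print(var_bins, "Poisson value:", Mmean * p)     # each bin variance is below <M> p_k
print(cov_12)                                    # negative
```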

References

[1] Sheldon M. Ross: A First Course in Probability. Prentice Hall College Div, 1997.

[2] Adam S. Wang, Daniel Harrison, Vladimir Lobastov, and J. Eric Tkaczyk: "Pulse pileup statistics for energy discriminating photon counting x-ray detectors", Medical Physics, pp. 4265-4275, 2011.