Add (iterative) cleaning using masks #6
@gigjozsa, from this statement: "cutoff[i]*rms_original, where rms_original is the rms of the original (unfiltered) data cube": is the "original unfiltered cube" the dirty map, or is it a clean cube without filtering?
Yup, that's too cryptic. It's the restored data cube in this iteration, not convolved (and in that sense the "original").
I see. Do we use the noise per channel or for the whole cube? I suppose since we are cleaning each channel it makes sense to use the noise per channel?
Neither is perfect, but per channel appears to be more appropriate. Any highly variable noise would be a concern anyway.
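As a small illustration of the per-channel option discussed above, here is a minimal sketch (assuming numpy and an axis order of channel, y, x; the function name is my own, not from any pipeline):

```python
import numpy as np

def per_channel_rms(cube):
    """Rms noise of each spectral channel; assumes axis order (channel, y, x)."""
    # Flatten each channel and take its standard deviation; on real data
    # the source emission would first have to be masked or clipped out.
    return cube.reshape(cube.shape[0], -1).std(axis=1)

rng = np.random.default_rng(0)
noise_cube = rng.normal(0.0, 1.0, size=(8, 64, 64))  # pure-noise test cube
channel_rms = per_channel_rms(noise_cube)            # one rms per channel
```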
This is approximately how it is done in the Miriad pipeline used for
Serra et al. 2012 (http://esoads.eso.org/abs/2012MNRAS.422.1835S),
Wang et al. 2013 (http://esoads.eso.org/abs/2013MNRAS.433..270W), and
Janowiecki et al. 2015 (http://esoads.eso.org/abs/2015ApJ...801...96J),
which is HI work. HI emission can be concentrated, point-source-like, or extended. The trick is to filter the cubes with filters of different sizes (3D Gaussians) to enhance emission at different scales and to define the clean region based on that.
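The multi-scale filtering idea can be sketched as follows (a minimal illustration using scipy; the function name, FWHM values, and axis order v, y, x are my own assumptions, not the pipeline's):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Convert a Gaussian FWHM to the sigma expected by gaussian_filter.
FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def filtered_versions(cube, fwhms):
    """Smooth the cube with each 3D Gaussian; FWHMs given per axis (v, y, x)."""
    for fwhm in fwhms:
        sigma = tuple(f * FWHM_TO_SIGMA for f in fwhm)
        yield gaussian_filter(cube, sigma=sigma)

cube = np.random.default_rng(1).normal(size=(16, 32, 32))
# Two illustrative kernels: a small one and a larger one (in pixels).
smoothed = list(filtered_versions(cube, [(3, 5, 5), (5, 10, 10)]))
```

Each smoothed cube suppresses noise and enhances structure at its own scale, which is what makes the per-scale thresholds below meaningful.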
Use a Hogbom or Clark clean (certainly other variants work as well), with an unlimited number of iterations but a cutoff level set.
Repeat the loop below `niters` times, with the running index of the iteration denoted as `i` (0 <= `i` < `niters`).
For the `i`th iteration define a number `cleancut[i]`, with `cleancut[i]` < `cleancut[i+1]` for all `i` and the last one, `cleancut[niters-1]`, being very large.
For the `i`th iteration define a multiplier `n[i]` for the rms noise to determine clean masks, with `n[i] >= n[i+1]`.
For the `i`th iteration define a cutoff parameter `cutoff[i]` for the cleaning, with `cutoff[i] >= cutoff[i+1]`.
Start:
-- Produce m similar cubes by filtering the restored cube from the previous iteration (the dirty cube at the beginning) with m different 3D Gaussians (with FWHMs fwhm_x_1, fwhm_y_1, fwhm_v_1, ..., fwhm_x_m, fwhm_y_m, fwhm_v_m), i.e. smoothing in all three directions. (Remark: in the pipeline we performed Hanning smoothing along the velocity axis instead, simply because Miriad does not provide Gaussian filtering along the third axis.)
-- For each resulting cube and for the original cube calculate the maximum `max` and the rms `rms`.
-- For each resulting cube and for the original cube calculate a threshold `thre` as the maximum of `max/cleancut[i]` and `n[i]*rms`.
-- For each resulting cube and for the original cube define a partial clean region as all voxels above `thre`.
-- Define the united clean region as the union of the partial clean regions of the single cubes.
-- Clean the dirty cube using the united clean region, i.e. allowing clean components only inside the united clean region. Use an unlimited number of iterations, but set a cutoff parameter of `cutoff[i]*rms_original`, where `rms_original` is the rms of the original (unfiltered) data cube.
-- Produce the restored cube.
-- If `i` = `niters`, stop; otherwise increase `i` by 1 and start the loop again (goto Start).

For the Serra pipeline the parameters are as follows:
(In that pipeline a Hanning filter with a width of 9 pixels was used rather than a Gaussian.) A code snippet might clarify some questions:
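As a rough illustration of the loop above (not the pipeline's actual code), here is a minimal Python sketch. `deconvolve` is only a placeholder standing in for a real Hogbom/Clark clean, and all function names and parameter values are my own assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def deconvolve(dirty, mask, cutoff):
    """Placeholder clean: keep dirty-cube voxels inside the mask, then zero
    everything below the cutoff (a real clean would subtract the PSF)."""
    restored = np.where(mask, dirty, 0.0)
    restored[np.abs(restored) < cutoff] = 0.0
    return restored

def iterative_masked_clean(dirty, niters, cleancut, n, cutoff, sigmas):
    restored = dirty.copy()
    rms_original = dirty.std()  # rms of the unfiltered dirty cube
    for i in range(niters):
        # United clean region: union of the thresholded regions of the
        # unfiltered cube and of each Gaussian-filtered version of it.
        mask = np.zeros(dirty.shape, dtype=bool)
        for sigma in [None] + list(sigmas):
            cube = restored if sigma is None else gaussian_filter(restored, sigma)
            thre = max(cube.max() / cleancut[i], n[i] * cube.std())
            mask |= cube > thre
        restored = deconvolve(dirty, mask, cutoff[i] * rms_original)
    return restored

rng = np.random.default_rng(0)
dirty = rng.normal(0.0, 0.1, size=(8, 32, 32))  # synthetic noise cube
dirty[4, 16, 16] += 10.0                        # one bright point source
cleaned = iterative_masked_clean(dirty, niters=2, cleancut=[3.0, 1e6],
                                 n=[4.0, 3.0], cutoff=[3.0, 0.5],
                                 sigmas=[(1.0, 2.0, 2.0)])
```

The schedule does the intended work: early iterations use a tight `cleancut` and high `n`, so only the brightest emission enters the mask; later iterations relax both, letting the mask grow around emission already found.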
Remarks:
i) To determine the rms (while filtering out all positive sources), I prefer the following method: make a histogram and fit a Gaussian to the part left of the peak (left indicating the negative direction), neglecting (ideally) the positive part of the histogram. The sigma of that Gaussian is a good estimate of the rms.
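The rms estimate of remark i) can be sketched like this (an illustrative implementation with scipy's curve_fit; names and bin count are my own choices):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def negative_side_rms(data, nbins=100):
    """Fit a Gaussian to the histogram left of its peak; return its sigma."""
    counts, edges = np.histogram(data.ravel(), bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    left = centers <= centers[np.argmax(counts)]  # negative, source-free side
    popt, _ = curve_fit(gauss, centers[left], counts[left],
                        p0=(counts.max(), 0.0, data.std()))
    return abs(popt[2])

rng = np.random.default_rng(2)
data = rng.normal(0.0, 2.0, size=100_000)
data[:1000] += 50.0  # bright positive "sources" that inflate a naive std
rms = negative_side_rms(data)  # close to the true noise sigma of 2
```

A plain standard deviation of this sample is pulled well above 2 by the positive sources, while the left-side fit recovers the noise level.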
ii) The Serra pipeline runs only one filter to determine an additional clean mask of a convolved cube. For WSRT data it might be good to also try a
in addition.
iii) Again, this does not need to be restricted to a Hogbom clean, but can be used with any suitable deconvolution algorithm.