The Velocity Integration Implementation
The implementation of velocity integration is a little confusing. Could someone help clarify this issue?
In the network.py file, there are two different implementations. One is:
```python
if use_miccai_int:
    # for the miccai2018 submission, the squaring layer was manually
    # composed of a Transform and an Add layer; the scaling was
    # essentially built in by the network
    v = flow
    for _ in range(int_steps):
        v1 = nrn_layers.SpatialTransformer(interp_method='linear', indexing=indexing)([v, v])
        v = keras.layers.add([v, v1])
    flow = v
```
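The MICCAI-style loop above can be sketched on a 1-D grid (a hypothetical toy, with `np.interp` standing in for `nrn_layers.SpatialTransformer`): each pass warps the field by itself and adds the result, which doubles the time span over which the field has been integrated.

```python
import numpy as np

def miccai_squaring(vel, int_steps):
    """Toy 1-D sketch of the squaring loop above.

    np.interp plays the role of SpatialTransformer([v, v]) (linear
    interpolation of v at the warped locations x + v). Note there is no
    division by 2**int_steps here: per the comment in network.py, that
    scaling is assumed to be produced by the network itself.
    """
    grid = np.arange(len(vel), dtype=float)
    v = np.asarray(vel, dtype=float)
    for _ in range(int_steps):
        v1 = np.interp(grid + v, grid, v)  # v warped by itself
        v = v + v1                          # doubles the integration time
    return v

# sanity check on a linear field: for v(x) = (a / 2**K) * x, K squaring
# passes should approximate the exact flow displacement x * (exp(a) - 1)
a, n, K = 0.05, 64, 7
vel = (a / 2 ** K) * np.arange(n, dtype=float)
disp = miccai_squaring(vel, K)
exact = np.arange(n) * (np.exp(a) - 1.0)
print(np.max(np.abs(disp[: n // 2] - exact[: n // 2])))  # small interior error
```

(The comparison is restricted to the interior of the grid because `np.interp` clamps at the boundary, which distorts the last few points.)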
the other one is:
```python
else:
    # new implementation in neuron is cleaner.
    z_sample = flow
    flow = nrn_layers.VecInt(method='ss', name='flow-int', int_steps=int_steps)(z_sample)
    if bidir:
        rev_z_sample = Negate()(z_sample)
        neg_flow = nrn_layers.VecInt(method='ss', name='neg_flow-int', int_steps=int_steps)(rev_z_sample)
```
The new implementation in /ext/neuron/neuron/utils.py is:
```python
if method in ['ss', 'scaling_and_squaring']:
    nb_steps = kwargs['nb_steps']
    assert nb_steps >= 0, 'nb_steps should be >= 0, found: %d' % nb_steps

    if time_dep:
        svec = K.permute_dimensions(vec, [-1, *range(0, vec.shape[-1] - 1)])
        assert 2**nb_steps == svec.shape[0], "2**nb_steps and vector shape don't match"

        svec = svec / (2**nb_steps)
        for _ in range(nb_steps):
            svec = svec[0::2] + tf.map_fn(transform, svec[1::2, :], svec[0::2, :])
        disp = svec[0, :]

    else:
        vec = vec / (2**nb_steps)
        for _ in range(nb_steps):
            vec += transform(vec, vec)
        disp = vec
```
After checking the code in /ext/neuron/neuron/utils.py, these two implementations look quite different. The former MICCAI version takes the network output (flow) directly as the per-step velocity. The new implementation first divides the output (flow) by 2**nb_steps: if nb_steps equals the default value (7), the output (flow) is divided by 2**7 = 128. It then applies the same squaring procedure to the divided flow, which can be interpreted as integrating the scaled flow.
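The two branches are in fact consistent once the scaling is accounted for. This hypothetical 1-D sketch (with `np.interp` standing in for `SpatialTransformer`/`transform`) shows that running the MICCAI-style squaring loop on a flow that has already been divided by 2**nb_steps gives exactly the same displacement as the VecInt-style routine that divides internally:

```python
import numpy as np

def miccai_integrate(vel, int_steps):
    # MICCAI-style: squaring only; assumes vel is already scaled by 1/2**int_steps
    grid = np.arange(len(vel), dtype=float)
    v = np.asarray(vel, dtype=float)
    for _ in range(int_steps):
        v = v + np.interp(grid + v, grid, v)  # v + v warped by v
    return v

def vecint_integrate(vel, nb_steps):
    # VecInt-style: scale by 1/2**nb_steps first, then square nb_steps times
    grid = np.arange(len(vel), dtype=float)
    v = np.asarray(vel, dtype=float) / (2 ** nb_steps)
    for _ in range(nb_steps):
        v = v + np.interp(grid + v, grid, v)
    return v

a, n, steps = 0.05, 64, 7
raw = a * np.arange(n, dtype=float)  # stands in for the raw network output
# the two routines agree once the MICCAI input is pre-scaled by 1/2**7 = 1/128
same = np.allclose(miccai_integrate(raw / 2 ** steps, steps),
                   vecint_integrate(raw, steps))
print(same)
```

So the difference is only in where the division happens: in the MICCAI branch the network is expected to emit the already-scaled field, while VecInt accepts the unscaled field and scales it itself.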
Why is the new implementation so different from the former? And why should the network output (flow) be divided by 128? After this processing, I would expect the divided flow to be near zero unless the network output flow is large enough.
Issue Analytics
- Created 4 years ago
- Comments: 5
Top GitHub Comments
The scaling and squaring approximates the integration operation. If the integration were obtained exactly, it would result in no folding voxels in this case (since trajectories that follow the vector field cannot cross). More integration steps mean a finer integration, which means a better approximation, and hence being closer to "no folding voxels".
I don’t expect the Dice to vary too much – essentially 7 integration steps is quite accurate, and 6 or 8 wouldn’t affect the deformation so much so as to change the Dice – it will just affect a few small cases that have folds, more or less.
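One way to see this concretely is a hypothetical 1-D sketch (not the repo's code): count "folding" points as locations where the derivative of the map x + disp(x) is non-positive. A steep velocity field folds when applied directly (zero squaring steps, i.e. a single crude Euler step), while the scaled-and-squared integral of the same field stays diffeomorphic:

```python
import numpy as np

def integrate_ss(vel, nb_steps, period):
    """Scaling-and-squaring on a periodic 1-D grid (toy stand-in for VecInt;
    period= makes np.interp wrap instead of clamping at the edges)."""
    grid = np.arange(len(vel), dtype=float)
    disp = np.asarray(vel, dtype=float) / (2 ** nb_steps)
    for _ in range(nb_steps):
        disp = disp + np.interp(grid + disp, grid, disp, period=period)
    return disp

def folding_count(disp):
    # in 1-D, the map x + disp(x) folds where its derivative is <= 0
    return int(np.sum(1.0 + np.gradient(disp) <= 0))

n = 128
x = np.arange(n, dtype=float)
vel = 30.0 * np.sin(2 * np.pi * x / n)  # steep enough that x + vel folds

folds_crude = folding_count(integrate_ss(vel, 0, n))  # disp == vel: folds
folds_fine = folding_count(integrate_ss(vel, 7, n))   # near-exact integral
print(folds_crude, folds_fine)
```

With zero steps the displacement is just the raw field and the map folds wherever the field's slope drops below -1; with seven steps the result approximates the true exponential of the field, which in 1-D is monotone, so the fold count drops to zero.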
@adalca Thanks for your fast response. And yes, that was my confusion. Most importantly, why does the "mean folding vox" metric decrease with a larger number of integration steps?