Opened 12 years ago
Closed 11 years ago
#486 closed defect (fixed)
Zero cross section
Reported by: | ohl | Owned by: | Juergen Reuter |
---|---|---|---|
Priority: | P4 | Milestone: | v2.2.0 |
Component: | core | Version: | 2.1.1 |
Severity: | major | Keywords: | zero cross section |
Cc: |
Description
At some time between r3966 and r3979, the process gl, gl => t, B, d, U lost all its cross section:
  model = SM
  process leptons = gl, gl => t, B, e1, N1
  process jets = gl, gl => t, B, d, U
  process boson = gl, gl => t, B, Wm
  compile
  sqrts = 1 TeV
  beams = gl, gl
  integrate (leptons, jets, boson) { iterations = 2:1000 }
That's when WK merged CS's code (found by Fabian).
Change History (30)
comment:1 Changed 12 years ago by
comment:2 Changed 12 years ago by
comment:3 Changed 12 years ago by
My suspicion seems to have been undermined, and the culprit does indeed seem to be CS! It looks like the helicity selection rules. The reason why the process "leptons" is non-zero while the process "jets" is zero is that the electron is massive, whereas d and ubar are both massless. Setting me = 0 also results in a zero cross section for the process "leptons". However, switching off the helicity selection rules does not help. Strange. CS, what did you do???
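For reference, a minimal SINDARIN sketch of the check described in this comment: me = 0 is the massless-electron test mentioned above, while ?helicity_selection_active is only an assumed name for the switch that turns off the helicity selection rules (it is not taken from this ticket).
  model = SM
  me = 0                                # make the electron massless, as in the test above
  # ?helicity_selection_active = false  # assumed flag name for disabling the helicity selection rules
  process leptons = gl, gl => t, B, e1, N1
  compile
  sqrts = 1 TeV
  beams = gl, gl
  integrate (leptons) { iterations = 2:1000 }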
comment:4 Changed 12 years ago by
It seems the helicity selection is NOT the culprit after all; instead, many things in processes.f90 went berserk. Sometimes I don't understand the changes by CS.
comment:5 Changed 12 years ago by
When using process_write to debug, there is something wrong with the interaction_write statement for the structure functions. Secondly, the phase-space forest initialization does not seem to have worked properly.
comment:6 Changed 12 years ago by
Priority: | P1 → P0 |
---|
comment:7 Changed 12 years ago by
I am tempted to say that this is connected to #468: if WK reworks that part of the code (processes and/or integration), we will (hopefully) find it :(
comment:8 Changed 12 years ago by
Whatever it is, we should definitely add it as a unit test. Going home now.
comment:9 Changed 12 years ago by
It's process_compute_vamp_phs_factor; the weights seem OK, so it's probably the grids...
comment:10 Changed 12 years ago by
I can't reproduce the problem in my whizard_nlo branch, so if it is indeed connected to the NLO code, I suspect that bug was introduced or exposed during the merge.
comment:11 Changed 12 years ago by
OMG, this means we have to go through the code line by line ... have fun.
comment:12 Changed 12 years ago by
Thanks for the tip anyhow, CS. So it seems to be an incompatibility with the changes in the trunk between the point where your branch was forked and the NLO setup.
comment:13 Changed 12 years ago by
Before you edit this, WK: I'm investigating it at the moment :) (after making a roast chicken with Asian vegetables)
comment:14 Changed 12 years ago by
Could be. In any case, the quick hacks that incorporated CS's branch in the trunk are not going to stay. The process component and library handler are being rewritten, essentially from scratch. So, we should check this as a test case once the program is up and running again, but right now it's pointless.
(If you just found the problem, go ahead, but otherwise don't worry.)
comment:16 Changed 12 years ago by
I found the culprit: it's the ?phs_step_mapping flag. If you set it to false, the previously vanishing cross sections come out non-zero again. But I don't understand yet why it causes a problem only from r3979 (the merge) on.
comment:17 Changed 12 years ago by
Mysterious: the phs_tree for r3966 and r3979 is the same, but in one version the problem is there and in the other it isn't. :(
  exp_type = F   variable_lim = F
  External: 6   Mask: 63
  Incoming: 2   Mask: 48
  Branches: 9
     32
     16
  *  15   Daughters: 2 13 + (axis +)
  *  13   Daughters: 1 12 + (axis +)
  *  12   Daughters: 4  8 + (axis +)
      8
      4
      2
      1
  Branch # 12: Mapping (s_channel) for particle "W+"
    m/w    = 80.418999999999997  2.0489999999999999
    a1/2/3 = 0.0000000000000000  0.0000000000000000  0.0000000000000000
  Arrays: mass_sum, effective_mass, effective_width
     1  173.09999999999999    173.09999999999999    0.0000000000000000
     2    4.2000000000000002    4.2000000000000002  0.0000000000000000
     4    0.0000000000000000    0.0000000000000000  0.0000000000000000
     8    0.0000000000000000    0.0000000000000000  0.0000000000000000
    12    0.0000000000000000   80.418999999999997   2.0489999999999999
    13  173.09999999999999    253.51900000000001    2.0489999999999999
    15  177.29999999999998    257.71899999999999    2.0489999999999999
comment:18 Changed 12 years ago by
comment:19 Changed 12 years ago by
The step mappings were not present in the nlo_lab_tagged version. Somehow they seem to cause the problem now.
comment:20 Changed 12 years ago by
phs_forest_set_parameters is called twice: one call fills the forest subtype of process, the other the forest subtype of process%kinematic_configuration_in. For this second call there appears to be an unknown particle in the tree:
  Branch # 12: Mapping (step_hyp ) for particle "?"
    m/w    = 0.0000000000000000  0.0000000000000000
    a1/2/3 = 0.0000000000000000  0.0000000000000000  0.0000000000000000
but I don't know whether this causes the problem.
comment:21 Changed 12 years ago by
Some more info: the problem only appears if both of the variables
?phs_step_mapping
and
?phs_keep_resonant
are true. If either of them is set to false, the problem goes away.
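A minimal SINDARIN sketch of the workaround that follows from comments 16 and 21, reusing the process setup from the description; the two flag names are quoted from this ticket, and setting them globally before integrate like this is an assumption about how they are meant to be used:
  model = SM
  process jets = gl, gl => t, B, d, U
  compile
  sqrts = 1 TeV
  beams = gl, gl
  # The problem only shows up when both flags below are true,
  # so switching either one to false avoids the zero cross section.
  ?phs_step_mapping = false
  # ?phs_keep_resonant = false   # alternatively, disable this one instead
  integrate (jets) { iterations = 2:1000 }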
comment:23 Changed 12 years ago by
I think this is all I can contribute. WK, could you comment on this? How shall we proceed?
comment:24 Changed 12 years ago by
Priority: | P0 → P1 |
---|---|
Severity: | blocker → major |
comment:25 Changed 12 years ago by
Owner: | changed from kilian to Juergen Reuter |
---|
comment:26 Changed 12 years ago by
Once processes are integrable again and the PHS (incl. the step-function mapping, which I suspect) is back, I will run this. I expect the problem to go away then, but let's see. Depends on #468.
comment:27 Changed 12 years ago by
Priority: | P1 → P4 |
---|
comment:28 Changed 11 years ago by
Owner: | changed from Juergen Reuter to ALL |
---|
comment:29 Changed 11 years ago by
Owner: | changed from ALL to Juergen Reuter |
---|---|
Status: | new → assigned |
I will do these checks as soon as ticket #534 has been closed.
comment:30 Changed 11 years ago by
Resolution: | → fixed |
---|---|
Status: | assigned → closed |
As of now (r5206) this SINDARIN file runs completely fine. I guess it was a spurious problem from the merge of CS's code with the then-current trunk. Closing.
Confirmed. I have a suspicion, but no strong opinion yet. "May you live in interesting times"? Sometimes a bit more quietude would be great!