whizard is hosted by Hepforge, IPPP Durham

Opened 14 years ago

Closed 6 years ago

#364 closed defect (fixed)

Out of memory in phase space generation

Reported by: ohl Owned by: brass
Priority: P1 Milestone: v2.6.4
Component: phase_space Version: 2.0.3
Severity: critical Keywords:
Cc:

Description

For a certain ttbar + 1 jet process (i.e. q, g -> b bbar l+ l- nu nubar q for q = u,d,c,s,U,D,C,S), the phase space generation grows big enough to trigger an "out of memory" failure!

Change History (27)

comment:1 Changed 14 years ago by ohl

Priority: P3 → P2
Severity: normal → major

comment:2 Changed 14 years ago by kilian

Status: new → assigned

Can't reproduce. With this .sin file

alias q = u:d:c:s:U:D:C:S
alias lp = E1:E2
alias lm = e1:e2
alias nu = n1:n2
alias nubar = N1:N2
process ttj = q,g => b, bbar, lp, lm, nu, nubar, q

ms = 0  mc = 0  me = 0  mmu = 0
sqrts = 2000
integrate (ttj)

memory consumption peaks at 1 GB. O'Mega needs about 200 MB, gfortran uses 500 MB (with -O3), phase space close to 1 GB.

It appears, however, that the memory used during phase-space setup is not freed properly: if I run as above, memory usage stays at 1 GB. If I create the phase space and use it in a separate run, WHIZARD uses no more than 250 MB. This should be looked at (in a separate ticket?), but since peak memory is not affected, it doesn't look urgent.

1 GB is still manageable even on 32-bit, but phase space really emerges as a bottleneck in more complicated processes. I might look for a more memory-efficient algorithm, but I propose to postpone this for now. Opinions?

comment:3 Changed 14 years ago by Juergen Reuter

Milestone: v2.0.4 → v2.0.5

As discussed with WK on the phone, we postpone the implementation of a more efficient algorithm until at least 2.0.5.

comment:4 Changed 14 years ago by Juergen Reuter

Priority: P2 → P1
Severity: major → critical

This is really severe and prevents one from using big LHC processes. Ranking it up.

comment:5 Changed 14 years ago by Juergen Reuter

This issue is becoming more and more severe. FB reports BSM processes (in the NMSSM) for which phase-space generation fails for every 2->6 process on 32-bit.

comment:6 Changed 14 years ago by Juergen Reuter

Some new values to add here: for a 2->8 process in the SM, the phase-space generation now takes 31 GB! In general, for 2->8 SM or 2->6 BSM processes, the phase-space generation needs a week or so.

comment:7 Changed 14 years ago by Juergen Reuter

Had to kill processes whose memory exceeded 50 GB! This really is a show-stopper. BSM processes do not work at all.

comment:8 Changed 13 years ago by Juergen Reuter

Priority: P1 → P2

Reranked after discussion with WK.

comment:9 Changed 13 years ago by Juergen Reuter

Milestone: v2.0.5 → v2.0.6

comment:10 Changed 13 years ago by Juergen Reuter

Milestone: v2.0.6 → v2.1.0

This is for sure on a longer time scale; moving it.

comment:11 Changed 12 years ago by Juergen Reuter

Milestone: v2.2.0 → v2.1.0
Priority: P2 → P1

This has become much more urgent now: the following moderately simple example is already borderline:

model = SM_CKM
alias j = u:d:c:s:b
alias J = U:D:C:S:B
process tth_had = e1, E1 => b,b,B,B,j,j,J,J
compile
sqrts = 2500
ms = 0
mc = 0
mb = 0
integrate (tth_had)

Phase-space generation has been running for 2.5 days now and is currently at a size of 19 GB.

comment:12 Changed 12 years ago by Juergen Reuter

Milestone: v2.1.0 → v2.1.1

comment:13 Changed 8 years ago by kilian

Owner: changed from kilian to brass
Status: assigned → new

comment:14 Changed 8 years ago by Juergen Reuter

Component: core → phase_space
Milestone: v2.3.1 → v2.3.2

comment:15 Changed 8 years ago by Juergen Reuter

Is there any news on this one?

comment:16 Changed 8 years ago by Juergen Reuter

Milestone: v2.3.2 → v2.4.0

Milestone renamed

comment:17 Changed 7 years ago by Juergen Reuter

Milestone: v2.4.0 → v2.4.1

comment:18 Changed 7 years ago by Juergen Reuter

During the last Wuerzburg WHIZARD meeting at the end of 2016, TO confirmed that the memory consumption still seems to grow, but we have no further data points. Shall we close the ticket for now and (re)open it again if we experience any more problems?

comment:19 Changed 7 years ago by Juergen Reuter

Milestone: v2.4.1 → v2.5.0

Milestone renamed

comment:20 Changed 7 years ago by Juergen Reuter

Milestone: v2.5.0 → v2.6.0

comment:21 Changed 7 years ago by Juergen Reuter

Milestone: v2.6.0 → v2.6.1

comment:22 Changed 7 years ago by Juergen Reuter

With the fast_wood/cascades2 method now merged, shall we close this issue? Or do we still aim to track down possible memory leaks in the (old) cascades implementation?

comment:23 Changed 6 years ago by Juergen Reuter

Milestone: v2.6.1 → v2.6.2

comment:24 Changed 6 years ago by Juergen Reuter

Update from MU: he also sees this growth in virtual memory when the cascades2 (fast_wood) method is applied. So the "out of memory" issue is not yet solved. MU promised to look into this.

comment:25 Changed 6 years ago by Juergen Reuter

Milestone: v2.6.2 → v2.6.3

comment:26 Changed 6 years ago by Juergen Reuter

Milestone: v2.6.3 → v2.6.4

comment:27 Changed 6 years ago by Juergen Reuter

Resolution: fixed
Status: new → closed

After discussion with MU and WK, MU is convinced that this cannot be further improved within the cascades2 algorithm at the moment. The problem is the setup of phs_trees/phs_forests. The advice for users who run into memory problems is to use the cascades2 framework. Only if users then run into out-of-memory issues again will we reopen the ticket. Closing for now.
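For reference, the recommended workaround can be requested directly in a Sindarin script. A minimal sketch, assuming the cascades2 implementation is selected via the $phs_method option with the value "fast_wood" (the option name used in the WHIZARD manual; process definition taken from comment:2):

alias q = u:d:c:s:U:D:C:S
process ttj = q, g => b, bbar, E1:E2, e1:e2, n1:n2, N1:N2, q

! select the cascades2 ("fast_wood") phase-space setup
! instead of the default "wood" method
$phs_method = "fast_wood"
sqrts = 2000
integrate (ttj)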

Note: See TracTickets for help on using tickets.