memory issue in GW calculation

Various technical topics, such as parallelism and efficiency, netCDF problems, and the Yambo code structure itself, are posted here.



Postby mferri » Mon May 02, 2016 11:20 am

Dear Developers,
I'm using Yambo v3.4.1 to calculate GW corrections for my 1D silicene system.
First, I performed an nscf calculation with 25 k-points and 250 bands. I was able to converge some parameters using 2 nodes of my local cluster (48 GB RAM and 12 cores per node).
Then, to converge other parameters (BndsRnXp and GbndRnge), I performed another nscf calculation, increasing the number of bands to 350 while keeping 25 k-points. Now Yambo requires a lot of memory to perform the calculation, even if I decrease the value of each parameter.
As you can see in the attachments, if I set 250 bands, I'm able to perform the following calculation using 2 nodes (12 cores per node):
# | em1d # [R Xd] Dynamical Inverse Dielectric Matrix
# | ppa # [R Xp] Plasmon Pole Approximation
# | HF_and_locXC # [R XX] Hartree-Fock Self-energy and Vxc
# | gw0 # [R GW] GoWo Quasiparticle energy levels
# | FFTGvecs= 15 Ry # [FFT] Plane-waves
# | EXXRLvcs= 15 Ry # [XX] Exchange RL components
# | Chimod= "Hartree" # [X] IP/Hartree/ALDA/LRC/BSfxc
# | % BndsRnXp
# | 1 | 250 | # [Xp] Polarization function bands
# | %
# | NGsBlkXp= 3 Ry # [Xp] Response block size
# | % LongDrXp
# | 0.1000E-4 | 0.000 | 0.000 | # [Xp] [cc] Electric Field
# | %
# | PPAPntXp= 27.21138 eV # [Xp] PPA imaginary energy
# | % GbndRnge
# | 1 | 220 | # [GW] G[W] bands range
# | %
# | GDamping= 0.10000 eV # [GW] G[W] damping
# | dScStep= 0.10000 eV # [GW] Energy step to evalute Z factors
# | DysSolver= "n" # [GW] Dyson Equation solver (`n`,`s`,`g`)
# | %QPkrange # [GW] QP generalized Kpoint/Band indices
# | 1| 25| 57| 70|
# | %
# | %QPerange # [GW] QP generalized Kpoint/Energy indices
# | 1| 25| 0.0|-1.0|
# | %

When I set 350 bands, I need 4 nodes (using only 2 cores per node, otherwise the memory consumption is too high) to perform this apparently lighter calculation:
# | em1d # [R Xd] Dynamical Inverse Dielectric Matrix
# | ppa # [R Xp] Plasmon Pole Approximation
# | HF_and_locXC # [R XX] Hartree-Fock Self-energy and Vxc
# | gw0 # [R GW] GoWo Quasiparticle energy levels
# | BoseTemp= 0.000000 eV # Bosonic Temperature
# | EXXRLvcs= 5 Ry # [XX] Exchange RL components
# | Chimod= "Hartree" # [X] IP/Hartree/ALDA/LRC/BSfxc
# | % BndsRnXp
# | 1 | 100 | # [Xp] Polarization function bands
# | %
# | NGsBlkXp= 1 Ry # [Xp] Response block size
# | % LongDrXp
# | 0.1000E-4 | 0.000 | 0.000 | # [Xp] [cc] Electric Field
# | %
# | PPAPntXp= 27.21138 eV # [Xp] PPA imaginary energy
# | % GbndRnge
# | 1 | 100 | # [GW] G[W] bands range
# | %
# | GDamping= 0.10000 eV # [GW] G[W] damping
# | dScStep= 0.10000 eV # [GW] Energy step to evalute Z factors
# | DysSolver= "n" # [GW] Dyson Equation solver (`n`,`s`,`g`)
# | %QPkrange # [GW] QP generalized Kpoint/Band indices
# | 1| 25| 59| 70|
# | %
# | %QPerange # [GW] QP generalized Kpoint/Energy indices
# | 1| 25| 0.0|-1.0|
# | %


So, I would like to ask whether Yambo stores all the bands in memory and then uses only the ones I asked for, and whether there is a way to reduce this memory consumption.
Thank you

Best,
Matteo
Matteo Ferri
PhD student in Condensed Matter at SISSA

Previously:
Master at Università degli Studi di Milano &
CNR, Istituto per la Microelettronica e Microsistemi - Sezione di Agrate Brianza

Re: memory issue in GW calculation

Postby Daniele Varsano » Mon May 02, 2016 11:35 am

Dear Matteo,

Yambo 3.4 and Yambo 4.x have totally different implementations of parallelism: Yambo 4.x is designed to scale also with respect to memory, while Yambo 3.4 can be slightly more performant in some cases but is definitely less performant where memory is concerned.

Coming to your case: you are in any case calculating 350 QP corrections in a single run (# | 1| 25| 57| 70|, i.e. 25 k-points × 14 bands).
Dividing them into several separate runs will help with memory usage.
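To make the splitting concrete, here is a minimal sketch (plain Python, not part of Yambo) that generates smaller QPkrange band windows to be used one per run; the band window 57-70, the 25 k-points, and the chunk size of 4 are taken from or assumed on top of the input above:

```python
# Sketch: split a QPkrange band window into smaller chunks so that
# each yambo run computes fewer QP corrections at a time.
# The window 57-70 and the 25 k-points come from the input quoted above;
# the chunk size of 4 bands is an arbitrary illustrative choice.

def split_band_window(first_band, last_band, chunk_size):
    """Yield (start, end) band sub-ranges covering [first_band, last_band]."""
    start = first_band
    while start <= last_band:
        end = min(start + chunk_size - 1, last_band)
        yield (start, end)
        start = end + 1

# One QPkrange block per run:
for b1, b2 in split_band_window(57, 70, 4):
    print(f"%QPkrange\n 1| 25| {b1}| {b2}|\n%")
```

Each generated block replaces the QPkrange field of one input file; the quasiparticle energies from the separate runs can then be collected together afterwards.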
The next big difference between the two runs you sent is essentially in the FFT grid used, more than in the bands: you can see it by comparing the [FFT-SC] entries in the two reports. This is because in one run you lowered FFTGvecs, while in the other you used the default, which is all the G-vectors needed for the wavefunctions.
Reducing that value will also reduce the memory usage.
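As a rough back-of-the-envelope illustration (an assumed scaling, not Yambo's actual internal bookkeeping), the wavefunctions held in memory grow with both the band count and the FFT grid size, so trimming FFTGvecs can matter as much as trimming bands:

```python
# Assumed scaling for illustration only: wavefunction storage of order
#   n_bands * n_kpoints * n_fft_points * 16 bytes (double complex).
# The FFT grid sizes below are made-up example numbers, not values
# from the reports in this thread.

def wfc_memory_gb(n_bands, n_kpoints, n_fft_points):
    """Estimate wavefunction storage in GB for a double-complex array."""
    return n_bands * n_kpoints * n_fft_points * 16 / 1024**3

full_grid = wfc_memory_gb(350, 25, 200_000)  # default FFT grid (hypothetical size)
reduced   = wfc_memory_gb(350, 25, 60_000)   # grid after lowering FFTGvecs
print(f"{full_grid:.1f} GB vs {reduced:.1f} GB")
```

Under this assumption the memory drops in direct proportion to the FFT grid, which is consistent with the [FFT-SC] difference between the two reports.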

Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/

Re: memory issue in GW calculation

Postby mferri » Mon May 02, 2016 11:51 am

Dear Daniele,
Thank you for your reply and for your advice.
In some previous tests I noticed that FFTGVecs didn't affect the results significantly, and I didn't check it this time. My mistake :roll:

Cheers,

Matteo

