Problems in running yambo within parallel environment

Run-time issues concerning Yambo that are not covered in the above forums.

Moderators: Daniele Varsano, andrea marini, Conor Hogan, myrta gruning


Postby Jinsen » Mon Dec 05, 2016 12:13 pm

Dear all,

I am new to yambo. I have tried to run yambo-4.1.1 on graphene as a test case on my cluster. First, I ran QE 5.3 and obtained the save directory. Then I entered the save directory and ran “p2y -N”. Next, I ran “yambo” to initialize the databases. Finally, I generated the GW input file with “yambo -p p -g n -V RL -F GW.in” and changed “NGsBlkXp= 1 RL # [Xp] Response block size” to “NGsBlkXp= 10 RL # [Xp] Response block size”. However, there are several problems. I can obtain results for a small system (8 atoms in a supercell), but for a larger system (32 atoms) the calculation stops at step 6 without producing a result. Also, the program does not run in parallel: it uses only one thread. Can anyone help me solve these problems?
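For reference, this is a sketch of the workflow described above; the executable names and flags are the standard QE/yambo ones, while the input/output file names are assumed:

```shell
# QE ground-state runs (assuming pw.x is in PATH)
pw.x < scf.in  > scf.out          # self-consistent run
pw.x < nscf.in > nscf.out         # non-self-consistent run (nbnd=880)

cd graph.save                     # the QE save directory (prefix='graph')
p2y -N                            # convert QE data to yambo databases ('-N' as in the post)
yambo                             # initialization (setup) run
yambo -p p -g n -V RL -F GW.in    # generate the GW (plasmon-pole) input
# then edit GW.in: NGsBlkXp= 10 RL
```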

The report file shows as follows:

[01] CPU structure, Files & I/O Directories
===========================================

* CPU-Threads :256(CPU)-1(threads)-1(threads@X)-1(threads@DIP)-1(threads@SE)-1(threads@RT)-1(threads@K)
* MPI CPU : 256
* THREADS (max): 1
* THREADS TOT(max): 256
* I/O NODES : 1
* Fragmented WFs :yes



[06] Bare local and non-local Exchange-Correlation
==================================================

[EXS] Plane waves : 233607

QP @ K 001 - 002 : b 001 - 880



Here are the input files, including scf, nscf, and GW calculation:

SCF:

&control
title='graphene_4,4 cell '
calculation='scf'
restart_mode='from_scratch'
prefix = 'graph' ,
wf_collect=.true.,
pseudo_dir = '/vol6/home/djy/djy/QE/WORKSHOP/Pseudo/',
outdir='./'
verbosity ='high',
/
&system
ibrav=12,
celldm(1)=18.591137839, celldm(2)=1.0, celldm(3)=1.5
celldm(4)=0.5,
nat=32,
nbnd=220,
nosym = .true. ,
ntyp=1,
ecutwfc=35.0,
/
&electrons
diago_full_acc=.true.,
conv_thr = 1.0d-8
/
ATOMIC_SPECIES
C 12.0 C_pbe-20071121.UPF
ATOMIC_POSITIONS {angstrom}
C 0.0000000000 0.0000000000 0.0000000000
C 1.2297560140 0.7099999790 0.0000000000
C 1.2297503190 2.1299900320 0.0000000000
C 2.4595063330 2.8399900110 0.0000000000
C 2.4595006380 4.2599800640 0.0000000000
C 3.6892566520 4.9699800430 0.0000000000
C 3.6892509570 6.3899700960 0.0000000000
C 4.9190069710 7.0999700750 0.0000000000
C 2.4595006370 0.0000000000 0.0000000000
C 3.6892566510 0.7099999790 0.0000000000
C 3.6892509560 2.1299900320 0.0000000000
C 4.9190069700 2.8399900110 0.0000000000
C 4.9190012750 4.2599800640 0.0000000000
C 6.1487572890 4.9699800430 0.0000000000
C 6.1487515940 6.3899700960 0.0000000000
C 7.3785076080 7.0999700750 0.0000000000
C 4.9190012740 0.0000000000 0.0000000000
C 6.1487572880 0.7099999790 0.0000000000
C 6.1487515930 2.1299900320 0.0000000000
C 7.3785076070 2.8399900110 0.0000000000
C 7.3785019120 4.2599800640 0.0000000000
C 8.6082579260 4.9699800430 0.0000000000
C 8.6082522310 6.3899700960 0.0000000000
C 9.8380082450 7.0999700750 0.0000000000
C 7.3785019110 0.0000000000 0.0000000000
C 8.6082579250 0.7099999790 0.0000000000
C 8.6082522300 2.1299900320 0.0000000000
C 9.8380082440 2.8399900110 0.0000000000
C 9.8380025490 4.2599800640 0.0000000000
C 11.0677585630 4.9699800430 0.0000000000
C 11.0677528680 6.3899700960 0.0000000000
C 12.2975088820 7.0999700750 0.0000000000
K_POINTS {automatic}
2 2 1 1 1 0


NSCF:

&control
title='graphene_4,4 cell '
calculation='nscf'
prefix = 'graph' ,
wf_collect=.true.,
pseudo_dir = '/vol6/home/djy/djy/QE/WORKSHOP/Pseudo/',
outdir='./'
verbosity ='high',
/
&system
ibrav=12,
celldm(1)=18.591137839, celldm(2)=1.0, celldm(3)=1.5
celldm(4)=0.5,
nat=32,
nosym = .true. ,
nbnd=880,
ntyp=1,
ecutwfc=35.0,
/
&electrons
diago_thr_init = 1.0d-5
/
ATOMIC_SPECIES
C 12.0 C_pbe-20071121.UPF
ATOMIC_POSITIONS {angstrom}
C 0.0000000000 0.0000000000 0.0000000000
C 1.2297560140 0.7099999790 0.0000000000
C 1.2297503190 2.1299900320 0.0000000000
C 2.4595063330 2.8399900110 0.0000000000
C 2.4595006380 4.2599800640 0.0000000000
C 3.6892566520 4.9699800430 0.0000000000
C 3.6892509570 6.3899700960 0.0000000000
C 4.9190069710 7.0999700750 0.0000000000
C 2.4595006370 0.0000000000 0.0000000000
C 3.6892566510 0.7099999790 0.0000000000
C 3.6892509560 2.1299900320 0.0000000000
C 4.9190069700 2.8399900110 0.0000000000
C 4.9190012750 4.2599800640 0.0000000000
C 6.1487572890 4.9699800430 0.0000000000
C 6.1487515940 6.3899700960 0.0000000000
C 7.3785076080 7.0999700750 0.0000000000
C 4.9190012740 0.0000000000 0.0000000000
C 6.1487572880 0.7099999790 0.0000000000
C 6.1487515930 2.1299900320 0.0000000000
C 7.3785076070 2.8399900110 0.0000000000
C 7.3785019120 4.2599800640 0.0000000000
C 8.6082579260 4.9699800430 0.0000000000
C 8.6082522310 6.3899700960 0.0000000000
C 9.8380082450 7.0999700750 0.0000000000
C 7.3785019110 0.0000000000 0.0000000000
C 8.6082579250 0.7099999790 0.0000000000
C 8.6082522300 2.1299900320 0.0000000000
C 9.8380082440 2.8399900110 0.0000000000
C 9.8380025490 4.2599800640 0.0000000000
C 11.0677585630 4.9699800430 0.0000000000
C 11.0677528680 6.3899700960 0.0000000000
C 12.2975088820 7.0999700750 0.0000000000
K_POINTS {automatic}
2 2 1 1 1 0


and GW:

#
# ::: ::: ::: :::: :::: ::::::::: ::::::::
# :+: :+: :+: :+: +:+:+: :+:+:+ :+: :+: :+: :+:
# +:+ +:+ +:+ +:+ +:+ +:+:+ +:+ +:+ +:+ +:+ +:+
# +#++: +#++:++#++: +#+ +:+ +#+ +#++:++#+ +#+ +:+
# +#+ +#+ +#+ +#+ +#+ +#+ +#+ +#+ +#+
# #+# #+# #+# #+# #+# #+# #+# #+# #+#
# ### ### ### ### ### ######### ########
#
#
# GPL Version 4.1.1 Revision 112
# MPI Build
# http://www.yambo-code.org
#
ppa # [R Xp] Plasmon Pole Approximation
gw0 # [R GW] GoWo Quasiparticle energy levels
HF_and_locXC # [R XX] Hartree-Fock Self-energy and Vxc
em1d # [R Xd] Dynamical Inverse Dielectric Matrix
FFTGvecs= 31615 RL # [FFT] Plane-waves
EXXRLvcs= 233607 RL # [XX] Exchange RL components
Chimod= "" # [X] IP/Hartree/ALDA/LRC/BSfxc
% BndsRnXp
1 | 880 | # [Xp] Polarization function bands
%
NGsBlkXp= 10 RL # [Xp] Response block size
% LongDrXp
1.000000 | 0.000000 | 0.000000 | # [Xp] [cc] Electric Field
%
PPAPntXp= 27.21138 eV # [Xp] PPA imaginary energy
% GbndRnge
1 | 880 | # [GW] G[W] bands range
%
GDamping= 0.100000 eV # [GW] G[W] damping
dScStep= 0.100000 eV # [GW] Energy step to evaluate Z factors
DysSolver= "n" # [GW] Dyson Equation solver ("n","s","g")
%QPkrange # [GW] QP generalized Kpoint/Band indices
1| 2| 1|880|
%


Best,
Jinsen


Jinsen Han
Graduate Student
Department of Physics
National University of Defense Technology (NUDT)

Re: Problems in running yambo within parallel environment

Postby Daniele Varsano » Mon Dec 05, 2016 12:20 pm

Dear Jinsen,

You did not specify any parallel strategy: the parallel variables need to be set explicitly. You need to add "-V par" when building up the input files.
On the yambo webpage there is a tutorial explaining the meaning of these variables and how to set them.
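For example, regenerating the input with "-V par" exposes the parallel-structure variables:

```shell
# Rebuild the GW input so that the parallel variables appear in it
yambo -p p -g n -V par -F GW.in
```

This adds variables such as X_all_q_ROLEs/X_all_q_CPU (response part) and SE_ROLEs/SE_CPU (self-energy part), whose product must match the number of MPI tasks; for 256 tasks one illustrative (not prescriptive) choice would be X_all_q_ROLEs= "q k c v" with X_all_q_CPU= "1 4 8 8", since 1·4·8·8 = 256. The variable names are taken from the yambo 4.x parallel tutorial; the best role assignment depends on the system and the machine.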

Next, it looks like you have memory issues:
Are you sure you need to calculate GW energy corrections for 880 bands?!
%QPkrange # [GW] QP generalized Kpoint/Band indices
1| 2| 1|880|
%


Usually only the bands around the Fermi level are needed; by reducing that number you will save a lot of CPU time and memory.
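For illustration only (the band window here is hypothetical): with 32 carbon atoms there are 128 valence electrons, i.e. 64 occupied bands, so a QPkrange restricted to states around band 64 would look like

```
%QPkrange                    # [GW] QP generalized Kpoint/Band indices
1| 2| 60|70|
%
```

instead of computing corrections for all 880 bands.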

Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/

Re: Problems in running yambo within parallel environment

Postby Jinsen » Mon Dec 05, 2016 5:34 pm

Hi Daniele,

Thanks for your guidance. It works now, and I have successfully obtained the quasiparticle correction energies. :D

Best,
Jinsen
--
Jinsen Han
Graduate Student
Department of Physics
National University of Defense Technology (NUDT)

