Stuck in calculation of static inverse dielectric matrix

Deals with issues related to the computation of optical spectra, in RPA (-o c) or by solving the Bethe-Salpeter equation (-o b). Includes local field effects, excitons, etc.

Moderators: Davide Sangalli, Andrea Marini, Conor Hogan, Myrta Gruning

Stuck in calculation of static inverse dielectric matrix

Postby ljzhou86 » Wed Apr 17, 2019 8:17 am

I am trying to do a BSE calculation on a heterostructure (2D + 0D) system containing 115 atoms. Due to the extremely high cost of GW, I plan to use the scissor operator to correct the band gap. But I first got stuck at the calculation of the static inverse dielectric matrix.

The input file is as follows:

# "yambo -b -V par -r "
#
#
# GPL Version 4.3.2 Revision 134. (Based on r.15658 h.afdb12
# MPI+OpenMP Build
# http://www.yambo-code.org
#
em1s # [R Xs] Static Inverse Dielectric Matrix
rim_cut # [R RIM CUT] Coulomb potential
NLogCPUs=0 # [PARALLEL] Live-timing CPU`s (0 for all)
FFTGvecs= 68539 RL # [FFT] Plane-waves
PAR_def_mode= "memory" # [PARALLEL] Default distribution mode ("balanced"/"memory"/"workload")
X_all_q_CPU= "2 1 4 18 8" # [PARALLEL] CPUs for each role
X_all_q_ROLEs= "q g k c v" # [PARALLEL] CPUs roles (q,g,k,c,v)
X_all_q_nCPU_LinAlg_INV= 36 # [PARALLEL] CPUs for Linear Algebra
X_Threads=0 # [OPENMP/X] Number of threads for response functions
DIP_Threads=0 # [OPENMP/X] Number of threads for dipoles
RandQpts= 3000000 # [RIM] Number of random q-points in the BZ
RandGvec= 123 RL # [RIM] Coulomb interaction RS components
CUTGeo= "box z" # [CUT] Coulomb Cutoff geometry: box/cylinder/sphere X/Y/Z/XY..
% CUTBox
0.000 | 0.00 | 55.0 | # [CUT] [au] Box sides
%
CUTRadius= 0.000000 # [CUT] [au] Sphere/Cylinder radius
CUTCylLen= 0.000000 # [CUT] [au] Cylinder length
CUTwsGvec= 0.700000 # [CUT] WS cutoff: number of G to be modified
Chimod= "hartree" # [X] IP/Hartree/ALDA/LRC/PF/BSfxc
% BndsRnXs
1 | 1100 | # [Xs] Polarization function bands
%
NGsBlkXs= 5 Ry # RL # [Xs] Response block size
% DmRngeXs
0.10000 | 0.10000 | eV # [Xs] Damping range
%
% LongDrXs
1.000000 | 0.000000 | 0.000000 | # [Xs] [cc] Electric Field
%
"
After about half a minute of running, I first got this warning in the report file: "[x,Vnl] slows the Dipoles computation. To neglect it rename the ns.kb_pp file". I then renamed the ns.kb_pp files in SAVE and resubmitted the job.


The log file reads: "
<01s> P1114: [01] CPU structure, Files & I/O Directories
<01s> P1114: CPU-Threads:1152(CPU)-1(threads)
<02s> P1114: CPU-Threads:X_all_q(environment)-2 1 4 18 8(CPUs)-q g k c v(ROLEs)
<02s> P1114: [02] CORE Variables Setup
<02s> P1114: [02.01] Unit cells
<03s> P1114: [02.02] Symmetries
<03s> P1114: [02.03] RL shells
<03s> P1114: [02.04] K-grid lattice
<03s> P1114: [02.05] Energies [ev] & Occupations
<10s> P1114: [03] Transferred momenta grid
<11s> P1114: [04] Coloumb potential Random Integration (RIM)
<11s> P1114: [04.01] RIM initialization
<11s> P1114: Random points | | [000%] --(E) --(X)
<13s> P1114: Random points |########################################| [100%] 02s(E) 02s(X)
<13s> P1114: [04.02] RIM integrals
<13s> P1114: [WARNING] Empty workload for CPU 1114
<13s> P1114: Momenta loop | | [***%] --(E) --(X)
<22s> P1114: [05] Coloumb potential CutOff :box
<22s> P1114: Box | | [000%] --(E) --(X)
<22s> P1114: Box |########################################| [100%] --(E) --(X)
<22s> P1114: [06] Static Dielectric Matrix
<22s> P1114: [PARALLEL Response_G_space for K(bz) on 4 CPU] Loaded/Total (Percentual):12/49(24%)
<23s> P1114: [PARALLEL Response_G_space for Q(ibz) on 2 CPU] Loaded/Total (Percentual):12/25(48%)
<23s> P1114: [PARALLEL Response_G_space for K-q(ibz) on 1 CPU] Loaded/Total (Percentual):25/25(100%)
<23s> P1114: [LA] SERIAL linear algebra
<23s> P1114: [PARALLEL Response_G_space for CON bands on 18 CPU] Loaded/Total (Percentual):39/715(5%)
<23s> P1114: [PARALLEL Response_G_space for VAL bands on 8 CPU] Loaded/Total (Percentual):48/385(12%)
<23s> P1114: [PARALLEL distribution for RL vectors(X) on 1 CPU] Loaded/Total (Percentual):82174225/82174225(100%)
<24s> P1114: [MEMORY] Alloc WF%c( 2.631750Gb) TOTAL: 3.983044Gb (traced) 37.15200Mb (memstat)
"
The report file reads: "



GPL Version 4.3.2 Revision 134. (Based on r.15658 h.afdb12c8
MPI+OpenMP Build
http://www.yambo-code.org

04/09/2019 at 19:25 YAMBO @ ba092.localdomain

Timing [Min/Max/Average]: 01s/01s/01s

[01] CPU structure, Files & I/O Directories
===========================================

* CPU-Threads :1152(CPU)-1(threads)
* CPU-Threads :X_all_q(environment)-2 1 4 18 8(CPUs)-q g k c v(ROLEs)
* MPI CPU : 1152
* THREADS (max): 1
* THREADS TOT(max): 1152
* I/O NODES : 1
* Fragmented WFs :yes

CORE databases in .
Additional I/O in .
Communications in .
Input file is Inputs/05w
Report file is ./r_em1s_rim_cut
Log files in ./LOG

[RD./SAVE//ns.db1]------------------------------------------
Bands : 1100
K-points : 25
G-vectors [RL space]: 1644139
Components [wavefunctions]: 205631
Symmetries [spatial+T-rev]: 2
Spinor components : 1
Spin polarizations : 1
Temperature [ev]: 0.000000
Electrons : 770.0000
WF G-vectors : 217077
Max atoms/species : 50
No. of atom species : 3
Exact exchange fraction in XC : 0.000000
Exact exchange screening in XC : 0.000000
Magnetic symmetries : no
- S/N 002981 -------------------------- v.04.03.02 r.00134 -

[02] CORE Variables Setup
=========================

...skipping...
Points outside the sphere : 2403299
[Int_sBZ(q=0) 1/q^2]*(Vol_sBZ)^(-1/3) = 6.556071
should be < 7.795600
[WR./SAVE//ndb.RIM]-----------------------------------------
Brillouin Zone Q/K grids (IBZ/BZ): 25 49 25 49
Coulombian RL components : 123
Coulombian diagonal components :yes
RIM random points : 3000000
RIM RL volume [a.u.]: 0.0052
Real RL volume [a.u.]: 0.0052
Eps^-1 reference component :0
Eps^-1 components : 0.00 0.00 0.00
RIM anysotropy factor : 0.000000
- S/N 002981 -------------------------- v.04.03.02 r.00134 -

Summary of Coulomb integrals for non-metallic bands |Q|[au] RIM/Bare:

Q [1]: 0.1000E-4 0.8410 * Q [9]: 0.03311 0.65532
Q [10]: 0.03311 0.65567 * Q [2]: 0.03311 0.65524
Q [16]: 0.05734 0.81750 * Q [15]: 0.05734 0.81711
Q [5]: 0.05734 0.81734 * Q [11]: 0.06621 0.85140
Q [12]: 0.06621 0.85174 * Q [3]: 0.06621 0.85149
Q [21]: 0.08759 0.90517 * Q [19]: 0.08759 0.90518
Q [20]: 0.08759 0.90493 * Q [18]: 0.08759 0.90486
Q [6]: 0.08759 0.90502 * Q [17]: 0.08759 0.90489
Q [14]: 0.09932 0.92352 * Q [13]: 0.09932 0.92324
Q [4]: 0.09932 0.92333 * Q [25]: 0.114682 0.940771
Q [24]: 0.114682 0.940521 * Q [8]: 0.114682 0.940676
Q [23]: 0.119364 0.944907 * Q [22]: 0.119364 0.944650
Q [7]: 0.119364 0.944774

Timing [Min/Max/Average]: 11s/11s/11s

[05] Coloumb potential CutOff :box
==================================

Cut directions :Z
Box sides [au]: 55.00000
Symmetry test passed :yes

[WR./SAVE//ndb.cutoff]--------------------------------------
Brillouin Zone Q/K grids (IBZ/BZ): 25 49 25 49
CutOff Geometry :box z
Coulomb cutoff potential :box z 55.000
Box sides length [au]: 0.00000 0.00000 55.00000
Sphere/Cylinder radius [au]: 0.000000
Cylinder length [au]: 0.000000
RL components : 68539
RL components used in the sum : 68539
RIM corrections included :yes
RIM RL components : 123
RIM random points : 3000000
- S/N 002981 -------------------------- v.04.03.02 r.00134 -

[06] Static Dielectric Matrix
"
This time there were no warnings or errors, but the job again terminated very quickly. I have no idea how to fix it. Could you help me? Is it a memory issue? Note that I used 1152 cores in total, which I think should be large enough. Thanks a lot.
Best
Dr. Zhou Liu-Jiang
Fujian Institute of Research on the Structure of Matter
Chinese Academy of Sciences
Fuzhou, Fujian, 350002

Re: Stuck in calculation of static inverse dielectric matrix

Postby Daniele Varsano » Wed Apr 17, 2019 9:03 am

Dear Zhou Liu-Jiang,
with the present parallelization strategy it seems you need at least 4 GB per core; is that the case?
I suggest changing your parallelization settings to put all the CPU assignment on the bands: this will help reduce the memory needed per core:
Code:
X_all_q_CPU= "1 1 1 Nc Nv" # [PARALLEL] CPUs for each role
X_all_q_ROLEs= "q g k c v" # [PARALLEL] CPUs roles (q,g,k,c,v)



Also note that the following has an effect only if you compiled the code against the ScaLAPACK libraries:
X_all_q_nCPU_LinAlg_INV= 36

Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/

Re: Stuck in calculation of static inverse dielectric matrix

Postby ljzhou86 » Wed Apr 17, 2019 9:37 am

Hi Daniele

Thanks a lot for your kind suggestions. I checked and found that the yambo executable I built does not include the ScaLAPACK libraries. I plan to recompile it as:

./configure FC=ifort F77=ifort -with-netcdf-path=/usr/projects/hpcsoft/toss3/common/netcdf/4.4.0_intel-17.0.4/ --enable-open-mp --enable-time-profile --enable-memory-profile --with-blas-libs="-lmkl_intel_lp64 -lmkl_sequential -lmkl_core" --with-lapack-libs="-lmkl_intel_lp64 -lmkl_sequential -lmkl_core" --with-scalapack-libs="-lmkl_scalapack_lp64"

But the configure log showed empty entries for ScaLAPACK:
"
...
# > MATH: (Internal FFTW3)
#
# [ Ic] FFT : /turquoise/usr/projects/cint/cint_codes/ljzhou86_softeware/yambo/4.3.2/v2-4.3.2/yambo-4.3.2/lib/external/intel/mpiifort/lib/libfftw3.a
# -I/turquoise/usr/projects/cint/cint_codes/ljzhou86_softeware/yambo/4.3.2/v2-4.3.2/yambo-4.3.2/lib/external/intel/mpiifort/include/
# [ E ] BLAS : -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
# [ E ] LAPACK : -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
# [ - ] SCALAPACK :
# [ - ] BLACS :
# [ - ] PETSC :
#
# [ - ] SLEPC :
..."

Could you tell me how to include the ScaLAPACK library correctly?

Best

Re: Stuck in calculation of static inverse dielectric matrix

Postby Daniele Varsano » Wed Apr 17, 2019 10:08 am

Dear Zhou Liu-Jiang,
the syntax you used seems correct to me, provided you have the MKL directory in your path:
Code:
--with-scalapack-libs="-L$MKL_PATH -lmkl_scalapack_lp64"

You should look in the config.log to spot why they are not recognized.

Anyway, the problem you are experiencing (probably lack of memory) is not related to ScaLAPACK, as it is only used for the inversion of the X matrix, and the code stops before that operation.

Best,
Daniele

Re: Stuck in calculation of static inverse dielectric matrix

Postby ljzhou86 » Wed Apr 17, 2019 10:59 am

Dear Daniele

I know, but I want to enable the parallel linear-algebra distribution feature provided by the ScaLAPACK library. I checked the config.log and found that the libmkl_scalapack_lp64.so I used does not seem to be recognized by yambo:
"
/usr/projects/hpcsoft/toss3/common/intel-clusterstudio/2018.4.057/compilers_and_libraries_2018/linux/mkl/lib/intel64/libmkl_scalapack_lp64.so: undefined reference to `Ccgerv2d'
/usr/projects/hpcsoft/toss3/common/intel-clusterstudio/2018.4.057/compilers_and_libraries_2018/linux/mkl/lib/intel64/libmkl_scalapack_lp64.so: undefined reference to `blacs_gridinfo_'
/usr/projects/hpcsoft/toss3/common/intel-clusterstudio/2018.4.057/compilers_and_libraries_2018/linux/mkl/lib/intel64/libmkl_scalapack_lp64.so: undefined reference to `blacs_gridinfo__'
..."
I have no idea why it is not compatible with yambo.
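The undefined `blacs_*` and `Ccgerv2d` symbols usually mean that no BLACS library is on the link line: MKL's ScaLAPACK requires the BLACS library matching your MPI to be linked separately. A sketch of the relevant configure flags, assuming Intel MPI and that your configure version accepts `--with-blacs-libs` (verify the exact library name with Intel's link-line advisor):

```shell
# Sketch: MKL ScaLAPACK needs the matching BLACS library on the link line.
# mkl_blacs_intelmpi_lp64 is the usual choice for Intel MPI (an assumption
# for this machine; other MPIs need a different BLACS variant).
./configure FC=ifort F77=ifort \
  --with-blas-libs="-lmkl_intel_lp64 -lmkl_sequential -lmkl_core" \
  --with-lapack-libs="-lmkl_intel_lp64 -lmkl_sequential -lmkl_core" \
  --with-scalapack-libs="-L$MKLROOT/lib/intel64 -lmkl_scalapack_lp64" \
  --with-blacs-libs="-L$MKLROOT/lib/intel64 -lmkl_blacs_intelmpi_lp64"
```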

The internal ScaLAPACK library is the other option. How do I enable the use of the internal ScaLAPACK library? The default compilation does not seem to use it. I have placed scalapack-2.0.2.tar.gz in the directory "lib/archive". Thanks a lot.

Best

Re: Stuck in calculation of static inverse dielectric matrix

Postby Daniele Varsano » Wed Apr 17, 2019 11:07 am

Dear Dr. Zhou Liu-Jiang,
since ScaLAPACK is not strictly necessary to run yambo, it is not shipped as an internal library, and you need to resort to an external one.
Anyway, from the config.log it does not seem to be a problem with yambo; some internal checks fail, so it seems more likely that you are missing a library path or need to load another module.
I have placed the scalapack-2.0.2.tar.gz to directory "lib/archive"

This is not needed; you can compile ScaLAPACK yourself and point to it in the configure options, as you are doing with MKL.
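A sketch of that route, building the netlib ScaLAPACK tarball already downloaded above (the compiler wrapper set in SLmake.inc and the paths are assumptions for this machine):

```shell
# Build netlib ScaLAPACK once, then point yambo's configure at the result.
tar -xzf scalapack-2.0.2.tar.gz && cd scalapack-2.0.2
cp SLmake.inc.example SLmake.inc   # edit: set FC to your MPI wrapper, e.g. mpiifort
make lib                           # produces libscalapack.a
cd ..
./configure FC=ifort F77=ifort \
  --with-scalapack-libs="$PWD/scalapack-2.0.2/libscalapack.a"
```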
Best,
Daniele

Re: Stuck in calculation of static inverse dielectric matrix

Postby ljzhou86 » Wed Apr 17, 2019 11:43 am

Dear Daniele

Daniele Varsano wrote:
Code:
X_all_q_CPU= "1 1 1 Nc Nv" # [PARALLEL] CPUs for each role
X_all_q_ROLEs= "q g k c v" # [PARALLEL] CPUs roles (q,g,k,c,v)




I set the input file as you suggested (X_all_q_CPU= "1 1 1 36 32" # [PARALLEL] CPUs for each role). This time the job ran further, but it terminated as it approached the end of Xo@q[1]:
"
<36s> P0001: Reading wf_fragments_21_1
<37s> P0001: Reading wf_fragments_22_1
<37s> P0001: Reading wf_fragments_23_1
<38s> P0001: Reading wf_fragments_24_1
<38s> P0001: Reading wf_fragments_25_1
<39s> P0001: [X-CG] R(p) Tot o/o(of R) : 1342 11466 100
<39s> P0001: Xo@q[1] | | [000%] --(E) --(X)
<02m-43s> P0001: Xo@q[1] |# | [002%] 02m-04s(E) 01h-21m-46s(X)
<04m-02s> P0001: Xo@q[1] |## | [005%] 03m-22s(E) 01h-06m-45s(X)
<05m-47s> P0001: Xo@q[1] |### | [007%] 05m-07s(E) 01h-08m-10s(X)
<08m-03s> P0001: Xo@q[1] |#### | [010%] 07m-23s(E) 01h-13m-33s(X)
<09m-06s> P0001: Xo@q[1] |##### | [012%] 08m-27s(E) 01h-07m-33s(X)
<10m-07s> P0001: Xo@q[1] |###### | [015%] 09m-27s(E) 01h-02m-50s(X)
<11m-04s> P0001: Xo@q[1] |####### | [017%] 10m-25s(E) 59m-30s(X)
<12m-21s> P0001: Xo@q[1] |######## | [020%] 11m-42s(E) 58m-24s(X)
<13m-22s> P0001: Xo@q[1] |######### | [022%] 12m-43s(E) 56m-30s(X)
<14m-32s> P0001: Xo@q[1] |########## | [025%] 13m-52s(E) 55m-26s(X)
<15m-44s> P0001: Xo@q[1] |########### | [027%] 15m-04s(E) 54m-41s(X)
<16m-52s> P0001: Xo@q[1] |############ | [030%] 16m-13s(E) 54m-00s(X)
<18m-00s> P0001: Xo@q[1] |############# | [032%] 17m-21s(E) 53m-18s(X)
<19m-27s> P0001: Xo@q[1] |############## | [035%] 18m-47s(E) 53m-40s(X)
<20m-48s> P0001: Xo@q[1] |############### | [037%] 20m-09s(E) 53m-39s(X)
<22m-02s> P0001: Xo@q[1] |################ | [040%] 21m-23s(E) 53m-27s(X)
<23m-10s> P0001: Xo@q[1] |################# | [042%] 22m-31s(E) 52m-55s(X)
<24m-47s> P0001: Xo@q[1] |################## | [045%] 24m-07s(E) 53m-36s(X)
<26m-09s> P0001: Xo@q[1] |################### | [047%] 25m-30s(E) 53m-38s(X)
<27m-19s> P0001: Xo@q[1] |#################### | [050%] 26m-40s(E) 53m-20s(X)
<28m-22s> P0001: Xo@q[1] |##################### | [052%] 27m-42s(E) 52m-44s(X)
<29m-55s> P0001: Xo@q[1] |###################### | [055%] 29m-15s(E) 53m-08s(X)
<31m-34s> P0001: Xo@q[1] |####################### | [057%] 30m-54s(E) 53m-44s(X)
<32m-51s> P0001: Xo@q[1] |######################## | [060%] 32m-12s(E) 53m-37s(X)
<34m-21s> P0001: Xo@q[1] |######################### | [062%] 33m-42s(E) 53m-54s(X)
<36m-12s> P0001: Xo@q[1] |########################## | [065%] 35m-32s(E) 54m-38s(X)
<37m-41s> P0001: Xo@q[1] |########################### | [067%] 37m-01s(E) 54m-50s(X)
<39m-08s> P0001: Xo@q[1] |############################ | [070%] 38m-29s(E) 54m-57s(X)
<40m-40s> P0001: Xo@q[1] |############################# | [072%] 40m-01s(E) 55m-11s(X)
<42m-18s> P0001: Xo@q[1] |############################## | [075%] 41m-38s(E) 55m-30s(X)
<43m-43s> P0001: Xo@q[1] |############################### | [077%] 43m-03s(E) 55m-30s(X)
<44m-57s> P0001: Xo@q[1] |################################ | [080%] 44m-17s(E) 55m-20s(X)
<46m-40s> P0001: Xo@q[1] |################################# | [082%] 46m-00s(E) 55m-43s(X)
<47m-59s> P0001: Xo@q[1] |################################## | [085%] 47m-20s(E) 55m-40s(X)
<49m-32s> P0001: Xo@q[1] |################################### | [087%] 48m-53s(E) 55m-50s(X)
<51m-02s> P0001: Xo@q[1] |#################################### | [090%] 50m-22s(E) 55m-57s(X)
<52m-29s> P0001: Xo@q[1] |##################################### | [092%] 51m-50s(E) 56m-00s(X)
<53m-56s> P0001: Xo@q[1] |###################################### | [095%] 53m-17s(E) 56m-05s(X)
<54m-44s> P0001: Xo@q[1] |####################################### | [097%] 54m-04s(E) 55m-26s(X)
"
Why did it stop, and why could it not continue with the calculation of the other q-points? Thanks a lot.
Best

Re: Stuck in calculation of static inverse dielectric matrix

Postby Daniele Varsano » Wed Apr 17, 2019 1:30 pm

Dear Zhou Liu-Jiang,
is there any error message? What is the size of your dielectric matrix? It should be reported in the report file.
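For a rough idea of that size while waiting for the report: the "RL vectors(X) ... 82174225/82174225" line in the earlier log happens to be a perfect square, so a plausible (but assumed) reading is that it counts (G,G') pairs of the response block:

```python
# Rough size of one static X(G,G') block, assuming the log count
# 82174225 is the number of (G,G') pairs (an assumption).
import math

pairs = 82_174_225
n_g = math.isqrt(pairs)          # G-vectors per side of the block
assert n_g * n_g == pairs        # 82174225 = 9065^2 exactly

block_gb = pairs * 16 / 1e9      # double-complex elements, 16 bytes each
print(f"N_G = {n_g}, one X block ~ {block_gb:.2f} GB")   # ~1.31 GB
```

Even at ~1.3 GB per block this is modest next to the ~4 GB of wavefunctions per task, which is consistent with the memory being dominated by the WF allocation.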

Daniele

Re: Stuck in calculation of static inverse dielectric matrix

Postby ljzhou86 » Wed Apr 17, 2019 8:58 pm

Dear Daniele

Actually, there are no error messages at all. I attach the report file here. I hope you can help me spot the reason. Thanks a lot.

Best

Re: Stuck in calculation of static inverse dielectric matrix

Postby Daniele Varsano » Wed Apr 17, 2019 9:37 pm

Dear Zhou Liu-Jiang,
unfortunately, there are no hints to spot the problem; I can't see anything wrong in your setup.
It is a massive calculation, and I can only guess that it is still a memory problem.
My only suggestion is to repeat the calculation with fewer CPUs per node, allowing more memory per core within the node.
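One common way to do this, assuming a SLURM scheduler and the node layout used so far (both assumptions, not stated in the thread), is to keep the same nodes but launch fewer MPI tasks on each:

```shell
# Sketch: same nodes, half the tasks per node, so each MPI task
# gets roughly twice the memory.
#SBATCH --nodes=32
#SBATCH --ntasks-per-node=18
srun yambo -F Inputs/05w -J 05w
```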
Best,
Daniele
