by **andrea.ferretti** » Mon Oct 10, 2016 11:49 am

Dear Davide,

the problem you raise is quite important, especially in view of the large amount of CPU resources required by a real MBPT calculation.

Fortunately, the structure of, say, a GW or BSE calculation is such that a number of parameters exist to strongly speed up the calculations while still being able to extract reasonable scaling data (of course, no physically meaningful quantities in general, just as for QE pw.x run with a single scf step).

Let's consider a GW run, just as an example:

* the calculation time for Xo is almost linear with respect to the number of bands included;
* similarly, the calculation time for sgm (the self-energy) is almost linear with respect to the number of QP corrections we want to compute, as well as with respect to the number of bands included in the description of G.

Setting the above parameters to minimal values usually allows for a decent run time for the test runs (where decent means neither too long nor too short).
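As an illustration, a minimal GW input along these lines might look as follows. This is only a sketch: the variable names (BndsRnXp, NGsBlkXp, GbndRnge, QPkrange) follow the usual Yambo GW input, but the actual values are made up and must be adapted to your system:

```
gw0                        # GW runlevel
ppa                        # plasmon-pole approximation
% BndsRnXp
  1 | 50 |                 # bands entering Xo: keep minimal for scaling tests
%
NGsBlkXp= 1 Ry             # response block size
% GbndRnge
  1 | 50 |                 # bands entering G in the self-energy
%
% QPkrange
  1 | 1 | 4 | 5 |          # QP corrections: k-point 1 only, bands 4-5
%
```

Shrinking BndsRnXp, GbndRnge, and the QPkrange window is what brings the test run down to a workable wall time.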

How to set these parameters also depends on the number of cores you want to exploit: even if these parameters are minimal, you should still be able to extract scaling data, also with respect to the number of bands, etc.
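For instance, once wall times have been collected at a few values of the band parameter, a quick least-squares fit checks the near-linear behaviour described above. A minimal sketch, with purely made-up timings:

```python
# Least-squares fit of wall time vs number of bands, to verify the
# near-linear scaling. The (nbands, seconds) pairs below are invented
# for illustration only.

def linear_fit(xs, ys):
    """Return (intercept, slope) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

# hypothetical timings from four test runs with increasing band numbers
nbands = [50, 100, 200, 400]
times = [12.0, 22.5, 44.0, 86.5]

a, b = linear_fit(nbands, times)
print(f"t(N) ~ {a:.1f} + {b:.3f} * N  seconds")
```

A slope that stays stable as more points are added is a good sign the run is in the linear regime.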

Other parameters you may want to play with are the k/q-point meshes. One can either reduce them at the DFT level to obtain faster calculations, or simply compute the response function separately at each single q point to build up scaling data.
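To time a single q point of the response function, one can restrict the q-point range in the input. Again a hedged sketch: QpntsRXp is the usual Yambo variable for the q range of Xp, but check the variable name against your input version:

```
% QpntsRXp
  1 | 1 |          # compute the response function at the first q point only
%
```

Repeating the run with the range set to 2 | 2 |, 3 | 3 |, and so on, and comparing the wall times, gives per-q scaling data without ever running the full mesh.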

Take care,

Andrea

Andrea Ferretti, PhD

CNR-NANO-S3 and MaX Centre

via Campi 213/A, 41125, Modena, Italy

Tel: +39 059 2055322; Skype: andrea_ferretti

URL: http://www.nano.cnr.it