
SCALE-LES: Strategic development of large eddy simulation suitable to the future HPC


If you have a question about this talk, please contact Mustapha Amrani.

Multiscale Numerics for the Atmosphere and Ocean

Large eddy simulation (LES) is a vital dynamical framework for investigating cloud-aerosol-chemistry-radiation interactions from the viewpoint of the climate problem. However, LES as used in the meteorological field has had several problems. One is that the grid sizes used have been large, compromising the suitability of LES. In addition, the aspect ratio between horizontal and vertical grid spacing has been much larger than unity. For atmospheric LES the grid size must be reduced to several tens of metres, and an aspect ratio near unity is desirable. Target domains have also been narrow because of limited computer resources. Large-scale computing on recent powerful supercomputers may enable LES with a reasonable grid size over a wide domain; ultimately, global LES is one of the milestones for the near future.

Another problem with LES applied to the meteorological field is that the heat source due to water condensation is injected into a grid box. Strictly speaking, such grid-box heating breaks the assumption of LES theory that the grid size lies within the energy-cascade range; nevertheless, the dry theory of LES has been used.

Besides the above problems, which should be resolved in the future, we now confront computational problems for such large-scale calculations. The numerical method for the fluid-dynamics part of atmospheric models has shifted from the spectral transform method to grid-point methods: the former is no longer acceptable on massively parallel platforms because of the limitations of interconnect communication. On the other hand, grid-point methods bring a new problem, the so-called memory-bandwidth problem. For example, even on the K computer, the B/F (bytes-per-flop) ratio is only 0.5. The key to high computational performance is reducing loads and stores to and from main memory and using cache memory efficiently. A similar problem occurs in communication between compute nodes.
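The memory-bandwidth argument above can be sketched with a simple roofline-style estimate. The only figure taken from the abstract is the B/F ratio of 0.5; the stencil cost, peak rate, and byte traffic below are assumptions for illustration, not measured K computer characteristics:

```python
# Back-of-envelope roofline estimate for a stencil update, illustrating
# the memory-bandwidth (B/F) argument. Machine numbers are assumptions
# for illustration, apart from the B/F = 0.5 ratio quoted in the text.

def attainable_gflops(peak_gflops, bandwidth_gbs, bytes_per_flop_needed):
    """Roofline model: performance is capped by whichever of peak compute
    and memory bandwidth runs out first."""
    bandwidth_limited = bandwidth_gbs / bytes_per_flop_needed  # GFLOP/s if memory-bound
    return min(peak_gflops, bandwidth_limited)

# Assume a 7-point stencil doing ~13 flops per point; with no cache reuse
# it streams roughly 3 double-precision loads + 1 store = 32 bytes per point.
flops_per_point = 13.0
bytes_per_point = 32.0
code_bf = bytes_per_point / flops_per_point   # bytes the code needs per flop, ~2.46

# Hypothetical node whose machine B/F is 0.5:
peak = 128.0        # GFLOP/s (assumed)
bw = peak * 0.5     # GB/s, i.e. machine B/F = 0.5

perf = attainable_gflops(peak, bw, code_bf)
print(f"code needs {code_bf:.2f} B/F, machine supplies 0.50 B/F")
print(f"attainable: {perf:.1f} of {peak:.0f} GFLOP/s "
      f"({100 * perf / peak:.0f}% of peak)")
```

Because the code demands far more bytes per flop than the machine supplies, only a small fraction of peak is attainable, which is why reducing main-memory traffic through cache reuse is the key optimisation named in the abstract.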
The multidisciplinary team (Team SCALE) at RIKEN/AICS is now tackling these problems.
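To give a sense of the scale argument above, a back-of-envelope cell count for a hypothetical global LES can be worked out; every number below (50 m spacing, 20 km model top, unit aspect ratio) is an assumption for illustration, not a figure from the talk:

```python
# Rough cell count for a hypothetical global LES, illustrating why
# O(10 m) grids with near-unit aspect ratio demand future-HPC resources.
# All parameter values are assumptions for illustration.

import math

EARTH_SURFACE_M2 = 4 * math.pi * 6.371e6 ** 2   # ~5.1e14 m^2

dx = 50.0          # horizontal grid spacing in metres (assumed)
dz = 50.0          # vertical spacing; aspect ratio dx/dz = 1
model_top = 20e3   # model top at 20 km (assumed)

columns = EARTH_SURFACE_M2 / (dx * dx)   # horizontal columns over the globe
levels = model_top / dz                  # vertical levels per column
cells = columns * levels                 # total grid cells

print(f"columns: {columns:.2e}, levels: {levels:.0f}, cells: {cells:.2e}")
```

Under these assumptions the global mesh runs to tens of trillions of cells, several orders of magnitude beyond the narrow domains that past computer resources allowed.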

This talk is part of the Isaac Newton Institute Seminar Series.


