SCALE-LES: Strategic development of large eddy simulation suitable to the future HPC
If you have a question about this talk, please contact Mustapha Amrani.

Multiscale Numerics for the Atmosphere and Ocean

Large eddy simulation (LES) is a vital dynamical framework for investigating cloud-aerosol-chemistry-radiation interactions from the viewpoint of the climate problem. LES as used in the meteorological field has so far had several problems. One is that the grid sizes used have been large, compromising the suitability of LES; in addition, the aspect ratio of horizontal to vertical grid spacing has been much larger than unity. For atmospheric LES the grid size must be reduced to several tens of metres, and an aspect ratio near unity is desirable. The target domains have also been narrow because of limited computer resources. Large-scale computing on recent powerful supercomputers may enable LES with reasonable grid sizes over wide domains; ultimately, global LES is one of the milestones for the near future.

Another problem with LES applied to the meteorological field is that the heat source due to water condensation is injected into a grid box. Strictly speaking, such grid-box heating breaks the assumption of LES theory that the grid size lies within the energy-cascade range; nevertheless, we have used the dry theory of LES.

Besides the above problems, which should be resolved in the future, we are now confronting computational problems for such large-scale calculations. The numerical method for the fluid-dynamics part of atmospheric models has shifted from the spectral transform method to the grid-point method. The former is no longer acceptable on massively parallel platforms because of the limitations of interconnect communication. The latter, however, brings a new problem, the so-called memory bandwidth problem: even on the K computer, the B/F (bytes-per-flop) ratio is just 0.5. The key to high computational performance is reducing loads and stores to and from main memory and using cache memory efficiently.
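The memory bandwidth problem above can be made concrete by comparing a machine's B/F ratio with the B/F demand of a stencil kernel typical of grid-point solvers. The sketch below is illustrative only: the kernel and the per-cell traffic model are assumptions, while the K computer figures (64 GB/s and 128 GFLOPS per node, giving the B/F of 0.5 quoted in the abstract) match the published node specification.

```python
def machine_bf(bandwidth_gbs: float, peak_gflops: float) -> float:
    """Bytes the memory system can deliver per floating-point operation."""
    return bandwidth_gbs / peak_gflops

def stencil_bf(points: int = 7, bytes_per_value: int = 8) -> float:
    """Naive B/F demand of a `points`-point stencil: load `points` values
    and store one result per cell, doing about 2*points flops (multiply-adds),
    assuming no cache reuse between neighbouring cells."""
    loads_stores = (points + 1) * bytes_per_value
    flops = 2 * points
    return loads_stores / flops

# K computer node: 64 GB/s memory bandwidth, 128 GFLOPS peak -> B/F = 0.5
machine = machine_bf(64.0, 128.0)
kernel = stencil_bf()
print(f"machine B/F = {machine:.2f}, naive 7-point stencil demand = {kernel:.2f}")
# The kernel demands several bytes per flop while the machine supplies 0.5,
# so the code is memory-bound unless cache reuse cuts main-memory traffic.
```

This gap between demand and supply is why the abstract stresses reducing loads/stores and exploiting cache: blocking and data reuse lower the effective bytes moved per flop toward what the hardware can sustain.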
A similar problem occurs in the communication between compute nodes. The multidisciplinary team (Team SCALE) at RIKEN/AICS is now tackling such problems.

This talk is part of the Isaac Newton Institute Seminar Series.
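The scale of the "global LES" milestone mentioned in the abstract can be illustrated with a back-of-the-envelope grid-point count. The resolution and level count below are assumed values chosen to match the "several tens of metres" target, not figures from the talk.

```python
import math

R_EARTH_M = 6.371e6  # mean Earth radius in metres

def global_les_points(dx_m: float, n_levels: int) -> float:
    """Rough count of grid points for a global LES: horizontal cells
    covering the sphere's surface times the number of vertical levels."""
    surface = 4.0 * math.pi * R_EARTH_M ** 2
    return surface / dx_m ** 2 * n_levels

# Assumed: 50 m horizontal spacing, 100 vertical levels
n = global_les_points(50.0, 100)
print(f"~{n:.1e} grid points")  # on the order of 10^13
```

A count on the order of 10^13 points, stepped at the short time steps LES requires, is why such simulations are only conceivable on massively parallel machines and why the bandwidth and communication bottlenecks above dominate the design.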