Delivering Supercomputing to the Ultrascale

• Computational simulations run on large supercomputers must balance their outputs against the needs of the scientist and the capabilities of the machine. Persistent storage is typically expensive and slow, and its performance grows at a slower rate than the processing power of the machine. This forces scientists to be pragmatic about the size and frequency of the simulation outputs that can later be analyzed to understand the simulation state. Flexibility in trading off the fidelity and accessibility of simulation outputs is critical to the success of scientists using supercomputers to understand their science. In situ transformation of simulation state into forms suitable for persistent storage is the focus of this dissertation. The extreme size and parallelism of simulations pose challenges for visualization and data analysis. This is compounded by the need to accept pre-partitioned data into analysis algorithms, which existing software infrastructures do not always handle well. The work in this dissertation focuses on improving current workflows and software to accept data as it is, and to efficiently produce smaller, more information-rich data products for persistent storage that are easily consumed by end-user scientists. I attack this problem on both a theoretical and a practical basis, transforming completely raw data into information-dense visualizations, and I study methods for managing both the creation and the persistence of data products from large-scale simulations.
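The in situ idea described above can be illustrated with a minimal sketch (not the dissertation's actual software): a simulation loop whose full per-timestep state is reduced to a small, information-dense product before anything is written to persistent storage. All names here (`simulate_step`, `reduce_state`) are hypothetical stand-ins for illustration only.

```python
# Illustrative sketch of in situ reduction, assuming a simple loop that
# produces large per-timestep state. Function names are hypothetical.
import statistics

def simulate_step(step, n=10_000):
    """Stand-in for one timestep of a large simulation (full state is big)."""
    return [((i * 31 + step * 17) % 1000) / 1000.0 for i in range(n)]

def reduce_state(state, bins=8):
    """In situ reduction: keep only a coarse histogram plus summary stats,
    a tiny fraction of the raw state's size."""
    hist = [0] * bins
    for x in state:
        hist[min(int(x * bins), bins - 1)] += 1
    return {"hist": hist, "mean": statistics.fmean(state),
            "min": min(state), "max": max(state)}

# Simulation loop: persist only the reduced products, never the raw state.
stored = [reduce_state(simulate_step(t)) for t in range(5)]
full_size = 10_000 * 5                               # values that would have been written
reduced_size = sum(len(s["hist"]) + 3 for s in stored)  # histogram bins + 3 stats per step
print(reduced_size, "values stored instead of", full_size)
```

The design choice this sketch reflects is the one the abstract argues for: the trade-off between fidelity and storage cost is made while the data is still in memory, so the persistent record is small but still answers the scientist's questions.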


Author: John Patchett
URN: urn:nbn:de:hbz:386-kluedo-51102
Advisor: Hans Hagen
Document type: Dissertation
Language: English
Year: 2017 (record dates: 15.12.2017, 16.12.2017, 20.12.2017)
Granting institution: Technische Universität Kaiserslautern
Department: Fachbereich Informatik (Department of Computer Science)
Pages: 123
Classification: I. Computing Methodologies; DDC 004 (Computer science)
License: Creative Commons 4.0, Attribution (CC BY 4.0)
