Reactive flows are an important component of many environmental and industrial processes and a subject of research in many fields. One example of such a process is the cleaning of exhaust gases from combustion engines in the automotive industry, for which catalytic converters and porous filters are used. In all of these research areas, mathematical modelling and simulation are employed: they make it possible to increase the efficiency of the process or product under development and to obtain data that are inaccessible to theoretical or experimental research methods. Numerical algorithms for simulating reactive flows have been developed for decades and have proven their capability in solving applied industrial and environmental problems in many areas. The class of reactive flows, however, and in particular that of pore-scale reactive flows, is very rich, and there is no general algorithm that is efficient for all flows of this kind. The algorithms must be adapted to particular classes of problems. This thesis focuses on the development of efficient algorithms for the pore-scale simulation of processes in catalytic filters. A special feature of these filters is that the filter material contains an inert, impermeable skeleton and nanoporous active (washcoat) particles in which the reactions take place. Mass transport occurs within the pores, and inside the washcoat particles transport can be described by convection (often neglected) and diffusion. The mathematical models are based on a convection-diffusion-reaction equation or on systems of such equations. Among the greatest challenges in solving such problems are the nonlinearities of the reactive terms and the heterogeneity of the flow field, which is caused by the heterogeneity of the porous media.
The latter means that fast and slow zones coexist within one and the same material sample, so that the type of the governing equations changes locally. This implies that one cannot simply select algorithms designed for parabolic or hyperbolic problems; the algorithms should either be robust for every flow regime or be able to adapt to the change in the type of the equations. Moreover, one can exploit the fact that the washcoat (where the reactions take place and which imposes a substantial change on the algorithms) occupies only a limited part of the computational domain. All this motivates revisiting the class of splitting methods, adapting them to the considered class of problems, and studying their stability and performance numerically. This is the main topic of the dissertation. Increasing computational power and changing computer architectures require a revision of the previous software implementation and of the data structures used so far. A further goal of this work is therefore the development of software solutions that enable the simulation of reactive flows for high-dimensional problems. Various methods have been used for pore-scale reactive flow simulation. The work concentrated on two general classes of methods: splitting algorithms and implicit-explicit schemes. Fully implicit algorithms are used for comparison. A set of benchmark geometries and chemical reactions, ranging in complexity from simple 1D and linear cases to CT-scan-based real filter domains and real nonlinear complex chemical reactions, was considered and studied.
The present work shows that, for all considered cases, improvements in memory usage and convergence speed can be achieved by using splitting methods for the (weak) transport term and the reaction term.
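As a minimal illustration of the operator-splitting idea described above (a generic sketch, not the thesis's actual scheme), the following Python snippet advances a 1D convection-diffusion-reaction equation by Lie splitting: an explicit transport sub-step (upwind convection plus central diffusion) followed by an exactly solvable linear-decay reaction sub-step. All parameter values are illustrative assumptions.

```python
import numpy as np

def lie_splitting_step(u, dt, dx, a, D, k):
    # Transport sub-step: first-order upwind convection (a > 0) + central diffusion,
    # advanced explicitly on a periodic grid
    conv = -a * (u - np.roll(u, 1)) / dx
    diff = D * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u_star = u + dt * (conv + diff)
    # Reaction sub-step: linear decay R(u) = -k u, solved exactly
    return u_star * np.exp(-k * dt)

# periodic domain with a Gaussian concentration pulse (illustrative setup)
nx, L = 200, 1.0
dx = L / nx
x = np.linspace(0, L, nx, endpoint=False)
u = np.exp(-200 * (x - 0.3)**2)
a, D, k = 1.0, 1e-3, 2.0
dt = 0.4 * min(dx / a, dx**2 / (2 * D))   # step limited by CFL and diffusion number
for _ in range(200):
    u = lie_splitting_step(u, dt, dx, a, D, k)
```

Because the monotone transport sub-step preserves positivity and the reaction sub-step is solved exactly, the split scheme keeps the concentration nonnegative while the pulse is advected, spread, and consumed.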
Comparative analysis of scalar fields in scientific visualization often involves distance functions on topological abstractions. This paper focuses on the merge tree abstraction (representing the nesting of sub- or superlevel sets) and proposes the application of the unconstrained deformation-based edit distance. Previous distances on merge trees often suffer from instability: small perturbations in the data can lead to large distances between the abstractions. While some existing methods can handle so-called vertical instability, the unconstrained deformation-based edit distance addresses both vertical and horizontal instabilities, the latter also called saddle swaps. We show that computing this distance is NP-complete and provide an integer linear programming formulation for its computation. Experimental results on the TOSCA shape matching ensemble provide evidence for the stability of the proposed distance. We thereby showcase the potential of handling saddle swaps for the comparison of scalar fields through merge trees.
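The merge tree abstraction itself is easy to sketch in the 1D case. The following Python function (an illustrative sketch, not the paper's implementation) builds the sublevel-set merge tree of a sampled 1D scalar field with a union-find structure: a leaf is born at each local minimum, and an internal node records the merge of two components at each interior maximum (the 1D analogue of a saddle).

```python
def merge_tree_1d(f):
    # Sublevel-set merge tree of a 1D scalar field via union-find:
    # sweep samples in increasing function value; a component is born at a
    # local minimum, and two components merge at an interior maximum.
    n = len(f)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    active = [False] * n
    births, merges = [], []
    for i in sorted(range(n), key=lambda j: f[j]):
        active[i] = True
        roots = {find(j) for j in (i - 1, i + 1) if 0 <= j < n and active[j]}
        if not roots:
            births.append(i)    # leaf: new component born
        elif len(roots) == 2:
            merges.append(i)    # internal node: two components merge
        for r in roots:
            parent[r] = i
    return births, merges

# toy field: minima at indices 0, 2, 4; merges at the maxima 1 and 3
births, merges = merge_tree_1d([0.0, 2.0, 1.0, 3.0, 0.5])
```

A saddle swap corresponds to two merge values (here the maxima at indices 1 and 3) exchanging their order under a small perturbation of `f`, which is exactly the horizontal instability the unconstrained edit distance is designed to absorb.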
Comparative visualization of scalar fields is often facilitated using similarity measures such as edit distances. In this paper, we describe a novel approach for similarity analysis of scalar fields that combines two recently introduced techniques: Wasserstein geodesics/barycenters as well as path mappings, a branch decomposition-independent edit distance. Effectively, we are able to leverage the reduced susceptibility of path mappings to small perturbations in the data when compared with the original Wasserstein distance. Our approach therefore exhibits superior performance and quality in typical tasks such as ensemble summarization, ensemble clustering, and temporal reduction of time series, while retaining practically feasible runtimes. Beyond studying theoretical properties of our approach and discussing implementation aspects, we describe a number of case studies that provide empirical insights into its utility for comparative visualization, and demonstrate the advantages of our method in both synthetic and real-world scenarios. We supply a C++ implementation that can be used to reproduce our results.
Intelligent formal methods
(2022)
Information technology has become an indispensable part of our daily lives, with a significant proportion of our everyday activities relying on the safe and reliable operation of computer systems. One promising approach to ensuring these critical properties is the use of so-called formal methods, a broad range of rigorous, mathematical techniques for specifying, developing, and verifying hardware, software, cyber-physical systems, and artificial intelligence. Unlike traditional quality assurance approaches, such as testing, formal methods offer the unique ability to provide formal proof of the absence of errors, a trait particularly desirable in the context of today's ubiquitous safety-critical systems. However, this advantage comes at a cost: formal methods require extensive training, often assume idealized or limited settings, and typically demand substantial computational resources.
Inspired by the vision of artificial intelligence, this work seeks to automate formal methods and dramatically expand their applicability. To achieve this goal, we develop a novel, innovative type of formal method that combines inductive techniques from machine learning with deductive techniques from logic. We name this new approach "intelligent formal methods" and apply it to three fundamental areas: software verification, hardware and software synthesis, and the generation of formal specifications.
Objective. Gradient-based optimization using algorithmic derivatives can be a useful technique to improve engineering designs with respect to a computer-implemented objective function. Likewise, uncertainty quantification through computer simulations can be carried out by means of derivatives of the computer simulation. However, the effectiveness of these techniques depends on how 'well-linearizable' the software is. In this study, we assess how promising derivative information of a typical proton computed tomography (pCT) scan computer simulation is for the aforementioned applications. Approach. This study is mainly based on numerical experiments, in which we repeatedly evaluate three representative computational steps with perturbed input values. We support our observations with a review of the algorithmic steps and arithmetic operations performed by the software, using debugging techniques. Main results. The model-based iterative reconstruction (MBIR) subprocedure (at the end of the software pipeline) and the Monte Carlo (MC) simulation (at the beginning) were piecewise differentiable. However, the observed high density and magnitude of jumps were likely to preclude most meaningful uses of the derivatives. Jumps in the MBIR function arose from the discrete computation of the set of voxels intersected by a proton path, and could be reduced in magnitude by a 'fuzzy voxels' approach. The investigated jumps in the MC function arose from local changes in the control flow that affected the number of consumed random numbers. The tracking algorithm solves an inherently non-differentiable problem. Significance. Besides the technical challenges of merely applying algorithmic differentiation (AD) to existing software projects, the MC and MBIR codes must be adapted to compute smoother functions. For the MBIR code, we presented one possible approach for this, while for the MC code this will be subject to further research. For the tracking subprocedure, further research on surrogate models is necessary.
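The 'fuzzy voxels' idea can be illustrated with a toy example (hypothetical, not the authors' code): the binary indicator of whether a path ordinate lies inside a voxel jumps by 1 at the voxel face, whereas a smooth membership built from logistic ramps varies continuously, which is what makes the reconstruction step better suited to algorithmic differentiation.

```python
import numpy as np

def hard_weight(y, lo, hi):
    # Binary voxel membership: 1 if the path ordinate y lies in [lo, hi)
    return 1.0 if lo <= y < hi else 0.0

def fuzzy_weight(y, lo, hi, eps=0.05):
    # Smooth membership: logistic ramps of width eps at both voxel faces,
    # so the weight (and hence the objective) is differentiable in y
    s = lambda t: 1.0 / (1.0 + np.exp(-t / eps))
    return s(y - lo) * s(hi - y)

lo, hi = 0.0, 1.0
# the hard weight jumps by 1 across the voxel boundary ...
jump_hard = abs(hard_weight(1.001, lo, hi) - hard_weight(0.999, lo, hi))
# ... while the fuzzy weight changes only slightly over the same interval
jump_fuzzy = abs(fuzzy_weight(1.001, lo, hi) - fuzzy_weight(0.999, lo, hi))
```

The ramp width `eps` trades smoothness against fidelity to the original discrete geometry; the paper's actual parametrisation may differ.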
We discuss the dynamics of the formation of a Bose polaron when an impurity is injected into a weakly interacting one-dimensional Bose condensate. While for small impurity-boson couplings this process can be described within the Froehlich model as generation, emission and binding of Bogoliubov phonons, this is no longer adequate if the coupling becomes strong. To treat this regime we consider a mean-field approach beyond the Froehlich model which accounts for the backaction on the condensate, complemented with Truncated Wigner simulations to include quantum fluctuations. For the stationary polaron we find a periodic energy-momentum relation and a non-monotonic relation between impurity velocity and polaron momentum, including regions of negative impurity velocity. Studying the polaron formation after turning on the impurity-boson coupling quasi-adiabatically and in a sudden quench, we find a very rich scenario of dynamical regimes. Due to the build-up of an effective mass, the impurity is slowed down even if its initial velocity is below the Landau critical value. For larger initial velocities we find deceleration and even backscattering, caused by emission of density waves or grey solitons, and subsequent formation of stationary polaron states in different momentum sectors. In order to analyze the effect of quantum fluctuations we consider a trapped condensate to avoid 1D infrared divergences. Using Truncated Wigner simulations in this case, we show under what conditions the influence of quantum fluctuations is small.
European crayfish species are considered keystone species in freshwater ecosystems. As such, their conservation is of paramount importance to prevent biodiversity decline and loss of ecosystem function. Unfortunately, European crayfish species are today among the most threatened crayfish species worldwide. An especially relevant threat is the invasive pathogen Aphanomyces astaci. This oomycete, native to North America, has been one of the main causes of crayfish population declines across Europe since its first introduction 150 years ago, to the point of causing the local extinction of many populations. Over the years, several introductions of A. astaci strains into Europe took place through translocation of infected North American crayfish, and were followed by mass mortalities across European crayfish populations. However, in the past 20 years, more and more reports have emerged of European crayfish populations surviving A. astaci infections or being latently infected with the pathogen. The survival of infected crayfish can be ascribed both to increased resistance of some crayfish populations and to decreased virulence of some A. astaci strains. As the relationship between host and pathogen in Europe is changing, it is imperative to gain insight into what shapes these changes in order to understand the implications for the long-term coexistence of crayfish and A. astaci in Europe. With this thesis, I focused on the virulence of A. astaci, looking for mechanisms, patterns and determinants underlying the pathogen's virulence variability. In particular, by characterising the virulence of several A. astaci strains, I identified two possible mechanisms of loss of virulence. I revealed that A. astaci's virulence variability is not linked to variation in in vitro growth and sporulation, traits classically associated with a pathogen's virulence. Based on these results, I suggest that the pathogen's virulence determinants are likely its "virulence effectors", in which the A. astaci genome is enriched. Additionally, with the present work I provided transcriptomic evidence of coevolution between A. astaci and European crayfish. I showed that the haplogroups based on the canonical mitochondrial markers, often used to assess A. astaci's virulence to inform management actions, do not differ in some of their characteristic phenotypic traits, including virulence. Finally, after experimental characterisation of virulence and assessment of its likely phenotypic determinants, i.e., sporulation and growth, the next and more comprehensive step in studying the pathogen's virulence is through genomic approaches. To this aim, I provided key data for future comparative genomic studies, i.e., highly complete genome assemblies based on Nanopore (3) and Illumina reads (11). These data can be exploited in several ways, from building a pangenome of the species to a genome-wide association study (GWAS), which can offer a much deeper understanding of A. astaci's virulence and adaptability. In particular, the identification of the loci associated with virulence through a GWAS has the potential to be revolutionary for the management of A. astaci, as it can become the basis for a genomic tool to quickly and accurately assess the virulence of newly introduced strains, directing management actions towards the more dangerous strains.
We report the experimental implementation of dynamical decoupling on a small, non-interacting ensemble of up to 25 optically trapped, neutral Cs atoms. The qubit consists of the two magnetic-insensitive Cs clock states \(\vert F = 3, m_F = 0\rangle\) and \(\vert F = 4, m_F = 0\rangle\), which are coupled by microwave radiation. We observe a significant enhancement of the coherence time when employing Carr-Purcell-Meiboom-Gill (CPMG) dynamical decoupling. A CPMG sequence with ten refocusing pulses increases the coherence time of 16.2(9) ms by more than one order of magnitude to 178(2) ms. In addition, we make use of the filter function formalism and utilise the CPMG sequence to measure the background noise floor affecting the qubit coherence, finding a power-law noise spectrum \(1/\omega^\alpha\) with \(\mathit{\alpha} = 0.89(2)\). This finding is in very good agreement with an independent measurement of the noise in the intensity of the trapping laser. Moreover, the measured coherence evolutions also exhibit signatures of low-frequency noise originating at distinct frequencies. Our findings point toward noise spectroscopy of engineered atomic baths through single-atom dynamical decoupling in a system of individual Cs impurities immersed in an ultracold \(^{87}\)Rb bath.
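Extracting an exponent like \(\alpha = 0.89(2)\) from a power-law spectrum \(1/\omega^\alpha\) amounts, in essence, to a linear regression in log-log coordinates, since \(\log S = -\alpha \log \omega + \text{const}\). A small Python sketch with synthetic data (the seed, frequency grid, and noise level are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_true = 0.89
omega = np.logspace(0, 3, 50)   # angular frequencies (arbitrary units)
# synthetic noisy power spectral density with multiplicative log-normal scatter
S = omega**(-alpha_true) * np.exp(rng.normal(0.0, 0.05, omega.size))

# a 1/omega^alpha spectrum is a straight line in log-log coordinates,
# so alpha is minus the slope of a linear fit to (log omega, log S)
slope, intercept = np.polyfit(np.log(omega), np.log(S), 1)
alpha_est = -slope
```

In practice one would also propagate the fit covariance to quote an uncertainty on the exponent, as done in the quoted value 0.89(2).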
Olive mill wastewater (OMW) is a by-product of olive oil extraction, and its disposal on soil has been associated with significant environmental challenges, including toxic effects on soil organisms and on groundwater quality due to its high phenolic content. Recent studies on the dynamics of OMW degradation in soil treat environmental conditions as the main factors influencing the fate and transport of polyphenols in the soil-water system. The understanding of season-dependent phenol leaching from OMW-treated soil has remained elusive, as field studies are hindered by spatial variability and complex environmental dynamics. Therefore, controlled lysimeter experiments were conducted to investigate the leaching and transport mechanisms of OMW-derived phenolic compounds in soil.
This thesis presents the results of an 18-week lysimeter experiment conducted in a laboratory setting, aimed at monitoring and understanding the distribution and leaching of OMW-derived phenolic compounds in soil after OMW application. The experiment spanned four seasonal simulation phases (two winter, one spring, and one summer) under semi-arid Tunisian climate conditions. The effects of OMW on soil leachate properties, soil water repellency, and soil water retention capacity were assessed.
The soil leachates exhibited varying degrees of recovery across the different simulation phases. However, persistent salinity in the leachates and high soil water repellency at the top of the OMW-treated soils were recorded. The findings also revealed that OMW application changed the pore size distribution in OMW-treated soils. Most of the OMW-derived phenols were immobilized in the upper 5 cm of the soil. Notably, soluble phenolic compounds were associated with the formation of coarser pores at the expense of fine pores, suggesting that OMW-derived organic carbon played a crucial role in controlling the depth-dependent transport mechanisms of OMW within the soil matrix.
In conclusion, this study provides valuable insights into the fate and impact of OMW-derived phenolic compounds in soil. It emphasizes the importance of conducting OMW applications with careful irrigation practices and thorough phenol leaching surveys to minimize the risk of potential groundwater contamination. Additionally, further experiments are warranted to investigate the sorption capacity of the soil during and after OMW application and its influence on the stability of soluble phenolic compounds in soils.
Thermo-optic interaction differs significantly from the usual particle-particle interactions in physics, as it is retarded in time. A prominent platform for realising this kind of interaction are photon Bose–Einstein condensates, which are created in dye-filled microcavities. The dye solution continually absorbs and re-emits the cavity photons, causing the photon gas to thermalise and to form a Bose–Einstein condensate. Because of a non-ideal quantum efficiency, these absorption-emission cycles heat the dye solution, creating a medium that provides an effective thermo-optic photon–photon interaction. So far, only a mean-field description of this process exists. This paper goes beyond that by working out a quantum mechanical description of the effective thermo-optic photon–photon interaction. To this end, a self-consistent treatment of the temperature diffusion forms the backbone of the model. Furthermore, the wide separation of the experimental timescales allows for deriving an approximate Hamiltonian. The resulting quantum theory is applied in the perturbative regime to both a harmonic and a box potential, in order to investigate its prospects for precise measurements of the effective photon–photon interaction strength.