Enantiomerically pure, C2-symmetric 2,6-bis(pyrazol-3-yl)pyridine ligands were obtained by treatment of diethyl-2,6-pyridinedicarbonate with (1R,4R)-(+)-camphor in the presence of NaH, followed by ring closure with hydrazine. After twofold N-alkylation at the pyrazole rings, the addition of iron(II) chloride led to the corresponding pentacoordinate dichloridoiron(II) complexes. All intermediates of the ligand synthesis, the ligands bearing NCH3 and NCH2C6H5 groups, and the derived iron(II) complexes were structurally characterized by means of X-ray structure analysis. In situ reaction with iron(II) carboxylates resulted in the formation of iron(II) carboxylate complexes, which turned out to be highly active in the hydrosilylation of acetophenone. However, even at room temperature, the enantiomeric excess of the product 1-phenylethanol is poor. 57Fe Mössbauer spectroscopy gave insight into the species formed during catalysis.
This DFG-funded research project aimed, within the framework of fundamental investigations, to gain a better understanding of the mechanisms of the W-Cl repair principle and to contribute to creating the necessary basis for a broader practical application of the repair principle. The focus was on the development of a model to describe the chloride redistribution after the application of a system-sealing surface protective coating. On the basis of Fick's second law of diffusion, a mathematical model with a self-contained analytical solution was developed, with the help of which the chloride redistribution after application of a system-sealing surface protective coating can be calculated under the idealized assumption of complete water saturation of the concrete. Furthermore, the influence of the dehydration of the concrete, expected as a result of the application of the repair principle W-Cl, on the chloride redistribution was investigated. On the basis of laboratory tests and numerical simulations, material-specific reduction functions were developed to quantify the relationship between the chloride diffusion coefficient and the ambient humidity.
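For the idealized case of complete water saturation, chloride ingress governed by Fick's second law with a constant surface concentration has the classic error-function solution C(x, t) = C_s · erfc(x / (2·√(D·t))). The following is a minimal sketch of that textbook solution only; the project's redistribution model for coated concrete is more involved, and all names and units here are illustrative assumptions.

```python
import math

def chloride_profile(x_mm, t_years, c_surface, d_eff_mm2_per_year):
    """Chloride content at depth x after time t for a semi-infinite,
    water-saturated medium with constant surface concentration:
    C(x, t) = C_s * erfc(x / (2 * sqrt(D * t)))  (Fick's second law).
    Illustrative sketch; not the project's coated-concrete model."""
    if t_years <= 0:
        return 0.0
    return c_surface * math.erfc(x_mm / (2.0 * math.sqrt(d_eff_mm2_per_year * t_years)))
```

With, e.g., C_s = 0.5 and D = 30 mm²/a, the computed profile decays monotonically with depth, as expected for pure diffusion.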
In this paper, the relationship between production parameters of ultra high performance fiber-reinforced concrete (UHPFRC) and the spatial distribution and orientation of the steel fibers is investigated. UHPFRC specimens with varying fiber diameter, fiber volume fraction, and rheology of the mixture are produced. Additionally, casting is performed from the side or the middle of the formwork. Imaging by micro computed tomography allows for a statistical analysis of the spatial arrangement of the fibers in the test specimens. The flexural behavior and the load capacity of the specimens are analyzed by four point bending tests. The results of the bending tests are well explained by characteristics of the fiber systems determined from the image data.
Virtual Possibilities: Exploring the Role of Emerging Technologies in Work and Learning Environments
(2024)
The present work aims to investigate whether virtual reality can support learning as well as vocational work environments. To this end, four studies were conducted, with the first set investigating the demands on vocational workers and the impact of input methods on participant performance. These studies laid the foundation needed to create studies incorporating virtual reality research. The second set of studies was concerned with the impact of virtual reality on learning performance as well as the influence of binaural stimuli presentation on task performance. Results of each study are discussed individually and in conjunction with one another. The four studies are supplemented with additional research conducted by the author as well as an analysis of the growing field of virtual reality-based research. The thesis closes by embedding the discussed work into the scientific landscape and giving an outlook on virtual reality-based use cases in the future.
The fundamental differences in hydrodynamics of the froth and spray regime account for the ongoing interest in the search for the point of phase inversion. This short communication presents a new approach for the identification of phase inversion on sieve trays in terms of an image-based measurement technique. Image analysis of entrained droplets reveals a distinct increase in Sauter mean diameter and droplet frequency during phase inversion. Further measurement methods such as pressure drop, gravimetric analysis of entrained liquid, froth height assessment and photographic observation of the flow regime serve as reference values and complement the discussion. A flow map based on the experimental data comprises each regime and shows good agreement with phase inversion correlations from the literature.
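The Sauter mean diameter referred to above is defined as d32 = Σdᵢ³ / Σdᵢ², i.e., the diameter of a sphere with the same volume-to-surface ratio as the droplet sample. A minimal sketch of that standard definition (the function name is illustrative, not from the paper):

```python
def sauter_mean_diameter(diameters):
    """Sauter mean diameter d32 = sum(d^3) / sum(d^2) over a droplet
    sample -- the diameter of a sphere with the sample's overall
    volume-to-surface ratio."""
    num = sum(d ** 3 for d in diameters)
    den = sum(d ** 2 for d in diameters)
    return num / den
```

For a monodisperse sample d32 equals the common diameter; larger droplets pull d32 upward strongly, which is why it rises during phase inversion when entrained droplets coarsen.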
Algorithmic decision-making (ADM) systems have come to support, pre-empt or substitute for human decisions in manifold areas, with potentially significant impacts on individuals' lives. Achieving transparency and accountability has been formulated as a general goal regarding the use of these systems. However, concrete applications differ widely in the degree of risk and the accountability problems they entail for data subjects. The present paper addresses this variation and presents a framework that differentiates regulatory requirements for a range of ADM system uses. It draws on agency theory to conceptualize accountability challenges from the point of view of data subjects with the purpose to systematize instruments for safeguarding algorithmic accountability. The paper furthermore shows how such instruments can be matched to applications of ADM based on a risk matrix. The resulting comprehensive framework can guide the evaluation of ADM systems and the choice of suitable regulatory provisions.
We describe a novel technique for the simultaneous visualization of multiple scalar fields, e.g. representing the members of an ensemble, based on their contour trees. Using tree alignments, a graph-theoretic concept similar to edit distance mappings, we identify commonalities across multiple contour trees and leverage these to obtain a layout that can represent all trees simultaneously in an easy-to-interpret, minimally-cluttered manner. We describe a heuristic algorithm to compute tree alignments for a given similarity metric, and give an algorithm to compute a joint layout of the resulting aligned contour trees. We apply our approach to the visualization of scalar field ensembles, discuss basic visualization and interaction possibilities, and demonstrate results on several analytic and real-world examples.
Synchrotron-based nuclear resonance vibrational spectroscopy (NRVS) using the Mössbauer isotope 161Dy has been employed for the first time to study the vibrational properties of a single-molecule magnet (SMM) incorporating DyIII, namely [Dy(Cy3PO)2(H2O)5]Br3⋅2 (Cy3PO)⋅2 H2O ⋅2 EtOH. The experimental partial phonon density of states (pDOS), which includes all vibrational modes involving a displacement of the DyIII ion, was reproduced by means of simulations using density functional theory (DFT), enabling the assignment of all intramolecular vibrational modes. This study proves that 161Dy NRVS is a powerful experimental tool with significant potential to help to clarify the role of phonons in SMMs.
This article proposes a new clock-dependent gain-scheduled dynamic output feedback controller for delayed linear parameter varying systems with piecewise constant parameters. The proposed controller guarantees ℒ2-performance. By employing a clock-dependent Lyapunov–Krasovskii functional, a sufficient condition for the existence of the controller is provided in terms of clock- and parameter-dependent linear matrix inequalities. A case study on output feedback control of delayed switched systems is also provided. To illustrate the efficacy of the result, it is applied to a practical VTOL helicopter model.
The generation of liquid-liquid dispersions with defined droplet size distributions is an important aspect of process equipment design. In this work, two centrifugal pumps with different impeller diameters were used to generate dispersions at selected operating points for a paraffin oil-water system. The droplet break-up phenomena within the centrifugal pumps were analyzed using a transparent pump design in combination with high-speed imaging. Droplet size distributions at the centrifugal pump discharge nozzle were recorded with optical probe measurement technologies and evaluated by means of image processing using a neural network. The influences of impeller diameter, rotational speed, volumetric flow rate and dispersed phase fraction are discussed. Experimental data are correlated using fluid properties, operating data as well as centrifugal pump dimensions. The correlations developed from the results of this work serve as a basis for the equipment design of centrifugal pumps.
Microcrystalline cellulose pellets for oral drug delivery are often produced by a combined wet extrusion-spheronization process. During the entire process, the cylindrical as well as the spherical pellets are exposed to various stresses resulting in a change of their shape and size due to plastic deformation and breakage. In this work, the effect of moisture content of pellets on their mechanical behavior is studied. In static compression tests, the strong influence of water content on deformation behavior of pellets is confirmed. Moreover, impact tests are performed using a setup consisting of three high-speed cameras to record pellet-wall collisions. Material properties, such as stiffness, restitution coefficient, breakage force, and displacement, were analyzed depending on the water content.
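The restitution coefficient mentioned above is the ratio of rebound speed to impact speed, and the speeds themselves can be estimated from particle positions tracked across consecutive camera frames. A minimal sketch of these textbook definitions (function names and the constant-frame-rate assumption are illustrative, not the paper's actual evaluation pipeline):

```python
def restitution_coefficient(v_in, v_out):
    """Coefficient of restitution e = |v_rebound| / |v_impact|
    for a normal pellet-wall collision."""
    return abs(v_out) / abs(v_in)

def speed_from_frames(positions_mm, fps):
    """Mean speed (mm/s) estimated from tracked positions in
    consecutive high-speed camera frames at a constant frame rate."""
    dt = 1.0 / fps
    steps = [abs(b - a) / dt for a, b in zip(positions_mm, positions_mm[1:])]
    return sum(steps) / len(steps)
```

In practice the incoming and outgoing speeds would each be fitted over several frames before and after contact; the two-point differences here only indicate the principle.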
Microbial planktonic communities are the basis of food webs in aquatic ecosystems since they contribute substantially to primary production and nutrient recycling. Network analyses of DNA metabarcoding data sets emerged as a powerful tool to untangle the complex ecological relationships among the key players in food webs. In this study, we evaluated co-occurrence networks constructed from time-series metabarcoding data sets (12 months, biweekly sampling) of protistan plankton communities in surface layers (epilimnion) and bottom waters (hypolimnion) of two temperate deep lakes, Lake Mondsee (Austria) and Lake Zurich (Switzerland). Lake Zurich plankton communities were less tightly connected, more fragmented and had a higher susceptibility to a species extinction scenario compared to Lake Mondsee communities. We interpret these results as a lower robustness of Lake Zurich protistan plankton to environmental stressors, especially stressors resulting from climate change. In all networks, the phylum Ciliophora contributed the highest number of nodes, among them several in key positions of the networks. Associations in ciliate-specific subnetworks resembled autecological species-specific traits that indicate adaptions to specific environmental conditions. We demonstrate the strength of co-occurrence network analyses to deepen our understanding of plankton community dynamics in lakes and indicate biotic relationships, which resulted in new hypotheses that may guide future research in climate-stressed ecosystems.
A highly water-dispersible heterogeneous Brønsted acid surfactant was prepared by synthesis of a bi-functional anisotropic Janus-type material. The catalyst comprises ionic functionalities on one side and propyl-SO3H groups on the other. The novel material was investigated as a green substitute for a homogeneous acidic phase transfer catalyst (PTC). The activity of the catalyst was investigated for the aqueous-phase oxidation of cyclohexene to adipic acid with 30 % hydrogen peroxide, even on a decagram scale. It can also be used for the synthesis of some other carboxylic acid derivatives as well as diethyl phthalate.
A characterisation of the spaces \({\mathcal {G}}_K\) and \({\mathcal {G}}_K'\) introduced in Grothaus et al. (Methods Funct Anal Topol 3(2):46–64, 1997) and Potthoff and Timpel (Potential Anal 4(6):637–654, 1995) is given. A first characterisation of these spaces provided in Grothaus et al. (Methods Funct Anal Topol 3(2):46–64, 1997) uses the concepts of holomorphy on infinite dimensional spaces. We, instead, give a characterisation in terms of U-functionals, i.e., classical holomorphic functions on the field of complex numbers. We apply our new characterisation to derive new results concerning a stochastic transport equation and the stochastic heat equation with multiplicative noise.
Machining is very common in industry, e.g. in the automotive and aerospace industries. It is a nonlinear dynamic problem involving large deformations, large strains, large strain rates and high temperatures, which poses difficulties for numerical methods such as the finite element method. One way to simulate such problems is the Particle Finite Element Method (PFEM), which combines the advantages of continuum mechanics and discrete modeling techniques. In this work we introduce an improved PFEM called the Adaptive Particle Finite Element Method (A-PFEM). The A-PFEM inserts particles and removes deficient elements during the numerical simulation to improve accuracy and precision, decrease computing time, and resolve the phenomena that take place in machining at multiple scales. At the end of this paper, some examples are presented to show the performance of the A-PFEM.
One technique to describe the failure of mechanical structures is a phase field model for fracture. Phase field models for fracture consider an independent scalar field variable in addition to the mechanical displacement [1]. The phase field ansatz approximates crack surfaces as a continuous transition zone in which the phase field variable varies from a value that indicates intact material to another value that represents cracks. For a good approximation of cracks, these transition zones are required to be narrow, which leads to steep gradients in the fracture field. As a consequence, the required mesh density in a finite element simulation and thus the computational effort increases. In order to circumvent this efficiency problem, exponential shape functions were introduced in the discretization of the phase field variable, see [2]. Compared to the bilinear shape functions these special shape functions allow for a better approximation of the steep transition with less elements. Unfortunately, the exponential shape functions are not symmetric, which requires a certain orientation of elements relative to the crack surfaces. This adaptation is not uniquely determined and needs to be set up in the correct way in order to improve the approximation of smooth cracks. The issue is solved in this work by reorientating the exponential shape functions according to the nodal value of phase field gradient in a particular element. To be precise, this work discusses an adaptive algorithm that implements such a reorientation for 2d and 3d situations.
In grinding, the crystal grain size of the workpiece material is of a similar order of magnitude as the removal depth. This raises the question of whether an anisotropic material model, which considers the effect of crystal grain sizes and orientations, would predict the process forces better than an isotropic material model. Initially, a simple micro-indentation process is chosen to compare the two models. In this work, a crystal plasticity model and an isotropic Johnson-Cook plasticity model are employed to simulate micro-indentation of a twinning induced plasticity (TWIP) steel. The results of the two models are compared using the force-displacement curves from the micro-indentation experiments. In the future, the study will be extended to describe the material removal process during a single grit scratch test.
A branch-and-cut approach and alternative formulations for the traveling salesman problem with drone
(2020)
In this paper, we are interested in studying the traveling salesman problem with drone (TSP-D). Given a set of customers and a truck that is equipped with a single drone, the TSP-D asks that all customers are served exactly once and minimal delivery time is achieved. We provide two compact mixed integer linear programming formulations that can be used to address instances with up to 10 customers within a few seconds. Notably, we introduce a third formulation for the TSP-D with an exponential number of constraints. The latter formulation is suitable to be solved by a branch-and-cut algorithm. Indeed, this approach can be used to find optimal solutions for several instances with up to 20 customers within 1 hour, thus challenging the current state-of-the-art in solving the TSP-D. A detailed numerical study provides an in-depth comparison of the effectiveness of the proposed formulations. Moreover, we reveal further details on the operational characteristics of a drone-assisted delivery system. By using three different sets of benchmark instances, consideration is given to various assumptions that affect, for example, technological drone parameters and the impact of distance metrics.
In recent years, there has been a growing need for accurate 3D scene reconstruction. Recent developments in the automotive industry have led to the increased use of advanced driver assistance systems (ADAS), where 3D reconstruction techniques are used, for example, as part of a collision detection system. For such applications, scene geometry reconstruction is usually performed in the form of depth estimation, where distances to scene objects are obtained.
In general, depth estimation systems can be divided into active and passive. Both systems have their advantages and disadvantages, but passive systems are usually cheaper to produce and easier to assemble and integrate than active systems. Passive systems can be stereo- or multiple-view based. Up to a certain limit, increasing the number of views in multi-view systems usually results in improved depth estimation accuracy.
One potential problem for ensuring the reliability of multi-view systems is the need to accurately estimate the orientation of their optical sensors. One way to ensure a well-defined sensor placement for multi-view systems is to rigidly fix the sensors at the manufacturing stage. Unlike arbitrary sensor placement, using a simplified and known sensor placement geometry further simplifies the depth estimation.
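For a fixed, known sensor geometry such as a rectified camera pair, depth follows directly from the standard pinhole relation Z = f·B/d, where f is the focal length in pixels, B the baseline between sensors, and d the disparity. A minimal sketch of this textbook relation (the function name and parameters are illustrative, not from the dissertation):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a scene point from the rectified two-view pinhole
    relation Z = f * B / d, with f in pixels, B in meters and the
    disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

The same relation explains why a known, fixed geometry simplifies the problem: once f and B are calibrated constants, estimating depth reduces to estimating disparity.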
This leads us to the concept of the light field, which parameterizes the visible light passing through all viewpoints by its intersections with an angular and a spatial plane. Applied to computer vision, this yields a 2D grid of 2D images, where the physical distances between the images are fixed and proportional to each other.
Existing light field depth estimation methods provide good accuracy, which is suitable for industrial applications. However, the main problems of these methods are related to their running time and resource requirements. Most of the algorithms presented in the literature are optimized primarily for accuracy, can only be run on high-performance machines, and often require a significant amount of time to process the data and obtain results.
Real-world applications often have running time requirements. Often, there is also a power-consumption limitation. In this dissertation, we investigate the problem of building a depth estimation system with a light field camera that satisfies the running time and power consumption constraints without significant loss of estimation accuracy.
First, an algorithm for calibrating light field cameras is proposed, together with an algorithm for automatic calibration refinement, that works on arbitrary captured scenes. An algorithm for classical geometric depth estimation using light field cameras is proposed. Ways to optimize the algorithm for real-time use without significant loss of accuracy are presented. Finally, the ways how the presented depth estimation methods can be extended using modern deep learning paradigms under the two previously mentioned constraints are shown.
Substrate channeling is a widespread mechanism in metabolic pathways to avoid decomposition of unstable intermediates and competing reactions, and to accelerate catalytic turnover. During the biosynthesis of light-harvesting phycobilins in cyanobacteria, two members of the ferredoxin-dependent bilin reductases are involved in the reduction of the open-chain tetrapyrrole biliverdin IXα to the pink pigment phycoerythrobilin. The first reaction is catalyzed by 15,16-dihydrobiliverdin:ferredoxin oxidoreductase and produces the unstable intermediate 15,16-dihydrobiliverdin (DHBV). This intermediate is subsequently converted by phycoerythrobilin:ferredoxin oxidoreductase to the final product phycoerythrobilin. Although substrate channeling was postulated a decade ago, detailed experimental evidence was missing. A new on-column assay employing immobilized enzyme in combination with UV-Vis and fluorescence spectroscopy revealed that both enzymes transiently interact and that transfer of the intermediate is facilitated by a significantly higher binding affinity of DHBV toward phycoerythrobilin:ferredoxin oxidoreductase. We conclude from the presented data that the intermediate DHBV is transferred via proximity channeling.
A novel method for the highly stereoselective synthesis of tetrahydropyrans is reported. This domino reaction is based on a twofold addition of enamides to aldehydes followed by a subsequent cyclization and furnishes fully substituted tetrahydropyrans in high yields. Three new σ-bonds and five continuous stereogenic centers are formed in this one-pot process with a remarkable degree of diastereoselectivity. In most cases, the formation of only one out of 16 possible diastereomers is observed. Two different stereoisomers can be accessed in a controlled fashion starting either from an E- or a Z-configured enamide.
Covering edges in networks
(2019)
In this paper we consider the covering problem on a network G=(V,E) with edge demands. The task is to cover a subset J⊆E of the edges with a minimum number of facilities within a predefined coverage radius. We focus on both the nodal and the absolute version of this problem. In the latter, facilities may be placed everywhere in the network. While there already exist polynomial time algorithms to solve the problem on trees, we establish a finite dominating set (i.e., a finite subset of points provably containing an optimal solution) for the absolute version in general graphs. Complexity and approximability results are given and a greedy strategy is proved to be a (1+ln(|J|))-approximate algorithm. Finally, the different approaches are compared in a computational study.
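The (1+ln(|J|))-approximate greedy strategy follows the classic set-cover pattern: repeatedly choose the candidate facility location that covers the most still-uncovered demand edges. A hedged sketch of that pattern (the data layout, with coverage sets precomputed per candidate location, is an assumption for illustration, not the paper's implementation):

```python
def greedy_edge_cover(cover_sets, demand_edges):
    """Greedy set-cover heuristic for the edge covering problem:
    cover_sets maps each candidate location to the set of demand
    edges within its coverage radius; pick the location with the
    largest marginal gain until all demand edges are covered."""
    uncovered = set(demand_edges)
    chosen = []
    while uncovered:
        best = max(cover_sets, key=lambda loc: len(cover_sets[loc] & uncovered))
        gained = cover_sets[best] & uncovered
        if not gained:
            raise ValueError("remaining demand edges cannot be covered")
        chosen.append(best)
        uncovered -= gained
    return chosen
```

The (1+ln(|J|)) guarantee proved in the paper matches the well-known logarithmic bound for this greedy scheme on set cover instances.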
This contribution presents a novel approach to investigate entrainment in distillation and absorption columns. An image-based probe allows precise droplet detection at various radial and axial positions above trays. Validations achieve an average error of 6.4 % (monospheres 9.2–114.4 mm) and 3 % (monodisperse droplet stream up to 19 m s–1 and 74.5 mm). Experiments in a DN 450 cold flow test rig show an increasing (decreasing) share of larger droplets with higher gas (liquid) loads. Locally measured droplet sizes depend on probe position as well as tray design and enable an extrapolation to integral entrainment rates.
The use of digital media in adult education is very heterogeneous. To date, there are no empirical studies that have examined the possibility that differences in media usage of adult educators could be in part due to differential media pedagogical attitudes of adult educators. Moreover, there is a lack of empirical evidence to support the understanding of what factors modulate differences in media pedagogical competencies of adult educators. In order to examine different theoretical potentialities, in the present study, an online survey of adult educators (n = 626) was conducted to investigate the attitudes of adult educators in Germany toward their use of digital media. The results of the study indicate that there are influencing factors such as educational level or employment context on attitudes toward digital media.
Organic solutions of lithium bis(fluorosulfonyl)imide (LiFSI) are promising electrolytes for Li-ion batteries. Information on the diffusion coefficients of the species in these solutions is needed for battery design. Therefore, the self-diffusion coefficients in such solutions were studied experimentally with the pulsed-field gradient nuclear magnetic resonance technique. The self-diffusion coefficients of the ions Li+ and FSI− as well as those of the solvents were measured for LiFSI solutions in pure dimethyl carbonate and ethylene carbonate as well as in mixtures of these solvents at 298 K and ambient pressure. Despite the Li+ ion being the smallest species in the solution, its self-diffusion coefficient is the lowest as a result of its strong coordination with the solvent molecules.
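Pulsed-field gradient NMR diffusion data are commonly evaluated with the Stejskal–Tanner relation ln(I/I₀) = −D·(γgδ)²·(Δ − δ/3), where γ is the gyromagnetic ratio, g the gradient strength, δ the gradient pulse duration and Δ the diffusion time. A minimal sketch of solving this standard relation for D; whether the study used exactly this single-exponential analysis is an assumption, and the names are illustrative:

```python
import math

def self_diffusion_coefficient(i0, i, gamma, g, delta, big_delta):
    """Solve the Stejskal-Tanner relation
        ln(I/I0) = -D * (gamma * g * delta)**2 * (Delta - delta/3)
    for the self-diffusion coefficient D (all quantities in SI units)."""
    b = (gamma * g * delta) ** 2 * (big_delta - delta / 3.0)
    return -math.log(i / i0) / b
```

In practice D is obtained from a fit over a series of gradient strengths rather than a single attenuation value; the single-point inversion above only shows the relation being fitted.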
The production of nylon-6.6 is one of the largest-scale syntheses in industrial chemistry. The standard procedure is based on an energy-consuming low-level conversion of cyclohexane to yield adipic acid in two steps, which is then converted to nylon-6.6 in a separate step. Therefore, there is a strong incentive to optimize the synthetic route in an economic and ecological manner. In this work, we present a one-pot oxygenation of cyclohexane with hydrogen peroxide and a µ4-oxido-copper cluster catalyst to yield dicarboxylic acids with adipic acid as the main product.
This paper provides experimental results on investigations for the validation of photogrammetric strain measurements of ultra-high-performance concrete (UHPC) prisms subjected to static and cyclic bending-tensile stress. For this purpose, 4 static and 5 cyclic test series were performed. Damage progress during loading is monitored by means of a digital image correlation (DIC) system and a clip gauge. The control of the DIC by trigger lists and the measurement noise as a function of the measurement rate are examined. All static tests were performed force controlled with the same testing speed and the same measuring rate of DIC and clip gauge. All cyclic tests were performed with the same upper and lower stress levels but with different loading rates. During the static tests, the DIC can be used to make accurate strain measurements before UHPC failure. In the cyclic tests, the measurement noise of the DIC decreases with an increasing measuring rate. The tests performed confirm the control of the DIC by trigger lists for cyclic tests on UHPC prisms and show that the measurement noise is negligible in static and cyclic tests.
The Atacama Desert is the driest non-polar desert on Earth, presenting precarious conditions for biological activity. In the arid coastal belt, life is restricted to areas with fog events that cause almost daily wet–dry cycles. In such an area, we discovered a hitherto unknown and unique ground-covering biocenosis dominated by lichens, fungi, and algae attached to grit-sized (~6 mm) quartz and granitoid stones. Comparable biocenoses forming a kind of layer on top of soil and rock surfaces are in general summarized as cryptogamic ground covers (CGC) in the literature. In contrast to known CGC from arid environments, to which frequent cyclic wetting events are lethal, in the Atacama Desert every fog event is answered by photosynthetic activity of the soil community and thus considered as the desert's breath. Photosynthesis of the new CGC type is activated by the lowest amount of water known for such a community worldwide, thus enabling the unique biocenosis to fulfill a variety of ecosystem services. In a considerable portion of the coastal Atacama Desert, it protects the soil from sporadically occurring splash erosion and contributes to the accumulation of soil carbon and nitrogen as well as soil formation through bio-weathering. The structure and function of the new CGC type are discussed, and we suggest the name grit–crust. We conclude that this type of CGC can be expected in all non-polar fog deserts of the world and may resemble the cryptogam communities that shaped ancient Earth. It may thus represent a relevant player in current and ancient biogeochemical cycling.
The increase of pluvial flooding has long been discussed as a most probable outcome of climate change. This has raised the question of necessary consequences in the design of urban drainage systems in order to secure adequate flood protection and resilience. Due to the uncertainties in future trends of heavy rainfall events, the awareness of remaining risks of extreme pluvial flooding needs to be roused at responsible decision makers and the public as well, leading to the implementation of pluvial flood risk management (PFRM) concepts. The state of two core elements of PFRM in Germany is described here: flood hazard and risk evaluation, and risk communication. In 2016 the guideline DWA-M 119 was published to establish city-based PFRM concepts in specification of the European Flood Risk Management Directive (EU 2007). As core elements, the guidelines recommend a site-specific analysis and evaluation of flood hazards and potentials of flood damages to create flood hazard and flood risk maps. In the long run, PFRM needs to be established as a joint community effort and a requirement for more flood resilience. The risk communication within the administration and in the public requires a comprehensible characterization and classification of heavy rainfall to illustrate event extremity. The concept of a rainstorm severity index (RSI) instead of statistical rainfall parameters appears to be promising to gain a better perception by affected citizens and non-hydrology experts as well. A methodical approach is described to specify and assign site-specific rainfall depths within the severity index scheme RSI12.
This article is categorized under:
Engineering Water > Sustainable Engineering of Water
Engineering Water > Planning Water
Engineering Water > Methods
Measuring Particle Size Distributions in Multiphase Flows Using a Convolutional Neural Network
(2019)
The efficiency of many chemical engineering applications depends on the surface/volume ratio of the dispersed phase. Knowledge of this particle size distribution is a key factor for better process control. The challenge of measurements acquired by optical imaging techniques is the segmentation of overlapping particles, especially in high phase fraction flows. In this work, a convolutional neural network is trained to segment droplets in images acquired by a shadowgraphic approach. The network is trained on artificial images and implemented into a droplet size algorithm. The results are compared to an open-source segmentation approach.
Hajós' conjecture asserts that a simple Eulerian graph on n vertices can be decomposed into at most [(n-1)/2] cycles. The conjecture is only proved for graph classes in which every element contains vertices of degree 2 or 4. We develop new techniques to construct cycle decompositions. They work on the common neighborhood of two degree-6 vertices. With these techniques, we find structures that cannot occur in a minimal counterexample to Hajós' conjecture and verify the conjecture for Eulerian graphs of pathwidth at most 6. This implies that these graphs satisfy the small cycle double cover conjecture.
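The starting point for any such decomposition is the classical fact that a graph in which every vertex has even degree splits into edge-disjoint simple cycles; Hajós' conjecture is about how few cycles suffice. A minimal sketch of the classical construction (not the paper's technique, which works on common neighborhoods of degree-6 vertices): trace a closed trail, consuming edges, then split the trail into simple cycles with a stack.

```python
from collections import defaultdict

def cycle_decomposition(edges):
    """Decompose a simple graph with all vertex degrees even into
    edge-disjoint simple cycles (each returned as a closed vertex list).
    Makes no attempt to minimize the number of cycles."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cycles = []
    while any(adj.values()):
        start = next(v for v in adj if adj[v])
        # Trace a closed trail from start; since degrees are even,
        # the trail can always continue until it returns to start.
        trail = [start]
        while True:
            u = trail[-1]
            v = next(iter(adj[u]))
            adj[u].discard(v)  # consume the edge as it is traversed
            adj[v].discard(u)
            trail.append(v)
            if v == start:
                break
        # Split the closed trail into simple cycles with a stack.
        stack, pos = [], {}
        for v in trail:
            if v in pos:
                cycles.append(stack[pos[v]:] + [v])
                for w in stack[pos[v] + 1:]:
                    del pos[w]
                del stack[pos[v] + 1:]
            else:
                pos[v] = len(stack)
                stack.append(v)
    return cycles
```

On the bowtie graph (two triangles sharing a vertex) this yields two cycles, matching the [(n-1)/2] = 2 bound of the conjecture for n = 5.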
Interest in micro applications has increased in recent years due to new methods of fabrication. One fabrication process is direct laser writing, which can produce high-precision structures in the micrometer range. The material properties of the micro structures are related to the writing parameters, such as laser power, scan speed, distance between written lines and writing direction. This work presents investigations of the thermal length expansion coefficient of a laser-written polymer with regard to laser power. To this end, cantilever structures are fabricated. The small cantilevers are heated and their length expansion is observed using a microscope. Images of the cantilevers at different temperatures are taken, and by image post-processing the change in length and the coefficients of thermal expansion are determined.
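Extracting the expansion coefficient from such image pairs reduces to α = (ΔL/L₀)/ΔT; a minimal sketch with made-up readings (not measured values from the paper):

```python
def expansion_coefficient(l0, l1, t0, t1):
    """Linear coefficient of thermal expansion from two length
    measurements: alpha = (L1 - L0) / (L0 * (T1 - T0))."""
    return (l1 - l0) / (l0 * (t1 - t0))

# illustrative readings: a 100 um cantilever grows by 0.6 um over 100 K
alpha = expansion_coefficient(100.0, 100.6, 25.0, 125.0)  # ~ 6e-05 per K
```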
With significant technological growth and computing power, it is possible to simulate metal cutting processes with different discretization techniques. Classically, Lagrangian or Eulerian finite element formulations are used to model the metal cutting process. The Lagrangian approach is accurate in its representation of the domain boundary, but requires a re-meshing procedure to avoid element distortions. The Eulerian approach provides a steady-state solution of the chip-workpiece separation; however, its limitation lies in the treatment of convective terms during motion. The Arbitrary Lagrangian-Eulerian (ALE) method can be used to combine the advantages of both methods and avoid the disadvantages. In the Lagrangian framework, the use of a meshless technique, Smoothed Particle Hydrodynamics (SPH), has its advantage in large strain deformation problems without the need for re-meshing algorithms. This work compares the Lagrangian, ALE and SPH approaches by modelling a turning process.
This work reviews the state-of-the-art models for the simulation of bubble columns and focuses on methods coupled with computational fluid dynamics (CFD) where the potential and deficits of the models are evaluated. Particular attention is paid to different approaches in multiphase fluid dynamics including the population balance to determine bubble size distributions and the modeling of turbulence where the authors refer to numerous published examples. Additional models for reactive systems are presented as well as a special chapter regarding the extension of the models for the simulation of bubble columns with a present solid particle phase, i.e., slurry bubble columns.
Classification of Fine Particles Using the Hydrodynamic Forces in the Boundary Layer of a Membrane
(2019)
The wet classification of particles < 10 μm is a complex process that has been researched for many years. In this study, the usage of a modified cross-flow filtration process as a classification process was investigated. With this process, particles in a fine micrometer range can be separated from suspensions. The upper particle size is dependent on hydrodynamic forces. The experimental results were compared with different hydrodynamic force models to predict upper size. The influence of the permeate flux and the particle concentration in the feed on the upper particle size is studied.
In this contribution a phase field model for ductile fracture with linear isotropic hardening is presented. An energy functional consisting of an elastic energy, a plastic dissipation potential and a Griffith type fracture energy constitutes the model. The application of an unaltered radial return algorithm on element level is possible due to the choice of an appropriate coupling between the nodal degrees of freedom, namely the displacement and the crack/fracture fields. The degradation function models the mentioned coupling by reducing the stiffness of the material and the plastic contribution of the energy density in broken material. Furthermore, to solve the global system of differential equations comprising the balance of linear momentum and the quasi-static Ginzburg-Landau type evolution equation, the application of a monolithic iterative solution scheme becomes feasible. The compact model is used to perform 3D simulations of fracture in tension. The computed plastic zones are compared to the dog-bone model that is used to derive validity criteria for KIC measurements.
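A common form of such a phase-field energy functional, in which the degradation function couples the displacement and crack fields by reducing both the stiffness and the plastic energy contribution, is (a sketch; the paper's exact functional may differ in details):

```latex
E(\mathbf{u},\alpha,d) = \int_{\Omega} \Big[\, g(d)\,\big(\psi_e(\boldsymbol{\varepsilon}^e) + \psi_p(\alpha)\big)
  + \frac{G_c}{2\ell}\,\big(d^2 + \ell^2 \lvert\nabla d\rvert^2\big) \Big]\,\mathrm{d}V,
\qquad g(d) = (1-d)^2,
```

with \(\psi_e\) the elastic energy density, \(\psi_p\) the plastic dissipation potential (linear isotropic hardening with hardening variable \(\alpha\)), \(G_c\) the Griffith fracture energy and \(\ell\) the regularization length of the crack field \(d\).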
With the expansion of electromobility and wind energy, the number of frequency-inverter-controlled electric motors and generators is increasing. In parallel, the number of rolling bearing failures caused by inverter-induced parasitic currents also shows an increasing trend. In order to determine the electrical state of the rolling bearing, to develop preventive measures against damage caused by parasitic currents and to support system-level calculations, electrical rolling bearing models have been developed. The models are based on the electrically insulating ability of the lubricant film that develops in the rolling contacts. For the capacitance calculation of the rolling contacts, different correction factors were developed to simplify the complex tribological and electrical interactions of this region. The state-of-the-art correction factors vary widely and their validity ranges also differ significantly, which leads to uncertainty in their general application and to the demand for further investigations in this field. In the present work, a combined simulation method is developed that can determine the capacitance of axially loaded rolling bearings. The simulation consists of an electrically extended EHL simulation for calculating the capacitance of the rolling contact, and an electrical FEM simulation for the capacitance calculation of the non-contact regions. By combining the resulting capacitance values of the two simulations, the total rolling bearing capacitance can be determined with high accuracy and without using correction factors. In addition, experimental investigations identify the different capacitance sources of the rolling bearing. After validation, the combined simulation method can be applied to investigate the different capacitance sources, i.e., to determine their significance relative to the total rolling bearing capacitance.
The developed simulation method allows a detailed analysis of the rolling bearing capacitances, taking into account influencing factors that could not be considered before (e.g., oil quantity in the environment of the rolling bearing). As a result, the accurate calculation of the rolling bearing capacitance can improve the prediction of the harmful parasitic currents and help to develop preventive measures against them.
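The basic building block of such electrical bearing models is the plate-capacitor approximation of a lubricated contact, C = ε₀·εᵣ·A/h; a minimal sketch with illustrative values, neglecting the non-contact regions that the FEM part of the combined method covers:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity in F/m

def contact_capacitance(eps_r, area, film_thickness):
    """Parallel-plate approximation of one EHL contact:
    C = eps0 * eps_r * A / h."""
    return EPS0 * eps_r * area / film_thickness

def bearing_capacitance(eps_r, area, film_thickness, n_elements):
    """Inner- and outer-ring contacts in series per rolling element
    (assumed identical here for simplicity), all elements in parallel."""
    c = contact_capacitance(eps_r, area, film_thickness)
    return n_elements * c / 2.0  # series combination of two equal capacitances

# illustrative: eps_r = 2.2 (oil), 1e-7 m^2 contact area, 200 nm film, 8 balls
c_total = bearing_capacitance(2.2, 1e-7, 200e-9, 8)
```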
The influence of droplet formation on mass transfer in liquid-liquid extraction was investigated in a quiescent aqueous continuous phase for the two transition components acetone and acetonitrile in toluene. Both transition components have similar characteristics. However, mass transfer from a droplet hanging on a capillary was observed to be approximately eight times slower than from a rising droplet. The droplet formation time and the initial solute concentration are decisive for the mass transfer behaviour. A lower volumetric flow leads to slower droplet formation and a higher specific mass transfer area enhancing mass transfer, which is visualized via laser-induced fluorescence (LIF). Additionally, as expected, higher initial solute concentrations promote Marangoni turbulence and thus mass transfer, which is measured via confocal Raman spectroscopy inside a fixed hanging droplet.
Knowledge workers face an ever-increasing flood of information in their daily work. They live in a “multi-tasking craziness”, involving activities like creating, finding, processing, assessing or organizing information while constantly switching from one context to another, each being associated with different tasks, documents, mails, etc. Hence, their personal information sphere, consisting of file, mail and bookmark folders as well as their content, calendar entries, etc., is cluttered with information that has become irrelevant. Finding important information thus gets harder and much previously gained knowledge is practically lost.
This thesis explores new ways of solving this problem by investigating the potential of self-(re)organizing and especially forgetting-enabled personal knowledge assistants in the given scenario. It utilizes so-called Managed Forgetting, which is an escalating set of measures to overcome the binary keep-or-delete paradigm, ranging from temporal hiding, to condensation, to adaptive reorganization, synchronization, archiving and deletion. Managed Forgetting is combined with two other major ideas: First, it uses the Semantic Desktop as an ecosystem, which brings Semantic Web and thus knowledge graph technologies to a user’s desktop, making it possible to capture and represent major parts of a user’s personal mental model in a machine-understandable way and exploit it in many different applications. Second, the system uses explicated context information – so-called Context Spaces: context is seen as an explicit interaction element users can work with (i.e. a “tangible” object similar to a folder) and in (immersion). The thesis is structured according to the basic interaction cycle with such a system, ranging from evidence collection to information extraction and context elicitation, followed by information value assessment and the actual support measures consisting of self-(re)organization decisions (back-end) and user interface updates (front-end). The system’s data foundation consists of personal or group knowledge graphs as well as native data. This work makes contributions to all of these aspects, several of which have been investigated and developed in interdisciplinary research with cognitive scientists. On a more general level, searching and trust in such highly autonomous assistants have also been investigated.
In summary, a self-(re)organizing and especially forgetting-enabled support system for information management and knowledge work has been realized. Its different features vary in maturity: the most mature ones are already in practical use (also in industry), while the latest are just well elaborated (position papers) or rough ideas. Different evaluation strategies have been applied ranging from mere data-driven experiments to various user studies. Some of them were rather short-term with controlled laboratory conditions, others less controlled but spanning several months. Different benefits of working with such a system could be quantified, e.g. cognitive offloading effects and reduced task switching/resumption time. Other benefits were gathered qualitatively, e.g. tidiness of the information sphere and its better alignment with the user’s mental model. The presented approach has been shown to hold a lot of potential. In some aspects, however, only first steps have been taken towards tapping it, e.g. several support measures can be further refined and automation further increased.
Sustained Human Background Exposure to Acrolein Evidenced by Monitoring Urinary Exposure Biomarkers
(2019)
Scope
This study investigates a potential correlation between the intake of heat-processed food and the excretion of the acrolein (AC) biomarkers N-acetyl-S-(3-hydroxypropyl)-l-cysteine (HPMA) and N-acetyl-S-(carboxyethyl)-l-cysteine (CEMA) based on two human studies.
Methods and Results
Human exposure to AC is monitored using the AC-related mercapturic acids HPMA and CEMA in the urine of a) non-smoking volunteers under defined living conditions and b) non-smoking volunteers on an unrestricted or vegan diet under free-living conditions. Free-living volunteers partly show markedly enhanced urinary excretions of HPMA and CEMA. The intake of heat-processed food does not influence AC-related biomarker excretion. Incidentally enhanced urinary exposure biomarker levels appear to suggest AC exposure, possibly from open fires, barbecuing, or tobacco smoke. However, the kinetics of urinary biomarkers related to tobacco and other potential smoke exposure do not correlate with those observed for HPMA and CEMA.
Conclusion
This study is the first to convincingly show a sustained and substantial background exposure to AC in non-smoking humans, clearly independent from uptake of heat-processed foods. The data strongly point to endogenous AC generation by pathways of mammalian and/or microbial metabolism as yet not taken into consideration.
Editorial
(2020)
Bacterial cell appendage formation supports cell-cell interaction, cell adhesion and cell movement. Additionally, in bioelectrochemical systems (BES), cell appendages have been shown to participate in extracellular electron transfer. In this work, the cell appendage formation of Clostridium acetobutylicum in biofilms of a BES is imaged and compared with conventional biofilms. Under all observed conditions, the cells possess filamentous appendages, with a higher number and density in the BES. Differences in the amount of extracellular polymeric substance in the biofilms on the electrodes lead to the conclusion that the cathode can be used as electron donor and the anode as electron acceptor by C. acetobutylicum. Using conductive atomic force microscopy, a current response of about 15 nA is found for the cell appendages from the BES. This is the first report of conductivity for clostridial cell appendages and represents the basis for further studies on their role in biofilm formation and electron transfer.
Water availability shapes edaphic and lithic cyanobacterial communities in the Atacama Desert
(2019)
In the Atacama Desert, cyanobacteria grow on various substrates such as soils (edaphic) and quartz or granitoid stones (lithic). Both edaphic and lithic cyanobacterial communities have been described but no comparison between both communities of the same locality has yet been undertaken. In the present study, we compared both cyanobacterial communities along a precipitation gradient ranging from the arid National Park Pan de Azúcar (PA), which resembles a large fog oasis in the Atacama Desert extending to the semiarid Santa Gracia Natural Reserve (SG) further south, as well as along a precipitation gradient within PA. Various microscopic techniques, as well as culturing and partial 16S rRNA sequencing, were applied to identify 21 cyanobacterial species; the diversity was found to decline as precipitation levels decreased. Additionally, under increasing xeric stress, lithic community species composition showed higher divergence from the surrounding edaphic community, resulting in indigenous hypolithic and chasmoendolithic cyanobacterial communities. We conclude that rain and fog water, respectively, cause contrasting trends regarding cyanobacterial species richness in the edaphic and lithic microhabitats.
Several governmental organizations all over the world aim for algorithmic accountability of artificial intelligence systems. However, there are few specific proposals on how exactly to achieve it. This article provides an extensive overview of possible transparency and inspectability mechanisms that contribute to accountability for the technical components of an algorithmic decision-making system. Following the different phases of a generic software development process, we identify and discuss several such mechanisms. For each of them, we give an estimate of the cost with respect to time and money that might be associated with that measure.
Insurance companies and banks regularly have to face stress tests performed by regulatory instances. To model their investment decision problems that include stress scenarios, we propose the worst-case portfolio approach. Thus, the resulting optimal portfolios are already stress test prone by construction. A central issue of the worst-case portfolio approach is that neither the time nor the order of occurrence of the stress scenarios are known. Moreover, there are no probabilistic assumptions regarding the occurrence of the stresses. By defining the relative worst-case loss and introducing the concept of minimum constant portfolio processes, we generalize the traditional concepts of the indifference frontier and the indifference-optimality principle. We prove the existence of a minimum constant portfolio process that is optimal for the multi-stress worst-case problem. As a main result we derive a verification theorem that provides conditions on Lagrange multipliers and nonlinear ordinary differential equations that support the construction of optimal worst-case portfolio strategies. The practical applicability of the verification theorem is demonstrated via numerical solution of various worst-case problems with stresses. In particular, it is shown that an investor who chooses the worst-case optimal portfolio process may have a preference regarding the order of stresses, but there may also be stress scenarios where he/she is indifferent regarding the order and time of occurrence.
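The flavor of the worst-case approach can be illustrated with a constant portfolio fraction and a single crash of bounded size whose timing is unknown. This toy sketch, with made-up market parameters and far simpler than the multi-stress setting of the paper, maximizes the guaranteed log growth:

```python
import math

def worst_case_log_growth(pi, mu=0.08, r=0.02, sigma=0.2, T=10.0, l_max=0.3):
    """Expected log growth of terminal wealth for a constant stock
    fraction pi, taken in the worst case over the scenarios
    'no crash' and 'one crash of maximal relative size l_max'.
    With a constant fraction, the crash timing does not matter."""
    growth = (r + pi * (mu - r) - 0.5 * pi ** 2 * sigma ** 2) * T
    crash = growth + math.log(1.0 - pi * l_max)
    return min(growth, crash)

# grid search for the worst-case optimal constant fraction
best_pi = max((i / 1000 for i in range(1000)), key=worst_case_log_growth)
```

The worst-case optimal fraction comes out well below the crash-free Merton fraction (mu - r) / sigma^2 = 1.5, reflecting the trade-off between growth and crash exposure.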
This thesis focuses on the operation of reliability-constrained routes in wireless ad-hoc networks. A complete communication protocol that is capable of guaranteeing a statistical minimum reliability level would have to support several functionalities: first, routes that are capable of supporting the specified Quality of Service requirement have to be discovered. During operation of discovered routes, the current Quality of Service level has to be monitored continuously. Whenever significant deviations are detected and the required level of Quality of Service is endangered, route maintenance has to ensure continuous operation. All four functionalities, route discovery, route operation, route maintenance and collection and distribution of network status information, will be addressed in this thesis.
In the first part of the thesis, we propose a new approach for Quality-of-Service routing in wireless ad-hoc networks called rmin-routing, with the provision of a statistical minimum route reliability as the main route selection criterion. To achieve specified minimum route reliabilities, we improve the reliability of individual links by well-directed retransmissions, applied during the operation of routes. To select among a set of candidate routes, we define and apply route quality criteria concerning network load.
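The effect of retransmissions on route reliability can be sketched with a simplified model of independent delivery attempts and a uniform retry budget (the link probabilities below are illustrative, not values from the thesis):

```python
def link_reliability(p, retries):
    """Success probability of a link with raw delivery probability p
    and up to `retries` retransmissions (independent attempts)."""
    return 1.0 - (1.0 - p) ** (retries + 1)

def min_retries_per_link(link_probs, target, max_retries=20):
    """Smallest uniform retry budget such that the product of the
    improved link reliabilities reaches the target route reliability."""
    for k in range(max_retries + 1):
        route = 1.0
        for p in link_probs:
            route *= link_reliability(p, k)
        if route >= target:
            return k, route
    return None
```

For a three-hop route with raw link reliabilities 0.9, 0.8 and 0.95, two retransmissions per link already lift the end-to-end reliability above 0.99.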
High-quality information about the network status is essential for the discovery and operation of routes and clusters in wireless ad-hoc networks. This requires permanent observation and assessment of nodes, links, and link metrics, and the exchange of gathered status data. In the second part of the thesis, we present cTEx, a configurable topology explorer for wireless ad-hoc networks that efficiently detects and exchanges high-quality network status information during operation.
In the third part, we propose a decentralized algorithm for the discovery and operation of reliability-constrained routes in wireless ad-hoc networks called dRmin-routing. The algorithm uses locally available network status information about network topology and link properties that is collected proactively in order to discover a preliminary route candidate. This is followed by a distributed, reactive search along this preselected route to remove imprecisions of the locally recorded network status before making a final route selection. During route operation, dRmin-routing monitors routes and performs different kinds of route repair actions to maintain route reliability in order to overcome varying link reliabilities.
Modeling and Simulation of Internet of Things Infrastructures for Cyber-Physical Energy Systems
(2024)
This dissertation presents a novel approach to the model-based development and simulation-based validation of Internet of Things (IoT) infrastructures within the context of Cyber-Physical Energy Systems (CPES). CPES represents an evolution in energy management, seamlessly blending physical and cyber components for efficient, secure, and dependable energy distribution. However, the intricate interplay of these components demands innovative modeling and simulation strategies.
The work begins by establishing a robust foundation, exploring essential background elements such as requirements engineering, model-based systems engineering, digitalization approaches, and the intricacies of IoT platforms. It introduces the novel concept of homomorphic encryption, a critical enabler for securing IoT data within CPES.
In the exploration of the state of the art, the dissertation delves into the multifaceted landscape of IoT simulation, emphasizing the significance of versatility, community support, scalability, and synchronization.
The core contribution emerges in the chapter on simulating IoT networks. It introduces a sophisticated framework that encompasses hardware-in-the-loop, software-in-the-loop, and human-in-the-loop simulation. This innovative framework extends the boundaries of conventional simulation, enabling holistic evaluations of IoT systems.
A practical case study on smart energy usage showcases the application of the framework. Detailed SysML models, including requirements, package diagrams, block definition diagrams, internal block diagrams, state machine diagrams, and activity diagrams, are meticulously examined. The performance evaluation encompasses diverse aspects, from hardware and software validation to human interaction.
In conclusion, this dissertation represents a significant leap forward in the integration of IoT infrastructures within CPES. Its contributions extend from a comprehensive understanding of foundational elements to the practical implementation of a holistic simulation framework. This work not only addresses the current challenges but also outlines a path for future research, shaping the landscape of IoT integration within the dynamic realm of CPES. It offers invaluable insights for researchers, engineers, and stakeholders working towards resilient, secure, and energy-efficient infrastructures.
Within this work, we report the results of nuclear inelastic scattering experiments on the low-spin phase of the iron(II) mononuclear SCO complex Fe[HBpz3]2 and density functional theory based calculations performed on a model molecule of the complex. We show that the calculated partial density of vibrational states, based on the structure of a single iron(II) center which is linked by three pyrazole rings to a borate, is in good accordance with the experimentally obtained 57Fe-pDOS, and we assign the molecular vibrations to the prominent optical phonons.
We present new results on standard basis computations of a 0-dimensional ideal I in a power series ring or in the localization of a polynomial ring over a computable field K. We prove the semicontinuity of the “highest corner” in a family of ideals, parametrized by the spectrum of a Noetherian domain A. This semicontinuity is used to design a new modular algorithm for computing a standard basis of I if K is the quotient field of A. It uses the computation over the residue field of a “good” prime ideal of A to truncate high order terms in the subsequent computation over K. We prove that almost all prime ideals are good, so a random choice is very likely to be good, and whether it is good is detected a posteriori by the algorithm. The algorithm yields a significant speed advantage over the non-modular version and works for arbitrary Noetherian domains. The most important special cases are perhaps A = ℤ and A = k[t], k any field and t a set of parameters. Besides its generality, the method differs substantially from previously known modular algorithms for A = ℤ, since it does not manipulate the coefficients. It is also usually faster and can be combined with other modular methods for computations in local rings. The algorithm is implemented in the computer algebra system SINGULAR and we present several examples illustrating its power.
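For contrast, the classical modular algorithms over A = ℤ from which the paper's truncation-based method explicitly differs recover rational coefficients from their modular images. The standard rational-reconstruction ingredient of those algorithms can be sketched as:

```python
def rational_reconstruct(r, m):
    """Recover a fraction a/b from its residue r modulo m, assuming
    |a|, b <= sqrt(m/2): run the extended Euclidean algorithm on
    (m, r) and stop at the first remainder below the bound."""
    bound = (m // 2) ** 0.5
    r0, t0, r1, t1 = m, 0, r, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 < 0:
        r1, t1 = -r1, -t1
    return r1, t1  # (a, b) with a/b ≡ r (mod m)
```

For example, 1/3 reduces to 34 modulo 101 (since 3 · 34 ≡ 1), and the routine recovers (1, 3) from the residue 34.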
Nanostructured tantalum (Ta)-based dental implants have recently attracted significant attention thanks to their superior biocompatibility and bioactivity as compared to their titanium-based counterparts. While the biological and chemical aspects of Ta implants have been widely studied, their mechanical features have been investigated more rarely. Additionally, the mechanical behavior of these implants and, more importantly, their plastic deformation mechanisms are still not fully understood. Accordingly, in the current research, molecular dynamics simulation as a powerful tool for probing the atomic-scale phenomena is utilized to explore the microstructural evolution of pure polycrystalline Ta samples under tensile loading conditions. Various samples with an average grain size of 2–10 nm are systematically examined using various crystal structure analysis tools to determine the underlying deformation mechanisms. The results reveal that for the samples with an average grain size larger than 8 nm, twinning and dislocation slip are the main sources of any plasticity induced within the sample. For finer-grained samples, the activity of grain boundaries—including grain elongation, rotation, migration, and sliding—are the most important mechanisms governing the plastic deformation. Finally, the temperature-dependent Hall–Petch breakdown is thoroughly examined for the nanocrystalline samples via identification of the grain boundary dynamics.
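The grain-size dependence examined here follows the Hall-Petch scaling σ_y = σ0 + k·d^(-1/2), which breaks down below a critical grain size where grain-boundary activity dominates. A sketch with placeholder constants (not fitted Ta values):

```python
import math

def yield_strength(d_nm, sigma0=180.0, k=3500.0, d_crit=8.0):
    """Hall-Petch strengthening above d_crit; simple linear softening
    below it to mimic the inverse (breakdown) regime.
    Units: MPa for strength, nm for grain size.
    All constants are illustrative placeholders, not fitted Ta values."""
    if d_nm >= d_crit:
        return sigma0 + k / math.sqrt(d_nm)
    peak = sigma0 + k / math.sqrt(d_crit)
    return peak * d_nm / d_crit
```

Above the breakdown size, finer grains are stronger; below it the trend inverts, consistent with the grain-boundary-mediated mechanisms reported for the 2-8 nm samples.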
In many applications, visual analytics (VA) has developed into a standard tool to ease data access and knowledge generation. VA describes a holistic cycle transforming data into hypotheses and visualizations to generate insights that enhance the data. Unfortunately, many data sources used in the VA process are affected by uncertainty. In addition, the VA cycle itself can introduce uncertainty into the knowledge generation process but does not provide a mechanism to handle these sources of uncertainty. In this manuscript, we aim to provide an extended VA cycle that is capable of handling uncertainty by quantification, propagation, and visualization, defined as uncertainty-aware visual analytics (UAVA). Here, a recap of uncertainty definition and description is used as a starting point to insert novel components into the visual analytics cycle. These components assist in capturing uncertainty throughout the VA cycle. Further, different data types, hypothesis generation approaches, and uncertainty-aware visualization approaches are discussed that fit into the defined UAVA cycle. In addition, application scenarios that can be handled by such a cycle, examples, and a list of open challenges in the area of UAVA are provided.
In this paper we consider the stochastic primitive equations for geophysical flows subject to transport noise and turbulent pressure. Admitting very rough noise terms, the global existence and uniqueness of solutions to this stochastic partial differential equation are proven using stochastic maximal regularity, the theory of critical spaces for stochastic evolution equations, and global a priori bounds. Compared to other results in this direction, we do not need any smallness assumption on the transport noise which acts directly on the velocity field, and we also allow rougher noise terms. The adaptation to Stratonovich-type noise and, more generally, to variable viscosity and/or conductivity is discussed as well.
Municipal wastewater is an interesting source of phosphorus and several processes for the recovery of phosphorus from this source have been described. These processes yield magnesium ammonium phosphate (MAP), a valuable fertilizer. In these processes, pH shifts and the addition of chemicals are used to influence the species distribution in the solution such as to finally obtain the desired product and to prevent the co-precipitation of salts of heavy metal ions. Elucidating these species distributions experimentally is a challenging and cumbersome task. Therefore, in the present work, a thermodynamic model was developed that can be used for predicting the species distributions in the various steps of the recovery process. The model combines the extended Debye-Hückel equation for the prediction of activity coefficients with dissociation constants and solubility product data from the literature and contains no parameters that need to be adjusted to process data. The model was successfully tested by comparison to experimental data for the Stuttgart process from the literature and used for analyzing the different process steps. Furthermore, it was demonstrated how the model can be used for optimizing the process.
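The activity-coefficient part of such a model follows the extended Debye-Hückel equation; a minimal sketch, lumping the ion-size term into a single illustrative parameter and using made-up molalities:

```python
import math

A_DH = 0.509  # Debye-Hueckel constant for water at 25 C, (kg/mol)^0.5

def ionic_strength(molalities_charges):
    """I = 0.5 * sum(m_i * z_i^2) over all ionic species."""
    return 0.5 * sum(m * z * z for m, z in molalities_charges)

def activity_coefficient(z, ionic_str, B_a=1.0):
    """Extended Debye-Hueckel:
    log10(gamma) = -A * z^2 * sqrt(I) / (1 + B*a*sqrt(I)).
    The ion-size product B*a is lumped into one illustrative parameter."""
    log10_gamma = -A_DH * z * z * math.sqrt(ionic_str) / (1.0 + B_a * math.sqrt(ionic_str))
    return 10.0 ** log10_gamma

# species relevant to MAP precipitation: Mg2+, NH4+, PO4^3- (made-up molalities)
ions = [(0.01, 2), (0.01, 1), (0.01, -3)]
I = ionic_strength(ions)  # 0.5 * (0.01*4 + 0.01*1 + 0.01*9) = 0.07 mol/kg
```

Higher charge numbers depress the activity coefficient much more strongly, which is why speciation of the triply charged phosphate ion is sensitive to ionic strength.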
Aflatoxins, a group of mycotoxins produced by various mold species within the genus Aspergillus, have been extensively investigated for their potential to contaminate food and feed, rendering them unfit for consumption. Nevertheless, the role of aflatoxins as environmental contaminants in soil, which represents their natural habitat, remains a relatively unexplored area in aflatoxin research. This knowledge gap can be attributed, in part, to the methodological challenges associated with detecting aflatoxins in soil. The main objective of this PhD project was to develop and validate an analytical method that allows monitoring of aflatoxins in soil, and scrutinize the mechanisms and extent of occurrence of aflatoxins in soil, the processes governing their dissipation, and their impact on the soil microbiome and associated soil functions. By utilizing an efficient extraction solvent mixture comprising acetonitrile and water, coupled with an ultrasonication step, recoveries of 78% to 92% were achieved, enabling reliable determination of trace levels in soil ranging from 0.5 to 20 µg kg-1. However, in a field trial conducted in a high-risk model region for aflatoxin contamination in Sub-Saharan Africa, no aflatoxins were detected using this procedure, underscoring the complexities of field monitoring. These challenges encompassed rapid degradation, spatial heterogeneity, and seasonal fluctuations in aflatoxin occurrence. Degradation experiments revealed the importance of microbial and photochemical processes in the dissipation of aflatoxins in soil with half-lives of 20 - 65 days. The rate of dissipation was found to be influenced by soil properties, most notably soil texture and the initial concentration of aflatoxins in the soil. An exposure study provided evidence that aflatoxins do not pose a substantial threat to the soil microbiome, encompassing microbial biomass, activity, and catabolic functionality. 
This was particularly evident in clayey soils, where the toxicity of aflatoxins diminished significantly due to their strong binding to clay minerals. However, several critical questions remain unanswered, emphasizing the necessity for further research to attain a more comprehensive understanding of the ecological importance of aflatoxins. Future research should prioritize the challenges associated with field monitoring of aflatoxins, elucidate the mechanisms responsible for the dissipation of aflatoxins in soil during microbial and photochemical degradation, and investigate the ecological consequences of aflatoxins in regions heavily affected by aflatoxins, taking into account the interactions between aflatoxins and environmental and anthropogenic stressors. Addressing these questions contributes to a comprehensive understanding of the environmental impact of aflatoxins in soil, ultimately contributing to more effective strategies for aflatoxin management in agriculture.
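The reported half-lives translate into fractions remaining via first-order kinetics, C(t)/C0 = exp(-ln 2 · t / t½); a quick sketch over a 90-day season using the 20-65 day range from the degradation experiments:

```python
import math

def fraction_remaining(t_days, half_life_days):
    """First-order dissipation: C(t)/C0 = exp(-ln(2) * t / t_half)."""
    return math.exp(-math.log(2.0) * t_days / half_life_days)

fast = fraction_remaining(90, 20)  # roughly 4% left after 90 days
slow = fraction_remaining(90, 65)  # roughly 38% left after 90 days
```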
In recent decades, academia has addressed a wide range of research topics in the field of ethical decision-making. Besides a great amount of research on ethical consumption, the domain of ethical investments is also increasingly moving into the focus of scholars. While most research in this area focuses on whether socially or environmentally sustainable businesses outperform traditional investments financially, or investigates the character traits as well as other socio-demographic factors of ethical investors, the impact of sustainable corporate conduct on the investment intentions of private investors still requires further research. Hence, we conducted two studies to shed more light on this highly relevant topic. After discussing the current state of research, in our first empirical study we explore whether, besides the traditional triad of risk, return, and liquidity, sustainability also exerts a significant impact on the willingness to invest. As hypothesized, we find that sustainability has a clear and decisive impact in addition to the traditional factors. In a consecutive study, we investigate the sustainability-willingness-to-invest link more deeply. Here, our results show that improved sustainability might not pay off in terms of investment attractiveness; conversely, however, conducting business in a non-sustainable manner certainly harms it, an effect that cannot be compensated even by an increased return.
As a consequence of the real estate market crash after 2008, large investors invested a significant amount of wealth in single-family houses to construct portfolios of rental dwellings whose income is securitized on capital markets. In some local housing markets, these investors own remarkable numbers of single-family houses. Furthermore, their trading activities have established a new investment strategy, which exacerbates property wealth concentration and polarization. This new investment strategy and its portfolio optimization raise the question of its influence on housing markets. This paper first aims to find an optimal portfolio strategy by maximizing the expected utility of terminal wealth under a stochastic model that includes a variety of economic states to estimate house prices. Second, it aims to analyze the effect of large investors on the housing market. The results show that the investment strategies of large investors depend on the balance among the economic state, maintenance costs, rental income, the interest rate, and the investors' willingness to invest in housing, and that their effect depends on the state of the economy.
Dataflow process networks (DPNs) are intrinsically data-driven, i.e., node actions are not synchronized with each other and may fire whenever sufficient input operands have arrived at a node. While the general model of computation (MoC) of DPNs does not impose further restrictions, many different subclasses of DPNs representing different dataflow MoCs have been considered over time. These classes mainly differ in the kinds of behaviors of the processes. A DPN may be heterogeneous in that different processes in the network belong to different classes of DPNs. A heterogeneous DPN can therefore be effectively used to model and to implement different components of a system with different kinds of processes and, therefore, different dataflow MoCs. This paper presents a model-based design flow based on different dataflow MoCs, including their heterogeneous combinations. In particular, it covers the automatic software synthesis of systems from DPN models. The main objective is to validate, evaluate and compare the artifacts exhibited by different dataflow MoCs at the implementation level of systems under the supervision of a common design tool. Moreover, this work also offers an efficient synthesis method that targets and exploits heterogeneity in DPNs by generating implementations based on the kinds of behaviors of the processes. The proposed synthesis method provides a tool chain including different specialized code generators for specific dataflow MoCs, and a runtime system that finally maps models using a combination of different dataflow MoCs onto cross-vendor target hardware.
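The data-driven firing rule at the heart of DPNs can be illustrated with a minimal sketch (hypothetical names; this is not the paper's tool chain): a node fires as soon as enough tokens have arrived on its input FIFOs, and nodes with static consumption/production rates correspond to the SDF subclass.

```python
from collections import deque

# Minimal data-driven DPN sketch (illustrative only).
class Node:
    def __init__(self, inputs, consume, action, output):
        self.inputs, self.consume = inputs, consume   # input FIFOs and rates
        self.action, self.output = action, output     # firing function, output FIFO
    def try_fire(self):
        # fire only when every input FIFO holds enough tokens
        if all(len(q) >= n for q, n in zip(self.inputs, self.consume)):
            args = [[q.popleft() for _ in range(n)]
                    for q, n in zip(self.inputs, self.consume)]
            self.output.extend(self.action(*args))
            return True
        return False

src_q, mid_q, out_q = deque([1, 2, 3, 4]), deque(), deque()
# SDF-style node with static rate: consumes 2 tokens, produces their sum
adder = Node([src_q], [2], lambda xs: [sum(xs)], mid_q)
# second node: consumes 1 token, doubles it
doubler = Node([mid_q], [1], lambda xs: [2 * xs[0]], out_q)

# untimed, data-driven scheduling: keep firing until no node can fire
while adder.try_fire() or doubler.try_fire():
    pass
print(list(out_q))   # [6, 14]
```

Replacing the greedy loop with per-MoC schedulers (static for SDF-like processes, dynamic for general DPN processes) is exactly the point where a synthesis tool can exploit the class of each process.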
Quantum Annealing (QA) is a metaheuristic for solving optimization problems in a time-efficient manner. To this end, quantum mechanical effects are used to compute and evaluate many possible solutions of an optimization problem simultaneously. Recent studies have shown the potential of QA for solving such complex assignment problems within milliseconds. This also applies to the field of job shop scheduling, where the existing approaches, however, focus on small problem sizes. To assess the full potential of QA in this area for industry-scale problem formulations, it is necessary to consider larger problem instances and to evaluate the potential of computing these job shop scheduling problems while finding a near-optimal solution in a time-efficient manner. Consequently, this paper presents a QA-based job shop scheduling approach. In particular, flexible job shop scheduling problems of various sizes are computed with QA, demonstrating the efficiency of the approach regarding scalability, solution quality, and computing time. For the evaluation of the proposed approach, the solutions are compared in a scientific benchmark with state-of-the-art algorithms for solving flexible job shop scheduling problems. The results indicate that QA has the potential for solving flexible job shop scheduling problems in a time-efficient manner. Even large problem instances can be computed within seconds, which opens up the possibility of industrial application.
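QA hardware typically expects the problem as a QUBO (quadratic unconstrained binary optimization), with scheduling constraints encoded as penalty terms. The following deliberately tiny, hypothetical instance sketches that encoding; exhaustive search stands in for the annealer, and none of the numbers come from the paper's benchmark.

```python
import itertools
import numpy as np

# Toy instance: two jobs, two time slots, one machine.
# Binary variable x[j, t] = 1 means job j runs in slot t.
n_jobs, n_slots = 2, 2
P = 10.0   # penalty weight; must dominate the objective terms

def energy(bits):
    x = np.array(bits).reshape(n_jobs, n_slots)
    e = float((x * np.arange(n_slots)).sum())   # objective: prefer early slots
    for j in range(n_jobs):                     # each job scheduled exactly once
        e += P * (x[j].sum() - 1) ** 2
    for t in range(n_slots):                    # at most one job per slot
        s = x[:, t].sum()
        e += P * s * (s - 1)
    return e

# Exhaustive enumeration replaces the quantum annealer at this toy size.
best = min(itertools.product([0, 1], repeat=n_jobs * n_slots), key=energy)
schedule = np.array(best).reshape(n_jobs, n_slots)
```

The minimum-energy bitstring assigns each job to its own slot; on real QA hardware, the same QUBO matrix would be embedded onto the qubit graph instead of enumerated.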
Continuous-time regime-switching models are a very popular class of models for financial applications. In this work the so-called signal-to-noise matrix is introduced for hidden Markov models where the switching is driven by an unobservable Markov chain. Its relations to filtering, i.e. state estimation of the chain given the available observations, and portfolio optimization are investigated. A convergence result for the filter is derived: The filter converges to its invariant distribution if the eigenvalues of the signal-to-noise matrix converge to zero. This matrix is then also used to prove a mutual fund representation for regime-switching models and a corresponding market reduction which is consistent with filtering and portfolio optimization. Two canonical cases for the reduction are analyzed in more detail, the first based on the market regimes and the second depending on the eigenvalues. These considerations are presented both for observable and unobservable Markov chains. The results are illustrated by numerical simulations.
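The convergence statement can be mimicked in a discrete-time sketch (illustrative parameters, not from the paper): when the state-dependent drifts coincide, i.e. the signal-to-noise content of the observations vanishes, the Bayes filter carries no information about the chain and relaxes to the chain's invariant distribution.

```python
import numpy as np

# Bayes filter for a two-state hidden Markov chain with Gaussian observations.
rng = np.random.default_rng(0)
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])        # transition matrix of the hidden chain
mu = np.array([0.0, 0.0])           # identical drifts -> zero signal-to-noise
sigma = 1.0

pi = np.array([1.0, 0.0])           # filter started far from stationarity
state = 0
for _ in range(500):
    state = rng.choice(2, p=P[state])
    y = rng.normal(mu[state], sigma)
    lik = np.exp(-0.5 * ((y - mu) / sigma) ** 2)   # observation likelihoods
    pi = (P.T @ pi) * lik                          # predict, then update
    pi /= pi.sum()

# invariant distribution of P: left eigenvector for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi_inf = np.real(v[:, np.argmax(np.real(w))])
pi_inf /= pi_inf.sum()
```

Since the likelihoods are identical across states here, the update step cancels out and the filter performs pure prediction, which converges geometrically to the invariant distribution, mirroring the vanishing-eigenvalue regime of the signal-to-noise matrix.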
In this paper we investigate a utility maximization problem with drift uncertainty in a multivariate continuous-time Black–Scholes type financial market which may be incomplete. We impose a constraint on the admissible strategies that prevents a pure bond investment and we include uncertainty by means of ellipsoidal uncertainty sets for the drift. Our main results consist firstly in finding an explicit representation of the optimal strategy and the worst-case parameter, secondly in proving a minimax theorem that connects our robust utility maximization problem with the corresponding dual problem. Thirdly, we show that, as the degree of model uncertainty increases, the optimal strategy converges to a generalized uniform diversification strategy.
The dynamic behaviour of unsaturated sand–rubber chip mixtures at various gravimetric contents is evaluated through an experimental study comprising resonant column tests in a fixed-free device. The chips were irregularly shaped, with dimensions ranging from 5 to 14 mm. Three types of sand with different gradations have been considered. The relative density amounted to 0.5 for all specimens. Due to the large size of the chips, the diameter of the specimens had to be 100 mm, which in turn required a re-calibration of the device assuming a frequency-dependent drive head inertia. The effects of confining stress, rubber chip content, and sand gradation on shear modulus and damping ratio are determined over wide ranges of shear strain. At small strains, as known for sands, increasing the confining stress stiffens the mixtures. Increasing the rubber chip content significantly reduces the shear modulus and increases the damping ratio. At higher strains, increasing the confining stress or the rubber content flattens the reduction of the shear modulus with strain. Damping at high strains does not show any appreciable dependence on rubber content. Unloading–reloading sequences are used to assess shear modulus degradation and threshold strains. Finally, design equations are derived from the test results to predict the dynamic response of the composite material.
Many practical optimisation problems have conflicting objectives, which should be addressed by multi-criteria optimisation (MCO), i.e. by determining the set of best compromises, the Pareto set (PS), along with its picture in parameter space (PSPS). In previous work on low-dimensional MCO problems, we have found characteristic topological features of the PS and PSPS, which depend on the dimensionality of the parameter space M and the objective space N. E.g., M = 2 and N = 3 yields triangles with needle-like extensions. The reasons for these topological features were unknown so far. Here, we show that they are to be expected if all objective functions of the MCO satisfy two conditions: (a) they can be approximated by quadratic functions and (b) one of the eigenvalues of the Hessian matrix evaluated at the function’s minimum is small compared to the other eigenvalues. Objective functions which meet conditions (a) and (b) have a valley-like topology, for which the valley lies in the direction of the eigenvector corresponding to the lowest eigenvalue. The PSPS can be estimated by starting at the minimum of an objective function, following the valley, and combining these lines for all objective functions. The PS is obtained by evaluating the objective functions. We believe that the conditions (a) and (b) are met in many practical problems and discuss an example from molecular modelling. The improved understanding of the features of these MCO problems opens the route for designing methods for swiftly finding estimates of their PS and PSPS.
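For convex quadratic objectives, the weighted-sum scan yields the PSPS in closed form, which makes the valley-following picture easy to reproduce in a small sketch. The two objectives below are hypothetical examples constructed to satisfy conditions (a) and (b); they are not taken from the molecular-modelling application.

```python
import numpy as np

# Two quadratic objectives f_i(x) = 0.5 (x - m_i)^T H_i (x - m_i), each with
# one Hessian eigenvalue much smaller than the other (valley-like topology).
m1, m2 = np.array([0.0, 0.0]), np.array([3.0, 1.0])
H1 = np.diag([1.0, 0.01])   # valley of f1 along the second coordinate
H2 = np.diag([0.01, 1.0])   # valley of f2 along the first coordinate

def pareto_point(w):
    """Minimizer of w*f1 + (1-w)*f2; for convex quadratics the weighted-sum
    scan traces the complete Pareto set in parameter space (PSPS)."""
    A = w * H1 + (1 - w) * H2
    b = w * (H1 @ m1) + (1 - w) * (H2 @ m2)
    return np.linalg.solve(A, b)

psps = np.array([pareto_point(w) for w in np.linspace(0.0, 1.0, 21)])
# The curve starts at the minimum of f2, ends at the minimum of f1, and
# bends along the two valleys instead of following the straight connection.
```

Evaluating both objectives along `psps` gives the corresponding Pareto set in objective space; the pronounced detour of the curve through the two valleys is the closed-form counterpart of the valley-following estimate described above.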
This contribution defends two claims. The first is about why thought experiments are so relevant and powerful in mathematics. Heuristics and proof are not strictly separable and, therefore, the relevance of thought experiments is not confined to heuristics. The main argument is based on a semiotic analysis of how mathematics works with signs. Seen in this way, formal symbols do not eliminate thought experiments (replacing them by something rigorous), but rather provide a new stage for them. The formal world resembles the empirical world in that it calls for exploration and offers surprises. This presents a major reason why thought experiments occur both in the empirical sciences and in mathematics. The second claim is about a looming aporia that signals the limitation of thought experiments. This aporia arises when mathematical arguments cease to be fully accessible, thus violating a precondition for experimenting in thought. The contribution focuses on the work of Vladimir Voevodsky (1966–2017, Fields medalist in 2002), who argued that even very pure branches of mathematics cannot avoid inaccessibility of proof. Furthermore, he suggested that computer verification is a feasible path forward, but only if proof is not modeled in terms of formal logic.
Algorithmic systems are increasingly used by state agencies to inform decisions about humans. They produce scores on risks of recidivism in criminal justice, indicate the probability for a job seeker to find a job in the labor market, or calculate whether an applicant should get access to a certain university program. In this contribution, we take an interdisciplinary perspective, provide a bird's eye view of the different key decisions that are to be taken when state actors decide to use an algorithmic system, and illustrate these decisions with empirical examples from case studies. Building on these insights, we discuss the main pitfalls and promises of the use of algorithmic systems by the state, focusing on four levels: the most basic question of whether an algorithmic system should be used at all, the regulation and governance of the system, issues of algorithm design, and, finally, questions related to the implementation of the system on the ground and the human–machine interaction that comes with it. Based on our assessment of the advantages and challenges that arise at each of these levels, we propose a set of crucial questions to be asked when such intricate matters are addressed.
This study investigated the universality of emotional prosody in the perception of discrete emotions when semantics is not available. In two experiments, the perception of emotional prosody in Hebrew and German was investigated with listeners who speak one of the languages but not the other. Having a parallel tool in both languages allowed us to conduct controlled comparisons. In Experiment 1, 39 native German speakers with no knowledge of Hebrew and 80 native speakers of Hebrew rated Hebrew sentences spoken with four different emotional prosodies (anger, fear, happiness, sadness) or neutral. The Hebrew version of the Test for Rating of Emotions in Speech (T-RES) was used for this purpose. Ratings indicated participants' agreement on how much the sentence conveyed each of four discrete emotions (anger, fear, happiness and sadness). In Experiment 2, 30 native speakers of German and 24 Israeli native speakers of Hebrew who had no knowledge of German rated sentences of the German version of the T-RES. Based only on the prosody, German-speaking participants were able to accurately identify the emotions in the Hebrew sentences, and Hebrew-speaking participants were able to identify the emotions in the German sentences. In both experiments, ratings between the groups were similar. These findings show that individuals are able to identify emotions in a foreign language even if they do not have access to semantics. This ability goes beyond identification of the target emotion; similarities between languages exist even for "wrong" perceptions. This adds to accumulating evidence in the literature on the universality of emotional prosody.
Performance of pure OME and various HVO–OME fuel blends as alternative fuels for a diesel engine
(2022)
Since the potential for reducing CO2 emissions from fossil fuels is limited, suitable CO2-neutral fuels are required for applications which cannot reasonably be electrified and will therefore still rely on internal combustion engines in the future. Potential fuel candidates for CI engines are either paraffinic diesel fuels or new fuels like POMDME (polyoxymethylene dimethyl ether, short "OME"). In addition, blends of these two fuel types might be of interest. While many studies have been conducted on OME blends with fossil diesel fuel, the research on HVO–OME blends has been less extensive to date.
In the current work, pure OME and HVO–OME blends are investigated in a single-cylinder research engine. The test results of the various fuel blend formulations are compared and evaluated, particularly with regard to soot-NOx trade-off behavior. The primary objective of the study is to examine whether the major potential of blending these two fuels is already largely exploited at low OME content, or if significant additional emission reduction potential can still be found with higher content blends, but still without the need to switch to pure OME operation. Furthermore, the fuel blend which is best suited for the realization of an ultra-low emission concept under the current technical conditions should be identified. In addition, three different injector designs were tested for operation on pure OME3-5, differing both in hydraulic flow and in the number of injection holes as well as their layout. The optimum configuration is evaluated with regard to emissions, normalized heat release and indicated efficiency.
Due to an excellent ratio of high strength to low density, as well as strong corrosion resistance, the titanium alloy Ti-6Al-4V is widely used in industrial applications. However, Ti-6Al-4V is also a difficult-to-cut material because of its low thermal conductivity and high chemical reactivity, especially at elevated temperatures. As a result, machining Ti-6Al-4V is characterized by high thermal loads and rapidly progressing thermo-chemically induced tool wear. An adequate cooling strategy is essential to reduce the thermal load and therefore the tool wear. Sub-zero metalworking fluids (MWF), which are applied in the liquid state but at supply temperatures below the ambient temperature, offer great potential to significantly reduce the thermal load when machining Ti-6Al-4V. Within the presented research, systematically varied sub-zero cooling strategies are applied when milling Ti-6Al-4V. The influences of the supply temperature, as well as the volume flow and the outlet velocity, are investigated with the aim of reducing the thermal loads that occur during milling. The milling experiments were recorded using high-speed cameras in order to characterize the impact of the cooling strategies and resolve the behavior of the MWF. Additionally, the novel sub-zero cooling approach is compared to a cryogenic CO2 cooling strategy. The results show that the optimized sub-zero cooling strategy leads to a substantial reduction of the thermal loads and outperforms the cryogenic cooling even at elevated CO2 mass flows.
We consider the optimization problem of a large insurance company that wants to maximize the expected utility of its surplus through the optimal control of the proportional reinsurance. In addition, the insurer is exposed to the risk of default of its reinsurer at the worst possible time, a setting that is closely related to a scenario of the Swiss Solvency Test.
There have been interesting interactions between philosophical reflections, technical developments, and the work of artists, poets and designers, starting especially in the 1950s and 1960s with a stimulating cell in Stuttgart and Ulm in Germany that spread into mutual international interactions. The paper aims to describe the philosophical background of Max Bense, with his research on the intellectual history of mathematics and the emerging studies on technology and cybernetics. Together with communication theories and semiotics, new aesthetics such as cybernetic aesthetics were worked out, based on the notions of information and sign. This background stimulated international students, artists and researchers from different creative disciplines to pursue methodical approaches leading to the first computer art experiments. The interrelations of these fields with Latin America are the focus of these studies. Students, artists, and poets from Latin America, especially Brazil, came to Germany for studies and exhibitions in the creative scientific cell around Max Bense. Some of them stayed in Europe, but the exchange also developed in the opposite direction, with travel to and work in Latin America. Some of these fruitful international interrelations will be described and reflected upon.
In the quest for the climate-neutral and ultra-low emission vehicle powertrains of the future, synthetic fuels produced from renewable sources will play a major role. Polyoxymethylene dimethyl ethers (POMDME or "OME") produced from renewable hydrogen are a very promising candidate for zero-impact emissions in future CI engines. To optimize the utilization of these fuels in terms of efficiency, performance and emissions, it is not only necessary to adapt the combustion parameters, but especially to optimize the injection and mixture formation process. In the present work, the spray break-up behavior and mixture formation of OME fuel are investigated numerically in 3D CFD and validated against experimental data from optical measurements in a high-pressure/high-temperature chamber using Schlieren and Mie scattering. For comparison, the same operating points were measured in the optical chamber using conventional diesel fuel, and the CFD modeling was optimized based on these data. To model the spray break-up phenomena reliably, the primary break-up model according to Fischer is used, taking into account the nozzle-internal flow in a detailed calculation of the disperse droplet phase. As OME has not yet been investigated very intensively with respect to its physico-chemical properties, chemical analyses of the substance properties were carried out to capture the most important parameters correctly in the simulation. With this approach, the results of the optical spray measurements could be reproduced well by the numerical model for the cases studied here, laying the basis for further numerical studies of OME sprays, including real engine operation.
Synthetic Biology is revolutionizing biological research by introducing principles of mechanical engineering, including the standardization of genetic parts and standardized part assembly routes. Both are realized in the Modular Cloning (MoClo) strategy. MoClo allows for the rapid and robust assembly of individual genes and multigene clusters, enabling iterative cycles of gene design, construction, testing, and learning in a short time. This is particularly true if generation times of target organisms are short, as is the case for the unicellular green alga Chlamydomonas reinhardtii. Testing a gene of interest in Chlamydomonas with MoClo requires two assembly steps, one for the gene of interest itself and another to combine it with a selection marker. To reduce this to a single assembly step, we constructed five new destination vectors. They contain genes conferring resistance to commonly used antibiotics in Chlamydomonas and a site for the direct assembly of basic genetic parts. The vectors employ red/white color selection and, therefore, do not require costly compounds like X-gal and IPTG. mCherry expression is used to demonstrate the functionality of these vectors.
Many real-world optimization and decision-making problems comprise several, partly conflicting objective functions. The English saying "Quality has its price" is just as true on a large scale as it is in the private sphere and, therefore, quality and price are a typical pair of conflicting objective functions that are very common in applications. Yet, in industrial applications, both quality and cost may be understood in the specific context and differ depending on whether a transportation, a production, or a planning problem is considered. Other objective functions that are receiving increasing attention in real-world decision-making situations are, for example, robustness, time, sustainability, adaptability, or longevity.
The electrochemical process of microbial electrosynthesis (MES) is used to drive the metabolism of electroactive microorganisms for the production of valuable chemicals and fuels. MES combines the advantages of electrochemistry, engineering, and microbiology and offers alternative production processes based on renewable raw materials and regenerative energies. In addition to the reactor concept and electrode design, the biocatalysts used have a significant influence on the performance of MES. Both pure and mixed cultures can be used as biocatalysts. When mixed cultures are used, interactions between organisms, such as direct interspecies electron transfer (DIET) or syntrophic interactions, influence the performance in terms of productivity and the product range of MES. This review focuses on the comparison of pure and mixed cultures in microbial electrosynthesis. The performance indicators, such as productivities and coulombic efficiencies (CEs), for both procedural methods are discussed. Typical products in MES are methane and acetate; therefore, these processes are the focus of this review. In general, most studies have used mixed cultures as biocatalysts, as mixed cultures have shown superior performance for both products. When comparing pure and mixed cultures in equivalent experimental setups, a 3-fold higher methane and a nearly 2-fold higher acetate production rate can be achieved with mixed cultures. However, studies of pure-culture MES for methane production have shown some improvement through reactor optimization and operational mode, reaching performance indicators similar to those of mixed-culture MES. Overall, the review gives an overview of the advantages and disadvantages of using pure or mixed cultures in MES.
In this paper, we devise a stochastic asset–liability management (ALM) model for a life insurance company and analyze its influence on the balance sheet within a low-interest rate environment. In particular, a flexible procedure for the generation of insurers’ compressed contract portfolios that respects the given biometric structure is presented, extending the existing literature on stochastic ALM modeling. The introduced balance sheet model is in line with the principles of double-entry bookkeeping as required in accounting. We further focus on the incorporation of new business, i.e. the addition of newly concluded contracts and thus of insured in each period. Efficient simulations are obtained by integrating new policies into existing cohorts according to contract-related criteria. We provide new results on the consistency of the balance sheet equations. In extensive simulation studies for different scenarios regarding the business form of today’s life insurers, we utilize these to analyze the long-term behavior and the stability of the components of the balance sheet for different asset–liability approaches. Finally, we investigate the robustness of two prominent investment strategies against crashes in the capital markets, which lead to extreme liquidity shocks and thus threaten the insurer’s financial health.
Lattice Boltzmann method for antiplane shear deformation: non-lattice-conforming boundary conditions
(2022)
In this work, two different approaches to treat boundary conditions in a lattice Boltzmann method (LBM) for the wave equation are presented. We interpret the wave equation as the governing equation of the displacement field of a solid under simplified deformation assumptions, but the algorithms are not limited to this interpretation. A feature of both algorithms is that the boundary does not need to conform with the discretization, i.e., the regular lattice. This allows for a larger flexibility regarding the geometries that can be handled by the LBM. The first algorithm aims at determining the missing distribution functions at boundary lattice points in such a way that a desired macroscopic boundary condition is fulfilled. The second algorithm is only available for Neumann-type boundary conditions and considers a balance of momentum for control volumes on the mesoscopic scale, i.e., at the scale of the lattice spacing. Numerical examples demonstrate that the new algorithms indeed improve the accuracy of the LBM compared to previous results and that they are able to model boundary conditions for complex geometries that do not conform with the lattice.
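The idea behind the first algorithm, determining the missing incoming distributions at a boundary node so that a prescribed macroscopic value is met, can be illustrated in a much simpler setting than the wave equation: a D1Q2 lattice Boltzmann scheme for 1D diffusion with Dirichlet walls closed by an anti-bounce-back rule. All parameters are illustrative choices, not taken from the paper.

```python
import numpy as np

# Minimal D1Q2 lattice Boltzmann solver for the 1D diffusion equation.
N, tau = 20, 0.8              # lattice points, BGK relaxation time
u_left, u_right = 1.0, 0.0    # prescribed (Dirichlet) boundary values
f = np.full((2, N), 0.25)     # f[0]: right-moving, f[1]: left-moving populations

for _ in range(5000):
    u = f.sum(axis=0)                     # macroscopic field (zeroth moment)
    feq = np.stack([u / 2, u / 2])        # equilibrium distributions
    f += (feq - f) / tau                  # BGK collision
    f[0, 1:] = f[0, :-1].copy()           # streaming to the right
    f[1, :-1] = f[1, 1:].copy()           # streaming to the left
    # anti-bounce-back closes the missing population at each wall so that
    # the macroscopic value at the boundary node equals the prescribed one
    f[0, 0] = -f[1, 0] + u_left
    f[1, -1] = -f[0, -1] + u_right

u = f.sum(axis=0)   # steady state: exact linear profile between the walls
```

The same closure principle, choosing the unknown distributions from the desired macroscopic boundary condition, is what the paper generalizes to boundaries that do not conform with the lattice.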
Irrelevant speech impairs serial recall of verbal but not spatial items in children and adults
(2022)
Immediate serial recall of visually presented items is reliably impaired by task-irrelevant speech that the participants are instructed to ignore (“irrelevant speech effect,” ISE). The ISE is stronger with changing speech tokens (words or syllables) when compared to repetitions of single tokens (“changing-state effect,” CSE). These phenomena have been attributed to sound-induced diversions of attention away from the focal task (attention capture account), or to specific interference of obligatory, involuntary sound processing with either the integrity of phonological traces in a phonological short-term store (phonological loop account), or the efficiency of a domain-general rehearsal process employed for serial order retention (changing-state account). Aiming to further explore the role of attention, phonological coding, and serial order retention in the ISE, we analyzed the effects of steady-state and changing-state speech on serial order reconstruction of visually presented verbal and spatial items in children (n = 81) and adults (n = 80). In the verbal task, both age groups performed worse with changing-state speech (sequences of different syllables) when compared with steady-state speech (one syllable repeated) and silence. Children were more impaired than adults by both speech sounds. In the spatial task, no disruptive effect of irrelevant speech was found in either group. These results indicate that irrelevant speech evokes similarity-based interference, and thus pose difficulties for the attention-capture and the changing-state account of the ISE.
Cold plasma is a partially ionized state of matter that unites high reactivity and mild conditions. Therefore, cold plasma reactors are intriguing for reaction engineering. In this work, a laboratory scale dielectric barrier discharge (DBD) cold plasma reactor was designed, set up, and used for studying the influence of the specific energy input (SEI) on the product spectrum of the partial oxidation of methane. In total, 23 experiments were carried out near ambient conditions with a molar reactant ratio of methane to oxygen of 2:1 at SEI between 0.3 and 6.0 J cm−3. The feed also contained argon at a mole fraction of 0.75 mol mol−1. The product stream was split into a fraction that was condensed in a cold trap and the remaining gaseous fraction. The latter was analyzed at-line in a gas chromatograph equipped with a dual column and two carrier gases. The condensed fraction was analyzed by qualitative and quantitative 1H and 13C NMR spectroscopy, Karl Fischer titration, and sodium sulfite titration. In the product stream, 16 components were identified and quantified: acetic acid, acetone, carbon dioxide, carbon monoxide, ethanol, ethane, ethene, ethylene glycol, formaldehyde, formic acid, hydrogen, methanol, methyl acetate, methyl hydroperoxide, methyl formate, and water. A univariant influence of the SEI on the conversions of methane and oxygen and the selectivities to the products was observed. The experimental results provided here are an asset for developing reaction kinetic models of the partial oxidation of methane in DBD plasma reactors.
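As a quick orientation, the SEI is simply the discharge power divided by the volumetric feed flow rate. The numbers below are illustrative and are not values from the reported experiments.

```python
# Specific energy input: SEI = P / Q, energy deposited per unit feed volume.
power_W = 5.0                              # assumed discharge power in W (= J/s)
flow_sccm = 100.0                          # assumed feed flow in standard cm^3/min
flow_cm3_per_s = flow_sccm / 60.0          # convert to cm^3/s
sei_J_per_cm3 = power_W / flow_cm3_per_s   # 3.0 J cm^-3, inside the studied range
print(f"SEI = {sei_J_per_cm3:.1f} J cm^-3")
```

Varying either the discharge power or the flow rate thus moves the operating point along the 0.3 to 6.0 J cm^-3 range that was scanned in the experiments.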
Interview with Frank Petry on “Digital Entrepreneurship: Opportunities, Challenges, and Impacts”
(2022)
Frank Petry is a stalwart of Germany's startup scene. He is a serial founder, serial investor (e.g., Ticketmaster, Expedia, Lending Tree, Web.de, ESCOM), partner and member of the Advisory Board at Blue Lake VC, as well as a partner, mentor and advisory board member at the Baltic Sandbox Accelerator. Additionally, he is the CEO of PECON (Consulting) and Thundermountain (VC, Accelerator, Corporate Innovation).
Microbiologically induced calcium carbonate precipitation (MICP) is a technique that has received a lot of attention in the field of geotechnology in the last decade. It has the potential to provide a sustainable and ecological alternative to conventional consolidation of minerals, for example by the use of cement. From a variety of microbiological metabolic pathways that can induce calcium carbonate (CaCO3) precipitation, ureolysis has been established as the most commonly used method. To better understand the mechanisms of MICP, and to develop new processes and optimize existing ones based on this understanding, ureolytic MICP is the subject of intensive research. The interplay of biological and civil engineering aspects shows how interdisciplinary the research needs to be to advance the potential of this technology. This paper describes and critically discusses, based on current literature, the key influencing factors involved in the cementation of sand by ureolytic MICP. Due to the complexity of MICP, these factors often influence each other, making it essential for researchers from all disciplines to be aware of these factors and their interactions. Furthermore, this paper discusses the opportunities and challenges for future research in this area to provide impetus for studies that can further advance the understanding of MICP.
Indentation and Scratching with a Rotating Adhesive Tool: A Molecular Dynamics Simulation Study
(2022)
For the specific case of a spherical diamond nanoparticle with 10 nm radius rolling over a planar Fe surface, we employ molecular dynamics simulation to study the processes of indentation and scratching. The particle is rotating (rolling). We focus on the influence of the adhesion force between the nanoparticle and the surface on the damage mechanisms on the surface; the adhesion is modeled by a pair potential with arbitrarily prescribed value of the adhesion strength. With increasing adhesion, the following effects are observed. The load needed for indentation decreases and so does the effective material hardness; this effect is considerably more pronounced than for a non-rotating particle. During scratching, the tangential force, and hence the friction coefficient, increase. The torque needed to keep the particle rolling adds to the total work for scratching; however, for a particle rolling without slip on the surface the total work is minimum. In this sense, a rolling particle induces the most efficient scratching process. For both indentation and scratching, the length of the dislocation network generated in the substrate reduces. After leaving the surface, the particle is (partially) covered with substrate atoms and the scratch groove is roughened. We demonstrate that these effects are based on substrate atom transport under the rotating particle from the front towards the rear; this transport already occurs for a repulsive particle but is severely intensified by adhesion.
Additive manufacturing (AM) enables the production of components with a high degree of individualization at constant manufacturing effort, which is why it is increasingly applied in industrial processes. However, additively manufactured surfaces do not meet the requirements for functional surfaces, so subsequent machining is mandatory for most AM workpieces. Furthermore, the performance of many functional surfaces can be enhanced by microstructuring. The combination of additive and subtractive processes is referred to as hybrid manufacturing. In this paper, the hybrid manufacturing of AISI 316L is investigated. The two AM technologies laser-based powder bed fusion (L-PBF) and high-speed laser directed energy deposition (HS L-DED) are used to produce workpieces which are subsequently machined by micro milling (tool diameter d = 100 µm). The machining results were evaluated based on tool wear, burr formation, process forces and the generated topography. They indicated differences in the machinability of the materials produced by L-PBF and HS L-DED, which were attributed to different microstructural properties.
First, essential m-dissipativity of an infinite-dimensional Ornstein–Uhlenbeck operator N, perturbed by the gradient of a potential, on a domain FC_b^∞ of finitely based, smooth and bounded functions is shown. Our considerations allow unbounded diffusion operators as coefficients. We derive corresponding second order regularity estimates for solutions f of the Kolmogorov equation αf − Nf = g, α ∈ (0, ∞), generalizing some results of Da Prato and Lunardi. Second, we prove essential m-dissipativity for generators (L_Φ, FC_b^∞) of infinite-dimensional degenerate diffusion processes. We emphasize that the essential m-dissipativity of (L_Φ, FC_b^∞) is useful to apply general resolvent methods developed by Beznea, Boboc and Röckner in order to construct martingale/weak solutions to infinite-dimensional non-linear degenerate stochastic differential equations. Furthermore, the essential m-dissipativity of (L_Φ, FC_b^∞) and (N, FC_b^∞), as well as the regularity estimates, are essential to apply the general abstract Hilbert space hypocoercivity method of Dolbeault, Mouhot, Schmeiser and of Grothaus, Stilgenbauer, respectively, to the corresponding diffusions.
We provide a complete elaboration of the L2-Hilbert space hypocoercivity theorem for the degenerate Langevin dynamics with multiplicative noise, studying the longtime behavior of the strongly continuous contraction semigroup solving the abstract Cauchy problem for the associated backward Kolmogorov operator. Hypocoercivity for the Langevin dynamics with constant diffusion matrix was proven previously by Dolbeault, Mouhot and Schmeiser in the corresponding Fokker–Planck framework and made rigorous in the Kolmogorov backwards setting by Grothaus and Stilgenbauer. We extend these results to weakly differentiable diffusion coefficient matrices, introducing multiplicative noise for the corresponding stochastic differential equation. The rate of convergence is explicitly computed depending on the choice of these coefficients and the potential giving the outer force. In order to obtain a solution to the abstract Cauchy problem, we first prove essential self-adjointness of non-degenerate elliptic Dirichlet operators on Hilbert spaces, using prior elliptic regularity results and techniques from Bogachev, Krylov and Röckner. We apply operator perturbation theory to obtain essential m-dissipativity of the Kolmogorov operator, extending the m-dissipativity results from Conrad and Grothaus. We emphasize that the chosen Kolmogorov approach is natural, as the theory of generalized Dirichlet forms implies a stochastic representation of the Langevin semigroup as the transition kernel of a diffusion process which provides a martingale solution to the Langevin equation with multiplicative noise. Moreover, we show that even a weak solution is obtained this way.
We examine the predictability of 299 capital market anomalies enhanced by 30 machine learning approaches and over 250 models in a dataset with more than 500 million firm-month anomaly observations. We find significant monthly (out-of-sample) returns of around 1.8–2.0%, and over 80% of the models yield returns equal to or larger than our linearly constructed baseline factor. For the best performing models, the risk-adjusted returns are significant across alternative asset pricing models, when considering transaction costs with round-trip costs of up to 2%, and when including only anomalies after publication. Our results indicate that non-linear models can reveal market inefficiencies (mispricing) that are hard to reconcile with risk-based explanations.
The simulation of Dynamic Random Access Memories (DRAMs) at system level requires highly accurate models due to their complex timing and power behavior. However, conventional cycle-accurate DRAM subsystem models often become a bottleneck for the overall simulation speed. A promising alternative is simulators based on Transaction Level Modeling, which can be fast and accurate at the same time. In this paper we present DRAMSys4.0, which is, to the best of our knowledge, the fastest and most extensive open-source cycle-accurate DRAM simulation framework. DRAMSys4.0 features a novel software architecture that enables fast adaptation to different hardware controller implementations and new JEDEC standards. In addition, it already supports the latest standards DDR5 and LPDDR5. We explain how to apply optimization techniques for increased simulation speed while maintaining full temporal accuracy. Furthermore, we demonstrate the simulator's accuracy and analysis tools with two application examples. Finally, we provide a detailed investigation and comparison of the most prominent cycle-accurate open-source DRAM simulators with regard to their supported features, analysis capabilities and simulation speed.
This article presents a methodology whereby adjoint solutions for partitioned multiphysics problems can be computed efficiently, in a way that is completely independent of the underlying physical sub-problems, the associated numerical solution methods, and the number and type of couplings between them. By applying the reverse mode of algorithmic differentiation to each discipline, and by using a specialized recording strategy, diagonal and cross terms can be evaluated individually, thereby allowing different solution methods for the generic coupled problem (for example block-Jacobi or block-Gauss-Seidel). Based on an implementation in the open-source multiphysics simulation and design software SU2, we demonstrate how the same algorithm can be applied for shape sensitivity analysis on a heat exchanger (conjugate heat transfer), a deforming wing (fluid–structure interaction), and a cooled turbine blade where both effects are simultaneously taken into account.
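The block-iterative solution of the coupled adjoint problem can be illustrated on a toy linear two-discipline system. The sketch below is a minimal stand-alone illustration under assumed linear fixed-point operators; all matrices and names are hypothetical, and it is not the SU2 implementation. For a forward problem u = M u + b with objective J = c·u, the coupled adjoint satisfies λ = Mᵀλ + c, which the code solves discipline-by-discipline (block Gauss-Seidel, so the cross terms use the freshest available adjoint):

```python
import numpy as np

# Hypothetical 2-discipline linear coupled problem:
#   u1 = A11 u1 + A12 u2 + b1,  u2 = A21 u1 + A22 u2 + b2,  J = c1.u1 + c2.u2
# The coupled adjoint is lam_i = sum_j A_ji^T lam_j + c_i.
rng = np.random.default_rng(0)
n = 3
A = {(i, j): 0.1 * rng.standard_normal((n, n)) for i in (1, 2) for j in (1, 2)}
c1, c2 = rng.standard_normal(n), rng.standard_normal(n)

def coupled_adjoint_gauss_seidel(tol=1e-12, max_it=500):
    """Block-Gauss-Seidel fixed point for the coupled adjoint variables."""
    lam1, lam2 = np.zeros(n), np.zeros(n)
    for _ in range(max_it):
        # diagonal term + cross term for discipline 1
        lam1_new = A[(1, 1)].T @ lam1 + A[(2, 1)].T @ lam2 + c1
        # discipline 2 immediately uses the fresh lam1 (Gauss-Seidel sweep)
        lam2_new = A[(1, 2)].T @ lam1_new + A[(2, 2)].T @ lam2 + c2
        if max(np.abs(lam1_new - lam1).max(), np.abs(lam2_new - lam2).max()) < tol:
            return lam1_new, lam2_new
        lam1, lam2 = lam1_new, lam2_new
    return lam1, lam2

lam1, lam2 = coupled_adjoint_gauss_seidel()
```

Replacing the fresh `lam1_new` in the second update by the old `lam1` turns the sweep into block-Jacobi, which is the other generic solution method mentioned above.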
Comparative public policy is a flourishing research area. It also suffers from some curious blind spots. In this paper we discuss four of these: (1) the obsession with covariance, which means that important phenomena are ignored; (2) the lack of agency, which leads to underwhelming explanatory models; (3) the unclear universe of cases, which leaves the inferential value of theories and empirical results uncertain; and (4) the focus on outputs, even though most theories contain strong assumptions about the political process leading to certain outputs. Following this discussion, we outline how a closer integration of policy process theories may be fruitful for future research.
Algorithmic systems that provide services to people by supporting or replacing human decision-making promise greater convenience in various areas. The opacity of these applications, however, means that it is not clear how much they truly serve their users. A promising way to address the issue of possible undesired biases consists in giving users control by letting them configure a system and aligning its performance with users’ own preferences. However, as the present paper argues, this form of control over an algorithmic system demands an algorithmic literacy that also entails a certain way of making oneself knowable: users must interrogate their own dispositions and see how these can be formalized such that they can be translated into the algorithmic system. This may, however, extend already existing practices through which people are monitored and probed and means that exerting such control requires users to direct a computational mode of thinking at themselves.
In this note, we define another way of quantizing classical systems. The quantization we consider is an analogue of the classical Jordan–Schwinger map, which has long been known and used by physicists. The difference, compared to the Jordan–Schwinger map, is that we use generators of the Cuntz algebra O∞ (i.e. a countable family of mutually orthogonal partial isometries of a separable Hilbert space) as "building blocks" instead of creation–annihilation operators. The resulting scheme satisfies properties similar to Van Hove prequantization, i.e. exact conservation of Lie brackets and linearity.
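For orientation, the classical Jordan–Schwinger map that serves as the model here can be stated in its standard textbook form (creation–annihilation version; the note replaces these operators by Cuntz-algebra generators):

```latex
% Classical Jordan--Schwinger map: to a matrix A = (A_{ij}) one assigns
% the bilinear operator built from creation--annihilation operators
\[
  J(A) \;=\; \sum_{i,j} a_i^{\dagger}\, A_{ij}\, a_j ,
  \qquad [a_i, a_j^{\dagger}] = \delta_{ij} .
\]
% J is linear and a Lie-algebra homomorphism (exact bracket conservation):
\[
  \bigl[\, J(A),\, J(B) \,\bigr] \;=\; J\bigl([A, B]\bigr).
\]
```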
Recently, phase field modeling of fatigue fracture has gained considerable attention from researchers, since fatigue damage of structures is a crucial issue in mechanical design. Differing from traditional phase field fracture models, our approach considers not only the elastic strain energy and the crack surface energy; in addition, we introduce into the regularized energy density function a fatigue energy contribution caused by cyclic load. Compared to other types of fracture phenomena, fatigue damage occurs only after a large number of load cycles, which requires a large computational effort in simulation. Furthermore, the choice of the cycle number increment is usually a compromise between simulation time and accuracy. In this work, we propose an efficient phase field method for cyclic fatigue crack propagation that requires only moderate computational cost without sacrificing accuracy. We divide the entire fatigue fracture simulation into three stages and apply a different cycle number increment in each damage stage. The basic concept of the algorithm is to associate the cycle number increment with the damage increment of each simulation iteration. Numerical examples show that our method can effectively predict the phenomenon of fatigue crack growth and reproduce fracture patterns.
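The idea of tying the cycle number increment to a target damage increment can be sketched with a made-up scalar damage-rate law; the rate function, its constants and the failure threshold below are purely illustrative, not the model from the paper:

```python
# Hypothetical per-cycle damage-rate law; illustrates adapting the
# cycle-number increment dN so each step produces the same damage increment.
def damage_rate(d, r0=1e-5, growth=10.0):
    """Assumed per-cycle damage rate, accelerating as damage d grows."""
    return r0 * (1.0 + growth * d)

def adaptive_cycle_jumps(d_target=0.01, d_fail=1.0):
    """March damage to failure, deriving dN from the target damage increment."""
    d, n_cycles, steps = 0.0, 0.0, 0
    while d < d_fail:
        dN = d_target / damage_rate(d)   # large jumps early, small ones late
        d += damage_rate(d) * dN         # explicit update over dN cycles
        n_cycles += dN
        steps += 1
    return n_cycles, steps

cycles, steps = adaptive_cycle_jumps()
```

Because each step contributes the same damage increment, slow early-life stages are traversed in a few large cycle jumps while the fast final stage is resolved with small ones.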
In a widely studied class of multi-parametric optimization problems, the objective value of each solution is an affine function of real-valued parameters. The goal is then to provide an optimal solution set, i.e., a set containing an optimal solution for each non-parametric problem obtained by fixing a parameter vector. For many multi-parametric optimization problems, however, an optimal solution set of minimum cardinality can contain super-polynomially many solutions. Consequently, no polynomial-time exact algorithms can exist for these problems even if P=NP. We propose an approximation method that is applicable to a general class of multi-parametric optimization problems and outputs a set of solutions with cardinality polynomial in the instance size and the inverse of the approximation guarantee. This method lifts approximation algorithms for non-parametric optimization problems to their parametric versions and provides an approximation guarantee that is arbitrarily close to the approximation guarantee of the algorithm for the non-parametric problem. If the non-parametric problem can be solved exactly in polynomial time, or if an FPTAS is available, our algorithm is an FPTAS. Further, we show that, for any given approximation guarantee, the minimum cardinality of an approximation set is, in general, not ℓ-approximable for any natural number ℓ less than or equal to the number of parameters, and we discuss applications of our results to classical multi-parametric combinatorial optimization problems. In particular, we obtain an FPTAS for the multi-parametric minimum s-t-cut problem, an FPTAS for the multi-parametric knapsack problem, as well as an approximation algorithm for the multi-parametric maximization of independence systems problem.
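For the single-parameter case, the lifting idea can be sketched as follows: with nonnegative affine profits, objective values change by at most a factor (1+ε) between neighboring points of a geometric parameter grid, so solving the non-parametric problem exactly at each grid point yields a (1+ε)-approximation set of polynomial size. The instance data and the brute-force solver below are illustrative stand-ins, not the paper's algorithm:

```python
from itertools import combinations

# Toy multi-parametric knapsack: profits p_i(lam) = a[i] + lam*b[i] (all >= 0),
# parameter lam in [1, 16]. Hypothetical instance data:
a = [4, 7, 1, 6, 3]
b = [5, 1, 8, 2, 6]
w = [3, 4, 2, 5, 3]
CAP = 8

def exact_knapsack(lam):
    """Brute-force exact solver for one fixed parameter value (non-parametric)."""
    best, best_val = frozenset(), 0.0
    for r in range(len(a) + 1):
        for subset in combinations(range(len(a)), r):
            if sum(w[i] for i in subset) <= CAP:
                val = sum(a[i] + lam * b[i] for i in subset)
                if val > best_val:
                    best, best_val = frozenset(subset), val
    return best

def approximation_set(eps=0.1, lam_min=1.0, lam_max=16.0):
    """Lift the exact solver: solve on the geometric grid lam_min*(1+eps)^k."""
    sols, lam = set(), lam_min
    while lam < lam_max * (1 + eps):
        sols.add(exact_knapsack(lam))
        lam *= 1 + eps
    return sols
```

The grid has O(log(lam_max/lam_min)/eps) points, so the returned set is small, yet for every lam in the range its best member is within factor (1+eps) of the optimum at that lam.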
Wear phenomena in worm gears depend on the size of the gears. Whereas larger gears are mainly affected by fatigue wear, abrasive wear is predominant in smaller gears. In this context, a simulation model for the abrasive wear of worm gears was developed, which is based on an energetic wear equation. This approach associates wear with the solid friction energy occurring in the tooth contact. The physically based wear simulation model includes a tooth contact analysis and a tribological calculation to determine the local solid tooth friction and wear. The calculation is iterated with the modified tooth flank geometry of the worn worm wheel in order to consider the influence of wear on the tooth contact. Experimental results on worm gears are used to determine the wear model parameters and to validate the model. A simulative study for a wide range of worm gear geometries was conducted to investigate the influence of geometry and operating conditions on abrasive wear.
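A minimal sketch of such an energetic wear law: local wear depth taken proportional to the solid-friction energy density dissipated in the contact. The wear coefficient, contact data and function names are purely hypothetical, and the actual model couples this step to a full tooth contact analysis:

```python
# Sketch of an energetic wear law: local wear depth proportional to the
# solid-friction energy density p * mu * v * dt dissipated in the contact.
def wear_depth(p_contact, v_slide, mu_solid, dt, k_wear=1e-9):
    """Wear depth increment (m) from local friction energy density (J/m^2).
    k_wear is a hypothetical, experimentally fitted wear coefficient."""
    e_friction = mu_solid * p_contact * v_slide * dt
    return k_wear * e_friction

def simulate(points, cycles, dt=1e-3):
    """Accumulate wear at several contact points over repeated load cycles."""
    depths = [0.0] * len(points)
    for _ in range(cycles):
        for i, (p, v, mu) in enumerate(points):
            depths[i] += wear_depth(p, v, mu, dt)
        # a real model would now recompute the tooth contact with the worn flank
    return depths
```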
Algorithms are increasingly used in different domains of public policy. They help humans to profile the unemployed, support administrations in detecting tax fraud, and produce recidivism risk scores that judges or criminal justice managers take into account when they make bail decisions. In recent years, critics have increasingly pointed to the ethical challenges of these tools and emphasized problems of discrimination, opaqueness or accountability, and computer scientists have proposed technical solutions to these issues. In contrast to these important debates, the literature on how these tools are implemented in the actual everyday decision-making process has remained cursory. This is problematic because the consequences of ADM systems depend at least as much on their implementation in an actual decision-making context as on their technical features. In this study, we show how the introduction of risk assessment tools in the criminal justice sector at the local level in the USA has deeply transformed the decision-making process. We argue that this is mainly due to the fact that the evidence generated by the algorithm introduces a notion of statistical prediction into a situation that was previously dominated by fundamental uncertainty about the outcome. While this expectation is supported by the case study evidence, the possibility of shifting blame to the algorithm seems much less important to the criminal justice actors.
Micro milling is a very flexible micro cutting process widely deployed to manufacture miniaturized parts. However, size effects occur when downscaling cutting processes. They lead to higher mechanical loads on the tools and therefore increased tool wear. Micro milling tools are usually made of cemented carbides due to their mechanical strength and fine grain structure. Technical ceramics as alternative tool materials offer very good mechanical properties as well, with grain sizes well below 1 μm. In conventional machining, they have proven able to reduce tool wear. To transfer these wear improvements to the micro scale, we manufactured all-ceramic micro end mills in previous studies (∅ 50 and ∅ 100 μm). Tools made from zirconia (Y-TZP) showed the sharpest cutting edges and performed best in micro milling trials amongst the substrates tested. However, the advantages of the ceramic substrate could not be utilized for the brass and titanium materials tested in those studies. Therefore, in this study the capabilities of all-ceramic micro end mills (∅ 50 μm) in different workpiece materials (1.4404, 1.7225, 3.1325 and PMMA GS) were researched. For the two steels and the aluminum alloy, the ceramic tools did not offer an improvement over the cemented carbide tools used as reference. For the thermoplastic PMMA, however, significant improvements could be achieved by utilizing the Y-TZP ceramic tools: less tool wear, lower and more stable cutting forces, and higher surface quality.
Deactivation processes of photoexcited (λex = 580 nm) phycocyanobilin (PCB) in methanol were investigated by means of UV/Vis and mid-IR femtosecond (fs) transient absorption (TA) as well as static fluorescence spectroscopy, supported by density-functional-theory calculations of three relevant ground state conformers, PCBA, PCBB and PCBC, their relative electronic state energies and a normal mode vibrational analysis. UV/Vis fs-TA reveals time constants of 2.0, 18 and 67 ps, describing the decay of PCBB*, the decay of PCBA* and the thermal re-equilibration of PCBA, PCBB and PCBC, respectively, in line with the model by Dietzek et al. (Chem Phys Lett 515:163, 2011) and its predecessors. Significant substantiation and extension of this model is achieved, first, via mid-IR fs-TA, i.e. the identification of molecular structures and their dynamics, with time constants of 2.6, 21 and 40 ps, respectively. Second, transient IR continuum absorption (CA) is observed in the region above 1755 cm−1 (CA1) and between 1550 and 1450 cm−1 (CA2), indicative of the IR absorption of highly polarizable protons in hydrogen bonding networks (X–H…Y). This allows the characterization of chromophore protonation/deprotonation processes, associated with the electronic and structural dynamics, on a molecular level. The PCB photocycle is suggested to be closed via a long-lived (> 1 ns), PCBC-like (i.e. deprotonated), fluorescent species.
Optimizing a manufacturing company's in-house energy demand amidst fluctuating electricity prices, uncertainties in renewable energy supply, and volatile manufacturing planning situations is a challenging task. To tackle this issue, a novel approach is developed for scheduling the energy supply in manufacturing systems with the objective of reducing energy costs. The approach employs Quantum Annealing to determine the optimal mix of in-house generation, purchased electricity, and energy storage. The effectiveness and scalability of the approach are demonstrated through validation on two simplified use cases, showcasing its potential in solving complex energy supply optimization problems.
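The source-mix decision can be illustrated as a small QUBO, the problem form a quantum annealer minimizes. All slot costs and the penalty weight below are invented, and exhaustive search stands in for the annealer; this is a toy sketch, not the paper's formulation:

```python
from itertools import product

# Toy QUBO: per time slot, pick exactly one energy source (own generation,
# grid purchase, or storage discharge) at minimum total cost.
T = 3
cost = {                       # hypothetical cost per slot for each source
    "gen":  [3.0, 2.0, 4.0],
    "buy":  [5.0, 1.0, 6.0],
    "stor": [4.0, 3.0, 2.0],
}
PENALTY = 100.0                # weight of the one-hot constraint penalty

def qubo_energy(x):
    """x maps (source, t) -> 0/1; returns cost plus constraint penalty."""
    e = sum(cost[s][t] * x[(s, t)] for s in cost for t in range(T))
    for t in range(T):
        supply = sum(x[(s, t)] for s in cost)
        e += PENALTY * (supply - 1) ** 2   # quadratic one-hot penalty
    return e

def solve_brute_force():
    """Classical stand-in for the annealer: enumerate all 2^(3T) bitstrings."""
    keys = [(s, t) for s in cost for t in range(T)]
    return min(
        (dict(zip(keys, bits)) for bits in product((0, 1), repeat=len(keys))),
        key=qubo_energy,
    )

schedule = solve_brute_force()
```

The quadratic penalty keeps the objective in QUBO form, which is exactly what annealing hardware accepts; larger instances only change the number of binary variables, not the structure.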