In recent decades, academia has addressed a wide range of research topics in the field of ethical decision-making. Alongside a large body of research on ethical consumption, the domain of ethical investment has increasingly moved into the focus of scholars. While most research in this area examines whether socially or environmentally sustainable businesses financially outperform traditional investments, or investigates the character traits and other socio-demographic factors of ethical investors, the impact of sustainable corporate conduct on the investment intentions of private investors still requires further research. We therefore conducted two studies to shed more light on this highly relevant topic. After discussing the current state of research, our first empirical study explores whether, besides the traditional triad of risk, return, and liquidity, sustainability also exerts a significant impact on the willingness to invest. As hypothesized, we find that sustainability has a clear and decisive impact in addition to the traditional factors. In a subsequent study, we examine the sustainability–willingness-to-invest link more deeply. Here, our results show that improved sustainability may not pay off in terms of investment attractiveness; conversely, however, conducting business in a non-sustainable manner clearly harms it, and this effect cannot be compensated even by an increased return.
As a consequence of the real estate market crash after 2008, large investors invested significant amounts of wealth in single-family houses to construct portfolios of rental dwellings whose income is securitized on the capital market. In some local housing markets, these investors own remarkable numbers of single-family houses. Furthermore, their trading activities have resulted in a new investment strategy, which exacerbates property wealth concentration and polarization. This new investment strategy and its portfolio optimization inspire curiosity about its influence on housing markets. This paper first aims to find an optimal portfolio strategy by maximizing the expected utility of terminal wealth under a stochastic model that includes a variety of economic states to estimate house prices. Second, it aims to analyze the effect of large investors on the housing market. The results show that the investment strategies of large investors depend on the balance among the economic state, maintenance costs, rental income, the interest rate, and the investors' willingness to invest in housing, and that their effect depends on the state of the economy.
Dataflow process networks (DPNs) are intrinsically data-driven, i.e., node actions are not synchronized with each other and may fire whenever sufficient input operands have arrived at a node. While the general model of computation (MoC) of DPNs imposes no further restrictions, many different subclasses of DPNs representing different dataflow MoCs have been considered over time. These classes differ mainly in the kinds of behaviors of the processes. A DPN may be heterogeneous in that different processes in the network belong to different classes of DPNs. A heterogeneous DPN can therefore be used effectively to model and to implement different components of a system with different kinds of processes and thus different dataflow MoCs. This paper presents a model-based design flow based on different dataflow MoCs, including their heterogeneous combinations. In particular, it covers the automatic software synthesis of systems from DPN models. The main objective is to validate, evaluate, and compare the artifacts exhibited by different dataflow MoCs at the implementation level of systems under the supervision of a common design tool. Moreover, this work also offers an efficient synthesis method that targets and exploits heterogeneity in DPNs by generating implementations based on the kinds of behaviors of the processes. The proposed synthesis method provides a tool chain including different specialized code generators for specific dataflow MoCs and a runtime system that finally maps models using a combination of different dataflow MoCs onto cross-vendor target hardware.
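The data-driven firing rule described in this abstract can be illustrated with a minimal sketch (all class and function names are illustrative, not taken from the paper's tool chain): a node fires only when every input port holds a token, consuming one token per port per firing.

```python
from collections import deque

class Node:
    """A dataflow node that fires when every input port holds a token."""
    def __init__(self, num_inputs, action):
        self.inputs = [deque() for _ in range(num_inputs)]
        self.action = action          # function mapping operands to an output token

    def try_fire(self):
        # Data-driven firing rule: fire only if all input queues are non-empty.
        if all(q for q in self.inputs):
            operands = tuple(q.popleft() for q in self.inputs)
            return self.action(*operands)
        return None                   # not enough operands; node stays idle

# A two-input adder node, SDF-style: consumes one token per port per firing.
adder = Node(2, lambda a, b: a + b)
adder.inputs[0].extend([1, 2])
adder.inputs[1].append(10)

print(adder.try_fire())  # -> 11 (both ports have tokens, node fires)
print(adder.try_fire())  # -> None (second port is now empty, node cannot fire)
```

Different firing conditions per node (e.g. fixed token rates versus data-dependent consumption) are exactly what distinguishes the dataflow MoC subclasses the abstract refers to.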
Quantum annealing (QA) is a metaheuristic for solving optimization problems in a time-efficient manner. To this end, quantum mechanical effects are used to compute and evaluate many possible solutions of an optimization problem simultaneously. Recent studies have shown the potential of QA for solving such complex assignment problems within milliseconds. This also applies to the field of job shop scheduling, where existing approaches, however, focus on small problem sizes. To assess the full potential of QA for industry-scale problem formulations in this area, it is necessary to consider larger problem instances and to evaluate whether these job shop scheduling problems can be computed while finding a near-optimal solution in a time-efficient manner. Consequently, this paper presents a QA-based job shop scheduling approach. In particular, flexible job shop scheduling problems of various sizes are computed with QA, demonstrating the efficiency of the approach regarding scalability, solution quality, and computing time. For the evaluation of the proposed approach, the solutions are compared in a scientific benchmark with state-of-the-art algorithms for solving flexible job shop scheduling problems. The results indicate that QA has the potential to solve flexible job shop scheduling problems in a time-efficient manner. Even large problem instances can be computed within seconds, which opens up the possibility of industrial application.
Results from a contact simulation are presented that show the surface alteration of an axial bearing caused by unwanted electrical current passage under mixed-friction conditions. The model developed for this purpose takes into account not only the surface roughness but also the nonlinear material behavior of the rolling-bearing material. In contrast to known modeling methods for similar problems, a novel approach based on a coupled Eulerian–Lagrangian finite element simulation is developed here. Using experimentally damaged surfaces as input, the model provides insights into the load-bearing area behavior and further mechanical characteristics resulting from combined mechanical and electrical loads.
Continuous-time regime-switching models are a very popular class of models for financial applications. In this work, the so-called signal-to-noise matrix is introduced for hidden Markov models in which the switching is driven by an unobservable Markov chain. Its relations to filtering, i.e. state estimation of the chain given the available observations, and to portfolio optimization are investigated. A convergence result for the filter is derived: the filter converges to its invariant distribution if the eigenvalues of the signal-to-noise matrix converge to zero. This matrix is then also used to prove a mutual fund representation for regime-switching models and a corresponding market reduction which is consistent with filtering and portfolio optimization. Two canonical cases for the reduction are analyzed in more detail, the first based on the market regimes and the second depending on the eigenvalues. These considerations are presented for both observable and unobservable Markov chains. The results are illustrated by numerical simulations.
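The filtering task described here, estimating the hidden regime from noisy observations, can be sketched in a discrete-time analogue (all parameter values are illustrative, not taken from the paper): a predict step with the chain's transition matrix followed by a Bayesian correction with the observation likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state hidden Markov chain with Gaussian observations (a discrete-time
# stand-in for the continuous-time regime-switching setting; values made up).
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])          # transition matrix of the hidden chain
mu = np.array([-1.0, 1.0])            # state-dependent observation mean ("drift")
sigma = 2.0                           # observation noise level

def filter_step(pi, y):
    """One filter update: predict with P, correct with the likelihood of y."""
    pred = pi @ P
    lik = np.exp(-0.5 * ((y - mu) / sigma) ** 2)
    post = pred * lik
    return post / post.sum()

# Simulate the chain and run the filter on the noisy observations.
state, pi = 0, np.array([0.5, 0.5])
for _ in range(200):
    state = rng.choice(2, p=P[state])
    y = mu[state] + sigma * rng.standard_normal()
    pi = filter_step(pi, y)
print(pi)  # posterior probability of each regime given all observations
```

In this analogue, the ratio of regime separation `mu` to noise `sigma` plays the role the signal-to-noise matrix plays in the paper: the smaller it is, the closer the filter stays to the chain's invariant distribution.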
In this paper we investigate a utility maximization problem with drift uncertainty in a multivariate continuous-time Black–Scholes type financial market which may be incomplete. We impose a constraint on the admissible strategies that prevents a pure bond investment, and we include uncertainty by means of ellipsoidal uncertainty sets for the drift. Our main results are, first, an explicit representation of the optimal strategy and the worst-case parameter; second, a minimax theorem that connects our robust utility maximization problem with the corresponding dual problem; and third, a proof that, as the degree of model uncertainty increases, the optimal strategy converges to a generalized uniform diversification strategy.
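The objects described above can be written in commonly used notation (the symbols $\hat{\mu}$, $\Gamma$, $\kappa$, $U$, and $X_T^{\pi,\mu}$ are illustrative, not taken from the paper): an ellipsoidal set of admissible drifts and a worst-case optimization over it,

```latex
\Theta \;=\; \bigl\{ \mu \in \mathbb{R}^d : (\mu - \hat{\mu})^{\top} \Gamma^{-1} (\mu - \hat{\mu}) \le \kappa^2 \bigr\},
\qquad
\sup_{\pi} \; \inf_{\mu \in \Theta} \; \mathbb{E}\bigl[ U\bigl(X_T^{\pi,\mu}\bigr) \bigr],
```

where $\hat{\mu}$ is a reference drift estimate, $\Gamma$ a positive definite shape matrix, $\kappa$ the radius controlling the degree of model uncertainty, and $X_T^{\pi,\mu}$ the terminal wealth under strategy $\pi$ and drift $\mu$. Increasing $\kappa$ corresponds to the growing model uncertainty under which the paper shows convergence to uniform diversification.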
The dynamic behaviour of unsaturated sand–rubber chip mixtures at various gravimetric contents is evaluated through an experimental study comprising resonant column tests in a fixed-free device. The chips were irregularly shaped, with dimensions ranging from 5 to 14 mm. Three types of sand with different gradations have been considered. Relative density amounted to 0.5 for all specimens. Due to the large size of the chips, the diameter of the specimens had to be equal to 100 mm, which in turn required a re-calibration of the device assuming a frequency-dependent drive head inertia. The effects of confining stress, rubber chip content, and sand gradation on shear modulus and damping ratio are determined over a wide range of shear strains. At small strains, as known for sands, increasing the confining stress stiffens the mixtures. Increasing the rubber chip content significantly reduces the shear modulus and increases the damping ratio. At higher strains, increasing the confining stress or the rubber content flattens the reduction of the shear modulus with strain. Damping at high strains does not show any appreciable dependence on rubber content. Unloading–reloading sequences are used to assess shear modulus degradation and threshold strains. Finally, design equations are derived from the test results to predict the dynamic response of the composite material.
Many practical optimisation problems have conflicting objectives, which should be addressed by multi-criteria optimisation (MCO), i.e. by determining the set of best compromises, the Pareto set (PS), along with its picture in parameter space (PSPS). In previous work on low-dimensional MCO problems, we found characteristic topological features of the PS and PSPS, which depend on the dimensionality M of the parameter space and N of the objective space. For example, M = 2 and N = 3 yields triangles with needle-like extensions. The reasons for these topological features were previously unknown. Here, we show that they are to be expected if all objective functions of the MCO satisfy two conditions: (a) they can be approximated by quadratic functions and (b) one of the eigenvalues of the Hessian matrix evaluated at the function's minimum is small compared to the other eigenvalues. Objective functions which meet conditions (a) and (b) have a valley-like topology, where the valley lies in the direction of the eigenvector corresponding to the lowest eigenvalue. The PSPS can be estimated by starting at the minimum of an objective function, following the valley, and combining these lines for all objective functions. The PS is obtained by evaluating the objective functions. We believe that conditions (a) and (b) are met in many practical problems and discuss an example from molecular modelling. The improved understanding of the features of these MCO problems opens the route for designing methods for swiftly finding estimates of their PS and PSPS.
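The valley-like topology described by conditions (a) and (b) can be sketched numerically (the objective functions, grid, and dominance check below are illustrative stand-ins, not the paper's molecular-modelling example): two quadratic objectives with one small Hessian eigenvalue each, whose non-dominated samples form the PSPS.

```python
import numpy as np

# Two valley-shaped quadratic objectives in M = 2 parameters: each Hessian has
# one eigenvalue (0.02) much smaller than the other (2.0), so each minimum
# lies in a flat valley along one coordinate axis.
def f1(x, y):
    return 0.01 * (x - 1.0) ** 2 + (y - 1.0) ** 2   # valley along the x-axis

def f2(x, y):
    return (x + 1.0) ** 2 + 0.01 * (y + 1.0) ** 2   # valley along the y-axis

# Sample the parameter space on a grid and stack the N = 2 objective values.
xs = np.linspace(-2.0, 2.0, 60)
X, Y = np.meshgrid(xs, xs)
F = np.stack([f1(X, Y).ravel(), f2(X, Y).ravel()], axis=1)
params = np.stack([X.ravel(), Y.ravel()], axis=1)

def is_dominated(F, i):
    """True if some point is at least as good in all objectives and strictly
    better in at least one."""
    better_eq = (F <= F[i]).all(axis=1)
    strictly = (F < F[i]).any(axis=1)
    return (better_eq & strictly).any()

pareto = np.array([i for i in range(len(F)) if not is_dominated(F, i)])
print(len(pareto), "non-dominated samples")  # params[pareto] approximates the PSPS
```

The non-dominated parameter points concentrate along the two valleys connecting the minima, which is the valley-following estimate of the PSPS the abstract proposes.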
This contribution defends two claims. The first is about why thought experiments are so relevant and powerful in mathematics. Heuristics and proof are not strictly separable, and therefore the relevance of thought experiments is not confined to heuristics. The main argument is based on a semiotic analysis of how mathematics works with signs. Seen in this way, formal symbols do not eliminate thought experiments (replacing them by something rigorous), but rather provide a new stage for them. The formal world resembles the empirical world in that it calls for exploration and offers surprises. This presents a major reason why thought experiments occur both in the empirical sciences and in mathematics. The second claim is about a looming aporia that signals the limitation of thought experiments. This aporia arises when mathematical arguments cease to be fully accessible, thus violating a precondition for experimenting in thought. The contribution focuses on the work of Vladimir Voevodsky (1966–2017, Fields medalist in 2002), who argued that even very pure branches of mathematics cannot avoid inaccessibility of proof. Furthermore, he suggested that computer verification is a feasible path forward, but only if proof is not modeled in terms of formal logic.
Algorithmic systems are increasingly used by state agencies to inform decisions about humans. They produce scores on the risk of recidivism in criminal justice, indicate the probability that a job seeker will find a job in the labor market, or calculate whether an applicant should get access to a certain university program. In this contribution, we take an interdisciplinary perspective, provide a bird's eye view of the key decisions that have to be taken when state actors decide to use an algorithmic system, and illustrate these decisions with empirical examples from case studies. Building on these insights, we discuss the main pitfalls and promises of the use of algorithmic systems by the state, focusing on four levels: the most basic question of whether an algorithmic system should be used at all; the regulation and governance of the system; issues of algorithm design; and, finally, questions related to the implementation of the system on the ground and the human–machine interaction that comes with it. Based on our assessment of the advantages and challenges that arise at each of these levels, we propose a set of crucial questions to be asked when such intricate matters are addressed.
This study investigated the universality of emotional prosody in the perception of discrete emotions when semantics is not available. In two experiments, the perception of emotional prosody in Hebrew and German was investigated with listeners who speak one of the languages but not the other. Having a parallel tool in both languages allowed us to conduct controlled comparisons. In Experiment 1, 39 native German speakers with no knowledge of Hebrew and 80 native Israeli Hebrew speakers rated Hebrew sentences spoken with one of four emotional prosodies (anger, fear, happiness, sadness) or neutrally. The Hebrew version of the Test for Rating of Emotions in Speech (T-RES) was used for this purpose. Ratings indicated participants' agreement on how much the sentence conveyed each of the four discrete emotions (anger, fear, happiness, and sadness). In Experiment 2, 30 native speakers of German and 24 Israeli native speakers of Hebrew who had no knowledge of German rated sentences of the German version of the T-RES. Based only on the prosody, German-speaking participants were able to accurately identify the emotions in the Hebrew sentences, and Hebrew-speaking participants were able to identify the emotions in the German sentences. In both experiments, ratings between the groups were similar. These findings show that individuals are able to identify emotions in a foreign language even if they do not have access to semantics. This ability goes beyond identification of the target emotion; similarities between languages exist even for "wrong" perceptions. This adds to accumulating evidence in the literature on the universality of emotional prosody.
Alongside a populist understanding of democracy, a majoritarian relativism also forms part of German political culture. This article argues, and provides evidence, that these are two distinct yet partly related conceptions of democracy and that it is important to keep them apart. Like populism, majoritarian relativism expects the most direct and faithful realization possible of the interests present in the population, but it explicitly does not hold on to the idea of a true and unified popular will. While both are positively associated with support for the right-wing populist party Alternative für Deutschland (AfD), only populism shows a negative association with optimizing problem-solving through artificial intelligence in political leadership, whereas majoritarian relativism even shows a positive one. It is also remarkable that majoritarian relativism predicts support for the AfD better than a populist understanding of democracy does. The article thus makes an important contribution to the debate on populism as a component of political culture in Germany.
Performance of pure OME and various HVO–OME fuel blends as alternative fuels for a diesel engine
(2022)
Since the potential for reducing CO2 emissions from fossil fuels is limited, suitable CO2-neutral fuels are required for applications that cannot reasonably be electrified and will therefore continue to rely on internal combustion engines in the future. Potential fuel candidates for CI engines are either paraffinic diesel fuels such as HVO (hydrotreated vegetable oil) or new fuels like POMDME (polyoxymethylene dimethyl ether, short "OME"). In addition, blends of these two fuel types might be of interest. While many studies have been conducted on OME blends with fossil diesel fuel, research on HVO–OME blends has been less extensive to date.
In the current work, pure OME and HVO–OME blends are investigated in a single-cylinder research engine. The test results of the various fuel blend formulations are compared and evaluated, particularly with regard to the soot-NOx trade-off behavior. The primary objective of the study is to examine whether the major potential of blending these two fuels is already largely exploited at low OME content, or whether significant additional emission reduction potential can still be found with higher-content blends, without the need to switch to pure OME operation. Furthermore, the fuel blend best suited for realizing an ultra-low emission concept under the current technical conditions is to be identified. In addition, three different injector designs were tested for operation on pure OME3-5, differing both in hydraulic flow and in the number and layout of the injection holes. The optimum configuration is evaluated with regard to emissions, normalized heat release, and indicated efficiency.
Due to an excellent ratio of high strength to low density, as well as strong corrosion resistance, the titanium alloy Ti-6Al-4V is widely used in industrial applications. However, Ti-6Al-4V is also a difficult-to-cut material because of its low thermal conductivity and high chemical reactivity, especially at elevated temperatures. As a result, machining Ti-6Al-4V is characterized by high thermal loads and rapidly progressing thermo-chemically induced tool wear. An adequate cooling strategy is essential to reduce the thermal load and therefore the tool wear. Sub-zero metalworking fluids (MWF), which are applied in the liquid state but at supply temperatures below the ambient temperature, offer great potential to significantly reduce the thermal load when machining Ti-6Al-4V. Within the presented research, systematically varied sub-zero cooling strategies are applied when milling Ti-6Al-4V. The influences of the supply temperature, as well as the volume flow and the outlet velocity, are investigated with the aim of reducing the thermal loads that occur during milling. The milling experiments were recorded using high-speed cameras in order to characterize the impact of the cooling strategies and to resolve the behavior of the MWF. Additionally, the novel sub-zero cooling approach is compared to a cryogenic CO2 cooling strategy. The results show that the optimized sub-zero cooling strategy leads to a sufficient reduction of the thermal loads and outperforms the cryogenic cooling even at elevated CO2 mass flows.
We consider the optimization problem of a large insurance company that wants to maximize the expected utility of its surplus through the optimal control of the proportional reinsurance. In addition, the insurer is exposed to the risk of default of its reinsurer at the worst possible time, a setting that is closely related to a scenario of the Swiss Solvency Test.
There have been interesting interactions between philosophical reflections, technical developments, and the work of artists, poets, and designers, starting especially in the 1950s and 1960s with a stimulating cell in Stuttgart and Ulm in Germany, from which mutual international interactions spread. The paper aims to describe the philosophical background of Max Bense, with his research on the intellectual history of mathematics and the emerging studies on technology and cybernetics. Together with communication theories and semiotics, new aesthetics such as cybernetic aesthetics were worked out, based on the notions of information and sign. This background stimulated international students, artists, and researchers from different creative disciplines toward methodical approaches leading to the first computer art experiments. The interrelations of these fields with Latin America are the focus of this study. Students, artists, and poets from Latin America, especially Brazil, came to Germany for studies and exhibitions in the creative scientific cell around Max Bense. Some of them stayed in Europe, but the exchange also developed in the opposite direction, with travel to and work in Latin America. Some of these fruitful international interrelations will be described and reflected upon.
In the quest for the climate-neutral and ultra-low emission vehicle powertrains of the future, synthetic fuels produced from renewable sources will play a major role. Polyoxymethylene dimethyl ethers (POMDME or "OME") produced from renewable hydrogen are a very promising candidate for zero-impact emissions in future CI engines. To optimize the utilisation of these fuels in terms of efficiency, performance, and emissions, it is necessary not only to adapt the combustion parameters, but especially to optimize the injection and mixture formation process. In the present work, the spray break-up behavior and mixture formation of OME fuel are investigated numerically in 3D CFD and validated against experimental data from optical measurements in a high-pressure/high-temperature chamber using Schlieren and Mie scattering. For comparison, the same operating points were measured in the optical chamber using conventional diesel fuel, and the CFD modeling was optimized based on these data. To model the spray break-up phenomena reliably, the primary break-up model according to Fischer is used, taking into account the nozzle-internal flow in a detailed calculation of the disperse droplet phase. As the chemico-physical properties of OME have not yet been investigated very intensively, chemical analyses of the substance properties were carried out to capture the most important parameters correctly in the simulation. With this approach, the results of the optical spray measurements could be reproduced well by the numerical model for the cases studied here, laying the basis for further numerical studies of OME sprays, including real engine operation.