This article presents a methodology whereby adjoint solutions for partitioned multiphysics problems can be computed efficiently, in a way that is completely independent of the underlying physical sub-problems, the associated numerical solution methods, and the number and type of couplings between them. By applying the reverse mode of algorithmic differentiation to each discipline, and by using a specialized recording strategy, diagonal and cross terms can be evaluated individually, thereby allowing different solution methods for the generic coupled problem (for example block-Jacobi or block-Gauss-Seidel). Based on an implementation in the open-source multiphysics simulation and design software SU2, we demonstrate how the same algorithm can be applied for shape sensitivity analysis on a heat exchanger (conjugate heat transfer), a deforming wing (fluid–structure interaction), and a cooled turbine blade where both effects are simultaneously taken into account.
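The block-iterative solution of the coupled adjoint problem described above can be illustrated with a minimal sketch (not SU2's implementation; the matrices and sizes are hypothetical): the diagonal blocks correspond to the single-discipline adjoint terms, the off-diagonal blocks carry the cross-coupling terms, and a block-Gauss-Seidel sweep updates one discipline's adjoint at a time using the latest value from the other.

```python
import numpy as np

# Hypothetical 2-discipline coupled adjoint system
#   [A11 A12] [l1]   [b1]
#   [A21 A22] [l2] = [b2]
# A11, A22: diagonal (single-discipline) terms; A12, A21: cross terms.

def block_gauss_seidel(A11, A12, A21, A22, b1, b2, tol=1e-10, max_iter=200):
    """Solve the coupled system block-wise (block-Gauss-Seidel)."""
    l1 = np.zeros_like(b1)
    l2 = np.zeros_like(b2)
    for _ in range(max_iter):
        # Discipline 1 update with the current discipline-2 adjoint:
        l1_new = np.linalg.solve(A11, b1 - A12 @ l2)
        # Discipline 2 update immediately reuses the fresh l1 (Gauss-Seidel);
        # a block-Jacobi variant would use the old l1 here instead.
        l2_new = np.linalg.solve(A22, b2 - A21 @ l1_new)
        if max(np.linalg.norm(l1_new - l1), np.linalg.norm(l2_new - l2)) < tol:
            return l1_new, l2_new
        l1, l2 = l1_new, l2_new
    return l1, l2
```

In the methodology above, the matrix-vector products with the diagonal and cross blocks would be supplied by reverse-mode algorithmic differentiation of each discipline rather than assembled explicitly.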
Comparative public policy is a flourishing research area. It also suffers from some curious blind spots. In this paper we discuss four of these: (1) the obsession with covariance, which means that important phenomena are ignored; (2) the lack of agency, which leads to underwhelming explanatory models; (3) the unclear universe of cases, which leaves the inferential value of theories and empirical results unclear; and (4) the focus on outputs, even though most theories contain strong assumptions about the political process leading to those outputs. Following this discussion, we outline how a closer integration of policy process theories may prove fruitful for future research.
Algorithmic systems that provide services to people by supporting or replacing human decision-making promise greater convenience in various areas. The opacity of these applications, however, means that it is not clear how much they truly serve their users. A promising way to address the issue of possible undesired biases consists in giving users control by letting them configure a system and aligning its performance with users’ own preferences. However, as the present paper argues, this form of control over an algorithmic system demands an algorithmic literacy that also entails a certain way of making oneself knowable: users must interrogate their own dispositions and see how these can be formalized such that they can be translated into the algorithmic system. This may, however, extend already existing practices through which people are monitored and probed and means that exerting such control requires users to direct a computational mode of thinking at themselves.
In this note, we define a further way of quantizing classical systems. The quantization we consider is an analogue of the classical Jordan–Schwinger map, which has long been known and used by physicists. The difference, compared to the Jordan–Schwinger map, is that we use generators of the Cuntz algebra O∞ (i.e., a countable family of mutually orthogonal partial isometries on a separable Hilbert space) as “building blocks” instead of creation–annihilation operators. The resulting scheme satisfies properties similar to Van Hove prequantization, namely exact conservation of Lie brackets and linearity.
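For context, the classical Jordan–Schwinger map referred to above is the standard construction (general background, not specific to this note) that sends a matrix Lie algebra element to a bilinear expression in creation and annihilation operators:

```latex
% Jordan–Schwinger map: given creation/annihilation operators a_i^\dagger, a_i
% with $[a_i, a_j^{\dagger}] = \delta_{ij}$, a matrix $X = (X_{ij})$ is mapped to
\[
  \hat{X} \;=\; \sum_{i,j} a_i^{\dagger}\, X_{ij}\, a_j ,
\]
% and the map is linear and preserves Lie brackets:
\[
  [\hat{X}, \hat{Y}] \;=\; \widehat{[X,Y]} .
\]
```

The scheme in the note replaces the operators a_i, a_i† in this expression by Cuntz algebra generators while retaining linearity and bracket preservation.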
Recently, phase field modeling of fatigue fracture has gained a lot of attention from researchers, since the fatigue damage of structures is a crucial issue in mechanical design. In contrast to traditional phase field fracture models, our approach considers not only the elastic strain energy and the crack surface energy; in addition, we introduce into the regularized energy density function a fatigue energy contribution caused by cyclic loading. Compared to other types of fracture, fatigue damage occurs only after a large number of load cycles, which demands a large computational effort in simulation. Furthermore, the choice of the cycle number increment is usually a compromise between simulation time and accuracy. In this work, we propose an efficient phase field method for cyclic fatigue crack propagation that requires only moderate computational cost without sacrificing accuracy. We divide the entire fatigue fracture simulation into three stages and apply a different cycle number increment in each damage stage. The basic concept of the algorithm is to couple the cycle number increment to the damage increment of each simulation iteration. Numerical examples show that our method can effectively predict fatigue crack growth and reproduce fracture patterns.
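The coupling of the cycle number increment to the damage increment can be sketched as follows (a minimal illustration of the general cycle-jump idea, not the authors' code; the parameter names and bounds are hypothetical): the faster the damage currently grows per cycle, the smaller the jump in cycle number.

```python
def next_cycle_increment(damage_rate, target_d_damage, dN_min=1, dN_max=10_000):
    """Choose the next cycle-number increment dN so that the expected
    damage increment per jump stays near a target value.

    damage_rate: current d(damage)/d(cycle), estimated from the last step
    target_d_damage: desired damage increment per simulation iteration
    """
    if damage_rate <= 0.0:
        # No damage growth (e.g. early stage): take the largest allowed jump.
        return dN_max
    dN = target_d_damage / damage_rate
    # Clamp to the admissible range and round down to whole cycles.
    return int(min(max(dN, dN_min), dN_max))
```

A staged scheme, as described above, would additionally switch the target increment (or the bounds) between the damage stages.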
In a widely studied class of multi-parametric optimization problems, the objective value of each solution is an affine function of real-valued parameters. The goal is then to provide an optimal solution set, i.e., a set containing an optimal solution for each non-parametric problem obtained by fixing a parameter vector. For many multi-parametric optimization problems, however, an optimal solution set of minimum cardinality can contain super-polynomially many solutions. Consequently, no polynomial-time exact algorithm can exist for these problems even if P = NP. We propose an approximation method that is applicable to a general class of multi-parametric optimization problems and outputs a set of solutions whose cardinality is polynomial in the instance size and the inverse of the approximation guarantee. This method lifts approximation algorithms for non-parametric optimization problems to their parametric versions and provides an approximation guarantee that is arbitrarily close to that of the non-parametric approximation algorithm. If the non-parametric problem can be solved exactly in polynomial time, or if an FPTAS is available, our algorithm is an FPTAS. Further, we show that, for any given approximation guarantee, the minimum cardinality of an approximation set is, in general, not ℓ-approximable for any natural number ℓ less than or equal to the number of parameters, and we discuss applications of our results to classical multi-parametric combinatorial optimization problems. In particular, we obtain an FPTAS for the multi-parametric minimum s-t-cut problem, an FPTAS for the multi-parametric knapsack problem, as well as an approximation algorithm for the multi-parametric maximization of independence systems problem.
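The notion of a solution set covering all parameter values can be illustrated with a deliberately naive sketch (this is not the paper's algorithm, which guarantees polynomial cardinality; here a hypothetical one-parameter knapsack with profits affine in the parameter is simply solved on a grid of parameter values):

```python
import itertools

def knapsack_exact(profits, weights, capacity):
    """Brute force over subsets; fine only for tiny illustrative instances."""
    best, best_set = 0.0, set()
    n = len(profits)
    for r in range(n + 1):
        for subset in itertools.combinations(range(n), r):
            if sum(weights[i] for i in subset) <= capacity:
                p = sum(profits[i] for i in subset)
                if p > best:
                    best, best_set = p, set(subset)
    return best, best_set

def parametric_solution_set(a, b, weights, capacity, grid):
    """Profits are affine in the parameter lam: p_i(lam) = a[i] + lam * b[i].
    Solve the fixed-parameter problem for each lam in the grid and collect
    the distinct optimal item sets."""
    solutions = set()
    for lam in grid:
        profits = [ai + lam * bi for ai, bi in zip(a, b)]
        _, sol = knapsack_exact(profits, weights, capacity)
        solutions.add(frozenset(sol))
    return solutions
```

The grid approach gives no guarantee between sampled parameter values; the lifting method in the abstract instead chooses the parameter vectors adaptively so that the approximation guarantee holds for every parameter value.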
Wear phenomena in worm gears depend on the size of the gears: whereas larger gears are mainly affected by fatigue wear, abrasive wear is predominant in smaller gears. In this context, a simulation model for the abrasive wear of worm gears was developed, based on an energetic wear equation. This approach associates wear with the solid-friction energy dissipated in the tooth contact. The physically based wear simulation model comprises a tooth contact analysis and a tribological calculation to determine the local solid tooth friction and wear. The calculation is iterated with the modified tooth flank geometry of the worn worm wheel in order to account for the influence of wear on the tooth contact. Experimental results on worm gears are used to determine the wear model parameters and to validate the model. A simulative study covering a wide range of worm gear geometries was conducted to investigate the influence of geometry and operating conditions on abrasive wear.
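The core of such an energetic wear iteration can be sketched as follows (a minimal illustration, not the authors' model; the variable names, units, and the linear wear law are assumptions): the local wear depth increment is taken proportional to the solid-friction energy dissipated at each discretized contact point, and the flank deviation is updated so that wear feeds back into the next contact analysis.

```python
def wear_iteration(pressure, sliding, mu, k_wear, flank):
    """One wear step over discretized tooth-contact points.

    pressure: local contact pressure per point (e.g. N/mm^2)
    sliding:  local sliding distance per mesh cycle per point (e.g. mm)
    mu:       local solid-friction coefficient per point
    k_wear:   energetic wear coefficient (model parameter identified
              from experiments)
    flank:    current flank deviation per point, modified in place
    """
    for i in range(len(flank)):
        # Friction energy per unit area dissipated at this contact point:
        friction_energy = mu[i] * pressure[i] * sliding[i]
        # Energetic wear law: wear depth proportional to friction energy.
        flank[i] += k_wear * friction_energy
    return flank
```

In the full model, the updated flank geometry would be fed back into the tooth contact analysis before the next iteration, so that pressure and sliding redistribute as the flank wears.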
Algorithms are increasingly used in different domains of public policy. They help humans profile the unemployed, support administrations in detecting tax fraud, and provide recidivism risk scores that judges or criminal justice managers take into account when making bail decisions. In recent years, critics have increasingly pointed to the ethical challenges of these tools and emphasized problems of discrimination, opaqueness, or accountability, and computer scientists have proposed technical solutions to these issues. In contrast to these important debates, the literature on how these tools are implemented in actual everyday decision-making has remained cursory. This is problematic because the consequences of ADM systems depend at least as much on their implementation in an actual decision-making context as on their technical features. In this study, we show how the introduction of risk assessment tools into the criminal justice sector at the local level in the USA has deeply transformed the decision-making process. We argue that this is mainly because the evidence generated by the algorithm introduces a notion of statistical prediction into a situation that was previously dominated by fundamental uncertainty about the outcome. While this expectation is supported by the case study evidence, the possibility of shifting blame to the algorithm seems much less important to the criminal justice actors.
Micro milling is a very flexible micro cutting process widely deployed to manufacture miniaturized parts. However, size effects occur when downscaling the cutting process. They lead to higher mechanical loads on the tools and therefore to increased tool wear. Micro milling tools are usually made of cemented carbides due to their mechanical strength and fine grain structure. Technical ceramics as alternative tool materials offer very good mechanical properties as well, with grain sizes well below 1 μm. In conventional machining, they have proven able to reduce tool wear. To transfer these wear improvements to the micro scale, we manufactured all-ceramic micro end mills in previous studies (∅ 50 and ∅ 100 μm). Tools made from zirconia (Y-TZP) showed the sharpest cutting edges and performed best in micro milling trials among the substrates tested. However, the advantages of the ceramic substrate could not be utilized for the brass and titanium materials tested in those studies. Therefore, in this study the capabilities of all-ceramic micro end mills (∅ 50 μm) in different workpiece materials (1.4404, 1.7225, 3.1325, and PMMA GS) were investigated. For the two steels and the aluminum alloy, the ceramic tools did not offer an improvement over the cemented carbide tools used as reference. For the thermoplastic PMMA, however, significant improvements could be achieved with the Y-TZP ceramic tools: less tool wear, lower and more stable cutting forces, and higher surface quality.
Deactivation processes of photoexcited (λex = 580 nm) phycocyanobilin (PCB) in methanol were investigated by means of UV/Vis and mid-IR femtosecond (fs) transient absorption (TA) as well as static fluorescence spectroscopy, supported by density-functional-theory calculations of three relevant ground state conformers, PCBA, PCBB and PCBC, their relative electronic state energies, and normal mode vibrational analysis. UV/Vis fs-TA reveals time constants of 2.0, 18 and 67 ps, describing the decay of PCBB*, the decay of PCBA*, and the thermal re-equilibration of PCBA, PCBB and PCBC, respectively, in line with the model by Dietzek et al. (Chem Phys Lett 515:163, 2011) and its predecessors. Significant substantiation and extension of this model is achieved, first, via mid-IR fs-TA, i.e. the identification of molecular structures and their dynamics, with time constants of 2.6, 21 and 40 ps, respectively. Second, transient IR continuum absorption (CA) is observed in the region above 1755 cm−1 (CA1) and between 1550 and 1450 cm−1 (CA2), indicative of the IR absorption of highly polarizable protons in hydrogen-bonding networks (X–H…Y). This allows chromophore protonation/deprotonation processes, associated with the electronic and structural dynamics, to be characterized on a molecular level. The PCB photocycle is suggested to be closed via a long-lived (> 1 ns), PCBC-like (i.e. deprotonated), fluorescent species.