Within this work, we report the results of nuclear inelastic scattering experiments on the low-spin phase of the iron(II) mononuclear SCO complex Fe[HBpz3]2, together with density functional theory based calculations performed on a model molecule of the complex. We show that the calculated partial density of vibrational states, based on the structure of a single iron(II) center linked by three pyrazole rings to borate, is in good agreement with the experimentally obtained 57Fe-pDOS, and we assign the molecular vibrations to the prominent optical phonons.
We present new results on standard basis computations of a 0-dimensional ideal I in a power series ring or in the localization of a polynomial ring over a computable field K. We prove the semicontinuity of the “highest corner” in a family of ideals, parametrized by the spectrum of a Noetherian domain A. This semicontinuity is used to design a new modular algorithm for computing a standard basis of I if K is the quotient field of A. It uses the computation over the residue field of a “good” prime ideal of A to truncate high order terms in the subsequent computation over K. We prove that almost all prime ideals are good, so a random choice is very likely to be good, and whether it is good is detected a posteriori by the algorithm. The algorithm yields a significant speed advantage over the non-modular version and works for arbitrary Noetherian domains. The most important special cases are perhaps A = ℤ and A = k[t], k any field and t a set of parameters. Besides its generality, the method differs substantially from previously known modular algorithms for A = ℤ, since it does not manipulate the coefficients. It is also usually faster and can be combined with other modular methods for computations in local rings. The algorithm is implemented in the computer algebra system SINGULAR and we present several examples illustrating its power.
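The modular strategy described above, computing over the residue field of a "good" prime first and using that cheap result to steer the characteristic-zero computation, can be illustrated with a global Gröbner basis as a stand-in for the local standard basis. This is a hedged sketch using SymPy rather than SINGULAR; the example ideal and the prime 32003 are illustrative choices, not taken from the paper.

```python
from sympy import symbols, groebner

x, y = symbols('x y')
ideal = [x**2 + y**2 - 1, x*y - 2]  # illustrative 0-dimensional ideal

# Expensive computation over the rationals (K = Quot(Z) = Q)
G_char0 = groebner(ideal, x, y, order='lex')

# Cheap modular computation over the residue field Z/p for a chosen prime p;
# for a "good" prime the basis has the same shape (same leading monomials),
# which is what a modular algorithm exploits to truncate high-order terms
G_modp = groebner(ideal, x, y, order='lex', modulus=32003)

print(len(G_char0.exprs), len(G_modp.exprs))
```

Whether a prime is good can only be certified a posteriori, exactly as the abstract notes; comparing the shapes of the two bases is the simplest such check.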
Nanostructured tantalum (Ta)-based dental implants have recently attracted significant attention thanks to their superior biocompatibility and bioactivity as compared to their titanium-based counterparts. While the biological and chemical aspects of Ta implants have been widely studied, their mechanical features have been investigated far less frequently. Additionally, the mechanical behavior of these implants and, more importantly, their plastic deformation mechanisms are still not fully understood. Accordingly, in the current research, molecular dynamics simulation, a powerful tool for probing atomic-scale phenomena, is utilized to explore the microstructural evolution of pure polycrystalline Ta samples under tensile loading conditions. Various samples with an average grain size of 2–10 nm are systematically examined using various crystal structure analysis tools to determine the underlying deformation mechanisms. The results reveal that for the samples with an average grain size larger than 8 nm, twinning and dislocation slip are the main sources of any plasticity induced within the sample. For finer-grained samples, the activity of grain boundaries (including grain elongation, rotation, migration, and sliding) is the most important mechanism governing the plastic deformation. Finally, the temperature-dependent Hall–Petch breakdown is thoroughly examined for the nanocrystalline samples via identification of the grain boundary dynamics.
In many applications, visual analytics (VA) has developed into a standard tool to ease data access and knowledge generation. VA describes a holistic cycle transforming data into hypotheses and visualizations to generate insights that enhance the data. Unfortunately, many data sources used in the VA process are affected by uncertainty. In addition, the VA cycle itself can introduce uncertainty into the knowledge generation process but does not provide a mechanism to handle these sources of uncertainty. In this manuscript, we aim to provide an extended VA cycle that is capable of handling uncertainty by quantification, propagation, and visualization, defined as uncertainty-aware visual analytics (UAVA). Here, a recap of uncertainty definitions and descriptions is used as a starting point to insert novel components into the visual analytics cycle. These components assist in capturing uncertainty throughout the VA cycle. Further, different data types, hypothesis generation approaches, and uncertainty-aware visualization approaches are discussed that fit into the defined UAVA cycle. In addition, application scenarios that can be handled by such a cycle, examples, and a list of open challenges in the area of UAVA are provided.
In this paper we consider the stochastic primitive equations for geophysical flows subject to transport noise and turbulent pressure. Admitting very rough noise terms, the global existence and uniqueness of solutions to this stochastic partial differential equation are proven using stochastic maximal regularity, the theory of critical spaces for stochastic evolution equations, and global a priori bounds. Compared to other results in this direction, we do not need any smallness assumption on the transport noise which acts directly on the velocity field, and we also allow rougher noise terms. The adaptation to Stratonovich-type noise and, more generally, to variable viscosity and/or conductivity is discussed as well.
Municipal wastewater is an interesting source of phosphorus and several processes for the recovery of phosphorus from this source have been described. These processes yield magnesium ammonium phosphate (MAP), a valuable fertilizer. In these processes, pH shifts and the addition of chemicals are used to influence the species distribution in the solution such as to finally obtain the desired product and to prevent the co-precipitation of salts of heavy metal ions. Elucidating these species distributions experimentally is a challenging and cumbersome task. Therefore, in the present work, a thermodynamic model was developed that can be used for predicting the species distributions in the various steps of the recovery process. The model combines the extended Debye-Hückel equation for the prediction of activity coefficients with dissociation constants and solubility product data from the literature and contains no parameters that need to be adjusted to process data. The model was successfully tested by comparison to experimental data for the Stuttgart process from the literature and used for analyzing the different process steps. Furthermore, it was demonstrated how the model can be used for optimizing the process.
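The activity-coefficient part of such a model can be sketched compactly. The following is a minimal illustration of the extended Debye-Hückel equation, assuming the standard A and B values for water at 25 °C and an illustrative ion-size parameter; it is not the parameterization used in the paper.

```python
import math

# Extended Debye-Hueckel equation for a single ion:
#   log10(gamma_i) = -A * z_i^2 * sqrt(I) / (1 + B * a_i * sqrt(I))
# A and B below are the common values for water at 25 degC; the ion-size
# parameter a_i (in Angstrom) and the example ion are illustrative
# assumptions, not values taken from the paper.
def log10_activity_coefficient(z, ionic_strength, a_ion, A=0.509, B=0.328):
    s = math.sqrt(ionic_strength)
    return -A * z**2 * s / (1.0 + B * a_ion * s)

# Activity coefficient of Mg2+ (z = 2, a ~ 8 Angstrom) at I = 0.1 mol/kg
gamma_mg = 10 ** log10_activity_coefficient(2, 0.1, 8.0)
print(round(gamma_mg, 3))
```

Combined with literature dissociation constants and solubility products, such activity coefficients let the model predict species distributions without any parameters fitted to process data.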
Aflatoxins, a group of mycotoxins produced by various mold species within the genus Aspergillus, have been extensively investigated for their potential to contaminate food and feed, rendering them unfit for consumption. Nevertheless, the role of aflatoxins as environmental contaminants in soil, which represents their natural habitat, remains a relatively unexplored area in aflatoxin research. This knowledge gap can be attributed, in part, to the methodological challenges associated with detecting aflatoxins in soil. The main objective of this PhD project was to develop and validate an analytical method that allows monitoring of aflatoxins in soil, and to scrutinize the mechanisms and extent of occurrence of aflatoxins in soil, the processes governing their dissipation, and their impact on the soil microbiome and associated soil functions. By utilizing an efficient extraction solvent mixture comprising acetonitrile and water, coupled with an ultrasonication step, recoveries of 78% to 92% were achieved, enabling reliable determination of trace levels in soil ranging from 0.5 to 20 µg kg⁻¹. However, in a field trial conducted in a high-risk model region for aflatoxin contamination in Sub-Saharan Africa, no aflatoxins were detected using this procedure, underscoring the complexities of field monitoring. These challenges encompassed rapid degradation, spatial heterogeneity, and seasonal fluctuations in aflatoxin occurrence. Degradation experiments revealed the importance of microbial and photochemical processes in the dissipation of aflatoxins in soil, with half-lives of 20–65 days. The rate of dissipation was found to be influenced by soil properties, most notably soil texture and the initial concentration of aflatoxins in the soil. An exposure study provided evidence that aflatoxins do not pose a substantial threat to the soil microbiome, encompassing microbial biomass, activity, and catabolic functionality.
This was particularly evident in clayey soils, where the toxicity of aflatoxins diminished significantly due to their strong binding to clay minerals. However, several critical questions remain unanswered, emphasizing the necessity for further research to attain a more comprehensive understanding of the ecological importance of aflatoxins. Future research should prioritize the challenges associated with field monitoring of aflatoxins, elucidate the mechanisms responsible for the dissipation of aflatoxins in soil during microbial and photochemical degradation, and investigate the ecological consequences of aflatoxins in regions heavily affected by aflatoxins, taking into account the interactions between aflatoxins and environmental and anthropogenic stressors. Addressing these questions contributes to a comprehensive understanding of the environmental impact of aflatoxins in soil, ultimately contributing to more effective strategies for aflatoxin management in agriculture.
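The reported half-lives translate directly into first-order dissipation estimates. The sketch below assumes simple first-order kinetics, which is the usual reading of a half-life; the half-life range is from the abstract, while the time point is an illustrative choice.

```python
import math

# First-order dissipation: C(t) = C0 * exp(-k t), with k = ln(2) / t_half.
# The half-life range (20-65 days) is taken from the abstract; the 60-day
# time point below is an illustrative assumption.
def remaining_fraction(t_days, t_half_days):
    k = math.log(2) / t_half_days
    return math.exp(-k * t_days)

# After 60 days: a fast-dissipating case (t1/2 = 20 d, i.e. three half-lives)
# versus a slow one (t1/2 = 65 d)
fast = remaining_fraction(60, 20)
slow = remaining_fraction(60, 65)
print(fast, slow)
```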
In recent decades, academia has addressed a wide range of research topics in the field of ethical decision-making. Besides a great amount of research on ethical consumption, the domain of ethical investments has also increasingly moved into the focus of scholars. While most research in this area focuses on whether socially or environmentally sustainable businesses outperform traditional investments financially, or investigates the character traits and other socio-demographic factors of ethical investors, the impact of sustainable corporate conduct on the investment intentions of private investors still requires further research. Hence, we conducted two studies to shed more light on this highly relevant topic. After discussing the current state of research, in our first empirical study we explore whether, besides the traditional triad of risk, return, and liquidity, sustainability also exerts a significant impact on the willingness to invest. As hypothesized, we find that sustainability has a clear and decisive impact in addition to the traditional factors. In a consecutive study, we examine the link between sustainability and the willingness to invest in greater depth. Here, our results show that improved sustainability might not pay off in terms of investment attractiveness; conversely, however, conducting business in a non-sustainable manner certainly harms it, and this cannot be compensated even by an increased return.
As a consequence of the real estate market crash after 2008, large investors invested a significant amount of wealth in single-family houses to construct portfolios of rental dwellings whose income is securitized on capital markets. In some local housing markets, these investors own remarkable numbers of single-family houses. Furthermore, their trading activities have resulted in a new investment strategy, which exacerbates property wealth concentration and polarization. This new investment strategy and its portfolio optimization raise the question of its influence on housing markets. This paper first aims to find an optimal portfolio strategy by maximizing the expected utility of terminal wealth, adopting a stochastic model that includes a variety of economic states to estimate house prices. Second, it aims to analyze the effect of large investors on the housing market. The results show that the investment strategies of large investors depend on the balance among the economic state, maintenance costs, rental income, the interest rate, and the investors' willingness to invest in housing, and that their effect depends on the state of the economy.
Dataflow process networks (DPNs) are intrinsically data-driven, i.e., node actions are not synchronized among each other and may fire whenever sufficient input operands have arrived at a node. While the general model of computation (MoC) of DPNs does not impose further restrictions, many different subclasses of DPNs representing different dataflow MoCs have been considered over time. These classes mainly differ in the kinds of behaviors of the processes. A DPN may be heterogeneous in that different processes in the network belong to different classes of DPNs. A heterogeneous DPN can therefore be effectively used to model and to implement different components of a system with different kinds of processes and, therefore, different dataflow MoCs. This paper presents a model-based design flow based on different dataflow MoCs, including their heterogeneous combinations. In particular, it covers the automatic software synthesis of systems from DPN models. The main objective is to validate, evaluate, and compare the artifacts exhibited by different dataflow MoCs at the implementation level of systems under the supervision of a common design tool. Moreover, this work also offers an efficient synthesis method that targets and exploits heterogeneity in DPNs by generating implementations based on the kinds of behaviors of the processes. The proposed synthesis method provides a tool chain including different specialized code generators for specific dataflow MoCs, and a runtime system that finally maps models using a combination of different dataflow MoCs onto cross-vendor target hardware.
Quantum annealing (QA) is a metaheuristic for solving optimization problems in a time-efficient manner. To this end, quantum mechanical effects are used to compute and evaluate many possible solutions of an optimization problem simultaneously. Recent studies have shown the potential of QA for solving such complex assignment problems within milliseconds. This also applies to the field of job shop scheduling, where the existing approaches, however, focus on small problem sizes. To assess the full potential of QA in this area for industry-scale problem formulations, it is necessary to consider larger problem instances and to evaluate the potential of computing these job shop scheduling problems while finding a near-optimal solution in a time-efficient manner. Consequently, this paper presents a QA-based job shop scheduling approach. In particular, flexible job shop scheduling problems of various sizes are computed with QA, demonstrating the efficiency of the approach regarding scalability, solution quality, and computing time. For the evaluation of the proposed approach, the solutions are compared in a scientific benchmark with state-of-the-art algorithms for solving flexible job shop scheduling problems. The results indicate that QA has the potential for solving flexible job shop scheduling problems in a time-efficient manner. Even large problem instances can be computed within seconds, which offers the possibility of application in industry.
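Quantum annealers take problems in QUBO form: scheduling constraints become quadratic penalty terms over binary variables. The toy instance below encodes a two-job, two-slot assignment with one-hot and conflict penalties and solves it by exhaustive enumeration instead of an annealer; the instance and penalty weights are illustrative assumptions, not taken from the paper's benchmark.

```python
import itertools

# Toy QUBO sketch of a scheduling assignment: binary variable x[j][t] = 1
# iff job j runs in time slot t. One-hot penalties force exactly one slot
# per job; a conflict penalty forbids two jobs sharing a slot (a stand-in
# for machine-capacity constraints). Solved by brute force here; a quantum
# annealer would minimize the same energy function.
JOBS, SLOTS = 2, 2
P = 10.0  # penalty weight (illustrative)

def energy(bits):
    x = [bits[j * SLOTS:(j + 1) * SLOTS] for j in range(JOBS)]
    e = 0.0
    for j in range(JOBS):                  # one-hot: (sum_t x[j][t] - 1)^2
        e += P * (sum(x[j]) - 1) ** 2
    for t in range(SLOTS):                 # conflict: both jobs in slot t
        e += P * x[0][t] * x[1][t]
    e += 1.0 * x[0][1] + 1.0 * x[1][0]     # small slot-preference costs
    return e

best = min(itertools.product([0, 1], repeat=JOBS * SLOTS), key=energy)
print(best, energy(best))
```

The minimum-energy bitstring assigns job 0 to slot 0 and job 1 to slot 1, satisfying all hard constraints at zero penalty.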
We present results from a contact simulation showing the surface alteration of an axial bearing caused by unwanted electric current passage under mixed friction. The model developed for this purpose accounts not only for the surface roughness but also for the nonlinear material behavior of the rolling-bearing steel. In contrast to known modeling methods for similar problems, a novel approach based on a coupled Eulerian-Lagrangian finite element simulation is developed here. Using experimentally damaged surfaces as input, the model provides insights into the bearing-ratio behavior and further mechanical characteristics resulting from combined mechanical and electrical loads.
Continuous-time regime-switching models are a very popular class of models for financial applications. In this work the so-called signal-to-noise matrix is introduced for hidden Markov models where the switching is driven by an unobservable Markov chain. Its relations to filtering, i.e. state estimation of the chain given the available observations, and portfolio optimization are investigated. A convergence result for the filter is derived: The filter converges to its invariant distribution if the eigenvalues of the signal-to-noise matrix converge to zero. This matrix is then also used to prove a mutual fund representation for regime-switching models and a corresponding market reduction which is consistent with filtering and portfolio optimization. Two canonical cases for the reduction are analyzed in more detail, the first based on the market regimes and the second depending on the eigenvalues. These considerations are presented both for observable and unobservable Markov chains. The results are illustrated by numerical simulations.
In this paper we investigate a utility maximization problem with drift uncertainty in a multivariate continuous-time Black–Scholes type financial market which may be incomplete. We impose a constraint on the admissible strategies that prevents a pure bond investment, and we include uncertainty by means of ellipsoidal uncertainty sets for the drift. Our main results consist, firstly, in finding an explicit representation of the optimal strategy and the worst-case parameter and, secondly, in proving a minimax theorem that connects our robust utility maximization problem with the corresponding dual problem. Thirdly, we show that, as the degree of model uncertainty increases, the optimal strategy converges to a generalized uniform diversification strategy.
The dynamic behaviour of unsaturated sand-rubber chips mixtures at various gravimetric contents is evaluated through an experimental study comprising resonant column tests in a fixed-free device. Chips were irregularly shaped, with dimensions ranging from 5 to 14 mm. Three types of sand with different gradations have been considered. The relative density amounted to 0.5 for all specimens. Due to the large size of the chips, the diameter of the specimens had to be equal to 100 mm, which in turn required a re-calibration of the device assuming a frequency-dependent drive head inertia. The effects of confining stress, rubber chips content, and sand gradation on shear modulus and damping ratio are determined over wide ranges of shear strain. At small strains, as known for sands, increasing the confining stress stiffens the mixtures. Increasing the rubber chips content significantly reduces the shear modulus and increases the damping ratio. At higher strains, increasing the confining stress or the rubber content flattens the reduction of the shear modulus with strain. Damping at high strains does not show any appreciable dependence on rubber content. Unloading–reloading sequences are used to assess shear modulus degradation and threshold strains. Finally, design equations are derived from the test results to predict the dynamic response of the composite material.
Many practical optimisation problems have conflicting objectives, which should be addressed by multi-criteria optimisation (MCO), i.e. by determining the set of best compromises, the Pareto set (PS), along with its picture in parameter space (PSPS). In previous work on low-dimensional MCO problems, we have found characteristic topological features of the PS and PSPS, which depend on the dimensionality of the parameter space M and the objective space N. E.g., M = 2 and N = 3 yields triangles with needle-like extensions. The reasons for these topological features were unknown so far. Here, we show that they are to be expected if all objective functions of the MCO satisfy two conditions: (a) they can be approximated by quadratic functions and (b) one of the eigenvalues of the Hessian matrix evaluated at the function’s minimum is small compared to the other eigenvalues. Objective functions which meet conditions (a) and (b) have a valley-like topology, for which the valley lies in the direction of the eigenvector corresponding to the lowest eigenvalue. The PSPS can be estimated by starting at the minimum of an objective function, following the valley, and combining these lines for all objective functions. The PS is obtained by evaluating the objective functions. We believe that the conditions (a) and (b) are met in many practical problems and discuss an example from molecular modelling. The improved understanding of the features of these MCO problems opens the route for designing methods for swiftly finding estimates of their PS and PSPS.
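The valley picture can be reproduced numerically with two quadratic objectives whose Hessians each have one small eigenvalue (condition (b) above). The sketch below uses M = 2 parameters and N = 2 objectives, grid-samples the parameter space, and keeps the non-dominated points, i.e. an estimate of the PSPS; the concrete objective functions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Two quadratic "valley" objectives: each Hessian has eigenvalues (2, 0.02),
# so each objective is flat along one direction through its minimum.
def f1(p):  # minimum at (1, 0), valley along the y-direction
    return (p[..., 0] - 1) ** 2 + 0.01 * p[..., 1] ** 2

def f2(p):  # minimum at (0, 1), valley along the x-direction
    return 0.01 * p[..., 0] ** 2 + (p[..., 1] - 1) ** 2

g = np.linspace(-0.5, 1.5, 21)
P = np.array([(a, b) for a in g for b in g])   # grid in parameter space
F = np.stack([f1(P), f2(P)], axis=1)           # image in objective space

# Non-dominated filter: keep points no other point weakly improves on
keep = [i for i in range(len(F))
        if not any((F[j] <= F[i]).all() and (F[j] < F[i]).any()
                   for j in range(len(F)))]
pareto_params = P[keep]   # estimate of the PSPS
pareto_front = F[keep]    # estimate of the PS
```

The retained parameter points cluster along the two valleys connecting the individual minima, which is exactly the line-following estimate of the PSPS described in the abstract.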
This contribution defends two claims. The first is about why thought experiments are so relevant and powerful in mathematics. Heuristics and proof are not strictly separated and, therefore, the relevance of thought experiments is not confined to heuristics. The main argument is based on a semiotic analysis of how mathematics works with signs. Seen in this way, formal symbols do not eliminate thought experiments (replacing them by something rigorous), but rather provide a new stage for them. The formal world resembles the empirical world in that it calls for exploration and offers surprises. This presents a major reason why thought experiments occur both in the empirical sciences and in mathematics. The second claim is about a looming aporia that signals the limitation of thought experiments. This aporia arises when mathematical arguments cease to be fully accessible, thus violating a precondition for experimenting in thought. The contribution focuses on the work of Vladimir Voevodsky (1966–2017, Fields Medalist in 2002), who argued that even very pure branches of mathematics cannot avoid inaccessibility of proof. Furthermore, he suggested that computer verification is a feasible path forward, but only if proof is not modeled in terms of formal logic.
Algorithmic systems are increasingly used by state agencies to inform decisions about humans. They produce scores on risks of recidivism in criminal justice, indicate the probability for a job seeker to find a job in the labor market, or calculate whether an applicant should get access to a certain university program. In this contribution, we take an interdisciplinary perspective, provide a bird's-eye view of the different key decisions that are to be taken when state actors decide to use an algorithmic system, and illustrate these decisions with empirical examples from case studies. Building on these insights, we discuss the main pitfalls and promises of the use of algorithmic systems by the state, focusing on four levels: the most basic question of whether an algorithmic system should be used at all, the regulation and governance of the system, issues of algorithm design, and, finally, questions related to the implementation of the system on the ground and the human-machine interaction that comes with it. Based on our assessment of the advantages and challenges that arise at each of these levels, we propose a set of crucial questions to be asked when such intricate matters are addressed.
This study investigated the universality of emotional prosody in the perception of discrete emotions when semantics is not available. In two experiments, the perception of emotional prosody in Hebrew and German was investigated with listeners who speak one of the languages but not the other. Having a parallel tool in both languages allowed us to conduct controlled comparisons. In Experiment 1, 39 native German speakers with no knowledge of Hebrew and 80 native Israeli speakers rated Hebrew sentences spoken with four different emotional prosodies (anger, fear, happiness, sadness) or neutrally. The Hebrew version of the Test for Rating of Emotions in Speech (T-RES) was used for this purpose. Ratings indicated participants' agreement on how much each sentence conveyed each of four discrete emotions (anger, fear, happiness, and sadness). In Experiment 2, 30 native speakers of German and 24 Israeli native speakers of Hebrew who had no knowledge of German rated sentences of the German version of the T-RES. Based only on the prosody, German-speaking participants were able to accurately identify the emotions in the Hebrew sentences, and Hebrew-speaking participants were able to identify the emotions in the German sentences. In both experiments, ratings between the groups were similar. These findings show that individuals are able to identify emotions in a foreign language even if they do not have access to semantics. This ability goes beyond identification of the target emotion; similarities between languages exist even for "wrong" perceptions. This adds to accumulating evidence in the literature on the universality of emotional prosody.
Alongside a populist understanding of democracy, a majoritarian relativism also forms part of German political culture. This article argues, and provides evidence, that these are two distinct yet partly related conceptions of democracy and that it is important to keep them apart. Like populism, majoritarian relativism expects the interests of the population to be realized as directly and faithfully as possible, but it explicitly does not hold on to the idea of a true and unified popular will. While both are positively related to support for the right-wing populist party Alternative für Deutschland (AfD), only populism shows a negative relationship with optimizing problem-solving through artificial intelligence in political leadership; majoritarian relativism, by contrast, even shows a positive relationship. It is also noteworthy that majoritarian relativism predicts support for the AfD better than a populist understanding of democracy does. The article thus makes an important contribution to the debate on populism as a component of political culture in Germany.
Performance of pure OME and various HVO–OME fuel blends as alternative fuels for a diesel engine
(2022)
Since the potential for reducing CO2 emissions from fossil fuels is limited, suitable CO2-neutral fuels are required for applications which cannot reasonably be electrified and will therefore still rely on internal combustion engines in the future. Potential fuel candidates for CI engines are either paraffinic diesel fuels or new fuels like POMDME (polyoxymethylene dimethyl ether, "OME" for short). In addition, blends of these two fuel types might be of interest. While many studies have been conducted on OME blends with fossil diesel fuel, the research on HVO–OME blends has been less extensive to date.
In the current work, pure OME and HVO–OME blends are investigated in a single-cylinder research engine. The test results of the various fuel blend formulations are compared and evaluated, particularly with regard to soot-NOx trade-off behavior. The primary objective of the study is to examine whether the major potential of blending these two fuels is already largely exploited at low OME content, or if significant additional emission reduction potential can still be found with higher content blends, but still without the need to switch to pure OME operation. Furthermore, the fuel blend which is best suited for the realization of an ultra-low emission concept under the current technical conditions should be identified. In addition, three different injector designs were tested for operation on pure OME3-5, differing both in hydraulic flow and in the number of injection holes as well as their layout. The optimum configuration is evaluated with regard to emissions, normalized heat release and indicated efficiency.
Due to an excellent ratio of high strength to low density, as well as a strong corrosion resistance, the titanium alloy Ti-6Al-4V is widely used in industrial applications. However, Ti-6Al-4V is also a difficult-to-cut material because of its low thermal conductivity and high chemical reactivity, especially at elevated temperatures. As a result, machining Ti-6Al-4V is characterized by high thermal loads and a rapidly progressing thermo-chemically induced tool wear. An adequate cooling strategy is essential to reduce the thermal load and therefore tool wear. Sub-zero metalworking fluids (MWF), which are applied in the liquid state but at supply temperatures below the ambient temperature, offer great potential to significantly reduce the thermal load when machining Ti-6Al-4V. Within the presented research, systematically varied sub-zero cooling strategies are applied when milling Ti-6Al-4V. The influences of the supply temperature, as well as the volume flow and the outlet velocity, are investigated aiming at a reduction of the thermal loads that occur during milling. The milling experiments were recorded using high-speed cameras in order to characterize the impact of the cooling strategies and resolve the behavior of the MWF. Additionally, the novel sub-zero cooling approach is compared to a cryogenic CO2 cooling strategy. The results show that the optimized sub-zero cooling strategy led to a sufficient reduction of the thermal loads and outperforms the cryogenic cooling even at elevated CO2 mass flows.
We consider the optimization problem of a large insurance company that wants to maximize the expected utility of its surplus through the optimal control of the proportional reinsurance. In addition, the insurer is exposed to the risk of default of its reinsurer at the worst possible time, a setting that is closely related to a scenario of the Swiss Solvency Test.
There have been interesting interactions between philosophical reflections, technical developments, and the work of artists, poets, and designers, starting especially in the 1950s and 1960s with a stimulating cell in Stuttgart and Ulm in Germany that spread into mutual international interactions. The paper aims to describe the philosophical background of Max Bense, with his research on the intellectual history of mathematics and the emerging studies on technology and cybernetics. Together with communication theories and semiotics, new aesthetics such as cybernetic aesthetics were worked out, based on the notions of information and sign. This background stimulated international students, artists, and researchers from different creative disciplines to pursue methodical approaches leading to the first computer art experiments. The interrelations in these fields with Latin America are the focus of these studies. Students, artists, and poets from Latin America, especially Brazil, came to Germany for studies and exhibitions in the creative scientific cell around Max Bense. Some of them stayed in Europe, but the exchange also developed in the opposite direction, with travel to and work in Latin America. Some of those fruitful international interrelations will be described and reflected upon.
In the quest for the climate-neutral and ultra-low-emission vehicle powertrains of the future, synthetic fuels produced from renewable sources will play a major role. Polyoxymethylene dimethyl ethers (POMDME or “OME”) produced from renewable hydrogen are a very promising candidate for zero-impact emissions in future CI engines. To optimize the utilization of these fuels in terms of efficiency, performance, and emissions, it is not only necessary to adapt the combustion parameters, but especially to optimize the injection and mixture formation process. In the present work, the spray break-up behavior and mixture formation of OME fuel are investigated numerically in 3D CFD and validated against experimental data from optical measurements in a high-pressure/high-temperature chamber using Schlieren and Mie scattering. For comparison, the same operating points were measured in the optical chamber using conventional diesel fuel, and the CFD modeling was optimized based on these data. To model the spray break-up phenomena reliably, the primary break-up model according to Fischer is used, taking into account the nozzle-internal flow in a detailed calculation of the disperse droplet phase. As OME has not yet been investigated very intensively with respect to its physico-chemical properties, chemical analyses of the substance properties were carried out to capture the most important parameters correctly in the simulation. With this approach, the results of the optical spray measurements could be reproduced well by the numerical model for the cases studied here, laying the basis for further numerical studies of OME sprays, including real engine operation.
Synthetic Biology is revolutionizing biological research by introducing principles of mechanical engineering, including the standardization of genetic parts and standardized part assembly routes. Both are realized in the Modular Cloning (MoClo) strategy. MoClo allows for the rapid and robust assembly of individual genes and multigene clusters, enabling iterative cycles of gene design, construction, testing, and learning within a short time. This is particularly true if the generation times of the target organisms are short, as is the case for the unicellular green alga Chlamydomonas reinhardtii. Testing a gene of interest in Chlamydomonas with MoClo requires two assembly steps, one for the gene of interest itself and another to combine it with a selection marker. To reduce this to a single assembly step, we constructed five new destination vectors. They contain genes conferring resistance to commonly used antibiotics in Chlamydomonas and a site for the direct assembly of basic genetic parts. The vectors employ red/white color selection and, therefore, do not require costly compounds like X-gal and IPTG. mCherry expression is used to demonstrate the functionality of these vectors.
In this Master's thesis, the feasibility and benefits of different acquisition time intervals, as well as of a quantitative analysis of blood flow values, were evaluated for activation studies of the auditory system in a small number of CI users with differing hearing performance and thus differing expected activation. To this end, the PET data of so-called “good performers” and “poor performers” were first analyzed individually. Performance was assessed via speech understanding in noise (HSM sentence test), in which the subjects understand either ≥ 70 % or ≤ 30 % [13]. A differentiation or comparative evaluation between the groups is not carried out in this thesis. However, in order to give the results broader validity, it is advantageous to include a spectrum of different CI users with differing hearing performance.
The analysis of the acquisition time interval was performed using Statistical Parametric Mapping (SPM), and the blood flow quantification using the PMOD software. The values determined individually with these methods were subsequently subjected to further statistical analysis.
This essay addresses the challenges that learners and teachers in corporate education face as a result of virtual learning and training scenarios, which have been driven in particular by the Corona pandemic. Attention is directed, among other things, to the questions of why trainers prefer face-to-face training and why virtual participation in seminars or trainings at the workplace is often fraught with obstacles. Ultimately, the essay also deals with the partly inseparable connection between places and certain activities, and with the idea that virtual space was already a topic in ancient Greece. It also takes into account that the connection between places and activities influences the sense of role of, in this case, internal trainers.
Many real-world optimization and decision-making problems comprise several, partly conflicting objective functions. The English saying “Quality has its price” is just as true on a large scale as it is in the private sphere; therefore, quality and price are a typical pair of conflicting objective functions that is very common in applications. Yet, in industrial applications, both quality and cost must be understood in their specific context and differ depending on whether a transportation, a production, or a planning problem is considered. Other objective functions that are receiving increasing attention in real-world decision-making situations are, for example, robustness, time, sustainability, adaptability, and longevity.
The electrochemical process of microbial electrosynthesis (MES) is used to drive the metabolism of electroactive microorganisms for the production of valuable chemicals and fuels. MES combines the advantages of electrochemistry, engineering, and microbiology and offers alternative production processes based on renewable raw materials and regenerative energies. In addition to the reactor concept and electrode design, the biocatalysts used have a significant influence on the performance of MES. Both pure and mixed cultures can be used as biocatalysts. When mixed cultures are used, interactions between organisms, such as direct interspecies electron transfer (DIET) or syntrophic interactions, influence the performance in terms of productivity and the product range of MES. This review focuses on the comparison of pure and mixed cultures in microbial electrosynthesis. The performance indicators, such as productivities and coulombic efficiencies (CEs), for both approaches are discussed. Typical products in MES are methane and acetate; therefore, these processes are the focus of this review. In general, most studies have used mixed cultures as biocatalysts, as superior performance of mixed cultures has been observed for both products. When comparing pure and mixed cultures in equivalent experimental setups, a 3-fold higher methane and a nearly 2-fold higher acetate production rate can be achieved in mixed cultures. However, studies of pure-culture MES for methane production have shown some improvement through reactor optimization and operational mode, reaching performance indicators similar to those of mixed-culture MES. Overall, the review gives an overview of the advantages and disadvantages of using pure or mixed cultures in MES.
In this paper, we devise a stochastic asset–liability management (ALM) model for a life insurance company and analyze its influence on the balance sheet within a low-interest rate environment. In particular, a flexible procedure for the generation of insurers’ compressed contract portfolios that respects the given biometric structure is presented, extending the existing literature on stochastic ALM modeling. The introduced balance sheet model is in line with the principles of double-entry bookkeeping as required in accounting. We further focus on the incorporation of new business, i.e. the addition of newly concluded contracts and thus of newly insured persons in each period. Efficient simulations are obtained by integrating new policies into existing cohorts according to contract-related criteria. We provide new results on the consistency of the balance sheet equations. In extensive simulation studies for different scenarios regarding the business form of today’s life insurers, we use the model to analyze the long-term behavior and the stability of the components of the balance sheet for different asset–liability approaches. Finally, we investigate the robustness of two prominent investment strategies against crashes in the capital markets, which lead to extreme liquidity shocks and thus threaten the insurer’s financial health.
Lattice Boltzmann method for antiplane shear deformation: non-lattice-conforming boundary conditions
(2022)
In this work, two different approaches to treat boundary conditions in a lattice Boltzmann method (LBM) for the wave equation are presented. We interpret the wave equation as the governing equation of the displacement field of a solid under simplified deformation assumptions, but the algorithms are not limited to this interpretation. A feature of both algorithms is that the boundary does not need to conform with the discretization, i.e., the regular lattice. This allows for a larger flexibility regarding the geometries that can be handled by the LBM. The first algorithm aims at determining the missing distribution functions at boundary lattice points in such a way that a desired macroscopic boundary condition is fulfilled. The second algorithm is only available for Neumann-type boundary conditions and considers a balance of momentum for control volumes on the mesoscopic scale, i.e., at the scale of the lattice spacing. Numerical examples demonstrate that the new algorithms indeed improve the accuracy of the LBM compared to previous results and that they are able to model boundary conditions for complex geometries that do not conform with the lattice.
Irrelevant speech impairs serial recall of verbal but not spatial items in children and adults
(2022)
Immediate serial recall of visually presented items is reliably impaired by task-irrelevant speech that the participants are instructed to ignore (“irrelevant speech effect,” ISE). The ISE is stronger with changing speech tokens (words or syllables) when compared to repetitions of single tokens (“changing-state effect,” CSE). These phenomena have been attributed to sound-induced diversions of attention away from the focal task (attention capture account), or to specific interference of obligatory, involuntary sound processing with either the integrity of phonological traces in a phonological short-term store (phonological loop account), or the efficiency of a domain-general rehearsal process employed for serial order retention (changing-state account). Aiming to further explore the role of attention, phonological coding, and serial order retention in the ISE, we analyzed the effects of steady-state and changing-state speech on serial order reconstruction of visually presented verbal and spatial items in children (n = 81) and adults (n = 80). In the verbal task, both age groups performed worse with changing-state speech (sequences of different syllables) when compared with steady-state speech (one syllable repeated) and silence. Children were more impaired than adults by both speech sounds. In the spatial task, no disruptive effect of irrelevant speech was found in either group. These results indicate that irrelevant speech evokes similarity-based interference, and thus pose difficulties for the attention-capture and the changing-state account of the ISE.
Cold plasma is a partially ionized state of matter that unites high reactivity and mild conditions. Therefore, cold plasma reactors are intriguing for reaction engineering. In this work, a laboratory scale dielectric barrier discharge (DBD) cold plasma reactor was designed, set up, and used for studying the influence of the specific energy input (SEI) on the product spectrum of the partial oxidation of methane. In total, 23 experiments were carried out near ambient conditions with a molar reactant ratio of methane to oxygen of 2:1 at SEI between 0.3 and 6.0 J cm−3. The feed also contained argon at a mole fraction of 0.75 mol mol−1. The product stream was split into a fraction that was condensed in a cold trap and the remaining gaseous fraction. The latter was analyzed at-line in a gas chromatograph equipped with a dual column and two carrier gases. The condensed fraction was analyzed by qualitative and quantitative 1H and 13C NMR spectroscopy, Karl Fischer titration, and sodium sulfite titration. In the product stream, 16 components were identified and quantified: acetic acid, acetone, carbon dioxide, carbon monoxide, ethanol, ethane, ethene, ethylene glycol, formaldehyde, formic acid, hydrogen, methanol, methyl acetate, methyl hydroperoxide, methyl formate, and water. A univariant influence of the SEI on the conversions of methane and oxygen and the selectivities to the products was observed. The experimental results provided here are an asset for developing reaction kinetic models of the partial oxidation of methane in DBD plasma reactors.
Interview with Frank Petry on “Digital Entrepreneurship: Opportunities, Challenges, and Impacts”
(2022)
Frank Petry is a stalwart of Germany's startup scene. He is a serial founder and serial investor (e.g., Ticketmaster, Expedia, Lending Tree, Web.de, ESCOM), a partner and member of the Advisory Board at Blue Lake VC, as well as a partner, mentor, and advisory board member at the Baltic Sandbox Accelerator. Additionally, he is the CEO of PECON (Consulting) and Thundermountain (VC, Accelerator, Corporate innovation).
Microbiologically induced calcium carbonate precipitation (MICP) is a technique that has received a lot of attention in the field of geotechnology in the last decade. It has the potential to provide a sustainable and ecological alternative to the conventional consolidation of minerals, for example with cement. Among the variety of microbiological metabolic pathways that can induce calcium carbonate (CaCO3) precipitation, ureolysis has become established as the most commonly used method. To better understand the mechanisms of MICP and, based on this understanding, to develop new processes and optimize existing ones, ureolytic MICP is the subject of intensive research. The interplay of biological and civil engineering aspects shows how interdisciplinary research needs to be to advance the potential of this technology. This paper describes and critically discusses, based on the current literature, the key influencing factors involved in the cementation of sand by ureolytic MICP. Due to the complexity of MICP, these factors often influence each other, making it essential for researchers from all disciplines to be aware of these factors and their interactions. Furthermore, this paper discusses the opportunities and challenges for future research in this area to provide impetus for studies that can further advance the understanding of MICP.
Indentation and Scratching with a Rotating Adhesive Tool: A Molecular Dynamics Simulation Study
(2022)
For the specific case of a spherical diamond nanoparticle with 10 nm radius rolling over a planar Fe surface, we employ molecular dynamics simulation to study the processes of indentation and scratching. The particle is rotating (rolling). We focus on the influence of the adhesion force between the nanoparticle and the surface on the damage mechanisms on the surface; the adhesion is modeled by a pair potential with an arbitrarily prescribed value of the adhesion strength. With increasing adhesion, the following effects are observed. The load needed for indentation decreases and so does the effective material hardness; this effect is considerably more pronounced than for a non-rotating particle. During scratching, the tangential force, and hence the friction coefficient, increase. The torque needed to keep the particle rolling adds to the total work for scratching; however, for a particle rolling without slip on the surface the total work is minimal. In this sense, a rolling particle induces the most efficient scratching process. For both indentation and scratching, the length of the dislocation network generated in the substrate decreases. After leaving the surface, the particle is (partially) covered with substrate atoms and the scratch groove is roughened. We demonstrate that these effects are based on substrate atom transport under the rotating particle from the front towards the rear; this transport already occurs for a repulsive particle but is severely intensified by adhesion.
Additive manufacturing (AM) enables the production of components with a high degree of individualization at constant manufacturing effort, which is why it is increasingly applied in industrial processes. However, additively produced surfaces do not meet the requirements for functional surfaces, which is why subsequent machining is mandatory for most AM workpieces. Furthermore, the performance of many functional surfaces can be enhanced by microstructuring. The combination of AM and subtractive processes is referred to as hybrid manufacturing. In this paper, the hybrid manufacturing of AISI 316L is investigated. The two AM technologies laser-based powder bed fusion (L-PBF) and high-speed laser directed energy deposition (HS L-DED) are used to produce workpieces that are subsequently machined by micro milling (tool diameter d = 100 µm). The machining results were evaluated based on tool wear, burr formation, process forces, and the generated topography. These indicated differences in the machinability of materials produced by L-PBF and HS L-DED, which were attributed to different microstructural properties.
First, essential m-dissipativity of an infinite-dimensional Ornstein-Uhlenbeck operator N, perturbed by the gradient of a potential, on a domain FC_b^∞ of finitely based, smooth and bounded functions, is shown. Our considerations allow unbounded diffusion operators as coefficients. We derive corresponding second-order regularity estimates for solutions f of the Kolmogorov equation αf − Nf = g, α ∈ (0, ∞), generalizing some results of Da Prato and Lunardi. Second, we prove essential m-dissipativity for generators (L_Φ, FC_b^∞) of infinite-dimensional degenerate diffusion processes. We emphasize that the essential m-dissipativity of (L_Φ, FC_b^∞) is useful to apply the general resolvent methods developed by Beznea, Boboc and Röckner in order to construct martingale/weak solutions to infinite-dimensional non-linear degenerate stochastic differential equations. Furthermore, the essential m-dissipativity of (L_Φ, FC_b^∞) and (N, FC_b^∞), as well as the regularity estimates, are essential to apply the general abstract Hilbert space hypocoercivity method of Dolbeault, Mouhot, Schmeiser and of Grothaus, Stilgenbauer, respectively, to the corresponding diffusions.
We provide a complete elaboration of the L2-Hilbert space hypocoercivity theorem for the degenerate Langevin dynamics with multiplicative noise, studying the longtime behavior of the strongly continuous contraction semigroup solving the abstract Cauchy problem for the associated backward Kolmogorov operator. Hypocoercivity for the Langevin dynamics with constant diffusion matrix was proven previously by Dolbeault, Mouhot and Schmeiser in the corresponding Fokker–Planck framework and made rigorous in the Kolmogorov backwards setting by Grothaus and Stilgenbauer. We extend these results to weakly differentiable diffusion coefficient matrices, introducing multiplicative noise for the corresponding stochastic differential equation. The rate of convergence is explicitly computed depending on the choice of these coefficients and the potential giving the outer force. In order to obtain a solution to the abstract Cauchy problem, we first prove essential self-adjointness of non-degenerate elliptic Dirichlet operators on Hilbert spaces, using prior elliptic regularity results and techniques from Bogachev, Krylov and Röckner. We apply operator perturbation theory to obtain essential m-dissipativity of the Kolmogorov operator, extending the m-dissipativity results from Conrad and Grothaus. We emphasize that the chosen Kolmogorov approach is natural, as the theory of generalized Dirichlet forms implies a stochastic representation of the Langevin semigroup as the transition kernel of a diffusion process which provides a martingale solution to the Langevin equation with multiplicative noise. Moreover, we show that even a weak solution is obtained this way.
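Schematically, the hypocoercivity theorem elaborated here yields an exponential convergence estimate of the following standard form (the constants C and κ are generic placeholders; their explicit dependence on the diffusion coefficient matrix and the potential is what the work computes):

```latex
% Schematic L^2 hypocoercivity estimate for the Langevin semigroup (T_t)
% with invariant measure \mu: for all f \in L^2(\mu) and t \ge 0,
\left\| T_t f - \int f \,\mathrm{d}\mu \right\|_{L^2(\mu)}
  \;\le\; C\, e^{-\kappa t}\,
  \left\| f - \int f \,\mathrm{d}\mu \right\|_{L^2(\mu)},
\qquad C \ge 1,\; \kappa > 0.
```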
We examine the predictability of 299 capital market anomalies enhanced by 30 machine learning approaches and over 250 models in a dataset with more than 500 million firm-month anomaly observations. We find significant monthly (out-of-sample) returns of around 1.8–2.0%, and over 80% of the models yield returns equal to or larger than our linearly constructed baseline factor. For the best performing models, the risk-adjusted returns are significant across alternative asset pricing models, considering transaction costs with round-trip costs of up to 2% and including only anomalies after publication. Our results indicate that non-linear models can reveal market inefficiencies (mispricing) that are hard to reconcile with risk-based explanations.
The simulation of Dynamic Random Access Memories (DRAMs) on system level requires highly accurate models due to their complex timing and power behavior. However, conventional cycle-accurate DRAM subsystem models often become a bottleneck for the overall simulation speed. A promising alternative are simulators based on Transaction Level Modeling, which can be fast and accurate at the same time. In this paper we present DRAMSys4.0, which is, to the best of our knowledge, the fastest and most extensive open-source cycle-accurate DRAM simulation framework. DRAMSys4.0 includes a novel software architecture that enables a fast adaption to different hardware controller implementations and new JEDEC standards. In addition, it already supports the latest standards DDR5 and LPDDR5. We explain how to apply optimization techniques for an increased simulation speed while maintaining full temporal accuracy. Furthermore, we demonstrate the simulator’s accuracy and analysis tools with two application examples. Finally, we provide a detailed investigation and comparison of the most prominent cycle-accurate open-source DRAM simulators with regard to their supported features, analysis capabilities and simulation speed.
This article presents a methodology whereby adjoint solutions for partitioned multiphysics problems can be computed efficiently, in a way that is completely independent of the underlying physical sub-problems, the associated numerical solution methods, and the number and type of couplings between them. By applying the reverse mode of algorithmic differentiation to each discipline, and by using a specialized recording strategy, diagonal and cross terms can be evaluated individually, thereby allowing different solution methods for the generic coupled problem (for example block-Jacobi or block-Gauss-Seidel). Based on an implementation in the open-source multiphysics simulation and design software SU2, we demonstrate how the same algorithm can be applied for shape sensitivity analysis on a heat exchanger (conjugate heat transfer), a deforming wing (fluid–structure interaction), and a cooled turbine blade where both effects are simultaneously taken into account.
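The partitioned adjoint sweep described above can be illustrated with a minimal, self-contained sketch. The two "disciplines" below are hypothetical linear update maps (illustrative only, not SU2's API): the adjoint iteration seeds each block with its diagonal term ∂J/∂u_i and accumulates the transposed cross terms, in the same block-Gauss-Seidel pattern as the primal solve.

```python
import numpy as np

# Hypothetical two-discipline coupled problem (illustrative, not SU2 code):
#   u1 = G1(u2) = A1 @ u2 + b1     e.g. a "fluid" update
#   u2 = G2(u1) = A2 @ u1 + b2     e.g. a "structure" update
# Objective J = c1·u1 + c2·u2.  The coupled adjoint iterates
#   a1 = dJ/du1 + (dG2/du1)^T a2,   a2 = dJ/du2 + (dG1/du2)^T a1,
# i.e. diagonal seeds plus transposed cross terms, in Gauss-Seidel order.

rng = np.random.default_rng(0)
n = 3
A1 = 0.2 * rng.standard_normal((n, n))   # scaled so the coupling contracts
A2 = 0.2 * rng.standard_normal((n, n))
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)
c1, c2 = rng.standard_normal(n), rng.standard_normal(n)

def solve_primal(b1, b2, iters=200):
    u1, u2 = np.zeros(n), np.zeros(n)
    for _ in range(iters):               # block Gauss-Seidel on the states
        u1 = A1 @ u2 + b1
        u2 = A2 @ u1 + b2
    return u1, u2

def solve_adjoint(iters=200):
    a1, a2 = np.zeros(n), np.zeros(n)
    for _ in range(iters):               # same sweep, transposed cross terms
        a1 = c1 + A2.T @ a2
        a2 = c2 + A1.T @ a1
    return a1, a2

u1, u2 = solve_primal(b1, b2)
J = c1 @ u1 + c2 @ u2
a1, a2 = solve_adjoint()                 # a1 = dJ/db1, a2 = dJ/db2

# Finite-difference check of one sensitivity component
eps = 1e-6
db = np.zeros(n); db[0] = eps
u1p, u2p = solve_primal(b1 + db, b2)
fd = (c1 @ u1p + c2 @ u2p - J) / eps
```

For this linear toy problem the adjoint fixed point converges to the exact sensitivity dJ/db1, which the finite-difference probe confirms; in the setting of the article the transposed products are supplied by the reverse mode of algorithmic differentiation rather than by explicit matrices.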
Comparative public policy is a blooming research area. It also suffers from some curious blind spots. In this paper we discuss four of these: (1) the obsession with covariance, which means that important phenomena are ignored; (2) the lack of agency, which leads to underwhelming explanatory models; (3) the unclear universe of cases, which means the inferential value of theories and the empirical results are unclear; and (4) the focus on outputs, even though most theories contain strong assumptions about the political process leading to certain outputs. Following this discussion, we then outline how a closer integration of policy process theories may be fruitful for future research.
Algorithmic systems that provide services to people by supporting or replacing human decision-making promise greater convenience in various areas. The opacity of these applications, however, means that it is not clear how much they truly serve their users. A promising way to address the issue of possible undesired biases consists in giving users control by letting them configure a system and aligning its performance with users’ own preferences. However, as the present paper argues, this form of control over an algorithmic system demands an algorithmic literacy that also entails a certain way of making oneself knowable: users must interrogate their own dispositions and see how these can be formalized such that they can be translated into the algorithmic system. This may, however, extend already existing practices through which people are monitored and probed and means that exerting such control requires users to direct a computational mode of thinking at themselves.
In this note, we define one more way of quantizing classical systems. The quantization we consider is an analogue of the classical Jordan–Schwinger map, which has been known and used for a long time by physicists. The difference, compared to the Jordan–Schwinger map, is that we use generators of the Cuntz algebra O∞ (i.e., a countable family of mutually orthogonal partial isometries of a separable Hilbert space) as “building blocks” instead of creation–annihilation operators. The resulting scheme satisfies properties similar to Van Hove prequantization, i.e., exact conservation of Lie brackets and linearity.
Recently, phase field modeling of fatigue fracture has gained a lot of attention from many researchers, since the fatigue damage of structures is a crucial issue in mechanical design. Differing from traditional phase field fracture models, our approach considers not only the elastic strain energy and the crack surface energy; additionally, we introduce a fatigue energy contribution, caused by cyclic loading, into the regularized energy density function. In contrast to other types of fracture phenomena, fatigue damage occurs only after a large number of load cycles, which requires a large computing effort in simulations. Furthermore, the choice of the cycle number increment is usually determined by a compromise between simulation time and accuracy. In this work, we propose an efficient phase field method for cyclic fatigue crack propagation that requires only moderate computational cost without sacrificing accuracy. We divide the entire fatigue fracture simulation into three stages and apply different cycle number increments in each damage stage. The basic concept of the algorithm is to associate the cycle number increment with the damage increment of each simulation iteration. Numerical examples show that our method can effectively predict the phenomenon of fatigue crack growth and reproduce fracture patterns.
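The core idea of tying the cycle number increment to a target damage increment can be sketched with a toy model. The per-cycle damage rate law and all constants below are illustrative placeholders, not the phase field formulation of the paper:

```python
# Toy adaptive cycle-jump scheme: rather than resolving every load cycle,
# choose the cycle increment dN so that each simulation step produces a
# fixed target damage increment.  The per-cycle damage rate below is an
# illustrative stand-in that accelerates as the damage d approaches 1.

def damage_rate(d):
    """Hypothetical per-cycle damage growth rate."""
    return 1e-5 * (1.0 + 50.0 * d ** 2)

def simulate(d_target=0.01, d_fail=1.0):
    d, n_cycles, steps = 0.0, 0.0, 0
    while d < d_fail:
        rate = damage_rate(d)
        dN = d_target / rate      # large jumps while damage grows slowly
        d += rate * dN            # explicit update over the cycle block
        n_cycles += dN
        steps += 1
    return n_cycles, steps

n_cycles, steps = simulate()
# Cycle-by-cycle resolution would need on the order of n_cycles iterations
# (tens of thousands here); the adaptive scheme needs only about
# d_fail / d_target ~ 100 steps.
```

The paper's three-stage strategy goes further by prescribing different cycle number increments per damage stage, but the principle of coupling dN to the damage increment is the same.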
In a widely studied class of multi-parametric optimization problems, the objective value of each solution is an affine function of real-valued parameters. The goal is then to provide an optimal solution set, i.e., a set containing an optimal solution for each non-parametric problem obtained by fixing a parameter vector. For many multi-parametric optimization problems, however, an optimal solution set of minimum cardinality can contain super-polynomially many solutions. Consequently, no polynomial-time exact algorithms can exist for these problems even if P = NP. We propose an approximation method that is applicable to a general class of multi-parametric optimization problems and outputs a set of solutions with cardinality polynomial in the instance size and the inverse of the approximation guarantee. This method lifts approximation algorithms for non-parametric optimization problems to their parametric versions and provides an approximation guarantee that is arbitrarily close to the approximation guarantee of the approximation algorithm for the non-parametric problem. If the non-parametric problem can be solved exactly in polynomial time or if an FPTAS is available, our algorithm is an FPTAS. Further, we show that, for any given approximation guarantee, the minimum cardinality of an approximation set is, in general, not ℓ-approximable for any natural number ℓ less than or equal to the number of parameters, and we discuss applications of our results to classical multi-parametric combinatorial optimization problems. In particular, we obtain an FPTAS for the multi-parametric minimum s-t-cut problem, an FPTAS for the multi-parametric knapsack problem, as well as an approximation algorithm for the multi-parametric maximization of independence systems problem.
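The lifting idea can be illustrated with a deliberately naive one-parameter sketch: item values are affine in a parameter lam, and a non-parametric solver is invoked at several parameter values, collecting the union of the returned solutions. Note that the paper's method chooses the evaluation points adaptively so as to certify an approximation guarantee; the uniform grid and the toy knapsack instance below are only illustrative.

```python
from itertools import combinations

# Toy multi-parametric knapsack: item i has weight w_i and value
# a_i + lam * b_i, affine in the parameter lam.  We collect optimal
# solutions over a uniform parameter grid (the paper instead picks
# evaluation points adaptively to certify an approximation guarantee).

items = [  # (weight, a, b)
    (3, 4.0, 1.0),
    (4, 3.0, 5.0),
    (2, 2.5, -1.0),
    (5, 6.0, 0.5),
]
CAPACITY = 7

def best_subset(lam):
    """Exact non-parametric solver by enumeration (fine for 4 items)."""
    best, best_val = frozenset(), 0.0
    for r in range(1, len(items) + 1):
        for combo in combinations(range(len(items)), r):
            if sum(items[i][0] for i in combo) <= CAPACITY:
                val = sum(items[i][1] + lam * items[i][2] for i in combo)
                if val > best_val:
                    best, best_val = frozenset(combo), val
    return best

def solution_set(lam_lo, lam_hi, samples=21):
    """Union of optimal solutions over a uniform grid of parameter values."""
    step = (lam_hi - lam_lo) / (samples - 1)
    return {best_subset(lam_lo + k * step) for k in range(samples)}

sols = solution_set(0.0, 2.0)
print(sorted(sorted(s) for s in sols))
```

On this instance the optimal subset changes as lam grows, so the returned set contains several solutions, each optimal on part of the parameter interval.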
Wear phenomena in worm gears depend on the size of the gears. Whereas larger gears are mainly affected by fatigue wear, abrasive wear is predominant in smaller gears. In this context, a simulation model for the abrasive wear of worm gears was developed, which is based on an energetic wear equation. This approach associates wear with the solid friction energy occurring in the tooth contact. The physically based wear simulation model includes a tooth contact analysis and a tribological calculation to determine the local solid tooth friction and wear. The calculation is iterated with the modified tooth flank geometry of the worn worm wheel in order to consider the influence of wear on the tooth contact. Experimental results on worm gears are used to determine the wear model parameter and to validate the model. A simulative study for a wide range of worm gear geometries was conducted to investigate the influence of geometry and operating conditions on abrasive wear.
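The energetic wear approach can be sketched as follows: local wear depth is taken proportional to the local solid-friction energy accumulated in the tooth contact. The coefficients, units, and discretized contact data below are illustrative placeholders, not the calibrated worm gear model.

```python
# Minimal sketch of an energetic wear law: wear depth per flank point is
# proportional to the accumulated solid-friction energy density,
#   h_w = k_w * e_f * cycles,  with  e_f = mu * p * v_g * dt.
# All numbers below are illustrative, not from the validated model.

K_WEAR = 2.0e-9   # energetic wear coefficient (illustrative)
MU = 0.05         # assumed solid friction coefficient

def wear_depth(pressure, sliding_velocity, dt, cycles):
    """Accumulated wear depth at one flank point over many load cycles."""
    e_f = MU * pressure * sliding_velocity * dt   # friction energy density
    return K_WEAR * e_f * cycles

# Discretized contact line: (pressure [N/mm^2], sliding velocity [mm/s])
contact_points = [(300.0, 800.0), (450.0, 600.0), (500.0, 400.0)]
dt = 1e-3  # contact duration per mesh cycle [s]

profile = [wear_depth(p, v, dt, cycles=1e6) for p, v in contact_points]
```

In the actual simulation model, the pressure and sliding velocity fields come from the tooth contact analysis, and the computed wear depth feeds back into the flank geometry for the next iteration.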
Algorithms are increasingly used in different domains of public policy. They help humans profile the unemployed, support administrations in detecting tax fraud, and provide recidivism risk scores that judges or criminal justice managers take into account when making bail decisions. In recent years, critics have increasingly pointed to the ethical challenges of these tools and emphasized problems of discrimination, opaqueness, or accountability, and computer scientists have proposed technical solutions to these issues. In contrast to these important debates, the literature on how these tools are implemented in the actual everyday decision-making process has remained cursory. This is problematic because the consequences of ADM systems are at least as dependent on their implementation in an actual decision-making context as on their technical features. In this study, we show how the introduction of risk assessment tools in the criminal justice sector at the local level in the USA has deeply transformed the decision-making process. We argue that this is mainly due to the fact that the evidence generated by the algorithm introduces a notion of statistical prediction into a situation that was previously dominated by fundamental uncertainty about the outcome. While this expectation is supported by the case study evidence, the possibility to shift blame to the algorithm seems much less important to the criminal justice actors.
Micro milling is a very flexible micro cutting process widely deployed to manufacture miniaturized parts. However, size effects occur when downscaling cutting processes. They lead to higher mechanical loads on the tools and therefore to increased tool wear. Micro milling tools are usually made of cemented carbides due to their mechanical strength and fine grain structure. Technical ceramics as alternative tool materials offer very good mechanical properties as well, with grain sizes well below 1 μm. In conventional machining, they have proven able to reduce tool wear. To transfer these wear improvements to the micro scale, we manufactured all-ceramic micro end mills in previous studies (∅ 50 and ∅ 100 μm). Tools made from zirconia (Y-TZP) showed the sharpest cutting edges and performed best in micro milling trials amongst the substrates tested. However, the advantages of the ceramic substrate could not be utilized for the brass and titanium materials tested in those studies. Therefore, in this study the capabilities of all-ceramic micro end mills (∅ 50 μm) were investigated in different workpiece materials (1.4404, 1.7225, 3.1325 and PMMA GS). For the two steels and the aluminum alloy, the ceramic tools did not offer an improvement over the cemented carbide tools used as reference. For the thermoplastic PMMA, however, significant improvements could be achieved by utilizing the Y-TZP ceramic tools: less tool wear, lower and more stable cutting forces, and higher surface qualities.
Velocity-based training is an approach to load management in resistance training that uses the volitionally maximal mean concentric velocity against a given load to control training intensity, and the degree of intra-set concentric velocity loss to control intra-set muscular fatigue. However, the approach's inherent requirement of moving at volitionally maximal concentric velocities means that fatigue management based on relative velocity loss is not feasible when resistance training is performed at volitionally submaximal velocities. This doctoral project therefore addressed the overarching research question of the extent to which an adapted approach to velocity-based load management in resistance training, based on the minimum velocity threshold (MVT) and using a "relative stopping velocity threshold" (RSVT, calculated as a multiple of the MVT in percent) for objective autoregulation of set duration, is suitable for controlling the degree of muscular fatigue within a training set performed at volitionally submaximal concentric movement velocity.
To answer this overarching research question, an explanatory, prospective study with a quasi-experimental design was conducted. At a first session, each participant's individual one-repetition maximum (1-RM) was determined for the barbell bench press and deadlift; the actual testing took place at a second session. At this second session, one test set at volitionally maximal and one test set at volitionally submaximal concentric movement velocity were performed per exercise at a standardized intensity of 75% 1-RM, while the concentric velocity of each repetition was recorded with an inertial sensor unit in order to examine the fatigue-induced velocity loss of the final repetitions of a set taken to exhaustion.
In answer to the overarching research question, the RSVT is in principle suitable for controlling intra-set muscular fatigue in resistance training performed at volitionally submaximal concentric movement velocity. For fitness- and health-oriented individuals, a target corridor of RSVT = 171.4-186.6% MVT was derived. If a set of barbell bench press at 75% 1-RM and volitionally submaximal concentric velocity is continued until the mean concentric velocity (MV) of a repetition drops into this corridor due to fatigue, two to three further repetitions should still be possible before the point of momentary concentric muscle failure is reached. For performance-oriented, trained individuals, a target corridor of RSVT = 183.8-211.3% MVT was derived. If the measured MV of a repetition drops into this corridor due to fatigue, it can be assumed with reasonable confidence that one to two further repetitions can still be performed before momentary concentric muscle failure.
Through this extension of velocity-based training, the present dissertation provides an adapted control approach that, for the first time, makes it possible to apply velocity-based load management in resistance training meaningfully at volitionally submaximal concentric movement velocities. Owing to the study's limitations, however, further research is needed to establish the validity, transferability, and effectiveness of the MVT-based control approach.
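As a rough numerical illustration of the corridor logic described above, the following sketch converts the fitness-oriented target corridor (RSVT = 171.4-186.6 % MVT, as stated in the abstract) into absolute velocities and checks whether a measured repetition velocity has dropped into it. The MVT value and function names are hypothetical, not taken from the dissertation.

```python
def rsvt_corridor(mvt, low_pct=171.4, high_pct=186.6):
    """Absolute velocity corridor (m/s) for a given MVT; the percentage
    bounds are the fitness-oriented corridor quoted in the abstract."""
    return mvt * low_pct / 100.0, mvt * high_pct / 100.0

def in_stopping_corridor(mv, mvt):
    """True once the measured mean concentric velocity (MV) of a
    repetition has dropped into the RSVT target corridor."""
    low, high = rsvt_corridor(mvt)
    return low <= mv <= high

# Hypothetical bench-press MVT of 0.17 m/s
low, high = rsvt_corridor(0.17)          # approx. (0.291, 0.317) m/s
print(in_stopping_corridor(0.30, 0.17))  # 0.30 m/s lies inside: True
```

If a repetition's MV falls inside this corridor, the abstract's guideline says two to three repetitions remain before momentary concentric muscle failure.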
Deactivation processes of photoexcited (λex = 580 nm) phycocyanobilin (PCB) in methanol were investigated by means of UV/Vis and mid-IR femtosecond (fs) transient absorption (TA) as well as static fluorescence spectroscopy, supported by density-functional-theory calculations of three relevant ground-state conformers, PCBA, PCBB and PCBC, their relative electronic state energies and normal-mode vibrational analysis. UV/Vis fs-TA reveals time constants of 2.0, 18 and 67 ps, describing the decay of PCBB*, the decay of PCBA* and the thermal re-equilibration of PCBA, PCBB and PCBC, respectively, in line with the model by Dietzek et al. (Chem Phys Lett 515:163, 2011) and its predecessors. This model is substantiated and significantly extended, first via mid-IR fs-TA, i.e. the identification of molecular structures and their dynamics, with time constants of 2.6, 21 and 40 ps, respectively. Second, transient IR continuum absorption (CA) is observed in the region above 1755 cm−1 (CA1) and between 1550 and 1450 cm−1 (CA2), indicative of the IR absorption of highly polarizable protons in hydrogen-bonding networks (X–H…Y). This allows chromophore protonation/deprotonation processes associated with the electronic and structural dynamics to be characterized on a molecular level. The PCB photocycle is suggested to be closed via a long-lived (> 1 ns), PCBC-like (i.e. deprotonated), fluorescent species.
Optimizing a manufacturing company's in-house energy demand amidst fluctuating electricity prices, uncertainties in renewable energy supply, and volatile manufacturing planning situations is a challenging task. To tackle this issue, we develop a novel approach for scheduling the energy supply in manufacturing systems with the objective of reducing energy costs. The approach employs Quantum Annealing to determine the optimal mix of in-house generation, purchased electricity, and energy storage. The effectiveness and scalability of the approach are demonstrated by validation on two simplified use cases, showcasing its potential for solving complex energy supply optimization problems.
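The abstract does not give the concrete problem formulation, but quantum annealers minimize objectives of the generic QUBO form x^T Q x over binary vectors x. The following minimal sketch shows that form, with an exhaustive classical search standing in for the annealer; the matrix Q is a made-up toy, not the energy-scheduling model from the paper.

```python
from itertools import product

def qubo_energy(x, Q):
    """Objective x^T Q x of a QUBO over the binary vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_qubo(Q):
    """Exhaustive minimizer standing in for the quantum annealer;
    only feasible for a handful of binary variables."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(x, Q))

# Toy matrix: switching either source on lowers the cost,
# switching both on is penalized by the off-diagonal coupling.
Q = [[-1, 2],
     [0, -1]]
best = brute_force_qubo(Q)
print(best, qubo_energy(best, Q))
```

In the paper's setting, the binary variables would encode scheduling decisions (generate, purchase, or store per time slot), with prices and demand entering Q.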
Cutting of metallic glasses as a rule produces serrated and segmented chips in experiments, while atomistic simulations produce straight, unserrated chips. We demonstrate here that with increasing depth of cut, with all other parameters unchanged, chip serration starts to affect the chip morphology in molecular dynamics simulations as well. The underlying reason is shear localization in shear bands. As the distance between shear bands increases with increasing depth of cut, the surface morphology of the chip becomes increasingly segmented. Parallel shear bands formed during cutting no longer interact with each other when their separation is ≳10 nm. Our results are analogous to the so-called fold instability found when machining nanocrystalline metals.
Over the past two decades, there has been much progress on the classification of symplectic linear quotient singularities V/G admitting a symplectic (equivalently, crepant) resolution of singularities. The classification is almost complete, but it remains open for an infinite series of groups in dimension 4 (the symplectically primitive but complex imprimitive groups) and for 10 exceptional groups up to dimension 10. In this paper, we treat the remaining infinite series and prove that in all but possibly 39 cases there is no symplectic resolution. We thereby reduce the classification problem to finitely many open cases. We furthermore prove non-existence of a symplectic resolution for one exceptional group, leaving 39 + 9 = 48 open cases in total. We do not expect any of the remaining cases to admit a symplectic resolution.
Consider the primitive equations on R2 × (z0, z1) with initial data a of the form a = a1 + a2, where a1 ∈ BUCσ(R2; L1(z0, z1)) and a2 ∈ L∞σ(R2; L1(z0, z1)). These spaces are scaling-invariant and represent the anisotropic character of these equations. It is shown that for a1 arbitrarily large and a2 sufficiently small, this set of equations admits a unique strong solution which extends to a global one and is thus strongly globally well posed for these data, provided a is periodic in the horizontal variables. The approach presented depends crucially on mapping properties of the hydrostatic Stokes semigroup in the L∞(L1)-setting. It can be seen as the counterpart of the classical iteration schemes for the Navier–Stokes equations, now for the primitive equations in the L∞(L1)-setting.
In figure–ground organization, the figure is defined as a region that is both “shaped” and “nearer.” Here we test whether changes in task set and instructions can alter the outcome of the cross-border competition between figural priors that underlies figure assignment. Extremal edge (EE), a relative distance prior, has been established as a strong figural prior when the task is to report “which side is nearer?” In three experiments using bipartite stimuli, EEs competed and cooperated with familiar configuration, a shape prior for figure assignment, in a “which side is shaped?” task. Experiment 1 showed small but significant effects of familiar configuration for displays sketching upright familiar objects, although “shaped-side” responses were predominantly determined by EEs. In Experiment 2, instructions regarding the possibility of perceiving familiar shapes were added. Now, although EE remained the dominant prior, the figure was perceived on the familiar-configuration side of the border on a significantly larger percentage of trials across all display types. In Experiment 3, both task set (nearer/shaped) and the presence versus absence of instructions emphasizing that familiar objects might be present were manipulated within subjects. With familiarity thus “primed,” effects of task set emerged when EE and familiar configuration favored opposite sides as figure. Thus, changing instructions can modulate the weighting of figural priors for shape versus distance in figure assignment in a manner that interacts with task set. Moreover, we show that the influence of familiar parts emerges in participants without medial temporal lobe/perirhinal cortex brain damage when instructions emphasize that familiar objects might be present.
Cellular membranes can serve as barriers between subcellular compartments, but they can also interact to form dynamically regulated membrane contact sites between a specific pair of organelles. Focussing on plants, this article discusses local redox environments and the current knowledge on membrane contact sites as examples for the dividing and connecting functions of membranes, respectively.
Fragmentation of granular clusters may be studied by experiments and by granular-mechanics simulation. When comparing results, it is often assumed that they can be compared when scaled to the same value of E/Esep, where E denotes the collision energy and Esep is the energy needed to break every contact in the granular cluster. The ratio E/Esep ∝ v2 depends on the collision velocity v but not on the number of grains per cluster, N. We test this hypothesis using granular-mechanics simulations of silica clusters containing a few thousand grains in the velocity range where fragmentation starts. We find that a good parameter for comparing different systems is E/(Nα Esep), where α ∼ 2/3. The extra factor Nα is caused by energy dissipation during the collision, such that large clusters require a higher impact energy than small clusters to reach the same level of fragmentation. Energy is dissipated during the collision mainly by normal and tangential (sliding) forces between grains. For large values of the viscoelastic friction parameter, we find less cluster fragmentation, since fragment velocities are smaller and allow for fragment recombination.
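The scaling relation above can be made concrete: since E/Esep ∝ v2 independently of N, keeping E/(Nα Esep) fixed requires v2 ∝ Nα, i.e. larger clusters must be hit faster to reach the same fragmentation level. A small sketch under exactly these stated assumptions (the function name and reference values are illustrative):

```python
def velocity_for_equal_fragmentation(v_ref, n_ref, n, alpha=2.0 / 3.0):
    """Velocity needed for a cluster of n grains to reach the same value
    of E/(N^alpha * Esep) as a reference cluster of n_ref grains hit at
    v_ref, assuming E/Esep is proportional to v^2 independently of N."""
    return v_ref * (n / n_ref) ** (alpha / 2.0)

# An 8x larger cluster needs 8^(1/3) = 2x the impact velocity
print(velocity_for_equal_fragmentation(100.0, 1000, 8000))  # 200.0
```
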
Whether a premium adjustment is possible in German private health insurance depends on the value of the so-called triggering factor, which is computed by linear extrapolation of the claims ratios of the past three years. Its early, reliable prediction is of great importance from a risk-management perspective. We therefore examine a variety of forecasting approaches, ranging from classical time-series methods and regression to neural networks and hybrid models. Among the classical methods, regression with ARIMA errors performs best, while a neural network combined with time-series forecasting, or trained on deseasonalized and detrended data, shows the best overall performance.
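The triggering factor rests on a linear extrapolation of the claims ratios of the past three years. A minimal sketch of such an extrapolation, assuming an ordinary least-squares line through three yearly values (the numbers are made up and the actual regulatory formula is not reproduced here):

```python
def extrapolate_next(values):
    """Least-squares line through the given yearly values, evaluated one
    year ahead; a simplified stand-in for the triggering-factor
    extrapolation described in the abstract."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values)) \
        / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return intercept + slope * n  # prediction for the next year

# Perfectly linear toy data: claims ratio grows by 0.05 per year
print(extrapolate_next([0.95, 1.00, 1.05]))  # approx. 1.10
```
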
Gliomas are primary brain tumors with high invasive potential and infiltrative spread. Among them, glioblastoma multiforme (GBM) exhibits microvascular hyperplasia and pronounced necrosis triggered by hypoxia. Histological samples showing garland-like hypercellular structures (so-called pseudopalisades) centered around the occlusion site of a capillary are typical of GBM and indicate a poor survival prognosis. We propose a multiscale modeling approach in the kinetic theory of active particles framework and deduce, by an upscaling process, a reaction-diffusion model with repellent pH-taxis. We prove existence of a unique global bounded classical solution for a version of the obtained macroscopic system and investigate the asymptotic behavior of the solution. Moreover, we study two different types of scaling and compare the behavior of the obtained macroscopic PDEs by way of simulations. These show that patterns (not necessarily of Turing type), including pseudopalisades, can be formed for some parameter ranges, in accordance with the tumor grade. This is true when the PDEs are obtained via parabolic scaling (undirected tissue), while no such patterns are observed for the PDEs arising from a hyperbolic limit (directed tissue). This suggests that brain tissue might be undirected, at least as far as glioma migration is concerned. We also investigate two different ways of including cell-level descriptions of the response to hypoxia and how they are related.
Understanding human crowd behaviour has been an intriguing topic of interdisciplinary research in recent decades. Modelling crowd dynamics using differential equations is an indispensable approach to unraveling the complex dynamics involved in such interacting particle systems. Numerical simulation of pedestrian crowds via these mathematical models allows us to study realistic scenarios beyond the limitations of controlled experiments.
In this thesis, the main objective is to understand and analyse the dynamics in a domain shared by both pedestrians and moving obstacles. We model pedestrian motion by combining the social force concept with the idea of optimal path computation. This leads to a system of ordinary differential equations governing the dynamics of individual pedestrians via the interaction forces (social forces) between them. Additionally, a non-local force term involving the optimal path and desired velocity governs the pedestrian trajectory. The optimal path computation involves solving a time-independent Eikonal equation, which is coupled to the system of ODEs. A hydrodynamic model is developed from this microscopic model via the mean-field limit.
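A minimal sketch of the kind of social-force ODE step described above, with illustrative parameter values; the exact force terms, the non-local term, and the optimal-path coupling via the Eikonal equation used in the thesis are not reproduced here:

```python
import math

def social_force(xi, xj, a=2.0, b=0.3):
    """Pairwise repulsive 'social force' of magnitude a*exp(-d/b) pushing
    pedestrian i away from pedestrian j (a, b are illustrative values)."""
    dx, dy = xi[0] - xj[0], xi[1] - xj[1]
    d = math.hypot(dx, dy)  # assumed > 0: pedestrians never coincide
    mag = a * math.exp(-d / b)
    return mag * dx / d, mag * dy / d

def step(pos, vel, desired_vel, others, tau=0.5, dt=0.1):
    """One explicit Euler step of the ODE system: relaxation of the actual
    velocity toward the desired velocity (which the thesis obtains from an
    Eikonal-based optimal path) plus summed social forces from others."""
    fx = (desired_vel[0] - vel[0]) / tau
    fy = (desired_vel[1] - vel[1]) / tau
    for other in others:
        sx, sy = social_force(pos, other)
        fx, fy = fx + sx, fy + sy
    vel = (vel[0] + dt * fx, vel[1] + dt * fy)
    pos = (pos[0] + dt * vel[0], pos[1] + dt * vel[1])
    return pos, vel

# A pedestrian heading right while another stands one meter ahead
print(step((0.0, 0.0), (0.0, 0.0), (1.0, 0.0), [(1.0, 0.0)]))
```
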
To consider the interaction with moving obstacles in the domain, we model a set of kinematic equations for the obstacle motion. Two kinds of obstacles are considered: "passive" obstacles, which move along predefined trajectories and interact with pedestrians only one-way, and "dynamic" obstacles, which interact with pedestrians through feedback and whose trajectories change dynamically. The coupled model of pedestrians and obstacles is used to discern pedestrian collision avoidance behaviour in different computational scenarios in a long rectangular domain. We observe that pedestrians avoid collisions through route choice strategies that involve changes in speed and path. We extend this model to consider the interaction between pedestrians and vehicular traffic. We model the interactions of vehicles following lane traffic based on the car-following approach, and observe how vehicles decelerate and brake at pedestrian crossings depending on the right of way.
As a second objective, we study the disease contagion in moving crowds. We consider the influence of the crowd motion in a complex dynamical environment on the course of infection of pedestrians. A hydrodynamic model for multi-group pedestrian flow is derived from the kinetic equations based on a social force model. It is coupled along with an Eikonal equation to a non-local SEIS contagion model for disease spread. Here, apart from the description of local contacts, the influence of contact times has also been modelled. We observe that the nature of the flow and the geometry of the domain lead to changes in density which affect the contact time and, consequently, the rate of spread of infection.
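For orientation, here is a toy, well-mixed SEIS update of the kind underlying the contagion model; the thesis couples this non-locally to the crowd flow and to contact times, whereas the rates and the Euler discretization below are purely illustrative:

```python
def seis_step(s, e, i, beta, sigma, gamma, dt):
    """One Euler step of a well-mixed SEIS model: susceptibles become
    exposed at rate beta*S*I/N, exposed become infectious at rate sigma,
    and infectious return to susceptible at rate gamma (no immunity)."""
    n = s + e + i
    new_exposed = beta * s * i / n
    s_next = s + (-new_exposed + gamma * i) * dt
    e_next = e + (new_exposed - sigma * e) * dt
    i_next = i + (sigma * e - gamma * i) * dt
    return s_next, e_next, i_next

# Example: 10 infectious among 1000 pedestrians, one unit time step
print(seis_step(990.0, 0.0, 10.0, beta=0.3, sigma=0.2, gamma=0.1, dt=1.0))
```

Since individuals only cycle between the three compartments, the total population is conserved, which is a quick sanity check on any implementation.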
Finally, the social force model is compared to a variable-speed, rational-behaviour pedestrian model. We derive a hierarchy of the heuristics-based model from microscopic to macroscopic scales and numerically investigate these models in different density scenarios. Various numerical test cases are considered, including uni- and bi-directional flows and scenarios with and without obstacles. We observe that in low-density scenarios, collision avoidance forces arising from the behavioural heuristics give valid results, whereas in high-density scenarios repulsive force terms are essential.
The numerical simulations of all the models are carried out using a mesh-free particle method based on least-squares approximations. The mesh-free numerical framework provides an efficient and elegant way to handle complex geometric situations involving boundaries and stationary or moving obstacles.
The precise regulation of synaptic connectivity is essential for the processing of information in the brain. Any aberrant loss of synaptic connectivity due to genetic mutations will disrupt information flow in the nervous system and may represent the underlying cause of psychiatric or neurodegenerative diseases. Therefore, identification of the molecular mechanisms controlling synaptic plasticity and maintenance is essential for our understanding of neuronal circuits in development and disease.
Maturity model for determining digitalization levels within different product lifecycle phases
(2021)
Maintaining pace with ongoing changes due to digitalization is challenging for manufacturing companies. For successful implementation of digitalization, manufacturing companies must consider their existing technical systems, organizational structures, and processes, as well as social aspects. With the support of a maturity model, a company-specific digitalization level can be evaluated to provide manufacturing companies with an initial insight into their particular status quo; this can serve as a starting point for future optimization and digitalization projects. Furthermore, the results of such an analysis allow objective comparison of different areas within the company and with competitors. In this paper, the “Integrierte Arbeitssystemgestaltung in digitalisierten Produktionsunternehmen” (InAsPro) maturity model is presented, which considers the Development, Production, and Assembly product lifecycle phases, as well as Aftersales, and assesses their digitalization level focusing on the four dimensions of Technology, Organization, Social Issues, and Corporate Strategy. The maturity model’s rating scale distinguishes between four maturity levels. The results given by the InAsPro maturity model for an entire company are presented, along with those for each product lifecycle phase. Extensive descriptions for each specific maturity level are also provided.
Consider a linear realization of a matroid over a field. One associates with it a configuration polynomial and a symmetric bilinear form with linear homogeneous coefficients. The corresponding configuration hypersurface and its non-smooth locus support the respective first and second degeneracy schemes of the bilinear form. We show that these schemes are reduced and describe the effect of matroid connectivity: for (2-)connected matroids, the configuration hypersurface is integral, and the second degeneracy scheme is reduced Cohen–Macaulay of codimension 3. If the matroid is 3-connected, then the second degeneracy scheme is also integral. In the process, we describe the behavior of configuration polynomials, forms, and schemes with respect to various matroid constructions.
Loss of USP28 and SPINT2 expression promotes cancer cell survival after whole genome doubling
(2021)
Background
Whole genome doubling is a frequent event during cancer evolution and shapes the cancer genome due to the occurrence of chromosomal instability. Yet, erroneously arising human tetraploid cells usually do not proliferate due to p53 activation that leads to CDKN1A expression, cell cycle arrest, senescence and/or apoptosis.
Methods
To uncover the barriers that block the proliferation of tetraploids, we performed an RNAi-mediated genome-wide screen in a human colorectal cancer cell line (HCT116).
Results
We identified 140 genes whose depletion improved the survival of tetraploid cells and characterized in depth two of them: SPINT2 and USP28. We found that SPINT2 is a general regulator of CDKN1A transcription via histone acetylation. Using mass spectrometry and immunoprecipitation, we found that USP28 interacts with NuMA1 and affects centrosome clustering. Tetraploid cells accumulate DNA damage and loss of USP28 reduces checkpoint activation, thus facilitating their proliferation.
Conclusions
Our results indicate three aspects that contribute to the survival of tetraploid cells: (i) increased mitogenic signaling and reduced expression of cell cycle inhibitors, (ii) the ability to establish functional bipolar spindles and (iii) reduced DNA damage signaling.
This article investigates a network interdiction problem on a tree network: given a subset of nodes chosen as facilities, an interdictor may dissect the network by removing a size-constrained set of edges, striving to degrade the service of the established facilities as much as possible. Here, we consider a reachability objective function, which is closely related to the covering objective function: the interdictor aims to minimize the number of customers that are still connected to any facility after interdiction. For the covering objective on general graphs, this problem is known to be NP-complete (Fröhlich and Ruzika, "On the hardness of covering-interdiction problems", Theor. Comput. Sci., 2021). In contrast, we propose a polynomial-time algorithm to solve the problem on trees. The algorithm is based on dynamic programming and reveals the relation of this location-interdiction problem to knapsack-type problems. However, the input data for the dynamic program must be elaborately generated and relies on the theoretical results presented in this article. As a result, trees are the first known graph class that admits a polynomial-time algorithm for edge interdiction problems in the context of facility location planning.
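For small instances, the problem statement can be checked against an exhaustive baseline. The sketch below removes every edge subset up to the budget and counts nodes still reachable from a facility (treating every node as a customer, an assumption for illustration); this exponential enumeration is not the article's polynomial-time tree algorithm:

```python
from itertools import combinations

def connected_customers(nodes, edges, facilities):
    """Count nodes still reachable from any facility (union-find)."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    facility_roots = {find(f) for f in facilities}
    return sum(1 for v in nodes if find(v) in facility_roots)

def interdict_brute_force(nodes, edges, facilities, budget):
    """Minimize connected customers by removing at most `budget` edges.
    Exhaustive search, exponential in |E|; the article's dynamic program
    achieves this in polynomial time on trees."""
    best = connected_customers(nodes, edges, facilities)
    for k in range(1, budget + 1):
        for removed in combinations(edges, k):
            kept = [e for e in edges if e not in removed]
            best = min(best, connected_customers(nodes, kept, facilities))
    return best

# Path 1-2-3-4-5 with a facility at node 1: cutting edge (1,2)
# isolates all customers except the facility node itself.
path = [(1, 2), (2, 3), (3, 4), (4, 5)]
print(interdict_brute_force([1, 2, 3, 4, 5], path, {1}, budget=1))  # 1
```
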
Plasticity in metallic glasses depends on their stoichiometry. We explore this dependence by molecular dynamics simulations for the case of CuZr alloys using the compositions Cu64.5Zr35.5, Cu50Zr50, and Cu35.5Zr64.5. Plasticity is induced by nanoindentation and orthogonal cutting. Only the Cu64.5Zr35.5 sample shows the formation of localized strain in the form of shear bands, while plasticity is more homogeneous for the other samples. This feature concurs with the high fraction of full icosahedral short-range order found for Cu64.5Zr35.5. In all samples, the atomic density is reduced in the plastic zone; this reduction is accompanied by a decrease of the average atom coordination, with the possible exception of Cu35.5Zr64.5, where coordination fluctuations are high. The strongest density reduction occurs in Cu64.5Zr35.5, where it is connected with the partial destruction of full icosahedral short-range order. The difference in plasticity mechanism influences the shape of the pileup and of the chip generated by nanoindentation and cutting, respectively.
Linear evolution equations are considered usually for the time variable being defined on an interval where typically initial conditions or time periodicity of solutions is required to single out certain solutions. Here, we would like to make a point of allowing time to be defined on a metric graph or network where on the branching points coupling conditions are imposed such that time can have ramifications and even loops. This not only generalizes the classical setting and allows for more freedom in the modeling of coupled and interacting systems of evolution equations, but it also provides a unified framework for initial value and time-periodic problems. For these time-graph Cauchy problems questions of well-posedness and regularity of solutions for parabolic problems are studied along with the question of which time-graph Cauchy problems cannot be reduced to an iteratively solvable sequence of Cauchy problems on intervals. Based on two different approaches—an application of the Kalton–Weis theorem on the sum of closed operators and an explicit computation of a Green’s function—we present the main well-posedness and regularity results. We further study some qualitative properties of solutions. While we mainly focus on parabolic problems, we also explain how other Cauchy problems can be studied along the same lines. This is exemplified by discussing coupled systems with constraints that are non-local in time akin to periodicity.
In recent years, optical character recognition (OCR) systems have been used to digitally preserve historical archives. To transcribe historical archives into a machine-readable form, the documents are first scanned and then an OCR is applied. In order to digitize documents without the need to remove them from where they are archived, it is valuable to have a portable device that combines scanning and OCR capabilities. Nowadays, many commercial and open-source document digitization techniques exist, which are optimized for contemporary documents. However, they fail to give sufficient text recognition accuracy for transcribing historical documents due to the severe quality degradation of such documents. In contrast, the anyOCR system, which is designed mainly to digitize historical documents, provides high accuracy. However, this comes at the cost of high computational complexity, resulting in long runtimes and high power consumption. To tackle these challenges, we propose a low-power, energy-efficient accelerator with real-time capabilities called iDocChip, a configurable hybrid hardware-software programmable System-on-Chip (SoC) based on anyOCR for digitizing historical documents. In this paper, we focus on one of the most crucial processing steps in the anyOCR system: text and image segmentation, which makes use of a multi-resolution morphology-based algorithm. Moreover, an optimized FPGA-based hybrid architecture of this anyOCR step is presented along with its optimized software implementations. We demonstrate our results on multiple embedded and general-purpose platforms with respect to runtime and power consumption. The resulting hardware accelerator outperforms the existing anyOCR by 6.2×, while achieving 207× higher energy efficiency and maintaining its high accuracy.
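As a rough illustration of the morphology-based idea (not the anyOCR algorithm itself, which is multi-resolution and far more elaborate), one can dilate a binarized page to merge nearby foreground pixels and then classify connected components by size, calling large blobs images and small ones text. All thresholds below are made up:

```python
def dilate(img, r):
    """Binary dilation with a (2r+1)x(2r+1) square structuring element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w:
                            out[yy][xx] = 1
    return out

def segment(img, r=1, big=20):
    """Toy morphology-based segmentation: dilate to merge nearby
    foreground pixels, then label 4-connected components and call
    large ones 'image' and small ones 'text'."""
    d = dilate(img, r)
    h, w = len(d), len(d[0])
    seen = [[False] * w for _ in range(h)]
    labels = []
    for y in range(h):
        for x in range(w):
            if d[y][x] and not seen[y][x]:
                stack, size = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and d[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                labels.append("image" if size >= big else "text")
    return labels
```

On a page with a solid 5x5 halftone block and a single stray glyph pixel, this classifies the block as "image" and the glyph as "text".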
Public services in the field of drinking water protection begin with stewardship of the available water resources. In view of the changing factual and legal circumstances in the protection of drinking, mineral, and medicinal water resources (climate change, "water stress" due to qualitative and quantitative deterioration up to outright drinking water scarcity), the question arises of how the growing threats to these resources can be countered. The following article supports the thesis that even a well-intentioned amendment of state law, driven by efforts at acceleration and deregulation, that weakens risk-preventive, systematic protection components must not disregard the consideration, relevant to any balancing of interests, that such changes can lead to a systematic deterioration of the legal and environmental-planning system of control and protection, with adverse consequences for these protected resources. The importance of objective legal protection in preserving the special "natural treasures" that are the drinking, mineral, and medicinal water resources should therefore not be underestimated.
Functional illiteracy and developmental dyslexia: looking for common roots. A systematic review
(2021)
A considerable proportion of the population in more economically developed countries is functionally illiterate (i.e., low literate). Despite some years of schooling and basic reading skills, these individuals cannot properly read and write and, as a consequence, have difficulty understanding even short texts. An often-discussed approach (Greenberg et al. 1997) assumes weak phonological processing skills coupled with untreated developmental dyslexia as possible causes of functional illiteracy. Although some data suggest commonalities between low literacy and developmental dyslexia, it is still not clear whether these reflect shared consequences (i.e., cognitive and behavioral profiles) or shared causes. The present systematic review aims at exploring the similarities and differences identified in empirical studies investigating both functionally illiterate and developmentally dyslexic samples. Nine electronic databases were searched in order to identify all quantitative studies published in English or German. Although a broad search strategy and few limitations were applied, only 5 of the resulting 9269 references were found adequate. The results point to the lack of studies directly comparing functionally illiterate with developmentally dyslexic samples. Moreover, a huge variance was identified between the studies in how they approached the concept of functional illiteracy, particularly regarding critical categories such as the applied definition, terminology, criteria for inclusion in the sample, research focus, and outcome measures. The available data highlight the need for more direct comparisons in order to understand to what extent functional illiteracy and dyslexia share common characteristics.
This article presents results from an investigation of solid-lubricated rolling bearings. The bearings considered use a special, modified cage whose pockets serve, in addition to their original function of guiding the rolling elements, as a lubricant depot. The test setup and test conditions are described first, and it is shown that the setup used here exhibits considerably less scatter than the setups of previous studies. The hygroscopic behavior of the polymer compound was identified as a non-negligible source of error in the gravimetric determination of cage pocket wear; distortion of these measurements by uncontrolled moisture absorption from the environment must be prevented by a prior drying process under defined conditions. It is further shown that the cage pockets are worn both by the bearing's inner ring and by the rolling elements, and a measurement method for determining the amount of material worn away by the inner ring is presented. Surface analyses of the cage's brass structure reveal a depletion of zinc as well as a change in the surface structure; sublimation of the zinc under the test conditions is suspected as the cause. Finally, it is shown that the test temperature of 300 °C leads to shrinkage of the bearing rings; this dimensional reduction can be anticipated by tempering at 300 °C for 48 h.
Effects of the Velocity Sequences on the Friction and Wear Performance of PEEK-Based Materials
(2021)
In the present study, the effects of the sliding velocity sequences on the friction and wear properties of pure polyetheretherketone (PEEK) and a PEEK hybrid composite were studied. It is demonstrated that the tribological properties of pure PEEK and its composite depend on the velocity sequences in a complex manner within the studied range. The friction coefficient of PEEK is independent of previous velocity histories. In contrast, the testing sequence of the velocity exerts an obvious impact on the friction coefficient of the PEEK composite at slow sliding velocities. With respect to the wear performance, the specific wear rate of pure PEEK exhibits a strong dependence on the velocity sequences only at the initial pv-levels. For the PEEK composite, the specific wear rate exhibits an obvious dependence on the previous velocity levels at a low nominal pressure of 1 MPa. When the pressure is increased to 8 MPa, the impact of the velocity sequences on the wear performance becomes insignificant. In addition, the tribological properties clearly correlate with the temperature of the tribosystem.
The reason why variant selection phenomena occur in ausforming treatments is still not known. For that reason, in this work, the effect of compressive deformation on the macro- and micro-texture of a bainitic microstructure was analyzed in a medium-carbon high-silicon steel subjected to ausforming treatments, where deformation was applied at 520 °C, 400 °C and 300 °C. The as-received material presented a very weak ⟨331⟩ fiber texture along the rod axis, due to prior thermomechanical processing. For the isothermally heat-treated samples, it was detected that the bainitic ferrite inherited a ⟨100⟩ fiber texture from the ⟨110⟩ fiber texture present in the prior austenite. The intensity of this transformation texture was more pronounced as the deformation temperature decreased. Also, variant selection was examined at different scales by combining Electron-Backscattered Diffraction and X-ray Diffraction. The quantification of the fraction of crystallographic variants under certain conventions for every condition revealed variant selection in samples subjected to ausforming treatments, and these phenomena were stronger the lower the deformation temperature. Finally, some of the theories proposed so far to explain these variant selection phenomena were tested, showing that variants were not selected based on their Bain group and that their selection can be better described in terms of their belonging to packets, if these are defined according to a global reference frame. This suggests that the phenomena might be related to the effect of deformation mechanisms on the prior austenite.
The deformation of a nano-sized polycrystalline Al bar under the action of vice plates is studied using molecular dynamics simulation. Two grain sizes are considered, fine-grained and coarse-grained. Deformation in the fine-grained sample is mainly caused by grain-boundary processes which induce grain displacement and rotation. Deformation in the coarse-grained sample is caused by grain-boundary processes and dislocation plasticity. The sample distortion manifests itself by the center-of-mass motion of the grains. Grain rotation is responsible for surface roughening after the loading process. While the plastic deformation is caused by the loading process, grain rearrangements under load release also contribute considerably to the final sample distortion.
This article examines (1) the extent to which differences exist in the design of migration policy at the substate level in the Federal Republic of Germany and (2) how the policy variance between the German Länder can be explained. While existing studies have mostly examined similar questions on the basis of a single specific indicator of migration policy, such as expenditures, we propose a multidimensional measurement concept that distinguishes six dimensions of migration policy at the Länder level: (1) the type of accommodation, (2) the type of benefit provision, (3) health care, (4) admission practice, (5) deportation practice, and (6) positioning in federal politics, using the example of "safe countries of origin". To analyze possible paths explaining the differences between the Länder, we use a fuzzy-set QCA and draw on party politics, the socioeconomic context and the attitudes of the population as conditions.
Our results show that substantial differences do indeed exist between the Länder. Moreover, we find that the party-political composition of the government is, across different paths, an important condition for the presence of restrictive or permissive migration policy. In not a single causal path of the fsQCA is an explanation of restrictive or permissive migration policy possible without taking party ideology into account, a result that clearly speaks for the high relevance of the party-political composition of the government. The population's patterns of attitudes toward migration policy in the respective Land and the socioeconomic conditions, by contrast, appear to play only a subordinate role.
The plasma membrane harbors a specific set of transmembrane proteins which enable diverse cellular functions such as nutrient uptake, ion homeostasis and cellular signaling. The surface levels of these proteins need to be dynamically regulated to allow for plastic changes in cellular behaviour, e.g. upon cell stress or during neuronal communication. Endocytosis is a powerful mechanism for quickly adapting the surface proteome via protein internalization. Here, I discuss how endocytosis contributes to brain function and counteracts cell stress.
In this study we investigated parafoveal processing by L1 and late L2 speakers of English (L1 German) while reading in English. We hypothesized that L2ers would make use of semantic and orthographic information parafoveally. Using the gaze-contingent boundary paradigm, we manipulated six parafoveal masks in a sentence (Mark found th*e wood for the fire; * indicates the invisible boundary): identical word mask (wood), English orthographic mask (wook), English string mask (zwwl), German mask (holz), German orthographic mask (holn), and German string mask (kxfs). We found an orthographic benefit for L1ers and L2ers when the mask was orthographically related to the target word (wood vs. wook), in line with previous L1 research. English L2ers did not derive a benefit (rather an interference) when a non-cognate translation mask from their L1 was used (wood vs. holz), but did derive a benefit from a German orthographic mask (wood vs. holn). While unexpected, it may be that L2ers incur a switching cost when the complete German word is presented parafoveally, and derive a benefit by keeping both lexicons active when a partial German word is presented parafoveally (narrowing down lexical candidates). To the authors' knowledge there is no mention of parafoveal processing in any model of L2 processing/reading, and the current study provides the first evidence for a parafoveal non-cognate orthographic benefit (but only with partial orthographic overlap) in sentence reading for L2ers. We discuss how these findings fit into the framework of bilingual word recognition theories.
In some specific applications, there is still a need for an optimized rolling bearing with a load carrying capacity similar to that of a tapered roller bearing but with much lower friction losses. In this paper, a new model is developed using multibody simulation software, and its experimental validation is presented.
After studying many different roller geometries (in use and only patented) and based on an existing, already validated model for tapered roller bearings, a new model has been created by changing the basis of its geometry. When the rolling bearing is highly loaded, the new geometry shows lower friction losses than a conventional tapered roller bearing. To confirm this premise, as well as to validate the model, a prototype of the new optimized geometry has been manufactured and experimentally tested, together with a tapered roller bearing of the same main dimensions. The tests took place on a frictional torque test rig, where the loads and misalignments occurring on a bearing can be reproduced realistically.
The results of these tests, together with their comparison with the results of the multibody simulation models, are discussed here. It has been observed that the new model not only can be validated but also exhibits lower friction losses than a tapered roller bearing at some operating points with highly loaded bearings.
The consumption of red meat is associated with an increased risk for colorectal cancer (CRC). Multiple lines of evidence suggest that heme iron, as an abundant constituent of red meat, is responsible for its carcinogenic potential. However, the underlying mechanisms are not fully understood, and particularly the role of intestinal inflammation has not been investigated. To address this important issue, we analyzed the impact of heme iron (0.25 μmol/g diet) on the intestinal microbiota, gut inflammation and colorectal tumor formation in mice. An iron-balanced diet with ferric citrate (0.25 μmol/g diet) was used as reference. 16S rRNA sequencing revealed that dietary heme reduced α-diversity and caused a persistent intestinal dysbiosis, with a continuous increase in gram-negative Proteobacteria. This was linked to chronic gut inflammation and hyperproliferation of the intestinal epithelium, as attested by mini-endoscopy, histopathology and immunohistochemistry. Dietary heme triggered the infiltration of myeloid cells into colorectal mucosa, with an increased level of COX-2 positive cells. Furthermore, flow cytometry-based phenotyping demonstrated an increased number of T cells and B cells in the lamina propria following heme intake, while γδ-T cells were reduced in the intraepithelial compartment. Dietary heme iron catalyzed the formation of fecal N-nitroso compounds and was genotoxic in intestinal epithelial cells, yet suppressed intestinal apoptosis, as evidenced by confocal microscopy and western blot analysis. Finally, a chemically induced CRC mouse model showed persistent intestinal dysbiosis, chronic gut inflammation and increased colorectal tumorigenesis following heme iron intake. Altogether, this study unveiled intestinal inflammation as an important driver in heme iron-associated colorectal carcinogenesis.
This paper presents an iterative finite element (FE)–based method to calculate the gravity-free shape of nonrigid parts from an optical measurement performed on a non-over-constrained fixture. Measuring these kinds of parts in a stress-free state is almost impossible because deflections caused by their weight occur. To solve this problem, a simulation model of the measurement is created using available methods of reverse engineering. Then, an iterative algorithm calculates the gravity-free shape. The approach does not require a CAD model of the measured part, implying the whole part can be fully scanned. The application of this method mainly addresses thin, unstable sheet metal parts, like those commonly used in the automotive or aerospace industry. To show the performance of the proposed method, validations with simulation and experimental data are presented. The shown results meet the predefined quality goal of predicting shapes within a tolerance of ±0.05 mm measured in the surface normal direction.
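The iterative correction can be read as a fixed-point loop: the current estimate of the gravity-free shape is fed into the gravity simulation, and the deviation between the simulated deflected shape and the actual measurement updates the estimate. The sketch below illustrates this idea with a toy stand-in for the FE deflection model; the function names and compliance values are illustrative assumptions, not the authors' implementation.

```python
def simulate_deflection(shape, compliance):
    # Toy stand-in for the FE gravity simulation: each node sags by its
    # (position-dependent) compliance under its own weight.
    return [z - c for z, c in zip(shape, compliance)]

def gravity_free_shape(measured, compliance, n_iter=20, tol=1e-9):
    """Iteratively find the shape whose simulated deflection matches
    the measured (deflected) shape."""
    estimate = list(measured)
    for _ in range(n_iter):
        predicted = simulate_deflection(estimate, compliance)
        residual = [m - p for m, p in zip(measured, predicted)]
        if max(abs(r) for r in residual) < tol:
            break
        # correct the estimate by the remaining deviation
        estimate = [e + r for e, r in zip(estimate, residual)]
    return estimate

# hypothetical scan of a flat-looking (but sagging) part
measured = [0.0, 0.0, 0.0]
compliance = [0.01, 0.02, 0.03]
free = gravity_free_shape(measured, compliance)
```

With a linear deflection model the loop converges immediately; for a real nonlinear FE model several iterations would be needed.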
US arms control policies have shifted frequently in the last 60 years, ranging from the role of a 'brakeman' regarding international arms control to the role of a 'booster' initiating new agreements. My article analyzes the conditions that contribute to this mixed pattern. A crisp-set Qualitative Comparative Analysis (QCA) evaluates 24 cases of US decisions on international arms control treaties (1963–2021). The analysis reveals that the strength of conservative treaty skeptics in the Senate, in conjunction with other factors, has contributed to the demise of arms control policies since the end of the Cold War. A brief study of the Trump administration's arms control policies provides case-sensitive insights to corroborate the conditions identified by the QCA. The findings suggest that conservative treaty skeptics contested the bipartisan consensus and thus impaired the ability of the USA to perform its leadership role within the international arms control regime.
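A crisp-set QCA starts from a truth table that groups cases by their configuration of binary conditions and checks how consistently each configuration is associated with the outcome. A minimal sketch of that first step follows; the condition names and case data are invented for illustration, not the article's dataset.

```python
from collections import defaultdict

def truth_table(cases):
    """Group crisp-set cases by condition configuration and report,
    per row, the number of cases and the consistency with the outcome.
    cases: list of (conditions_tuple, outcome) with 0/1 values."""
    rows = defaultdict(lambda: [0, 0])  # config -> [n_cases, n_positive]
    for config, outcome in cases:
        rows[config][0] += 1
        rows[config][1] += outcome
    return {cfg: (n, pos / n) for cfg, (n, pos) in rows.items()}

# hypothetical conditions: (treaty_skeptics_strong, divided_government)
cases = [((1, 0), 0), ((1, 0), 0), ((0, 1), 1), ((0, 0), 1), ((1, 1), 0)]
table = truth_table(cases)
```

Rows with consistency 1.0 would then be candidates for Boolean minimization into sufficient paths.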
The promise of algorithmic decision-making (ADM) lies in its capacity to support or replace human decision-making based on a superior ability to solve specific cognitive tasks. Applications have found their way into various domains of decision-making—and even find appeal in the realm of politics. Against the backdrop of widespread dissatisfaction with politicians in established democracies, there are even calls for replacing politicians with machines. Our discipline has hitherto remained surprisingly silent on these issues. The present article argues that it is important to have a clear grasp of when and how ADM is compatible with political decision-making. While algorithms may help decision-makers in the evidence-based selection of policy instruments to achieve pre-defined goals, bringing ADM to the heart of politics, where the guiding goals are set, is dangerous. Democratic politics, we argue, involves a kind of learning that is incompatible with the learning and optimization performed by algorithmic systems.
We propose a universal method for the evaluation of generalized standard materials that greatly simplifies the material law implementation process. By means of automatic differentiation and a numerical integration scheme, AutoMat reduces the implementation effort to two potential functions. By moving AutoMat to the GPU, we close the performance gap to conventional evaluation routines and demonstrate in detail that the expression level reverse mode of automatic differentiation as well as its extension to second order derivatives can be applied inside CUDA kernels. We underline the effectiveness and the applicability of AutoMat by integrating it into the FFT-based homogenization scheme of Moulinec and Suquet and discuss the benefits of using AutoMat with respect to runtime and solution accuracy for an elasto-viscoplastic example.
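The core idea of reducing a material law to potential functions is that stresses follow from differentiating the potential, and automatic differentiation computes that derivative exactly rather than by finite differences. AutoMat itself uses expression-level reverse mode on the GPU; the sketch below only illustrates the principle with simple forward-mode dual numbers, and the quadratic potential is a made-up example, not the paper's material model.

```python
class Dual:
    """Minimal forward-mode AD value: carries f and df together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def derivative(f, x):
    # seed the input with dot = 1 and read the derivative off the output
    return f(Dual(x, 1.0)).dot

# toy potential psi(e) = 0.5 * k * e**2, so the "stress" is k * e
k = 3.0
stress = derivative(lambda e: 0.5 * k * e * e, 2.0)
```

A reverse-mode implementation would instead record the expression and propagate adjoints backwards, which pays off when one output depends on many inputs.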
Heterocystous Cyanobacteria of the genus Nodularia form major blooms in brackish waters, while terrestrial Nostoc species occur worldwide, often associated in biological soil crusts. Both genera, by virtue of their ability to fix N2 and conduct oxygenic photosynthesis, contribute significantly to global primary productivity. Select Nostoc and Nodularia species produce the hepatotoxin nodularin and whether its production will change under climate change conditions needs to be assessed. In light of this, the effects of elevated atmospheric CO2 availability on growth, carbon and N2 fixation as well as nodularin production were investigated in toxin and non-toxin producing species of both genera. Results highlighted the following:
Biomass- and volume-specific biological nitrogen fixation (BNF) rates were almost six- and 17-fold higher, respectively, in the aquatic Nodularia species compared to the terrestrial Nostoc species tested, under elevated CO2 conditions.
There was a direct correlation between elevated CO2 and decreased dry weight specific cellular nodularin content in a diazotrophically grown terrestrial Nostoc species, and the aquatic Nodularia species, regardless of nitrogen availability.
Elevated atmospheric CO2 levels were correlated to a reduction in biomass specific BNF rates in non-toxic Nodularia species.
Nodularin producers exhibited stronger stimulation of net photosynthesis rates (NP) and growth (more positive Cohen’s d) and less stimulation of dark respiration and BNF per volume compared to non-nodularin producers under elevated CO2 levels.
This study is the first to provide information on NP and nodularin production under elevated atmospheric CO2 levels for Nodularia and Nostoc species under nitrogen replete and diazotrophic conditions.
When considering complex systems, identifying the most important actors is often of relevance. When the system is modeled as a network, centrality measures are used which assign each node a value according to its position in the network. It is often disregarded that these measures implicitly assume a network process flowing through the network, and also make assumptions about how that process flows. A node is then central with respect to this network process (Borgatti in Soc Netw 27(1):55–71, 2005, https://doi.org/10.1016/j.socnet.2004.11.008). It has been shown that real-world processes often do not fulfill these assumptions (Bockholt and Zweig, in Complex networks and their applications VIII, Springer, Cham, 2019, https://doi.org/10.1007/978-3-030-36683-4_7). In this work, we systematically investigate the impact of the measures' assumptions by using four datasets of real-world processes. In order to do so, we introduce several variants of the betweenness and closeness centrality which, for each assumption, use either the assumed process model or the behavior of the real-world process. The results are twofold: on the one hand, for all measure variants and almost all datasets, we find that, in general, the standard centrality measures are quite robust against deviations in their process model. On the other hand, we observe a large variation of ranking positions of single nodes, even among the nodes ranked high by the standard measures. This has implications for the interpretability of results of those centrality measures. Since a mismatch between the behavior of the real network process and the assumed process model affects even the highly ranked nodes, resulting rankings need to be interpreted with care.
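Closeness centrality is a concrete example of such an assumption-laden measure: it presumes the process travels along shortest paths, and scores a node by the inverse of its average shortest-path distance to all other nodes. A minimal sketch of the standard measure on an unweighted graph (not the process-aware variants introduced in the paper):

```python
from collections import deque

def closeness(adj, node):
    """Closeness centrality of `node`: reachable nodes divided by the
    sum of BFS shortest-path distances to them (0.0 if isolated)."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    reached = len(dist) - 1
    if reached == 0:
        return 0.0
    return reached / sum(dist.values())

# path graph 0-1-2-3-4: the middle node should score highest
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
scores = {n: closeness(adj, n) for n in adj}
```

If the real process does not follow shortest paths, the distances used here no longer describe it, which is exactly the mismatch the paper investigates.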
A Strained Partnership: Crisis and Resilience in Transatlantic Relations 20 Years after 9/11
(2021)
From a transatlantic perspective, 2021 brought several turning points at once. In January, US President Donald Trump, whose disruptive policies had provoked various conflicts with Europe, was succeeded by Joseph R. Biden. In August, the longest mission in NATO's history ended in Afghanistan with a chaotic withdrawal and the Taliban's seizure of power, almost 20 years after the war began. Finally, the federal elections in September marked the end of the tenure of Angela Merkel, who as Chancellor dealt with four US presidents over 16 years in government. These turning points provide ample occasion to take stock of transatlantic relations since 9/11.
Machining-induced residual stresses (MIRS) are a main driver for distortion of thin-walled monolithic aluminum workpieces. Before one can develop compensation techniques to minimize distortion, the effect of machining on the MIRS has to be fully understood. This means that not only an investigation of the effect of different process parameters on the MIRS is important; in addition, the repeatability of the MIRS resulting from the same machining condition has to be considered. In past research, statistical confidence of MIRS of machined samples was not focused on. In this paper, the repeatability of the MIRS for different machining modes, consisting of a variation in feed per tooth and cutting speed, is investigated. Multiple hole-drilling measurements within one sample and on different samples, machined with the same parameter set, were part of the investigations. In addition, the effect of two different clamping strategies on the MIRS was investigated. The results show that an overall repeatability for MIRS is given for stable machining (between 16 and 34% repeatability standard deviation of the maximum normal MIRS), whereas unstable machining, detected by vibrations in the force signal, has worse repeatability (54%) independent of the used clamping strategy. Further experiments, in which a 1-mm-thick wafer was removed at the milled surface, show the connection between MIRS and their distortion. A numerical stress analysis reveals that the measured stress data is consistent with machining-induced distortion across and within different machining modes. It was found that more and/or deeper MIRS cause more distortion.
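The repeatability percentages quoted above can be read as relative standard deviations: the scatter of the maximum normal residual stress across repeated measurements, expressed as a percentage of the mean magnitude. A minimal sketch of that statistic (the stress values below are hypothetical, not measurement data from the paper):

```python
import statistics

def repeatability_pct(values):
    """Repeatability as relative standard deviation: sample standard
    deviation in percent of the mean magnitude."""
    mean = statistics.fmean(values)
    return 100.0 * statistics.stdev(values) / abs(mean)

# hypothetical maximum normal MIRS (MPa, compressive) from repeated
# hole-drilling measurements under one stable machining mode
stable_mode = [-120.0, -135.0, -128.0, -122.0]
scatter = repeatability_pct(stable_mode)
```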
Analysis of dimensional accuracy for micro-milled areal material measures with kinematic simulation
(2021)
The calibration of areal surface topography measuring instruments is of high relevance to estimate the measurement uncertainty and to guarantee the traceability of the measurement results. Calibration structures for optical measuring instruments must be sufficiently small to determine the limits of the instruments.
Besides other methods, micro-milling is a suitable process for manufacturing areal material measures. For the manufacturing by micro-milling with ball end mills, the tool radius (effective cutter radius) is the corresponding limiting factor: if the tool radius is too large to penetrate the concave profile details without removing the surrounding material, deviations from the target geometry will occur. These deviations can be detected and excluded before experimental manufacturing with the aid of a kinematic simulation.
In this study, a kinematic simulation model for the prediction of the dimensional accuracy of micro-milled areal material measures is developed and validated. Subsequently, a radius study is conducted to determine how the effective tool radius r influences the dimensional accuracy of an areal crossed sinusoidal (ACS) geometry according to ISO 25178-70 [1] with a defined amplitude d and period length p. The resulting theoretical surface texture parameters are evaluated and compared to the target values. It was shown that the surface texture parameters deviate from the nominal values depending on the effective cutter radius used. Based on the results of the study, it can be determined with which effective tool radius the measurands Sa and Sq of the material measures are best met. The ideal effective radius for the application considered is between 50 and 75 μm.
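The geometric limit described above can be checked analytically for a sinusoidal profile: a ball-end tool reproduces the concave valleys only if its effective radius does not exceed the smallest radius of curvature there. A minimal sketch, assuming a profile z(x) = (d/2)·sin(2πx/p) with peak-to-valley amplitude d; the numerical values are illustrative, not the geometry used in the study.

```python
import math

def min_concave_radius(d, p):
    """Smallest radius of curvature at the valleys of
    z(x) = (d/2) * sin(2*pi*x/p); curvature peaks at the extrema,
    where kappa = (d/2) * (2*pi/p)**2."""
    curvature_max = (d / 2.0) * (2.0 * math.pi / p) ** 2
    return 1.0 / curvature_max

def tool_fits(r_tool, d, p):
    # the tool can penetrate the valleys without removing
    # surrounding material only if it is not too blunt
    return r_tool <= min_concave_radius(d, p)

# illustrative (hypothetical) geometry: d = 5 um, p = 100 um
r_limit = min_concave_radius(5.0, 100.0)  # limiting radius in um
```

A kinematic simulation generalizes this check to arbitrary geometries, where no closed-form curvature is available.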
Adaptive numerical integration of exponential finite elements for a phase field fracture model
(2021)
Phase field models for fracture are energy-based and employ a continuous field variable, the phase field, to indicate cracks. The width of the transition zone of this field variable between damaged and intact regions is controlled by a regularization parameter. Narrow transition zones are required for a good approximation of the fracture energy, which involves steep gradients of the phase field. This demands a high mesh density in finite element simulations if 4-node elements with standard bilinear shape functions are used. In order to improve the quality of the results with coarser meshes, exponential shape functions derived from the analytic solution of the 1D model are introduced for the discretization of the phase field variable. Compared to the bilinear shape functions, these special shape functions allow for a better approximation of the fracture field. Unfortunately, lower-order Gauss-Legendre quadrature schemes, which are sufficiently accurate for the integration of bilinear shape functions, are not sufficient for an accurate integration of the exponential shape functions. Therefore, in this work, the numerical accuracy of higher-order Gauss-Legendre formulas and a double exponential formula for numerical integration is analyzed.
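The quadrature issue can be seen on a model integrand: Gauss-Legendre rules are exact for polynomials, so a low-order rule noticeably under-integrates a steep exponential such as exp(k·x) on [-1, 1]. A minimal sketch with hardcoded nodes and weights (k = 5 is chosen purely for illustration, not taken from the paper):

```python
import math

# Gauss-Legendre nodes and weights on [-1, 1]
GL2 = [(-1 / math.sqrt(3), 1.0), (1 / math.sqrt(3), 1.0)]
GL5 = [(-0.9061798459386640, 0.2369268850561891),
       (-0.5384693101056831, 0.4786286704993665),
       (0.0,                 0.5688888888888889),
       (0.5384693101056831,  0.4786286704993665),
       (0.9061798459386640,  0.2369268850561891)]

def gauss(rule, f):
    # weighted sum over the rule's nodes
    return sum(w * f(x) for x, w in rule)

k = 5.0
def f(x):
    return math.exp(k * x)  # model for a steep exponential shape function

exact = (math.exp(k) - math.exp(-k)) / k   # analytic integral on [-1, 1]
err2 = abs(gauss(GL2, f) - exact)          # 2-point rule: large error
err5 = abs(gauss(GL5, f) - exact)          # 5-point rule: far smaller error
```

Double exponential (tanh-sinh) rules take a different route, transforming the integrand so that the trapezoidal rule converges rapidly.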
Without actors, there is no action: How interpersonal interactions help to explain routine dynamics
(2020)
In this paper, we argue that it is important to gain a better understanding of how people interact with each other in order to explain routine dynamics. We therefore propose to focus on the interpersonal interactions of actors: not only the fact that actors interact with each other, but also the manner and quality of these interactions, is important for understanding routine dynamics. Drawing on social exchange theory, we propose a framework that seeks to explain routine dynamics based on different relationships between actors. Building on this framework, we provide different process models indicating how routine performing and patterning are enacted depending on the respective relationship of the actors. Our insights contribute to research on routine dynamics by arguing (1) that actions of patterning depend on the relationship of actors; (2) that trust works as an enabler for creating new patterns of actions; and (3) that distrust functions as an enhancer for interrupting and dissolving patterns of actions.
Defects change the phonon spectrum and also the magnetic properties of bcc-Fe. Using molecular dynamics simulation, the influence of defects – vacancies, dislocations, and grain boundaries – on the phonon spectra and magnetic properties of bcc-Fe is determined. It is found that the main influence of defects consists in a decrease of the amplitude of the longitudinal peak, PL, at around 37 meV. While the change in phonon spectra shows only little dependence on the defect type, the quantitative decrease of PL is proportional to the defect concentration. Local magnetic moments can be determined from the local atomic volumes. Again, the changes in the magnetic moments of a defective crystal are linear in the defect concentrations. In addition, the change of the phonon density of states and the magnetic moments under homogeneous uniaxial strain are investigated.
Mobile devices (smartphones or tablets) as experimental tools (METs) offer inspiring possibilities for science education, but until now, there has been little research studying this approach. Previous research indicated that METs have positive effects on students' interest and curiosity. The present investigation focuses on potential cognitive effects of METs using video analyses on tablets to investigate pendulum movements and an instruction that has been used before to study effects of smartphones' acceleration sensors. In a quasi-experimental repeated-measurement design, a treatment group uses METs (TG, n = 23) and a control group works with traditional experimental tools (CG, n = 28) to study the effects on interest, curiosity, and learning achievement. Moreover, various control variables were taken into account. We suppose that pupils in the TG have a lower extraneous cognitive load and higher learning achievement than those in the CG working with traditional experimental tools. ANCOVAs showed significantly higher levels of learning achievement in the TG (medium effect size). No differences were found for interest, curiosity, or cognitive load. This might be due to a smaller material context provided by tablets, in comparison to smartphones, as more pupils possess and are familiar with smartphones than with tablets. Another reason for the unchanged interest might be the composition of the sample: While previous research showed that especially originally less-interested students profited most from using METs, the current sample contained only specialized courses, i.e., students with a high original interest, for whom the effect of METs on their interest is presumably smaller.
Existentialist philosophy offers an understanding of how trying to eliminate ambiguities that inevitably mark the human condition only seemingly leads to freedom. This existentialist outlook can also serve to shed light on how democratic politics may similarly show tendencies which aim at overcoming immanent tensions. Such tendencies in democratic politics can be clarified using Sartre’s notion of ignorance – and truth as its counterpart. His concept of ignorance goes beyond merely facts or knowledge and refers to a mode of being. It expresses a subject’s desire to avoid, rather than confront, resistances stemming from the world. Based on a distinction of different forms in which this orientation can manifest itself, this article shows how democratic politics, too, can be threatened by ignorance as a way of doing politics. This ignorance comes in different guises which all express a desire to eliminate tensions that democratic politics cannot overcome without undermining itself.