The possibility of a premium adjustment in German private health insurance (PKV) depends on the value of the so-called triggering factor, which is computed by linear extrapolation of the loss ratios of the past three years. From a risk-management perspective, an early and reliable prediction of this factor is of great importance. We therefore investigate a variety of forecasting approaches, ranging from classical time-series methods and regression through neural networks to hybrid models. While regression with ARIMA errors performs best among the classical methods, a neural network combined with time-series forecasting, or trained on deseasonalized and detrended data, shows the best overall behavior.
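The abstract describes the triggering factor as a linear extrapolation of the loss ratios of the past three years but does not spell out the computation. As a hedged illustration only, fitting a least-squares line through the three annual loss ratios and evaluating it one year ahead could look as follows (the function name and the least-squares interpretation are assumptions, not the paper's definition):

```python
def extrapolate_triggering_factor(loss_ratios):
    """Forecast next year's loss ratio by fitting a straight line
    through the past three annual loss ratios (illustrative sketch)."""
    assert len(loss_ratios) == 3
    xs = [0.0, 1.0, 2.0]                 # years, relative to the window
    mean_x = sum(xs) / 3
    mean_y = sum(loss_ratios) / 3
    # Ordinary least-squares slope and intercept of the fitted line
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, loss_ratios)) / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * 3.0       # extrapolate to year 3

# Loss ratios rising linearly by 0.1 per year extrapolate to 1.3
print(extrapolate_triggering_factor([1.0, 1.1, 1.2]))
```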
Gliomas are primary brain tumors with a high invasive potential and infiltrative spread. Among them, glioblastoma multiforme (GBM) exhibits microvascular hyperplasia and pronounced necrosis triggered by hypoxia. Histological samples showing garland-like hypercellular structures (so-called pseudopalisades) centered around the occlusion site of a capillary are typical of GBM and hint at a poor prognosis for patient survival. We propose a multiscale modeling approach in the kinetic theory of active particles framework and deduce, by an upscaling process, a reaction-diffusion model with repellent pH-taxis. We prove existence of a unique global bounded classical solution for a version of the obtained macroscopic system and investigate the asymptotic behavior of the solution. Moreover, we study two different types of scaling and compare the behavior of the obtained macroscopic PDEs by way of simulations. These show that patterns (not necessarily of Turing type), including pseudopalisades, can be formed for some parameter ranges, in accordance with the tumor grade. This is true when the PDEs are obtained via parabolic scaling (undirected tissue), while no such patterns are observed for the PDEs arising by a hyperbolic limit (directed tissue). This suggests that brain tissue might be undirected, at least as far as glioma migration is concerned. We also investigate two different ways of including cell-level descriptions of the response to hypoxia and the way they are related.
The precise regulation of synaptic connectivity is essential for the processing of information in the brain. Any aberrant loss of synaptic connectivity due to genetic mutations will disrupt information flow in the nervous system and may represent the underlying cause of psychiatric or neurodegenerative diseases. Therefore, identification of the molecular mechanisms controlling synaptic plasticity and maintenance is essential for our understanding of neuronal circuits in development and disease.
Maturity model for determining digitalization levels within different product lifecycle phases
(2021)
Maintaining pace with ongoing changes due to digitalization is challenging for manufacturing companies. For successful implementation of digitalization, manufacturing companies must consider their existing technical systems, organizational structures, and processes, as well as social aspects. With the support of a maturity model, a company-specific digitalization level can be evaluated to provide manufacturing companies with an initial insight into their particular status quo; this can serve as a starting point for future optimization and digitalization projects. Furthermore, the results of such an analysis allow objective comparison of different areas within the company and with competitors. In this paper, the “Integrierte Arbeitssystemgestaltung in digitalisierten Produktionsunternehmen” (InAsPro) maturity model is presented, which considers the Development, Production, and Assembly product lifecycle phases, as well as Aftersales, and assesses their digitalization level focusing on the four dimensions of Technology, Organization, Social Issues, and Corporate Strategy. The maturity model’s rating scale distinguishes between four maturity levels. The results given by the InAsPro maturity model for an entire company are presented, along with those for each product lifecycle phase. Extensive descriptions for each specific maturity level are also provided.
Consider a linear realization of a matroid over a field. One associates with it a configuration polynomial and a symmetric bilinear form with linear homogeneous coefficients. The corresponding configuration hypersurface and its non-smooth locus support the respective first and second degeneracy scheme of the bilinear form. We show that these schemes are reduced and describe the effect of matroid connectivity: for (2-)connected matroids, the configuration hypersurface is integral, and the second degeneracy scheme is reduced Cohen–Macaulay of codimension 3. If the matroid is 3-connected, then the second degeneracy scheme is also integral. In the process, we describe the behavior of configuration polynomials, forms and schemes with respect to various matroid constructions.
Loss of USP28 and SPINT2 expression promotes cancer cell survival after whole genome doubling
(2021)
Background
Whole genome doubling is a frequent event during cancer evolution and shapes the cancer genome due to the occurrence of chromosomal instability. Yet, erroneously arising human tetraploid cells usually do not proliferate due to p53 activation that leads to CDKN1A expression, cell cycle arrest, senescence and/or apoptosis.
Methods
To uncover the barriers that block the proliferation of tetraploids, we performed an RNAi-mediated genome-wide screen in a human colorectal cancer cell line (HCT116).
Results
We identified 140 genes whose depletion improved the survival of tetraploid cells and characterized in depth two of them: SPINT2 and USP28. We found that SPINT2 is a general regulator of CDKN1A transcription via histone acetylation. Using mass spectrometry and immunoprecipitation, we found that USP28 interacts with NuMA1 and affects centrosome clustering. Tetraploid cells accumulate DNA damage and loss of USP28 reduces checkpoint activation, thus facilitating their proliferation.
Conclusions
Our results indicate three aspects that contribute to the survival of tetraploid cells: (i) increased mitogenic signaling and reduced expression of cell cycle inhibitors, (ii) the ability to establish functional bipolar spindles and (iii) reduced DNA damage signaling.
This article investigates a network interdiction problem on a tree network: given a subset of nodes chosen as facilities, an interdictor may dissect the network by removing a size-constrained set of edges, striving to degrade the service of the established facilities as much as possible. Here, we consider a reachability objective function, which is closely related to the covering objective function: the interdictor aims to minimize the number of customers that are still connected to any facility after interdiction. For the covering objective on general graphs, this problem is known to be NP-complete (Fröhlich and Ruzika, On the hardness of covering-interdiction problems, Theor. Comput. Sci., 2021). In contrast, we propose a polynomial-time algorithm to solve the problem on trees. The algorithm is based on dynamic programming and reveals the relation of this location-interdiction problem to knapsack-type problems. However, the input data for the dynamic program must be elaborately generated and relies on the theoretical results presented in this article. As a result, trees are the first known graph class that admits a polynomial-time algorithm for edge interdiction problems in the context of facility location planning.
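The paper's algorithm for arbitrary facility sets is not reproduced in the abstract, but the flavor of a tree dynamic program with a knapsack-type merge can be illustrated with a much-simplified sketch: a single facility at the root, every node a customer, and a budget of edge removals. The function names and the single-facility restriction are illustrative assumptions, not the article's method.

```python
def min_reachable(tree, root, budget):
    """Toy edge-interdiction DP on a tree with a single facility at
    `root`: remove at most `budget` edges so that as few nodes as
    possible remain connected to the facility.
    `tree` maps each node to the list of its children."""

    def subtree_size(v):
        return 1 + sum(subtree_size(c) for c in tree.get(v, []))

    def best_cut(v, b):
        # Max number of nodes in subtree(v) that can be disconnected
        # with at most b removals, assuming v itself stays connected.
        table = [0] * (b + 1)            # knapsack over the budget
        for c in tree.get(v, []):
            gain = [best_cut(c, j) for j in range(b + 1)]
            for j in range(1, b + 1):    # alternative: cut edge (v, c) itself
                gain[j] = max(gain[j], subtree_size(c))
            merged = [0] * (b + 1)
            for used in range(b + 1):
                for j in range(used + 1):
                    merged[used] = max(merged[used], table[used - j] + gain[j])
            table = merged
        return table[b]

    return subtree_size(root) - best_cut(root, budget)
```

The merge over children is exactly a knapsack over removal budgets, which is the relation to knapsack-type problems the abstract alludes to.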
Plasticity in metallic glasses depends on their stoichiometry. We explore this dependence by molecular dynamics simulations for the case of CuZr alloys using the compositions Cu64.5Zr35.5, Cu50Zr50, and Cu35.5Zr64.5. Plasticity is induced by nanoindentation and orthogonal cutting. Only the Cu64.5Zr35.5 sample shows the formation of localized strain in the form of shear bands, while plasticity is more homogeneous for the other samples. This feature concurs with the high fraction of full icosahedral short-range order found for Cu64.5Zr35.5. In all samples, the atomic density is reduced in the plastic zone; this reduction is accompanied by a decrease of the average atom coordination, with the possible exception of Cu35.5Zr64.5, where coordination fluctuations are high. The strongest density reduction occurs in Cu64.5Zr35.5, where it is connected with the partial destruction of full icosahedral short-range order. The difference in plasticity mechanism influences the shape of the pileup and of the chip generated by nanoindentation and cutting, respectively.
Linear evolution equations are considered usually for the time variable being defined on an interval where typically initial conditions or time periodicity of solutions is required to single out certain solutions. Here, we would like to make a point of allowing time to be defined on a metric graph or network where on the branching points coupling conditions are imposed such that time can have ramifications and even loops. This not only generalizes the classical setting and allows for more freedom in the modeling of coupled and interacting systems of evolution equations, but it also provides a unified framework for initial value and time-periodic problems. For these time-graph Cauchy problems questions of well-posedness and regularity of solutions for parabolic problems are studied along with the question of which time-graph Cauchy problems cannot be reduced to an iteratively solvable sequence of Cauchy problems on intervals. Based on two different approaches—an application of the Kalton–Weis theorem on the sum of closed operators and an explicit computation of a Green’s function—we present the main well-posedness and regularity results. We further study some qualitative properties of solutions. While we mainly focus on parabolic problems, we also explain how other Cauchy problems can be studied along the same lines. This is exemplified by discussing coupled systems with constraints that are non-local in time akin to periodicity.
In recent years, optical character recognition (OCR) systems have been used to digitally preserve historical archives. To transcribe historical archives into a machine-readable form, the documents are first scanned and then OCR is applied. In order to digitize documents without removing them from where they are archived, it is valuable to have a portable device that combines scanning and OCR capabilities. Nowadays, many commercial and open-source document digitization techniques exist that are optimized for contemporary documents. However, they fail to provide sufficient text recognition accuracy for transcribing historical documents because of the severe quality degradation of such documents. In contrast, the anyOCR system, which is designed mainly to digitize historical documents, provides high accuracy; however, this comes at the cost of high computational complexity, resulting in long runtimes and high power consumption. To tackle these challenges, we propose a low-power, energy-efficient accelerator with real-time capabilities called iDocChip, a configurable hybrid hardware-software programmable System-on-Chip (SoC) based on anyOCR for digitizing historical documents. In this paper, we focus on one of the most crucial processing steps in the anyOCR system: text and image segmentation, which makes use of a multi-resolution morphology-based algorithm. Moreover, an optimized FPGA-based hybrid architecture of this anyOCR step, along with its optimized software implementations, is presented. We demonstrate our results on multiple embedded and general-purpose platforms with respect to runtime and power consumption. The resulting hardware accelerator outperforms the existing anyOCR by 6.2×, while achieving 207× higher energy efficiency and maintaining its high accuracy.
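The multi-resolution morphology behind the segmentation step is not detailed in the abstract. As a rough, hedged illustration of the underlying primitive, morphological closing (dilation followed by erosion) bridges small gaps between foreground pixels, and applied at increasing scales it merges characters into words, lines, and blocks. The pure-Python helpers below are toys for intuition, not the iDocChip or anyOCR implementation:

```python
def dilate(img, k=1):
    """Binary dilation with a (2k+1) x (2k+1) square structuring element."""
    h, w = len(img), len(img[0])
    return [[1 if any(img[yy][xx]
                      for yy in range(max(0, y - k), min(h, y + k + 1))
                      for xx in range(max(0, x - k), min(w, x + k + 1)))
             else 0
             for x in range(w)] for y in range(h)]

def erode(img, k=1):
    """Binary erosion with the same structuring element."""
    h, w = len(img), len(img[0])
    return [[1 if all(img[yy][xx]
                      for yy in range(max(0, y - k), min(h, y + k + 1))
                      for xx in range(max(0, x - k), min(w, x + k + 1)))
             else 0
             for x in range(w)] for y in range(h)]

def closing(img, k=1):
    # Dilation bridges small gaps; erosion then restores the outline.
    return erode(dilate(img, k), k)

# Two isolated pixels (e.g. broken glyph strokes) merge into one component
print(closing([[0, 1, 0, 1, 0]], k=1))  # -> [[1, 1, 1, 1, 1]]
```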
Public services (Daseinsvorsorge) in the field of protecting drinking water resources begin with the care of the available water supply. In view of the factually and legally changing conditions in the protection of drinking, mineral and medicinal water resources (climate change, "water stress" as a result of qualitative and quantitative deterioration, up to and including drinking water scarcity), the question arises of how the dangers of the increasing deterioration of these resources can be countered. The following article supports the thesis that even in the case of a well-intentioned amendment of state law, driven by acceleration and deregulation efforts, that dismantles risk-preventive, systematically embedded protective components, the consideration-relevant concern must not be disregarded that this can lead to a systematic weakening of the legal and environmental-planning control and protection system, with adverse consequences for these protected assets. In this respect, the importance of objective legal protection for the preservation of these special "natural treasures", the drinking, mineral and medicinal water resources, should not be underestimated.
Functional illiteracy and developmental dyslexia: looking for common roots. A systematic review
(2021)
A considerable proportion of the population in more economically developed countries is functionally illiterate (i.e., low literate). Despite some years of schooling and basic reading skills, these individuals cannot properly read and write and, as a consequence, have problems understanding even short texts. An often-discussed approach (Greenberg et al. 1997) assumes weak phonological processing skills coupled with untreated developmental dyslexia as possible causes of functional illiteracy. Although there is some data suggesting commonalities between low literacy and developmental dyslexia, it is still not clear whether these reflect shared consequences (i.e., cognitive and behavioral profile) or shared causes. The present systematic review aims at exploring the similarities and differences identified in empirical studies investigating both functionally illiterate and developmental dyslexic samples. Nine electronic databases were searched in order to identify all quantitative studies published in English or German. Although a broad search strategy and few limitations were applied, only 5 of the resulting 9269 references were identified as adequate. The results point to a lack of studies directly comparing functionally illiterate with developmental dyslexic samples. Moreover, a huge variance was identified between the studies in how they approached the concept of functional illiteracy, particularly when it came to critical categories such as the applied definition, terminology, criteria for inclusion in the sample, research focus, and outcome measures. The available data highlight the need for more direct comparisons in order to understand to what extent functional illiteracy and dyslexia share common characteristics.
This article presents the results of an investigation of solid-lubricated rolling bearings. The bearings considered use a special, modified cage whose pockets serve, in addition to their original function of guiding the rolling elements, as a lubricant depot. First, the test setup and the test conditions are explained; in this context it is shown that the setup used here exhibits considerably reduced scatter compared with the setup of previous work. The hygroscopic behavior of the polymer compound was identified as a non-negligible source of error in the gravimetric determination of cage-pocket wear. Distortion of these measurement results by uncontrolled moisture absorption from the environment must be prevented by a preceding drying process under defined conditions. Furthermore, it is shown that the cage pockets are worn both by the inner ring of the bearing and by the rolling elements. A measurement method is presented for determining the amount of material worn away by the inner ring. Surface analyses of the brass structure of the cage demonstrate a depletion of zinc as well as a change in the surface structure. Sublimation of the zinc due to the test conditions is suspected as the cause. It is also shown that the test temperature of 300 °C leads to shrinkage of the bearing rings. This dimensional reduction can be anticipated by tempering at 300 °C for 48 h.
Effects of the Velocity Sequences on the Friction and Wear Performance of PEEK-Based Materials
(2021)
In the present study, the effects of the sliding velocity sequence on the friction and wear properties of pure polyetheretherketone (PEEK) and a PEEK hybrid composite were studied. It is demonstrated that the tribological properties of pure PEEK and its composite depend on the velocity sequence in a complex manner within the studied range. The friction coefficient of PEEK is independent of the previous velocity history. In contrast, the testing sequence of the velocity exerts an obvious impact on the friction coefficient of the PEEK composite at slow sliding velocities. With respect to wear performance, the specific wear rate of pure PEEK exhibits a strong dependence on the velocity sequence only at the initial pv-levels. For the PEEK composite, the specific wear rate exhibits an obvious dependence on the previous velocity levels at a low nominal pressure of 1 MPa. When the pressure is increased to 8 MPa, the impact of the velocity sequence on the wear performance becomes insignificant. In addition, the tribological properties clearly correlate with the temperature of the tribosystem.
The reason why variant selection phenomena occur in ausforming treatments is still not known. For that reason, in this work, the effect of compressive deformation on the macro- and micro-texture of a bainitic microstructure was analyzed in a medium-carbon high-silicon steel subjected to ausforming treatments, where deformation was applied at 520 °C, 400 °C and 300 °C. The as-received material presented a very weak ⟨331⟩ fiber texture along the rod axis, due to prior thermomechanical processing. For the isothermally heat-treated samples, it was found that the bainitic ferrite inherited a ⟨100⟩ fiber texture from the ⟨110⟩ fiber texture present in the prior austenite. The intensity of this transformation texture became more pronounced as the deformation temperature decreased. Variant selection was also examined at different scales by combining electron backscatter diffraction and X-ray diffraction. The quantification of the fraction of crystallographic variants under certain conventions for every condition revealed variant selection in samples subjected to ausforming treatments, these phenomena being stronger at lower deformation temperatures. Finally, some of the theories proposed so far to explain these variant selection phenomena were tested, showing that variants were not selected based on their Bain group and that their selection can be better described in terms of their belonging to packets, if these are defined according to a global reference frame. This suggests that the phenomena might have to do with the effect of deformation mechanisms on the prior austenite.
The deformation of a nano-sized polycrystalline Al bar under the action of vice plates is studied using molecular dynamics simulation. Two grain sizes are considered, fine-grained and coarse-grained. Deformation in the fine-grained sample is mainly caused by grain-boundary processes which induce grain displacement and rotation. Deformation in the coarse-grained sample is caused by grain-boundary processes and dislocation plasticity. The sample distortion manifests itself by the center-of-mass motion of the grains. Grain rotation is responsible for surface roughening after the loading process. While the plastic deformation is caused by the loading process, grain rearrangements under load release also contribute considerably to the final sample distortion.
This article examines (1) the extent to which differences in the design of migration policy exist at the subnational level in the Federal Republic of Germany and (2) how the policy variance between the German Länder can be explained. While existing studies have mostly examined similar questions on the basis of a single indicator of migration policy, such as expenditures, we propose a multidimensional measurement concept that distinguishes six dimensions of migration policy at the Land level: (1) the type of accommodation, (2) the type of benefit provision, (3) health care, (4) admission practice, (5) deportation practice, and (6) positioning at the federal level, using the example of the "safe countries of origin". To analyze possible paths explaining the differences between the Länder, we use a fuzzy-set QCA analysis, drawing on party politics, the socioeconomic context, and popular attitudes as conditions.
Our results show that substantial differences between the Länder do indeed exist. Moreover, we find that the party-political composition of the government is, in various paths, an important condition for the presence of restrictive or permissive migration policies. In not a single causal path of the fsQCA analysis can restrictive or permissive migration policy be explained without taking party ideology into account, a result that clearly speaks to the high relevance of the party composition of the government. The attitudinal patterns of the population in the respective Land, migration policy, and the socioeconomic conditions, by contrast, appear to play only a subordinate role.
The plasma membrane harbors a specific set of transmembrane proteins which enable diverse cellular functions such as nutrient uptake, ion homeostasis and cellular signaling. The surface levels of these proteins need to be dynamically regulated to allow for plastic changes in cellular behaviour, e.g. upon cell stress or during neuronal communication. Endocytosis is a powerful mechanism for quickly adapting the surface proteome via protein internalization. Here, I discuss how endocytosis contributes to brain function and counteracts cell stress.
In this study we investigated parafoveal processing by L1 and late L2 speakers of English (L1 German) while reading in English. We hypothesized that L2ers would make use of semantic and orthographic information parafoveally. Using the gaze contingent boundary paradigm, we manipulated six parafoveal masks in a sentence (Mark found th*e wood for the fire; * indicates the invisible boundary): identical word mask (wood), English orthographic mask (wook), English string mask (zwwl), German mask (holz), German orthographic mask (holn), and German string mask (kxfs). We found an orthographic benefit for L1ers and L2ers when the mask was orthographically related to the target word (wood vs. wook), in line with previous L1 research. English L2ers did not derive a benefit (rather an interference) when a non-cognate translation mask from their L1 was used (wood vs. holz), but did derive a benefit from a German orthographic mask (wood vs. holn). While unexpected, it may be that L2ers incur a switching cost when the complete German word is presented parafoveally, and derive a benefit by keeping both lexicons active when a partial German word is presented parafoveally (narrowing down lexical candidates). To the authors’ knowledge there is no mention of parafoveal processing in any model of L2 processing/reading, and the current study provides the first evidence for a parafoveal non-cognate orthographic benefit (but only with partial orthographic overlap) in sentence reading for L2ers. We discuss how these findings fit into the framework of bilingual word recognition theories.
In some specific applications, the need for an optimized rolling bearing with a load-carrying capacity similar to that of a tapered roller bearing but with much lower friction losses remains to be addressed. In this paper, a new model developed using multibody simulation software is presented together with its experimental validation.
After studying many different (in-use and merely patented) roller geometries, and based on an existing, already validated model for tapered roller bearings, a new model was created by changing the underlying geometry. When the rolling bearing is highly loaded, the new geometry shows lower friction losses than a conventional tapered roller bearing. To confirm this premise, as well as to validate the model, a prototype of the new optimized geometry was manufactured and experimentally tested together with a tapered roller bearing of the same main dimensions. The tests took place on a frictional-torque test rig, where the loads and misalignments occurring on a bearing can be reproduced realistically.
The results of these tests, together with comparisons against the results of the multibody simulation models, are discussed here. It was observed that the new model not only can be validated but also exhibits lower friction losses than a tapered roller bearing at some operating points with highly loaded bearings.
The consumption of red meat is associated with an increased risk for colorectal cancer (CRC). Multiple lines of evidence suggest that heme iron, an abundant constituent of red meat, is responsible for its carcinogenic potential. However, the underlying mechanisms are not fully understood, and particularly the role of intestinal inflammation has not been investigated. To address this important issue, we analyzed the impact of heme iron (0.25 μmol/g diet) on the intestinal microbiota, gut inflammation and colorectal tumor formation in mice. An iron-balanced diet with ferric citrate (0.25 μmol/g diet) was used as reference. 16S rRNA sequencing revealed that dietary heme reduced α-diversity and caused a persistent intestinal dysbiosis, with a continuous increase in gram-negative Proteobacteria. This was linked to chronic gut inflammation and hyperproliferation of the intestinal epithelium, as attested by mini-endoscopy, histopathology and immunohistochemistry. Dietary heme triggered the infiltration of myeloid cells into the colorectal mucosa, with an increased level of COX-2 positive cells. Furthermore, flow cytometry-based phenotyping demonstrated an increased number of T cells and B cells in the lamina propria following heme intake, while γδ-T cells were reduced in the intraepithelial compartment. Dietary heme iron catalyzed the formation of fecal N-nitroso compounds and was genotoxic in intestinal epithelial cells, yet suppressed intestinal apoptosis, as evidenced by confocal microscopy and western blot analysis. Finally, a chemically induced CRC mouse model showed persistent intestinal dysbiosis, chronic gut inflammation and increased colorectal tumorigenesis following heme iron intake. Altogether, this study unveiled intestinal inflammation as an important driver of heme iron-associated colorectal carcinogenesis.
This paper presents an iterative finite element (FE)-based method to calculate the gravity-free shape of nonrigid parts from an optical measurement performed on a non-over-constrained fixture. Measuring these kinds of parts in a stress-free state is almost impossible because deflections caused by their weight occur. To solve this problem, a simulation model of the measurement is created using available methods of reverse engineering. Then, an iterative algorithm calculates the gravity-free shape. The approach does not require a CAD model of the measured part, implying the whole part can be fully scanned. The application of this method mainly addresses thin, unstable sheet metal parts, like those commonly used in the automotive or aerospace industry. To show the performance of the proposed method, validations with simulation and experimental data are presented. The shown results meet the predefined quality goal to predict shapes within a tolerance of ±0.05 mm measured in the surface normal direction.
US arms control policies have shifted frequently in the last 60 years, ranging from the role of a ‘brakeman’ regarding international arms control to the role of a ‘booster’ initiating new agreements. My article analyzes the conditions that contribute to this mixed pattern. A crisp-set Qualitative Comparative Analysis (QCA) evaluates 24 cases of US decisions on international arms control treaties (1963–2021). The analysis reveals that the strength of conservative treaty skeptics in the Senate, in conjunction with other factors, has contributed to the demise of arms control policies since the end of the Cold War. A brief study of the Trump administration’s arms control policies provides case-sensitive insights to corroborate the conditions identified by the QCA. The findings suggest that conservative treaty skeptics contested the bipartisan consensus and thus impaired the ability of the USA to perform its leadership role within the international arms control regime.
The promise of algorithmic decision-making (ADM) lies in its capacity to support or replace human decision-making based on a superior ability to solve specific cognitive tasks. Applications have found their way into various domains of decision-making—and even find appeal in the realm of politics. Against the backdrop of widespread dissatisfaction with politicians in established democracies, there are even calls for replacing politicians with machines. Our discipline has hitherto remained surprisingly silent on these issues. The present article argues that it is important to have a clear grasp of when and how ADM is compatible with political decision-making. While algorithms may help decision-makers in the evidence-based selection of policy instruments to achieve pre-defined goals, bringing ADM to the heart of politics, where the guiding goals are set, is dangerous. Democratic politics, we argue, involves a kind of learning that is incompatible with the learning and optimization performed by algorithmic systems.
We propose a universal method for the evaluation of generalized standard materials that greatly simplifies the material law implementation process. By means of automatic differentiation and a numerical integration scheme, AutoMat reduces the implementation effort to two potential functions. By moving AutoMat to the GPU, we close the performance gap to conventional evaluation routines and demonstrate in detail that the expression level reverse mode of automatic differentiation as well as its extension to second order derivatives can be applied inside CUDA kernels. We underline the effectiveness and the applicability of AutoMat by integrating it into the FFT-based homogenization scheme of Moulinec and Suquet and discuss the benefits of using AutoMat with respect to runtime and solution accuracy for an elasto-viscoplastic example.
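To make the core idea concrete, that automatic differentiation lets a material law be evaluated from its potential function rather than hand-coded, here is a minimal, hedged sketch using forward-mode dual numbers: the stress is obtained as the derivative of a 1D elastic energy potential, with no hand-written stress formula. The `Dual` class and the scalar elasticity example are illustrative assumptions; AutoMat itself uses the expression-level reverse mode of automatic differentiation inside CUDA kernels.

```python
class Dual:
    """Minimal forward-mode AD number: value plus derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def stress_from_potential(w, eps):
    """Evaluate sigma = dw/d(eps) by seeding a unit derivative."""
    return w(Dual(eps, 1.0)).dot

E = 210e9                                # Young's modulus (Pa), illustrative
w_elastic = lambda e: 0.5 * E * e * e    # stored elastic energy density
sigma = stress_from_potential(w_elastic, 1e-3)   # expect E * 1e-3
```

Swapping in a different potential function changes the material law without touching the evaluation code, which is the implementation-effort reduction the abstract describes.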
Heterocystous cyanobacteria of the genus Nodularia form major blooms in brackish waters, while terrestrial Nostoc species occur worldwide, often associated in biological soil crusts. Both genera, by virtue of their ability to fix N2 and conduct oxygenic photosynthesis, contribute significantly to global primary productivity. Certain Nostoc and Nodularia species produce the hepatotoxin nodularin, and it needs to be assessed whether its production will change under climate change conditions. In light of this, the effects of elevated atmospheric CO2 availability on growth, carbon and N2 fixation, as well as nodularin production were investigated in toxin- and non-toxin-producing species of both genera. The results highlighted the following:
- Under elevated CO2 conditions, biomass- and volume-specific biological nitrogen fixation (BNF) rates were almost six and 17 fold higher, respectively, in the aquatic Nodularia species compared to the terrestrial Nostoc species tested.
- There was a direct correlation between elevated CO2 and decreased dry-weight-specific cellular nodularin content in a diazotrophically grown terrestrial Nostoc species and the aquatic Nodularia species, regardless of nitrogen availability.
- Elevated atmospheric CO2 levels were correlated with a reduction in biomass-specific BNF rates in non-toxic Nodularia species.
- Under elevated CO2 levels, nodularin producers exhibited stronger stimulation of net photosynthesis rates (NP) and growth (more positive Cohen's d) and less stimulation of dark respiration and volume-specific BNF compared to non-nodularin producers.
This study is the first to provide information on NP and nodularin production under elevated atmospheric CO2 levels for Nodularia and Nostoc species under nitrogen replete and diazotrophic conditions.
When considering complex systems, identifying the most important actors is often of relevance. When the system is modeled as a network, centrality measures are used which assign each node a value based on its position in the network. It is often disregarded that these measures implicitly assume a network process flowing through the network, and also make assumptions about how that process flows. A node is then central with respect to this network process (Borgatti in Soc Netw 27(1):55–71, 2005, https://doi.org/10.1016/j.socnet.2004.11.008). It has been shown that real-world processes often do not fulfill these assumptions (Bockholt and Zweig, in Complex Networks and Their Applications VIII, Springer, Cham, 2019, https://doi.org/10.1007/978-3-030-36683-4_7). In this work, we systematically investigate the impact of the measures' assumptions by using four datasets of real-world processes. To do so, we introduce several variants of betweenness and closeness centrality which, for each assumption, use either the assumed process model or the behavior of the real-world process. The results are twofold: on the one hand, for all measure variants and almost all datasets, we find that the standard centrality measures are quite robust against deviations in their process model. On the other hand, we observe a large variation in the ranking positions of single nodes, even among the nodes ranked high by the standard measures. This has implications for the interpretability of the results of those centrality measures. Since a mismatch between the behavior of the real network process and the assumed process model affects even the highly ranked nodes, the resulting rankings need to be interpreted with care.
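As a reference point for the process-based variants discussed above, standard closeness centrality on an unweighted graph can be computed with a breadth-first search. This minimal Python sketch (not the authors' code) makes the measure's shortest-path assumption explicit:

```python
# Standard closeness centrality: (n-1) divided by the sum of
# shortest-path distances from a node. The BFS embodies the measure's
# implicit process model -- the "process" travels along shortest paths,
# which real-world processes may violate.

from collections import deque

def closeness(adj, s):
    """Closeness of node s in the adjacency dict adj (connected graph)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    total = sum(dist.values())
    return (len(adj) - 1) / total if total else 0.0

# Path graph 0-1-2-3: inner nodes are more central than the endpoints.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(closeness(adj, 1))  # 3 / (1 + 0 + 1 + 2) = 0.75
```

The process-based variants of the paper would replace the BFS distances with distances actually realized by the observed process; the formula around them stays the same.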
A Strained Partnership: Crisis and Resilience in Transatlantic Relations 20 Years after 9/11
(2021)
From a transatlantic perspective, 2021 brought several turning points at once. In January, US President Donald Trump, whose disruptive policies had provoked numerous conflicts with Europe, was succeeded by Joseph R. Biden. In August, the longest mission in NATO's history ended in Afghanistan with a chaotic withdrawal and the Taliban's seizure of power, almost 20 years after the war began. Finally, the Bundestag elections in September marked the end of the tenure of Angela Merkel, who as Federal Chancellor had dealt with four US presidents over 16 years in office. These turning points provide ample occasion to take stock of transatlantic relations since 9/11.
Machining-induced residual stresses (MIRS) are a main driver for distortion of thin-walled monolithic aluminum workpieces. Before compensation techniques to minimize distortion can be developed, the effect of machining on the MIRS has to be fully understood. This means that not only the effect of different process parameters on the MIRS is important; the repeatability of the MIRS resulting from the same machining condition also has to be considered. Past research has paid little attention to the statistical confidence of the MIRS of machined samples. In this paper, the repeatability of the MIRS for different machining modes, consisting of a variation in feed per tooth and cutting speed, is investigated. The investigations comprised multiple hole-drilling measurements within one sample and on different samples machined with the same parameter set. In addition, the effect of two different clamping strategies on the MIRS was investigated. The results show that an overall repeatability of the MIRS is given for stable machining (between 16 and 34% repeatability standard deviation of the maximum normal MIRS), whereas unstable machining, detected by vibrations in the force signal, has worse repeatability (54%), independent of the clamping strategy used. Further experiments, in which a 1-mm-thick wafer was removed at the milled surface, show the connection between MIRS and the resulting distortion. A numerical stress analysis reveals that the measured stress data is consistent with machining-induced distortion across and within different machining modes. It was found that more and/or deeper MIRS cause more distortion.
Analysis of dimensional accuracy for micro-milled areal material measures with kinematic simulation
(2021)
The calibration of areal surface topography measuring instruments is of high relevance to estimate the measurement uncertainty and to guarantee the traceability of the measurement results. Calibration structures for optical measuring instruments must be sufficiently small to determine the limits of the instruments.
Besides other methods, micro-milling is a suitable process for manufacturing areal material measures. For the manufacturing by micro-milling with ball end mills, the tool radius (effective cutter radius) is the corresponding limiting factor: if the tool radius is too large to penetrate the concave profile details without removing the surrounding material, deviations from the target geometry will occur. These deviations can be detected and excluded before experimental manufacturing with the aid of a kinematic simulation.
In this study, a kinematic simulation model for predicting the dimensional accuracy of micro-milled areal material measures is developed and validated. Subsequently, a radius study is conducted to determine how the tool radius r influences the dimensional accuracy of an areal crossed sinusoidal (ACS) geometry according to ISO 25178-70 [1] with a defined amplitude d and period length p. The resulting theoretical surface texture parameters are evaluated and compared to the target values. It was shown that the surface texture parameters deviate from the nominal values depending on the effective cutter radius used. Based on the results of the study, it can be determined with which effective tool radius the measurands Sa and Sq of the material measures are best met. The ideal effective radius for the application considered is between 50 and 75 μm.
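The measurands Sa and Sq mentioned above are, per ISO 25178-2, the arithmetic mean and the root mean square of the absolute surface heights after subtracting the mean plane. A minimal sketch of their evaluation on synthetic heights (not the study's measurement software) could look like this:

```python
# Areal surface texture parameters Sa (arithmetic mean height) and
# Sq (root mean square height) on discrete height samples z, evaluated
# after subtracting the mean plane. Synthetic sinusoidal heights serve
# as a stand-in for a measured ACS geometry.

import math

def sa_sq(heights):
    n = len(heights)
    mean = sum(heights) / n
    dev = [z - mean for z in heights]          # heights about mean plane
    sa = sum(abs(d) for d in dev) / n          # mean absolute height
    sq = math.sqrt(sum(d * d for d in dev) / n)  # RMS height
    return sa, sq

# Toy cross-section of a sinusoidal structure with amplitude 1:
z = [math.sin(2 * math.pi * k / 100) for k in range(100)]
sa, sq = sa_sq(z)
print(round(sa, 2), round(sq, 3))  # 0.64 0.707 (≈ 2/pi and 1/sqrt(2))
```

A too-large effective cutter radius would flatten the concave valleys of z, pulling both parameters below these nominal values, which is exactly the deviation the kinematic simulation is meant to predict.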
Adaptive numerical integration of exponential finite elements for a phase field fracture model
(2021)
Phase field models for fracture are energy-based and employ a continuous field variable, the phase field, to indicate cracks. The width of the transition zone of this field variable between damaged and intact regions is controlled by a regularization parameter. Narrow transition zones are required for a good approximation of the fracture energy which involves steep gradients of the phase field. This demands a high mesh density in finite element simulations if 4-node elements with standard bilinear shape functions are used. In order to improve the quality of the results with coarser meshes, exponential shape functions derived from the analytic solution of the 1D model are introduced for the discretization of the phase field variable. Compared to the bilinear shape functions these special shape functions allow for a better approximation of the fracture field. Unfortunately, lower-order Gauss-Legendre quadrature schemes, which are sufficiently accurate for the integration of bilinear shape functions, are not sufficient for an accurate integration of the exponential shape functions. Therefore in this work, the numerical accuracy of higher-order Gauss-Legendre formulas and a double exponential formula for numerical integration is analyzed.
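The accuracy gap between lower- and higher-order Gauss-Legendre rules for steep exponential integrands can be reproduced in a few lines. The sketch below is illustrative only: the steepness parameter k is a made-up stand-in for the inverse regularization length, and the paper additionally studies a double exponential formula not shown here.

```python
# Low-order Gauss-Legendre quadrature vs. a steep exponential integrand:
# integrate f(x) = exp(k*x) on [-1, 1], whose exact value is
# (e^k - e^-k)/k, with a 2-point and a 5-point rule.

import math

# (node, weight) pairs of Gauss-Legendre rules on [-1, 1]
GL2 = [(-1 / math.sqrt(3), 1.0), (1 / math.sqrt(3), 1.0)]
GL5 = [(0.0, 0.5688888888888889),
       (-0.5384693101056831, 0.4786286704993665),
       (0.5384693101056831, 0.4786286704993665),
       (-0.9061798459386640, 0.2369268850561891),
       (0.9061798459386640, 0.2369268850561891)]

def quad(rule, f):
    """Apply a quadrature rule given as (node, weight) pairs."""
    return sum(w * f(x) for x, w in rule)

k = 8.0  # steep gradient, loosely mimicking a narrow transition zone
f = lambda x: math.exp(k * x)
exact = (math.exp(k) - math.exp(-k)) / k

for name, rule in [("GL2", GL2), ("GL5", GL5)]:
    approx = quad(rule, f)
    print(name, "relative error:", abs(approx - exact) / exact)
```

With these values the 2-point rule misses the integral by tens of percent while the 5-point rule is below one percent, mirroring why standard low-order quadrature is insufficient for exponential shape functions.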
Without actors, there is no action: How interpersonal interactions help to explain routine dynamics
(2020)
In this paper, we argue that it is important to gain a better understanding of how people interact with each other in order to explain routine dynamics. We therefore propose to focus on the interpersonal interactions of actors: not merely the fact that actors interact with each other, but the manner and quality of these interactions matter for understanding routine dynamics. Drawing on social exchange theory, we propose a framework that seeks to explain routine dynamics based on different relationships between actors. Building on this framework, we provide different process models indicating how routine performing and patterning are enacted depending on the respective relationship of the actors. Our insights contribute to research on routine dynamics by arguing (1) that actions of patterning depend on the relationship of the actors; (2) that trust works as an enabler for creating new patterns of actions; and (3) that distrust functions as an enhancer for interrupting and dissolving patterns of actions.
Defects change the phonon spectrum and also the magnetic properties of bcc-Fe. Using molecular dynamics simulation, the influence of defects – vacancies, dislocations, and grain boundaries – on the phonon spectra and magnetic properties of bcc-Fe is determined. It is found that the main influence of defects consists in a decrease of the amplitude of the longitudinal peak, PL, at around 37 meV. While the change in phonon spectra shows only little dependence on the defect type, the quantitative decrease of PL is proportional to the defect concentration. Local magnetic moments can be determined from the local atomic volumes. Again, the changes in the magnetic moments of a defective crystal are linear in the defect concentrations. In addition, the change of the phonon density of states and the magnetic moments under homogeneous uniaxial strain are investigated.
Mobile devices (smartphones or tablets) as experimental tools (METs) offer inspiring possibilities for science education, but until now, there has been little research studying this approach. Previous research indicated that METs have positive effects on students' interest and curiosity. The present investigation focuses on potential cognitive effects of METs using video analyses on tablets to investigate pendulum movements and an instruction that has been used before to study effects of smartphones' acceleration sensors. In a quasi-experimental repeated-measurement design, a treatment group uses METs (TG, n = 23) and a control group works with traditional experimental tools (CG, n = 28) to study the effects on interest, curiosity, and learning achievement. Moreover, various control variables were taken into account. We suppose that pupils in the TG have a lower extraneous cognitive load and higher learning achievement than those in the CG working with traditional experimental tools. ANCOVAs showed significantly higher levels of learning achievement in the TG (medium effect size). No differences were found for interest, curiosity, or cognitive load. This might be due to a smaller material context provided by tablets, in comparison to smartphones, as more pupils possess and are familiar with smartphones than with tablets. Another reason for the unchanged interest might be the composition of the sample: while previous research showed that especially originally less-interested students profited most from using METs, the current sample contained only specialized courses, i.e., students with a high original interest, for whom the effect of METs on their interest is presumably smaller.
Existentialist philosophy offers an understanding of how trying to eliminate ambiguities that inevitably mark the human condition only seemingly leads to freedom. This existentialist outlook can also serve to shed light on how democratic politics may similarly show tendencies which aim at overcoming immanent tensions. Such tendencies in democratic politics can be clarified using Sartre’s notion of ignorance – and truth as its counterpart. His concept of ignorance goes beyond merely facts or knowledge and refers to a mode of being. It expresses a subject’s desire to avoid, rather than confront, resistances stemming from the world. Based on a distinction of different forms in which this orientation can manifest itself, this article shows how democratic politics, too, can be threatened by ignorance as a way of doing politics. This ignorance comes in different guises which all express a desire to eliminate tensions that democratic politics cannot overcome without undermining itself.
Introducing parallelism and exploring its use is still a fundamental challenge for the computer algebra community. In high-performance numerical simulation, on the other hand, transparent environments for distributed computing which follow the principle of separating coordination and computation have been a success story for many years. In this paper, we explore the potential of using this principle in the context of computer algebra. More precisely, we combine two well-established systems: The mathematics we are interested in is implemented in the computer algebra system SINGULAR, whose focus is on polynomial computations, while the coordination is left to the workflow management system GPI-Space, which relies on Petri nets as its mathematical modeling language and has been successfully used for coordinating the parallel execution (autoparallelization) of academic codes as well as for commercial software in application areas such as seismic data processing. The result of our efforts is a major step towards a framework for massively parallel computations in the application areas of SINGULAR, specifically in commutative algebra and algebraic geometry. As a first test case for this framework, we have modeled and implemented a hybrid smoothness test for algebraic varieties which combines ideas from Hironaka’s celebrated desingularization proof with the classical Jacobian criterion. Applying our implementation to two examples originating from current research in algebraic geometry, one of which cannot be handled by other means, we illustrate the behavior of the smoothness test within our framework and investigate how the computations scale up to 256 cores.
Endocytosis of the amyloid precursor protein (APP) is critical for generation of β-amyloid, which aggregates in Alzheimer's disease. APP endocytosis depending on the intracellular NPTY motif is well investigated, whereas the involvement of the YTSI (also termed BaSS) motif remains controversial. Here, we show that APP lacking the YTSI motif (ΔYTSI) displays reduced localization to early endosomes and decreased internalization rates, similar to APP ΔNPTY. Additionally, we show that the YTSI-binding protein PAT1a interacts with the Rab5 activator RME-6, as demonstrated by several independent assays. Interestingly, knockdown of RME-6 decreased APP endocytosis, whereas overexpression increased it. Similarly, APP ΔNPTY endocytosis was affected by PAT1a and RME-6 overexpression, whereas APP ΔYTSI internalization remained unchanged. Moreover, we could show that the RME-6-mediated increase in APP endocytosis can be diminished by knocking down PAT1a. Together, our data identify RME-6 as a novel player in APP endocytosis involving the YTSI-binding protein PAT1a.
Comparison of Premixed Fuel and Premixed Charge Operation for Propane-Diesel Dual-Fuel Combustion
(2023)
With the rising popularity of dual-fuel combustion, liquefied petroleum gas (LPG) can be utilized in high-compression diesel engines. Through production from biomass (biomass to liquid, BtL), biopropane as a direct substitute for LPG can contribute to a reduction in greenhouse gas emissions caused by combustion engines. In a conventional dual-fuel engine, the low reactivity fuel (LRF) propane is premixed with the intake air to form a homogeneous mixture. This air-fuel mixture is then ignited by the high reactivity fuel (HRF) in the form of a diesel pilot injection inside the cylinder. In the presented work, this premixed charge operation (PCO) is compared to a method where propane and diesel are blended directly upstream of the high-pressure pump (premixed fuel operation, PFO) in variable mixing ratios for different engine loads and speeds. Furthermore, the effects of internal and external exhaust gas recirculation are investigated for each operating mode. The results show that PCO allows higher propane ratios of up to 75 % at low loads, while PFO enables higher percentages of propane at medium and high loads (up to 50 %), allowing for a “reactivity on demand” approach. In addition, PFO shows significantly lower emissions of unburned hydrocarbons (-98.3 %) and carbon monoxide (-94.6 %) compared to PCO while soot emissions are reduced in both cases. The use of EGR allows nitrogen oxide emissions to be lowered to similar levels for both operation modes and shows benefits concerning unburned hydrocarbon (-73.5 %) and carbon monoxide (-62.9 %) emissions in PCO.
Employing site-directed spin labeling (SDSL), the structure of maltose-binding protein (MBP) had previously been studied in the native state by electron paramagnetic resonance (EPR) spectroscopy. Several spin-labeled double cysteine mutants were distributed all over the structure of this cysteine-free protein and revealed distance information between the nitroxide residues from double electron–electron resonance (DEER). The results were in good agreement with the known X-ray structure. We have now extended these studies to the molten globule (MG) state, a folding intermediate, which can be stabilized around pH 3 and that is characterized by secondary but hardly any tertiary structure. Instead of clearly defined distance features as found in the native state, several additional characteristics indicate that the MG structure of MBP contains different polypeptide chain and domain orientations. MBP is also known to bind its substrate maltose even in MG state although with lower affinity. Additionally, we have now created new mutants allowing for spin labeling at or near the active site. Our data confirm an already preformed ligand site structure in the MG explaining its substrate binding capability and thus most probably serving as a nucleation center for the final native structure.
In response priming experiments, a participant has to respond as quickly and as accurately as possible to a target stimulus preceded by a prime. The prime and the target can either be mapped to the same response (consistent trial) or to different responses (inconsistent trial). Here, we investigate the effects of two sequential primes (each one either consistent or inconsistent) followed by one target in a response priming experiment. We employ discrete-time hazard functions of response occurrence and conditional accuracy functions to explore the temporal dynamics of sequential motor activation. In two experiments (small-N design, 12 participants, 100 trials per cell and subject), we find that (1) the earliest responses are controlled exclusively by the first prime if primes are presented in quick succession, (2) intermediate responses reflect competition between primes, with the second prime increasingly dominating the response as its time of onset is moved forward, and (3) only the slowest responses are clearly controlled by the target. The current study provides evidence that sequential primes meet strict criteria for sequential response activation. Moreover, it suggests that primes can influence responses out of a memory buffer when they are presented so early that participants are forced to delay their responses.
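A discrete-time hazard function of response occurrence, as used above, gives for each latency bin the conditional probability of responding in that bin given that no response has occurred yet. A minimal sketch on fabricated reaction times (not the study's data or analysis code):

```python
# Discrete-time hazard of response occurrence:
# h(t) = (# responses in bin t) / (# responses in bin t or later).
# Responses beyond the last bin are ignored in this toy version.

def hazard(rts, bin_width, n_bins):
    counts = [0] * n_bins
    for rt in rts:
        b = int(rt // bin_width)
        if b < n_bins:
            counts[b] += 1
    h, at_risk = [], sum(counts)
    for c in counts:
        h.append(c / at_risk if at_risk else 0.0)
        at_risk -= c                # those who responded leave the risk set
    return h

# Fabricated reaction times in ms, binned into 100-ms bins
rts = [150, 180, 250, 260, 270, 350, 360, 450]
print([round(x, 3) for x in hazard(rts, 100, 5)])
# [0.0, 0.25, 0.5, 0.667, 1.0]
```

Plotting such hazards separately per prime condition is what reveals which stimulus, first prime, second prime, or target, controls the responses in each latency range.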
Micro machining with micro pencil grinding tools (MPGTs) is an emerging technology that can be used to manufacture closed microchannel structures in hard and brittle materials as well as hardened steels like 16MnCr5. At their current operating conditions, these tools have a comparatively short tool life. In previous works, MPGTs in combination with a minimum quantity lubrication (MQL) system were used to manufacture microchannels in 16MnCr5 hardened steel. These studies have shown that steel adhesions clog the abrasive layer of MPGTs, most likely resulting from insufficient lubrication. In this paper, a metalworking fluid (MWF) supply method was developed to improve the process: a submerged micro grinding process, in which machining takes place inside a pool of MWF. The effect of seven types of MWFs on material adhesions at the bottom surface of the tool is evaluated. The MWFs with equivalently good performance are then compared in a micro pendulum grinding experiment until tool failure.
1,2-unsaturated pyrrolizidine alkaloids (PAs) are natural plant constituents comprising more than 600 different structures. A major source of human exposure is thought to be cross-contamination of food, feed and phytomedicines with PA plants. In humans, laboratory and farm animals, certain PAs exert pronounced liver toxicity and can induce malignant liver tumors in rodents. Here, we investigated the cytotoxicity and genotoxicity of eleven PAs belonging to different structural classes. Although all PAs were negative in the fluctuation Ames test in Salmonella, they were cytotoxic and induced micronuclei in human HepG2 hepatoblastoma cells over-expressing human cytochrome P450 3A4. Lasiocarpine and cyclic diesters except monocrotaline were the most potent congeners both in cytotoxicity and micronucleus assays with concentrations below 3 μM inducing a doubling in micronuclei counts. Other open di-esters and all monoesters exhibited weaker or much weaker geno- and cytotoxicity. The findings were in agreement with recently suggested interim Relative Potency (iREP) factors with the exceptions of europine and monocrotaline. A more detailed micronuclei analysis at low concentrations of lasiocarpine, retrorsine or senecionine indicated that pronounced hypolinearity of the concentration–response curves was evident for retrorsine and senecionine but not for lasiocarpine. Our findings show that the genotoxic and cytotoxic potencies of PAs in a human hepatic cell line vary in a structure-dependent manner. Both the low potency of monoesters and the shape of prototype concentration–response relationships warrant a substance- and structure-specific approach in the risk assessment of PAs.
In this paper we present the comparison of experiments and numerical simulations for bubble cutting by a wire. The air bubble is surrounded by water. In the experimental setup an air bubble is injected on the bottom of a water column. When the bubble rises and contacts the wire, it is separated into two daughter bubbles. The flow is modeled by the incompressible Navier–Stokes equations. A meshfree method is used to simulate the bubble cutting. We have observed that the experimental and numerical results are in very good agreement. Moreover, we have further presented simulation results for liquid with higher viscosity. In this case the numerical results are close to previously published results.
In selective laser melting (SLM), the variation of process parameters significantly impacts the resulting workpiece characteristics. In this study, AISI 316L was manufactured by SLM with varying laser power, layer thickness, and hatch spacing. Contrary to most studies, the input energy density was kept constant for all variations by adjusting the scanning speed. The varied parameters were evaluated at two different input energy densities. The investigations reveal that a constant energy density with varying laser parameters results in considerable differences in the workpieces' roughness, density, and microhardness. The density and the microhardness of the manufactured components can be improved by selecting appropriate values of the laser power, the layer thickness, and the hatch spacing. For this reason, the input energy density alone is not an indicator of the resulting workpiece characteristics; rather, the ratio of scanning speed, layer thickness, or hatch spacing to laser power is decisive. Furthermore, it was found that the microhardness of an additively manufactured material correlates with its relative density. In the parameter study presented in this paper, relative densities of the additively manufactured workpieces of up to 99.9% were achieved.
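Keeping the input energy density constant while varying other parameters can be illustrated with the commonly used volumetric energy density E = P/(v·h·t), with laser power P, scanning speed v, hatch spacing h, and layer thickness t. The numbers below are fabricated for illustration and are not the study's parameters:

```python
# Volumetric energy density in SLM and the speed adjustment needed to
# hold it constant when another parameter changes. All values fabricated.

def energy_density(P, v, h, t):
    """E = P / (v * h * t) in J/mm^3 for P [W], v [mm/s], h [mm], t [mm]."""
    return P / (v * h * t)

def speed_for_constant_E(E, P, h, t):
    """Scanning speed that keeps the energy density at E."""
    return P / (E * h * t)

E = energy_density(P=200, v=800, h=0.1, t=0.05)       # 50 J/mm^3
v_new = speed_for_constant_E(E, P=250, h=0.1, t=0.05)  # raise P, raise v
print(round(E, 6), round(v_new, 6))  # 50.0 1000.0
```

The study's point is precisely that the two parameter sets above, though identical in E, need not produce identical roughness, density, or microhardness.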
The weight of evidence pro/contra classifying the process-related food contaminant (PRC) acrylamide (AA) as a genotoxic carcinogen is reviewed. Current dietary AA exposure estimates reflect margins of exposure (MOEs) < 500. Several arguments support the view that AA may not act as a genotoxic carcinogen, especially not at consumer-relevant exposure levels: Biotransformation of AA into genotoxic glycidamide (GA) in primary rat hepatocytes is markedly slower than detoxifying coupling to glutathione (GS). Repeated feeding of rats with AA-containing foods, bringing about an uptake of 100 µg/kg/day of AA, resulted in a dose × time-related buildup of AA-hemoglobin (Hb) adducts, whereas GA-Hb adducts remained within the background. Since hepatic oxidative biotransformation of AA into GA was proven by simultaneous urinary mercapturic acid monitoring, it can be concluded that at this nutritional intake level any GA formed in the liver from AA is quantitatively coupled to GS and excreted as mercapturic acid in urine. In an oral single dose–response study in rats, AA induced DNA N7-GA-Gua adducts dose-dependently in the high dose range (> 100 µg/kg b.w.). At variance, in the dose range below 100 µg/kg b.w., down to the exposure levels of average consumers, DNA N7-Gua lesions were found only sporadically, without dose dependence, and at levels close to the lower bound of similar human background DNA N7-Gua lesions. No DNA damage was detected by the comet assay within this low dose range. GA is a very weak mutagen, known to predominantly induce DNA N7-GA-Gua adducts, especially in the lower dose range. There is consensus that DNA N7-GA-Gua adducts exhibit rather low mutagenic potency. The low mutagenic potential of GA has further been evidenced by comparison to preactivated forms of other process-related contaminants, such as N-nitroso compounds or polycyclic aromatic hydrocarbons, potent foodborne mutagens/carcinogens.
Toxicogenomic studies provide no evidence supporting a genotoxic mode of action (MOA), rather indicate effects on calcium signalling and cytoskeletal functions in rodent target organs. Rodent carcinogenicity studies show induction of strain- and species-specific neoplasms, with MOAs not considered likely predictive for human cancer risk. In summary, the overall evidence clearly argues for a nongenotoxic/nonmutagenic MOA underlying the neoplastic effects of AA in rodents. In consequence, a tolerable intake level (TDI) may be defined, guided by mechanistic elucidation of key adverse effects and supported by biomarker-based dosimetry in experimental systems and humans.
The political science literature on German federalism is remarkably diverse. Alongside analyses of the institutional arrangements, their changes, and the dynamics of Germany's cooperative federalism, there are numerous studies of individual policy fields that examine both the interactions between the federal government and the Länder and the variance among the Länder's policies together with its determinants. In addition, distinct research strands on parties in the federal state and on parliamentary research at the Land level have become established over recent decades. Despite this considerable research activity, several central questions of political science concerning the interplay between voters, parties, parliaments, and governments, as well as their effect on political outputs and outcomes, remain unanswered. This article argues that this is due in particular to the lack of integration of individual strands of the literature and to the still insufficient empirical data base. By systematizing the current state of the literature, the article outlines a research program aimed at a comprehensive analysis of the process of political will formation and decision-making in the German Länder, systematically addressing questions of responsiveness and feedback.
Since the h-index was invented, it has been the most frequently discussed bibliometric value and one of the most commonly used metrics to quantify a researcher's scientific output. As it gains popularity as an indication of the quality of a job applicant or an employee, it becomes ever more important to assure its correctness. Many platforms offer the h-index of a scientist as a service, sometimes without the explicit knowledge of the respective person. In this article we show that looking up the h-index of a researcher on the five most commonly used platforms, namely AMiner, Google Scholar, ResearchGate, Scopus and Web of Science, results in a variance that is in many cases as large as the average value. This is due to the varying definitions of what a scientific article is, the underlying data basis, and the varying quality of the entity recognition. To perform our study, we crawled the h-index of the world's top researchers according to two different rankings, of all Nobel Prize laureates except Literature and Peace, and of the teaching staff of the computer science department of the TU Kaiserslautern, Germany, for whom we additionally computed the h-index manually. We thus show that the individual h-indices differ to an alarming extent between the platforms. Researchers with an extraordinarily high h-index and researchers with an index appropriate to their career stage and scientific field are affected alike by these problems.
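The h-index itself is computed by a simple rule: the largest h such that h of the author's papers have at least h citations each. The platform differences reported above stem from the underlying publication and citation data, not from this formula. A minimal sketch:

```python
# h-index: largest h such that the author has h papers with
# at least h citations each.

def h_index(citations):
    cs = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cs, start=1):
        if c >= i:       # the i-th most cited paper still has >= i citations
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4
print(h_index([25, 8, 5, 3, 3]))   # 3
```

Feeding the same formula with each platform's differing publication lists and citation counts is exactly what produces the divergent values observed in the study.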
During cryogenic turning of metastable austenitic stainless steels, a deformation-induced phase transformation from γ-austenite to α’-martensite can be realized in the workpiece subsurface, which results in a higher microhardness as well as in improved fatigue strength and wear resistance. The α’-martensite content and resulting workpiece properties strongly depend on the process parameters and the resulting thermomechanical load during cryogenic turning. In order to achieve specific workpiece properties, extensive knowledge about this correlation is required. Parametric models, based on physical correlations, are only partly able to predict the resulting properties due to limited knowledge on the complex interactions between stress, strain, temperature, and the resulting kinematics of deformation-induced phase transformation. Machine learning algorithms can be used to detect this kind of knowledge in data sets. Therefore, the goal of this paper is to evaluate and compare the applicability of three machine learning methods (support vector regression, random forest regression, and artificial neural network) to derive models that support the prediction of workpiece properties based on thermomechanical loads. For this purpose, workpiece property data and respective process forces and temperatures are used as training and testing data. After training the models with 55 data samples, the support vector regression model showed the highest prediction accuracy.
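The prediction task described above, learning workpiece properties from thermomechanical loads, can be reduced to its simplest data-driven form for illustration. The sketch below fits an ordinary least squares model on fabricated force/temperature data; it is a stand-in only, since the study compares support vector regression, random forest regression, and an artificial neural network on measured data:

```python
# Predicting a workpiece property from process force F and temperature T
# with plain least squares: y ≈ w0 + w1*F + w2*T, solved via the 3x3
# normal equations. All numbers fabricated for illustration.

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            fac = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= fac * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_linear(F, T, y):
    """Least squares weights [w0, w1, w2] via normal equations."""
    n = len(y)
    sFT = sum(f * t for f, t in zip(F, T))
    A = [[n, sum(F), sum(T)],
         [sum(F), sum(f * f for f in F), sFT],
         [sum(T), sFT, sum(t * t for t in T)]]
    b = [sum(y),
         sum(f * yi for f, yi in zip(F, y)),
         sum(t * yi for t, yi in zip(T, y))]
    return solve3(A, b)

F = [100.0, 150.0, 200.0, 250.0]   # process force [N] (fabricated)
T = [20.0, 40.0, 60.0, 30.0]       # temperature [degC] (fabricated)
y = [100 + 0.5 * f + 0.2 * t for f, t in zip(F, T)]  # synthetic property

w = fit_linear(F, T, y)
pred = w[0] + w[1] * 180 + w[2] * 50   # predict at an unseen load point
print([round(wi, 6) for wi in w], round(pred, 3))
```

The nonlinear learners of the study replace the linear model, but the workflow, fit on measured load/property pairs, then predict properties for new loads, is the same.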
Within this work, we utilize the framework of phase field modeling for fracture in order to handle a very crucial issue in terms of designing technical structures, namely the phenomenon of fatigue crack growth. So far, phase field fracture models have been applied to a number of problems in the field of fracture mechanics and were proven to yield reliable results even for complex crack problems. For crack growth due to cyclic fatigue, our basic approach considers an additional energy contribution entering the regularized energy density function, accounting for crack driving forces associated with fatigue damage. In other words, the crack surface energy is not solely in competition with the time-dependent elastic strain energy but also with a contribution consisting of accumulated energies, which enables crack extension even for small maximum loads. The load-time function applied to a certain structure has an essential effect on its fatigue life. Besides the pure magnitude of a certain load cycle, it is highly decisive at which point of the fatigue life a certain load cycle is applied. Furthermore, the level of the mean load has a significant effect. We show that the model developed within this study is able to predict realistic fatigue crack growth behavior in terms of accurate growth rates and also to account for mean stress effects and different stress ratios. These are important properties that must be treated accurately in order to yield an accurate model for arbitrary load sequences, where variable-amplitude loading occurs.
Papadimitriou and Yannakakis (Proceedings of the 41st annual IEEE symposium on the Foundations of Computer Science (FOCS), pp 86–92, 2000) show that the polynomial-time solvability of a certain auxiliary problem determines the class of multiobjective optimization problems that admit a polynomial-time computable (1+ε, …, 1+ε)-approximate Pareto set (also called an ε-Pareto set). Similarly, in this article, we characterize the class of multiobjective optimization problems having a polynomial-time computable approximate ε-Pareto set that is exact in one objective by the efficient solvability of an appropriate auxiliary problem. This class includes important problems such as multiobjective shortest path and spanning tree, and the approximation guarantee we provide is, in general, best possible. Furthermore, for biobjective optimization problems from this class, we provide an algorithm that computes a one-exact ε-Pareto set of cardinality at most twice the cardinality of a smallest such set and show that this factor of 2 is best possible. For three or more objective functions, however, we prove that no constant-factor approximation on the cardinality of the set can be obtained efficiently.
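To make the notion of a one-exact ε-Pareto set concrete, the following sketch thins the exact Pareto front of a biobjective minimization problem so that the result is exact in the first objective and within a factor (1+ε) in the second. The greedy rule is illustrative only and is not the cardinality-optimal algorithm of the article:

```python
def one_exact_eps_pareto(points, eps):
    """From a set of biobjective (f1, f2) points (both minimized),
    return a subset S such that every point q is dominated within
    S in the sense p.f1 <= q.f1 (exact) and p.f2 <= (1+eps)*q.f2."""
    # exact Pareto front: sort by f1, keep points with strictly decreasing f2
    front = []
    best_f2 = float("inf")
    for f1, f2 in sorted(points):
        if f2 < best_f2:
            front.append((f1, f2))
            best_f2 = f2
    # greedy thinning: the last kept point covers all later points q
    # as long as kept.f2 <= (1+eps) * q.f2 (its f1 is already smaller)
    kept = [front[0]]
    for p in front[1:]:
        if kept[-1][1] > (1 + eps) * p[1]:
            kept.append(p)
    return kept

front = [(1, 100), (2, 60), (3, 55), (4, 30), (5, 29), (6, 10)]
print(one_exact_eps_pareto(front, eps=0.1))
# [(1, 100), (2, 60), (4, 30), (6, 10)]
```

With ε = 0 the whole front is kept; with a large ε a single point suffices — the ε knob trades cardinality against the approximation guarantee in the second objective.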
Modeling of solid-particle effects on bubble breakage and coalescence in slurry bubble columns
(2020)
Solid particles heavily affect the hydrodynamics in slurry bubble columns. The effects arise through varying breakup and coalescence behavior of the bubbles in the presence of solid particles, where particles in the micrometer range in particular promote coalescence. To simulate the gas-liquid-solid flow in a slurry bubble column, the Eulerian multifluid approach can be employed to couple computational fluid dynamics (CFD) with the population balance equation (PBE) and thus to account for breakup and coalescence of bubbles.
In this work, three approaches are presented to modify the breakup and coalescence models to account for enhanced coalescence in the coupled CFD-PBE framework. The approaches are applied to a reference simulation case with available experimental data. In addition, the impacts of the modifications on the simulated bubble size distribution (BSD) and the applicability of the approaches are evaluated. The capabilities as well as the differences and limits of the approaches are demonstrated and explained.
In the field of metal additive manufacturing (AM), one of the most used methods is selective laser melting (SLM)—building components layer by layer in a powder bed via laser. The process of SLM is defined by several parameters like laser power, laser scanning speed, hatch spacing, or layer thickness. The manufacturing of small components via AM is very difficult as it sets high demands on the powder to be used and on the SLM process in general. Hence, SLM with subsequent micromilling is a suitable method for the production of microstructured, additively manufactured components. One application for this kind of components is microstructured implants, which are typically unique and therefore well suited for additive manufacturing. In order to enable the micromachining of additively manufactured materials, the influence of the special properties of the additively manufactured material on micromilling processes needs to be investigated. In this research, a detailed characterization of additively manufactured workpieces made of AISI 316L is shown. Further, the impact of the process parameters and the build-up direction defined during SLM on the workpiece properties is investigated. The resulting impact of the workpiece properties on micromilling is analyzed and rated on the basis of process forces, burr formation, surface roughness, and tool wear. Significant differences in the results of micromilling were found depending on the geometry of the melt paths generated during SLM.
An important ingredient of any moving-mesh method for fluid-structure interaction (FSI) problems is the mesh moving technique (MMT) used to adapt the computational mesh in the moving fluid domain. An ideal MMT is computationally inexpensive, can handle large mesh motions without inverting mesh elements and can sustain an FSI simulation for extensive periods of time without irreversibly distorting the mesh. Here we compare several commonly used MMTs which are based on the solution of elliptic partial differential equations, including harmonic extension, bi-harmonic extension and techniques based on the equations of linear elasticity. Moreover, we propose a novel MMT which utilizes ideas from continuation methods to efficiently solve the equations of nonlinear elasticity and proves to be robust even when the mesh undergoes extreme motions. In addition to that, we study how each MMT behaves when combined with the mesh-Jacobian-based stiffening. Finally, we evaluate the performance of different MMTs on a popular two-dimensional FSI benchmark reproduced by using an isogeometric partitioned solver with strong coupling.
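Of the MMTs compared above, harmonic extension is the simplest: the boundary displacement is extended into the fluid domain by solving Laplace's equation. In one dimension the exact solution is a linear ramp between the boundary values, which a plain Jacobi iteration recovers; the following is a minimal sketch, not the isogeometric solver used in the study:

```python
def harmonic_extension_1d(left, right, n, iters=20000):
    """Extend boundary displacements into n interior nodes by solving
    u'' = 0 (discrete Laplace equation) with Jacobi iteration."""
    u = [0.0] * (n + 2)
    u[0], u[-1] = left, right
    for _ in range(iters):
        new = u[:]
        for i in range(1, n + 1):
            # each interior node relaxes to the average of its neighbours
            new[i] = 0.5 * (u[i - 1] + u[i + 1])
        u = new
    return u

# moving the right boundary by 1.0 while the left boundary stays fixed:
disp = harmonic_extension_1d(0.0, 1.0, n=9)
print([round(v, 3) for v in disp])  # approximately linear ramp 0.0 ... 1.0
```

In higher dimensions the same averaging principle applies, but harmonic extension then tends to distort or invert elements under large motions — the weakness that motivates the elasticity-based and continuation-based MMTs discussed above.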
This contribution presents the results of a replication study on the learning effect of tablet-supported video analysis compared to traditional teaching sequences using non-digital experimental materials in the subject areas of uniform and accelerated motion in high school physics lessons. In addition to the replication of the preliminary study results recently published in this journal (Becker et al. 2018, 2019), the investigation of the effect on the cognitive load as well as the emotional state of the students is another focal point. Compared to the preliminary study, the sample size was significantly increased from N = 109 to N = 294. The individual effects of the preliminary study could be replicated in this way. For both topics, a significant reduction of extraneous cognitive load and a positive effect on intervention-induced emotions could be demonstrated. Moreover, the theoretically founded causal relationship between emotion, cognitive load, and learning achievement could be empirically verified by means of structural equation modeling.
When machining metastable austenitic stainless steel with cryogenic cooling, a deformation-induced phase transformation from γ-austenite to α′-martensite can be realized in the workpiece subsurface. This leads to a higher microhardness and thus improved fatigue and wear resistance. A parametric and a non-parametric model were developed in order to investigate the correlation between the thermomechanical load in the workpiece subsurface and the resulting α′-martensite content. It was demonstrated that increasing passive forces and cutting forces promoted the deformation-induced phase transformation, while increasing temperatures had an inhibiting effect. The feed force had no significant influence on the α′-martensite content. With the proposed models it is now possible to estimate the α′-martensite content during cryogenic turning by means of in-situ measurement of process forces and temperatures.
Habitat fragmentation and forest management have been considered to drastically alter the nature of forest ecosystems globally. However, much uncertainty remains regarding the causative mechanisms mediating temperate forest responses, such as the forest physical environment and the structure of woody plant assemblages, despite the role these forests play in global sustainability. In this paper, we examine how both habitat fragmentation and timber exploitation via silvicultural operations affect these two factors at local and habitat spatial scales in a hyper-fragmented landscape of mixed beech forests spanning more than 1500 km² in SW Germany. Variables were recorded across 57 plots of 1000 m² covering four habitats: small forest fragments, forest edges within large control forests, as well as managed and unmanaged forest interior sites. As expected, forest habitats differed in disturbance level, physical conditions, and community structure at plot and habitat scale. Briefly, diversity of plant assemblages differed across all forest habitats (highest in edge forests) and correlated with integrative indices of edge, fragmentation, and management effects. Surprisingly, managed and unmanaged forests did not differ in terms of species richness at local spatial scale, but managed forests exhibited a clear signal of physical/floristic homogenization as species promoted by silviculture proliferated, i.e., impoverished communities at landscape scale. Moreover, the functional composition of plant communities responded to the microclimatic regime within forest fragments, resulting in a higher prevalence of species adapted to these microclimatic conditions. Our results underscore the notion that forest fragmentation and silvicultural management (1) promote changes in microclimatic regimes, (2) alter the balance between light-demanding and shade-adapted species, (3) support diverse floras across forest edges, and (4) alter patterns of beta diversity.
Hence, in human-modified landscapes edge-affected habitats can be recognized as biodiversity reservoirs in contrast to impoverished managed interior forests. Furthermore, our results ratify the role of unmanaged forests as a source of environmental variability, species turnover, and distinct woody plant communities.
Recurrent Neural Networks, in particular One-dimensional and Multidimensional Long Short-Term Memory (1D-LSTM and MD-LSTM), have achieved state-of-the-art classification accuracy in many applications such as machine translation, image caption generation, handwritten text recognition, medical imaging, and many more. However, high classification accuracy comes at high compute, storage, and memory bandwidth requirements, which make their deployment challenging, especially for energy-constrained platforms such as portable devices. In comparison to CNNs, few investigations exist on efficient hardware implementations for 1D-LSTM, especially under energy constraints, and there is no research publication on hardware architecture for MD-LSTM. In this article, we present two novel architectures for LSTM inference: a hardware architecture for MD-LSTM, and a DRAM-based Processing-in-Memory (DRAM-PIM) hardware architecture for 1D-LSTM. We present for the first time a hardware architecture for MD-LSTM and show a trade-off analysis of accuracy and hardware cost for various precisions. We implement the new architecture as an FPGA-based accelerator that outperforms an NVIDIA K80 GPU implementation in terms of runtime by up to 84× and energy efficiency by up to 1238× on a challenging dataset for historical document image binarization from the DIBCO 2017 contest and on the well-known MNIST dataset for handwritten digit recognition. Our accelerator demonstrates the highest accuracy and comparable throughput in comparison to state-of-the-art FPGA-based implementations of multilayer perceptrons for the MNIST dataset. Furthermore, we present a new DRAM-PIM architecture for 1D-LSTM targeting energy-efficient compute platforms such as portable devices. The DRAM-PIM architecture integrates the computation units in close proximity to the DRAM cells in order to maximize data parallelism and energy efficiency. The proposed DRAM-PIM design is 16.19× more energy efficient compared to the FPGA implementation. The total chip area overhead of this design is 18% compared to a commodity 8 Gb DRAM chip. Our experiments show that the DRAM-PIM implementation delivers a throughput of 1309.16 GOp/s for an optical character recognition application.
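For orientation, any such accelerator must implement the standard 1D-LSTM gate equations at each time step. The sketch below uses scalar states and hypothetical weights purely to show the data flow through the gates:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """One 1D-LSTM time step with scalar input and state.
    W maps each gate name to its (w_x, w_h, b) parameters."""
    gates = {}
    for name in ("i", "f", "o", "g"):
        w_x, w_h, b = W[name]
        pre = w_x * x + w_h * h + b
        # candidate gate "g" uses tanh; i/f/o use the logistic sigmoid
        gates[name] = math.tanh(pre) if name == "g" else sigmoid(pre)
    c_new = gates["f"] * c + gates["i"] * gates["g"]  # cell state update
    h_new = gates["o"] * math.tanh(c_new)             # hidden state output
    return h_new, c_new

# hypothetical weights; process a short input sequence
W = {"i": (0.5, 0.1, 0.0), "f": (0.3, 0.2, 1.0),
     "o": (0.4, 0.1, 0.0), "g": (0.8, 0.3, 0.0)}
h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:
    h, c = lstm_step(x, h, c, W)
print(round(h, 4), round(c, 4))
```

The sequential dependence of h and c across time steps is exactly what makes LSTM inference memory-bandwidth-bound and motivates placing compute next to DRAM.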
The application of plant suspension culture to produce valuable compounds, such as the triterpenoids oleanolic acid and ursolic acid, is a well-established alternative to the cultivation of whole plants. Cambial meristematic cells (CMCs) are a growing field of research, often showing superior cultivation properties compared to their dedifferentiated cell (DDC) counterparts. In this work, the first-time establishment of O. basilicum CMCs is demonstrated. DDCs and CMCs were cultivated in shake flasks and wave-mixed disposable bioreactors (wDBRs) and evaluated regarding triterpenoid productivity and biomass accumulation. CMCs showed characteristic small vacuoles and were found to be significantly smaller than DDCs. Productivities of oleanolic and ursolic acid of CMCs were determined at 3.02 ± 0.76 mg/(L·d) and 4.79 ± 0.48 mg/(L·d) after 19 days of wDBR cultivation, respectively. These values were consistently higher than any productivities determined for DDCs over the observed cultivation period of 37 days. Elicitation of DDCs and CMCs with methyl jasmonate in shake flasks resulted in increased product contents up to 48 h after elicitor addition, with the highest increase found in CMCs at 232.30 ± 19.33% (oleanolic acid) and 192.44 ± 18.23% (ursolic acid) after 48 h.
Where liquids are fed under high pressure into rotating systems in processes and applications, radial shaft seals reach the limits of their performance. If high relative velocities additionally occur in the sealing contacts, mechanical face seals are also no longer suitable as dynamic seals. Owing to their very high thermal resistance, rectangular seal rings made of high-performance plastics such as polyimides have become established for these applications. In their design they resemble the piston rings used in internal combustion engines and piston machines, which is why the term "piston ring" is common in English usage.
The central quantity used for the load on the rectangular seal ring is the load equivalent formed by the product of the applied fluid pressure and the relative velocity in the contact (the p · v value). The p · v value serves as a system parameter to assess the suitability of the material with respect to the friction power that can be sustained in the contact for the respective application. Previous work focused mainly on leakage formation, friction reduction, and the identification of suitable material pairings for the sealing system. Influences of positional deviations on the functionality of the seal rings were not considered there. Using an adapted test rig at the Institute of Machine Elements, Gears, and Transmissions of the Technische Universität Kaiserslautern, which serves to investigate radial shaft seals under static and dynamic displacements, the understanding of rectangular seal rings under static and dynamic displacement is to be extended.
The behavior of rectangular seal rings under static and dynamic positional deviations is determined by superimposed influences. The leakage of the sealing system depends primarily on operating quantities such as fluid pressure and on the static positional deviations. Dynamic displacements within the sealing system negatively affect the leakage behavior, although no correlation exists between leakage and the magnitude or frequency of the displacement. The cross-sectional area of the seal ring and the geometry of the groove lead to diverging operating behavior, whereby the pressure-dependent leakage formation can be superimposed by other behavior patterns.
As a consequence of globalization and migration, the number of children receiving literacy instruction in their second language (L2) is high and still increasing. Therefore, teachers need instruction methods that are effective in both L1 and L2 learners. Here, we investigate the effectiveness of a computerized training program combining phoneme perception, phonological awareness, and systematic phonics, in a sample of second-graders (N = 26) instructed in German as L2. Based on prior evidence concerning (1) literacy acquisition in L2 and (2) effects of literacy development on oral language abilities, we expected significant training effects on children’s literacy skills and vocabulary knowledge. The children of the training group worked through the program during school lessons, 20 min per day, for a period of 8 weeks. The controls continued to receive standard classroom instruction. German tests of phonological awareness, reading, spelling, and vocabulary were performed at three time points (pretest, immediate posttest, and follow-up after 9 weeks). Analyses confirmed that improvements in phonological awareness, spelling, and vocabulary between pretest and posttest were stronger in the training group when compared to the controls. For spelling and vocabulary, these effects were still significant at follow-up. Effect sizes were medium to high. For the reading measures, no group differences were found. In sum, the results yield further evidence for the effectiveness of phonics-based literacy instruction in L2 learners, and for the beneficial effects of basic literacy skills on novel word learning.
Monitoring of patient-reported outcomes and providing therapists with progress feedback has been shown to be beneficial for treatment outcomes (e.g., by preventing therapy failures). Despite recent advances in monitoring and feedback research, little is known about why some therapists benefit from feedback more than others. Addressing this issue, the present article uses the basic science literature on belief updating to propose a theoretical model for these between-therapist differences. In doing so, we provide a novel framework that allows testable hypotheses about when and how feedback on therapy progress is likely to improve treatment outcomes. In particular, we argue that the integration of feedback and its effect on therapists’ behavior depends on the weight therapists assign to their prior beliefs regarding treatment progress relative to the weight of the feedback received. We conclude by outlining some directions for future research on the underpinnings of this model, and point to some implications for the training of therapists and provision of feedback.
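The weighting argument can be made concrete with a precision-weighted average, one common formalization of belief updating; the weights and values below are hypothetical:

```python
def updated_belief(prior, feedback, w_prior, w_feedback):
    """Precision-weighted average of a therapist's prior belief about
    treatment progress and the feedback signal (illustrative model)."""
    return (w_prior * prior + w_feedback * feedback) / (w_prior + w_feedback)

# A therapist who weights the prior heavily barely moves toward the feedback,
# while one who weights the feedback heavily revises the belief substantially:
print(updated_belief(prior=0.8, feedback=0.2, w_prior=9, w_feedback=1))  # 0.74
print(updated_belief(prior=0.8, feedback=0.2, w_prior=1, w_feedback=9))  # 0.26
```

On this view, between-therapist differences in feedback benefit correspond to differences in the ratio w_prior / w_feedback rather than in the feedback itself.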
Using molecular dynamics simulation, we study the cutting of Al/Si bilayer systems. While the plasticity of metals is dominated by dislocation activity, the deformation behavior of Si crystals is governed by phase transformations—here to the amorphous phase. We find that twinning emerges as an additional major deformation mechanism in the cutting of Al crystals. Cutting of Si crystals requires thrust forces that are larger than the cutting forces in order to induce amorphization; in metals, the thrust forces are smaller than the cutting forces. When putting an Al top layer on a Si substrate, the thrust force is reduced; the opposite effect is observed if a Si top layer is put on an Al substrate. Covering an Al substrate with a thin Si top layer has the detrimental effect that the hard Si requires high pressures for cutting; as a consequence, twinning planes with intersecting directions are generated that ultimately lead to cracks in the ductile Al substrate. The crystallinity of the Si chip is strongly changed if an Al substrate is put under the Si top layer: with decreasing thickness of the Si top layer, the Si chip retains a higher degree of crystallinity.
Equations of state based on intermolecular potentials are often developed about the Lennard-Jones (LJ) potential. Many such EOS have been proposed in the past. In this work, 20 LJ EOS were examined regarding their performance on Brown's characteristic curves and characteristic state points. Brown's characteristic curves are directly related to the virial coefficients at specific state points, which can be computed exactly from the intermolecular potential. Therefore, the second and third virial coefficients of the LJ fluid were also investigated. This approach allows a comparison of available LJ EOS at extreme conditions. Physically based, empirical, and semi-theoretical LJ EOS were examined. Most investigated LJ EOS exhibit some unphysical artifacts.
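The connection between the intermolecular potential and the virial coefficients can be made explicit: the second virial coefficient follows from a one-dimensional integral over the potential and is straightforward to evaluate numerically. A sketch in reduced LJ units (the Boyle temperature, where B2 changes sign, lies near T* ≈ 3.42):

```python
import math

def lj_potential(r):
    """Lennard-Jones potential in reduced units (epsilon = sigma = 1)."""
    return 4.0 * (r ** -12 - r ** -6)

def b2(t_star, r_max=20.0, n=20000):
    """Reduced second virial coefficient of the LJ fluid,
    B2*(T*) = -2*pi * Int_0^inf (exp(-u(r)/T*) - 1) * r^2 dr,
    evaluated with the trapezoidal rule (the integrand vanishes at r = 0)."""
    dr = r_max / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        f = (math.exp(-lj_potential(r) / t_star) - 1.0) * r * r
        total += (0.5 if i == n else 1.0) * f
    return -2.0 * math.pi * total * dr

print(b2(1.0) < 0.0)  # True: attraction dominates below the Boyle temperature
print(b2(5.0) > 0.0)  # True: repulsion dominates above it
```

Because these integrals are exact consequences of the potential, they provide the reference against which the 20 LJ EOS can be checked at the characteristic state points.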
In this work, we investigate and compare the condensation behavior of hydrophilic, hydrophobic, and biphilic microgrooved silicon samples etched by reactive ion etching. The microgrooves were 25 mm long and 17–19 μm deep, with different topologies depending on the etching process. Anisotropically etched samples had 30 μm wide rectangular microgrooves and silicon ridges between them. They were either left hydrophilic or covered with a hydrophobic fluorocarbon or photoresist layer. Isotropically etched samples consisted of 48 μm wide semicircular microgrooves, 12 μm wide silicon ridges between them, and a 30 μm wide photoresist stripe centered on the ridges. The lateral dimensions were chosen to be much smaller than the capillary length of water to support drainage of droplets by coalescence rather than droplet sliding. Furthermore, to achieve a low thermal resistance of the periodic surface structure consisting of water-filled grooves and silicon ridges, the trench depth was also kept small. The dripped-off total amount of condensate (AoC) was measured for each sample for 12 h under the same boundary conditions (chamber temperature 30 °C, cooling temperature 6 °C, and relative humidity 60%). The maximum increase in AoC of 15.9% (9.6%) over the hydrophilic (hydrophobic) reference sample was obtained for the biphilic samples. In order to elucidate their unique condensation behavior, in situ optical imaging was performed at normal incidence. It shows that the drainage of droplets from the stripe's surface into the microgrooves, as well as occasional droplet sliding events, are the dominant processes clearing the surface. To rationalize this behavior, the Hough Circle Transform algorithm was implemented for image processing to obtain additional information about the transient droplet size and number distribution. Postprocessing of these data allows calculation
The measurement and assessment of indoor air quality in terms of respirable particulate constituents is relevant, especially in light of the COVID-19 pandemic and associated infection events. To analyze the indoor infection potential and to develop customized hygiene concepts, the measurement and monitoring of anthropogenic aerosol spreading is necessary. Standard laboratory equipment is usually used for indoor aerosol measurements. However, these devices are time-consuming, expensive, and unwieldy. The idea is to replace this standard laboratory equipment with low-cost sensors widely used for monitoring fine dust (particulate matter, PM). Due to the low acquisition costs, many sensors can be used to determine the aerosol load, even in large rooms. Thus, the aim of this work is to verify the measurement capability of low-cost sensors. For this purpose, two different models of low-cost sensors are compared with established laboratory measuring instruments. The study was performed with artificially prepared NaCl aerosols of well-defined size and morphology. In addition, the influence of the relative humidity, which can vary significantly indoors, on the measurement capability of the low-cost sensors is investigated. For this purpose, a heating stage was developed and tested. The results show a discrepancy in measurement capability between low-cost sensors and laboratory measuring instruments. This difference can be attributed to the partially different measuring methods as well as the different measured particle size ranges. The determined measurement accuracy is nevertheless good, considering the compactness and the acquisition price of the low-cost sensors.
The amyloid precursor protein (APP) is a key molecular component of Alzheimer's disease (AD) pathogenesis. Proteolytic APP processing generates various cleavage products, including extracellular amyloid beta (Aβ) and the cytoplasmic APP intracellular domain (AICD). Although the role of AICD in the activation of kinase signaling pathways is well established in the context of full-length APP, little is known about intracellular effects of the AICD fragment, particularly within discrete neuronal compartments. Deficits in fast axonal transport (FAT) and axonopathy documented in AD-affected neurons prompted us to evaluate potential axon-autonomous effects of the AICD fragment for the first time. Vesicle motility assays using the isolated squid axoplasm preparation revealed inhibition of FAT by AICD. Biochemical experiments linked this effect to aberrant activation of selected axonal kinases and heightened phosphorylation of the anterograde motor protein conventional kinesin, consistent with precedents showing phosphorylation-dependent regulation of motor proteins powering FAT. Pharmacological inhibitors of these kinases alleviated the AICD inhibitory effect on FAT. Deletion experiments indicated that this effect requires a sequence encompassing the NPTY motif in AICD and interacting axonal proteins containing a phosphotyrosine-binding domain. Collectively, these results provide a proof of principle for axon-specific effects of AICD, further suggesting a potential mechanistic framework linking alterations in APP processing, FAT deficits, and axonal pathology in AD.
The driving process involves many layers of planning and navigation in order to enable tractable solutions for the otherwise highly complex problem of autonomous driving. One such layer involves an inherently discrete layer of decision-making corresponding to tactical maneuvers. Inspired by this, the focus of this work is predicting high-level maneuvers for the ego-vehicle. As maneuver prediction is fundamentally feedback-structured, it requires modeling techniques that take into consideration the interaction awareness of the traffic agents involved. This work addresses this challenge by modeling the traffic scenario as an interaction graph and proposing three deep learning architectures for interaction-aware tactical maneuver prediction of the ego-vehicle. These architectures are based on graph neural networks (GNNs) for extracting spatial features among traffic agents and recurrent neural networks (RNNs) for extracting dynamic motion patterns of surrounding agents. The proposed architectures have been trained and evaluated using the BLVD dataset. Moreover, this dataset is expanded using data augmentation, data oversampling, and data undersampling approaches to strengthen the models' resilience and enhance the learning process. Lastly, we compare the proposed learning architectures for ego-vehicle maneuver prediction in various driving circumstances with various numbers of surrounding traffic agents in order to effectively verify the proposed architectures.
Nuclear inelastic scattering of synchrotron radiation is used to study the changes induced by external tensile strain on the phonon density of states (pDOS) of polycrystalline Fe samples. The data are interpreted with the help of dedicated atomistic simulations. The longitudinal phonon peak at around 37 meV and also the second transverse peak at 27 meV are decreased under strain. This is caused by the production of defects under strain. The thermodynamic properties derived from the pDOS also demonstrate a weakening of the force constants and a reduction of the mean phonon energy under strain. Remaining differences between experiment and simulation are discussed.
Spreading dynamics on lithium niobate: An example of an intrinsically charged ferroelectric surface
(2023)
Droplet wetting and manipulation are essential for the efficient functioning of many applications, ranging from microfluidics to electronic devices, agriculture, medical diagnosis, etc. As a means of manipulating droplet wetting, the effect of applying an external voltage or surface charge has been extensively exploited and is known as electrowetting. However, there also exist many materials which bear a quasi-permanent surface charge, like electrets, which are widely employed in sensors or energy storage. In addition, other materials in nature can acquire surface charge by the triboelectric effect, like human hair, natural rubber, and polymers. Nevertheless, no studies exist on spreading on this class of charged surfaces. In our work, we investigate for the first time the spreading dynamics on lithium niobate (LiNbO3) as an example of a ferroelectric material with strong spontaneous polarization (0.7 C/m²). We find a spreading behavior that differs significantly from that on classic surfaces. Spreading times can be extended significantly compared to standard surfaces, up to hundreds of seconds. Furthermore, the classic Tanner's law does not describe the spreading dynamics; instead, the evolution of the droplet radius is dominated by an exponential law. Contact angles and spreading dynamics are also polarization-dependent. They are also influenced by adsorption layers, such as those left behind by cleaning. Overall, all results indicate that adsorption layers play a significant role in the wetting dynamics of lithium niobate and possibly other charged materials where such processes are very pronounced. Possible mechanisms are discussed. Our findings are essential for the understanding of wetting on charged surfaces like ferroelectric materials in general. The knowledge of surface-charge-based wettability differences and surface-charge-specific adsorption, and its impact on wettability, can be utilized in applications like printing, microfluidics, and triboelectric nanogenerators, and to develop biocompatible components for tissue engineering.
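The contrast between the two spreading laws can be sketched directly: Tanner's power law grows without bound, whereas the exponential law observed here saturates at a final radius. The functional forms and parameter values below are purely illustrative:

```python
import math

def tanner_radius(t, k):
    """Classic Tanner's law for spreading: R(t) ~ k * t**(1/10)."""
    return k * t ** 0.1

def exponential_radius(t, r_final, tau):
    """Exponential relaxation toward a final radius, the form
    reported for the charged LiNbO3 surfaces (illustrative)."""
    return r_final * (1.0 - math.exp(-t / tau))

# Tanner's law keeps creeping upward, while the exponential law
# saturates at r_final on the time scale tau:
for t in [1, 10, 100, 1000]:
    print(t, round(tanner_radius(t, 1.0), 3),
          round(exponential_radius(t, 2.0, tau=50.0), 3))
```

Plotting radius versus time on log axes distinguishes the two regimes at a glance: a straight line of slope 1/10 for Tanner's law versus a curve flattening toward a plateau for the exponential law.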
Reactive bubble columns are omnipresent in the chemical industry. The layout of these columns is still limited by correlations, and improved simulation techniques are therefore required to describe the complex interaction between hydrodynamics and reaction. In this work, we focus on the numerical and experimental study of the influence of viscosity on bubble motion and reaction using an Euler-Lagrange framework with added oscillation and reaction models, in order to bring the basis for column layout closer to a predictive level. For comparison and validation, experimental data in various water-glycerol solutions were obtained in a cylindrical bubble column at low gas hold-up, where the main parameters such as bubble size, motion, and velocities were detected. Glycerol thereby changes the viscosity and surface tension. Further, the surface tension was modified by addition of a surfactant. The oscillating motion of bubbles in liquids of low to higher viscosity could be described using the Euler-Lagrange framework, which enables a description of industrial bubble flows. In addition, the simulations were in good agreement with reactive mass transfer investigations at higher liquid viscosity, which led to an overall lower mass transfer compared to the cases with lower viscosity.
Using molecular dynamics simulations, the adsorption and diffusion of doxorubicin drug molecules in boron nitride nanotubes are investigated. The interaction between doxorubicin and the nanotube is governed by van der Waals attraction. We find strong adsorption of doxorubicin to the wall for narrow nanotubes (radius of 9 Å). For larger radii (12 and 15 Å), the adsorption energy decreases, while the diffusion coefficient of doxorubicin increases. However, it does not reach the value of pure water, as adsorption events still hinder doxorubicin mobility. It is concluded that nanotubes wider than about 4 nm in diameter can serve as efficient drug containers for targeted delivery of doxorubicin in cancer chemotherapy.
Personalized dynamic pricing (PDP) involves dynamically setting individual-consumer prices for the same product or service according to consumer-identifying information. Despite its profitability, this pricing provokes strong negative fairness perceptions, explaining why managers are reluctant to implement it. This research provides important insights into the effect of two PDP dimensions (price individualization level and segmentation base) on fairness perceptions and the moderating role of privacy concerns. The results of two experimental studies indicate that consumers perceive individual prices as less fair than segment prices. They also evaluate location-based pricing as less fair than purchase history-based pricing. Consumer privacy concerns moderate these effects.
Purpose
To investigate the intestinal uptake of polyphenols present in apple products, we determined the cytosolic and membrane-associated contents of polyphenols after 4 hours of incubation (50 μM of each polyphenol) in the colon carcinoma cell line T84, using a novel, rapid, and convenient method based on permeabilization of the cell membrane with digitonin.
Recent Findings
The results showed that the hydroxycinnamic acids (caffeic and 5-caffeoylquinic acid) were only detected in the cytosolic fractions. In contrast, 0.3 to 8.2% of the initial concentrations (50 μM) of the flavonoids phloretin, quercetin, phloretin 2′-O-glucoside, and quercetin 3-O-rhamnoside were found in the membrane-associated fractions. In the cytosolic fractions, 0.2 to 2.9% of these compounds were detected, corresponding to 25 to 40% of the total cell-associated (cytosolic plus membrane-associated) polyphenol content.
Summary
Our results showed that after uptake, polyphenols were present in the cytosolic fraction of the cells as well as associated with the cell membrane. The presented method provides a useful in vitro tool for determining biologically active compounds in cellular fractions.
The cultivation of cyanobacteria with the addition of an organic carbon source (i.e., heterotrophic or mixotrophic cultivation) is a promising technique to overcome their slow growth rate. However, most cyanobacteria cultures are contaminated with non-separable heterotrophic bacteria. While their contribution to the biomass is rather insignificant in phototrophic cultivation, problems may arise in heterotrophic and mixotrophic mode: heterotrophic bacteria can potentially utilize carbohydrates quickly, thus preventing any benefit for the cyanobacteria. In order to estimate the advantage of supplementing a carbon source, it is essential to quantify the proportions of cyanobacteria and heterotrophic bacteria in the resulting biomass. In this work, the use of quantitative polymerase chain reaction (qPCR) is proposed. For sample preparation, a DNA extraction method for cyanobacteria was improved to provide reproducible and robust results for the group of terrestrial cyanobacteria. Two pairs of primers were used, which bind either to the 16S rRNA gene of all cyanobacteria or to that of all bacteria including cyanobacteria; this allows the proportion of cyanobacteria in the biomass to be determined. The method was established with the two terrestrial cyanobacteria Trichocoleus sociatus SAG 26.92 and Nostoc muscorum SAG B-1453-12a. As proof of concept, a heterotrophic cultivation of T. sociatus with glucose was performed. After 2 days of cultivation, the biomass proportion of the cyanobacterium had decreased to 90%; afterwards, the proportion increased again.
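One common way to turn such a two-assay qPCR readout into a proportion is the 2^(-ΔCt) relation, assuming equal amplification efficiencies for both primer pairs; whether the study used this exact calculation is not stated, and the Ct values below are purely hypothetical:

```python
# Proportion of cyanobacterial 16S copies from two qPCR assays
# (cyanobacteria-specific vs. all-bacteria primers), assuming equal
# amplification efficiency E = 2 (i.e., 100 %). Ct values are hypothetical.
def cyano_proportion(ct_cyano, ct_all_bacteria, efficiency=2.0):
    """Relative quantity via E^(-dCt): a lower Ct means more template."""
    delta_ct = ct_cyano - ct_all_bacteria
    return efficiency ** (-delta_ct)

# Example: the cyanobacteria-specific signal crosses the threshold
# 0.15 cycles later than the total-bacteria signal -> roughly 90 %.
p = cyano_proportion(20.15, 20.0)
```

Equal Ct values would give a proportion of 1.0; in practice one would also correct for differing primer efficiencies determined from standard curves.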
Phase field modeling of fracture has been in the focus of research for over a decade now. The field has gained attention largely due to features that benefit numerical simulations, even for complex crack problems. The framework has so far been applied to quasi-static and dynamic fracture of brittle as well as ductile materials, with isotropic as well as anisotropic fracture resistance. However, fracture due to cyclic mechanical fatigue, a phenomenon of great importance for the safe, durable, and economical design of structures, has only recently been considered in phase field modeling. While in early phase field models the material's fracture toughness is degraded to simulate fatigue crack growth, we present an alternative method in this work, in which the driving force for the fatigue mechanism increases due to cyclic loading. This new contribution is governed by the evolution of fatigue damage, which can be approximated by a linear law for damage accumulation, namely Miner's rule. The proposed model is able to predict the nucleation as well as the growth of a fatigue crack. Furthermore, by assessing the crack growth rates obtained from several numerical simulations with a conventional approach for the description of fatigue crack growth, it is shown that the presented model predicts realistic behavior.
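Miner's rule, used above to approximate fatigue damage accumulation, is simple enough to state directly. A minimal sketch with a hypothetical loading history (cycle counts and S-N lifetimes are illustrative):

```python
# Linear damage accumulation (Miner's rule): D = sum over blocks of n_i / N_i,
# where n_i cycles are applied at a load level whose S-N lifetime is N_i.
# Failure is conventionally predicted when D reaches 1.
def miner_damage(load_blocks):
    """load_blocks: iterable of (applied cycles n_i, cycles to failure N_i)."""
    return sum(n / N for n, N in load_blocks)

# Hypothetical loading history at three stress amplitudes.
blocks = [(1e4, 1e5), (5e3, 2e4), (2e3, 1e4)]
D = miner_damage(blocks)      # 0.1 + 0.25 + 0.2 = 0.55
failed = D >= 1.0
```

In the phase field model described above, a quantity of this kind evolves with the load cycles and drives the additional fatigue contribution to the crack driving force.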
In the field of liquid filtration, the realization of gas throughput-free cake filtration has been investigated for a long time. Cake filtration without gas throughput would lead to energy savings in general and would reduce the mechanically achievable residual moisture in filter cakes in particular. The reason why gas throughput-free filtration could not be realized with fabrics so far is that the achievable pore sizes are not small enough, and the associated capillary pressure is therefore too low. Microporous membranes, due to their smaller pore size, can prevent gas flow through open pores and cracks in the filter cake at a standard differential pressure for cake filtration of 0.8 bar. Since large-scale implementation with membranes has not yet been successful due to their inadequate mechanical strength, this work focuses on the development and testing of a novel composite material. It combines the advantages of gas throughput-free filtration using membranes with the mechanical stability of fabrics. For the production of the composites, a paste dot coating with adhesive, a common method in the textile industry, was used. Based on filtration experiments, delamination and tensile tests, as well as CT analysis, it is shown that this method is suitable for the production of composite filter materials for gas throughput-free cake filtration.
This article is dedicated to the weight set decomposition of a multiobjective (mixed-)integer linear problem with three objectives. We propose an algorithm that returns a decomposition of the parameter set of the weighted sum scalarization by solving biobjective subproblems via dichotomic search, which corresponds to a line exploration in the weight set. Additionally, we present theoretical results regarding the boundary of the weight set components that direct the line exploration. The resulting algorithm runs in output-polynomial time, i.e., its running time is polynomial in the encoding length of both the input and the output. The proposed approach can also be applied to each weight set component individually and is able to give intermediate results, which can be seen as an "approximation" of the weight set component. We compare the running time of our method with that of an existing algorithm and conduct a computational study that shows the competitiveness of our algorithm. Further, we give a state-of-the-art survey of algorithms in the literature.
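Dichotomic search, the biobjective subroutine mentioned above, recursively solves weighted-sum scalarizations between pairs of known nondominated points. A minimal sketch in which a small finite point set stands in for the actual subproblem solver (the point data are illustrative only):

```python
# Dichotomic search for the supported nondominated points of a biobjective
# minimization problem. The "solver" here just picks the best point of a
# finite set under a weighted-sum objective.
points = [(1, 9), (3, 5), (4, 4), (7, 2), (9, 1)]

def argmin_weighted(w1, w2):
    return min(points, key=lambda p: w1 * p[0] + w2 * p[1])

def dichotomic(p_left, p_right, found):
    # Weight vector orthogonal to the segment between the two known points.
    w1 = p_left[1] - p_right[1]
    w2 = p_right[0] - p_left[0]
    p = argmin_weighted(w1, w2)
    # A strictly better weighted-sum value reveals a new supported point.
    if w1 * p[0] + w2 * p[1] < w1 * p_left[0] + w2 * p_left[1] - 1e-9:
        found.add(p)
        dichotomic(p_left, p, found)
        dichotomic(p, p_right, found)

# The lexicographic extremes initialize the search.
left = min(points)                                # best in first objective
right = min(points, key=lambda p: (p[1], p[0]))   # best in second objective
supported = {left, right}
dichotomic(left, right, supported)
```

Each recursive call corresponds to one weighted-sum solve; in the article's setting this is exactly the line exploration performed inside a weight set component.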
In a (linear) parametric optimization problem, the objective value of each feasible solution is an affine function of a real-valued parameter, and one is interested in computing a solution for each possible value of the parameter. For many important parametric optimization problems, including the parametric versions of the shortest path problem, the assignment problem, and the minimum cost flow problem, however, the piecewise linear function mapping the parameter to the optimal objective value of the corresponding non-parametric instance (the optimal value function) can have super-polynomially many breakpoints (points of slope change). This implies that any optimal algorithm for such a problem must output a super-polynomial number of solutions. We provide a method for lifting approximation algorithms for non-parametric optimization problems to their parametric counterparts that is applicable to a general class of parametric optimization problems. The approximation guarantee achieved by this method for a parametric problem is arbitrarily close to the approximation guarantee of the algorithm for the corresponding non-parametric problem. It outputs polynomially many solutions and has polynomial running time if the non-parametric algorithm has polynomial running time. In the case that the non-parametric problem can be solved exactly in polynomial time or that an FPTAS is available, the method yields an FPTAS. In particular, under mild assumptions, we obtain the first parametric FPTAS for each of the specific problems mentioned above and a (3/2 + ε)-approximation algorithm for the parametric metric traveling salesman problem. Moreover, we describe a post-processing procedure that, if the non-parametric problem can be solved exactly in polynomial time, further decreases the number of returned solutions such that the method outputs at most twice as many solutions as needed at minimum for achieving the desired approximation guarantee.
Laser-based powder bed fusion (L-PBF) is a promising technology for the production of near-net-shape metallic components. The high surface roughness and the comparatively low dimensional accuracy of such components, however, usually require finishing by a subtractive process such as milling or grinding in order to meet the requirements of the application. Materials manufactured via L-PBF are characterized by a unique microstructure and anisotropic material properties, and these specific properties could also affect the subtractive processes themselves. In this paper, the effect of L-PBF on the machinability of the aluminum alloy AlSi10Mg during milling is explored. The chips, the process forces, the surface morphology, the microhardness, and the burr formation are analyzed in dependence on the manufacturing parameter settings used for L-PBF and on the direction of feed motion of the end mill relative to the build-up direction of the parts. The results are compared with conventionally cast AlSi10Mg. The analysis shows that L-PBF influences the machinability: differences between the reference and the L-PBF AlSi10Mg were observed in the chip form, the process forces, the surface morphology, and the burr formation. The initial manufacturing method of the part thus needs to be considered during the design of the finishing process to achieve suitable results.
Various regulatory initiatives (such as the pan-European PRIIP-regulation or the German chance-risk classification for state subsidized pension products) have been introduced that require product providers to assess and disclose the risk-return profile of their issued products by means of a key information document. We will in this context outline a concept for a (forward-looking) simulation-based approach and highlight its application and advantages. For reasons of comparison, we further illustrate the performance of approximation methods based on a projection of observed returns into the future such as the Cornish–Fisher expansion or bootstrap methods.
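The Cornish-Fisher expansion mentioned above approximates a quantile of a non-normal return distribution by correcting the standard normal quantile for skewness and excess kurtosis. A minimal sketch of the standard fourth-order expansion; the parameter values are illustrative, not taken from the paper:

```python
from statistics import NormalDist

# Cornish-Fisher expansion: adjust the standard normal quantile z for
# skewness s and excess kurtosis k, then rescale by mean and volatility.
def cornish_fisher_quantile(p, mu, sigma, skew, ex_kurt):
    z = NormalDist().inv_cdf(p)
    z_cf = (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * ex_kurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)
    return mu + sigma * z_cf

# With zero skew and zero excess kurtosis the formula reduces to the
# plain normal quantile (hypothetical annual return: mu=4 %, sigma=20 %).
q = cornish_fisher_quantile(0.05, 0.04, 0.2, 0.0, 0.0)
```

Negative skew pushes the lower quantile further down, which is why such corrections matter for the risk figures in key information documents.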
A detailed study of a cylinder activation concept by efficiency loss analysis and 1D simulation
(2020)
Cylinder deactivation is a well-known measure for reducing fuel consumption, especially when applied to gasoline engines. Mostly, such systems are designed to deactivate half of the number of cylinders of the engine. In this study, a new concept is investigated for deactivating only one out of four cylinders of a commercial vehicle diesel engine (“3/4-cylinder concept”). For this purpose, cylinders 2–4 of the engine are operated in “real” 3-cylinder mode, thus with the firing order and ignition distance of a regular 3-cylinder engine, while the first cylinder is only activated near full load, running in parallel to the fourth cylinder. This concept was integrated into a test engine and evaluated on an engine test bench. As the investigations revealed significant improvements for the low-to-medium load region as well as disadvantages for high load, an extensive numerical analysis was carried out based on the experimental results. This included both 1D simulation runs and a detailed cylinder-specific efficiency loss analysis. Based on the results of this analysis, further steps for optimizing the concept were derived and studied by numerical calculations. As a result, it can be concluded that the 3/4-cylinder concept may provide significant improvements of real-world fuel economy when integrated as a drive unit into a tractor.
Several studies now document the disproportionate distribution of environmental pollution across different groups, but many are based on aggregated data or subjective pollution measures. In this study, we describe the air quality disadvantage of migrants in Germany using objective pollution data linked to nationally representative individual-level survey data. We intersect 1 × 1 km2 grid geo-references from the German General Social Survey (ALLBUS) 2014, 2016, and 2018 with 2 × 2 km2 estimates of annually averaged air pollution by the German Environment Agency for nitrogen dioxide, ozone, and particulate matter. Respondents with a migration background are exposed to higher levels of nitrogen dioxide and particulate matter than people of German descent. Urbanity of residence partly explains these differences, up to 81 per cent for particulate matter and about 30 per cent for other pollutants. A larger proportion of immigrants live in larger cities, which are more prone to high levels of air pollution. This is especially true for second-generation migrants. Income differences, on the other hand, do not explain the migrant disadvantage. In city fixed effects models, the patterns for migration background point unambiguously in the direction of environmental disadvantage for all pollutants except ozone. However, the within-municipality associations are weak.
Increased bat hunting at polluted streams suggests chemical exposure rather than prey shortage
(2023)
Streams and their riparian areas are important habitats and foraging sites for bats feeding on emergent aquatic insects. Chemical pollutants entering freshwater streams from agricultural and wastewater sources have been shown to alter aquatic insect emergence, yet little is known about how this impacts insectivorous bats in riparian areas. In this study, we investigate the relationships between the presence of wastewater effluent, in-stream pesticide toxicity, the number of emergent and flying aquatic insects, and the activity and hunting behaviour of bats at 14 streams in southwestern Germany. Stream sites were located in riparian forests, sheltered from direct exposure to pollutants from agricultural and urban areas. We focused on three bat species associated with riparian areas: Myotis daubentonii, M. cf. brandtii, and Pipistrellus pipistrellus. We found that streams with higher pesticide toxicity and more frequent detection of wastewater also tended to be warmer and have higher nutrient and lower oxygen concentrations. We did not observe a reduction of insect emergence, bat activity or hunting rates in association with pesticide toxicity and wastewater detections. Instead, the activity and hunting rates of Myotis spp. were higher at more polluted sites. The observed increase in bat hunting at more polluted streams suggests that instead of reduced prey availability, chemical pollution at the levels measured in the present study could expose bats to pollutants transported from the stream by emergent aquatic insects.
The Arctic is undergoing strong environmental changes, affecting species and whole biological communities. To assess the impact on these communities, including their composition and functions, we need more information on their current distribution and biology. In the High-Arctic tundra, dung from animals such as muskoxen (Ovibos moschatus) is a relatively understudied microhabitat that may be attractive for organisms like dung-feeding insects as well as gastrointestinal parasites. Using a DNA barcoding approach, we examined muskox droppings from two Greenlandic regions for dung-dwelling invertebrates. In 15% of all samples, we found the DNA of insect species in the orders Diptera and Lepidoptera. The saprophagous Diptera colonized dung differently in west versus north-east Greenland and in summer versus winter. In addition, we found muskox dung harbouring endoparasitic nematodes in samples from both regions. However, we could not find traces of saprophagous arthropods from the soil sphere, such as collembolans and mites. Our pilot study sheds first light on the invertebrates living in this neglected Arctic microhabitat.
We present a model predictive control (MPC) algorithm for online time-optimal trajectory planning of cooperative robotic manipulators. Robotic arms sharing a common confined operational space are exposed to high inter-robot collision risks. For collision avoidance, a smooth approximation of the robot geometry by Bézier curves is applied, utilizing velocity constraints and tangent separating planes and enabling efficient generation of robot trajectories in real time. The proposed optimization algorithm is validated on an experimental setup consisting of two collaborative robotic arms performing synchronous pick-and-place tasks.
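The Bézier-curve geometry approximation above rests on evaluating curves from their control points, e.g., with De Casteljau's algorithm. A minimal sketch with a hypothetical 2D control polygon (the actual setup is 3D and embedded in the MPC constraints):

```python
# De Casteljau's algorithm: evaluate a Bezier curve given by its control
# points at parameter u in [0, 1] by repeated linear interpolation.
def de_casteljau(control_points, u):
    pts = [list(p) for p in control_points]
    while len(pts) > 1:
        pts = [[(1 - u) * a + u * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical cubic control polygon enclosing a robot link (2D for brevity).
ctrl = [(0.0, 0.0), (0.5, 1.0), (1.5, 1.0), (2.0, 0.0)]
mid = de_casteljau(ctrl, 0.5)   # point on the curve at u = 0.5
```

A useful property for collision constraints is that the curve stays inside the convex hull of its control points, so separating planes between the control polygons of two arms also separate the curves themselves.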
Aquatic emergent insect communities form an important link between aquatic and terrestrial ecosystems, yet studying them is costly and time-consuming, as they are usually diverse and superabundant. Metabarcoding is a valuable tool to investigate arthropod community compositions; however, high-throughput applications such as biomonitoring require cost-effective and user-friendly procedures. To investigate whether the time-consuming and labour-intensive DNA extraction step can be omitted in metabarcoding, we studied the difference in detection rates and individual read abundances between standard DNA extraction and direct PCR protocols. Metabarcoding with and without DNA extraction was performed on artificially created communities of known composition as well as on natural communities, both of the dipteran family Chironomidae, to compare detection rates, individual read abundances, and presence-absence community composition. We found that the novel direct PCR metabarcoding approach presented here did not alter detection rates and had only a minor effect on individual read abundances in artificially created communities. Furthermore, the presence-absence community compositions of natural chironomid communities were highly comparable between the two approaches. In conclusion, we showed that direct PCR protocols can be applied in chironomid metabarcoding, with possible application to a wider range of arthropod taxa, enabling us to study communities more efficiently in the future.
Heme oxygenase-1 (HO-1) is an enzyme located at the endoplasmic reticulum which is responsible for the degradation of cellular heme into ferrous iron, carbon monoxide, and biliverdin-IXa. In addition to this main function, the enzyme is involved in many other homeostatic, toxic, and cancer-related mechanisms. In this review, we first summarize the importance of HO-1 in physiology and pathophysiology with a focus on the digestive system. We then detail its structure and function, followed by a section on the regulatory mechanisms that control HO-1 expression and activity. Moreover, HO-2, a further important HO isoform, is discussed, highlighting the similarities to and differences from HO-1. Subsequently, we describe the direct and indirect cytoprotective functions of HO-1 and its breakdown products carbon monoxide and biliverdin-IXa, but also highlight possible pro-inflammatory effects. Finally, we address the role of HO-1 in cancer, with a particular focus on colorectal cancer. Here, relevant pathways and mechanisms are presented through which HO-1 impacts tumor induction and tumor progression, including oxidative stress and DNA damage, ferroptosis, cell cycle progression and apoptosis, as well as migration, proliferation, and epithelial-mesenchymal transition.
Physical vapor deposition (PVD) coatings are vital for enhancing wear resistance. However, this technology faces challenges when coating inaccessible surfaces due to its line-of-sight characteristic. A potential remedy is utilizing triboactive CrAlMoN coatings, which form a tribofilm in the contact zone when applied to one contact partner together with a suitable lubricant. This tribofilm can subsequently safeguard inaccessible yet tribologically stressed surfaces. One of the main applications for this method is roller chain drives, whose longevity depends on joint wear and the resulting chain elongation. Large-scale pin coatings have proven effective in curbing wear and prolonging chain life, but the inaccessibility of the bushes complicates standard PVD coating procedures. Triboactive coatings offer the possibility of forming transfer layers on the bushes, thereby enhancing friction reduction and wear protection. Experimental material studies for chain drives can be cost-intensive due to their complexity and the number of components involved. This article demonstrates that CrAlN and CrAlMoN coatings in combination with greases containing phosphorus and sulfur additives can reduce friction and wear in chain joints. Furthermore, it is shown that a reasonable selection of tribometer tests can significantly reduce costs. Comparing the results of pin-on-disk tribometer tests with component tests shows that model tests cannot completely replace component tests, but the combination offers an efficient way to optimize test matrices. Triboactive coatings like CrAlMoN hold promise for addressing the challenge of inaccessible surfaces, and a reasonable selection of tribometer tests can help mitigate the costs of experimental studies, making these coatings a more practical solution.
Since the end of the Cold War, Germany has been considered a largely safe country. But increasing terrorism, the COVID-19 pandemic, the war in Ukraine, and national flood disasters with serious consequences have led to growing attention to civil protection issues in politics and society. The reduction of possible risks is closely linked to rescue forces being well trained and the population being adequately informed about how to behave during disasters; adult learning is therefore central to disaster risk reduction. This paper examines which works on disaster protection in Germany have been produced by adult and continuing education research since the Second World War. The results of this first comprehensive scoping review in the field show that pedagogical issues in disaster risk reduction are addressed by various disciplines. Most works are practice-oriented and aim at the development of pedagogical concepts; high-quality scientific works that are empirically based or oriented towards the development of theoretical foundations are scarce. Overall, this in-depth review thus reveals a large research gap in adult pedagogical research on disaster education in Germany.
This pilot study aimed to investigate the use of sensorimotor insoles for pain reduction across different orthopedic indications, as well as the effect of wearing duration on the development of pain. Three hundred and forty patients were asked about their pain perception using a visual analog scale (VAS) in a pre-post analysis. Three intervention durations were defined for the post measurement (VAS_post): up to 3 months, 3 to 6 months, and more than 6 months. The results show significant differences for the within-subject factor "time of measurement" as well as for the between-subject factors indication (p < 0.001) and wearing duration (p < 0.001). No interaction was found between indication and time of measurement (model A) or between wearing duration and time of measurement (model B). The results of this pilot study must be interpreted cautiously and critically, but may support the hypothesis that sensorimotor insoles could be a helpful tool for subjective pain reduction. The missing control group and methodological weaknesses, such as the lack of control for confounders like natural healing processes and complementary therapies, must be taken into account. Based on these experiences and findings, an RCT and a systematic review will follow.
The modified fouling index (MFI) is a crucial characteristic for assessing the fouling potential of reverse osmosis (RO) feed water. Although the MFI is widely used, the estimation time required for filtration and data evaluation is still relatively long. In this study, the relationship between the MFI and instantaneous spectroscopic extinction measurements was investigated. Since both measurements show a linear correlation with particle concentration, it was assumed that a change in the MFI can be detected by monitoring the optical density of the feed water. To prove this assumption, a test bench for a simultaneous measurement of the MFI and optical extinction was designed. Silica monospheres with sizes of 120 nm and 400 nm and mixtures of both fractions were added to purified tap water as model foulants. MFI filtration tests were performed with a standard 0.45 µm PES membrane, and a 0.1 µm PP membrane. Extinction measurements were carried out with a newly designed flow cell inside a UV–VIS spectrometer to get online information on the particle properties of the feed water, such as the particle concentration and mean particle size. The measurement results show that the extinction ratio of different light wavelengths, which should remain constant for a particulate system, independent of the number of particles, only persisted at higher particle concentrations. Nevertheless, a good correlation between extinction and MFI for different particle concentrations with restrictions towards the ratio of particle and pore size of the test membrane was found. These findings can be used for new sensory process monitoring systems, if the deficiencies can be overcome.
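The wavelength-ratio criterion described above rests on the assumption that, in the linear (Beer-Lambert-like) regime, extinction scales with particle number concentration, so the ratio of extinctions at two wavelengths is concentration-independent. A minimal sketch with hypothetical cross sections and concentrations (not the measured values):

```python
# Extinction in the linear regime: optical density ~ N * sigma * L, where
# N is the number concentration, sigma the extinction cross section at a
# given wavelength, and L the optical path length. All values hypothetical.
sigma_450, sigma_650 = 2.3e-14, 0.9e-14   # cross sections at 450/650 nm [m^2]
path_length = 0.01                        # m

def extinction(n_per_m3, sigma):
    return n_per_m3 * sigma * path_length

# The two-wavelength ratio is independent of concentration in this regime,
# which is exactly the constancy the study tested (and found violated at
# low particle concentrations).
ratios = [extinction(n, sigma_450) / extinction(n, sigma_650)
          for n in (1e12, 1e13, 1e14)]
```

Deviations of this ratio from a constant value therefore flag a departure from the linear regime rather than a change in the particle system itself.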
River ecosystems are being threatened by rising temperatures, aridity, and salinity due to climate change and increased water abstraction. These threats also put human well-being at risk, as people and rivers are closely connected, particularly in water-scarce regions. We aimed to investigate the relationship between human well-being and the biological and physico-chemical quality of river water, using the arid Draa River basin as a case study. Physico-chemical water measurements, biological monitoring of aquatic macroinvertebrates, and household surveys were used to assess the state of the river water, the ecosystem, and human well-being, as well as the associations between them. Salinity levels exceeded the maximum permissible values for drinking water at 35 % and for irrigation water at 12 % of the sites. Salinity and low flow were associated with low biological quality. Human satisfaction with water quantity and quality, agriculture, the natural environment, and overall life satisfaction were low, particularly in the Middle Draa, where 89 % of respondents reported emotional distress due to water salinity and scarcity. Drinking and irrigation water quality was generally rated lower in areas characterized by higher levels of water salinity and scarcity. The study found positive associations between the river water quality and biological quality indices, but no significant association between these factors and human satisfaction. These findings suggest that the relationship between human satisfaction and the biological and physico-chemical river water quality is complex and that a more comprehensive approach to human well-being is likely needed to establish relationships.
The impact of cognitive and motivational resources on engagement with automated formative feedback
(2024)
The effectiveness of automated formative feedback highly depends on student feedback engagement, which is largely determined by learners' cognitive and motivational resources. Yet, most studies have investigated either only cognitive resources (e.g., mental effort) or only motivational resources (e.g., expectancy-value-cost variables). The purpose of this study is to examine the development (indicated by time) and the relationship of 1) cognitive, 2) affective, and 3) behavioral feedback engagement as a function of cognitive and motivational resources in a computer-based learning environment with automated formative feedback. Data were collected from N = 330 German B.Ed. Elementary Education students who worked on summarizing texts in four consecutive sessions. Previously invested mental effort (t-1) affected situational expectancy and cost, but not situational value. 1) Cognitive feedback engagement was positively associated with previous performance but associated with neither cognitive nor motivational resources. 2) Affective feedback engagement was positively associated with intrinsic value and negatively associated with situational expectancies, invested mental effort, and previous performance. 3) Behavioral feedback engagement was positively associated with situational expectancies and invested mental effort. This study contributes to the understanding of students' cognitive and motivational structures when engaging with automated formative feedback.
Controlling has not yet found its place in business administration and thus as an academic discipline; indeed, it is not even generally settled whether Controlling is a scientific discipline at all. For recognition as a scientific subdiscipline, one would, following Kant, have to determine precisely "that which distinguishes it, which it shares with no other, and which is therefore peculiar to it." The attempt at such a "determination" is characteristic of the scholarly engagement with "Controlling" in the German-speaking world.
After a systematizing overview of previous attempts at conceptualization and a critical appraisal of them, the failure of these efforts over the last 50 years leads to the conclusion that the attempt to conceptualize "Controlling" in relation to "classical" business administration has not succeeded. If one does not want to abandon the attempt at a scientific conceptualization altogether, it may be sensible to resort to an alternative frame of reference. One such frame of reference is the concept of Privatwirtschaftslehre (PWL). This is subsequently used to establish a different foundation for Controlling and, on this basis, to formulate a Controlling approach that overcomes the previously criticized weaknesses.
Recent research suggests that the common core of all aversive traits can be understood through the Dark Factor of Personality (D). Previously, the overlap among aversive traits has also been described as the low pole of HEXACO Honesty-Humility. Relying on longitudinal data and a range of theoretically derived outcome criteria, we test in four studies (total N > 2,500) whether and how D and low Honesty-Humility differ. Although the constructs shared around 66% of variance (meta-analytically aggregated across all studies), they longitudinally differently accounted for diverse aversive traits and showed theoretically meaningful and distinct associations to pretentiousness, distrust-related beliefs, and empathy. These results suggest that D and low Honesty-Humility are best understood as strongly overlapping, yet functionally different and nomologically distinct constructs.
With the transition of fluid-capillary-based "Lab on a chip 1.0" concepts in analytical chemistry to "Lab on a chip 2.0" approaches relying on distinct fluid droplets ("digital microfluidics", DMF), the need for reliable methods for droplet actuation has increasingly come into focus. One possible approach is based on "electrowetting on dielectric" (EWOD). This technique has the disadvantage that all possible desired later positions of the droplets on the chip have to be defined prior to chip realization, because one of the EWOD electrode layers has to be structured accordingly. "Optoelectrowetting" (OEW) goes a step further in the sense that the later droplet positions do not have to be known beforehand, and none of the electrode layers has to be structured. Instead, the electrical parameters of the layer sequence can be altered locally by an impinging (and movable) light spot. Although some research groups have succeeded in demonstrating OEW actuation of droplets, the optimization of the relevant parameters of the layer sequence and the droplet – at least half a dozen parameters altogether – is tedious and not straightforward. In this contribution, the equations governing OEW are revisited and extended for optimization purposes, e.g., by a numerical implementation of the experimentally well-known saturation of the contact-angle change. Additionally, a Nelder-Mead algorithm is applied to identify the parameters on which the optimization has to focus in order to maximize contact-angle changes and, thus, the mechanical forces on the droplets. The numerical investigation yields several findings, e.g., that the droplet's contact area on the dielectric layer strongly influences the contact-angle change and whether the droplet is pulled or pushed. Moreover, the interplay between the frequency and the amplitude of the applied rectangular alternating voltage is important for the optimization.
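The Nelder-Mead parameter search mentioned above can be illustrated with a small, self-contained sketch: a toy Young-Lippmann-type contact-angle model with an ad-hoc saturation cap, whose voltage amplitude and dielectric thickness are tuned by SciPy's Nelder-Mead implementation. All model constants (initial angle, permittivity, surface tension, bounds, saturation cap) are illustrative assumptions, not the layer-sequence equations of the paper.

```python
import numpy as np
from scipy.optimize import minimize

EPS0 = 8.854e-12   # vacuum permittivity (F/m)
GAMMA = 0.072      # surface tension of water (N/m), assumed
THETA0 = 120.0     # initial contact angle (deg), assumed
SAT = 60.0         # saturation cap on the angle change (deg), assumed

def neg_contact_angle_change(params):
    """Toy Young-Lippmann contact-angle change, capped to mimic saturation.

    params = (V, d): voltage amplitude (V) and dielectric thickness (m).
    Illustrative surrogate only, not the full OEW layer-sequence model.
    """
    V, d = params
    if not (0.0 < V <= 300.0 and 1e-7 <= d <= 1e-5):
        return 1e3  # crude penalty keeps Nelder-Mead in a plausible range
    cos_theta = np.clip(np.cos(np.radians(THETA0))
                        + EPS0 * 3.0 * V**2 / (2.0 * GAMMA * d), -1.0, 1.0)
    d_theta = THETA0 - np.degrees(np.arccos(cos_theta))
    return -min(d_theta, SAT)  # negate: minimizing maximizes the change

# Start at a moderate voltage and let Nelder-Mead push the change upward.
res = minimize(neg_contact_angle_change, x0=[50.0, 5e-6], method="Nelder-Mead")
```

The derivative-free simplex search is a natural fit here because the saturation cap and the penalty make the objective non-smooth.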
In recent years, the automotive industry has shifted from purely combustion-engine-driven vehicles towards hybridization due to the introduction of CO2 emission legislation. Hybrid powertrains also represent an important pillar and starting point on the journey towards zero emissions and full electrification. Fulfilling the most recent emission standards requires efficient control strategies for the engine that are capable of real-time operation. Model accuracy is one of the main parameters that directly influence the performance of such control strategies. Specific methodologies developed in the past, such as physically or phenomenologically based approaches, have already facilitated the modeling of the combustion engine. Even though these models can accurately predict emissions under steady-state conditions, their calibration is time-consuming and their predictions during transient engine operation are still not sufficiently reliable. The major contribution of the current work is to clarify and apply recent advancements in data-driven modeling techniques, especially in time-series forecasting with feedforward neural networks (FFNNs) and long short-term memory networks (LSTMs), to address the limitations mentioned above and to compare the different approaches. The quantity and quality of data are significant challenges for data-driven modeling. This paper studies the modeling of gasoline engine emissions using FFNNs and LSTMs. The data quantity and quality requirements are studied based on a portable emission measurement system (PEMS) measuring at 1 Hz, complemented by analyses on an engine test bench with a hardware-in-the-loop (HiL) setup, which allows the measurement frequency to be increased by a factor of five using more sophisticated devices. Subsequently, the training and validation of the FFNNs and LSTMs are outlined, and finally, the model accuracy is discussed.
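Time-series forecasting with FFNNs or LSTMs, as described above, hinges on turning the 1 Hz emission signal into supervised input/output pairs. A minimal sketch of that windowing step (the lag count, the one-step horizon, and the synthetic NOx trace are illustrative assumptions, not the paper's data):

```python
import numpy as np

def make_windows(series, n_lags, horizon=1):
    """Slice a 1-D time series into (X, y) training pairs.

    X[i] holds n_lags consecutive samples; y[i] is the value `horizon`
    steps after the window, i.e. the target an FFNN or LSTM learns to predict.
    """
    X, y = [], []
    for t in range(len(series) - n_lags - horizon + 1):
        X.append(series[t:t + n_lags])
        y.append(series[t + n_lags + horizon - 1])
    return np.asarray(X), np.asarray(y)

# Synthetic stand-in for a 10-minute emission trace sampled at 1 Hz.
rng = np.random.default_rng(0)
nox = np.sin(np.linspace(0.0, 20.0, 600)) + 0.05 * rng.standard_normal(600)
X, y = make_windows(nox, n_lags=10)
```

For an FFNN the windows are fed as flat feature vectors; for an LSTM the same array is reshaped to (samples, timesteps, features).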
In micro milling, size effects such as the ratio of uncut chip thickness to cutting edge radius result in high mechanical stresses. The tools need to be able to withstand these with as little tool wear as possible. Cemented carbides are currently the tool substrates of choice. Technical ceramics are highly wear-resistant as well, but they are not yet used in micro milling. To utilize their potential in micro cutting processes, we previously identified Y-TZP as the best-suited ceramic for this purpose. Compared to cemented carbide, Y-TZP tools exhibit only marginal tool wear when micro milling PMMA. To investigate whether the 3Y-TZP substrate characteristics influence the performance of all-ceramic micro end mills, three different substrate materials were used to manufacture tools that were tested by micro milling of PMMA. Further varied factors were the feed per tooth and the spindle speed. The initial cutting edge sharpness of the tools and the tool wear were used to quantify the results. One substrate was found to yield lower cutting edge radii and a more stable manufacturing process than the others. In addition, a feed-per-tooth-dependent wear behavior was observed.
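The size effect named above is commonly quantified via the ratio of uncut chip thickness to cutting edge radius. A small sketch under textbook simplifications (the h(phi) = f_z * sin(phi) chip-thickness relation and the example values are assumptions, not the study's measurements):

```python
import math

def uncut_chip_thickness(feed_per_tooth_um, phi_rad):
    """Classical simplified milling relation h(phi) = f_z * sin(phi)."""
    return feed_per_tooth_um * math.sin(phi_rad)

def size_effect_ratio(feed_per_tooth_um, edge_radius_um):
    """Ratio h_max / r_beta; values well below 1 indicate that ploughing
    rather than shearing dominates, raising the mechanical tool load."""
    h_max = uncut_chip_thickness(feed_per_tooth_um, math.pi / 2.0)
    return h_max / edge_radius_um
```

With a hypothetical feed per tooth of 1 µm and an edge radius of 2 µm the ratio is 0.5, i.e. the cutting edge is blunter than the chip is thick.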
Sensing location information in indoor scenes requires high accuracy and is a challenging task, mainly because of multipath and non-line-of-sight (NLoS) propagation. GNSS signals do not penetrate well into indoor environments, so satellite-based navigation and positioning systems cannot be used for indoor positioning. Other technologies have been suggested for indoor usage, among them Wi-Fi (802.11) and 5G NR (New Radio). The primary aim of this study is to discuss the advantages and drawbacks of 5G and Wi-Fi positioning techniques for indoor localization.
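In Wi-Fi-based indoor ranging, received signal strength is often mapped to distance with the log-distance path-loss model; a minimal sketch of that inversion (the reference power at 1 m and the path-loss exponent are illustrative assumptions, and the multipath/NLoS effects discussed above make both highly environment-dependent):

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m_dbm=-40.0, path_loss_exp=2.0):
    """Invert the log-distance model RSSI(d) = RSSI(1 m) - 10*n*log10(d).

    Returns the estimated transmitter distance in meters. Both default
    values are assumed calibration constants, not measured ones.
    """
    return 10.0 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exp))
```

For example, an access point heard at -60 dBm would be placed about 10 m away under these defaults; NLoS propagation typically biases such estimates upward, which is one reason fingerprinting is often preferred indoors.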
The fluid dynamic behavior (flow rates) and hydrodynamic behavior (local droplet size distributions and local holdup) of a continuous DN300 pump-mixer were investigated using water as the continuous phase and paraffin oil as the dispersed phase. The influence of the impeller speed (375 to 425 rpm), the feed phase ratio (10 to 30 volume percent), and the total flow rate (0.5 to 2.3 L/min) was investigated by measuring the pumping height, the local holdup of the disperse phase, and the droplet size distribution (DSD). The latter was measured at three different vessel positions using an image-based telecentric shadowgraphic technique. The droplet diameters were extracted from the acquired images using a neural network. The Sauter mean diameters were calculated from the DSD and correlated with an extended model based on Doulah (1975), considering the impeller speed, the feed phase ratio, and additionally the flow rate. The new correlation describes an extensive database of 155 fluid dynamic and hydrodynamic experiments within a 15 % error range.
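The Sauter mean diameter mentioned above follows directly from the detected droplet diameters. A minimal sketch of the standard d32 = sum(d_i^3) / sum(d_i^2) computation (the example diameters are made up, not data from the pump-mixer study):

```python
import numpy as np

def sauter_mean_diameter(diameters_mm):
    """d32 = sum(d_i^3) / sum(d_i^2): the diameter of a sphere with the
    same volume-to-surface-area ratio as the whole droplet population."""
    d = np.asarray(diameters_mm, dtype=float)
    return (d ** 3).sum() / (d ** 2).sum()
```

Because larger droplets dominate both sums, d32 weights the distribution toward its coarse end, e.g. three droplets of 1, 1, and 2 mm give d32 = 10/6 ≈ 1.67 mm, well above the arithmetic mean of 1.33 mm.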