The number of sensors used in modern devices is rapidly increasing, and interacting with sensors demands analog-to-digital conversion (ADC). Conventional ADCs in leading-edge technologies face many issues due to signal swings, manufacturing deviations, noise, etc. ADC designers are moving to the time domain and to digital design techniques to deal with these issues. This work pursues a novel self-adaptive spiking neural ADC (SN-ADC) design with promising features: robustness to technology scaling issues, low-voltage operation, low power, and noise-robust conditioning. The SN-ADC uses spike time to carry the information. Therefore, it can be effectively translated to aggressive new technologies to implement reliable advanced sensory electronic systems. The SN-ADC supports self-x properties (self-calibration, self-optimization, and self-healing) and the machine learning required for the Internet of Things (IoT) and Industry 4.0. We have designed the main part of the SN-ADC, an adaptive spike-to-digital converter (ASDC). The ASDC is based on a self-adaptive complementary metal–oxide–semiconductor (CMOS) memristor. It mimics the functionality of biological synapses, long-term plasticity, and short-term plasticity. The key advantage of our design is the entirely local unsupervised adaptation scheme. The adaptation scheme consists of two hierarchical layers; the first layer is self-adapted, and the second layer is treated manually in this work. In our previous work, the adaptation process was based on 96 variables and therefore required considerable adaptation time to correct the synapses' weights. This paper proposes a novel self-adaptive scheme that reduces the number of variables to only four and has better adaptation capability with less delay than our previous implementation. The maximum adaptation time is reduced from 15 h 27 min in our previous work to 1 min 47.3 s in this work. Current winner-take-all (WTA) circuits suffer from high design cost and cannot identify closely spaced spikes. Therefore, a novel WTA circuit with memory is proposed. It uses 352 transistors for 16 inputs and can process spikes with a minimum time difference of 3 ns. The ASDC has been tested under static and dynamic variations. The nominal values of the SN-ADC parameters are no missing codes (NOMC), an integral non-linearity (INL) of 0.4 LSB, and a differential non-linearity (DNL) of 0.22 LSB, where LSB stands for least significant bit. However, these values degrade under dynamic and static deviations, with maximum simulated changes of 0.88 LSB, 4 LSB, and 6 codes for DNL, INL, and NOMC, respectively. The adaptation resets the SN-ADC parameters to their nominal values. The proposed ASDC is designed using X-FAB 0.35 µm CMOS technology and Cadence tools.
Potential fucoidan-based PEGylated PLGA nanoparticles (NPs) offering proper delivery of N-methyl anthranilic acid (MA, a model hydrophobic anti-inflammatory drug) have been developed via the formation of an aqueous fucoidan coating surrounding PEGylated PLGA NPs. The optimum formulation (FuP2), composed of fucoidan:m-PEG-PLGA (1:0.5 w/w) with a particle size of 365 ± 20.76 nm, a zeta potential of -22.30 ± 2.56 mV, an entrapment efficiency of 85.45 ± 7.41%, a drug loading of 51.36 ± 4.75 µg/mg of NPs, an initial burst of 47.91 ± 5.89%, and a cumulative release of 102.79 ± 6.89%, was further investigated in an in vivo anti-inflammatory study. The effect of FuP2 was assessed in a carrageenan-induced acute inflammation model in rats. The average weight of the paw edema was significantly lowered (p ≤ 0.05) by treatment with FuP2. Moreover, cyclooxygenase-2 and tumor necrosis factor-alpha immunostaining were decreased in the FuP2-treated group compared to the other groups. The levels of prostaglandin E2, nitric oxide, and malondialdehyde were significantly reduced (p ≤ 0.05) in the FuP2-treated group. A significant reduction (p ≤ 0.05) in the expression of interleukins (IL-1β and IL-6), together with an improvement in the histological findings of the paw tissues, was observed in the FuP2-treated group. Thus, fucoidan-based PEGylated PLGA–MA NPs are a promising anti-inflammatory delivery system that can be applied to other similar drugs, potentiating their pharmacological and pharmacokinetic properties.
Manipulating deformable linear objects – Vision-based recognition of contact state transitions
(1999)
A new and systematic approach to machine vision-based robot manipulation of deformable (non-rigid) linear objects is introduced. This approach reduces the computational needs by using a simple state-oriented model of the objects. These states describe the relation of the object with respect to an obstacle and are derived from the object image and its features. Therefore, the object is segmented from a standard video frame using a fast segmentation algorithm. Several object features are presented which allow the state recognition of the object while being manipulated by the robot.
Self-adaptation allows software systems to autonomously adjust their behavior during run-time by handling all possible
operating states that violate the requirements of the managed system. This requires an adaptation engine that receives adaptation
requests during the monitoring process of the managed system and responds with an automated and appropriate adaptation
response. During the last decade, several engineering methods have been introduced to enable self-adaptation in software systems.
However, these methods fail to address (1) run-time uncertainty that hinders the adaptation process and (2) the performance impact resulting from the complexity and large size of the adaptation space. This paper presents CRATER, a framework that builds an external adaptation engine for self-adaptive software systems. The adaptation engine, which is built on case-based reasoning, handles these two challenges together. The paper is supported by an experiment illustrating the benefits of the framework. The experimental results show the potential of CRATER in handling run-time uncertainty and in adaptation remembrance, which enhances performance for large adaptation spaces.
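The retrieve-reuse-retain cycle of a case-based adaptation engine can be illustrated with a toy sketch. The names and the Euclidean similarity measure below are our assumptions for illustration; they are not CRATER's actual design or API.

```python
import math

def retrieve(case_base, query, k=1):
    """Retrieve the k most similar past cases (nearest neighbours).

    `case_base` is a list of (state_vector, adaptation_response) pairs;
    `query` is the monitored state that triggered the adaptation request.
    Plain Euclidean distance stands in for whatever similarity measure
    a real case-based reasoning engine would use.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sorted(case_base, key=lambda case: dist(case[0], query))[:k]

def adapt(case_base, query):
    """Reuse the response of the closest case, then retain the new episode."""
    (_, response), = retrieve(case_base, query, k=1)
    case_base.append((query, response))  # retain step: remember this episode
    return response
```

A query state close to a previously seen overload state would thus reuse that case's response, and the new pairing is remembered for future requests.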
Postmortem Analysis of Decayed Online Social Communities: Cascade Pattern Analysis and Prediction
(2018)
Recently, many online social networks, such as MySpace, Orkut, and Friendster, have faced inactivity decay of their members, which contributed to the collapse of these networks. The reasons, mechanics, and prevention mechanisms of such inactivity decay are not fully understood. In this work, we analyze decayed and alive subwebsites from the Stack Exchange platform. The analysis mainly focuses on the inactivity cascades that occur among the members of these communities. We provide measures to understand the decay process and statistical analysis to extract the patterns that accompany the inactivity decay. Additionally, we predict cascade size and cascade virality using machine learning. The results of this work include a statistically significant difference in the decay patterns between the decayed and the alive subwebsites. These patterns mainly concern cascade size, cascade virality, cascade duration, and cascade similarity. Additionally, the contributed prediction framework showed satisfactory prediction results compared to a baseline predictor. Supported by empirical evidence, the main findings of this work are: (1) there are significantly different decay patterns in the alive and the decayed subwebsites of Stack Exchange; (2) the cascades' node degrees contribute more to the decay process than the cascades' virality, which indicates that the expert members of the Stack Exchange subwebsites were mainly responsible for the activity or inactivity of these subwebsites; (3) the Statistics subwebsite is going through decay dynamics that may lead to it becoming fully decayed; (4) the decay process is not governed by only one network measure and is better described using multiple measures; (5) decayed subwebsites were originally less resilient to inactivity decay, unlike the alive subwebsites; and (6) a network's structure in the early stages of its evolution dictates the activity/inactivity characteristics of the network.
Tracking waterborne microplastic (MP) in urban areas is a challenging task because of the various sources and transport pathways involved. Since MP occurs in low concentrations in most wastewater and stormwater streams, large sample volumes need to be captured, prepared, and carefully analyzed. Recent research in urban areas has focused mainly on MP emissions at wastewater treatment plants (WWTPs), as obvious entry points into receiving waters. However, important transport pathways under wet-weather conditions have not yet been investigated thoroughly. In addition, the lack of comprehensive and comparable sampling strategies has complicated attempts at a deeper understanding of occurrence and sources. The goal of this paper is to (i) introduce and describe sampling strategies for MP at different locations in a municipal catchment area under dry- and wet-weather conditions, (ii) quantify MP emissions from the entire catchment and two smaller ones within it, and (iii) compare the emissions under dry- and wet-weather conditions. The WWTP has a high MP removal rate (>96%), with an estimated emission rate of 189 kg/a or 0.94 g/[population equivalents (PEQ · a)], and polyethylene (PE) as the most abundant MP. The specific dry-weather emissions at a subcatchment (≈30 g/(PEQ · a)) were higher than in the WWTP influent (23 g/(PEQ · a)). Specific wet-weather emissions from a large sub-catchment with higher traffic and population densities (1952 g/(ha · a)) were higher than the emissions from a smaller catchment (796 g/(ha · a)) with less population and traffic. The results suggest that wet-weather transport pathways are likely responsible for 2–4 times more MP emissions into receiving waters than dry-weather ones, due to tire abrasion entering from streets through gullies. However, more investigations of wet-weather MP need to be carried out, considering additional catchment attributes and storm event characteristics.
In this paper we consider the stochastic primitive equations for geophysical flows subject to transport noise and turbulent pressure. Admitting very rough noise terms, the global existence and uniqueness of solutions to this stochastic partial differential equation are proven using stochastic maximal L^p-regularity, the theory of critical spaces for stochastic evolution equations, and global a priori bounds. Compared to other results in this direction, we do not need any smallness assumption on the transport noise, which acts directly on the velocity field, and we also allow rougher noise terms. The adaptation to Stratonovich-type noise and, more generally, to variable viscosity and/or conductivity is discussed as well.
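Schematically, and with notation chosen here for illustration rather than taken from the paper, the stochastic primitive equations with transport noise acting on the horizontal velocity v take the form

```latex
\begin{aligned}
\mathrm{d}v &= \Big(\Delta v - (v\cdot\nabla_H)v - w\,\partial_z v - \nabla_H P\Big)\,\mathrm{d}t
  + \sum_{n\ge 1}\Big((\phi_n\cdot\nabla)v - \nabla_H \widetilde{P}_n\Big)\,\mathrm{d}\beta_t^n,\\
\partial_z P &= 0 \quad \text{(hydrostatic balance, isothermal case)}, \qquad
\nabla_H\cdot v + \partial_z w = 0,
\end{aligned}
```

where w is the vertical velocity recovered from the divergence-free condition, P is the pressure, the coefficients \phi_n encode the transport noise, and the \widetilde{P}_n are the turbulent-pressure corrections driven by independent Brownian motions \beta^n.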
A biorefinery concept for (brewer's) spent grain is proposed in which, in contrast to existing concepts, water-soluble components are separated by pressing and used as the basis for a lactic acid fermentation with Lactobacillus delbrueckii subsp. lactis. The remaining structural carbohydrates of the spent-grain residue are converted into fermentable sugars by hydrothermal and enzymatic pretreatment. Considerably fewer by-products that can inhibit the growth of microorganisms are formed than when unpressed spent grain is used.
Simulation of Particle Interaction with Surface Microdefects during Cold Gas-Dynamic Spraying
(2022)
The cold gas-dynamic spray (CGDS) technique is utilized for repair processes of a large number of metallic components in mechanical and process engineering, such as bridges or vehicles. Fine particles impacting the component surface can be severely deformed and penetrate into defects, filling and coating them, resulting in possible protection against corrosion or crack propagation. This work focuses on the impact behavior of cold-sprayed particles on wall surfaces with microdefects in the form of cavities. The collision of fine single particles with the substrate, both made from AISI 1045 steel, was simulated with the finite element method (FEM) using the Johnson–Cook failure model. The impact phenomena of particles on different microdefect geometries were obtained and compared with the collision on a smooth surface. The particle diameter and defect size were varied to investigate the influence of size on the deformation behavior. The different impact scenarios result in different temperature and stress distributions in the contact zone, as well as different penetration and deformation behavior during the collision.
Using molecular dynamics simulation, we study the cutting of an Fe single crystal using
tools with various rake angles α. We focus on the (110)[001] cut system, since here, the crystal
plasticity is governed by a simple mechanism for not too strongly negative rake angles. In this
case, the evolution of the chip is driven by the generation of edge dislocations with the Burgers
vector b = 1/2 [111], such that a fixed shear angle of φ = 54.7° is established. It is independent of
the rake angle of the tool. The chip form is rectangular, and the chip thickness agrees with the
theoretical result calculated for this shear angle from the law of mass conservation. We find that the
force angle χ between the direction of the force and the cutting direction is independent of the rake
angle; however, it does not obey the predictions of macroscopic cutting theories, nor the correlations
observed in experiments on cutting of (polycrystalline) mild steel. Only for (strongly) negative rake angles does the mechanism of plasticity change, leading to a complex chip shape or even suppressing the formation of a chip. In these cases, the force angle strongly increases while the friction angle tends to zero.
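The "law of mass conservation" argument above can be made concrete with the standard cutting-ratio relation r = t/t_c = sin φ / cos(φ − α) for a continuous chip. The function below is an illustrative sketch of this textbook relation, not code from the study.

```python
import math

def chip_thickness(depth_of_cut, shear_angle_deg, rake_angle_deg=0.0):
    """Theoretical chip thickness from volume (mass) conservation.

    For a continuous chip, the cutting ratio is
        r = t / t_c = sin(phi) / cos(phi - alpha),
    so the chip thickness follows as t_c = t * cos(phi - alpha) / sin(phi).
    """
    phi = math.radians(shear_angle_deg)
    alpha = math.radians(rake_angle_deg)
    return depth_of_cut * math.cos(phi - alpha) / math.sin(phi)
```

For the dislocation-fixed shear angle φ = 54.7° and a zero rake angle, this gives a chip about 0.71 times as thick as the depth of cut.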
We present two techniques for reasoning from cases to solve classification tasks: Induction and case-based reasoning. We contrast the two technologies (that are often confused) and show how they complement each other. Based on this, we describe how they are integrated in one single platform for reasoning from cases: The Inreca system.
We present an approach to systematically describing case-based reasoning systems by different kinds of criteria. One main requirement was the practical relevance of these criteria and their usability for real-life applications. We report on the results we achieved from a case study carried out in the INRECA Esprit project.
Substrate channeling is a widespread mechanism in metabolic pathways to avoid decomposition of unstable intermediates and competing reactions, and to accelerate catalytic turnover. During the biosynthesis of light-harvesting phycobilins in cyanobacteria, two members of the ferredoxin-dependent bilin reductases are involved in the reduction of the open-chain tetrapyrrole biliverdin IXα to the pink pigment phycoerythrobilin. The first reaction is catalyzed by 15,16-dihydrobiliverdin:ferredoxin oxidoreductase and produces the unstable intermediate 15,16-dihydrobiliverdin (DHBV). This intermediate is subsequently converted by phycoerythrobilin:ferredoxin oxidoreductase to the final product phycoerythrobilin. Although substrate channeling was postulated a decade ago, detailed experimental evidence has been missing. A new on-column assay employing immobilized enzyme, in combination with UV-Vis and fluorescence spectroscopy, revealed that both enzymes transiently interact and that transfer of the intermediate is facilitated by a significantly higher binding affinity of DHBV toward phycoerythrobilin:ferredoxin oxidoreductase. Based on the presented data, we conclude that the intermediate DHBV is transferred via proximity channeling.
Scaled boundary isogeometric analysis (SB-IGA) describes the computational domain by proper boundary NURBS together with a well-defined scaling center; see [5]. More precisely, we consider star-convex domains whose boundaries correspond to a sequence of NURBS curves and whose interior is determined by a scaling of the boundary segments with respect to a chosen scaling center. Given a decomposition into star-shaped blocks, however, SB-IGA can also be utilized for more general shapes. Even though several geometries can be described by a single patch, multipatch structures frequently appear in applications. Whereas C0-continuous patch coupling can be achieved relatively easily, the situation becomes more complicated if higher regularity is required. Consequently, a suitable coupling method is needed for analyses that require global C1 continuity. In this contribution we apply the concept of analysis-suitable G1 parametrizations [2] to the framework of SB-IGA for the C1 coupling of planar domains, with special consideration of the scaling center. We obtain globally C1-regular basis functions, which enables us to handle problems such as the Kirchhoff-Love plate and shell, where smooth coupling is an issue. Furthermore, the boundary representation within SB-IGA makes the method suitable for the concept of trimming. In particular, we see the possibility to extend the coupling procedure to study trimmed plates and shells. The approach was implemented using the GeoPDEs package [1], and its performance was tested on several numerical examples. Finally, we discuss the advantages and disadvantages of the proposed method and outline future perspectives.
Micro machining with micro pencil grinding tools (MPGTs) is an emerging technology that can be used to manufacture closed microchannel structures in hard and brittle materials as well as hardened steels like 16MnCr5. At their current operating conditions, these tools have a comparatively short tool life. In previous works, MPGTs in combination with a minimum quantity lubrication (MQL) system were used to manufacture microchannels in 16MnCr5 hardened steel. That study showed that steel adhesions clog the abrasive layer of MPGTs, most likely as a result of insufficient lubrication. In this paper, a metalworking fluid (MWF) supply method was developed to improve the process: a submerged micro grinding process, in which machining takes place inside a pool of MWF. The effect of seven types of MWFs on material adhesions at the bottom surface of the tool is evaluated. The MWFs that perform equally well are then compared in a micro pendulum grinding experiment until tool failure.
The scaffolding protein family Fe65, composed of Fe65, Fe65L1, and Fe65L2, was identified as an interaction partner of the amyloid precursor protein (APP), which plays a key role in Alzheimer's disease. All three Fe65 family members possess three highly conserved interaction domains, forming complexes with diverse binding partners that can be assigned to different cellular functions, such as transactivation of genes in the nucleus, modulation of calcium homeostasis and lipid metabolism, and regulation of the actin cytoskeleton. In this article, we discuss putative new intracellular signaling mechanisms of the APP-interacting protein Fe65 in the regulation of actin cytoskeleton dynamics in the context of various neuronal functions, such as cell migration, neurite outgrowth, and synaptic plasticity.
The preservation of historic buildings requires thorough preliminary investigation, quality control, and structural monitoring in order to minimize interventions in the monument's fabric and to avoid consequential damage. Non-destructive testing methods and numerical modeling techniques today offer both proven and new possibilities for obtaining reliable knowledge about the structures and the age-related changes of their building materials, while at the same time minimizing the interventions required for material sampling and structural openings. Current research results are presented on the basis of case studies. Ground-penetrating radar measurements are combined with theoretical modeling to explain measured anomalies in material parameters. Modern requirements such as the energy-efficient retrofitting of historic buildings raise new problem areas, for which answers are found by modeling heat and moisture transport. Advances in ultrasonic measurement technology and signal processing enable new applications in the investigation of weathered sandstone surfaces using Rayleigh waves.
The possibility of a premium adjustment in German private health insurance (PKV) depends on the value of the so-called triggering factor ("auslösender Faktor"), which is calculated by a linear extrapolation of the claims ratios of the past three years. Its early, reliable prediction is of great importance from a risk-management perspective. We therefore investigate a wide range of forecasting approaches, from classical time-series methods and regression to neural networks and hybrid models. Among the classical methods, regression with ARIMA errors performs best, while a neural network combined with time-series forecasting, or trained on deseasonalized and detrended data, shows the best overall behavior.
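The linear extrapolation of the past three years' claims ratios can be sketched as follows. This is an illustrative least-squares extrapolation only; the exact regulatory formula for the triggering factor may differ.

```python
def triggering_factor(claims_ratios):
    """Forecast next year's claims ratio by linear extrapolation.

    A least-squares line is fitted through the claims ratios of the past
    three years (x = 0, 1, 2) and evaluated at x = 3. This mirrors the
    'linear extrapolation of the past three years' described above.
    """
    assert len(claims_ratios) == 3
    xs = [0.0, 1.0, 2.0]
    x_mean = sum(xs) / 3
    y_mean = sum(claims_ratios) / 3
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, claims_ratios))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * 3.0
```

For perfectly linear data such as claims ratios 1.0, 1.1, 1.2, the extrapolated value is 1.3; forecasting models such as those compared in the paper try to predict this quantity earlier and more reliably.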
Using molecular dynamics simulation, we study nanoindentation in large samples of Cu–Zr glass at various temperatures between zero and the glass transition temperature. We find that besides the elastic modulus, the yielding point also strongly (by around 50%) decreases with increasing temperature; this behavior is in qualitative agreement with predictions of the cooperative shear model. Shear-transformation zones (STZs) show up in increasing sizes at low temperatures, leading to shear-band activity. Cluster analysis of the STZs exhibits a power-law behavior in the statistics of STZ sizes. We find strong plastic activity also during the unloading phase; it shows up both in the deactivation of previous plastic zones and the appearance of new zones, leading to the observation of pop-outs. The statistics of STZs occurring during unloading show that they operate in a similar nature as the STZs found during loading. For both cases, loading and unloading, we find the statistics of STZs to be related to directed percolation. Material hardness shows a weak strain-rate dependence, confirming previously reported experimental findings; the number of pop-ins is reduced at slower indentation rate. Analysis of the dependence of our simulation results on the quench rate applied during preparation of the glass shows only a minor effect on the properties of STZs.
Plasticity in metallic glasses depends on their stoichiometry. We explore this dependence by molecular dynamics simulations for the case of CuZr alloys using the compositions Cu64.5Zr35.5, Cu50Zr50, and Cu35.5Zr64.5. Plasticity is induced by nanoindentation and orthogonal cutting. Only the Cu64.5Zr35.5 sample shows the formation of localized strain in the form of shear bands, while plasticity is more homogeneous for the other samples. This feature concurs with the high fraction of full icosahedral short-range order found for Cu64.5Zr35.5. In all samples, the atomic density is reduced in the plastic zone; this reduction is accompanied by a decrease of the average atom coordination, with the possible exception of Cu35.5Zr64.5, where coordination fluctuations are high. The strongest density reduction occurs in Cu64.5Zr35.5, where it is connected with the partial destruction of full icosahedral short-range order. The difference in plasticity mechanism influences the shape of the pileup and of the chip generated by nanoindentation and cutting, respectively.
Cutting of metallic glasses as a rule produces serrated and segmented chips in experiments, while atomistic simulations produce straight, unserrated chips. We demonstrate here that with increasing depth of cut – with all other parameters unchanged – chip serration starts to affect the morphology of the chip also in molecular dynamics simulations. The underlying reason is the shear localization in shear bands. As the distance between shear bands increases with increasing depth of cut, the surface morphology of the chip becomes increasingly segmented. The parallel shear bands that formed during cutting no longer interact with each other when their separation is ≳10 nm. Our results are analogous to the so-called fold instability that has been found when machining nanocrystalline metals.
We examine the predictability of 299 capital market anomalies enhanced by 30 machine learning approaches and over 250 models in a dataset with more than 500 million firm-month anomaly observations. We find significant monthly (out-of-sample) returns of around 1.8–2.0%, and over 80% of the models yield returns equal to or larger than our linearly constructed baseline factor. For the best performing models, the risk-adjusted returns are significant across alternative asset pricing models, considering transaction costs with round-trip costs of up to 2% and including only anomalies after publication. Our results indicate that non-linear models can reveal market inefficiencies (mispricing) that are hard to reconcile with risk-based explanations.
The extraction kinetics of polyphenols leached from red vine leaves are studied and evaluated using a laboratory robot and nonconventional processing techniques such as ultrasonic (US)-, microwave (MW)-, and pulsed electric field (PEF)-assisted extraction. The robotic high-throughput screening reveals optimal extraction conditions at a pH value of 2.5, a temperature of 56 °C, and a solvent mixture of methanol:water:HCl of 50:49:1 v/v/v. Nonconventional processing techniques such as MW- and US-assisted extraction have the fastest kinetics and produce the highest polyphenol yield: 2.29 g/L (MW) and 2.47 g/L (US) for particles ranging in size from 450 to 2000 µm, and 2.20 g/L (MW) and 2.05 g/L (US) for particles ranging from 2000 to 4000 µm. PEF has the lowest polyphenol yield, with 0.94 g/L (450–2000 µm) and 0.64 g/L (2000–4000 µm), compared to 1.82 g/L (2000–4000 µm) in a standard stirred vessel (50 °C). When undried red vine leaves (2000–4000 µm) are used, the total phenol content is 1.44 g/L with PEF.
As global networks are being used by more and more people, they are becoming increasingly interesting for commercial applications. The recent success and change in direction of the World-Wide Web is a clear indication of this. However, this success met a largely unprepared communications infrastructure. The Internet, as an originally non-profit network, offered neither the security nor a globally available accounting infrastructure by itself. These problems were addressed in the recent past, but in a seemingly ad-hoc manner. Several different accounting schemes, sensible for only certain types of commercial transactions, have been developed, which either seem to neglect the problems of scalability, or trade security for efficiency. Finally, some proposals aim at achieving near-perfect security at the expense of efficiency, thus rendering those systems to be of no practical use. In contrast, this paper presents a suitably configurable scheme for accounting in a general, widely distributed client/server environment. When developing the protocol presented in this paper, special attention has been paid to make this approach work well in the future setting of high-bandwidth, high-latency internets. The developed protocol has been applied to a large-scale distributed application, a WWW-based software development environment.
The realization of increasingly complex software projects requires the direct and indirect cooperation of an ever-growing number of people. With increasing global computer networking, the infrastructure required for this already exists, but its potential is generally far from exhausted by conventional tools. The framework model for software development presented in this article was explicitly designed with the global cooperation of developers in mind. WebMake, a software development environment based on this model, addresses the goal of usability on a global scale by using the World-Wide Web as its data storage and communication infrastructure.
In this paper, a framework for globally distributed software development and management environments, which we call Booster, is presented. Additionally, we report first experiences with WebMake, an application developed to serve as an experimental platform for a software development environment based on the World Wide Web and the Booster framework. Booster encompasses the basic building blocks and mechanisms necessary to support truly cooperative distributed software development from the very beginning to the last steps in a software life cycle. It is thus a precursor of the Global Software Highway, in which providers and users can meet for the development, management, exchange and usage of all kinds of software.
In this paper, we compare the BERKOM globally accessible services project (GLASS) with the well-known World-Wide Web with respect to the ease of development, realization, and distribution of multimedia presentations. This comparison is based on the experiences we gained when implementing a gateway between GLASS and the World-Wide Web. Since both systems are shown to have obvious weaknesses, we conclude this paper with a presentation of a better way to multimedia document engineering and distribution. This concept is based on a well-accepted approach to function-shipping in the Internet: the Java language, permitting for example a smooth integration of GLASS' MHEG objects and WWW HTML pages within one common environment.
Structural resilience describes urban drainage systems’ (UDSs) ability to minimize the
frequency and magnitude of failure due to common structural issues such as pipe clogging and
cracking or pump failure. Structural resilience is often neglected in the design of UDSs. The current
literature supports structural decentralization as a way to introduce structural resilience into UDSs.
Although there are promising methods in the literature for generating and optimizing decentralized
separate stormwater collection systems, incorporating hydraulic simulations in unsteady flow, these
approaches sometimes require high computational effort, especially for flat areas. This may hamper
their integration into ordinary commercially designed UDS software due to their predominantly
scientific purposes. As a response, this paper introduces simplified cost and structural resilience
indices that can be used as heuristic parameters for optimizing the UDS layout. These indices only
use graph connectivity information, which is computationally much less expensive than hydraulic
simulation. The use of simplified objective functions significantly simplifies the feasible search space
and reduces blind searching during optimization. To demonstrate the application and advantages of the
proposed model, a real case study in the southwestern city of Ahvaz, Iran, was explored. The proposed
framework proved promising for reducing the computational effort and for delivering
cost-realistic and resilient UDSs.
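As a toy illustration of how purely graph-based connectivity information can serve as a structural-resilience heuristic, the sketch below scores a sewer layout by the average fraction of nodes that can still reach the outlet when a single pipe fails. The networks, node names and the index itself are hypothetical simplifications for illustration, not the indices proposed in the paper:

```python
from collections import defaultdict, deque

def reachable_from(adj, source, removed_edge=None):
    """Nodes reachable from `source` via BFS, optionally ignoring one edge."""
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if removed_edge in ((u, v), (v, u)):
                continue
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def structural_resilience_index(edges, outlet):
    """Average fraction of nodes still connected to the outlet under
    single-pipe failure (hypothetical graph-only heuristic)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    nodes = set(adj)
    scores = []
    for e in edges:
        still = reachable_from(adj, outlet, removed_edge=e)
        scores.append(len(still & nodes) / len(nodes))
    return sum(scores) / len(scores)

# A toy layout: a purely branched (tree) network vs. one with a redundant loop.
tree = [("A", "B"), ("B", "C"), ("C", "OUT")]
loop = tree + [("A", "OUT")]
print(structural_resilience_index(tree, "OUT"),
      structural_resilience_index(loop, "OUT"))
```

No hydraulic simulation is involved; each evaluation is a handful of breadth-first searches, which is what makes such indices cheap enough for use inside a layout optimizer.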
The dynamic behaviour of unsaturated sand-rubber chip mixtures at various gravimetric contents is evaluated through an experimental study comprising resonant column tests in a fixed-free device. Chips were irregularly shaped with dimensions ranging from 5 to 14 mm. Three types of sand with different gradations have been considered. Relative density amounted to 0.5 for all specimens. Due to the large size of the chips, the diameter of the specimens had to be equal to 100 mm, which in turn required a re-calibration of the device assuming a frequency-dependent drive head inertia. The effects of confining stress, rubber chip content, and sand gradation on shear modulus and damping ratio are determined over wide ranges of shear strain. At small strains, as known for sands, increasing the confining stress stiffens the mixtures. Increasing the rubber chip content significantly reduces the shear modulus and increases the damping ratio. At higher strains, increasing the confining stress or the rubber content flattens the reduction of the shear modulus with strain. Damping at high strains does not show any appreciable dependence on rubber content. Unloading-reloading sequences are used to assess shear modulus degradation and threshold strains. Finally, design equations are derived from the test results to predict the dynamic response of the composite material.
The impact of cognitive and motivational resources on engagement with automated formative feedback
(2024)
The effectiveness of automated formative feedback highly depends on student feedback engagement, which is largely determined by learners’ cognitive and motivational resources. Yet, most studies have investigated either only cognitive resources (e.g., mental effort) or only motivational resources (e.g., expectancy-value-cost variables). The purpose of this study is to examine the development (indicated by time) and relationship of 1) cognitive, 2) affective, and 3) behavioral feedback engagement as a function of cognitive and motivational resources in a computer-based learning environment with automated formative feedback. Data were collected from N = 330 German B.Ed. Elementary Education students who worked four consecutive sessions on summarizing texts. Previously invested mental effort (t-1) affected situational expectancy and cost but not situational value. 1) Cognitive feedback engagement was positively associated with previous performance but associated with neither cognitive nor motivational resources. 2) Affective feedback engagement was positively associated with intrinsic value and negatively associated with situational expectancies, invested mental effort and previous performance. 3) Behavioral feedback engagement was positively associated with situational expectancies and invested mental effort. This study contributes to the understanding of students’ cognitive and motivational structures when engaging with automated formative feedback.
Tumor emergence and progression is a complex phenomenon that involves special molecular and cellular interactions. The hierarchical structuring and communication via feedback signaling of different cell types, which are categorized as stem, progenitor, and differentiated cells depending on their maturity level, plays an important role. Under healthy conditions, these cells build a dynamical system that is responsible for facilitating the homeostatic regulation of the tissue. In this hierarchical setting, stem and progenitor cells may undergo a mutation when a cell divides into two daughter cells. This can lead to the development of abnormal characteristics, i.e. a mutation in the cell, yielding an unrestrained number of cells. Therefore, the regulation of a stem cell’s proliferation and differentiation rate is crucial for maintaining the balance in the overall cell population. In this paper, a maturity-based mathematical model with feedback regulation is formulated for healthy and mutated cell lineages. It is given in the form of coupled ordinary and partial differential equations. The focus is laid on the dynamical effects resulting from acquiring a mutation in the hierarchical structure of stem, progenitor and fully differentiated cells. Additionally, the effects of nonlinear feedback regulation from mature cells into both stem and progenitor cell populations have been inspected. The steady-state solutions of the model are derived analytically. Numerical simulations and results based on a finite volume scheme underpin various expected behavioral patterns of the homeostatic regulation and cancer evolution. For instance, it has been found that the mutated cells can experience significant growth even with a single somatic mutation, but under homeostatic regulation reach a steady state, driving the healthy cell population to either a steady state or a lower cell concentration.
Furthermore, the model behavior has been validated against experimentally measured tumor values from the literature.
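To make the feedback mechanism concrete, the following minimal sketch integrates a two-compartment reduction of such a lineage model (stem cells S, mature cells M) with a declining feedback on the self-renewal fraction. The reduction to two ODEs and all parameter values are hypothetical illustrations, not the paper's full maturity-structured ODE-PDE system:

```python
def simulate(p0=0.7, k=0.01, a=1.0, d=0.1, dt=0.01, t_end=2000.0):
    """Forward-Euler integration of a minimal two-compartment lineage model:
    stem cells S self-renew with probability p(M) = p0/(1 + k*M), which
    decreases as the mature population M grows (negative feedback)."""
    S, M = 1.0, 0.0          # initial stem and mature cell counts (arbitrary units)
    t = 0.0
    while t < t_end:
        p = p0 / (1.0 + k * M)                 # feedback-regulated self-renewal
        dS = (2.0 * p - 1.0) * a * S           # net stem cell growth
        dM = 2.0 * (1.0 - p) * a * S - d * M   # differentiation inflow minus death
        S += dt * dS
        M += dt * dM
        t += dt
    return S, M

S, M = simulate()
# Analytic steady state of this toy model: p(M*) = 1/2, so
# M* = (2*p0 - 1)/k = 40 and S* = d*M*/a = 4 for the default parameters.
print(round(S, 2), round(M, 2))
```

Despite the early exponential growth of the stem pool, the feedback drives the system to the analytically predicted homeostatic steady state, mirroring the qualitative behavior described in the abstract.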
The first observation of self-focusing of dipolar spin waves in garnet film media is reported. In particular, we show that the quasi-stationary diffraction of a finite-aperture spin wave beam in a focusing medium leads to the concentration of the wave power in one focal point rather than along a certain line (channel). The obtained results demonstrate the wide applicability of non-linear spin wave media to study non-linear wave phenomena using an advanced combined microwave-Brillouin light scattering technique for a two-dimensional mapping of the spin wave amplitudes.
In a (linear) parametric optimization problem, the objective value of each feasible solution is an affine function of a real-valued parameter and one is interested in computing a solution for each possible value of the parameter. For many important parametric optimization problems including the parametric versions of the shortest path problem, the assignment problem, and the minimum cost flow problem, however, the piecewise linear function mapping the parameter to the optimal objective value of the corresponding non-parametric instance (the optimal value function) can have super-polynomially many breakpoints (points of slope change). This implies that any optimal algorithm for such a problem must output a super-polynomial number of solutions. We provide a method for lifting approximation algorithms for non-parametric optimization problems to their parametric counterparts that is applicable to a general class of parametric optimization problems. The approximation guarantee achieved by this method for a parametric problem is arbitrarily close to the approximation guarantee of the algorithm for the corresponding non-parametric problem. It outputs polynomially many solutions and has polynomial running time if the non-parametric algorithm has polynomial running time. In the case that the non-parametric problem can be solved exactly in polynomial time or that an FPTAS is available, the method yields an FPTAS. In particular, under mild assumptions, we obtain the first parametric FPTAS for each of the specific problems mentioned above and a (3/2 + ε) -approximation algorithm for the parametric metric traveling salesman problem. Moreover, we describe a post-processing procedure that, if the non-parametric problem can be solved exactly in polynomial time, further decreases the number of returned solutions such that the method outputs at most twice as many solutions as needed at minimum for achieving the desired approximation guarantee.
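The geometric-grid idea behind such lifting methods can be illustrated on a toy instance in which the non-parametric "solver" minimizes over a finite set of solutions with nonnegative cost pairs (a, b) and objective a + λ·b. The instance, the solver and the query routine below are hypothetical simplifications; they only demonstrate why O(log(λmax/λmin)/ε) exact solver calls suffice for a (1+ε)-approximation at every parameter value:

```python
# Hypothetical instance: each solution has nonnegative costs (a, b); the
# objective at parameter lam is a + lam*b, and solve(lam) is the exact optimum.
solutions = [(10.0, 1.0), (6.0, 2.0), (1.0, 5.0), (0.5, 9.0)]

def solve(lam):
    return min(a + lam * b for a, b in solutions)

def lift(eps, lam_min, lam_max):
    """Call the exact solver on a geometric grid of parameter values.
    Because the optimal value is nondecreasing in lam (costs are nonnegative),
    the solution computed at the largest grid point below lam remains a
    (1+eps)-approximation for every lam in [lam_min, lam_max]."""
    grid, lam = [], lam_min
    while lam < lam_max * (1.0 + eps):
        grid.append((lam, min(solutions, key=lambda s: s[0] + lam * s[1])))
        lam *= 1.0 + eps
    return grid

def query(grid, lam):
    """Approximate objective value at lam using the precomputed grid."""
    g, (a, b) = max((g for g in grid if g[0] <= lam), key=lambda g: g[0])
    return a + lam * b
```

The proof sketch matches the code: for λ in [g, g(1+ε)], the solution optimal at g costs at most (1+ε)·opt(g) ≤ (1+ε)·opt(λ) at λ, so polynomially many grid points yield the stated guarantee.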
Two different material batches made of randomly oriented and textured polycrystalline nickel-base superalloy René80 were investigated under isothermal low cycle fatigue tests at 850 °C for a notched specimen geometry. In contrast to a smooth specimen geometry, no significant improvement in the fatigue behaviour of the notched specimen could be observed for the textured material. Finite element simulations reveal an area along the notch where high stiffness evolves for the textured material, which leads to nearly similar shear stresses in the slip systems compared to a random orientation distribution and therefore to no distinct differences in lifetime.
With a yearly production of about 39 million tons, brewer’s spent grain (BSG) is the
most abundant brewing industry byproduct. Because it is rich in fiber and protein, it is commonly
used as cattle feed but could also be used within the human diet. Additionally, it contains many
bioactive substances such as hydroxycinnamic acids that are known to be antioxidants and potent
inhibitors of enzymes of glucose metabolism. Therefore, our study aim was to prepare different
extracts—A1-A7 (solid-liquid extraction with 60% acetone); HE1-HE6 (alkaline hydrolysis followed
by ethyl acetate extraction) and HA1-HA3 (60% acetone extraction of alkaline residue)—from various
BSGs which were characterized for their total phenolic (TPC) and total flavonoid (TFC) contents,
before conducting in vitro studies on their effects on the glucose metabolism enzymes α-amylase,
α-glucosidase, dipeptidyl peptidase IV (DPP IV), and glycogen phosphorylase α (GPα). Depending
on the extraction procedures, TPCs ranged from 20–350 μg gallic acid equivalents/mg extract
and TFCs were as high as 94 μg catechin equivalents/mg extract. Strong inhibition of glucose
metabolism enzymes was also observed: the IC50 values for α-glucosidase inhibition ranged from
67.4 ± 8.1 μg/mL to 268.1 ± 29.4 μg/mL, for DPP IV inhibition they ranged from 290.6 ± 97.4 to
778.4 ± 95.5 μg/mL and for GPα enzyme inhibition from 12.6 ± 1.1 to 261 ± 6 μg/mL. However, the
extracts did not strongly inhibit α-amylase. In general, the A extracts from solid-liquid extraction
with 60% acetone showed stronger inhibitory potential towards α-glucosidase and GPα than other
extracts, whereby no correlation with TPC or TFC was observed. Additionally, DPP IV was mainly
inhibited by HE extracts but the effect was not of biological relevance. Our results show that BSG
is a potent source of α-glucosidase and GPα inhibitors, but further research is needed to identify
these bioactive compounds within BSG extracts focusing on extracts from solid-liquid extraction
with 60% acetone.
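For readers unfamiliar with how IC50 values like those above are obtained, the sketch below estimates an IC50 by log-linear interpolation between the two concentrations that bracket 50 % inhibition. The dilution series and inhibition values are invented for illustration and are not data from this study; fitting a full dose-response (Hill) model would be the more rigorous route:

```python
import math

def ic50_from_dose_response(concs, inhibitions):
    """Estimate IC50 by log-linear interpolation between the two measured
    concentrations bracketing 50 % inhibition (illustrative method only)."""
    points = list(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 < 50.0 <= i2:
            f = (50.0 - i1) / (i2 - i1)
            return 10 ** (math.log10(c1) + f * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50 % inhibition not bracketed by the measurements")

# Hypothetical extract dilution series (µg/mL) and % inhibition values
concs = [10, 30, 100, 300, 1000]
inhib = [12, 28, 55, 81, 95]
print(round(ic50_from_dose_response(concs, inhib), 1))
```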
Sensing location information in indoor scenes requires high accuracy and is a challenging task, mainly because of multipath and NLoS (non-line-of-sight) propagation. GNSS signals cannot penetrate well into indoor environments; satellite-based navigation and positioning systems therefore cannot be used for indoor positioning. Other technologies have been suggested for indoor usage, among them Wi-Fi (802.11) and 5G NR (New Radio). The primary aim of this study is to discuss the advantages and drawbacks of 5G and Wi-Fi positioning techniques for indoor localization.
This contribution presents the results of a replication study on the learning effect of tablet-supported video analysis compared to traditional teaching sequences using non-digital experimental materials in the subject areas of uniform and accelerated motion in high school physics lessons. In addition to the replication of the preliminary study results recently published in this journal (Becker et al. 2018, 2019), the investigation of the effect on the cognitive load as well as the emotional state of the students is another focal point. Compared to the preliminary study, the sample size was significantly increased from N = 109 to N = 294. The individual effects of the preliminary study could be replicated in this way. For both topics, a significant reduction of extraneous cognitive load and a positive effect on intervention-induced emotions could be demonstrated. Moreover, the theoretically founded causal relationship between emotion, cognitive load, and learning achievement could be empirically verified by means of structural equation modeling.
The importance of well-trained and stable neck flexors and extensors as well as trunk muscles for intentional headers in soccer is increasingly discussed. The neck flexors and extensors should ensure a coupling of trunk and head at the time of ball contact to increase the physical mass hitting the ball and reduce head acceleration. The aim of the study was to analyze the influence of a 6-week strength training program (neck flexors, neck extensors) on the acceleration of the head during standing, jumping and running headers as well as after fatigue of the trunk muscles on a pendulum header. A total of 33 active male soccer players (20.3 ± 3.6 years, 1.81 ± 0.07 m, 75.5 ± 8.3 kg) participated and formed two training intervention groups (IG1: independent adult team, IG2: independent youth team) and one control group (CG: players from different teams). The training intervention consisted of three exercises for the neck flexors and extensors. The training effects were verified by means of the isometric maximum voluntary contraction (IMVC) measured by a telemetric Noraxon DTS force sensor. The head acceleration during ball contact was determined using a telemetric Noraxon DTS 3D accelerometer. There was no significant change of the IMVC over time between the groups (F = 2.265, p = 0.121). Head acceleration was not reduced significantly for standing (IG1 0.4 ± 2.0, IG2 0.1 ± 1.4, CG -0.4 ± 1.2; F = 0.796, p = 0.460), jumping (IG1 -0.7 ± 1.4, IG2 -0.2 ± 0.9, CG 0.1 ± 1.2; F = 1.272, p = 0.295) and running (IG1 -1.0 ± 1.9, IG2 -0.2 ± 1.4, CG -0.1 ± 1.6; F = 1.050, p = 0.362) headers as well as after fatigue of the trunk musculature for post-jumping (IG1 -0.2 ± 2.1, IG2 -0.6 ± 1.4, CG -0.6 ± 1.3; F = 0.184, p = 0.833) and post-running (IG1 -0.3 ± 1.6, IG2 -0.7 ± 1.2, CG 0.0 ± 1.4; F = 0.695, p = 0.507) headers over time between IG1, IG2 and CG. A 6-week strength training of the neck flexors and neck extensors could not show the presumed preventive benefit.
Both the effects of a training intervention and the consequences of an effective intervention for the acceleration of the head while heading seem to be more complex than previously assumed and presumably only come into effect in case of strong impacts.
Key words: Heading, kinetics, head-neck-torso-alignment, neck musculature, repetitive head impacts, concussion
Heading in Soccer: Does Kinematics of the Head‐Neck‐Torso Alignment Influence Head Acceleration?
(2021)
There is little scientific evidence regarding the cumulative effect of purposeful heading. The head-neck-torso alignment is considered to be of great importance when it comes to minimizing potential risks when heading. Therefore, this study determined the relationship between head-neck-torso alignment (cervical spine, head, thoracic spine) and the acceleration of the head, the relationship between head acceleration and maximum ball speed after head impact, and differences between head accelerations throughout different heading approaches (standing, jumping, running). A total of 60 male soccer players (18.9 ± 4.0 years, 177.6 ± 14.9 cm, 73.1 ± 8.6 kg) participated in the study. Head accelerations were measured by a telemetric Noraxon DTS 3D sensor, whereas angles for the head-neck-torso alignment and ball speed were analyzed with the Qualisys Track Manager program. No relationship at all was found for the standing, jumping and running approaches. Concerning the relationship between head acceleration and maximum ball speed after head impact, a significant result was calculated only for the standing header (p = 0.024, R² = 0.085). A significant difference in head acceleration (p < 0.001) was identified between standing, jumping and running headers. To sum up, the relationship between head acceleration and head-neck-torso alignment is more complex than initially assumed and could not be proven in this study. Furthermore, first data were generated to check whether the acceleration of the head is a predictor of the resulting maximum ball speed after head impact, but further investigations have to follow. Lastly, we confirmed the results that the head acceleration differs with the approach.
The core muscles play a central role in stabilizing the head during headers in soccer. The objective of this study was to examine the influence of a fatigued core musculature on the acceleration of the head during jump headers and run headers. Acceleration of the head was measured in a pre-post-design in 68 soccer players (age: 21.5 ± 3.8 years, height: 180.0 ± 13.9 cm, weight: 76.9 ± 8.1 kg). Data were recorded by means of a telemetric 3D acceleration sensor and with a pendulum header. The treatment encompassed two exercises each for the ventral, lateral, and dorsal muscle chains. The acceleration of the head between pre- and post-test was reduced by 0.3 G (p = 0.011) in jump headers and by 0.2 G (p = 0.067) in run headers. An additional analysis of all pretests showed an increased acceleration in run headers when compared to stand headers (p < 0.001) and jump headers (p < 0.001). No differences were found in the sub-group comparisons: semi-professional vs. recreational players, offensive vs. defensive players. Based on the results, we conclude that the acceleration of the head after fatiguing the core muscles does not increase, which stands in contrast to postulated expectations. More tests with accelerated soccer balls are required for a conclusive statement.
In search of new technologies for optimizing the performance and space requirements of electronic and optical micro-circuits, the concept of spoof surface plasmon polaritons (SSPPs) has come to the fore of research in recent years. Due to the ability of SSPPs to confine and guide the energy of electromagnetic waves in a subwavelength space below the diffraction limit, SSPPs deliver all the tools to implement integrated circuits with a high integration rate. However, in order to guide SSPPs in the terahertz frequency range, it is necessary to carefully design metasurfaces that allow one to manipulate the spatio-temporal and spectral properties of the SSPPs at will. Here, we propose a specifically designed cut-wire metasurface that sustains strongly confined SSPP modes at terahertz frequencies. As we show by numerical simulations and also prove in experimental measurements, the proposed metasurface can tightly guide SSPPs on straight and curved pathways while maintaining their subwavelength field confinement perpendicular to the surface. Furthermore, we investigate the dependence of the spatio-temporal and spectral properties of the SSPP modes on the width of the metasurface lanes that can be composed of one, two or three cut-wires in the transverse direction. Our investigations deliver new insights into downsizing effects of guiding structures for SSPPs.
The development and implementation of an observational video-based risk assessment is described. Occupational risk assessment is one of the most important yet also challenging tasks for employers. Most assessment tools to date use questionnaires, expert interviews and similar instruments. Video analysis is a promising tool for risk assessment, but it needs an objective basis. A video of a plastering worker was recorded using a 360-degree camera. The recording was then analyzed using the developed observational matrix concerning Work Characteristics, Work Activities as well as potential risks. Risk factors present during the video included lifting, falls from a ladder, hazardous substances as well as occasionally bad posture. The worker had no or just one risk factor present during most of the video recording, with only 16 seconds showing more than one risk factor according to the observational matrix. The paper presents a promising practical method to assess occupational risks on a case-by-case basis. It can help with the risk assessment process in companies, which is required by law in some industrialized countries. The matrix in combination with video analysis is a first step towards digital observational risk assessment. It can also be the basis of an automated risk assessment process.
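A digital version of such an observational matrix is essentially a per-second tally of coded risk factors. The sketch below shows one possible data layout and summary; the timeline values are invented to mirror the kind of counts reported above, not the study's actual coding:

```python
# Hypothetical per-second observational matrix: each entry lists the risk
# factors coded for one second of the video (illustrative data only).
timeline = (
    [["lifting"]] * 40
    + [["lifting", "bad_posture"]] * 16   # seconds with more than one factor
    + [[]] * 30
    + [["hazardous_substances"]] * 14
)

def exposure_summary(timeline):
    """Count seconds observed with 0, 1, and more than 1 simultaneous risk factors."""
    counts = {"none": 0, "single": 0, "multiple": 0}
    for factors in timeline:
        if not factors:
            counts["none"] += 1
        elif len(factors) == 1:
            counts["single"] += 1
        else:
            counts["multiple"] += 1
    return counts

print(exposure_summary(timeline))
```

Such a tally is the obvious first building block for the automated risk assessment process the abstract mentions, since each coded second can later come from a video classifier instead of a human observer.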
Over the past 2 decades, there has been much progress on the classification of symplectic linear quotient singularities V/G admitting a symplectic (equivalently, crepant) resolution of singularities. The classification is almost complete but there is an infinite series of groups in dimension 4—the symplectically primitive but complex imprimitive groups—and 10 exceptional groups up to dimension 10, for which it is still open. In this paper, we treat the remaining infinite series and prove that for all but possibly 39 cases there is no symplectic resolution. We thereby reduce the classification problem to finitely many open cases. We furthermore prove non-existence of a symplectic resolution for one exceptional group, leaving 39+9=48 open cases in total. We do not expect any of the remaining cases to admit a symplectic resolution.
In the field of liquid filtration, the realization of gas throughput-free cake filtration has been investigated for a long time. Cake filtration without gas throughput would lead to energy savings in general and would reduce the mechanically achievable residual moisture in filter cakes in particular. The reason why gas throughput-free filtration could not be realized with fabrics so far is that the achievable pore sizes are not small enough, and that the associated capillary pressure is too low for gas throughput-free filtration. Microporous membranes can prevent gas flow through open pores and cracks in the filter cake at a standard differential pressure for cake filtration of 0.8 bar due to their smaller pore size. Since large-scale implementation with membranes was not yet successful due to their inadequate mechanical strength, this work focuses on the development and testing of a novel composite material. It combines the advantages of gas throughput-free filtration using membranes with the mechanical stability of fabrics. For the production of the composites, a paste dot coating with adhesive, which is a common method in the textile industry, was used. Based on filtration experiments, delamination and tensile tests, as well as CT analysis, it is shown that this method is suitable for the production of composite filter materials for gas throughput-free cake filtration.
Specific parameters of cake filtration, such as the filter cake and filter medium resistances, can be determined using the pressurized housing cell standardized in the guideline VDI 2762 by measuring the filtrate mass on a laboratory scale. For reproducible measurements and an exact detection of the filtration start, an improved test setup is presented and compared with a standard setup according to the guideline VDI 2762. On the basis of measurements without and with a particle system to be filtered, it is shown that the characteristic nonlinear course at the beginning of each filtration, which can be seen in the t/V-V diagram, is influenced by the used measuring equipment.
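The t/V-V evaluation mentioned above rests on the classical cake filtration equation t/V = K1·V + K2, whose slope yields the specific cake resistance and whose intercept yields the filter medium resistance. The sketch below fits this line to ideal synthetic data; all numerical values are illustrative examples, not measurements from a VDI 2762 cell:

```python
# Classical cake filtration relation: t/V = K1*V + K2 with
#   K1 = mu*alpha*c/(2*A^2*dp)   (cake term, slope)
#   K2 = mu*Rm/(A*dp)            (medium term, intercept)
mu, c, A, dp = 1e-3, 50.0, 2e-3, 0.8e5   # Pa*s, kg/m^3, m^2, Pa (example values)
alpha_true, Rm_true = 1e11, 1e10          # m/kg, 1/m (example values)

K1 = mu * alpha_true * c / (2 * A**2 * dp)
K2 = mu * Rm_true / (A * dp)

volumes = [i * 1e-5 for i in range(1, 21)]      # filtrate volume in m^3
times = [K1 * V**2 + K2 * V for V in volumes]   # ideal cake filtration data

# Least-squares line through the (V, t/V) points
x, y = volumes, [t / V for t, V in zip(times, volumes)]
n, sx, sy = len(x), sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(v * w for v, w in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Invert the definitions of K1 and K2 to recover the resistances
alpha_fit = 2 * A**2 * dp * slope / (mu * c)
Rm_fit = A * dp * intercept / mu
```

On real data, the nonlinear start-up phase discussed in the abstract would have to be excluded from the fitted interval, which is exactly why a precise detection of the filtration start matters.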
Adjustment Effects of Maximum Intensity Tolerance During Whole-Body Electromyostimulation Training
(2019)
Intensity regulation during whole-body electromyostimulation (WB-EMS) training is mostly controlled by subjective scales such as the CR-10 Borg scale. To determine objective training intensities derived from a maximum, as is done in conventional strength training using the one-repetition maximum (1-RM), a comparable maximum in WB-EMS is necessary. Therefore, the aim of this study was to examine whether there is an individual maximum intensity tolerance plateau after multiple consecutive EMS application sessions. A total of 52 subjects (24.1 ± 3.2 years; 76.8 ± 11.1 kg; 1.77 ± 0.09 m) participated in the longitudinal, observational study (38 males, 14 females). Each participant carried out four consecutive maximal EMS applications (T1–T4) separated by 1 week. All muscle groups were stimulated successively until their individual maximum and combined into a whole-body stimulation index to allow a statement about the development of the maximum intensity tolerance of the whole body. There was a significant main effect between the measurement times for all participants (p < 0.001; η² = 0.39) as well as gender-specific for males (p = 0.001; η² = 0.18) and females (p < 0.001; η² = 0.57). There were no interaction effects of gender × measurement time (p = 0.394). The maximum intensity tolerance increased significantly from T1 to T2 (p = 0.001) and T2 to T3 (p < 0.001). There was no significant difference between T3 and T4 (p = 1.0). These results indicate that there is an adjustment of the individual maximum intensity tolerance to a WB-EMS training after three consecutive tests. Therefore, there is a need for several habituation units, comparable to the identification of the individual 1-RM in conventional strength training.
Further research should focus on an objective intensity-specific regulation of the WB-EMS based on the individual maximum intensity tolerance to characterize different training areas and therefore generate specific adaptations to a WB-EMS training compared to conventional strength training methods.
Whole-body electromyostimulation (WB-EMS) is an extension of the EMS application known in physical therapy. In WB-EMS, body composition and skinfold thickness seem to play a decisive role in influencing the Ohmic resistance and therefore the maximum intensity tolerance. That is why the therapeutic success of (WB-)EMS may depend on individual anatomical parameters. The aim of the study was to find out whether gender, skinfold thickness and parameters of body composition have an influence on the maximum intensity tolerance in WB-EMS. [Participants and Methods] Fifty-two participants were included in the study. Body composition (body impedance, body fat, fat mass, fat-free mass) and skinfold thicknesses were measured and set into relation to the maximum intensity tolerance. [Results] No relationship between the different anthropometric parameters and the maximum intensity tolerance was detected for both genders. Considering the individual muscle groups, no similarities were found in the results. [Conclusion] Body composition or skinfold thickness do not seem to have any influence on the maximum intensity tolerance in WB-EMS training. For the application in physiotherapy this means that a dosage of the electrical voltage within the scope of a (WB-) EMS application is only possible via the subjective feedback (BORG Scale).
The difference in the efficacy of altered stimulation parameters in whole-body electromyostimulation (WB-EMS) training remains largely unexplored. However, higher impulse frequencies (>50 Hz) might be most adequate for strength gain. The aim of this study was to analyze potential differences in sports-related performance parameters after a 10-week WB-EMS training with different frequencies. A total of 51 untrained participants (24.9 ± 3.9 years, 174 ± 9 cm, 72.4 ± 16.4 kg, BMI 23.8 ± 4.1, body fat 24.7 ± 8.1 %) was randomly divided into three groups: one inactive control group (CON) and two training groups. They completed a 10-week WB-EMS program of 1.5 sessions/week with equal content but different stimulation frequencies (training with 20 Hz (T20) vs. training with 85 Hz (T85)). Before and after the intervention, all participants completed jumping (Counter Movement Jump (CMJ), Squat Jump (SJ), Drop Jump (DJ)), sprinting (5 m, 10 m, 30 m), and strength tests (isometric trunk flexion/extension). One-way ANOVA was applied to calculate parameter changes. Post-hoc least significant difference tests were performed to identify group differences. Significant differences were identified for CMJ (p = 0.007), SJ (p = 0.022), trunk flexion (p = 0.020) and extension (p = 0.013), with significant group differences between both training groups and CON (not between the two training groups T20 and T85). A 10-week WB-EMS training leads to significant improvements of jump and strength parameters in untrained participants. No differences could be detected between the frequencies. Therefore, both stimulation frequencies can be regarded as adequate for increasing specific sport performance parameters. Further aspects such as regeneration or long-term effects of the use of different frequencies still need to be clarified.
Red fruits and their juices are rich sources of polyphenols, especially anthocyanins.
Some studies have shown that such polyphenols can inhibit enzymes of the carbohydrate metabolism,
such as α-amylase and α-glucosidase, that indirectly regulate blood sugar levels. The presented
study examined the in vitro inhibitory activity against α-amylase and α-glucosidase of various
phenolic extracts prepared from direct juices, concentrates, and purees of nine different berries which
differ in their anthocyanin and copigment profile. Generally, the extracts with the highest phenolic
content—aronia (67.7 ± 3.2 g GAE/100 g; cyanidin 3-galactoside; chlorogenic acid), pomegranate
(65.7 ± 7.9 g GAE/100 g; cyanidin 3,5-diglucoside; punicalin), and red grape (59.6 ± 2.5 g GAE/100 g;
malvidin 3-glucoside; quercetin 3-glucuronide)—also showed some of the highest inhibitory activities
against α-amylase (326.9 ± 75.8 µg/mL; 789.7 ± 220.9 µg/mL; 646.1 ± 81.8 µg/mL) and α-glucosidase
(115.6 ± 32.5 µg/mL; 127.8 ± 20.1 µg/mL; 160.6 ± 68.4 µg/mL) and, partially, were even more potent
inhibitors than acarbose (441 ± 30 µg/mL; 1439 ± 85 µg/mL). Additionally, the investigation of single
anthocyanins and glycosylated flavonoids demonstrated a structure- and size-dependent inhibitory
activity. In the future, in vivo studies are envisaged.
Loss of USP28 and SPINT2 expression promotes cancer cell survival after whole genome doubling
(2021)
Background
Whole genome doubling is a frequent event during cancer evolution and shapes the cancer genome due to the occurrence of chromosomal instability. Yet, erroneously arising human tetraploid cells usually do not proliferate due to p53 activation that leads to CDKN1A expression, cell cycle arrest, senescence and/or apoptosis.
Methods
To uncover the barriers that block the proliferation of tetraploids, we performed an RNAi-mediated genome-wide screen in a human colorectal cancer cell line (HCT116).
Results
We identified 140 genes whose depletion improved the survival of tetraploid cells and characterized in depth two of them: SPINT2 and USP28. We found that SPINT2 is a general regulator of CDKN1A transcription via histone acetylation. Using mass spectrometry and immunoprecipitation, we found that USP28 interacts with NuMA1 and affects centrosome clustering. Tetraploid cells accumulate DNA damage and loss of USP28 reduces checkpoint activation, thus facilitating their proliferation.
Conclusions
Our results indicate three aspects that contribute to the survival of tetraploid cells: (i) increased mitogenic signaling and reduced expression of cell cycle inhibitors, (ii) the ability to establish functional bipolar spindles and (iii) reduced DNA damage signaling.
We provide a complete elaboration of the L2-Hilbert space hypocoercivity theorem for the degenerate Langevin dynamics with multiplicative noise, studying the longtime behavior of the strongly continuous contraction semigroup solving the abstract Cauchy problem for the associated backward Kolmogorov operator. Hypocoercivity for the Langevin dynamics with constant diffusion matrix was proven previously by Dolbeault, Mouhot and Schmeiser in the corresponding Fokker–Planck framework and made rigorous in the Kolmogorov backwards setting by Grothaus and Stilgenbauer. We extend these results to weakly differentiable diffusion coefficient matrices, introducing multiplicative noise for the corresponding stochastic differential equation. The rate of convergence is explicitly computed depending on the choice of these coefficients and the potential giving the outer force. In order to obtain a solution to the abstract Cauchy problem, we first prove essential self-adjointness of non-degenerate elliptic Dirichlet operators on Hilbert spaces, using prior elliptic regularity results and techniques from Bogachev, Krylov and Röckner. We apply operator perturbation theory to obtain essential m-dissipativity of the Kolmogorov operator, extending the m-dissipativity results from Conrad and Grothaus. We emphasize that the chosen Kolmogorov approach is natural, as the theory of generalized Dirichlet forms implies a stochastic representation of the Langevin semigroup as the transition kernel of a diffusion process which provides a martingale solution to the Langevin equation with multiplicative noise. Moreover, we show that even a weak solution is obtained this way.
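For orientation, a common way to write the Langevin equation with multiplicative noise in this position-velocity setting is the following display (with position \(X_t\), velocity \(V_t\), outer potential \(\Phi\) and a position-dependent diffusion matrix \(\Sigma\), with friction chosen to match the noise as fluctuation-dissipation requires). This is an illustrative standard form under those assumptions, not a quotation of the paper's equations:

```latex
\begin{aligned}
\mathrm{d}X_t &= V_t\,\mathrm{d}t,\\
\mathrm{d}V_t &= -\Sigma(X_t)\,V_t\,\mathrm{d}t \;-\; \nabla\Phi(X_t)\,\mathrm{d}t \;+\; \sqrt{2\,\Sigma(X_t)}\,\mathrm{d}W_t .
\end{aligned}
```

For constant \(\Sigma\), this reduces to the classical Langevin dynamics treated by Dolbeault, Mouhot and Schmeiser; the weak differentiability assumptions on \(\Sigma\) are what the abstract's extension concerns.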
In the drive toward the climate-neutral and ultra-low emission vehicle powertrains of the future, synthetic fuels produced from renewable sources will play a major role. Polyoxymethylene dimethyl ethers (POMDME or “OME”) produced from renewable hydrogen are a very promising candidate for zero-impact emissions in future CI engines. To optimize the utilization of these fuels in terms of efficiency, performance and emissions, it is not only necessary to adapt the combustion parameters, but especially to optimize the injection and mixture formation process. In the present work, the spray break-up behavior and mixture formation of OME fuel is investigated numerically in 3D CFD and validated against experimental data from optical measurements in a high-pressure/high-temperature chamber using Schlieren and Mie scattering. For comparison, the same operating points using conventional diesel fuel were measured in the optical chamber, and the CFD modeling was optimized based on these data. To model the spray break-up phenomena reliably, the primary break-up model according to Fischer is used, taking into account the nozzle-internal flow in a detailed calculation of the disperse droplet phase. As OME has not yet been investigated very intensively with respect to its physico-chemical properties, chemical analyses of the substance properties were carried out to capture the most important parameters correctly in the simulation. With this approach, the results of the optical spray measurement could be reproduced well by the numerical model for the cases studied here, laying the basis for further numerical studies of OME sprays, including real engine operation.
Development of a simple substitute model to describe the normal force of fluids in narrow gaps
(2023)
Fluids in narrow gaps are employed frequently in many applications. The motivation for their use is diverse and ranges from hydrodynamic lubrication in plain bearings to the transport of hard particles into the working gap for the purpose of machining workpiece surfaces in lapping processes. Depending on the focus of the analysis, it may be useful to investigate the entire pressure field or to calculate only individual quantities. For example, in sophisticated simulations it may be of interest to know the resulting force of a fluid as a function of the external system state in order to describe its damping characteristics. Especially for the simulation of flows in narrow gaps, the Reynolds equation is a convenient choice, which, in contrast to the more general Navier-Stokes equations, can lead to considerable savings in computational time because only a two-dimensional instead of a three-dimensional discretization is required. However, if not the highly detailed pressure field is of interest, but only simple relations such as the resulting force as a function of distance and velocity, and if this relation is to be evaluated many times for different parameter combinations over a wide range of values, the use of a robust substitute model is a good choice. This article deals with the creation of such a substitute model based on the Reynolds equation, taking cavitation into account.
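For orientation, one common form of the (incompressible, isoviscous) Reynolds equation for the pressure p in a thin film of height h and dynamic viscosity μ under a pure squeeze motion reads as follows; the notation is generic and assumed here for illustration, not taken from the article:

```latex
\frac{\partial}{\partial x}\!\left(\frac{h^{3}}{12\mu}\,\frac{\partial p}{\partial x}\right)
+ \frac{\partial}{\partial y}\!\left(\frac{h^{3}}{12\mu}\,\frac{\partial p}{\partial y}\right)
= \frac{\partial h}{\partial t},
\qquad
F_N = \int_A p \,\mathrm{d}A .
```

The substitute model bypasses the field computation and directly approximates the normal force F_N as a function of the gap height and its rate of change.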
Tribological systems are often characterized based on time-averaged quantities such as wear rates, friction coefficients and material properties. It is well known that some tribological metrics show variations depending on the laboratory conducting the study and the reproduction method selected. Perhaps the key to overcoming this problem is to avoid a strong compression of the information generated. In this context, the arising forces and the coefficient of friction in three-body wear systems are investigated in more detail. The mean value of a time series of these physical quantities is only a single property and by no means an exhaustive description. A more detailed consideration of the variances could be a necessary condition for an appropriate comparison of tribological parameters and a correct interpretation of the properties of tribological systems. For this purpose, we examine two very simple tribological systems as examples and take a closer look at the properties of some characteristic process quantities.
Using the mixed-metal approach, a direct synthesis route at ambient pressure was developed for a new type of bimetallic metal-organic framework based on the CPO-27 structure. The structural characterization of CPO-27(Cu0.6−CS−Co0.4) using X-ray diffraction, transmission electron microscopy, energy-dispersive X-ray mapping and X-ray absorption spectroscopy revealed that the Cu2+ and Co2+ ions were exclusively incorporated at the metal positions of the CPO-27 lattice, but with a core-shell distribution within the crystallites. The parent framework material was then utilized as a precursor for the generation of novel bimetallic carbon-supported materials via controlled thermal decomposition in a reducing atmosphere. During this decomposition process, the distribution of the two metals remained the same, which resulted in unique needle-shaped particles with a high dispersion of cobalt at the periphery of the amorphous carbon and agglomerated copper particles in the interior.
The fatigue life of metals manufactured via laser-based powder bed fusion (L-PBF) highly depends on process-induced defects. In this context, not only the size and geometry of the defect, but also the properties and the microstructure of the surrounding material volume must be considered. In the presented work, the microstructural changes in the vicinity of a crack-initiating defect in a fatigue specimen produced via L-PBF and made of AISI 316L were analyzed in detail. The xenon plasma focused ion beam (Xe-FIB) technique, scanning electron microscopy (SEM), and electron backscatter diffraction (EBSD) were used to investigate the phase distribution, local misorientations, and grain structure, including the crystallographic orientations. These analyses revealed a fine grain structure in the vicinity of the defect, which is arranged in accordance with the melt pool geometry. Besides pronounced cyclic plastic deformation, a deformation-induced transformation of the initial austenitic phase into α’-martensite was observed. The plastic deformation as well as the phase transformation were more pronounced near the border between the defect and the surrounding material volume. However, the extent of the plastic deformation and the deformation-induced phase transformation varies locally in this border region. Although a beneficial effect of certain grain orientations on the phase transformation and plastic deformability was observed, the microstructural changes found cannot solely be explained by the respective crystallographic orientation. These changes are assumed to further depend on the inhomogeneous distribution of the multiaxial stresses beneath the defect as well as on the grain morphology.
As additive manufacturing offers only low surface quality, a subsequent machining of functional and highly loaded areas is required. Thus, a sound knowledge of the interrelation between the additive and subtractive manufacturing process as well as the resulting mechanical properties is indispensable. In this work, specimens were manufactured by using laser-based powder bed fusion (L-PBF) with substantially different sets of process parameters as well as subsequent grinding (G) or milling (M). Despite the substantially different surface topographies, the fatigue tests revealed only a slight influence of the subtractive manufacturing on the fatigue behavior, whereas the different laser-based powder bed fusion process parameters led to pronounced changes in fatigue strength. In contrast, a significant influence of subtractive finishing on the fatigue properties of the defect-free continuously cast (CC) reference specimens was observed. This can be explained by a dominating influence of process-induced defects in laser-based powder bed fusion material, which overruled the influence of surface machining. However, although both laser-based powder bed fusion parameter sets resulted in substantial defects, one set yielded similar fatigue strength compared to continuously cast specimens.
The 22 wt.% Cr, fully ferritic stainless steel Crofer®22 H has a higher thermomechanical fatigue (TMF) lifetime compared to the advanced ferritic-martensitic steel P91, which is assumed to be caused by different damage tolerance, leading to differences in crack propagation and failure mechanisms. To analyze this, instrumented cyclic indentation tests (CITs) were used, because the material's cyclic hardening potential, which strongly correlates with damage tolerance, can be determined by analyzing the deformation behavior in CITs. In the presented work, CITs were performed for both materials on specimens loaded for different numbers of TMF cycles. These investigations show a higher damage tolerance for Crofer®22 H and demonstrate changes in damage tolerance during TMF loading for both materials, which correlates with the cyclic deformation behavior observed in TMF tests. Furthermore, the results obtained for Crofer®22 H indicate an increase of damage tolerance in the second half of the TMF lifetime, which cannot be observed for P91. Moreover, CITs were performed on Crofer®22 H in the vicinity of a fatigue crack, enabling a local analysis of the damage tolerance. These CITs show differences between the crack edges and the crack tip. Conclusively, the presented results demonstrate that CITs can be utilized to analyze TMF-induced changes in damage tolerance.
To exploit the whole potential of Additive Manufacturing (AM), a sound knowledge about the mechanical and especially cyclic properties of AM materials as well as their dependency on the process parameters is indispensable. In the presented work, the influence of the chemical composition of the used powder on the fatigue behavior of Selectively Laser Melted (SLM) and Laser Deposition Welded (LDW) specimens made of austenitic stainless steel AISI 316L was investigated. Therefore, in each manufacturing process two variations of the chemical composition of the used powder were utilized. For qualitative characterization of the materials' cyclic deformation behavior, load increase tests (LITs) were performed and further used for the physically based lifetime calculation method (PhyBaLLIT), enabling an efficient determination of stress (S)–number of cycles to failure (Nf) curves (S–Nf), which show excellent correlation to additionally performed constant amplitude tests (CATs). Moreover, instrumented cyclic indentation tests (PhyBaLCHT) were utilized to characterize the materials' defect tolerance in a comparably short time. All material variants exhibit a high influence of microstructural defects on the fatigue properties. Consequently, for the SLM process a higher fatigue lifetime at lower stress amplitudes could be observed for the batch with a higher defect tolerance, resulting from a more pronounced deformation-induced austenite–α’-martensite transformation. Correspondingly, the batch of LDW material with an increased defect tolerance exhibits a higher fatigue strength. However, the differences in defect tolerance between the LDW batches are only slightly influenced by phase transformation and seem to be mainly governed by differences in the hardening potential of the austenitic microstructure. Furthermore, a significantly higher fatigue strength could be observed for SLM material in relation to LDW specimens, because of the refined microstructure and smaller microstructural defects of SLM specimens.
We propose a universal method for the evaluation of generalized standard materials that greatly simplifies the material law implementation process. By means of automatic differentiation and a numerical integration scheme, AutoMat reduces the implementation effort to two potential functions. By moving AutoMat to the GPU, we close the performance gap to conventional evaluation routines and demonstrate in detail that the expression level reverse mode of automatic differentiation as well as its extension to second order derivatives can be applied inside CUDA kernels. We underline the effectiveness and the applicability of AutoMat by integrating it into the FFT-based homogenization scheme of Moulinec and Suquet and discuss the benefits of using AutoMat with respect to runtime and solution accuracy for an elasto-viscoplastic example.
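The core idea, deriving the material response by automatically differentiating a potential function instead of hand-coding derivatives, can be illustrated with a minimal stdlib-only sketch. Two hedges: AutoMat itself uses expression-level reverse-mode automatic differentiation inside CUDA kernels, whereas this example uses simple forward-mode dual numbers, and the quadratic elastic energy is an illustrative stand-in for a real potential, not the article's material law.

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# A Dual carries a value and a directional derivative ("dot") and
# propagates both through arithmetic by the chain rule.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__


def stress(strain, modulus=3.0):
    """Stress as the derivative of the elastic energy psi(e) = 0.5*E*e^2."""
    e = Dual(strain, 1.0)            # seed the derivative direction
    psi = 0.5 * modulus * e * e      # evaluate the potential on dual numbers
    return psi.dot                   # d(psi)/d(strain) = E * strain


print(stress(2.0))  # 6.0
```

The same pattern scales to the two potential functions of a generalized standard material: the user writes only the potentials, and differentiation yields the stress and internal-variable evolution.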
In this note, we define one more way of quantization of classical systems. The quantization we consider is an analogue of the classical Jordan–Schwinger map, which has been known and used for a long time by physicists. The difference, compared to the Jordan–Schwinger map, is that we use generators of the Cuntz algebra O∞ (i.e., a countable family of mutually orthogonal partial isometries of a separable Hilbert space) as “building blocks” instead of creation–annihilation operators. The resulting scheme satisfies properties similar to Van Hove prequantization, i.e., exact conservation of Lie brackets and linearity.
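For orientation, the classical Jordan–Schwinger map assigns to each matrix A the operator J(A) = Σ a_i† A_ij a_j built from creation and annihilation operators. A sketch of the analogous construction with Cuntz generators, using only the standard relation of O∞, where s_i* s_j = δ_ij 1:

```latex
J(A) = \sum_{i,j} s_i \, A_{ij} \, s_j^{*},
\qquad s_i^{*} s_j = \delta_{ij}\,\mathbf{1}
\quad\Longrightarrow\quad
[\,J(A),\,J(B)\,] = J(AB - BA) = J\bigl([A,B]\bigr),
```

so the map is linear and preserves Lie brackets exactly, matching the Van Hove-type properties stated above.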
When considering complex systems, identifying the most important actors is often of relevance. When the system is modeled as a network, centrality measures are used which assign each node a value due to its position in the network. It is often disregarded that they implicitly assume a network process flowing through the network, and also make assumptions about how the network process flows through it. A node is then central with respect to this network process (Borgatti in Soc Netw 27(1):55–71, 2005, https://doi.org/10.1016/j.socnet.2004.11.008). It has been shown that real-world processes often do not fulfill these assumptions (Bockholt and Zweig, in Complex networks and their applications VIII, Springer, Cham, 2019, https://doi.org/10.1007/978-3-030-36683-4_7). In this work, we systematically investigate the impact of the measures' assumptions by using four datasets of real-world processes. In order to do so, we introduce several variants of the betweenness and closeness centrality which, for each assumption, use either the assumed process model or the behavior of the real-world process. The results are twofold: on the one hand, for all measure variants and almost all datasets, we find that, in general, the standard centrality measures are quite robust against deviations in their process model. On the other hand, we observe a large variation of ranking positions of single nodes, even among the nodes ranked high by the standard measures. This has implications for the interpretability of results of those centrality measures. Since a mismatch between the behavior of the real network process and the assumed process model affects even the highly ranked nodes, resulting rankings need to be interpreted with care.
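The implicit process model of a standard measure is visible already in its definition: closeness centrality, for instance, assumes the process spreads along shortest paths. A stdlib-only sketch (graph, function name and the connected-graph assumption are illustrative; the article's variants would replace the BFS distances by quantities observed from the real process):

```python
from collections import deque

def closeness(adj, v):
    """Shortest-path closeness of node v: (n-1) / sum of BFS distances.
    Assumes a connected undirected graph given as an adjacency dict."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    total = sum(d for node, d in dist.items() if node != v)
    return (len(adj) - 1) / total if total else 0.0

# Path graph 0-1-2-3-4: the middle node is closest to all others.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
ranking = sorted(path, key=lambda v: closeness(path, v), reverse=True)
print(ranking[0])  # 2
```

Swapping the BFS distances for empirically observed traversal costs changes the node values, and, as the article shows, it is the individual ranking positions rather than the overall ranking that react most strongly.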
In this paper we construct a numerical solver for the Saint Venant equations. Special attention is given to the balancing of the source terms, including the bottom slope and variable cross-sectional profiles. Therefore a special discretization of the pressure law is used, in order to transfer analytical properties to the numerical method. Based on this approximation a well-balanced solver is developed, assuring the C-property and depth positivity. The performance of this method is studied in several test cases focusing on accurate capturing of steady states.
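For a rectangular channel the Saint Venant system reduces to the familiar shallow-water form, given here only to fix the notions used above (the article treats general variable cross-sectional profiles):

```latex
\partial_t h + \partial_x (h u) = 0, \qquad
\partial_t (h u) + \partial_x\!\left(h u^{2} + \tfrac{g}{2} h^{2}\right)
  = -\,g\,h\,\partial_x b ,
```

with water depth h, velocity u and bottom topography b. The C-property then means that the lake-at-rest steady state u = 0, h + b = const is reproduced exactly by the discrete scheme, which requires the flux discretization and the source-term discretization to cancel identically.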
The MINT-EC-Girls-Camp: Math-Talent-School is aimed at female students from MINT-EC schools who are enthusiastic about mathematics and want to gain insights into the professional world of mathematicians. The event illustrates to the participants the growing relevance of applied fields of mathematical research, such as industrial mathematics (Technomathematik) and business mathematics (Wirtschaftsmathematik). It is intended to make the importance of mathematical working methods in today's professional world, especially in industry and business, tangible for school students. The Talent-School is organized by MINT-EC and the Felix-Klein-Zentrum für Mathematik. During this Talent-School, the students were supervised by staff of the Competence Center for Mathematical Modeling in MINT Projects in Schools (KOMMS) at TU Kaiserslautern and of the Fraunhofer ITWM. In this report we describe the projects carried out during the Talent-School in October 2022.
Thermal comfort is one of the most important factors for occupant satisfaction and, as a result, for the building energy performance. Decentralized heating and cooling systems, also known as “Personal Environmental Comfort Systems” (PECS), have attracted significant interest in research and industry in recent years. While building simulation software is used in practice to improve the energy performance of buildings, most building simulation applications use the PMV approach for comfort calculations. This article presents a newly developed building controller that uses a holistic approach in the consideration of PECS within the framework of the building simulation software ESP-r. With PhySCo, a dynamic physiology, sensation, and comfort model, the presented building controller can adjust the setpoint temperatures of the central HVAC system as well as control the use of PECS based on the thermal sensation and comfort values of a virtual human. An adaptive building controller with a wide dead-band and adaptive setpoints between 18 and 26 °C (30 °C) was compared to a basic controller with a fixed and narrow setpoint range between 21 and 24 °C. The simulations were conducted for a temperate western European climate (Mannheim, Germany), classified as Cfb climate according to Köppen-Geiger. With the adaptive controller, a 12.5% reduction in end-use energy was achieved in winter. For summer conditions, combining the adaptive controller with an office chair with a cooling function and a fan allowed the upper setpoint temperature to be increased to 30 °C while still maintaining comfortable conditions, reducing the end-use energy by 15.3%. In spring, the same combination led to a 9.3% reduction in the final energy. Further combinations of systems were studied with the newly presented controller.
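The adaptive dead-band idea can be sketched in a few lines: the controller only moves the HVAC setpoint when the occupant model reports discomfort, and keeps it inside a wide adaptive band. Everything here is a hypothetical illustration; the function name, the ±0.5 sensation thresholds and the 0.5 K step are invented assumptions, not the PhySCo/ESP-r interface.

```python
# Hypothetical dead-band setpoint logic (illustrative assumptions only):
# sensation < -0.5  -> occupant feels cold, raise the setpoint
# sensation >  0.5  -> occupant feels warm, lower the setpoint
# otherwise         -> inside the dead-band, leave the HVAC alone
def adjust_setpoint(setpoint, sensation, lo=18.0, hi=26.0, step=0.5):
    if sensation < -0.5:
        return min(setpoint + step, hi)   # raise, capped at the band edge
    if sensation > 0.5:
        return max(setpoint - step, lo)   # lower, floored at the band edge
    return setpoint                       # no change inside the dead-band

print(adjust_setpoint(21.0, -1.0))  # 21.5
```

Activating a PECS device (chair or fan) would, in this picture, widen the acceptable band before the central system reacts, which is how the upper setpoint could be pushed toward 30 °C without discomfort.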
Since 1993, the Department of Mathematics at TU Kaiserslautern has organized the annual mathematical modeling weeks. The event grew in parallel with the increasing relevance of applied fields of mathematical research, such as industrial mathematics (Technomathematik) and business mathematics (Wirtschaftsmathematik). It is intended to make the importance of mathematical working methods in today's professional world, especially in industry and business, tangible for school students. In addition, the modeling week offers the participating teachers an insight into project work on open problems within the framework of mathematical modeling. In this report we describe the projects carried out during the modeling week in December 2021. The thematic focus of the event was “Weather and Disaster Management” (“Wetter und Katastrophenschutz”).
Since 1993, the Department of Mathematics at TU Kaiserslautern has organized the annual mathematical modeling weeks. The event grew in parallel with the increasing relevance of applied fields of mathematical research, such as industrial mathematics (Technomathematik) and business mathematics (Wirtschaftsmathematik). It is intended to make the importance of mathematical working methods in today's professional world, especially in industry and business, tangible for school students. In addition, the modeling week offers the participating teachers an insight into project work on open problems within the framework of mathematical modeling. In this report we describe the projects carried out during the modeling week in December 2022.
Qualitative NMR spectroscopic and quantitative calorimetric binding studies were performed to characterize the interaction of nontoxic mimics of the V-type nerve agent VX (O-ethyl S-[2-(diisopropylamino)ethyl] methylphosphonothioate) and the Novichok nerve agent A-234 (ethyl (1-(diethylamino)ethylidene)phosphoramidofluoridate) with a series of receptors in 100 mM aqueous phosphate buffer at pH 7.4 and 37°C. These investigations provided information about the preferred geometry with which the nerve agent mimics are included into the receptor cavities and about the stability of the complexes formed. According to the results, the positively charged VX mimic prefers to bind to cation receptors such as sulfonated calixarenes and an acyclic cucurbituril but does not noticeably interact with cyclodextrins. While binding to the acyclic cucurbituril is stronger than that to calixarenes, the mode of inclusion into the sulfonatocalix[4]arene cavity is better suited for the development of scavengers that bind and detoxify V-type nerve agents. The neutral Novichok mimic, on the other hand, only interacts with the acyclic cucurbituril with a strength required for scavenger development. These binding studies thus provided guidelines for the further development of nerve agent scavengers.
Jet loop reactors are standard multiphase reactors used in chemical, biological and environmental processes. The strong liquid jet provided by a nozzle enforces both internal circulation of liquid and gas as well as entrainment and dispersion of the gas phase. We present a one-dimensional compartment model based on a momentum balance that describes the internal circulation of gas and liquid phase in the jet loop reactor. This model considers the influence of local variations of the gas volume fraction on the internal circulation. These local variations can be caused by coalescence of gas bubbles, additional gas-feeding points and gas consumption or production. In this work, we applied the model to study the influence of a gas-consuming reaction on the internal circulation. In a comprehensive sensitivity analysis, the interaction of different parameters such as rate of reaction, power input through the nozzle, gas holdup, reactor geometry, and circulation rate were investigated. The results show that gas consumption can have a significant impact on internal circulation. Industrially relevant operating conditions have even been found where the internal circulation comes to a complete standstill.
The locally occurring mechanisms of hydrogen embrittlement significantly influence the fatigue behavior of a material, which was shown in previous research on two different AISI 300-series austenitic stainless steels with different austenite stabilities. In this preliminary work, enhanced fatigue crack growth as well as changes in crack initiation sites and morphology caused by hydrogen were observed. To further analyze the results obtained in this previous research, in the present work the local cyclic deformation behavior of the material volume was analyzed by using cyclic indentation testing. Moreover, these results were correlated to the local dislocation structures obtained with transmission electron microscopy (TEM) in the vicinity of fatigue cracks. The cyclic indentation tests show a decreased cyclic hardening potential as well as an increased dislocation mobility for the conditions precharged with hydrogen, which correlates with the TEM analysis, revealing coarser dislocation cells in the vicinity of the fatigue crack tip. Consequently, the presented results indicate that the hydrogen-enhanced localized plasticity (HELP) mechanism leads to accelerated crack growth and changes in crack morphology for the materials investigated. In summary, the cyclic indentation tests show a high potential for analyzing the effects of hydrogen on the local cyclic deformation behavior.
A detailed study of a cylinder activation concept by efficiency loss analysis and 1D simulation
(2020)
Cylinder deactivation is a well-known measure for reducing fuel consumption, especially when applied to gasoline engines. Mostly, such systems are designed to deactivate half of the number of cylinders of the engine. In this study, a new concept is investigated for deactivating only one out of four cylinders of a commercial vehicle diesel engine (“3/4-cylinder concept”). For this purpose, cylinders 2–4 of the engine are operated in “real” 3-cylinder mode, thus with the firing order and ignition distance of a regular 3-cylinder engine, while the first cylinder is only activated near full load, running in parallel to the fourth cylinder. This concept was integrated into a test engine and evaluated on an engine test bench. As the investigations revealed significant improvements for the low-to-medium load region as well as disadvantages for high load, an extensive numerical analysis was carried out based on the experimental results. This included both 1D simulation runs and a detailed cylinder-specific efficiency loss analysis. Based on the results of this analysis, further steps for optimizing the concept were derived and studied by numerical calculations. As a result, it can be concluded that the 3/4-cylinder concept may provide significant improvements of real-world fuel economy when integrated as a drive unit into a tractor.
Liegruppen
(1997)
This article presents a methodology whereby adjoint solutions for partitioned multiphysics problems can be computed efficiently, in a way that is completely independent of the underlying physical sub-problems, the associated numerical solution methods, and the number and type of couplings between them. By applying the reverse mode of algorithmic differentiation to each discipline, and by using a specialized recording strategy, diagonal and cross terms can be evaluated individually, thereby allowing different solution methods for the generic coupled problem (for example block-Jacobi or block-Gauss-Seidel). Based on an implementation in the open-source multiphysics simulation and design software SU2, we demonstrate how the same algorithm can be applied for shape sensitivity analysis on a heat exchanger (conjugate heat transfer), a deforming wing (fluid–structure interaction), and a cooled turbine blade where both effects are simultaneously taken into account.
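The coupled adjoint system is solved by fixed-point iterations such as block-Jacobi or block-Gauss-Seidel over the disciplines. A toy sketch of the block-Gauss-Seidel pattern with two scalar "disciplines" follows; the maps f and g are invented stand-ins for the per-discipline updates, not SU2 code:

```python
# Block-Gauss-Seidel fixed-point iteration for a two-discipline coupling:
# discipline 1 computes x = f(y), discipline 2 immediately reuses the
# fresh x via y = g(x) (Jacobi would instead use the old x).
def gauss_seidel(f, g, x0=0.0, y0=0.0, tol=1e-10, max_iter=100):
    x, y = x0, y0
    for _ in range(max_iter):
        x_new = f(y)          # update discipline 1 with the latest y
        y_new = g(x_new)      # update discipline 2 with the *new* x
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

# Toy coupling: x = 0.5*y + 1, y = 0.25*x + 2 (a contraction, so it converges;
# fixed point x = 16/7, y = 18/7).
x, y = gauss_seidel(lambda y: 0.5 * y + 1.0, lambda x: 0.25 * x + 2.0)
print(round(x, 6), round(y, 6))  # 2.285714 2.571429
```

In the article's setting the diagonal terms (each discipline differentiated with respect to its own state) and the cross terms (with respect to the other discipline's state) play the roles of f and g, and the same loop structure applies unchanged.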
Enhancing the quality of surgical interventions is one of the main goals of surgical robotics. Thus we have devised a surgical robotic system for maxillofacial surgery which can be used as an intelligent intraoperative surgical tool. Up to now a surgeon preoperatively plans an intervention by studying two-dimensional X-rays, thus neglecting the third dimension. In the course of the special research programme "Computer and Sensor Aided Surgery" a planning system has been developed at our institute which allows the surgeon to plan an operation on a three-dimensional computer model of the patient. Transposing the preoperatively planned bone cuts, bore holes, cavities, and milled surfaces during surgery still proves to be a problem, as no adequate means are at hand: the actual performance of the surgical intervention and the surgical outcome solely depend on the experience and the skill of the operating surgeon. In this paper we present our approach of a surgical robotic system to be used in maxillofacial surgery. Special emphasis is placed on the modelling of the environment in the operating theatre and the motion planning of our surgical robot.
Malbrouck Castle, located in Manderen in the Moselle department, lies right on the German and Luxembourg border. Its construction was begun in 1419 at the behest of Arnold VI, lord of Sierck, was completed in 1434, the year in which the castle was deemed capable of withstanding an attack, and was then placed in the service of the Archbishopric of Trier.
Unfortunately, at the time of the knight Arnold's death his succession was not secured, which is why the castle passed from one hand to the next from the end of the 15th until the beginning of the 17th century.
Since it was placed under monument protection in 1930 and the Conseil Général de la Moselle bought it back from its last owner, a farmer, in 1975, the castle has been completely restored and was reopened in September 1988. Like any other structure of this size, it requires fine and precise monitoring. For this reason, the Conseil Général de la Moselle wanted to become a strategic partner in the CURe MODERN project.
Habitat fragmentation and forest management have been considered to drastically alter the nature of forest ecosystems globally. However, much uncertainty remains regarding the causative mechanisms mediating temperate forest responses, such as forest physical environment and the structure of woody plant assemblages, regardless of the role these forests play for global sustainability. In this paper, we examine how both habitat fragmentation and timber exploitation via silvicultural operations affect these two factors at local and habitat spatial scales in a hyper-fragmented landscape of mixed beech forests spanning more than 1500 km2 in SW Germany. Variables were recorded across 57 1000 m2 plots covering four habitats: small forest fragments, forest edges within large control forests, as well as managed and unmanaged forest interior sites. As expected, forest habitats differed in disturbance level, physical conditions and community structure at plot and habitat scale. Briefly, diversity of plant assemblages differed across all forest habitats (highest in edge forests) and correlated with integrative indices of edge, fragmentation and management effects. Surprisingly, managed and unmanaged forests did not differ in terms of species richness at local spatial scale, but managed forests exhibited a clear signal of physical/floristic homogenization as species promoted by silviculture proliferated; i.e. impoverished communities at landscape scale. Moreover, functional composition of plant communities responded to the microclimatic regime within forest fragments, resulting in a higher prevalence of species adapted to these microclimatic conditions. Our results underscore the notion that forest fragmentation and silvicultural management (1) promote changes in microclimatic regimes, (2) alter the balance between light-demanding and shade-adapted species, (3) support diverse floras across forest edges, and (4) alter patterns of beta diversity. 
Hence, in human-modified landscapes, edge-affected habitats can be recognized as biodiversity reservoirs in contrast to impoverished managed interior forests. Furthermore, our results confirm the role of unmanaged forests as a source of environmental variability, species turnover, and distinct woody plant communities.
Introducing parallelism and exploring its use is still a fundamental challenge for the computer algebra community. In high-performance numerical simulation, on the other hand, transparent environments for distributed computing which follow the principle of separating coordination and computation have been a success story for many years. In this paper, we explore the potential of using this principle in the context of computer algebra. More precisely, we combine two well-established systems: The mathematics we are interested in is implemented in the computer algebra system SINGULAR, whose focus is on polynomial computations, while the coordination is left to the workflow management system GPI-Space, which relies on Petri nets as its mathematical modeling language and has been successfully used for coordinating the parallel execution (autoparallelization) of academic codes as well as for commercial software in application areas such as seismic data processing. The result of our efforts is a major step towards a framework for massively parallel computations in the application areas of SINGULAR, specifically in commutative algebra and algebraic geometry. As a first test case for this framework, we have modeled and implemented a hybrid smoothness test for algebraic varieties which combines ideas from Hironaka’s celebrated desingularization proof with the classical Jacobian criterion. Applying our implementation to two examples originating from current research in algebraic geometry, one of which cannot be handled by other means, we illustrate the behavior of the smoothness test within our framework and investigate how the computations scale up to 256 cores.
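The classical ingredient of the hybrid test mentioned above is the Jacobian criterion; in its standard form, a point p of an equidimensional variety X = V(f_1, …, f_k) ⊆ 𝔸^n of dimension d is smooth if and only if

```latex
\operatorname{rank}
\left(\frac{\partial f_i}{\partial x_j}(p)\right)_{\substack{1 \le i \le k \\ 1 \le j \le n}}
= n - d .
```

The hybrid test combines this rank condition with descent ideas from Hironaka's desingularization proof, as described in the abstract.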
US arms control policies have shifted frequently in the last 60 years, ranging from the role of a 'brakeman' regarding international arms control to the role of a 'booster,' initiating new agreements. My article analyzes the conditions that contribute to this mixed pattern. A crisp-set Qualitative Comparative Analysis (QCA) evaluates 24 cases of US decisions on international arms control treaties (1963–2021). The analysis reveals that the strength of conservative treaty skeptics in the Senate, in conjunction with other factors, has contributed to the demise of arms control policies since the end of the Cold War. A brief study of the Trump administration's arms control policies provides case-sensitive insights to corroborate the conditions identified by the QCA. The findings suggest that conservative treaty skeptics contested the bipartisan consensus and thus impaired the ability of the USA to perform its leadership role within the international arms control regime.
A Strained Partnership: Crisis and Resilience in Transatlantic Relations 20 Years after 9/11
(2021)
From a transatlantic perspective, 2021 brought several turning points at once. In January, US President Donald Trump, whose disruptive policies had provoked numerous conflicts with Europe, was succeeded by Joseph R. Biden. In August, the longest mission in NATO's history ended in Afghanistan with a chaotic withdrawal and the Taliban's seizure of power, almost 20 years after the war began. Finally, the Bundestag elections in September marked the end of the tenure of Angela Merkel, who as Federal Chancellor dealt with four US presidents over 16 years in office. These turning points provide ample occasion to take stock of transatlantic relations since 9/11.
Biological soil crusts (biocrusts) are a common element of the Queensland (Australia) dry savannah ecosystem and are composed of cyanobacteria, algae, lichens, bryophytes, fungi and heterotrophic bacteria. Here we report how the CO2 gas exchange of the cyanobacteria-dominated biocrust type from Boodjamulla National Park in the north Queensland Gulf Savannah responds to the pronounced climatic seasonality, and assess its quality as a carbon sink, using a semi-automatic cuvette system. The dominant cyanobacteria are the filamentous species Symplocastrum purpurascens together with Scytonema sp. Metabolic activity was recorded between 1 July 2010 and 30 June 2011, during which CO2 exchange was only evident from November 2010 until mid-April 2011, representative of 23.6 % of the 1-year recording period. At the onset of the wet season, the first month of activity (November), as well as the last month (April), showed a pronounced respiratory loss of CO2. The metabolically active period accounted for 25 % of the wet season, and of that period 48.6 % was net photosynthesis (NP) and 51.4 % dark respiration (DR). During the time of NP, net photosynthetic uptake of CO2 during daylight hours was reduced by 32.6 % due to water supersaturation. In total, the biocrust fixed 229.09 mmol CO2 m−2 yr−1, corresponding to an annual carbon gain of 2.75 g m−2 yr−1. Due to malfunction of the automatic cuvette system, data from September and October 2010 together with some days in November and December 2010 could not be analysed for NP and DR. Based on climatic and gas exchange data from November 2010, an estimated loss of 88 mmol CO2 m−2 was found for the 2 months, resulting in corrected annual rates of 143.1 mmol CO2 m−2 yr−1, equivalent to a carbon gain of 1.7 g m−2 yr−1.
The bulk of the net photosynthetic activity occurred above a relative humidity of 42 %, indicating a suitable climatic combination of temperature, water availability and light intensity well above 200 µmol photons m−2 s−1 photosynthetically active radiation. The Boodjamulla biocrust exhibited high seasonal variability in its CO2 gas exchange pattern, clearly divided into metabolically inactive winter months and active summer months. The metabolically active period commences with a period (of up to 3 months) of carbon loss, likely due to reestablishment of the crust structure and restoration of NP, prior to an approximately 4-month period of net carbon gain. In the Gulf Savannah biocrust system, seasonality over the year investigated showed that only a minority of the year is actually suitable for biocrust growth, which thus offers only a small window for potential contribution to soil organic matter.
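The conversion from molar CO2 fixation to carbon gain uses the molar mass of carbon (12.011 g/mol), since each fixed CO2 molecule contributes one carbon atom; a quick check of the figures quoted above:

```python
MOLAR_MASS_C = 12.011  # g/mol; one C atom per fixed CO2 molecule

# Annual CO2 fixation in mmol m^-2 yr^-1 -> carbon gain in g m^-2 yr^-1
uncorrected = 229.09 * MOLAR_MASS_C / 1000  # raw annual rate
corrected = 143.1 * MOLAR_MASS_C / 1000     # rate corrected for data gaps

print(round(uncorrected, 2), round(corrected, 1))  # → 2.75 1.7
```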
With the transition from fluid-capillary-based "Lab on a chip 1.0" concepts in analytical chemistry to "Lab on a chip 2.0" approaches relying on distinct fluid droplets ("digital microfluidics", DMF), the need for reliable methods for droplet actuation has increasingly come into focus. One possible approach is based on "electrowetting on dielectric" (EWOD). This technique has the disadvantage that all possible desired later positions of the droplets on the chip have to be defined prior to chip realization, because one of the EWOD electrode layers has to be structured accordingly. "Optoelectrowetting" (OEW) goes a step further in the sense that the later droplet positions do not have to be known in advance and none of the electrode layers has to be structured. Instead, the electrical parameters of the layer sequence can be altered locally by an impinging (and movable) light spot. Although some research groups have succeeded in demonstrating OEW actuation of droplets, the optimization of the relevant parameters of the layer sequence and the droplet – at least half a dozen parameters altogether – is tedious and not straightforward. In this contribution, for optimization purposes, the equations governing OEW are revisited and extended, e.g., by numerical implementation of the experimentally well-known saturation of the contact angle change. Additionally, a Nelder–Mead algorithm is applied to find the parameters on which the optimization has to focus in order to maximize contact angle changes and, thus, mechanical forces on the droplets. The numerical investigation yields diverse results, e.g., the finding that the droplet's contact area on the dielectric layer has a strong influence on the contact angle change and on the question whether the droplet is pulled or pushed. Moreover, the interplay between frequency and amplitude of the applied rectangular alternating voltage is important for optimization.
Acoustics provides an interesting background for interdisciplinary, cross-subject teaching that connects mathematics, physics and music. Pupils can, for example, work experimentally by producing audio recordings themselves and generating frequency spectra with computer software. Equally, pupils can specify frequency spectra and generate sounds from them. This can serve, for instance, to make the concept of overtones physically or mathematically tangible in music lessons, or to examine the frequency ratios of intervals and triads more closely in harmony theory.
The computer is a very useful tool here, since the mathematical background of this task -- switching between an audio recording and its frequency representation -- lies in Fourier analysis, which is extremely demanding for pupils. By introducing the Fourier transform as a numerical tool that does not have to be understood in detail, interesting mathematics can be done elsewhere, and the connections between acoustics and music can be experienced playfully.
The following article describes an approach we have already implemented at the Felix-Klein-Modellierungswoche: the pupils were given the task of developing a synthesizer capable of imitating various musical instruments. As aids, they received a short introduction to the properties of the Fourier transform as well as audio recordings of various instruments.
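The spectrum analysis underlying such a synthesizer task can be sketched in a few lines; a minimal Python example using NumPy's FFT, with a synthetic tone standing in for a real instrument recording:

```python
import numpy as np

# Synthesize one second of a 220 Hz tone with two overtones
# (amplitudes chosen arbitrarily for illustration).
fs = 8000                      # sampling rate in Hz
t = np.arange(fs) / fs         # one second of samples
signal = (1.0 * np.sin(2 * np.pi * 220 * t)
          + 0.5 * np.sin(2 * np.pi * 440 * t)
          + 0.25 * np.sin(2 * np.pi * 660 * t))

# Frequency spectrum via the real-input FFT; with a 1 s window the
# frequency bins are spaced exactly 1 Hz apart.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The three largest peaks recover the fundamental and its overtones.
peaks = sorted(float(f) for f in freqs[np.argsort(spectrum)[-3:]])
print(peaks)  # → [220.0, 440.0, 660.0]
```

The same pattern applied in reverse (specify a spectrum, then use the inverse FFT) yields the synthesizer direction of the task.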
We show, using several examples, how numerical simulations can be created in spreadsheet programs (here specifically in Excel). These can be used, for example, in the context of mathematical modelling.
The examples comprise a model for the spread of diseases, the trajectory of a football under the influence of air resistance, a Monte-Carlo simulation for the experimental determination of pi, a Monte-Carlo simulation of a shuffled deck of cards, and the modelling of petrol prices with a price trend and noise.
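The Monte-Carlo determination of pi mentioned above translates directly from a spreadsheet into a few lines of code; a minimal Python sketch (sample size and seed chosen arbitrarily):

```python
import random

random.seed(1)  # fixed seed for a reproducible run

# Draw N random points in the unit square; the fraction that lands
# inside the quarter circle x^2 + y^2 <= 1 approximates pi/4.
N = 100_000
hits = sum(1 for _ in range(N)
           if random.random() ** 2 + random.random() ** 2 <= 1)
pi_estimate = 4 * hits / N
print(pi_estimate)  # close to 3.1416, up to Monte-Carlo error
```

In Excel the same idea uses RAND() in two columns and a COUNTIF over the condition above.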
This text summarizes some important basics that allow a quick start into working with the Arduino and the Raspberry Pi. We do not discuss the basic functions of the devices, because numerous tutorials for those are available online. Instead, we concentrate above all on the control of sensors and actuators and discuss several project ideas that can enrich interdisciplinary STEM project teaching.
Building a step counter with an Arduino microcontroller and a motion sensor is an exciting technology project. We explain the basic idea behind product-oriented modelling and the many different ways in which the problem can be approached. In addition, the technical details of the hardware used are discussed in order to enable a quick start into the topic.
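One possible realization of the counting logic, independent of the concrete hardware, is threshold crossing on the acceleration magnitude; a minimal Python sketch on synthetic data, with the gait frequency and threshold chosen purely for illustration:

```python
import math

# Synthetic acceleration magnitude: gravity baseline (1 g) plus a
# sinusoidal component with one peak per simulated step (2 Hz gait),
# sampled at 50 Hz for 10 seconds -> 20 steps expected.
fs, duration, step_freq = 50, 10, 2.0
samples = [1.0 + 0.5 * math.sin(2 * math.pi * step_freq * i / fs)
           for i in range(fs * duration)]

# Simple step counter: count each rising crossing of a threshold
# above the gravity baseline.
threshold = 1.25
steps = sum(1 for prev, cur in zip(samples, samples[1:])
            if prev < threshold <= cur)
print(steps)  # → 20
```

On real sensor data one would additionally low-pass filter the signal and suppress crossings that follow each other faster than a plausible step interval.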
The folding of newly synthesized polypeptides requires the coordinated action of molecular chaperones. Prokaryotic cells and the chloroplasts of plant cells possess the ribosome-associated chaperone trigger factor, which binds nascent polypeptides at their exit stage from the ribosomal tunnel. The structure of bacterial trigger factor has been well characterized and it has a dragon-shaped conformation, with flexible domains responsible for ribosome binding, peptidyl-prolyl cis–trans isomerization (PPIase) activity and substrate protein binding. Chloroplast trigger-factor sequences have diversified from those of their bacterial orthologs and their molecular mechanism in plant organelles has been little investigated to date. Here, the crystal structure of the plastidic trigger factor from the green alga Chlamydomonas reinhardtii is presented at 2.6 Å resolution. Due to the high intramolecular flexibility of the protein, diffraction to this resolution was only achieved using a protein that lacked the N-terminal ribosome-binding domain. The eukaryotic trigger factor from C. reinhardtii exhibits a comparable dragon-shaped conformation to its bacterial counterpart. However, the C-terminal chaperone domain displays distinct charge distributions, with altered positioning of the helical arms and a specifically altered charge distribution along the surface responsible for substrate binding. While the PPIase domain shows a highly conserved structure compared with other PPIases, its rather weak activity and an unusual orientation towards the C-terminal domain points to specific adaptations of eukaryotic trigger factor for function in chloroplasts.
This DFG-funded research project aimed to gain a better understanding of the mechanisms of the W-Cl repair principle through fundamental investigations, and thus to help establish the basis needed for a broader practical application of the repair principle. The focus was on developing a model to describe the chloride redistribution after the application of a system-sealing surface protective coating. Based on Fick's second law of diffusion, a mathematical model with a closed-form analytical solution was developed, with the help of which the chloride redistribution after application of a system-sealing surface protective coating can be calculated under the idealized assumption of complete water saturation of the concrete. Furthermore, the influence of the drying of the concrete, expected as a result of applying the repair principle W-Cl, on the chloride redistribution was investigated. On the basis of laboratory tests and numerical simulations, material-specific reduction functions were developed to quantify the relationship between the chloride diffusion coefficient and the ambient humidity.
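For the idealized, fully water-saturated case, Fick's second law with a constant surface concentration has the well-known erfc solution; a minimal Python sketch (the parameter values are illustrative and not taken from the project):

```python
import math

def chloride_profile(x_mm, t_years, c_surface, D_mm2_per_year):
    """Chloride concentration at depth x after time t, from the erfc
    solution of Fick's second law with constant surface concentration:
    C(x, t) = C_s * erfc(x / (2 * sqrt(D * t)))."""
    return c_surface * math.erfc(x_mm / (2 * math.sqrt(D_mm2_per_year * t_years)))

# Illustrative values: 1.0 wt.% surface chloride, D = 30 mm^2/year,
# profile after 10 years at increasing depths.
for depth in (0, 10, 20, 40):
    print(depth, round(chloride_profile(depth, 10, 1.0, 30.0), 3))
```

The concentration equals the surface value at depth 0 and decays monotonically with depth, as expected for a diffusion profile.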
Recently, convex optimization models have been successfully applied to solving various problems in image analysis and restoration. In this paper, we are interested in relations between convex constrained optimization problems of the form \({\rm argmin} \{ \Phi(x)\) subject to \(\Psi(x) \le \tau \}\) and their penalized counterparts \({\rm argmin} \{\Phi(x) + \lambda \Psi(x)\}\). We recall general results on the topic with the help of an epigraphical projection. Then we deal with the special setting \(\Psi := \| L \cdot\|\) with \(L \in \mathbb{R}^{m,n}\) and \(\Phi := \varphi(H \cdot)\), where \(H \in \mathbb{R}^{n,n}\) and \(\varphi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) meet certain requirements which are often fulfilled in image processing models. In this case we prove, by incorporating the dual problems, that there exists a bijective function such that the solutions of the constrained problem coincide with those of the penalized problem if and only if \(\tau\) and \(\lambda\) are in the graph of this function. We illustrate the relation between \(\tau\) and \(\lambda\) for various problems arising in image processing. In particular, we point out the relation to the Pareto frontier for joint sparsity problems. We demonstrate the performance of the constrained model in restoration tasks for images corrupted by Poisson noise, with the \(I\)-divergence as data fitting term \(\varphi\), and in inpainting models with a constrained nuclear norm. Such models can be useful if we have a priori knowledge of the image rather than of the noise level.
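The correspondence between \(\tau\) and \(\lambda\) can be illustrated numerically in the simplest special case \(\Phi(x) = \frac{1}{2}\|x - b\|_2^2\), \(\Psi = \|\cdot\|_1\) (i.e., \(L\) and \(H\) the identity); a Python sketch of this elementary instance, not the paper's algorithm:

```python
import numpy as np

def soft_threshold(b, lam):
    """Solution of argmin 0.5*||x - b||^2 + lam*||x||_1 (penalized form)."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

def project_l1_ball(b, tau):
    """Solution of argmin 0.5*||x - b||^2 s.t. ||x||_1 <= tau, computed
    by bisection on the soft-threshold level."""
    if np.abs(b).sum() <= tau:
        return b.copy()
    lo, hi = 0.0, np.abs(b).max()
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if np.abs(soft_threshold(b, mid)).sum() > tau:
            lo = mid
        else:
            hi = mid
    return soft_threshold(b, hi)

b = np.array([3.0, -1.5, 0.5, -4.0])
lam = 1.0
x_pen = soft_threshold(b, lam)   # penalized solution for this lambda
tau = np.abs(x_pen).sum()        # matching constraint level tau
x_con = project_l1_ball(b, tau)  # constrained solution for that tau

# Both formulations produce the same minimizer for matching (tau, lambda).
print(np.allclose(x_pen, x_con, atol=1e-6))  # → True
```

Choosing \(\tau := \|x_\lambda\|_1\) for the penalized minimizer \(x_\lambda\) makes the two solutions coincide, which is exactly the graph relation described above in its simplest form.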
Editorial
(2020)
This paper presents an iterative finite element (FE)–based method to calculate the gravity-free shape of nonrigid parts from an optical measurement performed on a non-over-constrained fixture. Measuring these kinds of parts in a stress-free state is almost impossible because deflections caused by their weight occur. To solve this problem, a simulation model of the measurement is created using available methods of reverse engineering. Then, an iterative algorithm calculates the gravity-free shape. The approach does not require a CAD model of the measured part, implying the whole part can be fully scanned. The application of this method mainly addresses thin, unstable sheet metal parts, like those commonly used in the automotive or aerospace industry. To show the performance of the proposed method, validations with simulation and experimental data are presented. The results meet the predefined quality goal of predicting shapes within a tolerance of ±0.05 mm measured in the surface normal direction.