III/V semiconductor quantum dots (QDs) have been a focus of optoelectronics research for about 25 years. Most of the work has been done on InAs QDs on GaAs substrates, but Ga(As)Sb (antimonide) QDs on GaAs substrates/buffers have also gained attention over the last 12 years. There is a scientific dispute over whether a wetting layer forms before antimonide QD formation, as commonly expected for Stranski-Krastanov growth. Usually, ex situ photoluminescence (PL) and atomic force microscopy (AFM) measurements are performed to resolve such issues. In this contribution, we show that reflectance anisotropy/difference spectroscopy (RAS/RDS) can be used for the same purpose as an in situ, real-time monitoring technique. It can be employed not only to identify QD growth via a distinct RAS spectrum, but also to obtain information on the existence of a wetting layer and its thickness. The data suggest that for antimonide QD growth the wetting layer is only one monolayer (1 ML) thick.
Modern society relies on convenience services and mobile communication. Cloud computing is the current trend for making data and applications available at any time on any device. Data centers concentrate computation and storage at central locations, while claiming to be green thanks to optimized maintenance and increased energy efficiency. The key enabler of this evolution is the microelectronics industry. The trend towards power-efficient mobile devices has forced this industry to change its design dogma to "keep data locally and reduce data communication whenever possible". We therefore ask: is cloud computing repeating the aberrations of its enabling industry?
The plasma membrane transporter SOS1 (SALT-OVERLY SENSITIVE1) is vital for plant survival under salt stress. SOS1 activity is tightly regulated, but little is known about the underlying mechanism. SOS1 contains a cytosolic, autoinhibitory C-terminal tail (abbreviated as SOS1 C-term), which is targeted by the protein kinase SOS2 to trigger its transport activity. Here, to identify additional binding proteins that regulate SOS1 activity, we synthesized the SOS1 C-term domain and used it as bait to probe Arabidopsis thaliana cell extracts. Several 14-3-3 proteins, which function in plant salt tolerance, specifically bound to and interacted with the SOS1 C-term. When exposed to salt stress, Arabidopsis plants overexpressing the SOS1 C-term showed, compared to wild-type plants, improved salt tolerance, significantly reduced Na+ accumulation in leaves, reduced induction of the salt-responsive gene WRKY25, decreased soluble sugar, starch, and proline levels, less impaired inflorescence formation and increased biomass. It appears that overexpressing the SOS1 C-term leads to the sequestration of inhibitory 14-3-3 proteins, allowing SOS1 to be more readily activated and leading to increased salt tolerance. We propose that the SOS1 C-term binds to previously unknown proteins such as 14-3-3 isoforms, thereby regulating salt tolerance. This finding uncovers another regulatory layer of the plant salt tolerance program.
Previously in this journal we reported on fundamental transverse mode selection (TMS#0) of broad area semiconductor lasers (BALs) with an integrated twice-retracted 4f set-up and a film-waveguide lens as the Fourier-transform element. Here we choose and report on a simpler approach to BAL-TMS#0: the use of a stable confocal longitudinal BAL resonator of length L with a transverse constriction. The absolute value of the radius of curvature R of both mirror facets, which are convex in one dimension (1D), is R = L = 2f, with focal length f. The round-trip length 2L = 4f again makes up a Fourier-optical 4f set-up, and the constriction, which produces a resonator-internal beam waist, acts as a Fourier-optical low-pass spatial-frequency filter. Good TMS#0 is achieved as long as the constriction is tight enough, but filamentation is not completely suppressed.
1. Introduction
Broad area (semiconductor diode) lasers (BALs) are intended to emit high optical output powers (where "high" is relative and depends on the material system). Compared to conventional narrow-stripe lasers, the higher power is distributed over a larger transverse cross-section, thus avoiding catastrophic optical mirror damage (COMD). Typical BALs have emitter widths of around 100 μm.
The drawback is the distribution of the high output power over a large number of transverse modes (in cases without countermeasures), limiting the portion of the light power in the fundamental transverse mode (mode #0), which ought to be maximized for the sake of good light focusability.
Thus, techniques have to be used to support, prefer, or select the fundamental transverse mode (transverse mode selection, TMS#0) by suppressing higher-order modes already during build-up of the laser oscillation.
In many cases reported in the literature, either a BAL
facet, the
We compute three-dimensional displacement vector fields to estimate the deformation of microstructural data sets in mechanical tests. For this, we extend the well-known optical flow by Brox et al. to three dimensions, with special focus on the discretization of nonlinear terms. We evaluate our method first by synthetically deforming foams and comparing against this ground truth and second with data sets of samples that underwent real mechanical tests. Our results are compared to those from state-of-the-art algorithms in materials science and medical image registration. By a thorough evaluation, we show that our proposed method is able to resolve the displacement best among all chosen comparison methods.
In this contribution a phase field model for ductile fracture with linear isotropic hardening is presented. An energy functional consisting of an elastic energy, a plastic dissipation potential and a Griffith type fracture energy constitutes the model. The application of an unaltered radial return algorithm on element level is possible due to the choice of an appropriate coupling between the nodal degrees of freedom, namely the displacement and the crack/fracture fields. The degradation function models the mentioned coupling by reducing the stiffness of the material and the plastic contribution of the energy density in broken material. Furthermore, to solve the global system of differential equations comprising the balance of linear momentum and the quasi-static Ginzburg-Landau type evolution equation, the application of a monolithic iterative solution scheme becomes feasible. The compact model is used to perform 3D simulations of fracture in tension. The computed plastic zones are compared to the dog-bone model that is used to derive validity criteria for KIC measurements.
Sensing location information in indoor scenes requires high accuracy and is a challenging task, mainly because of multipath and NLoS (non-line-of-sight) propagation. GNSS signals cannot penetrate well into indoor environments, so satellite-based navigation and positioning systems cannot be used for indoor positioning. Other technologies have been suggested for indoor use, among them Wi-Fi (802.11) and 5G NR (New Radio). The primary aim of this study is to discuss the advantages and drawbacks of 5G and Wi-Fi positioning techniques for indoor localization.
This paper presents a new approach to parallel path planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on a best-first search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on polyhedral models of the robot and the obstacles. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel path planner on a workstation cluster with 9 PCs and tested it in several benchmark environments. With optimal discretisation, the new approach usually shows very good speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
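The cyclic mapping of configuration-space hypercubes onto processing units can be illustrated with a short sketch. This is not the authors' implementation; the cell counts and the number of units (9, matching the PC cluster in the abstract) are chosen for illustration only:

```python
from itertools import product

def cyclic_mapping(cells_per_axis, num_units):
    """Cyclically map 6D configuration-space hypercubes onto processing units.

    Returns a dict: unit id -> list of hypercube index tuples. The
    round-robin assignment spreads neighbouring cubes across units,
    which is the load-balancing idea described in the abstract.
    """
    assignment = {u: [] for u in range(num_units)}
    for i, cube in enumerate(product(range(cells_per_axis), repeat=6)):
        assignment[i % num_units].append(cube)
    return assignment

# 2 cells per axis -> 2**6 = 64 hypercubes distributed over 9 units
mapping = cyclic_mapping(cells_per_axis=2, num_units=9)
sizes = [len(v) for v in mapping.values()]
# unit loads differ by at most one hypercube
```

Because the assignment ignores spatial locality on purpose, a search that explores a contiguous region of configuration space still touches all units, which is what yields the good load distribution.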
A branch-and-cut approach and alternative formulations for the traveling salesman problem with drone
(2020)
In this paper, we are interested in studying the traveling salesman problem with drone (TSP-D). Given a set of customers and a truck that is equipped with a single drone, the TSP-D asks that all customers are served exactly once and minimal delivery time is achieved. We provide two compact mixed integer linear programming formulations that can be used to address instances with up to 10 customers within a few seconds. Notably, we introduce a third formulation for the TSP-D with an exponential number of constraints. The latter formulation is suitable to be solved by a branch-and-cut algorithm. Indeed, this approach can be used to find optimal solutions for several instances with up to 20 customers within 1 hour, thus challenging the current state-of-the-art in solving the TSP-D. A detailed numerical study provides an in-depth comparison of the effectiveness of the proposed formulations. Moreover, we reveal further details on the operational characteristics of a drone-assisted delivery system. By using three different sets of benchmark instances, consideration is given to various assumptions that affect, for example, technological drone parameters and the impact of distance metrics.
In grinding, the crystal grain size of the workpiece material is of a similar order of magnitude as the removal depth. This raises the question of whether an anisotropic material model, which considers the effect of crystal grain sizes and orientations, would predict the process forces better than an isotropic material model. Initially, a simple micro-indentation process is chosen to compare the two models. In this work, a crystal plasticity model and an isotropic Johnson-Cook plasticity model are employed to simulate micro-indentation of a twinning-induced plasticity (TWIP) steel. The results of the two models are compared using the force-displacement curves from the micro-indentation experiments. In the future, the study will be extended to describe the material removal process during a single-grit scratch test.
In cake filtration processes, where particles in a suspension are separated by forming a filter cake on the filter medium, the resistances of the filter cake and the filter medium cause a specific pressure drop, which in turn defines the energy effort of the process. The micromechanics of filter cake formation (interactions between particles, fluid, other particles, and the filter medium) must be considered to describe pore clogging, filter cake growth, and consolidation correctly. A precise 3D modeling approach to describe these effects is the resolved coupling of Computational Fluid Dynamics with the Discrete Element Method (CFD-DEM). This work focuses on the development and validation of a CFD-DEM model capable of accurately predicting filter cake formation during solid-liquid separation. The model uses the Lattice-Boltzmann Method (LBM) to directly solve the flow equations in the CFD part of the coupling and the DEM for the calculation of particle interactions. The developed model enables 4-way coupling to consider particle-fluid and particle-particle interactions. The results of this work are presented in two steps. First, the developed model is validated against an empirical model of the single-particle settling velocity in the transition regime of the fluid-particle flow. The model is also extended with additional particles to determine the particle-particle influence. Second, the separation of silica glass particles from water in a pressurized housing at constant pressure is investigated experimentally. The measured filter cake, filter medium, and interference resistances are in good agreement with the results of the 3D simulations, demonstrating the applicability of the resolved CFD-DEM coupling for analyzing and optimizing cake filtration processes.
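The single-particle settling velocity in the transition regime, used above for validation, can be sketched with a standard empirical drag correlation. This is not the authors' CFD-DEM model; the sketch assumes the Schiller-Naumann correlation and illustrative material parameters for silica glass in water:

```python
def settling_velocity(d, rho_p, rho_f, mu, g=9.81, tol=1e-10, max_iter=200):
    """Terminal settling velocity of a sphere in the transition regime.

    Uses the Schiller-Naumann drag correlation
        Cd = 24/Re * (1 + 0.15 * Re**0.687)   (valid for Re < ~1000)
    and fixed-point iteration on the force balance
        v = sqrt(4*g*d*(rho_p - rho_f) / (3*Cd*rho_f)).
    """
    # Stokes velocity as the initial guess
    v = g * d**2 * (rho_p - rho_f) / (18.0 * mu)
    for _ in range(max_iter):
        re = max(rho_f * v * d / mu, 1e-12)          # particle Reynolds number
        cd = 24.0 / re * (1.0 + 0.15 * re**0.687)    # drag coefficient
        v_new = (4.0 * g * d * (rho_p - rho_f) / (3.0 * cd * rho_f)) ** 0.5
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v

# 500 um silica glass sphere (approx. 2500 kg/m3) settling in water
v = settling_velocity(d=500e-6, rho_p=2500.0, rho_f=1000.0, mu=1.0e-3)
```

Comparing such a correlation against the simulated settling velocity is a common first validation step for resolved particle-fluid couplings.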
Nucleophilic substitution of [(η5-cyclopentadienyl)(η6-chlorobenzene)iron(II)] hexafluorophosphate with sodium imidazolate resulted in the formation of [(η5-cyclopentadienyl)(η6-phenyl)iron(II)]imidazole hexafluorophosphate. The corresponding dicationic imidazolium salt, which was obtained by treating this imidazole precursor with methyl iodide, underwent cyclometallation with bis[dichlorido(η5-1,2,3,4,5-pentamethylcyclopentadienyl)iridium(III)] in the presence of triethylamine. The resulting bimetallic iridium(III) complex is the first example of an NHC complex bearing a cationic and cyclometallated [(η5-cyclopentadienyl)(η6-phenyl)iron(II)]+ substituent. Like its iron(II) precursors, the bimetallic iridium(III) complex was fully characterized by means of spectroscopy, elemental analysis and single-crystal X-ray diffraction. In addition, it was investigated in a catalytic study, wherein it showed high activity in transfer hydrogenation compared to its neutral analogue having a simple phenyl instead of a cationic [(η5-cyclopentadienyl)(η6-phenyl)iron(II)]+ unit at the NHC ligand.
We study the sensor fault estimation and accommodation problems in a data-driven \(\mathcal{H}_\infty\) setting, leading to a data-driven sensor fault-tolerant control scheme. First, we formulate the fault estimation problem as a finite-horizon minimax \(\mathcal{H}_\infty\)-optimization problem in a data-driven setup, whose solution yields the fault estimate. The estimated fault is then used for output compensation. This compensated output and the experimental input are used to achieve certain control objectives in a data-driven \(\mathcal{H}_\infty\) setting. Next, the data-driven \(\mathcal{H}_\infty\) fault estimation and control problems are solved using a subspace predictor-based approach. Finally, the proposed algorithm is applied to the steering subsystem of a remotely operated underwater vehicle.
The fatigue life of metals manufactured via laser-based powder bed fusion (L-PBF) highly
depends on process-induced defects. In this context, not only the size and geometry of the defect, but
also the properties and the microstructure of the surrounding material volume must be considered.
In the presented work, the microstructural changes in the vicinity of a crack-initiating defect in a
fatigue specimen produced via L-PBF and made of AISI 316L were analyzed in detail. Xenon plasma
focused ion beam (Xe-FIB) technique, scanning electron microscopy (SEM), and electron backscatter
diffraction (EBSD) were used to investigate the phase distribution, local misorientations, and grain
structure, including the crystallographic orientations. These analyses revealed a fine grain structure
in the vicinity of the defect, which is arranged in accordance with the melt pool geometry. Besides
pronounced cyclic plastic deformation, a deformation-induced transformation of the initial austenitic
phase into α’-martensite was observed. The plastic deformation as well as the phase transformation
were more pronounced near the border between the defect and the surrounding material volume.
However, the extent of the plastic deformation and the deformation-induced phase transformation
varies locally in this border region. Although a beneficial effect of certain grain orientations on the
phase transformation and plastic deformability was observed, the microstructural changes found
cannot solely be explained by the respective crystallographic orientation. These changes are assumed
to further depend on the inhomogeneous distribution of the multiaxial stresses beneath the defect as
well as the grain morphology.
A detailed study of a cylinder activation concept by efficiency loss analysis and 1D simulation
(2020)
Cylinder deactivation is a well-known measure for reducing fuel consumption, especially when applied to gasoline engines. Mostly, such systems are designed to deactivate half of the number of cylinders of the engine. In this study, a new concept is investigated for deactivating only one out of four cylinders of a commercial vehicle diesel engine (“3/4-cylinder concept”). For this purpose, cylinders 2–4 of the engine are operated in “real” 3-cylinder mode, thus with the firing order and ignition distance of a regular 3-cylinder engine, while the first cylinder is only activated near full load, running in parallel to the fourth cylinder. This concept was integrated into a test engine and evaluated on an engine test bench. As the investigations revealed significant improvements for the low-to-medium load region as well as disadvantages for high load, an extensive numerical analysis was carried out based on the experimental results. This included both 1D simulation runs and a detailed cylinder-specific efficiency loss analysis. Based on the results of this analysis, further steps for optimizing the concept were derived and studied by numerical calculations. As a result, it can be concluded that the 3/4-cylinder concept may provide significant improvements of real-world fuel economy when integrated as a drive unit into a tractor.
In this paper we present an interpreter that supports the validation of conceptual models in early stages of development. We compare hypermedia and expert system approaches to knowledge processing and show how an integrated approach eases the creation of expert systems. Our knowledge engineering tool CoMo-Kit allows a "smooth" transition from initial protocols via a semi-formal specification based on a typed hypertext up to a running expert system. The interpreter uses the intermediate hypertext representation for the interactive solution of problems. Thereby, tasks are distributed to agents via a local area network. This means that the specification of an expert system can be used directly to solve real-world problems. If formal (operational) specifications exist for subtasks, these are delegated to computers. Our approach therefore makes it possible to specify and validate distributed, cooperative systems in which some subtasks are solved by humans and others are solved automatically by computers.
A practical distributed planning and control system for industrial robots is presented. The hierarchical concept consists of three independent levels. Each level is modularly implemented and supplies an application interface (API) to the next higher level. At the top level, we propose an automatic motion planner. The motion planner is based on a best-first search algorithm and needs no essential off-line computations. At the middle level, we propose a PC-based robot control architecture, which can easily be adapted to any industrial kinematics and application. Based on a client/server principle, the control unit establishes an open user interface for including application-specific programs. At the bottom level, we propose a flexible and modular concept for the integration of the distributed motion control units based on the CAN bus. The concept allows an on-line adaptation of the control parameters according to the robot's configuration. This implies high accuracy of the path execution and improves the overall system performance.
A stereoselective synthesis of isoindolo[2,1-a]quinolin-11(5H)-ones containing three contiguous stereogenic centers is described. This Lewis-acid mediated reaction of enamides with N-aryl-acylimines affords the desired fused heterocyclic isoindolinones in high yields and diastereoselectivities. Scope and limitations of this method are discussed. The stereochemical outcome of this transformation indicates a stepwise reaction pathway.
We propose a model for glioma patterns in a microlocal tumor environment under
the influence of acidity, angiogenesis, and tissue anisotropy. The bottom-up model deduction
eventually leads to a system of reaction–diffusion–taxis equations for glioma and endothelial cell
population densities, the former of which includes flux limitation in both the self-diffusion and taxis
terms. The model extends a recently introduced (Kumar, Li and Surulescu, 2020) description of
glioma pseudopalisade formation with the aim of studying the effect of hypoxia-induced tumor
vascularization on the establishment and maintenance of these histological patterns which are typical
for high-grade brain cancer. Numerical simulations of the population level dynamics are performed
to investigate several model scenarios containing this and further effects.
We present an entropy concept measuring quantum localization in dynamical systems based on time-averaged probability densities. The suggested entropy concept is a generalization of a recently introduced [PRL 75, 326 (1995)] phase-space entropy to any representation chosen according to the system and the physical question under consideration. In this paper we inspect the main characteristics of the entropy and its relation to other measures of localization. In particular, the classical correspondence is discussed, and the statistical properties are evaluated within the framework of random vector theory. In this way we show that the suggested entropy is a suitable method to detect quantum localization phenomena in dynamical systems.
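The basic idea of an entropy built on time-averaged probability densities can be sketched in a few lines. This is a generic Shannon-entropy illustration, not the specific generalized entropy of the paper; the basis size and test distributions are made up:

```python
import numpy as np

def localization_entropy(prob_t):
    """Entropy of the time-averaged probability density.

    prob_t: array of shape (T, N) -- probability distribution over N basis
    states at T time steps. The time average is taken first; then the
    Shannon entropy S = -sum(p_bar * ln p_bar) is computed. Small S
    indicates localization, S near ln(N) indicates an extended state.
    """
    p_bar = prob_t.mean(axis=0)          # time-averaged density
    p_bar = p_bar / p_bar.sum()          # normalize defensively
    nz = p_bar[p_bar > 0]                # avoid log(0)
    return float(-(nz * np.log(nz)).sum())

# fully localized vs. uniformly spread over 8 basis states
loc = localization_entropy(np.tile([1, 0, 0, 0, 0, 0, 0, 0], (5, 1)).astype(float))
ext = localization_entropy(np.full((5, 8), 1 / 8))
# loc -> 0, ext -> ln(8)
```

The representation dependence discussed in the abstract enters through the choice of the basis in which `prob_t` is expressed.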
Various regulatory initiatives (such as the pan-European PRIIP-regulation or the German chance-risk classification for state subsidized pension products) have been introduced that require product providers to assess and disclose the risk-return profile of their issued products by means of a key information document. We will in this context outline a concept for a (forward-looking) simulation-based approach and highlight its application and advantages. For reasons of comparison, we further illustrate the performance of approximation methods based on a projection of observed returns into the future such as the Cornish–Fisher expansion or bootstrap methods.
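The Cornish-Fisher expansion mentioned above adjusts a normal quantile for skewness and excess kurtosis. A minimal sketch (the return moments are illustrative, not from any regulated product):

```python
from statistics import NormalDist

def cornish_fisher_quantile(p, mean, std, skew, ex_kurt):
    """Approximate the p-quantile of a return distribution via the
    Cornish-Fisher expansion (skew = skewness, ex_kurt = excess kurtosis)."""
    z = NormalDist().inv_cdf(p)
    z_cf = (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * ex_kurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)
    return mean + std * z_cf

# with zero skew and zero excess kurtosis this reduces to the normal quantile
q = cornish_fisher_quantile(0.05, mean=0.04, std=0.2, skew=0.0, ex_kurt=0.0)
```

For risk classification, such a 5 %-quantile of projected returns is a typical downside measure that the simulation-based approach would instead obtain from the full forward-looking return distribution.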
In many robotic applications, the teaching of points in space is necessary to register the robot coordinate system with the one of the application. Robot-human interaction is awkward and dangerous for the human because of the possibly large size and power of the robot, so robot movements must be predictable and natural. We present a novel hybrid control algorithm which provides the needed precision in small scale movements while allowing for fast and intuitive large scale translations.
A highly water-dispersible heterogeneous Brønsted acid surfactant was prepared by synthesis of a bi-functional anisotropic Janus-type material. The catalyst comprises ionic functionalities on one side and propyl-SO3H groups on the other. The novel material was investigated as a green substitute for a homogeneous acidic phase transfer catalyst (PTC). The activity of the catalyst was investigated for the aqueous-phase oxidation of cyclohexene to adipic acid with 30 % hydrogen peroxide, even on a decagram scale. It can also be used for the synthesis of some other carboxylic acid derivatives as well as diethyl phthalate.
The handling of oxygen-sensitive samples and the growth of obligate anaerobic organisms require the stringent exclusion of oxygen, which is omnipresent in our normal atmospheric environment. Anaerobic workstations (a.k.a. glove boxes) enable the handling of oxygen-sensitive samples during complex procedures, or the long-term incubation of anaerobic organisms. Depending on the application requirements, commercial workstations can cost up to €60,000. Here we present complete build instructions for a highly adaptive, Arduino-based anaerobic workstation for microbial cultivation and sample handling, with features normally found only in high-cost commercial solutions. The build can automatically regulate humidity and H2 levels (H2 serving as the oxygen reductant), log the environmental data, and purge the airlock. It is built as compactly as possible so that it fits into regular growth chambers for full environmental control. In our experiments, oxygen levels during the continuous growth of oxygen-producing cyanobacteria stayed under 0.03 % for 21 days without user intervention. The modular Arduino controller allows for the easy incorporation of additional regulation parameters, such as CO2 concentration or air pressure. This paper provides researchers with a low-cost, entry-level workstation for anaerobic sample handling with the flexibility to match their specific experimental needs.
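The automatic regulation of humidity or H2 described above is typically a simple on/off (hysteresis) loop on the microcontroller. The sketch below illustrates that control logic in Python rather than Arduino C; the setpoint, dead band, and readings are made up and not taken from the build instructions:

```python
def regulate(value, setpoint, band, actuator_on):
    """Hysteresis (on/off) regulation as used for humidity or H2 control:
    switch the actuator on below setpoint - band, off above setpoint + band,
    and keep the current state inside the dead band to avoid rapid toggling."""
    if value < setpoint - band:
        return True
    if value > setpoint + band:
        return False
    return actuator_on  # inside the dead band: keep current state

# simulate a sequence of sensor readings through one control loop
state = False
readings = [44.0, 44.5, 50.1, 55.2, 49.0, 43.9]
trace = []
for r in readings:
    state = regulate(r, setpoint=50.0, band=5.0, actuator_on=state)
    trace.append(state)
# actuator stays on through the dead band and only switches at the band edges
```

The dead band is what keeps a relay-driven humidifier or H2 valve from chattering around the setpoint.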
The performance of napkins is nowadays improved substantially by embedding granules of a superabsorbent into the cellulose matrix. In this paper a continuous model for the liquid transport in such an Ultra Napkin is proposed. Its main feature is a nonlinear diffusion equation strongly coupled with an ODE describing a reversible absorption process. An efficient numerical method based on symmetrical time splitting and a finite difference scheme of ADI-predictor-corrector type has been developed to solve these equations in a three-dimensional setting. Numerical results are presented that can be used to optimize the granule distribution.
Cloudy inhomogeneities in artificial fabrics are graded by a fast method based on a Laplacian pyramid decomposition of the fabric image. This band-pass representation takes into account the scale character of the cloudiness. A quality measure of the entire cloudiness is obtained as a weighted mean over the variances of all scales.
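The pipeline above (Laplacian pyramid, per-scale variances, weighted mean) can be sketched with NumPy. This is a generic illustration, not the paper's implementation; the binomial blur kernel, number of levels, and uniform weights are assumptions:

```python
import numpy as np

def blur(img):
    """Separable binomial [1, 4, 6, 4, 1]/16 low-pass filter with edge padding."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    p = np.pad(img, ((0, 0), (2, 2)), mode="edge")
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, p)
    p = np.pad(img, ((2, 2), (0, 0)), mode="edge")
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, p)

def cloudiness_index(img, levels=3, weights=None):
    """Grade cloudiness as a weighted mean of the band-pass variances of a
    Laplacian pyramid (the weights are a free choice of the grading scheme)."""
    weights = weights or [1.0] * levels
    variances = []
    cur = img.astype(float)
    for _ in range(levels):
        low = blur(cur)
        variances.append(np.var(cur - low))   # variance of the band-pass layer
        cur = low[::2, ::2]                   # downsample for the next scale
    return sum(w * v for w, v in zip(weights, variances)) / sum(weights)

rng = np.random.default_rng(0)
flat = cloudiness_index(np.full((64, 64), 0.5))   # perfectly uniform fabric
noisy = cloudiness_index(rng.random((64, 64)))    # strongly inhomogeneous image
# a uniform image grades to 0; an inhomogeneous one grades higher
```

Weighting the scales differently lets the grading emphasize cloudiness at the spatial frequencies most relevant for the fabric quality.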
We have investigated urine samples after coffee consumption using targeted and untargeted approaches to identify furan and 2-methylfuran metabolites by UPLC-qToF. The aim was to establish a fast, robust, and time-saving method involving ultra-performance liquid chromatography-quantitative time-of-flight tandem mass spectrometry (UPLC-qToF-MS/MS). The developed method detected previously reported metabolites, such as lysine-cis-2-butene-1,4-dial (Lys-BDA) adducts, and others that had not been previously identified, or had only been detected in animal or in vitro studies. In sum, the UPLC-qToF approach provides additional information that may be valuable in future human or animal intervention studies.
Solar radiation data are essential for the development of many solar energy applications, ranging from thermal collectors to building simulation tools, but their availability is limited, especially for the diffuse radiation component. There are several studies aimed at predicting this value, but very few cover the generalizability of such models across varying climates. Our study investigates how well these models generalize and also shows how to enhance their generalizability across different climates. Since machine learning approaches are known to generalize well, we apply them to understand how well they perform on climates other than those they were trained on. We therefore trained them on data sets from the U.S. and tested them on several European climates. The machine learning model developed for U.S. climates not only showed a low mean absolute error (MAE) of 23 W/m2, but also generalized very well to European climates, with MAEs in the range of 20 to 27 W/m2. Further investigation into the factors influencing the generalizability revealed that careful selection of the training data can improve the results significantly.
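The MAE figures quoted above are computed in the units of the target. A minimal sketch with made-up diffuse-radiation values (not from the study's data sets):

```python
def mean_absolute_error(y_true, y_pred):
    """MAE in the units of the target (here W/m2): the metric used to
    compare diffuse-radiation models across climates."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# three hypothetical diffuse-radiation observations vs. model predictions
mae = mean_absolute_error([100.0, 250.0, 400.0], [90.0, 265.0, 380.0])
# (10 + 15 + 20) / 3 = 15.0 W/m2
```

Because MAE keeps the physical units, an MAE of 23 W/m2 can be judged directly against typical diffuse-radiation magnitudes of several hundred W/m2.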
A novel shadowgraphic inline probe to measure crystal size distributions (CSD),
based on acquired greyscale images, is evaluated in terms of elevated temperatures and fragile
crystals, and compared to well-established, alternative online and offline measurement techniques,
i.e., sieving analysis and online microscopy. Additionally, the operation limits, with respect to
temperature, supersaturation, suspension, and optical density, are investigated. Two different
substance systems, potassium dihydrogen phosphate (prisms) and thiamine hydrochloride (needles),
are crystallized for this purpose at 25 L scale. Crystal phases of the well-known KH2PO4/H2O system
are measured continuously by the inline probe and in a bypass by the online microscope during
cooling crystallizations. Both measurement techniques show similar results with respect to the crystal
size distribution, except for higher temperatures, where the bypass variant tends to fail due to
blockage. Thiamine hydrochloride, a substance forming long and fragile needles in aqueous solutions,
is solidified with an anti-solvent crystallization with ethanol. The novel inline probe could identify
a new field of application for image-based crystal size distribution measurements, with respect
to difficult particle shapes (needles) and elevated temperatures, which cannot be evaluated with
common techniques.
We present a parallel control architecture for industrial robot cells. It is based on closed functional components arranged in a flat communication hierarchy. The components may be executed by different processing elements, and each component itself may run on multiple processing elements. The system is driven by the instructions of a central cell control component. We set up necessary requirements for industrial robot cells and possible parallelization levels. These are met by the suggested robot control architecture. As an example we present a robot work cell and a component for motion planning, which fits well in this concept.
Phase field modeling of fracture has been in the focus of research for over a decade now. The field has gained attention probably due to its beneficial features for numerical simulations, even for complex crack problems. The framework has so far been applied to quasi-static and dynamic fracture, for brittle as well as ductile materials, with isotropic and also anisotropic fracture resistance. However, fracture due to cyclic mechanical fatigue, which is a very important phenomenon regarding a safe, durable and also economical design of structures, has only recently been considered in terms of phase field modeling. While in the first phase field models the material's fracture toughness is degraded to simulate fatigue crack growth, we present an alternative method in this work, where the driving force for the fatigue mechanism increases due to cyclic loading. This new contribution is governed by the evolution of fatigue damage, which can be approximated by a linear law, namely Miner's rule, for damage accumulation. The proposed model is able to predict nucleation as well as growth of a fatigue crack. Furthermore, by assessing crack growth rates obtained from several numerical simulations with a conventional approach for the description of fatigue crack growth, it is shown that the presented model predicts realistic behavior.
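Miner's rule, the linear damage accumulation law named above, can be stated in a few lines. The load blocks and fatigue lives below are illustrative, not from the paper's simulations:

```python
def miner_damage(cycles_applied, cycles_to_failure):
    """Linear damage accumulation (Miner's rule): D = sum(n_i / N_i).
    n_i cycles are applied at load level i, whose S-N fatigue life is N_i.
    Failure is predicted when the accumulated damage D reaches 1."""
    return sum(n / N for n, N in zip(cycles_applied, cycles_to_failure))

# three hypothetical load blocks: n_i cycles at levels with fatigue lives N_i
D = miner_damage([1e4, 5e4, 2e5], [1e5, 5e5, 1e6])
# D = 0.1 + 0.1 + 0.2 = 0.4 -> 40 % of the fatigue life consumed
```

In the phase field model, an accumulated damage variable of this kind drives the additional crack driving force under cyclic loading.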
In this note, we define one more way of quantizing classical systems. The quantization we consider is an analogue of the classical Jordan–Schwinger map, which has been known and used for a long time by physicists. The difference, compared to the Jordan–Schwinger map, is that we use generators of the Cuntz algebra O∞ (i.e. a countable family of mutually orthogonal partial isometries of a separable Hilbert space) as "building blocks" instead of creation–annihilation operators. The resulting scheme satisfies properties similar to Van Hove prequantization, i.e. exact conservation of Lie brackets and linearity.
The cultivation of cyanobacteria with the addition of an organic carbon source (i.e., heterotrophic or mixotrophic cultivation) is a promising technique to overcome their slow growth rate. However, most cyanobacteria cultures are contaminated with heterotrophic bacteria that cannot be separated. While their contribution to the biomass is rather insignificant in phototrophic cultivation, problems may arise in heterotrophic and mixotrophic mode: heterotrophic bacteria can utilize carbohydrates quickly, thus preventing any benefit for the cyanobacteria. In order to estimate the advantage of supplementing a carbon source, it is essential to quantify the proportions of cyanobacteria and heterotrophic bacteria in the resulting biomass. In this work, the use of quantitative polymerase chain reaction (qPCR) is proposed. To prepare the samples, a DNA extraction method for cyanobacteria was improved to provide reproducible and robust results for the group of terrestrial cyanobacteria. Two primer pairs were used, which bind either to the 16S rRNA gene of all cyanobacteria or to that of all bacteria including cyanobacteria, allowing the proportion of cyanobacteria in the biomass to be determined. The method was established with the two terrestrial cyanobacteria Trichocoleus sociatus SAG 26.92 and Nostoc muscorum SAG B-1453-12a. As proof of concept, a heterotrophic cultivation of T. sociatus with glucose was performed. After 2 days of cultivation, the biomass proportion of the cyanobacterium had dropped to 90%; afterwards, the proportion increased again.
This paper studies the dynamics of transitions between the levels of a Wannier-Stark ladder induced by resonant periodic driving. The problem is analyzed in terms of resonance quasienergy states, which take into account the metastable character of the Wannier-Stark states. It is shown that the periodic driving turns a localized Wannier-Stark state into an extended Bloch-like state whose spatial extent grows in time as ~ t^(1/2). Such a state may find applications in atomic optics because it generates a coherent pulsed atomic beam.
Daylight is important for the well-being of humans. Therefore, many office buildings use large windows and glass facades to let more daylight into office spaces. However, this increases the chance of glare, which causes visual discomfort. Shading systems can prevent glare but are not effectively adapted to changing sky conditions and sun positions, and thus waste valuable daylight; many of them are also aesthetically unappealing. Electrochromic (EC) glass may be a better alternative, since its light transmission can be altered by applying a voltage. EC glass facilitates zoning and supports controlling each zone separately, which allows admitting the right amount of daylight at any time of the day. However, an effective strategy is still required to control EC glass efficiently. Reinforcement learning (RL) is a promising approach that learns from rewards and penalties and uses this feedback to adapt to user input. We trained a deep Q-learning (DQN) agent on weather and visual comfort data, where the agent tries to adapt to the occupant's feedback while observing the sun position and radiation at given intervals. The trained DQN agent avoids bright daylight and glare in 97% of the cases and increases the amount of useful daylight by up to 90%, thus significantly reducing the need for artificial lighting.
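The reward-driven tinting idea can be illustrated with a much-simplified tabular Q-learning toy. Everything below — the state and action discretization, the reward scale, and the one-step (bandit-style) setting — is hypothetical and stands in for the paper's DQN on real weather and comfort data:

```python
import random

TINTS = [0, 1, 2, 3]      # hypothetical tint levels: 0 = clear ... 3 = dark
RADIATION = [0, 1, 2, 3]  # discretized solar radiation: 0 = overcast ... 3 = direct sun

def reward(radiation, tint):
    """Toy trade-off: reward transmitted daylight, heavily penalize glare."""
    daylight = max(0, radiation - tint)            # crude transmitted-light proxy
    glare = 1 if radiation - tint >= 3 else 0      # too much light reaches the desk
    return daylight - 10 * glare

def train(episodes=5000, alpha=0.1, eps=0.2, seed=1):
    """Tabular epsilon-greedy Q-learning on the one-step toy task."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in RADIATION for a in TINTS}
    for _ in range(episodes):
        s = rng.choice(RADIATION)
        if rng.random() < eps:
            a = rng.choice(TINTS)                          # explore
        else:
            a = max(TINTS, key=lambda t: Q[(s, t)])        # exploit
        Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])    # move toward observed reward
    return Q

def policy(Q, s):
    """Greedy tint choice for a given radiation level."""
    return max(TINTS, key=lambda t: Q[(s, t)])

Q = train()
# direct sun -> some tinting to avoid glare; overcast -> keep the glass clear
```

A DQN replaces the Q-table with a neural network so that continuous weather observations and occupant feedback can be handled, but the reward-shaping logic is of the same kind.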
The semantics of everyday language and the semantics of its naive translation into classical first-order language considerably differ. An important discrepancy addressed in this paper concerns the implicit assumption of what exists. For instance, in the case of universal quantification, natural language uses restrictions and presupposes that these restrictions are non-empty, while in classical logic it is only assumed that the whole universe is non-empty. On the other hand, all constants mentioned in classical logic are presupposed to exist, while it poses no problems to speak about hypothetical objects in everyday language. These problems have been discussed in philosophical logic, and some adequate many-valued logics were developed to model these phenomena much better than classical first-order logic can. An adequate calculus, however, has not yet been given. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. Unfortunately, restricted quantifications are not truth-functional, hence they do not fit the framework directly. We solve this problem by applying recent methods from sorted logics.
One of the many capabilities needed to support the activities of autonomous systems is motion planning. It enables robots to move safely in their environment and to accomplish given tasks. Unfortunately, the control loop comprising sensing, planning, and acting has not yet been closed for robots in dynamic environments. One reason is the long execution time of the motion planning component. A solution to this problem is offered by massively parallel computation. An important task is therefore to parallelize existing motion planning algorithms; in several cases, completely new algorithms have to be designed to make parallelization feasible. In this survey, we review recent approaches to motion planning using parallel computation.
In recent years, more and more publications and materials for studying and teaching, e.g., for Web-based teaching (WBT), appear online, and digital libraries are built to manage such publications and online materials. The most important concerns are therefore the durable, sustained storage and the management of content together with its metadata, which exists in heterogeneous styles and formats. In this paper, we present specific techniques and their use in supporting metadata-based catalog services. Such semistructured metadata (represented as XML fragments) belonging to online learning resources requires efficient XML-based query support, scalable result set processing, and comprehensive facilities for personalization. We discuss the associated problems, derive the concepts of a suitable architecture, and finally outline its realization by means of our prototype system, which is based on the J2EE component model.
In the scalar case one knows that a complex normalized function of bounded variation \(\phi\) on \([0,1]\) defines a unique complex regular Borel measure \(\mu\) on \([0,1]\). In this note we show that this is no longer true in general in the vector-valued case, even if \(\phi\) is assumed to be continuous. Moreover, the functions \(\phi\) which determine a countably additive vector measure \(\mu\) are characterized.
Personalized dynamic pricing (PDP) involves dynamically setting individual consumer prices for the same product or service according to consumer-identifying information. Despite its profitability, this pricing practice provokes strongly negative fairness perceptions, which explains why managers are reluctant to implement it. This research provides insights into the effect of two PDP dimensions (price individualization level and segmentation base) on fairness perceptions and into the moderating role of privacy concerns. The results of two experimental studies indicate that consumers perceive individual prices as less fair than segment prices. They also evaluate location-based pricing as less fair than purchase-history-based pricing. Consumer privacy concerns moderate these effects.
A Strained Partnership: Crisis and Resilience in Transatlantic Relations 20 Years after 9/11
(2021)
From a transatlantic perspective, 2021 brought several turning points at once. In January, US President Donald Trump, whose disruptive policies had provoked various conflicts with Europe, was succeeded by Joseph R. Biden. In August, the longest mission in NATO's history ended in Afghanistan with a chaotic withdrawal and the Taliban's seizure of power, almost 20 years after the war began. Finally, the Bundestag elections in September marked the end of the tenure of Angela Merkel, who as Federal Chancellor dealt with four US presidents over 16 years in office. These turning points provide ample occasion to take stock of transatlantic relations since 9/11.
Background: The positive effect of carbohydrates from commercial beverages on soccer-specific exercise has been clearly demonstrated. However, no study is available that uses a home-mixed beverage in a test requiring technical skills. Methods: Nine subjects participated voluntarily in this double-blind, randomized, placebo-controlled crossover study. On three testing days, the subjects performed six Hoff tests with a 3-min active break as a preload and then the Yo-Yo Intermittent Running Test Level 1 (Yo-Yo IR1) until exhaustion. On test days 2 and 3, the subjects received either a drink containing 69 g of carbohydrates (syrup–water mixture) or a carbohydrate-free drink (aromatic water). Beverages were given in several doses of 250 mL each: 30 min before and immediately before the exercise and after 18 and 39 min of exercise. The primary target parameters were the running performance in the Hoff test and Yo-Yo IR1, body mass, and heart rate. Statistical differences between the variables of both conditions were analyzed using paired-samples t-tests. Results: The maximum heart rate in Yo-Yo IR1 showed significant differences (syrup: 191.1 ± 6.2 bpm; placebo: 188.0 ± 6.89 bpm; t(6) = −2.556; p = 0.043; dz = 0.97). The running performance in Yo-Yo IR1 under the syrup condition increased significantly, by 93.33 ± 84.85 m (0–240 m) on average (p = 0.011). Conclusions: The intake of a syrup–water mixture with a total of 69 g of carbohydrates increases high-intensity running performance after soccer-specific loads. Therefore, the intake of carbohydrate solutions is recommended for intermittent loads and should be increasingly considered by coaches and players.
When considering complex systems, identifying the most important actors is often of relevance. When the system is modeled as a network, centrality measures are used which assign each node a value according to its position in the network. It is often disregarded that these measures implicitly assume a network process flowing through the network, and that they also make assumptions about how this process flows. A node is then central with respect to this network process (Borgatti in Soc Netw 27(1):55–71, 2005, https://doi.org/10.1016/j.socnet.2004.11.008). It has been shown that real-world processes often do not fulfill these assumptions (Bockholt and Zweig, in Complex networks and their applications VIII, Springer, Cham, 2019, https://doi.org/10.1007/978-3-030-36683-4_7). In this work, we systematically investigate the impact of the measures' assumptions by using four datasets of real-world processes. To do so, we introduce several variants of the betweenness and closeness centrality which, for each assumption, use either the assumed process model or the behavior of the real-world process. The results are twofold: on the one hand, for all measure variants and almost all datasets, we find that the standard centrality measures are quite robust against deviations in their process model. On the other hand, we observe a large variation in the ranking positions of single nodes, even among the nodes ranked high by the standard measures. This has implications for the interpretability of the results of these centrality measures: since a mismatch between the behavior of the real network process and the assumed process model affects even the highly ranked nodes, the resulting rankings need to be interpreted with care.
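For reference, the standard shortest-path closeness, whose process model the variants discussed above modify, can be sketched with a plain BFS on an unweighted graph. This is the textbook measure, not the paper's variant code, and the example graph is illustrative:

```python
from collections import deque

def closeness(adj, v):
    """Shortest-path closeness of node v in an unweighted graph:
    (number of other reachable nodes) / (sum of geodesic distances).
    This encodes the standard process model -- transfer along shortest
    paths -- which measure variants can replace, piece by piece, with
    the observed behavior of a real-world process."""
    dist = {v: 0}
    queue = deque([v])
    while queue:                       # breadth-first search from v
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

# on the path graph a - b - c, the middle node is the most central
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
```

Swapping the geodesic distances for, say, empirically observed trajectory lengths yields one of the process-aware variants in the spirit of the study.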
Even though it is not very often admitted, partial functions do play a significant role in many practical applications of deduction systems. Kleene already gave a semantic account of partial functions using a three-valued logic decades ago, but there has not been a satisfactory mechanization. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. However, strong Kleene logic, where quantification is restricted and therefore not truth-functional, does not fit the framework directly. We solve this problem by applying recent methods from sorted logics. This paper presents a tableau calculus that combines the proper treatment of partial functions with the efficiency of sorted calculi.
This survey provides the reader with an overview of numerous results on p-permutation modules and the closely related classes of endo-trivial, endo-permutation and endo-p-permutation modules. These classes of modules play an important role in the representation theory of finite groups. For example, they are important building blocks used to understand and parametrise several kinds of categorical equivalences between blocks of finite group algebras. For this reason, there has been, since the late 1990s, much interest in classifying such modules. The aim of this manuscript is to review classical results as well as all the major recent advances in the area. The first part of this survey serves as an introduction to the topic for non-experts in modular representation theory of finite groups, outlining proof ideas of the most important results at the foundations of the theory. Simultaneously, the connections between the aforementioned classes of modules are emphasised. In this respect, results which are dispersed in the literature are brought together, and emphasis is put on common properties and the role played by the p-permutation modules throughout the theory. Finally, in the last part of the manuscript, lifting results from positive characteristic to characteristic zero are collected and their proofs sketched.
A novel method is presented which allows a fast computation of complex-energy resonance states in Stark systems, i.e., systems in a homogeneous field. The technique is based on the truncation of a shift operator in momentum space. Numerical results for space-periodic and non-periodic systems illustrate the extreme simplicity of the method.
Anisotropy of tracer-coupled networks is a hallmark of many brain regions. In the past, the topography of these networks was analyzed using various approaches focusing on different aspects, e.g., the position, tracer signal, or direction of coupled cells. Here, we developed a vector-based method to analyze the extent and preferential direction of tracer spreading. As a model region, we chose the lateral superior olive, a nucleus that exhibits a specialized network topography. In acute slices, sulforhodamine 101-positive astrocytes were patch-clamped and dialyzed with the gap junction (GJ)-permeable tracer neurobiotin, which was subsequently labeled with avidin Alexa Fluor 488. A predetermined threshold was used to differentiate between tracer-coupled and tracer-uncoupled cells. The tracer extent was calculated from the vector means of tracer-coupled cells in four 90° sectors. We then computed the preferential direction using a rotating coordinate system and post hoc fitting of the results with a sinusoidal function. The new method allows for an objective analysis of tracer spreading that provides information about the shape and orientation of GJ networks. We expect this approach to become a vital tool for the analysis of coupling anisotropy in many brain regions.
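The vector-mean idea can be sketched as follows. This minimal version computes a single resultant over all coupled cells and omits the sector-wise means, the rotating coordinate system, and the sinusoidal fit of the full method; the coordinates are invented for illustration:

```python
import math

def spreading_vector(points):
    """Resultant of tracer spreading from coupled-cell coordinates.

    points -- (x, y) positions of tracer-coupled cells, with the
              patched (dialyzed) astrocyte at the origin.
    Returns (angle_deg, magnitude): direction and length of the vector
    mean, a crude proxy for the preferential spreading direction.
    """
    mx = sum(p[0] for p in points) / len(points)   # mean x-displacement
    my = sum(p[1] for p in points) / len(points)   # mean y-displacement
    return math.degrees(math.atan2(my, mx)) % 360.0, math.hypot(mx, my)

# coupled cells found mostly "above" the patched cell
# -> preferential direction near 90 degrees
angle, mag = spreading_vector([(0.0, 1.0), (0.5, 2.0), (-0.5, 2.0)])
```

A nearly isotropic network yields a short resultant regardless of direction, which is why the magnitude is reported alongside the angle.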
In this paper we construct a numerical solver for the Saint-Venant equations. Special attention is given to the balancing of the source terms, including the bottom slope and variable cross-sectional profiles. To this end, a special discretization of the pressure law is used in order to transfer analytical properties to the numerical method. Based on this approximation, a well-balanced solver is developed that ensures the C-property and depth positivity. The performance of the method is studied in several test cases focusing on the accurate capturing of steady states.
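The C-property (a lake at rest stays at rest up to round-off) can be illustrated with a minimal first-order scheme using hydrostatic reconstruction. This is a generic textbook construction for the 1D shallow water equations with a rectangular cross-section, in the spirit of Audusse et al., not the paper's solver; the flux, grid, and boundary choices below are illustrative:

```python
import math

def rusanov(hL, huL, hR, huR, g):
    """Rusanov (local Lax-Friedrichs) numerical flux for 1D shallow water."""
    def f(h, hu):
        u = hu / h if h > 1e-12 else 0.0
        return hu, hu * u + 0.5 * g * h * h
    fL, fR = f(hL, huL), f(hR, huR)
    uL = huL / hL if hL > 1e-12 else 0.0
    uR = huR / hR if hR > 1e-12 else 0.0
    a = max(abs(uL) + math.sqrt(g * hL), abs(uR) + math.sqrt(g * hR))
    return (0.5 * (fL[0] + fR[0]) - 0.5 * a * (hR - hL),
            0.5 * (fL[1] + fR[1]) - 0.5 * a * (huR - huL))

def step(h, hu, b, dx, dt, g=9.81):
    """One forward-Euler step of a well-balanced finite-volume scheme
    with hydrostatic reconstruction and zero-gradient ghost cells."""
    n = len(h)
    hp = [h[0]] + h + [h[-1]]
    hup = [hu[0]] + hu + [hu[-1]]
    bp = [b[0]] + b + [b[-1]]
    dh, dhu = [0.0] * (n + 2), [0.0] * (n + 2)
    for i in range(n + 1):                       # interfaces of the padded grid
        bmax = max(bp[i], bp[i + 1])
        hLs = max(0.0, hp[i] + bp[i] - bmax)     # reconstructed depths keep
        hRs = max(0.0, hp[i + 1] + bp[i + 1] - bmax)  # depth positivity
        uL = hup[i] / hp[i] if hp[i] > 1e-12 else 0.0
        uR = hup[i + 1] / hp[i + 1] if hp[i + 1] > 1e-12 else 0.0
        Fm, Fq = rusanov(hLs, hLs * uL, hRs, hRs * uR, g)
        # pressure corrections balance the bottom-slope source (C-property)
        dh[i] -= Fm
        dhu[i] -= Fq + 0.5 * g * (hp[i] ** 2 - hLs ** 2)
        dh[i + 1] += Fm
        dhu[i + 1] += Fq + 0.5 * g * (hp[i + 1] ** 2 - hRs ** 2)
    return ([h[j] + dt / dx * dh[j + 1] for j in range(n)],
            [hu[j] + dt / dx * dhu[j + 1] for j in range(n)])

# lake at rest over a smooth bump: the free surface h + b must stay flat
b = [0.2 * math.exp(-((i - 10) / 3.0) ** 2) for i in range(20)]
h = [1.0 - bi for bi in b]
hu = [0.0] * 20
for _ in range(100):
    h, hu = step(h, hu, b, dx=0.1, dt=0.002)
```

A naive centered discretization of the bottom-slope source would generate spurious waves over the bump; the interface reconstruction is precisely what removes them.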
The statistics of the resonance widths and the behavior of the survival probability are studied in a particular model of quantum chaotic scattering (a particle in a periodic potential subject to static and time-periodic forces) introduced earlier in Refs. [5,6]. The coarse-grained distribution of the resonance widths is shown to be in good agreement with the prediction of Random Matrix Theory (RMT). The behavior of the survival probability, however, shows some deviation from RMT.