III/V semiconductor quantum dots (QD) have been a focus of optoelectronics research for about 25 years now. Most of the work
has been done on InAs QD on GaAs substrate, but Ga(As)Sb (antimonide) QD on GaAs substrate/buffer have also gained
attention over the last 12 years. There is a scientific dispute on whether a wetting layer forms before antimonide QD formation,
as commonly expected for Stranski-Krastanov growth, or not. Usually ex situ photoluminescence (PL) and atomic force microscopy
(AFM) measurements are performed to resolve such issues. In this contribution, we show that reflectance anisotropy/difference
spectroscopy (RAS/RDS) can be used for the same purpose as an in situ, real-time monitoring technique. It can be employed not
only to identify QD growth via a distinct RAS spectrum, but also to obtain information on the existence of a wetting layer and its
thickness. The data suggest that for antimonide QD growth the wetting layer has a thickness of only 1 ML (one monolayer).
Modern society relies on convenience services and mobile communication. Cloud computing is the current trend for making data and applications available at any time on every device. Data centers concentrate computation and storage at central locations and claim to be green thanks to optimized maintenance and increased energy efficiency. The key enabler for this evolution is the microelectronics industry. The trend toward power-efficient mobile devices has forced this industry to change its design dogma to: "keep data locally and reduce data communication whenever possible". Therefore we ask: is cloud computing repeating the aberrations of its enabling industry?
The plasma membrane transporter SOS1 (SALT-OVERLY SENSITIVE1) is vital for plant survival under salt stress. SOS1 activity is tightly regulated, but little is known about the underlying mechanism. SOS1 contains a cytosolic, autoinhibitory C-terminal tail (abbreviated as SOS1 C-term), which is targeted by the protein kinase SOS2 to trigger its transport activity. Here, to identify additional binding proteins that regulate SOS1 activity, we synthesized the SOS1 C-term domain and used it as bait to probe Arabidopsis thaliana cell extracts. Several 14-3-3 proteins, which function in plant salt tolerance, specifically bound to and interacted with the SOS1 C-term. Compared to wild-type plants, Arabidopsis plants overexpressing the SOS1 C-term showed, when exposed to salt stress, improved salt tolerance, significantly reduced Na+ accumulation in leaves, reduced induction of the salt-responsive gene WRKY25, decreased soluble sugar, starch, and proline levels, less impaired inflorescence formation, and increased biomass. It appears that overexpressing the SOS1 C-term leads to the sequestration of inhibitory 14-3-3 proteins, allowing SOS1 to be more readily activated and leading to increased salt tolerance. We propose that the SOS1 C-term binds to previously unknown proteins such as 14-3-3 isoforms, thereby regulating salt tolerance. This finding uncovers another regulatory layer of the plant salt tolerance program.
Previously in this journal we have reported on fundamental transverse mode selection (TMS#0) of broad area semiconductor lasers
(BALs) with integrated twice-retracted 4f set-up and film-waveguide lens as the Fourier-transform element. Now we choose and
report on a simpler approach for BAL-TMS#0, i.e., the use of a stable confocal longitudinal BAL resonator of length L with a
transverse constriction. The absolute value of the radius R of curvature of both mirror facets, convex in one dimension (1D), is
R = L = 2f with focal length f. The round trip length 2L = 4f again makes up a Fourier-optical 4f set-up, and the constriction,
resulting in a resonator-internal beam waist, acts as a Fourier-optical low-pass spatial frequency filter. Good TMS#0 is achieved
as long as the constriction is tight enough, but filamentation is not completely suppressed.
1. Introduction
Broad area (semiconductor diode) lasers (BALs) are intended
to emit high optical output powers (where “high” is relative
and depending on the material system). As compared to
conventional narrow stripe lasers, the higher power is distributed
over a larger transverse cross-section, thus avoiding
catastrophic optical mirror damage (COMD). Typical BALs
have emitter widths of around 100 µm.
The drawback is the distribution of the high output power
over a large number of transverse modes (in cases without
countermeasures), limiting the portion of the light power in
the fundamental transverse mode (mode #0), which ought to
be maximized for the sake of good light focusability.
Thus techniques have to be used to support, prefer, or
select the fundamental transverse mode (transverse mode
selection TMS#0) by suppression of higher order modes
already upon build-up of the laser oscillation.
In many cases reported in the literature, either a BAL
facet, the
We compute three-dimensional displacement vector fields to estimate the deformation of microstructural data sets in mechanical tests. For this, we extend the well-known optical flow method by Brox et al. to three dimensions, with special focus on the discretization of nonlinear terms. We evaluate our method first by synthetically deforming foams and comparing against this ground truth, and second with data sets of samples that underwent real mechanical tests. Our results are compared to those from state-of-the-art algorithms in materials science and medical image registration. By a thorough evaluation, we show that our proposed method resolves the displacements best among all chosen comparison methods.
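The variational 3D optical flow itself is beyond a short snippet, but the evaluation idea — apply a known synthetic displacement and check that it is recovered — can be sketched with a crude exhaustive block-matching baseline (our own toy stand-in, not the method of the paper):

```python
import numpy as np

def estimate_shift(vol0, vol1, search=3):
    """Estimate a single global integer displacement between two volumes
    by exhaustive SSD minimization (a crude baseline, not variational flow)."""
    best, best_err = (0, 0, 0), np.inf
    c = search  # crop margin so shifted volumes stay inside bounds
    ref = vol0[c:-c, c:-c, c:-c]
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                mov = vol1[c + dz:vol1.shape[0] - c + dz,
                           c + dy:vol1.shape[1] - c + dy,
                           c + dx:vol1.shape[2] - c + dx]
                err = np.sum((ref - mov) ** 2)
                if err < best_err:
                    best, best_err = (dz, dy, dx), err
    return best

# synthetic ground truth: displace a random volume by (1, 0, 2) and recover it
rng = np.random.default_rng(0)
v0 = rng.random((20, 20, 20))
v1 = np.roll(v0, shift=(1, 0, 2), axis=(0, 1, 2))
print(estimate_shift(v0, v1))  # (1, 0, 2)
```

A real evaluation would of course use dense, spatially varying fields rather than one global shift, but the comparison-against-ground-truth logic is the same.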
In this contribution a phase field model for ductile fracture with linear isotropic hardening is presented. An energy functional consisting of an elastic energy, a plastic dissipation potential and a Griffith type fracture energy constitutes the model. The application of an unaltered radial return algorithm on element level is possible due to the choice of an appropriate coupling between the nodal degrees of freedom, namely the displacement and the crack/fracture fields. The degradation function models the mentioned coupling by reducing the stiffness of the material and the plastic contribution of the energy density in broken material. Furthermore, to solve the global system of differential equations comprising the balance of linear momentum and the quasi-static Ginzburg-Landau type evolution equation, the application of a monolithic iterative solution scheme becomes feasible. The compact model is used to perform 3D simulations of fracture in tension. The computed plastic zones are compared to the dog-bone model that is used to derive validity criteria for KIC measurements.
Sensing location information in indoor scenes requires high accuracy and is a challenging task, mainly because of multipath and NLoS (non-line-of-sight) propagation. GNSS signals cannot penetrate well into indoor environments, so satellite-based navigation and positioning systems cannot be used for indoor positioning. Other technologies have been suggested for indoor usage, among them Wi-Fi (802.11) and 5G NR (New Radio). The primary aim of this study is to discuss the advantages and drawbacks of 5G and Wi-Fi positioning techniques for indoor localization.
This paper presents a new approach to parallel path planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on a best-first search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on polyhedral models of the robot and the obstacles. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel path planner on a workstation cluster with 9 PCs and tested the planner for several benchmark environments. With optimal discretisation, the new approach usually shows very good speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
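The cyclic mapping behind the load distribution can be sketched as follows (a minimal illustration of the round-robin rule described above; the function name and sizes are our own):

```python
def assign_hypercubes(num_cubes, num_units):
    """Cyclically (round-robin) map hypercube indices onto processing units.
    Neighbouring hypercubes land on different units, which evens out the
    load when the search explores a localized region of configuration space."""
    return [cube % num_units for cube in range(num_cubes)]

mapping = assign_hypercubes(num_cubes=10, num_units=4)
print(mapping)  # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
```

Because the assignment depends only on the cube index, every processing unit can compute it locally without communication.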
A branch-and-cut approach and alternative formulations for the traveling salesman problem with drone
(2020)
In this paper, we are interested in studying the traveling salesman problem with drone (TSP-D). Given a set of customers and a truck that is equipped with a single drone, the TSP-D asks that all customers are served exactly once and minimal delivery time is achieved. We provide two compact mixed integer linear programming formulations that can be used to address instances with up to 10 customers within a few seconds. Notably, we introduce a third formulation for the TSP-D with an exponential number of constraints. The latter formulation is suitable to be solved by a branch-and-cut algorithm. Indeed, this approach can be used to find optimal solutions for several instances with up to 20 customers within 1 hour, thus challenging the current state-of-the-art in solving the TSP-D. A detailed numerical study provides an in-depth comparison of the effectiveness of the proposed formulations. Moreover, we reveal further details on the operational characteristics of a drone-assisted delivery system. By using three different sets of benchmark instances, consideration is given to various assumptions that affect, for example, technological drone parameters and the impact of distance metrics.
In grinding, the crystal grain size of the workpiece material is in a similar range to the depth of removal. This raises the question whether an anisotropic material model, which considers the effect of the crystal grain sizes and orientations, would predict the process forces better than an isotropic material model. Initially, a simple micro-indentation process is chosen to compare the two models. In this work, a crystal plasticity model and an isotropic Johnson-Cook plasticity model are employed to simulate micro-indentation of a twinning-induced plasticity (TWIP) steel. The results of the two models are compared using the force-displacement curves from the micro-indentation experiments. In the future, the study will be extended to describe the material removal process during a single grit scratch test.
In cake filtration processes, where particles in a suspension are separated by forming a filter
cake on the filter medium, the resistances of filter cake and filter medium cause a specific pressure
drop which consequently defines the process energy effort. The micromechanics of the filter cake
formation (interactions between particles, fluid, other particles and filter medium) must be considered
to describe pore clogging, filter cake growth and consolidation correctly. A precise 3D modeling
approach to describe these effects is the resolved coupling of the Computational Fluid Dynamics with
the Discrete Element Method (CFD-DEM). This work focuses on the development and validation of a
CFD-DEM model, which is capable of predicting the filter cake formation during solid-liquid separation
accurately. The model uses the Lattice-Boltzmann Method (LBM) to directly solve the flow equations
in the CFD part of the coupling and the DEM for the calculation of particle interactions. The developed
model enables the 4-way coupling to consider particle-fluid and particle-particle interactions. The
results of this work are presented in two steps. First, the developed model is validated with an
empirical model of the single particle settling velocity in the transition regime of the fluid-particle
flow. The model is also enhanced with additional particles to determine the particle-particle influence.
Second, the separation of silica glass particles from water in a pressurized housing at constant pressure
is experimentally investigated. The measured filter cake, filter medium and interference resistances
are in good agreement with the results of the 3D simulations, demonstrating the applicability of the
resolved CFD-DEM coupling for analyzing and optimizing cake filtration processes.
Nucleophilic substitution of [(η5-cyclopentadienyl)(η6-chlorobenzene)iron(II)] hexafluorophosphate with sodium imidazolate resulted in the formation of [(η5-cyclopentadienyl)(η6-phenyl)iron(II)]imidazole hexafluorophosphate. The corresponding dicationic imidazolium salt, which was obtained by treating this imidazole precursor with methyl iodide, underwent cyclometallation with bis[dichlorido(η5-1,2,3,4,5-pentamethylcyclopentadienyl)iridium(III)] in the presence of triethyl amine. The resulting bimetallic iridium(III) complex is the first example of an NHC complex bearing a cationic and cyclometallated [(η5-cyclopentadienyl)(η6-phenyl)iron(II)]+ substituent. Like its iron(II) precursors, the bimetallic iridium(III) complex was fully characterized by means of spectroscopy, elemental analysis and single crystal X-ray diffraction. In addition, it was investigated in a catalytic study, wherein it showed high activity in transfer hydrogenation compared to its neutral analogue having a simple phenyl instead of a cationic [(η5-cyclopentadienyl)(η6-phenyl)iron(II)]+ unit at the NHC ligand.
We study the sensor fault estimation and accommodation problems in a data-driven \(\mathcal{H}_\infty\) setting, leading to a data-driven sensor fault-tolerant control scheme. First, we formulate the fault estimation problem as a finite-horizon minimax \(\mathcal{H}_\infty\)-optimization problem in a data-driven setup, whose solution yields the fault estimate. The estimated fault is then used for output compensation. This compensated output and the experimental input are used to achieve certain control objectives in a data-driven \(\mathcal{H}_\infty\) setting. Next, the data-driven \(\mathcal{H}_\infty\) fault estimation and control problems are solved using a subspace predictor-based approach. Finally, the proposed algorithm is applied to the steering subsystem of a remotely operated underwater vehicle.
The fatigue life of metals manufactured via laser-based powder bed fusion (L-PBF) highly
depends on process-induced defects. In this context, not only the size and geometry of the defect, but
also the properties and the microstructure of the surrounding material volume must be considered.
In the presented work, the microstructural changes in the vicinity of a crack-initiating defect in a
fatigue specimen produced via L-PBF and made of AISI 316L were analyzed in detail. Xenon plasma
focused ion beam (Xe-FIB) technique, scanning electron microscopy (SEM), and electron backscatter
diffraction (EBSD) were used to investigate the phase distribution, local misorientations, and grain
structure, including the crystallographic orientations. These analyses revealed a fine grain structure
in the vicinity of the defect, which is arranged in accordance with the melt pool geometry. Besides
pronounced cyclic plastic deformation, a deformation-induced transformation of the initial austenitic
phase into α’-martensite was observed. The plastic deformation as well as the phase transformation
were more pronounced near the border between the defect and the surrounding material volume.
However, the extent of the plastic deformation and the deformation-induced phase transformation
varies locally in this border region. Although a beneficial effect of certain grain orientations on the
phase transformation and plastic deformability was observed, the microstructural changes found
cannot solely be explained by the respective crystallographic orientation. These changes are assumed
to further depend on the inhomogeneous distribution of the multiaxial stresses beneath the defect as
well as on the grain morphology.
A detailed study of a cylinder activation concept by efficiency loss analysis and 1D simulation
(2020)
Cylinder deactivation is a well-known measure for reducing fuel consumption, especially when applied to gasoline engines. Mostly, such systems are designed to deactivate half of the number of cylinders of the engine. In this study, a new concept is investigated for deactivating only one out of four cylinders of a commercial vehicle diesel engine (“3/4-cylinder concept”). For this purpose, cylinders 2–4 of the engine are operated in “real” 3-cylinder mode, thus with the firing order and ignition distance of a regular 3-cylinder engine, while the first cylinder is only activated near full load, running in parallel to the fourth cylinder. This concept was integrated into a test engine and evaluated on an engine test bench. As the investigations revealed significant improvements for the low-to-medium load region as well as disadvantages for high load, an extensive numerical analysis was carried out based on the experimental results. This included both 1D simulation runs and a detailed cylinder-specific efficiency loss analysis. Based on the results of this analysis, further steps for optimizing the concept were derived and studied by numerical calculations. As a result, it can be concluded that the 3/4-cylinder concept may provide significant improvements of real-world fuel economy when integrated as a drive unit into a tractor.
In this paper we present an interpreter which supports the validation of conceptual models in early stages of development. We compare hypermedia and expert system approaches to knowledge processing and show how an integrated approach eases the creation of expert systems. Our knowledge engineering tool CoMo-Kit allows a "smooth" transition from initial protocols via a semi-formal specification based on a typed hypertext up to a running expert system. The interpreter uses the intermediate hypertext representation for the interactive solution of problems. Thereby, tasks are distributed to agents via a local area network. This means that the specification of an expert system can directly be used to solve real-world problems. If formal (operational) specifications exist for subtasks, then these are delegated to computers. Therefore, our approach makes it possible to specify and validate distributed, cooperative systems where some subtasks are solved by humans and others are solved automatically by computers.
A practical distributed planning and control system for industrial robots is presented. The hierarchical concept consists of three independent levels. Each level is modularly implemented and supplies an application interface (API) to the next higher level. At the top level, we propose an automatic motion planner. The motion planner is based on a best-first search algorithm and needs no essential off-line computations. At the middle level, we propose a PC-based robot control architecture, which can easily be adapted to any industrial kinematics and application. Based on a client/server-principle, the control unit establishes an open user interface for including application specific programs. At the bottom level, we propose a flexible and modular concept for the integration of the distributed motion control units based on the CAN bus. The concept allows an on-line adaptation of the control parameters according to the robot's configuration. This implies high accuracy for the path execution and improves the overall system performance.
A stereoselective synthesis of isoindolo[2,1-a]quinolin-11(5H)-ones containing three contiguous stereogenic centers is described. This Lewis-acid mediated reaction of enamides with N-aryl-acylimines affords the desired fused heterocyclic isoindolinones in high yields and diastereoselectivities. Scope and limitations of this method are discussed. The stereochemical outcome of this transformation indicates a stepwise reaction pathway.
We propose a model for glioma patterns in a microlocal tumor environment under
the influence of acidity, angiogenesis, and tissue anisotropy. The bottom-up model deduction
eventually leads to a system of reaction–diffusion–taxis equations for glioma and endothelial cell
population densities, of which the former involves flux limitation both in the self-diffusion and taxis
terms. The model extends a recently introduced (Kumar, Li and Surulescu, 2020) description of
glioma pseudopalisade formation with the aim of studying the effect of hypoxia-induced tumor
vascularization on the establishment and maintenance of these histological patterns which are typical
for high-grade brain cancer. Numerical simulations of the population level dynamics are performed
to investigate several model scenarios containing this and further effects.
We present an entropy concept measuring quantum localization in dynamical systems based on time averaged probability densities. The suggested entropy concept is a generalization of a recently introduced [PRL 75, 326 (1995)] phase-space entropy to any representation chosen according to the system and the physical question under consideration. In this paper we inspect the main characteristics of the entropy and the relation to other measures of localization. In particular the classical correspondence is discussed and the statistical properties are evaluated within the framework of random vector theory. In this way we show that the suggested entropy is a suitable method to detect quantum localization phenomena in dynamical systems.
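In a chosen basis \(\{|n\rangle\}\), an entropy of this type can plausibly be written as follows (our notation, a sketch consistent with the abstract, not necessarily the paper's exact definition):

```latex
% time-averaged occupation probabilities in a chosen basis |n>
\bar{p}_n = \lim_{T\to\infty} \frac{1}{T}\int_0^T
            \left|\langle n \,|\, \psi(t)\rangle\right|^2 \mathrm{d}t,
\qquad
S = -\sum_n \bar{p}_n \ln \bar{p}_n .
```

A strongly localized state concentrates \(\bar{p}_n\) on few basis states and gives small \(S\); a delocalized state spreads \(\bar{p}_n\) and gives \(S\) close to the maximal value \(\ln N\).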
Various regulatory initiatives (such as the pan-European PRIIP-regulation or the German chance-risk classification for state subsidized pension products) have been introduced that require product providers to assess and disclose the risk-return profile of their issued products by means of a key information document. We will in this context outline a concept for a (forward-looking) simulation-based approach and highlight its application and advantages. For reasons of comparison, we further illustrate the performance of approximation methods based on a projection of observed returns into the future such as the Cornish–Fisher expansion or bootstrap methods.
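The Cornish–Fisher expansion mentioned for comparison adjusts a Gaussian quantile for skewness and excess kurtosis; a minimal sketch of its classical fourth-order form (the standard textbook formula, not the paper's exact calibration):

```python
def cornish_fisher_quantile(z, skew, ex_kurt):
    """Adjust a standard-normal quantile z for skewness and excess kurtosis
    using the classical fourth-order Cornish-Fisher expansion."""
    return (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3*z) * ex_kurt / 24
            - (2*z**3 - 5*z) * skew**2 / 36)

# with zero skew and zero excess kurtosis the Gaussian quantile is unchanged
print(cornish_fisher_quantile(1.645, 0.0, 0.0))  # 1.645
# negative skew pushes a lower-tail quantile further out
print(cornish_fisher_quantile(-2.33, -0.5, 1.0))
```

In a risk-classification context the adjusted quantile feeds into value-at-risk-style measures of the return distribution of observed product returns.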
In many robotic applications, the teaching of points in space is necessary to register the robot coordinate system with the one of the application. Robot-human interaction is awkward and dangerous for the human because of the possibly large size and power of the robot, so robot movements must be predictable and natural. We present a novel hybrid control algorithm which provides the needed precision in small scale movements while allowing for fast and intuitive large scale translations.
A highly water-dispersible heterogeneous Brønsted acid surfactant was prepared by synthesis of a bi-functional anisotropic Janus-type material. The catalyst comprises ionic functionalities on one side and propyl-SO3H groups on the other. The novel material was investigated as a green substitute of a homogeneous acidic phase transfer catalyst (PTC). The activity of the catalyst was investigated for the aqueous-phase oxidation of cyclohexene to adipic acid with 30 % hydrogen peroxide, even on a decagram scale. It can also be used for the synthesis of some other carboxylic acid derivatives as well as diethyl phthalate.
The handling of oxygen sensitive samples and growth of obligate anaerobic organisms
requires the stringent exclusion of oxygen, which is omnipresent in our normal atmospheric
environment. Anaerobic workstations (a.k.a. glove boxes) enable the handling of
oxygen sensitive samples during complex procedures, or the long-term incubation of
anaerobic organisms. Depending on the application requirements, commercial workstations
can cost up to 60,000 €. Here we present the complete build instructions for a highly
adaptive, Arduino based, anaerobic workstation for microbial cultivation and sample handling,
with features normally found only in high cost commercial solutions. This build can
automatically regulate humidity, H2 levels (as oxygen reductant), log the environmental
data and purge the airlock. It is built as compactly as possible to allow it to fit into regular
growth chambers for full environmental control. In our experiments, oxygen levels during
the continuous growth of oxygen-producing cyanobacteria stayed under 0.03 % for 21 days
without needing user intervention. The modular Arduino controller allows for the easy
incorporation of additional regulation parameters, such as CO2 concentration or air pressure.
This paper provides researchers with a low cost, entry level workstation for anaerobic
sample handling with the flexibility to match their specific experimental needs.
The performance of napkins is nowadays improved substantially by embedding granules of a superabsorbent into the cellulose matrix. In this paper a continuous model for the liquid transport in such an Ultra Napkin is proposed. Its main feature is a nonlinear diffusion equation strongly coupled with an ODE describing a reversible absorption process. An efficient numerical method based on a symmetrical time splitting and a finite difference scheme of ADI-predictor-corrector type has been developed to solve these equations in a three-dimensional setting. Numerical results are presented that can be used to optimize the granule distribution.
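The symmetric time splitting between the diffusion equation and the absorption ODE can be illustrated in one dimension (a simplified, linearized toy model with invented parameters; the paper itself works in 3D with an ADI scheme):

```python
import numpy as np

def strang_step(u, v, dt, dx, D=1.0, k=1.0):
    """One symmetric (Strang) splitting step for the toy system
       u_t = D u_xx - k (u - v),   v_t = k (u - v),
    where u stands for free liquid and v for reversibly absorbed liquid."""
    def ode_half(u, v, h):
        # exact solution of the exchange ODE over time h:
        # the sum u+v is conserved, the difference decays as exp(-2 k h)
        s, d = u + v, (u - v) * np.exp(-2 * k * h)
        return (s + d) / 2, (s - d) / 2

    u, v = ode_half(u, v, dt / 2)               # half step: absorption ODE
    u = u + dt * D * (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # full step: explicit diffusion, periodic BC
    u, v = ode_half(u, v, dt / 2)               # half step: absorption ODE
    return u, v

# total liquid (free + absorbed) is conserved by construction
x = np.linspace(0, 1, 100, endpoint=False)
u = np.exp(-100 * (x - 0.5) ** 2)
v = np.zeros_like(u)
m0 = (u + v).sum()
for _ in range(50):
    u, v = strang_step(u, v, dt=2e-5, dx=x[1] - x[0], D=1.0, k=1.0)
print(abs((u + v).sum() - m0) < 1e-10)  # True
```

The explicit diffusion step is only stable for `dt <= dx**2 / (2*D)`; an ADI or otherwise implicit treatment, as in the paper, removes this restriction.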
Cloudy inhomogeneities in artificial fabrics are graded by a fast method which is based on a Laplacian pyramid decomposition of the fabric image. This band-pass representation takes into account the scale character of the cloudiness. A quality measure of the entire cloudiness is obtained as a weighted mean over the variances of all scales.
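The measure can be sketched as follows (our own minimal stand-in: 2x2 block averaging replaces the Gaussian filter of a proper Laplacian pyramid, and the weights are arbitrary):

```python
import numpy as np

def cloudiness_index(img, levels=3, weights=None):
    """Grade cloudy inhomogeneities via a crude Laplacian pyramid:
    each band-pass layer is the difference between an image and its
    downsampled-then-upsampled version; the index is a weighted mean
    of the per-scale variances."""
    weights = weights or [1.0] * levels
    variances = []
    cur = img.astype(float)
    for _ in range(levels):
        h, w = cur.shape[0] // 2 * 2, cur.shape[1] // 2 * 2
        cur = cur[:h, :w]
        low = cur.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # downsample
        up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)       # upsample
        band = cur - up                                            # band-pass layer
        variances.append(band.var())
        cur = low
    return float(np.average(variances, weights=weights[:len(variances)]))

rng = np.random.default_rng(1)
flat = np.full((64, 64), 0.5)       # perfectly uniform fabric
noisy = rng.random((64, 64))        # strongly inhomogeneous fabric
print(cloudiness_index(flat) == 0.0, cloudiness_index(noisy) > 0.0)  # True True
```

A uniform image has zero variance at every scale and therefore index 0, while inhomogeneous structure at any scale raises the corresponding variance and hence the index.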
We have investigated urine samples after coffee consumption using targeted and untargeted
approaches to identify furan and 2-methylfuran metabolites by UPLC-qToF.
The aim was to establish a fast, robust, and time-saving method involving ultra-performance
liquid chromatography-quantitative time-of-flight tandem mass spectrometry (UPLC-qToF-MS/MS).
The developed method detected previously reported metabolites, such as lysine-cis-2-butene-1,4-dial
(Lys-BDA) adducts, and others that had not been previously identified or had only been detected
in animal or in vitro studies. In sum, the UPLC-qToF approach provides additional information that may be
valuable in future human or animal intervention studies.
Solar radiation data is essential for the development of many solar energy applications ranging from thermal collectors to building simulation tools, but its availability is limited, especially for the diffuse radiation component. There are several studies aimed at predicting this value, but very few cover the generalizability of such models across varying climates. Our study investigates how well these models generalize and also shows how to enhance their generalizability for different climates. Since machine learning approaches are known to generalize well, we apply them to understand how well they perform on climates different from those they were trained on. Therefore, we trained them on datasets from the U.S. and tested them on several European climates. The machine learning model developed for U.S. climates not only showed a low mean absolute error (MAE) of 23 W/m2, but also generalized very well to European climates with MAE in the range of 20 to 27 W/m2. Further investigation into the factors influencing the generalizability revealed that careful selection of the training data can improve the results significantly.
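The train-on-one-climate, test-on-another protocol can be sketched as follows (a numpy-only linear baseline on synthetic data; the feature names, coefficients, and noise levels are invented for illustration and are not the study's model or dataset):

```python
import numpy as np

rng = np.random.default_rng(42)

def synth_climate(n, noise):
    """Generate toy samples: features loosely standing in for global
    irradiance, clearness index, and solar elevation; target is a
    made-up linear 'diffuse radiation' plus climate-specific noise."""
    X = rng.random((n, 3))
    y = 120 * X[:, 0] + 40 * X[:, 1] + 25 * X[:, 2] + rng.normal(0, noise, n)
    return np.column_stack([X, np.ones(n)]), y  # append intercept column

X_train, y_train = synth_climate(500, noise=5.0)   # "training climate"
X_test, y_test = synth_climate(200, noise=8.0)     # "different climate"

# ordinary least squares fit on the training climate
coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
mae = np.mean(np.abs(X_test @ coef - y_test))
print(f"MAE on unseen climate: {mae:.1f} W/m^2")
```

The study's point is precisely that this cross-climate MAE stays close to the in-domain MAE when the training data is chosen carefully; here the synthetic climates share the same underlying relation, so generalization is trivially good.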
A novel shadowgraphic inline probe to measure crystal size distributions (CSD),
based on acquired greyscale images, is evaluated in terms of elevated temperatures and fragile
crystals, and compared to well-established, alternative online and offline measurement techniques,
i.e., sieving analysis and online microscopy. Additionally, the operation limits, with respect to
temperature, supersaturation, suspension, and optical density, are investigated. Two different
substance systems, potassium dihydrogen phosphate (prisms) and thiamine hydrochloride (needles),
are crystallized for this purpose at 25 L scale. Crystal phases of the well-known KH2PO4/H2O system
are measured continuously by the inline probe and in a bypass by the online microscope during
cooling crystallizations. Both measurement techniques show similar results with respect to the crystal
size distribution, except for higher temperatures, where the bypass variant tends to fail due to
blockage. Thiamine hydrochloride, a substance forming long and fragile needles in aqueous solutions,
is solidified with an anti-solvent crystallization with ethanol. The novel inline probe could identify
a new field of application for image-based crystal size distribution measurements, with respect
to difficult particle shapes (needles) and elevated temperatures, which cannot be evaluated with
common techniques.
We present a parallel control architecture for industrial robot cells. It is based on closed functional components arranged in a flat communication hierarchy. The components may be executed by different processing elements, and each component itself may run on multiple processing elements. The system is driven by the instructions of a central cell control component. We set up necessary requirements for industrial robot cells and possible parallelization levels. These are met by the suggested robot control architecture. As an example we present a robot work cell and a component for motion planning, which fits well in this concept.
Phase field modeling of fracture has been in the focus of research for over a decade now. The field has gained attention probably due to its beneficial features for numerical simulations, even for complex crack problems. The framework has so far been applied to quasi-static and dynamic fracture of brittle as well as ductile materials, with isotropic and also anisotropic fracture resistance. However, fracture due to cyclic mechanical fatigue, which is a very important phenomenon regarding a safe, durable and also economical design of structures, has been considered only recently in terms of phase field modeling. While in the first phase field models the material's fracture toughness is degraded to simulate fatigue crack growth, we present an alternative method in this work, where the driving force for the fatigue mechanism increases due to cyclic loading. This new contribution is governed by the evolution of fatigue damage, which can be approximated by a linear law for damage accumulation, namely Miner's rule. The proposed model is able to predict nucleation as well as growth of a fatigue crack. Furthermore, by an assessment of crack growth rates obtained from several numerical simulations against a conventional approach for the description of fatigue crack growth, it is shown that the presented model predicts realistic behavior.
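Miner's rule, the linear damage-accumulation law referred to, can be stated as (standard form; the symbols are ours):

```latex
% Palmgren--Miner linear damage accumulation:
% n_i cycles applied at load level i, N_i cycles to failure at that level
D = \sum_i \frac{n_i}{N_i}, \qquad \text{failure expected when } D \ge 1 .
```

In the model above this accumulated damage \(D\) drives the additional fatigue contribution to the crack driving force, rather than degrading the fracture toughness as in earlier phase field fatigue models.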
In this note, we define one more way of quantization of classical systems. The quantization we consider is an analogue of the classical Jordan–Schwinger map which has been known and used for a long time by physicists. The difference, compared to the Jordan–Schwinger map, is that we use generators of the Cuntz algebra O∞ (i.e. a countable family of mutually orthogonal partial isometries of a separable Hilbert space) as “building blocks” instead of creation–annihilation operators. The resulting scheme satisfies properties similar to Van Hove prequantization, i.e. exact conservation of Lie brackets and linearity.
The cultivation of cyanobacteria with the addition of an organic carbon source (i.e., heterotrophic or mixotrophic cultivation) is a promising technique to increase their slow growth rate. However, most cyanobacteria cultures are infected by non-separable heterotrophic bacteria. While their contribution to the biomass is rather insignificant in phototrophic cultivation, problems may arise in heterotrophic and mixotrophic mode. Heterotrophic bacteria can potentially utilize carbohydrates quickly, thus preventing any benefit for the cyanobacteria. In order to estimate the advantage of supplementing a carbon source, it is essential to quantify the proportions of cyanobacteria and heterotrophic bacteria in the resulting biomass. In this work, the use of quantitative polymerase chain reaction (qPCR) is proposed. To prepare the samples, a DNA extraction method for cyanobacteria was improved to provide reproducible and robust results for the group of terrestrial cyanobacteria. Two pairs of primers were used, which bind either to the 16S rRNA gene of all cyanobacteria or of all bacteria including cyanobacteria. This allows a determination of the proportion of cyanobacteria in the biomass. The method was established with the two terrestrial cyanobacteria Trichocoleus sociatus SAG 26.92 and Nostoc muscorum SAG B-1453-12a. As proof of concept, a heterotrophic cultivation of T. sociatus with glucose was performed. After 2 days of cultivation, a reduction of the cyanobacterium's share of the biomass to 90% was detected. Afterwards, the proportion increased again.
The paper studies the dynamics of transitions between the levels of a Wannier-Stark ladder induced by a resonant periodic driving. The analysis of the problem is done in terms of resonance quasienergy states, which take into account the metastable character of the Wannier-Stark states. It is shown that the periodic driving creates from a localized Wannier-Stark state an extended Bloch-like state with a spatial length varying in time as ~ t^1/2. Such a state can find applications in the field of atomic optics because it generates a coherent pulsed atomic beam.
Daylight is important for the well-being of humans. Therefore, many office buildings use
large windows and glass facades to let more daylight into office spaces. However, this increases the
chance of glare in office spaces, which results in visual discomfort. Shading systems in buildings
can prevent glare but are not effectively adapted to changing sky conditions and sun position,
thus losing valuable daylight. Moreover, many shading systems are also aesthetically unappealing.
Electrochromic (EC) glass in this regard might be a better alternative, due to its light transmission
properties that can be altered when a voltage is applied. EC glass facilitates zoning and also supports
control of each zone separately. This allows the right amount of daylight at any time of the day.
However, an effective control strategy is still required to efficiently control EC glass. Reinforcement
learning (RL) is a promising control strategy that can learn from rewards and penalties and use this
feedback to adapt to user inputs. We trained a deep Q-network (DQN) agent on a set of weather data
and visual comfort data, where the agent tries to adapt to the occupant’s feedback while observing
the sun position and radiation at given intervals. The trained DQN agent can avoid bright daylight
and glare scenarios in 97% of the cases and increases the amount of useful daylight up to 90%, thus
significantly reducing the need for artificial lighting.
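The reward-and-penalty idea described above can be illustrated with a toy tabular Q-learning loop: discretized solar-radiation states, tint-level actions, and a comfort reward. All names, state/action sets, and the reward shaping here are illustrative assumptions, not the paper's DQN setup or data.

```python
import random

# Toy sketch: states are discretized solar-radiation levels, actions are
# tint levels of the EC glass, and the reward encodes occupant comfort
# (penalty for glare or wasted daylight, bonus for a matching tint).
STATES = range(3)     # 0: overcast, 1: moderate sun, 2: direct sun
ACTIONS = range(3)    # 0: clear, 1: medium tint, 2: dark tint

def reward(state, action):
    # Penalize glare (direct sun through clear glass) and wasted
    # daylight (dark tint under overcast sky); reward matched tinting.
    return 1.0 if action == state else -1.0

def train(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(list(STATES))
        a = rng.choice(list(ACTIONS)) if rng.random() < eps else \
            max(ACTIONS, key=lambda act: q[(s, act)])
        # One-step (bandit-like) Q-update; each tinting decision is
        # treated as an independent step in this toy version.
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in STATES}
```

After training, the greedy policy matches the tint level to the sun level, i.e. the agent has learned from the comfort feedback alone.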
The semantics of everyday language and the semantics of its naive translation into classical first-order language considerably differ. An important discrepancy that is addressed in this paper concerns the implicit assumption of what exists. For instance, in the case of universal quantification natural language uses restrictions and presupposes that these restrictions are non-empty, while in classical logic it is only assumed that the whole universe is non-empty. On the other hand, all constants mentioned in classical logic are presupposed to exist, while it poses no problems to speak about hypothetical objects in everyday language. These problems have been discussed in philosophical logic, and some adequate many-valued logics were developed to model these phenomena much better than classical first-order logic can do. An adequate calculus, however, has not yet been given. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. Unfortunately, restricted quantifications are not truth-functional, hence they do not fit the framework directly. We solve this problem by applying recent methods from sorted logics.
One of the many features needed to support the activities of autonomous systems is the ability of motion planning. It enables robots to move in their environment safely and to accomplish given tasks. Unfortunately, the control loop comprising sensing, planning, and acting has not yet been closed for robots in dynamic environments. One reason is the long execution time of the motion planning component. A solution to this problem is offered by the use of massively parallel computation. An important task is therefore the parallelization of existing motion planning algorithms so that they are suitable for massively parallel execution. In several cases, completely new algorithms have to be designed for a parallelization to be feasible. In this survey, we review recent approaches to motion planning using parallel computation.
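One parallelization level that sampling-based planners expose is embarrassingly parallel collision checking of independently sampled configurations. A minimal sketch of that idea, assuming a hypothetical 2D point robot and a single circular obstacle (neither taken from the survey):

```python
from concurrent.futures import ThreadPoolExecutor
import random

# Illustrative workspace: a circular obstacle (center x, center y, radius).
OBSTACLE = (0.5, 0.5, 0.2)

def collision_free(cfg):
    # A configuration of the point robot is collision-free if it lies
    # outside the obstacle circle.
    ox, oy, r = OBSTACLE
    x, y = cfg
    return (x - ox) ** 2 + (y - oy) ** 2 > r ** 2

def sample_free_configurations(n, workers=4, seed=0):
    rng = random.Random(seed)
    samples = [(rng.random(), rng.random()) for _ in range(n)]
    # Each collision check is independent, so all samples can be
    # checked concurrently by a pool of workers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        ok = list(pool.map(collision_free, samples))
    return [c for c, keep in zip(samples, ok) if keep]

free = sample_free_configurations(200)
```

Real planners distribute far more expensive checks (full robot geometry against a world model) the same way, which is where the speedup comes from.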
In recent years, more and more publications and materials for studying and teaching, e.g., for Web-based teaching (WBT), appear online, and digital libraries are built to manage such publications and online materials. Therefore, the most important concerns relate to the problem of durable, sustained storage and the management of content together with its metadata, which exists in heterogeneous styles and formats. In this paper, we present specific techniques and their use to support metadata-based catalog services. Such semistructured metadata (represented as XML fragments), which belong to online learning resources, need efficient XML-based query support, scalable result set processing, and comprehensive facilities for personalization purposes. We discuss the associated problems, subsequently derive the concepts of a suitable architecture, and finally outline the realization by means of our prototype system that is based on the J2EE component model.
In the scalar case one knows that a complex normalized function of bounded variation \(\phi\) on \([0,1]\) defines a unique complex regular Borel measure \(\mu\) on \([0,1]\). In this note we show that this is no longer true in general in the vector-valued case, even if \(\phi\) is assumed to be continuous. Moreover, the functions \(\phi\) which determine a countably additive vector measure \(\mu\) are characterized.
Personalized dynamic pricing (PDP) involves dynamically setting individual-consumer prices for the same product or service according to consumer-identifying information. Despite its profitability, this pricing provokes strong negative fairness perceptions, explaining why managers are reluctant to implement it. This research provides important insights into the effect of two PDP dimensions (price individualization level and segmentation base) on fairness perceptions and the moderating role of privacy concerns. The results of two experimental studies indicate that consumers perceive individual prices as less fair than segment prices. They also evaluate location-based pricing as less fair than purchase history-based pricing. Consumer privacy concerns moderate these effects.
Background: The positive effect of carbohydrates from commercial beverages on soccer-specific exercise has been clearly demonstrated. However, no study is available that uses a home-mixed beverage in a test where technical skills were required. Methods: Nine subjects participated voluntarily in this double-blind, randomized, placebo-controlled crossover study. On three testing days, the subjects performed six Hoff tests with a 3-min active break as a preload and then the Yo-Yo Intermittent Running Test Level 1 (Yo-Yo IR1) until exhaustion. On test days 2 and 3, the subjects received either a 69 g carbohydrate-containing drink (syrup–water mixture) or a carbohydrate-free drink (aromatic water). Beverages were given in several doses of 250 mL each: 30 min before and immediately before the exercise and after 18 and 39 min of exercise. The primary target parameters were the running performance in the Hoff test and Yo-Yo IR1, body mass and heart rate. Statistical differences between the variables of both conditions were analyzed using paired samples t-tests. Results: The maximum heart rate in Yo-Yo IR1 showed significant differences (syrup: 191.1 ± 6.2 bpm; placebo: 188.0 ± 6.89 bpm; t(6) = −2.556; p = 0.043; dz = 0.97). The running performance in Yo-Yo IR1 under the syrup condition significantly increased by 93.33 ± 84.85 m (0–240 m) on average (p = 0.011). Conclusions: The intake of a syrup–water mixture with a total of 69 g of carbohydrates leads to an increase in high-intensity running performance after soccer-specific loads. Therefore, the intake of carbohydrate solutions is recommended for intermittent loads and should be increasingly considered by coaches and players.
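The analysis above relies on paired-samples t-tests. A generic sketch of that computation, on made-up illustrative numbers rather than the study's measurements:

```python
import math

def paired_t(x, y):
    """Return (t, df) for a two-sided paired-samples t-test."""
    d = [a - b for a, b in zip(x, y)]       # per-subject differences
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance
    t = mean / math.sqrt(var / n)            # mean difference / SE
    return t, n - 1

# Hypothetical Yo-Yo IR1 running distances (m), syrup vs. placebo:
syrup = [2100, 1980, 2240, 2050, 2160, 2010, 2300]
placebo = [2000, 1950, 2100, 2040, 2080, 1900, 2210]
t, df = paired_t(syrup, placebo)
```

The same subjects appear in both conditions, which is exactly why the paired (within-subject) variant is used instead of an independent-samples test.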
When considering complex systems, identifying the most important actors is often of relevance. When the system is modeled
as a network, centrality measures are used which assign each node a value due to its position in the network. It is often
disregarded that they implicitly assume a network process flowing through a network, and also make assumptions of how
the network process flows through the network. A node is then central with respect to this network process (Borgatti in Soc
Netw 27(1):55–71, 2005, https://doi.org/10.1016/j.socnet.2004.11.008). It has been shown that real-world processes often
do not fulfill these assumptions (Bockholt and Zweig, in Complex networks and their applications VIII, Springer, Cham,
2019, https://doi.org/10.1007/978-3-030-36683-4_7). In this work, we systematically investigate the impact of the measures’
assumptions by using four datasets of real-world processes. In order to do so, we introduce several variants of the betweenness
and closeness centrality which, for each assumption, use either the assumed process model or the behavior of the real-world
process. The results are twofold: on the one hand, for all measure variants and almost all datasets, we find that, in general,
the standard centrality measures are quite robust against deviations in their process model. On the other hand, we observe a
large variation of ranking positions of single nodes, even among the nodes ranked high by the standard measures. This has
implications for the interpretability of results of those centrality measures. Since a mismatch of the behavior of the real
network process and the assumed process model affects even the highly ranked nodes, the resulting rankings need to be
interpreted with care.
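The standard closeness centrality discussed above illustrates the implicit process model: it assumes the network process travels along shortest paths. A minimal stdlib implementation on a small illustrative graph (not one of the paper's datasets):

```python
from collections import deque

def closeness(adj, v):
    """Standard closeness: (n - 1) / sum of shortest-path distances
    from v, computed by BFS (unweighted edges)."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return (len(adj) - 1) / sum(dist.values())

# A path graph 0-1-2-3-4: the middle node is the most central.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
scores = {v: closeness(adj, v) for v in adj}
```

The measure variants in the paper replace this assumed shortest-path spreading with the observed behavior of the real-world process.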
Even though it is not very often admitted, partial functions do play a significant role in many practical applications of deduction systems. Kleene already gave a semantic account of partial functions using a three-valued logic decades ago, but there has not been a satisfactory mechanization. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. However, strong Kleene logic, where quantification is restricted and therefore not truth-functional, does not fit the framework directly. We solve this problem by applying recent methods from sorted logics. This paper presents a tableau calculus that combines the proper treatment of partial functions with the efficiency of sorted calculi.
This survey provides the reader with an overview of numerous results on p-permutation modules and the closely related classes of endo-trivial, endo-permutation and endo-p-permutation modules. These classes of modules play an important role in the representation theory of finite groups. For example, they are important building blocks used to understand and parametrise several kinds of categorical equivalences between blocks of finite group algebras. For this reason, there has been, since the late 1990s, much interest in classifying such modules. The aim of this manuscript is to review classical results as well as all the major recent advances in the area. The first part of this survey serves as an introduction to the topic for non-experts in modular representation theory of finite groups, outlining proof ideas of the most important results at the foundations of the theory. Simultaneously, the connections between the aforementioned classes of modules are emphasised. In this respect, results which are dispersed in the literature are brought together, and emphasis is put on common properties and the role played by the p-permutation modules throughout the theory. Finally, in the last part of the manuscript, lifting results from positive characteristic to characteristic zero are collected and their proofs sketched.
A novel method is presented which allows a fast computation of complex energy resonance states in Stark systems, i.e. systems in a homogeneous field. The technique is based on the truncation of a shift-operator in momentum space. Numerical results for space periodic and non-periodic systems illustrate the extreme simplicity of the method.
Anisotropy of tracer-coupled networks is a hallmark in many brain regions. In the past, the topography of these networks was analyzed using various approaches, which focused on different aspects, e.g., position, tracer signal, or direction of coupled cells. Here, we developed a vector-based method to analyze the extent and preferential direction of tracer spreading. As a model region, we chose the lateral superior olive—a nucleus that exhibits specialized network topography. In acute slices, sulforhodamine 101-positive astrocytes were patch-clamped and dialyzed with the gap junction (GJ)-permeable tracer neurobiotin, which was subsequently labeled with avidin Alexa Fluor 488. A predetermined threshold was used to differentiate between tracer-coupled and tracer-uncoupled cells. Tracer extent was calculated from the vector means of tracer-coupled cells in four 90° sectors. We then computed the preferential direction using a rotating coordinate system and post hoc fitting of these results with a sinusoidal function. The new method allows for an objective analysis of tracer spreading that provides information about the shape and orientation of GJ networks. We expect this approach to become a vital tool for the analysis of coupling anisotropy in many brain regions.
In this paper we construct a numerical solver for the Saint Venant equations. Special attention
is given to the balancing of the source terms, including the bottom slope and variable cross-
sectional profiles. Therefore a special discretization of the pressure law is used, in order to
transfer analytical properties to the numerical method. Based on this approximation a well-
balanced solver is developed, assuring the C-property and depth positivity. The performance
of this method is studied in several test cases focusing on accurate capturing of steady states.
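For orientation, in the simplest setting (a rectangular channel of unit width; the paper additionally treats variable cross-sectional profiles) the Saint Venant system takes the standard shallow-water form with a bottom-slope source term:

```latex
% Shallow water equations with bottom topography b(x), water depth h,
% velocity u, gravity g:
\[
  \partial_t h + \partial_x (hu) = 0, \qquad
  \partial_t (hu) + \partial_x\!\Bigl(hu^2 + \tfrac{g}{2}h^2\Bigr)
    = -\,g\,h\,\partial_x b .
\]
% The C-property requires the discrete scheme to preserve the
% lake-at-rest steady state  u = 0,  h + b = const  exactly.
```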
The statistics of the resonance widths and the behavior of the survival probability are studied in a particular model of quantum chaotic scattering (a particle in a periodic potential subject to static and time-periodic forces) introduced earlier in Refs. [5,6]. The coarse-grained distribution of the resonance widths is shown to be in good agreement with the prediction of Random Matrix Theory (RMT). The behavior of the survival probability, however, shows some deviation from RMT.
We report on Brillouin light scattering investigations of the elastic properties in Co/Ni superlattices which exhibit localized electronic eigenstates near the Fermi level causing an oscillation of the resistivity as a function of the superlattice periodicity Λ. No oscillations of the Rayleigh and Sezawa modes as a function of Λ could be observed within an error margin of ±2%, indicating that the localized electronic states do not contribute to the elastic constants.
For modeling approaches in systems biology, knowledge of the absolute abundances of cellular proteins is essential. One way to gain this knowledge is the use of quantification concatamers (QconCATs), which are synthetic proteins consisting of proteotypic peptides derived from the target proteins to be quantified. The QconCAT protein is labeled with a heavy isotope upon expression in E. coli and known amounts of the purified protein are spiked into a whole cell protein extract. Upon tryptic digestion, labeled and unlabeled peptides are released from the QconCAT and the native proteins, respectively, and both are quantified by LC-MS/MS. The labeled Q-peptides then serve as standards for determining the absolute quantity of the native peptides/proteins. Here we have applied the QconCAT approach to Chlamydomonas reinhardtii for the absolute quantification of the major proteins and protein complexes driving photosynthetic light reactions in the thylakoid membranes and carbon fixation in the pyrenoid. We found that with 25.2 attomol/cell the Rubisco large subunit makes up 6.6% of all proteins in a Chlamydomonas cell and with this exceeds the amount of the small subunit by a factor of 1.56. EPYC1, which links Rubisco to form the pyrenoid, is eight times less abundant than RBCS, and Rubisco activase is 32 times less abundant than RBCS. With 5.2 attomol/cell, photosystem II is the most abundant complex involved in the photosynthetic light reactions, followed by plastocyanin, photosystem I and the cytochrome b6/f complex, which range between 2.9 and 3.5 attomol/cell. The least abundant complex is the ATP synthase with 2 attomol/cell. While applying the QconCAT approach, we have been able to identify many potential pitfalls associated with this technique. We analyze and discuss these pitfalls in detail and provide an optimized workflow for future applications of this technique.
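The quantification step described above reduces to a simple ratio calculation: the amount of native (light) peptide follows from the known spiked amount of heavy-labeled Q-peptide and the measured light/heavy intensity ratio. The numbers below are made up for illustration, not measured values from the study:

```python
def native_amount(spiked_heavy_amol, light_intensity, heavy_intensity):
    """Absolute amount of a native peptide (amol) from the known spiked
    amount of its heavy-labeled standard and the LC-MS/MS light/heavy
    intensity ratio."""
    return spiked_heavy_amol * light_intensity / heavy_intensity

# Hypothetical example: 10 amol heavy standard spiked in, and the light
# (native) peak is measured twice as intense as the heavy peak:
native = native_amount(10.0, light_intensity=2.0e6, heavy_intensity=1.0e6)
```

Dividing such per-cell peptide amounts by complex stoichiometries then yields the per-complex abundances reported in the abstract.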
The measurement of self-diffusion coefficients using pulsed-field gradient (PFG) nuclear magnetic resonance (NMR) spectroscopy is a well-established method. Recently, benchtop NMR spectrometers with gradient coils have also been used, which greatly simplify these measurements. However, a disadvantage of benchtop NMR spectrometers is the lower resolution of the acquired NMR signals compared to high-field NMR spectrometers, which requires sophisticated analysis methods. In this work, we use a recently developed quantum mechanical (QM) model-based approach for the estimation of self-diffusion coefficients from complex benchtop NMR data. With the knowledge of the species present in the mixture, signatures for each species are created and adjusted to the measured NMR signal. With this model-based approach, the self-diffusion coefficients of all species in the mixtures were estimated with a discrepancy of less than 2 % compared to self-diffusion coefficients estimated from high-field NMR data sets of the same mixtures. These results suggest benchtop NMR is a reliable tool for quantitative analysis of self-diffusion coefficients, even in complex mixtures.
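Conventionally, self-diffusion coefficients are extracted from PFG-NMR attenuation data via the Stejskal–Tanner relation ln(I/I0) = −D·γ²g²δ²(Δ − δ/3). The sketch below fits D by a least-squares slope over synthetic data; it illustrates the standard relation, not the paper's quantum mechanical model-based approach, and all parameter values are assumptions:

```python
import math

GAMMA = 2.675e8          # 1H gyromagnetic ratio, rad s^-1 T^-1
DELTA_SMALL = 2e-3       # gradient pulse duration delta, s
DELTA_BIG = 50e-3        # diffusion time Delta, s

def b_value(g):
    # Stejskal-Tanner b-factor for gradient strength g (T/m)
    return (GAMMA * g * DELTA_SMALL) ** 2 * (DELTA_BIG - DELTA_SMALL / 3)

def fit_diffusion(gradients, intensities, i0):
    bs = [b_value(g) for g in gradients]
    ys = [math.log(i / i0) for i in intensities]
    # Least-squares slope through the origin of ln(I/I0) vs b:
    slope = sum(b * y for b, y in zip(bs, ys)) / sum(b * b for b in bs)
    return -slope        # D in m^2/s

# Synthetic decay generated with a water-like D = 2.3e-9 m^2/s:
D_TRUE = 2.3e-9
grads = [0.05, 0.1, 0.2, 0.3, 0.4]   # gradient strengths, T/m
ints = [math.exp(-D_TRUE * b_value(g)) for g in grads]
d_est = fit_diffusion(grads, ints, i0=1.0)
```

On real benchtop data with overlapping peaks, this simple peak-wise fit is exactly what breaks down, which motivates the model-based approach of the paper.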
INRECA offers tools and methods for developing, validating, and maintaining classification, diagnosis and decision support systems. INRECA's basic technologies are inductive and case-based reasoning [9]. INRECA fully integrates [2] both techniques within one environment and uses the respective advantages of both technologies. Its object-oriented representation language CASUEL [10, 3] allows the definition of complex case structures, relations, similarity measures, as well as background knowledge to be used for adaptation. The object-oriented representation language makes INRECA a domain-independent tool for its intended kind of tasks. When problems are solved via case-based reasoning, the primary kind of knowledge that is used during problem solving is the very specific knowledge contained in the cases. However, in many situations this specific knowledge by itself is not sufficient or appropriate to cope with all requirements of an application. Very often, background knowledge is available and/or necessary to better explore and interpret the available cases [1]. Such general knowledge may state dependencies between certain case features and can be used to infer additional, previously unknown features from the known ones.
In this paper we generalize the notion of method for proof planning. While we adopt the general structure of methods introduced by Alan Bundy, we make an essential advancement in that we strictly separate the declarative knowledge from the procedural knowledge. This change of paradigm not only leads to representations easier to understand, it also enables modeling the important activity of formulating meta-methods, that is, operators that adapt the declarative part of existing methods to suit novel situations. Thus this change of representation leads to a considerably strengthened planning mechanism. After presenting our declarative approach towards methods we describe the basic proof planning process with these. Then we define the notion of meta-method, provide an overview of practical examples and illustrate how meta-methods can be integrated into the planning process.
Extending the plan-based paradigm for automated theorem proving, we developed in previous work a declarative approach towards representing methods in a proof planning framework to support their mechanical modification. This paper presents a detailed study of a class of particular methods, embodying variations of a mathematical technique called diagonalization. The purpose of this paper is mainly twofold. First we demonstrate that typical mathematical methods can be represented in our framework in a natural way. Second we illustrate our philosophy of proof planning: besides planning with a fixed repertoire of methods, meta-methods create new methods by modifying existing ones. With the help of three different diagonalization problems we present an example trace protocol of the evolution of methods: an initial method is extracted from a particular successful proof. This initial method is then reformulated for the subsequent problems, and more general methods can be obtained by abstracting existing methods. Finally we come up with a fairly abstract method capable of dealing with all three problems, since it captures the very key idea of diagonalization.
Fracture phenomena can be described by a phase field model in which an independent scalar field variable in addition to the mechanical displacement is considered [3]. This field approximates crack surfaces as a continuous transition zone from a value that indicates intact material to another value that represents the crack. For an accurate approximation of cracks, narrow transition zones resulting in steep gradients of the fracture field are required. This necessitates a high mesh density in finite element simulations, which leads to an increased computational effort. In order to circumvent this problem without forfeiting accuracy, exponential shape functions were introduced in the discretization of the phase field variable, see [4]. These special shape functions allow for a better approximation of steep gradients of the phase field with fewer elements as compared to standard Lagrange elements. Unfortunately, the orientation of the exponential shape functions is not uniquely determined and needs to be set up in the correct way in order to improve the approximation of smooth cracks. This work solves the issue by adaptively reorienting the exponential shape functions according to the nodal values of the phase field gradient in each element. Furthermore, a local approach is pursued that uses exponential shape functions only in the vicinity of the crack, whereas standard bilinear shape functions are used away from the crack.
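For context, the exponential shape functions are motivated by the 1D stationary crack profile of the commonly used second-order phase field model (standard textbook form with crack field d and regularization length ℓ, not quoted verbatim from [3, 4]):

```latex
% 1D stationary phase field profile around a crack at x = 0:
\[
  d(x) = e^{-|x|/\ell},
\]
% i.e. the field decays exponentially away from the crack, which
% bilinear (piecewise linear) shape functions approximate poorly
% unless the mesh is very fine.
```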
Adaptive numerical integration of exponential finite elements for a phase field fracture model
(2021)
Phase field models for fracture are energy-based and employ a continuous field variable, the phase field, to indicate cracks. The width of the transition zone of this field variable between damaged and intact regions is controlled by a regularization parameter. Narrow transition zones are required for a good approximation of the fracture energy which involves steep gradients of the phase field. This demands a high mesh density in finite element simulations if 4-node elements with standard bilinear shape functions are used. In order to improve the quality of the results with coarser meshes, exponential shape functions derived from the analytic solution of the 1D model are introduced for the discretization of the phase field variable. Compared to the bilinear shape functions these special shape functions allow for a better approximation of the fracture field. Unfortunately, lower-order Gauss-Legendre quadrature schemes, which are sufficiently accurate for the integration of bilinear shape functions, are not sufficient for an accurate integration of the exponential shape functions. Therefore in this work, the numerical accuracy of higher-order Gauss-Legendre formulas and a double exponential formula for numerical integration is analyzed.
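The accuracy issue described above can be seen on a 1D model problem: integrating an exponential over [-1, 1] with Gauss-Legendre rules of increasing order. The nodes and weights below are the standard tabulated values; this is an illustration of the quadrature behavior, not the paper's element-level integrals:

```python
import math

# Standard Gauss-Legendre nodes/weights on [-1, 1].
GL_RULES = {
    2: [(-1 / math.sqrt(3), 1.0), (1 / math.sqrt(3), 1.0)],
    4: [(-0.8611363116, 0.3478548451), (-0.3399810436, 0.6521451549),
        (0.3399810436, 0.6521451549), (0.8611363116, 0.3478548451)],
}

def gauss_legendre(f, order):
    # Quadrature: weighted sum of integrand values at the nodes.
    return sum(w * f(x) for x, w in GL_RULES[order])

# Exact integral of exp(x) over [-1, 1] is e - 1/e.
exact = math.e - 1 / math.e
err2 = abs(gauss_legendre(math.exp, 2) - exact)
err4 = abs(gauss_legendre(math.exp, 4) - exact)
```

The 2-point rule, exact for cubic polynomials and thus sufficient for bilinear shape functions, leaves a visible error for the exponential integrand, while the 4-point rule reduces it by several orders of magnitude.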
One technique to describe the failure of mechanical structures is a phase field model for fracture. Phase field models for fracture consider an independent scalar field variable in addition to the mechanical displacement [1]. The phase field ansatz approximates crack surfaces as a continuous transition zone in which the phase field variable varies from a value that indicates intact material to another value that represents cracks. For a good approximation of cracks, these transition zones are required to be narrow, which leads to steep gradients in the fracture field. As a consequence, the required mesh density in a finite element simulation and thus the computational effort increases. In order to circumvent this efficiency problem, exponential shape functions were introduced in the discretization of the phase field variable, see [2]. Compared to the bilinear shape functions these special shape functions allow for a better approximation of the steep transition with fewer elements. Unfortunately, the exponential shape functions are not symmetric, which requires a certain orientation of elements relative to the crack surfaces. This adaptation is not uniquely determined and needs to be set up in the correct way in order to improve the approximation of smooth cracks. The issue is solved in this work by reorienting the exponential shape functions according to the nodal values of the phase field gradient in a particular element. To be precise, this work discusses an adaptive algorithm that implements such a reorientation for 2d and 3d situations.
Machining is very common in industry, e.g. the automotive and aerospace industries. It is a nonlinear dynamic problem involving large deformations, large strains, large strain rates and high temperatures, which poses difficulties for numerical methods such as the finite element method. One way to simulate such problems is the Particle Finite Element Method (PFEM), which combines the advantages of continuum mechanics and discrete modeling techniques. In this work we introduce an improved PFEM called the Adaptive Particle Finite Element Method (A-PFEM). The A-PFEM introduces particles and removes invalid elements during the numerical simulation to improve accuracy and precision, decrease computing time, and resolve the phenomena that take place in machining at multiple scales. At the end of this paper, some examples are presented to show the performance of the A-PFEM.
Adjustment Effects of Maximum Intensity Tolerance During Whole-Body Electromyostimulation Training
(2019)
Intensity regulation during whole-body electromyostimulation (WB-EMS) training is mostly controlled by subjective scales such as the CR-10 Borg scale. To determine objective training intensities derived from a maximum, as is done in conventional strength training using the one-repetition maximum (1-RM), a comparable maximum in WB-EMS is necessary. Therefore, the aim of this study was to examine if there is an individual maximum intensity tolerance plateau after multiple consecutive EMS application sessions. A total of 52 subjects (24.1 ± 3.2 years; 76.8 ± 11.1 kg; 1.77 ± 0.09 m) participated in the longitudinal, observational study (38 males, 14 females). Each participant carried out four consecutive maximal EMS applications (T1–T4) separated by 1 week. All muscle groups were stimulated successively until their individual maximum and combined to a whole-body stimulation index to allow a statement about the development of the maximum intensity tolerance of the whole body. There was a significant main effect between the measurement times for all participants (p < 0.001; ηp² = 0.39) as well as gender-specific for males (p = 0.001; ηp² = 0.18) and females (p < 0.001; ηp² = 0.57). There were no interaction effects of gender × measurement time (p = 0.394). The maximum intensity tolerance increased significantly from T1 to T2 (p = 0.001) and T2 to T3 (p < 0.001). There was no significant difference between T3 and T4 (p = 1.0). These results indicate that there is an adjustment of the individual maximum intensity tolerance to WB-EMS training after three consecutive tests. Therefore, several habituation units are needed, comparable to the identification of the individual 1-RM in conventional strength training.
Further research should focus on an objective intensity-specific regulation of the WB-EMS based on the individual maximum intensity tolerance to characterize different training areas and therefore generate specific adaptations to a WB-EMS training compared to conventional strength training methods.
Adsorption and Diffusion of Cisplatin Molecules in Nanoporous Materials: A Molecular Dynamics Study
(2019)
Using molecular dynamics simulations, the adsorption and diffusion of cisplatin drug molecules in nanopores is investigated for several inorganic materials. Three different materials are studied with widely-varying properties: metallic gold, covalent silicon, and silica. We found a strong influence of both the van der Waals and the electrostatic interaction on the adsorption behavior on the pore walls, which in turn influence the diffusion coefficients. While van der Waals forces generally lead to a reduction of the diffusion coefficient, the fluctuations in the electrostatic energy induced by orientation changes of the cisplatin molecule were found to help desorb the molecule from the wall.
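Diffusion coefficients like those discussed above are typically extracted from MD trajectories via the Einstein relation MSD(t) = 2·d·D·t in d dimensions. A sketch of that post-processing step on synthetic mean-squared-displacement data (the values are illustrative, not the simulation's results):

```python
def diffusion_from_msd(times, msd, dim=3):
    """Fit D from MSD(t) = 2 * dim * D * t via a least-squares slope
    through the origin: D = slope / (2 * dim)."""
    slope = sum(t * m for t, m in zip(times, msd)) / sum(t * t for t in times)
    return slope / (2 * dim)

# Synthetic MSD (nm^2) growing linearly with time (ns), slope 6 * D:
D_TRUE = 0.5                      # assumed diffusivity, nm^2/ns
times = [0.1, 0.2, 0.3, 0.4, 0.5]
msd = [2 * 3 * D_TRUE * t for t in times]
d_est = diffusion_from_msd(times, msd)
```

Adsorption to the pore wall shows up in such data as a reduced slope, i.e. a smaller fitted D, which is the effect reported in the abstract.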
Surface wetting can be described by using phase field models [1]. In these models, often either the contact angle or the surface tensions between the solid and the fluid are prescribed directly on the wall in order to represent the solid-fluid interaction. However, the interaction of the wall and the fluid are not strictly local. The influence of the wall, which can be described by wall potentials [2], reaches out into the fluid, which is the reason for the formation of adsorbate layers. The investigation shows how such a wall potential can be included into a phase field model of wetting. It is found that by considering this energy contribution, the model is able to capture the adsorbate layer.
To achieve the Paris climate protection goals there is an urgent need for action in the energy sector. Innovative concepts in the fields of short-term flexibility, long-term energy storage and energy conversion are required to defossilize all sectors by 2040. Water management is already involved in this field with biogas production and power generation and partly with using flexibility options. However, further steps are possible. Additionally, from a water management perspective, the elimination of organic micropollutants (OMP) is increasingly important. In this feasibility study a concept is presented, reacting to energy surplus and deficits from the energy grid and thus providing the needed long-term storage in combination with the elimination of OMP in municipal wastewater treatment plants (WWTPs). The concept is based on the operation of an electrolyzer, driven by local power production on the plant (photovoltaic (PV), combined heat and power plant (CHP)-units) as well as renewable energy from the grid (to offer system service: automatic frequency restoration reserve (aFRR)), to produce hydrogen and oxygen. Hydrogen is fed into the local gas grid and oxygen used for micropollutant removal via upgrading it to ozone. The feasibility of such a concept was examined for the WWTP in Mainz (Germany). It has been shown that despite partially unfavorable boundary conditions concerning renewable surplus energy in the grid, implementing electrolysis operated with regenerative energy in combination with micropollutant removal using ozonation and activated carbon filter is a reasonable and sustainable option for both, the climate and water protection
Financing measures and incentive schemes for (existing and new) building owners can promote the sustainable settlement development of rural regions or municipalities and, in a wider sense, entire countries or cross-border regions. In order to be used on a broad scale, the concept of revolving funds must be further developed. In this research, the concept of an advanced revolving housing fund (ARF) for building owners to support the sustainable development of rural regions and potential mechanisms are introduced. The ARF is designed to reflect impacts and challenges with regard to rural regions in Germany, Europe and beyond. Based on New Institutional Economics, the Theory of Spatial Organisms, an expert workshop, interviews, discussions, and further literature research, the fundamentals for incentive schemes and the essential mechanisms and design aspects of the ARF are derived. This includes the principal structure and governance of a holding fund and several regional funds. Based on this, input parameters for the financial modelling of an ARF are presented as well as guiding elements for empirical testing to promote more research in this area. It is found that the ARF should have a regional focus and must be a comprehensive instrument of settlement development with additional informal and formal measures. The developed concept promises new impulses, in particular, for rural regions. It is proposed to test the concept by means of case studies in pioneer regions of different countries.
The development of algorithmic differentiation (AD) tools focuses mostly on handling floating point types in the target language. Taping optimizations in these tools mostly target specific operations like matrix-vector products. Aggregated types like std::complex are usually handled by specifying the AD type as a template argument. This approach provides exact results, but prevents the use of expression templates. If AD tools are extended and specialized such that aggregated types can be added to the expression framework, this results in reduced memory utilization and improved run times for applications where aggregated types such as complex-number or matrix-vector operations are used. Such an integration requires a reformulation of the stored data per expression and a rework of the tape evaluation process. We demonstrate the overheads on a synthetic benchmark and show the improvement when aggregated types are handled properly by the expression framework of the AD tool.
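The memory effect described above can be illustrated with a toy reverse-mode tape. This is a minimal sketch, not the API of any real AD tool (names such as `Tape.record` are invented for illustration): it contrasts recording one complex multiplication as six real-valued primitives against recording it as two aggregated nodes that store the full 2x4 Jacobian directly.

```python
class Tape:
    """Minimal reverse-mode tape: one (parents, partials) entry per node."""

    def __init__(self):
        self.nodes = []

    def var(self, value):
        # independent input variable: no parents
        self.nodes.append(((), ()))
        return (len(self.nodes) - 1, value)

    def record(self, value, parents, partials):
        # one tape node holding local partial derivatives w.r.t. its parents
        self.nodes.append((tuple(parents), tuple(partials)))
        return (len(self.nodes) - 1, value)

    def gradient(self, out):
        # reverse sweep: propagate adjoints from the output to the inputs
        adj = [0.0] * len(self.nodes)
        adj[out[0]] = 1.0
        for i in reversed(range(len(self.nodes))):
            parents, partials = self.nodes[i]
            for p, d in zip(parents, partials):
                adj[p] += adj[i] * d
        return adj


def cmul_scalarized(t, ar, ai, br, bi):
    # (ar + i*ai)*(br + i*bi) recorded as six real-valued primitives
    t1 = t.record(ar[1] * br[1], (ar[0], br[0]), (br[1], ar[1]))
    t2 = t.record(ai[1] * bi[1], (ai[0], bi[0]), (bi[1], ai[1]))
    re = t.record(t1[1] - t2[1], (t1[0], t2[0]), (1.0, -1.0))
    t3 = t.record(ar[1] * bi[1], (ar[0], bi[0]), (bi[1], ar[1]))
    t4 = t.record(ai[1] * br[1], (ai[0], br[0]), (br[1], ai[1]))
    im = t.record(t3[1] + t4[1], (t3[0], t4[0]), (1.0, 1.0))
    return re, im


def cmul_aggregated(t, ar, ai, br, bi):
    # the same product recorded as two aggregated nodes (full Jacobian rows)
    re = t.record(ar[1] * br[1] - ai[1] * bi[1],
                  (ar[0], ai[0], br[0], bi[0]),
                  (br[1], -bi[1], ar[1], -ai[1]))
    im = t.record(ar[1] * bi[1] + ai[1] * br[1],
                  (ar[0], ai[0], br[0], bi[0]),
                  (bi[1], br[1], ai[1], ar[1]))
    return re, im
```

Both variants yield identical gradients, but the aggregated recording needs two tape nodes instead of six for the multiplication itself, mirroring the memory reduction the abstract refers to.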
Algorithms increasingly govern people's lives, including through rapidly spreading applications in the public sector. This paper sheds light on acceptance of algorithms used by the public sector emphasizing that algorithms, as parts of socio-technical systems, are always embedded in a specific social context. We show that citizens' acceptance of an algorithm is strongly shaped by how they evaluate aspects of this context, namely the personal importance of the specific problems an algorithm is supposed to help address and their trust in the organizations deploying the algorithm. The objective performance of presented algorithms affects acceptance much less in comparison. These findings are based on an original dataset from a survey covering two real-world applications, predictive policing and skin cancer prediction, with a sample of 2661 respondents from a representative German online panel. The results have important implications for the conditions under which citizens will accept algorithms in the public sector.
A method for efficiently handling associativity and commutativity (AC) in implementations of (equational) theorem provers without incorporating AC as an underlying theory is presented. The key to substantial efficiency gains lies in a more suitable representation of permutation-equations (such as f(x,f(y,z)) = f(y,f(z,x)), for instance). By representing these permutation-equations through permutations in the mathematical sense (i.e., bijective functions π: {1,…,n} → {1,…,n}), and by applying adapted and specialized inference rules, we can cope more appropriately with the fact that permutation-equations play a particular role. Moreover, a number of restrictions concerning the application and generation of permutation-equations can be found that would not be possible to this extent when treating permutation-equations just like any other equation. Thus, further improvements in efficiency can be achieved.
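The representational idea can be sketched as follows. This is an illustrative toy, not the prover's actual data structures: a permutation-equation over n argument positions is stored as a tuple, and chaining such equations becomes permutation composition, so the set of all derivable argument orderings is the group generated by the stored permutations.

```python
def compose(p, q):
    # function composition of permutations: (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)


def closure(generators, n):
    # all argument orderings derivable by chaining permutation-equations,
    # i.e. the permutation group generated by the given tuples
    identity = tuple(range(n))
    group = {identity}
    frontier = [identity]
    while frontier:
        g = frontier.pop()
        for gen in generators:
            h = compose(gen, g)
            if h not in group:
                group.add(h)
                frontier.append(h)
    return group
```

For example, f(x,f(y,z)) = f(y,f(z,x)) flattened over three argument positions corresponds to the cyclic permutation (1, 2, 0); adding a transposition from commutativity generates all six orderings of S3.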
In a widely-studied class of multi-parametric optimization problems, the objective value of each solution is an affine function of real-valued parameters. Then, the goal is to provide an optimal solution set, i.e., a set containing an optimal solution for each non-parametric problem obtained by fixing a parameter vector. For many multi-parametric optimization problems, however, an optimal solution set of minimum cardinality can contain super-polynomially many solutions. Consequently, no polynomial-time exact algorithms can exist for these problems even if P=NP. We propose an approximation method that is applicable to a general class of multi-parametric optimization problems and outputs a set of solutions with cardinality polynomial in the instance size and the inverse of the approximation guarantee. This method lifts approximation algorithms for non-parametric optimization problems to their parametric version and provides an approximation guarantee that is arbitrarily close to the approximation guarantee of the approximation algorithm for the non-parametric problem. If the non-parametric problem can be solved exactly in polynomial time or if an FPTAS is available, our algorithm is an FPTAS. Further, we show that, for any given approximation guarantee, the minimum cardinality of an approximation set is, in general, not ℓ-approximable for any natural number ℓ less than or equal to the number of parameters, and we discuss applications of our results to classical multi-parametric combinatorial optimization problems. In particular, we obtain an FPTAS for the multi-parametric minimum s-t-cut problem, an FPTAS for the multi-parametric knapsack problem, as well as an approximation algorithm for the multi-parametric maximization of independence systems problem.
In a (linear) parametric optimization problem, the objective value of each feasible solution is an affine function of a real-valued parameter and one is interested in computing a solution for each possible value of the parameter. For many important parametric optimization problems including the parametric versions of the shortest path problem, the assignment problem, and the minimum cost flow problem, however, the piecewise linear function mapping the parameter to the optimal objective value of the corresponding non-parametric instance (the optimal value function) can have super-polynomially many breakpoints (points of slope change). This implies that any optimal algorithm for such a problem must output a super-polynomial number of solutions. We provide a method for lifting approximation algorithms for non-parametric optimization problems to their parametric counterparts that is applicable to a general class of parametric optimization problems. The approximation guarantee achieved by this method for a parametric problem is arbitrarily close to the approximation guarantee of the algorithm for the corresponding non-parametric problem. It outputs polynomially many solutions and has polynomial running time if the non-parametric algorithm has polynomial running time. In the case that the non-parametric problem can be solved exactly in polynomial time or that an FPTAS is available, the method yields an FPTAS. In particular, under mild assumptions, we obtain the first parametric FPTAS for each of the specific problems mentioned above and a (3/2 + ε)-approximation algorithm for the parametric metric traveling salesman problem. Moreover, we describe a post-processing procedure that, if the non-parametric problem can be solved exactly in polynomial time, further decreases the number of returned solutions such that the method outputs at most twice as many solutions as needed at minimum for achieving the desired approximation guarantee.
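One core idea behind such lifting methods can be sketched for the parametric shortest path problem with nonnegative edge costs a_e + λ·b_e: solving the non-parametric problem exactly on a geometrically spaced parameter grid yields a (1+ε)-approximation over the whole parameter interval, because a fixed path's cost changes by at most a factor (1+ε) between adjacent grid points. This is an illustrative simplification under stated assumptions (a_e, b_e ≥ 0), not the authors' full algorithm:

```python
import heapq


def dijkstra(adj, n, s, t, lam):
    # exact non-parametric solver for a fixed parameter value lam;
    # adj[u] holds triples (v, a, b) with edge cost a + lam * b
    dist = [float("inf")] * n
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, a, b in adj[u]:
            nd = d + a + lam * b
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist[t]


def parametric_cover(adj, n, s, t, lo, hi, eps):
    # solve exactly at geometrically spaced parameter values; each grid
    # value covers the interval up to the next one within factor (1+eps)
    grid, lam = [], lo
    while lam < hi * (1 + eps):
        grid.append((lam, dijkstra(adj, n, s, t, lam)))
        lam *= 1 + eps
    return grid
```

Since all edge costs are nondecreasing in λ, the optimum at any λ in [λ_k, (1+ε)λ_k] lies between the grid value at λ_k and (1+ε) times that value, so polynomially many exact solves suffice for the whole interval.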
Recently, phase field modeling of fatigue fracture has gained a lot of attention from many researchers, since fatigue damage of structures is a crucial issue in mechanical design. In contrast to traditional phase field fracture models, our approach considers not only the elastic strain energy and the crack surface energy; additionally, we introduce a fatigue energy contribution, caused by cyclic loading, into the regularized energy density function. Compared to other types of fracture phenomena, fatigue damage occurs only after a large number of load cycles, which requires a large computing effort in simulations. Furthermore, the choice of the cycle number increment is usually a compromise between simulation time and accuracy. In this work, we propose an efficient phase field method for cyclic fatigue crack propagation that requires only moderate computational cost without sacrificing accuracy. We divide the entire fatigue fracture simulation into three stages and apply a different cycle number increment in each damage stage. The basic concept of the algorithm is to associate the cycle number increment with the damage increment of each simulation iteration. Numerical examples show that our method can effectively predict the phenomenon of fatigue crack growth and reproduce fracture patterns.
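The principle of associating the cycle number increment with a damage increment can be sketched as a simple cycle-jump scheme. This is a one-dimensional caricature under an assumed damage growth law, not the paper's three-stage finite element algorithm: the jump size ΔN is chosen so that each jump produces roughly the same target damage increment.

```python
def cycle_jump_schedule(damage_rate, d_target, d_fail=1.0):
    # adaptive cycle-number increments: pick dN so that the expected
    # damage increment per jump stays near d_target
    n, d, history = 0, 0.0, []
    while d < d_fail:
        rate = damage_rate(d)               # dD/dN from a unit-cycle evaluation
        dn = max(1, int(d_target / rate))   # large jumps where damage grows slowly
        d += rate * dn
        n += dn
        history.append((n, min(d, d_fail)))
    return history
```

With a damage rate that accelerates as damage accumulates, the scheme takes large jumps early and automatically refines near failure, so the number of simulation evaluations stays far below the number of load cycles covered.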
A novel method for the highly stereoselective synthesis of tetrahydropyrans is reported. This domino reaction is based on a twofold addition of enamides to aldehydes followed by a subsequent cyclization and furnishes fully substituted tetrahydropyrans in high yields. Three new σ-bonds and five continuous stereogenic centers are formed in this one-pot process with a remarkable degree of diastereoselectivity. In most cases, the formation of only one out of 16 possible diastereomers is observed. Two different stereoisomers can be accessed in a controlled fashion starting either from an E- or a Z-configured enamide.
We present an approach to systematically describing case-based reasoning systems by different kinds of criteria. One main requirement was the practical relevance of these criteria and their usability for real-life applications. We report on the results we achieved from a case study carried out in the INRECA Esprit project.
A characterisation of the spaces \({\mathcal {G}}_K\) and \({\mathcal {G}}_K'\) introduced in Grothaus et al. (Methods Funct Anal Topol 3(2):46–64, 1997) and Potthoff and Timpel (Potential Anal 4(6):637–654, 1995) is given. A first characterisation of these spaces, provided in Grothaus et al. (Methods Funct Anal Topol 3(2):46–64, 1997), uses the concepts of holomorphy on infinite dimensional spaces. We instead give a characterisation in terms of U-functionals, i.e., classical holomorphic functions on the field of complex numbers. We apply our new characterisation to derive new results concerning a stochastic transport equation and the stochastic heat equation with multiplicative noise.
This article is dedicated to the weight set decomposition of a multiobjective (mixed-)integer linear problem with three objectives. We propose an algorithm that returns a decomposition of the parameter set of the weighted sum scalarization by solving biobjective subproblems via Dichotomic Search, which corresponds to a line exploration in the weight set. Additionally, we present theoretical results regarding the boundary of the weight set components that direct the line exploration. The resulting algorithm runs in output polynomial time, i.e., its running time is polynomial in the encoding length of both the input and output. Moreover, the proposed approach can be used for each weight set component individually and is able to give intermediate results, which can be seen as an “approximation” of the weight set component. We compare the running time of our method with that of an existing algorithm and conduct a computational study that shows the competitiveness of our algorithm. Further, we give a state-of-the-art survey of algorithms in the literature.
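The Dichotomic Search subroutine mentioned above can be sketched in its classical biobjective form. The sketch below works over an explicitly listed finite set of outcome vectors (not a MILP solver, and not the weight set decomposition itself): each exact weighted-sum solve either confirms that a segment between two known points is supported or discovers a new supported point and splits the search.

```python
def dichotomic_search(points):
    # supported nondominated points of a biobjective minimization problem,
    # here over an explicit finite feasible set (illustrative only)
    def lexmin(w1, w2):
        # exact weighted-sum oracle with a deterministic tie-break
        return min(points, key=lambda p: (w1 * p[0] + w2 * p[1], p))

    first = lexmin(1.0, 1e-6)   # roughly lexicographic in objective 1
    last = lexmin(1e-6, 1.0)    # roughly lexicographic in objective 2
    supported = {first, last}
    stack = [(first, last)]
    while stack:
        a, b = stack.pop()
        # weight vector normal to the segment between a and b
        w1, w2 = a[1] - b[1], b[0] - a[0]
        c = lexmin(w1, w2)
        if w1 * c[0] + w2 * c[1] < w1 * a[0] + w2 * a[1] - 1e-9:
            # c lies strictly below the segment: new supported point found
            supported.add(c)
            stack.append((a, c))
            stack.append((c, b))
    return sorted(supported)
```

Each weight vector used here corresponds to one point on the explored line in the weight set; the recursion terminates once no weighted-sum solve improves on the known segments.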
The development of complex software systems is driven by many diverse and sometimes contradictory requirements such as correctness and maintainability of resulting products, development costs, and time-to-market. To alleviate these difficulties, we propose a development method for distributed systems that integrates different basic approaches. First, it combines the use of the formal description technique SDL with software reuse concepts. This results in the definition of a use-case driven, incremental development method with SDL-patterns as the main reusable artifacts. Experience with this approach has shown that there are several other factors of influence, such as the quality of reuse artifacts or the experience of the development team. Therefore, we further combined our SDL-pattern approach with an improvement methodology known from the area of experimental software engineering. In order to demonstrate the validity of this integrating approach, we sketch some representative outcomes of a case study.
Several activities around the world aim at integrating object-oriented data models with relational ones in order to improve database management systems. As a first result of these activities, object-relational database management systems (ORDBMS) are already commercially available and, simultaneously, are subject to several research projects. This (position) paper reports on our activities in exploiting object-relational database technology for establishing repository manager functionality supporting software engineering (SE) processes. We argue that some of the key features of ORDBMS can directly be exploited to fulfill many of the needs of SE processes. Thus, ORDBMS, as we think, are much better suited to support SE applications than any others. Nevertheless, additional functionality, e.g., providing adequate version management, is required in order to gain a completely satisfying SE repository. In order to remain flexible, we have developed a generative approach for providing this additional functionality. It remains to be seen whether this approach, in turn, can effectively exploit ORDBMS features. This paper therefore aims to show that ORDBMS can substantially contribute to both establishing and running SE repositories.
Comprehensive reuse and systematic evolution of reuse artifacts as proposed by the Quality Improvement Paradigm (QIP) do not only require tool support for mere storage and retrieval. Rather, an integrated management of (potentially reusable) experience data as well as project-related data is needed. This paper presents an approach exploiting object-relational database technology to implement QIP-driven reuse repositories. Requirements, concepts, and implementational aspects are discussed and illustrated through a running example, namely the reuse and continuous improvement of SDL patterns for developing distributed systems. Our system is designed to support all phases of a reuse process and the accompanying improvement cycle by providing adequate functionality. Its implementation is based on object-relational database technology along with an infrastructure well suited for these purposes.
Many mathematical proofs are hard to generate for humans and even harder for automated theorem provers. Classical techniques of automated theorem proving involve the application of basic rules, of built-in special procedures, or of tactics. Melis (Melis 1993) introduced a new method for analogical reasoning in automated theorem proving. In this paper we show how the derivational analogy replay method is related and extended to encompass analogy-driven proof plan construction. The method is evaluated by showing the proof plan generation of the Pumping Lemma for context-free languages derived by analogy with the proof plan of the Pumping Lemma for regular languages. This is an impressive evaluation test for the analogical reasoning method applied to automated theorem proving, as the automated proof of this Pumping Lemma is beyond the capabilities of any of the current automated theorem provers.
Phase-gradient metasurfaces can be designed to manipulate electromagnetic waves according to the generalized Snell’s law. Here, we show that a phased parallel-plate waveguide array (PPWA) can be devised to act in the same manner as a phase-gradient metasurface. We derive an analytic model that describes the wave propagation in the PPWA and calculate both the angle and amplitude distribution of the diffracted waves. The analytic model provides an intuitive understanding of the diffraction from the PPWA. We verify the (semi-)analytically calculated angle and amplitude distribution of the diffracted waves by numerical 3-D simulations and experimental measurements in a microwave goniometer.
Radar cross section reducing (RCSR) metasurfaces or coding metasurfaces were primarily designed for normally incident radiation in the past. It is evident that the performance of coding metasurfaces for RCSR can be significantly improved by additional backscattering reduction of obliquely incident radiation, which requires a valid analytic conception tool. Here, we derive an analytic current density distribution model for the calculation of the backscatter far-field of obliquely incident radiation on a coding metasurface for RCSR. For demonstration, we devise and fabricate a metasurface for a working frequency of 10.66 GHz and obtain good agreement between the measured, simulated, and analytically calculated backscatter far-fields. The metasurface significantly reduces backscattering for incidence angles between −40° and 40° in a spectral working range of approximately 1 GHz.
Analysis of dimensional accuracy for micro-milled areal material measures with kinematic simulation
(2021)
The calibration of areal surface topography measuring instruments is of high relevance to estimate the measurement uncertainty and to guarantee the traceability of the measurement results. Calibration structures for optical measuring instruments must be sufficiently small to determine the limits of the instruments.
Besides other methods, micro-milling is a suitable process for manufacturing areal material measures. For the manufacturing by micro-milling with ball end mills, the tool radius (effective cutter radius) is the corresponding limiting factor: if the tool radius is too large to penetrate the concave profile details without removing the surrounding material, deviations from the target geometry will occur. These deviations can be detected and excluded before experimental manufacturing with the aid of a kinematic simulation.
In this study, a kinematic simulation model for the prediction of the dimensional accuracy of micro-milled areal material measures is developed and validated. Subsequently, a radius study is conducted to determine how the tool radius r influences the dimensional accuracy of an areal crossed sinusoidal (ACS) geometry according to ISO 25178-70 [1] with a defined amplitude d and period length p. The resulting theoretical surface texture parameters are evaluated and compared to the target values. It was shown that the surface texture parameters deviate from the nominal values depending on the effective cutter radius used. Based on the results of the study, it can be determined with which effective tool radius the measurands Sa and Sq of the material measures are best met. The ideal effective radius for the application considered is between 50 and 75 μm.
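The geometric condition behind these deviations can be stated for a one-dimensional sinusoidal profile z = (d/2)·sin(2πx/p): a ball-end cutter can reproduce the concave valleys only if its radius does not exceed the smallest concave radius of curvature, r_min = p²/(2π²·d). The check below is a 1-D simplification of the crossed sinusoidal geometry with made-up example values, not the kinematic simulation of the study:

```python
import math


def min_concave_radius(d, p):
    # smallest radius of curvature in the concave valleys of the profile
    # z = (d/2) * sin(2*pi*x/p): r_min = p**2 / (2 * pi**2 * d)
    return p ** 2 / (2 * math.pi ** 2 * d)


def tool_fits(r_tool, d, p):
    # a ball-end cutter reproduces the valley only if its radius does not
    # exceed the smallest concave radius of curvature of the target profile
    return r_tool <= min_concave_radius(d, p)
```

For example, an amplitude of 10 μm and a period length of 100 μm give r_min of roughly 50.7 μm, so a 50 μm cutter would fit while a 75 μm cutter would leave deviations in the valleys (values chosen purely for illustration).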
The locally occurring mechanisms of hydrogen embrittlement significantly influence the fatigue behavior of a material, which was shown in previous research on two different AISI 300-series austenitic stainless steels with different austenite stabilities. In this preliminary work, enhanced fatigue crack growth as well as changes in crack initiation sites and morphology caused by hydrogen were observed. To further analyze the results obtained in this previous research, in the present work the local cyclic deformation behavior of the material volume was analyzed by using cyclic indentation testing. Moreover, these results were correlated to the local dislocation structures obtained with transmission electron microscopy (TEM) in the vicinity of fatigue cracks. The cyclic indentation tests show a decreased cyclic hardening potential as well as an increased dislocation mobility for the conditions precharged with hydrogen, which correlates with the TEM analysis, revealing coarser dislocation cells in the vicinity of the fatigue crack tip. Consequently, the presented results indicate that the hydrogen enhanced localized plasticity (HELP) mechanism leads to accelerated crack growth and changes in crack morphology for the materials investigated. In summary, the cyclic indentation tests show a high potential for analyzing the effects of hydrogen on the local cyclic deformation behavior.
Machining-induced residual stresses (MIRS) are a main driver for distortion of thin-walled monolithic aluminum workpieces. Before one can develop compensation techniques to minimize distortion, the effect of machining on the MIRS has to be fully understood. This means that not only an investigation of the effect of different process parameters on the MIRS is important. In addition, the repeatability of the MIRS resulting from the same machining condition has to be considered. In past research, statistical confidence of MIRS of machined samples was not focused on. In this paper, the repeatability of the MIRS for different machining modes, consisting of a variation in feed per tooth and cutting speed, is investigated. Multiple hole-drilling measurements within one sample and on different samples, machined with the same parameter set, were part of the investigations. Furthermore, the effect of two different clamping strategies on the MIRS was investigated. The results show that an overall repeatability for MIRS is given for stable machining (between 16 and 34% repeatability standard deviation of maximum normal MIRS), whereas unstable machining, detected by vibrations in the force signal, has worse repeatability (54%) independent of the clamping strategy used. Further experiments, where a 1-mm-thick wafer was removed at the milled surface, show the connection between MIRS and their distortion. A numerical stress analysis reveals that the measured stress data is consistent with machining-induced distortion across and within different machining modes. It was found that more and/or deeper MIRS cause more distortion.
Laser-based powder bed fusion (L-PBF) is a promising technology for the production of near-net-shaped metallic components. The high surface roughness and the comparatively low dimensional accuracy of such components, however, usually require finishing by a subtractive process such as milling or grinding in order to meet the requirements of the application. Materials manufactured via L-PBF are characterized by a unique microstructure and anisotropic material properties. These specific properties could also affect the subtractive processes themselves. In this paper, the effect of L-PBF on the machinability of the aluminum alloy AlSi10Mg is explored when milling. The chips, the process forces, the surface morphology, the microhardness, and the burr formation are analyzed in dependence on the manufacturing parameter settings used for L-PBF and the direction of feed motion of the end mill relative to the build-up direction of the parts. The results are compared with a conventionally cast AlSi10Mg. The analysis shows that L-PBF influences the machinability. Differences between the reference and the L-PBF AlSi10Mg were observed in the chip form, the process forces, the surface morphology, and the burr formation. The initial manufacturing method of the part thus needs to be considered during the design of the finishing process to achieve suitable results.
Thermal comfort is one of the most important factors for occupant satisfaction and, as a result, for the building energy performance. Decentralized heating and cooling systems, also known as “Personal Environmental Comfort Systems” (PECS), have attracted significant interest in research and industry in recent years. While building simulation software is used in practice to improve the energy performance of buildings, most building simulation applications use the PMV approach for comfort calculations. This article presents a newly developed building controller that uses a holistic approach in the consideration of PECS within the framework of the building simulation software Esp-r. With PhySCo, a dynamic physiology, sensation, and comfort model, the presented building controller can adjust the setpoint temperatures of the central HVAC system as well as control the use of PECS based on the thermal sensation and comfort values of a virtual human. An adaptive building controller with a wide dead-band and adaptive setpoints between 18 to 26 °C (30 °C) was compared to a basic controller with a fixed and narrow setpoint range between 21 to 24 °C. The simulations were conducted for a temperate western European climate (Mannheim, Germany), classified as Cfb climate according to Köppen-Geiger. With the adaptive controller, a 12.5% reduction in end-use energy was achieved in winter. For summer conditions, a variation between the adaptive controller, an office chair with a cooling function, and a fan increased the upper setpoint temperature to 30 °C while still maintaining comfortable conditions and reducing the end-use energy by 15.3%. In spring, the same variation led to a 9.3% reduction in end-use energy. Further combinations of systems were studied with the newly presented controller.
Finishing processes result in changes of near-surface morphology, which strongly influences the fatigue behavior of components. Especially, roller bearings show a high dependency of the lifetime on surface roughness and the residual stress state in the subsurface volume. To analyze the influence of different finishing processes on the near-surface morphology, including the residual stress state, roller bearing rings made of AISI 52100 are finished in this work using hard turning, rough grinding, and fine grinding. In addition, fatigue specimens made of AISI 52100 and finished by cryogenic hard turning are investigated. For each condition, the residual stresses are determined at different distances from the surface, showing pronounced compressive stresses for all conditions. While the ground roller bearing rings show highest compressive residual stresses at the surface, the hard turned bearing ring and the cryogenic hard turned fatigue specimens reveal maximum compressive stresses in the subsurface volume. Moreover, cyclic indentation tests (CITs) are conducted in the different subsurface volumes, showing a higher cyclic plasticity in relation to the respective initial state, which is assumed to be caused by finishing-induced compressive residual stresses. Thus, the presented results indicate a high potential of CITs to efficiently characterize the residual stress state.
The 22 wt.% Cr, fully ferritic stainless steel Crofer®22 H has a higher thermomechanical fatigue (TMF) lifetime compared to the advanced ferritic-martensitic P91, which is assumed to be caused by different damage tolerance, leading to differences in crack propagation and failure mechanisms. To analyze this, instrumented cyclic indentation tests (CITs) were used, because the material's cyclic hardening potential, which strongly correlates with damage tolerance, can be determined by analyzing the deformation behavior in CITs. In the presented work, CITs were performed for both materials on specimens loaded for different numbers of TMF cycles. These investigations show higher damage tolerance for Crofer®22 H and demonstrate changes in damage tolerance during TMF loading for both materials, which correlates with the cyclic deformation behavior observed in TMF tests. Furthermore, the results obtained for Crofer®22 H indicate an increase of damage tolerance in the second half of TMF lifetime, which cannot be observed for P91. Moreover, CITs were performed on Crofer®22 H in the vicinity of a fatigue crack, enabling a local analysis of the damage tolerance. These CITs show differences between the crack edges and the crack tip. Conclusively, the presented results demonstrate that CITs can be utilized to analyze TMF-induced changes in damage tolerance.
Spin Hamiltonian parameters of a pentanuclear Os–Ni cyanometallate complex are derived from ab initio wave-function-based calculations, namely valence-type configuration interaction calculations with a complete active space including spin-orbit interaction (CASOCI) in a single-step procedure. While fits of experimental data performed so far could reproduce the data, the resulting parameters were not satisfactory; the parameters derived in the present work reproduce the experimental data and at the same time have a reasonable size. The one-centre parameters (local matrices and single-ion zero-field splitting tensors) are within the expected range; the anisotropic exchange parameters obtained in this work for an Os−Ni pair are not exceedingly large but determine the low-T part of the experimental χT curve. Exchange interactions (both isotropic and anisotropic) obtained from CASOCI have to be scaled by a factor of 2.5 to obtain agreement with experiment, a known deficiency of this type of calculation. After scaling the parameters, the isotropic Os−Ni exchange coupling constant is cm−1 and the D parameter of the (nearly axial) anisotropic Os−Ni exchange is −1, so the anisotropic exchange is larger in absolute size than the isotropic exchange. The negative value of the isotropic J (indicating antiferromagnetic coupling) seemingly contradicts the high-temperature behaviour of the temperature-dependent susceptibility curve, but this is caused by the negative g value of the Os centres. This negative g value is a universal feature of a pseudo-octahedral coordination with configuration and strong spin-orbit interaction. Knowing the size of these exchange interactions is important because Os(CN) is a versatile building block for the synthesis of / magnetic materials.
Static magnetic and spin wave properties of square lattices of permalloy micron dots with thicknesses of 500 Å and 1000 Å and with varying dot separations have been investigated. A magnetic fourfold anisotropy was found for the lattice with dot diameters of 1 micrometer and a dot separation of 0.1 micrometer. The anisotropy is attributed to an anisotropic dipole-dipole interaction between magnetically unsaturated parts of the dots. The anisotropy strength (order of 100000 erg/cm^3 ) decreases with increasing in-plane applied magnetic field.
Biological soil crusts (biocrusts) are a common element of the Queensland (Australia) dry savannah ecosystem and are composed of cyanobacteria, algae, lichens, bryophytes, fungi and heterotrophic bacteria. Here we report how the CO2 gas exchange of the cyanobacteria-dominated biocrust type from Boodjamulla National Park in the north Queensland Gulf Savannah responds to the pronounced climatic seasonality and on their quality as a carbon sink using a semi-automatic cuvette system. The dominant cyanobacteria are the filamentous species Symplocastrum purpurascens together with Scytonema sp. Metabolic activity was recorded between 1 July 2010 and 30 June 2011, during which CO2 exchange was only evident from November 2010 until mid-April 2011, representative of 23.6 % of the 1-year recording period. The first month (November, at the onset of the wet season) and the last month (April) of activity had pronounced respiratory loss of CO2. The metabolically active period accounted for 25 % of the wet season, and of that period 48.6 % was net photosynthesis (NP) and 51.4 % dark respiration (DR). During the time of NP, net photosynthetic uptake of CO2 during daylight hours was reduced by 32.6 % due to water supersaturation. In total, the biocrust fixed 229.09 mmol CO2 m−2 yr−1, corresponding to an annual carbon gain of 2.75 g m−2 yr−1. Due to malfunction of the automatic cuvette system, data from September and October 2010 together with some days in November and December 2010 could not be analysed for NP and DR. Based on climatic and gas exchange data from November 2010, an estimated loss of 88 mmol CO2 m−2 was found for the 2 months, resulting in corrected annual rates of 143.1 mmol CO2 m−2 yr−1, equivalent to a carbon gain of 1.7 g m−2 yr−1.
The bulk of the net photosynthetic activity occurred above a relative humidity of 42 %, indicating a suitable climatic combination of temperature, water availability and light intensity well above 200 µmol photons m−2 s−1 photosynthetically active radiation. The Boodjamulla biocrust exhibited high seasonal variability in its CO2 gas exchange pattern, clearly divided into metabolically inactive winter months and active summer months. The metabolically active period commences with a period of carbon loss (of up to 3 months), likely due to re-establishment of the crust structure and restoration of NP, prior to an approximately 4-month period of net carbon gain. In the Gulf Savannah biocrust system, the seasonality over the year investigated shows that only a minority of the year is actually suitable for biocrust growth, leaving only a small window for a potential contribution to soil organic matter.
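As a quick consistency check on the carbon budget above, the reported annual CO2 fluxes can be converted to carbon gain using the molar mass of carbon (12.011 g/mol; one carbon atom per CO2 molecule). A minimal Python sketch (the function name is ours; the flux values are taken from the abstract):

```python
# Unit check for the annual carbon budget reported above.
MOLAR_MASS_C = 12.011  # g per mol of carbon (one C atom per CO2 molecule)

def carbon_gain_g(co2_mmol_per_m2_yr):
    """Convert an annual CO2 flux (mmol CO2 m^-2 yr^-1) to carbon gain (g C m^-2 yr^-1)."""
    return co2_mmol_per_m2_yr * 1e-3 * MOLAR_MASS_C

print(round(carbon_gain_g(229.09), 2))  # uncorrected budget -> 2.75
print(round(carbon_gain_g(143.1), 2))   # corrected budget   -> 1.72
```

The corrected value of 1.72 g C m−2 yr−1 agrees with the 1.7 g m−2 yr−1 quoted in the abstract to the stated precision.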
Designing exotic structures in low dimensions is key in today’s quest to tailor novel quantum states in materials with unique symmetries. Particularly intriguing materials in this regard are low-dimensional aperiodic structures with non-conventional symmetries that are otherwise forbidden in translation-symmetric crystals. In our work, we focus on the link between the structural and electronic properties of aperiodically ordered aromatic molecules on a quasicrystalline surface, which has largely been neglected so far. As an exemplary case, we investigate the self-assembly and the interfacial electronic properties of the nano-graphene-like molecule coronene on the bulk-truncated icosahedral (i) Al–Pd–Mn quasicrystalline surface using multiple surface-sensitive techniques. We find an aperiodically ordered coronene monolayer (ML) film on the i-Al–Pd–Mn surface that is characterized by the same local motifs of the P1 Penrose tiling model as the bare i-Al–Pd–Mn surface. The electronic valence band structure of the coronene/i-Al–Pd–Mn system is characterized by the pseudogap of the bare i-Al–Pd–Mn, which persists upon adsorption of coronene, confirming the quasiperiodic nature of the interface. In addition, we find a newly formed interface state of partial molecular character that suggests an at least partial chemical interaction between the molecule and the quasicrystalline surface. We propose that this partial chemical molecule–surface interaction is responsible for imprinting the quasicrystalline order of the surface onto the molecular film.
The design of the fifth generation (5G) cellular network should take into account the emerging services with divergent quality-of-service requirements. For instance, vehicle-to-everything (V2X) communication is required to facilitate local data exchange and thereby improve the automation level in automated driving applications. In this work, we inspect the performance of two different air interfaces (i.e., LTE-Uu and PC5) proposed by the third generation partnership project (3GPP) to enable V2X communication. With these two air interfaces, V2X communication can be realized by transmitting data packets either over the network infrastructure or directly among traffic participants. However, the ultra-high reliability requirement of some V2X communication scenarios cannot be fulfilled by any single transmission technology (i.e., either LTE-Uu or PC5). Therefore, we discuss how to efficiently apply multiple radio access technologies (multi-RAT) to improve the communication reliability. In order to exploit multi-RAT in an efficient manner, both independent and coordinated transmission schemes are designed and inspected. Subsequently, the conventional uplink is extended to the case where a base station can receive data packets through both the LTE-Uu and PC5 interfaces. Moreover, different multicast-broadcast single-frequency network (MBSFN) area mapping approaches are proposed to improve the communication reliability in the LTE downlink. Last but not least, a system-level simulator is implemented in this work. The simulation results not only provide insights into the performance of the different technologies but also validate the effectiveness of the proposed multi-RAT schemes.
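The reliability benefit of combining two air interfaces can be illustrated with a back-of-the-envelope calculation. If the two links are assumed to fail independently (a simplifying assumption; the coordinated scheme discussed above is more elaborate than this), a packet duplicated over LTE-Uu and PC5 is lost only when both copies are lost. A hypothetical sketch:

```python
# Hypothetical sketch: packet-delivery reliability when the same packet is sent
# independently over two radio access technologies (e.g. LTE-Uu and PC5).
# Under the independence assumption, the packet is lost only if BOTH links fail.
def combined_reliability(p_uu, p_pc5):
    """p_uu, p_pc5: per-link packet-delivery probabilities in [0, 1]."""
    return 1.0 - (1.0 - p_uu) * (1.0 - p_pc5)

# Two individually insufficient links come much closer to an ultra-reliability target:
print(round(combined_reliability(0.99, 0.99), 4))  # -> 0.9999
```

This is why duplication across RATs helps: per-link loss probabilities multiply, so two 1 %-loss links yield roughly a 0.01 % combined loss, at the cost of doubled radio resource usage.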
For transferring existing knowledge into new projects, reuse has become an important factor in today's software industry. However, to put reuse into practice, reusable artifacts have to be stored somewhere and must be offered to (re-)users on demand. For this purpose, advanced reuse repository systems, like, for instance, instantiations of the Experience Base concept, are quite frequently used. Many people, from different projects, have to access such a repository at various phases of software development processes to retrieve or store reusable data. In order to fulfill the given tasks, each of these users has specific needs. Taking this into account, a reuse repository has to offer tailored user interfaces and functions for different user groups. Furthermore, since the contents of such a repository usually represent the state of the art of an organization's (core) competencies, not everyone should be allowed to freely access each and every repository entry. This is especially true for persons who are not part of the organization. This report discusses role concepts that can be applied to reuse repository systems to overcome some of the stated access problems. Commonly used roles for software development and reuse repository management are listed. Based on these roles, a basic set of roles, as implemented in the SFB 501 Experience Base, is introduced.
Microbial planktonic communities are the basis of food webs in aquatic ecosystems, since they contribute substantially to primary production and nutrient recycling. Network analyses of DNA metabarcoding data sets have emerged as a powerful tool to untangle the complex ecological relationships among the key players in food webs. In this study, we evaluated co-occurrence networks constructed from time-series metabarcoding data sets (12 months, biweekly sampling) of protistan plankton communities in surface layers (epilimnion) and bottom waters (hypolimnion) of two temperate deep lakes, Lake Mondsee (Austria) and Lake Zurich (Switzerland). Lake Zurich plankton communities were less tightly connected, more fragmented and more susceptible to a species-extinction scenario than Lake Mondsee communities. We interpret these results as a lower robustness of Lake Zurich protistan plankton to environmental stressors, especially stressors resulting from climate change. In all networks, the phylum Ciliophora contributed the highest number of nodes, among them several in key positions of the networks. Associations in ciliate-specific subnetworks resembled autecological species-specific traits that indicate adaptations to specific environmental conditions. We demonstrate the strength of co-occurrence network analyses to deepen our understanding of plankton community dynamics in lakes and to reveal biotic relationships, resulting in new hypotheses that may guide future research in climate-stressed ecosystems.
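The species-extinction scenario mentioned above can be mimicked by removing nodes from a co-occurrence graph and tracking how the largest connected component shrinks; a network whose connectivity collapses after few removals is considered less robust. The following is an illustrative sketch, not the study's actual pipeline; the toy species names and the helper function are ours:

```python
# Illustrative robustness check: remove nodes from a co-occurrence network and
# measure the size of the largest connected component that remains.
from collections import defaultdict

def largest_component(edges, removed=frozenset()):
    """Size of the largest connected component after removing a set of nodes."""
    adj = defaultdict(set)
    nodes = set()
    for u, v in edges:
        if u in removed or v in removed:
            continue  # drop every edge touching an "extinct" species
        adj[u].add(v)
        adj[v].add(u)
        nodes.update((u, v))
    seen, best = set(), 0
    for start in nodes:          # depth-first search per unvisited component
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            n = stack.pop()
            size += 1
            stack.extend(m for m in adj[n] if m not in seen)
            seen.update(adj[n])
        best = max(best, size)
    return best

# Toy co-occurrence network: removing the hub taxon fragments it.
edges = [("ciliate_A", x) for x in ("alga_B", "alga_C", "flagellate_D")]
edges += [("alga_B", "alga_C")]
print(largest_component(edges))                         # -> 4
print(largest_component(edges, removed={"ciliate_A"}))  # -> 2
```

In the toy example the hub removal drops the largest component from all 4 taxa to 2, the kind of fragmentation signal the study compares between the two lakes.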
Purpose
We investigated the cytosolic and membrane-associated contents of polyphenols after 4 hours of incubation (50 μM of each polyphenol) in the colon carcinoma cell line T84 using a novel, rapid, and convenient method based on permeabilization of the cell membrane using digitonin. The colon carcinoma cell line was used to investigate the intestinal uptake of polyphenols present in apple products.
Recent Findings
The results showed that hydroxycinnamic acids (caffeic and 5-caffeoylquinic acid) were only detected in the cytosolic fractions. In contrast, 0.3–8.2% of the initial concentrations (50 μM) of the flavonoids phloretin, quercetin, phloretin 2′-O-glucoside, and quercetin 3-O-rhamnoside were found in the membrane-associated fractions. In the cytosolic fractions, 0.2–2.9% of these compounds were detected, corresponding to 25–40% of the total cell-associated (cytosolic plus membrane-associated fractions) polyphenol content.
Summary
Our results showed that after uptake, polyphenols were present in the cytosolic fraction of the cells as well as associated with the cell membrane. The presented method provides a useful in vitro tool for determining biologically active compounds in cellular fractions.
Static and dynamic properties of patterned magnetic permalloy films are investigated. In square lattices of circular permalloy dots an anisotropic coupling mechanism has been found, which is identified as being due to intrinsically unsaturated parts of the dots caused by spatial variations of the demagnetizing field. In arrays of magnetic wires a quantization of the surface spin wave mode into several dispersionless modes is observed and quantitatively described. For large wavevectors the frequency separation between the modes becomes smaller and the frequencies converge to the dispersion of the dipole-exchange surface mode of a continuous film.
This pilot study aimed to investigate the use of sensorimotor insoles for pain reduction across different orthopedic indications, and the effect of wearing duration on the development of pain. Three hundred and forty patients were asked about their pain perception using a visual analog scale (VAS) in a pre–post analysis. Three main intervention durations were defined for VAS_post: up to 3 months, 3 to 6 months, and more than 6 months. The results show significant differences for the within-subject factor “time of measurement”, as well as for the between-subject factors indication (p < 0.001) and wearing duration (p < 0.001). No interaction was found between indication and time of measurement (model A) or between wearing duration and time of measurement (model B). The results of this pilot study must be interpreted cautiously and critically, but they support the hypothesis that sensorimotor insoles could be a helpful tool for subjective pain reduction. Methodological weaknesses, in particular the missing control group and the lack of control for confounding variables such as natural healing processes and complementary therapies, must be taken into account. Based on these experiences and findings, an RCT and a systematic review will follow.
Heterocystous cyanobacteria of the genus Nodularia form major blooms in brackish waters, while terrestrial Nostoc species occur worldwide, often associated in biological soil crusts. Both genera, by virtue of their ability to fix N2 and conduct oxygenic photosynthesis, contribute significantly to global primary productivity. Select Nostoc and Nodularia species produce the hepatotoxin nodularin, and whether its production will change under climate change conditions needs to be assessed. In light of this, the effects of elevated atmospheric CO2 availability on growth, carbon and N2 fixation, as well as nodularin production, were investigated in toxin- and non-toxin-producing species of both genera. The results highlighted the following:
Biomass- and volume-specific biological nitrogen fixation (BNF) rates were almost six- and 17-fold higher, respectively, in the aquatic Nodularia species compared to the terrestrial Nostoc species tested under elevated CO2 conditions.
There was a direct correlation between elevated CO2 and decreased dry weight specific cellular nodularin content in a diazotrophically grown terrestrial Nostoc species, and the aquatic Nodularia species, regardless of nitrogen availability.
Elevated atmospheric CO2 levels were correlated to a reduction in biomass specific BNF rates in non-toxic Nodularia species.
Nodularin producers exhibited stronger stimulation of net photosynthesis rates (NP) and growth (more positive Cohen’s d) and less stimulation of dark respiration and BNF per volume compared to non-nodularin producers under elevated CO2 levels.
This study is the first to provide information on NP and nodularin production under elevated atmospheric CO2 levels for Nodularia and Nostoc species under nitrogen replete and diazotrophic conditions.