Laser-based powder bed fusion (L-PBF) is a promising technology for the production of near-net-shaped metallic components. The high surface roughness and the comparatively low dimensional accuracy of such components, however, usually require finishing by a subtractive process such as milling or grinding in order to meet the requirements of the application. Materials manufactured via L-PBF are characterized by a unique microstructure and anisotropic material properties. These specific properties could also affect the subtractive processes themselves. In this paper, the effect of L-PBF on the machinability of the aluminum alloy AlSi10Mg during milling is explored. The chips, the process forces, the surface morphology, the microhardness, and the burr formation are analyzed as a function of the manufacturing parameter settings used for L-PBF and of the direction of feed motion of the end mill relative to the build-up direction of the parts. The results are compared with those for conventionally cast AlSi10Mg. The analysis shows that L-PBF influences machinability: differences between the cast reference and the L-PBF AlSi10Mg were observed in the chip form, the process forces, the surface morphology, and the burr formation. The initial manufacturing method of the part thus needs to be considered during the design of the finishing process to achieve suitable results.
Fucoidans are multifunctional marine macromolecules that are subjected to numerous and varied downstream processes during their production. These processes are considered the most important abiotic factors affecting the chemical skeletons, quality, physicochemical properties, biological properties and industrial applications of fucoidans. Since a universal protocol for fucoidan production has not been established yet, all currently used processes are presented and justified. The current article complements our previous articles in the fucoidan field, provides an updated overview of the different downstream processes, including pre-treatment, extraction, purification and enzymatic modification, and presents recent non-traditional applications of fucoidans in relation to their characteristics.
Background: The positive effect of carbohydrates from commercial beverages on soccer-specific exercise has been clearly demonstrated. However, no study is available that uses a home-mixed beverage in a test in which technical skills are required. Methods: Nine subjects participated voluntarily in this double-blind, randomized, placebo-controlled crossover study. On three testing days, the subjects performed six Hoff tests with a 3-min active break as a preload and then the Yo-Yo Intermittent Running Test Level 1 (Yo-Yo IR1) until exhaustion. On test days 2 and 3, the subjects received either a drink containing 69 g of carbohydrates (syrup–water mixture) or a carbohydrate-free drink (aromatic water). Beverages were given in several doses of 250 mL each: 30 min before and immediately before the exercise and after 18 and 39 min of exercise. The primary target parameters were the running performance in the Hoff test and Yo-Yo IR1, body mass and heart rate. Statistical differences between the variables of both conditions were analyzed using paired-samples t-tests. Results: The maximum heart rate in Yo-Yo IR1 showed significant differences (syrup: 191.1 ± 6.2 bpm; placebo: 188.0 ± 6.89 bpm; t(6) = −2.556; p = 0.043; dz = 0.97). The running performance in Yo-Yo IR1 under the syrup condition significantly increased by 93.33 ± 84.85 m (0–240 m) on average (p = 0.011). Conclusions: The intake of a syrup–water mixture with a total of 69 g of carbohydrates leads to an increase in high-intensity running performance after soccer-specific loads. Therefore, the intake of carbohydrate solutions is recommended for intermittent loads and should be increasingly considered by coaches and players.
This paper aims to improve the traditional calibration method for reconfigurable self-X (self-calibration, self-healing, self-optimization, etc.) sensor interface readout circuits for Industry 4.0. A cost-effective test stimulus is applied to the device under test, and the transient response of the system is analyzed to correlate with the circuit's characteristic parameters. Due to the complexity of the search and objective space of smart sensory electronics, a novel experience replay particle swarm optimization (ERPSO) algorithm is proposed, which shows better search capability than several currently well-known PSO variants. The newly proposed ERPSO extends the selection procedure of classical PSO by introducing an experience replay buffer (ERB), with the intention of reducing the probability of becoming trapped in local minima. The ERB reflects the archive of previously visited global-best particles, and selection from it is based on an adaptive epsilon-greedy method in the velocity-updating model. The performance of the proposed ERPSO algorithm is verified using eight popular benchmarking functions. Furthermore, an extrinsic evaluation of the ERPSO algorithm is carried out on a reconfigurable wide-swing indirect current-feedback instrumentation amplifier (CFIA). For the latter test, we propose an efficient optimization procedure that uses total harmonic distortion analyses of the CFIA output to reduce the total number of measurements and save considerable optimization time and cost. The proposed optimization methodology is roughly 3 times faster than the classical optimization process. The circuit is implemented using Cadence design tools and CMOS 0.35 µm technology from Austria Microsystems (AMS). Efficiency and robustness are the key features of the proposed methodology for implementing reliable sensory electronic systems for Industry 4.0 applications.
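The experience-replay mechanism described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: with a small probability, the social attractor in the velocity update is drawn from an archive of previously visited global bests (the ERB) instead of the current global best. All function names and parameter values here are assumptions.

```python
import random

def erpso_step(particles, velocities, pbest, gbest, erb, eps=0.1,
               w=0.7, c1=1.5, c2=1.5):
    """One ERPSO-style velocity/position update (illustrative sketch).

    erb is the experience replay buffer: an archive of previously
    visited global-best positions. With probability eps, the social
    attractor is drawn from the buffer instead of the current gbest,
    reflecting the abstract's idea for escaping local minima.
    """
    for i, x in enumerate(particles):
        # epsilon-greedy choice of the social attractor
        attractor = random.choice(erb) if erb and random.random() < eps else gbest
        velocities[i] = [w * v
                         + c1 * random.random() * (pb - xi)
                         + c2 * random.random() * (at - xi)
                         for v, pb, at, xi in zip(velocities[i], pbest[i], attractor, x)]
        particles[i] = [xi + v for xi, v in zip(x, velocities[i])]
    return particles, velocities
```

In a full optimizer this step would be iterated, with `pbest`, `gbest` and the buffer updated after each fitness evaluation.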
This article proposes a new clock-dependent gain-scheduled dynamic output feedback controller for delayed linear parameter varying systems with piecewise constant parameters. The proposed controller guarantees ℒ2-performance. By employing a clock-dependent Lyapunov–Krasovskii functional, a sufficient condition for the existence of the controller is provided in terms of clock- and parameter-dependent linear matrix inequalities. A case study on output feedback control of delayed switched systems is also provided. To illustrate the efficacy of the result, it is applied to a practical VTOL helicopter model.
In recent years, the concept of a centralized drainage system that connects an entire city to one single treatment plant has increasingly been questioned in terms of costs, reliability, and environmental impacts. This study introduces an optimization approach based on decentralization in order to develop a cost-effective and sustainable sewage collection system. For this purpose, a new algorithm based on the growing spanning tree algorithm is developed for decentralized layout generation and treatment plant allocation. The trade-off between construction and operation costs, resilience, and the degree of centralization is a multiobjective problem that consists of two subproblems: the layout of the networks and the hydraulic design. The innovative characteristics of the proposed framework are that the layout and hydraulic designs are solved simultaneously, that three objectives are optimized together, and that the entire problem-solving process is self-adaptive. The model is then applied to a real case study. The results show that finding an optimum degree of centralization could not only reduce the network's costs by 17.3%, but also significantly increase its structural resilience compared to fully centralized networks.
It is difficult for robots to handle a vibrating deformable object. Even for human beings, it is a high-risk operation to, for example, insert a vibrating linear object into a small hole. However, fast manipulation using a robot arm is not just a dream; it may be achieved if some important features of the vibration are detected online. In this paper, we present an approach for fast manipulation using a force/torque sensor mounted on the robot's wrist. A template matching method is employed to recognize the vibrational phase of the deformable object. Thus, fast manipulation can be performed with a high success rate, even under acute vibration. Experiments inserting a deformable object into a hole are conducted to test the presented method. The results demonstrate that the presented sensor-based online fast manipulation is feasible.
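The phase-recognition step described above can be illustrated with a minimal template matching sketch over a one-dimensional force signal. This is a generic normalized cross-correlation, assumed here as a stand-in for the paper's actual implementation.

```python
import numpy as np

def match_phase(signal, template):
    """Estimate the vibrational phase of a force/torque signal by
    template matching (illustrative sketch).

    Slides the template over the signal and returns the lag with the
    highest normalized cross-correlation, i.e. the offset at which
    the current vibration best matches the stored template.
    """
    n, m = len(signal), len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    best_lag, best_score = 0, -np.inf
    for lag in range(n - m + 1):
        w = signal[lag:lag + m]
        w = (w - w.mean()) / (w.std() + 1e-12)
        score = float(np.dot(w, t)) / m   # normalized correlation in [-1, 1]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag, best_score
```

In an online setting the search would run over a sliding window of the most recent sensor samples rather than the whole recording.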
As a consequence of the real estate market crash after 2008, large investors invested a significant amount of wealth in single-family houses to construct portfolios of rental dwellings whose income is securitized on the capital market. In some local housing markets, these investors own remarkable numbers of single-family houses. Furthermore, their trading activities have resulted in a new investment strategy, which exacerbates property wealth concentration and polarization. This new investment strategy and its portfolio optimization inspire curiosity about its influence on housing markets. This paper first aims to find an optimal portfolio strategy by maximizing the expected utility of terminal wealth, adopting a stochastic model that includes a variety of economic states to estimate house prices. Second, it aims to analyze the effect of large investors on the housing market. The results show that the investment strategies of large investors depend on the balance among the economic state, maintenance costs, rental income, the interest rate and the investors' willingness to invest in housing, and that their effect depends on the state of the economy.
Load modeling is one of the crucial tasks for improving smart grids' energy efficiency. Among many alternatives, machine learning-based load models have become popular in applications and have shown outstanding performance in recent years. The performance of these models relies highly on the quality and quantity of data available for training. However, gathering a sufficient amount of high-quality data is time-consuming and extremely expensive. In the last decade, Generative Adversarial Networks (GANs) have demonstrated their potential to solve the data shortage problem by generating synthetic data after learning from recorded/empirical data. Such synthetic datasets can reduce the prediction error of electricity consumption when combined with empirical data, and they can be used to enhance risk management calculations. Therefore, in this study we propose RCGAN, TimeGAN, CWGAN, and RCWGAN, which take individual electricity consumption data as input to provide synthetic data. Our work focuses on one-dimensional time series, and numerical experiments on an empirical dataset show that GANs are indeed able to generate synthetic data with a realistic appearance.
Machining is very common in industry, e.g. in the automotive and aerospace industries. It is a nonlinear dynamic problem involving large deformations, large strains, large strain rates and high temperatures, which poses difficulties for numerical methods such as the finite element method. One way to simulate such problems is the Particle Finite Element Method (PFEM), which combines the advantages of continuum mechanics and discrete modeling techniques. In this work, we introduce an improved PFEM called the Adaptive Particle Finite Element Method (A-PFEM). The A-PFEM inserts particles and removes defective elements during the numerical simulation to improve accuracy and precision, decrease computing time, and resolve the phenomena that take place in machining at multiple scales. At the end of this paper, some examples are presented to show the performance of the A-PFEM.
Recently, phase field modeling of fatigue fracture has gained a lot of attention from researchers, since the fatigue damage of structures is a crucial issue in mechanical design. In contrast to traditional phase field fracture models, our approach considers not only the elastic strain energy and the crack surface energy; additionally, we introduce into the regularized energy density function a fatigue energy contribution caused by cyclic loading. Compared to other types of fracture phenomena, fatigue damage occurs only after a large number of load cycles, which requires a large computing effort in simulation. Furthermore, the choice of the cycle number increment is usually determined by a compromise between simulation time and accuracy. In this work, we propose an efficient phase field method for cyclic fatigue crack propagation that requires only moderate computational cost without sacrificing accuracy. We divide the entire fatigue fracture simulation into three stages and apply different cycle number increments in each damage stage. The basic concept of the algorithm is to associate the cycle number increment with the damage increment of each simulation iteration. Numerical examples show that our method can effectively predict the phenomenon of fatigue crack growth and reproduce fracture patterns.
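The core idea of associating the cycle number increment with the damage increment can be sketched as follows. The bounds and the target damage increment below are illustrative assumptions, not values from the paper.

```python
def adaptive_cycle_increment(damage_rate, target_dd=0.01,
                             dn_min=10, dn_max=10_000):
    """Pick the next cycle-number increment ΔN so that the expected
    damage increment per simulation step stays close to target_dd.

    damage_rate is the current damage growth per load cycle (dD/dN).
    This mirrors the abstract's idea of tying ΔN to the damage
    increment in each stage; the concrete bounds and target value
    here are illustrative assumptions.
    """
    if damage_rate <= 0:
        return dn_max                  # no damage growth: largest jump allowed
    dn = round(target_dd / damage_rate)
    return max(dn_min, min(dn_max, dn))
```

In a simulation loop, stages with slow damage growth thus advance by many load cycles at once, while stages near failure are resolved with small increments.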
In the last decades, the phase field method has drawn much attention for its application in fracture mechanics because it offers a simple unified framework for crack propagation. The core idea of phase field models for fracture is to introduce a continuous scalar field representing the discontinuous crack. Recently, a phase field model for fatigue has been proposed along this path. Fatigue failure differs from other fracture scenarios in that cracks only occur after a considerable number of load cycles. As fracturing happens, changes of the material microstructure are involved, which cause the evolution of the structural configuration. Thus, a new mathematical description based not on traditional spatial coordinates but on the material manifold is desired, which serves as an elegant analysis tool to understand the energetic forces for crack propagation. Configurational forces are a suitable choice for this purpose, as they describe the energetic driving forces associated with phenomena changing the material itself. In this work, we present a phase field model for fatigue. Furthermore, the phase field fatigue model is analyzed within the concept of configurational forces, which provides a straightforward way to understand phase field simulations of fatigue fracture.
Phospho-regulation of the Shugoshin - Condensin interaction at the centromere in budding yeast
(2020)
Correct bioriented attachment of sister chromatids to the mitotic spindle is essential for chromosome segregation. In budding yeast, the conserved protein shugoshin (Sgo1) contributes to biorientation by recruiting the protein phosphatase PP2A-Rts1 and the condensin complex to centromeres. Using peptide prints, we identified a Serine-Rich Motif (SRM) of Sgo1 that mediates the interaction with condensin and is essential for centromeric condensin recruitment and the establishment of biorientation. We show that the interaction is regulated via phosphorylation within the SRM and we determined the phospho-sites using mass spectrometry. Analysis of the phosphomimic and phosphoresistant mutants revealed that SRM phosphorylation disrupts the shugoshin–condensin interaction. We present evidence that Mps1, a central kinase in the spindle assembly checkpoint, directly phosphorylates Sgo1 within the SRM to regulate the interaction with condensin and thereby condensin localization to centromeres. Our findings identify novel mechanisms that control shugoshin activity at the centromere in budding yeast.
Print path-dependent contact temperature dependency for 3D printing using fused filament fabrication
(2022)
This paper focuses on the effects of different time spans and thus different contact temperatures when a molten strand contacts an adjacent already solidified strand in a plane during 3D printing with fused filament fabrication. For this purpose, both the manufacturing parameters and the geometry of the component are systematically varied and the effect on morphology and mechanical properties is investigated. The results clearly show that even with identical printing parameters, the transitions between the individual layers are much more visible with long time spans until fusion and lead to low mechanical properties. In contrast, short spans lead to hardly visible welds and high mechanical properties. Transferring the findings to different component sizes ultimately verifies that the average temperature at the time of contact between the already solidified and the currently deposited strand is decisive for component quality. In order to generate high component qualities, this finding must therefore be taken into account in the future in the path generation strategy, i.e., in so-called slicing.
Global trends such as climate change and the scarcity of sustainable raw materials require adaptive, more flexible and resource-saving wastewater infrastructures for rural areas. Since 2018, in the community Reinighof, an isolated site in the countryside of Rhineland-Palatinate (Germany), an autarkic, decentralized wastewater treatment and phosphorus recovery concept has been developed, implemented and tested. While feces are composted, an easy-to-operate system for producing struvite as a mineral fertilizer was developed and installed to recover phosphorus from urine. The nitrogen-containing supernatant of this process stage is treated in a special soil filter and afterwards discharged to a constructed wetland for grey water treatment, followed by an evaporation pond. To recover more than 90% of the phosphorus contained in the urine, the influence of the magnesium source, the dosing strategy, the molar Mg:P ratio and the reaction and sedimentation time were investigated. The results show that, with a long reaction time of 1.5 h and a molar Mg:P ratio above 1.3, constraints concerning the magnesium source can be overcome and a stable process can be achieved even under varying boundary conditions. Within the special soil filter, the high ammonium nitrogen concentrations of over 3000 mg/L in the supernatant of the struvite reactor were considerably reduced. In the effluent of the following constructed wetland for grey water treatment, the ammonium nitrogen concentrations were below 1 mg/L. This resource-efficient decentralized wastewater treatment is self-sufficient, produces valuable fertilizer and does not need a centralized wastewater system as a backup. It has high potential to be transferred to other rural communities.
This paper discusses the problem of automatic off-line programming and motion planning for industrial robots. First, a new concept consisting of three steps is proposed. In the first step, a new method for on-line motion planning is introduced. The motion planning method is based on the A*-search algorithm and works in the implicit configuration space. During the search, collisions are detected in the explicitly represented Cartesian workspace by hierarchical distance computation. In the second step, the trajectory planner transforms the path into a time- and energy-optimal robot program. The practical application of these two steps strongly depends on a method for robot calibration with high accuracy, thus mapping the virtual world onto the real world, which is discussed in the third step.
This paper presents a new approach to parallel motion planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on the A*-search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel motion planner on a workstation cluster with 9 PCs and tested the planner in several benchmark environments. With optimal discretisation, the new approach usually shows linear speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
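The A*-search in an implicitly discretised configuration space can be illustrated with a minimal 2-DOF sketch. The papers work in a 6D space and use hierarchical distance computation for collision checking; both are abstracted here into a small grid and a user-supplied collision predicate, so this is a schematic illustration rather than the authors' planner.

```python
import heapq

def a_star(start, goal, is_collision):
    """A*-search on an implicitly discretised 2-DOF configuration space.

    Configurations (integer tuples) are expanded on demand, so the
    free space is never represented explicitly -- only visited cells
    are stored. is_collision(c) stands in for the collision check
    against the Cartesian workspace model.
    """
    def h(c):  # admissible heuristic: Chebyshev distance to the goal
        return max(abs(c[0] - goal[0]), abs(c[1] - goal[1]))

    open_set = [(h(start), 0, start)]
    came_from = {start: None}
    g_cost = {start: 0}
    closed = set()
    while open_set:
        _, g, c = heapq.heappop(open_set)
        if c in closed:
            continue
        closed.add(c)
        if c == goal:                      # reconstruct the path
            path = []
            while c is not None:
                path.append(c)
                c = came_from[c]
            return path[::-1]
        for dx in (-1, 0, 1):              # expand the 8 neighbour cells
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                n = (c[0] + dx, c[1] + dy)
                if is_collision(n):
                    continue
                ng = g + 1
                if ng < g_cost.get(n, float("inf")):
                    g_cost[n] = ng
                    came_from[n] = c
                    heapq.heappush(open_set, (ng + h(n), ng, n))
    return None                            # no collision-free path exists
```

The parallel variant described in the abstract distributes hypercubes of this search space cyclically over the processing units.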
A practical distributed planning and control system for industrial robots is presented. The hierarchical concept consists of three independent levels. Each level is modularly implemented and supplies an application interface (API) to the next higher level. At the top level, we propose an automatic motion planner. The motion planner is based on a best-first search algorithm and needs no essential off-line computations. At the middle level, we propose a PC-based robot control architecture, which can easily be adapted to any industrial kinematics and application. Based on a client/server principle, the control unit establishes an open user interface for including application-specific programs. At the bottom level, we propose a flexible and modular concept for the integration of the distributed motion control units based on the CAN bus. The concept allows an on-line adaptation of the control parameters according to the robot's configuration. This implies high accuracy for the path execution and improves the overall system performance.
This paper presents a new approach to parallel motion planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on the A*-search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel motion planner on a workstation cluster with 9 PCs and tested the planner in several benchmark environments. With optimal discretisation, the new approach usually shows linear, and sometimes even superlinear, speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
A new problem for the automated off-line programming of industrial robot applications is investigated. The multi-goal path planning task is to find a collision-free path connecting a set of goal poses while minimizing, e.g., the total path length. Our solution is based on an earlier reported path planner for industrial robot arms with 6 degrees of freedom in an on-line given 3D environment. To control the path planner, four different goal selection methods are introduced and compared. While the Random and the Nearest Pair Selection methods can be used with any path planner, the Nearest Goal and the Adaptive Pair Selection methods are favorable for our planner. With the latter two goal selection methods, the multi-goal path planning task can be significantly accelerated, because they are able to automatically solve the simplest path planning problems first. In summary, compared to Random or Nearest Pair Selection, this new multi-goal path planning approach results in a further cost reduction of the programming phase.
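The Nearest Goal selection strategy mentioned above can be illustrated with a short greedy sketch. Euclidean distance in pose space is assumed here as a stand-in for true path cost, and the function name is illustrative, not the authors' implementation.

```python
import math

def nearest_goal_order(poses, start):
    """Greedy Nearest-Goal-style ordering of goal poses (sketch).

    Repeatedly picks the yet-unvisited goal closest to the current
    pose, approximating the idea of solving the cheapest planning
    problems first. In the real planner, each leg would then be
    handed to the path planner for a collision-free connection.
    """
    order, current = [], start
    remaining = list(poses)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        order.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return order
```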
Applications of Efficient Methods in Automation - Universität Karlsruhe at SPS97 in Nürnberg -
(1998)
Motion planning for industrial robots is a necessary prerequisite for autonomous systems to move through their environment without collisions. Taking dynamic obstacles into account at runtime, however, requires powerful algorithms to solve this task in real time. One way to accelerate these algorithms is the efficient use of scalable parallel processing. The software implementation, however, can only succeed if a parallel computer is available that offers high data throughput at low latency. In addition, such a parallel computer must be operable with reasonable effort and offer a good price-performance ratio so that parallel processing is increasingly adopted in industry. This article presents a workstation cluster based on nine standard PCs, which are interconnected via a special communication card. The individual sections describe the experience gathered during commissioning, system administration and application. As an example of an application on this cluster, a parallel motion planner for industrial robots is described.
In response priming experiments, a participant has to respond as quickly and as accurately as possible to a target stimulus preceded by a prime. The prime and the target can either be mapped to the same response (consistent trial) or to different responses (inconsistent trial). Here, we investigate the effects of two sequential primes (each one either consistent or inconsistent) followed by one target in a response priming experiment. We employ discrete-time hazard functions of response occurrence and conditional accuracy functions to explore the temporal dynamics of sequential motor activation. In two experiments (small-N design, 12 participants, 100 trials per cell and subject), we find that (1) the earliest responses are controlled exclusively by the first prime if primes are presented in quick succession, (2) intermediate responses reflect competition between primes, with the second prime increasingly dominating the response as its time of onset is moved forward, and (3) only the slowest responses are clearly controlled by the target. The current study provides evidence that sequential primes meet strict criteria for sequential response activation. Moreover, it suggests that primes can influence responses out of a memory buffer when they are presented so early that participants are forced to delay their responses.
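The discrete-time hazard functions used in this analysis can be computed with a short sketch: in each time bin, the hazard is the probability of a response in that bin given that no response has occurred earlier. The function below is a generic illustration, not the authors' analysis code.

```python
def discrete_hazard(rts, bins):
    """Discrete-time hazard function of response occurrence (sketch).

    For each time bin [lo, hi), the hazard is
        h(t) = n_responses(t) / n_still_waiting(t),
    i.e. responses in the bin divided by trials that have not yet
    responded. rts is a list of response times; bins are ascending
    bin edges.
    """
    hazards = []
    survivors = len(rts)
    for lo, hi in zip(bins[:-1], bins[1:]):
        events = sum(1 for rt in rts if lo <= rt < hi)
        hazards.append(events / survivors if survivors else 0.0)
        survivors -= events
    return hazards
```

Conditional accuracy functions would be computed analogously, replacing the event count per bin with the proportion of correct responses among the responses in that bin.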
Surface wetting can be simulated using a phase field approach which describes the continuous liquid-gas transition with the help of an order parameter. In this publication, wetting of non-planar surfaces is investigated based on a phase field model by Diewald et al. [1, 2]. Different scenarios of droplets on rough surfaces are simulated. The static equilibrium for those scenarios is calculated using an Allen-Cahn evolution equation. The influence of the surface morphology on the resulting contact angle is investigated while the width of the phase transition from liquid to gas is varied as a model parameter.
Surface wetting can be described using phase field models [1]. In these models, either the contact angle or the surface tensions between the solid and the fluid are often prescribed directly on the wall in order to represent the solid-fluid interaction. However, the interaction of the wall and the fluid is not strictly local. The influence of the wall, which can be described by wall potentials [2], reaches out into the fluid, which is the reason for the formation of adsorbate layers. This investigation shows how such a wall potential can be included in a phase field model of wetting. It is found that by considering this energy contribution, the model is able to capture the adsorbate layer.
A novel shadowgraphic inline probe to measure crystal size distributions (CSD), based on acquired greyscale images, is evaluated in terms of elevated temperatures and fragile crystals, and compared to well-established, alternative online and offline measurement techniques, i.e., sieving analysis and online microscopy. Additionally, the operation limits, with respect to temperature, supersaturation, suspension, and optical density, are investigated. Two different substance systems, potassium dihydrogen phosphate (prisms) and thiamine hydrochloride (needles), are crystallized for this purpose at 25 L scale. Crystal phases of the well-known KH2PO4/H2O system are measured continuously by the inline probe and in a bypass by the online microscope during cooling crystallizations. Both measurement techniques show similar results with respect to the crystal size distribution, except for higher temperatures, where the bypass variant tends to fail due to blockage. Thiamine hydrochloride, a substance forming long and fragile needles in aqueous solutions, is solidified with an anti-solvent crystallization with ethanol. The novel inline probe could identify a new field of application for image-based crystal size distribution measurements, with respect to difficult particle shapes (needles) and elevated temperatures, which cannot be evaluated with common techniques.
The fluid dynamic (flow rates) and hydrodynamic behavior (local droplet size distributions and local holdup) of a continuous DN300 pump-mixer were investigated using water as the continuous phase and paraffin oil as the dispersed phase. The influence of the impeller speed (375 to 425 rpm), the feed phase ratio (10 to 30 volume percent), and the total flow rate (0.5 to 2.3 L/min) was investigated by measuring the pumping height, the local holdup of the disperse phase, and the droplet size distribution (DSD). The latter was measured at three different vessel positions using an image-based telecentric shadowgraphic technique. The droplet diameters were extracted from the acquired images using a neural network. The Sauter mean diameters were calculated from the DSD and correlated with an extended model based on Doulah (1975), considering the impeller speed, the feed phase ratio, and additionally the flow rate. The new correlation can describe an extensive database containing 155 experiments of the fluid dynamic and hydrodynamic behavior within a 15% error range.
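The Sauter mean diameter referred to above is a standard quantity and can be computed directly from a measured droplet population:

```python
def sauter_mean_diameter(diameters):
    """Sauter mean diameter d32 of a droplet population:

        d32 = sum(d_i^3) / sum(d_i^2),

    i.e. the diameter of a sphere with the same volume-to-surface
    ratio as the whole population. diameters is an iterable of
    individual droplet diameters, e.g. as extracted from images.
    """
    diameters = list(diameters)
    assert diameters, "need at least one droplet"
    return sum(d ** 3 for d in diameters) / sum(d ** 2 for d in diameters)
```

For a monodisperse population, d32 equals the common diameter; larger droplets otherwise dominate the mean because of the cubic weighting.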
In this work, steady-state droplet size distributions in a DN300 stirred batch vessel with a Rushton turbine impeller are investigated using an insertion probe based on the telecentric transmitted light principle. High-resolution droplet size distributions are extracted from the images using a convolutional neural network for image analysis in order to investigate the influence of impeller speed and phase fraction (up to 50 vol.-%). In addition, Sauter mean diameters were calculated and correlated with two semi-empirical approaches; while the standard approach only accomplished 5.7% accuracy, the correlation of Laso et al. provided a relative mean error of 4.0%. In addition, the correlated exponent in the Weber number was fitted to the experimental data of this work, yielding a slightly different value than the theoretical (−0.6), which allows a better representation of the low coalescence tendency of the system, which is usually neglected in standard procedures.
One of the ongoing tasks in space structure testing is the vibration test, in which a given structure is mounted onto a shaker and excited by a certain input load over a given frequency range in order to reproduce the rigor of launch. These vibration tests need to be conducted to ensure that the devised structure meets the expected loads of its future application. However, the structure must not be overtested, to avoid any risk of damage. For this, the system's response to the testing loads, i.e., stresses and forces in the structure, must be monitored and predicted live during the test. In order to solve the issues associated with existing methods for live monitoring of the structure's response, this paper investigated the use of artificial neural networks (ANNs) to predict the system's responses during the test. Hence, a framework was developed with different use cases to compare various kinds of artificial neural networks and eventually identify the most promising one. The conducted research thus constitutes a novel method for the live prediction of stresses, allowing failure to be evaluated for different types of material via yield criteria.
INRECA offers tools and methods for developing, validating, and maintaining classification, diagnosis, and decision support systems. INRECA's basic technologies are inductive and case-based reasoning [9]. INRECA fully integrates [2] both techniques within one environment and exploits the respective advantages of both technologies. Its object-oriented representation language CASUEL [10, 3] allows the definition of complex case structures, relations, and similarity measures, as well as background knowledge to be used for adaptation. The object-oriented representation language makes INRECA a domain-independent tool for its intended kinds of tasks. When problems are solved via case-based reasoning, the primary kind of knowledge used during problem solving is the very specific knowledge contained in the cases. However, in many situations this specific knowledge by itself is not sufficient or appropriate to cope with all requirements of an application. Very often, background knowledge is available and/or necessary to better explore and interpret the available cases [1]. Such general knowledge may state dependencies between certain case features and can be used to infer additional, previously unknown features from the known ones.
Edit distances between merge trees of scalar fields have many applications in scientific visualization, such as ensemble analysis, feature tracking, or symmetry detection. In this paper, we propose branch mappings, a novel approach to the construction of edit mappings for merge trees. Classic edit mappings match nodes or edges of two trees onto each other and therefore either have to rely on branch decompositions of both trees or have to use auxiliary node properties to determine a matching. In contrast, branch mappings employ branch properties instead of node similarity information and are independent of predetermined branch decompositions. Especially for topological features, which are typically based on branch properties, this allows a more intuitive distance measure that is also less susceptible to instabilities from small-scale perturbations. For trees with 𝒪(n) nodes, we describe an 𝒪(n⁴) algorithm for computing optimal branch mappings, which is faster than the only other branch decomposition-independent method in the literature by more than a linear factor. Furthermore, we compare the results of our method on synthetic and real-world examples to demonstrate its practicality and utility.
This contribution presents results of an investigation of solid-lubricated rolling bearings. The bearings considered use a special, modified cage whose pockets serve, in addition to their original function of guiding the rolling elements, as a lubricant depot. First, the test setup and the test conditions are explained, and in this context it is shown that the setup used in this contribution exhibits considerably reduced scatter compared with the setup of previous work. The hygroscopic behavior of the polymer compound was identified as a non-negligible source of error in the gravimetric determination of cage-pocket wear. Distortion of these measurements by uncontrolled moisture absorption from the environment must be prevented by a prior drying process under defined conditions. It is also shown that the cage pockets are worn both by the inner ring of the bearing and by the rolling elements, and a measurement method for determining the amount of material worn away by the inner ring is presented. Surface analyses of the brass structure of the cage reveal a depletion of zinc as well as a change in the surface structure; sublimation of the zinc due to the test conditions is suspected as the cause. Furthermore, it is shown that the test temperature of 300 °C leads to shrinkage of the bearing rings. This dimensional reduction can be anticipated by tempering the rings at 300 °C for 48 h.
The political science literature on German federalism is extraordinarily diverse. Besides analyses of the institutional arrangements, their changes, and the dynamics of German cooperative federalism, there are numerous studies of individual policy fields that examine both the interactions between the federal government and the Länder and the variance between the Länder's policies along with its determinants. In addition, distinct research strands on parties in the federal state and on parliamentary research at the Land level have become established over recent decades. Despite this extensive research activity, however, some central questions of political science concerning the interplay between voters, parties, parliaments, and governments, as well as their effect on political outputs and outcomes, remain unanswered. This is due, so the argument of this contribution, in particular to the missing integration of individual strands of the literature and to the still insufficient empirical data base. By systematizing the current state of the literature, the article outlines a research program aimed at a comprehensive analysis of the political process of opinion formation and decision making in the German Länder, taking questions of responsiveness and feedback systematically into view.
Algorithms increasingly govern people's lives, including through rapidly spreading applications in the public sector. This paper sheds light on the acceptance of algorithms used by the public sector, emphasizing that algorithms, as parts of socio-technical systems, are always embedded in a specific social context. We show that citizens' acceptance of an algorithm is strongly shaped by how they evaluate aspects of this context, namely the personal importance of the specific problems an algorithm is supposed to help address and their trust in the organizations deploying the algorithm. The objective performance of the presented algorithms affects acceptance much less in comparison. These findings are based on an original dataset from a survey covering two real-world applications, predictive policing and skin cancer prediction, with a sample of 2661 respondents from a representative German online panel. The results have important implications for the conditions under which citizens will accept algorithms in the public sector.
Comparative public policy is a booming research area. It also suffers from some curious blind spots. In this paper we discuss four of these: (1) the obsession with covariance, which means that important phenomena are ignored; (2) the lack of agency, which leads to underwhelming explanatory models; (3) the unclear universe of cases, which means that the inferential value of theories and empirical results is unclear; and (4) the focus on outputs, even though most theories contain strong assumptions about the political process leading to certain outputs. Following this discussion, we then outline how a closer integration of policy process theories may be fruitful for future research.
We studied the development of cognitive abilities related to intelligence and creativity
(N = 48, 6–10 years old), using a longitudinal design (over one school year), in order
to evaluate an Enrichment Program for gifted primary school children initiated by
the government of the German federal state of Rhineland-Palatinate (Entdeckertag
Rheinland Pfalz, Germany; ET; Day of Discoverers). A group of German primary school
children (N = 24), identified earlier as intellectually gifted and selected to join the
ET program was compared to a gender-, class- and IQ-matched group of control
children that did not participate in this program. All participants performed the Standard
Progressive Matrices (SPM) test, which measures intelligence in well-defined problem
space; the Creative Reasoning Task (CRT), which measures intelligence in ill-defined
problem space; and the test of creative thinking-drawing production (TCT-DP), which
measures creativity, also in ill-defined problem space. Results revealed that problem
space matters: the ET program is effective only for the improvement of intelligence
operating in well-defined problem space. An effect was found for intelligence as
measured by SPM only, but neither for intelligence operating in ill-defined problem space
(CRT) nor for creativity (TCT-DP). This suggests that, depending on the type of problem
spaces presented, different cognitive abilities are elicited in the same child. Therefore,
enrichment programs for gifted, but also for children attending traditional schools,
should provide opportunities to develop cognitive abilities related to intelligence,
operating in both well- and ill-defined problem spaces, and to creativity in parallel, using an interactive approach.
In contrast to the transmission grid, whose structure is known with sufficient accuracy, suitable grid models for medium-voltage (MV) grids are difficult to map because of the high number of MV grids and distribution system operators (DSOs). Furthermore, a detailed description of real MV grids is usually not desired in scientific publications for data privacy reasons. In this work, MV grid models and their development are explained in detail. For the first time, comprehensible MV grid models for the German-speaking area are thus available to the public. They can be used as a benchmark for scientific investigations and for method development.
To investigate whether participants can activate only one spatially oriented number line at a time or
multiple number lines simultaneously, they were asked to solve a unit magnitude comparison task
(unit smaller/larger than 5) and a parity judgment task (even/odd) on two-digit numbers. In both these
primary tasks, decades were irrelevant. After some of the primary task trials (randomly), participants
were asked to additionally solve a secondary task based on the previously presented number. In
Experiment 1, they had to decide whether the two-digit number presented for the primary task was
larger or smaller than 50. Thus, for the secondary task decades were relevant. In contrast, in Experiment
2, the secondary task was a color judgment task, which means decades were irrelevant. In Experiment
1, decades’ and units’ magnitudes influenced the spatial association of numbers separately. In contrast,
in Experiment 2, only the units were spatially associated with magnitude. It was concluded that
multiple number lines (one for units and one for decades) can be activated if attention is focused on
multiple, separate magnitude attributes.
Due to the steadily increasing number of decentralized generation units, the upcoming smart meter rollout and the expected electrification of the transport sector (e-mobility), grid planning and grid operation at the low-voltage (LV) level are facing major challenges. Therefore, many studies, research and demonstration projects on the above topics have been carried out in recent years, and the results and the methods developed have been published. However, the published methods usually cannot be replicated or validated, since the majority of the examination models or the scenarios used are not comprehensible to third parties. There is a lack of uniform grid models that map the German LV grids and can be used for comparative investigations, similar to the North American distribution grid models of the IEEE. In contrast to the transmission grid, whose structure is known with high accuracy, suitable grid models for LV grids are difficult to map because of the high number of LV grids and distribution system operators. Furthermore, a detailed description of real LV grids is usually not available in scientific publications for data privacy reasons. For investigations within a research project, characteristic synthetic LV grid models have therefore been created, which are based on common settlement structures and usual grid planning principles in Germany. In this work, these LV grid models and their development are explained in detail. For the first time, comprehensible LV grid models for the Central European area are available to the public, which can be used as a benchmark for further scientific research and method development.
This document is an English version of the paper which was originally written in German. In addition, this paper discusses a few more aspects, especially on the planning process of distribution grids in Germany.
Due to the steady increase in decentralized generation units, the upcoming smart meter rollout and the expected electrification of the transport sector (e-mobility), grid planning and grid operation of low-voltage (LV) grids in Germany face major challenges. In recent years, many studies, research and demonstration projects on the above topics have therefore been carried out, and the results as well as the methods developed have been published. However, the published methods usually cannot be reproduced or validated, since the examination models or the scenarios used are not comprehensible to third parties. There is a lack of uniform grid models that map the German LV grids and can be used for comparative investigations, similar to the North American distribution grid models of the IEEE.
In contrast to the transmission grid, whose structure is known with sufficient accuracy, suitable grid models for LV grids are difficult to map because of the high number of LV grids and distribution system operators (DSOs). Furthermore, a detailed description of real LV grids is usually not desired in scientific publications for data privacy reasons. For investigations within a research project, synthetic LV grid models that are as characteristic as possible were therefore created, based on common German settlement structures and usual grid planning principles. In this work, these LV grid models and their development are explained in detail. For the first time, comprehensible LV grid models for the German-speaking area are thus available to the public. They can be used as a benchmark for scientific investigations and for method development.
Control concept for low-voltage grid automation using the merit-order principle
(2022)
Due to the increasing generation capacity at the low-voltage (LV) grid level from photovoltaic systems, as well as the electrification of the heating and transport sectors, investments in the LV grids are necessary. A higher degree of digitalization in the LV grid offers the potential to identify the necessary investments more precisely and thus, where applicable, to reduce or postpone them. Here, the market introduction of intelligent metering systems, so-called smart meters, provides a new way to obtain measurements from the LV grid and to optimize the set-points of available actuators on their basis. This raises the question of how measurement data with different measurement cycles can be used in a grid automation system, and how the nonlinear integer optimization problem of set-point optimization can be solved efficiently. This work addresses the solution of the optimization problem, applying a set-point optimization based on the merit-order principle.
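The merit-order principle can be illustrated with a minimal dispatch sketch: actuators are sorted by cost and activated in ascending order until the required set-point change is covered. The actuator names, powers, and costs below are hypothetical and not taken from the paper:

```python
def merit_order_dispatch(actuators, required_mw):
    """Select set-point changes in ascending order of cost until the
    required power reduction is covered (merit-order principle).
    actuators: list of (name, controllable power in MW, cost per MW)."""
    plan, remaining = [], required_mw
    for name, available_mw, cost in sorted(actuators, key=lambda a: a[2]):
        if remaining <= 0:
            break
        use = min(available_mw, remaining)   # cheapest actuators are used first
        plan.append((name, use))
        remaining -= use
    return plan, remaining

# hypothetical actuators in an LV grid
acts = [("PV curtailment", 0.4, 5.0),
        ("battery storage", 0.2, 2.0),
        ("EV charging", 0.3, 1.0)]
plan, rest = merit_order_dispatch(acts, 0.6)
```

The real problem in the paper is a nonlinear integer optimization; this sketch only shows the cost-ordering idea that the merit-order principle contributes.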
The size congruity effect involves interference between numerical magnitude and physical size of visually presented numbers: congruent numbers (either both small or both large in numerical magnitude and physical size) are responded to faster than incongruent ones (small numerical magnitude/large physical size or vice versa). In addition, numerical magnitude is associated with lateralized response codes, leading to the Spatial Numerical Association of Response Codes (SNARC) effect: small numerical magnitudes are preferably responded to on the left side and large ones on the right side. Whereas size congruity effects are ascribed to interference between stimulus dimensions in the decision stage, SNARC effects are understood as (in)compatibilities in stimulus-response combinations. Accordingly, size congruity and SNARC effects were previously found to be independent in parity and in physical size judgment tasks. We investigated their dependency in numerical magnitude judgment tasks. We obtained independent size congruity and SNARC effects in these tasks and replicated this observation for the parity judgment task. The results confirm and extend the notion that size congruity and SNARC effects operate in different representational spaces. We discuss possible implications for number representation.
The modified fouling index (MFI) is a crucial characteristic for assessing the fouling potential of reverse osmosis (RO) feed water. Although the MFI is widely used, the estimation time required for filtration and data evaluation is still relatively long. In this study, the relationship between the MFI and instantaneous spectroscopic extinction measurements was investigated. Since both measurements show a linear correlation with particle concentration, it was assumed that a change in the MFI can be detected by monitoring the optical density of the feed water. To prove this assumption, a test bench for a simultaneous measurement of the MFI and optical extinction was designed. Silica monospheres with sizes of 120 nm and 400 nm and mixtures of both fractions were added to purified tap water as model foulants. MFI filtration tests were performed with a standard 0.45 µm PES membrane, and a 0.1 µm PP membrane. Extinction measurements were carried out with a newly designed flow cell inside a UV–VIS spectrometer to get online information on the particle properties of the feed water, such as the particle concentration and mean particle size. The measurement results show that the extinction ratio of different light wavelengths, which should remain constant for a particulate system, independent of the number of particles, only persisted at higher particle concentrations. Nevertheless, a good correlation between extinction and MFI for different particle concentrations with restrictions towards the ratio of particle and pore size of the test membrane was found. These findings can be used for new sensory process monitoring systems, if the deficiencies can be overcome.
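The MFI itself is commonly determined as the slope of t/V plotted over V in the linear cake-filtration region of the record. A minimal sketch of that evaluation (the filtration record below is synthetic, constructed so the true slope is known):

```python
import numpy as np

def modified_fouling_index(t, v):
    """MFI as the least-squares slope of t/V versus V in the linear
    cake-filtration region; t in s, v in L, result in s/L^2."""
    t = np.asarray(t, dtype=float)
    v = np.asarray(v, dtype=float)
    slope, intercept = np.polyfit(v, t / v, 1)
    return slope

# synthetic record: t = 10 V + 4 V^2, i.e. t/V = 10 + 4 V, so MFI = 4
v = np.array([0.5, 1.0, 1.5, 2.0])
t = 10.0 * v + 4.0 * v ** 2
mfi = modified_fouling_index(t, v)
```

In practice the fit must be restricted to the linear region of the measured curve, and the result is normalized to standard pressure, temperature, and membrane area before comparison.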
The Griffith-Ley oxidation of alcohols to aldehydes and ketones is performed with either RuCl3·(H2O)x or a highly stable, well-defined ruthenium catalyst and with cheap trimethylamine N-oxide (TMAO) as the oxygen source. The use of n-heptane as the solvent, which forms a second phase with TMAO and a part of the alcohol, allows the reactions to be performed with a minimum amount of catalyst. This results in high local concentrations and thus in very rapid conversions. Detailed quantum chemical calculations suggest that the Griffith-Ley oxidation does not necessarily require high oxidation states of ruthenium but can also proceed via Ru(II)/Ru(IV) species.
Using industrial robots for machining applications in flexible manufacturing processes lacks high accuracy. The main reason for the deviation is the flexibility of the gearbox. Secondary encoders (SE), as additional high-precision angle sensors, offer great potential for detecting gearbox deviations. This paper aims to use SE to reduce gearbox compliance with a feed-forward, adaptive neural control. The control network is trained with a second network for system identification. The presented algorithm is capable of online application and optimizes the robot accuracy in a nonlinear simulation.
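A toy illustration of the feed-forward compensation idea: a small network learns the deviation between commanded and measured joint angle and the command is offset by the prediction. This is not the authors' network; the compliance model, network size, and training scheme below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: commanded joint angle q and a hypothetical gearbox deviation,
# as it would be observed by comparing motor-side and secondary encoders
q = rng.uniform(-1.0, 1.0, (200, 1))
deviation = 0.05 * np.sin(3.0 * q)          # assumed compliance behaviour

# small one-hidden-layer network trained to predict the deviation
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
mse0 = float(np.mean((np.tanh(q @ W1 + b1) @ W2 + b2 - deviation) ** 2))
lr = 0.05
for _ in range(5000):
    h = np.tanh(q @ W1 + b1)                # forward pass
    err = h @ W2 + b2 - deviation
    gW2 = h.T @ err / len(q); gb2 = err.mean(0)   # backprop, MSE loss
    gh = err @ W2.T * (1.0 - h ** 2)
    gW1 = q.T @ gh / len(q); gb1 = gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# feed-forward correction: offset the command by the predicted deviation
q_corrected = q - (np.tanh(q @ W1 + b1) @ W2 + b2)
```

In the paper, a second identification network supplies the training signal online; here the "measured" deviation is simply generated from the assumed model.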
We present an identification benchmark data set for a full robot movement with a KUKA KR300 R2500 ultra SE industrial robot. The robot has a nominal payload capacity of 300 kg, a weight of 1120 kg, and a reach of 2500 mm. It exhibits 12 states accounting for position and velocity for each of the 6 joints. The robot encounters backlash in all joints, pose-dependent inertia, pose-dependent gravitational loads, pose-dependent hydraulic forces, pose- and velocity-dependent centripetal and Coriolis forces, as well as nonlinear friction, which is temperature-dependent and therefore potentially time-varying. We supply the prepared dataset for black-box identification of the forward or the inverse robot dynamics. In addition to the data for black-box modelling, we supply high-frequency raw data and videos of each experiment. A baseline and figures of merit are defined to make results comparable across different identification methods.
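A common figure of merit for such identification benchmarks is a normalized RMSE per state; a minimal sketch using the trivial zero predictor as a baseline (the metric and baseline choice here are assumptions for illustration, not necessarily those defined with the dataset):

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Normalized RMSE per channel: RMSE divided by the standard
    deviation of the measured signal, a common identification metric."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0))
    return rmse / y_true.std(axis=0)

# stand-in data: 12 states (position and velocity for 6 joints), 1000 samples
measured = np.random.default_rng(1).normal(size=(1000, 12))
baseline = np.zeros_like(measured)     # trivial zero predictor
score = nrmse(measured, baseline)      # one score per state, ~1 for this data
```

Normalizing per state makes position and velocity channels with very different magnitudes comparable in a single score vector.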
Kinetic models of human motion rely on boundary conditions which are defined by the interaction of the body with its environment. In the simplest case, this interaction is limited to the foot contact with the ground and is given by the so-called ground reaction force (GRF). A major challenge in the reconstruction of GRF from kinematic data is the double support phase, referring to the state with multiple ground contacts. In this case, the GRF prediction is not well defined. In this work we present an approach to reconstruct vertical GRF (vGRF) and distribute it to each foot separately, using only kinematic data. We propose the biomechanically inspired force shadow method (FSM) to obtain a unique solution for any contact phase, including double support, of an arbitrary motion. We create a kinematic based function, model an anatomical foot shape and mimic the effect of hip muscle activations. We compare our estimates with the measurements of a Zebris pressure plate and obtain correlations of 0.39 ≤ r ≤ 0.94 for double support motions and 0.83 ≤ r ≤ 0.87 for a walking motion. The presented data is based on inertial human motion capture, showing the applicability for scenarios outside the laboratory. The proposed approach has low computational complexity and allows for online vGRF estimation.
A survey on continuous, semidiscrete and discrete well-posedness and scale-space results for a class of nonlinear diffusion filters is presented. This class does not require any monotonicity assumption (comparison principle) and thus also allows image restoration. The theoretical results include existence, uniqueness, continuous dependence on the initial image, maximum-minimum principles, average grey level invariance, smoothing Lyapunov functionals, and convergence to a constant steady state.
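Filters of the kind surveyed are typically written as an evolution of the initial image f under a nonlinear diffusion equation; a representative formulation (the standard setting of such filters, stated here for orientation rather than quoted from the survey):

```latex
\partial_t u = \operatorname{div}\!\bigl( g\bigl(|\nabla u_\sigma|^2\bigr)\, \nabla u \bigr)
\quad \text{on } \Omega \times (0,\infty),
\qquad u(\cdot,0) = f,
\qquad \partial_n u = 0 \ \text{on } \partial\Omega \times (0,\infty),
```

where g is a smooth diffusivity and u_σ a Gaussian-presmoothed version of u. Because g is not required to yield a comparison principle, contrast-enhancing (restoring) behaviour is admitted alongside smoothing.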
Cloudy inhomogeneities in artificial fabrics are graded by a fast method which is based on a Laplacian pyramid decomposition of the fabric image. This band-pass representation takes into account the scale character of the cloudiness. A quality measure of the entire cloudiness is obtained as a weighted mean over the variances of all scales.
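The grading scheme can be sketched in a few lines: compute a band-pass residual at each pyramid level, take its variance, and average the variances with scale weights. The reduce step below uses simple 2x2 block averaging instead of the usual Gaussian kernel, so this is a simplification of the method, not its exact implementation:

```python
import numpy as np

def cloudiness_index(img, levels=3, weights=None):
    """Weighted mean of the band-pass variances of a simple Laplacian
    pyramid (2x2 block averaging as the reduce step)."""
    img = np.asarray(img, dtype=float)
    variances = []
    for _ in range(levels):
        h, w = img.shape
        coarse = img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        upsampled = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
        band = img[:h // 2 * 2, :w // 2 * 2] - upsampled   # band-pass residual
        variances.append(band.var())
        img = coarse                                       # descend one scale
    if weights is None:
        weights = np.ones(levels)
    return float(np.average(variances, weights=weights))
```

A perfectly uniform fabric image yields an index of zero; stronger cloudiness at any scale raises the corresponding variance and hence the weighted mean.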
The ideas of texture analysis by means of the structure tensor are combined with the scale-space concept of anisotropic diffusion filtering. In contrast to many other nonlinear diffusion techniques, the proposed one uses a diffusion tensor instead of a scalar diffusivity. This allows true anisotropic behaviour. The preferred diffusion direction is determined according to the phase angle of the structure tensor, and the diffusivity in this direction increases with the local coherence of the signal. The filter is constructed in such a way that it gives a mathematically well-founded scale-space representation of the original image. Experiments demonstrate its usefulness for the processing of interrupted one-dimensional structures such as fingerprint and fabric images.
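A minimal sketch of the structure tensor quantities the filter relies on: the smoothed outer product of the image gradient, its phase angle (preferred direction), and a coherence measure. Box smoothing stands in for the Gaussian integration scale, so this is a simplified illustration, not the paper's implementation:

```python
import numpy as np

def structure_tensor(img):
    """Structure tensor J = smoothed outer product of the image gradient;
    returns the local orientation and the coherence measure (mu1 - mu2)^2."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))

    def smooth(a):
        # crude 3x3 box smoothing as a stand-in for Gaussian integration
        p = np.pad(a, 1, mode="edge")
        n, m = a.shape
        return sum(p[i:i + n, j:j + m] for i in range(3) for j in range(3)) / 9.0

    jxx, jxy, jyy = smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)
    orientation = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)   # phase angle
    coherence = (jxx - jyy) ** 2 + 4.0 * jxy ** 2          # (mu1 - mu2)^2
    return orientation, coherence
```

High coherence indicates a locally one-dimensional structure (e.g. a fingerprint ridge), along which the diffusion tensor then permits strong smoothing.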
The performance of napkins is nowadays improved substantially by embedding granules of a superabsorbent into the cellulose matrix. In this paper a continuous model for the liquid transport in such an Ultra Napkin is proposed. Its main feature is a nonlinear diffusion equation strongly coupled with an ODE describing a reversible absorption process. An efficient numerical method based on a symmetrical time splitting and a finite difference scheme of ADI predictor-corrector type has been developed to solve these equations in a three-dimensional setting. Numerical results are presented that can be used to optimize the granule distribution.
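One plausible form of such a strongly coupled system (a sketch for illustration only, not the exact equations of the paper):

```latex
\partial_t u = \nabla \cdot \bigl( D(u)\, \nabla u \bigr) - \partial_t s,
\qquad
\partial_t s = k_a\, u\,(s_{\max} - s) - k_d\, s,
```

where u denotes the free liquid content, s the liquid bound in the superabsorbent granules, D(u) a nonlinear diffusivity, and k_a, k_d the absorption and desorption rates of the reversible process. The sink term −∂_t s in the diffusion equation is what couples the PDE strongly to the ODE.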
The structural integrity of synaptic connections critically depends on the interaction between synaptic cell adhesion molecules (CAMs) and the underlying actin and microtubule cytoskeleton. This interaction is mediated by giant Ankyrins, which act as specialized adaptors to establish and maintain axonal and synaptic compartments. In Drosophila, two giant isoforms of Ankyrin2 (Ank2) control synapse stability and organization at the larval neuromuscular junction (NMJ). Both Ank2-L and Ank2-XL are highly abundant in motoneuron axons and within the presynaptic terminal, where they control the distribution of synaptic CAMs and the organization of microtubules. Here, we address the role of the conserved N-terminal ankyrin repeat domain (ARD) for the subcellular localization and function of these giant Ankyrins in vivo. We used a P[acman]-based rescue approach to generate deletions of ARD subdomains that contain putative binding sites of interacting transmembrane proteins. We show that specific subdomains control synaptic but not axonal localization of Ank2-L. These domains contain binding sites for L1-family member CAMs, and we demonstrate that these regions are necessary for the organization of synaptic CAMs and for the control of synaptic stability. In contrast, presynaptic Ank2-XL localization only partially depends on the ARD but strictly requires the presynaptic presence of Ank2-L, demonstrating a critical co-dependence of the two isoforms at the NMJ. Ank2-XL-dependent control of microtubule organization correlates with the presynaptic abundance of the protein and is thus only partially affected by ARD deletions. Together, our data provide novel insights into the synaptic targeting of giant Ankyrins, with relevance for the control of synaptic plasticity and maintenance.
Machining-induced residual stresses (MIRS) are a main driver for distortion of thin-walled monolithic aluminum workpieces. Before compensation techniques to minimize distortion can be developed, the effect of machining on the MIRS has to be fully understood. This means that not only the effect of different process parameters on the MIRS is important; the repeatability of the MIRS resulting from the same machining condition also has to be considered. Past research did not focus on the statistical confidence of the MIRS of machined samples. In this paper, the repeatability of the MIRS for different machining modes, consisting of a variation in feed per tooth and cutting speed, is investigated. The investigations comprised multiple hole-drilling measurements within one sample and on different samples machined with the same parameter set. In addition, the effect of two different clamping strategies on the MIRS was investigated. The results show that an overall repeatability of the MIRS is given for stable machining (between 16 and 34% repeatability standard deviation of the maximum normal MIRS), whereas unstable machining, detected by vibrations in the force signal, has worse repeatability (54%), independent of the clamping strategy used. Further experiments, in which a 1-mm-thick wafer was removed at the milled surface, show the connection between MIRS and the resulting distortion. A numerical stress analysis reveals that the measured stress data are consistent with machining-induced distortion across and within different machining modes. It was found that more and/or deeper MIRS cause more distortion.
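The repeatability percentages quoted above are relative standard deviations of repeated measurements; a minimal sketch of that statistic (the stress values below are hypothetical, not measured data from the paper):

```python
import numpy as np

def repeatability_percent(values):
    """Repeatability as relative standard deviation in percent:
    100 * sample std / |mean| of repeated measurements."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / abs(v.mean())

# hypothetical repeated maximum normal MIRS values in MPa (compressive)
mirs = [-120.0, -95.0, -140.0, -110.0]
rep = repeatability_percent(mirs)
```

A value of 16-34% would correspond to the stable-machining range reported, while values around 54% would indicate the scatter observed for unstable machining.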
Financing measures and incentive schemes for (existing and new) building owners can promote the sustainable settlement development of rural regions or municipalities and, in a wider sense, entire countries or cross-border regions. In order to be used on a broad scale, the concept of revolving funds must be developed further. In this research, the concept of an advanced revolving housing fund (ARF) for building owners to support the sustainable development of rural regions, together with potential mechanisms, is introduced. The ARF is designed to reflect impacts and challenges with regard to rural regions in Germany, Europe and beyond. Based on New Institutional Economics, the Theory of Spatial Organisms, an expert workshop, interviews and discussions, and further literature research, the fundamentals for incentive schemes and the essential mechanisms and design aspects of the ARF are derived. This includes the principal structure and governance of a holding fund and several regional funds. Based on this, input parameters for the financial modelling of an ARF are presented, as well as guiding elements for empirical testing to promote more research in this area. It is found that the ARF should have a regional focus and must be a comprehensive instrument of settlement development with additional informal and formal measures. The developed concept promises new impulses, in particular for rural regions. It is proposed to test the concept by means of case studies in pioneer regions of different countries.
Heterocystous Cyanobacteria of the genus Nodularia form major blooms in brackish waters, while terrestrial Nostoc species occur worldwide, often associated in biological soil crusts. Both genera, by virtue of their ability to fix N2 and conduct oxygenic photosynthesis, contribute significantly to global primary productivity. Select Nostoc and Nodularia species produce the hepatotoxin nodularin, and whether its production will change under climate change conditions needs to be assessed. In light of this, the effects of elevated atmospheric CO2 availability on growth, carbon and N2 fixation as well as nodularin production were investigated in toxin and non-toxin producing species of both genera. Results highlighted the following:
Biomass- and volume-specific biological nitrogen fixation (BNF) rates were almost six- and 17-fold higher, respectively, in the aquatic Nodularia species compared to the terrestrial Nostoc species tested, under elevated CO2 conditions.
There was a direct correlation between elevated CO2 and decreased dry weight specific cellular nodularin content in a diazotrophically grown terrestrial Nostoc species, and the aquatic Nodularia species, regardless of nitrogen availability.
Elevated atmospheric CO2 levels were correlated to a reduction in biomass specific BNF rates in non-toxic Nodularia species.
Nodularin producers exhibited stronger stimulation of net photosynthesis rates (NP) and growth (more positive Cohen’s d) and less stimulation of dark respiration and BNF per volume compared to non-nodularin producers under elevated CO2 levels.
This study is the first to provide information on NP and nodularin production under elevated atmospheric CO2 levels for Nodularia and Nostoc species under nitrogen replete and diazotrophic conditions.
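The effect sizes referred to above (Cohen's d) compare group means in units of the pooled standard deviation; a minimal sketch of the statistic:

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: difference of group means divided by the pooled
    sample standard deviation."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled
```

A more positive d for nodularin producers than for non-producers, as reported, means their response (e.g. in NP or growth) was shifted further upward relative to the common spread of the data.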
The cultivation of cyanobacteria with the addition of an organic carbon source (i.e., heterotrophic or mixotrophic cultivation) is a promising technique to increase their slow growth rate. However, most cyanobacteria cultures are infected by non-separable heterotrophic bacteria. While their contribution to the biomass is rather insignificant in phototrophic cultivation, problems may arise in heterotrophic and mixotrophic mode. Heterotrophic bacteria can potentially utilize carbohydrates quickly, thus preventing any benefit for the cyanobacteria. In order to estimate the advantage of supplementing a carbon source, it is essential to quantify the proportions of cyanobacteria and heterotrophic bacteria in the resulting biomass. In this work, the use of quantitative polymerase chain reaction (qPCR) is proposed. To prepare the samples, a DNA extraction method for cyanobacteria was improved to provide reproducible and robust results for the group of terrestrial cyanobacteria. Two pairs of primers were used, which bind either to the 16S rRNA gene of all cyanobacteria or to that of all bacteria including cyanobacteria. This allows a determination of the proportion of cyanobacteria in the biomass. The method was established with the two terrestrial cyanobacteria Trichocoleus sociatus SAG 26.92 and Nostoc muscorum SAG B-1453-12a. As proof of concept, a heterotrophic cultivation of T. sociatus with glucose was performed. After 2 days of cultivation, a reduction of the biomass proportion of the cyanobacterium to 90% was detected. Afterwards, the proportion increased again.
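The proportion estimate behind such qPCR results can be sketched as follows. Assuming both primer pairs are quantified against a shared 16S standard curve (the slope and intercept below are hypothetical placeholders), the cyanobacterial fraction is the ratio of the two back-calculated copy numbers:

```python
def cyano_fraction(ct_cyano, ct_total, slope=-3.32, intercept=38.0):
    """Estimate 16S copy numbers from Ct values via a standard curve,
    log10(copies) = (Ct - intercept) / slope, and return the fraction of
    cyanobacterial copies among all bacterial copies."""
    copies_cyano = 10 ** ((ct_cyano - intercept) / slope)
    copies_total = 10 ** ((ct_total - intercept) / slope)
    return copies_cyano / copies_total

# Hypothetical Ct values: the universal primer pair crosses threshold
# half a cycle earlier than the cyanobacteria-specific pair
frac = cyano_fraction(20.5, 20.0)   # roughly 0.71 of copies are cyanobacterial
```

In practice, differing amplification efficiencies and 16S copy numbers per cell require primer-pair-specific calibration; the sketch ignores these corrections.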
The move away from fossil fuels and the diversification of the primary energy sources used are imperative, both in terms of mitigating global warming and of ensuring the political independence of the Western world. The industries of agriculture and forestry can secure their basic energy supply from their own yield. The use of vegetable oil is one way to satisfy the energy requirements of agricultural machines both autonomously and sustainably. To date, rapeseed has been the most important plant for oil production in Western Europe. In the EU, rapeseed oil is currently credited with up to 60% fossil CO2 savings compared to conventional diesel fuel. As a result, since 2018, rapeseed oil has no longer been considered a biofuel in the EU. However, if cultivation and processing are based entirely on renewable energy sources, up to 90% of fossil CO2 emissions can be saved in the future. This also applies to rapeseed oil that is a by-product of animal feed production. In addition, pure rapeseed oil is chemically unchanged and thus biodegradable, which makes it particularly attractive for use in environmentally sensitive areas.
To increase the attractiveness of rapeseed oil as a fuel for the agricultural industry, a multi-fuel concept for the flexible use of rapeseed oil, diesel fuel and any mixtures of these two fuels would be beneficial, as it minimizes economic risks due to price fluctuations, availability, and taxation. For implementing such a concept, technical adjustments to the propulsion system are necessary. In existing vegetable oil vehicles, cost-intensive additional components are required for diesel particulate filter regeneration. Conventional regeneration via post-injected fuel (which does not participate in combustion) leads to dilution of the engine oil with vegetable oil.
This study elaborates the possibilities of DPF regeneration in vegetable oil operation by internal engine measures without the need for post-injection. This includes strategies for generating exhaust gas temperatures in high-idle operation which are suitable for regeneration. For this purpose, strategies combining throttling and retarded combustion are used. The measures were successfully tested with respect to their effectiveness for DPF regeneration. It could also be proved that no increased engine oil dilution occurs as a result of the regeneration procedure.
For a prospective series application, however, regeneration should also be possible in transient engine operation. For this purpose, the measures developed for high-idle regeneration have been transferred to partial load points to gain insight into their applicability for transient engine operation. In addition, the effect of external EGR on regeneration has been considered. As the previous investigations of high-idle regeneration showed that regeneration is most critical when pure rapeseed oil is used, the studies of regeneration in part-load operation were limited to pure rapeseed oil. The systematic parameter variations carried out during the studies helped to improve the understanding of the system and the mechanisms of regeneration. The results of the investigation show that the exhaust gas temperature can be increased significantly by the measures studied. However, achieving the exhaust temperature required for DPF regeneration remains a challenge for certain operating points.
Functional Metallic Microcomponents via Liquid-Phase Multiphoton Direct Laser Writing: A Review
(2019)
We present an overview of functional metallic microstructures fabricated via direct laser writing out of the liquid phase. Metallic microstructures are often key components in diverse applications such as microelectromechanical systems (MEMS). Since the metallic component's functionality mostly depends on other components, a technology that enables on-chip fabrication of these metal structures is highly desirable. Direct laser writing via multiphoton absorption is such a fabrication method. In the past, it has mostly been used to fabricate multidimensional polymeric structures. However, during the last few years, different groups have put effort into the development of novel photosensitive materials that enable the fabrication of metallic, especially gold and silver, microstructures. The results of these efforts are summarized in this review and show that direct laser fabrication of metallic microstructures has reached the level of applicability.
A concept for the quantification of cooperative effects in transition-metal complexes is presented. It is demonstrated for a series of novel N,N- (mononuclear) and C,N-coordinated homo- and heterometallic binuclear complexes based on the (2-dimethylamino)-4-(2-pyrimidinyl)pyrimidine ligand, which are accessible by applying roll-over cyclometallation. These iridium-, platinum-, and palladium-containing compounds are investigated with respect to their absorption and fluorescence spectra. The cooperative effects in the electronic absorptions, i. e., the energetic shifts between mononuclear and dinuclear complexes, and free ligands are analyzed on the basis of the lowest energy π-π* transitions and compared to calculated data, obtained from TD-DFT calculations. Furthermore the corresponding fluorescence spectra are presented and analyzed with respect to the concept of cooperativity.
About the approach: The approach of TOPO was originally developed in the FABEL project [1] to support architects in designing buildings with complex installations. Supplementing knowledge-based design tools, which are available only for selected subtasks, TOPO aims to cover the whole design process. To this end, it relies almost exclusively on archived plans. Input to TOPO is a partial plan, and output is an elaborated plan. The input plan constitutes the query case, and the archived plans form the case base with the source cases. A plan is a set of design objects. Each design object is defined by some semantic attributes and by its bounding box in a 3-dimensional coordinate system. TOPO supports the elaboration of plans by adding design objects.
Struktur und Werkzeuge des experiment-spezifischen Datenbereichs der SFB501 Erfahrungsdatenbank
(1999)
Software development artifacts must be captured in a targeted manner during the execution of a software project so that they can be prepared for reuse. In the Sonderforschungsbereich 501, the methodological basis for this is the concept of the experience database (Erfahrungsdatenbank). Its experiment-specific data area stores, for each development project, all software development artifacts that arise during the project's life cycle. Its overarching data area collects all those artifacts from the experiment-specific data area that are candidates for reuse in subsequent projects. Experience has shown that systematic access is necessary even for using the data in the experiment-specific area of the experience database. Systematic access, however, presupposes a standardized structure. Two types of experiments are distinguished in the experiment-specific area: "controlled experiments" and "case studies". This report describes the storage and access structure for the experiment type "case studies". The structure was developed and evaluated on the basis of experience gained in initial case studies.
Version and configuration management are central instruments for intellectually mastering complex software developments. In strongly reuse-oriented software development approaches, such as the one provided by the SFB, the notion of a configuration must be extended from traditionally product-oriented artifacts to processes and other development experiences. This publication presents such an extended configuration model. Furthermore, an extension of traditional project planning information is discussed that enables tailored version and configuration management mechanisms to be derived before the start of a project.
Let \(X\) be a Banach lattice. Necessary and sufficient conditions for a linear operator \(A:D(A) \to X\), \(D(A)\subseteq X\), to be of positive \(C^0\)-scalar type are given. In addition, the question is discussed which conditions on the Banach lattice imply that every operator of positive \(C^0\)-scalar type is necessarily of positive scalar type.
In the scalar case one knows that a complex normalized function of bounded variation \(\phi\) on \([0,1]\) defines a unique complex regular Borel measure \(\mu\) on \([0,1]\). In this note we show that this is no longer true in general in the vector valued case, even if \(\phi\) is assumed to be continuous. Moreover, the functions \(\phi\) which determine a countably additive vector measure \(\mu\) are characterized.
The following two norms for holomorphic functions \(F\), defined on the right complex half-plane \(\{z \in C:\Re(z)\gt 0\}\) with values in a Banach space \(X\), are equivalent:
\[\begin{eqnarray*} \lVert F \rVert _{H_p(C_+)} &=& \sup_{a\gt0}\left( \int_{-\infty}^\infty \lVert F(a+ib) \rVert ^p \ db \right)^{1/p}
\mbox{, and} \\ \lVert F \rVert_{H_p(\Sigma_{\pi/2})} &=& \sup_{\lvert \theta \rvert \lt \pi/2}\left( \int_0^\infty \left \lVert F(re^{i \theta}) \right \rVert ^p\ dr \right)^{1/p}.\end{eqnarray*}\] As a consequence, we derive a description of boundary values of sectorial holomorphic functions, and a theorem of Paley-Wiener type for sectorial holomorphic functions.
Hardware prototyping is an essential part of the hardware design flow. Furthermore, hardware prototyping usually relies on system-level design and hardware-in-the-loop simulations in order to develop, test and evaluate intellectual property cores. One common task in this process consists of interfacing cores with different port specifications. Data width conversion is used to overcome this issue. This work presents two open-source hardware cores compliant with the AXI4-Stream bus protocol; each core performs upsizing or downsizing data width conversion.
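The data-path behavior of such width converters can be modelled in a few lines. The sketch below mimics an AXI4-Stream-style downsizer/upsizer pair for byte-granular conversion (LSB-first packing is an assumption; the actual cores' byte ordering and sideband signals such as TKEEP and TLAST are not modelled):

```python
def downsize(words, ratio):
    """Split each wide beat into `ratio` byte-wide narrow beats, LSB-first,
    mimicking a downsizer's data path."""
    out = []
    for w in words:
        for i in range(ratio):
            out.append((w >> (8 * i)) & 0xFF)
    return out

def upsize(beats, ratio):
    """Pack groups of `ratio` byte-wide beats into one wide beat, LSB-first."""
    assert len(beats) % ratio == 0, "stream length must be a beat multiple"
    return [sum(b << (8 * i) for i, b in enumerate(beats[k:k + ratio]))
            for k in range(0, len(beats), ratio)]
```

Round-tripping a stream through `downsize` and `upsize` with the same ratio reproduces the original beats, which is the basic correctness property of such a converter pair.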
Within a biorefinery platform, several conversion steps such as pretreatment, saccharification, fermentation and downstream processing are necessary to obtain the final bio-based product(s) from lignocellulosic biomass. The structural composition of the biomass, especially the lignin content, determines the necessary pretreatment steps. To obtain sugar monomers, the hydrolysis of lignocellulosic biomass is an essential step. This work examines the impact of different pretreatments on the sugar release during biocatalysis. Even without prior pretreatment, the biocatalysis of low-lignin biomass achieves glucose yields of up to 93 %, while the biocatalysis of high-lignin biomass requires an upstream hydrothermal procedure to achieve a glucose yield of 74 %.
Using molecular dynamics simulation, we study the cutting of Al/Si bilayer systems. While the plasticity of metals is dominated by dislocation activity, the deformation behavior of Si crystals is governed by phase transformations, here to the amorphous phase. We find that twinning emerges as an additional major deformation mechanism in the cutting of Al crystals. Cutting of Si crystals requires thrust forces that are larger than the cutting forces in order to induce amorphization; in metals, the thrust forces are smaller relative to the cutting forces. When putting an Al top layer on a Si substrate, the thrust force is reduced; the opposite effect is observed if a Si top layer is put on an Al substrate. Covering an Al substrate with a thin Si top layer has the detrimental effect that the hard Si requires high pressures for cutting; as a consequence, twinning planes with intersecting directions are generated that ultimately lead to cracks in the ductile Al substrate. The crystallinity of the Si chip is strongly changed if an Al substrate is put under the Si top layer: with decreasing thickness of the Si top layer, the Si chip retains a higher degree of crystallinity.
Nanoindentation simulations are performed for a Ni(111) bi-crystal, in which the grain boundary is coated by a graphene layer. We study both a weak and a strong interface, realized by a 30∘ and a 60∘ twist boundary, respectively, and compare our results for the composite also with those of an elemental Ni bi-crystal. We find hardening of the elemental Ni when a strong, i.e., low-energy, grain boundary is introduced, and softening for a weak grain boundary. For the strong grain boundary, the interface barrier strength felt by dislocations upon passing the interface is responsible for the hardening; for the weak grain boundary, confinement of the dislocations results in the weakening. For the Ni-graphene composite, we find in all cases a weakening influence that is caused by the graphene blocking the passage of dislocations and absorbing them. In addition, interface failure occurs when the indenter reaches the graphene, again weakening the composite structure.
The deformation of a nano-sized polycrystalline Al bar under the action of vice plates is studied using molecular dynamics simulation. Two grain sizes are considered, fine-grained and coarse-grained. Deformation in the fine-grained sample is mainly caused by grain-boundary processes which induce grain displacement and rotation. Deformation in the coarse-grained sample is caused by grain-boundary processes and dislocation plasticity. The sample distortion manifests itself by the center-of-mass motion of the grains. Grain rotation is responsible for surface roughening after the loading process. While the plastic deformation is caused by the loading process, grain rearrangements under load release also contribute considerably to the final sample distortion.
Functional illiteracy and developmental dyslexia: looking for common roots. A systematic review
(2021)
A considerable portion of the population in more economically developed countries is functionally illiterate (i.e., low literate). Despite some years of schooling and basic reading skills, these individuals cannot properly read and write and, as a consequence, have problems understanding even short texts. An often-discussed approach (Greenberg et al. 1997) assumes weak phonological processing skills coupled with untreated developmental dyslexia as possible causes of functional illiteracy. Although there are some data suggesting commonalities between low literacy and developmental dyslexia, it is still not clear whether these reflect shared consequences (i.e., cognitive and behavioral profiles) or shared causes. The present systematic review aims at exploring the similarities and differences identified in empirical studies investigating both functionally illiterate and developmentally dyslexic samples. Nine electronic databases were searched in order to identify all quantitative studies published in English or German. Although a broad search strategy and few limitations were applied, only 5 of the resulting 9269 references were identified as adequate. The results point to the lack of studies directly comparing functionally illiterate with developmentally dyslexic samples. Moreover, a huge variance was identified between the studies in how they approached the concept of functional illiteracy, particularly with regard to critical categories such as the applied definition, terminology, criteria for inclusion in the sample, research focus, and outcome measures. The available data highlight the need for more direct comparisons in order to understand to what extent functional illiteracy and dyslexia share common characteristics.
A highly water-dispersible heterogeneous Brønsted acid surfactant was prepared by synthesis of a bifunctional anisotropic Janus-type material. The catalyst comprises ionic functionalities on one side and propyl-SO3H groups on the other. The novel material was investigated as a green substitute for a homogeneous acidic phase-transfer catalyst (PTC). The activity of the catalyst was investigated for the aqueous-phase oxidation of cyclohexene to adipic acid with 30 % hydrogen peroxide, even on a decagram scale. It can also be used for the synthesis of some other carboxylic acid derivatives as well as diethyl phthalate.
Janus materials are anisotropic nano- and microarchitectures with two different faces consisting of distinguishable or opposite physicochemical properties. In parallel with the discovery of new methods for the fabrication of these materials, decisive progress has been made in their application, for example, in biological science, catalysis, pharmaceuticals, and, more recently, in battery technology. This Minireview systematically covers recent and significant achievements in the application of task-specific Janus nanomaterials as heterogeneous catalysts in various types of chemical reactions, including reduction, oxidative desulfurization and dye degradation, asymmetric catalysis, biomass transformation, cascade reactions, oxidation, transition-metal-catalyzed cross-coupling reactions, electro- and photocatalytic reactions, as well as gas-phase reactions. Finally, an outlook on possible future applications is given.
Small concentrations of alloying elements can modify the α-γ phase transition temperature Tc of Fe. We study this effect using an atomistic model based on a set of many-body interaction potentials for iron and several alloying elements. Free-energy calculations based on perturbation theory allow us to determine the change in Tc introduced by the alloying element. The resulting changes are in semi-quantitative agreement with experiment. The effect is traced back to the shape of the pair potential describing the interaction between the Fe and the alloying atom.
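Free-energy perturbation is commonly based on the Zwanzig relation, ΔF = −kT ln⟨exp(−ΔU/kT)⟩₀. A self-contained numerical sketch (synthetic Gaussian energy differences, not the paper's interaction potentials) checks the estimator against the known analytic result ΔF = μ − σ²/(2kT) for Gaussian ΔU:

```python
import math
import random

def fep_delta_f(du_samples, kT):
    """Zwanzig free-energy perturbation estimator:
    dF = -kT * ln < exp(-dU / kT) >_0 averaged over reference-state samples."""
    avg = sum(math.exp(-du / kT) for du in du_samples) / len(du_samples)
    return -kT * math.log(avg)

random.seed(0)
kT, mu, sigma = 1.0, 0.5, 0.3
# Synthetic energy differences; for Gaussian dU the exact free-energy
# difference is mu - sigma**2 / (2 * kT) = 0.455
samples = [random.gauss(mu, sigma) for _ in range(200_000)]
dF = fep_delta_f(samples, kT)
```

Because the estimator is an exponential average, it is dominated by low-ΔU samples; the small σ chosen here keeps the toy estimate well converged.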
Indentation and Scratching with a Rotating Adhesive Tool: A Molecular Dynamics Simulation Study
(2022)
For the specific case of a spherical diamond nanoparticle with 10 nm radius rolling over a planar Fe surface, we employ molecular dynamics simulation to study the processes of indentation and scratching. The particle is rotating (rolling). We focus on the influence of the adhesion force between the nanoparticle and the surface on the damage mechanisms on the surface; the adhesion is modeled by a pair potential with arbitrarily prescribed value of the adhesion strength. With increasing adhesion, the following effects are observed. The load needed for indentation decreases and so does the effective material hardness; this effect is considerably more pronounced than for a non-rotating particle. During scratching, the tangential force, and hence the friction coefficient, increase. The torque needed to keep the particle rolling adds to the total work for scratching; however, for a particle rolling without slip on the surface the total work is minimum. In this sense, a rolling particle induces the most efficient scratching process. For both indentation and scratching, the length of the dislocation network generated in the substrate reduces. After leaving the surface, the particle is (partially) covered with substrate atoms and the scratch groove is roughened. We demonstrate that these effects are based on substrate atom transport under the rotating particle from the front towards the rear; this transport already occurs for a repulsive particle but is severely intensified by adhesion.
Fragmentation of granular clusters may be studied by experiments and by granular-mechanics simulation. When comparing results, it is often assumed that results can be compared when scaled to the same value of \(E/E_{sep}\), where \(E\) denotes the collision energy and \(E_{sep}\) is the energy needed to break every contact in the granular clusters. The ratio \(E/E_{sep} \propto v^2\) depends on the collision velocity \(v\) but not on the number of grains per cluster, \(N\). We test this hypothesis using granular-mechanics simulations on silica clusters containing a few thousand grains in the velocity range where fragmentation starts. We find that a good parameter to compare different systems is given by \(E/(N^\alpha E_{sep})\), where \(\alpha \sim 2/3\). The occurrence of the extra factor \(N^\alpha\) is caused by energy dissipation during the collision, such that large clusters require a higher impact energy to reach the same level of fragmentation as small clusters. Energy is dissipated during the collision mainly by normal and tangential (sliding) forces between grains. For large values of the viscoelastic friction parameter, we find less cluster fragmentation, since fragment velocities are smaller and allow for fragment recombination.
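The practical use of such a scaling parameter is to match collision conditions across cluster sizes: with E ∝ N v² and E_sep ∝ N, keeping E/(N^α E_sep) constant requires v ∝ N^(α/2). A small helper (α = 2/3 taken from the abstract; the reference numbers are illustrative):

```python
def matched_velocity(v_ref, n_ref, n, alpha=2/3):
    """Collision velocity giving the same fragmentation level as the
    reference case (v_ref, n_ref), assuming the scaling parameter
    E / (N**alpha * E_sep) is held constant. With E ~ N * v**2 and
    E_sep ~ N this implies v ~ N**(alpha/2)."""
    return v_ref * (n / n_ref) ** (alpha / 2)

# An 8x larger cluster needs a 2x higher impact velocity, since 8**(1/3) = 2
v = matched_velocity(100.0, 1000, 8000)
```

Setting alpha = 0 recovers the naive assumption that the required velocity is independent of cluster size.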
The electrochemical process of microbial electrosynthesis (MES) is used to drive the metabolism of electroactive microorganisms for the production of valuable chemicals and fuels. MES combines the advantages of electrochemistry, engineering, and microbiology and offers alternative production processes based on renewable raw materials and regenerative energies. In addition to the reactor concept and electrode design, the biocatalysts used have a significant influence on the performance of MES. Both pure and mixed cultures can be used as biocatalysts. When mixed cultures are used, interactions between organisms, such as direct interspecies electron transfer (DIET) or syntrophic interactions, influence the performance in terms of productivity and the product range of MES. This review focuses on the comparison of pure and mixed cultures in microbial electrosynthesis. The performance indicators, such as productivities and coulombic efficiencies (CEs), for both procedural methods are discussed. Typical products in MES are methane and acetate; these processes are therefore the focus of this review. In general, most studies have used mixed cultures as biocatalysts, as mixed cultures have shown better performance for both products. When comparing pure and mixed cultures in equivalent experimental setups, a 3-fold higher methane and a nearly 2-fold higher acetate production rate can be achieved with mixed cultures. However, studies of pure-culture MES for methane production have shown some improvement through reactor optimization and operational mode, reaching performance indicators similar to those of mixed-culture MES. Overall, the review gives an overview of the advantages and disadvantages of using pure or mixed cultures in MES.
We report on the generation of pulsed broadband terahertz radiation utilizing the inverse spin Hall effect in Fe/Pt bilayers on MgO and sapphire substrates. The emitter was optimized with respect to layer thickness, growth parameters, substrates, and geometrical arrangement. The experimentally determined optimum layer thicknesses were in qualitative agreement with simulations of the spin current induced in the ferromagnetic layer. Our model takes into account the generation of spin polarization, spin diffusion and accumulation in Fe and Pt, as well as the electrical and optical properties of the bilayer samples. Using the device in a counterintuitive orientation, a Si lens was attached to increase the collection efficiency of the emitter. The optimized emitter provided a bandwidth of up to 8 THz, which was mainly limited by the low-temperature-grown GaAs (LT-GaAs) photoconductive antenna used as detector and by the pulse length of the pump laser. The THz pulse length was as short as 220 fs for a sub-100 fs pulse length of the 800 nm pump laser. Average pump powers as low as 25 mW (at a repetition rate of 75 MHz) have been used for terahertz generation. This and the general performance make the spintronic terahertz emitter compatible with established emitters based on optical rectification in nonlinear crystals.
We present a model predictive control (MPC) algorithm for online time-optimal trajectory planning of cooperative robotic manipulators. Robotic arms sharing a common confined operational space are exposed to high inter-robot collision risks. For collision avoidance, a smooth robot geometry approximation by Bézier curves is applied, utilizing velocity constraints and tangent separating planes, enabling efficient generation of robot trajectories in real time. The proposed optimization algorithm is validated on an experimental setup consisting of two collaborative robotic arms performing synchronous pick-and-place tasks.
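The separating-plane idea exploits the convex-hull property of Bézier curves: each curve lies inside the convex hull of its control points, so a plane that strictly separates the two control polygons certifies that the curve segments cannot collide. A minimal sketch (the control points and the candidate plane are hypothetical; the paper's MPC formulation additionally optimizes such planes online):

```python
def separated_by_plane(ctrl_a, ctrl_b, normal, offset, margin=0.0):
    """Certify separation of two Bezier segments: since each curve lies in
    the convex hull of its control points, the curves cannot intersect if
    all control points of A lie on one side of the plane n.x = offset and
    all control points of B on the other (with an optional safety margin)."""
    def side(p):
        return sum(n * x for n, x in zip(normal, p)) - offset
    return (all(side(p) > margin for p in ctrl_a)
            and all(side(p) < -margin for p in ctrl_b))

# Two cubic segments with hypothetical control points, tested against
# the plane x = 0 with a 0.5 safety margin
a = [(1.0, 0.0, 0.5), (1.2, 0.4, 0.6), (1.5, 0.8, 0.7), (2.0, 1.0, 0.8)]
b = [(-1.0, 0.0, 0.5), (-1.3, 0.3, 0.6), (-1.6, 0.7, 0.7), (-2.0, 1.0, 0.8)]
ok = separated_by_plane(a, b, normal=(1.0, 0.0, 0.0), offset=0.0, margin=0.5)
```

The test is conservative: failure of one candidate plane does not prove a collision, it only means a different plane (or a finer curve subdivision) must be tried.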
We consider N coupled linear oscillators with time-dependent coefficients. An exact complex amplitude - real phase decomposition of the oscillatory motion is constructed. This decomposition is further used to derive N exact constants of motion which generalise the so-called Ermakov-Lewis invariant of a single oscillator. In the Floquet problem of periodic oscillator coefficients we discuss the existence of periodic complex amplitude functions in terms of existing Floquet solutions.
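For a single oscillator, the constants of motion mentioned above reduce to the Ermakov-Lewis invariant I = ((x/ρ)² + (ρẋ − ρ̇x)²)/2, which is conserved when the auxiliary amplitude ρ solves the Ermakov equation ρ̈ + ω²(t)ρ = ρ⁻³. A numerical sketch (hand-coded RK4 and an arbitrarily assumed ω(t)) verifies the conservation:

```python
import math

def deriv(t, y):
    """State y = (x, xdot, rho, rhodot) for xddot = -w(t)**2 * x together
    with the Ermakov equation rhoddot = -w(t)**2 * rho + rho**-3."""
    w2 = (1.0 + 0.2 * math.sin(0.5 * t)) ** 2   # assumed omega(t), illustrative
    x, xd, r, rd = y
    return (xd, -w2 * x, rd, -w2 * r + r ** -3)

def rk4_step(t, y, h):
    k1 = deriv(t, y)
    k2 = deriv(t + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k1)))
    k3 = deriv(t + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k2)))
    k4 = deriv(t + h, tuple(a + h * b for a, b in zip(y, k3)))
    return tuple(a + h / 6 * (b + 2 * c + 2 * d + e)
                 for a, b, c, d, e in zip(y, k1, k2, k3, k4))

def invariant(y):
    """Ermakov-Lewis invariant I = ((x/rho)**2 + (rho*xd - rhod*x)**2) / 2."""
    x, xd, r, rd = y
    return 0.5 * ((x / r) ** 2 + (r * xd - rd * x) ** 2)

y, t, h = (1.0, 0.0, 1.0, 0.0), 0.0, 1e-3
I0 = invariant(y)
for _ in range(20_000):                # integrate to t = 20
    y = rk4_step(t, y, h)
    t += h
drift = abs(invariant(y) - I0)         # stays tiny (integration error only)
```

The invariant stays constant to within the RK4 truncation error even though the energy of the oscillator itself is not conserved for time-dependent ω(t).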
A harmonic oscillator subject to a parametric pulse is examined. The aim of the paper is to present a new theory for analysing transitions due to parametric pulses. The new theoretical notions which are introduced relate the pulse parameters in a direct way with the transition matrix elements. The harmonic oscillator transitions are expressed in terms of asymptotic properties of a companion oscillator, the Milne (amplitude) oscillator. A traditional phase-amplitude decomposition of the harmonic-oscillator solutions results in the so-called Milne's equation for the amplitude, and the phase is determined by an exact relation to the amplitude. This approach is extended in the present analysis with new relevant concepts and parameters for pulse dynamics of classical and quantal systems. The amplitude oscillator has a particularly nice numerical behavior. In the case of strong pulses it does not possess any of the fast oscillations induced by the pulse on the original harmonic oscillator. Furthermore, the new dynamical parameters introduced in this approach relate closely to relevant characteristics of the pulse. The relevance to quantum mechanical problems such as reflection and transmission from a localized well and mechanical problems of controlling vibrations is illustrated.
We consider the maximum flow problem with minimum quantities (MFPMQ), which is a variant of the maximum flow problem where the flow on each arc in the network is restricted to be either zero or above a given lower bound (a minimum quantity), which may depend on the arc. This problem has recently been shown to be weakly NP-complete even on series-parallel graphs. In this paper, we provide further complexity and approximability results for MFPMQ and several special cases. We first show that it is strongly NP-hard to approximate MFPMQ on general graphs (and even bipartite graphs) within any positive factor. On series-parallel graphs, however, we present a pseudo-polynomial time dynamic programming algorithm for the problem. We then study the case that the minimum quantity is the same for each arc in the network and show that, under this restriction, the problem is still weakly NP-complete on general graphs, but can be solved in strongly polynomial time on series-parallel graphs. On general graphs, we present a \((2 - 1/\lambda)\)-approximation algorithm for this case, where \(\lambda\) denotes the common minimum quantity of all arcs.
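The minimum-quantity constraint can be made concrete on a toy instance. The brute-force enumeration below (illustration only, not the paper's dynamic program or approximation algorithm) shows how a lower bound on one arc can block an otherwise usable path and push the optimum below the classical max-flow value:

```python
from itertools import product

def max_flow_min_quantities(nodes, arcs, s, t):
    """Brute-force MFPMQ on a tiny integer network: the flow on each arc
    (u, v, cap, minq) must be 0 or lie in [minq, cap]; flow is conserved
    at all nodes except s and t; maximize the net flow out of s."""
    choices = [[0] + list(range(mq, cap + 1)) for (_, _, cap, mq) in arcs]
    best = 0
    for flows in product(*choices):
        balance = {n: 0 for n in nodes}
        for (u, v, _, _), f in zip(arcs, flows):
            balance[u] -= f
            balance[v] += f
        if all(balance[n] == 0 for n in nodes if n not in (s, t)):
            best = max(best, -balance[s])   # net flow leaving the source
    return best

# The minimum quantity 2 on s->a blocks the path via a (its second arc
# only admits 1 unit), so the optimum is 2 instead of the classical 3
arcs = [("s", "a", 3, 2), ("a", "t", 1, 1), ("s", "t", 2, 2)]
val = max_flow_min_quantities({"s", "a", "t"}, arcs, "s", "t")
```

Exhaustive enumeration is exponential in the number of arcs and only serves to illustrate the problem definition on instances of a handful of arcs.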
Deactivation processes of photoexcited (λex = 580 nm) phycocyanobilin (PCB) in methanol were investigated by means of UV/Vis and mid-IR femtosecond (fs) transient absorption (TA) as well as static fluorescence spectroscopy, supported by density-functional-theory calculations of three relevant ground-state conformers, PCBA, PCBB and PCBC, their relative electronic state energies and normal-mode vibrational analysis. UV/Vis fs-TA reveals time constants of 2.0, 18 and 67 ps, describing the decay of PCBB*, the decay of PCBA* and the thermal re-equilibration of PCBA, PCBB and PCBC, respectively, in line with the model by Dietzek et al. (Chem Phys Lett 515:163, 2011) and its predecessors. Significant substantiation and extension of this model is achieved first via mid-IR fs-TA, i.e. the identification of molecular structures and their dynamics, with time constants of 2.6, 21 and 40 ps, respectively. Second, transient IR continuum absorption (CA) is observed in the region above 1755 cm−1 (CA1) and between 1550 and 1450 cm−1 (CA2), indicative of the IR absorption of highly polarizable protons in hydrogen-bonding networks (X–H…Y). This allows the characterization of chromophore protonation/deprotonation processes, associated with the electronic and structural dynamics, on a molecular level. The PCB photocycle is suggested to be closed via a long-lived (> 1 ns), PCBC-like (i.e. deprotonated), fluorescent species.
VIPP proteins aid thylakoid biogenesis and membrane maintenance in cyanobacteria, algae, and plants. Some members of the Chlorophyceae contain two VIPP paralogs termed VIPP1 and VIPP2, which originate from an early gene duplication event during the evolution of green algae. VIPP2 is barely expressed under nonstress conditions but accumulates in cells exposed to high light intensities or H2O2, during recovery from heat stress, and in mutants with defective integration (alb3.1) or translocation (secA) of thylakoid membrane proteins. Recombinant VIPP2 forms rod-like structures in vitro and shows a strong affinity for phosphatidylinositol phosphate. Under stress conditions, >70% of VIPP2 is present in membrane fractions and localizes to chloroplast membranes. A vipp2 knock-out mutant displays no growth phenotypes and no defects in the biogenesis or repair of photosystem II. However, after exposure to high light intensities, the vipp2 mutant accumulates less HSP22E/F and more LHCSR3 protein and transcript. This suggests that VIPP2 modulates a retrograde signal for the expression of nuclear genes HSP22E/F and LHCSR3. Immunoprecipitation of VIPP2 from solubilized cells and membrane-enriched fractions revealed major interactions with VIPP1 and minor interactions with HSP22E/F. Our data support a distinct role of VIPP2 in sensing and coping with chloroplast membrane stress.
In cyanobacteria and plants, VIPP1 plays crucial roles in the biogenesis and repair of thylakoid membrane protein complexes and in coping with chloroplast membrane stress. In chloroplasts, VIPP1 localizes in distinct patterns at or close to envelope and thylakoid membranes. In vitro, VIPP1 forms higher-order oligomers of >1 MDa that organize into rings and rods. However, it remains unknown how VIPP1 oligomerization is related to function. Using time-resolved fluorescence anisotropy and sucrose density gradient centrifugation, we show here that Chlamydomonas reinhardtii VIPP1 binds strongly to liposomal membranes containing phosphatidylinositol-4-phosphate (PI4P). Cryo-electron tomography reveals that VIPP1 oligomerizes into rods that can engulf liposomal membranes containing PI4P. These findings place VIPP1 into a group of membrane-shaping proteins including epsin and BAR domain proteins. Moreover, they point to a potential role of phosphatidylinositols in directing the shaping of chloroplast membranes.
Cognitive Load Theory is considered universally applicable to all kinds of learning scenarios. However, instead of a universal method for measuring cognitive load that suits different learning contexts or target groups, there is a great variety of assessment approaches. Particularly common are subjective rating scales, which even allow the three assumed types of cognitive load to be measured in a differentiated way. Although these scales have proven effective for various learning tasks, they might not be an optimal fit for the learning demands of specific complex environments such as technology-enhanced STEM laboratory courses. The aim of this research was therefore to examine and compare existing rating scales in terms of validity for this learning context and to identify options for adaptation where necessary. For the present study, the two most common subjective rating scales known to differentiate between load types (the Cognitive Load Scale by Leppink et al. and the Naïve Rating Scale by Klepsch et al.) were slightly adapted to the context of learning through structured hands-on experimentation, where elements such as measurement data, experimental setups, and experimental tasks affect knowledge acquisition. N = 95 engineering students performed six experiments on basic electric circuits in which they had to explore fundamental relationships between physical quantities based on observed data. Immediately after experimentation, students answered both adapted scales. Several indicators of validity were analyzed, considering the scales' internal structure and their relation to external variables such as group allocation (participants were randomly assigned to two conditions with contrasting spatial arrangements of the measurement data). For the given data set, the intended three-factorial structure could not be confirmed, and most of the a priori defined subscales showed insufficient internal consistency. A multitrait-multimethod analysis was used to examine convergent and discriminant evidence between the scales, which could not be confirmed sufficiently. The two contrasted experimental conditions were expected to result in different ratings for extraneous load, which was detected by only one adapted scale. As a further step, two new scales were assembled from the overall item pool and the given data set. They revealed a three-factorial structure in accordance with the three types of load and seem to be promising new tools, although their subscales for extraneous load still suffer from low reliability scores.
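The internal consistency mentioned above is typically quantified with Cronbach's alpha. As a minimal illustration of how such a coefficient is computed from an item-score matrix (this is not the authors' analysis code, and the toy ratings are invented):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of subscale ratings."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]                            # number of items in the subscale
    item_var = x.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)     # variance of the respondents' sum scores
    return k / (k - 1) * (1.0 - item_var / total_var)

# Three perfectly consistent items yield alpha = 1
print(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3]]))  # → 1.0
```

Values well below the conventional 0.7 threshold would indicate the kind of insufficient subscale reliability reported in the study.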
The use of vegetable oil as a fuel for agricultural and forestry vehicles allows a CO2 reduction of up to 60%. On the other hand, the availability of vegetable oil is limited, and its price competitiveness depends heavily on the prevailing oil price. In order to reduce the dependence on the availability of specific fuels, the joint research project “MuSt5-Trak” (Multi-Fuel EU Stage 5 Tractor) aims to develop a prototype tractor capable of running on arbitrary mixtures of diesel and rapeseed oil.
Depending on the fuel mixture used, the engine parameters need to be adapted to the respective operating conditions. For this purpose, it is necessary to detect the composition and quality of the fuel mixture. Regardless of the available fuel mixture, all functions required for regular engine operation must be maintained. A conventional active regeneration of the diesel particulate filter (DPF) cannot be carried out because rapeseed oil has a flash point of 230 °C, compared to 80 °C for diesel fuel. As a result, rapeseed oil condenses when post-injection is used at low and medium part-load operating points, which dilutes the engine oil.
In this work, engine-internal measures for achieving DPF regeneration with rapeseed oil and with mixtures of diesel fuel and rapeseed oil are investigated. In order to provide stationary operating conditions in real engine operation, a “high-idle” operating point is chosen. The fuel mixtures are examined with regard to their compatibility with a reduction of the air-fuel ratio, late combustion phasing and multiple injections; the highest temperatures are expected from a combination of these control options. After the completion of a regeneration cycle, the fuel entry into the engine oil is checked. These investigations will serve as a basis for the subsequent development of more complex regeneration strategies for close-to-reality engine operating cycles with varying load conditions.
Patients after total hip arthroplasty (THA) suffer from lingering musculoskeletal restrictions. Three-dimensional (3D) gait analysis in combination with machine-learning approaches can be used to detect these impairments. In this work, features from the 3D gait kinematics of an inertial measurement unit (IMU) system, spatio-temporal parameters (Set 1) and joint angles (Set 2), are proposed as input to a support vector machine (SVM) model to differentiate between impaired and non-impaired gait. The IMU-based features were validated against an optical motion capture (OMC) system using data from 20 patients after THA and a healthy control group of 24 subjects. The SVM model was then trained on both feature sets. The validation of the IMU-based kinematic features revealed root mean squared errors in the joint kinematics from 0.24° to 1.25°. The spatio-temporal gait parameters (STP) showed similarly high validity. The SVM models based on IMU data achieved an accuracy of 87.2% (Set 1) and 97.0% (Set 2). The current work presents valid IMU-based features, employed in an SVM model, for classifying the gait of patients after THA versus healthy controls. The study reveals that the features of Set 2 are more informative for the classification problem. The present IMU system thus shows its potential to provide accurate features for incorporation into a mobile gait-feedback system for patients after THA.
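The described pipeline, feature vectors per subject fed into an SVM, can be sketched with scikit-learn. The synthetic features below merely stand in for the study's spatio-temporal and joint-angle feature sets, and the preprocessing and hyperparameters are illustrative assumptions, not those of the paper:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 24  # subjects per group (synthetic stand-in data, not study data)
X = np.vstack([rng.normal(0.0, 1.0, (n, 6)),    # "healthy" feature vectors
               rng.normal(1.5, 1.0, (n, 6))])   # "impaired" feature vectors
y = np.array([0] * n + [1] * n)

# Standardize features, then fit an RBF-kernel SVM; evaluate with cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

With real gait features the accuracy would of course depend on the feature set, as the contrast between Set 1 and Set 2 in the study shows.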
3D joint kinematics can provide important information about the quality of movements. Optical motion capture (OMC) systems are considered the gold standard in motion analysis. In recent years, however, inertial measurement units (IMUs) have become a promising alternative. The aim of this study was to validate IMU-based 3D joint kinematics of the lower extremities during different movements. Twenty-eight healthy subjects participated in this study. They performed bilateral squats (SQ), single-leg squats (SLS) and countermovement jumps (CMJ). The IMU kinematics was calculated using a recently described sensor-fusion algorithm. A marker-based OMC system served as a reference. Only the technical error based on algorithm performance was considered, incorporating OMC data for the calibration, the initialization, and a biomechanical model. To evaluate the validity of the IMU-based 3D joint kinematics, the root mean squared error (RMSE), the range of motion error (ROME), Bland-Altman (BA) analysis and the coefficient of multiple correlation (CMC) were calculated. The evaluation was twofold: the IMU data were compared, first, to OMC data based on marker clusters and, second, to OMC data based on skin markers attached to anatomical landmarks. The first evaluation revealed mean RMSE and ROME below 3° for all joints and tasks. The most dynamic task, the CMJ, showed error measures approximately 1° higher than the remaining tasks. Mean CMC values ranged from 0.77 to 1 over all joint angles and tasks. The second evaluation showed an increase in the RMSE of 2.28°–2.58° on average for all joints and tasks. Hip flexion revealed the highest average RMSE in all tasks (4.87°–8.27°). The present study demonstrates a valid IMU-based approach for measuring 3D joint kinematics in functional movements of varying demands. The high validity of the results encourages further development and the extension of the present approach into clinical settings.
Understanding the mechanisms and controlling the possibilities of surface nanostructuring is of crucial interest for both fundamental science and application perspectives. Here, we report a direct experimental observation of laser-induced periodic surface structures (LIPSS) formed near a predesigned gold step edge following single-pulse femtosecond laser irradiation. Simulation results based on a hybrid atomistic-continuum model fully support the experimental observations. We experimentally detect nanosized surface features with a periodicity of ∼300 nm and heights of a few tens of nanometers. We identify two key components of single-pulse LIPSS formation: excitation of surface plasmon polaritons and material reorganization. Our results lay a solid foundation toward simple and efficient usage of light for innovative material processing technologies.
Biological soil crusts (biocrusts) have been recognized as key ecological players in arid and semiarid regions at both local and global scales. They are important biodiversity components, provide critical ecosystem services, and strongly influence soil-plant relationships and successional trajectories via facilitative, competitive, and edaphic engineering effects. Despite these important ecological roles, very little is known about biocrusts in seasonally dry tropical forests. Here we present a first baseline study of biocrust cover and ecosystem service provision in a human-modified landscape of the Brazilian Caatinga, South America's largest tropical dry forest. More specifically, we explored (1) across a network of 34 permanent plots of 0.1 ha the impact of disturbance, soil, precipitation, and vegetation-related parameters on biocrust cover in different stages of forest regeneration, and (2) the effect of disturbance on species composition, growth and soil organic carbon sequestration, comparing early and late successional communities in two case study sites at opposite ends of the disturbance gradient. Our findings revealed that biocrusts are a conspicuous component of the Caatinga ecosystem, with at least 50 different taxa of cyanobacteria, algae, lichens and bryophytes (cyanobacteria and bryophytes dominating) covering nearly 10% of the total land surface and doubling the soil organic carbon content relative to bare topsoil. High litter cover, high disturbance by goats, and low soil compaction were the leading drivers of reduced biocrust cover, while precipitation was not associated with it. Second-growth forests supported an evenly spaced biocrust cover, while in old-growth forests biocrust cover was patchy. Disturbance reduced biocrust growth by two thirds and carbon sequestration by half.
In synthesis, biocrusts increase soil organic carbon (SOC) in dry forests and, as they double the SOC content in disturbed areas, may be capable of counterbalancing disturbance-induced soil degradation in this ecosystem. As they fix and fertilize depauperate soils, they may play a substantial role in vegetation regeneration in the human-modified Caatinga, and may take on an extended ecological role given the ever-increasing human encroachment on natural landscapes. Even though biocrusts benefit from human presence in dry forests, high levels of anthropogenic disturbance could threaten biocrust-provided ecosystem services; this calls for further in-depth studies to elucidate the underlying mechanisms.
Ecophysiological characterizations of photoautotrophic communities are necessary not only to identify the response of carbon fixation to different climatic factors, but also to evaluate risks connected to changing environments. In biological soil crusts (BSCs), the description of ecophysiological features is difficult due to the high variability in taxonomic composition and the variable methodologies applied. Especially for BSCs in early successional stages, the available datasets are rare or focused on individual constituents, although these crusts may represent the only photoautotrophic component in many heavily disturbed ruderal areas, such as parking lots or building areas, whose surface area is increasing worldwide. We analyzed the response of photosynthesis and respiration to changing BSC water contents (WCs), temperature and light in two early successional BSCs, and investigated whether the response of these parameters differed between the intact BSC and its isolated dominating components. BSCs dominated by the cyanobacterium Nostoc commune and by the green alga Zygogonium ericetorum were examined. A major divergence between the two BSCs was their absolute carbon fixation rate on a chlorophyll basis, which was significantly higher for the cyanobacterial crust. Nevertheless, independent of species composition, both crust types and their isolated organisms shared convergent features such as high light acclimatization and a minor, very late-occurring depression in carbon uptake at water suprasaturation. This particular set of ecophysiological features may enable these communities to cope with a high variety of climatic stresses and may therefore be a reason for their success in heavily disturbed areas with ongoing human impact. However, the shape of the response differed for the intact BSC compared to the separated organisms, especially in the absolute net photosynthesis (NP) rates. This emphasizes the importance of measuring intact BSCs under natural conditions to collect reliable data for meaningful analysis of BSC ecosystem services.
In recent years, the automotive industry has shifted from purely combustion engine-driven vehicles towards hybridization due to the introduction of CO2 emission legislation. Hybrid powertrains also represent an important pillar and starting point in the journey towards zero emissions and full electrification. Fulfilling the most recent emission standards requires efficient control strategies for the engine, capable of real-time operation. Model accuracy is one of the main parameters that directly influence the performance of such control strategies. Specific methodologies developed in the past, such as physically or phenomenologically based approaches, have already facilitated the modeling of the combustion engine. Even though these models can accurately predict emissions under steady-state conditions, their evaluation is time-consuming and their predictions during transient engine operation are still not sufficiently reliable. The major contribution of the current work is to clarify and apply recent advancements in data-driven modeling techniques, especially in time series forecasting with feedforward neural networks (FFNNs) and long short-term memory networks (LSTMs), to address the limitations mentioned above and to compare the different approaches. The quantity and quality of data are significant challenges for data-driven modeling. This paper studies the modeling of gasoline engine emissions using FFNNs and LSTMs. The data quantity and quality requirements are studied based on a portable emission measurement system (PEMS), measuring at 1 Hz, and additional analyses on an engine test bench with a HiL setup, providing the possibility of increasing the measurement frequency by a factor of five with more sophisticated devices. Subsequently, the training and validation of the FFNNs and LSTMs are outlined, and finally, the model accuracy is discussed.
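For both network types, the measured emission time series must first be cast into supervised learning samples. A minimal windowing sketch (the function name and lag count are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def make_windows(series, n_lags):
    """Split a 1-D emission trace into (X, y) pairs: n_lags past samples -> next sample.
    An FFNN consumes the rows of X as flat vectors, while an LSTM would
    reshape X to (samples, n_lags, 1) to preserve the temporal axis."""
    s = np.asarray(series, dtype=float)
    X = np.stack([s[i:i + n_lags] for i in range(len(s) - n_lags)])
    y = s[n_lags:]
    return X, y

X, y = make_windows([0, 1, 2, 3, 4, 5], n_lags=3)
print(X.shape, y.tolist())  # → (3, 3) [3.0, 4.0, 5.0]
```

The choice of lag count interacts directly with the measurement frequency discussed above: at 1 Hz a window spans n_lags seconds of engine operation, at 5 Hz only a fifth of that.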
Longwave radiative heat transfer is a key determinant of energy consumption in buildings, and view factor calculations are therefore required for the detailed simulation of heat transfer between buildings and their environment as well as for heat exchange within rooms. Typically, these calculations are either derived through analytical means or performed as part of the simulation process. This paper describes a methodology for employing RADIANCE, a command-line open-source raytracing software, to perform view factor calculations. Since it was introduced in the late 1980s, RADIANCE has been almost exclusively employed as a back-end engine for lighting simulations. We discuss the theoretical basis for calculating view factors through Monte Carlo calculations with RADIANCE and propose a corresponding workflow. The results generated with RADIANCE are validated by comparing them with analytical solutions. The fundamental methodology proposed in this paper can be scaled up to calculate view factors for more complex, practical scenarios. Furthermore, the portability, multi-processing functionality and cross-platform compatibility offered by RADIANCE can also be exploited in the calculation of view factors.
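The Monte Carlo principle behind such a raytracing-based estimate can be illustrated without RADIANCE itself. The sketch below (a stand-in, not the paper's workflow) samples point pairs on two coaxial parallel unit squares one unit apart and averages the view factor kernel cosθ1·cosθ2/(πr²); the analytical view factor for this configuration is ≈ 0.1998:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000
# Random point pairs on two coaxial, parallel unit squares, 1 unit apart
p1 = np.column_stack([rng.random(N), rng.random(N), np.zeros(N)])
p2 = np.column_stack([rng.random(N), rng.random(N), np.ones(N)])
d = p2 - p1
r2 = (d ** 2).sum(axis=1)
cos1 = d[:, 2] / np.sqrt(r2)   # angle between ray and emitter normal (+z)
cos2 = cos1                    # receiver normal faces back along -z
# F12 = (1/A1) * double integral of cos1*cos2 / (pi * r^2), with A1 = A2 = 1
F12 = float(np.mean(cos1 * cos2 / (np.pi * r2)))
print(f"F12 ≈ {F12:.3f}")      # analytical reference: 0.1998
```

A raytracer like RADIANCE performs essentially this sampling, but with ray origins and directions distributed over arbitrary geometry, which is what makes the approach scalable to the complex scenarios mentioned above.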
Characterization of an Aerosol-Based Photobioreactor for Cultivation of Phototrophic Biofilms (2021)
Phototrophic biofilms, in particular terrestrial cyanobacteria, offer a variety of biotechnologically interesting products such as natural dyes, antibiotics or dietary supplements. However, phototrophic biofilms are difficult to cultivate in submerged bioreactors. A new generation of biofilm photobioreactors imitates the natural habitat, resulting in higher productivity. In this work, an aerosol-based photobioreactor is presented and characterized for the cultivation of phototrophic biofilms. Experiments and simulations of the aerosol distribution showed a uniform aerosol supply to the biofilms. Compared to previous prototypes, the growth of the terrestrial cyanobacterium Nostoc sp. could be almost tripled. Different surfaces for biofilm growth were investigated with regard to hydrophobicity, contact angle, and light and temperature distribution, and these results were successfully reproduced in simulations. Finally, the growth of Nostoc sp. on the different surfaces was investigated, and the biofilm thickness was measured noninvasively using optical coherence tomography. It could be shown that the cultivation surface had no influence on biomass production but did affect biofilm thickness.
In this contribution, the junior researcher Dr.-Ing. Dorina Strieth from the Bioprocess Engineering group at TU Kaiserslautern introduces herself. In addition to her current research and teaching activities, she discusses the need for knowledge transfer to civil society. On the technical side, she reports on recent results on the intelligent use of phototrophic biofilms and on the potential for the biotechnological production of sustainable building materials.
Initiated by a task in tunable microoptics, but not limited to this application, a microfluidic droplet array in an upright-standing module with 3 × 3 subcells and droplet actuation via electrowetting is presented. Each subcell is filled with a single (transparent) water droplet, serving as a movable iris, surrounded by opaque blackened decane. Each subcell measures 1 × 1 mm² and incorporates 2 × 2 quadratically arranged positions for the droplet. All 3 × 3 droplets are actuated synchronously by electrowetting on dielectric (EWOD). The droplet speed is up to 12 mm/s at 130 V (Vrms), with response times of about 40 ms; the minimum operating voltage is 30 V. Horizontal and vertical movement of the droplets is demonstrated. Furthermore, a minor modification of the subcells allows us to exploit the flattening of each droplet: the opaque decane can then cover each water droplet and render each subcell opaque, resulting in switchable irises of constant opening diameter. The concept does not require any mechanically moving parts or external pumps.
The measurement and assessment of indoor air quality in terms of respirable particulate constituents is relevant, especially in light of the COVID-19 pandemic and associated infection events. To analyze the indoor infection potential and to develop customized hygiene concepts, monitoring of anthropogenic aerosol spreading is necessary. Standard laboratory equipment is usually used for indoor aerosol measurements. However, these devices are expensive, unwieldy and time-consuming to operate. The idea is to replace this standard laboratory equipment with low-cost sensors of the kind widely used for monitoring fine dust (particulate matter, PM). Due to the low acquisition costs, many sensors can be deployed to determine the aerosol load, even in large rooms. The aim of this work is therefore to verify the measurement capability of low-cost sensors. For this purpose, two different models of low-cost sensors are compared with established laboratory measuring instruments. The study was performed with artificially prepared NaCl aerosols of well-defined size and morphology. In addition, the influence of relative humidity, which can vary significantly indoors, on the measurement capability of the low-cost sensors is investigated; for this purpose, a heating stage was developed and tested. The results show a discrepancy in measurement capability between the low-cost sensors and the laboratory instruments. This difference can be attributed to the partially different measuring methods as well as the different measured particle size ranges. The determined measurement accuracy is nevertheless good, considering the compactness and the acquisition price of the low-cost sensors.