Laser-based powder bed fusion (L-PBF) is a promising technology for the production of near-net-shaped metallic components. The high surface roughness and the comparatively low dimensional accuracy of such components, however, usually require finishing by a subtractive process such as milling or grinding in order to meet the requirements of the application. Materials manufactured via L-PBF are characterized by a unique microstructure and anisotropic material properties. These specific properties can also affect the subtractive processes themselves. In this paper, the effect of L-PBF on the machinability of the aluminum alloy AlSi10Mg during milling is explored. The chips, the process forces, the surface morphology, the microhardness, and the burr formation are analyzed as a function of the manufacturing parameter settings used for L-PBF and the direction of feed motion of the end mill relative to the build-up direction of the parts. The results are compared with conventionally cast AlSi10Mg. The analysis shows that L-PBF influences the machinability: differences between the reference and the L-PBF AlSi10Mg were observed in the chip form, the process forces, the surface morphology, and the burr formation. The initial manufacturing method of the part thus needs to be considered during the design of the finishing process to achieve suitable results.
Fucoidans are multifunctional marine macromolecules that are subjected to numerous and various downstream processes during their production. These processes are considered the most important abiotic factors affecting fucoidan chemical skeletons, quality, physicochemical properties, biological properties and industrial applications. Since a universal protocol for fucoidan production has not yet been established, all currently used processes are presented and justified. The current article complements our previous articles in the fucoidan field, provides an updated overview of the different downstream processes, including pre-treatment, extraction, purification and enzymatic modification, and presents the recent non-traditional applications of fucoidans in relation to their characteristics.
Background: The positive effect of carbohydrates from commercial beverages on soccer-specific exercise has been clearly demonstrated. However, no study is available that uses a home-mixed beverage in a test where technical skills were required. Methods: Nine subjects participated voluntarily in this double-blind, randomized, placebo-controlled crossover study. On three testing days, the subjects performed six Hoff tests with a 3-min active break as a preload and then the Yo-Yo Intermittent Running Test Level 1 (Yo-Yo IR1) until exhaustion. On test days 2 and 3, the subjects received either a 69 g carbohydrate-containing drink (syrup–water mixture) or a carbohydrate-free drink (aromatic water). Beverages were given in several doses of 250 mL each: 30 min before and immediately before the exercise and after 18 and 39 min of exercise. The primary target parameters were the running performance in the Hoff test and Yo-Yo IR1, body mass and heart rate. Statistical differences between the variables of both conditions were analyzed using paired samples t-tests. Results: The maximum heart rate in Yo-Yo IR1 showed significant differences (syrup: 191.1 ± 6.2 bpm; placebo: 188.0 ± 6.89 bpm; t(6) = −2.556; p = 0.043; dz = 0.97). The running performance in Yo-Yo IR1 under the syrup condition significantly increased by 93.33 ± 84.85 m (0–240 m) on average (p = 0.011). Conclusions: The intake of a syrup–water mixture with a total of 69 g of carbohydrates leads to an increase in high-intensity running performance after soccer-specific loads. Therefore, the intake of carbohydrate solutions is recommended for intermittent loads and should be increasingly considered by coaches and players.
This paper aims to improve the traditional calibration method for reconfigurable self-X (self-calibration, self-healing, self-optimization, etc.) sensor interface readout circuits for Industry 4.0. A cost-effective test stimulus is applied to the device under test, and the transient response of the system is analyzed to correlate with the circuit's characteristic parameters. Due to the complexity of the search and objective space of smart sensory electronics, a novel experience replay particle swarm optimization (ERPSO) algorithm is proposed and shown to have better searching capability than some currently well-known PSO algorithms. The newly proposed ERPSO expands the selection procedure of classical PSO by introducing an experience replay buffer (ERB), intending to reduce the probability of becoming trapped in local minima. The ERB reflects the archive of previously visited global best particles, while its selection is based upon an adaptive epsilon-greedy method in the velocity updating model. The performance of the proposed ERPSO algorithm is verified by using eight different popular benchmarking functions. Furthermore, an extrinsic evaluation of the ERPSO algorithm is also examined on a reconfigurable wide-swing indirect current-feedback instrumentation amplifier (CFIA). For the latter test, we propose an efficient optimization procedure using total harmonic distortion analyses of the CFIA output to reduce the total number of measurements and save considerable optimization time and cost. The proposed optimization methodology is roughly 3 times faster than the classical optimization process. The circuit is implemented using Cadence design tools and CMOS 0.35 µm technology from Austria Microsystems (AMS). Efficiency and robustness are the key features of the proposed methodology toward implementing reliable sensory electronic systems for Industry 4.0 applications.
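To make the ERPSO idea concrete, the following Python sketch shows a classical PSO loop whose global attractor is drawn epsilon-greedily from an experience replay buffer of former global bests. All names, bounds and coefficient values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def erpso(f, dim, n_particles=30, iters=200, buf_size=10, eps=0.2, seed=0):
    """Sketch of experience-replay PSO: the global attractor is drawn
    epsilon-greedily from a buffer of previously visited global bests."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    buffer = [g.copy()]                      # experience replay buffer (ERB)
    w, c1, c2 = 0.7, 1.5, 1.5                # inertia and acceleration weights
    for _ in range(iters):
        # epsilon-greedy: mostly exploit the current gbest, sometimes replay an old one
        attractor = buffer[rng.integers(len(buffer))] if rng.random() < eps else g
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (attractor - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        if pval.min() < f(g):
            g = pbest[pval.argmin()].copy()
            buffer.append(g.copy())          # archive the new global best
            buffer = buffer[-buf_size:]
    return g, f(g)

best_x, best_f = erpso(lambda z: float(np.sum(z ** 2)), dim=5)   # sphere benchmark
print(best_f)
```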
In recent years, the concept of a centralized drainage system that connects an entire city to one single treatment plant has increasingly been questioned in terms of costs, reliability, and environmental impacts. This study introduces an optimization approach based on decentralization in order to develop a cost-effective and sustainable sewage collection system. For this purpose, a new algorithm based on the growing spanning tree algorithm is developed for decentralized layout generation and treatment plant allocation. The trade-off between construction and operation costs, resilience, and the degree of centralization is a multiobjective problem that consists of two subproblems: the layout of the networks and the hydraulic design. The innovative characteristics of the proposed framework are that layout and hydraulic designs are solved simultaneously, three objectives are optimized together, and the entire problem-solving process is self-adaptive. The model is then applied to a real case study. The results show that finding an optimum degree of centralization could not only reduce the network's costs by 17.3% but also significantly increase its structural resilience compared to fully centralized networks.
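The layout-generation step can be pictured with a small Python sketch: trees grow outward from candidate treatment-plant nodes, always claiming the cheapest frontier edge, so that every node is eventually assigned to one decentralized plant. This is a hedged illustration of a growing-spanning-tree scheme under assumed data structures, not the paper's exact algorithm.

```python
import heapq

def growing_spanning_trees(nodes, edges, plant_sites):
    """Sketch: sewer trees grow outward from candidate treatment-plant nodes,
    cheapest frontier edge first (illustrative, not the paper's algorithm)."""
    adj = {n: [] for n in nodes}
    for u, v, cost in edges:
        adj[u].append((cost, v))
        adj[v].append((cost, u))
    assigned = {p: p for p in plant_sites}          # node -> its plant
    layout = []                                     # chosen sewer edges
    frontier = [(c, p, v, p) for p in plant_sites for c, v in adj[p]]
    heapq.heapify(frontier)
    while frontier:
        c, u, v, plant = heapq.heappop(frontier)
        if v in assigned:
            continue                                # node already claimed
        assigned[v] = plant
        layout.append((u, v, c))
        for c2, w in adj[v]:
            if w not in assigned:
                heapq.heappush(frontier, (c2, v, w, plant))
    return layout, assigned

nodes = ["a", "b", "c", "d"]
edges = [("a", "b", 1.0), ("b", "c", 2.0), ("c", "d", 1.5), ("a", "d", 4.0)]
print(growing_spanning_trees(nodes, edges, plant_sites=["a", "d"]))
```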
It is difficult for robots to handle a vibrating deformable object. Even for human beings, it is a high-risk operation to, for example, insert a vibrating linear object into a small hole. However, fast manipulation using a robot arm is not just a dream; it may be achieved if some important features of the vibration are detected online. In this paper, we present an approach for fast manipulation using a force/torque sensor mounted on the robot's wrist. A template matching method is employed to recognize the vibrational phase of the deformable object. As a result, fast manipulation can be performed with a high success rate, even under acute vibration. Experiments inserting a deformable object into a hole were conducted to test the presented method. The results demonstrate that the presented sensor-based online fast manipulation is feasible.
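A minimal Python sketch of the phase-recognition idea, assuming a sampled single-axis force signal and one reference template; normalized cross-correlation stands in here for whatever template-matching variant the paper actually uses.

```python
import numpy as np

def vibration_phase(signal, template):
    """Sketch: locate the template's vibrational phase in a force/torque
    trace via normalized cross-correlation (names are illustrative)."""
    t = (template - template.mean()) / template.std()
    n = len(t)
    scores = []
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        scores.append(float(np.dot(w, t)) / n)
    best = int(np.argmax(scores))
    return best, scores[best]     # sample offset of best phase match, similarity

fs = 1000.0                                    # sampling rate [Hz], assumed
t = np.arange(0, 1, 1 / fs)
force = np.sin(2 * np.pi * 5 * t + 1.3)        # synthetic vibrating force trace
template = np.sin(2 * np.pi * 5 * np.arange(0, 0.2, 1 / fs))  # one reference cycle
offset, score = vibration_phase(force, template)
print(offset, score)
```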
As a consequence of the real estate market crash after 2008, large investors invested a significant amount of wealth in single-family houses to construct portfolios of rental dwellings whose income is securitized on the capital market. In some local housing markets, these investors own remarkable numbers of single-family houses. Furthermore, their trading activities have resulted in a new investment strategy, which exacerbates property wealth concentration and polarization. This new investment strategy and its portfolio optimization raise questions about their influence on housing markets. This paper first aims to find an optimal portfolio strategy by maximizing the expected utility of terminal wealth, adopting a stochastic model that includes a variety of economic states to estimate house prices. Second, it aims to analyze the effect of large investors on the housing market. The results show that the investment strategies of large investors depend on the balance among the economic state, maintenance costs, rental income, the interest rate and the investors' willingness to invest in housing, and that their effect depends on the state of the economy.
Load modeling is one of the crucial tasks for improving smart grids' energy efficiency. Among many alternatives, machine learning-based load models have become popular in applications and have shown outstanding performance in recent years. The performance of these models relies heavily on the quality and quantity of data available for training. However, gathering a sufficient amount of high-quality data is time-consuming and extremely expensive. In the last decade, Generative Adversarial Networks (GANs) have demonstrated their potential to solve the data shortage problem by generating synthetic data by learning from recorded/empirical data. Well-trained synthetic datasets can reduce the prediction error of electricity consumption when combined with empirical data, and they can also be used to enhance risk management calculations. In this study, we therefore propose RCGAN, TimeGAN, CWGAN, and RCWGAN, which take individual electricity consumption data as input to provide synthetic data. Our work focuses on one-dimensional time series, and numerical experiments on an empirical dataset show that GANs are indeed able to generate synthetic data with a realistic appearance.
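For intuition, here is a deliberately minimal GAN training loop in PyTorch for one-dimensional load profiles. The study's RCGAN/TimeGAN/CWGAN/RCWGAN architectures are recurrent or conditional and considerably more elaborate, so this is only a hedged sketch of the adversarial principle; all sizes and the placeholder data are assumptions.

```python
import torch
import torch.nn as nn

SEQ_LEN, NOISE_DIM = 96, 16          # e.g. 96 quarter-hour values per day (assumed)

G = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, SEQ_LEN))
D = nn.Sequential(nn.Linear(SEQ_LEN, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(256, SEQ_LEN)      # placeholder for empirical consumption data

for step in range(200):
    batch = real[torch.randint(0, len(real), (32,))]
    fake = G(torch.randn(32, NOISE_DIM))
    # discriminator: score real profiles as 1, generated ones as 0
    loss_d = bce(D(batch), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator: make the discriminator score fakes as real
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

synthetic = G(torch.randn(10, NOISE_DIM)).detach()   # 10 synthetic profiles
```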
Recently, phase field modeling of fatigue fracture has gained a lot of attention from researchers, since the fatigue damage of structures is a crucial issue in mechanical design. In contrast to traditional phase field fracture models, our approach considers not only the elastic strain energy and the crack surface energy but additionally introduces a fatigue energy contribution, caused by cyclic load, into the regularized energy density function. Compared to other types of fracture phenomena, fatigue damage occurs only after a large number of load cycles, which requires a large computational effort in simulation. Furthermore, the choice of the cycle number increment is usually determined by a compromise between simulation time and accuracy. In this work, we propose an efficient phase field method for cyclic fatigue propagation that only requires moderate computational cost without sacrificing accuracy. We divide the entire fatigue fracture simulation into three stages and apply different cycle number increments in each damage stage. The basic concept of the algorithm is to associate the cycle number increment with the damage increment of each simulation iteration. Numerical examples show that our method can effectively predict the phenomenon of fatigue crack growth and reproduce fracture patterns.
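The stage-wise cycle-jump idea can be summarized in a few lines: choose the cycle number increment inversely proportional to the current damage rate, so that each jump produces roughly the same damage increment. A minimal sketch, with all names and bounds assumed:

```python
def adaptive_cycle_increment(damage_rate, target_dphi=0.01,
                             dn_min=1, dn_max=10_000):
    """Sketch of a cycle-jump rule: pick the cycle increment dN so that the
    expected phase-field damage increment per jump stays near target_dphi.
    All names and the clipping bounds are illustrative assumptions."""
    if damage_rate <= 0:
        return dn_max                     # no damage growth: jump far ahead
    dn = target_dphi / damage_rate        # dN = dphi_target / (dphi/dN)
    return int(min(max(dn, dn_min), dn_max))

# toy usage: damage grows slowly at first, then accelerates
for rate in (1e-7, 1e-5, 1e-3):
    print(rate, "->", adaptive_cycle_increment(rate))
```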
Phospho-regulation of the Shugoshin - Condensin interaction at the centromere in budding yeast
(2020)
Correct bioriented attachment of sister chromatids to the mitotic spindle is essential for chromosome segregation. In budding yeast, the conserved protein shugoshin (Sgo1) contributes to biorientation by recruiting the protein phosphatase PP2A-Rts1 and the condensin complex to centromeres. Using peptide prints, we identified a Serine-Rich Motif (SRM) of Sgo1 that mediates the interaction with condensin and is essential for centromeric condensin recruitment and the establishment of biorientation. We show that the interaction is regulated via phosphorylation within the SRM and we determined the phospho-sites using mass spectrometry. Analysis of the phosphomimic and phosphoresistant mutants revealed that SRM phosphorylation disrupts the shugoshin–condensin interaction. We present evidence that Mps1, a central kinase in the spindle assembly checkpoint, directly phosphorylates Sgo1 within the SRM to regulate the interaction with condensin and thereby condensin localization to centromeres. Our findings identify novel mechanisms that control shugoshin activity at the centromere in budding yeast.
Global trends such as climate change and the scarcity of sustainable raw materials require adaptive, more flexible and resource-saving wastewater infrastructures for rural areas. Since 2018, in the community Reinighof, an isolated site in the countryside of Rhineland-Palatinate (Germany), an autarkic, decentralized wastewater treatment and phosphorus recovery concept has been developed, implemented and tested. While feces are composted, an easy-to-operate system for producing struvite as a mineral fertilizer was developed and installed to recover phosphorus from urine. The nitrogen-containing supernatant of this process stage is treated in a special soil filter and afterwards discharged to a constructed wetland for grey water treatment, followed by an evaporation pond. To recover more than 90% of the phosphorus contained in the urine, the influence of the magnesium source, the dosing strategy, the molar Mg:P ratio and the reaction and sedimentation time were investigated. The results show that, with a long reaction time of 1.5 h and a molar Mg:P ratio above 1.3, constraints concerning the magnesium source can be overcome and a stable process can be achieved even under varying boundary conditions. Within the special soil filter, the high ammonium nitrogen concentrations of over 3000 mg/L in the supernatant of the struvite reactor were considerably reduced. In the effluent of the following constructed wetland for grey water treatment, the ammonium nitrogen concentrations were below 1 mg/L. This resource-efficient decentralized wastewater treatment is self-sufficient, produces valuable fertilizer and does not need a centralized wastewater system as backup. It has high potential to be transferred to other rural communities.
This paper discusses the problem of automatic off-line programming and motion planning for industrial robots. First, a new concept consisting of three steps is proposed. In the first step, a new method for on-line motion planning is introduced. The motion planning method is based on the A*-search algorithm and works in the implicit configuration space. During the search, collisions are detected in the explicitly represented Cartesian workspace by hierarchical distance computation. In the second step, the trajectory planner transforms the path into a time- and energy-optimal robot program. The practical application of these two steps strongly depends on a method for high-accuracy robot calibration, i.e., mapping the virtual world onto the real world, which is discussed in the third step.
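As a rough illustration of the first step, the following Python sketch runs A* over an implicitly generated configuration-space grid; cells are expanded lazily and checked for collision only when visited. The neighbor, collision and heuristic helpers are illustrative assumptions, not the paper's implementation.

```python
import heapq

def a_star(start, goal, neighbors, collision_free, h):
    """Sketch of A* in an implicitly discretized configuration space:
    cells are generated on demand (helper names are assumptions)."""
    open_set = [(h(start, goal), 0.0, start)]
    g_cost, parent = {start: 0.0}, {start: None}
    while open_set:
        _, g, q = heapq.heappop(open_set)
        if q == goal:
            path = []
            while q is not None:
                path.append(q)
                q = parent[q]
            return path[::-1]
        for q2, step in neighbors(q):            # implicit grid: generated lazily
            if not collision_free(q2):           # workspace distance check
                continue
            g2 = g + step
            if g2 < g_cost.get(q2, float("inf")):
                g_cost[q2], parent[q2] = g2, q
                heapq.heappush(open_set, (g2 + h(q2, goal), g2, q2))
    return None

# toy 2-DOF example on a unit grid with one blocked cell
nbrs = lambda q: [((q[0] + dx, q[1] + dy), 1.0)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
free = lambda q: q != (1, 0) and all(0 <= c <= 3 for c in q)
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(a_star((0, 0), (3, 0), nbrs, free, manhattan))
```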
This paper presents a new approach to parallel motion planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on the A*-search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel motion planner on a workstation cluster with 9 PCs and tested the planner in several benchmark environments. With optimal discretisation, the new approach usually shows linear speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
A practical distributed planning and control system for industrial robots is presented. The hierarchical concept consists of three independent levels. Each level is modularly implemented and supplies an application programming interface (API) to the next higher level. At the top level, we propose an automatic motion planner. The motion planner is based on a best-first search algorithm and needs no essential off-line computations. At the middle level, we propose a PC-based robot control architecture, which can easily be adapted to any industrial kinematics and application. Based on a client/server principle, the control unit establishes an open user interface for including application-specific programs. At the bottom level, we propose a flexible and modular concept for the integration of the distributed motion control units based on the CAN bus. The concept allows an on-line adaptation of the control parameters according to the robot's configuration. This ensures high accuracy in path execution and improves the overall system performance.
This paper presents a new approach to parallel motion planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on the A*-search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel motion planner on a workstation cluster with 9 PCs and tested the planner in several benchmark environments. With optimal discretisation, the new approach usually shows linear, and sometimes even superlinear, speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
A new problem for the automated off-line programming of industrial robot applications is investigated. Multi-goal path planning is the task of finding a collision-free path that connects a set of goal poses while minimizing, e.g., the total path length. Our solution is based on an earlier reported path planner for industrial robot arms with 6 degrees of freedom in an on-line given 3D environment. To control the path planner, four different goal selection methods are introduced and compared. While the Random and the Nearest Pair Selection methods can be used with any path planner, the Nearest Goal and the Adaptive Pair Selection methods are favorable for our planner. With the latter two goal selection methods, the multi-goal path planning task can be significantly accelerated, because they are able to automatically solve the simplest path planning problems first. In summary, compared to Random or Nearest Pair Selection, this new multi-goal path planning approach results in a further cost reduction of the programming phase.
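A hedged sketch of the greedy selection idea behind the Nearest Goal method: always plan toward the goal closest to the current pose. A real implementation would query the path planner for costs; plain Euclidean distance is used here as a stand-in.

```python
def nearest_goal_order(start, goals, dist):
    """Sketch of greedy Nearest Goal selection: repeatedly pick the goal
    closest to the current pose (illustrative, planner-independent)."""
    order, current, remaining = [], start, list(goals)
    while remaining:
        nxt = min(remaining, key=lambda g: dist(current, g))
        order.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return order

euclid = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
goals = [(4, 0), (1, 1), (2, 5)]
print(nearest_goal_order((0, 0), goals, euclid))   # -> [(1, 1), (4, 0), (2, 5)]
```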
Applications of Efficient Methods in Automation - Universität Karlsruhe at the SPS97 in Nuremberg -
(1998)
Motion planning for industrial robots is a necessary prerequisite for autonomous systems to move through their environment without collisions. Taking dynamic obstacles into account at runtime, however, requires powerful algorithms to solve this task in real time. One way to accelerate these algorithms is the efficient use of scalable parallel processing. The software implementation can only succeed, however, if a parallel computer is available that offers high data throughput at low latency. Moreover, this parallel computer must be operable with reasonable effort and offer a good price-performance ratio so that parallel processing is increasingly adopted in industry. This article presents a workstation cluster based on nine standard PCs interconnected by a special communication card. The individual sections describe the experience gained during commissioning, system administration and application. As an example of an application on this cluster, a parallel motion planner for industrial robots is described.
In response priming experiments, a participant has to respond as quickly and as accurately as possible to a target stimulus preceded by a prime. The prime and the target can either be mapped to the same response (consistent trial) or to different responses (inconsistent trial). Here, we investigate the effects of two sequential primes (each one either consistent or inconsistent) followed by one target in a response priming experiment. We employ discrete-time hazard functions of response occurrence and conditional accuracy functions to explore the temporal dynamics of sequential motor activation. In two experiments (small-N design, 12 participants, 100 trials per cell and subject), we find that (1) the earliest responses are controlled exclusively by the first prime if primes are presented in quick succession, (2) intermediate responses reflect competition between primes, with the second prime increasingly dominating the response as its time of onset is moved forward, and (3) only the slowest responses are clearly controlled by the target. The current study provides evidence that sequential primes meet strict criteria for sequential response activation. Moreover, it suggests that primes can influence responses out of a memory buffer when they are presented so early that participants are forced to delay their responses.
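Both descriptive tools can be computed directly from trial data. The following Python sketch (synthetic data, illustrative 100 ms bins) estimates the discrete-time hazard of response occurrence and the conditional accuracy function:

```python
import numpy as np

def discrete_hazard(rts, correct, bins):
    """Sketch: discrete-time hazard of response occurrence h(t) and the
    conditional accuracy function CAF(t) from response times (illustrative)."""
    rts, correct = np.asarray(rts), np.asarray(correct, dtype=bool)
    idx = np.digitize(rts, bins)                   # bin index per response
    hazard, caf = [], []
    survived = len(rts)
    for b in range(1, len(bins)):
        in_bin = idx == b
        k = in_bin.sum()
        hazard.append(k / survived if survived else np.nan)  # P(respond | not yet)
        caf.append(correct[in_bin].mean() if k else np.nan)  # accuracy given t
        survived -= k
    return np.array(hazard), np.array(caf)

rng = np.random.default_rng(1)
rts = rng.gamma(6, 50, 1000)                          # synthetic RTs [ms]
acc = rng.random(1000) < np.clip(rts / 600, 0.5, 1)   # slower -> more accurate
h, c = discrete_hazard(rts, acc, bins=np.arange(0, 801, 100))
print(np.round(h, 3))
print(np.round(c, 3))
```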
A novel shadowgraphic inline probe to measure crystal size distributions (CSD), based on acquired greyscale images, is evaluated in terms of elevated temperatures and fragile crystals, and compared to well-established, alternative online and offline measurement techniques, i.e., sieving analysis and online microscopy. Additionally, the operation limits, with respect to temperature, supersaturation, suspension, and optical density, are investigated. Two different substance systems, potassium dihydrogen phosphate (prisms) and thiamine hydrochloride (needles), are crystallized for this purpose at 25 L scale. Crystal phases of the well-known KH2PO4/H2O system are measured continuously by the inline probe and in a bypass by the online microscope during cooling crystallizations. Both measurement techniques show similar results with respect to the crystal size distribution, except for higher temperatures, where the bypass variant tends to fail due to blockage. Thiamine hydrochloride, a substance forming long and fragile needles in aqueous solutions, is solidified with an anti-solvent crystallization with ethanol. The novel inline probe could identify a new field of application for image-based crystal size distribution measurements, with respect to difficult particle shapes (needles) and elevated temperatures, which cannot be evaluated with common techniques.
The fluid dynamic (flow rates) and hydrodynamic behavior (local droplet size distributions and local holdup) of a continuous DN300 pump-mixer were investigated using water as the continuous phase and paraffin oil as the dispersed phase. The influence of the impeller speed (375 to 425 rpm), the feed phase ratio (10 to 30 volume percent), and the total flow rate (0.5 to 2.3 L/min) were investigated by measuring the pumping height, the local holdup of the disperse phase, and the droplet size distribution (DSD). The latter was measured at three different vessel positions using an image-based telecentric shadowgraphic technique. The droplet diameters were extracted from the acquired images using a neural network. The Sauter mean diameters were calculated from the DSD and correlated with an extended model based on Doulah (1975), considering the impeller speed, the feed phase ratio, and additionally the flow rate. The new correlation can describe an extensive database containing 155 experiments on the fluid dynamic and hydrodynamic behavior within a 15% error range.
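The Sauter mean diameter used in such correlations follows the standard definition d32 = Σd³ / Σd², as in this short Python sketch with synthetic droplet data:

```python
import numpy as np

def sauter_mean_diameter(diameters):
    """Sauter mean diameter d32 = sum(d_i^3) / sum(d_i^2) of a measured
    droplet size distribution (standard definition; data here is synthetic)."""
    d = np.asarray(diameters, dtype=float)
    return (d ** 3).sum() / (d ** 2).sum()

rng = np.random.default_rng(0)
drops_um = rng.lognormal(mean=5.0, sigma=0.4, size=2000)  # droplet diameters [µm]
print(f"d32 = {sauter_mean_diameter(drops_um):.1f} µm")
```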
In this work, steady-state droplet size distributions in a DN300 stirred batch vessel with a Rushton turbine impeller are investigated using an insertion probe based on the telecentric transmitted light principle. High-resolution droplet size distributions are extracted from the images using a convolutional neural network for image analysis in order to investigate the influence of impeller speed and phase fraction (up to 50 vol.-%). In addition, Sauter mean diameters were calculated and correlated with two semi-empirical approaches: while the standard approach only achieved 5.7% accuracy, the correlation of Laso et al. provided a relative mean error of 4.0%. In addition, the correlated exponent in the Weber number was fitted to the experimental data of this work, yielding a slightly different value than the theoretical one (−0.6), which allows a better representation of the low coalescence tendency of the system, which is usually neglected in standard procedures.
Habitat fragmentation and forest management have been considered to drastically alter the nature of forest ecosystems globally. However, much uncertainty remains regarding the causative mechanisms mediating temperate forest responses, such as the forest physical environment and the structure of woody plant assemblages, regardless of the role these forests play for global sustainability. In this paper, we examine how both habitat fragmentation and timber exploitation via silvicultural operations affect these two factors at local and habitat spatial scales in a hyper-fragmented landscape of mixed beech forests spanning more than 1500 km² in SW Germany. Variables were recorded across 57 plots of 1000 m² covering four habitats: small forest fragments, forest edges within large control forests, as well as managed and unmanaged forest interior sites. As expected, forest habitats differed in disturbance level, physical conditions and community structure at plot and habitat scale. Briefly, the diversity of plant assemblages differed across all forest habitats (highest in edge forests) and correlated with integrative indices of edge, fragmentation and management effects. Surprisingly, managed and unmanaged forests did not differ in terms of species richness at the local spatial scale, but managed forests exhibited a clear signal of physical/floristic homogenization as species promoted by silviculture proliferated, i.e., impoverished communities at the landscape scale. Moreover, the functional composition of plant communities responded to the microclimatic regime within forest fragments, resulting in a higher prevalence of species adapted to these microclimatic conditions. Our results underscore the notion that forest fragmentation and silvicultural management (1) promote changes in microclimatic regimes, (2) alter the balance between light-demanding and shade-adapted species, (3) support diverse floras across forest edges, and (4) alter patterns of beta diversity. Hence, in human-modified landscapes, edge-affected habitats can be recognized as biodiversity reservoirs in contrast to impoverished managed interior forests. Furthermore, our results ratify the role of unmanaged forests as a source of environmental variability, species turnover, and distinct woody plant communities.
One of the ongoing tasks in space structure testing is the vibration test, in which a given structure is mounted onto a shaker and excited by a certain input load over a given frequency range, in order to reproduce the rigors of launch. These vibration tests need to be conducted to ensure that the devised structure meets the expected loads of its future application. However, the structure must not be overtested, to avoid any risk of damage. For this, the system's response to the testing loads, i.e., stresses and forces in the structure, must be monitored and predicted live during the test. In order to solve the issues associated with existing methods of live monitoring of the structure's response, this paper investigated the use of artificial neural networks (ANNs) to predict the system's responses during the test. Hence, a framework was developed with different use cases to compare various kinds of artificial neural networks and eventually identify the most promising one. Thus, the conducted research constitutes a novel method for the live prediction of stresses, allowing failure to be evaluated for different types of material via yield criteria.
INRECA offers tools and methods for developing, validating, and maintaining classification, diagnosis and decision support systems. INRECA's basic technologies are inductive and case-based reasoning [9]. INRECA fully integrates [2] both techniques within one environment and uses the respective advantages of both technologies. Its object-oriented representation language CASUEL [10, 3] allows the definition of complex case structures, relations, similarity measures, as well as background knowledge to be used for adaptation. The object-oriented representation language makes INRECA a domain-independent tool for its intended kinds of tasks. When problems are solved via case-based reasoning, the primary kind of knowledge used during problem solving is the very specific knowledge contained in the cases. However, in many situations this specific knowledge by itself is not sufficient or appropriate to cope with all requirements of an application. Very often, background knowledge is available and/or necessary to better explore and interpret the available cases [1]. Such general knowledge may state dependencies between certain case features and can be used to infer additional, previously unknown features from the known ones.
This paper presents the results of an investigation of solid-lubricated rolling bearings. The bearings considered use a special, modified cage whose pockets serve, in addition to their original function of guiding the rolling elements, as a lubricant depot. First, the test setup and test conditions are explained; in this context it is shown that the setup used in this work exhibits significantly reduced scatter compared with the setup of previous studies. The hygroscopic behavior of the polymer compound was identified as a non-negligible source of error in the gravimetric determination of cage pocket wear. A distortion of these measurement results by uncontrolled moisture absorption from the environment must be prevented by a preceding drying process under defined conditions. It is also shown that the cage pockets are worn both by the inner ring of the bearing and by the rolling elements. A measurement method for determining the amount of material worn away by the inner ring is presented. Surface analyses of the brass structure of the cage reveal a depletion of zinc as well as a change in the surface structure. Sublimation of the zinc due to the test conditions is suspected as the cause. Furthermore, it is shown that the test temperature of 300 °C leads to shrinkage of the bearing rings. This dimensional reduction can be anticipated by tempering at 300 °C for 48 h.
The political science literature on German federalism is extremely diverse. In addition to analyses of the institutional arrangements, their changes and the dynamics of Germany's interlocking federalism, there are numerous studies of individual policy fields that examine both the interactions between the federal government and the Länder and the variance between the Länder's policies, including its determinants. Moreover, distinct research strands on parties in the federal state and on parliamentary research at the Land level have become established over the past decades. Despite this considerable research activity, some central questions of political science concerning the interplay between voters, parties, parliaments and governments, and their effect on political outputs and outcomes, remain unanswered. This paper argues that this is due in particular to the missing integration of individual strands of the literature and to the still insufficient empirical data base. By systematizing the current state of the literature, the essay outlines a research program aimed at a comprehensive analysis of the process of political will formation and decision-making in the German Länder that systematically addresses questions of responsiveness and feedback.
This paper examines (1) the extent to which differences exist in the design of migration policy at the substate level in the Federal Republic of Germany and (2) how the policy variance between the German Länder can be explained. While existing studies have mostly examined similar questions on the basis of a single indicator of migration policy, such as expenditures, we propose a multidimensional measurement concept that distinguishes six dimensions of migration policy at the Land level: (1) the type of accommodation, (2) the type of benefit provision, (3) health care, (4) admission practice, (5) deportation practice, and (6) positioning in the federal arena, using the example of "safe countries of origin". To analyze possible paths explaining the differences between the Länder, we use a fuzzy-set QCA and draw on party politics, the socio-economic context and popular attitudes as conditions.
Our results show that substantial differences do indeed exist between the Länder. We also find that, in different paths, the party composition of the government is an important condition for the presence of restrictive or permissive migration policy. In not a single causal path of the fsQCA analysis can restrictive or permissive migration policy be explained without taking party ideology into account, a result that clearly speaks for the high relevance of the party composition of the government. The attitude patterns of the population in the respective Land and the socio-economic conditions, by contrast, appear to play only a subordinate role for migration policy.
Comparative public policy is a blooming research area. It also suffers from some curious blind spots. In this paper we discuss four of these: (1) the obsession with covariance, which means that important phenomena are ignored; (2) the lack of agency, which leads to underwhelming explanatory models; (3) the unclear universe of cases, which means the inferential value of theories and the empirical results are unclear; and (4) the focus on outputs, even though most theories contain strong assumptions about the political process leading to certain outputs. Following this discussion, we then outline how a closer integration of policy process theories may be fruitful for future research.
Algorithms are increasingly used in different domains of public policy. They help humans profile the unemployed, support administrations in detecting tax fraud, and give recidivism risk scores that judges or criminal justice managers take into account when they make bail decisions. In recent years, critics have increasingly pointed to the ethical challenges of these tools and emphasized problems of discrimination, opaqueness or accountability, and computer scientists have proposed technical solutions to these issues. In contrast to these important debates, the literature on how these tools are implemented in the actual everyday decision-making process has remained cursory. This is problematic because the consequences of ADM systems are at least as dependent on their implementation in an actual decision-making context as on their technical features. In this study, we show how the introduction of risk assessment tools in the criminal justice sector at the local level in the USA has deeply transformed the decision-making process. We argue that this is mainly due to the fact that the evidence generated by the algorithm introduces a notion of statistical prediction into a situation that was previously dominated by fundamental uncertainty about the outcome. While this expectation is supported by the case study evidence, the possibility of shifting blame to the algorithm seems much less important to the criminal justice actors.
We studied the development of cognitive abilities related to intelligence and creativity (N = 48, 6–10 years old), using a longitudinal design (over one school year), in order to evaluate an Enrichment Program for gifted primary school children initiated by the government of the German federal state of Rhineland-Palatinate (Entdeckertag Rheinland Pfalz, Germany; ET; Day of Discoverers). A group of German primary school children (N = 24), identified earlier as intellectually gifted and selected to join the ET program, was compared to a gender-, class- and IQ-matched group of control children that did not participate in this program. All participants performed the Standard Progressive Matrices (SPM) test, which measures intelligence in well-defined problem space; the Creative Reasoning Task (CRT), which measures intelligence in ill-defined problem space; and the test of creative thinking-drawing production (TCT-DP), which measures creativity, also in ill-defined problem space. Results revealed that problem space matters: the ET program is effective only for the improvement of intelligence operating in well-defined problem space. An effect was found for intelligence as measured by SPM only, but neither for intelligence operating in ill-defined problem space (CRT) nor for creativity (TCT-DP). This suggests that, depending on the type of problem space presented, different cognitive abilities are elicited in the same child. Therefore, enrichment programs for gifted children, but also for children attending traditional schools, should provide opportunities to develop cognitive abilities related to intelligence operating in both well- and ill-defined problem spaces, and to creativity in parallel, using an interactive approach.
In contrast to the transmission grid, whose structure is known with sufficient accuracy, suitable grid models for medium-voltage (MV) grids are difficult to map because of the large number of MV grids and distribution system operators (DSOs). Furthermore, a detailed description of real MV grids is usually not desired in scientific publications for data privacy reasons. In this work, MV grid models and their development are explained in detail. For the first time, comprehensible MV grid models for the German-speaking area are thus available to the public. They can be used as a benchmark for scientific investigations and for method development.
To investigate whether participants can activate only one spatially oriented number line at a time or multiple number lines simultaneously, they were asked to solve a unit magnitude comparison task (unit smaller/larger than 5) and a parity judgment task (even/odd) on two-digit numbers. In both these primary tasks, decades were irrelevant. After some of the primary task trials (randomly), participants were asked to additionally solve a secondary task based on the previously presented number. In Experiment 1, they had to decide whether the two-digit number presented for the primary task was larger or smaller than 50. Thus, for the secondary task decades were relevant. In contrast, in Experiment 2, the secondary task was a color judgment task, which means decades were irrelevant. In Experiment 1, decades' and units' magnitudes influenced the spatial association of numbers separately. In contrast, in Experiment 2, only the units were spatially associated with magnitude. It was concluded that multiple number lines (one for units and one for decades) can be activated if attention is focused on multiple, separate magnitude attributes.
Due to the steadily increasing number of decentralized generation units, the upcoming smart meter rollout and the expected electrification of the transport sector (e-mobility), grid planning and grid operation at the low-voltage (LV) level are facing major challenges. Therefore, many studies, research and demonstration projects on the above topics have been carried out in recent years, and the results and the methods developed have been published. However, the published methods usually cannot be replicated or validated, since the majority of the examination models or the scenarios used are incomprehensible to third parties. There is a lack of uniform grid models that map the German LV grids and can be used for comparative investigations, similar to the North American distribution grid models of the IEEE. In contrast to the transmission grid, whose structure is known with high accuracy, suitable grid models for LV grids are difficult to map because of the high number of LV grids and distribution system operators. Furthermore, a detailed description of real LV grids is usually not available in scientific publications for data privacy reasons. For investigations within a research project, characteristic synthetic LV grid models have therefore been created, based on common settlement structures and usual grid planning principles in Germany. In this work, these LV grid models and their development are explained in detail. For the first time, comprehensible LV grid models for the Central European area are available to the public, which can be used as a benchmark for further scientific research and method developments.
This document is an English version of a paper originally written in German. In addition, this paper discusses a few more aspects, especially the planning process of distribution grids in Germany.
Due to the steady increase in decentralized generation units, the upcoming smart meter rollout and the expected electrification of the transport sector (e-mobility), grid planning and grid operation of low-voltage (LV) grids in Germany face major challenges. In recent years, many studies, research and demonstration projects on the above topics have therefore been carried out, and the results and the methods developed have been published. However, the published methods usually cannot be reproduced or validated, since the examination models or the scenarios used are not comprehensible to third parties. There is a lack of uniform grid models that map the German LV grids and can be used for comparative investigations, similar to the North American distribution grid models of the IEEE.
In contrast to the transmission grid, whose structure is known with sufficient accuracy, suitable grid models for LV grids are difficult to map because of the large number of LV grids and distribution system operators (DSOs). Furthermore, a detailed description of real LV grids is usually not desired in scientific publications for data privacy reasons. For investigations within a research project, characteristic synthetic LV grid models were therefore created, based on common German settlement structures and usual grid planning principles. In this work, these LV grid models and their development are explained in detail. For the first time, comprehensible LV grid models for the German-speaking area are thus available to the public. They can be used as a benchmark for scientific investigations and for method development.
Control Concept for Low-Voltage Grid Automation Using the Merit-Order Principle
(2022)
Due to the increasing generation capacity at the low-voltage (LV) grid level from photovoltaic systems, as well as the electrification of the heating and transport sectors, investments in the LV grids are necessary. A higher degree of digitalization in the LV grid offers the potential to identify the necessary investments more precisely and thus possibly to reduce or postpone them. Here, the market introduction of intelligent metering systems, so-called smart meters, provides a new way to obtain measurements from the LV grid and to optimize the setpoints of available actuators on that basis. This raises the question of how measurement data with different measurement cycles can be used in a grid automation system and how the nonlinear integer optimization problem of setpoint optimization can be solved efficiently. This work addresses the solution of the optimization problem, applying a setpoint optimization based on the merit-order principle.
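The merit-order principle itself is simple to sketch: flexibility options are activated in order of ascending specific cost until the required amount is covered. The following Python fragment is an illustrative assumption about interfaces and units, not the paper's control implementation.

```python
def merit_order_dispatch(required_mw, actuators):
    """Sketch of merit-order setpoint selection: actuators are activated in
    order of ascending specific cost until the required flexibility is met.
    Names and units are illustrative assumptions."""
    plan, remaining = [], required_mw
    for name, capacity_mw, cost in sorted(actuators, key=lambda a: a[2]):
        if remaining <= 0:
            break
        use = min(capacity_mw, remaining)
        plan.append((name, use, cost))
        remaining -= use
    return plan, remaining          # remaining > 0 means demand was not covered

actuators = [("pv_curtail_a", 0.4, 80.0),   # (actuator, capacity [MW], EUR/MWh)
             ("battery_b",    0.2, 30.0),
             ("ev_charger_c", 0.3, 50.0)]
print(merit_order_dispatch(0.6, actuators))
# -> battery_b (0.2 MW) and ev_charger_c (0.3 MW) first, then 0.1 MW of curtailment
```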
The size congruity effect involves interference between numerical magnitude and physical size of visually presented numbers: congruent numbers (either both small or both large in numerical magnitude and physical size) are responded to faster than incongruent ones (small numerical magnitude/large physical size or vice versa). Besides, numerical magnitude is associated with lateralized response codes, leading to the Spatial Numerical Association of Response Codes (SNARC) effect: small numerical magnitudes are preferably responded to on the left side and large ones on the right side. Whereas size congruity effects are ascribed to interference between stimulus dimensions in the decision stage, SNARC effects are understood as (in)compatibilities in stimulus-response combinations. Accordingly, size congruity and SNARC effects were previously found to be independent in parity and in physical size judgment tasks. We investigated their dependency in numerical magnitude judgment tasks. We obtained independent size congruity and SNARC effects in these tasks and replicated this observation for the parity judgment task. The results confirm and extend the notion that size congruity and SNARC effects operate in different representational spaces. We discuss possible implications for number representation.
The modified fouling index (MFI) is a crucial characteristic for assessing the fouling potential of reverse osmosis (RO) feed water. Although the MFI is widely used, the estimation time required for filtration and data evaluation is still relatively long. In this study, the relationship between the MFI and instantaneous spectroscopic extinction measurements was investigated. Since both measurements show a linear correlation with particle concentration, it was assumed that a change in the MFI can be detected by monitoring the optical density of the feed water. To prove this assumption, a test bench for a simultaneous measurement of the MFI and optical extinction was designed. Silica monospheres with sizes of 120 nm and 400 nm and mixtures of both fractions were added to purified tap water as model foulants. MFI filtration tests were performed with a standard 0.45 µm PES membrane, and a 0.1 µm PP membrane. Extinction measurements were carried out with a newly designed flow cell inside a UV–VIS spectrometer to get online information on the particle properties of the feed water, such as the particle concentration and mean particle size. The measurement results show that the extinction ratio of different light wavelengths, which should remain constant for a particulate system, independent of the number of particles, only persisted at higher particle concentrations. Nevertheless, a good correlation between extinction and MFI for different particle concentrations with restrictions towards the ratio of particle and pore size of the test membrane was found. These findings can be used for new sensory process monitoring systems, if the deficiencies can be overcome.
Using industrial robots for machining applications in flexible manufacturing processes lacks high accuracy. The main reason for the deviation is the flexibility of the gearbox. Secondary Encoders (SE), as additional high-precision angle sensors, offer great potential for detecting gearbox deviations. This paper aims to use SE to reduce gearbox compliances with a feed-forward, adaptive neural control. The control network is trained with a second network for system identification. The presented algorithm is capable of online application and improves the robot accuracy in a nonlinear simulation.
We present an identification benchmark data set for a full robot movement with a KUKA KR300 R2500 ultra SE industrial robot. It is a robot with a nominal payload capacity of 300 kg, a weight of 1120 kg and a reach of 2500 mm. It exhibits 12 states accounting for position and velocity for each of the 6 joints. The robot encounters backlash in all joints, pose-dependent inertia, pose-dependent gravitational loads, pose-dependent hydraulic forces, pose- and velocity-dependent centripetal and Coriolis forces, as well as a nonlinear friction, which is temperature dependent and therefore potentially time-varying. We supply the prepared dataset for black-box identification of the forward or the inverse robot dynamics. In addition to the data for black-box modelling, we supply high-frequency raw data and videos of each experiment. A baseline and figures of merit are defined to make results comparable across different identification methods.
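As one example of a figure of merit that makes results comparable, the sketch below computes a per-channel normalized RMSE over the 12 states. This metric is an assumption for illustration; the benchmark defines its own baseline and figures of merit.

```python
import numpy as np

def nrmse_per_state(y_true, y_pred):
    """Sketch of a benchmark figure of merit: RMSE per state channel,
    normalized by each channel's standard deviation (illustrative choice)."""
    err = y_true - y_pred
    rmse = np.sqrt((err ** 2).mean(axis=0))   # per-channel RMSE
    return rmse / y_true.std(axis=0)          # scale-free comparison

# toy usage with 12 states (6 joint positions + 6 velocities)
rng = np.random.default_rng(0)
truth = rng.standard_normal((5000, 12))
prediction = truth + 0.1 * rng.standard_normal((5000, 12))
print(np.round(nrmse_per_state(truth, prediction), 3))
```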
Kinetic models of human motion rely on boundary conditions which are defined by the interaction of the body with its environment. In the simplest case, this interaction is limited to the foot contact with the ground and is given by the so-called ground reaction force (GRF). A major challenge in the reconstruction of GRF from kinematic data is the double support phase, referring to the state with multiple ground contacts. In this case, the GRF prediction is not well defined. In this work, we present an approach to reconstruct and distribute the vertical GRF (vGRF) to each foot separately, using only kinematic data. We propose the biomechanically inspired force shadow method (FSM) to obtain a unique solution for any contact phase, including double support, of an arbitrary motion. We create a kinematics-based function, model an anatomical foot shape and mimic the effect of hip muscle activations. We compare our estimations with the measurements of a Zebris pressure plate and obtain correlations of 0.39 ≤ r ≤ 0.94 for double support motions and 0.83 ≤ r ≤ 0.87 for a walking motion. The presented data are based on inertial human motion capture, showing the applicability for scenarios outside the laboratory. The proposed approach has low computational complexity and allows for online vGRF estimation.
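A much simplified Python sketch of the distribution idea: the total vertical GRF is split between the feet by a smooth weight favoring the foot closer to the ground. The actual force shadow method additionally models an anatomical foot shape and hip muscle activations; this stand-in only conveys the weighting principle.

```python
import numpy as np

def distribute_vgrf(total_vgrf, z_left, z_right, scale=0.05):
    """Sketch: split a total vertical GRF [N] between both feet during double
    support, weighted smoothly by foot height above ground [m] (illustrative
    stand-in for the force shadow method)."""
    w_l = np.exp(-max(z_left, 0.0) / scale)    # lower foot carries more load
    w_r = np.exp(-max(z_right, 0.0) / scale)
    share_l = w_l / (w_l + w_r)
    return total_vgrf * share_l, total_vgrf * (1.0 - share_l)

# toy usage: body weight 700 N, left foot on the ground, right foot lifting off
print(distribute_vgrf(700.0, z_left=0.0, z_right=0.08))
```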
A survey on continuous, semidiscrete and discrete well-posedness and scale-space results for a class of nonlinear diffusion filters is presented. This class does not require any monotonicity assumption (comparison principle) and thus allows image restoration as well. The theoretical results include existence, uniqueness, continuous dependence on the initial image, maximum-minimum principles, average grey level invariance, smoothing Lyapunov functionals, and convergence to a constant steady state.
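A classical scheme in the spirit of this filter class is the Perona-Malik-type diffusion sketched below (explicit time stepping, illustrative parameters). The decreasing variance of the filtered image reflects the smoothing Lyapunov functionals mentioned above.

```python
import numpy as np

def nonlinear_diffusion(u, steps=50, tau=0.2, lam=0.1):
    """Sketch of explicit 2-D nonlinear diffusion with a Perona-Malik-type
    diffusivity g(s) = 1 / (1 + s / lam^2), s = |grad u|^2 (parameters are
    illustrative; tau <= 0.25 keeps the explicit scheme stable for g <= 1)."""
    u = u.astype(float).copy()
    for _ in range(steps):
        gx = np.gradient(u, axis=1)
        gy = np.gradient(u, axis=0)
        g = 1.0 / (1.0 + (gx ** 2 + gy ** 2) / lam ** 2)  # edge-stopping diffusivity
        # divergence of g * grad u
        div = np.gradient(g * gx, axis=1) + np.gradient(g * gy, axis=0)
        u += tau * div
    return u

noisy = np.random.default_rng(0).random((64, 64))
smoothed = nonlinear_diffusion(noisy)
print(noisy.std(), smoothed.std())    # variance decreases under filtering
```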
Cloudy inhomogeneities in artificial fabrics are graded by a fast method based on a Laplacian pyramid decomposition of the fabric image. This band-pass representation takes into account the scale character of the cloudiness. A quality measure of the entire cloudiness is obtained as a weighted mean over the variances of all scales.
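The grading scheme can be pictured in a few lines of Python: build band-pass layers, take the variance of each, and form a weighted mean over scales. The pyramid here uses simple 2x2 mean down- and upsampling, an illustrative simplification of a proper Laplacian pyramid.

```python
import numpy as np

def cloudiness_index(img, levels=4, weights=None):
    """Sketch: weighted mean of band-pass variances from a pyramid
    decomposition as a cloudiness grade (weights are illustrative)."""
    weights = weights or [1.0] * levels
    variances = []
    u = img.astype(float)
    for _ in range(levels):
        h, w = (u.shape[0] // 2) * 2, (u.shape[1] // 2) * 2
        coarse = u[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
        variances.append(np.var(u[:h, :w] - up))   # band-pass layer variance
        u = coarse
    return float(np.average(variances, weights=weights))

fabric = np.random.default_rng(0).random((128, 128))
print(cloudiness_index(fabric))
```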
The ideas of texture analysis by means of the structure tensor are combined with the scale-space concept of anisotropic diffusion filtering. In contrast to many other nonlinear diffusion techniques, the proposed one uses a diffusion tensor instead of a scalar diffusivity. This allows true anisotropic behaviour. The preferred diffusion direction is determined according to the phase angle of the structure tensor. The diffusivity in this direction increases with the local coherence of the signal. This filter is constructed in such a way that it gives a mathematically well-founded scale-space representation of the original image. Experiments demonstrate its usefulness for the processing of interrupted one-dimensional structures such as fingerprint and fabric images.
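A hedged sketch of the structure tensor part: smooth the image, form the tensor from Gaussian-smoothed gradient products, and derive the preferred direction together with a coherence measure that would steer the diffusion tensor. The smoothing scales are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_coherence(img, sigma=1.0, rho=3.0):
    """Sketch: structure tensor from Gaussian-smoothed gradient products;
    returns local orientation and coherence (mu1 - mu2)^2, which would
    steer the anisotropic diffusivity (scales sigma, rho are illustrative)."""
    u = gaussian_filter(img.astype(float), sigma)
    gx, gy = np.gradient(u, axis=1), np.gradient(u, axis=0)
    j11 = gaussian_filter(gx * gx, rho)
    j22 = gaussian_filter(gy * gy, rho)
    j12 = gaussian_filter(gx * gy, rho)
    # eigenvalues of the symmetric 2x2 tensor, mu1 >= mu2
    tmp = np.sqrt((j11 - j22) ** 2 + 4 * j12 ** 2)
    mu1, mu2 = (j11 + j22 + tmp) / 2, (j11 + j22 - tmp) / 2
    orientation = 0.5 * np.arctan2(2 * j12, j11 - j22)   # preferred direction
    coherence = (mu1 - mu2) ** 2
    return orientation, coherence

img = np.random.default_rng(0).random((64, 64))
theta, coh = structure_tensor_coherence(img)
print(theta.shape, float(coh.mean()))
```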
The performance of napkins is nowadays improved substantially by embedding granules of a superabsorbent into the cellulose matrix. In this paper, a continuous model for the liquid transport in such an Ultra Napkin is proposed. Its main feature is a nonlinear diffusion equation strongly coupled with an ODE describing a reversible absorption process. An efficient numerical method based on a symmetrical time splitting and a finite difference scheme of ADI-predictor-corrector type has been developed to solve these equations in a three-dimensional setting. Numerical results are presented that can be used to optimize the granule distribution.
The structural integrity of synaptic connections critically depends on the interaction between synaptic cell adhesion molecules (CAMs) and the underlying actin and microtubule cytoskeleton. This interaction is mediated by giant Ankyrins, which act as specialized adaptors to establish and maintain axonal and synaptic compartments. In Drosophila, two giant isoforms of Ankyrin2 (Ank2) control synapse stability and organization at the larval neuromuscular junction (NMJ). Both Ank2-L and Ank2-XL are highly abundant in motoneuron axons and within the presynaptic terminal, where they control the distribution of synaptic CAMs and the organization of microtubules. Here, we address the role of the conserved N-terminal ankyrin repeat domain (ARD) for the subcellular localization and function of these giant Ankyrins in vivo. We used a P[acman]-based rescue approach to generate deletions of ARD subdomains that contain putative binding sites of interacting transmembrane proteins. We show that specific subdomains control synaptic but not axonal localization of Ank2-L. These domains contain binding sites for L1-family member CAMs, and we demonstrate that these regions are necessary for the organization of synaptic CAMs and for the control of synaptic stability. In contrast, presynaptic Ank2-XL localization only partially depends on the ARD but strictly requires the presynaptic presence of Ank2-L, demonstrating a critical co-dependence of the two isoforms at the NMJ. Ank2-XL-dependent control of microtubule organization correlates with the presynaptic abundance of the protein and is thus only partially affected by ARD deletions. Together, our data provide novel insights into the synaptic targeting of giant Ankyrins, with relevance for the control of synaptic plasticity and maintenance.
Machining-induced residual stresses (MIRS) are a main driver for distortion of thin-walled monolithic aluminum workpieces. Before compensation techniques to minimize distortion can be developed, the effect of machining on the MIRS has to be fully understood. This means that not only the effect of different process parameters on the MIRS is important; in addition, the repeatability of the MIRS resulting from the same machining condition has to be considered. Past research has paid little attention to the statistical confidence of MIRS measured on machined samples. In this paper, the repeatability of the MIRS for different machining modes, consisting of a variation in feed per tooth and cutting speed, is investigated. Multiple hole-drilling measurements within one sample and on different samples, machined with the same parameter set, were part of the investigations. In addition, the effect of two different clamping strategies on the MIRS was investigated. The results show that an overall repeatability of the MIRS is given for stable machining (between 16 and 34% repeatability standard deviation of the maximum normal MIRS), whereas unstable machining, detected by vibrations in the force signal, shows worse repeatability (54%), independent of the clamping strategy used. Further experiments, in which a 1-mm-thick wafer was removed at the milled surface, show the connection between MIRS and the resulting distortion. A numerical stress analysis reveals that the measured stress data are consistent with machining-induced distortion across and within different machining modes. It was found that more and/or deeper MIRS cause more distortion.
Financing measures and incentive schemes for (existing and new) building owners can promote the sustainable settlement development of rural regions or municipalities and, in a wider sense, entire countries or cross-border regions. To be applicable on a broad scale, the concept of revolving funds must be developed further. In this research, the concept of an advanced revolving housing fund (ARF) for building owners to support the sustainable development of rural regions is introduced, together with its potential mechanisms. The ARF is designed to reflect impacts and challenges with regard to rural regions in Germany, Europe and beyond. Based on New Institutional Economics, the Theory of Spatial Organisms, an expert workshop, interviews and discussions, and further literature research, the fundamentals for incentive schemes and the essential mechanisms and design aspects of the ARF are derived. This includes the principal structure and governance of a holding fund and several regional funds. Building on this, input parameters for the financial modelling of an ARF are presented, as well as guiding elements for empirical testing to promote more research in this area. It is found that the ARF should have a regional focus and must be a comprehensive instrument of settlement development complemented by additional informal and formal measures. The developed concept promises new impulses, in particular for rural regions. It is proposed to test the concept by means of case studies in pioneer regions of different countries.
The move away from fossil fuels and the diversification of the primary energy sources used are imperative, both for mitigating global warming and for ensuring the political independence of the Western world. Agriculture and forestry can secure their basic energy supply from their own yield. The use of vegetable oil is one way to satisfy the energy requirements of agricultural machines both autonomously and sustainably. Up to now, rapeseed has been the most important oil crop in Western Europe. In the EU, rapeseed oil is currently credited with up to 60% fossil CO2 savings compared to conventional diesel fuel. As a result, since 2018, rapeseed oil has no longer been considered a biofuel in the EU. However, if cultivation and processing are based entirely on renewable energy sources, up to 90% of fossil CO2 emissions can be saved in the future. This also applies to rapeseed oil that arises as a by-product of animal feed production. In addition, pure rapeseed oil is chemically unchanged and thus biodegradable, which makes it particularly attractive for use in environmentally sensitive areas.
To increase the attractiveness of rapeseed oil as a fuel for the agricultural industry, a multi-fuel concept for the flexible use of rapeseed oil, diesel fuel and any mixtures of these two fuels would be beneficial, as it minimizes economic risks due to price fluctuations, availability, and taxation. For implementing such a concept, technical adjustments to the propulsion system are necessary. In existing vegetable oil vehicles, cost-intensive additional components are required for diesel particulate filter regeneration. Conventional regeneration via post-injected fuel (which does not participate in combustion) leads to dilution of the engine oil with vegetable oil.
This study explores the possibilities of DPF regeneration in vegetable oil operation by internal engine measures, without the need for post-injection. This includes strategies for generating exhaust gas temperatures in high-idle operation that are suitable for regeneration. For this purpose, strategies combining throttling and retarded combustion are used. The measures were successfully tested with respect to their effectiveness for DPF regeneration. It was also shown that the regeneration procedure does not cause increased engine oil dilution.
For a prospective series application, however, regeneration should also be possible in transient engine operation. For this purpose, the measures developed for high-idle regeneration have been transferred to partial load points to gain insight into their applicability for transient engine operation. In addition, the effect of external EGR on regeneration has been considered. As the previous investigations of high-idle regeneration showed that regeneration is most critical when pure rapeseed oil is used, the studies of regeneration in part-load operation were limited to pure rapeseed oil. The systematic parameter variations carried out during the studies helped to improve the understanding of the system and the mechanisms of regeneration. The results of the investigation show that the exhaust gas temperature can be increased significantly by the measures studied. However, achieving the exhaust temperature required for DPF regeneration remains a challenge for certain operating points.
Functional Metallic Microcomponents via Liquid-Phase Multiphoton Direct Laser Writing: A Review
(2019)
We present an overview of functional metallic microstructures fabricated via direct laser writing out of the liquid phase. Metallic microstructures are often key components in diverse applications such as microelectromechanical systems (MEMS). Since the metallic component's functionality mostly depends on other components, a technology that enables on-chip fabrication of these metal structures is highly desirable. Direct laser writing via multiphoton absorption is such a fabrication method. In the past, it has mostly been used to fabricate multidimensional polymeric structures. During the last few years, however, different groups have put effort into the development of novel photosensitive materials that enable the fabrication of metallic—especially gold and silver—microstructures. The results of these efforts are summarized in this review and show that direct laser fabrication of metallic microstructures has reached the level of applicability.
About the approach: The approach of TOPO was originally developed in the FABEL project [1] to support architects in designing buildings with complex installations. Supplementing knowledge-based design tools, which are available only for selected subtasks, TOPO aims to cover the whole design process. To this end, it relies almost exclusively on archived plans. Input to TOPO is a partial plan, and output is an elaborated plan. The input plan constitutes the query case, and the archived plans form the case base with the source cases. A plan is a set of design objects. Each design object is defined by some semantic attributes and by its bounding box in a 3-dimensional coordinate system. TOPO supports the elaboration of plans by adding design objects.
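Read literally, this suggests a simple data model; the sketch below (names hypothetical, not FABEL's actual representation) captures a plan as a set of design objects with semantic attributes and 3-D bounding boxes:

from dataclasses import dataclass, field

@dataclass
class DesignObject:
    # semantic attributes, e.g. {"type": "supply-air duct"}
    attributes: dict
    bbox_min: tuple  # (x, y, z) corner of the bounding box
    bbox_max: tuple  # opposite (x, y, z) corner

@dataclass
class Plan:
    objects: list = field(default_factory=list)

    def elaborate(self, new_objects):
        # TOPO-style elaboration: a plan grows by adding design objects
        # retrieved from archived source cases.
        self.objects.extend(new_objects)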
Structure and Tools of the Experiment-Specific Data Area of the SFB 501 Experience Database
(1999)
Software development artifacts must be captured in a targeted manner during the execution of a software project so that they can be prepared for reuse. Within the Sonderforschungsbereich 501, the methodological basis for this is the concept of the experience database. In its experiment-specific data area, all software development artifacts arising during the life cycle of a development project are stored for each project. In its overarching data area, all those artifacts from the experiment-specific data area that are candidates for reuse in subsequent projects are brought together. Experience has shown that even using the data in the experiment-specific area of the experience database requires systematic access, and systematic access in turn presupposes a normalized structure. The experiment-specific area distinguishes two types of experiments: "controlled experiments" and "case studies". This report describes the storage and access structure for the experiment type "case studies". The structure was developed and evaluated on the basis of experience gained in initial case studies.
Version and configuration management are central instruments for intellectually mastering complex software developments. In strongly reuse-oriented software development approaches, such as the one provided by the SFB, the notion of a configuration must be extended from traditionally product-oriented artifacts to processes and other development experiences. This publication presents such an extended configuration model. In addition, it discusses a supplement to traditional project planning information that allows tailored version and configuration management mechanisms to be derived before a project starts.
Let \(X\) be a Banach lattice. Necessary and sufficient conditions for a linear operator \(A:D(A) \to X\), \(D(A)\subseteq X\), to be of positive \(C^0\)-scalar type are given. In addition, the question is discussed which conditions on the Banach lattice imply that every operator of positive \(C^0\)-scalar type is necessarily of positive scalar type.
In the scalar case one knows that a complex normalized function of bounded variation \(\phi\) on \([0,1]\) defines a unique complex regular Borel measure \(\mu\) on \([0,1]\). In this note we show that this is no longer true in general in the vector-valued case, even if \(\phi\) is assumed to be continuous. Moreover, the functions \(\phi\) which determine a countably additive vector measure \(\mu\) are characterized.
The following two norms for holomorphic functions \(F\), defined on the right complex half-plane \(\{z \in \mathbb{C} : \Re(z) > 0\}\) with values in a Banach space \(X\), are equivalent:
\[\begin{eqnarray*} \lVert F \rVert_{H_p(\mathbb{C}_+)} &=& \sup_{a>0}\left( \int_{-\infty}^\infty \lVert F(a+ib) \rVert^p \, db \right)^{1/p} \mbox{, and} \\ \lVert F \rVert_{H_p(\Sigma_{\pi/2})} &=& \sup_{\lvert \theta \rvert < \pi/2}\left( \int_0^\infty \left\lVert F(re^{i\theta}) \right\rVert^p \, dr \right)^{1/p}. \end{eqnarray*}\]
As a consequence, we derive a description of boundary values of sectorial holomorphic functions, and a theorem of Paley-Wiener type for sectorial holomorphic functions.
Hardware prototyping is an essential part of the hardware design flow. Furthermore, hardware prototyping usually relies on system-level design and hardware-in-the-loop simulations in order to develop, test and evaluate intellectual property cores. One common task in this process consists of interfacing cores with different port specifications; data width conversion is used to overcome this issue. This work presents two open-source hardware cores compliant with the AXI4-Stream bus protocol, performing upsizing and downsizing data width conversion, respectively.
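A behavioral sketch of the two conversions; this is a plain software model, not the HDL cores themselves, and the least-significant-beat-first ordering is an assumption:

def downsize(words, in_width, out_width):
    # Split each in_width-bit word into in_width/out_width narrow beats.
    ratio, mask = in_width // out_width, (1 << out_width) - 1
    for w in words:
        for i in range(ratio):
            yield (w >> (i * out_width)) & mask

def upsize(beats, in_width, out_width):
    # Pack out_width/in_width consecutive beats into one wide word.
    ratio = out_width // in_width
    beats = list(beats)
    for i in range(0, len(beats), ratio):
        word = 0
        for j, b in enumerate(beats[i:i + ratio]):
            word |= b << (j * in_width)
        yield word

# e.g. list(downsize([0xAABB], 16, 8)) == [0xBB, 0xAA]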
Nanoindentation simulations are performed for a Ni(111) bi-crystal in which the grain boundary is coated by a graphene layer. We study both a weak and a strong interface, realized by a 30° and a 60° twist boundary, respectively, and also compare our results for the composite with those of an elemental Ni bi-crystal. We find hardening of the elemental Ni when a strong, i.e., low-energy, grain boundary is introduced, and softening for a weak grain boundary. For the strong grain boundary, the interface barrier strength felt by dislocations upon passing the interface is responsible for the hardening; for the weak grain boundary, confinement of the dislocations results in the weakening. For the Ni-graphene composite, we find in all cases a weakening influence that is caused by the graphene blocking the passage of dislocations and absorbing them. In addition, interface failure occurs when the indenter reaches the graphene, again weakening the composite structure.
Small concentrations of alloying elements can modify the \(\alpha\)-\(\gamma\) phase transition temperature \(T_c\) of Fe. We study this effect using an atomistic model based on a set of many-body interaction potentials for iron and several alloying elements. Free-energy calculations based on perturbation theory allow us to determine the change in \(T_c\) introduced by the alloying element. The resulting changes are in semi-quantitative agreement with experiment. The effect is traced back to the shape of the pair potential describing the interaction between the Fe and the alloying atom.
Using molecular dynamics simulation, we study the cutting of Al/Si bilayer systems. While the plasticity of metals is dominated by dislocation activity, the deformation behavior of Si crystals is governed by phase transformations—here to the amorphous phase. We find that twinning emerges as a major additional deformation mechanism in the cutting of Al crystals. Cutting of Si crystals requires thrust forces that are larger than the cutting forces in order to induce amorphization; in metals, the thrust forces are smaller relative to the cutting forces. When putting an Al top layer on a Si substrate, the thrust force is reduced; the opposite effect is observed if a Si top layer is put on an Al substrate. Covering an Al substrate with a thin Si top layer has the detrimental effect that the hard Si requires high pressures for cutting; as a consequence, twinning planes with intersecting directions are generated that ultimately lead to cracks in the ductile Al substrate. The crystallinity of the Si chip is strongly changed if an Al substrate is put under the Si top layer: with decreasing thickness of the Si top layer, the Si chip retains a higher degree of crystallinity.
The deformation of a nano-sized polycrystalline Al bar under the action of vice plates is studied using molecular dynamics simulation. Two grain sizes are considered, fine-grained and coarse-grained. Deformation in the fine-grained sample is mainly caused by grain-boundary processes, which induce grain displacement and rotation. Deformation in the coarse-grained sample is caused by grain-boundary processes and dislocation plasticity. The sample distortion manifests itself in the center-of-mass motion of the grains. Grain rotation is responsible for surface roughening after the loading process. While the plastic deformation is caused by the loading process, grain rearrangements during load release also contribute considerably to the final sample distortion.
Fragmentation of granular clusters may be studied by experiments and by granular-mechanics simulation. When comparing results, it is often assumed that results can be compared when scaled to the same value of \(E/E_{\rm sep}\), where \(E\) denotes the collision energy and \(E_{\rm sep}\) is the energy needed to break every contact in the granular cluster. The ratio \(E/E_{\rm sep} \propto v^2\) depends on the collision velocity \(v\) but not on the number of grains per cluster, \(N\). We test this hypothesis using granular-mechanics simulations of silica clusters containing a few thousand grains in the velocity range where fragmentation starts. We find that a good parameter to compare different systems is given by \(E/(N^\alpha E_{\rm sep})\), where \(\alpha \sim 2/3\). The occurrence of the extra factor \(N^\alpha\) is caused by energy dissipation during the collision, such that large clusters require a higher impact energy to reach the same level of fragmentation as small clusters. Energy is dissipated during the collision mainly by normal and tangential (sliding) forces between grains. For large values of the viscoelastic friction parameter, we find smaller cluster fragmentation, since fragment velocities are smaller and allow for fragment recombination.
Using molecular dynamics simulations, the adsorption and diffusion of doxorubicin drug molecules in boron nitride nanotubes are investigated. The interaction between doxorubicin and the nanotube is governed by van der Waals attraction. We find strong adsorption of doxorubicin to the wall for narrow nanotubes (radius of 9 Å). For larger radii (12 and 15 Å), the adsorption energy decreases, while the diffusion coefficient of doxorubicin increases. However, it does not reach the value for pure water, as adsorption events still hinder doxorubicin mobility. It is concluded that nanotubes wider than about 4 nm in diameter can serve as efficient drug containers for targeted delivery of doxorubicin in cancer chemotherapy.
Nuclear inelastic scattering of synchrotron radiation is used to study the changes induced by external tensile strain on the phonon density of states (pDOS) of polycrystalline Fe samples. The data are interpreted with the help of dedicated atomistic simulations. The longitudinal phonon peak at around 37 meV and also the second transverse peak at 27 meV are decreased under strain. This is caused by the production of defects under strain. Also the thermodynamic properties of the pDOS demonstrate a weakening of the force constants and of the mean phonon energy under strain. Remaining differences between experiment and simulation are discussed.
Defects change the phonon spectrum and also the magnetic properties of bcc-Fe. Using molecular dynamics simulation, the influence of defects – vacancies, dislocations, and grain boundaries – on the phonon spectra and magnetic properties of bcc-Fe is determined. It is found that the main influence of defects consists of a decrease in the amplitude of the longitudinal peak, PL, at around 37 meV. While the change in the phonon spectra shows little dependence on the defect type, the quantitative decrease of PL is proportional to the defect concentration. Local magnetic moments can be determined from the local atomic volumes. Again, the changes in the magnetic moments of a defective crystal are linear in the defect concentration. In addition, the change of the phonon density of states and the magnetic moments under homogeneous uniaxial strain is investigated.
Cutting of metallic glasses as a rule produces serrated and segmented chips in experiments, while atomistic simulations have produced straight, unserrated chips. We demonstrate here that, with increasing depth of cut – all other parameters unchanged – chip serration starts to affect the morphology of the chip also in molecular dynamics simulations. The underlying reason is the shear localization in shear bands. As the distance between shear bands increases with increasing depth of cut, the surface morphology of the chip becomes increasingly segmented. The parallel shear bands formed during cutting no longer interact with each other when their separation is ≳10 nm. Our results are analogous to the so-called fold instability that has been found when machining nanocrystalline metals.
Plasticity in metallic glasses depends on their stoichiometry. We explore this dependence by molecular dynamics simulations for the case of CuZr alloys using the compositions Cu64.5Zr35.5, Cu50Zr50, and Cu35.5Zr64.5. Plasticity is induced by nanoindentation and orthogonal cutting. Only the Cu64.5Zr35.5 sample shows the formation of localized strain in the form of shear bands, while plasticity is more homogeneous for the other samples. This feature concurs with the high fraction of full icosahedral short-range order found for Cu64.5Zr35.5. In all samples, the atomic density is reduced in the plastic zone; this reduction is accompanied by a decrease of the average atom coordination, with the possible exception of Cu35.5Zr64.5, where coordination fluctuations are high. The strongest density reduction occurs in Cu64.5Zr35.5, where it is connected with the partial destruction of full icosahedral short-range order. The difference in plasticity mechanism influences the shape of the pileup and of the chip generated by nanoindentation and cutting, respectively.
Indentation and Scratching with a Rotating Adhesive Tool: A Molecular Dynamics Simulation Study
(2022)
For the specific case of a spherical diamond nanoparticle with 10 nm radius rolling over a planar Fe surface, we employ molecular dynamics simulation to study the processes of indentation and scratching. The particle is rotating (rolling). We focus on the influence of the adhesion force between the nanoparticle and the surface on the damage mechanisms on the surface; the adhesion is modeled by a pair potential with arbitrarily prescribed value of the adhesion strength. With increasing adhesion, the following effects are observed. The load needed for indentation decreases and so does the effective material hardness; this effect is considerably more pronounced than for a non-rotating particle. During scratching, the tangential force, and hence the friction coefficient, increase. The torque needed to keep the particle rolling adds to the total work for scratching; however, for a particle rolling without slip on the surface the total work is minimum. In this sense, a rolling particle induces the most efficient scratching process. For both indentation and scratching, the length of the dislocation network generated in the substrate reduces. After leaving the surface, the particle is (partially) covered with substrate atoms and the scratch groove is roughened. We demonstrate that these effects are based on substrate atom transport under the rotating particle from the front towards the rear; this transport already occurs for a repulsive particle but is severely intensified by adhesion.
The cultivation of cyanobacteria with the addition of an organic carbon source (i.e., heterotrophic or mixotrophic cultivation) is a promising technique for increasing their slow growth rate. However, most cyanobacteria cultures are contaminated by non-separable heterotrophic bacteria. While their contribution to the biomass is rather insignificant in phototrophic cultivation, problems may arise in heterotrophic and mixotrophic mode: heterotrophic bacteria can potentially utilize carbohydrates quickly, thus preventing any benefit for the cyanobacteria. In order to estimate the advantage of supplementing a carbon source, it is essential to quantify the proportions of cyanobacteria and heterotrophic bacteria in the resulting biomass. In this work, the use of quantitative polymerase chain reaction (qPCR) is proposed. To prepare the samples, a DNA extraction method for cyanobacteria was improved to provide reproducible and robust results for the group of terrestrial cyanobacteria. Two pairs of primers were used, which bind either to the 16S rRNA gene of all cyanobacteria or to that of all bacteria including cyanobacteria. This allows a determination of the proportion of cyanobacteria in the biomass. The method was established with the two terrestrial cyanobacteria Trichocoleus sociatus SAG 26.92 and Nostoc muscorum SAG B-1453-12a. As proof of concept, a heterotrophic cultivation of T. sociatus with glucose was performed. After 2 days of cultivation, a reduction of the cyanobacterium's share of the biomass to 90% was detected; afterwards, the proportion increased again.
The application of plant suspension cultures to produce valuable compounds, such as the triterpenoids oleanolic acid and ursolic acid, is a well-established alternative to the cultivation of whole plants. Cambial meristematic cells (CMCs) are a growing field of research, often showing superior cultivation properties compared to their dedifferentiated cell (DDC) counterparts. In this work, the first-time establishment of O. basilicum CMCs is demonstrated. DDCs and CMCs were cultivated in shake flasks and wave-mixed disposable bioreactors (wDBRs) and evaluated regarding triterpenoid productivity and biomass accumulation. CMCs showed characteristic small vacuoles and were found to be significantly smaller than DDCs. The productivities of oleanolic and ursolic acid of CMCs were determined at 3.02 ± 0.76 mg/(L·d) and 4.79 ± 0.48 mg/(L·d), respectively, after 19 days of wDBR cultivation. These values were consistently higher than any productivities determined for DDCs over the observed cultivation period of 37 days. Elicitation of DDCs and CMCs with methyl jasmonate in shake flasks resulted in increased product contents up to 48 h after elicitor addition, with the highest increase found in CMCs at 232.30 ± 19.33% (oleanolic acid) and 192.44 ± 18.23% (ursolic acid) after 48 h.
The electrochemical process of microbial electrosynthesis (MES) is used to drive the metabolism of electroactive microorganisms for the production of valuable chemicals and fuels. MES combines the advantages of electrochemistry, engineering, and microbiology and offers alternative production processes based on renewable raw materials and regenerative energies. In addition to the reactor concept and electrode design, the biocatalysts used have a significant influence on the performance of MES. Both pure and mixed cultures can serve as biocatalysts. When mixed cultures are used, interactions between organisms, such as direct interspecies electron transfer (DIET) or syntrophic interactions, influence the productivity and the product range of MES. This review focuses on the comparison of pure and mixed cultures in microbial electrosynthesis. Performance indicators, such as productivities and coulombic efficiencies (CEs), are discussed for both approaches. Typical products of MES are methane and acetate; these processes are therefore the focus of this review. In general, most studies have used mixed cultures as biocatalysts, as better performance of mixed cultures has been observed for both products. When comparing pure and mixed cultures in equivalent experimental setups, a 3-fold higher methane and a nearly 2-fold higher acetate production rate can be achieved with mixed cultures. However, studies of pure-culture MES for methane production have shown some improvement through reactor optimization and operational mode, reaching performance indicators similar to those of mixed-culture MES. Overall, this review gives an overview of the advantages and disadvantages of using pure or mixed cultures in MES.
Employing site-directed spin labeling (SDSL), the structure of maltose-binding protein (MBP) had previously been studied in the native state by electron paramagnetic resonance (EPR) spectroscopy. Several spin-labeled double cysteine mutants were distributed all over the structure of this cysteine-free protein and revealed distance information between the nitroxide residues from double electron–electron resonance (DEER). The results were in good agreement with the known X-ray structure. We have now extended these studies to the molten globule (MG) state, a folding intermediate, which can be stabilized around pH 3 and that is characterized by secondary but hardly any tertiary structure. Instead of clearly defined distance features as found in the native state, several additional characteristics indicate that the MG structure of MBP contains different polypeptide chain and domain orientations. MBP is also known to bind its substrate maltose even in MG state although with lower affinity. Additionally, we have now created new mutants allowing for spin labeling at or near the active site. Our data confirm an already preformed ligand site structure in the MG explaining its substrate binding capability and thus most probably serving as a nucleation center for the final native structure.
We report on the generation of pulsed broadband terahertz radiation utilizing the inverse spin Hall effect in Fe/Pt bilayers on MgO and sapphire substrates. The emitter was optimized with respect to layer thickness, growth parameters, substrates and geometrical arrangement. The experimentally determined optimum layer thicknesses were in qualitative agreement with simulations of the spin current induced in the ferromagnetic layer. Our model takes into account the generation of spin polarization, spin diffusion and accumulation in Fe and Pt, and the electrical as well as optical properties of the bilayer samples. Using the device in a counterintuitive orientation, a Si lens was attached to increase the collection efficiency of the emitter. The optimized emitter provided a bandwidth of up to 8 THz, which was mainly limited by the low-temperature-grown GaAs (LT-GaAs) photoconductive antenna used as detector and by the pulse length of the pump laser. The THz pulse length was as short as 220 fs for a sub-100 fs pulse of the 800 nm pump laser. Average pump powers as low as 25 mW (at a repetition rate of 75 MHz) have been used for terahertz generation. This, together with the general performance, makes the spintronic terahertz emitter competitive with established emitters based on optical rectification in nonlinear crystals.
In this paper, we present a comparison of experiments and numerical simulations for the cutting of a bubble by a wire. The air bubble is surrounded by water. In the experimental setup, an air bubble is injected at the bottom of a water column. When the bubble rises and contacts the wire, it is separated into two daughter bubbles. The flow is modeled by the incompressible Navier–Stokes equations, and a meshfree method is used to simulate the bubble cutting. We observe that the experimental and numerical results are in very good agreement. Moreover, we present further simulation results for a liquid with higher viscosity; in this case, the numerical results are close to previously published ones.
We present a model predictive control (MPC) algorithm for online time-optimal trajectory planning of cooperative robotic manipulators. Robotic arms sharing a common confined operational space are exposed to high inter-robot collision risks. For collision avoidance, a smooth robot geometry approximation by Bézier curves is applied, utilizing velocity constraints and tangent separating planes and enabling an efficient generation of robot trajectories in real time. The proposed optimization algorithm is validated on an experimental setup consisting of two collaborative robotic arms performing synchronous pick-and-place tasks.
We consider N coupled linear oscillators with time-dependent coefficients. An exact complex-amplitude, real-phase decomposition of the oscillatory motion is constructed. This decomposition is further used to derive N exact constants of motion which generalise the so-called Ermakov-Lewis invariant of a single oscillator. For the Floquet problem of periodic oscillator coefficients, we discuss the existence of periodic complex amplitude functions in terms of existing Floquet solutions.
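For orientation, the single-oscillator case reads as follows: if \(x\) solves \(\ddot{x} + \omega^2(t)\,x = 0\) and an auxiliary amplitude \(\rho\) solves the Milne-Pinney equation, then the Ermakov-Lewis invariant \(I\) is conserved:
\[ \ddot{\rho} + \omega^2(t)\,\rho = \rho^{-3}, \qquad I = \frac{1}{2}\left[\left(\frac{x}{\rho}\right)^{2} + \left(\rho\,\dot{x} - \dot{\rho}\,x\right)^{2}\right]. \]
The paper constructs \(N\) such constants for the coupled system.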
A harmonic oscillator subject to a parametric pulse is examined. The aim of the paper is to present a new theory for analysing transitions due to parametric pulses. The new theoretical notions which are introduced relate the pulse parameters in a direct way with the transition matrix elements. The harmonic oscillator transitions are expressed in terms of asymptotic properties of a companion oscillator, the Milne (amplitude) oscillator. A traditional phase-amplitude decomposition of the harmonic-oscillator solutions results in the so-called Milne's equation for the amplitude, and the phase is determined by an exact relation to the amplitude. This approach is extended in the present analysis with new relevant concepts and parameters for pulse dynamics of classical and quantal systems. The amplitude oscillator has a particularly nice numerical behavior. In the case of strong pulses it does not possess any of the fast oscillations induced by the pulse on the original harmonic oscillator. Furthermore, the new dynamical parameters introduced in this approach relate closely to relevant characteristics of the pulse. The relevance to quantum mechanical problems such as reflection and transmission from a localized well and mechanical problems of controlling vibrations is illustrated.
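In compact form, the phase-amplitude decomposition used here is
\[ x(t) = \rho(t)\,\cos\varphi(t), \qquad \ddot{\rho} + \omega^2(t)\,\rho = \rho^{-3}, \qquad \dot{\varphi}(t) = \rho^{-2}(t), \]
where the middle relation is Milne's equation for the amplitude and the last one is the exact relation determining the phase from the amplitude.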
We consider the maximum flow problem with minimum quantities (MFPMQ), which is a variant of the maximum flow problem where the flow on each arc in the network is restricted to be either zero or above a given lower bound (a minimum quantity), which may depend on the arc. This problem has recently been shown to be weakly NP-complete even on series-parallel graphs. In this paper, we provide further complexity and approximability results for MFPMQ and several special cases. We first show that it is strongly NP-hard to approximate MFPMQ on general graphs (and even bipartite graphs) within any positive factor. On series-parallel graphs, however, we present a pseudo-polynomial time dynamic programming algorithm for the problem. We then study the case that the minimum quantity is the same for each arc in the network and show that, under this restriction, the problem is still weakly NP-complete on general graphs, but can be solved in strongly polynomial time on series-parallel graphs. On general graphs, we present a \((2 - 1/\lambda)\)-approximation algorithm for this case, where \(\lambda\) denotes the common minimum quantity of all arcs.
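For concreteness, the feasibility constraint that distinguishes MFPMQ from the classical maximum flow problem can be stated as follows (the notation \(\lambda_a\) for the minimum quantity and \(c_a\) for the capacity of arc \(a\) is assumed here for illustration):
\[ \max\ \mathrm{val}(x) \quad \text{subject to flow conservation and} \quad x_a \in \{0\} \cup [\lambda_a, c_a] \ \text{for every arc } a, \]
where \(\mathrm{val}(x)\) denotes the flow value; the common-minimum-quantity case above corresponds to \(\lambda_a = \lambda\) for all arcs.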
Over the past two decades, there has been much progress on the classification of symplectic linear quotient singularities V/G admitting a symplectic (equivalently, crepant) resolution of singularities. The classification is almost complete, but there is an infinite series of groups in dimension 4—the symplectically primitive but complex imprimitive groups—and 10 exceptional groups up to dimension 10 for which it is still open. In this paper, we treat the remaining infinite series and prove that in all but possibly 39 cases there is no symplectic resolution. We thereby reduce the classification problem to finitely many open cases. We furthermore prove the non-existence of a symplectic resolution for one exceptional group, leaving 39 + 9 = 48 open cases in total. We do not expect any of the remaining cases to admit a symplectic resolution.
In cyanobacteria and plants, VIPP1 plays crucial roles in the biogenesis and repair of thylakoid membrane protein complexes and in coping with chloroplast membrane stress. In chloroplasts, VIPP1 localizes in distinct patterns at or close to envelope and thylakoid membranes. In vitro, VIPP1 forms higher-order oligomers of >1 MDa that organize into rings and rods. However, it remains unknown how VIPP1 oligomerization is related to function. Using time-resolved fluorescence anisotropy and sucrose density gradient centrifugation, we show here that Chlamydomonas reinhardtii VIPP1 binds strongly to liposomal membranes containing phosphatidylinositol-4-phosphate (PI4P). Cryo-electron tomography reveals that VIPP1 oligomerizes into rods that can engulf liposomal membranes containing PI4P. These findings place VIPP1 into a group of membrane-shaping proteins including epsin and BAR domain proteins. Moreover, they point to a potential role of phosphatidylinositols in directing the shaping of chloroplast membranes.
Cognitive Load Theory is considered universally applicable to all kinds of learning scenarios. However, instead of a universal method for measuring cognitive load that suits different learning contexts or target groups, there is a great variety of assessment approaches. Particularly common are subjective rating scales, which even allow for measuring the three assumed types of cognitive load in a differentiated way. Although these scales have proven effective for various learning tasks, they might not be an optimal fit for the learning demands of specific complex environments like technology-enhanced STEM laboratory courses. The aim of this research was therefore to examine and compare existing rating scales in terms of validity for this learning context and to identify options for adaptation, if necessary. For the present study, the two most common subjective rating scales that are known to differentiate between load types (the Cognitive Load Scale by Leppink et al. and the Naïve Rating Scale by Klepsch et al.) were slightly adapted to the context of learning through structured hands-on experimentation, where elements like measurement data, experimental setups, and experimental tasks affect knowledge acquisition. N = 95 engineering students performed six experiments examining basic electric circuits, in which they had to explore fundamental relationships between physical quantities based on observed data. Immediately after experimentation, students answered both adapted scales. Various indicators of validity were analyzed, considering the scales' internal structure and their relation to variables such as group allocation, as participants were randomly assigned to two conditions with contrasting spatial arrangements of the measurement data. For the given data set, the intended three-factorial structure could not be confirmed, and most of the a priori defined subscales showed insufficient internal consistency. A multitrait-multimethod analysis was used to look for convergent and discriminant evidence between the scales, but this could not be sufficiently confirmed. The two contrasted experimental conditions were expected to result in different ratings for extraneous load, which was detected by only one of the adapted scales. As a further step, two new scales were assembled from the overall item pool and the given data set. They revealed a three-factorial structure in accordance with the three types of load and appear to be promising new tools, although their subscales for extraneous load still suffer from low reliability scores.
The use of vegetable oil as a fuel for agricultural and forestry vehicles allows a CO2 reduction of up to 60%. On the other hand, the availability of vegetable oil is limited, and its price competitiveness depends heavily on the respective oil price. In order to reduce the dependence on the availability of specific fuels, the joint research project "MuSt5-Trak" (Multi-Fuel EU Stage 5 Tractor) aims to develop a prototype tractor capable of running on arbitrary mixtures of diesel and rapeseed oil.
Depending on the fuel mixture used, the engine parameters need to be adapted to the respective operating conditions. For this purpose, it is necessary to detect the composition of the fuel mixture and the fuel quality. Regardless of the available fuel mixture, all functions for regular engine operation must be maintained. A conventional active regeneration of the diesel particulate filter (DPF) cannot be carried out because rapeseed oil has a flash point of 230°C, compared to 80°C for diesel fuel. This leads to condensation of rapeseed oil when post-injection is used at low and medium part-load operating points, which causes a dilution of the engine oil.
In this work, engine-internal measures for achieving DPF regeneration with rapeseed oil and with mixtures of diesel fuel and rapeseed oil are investigated. In order to provide stationary operating conditions in real engine operation, a "high-idle" operating point is chosen. The fuel mixtures are examined with regard to their compatibility with a reduction of the air-fuel ratio, late combustion phasing and multiple injections; the highest temperatures are expected from a combination of these control options. After the completion of a regeneration cycle, the fuel entry into the engine oil is monitored. These investigations serve as a basis for the subsequent development of more complex regeneration strategies for close-to-reality engine operating cycles with varying load conditions.
Patients after total hip arthroplasty (THA) suffer from lingering musculoskeletal restrictions. Three-dimensional (3D) gait analysis in combination with machine-learning approaches is used to detect these impairments. In this work, features from the 3D gait kinematics of an inertial sensor (IMU) system, namely spatio-temporal parameters (Set 1) and joint angles (Set 2), are proposed as input for a support vector machine (SVM) model to differentiate impaired and non-impaired gait. The IMU-based features were validated against an optical motion capture (OMC) system using 20 patients after THA and a healthy control group of 24 subjects. The SVM model was then trained on both subsets. The validation of the IMU-based kinematic features revealed root mean squared errors in the joint kinematics from 0.24° to 1.25°; the validity of the spatio-temporal gait parameters (STP) was similarly high. The SVM models based on IMU data showed an accuracy of 87.2% (Set 1) and 97.0% (Set 2). The current work presents valid IMU-based features, employed in an SVM model, for the classification of the gait of patients after THA and healthy controls. The study reveals that the features of Set 2 are more significant for the classification problem. The present IMU system thus proves its potential to provide accurate features for incorporation into a mobile gait-feedback system for patients after THA.
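A minimal sketch of the classification step, assuming an RBF-kernel SVM on standardized features; the feature dimensionality and hyperparameters are placeholders rather than the study's actual pipeline:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row of IMU-derived gait features per subject (Set 1 or Set 2);
# y: 1 = patient after THA, 0 = healthy control.
X = np.random.rand(44, 12)         # placeholder feature matrix
y = np.array([1] * 20 + [0] * 24)  # 20 patients, 24 controls

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())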
3D joint kinematics can provide important information about the quality of movements. Optical motion capture systems (OMC) are considered the gold standard in motion analysis. However, in recent years, inertial measurement units (IMU) have become a promising alternative. The aim of this study was to validate IMU-based 3D joint kinematics of the lower extremities during different movements. Twenty-eight healthy subjects participated in this study. They performed bilateral squats (SQ), single-leg squats (SLS) and countermovement jumps (CMJ). The IMU kinematics were calculated using a recently described sensor-fusion algorithm. A marker-based OMC system served as reference. Only the technical error based on algorithm performance was considered, incorporating OMC data for the calibration, initialization, and a biomechanical model. To evaluate the validity of the IMU-based 3D joint kinematics, the root mean squared error (RMSE), the range of motion error (ROME), Bland-Altman (BA) analysis and the coefficient of multiple correlation (CMC) were calculated. The evaluation was twofold: first, the IMU data were compared to OMC data based on marker clusters; second, to OMC data based on skin markers attached to anatomical landmarks. The first evaluation revealed mean RMSE and ROME values below 3° for all joints and tasks. The more dynamic task, the CMJ, revealed error measures approximately 1° higher than the remaining tasks. Mean CMC values ranged from 0.77 to 1 over all joint angles and tasks. The second evaluation showed an increase in the RMSE of 2.28°–2.58° on average for all joints and tasks. Hip flexion revealed the highest average RMSE in all tasks (4.87°–8.27°). The present study revealed a valid IMU-based approach for the measurement of 3D joint kinematics in functional movements of varying demands. The high validity of the results encourages further development and the extension of the present approach into clinical settings.
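Two of the reported error measures are straightforward to state in code; this sketch compares an IMU joint-angle curve against the OMC reference (the signed ROME convention is an assumption):

import numpy as np

def rmse(theta_imu, theta_omc):
    # Root mean squared error between the two joint-angle curves.
    d = np.asarray(theta_imu) - np.asarray(theta_omc)
    return np.sqrt(np.mean(d**2))

def rome(theta_imu, theta_omc):
    # Range-of-motion error: difference of the two angle ranges.
    rom = lambda t: np.max(t) - np.min(t)
    return rom(np.asarray(theta_imu)) - rom(np.asarray(theta_omc))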
Understanding the mechanisms and controlling the possibilities of surface nanostructuring is of crucial interest for both fundamental science and application perspectives. Here, we report a direct experimental observation of laser-induced periodic surface structures (LIPSS) formed near a predesigned gold step edge following single-pulse femtosecond laser irradiation. Simulation results based on a hybrid atomistic-continuum model fully support the experimental observations. We experimentally detect nanosized surface features with a periodicity of ∼300 nm and heights of a few tens of nanometers. We identify two key components of single-pulse LIPSS formation: excitation of surface plasmon polaritons and material reorganization. Our results lay a solid foundation toward simple and efficient usage of light for innovative material processing technologies.
Biological soil crusts (biocrusts) have been recognized as key ecological players in arid and semiarid regions at both local and global scales. They are important biodiversity components, provide critical ecosystem services, and strongly influence soil-plant relationships and successional trajectories via facilitative, competitive, and edaphic engineering effects. Despite these important ecological roles, very little is known about biocrusts in seasonally dry tropical forests. Here we present a first baseline study on biocrust cover and ecosystem service provision in a human-modified landscape of the Brazilian Caatinga, South America's largest tropical dry forest. More specifically, we explored (1) across a network of 34 0.1 ha permanent plots the impact of disturbance, soil, precipitation, and vegetation-related parameters on biocrust cover in different stages of forest regeneration, and (2) the effect of disturbance on species composition, growth and soil organic carbon sequestration, comparing early and late successional communities in two case study sites at opposite ends of the disturbance gradient. Our findings revealed that biocrusts are a conspicuous component of the Caatinga ecosystem, with at least 50 different taxa of cyanobacteria, algae, lichens and bryophytes (cyanobacteria and bryophytes dominating) covering nearly 10% of the total land surface and doubling soil organic carbon content relative to bare topsoil. High litter cover, high disturbance by goats, and low soil compaction were the leading drivers of reduced biocrust cover, while precipitation showed no association. Second-growth forests supported an evenly distributed biocrust cover, while in old-growth forests biocrust cover was patchy. Disturbance reduced biocrust growth by two thirds and carbon sequestration by half. In synthesis, biocrusts increase soil organic carbon (SOC) in dry forests and, as they double the SOC content in disturbed areas, may be capable of counterbalancing disturbance-induced soil degradation in this ecosystem. As they fix and fertilize depauperate soils, they may play a substantial role in vegetation regeneration in the human-modified Caatinga and may gain further ecological importance with the ever-increasing human encroachment on natural landscapes. Even though biocrusts benefit from human presence in dry forests, high levels of anthropogenic disturbance could threaten biocrust-provided ecosystem services and call for further in-depth studies to elucidate the underlying mechanisms.
Ecophysiological characterizations of photoautotrophic communities are not only necessary to identify the response of carbon fixation related to different climatic factors, but also to evaluate risks connected to changing environments. In biological soil crusts (BSCs), the description of ecophysiological features is difficult, due to the high variability in taxonomic composition and variable methodologies applied. Especially for BSCs in early successional stages, the available datasets are rare or focused on individual constituents, although these crusts may represent the only photoautotrophic component in many heavily disturbed ruderal areas, such as parking lots or building areas with increasing surface area worldwide. We analyzed the response of photosynthesis and respiration to changing BSC water contents (WCs), temperature and light in two early successional BSCs. We investigated whether the response of these parameters was different between intact BSC and the isolated dominating components. BSCs dominated by the cyanobacterium Nostoc commune and dominated by the green alga Zygogonium ericetorum were examined. A major divergence between the two BSCs was their absolute carbon fixation rate on a chlorophyll basis, which was significantly higher for the cyanobacterial crust. Nevertheless, independent of species composition, both crust types and their isolated organisms had convergent features such as high light acclimatization and a minor and very late-occurring depression in carbon uptake at water suprasaturation. This particular setup of ecophysiological features may enable these communities to cope with a high variety of climatic stresses and may therefore be a reason for their success in heavily disturbed areas with ongoing human impact. However, the shape of the response was different for intact BSC compared to separated organisms, especially in absolute net photosynthesis (NP) rates. This emphasizes the importance of measuring intact BSCs under natural conditions for collecting reliable data for meaningful analysis of BSC ecosystem services.
Gliomas are primary brain tumors with a high invasive potential and infiltrative spread. Among them, glioblastoma multiforme (GBM) exhibits microvascular hyperplasia and pronounced necrosis triggered by hypoxia. Histological samples showing garland-like hypercellular structures (so-called pseudopalisades) centered around the occlusion site of a capillary are typical for GBM and hint at a poor prognosis of patient survival. We propose a multiscale modeling approach in the framework of the kinetic theory of active particles and deduce, by an upscaling process, a reaction-diffusion model with repellent pH-taxis. We prove the existence of a unique global bounded classical solution for a version of the obtained macroscopic system and investigate the asymptotic behavior of the solution. Moreover, we study two different types of scaling and compare the behavior of the obtained macroscopic PDEs by way of simulations. These show that patterns (not necessarily of Turing type), including pseudopalisades, can be formed for some parameter ranges, in accordance with the tumor grade. This is true when the PDEs are obtained via parabolic scaling (undirected tissue), while no such patterns are observed for the PDEs arising from a hyperbolic limit (directed tissue). This suggests that brain tissue might be undirected, at least as far as glioma migration is concerned. We also investigate two different ways of including cell-level descriptions of the response to hypoxia and the way they are related.
In recent years, the automotive industry has shifted from purely combustion-engine-driven vehicles towards hybridization due to the introduction of CO2 emission legislation. Hybrid powertrains also represent an important pillar and starting point on the journey towards zero emissions and full electrification. Fulfilling the most recent emission standards requires efficient control strategies for the engine that are capable of real-time operation. Model accuracy is one of the main parameters that directly influence the performance of such control strategies. Specific methodologies developed in the past, such as physically or phenomenologically based approaches, have already facilitated the modeling of the combustion engine. Even though these models can accurately predict emissions in steady-state conditions, they are time-consuming to develop and still not sufficiently reliable during transient engine operation. The major contribution of the current work is to clarify and apply recent advancements in data-driven modeling techniques, especially in time-series forecasting with feedforward neural networks (FFNNs) and long short-term memory networks (LSTMs), to address the limitations mentioned above and to compare the different approaches. The quantity and quality of data are significant challenges for data-driven modeling. This paper studies the modeling of gasoline engine emissions using FFNNs and LSTMs. The data quantity and quality requirements are studied based on a portable emission measurement system (PEMS) measuring at 1 Hz and on additional analyses on an engine test bench with a HiL setup, providing the possibility of increasing the measurement frequency by a factor of five with more sophisticated devices. Subsequently, the training and validation of the FFNNs and LSTMs are outlined, and finally, the model accuracy is discussed.
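A minimal sketch of the LSTM variant of such a data-driven emission model, assuming windows of engine-state channels as input and a single emission concentration as target; channel count, window length and layer sizes are illustrative, not the paper's architecture:

import torch
import torch.nn as nn

class EmissionLSTM(nn.Module):
    # Maps a window of engine-state channels (e.g. speed, load, lambda)
    # to one emission value predicted at the end of the window.
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # use the last time step

model = EmissionLSTM(n_features=8)
x = torch.randn(32, 100, 8)               # 32 windows of 100 samples each
loss = nn.MSELoss()(model(x), torch.randn(32, 1))
loss.backward()                            # standard supervised training step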
Longwave radiative heat transfer is a key determinant of energy consumption in buildings, and view factor calculations are therefore required for the detailed simulation of heat transfer between buildings and their environment as well as for heat exchange within rooms. Typically, these calculations are either derived through analytical means or performed as a part of the simulation process. This paper describes the methodology for employing RADIANCE, a command-line open-source raytracing software, for performing view factor calculations. Since it was introduced in the late 1980s, RADIANCE has been almost exclusively employed as the back-end engine for lighting simulations. We discuss the theoretical basis for calculating view factors through Monte Carlo calculations with RADIANCE and propose a corresponding workflow. The results generated through RADIANCE are validated by comparing them with analytical solutions. The fundamental methodology proposed in this paper can be scaled up to calculate view factors for more complex, practical scenarios. Furthermore, the portability, multi-processing functionality and cross-platform compatibility offered by RADIANCE can also be employed in the calculation of view factors.
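The estimation principle can be illustrated without RADIANCE itself: the view factor of a patch equals the fraction of cosine-weighted rays leaving it that land on the receiving surface. A self-contained sketch for two coaxial parallel unit squares (geometry chosen purely for illustration; for h = 1 the analytical value is about 0.2):

import numpy as np

def view_factor_mc(h=1.0, n=200_000, seed=0):
    # Monte Carlo estimate: sample ray origins uniformly on the emitting
    # square (z = 0) and directions with a cosine-weighted distribution,
    # then count hits on the receiving square at z = h.
    rng = np.random.default_rng(seed)
    ox, oy = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
    u1, u2 = rng.uniform(size=n), rng.uniform(size=n)
    theta, phi = np.arcsin(np.sqrt(u1)), 2.0 * np.pi * u2
    dx = np.sin(theta) * np.cos(phi)
    dy = np.sin(theta) * np.sin(phi)
    dz = np.cos(theta)                     # always positive: upper hemisphere
    t = h / dz                             # ray parameter at the plane z = h
    hx, hy = ox + t * dx, oy + t * dy
    return np.mean((hx >= 0) & (hx <= 1) & (hy >= 0) & (hy <= 1))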
Characterization of an Aerosol-Based Photobioreactor for Cultivation of Phototrophic Biofilms
(2021)
Phototrophic biofilms, in particular terrestrial cyanobacteria, offer a variety of biotechnologically interesting products such as natural dyes, antibiotics or dietary supplements. However, phototrophic biofilms are difficult to cultivate in submerged bioreactors. A new generation of biofilm photobioreactors imitates the natural habitat, resulting in higher productivity. In this work, an aerosol-based photobioreactor is presented that was characterized for the cultivation of phototrophic biofilms. Experiments and simulation of the aerosol distribution showed a uniform aerosol supply to the biofilms. Compared to previous prototypes, the growth of the terrestrial cyanobacterium Nostoc sp. could be almost tripled. Different surfaces for biofilm growth were investigated regarding hydrophobicity, contact angle, and light and temperature distribution; further, the results were successfully simulated. Finally, the growth of Nostoc sp. was investigated on different surfaces and the biofilm thickness was measured noninvasively using optical coherence tomography. It could be shown that the cultivation surface had no influence on biomass production, but did affect biofilm thickness.
Initiated by a task in tunable microoptics, but not limited to this application, a microfluidic droplet array in an upright-standing module with 3 × 3 subcells and droplet actuation via electrowetting is presented. Each subcell is filled with a single (transparent) water droplet, serving as a movable iris, surrounded by opaque blackened decane. Each subcell measures 1 × 1 mm² and incorporates 2 × 2 quadratically arranged positions for the droplet. All 3 × 3 droplets are actuated synchronously by electrowetting on dielectric (EWOD). The droplet speed is up to 12 mm/s at 130 V (Vrms), with response times of about 40 ms. The minimum operating voltage is 30 V. Horizontal and vertical movement of the droplets is demonstrated. Furthermore, a minor modification of the subcells allows us to exploit the flattening of each droplet: the opaque decane can then cover each water droplet and render each subcell opaque, resulting in switchable irises of constant opening diameter. The concept does not require any mechanically moving parts or external pumps.
Loss of USP28 and SPINT2 expression promotes cancer cell survival after whole genome doubling
(2021)
Background
Whole genome doubling is a frequent event during cancer evolution and shapes the cancer genome due to the occurrence of chromosomal instability. Yet, erroneously arising human tetraploid cells usually do not proliferate due to p53 activation that leads to CDKN1A expression, cell cycle arrest, senescence and/or apoptosis.
Methods
To uncover the barriers that block the proliferation of tetraploids, we performed an RNAi-mediated genome-wide screen in a human colorectal cancer cell line (HCT116).
Results
We identified 140 genes whose depletion improved the survival of tetraploid cells and characterized two of them, SPINT2 and USP28, in depth. We found that SPINT2 is a general regulator of CDKN1A transcription via histone acetylation. Using mass spectrometry and immunoprecipitation, we found that USP28 interacts with NuMA1 and affects centrosome clustering. Tetraploid cells accumulate DNA damage, and loss of USP28 reduces checkpoint activation, thus facilitating their proliferation.
Conclusions
Our results indicate three aspects that contribute to the survival of tetraploid cells: (i) increased mitogenic signaling and reduced expression of cell cycle inhibitors, (ii) the ability to establish functional bipolar spindles and (iii) reduced DNA damage signaling.
The measurement and assessment of indoor air quality in terms of respirable particulate constituents is relevant, especially in light of the COVID-19 pandemic and the associated infection events. To analyze the indoor infection potential and to develop customized hygiene concepts, monitoring of anthropogenic aerosol spreading is necessary. Standard laboratory equipment is usually used for indoor aerosol measurements. However, these devices are expensive, unwieldy, and time-consuming to operate. The idea is to replace this standard laboratory equipment with low-cost sensors widely used for monitoring fine dust (particulate matter, PM). Owing to the low acquisition costs, many sensors can be used to determine the aerosol load, even in large rooms. Thus, the aim of this work is to verify the measurement capability of low-cost sensors. For this purpose, two different models of low-cost sensors are compared with established laboratory measuring instruments. The study was performed with artificially generated NaCl aerosols of well-defined size and morphology. In addition, the influence of the relative humidity, which can vary significantly indoors, on the measurement capability of the low-cost sensors is investigated; for this purpose, a heating stage was developed and tested. The results show a discrepancy in measurement capability between the low-cost sensors and the laboratory instruments. This difference can be attributed to the partly different measuring principles as well as the different measured particle size ranges. The determined measurement accuracy is nevertheless good, considering the compactness and the acquisition price of the low-cost sensors.
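A typical way to quantify the agreement between a low-cost sensor and a reference instrument is a linear regression over time-aligned concentration readings. The Python sketch below uses made-up numbers purely for illustration; the study's data are not reproduced here:

    import numpy as np
    from scipy import stats

    # Hypothetical, time-aligned PM concentration readings in ug/m^3
    low_cost  = np.array([12.1, 15.3, 20.8, 33.0, 41.2, 55.7])
    reference = np.array([10.5, 14.0, 19.1, 30.2, 37.8, 50.1])

    # Linear fit and coefficient of determination as agreement measures
    res = stats.linregress(low_cost, reference)
    print(f"slope = {res.slope:.3f}, intercept = {res.intercept:.2f}, "
          f"R^2 = {res.rvalue**2:.3f}")

    # Mean absolute error as a simple accuracy measure
    print(f"MAE = {np.abs(low_cost - reference).mean():.2f} ug/m^3")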
In many robotic applications, the teaching of points in space is necessary to register the robot coordinate system with that of the application. Human–robot interaction is awkward and potentially dangerous for the human because of the robot's possibly large size and power, so robot movements must be predictable and natural. We present a novel hybrid control algorithm that provides the needed precision for small-scale movements while allowing for fast and intuitive large-scale translations.
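The abstract does not detail the control law, but the idea of a hybrid between fast coarse motion and precise fine motion can be sketched as a distance-dependent velocity command. The following Python fragment is one plausible illustration under assumed parameters, not the authors' algorithm:

    import numpy as np

    def hybrid_velocity(error, v_coarse=0.25, v_fine=0.01, blend_radius=0.05):
        # Velocity command (m/s) from the 3D position error (m):
        # fast far from the target, slow and precise close to it.
        dist = np.linalg.norm(error)
        if dist < 1e-9:
            return np.zeros(3)
        # Blend linearly between fine and coarse speed within blend_radius
        alpha = min(dist / blend_radius, 1.0)
        speed = v_fine + alpha * (v_coarse - v_fine)
        return speed * error / dist

    print(hybrid_velocity(np.array([0.20, 0.0, 0.0])))  # coarse regime
    print(hybrid_velocity(np.array([0.01, 0.0, 0.0])))  # fine regime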
In separation processes, not only thermodynamic bulk properties but also interfacial properties play a crucial role. In classical theory, a vapour-liquid interface is a two-dimensional object. In reality, it is a region in which properties change over a few nanometres and the density changes continuously from its liquid bulk value to its gas bulk value. Many mixtures show unexpected effects in that transition region. While the total density changes monotonically from the bulk vapour to the bulk liquid, this does not hold for the molarities of the components: the molarity of the light-boiling component can have a distinct maximum at the interface. According to Fickian theory, that maximum would be an insurmountable obstacle to mass transfer. Even if that argument is not adopted, there is good reason to believe that the maximum may affect mass transfer and, hence, fluid separation processes like absorption or distillation. Unfortunately, there are currently no experimental methods for directly studying density profiles in such interfacial regions. Such data can, however, be obtained with theoretical methods, namely with molecular dynamics (MD) simulations as well as with density gradient theory (DGT) or density functional theory (DFT) combined with an equation of state (EOS).
Studies from our group on the vapour-liquid interface of several real mixtures and a model fluid using these methods yield consistent results and reveal a considerable enrichment in some cases. Strong enrichment is found at vapour-liquid interfaces in systems in which one of the components is supercritical. These results indicate that mixtures that are typical of absorption processes usually show a considerable enrichment, whereas this is not the case for mixtures that are typically separated by distillation.
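In this line of work, the enrichment of a component i is commonly quantified via its density profile \rho_i(z) across the interface; a common definition (stated here as background, not quoted from the text above) is

    E_i = \frac{\max_z \rho_i(z)}{\max\!\left(\rho_i^{\mathrm{liq}},\, \rho_i^{\mathrm{vap}}\right)}

so that E_i > 1 indicates an accumulation of component i at the interface beyond both bulk values.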
Equations of state based on intermolecular potentials are often developed for the Lennard-Jones (LJ) potential, and many such EOS have been proposed in the past. In this work, 20 LJ EOS were examined regarding their performance on Brown's characteristic curves and characteristic state points. Brown's characteristic curves are directly related to the virial coefficients at specific state points, which can be computed exactly from the intermolecular potential. Therefore, the second and third virial coefficients of the LJ fluid were also investigated. This approach allows a comparison of available LJ EOS at extreme conditions. Physically based, empirical, and semi-theoretical LJ EOS were examined. Most of the investigated LJ EOS exhibit some unphysical artifacts.
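The link between the pair potential and the virial coefficients can be made concrete: the second virial coefficient follows from the potential u(r) via the one-dimensional integral B_2(T) = -2\pi \int_0^\infty \left(e^{-u(r)/k_B T} - 1\right) r^2 \, \mathrm{d}r. A minimal Python sketch in reduced LJ units (\epsilon = \sigma = k_B = 1; illustrative, not the paper's code):

    import numpy as np
    from scipy.integrate import quad

    def u_lj(r):
        # Lennard-Jones potential in reduced units
        return 4.0 * (r**-12 - r**-6)

    def b2(T):
        # B2*(T*) = -2*pi * Int_0^inf (exp(-u(r)/T) - 1) r^2 dr
        integrand = lambda r: (np.exp(-u_lj(r) / T) - 1.0) * r**2
        val, _ = quad(integrand, 1e-8, 50.0, limit=200)
        return -2.0 * np.pi * val

    # B2 changes sign at the Boyle temperature (T* ≈ 3.42 for the LJ fluid),
    # one of the characteristic state points probed by Brown's curves.
    for T in (1.0, 3.42, 10.0):
        print(f"T* = {T}:  B2* = {b2(T):+.4f}")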
The simulation of Dynamic Random Access Memories (DRAMs) at system level requires highly accurate models due to their complex timing and power behavior. However, conventional cycle-accurate DRAM subsystem models often become a bottleneck for the overall simulation speed. A promising alternative is simulators based on Transaction Level Modeling (TLM), which can be fast and accurate at the same time. In this paper we present DRAMSys4.0, which is, to the best of our knowledge, the fastest and most extensive open-source cycle-accurate DRAM simulation framework. DRAMSys4.0 features a novel software architecture that enables fast adaptation to different hardware controller implementations and new JEDEC standards, and it already supports the latest standards DDR5 and LPDDR5. We explain how to apply optimization techniques for increased simulation speed while maintaining full temporal accuracy. Furthermore, we demonstrate the simulator's accuracy and analysis tools with two application examples. Finally, we provide a detailed investigation and comparison of the most prominent cycle-accurate open-source DRAM simulators with regard to their supported features, analysis capabilities, and simulation speed.