Laser-based powder bed fusion (L-PBF) is a promising technology for the production of near-net-shaped metallic components. The high surface roughness and the comparatively low dimensional accuracy of such components, however, usually require finishing by a subtractive process such as milling or grinding in order to meet the requirements of the application. Materials manufactured via L-PBF are characterized by a unique microstructure and anisotropic material properties. These specific properties can also affect the subtractive processes themselves. In this paper, the effect of L-PBF on the machinability of the aluminum alloy AlSi10Mg during milling is explored. The chips, the process forces, the surface morphology, the microhardness, and the burr formation are analyzed in dependence on the manufacturing parameter settings used for L-PBF and on the direction of feed motion of the end mill relative to the build-up direction of the parts. The results are compared with conventionally cast AlSi10Mg. The analysis shows that L-PBF influences the machinability: differences between the reference and the L-PBF AlSi10Mg were observed in the chip form, the process forces, the surface morphology, and the burr formation. The initial manufacturing method of the part thus needs to be considered during the design of the finishing process to achieve suitable results.
Fucoidans are multifunctional marine macromolecules that are subjected to numerous and various downstream processes during their production. These processes are considered among the most important abiotic factors affecting fucoidan chemical skeletons, quality, physicochemical properties, biological properties and industrial applications. Since a universal protocol for fucoidan production has not yet been established, all currently used processes are presented and justified. The current article complements our previous articles in the fucoidan field, provides an updated overview of the different downstream processes, including pre-treatment, extraction, purification and enzymatic modification, and shows recent non-traditional applications of fucoidans in relation to their characteristics.
Background: The positive effect of carbohydrates from commercial beverages on soccer-specific exercise has been clearly demonstrated. However, no study is available that uses a home-mixed beverage in a test in which technical skills were required. Methods: Nine subjects participated voluntarily in this double-blind, randomized, placebo-controlled crossover study. On three testing days, the subjects performed six Hoff tests with a 3-min active break as a preload and then the Yo-Yo Intermittent Running Test Level 1 (Yo-Yo IR1) until exhaustion. On test days 2 and 3, the subjects received either a drink containing 69 g of carbohydrates (syrup–water mixture) or a carbohydrate-free drink (aromatic water). Beverages were given in several doses of 250 mL each: 30 min before and immediately before the exercise and after 18 and 39 min of exercise. The primary target parameters were the running performance in the Hoff test and Yo-Yo IR1, body mass and heart rate. Statistical differences between the variables of both conditions were analyzed using paired-samples t-tests. Results: The maximum heart rate in Yo-Yo IR1 showed significant differences (syrup: 191.1 ± 6.2 bpm; placebo: 188.0 ± 6.89 bpm; t(6) = −2.556; p = 0.043; dz = 0.97). The running performance in Yo-Yo IR1 under the syrup condition increased significantly by 93.33 ± 84.85 m (0–240 m) on average (p = 0.011). Conclusions: The intake of a syrup–water mixture with a total of 69 g of carbohydrates leads to an increase in high-intensity running performance after soccer-specific loads. Therefore, the intake of carbohydrate solutions is recommended for intermittent loads and should be increasingly considered by coaches and players.
This paper aims to improve the traditional calibration method for reconfigurable self-X (self-calibration, self-healing, self-optimization, etc.) sensor interface readout circuits for Industry 4.0. A cost-effective test stimulus is applied to the device under test, and the transient response of the system is analyzed to correlate the circuit's characteristic parameters. Due to the complexity of the search and objective space of smart sensory electronics, a novel experience replay particle swarm optimization (ERPSO) algorithm is proposed and shown to have better search capability than some currently well-known PSO algorithms. The newly proposed ERPSO expands the selection procedure of classical PSO by introducing an experience replay buffer (ERB), intending to reduce the probability of trapping in local minima. The ERB archives previously visited global-best particles, and selection from it is based on an adaptive epsilon-greedy method in the velocity-update model. The performance of the proposed ERPSO algorithm is verified using eight different popular benchmark functions. Furthermore, an extrinsic evaluation of the ERPSO algorithm is carried out on a reconfigurable wide-swing indirect current-feedback instrumentation amplifier (CFIA). For the latter test, we propose an efficient optimization procedure that uses total harmonic distortion analysis of the CFIA output to reduce the total number of measurements, saving considerable optimization time and cost. The proposed optimization methodology is roughly 3 times faster than the classical optimization process. The circuit is implemented using Cadence design tools and 0.35 µm CMOS technology from Austria Microsystems (AMS). Efficiency and robustness are the key features of the proposed methodology for implementing reliable sensory electronic systems for Industry 4.0 applications.
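The core ERPSO mechanism can be summarized in a few lines. The sketch below is a minimal, hypothetical implementation of an epsilon-greedy velocity update with an experience replay buffer; the buffer handling, epsilon schedule and coefficients are assumptions, not the published settings.

```python
# Minimal sketch of one ERPSO velocity/position update (assumed parameters).
import random
import numpy as np

def erpso_step(pos, vel, pbest, gbest, erb, eps, w=0.7, c1=1.5, c2=1.5):
    """pos, vel, pbest, gbest: arrays for one particle; erb: list of
    archived global bests; eps: exploration probability."""
    # Epsilon-greedy guide selection: occasionally pull an archived global
    # best from the experience replay buffer to escape local minima.
    if erb and random.random() < eps:
        guide = random.choice(erb)
    else:
        guide = gbest
    r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (guide - pos)
    return pos + vel, vel
```

After each iteration, the current global best would be appended to the buffer (up to some maximum length), and eps can be adapted as the swarm converges.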
This article proposes a new clock-dependent gain-scheduled dynamic output feedback controller for delayed linear parameter varying systems with piecewise constant parameters. The proposed controller guarantees ℒ2-performance. By employing a clock-dependent Lyapunov–Krasovskii functional, a sufficient condition for the existence of the controller is provided in terms of clock- and parameter-dependent linear matrix inequalities. A case study on output feedback control of delayed switched systems is also provided. To illustrate the efficacy of the result, it is applied to a practical VTOL helicopter model.
In recent years, the concept of a centralized drainage system that connects an entire city to a single treatment plant has increasingly been questioned in terms of costs, reliability, and environmental impacts. This study introduces an optimization approach based on decentralization in order to develop a cost-effective and sustainable sewage collection system. For this purpose, a new algorithm based on the growing spanning tree algorithm is developed for decentralized layout generation and treatment plant allocation. The trade-off between construction and operation costs, resilience, and the degree of centralization is a multiobjective problem that consists of two subproblems: the layout of the networks and the hydraulic design. The innovative characteristics of the proposed framework are that the layout and hydraulic designs are solved simultaneously, three objectives are optimized together, and the entire problem-solving process is self-adaptive. The model is then applied to a real case study. The results show that finding an optimum degree of centralization could not only reduce the network's costs by 17.3% but also significantly increase its structural resilience compared to fully centralized networks.
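To make the layout-generation idea concrete, the following is a strongly simplified, hypothetical sketch of a multi-source growing spanning tree: candidate treatment-plant nodes grow trees simultaneously through the sewer graph, so every manhole is assigned to its cheapest plant. The graph encoding and cost model are illustrative assumptions.

```python
# Multi-source, Prim/Dijkstra-style growth of a spanning forest (sketch).
import heapq
from itertools import count

def grow_spanning_forest(graph, plants):
    """graph: {node: {neighbor: edge_cost}}; plants: list of root nodes."""
    tie = count()                                # heap tie-breaker
    assigned, tree_edges = {}, []
    heap = [(0.0, next(tie), p, p, None) for p in plants]
    while heap:
        cost, _, node, plant, parent = heapq.heappop(heap)
        if node in assigned:
            continue                             # already claimed by a tree
        assigned[node] = plant                   # node -> serving plant
        if parent is not None:
            tree_edges.append((parent, node))    # sewer segment of the layout
        for nbr, w in graph[node].items():
            if nbr not in assigned:
                heapq.heappush(heap, (cost + w, next(tie), nbr, plant, node))
    return assigned, tree_edges
```

In the actual framework, layouts produced this way are evaluated together with the hydraulic design against all three objectives, not by edge cost alone.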
It is difficult for robots to handle a vibrating deformable object. Even for human beings it is a high-risk operation to, for example, insert a vibrating linear object into a small hole. However, fast manipulation using a robot arm is not just a dream; it may be achieved if some important features of the vibration are detected online. In this paper, we present an approach for fast manipulation using a force/torque sensor mounted on the robot's wrist. A template-matching method is employed to recognize the vibrational phase of the deformable object. Thus, fast manipulation can be performed with a high success rate, even under severe vibration. Experiments inserting a deformable object into a hole are conducted to test the presented method. The results demonstrate that the presented sensor-based online fast manipulation is feasible.
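A minimal form of such phase recognition is normalized cross-correlation of the recent force signal against a stored one-period template; the sketch below illustrates this, with the window length and normalization details as assumptions rather than the paper's exact procedure.

```python
# Phase recognition by normalized cross-correlation (illustrative sketch).
import numpy as np

def match_phase(signal, template):
    """Return (lag, score): offset within the last two periods of the
    force signal where the one-period template fits best."""
    template = np.asarray(template, dtype=float)
    n = len(template)
    window = np.asarray(signal[-2 * n:], dtype=float)
    t = (template - template.mean()) / (template.std() + 1e-9)
    best_score, best_lag = -np.inf, 0
    for lag in range(len(window) - n + 1):
        w = window[lag:lag + n]
        w = (w - w.mean()) / (w.std() + 1e-9)
        score = float(np.dot(w, t)) / n        # normalized correlation
        if score > best_score:
            best_score, best_lag = score, lag
    return best_lag, best_score
```

The recognized lag gives the current vibrational phase, from which the controller can time the insertion motion.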
As a consequence of the real estate market crash after 2008, large investors invested a significant amount of wealth in single-family houses to construct portfolios of rental dwellings whose income is securitized in capital markets. In some local housing markets, these investors own remarkable numbers of single-family houses. Furthermore, their trading activities have resulted in a new investment strategy, which exacerbates property wealth concentration and polarization. This new investment strategy and its portfolio optimization raise the question of its influence on housing markets. This paper first aims to find an optimal portfolio strategy by maximizing the expected utility of terminal wealth, adopting a stochastic model that includes a variety of economic states to estimate house prices. Second, it aims to analyze the effect of large investors on the housing market. The results show that the investment strategies of large investors depend on the balance among the economic state, maintenance costs, rental income, interest rates and the investors' willingness to invest in housing, and that their effect depends on the state of the economy.
Load modeling is one of the crucial tasks for improving smart grids' energy efficiency. Among many alternatives, machine learning-based load models have become popular in applications and have shown outstanding performance in recent years. The performance of these models relies heavily on the quality and quantity of data available for training. However, gathering a sufficient amount of high-quality data is time-consuming and extremely expensive. In the last decade, Generative Adversarial Networks (GANs) have demonstrated their potential to solve the data shortage problem by generating synthetic data learned from recorded/empirical data. Such synthetic datasets can reduce the prediction error of electricity consumption when combined with empirical data, and they can be used to enhance risk management calculations. In this study, we therefore apply RCGAN, TimeGAN, CWGAN, and RCWGAN, which take individual electricity consumption data as input, to provide synthetic data. Our work focuses on one-dimensional time series, and numerical experiments on an empirical dataset show that GANs are indeed able to generate synthetic data with realistic appearance.
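All four variants build on the basic adversarial setup sketched below: a generator maps noise to a synthetic load profile, and a discriminator is trained to tell real from generated profiles. This is a deliberately minimal, unconditional GAN in PyTorch; the recurrent and conditional components of RCGAN/TimeGAN/CWGAN/RCWGAN are beyond this sketch, and all network sizes are assumptions.

```python
# Minimal unconditional GAN for 1-D load profiles (illustrative sketch).
import torch
import torch.nn as nn

SEQ_LEN, NOISE_DIM = 96, 32              # e.g. one day at 15-min resolution

G = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(),
                  nn.Linear(128, SEQ_LEN))
D = nn.Sequential(nn.Linear(SEQ_LEN, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    """real_batch: (batch, SEQ_LEN) tensor of measured load profiles."""
    b = real_batch.size(0)
    fake = G(torch.randn(b, NOISE_DIM))
    # Discriminator update: real -> 1, fake -> 0.
    loss_d = (bce(D(real_batch), torch.ones(b, 1)) +
              bce(D(fake.detach()), torch.zeros(b, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator update: make the discriminator call fakes real.
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```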
Machining is very common in industry, e.g., in the automotive and aerospace sectors. It is a nonlinear dynamic problem involving large deformations, large strains, large strain rates and high temperatures, which poses difficulties for numerical methods such as the finite element method. One way to simulate such problems is the Particle Finite Element Method (PFEM), which combines the advantages of continuum mechanics and discrete modeling techniques. In this work we introduce an improved PFEM called the Adaptive Particle Finite Element Method (A-PFEM). The A-PFEM introduces particles and removes degenerate elements during the numerical simulation to improve accuracy and precision, decrease computing time, and resolve the phenomena that take place in machining at multiple scales. At the end of this paper, some examples are presented to show the performance of the A-PFEM.
Recently, phase field modeling of fatigue fracture has gained a lot of attention from many researchers, since the fatigue damage of structures is a crucial issue in mechanical design. Differing from traditional phase field fracture models, our approach considers not only the elastic strain energy and the crack surface energy; additionally, we introduce into the regularized energy density function a fatigue energy contribution caused by cyclic loading. Compared to other types of fracture phenomena, fatigue damage occurs only after a large number of load cycles, which requires a large computing effort in simulation. Furthermore, the choice of the cycle number increment is usually a compromise between simulation time and accuracy. In this work, we propose an efficient phase field method for cyclic fatigue crack propagation that requires only moderate computational cost without sacrificing accuracy. We divide the entire fatigue fracture simulation into three stages and apply a different cycle number increment in each damage stage. The basic concept of the algorithm is to associate the cycle number increment with the damage increment of each simulation iteration. Numerical examples show that our method can effectively predict the phenomenon of fatigue crack growth and reproduce fracture patterns.
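The stage-wise cycle-jump idea can be illustrated with a simple adaptive rule that ties the number of skipped cycles to the current damage rate; the tolerance, bounds and near-failure threshold below are illustrative assumptions, not the paper's calibrated values.

```python
# Adaptive cycle-number increment for a fatigue phase-field loop (sketch).
def next_cycle_increment(damage, damage_rate, tol=1e-3,
                         dn_min=1, dn_max=10_000):
    """damage: current maximum phase-field damage in [0, 1];
    damage_rate: estimated d(damage)/d(cycle) from the last iteration."""
    if damage_rate <= 0.0:
        return dn_max                  # initiation stage: jump far ahead
    dn = tol / damage_rate             # keep damage gain per jump near tol
    if damage > 0.9:
        dn = dn_min                    # near failure: resolve cycle by cycle
    return int(min(max(dn, dn_min), dn_max))
```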
In recent decades, the phase field method has drawn much attention for its application in fracture mechanics because it offers a simple unified framework for crack propagation. The core idea of phase field models for fracture is to introduce a continuous scalar field representing the discontinuous crack. Recently, a phase field model for fatigue has been proposed along this path. Fatigue failure differs from other fracture scenarios in that cracks occur only after a considerable number of load cycles. As fracturing happens, changes of the material microstructure are involved, which cause the evolution of the structural configuration. Thus, a new mathematical description, based not on traditional spatial coordinates but on the material manifold, is desired, which serves as an elegant analysis tool to understand the energetic forces for crack propagation. Configurational forces are a suitable choice for this purpose, as they describe the energetic driving forces associated with phenomena changing the material itself. In this work, we present a phase field model for fatigue. Furthermore, the phase field fatigue model is analyzed within the concept of configurational forces, which provides a straightforward way to understand phase field simulations of fatigue fracture.
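For orientation, in small-strain elasticity the configurational (Eshelby) stress and the associated configurational force density take the textbook form

$$\boldsymbol{\Sigma} = \psi\,\mathbf{I} - (\nabla\mathbf{u})^{\mathsf{T}}\boldsymbol{\sigma}, \qquad \mathbf{g} = -\operatorname{div}\boldsymbol{\Sigma},$$

where $\psi$ is the energy density, $\boldsymbol{\sigma}$ the Cauchy stress and $\mathbf{u}$ the displacement field; the paper's specific fatigue formulation adds phase-field and fatigue contributions to $\psi$ and is not reproduced here.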
Phospho-regulation of the Shugoshin - Condensin interaction at the centromere in budding yeast
(2020)
Correct bioriented attachment of sister chromatids to the mitotic spindle is essential for chromosome segregation. In budding yeast, the conserved protein shugoshin (Sgo1) contributes to biorientation by recruiting the protein phosphatase PP2A-Rts1 and the condensin complex to centromeres. Using peptide prints, we identified a Serine-Rich Motif (SRM) of Sgo1 that mediates the interaction with condensin and is essential for centromeric condensin recruitment and the establishment of biorientation. We show that the interaction is regulated via phosphorylation within the SRM and we determined the phospho-sites using mass spectrometry. Analysis of the phosphomimic and phosphoresistant mutants revealed that SRM phosphorylation disrupts the shugoshin–condensin interaction. We present evidence that Mps1, a central kinase in the spindle assembly checkpoint, directly phosphorylates Sgo1 within the SRM to regulate the interaction with condensin and thereby condensin localization to centromeres. Our findings identify novel mechanisms that control shugoshin activity at the centromere in budding yeast.
Print path-dependent contact temperature dependency for 3D printing using fused filament fabrication
(2022)
This paper focuses on the effects of different time spans, and thus different contact temperatures, when a molten strand contacts an adjacent, already solidified strand in a plane during 3D printing with fused filament fabrication. For this purpose, both the manufacturing parameters and the geometry of the component are systematically varied and the effect on morphology and mechanical properties is investigated. The results clearly show that, even with identical printing parameters, the transitions between the individual layers are much more visible with long time spans until fusion and lead to low mechanical properties. In contrast, short time spans lead to hardly visible welds and high mechanical properties. Transferring the findings to different component sizes ultimately verifies that the average temperature at the time of contact between the already solidified and the currently deposited strand is decisive for component quality. In order to achieve high component quality, this finding must therefore be taken into account in the future in the path generation strategy, i.e., in so-called slicing.
Global trends such as climate change and the scarcity of sustainable raw materials require adaptive, more flexible and resource-saving wastewater infrastructures for rural areas. Since 2018, in the community Reinighof, an isolated site in the countryside of Rhineland-Palatinate (Germany), an autarkic, decentralized wastewater treatment and phosphorus recovery concept has been developed, implemented and tested. While feces are composted, an easy-to-operate system for producing struvite as a mineral fertilizer was developed and installed to recover phosphorus from urine. The nitrogen-containing supernatant of this process stage is treated in a special soil filter and afterwards discharged to a constructed wetland for grey water treatment, followed by an evaporation pond. To recover more than 90% of the phosphorus contained in the urine, the influence of the magnesium source, the dosing strategy, the molar Mg:P ratio and the reaction and sedimentation times were investigated. The results show that, with a long reaction time of 1.5 h and a molar Mg:P ratio above 1.3, constraints concerning the magnesium source can be overcome and a stable process can be achieved even under varying boundary conditions. Within the special soil filter, the high ammonium nitrogen concentrations of over 3000 mg/L in the supernatant of the struvite reactor were considerably reduced. In the effluent of the subsequent constructed wetland for grey water treatment, the ammonium nitrogen concentrations were below 1 mg/L. This resource-efficient decentralized wastewater treatment is self-sufficient, produces valuable fertilizer and does not need a centralized wastewater system as backup. It has high potential to be transferred to other rural communities.
This paper discusses the problem of automatic off-line programming and motion planning for industrial robots. First, a new concept consisting of three steps is proposed. In the first step, a new method for on-line motion planning is introduced. The motion planning method is based on the A*-search algorithm and works in the implicit configuration space. During the search, collisions are detected in the explicitly represented Cartesian workspace by hierarchical distance computation. In the second step, the trajectory planner transforms the path into a time- and energy-optimal robot program. The practical application of these two steps strongly depends on a method for high-accuracy robot calibration, which maps the virtual world onto the real world and is discussed in the third step.
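The search step can be summarized by a generic A* over an implicitly discretized configuration space, with collisions checked lazily by a user-supplied test (standing in for the hierarchical distance computation). The sketch below is a plain A* skeleton, not the paper's implementation; the neighbor model and costs are assumptions.

```python
# A* search over discrete robot configurations (illustrative sketch).
import heapq
from itertools import count

def a_star(start, goal, is_collision_free, neighbors, heuristic):
    """start, goal: hashable configurations; neighbors(q) yields
    (q_next, step_cost); heuristic must not overestimate path cost."""
    tie = count()                                 # heap tie-breaker
    open_heap = [(heuristic(start, goal), 0.0, next(tie), start, None)]
    parents, closed = {}, set()
    while open_heap:
        _, g, _, q, parent = heapq.heappop(open_heap)
        if q in closed:
            continue
        closed.add(q)
        parents[q] = parent
        if q == goal:                             # reconstruct the path
            path = [q]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        for nq, step in neighbors(q):
            if nq not in closed and is_collision_free(nq):
                heapq.heappush(open_heap, (g + step + heuristic(nq, goal),
                                           g + step, next(tie), nq, q))
    return None                                   # no collision-free path
```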
This paper presents a new approach to parallel motion planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on the A*-search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel motion planner on a workstation cluster with 9 PCs and tested the planner in several benchmark environments. With optimal discretisation, the new approach usually shows linear speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
A practical distributed planning and control system for industrial robots is presented. The hierarchical concept consists of three independent levels. Each level is modularly implemented and provides an application programming interface (API) to the next higher level. At the top level, we propose an automatic motion planner. The motion planner is based on a best-first search algorithm and needs no essential off-line computations. At the middle level, we propose a PC-based robot control architecture, which can easily be adapted to any industrial kinematics and application. Based on a client/server principle, the control unit establishes an open user interface for including application-specific programs. At the bottom level, we propose a flexible and modular concept for the integration of the distributed motion control units based on the CAN bus. The concept allows an on-line adaptation of the control parameters according to the robot's configuration. This provides high accuracy for the path execution and improves the overall system performance.
This paper presents a new approach to parallel motion planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on the A*-search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel motion planner on a workstation cluster with 9 PCs and tested the planner in several benchmark environments. With optimal discretisation, the new approach usually shows linear, and sometimes even superlinear, speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
A new problem for the automated off-line programming of industrial robot applications is investigated. The multi-goal path planning task is to find a collision-free path that connects a set of goal poses while minimizing, e.g., the total path length. Our solution is based on an earlier reported path planner for industrial robot arms with 6 degrees of freedom in an on-line given 3D environment. To control the path planner, four different goal selection methods are introduced and compared. While the Random and the Nearest Pair Selection methods can be used with any path planner, the Nearest Goal and the Adaptive Pair Selection methods are favorable for our planner. With the latter two goal selection methods, the multi-goal path planning task can be significantly accelerated, because they are able to automatically solve the simplest path planning problems first. In summary, compared to Random or Nearest Pair Selection, this new multi-goal path planning approach results in a further cost reduction of the programming phase.
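As a simple reference point, Nearest Goal Selection can be phrased as a greedy loop that always plans to the unvisited goal closest to the current pose; plan_path and the distance heuristic below are placeholders standing in for the full planner and are assumptions of this sketch.

```python
# Greedy nearest-goal ordering for multi-goal path planning (sketch).
def nearest_goal_tour(start, goals, distance, plan_path):
    """distance: cheap heuristic between poses; plan_path: full planner."""
    current, remaining, tour = start, set(goals), []
    while remaining:
        nxt = min(remaining, key=lambda g: distance(current, g))
        tour.append(plan_path(current, nxt))    # expensive planner call
        remaining.remove(nxt)
        current = nxt
    return tour
```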
Applications of Efficient Methods in Automation - Universität Karlsruhe at the SPS97 in Nürnberg -
(1998)
Motion planning for industrial robots is a necessary prerequisite for autonomous systems to move through their environment without collisions. Taking dynamic obstacles into account at runtime, however, requires powerful algorithms to solve this task in real time. One way to accelerate the algorithms is the efficient use of scalable parallel processing. The software implementation can only be successful, however, if a parallel computer is available that offers high data throughput at low latency. In addition, this parallel computer must be operable with reasonable effort and offer a good price-performance ratio, so that parallel processing is increasingly adopted in industry. This article presents a workstation cluster based on nine standard PCs that are interconnected by a special communication card. The individual sections describe the experience gathered during commissioning, system administration and application. As an example of an application on this cluster, a parallel motion planner for industrial robots is described.
In response priming experiments, a participant has to respond as quickly and as accurately as possible to a target stimulus preceded by a prime. The prime and the target can either be mapped to the same response (consistent trial) or to different responses (inconsistent trial). Here, we investigate the effects of two sequential primes (each one either consistent or inconsistent) followed by one target in a response priming experiment. We employ discrete-time hazard functions of response occurrence and conditional accuracy functions to explore the temporal dynamics of sequential motor activation. In two experiments (small-N design, 12 participants, 100 trials per cell and subject), we find that (1) the earliest responses are controlled exclusively by the first prime if primes are presented in quick succession, (2) intermediate responses reflect competition between primes, with the second prime increasingly dominating the response as its time of onset is moved forward, and (3) only the slowest responses are clearly controlled by the target. The current study provides evidence that sequential primes meet strict criteria for sequential response activation. Moreover, it suggests that primes can influence responses out of a memory buffer when they are presented so early that participants are forced to delay their responses.
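The discrete-time hazard analysis used here has a compact form: the hazard of response occurrence in a time bin is the number of responses in that bin divided by the number of trials still without a response at the bin's start. The sketch below illustrates this estimator; the bin width is an arbitrary choice for illustration.

```python
# Discrete-time hazard of response occurrence from response times (sketch).
import numpy as np

def discrete_hazard(rts, bin_ms=40, t_max=1000):
    """rts: array of response times in ms (one per trial)."""
    edges = np.arange(0, t_max + bin_ms, bin_ms)
    counts, _ = np.histogram(rts, bins=edges)
    n = len(rts)
    # Trials still "at risk" (no response yet) at the start of each bin.
    at_risk = n - np.concatenate(([0], np.cumsum(counts)[:-1]))
    hazard = np.where(at_risk > 0, counts / np.maximum(at_risk, 1), np.nan)
    return edges[:-1], hazard
```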
Surface wetting can be simulated using a phase field approach which describes the continuous liquid-gas transition with the help of an order parameter. In this publication, wetting of non-planar surfaces is investigated based on a phase field model by Diewald et al. [1, 2]. Different scenarios of droplets on rough surfaces are simulated. The static equilibrium for those scenarios is calculated using an Allen-Cahn evolution equation. The influence of the surface morphology on the resulting contact angle is investigated while the width of the phase transition from liquid to gas is varied as a model parameter.
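For reference, the Allen-Cahn evolution mentioned above has the generic form

$$\frac{\partial \phi}{\partial t} = -M\,\frac{\delta F}{\delta \phi} = -M\left(f'(\phi) - \varepsilon^{2}\,\Delta\phi\right),$$

where $\phi$ is the order parameter distinguishing liquid and gas, $M$ is a mobility, $f(\phi)$ is a double-well bulk energy density and $\varepsilon$ sets the width of the diffuse interface; the wall and surface-energy terms of the cited model [1, 2] are omitted in this generic statement.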
Surface wetting can be described by phase field models [1]. In these models, often either the contact angle or the surface tensions between the solid and the fluid are prescribed directly on the wall in order to represent the solid-fluid interaction. However, the interaction of the wall and the fluid is not strictly local. The influence of the wall, which can be described by wall potentials [2], reaches into the fluid and is the reason for the formation of adsorbate layers. This investigation shows how such a wall potential can be included in a phase field model of wetting. It is found that, by considering this energy contribution, the model is able to capture the adsorbate layer.
A novel shadowgraphic inline probe to measure crystal size distributions (CSD), based on acquired greyscale images, is evaluated in terms of elevated temperatures and fragile crystals, and compared to well-established, alternative online and offline measurement techniques, i.e., sieving analysis and online microscopy. Additionally, the operation limits, with respect to temperature, supersaturation, suspension, and optical density, are investigated. Two different substance systems, potassium dihydrogen phosphate (prisms) and thiamine hydrochloride (needles), are crystallized for this purpose at 25 L scale. Crystal phases of the well-known KH2PO4/H2O system are measured continuously by the inline probe and in a bypass by the online microscope during cooling crystallizations. Both measurement techniques show similar results with respect to the crystal size distribution, except for higher temperatures, where the bypass variant tends to fail due to blockage. Thiamine hydrochloride, a substance forming long and fragile needles in aqueous solutions, is solidified with an anti-solvent crystallization with ethanol. The novel inline probe could identify a new field of application for image-based crystal size distribution measurements, with respect to difficult particle shapes (needles) and elevated temperatures, which cannot be evaluated with common techniques.
The fluid dynamic (flow rates) and hydrodynamic behavior (local droplet size distributions and local holdup) of a continuous DN300 pump-mixer were investigated using water as the continuous phase and paraffin oil as the dispersed phase. The influence of the impeller speed (375 to 425 rpm), the feed phase ratio (10 to 30 volume percent), and the total flow rate (0.5 to 2.3 L/min) was investigated by measuring the pumping height, the local holdup of the disperse phase, and the droplet size distribution (DSD). The latter was measured at three different vessel positions using an image-based telecentric shadowgraphic technique. The droplet diameters were extracted from the acquired images using a neural network. The Sauter mean diameters were calculated from the DSD and correlated with an extended model based on Doulah (1975), considering the impeller speed, the feed phase ratio, and additionally the flow rate. The new correlation can describe an extensive database containing 155 experiments on the fluid dynamic and hydrodynamic behavior within a 15% error range.
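The Sauter mean diameter referred to here is the standard volume-to-surface mean of the measured droplet diameters,

$$d_{32} = \frac{\sum_i n_i d_i^{3}}{\sum_i n_i d_i^{2}},$$

where $n_i$ is the number of droplets with diameter $d_i$.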
In this work, steady-state droplet size distributions in a DN300 stirred batch vessel with a Rushton turbine impeller are investigated using an insertion probe based on the telecentric transmitted light principle. High-resolution droplet size distributions are extracted from the images using a convolutional neural network for image analysis in order to investigate the influence of impeller speed and phase fraction (up to 50 vol.-%). In addition, Sauter mean diameters were calculated and correlated with two semi-empirical approaches: the standard approach only achieved 5.7% accuracy, while the correlation of Laso et al. provided a relative mean error of 4.0%. In addition, the correlated exponent in the Weber number was fitted to the experimental data of this work, yielding a slightly different value than the theoretical one (−0.6), which allows a better representation of the low coalescence tendency of the system, which is usually neglected in standard procedures.
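For context, the semi-empirical correlations mentioned relate the dimensionless Sauter diameter to the impeller Weber number in the classical form

$$\frac{d_{32}}{D} = C_1\,(1 + C_2\,\varphi)\,We^{-0.6}, \qquad We = \frac{\rho_c\,N^{2}D^{3}}{\sigma},$$

where $D$ is the impeller diameter, $N$ the impeller speed, $\varphi$ the dispersed-phase fraction, $\rho_c$ the continuous-phase density, $\sigma$ the interfacial tension, and $C_1$, $C_2$ are system-specific fit parameters; the exponent −0.6 is the theoretical value that the fit in this work adjusts.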
One of the ongoing tasks in space structure testing is the vibration test, in which a given structure is mounted onto a shaker and excited by a certain input load over a given frequency range in order to reproduce the rigors of launch. These vibration tests need to be conducted to ensure that the devised structure meets the expected loads of its future application. However, the structure must not be overtested, to avoid any risk of damage. For this, the system's response to the testing loads, i.e., stresses and forces in the structure, must be monitored and predicted live during the test. In order to solve the issues associated with existing methods of live monitoring of the structure's response, this paper investigated the use of artificial neural networks (ANNs) to predict the system's responses during the test. Hence, a framework was developed with different use cases to compare various kinds of artificial neural networks and eventually identify the most promising one. The conducted research thus provides a novel method for the live prediction of stresses, allowing failure to be evaluated for different types of material via yield criteria.
INRECA offers tools and methods for developing, validating, and maintaining classification, diagnosis and decision support systems. INRECA's basic technologies are inductive and case-based reasoning [9]. INRECA fully integrates [2] both techniques within one environment and uses the respective advantages of both technologies. Its object-oriented representation language CASUEL [10, 3] allows the definition of complex case structures, relations, similarity measures, as well as background knowledge to be used for adaptation. The object-oriented representation language makes INRECA a domain-independent tool for its intended kinds of tasks. When problems are solved via case-based reasoning, the primary kind of knowledge used during problem solving is the very specific knowledge contained in the cases. However, in many situations this specific knowledge by itself is not sufficient or appropriate to cope with all requirements of an application. Very often, background knowledge is available and/or necessary to better explore and interpret the available cases [1]. Such general knowledge may state dependencies between certain case features and can be used to infer additional, previously unknown features from the known ones.
Edit distances between merge trees of scalar fields have many applications in scientific visualization, such as ensemble analysis, feature tracking or symmetry detection. In this paper, we propose branch mappings, a novel approach to the construction of edit mappings for merge trees. Classic edit mappings match nodes or edges of two trees onto each other, and therefore have to either rely on branch decompositions of both trees or have to use auxiliary node properties to determine a matching. In contrast, branch mappings employ branch properties instead of node similarity information, and are independent of predetermined branch decompositions. Especially for topological features, which are typically based on branch properties, this allows a more intuitive distance measure which is also less susceptible to instabilities from small-scale perturbations. For trees with 𝒪(n) nodes, we describe an 𝒪(n⁴) algorithm for computing optimal branch mappings, which is faster than the only other branch decomposition-independent method in the literature by more than a linear factor. Furthermore, we compare the results of our method on synthetic and real-world examples to demonstrate its practicality and utility.
This contribution presents results of an investigation of solid-lubricated rolling bearings. The bearings considered use a special, modified cage whose pockets serve, in addition to their original function of guiding the rolling elements, as a lubricant depot. First, the test setup and the test conditions are explained, and in this context it is shown that the setup used here exhibits considerably reduced scatter compared to the setups of previous work. The hygroscopic behavior of the polymer compound was identified as a non-negligible source of error in the gravimetric determination of cage pocket wear. A distortion of these measurement results by uncontrolled moisture uptake from the environment must be prevented by a preceding drying process under defined conditions. It is further shown that the cage pockets are worn both by the inner ring of the bearing and by the rolling elements, and a measurement method for determining the amount of material worn away by the inner ring is presented. Surface analyses of the brass structure of the cage demonstrate a depletion of zinc as well as a change in the surface structure; sublimation of the zinc under the test conditions is suspected as the cause. Furthermore, it is shown that the test temperature of 300 °C leads to shrinkage of the bearing rings. This dimensional reduction can be anticipated by tempering at 300 °C for 48 h.
The political science literature on German federalism is extremely diverse. Besides analyses of the institutional arrangements, their changes and the dynamics of Germany's interlocking federalism, there are numerous studies of individual policy fields that examine both the interactions between the federal government and the Länder and the variance between the Länder's policies and its determinants. In addition, distinct research branches on parties in the federal state and on parliamentary research at the Land level have become established over the past decades. Despite this extensive research activity, some central questions of political science about the interplay between voters, parties, parliaments and governments, and their effect on political outputs and outcomes, remain unanswered. This is due, as this contribution argues, in particular to the missing integration of individual strands of the literature and to the still insufficient empirical data base. By systematizing the current state of the literature, the article outlines a research program aimed at a comprehensive analysis of the political process of opinion formation and decision-making in the German Länder, systematically addressing questions of responsiveness and feedback.
Algorithms increasingly govern people's lives, including through rapidly spreading applications in the public sector. This paper sheds light on acceptance of algorithms used by the public sector emphasizing that algorithms, as parts of socio-technical systems, are always embedded in a specific social context. We show that citizens' acceptance of an algorithm is strongly shaped by how they evaluate aspects of this context, namely the personal importance of the specific problems an algorithm is supposed to help address and their trust in the organizations deploying the algorithm. The objective performance of presented algorithms affects acceptance much less in comparison. These findings are based on an original dataset from a survey covering two real-world applications, predictive policing and skin cancer prediction, with a sample of 2661 respondents from a representative German online panel. The results have important implications for the conditions under which citizens will accept algorithms in the public sector.
Comparative public policy is a blooming research area. It also suffers from some curious blind spots. In this paper we discuss four of these: (1) the obsession with covariance, which means that important phenomena are ignored; (2) the lack of agency, which leads to underwhelming explanatory models; (3) the unclear universe of cases, which means the inferential value of theories and the empirical results are unclear; and (4) the focus on outputs, even though most theories contain strong assumptions about the political process leading to certain outputs. Following this discussion, we then outline how a closer integration of policy process theories may be fruitful for future research.
We studied the development of cognitive abilities related to intelligence and creativity (N = 48, 6–10 years old), using a longitudinal design (over one school year), in order to evaluate an Enrichment Program for gifted primary school children initiated by the government of the German federal state of Rhineland-Palatinate (Entdeckertag Rheinland Pfalz, Germany; ET; Day of Discoverers). A group of German primary school children (N = 24), identified earlier as intellectually gifted and selected to join the ET program, was compared to a gender-, class- and IQ-matched group of control children that did not participate in this program. All participants performed the Standard Progressive Matrices (SPM) test, which measures intelligence in well-defined problem space; the Creative Reasoning Task (CRT), which measures intelligence in ill-defined problem space; and the test of creative thinking-drawing production (TCT-DP), which measures creativity, also in ill-defined problem space. Results revealed that problem space matters: the ET program is effective only for the improvement of intelligence operating in well-defined problem space. An effect was found for intelligence as measured by the SPM only, but neither for intelligence operating in ill-defined problem space (CRT) nor for creativity (TCT-DP). This suggests that, depending on the type of problem space presented, different cognitive abilities are elicited in the same child. Therefore, enrichment programs for gifted children, but also for children attending traditional schools, should provide opportunities to develop cognitive abilities related to intelligence operating in both well- and ill-defined problem spaces, and to creativity, in parallel, using an interactive approach.
In contrast to the transmission grid, whose structure is known with sufficient accuracy, suitable grid models for medium-voltage (MV) grids are difficult to map because of the large number of MV grids and distribution system operators (DSOs). Furthermore, a detailed description of real MV grids is usually not desired in scientific publications for data privacy reasons. In this work, MV grid models and their development are explained in detail. For the first time, comprehensible MV grid models for the German-speaking area are thus available to the public. They can be used as a benchmark for scientific investigations and for method development.
To investigate whether participants can activate only one spatially oriented number line at a time or multiple number lines simultaneously, they were asked to solve a unit magnitude comparison task (unit smaller/larger than 5) and a parity judgment task (even/odd) on two-digit numbers. In both of these primary tasks, decades were irrelevant. After some of the primary task trials (randomly), participants were asked to additionally solve a secondary task based on the previously presented number. In Experiment 1, they had to decide whether the two-digit number presented for the primary task was larger or smaller than 50. Thus, for the secondary task, decades were relevant. In contrast, in Experiment 2, the secondary task was a color judgment task, which means decades were irrelevant. In Experiment 1, decades' and units' magnitudes influenced the spatial association of numbers separately. In contrast, in Experiment 2, only the units were spatially associated with magnitude. It was concluded that multiple number lines (one for units and one for decades) can be activated if attention is focused on multiple, separate magnitude attributes.
Due to the steadily increasing number of decentralized generation units, the upcoming smart meter rollout and the expected electrification of the transport sector (e-mobility), grid planning and grid operation at the low-voltage (LV) level are facing major challenges. Therefore, many studies, research and demonstration projects on the above topics have been carried out in recent years, and the results and the methods developed have been published. However, the published methods usually cannot be replicated or validated, since the majority of the examination models or the scenarios used are incomprehensible to third parties. There is a lack of uniform grid models that map the German LV grids and can be used for comparative investigations, similar to the example of the North American distribution grid models of the IEEE. In contrast to the transmission grid, whose structure is known with high accuracy, suitable grid models for LV grids are difficult to map because of the high number of LV grids and distribution system operators. Furthermore, a detailed description of real LV grids is usually not available in scientific publications for data privacy reasons. For investigations within a research project, characteristic synthetic LV grid models have therefore been created, which are based on common settlement structures and usual grid planning principles in Germany. In this work, these LV grid models and their development are explained in detail. For the first time, comprehensible LV grid models for the Central European area are available to the public, which can be used as a benchmark for further scientific research and method development.
This document is an English version of a paper originally written in German. In addition, this paper discusses a few more aspects, especially the planning process of distribution grids in Germany.
Due to the steady increase of decentralized generation units, the upcoming smart meter rollout and the expected electrification of the transport sector (e-mobility), grid planning and grid operation of low-voltage (LV) grids in Germany face major challenges. In recent years, many studies, research and demonstration projects on the above topics have therefore been carried out, and the results as well as the developed methods have been published. However, the published methods usually cannot be reproduced or validated, since the examination models or the applied scenarios are not comprehensible to third parties. Uniform grid models that map the German LV grids and can be used for comparative investigations are missing, similar to the example of the North American distribution grid models of the IEEE.
In contrast to the transmission grid, whose structure is known with sufficient accuracy, suitable grid models for LV grids are difficult to map because of the large number of LV grids and distribution system operators (DSOs). Furthermore, a detailed description of real LV grids is usually not desired in scientific publications for data privacy reasons. For investigations within a research project, synthetic LV grid models that are as characteristic as possible were therefore created, oriented towards common German settlement structures and usual grid planning principles. In this work, these LV grid models and their development are explained in detail. For the first time, comprehensible LV grid models for the German-speaking area are thus available to the public. They can be used as a benchmark for scientific investigations and for method development.
Control Concept for Low-Voltage Grid Automation Using the Merit-Order Principle
(2022)
Due to the increasing generation capacity at the low-voltage (LV) grid level from photovoltaic systems, as well as the electrification of the heating and transport sectors, investments in the LV grids are necessary. A higher degree of digitalization in the LV grid has the potential to identify the necessary investments more precisely and thus, where applicable, to reduce or postpone them. The market introduction of intelligent metering systems, so-called smart meters, represents a new opportunity to obtain measurements from the LV grid and to optimize the setpoints of available actuators based on them. This raises the question of how measurement data with different measurement cycles can be used in a grid automation system and how the nonlinear integer optimization problem of setpoint optimization can be solved efficiently. This work addresses the solution of this optimization problem, applying a setpoint optimization based on the merit-order principle.
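The merit-order principle referred to above can be illustrated by a simple dispatch loop: actuator measures are sorted by specific cost and activated in that order until the detected limit violation is resolved. The data model and the scalar "relief" abstraction below are illustrative assumptions, not the implemented optimization.

```python
# Merit-order dispatch of actuator setpoints (illustrative sketch).
def merit_order_dispatch(violation, actuators):
    """violation: scalar amount of limit violation to remove;
    actuators: dicts with 'cost_per_unit' and 'max_relief'."""
    plan, remaining = [], violation
    for act in sorted(actuators, key=lambda a: a["cost_per_unit"]):
        if remaining <= 0:
            break
        used = min(act["max_relief"], remaining)
        plan.append((act, used))
        remaining -= used
    return plan, remaining    # remaining > 0: violation not fully resolved
```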
The size congruity effect involves interference between numerical magnitude and physical size of visually presented numbers: congruent numbers (either both small or both large in numerical magnitude and physical size) are responded to faster than incongruent ones (small numerical magnitude/large physical size or vice versa). Besides, numerical magnitude is associated with lateralized response codes, leading to the Spatial Numerical Association of Response Codes (SNARC) effect: small numerical magnitudes are preferably responded to on the left side and large ones on the right side. Whereas size congruity effects are ascribed to interference between stimulus dimensions in the decision stage, SNARC effects are understood as (in)compatibilities in stimulus-response combinations. Accordingly, size congruity and SNARC effects were previously found to be independent in parity and in physical size judgment tasks. We investigated their dependency in numerical magnitude judgment tasks. We obtained independent size congruity and SNARC effects in these tasks and replicated this observation for the parity judgment task. The results confirm and extend the notion that size congruity and SNARC effects operate in different representational spaces. We discuss possible implications for number representation.
The modified fouling index (MFI) is a crucial characteristic for assessing the fouling potential of reverse osmosis (RO) feed water. Although the MFI is widely used, the time required for filtration and data evaluation is still relatively long. In this study, the relationship between the MFI and instantaneous spectroscopic extinction measurements was investigated. Since both measurements show a linear correlation with particle concentration, it was assumed that a change in the MFI can be detected by monitoring the optical density of the feed water. To test this assumption, a test bench for the simultaneous measurement of MFI and optical extinction was designed. Silica monospheres with sizes of 120 nm and 400 nm and mixtures of both fractions were added to purified tap water as model foulants. MFI filtration tests were performed with a standard 0.45 µm PES membrane and a 0.1 µm PP membrane. Extinction measurements were carried out with a newly designed flow cell inside a UV–VIS spectrometer to obtain online information on the particle properties of the feed water, such as the particle concentration and mean particle size. The measurement results show that the extinction ratio of different light wavelengths, which should remain constant for a particulate system independent of the number of particles, only persisted at higher particle concentrations. Nevertheless, a good correlation between extinction and MFI was found for different particle concentrations, with restrictions regarding the ratio of particle and pore size of the test membrane. These findings can be used for new sensory process monitoring systems if the deficiencies can be overcome.
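As background (a standard definition, not restated in the abstract): the MFI is commonly determined from constant-pressure filtration data as the slope of the linear cake-filtration region when $t/V$ is plotted against the filtered volume $V$,

$$\frac{t}{V} = a + \mathrm{MFI}\cdot V,$$

so a higher particle load steepens this slope, which is what the extinction signal is expected to track.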
The Griffith-Ley oxidation of alcohols to aldehydes and ketones is performed with either RuCl3 ⋅ (H2O)x or a highly stable, well-defined ruthenium catalyst and with cheap trimethylamine N-oxide (TMAO) as the oxygen source. The use of n-heptane as the solvent, which forms a second phase with TMAO and a part of the alcohol, allows the reactions to be performed with a minimum amount of catalyst. This leads to high local concentrations and thus to very rapid conversions. Detailed quantum chemical calculations suggest that the Griffith-Ley oxidation does not necessarily require high oxidation states of ruthenium but can also proceed via RuII/RuIV species.
Using industrial robots for machining applications in flexible manufacturing processes suffers from limited accuracy. The main reason for the deviation is the flexibility of the gearboxes. Secondary encoders (SE), as additional high-precision angle sensors, offer great potential for detecting gearbox deviations. This paper aims to use SE to reduce the effect of gearbox compliance with a feed-forward, adaptive neural control. The control network is trained with a second network for system identification. The presented algorithm is capable of online application and optimizes the robot accuracy in a nonlinear simulation.
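The two-network idea can be sketched as follows: an identification network, pre-trained on secondary-encoder data, predicts the joint deflection, and the control network learns a feed-forward correction that cancels this predicted deflection. Inputs, layer sizes and the training objective below are assumptions for illustration, not the published architecture.

```python
# Feed-forward compensation trained against an identification net (sketch).
import torch
import torch.nn as nn

ident_net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1))
ctrl_net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(ctrl_net.parameters(), lr=1e-3)  # train control only

def control_training_step(q_cmd, dq_cmd, tau):
    """Batched commanded position/velocity and estimated joint torque."""
    x = torch.stack([q_cmd, dq_cmd, tau], dim=-1)
    correction = ctrl_net(x).squeeze(-1)
    # Deflection the (pre-trained, frozen-by-optimizer) identification net
    # predicts for the corrected command; goal: correction + deflection = 0.
    x_corr = torch.stack([q_cmd + correction, dq_cmd, tau], dim=-1)
    deflection = ident_net(x_corr).squeeze(-1)
    loss = ((correction + deflection) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```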
We present an identification benchmark data set for a full robot movement with a KUKA KR300 R2500 ultra SE industrial robot. It is a robot with a nominal payload capacity of 300 kg, a weight of 1120 kg and a reach of 2500 mm. It exhibits 12 states accounting for position and velocity for each of the 6 joints. The robot encounters backlash in all joints, pose-dependent inertia, pose-dependent gravitational loads, pose-dependent hydraulic forces, pose- and velocity-dependent centripetal and Coriolis forces, as well as a nonlinear friction, which is temperature-dependent and therefore potentially time-varying. We supply the prepared dataset for black-box identification of the forward or the inverse robot dynamics. In addition to the data for black-box modelling, we supply high-frequency raw data and videos of each experiment. A baseline and figures of merit are defined to make results comparable across different identification methods.
Kinetic models of human motion rely on boundary conditions which are defined by the interaction of the body with its environment. In the simplest case, this interaction is limited to the foot contact with the ground and is given by the so-called ground reaction force (GRF). A major challenge in the reconstruction of GRF from kinematic data is the double support phase, referring to the state with multiple ground contacts. In this case, the GRF prediction is not well defined. In this work we present an approach to reconstruct and distribute the vertical GRF (vGRF) to each foot separately, using only kinematic data. We propose the biomechanically inspired force shadow method (FSM) to obtain a unique solution for any contact phase, including double support, of an arbitrary motion. We create a kinematics-based function, model an anatomical foot shape and mimic the effect of hip muscle activations. We compare our estimations with the measurements of a Zebris pressure plate and obtain correlations of 0.39 ≤ r ≤ 0.94 for double support motions and 0.83 ≤ r ≤ 0.87 for a walking motion. The presented data is based on inertial human motion capture, showing the applicability for scenarios outside the laboratory. The proposed approach has low computational complexity and allows for online vGRF estimation.
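The distribution step can be caricatured by a smooth kinematic weighting between the two feet. The sketch below is a strong simplification of the force shadow method: here the weight depends only on the vertical foot clearance, and the decay scale is an arbitrary assumption.

```python
# Smoothly distributing a total vertical GRF between two feet (sketch).
import math

def distribute_vgrf(total_vgrf, z_left, z_right, scale=0.02):
    """z_left, z_right: vertical foot clearance above ground in meters."""
    w_left = math.exp(-max(z_left, 0.0) / scale)    # near ground -> ~1
    w_right = math.exp(-max(z_right, 0.0) / scale)
    s = w_left + w_right
    if s < 1e-12:                                   # flight phase: no contact
        return 0.0, 0.0
    return total_vgrf * w_left / s, total_vgrf * w_right / s
```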
A survey on continuous, semidiscrete and discrete well-posedness and scale-space results for a class of nonlinear diffusion filters is presented. This class does not require any monotonicity assumption (comparison principle) and thus allows image restoration as well. The theoretical results include existence, uniqueness, continuous dependence on the initial image, maximum-minimum principles, average grey level invariance, smoothing Lyapunov functionals, and convergence to a constant steady state.
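A standard representative of such filter classes (assumed here for illustration; the survey's exact assumptions on the diffusivity are more general) is the regularized nonlinear diffusion equation

$$\frac{\partial u}{\partial t} = \operatorname{div}\!\left(g\!\left(|\nabla u_\sigma|^{2}\right)\nabla u\right), \qquad u(\cdot,0) = f,$$

with homogeneous Neumann boundary conditions, where $f$ is the initial image, $u_\sigma$ is a Gaussian-presmoothed version of $u$, and $g$ is a smooth, positive, nonincreasing diffusivity.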
Cloudy inhomogeneities in artificial fabrics are graded by a fast method based on a Laplacian pyramid decomposition of the fabric image. This band-pass representation takes into account the scale character of the cloudiness. A quality measure of the entire cloudiness is obtained as a weighted mean over the variances of all scales.
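A compact form of this measure is sketched below: build the Laplacian pyramid, take the grey-value variance of each band-pass level, and combine the variances with per-scale weights. The Gaussian smoothing and the equal default weights are illustrative choices, not the calibrated grading weights.

```python
# Cloudiness grading via Laplacian-pyramid band variances (sketch).
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def cloudiness(image, levels=4, weights=None):
    """image: 2-D grey-value array; returns the weighted variance measure."""
    weights = weights or [1.0 / levels] * levels
    measure, current = 0.0, np.asarray(image, dtype=float)
    for w in weights:
        low = gaussian_filter(current, sigma=1.0)
        band = current - low                   # band-pass (Laplacian) level
        measure += w * band.var()              # variance at this scale
        current = zoom(low, 0.5, order=1)      # downsample for next level
    return measure
```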