Malbrouck Castle, located in Manderen in the Moselle department, stands directly on the border with Germany and Luxembourg. Its construction was begun in 1419 at the behest of Arnold VI, Lord of Sierck, and was completed in 1434, the year in which the castle was judged fit to withstand an attack; it was then placed in the service of the Archbishopric of Trier.
Unfortunately, at the time of the knight Arnold's death his succession was not settled, which is why the castle passed from one hand to the next from the end of the 15th to the beginning of the 17th century.
Since being placed under monument protection in 1930, and after the Conseil Général de la Moselle bought it back in 1975 from its last owner, a farmer, the castle has been completely restored; it was reopened in September 1988. Like any other structure of this scale, it requires fine-grained and precise monitoring. For this reason, the Conseil Général de la Moselle wished to become a strategic partner in the CURe MODERN project.
The bridge at Rosbrück is a prestressed-concrete structure managed by the Moselle department. Built as early as 1952, it exhibits several tendons that are visibly cracked or brittle. This damage calls the load-bearing capacity of the structure into question.
It therefore appeared necessary to check the condition of the tendons inside the girders. Before the inspection with the MFL method (Magnetic Flux Leakage), their position was checked against the as-built drawings. As this check was conclusive, the MFL measurements were then carried out on one of the girders of the structure. The results are conclusive: no defects were detected. As a next step, it appears necessary to extend the inspection to damaged areas.
In business administration there are differing views on corporate objectives and their relation to stakeholder objectives. This article examines the relationship between corporate objectives and stakeholder objectives; its central aim is to specify the stakeholder objectives, since the catalogues of stakeholder objectives that exist so far offer little detail. The various stakeholder objectives are examined along their three goal dimensions and divided into formal and substantive goals. Furthermore, it becomes clear that the differing goals of the various stakeholder groups give rise to constant conflicts between corporate objectives and stakeholder objectives, so that within corporate goal systems these conflicts can, in principle, only be quasi-resolved.
Non-destructive (ND) investigation techniques, whether non-destructive evaluation or geophysical assessment methods, are commonly applied in civil engineering and transportation, in energy engineering, and in urban development. While over the past decades interest focused on internal geometric information about the investigated medium, more recent research concentrates on information related to the nature and condition of that medium, thereby moving closer to the concept of non-destructive evaluation. Current studies attempt to integrate the quantities derived from non-destructive measurements into statistical approaches linked to service-life models.
Without actors, there is no action: How interpersonal interactions help to explain routine dynamics
(2020)
In this paper, we argue that it is important to gain a better understanding of how people interact with each other in order to explain routine dynamics. We therefore propose to focus on the interpersonal interactions of actors: not only the fact that actors interact with each other, but also the manner and quality of these interactions is important for understanding routine dynamics. Drawing on social exchange theory, we propose a framework that seeks to explain routine dynamics based on the different relationships between actors. Building on this framework, we provide different process models indicating how routine performing and patterning are enacted depending on the respective relationship of the actors. Our insights contribute to research on routine dynamics by arguing (1) that actions of patterning depend on the relationship of actors; (2) that trust works as an enabler for creating new patterns of action; and (3) that distrust functions as an enhancer for interrupting and dissolving patterns of action.
Because of the networked structures and feedback relationships of dynamic systems, the underlying mathematical models usually become very complex and demand considerable mathematical understanding and skill. With suitable software, however, complex interaction networks of dynamic systems can be built interactively even without deep expertise in mathematics or computer science. As an example, we design, step by step, the model of a miniature world and draw conclusions about its population development.
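The kind of feedback loop such a miniature-world model rests on can be sketched in a few lines. The following is a minimal, hypothetical example (all parameter values are made up, not taken from the abstract): a population grows in proportion to its size, damped by the remaining carrying capacity, and is simulated step by step without any advanced mathematics.

```python
# Minimal sketch of a "miniature world" population model (hypothetical
# parameters): growth is proportional to the current population, but damped
# by the remaining carrying capacity -- a simple feedback loop simulated
# step by step.

def simulate_population(p0: float, rate: float, capacity: float, steps: int) -> list:
    """Discrete logistic growth: p[t+1] = p[t] + rate * p[t] * (1 - p[t]/capacity)."""
    population = [p0]
    for _ in range(steps):
        p = population[-1]
        population.append(p + rate * p * (1 - p / capacity))
    return population

trajectory = simulate_population(p0=100, rate=0.3, capacity=10_000, steps=50)
print(round(trajectory[-1]))  # approaches the carrying capacity
```

With a moderate growth rate the trajectory rises monotonically toward the carrying capacity; changing `rate` or `capacity` interactively is exactly the kind of experimentation the software-based approach enables.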
Acoustics provides an attractive setting for interdisciplinary, subject-linking teaching across mathematics, physics, and music. Students can, for example, work experimentally by making their own audio recordings and letting computer software generate frequency spectra from them. Conversely, they can prescribe frequency spectra and generate sounds from them. This can serve, for instance, to make the notion of overtones physically or mathematically tangible in music lessons, or to examine the frequency ratios of intervals and triads more closely in harmony theory.
The computer is a very useful tool here, because the mathematical background of this task, switching between an audio recording and its frequency representation, lies in Fourier analysis, which is extremely demanding for students. By introducing the Fourier transform as a numerical tool that need not be understood in detail, interesting mathematics can be pursued elsewhere, and the connections between acoustics and music can be explored playfully.
The following article describes an approach we have already put into practice at the Felix Klein modelling week: the students were given the task of developing a synthesizer capable of imitating various musical instruments. As aids, they received a short introduction to the properties of the Fourier transform as well as audio recordings of various instruments.
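Both directions described above, recording to spectrum and spectrum to sound, can be sketched with a standard FFT. The following is an illustrative example, not the students' actual synthesizer; the overtone amplitudes and the sample rate are made-up values.

```python
# Sketch of the two directions described above (signal parameters are
# illustrative): (1) obtain a frequency spectrum from a recording via the FFT,
# (2) synthesize an instrument-like tone from a prescribed overtone spectrum.
import numpy as np

SAMPLE_RATE = 44_100  # samples per second

def spectrum(signal):
    """Return the frequencies and magnitudes of a real-valued signal."""
    magnitudes = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / SAMPLE_RATE)
    return freqs, magnitudes

def synthesize(f0, overtone_amps, seconds=1.0):
    """Sum harmonics k*f0 with the given amplitudes -- a toy 'synthesizer'."""
    t = np.arange(int(seconds * SAMPLE_RATE)) / SAMPLE_RATE
    return sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t)
               for k, a in enumerate(overtone_amps))

# A tone with made-up overtone amplitudes (odd harmonics dominant):
tone = synthesize(f0=440.0, overtone_amps=[1.0, 0.0, 0.5, 0.0, 0.3])
freqs, mags = spectrum(tone)
print(freqs[np.argmax(mags)])  # strongest component at the fundamental, 440 Hz
```

Feeding the synthesized tone back through `spectrum` recovers the prescribed overtone structure, which is exactly the round trip between sound and frequency picture that the teaching unit exploits.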
Monitoring of patient-reported outcomes and providing therapists with progress feedback has been shown to be beneficial for treatment outcomes (e.g., by preventing therapy failures). Despite recent advances in monitoring and feedback research, little is known about why some therapists benefit from feedback more than others. Addressing this issue, the present article uses the basic science literature on belief updating to propose a theoretical model for these between-therapist differences. In doing so, we provide a novel framework that allows testable hypotheses about when and how feedback on therapy progress is likely to improve treatment outcomes. In particular, we argue that the integration of feedback and its effect on therapists’ behavior depends on the weight therapists assign to their prior beliefs regarding treatment progress relative to the weight of the feedback received. We conclude by outlining some directions for future research on the underpinnings of this model, and point to some implications for the training of therapists and provision of feedback.
Whole-body electromyostimulation (WB-EMS) is an extension of the EMS application known in physical therapy. In WB-EMS, body composition and skinfold thickness seem to play a decisive role in influencing the Ohmic resistance and therefore the maximum intensity tolerance. The therapeutic success of (WB-)EMS may thus depend on individual anatomical parameters. The aim of the study was to find out whether gender, skinfold thickness and parameters of body composition have an influence on the maximum intensity tolerance in WB-EMS. [Participants and Methods] Fifty-two participants were included in the study. Body composition (body impedance, body fat, fat mass, fat-free mass) and skinfold thicknesses were measured and set into relation to the maximum intensity tolerance. [Results] No relationship between the different anthropometric parameters and the maximum intensity tolerance was detected for either gender. Considering the individual muscle groups, no similarities were found in the results. [Conclusion] Neither body composition nor skinfold thickness seems to have any influence on the maximum intensity tolerance in WB-EMS training. For the application in physiotherapy this means that dosing the electrical voltage within the scope of a (WB-)EMS application is only possible via the subjective feedback (Borg scale).
In this paper we present the results of the project “#Datenspende”, in which more than 4,000 people donated their search results for keywords connected to the campaign during the 2017 German federal election.
Analyzing the donated result lists, we show that the room for personalization of the search results is very small. The opportunity for the effect described in Eli Pariser's filter bubble theory to occur in this data is therefore also very small, to the point of being negligible. We obtained these results by applying various similarity measures to the donated result lists. A first approach, using the number of common results as a similarity measure, showed that the space for personalization is less than two results out of ten on average when searching for persons, and at most four when searching for parties. Applying other, more specific measures shows that the space is in fact even smaller, so that the presence of filter bubbles is not evident.
Moreover, this project also serves as a proof of concept, as it enables society to permanently monitor a search engine's degree of personalization for any desired search terms. The general design can also be transferred to other intermediaries, provided appropriate APIs grant selective access to the contents relevant to such a study in order to establish a similar degree of trustworthiness.
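The first similarity measure mentioned above, counting results shared between two donated top-10 lists, is simple to state in code. The example below is a sketch with made-up URLs, not data from the project.

```python
# Sketch of the overlap-based similarity measure described above (the URLs
# are made up): for two donated top-10 result lists, count how many results
# they share; the "room for personalization" is then 10 minus that overlap.

def common_results(list_a, list_b):
    """Number of results that appear in both top lists (order ignored)."""
    return len(set(list_a) & set(list_b))

donor_a = [f"url{i}" for i in (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)]
donor_b = [f"url{i}" for i in (1, 2, 3, 4, 5, 6, 7, 8, 11, 12)]

overlap = common_results(donor_a, donor_b)
print(overlap, 10 - overlap)  # 8 shared results -> personalization space of 2
```

Averaging this overlap over all pairs of donors for a search term yields the kind of aggregate statement made in the paper (e.g., a personalization space of fewer than two results out of ten for person searches).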
Wear phenomena in worm gears are dependent on the size of the gears. Whereas larger gears are mainly affected by fatigue wear, abrasive wear is predominant in smaller gears. In this context a simulation model for abrasive wear of worm gears was developed, which is based on an energetic wear equation. This approach associates wear with solid friction energy occurring in the tooth contact. The physically-based wear simulation model includes a tooth contact analysis and tribological calculation to determine the local solid tooth friction and wear. The calculation is iterated with the modified tooth flank geometry of the worn worm wheel, in order to consider the influence of wear on the tooth contact. Experimental results on worm gears are used to determine the wear model parameter and to validate the model. A simulative study for a wide range of worm gear geometries was conducted to investigate the influence of geometry and operating conditions on abrasive wear.
With this article we would first like to give a brief review of wavelet thresholding methods in non-Gaussian and non-i.i.d. situations, respectively. Many of these applications are based on Gaussian approximations of the empirical coefficients. For regression and density estimation with independent observations, we establish joint asymptotic normality of the empirical coefficients by means of strong approximations. We then describe how asymptotic normality can be proved under mixing conditions on the observations by cumulant techniques. In the second part, we apply these non-linear adaptive shrinking schemes to spectral estimation problems for both a stationary and a non-stationary time series setup. For the latter, in Dahlhaus's model of the evolutionary spectrum of a locally stationary time series, we present two different approaches. Moreover, we show that in classes of anisotropic function spaces an appropriately chosen wavelet basis automatically adapts to possibly different degrees of regularity in the different directions. The resulting fully adaptive spectral estimator attains the rate that is optimal in the idealized Gaussian white noise model, up to a logarithmic factor.
We derive minimax rates for estimation in anisotropic smoothness classes. This rate is attained by a coordinatewise thresholded wavelet estimator based on a tensor product basis with a separate scale parameter for every dimension. It is shown that this basis is superior to its one-scale multiresolution analog if different degrees of smoothness are present in different directions. As an important application we introduce a new adaptive wavelet estimator of the time-dependent spectrum of a locally stationary time series. Using this model, which was recently developed by Dahlhaus, we show that the resulting estimator attains nearly the rate that is optimal in Gaussian white noise, simultaneously over a wide range of smoothness classes. Moreover, by our new approach we overcome the difficulty of how to choose the right amount of smoothing, i.e. how to adapt to the appropriate resolution, for reconstructing the local structure of the evolutionary spectrum in the time-frequency plane.
We consider wavelet estimation of the time-dependent (evolutionary) power spectrum of a locally stationary time series. Allowing for departures from stationarity proves useful for modelling, e.g., transient phenomena, quasi-oscillating behaviour or spectrum modulation. In our work wavelets are used to provide an adaptive local smoothing of a short-time periodogram in the time-frequency plane. For this, in contrast to classical nonparametric (linear) approaches, we use nonlinear thresholding of the empirical wavelet coefficients of the evolutionary spectrum. We show how these techniques allow both for adaptively reconstructing the local structure in the time-frequency plane and for denoising the resulting estimates. To this end, a threshold choice is derived which is motivated by minimax properties with respect to the integrated mean squared error. Our approach is based on a 2-d orthogonal wavelet transform modified by using a cardinal Lagrange interpolation function on the finest scale. As an example, we apply our procedure to a time-varying spectrum motivated by mobile radio propagation.
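The nonlinear thresholding step at the heart of these estimators can be illustrated with the standard soft-threshold rule. The threshold value below is purely illustrative; the papers derive it from minimax considerations for the integrated mean squared error.

```python
# Illustration of nonlinear (soft) thresholding of empirical wavelet
# coefficients: coefficients below the threshold are set to zero, larger
# ones are shrunk toward zero. The threshold value here is illustrative,
# not the minimax-motivated choice derived in the paper.
import numpy as np

def soft_threshold(coeffs, threshold):
    """sign(c) * max(|c| - threshold, 0), applied coefficient-wise."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)

noisy = np.array([5.0, -0.3, 0.1, -4.2, 0.25, 2.0])
denoised = soft_threshold(noisy, threshold=0.5)
print(denoised)  # small coefficients vanish, large ones shrink by 0.5
```

Applied to the empirical wavelet coefficients of a short-time periodogram, this rule kills the noise-dominated small coefficients while retaining the large ones that carry the local time-frequency structure.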
River ecosystems are being threatened by rising temperatures, aridity, and salinity due to climate change and increased water abstractions. These threats also put human well-being at risk, as people and rivers are closely connected, particularly in water-scarce regions. We aimed to investigate the relationship between human well-being and biological and physico-chemical river water quality using the arid Draa River basin as a case study. Physico-chemical water measurements, biological monitoring of aquatic macroinvertebrates, and household surveys were used to assess the state of the river water, ecosystem, and human well-being, as well as the associations between them. Salinity levels exceeded maximum permissible values for drinking water in 35% and for irrigation water in 12% of the sites. Salinity and low flow were associated with low biological quality. Human satisfaction with water quantity and quality, agriculture, the natural environment, and overall life satisfaction were low, particularly in the Middle Draa, where 89% of respondents reported emotional distress due to water salinity and scarcity. Drinking and irrigation water quality was generally rated lower in areas characterized by higher levels of water salinity and scarcity. The study found positive associations between the river water quality and biological quality indices, but no significant association between these factors and human satisfaction. These findings suggest that the relationship between human satisfaction and the biological and physico-chemical river water quality is complex and that a more comprehensive approach to human well-being is likely needed to establish relationships.
Water availability shapes edaphic and lithic cyanobacterial communities in the Atacama Desert
(2019)
In the Atacama Desert, cyanobacteria grow on various substrates such as soils (edaphic) and quartz or granitoid stones (lithic). Both edaphic and lithic cyanobacterial communities have been described but no comparison between both communities of the same locality has yet been undertaken. In the present study, we compared both cyanobacterial communities along a precipitation gradient ranging from the arid National Park Pan de Azúcar (PA), which resembles a large fog oasis in the Atacama Desert extending to the semiarid Santa Gracia Natural Reserve (SG) further south, as well as along a precipitation gradient within PA. Various microscopic techniques, as well as culturing and partial 16S rRNA sequencing, were applied to identify 21 cyanobacterial species; the diversity was found to decline as precipitation levels decreased. Additionally, under increasing xeric stress, lithic community species composition showed higher divergence from the surrounding edaphic community, resulting in indigenous hypolithic and chasmoendolithic cyanobacterial communities. We conclude that rain and fog water, respectively, cause contrasting trends regarding cyanobacterial species richness in the edaphic and lithic microhabitats.
A simple method of calculating the Wannier-Stark resonances in 2D lattices is suggested. Using this method we calculate the complex Wannier-Stark spectrum for a non-separable 2D potential realized in optical lattices and analyze its general structure. The dependence of the lifetime of Wannier-Stark states on the direction of the static field (relative to the crystallographic axis of the lattice) is briefly discussed.
The paper studies the effect of a weak periodic driving on metastable Wannier-Stark states. The decay rate of the ground Wannier-Stark states as a continuous function of the driving frequency is calculated numerically. The theoretical results are compared with experimental data of Wilkinson et al. [Phys. Rev. Lett. 76, 4512 (1996)] obtained for cold sodium atoms in an accelerated optical lattice.
Wall energy and wall thickness of exchange-coupled rare-earth transition-metal triple layer stacks
(1999)
The room-temperature wall energy σ_w = 4.0×10^-3 J/m^2 of an exchange-coupled Tb19.6Fe74.7Co5.7/Dy28.5Fe43.2Co28.3 double layer stack can be reduced by introducing a soft magnetic intermediate layer in between both layers, exhibiting a significantly smaller anisotropy compared to TbFeCo and DyFeCo. σ_w will decrease linearly with increasing intermediate layer thickness, d_IL, until the wall is completely located within the intermediate layer for d_IL ≥ d_w, where d_w denotes the wall thickness. Thus, d_w can be obtained from the plot of σ_w versus d_IL. We determined σ_w and d_w on GdFeCo intermediate layers with different anisotropy behavior (perpendicular and in-plane easy axis) and compared the results with data obtained from Brillouin light-scattering measurements, where the exchange stiffness, A, and the uniaxial anisotropy, K_u, could be determined. With the knowledge of A and K_u, wall energy and thickness were calculated and showed excellent agreement with the magnetic measurements. A ten times smaller perpendicular anisotropy of Gd28.1Fe71.9 in comparison to TbFeCo and DyFeCo resulted in a much smaller σ_w = 1.1×10^-3 J/m^2 and d_w = 24 nm at 300 K. A Gd34.1Fe61.4Co4.5 layer with in-plane anisotropy at room temperature showed a further reduced σ_w = 0.3×10^-3 J/m^2 and d_w = 17 nm. The smaller wall energy was a result of a different wall structure compared to perpendicular layers.
The coordination of multiple external representations is important for learning, yet a difficult task for students, requiring instructional support. The subject in this study covers a typical relation in physics between abstract mathematical equations (definitions of divergence and curl) and a visual representation (vector field plot). To support the connection across both representations, two instructions with written explanations, equations, and visual representations (differing only in the presence of visual cues) were designed and their impact on students’ performance was tested. We captured students’ eye movements while they processed the written instruction and solved subsequent coordination tasks. The results show that students instructed with visual cues (VC students) performed better, responded with higher confidence, experienced less mental effort, and rated the instructional quality better than students instructed without cues. Advanced eye-tracking data analysis methods reveal that cognitive integration processes appear in both groups at the same point in time but they are significantly more pronounced for VC students, reflecting a greater attempt to construct a coherent mental representation during the learning process. Furthermore, visual cues increase the fixation count and total fixation duration on relevant information. During problem solving, the saccadic eye movement pattern of VC students is similar to experts in this domain. The outcomes imply that visual cues can be beneficial in coordination tasks, even for students with high domain knowledge. The study strongly confirms an important multimedia design principle in instruction, that is, that highlighting conceptually relevant information shifts attention to relevant information and thus promotes learning and problem solving. Even more, visual cues can positively influence students’ perception of course materials.
Virtual Robot Programming for Deformable Linear Objects: System Concept and Prototype Implementation
(2002)
In this paper we present a method and system for robot programming using virtual reality techniques. The proposed method allows intuitive teaching of a manipulation task with haptic feedback in a graphical simulation system. Based on earlier work, our system allows even an operator who lacks specialized knowledge of robotics to automatically generate a robust sensor-based robot program that is ready to execute on different robots, merely by demonstrating the task in virtual reality.
VIPP proteins aid thylakoid biogenesis and membrane maintenance in cyanobacteria, algae, and plants. Some members of the Chlorophyceae contain two VIPP paralogs termed VIPP1 and VIPP2, which originate from an early gene duplication event during the evolution of green algae. VIPP2 is barely expressed under nonstress conditions but accumulates in cells exposed to high light intensities or H2O2, during recovery from heat stress, and in mutants with defective integration (alb3.1) or translocation (secA) of thylakoid membrane proteins. Recombinant VIPP2 forms rod-like structures in vitro and shows a strong affinity for phosphatidylinositol phosphate. Under stress conditions, >70% of VIPP2 is present in membrane fractions and localizes to chloroplast membranes. A vipp2 knock-out mutant displays no growth phenotypes and no defects in the biogenesis or repair of photosystem II. However, after exposure to high light intensities, the vipp2 mutant accumulates less HSP22E/F and more LHCSR3 protein and transcript. This suggests that VIPP2 modulates a retrograde signal for the expression of nuclear genes HSP22E/F and LHCSR3. Immunoprecipitation of VIPP2 from solubilized cells and membrane-enriched fractions revealed major interactions with VIPP1 and minor interactions with HSP22E/F. Our data support a distinct role of VIPP2 in sensing and coping with chloroplast membrane stress.
In cyanobacteria and plants, VIPP1 plays crucial roles in the biogenesis and repair of thylakoid membrane protein complexes and in coping with chloroplast membrane stress. In chloroplasts, VIPP1 localizes in distinct patterns at or close to envelope and thylakoid membranes. In vitro, VIPP1 forms higher-order oligomers of >1 MDa that organize into rings and rods. However, it remains unknown how VIPP1 oligomerization is related to function. Using time-resolved fluorescence anisotropy and sucrose density gradient centrifugation, we show here that Chlamydomonas reinhardtii VIPP1 binds strongly to liposomal membranes containing phosphatidylinositol-4-phosphate (PI4P). Cryo-electron tomography reveals that VIPP1 oligomerizes into rods that can engulf liposomal membranes containing PI4P. These findings place VIPP1 into a group of membrane-shaping proteins including epsin and BAR domain proteins. Moreover, they point to a potential role of phosphatidylinositols in directing the shaping of chloroplast membranes.
Within this work, we report the results of nuclear inelastic scattering experiments on the low-spin phase of the iron(II) mononuclear SCO complex Fe[HBpz3]2 and density functional theory based calculations performed on a model molecule of the complex. We show that the calculated partial density of vibrational states, based on the structure of a single iron(II) center linked by three pyrazole rings to borate, is in good accordance with the experimentally obtained 57Fe-pDOS, and we assign the molecular vibrations to the prominent optical phonons.
Defects change the phonon spectrum and also the magnetic properties of bcc-Fe. Using molecular dynamics simulation, the influence of defects – vacancies, dislocations, and grain boundaries – on the phonon spectra and magnetic properties of bcc-Fe is determined. It is found that the main influence of defects consists in a decrease of the amplitude of the longitudinal peak, PL, at around 37 meV. While the change in phonon spectra shows only little dependence on the defect type, the quantitative decrease of PL is proportional to the defect concentration. Local magnetic moments can be determined from the local atomic volumes. Again, the changes in the magnetic moments of a defective crystal are linear in the defect concentrations. In addition, the change of the phonon density of states and the magnetic moments under homogeneous uniaxial strain are investigated.
In this paper we describe a method for the specification and operationalization of conceptual models of cooperative knowledge-based workflows. It extends known approaches by the notion of an agent and by alternative task decompositions. The paper focuses on the techniques underlying our distributed interpreter, in particular on methods that handle dependencies between tasks and efficiently support goal-directed backtracking.
3D joint kinematics can provide important information about the quality of movements. Optical motion capture (OMC) systems are considered the gold standard in motion analysis. However, in recent years, inertial measurement units (IMUs) have become a promising alternative. The aim of this study was to validate IMU-based 3D joint kinematics of the lower extremities during different movements. Twenty-eight healthy subjects participated in this study. They performed bilateral squats (SQ), single-leg squats (SLS) and countermovement jumps (CMJ). The IMU kinematics was calculated using a recently described sensor-fusion algorithm. A marker-based OMC system served as a reference. Only the technical error based on algorithm performance was considered, incorporating OMC data for the calibration, initialization, and a biomechanical model. To evaluate the validity of IMU-based 3D joint kinematics, the root mean squared error (RMSE), range of motion error (ROME), Bland-Altman (BA) analysis as well as the coefficient of multiple correlation (CMC) were calculated. The evaluation was twofold: first, the IMU data was compared to OMC data based on marker clusters; second, based on skin markers attached to anatomical landmarks. The first evaluation revealed means for RMSE and ROME below 3° for all joints and tasks. The most dynamic task, the CMJ, revealed error measures approximately 1° higher than the remaining tasks. Mean CMC values ranged from 0.77 to 1 over all joint angles and all tasks. The second evaluation showed an increase in the RMSE of 2.28°–2.58° on average for all joints and tasks. Hip flexion revealed the highest average RMSE in all tasks (4.87°–8.27°). The present study revealed a valid IMU-based approach for the measurement of 3D joint kinematics in functional movements of varying demands. The high validity of the results encourages further development and the extension of the present approach into clinical settings.
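Two of the error measures used above, RMSE and range-of-motion error, are easy to compute for a single joint-angle trace. The sketch below uses made-up angle data, not values from the study.

```python
# Sketch of two of the error measures used above, for one joint-angle trace
# (the angle data is made up): RMSE compares the IMU and OMC curves sample
# by sample, while the range-of-motion error (ROME) compares only the
# min-max excursion of each curve.
import math

def rmse(imu, omc):
    """Root mean squared error between two equally sampled angle traces."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(imu, omc)) / len(imu))

def rome(imu, omc):
    """Absolute difference between the two ranges of motion."""
    return abs((max(imu) - min(imu)) - (max(omc) - min(omc)))

omc_angles = [0.0, 30.0, 60.0, 90.0, 60.0, 30.0, 0.0]   # reference system
imu_angles = [1.0, 32.0, 58.0, 88.0, 61.0, 31.0, -1.0]  # sensor estimate

print(round(rmse(imu_angles, omc_angles), 2), rome(imu_angles, omc_angles))
```

RMSE is sensitive to offsets and phase errors over the whole movement, whereas ROME only reflects how well the overall excursion is captured, which is why the study reports both.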
In this paper we present an interpreter that supports the validation of conceptual models already in early development phases. We compare hypermedia and expert-system approaches to knowledge processing and explain how an integrated approach simplifies the construction of expert systems. The knowledge engineering tool we developed enables a "smooth" transition from initial protocols via a semi-formal specification, in the form of a typed hypertext, to an operational expert system. An interpreter uses the intermediate representation created in this process directly for the interactive solution of problems, with individual tasks being distributed to their processors over a local area network. That is, the specification of the expert system is used directly for solving real problems. If operationalizations (i.e., programs) exist for individual subtasks, these are executed by the computer.
Mobile devices (smartphones or tablets) as experimental tools (METs) offer inspiring possibilities for science education, but until now, there has been little research studying this approach. Previous research indicated that METs have positive effects on students’ interest and curiosity. The present investigation focuses on potential cognitive effects of METs, using video analyses on tablets to investigate pendulum movements and an instruction that had been used before to study the effects of smartphones’ acceleration sensors. In a quasi-experimental repeated-measurement design, a treatment group uses METs (TG, N_TG = 23) and a control group works with traditional experimental tools (CG, N_CG = 28) to study the effects on interest, curiosity, and learning achievement. Moreover, various control variables were taken into account. We suppose that pupils in the TG have a lower extraneous cognitive load and higher learning achievement than those in the CG working with traditional experimental tools. ANCOVAs showed significantly higher levels of learning achievement in the TG (medium effect size). No differences were found for interest, curiosity, or cognitive load. This might be due to a smaller material context provided by tablets in comparison to smartphones, as more pupils possess and are familiar with smartphones than with tablets. Another reason for the unchanged interest might be the composition of the sample: while previous research showed that especially originally less-interested students profited most from using METs, the current sample contained only specialized courses, i.e., students with a high original interest, for whom the effect of METs on their interest is presumably smaller.
We present new results on standard basis computations of a 0-dimensional ideal I in a power series ring or in the localization of a polynomial ring over a computable field K. We prove the semicontinuity of the “highest corner” in a family of ideals, parametrized by the spectrum of a Noetherian domain A. This semicontinuity is used to design a new modular algorithm for computing a standard basis of I if K is the quotient field of A. It uses the computation over the residue field of a “good” prime ideal of A to truncate high order terms in the subsequent computation over K. We prove that almost all prime ideals are good, so a random choice is very likely to be good, and whether it is good is detected a posteriori by the algorithm. The algorithm yields a significant speed advantage over the non-modular version and works for arbitrary Noetherian domains. The most important special cases are perhaps A = ℤ and A = k[t], k any field and t a set of parameters. Besides its generality, the method differs substantially from previously known modular algorithms for A = ℤ, since it does not manipulate the coefficients. It is also usually faster and can be combined with other modular methods for computations in local rings. The algorithm is implemented in the computer algebra system SINGULAR and we present several examples illustrating its power.
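The good-prime idea behind such modular approaches can be illustrated with a small computation. The following is a toy sketch, not the paper's algorithm: it uses SymPy's global Gröbner bases (SymPy does not support the local orderings SINGULAR uses for standard bases), and the ideal and the prime p = 7 are illustrative assumptions.

```python
# Toy sketch of the "good prime" idea behind modular basis computations:
# compute the basis cheaply over GF(p) and compare its leading monomials
# with the basis over Q. For a good prime they agree, so the mod-p run
# predicts the shape of the characteristic-0 result.
# The ideal and the prime p = 7 are illustrative assumptions.
from sympy import groebner, symbols

x, y = symbols("x y")
F = [x**2 + y**2 - 1, x*y - 2]

G_rat = groebner(F, x, y, order="lex")             # reduced basis over Q
G_mod = groebner(F, x, y, order="lex", modulus=7)  # reduced basis over GF(7)

lead_rat = [p.LM() for p in G_rat.polys]
lead_mod = [p.LM() for p in G_mod.polys]
print(lead_rat == lead_mod)  # True when 7 is a good prime for this ideal
```

A bad prime (one dividing a leading coefficient that arises during the computation) would change the leading monomials; as in the paper, goodness is detected a posteriori by comparing shapes.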
Typical instances, that is, instances that are representative for a particular situation or concept, play an important role in human knowledge representation and reasoning, in particular in analogical reasoning. This well-known observation has been a motivation for investigations in cognitive psychology which provide a basis for our characterization of typical instances within concept structures and for a new inference rule for justified analogical reasoning with typical instances. In a nutshell, this paper suggests augmenting the propositional knowledge representation system by a non-propositional part consisting of concept structures which may have directly represented instances as elements. The traditional reasoning system is extended by a rule for justified analogical inference with typical instances using information extracted from both knowledge representation subsystems.
For the first time, synchrotron-based nuclear inelastic scattering (NIS) using the Mössbauer isotope 161Dy was employed to investigate the vibronic properties of a DyIII-based single-molecule magnet, [Dy(Cy3PO)2(H2O)5]Br3⋅2 (Cy3PO)⋅2 H2O⋅2 EtOH. The experimental partial phonon density of states, which contains all vibrations involving a displacement of the DyIII ion, was reproduced by simulations based on density functional theory (DFT), enabling the assignment of all intramolecular vibrational modes of the molecule. This study shows that 161Dy NIS has great potential as an experimental method to help clarify the role of phonons in single-molecule magnets.
Nanostructured tantalum (Ta)-based dental implants have recently attracted significant attention thanks to their superior biocompatibility and bioactivity compared to their titanium-based counterparts. While the biological and chemical aspects of Ta implants have been widely studied, their mechanical features have received far less attention. Additionally, the mechanical behavior of these implants and, more importantly, their plastic deformation mechanisms are still not fully understood. Accordingly, in the current research, molecular dynamics simulation, a powerful tool for probing atomic-scale phenomena, is utilized to explore the microstructural evolution of pure polycrystalline Ta samples under tensile loading conditions. Various samples with an average grain size of 2–10 nm are systematically examined using various crystal structure analysis tools to determine the underlying deformation mechanisms. The results reveal that for the samples with an average grain size larger than 8 nm, twinning and dislocation slip are the main sources of any plasticity induced within the sample. For finer-grained samples, the activity of grain boundaries—including grain elongation, rotation, migration, and sliding—is the most important mechanism governing the plastic deformation. Finally, the temperature-dependent Hall–Petch breakdown is thoroughly examined for the nanocrystalline samples via identification of the grain boundary dynamics.
Iterative methods to solve linear equation systems are widely used in computational physics, engineering and many areas of applied mathematics. In recent works, performance improvements have been achieved based on modifications of several classes of iterative algorithms by various research communities driven by different perspectives and applications. This note presents a brief analysis of conventional and unifying perspectives by highlighting relations between several well-known iterative methods to solve linear equation systems and explicit Euler approximations of the associated parabolic regularized equations. Special cases of equivalence and general relations between different iterative methods such as Jacobi iterations, Richardson iterations, Steepest Descent and Quasi-Newton methods are shown and discussed. The results and discussion extend the conventional perspectives on these iterative methods and give way to intuitive physical interpretations and analogies. The accessibly presented relations give complementary educational insights and aim to inspire transdisciplinary developments of new iterative methods, solvers and preconditioners.
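The central relation highlighted in this note can be sketched numerically: a Richardson step x_{k+1} = x_k + ω(b − Ax_k) is exactly one explicit Euler step of size ω on the parabolic regularization dx/dt = b − Ax, and a Jacobi step is the same update scaled by D⁻¹. The matrix, right-hand side, and step size below are illustrative assumptions, not taken from the paper.

```python
# Numeric sketch: Richardson iteration == explicit Euler on dx/dt = b - A x,
# and Jacobi iteration is the same step preconditioned by D^{-1}.
# A, b, and the step size w are illustrative assumptions.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric, diagonally dominant
b = np.array([1.0, 2.0])
w = 0.2                                  # relaxation weight / Euler step size

def richardson_step(x):
    return x + w * (b - A @ x)           # classical fixed-point form

def euler_step(x):
    return x + w * (b - A @ x)           # explicit Euler: identical update

def jacobi_step(x):
    D_inv = 1.0 / np.diag(A)             # inverse of the diagonal of A
    return x + D_inv * (b - A @ x)       # Euler step scaled per component

x = np.zeros(2)
for _ in range(200):
    x = jacobi_step(x)

print(np.allclose(x, np.linalg.solve(A, b)))  # iteration converges to A^-1 b
```

That `richardson_step` and `euler_step` have literally the same body is the point: the iterative solver and the time integrator are one and the same map, which is what opens the door to the physical interpretations the note discusses.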
Unification in an Extensional Lambda Calculus with Ordered Function Sorts and Constant Overloading
(1999)
We develop an order-sorted higher-order calculus suitable for automatic theorem proving applications by extending the extensional simply typed lambda calculus with a higher-order ordered sort concept and constant overloading. Huet's well-known techniques for unifying simply typed lambda terms are generalized to arrive at a complete transformation-based unification algorithm for this sorted calculus. Consideration of an order-sorted logic with functional base sorts and arbitrary term declarations was originally proposed by the second author in a 1991 paper; we give here a corrected calculus which supports constant rather than arbitrary term declarations, as well as a corrected unification algorithm, and prove in this setting results corresponding to those claimed there.
The introduction of sorts to first-order automated deduction has brought greater conciseness of representation and a considerable gain in efficiency by reducing search spaces. This suggests that sort information can be employed in higher-order theorem proving with similar results. This paper develops a sorted λ-calculus suitable for automatic theorem proving applications. It extends the simply typed λ-calculus by a higher-order sort concept that includes term declarations and functional base sorts. The term declaration mechanism studied here is powerful enough to subsume subsorting as a derived notion and therefore gives a justification for the special form of subsort inference. We present a set of transformations for sorted (pre-)unification and prove the nondeterministic completeness of the algorithm induced by these transformations.
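To make the search-space reduction concrete at the (much simpler) first-order level, here is a toy many-sorted unification sketch. The sort hierarchy, the term encoding, and the pruning rule are illustrative assumptions and do not reflect the higher-order calculus of the paper; the point is only that a sort check rejects candidate bindings early.

```python
# Toy first-order unification with a subsort check (illustrative only):
# a variable of sort s may only be bound to a term of a subsort of s,
# which prunes branches that unsorted unification would have to explore.
SUBSORT = {("nat", "int"), ("int", "real")}  # toy hierarchy: nat <= int <= real

def leq(s, t):
    """Reflexive-transitive subsort test over the toy hierarchy."""
    if s == t:
        return True
    return any(a == s and leq(b, t) for (a, b) in SUBSORT)

# Terms: ("var", name, sort) or ("fun", name, sort, [args])
def resolve(t, subst):
    while t[0] == "var" and t[1] in subst:
        t = subst[t[1]]
    return t

def occurs(v, t, subst):
    t = resolve(t, subst)
    if t[0] == "var":
        return t[1] == v[1]
    return any(occurs(v, arg, subst) for arg in t[3])

def unify(t1, t2):
    subst, stack = {}, [(t1, t2)]
    while stack:
        a, b = stack.pop()
        a, b = resolve(a, subst), resolve(b, subst)
        if a == b:
            continue
        if a[0] == "var":
            # Sort pruning: bind only if b's sort is a subsort of a's sort.
            if not leq(b[2], a[2]) or occurs(a, b, subst):
                return None
            subst[a[1]] = b
        elif b[0] == "var":
            stack.append((b, a))
        elif a[1] == b[1] and len(a[3]) == len(b[3]):
            stack.extend(zip(a[3], b[3]))
        else:
            return None
    return subst
```

For example, a variable of sort nat unifies with the constant 3 of sort nat but not with 3.5 of sort real, while a variable of sort real accepts both; in an unsorted setting all three pairs would unify and the ill-sorted solutions would have to be filtered out later.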
Arctic, Antarctic and alpine biological soil crusts (BSCs) are formed by adhesion of soil particles to exopolysaccharides (EPSs) excreted by cyanobacterial and green algal communities, the pioneers and main primary producers in these habitats. These BSCs provide and influence many ecosystem services such as soil erodibility, soil formation and nitrogen (N) and carbon (C) cycles. In cold environments degradation rates are low and BSCs continuously increase soil organic C; therefore, these soils are considered to be CO2 sinks. This work provides a novel, nondestructive and highly comparable method to investigate intact BSCs with a focus on cyanobacteria and green algae and their contribution to soil organic C. A new terminology arose, based on confocal laser scanning microscopy (CLSM) 2-D biomaps, dividing BSCs into a photosynthetic active layer (PAL) made of active photoautotrophic organisms and a photosynthetic inactive layer (PIL) harbouring remnants of cyanobacteria and green algae glued together by their remaining EPSs. By the application of CLSM image analysis (CLSM–IA) to 3-D biomaps, C coming from photosynthetically active organisms could be visualized as depth profiles with C peaks at 0.5 to 2 mm depth. Additionally, the CO2 sink character of these cold soil habitats dominated by BSCs could be highlighted, demonstrating that the first cubic centimetre of soil consists of between 7 and 17% total organic carbon, identified by loss on ignition.
In many applications, visual analytics (VA) has developed into a standard tool to ease data access and knowledge generation. VA describes a holistic cycle transforming data into hypotheses and visualizations to generate insights that enhance the data. Unfortunately, many data sources used in the VA process are affected by uncertainty. In addition, the VA cycle itself can introduce uncertainty into the knowledge generation process but does not provide a mechanism to handle these sources of uncertainty. In this manuscript, we aim to provide an extended VA cycle that is capable of handling uncertainty by quantification, propagation, and visualization, defined as uncertainty-aware visual analytics (UAVA). Here, a recap of uncertainty definition and description is used as a starting point to insert novel components into the visual analytics cycle. These components assist in capturing uncertainty throughout the VA cycle. Further, different data types, hypothesis generation approaches, and uncertainty-aware visualization approaches are discussed that fit into the defined UAVA cycle. In addition, application scenarios that can be handled by such a cycle, examples, and a list of open challenges in the area of UAVA are provided.
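The quantification and propagation stages of such a cycle can be sketched in a few lines. This is an illustrative Monte Carlo example under assumed data and an assumed derived quantity, not a component of the proposed UAVA cycle itself.

```python
# Illustrative sketch of uncertainty quantification and propagation:
# input uncertainty is modeled as a distribution, pushed through a derived
# quantity by Monte Carlo sampling, and summarized as an interval that a
# downstream visualization could show instead of a single number.
# The measurements and the derived quantity f(x) = x**2 are assumptions.
import numpy as np

rng = np.random.default_rng(0)
measurements = np.array([9.8, 10.1, 9.9, 10.3, 9.7])

# Quantification: model the input as mean + Gaussian noise.
mu, sigma = measurements.mean(), measurements.std(ddof=1)

# Propagation: push samples through the derived quantity.
samples = rng.normal(mu, sigma, size=100_000)
derived = samples**2

# Visualization input: central estimate with a 95% interval.
lo, hi = np.percentile(derived, [2.5, 97.5])
print(lo < mu**2 < hi)
```

The same pattern generalizes: each component of the pipeline consumes and emits distributions (or summaries of them) rather than point values, so uncertainty is carried through to the final view instead of being silently dropped.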
Algorithms are increasingly used in different domains of public policy. They help humans profile the unemployed, support administrations in detecting tax fraud, and produce recidivism risk scores that judges or criminal justice managers take into account when they make bail decisions. In recent years, critics have increasingly pointed to the ethical challenges of these tools and emphasized problems of discrimination, opaqueness or accountability, and computer scientists have proposed technical solutions to these issues. In contrast to these important debates, the literature on how these tools are implemented in the actual everyday decision-making process has remained cursory. This is problematic because the consequences of automated decision-making (ADM) systems depend at least as much on their implementation in an actual decision-making context as on their technical features. In this study, we show how the introduction of risk assessment tools in the criminal justice sector at the local level in the USA has deeply transformed the decision-making process. We argue that this is mainly due to the fact that the evidence generated by the algorithm introduces a notion of statistical prediction to a situation that was previously dominated by fundamental uncertainty about the outcome. While this expectation is supported by the case study evidence, the possibility of shifting blame to the algorithm seems much less important to the criminal justice actors.
So far, fewer than 3% of the environmental reports of German companies are published on the Internet, though the trend is rising. Here, the environmental reports available on the Internet are evaluated and reasons for using the Internet for environmental reporting are presented. The contribution is organized in five sections: As a thematic introduction, corporate environmental reports are characterized by means of a morphology (Section 2). This is followed by the ICT-specific challenges facing environmentally reporting companies, as starting points for environmental reports on the Internet (Section 3). This lays the basis for a systematization of the Internet-based support potentials for environmental reporting (Section 4). The systematization is followed by a detailed survey of the environmental reports of German companies on the Internet in five respects (Section 5): The underlying survey methodology is explained (Section 5.1). The empirical studies on environmental reports on the Internet that were additionally consulted are evaluated (Section 5.2). The results concerning the content and presentation of environmental reports on the Internet are described in more detail (Section 5.3) and interpreted through explanatory approaches (Section 5.4). Finally, based on the conceptually derivable support potentials on the one hand and the empirical studies on the other, central trends in the future development of environmental reports on the Internet are presented (Section 5.5).
Here, the environmental reports of companies available on the Internet are evaluated, practical experiences of companies reporting environmentally on the Internet are documented, general reasons for using the Internet for environmental reporting are presented, a classification of environmental reports on the Internet is drafted, and development trends in environmental reporting are outlined. The study is organized in six chapters: As a thematic introduction, environmental reports are treated as the core of corporate environmental communication (Chapter 2). This is followed by the specific information and communication technology (ICT) challenges facing environmentally reporting companies. These are regarded as starting points for environmental reporting on the Internet and for exploiting the technical support potentials of the Internet (Chapter 3). This lays the basis for an overview of the various technical support potentials of using Internet technologies and services for environmental reporting (Chapter 4). The overview is followed by a detailed survey of companies' environmental reports on the Internet for Germany (Chapter 5). On the basis of the survey, the case for using the Internet for environmental reporting is finally made (Chapter 6).
Environmental reporting plays an increasingly important role both for the economic success of companies and for ecologically sustainable development. Three reasons support this: First, through voluntary and informative environmental reporting, companies can uncover ecological weak points, reduce environmental burdens and gain competitive advantages in the market. Second, legal and moral obligations for environmental reporting are increasing. Third, the technical possibilities for environmental reporting have grown enormously through the use of the Internet. All three tendencies are good reasons for using the Internet for environmental reporting. However, fewer than 3% of the environmental reports of small and medium-sized enterprises (SMEs) have so far been published on the Internet, though the trend is rising. Up to now, mainly international and globally active large corporations use the Internet for environmental reporting; SMEs have rarely presented environmental reports on the Internet. Here, the environmental reports of SMEs available on the Internet are evaluated, general reasons for using the Internet for environmental reporting are presented, and the possibilities for SMEs of Internet-based environmental reporting are illustrated by the example of environmental reports. The study is organized in six chapters: As a thematic introduction, environmental reports are treated as the core of corporate environmental communication (Chapter 2). This is followed by the information and communication technology (ICT) challenges facing environmentally reporting companies. They are regarded as starting points for environmental reports on the Internet and for exploiting the technical support potentials of the Internet (Chapter 3). This lays the basis for an overview of the various technical support potentials of using Internet technologies and services for environmental reporting (Chapter 4). The overview is followed by a detailed survey of SMEs' environmental reports on the Internet in Germany (Chapter 5). On the basis of the empirical survey, the possibilities of Internet-based environmental reporting for SMEs are then derived (Chapter 6).
Deactivation processes of photoexcited (λex = 580 nm) phycocyanobilin (PCB) in methanol were investigated by means of UV/Vis and mid-IR femtosecond (fs) transient absorption (TA) as well as static fluorescence spectroscopy, supported by density-functional-theory calculations of three relevant ground state conformers, PCBA, PCBB and PCBC, their relative electronic state energies and normal mode vibrational analysis. UV/Vis fs-TA reveals time constants of 2.0, 18 and 67 ps, describing the decay of PCBB*, the decay of PCBA* and the thermal re-equilibration of PCBA, PCBB and PCBC, respectively, in line with the model by Dietzek et al. (Chem Phys Lett 515:163, 2011) and its predecessors. This model is substantiated and significantly extended, first, via mid-IR fs-TA, i.e. the identification of molecular structures and their dynamics, with time constants of 2.6, 21 and 40 ps, respectively. Second, transient IR continuum absorption (CA) is observed in the region above 1755 cm−1 (CA1) and between 1550 and 1450 cm−1 (CA2), indicative of the IR absorption of highly polarizable protons in hydrogen bonding networks (X–H…Y). This makes it possible to characterize chromophore protonation/deprotonation processes, associated with the electronic and structural dynamics, on a molecular level. The PCB photocycle is suggested to be closed via a long-lived (> 1 ns), PCBC-like (i.e. deprotonated), fluorescent species.
The Griffith-Ley oxidation of alcohols to aldehydes and ketones is performed with either RuCl3 ⋅ (H2O)x or a highly stable, well-defined ruthenium catalyst and with cheap trimethylamine N-oxide (TMAO) as the oxygen source. The use of n-heptane as the solvent, which forms a second phase with TMAO and part of the alcohol, allows the reactions to be performed with a minimum amount of catalyst. This results in high local concentrations and thus in very rapid conversions. Detailed quantum chemical calculations suggest that the Griffith-Ley oxidation does not necessarily require high oxidation states of ruthenium but can also proceed with RuII/RuIV species.
The following two norms for holomorphic functions \(F\), defined on the right complex half-plane \(\{z \in C:\Re(z)\gt 0\}\) with values in a Banach space \(X\), are equivalent:
\[\begin{eqnarray*} \lVert F \rVert _{H_p(C_+)} &=& \sup_{a\gt0}\left( \int_{-\infty}^\infty \lVert F(a+ib) \rVert ^p \ db \right)^{1/p} \mbox{, and} \\ \lVert F \rVert_{H_p(\Sigma_{\pi/2})} &=& \sup_{\lvert \theta \rvert \lt \pi/2}\left( \int_0^\infty \left\lVert F(re^{i \theta}) \right\rVert ^p\ dr \right)^{1/p}.\end{eqnarray*}\] As a consequence, we derive a description of boundary values of sectorial holomorphic functions, and a theorem of Paley–Wiener type for sectorial holomorphic functions.