Malbrouck Castle, located in Manderen in the Moselle department, lies directly on the border with Germany and Luxembourg. Its construction was begun in 1419 at the behest of Arnold VI, Lord of Sierck, was completed in 1434, the year in which the castle was judged capable of withstanding an attack, and the castle was then placed in the service of the Archbishopric of Trier.
Unfortunately, at the time of Knight Arnold's death his succession was not secured, which is why the castle passed from one hand to the next from the end of the 15th until the beginning of the 17th century.
Since being listed as a historic monument in 1930 and being bought back in 1975 by the Conseil Général de la Moselle from its last owner, a farmer, the castle has been completely restored and was reopened in September 1988. Like any other structure of this scale, it requires fine-grained and precise monitoring. For this reason, the Conseil Général de la Moselle wished to become a strategic partner in the CURe MODERN project.
The bridge at Rosbrück is a prestressed concrete structure managed by the Moselle department. Having been built as early as 1952, it exhibits several prestressing tendons that are visibly cracked or broken. This damage calls the load-bearing capacity of the structure into question.
It therefore appeared necessary to check the condition of the tendons inside the girders. Prior to the inspection using the MFL method (Magnetic Flux Leakage), their position was verified against the as-built drawings. As this verification was conclusive, the MFL measurements were then carried out on one of the girders of the structure. The results are conclusive and no defects were detected. As a next step, it appears necessary to extend the inspection (auscultation) to damaged areas.
In business administration there are differing views on corporate objectives and their relation to stakeholder objectives. This paper examines the relationship between corporate objectives and stakeholder objectives; its central aim is to specify the stakeholder objectives, since the catalogues of stakeholder objectives that exist so far are not very detailed. The various stakeholder objectives are analysed along their three goal dimensions and divided into formal and material goals (Formalziele and Sachziele). Furthermore, it becomes clear that the differing goals of the various stakeholder groups lead to permanent conflicts between corporate objectives and stakeholder objectives, so that within corporate goal systems these conflicts can, in principle, only be quasi-resolved.
Non-destructive (ND) investigation techniques, whether non-destructive testing or geophysical assessment methods, are commonly applied in construction and transportation, in energy engineering and in urban development. Whereas over the past decades interest focused on internal geometric information about the investigated medium, more recent research concentrates on information related to the nature and condition of that medium, thus moving closer to the notion of non-destructive evaluation. Current studies attempt to integrate the quantities derived from non-destructive measurements into statistical approaches coupled with service-life models.
Because of the interconnected structures and interdependencies of dynamic systems, the underlying mathematical models usually become very complex and demand considerable mathematical understanding and skill. With suitable software, however, complex interaction networks of dynamic systems can be built interactively even without deep mathematical or computer-science expertise. As an example, we develop step by step the model of a miniature world and draw conclusions about its population development.
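To make the idea concrete, here is a minimal Python sketch (not taken from the article) of such a miniature world: a single population stock whose birth and death rates are wired together and integrated step by step. All function names and parameter values are illustrative assumptions.

# Minimal sketch of a "mini-world" population model; parameters are illustrative only.
def simulate_miniworld(population=1000.0, birth_rate=0.03, death_rate=0.01,
                       capacity=10000.0, years=100, dt=1.0):
    """Euler integration of a simple birth/death model with a crowding term."""
    history = [population]
    for _ in range(int(years / dt)):
        # crowding slows births as the population approaches the capacity
        births = birth_rate * population * (1.0 - population / capacity)
        deaths = death_rate * population
        population += dt * (births - deaths)
        history.append(population)
    return history

if __name__ == "__main__":
    trajectory = simulate_miniworld()
    print(f"population after 100 years: {trajectory[-1]:.0f}")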
Acoustics provides an interesting backdrop for interdisciplinary, cross-subject teaching that connects mathematics, physics and music. Students can, for example, work experimentally by recording audio themselves and having computer software generate frequency spectra. Conversely, students can also prescribe frequency spectra and generate sounds from them. This can serve, for instance, to make the concept of overtones physically or mathematically tangible in music lessons, or to examine the frequency ratios of intervals and triads in harmony theory more closely.
The computer is a very useful tool here, because the mathematical background of this task, switching between an audio recording and its frequency representation, lies in Fourier analysis, which is extremely demanding for students. By introducing the Fourier transform as a numerical tool that does not have to be understood in every detail, interesting mathematics can be done elsewhere, and the connections between acoustics and music can be experienced in a playful way.
The following article describes an approach we have already put into practice at the Felix-Klein-Modellierungswoche (modelling week): the students were given the task of developing a synthesizer capable of imitating various musical instruments. As aids, they received a brief introduction to the properties of the Fourier transform as well as audio recordings of various instruments.
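As a rough illustration of this kind of workflow, the following Python sketch extracts the strongest partials of a recording with the FFT and resynthesizes a tone from them by additive synthesis. The helper names and the synthetic example signal are our own illustrative assumptions, not the material handed out during the modelling week.

import numpy as np

def dominant_partials(samples, rate, n_partials=10):
    """Return (frequency, amplitude) pairs of the strongest spectral peaks."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    idx = np.argsort(spectrum)[-n_partials:]          # strongest bins
    return [(freqs[i], spectrum[i]) for i in sorted(idx)]

def synthesize(partials, duration=1.0, rate=44100):
    """Additive synthesis: sum of sine waves with the given partials."""
    t = np.arange(int(duration * rate)) / rate
    tone = sum(a * np.sin(2 * np.pi * f * t) for f, a in partials)
    return tone / np.max(np.abs(tone))                # normalize to [-1, 1]

# Example: a synthetic "instrument" with a fundamental and two overtones.
rate = 44100
t = np.arange(rate) / rate
recording = np.sin(2*np.pi*220*t) + 0.5*np.sin(2*np.pi*440*t) + 0.25*np.sin(2*np.pi*660*t)
partials = dominant_partials(recording, rate, n_partials=3)
tone = synthesize(partials, duration=0.5)
print([round(f, 1) for f, _ in partials], tone.shape)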
Whole-body electromyostimulation (WB-EMS) is an extension of the EMS application known in physical therapy. In WB-EMS, body composition and skinfold thickness seem to play a decisive role in influencing the Ohmic resistance and therefore the maximum intensity tolerance. The therapeutic success of (WB-)EMS may therefore depend on individual anatomical parameters. The aim of the study was to find out whether gender, skinfold thickness and parameters of body composition have an influence on the maximum intensity tolerance in WB-EMS. [Participants and Methods] Fifty-two participants were included in the study. Body composition (body impedance, body fat, fat mass, fat-free mass) and skinfold thicknesses were measured and related to the maximum intensity tolerance. [Results] No relationship between the different anthropometric parameters and the maximum intensity tolerance was detected for either gender. Considering the individual muscle groups, no similarities were found in the results. [Conclusion] Neither body composition nor skinfold thickness seems to have any influence on the maximum intensity tolerance in WB-EMS training. For the application in physiotherapy this means that dosing the electrical voltage within the scope of a (WB-)EMS application is only possible via subjective feedback (BORG scale).
In this paper we present the results of the project “#Datenspende”, in which, during the 2017 German federal election, more than 4000 people donated their search results for keywords connected to the election campaign.
Analyzing the donated result lists, we show that the room for personalization of the search results is very small. Thus the opportunity for the effect described in Eli Pariser’s filter bubble theory to occur in this data is also very small, to the point of being negligible. We obtained these results by applying various similarity measures to the donated result lists. The first approach, using the number of common results as a similarity measure, showed that the space for personalization is less than two results out of ten on average when searching for persons and at most four when searching for parties. Application of other, more specific measures shows that the space is indeed even smaller, so that the presence of filter bubbles is not evident.
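The "number of common results" measure described above can be illustrated with a small Python sketch; the function name and the toy result lists are hypothetical and not part of the project's analysis code.

def common_results(list_a, list_b, top_n=10):
    """Number of results shared by two top-n search result lists."""
    return len(set(list_a[:top_n]) & set(list_b[:top_n]))

# Hypothetical donated result lists for the same query from two users.
user_1 = ["a.de", "b.de", "c.de", "d.de", "e.de", "f.de", "g.de", "h.de", "i.de", "j.de"]
user_2 = ["a.de", "b.de", "c.de", "d.de", "e.de", "f.de", "g.de", "h.de", "k.de", "l.de"]

shared = common_results(user_1, user_2)
print(f"{shared} of 10 results shared -> room for personalization: {10 - shared}")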
Moreover, this project is also a proof of concept, as it enables society to permanently monitor a search engine’s degree of personalization for any desired search terms. The general design can also be transferred to other intermediaries, provided appropriate APIs permit selective access to the content relevant to such a study, in order to establish a similar degree of trustworthiness.
With this article we would first like to give a brief review of wavelet thresholding methods in non-Gaussian and non-i.i.d. situations. Many of these applications are based on Gaussian approximations of the empirical coefficients. For regression and density estimation with independent observations, we establish joint asymptotic normality of the empirical coefficients by means of strong approximations. Then we describe how one can prove asymptotic normality under mixing conditions on the observations by cumulant techniques. In the second part, we apply these non-linear adaptive shrinking schemes to spectral estimation problems for both a stationary and a non-stationary time series setup. For the latter, in a model of Dahlhaus for the evolutionary spectrum of a locally stationary time series, we present two different approaches. Moreover, we show that in classes of anisotropic function spaces an appropriately chosen wavelet basis automatically adapts to possibly different degrees of regularity in the different directions. The resulting fully adaptive spectral estimator attains the rate that is optimal in the idealized Gaussian white noise model up to a logarithmic factor.
We derive minimax rates for estimation in anisotropic smoothness classes. This rate is attained by a coordinatewise thresholded wavelet estimator based on a tensor product basis with a separate scale parameter for every dimension. It is shown that this basis is superior to its one-scale multiresolution analog if different degrees of smoothness in different directions are present. As an important application we introduce a new adaptive wavelet estimator of the time-dependent spectrum of a locally stationary time series. Using this model, which was recently developed by Dahlhaus, we show that the resulting estimator attains nearly the rate that is optimal in Gaussian white noise, simultaneously over a wide range of smoothness classes. Moreover, by our new approach we overcome the difficulty of how to choose the right amount of smoothing, i.e. how to adapt to the appropriate resolution, for reconstructing the local structure of the evolutionary spectrum in the time-frequency plane.
We consider wavelet estimation of the time-dependent (evolutionary) power spectrum of a locally stationary time series. Allowing for departures from stationarity proves useful for modelling, e.g., transient phenomena, quasi-oscillating behaviour or spectrum modulation. In our work wavelets are used to provide an adaptive local smoothing of a short-time periodogram in the time-frequency plane. In contrast to classical nonparametric (linear) approaches, we use nonlinear thresholding of the empirical wavelet coefficients of the evolutionary spectrum. We show how these techniques allow both for adaptively reconstructing the local structure in the time-frequency plane and for denoising the resulting estimates. To this end a threshold choice is derived which is motivated by minimax properties with respect to the integrated mean squared error. Our approach is based on a 2-d orthogonal wavelet transform modified by using a cardinal Lagrange interpolation function on the finest scale. As an example, we apply our procedure to a time-varying spectrum motivated by mobile radio propagation.
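As a simple illustration of the nonlinear thresholding idea underlying these estimators, the following Python sketch soft-thresholds the empirical wavelet coefficients of a noisy 1-D signal with the standard universal threshold, using PyWavelets. It is a generic textbook scheme under our own assumptions, not the 2-D, minimax-motivated procedure developed in the papers above.

import numpy as np
import pywt  # PyWavelets

def soft_threshold_denoise(signal, wavelet="db4", level=4):
    """Nonlinear wavelet shrinkage: soft-threshold the empirical detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # universal threshold sigma * sqrt(2 log n); sigma estimated from the finest scale
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(signal)]

# Noisy piecewise-smooth test signal
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 1024)
clean = np.sin(6 * np.pi * x) + (x > 0.5)
noisy = clean + 0.3 * rng.standard_normal(x.size)
denoised = soft_threshold_denoise(noisy)
print("residual RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))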
A simple method of calculating the Wannier-Stark resonances in 2D lattices is suggested. Using this method we calculate the complex Wannier-Stark spectrum for a non-separable 2D potential realized in optical lattices and analyze its general structure. The dependence of the lifetime of Wannier-Stark states on the direction of the static field (relative to the crystallographic axis of the lattice) is briefly discussed.
The paper studies the effect of a weak periodic driving on metastable Wannier-Stark states. The decay rate of the ground Wannier-Stark states as a continuous function of the driving frequency is calculated numerically. The theoretical results are compared with experimental data of Wilkinson et al. [Phys. Rev. Lett. 76, 4512 (1996)] obtained for cold sodium atoms in an accelerated optical lattice.
Wall energy and wall thickness of exchange-coupled rare-earth transition-metal triple layer stacks
(1999)
The room-temperature wall energy σw = 4.0 × 10⁻³ J/m² of an exchange-coupled Tb19.6Fe74.7Co5.7/Dy28.5Fe43.2Co28.3 double layer stack can be reduced by introducing a soft magnetic intermediate layer between both layers that exhibits a significantly smaller anisotropy compared to Tb-FeCo and Dy-FeCo. σw decreases linearly with increasing intermediate layer thickness, d_IL, until the wall is completely located within the intermediate layer for d_IL ≥ d_w, where d_w denotes the wall thickness. Thus, d_w can be obtained from the plot of σw versus d_IL. We determined σw and d_w on Gd-FeCo intermediate layers with different anisotropy behavior (perpendicular and in-plane easy axis) and compared the results with data obtained from Brillouin light-scattering measurements, from which the exchange stiffness, A, and the uniaxial anisotropy, K_u, could be determined. With the knowledge of A and K_u, wall energy and wall thickness were calculated and showed excellent agreement with the magnetic measurements. A ten times smaller perpendicular anisotropy of Gd28.1Fe71.9 in comparison to Tb-FeCo and Dy-FeCo resulted in a much smaller σw = 1.1 × 10⁻³ J/m² and d_w = 24 nm at 300 K. A Gd34.1Fe61.4Co4.5 layer with in-plane anisotropy at room temperature showed a further reduced σw = 0.3 × 10⁻³ J/m² and d_w = 17 nm. The smaller wall energy was a result of a different wall structure compared to perpendicular layers.
The coordination of multiple external representations is important for learning, but yet a difficult task for students, requiring instructional support. The subject in this study covers a typical relation in physics between abstract mathematical equations (definitions of divergence and curl) and a visual representation (vector field plot). To support the connection across both representations, two instructions with written explanations, equations, and visual representations (differing only in the presence of visual cues) were designed and their impact on students’ performance was tested. We captured students’ eye movements while they processed the written instruction and solved subsequent coordination tasks. The results show that students instructed with visual cues (VC students) performed better, responded with higher confidence, experienced less mental effort, and rated the instructional quality better than students instructed without cues. Advanced eye-tracking data analysis methods reveal that cognitive integration processes appear in both groups at the same point in time but they are significantly more pronounced for VC students, reflecting a greater attempt to construct a coherent mental representation during the learning process. Furthermore, visual cues increase the fixation count and total fixation duration on relevant information. During problem solving, the saccadic eye movement pattern of VC students is similar to experts in this domain. The outcomes imply that visual cues can be beneficial in coordination tasks, even for students with high domain knowledge. The study strongly confirms an important multimedia design principle in instruction, that is, that highlighting conceptually relevant information shifts attention to relevant information and thus promotes learning and problem solving. Even more, visual cues can positively influence students’ perception of course materials.
Virtual Robot Programming for Deformable Linear Objects: System concept and Prototype Implementation
(2002)
In this paper we present a method and system for robot programming using virtual reality techniques. The proposed method allows intuitive teaching of a manipulation task with haptic feedback in a graphical simulation system. Based on earlier work, our system allows even an operator who lacks specialized knowledge of robotics to automatically generate a robust sensor-based robot program that is ready to execute on different robots, merely by demonstrating the task in virtual reality.
In cyanobacteria and plants, VIPP1 plays crucial roles in the biogenesis and repair of thylakoid membrane protein complexes and in coping with chloroplast membrane stress. In chloroplasts, VIPP1 localizes in distinct patterns at or close to envelope and thylakoid membranes. In vitro, VIPP1 forms higher-order oligomers of >1 MDa that organize into rings and rods. However, it remains unknown how VIPP1 oligomerization is related to function. Using time-resolved fluorescence anisotropy and sucrose density gradient centrifugation, we show here that Chlamydomonas reinhardtii VIPP1 binds strongly to liposomal membranes containing phosphatidylinositol-4-phosphate (PI4P). Cryo-electron tomography reveals that VIPP1 oligomerizes into rods that can engulf liposomal membranes containing PI4P. These findings place VIPP1 into a group of membrane-shaping proteins including epsin and BAR domain proteins. Moreover, they point to a potential role of phosphatidylinositols in directing the shaping of chloroplast membranes.
In this paper we describe a method for specifying and operationalizing conceptual models of cooperative knowledge-based workflows. It extends known approaches by the notion of an agent and by alternative task decompositions. The paper focuses on the techniques underlying our distributed interpreter. In particular, we discuss methods that handle dependencies between tasks and efficiently support goal-directed backtracking.
3D joint kinematics can provide important information about the quality of movements. Optical motion capture systems (OMC) are considered the gold standard in motion analysis. However, in recent years, inertial measurement units (IMU) have become a promising alternative. The aim of this study was to validate IMU-based 3D joint kinematics of the lower extremities during different movements. Twenty-eight healthy subjects participated in this study. They performed bilateral squats (SQ), single-leg squats (SLS) and countermovement jumps (CMJ). The IMU kinematics were calculated using a recently described sensor-fusion algorithm. A marker-based OMC system served as a reference. Only the technical error based on algorithm performance was considered, incorporating OMC data for the calibration, initialization, and a biomechanical model. To evaluate the validity of IMU-based 3D joint kinematics, root mean squared error (RMSE), range of motion error (ROME), Bland-Altman (BA) analysis as well as the coefficient of multiple correlation (CMC) were calculated. The evaluation was twofold: first, the IMU data were compared to OMC data based on marker clusters; second, based on skin markers attached to anatomical landmarks. The first evaluation revealed means for RMSE and ROME for all joints and tasks below 3°. The more dynamic task, CMJ, revealed error measures approximately 1° higher than the remaining tasks. Mean CMC values ranged from 0.77 to 1 over all joint angles and all tasks. The second evaluation showed an increase in the RMSE of 2.28°–2.58° on average for all joints and tasks. Hip flexion revealed the highest average RMSE in all tasks (4.87°–8.27°). The present study revealed a valid IMU-based approach for the measurement of 3D joint kinematics in functional movements of varying demands. The high validity of the results encourages further development and the extension of the present approach into clinical settings.
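The two simplest error measures used in this validation can be written down compactly. The following Python sketch computes RMSE and a range-of-motion error for a pair of joint-angle curves; the synthetic knee-flexion example and all names are illustrative assumptions, not the study's actual processing pipeline.

import numpy as np

def rmse(a, b):
    """Root mean squared error between two joint-angle time series (deg)."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def range_of_motion_error(a, b):
    """Difference in range of motion between the two systems (deg)."""
    return float((np.max(a) - np.min(a)) - (np.max(b) - np.min(b)))

# Hypothetical knee flexion curves from the IMU and the OMC reference
t = np.linspace(0, 1, 200)
omc = 60 * np.sin(np.pi * t) ** 2                              # reference angle (deg)
imu = omc + np.random.default_rng(1).normal(0, 1.5, t.size)    # noisy IMU estimate
print(f"RMSE: {rmse(imu, omc):.2f} deg, ROME: {range_of_motion_error(imu, omc):.2f} deg")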
In this paper we present an interpreter that supports the validation of conceptual models already in early development phases. We compare hypermedia and expert-system approaches to knowledge processing and explain how an integrated approach simplifies the construction of expert systems. The knowledge engineering tool we have developed enables a "smooth" transition from initial protocols via a semi-formal specification in the form of a typed hypertext to an operational expert system. An interpreter uses the intermediate representation created in this process directly for the interactive solution of problems, distributing individual tasks to their respective processors via a local area network. In other words, the specification of the expert system is employed directly to solve real problems. If operationalizations (i.e. programs) exist for individual subtasks, these are executed by the computer.
Typical instances, that is, instances that are representative for a particular situation or concept, play an important role in human knowledge representation and reasoning, in particular in analogical reasoning. This well-known observation has been a motivation for investigations in cognitive psychology which provide a basis for our characterization of typical instances within concept structures and for a new inference rule for justified analogical reasoning with typical instances. In a nutshell, this paper suggests augmenting the propositional knowledge representation system by a non-propositional part consisting of concept structures which may have directly represented instances as elements. The traditional reasoning system is extended by a rule for justified analogical inference with typical instances using information extracted from both knowledge representation subsystems.
Unification in an Extensional Lambda Calculus with Ordered Function Sorts and Constant Overloading
(1999)
We develop an order-sorted higher-order calculus suitable for automatic theorem proving applications by extending the extensional simply typed lambda calculus with a higher-order ordered sort concept and constant overloading. Huet's well-known techniques for unifying simply typed lambda terms are generalized to arrive at a complete transformation-based unification algorithm for this sorted calculus. Consideration of an order-sorted logic with functional base sorts and arbitrary term declarations was originally proposed by the second author in a 1991 paper; we give here a corrected calculus which supports constant rather than arbitrary term declarations, as well as a corrected unification algorithm, and prove in this setting results corresponding to those claimed there.
The introduction of sorts to first-order automated deduction has brought greater conciseness of representation and a considerable gain in efficiency by reducing search spaces. This suggests that sort information can be employed in higher-order theorem proving with similar results. This paper develops a sorted λ-calculus suitable for automatic theorem proving applications. It extends the simply typed λ-calculus by a higher-order sort concept that includes term declarations and functional base sorts. The term declaration mechanism studied here is powerful enough to subsume subsorting as a derived notion and therefore gives a justification for the special form of subsort inference. We present a set of transformations for sorted (pre-)unification and prove the nondeterministic completeness of the algorithm induced by these transformations.
Arctic, Antarctic and alpine biological soil crusts (BSCs) are formed by adhesion of soil particles to exopolysaccharides (EPSs) excreted by cyanobacterial and green algal communities, the pioneers and main primary producers in these habitats. These BSCs provide and influence many ecosystem services such as soil erodibility, soil formation and nitrogen (N) and carbon (C) cycles. In cold environments degradation rates are low and BSCs continuously increase soil organic C; therefore, these soils are considered to be CO2 sinks. This work provides a novel, nondestructive and highly comparable method to investigate intact BSCs with a focus on cyanobacteria and green algae and their contribution to soil organic C. A new terminology arose, based on confocal laser scanning microscopy (CLSM) 2-D biomaps, dividing BSCs into a photosynthetic active layer (PAL) made of active photoautotrophic organisms and a photosynthetic inactive layer (PIL) harbouring remnants of cyanobacteria and green algae glued together by their remaining EPSs. By the application of CLSM image analysis (CLSM-IA) to 3-D biomaps, C coming from photosynthetically active organisms could be visualized as depth profiles with C peaks at 0.5 to 2 mm depth. Additionally, the CO2 sink character of these cold soil habitats dominated by BSCs could be highlighted, demonstrating that the first cubic centimetre of soil consists of between 7 and 17% total organic carbon, identified by loss on ignition.
Of the environmental reports of German companies, fewer than 3% have so far been published on the Internet, but the trend is rising. Here, the environmental reports available on the Internet are evaluated and reasons for using the Internet for environmental reporting are presented. The paper is structured in five sections: As a thematic introduction, corporate environmental reports are characterized by means of a morphology (Section 2). This is followed by the ICT-specific challenges facing environmentally reporting companies, taken as starting points for environmental reports on the Internet (Section 3). This lays the foundation for a systematization of the Internet-based support potentials for environmental reporting (Section 4). The systematization is followed by a detailed survey of the environmental reports of German companies on the Internet in five respects (Section 5): The underlying survey methodology is explained (Section 5.1). The additional empirical studies on environmental reports on the Internet are evaluated (Section 5.2). The findings regarding the content and presentation of environmental reports on the Internet are described in more detail (Section 5.3) and interpreted using explanatory approaches (Section 5.4). Finally, on the basis of the conceptually derivable support potentials on the one hand and the empirical studies on the other, central trends for the future development of environmental reports on the Internet are presented (Section 5.5).
Here, the corporate environmental reports available on the Internet are evaluated, practical experiences of companies reporting environmentally on the Internet are documented, general reasons for using the Internet for environmental reporting are presented, a classification of environmental reports on the Internet is proposed, and development trends in environmental reporting are outlined. The study is structured in six chapters: As a thematic introduction, environmental reports are treated as the core of corporate environmental communication (Chapter 2). This is followed by the specific information and communication technology (ICT) challenges facing environmentally reporting companies; these are regarded as starting points for environmental reporting on the Internet and for exploiting the technical support potentials of the Internet (Chapter 3). This lays the foundation for an overview of the various technical support potentials when using Internet technologies and services for environmental reporting (Chapter 4). The overview is followed by a detailed survey of corporate environmental reports on the Internet for Germany (Chapter 5). On the basis of this survey, the final chapter argues for using the Internet for environmental reporting (Chapter 6).
Environmental reporting plays an increasingly important role both for the economic success of companies and for ecologically sustainable development. There are three reasons for this: First, through voluntary and informative environmental reporting companies can uncover ecological weak points, reduce environmental impacts and gain competitive advantages in the market. Second, legal and moral obligations for environmental reporting are increasing. Third, the technical possibilities for environmental reporting have grown enormously through the use of the Internet. All three tendencies are good reasons for using the Internet for environmental reporting. However, fewer than 3% of the environmental reports of small and medium-sized enterprises (SMEs) have so far been published on the Internet, although the trend is rising. To date, it is mainly international and globally operating large companies that use the Internet for environmental reporting; SMEs still rarely present environmental reports on the Internet. Here, the environmental reports of SMEs available on the Internet are evaluated, general reasons for using the Internet for environmental reporting are presented, and the possibilities for SMEs of Internet-based environmental reporting are illustrated using environmental reports as examples. The study is structured in six chapters: As a thematic introduction, environmental reports are treated as the core of corporate environmental communication (Chapter 2). This is followed by the information and communication technology (ICT) challenges facing environmentally reporting companies; they are regarded as starting points for environmental reports on the Internet and for exploiting the technical support potentials of the Internet (Chapter 3). This lays the foundation for an overview of the various technical support potentials when using Internet technologies and services for environmental reporting (Chapter 4). The overview is followed by a detailed survey of environmental reports on the Internet by SMEs in Germany (Chapter 5). On the basis of this empirical survey, the possibilities of Internet-based environmental reporting for SMEs are then derived (Chapter 6).
The following two norms for holomorphic functions \(F\), defined on the right complex half-plane \(\{z \in C:\Re(z)\gt 0\}\) with values in a Banach space \(X\), are equivalent:
\[\begin{eqnarray*} \lVert F \rVert_{H_p(C_+)} &=& \sup_{a\gt0}\left( \int_{-\infty}^\infty \lVert F(a+ib) \rVert^p \, db \right)^{1/p}, \quad\mbox{and} \\ \lVert F \rVert_{H_p(\Sigma_{\pi/2})} &=& \sup_{\lvert \theta \rvert \lt \pi/2}\left( \int_0^\infty \left\lVert F(re^{i \theta}) \right\rVert^p \, dr \right)^{1/p}.\end{eqnarray*}\] As a consequence, we derive a description of boundary values of sectorial holomorphic functions, and a theorem of Paley-Wiener type for sectorial holomorphic functions.
Passive graduated filters with fixed absorption profile are currently used in image recording to avoid overexposure. However, a whole set of filters with prescribed gradients is required to cope with changing illumination conditions. Furthermore, they demand mechanical adjustment during operation. To overcome these deficiencies we present a microfabricated active electrochromic graduated filter which combines multiple functionalities: The overall absorbance, the position of medium transmission as well as the magnitude of its gradient can be tuned continuously by electrical means. Live image control is possible using low operation voltages in the range of ±2 V to reach a high change in optical density ΔOD of 1.01 (400 nm to 780 nm) with a coloration and bleaching time 1.3 s and 0.2 s, respectively. Owing to their low volume and power consumption they are suitable for widespread applications like in smartphones, surveillance cameras or microscopes.
Most automated theorem provers suffer from the problem that the resulting proofs are difficult to understand even for experienced mathematicians. An effective communication between the system and its users, however, is crucial for many applications, such as in a mathematical assistant system. Therefore, efforts have been made to transform machine generated proofs (e.g. resolution proofs) into natural deduction (ND) proofs. The state-of-the-art procedure of proof transformation follows basically its completeness proof: the premises and the conclusion are decomposed into unit literals, then the theorem is derived by multiple levels of proofs by contradiction. Indeterminism is introduced by heuristics that aim at the production of more elegant results. This indeterministic character entails not only a complex search, but also leads to unpredictable results.
In this paper we first study resolution proofs in terms of meaningful operations employed by human mathematicians, and thereby establish a correspondence between resolution proofs and ND proofs at a more abstract level. Concretely, we show that if its unit initial clauses are CNFs of literal premises of a problem, a unit resolution corresponds directly to a well-structured ND proof segment that mathematicians intuitively understand as the application of a definition or a theorem. The consequence is twofold: First, it enhances our intuitive understanding of resolution proofs in terms of the vocabulary with which mathematicians talk about proofs. Second, the transformation process is now largely deterministic and therefore efficient. This determinism also guarantees the quality of resulting proofs.
Patients after total hip arthroplasty (THA) suffer from lingering musculoskeletal restrictions. Three-dimensional (3D) gait analysis in combination with machine-learning approaches is used to detect these impairments. In this work, features from the 3D gait kinematics, spatio-temporal parameters (Set 1) and joint angles (Set 2), of an inertial sensor (IMU) system are proposed as an input for a support vector machine (SVM) model, to differentiate impaired and non-impaired gait. The features were divided into two subsets. The IMU-based features were validated against an optical motion capture (OMC) system by means of 20 patients after THA and a healthy control group of 24 subjects. Then the SVM model was trained on both subsets. The validation of the IMU system-based kinematic features revealed root mean squared errors in the joint kinematics from 0.24° to 1.25°. The validity of the spatio-temporal gait parameters (STP) revealed a similarly high accuracy. The SVM models based on IMU data showed an accuracy of 87.2% (Set 1) and 97.0% (Set 2). The current work presents valid IMU-based features, employed in an SVM model for the classification of the gait of patients after THA and a healthy control group. The study reveals that the features of Set 2 are more significant concerning the classification problem. The present IMU system proves its potential to provide accurate features for the incorporation in a mobile gait-feedback system for patients after THA.
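A minimal sketch of such a classification pipeline, assuming scikit-learn and a purely synthetic feature matrix in place of the real IMU features, could look as follows; it only illustrates the SVM-with-cross-validation idea, not the study's actual model or data.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per subject, columns are gait features
# (e.g. spatio-temporal parameters or joint-angle descriptors); labels mark
# patients after THA (1) versus healthy controls (0).
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (24, 6)),    # controls
               rng.normal(0.8, 1.0, (20, 6))])   # patients (shifted features)
y = np.array([0] * 24 + [1] * 20)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")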
Background: The use of health apps to support the treatment of chronic pain is gaining importance. Most available pain management apps are still lacking in content quality and quantity as their developers neither involve health experts to ensure target group suitability nor use gamification to engage and motivate the user. To close this gap, we aimed to develop a gamified pain management app, Pain-Mentor.
Objective: To determine whether medical professionals would approve of Pain-Mentor’s concept and content, this study aimed to evaluate the quality of the app’s first prototype with experts from the field of chronic pain management and to discover necessary improvements.
Methods: A total of 11 health professionals with a background in chronic pain treatment and 2 mobile health experts participated in this study. Each expert first received a detailed presentation of the app. Afterward, they tested Pain-Mentor and then rated its quality using the mobile application rating scale (MARS) in a semistructured interview.
Results: The experts found the app to be of excellent general (mean 4.54, SD 0.55) and subjective quality (mean 4.57, SD 0.43). The app-specific section was rated as good (mean 4.38, SD 0.75). Overall, the experts approved of the app’s content, namely, pain and stress management techniques, behavior change techniques, and gamification. They believed that the use of gamification in Pain-Mentor positively influences the patients’ motivation and engagement and thus has the potential to promote the learning of pain management techniques. Moreover, applying the MARS in a semistructured interview provided in-depth insight into the ratings and concrete suggestions for improvement.
Conclusions: The experts rated Pain-Mentor to be of excellent quality. It can be concluded that experts perceived the use of gamification in this pain management app in a positive manner. This showed that combining pain management with gamification did not negatively affect the app’s integrity. This study was therefore a promising first step in the development of Pain-Mentor.
Using an experience factory is one possible concept for supporting and improving reuse in software development. (i.e., reuse of products, processes, quality models, ...). In the context of the Sonderforschungsbereich 501: "Development of Large Systems with Generic methods" (SFB501), the Software Engineering Laboratory (SE Lab) runs such an experience factory as part of the infrastructure services it offers. The SE Lab also provides several tools to support the planning, developing, measuring, and analyzing activities of software development processes. Among these tools, the SE Lab runs and maintains an experience base, the SFB-EB. When an experience factory is utilized, support for experience base maintenance is an important issue. Furthermore, it might be interesting to evaluate experience base usage with regard to the number of accesses to certain experience elements stored in the database. The same holds for the usage of the tools provided by the SE LAB. This report presents a set of supporting tools that were designed to aid in these tasks. These supporting tools check the experience base's consistency and gather information on the usage of SFB-EB and the tools installed in the SE Lab. The results are processed periodically and displayed as HTML result reports (consistency checking) or bar charts (usage profiles).
Cell division and cell elongation are fundamental processes for growth. In contrast to animal cells, plant cells are surrounded by rigid walls and therefore loosening of the wall is required during elongation. On the other hand, vacuole size has been shown to correlate with cell size, and inhibition of vacuolar expansion limits cell growth. However, the specific role of the vacuole during cell elongation is still not fully resolved. Especially the question whether the vacuole is the leading unit during cellular growth or just passively expands upon water uptake remains to be answered. Here, we review recent findings about the contribution of the vacuole to cell elongation. In addition, we also discuss the connection between cell wall status and vacuolar morphology. In particular, we focus on the question whether vacuolar size is dictated by cell size or vice versa and share our personal view about the sequential steps during cell elongation.
The size congruity effect involves interference between numerical magnitude and physical size of visually presented numbers: congruent numbers (either both small or both large in numerical magnitude and physical size) are responded to faster than incongruent ones (small numerical magnitude/large physical size or vice versa). Besides, numerical magnitude is associated with lateralized response codes, leading to the Spatial Numerical Association of Response Codes (SNARC) effect: small numerical magnitudes are preferably responded to on the left side and large ones on the right side. Whereas size congruity effects are ascribed to interference between stimulus dimensions in the decision stage, SNARC effects are understood as (in)compatibilities in stimulus-response combinations. Accordingly, size congruity and SNARC effects were previously found to be independent in parity and in physical size judgment tasks. We investigated their dependency in numerical magnitude judgment tasks. We obtained independent size congruity and SNARC effects in these tasks and replicated this observation for the parity judgment task. The results confirm and extend the notion that size congruity and SNARC effects operate in different representational spaces. We discuss possible implications for number representation.
Software defined radios can be implemented on general purpose processors (CPUs), e.g. based on a PC. A processor offers high flexibility: It can not only be used to process the data samples, but also to control receiver functions, display a waterfall or run demodulation software. However, processors can only handle signals of limited bandwidth due to their comparatively low processing speed. For signals of high bandwidth the SDR algorithms have to be implemented as custom designed digital circuits on an FPGA chip. An FPGA provides a very high processing speed, but also lacks flexibility and user interfaces. Recently, the FPGA manufacturer Xilinx has introduced a hybrid system on chip called Zynq that combines both approaches. It features a dual ARM Cortex-A9 processor and an FPGA, which offer the flexibility of a processor with the processing speed of an FPGA on a single chip. The Zynq is therefore very interesting for use in SDRs. In this paper the application of the Zynq and its evaluation board (Zedboard) will be discussed. As an example, a direct sampling receiver has been implemented on the Zedboard using a high-speed 16 bit ADC with 250 Msps.
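The signal-processing chain such a direct sampling receiver implements in FPGA fabric (NCO mixing, low-pass filtering, decimation) can be sketched offline in a few lines of Python; the carrier frequency, filter length and decimation factor below are illustrative assumptions, not the parameters of the Zedboard design.

import numpy as np

def digital_downconverter(adc_samples, f_adc, f_rf, decim=64, ntaps=257):
    """Mix the ADC stream to baseband with a complex NCO, low-pass filter and decimate."""
    n = np.arange(len(adc_samples))
    nco = np.exp(-2j * np.pi * f_rf / f_adc * n)          # numerically controlled oscillator
    baseband = adc_samples * nco                          # complex mixing to 0 Hz
    k = np.arange(ntaps) - (ntaps - 1) / 2
    lpf = np.sinc(k / decim) / decim * np.hamming(ntaps)  # windowed-sinc low-pass
    filtered = np.convolve(baseband, lpf, mode="same")
    return filtered[::decim]                              # reduce the sample rate

# Hypothetical 250 Msps ADC stream containing a weak carrier at 7.1 MHz
f_adc, f_rf = 250e6, 7.1e6
n = np.arange(2 ** 16)
adc = np.cos(2 * np.pi * f_rf / f_adc * n) + 0.01 * np.random.default_rng(0).standard_normal(n.size)
iq = digital_downconverter(adc, f_adc, f_rf)
print(f"baseband rate: {f_adc / 64 / 1e6:.2f} Msps, {iq.size} IQ samples")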
The development of a power system based on high shares of renewable energy sources puts high demands on power grids and the remaining controllable power generation plants, load management and the storage of energy. To reach climate protection goals and a significant reduction of CO2, surplus energies from fluctuating renewables have to be used to defossilize not only the power production sector but the mobility, heat and industry sectors as well, which is called sector coupling. In this article, the role of wastewater treatment plants by means of sector coupling is pictured, discussed and evaluated. The results show significant synergies, for example using electrical surplus energy to produce hydrogen and oxygen with an electrolyzer, employing them for long-term storage and enhancing purification processes at the wastewater treatment plant (WWTP). Furthermore, biofuels and storable methane gas can be produced, or the WWTP can be integrated into a local heating network. Interconnections between many fields of different research sectors are identified, showing that practical utilization is possible and that it is reasonable for WWTPs to contribute to defossilization with sustainable energy concepts.
Comprehensive reuse and systematic evolution of reuse artifacts as proposed by the Quality Improvement Paradigm (QIP) do not only require tool support for mere storage and retrieval. Rather, an integrated management of (potentially reusable) experience data as well as project-related data is needed. This paper presents an approach exploiting object-relational database technology to implement the QIP-driven reuse repository of the SFB 501. Requirements, concepts, and implementational aspects are discussed and illustrated through a running example, namely the reuse and continuous improvement of SDL patterns for developing distributed systems. Based on this discussion, we argue that object-relational database management systems (ORDBMS) are best suited to implement such a comprehensive reuse repository. It is demonstrated how this technology can be used to support all phases of a reuse process and the accompanying improvement cycle. Although the discussions of this paper are strongly related to the requirements of the SFB 501 experience base, the basic realization concepts, and, thereby, the applicability of ORDBMS, can easily be extended to similar applications, i. e., reuse repositories in general.
Load balancing is one of the central problems that have to be solved in parallel computation. Here, the problem of distributed, dynamic load balancing for massive parallelism is addressed. A new local method, which realizes a physical analogy to equilibrating liquids in multi-dimensional tori or hypercubes, is presented. It is especially suited for communication mechanisms with low set-up to transfer ratio occurring in tightly-coupled or SIMD systems. By successively shifting single load elements to the direct neighbors, the load is automatically transferred to lightly loaded processors. Compared to former methods, the proposed Liquid model has two main advantages. First, the task of load sharing is combined with the task of load balancing, where the former has priority. This property is valuable in many applications and important for highly dynamic load distribution. Second, the Liquid model has high efficiency. Asymptotically, it needs O(D · K · L_diff) load transfers to reach the balanced state in a D-dimensional torus with K processors per dimension and a maximum initial load difference of L_diff. The Liquid model clearly outperforms an earlier load balancing approach, the nearest-neighbor-averaging. Besides a survey of related research, analytical results within a formal framework are derived. These results are validated by worst-case simulations in one- and two-dimensional tori with up to two thousand processors.
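The local shifting rule can be illustrated on a toy one-dimensional torus; the synchronous update and the tie-breaking in this Python sketch are our own simplifications for illustration, not the exact algorithm analyzed in the paper.

import random

def liquid_balance_step(load):
    """One synchronous step: every node may shift a single load element
    to a direct neighbour with strictly smaller load (1-D torus)."""
    k = len(load)
    new_load = load[:]
    for i in range(k):
        left, right = (i - 1) % k, (i + 1) % k
        # pick the lighter neighbour; shift one element if the difference is >= 2
        j = left if load[left] <= load[right] else right
        if load[i] > load[j] + 1 and new_load[i] > 0:
            new_load[i] -= 1
            new_load[j] += 1
    return new_load

# Toy example: 16 processors on a ring with a random initial load
random.seed(3)
load = [random.randint(0, 20) for _ in range(16)]
for step in range(1, 201):
    nxt = liquid_balance_step(load)
    if nxt == load:
        break
    load = nxt
print(f"load spread after {step} steps: max - min = {max(load) - min(load)}")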
The core muscles play a central role in stabilizing the head during headers in soccer. The objective of this study was to examine the influence of a fatigued core musculature on the acceleration of the head during jump headers and run headers. Acceleration of the head was measured in a pre-post-design in 68 soccer players (age: 21.5 ± 3.8 years, height: 180.0 ± 13.9 cm, weight: 76.9 ± 8.1 kg). Data were recorded by means of a telemetric 3D acceleration sensor and with a pendulum header. The treatment encompassed two exercises each for the ventral, lateral, and dorsal muscle chains. The acceleration of the head between pre- and post-test was reduced by 0.3 G (p = 0.011) in jump headers and by 0.2 G (p = 0.067) in run headers. An additional analysis of all pretests showed an increased acceleration in run headers when compared to stand headers (p < 0.001) and jump headers (p < 0.001). No differences were found in the sub-group comparisons: semi-professional vs. recreational players, offensive vs. defensive players. Based on the results, we conclude that the acceleration of the head after fatiguing the core muscles does not increase, which stands in contrast to postulated expectations. More tests with accelerated soccer balls are required for a conclusive statement.
Muscular imbalances of the trunk muscles are held responsible for changes in body posture. At the same time, whole-body electromyostimulation (WB-EMS) has been established as a new training method that enables simultaneous stimulation of many muscle groups. This study aimed to analyze whether a 10-week WB-EMS training program changes posture-relevant parameters and/or improves isometric strength of the trunk extensors and flexors, and whether there are differences based on stimulation at 20 Hz and 85 Hz. Fifty-eight untrained adult test persons were divided into three groups (control, CON; training with 20 Hz stimulation, TR20; training with 85 Hz, TR85). Anthropometric parameters, trunk extension and flexion forces and torques, and posture parameters were determined before (n = 58) and after (n = 53: CON: n = 15, TR20: n = 19, TR85: n = 19) a 10-week WB-EMS training program (15 applications, 9 exercises). Differences between the groups were calculated for pre- and post-tests using univariate ANOVA and between the test times using repeated (2 × 3) ANOVA. Comparisons of pairs were calculated post hoc based on Fisher (LSD). No differences between the groups were found for the posture parameters. The post hoc analysis of both trunk flexion and trunk extension forces and torques showed a significant difference between the groups TR85 and CON but no difference between the other group pairs. A 10-week whole-body electrostimulation training with a stimulation frequency of 85 Hz, in contrast to training with a stimulation frequency of 20 Hz, improves the trunk muscle strength of an untrained group but does not significantly change posture parameters.
Much reading research has found that informative parafoveal masks lead to a reading benefit for native speakers (see Schotter et al., 2012). However, little reading research has tested the impact of uninformative parafoveal masks during reading. Additionally, parafoveal processing research is primarily restricted to native speakers. In the current study we manipulated the type of uninformative preview using a gaze contingent boundary paradigm with a group of L1 English speakers and a group of late L2 English speakers (L1 German). We were interested in how different types of uninformative masks impact on parafoveal processing, whether L1 and L2 speakers are similarly impacted, and whether they are sensitive to parafoveally viewed language-specific sub-lexical orthographic information. We manipulated six types of uninformative masks to test these objectives: an Identical, English pseudo-word, German pseudo-word, illegal string of letters, series of X’s, and a blank mask. We found that X masks affect reading the most with slight graded differences across the other masks, that L1 and L2 speakers are impacted similarly, and that neither group is sensitive to sub-lexical orthographic information. Overall, these data show that not all previews are equal, and research should be aware of the way uninformative masks affect reading behavior. Additionally, we hope that future research starts to approach models of eye-movement behavior during reading from not only a monolingual but also from a multilingual perspective.
We consider a variant of the generalized assignment problem (GAP) where the amount of space used in each bin is restricted to be either zero (if the bin is not opened) or above a given lower bound (a minimum quantity). We provide several complexity results for different versions of the problem and give polynomial time exact algorithms and approximation algorithms for restricted cases.
For the most general version of the problem, we show that it does not admit a polynomial time approximation algorithm (unless P=NP), even for the case of a single bin. This motivates the study of dual approximation algorithms that compute solutions violating the bin capacities and minimum quantities by a constant factor. When the number of bins is fixed and the minimum quantity of each bin is at least a factor \(\delta>1\) larger than the largest size of an item in the bin, we show how to obtain a polynomial time dual approximation algorithm that computes a solution violating the minimum quantities and bin capacities by at most a factor of \(1-\frac{1}{\delta}\) and \(1+\frac{1}{\delta}\), respectively, and whose profit is at least as large as the profit of the best solution that satisfies the minimum quantities and bin capacities strictly.
In particular, for \(\delta=2\), we obtain a polynomial time (1,2)-approximation algorithm.
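To make the feasibility structure with minimum quantities concrete, here is a small brute-force Python sketch for the single-bin case (deliberately exponential and purely illustrative, not one of the approximation algorithms discussed above): a subset of items is feasible if its total size is zero or lies between the minimum quantity and the capacity.

from itertools import combinations

def best_single_bin(sizes, profits, capacity, min_quantity):
    """Exhaustive search for one bin: maximise profit subject to
    total size <= capacity and (total size == 0 or total size >= min_quantity)."""
    n = len(sizes)
    best_profit, best_subset = 0, ()   # the empty assignment is always feasible
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            size = sum(sizes[i] for i in subset)
            if min_quantity <= size <= capacity:
                profit = sum(profits[i] for i in subset)
                if profit > best_profit:
                    best_profit, best_subset = profit, subset
    return best_profit, best_subset

# Hypothetical instance
sizes   = [4, 3, 5, 2]
profits = [9, 6, 8, 3]
print(best_single_bin(sizes, profits, capacity=8, min_quantity=6))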
Background: Aneuploidy, or abnormal chromosome numbers, severely alters cell physiology and is widespread in cancers and other pathologies. Using model cell lines engineered to carry one or more extra chromosomes, it has been demonstrated that aneuploidy per se impairs proliferation, leads to proteotoxic as well as replication stress and triggers conserved transcriptome and proteome changes.
Results: In this study, we analysed miRNAs for the first time and demonstrate that their expression is altered in response to chromosome gain. The miRNA deregulation is independent of the identity of the extra chromosome and specific to individual cell lines. By cross-omics analysis we demonstrate that although the deregulated miRNAs differ among individual aneuploid cell lines, their known targets are predominantly associated with cell development, growth and proliferation, pathways known to be inhibited in response to chromosome gain. Indeed, we show that up to 72% of these targets are downregulated and the associated miRNAs are overexpressed in aneuploid cells, suggesting that the miRNA changes contribute to the global transcription changes triggered by aneuploidy. We identified hsa-miR-10a-5p as overexpressed in the majority of aneuploid cells. Hsa-miR-10a-5p enhances translation of a subset of mRNAs that contain the so-called 5’TOP motif, and we show that its upregulation in aneuploids provides resistance to starvation-induced shutdown of ribosomal protein translation.
Conclusions: Our work suggests that the changes of the microRNAome contribute on the one hand to the adverse effects of aneuploidy on cell physiology, and on the other hand to the adaptation to aneuploidy by supporting translation under adverse conditions.
Keywords: Aneuploidy, Cancer, miRNA, miR-10a-5p, Trisomy
We describe a platform for the portable and secure execution of mobile agents written in various interpreted languages on top of a common run-time core. Agents may migrate at any point in their execution, fully preserving their state, and may exchange messages with other agents. One system may contain many virtual places, each establishing a domain of logically related services under a common security policy governing all agents at this place. Agents are equipped with allowances limiting their resource accesses, both globally per agent lifetime and locally per place. We discuss aspects of this architecture and report about ongoing work.
The structural integrity of synaptic connections critically depends on the interaction between synaptic cell adhesion molecules (CAMs) and the underlying actin and microtubule cytoskeleton. This interaction is mediated by giant Ankyrins, that act as specialized adaptors to establish and maintain axonal and synaptic compartments. In Drosophila, two giant isoforms of Ankyrin2 (Ank2) control synapse stability and organization at the larval neuromuscular junction (NMJ). Both Ank2-L and Ank2-XL are highly abundant in motoneuron axons and within the presynaptic terminal, where they control synaptic CAMs distribution and organization of microtubules. Here, we address the role of the conserved N-terminal ankyrin repeat domain (ARD) for subcellular localization and function of these giant Ankyrins in vivo. We used a P[acman] based rescue approach to generate deletions of ARD subdomains, that contain putative binding sites of interacting transmembrane proteins. We show that specific subdomains control synaptic but not axonal localization of Ank2-L. These domains contain binding sites to L1-family member CAMs, and we demonstrate that these regions are necessary for the organization of synaptic CAMs and for the control of synaptic stability. In contrast, presynaptic Ank2-XL localization only partially depends on the ARD but strictly requires the presynaptic presence of Ank2-L demonstrating a critical co-dependence of the two isoforms at the NMJ. Ank2-XL dependent control of microtubule organization correlates with presynaptic abundance of the protein and is thus only partially affected by ARD deletions. Together, our data provides novel insights into the synaptic targeting of giant Ankyrins with relevance for the control of synaptic plasticity and maintenance.
We consider N coupled linear oscillators with time-dependent coefficients. An exact complex amplitude - real phase decomposition of the oscillatory motion is constructed. This decomposition is further used to derive N exact constants of motion which generalise the so-called Ermakov-Lewis invariant of a single oscillator. In the Floquet problem of periodic oscillator coefficients we discuss the existence of periodic complex amplitude functions in terms of existing Floquet solutions.
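For orientation, for a single oscillator \(\ddot q + \omega^2(t)\,q = 0\) with auxiliary amplitude equation \(\ddot \rho + \omega^2(t)\,\rho = \rho^{-3}\), the classical Ermakov-Lewis invariant that is generalised here takes the familiar textbook form (stated here for reference, not quoted from the article):
\[ I \;=\; \tfrac{1}{2}\left[\left(\frac{q}{\rho}\right)^{2} + \left(\rho\,\dot q - \dot\rho\,q\right)^{2}\right], \qquad \frac{dI}{dt} = 0 . \]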
Poor posture in childhood and adolescence is held responsible for the occurrence
of associated disorders in adult age. This study aimed to verify whether body
posture in adolescence can be enhanced through the improvement of neuromuscular
performance, attained by means of targeted strength, stretch, and body perception
training, and whether any such improvement might also transition into adulthood. From
a total of 84 volunteers, the posture development of 67 adolescents was checked annually between the ages of 14 and 20 based on index values in three posture situations. Twenty-eight adolescents exercised twice a week for about 2 h up to the age of 18, while 24 adolescents continued exercising up to the age of 20. Both groups practiced other additional sports for about 1.8 h/week. Fifteen persons served as a non-exercising control group, practicing optional sports for about 1.8 h/week until the age of 18 and for 0.9 h/week thereafter. Group allocation was not random, but depended on the
participants’ choice. A linear mixed model was used to analyze the development
of posture indexes among the groups and over time and the possible influence of
anthropometric parameters (weight, size), of optional athletic activity and of sedentary
behavior. The post hoc pairwise comparison was performed applying the Scheffé test.
The significance level was set at 0.05. The group that exercised continually (TR20)
exhibited a significant posture parameter improvement in all posture situations from
the 2nd year of exercising on. The group that terminated their training when reaching
adulthood (TR18) retained some improvements, such as conscious straightening of the
body posture. In other posture situations (habitual, closed eyes), their posture results
declined again from age 18. The effect sizes determined were between Eta² = 0.12 and
Eta² = 0.19 and represent moderate to strong effects. The control group did not exhibit
any differences. Anthropometric parameters, additional athletic activities and sedentary
behavior did not influence the posture parameters significantly. An additional athletic
training of 2 h per week including elements for improved body perception seems to
have the potential to improve body posture in symptom free male adolescents and
young adults.
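A minimal sketch of the kind of linear mixed model analysis described in the preceding abstract, written with statsmodels; the data file and column names (subject, group, year, posture_index, weight, height) are hypothetical placeholders for the study's actual variables, and the Scheffé post hoc comparisons are not included.

import pandas as pd
import statsmodels.formula.api as smf

# hypothetical long-format data: one row per participant and measurement year
df = pd.read_csv("posture_long.csv")   # columns: subject, group, year, posture_index, weight, height

# random intercept per participant; fixed effects for group, time, their interaction,
# and the anthropometric covariates
model = smf.mixedlm("posture_index ~ group * year + weight + height",
                    data=df, groups=df["subject"])
result = model.fit()
print(result.summary())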
Version and configuration management are central instruments for the intellectual mastery of complex software developments. In strongly reuse-oriented software development approaches, such as the one provided by the SFB, the notion of a configuration must be extended from traditionally product-oriented artifacts to processes and other development experiences. This publication presents such an extended configuration model. In addition, an extension of traditional project planning information is discussed that enables the derivation of tailored version and configuration management mechanisms before the start of a project.
Due to the steady increase in distributed generation, the upcoming smart meter rollout, and the expected electrification of the transport sector (e-mobility), grid planning and grid operation of low-voltage (LV) grids in Germany face major challenges. In recent years, many studies, research and demonstration projects on the topics above have therefore been carried out, and the results as well as the developed methods have been published. However, the published methods can usually not be reproduced or validated, because the underlying models or the assumed scenarios are not transparent to third parties. There is a lack of uniform grid models that represent German LV grids and can be used for comparative studies, similar to the North American distribution grid models of the IEEE.
In contrast to the transmission grid, whose structure is known with sufficient accuracy, suitable grid models for LV grids are difficult to derive because of the large number of LV grids and distribution system operators (DSOs). Furthermore, a detailed representation of real LV grids in scientific publications is usually not desired for data protection reasons. For investigations within a research project, synthetic LV grid models that are as characteristic as possible were therefore created, oriented towards common German settlement structures and usual grid planning principles. In this work, these LV grid models and their development are explained in detail. This makes publicly traceable LV grid models for the German-speaking area available for the first time. They can be used as benchmarks for scientific investigations and for method development.
In contrast to the transmission grid, whose structure is known with sufficient accuracy, suitable grid models for medium-voltage (MV) grids are difficult to derive because of the large number of MV grids and distribution system operators (DSOs). Furthermore, a detailed representation of real MV grids in scientific publications is usually not desired for data protection reasons. In this work, MV grid models and their development are explained in detail. This makes publicly traceable MV grid models for the German-speaking area available for the first time. They can be used as benchmarks for scientific investigations and for method development.
In this study, the dependence of the cyclic deformation behavior on the surface morphology of metastable austenitic HSD® 600 TWinning Induced Plasticity (TWIP) steel was investigated. This steel—with the alloying concept Mn-Al-Si—shows a fully austenitic microstructure with deformation-induced twinning at ambient temperature. Four different surface morphologies were analyzed: as-received with a so-called rolling skin, after up milling, after down milling, and a reference morphology achieved by polishing. The morphologies were characterized by X-Ray Diffraction (XRD), Focused Ion Beam (FIB), Scanning Electron Microscopy (SEM) as well as confocal microscopy methods and show significant differences in initial residual stresses, phase fractions, topographies and microstructures. For specimens with all variants of the morphologies, fatigue tests were performed in the Low Cycle Fatigue (LCF) and High Cycle Fatigue (HCF) regime to characterize the cyclic deformation behavior and fatigue life. Moreover, this study focused on the frequency-dependent self-heating of the specimens caused by cyclic plasticity in the HCF regime. The results show that both surface morphology and specimen temperature have a significant influence on the cyclic deformation behavior of HSD® 600 TWIP steel in the HCF regime.
In this paper, the effect of shot peening and cryogenic turning on the surface morphology of the metastable austenitic stainless steel AISI 347 was investigated. In the shot peening process, the coverage and the Almen intensity, which is related to the kinetic energy of the beads, were varied. During cryogenic turning, the feed rate and the cutting edge radius were varied. The manufactured workpieces were characterized by X-ray diffraction regarding the phase fractions, the residual stresses and the full width at half maximum. The microhardness in the hardened surface layer was measured to compare the hardening effect of the processes. Furthermore, the surface topography was also characterized. The novelty of the research is the direct comparison of the two methods with identical workpieces (same batch) and identical analytics. It was found that shot peening generally leads to a more pronounced surface layer hardening, while cryogenic turning allows the hardening to be realized in a shorter process chain and also leads to a better surface topography. For both hardening processes it was demonstrated how the surface morphology can be modified by adjusting the process parameters.
Structure and tools of the experiment-specific data area of the SFB 501 experience database
(1999)
Software development artifacts must be captured systematically during the execution of a software project so that they can be prepared for reuse. The methodological basis for this in the Sonderforschungsbereich 501 is the concept of the experience database. In its experiment-specific data area, all software development artifacts that arise during the life cycle of a project are stored for each development project. In its overarching data area, all those artifacts from the experiment-specific data area that are candidates for reuse in subsequent projects are consolidated. It has become apparent that systematic access is necessary even to use the data volumes in the experiment-specific data area of the experience database. Systematic access, however, requires a standardized structure. In the experiment-specific area, two types of experiments are distinguished: "controlled experiments" and "case studies". This report describes the storage and access structure for the experiment type "case studies". The structure was developed and evaluated based on the experience gained in initial case studies.
One of the ongoing tasks in space structure testing is the vibration test, in which a given structure is mounted onto a shaker and excited by a certain input load on a given frequency range, in order to reproduce the rigor of launch. These vibration tests need to be conducted in order to ensure that the devised structure meets the expected loads of its future application. However, the structure must not be overtested to avoid any risk of damage. For this, the system's response to the testing loads, i.e., stresses and forces in the structure, must be monitored and predicted live during the test. In order to solve the issues associated with existing methods of live monitoring of the structure's response, this paper investigated the use of artificial neural networks (ANNs) to predict the system's responses during the test. Hence, a framework was developed with different use cases to compare various kinds of artificial neural networks and eventually identify the most promising one. Thus, the conducted research accounts for a novel method for live prediction of stresses, allowing failure to be evaluated for different types of material via yield criteria.
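As a rough illustration of the idea of learning a mapping from measured response channels to stresses, the sketch below trains a small feed-forward network with scikit-learn's MLPRegressor; the file names, channel layout and network size are assumptions made for this example and do not describe the framework or the network architectures compared in the paper.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# hypothetical training data from earlier runs or FE simulations:
# X holds measured response channels (e.g. accelerations, interface forces),
# y holds the stresses at the monitored locations
X = np.load("responses.npy")     # shape (n_samples, n_channels)
y = np.load("stresses.npy")      # shape (n_samples, n_stress_locations)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("R^2 on held-out data:", net.score(X_test, y_test))

# during the vibration test, the trained net would predict the stresses live
# from the currently measured channels:
y_live = net.predict(X_test[:1])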
About the approach: The approach of TOPO was originally developed in the FABEL project [1] to support architects in designing buildings with complex installations. Supplementing knowledge-based design tools, which are available only for selected subtasks, TOPO aims to cover the whole design process. To that aim, it relies almost exclusively on archived plans. Input to TOPO is a partial plan, and output is an elaborated plan. The input plan constitutes the query case and the archived plans form the case base with the source cases. A plan is a set of design objects. Each design object is defined by some semantic attributes and by its bounding box in a 3-dimensional coordinate system. TOPO supports the elaboration of plans by adding design objects.
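The plan representation described above, a plan as a set of design objects with semantic attributes and a bounding box in a 3-dimensional coordinate system, can be mirrored in a few lines of Python; the field names below are illustrative and not taken from TOPO.

from dataclasses import dataclass
from typing import Set, Tuple

BoundingBox = Tuple[float, float, float, float, float, float]  # (xmin, ymin, zmin, xmax, ymax, zmax)

@dataclass(frozen=True)
class DesignObject:
    kind: str                                  # semantic attribute, e.g. "supply air duct"
    attributes: Tuple[Tuple[str, str], ...]    # further semantic attributes as key/value pairs
    bbox: BoundingBox                          # axis-aligned box in the 3-D coordinate system

# a plan is simply a set of design objects; elaboration adds objects to a partial plan
query_plan: Set[DesignObject] = {
    DesignObject("room", (("usage", "office"),), (0, 0, 0, 5, 4, 3)),
}
elaborated_plan = query_plan | {
    DesignObject("supply air duct", (("diameter", "200mm"),), (0, 3.5, 2.7, 5, 3.8, 3.0)),
}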
Cyanobacteria of biological soil crusts (BSCs) represent an important part of circumpolar
and Alpine ecosystems, serve as indicators for ecological condition and climate
change, and function as ecosystem engineers by soil stabilization or carbon and nitrogen
input. The characterization of cyanobacteria from both polar regions remains
extremely important to understand geographic distribution patterns and community
compositions. This study is the first of its kind revealing the efficiency of combining
denaturing gradient gel electrophoresis (DGGE), light microscopy and culture-based
16S rRNA gene sequencing, applied to polar and Alpine cyanobacteria-dominated
BSCs. This study aimed to show the living proportion of cyanobacteria as an extension
to previously published meta-transcriptome
data of the same study sites.
Molecular fingerprints showed a distinct clustering of cyanobacterial communities
with a close relationship between Arctic and Alpine populations, which differed from
those found in Antarctica. Species richness and diversity supported these results,
which were also confirmed by microscopic investigations of living cyanobacteria
from the BSCs. Isolate-based
sequencing corroborated these trends as cold biome
clades were assigned, which included a potentially new Arctic clade of Oculatella.
Thus, our results contribute to the debate regarding biogeography of cyanobacteria
of cold biomes.
Nanoindentation simulations are performed for a Ni(111) bi-crystal, in which the grain boundary is coated by a graphene layer. We study both a weak and a strong interface, realized by a 30∘ and a 60∘ twist boundary, respectively, and compare our results for the composite also with those of an elemental Ni bi-crystal. We find hardening of the elemental Ni when a strong, i.e., low-energy, grain boundary is introduced, and softening for a weak grain boundary. For the strong grain boundary, the interface barrier strength felt by dislocations upon passing the interface is responsible for the hardening; for the weak grain boundary, confinement of the dislocations results in the weakening. For the Ni-graphene composite, we find in all cases a weakening influence that is caused by the graphene blocking the passage of dislocations and absorbing them. In addition, interface failure occurs when the indenter reaches the graphene, again weakening the composite structure.
This paper is devoted to the mathematical description of the solution of the so-called rainflow reconstruction problem, i.e. the problem of constructing a time series with an a priori given rainflow matrix. The algorithm we present is mathematically exact in the sense that no approximations or heuristics are involved. Furthermore, it generates a uniform distribution of all possible reconstructions and thus an optimal randomization of the reconstructed series. The algorithm is a genuine on-line scheme. It is easily adjustable to all variants of rainflow such as symmetric and asymmetric versions and different residue techniques.
Static magnetic and spin wave properties of square lattices of permalloy micron dots with thicknesses of 500 Å and 1000 Å and with varying dot separations have been investigated. The spin wave frequencies can be well described taking into account the demagnetization factor of each single dot. A magnetic four-fold anisotropy was found for the lattice with dot diameters of 1 micrometer and a dot separation of 0.1 micrometer. The anisotropy is attributed to an anisotropic dipole-dipole interaction between magnetically unsaturated parts of the dots. The anisotropy strength (of the order of 10^5 erg/cm^3) decreases with increasing in-plane applied magnetic field.
A new method for calculating Stark resonances is presented and applied for illustration to the simple case of a one-particle, one-dimensional model Hamiltonian. The method is applicable for weak and strong dc fields. The only requirements, also for the case of many particles in multi-dimensional space, are either the short-time evolution matrix elements or the eigenvalues and Fourier components of the eigenfunctions of the field-free Hamiltonian.
The dispersions of dipolar (Damon-Eshbach modes) and exchange dominated spin waves are calculated for in-plane magnetized thin and ultrathin cubic films with (111) crystal orientation and the results are compared with those obtained for the other principal planes. The properties of these magnetic excitations are examined from the point of view of Brillouin light scattering experiments. Attention is paid to study the spin-wave frequency variation as a function of the magnetization direction in the film plane for different film thicknesses. Interface anisotropies and the bulk magnetocrystalline anisotropy are considered in the calculation. A quantitative comparison between an analytical expression obtained in the limit of small film thickness and wave vector and the full numerical calculation is given.
In order to reduce the elapsed time of a computation, a popular approach is to decompose the program into a collection of largely independent subtasks which are executed in parallel. Unfortunately, it is often observed that tightly-coupled parallel programs run considerably slower than initially expected. In this paper, a framework for the analysis of parallel programs and their potential speedup is presented. Two parameters which strongly affect the scalability of parallelism are identified, namely the grain of synchronization, and the degree to which the target hardware is available. It is shown that for certain classes of applications speedup is inherently poor, even if the program runs under the idealized conditions of perfect load balance, unbounded communication bandwidth and negligible communication and parallelization overhead. Upper bounds are derived for the speedup that can be obtained in three different types of computations. An example illustrates the main findings.
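The flavor of such bounds can be illustrated with the classical Amdahl argument; this is a standard textbook bound, not one of the three bounds derived in the paper. If a fraction $f$ of the work is inherently sequential, then the speedup on $p$ processors obeys
\[
S(p) \le \frac{1}{f + \frac{1-f}{p}} \le \frac{1}{f},
\]
so even with unlimited processors a sequential share of 5% caps the speedup at 20.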
This paper presents fill algorithms for boundary-defined regions in raster graphics. The algorithms require only a constant size working memory. The methods presented are based on the so-called "seed fill" algorithms using the internal connectivity of the region with a given inner point. Basic methods as well as additional heuristics for speeding up the algorithm are described and verified. For different classes of regions, the time complexity of the algorithms is compared using empirical results.
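For reference, the plain stack-based seed fill that such algorithms refine looks as follows; note that this version needs working memory that grows with the region size, which is exactly the limitation the constant-memory algorithms of the paper avoid.

def seed_fill(image, seed, fill_value):
    """Fill the 4-connected region containing 'seed' in a raster image (list of lists)."""
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    target = image[sr][sc]
    if target == fill_value:
        return image
    stack = [seed]                       # working memory proportional to the region size
    while stack:
        r, c = stack.pop()
        if 0 <= r < rows and 0 <= c < cols and image[r][c] == target:
            image[r][c] = fill_value
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return image

# tiny example: fill the interior of a boundary-defined region with the value 2
img = [[1, 1, 1, 1],
       [1, 0, 0, 1],
       [1, 0, 0, 1],
       [1, 1, 1, 1]]
seed_fill(img, (1, 1), 2)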
Extending existing calculi by sorts is a strong means for improving the deductive power of first-order theorem provers. Since many mathematical facts can be more easily expressed in higher-order logic - aside from the greater power of higher-order logic in principle - it is desirable to transfer the advantages of sorts in the first-order case to the higher-order case. One possible method for automating higher-order logic is the translation of problem formulations into first-order logic and the usage of first-order theorem provers. For a certain class of problems this method can compete with proving theorems directly in higher-order logic, as for instance with the TPS theorem prover of Peter Andrews or with the Nuprl proof development environment of Robert Constable. There are translations from unsorted higher-order logic based on Church's simple theory of types into many-sorted first-order logic, which are sound and complete with respect to a Henkin-style general models semantics. In this paper we extend corresponding translations to translations of order-sorted higher-order logic into order-sorted first-order logic; thus we are able to utilize corresponding first-order theorem provers for proving higher-order theorems. We do not use any λ-expressions, therefore we have to add so-called comprehension axioms, which a priori make the procedure well-suited only for essentially first-order theorems. However, in practical applications of mathematics many theorems are essentially first-order and, as seems to be the case, the comprehension axioms can be mastered too.
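To make the role of the comprehension axioms concrete, one schematic shape such an axiom can take in an applicative first-order encoding (where higher-order application is written with an explicit apply symbol) is
\[
\forall \bar{x}\;\exists f\;\forall y\;\; \mathrm{apply}(f, y) = t[\bar{x}, y],
\]
asserting, for a given first-order term $t[\bar{x}, y]$, the existence of the function it defines. This is a simplified illustration of the general idea rather than the exact axiom schema used in the paper.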
The analyticity property of the one-dimensional complex Hamiltonian system H(x,p)=H_1(x_1,x_2,p_1,p_2)+iH_2(x_1,x_2,p_1,p_2) with p=p_1+ix_2, x=x_1+ip_2 is exploited to obtain a new class of the corresponding two-dimensional integrable Hamiltonian systems where H_1 acts as a new Hamiltonian and H_2 is a second integral of motion. Also a possible connection between H_1 and H_2 is sought in terms of an auto-B"acklund transformation.
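A quick illustrative instance of this construction, not taken from the paper, is the complex harmonic oscillator $H(x,p) = \tfrac12(p^2 + x^2)$: substituting $p = p_1 + i x_2$ and $x = x_1 + i p_2$ and separating real and imaginary parts gives
\[
H_1 = \tfrac12\big(p_1^2 + x_1^2 - x_2^2 - p_2^2\big), \qquad H_2 = p_1 x_2 + x_1 p_2 .
\]
A short computation with Hamilton's equations for $H_1$ confirms that $H_2$ is conserved, so $H_1$ acts as a two-dimensional real Hamiltonian with $H_2$ as a second integral of motion.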
An important research problem is the incorporation of "declarative" knowledge into an automated theorem prover that can be utilized in the search for a proof. An interesting proposal in this direction is Alan Bundy's approach of using explicit proof plans that encapsulate the general form of a proof and are instantiated into a particular proof for the case at hand. We give some examples that show how a "declarative" high-level description of a proof can be used to find proofs of apparently "similar" theorems by analogy. This "analogical" information is used to select the appropriate axioms from the database so that the theorem can be proved. This information is also used to adjust some options of a resolution theorem prover. In order to get a powerful tool it is necessary to develop an epistemologically appropriate language to describe proofs, for which a large set of examples should be used as a testbed. We present some ideas in this direction.
IoT systems consist of hardware/software systems (e.g., sensors) that are embedded in a physical world, networked, and that interact with complex software platforms. The validation of such systems is a challenge and currently mostly done by prototypes. This paper presents the virtual environment for simulation, emulation and validation of an IoT platform and its semantic model in real-life scenarios. It is based on a decentralized, bottom-up approach that offers interoperability of IoT devices and the value-added services they want to use across different domains. The framework is demonstrated by a comprehensive case study. The example consists of the complete IoT "Smart Energy" use case with a focus on data privacy by homomorphic encryption. The performance of the network is compared while using partially homomorphic encryption, fully homomorphic encryption and no encryption at all. As a major result, we found that our framework is capable of simulating big IoT networks and that the overhead introduced by homomorphic encryption is feasible for VICINITY.
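To make the notion of additively homomorphic encryption used in the case study tangible, here is a textbook toy implementation of the Paillier scheme with deliberately tiny, insecure parameters; it only demonstrates that ciphertexts can be multiplied in order to add the underlying plaintexts and is unrelated to the encryption components actually used in the project.

import math
import random

# toy Paillier key (insecure parameters, illustration only)
p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard generator choice
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # modular inverse of lam modulo n

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n * mu) % n

# additive homomorphism: multiplying ciphertexts adds the plaintexts (mod n)
c1, c2 = encrypt(41), encrypt(13)
print(decrypt((c1 * c2) % n2))   # prints 54 without ever decrypting c1 or c2 individually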
This contribution describes a learning environment for lower and middle secondary school students with a focus on mathematics. The topic of this learning environment is the simulation of escape processes in the context of building evacuations. The concept of a cellular automaton is conveyed without requiring or applying any programming skills. Using this particular simulation tool of the cellular automaton, properties, characteristic quantities, and advantages and disadvantages of simulations in general are discussed. These include, among other things, experimental data acquisition, the determination of model parameters, the discretization of the temporal and spatial horizon and the (discretization) errors that inevitably occur, the algorithmic steps of a simulation in the form of elementary instructions, the storage and visualization of data from a simulation, and the interpretation and critical discussion of simulation results. The learning environment presented allows numerous variations on further aspects of the topic of "evacuation simulation" and thereby also offers a wide range of opportunities for differentiation.
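A minimal Python sketch of the kind of cellular automaton the learning environment builds on: a grid of cells, discrete time steps, and agents that move one cell per step towards the nearest exit. The rules are reduced to the bare minimum and do not reproduce the environment's actual worksheets or parameters.

import random
from collections import deque

WALL, FREE, AGENT, EXIT = "#", ".", "A", "E"

grid = [list(row) for row in ["#######",
                              "#..A..#",
                              "#.A...E",
                              "#..A..#",
                              "#######"]]

def neighbours(r, c):
    return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

def distance_to_exit(grid):
    """Breadth-first search from all exits: walking distance of every reachable cell."""
    dist, queue = {}, deque()
    for r, row in enumerate(grid):
        for c, cell in enumerate(row):
            if cell == EXIT:
                dist[(r, c)] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for nr, nc in neighbours(r, c):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] != WALL and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def step(grid, dist):
    """One time step: every agent moves to a free neighbouring cell closer to an exit."""
    agents = [(r, c) for r, row in enumerate(grid) for c, cell in enumerate(row) if cell == AGENT]
    random.shuffle(agents)                      # random update order resolves conflicts
    for r, c in agents:
        options = [(nr, nc) for nr, nc in neighbours(r, c)
                   if dist.get((nr, nc), 10**9) < dist[(r, c)] and grid[nr][nc] in (FREE, EXIT)]
        if options:
            nr, nc = min(options, key=dist.get)
            grid[r][c] = FREE
            if grid[nr][nc] == FREE:            # stepping onto the exit removes the agent
                grid[nr][nc] = AGENT

dist = distance_to_exit(grid)
for t in range(6):
    step(grid, dist)
    print("t =", t + 1, "\n" + "\n".join("".join(row) for row in grid))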
The manipulators used in industrial production usually lack the ability to perceive their environment. So that humans and robots can work in a shared workspace, the SIMERO system safeguards the robot's transfer motion using cameras. This camera system is checked for failures; errors in the image transmission and positioning errors of the cameras are considered.
At present, industrial robots have only a very limited perception of their environment. Humans who are present in the robot's workspace are therefore at risk. By classifying the possible robot motions, it can be shown that the motion most dangerous to a human in the workspace is the free transfer motion. The task considered here is therefore to carry out this transfer motion of a manipulator without colliding with dynamic obstacles such as humans. The SIMERO system is divided into the four main components image processing, robot modelling, collision detection, and path planning. These components are presented individually. The performance of the system and further improvements are demonstrated exemplarily in an experiment.
Using molecular dynamics simulation, we study nanoindentation in large samples of Cu–Zr glass at various temperatures between zero and the glass transition temperature. We find that besides the elastic modulus, the yielding point also strongly (by around 50%) decreases with increasing temperature; this behavior is in qualitative agreement with predictions of the cooperative shear model. Shear-transformation zones (STZs) show up in increasing sizes at low temperatures, leading to shear-band activity. Cluster analysis of the STZs exhibits a power-law behavior in the statistics of STZ sizes. We find strong plastic activity also during the unloading phase; it shows up both in the deactivation of previous plastic zones and the appearance of new zones, leading to the observation of pop-outs. The statistics of STZs occurring during unloading show that they operate in a similar nature as the STZs found during loading. For both cases, loading and unloading, we find the statistics of STZs to be related to directed percolation. Material hardness shows a weak strain-rate dependence, confirming previously reported experimental findings; the number of pop-ins is reduced at slower indentation rate. Analysis of the dependence of our simulation results on the quench rate applied during preparation of the glass shows only a minor effect on the properties of STZs.
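A naive way to check for a power-law trend in cluster-size statistics of the kind reported above is a straight-line fit to the log-log histogram; the sketch below uses synthetic Pareto-distributed stand-in data and plain numpy, and is far cruder than the cluster analysis performed in the paper.

import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in for measured STZ cluster sizes (density roughly ~ s^(-2.5))
sizes = rng.pareto(1.5, 5000) + 1.0

# logarithmically binned, normalized histogram of the size distribution
bins = np.logspace(0, np.log10(sizes.max()), 20)
counts, edges = np.histogram(sizes, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = counts > 0

# slope of log(count) versus log(size) estimates the power-law exponent
slope, intercept = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)
print("estimated exponent:", -slope)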
The quasienergy spectrum of a periodically driven quantum system is constructed from classical dynamics by means of the semiclassical initial value representation using coherent states. For the first time, this method is applied to explicitly time-dependent systems. For an anharmonic oscillator system with mixed chaotic and regular classical dynamics, the entire quantum spectrum (both regular and chaotic states) is reproduced semiclassically with surprising accuracy. In particular, the method is capable of accounting for the very small tunneling splittings.
For periodically driven systems, quantum tunneling between classical resonant stability islands in phase space separated by invariant KAM curves or chaotic regions manifests itself by oscillatory motion of wave packets centered on such an island, by multiplet splittings of the quasienergy spectrum, and by phase space localisation of the quasienergy states on symmetry-related flux tubes. Qualitatively different types of classical resonant island formation - due to discrete symmetries of the system - and their quantum implications are analysed by a (uniform) semiclassical theory. The results are illustrated by a numerical study of a driven non-harmonic oscillator.
The Filter-Diagonalization Method is applied to time-periodic Hamiltonians and used to find selectively the regular and chaotic quasienergies of a driven 2D rotor. The use of N cross-correlation probability amplitudes enables a selective calculation of the quasienergies from short-time propagation up to the time T(N). Compared to the propagation time T(1) which is required for resolving the quasienergy spectrum with the same accuracy from auto-correlation calculations, the cross-correlation time T(N) is shorter by the factor N, that is, T(1) = N T(N).
We present a system concept allowing humans to work safely in the same environment as a robot manipulator. Several cameras survey the common workspace. A look-up-table-based fusion algorithm is used to back-project directly from the image spaces of the cameras to the manipulator's configuration space. In the look-up tables both the camera calibration and the robot geometry are implicitly encoded. For experiments, a conventional 6-axis industrial manipulator is used. The workspace is surveyed by four grayscale cameras. Due to the limits of present robot controllers, the computationally expensive parts of the system are executed on a server PC that communicates with the robot controller via Ethernet.
This paper analyzes the problem of sensor-based collision detection for an industrial robotic manipulator. A method to perform collision tests based on images taken from several stationary cameras in the work cell is presented. The collision test works entirely on the images and does not construct a representation of the Cartesian space. It is shown how to perform a collision test for all possible robot configurations using only a single set of images taken simultaneously.
In search of new technologies for optimizing the performance and space requirements of electronic and optical micro-circuits, the concept of spoof surface plasmon polaritons (SSPPs) has come to the fore of research in recent years. Due to the ability of SSPPs to confine and guide the energy of electromagnetic waves in a subwavelength space below the diffraction limit, SSPPs deliver all the tools to implement integrated circuits with a high integration rate. However, in order to guide SSPPs in the terahertz frequency range, it is necessary to carefully design metasurfaces that allow one to manipulate the spatio-temporal and spectral properties of the SSPPs at will. Here, we propose a specifically designed cut-wire metasurface that sustains strongly confined SSPP modes at terahertz frequencies. As we show by numerical simulations and also prove in experimental measurements, the proposed metasurface can tightly guide SSPPs on straight and curved pathways while maintaining their subwavelength field confinement perpendicular to the surface. Furthermore, we investigate the dependence of the spatio-temporal and spectral properties of the SSPP modes on the width of the metasurface lanes that can be composed of one, two or three cut-wires in the transverse direction. Our investigations deliver new insights into downsizing effects of guiding structures for SSPPs.
Today, the domain of surgical robots lies in milling operations on bony structures. Since robots offer extreme precision and do not tire, their use is particularly suitable for lengthy and at the same time high-precision milling procedures in the region of the lateral skull base. For this reason, a method was developed that computes a suitable milling path from a geometric description of the implant and implements a force-controlled process monitoring of the milling procedure. Using a six-axis articulated robot, the investigations were carried out primarily on animal preparations and, for optimization, on temporal bone preparations.
In this paper, we compare the BERKOM globally accessible services project (GLASS) with the well-known World-Wide Web with respect to the ease of development, realization, and distribution of multimedia presentations. This comparison is based on the experiences we gained when implementing a gateway between GLASS and the World-Wide Web. Since both systems are shown to have obvious weaknesses, we are concluding this paper with a presentation of a better way to multimedia document engineering and distribution. This concept is based on a well-accepted approach to function-shipping in the Internet: the Java language, permitting for example a smooth integration of GLASS' MHEG objects and WWW HTML pages within one common environment.
Structured domains are characterized by the fact that there is an intrinsic dependency between certain key elements in the domain. Considering these dependencies leads to better performance of planning systems, and it is an important factor for determining the relevance of the cases stored in a case base. However, testing for cases that meet these dependencies decreases the performance of case-based planning, as other criteria also need to be considered when determining this relevance. We present a domain-independent architecture that explicitly represents these dependencies so that retrieving relevant cases is ensured without negatively affecting the performance of the case-based planning process.
The Filter-Diagonalization Method is used to find the broad and even overlapping resonances of a 1D Hamiltonian used before as a test model for new resonance theories and computational methods. It is found that the use of several complex-scaled cross-correlation probability amplitudes from short-time propagation enables the calculation of broad overlapping resonances, which cannot be resolved from the amplitude of a single complex-scaled autocorrelation calculation.
A straightforward formulation of a mathematical problem is mostly not adequate for resolution theorem proving. We present a method to optimize such formulations by exploiting the variability of first-order logic. The optimizing transformation is described as logic morphisms, whose operationalizations are tactics. The different behaviour of a resolution theorem prover for the source and target formulations is demonstrated by several examples. It is shown how tactical and resolution-style theorem proving can be combined.
We isolated an encysted ciliate from a geothermal field in Iceland. The morphological features of this isolate fit the descriptions of Dexiotricha colpidiopsis (Kahl, 1926) Jankowski, 1964 very well. These comprise body shape and size in vivo, the number of somatic kineties, and the positions of macronucleus and contractile vacuole. Using state-of-the-art taxonomic methods, the species is redescribed, including phylogenetic analyses of the small subunit ribosomal RNA (SSU rRNA) gene as molecular marker. In the phylogenetic analyses, D. colpidiopsis clusters with the three available SSU rRNA gene sequences of congeners, suggesting a monophyly of the genus Dexiotricha. Its closest relative in phylogenetic analyses is D. elliptica, which also shows a high morphological similarity. This is the first record of a Dexiotricha species from a hot spring, indicating a wide temperature tolerance of this species at least in the encysted state. The new findings on D. colpidiopsis are included in a brief revision of the scuticociliate genus Dexiotricha and an identification key to the species.
Keywords: Dexiotricha, hot spring, morphology, phylogeny, SSU rRNA gene