We present a novel approach to classification based on a tight coupling of instance-based learning and a genetic algorithm. In contrast to the usual instance-based learning setting, we do not rely on (parts of) the given training set as the basis of a nearest-neighbor classifier, but instead employ artificially generated instances as concept prototypes. The extremely hard problem of finding an appropriate set of concept prototypes is tackled by a genetic search procedure, with the classification accuracy on the given training set serving as the genetic fitness measure. Experiments with artificial datasets show that - owing to its ability to find concise and accurate concept descriptions that contain few, but typical, instances - this classification approach is remarkably robust against noise, atypical training instances, and irrelevant attributes. These favorable (theoretical) properties are corroborated on a number of hard real-world classification problems.
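The coupling described above can be sketched in a few lines. The following is a minimal illustration under our own assumptions, not the authors' implementation: candidate prototype sets are mutated at random, and each set's fitness is its nearest-prototype accuracy on the training set (all names and parameter values are hypothetical).

```python
import random

def nearest_prototype_label(x, prototypes):
    """Classify x by the label of its nearest prototype (squared Euclidean distance)."""
    best = min(prototypes, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return best[1]

def fitness(prototypes, train):
    """Genetic fitness: classification accuracy of the prototype set on the training set."""
    hits = sum(nearest_prototype_label(x, prototypes) == y for x, y in train)
    return hits / len(train)

def evolve_prototypes(train, labels, n_protos=2, pop_size=20, generations=30):
    """Evolve sets of artificial prototype instances by truncation selection and mutation."""
    dim = len(train[0][0])
    def random_individual():
        # an individual is a small set of (feature vector, class label) prototypes
        return [([random.uniform(0.0, 1.0) for _ in range(dim)], random.choice(labels))
                for _ in range(n_protos)]
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, train), reverse=True)
        survivors = pop[: pop_size // 2]
        # mutate survivors: jitter prototype coordinates with Gaussian noise
        children = [[([v + random.gauss(0.0, 0.1) for v in vec], lab) for vec, lab in ind]
                    for ind in survivors]
        pop = survivors + children
    return max(pop, key=lambda ind: fitness(ind, train))
```

On a tiny separable two-class dataset, the evolved pair of prototypes typically reproduces the class structure with one prototype per cluster.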
We present first steps towards fully automated deduction that merely requires the user to submit proof problems and pick up results. Essentially, this necessitates the automation of the crucial step in the use of a deduction system, namely choosing and configuring an appropriate search-guiding heuristic. Furthermore, we motivate why learning capabilities are pivotal for satisfactory performance. The infrastructure for automating both the selection of a heuristic and the integration of learning is provided in the form of an environment embedding the "core" deduction system. We have conducted a case study in connection with a deduction system based on condensed detachment. Our experiments with a fully automated deduction system, AutoCoDe, have produced remarkable results. We substantiate AutoCoDe's encouraging achievements with a comparison with the renowned theorem prover Otter. AutoCoDe outperforms Otter even when assuming very favorable conditions for Otter.
Evolving Combinators
(1996)
One of the many abilities that distinguish a mathematician from an automated deduction system is the ability to offer, based on intuition and experience, appropriate expressions to be substituted for existentially quantified variables so as to substantially simplify the problem at hand. We propose to simulate this ability with a technique called genetic programming for use in automated deduction. We apply this approach to problems of combinatory logic. Our experimental results show that the approach is viable and actually produces very promising results. A comparison with the renowned theorem prover Otter underlines the achievements. This work was supported by the Deutsche Forschungsgemeinschaft (DFG).
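For concreteness, combinator terms and their reduction - the objects a genetic-programming search over combinatory logic must manipulate - can be modeled compactly. The sketch below is our own illustration under a simple encoding (application as nested pairs), not the paper's system; it performs normal-order reduction of the S and K combinators.

```python
def step(t):
    """Perform one leftmost (normal-order) reduction step; return (term, reduced?)."""
    if not isinstance(t, tuple):
        return t, False
    f, x = t
    if isinstance(f, tuple) and f[0] == 'K':
        # ((K, a), b) -> a
        return f[1], True
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
        # (((S, a), b), c) -> ((a, c), (b, c))
        a, b, c = f[0][1], f[1], x
        return ((a, c), (b, c)), True
    f2, done = step(f)
    if done:
        return (f2, x), True
    x2, done = step(x)
    return (f, x2), done

def normalize(t, limit=100):
    """Reduce until normal form or step limit (reduction need not terminate)."""
    for _ in range(limit):
        t, done = step(t)
        if not done:
            return t
    return t
```

For example, the identity combinator I = S K K applied to a variable reduces back to that variable, which is the kind of simplification an evolved expression is rewarded for producing.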
We present a method for making use of past proof experience called flexible re-enactment (FR). FR is actually a search-guiding heuristic that uses past proof experience to create a search bias. Given a proof P of a previously solved problem that is assumed to be similar to the current problem A, FR searches for P and in the "neighborhood" of P in order to find a proof of A. This heuristic use of past experience has certain advantages that make FR quite profitable and give it a wide range of applicability. Experimental studies substantiate and illustrate this claim. This work was supported by the Deutsche Forschungsgemeinschaft (DFG).
A method will be presented for efficiently handling associativity and commutativity (AC) in implementations of (equational) theorem provers without incorporating AC as an underlying theory. The key to substantial efficiency gains lies in a more suitable representation of permutation-equations (such as f(x,f(y,z)) = f(y,f(z,x)), for instance). By representing these permutation-equations through permutations in the mathematical sense (i.e., bijective functions π: {1,..,n} → {1,..,n}), and by applying adapted and specialized inference rules, we can cope more appropriately with the fact that permutation-equations play a particular role. Moreover, a number of restrictions concerning the application and generation of permutation-equations can be found that would not be possible to this extent when treating permutation-equations just like any other equation. Thus, further improvements in efficiency can be achieved.
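To illustrate the underlying idea (the paper's actual inference rules are not reproduced here): the equation f(x,f(y,z)) = f(y,f(z,x)) permutes the argument positions (x,y,z) to (y,z,x), i.e. it corresponds to the cycle sending position 0→1→2→0. Once such equations are stored as permutation tuples, composition, inversion, and orbit computation become cheap arithmetic, which is what the specialized treatment exploits. A minimal sketch under this assumed encoding:

```python
def compose(p, q):
    """Composition p∘q of permutations given as tuples: (p∘q)[i] = p[q[i]]."""
    return tuple(p[j] for j in q)

def inverse(p):
    """Inverse permutation: inverse(p)[p[i]] = i."""
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

def orbit(p):
    """All powers of p (the cyclic group it generates) - the finite set of
    permutation-equations derivable from a single one by self-application."""
    seen, cur = set(), p
    while cur not in seen:
        seen.add(cur)
        cur = compose(cur, p)
    return seen
```

Here the 3-cycle (1, 2, 0) generates an orbit of size 3, so a single stored permutation compactly stands for all its equational consequences.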
We are going to present two methods that make it possible to exploit previous experience in the area of automated deduction. The first method adapts (learns) the parameters of a heuristic employed for controlling the application of inference rules, so as to find a known proof with as little redundant search effort as possible. Adaptation is accomplished by a genetic algorithm. A heuristic learned that way can then be profitably used to solve similar problems. The second method attempts to re-enact a known proof in a flexible manner in order to solve an unknown problem whose proof is believed to lie in (close) vicinity. The experimental results obtained with an equational theorem prover show that these methods not only allow for impressive speed-ups, but also make it possible to handle problems that were out of reach before.
We present an approach to automating the selection of search-guiding heuristics that control the search conducted by a problem solver. The approach centers on representing problems with feature vectors, that is, vectors of numerical values. Thus, similarity between problems can be determined by using a distance measure on feature vectors. Given a database of problems, each problem being associated with the heuristic that was used to solve it, heuristics to be employed to solve a novel problem are suggested in correspondence with the similarity between the novel problem and problems of the database. Our approach is strongly connected with instance-based learning and nearest-neighbor classification and therefore possesses incremental learning capabilities. In experimental studies it has proven to be a viable tool for achieving the final and crucial missing piece of automation of problem solving - namely selecting an appropriate search-guiding heuristic - in a flexible way. This work was supported by the Deutsche Forschungsgemeinschaft (DFG).
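A minimal sketch of the suggested mechanism (names, and the choice of Euclidean distance, are our own assumptions; the abstract does not fix a concrete metric): rank previously solved problems by feature-vector distance to the new problem and propose the heuristics attached to the closest ones.

```python
import math

def distance(u, v):
    """Euclidean distance between two feature vectors (an assumed metric)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def suggest_heuristics(problem_features, database, k=2):
    """Suggest up to k distinct heuristics, ordered by similarity of the new
    problem to solved problems. database: list of (feature_vector, heuristic)."""
    ranked = sorted(database, key=lambda entry: distance(entry[0], problem_features))
    seen, suggestions = set(), []
    for _, heuristic in ranked:
        if heuristic not in seen:
            seen.add(heuristic)
            suggestions.append(heuristic)
        if len(suggestions) == k:
            break
    return suggestions
```

Because the database is just a list of labeled instances, adding each newly solved problem to it yields the incremental, nearest-neighbor-style learning the abstract describes.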
We present a method for learning heuristics employed by an automated prover to control its inference machine. The hub of the method is the adaptation of the parameters of a heuristic. Adaptation is accomplished by a genetic algorithm. The necessary guidance during the learning process is provided by a proof problem and a proof of it found in the past. The objective of learning consists in finding a parameter configuration that avoids redundant effort w.r.t. this problem and the particular proof of it. A heuristic learned (adapted) this way can then be applied profitably when searching for a proof of a similar problem. So, our method can be used to train a proof heuristic for a class of similar problems. A number of experiments (with an automated prover for purely equational logic) show that adapted heuristics are not only able to enormously speed up the search for the proof learned during adaptation. They also reduce redundancies in the search for proofs of similar theorems. This not only results in finding proofs faster, but also enables the prover to prove theorems it could not handle before.
Problems stemming from the study of logic calculi in connection with an inference rule called "condensed detachment" are widely acknowledged as prominent test sets for automated deduction systems and their search-guiding heuristics. It is in the light of these problems that we demonstrate, with numerous experiments, the power of heuristics that make use of past proof experience. We present two such heuristics. The first heuristic attempts to re-enact a proof of a proof problem found in the past in a flexible way in order to find a proof of a similar problem. The second heuristic employs "features" in connection with past proof experience to prune the search space. Both these heuristics not only allow for substantial speed-ups, but also make it possible to prove problems that were out of reach when using so-called basic heuristics. Moreover, a combination of these two heuristics can further increase performance. We compare our results with the results the creators of Otter obtained with this renowned theorem prover and in this way substantiate our achievements.
In this report we present a case study of employing goal-oriented heuristics when proving equational theorems with the (unfailing) Knuth-Bendix completion procedure. The theorems are taken from the domain of lattice-ordered groups. It will be demonstrated that goal-oriented (heuristic) criteria for selecting the next critical pair can in many cases significantly reduce the search effort and hence considerably increase the performance of the proving system. The heuristic, goal-oriented criteria are on the one hand based on so-called "measures" measuring occurrences and nesting of function symbols, and on the other hand based on matching subterms. We also deal with the property of goal-oriented heuristics to be particularly helpful in certain stages of a proof. This fact can be addressed by using them in a framework for distributed (equational) theorem proving, namely the "teamwork method".
We present a general framework for developing search heuristics for automated theorem provers. This framework allows for the construction of heuristics that are on the one hand able to replay (parts of) a given proof found in the past, but are on the other hand flexible enough to deviate from the given proof path in order to solve similar proof problems. We substantiate the abstract framework by presenting three distinct techniques for learning appropriate search heuristics based on so-called features. We demonstrate the usefulness of these techniques in the area of equational deduction. Comparisons with the renowned theorem prover Otter validate the applicability and strength of our approach.
The structural diversity of the pentaphosphaferrocenes could be extended by using trimethylsilyl-substituted CpR ligands. Besides the singly and doubly Tms-substituted Cp- and Cp= ligands, the mixed-substituted Cp-' ligand is also employed. The cothermolysis of [CpRFe(η5-P5)] and [CpRCo(CO)2] yields a series of novel as well as known cobalt and iron multinuclear clusters with unsubstituted Pn ligands. The compounds [{Cp=Co}4P10], [{Cp=Co}4P4], and [{Cp-Co}4P4] also served as phosphorus sources in selected reactions, with an astonishing range of products.
Hajós' conjecture asserts that a simple Eulerian graph on n vertices can be decomposed into at most [(n-1)/2] cycles. The conjecture is only proved for graph classes in which every element contains vertices of degree 2 or 4. We develop new techniques to construct cycle decompositions. They work on the common neighborhood of two degree-6 vertices. With these techniques, we find structures that cannot occur in a minimal counterexample to Hajós' conjecture and verify the conjecture for Eulerian graphs of pathwidth at most 6. This implies that these graphs satisfy the small cycle double cover conjecture.
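The starting point of any such argument is the classical fact that the edge set of a graph in which every vertex has even degree decomposes into cycles; the open question is only how few cycles suffice. A greedy decomposition (our own illustration, unrelated to the paper's refined techniques on degree-6 neighborhoods) can be computed by walking along unused edges until the walk closes:

```python
def cycle_decomposition(adj):
    """Greedily split the edge set of a graph with all degrees even into cycles.
    adj: dict vertex -> set of neighbours (undirected)."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    cycles = []
    for start in adj:
        while adj[start]:
            # walk from `start`, consuming edges; even degrees guarantee the
            # walk can only get stuck back at `start`, closing a cycle
            cycle, v = [start], start
            while True:
                w = next(iter(adj[v]))
                adj[v].discard(w)
                adj[w].discard(v)
                v = w
                if v == start:
                    break
                cycle.append(v)
            cycles.append(cycle)
    return cycles
```

On two triangles glued at a vertex (n = 5, six edges), this yields 2 cycles, matching the Hajós bound of at most [(5-1)/2] = 2; the paper's contribution is showing such bounds hold in much harder configurations.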
We present a cooperation concept for automated theorem provers that is based on a periodic interchange of selected results between several incarnations of a prover. These incarnations differ from each other in the search heuristic they employ for guiding the search of the prover. Depending on the strengths and weaknesses of these heuristics, different knowledge and different communication structures are used for selecting the results to interchange. Our concept is easy to implement and can easily be integrated into already existing theorem provers. Moreover, the resulting cooperation allows the distributed system to find proofs much faster than single heuristics working alone. We substantiate these claims by two case studies: experiments with the DiCoDe system, which is based on the condensed detachment rule, and experiments with the SPASS system, a prover for first-order logic with equality based on the superposition calculus. Both case studies show the improvements achieved by our cooperation concept.
We present a methodology for coupling several saturation-based theorem provers (running on different computers). The methodology is well-suited for realizing cooperation between different incarnations of one basic prover. Moreover, heterogeneous provers - which differ from each other in the calculus and in the heuristic they employ - can also be coupled. Cooperation between the different provers is achieved by periodically interchanging clauses which are selected by so-called referees. We present theoretical results regarding the completeness of the system of cooperating provers as well as concrete heuristics for designing referees. Furthermore, we report on two experimental studies performed with homogeneous and heterogeneous provers in the areas of superposition and unfailing completion. The results reveal that the occurring synergetic effects lead to a significant improvement of performance.
We investigate the usage of so-called inference rights. We point out the problems arising from the inflexibility of existing approaches to heuristically controlling the search of automated deduction systems, and we propose the application of inference rights, which are well-suited for controlling the search more flexibly. Moreover, inference rights allow for a mechanism of "partial forgetting" of facts that is not realizable in most control approaches. We study theoretical foundations of inference rights as well as the integration of inference rights into already existing inference systems. Furthermore, we present possibilities to control such modified inference systems in order to gain efficiency. Finally, we report on experimental results obtained in the area of condensed detachment. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).
Top-down and bottom-up theorem proving approaches each have specific advantages and disadvantages. Bottom-up provers profit from strong redundancy control but suffer from a lack of goal-orientation, whereas top-down provers are goal-oriented but have weak calculi when their proof lengths are considered. In order to integrate both approaches, our method is to achieve cooperation between a top-down and a bottom-up prover: the top-down prover generates subgoal clauses, which are then processed by a bottom-up prover. We discuss theoretical aspects of this methodology and introduce techniques for relevancy-based filtering of generated subgoal clauses. Experiments with a model elimination prover and a superposition-based prover reveal the high potential of our cooperation approach. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).
We examine an approach to demand-driven cooperative theorem proving. We briefly point out the problems arising from the use of common success-driven cooperation methods, and we propose the application of our approach of requirement-based cooperative theorem proving. This approach allows for a better orientation towards the current needs of provers in comparison with conventional cooperation concepts. We introduce an abstract framework for requirement-based cooperation and describe two instantiations of it: requirement-based exchange of facts, and sub-problem division and transfer via requests. Finally, we report on experimental studies conducted in the areas of superposition and unfailing completion. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).
We examine different possibilities of coupling saturation-based theorem provers by exchanging positive/negative information. We discuss which positive or negative information is well-suited for cooperative theorem proving and show in an abstract way how this information can be used. Based on this study, we introduce a basic model for cooperative theorem proving. We present theoretical results regarding the exchange of positive/negative information as well as practical methods and heuristics that allow for a gain of efficiency in comparison with sequential provers. Finally, we report on experimental studies conducted in the areas of condensed detachment, unfailing completion, and superposition. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).
Winery by-products arise in high amounts during winemaking processes. Hence, recovery alternatives are of great interest. In this study, the effects of extracts from winery by-products (Vitis vinifera L. cv. Riesling) on mitochondrial functions in human hepatocellular carcinoma (HepG2) cells were examined. Polyphenolic profiles of pomace (PE), stem (SE), vine leaf (VLE), and vine shoot extracts (VSE) were characterized by HPLC-UV/Vis-ESI-MS/MS. The extracts induced dose-dependent cytotoxic effects (PE > SE > VLE > VSE). VSE showed protective effects regarding modulation of tert-butyl hydroperoxide (TBH)-induced intracellular reactive oxygen species (ROS) levels. PE, SE and VLE increased the mitochondrial membrane potential (MMP), whereas VSE decreased it owing to mildly impaired mitochondrial respiration. Cells may try to compensate for reduced respiratory chain complex activities by increasing the mitochondrial mass, as indicated by enhanced citrate synthase activity and mRNA expression levels after VSE incubation. Thus, winery by-products represent interesting sources of bioactive compounds that exert positive or negative effects on mitochondrial functions.
The CBR team of the LISA is involved in several applied research projects based on the CBR paradigm. These applications use adaptation to solve the specific problems they face. We have thus gathered some experience about how adaptation processes can be expressed and formalized. The literature on the subject is quite extensive but demonstrates a lack of formalism. At best, there exist some classifications of different types of adaptation.
We present the adaptation process in a CBR application for decision support in the domain of industrial supervision. Our approach uses explanations to approximate relations between a problem description and its solution, and the adaptation process is guided by these explanations (a more detailed presentation is given in [4]).
The analysis of benthic bacterial community structure has emerged as a powerful alternative to traditional microscopy-based taxonomic approaches to monitor aquaculture disturbance in coastal environments. However, local bacterial diversity and community composition vary with season, biogeographic region, hydrology, sediment texture, and aquafarm-specific parameters. Therefore, without an understanding of the inherent variation contained within community complexes, bacterial diversity surveys conducted at individual farms, countries, or specific seasons may not be able to infer global universal pictures of bacterial community diversity and composition at different degrees of aquaculture disturbance. We have analyzed environmental DNA (eDNA) metabarcodes (V3–V4 region of the hypervariable SSU rRNA gene) of 138 samples of different farms located in different major salmon-producing countries. For these samples, we identified universal bacterial core taxa that indicate high, moderate, and low aquaculture impact, regardless of sampling season, sampled country, seafloor substrate type, or local farming and environmental conditions. We also discuss bacterial taxon groups that are specific for individual local conditions. We then link the metabolic properties of the identified bacterial taxon groups to benthic processes, which provides a better understanding of universal benthic ecosystem function(ing) of coastal aquaculture sites. Our results may further guide the continuing development of a practical and generic bacterial eDNA-based environmental monitoring approach.
This article investigates a network interdiction problem on a tree network: given a subset of nodes chosen as facilities, an interdictor may dissect the network by removing a size-constrained set of edges, striving to degrade the established facilities as much as possible. Here, we consider a reachability objective function, which is closely related to the covering objective function: the interdictor aims to minimize the number of customers that are still connected to any facility after interdiction. For the covering objective on general graphs, this problem is known to be NP-complete (Fröhlich and Ruzika, "On the hardness of covering-interdiction problems", Theor. Comput. Sci., 2021). In contrast to this, we propose a polynomial-time algorithm to solve the problem on trees. The algorithm is based on dynamic programming and reveals the relation of this location-interdiction problem to knapsack-type problems. However, the input data for the dynamic program must be elaborately generated and relies on the theoretical results presented in this article. As a result, trees are the first known graph class that admits a polynomial-time algorithm for edge interdiction problems in the context of facility location planning.
Covering edges in networks
(2019)
In this paper we consider the covering problem on a network G=(V,E) with edge demands. The task is to cover a subset J⊆E of the edges with a minimum number of facilities within a predefined coverage radius. We focus on both the nodal and the absolute version of this problem. In the latter, facilities may be placed everywhere in the network. While there already exist polynomial-time algorithms to solve the problem on trees, we establish a finite dominating set (i.e., a finite subset of points provably containing an optimal solution) for the absolute version in general graphs. Complexity and approximability results are given, and a greedy strategy is proved to be a (1+ln(|J|))-approximate algorithm. Finally, the different approaches are compared in a computational study.
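The greedy strategy behind such a (1+ln(|J|)) guarantee is the classical set-cover greedy: repeatedly open the candidate facility that covers the most not-yet-covered demand edges. A sketch under our own assumptions (candidate locations and their coverage sets are taken as precomputed; names are illustrative):

```python
def greedy_cover(demand_edges, candidate_sites):
    """Greedy covering: repeatedly open the site covering most uncovered edges.
    candidate_sites: dict site -> set of demand edges within the coverage radius."""
    uncovered = set(demand_edges)
    chosen = []
    while uncovered:
        # pick the site with maximum marginal coverage
        site = max(candidate_sites, key=lambda s: len(candidate_sites[s] & uncovered))
        covered_now = candidate_sites[site] & uncovered
        if not covered_now:
            raise ValueError("some demand edges cannot be covered by any site")
        chosen.append(site)
        uncovered -= covered_now
    return chosen
```

Standard set-cover analysis shows each iteration covers at least a 1/OPT fraction of what remains, which yields the logarithmic approximation factor in |J|.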
This paper is concerned with the development of a self-adaptive spatial discretization for PDEs using a wavelet basis. A Petrov-Galerkin method [LPT91] is used to reduce the determination of the unknown at the new time step to the computation of scalar products. These have to be discretized in an appropriate way. We investigate this point in detail and devise an algorithm with a linear operation count with respect to the number of unknowns. It is tested with spline wavelets and Meyer wavelets; we retain the latter for their better localisation at finite precision. The algorithm is then applied to the one-dimensional thermodiffusive equations. We show that the adaptation strategy needs to be modified in order to take into account the particular and very strong nonlinearity of this problem. Finally, a supplementary Fourier discretization permits the computation of two-dimensional flame fronts.
Validation of a Farmer chamber for the dosimetry of short-range electron radiation
(2020)
In this work, the effect of the reference point shift [19, 20, 21, 22, 30, 31, 32, 36] of the 2016 draft of DIN 6800-2 on electron radiation was investigated, as well as the feasibility of absolute electron dosimetry with Farmer chambers. Furthermore, the kE,R factor for Farmer chambers was determined. Four plane-parallel chambers of different designs (Roos chamber, Markus chamber, and Advanced Markus chamber from PTW; PPC40 chamber from IBA) and three Farmer chambers of identical design (PTW) were used. The measurements were performed with electron energies from 4 MeV to 15 MeV.
The reference point shift (Δz) is clearly visible in the recorded depth-dose curves. Depending on the shift, the curves are displaced in parallel to the left (+Δz) or to the right (-Δz). The entrance window is correspondingly shortened or lengthened. The resulting changes in the half-value depths of the ion dose in turn affect the reference depths zref required for absolute dosimetry. This change in measurement depth, however, is almost completely offset by the new formulas for calculating the half-value depth of the absorbed dose to water. The new calculation rules for the correction factor accounting for the influence of the beam quality of the electron radiation, on the other hand, have a larger effect on the dose values.
The kE,R factor of the Farmer chambers was determined by a dose comparison with the Roos chamber. For each energy, the mean dose of all three Farmer chambers was compared with the dose of the Roos chamber, and the difference was taken as the kE,R factor. In addition, the behavior of the Farmer chamber in absolute dosimetry with electron radiation was examined. High stability was evident both in the depth-dose curves and in the absolute dosimetry, so that the PTW Farmer chamber is classified as suitable for absolute dosimetry with electron radiation < 10 MeV.
In this work, DFT calculations were employed for the mechanistic understanding and rational development of homogeneously catalyzed reactions.
In the first project, DFT calculations enabled the identification of more efficient catalyst systems for protodecarboxylation reactions and decarboxylative cross-couplings through rational catalyst design. For this purpose, the decarboxylation of 2- and 4-fluorobenzoic acid was investigated with DFT calculations. Initially, the calculations predicted no significantly increased reaction rates for catalyst systems consisting of copper(I) and various 4,7-disubstituted 1,10-phenanthroline ligands. Further calculations, however, predicted strongly increased efficiency for silver-based catalysts in the decarboxylation of ortho-substituted benzoic acids. Indeed, a catalyst system consisting of AgOAc and K2CO3 in NMP was subsequently developed for these carboxylic acids, which enables protodecarboxylation already at 120 °C, 50 °C lower than the copper-based system.
In the Gooßen group, these findings were further transferred to the decarboxylative cross-coupling. An Ag/Pd-based catalyst system was developed for biaryl synthesis starting from benzoic acids and aryl triflates at reaction temperatures of only 130 °C.
Subsequently, DFT calculations made it possible to elucidate the reaction mechanism of the decarboxylative cross-coupling and to make predictions for a more efficient Cu/Pd-based catalyst system. After experimental observations made clear that the decarboxylation step is not necessarily rate-determining, the complete catalytic cycle of the decarboxylative cross-coupling was studied in detail with DFT calculations. Depending on the benzoate, either the decarboxylation or the transmetalation was identified as rate-determining. Since the transmetalation first requires the formation of a bimetallic Cu-Pd adduct, it was concluded that the use of bridging, bidentate ligands should favor the reaction. Indeed, by employing a P,N ligand, a Cu/Pd-catalyzed decarboxylative cross-coupling of aromatic carboxylates with aryl triflates at only 100 °C was developed, corresponding to a reduction of the reaction temperature by 50 °C.
Future developments of the Cu/Pd-catalyzed decarboxylative cross-coupling aim at overcoming the restriction to ortho-substituted benzoates and at replacing the expensive aryl triflates with cheaper aryl halides. Work on this is already in progress.
In the second project, the reaction mechanism of the ruthenium-catalyzed hydroamidation of terminal alkynes was investigated in detail. After isotope labeling experiments, determinations of kinetic isotope effects by in situ IR spectroscopy, and various in situ NMR and ESI-MS experiments had ruled out three of five potential reaction mechanisms, the experimental results allowed narrowing the possibilities down to one of the remaining catalytic cycles.
DFT calculations then confirmed that the postulated intermediates are stable minima. The occurrence of a Ru-hydride-vinylidene species explained why the hydroamidation is restricted to terminal alkynes. The nucleophilic attack of the amide ligand at the vinylidene carbon atom explains the anti-Markovnikov selectivity of the reaction. After Gooßen and Koley et al. clarified the influence of the ligands on the stereoselectivity of the hydroamidation in a further study, the foundation is now laid for the future rational development of more efficient hydroamidation catalysts.
In the third project, insights into the reaction mechanism of the palladium-catalyzed isomerization of allyl esters to enol esters were obtained, along with indications of the catalytically active species of the reaction. First, using the homodinuclear palladium catalyst [Pd(μ-Br)(PtBu3)]2, an efficient synthesis was developed for preparing a broad range of diverse enol esters. Enol esters branched in the 1-position then served as substrates for enantioselective hydrogenations for the synthesis of enantiomerically pure chiral esters.
Based on experimental observations suggesting that a palladium hydride complex constitutes the catalytically active species, the formation of various palladium hydride species starting from the homodinuclear palladium catalyst [Pd(μ-Br)(PtBu3)]2 was investigated with DFT calculations. The palladium hydride complex [Pd(Br)(H)(PtBu3)] was thereby identified as the presumably catalytically active species. Owing to its high reactivity, only an oxidized dimer and a trapping product with excess tri-tert-butylphosphine could be detected in in situ NMR experiments.
In future work, kinetic studies are to determine the reaction order of the isomerization. This should help clarify whether a monometallic or a bimetallic complex actually constitutes the catalytically active species.
Human interferences within the Earth System are accelerating, leading to major impacts and feedback that we are just beginning to understand. Summarized under the term 'global change', these impacts put human and natural systems under ever-increasing stress and pose a threat to human well-being, particularly in the Global South. Global governance bodies have acknowledged that decisive measures have to be taken to mitigate the causes and to adapt to these new conditions. Nevertheless, neither current international nor national pledges and measures reach the effectiveness needed to sustain global human well-being under accelerating global change. On the contrary, competing interests are not only paralyzing the international debate but also playing an increasingly important role in debates over social fragmentation and societal polarization on national and local scales. This interconnectedness of the natural and the social system and its impact on social phenomena such as cooperation and conflicts need to be understood better, to strengthen social resilience to future disturbances and drive societal transformation towards socially desirable futures while at the same time avoiding path dependencies along continuing colonial continuities. As a case example, this thesis provides insights into southwestern Amazonia, where the intertwined challenges of human contribution to global change in all its dimensions, as well as human adaptation and mitigation attempts in response to the imposed changes, become exaggeratedly visible. As such, southwestern Amazonia with its high social, economic, and biological diversity is a good example to study the deep interrelations of humans with nature and the consequences these relations have on social cohesion amid an ecological crisis.
Therefore, this thesis takes a social-ecological perspective on conflicts and social cohesion. Social cohesion is understood in a wider sense as "how members of a society, group, or organization relate to each other and work together" (Dany and Dijkzeul 2022, p. 12). Particularly in contexts of violence, conflict, and fragility, little research has addressed the role of social cohesion in governing public goods and building resilience for (future) environmental crises. At the same time, governments and international decision-makers increasingly acknowledge the role of social cohesion (comprising both relations between social groups and between groups and the state) in building resilience against crises. Given the uncertainty in how natural and social systems react to certain disturbances and shocks, the governance of potential tipping points is an additional challenge for the governance of social-ecological systems (SES). Therefore, this thesis asks: "How does governance shape pathways towards cooperative or conflictive social-ecological tipping points?" The results of this thesis can be divided into theoretical/conceptual and empirical results. An initial systematic literature review on the nexus of climate change, land use, and conflict revealed an extensive body of literature on direct effects, for example drought-related land use conflicts, with diverging opinions on whether or not global warming increases the risk of conflict. Adding the perspective of indirect implications, we further identified research gaps, as well as a lack of policy recognition, concerning the negative externalities of climate mitigation and adaptation measures on land use and conflict.
On a conceptual note, taking a social cohesion perspective in the analysis helps shift the focus from a problem-oriented perspective of vulnerability to global change and potential resulting conflicts to a solution-oriented perspective of enhancing agency and resilience to strengthen collaboration. The developed Social Cohesion Conceptual Model and the related analytical framework facilitate the incorporation of societal dynamics into the analysis of SES dynamics. In addition, the elaborated Tipping Multiverse Framework took up this idea and enhanced it with a more detailed perspective on the soil ecosystem and the household livelihood system to identify entry points to potential social-ecological tipping cascades. As such, the Tipping Multiverse Framework offers two matrices that can advance the understanding of regional SES by identifying core processes, functioning, and links in each TE, and thus provide entry points for identifying potential tipping cascades across SES sub-systems. The exemplary application of these two frameworks to southwestern Amazonia shows the analytical potential of both in advancing the understanding of social-ecological tipping points and potential tipping cascades in a regional SES.
On an empirical note, zooming in on questions of governance by applying a political ecology lens to human security, we find that 'glocal' resource governance often reproduces, amplifies, or creates power imbalances and divisions on and between different scales. Our results show that the winners of resource extraction are mostly found at the national and international scales, while local communities receive little benefit and are left vulnerable to externalities. Hence, our study contributes to existing research by stressing the importance of one underlying question: "governance by whom and for whom?" This question raised the demand to understand the underlying dynamics of resource governance and the resulting conflicts. We therefore analyzed how (environmental) institutions influence the major drivers of social-ecological conflicts over land in and around three protected areas: Tambopata (Peru), the Extractive Reserve Chico Mendes (Brazil), and Manuripi (Bolivia). We found that state institutions, in particular, affect key conflict drivers as follows: overlapping responsibilities of governance institutions and limited enforcement of regulations protecting and empowering rural and disadvantaged populations enable external actors to (illegally) access and control resources in the protected areas. Consequently, the already fragile social contract between the residents of the protected areas and their surroundings and the central state is further weakened by the expanding influence of criminal organizations that oppose the state's authority. For state institutions to avoid aggravating these conflict drivers and instead manage them better or even contribute to conflict prevention and mitigation, a transformation from reactive to reflexive institutions and the development of new reflexive governance competencies is needed.
This need for reflexive governance becomes particularly visible when sudden disturbances or shocks impact the SES. Our analysis of the impacts of the COVID-19 pandemic on the interconnections of land use change, ecosystem services, human agency, conflict, and cooperation shows that the pandemic has had a severe influence on the human security of marginalized social groups in southwestern Amazonia. Civil society action has been an essential strategy in the fight against COVID-19, not just in the health sector but also in the economic, political, social, and cultural realms. However, our research also showed that the pandemic has consolidated and partly renewed criminal structures, while the already weak state has fallen further behind due to the additional tasks of managing the pandemic and other disasters such as floods.
In conclusion, the reflexivity of governance is crucial for fostering cooperation and preventing conflicts in social-ecological systems. By not only reacting to changes already occurring but also reflecting on potential future changes, governance can steer transformation pathways away from detrimental and towards life-sustaining trajectories. It can do so by exercising agency across scales to avoid crossing detrimental social-ecological tipping points and instead trigger life-sustaining tipping points that contribute to global social-ecological well-being.
In 2020, the pandemic caught art education unprepared as well. The initial emergency remote teaching was followed by more elaborate concepts. The commitment of the subject community was immense, and by all appearances it has changed the discipline permanently.
The publication examines subject-specific experiences from the pandemic period, contextualizes them, and develops perspectives from them. It is concerned not only with the contrast between in-person and distance formats but also with more fundamental challenges facing the discipline. The 28 authors, from schools, universities, museums, and other fields, argue and speculate in different ways, at times contradicting one another. Taken together, this yields a first picture of what an art education after the pandemic might look like.
Studies on the influence of berry fruits on human topoisomerases and DNA integrity
(2008)
The aim of this work was to contribute to elucidating how polyphenolic substances and polyphenol-rich extracts affect human topoisomerases and DNA integrity. This dissertation shows that the reported DNA-damaging effect of delphinidin, as well as the absence of an antioxidative effect of this anthocyanidin under cell culture conditions, can partly be attributed to the H2O2 formed. Overall, the data show that all polyphenols and polyphenol-rich extracts examined in this work, with the exception of ellagic acid, generate hydrogen peroxide in cell culture medium. Delphinidin modulates the DNA strand-breaking effect of the topoisomerase poisons camptothecin, etoposide, and doxorubicin, which are used in chemotherapy, in HT29 cells. Based on these results, the following mechanism of action can be postulated for delphinidin: delphinidin acts as a catalytic inhibitor of topoisomerase activity, inactivating the enzymes before they bind to DNA and induce a strand break. The anthocyanin-rich extracts examined in this work potently inhibit the activity of human topoisomerases. Furthermore, they do not stabilize the covalent enzyme-DNA intermediate in HT29 cells, so it can be assumed that the extract constituents act as purely catalytic topoisomerase inhibitors. The extract constituents showed no binding to the DNA grooves. Moreover, the extracts induce neither DNA strand breaks nor a cell cycle arrest in the G2/M phase, so the influence of the polyphenolic extract constituents on DNA integrity can be classified as minor. Comparable to delphinidin, the extracts also modulate the DNA strand-breaking effect of topoisomerase poisons. Furthermore, within the scope of this dissertation it was determined that the ellagitannins tested, as well as ellagic acid, inhibit the activity of human topoisomerases in a cell-free system.
Not only the individual substances but also an oak wood extract act as potent topoisomerase inhibitors. These results suggest that the ellagitannins do not stabilize the topoisomerase-DNA intermediate. Furthermore, a modulation of the strand-breaking effect of camptothecin was determined for castalagin. This result suggests that castalagin, as a catalytic inhibitor, inhibits the topoisomerase before the enzyme can bind to DNA and induce a strand break in the sugar-phosphate backbone.
Cascades of aerobic oxidation and radical functionalization for the construction of cyclic ethers
(2013)
The aerobic cobalt-catalyzed oxidation of alkenols is a two-stage reaction sequence that can be used for the stereoselective construction of functionalized tetrahydrofurans. In the first, catalytic part of the reaction, 4-pentenols are converted with high diastereoselectivity into nucleophilic tetrahydrofurylmethyl radicals using atmospheric oxygen and β-diketonate-derived cobalt(II) complexes. In the second part of the reaction, these radicals can be trapped with a range of reagents: radical addition to acceptor-substituted olefins followed by H-atom transfer affords side-chain-functionalized tetrahydrofurans in yields of up to 67%, with the reductively terminated tetrahydrofurans formed by direct H-atom transfer arising as by-products. From the product ratios, relative rate factors were determined that confirm the radical character of the intermediate. In contrast to classical radical reactions, the addition to alkynes also proceeds fast enough to furnish tetrahydrofurans with unsaturated side chains in synthetically useful yields. This property was exploited to construct a diastereomerically pure bis(tetrahydrofuran) in a cascade of two cyclizations.
Through radical substitution on disulfides, alkylsulfanyl-functionalized tetrahydrofurans can be constructed in cobalt-catalyzed oxidations without the resulting thioethers themselves being oxidized to sulfoxides and sulfones. The introduction of the methylsulfanyl group competes with direct H-atom transfer, which opened up the possibility of determining the rate constant for methylsulfanyl transfer from a series of competition-kinetics experiments. The method enabled the simplification and improvement of the synthesis of a drug derivative as well as the preparation of a 2,6-trans-configured tetrahydropyran.
Building on this, a general method for the construction of tetrahydropyrans from hexenols was developed that retains the high diastereoselectivity observed in the cyclization of pentenols. A stereochemical model for the cyclization was derived that explains the observed selectivities and allows predictions for substrates not yet tested. The synthesis of non-cyclic ethers from alcohols and alkenes demonstrated that the mechanism of the aerobic cobalt-catalyzed oxidation is applicable beyond the synthesis of tetrahydrofurans and tetrahydropyrans and is available for the exploration of further transformations under modified reaction conditions.
The present thesis reports on studies of atomically precise, size-selected tantalum
cluster ions \(Ta_n^±\) under cryogenic conditions in an FT-ICR mass spectrometer with respect to surface-adsorbate interactions at the fundamental level, focusing on \(N_2\) and \(H_2\) adsorption and activation. The wealth of results presented here stems from systematic studies that have revealed valuable kinetic, spectroscopic, and quantum chemical information, which together paint a comprehensive picture of the elementary adsorption steps and mechanisms in detail.
The \(N_2\) and \(H_2\) adsorption processes on \(Ta_n^+\) clusters exhibit dependencies on cluster size n and on adsorbate load. For \(N_2\) adsorption, there is evidence of spontaneous \(N_2\) activation and cleavage by \(Ta_2^+\) - \(Ta_4^+\), while it appears to be suppressed by \(Ta_5^+\) - \(Ta_8^+\). The activation and cleavage of \(N_2\) molecules proceed across surmountable barriers and along intricate multidimensional reaction paths. The underlying reaction processes and the intermediates involved are elucidated. Two distinct processes are characteristic of \(H_2\) adsorption: fast adsorption without competing desorption at low \(H_2\) loadings, indicating dissociative adsorption, followed by slow adsorption accompanied by multiple desorption reactions at high \(H_2\) loadings, indicating molecular \(H_2\) adsorption. The threshold between the two regimes is the completion of the first adsorbate shell. The \(N_2\) adsorption study of \(Ta_n^-\) clusters revealed that the \(N_2\) adsorption ability of anionic tantalum clusters depends strongly on cluster size n; n = 9 is the minimum size at which \(N_2\) adsorption onto \(Ta_n^-\) clusters yields stable and detectable cluster adsorbate species \([Ta_n(N_2)_m]^-\).
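Kinetic information of the kind described above is commonly extracted by modeling stepwise adsorption as a chain of sequential pseudo-first-order reactions. The sketch below is only an illustration of that generic scheme, with hypothetical rate constants rather than the values fitted in this work:

```python
import numpy as np

def sequential_adsorption(k, t):
    """Integrate sequential pseudo-first-order adsorption steps
    A0 -> A1 -> ... -> A_len(k) with rate constants k[i], using
    simple forward-Euler time stepping (illustrative only)."""
    n = np.zeros(len(k) + 1)
    n[0] = 1.0                      # all population starts as the bare cluster
    dt = t[1] - t[0]
    history = [n.copy()]
    for _ in t[1:]:
        dn = np.zeros_like(n)
        for i, ki in enumerate(k):
            flux = ki * n[i] * dt   # population transferred in this time step
            dn[i] -= flux
            dn[i + 1] += flux
        n += dn
        history.append(n.copy())
    return np.array(history)

t = np.linspace(0.0, 10.0, 2001)
pops = sequential_adsorption([1.0, 0.5, 0.1], t)  # hypothetical k values
```

Because population only moves between neighboring species, the total is conserved, and the bare-cluster signal decays exponentially, which is the signature used to extract rate constants from measured intensities.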
The direct regioselective C−H functionalization of simple, unfunctionalized pyridines is considered a long-standing challenge in heterocyclic chemistry. Herein, we report a novel one-pot protocol for the C4-selective sulfonylation of pyridines via triflic anhydride (Tf2O) activation, base-mediated addition of a sulfinic acid salt, and subsequent elimination/re-aromatization. In contrast to previous approaches employing tailored blocking groups, positional selectivity can be controlled by using N-methylpiperidine as a simple, readily available external base. This method offers highly modular and streamlined access to C4-sulfonylated pyridines.
The aim of this work was to examine the use of models and documents in the product development process against the background of the claim that product development is shifting from a model-based to a document-based approach. A consideration of models and documents at a general level showed that the two concepts are difficult to distinguish purely on the basis of their definitions, characteristics, and functions, and thus do not mutually exclude each other. The subsequent two-part, context-specific investigation, focusing first on the traditional view of product development and second on systems engineering, showed that in traditional product development the use of models predominates, but that in the literature these are mostly referred to as documents that may contain models. With regard to systems engineering, a misnomer concerning the separation into DBSE and MBSE was identified, since the distinction is not based directly on the use of models and documents but on a different mode of information transfer.
The aim of this thesis is to present, on the basis of a literature review, possible consequences of electronic human resource management (E-HRM) in organizations. First, the theoretical foundation is laid and the terms HRM and E-HRM are defined. In addition, other synonymously used terms (HRIS, web-based HRM, virtual HRM) are delineated, and on this basis a three-stage integration model of E-HRM is developed. Four concrete instruments and their most important characteristics are then briefly examined in order to substantiate potential consequences later on. The analysis of the consequences follows the categorization of Strohmeier (2007) at the organizational and individual levels. At the organizational level, the results show primarily positive effects in the areas of cost, efficiency, and service quality. At the individual level, changes in everyday work, shifts in responsibilities, and data protection concerns are chiefly to be expected.
In this article, a new data-adaptive method for smoothing bivariate functions is developed. The smoothing is done by kernel regression with rotationally invariant bivariate kernels. Two or three local bandwidth parameters are chosen automatically by a two-step plug-in approach. The algorithm starts with small global bandwidth parameters, which adapt to the noisy image over a few iterations. In the next step, local bandwidths are estimated. Some general asymptotic results about Gasser-Müller estimators and optimal bandwidth selection are given. The derived local bandwidth estimators converge and are asymptotically normal.
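As a rough illustration of the basic building block, bivariate kernel regression with a rotationally invariant Gaussian kernel can be sketched as follows. This is a minimal Nadaraya-Watson-style smoother with a single fixed bandwidth; the article's Gasser-Müller estimator and plug-in bandwidth selection are not reproduced here:

```python
import numpy as np

def kernel_smooth_2d(x, y, z, grid, bandwidth):
    """Bivariate kernel regression with a rotationally invariant
    Gaussian kernel (Nadaraya-Watson form; illustrative only)."""
    out = np.empty(len(grid))
    for i, (gx, gy) in enumerate(grid):
        d2 = (x - gx) ** 2 + (y - gy) ** 2      # squared Euclidean distance
        w = np.exp(-0.5 * d2 / bandwidth ** 2)  # weight depends on distance only
        out[i] = np.sum(w * z) / np.sum(w)      # locally weighted average
    return out

# Noisy samples of f(x, y) = x + y on the unit square
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 400), rng.uniform(0, 1, 400)
z = x + y + rng.normal(0, 0.3, 400)
est = kernel_smooth_2d(x, y, z, [(0.5, 0.5)], bandwidth=0.2)
```

A larger bandwidth averages over more neighbors (less variance, more bias), which is exactly the trade-off the article's local bandwidth selection automates.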
We introduce two novel techniques for speeding up the generation of digital \((t,s)\)-sequences. Based on these results, a new algorithm for the construction of Owen's randomly permuted \((t,s)\)-sequences is developed and analyzed. An implementation of the new techniques is available at http://www.cs.caltech.edu/~ilja/libseq/index.html
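One classic speed-up technique for digital sequences (not necessarily one of the two introduced in the article) is Gray-code ordering: each point differs from its predecessor in a single base-2 digit, so generation costs one XOR per point. A minimal sketch for the one-dimensional van der Corput sequence:

```python
def vdc_gray(n):
    """First n points of the base-2 van der Corput sequence in
    Gray-code order. Each point is obtained from the previous one
    by flipping a single binary digit (one XOR), instead of
    recomputing the full radical inverse. Illustrative sketch only;
    real digital (t,s)-sequences use generator matrices per dimension."""
    points = []
    x = 0                                    # current point as a 32-bit integer fraction
    for i in range(n):
        points.append(x / 2**32)
        c = (~i & (i + 1)).bit_length() - 1  # index of the lowest zero bit of i
        x ^= 1 << (31 - c)                   # flip one digit of the radical inverse
    return points
```

The Gray-code ordering visits a permutation of the same point set, so equidistribution properties over full dyadic blocks are preserved while per-point cost drops to O(1).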
In the present work, secondary structure motifs of isolated peptides and peptide aggregates are analyzed in the gas phase. To investigate their intrinsic properties, the isolated peptides are generated by adiabatic cooling in molecular beams. Using the highly sensitive double-resonance spectroscopy techniques of resonant two-photon ionization (R2PI) and infrared/resonant two-photon ionization (IR/R2PI), the peptides and peptide aggregates are analyzed with respect to their electronic and vibrational transitions. The vibrational frequencies in the region of the amide A, I, and II modes and in the upper "fingerprint" region of peptides are highly indicative of the geometry of the backbone and the side chains; for example, vibrations of groups involved in hydrogen bonds differ strongly in position and intensity from vibrations of free groups. A comparison with vibrational frequencies calculated by ab initio and density functional theory methods allows assignment to a specific structure. In this work, various secondary structures are investigated through the analysis of protected amino acids, di- and tripeptides. In particular, a β-sheet model system for an isolated dimer of a peptide was demonstrated for the first time. Furthermore, to gain a molecular understanding of microsolvation, aggregates with water molecules are considered, and the influence of the successive aggregation of water molecules on the secondary structure is analyzed. In cooperation with Prof. Schrader (University of Duisburg-Essen), template molecules are characterized to investigate their ability to attach to harmful β-sheet structures, which frequently occur in so-called neurodegenerative diseases.
Their efficiency was investigated both through the analysis of the number and strength of the inter- and intramolecular hydrogen bonds and through the cluster structure formed.
Synaptic transmission is controlled by re-uptake systems that reduce transmitter concentrations in the synaptic cleft and recycle the transmitter into presynaptic terminals. The re-uptake systems are thought to ensure cytosolic concentrations in the terminals that are sufficient for reloading empty synaptic vesicles (SVs). Genetic deletion of glycine transporter 2 (GlyT2) results in severely disrupted inhibitory neurotransmission and, ultimately, death. Here we investigated the role of GlyT2 at inhibitory glycinergic synapses in the mammalian auditory brainstem. These synapses are tuned for resilience, reliability, and precision, even during sustained high-frequency stimulation, when endocytosis and refilling of SVs probably contribute substantially to efficient replenishment of the readily releasable pool (RRP). Such robust synapses are formed between MNTB and LSO neurons (medial nucleus of the trapezoid body, lateral superior olive). By means of patch-clamp recordings, we assessed the synaptic performance in controls, in GlyT2 knockout mice (KOs), and upon acute pharmacological GlyT2 blockade. Via computational modeling, we calculated the reoccupation rate of empty release sites and RRP replenishment kinetics during 60-s challenge and 60-s recovery periods. Control MNTB-LSO inputs maintained high-fidelity neurotransmission at 50 Hz for 60 s and recovered very efficiently from synaptic depression. During 'marathon experiments' (30,600 stimuli in 20 min), RRP replenishment accumulated to 1,260-fold. In contrast, KO inputs featured severe impairments. For example, the input number was reduced to ~1 (vs. ~4 in controls), implying massive functional degeneration of the MNTB-LSO microcircuit and a role of GlyT2 during synapse maturation. Surprisingly, neurotransmission did not collapse completely in KOs, as inputs still replenished their small RRP 80-fold upon a 50 Hz | 60 s challenge. However, they totally failed to do so for extended periods.
Upon acute pharmacological GlyT2 inactivation, synaptic performance remained robust, in stark contrast to KOs. RRP replenishment was 865-fold in marathon-experiments, only ~1/3 lower than in controls. Collectively, our empirical and modeling results demonstrate that GlyT2 re-uptake activity is not the dominant factor in the SV recycling pathway that imparts indefatigability to MNTB-LSO synapses. We postulate that additional glycine sources, possibly the antiporter Asc-1, contribute to RRP replenishment at these high-fidelity brainstem synapses.
Tandem blading is used in axial compressors wherever large flow turning must be achieved. Blade designs in which the entire turning is accomplished with a single blade profile quickly reach their aerodynamic limits, since boundary layer separation can occur at high turning. In tandem blading, the turning task is divided between two blade profiles. At the leading edge of the rear blade profile (in the flow direction), a fresh, thin, undisturbed boundary layer is available for the remaining turning.
Compressor blading is usually designed in several coaxial sections. Unwrapping such a coaxial section into a plane yields a so-called blade cascade. The superiority of tandem cascades over single cascades at large turning has frequently been demonstrated in the literature for two-dimensional flows. Likewise, for two-dimensional flows through tandem cascades, the ideal parameters for the relative position of the individual blade rows and for the distribution of aerodynamic load between them are known. Since high turning is usually required in the last stages of multi-stage axial compressors, where blade aspect ratios are small, the influence of the endwalls (hub and casing) becomes large and can no longer be neglected. So far, however, there is little information on the flow structure in tandem cascades near the endwall and on the three-dimensional flow behavior. It is therefore unclear whether tandem cascade configurations that produce the lowest losses in two-dimensional flows also cause minimal losses in three-dimensional flows.
In this work, the experimental and numerical results of four tandem cascade configurations and a single-blade reference cascade are presented. The cascades consist of NACA 65 profiles with a small aspect ratio. The cascades were designed taking into account the empirical correlations of Lieblein and Lei. The tandem cascades differ in the pitch ratio of the individual blade rows and in percent pitch (PP). All cascades produce a flow turning of approximately 50 degrees at a Reynolds number of 8x10^5.
This work shows how the flow in tandem cascades is structured and, in particular, how secondary flow arises in them. Based on the flow structure, made visible by numerical and experimental oil-flow visualizations of the endwall and the profile surfaces, the loss generation is identified and discussed. Corner stall, as the central flow phenomenon in tandem cascades, is investigated in detail with respect to its origin and extent. Flow parameters determined numerically and from measurements are compared with one another, and the influence of the tandem configurations on the flow field in the wake of the cascades is shown. Finally, recommendations are given on how to choose the pitch ratio of the individual blade rows and the percent pitch (PP) in a tandem cascade to achieve minimal flow losses.
In this work, experimental and theoretical investigations of the (near-critical) high-pressure multiphase equilibrium of ternary systems consisting of ethene, water, and an organic solvent that is completely water-miscible at ambient conditions were carried out. The investigations address the fundamentals of a novel liquid-liquid extraction process for natural compounds. This extraction process is enabled by an aqueous-organic liquid-phase split that occurs when a gas (near its critical state) is pressurized onto a homogeneous aqueous/organic phase. The investigation of the three-phase equilibrium (LLV) generated by this liquid-phase split, and of the partitioning of selected natural compounds between the two liquid phases of the three-phase equilibrium, forms the focus of this work. The work builds on investigations by Wendland (1994) and Adrian (1997). Wendland measured the phase behavior of the ternary systems carbon dioxide + water + (acetone or 2-propanol) and developed extensive Fortran routines for describing the ternary phase behavior and the binary subsystems. Adrian (1997) investigated the three ternary systems carbon dioxide + water + (1-/2-propanol or propionic acid) as well as the partitioning of ten organic natural compounds or model compounds between the coexisting liquid phases of the three-phase equilibrium LLV in one (in some cases both) of the ternary systems carbon dioxide + water + (acetone or 1-propanol). In this work, ethene was mostly used as the near-critical gas because, in contrast to the previously used carbon dioxide, it is undissociated in aqueous solutions and therefore does not determine the pH of the coexisting liquid phases. The phase equilibria were measured with a phase equilibrium apparatus operating according to the analytical method.
In a thermostatted high-pressure view cell (30 cm3), a phase equilibrium between several coexisting phases was established. Two external sampling loops were connected to the measuring cell, through which the phases coexisting in the cell were pumped and from which samples were taken for analysis by GC and HPLC. At temperatures between 293 and 333 K and pressures up to 20.5 MPa, the high-pressure multiphase equilibrium (LLV) of the two ternary systems ethene + water + (1- or 2-propanol) was investigated. In addition, the pressure-temperature coordinates of critical endpoint lines in these two ternary systems were determined, and further investigations of the general phase behavior were carried out. The focus of this work was on measurements of the partitioning of natural compounds between the coexisting liquid phases of the three-phase equilibrium LLV: among other things, the individual partitioning of three pairs of chemically similar natural compounds in the ternary system ethene + water + 2-propanol was investigated at 293 and 333 K. The pairs were 2,5-hexanediol / 2,5-hexanedione, N-acetylglucosamine / N-acetylmannosamine, and D-/L-phenylalanine. In the theoretical part of this work, the phase behavior of the ternary systems was described with the cubic equation of state (EoS) of Peng and Robinson (1976) in the modification of Melhem et al. (1989), combined with various mixing rules. The aim was both to predict the ternary phase behavior from information on the binary subsystems and to correlate the ternary phase behavior. Partly because of the very small number of binary data points, the behavior of the system ethene + water + 2-propanol could not be predicted from information on the binary subsystems. For the ternary system ethene + water + 1-propanol, the prediction agreed with the measurements only qualitatively.
The reproduction of the phase behavior of the ternary systems improved significantly when the binary interaction parameters were fitted to experimental data for the three-phase equilibrium of the ternary systems. To correlate the partitioning of the natural compounds between the liquid phases of the three-phase equilibrium (LLV), a method based on fitting the pure-component parameters of the natural compounds in the Peng-Robinson EoS was used. The pure-component parameters of each natural compound required in the EoS were fitted to the results of the partitioning measurements, with the interaction parameters of the ternary base system retained and the parameters for interactions of the natural compound with the ternary base system neglected. Through this procedure, the pure-component parameters of the natural compounds also describe the mixture properties. By normalizing the measured and calculated values to the measured and calculated ternary upper and lower boundary points of the three-phase equilibrium, a quantitative description of the measurements was obtained.
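The cubic equation of state at the core of this modeling can be sketched for a pure component as follows. Note that this sketch uses the original Peng-Robinson (1976) alpha function rather than the Melhem et al. (1989) modification employed in the thesis, and literature critical constants for ethene as illustrative inputs:

```python
from math import sqrt

R = 8.314  # universal gas constant, J/(mol K)

def peng_robinson_pressure(T, v, Tc, pc, omega):
    """Pressure from the Peng-Robinson (1976) cubic EoS for a pure
    component, with the original alpha function (the thesis uses the
    Melhem et al. modification). T in K, v in m^3/mol, pc in Pa."""
    a = 0.45724 * R**2 * Tc**2 / pc          # attraction parameter
    b = 0.07780 * R * Tc / pc                # covolume
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - sqrt(T / Tc)))**2
    return R * T / (v - b) - a * alpha / (v**2 + 2 * b * v - b**2)

# Ethene: Tc = 282.3 K, pc = 5.04 MPa, omega = 0.087 (literature values)
p = peng_robinson_pressure(300.0, 1e-3, 282.3, 5.04e6, 0.087)
```

The attractive term lowers the pressure below the ideal-gas value RT/v; mixture calculations as in the thesis extend a and b with mixing rules containing the binary interaction parameters that were fitted to the ternary LLV data.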
As a complement to the ALEPH electronic library system, the University Library (UB) Kaiserslautern uses the Electronic Resource Management system SemperTool for managing its electronic media. The video presentation addresses the advantages of this ERM system and its use at the UB Kaiserslautern, and briefly describes the functionalities in use.
In this short note we prove some general results on semi-stable sheaves on P_2 and P_3 with arbitrary linear Hilbert polynomial. Using Beilinson's spectral sequence, we compute free resolutions for this class of semi-stable sheaves and deduce that the smooth moduli spaces M_{r m + s}(P_2) and M_{r m + r - s}(P_2) are birationally equivalent if r and s are coprime.
Since the advent of semiconductor technology there has been a trend toward the miniaturization of electronic systems. This, together with rising requirements and the increasing integration of various sensors for interacting with the environment, makes such embedded systems, as found for example in mobile devices or vehicles, ever more complex. The consequences are longer development times and a growing component count, while reductions in size and energy consumption are demanded at the same time. The design of multi-sensor systems in particular requires specific sensor electronics for each sensor type used and thus conflicts with the demands for miniaturization and low power consumption.
This research work addresses the problem described above and discusses the development of a universal sensor interface for precisely such multi-sensor systems. As a single integrated device, this interface can serve as the sensor electronics for up to nine different sensors of varying types. The measurable quantities include voltage, current, resistance, capacitance, inductance, and impedance.
Dynamic reconfigurability and application-specific programming allow a variable configuration according to the respective requirements. Both the development effort and the component count can be reduced considerably thanks to this interface, which also includes a power-saving mode.
The flexible structure enables the construction of intelligent systems with so-called self-x characteristics. These concern capabilities for autonomous system monitoring, calibration, or repair, and thus contribute to increased robustness and fault tolerance. As a further innovation, the universal interface contains novel circuit and sensor concepts, for example for measuring the chip temperature or compensating thermal influences on the sensors.
Two different applications demonstrate the functionality of the fabricated prototypes. The realized applications concern food analysis and three-dimensional magnetic localization.
In the course of this dissertation it was shown that increased expression of the tonoplast dicarboxylate transporter (TDT) leads to an increased malate content with a simultaneously reduced citrate content in the overexpression plants. Thus, similar to the knockout plants, a reciprocal behavior of citrate and malate was demonstrated.
Electrophysiological analyses on oocytes of X. laevis, in conjunction with uptake experiments on proteoliposomes, further showed that the transport of citrate is likewise catalyzed by the TDT. On the basis of a negative inward current in oocytes, it was shown that this citrate transport is electrogenic. It was further shown that citrate2-H is the transported form of citrate. This species is presumably transported together with three protons.
The dianions malate and succinate, and most likely fumarate as well, are also transported via the TDT. Under standard conditions they are imported into the vacuole; in exchange, citrate is exported from the vacuole. The trans-stimulating effect of malate, succinate, and fumarate on citrate transport, and vice versa, supports the antiport of the respective carboxylates across the tonoplast postulated in this work. This antiport is not obligatory, however, as shown by the reduced transport of citrate across the membrane in the absence of a counter-substrate.
It was further shown that the increased expression of the TDT is substantially involved in the accumulation of malate and the mobilization of citrate under drought and osmotic stress.
Finally, acid stress experiments demonstrated that malate accumulation and simultaneous citrate degradation are not necessarily coupled; under acid stress, further regulatory effects on malate import and citrate export must therefore prevail.
The subject of this work is the canonical connection between classical global gravity field modeling in the conception of Stokes (1849) and Neumann (1887) and modern local multiscale computation by means of locally compact adaptive wavelets. A particular concern is the "zoom-in" determination of geoid heights from locally given gravity anomalies or gravity disturbances.
The basic theory of spherical singular integrals is recapitulated. Criteria are given for measuring the space-frequency localization of functions on the sphere. The trade-off between space localization on the sphere and frequency localization in terms of spherical harmonics is described in the form of an uncertainty principle. A continuous version of spherical multiresolution is introduced, starting from the continuous wavelet transform corresponding to spherical wavelets with vanishing moments up to a certain order. The wavelet transform is characterized by least-squares properties. Scale discretization enables us to construct spherical counterparts of wavelet packets and scale discrete Daubechies' wavelets. It is shown that singular integral operators forming a semigroup of contraction operators of class (C₀) (like Abel-Poisson or Gauß-Weierstraß operators) lead in a canonical way to pyramid algorithms. Fully discretized wavelet transforms are obtained via approximate integration rules on the sphere. Finally, applications to (geo-)physical reality are discussed in more detail. A combined method is proposed for approximating the low-frequency parts of a physical quantity by spherical harmonics and the high-frequency parts by spherical wavelets. The particular significance of this combined concept is motivated for the situation of today's physical geodesy, viz. the determination of the high-frequency parts of the earth's gravitational potential under explicit knowledge of the lower-order part in terms of a spherical harmonic expansion.
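As a concrete aside, the space localization discussed above can be illustrated with the classical Abel-Poisson kernel, whose Legendre series and closed form are standard; the following sketch (function names and truncation order are our own choices) checks the two against each other:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def abel_poisson_closed(h, t):
    """Closed form of the Abel-Poisson kernel on the unit sphere,
    t = cos(spherical distance), 0 < h < 1."""
    return (1 - h**2) / (4 * np.pi * (1 + h**2 - 2*h*t)**1.5)

def abel_poisson_series(h, t, nmax=200):
    """Truncated Legendre series  sum_{n=0}^{nmax} (2n+1)/(4 pi) h^n P_n(t)."""
    coeffs = np.array([(2*n + 1) / (4*np.pi) * h**n for n in range(nmax + 1)])
    return legval(t, coeffs)

h, t = 0.7, 0.5
print(abs(abel_poisson_closed(h, t) - abel_poisson_series(h, t)) < 1e-10)
```

Letting h approach 1 concentrates the kernel at t = 1 while spreading its Legendre spectrum, which is exactly the space-frequency trade-off the uncertainty principle quantifies.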
A continuous version of spherical multiresolution is described, starting from the continuous wavelet transform on the sphere. Scale discretization enables us to construct spherical counterparts to Daubechies wavelets and wavelet packets (known from Euclidean theory). An essential tool is the theory of singular integrals on the sphere. It is shown that singular integral operators forming a semigroup of contraction operators of class (C₀) (like Abel-Poisson or Gauß-Weierstraß operators) lead in a canonical way to (pyramid) algorithms.
Spline functions that approximate data given on the sphere are developed in a weighted Sobolev space setting. The flexibility of the weights makes possible the choice of the approximating function in a way which emphasizes attributes desirable for the particular application area. Examples show that certain choices of the weight sequences yield known methods. A convergence theorem containing explicit constants yields a usable error bound. Our survey ends with the discussion of spherical splines in geodetically relevant pseudodifferential equations.
Based on a new definition of dilation, a scale discrete version of spherical multiresolution is described, starting from a scale discrete wavelet transform on the sphere. Depending on the type of application, different families of wavelets are chosen. In particular, spherical Shannon wavelets are constructed that form an orthogonal multiresolution analysis. Finally, fully discrete wavelet approximation is discussed in the case of band-limited wavelets.
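A minimal sketch of a spherical Shannon scaling function, assuming the usual construction with Legendre coefficients (2n+1)/(4π) below a scale-dependent cut-off 2^J and zero beyond; the quadrature check of the sharp spectral cut-off is purely illustrative:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def shannon_scaling(t, J):
    """Spherical Shannon scaling function at scale J: Legendre coefficients
    (2n+1)/(4 pi) for n < 2**J and zero beyond (sharp spectral cut-off)."""
    coeffs = np.array([(2*n + 1) / (4*np.pi) for n in range(2**J)])
    return legval(t, coeffs)

# Band-limitation check: the projection 2*pi * int_{-1}^{1} Phi_J(t) P_n(t) dt
# is 1 for n < 2**J and 0 for n >= 2**J (exact for Gauss-Legendre quadrature,
# since the integrands are polynomials of low degree).
x, w = leggauss(64)
J = 3
for n in (0, 7, 8):
    Pn = legval(x, np.eye(n + 1)[n])
    proj = 2*np.pi * np.sum(w * shannon_scaling(x, J) * Pn)
    print(n, round(proj, 6))
```

The sharp cut-off is what makes consecutive scale spaces orthogonal, i.e. an orthogonal multiresolution analysis.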
Some new approximation methods are described for harmonic functions corresponding to boundary values on the (unit) sphere. Starting from the usual Fourier (orthogonal) series approach, we propose here nonorthogonal expansions, i.e. series expansions in terms of overcomplete systems consisting of localizing functions. In detail, we are concerned with the so-called Gabor, Toeplitz, and wavelet expansions. Essential tools are modulations, rotations, and dilations of a mother wavelet. The Abel-Poisson kernel turns out to be the appropriate mother wavelet in approximation of harmonic functions from potential values on a spherical boundary.
Discrete families of functions with the property that every function in a certain space can be represented by its formal Fourier series expansion are developed on the sphere. A Fourier series type expansion is obviously true if the family is an orthonormal basis of a Hilbert space, but it can also hold in situations where the family is not orthogonal and is overcomplete. Furthermore, all functions in our approach are axisymmetric (depending only on the spherical distance) so that they can be used adequately in (rotation-)invariant pseudodifferential equations on the sphere. Three classes of frames are discussed: (i) Abel-Poisson frames, (ii) Gauss-Weierstrass frames, and (iii) frames consisting of locally supported kernel functions. Abel-Poisson frames form families of harmonic functions and provide us with powerful approximation tools in potential theory. Gauss-Weierstrass frames are intimately related to the diffusion equation on the sphere and play an important role in multiscale descriptions of image processing on the sphere. The third class enables us to discuss spherical Fourier expansions by means of axisymmetric finite elements.
This paper presents a method for approximating spherical functions from discrete data of a block-wise grid structure. The essential ingredients of the approach are scaling and wavelet functions within a biorthogonalisation process generated by locally supported zonal kernel functions. In consequence, geophysically and geodetically relevant problems involving rotation-invariant pseudodifferential operators become attackable. A multiresolution analysis is formulated enabling a fast wavelet transform similar to the algorithms known from one-dimensional Euclidean theory.
A new class of locally supported radial basis functions on the (unit) sphere is introduced by forming an infinite number of convolutions of "isotropic finite elements". The resulting up functions show useful properties: they are locally supported and infinitely often differentiable. The main properties of these kernels are studied in detail. In particular, the development of a multiresolution analysis within the reference space of square-integrable functions over the sphere is given. Altogether, the paper presents a mathematically significant and numerically efficient introduction to multiscale approximation by locally supported radial basis functions on the sphere.
Metaharmonic wavelets are introduced for constructing the solution of the Helmholtz equation (reduced wave equation) corresponding to Dirichlet's or Neumann's boundary values on a closed surface. An approach leading to exact reconstruction formulas is considered in more detail. A scale discrete version of multiresolution is described for potential functions metaharmonic outside the closed surface and satisfying the radiation condition at infinity. Moreover, we discuss fully discrete wavelet representations of band-limited metaharmonic potentials. Finally, a decomposition and reconstruction (pyramid) scheme for economical numerical implementation is presented for Runge-Walsh wavelet approximation.
Satellite gradiometry and its instrumentation is an ultra-sensitive detection technique of the space gravitational gradient (i.e. the Hesse tensor of the gravitational potential). Gradiometry will be of great significance in inertial navigation, gravity survey, geodynamics and earthquake prediction research. In this paper, satellite gradiometry, formulated as an inverse problem of satellite geodesy, is discussed from two mathematical aspects: Firstly, satellite gradiometry is considered as a continuous problem of harmonic downward continuation. The space-borne gravity gradients are assumed to be known continuously over the satellite (orbit) surface. Our purpose is to specify sufficient conditions under which uniqueness and existence can be guaranteed. It is shown that, in a spherical context, uniqueness results are obtainable by decomposition of the Hesse matrix in terms of tensor spherical harmonics. In particular, the gravitational potential is proved to be uniquely determined if second order radial derivatives are prescribed at satellite height. This information leads us to a reformulation of satellite gradiometry as a (Fredholm) pseudodifferential equation of the first kind. Secondly, for a numerical realization, we assume the gravitational gradients to be known for a finite number of discrete points. The discrete problem is dealt with by classical regularization methods, based on filtering techniques by means of spherical wavelets. A spherical singular integral-like approach to regularization methods is established, and regularization wavelets are developed which allow regularization in the form of a multiresolution analysis. Moreover, a combined spherical harmonic and spherical regularization wavelet solution is derived as an appropriate tool for a future (global and local) high-precision resolution of the earth's gravitational potential.
Wavelets on closed surfaces in Euclidean space R3 are introduced starting from a scale discrete wavelet transform for potentials harmonic down to a spherical boundary. Essential tools for approximation are integration formulas relating an integral over the sphere to suitable linear combinations of functional values (resp. normal derivatives) on the closed surface under consideration. A scale discrete version of multiresolution is described for potential functions harmonic outside the closed surface and regular at infinity. Furthermore, an exact fully discrete wavelet approximation is developed in case of band-limited wavelets. Finally, the role of wavelets is discussed in three problems, namely (i) the representation of a function on a closed surface from discretely given data, (ii) the (discrete) solution of the exterior Dirichlet problem, and (iii) the (discrete) solution of the exterior Neumann problem.
For the determination of the earth's gravity field many types of observations are available nowadays, e.g., terrestrial gravimetry, airborne gravimetry, satellite-to-satellite tracking, satellite gradiometry, etc. The mathematical connection between these observables on the one hand and the gravity field and shape of the earth on the other hand is called the integrated concept of physical geodesy. In this paper harmonic wavelets are introduced by which the gravitational part of the gravity field can be approximated progressively better and better, reflecting an increasing flow of observations. An integrated concept of physical geodesy in terms of harmonic wavelets is presented. Essential tools for approximation are integration formulas relating an integral over an internal sphere to suitable linear combinations of observation functionals, i.e., linear functionals representing the geodetic observables. A scale discrete version of multiresolution is described for approximating the gravitational potential outside and on the earth's surface. Furthermore, an exact fully discrete wavelet approximation is developed for the case of band-limited wavelets. A method for combined global outer harmonic and local harmonic wavelet modelling is proposed corresponding to realistic earth models. As examples, the role of wavelets is discussed for the classical Stokes problem, the oblique derivative problem, satellite-to-satellite tracking, satellite gravity gradiometry, and combined satellite-to-satellite tracking and gradiometry.
Many problems arising in (geo)physics and technology can be formulated as compact operator equations of the first kind \(A F = G\). Due to the ill-posedness of the equation, a variety of regularization methods are in discussion for an approximate solution, where particular emphasis must be put on balancing the data and the approximation error. In doing so one is interested in optimal parameter choice strategies. In this paper our interest lies in an efficient algorithmic realization of a special class of regularization methods. More precisely, we implement regularization methods based on filtered singular value decomposition as a wavelet analysis. This enables us to perform, e.g., Tikhonov-Phillips regularization as multiresolution. In other words, we are able to pass over from one regularized solution to another one by adding or subtracting so-called detail information in terms of wavelets. It is shown that regularization wavelets as proposed here are efficiently applicable to a future problem in satellite geodesy, viz. satellite gravity gradiometry.
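The passage from one regularized solution to another can be illustrated generically; the following is a plain filtered-SVD sketch on a random matrix, not the wavelet construction of the paper:

```python
import numpy as np

def tikhonov_filtered_svd(A, g, alpha):
    """Regularized solution via filtered singular value decomposition:
    F_alpha = sum_i  s_i / (s_i^2 + alpha^2) * <g, u_i> v_i."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s**2 + alpha**2)       # Tikhonov filter factors
    return Vt.T @ (filt * (U.T @ g))

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
f = rng.standard_normal(10)
g = A @ f                              # exact data

coarse = tikhonov_filtered_svd(A, g, alpha=1.0)   # strong regularization
fine = tikhonov_filtered_svd(A, g, alpha=0.1)     # weak regularization
detail = fine - coarse                 # "detail information" between two scales
print(np.linalg.norm(coarse + detail - fine) < 1e-12)
```

Adding the detail term carries the coarse solution over to the finer one, which is the multiresolution reading of regularization described in the abstract.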
The purpose of satellite-to-satellite tracking (SST) and/or satellite gravity gradiometry (SGG) is to determine the gravitational field on and outside the Earth's surface from given gradients of the gravitational potential and/or the gravitational field at satellite altitude. In this paper both satellite techniques are analysed and characterized from a mathematical point of view. Uniqueness results are formulated. The justification is given for approximating the external gravitational field by a finite linear combination of certain gradient fields (for example, gradient fields of single-poles or multi-poles) consistent with a given set of SGG and/or SST data. A strategy of modelling the gravitational field from satellite data within a multiscale concept is described; illustrations based on the EGM96 model are given.
This review article reports current activities and recent progress on constructive approximation and numerical analysis in physical geodesy. The paper focuses on two major topics of interest, namely trial systems for purposes of global and local approximation and methods for adequate geodetic application. A fundamental tool is an uncertainty principle, which gives appropriate bounds for the quantification of space and momentum localization of trial functions. The essential outcome is a better understanding of constructive approximation in terms of radial basis functions such as splines and wavelets.
Two possible substitutes of the Fourier transform in geopotential determination are the windowed Fourier transform (WFT) and the wavelet transform (WT). In this paper we introduce harmonic WFT and WT and show how they can be used to give information about the geopotential simultaneously in the space domain and the frequency (angular momentum) domain. The counterparts of the inverse Fourier transform are derived, which allow us to reconstruct the geopotential from its WFT and WT, respectively. Moreover, we derive a necessary and sufficient condition that an otherwise arbitrary function of space and frequency has to satisfy to be the WFT or WT of a potential. Finally, least-squares approximation and minimum norm (i.e. least-energy) representation, which will play a particular role in geodetic applications of both WFT and WT, are discussed in more detail.
The satellite-to-satellite tracking (SST) problems are characterized from a mathematical point of view. Uniqueness results are formulated. Moreover, the basic relations are developed between (scalar) approximation of the earth's gravitational potential by "scalar basis systems" and (vectorial) approximation of the gravitational field by "vectorial basis systems". Finally, the mathematical justification is given for approximating the external geopotential field by finite linear combinations of certain gradient fields (for example, gradient fields of multi-poles) consistent with a given set of SST data.
In this paper we introduce a multiscale technique for the analysis of deformation phenomena of the Earth. Classically, the basis functions in use are globally defined and show polynomial character. In consequence, only a global analysis of deformations is possible, so that, for example, the water load of an artificial reservoir can hardly be modelled in that way. Up to now, the alternative of a local analysis could only be established by assuming the investigated region to be flat. In what follows we propose a local analysis based on tools (Navier scaling functions and wavelets) taking the (spherical) surface of the Earth into account. Our approach, in particular, enables us to perform a zooming-in procedure. In fact, the concept of Navier wavelets is formulated in such a way that subregions with larger or smaller data density can accordingly be modelled with a higher or lower resolution of the model, respectively.
In modern geoscience, understanding the climate depends on information about the oceans. Covering two thirds of the Earth, oceans play an important role. Oceanic phenomena are, for example, oceanic circulation, water exchanges between atmosphere, land and ocean, or temporal changes of the total water volume. All these features require new methods in constructive approximation, since they are regionally bounded and not globally observable. This article deals with methods of handling data with locally supported basis functions, modeling them in a multiscale scheme involving a wavelet approximation, and presenting the main results for the dynamic topography and the geostrophic flow, e.g., in the North Atlantic. Further, it is demonstrated that high compression rates of the occurring wavelet transforms can be achieved by use of locally supported wavelets.
By means of the limit and jump relations of classical potential theory with respect to the vectorial Helmholtz equation a wavelet approach is established on a regular surface. The multiscale procedure is constructed in such a way that the emerging scalar, vectorial and tensorial potential kernels act as scaling functions. Corresponding wavelets are defined via a canonical refinement equation. A tree algorithm for fast decomposition of a complex-valued vector field given on a regular surface is developed based on numerical integration rules. By virtue of this tree algorithm, an efficient numerical method for the solution of vectorial Fredholm integral equations on regular surfaces is discussed in more detail. The resulting multiscale formulation is used to solve boundary-value problems for the time harmonic Maxwell's equations corresponding to regular surfaces.
Based on the well-known results of classical potential theory, viz. the limit and jump relations for layer integrals, a numerically viable and efficient multiscale method of approximating the disturbing potential from gravity anomalies is established on regular surfaces, i.e., on telluroids of ellipsoidal or even more structured geometric shape. The essential idea is to use scale dependent regularizations of the layer potentials occurring in the integral formulation of the linearized Molodensky problem to introduce scaling functions and wavelets on the telluroid. As an application of our multiscale approach some numerical examples are presented on an ellipsoidal telluroid.
The determination of the Earth's gravitational potential from the measurement data of the research satellite CHAMP can be formulated as an operator equation (SST problem). This approach assumes that a geometric orbit of the CHAMP satellite is available. By numerical differentiation, using a suitable denoising procedure, the gradient of the potential along the orbit can then be determined from the geometric orbit. In particular, the radial derivative (and the surface gradient) are then known on a grid of points along the orbit. In an Earth-fixed system this amounts to an almost complete coverage of the Earth (up to polar gaps) with a rather dense data grid at satellite altitude. Solving the SST operator equation (determining the potential on the Earth's surface from knowledge of the radial derivative on a data grid at flight altitude) is an ill-posed inverse problem that must be solved with a suitable regularization technique. In the present case such a regularization was implemented by means of non-bandlimited regularization scaling functions and regularization wavelets. These are strongly space-localizing and therefore lead to a potential model that is a linear combination of strongly space-localizing functions. Such a model can also be computed as a local model from purely local data and therefore offers considerable advantages over spherical harmonic models such as EGM96 for modern geopotential determination. The discretization and numerical implementation of the computation of such a model are carried out with splines, which here are likewise linear combinations of strongly space-localizing functions.
The large linear systems of equations that must be solved to compute the smoothing or interpolating splines can be solved quickly and efficiently with the Schwarz alternating algorithm. Combining the Schwarz alternating algorithm with fast summation techniques (fast multipole methods) allows a further substantial acceleration in solving these systems. For the determination of smoothing parameters (spline smoothing) and regularization parameters, the L-curve method can be employed.
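A generic sketch of the Schwarz alternating idea on a small symmetric positive definite system; the block sizes and overlap are arbitrary choices, and the fast multipole summation is not reproduced here:

```python
import numpy as np

def schwarz_alternating(A, b, blocks, sweeps=50):
    """Multiplicative Schwarz iteration: cyclically solve the restriction of
    A x = b to each (overlapping) index block, correcting x in place."""
    x = np.zeros_like(b)
    for _ in range(sweeps):
        for idx in blocks:
            r = b - A @ x                      # global residual
            x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return x

# Strongly diagonally dominant SPD test matrix (a stand-in for a spline system)
n = 40
A = 4.0 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
b = np.ones(n)
blocks = [np.arange(0, 25), np.arange(15, 40)]  # two overlapping subdomains
x = schwarz_alternating(A, b, blocks)
print(np.linalg.norm(A @ x - b) < 1e-8)
```

Each sweep only solves small subdomain systems; in the setting of the abstract, the residual evaluation `A @ x` is the step accelerated by fast multipole summation.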
Being interested in (rotation-)invariant pseudodifferential equations of satellite problems corresponding to spherical orbits, we are reasonably led to generating kernels that depend only on the spherical distance, i.e., in the language of modern constructive approximation, spherical radial basis functions. In this paper approximate identities generated by such (rotation-invariant) kernels which are additionally locally supported are investigated in detail from a theoretical as well as a numerical point of view. So-called spherical difference wavelets are introduced. The wavelet transforms are evaluated by the use of a numerical integration rule that is based on Weyl's law of equidistribution. This approximate formula is constructed such that it can cope with millions of (satellite) data. The approximation error is estimated on the orbital sphere. Finally, we apply the developed theory to the problems of satellite-to-satellite tracking (SST) and satellite gravity gradiometry (SGG).
The static deformation of the surface of the earth caused by surface pressure, like the water load of an ocean or an artificial lake, is discussed. First a brief mention is made of the solution of the Boussinesq problem for an infinite halfspace, with the elastic medium assumed to be homogeneous and isotropic. Then the elastic response for realistic earth models is determined by spline interpolation using Navier splines. Major emphasis is on the determination of the elastic field caused by water loads from surface tractions on the (real) earth's surface. Finally, the elastic deflection of an artificial lake assuming a homogeneous isotropic crust is compared for both evaluation methods.
The purpose of GPS-satellite-to-satellite tracking (GPS-SST) is to determine the gravitational potential at the earth's surface from measured ranges (geometrical distances) between a low-flying satellite and the high-flying satellites of the Global Positioning System (GPS). In this paper GPS-satellite-to-satellite tracking is reformulated as the problem of determining the gravitational potential of the earth from given gradients at satellite altitude. Uniqueness and stability of the solution are investigated. The essential tool is to split the gradient field into a normal part (i.e. the first order radial derivative) and a tangential part (i.e. the surface gradient). Uniqueness is proved for polar, circular orbits corresponding to both types of data (first radial derivative and/or surface gradient). In both cases gravity recovery based on satellite-to-satellite tracking turns out to be an exponentially ill-posed problem. As an appropriate solution method, regularization in terms of spherical wavelets is proposed based on the knowledge of the singular system. Finally, the method is generalized to a non-spherical earth and a non-spherical orbital surface based on combined terrestrial and satellite data material.
In modern approximation methods linear combinations in terms of (space localizing) radial basis functions play an essential role. Areas of application are numerical integration formulas on the unit sphere Ω corresponding to prescribed nodes, spherical spline interpolation, and spherical wavelet approximation. The evaluation of such a linear combination is a time-consuming task, since a certain number of summations, multiplications and calculations of scalar products are required. This paper presents a generalization of the panel clustering method in a spherical setup. The economy and efficiency of panel clustering is demonstrated for three fields of interest, namely upward continuation of the earth's gravitational potential, geoid computation by spherical splines, and wavelet reconstruction of the gravitational potential.
A General Hilbert Space Approach to Wavelets and Its Application in Geopotential Determination (1999)
A general approach to wavelets is presented within the framework of a separable functional Hilbert space H. The basic tool is the construction of H-product kernels by use of Fourier analysis with respect to an orthonormal basis in H. Scaling function and wavelet are defined in terms of H-product kernels. Wavelets are shown to be 'building blocks' that decorrelate the data. A pyramid scheme provides fast computation. Finally, the determination of the earth's gravitational potential from single- and multi-pole expressions is organized as an example of wavelet approximation in Hilbert space structure.
In this paper, we deal with the problem of spherical interpolation of discretely given data of tensorial type. To this end, spherical tensor fields are investigated and a decomposition formula is described. Tensor spherical harmonics are introduced as eigenfunctions of a tensorial analogon to the Beltrami operator and discussed in detail. Based on these preliminaries, a spline interpolation process is described and error estimates are presented. Furthermore, some relations between the spline basis functions and the theory of radial basis functions are developed.
The paper discusses the approximation of scattered data on the sphere which is one of the major tasks in geomathematics. Starting from the discretization of singular integrals on the sphere the authors devise a simple approximation method that employs locally supported spherical polynomials and does not require equidistributed grids. It is the basis for a hierarchical approximation algorithm using differently scaled basis functions, adaptivity and error control. The method is applied to two examples one of which is a digital terrain model of Australia.
Spline functions that interpolate data given on the sphere are developed in a weighted Sobolev space setting. The flexibility of the weights makes possible the choice of the approximating function in a way which emphasizes attributes desirable for the particular application area. Examples show that certain choices of the weight sequences yield known methods. A pointwise convergence theorem containing explicit constants yields a useable error bound.
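A minimal sketch of spherical spline interpolation with a reproducing kernel defined by a weight sequence, truncated at a finite degree for computability; the weight sequence A_n ~ n² and all numerical values are illustrative choices, not those of the paper:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def spline_kernel(t, weights):
    """Reproducing kernel K(x.y) = sum_n (2n+1)/(4 pi A_n^2) P_n(x.y)
    for a weight sequence A_n defining a Sobolev-type space."""
    coeffs = np.array([(2*n + 1) / (4*np.pi) / weights[n]**2
                       for n in range(len(weights))])
    return legval(t, coeffs)

def spherical_spline(nodes, values, weights):
    """Interpolating spline s(x) = sum_i c_i K(x . x_i), with coefficients c
    solving the Gram system  K c = values."""
    G = spline_kernel(np.clip(nodes @ nodes.T, -1, 1), weights)
    c = np.linalg.solve(G, values)
    return lambda x: spline_kernel(np.clip(x @ nodes.T, -1, 1), weights) @ c

rng = np.random.default_rng(2)
nodes = rng.standard_normal((30, 3))
nodes /= np.linalg.norm(nodes, axis=1, keepdims=True)  # points on the sphere
values = nodes[:, 2]**2                                # a smooth test function
weights = np.array([(n + 0.5)**2 for n in range(40)])  # A_n ~ n^2
s = spherical_spline(nodes, values, weights)
print(np.allclose(s(nodes), values, atol=1e-6))
```

Growing weight sequences penalize high degrees more strongly and hence yield smoother interpolants, which is the flexibility the abstract refers to.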
As a first approximation the Earth is a sphere; as a second approximation it may be considered an ellipsoid of revolution. The deviations of the actual Earth's gravity field from the ellipsoidal 'normal' field are so small that they can be treated as linear. The splitting of the Earth's gravity field into a 'normal' and a remaining small 'disturbing' field considerably simplifies the problem of its determination. Under the assumption of an ellipsoidal Earth model, high observational accuracy is achievable only if the deviation (deflection of the vertical) of the physical plumb line, to which measurements refer, from the ellipsoidal normal is not ignored. Hence, the determination of the disturbing potential from known deflections of the vertical is a central problem of physical geodesy. In this paper we propose a new, promising method for modelling the disturbing potential locally from deflections of the vertical. Essential tools are integral formulae on the sphere based on the Green's function of the Beltrami operator. The determination of the disturbing potential from deflections of the vertical is formulated as a multiscale procedure involving scale-dependent regularized versions of the surface gradient of the Green's function. The modelling process is based on a multiscale framework using locally supported surface curl-free vector wavelets.
A concept of generalized discrepancy, which involves pseudodifferential operators to give a criterion of equidistributed pointsets, is developed on the sphere. A simply structured formula in terms of elementary functions is established for the computation of the generalized discrepancy. With the help of this formula five kinds of point systems on the sphere, namely lattices in polar coordinates, transformed 2-dimensional sequences, rotations on the sphere, triangulation, and sum of three squares sequence, are investigated. Quantitative tests are done, and the results are compared with each other. Our calculations exhibit different orders of convergence of the generalized discrepancy for different types of point systems.
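The paper's generalized discrepancy involves pseudodifferential operators; as a much simpler stand-in, the sketch below compares two of the point systems mentioned above (a lattice in polar coordinates and a spiral-type construction) via a Monte Carlo estimate of the classical spherical-cap discrepancy. All constructions and parameters are illustrative, not the paper's formula.

```python
import numpy as np

def polar_lattice(m):
    # m x m lattice in polar coordinates (theta, phi): clusters near the poles
    theta = np.linspace(0.0, np.pi, m + 2)[1:-1]
    phi = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    T, P = np.meshgrid(theta, phi)
    return np.stack([np.sin(T) * np.cos(P),
                     np.sin(T) * np.sin(P),
                     np.cos(T)], axis=-1).reshape(-1, 3)

def spiral_points(n):
    # Fibonacci-spiral construction, nearly equidistributed on the sphere
    i = np.arange(n)
    z = 1.0 - (2.0 * i + 1.0) / n
    phi = np.pi * (1.0 + 5.0 ** 0.5) * i
    r = np.sqrt(1.0 - z ** 2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=-1)

def cap_discrepancy(points, trials=4000, seed=0):
    # Monte Carlo spherical-cap discrepancy: maximum over random caps of
    # |fraction of points inside the cap - normalized cap area|
    rng = np.random.default_rng(seed)
    c = rng.normal(size=(trials, 3))
    c /= np.linalg.norm(c, axis=1, keepdims=True)   # random cap axes
    h = rng.uniform(-1.0, 1.0, trials)              # random cap heights
    frac = ((points @ c.T) > h).mean(axis=0)
    return np.abs(frac - (1.0 - h) / 2.0).max()
```

With roughly 900 points each, the spiral system yields a visibly smaller cap discrepancy than the polar lattice, whose points cluster near the poles; this mirrors the different convergence orders reported in the paper for different point systems.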
Spherical Tikhonov Regularization Wavelets in Satellite Gravity Gradiometry with Random Noise
(2000)
This paper considers a special class of regularization methods for satellite gravity gradiometry based on Tikhonov spherical regularization wavelets with particular emphasis on the case of data blurred by random noise. A convergence rate is proved for the regularized solution, and a method is discussed for choosing the regularization level a posteriori from the gradiometer data.
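A toy stand-in for this setting: Tikhonov regularization of a discretized Gaussian convolution (a classical ill-posed problem), with the regularization parameter chosen a posteriori by Morozov's discrepancy principle. The operator, noise level and all parameters below are assumptions for illustration; the paper's operator is the satellite gravity gradiometry operator and its regularization is wavelet-based.

```python
import numpy as np

# Discretized Gaussian convolution on [0, 1] as an ill-posed forward operator.
n = 100
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]
A = h * np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.05 ** 2))

x_true = np.exp(-(t - 0.5) ** 2 / (2 * 0.1 ** 2))   # smooth "signal"
rng = np.random.default_rng(1)
noise = 1e-4 * rng.standard_normal(n)
y = A @ x_true + noise
delta = np.linalg.norm(noise)                       # noise level (known here)

def tikhonov(alpha):
    # Regularized normal equations: (A^T A + alpha I) x = A^T y
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Morozov discrepancy principle: pick the smallest alpha whose residual
# reaches tau * delta (the residual norm grows monotonically with alpha).
tau = 1.2
alphas = np.logspace(-12, 0, 60)
residuals = np.array([np.linalg.norm(A @ tikhonov(a) - y) for a in alphas])
alpha_star = alphas[int(np.argmax(residuals >= tau * delta))]
x_rec = tikhonov(alpha_star)
```

The a posteriori choice needs only the data and the noise level, not the unknown solution; this is the same principle behind choosing the regularization level from the gradiometer data in the paper.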
The basic idea behind selective multiscale reconstruction of functions from error-affected data is outlined on the sphere. The selective reconstruction mechanism is based on the premise that multiscale approximation can be well-represented in terms of only a relatively small number of expansion coefficients at various resolution levels. An attempt is made within a tree algorithm (pyramid scheme) to remove the noise component from each scale coefficient using a priori statistical information (provided by an error covariance kernel of a Gaussian, stationary stochastic model).
The following three papers present recent developments in multiscale gravitational field modeling by the use of CHAMP or CHAMP-related data. Part A - The Model SWITCH-03: Observed orbit perturbations of the near-Earth orbiting satellite CHAMP are analyzed to recover the long-wavelength features of the Earth's gravitational potential. More precisely, by tracking the low-flying satellite CHAMP with the high-flying satellites of the Global Positioning System (GPS), a kinematic orbit of CHAMP is obtainable from GPS tracking observations, i.e., the ephemeris in Cartesian coordinates in an Earth-fixed coordinate frame (WGS84) becomes available. In this study we are concerned with two tasks: first, we present new methods for preprocessing, modelling and analyzing the emerging tracking data; then we demonstrate the strength of our approach by applying it to simulated CHAMP orbit data, and subsequently present results obtained by operating on a data set derived from real CHAMP data. The modelling is mainly based on a connection between non-bandlimited spherical splines and least-squares adjustment techniques to take into account the non-sphericity of the trajectory. Furthermore, harmonic regularization wavelets for solving the underlying Satellite-to-Satellite Tracking (SST) problem are used within the framework of multiscale recovery of the Earth's gravitational potential, leading to SWITCH-03 (Spline and Wavelet Inverse Tikhonov regularized CHamp data). It is also shown how regularization parameters can be adapted to a specific region, improving on a globally resolved model. Finally, we compare the developed model to the EGM96 model, the model UCPH2002_02_0.5 from the University of Copenhagen, and the GFZ models EIGEN-1s and EIGEN-2. Part B - Multiscale Solutions from CHAMP: CHAMP orbits and accelerometer data are used to recover the long- to medium-wavelength features of the Earth's gravitational potential.
In this study we are concerned with analyzing preprocessed data in a framework of multiscale recovery of the Earth's gravitational potential, allowing both global and regional solutions. The energy conservation approach has been used to convert orbits and accelerometer data into in-situ potential. Our modelling is spacewise, based on (1) non-bandlimited least-squares adjustment splines to take into account the true (non-spherical) shape of the trajectory and (2) harmonic regularization wavelets for solving the underlying inverse problem of downward continuation. Furthermore, we show that by adapting regularization parameters to specific regions, local solutions can improve considerably on global ones. We apply this concept to kinematic CHAMP orbits and, for test purposes, to dynamic orbits. Finally, we compare our recovered model to the EGM96 model and the GFZ models EIGEN-2 and EIGEN-GRACE01s. Part C - Multiscale Modeling from EIGEN-1S, EIGEN-2, EIGEN-GRACE01S, UCPH2002_0.5, EGM96: Spherical wavelets have been developed by the Geomathematics Group Kaiserslautern for several years and have been successfully applied to georelevant problems. Wavelets can be considered as consecutive band-pass filters and allow local approximations. The wavelet transform can also be applied to spherical harmonic models of the Earth's gravitational field such as the up-to-date EIGEN-1S, EIGEN-2, EIGEN-GRACE01S, UCPH2002_0.5, and the well-known EGM96. The resulting wavelet coefficients, which shall be made available to other interested groups, allow the reconstruction of the wavelet approximations. Different types of wavelets are considered: bandlimited wavelets (here: Shannon and Cubic Polynomial (CP)) as well as non-bandlimited ones (in our case: Abel-Poisson). For these types, wavelet coefficients are computed and wavelet variances are given. The data format of the wavelet coefficients is also included.
This survey paper deals with multiresolution analysis from geodetically relevant data and its numerical realization for functions harmonic outside a (Bjerhammar) sphere inside the Earth. Harmonic wavelets are introduced within a suitable framework of a Sobolev-like Hilbert space. Scaling functions and wavelets are defined by means of convolutions. A pyramid scheme provides efficient implementation and economical computation. Essential tools are the multiplicative Schwarz alternating algorithm (providing domain decomposition procedures) and fast multipole techniques (accelerating iterative solvers of linear systems).
By means of the limit and jump relations of classical potential theory the framework of a wavelet approach on a regular surface is established. The properties of a multiresolution analysis are verified, and a tree algorithm for fast computation is developed based on numerical integration. As applications of the wavelet approach some numerical examples are presented, including the zoom-in property as well as the detection of high frequency perturbations. At the end we discuss a fast multiscale representation of the solution of (exterior) Dirichlet's or Neumann's boundary-value problem corresponding to regular surfaces.
The basic concepts of selective multiscale reconstruction of functions on the sphere from error-affected data are outlined for scalar functions. The selective reconstruction mechanism is based on the premise that multiscale approximation can be well-represented in terms of only a relatively small number of expansion coefficients at various resolution levels. A new pyramid scheme is presented to efficiently remove the noise at different scales using a priori statistical information.
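A drastically simplified one-dimensional analogue of such a selective pyramid scheme is coefficient thresholding in a Haar pyramid: only the few detail coefficients that rise above a noise-dependent threshold survive. The paper works on the sphere and uses an error covariance kernel; the universal threshold used below is a generic assumption for the sketch.

```python
import numpy as np

def haar_analysis(x):
    # One-level Haar split into approximation (a) and detail (d) coefficients.
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_synthesis(a, d):
    # Inverse of haar_analysis.
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x, sigma, levels=4):
    # Selective reconstruction: keep only detail coefficients exceeding a
    # noise-dependent threshold. Assumes len(x) divisible by 2**levels.
    thresh = sigma * np.sqrt(2.0 * np.log(x.size))   # universal threshold
    details = []
    a = x
    for _ in range(levels):
        a, d = haar_analysis(a)
        details.append(np.where(np.abs(d) > thresh, d, 0.0))
    for d in reversed(details):
        a = haar_synthesis(a, d)
    return a
```

For a smooth signal almost all detail coefficients are discarded, so the reconstruction is carried by a small number of coarse-scale coefficients, exactly the premise of the selective reconstruction mechanism.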
Spline functions that approximate (geostrophic) wind field or ocean circulation data are developed in a weighted Sobolev space setting on the (unit) sphere. Two problems are discussed in more detail: the modelling of the (geostrophic) wind field from (i) discrete scalar air pressure data and (ii) discrete vectorial velocity data. Domain decomposition methods based on the Schwarz alternating algorithm for positive definite symmetric matrices are described for solving the large linear systems occurring in vectorial spline interpolation or smoothing of geostrophic flow.
The purpose of this paper is the canonical connection of classical global gravity field determination following the concept of Stokes (1849), Bruns (1878), and Neumann (1887) on the one hand and modern locally oriented multiscale computation by use of adaptive locally supported wavelets on the other hand. Essential tools are regularization methods for the Green, Neumann, and Stokes integral representations. The multiscale approximation is realized simply as a linear difference scheme by use of Green, Neumann, and Stokes wavelets, respectively. As an application, gravity anomalies caused by plumes are investigated for the Hawaiian and Iceland areas.
Successful compliance management rests indispensably on a lived, authentic, values-driven compliance culture as part of the organizational culture. Although organizational cultures cannot be planned and steered instructively from the outside, they can be influenced through targeted systemic interventions.
A sustainable compliance culture involves the continuous communication of the organization's members about individual and organizational assumptions, values, and patterns of thinking and behavior, and a resulting shared understanding of the significance of compliance in the organization.
Effective interventions of systemic consulting start at the mental models, not at formal rules and control mechanisms. Compliance management that focuses primarily on control, and thus on individual misconduct, ignores the organizational contexts and, from a systemic perspective, cannot be successful in the long run.
Managers play a particularly important role in anchoring compliance in the organization.
The task of systemic consultants is to enable the client system to break established patterns and to open up new options for action through suitable process facilitation and interventions. All interventions aim at the reflection of personal and organizational mental models and at developing the organization members' independent and responsible capacity to act on the basis of a shared compliance culture.
Compliance is an ongoing change process and thus part of an organization's general capacity for change. This process toward an organization of integrity must take place on both the personal and the organizational level. The resulting interactions not only influence the topic of compliance but also have a positive effect on the overall system as a learning and reflecting organization.
Systemic consulting can sustainably support compliance management in anchoring a compliance culture in organizations by actively involving stakeholders in the change process, opening up new perspectives, making implicit knowledge visible, and reflecting on fundamental assumptions.
The proliferation of sensors in everyday devices – especially in smartphones – has led to crowd sensing becoming an important technique in many urban applications, ranging from noise pollution mapping and road condition monitoring to tracking the spread of diseases. However, in order to establish integrated crowd sensing environments on a large scale, some open issues need to be tackled first. On a high level, this thesis concentrates on two of those key issues: (1) efficiently collecting and processing large amounts of sensor data from smartphones in a scalable manner and (2) extracting abstract data models from the collected data sets, thereby enabling the development of complex smart city services based on the extracted knowledge.
Going into more detail, the first main contribution of this thesis is the development of methods and architectures that facilitate simple and efficient deployment, scalability and adaptability of crowd sensing applications in a broad range of scenarios, while at the same time enabling the integration of incentive mechanisms for the participating general public. An evaluation within a complex, large-scale environment shows that real-world deployments of the proposed data recording architecture are in fact feasible. The second major contribution of this thesis is a novel methodology for using the recorded data to extract abstract data models that correctly represent the inherent core characteristics of the source data. Finally, to bring together the results of the thesis, it is demonstrated how the proposed architecture and the modeling method can be used to implement a complex smart city service by employing a data-driven development approach.
Maximum Likelihood Estimators for Markov Switching Autoregressive Processes with ARCH Component
(2009)
We consider a mixture of AR-ARCH models in which the switching between the basic states of the observed time series is controlled by a hidden Markov chain. Under simple conditions, we prove consistency and asymptotic normality of the maximum likelihood parameter estimates, combining general asymptotic results of Douc et al. (2004) with the geometric ergodicity results of Franke et al. (2007).
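The data-generating mechanism studied here can be made concrete by a small simulation: a two-state hidden Markov chain selects the AR and ARCH parameters of each regime. The parameter values below are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

def simulate_ms_ar_arch(n, seed=0):
    # Two-state Markov-switching AR(1)-ARCH(1): the hidden chain s[t]
    # picks the regime parameters (ar, a0, a1) at each time step.
    rng = np.random.default_rng(seed)
    P = np.array([[0.95, 0.05],      # regime transition matrix
                  [0.10, 0.90]])
    ar = [0.5, -0.3]                 # AR(1) coefficients per regime
    a0 = [0.1, 0.4]                  # ARCH intercepts per regime
    a1 = [0.2, 0.5]                  # ARCH slopes per regime
    s = np.zeros(n, dtype=int)
    x = np.zeros(n)
    eps_prev = 0.0
    for t in range(1, n):
        s[t] = rng.choice(2, p=P[s[t - 1]])
        k = s[t]
        sigma2 = a0[k] + a1[k] * eps_prev ** 2   # conditional variance
        eps = np.sqrt(sigma2) * rng.standard_normal()
        x[t] = ar[k] * x[t - 1] + eps
        eps_prev = eps
    return x, s
```

The stability conditions used in the consistency proof (here: |ar| < 1 and a1 < 1 in both regimes) are what keep trajectories of this kind from exploding.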
We derive some asymptotics for a new approach to curve estimation proposed by Mrázek et al. [MWB06], which combines localization and regularization. This methodology has been considered as the basis of a unified framework covering various smoothing methods in the analogous two-dimensional problem of image denoising. As a first step towards understanding this approach theoretically, we restrict our discussion here to the least-squares distance, where we have explicit formulas for the function estimates and can derive a rather complete asymptotic theory from known results for the Priestley-Chao curve estimate. In this paper, we consider only the case where the bias dominates the mean-square error; other situations are dealt with in subsequent papers.
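The Priestley-Chao estimate referred to above has a simple closed form for an equispaced fixed design: m_hat(x) = (1/(n h)) * sum_i K((x - x_i)/h) * y_i. A minimal sketch (Gaussian kernel and bandwidth chosen for illustration):

```python
import numpy as np

def priestley_chao(x_eval, x_obs, y_obs, h):
    # Priestley-Chao kernel estimate for an equispaced design on [0, 1]:
    # m_hat(x) = (1 / (n * h)) * sum_i K((x - x_i) / h) * y_i
    n = x_obs.size
    K = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)  # Gaussian kernel
    U = (x_eval[:, None] - x_obs[None, :]) / h
    return K(U) @ y_obs / (n * h)
```

The bias-dominated regime discussed in the paper corresponds to bandwidths for which the h^2-order smoothing bias of this estimate outweighs its O(1/(n h)) variance.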
In this paper we consider a CHARME model, a class of generalized mixtures of nonlinear nonparametric AR-ARCH time series. We apply the theory of Markov models to derive asymptotic stability of this model. The goal is to provide sets of conditions under which the model is geometrically ergodic and therefore satisfies certain mixing conditions. This result can be considered a basis for an asymptotic theory of the model.
We consider data generating mechanisms which can be represented as mixtures of finitely many regression or autoregression models. We propose nonparametric estimators for the functions characterizing the various mixture components based on a local quasi maximum likelihood approach and prove their consistency. We present an EM algorithm for calculating the estimates numerically which is mainly based on iteratively applying common local smoothers and discuss its convergence properties.
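The estimation scheme described above can be sketched as an EM algorithm whose M-step applies weighted local (Nadaraya-Watson) smoothers. The kernel, bandwidth, two-component restriction and the crude initialization below are assumptions for the sketch, not the paper's exact procedure.

```python
import numpy as np

def nw_smooth(x_eval, x, y, w, h):
    # Nadaraya-Watson smoother with per-observation weights w.
    K = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)
    W = K * w[None, :]
    return (W @ y) / W.sum(axis=1)

def em_mixture_regression(x, y, h=0.1, n_iter=25):
    # EM for a 2-component nonparametric mixture of regressions:
    # alternate posterior responsibilities (E-step) with weighted
    # local smoothing of each component function (M-step).
    r = np.where(y > y.mean(), 0.9, 0.1)      # crude initial split
    R = np.stack([r, 1.0 - r], axis=1)
    for _ in range(n_iter):
        # M-step: component functions, noise levels, mixing proportions
        m = np.stack([nw_smooth(x, x, y, R[:, k], h) for k in range(2)], axis=1)
        sigma = np.sqrt((R * (y[:, None] - m) ** 2).sum(axis=0) / R.sum(axis=0))
        pi = R.mean(axis=0)
        # E-step: Gaussian posterior responsibilities
        dens = pi * np.exp(-0.5 * ((y[:, None] - m) / sigma) ** 2) / sigma
        R = dens / dens.sum(axis=1, keepdims=True)
    return m, R
```

Each M-step is just a common local smoother applied with the current responsibilities as weights, which is exactly why the algorithm reduces to iteratively applying standard smoothing routines.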