III/V semiconductor quantum dots (QD) have been a focus of optoelectronics research for about 25 years now. Most of the work has been done on InAs QD on GaAs substrate, but, e.g., Ga(As)Sb (antimonide) QD on GaAs substrate/buffer have also gained attention over the last 12 years. There is a scientific dispute on whether or not a wetting layer forms before antimonide QD formation, as commonly expected for Stranski-Krastanov growth. Usually ex situ photoluminescence (PL) and atomic force microscopy (AFM) measurements are performed to resolve such issues. In this contribution, we show that reflectance anisotropy/difference spectroscopy (RAS/RDS) can be used for the same purpose as an in situ, real-time monitoring technique. It can be employed not only to identify QD growth via a distinct RAS spectrum, but also to obtain information on the existence of a wetting layer and its thickness. The data suggest that for antimonide QD growth the wetting layer has a thickness of only 1 ML (one monolayer).
Modern society relies on convenience services and mobile communication. Cloud computing is the current trend to make data and applications available at any time on every device. Data centers concentrate computation and storage at central locations, while claiming to be green thanks to optimized maintenance and increased energy efficiency. The key enabler for this evolution is the microelectronics industry. The trend toward power-efficient mobile devices has forced this industry to change its design dogma to: "keep data locally and reduce data communication whenever possible". Therefore we ask: is cloud computing repeating the aberrations of its enabling industry?
The plasma membrane transporter SOS1 (SALT-OVERLY SENSITIVE1) is vital for plant survival under salt stress. SOS1 activity is tightly regulated, but little is known about the underlying mechanism. SOS1 contains a cytosolic, autoinhibitory C-terminal tail (abbreviated as SOS1 C-term), which is targeted by the protein kinase SOS2 to trigger its transport activity. Here, to identify additional binding proteins that regulate SOS1 activity, we synthesized the SOS1 C-term domain and used it as bait to probe Arabidopsis thaliana cell extracts. Several 14-3-3 proteins, which function in plant salt tolerance, specifically bound to and interacted with the SOS1 C-term. Compared to wild-type plants, when exposed to salt stress, Arabidopsis plants overexpressing the SOS1 C-term showed improved salt tolerance, significantly reduced Na+ accumulation in leaves, reduced induction of the salt-responsive gene WRKY25, decreased soluble sugar, starch, and proline levels, less impaired inflorescence formation and increased biomass. It appears that overexpressing the SOS1 C-term leads to the sequestration of inhibitory 14-3-3 proteins, allowing SOS1 to be more readily activated and leading to increased salt tolerance. We propose that the SOS1 C-term binds to previously unknown proteins such as 14-3-3 isoforms, thereby regulating salt tolerance. This finding uncovers another regulatory layer of the plant salt tolerance program.
Previously in this journal we have reported on fundamental transverse mode selection (TMS#0) of broad area semiconductor lasers (BALs) with integrated twice-retracted 4f set-up and film-waveguide lens as the Fourier-transform element. Now we choose and report on a simpler approach for BAL-TMS#0, i.e., the use of a stable confocal longitudinal BAL resonator of length L with a transverse constriction. The absolute value of the radius R of curvature of both mirror facets, convex in one dimension (1D), is R = L = 2f with focal length f. The round-trip length 2L = 4f again makes up a Fourier-optical 4f set-up, and the constriction, resulting in a resonator-internal beam waist, acts as a Fourier-optical low-pass spatial frequency filter. Good TMS#0 is achieved as long as the constriction is tight enough, but filamentation is not completely suppressed.
1. Introduction
Broad area (semiconductor diode) lasers (BALs) are intended to emit high optical output powers (where "high" is relative and depends on the material system). Compared to conventional narrow-stripe lasers, the higher power is distributed over a larger transverse cross-section, thus avoiding catastrophic optical mirror damage (COMD). Typical BALs have emitter widths of around 100 µm.
The drawback is the distribution of the high output power over a large number of transverse modes (in cases without countermeasures), limiting the portion of the light power in the fundamental transverse mode (mode #0), which ought to be maximized for the sake of good light focusability.
Thus, techniques have to be used to support, prefer, or select the fundamental transverse mode (transverse mode selection TMS#0) by suppressing higher-order modes already upon build-up of the laser oscillation.
In many cases reported in the literature, either a BAL facet, the
2D quantum dilaton gravitational Hamiltonian, boundary terms and new definition for total energy
(1995)
The ADM and Bondi mass for the RST model are first discussed following Hawking and Horowitz's argument. Since there is a nonlocal term in the RST model, the RST Lagrangian has to be localized so that Hawking and Horowitz's proposal can be carried out. Expressing the localized RST action in terms of the ADM formulation, the RST Hamiltonian can be derived while keeping track of all boundary terms. The total boundary terms can then be taken as the total energy for the RST model. Our result shows that the previous expression for the ADM and Bondi mass needs to be modified at the quantum level, whereas at the classical level our mass formula reduces to that given by Bilal and Kogan [5] and de Alwis [6]. We find that there is a new contribution to the ADM and Bondi mass from the RST boundary due to the existence of the hidden dynamical field. The ADM and Bondi mass with and without the RST boundary are discussed in detail for the static and dynamical solutions, respectively, and some new properties are found. The thunderpop of the RST model is also encountered in our new Bondi mass formula.
This paper considers the numerical solution of a transmission boundary-value problem for the time-harmonic Maxwell equations with the help of a special finite volume discretization. Applying this technique to several three-dimensional test problems, we obtain large, sparse, complex linear systems, which are solved using BiCG, CGS, BiCGSTAB, and GMRES. We combine these methods with suitably chosen preconditioning matrices and compare the speed of convergence.
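For illustration, a sketch of such a solver comparison is given below; the tridiagonal stand-in matrix, the ILU preconditioner and all sizes are assumptions for the sketch, not the paper's actual Maxwell discretization.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
# Hypothetical stand-in for the discretized system: sparse, complex, non-Hermitian.
A = sp.diags([-1.0, 4.0 + 0.5j, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n, dtype=complex)

ilu = spla.spilu(A)  # incomplete LU factorization used as preconditioner
M = spla.LinearOperator(A.shape, ilu.solve, dtype=complex)

for name, solver in [("BiCG", spla.bicg), ("CGS", spla.cgs),
                     ("BiCGSTAB", spla.bicgstab), ("GMRES", spla.gmres)]:
    count = [0]
    def cb(arg, count=count):  # counts solver callbacks as an iteration proxy
        count[0] += 1
    x, info = solver(A, b, M=M, maxiter=500, callback=cb)
    residual = np.linalg.norm(b - A @ x)
    print(f"{name:>8}: callbacks={count[0]}, residual={residual:.2e}, info={info}")
```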
Destructive diseases of the lung, like lung cancer or fibrosis, are still often lethal. In the case of fibrosis in the liver, too, the only possible cure is transplantation.
In this thesis, we investigate 3D synchrotron radiation micro computed tomography (SR\( \mu \)CT) images of capillary blood vessels in mouse lungs and livers. The specimens show so-called compensatory lung growth as well as different states of pulmonary and hepatic fibrosis.
During compensatory lung growth, after resecting part of the lung, the remaining part compensates for this loss by extending into the empty space. This process is accompanied by active vessel growth.
In general, the human lung cannot compensate for such a loss. Thus, understanding this process in mice is important to improve treatment options in case of diseases like lung cancer.
In case of fibrosis, the formation of scars within the organ's tissue forces the capillary vessels to grow to ensure blood supply.
Thus, the processes of fibrosis as well as compensatory lung growth can be assessed by considering the capillary architecture.
As preparation of 2D microscopic images is faster, easier, and cheaper compared to SR\( \mu \)CT images, they currently form the basis of medical investigation. Yet, characteristics like direction and shape of objects can only be properly analyzed using 3D imaging techniques. Hence, analyzing SR\( \mu \)CT data provides valuable additional information.
For the fibrotic specimens, we apply image analysis methods well known from materials science. We measure the vessel diameter using the granulometry distribution function and describe the inter-vessel distance by the spherical contact distribution. Moreover, we estimate the directional distribution of the capillary structure. All features turn out to be useful to characterize fibrosis based on the deformation of capillary vessels.
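As an illustration of these two statistics, the following sketch computes an opening-based granulometry and an empirical spherical contact distribution on a synthetic 3D binary image; the toy structure, threshold and radii are our own assumptions, not the thesis data.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
# toy 3D binary "vessel" image: smoothed noise, thresholded to ~30% foreground
smooth = ndimage.gaussian_filter(rng.random((64, 64, 64)), 3)
img = smooth > np.quantile(smooth, 0.7)

def ball(r):
    z, y, x = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return x**2 + y**2 + z**2 <= r**2

# Granulometry: foreground volume surviving an opening with growing balls;
# the decrease per radius estimates the distribution of vessel diameters.
radii = range(1, 6)
opened_volume = [int(ndimage.binary_opening(img, structure=ball(r)).sum())
                 for r in radii]

# Spherical contact distribution: distance from a random background point
# to the structure, via the Euclidean distance transform of the complement.
dist_to_structure = ndimage.distance_transform_edt(~img)
H = [(dist_to_structure[~img] <= r).mean() for r in radii]
print(opened_volume, H)
```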
It is already known that the most efficient mechanism of vessel growth forms small torus-shaped holes within the capillary structure, so-called intussusceptive pillars. Analyzing their location and number contributes strongly to the characterization of vessel growth. Hence, this is of great interest for all three applications. This thesis provides the first algorithm to detect intussusceptive pillars in SR\( \mu \)CT images. After segmentation of the raw image data, our algorithm works automatically and allows for a quantitative evaluation of large amounts of data.
The analysis of SR\( \mu \)CT data using our pillar algorithm as well as the granulometry, spherical contact distribution, and directional analysis extends the current state of the art in medical studies. Although it is not possible to replace certain 3D features by 2D features without losing information, our results could be used to examine 2D features approximating the 3D findings reasonably well.
The various uses of fiber-reinforced composites, for example in the enclosures of planes, boats and cars, generate the demand for a detailed analysis of these materials. The final goal is to optimize fibrous materials by means of “virtual material design”. New fibrous materials are virtually created as realizations of a stochastic model and evaluated with physical simulations. In that way, materials can be optimized for specific use cases without constructing expensive prototypes or performing mechanical experiments. In order to design a practically fabricable material, the stochastic model is first adapted to an existing material and then slightly modified. The virtual reconstruction of the existing material requires precise knowledge of the geometry of its microstructure.

The first part of this thesis describes a fiber quantification method by means of local measurements of the fiber radius and orientation. The combination of a sparse chord length transform and inertia moments leads to an efficient and precise new algorithm. It outperforms existing approaches by being able to treat different fiber radii within one sample, with high precision in continuous space and comparably fast computing time. This local quantification method can be applied directly to gray value images by adapting the directional distance transforms to gray values. In this work, several approaches of this kind are developed and evaluated. Further characterization of the fiber system requires a segmentation of each single fiber. Using basic morphological operators with specific structuring elements, it is possible to derive, for each pixel, a probability that it belongs to a fiber core in a region without overlapping fibers. Tracking high probabilities leads to a partial reconstruction of the fiber cores in non-crossing regions. These core parts are then reconnected across critical regions if they fulfill certain conditions ensuring affiliation to the same fiber.

In the second part of this work, we develop a new stochastic model for dense systems of non-overlapping fibers with a controllable level of bending. Existing approaches in the literature have at least one weakness, be it in achieving high volume fractions, producing non-overlapping fibers, or controlling the bending or the orientation distribution. This gap is bridged by our stochastic model, which operates in two steps. Firstly, a random walk with the multivariate von Mises-Fisher orientation distribution defines bent fibers. Secondly, a force-biased packing approach arranges them in a non-overlapping configuration. Furthermore, we provide estimators of all parameters needed to fit this model to a real microstructure.

Finally, we simulate the macroscopic behavior of different microstructures to derive their mechanical and thermal properties. This part is mostly supported by existing software and serves as a summary of physical simulation applied to random fiber systems. The application to a glass fiber reinforced polymer proves the quality of the reconstruction by our stochastic model, as the effective properties match for both the real microstructure and the realizations of the fitted model. This thesis includes all steps needed to successfully perform virtual material design on various data sets. With novel and efficient algorithms it contributes to the science of analysis and modeling of fiber reinforced materials.
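The first modeling step can be pictured with a minimal sketch: a random walk whose step directions are drawn from a von Mises-Fisher distribution on the sphere, so that the concentration parameter kappa controls the bending. The sampler below is a standard method for the 3D sphere, and all parameter values are illustrative assumptions, not the thesis implementation; the second, force-biased packing step is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_vmf3(mu, kappa):
    """One von Mises-Fisher sample on the unit sphere around direction mu."""
    xi = rng.random()
    w = 1.0 + np.log(xi + (1.0 - xi) * np.exp(-2.0 * kappa)) / kappa
    theta = rng.uniform(0.0, 2.0 * np.pi)
    s = np.sqrt(max(0.0, 1.0 - w * w))
    x = np.array([s * np.cos(theta), s * np.sin(theta), w])
    # rotate the pole (0, 0, 1) onto mu via an orthonormal basis
    e3 = mu / np.linalg.norm(mu)
    a = np.array([1.0, 0.0, 0.0]) if abs(e3[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(a, e3); e1 /= np.linalg.norm(e1)
    e2 = np.cross(e3, e1)
    return x[0] * e1 + x[1] * e2 + x[2] * e3

def bent_fiber(start, direction, n_steps=50, step=1.0, kappa=50.0):
    """Random-walk fiber; smaller kappa means stronger bending."""
    points = [np.asarray(start, dtype=float)]
    d = np.asarray(direction, dtype=float)
    for _ in range(n_steps):
        d = sample_vmf3(d, kappa)  # new direction concentrated around the old one
        points.append(points[-1] + step * d)
    return np.array(points)

fiber = bent_fiber([0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```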
We compute three-dimensional displacement vector fields to estimate the deformation of microstructural data sets in mechanical tests. For this, we extend the well-known optical flow method of Brox et al. to three dimensions, with special focus on the discretization of the nonlinear terms. We evaluate our method first by synthetically deforming foams and comparing against this ground truth, and second on data sets of samples that underwent real mechanical tests. Our results are compared to those from state-of-the-art algorithms in materials science and medical image registration. By a thorough evaluation, we show that our proposed method resolves the displacements best among all chosen comparison methods.
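To make the idea of a dense 3D displacement field concrete, here is a toy estimator in the same spirit but using the much simpler Horn-Schunck model, not the Brox et al. energy actually extended in the paper; volume sizes and parameters are made up.

```python
import numpy as np
from scipy import ndimage

def horn_schunck_3d(vol0, vol1, alpha=0.5, n_iter=200):
    """Dense 3D displacement field (u, v, w) from vol0 to vol1."""
    Iz, Iy, Ix = np.gradient(vol0.astype(float))
    It = vol1.astype(float) - vol0.astype(float)
    u = np.zeros_like(It); v = np.zeros_like(It); w = np.zeros_like(It)
    for _ in range(n_iter):
        # local means approximate the smoothness (regularization) term
        ub, vb, wb = (ndimage.uniform_filter(f, size=3) for f in (u, v, w))
        t = (Ix * ub + Iy * vb + Iz * wb + It) / (alpha**2 + Ix**2 + Iy**2 + Iz**2)
        u, v, w = ub - Ix * t, vb - Iy * t, wb - Iz * t
    return u, v, w

rng = np.random.default_rng(0)
vol0 = ndimage.gaussian_filter(rng.random((32, 32, 32)), 2)
vol0 = (vol0 - vol0.mean()) / vol0.std()   # normalize contrast
vol1 = np.roll(vol0, 1, axis=2)            # synthetic shift of one voxel along x
u, v, w = horn_schunck_3d(vol0, vol1)
print(u.mean())  # positive mean x-displacement, toward the true unit shift
```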
In this contribution a phase field model for ductile fracture with linear isotropic hardening is presented. An energy functional consisting of an elastic energy, a plastic dissipation potential and a Griffith type fracture energy constitutes the model. The application of an unaltered radial return algorithm on element level is possible due to the choice of an appropriate coupling between the nodal degrees of freedom, namely the displacement and the crack/fracture fields. The degradation function models the mentioned coupling by reducing the stiffness of the material and the plastic contribution of the energy density in broken material. Furthermore, to solve the global system of differential equations comprising the balance of linear momentum and the quasi-static Ginzburg-Landau type evolution equation, the application of a monolithic iterative solution scheme becomes feasible. The compact model is used to perform 3D simulations of fracture in tension. The computed plastic zones are compared to the dog-bone model that is used to derive validity criteria for \(K_{Ic}\) measurements.
The fifth-generation (5G) of wireless networks promises to bring new advances, such as a huge increase in mobile data rates, a plunge in communications latency, and an increase in the quality of experience perceived by users, that can cope with the ever-increasing demand in Internet traffic. However, the high capital and operational expenditure (CAPEX/OPEX) of the new 5G network and the lack of a killer application hinder its rapid adoption. In this context, Mobile Network Operators (MNOs) have turned their attention to the following idea: opening up their infrastructure so that vertical businesses can leverage the new 5G network to improve their primary businesses and develop new ones. However, deploying multiple isolated vertical applications on top of the same infrastructure poses unique challenges that must be addressed. In this thesis, we provide critical contributions to developing 5G networks that accommodate different vertical applications in an isolated, flexible, and automated manner. The contributions of this thesis span three main areas: (i) the development of an integrated fronthaul and backhaul network, (ii) the development of a network slicing overbooking algorithm, and (iii) the development of a method to mitigate the noisy neighbors' problem in a vRAN deployment.
Sensing location information in indoor scenes requires high accuracy and is a challenging task, mainly because of multipath and NLoS (non-line-of-sight) propagation. GNSS signals cannot penetrate well into indoor environments, so satellite-based navigation and positioning systems cannot be used for indoor positioning. Other technologies have been suggested for indoor usage, among them Wi-Fi (802.11) and 5G NR (New Radio). The primary aim of this study is to discuss the advantages and drawbacks of 5G and Wi-Fi positioning techniques for indoor localization.
This paper presents a new approach to parallel path planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on a best-first search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on polyhedral models of the robot and the obstacles. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel path planner on a workstation cluster with 9 PCs and tested the planner for several benchmark environments. With optimal discretisation, the new approach usually shows very good speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
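The load-distribution idea can be sketched in a few lines; the cell count per axis and the worker count below are illustrative assumptions, not the paper's implementation.

```python
import itertools

def cyclic_assignment(cells_per_axis, n_workers):
    """Map each 6D hypercube (i1, ..., i6) to a worker in round-robin order."""
    assignment = {}
    for k, cell in enumerate(itertools.product(range(cells_per_axis), repeat=6)):
        assignment[cell] = k % n_workers  # cyclic mapping spreads the load
    return assignment

# e.g. 3 cells per joint axis, 9 PCs as in the experiments
workers = cyclic_assignment(3, 9)
```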
We present here a two-dimensional kinetic scheme for the equations governing the motion of a compressible flow of an ideal gas (air), based on the Kaniel method. The basic flux functions are computed analytically and used in the organization of the flux computation. The algorithm is implemented and tested for the 1D shock and 2D shock-obstacle interaction problems.
In this paper a three-dimensional stochastic model for the lay-down of fibers on a moving conveyor belt in the production process of nonwoven materials is derived. The model is based on stochastic differential equations describing the resulting position of the fiber on the belt under the influence of turbulent air flows. The model presented here is an extension of an existing surrogate model, see [6, 3].
The World Wide Web is a medium through which a manufacturer may allow Internet visitors to customize or compose his products. Due to missing or rapidly changing standards, these applications are often restricted to relatively simple CGI or JAVA based scripts. Usually, results like images or movies are stored in a database and transferred on demand to the web user. Viper (Visualisierung parametrisch editierbarer Raumkomponenten) is a Toolkit [VIP96] written in C++ and JAVA which provides 3D modeling and visualization methods for developing complex web-based applications. The Toolkit has been designed to build a prototype which can be used to construct and visualize prefabricated homes on the Internet. Alternative applications are outlined in this paper. Within Viper, all objects are stored in a scene graph (VSSG), which is the basic data structure of the Toolkit. To show the concept and structure of the Toolkit, the functionality and implementation of the prototype are described.
The classic approach in robust optimization is to optimize the solution with respect to the worst-case scenario. This pessimistic approach yields solutions that perform best if the worst scenario happens, but usually perform badly on average. A solution that optimizes the average performance, on the other hand, lacks a worst-case performance guarantee.
In practice it is important to find a good compromise between these two solutions. We propose to deal with this problem by considering it from a bicriteria perspective. The Pareto curve of the bicriteria problem visualizes exactly how costly it is to ensure robustness and helps to choose the solution with the best balance between expected and guaranteed performance.
Building upon a theoretical observation on the structure of Pareto solutions for problems with polyhedral feasible sets, we present a column generation approach that requires no direct solution of the computationally expensive worst-case problem. In computational experiments we demonstrate the effectiveness of both the proposed algorithm and the bicriteria perspective in general.
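The bicriteria perspective can be illustrated on a toy problem with a polyhedral feasible set: sweeping a weight between the average-case objective and an epigraph-linearized worst-case objective traces the Pareto curve. The scenario data and the direct LP solution below are our own illustration; the paper's column generation approach precisely avoids solving the worst-case problem directly.

```python
import numpy as np
from scipy.optimize import linprog

C = np.array([[4.0, 1.0, 3.0],   # scenario costs, one row per scenario
              [1.0, 5.0, 2.0],
              [2.0, 2.0, 4.0]])
n_scen, n_var = C.shape
avg = C.mean(axis=0)

# keep lam > 0 so the epigraph variable t stays tight at the worst-case cost
for lam in np.linspace(0.05, 1.0, 5):
    # variables: x (n_var entries) and the epigraph variable t >= c_s . x
    c = np.concatenate([(1.0 - lam) * avg, [lam]])
    A_ub = np.hstack([C, -np.ones((n_scen, 1))])               # C x - t <= 0
    b_ub = np.zeros(n_scen)
    A_eq = np.concatenate([np.ones(n_var), [0.0]])[None, :]    # sum x = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n_var + [(None, None)])
    x, t = res.x[:n_var], res.x[-1]
    print(f"lambda={lam:.2f}  average={avg @ x:.3f}  worst-case={t:.3f}")
```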
A branch-and-cut approach and alternative formulations for the traveling salesman problem with drone
(2020)
In this paper, we are interested in studying the traveling salesman problem with drone (TSP-D). Given a set of customers and a truck that is equipped with a single drone, the TSP-D asks that all customers are served exactly once and minimal delivery time is achieved. We provide two compact mixed integer linear programming formulations that can be used to address instances with up to 10 customers within a few seconds. Notably, we introduce a third formulation for the TSP-D with an exponential number of constraints. The latter formulation is suitable to be solved by a branch-and-cut algorithm. Indeed, this approach can be used to find optimal solutions for several instances with up to 20 customers within 1 hour, thus challenging the current state-of-the-art in solving the TSP-D. A detailed numerical study provides an in-depth comparison of the effectiveness of the proposed formulations. Moreover, we reveal further details on the operational characteristics of a drone-assisted delivery system. By using three different sets of benchmark instances, consideration is given to various assumptions that affect, for example, technological drone parameters and the impact of distance metrics.
We consider the problem of evacuating a region with the help of buses. For a given set of possible collection points where evacuees gather, and possible shelter locations where evacuees are brought to, we need to determine both collection points and shelters we would like to use, and bus routes that evacuate the region in minimum time.
We model this integrated problem using an integer linear program, and present a branch-cut-and-price algorithm that generates bus tours in its pricing step. In computational experiments we show that our approach is able to solve instances of realistic size in sufficient time for practical application, and considerably outperforms the usage of a generic ILP solver.
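As a toy illustration of the selection-and-assignment core of such an integrated model (without the tour generation handled by the branch-cut-and-price), the following sketch solves a tiny shelter-choice and evacuee-assignment MILP; all numbers, and the use of scipy.optimize.milp instead of a generic ILP solver, are our own assumptions.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

t = np.array([[3.0, 7.0],
              [5.0, 2.0],
              [4.0, 4.0]])            # travel times: collection point i -> shelter j
evac = np.array([40.0, 30.0, 20.0])   # evacuees at each collection point
cap = np.array([90.0, 90.0])          # shelter capacities
K = 1                                  # number of shelters that may be opened
n_i, n_j = t.shape
n_x = n_i * n_j                        # variables: x_ij (flows), then y_j (open?)

c = np.concatenate([t.ravel(), np.zeros(n_j)])

A_eq = np.zeros((n_i, n_x + n_j))      # sum_j x_ij = evac_i
for i in range(n_i):
    A_eq[i, i * n_j:(i + 1) * n_j] = 1.0
A_cap = np.zeros((n_j, n_x + n_j))     # sum_i x_ij <= cap_j * y_j
for j in range(n_j):
    A_cap[j, j:n_x:n_j] = 1.0
    A_cap[j, n_x + j] = -cap[j]
A_open = np.concatenate([np.zeros(n_x), np.ones(n_j)])[None, :]  # sum_j y_j <= K

constraints = [LinearConstraint(A_eq, evac, evac),
               LinearConstraint(A_cap, -np.inf, 0.0),
               LinearConstraint(A_open, -np.inf, K)]
integrality = np.concatenate([np.zeros(n_x), np.ones(n_j)])      # y_j integer
bounds = Bounds(np.zeros(n_x + n_j),
                np.concatenate([np.full(n_x, np.inf), np.ones(n_j)]))

res = milp(c, constraints=constraints, integrality=integrality, bounds=bounds)
print(res.x[n_x:], res.fun)   # which shelter opens, and the total travel time
```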
A building-block model reveals new insights into the biogenesis of yeast mitochondrial ribosomes
(2020)
Most of the mitochondrial proteins in yeast are encoded in the nuclear genome, are synthesized by cytosolic ribosomes, and are imported via TOM and TIM23 into the matrix or other subcompartments of mitochondria. The mitochondrial DNA in yeast, however, also encodes a small set of 8 proteins, most of which are hydrophobic membrane proteins that form core components of the OXPHOS complexes. They are synthesized by mitochondrial ribosomes, which are descendants of bacterial ribosomes and still share some similarities with them. On the other hand, mitochondrial ribosomes experienced various structural and functional changes during evolution that specialized them for the synthesis of the mitochondrially encoded membrane proteins. The mitoribosome contains mitochondria-specific ribosomal proteins and replaced the bacterial 5S rRNA by mitochondria-specific proteins and rRNA extensions. Furthermore, the mitoribosome is tethered to the inner mitochondrial membrane to facilitate co-translational insertion of newly synthesized proteins. Thus, the assembly process of mitoribosomes also differs from that of bacteria and is to date not well understood.
Therefore, the aim of this work was to investigate the biogenesis of mitochondrial ribosomes in yeast. To this end, a strain was generated in which the gene of the mitochondrial RNA-polymerase RPO41 is under the control of an inducible GAL10-promoter. Since the scaffold of ribosomes is built by ribosomal RNAs, depletion of the RNA-polymerase subsequently leads to a loss of mitochondrial ribosomes. Reinduction of Rpo41 initiates the assembly of new mitoribosomes, which makes this strain an attractive model to study mitoribosome biogenesis.
Initially, the effects of Rpo41 depletion on cellular and mitochondrial physiology were investigated. Upon Rpo41 depletion, growth on respiratory glycerol medium was inhibited. Furthermore, the mitochondrial ribosomal 21S and 15S rRNAs were diminished and mitochondrial translation was almost completely absent. Also, mitochondrial DNA was strongly reduced, due to the fact that mtDNA replication requires RNA primers that are synthesized by Rpo41.
Next, the effect of reinduction of Rpo41 on mitochondria was tested. Time course experiments showed that mitochondrial translation can partially recover from 48h Rpo41 depletion within a timeframe of 4.5h. Sucrose gradient sedimentation experiments further showed that the mitoribosomal constitution was comparable to wildtype control samples during the time course of 4.5h of reinduction, suggesting that the ribosome assembly is not fundamentally altered in Gal-Rpo41 mitochondria. In addition, the depletion time was found to be critical for recovery of mitochondrial translation and mitochondrial RNA levels. It was observed that after 36h of Rpo41 depletion, the rRNA levels and mitochondrial translation recovered to almost 100%, but only within a time course of 10h.
Finally, mitochondria from Gal-Rpo41 cells isolated at different timepoints of reinduction were used to perform complexome profiling, and the assembly of mitochondrial protein complexes was investigated. First, the steady state conditions and the assembly process of the mitochondrial respiratory chain complexes were monitored. The individual respiratory chain complexes and the super-complexes of complex III, complex IV and complex V were observed. Furthermore, they were seen to recover from Rpo41 depletion within 4.5h of reinduction. Complexome profiles of the mitoribosomal small and large subunits revealed subcomplexes of mitoribosomal proteins that were assumed to form prior to their incorporation into assembly intermediates. The complexome profiles after reinduction indeed showed the formation of these subcomplexes before formation of the fully assembled subunit. In the mitochondrial LSU, one subcomplex forms the membrane-facing protuberance and a second subcomplex forms the central protuberance. In contrast to the preassembled subcomplexes, proteins that were involved in early assembly steps were found exclusively in the fully assembled subunit. Proteins that assemble at the periphery of the mitoribosome during intermediate and late assembly steps were found in soluble form, suggesting a pool of unassembled proteins that supplies assembly intermediates with proteins.
Taken together, the findings of this thesis suggest a so far unknown building-block model for mitoribosome assembly, in which characteristic structures of the yeast mitochondrial ribosome form preassembled subcomplexes prior to their incorporation into the mitoribosome.
3D integration of solid-state memories and logic, as demonstrated by the Hybrid Memory Cube (HMC), offers major opportunities for revisiting near-memory computation and gives new hope to mitigate the power and performance losses caused by the “memory wall”. In this paper we present the first exploration steps towards the design of the Smart Memory Cube (SMC), a new Processor-in-Memory (PIM) architecture that enhances the capabilities of the logic base (LoB) in the HMC. An accurate simulation environment has been developed, along with a full-featured software stack. All offloading and dynamic overheads caused by the operating system, cache coherence, and memory management are considered as well. Benchmarking results demonstrate up to 2X performance improvement in comparison with the host SoC, and around 1.5X against a similar host-side accelerator. Moreover, by scaling down the voltage and frequency of the PIM's processor it is possible to reduce energy by around 70% and 55% in comparison with the host and the accelerator, respectively.
Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question whether there are any differences concerning the learning power of the two approaches. In this article we study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
Retrieving multiple cases is supposed to be an adequate retrieval strategy for guiding partial-order planners because of the recognized flexibility of these planners to interleave steps in the plans. Cases are combined by merging them. In this paper, we examine two different ways of merging cases in the context of partial-order planning. We will see that merging cases can be very difficult if the cases are merged eagerly. On the other hand, if cases are merged by avoiding redundant steps, the guidance of the additional cases tends to decrease with the number of covered goals and retrieved cases in domains having a certain kind of interaction. Thus, retrieving a single case covering many of the goals of the problem, or retrieving fewer cases covering many of the goals, is at least as effective as retrieving several cases covering all goals in these domains.
A Case Study on Specification, Detection and Resolution of IN Feature Interactions with Estelle
(1994)
We present an approach for the treatment of feature interactions in Intelligent Networks. The approach is based on the formal description technique Estelle and consists of three steps. For the first step, a specification style supporting the integration of additional features into a basic service is introduced. As a result, feature integration is achieved by adding specification text, i.e. on a purely syntactical level. The second step is the detection of feature interactions resulting from the integration of additional features. A formal criterion is given that can be used for the automatic detection of a particular class of feature interactions. In the third step, previously detected feature interactions are resolved. An algorithm has been devised that allows the automatic incorporation of high-level design decisions into the formal specification. The presented approach is applied to the Basic Call Service and several supplementary interacting features.
A large set of criteria to evaluate formal methods for reactive systems is presented. To make this set more comprehensible, it is structured according to a Concept-Model of formal methods. It is made clear that it is necessary to make the catalogue more specific before applying it. Some of the steps needed to do so are explained. As an example, the catalogue is applied within the context of the application domain of building automation systems to three different formal methods: SDL, statecharts, and a temporal logic.
In this paper we give the definition of a solution concept in multicriteria combinatorial optimization. We show how Pareto, max-ordering and lexicographically optimal solutions can be incorporated in this framework. Furthermore, we state some properties of lexicographic max-ordering solutions, which combine features of these three kinds of optimal solutions. Two of these properties, which are desirable from a decision maker's point of view, are satisfied if and only if the solution concept is that of lexicographic max-ordering.
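A minimal example (our own, hypothetical data) of the lexicographic max-ordering comparison: each objective vector is sorted in non-increasing order and the sorted vectors are compared lexicographically, so the worst component decides first and ties are broken by the next worst.

```python
def lex_max_ordering_key(objectives):
    """Sort the objective vector in non-increasing order (minimization setting)."""
    return sorted(objectives, reverse=True)

solutions = {
    "A": (4, 1, 3),   # sorted: [4, 3, 1]
    "B": (3, 3, 2),   # sorted: [3, 3, 2]
    "C": (4, 2, 1),   # sorted: [4, 2, 1]
}
# the smallest sorted vector wins lexicographically
best = min(solutions, key=lambda s: lex_max_ordering_key(solutions[s]))
print(best)  # "B": its worst objective value (3) beats the others' worst (4)
```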
In this paper we develop a data-driven mixture of vector autoregressive models with exogenous components. The process is assumed to change regimes according to an underlying Markov process. In contrast to the hidden Markov setup, we allow the transition probabilities of the underlying Markov process to depend on past time series values and exogenous variables. Such processes have potential applications to modeling brain signals. For example, brain activity at time t (measured by electroencephalograms) can be modeled as a function of both its past values and exogenous variables (such as visual or somatosensory stimuli). Furthermore, we establish stationarity, geometric ergodicity and the existence of moments for these processes under suitable conditions on the parameters of the model. Such properties are important for understanding the stability of the model as well as for deriving the asymptotic behavior of various statistics and model parameter estimators.
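A minimal simulation sketch of this model class is given below; the two-regime VAR(1) matrices, the logistic form of the transition probability and all parameter values are made-up illustrations, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),     # regime 0: VAR(1) coefficient matrix
     np.array([[-0.3, 0.2], [0.1, -0.5]])]   # regime 1: VAR(1) coefficient matrix
B = np.array([0.8, -0.2])                     # loading of the exogenous input
x = rng.standard_normal(T)                    # exogenous "stimulus" series

y = np.zeros((T, 2))
s = 0
for t in range(1, T):
    # transition probability depends on the past series value,
    # not on a fixed hidden-Markov transition matrix
    p_switch = 1.0 / (1.0 + np.exp(-y[t - 1, 0]))
    if rng.random() < p_switch:
        s = 1 - s
    y[t] = A[s] @ y[t - 1] + B * x[t] + 0.1 * rng.standard_normal(2)
```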
A new approach for modelling time that does not rely on the concept of a clock is proposed. In order to establish a notion of time, system behaviour is represented as a joint progression of multiple threads of control, which satisfies a certain set of axioms. We show that the clock-independent time model is related to the well-known concept of a global clock and argue that both approaches establish the same notion of time.
Coloring terms (rippling) is a technique developed for inductive theorem proving which uses syntactic differences of terms to guide the proof search. Annotations (colors) to terms are used to maintain this information. This technique has several advantages, e.g. it is highly goal oriented and involves little search. In this paper we give a general formalization of coloring terms in a higher-order setting. We introduce a simply-typed lambda calculus with color annotations and present an appropriate (pre-)unification algorithm. Our work is a formal basis to the implementation of rippling in a higher-order setting which is required e.g. in case of middle-out reasoning. Another application is in the construction of natural language semantics, where the color annotations rule out linguistically invalid readings that are possible using standard higher-order unification.
This paper develops a sound and complete transformation-based algorithm for unification in an extensional order-sorted combinatory logic supporting constant overloading and a higher-order sort concept. Appropriate notions of order-sorted weak equality and extensionality - reflecting order-sorted βη-equality in the corresponding lambda calculus given by Johann and Kohlhase - are defined, and the typed combinator-based higher-order unification techniques of Dougherty are modified to accommodate unification with respect to the theory they generate. The algorithm presented here can thus be viewed as a combinatory logic counterpart to that of Johann and Kohlhase, as well as a refinement of that of Dougherty, and provides evidence that combinatory logic is well-suited to serve as a framework for incorporating order-sorted higher-order reasoning into deduction systems aiming to capitalize on both the expressiveness of extensional higher-order logic and the efficiency of order-sorted calculi.
In grinding, the crystal grain size of the workpiece material is in a range similar to the removal depth. This raises the question whether an anisotropic material model, which considers the effect of the crystal grain sizes and orientations, would better predict the process forces than an isotropic material model. Initially, a simple micro-indentation process is chosen to compare the two models. In this work, a crystal plasticity model and an isotropic Johnson-Cook plasticity model are employed to simulate micro-indentation of a twinning induced plasticity (TWIP) steel. The results of the two models are compared using the force-displacement curves from the micro-indentation experiments. In the future, the study will be extended to describe the material removal process during a single grit scratch test.
In this work, we analyze two important and simple models of short rates, namely the Vasicek and CIR models. The models are described, and the sensitivity of the models with respect to changes in the parameters is studied. Finally, we give results for the estimation of the model parameters obtained in two different ways.
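For concreteness, the following sketch simulates both models with a standard Euler-Maruyama discretization; the parameter values are made up, and the full-truncation fix for the CIR square root is a common textbook choice, not necessarily the one used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
a, b, sigma, r0 = 0.5, 0.04, 0.1, 0.03   # illustrative parameters
T, n = 1.0, 252
dt = T / n
dW = rng.standard_normal(n) * np.sqrt(dt)

r_vas = np.empty(n + 1); r_cir = np.empty(n + 1)
r_vas[0] = r_cir[0] = r0
for i in range(n):
    # Vasicek: dr = a (b - r) dt + sigma dW
    r_vas[i + 1] = r_vas[i] + a * (b - r_vas[i]) * dt + sigma * dW[i]
    # CIR: dr = a (b - r) dt + sigma sqrt(r) dW, with full truncation at zero
    r_pos = max(r_cir[i], 0.0)
    r_cir[i + 1] = r_cir[i] + a * (b - r_pos) * dt + sigma * np.sqrt(r_pos) * dW[i]
```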
Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\), and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(Vol_L(X) := \lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t) := \mathbf{Pr}(\Vert a_i \Vert_2\leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(Vol_L(X))\) is strictly decreasing in each argument, i.e. if \(F^{(i)}(t) \le G^{(i)}(t)\), \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots, G^{(i)}, \dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.
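A quick Monte Carlo sanity check of the monotonicity statement for \(d = k = 2\) (our own illustration, not part of the proof): making the points stochastically smaller, i.e. enlarging the radial distribution function pointwise, should shrink the expected area of the convex hull.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def mean_hull_area(radius_scale, n_points=10, n_trials=2000):
    areas = []
    for _ in range(n_trials):
        # spherically symmetric points: uniform direction, scaled radial law
        phi = rng.uniform(0.0, 2.0 * np.pi, n_points)
        r = radius_scale * rng.random(n_points)
        pts = np.c_[r * np.cos(phi), r * np.sin(phi)]
        areas.append(ConvexHull(pts).volume)   # in 2D, "volume" is the area
    return np.mean(areas)

print(mean_hull_area(1.0), mean_hull_area(0.8))  # the first should be larger
```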
Treating polyatomic gases in kinetic gas theory requires an appropriate molecule model taking into account the additional internal structure of the gas particles. In this paper we describe two such models, each arising from quite different approaches to this problem. A simulation scheme for solving the corresponding kinetic equations is presented, and some numerical results for 1D shock waves are compared.
Simulation methods like DSMC are an efficient tool to compute rarefied gas flows. Using supercomputers it is possible to include various real gas effects like vibrational energies or chemical reactions in a gas mixture. Nevertheless, it is still necessary to improve the accuracy of the current simulation methods in order to reduce the computational effort. To support this task, the paper presents a comparison of the classical DSMC method with the so-called Finite Pointset Method. This new approach was developed over several years in the framework of the European space project HERMES. The comparison given in the paper is based on two different test cases: a spatially homogeneous relaxation problem and a 2-dimensional axisymmetric flow problem at high Mach numbers.
We consider the problem of evacuating an urban area in the wake of a natural or man-made disaster. There are several planning aspects that need to be considered in such a scenario, which are usually treated separately due to their computational complexity. These aspects include: Which shelters are used to accommodate evacuees? How to schedule public transport for transit-dependent evacuees? And how do public and individual traffic interact? Furthermore, besides evacuation time, the risk of the evacuation also needs to be considered.
We propose a macroscopic multi-criteria optimization model that includes all of these questions simultaneously. As a mixed-integer programming formulation cannot handle instances of real-world size, we develop a genetic algorithm of NSGA-II type that is able to generate feasible solutions of good quality in reasonable computation times.
We extend the applicability of these methods by also considering how to aggregate instance data, and how to generate solutions for the original instance starting from a reduced solution.
In computational experiments using real-world data modelling the cities of Nice in France and Kaiserslautern in Germany, we demonstrate the effectiveness of our approach and compare the trade-off between different levels of data aggregation.
This paper describes a system that supports software development processes in virtual software corporations. A virtual software corporation consists of a set of enterprises that cooperate in projects to fulfill customer needs. Contracts are negotiated throughout the whole lifecycle of a software development project. These negotiations strongly influence the performance of a company. Therefore, it is useful to support negotiations and planning decisions with software agents. Our approach integrates software agent approaches for negotiation support with flexible multiserver workflow engines.
In this article we give a sufficient condition under which a simply connected flexible body does not penetrate itself when subjected to a continuous deformation. It is shown that the deformation map is automatically injective if it is just locally injective and injective on the boundary of the body. Remarkably, no regularity assumption beyond continuity of the deformation map is required. The proof relies exclusively on homotopy methods and the Jordan-Brouwer separation theorem.
A Consistent Large Eddy Approach for Lattice Boltzmann Methods and its Application to Complex Flows
(2015)
Lattice Boltzmann Methods have proven to be promising tools for solving fluid flow problems. This is related to the advantages of these methods, which are, among others, the simplicity in handling complex geometries and the high efficiency in calculating transient flows. Lattice Boltzmann Methods are mesoscopic methods based on discrete particle dynamics. This is in contrast to conventional Computational Fluid Dynamics methods, which are based on the solution of the continuum equations. Calculations of turbulent flows in engineering depend in general on modeling, since resolving all turbulent scales is, and will remain in the near future, far beyond the computational possibilities. One of the most promising modeling approaches is the large eddy simulation, in which the large, inhomogeneous turbulence structures are directly computed and the smaller, more homogeneous structures are modeled.
In this thesis, a consistent large eddy approach for the Lattice Boltzmann Method is introduced. This large eddy model includes, besides a subgrid scale model, appropriate boundary conditions for wall-resolved and wall-modeled calculations. It also provides conditions for turbulent domain inlets. For the case of wall-modeled simulations, a two-layer wall model is derived in the Lattice Boltzmann context. Turbulent inlet conditions are achieved by means of a synthetic turbulence technique within the Lattice Boltzmann Method.
The proposed approach is implemented in the Lattice Boltzmann based CFD package SAM-Lattice, which has been created in the course of this work. SAM-Lattice is capable of calculating incompressible or weakly compressible, isothermal flows of engineering interest in complex three-dimensional domains. Special design targets of SAM-Lattice are a high degree of automation and high performance.
Validation of the suggested large eddy Lattice Boltzmann scheme is performed for pump intake flows, which have not yet been treated by LBM, even though this numerical method is very suitable for this kind of vortical flow in complicated domains. In general, applications of LBM to hydrodynamic engineering problems are rare. The results of the pump intake validation cases reveal that the proposed numerical approach is able to represent the very complex flows in the intakes qualitatively and quantitatively. The findings provided in this thesis can serve as the basis for a broader application of LBM in hydrodynamic engineering problems.
We propose a constraint-based approach for the two-dimensional rectangular packing problem with orthogonal orientations. The problem is to arrange a set of rectangles that can be rotated by 90 degrees in a rectangle of minimal size such that no two rectangles overlap. It arises in the placement of electronic devices during the layout of 2.5D System-in-Package integrated electronic systems. Moffitt et al. [8] solve the packing problem without orientations with a branch-and-bound approach and use constraint propagation. We generalize their propagation techniques to allow orientations. Our approach is compared to a mixed-integer program, and we provide results that outperform it.
The notion of Q-Gorenstein smoothings has been introduced by Kollar ([Ko], 6.2.3). This notion is essential for formulating Kollar's conjectures on smoothing components for rational surface singularities. He conjectures, loosely speaking, that every smoothing of a rational surface singularity can be obtained by blowing down a deformation of a partial resolution, this partial resolution having the property (among others) that the singularities occurring on it all have qG-smoothings. (For more details and precise statements see [Ko], ch. 6.) It is therefore of interest to construct singularities having qG-smoothings.
Beamforming performs spatial filtering to preserve the signal from given directions of interest while suppressing interfering signals and noise arriving from other directions.
For example, a microphone array equipped with beamforming algorithm could preserve the sound coming from a target speaker and suppress sounds coming from other speakers.
Beamforming has been widely used in many applications such as radar, sonar, communication, and acoustic systems.
A data-independent beamformer is a beamformer whose coefficients are independent of the sensor signals; it normally requires less computation, since the coefficients are computed only once. Moreover, its coefficients are derived from well-defined statistical models, so it produces fewer artifacts. The major drawback of this beamforming class is its limited interference suppression.
An adaptive beamformer, on the other hand, is a beamformer whose coefficients depend on, or adapt to, the sensor signals. It is capable of suppressing interference better than a data-independent beamformer, but it suffers from either too much distortion of the signal of interest or less noise reduction when the update rate of the coefficients is not synchronized with the rate at which the noise model changes. Besides, it is computationally intensive, since the coefficients need to be updated frequently.
In acoustic applications, the bandwidth of the signals of interest extends over several octaves, yet we expect the characteristics of the beamformer to be invariant over the bandwidth of interest. This can be achieved by so-called broadband beamforming.
Since the beam pattern of conventional beamformers depends on the frequency of the signal, it is common to use a dense and uniform array for broadband beamforming to jointly guarantee several essential properties, such as frequency-independence, low sensitivity to white noise, a high directivity factor or a high front-to-back ratio. In this dissertation, we mainly focus on sparse arrays, whose aim is to use fewer sensors in the array while simultaneously assuring several important performance measures of the beamformer.
In the past few decades, many design methodologies for sparse arrays have been proposed and applied in a variety of practical applications.
Although good results were presented, there are still some restrictions: the number of sensors is large, the designed beam pattern must be fixed, the steering ability is limited, and the computational complexity is high.
In this work, two novel approaches for sparse array design are proposed, both taking a hypothesized uniform array as a basis: one for data-independent beamformers and another for adaptive beamformers.
As an underlying component of the proposed methods, the dissertation introduces some new insights into the uniform array with broadband beamforming. In this context, a function formulating the relation between the sensor coefficients and the beam pattern over frequency is proposed. The function mainly consists of a coordinate transform and an inverse Fourier transform.
Furthermore, based on the bijectivity of this function and a broadband beamforming perspective, we propose lower and upper bounds for the inter-sensor distance. Within these bounds, the function is bijective and can be utilized to design uniform arrays with broadband beamforming.
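The coefficient-to-beam-pattern relation can be sketched for a uniform linear array as follows; this is the generic textbook far-field relation, not the dissertation's exact function, and the sensor count, spacing and window are assumed values.

```python
import numpy as np

c = 343.0                  # speed of sound in air, m/s
M, d = 16, 0.04            # number of sensors and inter-sensor spacing (assumed)
weights = np.hanning(M)    # example fixed (data-independent) coefficients

def beam_pattern(f, theta):
    """|sum_m w_m exp(-j 2 pi f m d cos(theta) / c)| over an angle grid."""
    m = np.arange(M)[:, None]
    steering = np.exp(-2j * np.pi * f * m * d * np.cos(theta)[None, :] / c)
    return np.abs(weights @ steering)

theta = np.linspace(0.0, np.pi, 181)
patterns = {f: beam_pattern(f, theta) for f in (500.0, 1000.0, 2000.0)}
# For a fixed weight vector the main lobe widens at low f and narrows at
# high f, which is exactly what frequency-invariant broadband designs counteract.
```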
For data-independent beamforming, many studies have focused on optimization procedures that seek the sparse array deployment. This dissertation presents an alternative approach to determine the sensor locations.
Starting from the weight spectrum of a virtual dense and uniform array, several techniques are applied: analyzing the weight spectrum to determine the critical sensors, applying a clustering technique to group the sensors, and selecting representative sensors for each group.
After the sparse array deployment is specified, an optimization technique is applied to find the beamformer coefficients. The proposed method saves computation time in the design phase, and its beamformer performance outperforms other state-of-the-art methods in several aspects, such as higher white noise gain, a higher directivity factor or more frequency-independence.
For adaptive beamforming, the dissertation attempts to design a versatile sparse microphone array that can be used for different beam patterns.
Furthermore, we aim to reduce the number of microphones in the sparse array while ensuring that its performance can continue to compete with a highly dense and uniform array in terms of broadband beamforming.
An irregular microphone array on a planar surface with the maximum number of distinct distances between the microphones is proposed.
It is demonstrated that this irregular microphone array is well suited to sparse recovery algorithms, which are used to solve underdetermined systems subject to sparse solutions. Here, the sparse solution is the spatial spectrum of the sound sources, which needs to be reconstructed from the microphone signals.
From the reconstructed sound sources, a method for array interpolation is presented to obtain an interpolated dense and uniform microphone array that performs well with broadband beamforming.
In addition, two alternative approaches for the generalized sidelobe canceler (GSC) beamformer are proposed: one is a data-independent beamforming variant, the other an adaptive beamforming variant. The GSC decomposes beamforming into two paths: the upper path preserves the desired signal, while the lower path suppresses it. From a beam pattern viewpoint, we propose an improvement to the GSC: instead of using the blocking matrix in the lower path to suppress the desired signal, we design a beamformer that contains nulls at the look direction and at some other directions. Both approaches are simple beamforming design methods, and they can be applied to either sparse or uniform arrays.
Lastly, a new technique for direction-of-arrival (DOA) estimation based on the annihilating filter is also presented in this dissertation.
It is based on the idea of finite rate of innovation for reconstructing a stream of Diracs: an annihilating filter (locator filter) is identified from a few uniform samples, and the positions of the Diracs are then related to the roots of the filter. Here, an annihilating filter is a filter that suppresses the signal, since its coefficient vector is always orthogonal to every frame of the signal.
In the DOA context, we regard an active source as a Dirac associated with its arrival direction; the directions of the active sources can then be derived from the roots of the annihilating filter. However, the DOA obtained by this method is sensitive to noise, and the number of DOAs is limited.
To address these issues, the dissertation proposes a robust method to design the annihilating filter and to increase the degrees of freedom of the measurement system (so that more active sources can be detected) by observing multiple data frames.
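The basic (non-robust) annihilating filter recipe that this chapter builds on can be sketched as follows; the two synthetic "sources" and the plain SVD null-space step are illustrative assumptions, not the dissertation's robust variant.

```python
import numpy as np

K = 2                                    # number of sources / Diracs
u = np.exp(1j * np.array([0.4, 1.3]))    # true directions encoded as unit phasors
x = np.array([(u ** n).sum() for n in range(2 * K)])   # one short sample frame

# Toeplitz rows: the filter h (length K+1) must satisfy conv(x, h) = 0,
# i.e. h is orthogonal to every windowed frame of the signal.
A = np.array([x[k:k + K + 1][::-1] for k in range(K)])
_, _, Vh = np.linalg.svd(A)              # null-space vector = filter coefficients
h = Vh[-1].conj()
doa = np.sort(np.angle(np.roots(h)))     # filter roots encode the directions
print(doa)                               # approximately [0.4, 1.3]
```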
Furthermore, we analyze the performance of DOA estimation under diffuse noise and propose an extended multiple signal classification algorithm that takes diffuse noise into account. Simulations show that, in the case of diffuse noise, only the extended multiple signal classification algorithm estimates the DOAs properly.
A counter-based read circuit tolerant to process variation for low-voltage operating STT-MRAM
(2016)
The capacity of embedded memory on LSIs has kept increasing. It is important to reduce the leakage power of embedded memory for low-power LSIs. In fact, the ITRS predicts that the leakage power in embedded memory will account for 40% of all power consumption by 2024 [1]. A spin transfer torque magneto-resistance random access memory (STT-MRAM) is promising for use as non-volatile memory to reduce the leakage power. It is useful because it can function at low voltages and has a lifetime of over 10^16 write cycles [2]. In addition, the STT-MRAM technology has a smaller bit cell than an SRAM, which makes the STT-MRAM suitable for use in high-density products [3–7]. The STT-MRAM uses a magnetic tunnel junction (MTJ). The MTJ has two states: a parallel state and an anti-parallel state, in which the magnetization directions of the MTJ's layers are the same or opposite, respectively. This pair of directions determines the MTJ's magneto-resistance value: the MTJ resistance is low in the parallel state and high in the anti-parallel state. The state of the MTJ can be changed by the current flowing through it. The MTJ potentially operates at less than 0.4 V [8]. On the other hand, it is difficult to design the peripheral circuitry for an STT-MRAM array at such a low voltage. In this paper, we propose a counter-based read circuit that functions at 0.4 V and is tolerant of process variation and temperature fluctuation.
In cake filtration processes, where particles in a suspension are separated by forming a filter cake on the filter medium, the resistances of the filter cake and the filter medium cause a specific pressure drop, which in turn determines the energy demand of the process. The micromechanics of filter cake formation (interactions between particles, fluid, other particles, and the filter medium) must be considered to describe pore clogging, filter cake growth, and consolidation correctly. A precise 3D modeling approach for these effects is the resolved coupling of Computational Fluid Dynamics with the Discrete Element Method (CFD-DEM). This work focuses on the development and validation of a CFD-DEM model capable of accurately predicting filter cake formation during solid-liquid separation. The model uses the Lattice-Boltzmann Method (LBM) to directly solve the flow equations in the CFD part of the coupling and the DEM to calculate particle interactions. The developed model enables 4-way coupling to consider particle-fluid and particle-particle interactions. The results of this work are presented in two steps. First, the developed model is validated against an empirical model of the single-particle settling velocity in the transition regime of the fluid-particle flow; the model is then extended with additional particles to determine the particle-particle influence. Second, the separation of silica glass particles from water in a pressurized housing at constant pressure is investigated experimentally. The measured filter cake, filter medium, and interference resistances are in good agreement with the results of the 3D simulations, demonstrating the applicability of the resolved CFD-DEM coupling for analyzing and optimizing cake filtration processes.
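The single-particle settling benchmark mentioned above can be reproduced with a standard drag correlation. The abstract does not name the empirical model used, so the Schiller-Naumann drag law and the material properties below are assumptions; a minimal sketch:

```python
import math

def settling_velocity(d_p, rho_p=2500.0, rho_f=1000.0, mu=1e-3, g=9.81,
                      tol=1e-10, max_iter=200):
    """Terminal settling velocity of a single sphere via fixed-point
    iteration with the Schiller-Naumann drag law (transition regime,
    Re < ~1000). Properties assumed: silica glass in water."""
    v = (rho_p - rho_f) * g * d_p ** 2 / (18.0 * mu)   # Stokes guess
    for _ in range(max_iter):
        re = max(rho_f * v * d_p / mu, 1e-12)          # particle Reynolds no.
        cd = 24.0 / re * (1.0 + 0.15 * re ** 0.687)    # Schiller-Naumann
        v_new = math.sqrt(4.0 * (rho_p - rho_f) * g * d_p
                          / (3.0 * cd * rho_f))
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v

# 100-micron glass sphere in water (illustrative)
print(f"{settling_velocity(100e-6) * 1e3:.2f} mm/s")
```

A resolved CFD-DEM or LBM-DEM simulation of the same sphere should recover this velocity, which makes the correlation a convenient validation target.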
The growing computational power enables the establishment of the Population Balance Equation (PBE) to model the steady-state and dynamic behavior of multiphase flow unit operations. The two-phase flow behavior inside liquid-liquid extraction equipment is characterized by several factors: interactions among droplets (breakage and coalescence), different time scales due to the size distribution of the dispersed phase, and the micro time scales of the interphase diffusional mass transfer process. As a result, the general PBE has no well-known analytical solution, and robust numerical solution methods with low computational cost are therefore in high demand.
In this work, the Sectional Quadrature Method of Moments (SQMOM) (Attarakih, M. M., Drumm, C., Bart, H.-J. (2009). Solution of the population balance equation using the Sectional Quadrature Method of Moments (SQMOM). Chem. Eng. Sci. 64, 742-752) is extended to continuous flow systems in the spatial domain. In this regard, the SQMOM is extended to solve the spatially distributed nonhomogeneous bivariate PBE in order to model the hydrodynamics and the physical/reactive mass transfer behavior of liquid-liquid extraction equipment. Based on the extended SQMOM, two different steady-state and dynamic simulation algorithms for the hydrodynamics and mass transfer behavior of liquid-liquid extraction equipment are developed and efficiently implemented. At the steady-state modeling level, a Spatially-Mixed SQMOM (SM-SQMOM) algorithm is developed and successfully implemented in a one-dimensional physical spatial domain. The integral spatial numerical flux is closed using the mean mass droplet diameter based on the One Primary and One Secondary Particle Method (OPOSPM), which is the simplest case of the SQMOM. On the other hand, the hydrodynamics integral source terms are closed using the analytical Two-Equal-Weight Quadrature (TEqWQ). To avoid the numerical solution of the droplet rise velocity, an analytical solution based on the algebraic velocity model is derived for the particular case of a unit velocity exponent in the droplet swarm model. In addition, the source term due to mass transport is closed using OPOSPM. The resulting system of ordinary differential equations with respect to space is solved using the MATLAB adaptive Runge-Kutta method (ode45). At the dynamic modeling level, the SQMOM is extended to a one-dimensional physical spatial domain and resolved using the finite volume method. To close the mathematical model, the required quadrature nodes and weights are calculated using the analytical solution based on the Two-Unequal-Weights Quadrature (TUEWQ) formula. By applying the finite volume method to the spatial domain, a semi-discrete ordinary differential equation system is obtained and solved. Both the steady-state and the dynamic algorithms are extensively validated at the analytical, numerical, and experimental levels. At the numerical level, the predictions of both algorithms are validated against the extended fixed pivot technique as implemented in the PPBLab software (Attarakih, M., Alzyod, S., Abu-Khader, M., Bart, H.-J. (2012). PPBLAB: A new multivariate population balance environment for particulate system modeling and simulation. Procedia Eng. 42, 1445-1462). At the experimental validation level, the extended SQMOM is successfully used to model the steady-state hydrodynamics and the physical and reactive mass transfer behavior of agitated liquid-liquid extraction columns under different operating conditions. Both models are found to be efficient and able to follow the liquid extraction column behavior during column scale-up, where three column diameters were investigated (DN32, DN80, and DN150). To shed more light on the local interactions among the contacted phases, a reduced coupled PBE-CFD framework is used to model the hydrodynamic behavior of pulsed sieve-plate columns. In this regard, OPOSPM is utilized and
implemented in FLUENT 18.2 commercial software as a special case of the SQMOM. The droplet-droplet interactions (breakage and coalescence) are taken into account using OPOSPM, while the required information about the velocity field and energy dissipation is calculated by the CFD model. In addition to this,
the proposed coupled OPOSPM-CFD framework is extended to include mass transfer. The proposed framework is numerically tested, and the results are compared with published experimental data. The breakage and coalescence parameters required to perform the 2D CFD simulation are estimated using the PPBLab software, where a 1D CFD simulation using a multi-sectional grid is performed. Very good agreement is obtained at both the experimental and the numerical validation levels.
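Within OPOSPM, the population is reduced to two tracked quantities, the total number concentration and the (conserved) volume fraction of the dispersed phase, from which the mean mass droplet diameter d30 follows. As a hedged illustration of this closure (the breakage and coalescence kernels below are placeholders, not those of the thesis, and the convective and mass transfer source terms are omitted):

```python
import math

def d30(alpha, n):
    """Mean mass droplet diameter from volume fraction and number conc."""
    return (6.0 * alpha / (math.pi * n)) ** (1.0 / 3.0)

def opospm_cell(n0=1e7, alpha=0.05, t_end=100.0, dt=1e-3,
                g=lambda d: 0.1 * d,       # placeholder breakage frequency
                a=lambda d: 1e-11,         # placeholder coalescence kernel
                nu=2.0):                   # daughters per breakage event
    """OPOSPM in a single well-mixed cell:
    dN/dt = (nu - 1)*g(d30)*N - 0.5*a(d30)*N^2, alpha conserved.
    Explicit Euler is used only for brevity."""
    n = n0
    for _ in range(int(t_end / dt)):
        d = d30(alpha, n)
        n += dt * ((nu - 1.0) * g(d) * n - 0.5 * a(d) * n * n)
    return n, d30(alpha, n)

n_end, d_end = opospm_cell()
print(f"N = {n_end:.3e} 1/m^3, d30 = {d_end * 1e3:.2f} mm")
```

In the spatially distributed setting described above, the same two equations are solved per finite-volume cell with convective fluxes coupling neighboring cells.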
A new algorithm for optimization problems with three objective functions is presented that computes a representation of the set of nondominated points. This representation is guaranteed to achieve a prescribed coverage error, and a bound on the number of iterations the algorithm needs to meet this coverage error is derived. Since the representation does not necessarily contain only nondominated points, ideas for calculating bounds on the representation error are given. Moreover, the incorporation of domination during the algorithm and other quality measures are discussed.
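The coverage error used above measures how far the worst-covered nondominated point lies from the representation. A minimal sketch of this max-min (directed Hausdorff) computation on a sampled front, with purely illustrative data:

```python
import numpy as np

def coverage_error(front, representation):
    """Max over p in the front of the min distance from p to the
    representation (a directed Hausdorff distance in objective space)."""
    front = np.asarray(front, dtype=float)
    rep = np.asarray(representation, dtype=float)
    dists = np.linalg.norm(front[:, None, :] - rep[None, :, :], axis=2)
    return dists.min(axis=1).max()

# Illustrative tri-objective front: points on the unit-sphere octant
rng = np.random.default_rng(0)
p = np.abs(rng.standard_normal((1000, 3)))
p /= np.linalg.norm(p, axis=1, keepdims=True)
r = p[::100]                       # every 100th point as representation
print(f"coverage error: {coverage_error(p, r):.3f}")
```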
Nucleophilic substitution of [(η5-cyclopentadienyl)(η6-chlorobenzene)iron(II)] hexafluorophosphate with sodium imidazolate resulted in the formation of [(η5-cyclopentadienyl)(η6-phenyl)iron(II)]imidazole hexafluorophosphate. The corresponding dicationic imidazolium salt, obtained by treating this imidazole precursor with methyl iodide, underwent cyclometallation with bis[dichlorido(η5-1,2,3,4,5-pentamethylcyclopentadienyl)iridium(III)] in the presence of triethylamine. The resulting bimetallic iridium(III) complex is the first example of an NHC complex bearing a cationic, cyclometallated [(η5-cyclopentadienyl)(η6-phenyl)iron(II)]+ substituent. Like its iron(II) precursors, the bimetallic iridium(III) complex was fully characterized by spectroscopy, elemental analysis, and single-crystal X-ray diffraction. In addition, it was investigated in a catalytic study, in which it showed high activity in transfer hydrogenation compared to its neutral analogue bearing a simple phenyl group instead of the cationic [(η5-cyclopentadienyl)(η6-phenyl)iron(II)]+ unit at the NHC ligand.
We study the sensor fault estimation and accommodation problems in a data-driven \(\mathcal{H}_\infty\) setting, leading to a data-driven sensor fault-tolerant control scheme. First, we formulate the fault estimation problem as a finite-horizon minimax \(\mathcal{H}_\infty\)-optimization problem in a data-driven setup, whose solution yields the fault estimate. The estimated fault is then used for output compensation. The compensated output and the experimental input are used to achieve the control objectives in a data-driven \(\mathcal{H}_\infty\) setting. Next, the data-driven \(\mathcal{H}_\infty\) fault estimation and control problems are solved using a subspace predictor-based approach. Finally, the proposed algorithm is applied to the steering subsystem of a remotely operated underwater vehicle.
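The subspace predictor mentioned above is identified directly from input-output data via block-Hankel matrices and least squares; the \(\mathcal{H}_\infty\) optimization layer built on top of it is omitted here. A minimal sketch with illustrative dimensions and data:

```python
import numpy as np

def block_hankel(w, rows):
    """Block-Hankel matrix with `rows` block rows built from the
    signal w of shape (T, m); result has shape (rows*m, T-rows+1)."""
    T, _ = w.shape
    cols = T - rows + 1
    return np.vstack([w[i:i + cols].T for i in range(rows)])

def subspace_predictor(u, y, p=10, f=5):
    """Least-squares subspace predictor y_f ≈ Lw @ w_p + Lu @ u_f,
    where w_p stacks past outputs/inputs and u_f the future inputs."""
    mu, my = u.shape[1], y.shape[1]
    U, Y = block_hankel(u, p + f), block_hankel(y, p + f)
    Up, Uf = U[:p * mu], U[p * mu:]
    Yp, Yf = Y[:p * my], Y[p * my:]
    Wp = np.vstack([Yp, Up])                       # past data
    A = np.vstack([Wp, Uf]).T
    L, *_ = np.linalg.lstsq(A, Yf.T, rcond=None)
    L = L.T
    return L[:, :Wp.shape[0]], L[:, Wp.shape[0]:]  # Lw, Lu

# Illustrative second-order SISO system excited by white noise
rng = np.random.default_rng(1)
u = rng.standard_normal((500, 1))
y = np.zeros((500, 1))
for k in range(2, 500):
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] \
           + 0.05 * rng.standard_normal()
Lw, Lu = subspace_predictor(u, y)
```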