A new and systematic basic approach to force- and vision-based robot manipulation of deformable (non-rigid) linear objects is introduced. This approach reduces the computational needs by using a simple state-oriented model of the objects. These states describe the relation between the deformable object and rigid obstacles, and are derived from the object image and its features. We give an enumeration of possible contact states and discuss the main characteristics of each state. We investigate the performance of robust transitions between the contact states and derive criteria and conditions for each of the states and for two sensor systems, i.e. a vision sensor and a force/torque sensor. This results in a new and task-independent approach to the handling of deformable objects and in a sensor-based implementation of manipulation primitives for industrial robots; sensor processing thus proves an appropriate solution for this problem. Finally, we apply the concept of contact states and state transitions to the description of a typical assembly task. Experimental results show the feasibility of our approach: a robot performs several contact state transitions which can be combined for solving a more complex task.
Dynamics of Excited Electrons in Copper and Ferromagnetic Transition Metals: Theory and Experiment
(2000)
Both theoretical and experimental results for the dynamics of photoexcited electrons at surfaces of Cu and the ferromagnetic transition metals Fe, Co, and Ni are presented. A model for the dynamics of excited electrons is developed, which is based on the Boltzmann equation and includes effects of photoexcitation, electron-electron scattering, secondary electrons (cascade and Auger electrons), and transport of excited carriers out of the detection region. From this we determine the time-resolved two-photon photoemission (TR-2PPE) signal. Thus a direct comparison of calculated relaxation times with experimental results by means of TR-2PPE becomes possible. The comparison indicates that the magnitudes of the spin-averaged relaxation time t and of the ratio t_up/t_down of majority and minority relaxation times for the different ferromagnetic transition metals result not only from density-of-states effects, but also from different Coulomb matrix elements M. Taking M_Fe > M_Cu > M_Ni = M_Co we get reasonable agreement with experiments.
The increasing parallelisation of development processes as well as the ongoing trends towards virtual product development and outsourcing of development activities strengthen the need for 3D co-operative design via communication networks. In the field of CAx, none of the existing systems meets all the requirements of a very complex process chain. This leads to a tremendous need for the integration of heterogeneous CAx systems. Therefore, MACAO, a platform-independent client for a distributed CAx component system, the so-called ANICA CAx object bus, is presented. The MACAO client is able to access objects and functions provided by different CAx servers distributed over a communication network. Thus, MACAO is a new solution for engineering design and visualisation in shared distributed virtual environments. This paper describes the underlying concepts, the actual prototype implementation, as well as possible application scenarios in the area of co-operative design and visualisation.
The trend of recent years in the CAx field is clearly towards 3D modelling. However, the use of this technology is economically worthwhile only if the generated data do not serve merely as a substitute for 2D drawings but are used throughout the entire product development process, thereby ensuring data continuity. Meanwhile, a broad spectrum of applications is in use; examples include analysis and simulation programs, or 3D product visualisation in non-technical areas (e.g. marketing, sales). Many CA systems offer a large selection of modules for almost all areas of product development, but no system, regardless of its complexity, is able to meet all the requirements of its users. Therefore, special programs for individual problems are being used to an ever greater extent. The user is confronted with difficulties, however, when he tries to employ special applications from different system vendors for special problems. To enable the integration of the various programs, he must rely on neutral standard interfaces for product data exchange (IGES, VDAFS, STEP), where losses of information have to be expected. Moreover, he has to familiarise himself with differing user interfaces. Aware of these problems, the working group "CAD/CAM-Strategien der deutschen Automobilindustrie" developed a proposal for an open CAx system architecture /1/, /2/, /3/. This architecture should be able to integrate all CAx components that are used in the course of the product development process.
Among other things, it should fulfil the following requirements: openness; interoperability; investment security; freeing the user from forced dependence on a single system vendor; and avoidance of redundant systems. An important prerequisite for fulfilling these requirements was the consideration of the international standards STEP, for product data modelling, and CORBA, for distributed object-oriented systems, both of which are briefly described in the following sections.
For the next generation of high data rate magnetic recording above 1 Gbit/s, a better understanding of the switching processes for both recording heads and media will be required. In order to maximize the switching speed for such devices, the magnetization precession after the magnetic field pulse termination needs to be suppressed to a maximum degree. It is demonstrated experimentally for ferrite films that the appropriate adjustment of the field pulse parameters and/or the static applied field may lead to a full suppression of the magnetization precession immediately upon termination of the field pulse. The suppression is explained by taking into account the actual direction of the magnetization with respect to the static field direction at the pulse termination.
We describe a method for approximating stress gradients from discrete stress data. A straightforward discretisation of the derivatives from function values leads to stability problems, so a means of controlling the derivatives is necessary (regularisation). We first determine the functional of the potential energy and additionally introduce an error functional that enables the fit to the given discrete values. By weighting the two functionals and minimising the total functional, one obtains the desired balance between controlling the error of the differentiation on the one hand and controlling the error at the given values on the other.
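A minimal numerical sketch of this regularization idea (an illustration only, not the paper's actual functionals): smooth the noisy samples by minimising a weighted sum of a data-fit term and a second-difference penalty, then differentiate the smoothed values. The function name and the choice of penalty are assumptions for illustration.

```python
import numpy as np

def regularized_derivative(x, f_noisy, lam):
    """Approximate f' from noisy samples on a uniform grid by first solving
    min_u ||u - f||^2 + lam * ||D2 u||^2 (D2 = second-difference operator),
    then differentiating the smoothed values u."""
    n = len(x)
    h = x[1] - x[0]
    # interior rows of the second-difference matrix, scaled by 1/h^2
    D2 = (np.diag(np.ones(n - 1), 1) - 2.0 * np.eye(n)
          + np.diag(np.ones(n - 1), -1))[1:-1] / h**2
    # normal equations of the weighted least-squares problem
    u = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, f_noisy)
    return np.gradient(u, x)
```

The weight `lam` plays the role of the balance parameter: `lam = 0` reproduces the unstable direct differencing, large `lam` over-smooths the data.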
In multicriteria optimization problems, the connectedness of the set of efficient solutions (the Pareto set) is of special interest, since it would allow the determination of the efficient solutions without considering non-efficient solutions in the process. In the case of the multicriteria minimum matching problem, the set of efficient solutions is not connected. The set of minimal solutions E_pot with respect to the power-ordered set contains the Pareto set. In this work, theorems about the connectedness of E_pot are given; these lead to an automated procedure to detect all efficient solutions.
We consider the problem of locating a line or a line segment in three-dimensional space, such that the sum of distances from the linear facility to a given set of points is minimized. An example is planning the drilling of a mine shaft, with access to ore deposits through horizontal tunnels connecting the deposits and the shaft. Various models of the problem are developed and analyzed, and efficient solution methods are given.
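The objective in question can be sketched in a few lines (an illustration with hypothetical names; the paper's models and solution methods are more elaborate): the sum of Euclidean distances from a set of points to a line in 3D, used here to compare two candidate vertical shaft positions.

```python
import numpy as np

def sum_of_distances(points, p, d):
    """Sum of Euclidean distances from the given points to the line {p + t*d}."""
    d = np.asarray(d, float)
    d = d / np.linalg.norm(d)                  # unit direction of the line
    v = np.asarray(points, float) - np.asarray(p, float)
    perp = v - np.outer(v @ d, d)              # components orthogonal to the line
    return float(np.linalg.norm(perp, axis=1).sum())
```

For instance, for ore deposits placed symmetrically around the z-axis, a vertical shaft through the origin yields a smaller total tunnel length than a shifted one.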
Many polynomially solvable combinatorial optimization problems (COP) become NP-hard when we require solutions to satisfy an additional cardinality constraint. This family of problems has been considered only recently. We study a new problem of this family: the k-cardinality minimum cut problem. Given an undirected edge-weighted graph, the k-cardinality minimum cut problem is to find a partition of the vertex set V into two sets V_1, V_2 such that the number of edges between V_1 and V_2 is exactly k and the sum of the weights of these edges is minimal. A variant of this problem is the k-cardinality minimum s-t cut problem, where s and t are fixed vertices and we have the additional requirement that s belongs to V_1 and t belongs to V_2. We also consider other variants where the number of edges of the cut is constrained to be either less or greater than k. For all these problems we show complexity results in the most significant graph classes.
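Since the problem is NP-hard in general, only exponential exact methods are available for arbitrary graphs; a brute-force sketch makes the definition concrete (illustrative code, all names hypothetical, feasible only for tiny instances):

```python
import itertools

def k_cardinality_min_cut(n, edges, k):
    """Brute force over all bipartitions of {0,...,n-1}: among the cuts whose
    edge count is exactly k, return (weight, V1) of the cheapest one, or None.
    edges: dict mapping a pair (u, v) to its weight."""
    best = None
    for r in range(1, n):                      # V1 and V2 both nonempty
        for combo in itertools.combinations(range(n), r):
            V1 = set(combo)
            cut = [w for (u, v), w in edges.items() if (u in V1) != (v in V1)]
            if len(cut) == k:
                w = sum(cut)
                if best is None or w < best[0]:
                    best = (w, V1)
    return best
```

Note that some cardinalities k may be infeasible even on connected graphs, in which case no cut of that size exists.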
It is well-known that some of the classical location problems with polyhedral gauges can be solved in polynomial time by finding a finite dominating set, i.e. a finite set of candidates guaranteed to contain at least one optimal location. In this paper it is first established that this result holds for a much larger class of problems than currently considered in the literature. The model for which this result can be proven includes, for instance, location problems with attraction and repulsion, and location-allocation problems. Next, it is shown that the approximation of general gauges by polyhedral ones in the objective function of our general model can be analyzed with regard to the subsequent error in the optimal objective value. For the approximation problem two different approaches are described, the sandwich procedure and the greedy algorithm. Both of these approaches lead - for fixed epsilon - to polynomial approximation algorithms with accuracy epsilon for solving the general model considered in this paper.
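The effect of replacing a round gauge by a polyhedral one can be illustrated numerically (a sketch under the assumption of a regular m-gon inscribed in the Euclidean unit disc; the sandwich and greedy procedures of the paper are more general): the worst-case relative error of this particular approximation is 1/cos(pi/m) - 1 and shrinks as facets are added, so any accuracy epsilon is reached with finitely many facets.

```python
import numpy as np

def worst_relative_error(m, samples=10000):
    """Worst-case relative error, over sampled directions, of the gauge whose
    unit ball is the regular m-gon inscribed in the Euclidean unit circle."""
    th = np.linspace(0, 2*np.pi, samples, endpoint=False)
    X = np.column_stack([np.cos(th), np.sin(th)])        # unit vectors
    phi = (2*np.arange(m) + 1) * np.pi / m               # facet-normal angles
    U = np.column_stack([np.cos(phi), np.sin(phi)])
    # gauge value: support along the nearest facet normal, rescaled so that
    # the polygon's facets lie at gauge value 1
    gamma = (X @ U.T).max(axis=1) / np.cos(np.pi / m)
    return float(gamma.max() - 1.0)                      # ||x|| = 1 on samples
```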
The paper concerns the equilibrium state of ultra-small semiconductor devices. According to the quantum drift diffusion model, electrons and holes behave as a mixture of charged quantum fluids. Typically, the scaled Planck constant of holes, \(\xi\), is significantly smaller than the scaled Planck constant of electrons. By formally setting \(\xi=0\), a well-posed differential-algebraic system arises. Existence and uniqueness of an equilibrium solution are proved. A rigorous asymptotic analysis shows that this equilibrium solution is the limit (in a rather strong sense) of the quantum systems as \(\xi \to 0\). In particular, the ground state energies of the quantum systems converge to the ground state energy of the differential-algebraic system as \(\xi \to 0\).
Mean field equations arise as steady state versions of convection-diffusion systems where the convective field is determined as solution of a Poisson equation whose right hand side is affine in the solutions of the convection-diffusion equations. In this paper we consider the repulsive coupling case for a system of 2 convection-diffusion equations. For general diffusivities we prove the existence of a unique solution of the mean field equation by a variational technique. Also we analyse the small-Debye-length limit and prove convergence to either the so-called charge-neutral case or to a double obstacle problem for the limiting potential depending on the data.
The aim of this article is to show that moment approximations of kinetic equations based on a Maximum Entropy approach can suffer from severe drawbacks if the kinetic velocity space is unbounded. As an example, we study the Fokker-Planck equation, where explicit expressions for the moments of solutions to Riemann problems can be derived. The quality of the closure relation obtained from the Maximum Entropy approach, as well as from the Hermite/Grad approach, is studied in the case of five moments. It turns out that the Maximum Entropy closure is even singular in equilibrium states, while the Hermite/Grad closure behaves reasonably. In particular, the admissible moments may lead to arbitrarily large speeds of propagation, even for initial data arbitrarily close to global equilibrium.
Performance of some preconditioners for the p- and hp-version of the finite element method in 3D
(2000)
The balance space approach (introduced by Galperin in 1990) provides a new view on multicriteria optimization. Looking at deviations from global optimality of the different objectives, balance points and balance numbers are defined when either different or equal deviations for each objective are allowed. Apportioned balance numbers allow the specification of proportions among the deviations. Through this concept the decision maker can be involved in the decision process. In this paper we prove that the apportioned balance number can be formulated by a min-max operator. Furthermore, we prove some relations between apportioned balance numbers and the balance set, and show how balance numbers are represented in the balance set. The main results are necessary and sufficient conditions for the balance set to be exhaustive, which means that by multiplying a vector of weights (proportions of deviation) with its corresponding apportioned balance number a balance point is attained. The results are used to formulate an interactive procedure for multicriteria optimization. All results are illustrated by examples.
This paper provides an annotated bibliography of multiple objective combinatorial optimization, MOCO. We present a general formulation of MOCO problems, describe the main characteristics of MOCO problems, and review the main properties and theoretical results for these problems. One section is devoted to a brief description of the available solution methodology, both exact and heuristic. The main part of the paper is devoted to an annotation of the existing literature in the field organized problem by problem. We conclude the paper by stating open questions and areas of future research. The list of references comprises more than 350 entries.
In this paper we address the question of how many objective functions are needed to decide whether a given point is a Pareto optimal solution for a multicriteria optimization problem. We extend earlier results showing that the set of weakly Pareto optimal points is the union of Pareto optimal sets of subproblems and show their limitations. We prove that for strictly quasi-convex problems in two variables Pareto optimality can be decided by consideration of at most three objectives at a time. Our results are based on a geometric characterization of Pareto, strict Pareto and weak Pareto solutions and Helly's Theorem. We also show that a generalization to quasi-convex objectives is not possible, and state a weaker result for this case. Furthermore, we show that a generalization to strictly Pareto optimal solutions is impossible, even in the convex case.
In this paper we investigate the problem of finding the Nadir point for multicriteria optimization problems (MOP). The Nadir point is characterized by the componentwise maximal values of efficient points for (MOP). It can be easily computed in the bicriteria case. However, in general this problem is very difficult. We review some existing methods and heuristics and propose some new ones. We propose a general method to compute Nadir values for the case of three objectives, based on theoretical results valid for any number of criteria. We also investigate the use of the Nadir point for compromise programming, when the goal is to be as far away as possible from the worst outcomes. We prove some results about (weak) Pareto optimality of the resulting solutions. The results are illustrated by examples.
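For a finite set of outcome vectors, the characterization of the Nadir point is easy to state directly (an illustrative sketch assuming componentwise minimization; function names are hypothetical, and the difficulty the paper addresses is that the efficient set is normally not given explicitly):

```python
def pareto_set(points):
    """Nondominated outcome vectors (componentwise minimization)."""
    return [p for p in points
            if not any(q != p and all(qi <= pi for qi, pi in zip(q, p))
                       for q in points)]

def nadir_point(points):
    """Nadir point: componentwise maximum over the Pareto set."""
    eff = pareto_set(points)
    return tuple(max(p[i] for p in eff) for i in range(len(eff[0])))
```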
We consider some continuous-time Markowitz type portfolio problems that consist of maximizing expected terminal wealth under the constraint of an upper bound for the Capital-at-Risk. In a Black-Scholes setting we obtain closed form explicit solutions and compare their form and implications to those of the classical continuous-time mean-variance problem. We also consider more general price processes which allow for larger fluctuations in the returns.
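A Monte Carlo sketch of the constraint in a Black-Scholes market (an illustration only: we assume a constant-proportion strategy and take Capital-at-Risk to be the shortfall of the 5%-quantile of terminal wealth below the pure-bond benchmark; the paper's exact definitions and its closed-form solutions may differ):

```python
import numpy as np

def capital_at_risk(x0, r, b, sigma, pi, T, alpha=0.05, n=200_000, seed=0):
    """Monte Carlo CaR and mean terminal wealth of a constant-proportion
    strategy: X_T = x0 * exp((r + pi*(b-r) - 0.5*pi^2*sigma^2)*T + pi*sigma*W_T)."""
    rng = np.random.default_rng(seed)
    WT = np.sqrt(T) * rng.standard_normal(n)
    XT = x0 * np.exp((r + pi*(b - r) - 0.5*pi**2*sigma**2)*T + pi*sigma*WT)
    benchmark = x0 * np.exp(r*T)                 # riskless (pure-bond) wealth
    return benchmark - np.quantile(XT, alpha), XT.mean()
```

Under these assumptions both expected wealth and Capital-at-Risk grow with the risky proportion, which is exactly the trade-off the CaR bound constrains.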
Abstract: The recently proposed idea to generate entanglement between photon states via exchange interactions in an ensemble of atoms (J. D. Franson and T. B. Pitman, Phys. Rev. A 60, 917 (1999) and J. D. Franson et al., quant-ph/9912121) is discussed using an S-matrix approach. It is shown that if the nonlinear response of the atoms is negligible and no additional atom-atom interactions are present, exchange interactions cannot produce entanglement between photon states in a process that returns the atoms to their initial state. Entanglement generation requires the presence of a nonlinear atomic response or atom-atom interactions.
Abstract: Local field effects on the rate of spontaneous emission and the Lamb shift in a dense gas of atoms are discussed, taking into account correlations of atomic center-of-mass coordinates. To this end, the exact retarded propagator in the medium is calculated in the independent-scattering approximation and employing a virtual-cavity model. The resulting changes of the atomic polarizability lead to modifications of the medium response which can be of the same order of magnitude as, but of opposite sign to, those due to local field corrections of the dielectric function derived by Morice, Castin, and Dalibard [Phys. Rev. A 51, 3896 (1995)].
Abstract: We identify form-stable coupled excitations of light and matter ("dark-state polaritons") associated with the propagation of quantum fields in Electromagnetically Induced Transparency. The properties of the dark-state polaritons such as the group velocity are determined by the mixing angle between light and matter components and can be controlled by an external coherent field as the pulse propagates. In particular, light pulses can be decelerated and "trapped" in which case their shape and quantum state are mapped onto metastable collective states of matter. Possible applications of this reversible coherent-control technique are discussed.
Abstract: We analyze the above-threshold behavior of a mirrorless parametric oscillator based on resonantly enhanced four-wave mixing in a coherently driven dense atomic vapor. It is shown that, in the ideal limit, an arbitrarily small flux of pump photons is sufficient to reach the oscillator threshold. We demonstrate that due to the large group velocity delays associated with coherent media, an extremely narrow oscillator linewidth is possible, making a narrow-band source of non-classical radiation feasible.
Abstract: We analyze systematic (classical) and fundamental (quantum) limitations of the sensitivity of optical magnetometers resulting from ac-Stark shifts. We show that, in contrast to absorption-based techniques, the signal reduction associated with classical broadening can be compensated in magnetometers based on phase measurements using electromagnetically induced transparency (EIT). However, due to ac-Stark-associated quantum noise, the signal-to-noise ratio of EIT-based magnetometers attains a maximum value at a certain laser intensity. This value is independent of the quantum statistics of the light and defines a standard quantum limit of sensitivity. We demonstrate that an EIT-based optical magnetometer in Faraday configuration is the best candidate to achieve the highest sensitivity of magnetic field detection and give a detailed analysis of such a device.
The basic idea behind selective multiscale reconstruction of functions from error-affected data is outlined on the sphere. The selective reconstruction mechanism is based on the premise that multiscale approximation can be well-represented in terms of only a relatively small number of expansion coefficients at various resolution levels. An attempt is made within a tree algorithm (pyramid scheme) to remove the noise component from each scale coefficient using a priori statistical information (provided by an error covariance kernel of a Gaussian, stationary stochastic model).
Spherical Tikhonov Regularization Wavelets in Satellite Gravity Gradiometry with Random Noise
(2000)
This paper considers a special class of regularization methods for satellite gravity gradiometry based on Tikhonov spherical regularization wavelets with particular emphasis on the case of data blurred by random noise. A convergence rate is proved for the regularized solution, and a method is discussed for choosing the regularization level a posteriori from the gradiometer data.
Being interested in (rotation-)invariant pseudodifferential equations of satellite problems corresponding to spherical orbits, we are reasonably led to generating kernels that depend only on the spherical distance, i.e., in the language of modern constructive approximation, spherical radial basis functions. In this paper, approximate identities generated by such (rotation-invariant) kernels which are additionally locally supported are investigated in detail from a theoretical as well as a numerical point of view. So-called spherical difference wavelets are introduced. The wavelet transforms are evaluated by the use of a numerical integration rule that is based on Weyl's law of equidistribution. This approximate formula is constructed such that it can cope with millions of (satellite) data. The approximation error is estimated on the orbital sphere. Finally, we apply the developed theory to the problems of satellite-to-satellite tracking (SST) and satellite gravity gradiometry (SGG).
The satellite-to-satellite tracking (SST) problems are characterized from a mathematical point of view. Uniqueness results are formulated. Moreover, the basic relations are developed between (scalar) approximation of the Earth's gravitational potential by "scalar basis systems" and (vectorial) approximation of the gravitational field by "vectorial basis systems". Finally, the mathematical justification is given for approximating the external geopotential field by finite linear combinations of certain gradient fields (for example, gradient fields of multipoles) consistent with a given set of SST data.
In this short note we prove some general results on semi-stable sheaves on P_2 and P_3 with arbitrary linear Hilbert polynomial. Using Beilinson's spectral sequence, we compute free resolutions for this class of semi-stable sheaves and deduce that the smooth moduli spaces M_{r m + s}(P_2) and M_{r m + r - s}(P_2) are birationally equivalent if r and s are coprime.
Linearized flows past slender bodies can be asymptotically described by a linear Fredholm integral equation. A collocation method to solve this equation is presented. In cases where the spectral representation of the integral operator is explicitly known, the collocation method recovers the spectrum of the continuous operator. The approximation error is estimated for two discretizations of the integral operator and the convergence is proved. The collocation scheme is validated in several test cases and extended to situations where the spectrum is not explicit.
We examine the feasibility polyhedron of the uncapacitated hub location problem (UHL) with multiple allocation, which has applications in the fields of air passenger and cargo transportation, telecommunication and postal delivery services. In particular we determine the dimension and derive some classes of facets of this polyhedron. We develop some general rules about lifting facets from the uncapacitated facility location (UFL) for UHL and projecting facets from UHL to UFL. By applying these rules we get a new class of facets for UHL which dominates the inequalities in the original formulation. Thus we get a new formulation of UHL whose constraints are all facet defining. We show its superior computational performance by benchmarking it on a well known data set.
Since nowadays many collaborating software developers are needed to design ever more complex applications, the trend is moving more and more towards spatially distributed work. This development is favoured not least by the possibilities for communication and data exchange offered by the Internet. On this basis, tools are to be designed and developed that enable efficient distributed software development. Using the Internet for this purpose solves the connectivity problem over very large distances; using web servers and browsers meets the requirements of operating-system independence and of realising distribution in the sense of the client/server principle. The umbrella term "Software Configuration Management" (SCM) covers the set of all tasks arising in product management in software production. In this report, we first formulate the requirements for a web-based SCM system, name some technical options, and then examine and compare several existing SCM products that offer a web interface against these requirements.
At a time when the Internet has penetrated almost all areas of human life and, not least because of its seemingly unlimited possibilities for obtaining and exchanging information and for worldwide communication, enjoys very strong popularity, it is not only in the interest of computing centres and service providers to have a means of billing for the resources consumed. Opening up new regions, as well as expanding existing networks to provide higher bandwidths and better transmission speeds, involves immense costs. It is not the task of this work to decide in what way these costs should be apportioned or distributed among the users, nor do we want to make proposals in this direction, since that is the domain of other disciplines, such as business administration, economics, and politics. Our task, however, is to examine the computer-science-specific problems of the machine-internal collection of accounting information and to hand the values gathered in this way over to the specialists of other fields for further processing. This work therefore first deals with the basic properties and models of the network traffic under consideration, and then identifies and works out the prerequisites and possibilities for realising a user-oriented collection and billing of the network resources used.
Since two-valued propositional logic occupies a place, not only within logic as a whole but already within propositional logic, comparable to that of Euclidean planimetry within the whole of modern geometry, this report gives an introduction to a many-valued logic that is independent of the tertium non datur. Besides the definition of and examples for many-valued logics, we also address the possibility of axiomatisation as well as questions of the decision problem in such logics.
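As an illustration of a logic in which the tertium non datur fails, the well-known Łukasiewicz three-valued connectives can be tabulated in a few lines (a sketch; the report's own choice of many-valued logic may differ):

```python
from fractions import Fraction

HALF = Fraction(1, 2)          # the third truth value, "undetermined"

def neg(a):
    return 1 - a               # negation

def conj(a, b):
    return min(a, b)           # conjunction

def disj(a, b):
    return max(a, b)           # disjunction

def impl(a, b):
    return min(Fraction(1), 1 - a + b)   # Lukasiewicz implication
```

On the values {0, 1} these connectives reproduce classical two-valued logic, but the excluded middle a ∨ ¬a evaluates to 1/2 for the undetermined value.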
Besides the work in the field of manipulating rigid objects, there are currently several research and development activities going on in the field of manipulating non-rigid or deformable objects. Several papers from various projects and countries have been published at international conferences in this field. However, there has been no comprehensive work which both provides a representative overview of the state of the art and identifies the important aspects in this field. Thus, we collected these activities and invited the corresponding working groups to present an overview of their research. Altogether, nineteen authors from Japan, Germany, Italy, Greece, the United Kingdom, and Australia contributed to this book. Their research work covers all the different aspects that occur when manipulating deformable objects. The contributions can be characterized and grouped by the following four aspects: * object modeling and simulation, * planning and control strategies, * collaborative systems, and * applications and industrial experiences. In the following, we give a short motivation and overview of the individual chapters of the book. The simulation of deformable objects is one way to approach the problem of manipulating these objects by robots. Based on a physical model of the object and the occurring constraints, the resulting object shape is calculated. In Chapter 2, Hirai presents an energy-based approach, where the internal energy under the geometric constraints is minimized. Frugoli et al. introduce a force-based approach, where the forces between discrete particles are minimized subject to given constraints. Finally, Remde and Henrich extend the energy-based approach to plastic deformation and give a solution of the inverse simulation problem. Even if the object behavior is predicted by simulation, there remains the question of how to control the robot during a single manipulation operation.
An additional question is how to derive an overall plan for the concatenated manipulation operations. In Chapter 3, Wada investigates the control problems when positioning multiple points of a planar deformable object. McCarrager proposes a control scheme exploiting the flexibility, rather than minimizing it. Abegg et al. use a simple contact state model to describe typical assembly tasks and to derive robust manipulation primitives. Finally, Ono presents an automatic sewing system and suggests a strategy for unfolding fabric. In several manipulation tasks, it is reasonable to apply more than one robot, especially in cases where the deformable object has to take a specific shape. Since the robots working on the same object influence each other, different control algorithms have to be introduced. In Chapter 4, Yoshida and Kosuge investigate this problem for the task of bending a sheet of metal and exploit the relationship between the static object deformation and the bending moments. Tanner and Kyriakopoulos regard the deformable object as an underactuated mechanical system and make use of the existence of non-holonomic constraints. Both approaches model the deformable object with finite elements. All of the above aspects have their counterpart in different applications and industrial experiences. In Chapter 5, Rizzi et al. present test cases and applications of their approach to simulating the manipulation of fabric, wires, cables, and soft bags. Buckingham and Graham give an overview of two European projects on processing white fish, including locating, gripping, and deheading the fish. Maruyama outlines the three development phases of a robot system for performing outage-free maintenance of live-line power supply in Japan. Finally, Kämper presents the development of a flexible automatic cabling unit for the wiring of long-tube lighting with plug components.
In a Black-Scholes type financial market, the risky asset S_1(.) is supposed to satisfy dS_1(t) = S_1(t)(b(t) dt + sigma(t) dW(t)), where W(.) is a Brownian motion. The processes b(.) and sigma(.) are progressively measurable with respect to the filtration generated by W(.); they are known as the mean rate of return and the volatility, respectively. A portfolio is described by a progressively measurable process pi_1(.), where pi_1(t) gives the amount invested in the risky asset at time t. Typically, the optimal portfolio pi_1(.) (the one which maximizes the expected utility) depends at time t, among other quantities, on b(t), meaning that the mean rate of return must be known in order to follow the optimal trading strategy. However, in a real-world market, no direct observation of this quantity is possible, since the available information comes from the behavior of the stock prices, which gives only a noisy observation of b(.). In the present work, we consider the optimal portfolio selection which uses only the observation of stock prices.
Abstract: We develop a method of singularity analysis for conformal graphs which, in particular, is applicable to the holographic image of AdS supergravity theory. It can be used to determine the critical exponents for any such graph in a given channel. These exponents determine the towers of conformal blocks that are exchanged in this channel. We analyze the scalar AdS box graph and show that it has the same critical exponents as the corresponding CFT box graph. Thus pairs of external fields couple to the same exchanged conformal blocks in both theories. This is looked upon as a general structural argument supporting the Maldacena hypothesis.
Starting with general hyperbolic systems of conservation laws, a special subclass is extracted in which classical solutions can be expressed in terms of a linear transport equation. A characterizing property of this subclass, which contains, for example, all linear systems and nonlinear scalar equations, is the existence of so-called exponentially exact entropies.
Based on general partitions of unity and standard numerical flux functions, a class of mesh-free methods for conservation laws is derived. A Lax-Wendroff type consistency analysis is carried out for the general case of moving partition functions. The analysis leads to a set of conditions which are checked for the finite volume particle method (FVPM). As a by-product, classical finite volume schemes are recovered within this approach for special choices of the partition of unity.
An asymptotic preserving numerical scheme (with respect to diffusion scalings) for a linear transport equation is investigated. The scheme is adopted from a class of recently developed schemes. Stability is proven uniformly in the mean free path under a CFL-type condition which turns into a parabolic CFL condition in the diffusion limit.
Abstract: We analyse 4-dimensional massive phi^4 theory at finite temperature T in the imaginary-time formalism. We present a rigorous proof that this quantum field theory is renormalizable, to all orders of the loop expansion. Our main point is to show that the counterterms can be chosen temperature independent, so that the flow of the relevant parameters as a function of T can be followed. Our result confirms the experience from explicit calculations to the leading orders. The proof is based on flow equations, i.e. on the (perturbative) Wilson renormalization group. In fact, we show that the difference between the theories at T > 0 and at T = 0 contains no relevant terms. In contrast to BPHZ-type formalisms, our approach gives direct access to the renormalization conditions and the counterterms at the same time, since both appear as boundary terms of the renormalization group flow. This is crucial for the proof.
We consider investment problems where an investor can invest in a savings account, stocks, and bonds and tries to maximize her utility from terminal wealth. In contrast to the classical Merton problem, we assume a stochastic interest rate. To solve the corresponding control problems, it is necessary to prove a verification theorem without the usual Lipschitz assumptions.
We consider the determination of optimal portfolios under the threat of a crash. Our main assumption is that upper bounds for both the crash size and the number of crashes occurring before the time horizon are given. We make no probabilistic assumptions on the crash size or the crash time distribution. The optimal strategies in the presence of a crash possibility are characterized by a balance problem between insurance against the crash and good performance in the crash-free situation. Explicit solutions for the log-utility case are given. Our main finding is that constant portfolios are no longer optimal.
Chaotic Billiards
(2000)
The frictionless motion of a particle on a plane billiard table bounded by a closed curve provides a very simple example of a conservative classical system with non-trivial, chaotic dynamics. The limiting cases of strictly regular ("integrable") and strictly irregular ("ergodic") systems can be illustrated, as well as the typical case, which shows an intricate mixture of regular and irregular behavior. Irregular orbits are characterized by an extreme sensitivity to the initial conditions. Such billiard systems are ideally suited both as educational models of simple systems with complicated dynamics and for far-reaching fundamental investigations.
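The billiard map described above can be sketched in a few lines. The following is a minimal simulation of the integrable circular case (the function name and setup are our own, not from the paper); a chaotic shape such as the Bunimovich stadium would require only a different boundary intersection test:

```python
import math

def reflect_in_circle(pos, vel, n_bounces=5):
    """Trace specular reflections of a point particle inside the unit
    circle (an integrable billiard). pos must lie on the boundary and
    vel must be a unit vector pointing into the table."""
    points = [pos]
    x, y = pos
    vx, vy = vel
    for _ in range(n_bounces):
        # Forward intersection of the ray (x,y) + t (vx,vy) with the
        # unit circle: solve t^2 + 2 b t + c = 0 for t > 0.
        b = x * vx + y * vy
        c = x * x + y * y - 1.0
        t = -b + math.sqrt(b * b - c)
        x, y = x + t * vx, y + t * vy
        # Specular reflection v -> v - 2 (v.n) n, outward normal n = (x, y).
        dot = vx * x + vy * y
        vx, vy = vx - 2.0 * dot * x, vy - 2.0 * dot * y
        points.append((x, y))
    return points
```

In the circular billiard every chord has the same length and the motion is regular; replacing the circle by a stadium or another non-integrable boundary produces the sensitive, chaotic orbits discussed above.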
An extremely simple and convenient method is presented for computing eigenvalues in quantum mechanics by representing position and momentum operators in a simple matrix form. The simplicity and success of the method are illustrated by numerical results concerning eigenvalues of bound systems and resonances for Hermitian and non-Hermitian Hamiltonians as well as driven quantum systems.
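A minimal sketch of such a matrix approach, assuming a truncated harmonic-oscillator basis (one common choice of simple matrix representation; the paper's specific representation may differ). Diagonalizing H = p^2/2 + x^2/2 + lam*x^4 built from these matrices reproduces the oscillator spectrum n + 1/2 for lam = 0 and gives anharmonic eigenvalues otherwise:

```python
import numpy as np

def xp_matrices(n):
    """Position and momentum in the truncated harmonic-oscillator basis
    (hbar = m = omega = 1): x = (a + a^T)/sqrt(2), p = i (a^T - a)/sqrt(2)."""
    a = np.diag(np.sqrt(np.arange(1, n)), k=1)  # annihilation operator
    x = (a + a.T) / np.sqrt(2.0)
    p = 1j * (a.T - a) / np.sqrt(2.0)
    return x, p

def eigenvalues(lam=0.0, n=60):
    """Sorted eigenvalues of H = p^2/2 + x^2/2 + lam * x^4 in an
    n-dimensional truncated basis."""
    x, p = xp_matrices(n)
    h = (p @ p) / 2.0 + (x @ x) / 2.0 + lam * np.linalg.matrix_power(x, 4)
    return np.linalg.eigvalsh(h)  # h is Hermitian by construction
```

Only the low-lying eigenvalues are trustworthy at a given truncation n; convergence is checked by increasing n, which is part of what makes the method so convenient.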