2D quantum dilaton gravitational Hamiltonian, boundary terms and new definition for total energy
(1995)
The ADM and Bondi mass for the RST model are first discussed following Hawking and Horowitz's argument. Since the RST model contains a nonlocal term, the RST Lagrangian has to be localized so that Hawking and Horowitz's proposal can be carried out. Expressing the localized RST action in the ADM formulation, the RST Hamiltonian can be derived while keeping track of all boundary terms. The total boundary terms can then be taken as the total energy for the RST model. Our result shows that the previous expressions for the ADM and Bondi mass need to be modified at the quantum level; at the classical level, our mass formula reduces to that given by Bilal and Kogan [5] and de Alwis [6]. We find that there is a new contribution to the ADM and Bondi mass from the RST boundary due to the existence of the hidden dynamical field. The ADM and Bondi mass with and without the RST boundary are discussed in detail for the static and dynamical solutions, respectively, and some new properties are found. The thunderpop of the RST model is also encountered in our new Bondi mass formula.
This paper considers the numerical solution of a transmission boundary-value problem for the time-harmonic Maxwell equations with the help of a special finite volume discretization. Applying this technique to several three-dimensional test problems, we obtain large, sparse, complex linear systems, which are solved by using BiCG, CGS, BiCGSTAB resp., GMRES. We combine these methods with suitably chosen preconditioning matrices and compare the speed of convergence.
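The solver comparison described above can be sketched with SciPy's iterative Krylov methods. The matrix below is a small, hypothetical stand-in for the paper's large sparse complex systems (the actual matrices come from the finite volume discretization of the Maxwell problem), and the ILU preconditioner is one plausible choice of "suitably chosen preconditioning matrix", not necessarily the one used in the paper:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy stand-in: a diagonally dominant, complex, non-Hermitian tridiagonal system.
n = 200
rng = np.random.default_rng(0)
diag = 4.0 + 1.0j + 0.1 * rng.standard_normal(n)
A = sp.diags([diag, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csc")
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Incomplete LU factorization as preconditioner; BiCG also needs the adjoint solve.
ilu = spla.spilu(A)
M = spla.LinearOperator(A.shape, matvec=ilu.solve,
                        rmatvec=lambda y: ilu.solve(y, trans="H"),
                        dtype=A.dtype)

results = {}
for name, solver in [("BiCG", spla.bicg), ("CGS", spla.cgs),
                     ("BiCGSTAB", spla.bicgstab), ("GMRES", spla.gmres)]:
    x, info = solver(A, b, M=M)
    results[name] = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

Comparing iteration counts or the `results` residuals across solvers mirrors the kind of convergence comparison the abstract describes.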
We present a system based on a 3D scanner working on the light-sectioning principle, with which a human being can be captured three-dimensionally within 1.5 seconds. The measurement data are evaluated by a model-based data interpretation using evolutionary algorithms, so that arbitrary body measurements can be determined. The result is an individualized CAD model of the person in the computer. Such a model can serve as a virtual dress form for the production of made-to-measure clothing.
We present here a two-dimensional kinetic scheme, based on the Kaniel method, for the equations governing the motion of a compressible flow of an ideal gas (air). The basic flux functions are computed analytically and are used in the organization of the flux computation. The algorithm is implemented and tested on the 1D shock and the 2D shock-obstacle interaction problems.
The classic approach in robust optimization is to optimize the solution with respect to the worst-case scenario. This pessimistic approach yields solutions that perform best if the worst scenario happens, but usually perform poorly on average. A solution that optimizes the average performance, on the other hand, lacks a worst-case performance guarantee.
In practice it is important to find a good compromise between these two solutions. We propose to deal with this problem by considering it from a bicriteria perspective. The Pareto curve of the bicriteria problem visualizes exactly how costly it is to ensure robustness and helps to choose the solution with the best balance between expected and guaranteed performance.
Building upon a theoretical observation on the structure of Pareto solutions for problems with polyhedral feasible sets, we present a column generation approach that requires no direct solution of the computationally expensive worst-case problem. In computational experiments we demonstrate the effectiveness of both the proposed algorithm and the bicriteria perspective in general.
We consider the problem of evacuating a region with the help of buses. For a given set of possible collection points where evacuees gather, and possible shelter locations where evacuees are brought to, we need to determine both collection points and shelters we would like to use, and bus routes that evacuate the region in minimum time.
We model this integrated problem using an integer linear program, and present a branch-cut-and-price algorithm that generates bus tours in its pricing step. In computational experiments we show that our approach is able to solve instances of realistic size in sufficient time for practical application, and considerably outperforms the usage of a generic ILP solver.
In contrast to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e. by a measure of similarity sim and a set CB of cases. This raises the question of whether there are any differences concerning the learning power of the two approaches. In this article we study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The results strengthen the hypothesis that symbolic and case-based methods have equivalent learning power and show the interdependency between the measure used by a case-based algorithm and the target concept.
Retrieving multiple cases is supposed to be an adequate retrieval strategy for guiding partial-order planners because of the recognized flexibility of these planners to interleave steps in the plans. Cases are combined by merging them. In this paper, we examine two different ways of merging cases in the context of partial-order planning. We will see that merging cases can be very difficult if the cases are merged eagerly. On the other hand, if cases are merged by avoiding redundant steps, the guidance of the additional cases tends to decrease with the number of covered goals and retrieved cases in domains having a certain kind of interactions. Thus, in these domains, retrieving a single case covering many of the goals of the problem, or retrieving fewer cases covering many of the goals, is at least as effective as retrieving several cases covering all goals.
A Case Study on Specification, Detection and Resolution of IN Feature Interactions with Estelle
(1994)
We present an approach for the treatment of feature interactions in Intelligent Networks. The approach is based on the formal description technique Estelle and consists of three steps. For the first step, a specification style supporting the integration of additional features into a basic service is introduced. As a result, feature integration is achieved by adding specification text, i.e. on a purely syntactical level. The second step is the detection of feature interactions resulting from the integration of additional features. A formal criterion is given that can be used for the automatic detection of a particular class of feature interactions. In the third step, previously detected feature interactions are resolved. An algorithm has been devised that allows the automatic incorporation of high-level design decisions into the formal specification. The presented approach is applied to the Basic Call Service and several supplementary interacting features.
A large set of criteria to evaluate formal methods for reactive systems is presented. To make this set more comprehensible, it is structured according to a Concept-Model of formal methods. It is made clear that it is necessary to make the catalogue more specific before applying it, and some of the steps needed to do so are explained. As an example, the catalogue is applied, within the context of the application domain of building automation systems, to three different formal methods: SDL, statecharts, and a temporal logic.
In this paper we give the definition of a solution concept in multicriteria combinatorial optimization. We show how Pareto, max-ordering and lexicographically optimal solutions can be incorporated in this framework. Furthermore we state some properties of lexicographic max-ordering solutions, which combine features of these three kinds of optimal solutions. Two of these properties, which are desirable from a decision maker's point of view, are satisfied if and only if the solution concept is that of lexicographic max-ordering.
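The lexicographic max-ordering concept compares objective vectors after sorting their components in non-increasing order and then comparing lexicographically. A minimal sketch with hypothetical feasible solutions (all objectives to be minimized; the data is illustrative, not from the paper):

```python
def lex_max_key(obj_vector):
    # Sort components in non-increasing order; comparing these tuples
    # lexicographically realizes the lexicographic max-ordering.
    return tuple(sorted(obj_vector, reverse=True))

# Hypothetical solutions with three objectives each.
solutions = {"a": (4, 1, 3), "b": (3, 3, 2), "c": (4, 2, 1)}
best = min(solutions, key=lambda s: lex_max_key(solutions[s]))
```

Here "b" wins because its worst objective value (3) beats the worst value (4) of the others; ties on the worst component would be broken by the second-worst, and so on, which is how this concept refines plain max-ordering.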
In this paper we develop a data-driven mixture of vector autoregressive models with exogenous components. The process is assumed to change regimes according to an underlying Markov process. In contrast to the hidden Markov setup, we allow the transition probabilities of the underlying Markov process to depend on past time series values and exogenous variables. Such processes have potential applications to modeling brain signals. For example, brain activity at time t (measured by electroencephalograms) can be modeled as a function of both its past values and exogenous variables (such as visual or somatosensory stimuli). Furthermore, we establish stationarity, geometric ergodicity and the existence of moments for these processes under suitable conditions on the parameters of the model. Such properties are important for understanding the stability properties of the model as well as deriving the asymptotic behavior of various statistics and model parameter estimators.
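The key difference from a hidden Markov setup can be sketched by simulating a two-regime autoregression with one exogenous input, where the regime-switching probability depends on the past observation through a logistic link. All parameter values and the logistic form are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two illustrative AR(1) regimes with an exogenous input x.
phi = [0.9, -0.5]    # autoregressive coefficient per regime
beta = [0.3, 0.1]    # exogenous coefficient per regime

def switch_prob(y_prev, x_prev):
    # Transition probability depends on past data (logistic link) --
    # unlike a hidden Markov model with constant transition probabilities.
    return 1.0 / (1.0 + np.exp(-(y_prev + x_prev)))

T = 500
x = rng.standard_normal(T)            # exogenous series
y = np.zeros(T)
regime = np.zeros(T, dtype=int)
for t in range(1, T):
    if rng.random() < switch_prob(y[t - 1], x[t - 1]):
        regime[t] = 1 - regime[t - 1]  # switch regimes
    else:
        regime[t] = regime[t - 1]      # stay
    k = regime[t]
    y[t] = phi[k] * y[t - 1] + beta[k] * x[t] + 0.1 * rng.standard_normal()
```

Both regimes here have |phi| < 1, consistent with the kind of parameter conditions under which the paper establishes stationarity and geometric ergodicity.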
A new approach for modelling time that does not rely on the concept of a clock is proposed. In order to establish a notion of time, system behaviour is represented as a joint progression of multiple threads of control, which satisfies a certain set of axioms. We show that the clock-independent time model is related to the well-known concept of a global clock and argue that both approaches establish the same notion of time.
Coloring terms (rippling) is a technique developed for inductive theorem proving which uses syntactic differences of terms to guide the proof search. Annotations (colors) to terms are used to maintain this information. This technique has several advantages, e.g. it is highly goal oriented and involves little search. In this paper we give a general formalization of coloring terms in a higher-order setting. We introduce a simply-typed lambda calculus with color annotations and present an appropriate (pre-)unification algorithm. Our work is a formal basis to the implementation of rippling in a higher-order setting which is required e.g. in case of middle-out reasoning. Another application is in the construction of natural language semantics, where the color annotations rule out linguistically invalid readings that are possible using standard higher-order unification.
This paper develops a sound and complete transformation-based algorithm for unification in an extensional order-sorted combinatory logic supporting constant overloading and a higher-order sort concept. Appropriate notions of order-sorted weak equality and extensionality - reflecting order-sorted \(\beta\eta\)-equality in the corresponding lambda calculus given by Johann and Kohlhase - are defined, and the typed combinator-based higher-order unification techniques of Dougherty are modified to accommodate unification with respect to the theory they generate. The algorithm presented here can thus be viewed as a combinatory logic counterpart to that of Johann and Kohlhase, as well as a refinement of that of Dougherty, and provides evidence that combinatory logic is well-suited to serve as a framework for incorporating order-sorted higher-order reasoning into deduction systems aiming to capitalize on both the expressiveness of extensional higher-order logic and the efficiency of order-sorted calculi.
Treating polyatomic gases in kinetic gas theory requires an appropriate molecule model taking into account the additional internal structure of the gas particles. In this paper we describe two such models, each arising from quite different approaches to this problem. A simulation scheme for solving the corresponding kinetic equations is presented and some numerical results to 1D shockwaves are compared.
Simulation methods like DSMC are an efficient tool to compute rarefied gas flows. Using supercomputers it is possible to include various real gas effects like vibrational energies or chemical reactions in a gas mixture. Nevertheless it is still necessary to improve the accuracy of the current simulation methods in order to reduce the computational effort. To support this task the paper presents a comparison of the classical DSMC method with the so-called Finite Pointset Method. This new approach was developed over several years in the framework of the European space project HERMES. The comparison given in the paper is based on two different test cases: a spatially homogeneous relaxation problem and a two-dimensional axisymmetric flow problem at high Mach numbers.
We consider the problem of evacuating an urban area after a natural or man-made disaster. Several planning aspects need to be considered in such a scenario, and they are usually treated separately due to their computational complexity. These aspects include: Which shelters are used to accommodate evacuees? How should public transport be scheduled for transit-dependent evacuees? And how do public and individual traffic interact? Furthermore, besides evacuation time, the risk of the evacuation also needs to be considered.
We propose a macroscopic multi-criteria optimization model that includes all of these questions simultaneously. As a mixed-integer programming formulation cannot handle instances of real-world size, we develop a genetic algorithm of NSGA-II type that is able to generate feasible solutions of good quality in reasonable computation times.
We extend the applicability of these methods by also considering how to aggregate instance data, and how to generate solutions for the original instance starting from a reduced solution.
In computational experiments using real-world data modelling the cities of Nice in France and Kaiserslautern in Germany, we demonstrate the effectiveness of our approach and compare the trade-off between different levels of data aggregation.
This paper describes a system that supports software development processes in virtual software corporations. A virtual software corporation consists of a set of enterprises that cooperate in projects to fulfill customer needs. Contracts are negotiated throughout the whole lifecycle of a software development project. These negotiations strongly influence the performance of a company. Therefore, it is useful to support negotiations and planning decisions with software agents. Our approach integrates software agent approaches for negotiation support with flexible multiserver workflow engines.
A new algorithm for optimization problems with three objective functions is presented which computes a representation for the set of nondominated points. This representation is guaranteed to have a desired coverage error and a bound on the number of iterations needed by the algorithm to meet this coverage error is derived. Since the representation does not necessarily contain nondominated points only, ideas to calculate bounds for the representation error are given. Moreover, the incorporation of domination during the algorithm and other quality measures are discussed.
The concept of the Virtual Software Corporation (VSC) has recently become a practical reality as a result of advances in communication and distributed technologies. However, there are significant difficulties with the management of the software development process within a VSC. The main problem is the significantly increased communication complexity of the process model for such developments. The classic managerial hierarchy is generally replaced by a "flatter" network of commitments. Therefore new solution approaches are required to provide the necessary process support. The purpose of this paper is to present a solution approach which models the process based on deontic logic. The approach has been validated against a case study where it was used to model commitments and inter-human communications within the software development process of a VSC. The use of the formalism is exemplified through a prototype system using a layered multi-agent architecture.
We present a deterministic simulation scheme for the Boltzmann Semiconductor Equation. The convergence of the method is shown for a simplified space homogeneous case. Numerical experiments, which are very promising, are also given in this situation. The extension to the space inhomogeneous equation with a self-consistent electric field is outlined. Theoretical considerations for that case are in preparation.
Many discrepancy principles are known for choosing the parameter \(\alpha\) in the regularized operator equation \((T^*T+ \alpha I)x_\alpha^\delta = T^*y^\delta\), \(\|y-y^\delta\|\leq \delta\), in order to approximate the minimal norm least-squares solution of the operator equation \(Tx=y\). In this paper we consider a class of discrepancy principles for choosing the regularization parameter when \(T^*T\) and \(T^*y^\delta\) are approximated by \(A_n\) and \(z_n^\delta\) respectively, with \(A_n\) not necessarily self-adjoint. This procedure generalizes the work of Engl and Neubauer (1985), and particular cases of the results are applicable to the regularized projection method as well as to a degenerate kernel method considered by Groetsch (1990).
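A minimal finite-dimensional sketch of a Morozov-type discrepancy principle for the Tikhonov equation above: choose \(\alpha\) so that the residual \(\|Tx_\alpha^\delta - y^\delta\|\) is of order \(\tau\delta\). The matrix, the noise model, and the bisection rule are all illustrative assumptions; in particular, the paper's setting replaces \(T^*T\) and \(T^*y^\delta\) by perturbed versions \(A_n\), \(z_n^\delta\), which this sketch does not:

```python
import numpy as np

# Build a small ill-conditioned T with rapidly decaying singular values.
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.9 ** np.arange(n)
T = U @ np.diag(s) @ V.T

x_true = V[:, 0]
y = T @ x_true
delta = 1e-3
y_delta = y + delta * rng.standard_normal(n) / np.sqrt(n)   # noise of norm ~ delta

def x_alpha(alpha):
    # Tikhonov-regularized solution (T^T T + alpha I) x = T^T y_delta.
    return np.linalg.solve(T.T @ T + alpha * np.eye(n), T.T @ y_delta)

# Geometric bisection on alpha so that ||T x_alpha - y_delta|| ~ tau * delta.
tau = 2.0
lo, hi = 1e-14, 1.0
for _ in range(200):
    mid = np.sqrt(lo * hi)
    if np.linalg.norm(T @ x_alpha(mid) - y_delta) < tau * delta:
        lo = mid      # residual still below the discrepancy level: alpha can grow
    else:
        hi = mid
alpha = lo
```

The bisection exploits that the residual is monotonically increasing in \(\alpha\), which is what makes discrepancy principles computationally convenient.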
This paper investigates the suitability of the mobile agents approach to the problem of integrating a collection of local DBMS into a single heterogeneous large-scale distributed DBMS. The paper proposes a model of distributed transactions as a set of mobile agents and presents the relevant execution semantics. In addition, the mechanisms which are needed to guarantee the ACID properties in the considered environment are discussed.
A distributional solution framework is developed for systems consisting of linear hyperbolic partial differential equations (PDEs) and switched differential algebraic equations (DAEs) which are coupled via boundary conditions. The unique solvability is then characterized in terms of a switched delay DAE. The theory is illustrated with an example of electric power lines modeled by the telegraph equations, which are coupled via a switching transformer, and simulations confirm the predicted impulsive solutions.
We derive a new class of particle methods for conservation laws, which are based on numerical flux functions to model the interactions between moving particles. The derivation is similar to that of classical Finite-Volume methods, except that the fixed grid structure in the Finite-Volume method is substituted by so-called mass packets of particles. We give some numerical results on a shock wave solution for the Burgers equation as well as the well-known one-dimensional shock tube problem.
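For reference, the classical finite-volume starting point of the derivation can be sketched for the Burgers test case: a fixed grid with a Lax-Friedrichs numerical flux (the paper replaces the fixed cells by moving mass packets, and its choice of flux function is not specified here, so this is only the grid-based analogue):

```python
import numpy as np

# Finite-volume scheme with Lax-Friedrichs flux for u_t + (u^2/2)_x = 0.
N, L, tend = 400, 2.0, 0.5
dx = L / N
xc = (np.arange(N) + 0.5) * dx
u = np.where(xc < 1.0, 1.0, 0.0)      # Riemann data: a right-moving shock

def flux(u):
    return 0.5 * u * u                # Burgers flux f(u) = u^2 / 2

t = 0.0
while t < tend:
    dt = min(0.4 * dx / max(np.abs(u).max(), 1e-12), tend - t)  # CFL step
    up = np.roll(u, -1)               # right neighbour (periodic boundary)
    # Lax-Friedrichs numerical flux at the interface i+1/2.
    F = 0.5 * (flux(u) + flux(up)) - 0.5 * dx / dt * (up - u)
    u = u - dt / dx * (F - np.roll(F, 1))
    t += dt
```

Because the update is in conservation form, total mass is preserved exactly, and the monotone flux keeps the solution within its initial bounds while the shock propagates at the correct speed.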
Compared to conventional techniques in computational fluid dynamics, the lattice Boltzmann method (LBM) seems to be a completely different approach to solve the incompressible Navier-Stokes equations. The aim of this article is to correct this impression by showing the close relation of LBM to two standard methods: relaxation schemes and explicit finite difference discretizations. As a side effect, new starting points for a discretization of the incompressible Navier-Stokes equations are obtained.
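The LBM/relaxation-scheme connection can be illustrated with a toy one-dimensional example rather than the full Navier-Stokes setting of the article: a D1Q2 lattice Boltzmann scheme for the diffusion equation, which for relaxation rate \(\omega\) reproduces an explicit finite difference scheme with diffusivity \(D = 1/\omega - 1/2\) in lattice units (for \(\omega = 1\) it collapses to neighbour averaging). The setup is an illustrative assumption, not the article's scheme:

```python
import numpy as np

N, steps, omega = 100, 200, 1.0
rho = np.exp(-0.05 * (np.arange(N) - N / 2) ** 2)   # initial density bump
f_plus = 0.5 * rho       # right-moving population
f_minus = 0.5 * rho      # left-moving population
mass0 = rho.sum()

for _ in range(steps):
    rho = f_plus + f_minus
    # Collision: BGK relaxation towards the local equilibrium rho / 2.
    f_plus += omega * (0.5 * rho - f_plus)
    f_minus += omega * (0.5 * rho - f_minus)
    # Streaming: shift populations along their lattice directions (periodic).
    f_plus = np.roll(f_plus, 1)
    f_minus = np.roll(f_minus, -1)

rho = f_plus + f_minus
```

With \(\omega = 1\) the combined collide-and-stream step gives exactly \(\rho_i^{new} = \tfrac12(\rho_{i-1} + \rho_{i+1})\), i.e. a standard explicit finite difference scheme, which is the kind of equivalence the article establishes in the incompressible Navier-Stokes setting.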
A single facility problem in the plane is considered, where an optimal location has to be identified for each of finitely many time-steps with respect to time-dependent weights and demand points. It is shown that the median objective can be reduced to a special case of the static multifacility median problem, such that results from the latter can be used to tackle the dynamic location problem. When using block norms as distance measure between facilities, a Finite Dominating Set (FDS) is derived. For the special case with only two time-steps, the resulting algorithm is analyzed with respect to its worst-case complexity. Due to the relation between dynamic location problems for T time periods and T-facility problems, this algorithm can also be applied to the static 2-facility location problem.
Information technology support for complex, dynamic, and distributed business processes as they occur in engineering domains requires an advanced process management system which enhances currently available workflow management services with respect to integration, flexibility, and adaptation. We present a uniform and flexible framework for advanced process management on an abstract level which uses and adapts agent technology from distributed artificial intelligence for both modelling and enacting processes. We identify two different frameworks for applying agent technology to process management: first, as a multi-agent system whose domain is process management; second, as a key infrastructure technology for building a process management system. We then follow the latter approach and introduce different agent types for managing activities, products, and resources which capture specific views on the process.
In continuous location problems we are given a set of existing facilities and we are looking for the location of one or several new facilities. In the classical approaches, weights are assigned to the existing facilities expressing the importance of the new facilities for the existing ones. In this paper, we consider a pointwise defined objective function where the weights are assigned to the existing facilities depending on the location of the new facility. This approach is shown to be a generalization of the median, center and centdian objective functions. In addition, this approach allows the formulation of completely new location models. Efficient algorithms as well as structural results for this algebraic approach to location problems are presented. Extensions to the multifacility and restricted case are also considered.
This paper describes the architecture and concept of operation of a Framework for Adaptive Process Modeling and Execution (FAME). The research addresses the absence of robust methods for supporting the software process management life cycle. FAME employs a novel, model-based approach in providing automated support for different activities in the software development life cycle including project definition, process design, process analysis, process enactment, process execution status monitoring, and execution status-triggered process redesign. FAME applications extend beyond the software development domain to areas such as agile manufacturing, project management, logistics planning, and business process reengineering.
We develop a framework for shape optimization problems under state equation constraints where both state and control are discretized by B-splines or NURBS. In other words, we use isogeometric analysis (IGA) for solving the partial differential equation and a nodal approach to change domains, where control points take the place of nodes and thus a quite general class of functions for representing optimal shapes and their boundaries becomes available. The minimization problem is solved by a gradient descent method where the shape gradient is defined in isogeometric terms. This gradient is obtained following two schemes: optimize first–discretize then and, reversely, discretize first–optimize then. We show that for isogeometric analysis, the two schemes yield the same discrete system. Moreover, we also formulate shape optimization with respect to NURBS in the optimize-first ansatz, which amounts to finding optimal control points and weights simultaneously. Numerical tests illustrate the theory.
Facility location problems are concerned with the optimal location of one or several new facilities with respect to a set of existing ones. The objectives involve the distance between new and existing facilities, usually a weighted sum or weighted maximum. Since the various stakeholders (decision makers) will have different opinions of the importance of the existing facilities, a multicriteria problem with several sets of weights, and thus several objectives, arises. In our approach, we assume the decision makers to make only fuzzy comparisons of the different existing facilities. A geometric mean method is used to obtain the fuzzy weights for each facility and each decision maker. The resulting multicriteria facility location problem is again solved using fuzzy techniques. We prove that the final compromise solution is weakly Pareto optimal, and Pareto optimal if it is unique or under certain assumptions on the estimates of the Nadir point. A numerical example is considered to illustrate the methodology.
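The first stage of such an approach can be sketched with crisp numbers instead of fuzzy ones: given a pairwise comparison matrix over facilities, the geometric mean of each row yields a priority, which is then normalized into a weight. The comparison matrix below is hypothetical, and a full treatment would use fuzzy (e.g. triangular) comparison values as in the paper:

```python
import numpy as np

# C[i, j] states how much more important facility i is than facility j
# (reciprocal matrix, illustrative values).
C = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

g = C.prod(axis=1) ** (1.0 / C.shape[0])   # row geometric means
w = g / g.sum()                            # normalized weights
```

Repeating this per decision maker gives one weight vector per stakeholder, i.e. one objective per stakeholder in the multicriteria location problem.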
A General Hilbert Space Approach to Wavelets and Its Application in Geopotential Determination
(1999)
A general approach to wavelets is presented within the framework of a separable functional Hilbert space H. The basic tool is the construction of H-product kernels by use of Fourier analysis with respect to an orthonormal basis in H. Scaling function and wavelet are defined in terms of H-product kernels. Wavelets are shown to be 'building blocks' that decorrelate the data. A pyramid scheme provides fast computation. Finally, the determination of the earth's gravitational potential from single and multipole expressions is organized as an example of wavelet approximation in Hilbert space structure.
In this paper we consider the problem of optimizing a piecewise-linear objective function over a non-convex domain. In particular, we do not allow the solution to lie in the interior of a prespecified region R. We discuss the geometrical properties of this problem and present algorithms based on combinatorial arguments. In addition, we show how to construct sets R of quite complicated shape while maintaining the combinatorial properties.
We provide an overview of UNICOM, an inductive theorem prover for equational logic which is based on refined rewriting and completion techniques. The architecture of the system as well as its functionality are described. Moreover, an insight into the most important aspects of the internal proof process is provided. This knowledge about how the central inductive proof component of the system essentially works is crucial for human users who want to solve non-trivial proof tasks with UNICOM and thoroughly analyse potential failures. The presentation is focussed on practical aspects of understanding and using UNICOM. A brief but complete description of the command interface, an installation guide, an example session, a detailed extended example illustrating various special features, and a collection of successfully handled examples are also included.
In the present paper multilane models for vehicular traffic are considered. A microscopic multilane model based on reaction thresholds is developed. Based on this model an Enskog like kinetic model is developed. In particular, care is taken to incorporate the correlations between the vehicles. From the kinetic model a fluid dynamic model is derived. The macroscopic coefficients are deduced from the underlying kinetic model. Numerical simulations are presented for all three levels of description in [10]. Moreover, a comparison of the results is given there.
In this paper the work presented in [6] is continued. The present paper contains detailed numerical investigations of the models developed there. A numerical method to treat the kinetic equations obtained in [6] is presented and results of the simulations are shown. Moreover, the stochastic correlation model used in [6] is described and investigated in more detail.
Cooperative decision making involves a continuous process of assessing the validity of data, information and knowledge acquired and inferred by colleagues; that is, the shared knowledge space must be transparent. The ACCORD methodology provides an interpretation framework for the mapping of domain facts - constituting the world model of the expert - onto conceptual models, which can be expressed in formal representations. The ACCORD-BPM framework allows a stepwise and non-arbitrary reconstruction of the problem solving competence of BPM experts as a prerequisite for an appropriate architecture of both BPM knowledge bases and the BPM "reasoning device".
A way to consistently derive kinetic models for vehicular traffic from microscopic follow-the-leader models is presented. The obtained class of kinetic equations is investigated. Explicit examples of kinetic models are developed with a particular emphasis on obtaining models that give realistic results. For space homogeneous traffic flow situations, numerical examples are given, including stationary distributions and fundamental diagrams.
In this paper the kinetic model for vehicular traffic developed in [3,4] is considered and theoretical results for the space homogeneous kinetic equation are presented. Existence and uniqueness results for the time dependent equation are stated. An investigation of the stationary equation leads to a boundary value problem for an ordinary differential equation. Existence of the solution and some properties are proved. A numerical investigation of the stationary equation is included.
Multiobjective combinatorial optimization problems have received increasing attention in recent years. Nevertheless, many algorithms are still restricted to the bicriteria case. In this paper we propose a new algorithm for computing all Pareto optimal solutions. Our algorithm is based on the notion of level sets and level curves and contains as a subproblem the determination of K best solutions for a single objective combinatorial optimization problem. We apply the method to the Multiobjective Quadratic Assignment Problem (MOQAP). We present two algorithms for ranking QAP solutions and finally give computational results comparing the methods.
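For orientation, a brute-force filter for Pareto optimal points (not the paper's level-set algorithm, which avoids such exhaustive comparison) can be written in a few lines; the objective vectors below are hypothetical and all objectives are minimized:

```python
def pareto_optimal(points):
    # Keep exactly the nondominated points: p survives unless some other
    # point q is componentwise <= p and differs from p somewhere.
    nondominated = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            nondominated.append(p)
    return nondominated

pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
front = pareto_optimal(pts)
```

Here (3, 3) and (5, 5) are dominated by (2, 2), so the front consists of the remaining three points; the quadratic cost of this check is what motivates more structured approaches like the one in the paper.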
It is often helpful to compute the intrinsic volumes of a set of which only a pixel image is observed. A computationally efficient approach, which has been suggested by several authors and is used in practice, is to approximate the intrinsic volumes by a linear functional of the pixel configuration histogram. Here we examine whether there is an optimal way of choosing this linear functional, using a quite natural optimality criterion that has already been applied successfully to the estimation of the surface area. We will see that for intrinsic volumes other than volume or surface area this optimality criterion cannot be used, since estimators which ignore the data and return constant values are optimal with respect to this criterion. This shows that one has to be very careful when intrinsic volumes are approximated by a linear functional of the pixel configuration histogram.
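The object these estimators act on, the 2x2 pixel configuration histogram, is easy to compute. A minimal sketch with a hypothetical binary image (the weight vectors studied in the paper are not reproduced here; as a sanity check, the weights that simply read off the top-left bit recover the pixel count, i.e. the discrete area):

```python
import numpy as np

# A hypothetical 8x8 binary image containing a 4x4 foreground square.
img = np.zeros((8, 8), dtype=int)
img[2:6, 3:7] = 1

# Encode each 2x2 window as a 4-bit configuration code (0..15).
codes = (img[:-1, :-1]           # top-left bit
         + 2 * img[:-1, 1:]      # top-right bit
         + 4 * img[1:, :-1]      # bottom-left bit
         + 8 * img[1:, 1:])      # bottom-right bit
hist = np.bincount(codes.ravel(), minlength=16)

# Linear functional with weight 1 on configurations whose top-left bit is
# set (odd codes): counts foreground pixels, i.e. estimates the area.
area_estimate = int(hist[1::2].sum())
```

Any intrinsic-volume estimator of the class discussed above is just such a weighted sum over `hist` with a different 16-entry weight vector.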
In this article a new numerical solver for simulations of district heating networks is presented. The numerical method applies the local time stepping introduced in [11] to networks of linear advection equations. In combination with the high order approach of [4] an accurate and very efficient scheme is developed. In several numerical test cases the advantages for simulations of district heating networks are shown.
A map for an autonomous mobile robot (AMR) in an indoor environment for the purpose of continuous position and orientation estimation is discussed. Unlike many other approaches, this map is not based on geometrical primitives like lines and polygons. An algorithm is shown where the sensor data of a laser range finder can be used to establish this map without a geometrical interpretation of the data. This is done by converting single laser radar scans to statistical representations of the environment, so that a cross-correlation of an actual converted scan and this representative results in the actual position and orientation in a global coordinate system. The map itself is built of representative scans for the positions where the AMR has been, so that it is able to find its position and orientation by comparing the actual scan with a scan stored in the map.
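The cross-correlation step can be sketched in a toy form: recovering a rotation offset between two angular histograms by circular cross-correlation via the FFT. The histograms below are synthetic; the paper's map entries are statistical scan representations, not raw range histograms, so this only illustrates the matching principle:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 360
reference = rng.random(n)                 # stored representative scan, 1 bin/degree
shift_true = 42                           # unknown rotation between the scans
observed = np.roll(reference, shift_true) + 0.01 * rng.standard_normal(n)

# Circular cross-correlation via FFT; the argmax gives the rotation offset.
corr = np.fft.ifft(np.fft.fft(observed) * np.conj(np.fft.fft(reference))).real
shift_est = int(np.argmax(corr))
```

The same idea extended to translations is what lets the robot read off position and orientation from the correlation peak against the stored scan.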
In a Black-Scholes type financial market, the risky asset \(S_1(\cdot)\) is supposed to satisfy \(dS_1(t) = S_1(t)(b(t)\,dt + \sigma(t)\,dW(t))\), where \(W(\cdot)\) is a Brownian motion. The processes \(b(\cdot)\) and \(\sigma(\cdot)\) are progressively measurable with respect to the filtration generated by \(W(\cdot)\); they are known as the mean rate of return and the volatility, respectively. A portfolio is described by a progressively measurable process \(\pi_1(\cdot)\), where \(\pi_1(t)\) gives the amount invested in the risky asset at time \(t\). Typically, the optimal portfolio \(\pi_1(\cdot)\) (the one which maximizes the expected utility) depends at time \(t\), among other quantities, on \(b(t)\), meaning that the mean rate of return must be known in order to follow the optimal trading strategy. However, in a real-world market, no direct observation of this quantity is possible, since the available information comes from the behavior of the stock prices, which gives a noisy observation of \(b(\cdot)\). In the present work, we consider optimal portfolio selection which uses only the observation of stock prices.
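For illustration only, here is a minimal Euler-Maruyama sketch of the price dynamics \(dS_1(t) = S_1(t)(b\,dt + \sigma\,dW(t))\) with constant coefficients; the paper treats progressively measurable coefficient processes, so constants are an assumption made here for brevity.

```python
import math
import numpy as np

def simulate_price(S0, b, sigma, T, n_steps, rng):
    """Euler-Maruyama discretization of dS = S * (b dt + sigma dW)
    with constant mean rate of return b and volatility sigma."""
    dt = T / n_steps
    S = S0
    for _ in range(n_steps):
        dW = rng.normal(0.0, math.sqrt(dt))  # Brownian increment
        S += S * (b * dt + sigma * dW)
    return S
```

With the volatility set to zero the scheme reduces to compounding at rate b, which gives a quick consistency check against \(S_0 e^{bT}\).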
We present a mathematical knowledge base containing the factual knowledge of the first of three parts of a textbook on semi-groups and automata, namely "P. Deussen: Halbgruppen und Automaten". Like almost all mathematical textbooks, this textbook is not self-contained; some algebraic and set-theoretical concepts are used without being explained. These concepts are added to the knowledge base. Furthermore, there is knowledge about the natural numbers, which is formalized following the first paragraph of "E. Landau: Grundlagen der Analysis". The data base is written in a sorted higher-order logic, a variant of POST, the working language of the proof development environment Omega-Mkrp. We distinguish three different types of knowledge: axioms, definitions, and theorems. Up to now, there are only 2 axioms (natural numbers and cardinality), 149 definitions (like that for a semi-group), and 165 theorems. The consistency of such knowledge bases cannot be proved in general, but inconsistencies may be introduced only by the axioms. Definitions and theorems should not lead to any inconsistency, since definitions form conservative extensions and theorems are proved to be consequences.
It is of basic interest to assess the quality of the decisions of a statistician, based on the data resulting from a statistical experiment, in the context of a given model class P of probability distributions. The statistician picks a particular distribution P, suffering a loss by not picking the 'true' distribution P'. There are several relevant loss functions, one being based on the relative entropy or Kullback-Leibler information distance. In this paper we prove a general 'minimax risk equals maximin (Bayes) risk' theorem for the Kullback-Leibler loss under the hypothesis of a dominated and compact family of distributions over a Polish observation space with suitably integrable densities. We also find that there is always an optimal Bayes strategy (i.e. a suitable prior) achieving the minimax value. Further, we see that every such minimax optimal strategy leads to the same distribution P in the convex closure of the model class. Finally, we give some examples to illustrate the results and to indicate how the minimax result is reflected in the structure of least favorable priors. This paper is mainly based on parts of the author's doctoral thesis.
The original publication is available at www.springerlink.com and contains further results. We study a spherical wave propagating in the radius and latitude directions and oscillating in the latitude direction in the case of a fibre-reinforced linearly elastic material. A function system solving Euler's equation of motion in this case, depending on certain Bessel and associated Legendre functions, is derived.
This paper addresses a model of analogy-driven theorem proving that is more general and cognitively more adequate than previous approaches. The model works at the level of proof-plans. More precisely, we consider analogy as a control strategy in proof planning that employs a source proof-plan to guide the construction of a proof-plan for the target problem. Our approach includes a reformulation of the source proof-plan. This is in accordance with the well-known fact that constructing an analogy in mathematics often amounts to first finding the appropriate representation which brings out the similarity of two problems, i.e., finding the right concepts and the right level of abstraction. Several well-known theorems that could not be proven analogically by previous approaches were processed by our analogy-driven proof-plan construction.
The Multiple Objective Median Problem involves locating a new facility such that a vector of performance criteria is optimized over a given set of existing facilities. A variation of this problem is obtained if the existing facilities are situated on two sides of a linear barrier. Such barriers, like rivers, highways, borders, or mountain ranges, are frequently encountered in practice. In this paper, the theory of the Multiple Objective Median Problem with line barriers is developed. As this problem is nonconvex but specially structured, a reduction to a series of convex optimization problems is proposed. The general results lead to a polynomial algorithm for finding the set of efficient solutions. The algorithm is stated for bi-criteria problems with different measures of distance.
Starting from the two-scale model for pH-taxis of cancer cells introduced in [1], we consider here an extension accounting for tumor heterogeneity w.r.t. treatment sensitivity and a treatment approach combining chemo- and radiotherapy. The effect of peritumoral region alkalinization on this therapeutic combination is investigated with the aid of numerical simulations.
We propose a model for acid-mediated tumor invasion involving two different scales: the microscopic one, for the dynamics of intracellular protons and their exchange with their extracellular counterparts, and the macroscopic scale of interactions between tumor cell and normal cell populations, along with the evolution of extracellular protons. We also account for the tactic behavior of cancer cells, the latter being assumed to bias their motion according to a gradient of extracellular protons (following [2,31] we call this pH-taxis). A time-dependent (and also time-delayed) carrying capacity for the tumor cells in response to the effects of acidity is considered as well. The global well-posedness of the resulting multiscale model is proved with a regularization and fixed point argument. Numerical simulations are performed in order to illustrate the behavior of the model.
We consider the multiscale model for glioma growth introduced in a previous work and extend it to account for therapy effects. Three treatment strategies involving surgical resection, radio-, and chemotherapy are compared for their efficiency. The chemotherapy relies on inhibiting the binding of cell surface receptors to the surrounding tissue, which impairs both migration and proliferation.
Minmax regret optimization aims at finding robust solutions that perform best in the worst-case, compared to the respective optimum objective value in each scenario. Even for simple uncertainty sets like boxes, most polynomially solvable optimization problems have strongly NP-hard minmax regret counterparts. Thus, heuristics with performance guarantees can potentially be of great value, but only few such guarantees exist.
A very easy but effective approximation technique is to compute the midpoint solution of the original optimization problem, which optimizes the average regret and also the average nominal objective. It is a well-known result that the regret of the midpoint solution is at most 2 times the optimal regret. Apart from some academic instances showing that this bound is tight, most instances exhibit a much better approximation ratio.
We introduce a new lower bound for the optimal value of the minmax regret problem. Using this lower bound we state an algorithm that gives an instance dependent performance guarantee of the midpoint solution for combinatorial problems that is at most 2. The computational complexity of the algorithm depends on the minmax regret problem under consideration; we show that the sharpened guarantee can be computed in strongly polynomial time for several classes of combinatorial optimization problems.
To illustrate the quality of the proposed bound, we use it within a branch and bound framework for the robust shortest path problem. In an experimental study comparing this approach with a bound from the literature, we find a considerable improvement in computation times.
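The midpoint heuristic and its regret guarantee can be made concrete on a toy problem. The sketch below (our own illustration, not an instance from the paper) uses minmax regret selection of a single item with interval costs, where the worst-case regret of a choice admits a closed form.

```python
def max_regret(i, lo, hi):
    """Worst-case regret of picking item i under interval costs [lo, hi]:
    the adversary sets c_i = hi[i] and all other costs to their lower bounds."""
    others = [lo[j] for j in range(len(lo)) if j != i]
    return hi[i] - min([hi[i]] + others)

def midpoint_choice(lo, hi):
    """Midpoint solution: optimize the scenario of interval midpoints."""
    mids = [(l + h) / 2.0 for l, h in zip(lo, hi)]
    return mids.index(min(mids))
```

The well-known guarantee says the max regret of the midpoint choice is at most twice the optimal max regret, which can be checked by enumeration on small instances.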
The relation between the Lattice Boltzmann Method, which has recently become popular, and the Kinetic Schemes, which are routinely used in Computational Fluid Dynamics, is explored. A new discrete velocity model for the numerical solution of the Navier-Stokes equations for incompressible fluid flow is presented by combining both approaches. The new scheme can be interpreted as a pseudo-compressibility method and, for a particular choice of parameters, this interpretation carries over to the Lattice Boltzmann Method.
A new look at the RST model
(1996)
The RST model is augmented by the addition of a scalar field and a boundary term so that it is well-posed and local. Expressing the RST action in terms of the ADM formulation, the constraint structure can be analysed completely. It is shown that, from the viewpoint of local field theories, there exists a hidden dynamical field in the RST model. Thanks to the presence of this hidden dynamical field, we can reconstruct the closed algebra of the constraints which guarantees the general invariance of the RST action. The resulting stress tensors \(T_{\Sigma\Sigma}\) are recovered to be true tensor quantities. In particular, the part of the stress tensors for the hidden dynamical field gives the precise expression for \(t_\Sigma\). At the quantum level, the cancellation condition for the total central charge is reexamined. Finally, with the help of the hidden dynamical field, the fact that the semi-classical static solution of the RST model has two independent parameters (P, M), whereas for the classical CGHS model there is only one, can be explained.
Compared to standard numerical methods for hyperbolic systems of conservation laws, Kinetic Schemes model propagation of information by particles instead of waves. In this article, the wave and the particle concept are shown to be closely related. Moreover, a general approach to the construction of Kinetic Schemes for hyperbolic conservation laws is given which summarizes several approaches discussed by other authors. The approach also demonstrates why Kinetic Schemes are particularly well suited for scalar conservation laws and why extensions to general systems are less natural.
Finding a delivery plan for cancer radiation treatment using multileaf collimators operating in ''step-and-shoot mode'' can be formulated mathematically as a problem of decomposing an integer matrix into a weighted sum of binary matrices having the consecutive-ones property - and sometimes other properties related to the collimator technology. The efficiency of the delivery plan is measured by both the sum of weights in the decomposition, known as the total beam-on time, and the number of different binary matrices appearing in it, referred to as the cardinality, the latter being closely related to the set-up time of the treatment. In practice, the total beam-on time is usually restricted to its minimum possible value (which is easy to find), and a decomposition that minimises cardinality subject to this restriction is sought.
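The "easy to find" minimum beam-on time has, in the unconstrained consecutive-ones case, a classical closed form: the maximum over rows of the summed positive left-to-right increments of the row. The sketch below is our own illustration of that formula; the collimator-specific constraints mentioned in the abstract can change the achievable minimum.

```python
def min_beam_on_time(A):
    """Minimum total beam-on time for decomposing a nonnegative integer
    matrix into weighted binary matrices with the consecutive-ones
    property (unconstrained case): the maximum over rows of the sum of
    positive increments when scanning each row left to right."""
    best = 0
    for row in A:
        prev = 0   # implicit leading zero before the first column
        total = 0
        for a in row:
            if a > prev:
                total += a - prev  # a new "layer" of intensity opens here
            prev = a
        best = max(best, total)
    return best
```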
This paper presents a new similarity measure and nonlocal filters for images corrupted by multiplicative noise. The considered filters are generalizations of the nonlocal means filter of Buades et al., which is known to be well suited for removing additive Gaussian noise. To adapt to different noise models, the patch comparison involved in this filter must first be performed by a suitable noise dependent similarity measure. For this purpose, we start by studying a probabilistic measure recently proposed for general noise models by Deledalle et al. We analyze this measure in the context of conditional density functions and examine its properties for images corrupted by additive and multiplicative noise. Since it turns out to have unfavorable properties for multiplicative noise, we deduce a new similarity measure consisting of a probability density function specially chosen for this type of noise. The properties of our new measure are studied theoretically as well as by numerical experiments. To obtain the final nonlocal filters we apply a weighted maximum likelihood estimation framework, which also incorporates the noise statistics. Moreover, we define the weights occurring in these filters using our new similarity measure and propose different adaptations to further improve the results. Finally, restoration results for images corrupted by multiplicative Gamma and Rayleigh noise are presented to demonstrate the very good performance of our nonlocal filters.
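As background, the nonlocal means filter of Buades et al. referenced above can be sketched in a few lines for a 1-D signal. This is the additive-Gaussian baseline, not the multiplicative-noise filters developed in the paper; the patch size and the filtering parameter h are our own illustrative choices.

```python
import numpy as np

def nlmeans_1d(signal, patch=3, h=0.5):
    """Nonlocal means on a 1-D signal: every sample is replaced by a
    weighted average of all samples, with weights decaying in the mean
    squared distance between the patches surrounding the samples."""
    n = len(signal)
    pad = np.pad(signal, patch, mode='reflect')
    patches = np.array([pad[i:i + 2 * patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = np.mean((patches - patches[i]) ** 2, axis=1)
        w = np.exp(-d2 / h ** 2)          # patch-similarity weights
        out[i] = np.dot(w, signal) / w.sum()
    return out
```

On a piecewise-constant signal this averages mostly over samples from the same segment, which is exactly the self-similarity the filter exploits.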
A new solution approach for solving the 2-facility location problem in the plane with block norms
(2015)
Motivated by the time-dependent location problem over T time-periods introduced in Maier and Hamacher (2015), we consider the special case of two time-steps, which was shown to be equivalent to the static 2-facility location problem in the plane. Geometric optimality conditions are stated for the median objective. When block norms are used, these conditions are employed to derive a polygon grid inducing a subdivision of the plane based on normal cones, yielding a new approach to solve the 2-facility location problem in polynomial time. Combinatorial algorithms for the 2-facility location problem based on geometric properties are deduced and their complexities are analyzed. These methods differ from others in that they operate entirely on geometric objects to derive the optimal solution set.
A Nonlinear Ray Theory
(1994)
A proof of the famous Huygens' method of wavefront construction is reviewed, and it is shown that the method is embedded in the geometrical optics theory for the calculation of the intensity of the wave based on a high frequency approximation. It is then shown that Huygens' method can be extended in a natural way to the construction of a weakly nonlinear wavefront. This is an elegant nonlinear ray theory based on an approximation published by the author in 1975, which was inspired by the work of Gubkin. In this theory, the wave amplitude correction is incorporated in the eikonal equation itself, and this leads to a system of ray equations coupled to the transport equation. The theory shows that the nonlinear rays stretch due to the wave amplitude, as in the work of Choquet-Bruhat (1969), followed by Hunter, Majda, Keller and Rosales, but in addition the wavefront rotates due to a non-uniform distribution of the amplitude on the wavefront. Thus the amplitude of the wave modifies the rays and the wavefront geometry, which in turn affects the growth and decay of the amplitude. Our theory also shows that a compressive nonlinear wavefront may develop a kink, whereas an expanding one always remains smooth. In the end, an exact solution showing the resolution of a linear caustic due to nonlinearity is presented. The theory incorporates all features of Whitham's geometrical shock dynamics.
A nonlocal stochastic model for intra- and extracellular proton dynamics in a tumor is proposed. The intracellular dynamics is governed by an SDE coupled to a reaction-diffusion equation for the extracellular proton concentration on the macroscale. In a more general context, the existence and uniqueness of solutions for local and nonlocal SDE-PDE systems are established, allowing, in particular, to analyze the proton dynamics model both in its local version and in the case with nonlocal path dependence. Numerical simulations are performed to illustrate the behavior of solutions, providing some insights into the effects of randomness on tumor acidity.
The well-known and powerful proof principle of well-founded induction says that for verifying \(\forall x : P (x)\) for some property \(P\) it suffices to show \(\forall x : [[\forall y < x : P (y)] \Rightarrow P (x)]\), provided \(<\) is a well-founded partial ordering on the domain of interest. Here we investigate a more general formulation of this proof principle which allows for a kind of parameterized partial orderings \(<_x\) which naturally arises in some cases. More precisely, we develop conditions under which the parameterized proof principle \(\forall x : [[\forall y <_x x : P (y)] \Rightarrow P (x)]\) is sound in the sense that \(\forall x : [[\forall y <_x x : P (y)] \Rightarrow P (x)] \Rightarrow \forall x : P (x)\) holds, and give counterexamples demonstrating that these conditions are indeed essential.
The problem of finding an optimal location X* minimizing the maximum Euclidean distance to existing facilities is well solved, e.g. by the Elzinga-Hearn algorithm. In practical situations, however, X* will often not be feasible. We therefore suggest in this note a polynomial algorithm which finds an optimal location X^F in a feasible subset F of the plane R^2.
In this paper, we study the inverse maximum flow problem under \(\ell_\infty\)-norm and show that this problem can be solved by finding a maximum capacity path on a modified graph. Moreover, we consider an extension of the problem where we minimize the number of perturbations among all the optimal solutions of Chebyshev norm. This bicriteria version of the inverse maximum flow problem can also be solved in strongly polynomial time by finding a minimum \(s - t\) cut on the modified graph with a new capacity function.
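The key subroutine named in the abstract, a maximum capacity path, can be computed by a Dijkstra-style search that maximizes the bottleneck capacity instead of minimizing length. The adjacency-list encoding below is our own illustrative sketch, not the modified graph constructed in the paper.

```python
import heapq

def max_capacity(graph, s, t):
    """Maximum bottleneck capacity over all s-t paths.
    graph: dict mapping node -> list of (neighbor, capacity) pairs."""
    best = {s: float('inf')}       # best known bottleneck to each node
    heap = [(-best[s], s)]         # max-heap via negated capacities
    while heap:
        cap, u = heapq.heappop(heap)
        cap = -cap
        if u == t:
            return cap
        if cap < best.get(u, 0.0):
            continue               # stale heap entry
        for v, c in graph.get(u, []):
            nc = min(cap, c)       # bottleneck along the extended path
            if nc > best.get(v, 0.0):
                best[v] = nc
                heapq.heappush(heap, (-nc, v))
    return 0.0
```

The design mirrors Dijkstra's algorithm: the relaxation step takes a min with the edge capacity and keeps the maximum, so the first time t is popped its bottleneck value is optimal.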
Groups can be studied using methods from different fields such as combinatorial group theory or string rewriting. Recently, techniques from Gröbner basis theory for free monoid rings (non-commutative polynomial rings) and free group rings have been added to the set of methods, due to the fact that monoid and group presentations (in terms of string rewriting systems) can be linked to special polynomials called binomials. In the same spirit, the aim of this paper is to discuss the relation between Nielsen reduced sets of generators and the Todd-Coxeter coset enumeration procedure on the one side and Gröbner basis theory for free group rings on the other. While it is well known that there is a strong relationship between Buchberger's algorithm and the Knuth-Bendix completion procedure, and there are interpretations of the Todd-Coxeter coset enumeration procedure using the Knuth-Bendix procedure for special cases, our aim is to show how a verbatim interpretation of the Todd-Coxeter procedure can be obtained by linking recent Gröbner techniques like prefix Gröbner bases and the FGLM algorithm as a tool to study the duality of ideals. As a side product, our procedure computes Nielsen reduced generating sets for subgroups of finitely generated free groups.
We discuss the analytic properties of AdS scalar exchange graphs in the crossed channel. We show that the possible non-analytic terms drop out by virtue of non-trivial properties of generalized hypergeometric functions. The absence of non-analytic terms is a necessary condition for the existence of an operator product expansion for CFT amplitudes obtained from AdS/CFT correspondence.
Linear half-space problems can be used to solve domain decomposition problems between Boltzmann and aerodynamic equations. A new fast numerical method computing the asymptotic states and outgoing distributions for a linearized BGK half-space problem is presented. Relations with the so-called variational methods are discussed. In particular, we stress the connection between these methods and Chapman-Enskog type expansions.
An asymptotic-induced scheme for kinetic semiconductor equations with the diffusion scaling is developed. The scheme is based on the asymptotic analysis of the kinetic semiconductor equation. It works uniformly for all ranges of mean free paths. The velocity discretization is done using quadrature points equivalent to a moment expansion method. Numerical results for different physical situations are presented.
Estimation of \(P(R \le S)\) is considered for the simple stress-strength model of failure. Using the Pareto and Power distributions, together with their combined form, a useful parametric solution is obtained and illustrated numerically. It is shown that these models are also applicable when only the tails of the distributions of R and S are considered. An application to a failure study concerning fractures is also included.
The problem of providing connectivity for a collection of applications is largely one of data integration: the communicating parties must agree on the semantics and syntax of the data being exchanged. In earlier papers, it was proposed that dictionaries of definitions for operators, functions, and symbolic constants can effectively address the problem of semantic data integration. In this paper we extend that earlier work to discuss the important issues in data integration at the syntactic level and propose a set of solutions that are both general, supporting a wide range of data objects with typing information, and efficient, supporting fast transmission and parsing.
We consider a scale discrete wavelet approach on the sphere based on spherical radial basis functions. If the generators of the wavelets have a compact support, the scale and detail spaces are finite-dimensional, so that the detail information of a function is determined by only finitely many wavelet coefficients for each scale. We describe a pyramid scheme for the recursive determination of the wavelet coefficients from level to level, starting from an initial approximation of a given function. Basic tools are integration formulas which are exact for functions up to a given polynomial degree and spherical convolutions.
We consider a multiple objective linear program (MOLP) \(\max\{Cx \mid Ax = b,\ x \in \mathbb{N}_0^n\}\), where \(C = (c_{ij})\) is the \(p \times n\) matrix of \(p\) different objective functions \(z_i(x) = c_{i1}x_1 + \dots + c_{in}x_n\), \(i = 1,\dots,p\), and \(A\) is the \(m \times n\) matrix of a system of \(m\) linear equations \(a_{k1}x_1 + \dots + a_{kn}x_n = b_k\), \(k = 1,\dots,m\), which form the set of constraints of the problem. All coefficients are assumed to be natural numbers or zero. Let \(M\) denote the set of admissible solutions. An efficient solution \(\hat{x}\) is an admissible solution such that there exists no other admissible solution \(x'\) with \(C\hat{x} \le Cx'\) and \(C\hat{x} \ne Cx'\). The efficient solutions play the role of optimal solutions for the MOLP, and it is our aim to determine the set of all efficient solutions.
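For small finite instances, the efficient set over a list of objective vectors can be determined by direct dominance checks. This brute-force sketch is our own illustration of the efficiency notion; the paper of course targets the structured LP setting rather than enumeration.

```python
def dominates(u, v):
    """u dominates v (maximization): u >= v componentwise and u != v."""
    return all(a >= b for a, b in zip(u, v)) and u != v

def efficient_points(points):
    """Objective vectors not dominated by any other listed vector."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```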
We investigate one of the classical problems of the theory of term rewriting, namely termination. We present an ordering for comparing higher-order terms that can be utilized for testing termination and decreasingness of higher-order conditional term rewriting systems. The ordering relies on a first-order interpretation of higher-order terms and a suitable extension of the RPO.
In this paper we consider the problem of locating one new facility in the plane with respect to a given set of existing facilities, where a set of polygonal barriers restricts traveling. This non-convex optimization problem can be reduced to a finite set of convex subproblems if the objective function is a convex function of the travel distances between the new and the existing facilities (as is the case, e.g., for the Median and Center objective functions). An exact algorithm and a heuristic solution procedure based on this reduction result are developed.
Due to continuously increasing demands in the area of advanced robot control, it has become necessary to speed up the computation. One way to reduce the computation time is to distribute the computation onto several processing units. In this survey we present different approaches to the parallel computation of robot kinematics and the Jacobian, discussing both the forward and the reverse problem. We introduce a classification scheme and classify the references according to this scheme.
The term enterprise modelling, synonymous with enterprise engineering, refers to methodologies developed for modelling activities, states, time, and cost within an enterprise architecture. They serve as a vehicle for evaluating and modelling activities, resources, etc. CIM-OSA (Computer Integrated Manufacturing Open Systems Architecture) is a methodology for modelling computer integrated environments, and its major objective is the appropriate integration of enterprise operations by means of efficient information exchange within the enterprise. PERA is another methodology for developing models of computer integrated manufacturing environments. The department of industrial engineering in Toronto proposed the development of ontologies as a vehicle for enterprise integration. The paper reviews the work carried out by various researchers and computing departments in the area of enterprise modelling and points out other modelling problems related to enterprise integration.
A compact subset E of the complex plane is called removable if all bounded analytic functions on its complement are constant or, equivalently, if its analytic capacity vanishes. The problem of finding a geometric characterization of the removable sets is more than a hundred years old and still not completely solved.
The asymptotic behaviour of a singular-perturbed two-phase Stefan problem due to slow diffusion in one of the two phases is investigated. In the limit the model equations reduce to a one-phase Stefan problem. A boundary layer at the moving interface makes it necessary to use a corrected interface condition obtained from matched asymptotic expansions. The approach is validated by numerical experiments using a front-tracking method.
This report presents the properties of a specification of the domain of process planning for rotary symmetrical workpieces. The specification results from a model for problem solving in this domain that involves different reasoners, one of which is an AI planner that achieves goals corresponding to machining workpieces by considering certain operational restrictions of the domain. When planning with SNLP (McAllester and Rosenblitt, 1991), we will show that the resulting plans have the property of minimizing the use of certain key operations. Further, we will show that, for elastic protected plans (Kambhampati et al., 1996) such as the ones produced by SNLP, the goals corresponding to machining parts of a workpiece are OE-constrained trivial serializable, a special form of trivial serializability (Barrett and Weld, 1994). However, we will show that planning with SNLP in this domain can be very difficult: elastic protected plans for machining parts of a workpiece are nonmergeable. Finally, we will show that, for suffix, prefix, or suffix-and-prefix plans such as the ones produced by state-space planners, it is not possible to have both properties, being OE-constrained trivial serializable and minimizing the use of the key operations, at the same time.
Linearized flows past slender bodies can be asymptotically described by a linear Fredholm integral equation. A collocation method to solve this equation is presented. In cases where the spectral representation of the integral operator is explicitly known, the collocation method recovers the spectrum of the continuous operator. The approximation error is estimated for two discretizations of the integral operator and the convergence is proved. The collocation scheme is validated in several test cases and extended to situations where the spectrum is not explicit.
We present a particle method for the numerical simulation of boundary value problems for the steady-state Boltzmann equation. Referring to some recent results concerning steady-state schemes, the current approach may be used for multi-dimensional problems, where the collision scattering kernel is not restricted to Maxwellian molecules. The efficiency of the new approach is demonstrated by numerical results obtained from simulations of the (two-dimensional) Bénard instability in a rarefied gas flow.
We consider investment problems where an investor can invest in a savings account, stocks, and bonds and tries to maximize her utility from terminal wealth. In contrast to the classical Merton problem, we assume a stochastic interest rate. To solve the corresponding control problems it is necessary to prove a verification theorem without the usual Lipschitz assumptions.
In this paper we propose a phenomenological model for the formation of an interstitial gap between the tumor and the stroma. The gap is mainly filled with acid produced by the progressing edge of the tumor front. Our setting extends existing models for acid-induced tumor invasion to incorporate several features of local invasion like the formation of gaps, spikes, buds, islands, and cavities. These behaviors are obtained mainly due to the random dynamics at the intracellular level and the go-or-grow-or-recede dynamics on the population scale, together with the nonlinear coupling between the microscopic (intracellular) and macroscopic (population) levels. The well-posedness of the model is proved using the semigroup technique, and 1D and 2D numerical simulations are performed to illustrate model predictions and draw conclusions based on the observed behavior.
Cancer research is not only a fast growing field involving many branches of science, but also an intricate and diversified field rife with anomalies. One such anomaly is the consistent reliance of cancer cells on glucose metabolism for energy production even in a normoxic environment. Glycolysis is an inefficient pathway for energy production and is normally used during hypoxic conditions. Since cancer cells have a high demand for energy (e.g. for proliferation), it is somewhat paradoxical for them to rely on such a mechanism. An emerging conjecture aiming to explain this behavior is that cancer cells preserve this aerobic glycolytic phenotype for its use in invasion and metastasis. We follow this hypothesis and propose a new model for cancer invasion, depending on the dynamics of extra- and intracellular protons, by building upon existing ones. We incorporate random perturbations in the intracellular proton dynamics to account for uncertainties affecting the cellular machinery. Finally, we address the well-posedness of our setting and use numerical simulations to illustrate the model predictions.
The efficient numerical treatment of the Boltzmann equation is a very important task in many fields of application. Most of the practically relevant numerical schemes are based on the simulation of large particle systems that approximate the evolution of the distribution function described by the Boltzmann equation. In particular, stochastic particle systems play an important role in the construction of various numerical algorithms.