Document Type
- Report (399)
Language
- English (399)
Keywords
- numerical upscaling (7)
- hub location (5)
- Elastoplastizität (4)
- Integer programming (4)
- modelling (4)
- poroelasticity (4)
- Darcy’s law (3)
- Dienstgüte (3)
- Elastic BVP (3)
- Elastoplasticity (3)
- Formalisierung (3)
- Heston model (3)
- Hysterese (3)
- Lagrangian mechanics (3)
- Mathematikunterricht (3)
- Modellierung (3)
- effective heat conductivity (3)
- facility location (3)
- non-Newtonian flow in porous media (3)
- polynomial algorithms (3)
- praxisorientiert (3)
- variational inequalities (3)
- virtual material design (3)
- American options (2)
- Bartlett spectrum (2)
- Chisel (2)
- Elastisches RWP (2)
- Elastoplastisches RWP (2)
- Field-programmable gate array (FPGA) (2)
- HJB equation (2)
- Heuristics (2)
- IMRT planning (2)
- Inverses Problem (2)
- Jiang's model (2)
- Jiang-Modell (2)
- Lineare Algebra (2)
- Logistics (2)
- MAC type grid (2)
- Noether’s theorem (2)
- Nonlinear multigrid (2)
- Portfolio optimisation (2)
- Ratenunabhängigkeit (2)
- Regularisierung (2)
- Rotational spinning process (2)
- Special Cosserat rods (2)
- Supply Chain Management (2)
- Variationsungleichungen (2)
- Wavelet (2)
- adaptive refinement (2)
- algorithmic game theory (2)
- asymptotic homogenization (2)
- branch and cut (2)
- discontinuous coefficients (2)
- discrete mechanics (2)
- domain decomposition (2)
- elastoplastic BVP (2)
- energy minimization (2)
- facets (2)
- fast Fourier transform (2)
- fiber orientation (2)
- fiber-fluid interaction (2)
- filling processes (2)
- finite volume method (2)
- free-surface phenomena (2)
- heuristic (2)
- hysteresis (2)
- image analysis (2)
- injection molding (2)
- integer programming (2)
- interface boundary conditions (2)
- linear algebra (2)
- linear elasticity (2)
- mathematical education (2)
- model reduction (2)
- multibody dynamics (2)
- multigrid (2)
- multilayered material (2)
- non-overlapping constraints (2)
- online optimization (2)
- option pricing (2)
- portfolio choice (2)
- power spectrum (2)
- praxis orientated (2)
- rectangular packing (2)
- simulation (2)
- single phase flow (2)
- software development (2)
- stochastic control (2)
- supply chain management (2)
- valid inequalities (2)
- work effort (2)
- (dynamic) network flows (1)
- 3D (1)
- 3d imaging (1)
- Navier-Stokes equations (1)
- multiple criteria optimization (1)
- multiple objective programming (1)
- AG-RESY (1)
- Abstract linear systems theory (1)
- Ad-hoc-Netz (1)
- AmICA (1)
- Assignment (1)
- Asymptotic Expansion (1)
- Asymptotic homogenization (1)
- Automatic Differentiation (1)
- Automatische Differentiation (1)
- Bayesian Model Averaging (1)
- Bell Number (1)
- Berechnungskomplexität (1)
- Betriebsfestigkeit (1)
- Bingham viscoplastic model (1)
- Biot poroelasticity system (1)
- Biot-Savart Operator (1)
- Biot-Savart operator (1)
- Black–Scholes approach (1)
- Blocked Neural Networks (1)
- Boolean polynomials (1)
- Boundary Value Problem (1)
- Brinkman (1)
- Brinkman equations (1)
- CAD (1)
- CFD (1)
- CHAMP <Satellitenmission> (1)
- CIR model (1)
- Capacitated Hub Location (1)
- Capacity decisions (1)
- Code Inspection (1)
- Competitive Analysis (1)
- Compiler (1)
- Complexity theory (1)
- Constant Maturity Credit Default Swap (1)
- Constrained mechanical systems (1)
- Constraint Programming (1)
- Continuum mechanics (1)
- Convex sets (1)
- Coq (1)
- Core (1)
- Cosserat rod (1)
- Credit Default Swaption (1)
- Customer distribution (1)
- Decision support systems (1)
- Delaunay Triangulation (1)
- Delaunay mesh generation (1)
- Design (1)
- Differentialinklusionen (1)
- Discrete linear systems (1)
- Distortion measure (1)
- Domain Decomposition (1)
- Drahtloses Sensorsystem (1)
- Dynamic Network Flows (1)
- Dynamical Coupling (1)
- Education (1)
- Elastoplastic BVP (1)
- Electrophysiology (1)
- Equicofactor matrix polynomials (1)
- Euler number (1)
- Eulerian-Lagrangian formulation (1)
- Existence of Solutions (1)
- Extraction (1)
- FETI (1)
- FPM (1)
- FPTAS (1)
- Facility location (1)
- Fault Prediction (1)
- Filippov theory (1)
- Filippov-Theorie (1)
- Filtering (1)
- Financial Mathematics (1)
- Finite rotations (1)
- Flexible multibody dynamics (1)
- Flooding (1)
- FlowLoc (1)
- Fluid Structure Interaction (1)
- Fokker-Planck Equation (1)
- Fokker-Planck equations (1)
- Folgar-Tucker equation (1)
- Folgar-Tucker model (1)
- Formal Semantics (1)
- Front Propagation (1)
- Fräsen (1)
- Funknetz (1)
- G2++ model (1)
- Generalized LBE (1)
- Geographical Information Systems (1)
- Geomagnetic Field Modelling (1)
- Geomagnetismus (1)
- Geomathematik (1)
- Geometric (1)
- Geostrophic flow (1)
- Gradual Covering (1)
- Gravimetrie (1)
- Greedy Heuristic (1)
- Green’s function (1)
- Grid Generation (1)
- Gröbner basis (1)
- HJM (1)
- Hals-Nasen-Ohren-Chirurgie (1)
- Hals-Nasen-Ohren-Heilkunde (1)
- Hankel matrix (1)
- Hardware Description Language (HDL) (2)
- Hedge funds (1)
- Helmholtz-Decomposition (1)
- Helmholtz-Zerlegung (1)
- Heston Model (1)
- Heuristic (1)
- Home Health Care (1)
- Homotopie (1)
- Homotopiehochhebungen (1)
- Homotopy (1)
- Homotopy lifting (1)
- Hub Location (1)
- Hub-and-Spoke-System (1)
- Hull White model (1)
- Human resource modeling (1)
- Hysteresis (1)
- Hörgerät (1)
- IMRT planning on adaptive volume structures – a significant advance of computational complexity (1)
- Implantation (1)
- Incompressible Navier-Stokes equations (1)
- Infiltration (1)
- Injectivity of mappings (1)
- Injektivität von Abbildungen (1)
- Inkorrekt gestelltes Problem (1)
- Investigation (1)
- Isabelle/HOL (1)
- Iterative learning control (1)
- Jiang's constitutive model (1)
- Jiangsches konstitutives Gesetz (1)
- Jiang’s Model of Elastoplasticity (1)
- Kaktusgraph (1)
- Kalman Filter (1)
- Kirchhoff and Cosserat rods (1)
- Kirchhoff's geometrically exact theory (1)
- Knowledge Extraction (1)
- Kommunikationsprotokoll (1)
- Komplexitätsklasse NP (1)
- Kontinuumsmechanik (1)
- Konvexe Mengen (1)
- LIBOR market model (1)
- Lagrange formalism (1)
- Large deformations (1)
- Lattice Boltzmann (1)
- Lattice Boltzmann method (1)
- Lattice Boltzmann methods (1)
- Lattice Boltzmann models (1)
- Lattice-Boltzmann method (1)
- Least squares approximation (1)
- Level Set method (1)
- Li Ion Batteries (1)
- Linear Programming (1)
- Linear kinematic hardening (1)
- Linear kinematische Verfestigung (1)
- Lineare Optimierung (1)
- Liquid Polymer Moulding (1)
- Load Balancing (1)
- Locational Planning (1)
- MBS simulation (1)
- MILP formulations (1)
- MIP formulations (1)
- Mapping (1)
- Mastoid (1)
- Mastoidektomie (1)
- Mathematical modeling (1)
- Matrix perturbation theory (1)
- Melt spinning (1)
- Mesh-less methods (1)
- Meshfree Method (1)
- Meshfree method (1)
- Metaheuristics (1)
- Methode der Fundamentallösungen (1)
- Mie-Darstellung (1)
- Mie-Representation (1)
- Model Checking (1)
- Model reduction (1)
- Modeling (1)
- Modelling (1)
- Monte Carlo methods (1)
- Monte-Carlo methods (1)
- Multi-dimensional systems (1)
- Multibody simulation (1)
- Multicriteria decision making (1)
- Multipoint flux approximation (1)
- Multiscale problem (1)
- Multiscale problems (1)
- Multiscale structures (1)
- NP-hard (1)
- Nash equilibria (1)
- Navier-Stokes (1)
- Navier-Stokes equation (1)
- Navier-Stokes-Brinkmann system of equations (1)
- Network Location (1)
- Network design (1)
- Networks (1)
- Neumann problem (1)
- Nichtlineare/große Verformungen (1)
- Node Platform Design (1)
- Non-Newtonian flow (1)
- Non-homogeneous Poisson Process (1)
- Nonequilibrium Thermodynamics (1)
- Nonlinear Regression (1)
- Nonlinear energy (1)
- Nonlinear/large deformations (1)
- Numerical modeling (1)
- OCL 2.0 (1)
- Ohrenchirurgie (1)
- One-dimensional systems (1)
- Online Algorithms (1)
- Optimal parameter estimation (1)
- Optimization (1)
- Option pricing (1)
- Ordered Median Function (1)
- Ornstein-Uhlenbeck Process (1)
- Parallel Programming (1)
- Parameter Identification (1)
- Parameter identification (1)
- Parameteridentifikation (1)
- Parametrisation of rotations (1)
- Parsimonious Heston Model (1)
- Pareto surface (1)
- Particle scheme (1)
- Peer-to-Peer-Netz (1)
- Performance of iterative solvers (1)
- Pleated Filter (1)
- Poisson line process (1)
- Poroelastizität (1)
- Preconditioners (1)
- Profiles (1)
- Projection method (1)
- Pseudopolynomial-Time Algorithm (1)
- Quanto option (1)
- RONAF (1)
- Random set (1)
- Rate-independency (1)
- Realization theory (1)
- Recycling (1)
- Regelung (1)
- Reliability Prediction (1)
- Reservierungsprotokoll (1)
- Restricted Shortest Path (1)
- Ripley’s K function (1)
- Roboter (1)
- Rosenbrock methods (1)
- Rotational Fiber Spinning (1)
- Rounding (1)
- Route Planning (1)
- Routing (1)
- SAW filters (1)
- SDL (1)
- SDL-2000 (1)
- SGG (1)
- SIMPLE (1)
- SST (1)
- Satellitengradiometrie (1)
- Schädelchirurgie (1)
- Sensitivitäten (1)
- Shapley Value (1)
- Shapley value (1)
- Shapleywert (1)
- Sheet of Paper (1)
- Simplex (1)
- Simulation (1)
- Slender body theory (1)
- Solid-Gas Separation (1)
- Solid-Liquid Separation (1)
- Spezifikation (1)
- Spieltheorie (1)
- Sprachprofile (1)
- Standortplanung (1)
- Stationary heat equation (1)
- Stein equation (1)
- Stochastic Differential Equations (1)
- Stokes-Brinkman equations (1)
- Stop- and Play-Operators (1)
- Stop- und Play-Operator (1)
- Stop-und Play-Operator (1)
- Stress-strain correction (1)
- Stücklisten (1)
- Supply Chain Design (1)
- Switching regression model (1)
- System Abstractions (1)
- Theorie schwacher Lösungen (1)
- Thermal Transport (1)
- Train Rearrangement (1)
- Translation Validation (1)
- UML 2 (1)
- UML Profile (1)
- Unstructured Grid (1)
- VCG payment scheme (1)
- VHDL (1)
- Variational inequalities (1)
- Variationsungleichungen (1)
- Vasicek model (1)
- Vectorial Wavelets (1)
- Vektor-Wavelets (1)
- Vektorkugelfunktionen (1)
- Vektorwavelets (1)
- Viscous Fibers (1)
- Winner Determination Problem (WDP) (1)
- Wireless Communication (1)
- Wireless Sensor Network (1)
- Wireless sensor network (1)
- a posteriori error estimates (1)
- a-priori domain decomposition (1)
- acoustic absorption (1)
- adaptive local refinement (1)
- adaptive triangulation (1)
- additive outlier (1)
- aerodynamic drag (1)
- air drag (1)
- algebraic constraints (1)
- algebraic cryptoanalysis (1)
- algorithm by Bortfeld and Boyer (1)
- aliasing (1)
- anisotropic viscosity (1)
- anisotropy (1)
- applied mathematics (1)
- partial differential algebraic equations (1)
- asymptotic (1)
- asymptotic Cosserat models (1)
- asymptotic limits (1)
- automated analog circuit design (1)
- automatic differentiation (1)
- autoregressive process (1)
- basic systems theoretic properties (1)
- batch presorting problem (1)
- battery modeling (1)
- bedingte Aktionen (1)
- behavioral modeling (1)
- fiber dynamics (1)
- big triangle small triangle method (1)
- bills of materials (1)
- bin coloring (1)
- binarization (1)
- boundary conditions (1)
- bounce-back rule (1)
- boundary value problems (1)
- bounds (1)
- cactus graph (1)
- calibration (1)
- calls (1)
- cell volume (1)
- change analysis (1)
- circuit sizing (1)
- cliquet options (1)
- clustering (1)
- clustering and disaggregation techniques (1)
- combinatorial procurement (1)
- competitive analysis (1)
- competitive analysis (1)
- compiler (1)
- complexity (1)
- composite materials (1)
- computational fluid dynamics (1)
- computer algebra (1)
- concentrated electrolyte (1)
- constrained mechanical systems (1)
- constraint propagation (1)
- consumption (1)
- contact problems (1)
- continuous optimization (1)
- controlling (1)
- convergence of approximate solution (1)
- convex (1)
- convex optimization (1)
- cooperative game (1)
- core (1)
- correlation (1)
- correlation (1)
- coupled flow in plain and porous media (1)
- credit risk (1)
- credit spread (1)
- cuboidal lattice (1)
- curved viscous fibers (1)
- curved viscous fibers with surface tension (1)
- decision support systems (1)
- decomposition (1)
- defect detection (1)
- deformable bodies (1)
- deformable porous media (1)
- delay management (1)
- design centering (1)
- design optimization (1)
- deterministic technical systems (1)
- dial-a-ride (1)
- dif (1)
- differential algebraic equations (1)
- differential inclusions (1)
- differentialalgebraic equations (1)
- discrete facility location (1)
- discrete location (1)
- discrete optimization (1)
- discrete time setting (1)
- discretisation of control problems (1)
- discriminant analysis (1)
- diffusion limits (1)
- dividend discount model (1)
- dividends (1)
- domains (1)
- drag models (1)
- drift due to noise (1)
- durability (1)
- dynamic capillary pressure (1)
- dynamic mode (1)
- dynamic network flows (1)
- earliest arrival flows (1)
- effective elastic moduli (1)
- effective thermal conductivity (1)
- efficient set (1)
- eigenvalue problems (1)
- elastoplasticity (1)
- electrochemical diffusive processes (1)
- electrochemical simulation (1)
- electronic circuit design (1)
- elliptic equation (1)
- encapsulation (1)
- energy conservation (1)
- error estimates (1)
- estimation of compression (1)
- evolutionary algorithms (1)
- executive compensation (1)
- executive stockholder (1)
- expert system (1)
- explicit jump (1)
- explicit jump immersed interface method (1)
- exponential utility (1)
- extreme equilibria (1)
- extreme solutions (1)
- fatigue (1)
- fiber dynamics (1)
- fiber model (1)
- fiber-fluid interactions (1)
- fiber-turbulence interaction scales (1)
- fibrous insulation materials (1)
- fibrous materials (1)
- film casting process (1)
- filtration (1)
- financial decisions (1)
- finite difference discretization (1)
- finite differences (1)
- finite element method (1)
- finite elements (1)
- finite sample breakdown point (1)
- finite volume discretization (1)
- finite volume discretization (1)
- finite volume discretizations (1)
- finite volume methods (1)
- finite-volume method (1)
- flexible fibers (1)
- flow in heterogeneous porous media (1)
- flow in porous media (1)
- flow resistivity (1)
- fluid-fiber interactions (1)
- fluid-structure interaction (1)
- force-based simulation (1)
- formal verification (1)
- forward starting options (1)
- fptas (1)
- frame indifference (1)
- free boundary value problem (1)
- free surface (1)
- free surface Stokes flow (1)
- full vehicle model (1)
- functional Hilbert space (1)
- fuzzy logic (1)
- general semi-infinite optimization (1)
- generalized Pareto distribution (1)
- genetic algorithms (1)
- geographical information systems (1)
- geomathematics (1)
- geometrically exact rod models (1)
- geometrically exact rods (1)
- glass processing (1)
- global optimization (1)
- global robustness (1)
- graph laplacian (1)
- guarded actions (1)
- harmonic density (1)
- harmonische Dichte (1)
- heterogeneous porous media (1)
- heuristics (1)
- hierarchical shape functions (1)
- human factors (1)
- human visual system (1)
- hydraulics (1)
- hyperelastic (1)
- image processing (1)
- image segmentation (1)
- impinging jets (1)
- improving and feasible directions (1)
- in-house hospital transportation (1)
- incompressible flow (1)
- inertial and viscous-inertial fiber regimes (1)
- innovation outlier (1)
- integral constitutive equation (1)
- intensity maps (1)
- intensity modulated (1)
- intensity modulated radiotherapy planning (1)
- interactive multi-objective optimization (1)
- interactive navigation (1)
- interfa (1)
- interface problem (1)
- interface problems (1)
- invariants (1)
- ion transport (1)
- isotropy test (1)
- kernel estimate (1)
- kernel function (1)
- kinetic derivation (1)
- knowledge management (1)
- knowledge representation (1)
- kooperative Spieltheorie (1)
- large scale optimization (1)
- lattice Boltzmann equation (1)
- learning curve (1)
- level-set (1)
- lid-driven flow in a (1)
- linear elasticity equations (1)
- linear kinematic hardening (1)
- linear optimization (1)
- liquid composite moulding (1)
- liquid film (1)
- lithium-ion battery (1)
- local approximation of sea surface topography (1)
- local robustness (1)
- locally supported (Green’s) vector wavelets (1)
- locally supported wavelets (1)
- location theory (1)
- log utility (1)
- logistic regression (1)
- logistics (1)
- long slender fibers (1)
- macro modeling (1)
- macroscopic equations (1)
- magnetic field (1)
- mass & spring (1)
- mastoid (1)
- mastoidectomy (1)
- mechanism design (1)
- metal foams (1)
- method of fundamental solutions (1)
- microstructure simulation (1)
- microstructure simulation (1)
- minimaler Schnittbaum (1)
- minimum cut tree (1)
- models (1)
- modified gradient projection method (1)
- moment matching (1)
- multi-asset (1)
- multi-period planning (1)
- multi-stage stochastic programming (1)
- multibody system simulation (1)
- multicriteria optimization (1)
- multigrid methods (1)
- multiobjective evolutionary algorithms (1)
- multiphase flow (1)
- multiple objective optimization (1)
- multiscale problem (1)
- multiscale problems (1)
- multiscale structures (1)
- multivalued fundamental diagram (1)
- nearest neighbour distance (1)
- neighborhood relationships (1)
- network congestion game (1)
- neural network (1)
- non-Newtonian fluids (1)
- non-linear optimization (1)
- non-linear wealth dynamics (1)
- non-local conditions (1)
- non-woven (1)
- nonlinear programming (1)
- nonlinear stochastic systems (1)
- nonlinearity (1)
- nonparametric regression (1)
- numerical simulation (1)
- numerical solution (1)
- object-orientation (1)
- occupational choice (1)
- oil filters (1)
- on-board simulation (1)
- open cell foam (1)
- operator-dependent prolongation (1)
- optimal control (1)
- optimal control theory (1)
- optimal portfolio choice (1)
- optimization (1)
- optimization algorithms (1)
- optimization strategies (1)
- options (1)
- ordered median (1)
- orientation analysis (1)
- orthogonal orientations (1)
- oscillating coefficients (1)
- otorhinolaryngological surgery (1)
- ownership (1)
- pH-sensitive microelectrodes (1)
- paper machine (1)
- parallel computing (1)
- parallel implementation (1)
- particle methods (1)
- path-connected sublevelsets (1)
- permeability of fractured porous media (1)
- phase space (1)
- phase transitions (1)
- piezoelectric periodic surface acoustic wave filters (1)
- planar location (1)
- polar ice (1)
- political districting (1)
- porous media (1)
- porous microstructure (1)
- power utility (1)
- preconditioner (1)
- pressing section of a paper machine (1)
- price of anarchy (1)
- price of stability (1)
- productivity (1)
- project management and scheduling (1)
- projection-type splitting (1)
- pseudo-plastic fluids (1)
- public transit (1)
- public transport (1)
- public transportation (1)
- puts (1)
- quadratic assignment problem (1)
- quantile estimation (1)
- quasistatic deformations (1)
- quickest path (1)
- radiation therapy planning (1)
- radiotherapy planning (1)
- random Gaussian aerodynamic force (1)
- random set (1)
- random system of fibers (1)
- rate-independency (1)
- rate-independent hysteresis (1)
- real-life applications (1)
- real-time (1)
- real-time simulation (1)
- real-world accident data (1)
- regularization (1)
- regularized models (1)
- representative systems of Pareto solutions (1)
- reproducing kernel (1)
- risk (1)
- robust network flows (1)
- robustness (1)
- rotational spinning processes (1)
- safety critical components (1)
- safety function (1)
- sales territory alignment (1)
- satisfiability (1)
- selfish routing (1)
- semi-infinite programming (1)
- sensitivities (1)
- sequences (1)
- sequential test (1)
- series-parallel graphs (1)
- shape (1)
- shape optimization (1)
- simplex (1)
- single layer kernel (1)
- singularity (1)
- slender-body theory (1)
- slender-body theory (1)
- slender-body theory (1)
- smoothness (1)
- software process (1)
- spherical decomposition (1)
- spinning processes (1)
- stability (1)
- statistical modeling (1)
- steady Richards’ equation (1)
- steady modified Richards’ equation (1)
- stochastic Hamiltonian system (1)
- stochastic averaging (1)
- stochastic dif (1)
- stochastic volatility (1)
- stokes (1)
- stop and go waves (1)
- stop- and play-operator (1)
- stop- and play-operators (1)
- strategic (1)
- strength (1)
- strong equilibria (1)
- strut thickness (1)
- subgrid approach (1)
- subgrid approximation (1)
- suspension (1)
- swap (1)
- symbolic analysis (1)
- synchrone Sprachen (1)
- synchronous languages (1)
- system simulation (1)
- tabu search (1)
- technology (1)
- territory design (1)
- textile quality control (1)
- texture classification (1)
- theorem prover (1)
- thin films (1)
- topological sensitivity (1)
- topology optimization (1)
- total latency (1)
- tr (1)
- trace stability (1)
- traffic flow (1)
- transfer quality (1)
- translation validation (1)
- transportation (1)
- tree method (1)
- turbulence modeling (1)
- turbulence modelling (1)
- two-grid algorithm (1)
- two-way coupling (1)
- types (1)
- unstructured grid (1)
- upscaling (1)
- urban elevation (1)
- variable aggregation method (1)
- variable neighborhood search (1)
- variational formulation (1)
- vector spherical harmonics (1)
- vectorial wavelets (1)
- viscous thermal jets (1)
- visual (1)
- visual interfaces (1)
- visualization (1)
- volatility (1)
- volume of fluid method (1)
- wave propagation (1)
- weak solution theory (1)
- weakly/strictly Pareto optima (1)
- white noise (1)
- wild bootstrap test (1)
In this work, we analyze two important and simple short-rate models, namely the Vasicek and CIR models. The models are described, and the sensitivity of each model with respect to changes in its parameters is studied. Finally, we give results for the estimation of the model parameters using two different approaches.
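The two short-rate dynamics discussed above can be sketched numerically. Below is a minimal Euler-Maruyama simulation of the Vasicek model dr = κ(θ − r) dt + σ dW and the CIR model dr = κ(θ − r) dt + σ√r dW; the parameter names and the full-truncation handling of near-zero rates in the CIR scheme are illustrative choices, not taken from the report.

```python
import math
import random

def simulate_vasicek(r0, kappa, theta, sigma, T, n, seed=0):
    """Euler-Maruyama path of dr = kappa*(theta - r) dt + sigma dW."""
    rng = random.Random(seed)
    dt = T / n
    r, path = r0, [r0]
    for _ in range(n):
        dW = rng.gauss(0.0, math.sqrt(dt))
        r = r + kappa * (theta - r) * dt + sigma * dW
        path.append(r)
    return path

def simulate_cir(r0, kappa, theta, sigma, T, n, seed=0):
    """Euler path of dr = kappa*(theta - r) dt + sigma*sqrt(r) dW,
    with full truncation (negative rates clipped to 0 inside the drift
    and diffusion terms) to keep the square root well-defined."""
    rng = random.Random(seed)
    dt = T / n
    r, path = r0, [r0]
    for _ in range(n):
        dW = rng.gauss(0.0, math.sqrt(dt))
        rp = max(r, 0.0)
        r = r + kappa * (theta - rp) * dt + sigma * math.sqrt(rp) * dW
        path.append(r)
    return path
```

With σ = 0 both schemes reduce to deterministic mean reversion toward θ, which gives a quick sanity check of an implementation.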
Algebraic Systems Theory
(2004)
Control systems are usually described by differential equations, but their properties of interest are most naturally expressed in terms of the system trajectories, i.e., the set of all solutions to the equations. This is the central idea behind the so-called "behavioral approach" to systems and control theory. On the other hand, the manipulation of linear systems of differential equations can be formalized using algebra, more precisely, module theory and homological methods ("algebraic analysis"). The relationship between modules and systems is very rich, in fact, it is a categorical duality in many cases of practical interest. This leads to algebraic characterizations of structural systems properties such as autonomy, controllability, and observability. The aim of these lecture notes is to investigate this module-system correspondence. Particular emphasis is put on the application areas of one-dimensional rational systems (linear ODE with rational coefficients), and multi-dimensional constant systems (linear PDE with constant coefficients).
In this paper, mathematical models for liquid films generated by impinging jets are discussed. Attention is focused on the interaction of the liquid film with an obstacle. S. G. Taylor [Proc. R. Soc. London Ser. A 253, 313 (1959)] found that the liquid film generated by impinging jets is very sensitive to the properties of the wire used as an obstacle. The aim of this presentation is to propose a modification of Taylor's model that allows the film shape to be simulated in cases where the angle between the jets differs from 180°. Numerical results obtained with the discussed models give two different shapes of the liquid film, similar to those in Taylor's experiments. The two shapes depend on the regime: either droplets are produced close to the obstacle or they are not. The difference between the two regimes grows as the angle between the jets decreases. The existence of these two regimes can be essential for applications of impinging jets in which the generated liquid film may come into contact with obstacles.
Granular systems in a solid-like state exhibit properties such as stress-dependent stiffness, dilatancy, yield, and incremental non-linearity that can be described within the continuum-mechanical framework. Different constitutive models have been proposed in the literature, based either on relations between some components of the stress tensor or on a quasi-elastic description. After a brief description of these models, the hyperelastic law recently proposed by Jiang and Liu [1] is investigated. In this framework, the stress-strain relation is derived from an elastic strain energy density whose stability properties are linked to a Drucker-Prager yield criterion. Further, a numerical method based on finite element discretization and Newton-Raphson iterations is presented to solve the force balance equation. The 2D numerical examples presented in this work show that the stress distributions can be computed not only for triangular domains, as previously done in the literature, but also for more complex geometries. If the slope of the heap is greater than a critical value, numerical instabilities appear and no elastic solution can be found, as predicted by the theory. As the main result, the dependence of the material parameter Xi on the maximum angle of repose is established.
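The Newton-Raphson iteration mentioned above is the standard workhorse for nonlinear force-balance equations. As a generic illustration of the iteration (not the paper's finite element implementation), here is the scalar form applied to an arbitrary residual:

```python
def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Solve residual(x) = 0 by Newton-Raphson for a scalar unknown.

    Each step solves the linearized equation J(x) * dx = -r(x)
    and updates x <- x + dx, exactly as the FE version does with
    the tangent stiffness matrix in place of the scalar Jacobian.
    """
    x = x0
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            return x
        x -= r / jacobian(x)
    return x
```

For example, solving x³ − 2 = 0 from the starting guess x₀ = 1 converges to the cube root of 2 in a handful of iterations.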
Wireless LANs operating within unlicensed frequency bands require random access schemes such as CSMA/CA, so that wireless networks from different administrative domains (for example, wireless community networks) may co-exist without central coordination, even when they happen to operate on the same radio channel. Yet it is evident that this lack of coordination leads to an inevitable loss in efficiency due to contention on the MAC layer. The interesting question is how much efficiency may be gained by adding coordination to existing, unrelated wireless networks, for example by self-organization. In this paper, we present a methodology based on a mathematical programming formulation to determine the parameters (assignment of stations to access points, signal strengths, and channel assignment of both access points and stations) for a scenario of co-existing CSMA/CA-based wireless networks, such that the contention between these networks is minimized. We demonstrate how this discrete, non-linear optimization problem can be solved exactly for small problems. For larger scenarios, we present a genetic algorithm specifically tuned for finding near-optimal solutions, and compare its results to theoretical lower bounds. Overall, we provide a benchmark on the minimum contention problem for coordination mechanisms in CSMA/CA-based wireless networks.
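A genetic algorithm for this kind of assignment problem can be sketched on a heavily simplified version of the task: interference is modeled as a graph of neighboring access points, and the fitness to be minimized is the number of co-channel neighbor pairs. The encoding, operators, and parameters below are illustrative choices, not those of the paper.

```python
import random

def contention(assign, edges):
    """Number of neighboring access-point pairs sharing a channel."""
    return sum(1 for a, b in edges if assign[a] == assign[b])

def genetic_channel_assignment(n_aps, edges, n_channels=3, pop=30,
                               gens=200, seed=1):
    """Toy GA: elitist selection, one-point crossover, point mutation."""
    rng = random.Random(seed)
    population = [[rng.randrange(n_channels) for _ in range(n_aps)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda ind: contention(ind, edges))
        survivors = population[:pop // 2]          # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_aps)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                 # point mutation
                child[rng.randrange(n_aps)] = rng.randrange(n_channels)
            children.append(child)
        population = survivors + children
    return min(population, key=lambda ind: contention(ind, edges))
```

On a triangle of three mutually interfering access points with three channels available, the GA quickly finds a contention-free assignment.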
Radiotherapy is one of the major forms of cancer treatment. The patient is irradiated with high-energy photons or charged particles, with the primary goal of delivering sufficiently high doses to the tumor tissue while simultaneously sparing the surrounding healthy tissue. The inverse search for the treatment plan giving the desired dose distribution is done by means of numerical optimization [11, Chapters 3-5]. For this purpose, the aspects of dose quality in the tissue are modeled as criterion functions, whose mathematical properties also determine the type of the corresponding optimization problem. Clinical practice makes frequent use of criteria that incorporate volumetric and spatial information about the shape of the dose distribution. The resulting optimization problems are known empirically to be of global type and are typically solved with generic global solver concepts, see for example [16]. The development of good global solvers for radiotherapy optimization problems is an important research topic in this application; however, the structural properties of the underlying criterion functions are typically not taken into account in this context.
On the Complexity of the Uncapacitated Single Allocation p-Hub Median Problem with Equal Weights
(2007)
The Super-Peer Selection Problem is an optimization problem in network topology construction. It may be cast as a special case of a Hub Location Problem, more exactly an Uncapacitated Single Allocation p-Hub Median Problem with equal weights. We show that this problem is still NP-hard by reduction from Max Clique.
This report reviews selected image binarization and segmentation methods that have been proposed and are suitable for processing volume images. The focus is on thresholding, region growing, and shape-based methods. Rather than trying to give a complete overview of the field, we review the original ideas and concepts of selected methods, because we believe this information is important for judging when and under what circumstances a segmentation algorithm can be expected to work properly.
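Thresholding, the first family of methods reviewed, can be illustrated with Otsu's classic method, which picks the gray level maximizing the between-class variance of the image histogram. This is the standard textbook algorithm, not a method specific to this report:

```python
def otsu_threshold(histogram):
    """Return the threshold t maximizing between-class variance.

    Pixels with gray value <= t form one class, the rest the other.
    `histogram` is a list of pixel counts per gray level.
    """
    total = sum(histogram)
    total_sum = sum(i * h for i, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t, h in enumerate(histogram):
        w0 += h                      # pixels in the lower class
        if w0 == 0:
            continue
        w1 = total - w0              # pixels in the upper class
        if w1 == 0:
            break
        sum0 += t * h
        mu0 = sum0 / w0              # class means
        mu1 = (total_sum - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

For a clearly bimodal histogram the chosen threshold falls between the two modes, which is exactly the behavior binarization of volume images relies on.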
We consider a volume maximization problem arising in the gemstone cutting industry. The problem is formulated as a general semi-infinite program (GSIP) and solved using an interior-point method developed by Stein. It is shown that the convexity assumption needed for the convergence of the algorithm can be satisfied by appropriate modelling. Clustering techniques are used to reduce the number of container constraints, which is necessary to make the subproblems practically tractable. An iterative process consisting of GSIP optimization and adaptive refinement steps is then employed to obtain an optimal solution that is also feasible for the original problem. Some numerical results based on real-world data are also presented.
Wireless sensor networks are the driving force behind many popular and interdisciplinary research areas, such as environmental monitoring, building automation, healthcare, and assisted-living applications. Requirements like compactness, high integration of sensors, flexibility, and power efficiency are often very different and cannot all be fulfilled at once by state-of-the-art node platforms. In this paper, we present and analyze AmICA: a flexible, compact, easy-to-program, and low-power node platform. Developed from scratch and comprising a node, a basic communication protocol, and a debugging toolkit, it supports user-friendly rapid application development. The general-purpose nature of AmICA was evaluated in two practical applications with diametrically opposed requirements. Our analysis shows that AmICA nodes are 67% smaller than BTnodes, have five times more sensors than Mica2Dot, and consume 72% less energy than the state-of-the-art TelosB mote in sleep mode.
The stationary heat equation is solved with periodic boundary conditions in geometrically complex composite materials with high contrast in the thermal conductivities of the individual phases. This is achieved by harmonic averaging and by explicitly introducing the jumps across the material interfaces as additional variables. The continuity of the heat flux yields the extra equations needed for these variables. A Schur-complement formulation for the new variables is derived and solved using the FFT and BiCGStab methods. The EJ-HEAT solver is given as a 3-page Matlab program in the Appendix. The C++ implementation is used for material design studies. It solves 3-dimensional problems with around 190 million variables on a 64-bit AMD Opteron desktop system in less than 6 GB of memory and in minutes to hours, depending on the contrast and the required accuracy. The approach may also be used to compute effective electric conductivities, because they are governed by the stationary heat equation.
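The harmonic averaging used above has a simple one-dimensional analogue: for heat flowing perpendicular to a stack of material layers, the effective conductivity is the thickness-weighted harmonic mean of the layer conductivities. A minimal sketch of that textbook formula (illustrative only, not the EJ-HEAT solver):

```python
def harmonic_mean_conductivity(conductivities, thicknesses):
    """Effective conductivity of layers in series (1D heat flow
    perpendicular to the layers): thermal resistances d/k add up,
    so k_eff is the thickness-weighted harmonic mean."""
    total = sum(thicknesses)
    return total / sum(d / k for k, d in zip(conductivities, thicknesses))
```

Note how strongly the low-conductivity phase dominates: two equal layers with conductivities 1 and 100 give an effective value of about 1.98, far below the arithmetic mean of 50.5 — this is the high-contrast effect the report addresses.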
Four aspects are important in the design of hydraulic filters. We distinguish between two cost factors and two performance factors. Regarding performance, filter efficiency and filter capacity are of interest. Regarding cost, there are production considerations such as spatial restrictions, material cost, and the cost of manufacturing the filter. The second type of cost is the operating cost, namely the pressure drop. Although simulations should and ultimately will deal with all four aspects, for the moment our work focuses on cost. The PleatGeo module generates three-dimensional computer models of a single pleat of a hydraulic filter interactively. PleatDict computes the pressure drop that will result from a particular design by direct numerical simulation. The evaluation of a new pleat design takes only a few hours on a standard PC, compared to the days or weeks needed for manufacturing and testing a new prototype of a hydraulic filter. The design parameters are the shape of the pleat, the permeabilities of one or several layers of filter media, and the geometry of a supporting netting structure used to keep the outflow area open. Besides the underlying structure generation and CFD technology, we present some trends regarding the dependence of the pressure drop on the design parameters that can serve as guidelines for the design of hydraulic filters. Compared to earlier two-dimensional models, the three-dimensional models can include a support structure.
A fully automatic procedure is proposed to rapidly compute the permeability of porous materials from their binarized microstructure. The discretization is a simplified version of Peskin's Immersed Boundary Method, where the forces are applied at the no-slip grid points. As needed for the computation of permeability, steady flows at zero Reynolds number are considered. Short run times are achieved by eliminating the pressure and velocity variables using a fast inversion approach, based on the Fast Fourier Transform and four Poisson problems, on rectangular parallelepipeds with periodic boundary conditions. In reference to calling it a fast method using fictitious or artificial forces, the implementation is called FFF-Stokes. Large-scale computations on 3d images are quickly and automatically performed to estimate the permeability of some sample materials. A Matlab implementation is provided to allow readers to experience the automation and speed of the method for realistic three-dimensional models.
This report presents a generalization of tensor-product B-spline surfaces. The new scheme permits knot lines whose endpoints lie in the interior of the domain rectangle of a surface. This allows local refinement of the knot structure for approximation purposes, as well as modeling surfaces with local tangent or curvature discontinuities. The surfaces are represented in terms of B-spline basis functions, ensuring affine invariance, local control, the convex hull property, and evaluation by de Boor's algorithm. A dimension formula for a class of generalized tensor-product spline spaces is developed.
In the presented work, we make use of the strong reciprocity between kinematics and geometry to build a geometrically nonlinear, shearable, low-order discrete shell model of Cosserat type defined on triangular meshes, from which we deduce a rotation-free Kirchhoff-type model with the triangle vertex positions as degrees of freedom. Both models behave physically plausibly already on very coarse meshes and show good convergence properties on regular meshes. Moreover, from the theoretical side, this deduction provides a common geometric framework for several existing models.
In this article we present a method to generate random objects from a large variety of combinatorial classes according to a given distribution. Given a description of the combinatorial class and a set of sample data our method will provide an algorithm that generates objects of size n in worst-case runtime O(n^2) (O(n log(n)) can be achieved at the cost of a higher average-case runtime), with the generated objects following a distribution that closely matches the distribution of the sample data.
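The notion of generating objects according to the empirical distribution of sample data can be illustrated with a small inverse-CDF sampler over observed values (a generic sketch under simplifying assumptions, not the authors' algorithm; all names are hypothetical):

```python
import bisect
import random

def make_sampler(sample_data):
    """Return a function that draws values with the empirical
    frequencies observed in sample_data (inverse-CDF sampling)."""
    counts = {}
    for x in sample_data:
        counts[x] = counts.get(x, 0) + 1
    values = sorted(counts)
    total = len(sample_data)
    cdf, acc = [], 0
    for v in values:
        acc += counts[v]
        cdf.append(acc / total)  # cumulative empirical distribution
    def sample():
        # Find the first value whose CDF value exceeds a uniform draw.
        return values[bisect.bisect_left(cdf, random.random())]
    return sample

sample = make_sampler(["a", "a", "a", "b"])  # P(a)=0.75, P(b)=0.25
```

Each draw costs O(log k) for k distinct values; the combinatorial machinery of the paper extends this idea to structured objects of a given size n.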
We present a methodology to augment system safety step-by-step and illustrate the approach by the definition of reusable solutions for the detection of fail-silent nodes - a watchdog and a heartbeat. These solutions can be added to real-time system designs, to protect against certain types of system failures. We use SDL as a system design language for the development of distributed systems, including real-time systems.
Interactive graphics has been limited to simple direct illumination that commonly results in an artificial appearance. A more realistic appearance by simulating global illumination effects has been too costly to compute at interactive rates. In this paper we describe a new Monte Carlo-based global illumination algorithm. It achieves performance of up to 10 frames per second while arbitrary changes to the scene may be applied interactively. The performance is obtained through the effective use of a fast, distributed ray-tracing engine as well as a new interleaved sampling technique for parallel Monte Carlo simulation. A new filtering step in combination with correlated sampling avoids the disturbing noise artifacts common to Monte Carlo methods.
Worldwide, the installed capacity of renewable technologies for electricity production is rising tremendously. The German market is particularly progressive, and its regulatory rules imply that production from renewables is decoupled from market prices and electricity demand. Conventional generation technologies are to cover the residual demand (defined as total demand minus production from renewables) but set the price at the exchange. Existing electricity price models do not account for the new risks introduced by the volatile production of renewables and their effects on the conventional demand curve. A model for residual demand is proposed, which is used as an extension of supply/demand electricity price models to account for renewable infeed in the market. Infeed from wind and solar (photovoltaics) is modeled explicitly and withdrawn from total demand. The methodology separates the impact of weather and capacity. Efficiency is transformed to the real line using the logit transformation and modeled as a stochastic process; installed capacity is assumed to be a deterministic function of time. In a case study, the residual demand model is applied to the German day-ahead market using a supply/demand model with a deterministic supply-side representation. Price trajectories are simulated and the results are compared to market future and option prices. The trajectories show typical features seen in market prices in recent years, and the model is able to closely reproduce the structure and magnitude of market prices. Using the simulated prices, it is found that renewable infeed increases the volatility of forward prices in times of low demand but can reduce volatility in peak hours. Prices for different scenarios of installed wind and solar capacity are compared, and the merit-order effect of increased wind and solar capacity is calculated. It is found that wind has a stronger overall effect than solar, but both are even in peak hours.
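The logit transformation used for the efficiency (a value in the open interval (0, 1)) and its inverse can be sketched as follows (a minimal illustration of the transformation only, not the paper's estimation procedure):

```python
import math

def logit(e):
    """Map an efficiency e in (0, 1) to the real line."""
    return math.log(e / (1.0 - e))

def inv_logit(y):
    """Inverse of logit: map a real value back into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-y))

# Modeling the stochastic process on the logit scale and transforming
# back guarantees the simulated efficiency stays strictly inside (0, 1),
# i.e. renewable infeed never exceeds installed capacity.
e_next = inv_logit(logit(0.8) + 0.5)  # perturbation on the real line
```

This is why the transformation is attractive for modeling capacity utilization: any real-valued stochastic process on the logit scale maps back to a valid efficiency.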
Continuously improving imaging technologies allow the complex spatial geometry of particles to be captured. Consequently, methods to characterize their three-dimensional shapes must become more sophisticated, too. Our contribution to the geometric analysis of particles based on 3d image data is to unambiguously generalize size and shape descriptors used in 2d particle analysis to the spatial setting. While being defined and meaningful for arbitrary particles, the characteristics were selected motivated by the application to technical cleanliness. Residual dirt particles can seriously harm mechanical components in vehicles, machines, or medical instruments. 3d geometric characterization based on micro-computed tomography allows dangerous particles to be detected reliably and with high throughput. It thus enables intervention within the production line. Analogously to the commonly agreed standards for the two-dimensional case, we show how to classify 3d particles as granules, chips and fibers on the basis of the chosen characteristics. The application to 3d image data of dirt particles is demonstrated.
There is a well-known relationship between alternating automata on finite words and symbolically represented nondeterministic automata on finite words. This relationship is of practical relevance because it makes it possible to combine the advantages of alternating and symbolically represented nondeterministic automata on finite words. However, for infinite words the situation is unclear. Therefore, this work investigates the relationship between alternating omega-automata and symbolically represented nondeterministic omega-automata. We identify classes of alternating omega-automata that are as expressive as safety, liveness and deterministic prefix automata, respectively. Moreover, some very simple symbolic nondeterminisation procedures are developed for the classes corresponding to safety and liveness properties.
We present the application of a meshfree method for simulations of the interaction between fluids and flexible structures. As a flexible structure we consider a sheet of paper. In a two-dimensional framework this sheet can be modeled as a curve by the dynamical Kirchhoff-Love theory. The external forces taken into account are gravitation and the pressure difference between the upper and lower surface of the sheet. This pressure difference is computed using the Finite Pointset Method (FPM) for the incompressible Navier-Stokes equations. FPM is a meshfree, Lagrangian particle method. The dynamics of the sheet are computed by a finite difference method. We show the suitability of the meshfree method for simulations of fluid-structure interaction in several applications.
A Lattice Boltzmann Method for immiscible multiphase flow simulations using the Level Set Method
(2008)
We consider the lattice Boltzmann method for immiscible multiphase flow simulations. Classical lattice Boltzmann methods for this problem, e.g. the colour gradient method or the free energy approach, can only be applied when density and viscosity ratios are small. Moreover, they use additional fields defined on the whole domain to describe the different phases and model phase separation by special interactions at each node. In contrast, our approach simulates the flow using a single field and separates the fluid phases by a free moving interface. The scheme is based on the lattice Boltzmann method and uses the level set method to compute the evolution of the interface. To couple the fluid phases, we develop new boundary conditions which realise the macroscopic jump conditions at the interface and incorporate surface tension in the lattice Boltzmann framework. Various simulations are presented to validate the numerical scheme, e.g. two-phase channel flows, the Young-Laplace law for a bubble and viscous fingering in a Hele-Shaw cell. The results show that the method is feasible over a wide range of density and viscosity differences.
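The Young-Laplace test mentioned above compares the simulated pressure jump across a bubble interface with the analytic value; the law itself is simple to state in code (a sketch of the reference formula only, not of the lattice Boltzmann scheme; the function name is hypothetical):

```python
def young_laplace_jump(sigma, radius, dim=2):
    """Pressure jump across a circular (2d) or spherical (3d) bubble
    according to the Young-Laplace law: dp = sigma/R in 2d and
    dp = 2*sigma/R in 3d."""
    if dim not in (2, 3):
        raise ValueError("dim must be 2 or 3")
    return (dim - 1) * sigma / radius

dp_2d = young_laplace_jump(0.07, 0.01, dim=2)  # sigma / R
dp_3d = young_laplace_jump(0.07, 0.01, dim=3)  # 2 * sigma / R
```

In a validation run, the pressure difference measured between the inside and outside of a simulated static bubble is compared against this analytic jump for the prescribed surface tension and bubble radius.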
We prove a general monotonicity result about Nash flows in directed networks and use it for the design of truthful mechanisms in the setting where each edge of the network is controlled by a different selfish agent, who incurs costs when her edge is used. The costs for each edge are assumed to be linear in the load on the edge. To compensate for these costs, the agents impose tolls for the usage of edges. When nonatomic selfish network users choose their paths through the network independently and each user tries to minimize a weighted sum of her latency and the toll she has to pay to the edges, a Nash flow is obtained. Our monotonicity result implies that the load on an edge in this setting cannot increase when the toll on the edge is increased, so the assignment of load to the edges by a Nash flow yields a monotone algorithm. By a well-known result, the monotonicity of the algorithm then allows us to design truthful mechanisms based on the load assignment by Nash flows. Moreover, we consider a mechanism design setting with two-parameter agents, which is a generalization of the case of one-parameter agents considered in a seminal paper of Archer and Tardos. While the private data of an agent in the one-parameter case consists of a single nonnegative real number specifying the agent's cost per unit of load assigned to her, the private data of a two-parameter agent consists of a pair of nonnegative real numbers, where the first one specifies the cost of the agent per unit load as in the one-parameter case, and the second one specifies a fixed cost, which the agent incurs independently of the load assignment. We give a complete characterization of the set of output functions that can be turned into truthful mechanisms for two-parameter agents. Namely, we prove that an output function for the two-parameter setting can be turned into a truthful mechanism if and only if the load assigned to every agent is nonincreasing in the agent's bid for her per-unit cost and, for almost all fixed bids for the agent's per-unit cost, the load assigned to her is independent of the agent's bid for her fixed cost. When the load assigned to an agent is continuous in the agent's bid for her per-unit cost, it must be completely independent of the agent's bid for her fixed cost. These results motivate our choice of linear cost functions without fixed costs for the edges in the selfish routing setting, but they also seem to be of independent interest in the context of algorithmic mechanism design.
Estelle is an internationally standardized formal description technique (FDT) designed for the specification of distributed systems, in particular communication protocols. An Estelle specification describes a system of communicating components (module instances). The specified system is closed in a topological sense, i.e. it has no ability to interact with an environment. Because of this restriction, open systems can only be specified together with, and incorporated into, an environment. To overcome this restriction, we introduce a compatible extension of Estelle, called "Open Estelle". It allows the specification of (topologically) open systems, i.e. systems that have the ability to communicate with any environment through a well-defined external interface. We define a formal syntax and a formal semantics for Open Estelle, both based on and extending the syntax and semantics of Estelle. The extension is compatible syntactically and semantically, i.e. Estelle is a subset of Open Estelle. In particular, the formal semantics of Open Estelle reduces to the Estelle semantics in the special case of a closed system. Furthermore, we present a tool for the textual integration of open systems into environments specified in Open Estelle, and a compiler for the automatic generation of implementations directly from Open Estelle specifications.
It is commonly believed that not all degrees of freedom are needed to produce good solutions for the treatment planning problem in intensity modulated radiotherapy treatment (IMRT). However, typical methods to exploit this fact have either increased the complexity of the optimization problem or were heuristic in nature. In this work we introduce a technique based on adaptively refining variable clusters to successively attain better treatment plans. The approach creates approximate solutions based on smaller models that may get arbitrarily close to the optimal solution. Although the method is illustrated using a specific treatment planning model, the components constituting the variable clustering and the adaptive refinement are independent of the particular optimization problem.
It has been empirically verified that smoother intensity maps can be expected to produce shorter sequences when step-and-shoot collimation is the method of choice. This work studies the length of sequences obtained by the sequencing algorithm of Bortfeld and Boyer using a probabilistic approach. The results provide a theoretical foundation for the previously only empirically validated observation that, if the smoothness of intensity maps is considered during their calculation, the solutions can be expected to be more easily applied.
We present a parsimonious multi-asset Heston model. All single-asset submodels follow the well-known Heston dynamics, and their parameters are typically calibrated on implied market volatilities. We focus on the calibration of the correlation structure between the single-asset marginals in the absence of sufficient liquid cross-asset option price data. The presented model is parsimonious in the sense that d(d-1)/2 asset-asset cross-correlations are required for a d-asset Heston model. In order to calibrate the model, we present two general setups corresponding to relevant practical situations: (1) the empirical cross-asset correlations in the risk-neutral world are given by the user and we need to calibrate the correlations between the driving Brownian motions, or (2) they have to be estimated from the historical time series. The theoretical background for the proposed estimators, including the ergodicity of the multidimensional CIR process, is also studied.
For the last decade, optimization of beam orientations in intensity-modulated radiation therapy (IMRT) has been shown to be successful in improving the treatment plan. Unfortunately, the quality of a set of beam orientations depends heavily on its corresponding beam intensity profiles. Usually, a stochastic selector is used for optimizing the beam orientations, and then a single-objective inverse treatment planning algorithm is used for the optimization of the beam intensity profiles. The overall time needed to solve the inverse planning for every random selection of beam orientations becomes excessive. Recently, considerable improvement has been made in optimizing beam intensity profiles by using multiple-objective inverse treatment planning. Such an approach results in a variety of beam intensity profiles for every selection of beam orientations, making the dependence between the beam orientations and their intensity profiles less important. We take advantage of this property to present a dynamic algorithm for beam orientation in IMRT which is based on multicriteria inverse planning. The algorithm approximates beam intensity profiles iteratively instead of doing so for every selection of beam orientations, saving a considerable amount of calculation time. Every iteration goes from an N-beam plan to a plan with N + 1 beams. The beam selection criteria are based on a score function that minimizes the deviation from the prescribed dose, in addition to a reject-accept criterion. To illustrate the efficiency of the algorithm, it has been applied to an artificial example where optimality is trivial and to three real clinical cases: a prostate carcinoma, a tumor in the head and neck region, and a paraspinal tumor. In comparison to the standard equally spaced beam plans, improvements are reported in all three clinical examples, in some cases even with fewer beams.
Investigate the hardware description language Chisel - A case study implementing the Heston model
(2013)
This paper presents a case study comparing the hardware description language „Constructing Hardware in a Scala Embedded Language" (Chisel) to VHDL. For a thorough comparison, the Heston model was implemented, a stochastic model used in financial mathematics to calculate option prices. Metrics like hardware utilization and maximum clock rate were extracted from both resulting designs and compared to each other. The results showed a 30% reduction in code size compared to VHDL, while the resulting circuits had about the same hardware utilization. Using Chisel, however, proved to be difficult because a few features were not available for this case study.
In this article, a new model predictive control approach to nonlinear stochastic systems is presented. The approach is based on particle filters, which are usually used for estimating states or parameters. Here, two particle filters are combined: the first gives an estimate of the current state based on the current output of the system; the second gives an estimate of a control input for the system. This is basically done by adopting the basic model predictive control strategies for the second particle filter. Later in this paper, this new approach is applied to a CSTR (continuous stirred-tank reactor) example and to the inverted pendulum.
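A single step of the underlying bootstrap particle filter can be sketched as follows (a generic sketch, not the authors' implementation; the transition and likelihood models are user-supplied assumptions):

```python
import random

def particle_filter_step(particles, transition, likelihood, observation):
    """One bootstrap particle filter step: propagate each particle through
    the (stochastic) transition model, weight by the observation
    likelihood, and resample proportionally to the weights."""
    propagated = [transition(x) for x in particles]
    weights = [likelihood(observation, x) for x in propagated]
    total = sum(weights)
    if total == 0.0:
        return propagated  # degenerate weights: keep the prior cloud
    weights = [w / total for w in weights]
    return random.choices(propagated, weights=weights, k=len(particles))
```

Repeating this step with the measured outputs yields a state estimate; the approach described above runs a second filter of the same form over candidate control inputs.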
We study the complexity of finding extreme pure Nash equilibria in symmetric network congestion games and analyse how it depends on the graph topology and the number of users. In our context best and worst equilibria are those with minimum respectively maximum total latency. We establish that both problems can be solved by a Greedy algorithm with a suitable tie breaking rule on parallel links. On series-parallel graphs finding a worst Nash equilibrium is NP-hard for two or more users while finding a best one is solvable in polynomial time for two users and NP-hard for three or more. Additionally we establish NP-hardness in the strong sense for the problem of finding a worst Nash equilibrium on a general acyclic graph.
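On parallel links, the greedy construction mentioned above can be sketched as follows (a minimal illustration with affine latencies and a first-index tie break, not necessarily the paper's exact tie-breaking rule; names are hypothetical):

```python
def greedy_parallel_links(latency_coeffs, n_users):
    """Assign n_users one by one to the parallel link whose latency
    after the assignment would be smallest (first-index tie break).

    latency_coeffs: list of pairs (a, b) with link latency l_e(x) = a*x + b.
    Returns the resulting load vector on the links.
    """
    loads = [0] * len(latency_coeffs)
    for _ in range(n_users):
        best = min(
            range(len(latency_coeffs)),
            key=lambda e: latency_coeffs[e][0] * (loads[e] + 1)
            + latency_coeffs[e][1],
        )
        loads[best] += 1
    return loads

# Two identical links l(x) = x and four users: the greedy assignment balances.
balanced = greedy_parallel_links([(1.0, 0.0), (1.0, 0.0)], 4)
```

Each user is placed on a link with the smallest latency after placement, so no user can improve by deviating; with the right tie-breaking rule this yields extreme equilibria on parallel links.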
In the ground vehicle industry it is often an important task to simulate full vehicle models based on the wheel forces and moments, which have been measured during driving over certain roads with a prototype vehicle. The models are described by a system of differential algebraic equations (DAE) or ordinary differential equations (ODE). The goal of the simulation is to derive section forces at certain components for a durability assessment. In contrast to handling simulations, which are performed including more or less complex tyre models, a driver model, and a digital road profile, the models we use here usually do not contain the tyres or a driver model. Instead, the measured wheel forces are used for excitation of the unconstrained model. This can be difficult due to noise in the input data, which leads to an undesired drift of the vehicle model in the simulation.
Testing a new suspension based on real load data is performed on elaborate multi channel test rigs. Usually wheel forces and moments measured during driving maneuvers are reproduced on the rig. Because of the complicated interaction between rig and suspension each new rig configuration has to prove its efficiency with respect to the requirements and the configuration might be subject to optimization. This paper deals with modeling a new rig concept based on two hexapods. The real physical rig has been designed and meanwhile built by MOOG-FCS for VOLKSWAGEN. The aim of the simulation project reported here was twofold: First the simulation of the rig together with real VOLKSWAGEN suspension models at a time where the design was not yet finalized was used to verify and optimize the desired properties of the rig. Second the simulation environment was set up in a way that it can be used to prepare real tests on the rig. The model contains the geometric configuration as well as the hydraulics and the controller. It is implemented as an ADAMS/Car template and can be combined with different suspension models to get a complete assembly representing the entire test rig. Using this model, all steps required for a real test run such as controller adaptation, drive file iteration and simulation can be performed. Geometric or hydraulic parameters can be modified easily to improve the setup and adapt the system to the suspension and the load data.
We compare different notions of differentiability of a measure along a vector field on a locally convex space. In the \(L^2\)-space of a differentiable measure, we consider the analogues of the classical concepts of gradient, divergence and Laplacian (which coincides with the Ornstein-Uhlenbeck operator in the Gaussian case). We use these operators to extend the basic results of Malliavin and Stroock on the smoothness of finite-dimensional image measures under certain nonsmooth mappings to the case of non-Gaussian measures. The proof of this extension is quite direct and does not use any chaos decomposition. Finally, the role of this Laplacian in the procedure of quantization of anharmonic oscillators is discussed.
One approach to multi-criteria IMRT planning is to automatically calculate a data set of Pareto-optimal plans for a given planning problem in a first phase, and then interactively explore the solution space and decide on the clinically best treatment plan in a second phase. The challenge of computing the plan data set is to ensure that all clinically meaningful plans are covered and that as many clinically irrelevant plans as possible are excluded, to keep computation times within reasonable limits. In this work, we focus on the approximation of the clinically relevant part of the Pareto surface, the process that constitutes the first phase. It is possible that two plans on the Pareto surface have a very small, clinically insignificant difference in one criterion and a significant difference in another criterion. For such cases, only the plan that is clinically clearly superior should be included in the data set. To achieve this during the Pareto surface approximation, we propose to introduce bounds that restrict the relative quality between plans, so-called trade-off bounds. We show how to integrate these trade-off bounds into the approximation scheme and study their effects.
In this paper we consider the location of stops along the edges of an already existing public transportation network, as introduced in [SHLW02]. This can be the introduction of bus stops along some given bus routes, or of railway stations along the tracks in a railway network. The goal is to achieve a maximal covering of given demand points with a minimal number of stops. This bicriterial problem is in general NP-hard. We present a finite dominating set, yielding an IP formulation as a bicriterial set covering problem. We use this formulation to observe that along one single straight line the bicriterial stop location problem can be solved in polynomial time, and we present an efficient solution approach for this case. It can be used as the basis of an algorithm tackling real-world instances.
Ownership Domains generalize ownership types. They support programming patterns like iterators that are not possible with ordinary ownership types. However, they are still too restrictive for cases in which an object X wants to access the public domains of an arbitrary number of other objects, which often happens in observer scenarios. To overcome this restriction, we developed so-called loose domains which abstract over several precise domains. That is, similar to the relation between supertypes and subtypes we have a relation between loose and precise domains. In addition, we simplified ownership domains by reducing the number of domains per object to two and hard-wiring the access permissions between domains. We formalized the resulting type system for an OO core language and proved type soundness and a fundamental accessibility property.
Hyperidentities
(1992)
The concept of a free algebra plays an essential role in universal algebra and in computer science. Manipulation of terms, calculations and the derivation of identities are performed in free algebras. Word problems, normal forms, systems of reductions, unification and finite bases of identities are topics in algebra and logic as well as in computer science. A very fruitful point of view is to consider structural properties of free algebras. A. I. Malcev initiated a thorough investigation of the congruences of free algebras. Henceforth, congruence-permutable, congruence-distributive and congruence-modular varieties have been intensively studied. A lot of Malcev-type theorems are connected to the congruence lattice of free algebras. Here we consider free algebras as semigroups of compositions of terms and, more specifically, as clones of terms. The properties of these semigroups and clones are adequately described by hyperidentities. Naturally, a lot of theorems of "semigroup" or "clone" type can be derived. This topic of research is still in its beginning, and therefore a lot of concepts and results cannot be presented in a final and polished form. Furthermore, a lot of problems and questions are open which are of importance for the further development of the theory of hyperidentities.
On derived varieties
(1996)
Derived varieties play an essential role in the theory of hyperidentities. In [11] we have shown that derivation diagrams are a useful tool in the analysis of derived algebras and varieties. In this paper this tool is developed further in order to use it for algebraic constructions of derived algebras. In particular, the operators \(S\) of subalgebras, \(H\) of homomorphic images and \(P\) of direct products are studied. Derived groupoids from the groupoid \(Nor(x,y) = x'\wedge y'\) and from abelian groups are considered. The latter class serves as an example for fluid algebras and varieties. A fluid variety \(V\) has no derived variety as a subvariety and is introduced as a counterpart to solid varieties. Finally, we use a property of the commutator of derived algebras to show that solvability and nilpotency are preserved under derivation.
A polynomial function \(f : L \to L\) of a lattice \(\mathcal{L} = (L; \land, \lor)\) is generated from the identity function \(id(x)=x\) and the constant functions \(c_a(x) = a\) (for every \(x \in L\)), \(a \in L\), by applying the operations \(\land, \lor\) finitely often. Every polynomial function in one or in several variables is a monotone function of \(\mathcal{L}\). If every monotone function of \(\mathcal{L}\) is a polynomial function, then \(\mathcal{L}\) is called order-polynomially complete. In this paper we give a new characterization of finite order-polynomially complete lattices. We consider doubly irreducible monotone functions and point out their relation to tolerances, especially to central relations. We introduce chain-compatible lattices and show that they have a non-trivial congruence if they contain a finite interval and an infinite chain. The consequences are two new results. A modular lattice \(\mathcal{L}\) with a finite interval is order-polynomially complete if and only if \(\mathcal{L}\) is a finite projective geometry. If \(\mathcal{L}\) is a simple modular lattice of infinite length, then every nontrivial interval is of infinite length and has the same cardinality as any other nontrivial interval of \(\mathcal{L}\). In the last sections we show the descriptive power of polynomial functions of lattices and present several applications in geometry.