Refine
Year of publication
- 2007 (145)
Document Type
- Doctoral Thesis (64)
- Report (37)
- Preprint (14)
- Periodical Part (13)
- Study Thesis (5)
- Working Paper (4)
- Diploma Thesis (3)
- Article (2)
- Conference Proceeding (2)
- Habilitation (1)
Has Fulltext
- yes (145)
Keywords
- Phasengleichgewicht (4)
- numerical upscaling (4)
- Dienstgüte (3)
- Elastoplastizität (3)
- Model checking (3)
- Networked Automation Systems (3)
- Netzwerk (3)
- Response-Zeit (3)
- Visualisierung (3)
- hub location (3)
- probabilistic model checking (3)
- wahrscheinlichkeitsbasierte Modellverifikation (3)
- Apoptosis (2)
- Asymptotic Expansion (2)
- Automatisierungssystem (2)
- Computergraphik (2)
- Cyclopeptide (2)
- Darcy’s law (2)
- Elastoplasticity (2)
- Fluoreszenz (2)
- Formalisierung (2)
- GPU (2)
- Kontinuumsmechanik (2)
- Mixture Models (2)
- Modellierung (2)
- Multiresolution Analysis (2)
- Netzbasierte Automatisierungssysteme (2)
- Nichtlineare Finite-Elemente-Methode (2)
- Optionspreistheorie (2)
- Sobolev spaces (2)
- Spline-Approximation (2)
- Synchronisation zyklischer Prozesse (2)
- UML (2)
- Verbundbauweise (2)
- anisotropy (2)
- computational fluid dynamics (2)
- computational mechanics (2)
- continuum mechanics (2)
- cyclopeptides (2)
- distributed control systems (2)
- effective heat conductivity (2)
- finite volume method (2)
- heuristic (2)
- localizing basis (2)
- regularization (2)
- single phase flow (2)
- synchronization of cyclic processes (2)
- verteilte Steuerungen (2)
- 2-d kernel regression (1)
- 3D (1)
- A-infinity-bimodule (1)
- A-infinity-category (1)
- A-infinity-functor (1)
- ANTR5 (1)
- ASM (1)
- Abrasiver Verschleiß (1)
- Abrechnungsmanagement (1)
- Access Points (1)
- Accounting Agent (1)
- Ackerschmalwand (1)
- Acrylamid (1)
- Ad-hoc-Netz (1)
- Adaptive Entzerrung (1)
- Adjazenz-Beziehungen (1)
- Algorithmics (1)
- Alterung (1)
- Analyse (1)
- Anionenerkennung (1)
- Anionenrezeptoren (1)
- Anisotropie (1)
- Anthocyane (1)
- Aroniabeere (1)
- Asymptotic Analysis (2)
- Asymptotische Entwicklung (1)
- Ausfällen (1)
- Ausschnitt <Öffnung> (1)
- Automated Calibration (1)
- Automatische Messung (1)
- Automatisierungstechnik (1)
- Avirulence (1)
- Bayesian Model Averaging (1)
- Bedeutung (1)
- Bemessung (1)
- Benutzer (1)
- Berry fruit juice (1)
- Beschränkte Arithmetik (1)
- Betriebsfestigkeit (1)
- Bettungsmodul (1)
- Biokatalyse (1)
- Biomechanik (1)
- Biorthogonalisation (1)
- Biotechnologie (1)
- Biotrophy (1)
- Bondindizes (1)
- Boolean polynomials (1)
- Bootstrap (1)
- Boundary Value Problem (1)
- Buntsaftkonzentrat (1)
- CIR model (1)
- CSOs treatment (1)
- CUDA (1)
- Caspase (1)
- Cobalt (1)
- Comet Assay (1)
- Composites (1)
- Computational Fluid Dynamics (1)
- Continuum mechanics (1)
- Controlling (1)
- Coq (1)
- Cyclin-abhängige Kinasen (1)
- DCE <Programm> (1)
- DNA-Schädigung (1)
- Damage (1)
- Dampf-flüssig-flüssig-Gleichgewicht (1)
- Datenbank (1)
- Delaunay mesh generation (1)
- DesLaNAS (1)
- Deutschland / Bundesnetzagentur (1)
- Dezentralisation (1)
- Diclofenac (1)
- Diskontinuität (1)
- DoS (1)
- DoS-Attacke (1)
- Downlink (1)
- Drahtloses lokales Netz (1)
- Dreidimensionale Strömung (1)
- Durchlaufträger (1)
- Dynamischer Test (1)
- EMPO (1)
- EPR-Spectroscopy (1)
- Earth's disturbing potential (1)
- Eingebettete Systeme (1)
- Eisen (1)
- Elasticity (1)
- Elastizität (1)
- Elektrizitätsverbrauch (1)
- Elektrizitätsversorgung (1)
- Elektronenspinresonanzspektroskopie (1)
- Embedded software (1)
- Endliche Geometrie (1)
- Energieeinsparung (1)
- Epoxidation (1)
- Epoxide (1)
- Erweiterte Realität <Informatik> (1)
- Ethan (1)
- Ethylen (1)
- Existence of Solutions (1)
- Experiment (1)
- Experimentauswertung (1)
- Facility location (1)
- Farbe (1)
- Farbstabilität (1)
- Farnesylpyrophosphat-Synthase (1)
- Fault Prediction (1)
- Feature (1)
- Fehler (1)
- Feige (1)
- Fertigungslogistik (1)
- Festkörper (1)
- Filterauslegung (1)
- Filterkuchenwiderstand (1)
- Filtermittelwiderstand (1)
- Filterversuche (1)
- Filtration (1)
- Filtrierbarkeit (1)
- Finanzmathematik (1)
- Finite-Elemente-Methode (1)
- Flechten (1)
- Fließgelenk (1)
- Flooding Attack (1)
- Flugzeitmassenspektrometrie (1)
- Fluorescence (1)
- Flüssig-Flüssig-Extraktion (1)
- Flüssig-Flüssig-Gleichgewicht (1)
- Flüssig-Flüssig-System (1)
- Flüssigkeitsreibung (1)
- Fokker-Planck Equation (1)
- Framework <Informatik> (1)
- Funknetz (1)
- Gauge Distances (1)
- Genetische Algorithmen (1)
- Geoinformationssystem (1)
- Geometric Ergodicity (1)
- Geometrical Nonlinear Thermomechanics (1)
- Geostrophic flow (1)
- Gleitlager (1)
- Glycogen-Synthase-Kinase-3 (1)
- Golgi-Apparat (1)
- Google Earth (1)
- Granular flow (1)
- Grapevine Fanleaf Virus (1)
- Gröbner basis (1)
- HOL (1)
- Halogenasen (1)
- Hamilton-Jacobi-Differentialgleichung (1)
- Harmonische Spline-Funktion (1)
- Haustoria (1)
- Helmholtz Type Boundary Value Problems (1)
- Hepatotoxizität (1)
- Hochdrucktechnik (1)
- Homologische Algebra (1)
- Homotopiehochhebungen (1)
- Homotopy lifting (1)
- Hub-and-Spoke-System (1)
- Hydrogel (1)
- Hydrovinylierung (1)
- Hysterese (1)
- IMRT planning (1)
- IP Address (1)
- IP Traffic Accounting (1)
- ITC (1)
- Identifiability (1)
- In vitro (1)
- In vivo (1)
- Indirubin (1)
- Informationslogistik (1)
- Infrarotspektroskopie (1)
- Injectivity of mappings (1)
- Injektivität von Abbildungen (1)
- Innenstadt (1)
- Instrument (1)
- Integer programming (1)
- Intel XScale (1)
- Interfaces (1)
- Internationale Diversifikation (1)
- Inverses Problem (1)
- Isabelle/HOL (1)
- Isopropylacrylamid Natriummethacrylat N-Vinyl-2-pyrrolidon (1)
- Jiang's constitutive model (1)
- Jiangsches konstitutives Gesetz (1)
- Kategorientheorie (1)
- Keramik <T (1)
- Kinetik (1)
- Knapsack (1)
- Kombinatorische Optimierung (1)
- Komplexitätsklasse NP (1)
- Kopolymere (1)
- Kristallisation (1)
- Kultivierung (1)
- Kurve (1)
- Lagerung (1)
- Leberepithelzelle (1)
- Legendre Wavelets (1)
- Liberalisierung (1)
- Lineare Integralgleichung (1)
- Liquid-liquid-equilibrium (1)
- Literaturempirie (1)
- Locally Supported Zonal Kernels (1)
- Location Theory (1)
- Logistik (1)
- Lysozyme (1)
- MIMO (1)
- Machine Scheduling (1)
- Mapping (1)
- Marine Biotechnologie (1)
- Markov Chain (1)
- Maximum-Likelihood (1)
- Mehrdimensionale Spline-Funktion (1)
- Membranprotein (1)
- Messtechnik (1)
- Meter (1)
- Mindesthaltbarkeitsdatum (1)
- Minimal spannender Baum (1)
- Mischwasserbehandlung (1)
- Mixed Reality (1)
- Mobilfunk (1)
- Model Checking (1)
- Modeling (1)
- Modellgetriebene Entwicklung (1)
- Modularisierung (1)
- Molekularstrahl (1)
- Monomer (1)
- Multigenanalyse (1)
- Multipoint flux approximation (1)
- Multiscale problem (1)
- Multiscale problems (1)
- N (1)
- N-Nitroso-verbindungen (1)
- N-isopropyl acrylamide (1)
- N-tridentate Liganden (1)
- NMR und ITC (1)
- NP (1)
- NP-hard (1)
- Natriumsulfat (1)
- Naturfasern (1)
- Naturstoffverteilung (1)
- Navier-Stokes-Brinkmann system of equations (1)
- Nematode (1)
- Nephrotoxizität (1)
- Network design (1)
- Netzbasierte Automatisierungssysteme (NAS) (1)
- Neural networks (1)
- Nichtlineare Mechanik (1)
- Nichtlineare/große Verformungen (1)
- Nitrone (1)
- Nitrones (1)
- Non-homogeneous Poisson Process (1)
- Nonlinear/large deformations (1)
- Nonparametric AR-ARCH (1)
- Nvidia (1)
- OFDM (1)
- Optimale Portfolios (1)
- Optimierung (1)
- Optimization (1)
- Organoblech (1)
- Ornstein-Uhlenbeck Process (1)
- Ovoid (1)
- Oxidation (1)
- PPARgamma (1)
- PTA (1)
- Parameter identification (1)
- Parameteridentifikation (1)
- Pathogenabwehr (1)
- Peng-Robinson-EoS (1)
- Peptide (1)
- Phylogenie (1)
- Phylogeographie (1)
- Plastizität (1)
- Poly( vinyl pyrrolidone) (1)
- Polyme (1)
- Polymere (1)
- Polymerisation (1)
- Polyphenole (1)
- Polyvinylpyrrolidon (1)
- Portfoliomanagement (1)
- Position Sensitive Device (1)
- Price-Cap-Regulierung (1)
- Problemlösung (1)
- Produktionssystem (1)
- Propanole (1)
- Proteine (1)
- Pseudomonas syringae (1)
- Pumpe (1)
- Pyrazole (1)
- Pyrimidin (1)
- Quadratischer Raum (1)
- Quellung (1)
- Quellung in wässrigen Lösungen (1)
- Querkraft (1)
- Querkrafttragfähigkeit (1)
- Quorum Sensing (1)
- RFID (1)
- RNAi (1)
- RNS-Interferenz (1)
- RNS-Viren (1)
- Radial Basis Functions (1)
- Randwertproblem (1)
- Ratenunabhängigkeit (1)
- Ray casting (1)
- Reaktionskinetik (1)
- Regenwasserbehandlung (1)
- Regulierung (1)
- Reibung (1)
- Reliability Prediction (1)
- Repeated-Batch (1)
- Reservierungsprotokoll (1)
- Resistenz (1)
- Retentionsbodenfilter (1)
- Robot Calibration (1)
- Robotics (1)
- Robotik (1)
- Rogue AP (1)
- Rotational Fiber Spinning (1)
- Rote Traube (1)
- Routing (1)
- Rust effector (1)
- Ruthenium (1)
- SDL (1)
- SDZ IMM125 (1)
- Salzlösung (1)
- Schlauchflechten (1)
- Schnittstelle (1)
- Schwarze Johannisbeere (1)
- Schädigung (1)
- Seismische Tomographie (1)
- Sekundärstruktur (1)
- Sensoren (1)
- Sensorik (1)
- Sepsis (1)
- Serre functor (1)
- Siliciumcarbid (1)
- Simplex-Algorithmus (1)
- Simulationsdaten (1)
- Slender body theory (1)
- Smart Production (1)
- Sobolevräume (1)
- Sodium methacrylate (1)
- Software (1)
- Software Engineering (1)
- Softwareentwicklung (1)
- Softwarespezifikation (1)
- Spezifikation (1)
- Spherical Fast Wavelet Transform (1)
- Spherical Wavelets (1)
- Sphärische Approximation (1)
- Spin trapping (1)
- Spline-Interpolation (1)
- Sprachdefinition (1)
- Sprachprofile (1)
- Sprung-Diffusions-Prozesse (1)
- Stadtentwicklung (1)
- Stadtplanung (1)
- Stahlbetonbau (1)
- Stegöffnung (1)
- Stereovision (1)
- Stilbenderivate (1)
- Stilbene derivatives (1)
- Stochastic Differential Equations (1)
- Stochastische Zinsen (1)
- Stochastische dynamische Optimierung (1)
- Stochastischer Automat (1)
- Stop- und Spieloperator (1)
- Streptomyces (1)
- Sulfonaterkennung (1)
- Supply Chain Management (1)
- Supramolekulare Chemie (1)
- Swelling equilibrium in aqueous solution (1)
- System Abstractions (1)
- T cells (1)
- T-Zellen (1)
- Technische Mechanik (1)
- Thermoformen (1)
- Thermomechanische Behandlung (1)
- Thiazolidindione (1)
- Time-Space Multiresolution Analysis (1)
- Toxikologie (1)
- Tragfähigkeit (1)
- Transaktionskosten (1)
- Transferred proteins (1)
- Translation Validation (1)
- Tribologie (1)
- Tryptophan-Halogenasen (1)
- Trägerbohlwand (1)
- Two-Phase System (1)
- UrbanSim (1)
- User Model (1)
- Variationsungleichungen (1)
- Vasicek model (1)
- Vectorfield approximation (1)
- Vektorfeldapproximation (1)
- Verbundträger (1)
- Verbundwerkstoffe (1)
- Verschleißprüfung (1)
- Vinyl-2-pyrrolidon (1)
- Virusübertragung (1)
- Viscous Fibers (1)
- Viskoelastizität (1)
- Volatilität (1)
- Völklingen (1)
- Wasserstoff-ATPase (1)
- Wave Based Method (1)
- Weibull (1)
- Weinrebe (1)
- Wireless Networks (1)
- Wissensarten (1)
- Wohnen (1)
- Wurzelreaktion (1)
- Wärmeleitung (1)
- Wässrige Lösung (1)
- Xiphinema index (1)
- Zugriffskonflikte (1)
- Zweiphasensysteme (1)
- a-priori domain decomposition (1)
- access conflict (1)
- adjacency relations (1)
- affine arithmetic (1)
- aktive Netzbetreiber (1)
- algebraic cryptoanalysis (1)
- algorithm (1)
- algorithm by Bortfeld and Boyer (1)
- alpha (1)
- anion recognition (1)
- anionic receptors (1)
- anisotropic plasticity (1)
- anthocyanins (1)
- apoptosis (1)
- beta-ungesättigte Carbonylverbindungen (1)
- big triangle small triangle method (1)
- binarization (1)
- biochemical characterisation (1)
- biomechanics (1)
- bocses (1)
- boundary value problem (1)
- bounds (1)
- cake filtration (1)
- chemically crosslinked hydrogels (1)
- chemisch vernetzte Hydrogele (1)
- colour stability (1)
- combinatorial optimization (1)
- compiler (1)
- composite beam (1)
- computer graphics (1)
- conditional quantile (1)
- configurational mechanics (1)
- consistency (1)
- convergence (1)
- convex (1)
- convex optimization (1)
- curved viscous fibers with surface tension (1)
- curves and surfaces (1)
- cut (1)
- cut basis problem (1)
- data structure (1)
- data-adaptive bandwidth choice (1)
- decomposition (1)
- defect detection (1)
- deflections of the vertical (1)
- degenerations of an elliptic curve (1)
- density gradient equation (1)
- discontinuous coefficients (1)
- discriminant analysis (1)
- domain decomposition (1)
- elastoplasticity (1)
- electroporation (1)
- elliptic equation (1)
- energy consumption (1)
- esterases (1)
- experiment (1)
- fibrous insulation materials (1)
- filter media resistance (1)
- filtration (1)
- finite deformations (1)
- finite elements (1)
- finite-volume method (1)
- flow visualization (1)
- fluid structure (1)
- fluorescence (1)
- formal verification (1)
- framework (1)
- free boundary value problem (1)
- functional Hilbert space (1)
- fundamental cut (1)
- gene silencing (1)
- genetic algorithms (1)
- global optimization (1)
- graph and network algorithm (1)
- growth and remodelling (1)
- hPRT-Genmutations-Assay (1)
- halogenases (1)
- heterogeneous porous media (1)
- high-pressure vapour-liquid-liquid equilibria (1)
- hydrodynamische Injektion (1)
- hyperbolic systems (1)
- hysteresis (1)
- image analysis (1)
- image denoising (1)
- image processing (1)
- image segmentation (1)
- infrared spectroscopy (1)
- integer programming (1)
- intensity maps (1)
- intensity modulated radiotherapy planning (1)
- interactive multi-objective optimization (1)
- interface problem (1)
- interference resistance (1)
- interval arithmetic (1)
- inverse problem (1)
- inverse problems (1)
- iterative bandwidth choice (1)
- kernel estimate (1)
- kernel function (1)
- knapsack (1)
- language definition (1)
- language profiles (1)
- lattice Boltzmann (1)
- lichens (1)
- lipases (1)
- liquid-liquid-extraction of natural products (1)
- local approximation of sea surface topography (1)
- local bandwidths (1)
- local multiscale (1)
- localization (1)
- locally supported (Green's) vector wavelets (2)
- lokalisierende Basis (1)
- loss of information (1)
- marine biotechnology (1)
- matrix problems (1)
- measurement (1)
- metal foams (1)
- minimal spanning tree (1)
- minimum cost flows (1)
- minimum fundamental cut basis (1)
- model (1)
- modularisation (1)
- molecular beam (1)
- multi-gene analysis (1)
- multicategory (1)
- multidimensional datasets (1)
- multigrid (1)
- multigrid method (1)
- multinomial regression (1)
- multiplicative decomposition (1)
- multiplikative Zerlegung (1)
- multiscale problem (1)
- nahekritischer Zustandsbereich (1)
- naturfaserverstärkte Kunststoffe (1)
- near-critical ethene+water+propanol (1)
- network flows (1)
- neural network (1)
- non-linear optimization (1)
- non-overlapping constraints (1)
- nonlinear diffusion filtering (1)
- nonlinear finite element method (1)
- numerics (1)
- numerische Mechanik (1)
- ordered median (1)
- oscillating coefficients (1)
- paper machine (1)
- penalization (1)
- peptide (1)
- permeability of fractured porous media (1)
- phase equilibria (1)
- phylogeny (1)
- phylogeography (1)
- planar location (1)
- polyhedral analysis (1)
- polyphenols (1)
- poroelasticity (1)
- porous media (1)
- precipitation (1)
- preconditioner (1)
- probabilistic timed automata (1)
- protein (1)
- qualitative threshold model (1)
- quorum sensing (1)
- rate-independency (1)
- ray casting (1)
- ray tracing (1)
- rectangular packing (1)
- reinforced thermoplastics (1)
- repeated batch cultivation (1)
- reproducing kernel (1)
- retention soil filter (1)
- root-reactions (1)
- salt (1)
- satisfiability (1)
- secondary structure (1)
- seismic tomography (1)
- sensors (1)
- sepsis (1)
- sequences (1)
- shear bearing capacity (1)
- siRNA (1)
- sieve estimate (1)
- simplex algorithm (1)
- smoothing (1)
- smoothness (1)
- sodium sulfate (1)
- spherical approximation (1)
- spin trapping (1)
- splines (1)
- stop- and play-operator (1)
- storage (1)
- sulfonate recognition (1)
- supramolecular chemistry (1)
- textile quality control (1)
- texture classification (1)
- theorem prover (1)
- theory of materials (1)
- thiazolidinediones (1)
- time of flight mass spectrometry (1)
- time series (1)
- time-varying flow fields (1)
- topological asymptotic expansion (1)
- topological incongruence (1)
- topologische Inkongruenz (1)
- transfection (1)
- translation validation (1)
- transportation (1)
- two-grid algorithm (1)
- uncapacitated facility location (1)
- unstructured grid (1)
- urban planning (1)
- variational inequalities (1)
- vector bundles (1)
- vector field visualization (1)
- virus-transmission (1)
- viscoelasticity (1)
- visualization (1)
- water (1)
- web opening (1)
- wild bootstrap test (1)
- zeitabhängige Strömungen (1)
- zoledronic acid (1)
Faculty / Organisational entity
- Kaiserslautern - Fachbereich Mathematik (34)
- Fraunhofer (ITWM) (28)
- Kaiserslautern - Fachbereich Maschinenbau und Verfahrenstechnik (20)
- Kaiserslautern - Fachbereich Informatik (19)
- Kaiserslautern - Fachbereich Sozialwissenschaften (10)
- Kaiserslautern - Fachbereich Chemie (9)
- Kaiserslautern - Fachbereich ARUBI (6)
- Kaiserslautern - Fachbereich Biologie (6)
- Kaiserslautern - Fachbereich Elektrotechnik und Informationstechnik (5)
- Kaiserslautern - Fachbereich Wirtschaftswissenschaften (5)
The main aim of this work was to obtain an approximate solution of seismic traveltime tomography problems with the help of splines based on reproducing-kernel Sobolev spaces. In order to apply the spline approximation concept to surface-wave as well as body-wave tomography problems, the spherical spline approximation concept was extended to the case where the domain of the function to be approximated is an arbitrary compact set in R^n and a finite number of discontinuity points is allowed. We present applications of this spline method to seismic surface-wave as well as body-wave tomography and discuss the theoretical and numerical aspects of these applications. Moreover, we ran numerous numerical tests that corroborate the theoretical considerations.
In this paper we construct spline functions based on a reproducing kernel Hilbert space to interpolate/approximate the velocity field of earthquake waves inside the Earth from traveltime data for an inhomogeneous grid of sources (hypocenters) and receivers (seismic stations). We discuss theoretical aspects, including error estimates and convergence results, and demonstrate numerical results.
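The interpolation idea shared by both works can be illustrated with a toy one-dimensional sketch (my own construction, not the authors' implementation): in a reproducing kernel Hilbert space, the minimum-norm interpolant of scattered data is a kernel expansion whose coefficients solve the Gram system. Here the kernel K(x, y) = exp(-|x - y|)/2 reproduces the Sobolev space H^1(R); the function names are hypothetical.

```python
import numpy as np

def sobolev_kernel(x, y):
    # Reproducing kernel of the Sobolev space H^1(R): K(x, y) = exp(-|x-y|)/2.
    return 0.5 * np.exp(-np.abs(x - y))

def spline_interpolant(nodes, values):
    # Solve the Gram system K c = y; the resulting kernel expansion is the
    # minimum-norm function in the RKHS that matches the data exactly.
    K = sobolev_kernel(nodes[:, None], nodes[None, :])
    coeffs = np.linalg.solve(K, values)

    def s(x):
        # Evaluate sum_j c_j K(x, node_j) at arbitrary points x.
        return sobolev_kernel(np.asarray(x)[..., None], nodes[None, :]) @ coeffs

    return s
```

In the tomography setting the point evaluations are replaced by traveltime integrals, but the structure (assemble a Gram matrix from the kernel, solve a linear system) is the same.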
Thermoelasticity represents the fusion of the fields of heat conduction and elasticity in solids and is usually characterized by a twofold coupling: thermally induced stresses can be determined as well as temperature changes caused by deformations. Studying this mutual influence is the subject of thermoelasticity. Usually, heat conduction in solids is based on Fourier's law, which describes a diffusive process. It predicts an unphysically infinite transmission speed for parts of local heat pulses. At room temperature, for example, these parts are strongly damped; thus, in these cases most engineering applications are described satisfactorily by the classical theory. However, in some situations the predictions according to Fourier's law fail miserably. One of these situations occurs at temperatures near absolute zero, where the phenomenon of second sound was discovered in the 20th century. Consequently, non-classical theories have experienced great research interest during recent decades. Throughout this thesis, the expression "non-classical" refers to the fact that the constitutive equation of the heat flux is not based on Fourier's law, which hypothesizes that the heat flux is proportional to the temperature gradient. A new thermoelastic theory, on the one hand, needs to be consistent with classical thermoelastodynamics and, on the other hand, needs to describe second sound accurately. Hence, during the second half of the last century the traditional parabolic heat equation was replaced by a hyperbolic one. Its coupling with elasticity leads to non-classical thermomechanics, which allows the modeling of second sound, provides a passage to the classical theory and additionally overcomes the paradox of infinite wave speed.
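The two constitutive laws contrasted here can be written down explicitly. With Fourier's law the heat flux follows the temperature gradient instantaneously, yielding a parabolic equation; the Maxwell-Cattaneo law (one common non-classical choice, shown here only as an illustration, not necessarily the one adopted in the thesis) adds a relaxation time and yields a hyperbolic telegraph equation with finite wave speed:

```latex
\begin{align*}
\text{Fourier:} &&
\mathbf{q} &= -\kappa\,\nabla\theta
&&\Rightarrow\quad \dot\theta = \alpha\,\Delta\theta
\quad\text{(parabolic, infinite speed)}\\
\text{Maxwell--Cattaneo:} &&
\tau\,\dot{\mathbf{q}} + \mathbf{q} &= -\kappa\,\nabla\theta
&&\Rightarrow\quad \tau\,\ddot\theta + \dot\theta = \alpha\,\Delta\theta
\quad\text{(hyperbolic)}
\end{align*}
```

Here $\alpha = \kappa/(\rho c_p)$ is the thermal diffusivity; the hyperbolic form propagates thermal signals at the finite speed $c = \sqrt{\alpha/\tau}$, and it reduces to the classical equation as $\tau \to 0$.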
Although much effort is put into non-classical theories, the thermoelastodynamic community has not yet agreed on one approach, and systematic research is going on worldwide. Computational methods play an important role in solving thermoelastic problems in the engineering sciences, usually owing to the complex structure of the equations at hand. This thesis aims at establishing a basic theory and numerical treatment of non-classical thermoelasticity (rather than dealing with special cases). The finite element method is already widely accepted in the field of structural solid mechanics and enjoys growing significance in thermal analyses. This approach resorts to a finite element method in space as well as in time.
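The thesis develops a space-time finite element treatment; as a far simpler numerical illustration of hyperbolic heat conduction, the sketch below integrates the one-dimensional telegraph equation tau*u_tt + u_t = alpha*u_xx with an explicit finite-difference scheme (scheme, names and parameters are my own choices, not taken from the thesis). The three-point stencil propagates information at finite speed: after n steps a heat pulse can have reached at most n grid cells.

```python
import numpy as np

def cattaneo_step(theta, theta_prev, tau, alpha, dx, dt):
    """One explicit step for tau*u_tt + u_t = alpha*u_xx (Dirichlet ends)."""
    lap = np.zeros_like(theta)
    lap[1:-1] = (theta[2:] - 2 * theta[1:-1] + theta[:-2]) / dx**2
    a = tau / dt**2 + 1.0 / (2 * dt)   # coefficient of theta^{n+1}
    b = tau / dt**2 - 1.0 / (2 * dt)   # coefficient of theta^{n-1}
    theta_next = (alpha * lap + 2 * tau / dt**2 * theta - b * theta_prev) / a
    theta_next[0] = theta_next[-1] = 0.0
    return theta_next

def simulate(n=201, steps=10, tau=1.0, alpha=1.0, dx=0.01, dt=0.005):
    # CFL condition: sqrt(alpha/tau) * dt / dx = 0.5 <= 1, so the scheme is stable.
    theta = np.zeros(n)
    theta[n // 2] = 1.0            # initial heat pulse in the middle
    theta_prev = theta.copy()      # zero initial rate
    for _ in range(steps):
        theta, theta_prev = cattaneo_step(theta, theta_prev, tau, alpha, dx, dt), theta
    return theta
```

After 10 steps the pulse has influenced at most 10 cells on either side of the center; nodes farther away are still exactly zero, in contrast to a parabolic scheme, where every node is touched after a single step.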
The nowadays increasing number of fields in which large quantities of data are collected generates an emergent demand for methods of extracting relevant information from huge databases. Among the various existing data mining models, decision trees are widely used since they represent a good trade-off between accuracy and interpretability. However, one of their main problems is that they are very unstable, which complicates the process of knowledge discovery because users are disturbed by the different decision trees generated from almost the same input learning samples. In the current work, binary tree classifiers are analyzed and partially improved. The analysis of tree classifiers ranges from their topology, from the graph-theoretic point of view, to the creation of a new tree classification model that combines decision trees with soft comparison operators (Mlynski, 2003), with the purpose not only of overcoming the well-known instability problem of decision trees, but also of conferring the ability to deal with uncertainty. In order to study and compare the structural stability of tree classifiers, we propose an instability coefficient which is based on the notion of Lipschitz continuity, and we offer a metric to measure the proximity between decision trees. This thesis converges towards its main part with the presentation of our model, the "Soft Operators Decision Tree" (SODT). Mainly, we describe its construction, its application and the consistency of the mathematical formulation behind it. Finally, we show the results of the implementation of SODT and numerically compare the stability and accuracy of a SODT and a crisp DT. The numerical simulations support the stability hypothesis, and a smaller tendency to overfit the training data is observed with SODT than with a crisp DT.
A further aspect of this inclusion of soft operators is that we choose them in such a way that the resulting goodness function (used by this method) is differentiable and thus allows the best split points to be calculated by means of gradient descent methods. The main drawback of SODT is the incorporation of the imprecision factor, which increases the complexity of the algorithm.
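The contrast between crisp and soft node tests can be sketched in a few lines (my own minimal illustration; the actual SODT operators follow Mlynski, 2003): a crisp test is a step function of the attribute value, while a sigmoid "soft" test yields a smooth, differentiable branch membership, so a tiny input perturbation no longer flips the assignment and split points become amenable to gradient descent.

```python
import math

def crisp_split(x, c):
    """Classical decision-tree test: hard 0/1 membership in the right branch."""
    return 1.0 if x > c else 0.0

def soft_split(x, c, s=1.0):
    """Sigmoid comparison operator: smooth membership in the right branch.
    s controls the width of the transition zone; as s -> 0 the crisp
    split is recovered.  d(soft_split)/dc = -mu*(1-mu)/s, so a goodness
    function built from these memberships is differentiable in c."""
    return 1.0 / (1.0 + math.exp(-(x - c) / s))
```

For two inputs straddling the threshold, the crisp memberships jump from 0 to 1 while the soft memberships differ only marginally, which is exactly the stability property the thesis is after.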
The provision of quality-of-service (QoS) on the network layer is a major challenge in communication networks. This applies particularly to mobile ad-hoc networks (MANETs) in the area of Ambient Intelligence (AmI), especially with the increasing use of delay- and bandwidth-sensitive applications. This survey focuses on the classification and analysis of selected QoS routing protocols in the domain of mobile ad-hoc networks. Each protocol is briefly described and assessed, and the results are summarized in several tables.
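None of the surveyed protocols is reproduced here, but the core computational task most of them share can be sketched: find a least-delay route using only links that satisfy a bandwidth requirement. The following is a minimal centralized version (real MANET protocols solve this in a distributed fashion; graph layout and names are hypothetical):

```python
import heapq

def qos_route(links, src, dst, min_bw):
    """Least-delay path over links meeting a bandwidth constraint.
    links: dict mapping node -> list of (neighbor, delay, bandwidth)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, delay, bw in links.get(u, []):
            if bw < min_bw:               # QoS pruning: drop inadequate links
                continue
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None                       # no admissible route
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```

Tightening the bandwidth requirement forces the route onto slower but wider links, which is the trade-off QoS routing protocols negotiate.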
Companies today face global competition and major challenges, such as short product life cycles or high demands on process and output reliability. Various data- and technology-driven as well as process-oriented approaches to production design attempt to develop solutions for production systems in order to master these challenges and survive in global competition. Each individual direction of production design has its own advantages and deficits. "Smart Production Systems" merge the basic ideas of the individual approaches, which were previously considered incompatible. In this way, individual deficits can be compensated without giving up the known advantages. The basic idea of "Smart Production Systems" is to use "knowledge-incorporating objects" in production processes so that the workflows are traceable at all times and the associated production system is thus safe, efficient and flexible. Knowledge-incorporating objects are production objects that, in addition to their actual function, possess information-technology functions by means of which they can store data and make it available again. The use of knowledge-incorporating objects in "Smart Production Systems" can be analyzed, planned, conceptualized, evaluated, implemented, used and secured in a structured way with the help of the "Smart Cube" design cube. The "Smart Cube" concept is described by an organization model, a reference model, an implementation model and several specific application models: 1) As the starting point of "Smart Production Systems", the organization model explains the position of the information flow and the material flow as well as the relationship between data volume and decision range within a production organization.
2) The reference model is built from three spatial axes: effect structure, object and process structure, and information structure. It fulfils the following tasks: - It describes the basic characteristics of effect, object and process, and information structures and their possible combinations within production systems. - It explains the relationships between technology, logistics and communication complexity and thus makes it possible to place specific production systems within the reference model. - With the various coordinates of the individual axes, it provides a systematic scheme for naming the possible combinations, which makes it possible to derive application models from the reference model. 3) The application models concretize the reference model with regard to the objects and processes contained in a production system. Owing to the possible combinations of the characteristics of the individual structures in the reference model, 27 different application models can be formed. 4) The implementation model describes the procedure for designing production systems with the help of the "Smart Cube" design cube. Using the example of a redesign project at a company from the automotive supplier industry, it is shown how the use of knowledge-incorporating objects can be designed systematically with the help of the developed "Smart Cube" concept: - By turning kanban cards into knowledge-incorporating objects, the manual booking effort can be minimized and the share of non-value-adding activities kept small. - By using knowledge-incorporating objects in a factory system, trade-offs between safety and efficiency can be resolved.
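The count of 27 application models follows from combinatorics: three axes with three characteristic values each give 3 x 3 x 3 combinations. The level names below are placeholders (the abstract names the axes but not their individual values), used only to show how the enumeration arises:

```python
from itertools import product

# The three axes of the reference model; the three levels per axis are
# hypothetical labels, since the abstract does not name them.
effect_structure = ["E1", "E2", "E3"]
object_process_structure = ["O1", "O2", "O3"]
information_structure = ["I1", "I2", "I3"]

# Every application model is one coordinate triple of the design cube.
application_models = list(product(effect_structure,
                                  object_process_structure,
                                  information_structure))
```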
Calibration of robots has become a research field of great importance over the last decades, especially in the field of industrial robotics. The main reason for this is that the field of application has been significantly broadened by an increasing number of fully automated or robot-assisted tasks. These applications require a significantly higher level of accuracy owing to the more delicate tasks to be fulfilled (e.g. assembly in the semiconductor industry or robot-assisted medical surgery). In the past, (industrial) robot calibration had to be performed manually for every single robot under lab conditions in a long and cost-intensive process: expensive and complex measurement systems had to be operated by highly trained personnel. The result of this process is a set of measurements representing the robot pose in the task space (i.e. the world coordinate system) as well as the joint encoder values. To determine the deviation, the robot pose indicated by the internal joint encoder values has to be compared to the physical pose (i.e. the external measurement data). Hence, the errors in the kinematic model of the robot can be computed and later compensated. These errors are inevitable, being caused by varying manufacturing tolerances and other sources of error (e.g. friction and deflection), and they have to be compensated in order to achieve sufficient accuracy for the given tasks. Furthermore, for performance, maintenance or quality assurance reasons, the robots may have to undergo the calibration process at regular time intervals to monitor and compensate, e.g., ageing effects such as wear and tear. In modern production processes, old-fashioned procedures like the one mentioned above are no longer suitable. Therefore a new method has to be found that is less time-consuming, more cost-effective, and involves less (or, in the long term, even no) human interaction in the calibration process.
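The comparison of encoder-predicted and externally measured poses can be made concrete with a toy instance (my own illustration; real calibration identifies many kinematic parameters of a spatial robot, usually nonlinearly): for a planar two-link arm, the end-effector position is linear in the link lengths, so the true lengths can be identified from a few measured poses by ordinary least squares.

```python
import numpy as np

def fk(l1, l2, q1, q2):
    """Forward kinematics of a planar two-link arm (end-effector x, y)."""
    return np.array([l1 * np.cos(q1) + l2 * np.cos(q1 + q2),
                     l1 * np.sin(q1) + l2 * np.sin(q1 + q2)])

def calibrate(joint_samples, measured_xy):
    """Identify the actual link lengths from external pose measurements.
    Each measurement contributes two rows; the planar model is linear in
    (l1, l2), so ordinary least squares recovers them directly."""
    rows, rhs = [], []
    for (q1, q2), (x, y) in zip(joint_samples, measured_xy):
        rows.append([np.cos(q1), np.cos(q1 + q2)])
        rows.append([np.sin(q1), np.sin(q1 + q2)])
        rhs.extend([x, y])
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol  # estimated (l1, l2)
```

The identified lengths replace the nominal ones in the controller's model, which is exactly the compensation step described above.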
Guaranteeing the correctness of compilation is a major precondition for correct software. Code generation can be one of the most error-prone tasks in a compiler. One way to achieve trusted compilation is certifying compilation: a certifying compiler generates, for each run, a proof that it has performed the compilation run correctly. The proof is checked in a separate theorem prover. If the theorem prover accepts the proof, one can be sure that the compiler produced correct code. This paper presents a certifying code generation phase for a compiler translating an intermediate language into assembler code. The time spent checking the proofs is the bottleneck of certifying compilation. We exhibit an improved framework for certifying compilation and considerable advances in overcoming this bottleneck. We compare our implementation, featuring the Coq theorem prover, to an older implementation. Our current implementation is feasible for medium- to large-sized programs.
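The per-run certificate idea can be miniaturized (illustration only; the paper's compiler emits proofs checked by the Coq theorem prover, and nothing below is taken from that implementation): a tiny "compiler" pass folds a constant expression and records a certificate for this particular run, and a small independent checker validates it, so trust rests on the checker rather than on the pass.

```python
def fold(expr):
    """'Compile' a constant expression (op, a, b), op in {'+', '*'}, by
    constant folding, emitting a certificate for this particular run."""
    op, a, b = expr
    value = a + b if op == "+" else a * b
    return value, {"input": expr, "output": value}

def check(certificate):
    """Independent checker: re-evaluate the recorded input and compare it
    with the recorded output.  Trust rests only on this small function."""
    op, a, b = certificate["input"]
    expected = a + b if op == "+" else a * b
    return expected == certificate["output"]
```

A miscompiled (tampered) certificate is rejected, while the compiler itself never has to be verified once and for all.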
Abstraction is used intensively in the verification of large, complex or infinite-state systems. As abstractions get more complex, it is often difficult to see whether they are valid. However, for using abstraction in model checking it has to be ensured that properties are preserved. In this paper, we use a translation validation approach to verify property preservation of system abstractions. We formulate a correctness criterion, based on simulation between the concrete and the abstract system, for a property to be verified. For each distinct run of the abstraction procedure, correctness is verified in the theorem prover Isabelle/HOL. This technique is applied in the verification of embedded adaptive systems. This paper is an extended version of a previously published work.
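The simulation criterion itself is easy to state executably (a generic sketch of the standard simulation condition, not the paper's Isabelle/HOL formalization): a relation R between concrete and abstract states is a simulation if every concrete step from a related pair can be matched by an abstract step that lands in a related pair again.

```python
def is_simulation(relation, concrete_trans, abstract_trans):
    """Check the simulation condition: for every related pair (c, a) and
    every concrete step c -> c2, the abstract system can answer with a
    step a -> a2 such that (c2, a2) is again in the relation."""
    for c, a in relation:
        for c2 in concrete_trans.get(c, set()):
            if not any((c2, a2) in relation
                       for a2 in abstract_trans.get(a, set())):
                return False
    return True
```

For example, a counter modulo 4 is simulated by a two-state parity abstraction via the relation pairing each counter value with its parity; removing one pair from the relation breaks the condition.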
In this thesis we classify simple coherent sheaves on Kodaira fibers of types II, III and IV (cuspidal and tacnode cubic curves and a plane configuration of three concurrent lines). Indecomposable vector bundles on smooth elliptic curves were classified in 1957 by Atiyah. In works of Burban, Drozd and Greuel it was shown that the categories of vector bundles and coherent sheaves on cycles of projective lines are tame. It turns out that all other degenerations of elliptic curves are vector-bundle-wild. Nevertheless, we prove that the category of coherent sheaves of an arbitrary reduced plane cubic curve (including the mentioned Kodaira fibers) is brick-tame. The main technical tool of our approach is the representation theory of bocses. Although this technique has mainly been used for purely theoretical purposes, we illustrate its computational potential for investigating tame behavior in wild categories. In particular, it allows one to prove that a simple vector bundle on a reduced cubic curve is determined by its rank, multidegree and determinant, generalizing Atiyah's classification. Our approach leads to an interesting class of bocses, which can be wild but are brick-tame.
In this paper, a stochastic model [5] for the turbulent fiber laydown in the industrial production of nonwoven materials is extended by including a moving conveyor belt. In the hydrodynamic limit corresponding to large noise values, the transient and stationary joint probability distributions are determined using the method of multiple scales and the Chapman-Enskog method. Moreover, exponential convergence towards the stationary solution is proven for the reduced problem. For special choices of the industrial parameters, the stochastic limit process is an Ornstein-Uhlenbeck process. It is a good approximation of the fiber motion even for moderate noise values. Moreover, as shown by Monte Carlo simulations, the limiting process can be used to assess the quality of nonwoven materials in the industrial application by determining distributions of functionals of the process.
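The Ornstein-Uhlenbeck limit process mentioned in the abstract can be sampled with a few lines of code. The following is a minimal sketch, not taken from the paper: an Euler-Maruyama discretization of dX = θ(μ − X) dt + σ dW with illustrative parameters; for this process the stationary variance is σ²/(2θ), which a long Monte Carlo run should reproduce.

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama path of dX = theta*(mu - X) dt + sigma dW."""
    x = x0
    path = [x]
    for _ in range(n_steps):
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

rng = random.Random(42)
# Illustrative parameters: stationary variance sigma^2 / (2*theta) = 0.5
path = simulate_ou(theta=1.0, mu=0.0, sigma=1.0, x0=0.0, dt=0.01,
                   n_steps=200_000, rng=rng)
tail = path[len(path) // 2:]          # discard transient half
mean = sum(tail) / len(tail)
var = sum((v - mean) ** 2 for v in tail) / len(tail)
```

Empirical mean and variance of the tail should be close to 0 and 0.5, illustrating how distributions of functionals of the limit process can be estimated by simulation.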
Embedded systems have become ubiquitous in everyday life, and especially in the automotive industry. New applications challenge their design by introducing a new class of problems that are based on a detailed analysis of the environmental situation. Situation analysis systems rely on models and algorithms from the domain of computational geometry. The basic model is usually a Euclidean plane, which contains polygons to represent the objects of the environment. Usual implementations of computational geometry algorithms cannot be directly used for safety-critical systems. First, a strict analysis of their correctness is indispensable, and second, nonfunctional requirements with respect to the limited resources must be considered. This thesis proposes a layered approach to a polygon-processing system. On top of rational numbers, a geometry kernel is formalised at first. Subsequently, geometric primitives form a second layer of abstraction that is used for plane sweep and polygon algorithms. These layers not only divide the whole system into manageable parts but also make it possible to model problems and reason about them at the appropriate level of abstraction. This structure is used for the verification as well as the implementation of the developed polygon-processing library.
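The layering described above (rational numbers, then a geometry kernel, then primitives) can be illustrated by a small sketch. This is an assumption-laden toy, not the thesis's verified library: an exact orientation predicate over `fractions.Fraction` coordinates, the kind of primitive on which plane-sweep and polygon algorithms are built.

```python
from fractions import Fraction

def orientation(p, q, r):
    """Exact orientation predicate over rational coordinates:
    +1 if p->q->r is a left turn, -1 if a right turn, 0 if collinear.
    With Fraction coordinates the sign of the determinant is exact."""
    det = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (det > 0) - (det < 0)

# Rational coordinates avoid the rounding errors of floating point:
a = (Fraction(0), Fraction(0))
b = (Fraction(1), Fraction(1))
c = (Fraction(1, 3), Fraction(1, 3))      # exactly on the segment a-b
left = orientation(a, b, (Fraction(0), Fraction(1)))
on = orientation(a, b, c)
```

With floating-point coordinates the collinear case can be misclassified; over the rationals the predicate is decided exactly, which is why such a kernel is a sensible foundation for a safety-critical geometry library.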
This work presents a new framework for Gröbner basis computations with Boolean polynomials. Boolean polynomials can be modeled in a rather simple way, with both coefficients and degree per variable lying in {0, 1}. The ring of Boolean polynomials is, however, not a polynomial ring, but rather the quotient ring of the polynomial ring over the field with two elements modulo the field equations x^2 = x for each variable x. Therefore, the usual polynomial data structures seem not to be appropriate for fast Gröbner basis computations. We introduce a specialized data structure for Boolean polynomials based on zero-suppressed binary decision diagrams (ZDDs), which is capable of handling these polynomials more efficiently with respect to memory consumption and also computational speed. Furthermore, we concentrate on high-level algorithmic aspects, taking into account the new data structures as well as structural properties of Boolean polynomials. For example, a new useless-pair criterion for Gröbner basis computations in Boolean rings is introduced. One of the motivations for our work is the growing importance of formal hardware and software verification based on Boolean expressions, which suffers, besides from the complexity of the problems themselves, from the lack of an adequate treatment of arithmetic components. We are convinced that algebraic methods are better suited, and we believe that our preliminary implementation shows that Gröbner bases on specific data structures are capable of handling problems of industrial size.
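The arithmetic of the quotient ring can be sketched in a few lines. The following toy uses plain Python sets of monomials rather than the ZDD structure of the thesis: GF(2) coefficients make addition a symmetric difference (monomials appearing twice cancel), and the field equations x^2 = x make multiplication of monomials a set union of their variables.

```python
# A Boolean polynomial as a set of monomials; a monomial is a frozenset
# of variable names. This models GF(2)[x,...]/(x^2 - x, ...), not the
# ZDD-based representation of the thesis.

def add(p, q):
    """Addition over GF(2): monomials appearing in both summands cancel."""
    return p ^ q

def mul(p, q):
    """Multiplication modulo the field equations x^2 = x."""
    result = set()
    for m1 in p:
        for m2 in q:
            result ^= {m1 | m2}   # x*x = x; repeated monomials cancel
    return result

x = {frozenset({'x'})}
one = {frozenset()}
# In the quotient ring, x*(x + 1) = x^2 + x = x + x = 0:
zero = mul(x, add(x, one))
# and (x + 1)^2 = x + 1, reflecting idempotence:
sq = mul(add(x, one), add(x, one))
```

Every element of the ring is idempotent, which is exactly the structural property (together with cancellation over GF(2)) that ZDDs exploit to share subterms compactly.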
Given an undirected, connected network G = (V,E) with weights on the edges, the cut basis problem asks for a maximal set of linearly independent cuts such that the sum of the cut weights is minimized. Surprisingly, this problem has not attracted as much attention as its graph-theoretic counterpart, the cycle basis problem. We consider two versions of the problem, the unconstrained and the fundamental cut basis problem. For the unconstrained case, where the cuts in the basis can be of an arbitrary kind, the problem can be written as a multiterminal network flow problem and is thus solvable in strongly polynomial time. The complexity of this algorithm improves on the complexity of the best algorithms for the cycle basis problem, so that it is preferable for cycle basis problems in planar graphs. In contrast, the fundamental cut basis problem, where each cut in the basis is obtained by deleting a single edge from a spanning tree T, is shown to be NP-hard. We present heuristics and integer programming formulations and summarize first experiences with numerical tests.
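The fundamental cut basis construction can be made concrete with a short sketch (illustrative code, not the paper's algorithms): removing one edge of a spanning tree splits the tree into two vertex sets, and the fundamental cut consists of all graph edges crossing that split.

```python
from collections import defaultdict, deque

def fundamental_cuts(edges, tree_edges):
    """Fundamental cut basis induced by a spanning tree: for each tree
    edge, the set of graph edges crossing the split it creates."""
    cuts = []
    for (a, b) in tree_edges:
        # BFS component of `a` in the tree with edge (a, b) removed.
        adj = defaultdict(list)
        for (u, v) in tree_edges:
            if (u, v) != (a, b):
                adj[u].append(v)
                adj[v].append(u)
        side, queue = {a}, deque([a])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in side:
                    side.add(v)
                    queue.append(v)
        # Edges with exactly one endpoint on `side` form the cut.
        cuts.append({(u, v) for (u, v) in edges
                     if (u in side) != (v in side)})
    return cuts

# 4-cycle 0-1-2-3-0 with spanning tree {01, 12, 23}:
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
cuts = fundamental_cuts(edges, [(0, 1), (1, 2), (2, 3)])
```

The NP-hardness result concerns choosing the spanning tree T so that the total weight of these |V|-1 cuts is minimal; the construction per tree, as above, is easy.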
The lattice Boltzmann method (LBM) is a numerical solver for the Navier-Stokes equations, based on an underlying molecular dynamic model. Recently, it has been extended towards the simulation of complex fluids. We use the asymptotic expansion technique to investigate the standard scheme, the initialization problem and possible developments towards moving boundary and fluid-structure interaction problems. At the same time, it will be shown how the mathematical analysis can be used to understand and improve the algorithm. First of all, we elaborate the tool "asymptotic analysis", proposing a general formulation of the technique and explaining the methods and the strategy we use for the investigation. A first standard application to the LBM is described, which leads to the approximation of the Navier-Stokes solution starting from the lattice Boltzmann equation. Next, we extend the analysis to investigate the origin and dynamics of initial layers. A class of initialization algorithms to generate accurate initial values within the LB framework is described in detail. Starting from existing routines, we are able to improve the schemes in terms of efficiency and accuracy. Then we study the features of a simple moving boundary LBM. In particular, we concentrate on the initialization of new fluid nodes created by the variations of the computational fluid domain. An overview of existing possible choices is presented. Performing a careful analysis of the problem, we propose a modified algorithm, which produces satisfactory results. Finally, to set up an LBM for fluid-structure interaction, efficient routines to evaluate forces are required. We describe the momentum exchange algorithm (MEA). Precise accuracy estimates are derived, and the analysis leads to the construction of an improved method to evaluate the interface stresses. In conclusion, we test the defined code and validate the results of the analysis on several simple benchmarks.
From the theoretical point of view, in the thesis we have developed a general formulation of the asymptotic expansion, which is expected to offer a more flexible tool in the investigation of numerical methods. The main practical contribution offered by this work is the detailed analysis of the numerical method. It allows us to understand and improve the algorithms and to construct new routines, which can be considered as starting points for future research.
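The standard scheme analysed in the thesis can be given in miniature. The sketch below is illustrative only (grid size, relaxation time and initial data are assumptions): one BGK collide-and-stream step of a D2Q9 lattice Boltzmann solver on a fully periodic grid. Collision relaxes the populations towards a local equilibrium and conserves density and momentum per node; streaming shifts each population along its lattice velocity.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order Maxwellian equilibrium distribution."""
    cu = np.einsum('qd,xyd->qxy', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def lbm_step(f, tau):
    """One BGK collide-and-stream step on a fully periodic grid."""
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau            # collision
    for q in range(9):                                 # streaming
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
    return f

# Perturbed initial density, zero velocity, equilibrium initialization
rng = np.random.default_rng(0)
rho0 = 1.0 + 0.01 * rng.standard_normal((16, 16))
f = equilibrium(rho0, np.zeros((16, 16, 2)))
mass0 = f.sum()
for _ in range(50):
    f = lbm_step(f, tau=0.8)
mass = f.sum()
```

On a periodic domain the total mass is conserved up to rounding, a simple sanity check; the asymptotic expansion of the thesis is what relates such a scheme quantitatively to the Navier-Stokes solution and to the initial layers discussed above.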
Urban development planning is gaining importance in medium-sized cities undergoing structural change and shrinkage. The renaissance of a more comprehensive, integrated development planning through the programmes "Soziale Stadt" and "Stadtumbau-West" means that cross-departmental cooperation within the municipal administration requires steering by urban development planning. The tasks of drawing up integrative concepts, monitoring development processes, creating networks of municipal offices and special organisational forms outside the hierarchy within the framework of project management, activating the citizenry, and establishing new forms of cooperation with economic actors demand a stronger involvement of urban development planning in strategic responsibility within the New Steering Model (Neues Steuerungsmodell). Shrinking cities require increased participation of all population groups in sustainable urban development. The municipal financial crisis and a growing demand for democracy lead to the call for an active civic municipality that takes over parts of public life. The change in the administration's self-conception from government to governance is inseparably linked to a stronger involvement of societal and economic actors in urban development and leads to a new understanding of the role of urban development planning. The past demands that the industrial society placed on urban infrastructure and society differ from the current demands of the service and knowledge society. A newly developed urban development strategy can therefore only be sustainable if these processes of change are captured and incorporated into the requirements profile of urban development.
Together with the demands for more participation and communication, it follows for spatial planning that not only a responsible and technically precise use of engineering methods and instruments is required, but also, to a greater extent, the social and communicative competence of the planners. Against the background of the current shrinkage processes and the transformation of the industrial society into a service and knowledge society, the model of the European City, partly questioned in the scientific debate, is gaining new significance. As the core cities bundle infrastructure services, permit a larger range of different lifestyles, and serve as communication hubs and nodes for new actors in the service and knowledge society, the importance of the core cities relative to the surrounding municipalities is growing again. In summary, a sustainable, future-oriented urban governance is characterised by a municipal structure of cooperation and communication in which politics and administration, and in particular urban development planning, develop viable visions together with actors from civil society and the economy, formulate goals and measures, and realise them jointly.
The provision of network Quality-of-Service (network QoS) in wireless (ad-hoc) networks is a major challenge in the development of future communication systems. Before designing and implementing these systems, the network QoS requirements have to be specified. Existing approaches to the specification of network QoS requirements are mainly focused on specific domains or individual system layers. In this paper, we present a holistic, comprehensive formalization of network QoS requirements, across layers. QoS requirements are specified on each layer by defining a QoS domain, consisting of QoS performance, reliability, and guarantee, and QoS scalability, with utility and cost functions. Furthermore, we derive preorders on multi-dimensional QoS domains and present criteria to reduce these domains, leading to a manageable subset of QoS values that is sufficient for system design and implementation. We illustrate our approach by examples from the case study Wireless Video Transmission.
The provision of network Quality-of-Service (network QoS) in wireless (ad-hoc) networks is a major challenge in the development of future communication systems. Before designing and implementing these systems, the network QoS requirements are to be specified. Since QoS functionalities are integrated across layers and hence QoS specifications exist on different system layers, a QoS mapping technique is needed to translate the specifications into each other. In this paper, we formalize the relationship between layers. Based on a comprehensive and holistic formalization of network QoS requirements, we define two kinds of QoS mappings. QoS domain mappings associate QoS domains of two abstraction levels. QoS scalability mappings associate utility and cost functions of two abstraction levels. We illustrate our approach by examples from the case study Wireless Video Transmission.
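The notions in these two abstracts (multi-dimensional QoS domains, preorders on them, and utility functions for scalability) can be sketched informally. All names and numbers below are hypothetical illustrations, not taken from the papers: a QoS value with three dimensions, compared by the componentwise (Pareto) preorder, and a toy utility function collapsing the domain to one scale.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QoS:
    throughput_kbps: float   # higher is better
    delay_ms: float          # lower is better
    loss_rate: float         # lower is better

def dominates(a: QoS, b: QoS) -> bool:
    """Componentwise preorder: a is at least as good as b in every
    dimension. Values may be incomparable, which is why reduction
    criteria on the domain are useful."""
    return (a.throughput_kbps >= b.throughput_kbps
            and a.delay_ms <= b.delay_ms
            and a.loss_rate <= b.loss_rate)

def utility(q: QoS) -> float:
    """A toy utility function for QoS scalability (weights are made up)."""
    return q.throughput_kbps / 1000 - q.delay_ms / 100 - 10 * q.loss_rate

good = QoS(throughput_kbps=2000, delay_ms=50, loss_rate=0.01)
poor = QoS(throughput_kbps=500, delay_ms=200, loss_rate=0.05)
better = dominates(good, poor)
u_gap = utility(good) - utility(poor)
```

A QoS domain mapping in the sense of the second abstract would then be a function translating such values between two abstraction levels, e.g. from video frame rate at the application layer to throughput at the network layer.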
The performance of oil filters used in the automotive industry can be significantly improved, especially when computer simulation is an essential component of the design process. In this paper, we consider parallel numerical algorithms for solving mathematical models describing the process of filtration, filtering out solid particles from liquid oil. The Navier-Stokes-Brinkmann system of equations is used to describe the laminar flow of incompressible isothermal oil. The space discretization in the complicated filter geometry is based on the finite-volume method. Special care is taken for an accurate approximation of velocity and pressure on the interface between the fluid and the porous media. The time discretization used here is a proper modification of the fractional time step discretization (cf. Chorin scheme) of the Navier-Stokes equations, where the Brinkmann term is considered at both the prediction and correction substeps. A data decomposition method is used to develop a parallel algorithm, where the domain is distributed among processors by using a structured reference grid. The MPI library is used to implement the data communication part of the algorithm. A theoretical model is proposed for the estimation of the complexity of the given parallel algorithm, and a scalability analysis is done on the basis of this model. Results of computational experiments are presented, and the accuracy and efficiency of the parallel algorithm are tested on real industrial geometries.
The scope of this diploma thesis is to examine the four generations of asset pricing models and the corresponding volatility dynamics which have been developed so far. We proceed as follows: In chapter 1 we give a short repetition of the Black-Scholes first generation model, which assumes a constant volatility, and we show that volatility should not be modeled as constant by examining statistical data and introducing the notion of implied volatility. In chapter 2, we examine the simplest models that are able to produce smiles or skews - local volatility models. These are called second generation models. Local volatility models model the volatility as a function of the stock price and time. We start with the work of Dupire, show how local volatility models can be calibrated and end with a detailed discussion of the constant elasticity of volatility model. Chapter 3 focuses on the Heston model, which represents the class of stochastic volatility models, which assume that the volatility itself is driven by a stochastic process. These are called third generation models. We introduce the model structure, derive a partial differential pricing equation, give a closed-form solution for European calls by solving this equation and explain how the model is calibrated. The last part of chapter 3 then deals with the limits and the mis-specifications of the Heston model, in particular for recent exotic options like reverse cliquets, accumulators or Napoleons. In chapter 4 we then introduce the Bergomi forward variance model, which is called a fourth generation model, as a consequence of the limits of the Heston model explained in chapter 3. The Bergomi model is a stochastic local volatility model - the spot price is modeled as a constant elasticity of volatility diffusion and its volatility parameters are functions of the so-called forward variances, which are specified as stochastic processes.
We start with the model specification, derive a partial differential pricing equation, show how the model has to be calibrated and end with pricing examples and a concluding discussion.
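The first-generation starting point of the thesis, the Black-Scholes model and the implied volatility it motivates, is easy to make concrete. The following sketch (standard formulas, illustrative parameters, not the thesis's Bergomi implementation) prices a European call under constant volatility and then inverts the price back to an implied volatility by bisection, exploiting that the call price is increasing in volatility.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call with constant volatility."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0):
    """Bisection on sigma; valid because bs_call is increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

price = bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
vol = implied_vol(price, S=100, K=100, T=1.0, r=0.05)
```

Inverting Black-Scholes prices recovers the constant input volatility exactly; applied to market prices across strikes, the same inversion produces the smiles and skews that motivate the second-, third- and fourth-generation models discussed in chapters 2 to 4.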