## Fachbereich Mathematik

### Refine

#### Year of publication

- 2008 (28)

#### Document Type

- Doctoral Thesis (15)
- Preprint (8)
- Report (4)
- Study Thesis (1)

#### Keywords

- Level-Set-Methode (2)
- domain decomposition (2)
- mesh generation (2)
- Abgeschlossenheit (1)
- Adjoint system (1)
- Alter (1)
- Annulus (1)
- Automatische Spracherkennung (1)
- Bayes-Entscheidungstheorie (1)
- Behinderter (1)
- Berechnungskomplexität (1)
- Bernstein Kern (1)
- Biot Poroelastizitätgleichung (1)
- CDS (1)
- CPDO (1)
- Center Location (1)
- Circle Location (1)
- Combinatorial Optimization (1)
- Core (1)
- Credit Risk (1)
- Curvature (1)
- Defaultable Options (1)
- Delaunay (1)
- Delaunay triangulation (1)
- Delaunay triangulierung (1)
- Eigenschwingung (1)
- Entscheidungsbaum (1)
- FEM (1)
- Fiber spinning (1)
- Fiber suspension flow (1)
- Filtration (1)
- Finanzmathematik (1)
- First--order optimality system (1)
- Fluid-Struktur-Wechselwirkung (1)
- Gebietszerlegung (1)
- Gebietszerlegungsmethode (1)
- Gittererzeugung (1)
- Gravimetrie (1)
- Harmonische Funktion (1)
- Hub Location Problem (1)
- Hysterese (1)
- Inverses Problem (1)
- Kaktusgraph (1)
- Kopplungsproblem (1)
- Krümmung (1)
- Kugelflächenfunktion (1)
- Level set methods (1)
- Location (1)
- MBS (1)
- MKS (1)
- Markov-Ketten-Monte-Carlo-Verfahren (1)
- Massendichte (1)
- Mathematical Finance (1)
- Mixed integer programming (1)
- Multicriteria optimization (1)
- Multiperiod planning (1)
- Multiskalenapproximation (1)
- Neumann Wavelets (1)
- Neumann wavelets (1)
- Newtonsches Potenzial (1)
- Nichtlineare Approximation (1)
- Niederschlag (1)
- Nonlinear Optimization (1)
- Optimal control (1)
- Orthonormalbasis (1)
- Punktprozess (1)
- Regularisierung (1)
- Semantik (1)
- Shapley value (1)
- Shapleywert (1)
- Signalanalyse (1)
- Sphäre (1)
- Spieltheorie (1)
- Spline (1)
- Standortprobleme (1)
- Stochastic Control (1)
- Stochastische optimale Kontrolle (1)
- Stokes Wavelets (1)
- Stokes wavelets (1)
- Stokes-Gleichung (1)
- Stoßdämpfer (1)
- Systemidentifikation (1)
- Tensorfeld (1)
- Topologieoptimierung (1)
- Topology optimization (1)
- Tropische Geometrie (1)
- Vektorfeld (1)
- Vollständigkeit (1)
- Wellengeschwindigkeit (1)
- Zeitreihe (1)
- Zentrenprobleme (1)
- benders decomposition (1)
- body wave velocity (1)
- cactus graph (1)
- cancer radiation therapy (1)
- change point (1)
- closure approximation (1)
- complexity (1)
- cooperative game (1)
- core (1)
- cuts (1)
- decision support systems (1)
- estimation (1)
- extreme equilibria (1)
- film casting (1)
- filtration (1)
- finite volume method (1)
- fluid structure interaction (1)
- free boundary (1)
- free surface (1)
- freie Oberfläche (1)
- gebietszerlegung (1)
- geometric ergodicity (1)
- gitter (1)
- harmonic density (1)
- harmonische Dichte (1)
- heuristic (1)
- interface problem (1)
- inverse problems (1)
- kooperative Spieltheorie (1)
- level set method (1)
- logical analysis (1)
- logische Analyse (1)
- mathematical modelling (1)
- matrix decomposition (1)
- minimaler Schnittbaum (1)
- minimum cut tree (1)
- monotropic programming (1)
- multileaf collimator sequencing (1)
- multiscale approximation (1)
- network congestion game (1)
- netzgenerierung (1)
- nichtlineare Modellreduktion (1)
- nonlinear model reduction (1)
- nonwovens (1)
- normal mode (1)
- optimal control (1)
- porous media (1)
- poröse Medien (1)
- reproducing kernel (1)
- reproduzierender Kern (1)
- rheology (1)
- schlecht gestellt (1)
- splitting function (1)
- stationarity (1)
- stochastic optimal control (1)
- tension problems (1)
- total latency (1)
- transmission conditions (1)
- well-posedness (1)
- Übergangsbedingungen (1)

This thesis is devoted to the study of tropical curves with emphasis on their enumerative geometry. Major results include a conceptual proof of the fact that the number of rational tropical plane curves interpolating an appropriate number of general points is independent of the choice of points, the computation of intersection products of Psi-classes on the moduli space of rational tropical curves, a computation of the number of tropical elliptic plane curves of given degree and fixed tropical j-invariant, as well as a tropical analogue of the Riemann-Roch theorem for algebraic curves. The results were obtained in joint work with Hannah Markwig and/or Andreas Gathmann.

The goal of a multicriteria program is to explore different alternatives and their respective trade-offs in a way that adequately represents the nondominated set. An exact description will in most cases fail because the number of efficient solutions is either too large or even infinite. We therefore approximate the nondominated set by computing a finite collection of nondominated points. Different ideas have been applied, including nonnegative weighted scalarization, weighted Tchebycheff scalarization, block norms and epsilon-constraints. Block norms are the building blocks for the inner and outer approximation algorithms proposed by Klamroth. We review these algorithms and propose three different variants. However, block norm based algorithms require solving a sequence of subproblems; the number of subproblems becomes relatively high for six criteria and intractable for real applications with nine criteria. Thus, we use bilevel linear programming to derive an approximation algorithm. We finally analyze and compare the approximation quality, running time and numerical convergence of the proposed methods.
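
For reference, the most common scalarizations mentioned above can be written as follows; this is a standard textbook formulation for a problem with objectives f_1, ..., f_p over a feasible set X, not a verbatim restatement of the thesis.

```latex
% Nonnegative weighted-sum scalarization:
\min_{x \in X} \; \sum_{k=1}^{p} \lambda_k f_k(x),
\qquad \lambda_k \ge 0,\; \textstyle\sum_k \lambda_k = 1.

% Weighted Tchebycheff scalarization with reference (ideal) point z^*:
\min_{x \in X} \; \max_{k=1,\dots,p} \; \lambda_k \bigl( f_k(x) - z^*_k \bigr).

% Epsilon-constraint scalarization: optimize one criterion, bound the others:
\min_{x \in X} \; f_j(x) \quad \text{s.t.} \quad f_k(x) \le \varepsilon_k \;\; (k \ne j).
```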

This thesis shows an approach to combining the advantages of MBS tyre models and FEM models for use in full vehicle simulations. The procedure proposed in this thesis aims to describe a nonlinear structure with a finite element approach combined with nonlinear model reduction methods. Unlike most model reduction methods - such as the frequently used Craig-Bampton approach - the method of Proper Orthogonal Decomposition (POD) offers a projection basis suitable for nonlinear models. For the linear wave equation, the POD method is studied by comparing two different choices of snapshot sets: set 1 consists of deformation snapshots, and set 2 additionally contains velocities and accelerations. An error analysis yields no convergence guarantee for deformation snapshots alone; when derivatives are included, it yields an error bound that diminishes for small time steps. The numerical results show better behaviour for the derivative snapshot method, as long as the sum of the left-over eigenvalues is significant. For the reduction of nonlinear systems - especially when using commercial software - it is necessary to decouple the reduced surrogate system from the full model. To achieve this, a lookup table approach is presented. It makes use of the preceding computation step with the full model that is necessary to set up the POD basis (the training step). The nonlinear term of inner forces and the stiffness matrix are exported and stored in a lookup table for the reduced system. Numerical examples include a nonlinear string in Matlab and an airspring computed in Abaqus. Both examples show that effort reductions of two orders of magnitude are possible within a reasonable error tolerance. The lookup approaches perform faster than the Trajectory Piecewise Linear (TPWL) method and produce comparable errors. Furthermore, the Abaqus example shows the influence of the training excitation on the quality of the reduced model.
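
A minimal sketch of the basic POD step described above: collect snapshots as columns of a matrix and take the leading left singular vectors as the reduced basis. Function names, the energy criterion and the toy data are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, energy: float = 0.999) -> np.ndarray:
    """Proper Orthogonal Decomposition of a snapshot matrix.

    snapshots : (n_dof, n_snapshots) array, e.g. deformation snapshots,
                optionally augmented with velocity/acceleration columns.
    energy    : fraction of the squared singular value "energy" to retain.
    Returns the (n_dof, r) matrix of POD modes.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r]

# Illustrative use: project a full-order operator K onto the POD subspace.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))        # hypothetical snapshot matrix
V = pod_basis(X)
K = rng.standard_normal((1000, 1000))      # hypothetical stiffness matrix
K_red = V.T @ K @ V                        # reduced-order operator
```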

We present a new efficient and robust algorithm for topology optimization of 3D cast parts. Special constraints are fulfilled to make it possible to incorporate a simulation of the casting process into the optimization. In order to keep track of the exact position of the boundary and to provide a full finite element model of the structure in each iteration, we use a twofold approach for the structural update: a level set function technique for boundary representation is combined with a new tetrahedral mesh generator for geometries specified by implicit boundary descriptions. Boundary conditions are mapped automatically onto the updated mesh. For sensitivity analysis, we employ the concept of the topological gradient. Modification of the level set function is reduced to an efficient summation of several level set functions, and the finite element mesh is adapted to the modified structure in each iteration of the optimization process. We show that the resulting meshes are of high quality. A domain decomposition technique is used to keep the computational costs of remeshing low. The capabilities of our algorithm are demonstrated by industrial-scale optimization examples.
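
For orientation, the level set description of the structure and the topological gradient used for sensitivity analysis have the following standard general forms; these are textbook definitions, not the specific choices made in the paper.

```latex
% Level set representation of the current structure \Omega:
\Omega = \{\, x : \varphi(x) < 0 \,\},
\qquad
\partial\Omega = \{\, x : \varphi(x) = 0 \,\}.

% Topological derivative D_T J(x): sensitivity of a cost functional J to
% nucleating a small hole B_\varepsilon(x) at the point x,
J\bigl(\Omega \setminus \overline{B_\varepsilon(x)}\bigr)
  = J(\Omega) + f(\varepsilon)\, D_T J(x) + o\bigl(f(\varepsilon)\bigr),
\qquad f(\varepsilon) \to 0 \ \text{as}\ \varepsilon \to 0 .
```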

In this thesis, the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to porous media is investigated. For that purpose, the transmission conditions across the interfaces between the fluid regions and the porous domain are derived. A proper algorithm is formulated and numerical examples are presented. First, the transmission conditions for the coupling of various physical phenomena are reviewed. For the coupling of free flow with porous media, it has to be distinguished whether the fluid flows tangentially or perpendicularly to the porous medium; this plays an essential role for the formulation of the transmission conditions. In the thesis, the transmission conditions for the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to the porous medium in one and three dimensions are derived. With these conditions, the continuous fully coupled system of equations in one and three dimensions is formulated. In the one-dimensional case the extreme cases, i.e. a fluid-fluid interface and a fluid-impermeable solid interface, are considered. Two chapters of the thesis are devoted to the discretisation of the fully coupled Biot-Stokes system for matching and non-matching grids, respectively. To this end, operators are introduced that map the internal and boundary variables to the respective domains via the Stokes equations, the Biot equations and the transmission conditions. The matrix representation of some of these operators is shown. For the non-matching case, a cell-centred grid in the fluid region and a staggered grid in the porous domain are used. Hence, the discretisation is more difficult, since an additional grid on the interface has to be introduced. Corresponding matching functions are needed to transfer the values properly from one domain to the other across the interface. In the end, the iterative solution procedure for the Biot-Stokes system on non-matching grids is presented. For this purpose, a short review of domain decomposition methods is given, which are often the methods of choice for such coupled problems. The iterative solution algorithm is presented, including details like stopping criteria, choice and computation of parameters, formulae for non-dimensionalisation, software and so on. Finally, numerical results for steady state examples, depth filtration and cake filtration examples are presented.
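
As background, the two subproblems that are coupled have the following standard forms (quasi-static Biot, incompressible Stokes); the symbols are generic and the specific transmission conditions derived in the thesis are not reproduced here.

```latex
% Stokes equations in the fluid region (velocity u, pressure p, viscosity \mu):
-\mu\,\Delta u + \nabla p = f,
\qquad
\nabla\cdot u = 0 .

% Quasi-static Biot poroelasticity in the porous region
% (solid displacement u_s, pore pressure p_p, Biot coefficient \alpha,
%  storage coefficient c_0, permeability K, fluid viscosity \mu_f):
-\nabla\cdot\bigl(\sigma(u_s) - \alpha\, p_p\, I\bigr) = g,
\qquad
\frac{\partial}{\partial t}\bigl(c_0\, p_p + \alpha\, \nabla\cdot u_s\bigr)
  - \nabla\cdot\Bigl(\tfrac{K}{\mu_f}\,\nabla p_p\Bigr) = q .
```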

Minimum Cut Tree Games
(2008)

In this paper we introduce a cooperative game based on the minimum cut tree problem, which is also known as the multi-terminal maximum flow problem. Minimum cut tree games are shown to be totally balanced, and a solution in their core can be obtained in polynomial time. This special core allocation is closely related to the solution of the original graph-theoretical problem. We give an example showing that the game is not supermodular in general; however, it is supermodular in special cases, and for some of these we give an explicit formula for the calculation of the Shapley value.
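
The solution concepts referred to above are the standard ones for a cooperative game (N, v); these are the usual textbook definitions (stated for profit games, with the coalition inequalities reversed for cost games), not formulas specific to minimum cut tree games.

```latex
% Core of a cooperative game (N, v):
\mathrm{core}(v) = \Bigl\{\, x \in \mathbb{R}^N :
  \textstyle\sum_{i \in N} x_i = v(N), \;\;
  \textstyle\sum_{i \in S} x_i \ge v(S) \;\; \forall\, S \subseteq N \,\Bigr\}.

% Shapley value of player i:
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,
  \bigl( v(S \cup \{i\}) - v(S) \bigr).
```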

Grey-box modelling deals with models which are able to integrate the following two kinds of information with equal importance: qualitative (expert) knowledge and quantitative (data) knowledge. The doctoral thesis has two aims: the improvement of an existing neuro-fuzzy approach (the LOLIMOT algorithm), and the development of a new model class with a corresponding identification algorithm, based on multiresolution analysis (wavelets) and statistical methods. The identification algorithm is able to identify both hidden differential dynamics and hysteretic components. After presenting some improvements of the LOLIMOT algorithm based on readily normalized weight functions derived from decision trees, we investigate several mathematical theories, i.e. the theory of nonlinear dynamical systems and hysteresis, statistical decision theory, and approximation theory, with respect to their applicability to grey-box modelling. These theories point the way directly to a new model class and its identification algorithm. The new model class is derived from local model networks through the following modifications: inclusion of non-Gaussian noise sources; allowance of internal nonlinear differential dynamics represented by multi-dimensional real functions; introduction of internal hysteresis models through two-dimensional "primitive functions"; replacement or approximation of the weight functions and of the mentioned multi-dimensional functions by wavelets; use of the sparseness of the matrix of wavelet coefficients; and identification of the wavelet coefficients with Sequential Monte Carlo methods. We also apply this modelling scheme to the identification of a shock absorber.

In this thesis, we investigate a statistical model for precipitation time series recorded at a single site. The sequence of observations consists of rainfall amounts aggregated over time periods of fixed duration. As the properties of this sequence depend strongly on the length of the observation intervals, we follow the approach of Rodriguez-Iturbe et al. [1] and use an underlying model for rainfall intensity in continuous time. In this idealized representation, rainfall occurs in clusters of rectangular cells, and each observation is treated as the sum of cell contributions during a given time period. Unlike the previous work, we use a multivariate lognormal distribution for the temporal structure of the cells and clusters. After formulating the model, we develop a Markov chain Monte Carlo algorithm for fitting it to a given data set. A particular problem we have to deal with is the need to estimate the unobserved intensity process alongside the parameters of interest. The performance of the algorithm is tested on artificial data sets generated from the model. [1] I. Rodriguez-Iturbe, D. R. Cox, and V. Isham. Some models for rainfall based on stochastic point processes. Proc. R. Soc. Lond. A, 410:269-288, 1987.
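
A generic random-walk Metropolis-Hastings update, the basic building block of MCMC algorithms like the one mentioned above, is sketched below. The target density, names and tuning constants are illustrative; the thesis's sampler additionally updates the latent intensity process, which is not shown.

```python
import numpy as np

def metropolis_step(theta, log_post, proposal_scale, rng):
    """One random-walk Metropolis-Hastings update of the parameter vector."""
    proposal = theta + proposal_scale * rng.standard_normal(theta.shape)
    log_alpha = log_post(proposal) - log_post(theta)  # log acceptance ratio
    if np.log(rng.uniform()) < log_alpha:
        return proposal, True
    return theta, False

# Illustrative target: a standard normal log-density in 3 dimensions.
rng = np.random.default_rng(1)
log_post = lambda th: -0.5 * np.sum(th**2)
theta, chain = np.zeros(3), []
for _ in range(5000):
    theta, _ = metropolis_step(theta, log_post, 0.5, rng)
    chain.append(theta.copy())
```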

The subject of this thesis is the canonical connection between classical global gravity field modelling in the conception of Stokes (1849) and Neumann (1887) and modern local multiscale computation by means of locally supported adaptive wavelets. A particular concern is the "zoom-in" determination of geoid heights from locally given gravity anomalies or gravity disturbances.

This paper provides a brief overview of two linear inverse problems concerned with the determination of the Earth’s interior: inverse gravimetry and normal mode tomography. Moreover, a vector spline method is proposed for a combined solution of both problems. This method uses localised basis functions, which are based on reproducing kernels, and is related to approaches which have been successfully applied to the inverse gravimetric problem and the seismic traveltime tomography separately.

In this work we study and investigate the minimum width annulus problem (MWAP), the circle center location or circle location problem (CLP), and the point center location or point location problem (PLP) on rectilinear and Chebyshev planes as well as in networks. The relations between the problems have served as a basis for finding elegant solution algorithms for both new and well-known problems. MWAP was formulated and investigated in rectilinear space. In contrast to the Euclidean metric, MWAP and PLP have at least one common optimal point. Therefore, MWAP on the rectilinear plane was solved in linear time with the help of PLP; hence, the solution sequence was PLP-->MWAP. It was shown that MWAP and CLP are equivalent, thus CLP can also be solved in linear time. The obtained results were analysed and transferred to the Chebyshev metric. After that, the notions of circle, sphere and annulus in networks were introduced. It should be noted that the notion of a circle in a network is different from the notion of a cycle. An O(mn) time algorithm for the solution of MWAP was constructed and implemented. The algorithm is based on the fact that the middle point of an edge represents an optimal solution of a local minimum width annulus on this edge. The resulting complexity is better than the complexity O(mn+n^2 log n), in the unweighted case, of the fastest known algorithm for minimizing the range function, which is mathematically equivalent to MWAP. MWAP in unweighted undirected networks was extended to MWAP on subsets and to the restricted MWAP; the resulting problems were analysed and solved. Also the p-minimum width annulus problem was formulated and explored. This problem is NP-hard. However, the p-MWAP has been solved in polynomial O(m^2 n^3 p) time under the natural assumption that each minimum width annulus covers all vertices of a network whose distances to the central point of the annulus are less than or equal to the radius of its outer circle. In contrast to the planar case, MWAP in undirected unweighted networks has turned out to be a root problem among the considered problems. During the investigation of properties of circles in networks it was shown that the difference between planar and network circles is significant. This leads to the nonequivalence of CLP and MWAP in the general case. However, MWAP was effectively used in solution procedures for CLP, giving the sequence MWAP-->CLP. The complexity of the developed and implemented algorithm is of order O(m^2 n^2). It is important to mention that CLP in networks has been formulated for the first time in this work and differs from the well-studied location of cycles in networks. We have constructed an O(mn+n^2 log n) algorithm for the well-known PLP. The complexity of this algorithm is not worse than the complexity of the currently best algorithms, but the concept of the solution procedure is new: we use MWAP in order to solve PLP, building, in contrast to the planar case, the solution sequence MWAP-->PLP. This method has the following advantages: First, the lower bounds LB obtained in the solution procedure are proved to be in any case better than the strongest Halpern lower bound. Second, the developed algorithm is so simple that it can easily be applied to complex networks manually. Third, the empirical complexity of the algorithm is equal to O(mn). MWAP was extended to and explored in directed unweighted and weighted networks.
The complexity bound O(n^2) of the developed algorithm for finding the center of a minimum width annulus in the unweighted case does not depend on the number of edges in the network, because the problems can be solved in the order PLP-->MWAP. In the weighted case the computational time is of order O(mn^2).
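
For reference, the minimum width annulus problem can be stated as the minimization of the range function mentioned above; this is a general formulation, with the distance d taken in the respective planar metric or as shortest path distance in a network.

```latex
% Given demand points (or vertices) v_1, \dots, v_n, find a center c minimizing
% the width of an annulus containing all of them:
\min_{c} \; \Bigl( \max_{i} d(c, v_i) \;-\; \min_{i} d(c, v_i) \Bigr),
% where the outer and inner radii of the annulus centered at c are
R(c) = \max_i d(c, v_i), \qquad r(c) = \min_i d(c, v_i).
```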

We study the complexity of finding extreme pure Nash equilibria in symmetric network congestion games and analyse how it depends on the graph topology and the number of users. In our context the best and worst equilibria are those with minimum and maximum total latency, respectively. We establish that both problems can be solved by a greedy algorithm with a suitable tie-breaking rule on parallel links. On series-parallel graphs, finding a worst Nash equilibrium is NP-hard for two or more users, while finding a best one is solvable in polynomial time for two users and NP-hard for three or more. Additionally, we establish NP-hardness in the strong sense for the problem of finding a worst Nash equilibrium on a general acyclic graph.
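
To illustrate the greedy idea on parallel links (this is a generic sketch, not the paper's exact algorithm or tie-breaking rule), users can be assigned one at a time to a link whose latency after the assignment is smallest; with nondecreasing latency functions this yields a pure Nash equilibrium.

```python
def greedy_parallel_links(n_users, latency_fns):
    """Assign identical users one by one to parallel links.

    latency_fns : list of nondecreasing functions; latency_fns[e](k) is the
                  latency of link e when k users are on it.
    Returns the load vector (number of users per link).
    """
    loads = [0] * len(latency_fns)
    for _ in range(n_users):
        # choose the link with minimal latency after adding one more user
        e = min(range(len(latency_fns)),
                key=lambda i: latency_fns[i](loads[i] + 1))
        loads[e] += 1
    return loads

# Example: three links with linear latencies l_e(k) = a_e * k.
print(greedy_parallel_links(5, [lambda k: 1 * k, lambda k: 2 * k, lambda k: 3 * k]))
```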

In many medical, financial, industrial, e.t.c. applications of statistics, the model parameters may undergo changes at unknown moment of time. In this thesis, we consider change point analysis in a regression setting for dichotomous responses, i.e. they can be modeled as Bernoulli or 0-1 variables. Applications are widespread including credit scoring in financial statistics and dose-response relations in biometry. The model parameters are estimated using neural network method. We show that the parameter estimates are identifiable up to a given family of transformations and derive the consistency and asymptotic normality of the network parameter estimates using the results in Franke and Neumann Franke Neumann (2000). We use a neural network based likelihood ratio test statistic to detect a change point in a given set of data and derive the limit distribution of the estimator using the results in Gombay and Horvath (1994,1996) under the assumption that the model is properly specified. For the misspecified case, we develop a scaled test statistic for the case of one-dimensional parameter. Through simulation, we show that the sample size, change point location and the size of change influence change point detection. In this work, the maximum likelihood estimation method is used to estimate a change point when it has been detected. Through simulation, we show that change point estimation is influenced by the sample size, change point location and the size of change. We present two methods for determining the change point confidence intervals: Profile log-likelihood ratio and Percentile bootstrap methods. Through simulation, the Percentile bootstrap method is shown to be superior to profile log-likelihood ratio method.
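
A minimal illustration of a likelihood ratio scan for a single change point in Bernoulli data is sketched below; the thesis works with a regression model fitted by neural networks, which is considerably more general than this constant-probability toy example, and all names here are illustrative.

```python
import numpy as np

def bernoulli_loglik(y):
    """Log-likelihood of a 0-1 sample under its ML success probability."""
    p = np.clip(np.mean(y), 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def lr_changepoint(y, min_seg=5):
    """Return the split maximizing the likelihood ratio statistic and its value."""
    n = len(y)
    best_k, best_stat = None, -np.inf
    ll_full = bernoulli_loglik(y)
    for k in range(min_seg, n - min_seg):
        stat = 2 * (bernoulli_loglik(y[:k]) + bernoulli_loglik(y[k:]) - ll_full)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# Example: success probability jumps from 0.2 to 0.7 at observation 120.
rng = np.random.default_rng(2)
y = np.concatenate([rng.binomial(1, 0.2, 120), rng.binomial(1, 0.7, 80)])
print(lr_changepoint(y))
```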

The purpose of this paper is the canonical connection of classical global gravity field determination following the concept of Stokes (1849), Bruns (1878), and Neumann (1887) on the one hand and modern locally oriented multiscale computation by use of adaptive locally supported wavelets on the other hand. Essential tools are regularization methods of the Green, Neumann, and Stokes integral representations. The multiscale approximation is guaranteed simply as linear difference scheme by use of Green, Neumann, and Stokes wavelets, respectively. As an application, gravity anomalies caused by plumes are investigated for the Hawaiian and Iceland areas.

We present results and views about a project in assisted living. The scenario is a room in which an elderly and/or disabled person lives who is not able to perform certain actions due to restricted mobility. We enable the person to express commands verbally, which are then executed automatically. Several severe problems complicate the situation: the person may utter the command in a rather unexpected way, the person may make an error, or the action may not be executable for several reasons. In our approach we present an architecture with three components: the recognition component, which contains novel features in the signal processing; the analysis component, which logically analyzes the command; and the execution component, which performs the action automatically. All three components communicate with each other.

This thesis covers two important fields in financial mathematics, namely continuous-time portfolio optimisation and credit risk modelling. We analyse optimisation problems for portfolios of call and put options on the stock and/or the zero coupon bond issued by a firm with default risk. We use the martingale approach for the dynamic optimisation problems. Our findings show that the riskier the option gets, the smaller the proportion of his wealth the investor allocates to the risky asset. Further, we analyse the Credit Default Swap (CDS) market quotes on the Eurobonds issued by the Turkish sovereign in order to build the term structure of the sovereign credit risk. Two methods are introduced and compared for bootstrapping the risk-neutral probabilities of default (PD) in an intensity-based (or reduced-form) credit risk modelling approach. We compare the market-implied PDs with the actual PDs reported by credit rating agencies based on historical experience. Our results highlight the market price of the sovereign credit risk depending on the assigned rating category in the sampling period. Finally, we find an optimal leverage strategy for delivering the payments promised by a Constant Proportion Debt Obligation (CPDO). The problem is solved via the introduction and explicit solution of a stochastic control problem, obtained by transforming the related Hamilton-Jacobi-Bellman equation into its dual. Contrary to industry practice, the optimal leverage function we derive is a non-linear function of the CPDO asset value. The simulations show promising behaviour of the optimal leverage function compared with the one popular among practitioners.
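
As background for the PD bootstrapping, a standard reduced-form shorthand (the "credit triangle") relates a flat CDS spread s, a recovery rate R, and a constant risk-neutral default intensity λ; the thesis bootstraps a full term structure rather than using this one-line approximation.

```latex
s \approx \lambda\,(1 - R)
\quad\Longrightarrow\quad
\lambda \approx \frac{s}{1 - R},
\qquad
\mathbb{Q}(\tau > t) = e^{-\lambda t},
\qquad
\mathrm{PD}(t) = 1 - e^{-\lambda t}.
```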

In this paper we develop a data-driven mixture of vector autoregressive models with exogenous components. The process is assumed to change regimes according to an underlying Markov process. In contrast to the hidden Markov setup, we allow the transition probabilities of the underlying Markov process to depend on past time series values and exogenous variables. Such processes have potential applications to modeling brain signals. For example, brain activity at time t (measured by electroencephalograms) can be modeled as a function of both its past values and exogenous variables (such as visual or somatosensory stimuli). Furthermore, we establish stationarity, geometric ergodicity and the existence of moments for these processes under suitable conditions on the parameters of the model. Such properties are important for understanding the stability properties of the model as well as for deriving the asymptotic behavior of various statistics and model parameter estimators.
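
One way to write such a mixture VARX process with covariate-dependent regime switching is sketched below; this is an illustrative specification (multinomial-logistic transition probabilities, Gaussian innovations), not necessarily the exact parameterization used in the paper.

```latex
% Observation equation, conditional on the regime S_t = j:
X_t = a_j + \sum_{l=1}^{p} A_{j,l}\, X_{t-l} + B_j\, u_t + \Sigma_j^{1/2}\,\varepsilon_t ,
\qquad \varepsilon_t \sim \text{i.i.d. } \mathcal{N}(0, I).

% Transition probabilities depending on past values and exogenous variables:
\mathbb{P}\bigl(S_t = j \mid S_{t-1} = i,\, X_{t-1},\, u_t\bigr)
  = \frac{\exp\bigl(\gamma_{ij}^{\top}\, [\,1,\ X_{t-1}^{\top},\ u_t^{\top}\,]^{\top}\bigr)}
         {\sum_{k} \exp\bigl(\gamma_{ik}^{\top}\, [\,1,\ X_{t-1}^{\top},\ u_t^{\top}\,]^{\top}\bigr)} .
```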

Given a directed graph G = (N,A), a tension is a function from A to R which satisfies Kirchhoff's law for voltages. There are two well-known tension problems on graphs. In the minimum cost tension problem (MCT), a cost vector is given and a tension satisfying lower and upper bounds is sought such that the total cost is minimum. In the maximum tension problem (MaxT), the graph contains two special nodes and an arc between them, and the aim is to find the maximum tension on this arc. In this study we assume that both problems are feasible and have finite optimal solutions, and we analyze their inverse versions under rectilinear and Chebyshev distances. In the inverse minimum cost tension problem we adjust the cost parameter to make a given feasible solution optimal, whereas in the inverse maximum tension problem the bounds of the arcs are modified. We show, by extending the results of Ahuja and Orlin (2002), that these inverse tension problems are in a way "dual" to the inverse network flow problems. We prove that the inverse minimum cost tension problem under the rectilinear norm is equivalent to solving a minimum cost tension problem, while under the unit weight Chebyshev norm it can be solved by finding a minimum mean cost residual cut. Moreover, the inverse maximum tension problem under the rectilinear norm can be solved as a maximum tension problem on the same graph with new arc bounds. Finally, we provide a generalization of the inverse problems to monotropic programming problems with linear costs.
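
For reference, a tension is the coboundary of a node potential (equivalently, it sums to zero around every cycle, which is Kirchhoff's voltage law), and the minimum cost tension problem then reads as follows; this is the standard formulation, stated here with generic symbols.

```latex
% A tension \theta on G = (N, A) derives from a node potential \pi:
\theta_{ij} = \pi_j - \pi_i \qquad \text{for every arc } (i,j) \in A .

% Minimum cost tension problem (MCT):
\min \;\; \sum_{(i,j) \in A} c_{ij}\, \theta_{ij}
\quad \text{s.t.} \quad
l_{ij} \le \theta_{ij} \le u_{ij}, \qquad
\theta_{ij} = \pi_j - \pi_i \;\; \forall\, (i,j) \in A .
```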

In this dissertation we consider mesoscale-based models for flow-driven fibre orientation dynamics in suspensions. Models for fibre orientation dynamics are derived for two classes of suspensions. For concentrated suspensions of rigid fibres the Folgar-Tucker model is generalized by incorporating the excluded volume effect. For dilute semi-flexible fibre suspensions a novel moments-based description of the fibre orientation state is introduced, and a model for the flow-driven evolution of the corresponding variables is derived together with several closure approximations. The equation system describing fibre suspension flows, consisting of the incompressible Navier-Stokes equation with an orientation-state-dependent non-Newtonian constitutive relation and a linear first order hyperbolic system for the fibre orientation variables, has been analyzed, allowing rather general fibre orientation evolution models and constitutive relations. The existence and uniqueness of a solution have been demonstrated locally in time for sufficiently small data. The closure relations for the semi-flexible fibre suspension model are studied numerically. A finite volume based discretization of the suspension flow is given, and the numerical results for several two and three dimensional domains with different parameter values are presented and discussed.
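
Moment descriptions of the orientation state, as referred to above, are commonly built on orientation tensors; with ψ(p) the orientation distribution over unit directions p, the standard second- and fourth-order moments are given below (general definitions, not the thesis's new semi-flexible-fibre variables). A closure approximation then expresses the fourth-order tensor in terms of the second-order one so that the evolution equations close.

```latex
a_{ij} = \int_{S^2} p_i\, p_j\, \psi(p)\, \mathrm{d}p,
\qquad
a_{ijkl} = \int_{S^2} p_i\, p_j\, p_k\, p_l\, \psi(p)\, \mathrm{d}p .
```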

This dissertation deals with the optimization of the web formation in a spunbond process for the production of artificial fabrics. A mathematical model of the process is presented. Based on the model, two kinds of attributes to be optimized are considered: those related to the quality of the fabric and those describing the stability of the production process. The problem falls into the multicriteria optimization and decision making framework. The functions involved in the model of the process are nonlinear, nonconvex and nondifferentiable. A two-step strategy, exploration and continuation, is proposed to approximate the Pareto frontier numerically, and alternative methods are proposed to navigate the set and support the decision making process. The proposed strategy is applied to a particular production process and numerical results are presented.

This thesis is devoted to stochastic optimization problems in various situations, treated with the aid of the martingale method. Chapter 2 discusses the martingale method and its applications to the basic optimization problems, which are well addressed in the literature (for example, [15], [23] and [24]). In Chapter 3, we study the problem of maximizing the expected utility of real terminal wealth in the presence of an index bond. Chapter 4, which is a modification of the original research paper written jointly with Korn and Ewald [39], investigates an optimization problem faced by a DC pension fund manager under inflationary risk. Although the problem is addressed in the context of a pension fund, it presents a way to deal with the optimization problem in the case where there is a (positive) endowment. In Chapter 5, we turn to a situation where additional income, other than the income from returns on investment, is gained by supplying labor. Chapter 6 concerns a situation where the market considered is incomplete; a trick for completing an incomplete market is presented there. The general theory supporting the subsequent discussion is summarized in the first chapter.

The dissertation deals with the application of hub location models in public transport planning. The author proposes new mathematical models along with different solution approaches to solve the instances. Moreover, a novel multi-period formulation is proposed as an extension of the general model. Due to its high complexity, heuristic approaches are formulated to find a good solution within a reasonable amount of time.

A modular level set algorithm is developed to study the interface and its movement for free moving boundary problems. The algorithm is divided into three basic modules: initialization, propagation and contouring. Initialization is the process of finding the signed distance function from closed objects. We discuss here a methodology to find an accurate signed distance function from a closed, simply connected surface discretized by triangulation. We compute the signed distance function using the direct method, and it is stored efficiently in the neighborhood of the interface by a narrow band level set method. A novel approach is employed to determine the correct sign of the distance function at convex-concave junctions of the surface. The accuracy and convergence of the method with respect to the surface resolution is studied. It is shown that the efficient organization of surface and narrow band data structures enables the solution of large industrial problems. We also compare the accuracy of the signed distance function obtained by the direct approach with the Fast Marching Method (FMM) and find that the direct approach is more accurate than FMM. Contouring is performed through a variant of the marching cube algorithm used for isosurface construction from volumetric data sets. The algorithm is designed to keep foreground and background information consistent, contrary to the neutrality principle followed for surface rendering in computer graphics. The algorithm ensures that the isosurface triangulation is closed, non-degenerate and non-ambiguous. The constructed triangulation has desirable properties required for the generation of good volume meshes. These volume meshes are used in the boundary element method for the study of linear electrostatics. For estimating surface properties like interface position, normal and curvature accurately from a discrete level set function, a method based on higher order weighted least squares is developed. It is found that the least squares approach is more accurate than finite difference approximation; furthermore, the method of least squares requires a more compact stencil than finite difference schemes. The accuracy and convergence of the method depend on the surface resolution and the discrete mesh width. This approach is used in the propagation module for the study of mean curvature flow and bubble dynamics. The advantage of this approach is that the curvature is not discretized explicitly on the grid but is estimated on the interface. The method of constant velocity extension is employed for the propagation of the interface. With the least squares approach, the mean curvature flow shows a considerable reduction in mass loss compared to finite difference techniques. In the bubble dynamics, the modules are used for the study of a bubble under the influence of surface tension forces to validate the Young-Laplace law. It is found that the order of curvature estimation plays a crucial role for calculating an accurate pressure difference between the inside and outside of the bubble. Further, we study the coalescence of two bubbles under surface tension forces. The application of these modules to various industrial problems is discussed.
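
For reference, the interface quantities estimated above have the standard level set expressions, and propagation with a speed F in the normal direction obeys the usual level set equation; these are general formulas, not the thesis's discrete least squares scheme.

```latex
% Unit normal and mean curvature of the zero level set of \varphi:
n = \frac{\nabla\varphi}{|\nabla\varphi|},
\qquad
\kappa = \nabla\cdot\left(\frac{\nabla\varphi}{|\nabla\varphi|}\right).

% Level set propagation with normal speed F (e.g. F = -\kappa for mean curvature flow):
\frac{\partial \varphi}{\partial t} + F\,|\nabla\varphi| = 0 .
```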

Finding a delivery plan for cancer radiation treatment using multileaf collimators operating in "step-and-shoot mode" can be formulated mathematically as a problem of decomposing an integer matrix into a weighted sum of binary matrices having the consecutive-ones property - and sometimes other properties related to the collimator technology. The efficiency of the delivery plan is measured by both the sum of weights in the decomposition, known as the total beam-on time, and the number of different binary matrices appearing in it, referred to as the cardinality, the latter being closely related to the set-up time of the treatment. In practice, the total beam-on time is usually restricted to its minimum possible value (which is easy to find), and a decomposition that minimises cardinality subject to this restriction is sought.
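
A tiny worked instance of such a decomposition (illustrative, not taken from the paper): every row of each binary matrix has the consecutive-ones property, the sum of weights (here 1 + 1 + 1 = 3) is the total beam-on time, and the number of distinct binary matrices (here 3) is the cardinality.

```latex
\begin{pmatrix} 2 & 3 & 1 \\ 1 & 2 & 2 \end{pmatrix}
= 1 \cdot \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}
+ 1 \cdot \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix}
+ 1 \cdot \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
```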

An optimal control problem for a mathematical model of a melt spinning process is considered. Newtonian and non-Newtonian models are used to describe the rheology of the polymeric material that the fiber is made of. The extrusion velocity of the polymer at the spinneret as well as the velocity and temperature of the quench air serve as control variables. A constrained optimization problem is derived and the first-order optimality system is set up to obtain the adjoint equations. Numerical solutions are carried out using a steepest descent algorithm.
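
The optimization loop described above follows the generic adjoint-based steepest descent pattern sketched here; the function names, line search and toy objective are assumptions for illustration, not the thesis's implementation. In the application, `cost` would involve solving the state (spinning) equations and `gradient` the adjoint system.

```python
import numpy as np

def steepest_descent(control, cost, gradient, step=1e-2, tol=1e-8, max_iter=200):
    """Generic steepest descent with a simple backtracking line search.

    cost(q)     : evaluates the objective for the control q
    gradient(q) : evaluates the gradient of the objective (e.g. via an adjoint solve)
    """
    q = np.asarray(control, dtype=float)
    J = cost(q)
    for _ in range(max_iter):
        g = gradient(q)
        if np.linalg.norm(g) < tol:
            break
        alpha = step
        while cost(q - alpha * g) > J and alpha > 1e-12:   # backtracking
            alpha *= 0.5
        q = q - alpha * g
        J = cost(q)
    return q, J

# Illustrative use on a toy quadratic objective with known minimum at q = 2:
q_opt, J_opt = steepest_descent(np.ones(3),
                                lambda q: np.sum((q - 2.0) ** 2),
                                lambda q: 2.0 * (q - 2.0),
                                step=0.4)
print(q_opt, J_opt)
```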

We present an optimal control approach for the isothermal film casting process with free surfaces described by averaged Navier-Stokes equations. We control the thickness of the film at the take-up point using the shape of the nozzle. The control goal consists in finding an even thickness profile. To achieve this goal, we minimize an appropriate cost functional. The resulting minimization problem is solved numerically by a steepest descent method. The gradient of the cost functional is approximated using the adjoint variables of the problem with fixed film width. Numerical simulations show the applicability of the proposed method.

The desire to model geometrical and physical features in ever increasing detail has led to a steady increase in the number of points used in field solvers. While many solvers have been ported to parallel machines, grid generators have been left behind. Sequential generation of meshes of large size is extremely problematic both in terms of time and memory requirements. Therefore, the need for developing parallel mesh generation techniques is well justified. In this work a novel algorithm is presented for the automatic parallel generation of tetrahedral computational meshes based on geometrical domain decomposition; it has the potential to remove this bottleneck. Different domain decomposition approaches and criteria have been investigated. Questions regarding time and memory consumption, efficiency of computations and quality of the generated surface and volume meshes have been considered. As a result of this work, the parTgen (partitioner and parallel tetrahedral mesh generator) software package based on the developed algorithm has been created. Several real-life examples of relatively complex structures involving large meshes (of order 10^7-10^8 elements) are given. It has been shown that high mesh quality is achieved, memory and time consumption are reduced significantly, and the parallel algorithm is efficient.