## Dissertation

### Filters

#### Department / Organizational Unit

- Department of Mathematics (206)

#### Document Type

- Dissertation (206)

### Search Results
- Product Pricing with Additive Influences - Algorithms and Complexity Results for Pricing in Social Networks (2017)
- We introduce and investigate a product pricing model in social networks where the value a potential buyer assigns to a product is influenced by the previous buyers. The selling proceeds in discrete, synchronous rounds for some set price, and the individual values are altered additively. Whereas computing the revenue for a given price can be done in polynomial time, we show that the basic problem PPAI, i.e., deciding whether there is a price generating a requested revenue, is weakly NP-complete. With the algorithm Frag we provide a pseudo-polynomial time algorithm that checks the range of prices in intervals of common buying behavior, which we call fragments. In some special cases, e.g., solely positive influences, graphs with bounded in-degree, or graphs with bounded path length, the number of fragments is polynomial. Since the running time of Frag is polynomial in the number of fragments, the algorithm itself is polynomial for these special cases. For graphs with positive influences we show that every buyer also buys at lower prices, a property that does not hold for arbitrary graphs. The algorithm FixHighest improves the running time on these graphs by exploiting this property. Furthermore, we introduce variations of this basic model. Delayed propagation of influences and delayed awareness of the product can be implemented in our basic model by substituting nodes and arcs with simple gadgets. In the chapter on Dynamic Product Pricing we allow price changes, which raises the complexity even for graphs with solely positive or negative influences. Concerning Perishable Product Pricing, i.e., the selling of products that are usable for some time and can be rebought afterward, the principal problem is computing the revenue that a given price can generate within some time horizon. In general, this problem is #P-hard, and the algorithm Break runs in pseudo-polynomial time. For polynomially computable revenue, we investigate once more the complexity of finding the best price.
We conclude the thesis with short results in topics of Cooperative Pricing, Initial Value as Parameter, Two Product Pricing, and Bounded Additive Influence.
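  The polynomial-time revenue computation for a fixed price can be pictured as a round-based simulation. The following is an illustrative reconstruction under assumed model details, not the thesis's algorithm Frag; the names `revenue`, `values` and `influence` are hypothetical:

  ```python
  # Hypothetical sketch of the additive-influence selling process: in each
  # synchronous round, every non-buyer whose current value reaches the price
  # buys, and each new buyer adds its arc weights to its neighbors' values.

  def revenue(values, influence, price):
      """values: {node: initial value}; influence: {(u, v): additive weight};
      returns price times the number of eventual buyers."""
      current = dict(values)
      buyers = set()
      changed = True
      while changed:
          changed = False
          # buyers of this round, determined before influences propagate
          new_buyers = {v for v in current
                        if v not in buyers and current[v] >= price}
          if new_buyers:
              changed = True
              buyers |= new_buyers
              for u in new_buyers:          # propagate additive influences
                  for (a, b), w in influence.items():
                      if a == u and b not in buyers:
                          current[b] += w
      return price * len(buyers)

  # A chain 1 -> 2 -> 3 of positive influences: at price 5, node 1 buys
  # immediately and each purchase pushes the next node over the threshold.
  assert revenue({1: 5, 2: 3, 3: 1}, {(1, 2): 2, (2, 3): 4}, 5) == 15
  ```

  The loop runs at most once per node, so the whole computation is polynomial in the size of the graph, matching the statement above that revenue for a given price is polynomially computable.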

- Convex Analysis for Processing Hyperspectral Images and Data from Hadamard Spaces (2017)
- This thesis brings together convex analysis and hyperspectral image processing. Convex analysis is the study of convex functions and their properties. Convex functions are important because they admit minimization by efficient algorithms, and the solution of many optimization problems can be formulated as the minimization of a convex objective function, extending far beyond the classical image restoration problems of denoising, deblurring and inpainting. At the heart of convex analysis is the duality mapping induced within the class of convex functions by the Fenchel transform. In recent decades, efficient optimization algorithms have been developed based on the Fenchel transform and the concept of infimal convolution. The infimal convolution is of similar importance in convex analysis as the convolution in classical analysis. In particular, the infimal convolution with scaled parabolas gives rise to the one-parameter family of Moreau-Yosida envelopes, which approximate a given function from below while preserving its minimum value and minimizers. The closely related proximal mapping replaces the gradient step in a recently developed class of efficient first-order iterative minimization algorithms for non-differentiable functions. For a finite convex function, the proximal mapping coincides with a gradient step of its Moreau-Yosida envelope. Efficient algorithms are needed in hyperspectral image processing, where several hundred intensity values measured at each spatial point give rise to large data volumes. In the **first part** of this thesis, we are concerned with models and algorithms for hyperspectral unmixing. As part of this thesis, a hyperspectral imaging system was taken into operation at the Fraunhofer ITWM Kaiserslautern to evaluate the developed algorithms on real data.
Motivated by missing-pixel defects common in current hyperspectral imaging systems, we propose a total variation regularized unmixing model for incomplete and noisy data for the case when pure spectra are given. We minimize the proposed model by a primal-dual algorithm based on the proximal mapping and the Fenchel transform. To solve the unmixing problem when only a library of pure spectra is provided, we study a modification which incorporates a sparsity regularizer into the model. We end the first part with the convergence analysis of a multiplicative algorithm derived by optimization transfer. The proposed algorithm extends well-known multiplicative update rules for minimizing the Kullback-Leibler divergence to solve a hyperspectral unmixing model in the case when no prior knowledge of the pure spectra is given. In the **second part** of this thesis, we study the properties of Moreau-Yosida envelopes, first for functions defined on Hadamard manifolds, which are (possibly) infinite-dimensional Riemannian manifolds of nonpositive curvature, and then for functions defined on Hadamard spaces. In particular, we extend to infinite-dimensional Riemannian manifolds an expression for the gradient of the Moreau-Yosida envelope in terms of the proximal mapping. With the help of this expression we show that a sequence of functions converges to a given limit function in the sense of Mosco if the corresponding Moreau-Yosida envelopes converge pointwise at all scales. Finally, we extend this result to the more general setting of Hadamard spaces. As the reverse implication is already known, this unites two definitions of Mosco convergence on Hadamard spaces which have both been used in the literature and whose equivalence had not previously been established.
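  As a reminder of the standard definitions (in generic notation, not necessarily the thesis's), the Moreau-Yosida envelope and the proximal mapping of a function \(f\) at scale \(\lambda > 0\) are

  ```latex
  e_\lambda f(x) \;=\; \inf_{y}\Big( f(y) + \tfrac{1}{2\lambda}\,\|x-y\|^2 \Big),
  \qquad
  \operatorname{prox}_{\lambda f}(x) \;=\; \operatorname*{arg\,min}_{y}\Big( f(y) + \tfrac{1}{2\lambda}\,\|x-y\|^2 \Big),
  ```

  and for a finite convex \(f\) one has \(\nabla e_\lambda f(x) = \tfrac{1}{\lambda}\big(x - \operatorname{prox}_{\lambda f}(x)\big)\), so a proximal step is exactly a gradient step on the envelope, as stated in the abstract.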

- Small self-centralizing subgroups in defect groups of finite classical groups (2017)
- In this thesis, we consider a problem from the modular representation theory of finite groups. Lluís Puig asked whether the order of the defect groups of a block \( B \) of the group algebra of a given finite group \( G \) can always be bounded in terms of the order of the vertices of an arbitrary simple module lying in \( B \). In characteristic \( 2 \), there are examples showing that this is not possible in general, whereas in odd characteristic no such examples are known. For instance, it is known that the answer to Puig's question is positive in the case that \( G \) is a symmetric group, by work of Danz, Külshammer, and Puig. Motivated by this, we study the cases where \( G \) is a finite classical group in non-defining characteristic or one of the finite groups \( G_2(q) \) or \( {}^3D_4(q) \) of Lie type, again in non-defining characteristic. Here, we generalize Puig's original question by replacing the vertices occurring in his question by arbitrary self-centralizing subgroups of the defect groups. We derive positive and negative answers to this generalized question. In addition, we determine the vertices of the unipotent simple \( GL_2(q) \)-module labeled by the partition \( (1,1) \) in characteristic \( 2 \). This is done using a method known as the Brauer construction.

- Graph Coloring Applications and Defining Sets in Graph Theory (2001)
- The main theme of this thesis is graph coloring applications and defining sets in graph theory. As in the case of block designs, finding defining sets seems to be a difficult problem, and there is no general conclusion. Hence we confine ourselves here to some special types of graphs such as bipartite graphs, complete graphs, etc. In this work, four new concepts of defining sets are introduced:
  - defining sets for perfect (maximum) matchings,
  - defining sets for independent sets,
  - defining sets for edge colorings,
  - defining sets for maximal (maximum) cliques.

  Furthermore, some algorithms to find and construct defining sets are introduced, and a review of some known kinds of defining sets in graph theory is also incorporated. Chapter 2 introduces the basic definitions and the notation used in this work. Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings. Different kinds of graph colorings and their applications are the subject of Chapter 4. Chapter 5 deals with defining sets in graph coloring; new results are discussed along with existing research results, and an algorithm is introduced which makes it possible to determine a defining set of a graph coloring. In Chapter 6, cliques are discussed, and an algorithm for determining cliques using their defining sets is presented. Several examples are included.
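  The idea of a defining set for a coloring can be made concrete on a toy example. The sketch below is illustrative and not from the thesis: it brute-force counts the proper \(k\)-colorings extending a partial coloring, so a partial coloring with exactly one extension is a defining set.

  ```python
  # Illustrative sketch: a defining set for a proper k-coloring is a partial
  # coloring admitting exactly one extension to a full proper coloring.
  # We count extensions by exhaustive search on a small graph.

  from itertools import product

  def extensions(n, edges, k, partial):
      """Count proper k-colorings of an n-vertex graph that extend
      `partial` (a dict vertex -> color)."""
      count = 0
      for col in product(range(k), repeat=n):
          if any(col[v] != c for v, c in partial.items()):
              continue                       # does not extend the partial coloring
          if all(col[u] != col[v] for u, v in edges):
              count += 1                     # proper coloring found
      return count

  path = [(0, 1), (1, 2), (2, 3)]            # path on 4 vertices, chromatic number 2
  assert extensions(4, path, 2, {}) == 2     # two proper 2-colorings overall
  assert extensions(4, path, 2, {0: 0}) == 1 # fixing one endpoint forces the rest
  ```

  Here the single precolored vertex `{0: 0}` is a defining set: the alternating pattern of the path leaves no further choice.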

- Portfolio Optimization with Risk Constraints in the View of Stochastic Interest Rates (2017)
- We discuss the portfolio selection problem of an investor/portfolio manager in an arbitrage-free financial market where a money market account, coupon bonds and a stock are traded continuously. We allow for stochastic interest rates and in particular consider one- and two-factor Vasicek models for the instantaneous short rates. In both cases we consider a complete and an incomplete market setting by adding a suitable number of bonds. The goal of the investor is to find a portfolio which maximizes expected utility from terminal wealth under budget and present expected shortfall (PESF) risk constraints. We analyze this portfolio optimization problem in both complete and incomplete financial markets in three different cases: (a) when the PESF risk is minimal, (b) when the PESF risk lies between its minimum and maximum, and (c) without risk constraints. Case (a) corresponds to the portfolio insurer problem; in (b) the risk constraint is binding, i.e., it is satisfied with equality; and (c) corresponds to the unconstrained Merton investment. In all cases we find the optimal terminal wealth and portfolio process using the martingale method and Malliavin calculus, respectively. In particular, we solve the dual problem explicitly in the incomplete market settings. We compare the optimal terminal wealth in the cases mentioned above using numerical examples. Without risk constraints, we further compare the investment strategies for complete and incomplete markets numerically.
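  The one-factor Vasicek model for the instantaneous short rate referred to above has the standard dynamics (generic notation, not necessarily the thesis's parametrization):

  ```latex
  \mathrm{d}r_t = \kappa\,(\theta - r_t)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t ,
  ```

  with mean-reversion speed \(\kappa\), long-run level \(\theta\), volatility \(\sigma\) and a Brownian motion \(W\); the two-factor version adds a second (possibly correlated) factor to the short rate.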

- Asymptotics for change-point tests and change-point estimators (2017)
- In change-point analysis the question of interest is whether the observations follow one model or whether there is at least one time point at which the model has changed. This leads to two subfields, testing for a change and estimating the time of change. This thesis considers both parts, restricted to testing and estimating for at most one change-point. A well-known example is based on independent observations having one change in the mean. Based on the likelihood ratio test, a test statistic with an asymptotic Gumbel distribution was derived for this model. As it is well known that the corresponding convergence rate is very slow, modifications of the test using a weight function were considered; these tests perform better. We focus on this class of test statistics. The first part gives a detailed introduction to the techniques for analysing test statistics and estimators. To this end, we consider the multivariate mean change model and focus on the effects of the weight function. In the case of change-point estimators, we can distinguish between the assumption of a fixed size of change (fixed alternative) and the assumption that the size of the change converges to 0 (local alternative). In particular, the fixed case is rarely analysed in the literature. We show how to pass from the proof for the fixed alternative to the proof for the local alternative. Finally, we give a simulation study for heavy-tailed multivariate observations. The main part of this thesis focuses on two points: first, analysing test statistics and, secondly, analysing the corresponding change-point estimators. In both cases, we first consider a change in the mean for independent observations, but relaxing the moment condition. Based on a robust estimator for the mean, we derive a new type of change-point test having a randomized weight function. Secondly, we analyse non-linear autoregressive models with unknown regression function.
Based on neural networks, test statistics and estimators are derived for correctly specified as well as for misspecified situations. This part extends the literature, as we analyse test statistics and estimators that are not only based on the sample residuals. In both sections, the one on tests and the one on change-point estimators, we end by giving regularity conditions on the model as well as on the parameter estimator. Finally, a simulation study for the neural network based test and estimator is given. We discuss the behaviour under correct specification and misspecification and apply the neural network based test and estimator to two data sets.
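  For orientation, the classical (unweighted) CUSUM statistic for a change in the mean can be sketched as follows. This is the textbook statistic, not the weighted or randomized variants studied in the thesis:

  ```python
  # Illustrative classical CUSUM statistic for a single change in the mean:
  #   T_n = max_k |S_k - (k/n) S_n| / (sigma_hat * sqrt(n)),
  # where S_k is the k-th partial sum; large values indicate a change.

  import math

  def cusum_statistic(x):
      n = len(x)
      total = sum(x)
      mean = total / n
      sigma = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
      s = 0.0
      best = 0.0
      for k, v in enumerate(x, start=1):
          s += v
          best = max(best, abs(s - k * total / n))   # deviation at split k
      return best / (sigma * math.sqrt(n))

  flat = [0.1, -0.2, 0.05, -0.1, 0.15, -0.05, 0.0, 0.1]
  shifted = flat[:4] + [v + 3.0 for v in flat[4:]]   # mean jump after time 4
  assert cusum_statistic(shifted) > cusum_statistic(flat)
  ```

  A weight function, as discussed above, would rescale the deviation at each split point \(k\) to improve power for changes near the boundary of the observation window.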

- Modeling Road Roughness with Conditional Random Fields (2016)
- A vehicle's fatigue damage is a highly relevant figure in the complete vehicle design process. Long-term observations and statistical experiments help to determine the influence of different parts of the vehicle, the driver and the surrounding environment. This work focuses on modeling one of the most important influence factors of the environment: road roughness. The quality of the road depends heavily on several surrounding factors, which can be used to build mathematical models. Such models can be used for the extrapolation of information and an estimation of the environment for statistical studies. The target quantity we focus on in this work is the discrete International Roughness Index, or discrete IRI. The class of models we use and evaluate is a discriminative classification model called the Conditional Random Field. We develop a suitable model specification and present new variants of stochastic optimization to train the model efficiently. The model is also applied to simulated and real-world data to show the strengths of our approach.
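  As background, a linear-chain conditional random field models the conditional distribution of a label sequence \(y\) (here, discrete IRI classes) given observations \(x\) in the standard form (the thesis's feature design may differ):

  ```latex
  p(y \mid x) \;=\; \frac{1}{Z(x)} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k\, f_k(y_{t-1}, y_t, x, t) \Big),
  ```

  where the \(f_k\) are feature functions with weights \(\lambda_k\) and \(Z(x)\) is the normalizing partition function; being discriminative, the model never specifies a distribution over the observations \(x\) themselves.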

- Signature Standard Bases over Principal Ideal Rings (2016)
- By using Gröbner bases of ideals of polynomial algebras over a field, many implemented algorithms manage to give exciting examples and counterexamples in commutative algebra and algebraic geometry. Part A of this thesis focuses on extending the concepts of Gröbner bases and standard bases to polynomial algebras over the ring of integers and over its factor rings, i.e. to \(\mathbb{Z}[x]\) and \(\mathbb{Z}_m[x]\). Moreover, we implemented two algorithms for this case in Singular which use different approaches to detecting useless computations: the classical Buchberger algorithm and an F5 signature-based algorithm. Part B includes two algorithms that compute the graded Hilbert depth of a graded module over a polynomial algebra \(R\) over a field, as well as the depth and the multigraded Stanley depth of a factor of monomial ideals of \(R\). The two algorithms provide faster computations and examples that led B. Ichim and A. Zarojanu to a counterexample to a question of J. Herzog. A. Duval, B. Goeckner, C. Klivans and J. Martin have recently discovered a counterexample to the Stanley Conjecture. In Part C we prove that the Stanley Conjecture holds in some special cases. Part D explores General Néron Desingularization in the frame of Noetherian local domains of dimension 1. We have constructed and implemented in Singular an algorithm that computes a strong Artin Approximation for Cohen-Macaulay local rings of dimension 1.

- Gröbner Bases over Extension Fields of \(\mathbb{Q}\) (2016)
- Gröbner bases are one of the most powerful tools in computer algebra and commutative algebra, with applications in algebraic geometry and singularity theory. From the theoretical point of view, these bases can be computed over any field using Buchberger's algorithm. In practice, however, the computational efficiency depends on the arithmetic of the coefficient field. In this thesis, we consider Gröbner bases computations over two types of coefficient fields. First, consider a simple extension \(K=\mathbb{Q}(\alpha)\) of \(\mathbb{Q}\), where \(\alpha\) is an algebraic number, and let \(f\in \mathbb{Q}[t]\) be the minimal polynomial of \(\alpha\). Second, let \(K'\) be the algebraic function field over \(\mathbb{Q}\) with transcendental parameters \(t_1,\ldots,t_m\), that is, \(K' = \mathbb{Q}(t_1,\ldots,t_m)\). In particular, we present efficient algorithms for computing Gröbner bases over \(K\) and \(K'\). Moreover, we present an efficient method for computing syzygy modules over \(K\). To compute Gröbner bases over \(K\), starting from the ideas of Noro [35], we proceed by joining \(f\) to the ideal to be considered, adding \(t\) as an extra variable. But instead of avoiding superfluous S-pair reductions by inverting algebraic numbers, we achieve the same goal by applying modular methods as in [2,4,27], that is, by inferring information in characteristic zero from information in characteristic \(p > 0\). For suitable primes \(p\), the minimal polynomial \(f\) is reducible over \(\mathbb{F}_p\). This allows us to apply modular methods once again, on a second level, with respect to the modular factors of \(f\). The algorithm thus resembles a divide and conquer strategy and is in particular easily parallelizable. Moreover, using a similar approach, we present an algorithm for computing syzygy modules over \(K\). 
On the other hand, to compute Gröbner bases over \(K'\), our new algorithm first specializes the parameters \(t_1,\ldots,t_m\) to reduce the problem from \(K'[x_1,\ldots,x_n]\) to \(\mathbb{Q}[x_1,\ldots,x_n]\). The algorithm then computes a set of Gröbner bases of specialized ideals. From this set of Gröbner bases with coefficients in \(\mathbb{Q}\), it obtains a Gröbner basis of the input ideal using sparse multivariate rational interpolation. In their current state, these algorithms are probabilistic in the sense that, as for other modular Gröbner basis computations, an effective final verification test is only known for homogeneous ideals or for local monomial orderings. The presented timings show that for most examples, our algorithms, which have been implemented in SINGULAR [17], are considerably faster than other known methods.
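  A key step in the modular methods mentioned above is recovering rational coefficients in characteristic zero from their images modulo \(p\) (or modulo a product of primes). The sketch below shows this rational-reconstruction step in isolation; it is an illustration of the general technique, not code from the thesis:

  ```python
  # Rational reconstruction via the extended Euclidean algorithm: given
  # c = a/b mod m with |a|, b small relative to sqrt(m), recover (a, b).
  # The loop stops once the remainder drops below sqrt(m/2).

  def rational_reconstruction(c, m):
      """Return (a, b) with a/b congruent to c mod m, or None on failure."""
      r0, r1 = m, c % m
      s0, s1 = 0, 1                    # invariant: s_i * c == r_i (mod m)
      bound = (m // 2) ** 0.5
      while r1 > bound:
          q = r0 // r1
          r0, r1 = r1, r0 - q * r1
          s0, s1 = s1, s0 - q * s1
      if s1 == 0 or abs(s1) > bound:
          return None                  # no small rational preimage exists
      if s1 < 0:
          r1, s1 = -r1, -s1            # normalize to positive denominator
      return r1, s1

  # Recover 2/3 from its image modulo the prime 10007.
  c = 2 * pow(3, -1, 10007) % 10007    # pow(.., -1, m) is the modular inverse
  assert rational_reconstruction(c, 10007) == (2, 3)
  ```

  In an actual modular Gröbner basis computation this step is applied coefficient by coefficient to the bases computed in positive characteristic, which is why a final verification test, as discussed above, is needed.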

- Interest Rate Modeling - The Potential Approach and Multi-Curve Potential Models (2016)
- This thesis is concerned with interest rate modeling by means of the potential approach. The contribution of this work is twofold. First, by making use of the potential approach and the theory of affine Markov processes, we develop a general class of rational models for the term structure of interest rates, which we refer to as "the affine rational potential model". These models feature positive interest rates and analytic pricing formulae for zero-coupon bonds, caps, swaptions, and European currency options. We present some concrete models to illustrate the scope of the affine rational potential model and calibrate a model specification to real-world market data. Second, we develop a general family of "multi-curve potential models" for post-crisis interest rates. Our models feature positive stochastic basis spreads, positive term structures, and analytic pricing formulae for interest rate derivatives. This modeling framework is also flexible enough to accommodate negative interest rates and positive basis spreads.
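  In the potential approach, zero-coupon bond prices are generated by a single positive process; as a reminder of the generic pricing formula (notation not necessarily the thesis's), the time-\(t\) price of a bond maturing at \(T\) is

  ```latex
  P(t,T) \;=\; \frac{\mathbb{E}\!\left[\, V_T \mid \mathcal{F}_t \,\right]}{V_t},
  ```

  where \(V\) is a positive state-price density; when \(V\) is a supermartingale, bond prices stay below 1, which corresponds to the positivity of interest rates emphasized above.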