Mechanised reasoning systems and computer algebra systems have apparently different objectives. Their integration is, however, highly desirable, since in many formal proofs both tasks, proving and calculating, have to be performed. Even more importantly, proof and computation are often interwoven and not easily separable. In the context of producing reliable proofs, the question of how to ensure correctness when integrating a computer algebra system into a mechanised reasoning system is crucial. In this contribution, we discuss the correctness problems that arise from such an integration and advocate an approach in which the calculations of the computer algebra system are checked at the calculus level of the mechanised reasoning system. This can be achieved by adding a verbose mode to the computer algebra system which produces high-level protocol information that can be processed by an interface to derive proof plans. Such a proof plan in turn can be expanded to proofs at different levels of abstraction, so the approach is well-suited for producing a high-level verbalised explication as well as a low-level machine-checkable calculus-level proof. We present an implementation of our ideas and exemplify them using an automatically solved extended example.
We propose a specification language for the formalization of data types with partial or non-terminating operations as part of a rewrite-based logical framework for inductive theorem proving. The language requires constructors for designating data items and admits positive/negative conditional equations as axioms in specifications. The (total algebra) semantics for such specifications is based on so-called data models. We present admissibility conditions that guarantee the unique existence of a distinguished data model with properties similar to those of the initial model of a usual equational specification. Since admissibility of a specification requires confluence of the induced rewrite relation, we provide an effectively testable confluence criterion which does not presuppose termination.
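As a toy illustration of such a specification (my example, not one from the paper): take constructors \(0\) and \(s\) for the natural numbers and a partial predecessor operation axiomatized by the positive/negative conditional equations

\[ pred(s(x)) = x, \qquad x \neq 0 \Rightarrow s(pred(x)) = x. \]

Here \(pred(0)\) is deliberately left without a defining equation, which is exactly the kind of partiality the data-model semantics and the admissibility conditions must accommodate.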
To prove difficult theorems in a mathematical field requires substantial knowledge of that field. In this paper a frame-based knowledge representation formalism is presented, which supports a conceptual representation and to a large extent guarantees the consistency of the built-up knowledge bases. We define a semantics of the representation by giving a translation into the underlying logic.
The amount of user interaction is the prime cause of costs in interactive program verification. This paper describes an internal analogy technique that reuses subproofs in the verification of state-based specifications. It identifies common patterns of subproofs and their justifications in order to reuse these subproofs; thus significant savings on the number of user interactions in a verification proof are achievable.
We present an empirical study of mathematical proofs by diagonalization; the aim is their mechanization based on proof planning techniques. We show that these proofs can be constructed according to a strategy that (i) finds an indexing relation, (ii) constructs a diagonal element, and (iii) makes the implicit contradiction of the diagonal element explicit. Moreover, we suggest how diagonal elements can be represented.
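Cantor's argument for the uncountability of the set of functions \(f : \mathbb{N} \to \{0,1\}\) is a convenient instance of the three steps (my illustration, not an example from the paper): an assumed enumeration \(i \mapsto f_i\) provides the indexing relation, the diagonal element is

\[ d(i) := 1 - f_i(i), \]

and the implicit contradiction becomes explicit once \(d = f_k\) for some \(k\), since then \(d(k) = 1 - f_k(k) = 1 - d(k)\).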
The problem of providing connectivity for a collection of applications is largely one of data integration: the communicating parties must agree on the semantics and syntax of the data being exchanged. In earlier papers [mp:jsc1, sg:BSG1], it was proposed that dictionaries of definitions for operators, functions, and symbolic constants can effectively address the problem of semantic data integration. In this paper we extend that earlier work to discuss the important issues in data integration at the syntactic level and propose a set of solutions that are both general, supporting a wide range of data objects with typing information, and efficient, supporting fast transmission and parsing.
Chains of Recurrences (CRs) are a tool for expediting the evaluation of elementary expressions over regular grids. CR-based evaluations of elementary expressions consist of three major stages: CR construction, simplification, and evaluation. This paper addresses CR simplifications. The goal of CR simplification is to manipulate a CR such that the resulting expression is more efficient to evaluate. We develop CR simplification strategies which take the computational context of CR evaluations into account. Realizing that it is infeasible to always optimally simplify a CR expression, we give heuristic strategies which, in most cases, result in optimal or close-to-optimal expressions. The motivations behind our proposed strategies are discussed and the results are illustrated by various examples.
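To make the construction and evaluation stages concrete, here is a minimal sketch (mine, not the paper's implementation) of a pure-sum CR for a cubic over a regular grid: construction takes forward differences of the expression at the first grid points, and evaluation then advances across the grid with a handful of additions per point.

```python
# Sketch: evaluate p(x) = x**3 over the grid x0 + i*h with a pure-sum
# Chain of Recurrences {c0, +, c1, +, c2, +, c3}.

def cr_construct(p, x0, h, order):
    """CR coefficients = forward differences of p at x0, x0+h, ..."""
    row = [p(x0 + i * h) for i in range(order + 1)]
    coeffs = [row[0]]
    for _ in range(order):
        row = [b - a for a, b in zip(row, row[1:])]
        coeffs.append(row[0])
    return coeffs

def cr_evaluate(coeffs, n):
    """Yield p(x0), p(x0+h), ... by shifting the CR: `order` additions each."""
    c = list(coeffs)
    for _ in range(n):
        yield c[0]
        for i in range(len(c) - 1):
            c[i] += c[i + 1]

p = lambda x: x ** 3
cr = cr_construct(p, x0=0.0, h=0.5, order=3)
assert all(abs(v - p(i * 0.5)) < 1e-9 for i, v in enumerate(cr_evaluate(cr, 10)))
```

The simplification stage studied in the paper operates between these two steps, rewriting the CR so that the inner update loop becomes as cheap as possible.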
Algorithms in Singular
(1999)
Top-down and bottom-up theorem proving approaches each have specific advantages and disadvantages. Bottom-up provers profit from strong redundancy control and suffer from the lack of goal-orientation, whereas top-down provers are goal-oriented but have weak calculi when their proof lengths are considered. In order to integrate both approaches, our method is to achieve cooperation between a top-down and a bottom-up prover: the top-down prover generates subgoal clauses, which are then processed by a bottom-up prover. We discuss theoretical aspects of this methodology and introduce techniques for a relevancy-based filtering of generated subgoal clauses. Experiments with a model elimination and a superposition-based prover reveal the high potential of our cooperation approach. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).
We examine an approach for demand-driven cooperative theorem proving. We briefly point out the problems arising from the use of common success-driven cooperation methods, and we propose the application of our approach of requirement-based cooperative theorem proving. This approach allows for a better orientation on the current needs of provers in comparison with conventional cooperation concepts. We introduce an abstract framework for requirement-based cooperation and describe two instantiations of it: requirement-based exchange of facts, and sub-problem division and transfer via requests. Finally, we report on experimental studies conducted in the areas of superposition and unfailing completion. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).
HOT is an automated higher-order theorem prover based on HTE, an extensional higher-order tableaux calculus (Kohlhase 95). The first part of the paper introduces a variant of the calculus which closely corresponds to the proof procedure implemented in HOT. The second part discusses HOT's design that can be characterized as a concurrent Blackboard architecture. We show the usefulness of the implementation by including benchmark results for over one hundred solved problems from logic and set theory.
Orderings on polynomial interpretations of operators represent a powerful technique for proving the termination of rewriting systems. One of the main problems of polynomial orderings concerns the choice of the right interpretation for a given rewriting system. It is very difficult to develop techniques for solving this problem. Here, we present three new heuristic approaches: (i) guidelines for dealing with special classes of rewriting systems, (ii) an algorithm for choosing appropriate special polynomials, as well as (iii) an extension of the original polynomial ordering which supports the generation of suitable interpretations. All these heuristics will be applied to examples in order to illustrate their practical relevance.
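A standard textbook example (not taken from the paper) shows the mechanism: for the single rule \(f(g(x)) \to g(f(x))\), the interpretations \([f](x) = 2x\) and \([g](x) = x + 1\) over the naturals give

\[ [f(g(x))] = 2(x+1) = 2x + 2 \;>\; 2x + 1 = [g(f(x))], \]

so every rewrite step strictly decreases the interpreted value and the system terminates. The heuristics of the paper address how to find such interpretations systematically.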
We study families \(V\) of curves in \(\mathbb{P}^2(\mathbb{C})\) of degree \(d\) having exactly \(r\) singular points of given topological or analytic types. We derive new sufficient conditions for \(V\) to be T-smooth (smooth of the expected dimension), respectively to be irreducible. For T-smoothness these conditions involve new invariants of curve singularities and are conjectured to be asymptotically proper, i.e., optimal up to a constant factor. To obtain the results, we study the Castelnuovo function, prove the irreducibility of the Hilbert scheme of zero-dimensional schemes associated to a cluster of infinitely near points of the singularities, and deduce new vanishing theorems for ideal sheaves of zero-dimensional schemes in \(\mathbb{P}^2\). Moreover, we give a series of examples of cuspidal curves where the family \(V\) is reducible, but where \(\pi_1(\mathbb{P}^2 \setminus C)\) coincides (and is abelian) for all \(C \in V\).
After the notion of Gröbner bases and an algorithm for constructing them were introduced by Buchberger [Bu1, Bu2], algebraic geometers have used Gröbner bases as the main computational tool for many years, either to prove a theorem or to disprove a conjecture or just to experiment with examples in order to obtain a feeling about the structure of an algebraic variety. Nontrivial problems coming either from logic, mathematics or applications usually lead to nontrivial Gröbner basis computations, which is the reason why several improvements have been provided by many people and have been implemented in general purpose systems like Axiom, Maple, Mathematica, Reduce, etc., and systems specialized for use in algebraic geometry and commutative algebra like CoCoA, Macaulay and Singular. The present paper starts with an introduction to some concepts of algebraic geometry which should be understood by people with (almost) no knowledge in this field. In the second chapter we introduce standard bases (a generalization of Gröbner bases to non-well-orderings), which are needed for applications to local algebraic geometry (singularity theory), and a method for computing syzygies and free resolutions. The last chapter describes a new algorithm for computing the normalization of a reduced affine ring and gives an elementary introduction to singularity theory. Then we describe algorithms, using standard bases, to compute infinitesimal deformations and obstructions, which are basic for the deformation theory of isolated singularities. It is impossible to list all papers where Gröbner bases have been used in local and global algebraic geometry, and even more impossible to give an overview of these contributions. We have, therefore, included only references to papers mentioned in this tutorial paper. The interested reader will find many more in the other contributions of this volume and in the literature cited there.
Algorithmic ideal theory
(1999)
Algebraic geometers have used Gröbner bases as the main computational tool for many years, either to prove a theorem or to disprove a conjecture or just to experiment with examples in order to obtain a feeling about the structure of an algebraic variety. Non-trivial mathematical problems usually lead to non-trivial Gröbner basis computations, which is the reason why several improvements and efficient implementations have been provided by algebraic geometers (for example, the systems CoCoA, Macaulay and SINGULAR). The present paper starts with an introduction to some concepts of algebraic geometry which should be understood by people with (almost) no knowledge in this field. In the second chapter we introduce standard bases (a generalization of Gröbner bases to non-well-orderings), which are needed for applications to local algebraic geometry (singularity theory), and a method for computing syzygies and free resolutions. In the third chapter several algorithms for primary decomposition of polynomial ideals are presented, together with a discussion of improvements and preferable choices. We also describe a newly invented algorithm for computing the normalization of a reduced affine ring. The last chapter gives an elementary introduction to singularity theory and then describes algorithms, using standard bases, to compute infinitesimal deformations and obstructions, which are basic for the deformation theory of isolated singularities. It is impossible to list all papers where Gröbner bases have been used in local and global algebraic geometry, and even more impossible to give an overview of these contributions. We have, therefore, included only a few references to papers which contain interesting applications and which are not mentioned in this tutorial paper. The interested reader will find many more in the other contributions of this volume and in the literature cited there.
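For readers who want to experiment without one of the specialized systems, a Gröbner basis computation of the kind this tutorial builds on can be reproduced in a few lines of SymPy (my illustration; the example ideal is hypothetical):

```python
from sympy import groebner, symbols

x, y = symbols('x y')
# Ideal generated by a circle and a hyperbola.
G = groebner([x**2 + y**2 - 1, x*y - 1], x, y, order='lex')
print(G)
```

With the lexicographic ordering the basis contains a polynomial in \(y\) alone, so it triangularizes the system much as Gaussian elimination does for linear equations; this elimination property underlies many of the geometric applications mentioned above.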
Singular algebraic curves, their existence, deformation, and families (from the local and global point of view) have attracted the continuous attention of algebraic geometers since the last century. The aim of our paper is to give an account of results, new trends and bibliography related to the geometry of equisingular families of algebraic curves on smooth algebraic surfaces over an algebraically closed field of characteristic zero. This theory is founded in basic works of Plücker, Severi, Segre, and Zariski, and has tight links to and finds important applications in singularity theory, the topology of complex algebraic curves and surfaces, and real algebraic geometry.
If \(A\) generates a bounded cosine function on a Banach space \(X\) then the negative square root \(B\) of \(A\) generates a holomorphic semigroup, and this semigroup is the conjugate potential transform of the cosine function. This connection is studied in detail, and it is used for a characterization of cosine function generators in terms of growth conditions on the semigroup generated by \(B\). This characterization relies on new results on the inversion of the vector-valued conjugate potential transform.
Let \(X\) be a Banach lattice. Necessary and sufficient conditions for a linear operator \(A:D(A) \to X\), \(D(A)\subseteq X\), to be of positive \(C^0\)-scalar type are given. In addition, the question is discussed which conditions on the Banach lattice imply that every operator of positive \(C^0\)-scalar type is necessarily of positive scalar type.
We consider regularizing iterative procedures for ill-posed problems with random and nonrandom additive errors. The rate of square-mean convergence for iterative procedures with random errors is studied. A comparison theorem is established for the convergence of procedures with and without additive errors.
Vigenere-Verschlüsselung
(1999)
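The entry above is a title only; as a reminder of the scheme it names, here is a minimal sketch of the classical Vigenère cipher over the alphabet A-Z (my illustration, not the course material itself):

```python
def vigenere(text, key, decrypt=False):
    """Shift each letter of `text` by the corresponding key letter."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text.upper()):
        k = ord(key.upper()[i % len(key)]) - ord('A')
        out.append(chr((ord(ch) - ord('A') + sign * k) % 26 + ord('A')))
    return ''.join(out)

c = vigenere('ATTACKATDAWN', 'LEMON')          # 'LXFOPVEFRNHR'
assert vigenere(c, 'LEMON', decrypt=True) == 'ATTACKATDAWN'
```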
The computational complexity of combinatorial multiple objective programming problems is investigated. NP-completeness and #P-completeness results are presented. Using two definitions of approximability, general results are presented which outline limits for approximation algorithms. The performance of the well-known tree and Christofides heuristics for the TSP is investigated in the multicriteria case with respect to the two definitions of approximability.
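For reference, the single-criterion tree heuristic that the paper examines in the multicriteria setting works as follows: build a minimum spanning tree and shortcut a depth-first traversal of it into a tour, which for metric instances is at most twice optimal. A sketch with networkx on a hypothetical instance (my code, not the paper's):

```python
import itertools
import networkx as nx

points = [(0, 0), (0, 3), (4, 0), (4, 3), (2, 1)]
G = nx.Graph()
for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
    G.add_edge(i, j, weight=((p[0]-q[0])**2 + (p[1]-q[1])**2) ** 0.5)

mst = nx.minimum_spanning_tree(G)                        # MST of the instance
tour = list(nx.dfs_preorder_nodes(mst, source=0)) + [0]  # shortcut DFS order
length = sum(G[u][v]['weight'] for u, v in zip(tour, tour[1:]))
print(tour, round(length, 2))
```

In the multicriteria case analyzed in the paper, the edge weights become vectors and the guarantee has to be reinterpreted with respect to the chosen definition of approximability.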
Convex Operators in Vector Optimization: Directional Derivatives and the Cone of Decrease Directions
(1999)
The paper is devoted to the investigation of directional derivatives and the cone of decrease directions for convex operators on Banach spaces. We prove a condition for the existence of directional derivatives which does not assume regularity of the ordering cone \(K\). This result is then used to prove that for continuous convex operators the cone of decrease directions can be represented in terms of the directional derivatives. Decrease directions are those for which the directional derivative lies in the negative interior of the ordering cone \(K\). Finally, we show that the continuity of the convex operator can be replaced by its \(K\)-boundedness.
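In standard notation (a reminder, not a new result), the two objects studied here are, for a convex operator \(f\) between Banach spaces with ordering cone \(K\),

\[ f'(x; d) = \lim_{t \downarrow 0} \frac{f(x + td) - f(x)}{t}, \qquad D(f, x) = \{\, d : f'(x; d) \in -\operatorname{int} K \,\}, \]

where \(D(f,x)\) is the cone of decrease directions whose representation the paper establishes.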
The notion of the balance number introduced in [3, page 139] through a certain set contraction procedure for nonscalarized multiobjective global optimization is represented via a min-max operation on the data of the problem. This representation yields a different computational procedure for the calculation of the balance number and allows us to generalize the approach for problems with countably many performance criteria.
In this paper we consider the problem of locating one new facility in the plane with respect to a given set of existing facilities, where a set of polygonal barriers restricts traveling. This non-convex optimization problem can be reduced to a finite set of convex subproblems if the objective function is a convex function of the travel distances between the new and the existing facilities (as are, e.g., the median and center objective functions). An exact algorithm and a heuristic solution procedure based on this reduction result are developed.
Let \(r_C\) and \(r_D\) be two convex distance functions in the plane with convex unit balls \(C\) and \(D\). Given two points, \(p\) and \(q\), we investigate the bisector, \(B(p,q)\), of \(p\) and \(q\), where distance from \(p\) is measured by \(r_C\) and distance from \(q\) by \(r_D\). We provide the following results. \(B(p,q)\) may consist of many connected components whose precise number can be derived from the intersection of the unit balls, \(C\) and \(D\). The bisector can contain bounded or unbounded two-dimensional areas. Even more surprising, pieces of the bisector may appear inside the region of all points closer to \(p\) than to \(q\). If \(C\) and \(D\) are convex polygons with \(m\) and \(n\) vertices, respectively, the bisector \(B(p,q)\) can consist of at most \(\min(m,n)\) connected components which contain at most \(2(m+n)\) vertices altogether. The former bound is tight, the latter is tight up to an additive constant. We also present an optimal \(O(m+n)\) time algorithm for computing the bisector.
In planar location problems with barriers one considers regions which are forbidden for the siting of new facilities as well as for trespassing. These problems are important since they reflect various real-world situations. The resulting mathematical models have a non-convex objective function and are therefore difficult to tackle using standard methods of location theory, even in the case of simple barrier shapes and distance functions. For the case of center objectives with barrier distances obtained from the rectilinear or Manhattan metric, it is shown that the problem can be solved by identifying a finite dominating set (FDS) whose cardinality is bounded by a polynomial in the size of the problem input. The resulting genuinely polynomial algorithm can be combined with bound computations which are derived from solving closely connected restricted location and network location problems. It is shown that the results can be extended to barrier center problems with respect to arbitrary block norms having four fundamental directions.
In this paper we deal with an NP-hard combinatorial optimization problem, the k-cardinality tree problem in node-weighted graphs. This problem has several applications, which justify the need for efficient methods to obtain good solutions. We review the existing literature on the problem. Then we prove that under the condition that the graph contains exactly one trough, the problem can be solved in polynomial time. For the general NP-hard problem we implemented several local search methods to obtain heuristic solutions, which are qualitatively better than solutions found by constructive heuristics and which require significantly less time than needed to obtain optimal solutions. We used the well-known concepts of genetic algorithms and tabu search with useful extensions. We show that all the methods find optimal solutions for the class of graphs containing exactly one trough. The general performance of our methods as compared to other heuristics is illustrated by numerical results.
Given a finite set of points in the plane and a forbidden region \(R\), we want to find a point \(X \notin \operatorname{int}(R)\) such that the weighted sum of distances to all given points is minimized. This location problem is a variant of the well-known Weber problem, where we measure the distance by polyhedral gauges and allow each of the weights to be positive or negative. The unit ball of a polyhedral gauge may be any convex polyhedron containing the origin. This large class of distance functions allows very general (practical) settings, such as asymmetry, to be modeled. Each given point is allowed to have its own gauge, and the forbidden region \(R\) enables us to include negative information in the model. Additionally, the use of negative and positive weights allows us to include the level of attraction or dislike of a new facility. Polynomial algorithms and structural properties for this global optimization problem (d.c. objective function and a non-convex feasible set), based on combinatorial and geometrical methods, are presented.
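As a small numerical illustration of the distance functions involved (my sketch, not the paper's algorithms): a polyhedral gauge whose unit ball is the polytope \(\{y : a_i \cdot y \le 1\}\) with the origin in its interior is evaluated by \(\gamma(x) = \max_i a_i \cdot x\), and the Weber objective is then a weighted sum of such gauges, with weights of either sign.

```python
import numpy as np

def gauge(A, x):
    """Gauge of the polytope {y : A @ y <= 1} (origin interior) at x."""
    return max(A @ x)  # = min {lam > 0 : x in lam * C}

# Diamond-shaped unit ball |y1| + |y2| <= 1 given by its four facets.
A = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
sites = [np.array([0.0, 0.0]), np.array([3.0, 1.0]), np.array([1.0, 4.0])]
weights = [2.0, 1.0, -0.5]            # a negative weight models repulsion

def weber(x):
    return sum(w * gauge(A, x - s) for w, s in zip(weights, sites))

print(weber(np.array([1.0, 1.0])))
```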
In this paper a new trend is introduced into the field of multicriteria location problems. We combine the robustness approach using the minmax regret criterion together with Pareto-optimality. We consider the multicriteria Weber location problem which consists of simultaneously minimizing a number of weighted sum-distance functions and the set of Pareto-optimal locations as its solution concept. For this problem, we characterize the Pareto-optimal solutions within the set of robust locations for the original weighted sum-distance functions. These locations have both the properties of stability and non-domination which are required in robust and multicriteria programming.
Complete presentations provide a natural solution to the word problem in monoids and groups. Here we give a simple way to construct complete presentations for the direct product of groups, when such presentations are available for the factors. Actually, the construction we are referring to is just the classical construction for direct products of groups, which has been known for a long time, but whose completeness-preserving properties had not been detected. Using this result and some known facts about Coxeter groups, we sketch an algorithm to obtain the complete presentation of any finite Coxeter group. A similar application to Abelian and Hamiltonian groups is mentioned.
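For instance (a textbook case, not a new result): from complete presentations of the two factors of \(\mathbb{Z}_2 \times \mathbb{Z}_2\), the classical direct-product construction yields the convergent rewriting system

\[ a^2 \to \varepsilon, \qquad b^2 \to \varepsilon, \qquad ba \to ab, \]

whose four irreducible words \(\varepsilon, a, b, ab\) are normal forms for the four group elements, so the word problem is solved by reduction to normal form.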
Compared to standard numerical methods for hyperbolic systems of conservation laws, Kinetic Schemes model propagation of information by particles instead of waves. In this article, the wave and the particle concept are shown to be closely related. Moreover, a general approach to the construction of Kinetic Schemes for hyperbolic conservation laws is given which summarizes several approaches discussed by other authors. The approach also demonstrates why Kinetic Schemes are particularly well suited for scalar conservation laws and why extensions to general systems are less natural.
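One classical instance of this construction for the scalar law \(u_t + F(u)_x = 0\) (a sketch of the transport-collapse idea, stated here for orientation rather than taken verbatim from the article) uses the equilibrium particle density

\[ \chi(v; u) = \begin{cases} +1, & 0 < v \le u, \\ -1, & u \le v < 0, \\ 0, & \text{otherwise}, \end{cases} \qquad u = \int_{\mathbb{R}} \chi(v; u)\, dv, \]

and advances one time step by freely transporting the particles with speed \(F'(v)\) and then projecting back to equilibrium:

\[ u^{n+1}(x) = \int_{\mathbb{R}} \chi\bigl(v;\, u^n(x - F'(v)\,\Delta t)\bigr)\, dv. \]

The projection step is where the wave and particle pictures meet, and the scalar case is special precisely because such a simple equilibrium density is available.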
Nonlinear dissipativity, asymptotical stability, and contractivity of (ordinary) stochastic differential equations (SDEs) with some dissipative structure and their discretizations are studied in terms of their moments, in the spirit of Pliss (1977). For this purpose, we introduce the notions and discuss related concepts of dissipativity, growth-bounded and monotone coefficient systems, asymptotical stability and contractivity in the wide and narrow sense, and nonlinear A-stability, AN-stability, B-stability and BN-stability for stochastic dynamical systems, more or less as stochastic counterparts to deterministic concepts. The test class of dissipative SDEs, interpreted in a broad sense, is suggested for stochastic-numerical methods as the natural analogon to dissipative deterministic differential systems. Then, in particular, a kind of mean square calculus is developed, although most of the ideas and analysis can be carried over to the general "stochastic \(L^p\)-case" (\(p \geq 1\)). By this natural restriction, the new stochastic concepts are theoretically meaningful, as in deterministic analysis. Since the choice of step sizes then plays no essential role in the related proofs, we even obtain nonlinear A-stability, AN-stability, B-stability and BN-stability in the mean square sense for this implicit method with respect to appropriate test classes of moment-dissipative SDEs.
Nonlinear stochastic dynamical systems as ordinary stochastic differential equations and stochastic difference methods are in the center of this presentation in view of the asymptotical behaviour of their moments. We study the exponential p-th mean growth behaviour of their solutions as integration time tends to infinity. For this purpose, the concepts of nonlinear contractivity and stability exponents for moments are introduced as generalizations of well-known moment Lyapunov exponents of linear systems. Under appropriate monotonicity assumptions we gain uniform estimates of these exponents from above and below. Eventually, these concepts are generalized to describe the exponential growth behaviour along certain Lyapunov-type functionals.
In this paper we discuss a special class of regularization methods for solving the satellite gravity gradiometry problem in a spherical framework, based on band-limited spherical regularization wavelets. Considering such wavelets as a result of a combination of some regularization methods with a Galerkin discretization based on the spherical harmonic system, we obtain error estimates for the regularized solutions as well as estimates for the regularization parameters and the parameters of band-limitation.
The thirty-nine-year-old Immanuel Kant opens his attempt to introduce the concept of negative magnitudes into philosophy with a fundamental discussion of what use, if any, philosophy can make of mathematics. He puts forward the thesis that mathematics can in principle enter philosophy in only two ways. Kant sees a first possibility in the imitation of mathematical methods in the presentation of philosophy; the other possibility, for him, consists in the concrete application of mathematical theories within the doctrine of nature. The first possibility Kant judges decidedly negatively; his criticism of the programme of presenting philosophy after the mathematical model of an exposition more geometrico demonstrata, first formulated quite generally by Comenius and then favoured by Christian Wolff especially for philosophy, is well known. The use of mathematics in the doctrine of nature Kant does view positively; in the Metaphysical Foundations of Natural Science, a good two decades later, he would even add the famous claim that every particular doctrine of nature contains only as much genuine science as there is mathematics to be found in it. Nevertheless, Kant points out with all clarity the narrow limits of the scope of such applications of mathematics, for in his opinion only the insights belonging to the doctrine of nature would profit from such mathematical treatment.
We consider the "representation type" of the classification problem of vector bundles on a projective curve. We prove that this problem is always either finite, or tame, or wild and we completely describe those curves which are of finite, resp. tame, vector bundle type. We also give a complete list of indecomposable vector bundles for the finite and tame cases.
Let \(\mathbb{P}^2_r\) be the projective plane blown up at \(r\) generic points. Denote by \(E_0, E_1, \dots, E_r\) the strict transform of a generic straight line on \(\mathbb{P}^2\) and the exceptional divisors of the blown-up points on \(\mathbb{P}^2_r\), respectively. We consider the variety \(V_{irr}\) of all irreducible curves \(C\) with \(k\) nodes as the only singularities and give asymptotically nearly optimal sufficient conditions for its smoothness, irreducibility and non-emptiness. Moreover, we extend our conditions for the smoothness and the irreducibility to families of reducible curves. For \(r \leq 9\) we give the complete answer concerning the existence of nodal curves in \(V_{irr}\).
Many discrepancy principles are known for choosing the parameter \(\alpha\) in the regularized operator equation \((T^*T+ \alpha I)x_\alpha^\delta = T^*y^\delta\), \(\|y-y^\delta\|\leq \delta\), in order to approximate the minimal norm least-squares solution of the operator equation \(Tx=y\). In this paper we consider a class of discrepancy principles for choosing the regularization parameter when \(T^*T\) and \(T^*y^\delta\) are approximated by \(A_n\) and \(z_n^\delta\) respectively, with \(A_n\) not necessarily self-adjoint. This procedure generalizes the work of Engl and Neubauer (1985), and particular cases of the results are applicable to the regularized projection method as well as to a degenerate kernel method considered by Groetsch (1990).
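A toy numerical illustration of the classical principle being generalized (my sketch; the operator and data are hypothetical): increase \(\alpha\) until the residual \(\|Tx_\alpha^\delta - y^\delta\|\) first reaches \(\tau\delta\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
T = np.tril(np.ones((n, n))) / n           # discretized integration operator
x_true = np.sin(np.linspace(0, np.pi, n))
delta = 1e-2
y_delta = T @ x_true + delta * rng.standard_normal(n) / np.sqrt(n)

def solve(alpha):                          # (T*T + alpha I) x = T* y_delta
    return np.linalg.solve(T.T @ T + alpha * np.eye(n), T.T @ y_delta)

tau, lo, hi = 1.5, 1e-12, 1e2              # bisection on log(alpha)
for _ in range(60):
    mid = np.sqrt(lo * hi)
    if np.linalg.norm(T @ solve(mid) - y_delta) < tau * delta:
        lo = mid                           # residual still small: raise alpha
    else:
        hi = mid
alpha = np.sqrt(lo * hi)
print(alpha, np.linalg.norm(solve(alpha) - x_true))
```

The bisection exploits that the residual is monotonically nondecreasing in \(\alpha\), which is what makes the discrepancy equation well-posed.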
Spektralsequenzen
(1999)
In the scalar case one knows that a complex normalized function of bounded variation \(\phi\) on \([0,1]\) defines a unique complex regular Borel measure \(\mu\) on \([0,1]\). In this note we show that this is no longer true in general in the vector-valued case, even if \(\phi\) is assumed to be continuous. Moreover, the functions \(\phi\) which determine a countably additive vector measure \(\mu\) are characterized.
A compact subset \(E\) of the complex plane is called removable if all bounded analytic functions on its complement are constant or, equivalently, if its analytic capacity vanishes. The problem of finding a geometric characterization of the removable sets is more than a hundred years old and still not completely solved.
Let \((\mathcal{E}_k)\) be a sequence of experiments with the same finite parameter set. Suppose only that identification of the parameter is possible asymptotically. For large classes of information functionals we show that their exponential rates of convergence towards complete information coincide. As a special case we obtain the rate of the Shannon capacity of product experiments.
In 1979, J.M. Bernardo argued heuristically that in the case of regular product experiments his information theoretic reference prior is equal to Jeffreys' prior. In this context, B.S. Clarke and A.R. Barron showed in 1994, that in the same class of experiments Jeffreys' prior is asymptotically optimal in the sense of Shannon, or, in Bayesian terms, Jeffreys' prior is asymptotically least favorable under Kullback Leibler risk. In the present paper, we prove, based on Clarke and Barron's results, that every sequence of Shannon optimal priors on a sequence of regular iid product experiments converges weakly to Jeffreys' prior. This means that for increasing sample size Kullback Leibler least favorable priors tend to Jeffreys' prior.
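A worked special case (standard, not from the paper): for a Bernoulli(\(\theta\)) experiment the Fisher information is \(I(\theta) = 1/(\theta(1-\theta))\), so Jeffreys' prior is

\[ \pi(\theta) \propto \sqrt{I(\theta)} = \theta^{-1/2}(1-\theta)^{-1/2}, \]

i.e. the Beta(1/2, 1/2) distribution; by the convergence result above, the Shannon-optimal priors of the \(n\)-fold coin-tossing experiment converge weakly to exactly this prior as \(n\) grows.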
The following two norms for holomorphic functions \(F\), defined on the right complex half-plane \(\{z \in \mathbb{C} : \Re(z) > 0\}\) with values in a Banach space \(X\), are equivalent:
\[\begin{eqnarray*} \lVert F \rVert _{H_p(\mathbb{C}_+)} &=& \sup_{a>0}\left( \int_{-\infty}^\infty \lVert F(a+ib) \rVert ^p \, db \right)^{1/p} \mbox{, and} \\ \lVert F \rVert_{H_p(\Sigma_{\pi/2})} &=& \sup_{\lvert \theta \rvert < \pi/2}\left( \int_0^\infty \left \lVert F(re^{i \theta}) \right \rVert ^p\, dr \right)^{1/p}.\end{eqnarray*}\] As a consequence, we derive a description of boundary values of sectorial holomorphic functions, and a theorem of Paley-Wiener type for sectorial holomorphic functions.