Based on data provided by the appraisal committee (Gutachterausschuss) of the city of Kaiserslautern, we investigate which factors influence the market value of a developed property. Using these findings, a formula as simple as possible is to be derived that yields an estimate of the market value and takes into account the purchase prices achieved in the past. Multiple linear regression suggests itself as a method for this task. We do not go into the theoretical foundations here; they can be found in any book on mathematical statistics, or in [1]. The analysis of the data largely follows the approach described by Angelika Schwarz in [1]. Her results, however, cannot be transferred directly, since the properties considered there were undeveloped. Since the statistical evaluation of large data sets involves an immense computational effort, the use of professional statistical software is indispensable. The program S-Plus 2.0 (PC version for Windows) was available; all computations and all graphics in this report were produced with S-Plus.
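The fitting step described above can be sketched as follows. The report's computations were done in S-Plus, so this Python/NumPy version is purely illustrative; the predictors (living area, building age) and all coefficients are made-up assumptions, not values from the study.

```python
import numpy as np

# Hypothetical data: market value driven by living area and building age.
rng = np.random.default_rng(0)
area = rng.uniform(80, 250, size=200)      # living area in m^2 (assumed)
age = rng.uniform(0, 60, size=200)         # building age in years (assumed)
value = 1500 * area - 800 * age + 50000 + rng.normal(0, 5000, size=200)

# Design matrix with an intercept column; ordinary least squares fit
X = np.column_stack([np.ones_like(area), area, age])
beta, *_ = np.linalg.lstsq(X, value, rcond=None)
intercept, coef_area, coef_age = beta

# Estimate for a sample property (150 m^2, 20 years old)
prediction = intercept + coef_area * 150 + coef_age * 20
```

The fitted coefficients recover the assumed influence of each factor; the same mechanics underlie the S-Plus analysis, whatever the actual predictors in the report were.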
We consider the problem of evacuating several regions threatened by river flooding, where sufficient time is available to plan ahead. To ensure a smooth evacuation procedure, our model includes the decisions of which regions to assign to which shelter and when evacuation orders should be issued, such that roads do not become congested.
Due to uncertainty in the weather forecast, several possible scenarios are considered simultaneously in a robust optimization framework. To solve the resulting integer program, we apply a tabu search algorithm based on decomposing the problem into more tractable subproblems. Computational experiments on random instances and on an instance based on data from Kulmbach, Germany, show considerable improvement compared to an MIP solver provided with a strong starting solution.
Time Series and Modal Analysis
(1987)
This thesis is to be understood as part of the large project at the University of Kaiserslautern that, under the name Technomathematik, works towards the urgently needed communication between engineering and mathematics. The main guide was Natke's book "Einführung in Theorie und Praxis der Zeitreihen- und Modalanalyse"; the thesis presents the essential ideas of indirect system identification used there, as well as the probabilistic and physical-engineering background.
In programming, the identification of individuals arises in many forms: memory locations, data types, values, classes, objects, functions, and the like must be identified, either by definition or by selection. The remarks on identification by pointing or naming are kept comparatively short, whereas identification by description is given much more room. The reason is that pointing and naming require no structured language forms, while describing does. The discussion of the different forms of functional descriptions is so extensive because of their significance for the conceptual world of functional programming. The forms of functional descriptions could also have been treated in the mosaic piece "Programmzweck versus Programmform", in the context of the concept of functional programs presented there, but the author believes the present essay is the more appropriate place.
We present a convenient notation for positive/negative-conditional equations. The idea is to merge rules specifying the same function by using case-, if-, match-, and let-expressions. Based on the presented macro-rule construct, positive/negative-conditional equational specifications can be written on a higher level. A rewrite system translates the macro-rule constructs into positive/negative-conditional equations.
The Internet has fallen prey to its most successful service, the World-Wide Web. The networks do not keep up with the demands incurred by the huge number of Web surfers. Thus, it takes longer and longer to obtain the information one wants to access via the World-Wide Web. Many solutions to the problem of network congestion have been developed in distributed systems research in general and in distributed file and database systems in particular. The introduction of caching and replication strategies has proven to help in many situations, and therefore these techniques are also applied to the WWW. Although most problems and associated solutions are known, some circumstances are different with the Web, forcing the adaptation of known strategies. This paper gives an overview of these differences and of currently deployed, developed, and evaluated solutions.
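One building block of the caching strategies surveyed here is replacement by recency of use. A minimal LRU cache can be sketched as follows; this is illustrative only, since real Web caches also handle expiry, validation, and object sizes:

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache mapping URLs to documents (sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, url):
        if url not in self.store:
            return None
        self.store.move_to_end(url)          # mark as most recently used
        return self.store[url]

    def put(self, url, document):
        if url in self.store:
            self.store.move_to_end(url)
        self.store[url] = document
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict the least recently used entry
```

With capacity 2, inserting a third document evicts whichever of the first two was touched least recently.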
We have developed a middleware framework for workgroup environments that can support distributed software development and a variety of other application domains requiring document management and change management for distributed projects. The framework enables hypermedia-based integration of arbitrary legacy and new information resources available via a range of protocols, not necessarily known in advance to us as the general framework developers nor even to the environment instance designers. The repositories in which such information resides may be dispersed across the Internet and/or an organizational intranet. The framework also permits a range of client models for user and tool interaction, and applies an extensible suite of collaboration services, including but not limited to multi-participant workflow and coordination, to their information retrievals and updates. That is, the framework is interposed between clients, services and repositories - thus "middleware". We explain how our framework makes it easy to realize a comprehensive collection of workgroup and workflow features we culled from a requirements survey conducted by NASA.
Abstract: Winding number transitions from quantum to classical behavior are studied in the case of the 1+1 dimensional Mottola-Wipf model with the space coordinate on a circle for exploring the possibility of obtaining transitions of second order. The model is also studied as a prototype theory which demonstrates the procedure of such investigations. In the model at hand we find that even on a circle the transitions remain those of first order.
Abstract: Following our earlier investigations we examine the quantum-classical winding number transition in the Abelian-Higgs system. It is demonstrated that the winding number transition in this system is of the smooth second order type in the full range of parameter space. Comparison of the action of classical vortices with that of the sphaleron supports our finding.
In recent years several computational systems and techniques for theorem proving by analogy have been developed. The obvious practical question, however, as to whether and when to use analogy has been neglected badly in these developments. This paper addresses this question, identifies situations where analogy is useful, and discusses the merits of theorem proving by analogy in these situations. The results can be generalized to other domains.
Using particle methods to solve the Boltzmann equation for rarefied gases numerically, in realistic streaming problems, huge differences in the total number of particles per cell arise. In order to overcome the resulting numerical difficulties the application of a weighted particle concept is well-suited. The underlying idea is to use different particle masses in different cells depending on the macroscopic density of the gas. Discrepancy estimates and numerical results are given.
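The weighted-particle idea can be sketched as follows: with a single global particle mass, the particle count per cell varies with the macroscopic density, whereas choosing the per-cell mass proportional to the local density balances the counts. The densities and target count below are hypothetical numbers, not values from the paper.

```python
def counts_global_mass(densities, mass):
    # One global particle mass: cells with low density get very few particles.
    return [round(rho / mass) for rho in densities]

def counts_adaptive_mass(densities, target_count=100):
    # Per-cell mass proportional to the local density: every cell is
    # represented by roughly the same number of simulation particles.
    masses = [rho / target_count for rho in densities]
    counts = [round(rho / m) for rho, m in zip(densities, masses)]
    return counts, masses
```

For densities differing by two orders of magnitude, the global-mass scheme leaves the dilute cell almost empty, while the adaptive scheme keeps both cells at the target count.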
Given a finite set of points in the plane and a forbidden region R, we want to find a point X not an element of int(R) such that the weighted sum of distances to all given points is minimized. This location problem is a variant of the well-known Weber Problem, where we measure the distance by polyhedral gauges and allow each of the weights to be positive or negative. The unit ball of a polyhedral gauge may be any convex polyhedron containing the origin. This large class of distance functions allows very general (practical) settings, such as asymmetry, to be modeled. Each given point is allowed to have its own gauge, and the forbidden region R enables us to include negative information in the model. Additionally, the use of negative and positive weights allows us to include the level of attraction or repulsion of a new facility. Polynomial algorithms and structural properties for this global optimization problem (d.c. objective function and a non-convex feasible set), based on combinatorial and geometrical methods, are presented.
We introduce a class of models for time series of counts which include INGARCH-type models as well as log linear models for conditionally Poisson distributed data. For those processes, we formulate simple conditions for stationarity and weak dependence with a geometric rate. The coupling argument used in the proof serves as a role model for a similar treatment of integer-valued time series models based on other types of thinning operations.
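A minimal simulation of an INGARCH(1,1) process illustrates the model class discussed above. The parameter values are hypothetical; alpha + beta < 1 reflects the kind of stationarity condition formulated in the paper, and the stationary mean is omega / (1 - alpha - beta).

```python
import numpy as np

# Simulate X_t | past ~ Poisson(lam_t) with
# lam_t = omega + alpha * X_{t-1} + beta * lam_{t-1}  (INGARCH(1,1) sketch).
rng = np.random.default_rng(1)
omega, alpha, beta = 1.0, 0.3, 0.4   # hypothetical; alpha + beta < 1
T = 10000
lam = np.empty(T)
X = np.empty(T, dtype=int)
lam[0] = omega / (1 - alpha - beta)  # start at the stationary mean intensity
X[0] = rng.poisson(lam[0])
for t in range(1, T):
    lam[t] = omega + alpha * X[t - 1] + beta * lam[t - 1]
    X[t] = rng.poisson(lam[t])

mean_theory = omega / (1 - alpha - beta)
```

Over a long sample path the empirical mean of the counts settles near the theoretical stationary mean, consistent with the geometric weak dependence of such processes.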
By means of the limit and jump relations of classical potential theory the framework of a wavelet approach on a regular surface is established. The properties of a multiresolution analysis are verified, and a tree algorithm for fast computation is developed based on numerical integration. As applications of the wavelet approach some numerical examples are presented, including the zoom-in property as well as the detection of high frequency perturbations. At the end we discuss a fast multiscale representation of the solution of (exterior) Dirichlet's or Neumann's boundary-value problem corresponding to regular surfaces.
This work is dedicated to the wavelet modelling of regional and temporal variations of the Earth's gravitational potential observed by GRACE. In the first part, all required mathematical tools and methods involving spherical wavelets are introduced. Then we apply our method to monthly GRACE gravity fields. A strong seasonal signal can be identified, which is restricted to areas where large-scale redistributions of continental water mass are expected. This assumption is analyzed and verified by comparing the time series of regionally obtained wavelet coefficients of the gravitational signal derived from hydrology models and of the gravitational potential observed by GRACE. The results are in good agreement with previous studies and illustrate that wavelets are an appropriate tool to investigate regional time-variable effects in the gravitational field.
In this paper we introduce a multiscale technique for the analysis of deformation phenomena of the Earth. Classically, the basis functions in use are globally defined and show polynomial character. In consequence, only a global analysis of deformations is possible, so that, for example, the water load of an artificial reservoir can hardly be modelled in that way. Until now, the alternative of a local analysis could only be established by assuming the investigated region to be flat. In what follows we propose a local analysis based on tools (Navier scaling functions and wavelets) that take the (spherical) surface of the Earth into account. Our approach, in particular, enables us to perform a zooming-in procedure. In fact, the concept of Navier wavelets is formulated in such a way that subregions with larger or smaller data density can accordingly be modelled with a higher or lower resolution of the model, respectively.
Wavelets on closed surfaces in Euclidean space R3 are introduced starting from a scale discrete wavelet transform for potentials harmonic down to a spherical boundary. Essential tools for approximation are integration formulas relating an integral over the sphere to suitable linear combinations of functional values (resp. normal derivatives) on the closed surface under consideration. A scale discrete version of multiresolution is described for potential functions harmonic outside the closed surface and regular at infinity. Furthermore, an exact fully discrete wavelet approximation is developed in case of band-limited wavelets. Finally, the role of wavelets is discussed in three problems, namely (i) the representation of a function on a closed surface from discretely given data, (ii) the (discrete) solution of the exterior Dirichlet problem, and (iii) the (discrete) solution of the exterior Neumann problem.
A multiscale method is introduced using spherical (vector) wavelets for the computation of the earth's magnetic field within source regions of ionospheric and magnetospheric currents. The considerations are essentially based on two geomathematical keystones, namely (i) the Mie representation of solenoidal vector fields in terms of toroidal and poloidal parts and (ii) the Helmholtz decomposition of spherical (tangential) vector fields. Vector wavelets are shown to provide adequate tools for multiscale geomagnetic modelling in the form of a multiresolution analysis, thereby completely circumventing the numerical obstacles caused by vector spherical harmonics. The applicability and efficiency of the multiresolution technique is tested with real satellite data.
In this paper, the reflection and refraction of a plane wave at an interface between two half-spaces composed of triclinic crystalline material is considered. It is shown that, due to the incidence of a plane wave, three types of waves, namely quasi-P (qP), quasi-SV (qSV) and quasi-SH (qSH), are generated, governed by the propagation condition involving the acoustic tensor. A simple procedure is presented for the calculation of the phase velocities of all three quasi waves. It is assumed that the direction of particle motion is neither parallel nor perpendicular to the direction of propagation. Relations are established between the directions of motion and propagation. The expressions for the reflection and refraction coefficients of qP, qSV and qSH waves are obtained. Numerical results for the reflection and refraction coefficients are presented for different types of anisotropic media and for different types of incident waves. Graphical representations are given for incident qP waves; for incident qSV and qSH waves, numerical data are presented in two tables.
Wannier-Stark states for semiconductor superlattices in strong static fields, where the interband Landau-Zener tunneling cannot be neglected, are rigorously calculated. The lifetime of these metastable states is found to show multiscale oscillations as a function of the static field, which is explained by an interaction with above-barrier resonances. An equation expressing the absorption spectrum of semiconductor superlattices in terms of the resonance Wannier-Stark states is obtained and used to calculate the absorption spectrum in the region of high static fields.
In this work, we discuss the resonance states of a quantum particle in a periodic potential plus a static force. Originally this problem was formulated for a crystalline electron subject to a static electric field and is nowadays known as the Wannier-Stark problem. We describe a novel approach to the Wannier-Stark problem developed in recent years. This approach allows one to compute the complex energy spectrum of a Wannier-Stark system as the poles of a rigorously constructed scattering matrix and, in this sense, solves the Wannier-Stark problem without any approximation. The suggested method is very efficient from the numerical point of view and has proven to be a powerful analytic tool for Wannier-Stark resonances appearing in different physical systems such as optical or semiconductor superlattices.
In this report we give an overview of the development of our new Waldmeister prover for equational theories. We elaborate a systematic stepwise design process, starting with the inference system for unfailing Knuth-Bendix completion and ending up with an implementation which avoids the main diseases today's provers suffer from: overindulgence in time and space. Our design process is based on a logical three-level system model consisting of basic operations for inference step execution, aggregated inference machine, and overall control strategy. Careful analysis of the inference system for unfailing completion has revealed the crucial points responsible for time and space consumption. For the low level of our model, we introduce specialized data structures and algorithms speeding up the running system and cutting it down in size - both by one order of magnitude compared with standard techniques. Flexible control of the mid-level aggregation inside the resulting prover is made possible by a corresponding set of parameters. Experimental analysis shows that this flexibility is a point of high importance. We go on with some implementation guidelines we have found valuable in the field of deduction. The resulting new prover shows that our design approach is promising. We compare our system's throughput with that of an established system and finally demonstrate how two very hard problems could be solved by Waldmeister.
With the rapid spread of CAx techniques in the German automotive industry, the need grows for better integration of CAx systems into the process chains and for mastering the flows of product information. Against this background, a shift of CAx system architectures from closed, monolithic systems to openly integrated ones has become apparent in recent years. In the following, this process and its implications for users and for system vendors are analyzed. Starting from an initiative of the German automotive industry, the project ANICA (Analysis of Interfaces of various CAD/CAM-Systems) was launched. In this project, the interfaces to the system kernels of several CAx vendors are examined, and a concept for cooperating CAx systems in the automotive industry is developed.
This paper presents the systematic synthesis of a fairly complex digital circuit and its CPLD implementation as an assemblage of communicating asynchronous sequential circuits. The example, a VMEbus controller, was chosen because it has to control concurrent processes and to arbitrate conflicting requests.
Vigenère Encryption
(1999)
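The classical Vigenère encryption named in the title can be sketched as follows; restricting input and key to uppercase Latin letters is a simplifying assumption.

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Shift each letter of `text` by the corresponding key letter (mod 26).

    Assumes `text` and `key` consist of Latin letters only.
    """
    result = []
    key_shifts = [ord(k) - ord('A') for k in key.upper()]
    for i, ch in enumerate(text.upper()):
        shift = key_shifts[i % len(key_shifts)]
        if decrypt:
            shift = -shift
        result.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
    return ''.join(result)
```

Encrypting the classic example "ATTACKATDAWN" with the key "LEMON" yields "LXFOPVEFRNHR", and decryption inverts the shift.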
The methods of Inductive Logic Programming (ILP) [Mug93] have the task of learning, from a set of positive examples E+, a set of negative examples E-, and background knowledge B, a logic program P consisting of a set of definite clauses C : l0 ← l1, …, ln. Since the hypothesis space for Horn logic is infinite, many methods restrict the hypothesis language to a finite one. Often one also tries to restrict the hypothesis language so that only programs can be learned for which consistency is decidable. Another motivation for restricting the hypothesis language is to exploit knowledge about the target program that is already available. For certain applications, function-free hypothesis clauses are sufficient, or it is known that the target program is functional.
Verbal Subject Indexing
(1998)
This script gives an introduction to the history, terminology, and methods of verbal subject indexing. Procedures established in the German- and English-speaking worlds, such as the "Regeln für den Schlagwortkatalog (RSWK)" and the "Library of Congress Subject Headings (LCSH)", are described in detail, and aspects of cooperation and suitability for online catalogues are discussed. Characteristics as well as advantages and disadvantages of automatic indexing are illustrated using the procedure "Maschinelle Indexierung zur verbesserten Literaturerschließung in Online Systemen (MILOS)".
The mathematical modelling of problems in science and engineering often leads to partial differential equations in time and space with boundary and initial conditions. The boundary value problems can be written as extremal problems (principle of minimal potential energy), as variational equations (principle of virtual power), or as classical boundary value problems. There are connections concerning existence and uniqueness results between these formulations, which will be investigated using the powerful tools of functional analysis. The first part of the lecture is devoted to the analysis of linear elliptic boundary value problems given in a variational form. The second part deals with the numerical approximation of the solutions of the variational problems. Galerkin methods such as FEM and BEM are the main tools. The h-version will be discussed, and an error analysis will be done. Examples, especially from elasticity theory, demonstrate the methods.
The shortest path problem in which the \((s,t)\)-paths \(P\) of a given digraph \(G =(V,E)\) are compared with respect to the sum of their edge costs is one of the best known problems in combinatorial optimization. The paper is concerned with a number of variations of this problem having different objective functions like bottleneck, balanced, minimum deviation, algebraic sum, \(k\)-sum and \(k\)-max objectives, \((k_1, k_2)\)-max, \((k_1, k_2)\)-balanced and several types of trimmed-mean objectives. We give a survey on existing algorithms and propose a general model for those problems not yet treated in literature. The latter is based on the solution of resource constrained shortest path problems with equality constraints which can be solved in pseudo-polynomial time if the given graph is acyclic and the number of resources is fixed. In our setting, however, these problems can be solved in strongly polynomial time. Combining this with known results on \(k\)-sum and \(k\)-max optimization for general combinatorial problems, we obtain strongly polynomial algorithms for a variety of path problems on acyclic and general digraphs.
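The equality-constrained resource model on an acyclic digraph can be sketched as a pseudo-polynomial dynamic program over (node, consumed resource) states. The single integer resource and the small instance format below are simplifying assumptions for illustration, not the paper's general multi-resource setting.

```python
from math import inf

def constrained_shortest_path(n, edges, s, t, resource_budget):
    """Min-cost (s,t)-path whose resource consumption equals resource_budget.

    edges: list of (u, v, cost, resource); nodes 0..n-1 are assumed to be
    numbered in topological order, so the digraph is acyclic.
    """
    # dist[v][r] = min cost of an s-v path consuming exactly r resource units
    dist = [[inf] * (resource_budget + 1) for _ in range(n)]
    dist[s][0] = 0.0
    for u, v, c, r in sorted(edges):          # process edges in topological order
        for used in range(resource_budget + 1 - r):
            if dist[u][used] + c < dist[v][used + r]:
                dist[v][used + r] = dist[u][used] + c
    return dist[t][resource_budget]           # inf if no path meets the equality
```

On a four-node example with two (s,t)-paths of resource consumption 2 and 3, the equality constraint selects exactly one of them depending on the budget.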
Conditional Compilation (CC) is frequently used as a variation mechanism in software product lines (SPLs). However, as an SPL evolves, the variable code realized by CC erodes in the sense that it becomes overly complex and difficult to understand and maintain. As a result, SPL productivity goes down and puts the expected advantages more and more at risk. To investigate this variability erosion and keep productivity above a sufficiently good level, in this paper we 1) investigate several erosion symptoms in an industrial SPL and 2) present a variability improvement process that includes two major improvement strategies. While one strategy optimizes variable code within the scope of CC, the other transitions CC to a new variation mechanism called Parameterized Inclusion. Both improvement strategies can be conducted automatically, and the result of the CC optimization is provided. Related issues such as applicability and cost of the improvement are also discussed.
Value Preserving Strategies and a General Framework for Local Approaches to Optimal Portfolios
(1999)
We present some new general results on the existence and form of value preserving portfolio strategies in a general semimartingale setting. The concept of value preservation will be derived via a mean-variance argument. It will also be embedded into a framework for local approaches to the problem of portfolio optimisation.
We present a distributed system, Dott, for approximately solving the Traveling Salesman Problem (TSP) based on the Teamwork method. So-called experts and specialists work independently and in parallel for given time periods. For TSP, specialists are tour construction algorithms, and experts use modified genetic algorithms in which, after each application of a genetic operator, the resulting tour is locally optimized before it is added to the population. After a given time period the work of each expert and specialist is judged by a referee. A new start population, including selected individuals from each expert and specialist, is generated by the supervisor, based on the judgments of the referees. Our system is able to find better tours than each of the experts or specialists working alone. Also, results comparable to those of single runs can be found much faster by a team.
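The local-optimization step applied after each genetic operator can be illustrated with a 2-opt improvement pass. The abstract does not name the concrete optimizer used in Dott, so 2-opt here is an assumed stand-in for a typical tour local optimizer.

```python
def tour_length(tour, dist):
    """Total length of a closed tour under the distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse tour segments while an improving 2-exchange exists."""
    tour = list(tour)
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # For i == 0, skip j == n - 1: those edges share the city tour[0]
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Swap edges (a,b),(c,d) for (a,c),(b,d) if that shortens the tour
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

On four cities at the corners of a unit square, a crossing tour of length about 4.83 is repaired to the optimal perimeter tour of length 4.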
Rules are an important knowledge representation formalism in constructive problem solving. On the other hand, object orientation is an essential key technology for maintaining large knowledge bases as well as software applications. Trying to take advantage of the benefits of both paradigms, we integrated Prolog and Smalltalk to build a common base architecture for problem solving. This approach has proven to be useful in the development of two knowledge-based systems for planning and configuration design (CAPlan and Idax). Both applications use Prolog as an efficient computational source for the evaluation of knowledge represented as rules.
Retrieval of cases is one important step within the case-based reasoning paradigm. We propose an improvement of this stage in the process model for finding the most similar cases with an average effort of O(log2 n), where n is the number of cases. The basic idea of the algorithm is to use the heterogeneity of the search space for a density-based structuring and to employ this precomputed structure, a k-d tree, for efficient case retrieval according to a given similarity measure sim. In addition to illustrating the basic idea, we present the experimental results of a comparison of four different k-d tree generating strategies and introduce the notion of virtual bounds as a new strategy that significantly reduces the retrieval effort from a more pragmatic perspective. The presented approach is fully implemented within the Patdex system, a case-based reasoning system for diagnostic applications in engineering domains.
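The idea of a k-d tree as a precomputed retrieval structure can be sketched as follows. Plain squared Euclidean distance stands in for the similarity measure sim, and median splitting stands in for the generating strategies compared in the paper; both are simplifying assumptions.

```python
def build_kdtree(points, depth=0):
    """Build a k-d tree by median split on the cycling axis."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, query, depth=0, best=None):
    """Return (squared distance, point) of the nearest stored case."""
    if node is None:
        return best
    point, left, right = node
    axis = depth % len(query)
    d = sum((a - b) ** 2 for a, b in zip(point, query))
    if best is None or d < best[0]:
        best = (d, point)
    near, far = (left, right) if query[axis] < point[axis] else (right, left)
    best = nearest(near, query, depth + 1, best)
    # Descend the far branch only if the splitting plane is closer than best
    if (query[axis] - point[axis]) ** 2 < best[0]:
        best = nearest(far, query, depth + 1, best)
    return best
```

Because whole branches are pruned at the splitting planes, retrieval touches only a logarithmic number of cases on average, which is the effect exploited in the proposed process model.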
We present the adaptation process in a CBR application for decision support in the domain of industrial supervision. Our approach uses explanations to approximate relations between a problem description and its solution, and the adaptation process is guided by these explanations (a more detailed presentation has been done in [4]).
The paper explores the role of artificial intelligence techniques in the development of an enhanced software project management tool, which takes account of the emerging requirement for support systems to address the increasing trend towards distributed multi-platform software development projects. In addressing these aims this research devised a novel architecture and framework for use as the basis of an intelligent assistance system for use by software project managers, in the planning and managing of a software project. This paper also describes the construction of a prototype system to implement this architecture and the results of a series of user trials on this prototype system.
Requirements engineering (RE) is a necessary part of the software development process, as it helps customers and designers identify necessary system requirements. If these stakeholders are separated by distance, we argue that a distributed groupware environment supporting a cooperative requirements engineering process must be supplied that allows them to negotiate software requirements. Such a groupware environment must support aspects of joint work relevant to requirements negotiation: synchronous and asynchronous collaboration, telepresence, and teledata. It should also add explicit support for a structured RE process, which includes the team's ability to discuss multiple perspectives during requirements acquisition and traceability. We chose the TeamWave software platform as an environment that supplied the basic collaboration capabilities, and tailored it to fit the specific needs of RE.
To prove difficult theorems in a mathematical field requires substantial knowledge of that field. In this paper a frame-based knowledge representation formalism is presented, which supports a conceptual representation and to a large extent guarantees the consistency of the built-up knowledge bases. We define a semantics of the representation by giving a translation into the underlying logic.
We tested the GYROSTAR ENV-05S. This device is a sensor for angular velocity; therefore, the orientation must be calculated by integration of the angular velocity over time. The device's output is a voltage proportional to the angular velocity and relative to a reference. The tests were done to find out under which conditions it is possible to use this device for estimation of orientation.
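The described integration of angular velocity over time can be sketched as follows. The reference voltage and the scale factor are hypothetical placeholder values, not the ENV-05S data-sheet figures.

```python
def integrate_orientation(voltages, dt, v_ref=2.5, scale=90.0):
    """Estimate the orientation angle (degrees) from sampled gyro voltages.

    voltages: sensor output samples in volts; dt: sampling interval in seconds;
    v_ref and scale (deg/s per volt) are assumed values, not data-sheet figures.
    """
    angle = 0.0
    for v in voltages:
        omega = (v - v_ref) * scale   # angular velocity in deg/s
        angle += omega * dt           # rectangular-rule integration
    return angle
```

For a constant output 0.1 V above the reference, sampled for one second at 100 Hz, the sketch accumulates 9 degrees; any bias in v_ref would accumulate as drift in the same way, which is exactly the failure mode such tests probe.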
Abstract: The calculation of absorption cross sections for minimal scalars in supergravity backgrounds is an important aspect of the investigation of AdS/CFT correspondence and requires a matching of appropriate wave functions. The low energy case has attracted particular attention. In the following the dependence of the cross section on the matching point is investigated. It is shown that the low energy limit is independent of the matching point and hence exhibits universality. In the high energy limit the independence is not maintained, but the result is believed to possess the correct energy dependence.
Universal Shortest Paths
(2010)
We introduce the universal shortest path problem (Univ-SPP) which generalizes both classical and new shortest path problems. Starting with the definition of the even more general universal combinatorial optimization problem (Univ-COP), we show that a variety of objective functions for general combinatorial problems can be modeled if all feasible solutions have the same cardinality. Since this assumption is, in general, not satisfied when considering shortest paths, we give two alternative definitions for Univ-SPP, one based on a sequence of cardinality-constrained subproblems, the other using an auxiliary construction to establish uniform length for all paths between source and sink. Both alternatives are shown to be (strongly) NP-hard, and they can be formulated as quadratic integer or mixed integer linear programs. On graphs with specific assumptions on edge costs and path lengths, the second version of Univ-SPP can be solved as a classical sum shortest path problem.
We have computed ensembles of complete spectra of the staggered Dirac operator using four-dimensional SU(2) gauge fields, both in the quenched approximation and with dynamical fermions. To identify universal features in the Dirac spectrum, we compare the lattice data with predictions from chiral random matrix theory for the distribution of the low-lying eigenvalues. Good agreement is found up to some limiting energy, the so-called Thouless energy, above which random matrix theory no longer applies. We determine the dependence of the Thouless energy on the simulation parameters using the scalar susceptibility and the number variance.