We consider the problem of evacuating several regions due to river flooding, where sufficient time is given to plan ahead. To ensure a smooth evacuation procedure, our model includes the decisions of which regions to assign to which shelter and when evacuation orders should be issued, such that roads do not become congested.
Due to uncertainty in the weather forecast, several possible scenarios are considered simultaneously in a robust optimization framework. To solve the resulting integer program, we apply a tabu search algorithm based on decomposing the problem into more tractable subproblems. Computational experiments on random instances and on an instance based on data from Kulmbach, Germany, show considerable improvement compared to an MIP solver provided with a strong starting solution.
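A minimal sketch of a generic tabu search skeleton on which such a decomposition-based heuristic could be built, assuming a neighborhood function and a cost oracle; the names `neighbors` and `evaluate` and all parameter values are illustrative placeholders, not the paper's algorithm:

```python
def tabu_search(initial, neighbors, evaluate, max_iter=500, tenure=20):
    """Generic tabu search: explore neighborhoods of the current solution
    while forbidding recently visited solutions (the tabu list)."""
    best = current = initial
    best_cost = evaluate(initial)
    tabu = []  # recently visited solutions (or move attributes)

    for _ in range(max_iter):
        candidates = [s for s in neighbors(current) if s not in tabu]
        if not candidates:
            break
        # Move to the best non-tabu neighbor (aspiration criteria omitted).
        current = min(candidates, key=evaluate)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)
        cost = evaluate(current)
        if cost < best_cost:
            best, best_cost = current, cost
    return best, best_cost
```

In the setting above, a candidate solution could fix the region-to-shelter assignment and the evacuation start times, with `evaluate` solving the remaining, more tractable subproblem.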
We present a convenient notation for positive/negative-conditional equations. The idea is to merge rules specifying the same function by using case-, if-, match-, and let-expressions. Based on the presented macro-rule construct, positive/negative-conditional equational specifications can be written on a higher level. A rewrite system translates the macro-rule constructs into positive/negative-conditional equations.
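As a hedged illustration of the kind of translation meant here (the concrete surface syntax below is illustrative, not taken from the paper), a rule written with an if-expression corresponds to one positive- and one negative-conditional equation:

```latex
% Illustrative macro-rule using an if-expression:
%   f(x) = if p(x) then g(x) else h(x)
% Translated into a positive and a negative conditional equation:
p(x) = \mathrm{true}    \;\Rightarrow\; f(x) = g(x) \\
p(x) \neq \mathrm{true} \;\Rightarrow\; f(x) = h(x)
```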
In this thesis we extend the worst-case modeling approach as first introduced by Hua and Wilmott (1997) (option pricing in discrete time) and Korn and Wilmott (2002) (portfolio optimization in continuous time) in various directions.
In the continuous-time worst-case portfolio optimization model (as first introduced by Korn and Wilmott (2002)), the financial market is assumed to be under the threat of a crash in the sense that the stock price may crash by an unknown fraction at an unknown time. It is assumed that only an upper bound on the size of the crash is known and that the investor prepares for the worst-possible crash scenario. That is, the investor aims to find the strategy maximizing her objective function in the worst-case crash scenario.
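A minimal numerical sketch of the worst-case idea, not the thesis's solution method: for a constant stock fraction and logarithmic utility, the worst case over admissible crash sizes is attained at the largest crash, and one can search for the fraction maximizing this worst-case value. All parameter values are illustrative, and the simplification of keeping the strategy constant after the crash is mine.

```python
import numpy as np

# Illustrative parameters (not from the thesis): drift, volatility,
# interest rate, horizon, and the known upper bound on the crash size.
mu, sigma, r, T, k_max = 0.08, 0.25, 0.02, 5.0, 0.2

def worst_case_log_utility(pi):
    """Expected log utility of terminal wealth for a *constant* stock
    fraction pi, evaluated in the worst admissible crash scenario.
    (Simplification: the strategy is not adjusted after the crash.)"""
    no_crash_growth = (r + pi * (mu - r) - 0.5 * pi**2 * sigma**2) * T
    # For pi >= 0 the worst crash is the largest admissible one, k_max.
    crash_loss = np.log(1.0 - pi * k_max)
    return no_crash_growth + crash_loss

# Grid search for the fraction maximizing the worst-case objective.
grid = np.linspace(0.0, 0.99, 1000)
values = [worst_case_log_utility(pi) for pi in grid]
pi_star = grid[int(np.argmax(values))]
print(f"worst-case optimal constant fraction: {pi_star:.3f}")
```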
In the first part of this thesis, we consider the model of Korn and Wilmott (2002) in the presence of proportional transaction costs. First, we treat the problem without crashes and show that the value function is the unique viscosity solution of a dynamic programming equation (DPE) and then construct the optimal strategies. We then consider the problem in the presence of crash threats, derive the corresponding DPE and characterize the value function as the unique viscosity solution of this DPE.
In the last part, we consider the worst-case problem with a random number of crashes by proposing a regime switching model in which each state corresponds to a different crash regime. We interpret each of the crash-threatened regimes of the market as states in which a financial bubble has formed which may lead to a crash. In this model, we prove that the value function is a classical solution of a system of DPEs and derive the optimal strategies.
Distributed systems are omnipresent nowadays and networking them is fundamental for the continuous dissemination and thus availability of data. Provision of data in real-time is one of the most important non-functional aspects that safety-critical networks must guarantee. Formal verification of data communication against worst-case deadline requirements is key to certification of emerging x-by-wire systems. Verification allows aircraft to take off, cars to steer by wire, and safety-critical industrial facilities to operate. Therefore, different methodologies for worst-case modeling and analysis of real-time systems have been established. Among them is deterministic Network Calculus (NC), a versatile technique that is applicable across multiple domains such as packet switching, task scheduling, system on chip, software-defined networking, data center networking and network virtualization. NC is a methodology to derive deterministic bounds on two crucial performance metrics of communication systems:
(a) the end-to-end delay data flows experience and
(b) the buffer space required by a server to queue all incoming data.
NC has already seen application in the industry, for instance, basic results have been used to certify the backbone network of the Airbus A380 aircraft.
The NC methodology for worst-case performance analysis of distributed real-time systems consists of two branches. Both share the NC network model but diverge regarding their respective derivation of performance bounds, i.e., their analysis principle. NC was created as a deterministic system theory for queueing analysis and its operations were later cast in a (min,+)-algebraic framework. This branch is known as algebraic Network Calculus (algNC). While algNC can efficiently compute bounds on delay and backlog, the algebraic manipulations do not allow NC to attain the most accurate bounds achievable for the given network model. These tight performance bounds can only be attained with the other, newly established branch of NC, the optimization-based analysis (optNC). However, the only optNC analysis that can currently derive tight bounds was proven to be computationally infeasible even for the analysis of moderately sized networks other than simple sequences of servers.
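A minimal sketch of the two basic deterministic NC bounds for standard textbook curves, assuming a token-bucket arrival curve alpha(t) = b + r*t and a rate-latency service curve beta(t) = R*(t - T)+; these closed forms are classical NC results, not contributions of this thesis:

```python
def delay_bound(b, r, R, T):
    """Worst-case delay bound for a token-bucket flow (burst b, rate r)
    crossing a rate-latency server (rate R, latency T); requires r <= R."""
    assert r <= R, "stability requires the arrival rate not to exceed the service rate"
    return T + b / R  # horizontal deviation between arrival and service curve

def backlog_bound(b, r, R, T):
    """Worst-case backlog bound (vertical deviation) for the same setting."""
    assert r <= R
    return b + r * T

# Example: 2 Mbit burst, 10 Mbit/s flow through a 100 Mbit/s server with 1 ms latency.
print(delay_bound(b=2e6, r=10e6, R=100e6, T=1e-3))    # seconds
print(backlog_bound(b=2e6, r=10e6, R=100e6, T=1e-3))  # bits
```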
This thesis makes various contributions in the area of algNC: accuracy within the existing framework is improved, distributivity of the sensor network calculus analysis is established, and most significantly the algNC is extended with optimization principles. They allow algNC to derive performance bounds that are competitive with optNC. Moreover, the computational efficiency of the new NC approach is improved such that this thesis presents the first NC analysis that is both accurate and computationally feasible at the same time. It allows NC to scale to larger, more complex systems that require formal verification of their real-time capabilities.
The Internet has fallen prey to its most successful service, the World-Wide Web. The networks do not keep up with the demands incurred by the huge number of Web surfers. Thus, it takes longer and longer to obtain the information one wants to access via the World-Wide Web. Many solutions to the problem of network congestion have been developed in distributed systems research in general and in distributed file and database systems in particular. The introduction of caching and replication strategies has proven to help in many situations, and therefore these techniques are also applied to the WWW. Although most problems and associated solutions are known, some circumstances are different with the Web, forcing the adaptation of known strategies. This paper gives an overview of these differences and of currently deployed, developed, and evaluated solutions.
We have developed a middleware framework for workgroup environments that can support distributed software development and a variety of other application domains requiring document management and change management for distributed projects. The framework enables hypermedia-based integration of arbitrary legacy and new information resources available via a range of protocols, not necessarily known in advance to us as the general framework developers nor even to the environment instance designers. The repositories in which such information resides may be dispersed across the Internet and/or an organizational intranet. The framework also permits a range of client models for user and tool interaction, and applies an extensible suite of collaboration services, including but not limited to multi-participant workflow and coordination, to their information retrievals and updates. That is, the framework is interposed between clients, services and repositories - thus "middleware". We explain how our framework makes it easy to realize a comprehensive collection of workgroup and workflow features we culled from a requirements survey conducted by NASA.
We consider a highly qualified individual with respect to her choice between two distinct career paths. She can choose between a mid-level management position in a large company and an executive position in a smaller listed company with the possibility to directly affect the company's share price. She invests in the financial market, including the share of the smaller listed company. The utility-maximizing strategy for consumption, investment, and work effort is derived in closed form for logarithmic utility. The power utility case is discussed as well. Conditions for the individual to pursue her career with the smaller listed company are obtained. The participation constraint is formulated in terms of the salary differential between the two positions. The smaller listed company can offer a lower salary; the salary shortfall is offset by the possibility to benefit from her work effort by acquiring own-company shares. This gives insight into aspects of optimal contract design. Our framework is applicable to the pharmaceutical and financial industries and the IT sector.
Crowd condition monitoring concerns both crowd safety and business performance metrics. The research problem addressed here is a crowd condition estimation approach that enables and supports the supervision of mass events by first responders and marketing experts, and that also targets social scientists, journalists, historians, public relations experts, community leaders, and political researchers. Real-time insights into the crowd condition are desired for quick reactions, and historic crowd condition measurements are desired for thorough post-event analysis.
This thesis aims to provide a systematic understanding of different approaches to crowd condition estimation based on 2.4 GHz signals and their variation in crowds of people; it proposes and categorizes possible sensing approaches, applies supervised machine learning algorithms, and presents experimental evaluation results. I categorize four sensing approaches. First, stationary sensors sensing crowd-centric signal sources. Second, stationary sensors sensing other stationary signal sources (either opportunistic or special-purpose signal sources). Third, a few volunteers within the crowd equipped with sensors sensing surrounding crowd-centric device signals (individually, in a single group, or collaboratively) within a small region. Fourth, a small subset of participants within the crowd equipped with sensors and roaming throughout a whole city to sense wireless crowd-centric signals.
I present and evaluate an approach with meshed stationary sensors sensing crowd-centric devices. This was demonstrated and empirically evaluated within an industrial project during three of the world's largest automotive exhibitions. With over 30 meshed stationary sensors in an optimized setup across 6400 m², I achieved a mean absolute error of the crowd density of just 0.0115 people per square meter, which corresponds to an average of below 6% mean relative error with respect to the ground truth. I validate the contextual crowd condition anomaly detection method during the visit of Chancellor Merkel and during a large press conference at the exhibition. I present the approach of opportunistically sensing stationary wireless signal variations and validate it during the Hannover CeBIT exhibition with 80 opportunistic sources, achieving a crowd condition estimation relative error of below 12% while relying only on surrounding signals influenced by humans. Pursuing this approach, I present an approach with dedicated signal sources and sensors to estimate the condition of shared office environments. I demonstrate methods that are viable even for detecting low-density static crowds, such as people sitting at their desks, and evaluate this in an eight-person office scenario.
I present the approach of mobile crowd density estimation by a group of sensors detecting other crowd-centric devices in their proximity, with a classification accuracy of the crowd density of 66% (an improvement of over 22% over an individual sensor) during the crowded Oktoberfest event. I propose a collaborative mobile sensing approach which makes the system more robust against variations that result from the background of the people rather than the crowd condition, using differential features that take into account the link structure between actively scanning devices, the ratio between values observed by different devices, the ratio of discovered crowd devices over time, the team-wise diversity of discovered devices, the number of semi-continuous device visibility periods, and device visibility durations. I validate the approach in multiple experiments, including the Kaiserslautern European soccer championship public viewing event, and evaluate the collaborative mobile sensing approach with a crowd condition estimation accuracy of 77%, outperforming previous methods by 21%. I demonstrate the feasibility of deploying the wireless crowd condition sensing approach at a citywide scale during an event in Zurich with 971 actively sensing participants, outperforming the reference method by 24% on average.
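A minimal sketch of the supervised learning step common to these approaches, assuming the features are simple counts and signal statistics extracted from wireless scans and the target is a ground-truth density; the feature names, the random-forest choice, and the synthetic data are illustrative stand-ins, not the thesis pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Illustrative feature matrix: per time window, number of distinct devices
# discovered, mean RSSI, and RSSI variance (stand-ins for scan-derived features).
n = 500
devices = rng.integers(0, 200, size=n)
mean_rssi = -60 + 0.05 * devices + rng.normal(0, 2, size=n)
rssi_var = 5 + 0.02 * devices + rng.normal(0, 1, size=n)
X = np.column_stack([devices, mean_rssi, rssi_var])

# Synthetic ground-truth density in people per square meter.
y = 0.005 * devices + rng.normal(0, 0.05, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("MAE [people/m^2]:", mean_absolute_error(y_test, model.predict(X_test)))
```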
Winding number transitions from quantum to classical behavior are studied in the case of the 1+1 dimensional Mottola-Wipf model with the space coordinate on a circle, for exploring the possibility of obtaining transitions of second order. The model is also studied as a prototype theory which demonstrates the procedure of such investigations. In the model at hand we find that even on a circle the transitions remain those of first order.