## Scientific Articles

- The Xilinx Zynq: A Modern System on Chip for Software Defined Radios (2016)
- Software defined radios can be implemented on general purpose processors (CPUs), e.g. based on a PC. A processor offers high flexibility: it can be used not only to process the data samples, but also to control receiver functions, display a waterfall, or run demodulation software. However, processors can only handle signals of limited bandwidth due to their comparatively low processing speed. For signals of high bandwidth, the SDR algorithms have to be implemented as custom-designed digital circuits on an FPGA chip. An FPGA provides very high processing speed, but lacks flexibility and user interfaces. Recently, the FPGA manufacturer Xilinx has introduced a hybrid system on chip called the Zynq, which combines both approaches. It features a dual-core ARM Cortex-A9 processor and an FPGA, offering the flexibility of a processor together with the processing speed of an FPGA on a single chip. The Zynq is therefore very interesting for use in SDRs. In this paper, the application of the Zynq and its evaluation board (the Zedboard) is discussed. As an example, a direct sampling receiver has been implemented on the Zedboard using a high-speed 16-bit ADC with 250 Msps.

- Minimizing the Number of Apertures in Multileaf Collimator Sequencing with Field Splitting (2015)
- In this paper we consider the problem of decomposing a given integer matrix A into a positive integer linear combination of consecutive-ones matrices with a bound on the number of columns per matrix. This problem is of relevance in the realization stage of intensity modulated radiation therapy (IMRT) using linear accelerators and multileaf collimators with limited width. Constrained and unconstrained versions of the problem with the objectives of minimizing beam-on time and decomposition cardinality are considered. We introduce a new approach which can be used to find the minimum beam-on time for both constrained and unconstrained versions of the problem. The decomposition cardinality problem is shown to be NP-hard and an approach is proposed to solve the lexicographic decomposition problem of minimizing the decomposition cardinality subject to optimal beam-on time.

- CRATER: Case-based Reasoning Framework for Engineering an Adaptation Engine in Self-Adaptive Software Systems (2015)
- Self-adaptation allows software systems to autonomously adjust their behavior at run-time by handling operating states that violate the requirements of the managed system. This requires an adaptation engine that receives adaptation requests while monitoring the managed system and responds with an automated and appropriate adaptation. During the last decade, several engineering methods have been introduced to enable self-adaptation in software systems. However, these methods fail to address (1) the run-time uncertainty that hinders the adaptation process and (2) the performance impact resulting from the complexity and large size of the adaptation space. This paper presents CRATER, a framework that builds an external adaptation engine for self-adaptive software systems. The adaptation engine, which is built on case-based reasoning, handles both challenges together. The paper includes an experiment illustrating the benefits of the framework; the experimental results show the potential of CRATER for handling run-time uncertainty and demonstrate that adaptation remembrance enhances performance for large adaptation spaces.
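The retrieval step at the heart of a case-based reasoning adaptation engine can be sketched as weighted nearest-neighbour lookup over stored cases. The attribute names and weights below are hypothetical, chosen only to illustrate the mechanism; the actual CRATER case representation is not shown in the abstract:

```python
def retrieve_case(case_base, query, weights):
    """Retrieve the stored case closest to the current monitored state.

    Each case maps a monitored state (a dict of numeric attributes) to
    an adaptation response; retrieval is weighted nearest neighbour,
    the core step of a case-based reasoning adaptation engine.
    """
    def distance(state):
        return sum(weights[k] * abs(state[k] - query[k]) for k in weights)
    return min(case_base, key=lambda case: distance(case["state"]))
```

A query state close to a previously seen overload situation then retrieves the adaptation that worked before, which is exactly the "remembrance" effect the abstract refers to.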

- A well-balanced solver for the Saint Venant Equations with variable cross-section (2014)
- In this paper we construct a numerical solver for the Saint Venant equations. Special attention is given to the balancing of the source terms, including the bottom slope and variable cross-sectional profiles. To this end, a special discretization of the pressure law is used in order to transfer analytical properties to the numerical method. Based on this approximation, a well-balanced solver is developed, ensuring the C-property and depth positivity. The performance of the method is studied in several test cases focusing on the accurate capturing of steady states.
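For reference, the classical fixed-width form of the Saint Venant (shallow water) equations with bottom topography \(b(x)\) reads (the paper's variable cross-section form generalizes this):

```latex
\partial_t h + \partial_x (hu) = 0, \qquad
\partial_t (hu) + \partial_x \Bigl( hu^2 + \tfrac{g h^2}{2} \Bigr) = -g\,h\,\partial_x b .
```

The lake-at-rest steady state is \(u = 0\), \(h + b = \text{const}\); the C-property mentioned in the abstract means the discrete scheme reproduces this state exactly, i.e. the numerical flux gradient cancels the discretized source term.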

- AXI4-Stream Upsizing/Downsizing Data Width Converters for Hardware-In-the-Loop Simulations (2013)
- Hardware prototyping is an essential part of the hardware design flow. It usually relies on system-level design and hardware-in-the-loop simulations in order to develop, test, and evaluate intellectual property cores. One common task in this process consists of interfacing cores with different port specifications; data width conversion is used to overcome this issue. This work presents two open source hardware cores compliant with the AXI4-Stream bus protocol, one performing upsizing and the other downsizing data width conversion.
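The datapath of an upsizing converter can be modeled in a few lines: collect a fixed number of narrow stream beats and pack them into one wide beat. The little-endian packing below (earlier beats in the least significant bits) is a common convention, assumed here for illustration; actual cores may order bytes differently and must also handle `TLAST`/`TKEEP` sideband signals, which this sketch omits:

```python
def axi_upsize(beats, ratio, width):
    """Pack `ratio` narrow stream beats into one wide beat each.

    Models the datapath of an AXI4-Stream upsizing data width
    converter: earlier beats land in the least significant bits of
    the wide word (little-endian packing assumed).
    """
    assert len(beats) % ratio == 0, "partial final beat not modeled"
    out = []
    for i in range(0, len(beats), ratio):
        word = 0
        for j in range(ratio):
            word |= beats[i + j] << (j * width)
        out.append(word)
    return out
```

Downsizing is the inverse operation: slice each wide word back into `ratio` narrow beats, emitted least significant first.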

- Homogeneous Penalizers and Constraints in Convex Image Restoration (2012)
- Recently convex optimization models were successfully applied for solving various problems in image analysis and restoration. In this paper, we are interested in relations between convex constrained optimization problems of the form \({\rm argmin} \{ \Phi(x)\) subject to \(\Psi(x) \le \tau \}\) and their penalized counterparts \({\rm argmin} \{\Phi(x) + \lambda \Psi(x)\}\). We recall general results on the topic with the help of an epigraphical projection. Then we deal with the special setting \(\Psi := \| L \cdot\|\) with \(L \in \mathbb{R}^{m,n}\) and \(\Phi := \varphi(H \cdot)\), where \(H \in \mathbb{R}^{n,n}\) and \(\varphi: \mathbb R^n \rightarrow \mathbb{R} \cup \{+\infty\} \) meet certain requirements which are often fulfilled in image processing models. In this case we prove, by incorporating the dual problems, that there exists a bijective function such that the solutions of the constrained problem coincide with those of the penalized problem if and only if \(\tau\) and \(\lambda\) are in the graph of this function. We illustrate the relation between \(\tau\) and \(\lambda\) for various problems arising in image processing. In particular, we point out the relation to the Pareto frontier for joint sparsity problems. We demonstrate the performance of the constrained model in restoration tasks of images corrupted by Poisson noise with the \(I\)-divergence as data fitting term \(\varphi\) and in inpainting models with the constrained nuclear norm. Such models can be useful if we have a priori knowledge on the image rather than on the noise level.
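The correspondence between \(\tau\) and \(\lambda\) is easy to see in the simplest instance of this setting, \(\Phi(x) = \frac{1}{2}\|x - f\|^2\) and \(\Psi(x) = \|x\|_1\) (i.e. \(H = L = I\)): the penalized problem is solved componentwise by soft-thresholding, and the constrained problem with \(\tau = \|x_\lambda\|_1\) has the same solution. A small numerical sketch (not the paper's general result, just this special case):

```python
def soft_threshold(f, lam):
    """Componentwise solution of argmin_x 0.5*||x - f||^2 + lam*||x||_1."""
    return [max(abs(v) - lam, 0.0) * (1.0 if v > 0 else -1.0) for v in f]
```

For `f = [3.0, -1.5, 0.5]` and `lam = 1.0` the penalized solution is `[2.0, -0.5, 0.0]`, so the matching constraint level is `tau = 2.5`: solving the constrained problem with that `tau` recovers the same point, illustrating the \(\tau\)-\(\lambda\) graph from the abstract.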

- 100% Green Computing At The Wrong Location? (2012)
- Modern society relies on convenience services and mobile communication. Cloud computing is the current trend for making data and applications available at any time on every device. Data centers concentrate computation and storage at central locations, while claiming to be green due to their optimized maintenance and increased energy efficiency. The key enabler for this evolution is the microelectronics industry. The trend towards power-efficient mobile devices has forced this industry to change its design dogma to "keep data locally and reduce data communication whenever possible". Therefore we ask: is cloud computing repeating the aberrations of its enabling industry?

- The Generalized Assignment Problem with Minimum Quantities (2012)
- We consider a variant of the generalized assignment problem (GAP) where the amount of space used in each bin is restricted to be either zero (if the bin is not opened) or above a given lower bound (a minimum quantity). We provide several complexity results for different versions of the problem and give polynomial time exact algorithms and approximation algorithms for restricted cases. For the most general version of the problem, we show that it does not admit a polynomial time approximation algorithm (unless P=NP), even for the case of a single bin. This motivates the study of dual approximation algorithms that compute solutions violating the bin capacities and minimum quantities by a constant factor. When the number of bins is fixed and the minimum quantity of each bin is at least a factor \(\delta>1\) larger than the largest size of an item in the bin, we show how to obtain a polynomial time dual approximation algorithm that computes a solution violating the minimum quantities and bin capacities by at most a factor \(1-\frac{1}{\delta}\) and \(1+\frac{1}{\delta}\), respectively, and whose profit is at least as large as the profit of the best solution that satisfies the minimum quantities and bin capacities strictly. In particular, for \(\delta=2\), we obtain a polynomial time (1,2)-approximation algorithm.
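The feasibility structure of the problem (every opened bin must carry a load between its minimum quantity and its capacity) can be made concrete with an exhaustive solver for tiny instances. This is only a brute-force illustration of the problem definition, not one of the paper's algorithms:

```python
from itertools import product

def gap_min_quantities(profit, size, capacity, min_qty):
    """Exhaustive solver for a tiny GAP instance with minimum quantities.

    profit[i][b], size[i][b]: profit/size of item i when placed in bin b.
    Every opened bin's load must lie in [min_qty[b], capacity[b]];
    items may stay unassigned.  Returns the best total profit.
    """
    n, m = len(profit), len(capacity)
    best = 0
    # choice[i] in range(m) assigns item i to that bin; m means "unassigned".
    for choice in product(range(m + 1), repeat=n):
        load = [0] * m
        value = 0
        for i, b in enumerate(choice):
            if b < m:
                load[b] += size[i][b]
                value += profit[i][b]
        if all(l == 0 or min_qty[b] <= l <= capacity[b]
               for b, l in enumerate(load)):
            best = max(best, value)
    return best
```

With a single bin of capacity 10 and minimum quantity 5, the highest-profit items may be useless on their own: an item of size 3 cannot open the bin by itself, so the optimum combines items until the minimum quantity is reached.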

- Complexity and Approximability of the Maximum Flow Problem with Minimum Quantities (2012)
- We consider the maximum flow problem with minimum quantities (MFPMQ), which is a variant of the maximum flow problem where the flow on each arc in the network is restricted to be either zero or above a given lower bound (a minimum quantity), which may depend on the arc. This problem has recently been shown to be weakly NP-complete even on series-parallel graphs. In this paper, we provide further complexity and approximability results for MFPMQ and several special cases. We first show that it is strongly NP-hard to approximate MFPMQ on general graphs (and even bipartite graphs) within any positive factor. On series-parallel graphs, however, we present a pseudo-polynomial time dynamic programming algorithm for the problem. We then study the case that the minimum quantity is the same for each arc in the network and show that, under this restriction, the problem is still weakly NP-complete on general graphs, but can be solved in strongly polynomial time on series-parallel graphs. On general graphs, we present a \((2 - 1/\lambda) \)-approximation algorithm for this case, where \(\lambda\) denotes the common minimum quantity of all arcs.

- A Scalable Component-Based Architecture for Online Services of Library Catalogs (2004)
- In recent years, more and more publications and material for studying and teaching, e.g. for Web-based teaching (WBT), appear online, and digital libraries are built to manage such publications and online materials. The most important concerns therefore relate to durable, sustained storage and to the management of content together with its metadata, which exists in heterogeneous styles and formats. In this paper, we present specific techniques and their use to support metadata-based catalog services. Such semistructured metadata (represented as XML fragments), belonging to online learning resources, needs efficient XML-based query support, scalable result set processing, and comprehensive facilities for personalization purposes. We discuss the associated problems, subsequently derive the concepts of a suitable architecture, and finally outline the realization by means of our prototype system, which is based on the J2EE component model.
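The kind of XML-based query over semistructured metadata that such a catalog service must support can be sketched with the Python standard library; the metadata fragment and element names below are hypothetical, and `ElementTree` merely stands in for the architecture's XML query layer:

```python
import xml.etree.ElementTree as ET

# A hypothetical metadata fragment describing two online learning resources.
catalog = ET.fromstring("""
<catalog>
  <resource id="r1">
    <title>Relational Databases</title>
    <format>WBT</format>
  </resource>
  <resource id="r2">
    <title>XML Query Processing</title>
    <format>PDF</format>
  </resource>
</catalog>
""")

# XPath-style query: titles of all Web-based-teaching resources.
wbt_titles = [r.findtext("title")
              for r in catalog.findall("resource")
              if r.findtext("format") == "WBT"]
```

A real catalog service would additionally page through large result sets and apply per-user personalization filters on top of such queries.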