When designing autonomous mobile robotic systems, there is usually a trade-off between three opposing goals: safety, low cost, and performance.
Advancing one of these goals typically comes at the expense of one or even both of the others.
If, for example, the performance of a mobile robot is increased by using higher vehicle speeds, the safety of the system usually decreases, because, under otherwise identical circumstances, faster robots are more dangerous robots.
This decrease in safety can be mitigated by installing better sensors on the robot, which ensure the safety of the system even at high speeds.
However, this solution comes with an increase in system cost.
In parallel to mobile robotics, a growing number of ambient and aware technology installations can be found in today's environments, whether in private homes, offices, or factories.
Among these technologies are sensors suitable for assessing the state of an environment.
For example, motion detectors that are used to automate lighting can also be used to detect the presence of people.
This work constitutes a meeting point between the two fields of robotics and aware environment research.
It shows how data from aware environments can be used to approach the above-mentioned goal of establishing safe, performant, and additionally low-cost robotic systems.
Sensor data from aware technology, which is often unreliable due to its low-cost nature, is fed to probabilistic methods for estimating the environment's state.
Together with models, these methods cope with the uncertainty and unreliability of the sensor data gathered from an aware environment.
The estimated state includes the positions of people in the environment and serves as input to the local and global path planners of a mobile robot. This enables safe, cost-efficient, and performant mobile robot navigation, both during local obstacle avoidance and on a global scale when planning paths between different locations.
The probabilistic algorithms enable graceful degradation of the whole system.
Even if, in the extreme case, all aware technology fails, the robots continue to operate, sacrificing performance while maintaining safety.
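The probabilistic treatment of unreliable aware-environment sensors can be illustrated with a minimal recursive Bayesian update of the belief that a person is present. This is only a sketch, not the implementation used in this work; the detection and false-alarm rates are hypothetical stand-in values.

```python
def bayes_update(prior, reading, p_detect=0.7, p_false_alarm=0.1):
    """One recursive Bayesian update of P(person present) from a
    binary motion-detector reading.

    p_detect:      P(sensor fires | person present)  -- hypothetical
    p_false_alarm: P(sensor fires | no person)       -- hypothetical
    """
    if reading:
        lik_present, lik_absent = p_detect, p_false_alarm
    else:
        lik_present, lik_absent = 1 - p_detect, 1 - p_false_alarm
    unnorm = lik_present * prior
    evidence = unnorm + lik_absent * (1 - prior)
    return unnorm / evidence

belief = 0.5  # uninformative prior
for reading in [True, True, False, True]:  # noisy detector outputs
    belief = bayes_update(belief, reading)
print(round(belief, 3))  # belief stays high despite one missed detection
```

Even a single contradictory reading only dampens the belief rather than resetting it, which is the behavior that allows graceful degradation when individual sensors misfire.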
All methods presented in this work have been validated both in simulation experiments and in experiments with real hardware.
Synapses play a central role in information propagation in the nervous system. A better understanding of synaptic structures and processes is vital for advancing research on nervous system diseases. This work is part of an interdisciplinary project that aims at the quantitative examination of components of the neuromuscular junction, a synaptic connection between a neuron and a muscle cell.
The research project is based on image stacks picturing neuromuscular junctions captured by modern electron microscopes, which permit the rapid acquisition of huge amounts of image data at a high level of detail. The large amount and sheer size of such microscopic data, however, make a direct visual examination infeasible.
This thesis presents novel problem-oriented interactive visualization techniques that support the segmentation and examination of neuromuscular junctions.
First, I introduce a structured data model for segmented surfaces of neuromuscular junctions to enable the computational analysis of their properties. However, surface segmentation of neuromuscular junctions is a very challenging task due to the extremely intricate character of the objects of interest. Hence, such problematic segmentations are often performed manually by non-experts and thus require further inspection.
With NeuroMap, I develop a novel framework to support proofreading and correction of three-dimensional surface segmentations. To provide a clear overview and to ease navigation within the data, I propose the surface map, an abstracted two-dimensional representation using key features of the surface as landmarks. These visualizations are augmented with information about automated segmentation error estimates. The framework provides intuitive and interactive data correction mechanisms, which in turn permit the expeditious creation of high-quality segmentations.
While analyzing such segmented synapse data, the formulation of specific research questions is often impossible due to missing insight into the data. I address this problem by designing a generic parameter space for segmented structures from biological image data. Furthermore, I introduce a graphical interface to aid its exploration, combining parameter selection and data representation.
This Ph.D. project, conceived as landscape research practice, focuses on the less widely studied aspects of the urban agriculture landscape and its application in recreation and leisure as well as landscape beautification. I investigate edible landscape planning and design, its criteria, possibilities, and traditional roots for the particular situation of Iranian cities and landscapes. The primary objective is to prepare a conceptual and practical framework that allows Iranian professionals to integrate food landscaping into new greenery and open-space developments. A further intended contribution is to explore how traditional utilitarian gardening can be synthesized with contemporary pioneering viewpoints on agricultural landscapes.
Finished tasks and list of achieved results:
• Recognition of the software and hardware principles of designing agricultural landscapes based on Persian gardens
• The multidimensional identity of the agricultural landscape in Persian gardens
• Principles of architectural integration and the characteristics of the integrative landscape in Persian gardens
• Distinctive characteristics of the agricultural landscape in the Persian garden
• Introduction of Persian and historical gardens as the starting point for reintroducing agricultural phenomena into Iranian cities and landscapes
• Assessment of the structure of Persian gardens based on new achievements and criteria for designing urban agriculture
• Investigation of the role of Persian gardens in envisioning urban agriculture in Iranian cities' landscape.
Reading as a cultural skill is acquired over a long period of training. This thesis supports the idea that reading is based on specific strategies that result from the modification and coordination of earlier developed object recognition strategies. The reading-specific processing strategies are considered to be more analytic compared to object recognition strategies, which are described as holistic. To enable proper reading skills, these strategies have to become automatized. Study 1 (Chapter 4) examined the temporal and visual constraints of letter recognition strategies. In the first experiment, two successively presented stimuli (letters or non-letters) had to be classified as same or different. The second stimulus could either be presented in isolation or surrounded by a shape, which was either similar (congruent) or different (incongruent) in its geometrical properties to the stimulus itself. The non-letter pairs were presented twice as often as the letter pairs. The results demonstrated a preference for the holistic strategy also for letters, even though the non-letter set was presented twice as often as the letter set, showing that the analytic strategy does not replace the holistic one completely, but that the usage of both strategies is task-sensitive. In Experiment 2, we compared the Global Precedence Effect (GPE) for letters and non-letters in central viewing, with the global stimulus size close to the functional visual field in whole word reading (6.5° of visual angle) and local stimuli close to the critical size for fluent reading of individual letters (0.5° of visual angle). Under these conditions, the GPE remained robust for non-letters. For letters, however, it disappeared: letters showed no overall response time advantage for the global level and symmetric congruence effects (local-to-global as well as global-to-local interference). These results indicate that reading is based on resident analytic visual processing strategies for letters.
In Study 2 (Chapter 5), we replicated the latter result with a large group of participants as part of a study in which pairwise associations of non-letters and phonological or non-phonological sounds were systematically trained. We investigated whether training would eliminate the GPE also for non-letters. We observed, however, that the differentiation between letters and non-letter shapes persists after training. This result implies that pairwise association learning is not sufficient to overrule the process differentiation in adults. In addition, subtle effects arising in the letter condition (due to enhanced statistical power) enabled us to further specify the differentiation in processing between letters and non-letter shapes. The influence of reading ability on the GPE was examined in Study 3 (Chapter 6). Children with normal reading skills and children with poor reading skills were instructed to detect a target in Latin or Hebrew Navon letters. Children with normal reading skills showed a GPE for Latin letters, but not for Hebrew letters. In contrast, the dyslexia group did not show a GPE for either kind of stimulus. These results suggest that dyslexic children are not able to apply the same automatized letter processing strategy as children with normal reading skills do. The difference between the analytic letter processing and the holistic non-letter processing was transferred to the context of whole word reading in Study 4 (Chapter 7). When participants were instructed to detect either a letter or a non-letter in a mixed character string, for letters the reaction times and error rates increased linearly from the left to the right terminal position in the string, whereas for non-letters a symmetrical U-shaped function was observed. These results suggest that the letter-specific processing strategies are triggered automatically also for more word-like material.
Thus, this thesis supports and extends prior results on letter-specific processing and provides new evidence for letter-specific processing strategies.
The aim of this work was to gain further insight into the regulation of the Na+/H+ antiporter AtSOS1. The analysis of mutants overexpressing the cytosolic AtSOS1 C-terminus confirmed an increased salt tolerance compared to the wild type. This finding is supported by several observations: under salt stress conditions, the overexpression mutants (i) accumulate considerably less sodium in the shoot, (ii) flower earlier, (iii) show lower expression of the salt-induced gene wrky25, (iv) accumulate smaller amounts of "compatible solutes", and (v) store less starch compared to the wild type.
In summary, overexpression of the C-terminal domain of SOS1 leads to an increased salt tolerance of the corresponding mutants through enhanced activation of the endogenous SOS1 transporter. It can be speculated that negative regulators of the SOS signaling pathway are intercepted by the soluble C-terminus, whereby their inhibitory effect on the endogenous SOS network is lost.
In contrast, the loss of the SOS1 transporter in the sos1 knockout plants leads to an increased salt sensitivity. This finding is again supported by several observations: under salt stress conditions, the knockout mutants (i) accumulate considerably more sodium in the shoot and, above all, in the root, (ii) flower late or not at all, (iii) show higher expression of the salt-stress indicator gene wrky25, (iv) accumulate large amounts of compatible solutes in the form of soluble sugars, and (v) store more starch compared to the wild type.
In the present work, the interactions between the SOS1 C-terminus and the regulatory At14-3-3 proteins υ, ω, κ, and λ, as well as between AtTST1/AtVIK1 and 14-3-3 κ and λ, were verified by means of bimolecular fluorescence complementation. The 14-3-3 proteins bind the SOS1 C-terminus at the site 1112TRQNTMVESSDEEDEDEG1129 and AtTST1 at the site 361DDGAGDDDDSDNDLR375. Both binding motifs exhibit a high proportion of negatively charged aspartate and glutamate residues. Through the analysis of At14-3-3 λκ knockout plants, these proteins were identified as signaling components in the sugar metabolism of A. thaliana. Their absence leads to changes in sugar sensing and sugar signaling. This claim is supported by several observations: under high-sugar conditions, the knockout mutants (i) accumulate more biomass, (ii) accumulate less sugar, and (iii) show increased expression of the glucose-repressed genes cab1 and suc2.
This work introduces a promising concept for the preparation of new nano-sized receptors. Mixed-monolayer-protected gold nanoparticles (AuNPs) featuring functional groups on their surfaces were prepared as receptors for low-molecular-weight compounds. It has been shown that these AuNPs can engage in interactions with peptides in aqueous media. Quantitative binding information was obtained from DOSY-NMR titrations, indicating that nanoparticles containing a combination of three orthogonal functional groups are more efficient in binding dipeptides than mono- or difunctionalised analogues. The strategy is highly modular and easily allows adapting the receptor selectivity to a given substrate by varying the type, number, and ratio of binding sites on the nanoparticle.
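Quantitative binding information of this kind is commonly analyzed with a 1:1 host-guest model, in which the complex concentration follows from a quadratic mass balance. The sketch below illustrates that model only; the concentrations and association constant are hypothetical values, not results from this work.

```python
import math

def complex_conc(h_tot, g_tot, k_a):
    """Equilibrium concentration of a 1:1 host-guest complex HG.

    h_tot, g_tot: total host/guest concentrations (M)
    k_a:          association constant (M^-1)

    Solves [HG]^2 - ([H]0 + [G]0 + 1/Ka)[HG] + [H]0[G]0 = 0,
    taking the physically meaningful (smaller) root.
    """
    s = h_tot + g_tot + 1.0 / k_a
    return (s - math.sqrt(s * s - 4.0 * h_tot * g_tot)) / 2.0

# Hypothetical titration point: 1 mM receptor, 2 mM dipeptide, Ka = 5000 M^-1
hg = complex_conc(1e-3, 2e-3, 5e3)
fraction_bound = hg / 1e-3  # fraction of receptor occupied (~0.85 here)
```

In a DOSY titration, the observed diffusion coefficient is a population-weighted average of free and bound species, so fitting such fractions across titration points yields the binding constant.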
This thesis presents a novel, generic framework for information segmentation in document images.
A document image contains different types of information, for instance, text (machine printed/handwritten), graphics, signatures, and stamps.
It is necessary to segment the information in documents so that, in automatic document processing workflows, each type of information is processed only when required.
The main contribution of this thesis is the conceptualization and implementation of an information segmentation framework that is based on part-based features.
The generic nature of the presented framework makes it applicable to a variety of documents (technical drawings, magazines, administrative, scientific, and academic documents) digitized using different methods (scanners, RGB cameras, and hyper-spectral imaging (HSI) devices).
A highlight of the presented framework is that it does not require large training sets; instead, a few training samples (for instance, four pages) suffice for high performance, i.e., better than previously existing methods.
In addition, the presented framework is simple and can be adapted quickly to new problem domains.
This thesis is divided into three major parts on the basis of the document digitization method used: scanning, hyper-spectral imaging, and camera capture.
In the area of scanned document images, three specific contributions have been realized.
The first of them is in the domain of signature segmentation in administrative documents.
In some workflows, it is very important to check the document authenticity before processing the actual content.
This can be done based on the available seal of authenticity, e.g., signatures.
However, signature verification systems expect a pre-segmented signature image, while signatures are usually part of a document.
To use signature verification systems on document images, it is necessary to first segment signatures in documents.
This thesis shows that the presented framework can be used to segment signatures in administrative documents.
The system based on the presented framework is tested on a publicly available dataset, where it outperforms the state-of-the-art methods and successfully segments all signatures, while less than half of the detected signatures are false positives.
This shows that it is suitable for practical use.
The second contribution in the area of scanned document images is segmentation of stamps in administrative documents.
A stamp also serves as a seal of document authenticity.
However, the location of a stamp on a document can be more arbitrary than that of a signature, depending on the person sealing the document.
This thesis shows that a system based on our generic framework is able to extract stamps of any arbitrary shape and color.
The evaluation of the presented system on a publicly available dataset shows that it is also able to segment black stamps (that were not addressed in the past) with a recall and precision of 83% and 73%, respectively.
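For reference, the reported figures follow the standard precision/recall definitions. In this sketch, the counts are hypothetical, chosen only to reproduce percentages of the reported magnitude:

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Precision: fraction of detections that are correct.
    Recall:    fraction of ground-truth objects that were found."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical counts for a stamp-segmentation run:
# 100 ground-truth stamps, 83 found correctly, 31 spurious detections
p, r = precision_recall(true_positives=83, false_positives=31, false_negatives=17)
# p ≈ 0.73, r = 0.83 — matching the order of the reported 73% / 83%
```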
Furthermore, to segment colored stamps, this thesis presents a novel feature set based on the intensity gradient, which is able to extract unseen, colored, arbitrarily shaped, textual as well as graphical stamps and outperforms the state-of-the-art methods.
The third contribution in the area of scanned document images is in the domain of information segmentation in technical drawings (architectural floorplans, maps, circuit diagrams, etc.), which usually contain a large amount of graphics and comparatively few textual components. Furthermore, in technical drawings, text frequently overlaps with graphics.
Thus, automatic analysis of technical drawings uses text/graphics segmentation as a pre-processing step.
This thesis presents a method based on our generic information segmentation framework that is able to detect the text, which is touching graphical components in architectural floorplans and maps.
Evaluation of the method on a publicly available dataset of architectural floorplans shows that it is able to extract almost all touching text components with precision and recall of 71% and 95%, respectively.
This means that almost all of the touching text components are successfully extracted.
In the area of hyper-spectral document images, two contributions have been realized.
Unlike normal three-channel RGB images, hyper-spectral images usually have many channels, ranging from the ultraviolet to the infrared region and including the visible region.
First, this thesis presents a novel automatic method for signature segmentation from hyper-spectral document images (240 spectral bands between 400 and 900 nm).
The presented method is based on a part-based key point detection technique, which does not use any structural information, but relies only on the spectral response of the document regardless of ink color and intensity.
The presented method is capable of segmenting (overlapping and non-overlapping) signatures from varying backgrounds, such as printed text, tables, stamps, and logos.
Importantly, the presented method can extract signature pixels and not just the bounding boxes.
This is substantial when signatures overlap with text and/or other objects in the image. Second, this thesis presents a new dataset comprising 300 documents scanned using a high-resolution hyper-spectral scanner. Evaluation of the presented signature segmentation method on this hyper-spectral dataset shows that it is able to extract signature pixels with a precision and recall of 100% and 79%, respectively.
Further contributions have been made in the area of camera-captured document images. A major problem in the development of Optical Character Recognition (OCR) systems for camera-captured document images is the lack of labeled datasets of such images. First, this thesis presents a novel, generic method for automatic ground truth generation/labeling of document images. The presented method builds large-scale (i.e., millions of images) datasets of labeled camera-captured or scanned documents without any human intervention. The method is generic and can be used for automatic ground truth generation of (scanned and/or camera-captured) documents in any language, e.g., English, Russian, Arabic, or Urdu. The evaluation of the presented method on two different datasets in English and Russian shows that 99.98% of the images are correctly labeled in every case.
Another important contribution in the area of camera captured document images is the compilation of a large dataset comprising 1 million word images (10 million character images), captured in a real camera-based acquisition environment, along with the word and character level ground truth. The dataset can be used for training as well as testing of character recognition systems for camera-captured documents. Various benchmark tests are performed to analyze the behavior of different open source OCR systems on camera captured document images. Evaluation results show that the existing OCRs, which already get very high accuracies on scanned documents, fail on camera captured document images.
Using the presented camera-captured dataset, a novel character recognition system is developed which is based on a variant of recurrent neural networks, i.e., Long Short-Term Memory (LSTM), and which outperforms all existing OCR engines on camera-captured document images with an accuracy of more than 95%.
Finally, this thesis provides details on various tasks that have been performed in the area closely related to information segmentation. This includes automatic analysis and sketch based retrieval of architectural floor plan images, a novel scheme for online signature verification, and a part-based approach for signature verification. With these contributions, it has been shown that part-based methods can be successfully applied to document image analysis.
Developing revitalization concepts for residential properties is a complex and time-consuming process that requires comprehensive expertise and extensive experience. Heterogeneous building types with different characteristics and needs for action make the concept development process even more complicated. This work provides a catalog of prioritized recommendations for action for developing revitalization variants for multi-family houses from the 1970s in the old (West German) federal states. With roughly 2.4 million dwellings, these properties contribute substantially to the housing supply in the old federal states and have so far been insufficiently researched. Moreover, due to their age, their frequently poor state of modernization, and their existing potential, fundamental revitalizations of these multi-family houses are usually due in the short to medium term.
The recommendations for action are based on evaluations of data from professional commercial housing providers, more than 13,700 energy consumption certificates, tenant surveys, and the IWU building-stock database. In addition, the recommendations draw on a secondary analysis of a representative housing-demand survey for Germany as well as on twenty expert interviews and extensive literature analyses. A property analysis yields generalizable statements about the structural and technical characteristics of the multi-family houses and the corresponding needs for action. In addition, a demand analysis determines potential demand groups and their housing requirements, from which demand-side needs for action for the multi-family houses are derived. For the identified structural, technical, and demand-side needs for action, suitable revitalization measures are found through a measure analysis. In the catalog of recommendations, these measures are prioritized according to technical criteria and demand aspects, following a customer-requirements model. Users of the catalog can contribute their individual commercial perspective in order to develop holistic revitalization concepts. A calculation tool developed in this work allows the costs and economic viability of the concepts to be assessed.
The recommendations aim at technical, functional, energetic, economic, social, and architectural improvements to the multi-family houses. In two case studies, the catalog of recommendations and the calculation tool are applied. The case studies indicate that, with the help of the catalog and the calculation tool, revitalization variants for multi-family houses from the 1970s can be developed efficiently and their costs and economic viability can be estimated efficiently. The research results are particularly useful for residential-property owners, project developers, engineers, consultants, and investors.
C-H activations (C-H bond weakening effects) under the influence of transition metal atoms are investigated theoretically. Two model systems are used: CH3MX and n-ButMX (X = F, Cl, Br, I, H, CN; M includes all transition metal atoms from group 4 to group 10).