Besides their benefits, development cooperation (EZ) projects may also produce negative and merely temporary effects (research gap). This master's thesis therefore examines the hypothesis that promoting forest communities as an instrument of development cooperation is suitable for achieving lasting forest management and sustainable development. Strengths and weaknesses are analyzed in a practical case from Cambodia against the Brundtland definition of sustainability (systematization). The thesis addresses the following research questions:
1. What significance do forest communities in Cambodia have for climate protection?
2. Which factors account for positive development dynamics in forest communities, and what constitutes obstacles?
3. What options does development cooperation have to foster positive development dynamics in forest communities, with particular regard to climate protection?
Events such as the nuclear disaster in Fukushima or, more recently, the coronavirus pandemic have shown that global supply chains are becoming increasingly vulnerable to risks and uncertainties of various kinds. Supply chain resilience is therefore becoming ever more important for researchers and managers alike.
Since existing research is often dominated by theoretical-conceptual approaches, this thesis adopts a structuration-theoretical perspective and asks what a practice-oriented approach to increasing the resilience of global supply chains might look like.
To this end, the thesis employs a comparative qualitative case study in which several companies in the German toy industry are interviewed and a large company in the electrical industry is analyzed. Several qualitative methods are applied; the data are triangulated and then processed and interpreted by means of qualitative content analysis.
The result is a collection of 29 practices that can be arranged along the three resilience phases of readiness, response, and recovery. The identified practices can furthermore be categorized by their implementation status.
From this insight emerges a matrix in which resilience practices can be plotted along both category systems, providing an overview of the resilience status of a global supply chain. This matrix forms the basis of a supply chain resilience management approach that is developed and explained in this thesis. It offers managers a practical guide and thus supports the pursuit of greater resilience along the supply chain.
This thesis thereby not only extends the existing literature on supply chain resilience by a structuration-theoretical approach but also makes a significant contribution to the management of global supply chains.
When three or more legally independent organizations cooperate to pursue not only their own goals but also shared goals at the network level, this is referred to as an inter-organizational network. Such networks have recently gained enormously in prevalence and importance across a wide range of fields.
In practice, however, it becomes apparent that effective collaboration among the individual network members often faces considerable hurdles, since inter-organizational networks are highly complex structures. How network governance is built up therefore has a significant influence on a network's success. Yet the literature so far contains no empirical studies on the formation of network governance and the favorable conditions that could be derived from it. This thesis addresses this gap and examines the emergence and further development of network governance mechanisms in different types of inter-organizational networks. Data were collected in eight case studies and analyzed using a multilevel approach and a structuration-theoretical perspective.
The results show that governance mechanisms in inter-organizational networks are built up through the recurring application of three practices: "needs analysis", "exploring synergies", and "formalization". These give rise to structures that in turn affect the practices and the construction of further structures. The structures enable the transition into three distinct stages of development. This build-up is conceptualized in this thesis as an interlocking metamorphosis that is never complete. Differences in how governance mechanisms are built up can be observed in particular between networks that were initiated by the state and those that emerged from an identified need for networking in a specific societal domain. These initial conditions determine which forms of governance are adopted over time.
Regarding favorable conditions, the thesis shows that the needs analysis is especially important at the outset. When networks are built up, the focus often lies solely on the need for networking, without assessing what needs the network itself will have. Beyond that, the network mobilizer plays a major role: the network can only be built up as intended if the network mobilizer remains in place as the leading figure until the third stage of development is reached.
In distributed networked control systems, many issues affect the performance and functionality of the connected subsystems, arising mainly from the communication medium imposed on the system structure. The communication functionality must generally cope with the data-exchange requirements between system entities. Because communication resources are limited, especially in wireless networks, an optimal algorithm for assigning them and a proper choice of Medium Access Control (MAC) protocol are essential.
In this dissertation, we studied several problems raised by communication networks in wireless networked control systems, with a particular focus on the effect of standard MAC protocols on overall control-system performance. We examined the effect of both the Time Division Multiple Access (TDMA) and the Orthogonal Frequency Division Multiple Access (OFDMA) protocols and developed a set of distributed algorithms that suit their specification requirements.
As a benchmark, we used a vehicle-dynamics optimal control problem whose objective is to penalize the maximal utilization of the tires' adhesion forces for a given driving maneuver. The problem was decomposed into a distributed form using primal and dual decomposition techniques, and solving algorithms were derived using both primal and dual subgradient methods. The solver was tested in a wireless networked system structure and evaluated for different communication topologies, such as unidirectional, bidirectional, and broadcast topologies.
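The decomposition step can be illustrated with a minimal sketch of the dual subgradient method on a toy problem: two quadratic local objectives coupled by a single equality constraint. The objectives and step size below are hypothetical stand-ins; the vehicle-dynamics problem in the thesis is far richer.

```python
# Dual decomposition with a dual subgradient update -- illustrative toy
# problem, not the thesis' vehicle-dynamics benchmark.
# Minimize (x1 - c1)^2 + (x2 - c2)^2  subject to  x1 + x2 = b.

def local_step(c, lam):
    # Each subsystem minimizes (x - c)^2 + lam * x in closed form.
    return c - lam / 2.0

def dual_subgradient(c_list, b, alpha=0.1, iters=500):
    lam = 0.0
    for _ in range(iters):
        x = [local_step(c, lam) for c in c_list]  # parallel local solves
        residual = sum(x) - b                     # coupling-constraint violation
        lam += alpha * residual                   # price (subgradient) update
    return x, lam

x, lam = dual_subgradient([1.0, 3.0], b=2.0)
# Analytic optimum: x = (0, 2), lam = 2.
```

Each subsystem only needs the current price `lam`, which is exactly the kind of quantity that must traverse the communication network in a distributed implementation.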
Later, the solution algorithms were extended to account for the specifications of the TDMA and OFDMA protocols, and an event-triggered scheme was introduced into the solver. The proposed event-triggered scheme reduces communication between concurrently computing subsystems, primarily to improve real-time efficiency.
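The core idea of event triggering can be sketched generically: a subsystem transmits its value only when it has drifted from the last transmitted value by more than a threshold. The trace and threshold below are illustrative, not taken from the dissertation.

```python
# Generic event-triggered transmission rule -- illustrative sketch.
def event_triggered_trace(values, delta):
    sent = []    # (step, value) pairs actually transmitted
    last = None  # last value that was put on the channel
    for k, v in enumerate(values):
        if last is None or abs(v - last) > delta:
            sent.append((k, v))
            last = v
    return sent

# Hypothetical local signal over 8 steps; only 4 transmissions are triggered.
trace = [0.0, 0.05, 0.12, 0.13, 0.30, 0.31, 0.29, 0.55]
sent = event_triggered_trace(trace, delta=0.1)
```

Between events, the receiving subsystems keep using the last transmitted value, trading a bounded model error for a substantial reduction in channel usage.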
Next, we investigated the effect of data exchange between subsystems on overall solver performance and adapted the concept of sensitivity analysis to the event-based communication scheme. An adaptive sensitivity-based TDMA algorithm was developed to manage extensive communication-resource requests, and channel utilization was adapted to the behavior of the optimal solution.
In the last part of the thesis, we extended our research to the multi-vehicle setting and investigated the communication-resource allocation problem in the context of the OFDMA protocol. We developed an adaptive sensitivity-based OFDMA protocol that links the evolution of the application layer to the communication layer and assigns communication resources according to the sensitivity analysis of the optimization problem at the application layer.
Mapping a virtual network service onto a physical network infrastructure is a challenging task due to the joint allocation of virtual resources across nodes and links, the diverse technical requirements of end users, and the coordination between multiple host domains, among other factors. This issue is exacerbated further by the extension of virtualization to the next-generation radio access network (NG-RAN) architecture and the provisioning of radio access network (RAN) slicing. To that end, this article focuses on the problem of mapping the virtual network functions (VNFs) of a RAN slice subnet, together with their internal and external virtual links (VLs), onto intelligent points of presence (I-PoPs) and transport networks in the NG-RAN architecture. In contrast to the majority of state-of-the-art proposals, which frequently fail to achieve performance objectives and neglect resource-allocation constraints, this article introduces automation and intelligence at the architectural level to map VNFs and VLs onto their corresponding physical nodes and links, with the goal of achieving superior efficiency in virtual-resource utilization while guaranteeing the performance of a RAN slice subnet.
Following a top-down approach, the key contributions of this article are: (i) to extend the architectural framework of network slicing towards the NG-RAN architecture and provide a comprehensive overview and critical analysis of the components and functionalities of a RAN slice subnet; (ii) to integrate the Experiential Network Intelligence (ENI) framework into a joint architecture of the network functions virtualization management and orchestration (NFV-MANO) framework, the Third Generation Partnership Project network slicing management system (3GPP-NSMS), and I-PoPs, in order to bring automation and intelligence to the management and orchestration of a RAN slice subnet in the NG-RAN architecture; and (iii) to propose a learning-assisted architectural solution for mapping the VNFs of a RAN slice subnet, together with their internal and external VLs, onto the underlying I-PoPs and transport networks.
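As a toy illustration of the underlying mapping problem, a greedy first-fit heuristic can place VNF resource demands onto I-PoP capacities. This is not the article's learning-assisted solution, and all names and numbers below are hypothetical.

```python
# Greedy first-fit VNF-to-I-PoP mapping -- illustrative sketch only.
def map_vnfs(vnf_demands, pop_capacities):
    remaining = dict(pop_capacities)
    placement = {}
    # Place the largest demands first to reduce fragmentation.
    for vnf, demand in sorted(vnf_demands.items(), key=lambda kv: -kv[1]):
        for pop, cap in remaining.items():
            if cap >= demand:
                placement[vnf] = pop
                remaining[pop] -= demand
                break
        else:
            return None  # infeasible under this heuristic
    return placement

# Hypothetical CPU demands (VNF names loosely follow RAN terminology).
placement = map_vnfs({"cu-cp": 2, "cu-up": 4, "du": 3},
                     {"pop1": 6, "pop2": 4})
```

A real solver would additionally respect VL bandwidth and latency constraints on the transport network, which is precisely where the learning-assisted approach of the article comes in.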
As its prevalence grows, agile methodology is also pervading domains that adhered to conventional models for decades. At the same time, the demand for safety-critical applications, and thus for rigorous quality assurance, is increasing. This raises the question of whether agile methodology can support the required level of quality assurance.
This master's thesis analyzes the situation of analytical quality assurance in agile environments in order to identify shortcomings and propose potential solutions. The author derives an initial hypothesis from his own professional experience, stating that analytical quality assurance is not sufficiently considered by agile development models and agile transformation. This hypothesis is split into eight sub-hypotheses, each describing a particular problem or challenge. Qualitative interviews with seven experts and complementary literature reviews are performed to test the hypotheses, identify further challenges, and collect appropriate solution proposals. Based on the elicited data, five sub-hypotheses as well as the initial hypothesis are corroborated, and five new challenges are added. Furthermore, twenty-six potential solutions for the relevant hypotheses are collected and presented. The solutions comprise established approaches, such as the Dynamic Systems Development Method or exploratory testing, but also innovative ideas, including the Three-Field Agile approach introduced in this thesis.
Altogether, it is found that agile methodology largely does not support traditional analytical quality assurance in its concepts; worse, some of its core principles are contradictory. However, numerous solutions are found and presented that address particular discrepancies and can mitigate the situation described.
The number of sensors used in modern devices is rapidly increasing, and interacting with sensors demands analog-to-digital conversion (ADC). A conventional ADC in leading-edge technologies faces many issues due to signal swings, manufacturing deviations, noise, etc. ADC designers are therefore moving to time-domain and digital design techniques to deal with these issues. This work pursues a novel self-adaptive spiking neural ADC (SN-ADC) design with promising features, e.g., robustness to technology-scaling issues, low-voltage operation, low power, and noise-robust conditioning. The SN-ADC uses spike time to carry the information; it can therefore be effectively translated to aggressive new technologies to implement reliable advanced sensory electronic systems. The SN-ADC supports the self-x properties (self-calibration, self-optimization, and self-healing) and the machine learning required for the Internet of Things (IoT) and Industry 4.0.
We have designed the main part of the SN-ADC, an adaptive spike-to-digital converter (ASDC). The ASDC is based on a self-adaptive complementary metal-oxide-semiconductor (CMOS) memristor and mimics the functionality of biological synapses, long-term plasticity, and short-term plasticity. The key advantage of our design is the entirely local, unsupervised adaptation scheme. The adaptation scheme consists of two hierarchical layers; the first layer is self-adapted, while the second layer is treated manually in this work. In our previous work, the adaptation process was based on 96 variables and therefore required considerable adaptation time to correct the synapses' weights. This paper proposes a novel self-adaptive scheme that reduces the number of variables to only four and achieves better adaptation capability with shorter delay than our previous implementation: the maximum adaptation time drops from 15 h 27 min to 1 min 47.3 s. Existing winner-take-all (WTA) circuits suffer from costly designs and cannot discriminate closely spaced spikes; therefore, a novel WTA circuit with memory is proposed. It uses 352 transistors for 16 inputs and can process spikes with a minimum time difference of 3 ns.
The ASDC has been tested under static and dynamic variations. The nominal values of the SN-ADC's number of missing codes (NOMC), integral non-linearity (INL), and differential non-linearity (DNL) are zero missing codes, 0.4 LSB, and 0.22 LSB, respectively, where LSB stands for the least significant bit. Under dynamic and static deviations, these values degrade, with maximum simulated changes of 0.88 LSB for DNL, 4 LSB for INL, and 6 codes for NOMC; the adaptation resets the SN-ADC parameters to their nominal values. The proposed ASDC is designed using X-FAB 0.35 µm CMOS technology and Cadence tools.
In the pre-seed phase before entering a market, new ventures face the complex, multi-faceted, and uncertain task of designing a business model. Founders accomplish this task within the framework of an innovation process, the so-called business model innovation process. However, because a set of feasible opportunities to design a viable business model is often not predictable in this early phase (Alvarez & Barney, 2007), business model ideas have to be revised multiple times, which corresponds to experimenting with alternative business models (Chesbrough, 2010). This also brings scholars to the relevant but seldom noticed field of research on experimentation as a cognitive schema (Felin et al., 2015; Gavetti & Levinthal, 2000). The few scholars who discussed the importance of such thought experimentation did not elaborate on the manifestations of this phenomenon. This gap in the current state of research allows this dissertation, building on qualitative interviews with entrepreneurs, to clearly conceptualise the manifestation of experimentation as a cognitive schema in business model innovation. The results extend previous conceptualisations of experimentation by illustrating the interplay of three forms of thought experimentation, namely purposeful interactions, incidental interactions, and theorising. In addition, the role of individuals in business model innovation has recently been recognised by scholars (Amit & Zott, 2015; Snihur & Zott, 2020). Not only the founders themselves but also many other actors, such as accelerators or public institutions, play a central role in supporting a new venture on its way to designing a viable business model. It thus stands to reason that, in addition to understanding how new ventures design their business model, it is also important to study how different actors are involved in this process.
Building on qualitative interviews with entrepreneurs, this dissertation addresses this gap by studying how different actors are involved in business model innovation and by conceptualising actor engagement behaviours in this context. The results reveal six actor engagement behaviours: teaching, supporting, mobilising, co-developing, sharing, and signalling. Furthermore, it stands to reason that entrepreneurs and external actors each play certain roles in business model innovation. Certain behavioural patterns and types of resource contributions may be characteristic of a group of actors, leading to the emergence of distinct actor roles. This dissertation therefore establishes a role concept, comprising 13 actor roles, to illustrate how actors are involved in designing a new business model. These actor roles are divided into task-oriented and network-oriented roles. Building on this, a variety of role dynamics are unveiled. Moreover, special attention is given to role temporality: building on two case studies and a quantitative survey, the results reveal how actor roles are played at certain points in time, thereby relating them to particular stages of the pre-seed phase.
In recent decades, there has been increasing interest in analyzing the behavior of complex systems. A popular approach for analyzing such systems is a network analytic approach where the system is represented by a graph structure (Wasserman & Faust 1994, Boccaletti et al. 2006, Brandes & Erlebach 2005, Vespignani 2018): nodes represent the system's entities, edges their interactions. A large toolbox of network analytic methods, such as measures for structural properties (Newman 2010), centrality measures (Koschützki et al. 2005), or methods for identifying communities (Fortunato 2010), is readily available to be applied to any network structure. However, it is often overlooked that a network representation of a system and the (technically applicable) methods contain assumptions that need to be met; otherwise, the results are not interpretable or even misleading. The most important assumption of a network representation is the presence of indirect effects: if A has an impact on B, and B has an impact on C, then A has an impact on C (Zweig 2016, Brandes et al. 2013). The presence of indirect effects can be explained by "something" flowing through the network by moving from node to node. Such network flows (or network processes) may be the propagation of information in social networks, the spread of infections, or entities using the network as infrastructure, such as in transportation networks. Several network measures, particularly most centrality measures, also assume the presence of such a network process, but additionally assume specific properties of the network process (Borgatti 2005). A centrality value then indicates a node's importance with respect to a process with these properties.
While this has been known for several years, only recently have datasets containing real-world network flows become accessible. In this context, the goal of this dissertation is to provide a better understanding of the actual behavior of real-world network processes, with a particular focus on centrality measures: If real-world network processes turn out to show different properties than those assumed by classic centrality measures, these measures might considerably under- or overestimate the importance of nodes for the actual network flow. To the best of our knowledge, there are only very few works addressing this topic.
The contributions of this thesis are therefore as follows: (i) We investigate in which respects real-world network flows meet the assumptions about them contained in centrality measures. (ii) Since we find that real-world flows show considerably different properties than assumed, we test to what extent the observed properties can be explained by simple models, such as those based on shortest paths or random walks. (iii) We study whether the deviations from the assumed behavior have an impact on the results of centrality measures.
To this end, we introduce flow-based variants of centrality measures which are either based on the assumed behavior or on the actual behavior of the real-world network flow. This enables systematic evaluation of the impact of each assumption on the resulting rankings of centrality measures.
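The contrast between assumed and observed flow behavior can be sketched with a toy "flow-based betweenness" that counts how often a node lies strictly inside observed trajectories, instead of assuming flow along shortest paths. The trajectories below are illustrative, not data from the dissertation.

```python
# Toy flow-based betweenness: rank nodes by how often they appear as an
# interior node of observed flow trajectories (illustrative sketch only).
from collections import Counter

def flow_betweenness(trajectories):
    counts = Counter()
    for path in trajectories:
        for node in path[1:-1]:   # interior nodes only, as in betweenness
            counts[node] += 1
    return counts

# Hypothetical observed trajectories; note the detour a -> b -> d -> c,
# which a shortest-path assumption would never credit to node d.
trajs = [["a", "b", "c"], ["a", "b", "d", "c"], ["c", "b", "a"]]
ranking = flow_betweenness(trajs)
```

Comparing such an empirical ranking with the ranking from classic shortest-path betweenness is exactly the kind of assumption-by-assumption evaluation the thesis performs.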
While, on a large scale, we observe a surprisingly large robustness of the measures against deviations from their assumptions, there are nodes whose importance is rated very differently when the real-world network flow is taken into account. (iv) As a technical contribution, we provide a method for efficiently handling large sets of flow trajectories by summarizing them into groups of similar trajectories. (v) We furthermore present the results of an interdisciplinary research project in which the trajectories of humans in a network were analyzed in detail. In general, we are convinced that a process-driven perspective on network analysis, in which the network process is considered in addition to the network representation, can help to better understand the behavior of complex systems.