
Multi-Sensory Data Analysis and On-Line Evaluation for Advanced Process Control and Yield Optimization in Polymer Film Industry

The current procedures for industrial process surveillance, waste reduction, and the prognosis of critical process states are still insufficient in some parts of the manufacturing industry. Increasing competitive pressure, falling margins, rising costs, just-in-time production, environmental protection requirements, and guidelines on energy savings pose new challenges to manufacturing companies, from the semiconductor to the pharmaceutical industry. New, more intelligent technologies adapted to current technical standards give companies better options for tackling these situations. Here, knowledge-based approaches open up pathways that have not yet been exploited to their full extent. The knowledge discovery process for knowledge generation describes such a concept: based on an understanding of the problems arising during production, it derives conclusions from real data, processes these data, transfers them into evaluated models and, in an open-loop approach, iteratively reflects the results back in order to resolve the production problems. The generation of data by control units, their transfer via field bus for storage in database systems, their formatting and immediate querying, their analysis, and their subsequent presentation with the ensuing benefits all play a decisive role here.

The aims of this work result from the lack of systematic approaches to the above-mentioned issues, such as process visualization, the generation of recommendations, the prediction of unknown sensor and production states, and statements on energy cost. Both science and commerce offer mature statistical tools for data preprocessing, analysis and modeling, and for the final reporting step. Since their creation, the insurance business, banking, market analysis, and marketing have been the main application fields of such software; they are now expanding into the production environment. Appropriate modeling can be achieved with specific machine learning procedures, which have been established in various industrial areas, e.g., in process surveillance by optical control systems. Here, state-of-the-art classification methods are used, with multiple applications spanning sensor technology, process areas, and production site data.

Manufacturing companies now intend to establish a more holistic surveillance of process data, such as sensor failures or process deviations, in order to identify dependencies. The causes of quality problems must be recognized and selected in real time from about 500 attributes of a highly complex production machine. Based on these identified causes, recommendations for improvement must then be generated for the operator at the machine, to enable timely measures that avoid quality deviations. Unfortunately, the ability to meet the required increases in efficiency, with simultaneous minimization of consumption and waste, still depends on data that are, for the most part, not available: positive examples are overrepresented, whereas the number of definite negative examples is too low. The acquired information can be influenced by sensor drift effects, and the occurrence of quality degradation may not be adequately recognized. Sensorless diagnostic procedures with dual use of actuators can help here. Moreover, in the course of a process, critical states with sometimes unexplained behavior can occur; in these cases, too, deviations could be reduced by early countermeasures.
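The knowledge discovery cycle outlined above, from data generation at the control units, transfer via field bus, storage and formatting in database systems, through querying, analysis, and presentation, can be pictured as a small pipeline skeleton. The following Python sketch is purely illustrative; the function names, the pandas/SQLAlchemy access, and the table and column names are assumptions, not the implementation used in this work.

# Illustrative skeleton of the knowledge discovery cycle (assumed names).
import pandas as pd
from sqlalchemy import create_engine

def acquire(engine, query: str) -> pd.DataFrame:
    # Query sensor records that the control units have written via field bus.
    return pd.read_sql(query, engine)

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    # Format the raw records: drop gaps and order by time.
    return df.dropna().sort_values("timestamp")  # hypothetical column name

def model_and_evaluate(df: pd.DataFrame):
    # Placeholder for the modeling and evaluation stages (e.g., classification).
    ...

def knowledge_discovery_cycle(engine):
    # The results are reflected back iteratively: queries and models are
    # refined in repeated passes until the production problem is resolved.
    raw = acquire(engine, "SELECT * FROM process_data")  # hypothetical table
    clean = preprocess(raw)
    return model_and_evaluate(clean)

# engine = create_engine("oracle+oracledb://user:password@host/service")  # assumed DSN
# knowledge_discovery_cycle(engine)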
The generation of data models using appropriate statistical methods is advantageous here. Conventional classification methods sometimes reach their limits: supervised learning methods are mostly used in areas of high information density, with sufficient data available for the classes under examination. However, there is a growing trend (e.g., in spam filtering) to apply such methods to underrepresented classes, for which datasets exist at best as outliers or not at all. The field of one-class classification (OCC) deals with this issue. Standard classification procedures (e.g., the k-nearest-neighbor classifier or support vector machines) can be modified and adapted to such problems. In this way, a control system is able to make classification statements about changing process states or sensor deviations.

The knowledge discovery process described above was employed in a case study from the polymer film industry at Mondi Gronau GmbH, comprising a real-data survey at the production site and subsequent data preprocessing, modeling, evaluation, and deployment as a system for the generation of recommendations. To this end, questions regarding the following topics had to be clarified: data sources, datasets and their formatting, transfer pathways, storage media, query sequences, the employed classification methods, their adjustment to the problems at hand, evaluation of the results, construction of a dynamic cycle, and the final implementation in the production process, along with its added value for the company. Pivotal options for optimization with respect to ecological and economic aspects can be found here: there is room for improvement in the reduction of energy consumption, CO₂ emissions, and waste at all machines, and at this one site savings of several million euros per month can be achieved. One major difficulty so far has been poorly accessible process data which, distributed across various unconnected data sources, in some areas led to increased analysis effort and a lack of holistic real-time quality surveillance. Specification monitoring, and the support it could provide to the operator at the installation, consequently fell short, a clear disadvantage with regard to cost minimization. The data of the case study, captured according to their purposes and in coordination with process experts, comprised 21,900 process datasets from cast film extrusion collected over two years, including sensor data from dosing facilities, and 300 site-specific energy datasets from the years 2002–2014. The investigation sequence was as follows:

1. In the first step, industrial approaches in the context of Industrie 4.0 and Big Data were investigated. The applied statistical software suites and their functions were compared, with a focus on real-time data acquisition from database systems, different data formats, sensor locations at the machines, and the data processing part. The linkage of datasets from various data sources, e.g., for labeling and downstream exploration according to the knowledge discovery process, is of high importance for polymer manufacturing applications.
2. In the second step, the aims were defined according to the industrial requirements, with the critical production problem called "cut-off" selected as the main target, and with regard to its investigation with machine learning methods. To this end, a system architecture suited to the polymer industry was developed, containing the following processing steps: data acquisition, monitoring & recommendation, and self-configuration.
3. The novel sensor datasets, with 160–2,500 real and synthetic attributes, were acquired at 1-minute intervals via PLC and field bus from an Oracle database. The 160 features were reduced to 6 dimensions with feature reduction methods. Because the critical class was underrepresented, the learning approaches had to be modified and optimized for one-class classification, which achieved 99% accuracy after training, testing, and evaluation with real datasets (see the illustrative sketch below).
4. In the next step, the 6-dimensional dataset was mapped into lower 1-, 2-, or 3-dimensional space with classical and non-classical mapping approaches for downstream visualization. The mapped view was separated into zones of normal and abnormal process conditions by threshold setting.
5. Afterwards, the boundary zone was investigated and an approach for trajectory extraction, consisting of condition points in sequence, was developed to improve the prediction behavior of the model. The extracted trajectories were trained, tested, and evaluated with state-of-the-art classification methods, achieving a 99% recognition ratio.
6. In the last step, the best methods and processing parts were integrated into a specifically developed, domain-specific graphical user interface for real-time visualization of process condition changes. The requirements of such an interface were discussed with the operators with regard to intuitive handling, interactive visualization, and recommendations (e.g., messaging and traffic-light indicators), and then implemented.

The software prototype was tested at a laboratory machine, where abnormal process problems were correctly recognized at a 90% ratio. The software was afterwards transferred to a group of on-line production machines. As demonstrated, the monthly amount of waste arising at machine M150 could be decreased from 20.96% to 12.44% during the application period, and the frequency of occurrence of the specific problem was reduced by 30%, corresponding to monthly savings of EUR 50,000. In the approach pertaining to the energy prognosis of load profiles, monthly energy data from 2002 to 2014 (about 36 trajectories with three to eight real parameters each) were used as the basis and were systematically analyzed and modeled. The prognosis quality increased as the target date approached; the site-specific load profile for 2014 could thus be predicted with an accuracy of 99%. Sustained cost reductions of several hundred thousand euros, combined with additional savings of EUR 2.8 million, could be demonstrated. The process improvements achieved while pursuing the scientific targets were successfully and permanently integrated at the case study plant. The gain in methodical and experimental knowledge was reflected in first economic results and could be verified numerically. The company's expectations were more than fulfilled, and further developments based on the new findings were initiated, among them the transfer of the results to additional machines and further studies expanding into the diagnostics area. Considering the size of the enterprise, enhanced future success should also be possible at other locations.
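As a hedged illustration of steps 3 and 4 above, the reduction of the raw sensor features to six dimensions followed by one-class learning on the normal class can be sketched in Python. PCA, the one-class SVM, the scikit-learn pipeline, and the synthetic data below are illustrative stand-ins; the thesis itself refers only to generic feature reduction methods and modified one-class learning approaches.

# Minimal sketch: feature reduction (160 -> 6 dimensions) plus a one-class
# model trained on normal operation only. PCA and OneClassSVM are assumed
# stand-ins for the methods named in the abstract; the data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
normal = rng.normal(0.0, 1.0, size=(2000, 160))   # well-represented normal records
critical = rng.normal(4.0, 1.5, size=(20, 160))   # rare critical records

detector = make_pipeline(
    StandardScaler(),
    PCA(n_components=6),                  # 160 features -> 6 dimensions
    OneClassSVM(nu=0.01, gamma="scale"),  # boundary around the normal class
)
detector.fit(normal)  # only the overrepresented normal class is used for training

# predict() returns +1 for normal process conditions and -1 for deviations.
print("false alarms on normal data:", np.mean(detector.predict(normal) == -1))
print("detected critical records:  ", np.mean(detector.predict(critical) == -1))

Training on normal operation only mirrors the class imbalance described above: definite negative examples are too rare to learn a conventional two-class decision boundary, so the model instead encloses the normal operating region and flags everything outside it.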
Under the grid charge exemption according to the EEG, energy savings at further German locations can amount to 4–11% in monetary terms and at least 5% in terms of energy. With regard to waste reduction for specific problems, up to 10% of material and cost can be saved. According to projections, material savings of 5–10 t per month and time savings of up to 50 person-hours are achievable. Important synergy effects can be created by the knowledge transfer.
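For the load-profile prognosis referred to above, in which monthly energy data from 2002 to 2014 were modeled to predict the site-specific profile for 2014, a minimal regression sketch is given below. The per-month linear trend and the synthetic history are illustrative assumptions only; the actual work modeled the real load trajectories of the site.

# Illustrative sketch of a monthly load-profile prognosis on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
years = np.arange(2002, 2014)          # historical years used for fitting
months = np.arange(1, 13)

# Synthetic stand-in for the historical monthly load profiles (e.g., MWh).
history = (1000
           + 10 * (years[:, None] - 2002)            # slow yearly trend
           + 50 * np.sin(2 * np.pi * months / 12)    # seasonal shape
           + rng.normal(0, 5, size=(years.size, months.size)))

# Fit one simple trend model per calendar month and extrapolate to 2014.
forecast_2014 = [
    LinearRegression().fit(years.reshape(-1, 1), history[:, m]).predict([[2014]])[0]
    for m in range(12)
]
print("Predicted 2014 load profile:", np.round(forecast_2014, 1))

Refitting such a model as new monthly observations arrive is one way the prognosis quality can improve toward the target date.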
Metadata
Author: Michael Kohlert
URN: urn:nbn:de:hbz:386-kluedo-41754
ISBN: 978-3-95974-002-9
Advisor: Andreas König, Tamara Chistyakova
Document Type: Doctoral Thesis
Language of publication: English
Date of Publication (online): 2015/09/22
Date of first Publication: 2015/09/22
Publishing Institution: Technische Universität Kaiserslautern
Granting Institution: Technische Universität Kaiserslautern
Acceptance Date of the Thesis: 2015/07/10
Date of the Publication (Server): 2015/09/23
Page Number: XXII, 158
Faculties / Organisational entities: Kaiserslautern - Fachbereich Elektrotechnik und Informationstechnik
DDC-Classification: 6 Technology, medicine, applied sciences / 620 Engineering and mechanical engineering
Licence (German): Standard according to the KLUEDO guidelines of 30.07.2015