2011, vol. 4, núm. 4
http://hdl.handle.net/2099/12199

Managing the IE (Industrial Engineering) Mindset: A quantitative investigation of Toyota’s practical thinking shared among employees
http://hdl.handle.net/2099/12225
2012-05-22
Marksberry, Phillip; Parsley, David
Purpose: The goal of this work was to investigate today's managerial practices and to understand whether Toyota is sheltering itself from these newer practices or embracing them, as most believe.
Design/methodology/approach: This work uses a form of data mining called Latent Semantic Analysis (LSA) to analyze an organization's ideal management practices.
Findings: This work shows quantitatively that TPS favors earlier forms of industrial engineering over the optimization techniques available today.
Originality/value: The use of data mining to analyze organizational management practices.
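The LSA technique the authors apply can be illustrated with a minimal sketch: build a term-document count matrix, take a truncated SVD, and compare documents in the resulting latent space. The toy corpus, the vocabulary handling, and the choice of k = 2 below are hypothetical illustrations, not the authors' data or parameters.

```python
import numpy as np

# Toy corpus of four management-practice statements (hypothetical data,
# not the authors' corpus).
docs = [
    "standardize work improve process",
    "improve process reduce waste",
    "optimize schedule with linear programming",
    "linear programming optimize cost",
]

# Term-document count matrix: one row per vocabulary word, one column per doc.
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# LSA: a truncated SVD keeps only the k strongest latent "topics".
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # documents embedded in latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents that share a latent topic score higher than unrelated ones.
sim_related = cosine(doc_vecs[0], doc_vecs[1])
sim_unrelated = cosine(doc_vecs[0], doc_vecs[2])
```

In an LSA study of management practices, the columns would be documents such as internal policy statements, and closeness in the latent space would indicate shared underlying practices even when the wording differs.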
A Framework for successful new product development
http://hdl.handle.net/2099/12224
2012-05-22
Bhuiyan, Nadia
Purpose: The purpose of this paper is to propose a framework of critical success factors, metrics, and tools and techniques for implementing metrics for each stage of the new product development (NPD) process.
Design/methodology/approach: To achieve this objective, a literature review was undertaken to investigate decades of studies on NPD success and how it can be achieved. These studies were scanned for factors common to firms whose new products succeeded on the market.
Findings: The paper summarizes NPD success factors, suggests metrics that should be used to measure these factors, and proposes tools and techniques to make use of these metrics. This was done for each stage of the NPD process, and brought together in a framework that the authors propose should be followed for complex NPD projects.
Research limitations/implications: Several research directions could provide additional useful information, both to firms identifying critical success factors (CSFs) and measuring product development success, and to academics performing research in this area. The main research opportunity lies in implementing or testing the proposed framework.
Practical implications: The framework can be followed by managers of complex NPD projects to ensure success.
Originality/value: While many studies have been conducted on critical success factors for NPD, they tend to be fragmented and focus on one or a few phases of the NPD process. To the authors’ knowledge, this is the first time these studies have been synthesized into a single framework.
Designing and implementation of an intelligent manufacturing system
http://hdl.handle.net/2099/12223
2012-05-22
Almeida, Fernando L. F.
Purpose: The goal of XPRESS is to establish a breakthrough for the factory of the future with a new flexible production concept based on the generic idea of “specialized intelligent process units” (“Manufactrons”) integrated in cross-sectoral learning networks for a customized production. XPRESS meets the challenge to integrate intelligence and flexibility at the “highest” level of the production control system as well as at the “lowest” level of the singular machine.
Design/methodology/approach: The architecture of a manufactronic networked factory is presented, making it possible to generate particular manufactrons for specific tasks based on automatic analysis of the required features.
Findings: The manufactronic factory concept meets the challenge of integrating intelligence and flexibility at the “highest” level of the production control system as well as at the “lowest” level of the singular machine. The quality assurance system provided 100% inline quality monitoring, destructive costs were reduced by 30%-49%, the ramp-up time for the set-up of production lines decreased by up to 50%, and the changeover time decreased by up to 80%.
Research limitations/implications: Specific features of the designed manufactronic architecture, namely the transport manufactrons, have been tested as separate mechanisms which can be merged into the final comprehensive architecture at a later stage.
Practical implications: This concept is demonstrated in the automotive and aeronautics industries, but can be easily transferred to nearly all production processes. Using the manufactronic approach, industrial players will be able to anticipate and to respond to rapidly changing consumer needs, producing high-quality products in adequate quantities while reducing costs.
Originality/value: Assembly units composed of manufactrons can flexibly perform varying types of complex tasks, whereas today this is limited to a few pre-defined tasks. Additionally, radical innovations of the manufactronic networked factory include the knowledge and responsibility segregation and trans-sectoral process learning in specialist knowledge networks.
Towards reducing traffic congestion using cooperative adaptive cruise control on a freeway with a ramp
http://hdl.handle.net/2099/12221
2012-05-21
Arnaout, Georges; Bowling, Shannon
Purpose: In this paper, the impact of Cooperative Adaptive Cruise Control (CACC) systems on traffic performance is examined using microscopic agent-based simulation. Using a traffic simulation model of a freeway with an on-ramp, created to induce perturbations and trigger stop-and-go traffic, the CACC system's effect on traffic performance is studied. The previously proposed traffic simulation model is extended and validated. By embedding CACC vehicles at different penetration levels, the results show statistical significance and indicate the potential of CACC systems to improve traffic characteristics and thereby reduce traffic congestion. The study shows that the impact of CACC is positive but highly dependent on CACC market penetration. The flow rate of traffic using CACC is proportional to the market penetration rate of CACC-equipped vehicles and to the density of the traffic.
Design/methodology/approach: This paper uses microscopic simulation experiments followed by quantitative statistical analysis. Simulation enables researchers to manipulate system variables and predict the outcome for the overall system in a straightforward way, giving them the opportunity to intervene and improve performance. Thus, with simulation, changes to variables that would require excessive time, or be infeasible to carry out on real systems, are often completed within seconds.
Findings: The findings of this paper are summarized as follows:
• Provide and validate a platform (agent-based microscopic traffic simulator) in which any CACC algorithm (current or future) may be evaluated.
• Provide detailed analysis associated with implementation of CACC vehicles on freeways.
• Investigate whether embedding CACC vehicles on freeways has a significant positive impact or not.
Research limitations/implications: The main limitation of this research is that it has been conducted solely in a computer laboratory. Laboratory experiments and simulations provide a controlled setting, well suited for preliminary testing and calibration of the input variables. However, laboratory testing is by no means sufficient to validate the entire methodology; it must be complemented by field testing. As for the simulation model's limitations: accidents, weather conditions, and obstacles in the road were not taken into consideration. Failures in the operation of the sensors and communication equipment of the CACC design were also not considered. Additionally, the special HOV lanes were limited to manual and CACC vehicles; emergency vehicles, buses, motorcycles, and other types of vehicles were not considered in this study. Finally, it is worth noting that the human factor is too sophisticated, unpredictable, and flexible to be modeled exactly in a traffic simulation; some behavior that occurs in real life would not be captured by the proposed model.
Practical implications: A high level of CACC market penetration will not occur in the near future, so reaching high penetration will remain a challenge for this type of research, and public access to such technology will remain a major practical hurdle. With such a small headway safety gap, even if the technology is proven efficient and safe, getting the public to accept it and feel comfortable using it will remain a challenge for the success of CACC technology.
Originality/value: The literature on the impact of CACC on traffic dynamics is limited. In addition, no previous work has proposed an open-source microscopic traffic simulator in which different CACC algorithms can easily be used and tested. We believe that the proposed model is more realistic than other traffic models and is one of the very first to model the behavior of CACC vehicles on freeways.
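The kind of agent-based experiment the paper describes can be sketched minimally as a single-lane ring road where a fraction of vehicles (the CACC penetration) holds a shorter time headway than manual drivers, and throughput is measured as density times mean speed. All parameters below (headways, acceleration, road length) are hypothetical illustrations, not the authors' calibrated model.

```python
import random

def simulate(penetration, n_cars=30, road=1000.0, steps=2000, dt=0.5, seed=1):
    """Single-lane ring road: a `penetration` fraction of vehicles are
    CACC-equipped and hold a shorter time headway than manual drivers."""
    rng = random.Random(seed)
    cacc = [rng.random() < penetration for _ in range(n_cars)]
    pos = [i * road / n_cars for i in range(n_cars)]  # evenly spaced start
    vel = [0.0] * n_cars
    v_max, accel, standstill = 30.0, 1.5, 5.0  # m/s, m/s^2, m (assumed)
    for _ in range(steps):
        new_vel = []
        for i in range(n_cars):
            gap = (pos[(i + 1) % n_cars] - pos[i]) % road  # gap to leader
            headway = 0.5 if cacc[i] else 1.4  # desired time headway (s)
            v_safe = max(0.0, (gap - standstill) / headway)
            new_vel.append(min(v_max, vel[i] + accel * dt, v_safe))
        vel = new_vel
        pos = [(p + v * dt) % road for p, v in zip(pos, vel)]
    return (n_cars / road) * (sum(vel) / n_cars)  # flow = density * mean speed

flow_manual = simulate(penetration=0.0)
flow_cacc = simulate(penetration=1.0)
```

Even this crude sketch reproduces the qualitative finding that flow increases with CACC penetration at a given density, because shorter headways raise the speed each vehicle can safely sustain for the same gap.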
What do information reuse and automated processing require in engineering design? Semantic process
http://hdl.handle.net/2099/12220
2012-05-21
Nykänen, Ossi; Salonen, Jaakko; Markkula, Mikko; Ranta, Pekka; Rokala, Markus; Helminen, Matti; Alarotu, Vänni; Nurmi, Juha; Palonen, Tuija; Koskinen, Kari T.; Pohjolainen, Seppo
Purpose: The purpose of this study is to characterize, analyze, and demonstrate a machine-understandable semantic process for validating, integrating, and processing technical design information. This establishes both a vision and tools for information reuse and semi-automatic processing in engineering design projects, including virtual machine laboratory applications with generated components.
Design/methodology/approach: The process model has been developed iteratively in terms of action research, constrained by the existing technical design practices and assumptions (design documents, expert feedback), available technologies (pre-studies and experiments with scripting and pipeline tools), benchmarking with other process models and methods (notably the RUP and DITA), and formal requirements (computability and the critical information paths for the generated applications). In practice, the work includes both quantitative and qualitative components.
Findings: Technical design processes may be greatly enhanced in terms of semantic process thinking, by enriching design information and automating information validation and transformation tasks. Contemporary design information, however, is mainly intended for human consumption and needs to be explicitly enriched with the currently missing data and interfaces. In practice, this may require acknowledging the role of a technical information or knowledge engineer to lead the development of the semantic design information process in a design organization. There is also a trade-off between machine-readability and system complexity that needs to be studied further, both empirically and in theory.
Research limitations/implications: The conceptualization of the semantic process is essentially an abstraction based on the idea of progressive design. While this effectively allows implementing semantic processes with, e.g., pipeline technologies, the abstraction is valid only when technical design is organized into reasonably distinct tasks.
Practical implications: Our work points out a best practice for technical information management in progressive design that can be applied on different levels.
Social implications: Current design processes may be somewhat impaired by legacy practices that do not promote information reuse and collaboration beyond conventional task domains. Our work provides a reference model to analyze and develop design activities as formalized work-flows. This work should lead to improved industry design process models and novel CAD/CAM/PDM applications, thereby strengthening industry design processes.
Originality/value: While extensively studied, semantic modeling in technical design has been largely dominated by the idea of capturing design artifacts without a clear rationale why this is done and what level of detail should be favored in models. In the semantic process presented in this article, the utility and the chief quality criteria of semantic models (of technical information and artifacts) are explicitly established by the semantic processing pipeline(s). This constructively explains the significance of semantic models as communication and information requirement interfaces, with concrete use cases.
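The core idea of a semantic process, in which design information is validated and enriched in machine-checkable stages whose pipeline defines what the semantic models must contain, can be sketched as a tiny record pipeline. The record schema, field names, and stages below are hypothetical; the article's pipelines operate on richer technical-design documents.

```python
def validate(record):
    """Reject records that lack the machine-readable fields a later
    stage needs, instead of failing silently downstream."""
    required = {"part_id", "material", "mass_kg"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"record lacks machine-readable fields: {missing}")
    return record

def enrich(record):
    """Derive information once so later stages can rely on it."""
    densities = {"steel": 7850.0, "aluminium": 2700.0}  # kg/m^3
    record["volume_m3"] = record["mass_kg"] / densities[record["material"]]
    return record

def pipeline(records, stages):
    """Run each record through the ordered stages; the pipeline itself
    states the information requirements of the semantic models."""
    for r in records:
        for stage in stages:
            r = stage(r)
        yield r

parts = [{"part_id": "P-1", "material": "steel", "mass_kg": 15.7}]
out = list(pipeline(parts, [validate, enrich]))
```

The point of the sketch is the article's quality criterion: a semantic model is adequate exactly when the processing pipeline can consume it, so validation failures surface the "currently missing data and interfaces" explicitly.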
Integrated methodological frameworks for modelling agent-based advanced supply chain planning systems: a systematic literature review
http://hdl.handle.net/2099/12218
2012-05-21
Santa-Eulalia, Luis Antonio; Halladjian, Georgina; D'Amours, Sophie; Frayret, Jean-Marc
Purpose: The objective of this paper is to provide a systematic literature review of recent developments in methodological frameworks for the modelling and simulation of agent-based advanced supply chain planning systems.
Design/methodology/approach: A systematic literature review is provided to identify, select, analyze, and critically summarize all suitable studies in the area. It is organized into two blocks: the first covers agent-based supply chain planning systems in general terms, while the second narrows the search to works explicitly containing methodological aspects.
Findings: Among the sixty suitable manuscripts identified in the primary literature search, only seven explicitly considered methodological aspects. In addition, we noted that, in general, the notion of advanced supply chain planning is not treated unambiguously, that the social and individual aspects of the agent society are not clearly taken into account in several studies, and that a significant portion of the works is theoretical in nature, with few full-scale industrial applications. An integrated framework covering all phases of the modelling and simulation process is still lacking in the literature reviewed.
Research limitations/implications: The main research limitations are related to the period covered (last four years), the selected scientific databases, the selected language (i.e. English) and the use of only one assessment framework for the descriptive evaluation part.
Practical implications: The identification of recent works in the domain and the discussion of their limitations can help pave the way for new and innovative research towards a complete methodological framework for agent-based advanced supply chain planning systems.
Originality/value: As there are no recent state-of-the-art reviews in the domain of methodological frameworks for agent-based supply chain planning, this paper contributes to systematizing and consolidating what has been done in recent years and uncovers interesting research gaps for future studies in this emerging field.
Activity modes selection for project crashing through deterministic simulation
http://hdl.handle.net/2099/12217
2012-05-21
Mohanty, Ashok; Mishra, Jibitesh; Satpathy, Biswajit
Purpose: The time-cost trade-off problem addressed by CPM-based analytical approaches, assume unlimited resources and the existence of a continuous time-cost function. However, given the discrete nature of most resources, the activities can often be crashed only stepwise. Activity crashing for discrete time-cost function is also known as the activity modes selection problem in the project management. This problem is known to be NP-hard. Sophisticated optimization techniques such as Dynamic Programming, Integer Programming, Genetic Algorithm, Ant Colony Optimization have been used for finding efficient solution to activity modes selection problem. The paper presents a simple method that can provide efficient solution to activity modes selection problem for project crashing.
Design/methodology/approach: Simulation based method implemented on electronic spreadsheet to determine activity modes for project crashing. The method is illustrated with the help of an example.
Findings: The paper shows that a simple approach based on simple heuristic and deterministic simulation can give good result comparable to sophisticated optimization techniques.
Research limitations/implications: The simulation based crashing method presented in this paper is developed to return satisfactory solutions but not necessarily an optimal solution.
Practical implications: The use of spreadsheets for solving the Management Science and Operations Research problems make the techniques more accessible to practitioners. Spreadsheets provide a natural interface for model building, are easy to use in terms of inputs, solutions and report generation, and allow users to perform what-if analysis.
Originality/value: The paper presents the application of simulation implemented on a spreadsheet to find efficient solutions to the discrete time-cost trade-off problem.
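The kind of heuristic the abstract describes can be sketched as a greedy selection of activity modes on a simplified serial network, always taking the cheapest available crash per unit of time saved. This is only an illustration of the general idea, not the authors' actual spreadsheet model; the activity data and deadline below are hypothetical.

```python
# Greedy activity-mode selection for project crashing on a serial network.
# Each activity has a discrete time-cost function: a list of (duration, cost)
# modes, from slowest/cheapest to fastest/most expensive. Data are hypothetical.
activities = {
    "A": [(8, 100), (6, 150), (5, 210)],
    "B": [(6, 80), (4, 160)],
    "C": [(10, 120), (7, 200), (6, 260)],
}

def crash(activities, deadline):
    # Start every activity in its normal (slowest, cheapest) mode.
    mode = {a: 0 for a in activities}
    duration = lambda: sum(activities[a][mode[a]][0] for a in activities)
    cost = lambda: sum(activities[a][mode[a]][1] for a in activities)
    while duration() > deadline:
        # Among activities that can still be crashed one step, pick the one
        # with the lowest extra cost per unit of time saved (cost slope).
        best, best_slope = None, float("inf")
        for a, modes in activities.items():
            if mode[a] + 1 < len(modes):
                d0, c0 = modes[mode[a]]
                d1, c1 = modes[mode[a] + 1]
                slope = (c1 - c0) / (d0 - d1)
                if slope < best_slope:
                    best, best_slope = a, slope
        if best is None:
            break  # deadline unreachable even with all activities fully crashed
        mode[best] += 1
    return mode, duration(), cost()

mode, d, c = crash(activities, deadline=20)
```

On a serial network every activity is critical, which is what keeps the sketch this short; a general network would additionally require recomputing the critical path after each crashing step.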
Scheduling of a computer integrated manufacturing system: a simulation study
http://hdl.handle.net/2099/12216
2012-05-21T15:02:14Z
Bhuiyan, Nadia; Gouw, Gerard; Yazdi, Daryoosh
Purpose: The purpose of this paper is to study the effect of selected scheduling dispatching rules on the performance of an actual CIM system using different performance measures and to compare the results with the literature.
Design/methodology/approach: To achieve this objective, a computer simulation model of the existing CIM system is developed to test the performance of different scheduling rules with respect to mean flow time, machine efficiency and total run time as performance measures.
Findings: Results suggest that the system performs considerably better in terms of machine efficiency when the initial number of parts released is at its maximum and the buffer size is at its minimum. Furthermore, in terms of average flow time, the system performs considerably better when the dispatching rule is either Earliest Due Date (EDD) or Shortest Process Time (SPT), with a buffer size of five and an initial release of eight parts.
Research limitations/implications: A limited number of factors and levels were considered in the experimental set-up; however, the flexibility of the model allows experimentation with additional factors and levels. The simulation experiments used three scheduling dispatching rules: First In/First Out (FIFO), EDD and SPT. Future research can compare the effect of other dispatching rules on system performance, and some assumptions can be relaxed.
Practical implications: This research helps to identify the potential effect of a selected number of dispatching rules and two other factors, the buffer size and the initial number of parts released, on the performance of existing CIM systems with different part types where the machines are the major resource constraint.
Originality/value: This research is among the few to study the effect of dispatching rules on the performance of CIM systems using terminating simulation analysis. This is significant given that CIM systems are mostly used to produce different parts in varying quantities and thus do not produce parts on a continuing basis. It is also among the first to study the combined effect of the dispatching rule and the buffer size in CIM systems where job arrivals are predetermined and depend on the completion of the existing parts in the system. A description of how buffer size and initial part release relate to the performance of the CIM system under study, for the priority dispatching rules studied, is also provided.
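How such dispatching rules change mean flow time can be seen even in a deterministic single-machine sketch, which is a drastic simplification of the multi-machine CIM system simulated in the paper. The jobs, processing times and due dates below are hypothetical, and all jobs are assumed available at time zero.

```python
# Compare FIFO, SPT and EDD dispatching on a single machine.
# jobs: (id, processing_time, due_date), all available at time 0.
jobs = [("J1", 5, 9), ("J2", 2, 6), ("J3", 8, 22), ("J4", 3, 7)]

RULES = {
    "FIFO": lambda js: list(js),                        # arrival (input) order
    "SPT":  lambda js: sorted(js, key=lambda j: j[1]),  # shortest process time first
    "EDD":  lambda js: sorted(js, key=lambda j: j[2]),  # earliest due date first
}

def mean_flow_time(jobs, rule):
    t, total = 0, 0
    for _, p, _ in RULES[rule](jobs):
        t += p       # this job completes at time t
        total += t   # flow time = completion time, since all jobs arrive at 0
    return total / len(jobs)
```

With these data, SPT and EDD happen to produce the same sequence (mean flow time 8.75 versus 11.25 for FIFO); SPT minimizing mean flow time on a single machine is a classical scheduling result, while in the multi-machine, buffered CIM setting of the paper the ranking has to be established by simulation.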
Modelling healthcare internal service supply chains for the analysis of medication delivery errors and amplification effects
http://hdl.handle.net/2099/12202
2012-05-16T16:28:50Z
Behzad, Banafsheh; Moraga, Reinaldo J.; Chen, Shi-Jie (Gary)
Purpose: Healthcare is a universally used service that strongly affects economies and quality of life. Research on service supply chains has gained a significant role over the past decade. The main goal of this paper is to model and simulate the internal service supply chains of a healthcare system in order to study the effects of different parameters on the outputs and capability measures of the processes. The specific objectives are to analyse medication delivery errors in a community hospital based on the results of the models, and to explore the presence of the bullwhip effect in the hospital's internal service supply chains.
Design/methodology/approach: System dynamics, an approach for understanding the behaviour of complex systems, is used to model two internal service supply chains of the hospital, with a sub-model created to simulate medication delivery errors. The models are validated against actual hospital data, and the results are analysed using experimental design techniques.
Findings: It is observed that the bullwhip effect may not occur in a hospital's internal service supply chains. Furthermore, the paper points out conditions for reducing medication delivery errors in a hospital.
Research limitations/implications: Because of the community hospital's data availability, the service supply chains modelled in this paper are small, representing only the tasks performed inside the hospital. To better observe the bullwhip effect in healthcare service supply chains, the chains should be modelled more generally.
Originality/value: An original system dynamics model of the internal service supply chains of a community hospital, with a sub-model simulating medication delivery errors.
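The bullwhip question the paper asks can be illustrated with a minimal stock-management sketch in the spirit of system dynamics: one stage serves a demand stream and orders to restore inventory to a target, with a replenishment delay. The bullwhip effect is commonly quantified as the variance of orders divided by the variance of demand, with a ratio above 1 indicating amplification. This is not the authors' hospital model; the ordering policy, parameters and demand series below are all hypothetical.

```python
from statistics import pvariance

def simulate_orders(demand, target=20.0, alpha=0.5, beta=0.3, lead=2):
    """One stage ordering toward an inventory target, with a fixed lead time.

    Orders placed now arrive `lead` periods later; the demand forecast is
    updated by exponential smoothing with weight `beta`, and a fraction
    `alpha` of the inventory gap is added to each order.
    """
    inventory = float(target)
    forecast = float(demand[0])
    pipeline = [0.0] * lead          # orders in transit
    orders = []
    for d in demand:
        inventory += pipeline.pop(0) - d   # receive oldest shipment, serve demand
        forecast += beta * (d - forecast)  # smooth the demand forecast
        order = max(0.0, forecast + alpha * (target - inventory))
        pipeline.append(order)
        orders.append(order)
    return orders

demand = [10, 12, 9, 14, 11, 13, 10, 12]
orders = simulate_orders(demand)
bullwhip_ratio = pvariance(orders) / pvariance(demand)  # > 1: amplification
```

In this toy policy the gap correction overreacts to the replenishment delay, so order variance exceeds demand variance; the paper's finding is that a hospital's internal chains, modelled with system dynamics, need not show this amplification.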