Journal of Operations Management
Featured scientific publications
* Data are for reference only
Warehouse operations often include a step where large numbers of boxed products must be rapidly packed into shipping cartons, but little attention has been paid to the benefits that can be obtained by using a good packing rule. The work reported here determines the connection between packing rule and packing performance. First, the components of a packing rule are identified. Then, the components are combined into sixteen different packing rules, which are analyzed to determine the resulting packing performance. The analysis is made using a computer program that simulates carton packing. Actual data from a company's operations are used, as well as a set of synthetic data with known best-case packing characteristics.
The best packing rule is to pack the largest boxes first and align the longest‐middle‐shortest box dimensions with the longest‐middle‐shortest carton dimensions. This packing rule reduces the carton's average empty space by about one third when compared to disorganized procedures, and reduces the worst‐case results by about half. Despite these significant improvements, even the best rules could not reduce the average amount of empty space below about 20%.
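The winning rule lends itself to a compact implementation. Below is a minimal Python sketch of the largest-first, dimension-aligned rule described above; the `Box` structure, the volume-based "largest" criterion, and the sample data are illustrative assumptions, and a real packer would also need the placement and feasibility logic that is omitted here.

```python
from dataclasses import dataclass

@dataclass
class Box:
    dims: tuple  # (length, width, height), in any order

def orient(box_dims, carton_dims):
    """Rotate a box so its longest/middle/shortest sides line up with
    the carton's longest/middle/shortest sides (the best rule above)."""
    order = sorted(range(3), key=lambda i: carton_dims[i], reverse=True)
    sorted_box = sorted(box_dims, reverse=True)
    oriented = [0.0] * 3
    for rank, axis in enumerate(order):
        oriented[axis] = sorted_box[rank]
    return tuple(oriented)

def pack_order(boxes, carton_dims):
    """Largest-boxes-first sequence with aligned orientations
    (sketch only: in-carton placement checks are omitted)."""
    by_volume = sorted(boxes, key=lambda b: b.dims[0] * b.dims[1] * b.dims[2],
                       reverse=True)
    return [(b, orient(b.dims, carton_dims)) for b in by_volume]

# Hypothetical data: one carton, three boxes.
carton = (60.0, 40.0, 30.0)
boxes = [Box((20, 10, 30)), Box((5, 5, 5)), Box((25, 20, 15))]
for box, oriented in pack_order(boxes, carton):
    print(box.dims, "->", oriented)
```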
Using detailed survey data collected from 95 large factories in North Carolina, this article examines the reasons why some factories are more productive than others. Six distinct measures of productivity serve as dependent variables in the regression analyses reported. The results suggest five major themes shared by the most productive plants: simpler flow of materials through the process, valuing people, attending to quality, investing in hardware, and accounting for the industry's productivity growth. Several factors, such as size and unionization, appear to have no importance in explaining cross-factory productivity differences.
Humanitarian organizations (HOs) often base their warehouse locations on individuals' experience and knowledge rather than on decision-support tools. Many HOs run separate supply chains for emergency response and ongoing operations. Based on a review of the humanitarian network design literature combined with an in-depth case study of the United Nations High Commissioner for Refugees (UNHCR), this paper presents a warehouse location model for joint prepositioning that incorporates political and security factors. Although accessibility, co-location, security, and human resources are crucial to the practice of humanitarian operations management, such contextual factors had not previously been included in network optimization models. We found that, when quantified and modeled, these factors are important determinants of network configuration. In addition, our results suggest that joint prepositioning for emergency response and ongoing operations allows the global warehouse network to expand while reducing cost and response time.
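To make the modeling idea concrete, here is a minimal sketch of a warehouse-location formulation in which contextual factors such as security enter the objective as cost multipliers. It uses the open-source PuLP library; the data, the multiplier form, and the single-sourcing assumption are all illustrative assumptions, not the paper's actual UNHCR model.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Hypothetical data: candidate warehouses, demand regions, unit shipping
# costs, and a contextual multiplier (>1 = worse security/accessibility).
warehouses = ["W1", "W2", "W3"]
regions = ["R1", "R2"]
demand = {"R1": 100, "R2": 80}
ship_cost = {("W1", "R1"): 4, ("W1", "R2"): 7,
             ("W2", "R1"): 6, ("W2", "R2"): 3,
             ("W3", "R1"): 5, ("W3", "R2"): 5}
context = {"W1": 1.0, "W2": 1.4, "W3": 1.1}  # security/accessibility penalty
fixed_cost = {"W1": 500, "W2": 300, "W3": 400}
max_open = 2

prob = LpProblem("joint_prepositioning", LpMinimize)
open_w = {w: LpVariable(f"open_{w}", cat=LpBinary) for w in warehouses}
serve = {(w, r): LpVariable(f"serve_{w}_{r}", cat=LpBinary)
         for w in warehouses for r in regions}

# Objective: fixed costs plus context-adjusted shipping costs.
prob += lpSum(fixed_cost[w] * open_w[w] for w in warehouses) + \
        lpSum(context[w] * ship_cost[w, r] * demand[r] * serve[w, r]
              for w in warehouses for r in regions)

for r in regions:  # each region is served by exactly one warehouse
    prob += lpSum(serve[w, r] for w in warehouses) == 1
for w in warehouses:
    for r in regions:  # only open warehouses may serve a region
        prob += serve[w, r] <= open_w[w]
prob += lpSum(open_w[w] for w in warehouses) <= max_open

prob.solve()
print({w: int(open_w[w].value()) for w in warehouses})
```

Raising a warehouse's `context` multiplier makes the solver shift flow toward safer, more accessible sites, which is the qualitative behavior the paper attributes to quantified contextual factors.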
Product development is recognized as cross-functional, knowledge-intensive work that has become increasingly important in the fast-paced, globally competitive environment. Researchers and practicing managers contend that design engineers may play an important role in product development efforts. However, their effect on the product development process is not well understood, and the extent of their impact on product development performance has not been adequately assessed. This research defines the changing role of design engineers and discusses their impact on setting clear project targets and sharing knowledge about customers. The study investigates the impact of these variables on product development productivity. Data collected from 205 manufacturing firms were used to create valid and reliable instruments to assess role change of design engineers, clarity of project targets, shared knowledge about customers, and product development productivity. Results from structural equation modeling indicate that as the role of design engineers expands, the clarity of project targets increases. This increase in turn affects the extent of shared knowledge about customers. Increases in the clarity of project targets and shared knowledge about customers appear to enhance product development productivity.
Most scheduling/lot sizing models for the single-machine problem assume that aggregate demand equals aggregate production and that backorders are to be avoided. When working inventories are low, the scheduler may wish to avoid short production runs, willingly incurring some backorder penalties in order to lengthen production runs and reduce setup costs per unit of time. The model proposed here identifies optimal lot sizes with respect to the backorder/setup cost relationship. Use of the model results in an optimally balanced inventory even when aggregate inventory levels are changing.
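For intuition about this trade-off, the classic single-item lot-size-with-planned-backorders formulas capture the same setup/backorder balance. The sketch below implements that textbook result, not necessarily the exact model proposed in the paper, and the parameter values are made up.

```python
import math

def lot_size_with_backorders(D, K, h, b):
    """Classic EOQ with planned backorders (textbook result, shown for
    intuition only; the paper's model may differ).
    D: demand/period, K: setup cost, h: holding cost per unit-period,
    b: backorder cost per unit-period."""
    Q = math.sqrt(2 * D * K / h) * math.sqrt((h + b) / b)  # lot size
    S = Q * h / (h + b)                                    # max backorder level
    cost = math.sqrt(2 * D * K * h * b / (h + b))          # avg cost/period
    return Q, S, cost

# Hypothetical parameters: allowing backorders lengthens runs and cuts cost.
Q, S, cost = lot_size_with_backorders(D=1000, K=50, h=2.0, b=10.0)
print(f"lot size {Q:.1f}, max backorder {S:.1f}, cost/period {cost:.2f}")

# Compare with the classic no-backorder EOQ.
Q0 = math.sqrt(2 * 1000 * 50 / 2.0)
cost0 = math.sqrt(2 * 1000 * 50 * 2.0)
print(f"no-backorder lot size {Q0:.1f}, cost/period {cost0:.2f}")
```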
A procedure is presented for calculating the stochastic costs, comprising operator (labor) and inventory costs, associated with dynamic line balancing. Dynamic line balancing, unlike traditional methods of assembly and production line balancing, assigns operators to one or more operations, where each operation has a predetermined processing time and is defined as a group of identical parallel stations. Operator costs and inventory costs are stochastic because they are functions of the required flow rate and of the assignment process employed in balancing the line, which may vary throughout the balancing period. Earlier studies focused on calculating the required number of stations and demonstrated how the initial and final inventories at the different operations are balanced.
The cost minimization method developed in the article can be used to evaluate and compare the assignment of operators to stations under various assignment heuristics. Operator costs and inventory costs are the components of the cost function. Operator costs are based on the operations to which operators are assigned and are charged for the entire work week, even when an operator receives only a partial assignment that leaves idle time. It is assumed that there is no variation in station speeds, no learning-curve effect on operators' performance times, and no limit on the number of operators available for assignment. The costs associated with work-in-process inventories are computed on a “value added” basis. There is no charge for finished-goods inventory after the last operation or for raw material before the first operation.
The conditions which must be examined before using the cost evaluation method are yield, input requirements, operator requirements, scheduling requirements and output requirements. Yield reflects the output of good units at any operation. The input requirement accounts for units discarded or in need of reworking. The operator requirements define the calculation of operator‐hours per hour, set the minimum number of operators at an operation, and require that the work is completed. The scheduling requirements ensure that operators are either working or idle at all times, and that no operator is assigned to more than one operation at any time. The calculation of the output reflects the yield, station speed, and work assignments at the last operation on the line.
An application of the cost evaluation method is discussed in the final section of the article. Using a simple heuristic to assign operators, the conditions for yield, inputs, operators, scheduling, and output are satisfied. The costs are then calculated for operators and inventories.
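As a rough illustration of such a heuristic and its cost computation, the sketch below assigns the minimum whole number of operators each operation needs to meet a target flow rate, then totals weekly operator cost plus value-added WIP carrying cost. Everything here, including the data, the ceiling-based heuristic, and the inventory approximation, is a hypothetical stand-in for the article's more detailed procedure.

```python
import math

# Hypothetical line: each operation has a processing time (hours/unit)
# and a cumulative value added per unit, used for WIP carrying charges.
operations = [
    {"name": "cut",    "proc_time": 0.05, "value_added": 2.0},
    {"name": "weld",   "proc_time": 0.08, "value_added": 5.0},
    {"name": "finish", "proc_time": 0.04, "value_added": 9.0},
]
flow_rate = 50        # units per hour required from the line
week_hours = 40       # operators are paid for the full week (per the model)
wage = 18.0           # dollars per hour
carry_rate = 0.002    # dollars per dollar of WIP value per hour
avg_wip_units = 30    # assumed average buffer ahead of each operation

operator_cost = 0.0
inventory_cost = 0.0
for op in operations:
    # Heuristic: smallest whole number of operators meeting the flow rate.
    needed = math.ceil(flow_rate * op["proc_time"])
    operator_cost += needed * wage * week_hours
    # WIP charged on a value-added basis (no charge for raw material
    # before the line or finished goods after it).
    inventory_cost += avg_wip_units * op["value_added"] * carry_rate * week_hours

print(f"operator cost: ${operator_cost:.2f}, WIP cost: ${inventory_cost:.2f}")
```

Swapping in a different assignment heuristic while holding the cost calculation fixed is exactly the kind of comparison the method is meant to support.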
In conclusion, the cost evaluation method for dynamic balancing enables a manager to compare the costs of assigning operators to work stations. Using this method to calculate operator and inventory costs, different heuristics for assigning operators in dynamic balancing can be evaluated and compared across various configurations of the production line. The least-cost procedure can then be applied to a real manufacturing situation with similar characteristics.
Due to the heritage and history of operations management, its research methodologies have been confined mainly to quantitative modeling and, on occasion, statistical analysis. The field has been changing dramatically in recent years. Firms now face numerous worldwide competitive challenges, many of which require major improvements in the operations function. Yet the research methodologies in operations have largely remained stagnant. The paradigm on which these methodologies are based, while useful, limits the kinds of questions researchers can address.
This paper presents a review and critique of the research in operations, itemizing the shortcomings identified by researchers in the field. These researchers suggest a new research agenda with an integrative view of operations' role in organizations, a wider application of alternative research methodologies, greater emphasis on benefit to the operations manager, cross‐disciplinary research with other functional areas, a heavier emphasis on sociotechnical analysis over the entire production system, and empirical field studies. Some of the alternative research methodologies mentioned include longitudinal studies, field experiments, action research, and field studies.
Following a description of the nature of research, three stages in the research cycle are identified: description, explanation, and testing. Although research can deal with any stage in this cycle, the majority of attention currently seems to focus on the explanation stage. The paper then discusses historical trends in the philosophy of science, starting with positivism, expanding into empiricism, and then leading to post‐positivism. The impacts of each of these trends on research in operations (which remains largely in the positivist mode) are described. Discussion of the importance of a plurality of research methods concludes the section.
A framework for research paradigms is then developed based on two key dimensions of research methodologies: the rational versus existential structure of the research process and the natural versus artificial basis for the information used in the research. These dimensions are then further explored in terms of thirteen characteristic measures. Next, research methodologies commonly used in other fields as well as operations are described in reference to this framework. Methodologies include those traditional to operations such as normative and descriptive modeling, simulation, surveys, case and field studies as well as those more common to other fields such as action research, historical analysis, expert panels, scenarios, interviewing, introspection, and hermeneutics. Examples from operations or allied fields are given to illustrate the methodologies.
Past research publications in operations are plotted on the framework to see the limitations of our current paradigms relative to the richness of other fields. We find that operations methodologies tend to lie on the more rational end of the framework while spanning the natural/artificial dimension, though the majority of research is at the artificial pole.
Finally, recommendations are made for applying the framework and paradigms to research issues in operations management. The topics of quality management and technology implementation serve as examples of how a wide variety of methodologies might be employed to address a much broader range of issues than has been researched to date.
As in other fields, promotion and tenure decisions for academicians in POM are closely tied to their publication achievements. Such achievements are generally measured by where academicians publish rather than just what they publish. Therefore, the perceived quality or image of POM journals matters to faculty and researchers in this field. Not surprisingly, several previous studies have attempted to rank-order journals in related fields such as accounting, finance, economics, and management. Unfortunately, for POM journals, little published data of this kind exists that is accepted and shared by all.
The primary objectives of the study are to establish the perceived relevancy and quality ratings of 20 selected journals that are frequently used to disseminate POM‐related research work. The results are based on a questionnaire survey of those Decision Sciences Institute members who listed POM as their primary area of interest (DSI code N).
Regarding relevancy, the
The results provide some evidence of an apparent incongruity between the notions of journal relevancy and journal quality as perceived by the respondents. Some journals that received high quality ratings were found only moderately relevant to POM research; conversely, some journals received poor quality ratings but were rated highly for relevancy. The opinions of associate and full professors, as well as of those with stronger publication records in the included journals, were strikingly similar to those of the entire sample surveyed.
Some discrepancy was evident as to what the respondents and their administrative evaluators think are the top journals. The faculty evaluators tend to consider
Much of the current literature in the field of production and inventory control systems stresses the need to revise traditional forms of thinking regarding production processes, the role of inventories for work in process, and the need for reduced lead times or flow times. Group technology, manufacturing cells, and other means of incorporating repetitive manufacturing techniques into traditional job‐shop settings constitute the leading edge in system development.
Still, there is resistance to these dramatic changes, and traditional “business as usual” methods predominate. This study attempts to illustrate graphically the cost justification for the reductions in lead time that generally result from these new concepts. In most job shops today, lead times are much longer than they need to be because lead time estimates are inflated. Actual lead times for the manufacture of fabricated and assembled products have been shown to be a direct consequence of the planned lead times used in the MRP planning process, a form of self-fulfilling prophecy.
The research employs a simulation model of a factory using MRP as a planning tool in a multiproduct, multilevel production environment. Manufacturing costs constitute the dependent variable in the experiments, defined as the sum of material costs (including expedite premiums), direct labor costs (including overtime premiums), inventory carrying costs, and overhead costs. The independent variable being manipulated is the planned lead time offset used in the MRP planning process. Twenty values of planned lead time are evaluated ranging from a value that includes no slack time at all (pure assembly line) up to a value that allows 95% slack (queue) time which, unfortunately, is not uncommon in many job shops today. Stochastic variables in the model include customer demand and actual processing times—the sum of set‐up and run times.
The result of the study is a cost curve over the range of planned lead time values, constructed using nonlinear regression techniques. The resulting graph clearly indicates the cost consequences of long lead times, with exponential cost increases beyond the 80–90% queue time level. Total costs are 41% higher at the maximum lead time allowance than at the minimum. This study demonstrates the need for lead time reduction, whether through downward adjustment of MRP planned lead times or through the introduction of new manufacturing concepts.
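A curve of that shape can be recovered from simulation output with standard nonlinear regression. The sketch below fits an exponential-plus-baseline cost model with SciPy's `curve_fit`; the data points are synthetic stand-ins generated for illustration, not the study's actual simulation results.

```python
import numpy as np
from scipy.optimize import curve_fit

def cost_model(slack, a, b, c):
    """Baseline cost plus an exponential penalty in planned slack fraction."""
    return a + b * np.exp(c * slack)

# Synthetic stand-in data: slack (queue-time) fraction vs. total cost index,
# for 20 planned lead time settings as in the experiment's design.
slack = np.linspace(0.0, 0.95, 20)
rng = np.random.default_rng(7)
cost = 100 + 2.0 * np.exp(4.2 * slack) + rng.normal(0, 1.5, slack.size)

params, _ = curve_fit(cost_model, slack, cost, p0=(100, 1, 3))
a, b, c = params
print(f"fit: cost = {a:.1f} + {b:.2f} * exp({c:.2f} * slack)")
ratio = cost_model(0.95, *params) / cost_model(0.0, *params)
print(f"cost at 95% slack vs. 0% slack: {ratio:.2f}x")
```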
This study examines relationships among a firm's innovativeness, its unexpected product failure costs, and financial performance. When a firm chooses to develop more innovative products and processes, product reliability outcomes become more uncertain. These uncertainties in turn may lead to unexpected warranty claims costs, as well as other costs that can erode the advantages of an innovation leadership position. This study empirically tests these propositions using publicly reported warranty and financial data from 2003 to 2013, representing 482 unique firms. Consistent with prior studies, our estimation of the direct effects of firm innovativeness on financial performance shows an inverted U-shaped relationship. Importantly, we find that more innovative firms also experience more unexpected product failure costs, and, consistent with organizational information processing theory, the negative impacts of these costs on financial performance extend well beyond the direct costs of remediating warranty claims. Further, we find that this relationship is robust to differing levels of industry innovativeness. Hence, our study suggests that the product failure risks associated with firm innovativeness are significant and act to at least partially offset the financial benefits of innovation leadership. In addition, standard accounting for product warranty claims may substantially understate the true costs of product failures, which appear to generate significant SG&A, fixed asset, and inventory costs above and beyond direct warranty processing costs. Our study also demonstrates a novel use of warranty claims data. We discuss the implications of these findings for both managers and researchers.