Autodesk and ProModel have teamed up to offer the best in factory design, optimization, and layout, with robust real-life process and behavior simulation. About a year ago, Trevor English wrote a great post about how to optimize your factory using ProModel Corporation’s Process Simulator Autodesk Edition and the Autodesk Factory Design Utilities: “Leveraging predictive analytics, you can quickly simulate your factory model and get process improvement insights, all with the click of a button. This easy-to-use solution provides designers and engineers with little or no experience in process modeling the most affordable tools.” If you are more visual, Paul Munford also created a superb overview video that showcases how Process Simulator Autodesk Edition’s capabilities connect to the Factory Design Utilities (FDU). Process Simulator Autodesk Edition is great for building a process flow from scratch and then easily creating the starter AutoCAD layout. Our other integrated tool, ProModel Autodesk Edition, lets you start with an existing AutoCAD layout and very rapidly create a simulation model from that layout directly in AutoCAD. Check out the overview video of ProModel Autodesk Edition, and read the full Inventor blog post.
Latest Process Simulator and Process Simulator Autodesk Edition Updates
We are pleased to announce the latest updates to the Process Simulator and Process Simulator Autodesk Editions.
Enhancements and Improvements
- Improved window reload behavior (e.g., the Scenario Manager window)
- Improved add-in load behavior
- Improved array stability when using 64-bit Office
- Faster Output Viewer load times
- Licensing stability enhancements
- Improved model error reporting
- Updated localization
Material Handling and Autodesk Edition
- Path Network refinement
- Auto Orientation of Entities on Conveyors – users can now toggle auto orientation of entities on or off within Process Simulator Material Handling and Process Simulator Autodesk Edition
Download a 30-day evaluation of Process Simulator from the Process Simulator Eval Page
Download a 30-day evaluation of Process Simulator Autodesk Edition from the Autodesk App Store
Purchase a Process Simulator Autodesk Edition subscription from the new ProModel Online Store

Let us know what you think by leaving a reply below. Thanks!
Latest ProModel and MedModel Optimization Suite Updates
We are pleased to announce the latest updates to the ProModel and MedModel Optimization Suites and Autodesk Editions.
Auto orientation for entity graphics while on conveyors: entities now rotate by default based on the conveyor’s orientation setting, either widthwise (the right side of the entity graphic leads) or lengthwise (the top of the entity graphic leads).
- Improved functionality for background color changes while panning during simulation
- Runtime error improvements
Download a 30-day evaluation of ProModel Autodesk Edition from the Autodesk App Store.
Purchase a ProModel Autodesk Edition subscription from the new ProModel Online Store.
Process Simulator and ProModel Now Integrate with AutoCAD and Inventor by Autodesk
We are very excited to announce Process Simulator Autodesk® Edition and ProModel Autodesk® Edition. Each product integrates with Autodesk® AutoCAD® and Autodesk® Inventor® to provide you with more valuable manufacturing plant design and process improvement capabilities.
For more information about Process Simulator Autodesk Edition, including videos, a downloadable PDF, and a 30-day evaluation copy, go to the Process Simulator Autodesk Edition webpage.
For more information about ProModel Autodesk Edition, including videos, a downloadable PDF, and a 30-day evaluation copy, go to the ProModel Autodesk Edition webpage.
If you are at the Autodesk University event this week (11/19–11/21) in Las Vegas, stop by booth MFG210 for a live demo and to talk to our team.
Just-In-Time for the Holidays!
A few weeks back, prior to the Thanksgiving holiday (in the USA), we released Service Pack 1 (SP1) for the 2016 version of our products. For those of you who know the purpose and traditions of this holiday, you also know it is a time to show gratitude as well as a time to feast on turkey, mashed potatoes and gravy, and pumpkin pie. The timing of the release definitely made the holiday more enjoyable for us and enabled stress-free indulging.
However, the release really wasn’t for us but rather for you, our customers! So with more holidays to come this month, I hope you are able to feast on the new features and capabilities we have introduced in the 2016 SP1 release. It focuses on allowing you to capture, collect, and extract custom statistics for your process improvement and analysis. From seemingly subtle changes and new logic functions to the real “meat” of a new API for statistics extraction, there really is quite a bit to consume.
(To learn more about the SP1 release of ProModel and MedModel, please see the What’s New and Release Webinar sections below. If you are a Process Simulator user, you can go to the Process Simulator “What’s New?” page on our corporate website for a detailed description along with an accompanying webinar.)
What’s New in the ProModel / MedModel 2016 SP1 release?
Resource Distance Traveled Statistics
The distance your resources travel over the course of a simulation is now collected and reported in Output Viewer. It is displayed as Total Distance Traveled in Resource Summary reports and tracks the overall distance traveled for individual units.
Identify Captured Resource Units
In addition to determining which resource you’ve captured, you can now find out the specific unit of that resource as well. With the new OwnedResourceUnit() function, you can get the unit number of any resource owned by an entity. This is useful when collecting custom resource statistics at the individual unit level.
In-Process Resource Utilization Statistics
Access the utilization of your resources at any time during simulation. Using the newly modified PercentUtil() function, you can find out the utilization of individual units of a resource or a summary of all units of a given resource type. This allows you to make dynamic logical decisions or write out custom statistics to a CSV or Excel file.
- Utilization for a specific resource unit
- Utilization of all units for a resource type
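ProModel’s own logic syntax is not shown here, but the pattern is easy to illustrate in ordinary Python. The sketch below is a conceptual analogue only: the `Resource` class and `percent_util` method are invented stand-ins for the built-in function, meant to show the idea of sampling utilization mid-run and appending it to a CSV.

```python
import csv
import random

# Conceptual stand-in for a multi-unit resource; NOT ProModel syntax.
class Resource:
    def __init__(self, name, units):
        self.name = name
        self.busy_time = [0.0] * units  # accumulated busy hours per unit

    def percent_util(self, clock, unit=None):
        """Utilization of one unit, or the average across all units."""
        if clock <= 0:
            return 0.0
        if unit is not None:
            return 100.0 * self.busy_time[unit] / clock
        return 100.0 * sum(self.busy_time) / (len(self.busy_time) * clock)

random.seed(1)
operator = Resource("Operator", units=3)
with open("utilization.csv", "w", newline="") as f:
    out = csv.writer(f)
    out.writerow(["hour", "unit", "percent_util", "all_units"])
    for hour in range(1, 11):            # pretend 10-hour simulation clock
        for u in range(3):
            operator.busy_time[u] += random.uniform(0.2, 1.0)  # fake work
        for u in range(3):               # in-process statistics, mid-run
            out.writerow([hour, u,
                          round(operator.percent_util(hour, u), 1),
                          round(operator.percent_util(hour), 1)])
```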
Quickly Access Element Definitions
Have you ever been writing or viewing logic, and wanted to quickly see the definition (or details) of a specific subroutine, array, location, etc.? Well, now you can! Simply highlight that model element in logic, press the F12 key, and you will be taken to that element’s specific record in its edit table. For example, if you highlight a subroutine name in logic and press F12, you will be taken to its exact record in the Subroutine table.
Programmatic Export of Statistics
The statistical results of your simulation runs can be programmatically accessed through a new API to Output Viewer. You can get the raw data, down to the individual replication, or have it summarized or grouped (just like in Output Viewer) prior to accessing it. Either way, you can access your results, for example to load into Excel or a database, for analysis outside of Output Viewer.
As an example, this is a Time Plot in Output Viewer where the time series data is averaged over custom 15-minute intervals.
Using the new API, you can have Output Viewer summarize the data into 15-minute intervals prior to exporting it to Excel.
The exported format makes it easy to create a Pivot table and Pivot chart in Excel.
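The Output Viewer API calls themselves are not reproduced here. As a rough illustration of the workflow, assuming the exported data arrives as a simple time series, here is a short pandas sketch that bins values into 15-minute intervals and lays them out in the pivot-friendly shape described above; all column names are invented.

```python
import numpy as np
import pandas as pd

# Fake minute-by-minute output for two scenarios; in practice this would
# come from the programmatic export, not be generated like this.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "time_min": np.tile(np.arange(120), 2),
    "scenario": np.repeat(["Baseline", "Improved"], 120),
    "value": rng.uniform(0, 10, 240),
})

# Summarize into 15-minute intervals prior to export.
df["interval"] = (df["time_min"] // 15) * 15
summary = df.groupby(["scenario", "interval"], as_index=False)["value"].mean()

# One row per interval, one column per scenario: easy to chart, or to
# feed into an Excel pivot table / pivot chart.
pivot = summary.pivot(index="interval", columns="scenario", values="value")
pivot.to_csv("summary_15min.csv")
print(pivot.round(2))
```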
Enhancements
- When exporting Array data at the end of simulation, the replication number is no longer written in the Excel sheet name if “Export after final replication only” is checked.
- Minitab version 17.3 is now supported
Release Webinar
We even followed up the release with a live webinar showcasing the new capabilities of SP1. So, if after devouring the descriptions above you still find yourself hungry for additional details, head over to our YouTube channel or view the webinar directly below. You can also log in to our Solutions Café for a second, more complete helping of SP1.
And Happy Holidays!
Remembering Rob Bateman

Charles Harrell, Founder, ProModel Corporation
Last month we lost a long-time member of the ProModel family and, for many of us, a beloved friend. Following a sudden incident of heart failure while working out at the gym, Rob Bateman passed away on October 11, 2015. We at ProModel will remember him as a warm, energetic, impassioned leader and friend whose life was devoted to the pursuit of excellence and selfless service. His absence will continue to be profoundly felt in the months and years to come.
I first met Rob just over 25 years ago when he was doing graduate studies at BYU. He took a simulation class from me, and I could tell he was excited about the potential benefits of simulation. After completing a stint with the US State Department as a Foreign Service officer in 1990, Rob began working as a ProModel distributor. With his international background and grasp of simulation, Rob eventually became Vice President of International Operations and later established an independent company (Dynamisis A.G.) to direct all international operations for ProModel.
Robert Bateman
(April 4, 1958 – October 11, 2015)
Rob was an extraordinary individual with remarkable talents. He was one of those individuals who was always on the go and seemed to cram more into one day than most of us manage to accomplish in several. At the same time, he maintained a zest for life and could frequently be seen buzzing around in his sports car with his signature driving cap or biking into work in his cycling shorts and helmet.
Here are a few of the many talents Rob displayed:
- He was very knowledgeable…about everything. No matter what subject was being discussed, he always had something intelligent to contribute. On top of his formal education, which culminated in a Ph.D. in Public Administration/Political Science, Rob filled his spare moments reading books or one of the 14 magazines he subscribed to.
- As a consummate teacher, he was passionate about getting people exposed to simulation. He wrote the first textbook on ProModel for use in college courses. For the past decade, when he wasn’t working with distributors to promote ProModel, he was teaching at the local university.
- He was an effective mentor and gave many individuals their first start in their careers. When several international distributors were asked what they remember about Rob, they invariably said he treated them as valued partners and became someone they could always turn to for advice.
- He was resourceful and knew how to get by on very little sleep, food and comforts. When there wasn’t sufficient budget or resources to support an initiative he believed in, he somehow always managed to find the means needed to get the job done.
- He was a true cosmopolitan and world traveler. If you called Rob, you were just as likely to reach him at an airport as in his office. And there didn’t seem to be any country where he felt uncomfortable or couldn’t speak the language.
- He was a friend to all and he never let business stand in the way of personal relationships. He took time to express an interest in others and always sensed if one was having a bad day or dealing with problems at home. He would do whatever he could to lift them up and help them keep things in perspective.
- Finally, Rob was funny and had an infectious sense of humor. He could tell endless stories of his travel exploits where he encountered bizarre situations like returning to his car only to find all of his tires stolen. Though Rob took his commitments seriously, he never took himself too seriously.
Here are a few memories related by some of the distributors who worked with him.
A Distributor in Germany and Austria relates, “On my first trip to Utah to visit with Rob as a new ProModel rep, I had the feeling I was meeting with an old friend. I was impressed by his hospitality and the time that he gave me.”
A Brazilian distributor recalled meeting Rob for the first time 21 years ago and thinking to himself, “Who is this guy, who can conduct a meeting with high-level business leaders, comfortably use legal and business terms in both German and English, and then turn around the following day and teach a simulation course in Spanish to a group of engineers? How can one person have so many skills?”
As another of his co-workers commented, “I’ve been in rooms with him teaching and negotiating with Nigerians, Germans, Japanese, Brazilians, Mexicans, and more. No matter the nationality, Rob could relate and connect. He was confident, knowledgeable, and personable.”
On a personal note, one co-worker related: “This past year Rob joined the cycling team that I belong to called Team C4C (“Cycle for Cure”). The team was formed to raise money for health-related charities such as the Huntsman Cancer Center and the National MS Society. Rob immediately identified with the purpose of the group and quickly became one of the strongest riders on the team.”
This same co-worker related how Rob was instrumental in helping him complete a grueling ‘Ultimate Challenge’ cycling event saying, “I will always cherish a picture I have of Rob and me crossing the finish line together at Snowbird after riding 100 miles and climbing 10,000 feet in one day. I could not have made it without his encouragement along the way.”
For all those who have been influenced by his exemplary life, Rob will always be remembered as a leader, mentor, and friend. Perhaps it is fitting that he pursued a career in simulation modeling, since he seems to have understood the impact that models can have, not only on organizations, but on the people around him. Those who knew Rob know that he was a model of the best that a human being can be, and for that he will always be remembered.
Demystifying Big Data
We live in a data-rich world. It’s been that way for a while now. “Big Data” is now the moniker that permeates every industry. For the sake of eliciting a point from the ensuing paragraphs, consider the following:
FA-18 / Extension / Expenditure / Life / Depot / Operations / Hours / Fatigue
Taken independently, the words above mean very little. However, if placed in context, and with the proper connections applied, we can adequately frame one of the most significant challenges confronting Naval Aviation:
A higher than anticipated demand for flight operations of the FA-18 aircraft has resulted in an increased number of flight hours being flown per aircraft. This has necessitated additional depot maintenance events to remedy fatigue life expenditure issues in order to achieve an extension of life cycles for legacy FA-18 aircraft.
The point here is that it is simply not enough to aggregate data for the sake of aggregation. The true value in harnessing data lies in knowing which data are important, which are not, and how to tie the data together. Oftentimes, subscribing to the “big data” school of thought has the potential to distract and misdirect. I would argue that any exercise in “data” must first begin with a methodical approach to answering the following questions:
“What challenge are we trying to overcome?”
“What are the top 3 causes of the challenge?”
“Which factors are in my control and which ones are not?”
“Do I have access to the data that affect the questions above?”
“How can I use the data to address the challenge?”
While simply a starting point, the above questions will typically allow us to frame the issue, understand its causal effects, and, most importantly, facilitate the process of homing in on the data that are important while systematically ignoring the data that are not.
To apply a real-world example of the methodology outlined above, consider the software application ProModel has provided to the U.S. Navy – the Naval Synchronization Toolset (NST).
“What challenge are we trying to overcome?”
Since 2001, the U.S. Navy has participated in overseas contingency operations (Operation Enduring Freedom and Operation Iraqi Freedom), and the legacy FA-18 aircraft (A-D) has consumed its life expectancy at a higher rate than planned. Coupled with the delay in Initial Operating Capability (IOC) of the F-35C aircraft, the U.S. Navy has been required to develop and sustain a Service Life Extension Program (SLEP) to extend the life of legacy FA-18 aircraft well beyond their six-thousand-hour life expectancy and to schedule and perform high flight hour inspections and major airframe rework maintenance events. The challenge is: “How does the Navy effectively manage the strike fighter inventory (FA-18) via planned and unplanned maintenance, to ensure strike fighter squadrons are adequately sourced with the right number of FA-18s at the right time?”
“What are the top 3 causes of the challenge?”
- Delay in IOC of the F-35C
- Higher flight hour (utilization) and fatigue life expenditure
- Fixed number of legacy FA-18 in the inventory
“Which factors are in my control and which ones are not?”
In:
- High flight hour inspection maintenance events
- Airframe rework (depot events)
Out:
- Delay in IOC of the F-35C
- Fixed number of legacy FA-18 in the inventory
“Do I have access to the data that affect the questions above?”
Yes. The planned IOC of the F-35C, the flight hour utilization of FA-18 aircraft, and the projected depot capacity and requirements are all data that are available and injected into the NST application.
“How can I use the data to address the challenge?”
Using the forecasted operational schedules of units, users can proactively source FA-18 aircraft to the right squadron at the right time, balanced against maintenance events, depot rework requirements, and the overall service life of each aircraft.
—
Now that the challenge has been framed, the constraints identified, and the data identified, the real work can begin. This is not to say that there is one answer to a tough question, or even that there is a big red “Easy” button available. Rather, this approach ensures that we do not fall victim to fretting over an issue that is beyond our control or spend countless hours wading through data that may not be germane.
NST was designed and developed with the points made above in mind. The FA-18 is a data-rich aircraft. However, for the sake of its users, NST was architecturally designed to focus only on the key fatigue life expenditure issues that ultimately determine whether an aircraft continues its service life or becomes a museum piece. In the end, NST’s users are charged with providing strike fighter aircraft to the units that carry out our national security strategy. By leveraging the right data, applying rigor to the identification of issues in and out of their control, and harnessing the technology of computational engines, they do precisely that.
Simulating Impatient Customers
ProModel Guest Blogger: Dr. Farhad Moeeni, Professor of Computer & Information Technology, Arkansas State University
Simulation is one of the required courses for the MBA degree with an MIS concentration at Arkansas State University. The course was developed a few years ago with the help of a colleague (Dr. John Seydel). We use Simulation Using ProModel, 3rd ed. (Harrell, Ghosh, and Bowden, McGraw-Hill) for the course. In addition, students have access to the full version of the ProModel software in our Data Automation Laboratory. The course has attracted graduate students from other areas, including arts and sciences, social sciences, and engineering technology, who took the course as an elective or to enhance their research capability. Students experience the entire cycle of simulation modeling and analysis through comprehensive group projects with a focus on business decision making.
Most elements of waiting lines are shared by all queuing systems, regardless of entity type (human, inanimate, or intangible). However, a few features are unique to human entities and service systems, two of which are balking and reneging. One fairly recent class project involved modeling the university’s main cafeteria, with its various food islands. Teams were directed to also model balking and reneging, which was challenging. The project led to studying various balking and reneging scenarios and their modeling implications, which was very informative.
Setting aside the simple case of balking caused by queue capacity, balking and reneging happen because of impatience. Balking means customers evaluate the waiting line, anticipate the required waiting time upon arrival (most likely by observing the queue length), and decide whether to join the queue or leave. In short, balking happens when a person’s tolerance for waiting is less than the anticipated waiting time at arrival. Reneging happens after a person joins the queue but later leaves because he or she feels waiting is no longer tolerable or no longer has utility. The literature indicates that both decisions can be the result of complex behavioral traits, the criticality of the service, and the service environment (servicescape). Therefore, acquiring information about and modeling balking or reneging can be hard. However, it offers additional insight into service effectiveness that is hard to derive from analyzing waiting times and queue lengths.
For modeling purposes, balking and reneging behavior is usually converted into probability distributions or rules that trigger them. To alleviate complexity, simplified approaches have been suggested in the literature. Each treatment is based on simplifying assumptions and only approximates the behavior of customers. This article addresses some approaches to simulating balking; reneging will be covered in future articles. Scenarios to model balking behavior include:
- Model 1: On arrival, the entity joins the queue only if the queue length is less than a specified number and balks otherwise.
- Model 2: On arrival, the entity joins the queue if the queue length is less than or equal to a specified number. If the queue length exceeds that number, the entity joins the queue with probability p and balks with probability 1−p (a Bernoulli distribution).
- Model 3: The same as Model 2, but several conditional (Bernoulli) probability distributions are constructed for various queue lengths (see the example below).
- Model 4: On arrival, a maximum tolerable queue length is drawn from a discrete probability distribution for each entity. That maximum is then compared with the queue length at the moment of arrival to determine whether the entity balks.
The first three approaches model the underlying tolerance for waiting implicitly. Model 4 allows tolerance variation among customers to be modeled explicitly.
A simulation example of Model 3 is presented below. The purpose is to demonstrate the structure of the model, not model efficiency or compactness. The model includes a single server, an FCFS discipline, and random arrivals and service times. The conditional probability distributions for the balking behavior are presented in the table below; this data must be extracted from the field. After running the model for 10 hours, 55 customers (10% of arrivals) balked. Balking information can be very useful in designing or fine-tuning queuing systems, in addition to other statistics such as average/maximum waiting time, queue length, etc.
| Condition | Probability of Joining the Queue (p) | Probability of Balking (1−p) |
| --- | --- | --- |
| Queue length ≤ 4 | 1.00 | 0.00 |
| 5 ≤ Queue length ≤ 10 | 0.70 | 0.30 |
| Queue length > 10 | 0.20 | 0.80 |
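The original ProModel model is not reproduced here, but the logic of Model 3 is compact enough to sketch in plain Python. In the sketch below, the joining probabilities come from the table above, while the arrival and service rates are invented for illustration, so the balking percentage will not match the 10% reported for the classroom model.

```python
import random

def join_probability(queue_length):
    """Conditional joining probabilities from the table above."""
    if queue_length <= 4:
        return 1.00
    elif queue_length <= 10:
        return 0.70
    return 0.20

def simulate(hours=10.0, arrival_rate=50.0, service_rate=60.0, seed=42):
    """Single server, FCFS, exponential interarrival and service times.
    Rates are per hour and are assumptions for illustration only."""
    rng = random.Random(seed)
    t = 0.0
    in_system = 0                      # waiting line plus the one in service
    next_arrival = rng.expovariate(arrival_rate)
    next_departure = float("inf")
    arrivals = balked = 0
    while t < hours:
        if next_arrival <= next_departure:          # next event: arrival
            t = next_arrival
            next_arrival = t + rng.expovariate(arrival_rate)
            arrivals += 1
            if rng.random() <= join_probability(in_system):
                in_system += 1
                if in_system == 1:                  # server was idle
                    next_departure = t + rng.expovariate(service_rate)
            else:
                balked += 1                         # Model 3 balk
        else:                                       # next event: departure
            t = next_departure
            in_system -= 1
            next_departure = (t + rng.expovariate(service_rate)
                              if in_system > 0 else float("inf"))
    return arrivals, balked

arrivals, balked = simulate()
print(f"{balked} of {arrivals} arrivals balked ({100 * balked / arrivals:.1f}%)")
```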
About Dr. Moeeni:
Dr. Farhad Moeeni is professor of Computer and Information Technology and the founder of the Laboratory for the Study of Automatic Identification at Arkansas State University. He holds an M.S. in industrial engineering and a Ph.D. in operations management and information systems, both from the University of Arizona.
His articles have been published in various scholarly outlets, including Decision Sciences Journal, International Journal of Production Economics, International Journal of Production Research, International Transactions in Operational Research, Decision Line, and several others. He has also co-authored two book chapters on automatic identification with applications in cyber logistics and e-supply chain management, along with several study books supporting various textbooks.
He is a frequent guest lecturer on the subject of information systems at the Centre Franco Americain, University of Caen, France.
His current research interests are primarily in the design, analysis, and implementation of automatic identification for data quality and efficiency; RFID-based real-time location sensing with warehousing applications; and supply chain management. Methodological interests include design of experiments, simulation modeling and analysis, and other operations research techniques. He is one of the pioneers in instructional design and the teaching of automatic identification concepts within MIS programs and is also RFID+ certified.
Dr. Moeeni is currently the principal investigator of a multi-university research project funded by the Arkansas Science and Technology Authority, co-founder of the Consortium for Identity Systems Research and Education (CISRE), and a member of the Editorial Board of the International Journal of RF Technologies: Research and Applications.
Contact Information
moeeni@astate.edu
In the OR with Dale Schroyer
I generally find that in healthcare, WHEN something needs to happen is more important than WHAT needs to happen. The field is rife with variation, but I firmly believe that with simulation it can be properly managed. Patient flow and staffing are always top concerns for hospitals, but it’s important to remember that utilization levels that are too high are just as bad as levels that are too low, and one of the benefits of simulation in healthcare is the ability to staff to demand.
Check out Dale’s work with Robert Wood Johnson University Hospital where they successfully used simulation to manage increased OR patient volume:
About Dale
Since joining ProModel in 2000, Dale has been developing simulation models that businesses use for operational improvement and strategic planning. Prior to joining ProModel, Dale spent seven years as a Sr. Corporate Management Engineering Consultant for Baystate Health System in Springfield, MA, where he facilitated quality improvement efforts system-wide, including setting standards and facilitating business re-engineering teams. Earlier, he worked as a Project Engineer at the Hamilton Standard Division of United Technologies.
Dale has a BS in Mechanical Engineering from the University of Michigan and a Master of Management Science from Lesley University. He is a certified Six Sigma Green Belt and is Lean Bronze certified.
NEW! ProModel’s Patient Flow Solution:
ProModel Healthcare Solutions:
Teaching Process Management Using ProModel
ProModel Guest Blogger: Scott Metlen, Ph.D. – Business Department Head and Associate Professor at University of Idaho
Understanding process management, that is, the design, implementation, management and control, and continuous improvement of an organization’s enterprise-wide set of processes, is the key to well-deployed strategies. It was not until Tim Cook made Apple’s total set of processes world class, including all supply-chain-linked processes (Brownlee, 2012), that Apple began its amazing climb to become the world’s most highly valued company, even though the company had cutting-edge products before his arrival. Gaining an effective understanding of process management is not easy, due to the strategic variability inherent in the portfolio of products that companies sell and in the markets they serve. This strategic variability (Suri, 2011) in turn drives variability in many of the processes an organization uses to operate. For instance, different markets require different marketing plans supported by different processes. Order processes often vary by product and target market. Employee skill sets differ by product, requiring different hiring and training processes. Different products, whether services or goods, require at the very least an adjustment to the production process even when they vary only slightly. Adding to, and often caused by, the variability just mentioned are multiple process steps, each with different duration times and human resource skills. Depending on what product is currently being produced, the process steps, their order and duration, the interdependencies between them, and the governing business rules all vary. Where a product is in its life cycle drives the experience curve, again creating variation across products. In addition, the numerous interfaces with other processes all vary depending on the product being produced.

All of these sources of variability can make process management hard to do, teach, and learn. One tool that helps with process management in the face of variance is discrete event simulation, and one of the best software suites to use is ProModel, a flexible program with excellent product support from the company.
Effective process management is a multi-step process. The first step is to determine the process flow while at the same time identifying the value-added and non-value-added process steps. The process flow diagram includes, for each step, the duration times by product, the resources needed, and the product routes. Also needed at this stage are the business rules governing the process, such as working hours, safety envelopes, quality control, queueing rules, and many others. Capturing this complex, interrelated system begins by visiting the process and talking with the process owner and operators. Drawing the diagram and listing the other information is a good second step, but actually building and operating the process is when a person truly understands it and its complexities. Of course, many of the processes we want to improve are already built and in use, and in most cases students will not be able to do either of these. However, building a verified and validated simulation model is a good proxy for doing the real thing, as the model will never validate against the actual process output unless all of the complexity is included or represented in the model.

In the ‘Systems and Simulation’ course at the University of Idaho, students first learn the fundamentals of process management, including lean terms and tools. In the third week of class they visit a company as members of a team conducting a process improvement project. In this visit students meet the process owner and operators. If the process is a production process, they walk the floor and discuss the process and the delta between expected and actual output. If the process is an information flow process, such as much of an order process, the students likewise discuss the process and the delta between expected and realized output. Over the next six weeks, students take the preliminary data and build a simulation model of the current state of the process. During this period students discover that they do not have all the data and information they need to replicate the actual process, in many cases because the company does not have that information or because the process is not operated the way it was designed. Students then have to contact the process owner and operators throughout the six weeks to determine the actual business rules used and/or make informed assumptions to complete their model.
Once the model has been validated and the students have a deep understanding of the process, they begin modeling process changes that will eliminate waste in the system, increase output, and decrease cost. Methods used to improve the process include changing business rules, adding strategically placed buffers and resources, and reallocating resources. To determine the most effective way to improve the process, a cost-benefit analysis in the form of a net present value (NPV) analysis is completed. The students use the distribution of outputs from the original model to generate baseline output and then compare it against output drawn from the distributions of each improvement scenario. This comparison is used to determine a 95% confidence interval for the NPV and the probability of the NPV being zero or less (see the sketch below). Finally, several weeks before the semester ends, students travel to the company to present their findings and recommendations.
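As a minimal sketch of that comparison step, assuming NPV samples can be drawn from each scenario’s output distribution, the Python below computes a 95% confidence interval for the NPV difference and the probability that it is zero or less. All cash-flow figures are invented; they are not from the course projects.

```python
import random
import statistics

def npv(cash_flows, rate=0.10):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** yr for yr, cf in enumerate(cash_flows))

def scenario_npvs(n, annual_mean, annual_sd, upfront_cost, rng, years=5):
    """Sample NPVs: fixed up-front cost plus noisy annual benefits."""
    return [
        npv([-upfront_cost] + [rng.gauss(annual_mean, annual_sd)
                               for _ in range(years)])
        for _ in range(n)
    ]

rng = random.Random(7)
# Invented example: the improvement costs 50k up front, ~20k/yr extra benefit.
baseline = scenario_npvs(1000, 100_000, 15_000, 0, rng)
improved = scenario_npvs(1000, 120_000, 15_000, 50_000, rng)

diffs = sorted(i - b for i, b in zip(improved, baseline))
mean = statistics.mean(diffs)
lo, hi = diffs[int(0.025 * len(diffs))], diffs[int(0.975 * len(diffs))]
p_loss = sum(d <= 0 for d in diffs) / len(diffs)
print(f"NPV difference: mean {mean:,.0f}, 95% CI [{lo:,.0f}, {hi:,.0f}]")
print(f"P(NPV difference <= 0) = {p_loss:.3f}")
```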
Student learning on these projects is multifaceted. Learning how to use ProModel is the level students are most aware of during the semester, as it takes much of their time. By the end of the semester, however, they talk about improving their ability to manage processes, work in teams, deal with ambiguity, manage multiple projects, present to high-level managers, and maintain steady communication with project owners.
Utilizing external projects and discrete event simulation to teach process management has been the practice in the College of Business and Economics at the University of Idaho for the past six years. As a result, the Production and Operations area has grown from 40 to 150 students and from five to 20 projects per semester. More importantly, students who complete this course are being sought out and hired by firms based on the transformational learning and skill sets they acquired through the program.
References:
Suri, Rajan. Beyond Lean: It’s About Time. Technical Report, Center for Quick Response Manufacturing, University of Wisconsin-Madison, 2011.
Brownlee, John. “Apple’s Secret Weapon.” CNN, June 13, 2012. http://www.cnn.com/2012/06/12/opinion/brownlee-apple-secret/index.html?hpt=hp_t2. Accessed December 2014.
Scott Metlen Bio:
http://www.uidaho.edu/cbe/business/scottmetlen