One of ProModel’s Biggest Supporters Reflects on a Career in Academic Optimization!

ProModel Guest Blogger:  Linda Ann Riley, Ph.D., Adjunct Professor of Engineering, University of New Haven; former Associate Dean and recently retired professor from the School of Engineering, Computing and Construction Management at Roger Williams University.

Linda Ann Riley, Ph.D.

When Sandra Petty, Academic Coordinator at ProModel Corporation, invited me to contribute to ProModel's guest blog, it gave me an opportunity to reflect on an academic career with one ever-present constant: the ProModel suite of simulation software products.  My universities may have changed, yet each year for the past twenty or so I have taught at least one (and often several) discrete-event simulation courses to an undergraduate, graduate, corporate or government audience.  Regardless of the class, whether Ph.D. students or freshman undergraduates, I have continued to use ProModel since its early days as one of the first Windows-based simulation products.  As ProModel Corporation has introduced new products (MedModel, ServiceModel, Process Simulator and Portfolio Simulator), my students have had an invaluable opportunity to be exposed to some of the best simulation products in the industry.

Each simulation class that I teach involves an external project where students work with non-proprietary data from industry, government or non-profit entities. Working only with the ProModel Student Package, I have seen some of the most impactful and innovative uses of ProModel simulation software. From modeling casino floor slot machine layouts to nuclear reactor evacuation scenarios, the variety of applications for the software has been virtually limitless.  The simulation skill set students acquire is one of the primary factors companies have cited when hiring graduates with ProModel experience.  Through the years, the aerospace, health care, automotive, logistics and defense industries have identified significant value in students graduating with exposure to ProModel's suite of products.

I, too, have benefited from using ProModel software.  For my entire career, my research has focused on productivity/process analysis and optimization.  For the past twenty years, ProModel software has played a central role as an application tool for this research.  As ProModel Corporation has evolved with additional products and capabilities, so too has my research.  In the early years, I focused on health care process and facility layout improvement, using MedModel to simulate patient queuing alternatives, throughput strategies and identification of system waste. From there, my research moved to rare event simulation, such as security breaches and hazardous materials transportation incidents, and to hybrid simulation that incorporated both discrete-event and continuous elements.  At that time, I used external code and output from other programs as inputs to ProModel. During this period of research, I also worked with Ph.D. students on new approaches to multi-objective evolutionary algorithms as well as meta-heuristics for optimizing large-scale discrete-event simulations, using SimRunner as a starting point.

More recently, my research has concentrated on managing and controlling risk in complex infrastructure projects using discrete-event simulation for stochastic scheduling.  In the construction industry, traditional project management and scheduling approaches for highly complex construction projects typically use methods such as CPM (critical path method), PERT (program evaluation and review technique) or Monte Carlo simulation. For the most part, these methods rely on deterministic, tractable mathematical models underlying the schedule. The ability to accurately predict project schedule outcomes and manage performance volatility is essential to controlling risk.  Prior to ProModel Corporation introducing Project and Portfolio Simulator, I would simulate the stochastic nature of these schedules in ProModel.
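To make that distinction concrete, here is a minimal Python sketch (not a ProModel or Project/Portfolio Simulator model) contrasting a deterministic, CPM-style pass with a Monte Carlo pass that samples stochastic activity durations. The four-activity network and the triangular duration parameters are invented purely for illustration.

```python
# Minimal sketch: deterministic CPM-style pass vs. sampled (stochastic) durations.
# The activity network and triangular parameters below are invented for illustration.
import random

# activity: (predecessors, (optimistic, most_likely, pessimistic) duration in days)
activities = {
    "A": ([],         (4, 5, 9)),
    "B": (["A"],      (3, 6, 14)),
    "C": (["A"],      (5, 7, 10)),
    "D": (["B", "C"], (2, 4, 8)),
}

def project_duration(sample):
    """Forward pass: finish time of each activity given a duration sampler."""
    finish = {}
    for name, (preds, params) in activities.items():
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + sample(params)
    return max(finish.values())

# Deterministic pass: use the most-likely duration for every activity.
deterministic = project_duration(lambda p: p[1])

# Stochastic pass: sample each duration from a triangular distribution.
random.seed(42)
runs = sorted(project_duration(lambda p: random.triangular(p[0], p[2], p[1]))
              for _ in range(10_000))

print(f"Deterministic (CPM) duration: {deterministic:.1f} days")
print(f"Mean simulated duration:      {sum(runs)/len(runs):.1f} days")
print(f"80th-percentile duration:     {runs[int(0.8*len(runs))]:.1f} days")
print(f"P(exceed CPM estimate):       {sum(r > deterministic for r in runs)/len(runs):.0%}")
```

Even in a toy network like this, the sampled runs typically show a meaningful probability of exceeding the single deterministic estimate, which is exactly the performance volatility a stochastic schedule model is meant to expose.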

Even though I have recently retired from a full-time academic career, I will continue to teach discrete-event simulation using ProModel in an adjunct faculty capacity.  Looking to the future, my research will focus primarily on the incorporation and design of intelligent interfaces that identify and apply appropriate algorithms for the optimization problem and constraints under study. This would likely require an additional layer of code incorporated into the optimization process. Ultimately, this intelligent interface could “learn” to recognize common optimization scenarios, select starting and stopping rules, and potentially also interface with the system improvement framework.
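As a purely hypothetical illustration of what such an intelligent interface layer might do, the sketch below maps a few problem characteristics to an algorithm choice and a stopping rule before handing off to a simulation-optimization engine. The class name, thresholds, and algorithm labels are my own assumptions, not features of SimRunner or any ProModel product.

```python
# Hypothetical sketch of a rule-based "intelligent interface" layer that inspects
# an optimization problem's characteristics and chooses an algorithm and stopping
# rule. All names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProblemProfile:
    n_variables: int          # number of decision variables in the scenario
    n_objectives: int         # single- vs. multi-objective
    evaluations_budget: int   # how many simulation replications we can afford

def select_strategy(profile: ProblemProfile) -> dict:
    """Map problem characteristics to an algorithm and a stopping rule."""
    if profile.n_objectives > 1:
        algorithm = "multi-objective evolutionary algorithm"
    elif profile.n_variables <= 3:
        algorithm = "exhaustive / factorial scenario sweep"
    else:
        algorithm = "single-objective metaheuristic (e.g., genetic algorithm)"

    # Simple stopping rule keyed to the evaluation budget.
    stop_after = min(profile.evaluations_budget,
                     200 if profile.n_variables <= 3 else 2000)
    return {"algorithm": algorithm, "stop_after_evaluations": stop_after}

print(select_strategy(ProblemProfile(n_variables=8, n_objectives=2,
                                     evaluations_budget=5000)))
```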

As a further extension to the intelligent interface, dynamic algorithmic visualization capabilities might be incorporated into the optimization procedures.  Immersive technologies are already used in many simulation arenas.  Incorporating immersive visualization into optimization would bring transparency to the relationship between the modeling and optimization processes. This would allow users and decision makers to interactively view, and potentially redirect, the optimization process. In essence, this feature would give the decision maker the ability to immerse him or herself in the model, thus “directing” both the simulation and optimization processes.

In retrospect, discrete-event simulation and the ProModel Corporation have played a central role in my development as both a teacher and researcher.  I look forward to what the future holds for both the company and the field of discrete-event simulation.

About Dr. Linda Ann Riley

Contact Information: linda.ann.riley@gmail.com

Linda Ann Riley, Ph.D. presently serves as an Adjunct Professor of Engineering for the University of New Haven’s graduate program in Engineering and Operations Management. She recently retired as full professor from the School of Engineering, Computing and Construction Management at Roger Williams University (RWU) where she worked for twelve years. At RWU, she held the positions of Associate Dean, Engineering Program Coordinator and Professor of Engineering. She has over thirty years of teaching experience in both engineering and business and is the recipient of a number of corporate, university and national excellence in teaching awards. Dr. Riley is the author/co-author of over 100 articles, technical and research reports, and book contributions. Her area of scholarly interest involves the optimization of stochastic systems using simulation and evolutionary algorithms.

In addition, Dr. Riley is an active researcher with notable success in grant writing, grant and contract management, creating collaborative research partnerships and research administration. She is responsible for developing and writing over 150 competitive research/consulting proposals and has been awarded, or has procured for clients, contracts in excess of twenty-five million dollars. Prior to her position at Roger Williams University, Dr. Riley spent 17 years at New Mexico State University (NMSU), holding positions as Director of the University Center for Economic Development Research and Assistance, Assistant Director of the Center for Business Research and Services, and Director of the Advanced Modeling and Simulation Laboratory. She also held faculty positions in both the Colleges of Business and Engineering at NMSU.

In addition to teaching and research, Dr. Riley is active in consulting. She has extensive consulting experience in organizational productivity/process improvement implementing six sigma, lean, system dynamics, simulation and optimization approaches. She has extensive experience in the design, communication and implementation of strategic and economic development plans. Also, she worked for a number of years with the National Laboratories on technology commercialization strategies.

Dr. Riley is actively involved in attracting women and under-represented groups into science, engineering, mathematics and technology fields. She is a national speaker on the challenges of attracting women and under-represented groups into these fields and served as Chair of the American Society for Engineering Education Northeastern Section and National Chairperson of American Society of Mechanical Engineers Diversity Metrics Committee. Dr. Riley is a member of several professional business and engineering societies and has served as reviewer and/or editorial board member for business, healthcare and engineering journals.

Dr. Riley received her undergraduate degree from Boston University, earned an M.B.A. from Suffolk University, completed a post-graduate fellowship at Brown University and earned her M.S. in Industrial Engineering and a Ph.D. in Business with a major field in Logistics from New Mexico State University. Also, for eleven years, Dr. Riley held the position of Vice-Chair of the Board for a large financial institution. In conjunction with this position, she completed 56 credits of Board of Directors courses and was awarded the Friedrich W. Raiffeisen and Edward W. Filene Awards.

Demystifying Big Data

Rob Wedertz – Director, Navy Programs

We live in a data-rich world.  It’s been that way for a while now.  “Big Data” is now the moniker that permeates every industry.  For the sake of eliciting a point from the ensuing paragraphs, consider the following:

FA-18 / Extension / Expenditure / Life / Depot / Operations / Hours / Fatigue

Taken independently, the words above mean very little.  However, if placed in context, and with the proper connections applied, we can adequately frame one of the most significant challenges confronting Naval Aviation:

A higher than anticipated demand for flight operations of the FA-18 aircraft has resulted in an increased number of flight hours being flown per aircraft.  This has necessitated additional depot maintenance events to remedy fatigue life expenditure issues in order to achieve an extension of life cycles for legacy FA-18 aircraft.

120613-N-VO377-095  ARABIAN GULF (June 13, 2012) An F/A-18C Hornet assigned to the Blue Blasters of Strike Fighter Squadron (VFA) 34 launches from the flight deck of the Nimitz-class aircraft carrier USS Abraham Lincoln (CVN 72). Lincoln is deployed to the U.S. 5th Fleet area of responsibility conducting maritime security operations, theater security cooperation efforts and combat flight operations in support of Operation Enduring Freedom. (U.S. Navy photo by Mass Communication Specialist 2nd Class Jonathan P. Idle/Released)

The point here is that it is simply not enough to aggregate data for the sake of aggregation.  The true value in harnessing data is knowing which data are important, which are not, and how to tie the data together.  Too often, subscribing to the “big data” school of thought has the potential to distract and misdirect.  I would argue that any exercise in “data” must first begin with a methodical approach to answering the following questions:

“What challenge are we trying to overcome?”

“What are the top 3 causes of the challenge?”

“Which factors are in my control and which ones are not?”

“Do I have access to the data that affect the questions above?”

“How can I use the data to address the challenge?”


While simply a starting point, the questions above will typically allow us to frame the issue, understand its causal effects, and, most importantly, facilitate the process of homing in on the data that are important while systematically ignoring the data that are not.

To apply a real-world example of the methodology outlined above, consider the software application ProModel has provided to the U.S. Navy – the Naval Synchronization Toolset (NST).

“What challenge are we trying to overcome?”

Since 2001, the U.S. Navy has participated in overseas contingency operations (Operation Enduring Freedom and Operation Iraqi Freedom), and the legacy FA-18 aircraft (A-D models) have consumed their life expectancy at a higher rate than anticipated.  Coupled with the delay in Initial Operating Capability (IOC) of the F-35C aircraft, the U.S. Navy has been required to develop and sustain a Service Life Extension Program (SLEP) to extend the life of legacy FA-18 aircraft well beyond their six-thousand-hour life expectancy and to schedule and perform high flight hour inspections and major airframe rework maintenance events.  The challenge is: “How does the Navy effectively manage the strike fighter inventory (FA-18), via planned and unplanned maintenance, to ensure strike fighter squadrons are adequately sourced with the right number of FA-18s at the right time?”

“What are the top 3 causes of the challenge?”

  • Delay in IOC of the F-35C
  • Higher flight hour (utilization) and fatigue life expenditure
  • Fixed number of legacy FA-18s in the inventory

“Which factors are in my control and which ones are not?”

 In:

  • High flight hour inspection maintenance events
  • Airframe rework (depot events)

Out:

  • Delay in IOC of the F-35C
  • Fixed number of legacy FA-18s in the inventory

“Do I have access to the data that affect the questions above?”

Yes.  The planned IOC of the F-35C, the flight hour utilization of FA-18 aircraft, and projected depot capacity and requirements are all available and are injected into the NST application.

“How can I use the data to address the challenge?”

Using the forecasted operational schedules of units, users can proactively source FA-18 aircraft to the right squadron at the right time, balanced against maintenance events, depot rework requirements, and the overall service life of each aircraft.
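As a toy illustration of that balancing act (and emphatically not the actual NST logic), the sketch below sorts a handful of notional airframes by remaining fatigue life, routes those near a depot-induction threshold to maintenance, and assigns the rest against squadron demand. The six-thousand-hour limit comes from the discussion above; every other number is invented.

```python
# Toy illustration (not the NST algorithm): greedily source aircraft to squadron
# requirements by remaining fatigue life, holding back airframes close to a
# depot-induction threshold. Tail numbers, hours, and demand are invented.
FATIGUE_LIFE_LIMIT = 6000      # notional flight-hour limit for a legacy FA-18
DEPOT_THRESHOLD    = 500       # hours remaining at which an aircraft goes to depot

aircraft = [                   # (tail number, flight hours flown)
    ("100", 5200), ("101", 4100), ("102", 5750), ("103", 3300), ("104", 5900),
]
squadron_demand = {"VFA-XX": 2, "VFA-YY": 1}   # aircraft needed per squadron

def remaining_life(hours_flown):
    return FATIGUE_LIFE_LIMIT - hours_flown

# Aircraft near the depot threshold are routed to maintenance instead of squadrons.
to_depot  = [t for t, h in aircraft if remaining_life(h) <= DEPOT_THRESHOLD]
available = sorted(((t, h) for t, h in aircraft
                    if remaining_life(h) > DEPOT_THRESHOLD),
                   key=lambda x: remaining_life(x[1]), reverse=True)

assignments = {}
for squadron, need in squadron_demand.items():
    assignments[squadron] = [available.pop(0)[0] for _ in range(need) if available]

print("To depot:", to_depot)
print("Assignments:", assignments)
```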

Now that the challenge has been framed, the constraints have been identified, and the relevant data located, the real work can begin.  This is not to say that there is one answer to a tough question or even that there is a big red “Easy” button available.  Rather, it has allowed us to ensure that we do not fall victim to fretting over an issue that is beyond our control or spend countless hours wading through data that may not be germane.

NST was designed and developed with the points made above in mind.  The FA-18 is a data-rich aircraft.  However, for the sake of the users, NST was architecturally designed to be mindful of only the key fatigue life expenditure issues that ultimately affect whether an aircraft continues its service life or becomes a museum piece.  In the end, NST’s users are responsible for providing strike fighter aircraft to the units charged with carrying out our national security strategy.  By leveraging the right data, applying rigor to the identification of issues in and out of their control, and harnessing the technology of computational engines, they do precisely that.

Simulating Impatient Customers

ProModel Guest Blogger: Dr. Farhad Moeeni, Professor of Computer & Information Technology, Arkansas State University

Dr. Farhad Moeeni

Simulation is one of the required courses for the MBA degree with an MIS concentration at Arkansas State University.  The course was developed a few years ago with the help of a colleague (Dr. John Seydel).  We use Simulation Using ProModel, Third Edition (Harrell, Ghosh, and Bowden; McGraw-Hill) for the course.  In addition, students have access to the full version of the ProModel software in our Data Automation Laboratory. The course has attracted graduate students from other areas, including arts and sciences, social sciences and engineering technology, who took the course as an elective or to enhance their research capability.  Students experience the entire cycle of simulation modeling and analysis through comprehensive group projects with a focus on business decision making.

Most elements of waiting lines are shared by various queuing systems regardless of entity type: human, inanimate, or intangible.  However, a few features are unique to human entities and service systems, two of which are balking and reneging.  One fairly recent class project included modeling the university’s main cafeteria with its various food islands. Teams were directed to also model balking and reneging, which was challenging. The project led to studying various balking and reneging scenarios and their modeling implications, which was very informative.

Disregarding the simple case of balking caused by queue capacity, balking and reneging happen because of impatience.  Balking means a customer evaluates the waiting line upon arrival, anticipates the required waiting time (most likely by observing the queue length) and decides whether to join the queue or leave. In short, balking happens when the person’s tolerance for waiting is less than the anticipated waiting time at arrival.  Reneging happens after a person joins the queue but later leaves because he or she feels that waiting is no longer tolerable or no longer has utility.  The literature indicates that both decisions can result from complex behavioral traits, the criticality of the service, and the service environment (servicescape). Therefore, acquiring information about and modeling balking or reneging can be hard.  However, doing so offers additional information on service effectiveness that is hard to derive from analyzing waiting times and queue lengths alone.

For modeling purposes, balking and reneging behavior is usually converted into probability distributions or rules that trigger it. To alleviate complexity, simplified approaches have been suggested in the literature.  Each treatment is based on simplifying assumptions and only approximates the behavior of customers. This article addresses some approaches to simulating balking; reneging will be covered in future articles.  Scenarios to model balking behavior include:

  1. On arrival, the entity joins the queue only if the queue length is less than a specified number but balks otherwise.
  2. On arrival, the entity joins the queue if the queue length is less than or equal to a specified number. However, if the queue length exceeds that number, the entity joins the queue with probability p and balks with probability 1-p (Bernoulli distribution).
  3. The same as Model 2, but several (Bernoulli) conditional probability distributions are constructed for various queue lengths (see the example).
  4. On arrival, a maximum tolerable length of queue is determined from a discrete probability distribution for each entity. The maximum number is then compared with the queue length at the moment of arrival to determine whether or not the entity balks.

The first three approaches model the underlying tolerance for waiting implicitly.  Model 4 allows tolerance variation among customers to be modeled explicitly.
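As a minimal sketch of Model 4’s explicit-tolerance idea, each arriving customer can draw a maximum tolerable queue length from a discrete distribution and balk if the queue is longer. The tolerance values and weights below are a made-up example, not field data.

```python
# Minimal sketch of Model 4: each arriving customer draws a maximum tolerable
# queue length from a discrete distribution, then balks if the queue is longer.
# The tolerance distribution below is a made-up example, not field data.
import random

tolerances = [2, 5, 8, 12]          # maximum queue length a customer will accept
weights    = [0.2, 0.4, 0.3, 0.1]   # probability of each tolerance level

def balks(current_queue_length: int) -> bool:
    max_tolerable = random.choices(tolerances, weights=weights)[0]
    return current_queue_length > max_tolerable

random.seed(1)
print([balks(queue_len) for queue_len in (1, 4, 7, 15)])
```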

A simulation example of Model 3 is presented. The purpose is to demonstrate the structure of the model, not its efficiency or compactness.  The model includes a single server, FCFS discipline, and random arrivals and service.  The conditional probability distributions governing balking behavior are presented in the table below; in practice, these data must be extracted from the field.  The simulation model is also presented below. After running the model for 10 simulated hours, 55 customers (10 percent) balked. Balking information can be very useful in designing or fine-tuning queuing systems, in addition to other statistics such as average/maximum waiting time or queue length.

Conditional Probability Distribution

Condition                    Probability of Joining the Queue (p)    Probability of Balking (1-p)
Queue Length <= 4            1.00                                    0.00
5 <= Queue Length <= 10      0.70                                    0.30
Queue Length > 10            0.20                                    0.80

Prof. Moeeni’s simulation model chart
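For readers who want to see the Model 3 logic outside ProModel, here is a compact Python sketch of the same single-server, FCFS system using the conditional join/balk probabilities from the table above. The arrival and service rates are assumed values, so the counts will not match the 10-hour ProModel run reported above.

```python
# Compact sketch of the Model 3 balking logic: single server, FCFS discipline,
# exponential interarrival and service times (rates assumed for illustration),
# with join/balk probabilities taken from the table above.
import random

def p_join(queue_length):
    """Conditional probability of joining, from the table above."""
    if queue_length <= 4:
        return 1.0
    if queue_length <= 10:
        return 0.7
    return 0.2

def simulate(hours=10, arrival_rate=55.0, service_rate=60.0, seed=7):
    """arrival_rate / service_rate are customers per hour (assumed values)."""
    random.seed(seed)
    t = 0.0
    next_arrival = random.expovariate(arrival_rate)
    next_departure = float("inf")
    in_system = 0                 # customers waiting plus the one in service
    arrivals = balked = 0

    while min(next_arrival, next_departure) < hours:
        if next_arrival <= next_departure:        # arrival event
            t = next_arrival
            arrivals += 1
            queue_length = max(in_system - 1, 0)  # waiting line excludes the one in service
            if random.random() <= p_join(queue_length):
                in_system += 1
                if in_system == 1:                # server was idle: start service
                    next_departure = t + random.expovariate(service_rate)
            else:
                balked += 1
            next_arrival = t + random.expovariate(arrival_rate)
        else:                                     # departure event
            t = next_departure
            in_system -= 1
            next_departure = (t + random.expovariate(service_rate)
                              if in_system > 0 else float("inf"))
    return arrivals, balked

arrivals, balked = simulate()
print(f"{balked} of {arrivals} arriving customers balked ({balked/arrivals:.0%})")
```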

About Dr. Moeeni:

Dr. Farhad Moeeni is professor of Computer and Information Technology and the Founder of the Laboratory for the Study of Automatic Identification at Arkansas State University. He holds an M.S. degree in industrial engineering and a Ph.D. in operations management and information systems, both from the University of Arizona.

His articles have been published in various scholarly outlets including Decision Sciences Journal, International Journal of Production Economics, International Journal of Production Research, International Transactions in Operational Research, Decision Line, and several others. He has also co-authored two book chapters on automatic identification with applications in cyber logistics and e-supply chain management, along with several study books in support of various textbooks.

He is a frequent guest lecturer on the subject of information systems at the “Centre Franco Americain”, University of Caen, France.

Current research interests are primarily in the design, analysis and implementation of automatic identification for data quality and efficiency, RFID-based real-time location sensing with warehousing applications, and supply chain management. Methodological interests include design of experiments, simulation modeling and analysis, and other operations research techniques. He is one of the pioneers in instructional design and the teaching of automatic identification concepts within MIS programs and also is RFID+ certified.

Dr. Moeeni is currently the principal investigator of a multi-university research project funded by the Arkansas Science and Technology Authority, Co-founder of the Consortium for Identity Systems Research and Education (CISRE), and on the Editorial Board of the International Journal of RF Technologies: Research and Applications.

Contact Information

moeeni@astate.edu

ProModel Solutions Presented at the 2015 Patient Flow Summit

In May, ProModel joined a diverse and talented group of healthcare professionals in Las Vegas to share best practices for improving processes and positively impacting the quality of patient care.  Presenters provided views on a wide variety of patient flow issues including population health management, RTLS systems, healthcare reform, readmissions, surgical variability and ED processes.

Not only did ProModel have an exhibit at the event, where we officially unveiled our new Patient Flow RX solution, but we were also very fortunate to have ProModel client and user David Fernandez, MHA, give an insightful and informative presentation on his successful use of simulation in the healthcare world.  Fernandez is VP of Cancer Hospital, Neuroscience and Perioperative Services at Robert Wood Johnson University Hospital, and his presentation, “Let My Patients Flow! Streamlining the OR Suite,” described his use of lean management principles and simulation modeling to improve patient flow in the OR.

David Fernandez MHA, Robert Wood Johnson University Hospital discusses his use of simulation for improving patient flow in the OR

Among numerous other presentations, keynote speaker Eugene Litvak, PhD, President & CEO of the Institute for Healthcare Optimization, addressed the application of queuing theory to healthcare processes, a methodology he believes will correctly address the challenge hospitals face in matching random patient demand to fixed capacity.

The Patient Flow Summit helped hospital leaders from all over the world learn the latest about optimizing capacity, streamlining operations, improving patient care, and increasing fiscal performance.

Presenters provided views on a wide variety of patient flow issues

ProModel’s Kurt Shampine, VP (L), and Dan Hickman, CTO (R), unveiling Patient Flow RX!

Teaching Systems Analysis and Modeling

ProModel Guest Blogger: Robert Loomis, Ph.D. Adjunct Professor, Florida Institute of Technology; NASA (Retired)

Robert Loomis, Ph.D.

I teach a number of courses for the Florida Institute of Technology, one of which (Systems Analysis and Modeling) is a 17-week, graduate-level survey course covering systems analysis, various types of modeling, and how modeling fits into the systems analysis process.  This course is designed to be “a mile wide and an inch deep” in that it introduces several topics that could, by themselves, be the subject of dedicated courses.

One of the challenges in teaching a course such as this (particularly in an MBA environment) is to find tools that are effective and demonstrate the concepts well without becoming bogged down in the mechanics of the tools employed.  It also helps if the students find them engaging to use.  I ended up writing some of my own applications for certain deterministic models in order to meet those requirements and to emphasize the concepts that I felt were important.

I chose ProModel to use as a simulation package for a number of reasons. It has:

  • A graphical user interface that is attractive, easy to use, and (at least at the level my class uses) easy to learn.
  • Outstanding documentation.
  • An excellent Professor Package.
  • An excellent Student Package. It is modestly-priced and fully-featured (limited only by the size of the model that can be created).
  • A Workstation Simulator (added by ProModel this year) that is extremely useful for instructors and students.

I have also found the ProModel staff to be responsive, courteous, and willing to help with any issues that may arise. I believe ProModel recognizes that offering an excellent value and support in the teaching environment will pay long-term dividends as the students move into their professional environment, and I applaud ProModel for their insight.

About Robert Loomis

Robert Loomis received a BSEE from Michigan State University, and an MS and Ph.D. in Industrial Engineering from Texas A&M University.  For the last 30 years he has worked for NASA and the United Space Alliance (USA) in the space and aerospace environment as a safety and reliability expert. His NASA positions included Chairman of the Kennedy Space Center (KSC) Safety Engineering Review Panel, Chairman of the KSC Ground Risk Review Panel, Manager of Data Systems at NASA Headquarters, Deputy Director of Safety at Dryden Flight Research Center (DFRC), and Head of the Independent Technical Authority at DFRC. He held numerous positions with USA, culminating in Corporate Director of Mission Assurance.  Dr. Loomis’ recognitions include the NASA QASAR Award, the NASA Exceptional Public Service Medal, the Astronauts’ Silver Snoopy Award, the IEEE Millennium Medal, the IEEE Reliability Society Lifetime Achievement Award, and Leadership and Teamwork Awards from the United Space Alliance.  He is a Senior Member of the IEEE and a Fellow of the Society of Reliability Engineers. He is an adjunct professor at Florida Tech and, most importantly, a Full-Time Grandfather to the three nicest, smartest, and best-looking grandchildren on the planet.

ProModel Customized Solutions: Hogistics

There are times when the best solution is a customized one. For those situations, ProModel has a highly skilled and agile development team ready to work with you to develop a one-of-a-kind predictive analytic solution that meets your needs.

We have created unique custom predictive analytic technology applications and training programs for the Army, Navy, and Government Agencies, as well as companies in the Aerospace & Defense Manufacturing, Pharmaceutical, Healthcare and Services Industries.

Here is just one of our latest custom solutions:

Hogistics was created in partnership with Zoetis, a global animal health company that delivers medicines and vaccines, complemented by diagnostic products and genetic tests and supported by a range of services.

Zoetis, in conjunction with ProModel, created an analytics tool to help farmers predict barn- and system-level pig growth, mortality, and sales volume over time. This helps them achieve the optimal pounds per pig prescribed by the packing companies to which they sell.

Pork production is a highly variable process due to many factors, including genetics, environment, and disease. Other events, such as vaccine protocols, weather and individual animal care, can also have an effect. Hogistics will help swine producers with the following:

  • Marketing weight and weight distribution projections weeks ahead of time
  • Data to optimize animals per sale
  • Scenario analysis capabilities to evaluate animal health, vaccines, and husbandry education
  • Pig placement, marketing and transportation schedule data
  • An inventory management tool for placement and planning of multiple pig flows
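As a rough illustration of the kind of projection such a tool produces (this is a toy sketch, not Hogistics, and every growth, variability, and mortality parameter below is invented), one could simulate weekly barn-average weight and head count until a target market weight is reached:

```python
# Toy projection (not Hogistics): simulate weekly barn-average weight and head
# count to estimate when the barn reaches a target market weight. All parameters
# (growth, variability, mortality) are invented for illustration.
import random

def project_barn(head=2400, start_weight_lb=55.0, target_lb=280.0,
                 weekly_gain_lb=12.0, gain_sd=1.5, weekly_mortality=0.0015,
                 weeks=30, seed=3):
    random.seed(seed)
    weight, alive = start_weight_lb, head
    for week in range(1, weeks + 1):
        weight += random.gauss(weekly_gain_lb, gain_sd)   # barn-average weekly gain
        alive  -= sum(random.random() < weekly_mortality for _ in range(alive))
        if weight >= target_lb:
            return week, alive, weight
    return None, alive, weight

week, alive, weight = project_barn()
print(f"Projected market week: {week}, head remaining: {alive}, avg weight: {weight:.0f} lb")
```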

If you are interested in Hogistics, view the Hogistics page on the Zoetis website.

Contact saleshelp@promodel.com to find out how ProModel can help your organization make better decisions – faster.

Finding Impartiality in S/W Applications

Rob Wedertz – Director, Navy Programs

As long as I can remember, I’ve been a fan of, and have often used, the expression “I don’t want to build the microwave, I just want to press cook.”  (I’ve never been able to remember when or from whom I first heard it; my apologies for the lack of attribution.)  While I’ve sometimes been fascinated by the inner workings and origins of things, in the end I’ve come to accept the modern world and the pace at which it moves.  I simply don’t have the time to dig into the underbellies of things and investigate how they work or their interdependencies.  My aversion to such activities was upended when I joined ProModel and led (as a PM) our development team’s efforts to support Verification, Validation, and Accreditation (VV&A) at the behest of our sponsor’s modeling and simulation accreditation agent.  While I do not intend to “build the microwave” here, I would like to describe how I learned that the act of “pressing cook” must be accompanied by complete and total impartiality of the software application.

Software, in a variety of formats, is often used to tell a story.  When it comes to entertainment-based software, and for the sake of its longevity, the story should be a very good one; thus the reason many folks spend countless hours trying to “level up” (it’s all about the journey, not the destination).  During my college days, I was exposed to Pascal and learned that the methodology (computer language) for telling a story was via if, then, else, while, etc. statements.  Truth be told, I didn’t particularly enjoy trying to figure out how to make the computer say “hello” via that methodology.  Again, I am more of a “show me the story” kind of person than a “how did you make the story” kind of person.  In that regard, I’m quite fond of the software that exists today.  My iPad is a bevy of mindless apps that keep my 5-year-old entertained while putting miles on the family wagon.  However, when it comes to decision-support software, the stuff under the hood REALLY does matter and is often just as important as the story itself.  Through the VV&A journey we’ve traveled to date, I’ve become more and more focused on the inner workings of “the microwave,” both out of necessity and, surprisingly, out of curiosity.

Our software applications tell stories that often culminate in multi-million dollar, and in some cases billion dollar, implications, not necessarily to the good.  Not only must the story be stringently accurate, it must also be 100% impartial (or agnostic) toward those who might be directly impacted by the results.  We accomplish that impartiality by ensuring that we never begin our development processes with an end result in mind.  That is not to say that we do not begin with an end state in mind (i.e., what is it that you want to know?).  The difference is nuanced in print, but significant when it comes to applying the right level of acumen and forethought to software development.  The true genius of leveraging software applications to solve complex problems is that once you’ve figured out “where and why it hurts,” you can use predictive analytics, modeling, and regression analysis to attack the root of the ailment.  In my simplistic mind, our software is being used to treat the condition rather than the symptom.

The rigor that has been applied to the VV&A of our specific DoD program of record is staggering when compared to similar applications.  And it should be.  While many software developers are not particularly fond of documenting source code and explaining why a certain script was used, in the end it has made both our customers and us extremely confident about our methodologies, processes, and coding standards.  Frankly (although I’d never admit it to the folks who raked us over the coals), we’ve become a better development team because of it.  By combining the exacting requirements associated with VV&A with our use of the Agile/Scrum development methodology, we’ve delivered an application that withstands painstaking scrutiny and is adaptive enough to answer evolving customer demands.  In the end, the vetting our software application has endured at the hands of the accreditation agent is not, by itself, the value-added proposition our customer demanded, although it was a necessary evolution.  What really matters is that we’ve produced a traceable software application that is impartial.  It may not always give you the answer you want, but it will always give the answer that you need: the truth.