Successful, results-driven experience in IT program/project management, focused on collaborating with multiple business and IT workstreams to translate detailed business process requirements into workable enterprise software solutions for retail, finance, pharmaceutical, and inventory processes. A proven track record of leading cross-functional international teams of project managers while managing expectations and delivering projects of greater than $10M within stakeholder expectations. In-depth knowledge of the SDLC using Agile and Waterfall project management methodologies (Scrum Master Certified (SMC); MS in IT Management/Project Management, AMU), and a talent for developing business requirements into workable technology solutions. Rich holds a Bachelor of Science in Political Science from Northern Illinois University and a Master of Science in Information Technology/Project Management from American Military University. He is currently a Project Manager III for Bradford Hammacher Group in Niles, IL.
A discussion concerning adding a retirement requirement for holding public office
In my recent travels to Wyoming, I was approached by a kindly older gentleman who asked if I hailed from the great State of Wyoming. I informed him I did not but asked what he was involved with, as the telltale giveaway was the clipboard in his hands. He informed me that he was gathering signatures to include a referendum on the ballot to limit the upper age at which one could run for Congress (U.S. Representative, to be precise). I asked him what caused him to pursue this endeavor. He told me that because of the advanced age of elected officials like President Biden and Senator Dianne Feinstein, he felt they had become a hindrance to the proper running of the nation. A recent Pew Research opinion survey showed that 79% of the country agrees with the gentleman from Wyoming: most Americans favor maximum age limits for federal elected officials and Supreme Court Justices (Pew Research, 10/4/2023, https://www.pewresearch.org/short-reads/2023/10/04/most-americans-favor-maximum-age-limits-for-federal-elected-officials-supreme-court-justices/). Noticing that the gentleman from Wyoming only mentioned Democrats, I asked if this included Senators Mitch McConnell and Charles Grassley. He did not know who those two Senators were. The majority of Americans surveyed included both Republicans and Democrats, with Republicans more strongly favoring a maximum age limit for elected officials, though less so for SCOTUS.

The U.S. Constitution only states a minimum age requirement: Representatives must be 25 years old, Senators 30, and the President 35 (as a side note, the Presidency also requires natural-born citizenship and carries a two-term limit, which is interesting because the Speaker of the House can be foreign-born and therefore could not assume the Presidency should that event occur). The Constitution does not include a maximum age requirement. So, the question is, should the U.S. Constitution have a maximum age limit? Can Congress institute such a limitation? Several websites are working towards instituting an upper age limit; change.org is one such site. It offers several arguments for justifying an upper limit:
The current average age in the Senate is 61.8 years; in the U.S. House, it is 57.8 years. The two presidential candidates are in their 70s. These are the highest averages in our history. (https://www.change.org/p/president-of-the-united-states-maximum-age-limit-for-congress-and-presidency)
They theorize that a mandatory retirement age of 65 “could bring in younger candidates who are better equipped to deal with the issues that America faces today and events we will face in the future.” Moreover, “By implementing these maximum age limits on Congressional and Presidential positions, we can ensure that our country continues to be innovative and help prevent using ineffective and outdated solutions for modern-day problems. It would also provide a better representation of the interests of American people, considering only about 13% of the American population is 65 and over.” Depending on which site you scroll through, the proposed maximum age for holding public office ranges from 65 to 70 years. YouGov.com stated, “How would these limits impact the makeup of our current Congress? Our analysis found that if senators over 60 were barred from holding office, 71% of current senators would be ineligible to serve. If the limit were 70, 30% would be ineligible. If it were 80, 6% would be ineligible.”

As YouGov.com points out, instituting a mandatory retirement age would be messy. At age 60, over two-thirds of the Senate would have to retire. I am not sure about anyone else over 60 years of age, but I was nowhere near interested in retiring. There is no retirement requirement where I currently work, and at 67, I am still not interested in retiring. Lest we forget, there are laws currently on the books that are supposed to prevent a slew of discriminatory practices, one of which is age discrimination. Discrimination aside, consider that not all older adults age the same. I know people in their 70s, 80s, and 90s who are sharp as a tack. I also know people in that age group who are not the sharpest or brightest crayons in the box. However, the same can be said of every adult age group. Thus, age should not be used to determine one’s acuity.

Let us consider the most significant built-in factor in the Constitution for regulating when a person can hold office; we usually call it an election. All any qualified adult must do is register to vote and actually vote to be involved. It is that easy. By qualified, we have stipulated that an individual must be a citizen, over 18 years old (that is right, a minimal age qualification), and reside at the residence listed on their registration form. A citizen can only vote in elections in the districts where they reside. I must admit that being an informed voter takes much work. In Warren Township alone, I have 17 districts where I can vote, from precinct to President (National, Statewide, Congressional, Legislative, State Rep, County Board, Judicial, Unincorporated(?), Township, Park District, Library, three school districts, Fire Protection, and Precinct Committeeperson). That is a lot to keep track of, from the current officeholder and their record to the candidates and what each of them proposes. Living in a Democratic Republic is a lot of responsibility and work. However, it is not meant to be easy. If it were, then all governments would be Democratic Republics. Voting allows the average citizen to register their decision about who they want representing them in each public office. If a voter is dissatisfied with the incumbent, vote against them. If enough other people feel the same way, the incumbent loses and retires from that office. Otherwise, they remain in office until the next election. Thus, we have term and age limits all wrapped in one package every election.
The key is to get enough, or a majority, of like-minded people to participate in the election to remove the disliked incumbent and elect the desired candidate; that is where the work comes in. Currently, only about 70% of all qualified citizens bother to register to vote (U.S. Census), and only 55% bothered to vote in November 2022. Of that voting group, the numbers break down as follows, per U.S. Census data for the November 2022 General Election:

Age      Registered %   Voting %
18-24         49            50
25-34         62            39
35-44         69            49
45-54         72            54
55-64         74            61
65-74         77            68
75+           76            65

While the number voting compared to registered is up over past years, the numbers only get worse in off-year elections:
[Graphic: voter turnout in local elections. Credit: Mona Chalabi/Carnegie Corporation of New York]

As the graphic shows, approximately 10% of those registered actually vote in the local elections where critical decisions affecting all citizens are made: services like police and fire, housing, libraries, schools, streets and sanitation, water, and how our elections are run are all decided at the local level. Nevertheless, only 10-15% of those registered participate in those elections.

So, in the end, putting an artificial retirement age on elected officials only hurts us. We lose out on the experience older citizens bring to the table. While younger people have the energy and the knowledge of newer tools we could use in running society, older citizens bring lifelong experience in applying new things. We have nothing to prove, no axe to grind, no ego to bruise. When I am wrong, I readily admit it. Why sacrifice this experience and knowledge acquired through aging, which can only help our nation? As stated earlier, if you do not like the incumbent, work to remove them from office using the old tried and true method: vote them out of office in the next term-limiting election.
Two hundred years ago, the industrial revolution changed the way the world produced goods, transported them, and communicated. Today, Artificial Intelligence (AI) is a new revolution that will be far more impactful than ever imagined. Any task that can be codified and programmed into a computer, even the jobs of so-called knowledge workers such as Project Managers (PMs), will be automated. However, has AI had a beneficial impact on project management in assisting projects to conclude successfully? This study examines the practical application of AI to project management and whether that application has been beneficial. Previous studies show that using AI in conjunction with Earned Value Management (EVM) methods can add value, but practical applications still have a way to go to meet project needs.
Keywords: Artificial Intelligence, AI, Machine Learning, Project Management, Earned Value Management, EVM.
A Master Thesis Submitted to the Faculty of American Military University by Richard Garling
The author hereby grants the American Public University System the right to display these contents for educational purposes.
The author assumes total responsibility for meeting the requirements set by United States copyright law for the inclusion of any materials that are not the author’s creation or in the public domain.
I dedicate this thesis to my wife. Without her love and devotion, without her belief in me and her ungodly amount of patience, support, and encouragement, I could not have completed this work. I do all of this for her.
ACKNOWLEDGMENTS
I wish to thank Dr. John Rhome and Dr. Novadean Watson-Stone for their support and the knowledge they imparted while I was a student of theirs. Their guidance was most appreciated, especially when I had doubts about whether I was going in the right direction. Dr. Stone was particularly helpful in getting me to wrap my head around what a literature review provides, and Dr. John put my head on straight when I would doubt myself. I am forever in your debt.
Two hundred years ago, the industrial revolution began making drastic changes to the world. It was changing production, transportation, and communication, and it was occurring everywhere. Many thought it would never impact their little corner of the world, yet it did. The poor cobbler who had produced shoes one pair at a time found himself replaced by a machine that could turn out hundreds of pairs a day. Worst of all, the human operating the machine did not have the cobbler’s knowledge of making shoes, and did not need it.
Today, we see the beginnings of a new revolution in performing tasks. This new revolution could cause changes far more impactful than the Industrial Revolution ever imagined. Commonly referred to as the Artificial Intelligence (AI) revolution, it encompasses fields such as Machine Learning (ML), Cognitive Computing, and Natural Language Processing (NLP).
AI is estimated to replace over 1.8 million jobs while creating 2.3 million new ones, and it could create over $2.9 trillion in business value (Kashyap, 2019). AI will impact everything, including production, transportation, communication, and decision-making. Any task that can be codified and programmed into a computer, even the jobs of so-called knowledge workers such as Project Managers (PMs), will be automated. Furthermore, AI would complete these tasks more quickly, more efficiently, and more economically.
An example of a knowledge-worker trade that was once human-intensive, requiring a tremendous amount of experience, and that now runs almost exclusively on computers is stock trading. Gone are the trading pits that used to employ over 5,500 traders. Today, around 500 traders do most of their trading on networks using AI tools to make decisions. The Chicago Mercantile Exchange switched to trading commodities via a network in 2015; today, sophisticated algorithms match buyers and sellers (Davenport & Kirby, 2016).
Questions arise as to whether a machine will ever replace humans. Many scoffed at the mere thought of robots replacing humans and at the idea that computers would be able to think like humans and make decisions like humans. It is not likely, since computers have one underlying problem: they are not human. Humans can create a device that performs many mundane, repetitious, routine tasks. Humans can develop machines that make decisions based on what is “learned.” However, AI tools lack one essential capability: humans can perform multiple tasks and can switch to a different task on a whim; machines cannot. Computers are good at performing the same jobs over and over as programmed, nothing more, and computers are not self-aware.
AI devices are designed and built to receive information from their environment and are programmed to take actions that increase the likelihood of a successful conclusion. AI is capable of interpreting externally fed data correctly, learning from such input, and making decisions from that data based on algorithms programmed into the system. Sometimes this process is referred to as automation. However, automation is a controlled process: automation follows the logic and the rules programmed into it, while AI can reflect intelligence, even human intelligence (Lahmann, Keiser, & Stierli, 2018). Nevertheless, AI is still limited by what has been programmed into its system, nothing more.
A traditional project is a temporary effort to create something unique, such as a product or a process (Institute, 2019). A project has a beginning and an end, and managing a project is a process. PMI describes five process groups and ten knowledge areas a project can go through from start to finish; at a high level, these encompass the Initiating, Planning, Executing, Monitoring and Controlling, and Closing phases of a project. Project management is a manually intensive and data-driven endeavor, an environment in which AI thrives. AI is nothing without data, lots of data (Lee, 2018). It could use existing tools such as MS Project (“Microsoft Project Software,” n.d.) and many other project management applications, each serving as a repository for the data generated by project activities. Many project management applications can store information concerning scheduling, costs, and earned value, but the project manager must tell the system to create these reports manually; some of these tools can analyze the data. Many project management software packages claim to have AI capabilities: that no human intervention is required, that they automate simple tasks, and that they create a greater understanding of project performance (Russell & Norvig, 2016). Still, the reality is that they are not full-blown AI-controlled project management tools. Some come close to being project assistants, but human intervention is required. After all, someone must input the data used by machine learning, and a project manager still needs to decide on a course of action.
Earned Value Management (EVM) integrates the project scope, schedule, and costs into a systematic process used in project forecasting (Fleming & Koppelman, 2010). EVM is the accurate measurement of a project’s work performed against its baseline plan. EVM provides the project manager with precise information concerning the status of the project at a given point: is it behind schedule, ahead of schedule, on time, over or under budget? The project manager using EVM can tell at any given moment where the project should be in the number of tasks completed and how much money should have been spent up to that point. Determining the precise status using EVM is an intensely manual process. The project manager needs to determine the project scope, secure resources, create the schedule, determine the costs, gain approval, set a baseline for the project, and measure actual performance throughout. If the project falls behind, the PM must determine the cause and the cure for getting it back on track. The project manager could spend days putting the information together if the project is a significant endeavor. AI could prove useful in these circumstances by making the calculations in mere seconds.
Problem Statement
The problem this paper addresses is determining whether AI and EVM used together can significantly improve project success. Due to the intensely manual characteristics of traditional waterfall project management and the nature of the data generated by projects, the ability to utilize AI effectively to forecast project success is questionable. The ability to combine AI with EVM to assist the success of projects would also be questionable: mostly because no two projects ever run the same, and because EVM adds additional work to an already heavy workload. AI is very data-dependent, and that data must be clean, with no ambiguities in formatting. Machine learning loves structured data: data that is categorized, labeled, and searchable. Structured data is more straightforward to analyze than unstructured data, which has no defined formatting and is difficult to collect, process, and analyze. EVM likewise uses structured data, such as the project schedule, costs, and plan; all of these require structured input, such as the reporting of time worked against the project plan. The question is which parts of AI will work well with project management. Expert systems, which emulate the decision-making capabilities of a human expert, could be a possible tool. Chatbots, also known as conversational agents, mimic written or spoken human speech and are used to simulate a conversation with real people. Project success predictor tools can predict project success before the project starts (Boudreau, 2019).
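To make the distinction concrete, the sketch below shows what a structured project status record might look like; it is a minimal illustration only, and the field names (task_id, planned_hours, and so on) are hypothetical rather than drawn from any particular tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TaskStatusRecord:
    """One structured, labeled status entry: the kind of input both
    EVM calculations and machine-learning models can readily consume."""
    task_id: str             # ties the entry to a WBS work package
    report_date: date
    planned_hours: float     # from the baseline schedule
    actual_hours: float      # from the resource's time report
    hourly_rate: float       # from the budget
    percent_complete: float  # 0.0-1.0, as reported by the resource

# A free-text status email ("mostly done, hit a snag with the server...")
# carries similar information but is unstructured: it must be parsed,
# labeled, and normalized before EVM formulas or ML models can use it.
record = TaskStatusRecord("WP-1.2.3", date(2020, 3, 6), 40.0, 48.0, 57.00, 0.8)
print(record)
```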
Purpose of the Research Project
The purpose of this study is to evaluate whether AI tools using EVM improve the success rate of projects or the ability to forecast that success rate. It will also examine AI tools not being used with EVM to determine if applying them could improve the project success rate. Can AI, in conjunction with EVM, assist the Project Manager in making decisions during the project, and if so, which tools work most effectively with earned value metrics? AI could be used to integrate and evaluate the triple constraint of projects (scope, schedule, and budget), all significant components in the EVM formulas used for accurate measurement of the status of the project at a given point in its lifecycle. Instead of the Project Manager spending countless hours working the numbers, machine learning algorithms could analyze the data in seconds, providing the needed information and suggesting a possible direction.
Hypotheses
This study intends to prove or disprove the following hypotheses through the evaluation of existing literature and practical examples.
1. Artificial Intelligence or machine learning tools can, when integrated with Earned Value Management tools, assist Project Managers in increasing project completion success rates above 95%.
2. Alternatively, AI cannot be successfully integrated with EVM to increase project success rates above 95%.
The Significance of This Study
This study intends to advance the understanding of applying AI with EVM to project management. It will identify the various component phases of the project management plan and discuss possible ways AI could be applied to them. Much like the Industrial Revolution 200 years ago, the AI revolution is going to change the way humans do just about everything, even project management. This study will concentrate on how AI and machine learning, using EVM, can assist project managers in making decisions throughout the project lifecycle. Will AI replace Project Managers? Not likely. Recall the advent of the Automatic Teller Machine (ATM), which was supposed to spell the end of bank tellers as a profession; yet there are more tellers today than when ATMs were first introduced (Bessen, n.d.). This study will examine which AI tools to apply and how they fit with many, if not all, aspects of project management, focusing primarily on integrating EVM metrics into AI algorithms and on determining whether projects that applied AI with EVM show any improvement in the percentage of successful completions. History, as far back as 2013, has shown that 50% of businesses experienced an IT project failure. By 2016, that number had increased to 55%. Much of the project failure was due to poor planning, with over 56% of projects failing to meet expectations (Florentine, 2017). 85% of businesses say that AI will significantly change the way they do business in the next five years (Project Management Institute, n.d.). Studies have shown that Project Managers can spend over 54% of their time on administrative project tasks, tasks that could be handled by AI (Kashyap, 2019).
EVM tools have been used successfully for over fifty years by the military to measure the performance of projects. This study will focus on using EVM formulas to develop algorithms that monitor and analyze key metrics such as the Cost Performance Index (CPI), actual costs (AC), and planned costs (PC). These algorithms would focus on real-time reporting, aiding decision-making concerning the direction of the project and recommending any actions needed. These real-time observations, produced by AI algorithms, would allow the Project Manager to manage the project, relieving them of the mundane but necessary chore of gathering and manipulating data. Furthermore, these algorithms could accurately predict the success of the project at the 15% to 20% completion point, as is done manually today (Fleming & Koppelman, 2010). Integrating EVM formulas with AI algorithms, if applied correctly, would assist project managers in completing projects. This study will include literature reviews of existing materials on the integration of AI and EVM in project management today. It will explore whether EVM used in AI is effective, and if not, why not; and if EVM in AI is effective, whether it could be as useful applied elsewhere in project management. This study will consider the different AI algorithms available today and determine potential applications using EVM in project management.
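As a rough illustration of what such an algorithm might look like, the sketch below flags a project once it passes an early completion checkpoint with a weak CPI. The function name, thresholds, and alert logic are illustrative assumptions, not a published method.

```python
def assess_project(bac: float, ev: float, ac: float,
                   checkpoint: float = 0.15, cpi_floor: float = 0.95) -> str:
    """Evaluate EVM metrics once the project is far enough along
    (15-20% complete) for them to be considered predictive."""
    percent_complete = ev / bac   # earned value as a share of total budget
    if percent_complete < checkpoint:
        return "Too early: EVM indicators are not yet considered predictive."
    cpi = ev / ac                 # Cost Performance Index
    eac = bac / cpi               # projected total cost if the CPI holds
    if cpi < cpi_floor:
        return (f"ALERT: CPI={cpi:.2f}; projected completion cost "
                f"${eac:,.2f} against a budget of ${bac:,.2f}.")
    return f"On track: CPI={cpi:.2f}, projected completion cost ${eac:,.2f}."

# Example: 20% of a $500,000 project earned at an actual cost of $125,000.
print(assess_project(bac=500_000, ev=100_000, ac=125_000))
```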
Literature Review
Project Management
Kerzner (2017) defines a project as having a specific objective that creates business value, targeted for completion within a specific timeframe and to explicit requirements. A project must have defined start and end dates and a defined budget, and it consumes human and non-human resources (money, people, equipment). Project management is the application of the knowledge, skills, and tools needed to achieve the goals of the project. Information and communication are key to managing projects to a successful conclusion.
Project management has been both a savior and a problem. It has accomplished significant endeavors such as the Hoover Dam, yet it has trouble successfully completing 50% of the projects started in information technology, and today, information technology runs the world (Lee, 2018). Recent history, as far back as 2013, has shown that 50% of businesses experienced an IT project failure. By 2016, that number had increased to 55%. Much of the project failure was due to poor planning, with over 56% of projects failing to meet expectations (Florentine, 2017). Today, 85% of businesses say that AI will significantly change the way they do business in the next five years (Project Management Institute, n.d.). Studies have shown that Project Managers (PMs) spend over 54% of their time on administrative project tasks, tasks that could be handled by AI (Kashyap, 2019).
Robertson and Robertson (2013) note the importance of the risk register. The documentation for each risk should include a risk owner, and each owner can own more than one risk. The risk owner’s responsibility is to track the status of the risk and to assist in developing a risk plan for each of their risks. They are responsible for notifying the team and management when a threat has become an issue and for launching the approved risk plan for the occurring risk.
A description of the risk should be concise and to the point. It should contain the risk description, the trigger event, and the probability of the risk occurring. The explanation should describe the risk and its impact on the project should it happen, and it should contain the plan to mitigate the risk should it become an issue (Kendrick, 2009).
Knowing the work and the risks is the best defense for handling problems and delays. Kerzner (2017) defines risk as a measurement of the probability and consequence of not achieving the defined project goal. Assessing potential overall project risks brings to the forefront the need to change project objectives. It is these risk analysis tools that allow the PM to transform an impossible project into a successful one (Campbell, 2012). Project risks become increasingly difficult when dealing with an unrealistic timeline or target date, insufficient resources, or insufficient funding. Shishodia, Dixit, and Verma (2018) found that schedule, resource, and scope risks are the most prominent risk categories in Engineering and Construction (E&C), Information Systems/Technology (IS/IT), and New Product Development (NPD) projects, respectively.
Similarly, vital insights have been drawn from their detailed cross-sector analysis, depicting different risk categories based on the novelty, technology, complexity, and pace (NTCP) characteristics of projects (Shishodia et al., 2018). Knowing the risks can help to set realistic expectations, levels of deliverables, and the work required given the resources and funding provided. Managing risks means communicating and being ready to take preventive action. Gido and Clements (2012) felt the PM cannot be risk-averse; accepting that risk will happen is part of the job, and doing nothing is not an option. Kendrick (2009) describes the need for the PM to set the tone of their projects by encouraging open and frank discussions of potential risks. According to Kendrick (2009), because technical projects are highly varied, with unique aspects and objectives that often differ from previous work, no two technical projects are alike. The PM needs to encourage identifying risks, their potential impact on the project, and their likelihood of occurrence, which requires developing risk response plans and monitoring those risks (Kendrick, 2009).
Gido and Clements (2012), Kerzner (2017), and Kendrick (2009) advocate performing qualitative and quantitative risk analysis and prioritizing risks by ranking them in order of probability and impact. Ranking risks by their likely probabilities allows the PM to identify the risks the project team feels will need in-depth analysis to determine their potential impact costs on the project. Qualitative risk analysis defines the roles and responsibilities for determining risks, budgets, and schedule impacts to the project. The risk register and probability/impact matrix contain all the information developed during the analysis.
PMI’s PMBOK (Institute, 2019) instructs that the PM can determine risk ranking by assessing the probability of each risk occurring. The benefit of this analysis is that it allows the PM to concentrate on high-priority risks, thus reducing the level of uncertainty. Probabilities are determined using expert judgment, through interviews or meetings with individuals chosen for their expertise in the area of concern to the project. These experts can be either internal or external to the project.
Kerzner (2017) listed several quantitative analysis methods commonly used to analyze risk, including payoff matrices, decision analysis, expected value, and the Monte Carlo process. Monte Carlo analysis creates a series of probability distributions, transforming these numbers into useful information that reflects any cost, technical, or schedule problems associated with the risk.
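A minimal sketch of the Monte Carlo idea appears below: sample each task's cost from a simple distribution, repeat thousands of times, and read risk information off the resulting distribution of totals. The three-point estimates are invented for illustration.

```python
import random

def monte_carlo_total_cost(n_trials: int = 10_000) -> list:
    """Sample each task's cost from a triangular distribution built on
    (optimistic, most likely, pessimistic) estimates, and collect the
    distribution of total project cost."""
    tasks = [            # hypothetical (low, mode, high) cost estimates
        (8_000, 10_000, 15_000),
        (4_000, 5_000, 9_000),
        (12_000, 15_000, 25_000),
    ]
    totals = []
    for _ in range(n_trials):
        totals.append(sum(random.triangular(low, high, mode)
                          for low, mode, high in tasks))
    totals.sort()
    return totals

totals = monte_carlo_total_cost()
p80 = totals[int(0.8 * len(totals))]  # cost the project underruns 80% of the time
print(f"80th-percentile total cost: ${p80:,.0f}")
```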
Shishodia et al. (2018) described impact analysis as investigating the effect a risk will have on the project’s schedule, cost, quality, and ability to meet project scope. The impact analysis will also look at the positive or negative effects of a risk on the project. If the level of impact is significant enough, and its probability of occurring high enough, it will merit quantitative analysis to determine the effect it will have on the project.
Inputs to the qualitative risk analysis process include the project risk management plan. Here, the roles and responsibilities of managing risk are defined, as are budgets, schedules, and resources. The scope baseline is considered an input; it includes the approved scope statement, the Work Breakdown Structure (WBS), and the WBS Dictionary. These inputs can only change through an approved change control procedure (Mullaly, 2011).
To understand Earned Value Management (EVM), one must first understand what makes up a project. The PMBOK (Institute, 2019) defines a project as having a start and a finish, not meant to be an ongoing endeavor. Of concern to the Project Manager (PM) are the parts between the beginning and the end. How these parts come together to perform the required tasks at the right time, bringing the project to a successful conclusion, is known as project integration, in which EVM plays a valuable role (Fleming & Koppelman, 2010).
Integration management comprises the processes and activities that identify, describe, join, and synchronize the various processes and activities within the process groups (PMBOK, 2013). EVM helps to ensure the project stays on course, in a synchronized order, within the parameters established by the project scope. EVM is a set of tools that allows the Project Manager to measure performance to determine whether the project is on course or in trouble. It can be applied using a minimal number of tools, such as the scope, the Work Breakdown Structure (WBS), the project schedule, and regular reporting, tools that any good PM should use when managing a project. However, EVM works best when using all the tools available to the PM.
Project integration puts the team on the planning path, bringing together expert judgment to review the charter and scope requirements. From this review, the team can begin creating the WBS, from which the project plan, schedule, and budget draw their information. The WBS allows the team to tie the different tasks to a specific deliverable, which relates to a specific requirement of the business. It also allows the team to ensure that all tasks are completed in order and on time, and it is one of the main tools used in EVM to measure the progress of the project.
As Campbell points out in his book Communication Skills for Project Managers (Campbell, 2012), getting team members to work together is also an essential part of integration management. Part of the challenge with many projects is that the teams involved come from a variety of departments, and each department can have its own set of rules and requirements by which it completes the work tasked to it. Staying ahead of these obstacles requires a considerable amount of skill on the part of the PM. Integration plays a massive role in defining the skills a Project Manager will need.
Project Managers are unique people. The expectation is that they bring their projects to a successful conclusion with, hopefully, just enough resources, money, and time. The expectation levels are high and the pressure extreme. They are regularly asked to take on a new endeavor, to use resources that have not worked together before, and to make it all work to produce something new. As Kendrick (2012) and Kerzner (2017) point out, Project Managers are no one’s boss, yet they are expected to get people to do the work required for the project and are held responsible if they fail. It takes a special kind of leader to ensure smooth execution.
Leadership is no longer limited to one or two executives at the top of an organization. There are many different levels of leadership in any company, especially in today’s global economy, where resources specialize in each area of business. Everyone in the company must be a leader if the organization is to survive and thrive (Tichy and Cohen, 1997). Without good, strong leadership, nothing works; projects and project teams can get totally out of control when good leadership is not running the group.
Throughout a project, the PM must establish and reinforce the vision and strategy by continuously communicating the message. Communicating helps to build trust, build a team, influence, mentor, and monitor the project and team performance. After all, it is people, Kendrick (2012) notes, not plans, that complete projects. It is the plan that keeps the people going in a single direction towards a goal. The Project Manager, inspiring others to find their voice, keeps the goals and objectives front and center. A successful project is the result of everyone agreeing on what needs doing and then doing the work. From initiation to closing, the project depends on the willingness of all involved to accept, to synchronize action, to solve problems, and to react to changes. Communication amongst everyone is all that is required (Verzuh, 2012).
However, amongst all the traits a leader needs, one must be earned, and it is the one most admired: personal integrity. It is the foundation of leadership. It brings trust with it, for we want to believe in our leaders, to have faith and confidence in them, and to know that they believe in the direction we are all going (Kerzner, 2014).
There are five success factors every project must meet to be successful: first, agreement amongst the team as to the goals of the project; second, a plan with a clear path to completion and clearly defined responsibilities, used to determine progress on the project; third, continuous, effective communications understood by all involved; fourth, controlled scope; and fifth, management support (Verzuh, 2012).
Determining success and measuring progress is where Earned Value Management (EVM) comes into the picture. EVM allows the PM to keep track of the progress of the project to the point of getting early warning signals of trouble ahead.
Earned Value Management (EVM)
Earned Value Project Management (EVPM) is lamented by many a PM as too much work for limited value. Resources push back at the PM, saying that there is too much documentation for little return; they have trouble seeing the value that EVPM brings to the table. Fleming and Koppelman (2010) describe EVPM as the project management technique for objectively measuring project performance and progress, pointing out that it is a disciplined approach to ensuring that the project stays on course and on time. Kerzner (2015) describes EVPM as a systematic process that uses earned value as the primary tool for integrating cost, schedule, technical performance management, and risk management. EVPM can determine the actual status of a project at any given point, but only when organizational rules are followed, which requires a disciplined approach.
EVPM got its start in the late 1800s, when industrial engineers on factory floors in the U.S. wanted to measure their production performance. These engineers created a three-dimensional way to measure the performance of work done on the factory floor: they created a baseline called planned standards, and then measured earned standards at a given point against actual expenses. Their formula remains the most basic form of earned value management today (Fleming & Koppelman, 2010).
Approximately sixty years later, the U.S. Navy introduced PERT (Program Evaluation Review Technique) to industry as a scheduling and risk management tool. The idea was to promote the use of logic flow diagrams in project planning and to measure the statistical success of using these flow diagrams. It did not last very long because it was cumbersome to apply (Fleming & Koppelman, 2010).
However, PERT, when combined with the Critical Path Method (CPM) in 1957, could manage project scheduling and reporting. PERT/CPM is a method used to analyze the amount of time required to complete project tasks. It is used when time, not cost, is the significant consideration in completing a project, and it is considered an event-oriented method rather than a start-and-completion method, part of the reason why PERT works well with CPM. The problem at the time was that computers had not become sophisticated enough to support the concept (Archibald and Villoria, 1966).
Archibald and Villoria (1966) showed that a PERT/Cost concept could measure earned value. The implementation of PERT/Cost in industry required eleven reporting formats, one of which was the Cost of Work Report, which contained a format called value of work performed. The PERT/Cost standard lasted about three years, mostly due to its cumbersome use and to industry not particularly liking uninvited intervention.
Fleming and Koppelman (2010) go on to describe how, in 1965, the U.S. Air Force created a set of standards allowing it to oversee industry performance without telling industry what to do. The Air Force developed a series of broad-based criteria and asked that industry satisfy them using its existing management systems. These criteria developed into the C/SCSC (Cost/Schedule Control Systems Criteria), which every company wishing to do business with the government was required to meet (Fleming & Koppelman, 2010).
The results of these new criteria were impressive. However, problems also arose. The original 35 criteria grew, at one point reaching 174, some being very rigid, dogmatic, and mostly inflexible, taking away from the original intent of being unobtrusive. In 1995, the National Defense Industrial Association rewrote the Department of Defense (DoD) formal earned value criteria and called the new list of 32 criteria the Earned Value Management System (EVMS) (Fleming & Koppelman, 2010). Eventually, these new criteria would become part of the American National Standards Institute/Electronic Industries Alliance guidelines, usually called the ANSI guidelines, and from this came broad acceptance of the new criteria by industry.
Why Earned Value Project Management (EVPM)?
There are many reasons why every project should use EVPM. As Fleming and Koppelman (2010) describe, EVPM provides a single management system that all projects can employ. The relationship of work scheduled to work completed provides an actual gauge of whether one is meeting the goals of the project. The most critical relationship, that of work completed to the money spent to accomplish it, provides an accurate picture of the actual cost of performance.
Fleming and Koppelman (2010) understood that EVPM requires the integration of the triple constraint (scope, cost, and time), allowing for the accurate measurement of integrated performance throughout the life of the project. Integration is a big issue in managing a project. Many times, the project management team defines the project one way, the development teams another way, and QA yet another way. Everyone is reading the same sheet of music, but they are singing different songs. The requirement of the Work Breakdown Structure (WBS) has helped to bring alignment amongst the various teams impacted by the project. Its hierarchical structure helps to define the scope of the project in terminology easily understood by both the project team and the business sponsors (Fleming & Koppelman, 2010).
Study after study conducted by the Department of Defense (DoD) shows that projects using EVPM have demonstrated a pattern of consistent and predictable performance history (Fleming & Koppelman, 2010). The studies have shown that EVPM provides reliable early performance indicators as early as the 10%-20% project completion point. The ability to show the direction the project is going at that early stage allows the Project Manager to adjust course, making corrections long before it is too late.
Project Performance Metrics in EVM
The critical requirements for using metrics are that the project is baselined at the appropriate time and that the real reason for any baseline change is found. Two critical documents in the project are the project plan schedule and the WBS (Kerzner, 2014).
Included with the WBS is a document that further defines each work package activity of the WBS, known as the WBS Dictionary. Kerzner (2014) described the WBS Dictionary as a detailed description of the work to be done, setting activity predecessors and successors. It also lists dependencies within and outside of the project, such as corporate servers that may be needed to house the result of the work package. It lists the resource(s) responsible for developing the work package and the level of effort, usually in hourly units, needed to accomplish the work, and it would include the hourly rate for each resource (Fleming & Koppelman, 2010; Kerzner, 2015).
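To illustrate how the WBS and its dictionary carry the cost data that EVM later consumes, the sketch below models a dictionary entry with the fields described above (description, predecessors, resource, effort in hours, hourly rate) and rolls hours times rate up the hierarchy. The class and field names are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WBSEntry:
    """A WBS Dictionary entry for one element of the WBS hierarchy."""
    name: str
    description: str = ""
    predecessors: List[str] = field(default_factory=list)  # activity ordering
    resource: str = ""            # resource responsible for the work package
    effort_hours: float = 0.0     # level of effort, in hourly units
    hourly_rate: float = 0.0      # rate for the assigned resource
    children: List["WBSEntry"] = field(default_factory=list)

    def budgeted_cost(self) -> float:
        """Roll hours x rate up the hierarchy; summed over the whole tree,
        this becomes the project's planned-value baseline."""
        own = self.effort_hours * self.hourly_rate
        return own + sum(child.budgeted_cost() for child in self.children)

package = WBSEntry("1.2 Build data layer", children=[
    WBSEntry("1.2.1 Design schema", resource="Analyst A",
             effort_hours=16, hourly_rate=57.00),
    WBSEntry("1.2.2 Implement schema", predecessors=["1.2.1"],
             resource="Developer B", effort_hours=40, hourly_rate=57.00),
])
print(f"Budgeted cost: ${package.budgeted_cost():,.2f}")  # $3,192.00
```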
Fleming and Koppelman (2010) would use the WBS and the project schedule to begin assembling the following metrics, which are used extensively in the EVM process:
Planned Value (PV)
Fleming and Koppelman (2010) and Subramanian and Ramachandran (2010) describe the importance of gathering all the information needed to prepare the project schedule. A PM can then measure the value of the work that should be done at any given point in the project, because each defined task has a defined unit of measure and a prescribed completion time. This information is known as the Planned Value (PV) of the project.
The Planned Value (PV), also known as the Budgeted Cost of Work Scheduled (BCWS) (Institute, 2019), is the approved budgeted cost for each work package, including the planned duration and cost for the activity. It is sometimes referred to as the performance measurement baseline (PMB), and the total PV of the project is known as the Budget-at-Completion (BAC) (Fleming & Koppelman, 2010; Subramanian & Ramachandran, 2010).
Earned Value (EV)
Subramanian and Ramachandran (2010) then explain that the Earned Value (EV), also known as the Budgeted Cost of Work Performed (BCWP), is the value of the approved budgeted work completed at a given point in the project (Fleming & Koppelman, 2010; Subramanian & Ramachandran, 2010). For example, if PV = $912 per day of planned work, by day three the expected EV would equal $2,736.00 worth of work completed according to the plan. If PV = $10,000 per day of work performed and the project duration is 60 days, the BAC would equal $600,000.00, and the PV by day fifteen should equal $150,000.00.
From this base, the PM has the Planned Value (PV) and can measure the Earned Value (EV) from the status reports, which indicate the actual work done. Using the information from the status reports, the PM can also determine the actual cost of the project (AC). The Cost Variance (CV) is then determined by subtracting the AC from the EV: CV = EV – AC.
The Cost Performance Index (CPI) is determined by dividing the EV by the AC: CPI = EV/AC. The CPI is used to determine whether the project is on track with its costs. A CPI greater than 1.0 means the project is under budget, while a CPI less than 1.0 means the project is over budget. Over budget implies that the project is spending more and getting less, while under budget could mean the project is getting more production for its money.
EVM will alert the Project Manager to any problems with the budget and schedule at any chosen point. So long as the scope, WBS, schedule, and accurate, regular reporting are in place, completing the EVM measurements will provide performance figures the PM can use (Fleming & Koppelman, 2010; Subramanian & Ramachandran, 2010).
Actual Cost (AC)
Fleming and Koppelman (2010) show the Actual Cost (AC), also known as the Actual Cost of Work Performed (ACWP), as the number of hours worked multiplied by the rate per hour. As each resource finishes the day’s work as planned, they record their time, in MS Project, for example. Take a work package defined to take two resources five days to accomplish, where each resource costs $57.00 per hour and a workday is eight hours: the total PV comes to $4,560.00 (80 hours x $57.00). By day three, the Project Manager would expect the PV of work completed to equal $2,736.00. However, the AC came in at $3,648.00 for that same work. According to these results, the project is overspending: the expected cost was supposed to have been a total of $2,736.00 for 48 hours of work.
Using the Cost Variance (CV) formula, the PM can determine where the project stands at this point: CV = EV – AC. Cost Variance is a way to determine cost performance on a project; it is equal to the Earned Value (EV) minus the Actual Cost (AC). This measurement is critical as it indicates the relationship of physical performance to actual costs (Fleming & Koppelman, 2010; Subramanian & Ramachandran, 2010). From the above example, the formula would look like: CV = $2736.00 – $3648.00 = -$912.00.
Another way to look at the same relationship is through the Cost Performance Index (CPI), considered the more critical of the earned value metrics (Fleming & Koppelman, 2010). A value of less than 1.0 means the project is spending more than it is getting, while a value greater than 1.0 means the project is spending less and getting more. From the above example, the formula would look like this:
CPI = EV/AC
CPI = $2736.00/$3648.00 = .75
As one can see, the project CPI is less than 1.0. The project is spending more than it is getting done: it is getting $.75 worth of work for every $1.00 spent.
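A minimal sketch reproducing the example's arithmetic in code follows; the function names are mine, but the formulas and figures are exactly those above.

```python
def cost_variance(ev: float, ac: float) -> float:
    """CV = EV - AC: negative means the work performed cost more than budgeted."""
    return ev - ac

def cost_performance_index(ev: float, ac: float) -> float:
    """CPI = EV / AC: below 1.0, the project gets less than $1.00 of work
    for every $1.00 spent."""
    return ev / ac

ev, ac = 2736.00, 3648.00                # day-three figures from the example
print(cost_variance(ev, ac))             # -912.0
print(cost_performance_index(ev, ac))    # 0.75
```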
The financial report shows what is needed to complete the project as far as cost is concerned. The PM uses the Estimate-to-Complete (ETC) and the Estimate-at-Completion (EAC) to indicate what is needed to finish the project. Use the following formula to determine the ETC:
ETC = BAC – EV
ETC = $4560.00 – $2736.00 = $1824.00
Per Fleming and Koppelman (2010), the above formula is used if the project is expected to complete on time and on budget. If, as in the example, the project is neither on time nor within budget and this track is expected to continue, then the following formula determines the ETC:
ETC = (BAC – EV)/CPI
Or
ETC = ($4560.00 – $2736.00)/.75 = $2432.00
The Estimate-to-Complete is the amount of funds needed to complete the project (Fleming & Koppelman, 2010; Subramanian & Ramachandran, 2010). The method used to calculate the amount depends on the circumstances; in the above example, the variance experienced is expected to continue for the remainder of the project.
Both Fleming and Koppelman (2010) and Subramanian and Ramachandran (2010) would use the same logic for determining the Estimate-at-Completion cost, assuming that the variances experienced will continue. The formula is as follows:

EAC = BAC/CPI

EAC = $4560.00/.75 = $6080.00

The EAC is equal to $6080.00, and the Variance-at-Completion (VAC) would be equal to:
VAC = BAC – EAC
Thus
VAC = $4560.00 – $6080.00 = -$1520.00
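Carrying the same figures through all of the forecasting formulas gives a compact summary; the sketch below simply restates the chapter's arithmetic in code.

```python
BAC = 4560.00    # Budget-at-Completion (80 hours x $57.00)
EV  = 2736.00    # Earned Value at day three
AC  = 3648.00    # Actual Cost at day three
CPI = EV / AC    # 0.75

etc_on_plan = BAC - EV           # $1,824.00 if the remaining work goes to plan
etc_at_cpi  = (BAC - EV) / CPI   # $2,432.00 if the cost variance continues
eac         = BAC / CPI          # $6,080.00 projected total cost
vac         = BAC - eac          # -$1,520.00 projected overrun

print(f"ETC (on plan):       ${etc_on_plan:,.2f}")
print(f"ETC (CPI continues): ${etc_at_cpi:,.2f}")
print(f"EAC:                 ${eac:,.2f}")
print(f"VAC:                 ${vac:,.2f}")
```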
Must-Have Documents in EVPM
As described by Kerzner (2018), Kendrick (2012), and the Project Management Institute (Institute, 2019), there are four documents that every project must have, at a minimum, in order to employ EVPM:
The Scope
The most important document for assuring success in a project is the project scope document. EVPM cannot be effectively employed unless the Project Manager has accurately captured the project scope; in Agile, the Scrum Master must define “done.” It is impossible to measure progress against an ill-defined definition of done, and the reason for EVPM is to be able to measure the work of the project as it progresses.
Defining the scope, per PMI, is the process of developing a detailed description of the project and product. The key benefit of this process is that it describes the product, service, or result boundaries by defining which of the requirements collected will be included in and excluded from the project scope (Institute, 2019).
The WBS (Work Breakdown Structure)
The WBS, as defined in PMI’s PMBOK (Institute, 2019), is a decomposition of the work deliverables into manageable work packages; it organizes and defines the total scope of the project. The WBS shows, in hierarchical form, each task required to complete the project. Several tasks can become work packages of various durations, usually one day to one week. These packages are then sequenced by predecessor relationships to determine the project schedule. The WBS will also include the resources needed to do the work of a package.
Project Schedule
The project schedule places the defined scope, and the tasks needed to accomplish its goals, into a fixed timeframe, allowing progress to be measured throughout the life of the project. Kerzner (2018) and Kendrick (2012) suggest that these rules are not unique to earned value project management; they are fundamental to all proper project management. They go on to advocate that the project schedule is likely the best tool available for managing the day-to-day communications on any project. Moreover, Campbell (2012) and Gido and Clements (2017) agree that one of the best ways to control a project plan is to monitor performance regularly with the use of a formal scheduling routine. Their recommendation is to schedule the authorized work in a manner that describes the sequence of work and identifies the significant task interdependencies required to meet the requirements of the program. This includes identifying the physical products, milestones, technical performance goals, or other indicators used to measure progress. It also means identifying, at least monthly, the significant differences between planned and actual schedule performance and between planned and actual cost performance, and providing the reasons for the variances in the detail needed by program management.
The Budget
Ultimately, the project must have a budget. Fleming and Koppelman (2010) note that without knowing the costs of the different tasks that make up the project, the Project Manager has nothing against which to measure. These steps are particularly critical to EVPM. Once the baseline is established, actual performance against it will need to be measured regularly for the duration of the project. Periodically, the Project Manager will want to measure how well the project is performing against the baseline. Project performance can be precisely measured employing EVM, generally expressed as a cost or schedule performance variance from the baseline. Such variances give an early warning of impending problems and are used to determine whether corrective action is required for the project to stay within its defined parameters (Institute, 2019; Kendrick, 2012; Kerzner, 2018).
Baseline
Baselining, according to Kerzner (2018), is the process of establishing a Performance Measurement Baseline (PMB); a baseline against which to measure performance is an essential requirement of EVPM. The PMB is the reference point against which a project measures its actual accomplished work, telling whether the project team is keeping up with the planned schedule and how much work has been accomplished relative to the monies spent.
Collaborative Software
Technology advancements have led to the growth of collaborative software such as Facebook, Twitter, and Instagram. Davenport and Kirby (2018) point out that collaborative communications platforms such as company intranets, instant messaging, and email are now commonly found in almost all companies. Microsoft Project (“Microsoft Project Software,” n.d.) and many other project management tools include dashboard reporting systems. Bayern (2019) observed that project management methodologies have been moving away from a centralized-control school of thought to a socialized-control school of thought. Kerzner (2018) pointed out that these changes in management methods are mostly due to the globalization of business. Lee (2018) noted that more resources are located internationally, in nations including India, China, Singapore, Mexico, Costa Rica, and others. This global community will require more elaborate communications and reporting tools in order to manage global resources and ensure projects are on time and on budget. New tools are required to ensure factually based decisions: evidence, not guesses or opinions. Lee (2018) points out that managing these resources is an area in which AI can play a significant role.
PMI Process (Data-Intensive)
Project management has developed a process that incorporates all the tasks needed to reach a goal, from initiation to conclusion. Two of the most common methodologies are the Project Management Institute’s Project Management Body of Knowledge (PMBOK) (Institute, 2019) and the Agile methodology developed by Jim Highsmith (2010), an iterative approach to software project management (Highsmith, 2010). However, the project completion success rate remains low, and the need to determine a project’s ability to be completed successfully has increased.
Kendrick (2012) and Kerzner (2018) point out that the amount of data gathered in a project can be extensive; projects can be very data-intensive. Furthermore, that data can come in many forms, according to Boudreau (2019). From the start of a project, the documentation includes developing the project management plan, including the scope, schedule, cost, configuration, and change management plans. Moreover, Kerzner (2018) and Kendrick (2012) note, you can add the requirements management plan, the scope baseline, the work breakdown structure (WBS), the schedule baseline (schedule), and the cost performance baseline (budget) to the list of documentation. Next are the quality management plan, the process improvement plan, and the human resources plan. Add the communications plan, the risk management plan, and finally the procurement plan, giving all fifteen of the project planning components (Institute, 2019). Dam, Tran, Grundy, Ghose, and Kamei (2019) and Highsmith (2010) describe attempts to decrease the amount of paperwork required through the Agile project methodology, advocating doing only the documentation necessary and no more.
There are multiple formats in which to store the information gathered by all the processes described above. The WBS, for example, could be done within MS Project (“Microsoft Project Software,” n.d.); it could also be done in MS Excel (“Microsoft 365 Business,” n.d.) or on a paper napkin, as can the schedule or the budget of the project. The scope of the project is often an MS Word (“Microsoft 365 Business,” n.d.) document. Minutes for meetings can be tape-recorded or written in a text file. While tasks are part of the WBS, resources are informed of upcoming tasks by the PM via email or even verbally. Resources provide weekly status reports on completed tasks verbally in a meeting, in a written report delivered to the PM by email, or through an integrated project software system; MS Project Server (“Microsoft Project Software,” n.d.) provides a way to report time per task automatically. Keeping project information documented in a useful manner can be a time-intensive endeavor (Kerzner, 2018; Kendrick, 2012; Gido & Clements, 2012; Boudreau, 2019).
Boudreau (2019) shows that Artificial Intelligence (AI) is entering the world of project management. Project Managers are known to be quick on their feet, having to make decisions at a moment's notice, sometimes based more on intuition than on facts. That reliance on intuition is due to the time it would take, as explained above, to gather the information needed to base a decision on facts. While these facts are available, it takes time and effort to gather them in a form useful to the PM. Boudreau argues that AI can be an essential assistant to the PM in accomplishing the goals of the project.
Unfortunately, much of this data is lost because it is not easily accessible, stored across a myriad of mediums, no two the same. AI depends on data, lots of data. Moreover, this data must be clean (Hosley, 1987). Among the first tasks in adopting AI systems are standardizing and cleaning up past data, if it even exists. There is little to no standardization maintained between project managers, let alone across organizations. Furthermore, AI thrives on continuity in data. Boudreau (2019) reminds us that project management, by its very makeup, is more of a moving target, making it challenging to apply AI.
Intelligent Agents
Boudreau (2019) showed that many AI tools are used to help manage projects, including project success predictor tools, stakeholder management, virtual assistants, change control, risk management, and Natural Language Processing (NLP) for analyzing resource needs and assignments. While there are tools that help with the WBS and scheduling verification, Jordan (2018) points out that these are not known to have the ability to learn from the data and so are not true AI tools.
AI can help integrate the administration of projects without needing much input, according to Lahmann, Keiser, and Stierli (2018) and Ko and Cheng (2007). AI devices perceive the environment and take action to increase the likelihood of a successful outcome. In project management, AI would be able to manage multiple projects with few resources and little input, with many tasks done automatically. AI can help make decisions automatically and can identify the right personnel for a task by matching the skills and experience needed to accomplish it. AI can aid Project Managers in making informed decisions (Munir, 2019).
Boudreau (2019) suggests that project predictor tools can help to determine whether a project has a high chance of success before it starts. Savings in resources and energy could be enormous by analyzing projects for success before execution. However, the tool must have a high rate of reliability (Boudreau, 2019), something needing further analysis and research. Wauters and Vanhoucke (2015) have shown some of these AI algorithms to be highly accurate in their predictions, primarily when benchmarked against EVM/ES methods where the datasets are similar. The issue they confronted is that increasing the discrepancies between datasets exposes the limits of AI prediction.
Stakeholder management involves using NLP and sentiment analysis, assisting PMs in communication and managing people. The focus is on assisting in managing project resources and stakeholders. One issue pointed out is that while AI may be able to offer commonly known suggestions for handling an upset resource, it takes human intervention to resolve the issues of concern to that resource (Boudreau, 2019). Some of these NLP tools can distinguish the mix of personalities on a team by analyzing the numerous documents and messages created during a project's lifetime. NLP can decipher utterances using the language subset and nuances special to project management. Tests have shown the ability to reveal, from utterances in emails, status reports, and meeting recordings, whether a resource or a stakeholder believes the project to be on course or in jeopardy of falling behind (Munir, 2019).
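To make the idea concrete, the following is a minimal sketch of flagging at-risk language in project status text. The term lists, function name, and scoring are illustrative assumptions for this thesis, not a published model; the tools Munir (2019) describes would instead use NLP classifiers trained on labeled project communications.

```python
# Minimal sketch: flag at-risk language in project status text.
# The keyword lists and scoring rule are illustrative assumptions only.
RISK_TERMS = {"blocked", "slipping", "delay", "over budget", "behind"}
HEALTH_TERMS = {"on track", "complete", "ahead", "under budget"}

def risk_score(status_text: str) -> float:
    """Return a crude risk score in [-1, 1]; positive means at risk."""
    text = status_text.lower()
    risk_hits = sum(term in text for term in RISK_TERMS)
    health_hits = sum(term in text for term in HEALTH_TERMS)
    total = risk_hits + health_hits
    return 0.0 if total == 0 else (risk_hits - health_hits) / total

print(risk_score("Integration testing is blocked and the schedule is slipping."))  # 1.0
```

Even this toy scorer illustrates the principle: the signal lives in the project's unstructured communications, and a trained model simply extracts it more reliably.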
When a request is made to change the scope of a project, analyzing the impact on the scope, the schedule, and the budget can be an enormous task. The PM must determine whether the change fits into the existing scope or alters it altogether. Will the requested change impact the project schedule, and if so, by how much? Will extra resources be required? What will the cost be? Will the requested change impact other projects currently in the queue? AI tools used to manage change requests could collect all the necessary data, perform the analysis, and produce a more accurate assessment of the overall impact on the project and the company program (Kerzner, 2018).
Auth, Jokisch, and Dürk (2019) described Automated Project Management (APM) as comprising all PM tasks and activities able to be automated. Automated Project Management Systems (APMS) focus on software applications that support scheduling, budgeting, and resources. APMS are not expert systems; the use of AI was not the original intention of APMS. APM is now tied more closely to AI, including data-driven project management, predictive project analytics, and project management bots (Davenport, 2018; Jordan, 2018).
AI concentrates on the development of intelligent agents, according to Russell and Norvig (2016). Intelligent agents can perceive their environment and take actions derived from that environment. These systems can act autonomously, persist for extended periods, adapt to changes, and track objectives (Russell & Norvig, 2016). These agents can strive for the best results, or the most valued results, under uncertainty (Auth et al., 2019). AI utilizes mathematical and scientific models and methods, including statistics/stochastics, computer science, psychology, cognition, and neuroscience (Auth et al., 2019).
Project duration has concerned many in project management. Wauters and Vanhoucke (2016) conducted several studies that concentrate on predicting final duration with any degree of accuracy. Fleming and Koppelman (2012) have noted that manually managing a project lengthens the entire duration, especially with EVM, due to the number of calculations involved. Determining the current state of the project involves Earned Value Analysis (EVA). Subramanian and Ramachandran (2010) described the four aspects of EVA as Cost Variance (CV), Schedule Variance (SV), Cost Performance Index (CPI), and Schedule Performance Index (SPI). CV allows the Project Manager to determine whether the project is running over budget; SV shows schedule status; CPI measures the efficiency of the amount spent against the value recovered; and SPI indicates the rate of progress for the project. EVA provides a method for assessing project performance by examining the project's scope and schedule together with its cost performance. Project management and the game Go have similarities: both demand creativity, intuition, and strategic thinking. AI was able to defeat the world's leading (human) Go player. One need only imagine the possibilities for AI in managing projects.
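The four EVA measures defined above follow directly from the standard earned value quantities EV, PV, and AC. The following minimal sketch computes them; the function name and example figures are illustrative only.

```python
def evm_metrics(ev: float, pv: float, ac: float) -> dict:
    """Standard earned value measures from EV, PV, and AC."""
    return {
        "CV": ev - ac,    # cost variance: negative means over budget
        "SV": ev - pv,    # schedule variance: negative means behind schedule
        "CPI": ev / ac,   # cost performance index: < 1 means cost inefficiency
        "SPI": ev / pv,   # schedule performance index: < 1 means slow progress
    }

# Example: $45k of work earned against a $50k plan at an actual cost of $60k.
print(evm_metrics(ev=45_000, pv=50_000, ac=60_000))
# {'CV': -15000, 'SV': -5000, 'CPI': 0.75, 'SPI': 0.9}
```

The arithmetic is trivial in isolation; the burden Fleming and Koppelman (2012) describe comes from gathering these inputs repeatedly across every task and reporting period, which is precisely where automation helps.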
Artificial Intelligence
Lee (2019) describes AI as dedicated to solving problems and finding answers using machines and logic on tasks that, in the past, generally required humans. He notes that AI has proven to be very good at pattern recognition: identifying facial patterns, recognizing the buying habits of consumers, and analyzing large amounts of data to pull out hidden patterns. Zujus (2018) notes that most AI applications follow what is known as narrow AI, designed to perform one cognitive task and perform it well, not designed to do any real thinking. It can learn based on the parameters defined and the data fed to it, and not beyond them (Zujus, 2018).
The need to define an appropriate solution that fits the problem is of the essence in building AI solutions that work. Project management is, unfortunately, not a natural problem-solution fit for AI, according to Munir (2019). Solutions that help guide organizations have been developed through methods for creating use cases. These use cases help companies consider technology factors, the organization's data, and the application domain and environment; they also require identifying domain issues and possible AI solutions. Hofmann, Johnk, Protschky, and Urbach (2020) developed a five-step method for building use cases that has helped connect AI solutions to organizational issues. Following design science research paradigms with situational method engineering, their five-step use-case development tool addressed the unintuitive nature of projects and helped provide AI solutions that fit a company's needs.
Wauters and Vanhoucke (2015, 2016, 2017) conducted several research projects predicting project duration using AI. They dealt with questions concerning AI's ability to predict final duration with a degree of accuracy (Wauters & Vanhoucke, 2016). One of the studies showed that using Monte Carlo simulations with principal component analysis and cross-validation, they could predict project duration with a high degree of accuracy. Principal component analysis is a statistical procedure that applies an orthogonal transformation to convert sets of possibly correlated variable observations into uncorrelated linear variables called principal components. Cross-validation is used in machine learning as a resampling procedure when data is limited. Wauters and Vanhoucke (2016) were able to show, using large topologically diverse datasets benchmarked against Earned Value Management/Earned Schedule (EVM/ES) methods, that the AI methods outperformed the EVM/ES methods so long as the datasets were similar. The AI methods were able to predict the duration outcome of the project with high accuracy, even in the early and middle stages of the project. By gradually increasing the discrepancies between the datasets, they were able to show the limitations of the AI methods.
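To illustrate the kind of pipeline described, the following is a minimal sketch, not the authors' code, combining principal component analysis with 5-fold cross-validation in scikit-learn; the synthetic data and the linear model are stand-in assumptions for the simulated project sets used in the studies.

```python
# Sketch: reduce tracking variables with PCA, then cross-validate a
# regressor that predicts final project duration. Data is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))   # tracking variables per simulated project
y = 100 + 5 * X[:, :3].sum(axis=1) + rng.normal(scale=1.0, size=200)  # duration proxy

# Reduce the inputs to three principal components, then fit a regressor;
# 5-fold cross-validation estimates out-of-sample forecasting error.
model = make_pipeline(PCA(n_components=3), LinearRegression())
scores = cross_val_score(model, X, y, cv=5,
                         scoring="neg_mean_absolute_percentage_error")
print(f"cross-validated MAPE: {-scores.mean():.4f}")
```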
Wauters and Vanhoucke (2014) explored using Support Vector Machine (SVM) regression against EVM/ES methods. Support Vector Machines are methods that stem from Artificial Intelligence and attempt to learn the relation between data inputs and one or multiple output values. However, the application of these methods requires more exploration in a project control context. Wauters and Vanhoucke (2014), in their research, used a forecasting analysis that compares the proposed Support Vector Regression model with the best-performing Earned Value and Earned Schedule methods described by Lipke (2009). They then tuned the parameters of the SVM using a cross-validation and grid search procedure, after which they conducted a sizeable computational experiment. Their results showed that the SVM regression outperforms the currently available forecasting methods. Additionally, a robustness experiment investigated the performance of the proposed method when the discrepancy between training and test set becomes larger.
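A hedged sketch of that tuning step, again on synthetic data rather than the authors' project sets, shows how a grid search under cross-validation selects SVR hyperparameters before forecasting:

```python
# Sketch: tune SVR hyperparameters with grid search + 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(150, 6))   # per-period tracking attributes (illustrative)
y = 100 + 20 * X[:, 0] - 10 * X[:, 1] + rng.normal(scale=1.0, size=150)  # real durations

grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0], "epsilon": [0.1, 1.0]},
    cv=5,
)
grid.fit(X, y)
print("best parameters:", grid.best_params_)
print(f"best CV score (R^2): {grid.best_score_:.3f}")
```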
Bhavsar, Shah, and Gopalan (2019) analyzed automating Business Process Re-Engineering (BPR) with Software Engineering Management (SEM) and Software Project Management (SPM). They determined that AI will be the best approach for automating SEM processes in software development organizations, as SPM is a scientific art of planning, controlling execution, and monitoring. SPM approaches focus on the essential requirements for the success of software project development (Bhavsar et al., 2019). BPR projects are undertaken by outward-looking organizations searching for necessary improvements in organizational performance and expecting radical change; fundamentally, such organizations are trendsetters in their domains and market segments. BPR projects are generally large, take a long time, and require significant capital inflow. BPR focuses on redesigning organizational workflows and business processes, helping organizations restructure their processes through bottom-up design. According to Joshi and Dangwal (2012), BPR is one of the most ubiquitous development strategies used across the world.
Bhavsar et al. (2019) concluded that using BPR with change management is essential in software engineering management. They indicated that human managerial parameters proficiently influence the execution and implementation of BPR and the acceptance of software improvement methodologies. Their evaluation indicated that the significant rise of AI has enabled a potential transformation of BPR for software development organizations. AI, they theorized, will be a potential game-changer for software project management and the development life cycle. AI can help project managers focus on establishing organizational goals through cost optimization and improving product quality. Bhavsar et al. (2019) felt that human intuition, feelings, ideas, emotions, and passion cannot be replicated or replaced by AI. AI cannot be an alternative to a project manager but could be a helpful assistant, augmenting the effort of the software project development and management team and improving the likelihood of project success by eliminating repetitive operations from the project.
Bhavsar et al. (2019) recognized that at this stage, a conceptual prototyping model requires a robust protocol design that enables integration of SEM with AI. They recognized that software industries have widely adopted Agile methodologies in their software project and application development processes. However, some limitations require integration with other Agile-based frameworks or traditional waterfall methods, which can bring Agile business process reengineering into the structure of the software development organization. They concluded that BPR has been enabling organizational capabilities toward implementing new initiatives with fewer complexities; it simply requires a Process Life Cycle Method (PLCF) suitable for the organizational structure.

Dam et al. (2019) pointed out that the rise of Artificial Intelligence (AI) has the potential to transform the practice of project management significantly. Project management, they indicated, has a sizeable socio-technical element with many uncertainties arising from variability in human aspects: customers' needs, developers' performance, and team dynamics, for example. AI can assist project managers and team members by automating repetitive, high-volume tasks to enable project analytics for estimation and risk prediction, providing actionable recommendations, and even making decisions. AI is potentially a game-changer for project management in helping to accelerate productivity and increase project success rates. Dam et al. proposed a framework that leverages AI technologies to support managing Agile projects, which have become increasingly popular in the industry (Dam et al., 2019). Agile, they felt, would be a good fit for AI automation because of the structured methodology used in managing projects. They noted that Agile centers around a product backlog: a list of items, customer requirements, and requests. User stories describe what the customer wants to do in the software. Agile execution divides into sprints involving sprint planning, and each sprint uses a burndown or burnup chart for tracking progress, making all the documentation above ripe for AI automation.
Machine Learning
Machine learning (ML) is a subset of AI that uses statistical techniques to give computers the ability to learn from data without being explicitly programmed. Zujus (2018) points out that AI and ML are terms that have been used interchangeably by many companies in recent years due to the success of some ML methods in the field of AI. ML denotes a program's ability to learn, while AI encompasses learning along with other functions.
Theobald (2018) discussed how machine learning depends heavily on its inputs, observing how machines can perform a set task using input data rather than relying on a direct input command. Boudreau (2019) observed that ML ultimately uses data for two things: prediction or classification. Theobald (2018) notes that the commonly used algorithms in ML are calculus-based mathematical formulas designed to find the least error between correlations in the data. One conventional approach is minimizing the cost function, which measures the performance of an ML model for given data; it quantifies the error between predicted and expected values, presenting it as a real number. Boudreau (2019) observed that ML is good at running multiple scenarios and selecting the best one, or the one with the highest probability of success; it is good at making a prediction. Monte Carlo simulation, by contrast, models the probability of different outcomes, saying: here are the likeliest outcomes. The advantage, according to Boudreau (2019), is that Monte Carlo gives a range of possibilities; the disadvantage is that it is not good at making a prediction, whereas ML is. Monte Carlo says: here are the best options that fit the question asked. Boudreau (2019) and Theobald (2018) point out that ML needs a large amount of data to make a valid prediction. Search engines commonly use ML due to its predictive abilities. ML works well with supervised, unsupervised, and reinforcement learning datasets. ML is also well suited for use in Agile projects because the number of iterations allows the opportunity for continuous improvement (Boudreau, 2019).
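A minimal worked illustration of "minimizing the cost function" follows: batch gradient descent on mean squared error for a one-feature linear model. The data and learning rate are arbitrary assumptions chosen only for demonstration.

```python
# Sketch: gradient descent minimizing the MSE cost function for y = w*x + b.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 4.0 + rng.normal(scale=0.5, size=100)

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    pred = w * x + b
    error = pred - y
    # Gradients of MSE = mean(error^2) with respect to w and b.
    w -= lr * 2 * np.mean(error * x)
    b -= lr * 2 * np.mean(error)

print(f"learned w={w:.2f}, b={b:.2f}")   # approaches w=3, b=4
```

Each pass reduces the cost, which is exactly the "least error" search Theobald (2018) describes, only here made explicit on two parameters instead of millions.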
Project management is at a point where it needs a way, a process, a method that will help complete more projects successfully. As noted earlier by Florentine (2017), over 50% of projects attempted fail, undoubtedly a number far too high. Furthermore, 85% of businesses say that AI will significantly change the way they do business in the next five years (Project Management Institute, n.d.). Studies have shown that Project Managers can spend over 54% of their time on administrative project tasks, tasks that could be handled by AI (Kashyap, 2019). These tasks include developing the scope of the project, creating requirements, developing the WBS, project scheduling, budgeting, and using EVM extensively to determine whether the project is progressing as expected. Applying AI tools may be the answer. The AI tools used today across many technologies and business processes could either assist the project manager or run the project with limited human input. This research aims to determine current usage and what may be possible.
Methodology
This paper will use past and current research in the existing literature, first to analyze the history of AI usage in project management and the effectiveness of these efforts, and then to examine the history of using EVM formulas in project management and the effectiveness of those efforts. The main objective is to show the usage of each of these tools in project management. By first establishing the effectiveness of these tools individually, the rationale for integrating them into practical tools that can assist project managers becomes apparent. Examining past studies can reveal what questions were asked and answered, may show the effectiveness of applying AI, or even EVM, in increasing project success, and can show whether integrating AI and EVM successfully increased the completion rate of projects.
Step one of this paper will be an extensive analysis of the existing literature to determine the use of AI in project management. While EVM has been utilized very effectively for over 50 years in project management, AI is a relatively new phenomenon.
In that analysis, this paper will examine the various AI algorithms available, their usage in assisting project managers, and their overall effectiveness. This paper pays particular attention to project data organization and the dependency algorithms have on this data to work effectively. Data is crucial: AI requires lots of it, and it must be clean, well-organized, structured data. This paper will examine the usage of both structured and unstructured data in the algorithms used in project management. Structured data is data organized into fixed fields within rows and columns, usually in a table; it is searchable because it is categorized and labeled (Boudreau, 2019). Structured data is easily searchable by AI algorithms.
Nevertheless, a contention in project management is that much of the data produced by projects is unstructured data: data that does not have a pre-defined model. Examples of unstructured data include audio (processed through Natural Language Processing, NLP), images, and text files such as weekly status reports and PowerPoint presentations. Projects have a great mix of both structured and unstructured data. However, there are algorithms available that can analyze both efficiently. AI usage of unstructured data can be essential to decision making in a project.
Step two of the research will examine using EVM in project management. EVM has a history of over 50 years of usage in managing projects. The earned value represents the actual value of work accomplished at a given point in time. Earned Value (EV) represents the budgeted cost of work performed (BCWP), and comparing it to the Budgeted Cost of Work Scheduled (BCWS) gives the Project Manager insight into the health of the project. Adding the Actual Cost of Work Performed (ACWP) and determining the Cost Variance (CV) between EV and AC (Actual Cost) tells the Project Manager whether the project is under, over, or right on budget compared to the expectation of the project plan. Using AI algorithms should lessen the time it takes to compute these formulas manually. Integrating AI and EVM into existing algorithms, and assessing the effectiveness of those integrations, would require access to the data for specific earned value project metrics, such as AC, PV, and EV, the standard KPIs of EVM. Project performance metrics would need to be included, such as CV, Cost Performance Index (CPI), and Schedule Variance (SV). Project prediction formulas include the Estimate at Completion (EAC) cost (AC plus the Planned Cost of Work Remaining, PCWR). AI can perform these calculations quickly. However, have they been used effectively to help bring projects to a successful conclusion?
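The textbook EAC variants implied above can be sketched briefly; the "AC plus PCWR" formulation corresponds to AC + (BAC − EV) when the remaining work is assumed to proceed at budgeted rates, while BAC / CPI assumes current cost efficiency persists. The function name and figures are illustrative.

```python
# Sketch of standard EVM Estimate-at-Completion forecasts.
def eac_forecasts(bac: float, ev: float, ac: float) -> dict:
    """Common EAC variants from BAC, EV, and AC."""
    cpi = ev / ac
    return {
        # Remaining work at budgeted rates: AC plus planned cost of work remaining.
        "EAC_budget_rate": ac + (bac - ev),
        # Remaining work at the current cost efficiency.
        "EAC_cpi": bac / cpi,
    }

print(eac_forecasts(bac=1_000_000, ev=400_000, ac=500_000))
# {'EAC_budget_rate': 1100000, 'EAC_cpi': 1250000.0}
```

The gap between the two forecasts (here $150k) is itself a useful signal: it measures how much the current cost overrun would compound if efficiency does not improve.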
By examining the results of past studies, this paper will determine the effectiveness of those results. Would a different way of applying the results allow for an increase in successfully concluded projects?
Limitations of the Study
A limitation of this study is its reliance on previous studies rather than real-world studies with practitioners. Part of the reason for that limitation is that usage of AI with EVM in managing projects is just beginning, and as such, adoption is low. A further limitation is the time available to complete this Master's thesis, so the number of algorithms analyzed is purposely limited to those explicitly utilizing EVM.
The first class of algorithms examined is Artificial Neural Networks (ANNs), computing systems that try to mimic biological neural networks, most often modeled on animal brain function. These systems learn to perform tasks by example, without being pre-programmed with task rules. An example of an ANN application is image recognition: through supervised learning on training databases, ANNs learn to identify labeled images, so that when presented with an image of a cat, the system can correctly identify it.
ANNs are mathematical models based on a collection of connected units known as artificial neurons arranged in three different layers, the input, hidden, and output layers, each composed of numerous neurons. These connections mimic the synapses in a biological brain and can transmit signals to each neuron, with each neuron processing the information received and passing it along to other neurons. These signals are real numbers, and each neuron processes the sum of its inputs using a non-linear function. Weights are adjusted as the neurons learn, decreasing or increasing the signal strength. The signal travels through each of the layers, which perform different processes on the input, from the input layer through to the output layer. Signal control is obtained by setting thresholds that cause the signal to be transmitted when the threshold is reached or crossed (Iranmanesh and Zarezadeh, 2008). ANNs can have more than the three layers previously mentioned. Iranmanesh and Zarezadeh (2008) created ANNs to forecast the actual cost (AC) to improve EVM, with five inputs and five outputs and one hidden layer. Their study compared real and forecasted data, showing better performance based on the MAPE criterion. Mean Absolute Percentage Error (MAPE) is a statistical measure of prediction accuracy for forecasting methods, mainly used in trend estimation and as a loss function for regression problems in machine learning. Iranmanesh and Zarezadeh (2008) expressed it as a ratio using the following formula:

$$\mathrm{MAPE} = \frac{100\%}{n}\sum_{t=1}^{n}\left|\frac{A_t - F_t}{A_t}\right|$$

where $A_t$ is the actual value and $F_t$ the forecasted value in period $t$.
Iranmanesh and Zarezadeh's (2008) study utilized 100 randomly simulated projects, each with 92 tasks and various precedence networks. ProGen ("project-generator/project_generator," 2020) software created the simulated projects. They determined that a core piece of data needed would be the ACWP; along with the EAC, these data elements need to be estimated accurately. Because EAC formulas combine numerous data elements, including BCWS, BCWP, and ACWP, they can be shown as a time-cost S-curve, as displayed in figure 1 (Iranmanesh and Zarezadeh, 2008).
Iranmanesh and Zarezadeh (2008) used ANNs because of their ability to approximate numerous functions, including non-linear functions, and their ability to make piecewise approximations. Piecewise approximation allows for building non-linear models; piecewise-defined functions use multiple sub-functions applied to intervals of the primary function and are used extensively in image identification applications. Neural network forecasting involves training and learning. Learning is a supervised function in which historical data with proper inputs and desired outputs is given to the network. During the learning process, the network constructs input-output mappings, adjusting weights and biases during each pass while optimizing each time to minimize error. Repeating this learning process reduces the error until a satisfactory criterion is met. ANNs have the innate ability to learn and see the nuances present in EVM to predict the AC of a project accurately.
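In the spirit of the five-input, one-hidden-layer network just described, the following is a hedged sketch, not the authors' implementation, of training a small neural network to forecast a cost value and scoring it with MAPE; the synthetic data and solver choice are assumptions for demonstration.

```python
# Sketch: small neural network (one hidden layer, five neurons) forecasting
# a cost quantity, evaluated with MAPE. Data is synthetic and illustrative.
import numpy as np
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(500, 5))   # five EVM-style inputs per period
y = 100 + 50 * X[:, 0] + 30 * X[:, 1] ** 2 + rng.normal(scale=1.0, size=500)  # ACWP proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(5,), solver="lbfgs",
                   max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)
print(f"test MAPE: {mean_absolute_percentage_error(y_te, net.predict(X_te)):.4f}")
```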
Iranmanesh and Zarezadeh's (2008) study used five different neural network configurations with different numbers of neurons in the hidden layer, seeking the network architecture with the lowest MAPE on the test data. A neural network (NN) is a model whose layered structure resembles the networked structure of neurons in the brain, with layers of connected nodes. NNs can learn from data, so they can be trained to recognize patterns, classify data, and forecast future events. Table 1 below shows the MAPE results on the test data (Iranmanesh and Zarezadeh, 2008):
Upon comparing the errors, Iranmanesh and Zarezadeh (2008) chose the hidden layer with five neurons, since it had the least error, for training the NN. Further testing on two randomly selected projects produced the following forecasted ACWP and EAC results (Iranmanesh and Zarezadeh, 2008):
The continuous line in both Figs. 2 and 3 is the real ACWP value, and the dashed line is the forecasted value. As can be seen, the forecasting error is low. Table 2 below shows the absolute error for both projects (Iranmanesh and Zarezadeh, 2008):
Their results confirmed a strong relationship between forecasted and actual costs, as well as the viability of using ANNs to forecast projects.
Decision trees model decisions and their possible consequences, generally identifying paths to a goal, and are utilized in project management. They help project managers take into account all the relevant variables, including time, cost, and resource availability, to determine the best option for a decision (Wauters and Vanhoucke, 2016).
Bagging, also known as bootstrap aggregation, is used to reduce the variance of a decision tree algorithm. The idea is to create subsets of the training data chosen randomly with replacement, using each subset to train a decision tree. Bagging is used in machine learning as an ensemble meta-algorithm to increase stability and accuracy in statistical classification and regression. Bagging also helps prevent overfitting, a condition where a statistical model describes the random error in data instead of the relationship between the variables. Overfitting can be handled by removing layers or cutting the number of elements in the hidden layer, thereby reducing the network's capacity. Two other methods of controlling overfitting are regularization, such as adding a cost to the loss function for larger weights, and dropout layers, which randomly remove certain features by setting them to zero. While generally used with decision tree methods, bagging is a specialization of the model averaging approach (Wauters and Vanhoucke, 2016).
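A minimal sketch of the variance-reduction effect follows: many trees trained on bootstrap resamples, with predictions averaged, compared against a single tree. The dataset is synthetic and the estimator counts are arbitrary assumptions.

```python
# Sketch: bagging decision trees on bootstrap resamples to reduce variance.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(300, 4))
y = np.sin(4 * X[:, 0]) + X[:, 1] + rng.normal(scale=0.2, size=300)

tree = DecisionTreeRegressor(random_state=0)
bagged = BaggingRegressor(tree, n_estimators=100, random_state=0)

for name, model in [("single tree", tree), ("bagged trees", bagged)]:
    mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.3f}")
```

On noisy data of this kind, the averaged ensemble typically posts a lower cross-validated error than the single tree, which is the point of the technique.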
A random forest, or random decision forest, is a classification algorithm consisting of numerous decision trees. It uses bagging and feature randomness when building each tree to try to create an uncorrelated forest of trees whose prediction by committee is more accurate than that of any individual tree. Feature randomness means node splitting in a random forest is based on a random subset of features in each tree. The random forest looks for patterns in a seeming forest of randomness.
Wauters and Vanhoucke (2016) employed random forest, k-means, and SVM methodologies to test dynamic scheduling and project control. K-means clusters observations around their closest mean, partitioning the results into Voronoi cells. Supervised SVM learning models use learning algorithms that analyze data for classification and regression analysis. SVMs use classification algorithms for two-group classification problems and work well with small amounts of data. Once SVMs receive sets of labeled training data for multiple categories, they work well with labeled data that can be linearly separated. When the data cannot be linearly separated, kernel functions allow for linear separation; linear and nonlinear kernels help the SVM find decision boundaries without changing the data.
Wauters and Vanhoucke's (2016) studies involved generating data for different periods from which the algorithms could learn, then comparing the results of the tests with EVM metrics. They divided their methodology into four blocks: data generation, data pre-processing, grid search, and testing.
Wauters and Vanhoucke (2016) generated data in two separate phases. First, the baseline data involved creating the project network, along with costs and durations; early start calculations from the critical path method (CPM) were used to create the schedule. Each project network was created fictitiously by controlling the Serial/Parallel (SP) indicator.
Progress data showed variation in the activity durations using Monte Carlo simulations. Monte Carlo techniques emulate activities in projects; these activities are carried out hundreds of times in order to show and measure process variability. Events are determined using random numbers subjected to allocated probabilities, which are created through a probability distribution to control the degree and probability of variability in activity durations. Wauters and Vanhoucke (2016) expressed this variability using a generalized beta distribution on the interval $[a, b]$:

$$f(x) = \frac{\Gamma(\theta_1+\theta_2)}{\Gamma(\theta_1)\,\Gamma(\theta_2)} \cdot \frac{(x-a)^{\theta_1-1}(b-x)^{\theta_2-1}}{(b-a)^{\theta_1+\theta_2-1}}, \qquad a \le x \le b$$
where $a$ and $b$ are the lower and upper limits of the random variable, $\theta_1$ and $\theta_2$ are the two shape parameters, and $\Gamma$ is the gamma function, calculated using the following formula:

$$\Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t}\,dt$$
Together with the mean, the upper and lower random variable limits allow for a more extensive array of distribution shapes suited to project simulations desiring different outcomes. The Monte Carlo simulations allowed for the calculation of EVM measures, which give the project manager an idea of the health of the project at any given moment. The attributes used as inputs for the AI methods in Wauters and Vanhoucke's (2016) study are shown in Table 3 below:
The SPI and CPI use the EVM metrics PV, EV, and AC. Time forecasting uses the EAC with PV and either Earned Duration (ED) or ES as subdivisions. Actual Duration (AD) and Planned Duration (PD) are used in the EAC calculations. The BAC is based on the project baseline for total project cost if every task executes according to plan. The Estimate at Completion (EAC) takes sensitivity calculations into account.
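To make the simulation step concrete, the following is a minimal sketch, assuming a simple serial activity network and illustrative shape parameters, of drawing activity durations from a scaled beta distribution between the limits $a$ and $b$ and summarizing the simulated project duration:

```python
# Sketch: Monte Carlo simulation of activity durations from a scaled beta
# distribution. Limits, shapes, and the serial network are assumptions.
import numpy as np

rng = np.random.default_rng(5)
a, b = 8.0, 15.0               # lower and upper duration limits per activity
theta1, theta2 = 2.0, 4.0      # beta shape parameters (illustrative)

n_runs, n_activities = 10_000, 20
durations = a + (b - a) * rng.beta(theta1, theta2, size=(n_runs, n_activities))
project_duration = durations.sum(axis=1)   # serial network: durations add up

print(f"mean = {project_duration.mean():.1f}, "
      f"95th percentile = {np.percentile(project_duration, 95):.1f}")
```

Each simulated run yields period-by-period EVM measures in the actual studies; here only the final duration is summarized to keep the sketch short.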
The data set in Wauters and Vanhoucke's (2016) study consisted of a training set, a validation set, and a test set. The first phase divided the dataset into a percentage for training and the remainder for the test set. The training set was divided further into a smaller training set and a validation set; the validation set fine-tuned the algorithm, with the smaller training set used for learning. Boosting, which uses regression trees, is an optimization technique that minimizes loss by adding a new tree at each step; it is applied during each pass through the training data set until the results reach a satisfactory level.
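The three-way split just described can be sketched as follows; the percentages are illustrative assumptions, not the proportions used in the study:

```python
# Sketch: carve out a test set, then split the rest into training/validation.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.normal(size=(1_000, 8))
y = rng.normal(size=1_000)

# Hold out 20% as the final test set.
X_train_full, X_test, y_train_full, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
# Split the remainder: 75% for learning, 25% for tuning.
X_train, X_val, y_train, y_val = train_test_split(
    X_train_full, y_train_full, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))   # 600 200 200
```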
Finally, Wauters and Vanhoucke's (2016) study used the MAPE as a statistical measure of prediction accuracy for the forecasting methods. As previously stated, MAPE assists with trend estimation and serves as a loss function for regression problems in machine learning; Wauters and Vanhoucke (2016) expressed the MAPE as a ratio using the same formula given above.
The computational phase of Wauters and Vanhoucke's (2016) study comprised four sections: data generation, attributes, data pre-processing, and training, validation, and testing. Data generation consists of the baseline data and the progress data. In both studies, Wauters and Vanhoucke (2016) and Wauters and Vanhoucke (2017) compared the performance of the AI methods to EVM and Elshaer forecasting methods (Elshaer, 2013). The AI methods were implemented in R (R Core Team, 2013) using an R template, which allows for inputting parameters for the training and validation phases. Five-fold cross-validation was used to select optimal parameters, which were then applied to the test set. Table 4 below shows the best parameter settings for the AI techniques used in Wauters and Vanhoucke's (2016) study, and Table 5 shows the MAPE for four values of t: 50%, 90%, 95%, and 99%:
Table 5 shows that the best trade-off between forecasting accuracy and computational expense is at 0.9, with an average rate of 7.03%. Table 6 below shows the performance across early, on-time, and late scenarios:
The steepest difference in performance is for on-time scenarios, where forecasting for serial projects is about 70% more accurate than for parallel projects. The AI methods show a 50% increase, still less than that of the EVM and Elshaer methods. A given assumption is that as the project progresses and more is known, duration can be forecasted more accurately, as shown in figure 4 (Wauters and Vanhoucke, 2016) and in table 6 above.
As table 6 above shows, forecasting improves as the project progresses; the results show lower improvement in the on-time scenario, where methods with a factor 1 performance yield the best results, under the assumption that the project progresses as planned. Wauters and Vanhoucke (2016) showed from this research that while the AI methods improved as the project proceeded, the improvement was less steep than that of the EVM and Elshaer methods. The implication is that the AI methods are better on average, but that their advantage shrinks as more knowledge of project progress accumulates. They consider this finding significant, since EVM/ES forecasting methods do not fare well in the early to mid-stages of the project. Their results imply that the AI methods are superior to the EVM/ES methods in forecasting; the mean and standard deviations for the AI methods are considerably lower than those of the earned value/earned schedule methods, especially in the early to mid-stages of the project, when accurate forecasting is most needed.
Wauters and Vanhoucke's (2017) study concentrated on a k-Nearest Neighbor (k-NN) extension for forecasting with EVM. The k-NN method allowed for reducing the size of the training set and served as a method for predicting the real duration of a project. They found that the k-NN method increased forecasting stability. A forecasting method is considered stable if its estimates do not deviate between successive tests. Stability in EVM signals a successful project when the CPI is stable; significant variations in the EVM metrics are signs of a troubled project.
Wauters and Vanhoucke (2017) introduced k-NN as a predictor benchmarked against the EVM and AI methods, and used it to reduce the size of the training data set while obtaining similar results. The AI methods from Wauters and Vanhoucke (2016) used the smaller data sets, a process known as hybridizing. To identify the nearest neighbors of a given data point, k-NN uses historical data. While there are many k-NN variants, Wauters and Vanhoucke (2017) used a multidimensional binary search tree, also known as a k-d tree or k-dimensional tree. A k-d tree is a data structure used to organize points with k dimensions using a constrained binary search; it is beneficial for range and nearest neighbor searches. Applications include credit risk, marketing, media audience forecasting, and loan payment prediction. The goal of using k-NN here is to determine the final duration of new observations by predicting from the training instances closest to them. Wauters and Vanhoucke (2017) calculated the distance between a new observation and training instance $i$ as the square root of the squared differences over the attributes $j$:

$$d(x_i, x_{new}) = \sqrt{\sum_{j}\left(x_{ij} - x_{new,j}\right)^2}$$
Furthermore, once the k nearest instances are identified, the predicted output value $\hat{y}$ is calculated as their average:

$$\hat{y} = \frac{1}{k}\sum_{i \in N_k} y_i$$

where $N_k$ is the set of the k nearest training instances.
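A minimal sketch of this procedure, using SciPy's k-d tree on synthetic data rather than the authors' project sets, follows:

```python
# Sketch: k-NN duration prediction via a k-d tree. Data is synthetic.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
X_train = rng.uniform(0, 1, size=(500, 6))   # attribute vectors x_i
y_train = 100 + 40 * X_train[:, 0] + rng.normal(scale=2.0, size=500)  # real durations

tree = cKDTree(X_train)          # organizes the training points for fast search
x_new = rng.uniform(0, 1, size=6)

k = 5
dist, idx = tree.query(x_new, k=k)           # Euclidean nearest neighbors
y_pred = y_train[idx].mean()                 # average of the k neighbors' outputs
print(f"predicted duration: {y_pred:.1f}")
```

The k-d tree is what makes the neighbor search cheap; the prediction itself is just the neighbor average given by the formula above.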
In order to prevent overfitting and to tune the AI parameters, Wauters and Vanhoucke (2017) repeatedly subdivided the training dataset and a separate validation set into smaller sets using cross-validation, a technique Wauters and Vanhoucke (2016) had used successfully in a controlled environment. Once opportune parameters were found, the AI model was trained on the initial dataset and applied to the test set. Benchmarking against the EVM model results occurred once the training of the AI model was complete.
Decision trees with recursive partitioning repeatedly split the solution space into multiple regions, maximizing specific measurements, entropy for example. A downside of decision trees is their inherent instability, where a small change in the data leads to unexpected split points. The instability is resolved using bagging and random forest techniques: bagging selects splitting variables randomly from the predictors, and random forests restrict that selection to a smaller group of predictors. Boosting turns weak learners into stronger ones. SVMs with linear regression map predictors into a higher-dimensional space, using kernel functions to establish separability between training points.
Wauters and Vanhoucke (2017) used the MAPE and the mean lag to determine stability. The MAPE formula used is like the one Iranmanesh and Zarezadeh (2008) used in their study. The mean lag measures the lag structure in dynamic models and is used to estimate the average delay in linear regressive measurements; for a distributed-lag model with coefficients $\beta_j$, it is determined as

$$\bar{L} = \frac{\sum_{j=0}^{m} j\,\beta_j}{\sum_{j=0}^{m} \beta_j}$$
The results are shown in Table 7 below.
Wauters and Vanhoucke (2017) observed that the AI methods and the nearest neighbor methods do not vary much in forecasting accuracy and stability, while the EVM methods differ significantly, especially in the MAPE measurements, as shown in table 7. By applying the stability results from Wauters and Vanhoucke (2015) to the AI methods used in 2017, Wauters and Vanhoucke (2017) were able to show that the AI methods worked better than the EVM methods examined in Elshaer's (2013) study, especially in the early and late scenarios.
Wauters and Vanhoucke (2017) showed in this study and in previous studies (Wauters and Vanhoucke, 2014; Wauters and Vanhoucke, 2015; Wauters and Vanhoucke, 2016) that while the AI methods were effective in specific scenarios, the EVM methods were more effective in high-risk scenarios. AI performs better in the early and late stages of projects, and even better when the project is well along in execution and provides ample information.
Discussion and Conclusion
This study began by asking whether Artificial Intelligence, when integrated with EVM tools, can assist Project Managers in increasing project completion success rates above 95%. It appears that AI has a way to go in helping improve the successful completion rate of projects. However, it has made a good start, and with further research, we will see many assistive tools in the next 5-10 years that will make Project Managers' lives much more comfortable. Furthermore, keep in mind that computers are good at delivering the same jobs over and over as programmed, nothing more; they can learn, per their programming, but computers are not self-aware.
Many of the studies cited in the literature review concentrated on using AI to predict project success rather than on any practical use of AI tools in assisting Project Managers in managing projects. Thus, most studies have limited themselves to determining whether AI can be used to predict project success. All the studies cited in the literature review have shown limited success in using AI to predict project outcomes (Iranmanesh and Zarezadeh, 2008; Wauters and Vanhoucke, 2014; Wauters and Vanhoucke, 2015; Wauters and Vanhoucke, 2016; Wauters and Vanhoucke, 2017). Moreover, while the AI methods were effective in specific scenarios, the EVM methods were more effective in high-risk scenarios. AI performs better in the early and late stages of projects, and even better when the project is well along in execution and provides ample information.
Earned Value Project Management (EVPM), as described by Kerzner (2015), is a systematic process that uses earned value as the primary tool for integrating cost, schedule, technical performance management, and risk management. EVPM can determine the actual status of a project at any given point in the project, but only when following organizational rules, requiring a disciplined approach.
There are many reasons why every project should use EVPM. Fleming and Koppelman (2010) describe how EVPM provides a single management system that all projects should employ. The relationship of work scheduled to work completed, to managing the costs and the schedule, provides an actual gauge of whether one is meeting the goals of the project. The most critical association, that of work completed to money spent to accomplish the work, provides an accurate picture of the actual cost performance.
EVM integrates the project scope, schedule, and costs into an organized process used in project forecasting (Fleming & Koppelman, 2010). However, EVM also provides an accurate measurement of the project’s work as performed against its baseline. It provides the project manager with detailed information on the status of the project; is it behind schedule, ahead of schedule, on time, over or under budget. Information provided by EVM is practical information project managers can use to determine project direction.
AI-assisted project management is an enabled system that can handle day-to-day project management operations without subsequent human intervention; only the initial setup is required, after which the enabled system runs on its own. The power of AI will be able to automate many tasks in project management: determining project requirements, outlining project tasks, determining task precedence and scheduling, determining qualified resource availability, cost and budgeting, status reports, risk estimations, and possible recourse for resolving problems. AI provides early, accurate forecasting of project success (Wauters and Vanhoucke, 2017).
Selecting the most viable projects is a matter of determining project success and financial viability. Which project has a higher chance of success, which fits within the organization's overall plan, and which shows the highest return on investment (ROI) are questions Program Managers ask and answer when determining which projects get the green light. It is all about prioritization. An AI selection tool would need to include success-forecasting AI algorithms, such as those suggested by Wauters and Vanhoucke (2016). As noted above, studies have shown these tools to have limited success rates. Each of those studies used EVM methods only to compare EVM to AI methods, not to integrate the EVM methods directly into an AI algorithm.
Chatbots are programs that mimic human conversation; they would not pass the Turing test. The Turing test was developed by Alan Turing (Turing, 1950) as a definition for testing a machine's ability to think like a human, to exhibit human intelligence such that one could not tell the difference between human and machine. Chatbots are commonly used to answer questions from users, leading to providing services and answering a preset list of questions. Chatbots are typically used in dialog systems such as information acquisition and customer service applications. They use Natural Language Processing (NLP) algorithms to capture keywords in the input and pull together a response from a list of likely outputs. As stated earlier, NLP can decipher utterances using the language subset and nuances special to project management; tests have shown the ability to reveal, from utterances in emails, status reports, and meeting recordings, whether a resource or stakeholder believes the project to be on course or in jeopardy of falling behind (Munir, 2019). Chatbots, as virtual assistants, are used in conversational commerce, eCommerce, education, finance, health, and news. Their usage includes messaging applications and speech assistants to create automated communication and personalized customer experiences. Chatbots can be used to schedule meetings: simply ask the scheduling bot to arrange an hour-long meeting on a given topic, and it can access all the participants' calendars, including available conference rooms, quickly finding the optimal time to meet. If someone opts out of the meeting, the bot can quickly reschedule it for another time. Within these narrow domains, a chatbot, while not exhibiting total human intelligence, can be difficult to tell apart from a human.
A constant problem many Project Managers face daily is getting project team members to provide status information concerning their portion of the project. Chatbots can remind team members that a report is due, RFIs are due, estimates for work are due, or time spent on tasks needs to be entered. All this information is vital to running an active project. Chatbots can provide an interface that allows project members to submit the needed information quickly; the chatbot can then use this information to produce daily, weekly, and monthly status reports on the project, identify issues and roadblocks, and notify the Project Manager of specific issues as well as possible solutions.
Chatbots are particularly adept at forecasting risk in the project. By examining the project schedule, a chatbot could determine whether a task's status is on schedule or could cause a possible delay in the project timeline. A chatbot's ability to decipher incoming status reports would allow it to determine the percentage of work completed. Examining the value of completed work and the impact on the critical path, chatbots can determine whether the project timeline is being met or is behind schedule. The chatbot could make all necessary adjustments to the schedule to determine if an issue exists, adjust the scheduling of required resources, and determine the overall impact on the project. The chatbot could mine data to analyze team behavior to determine whether tasks are completed per the project schedule or whether there will be problems. Many industries today, including finance, banking, and counterterrorism, have found AI chatbots useful as predictive tools.
AI can be an add-on to many project functions; no project management software has full AI capabilities. As discussed above, many of the available AI tools expand capabilities without offering full AI (Theobald, 2018). Chatbots, or Virtual Assistants (VAs), for example, can execute tasks from a few spoken lines of instruction or request (Theobald, 2018). Chatbots are used extensively in customer service industries such as banking. A PM would simply request the project chatbot to deliver a report on the status of the project, and the VA would pull together all the information required from a variety of sources. The VA could be programmed to do the same work a PM does today to produce the status report: look at the project schedule to determine the work expected to be completed by the date the status report is due; determine the work, the resource doing it, the time required to complete it, and its status as of the last report; reach out to each identified resource via email or in person to request their status report; collect the reports submitted in person or via email; collate the information into a comprehensive status report; and send it to all the stakeholders listed in the communications plan. A VA could easily handle these tasks automatically within a matter of hours, all by the PM merely saying, "Hey Siri." Nevertheless, the capability of voicing a command to project software is still in the future (Boudreau, 2019).
As discussed earlier in the results, evaluating project risks brings the need to change project objectives during a project. It is these risk analysis tools that allow the PM to transform an impossible project into a successful one (Campbell, 2012). Using AI to evaluate project risks makes it increasingly easier to determine the impact on the project timeline (Boudreau, 2019). Shishodia, Dixit, and Verma (2018) found that schedule, resource, and scope risks are the most prominent risk categories in projects; AI could assist in determining the impact on each of these aspects of the project.
Similarly, AI could extract insights drawn from detailed cross-sector analysis, depicting different risk categories based on NTCP project characteristics (Shishodia et al., 2018). Managing risks requires accurate communications and being ready with a plan. The PM cannot be risk-averse and must accept that risk will happen; it is part of the job, and doing nothing is not an option (Gido and Clements, 2012). In identifying risks, analyzing their potential impact on the project and likelihood of occurrence, developing risk response plans, and monitoring those risks, AI could be invaluable (Kendrick, 2009).
Integration management and EVM work together to ensure the processes and activities that identify, define, combine, and synchronize the various processes and activities within the process groups stay on course (Institute, 2019). EVM tools allow Project Managers to measure project performance to confirm it is on course. Scope, the Work Breakdown Structure (WBS), the project schedule, and regular reporting are all tools used when managing a project; EVM is best when using all the tools available.
As valuable as EVM methods are for analyzing the current status of a project in terms of AC, PV, and risks, the test results presented in previous research show only limited use of AI methods in forecasting project schedule completion. While EVM is used extensively in forecasting project completion, much of the remaining EVM method set, EV, AC, ACWP, and PV, is not integrated with AI applications. Automatically determining why a project is behind, ahead, or even on schedule would be a great benefit to project management (Boudreau, 2019). Investigating how AI algorithms arrive at a project forecast, and how they flag that a project is behind schedule, needs further study; what is needed is how these algorithms could provide an answer that puts the project back on track per plan. Further study will also be needed to project what it would cost, or what resources or plan of action would be required, to put the project back on track.
Numerous AI tools help manage projects. These include project success predictor tools, stakeholder management tools such as NLP, and virtual assistants such as chatbots, which can assist in managing change control and risk management and in analyzing resource needs and assignments. While there are tools that help with WBS and scheduling verification, these tools are not currently known to have the ability to learn from the data and so are not true AI tools.
As pointed out by Lahmann et al. (2018), AI can help integrate the administration of projects without requiring input. AI's ability to perceive the environment and take action increases the likelihood of a successful outcome. Furthermore, AI could manage multiple projects. AI programming would allow for making decisions automatically. It can help identify the right personnel, skills, and experience needed to finish the defined task. AI can aid Project Managers in making informed decisions (Munir, 2019).
Project predictor tools could help determine whether a project has a high probability of success (Boudreau, 2019). Savings in resources and energy could be vast by evaluating projects for success before starting them. However, the tool requires a high reliability rating (Boudreau, 2019), thus needing further analysis and research. Wauters and Vanhoucke (2015) have shown the accuracy of these AI algorithms in their research, primarily when used against EVM/ES methods where the datasets are similar, especially in the early phases of the project or when the project nears completion. The problem they met is that increasing the differences between datasets exposes the limits of AI prediction.
The PM could use NLP and sentiment analysis to assist in stakeholder management. In communication and managing people, AI may be able to offer commonly known suggestions for handling an upset resource; however, it would take human intervention to resolve the issues of an upset team member (Boudreau, 2019).
Change requests to alter the scope of a project require impact analysis on the scope, the schedule, and the budget, which can be a massive task. Project Managers must determine whether the change fits the existing scope or alters it overall. Does the change require extra resources, or can the current team members manage? What will the cost impact be? Will the requested change impact other projects currently in the queue? AI could help manage change requests by collecting all the data, analyzing it, and producing a calculation of the overall impact on the project (Boudreau, 2019).
Auth, Jokisch, and Dürk (2019) described Automated Project Management (APM) as software applications supporting scheduling, budgeting, resources, reporting, risk analysis, and change control. APM tied more closely to AI, including data-driven project management, predictive project analytics, and project management bots, could help drive up success rates for projects; further research could be beneficial.
AI focuses on the development of intelligent agents that perceive their environment, allowing them to take actions based on that environment (Russell & Norvig, 2016). These systems can act freely, continue for prolonged periods, adjust to changes, and track objectives. They strive for the best results under uncertain conditions, much like a project’s environment (Auth et al., 2019). AI uses scientific and mathematical models and methods from statistics/stochastics, computer science, psychology, cognition, and neuroscience, producing results based on facts, not emotion.
AI is in the early stages of development and is moving quickly. AI very well may create an estimated 2.3 million jobs, and it could create over $2.9 trillion in business value (Kashyap, 2019). Current tools have made project management more robust, but they do not meet the Turing test. It still takes a human to decide.
The question this study aimed to answer, whether AI integrated with EVM would improve the project completion success rate to over 95%, remains to be determined. Previous studies have not fully integrated EVM with AI, concentrating only on forecasting successful project completion. Being able to determine with any certainty that a project is on time and within budget at a given point has not been achieved and needs further research.
This study has shown that there are possibilities for the practical application of AI integrated with EVM method metrics. The study shows that there has been valuable research in determining the successful outcome of projects, even early in the project process. However, more work is needed to develop practical applications that can assist Project Managers in completing projects successfully.
Further research into integrating EVM metrics with AI algorithms, especially for practical use in assisting project management, needs to continue. While analyzing project success is useful, increasing research in practical rather than purely theoretical application will likely produce an increase in project success rates.
References:
Archibald, R. D., & Villoria, R. L. (1966). Network-based management systems (PERT/CPM). New York: Wiley.
Auth, G., Jokisch, O., & Dürk, C. (2019). Revisiting automated project management in the digital age – a survey of AI approaches. Online Journal of Applied Knowledge Management, 7(1), 27-39. https://doi.org/10.36965/ojakm.2019.7(1)27-39
Bessen, J. (n.d.). How computer automation affects occupations: Technology, jobs, and skills. Retrieved from http://www.bu.edu/law/files/2015/11/NewTech-2.pdf
Bhavsar, K., Shah, V., & Gopalan, S. (2019). Business process reengineering: A scope of automation in software project management using artificial intelligence. International Journal of Engineering and Advanced Technology, 9(2), 3589-3594. https://doi.org/10.35940/ijeat.b2640.129219
Boudreau, P. (2019). Applying artificial intelligence to project management. Toronto, Canada: Independently published.
Campbell, P. M. (2012). Communications skills for project managers. New York, NY: AMACOM American Management Association.
Dam, H. K., Tran, T., Grundy, J., Ghose, A., & Kamei, Y. (2019). Towards effective AI-powered agile project management. 2019 IEEE/ACM 41st International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER). https://doi.org/10.1109/icse-nier.2019.00019
Davenport, T. H., & Kirby, J. (2016). Only humans need apply: Winners and losers in the age of smart machines. New York, NY: HarperCollins.
Davenport, T. H. (2018). The AI advantage: How to put the artificial intelligence revolution to work. MIT Press.
Elshaer, R. (2013). Impact of sensitivity information on the prediction of project’s duration using earned schedule method. International Journal of Project Management, 31(4), 579-588. https://doi.org/10.1016/j.ijproman.2012.10.006
Fleming, Q. W., & Koppelman, J. M. (2010). Earned value project management. Newtown Square, PA: Project Management Institute.
Florentine, S. (2017, February 27). IT project success rates finally improving. Retrieved January 18, 2020, from https://www.cio.com/article/3174516/it-project-success-rates-finally-improving.html
Gido, J., & Clements, J. P. (2017). Successful project management. Australia: South-Western Cengage Learning.
Highsmith, J. A. (2010). Agile project management: Creating innovative products. Addison-Wesley Professional.
IEEE Computer Society. (n.d.). IEEE Computer Society predicts the future of tech: Top 10 technology trends for 2019. Retrieved from https://www.computer.org/web/pressroom/ieee-cs-top-technology-trends-2019
Ihekweaba, O., Ihekweaba, C., & Inyiama, H. C. (2013). Intelligent agent-based framework for project integration management [Paper presentation]. Proceedings on the International
Iranmanesh, S. H., & Zarezadeh, M. (2008). Application of artificial neural network to forecast actual cost of a project to improve earned value management system. International Journal of Social, Behavioral, Educational, Economic, Business and Industrial Engineering, 2(6).
Joshi, C. S., & Dangwal, P. G. (2012). Management of business process reengineering projects: A case study. Journal of Project, Program & Portfolio Management, 3(1), 78. https://doi.org/10.5130/pppm.v3i1.2783
Kendrick, T. (2009). Identifying and managing project risk: Essential tools for failure-proofing your project. New York: AMACOM.
Kendrick, T. (2012). Results without authority: Controlling a project when the team doesn’t report to you (2nd ed.). New York, NY: AMACOM.
Kerzner, H. (2014). Project recovery: Case studies and techniques for overcoming project failure. Hoboken, NJ: John Wiley & Sons.
Kerzner, H. (2015). Project management 2.0: Leveraging tools, distributed collaboration, and metrics for project success. Hoboken, NJ: John Wiley & Sons.
Kerzner, H. (2017). Project management: A systems approach to planning, scheduling, and controlling (12th ed.) [Kindle edition].
Kerzner, H. (2018). Project management best practices: Achieving global excellence (4th ed.). John Wiley & Sons.
Ko, C., & Cheng, M. (2007). Dynamic prediction of project success using artificial intelligence. Journal of Construction Engineering and Management, 133(4), 316-324. https://doi.org/10.1061/(asce)0733-9364(2007)133:4(316)
Project Management Institute. (2019). A guide to the Project Management Body of Knowledge (PMBOK guide) (6th ed.) / Agile practice guide. Project Management Institute.
Robertson, S., & Robertson, J. (2013). Mastering the requirements process: Getting requirements right. Upper Saddle River, NJ: Addison-Wesley.
Russell, S., & Norvig, P. (2016). Artificial intelligence: A modern approach. CreateSpace Independent Publishing Platform.
Shishodia, A., Dixit, V., & Verma, P. (2018). Project risk analysis based on project characteristics. Benchmarking: An International Journal, 25(3), 893-918. https://doi.org/10.1108/bij-06-2017-0151
Subramanian, V., & Ramachandran, R. (2010). McGraw-Hill’s PMP certification mathematics: Project management professional exam preparation. New York, NY: McGraw-Hill.
Theobald, O. (2018). Machine learning for absolute beginners: A plain English introduction (2nd ed.). Independently published.
Tichy, N. M., & Cohen, E. B. (2009). The leadership engine: How winning companies build leaders at every level. New York, NY: Harper Business.
Verzuh, E. (2015). The fast forward MBA in project management (4th ed.). Hoboken, NJ: John Wiley & Sons.
Wauters, M., & Vanhoucke, M. (2014). Support vector machine regression for project control forecasting. Automation in Construction, 47, 92-106. https://doi.org/10.1016/j.autcon.2014.07.014
Wauters, M., & Vanhoucke, M. (2015). Study of the stability of earned value management forecasting. Journal of Construction Engineering and Management, 141(4), 04014086. https://doi.org/10.1061/(asce)co.1943-7862.0000947
Wauters, M., & Vanhoucke, M. (2016). A comparative study of artificial intelligence methods for project duration forecasting. Expert Systems with Applications, 46, 249-261. https://doi.org/10.1016/j.eswa.2015.10.008
Wauters, M., & Vanhoucke, M. (2017). A nearest neighbour extension to project duration forecasting with artificial intelligence. European Journal of Operational Research, 259(3), 1097-1111. https://doi.org/10.1016/j.ejor.2016.11.018
There are huge mounds of data being gathered today by a multitude of organizations around the world. Governments, private and public companies, and not-for-profit organizations are all gathering data. Over 2.5 quintillion bytes of data are generated and stored per day. The question arises as to what to do with all that data. Can it serve a useful purpose? Tools have been developed for analyzing reams of data at speeds and accuracy inconceivable ten or even twenty years ago. From this data, companies feel they can derive patterns that will help to increase sales. From this processing, people can determine the course of exercise and diet that best fits them. This paper aims to explore the various efforts being used to analyze big data and the rewards and failures that have resulted from this effort.
Introduction
There are huge mounds of data being gathered today by a multitude of organizations around the world. From governments to private and public companies, it is estimated that over 2.5 quintillion bytes of data are generated and stored per day (Laudon, 2016). The question arises as to what to do with all that data. Why is it being generated? Is there information within this huge mound of data that could be culled for some useful purpose? Many companies and organizations are working toward developing tools that will allow exploring this information at speeds and accuracy unimaginable ten or even twenty years ago. Much of this ability to accurately cull massive amounts of data has come about due to advancements in technology and data processing that allow for the analysis of data at greater speeds and accuracy. From this data, companies can derive patterns of customer purchasing. From this processing, people can determine the course of exercise and diet that best fits them. This paper will explore the various efforts being used to analyze big data and the rewards and failures that have resulted from this effort.
Types of Big Data Collected
There are many kinds of data gathered from a variety of sources. In many cases, companies are gathering data they didn’t realize would have some value, such as for addressing customer needs or increasing sales. Green Mountain Coffee had been gathering and storing voice and text data for years. This data went unused until Green Mountain invested in analyzing structured and unstructured audio and text data. Green Mountain uses this analysis to learn more about customer behavior, buying habits, and patterns. By learning more about what customers want and what issues they were having with its twenty different brands and over two hundred different drinks, Green Mountain could produce information that would lead to increased sales. Information responding to specific points of customer confusion or concern helped to produce answers posted on web pages and social media sites. Customer queries and the answers to those queries became responses used by customer service representatives when responding to similar queries from other customers. All of this analysis led to a better experience for Green Mountain’s customers. AutoZone used data showing the types of automobiles owned by people living near its stores. This data was used to create sales specials unique to each store. AutoZone would use this data to adjust inventory to fit the types of cars prevalent in the neighborhoods surrounding the store.
Technologies Used to Gather Big data
Green Mountain obtained the services of Calabrio Speech Analytics to analyze the mounds of data generated from its call centers. Calabrio provides sophisticated audio and text analytics that unlock the goldmine of information in a contact center, transforming every interaction into usable data (Calabrio Speech, 2018). AutoZone (AutoZone, 2018) used the NuoDB (NuoDB, 2018) database software system to derive the automobile types owned by potential customers surrounding its stores. Sears developed a big data system using Apache Hadoop to target groups within its sixty million credit card customers with special sales and promotions. Sears spent more on information technology than all other non-computing firms except Boeing Corporation. Using Apache Hadoop, Sears was able to analyze immense amounts of data weekly, a task that used to take six weeks with Teradata warehouse software and SAS servers. Sears’s old system could use only 10% of the data available; today it uses 100% of the data. In the past, it could retain this data only for short periods, usually less than ninety days; now it keeps all of the data. Today, Sears sells its knowledge of developing big data analysis tools using Apache Hadoop to other companies through a subsidiary company, Metascale.
Big Data and The Benefits Derived
Sears was at one time the leading retailer in the United States. Then Wal-Mart, Home Depot, Lowe’s, and Amazon came along (D’Onfro, 2015). Sears has been losing ground ever since and was looking for a way to stop the bleeding. Sears realized it had a huge customer base which contained unseen data. Sears determined that it could use this data to help stem the tide and turn around its fortunes. By investing heavily in information technology, Sears figured it could regain lost ground by increasing sales to this huge customer base. With sixty million potential customers, it all made sense. By investing in Apache Hadoop, it could better analyze the data it had and identify targeted groups to which to sell products. A deeper understanding of customer buying habits and patterns would increase sales. Sears has had incomplete success, since it has failed to address the fundamental issue of its cost structure. Sears’s cost structure is among the highest in the industry, and it has kept the company from translating its big data efforts into success. Green Mountain Coffee wanted to improve the customer experience by addressing customers’ points of confusion and their buying habits. Using this information to address customer needs and requirements, it theorized, would help to increase sales and solidify its market position. Today, management can quickly identify pain points and issues before they get out of hand.
Where Big Data Worked
Examples of decisions where big data has helped improve either products or services are prevalent in consumer applications. Personal device companies such as Fitbit, Sony, and Garmin have helped people analyze their exercise routines, diets, and sleep patterns (Laudon, 2016). These devices connect to the internet, allowing users to join with other users to compare how their routines are working. Under Armour’s (UA) Map My Walk allows users to create a profile and log workouts from walking, running, and bicycle riding, even over different terrains. Map My Walk is a mobile application commonly used on iPhone or Android devices. It tracks users’ routines, sets up diets for them, supports goal setting, and lets users join any number of groups worldwide (UA Record, 2018). Skyscanner and Trivago (Trivago, 2016) use big data systems to provide mobile applications allowing travelers to determine the best options available for purchasing airline tickets, reserving hotel rooms, and renting cars when traveling.
Where Big Data Did Not Work
But not all big data ventures are advisable or well thought out. Google developed an algorithm it claimed could accurately show how many people nationwide had contracted influenza. Google theorized it could determine the number of people with influenza and their locations by using the search data from its search engine. The numbers Google produced constantly over-estimated flu rates when compared to conventional data gathered by other groups, including the Centers for Disease Control and Prevention. What Google failed to take into account is that searches are sometimes driven by emotion. The number of searches increased as media coverage and social media posts increased, which inflated the number of returns in a Google search. Sears’s use of big data has, so far, not brought it back to a profitable state. One theory is that it is not asking the right questions of its huge amounts of data. Until Sears fixes its broken cost structure, using big data, or even selling it to its competitors, will not right this broken ship (Laudon, 2016). Wal-Mart understood that it needed to control its cost structure (Songini, 2006). Sears has yet to grasp this.
Conclusion
In conclusion, does big data bring big rewards? It can, if the right questions are asked. Google and Sears are examples of where the right questions are not being asked. Sears came close, but because it failed to fix fundamental problems with its cost structure, it was unable to put itself in a position of competitive advantage. Green Mountain and Starbucks have both utilized big data to meet customer needs (Huff, 2014). AutoZone can control its inventory to meet customer needs and control costs. Travelers now enjoy the ability to change travel plans on the fly. Amazon allows its customers to comparison shop with competitors selling similar or identical products even if they’re not on Amazon (D’Onfro, 2015; Peterson, 2015). Big data analysis has its benefits, but it has drawbacks. Much depends on asking the right questions.
References:
AutoZone | Auto Parts & Accessories | Repair Guides & More. (n.d.). Retrieved January 28, 2018, from https://www.autozone.com/
D’Onfro, J. (2015, July 25). Wal-Mart is losing the war against Amazon. Retrieved from
One way of categorizing access controls is by defining what they do. There are three kinds of implementation: administrative, physical, and technical/logical (Peltier, 2013).
Administrative controls are the policies and procedures and are useful for dealing with insider threats. Physical controls are security guards, cameras, and locks on doors and equipment. Technical controls are devices and mechanisms such as smart cards, biometric readers, encryption, and transmission protocols, which protect information systems and the information contained within.
The main access control models include the following:
Mandatory Access Control (MAC) – grants access by system policy. It is often used with sensitive government systems where information is classified top secret or confidential. It relies on sensitivity labels for data and classification levels for users.
Discretionary Access Control (DAC) – DAC is considered the more common access control model. Access permission is identity-based. Every object has an owner who grants access permissions. Windows is an example of DAC: creating a file in Windows automatically makes you the owner.
Role-Based Access Control (RBAC) – also referred to as nondiscretionary access control; users are granted access based on their job or role within the organization. This model works well for organizations with a constant turnover of personnel (Peltier, 2013).
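To make the distinction concrete, here is a minimal sketch of an RBAC permission check (the role names, permissions, and users are assumptions for illustration, not from any particular product):

# Minimal illustrative RBAC check: permissions attach to roles, not to users.
ROLE_PERMISSIONS = {
    "accounting_clerk": {"invoice:read", "invoice:create"},
    "accounting_manager": {"invoice:read", "invoice:create", "invoice:approve"},
}

USER_ROLES = {
    "alice": {"accounting_clerk"},
    "bob": {"accounting_manager"},
}

def has_permission(user, permission):
    """Grant access if any of the user's roles carries the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

assert has_permission("bob", "invoice:approve")
assert not has_permission("alice", "invoice:approve")

When personnel turn over, only the user-to-role assignments change; the role-to-permission mapping stays stable, which is why the model suits organizations with high turnover.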
User Access Management – ensures that only those with authorization have access to the system and that those without authority are kept out. ISO 27002 defines where user access management is to be used (Layton, 2016):
User registration – It describes the way users access the system and the type of access allowed.
Privilege Management – used to adjust access when a user’s job or responsibilities change within the organization. The principle of least privilege is applied: always grant the least privilege needed to accomplish the task.
Password Management – determines the length of passwords, the formatting of passwords, how often they should change, and how long must pass before the same password can be used again.
Unattended User Equipment – Defines how long an unattended laptop runs before being timed out and shutting down to prevent accessing information.
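As an illustration of how such policies might be enforced in software, here is a minimal sketch (the specific length, history depth, and age values are assumptions for the example, not ISO 27002 requirements; a real system would compare password hashes, not plaintext):

from datetime import datetime, timedelta

MIN_LENGTH = 12                 # assumed policy value
HISTORY_DEPTH = 5               # assumed: last five passwords may not be reused
MAX_AGE = timedelta(days=90)    # assumed maximum password age

def validate_password(new_password, recent_passwords):
    """Apply the assumed policy: minimum length, a digit, no recent reuse."""
    if len(new_password) < MIN_LENGTH:
        return False
    if not any(ch.isdigit() for ch in new_password):
        return False
    if new_password in recent_passwords[-HISTORY_DEPTH:]:
        return False
    return True

def password_expired(last_changed):
    """Force a change once the password exceeds the assumed maximum age."""
    return datetime.now() - last_changed > MAX_AGE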
References:
Layton, T. P. (2016). Information Security: Design, Implementation, Measurement, and Compliance. Boca Raton, FL: CRC Press.
Peltier, T. R. (2013). Information Security Fundamentals, Second Edition. Boca Raton, FL: CRC Press.
When considering IT security, an organization must judge the level of that security based on the level of risk to the organization. Consider two organizations with a presence on the Internet. One is a small religious congregation with a simple website used to communicate its mission to parishioners. The other is an eCommerce site transacting multi-millions of dollars in business annually. While hacking is possible with both websites, the threat of intrusion by an outside party is likely more significant for the eCommerce website than for the religious site.
Risk management is an integral part of an information security program. It provides the foundation for building a response at a level sufficient to support the organizational objectives while not hindering them (Peltier, 2013). Doing a risk assessment allows the organization to build a cost-effective IT security system that protects the vital information of the organization. Conducting the risk assessment early in the development of the information system avoids the cost of having to retrofit down the road due to an unknown risk. It allows for the alignment of information security with business objectives. Risk assessment is the business process of identifying threats and the impact of those threats (Layton, 2016).
Senior management must be involved in and in total support of the development of an IT security system, and be primarily involved with the risk assessment. As the mission owners, they will be in the best position to identify potential risks as well as to determine the risk level. It is important to note that risk assessment is a business function, not an IT function. IT can only devise the technical solution to protect what the business identifies as needing protection. From the risk assessment, we can develop the policies needed to govern the security of the information system.
The risk assessment will identify vulnerabilities, while risk management will identify which techniques to use to protect against them.
First, enlist those on the frontlines of your organization, the employees. They use the system day-in and day-out and will be full of useful insights on what needs protecting.
Protect assets according to their value. Understand which of the information assets the organization possesses are most valuable, and set your security levels by that assessment. Protecting everything is costly and inefficient and usually not needed.
Automate processes and functions. Artificial intelligence, machine learning, and behavioral analytics are becoming critical tools for mitigating security risks.
Create a security roadmap that has management support and is appropriately budgeted. A security system plan goes nowhere if management doesn’t support it, and the best way to show that support is through adequate budgeting.
Make your IT security department an equal branch of the entire company. IT security operates effectively in companies where the department is represented at the board table (AT Kearney, n.d.).
References:
AT Kearney. (n.d.). The Golden Rules of Operational Excellence in Information Security Management. Retrieved April 7, 2019, from https://www.atkearney.co.jp/documents/10192/7073823/The+Golden+Rules+of+Operational+Excellence+in+Information+Security+Management.pdf/118c56c7-b3d8-4e88-871f-3d7a00cebc8c
Layton, T. P. (2016). Information Security: Design, Implementation, Measurement, and Compliance. Boca Raton, FL: CRC Press.
Peltier, T. R. (2013). Information Security Fundamentals, Second Edition. Boca Raton, FL: CRC Press.
Information and communication are keys to managing projects to a successful conclusion. Knowing the work and the risks is the best defense for handling problems and delays. Assessing potential overall project risks brings to the forefront the need for changing project objectives. It is these risk analysis tools that allow the Project Manager to transform an impossible project into a successful project (Campbell, 2012). Project risks become increasingly difficult when dealing with an unrealistic timeline or target date, insufficient resources, or insufficient funding. Knowing the risks can help to set realistic expectations for the deliverables and the work required given the resources and funding provided. Managing risks means communicating and being ready to take preventive action. The PM cannot be risk-averse; accepting that risk will happen is part of the job, and doing nothing is not an option (Gido & Clements, 2012). The PM needs to set the tone of their projects by encouraging open and frank discussions of potential risks. The PM needs to encourage identifying risks, assessing the potential impact on the project and the likelihood of occurrence, developing risk response plans, and monitoring those risks. This paper will explore using qualitative and quantitative risk analysis in determining overall project risk.
Project Managers (PMs) use qualitative risk analysis to determine the probability of a risk occurring and the impact it could have on the project (PMBOK, 2013). A risk is an uncertain event whose occurrence could put the project in jeopardy if not addressed properly. The PM can use qualitative risk analysis to assess the probability of a potential risk occurring, using a variety of inputs including the risk management plan, the scope baseline, the work breakdown structure (WBS), enterprise environmental factors, and organizational process assets. The PM will use expert judgment to develop probability and impact assessments and will input the results of these estimates into a probability/impact matrix. The PM will use the results from the probability/impact matrix and the expert judgment to rank the potential risks and determine which of them require further in-depth analysis to develop detailed mitigation plans.
Planning for risks is a must in any project. A framework needs to be followed that includes identifying risks; analyzing and prioritizing them; developing responses; establishing contingencies; and monitoring and controlling these risks (Verzuh, 2012).
Managing risks has to be considered an enterprise capacity. This means the project risk register has to associate each risk with a strategic goal of the company (Kerzner, 2015). If the risk solution is not connected to a strategic goal of the company, then there is the added risk of failing to meet the strategic objectives of the company.
These detailed response plans, and the work that goes into developing them, are quantitative risk analysis. The benefit of quantitative risk analysis is that it helps the PM and upper management determine what resources and time commitment handling a risk would require should it occur, and at what cost. Knowing the impact costs of the high-probability risks helps organizational management decide whether the risks of taking on a project outweigh the benefits. One of the tools used in making a go/no-go determination is a cost-benefit analysis, discussed later in this paper.
Sources of project risk include unrealistic schedules, too few resources, thin budgets, missing or ill-defined metrics (meaning ineffective or guesswork measurements), poor project leadership, poorly defined requirements or planning, and ineffective change control plans leading to scope creep. Other examples of risk include upgrading old technology to new technologies, the availability of resources, excessive revisions to a website before it is finally acceptable to the customer, and price increases of a planned product before it is time to buy the product. The risks that could occur run the gamut of possibilities depending on the nature of the project.
This paper will discuss how qualitative and quantitative risk analysis is used to provide the information needed to make decisions concerning projects. The information derived from using qualitative and quantitative risk analysis helps to provide direction to a project, often changing the scope of the project due to findings in the analysis. The answers provided here will determine whether moving forward with the project is worth the risk.
Performing qualitative risk analysis prioritizes risks by ranking them in order of probability and impact. Ranking risks by their likely probabilities allows the PM to identify which risks the project team feels will need in-depth analysis to determine potential impact costs on the project. Roles and responsibilities for determining risks, budgets, and schedule impacts can be defined in qualitative risk analysis. Risk categories are determined; probabilities and areas of impact are defined. The risk register and probability/impact matrix contain all the information developed during the analysis.
The ranking is determined by assessing the probability the risk will occur. The benefit of this analysis is it allows the PM to concentrate on high priority risks reducing the level of uncertainty (PMBOK, 2013). Probabilities are determined by using expert judgment, interviews, or meetings with individuals chosen for their expertise in the area of concern to the project. These experts can be either internal or external to the project. The probability level of each risk is determined in each meeting; details are examined and justified as are levels of probability.
Impact analysis investigates the effect a risk will have on the project’s schedule, cost, quality, and ability to meet project scope. The impact analysis will also look at the positive or negative effects of a risk on the project. If the level of impact is great enough and its probability of occurring high enough, the risk will merit quantitative analysis to determine the exact effect it will have on the project.
Inputs to the qualitative risk analysis process include the project risk management plan. Here, the roles and responsibilities of managing risk are defined. Budgets, schedules, and resources are defined as well. The scope baseline is considered an input; it includes the approved scope statement, the WBS, and the WBS Dictionary. These inputs can only change through approved change control procedures (Mullaly, 2011).
The risk register serves as both input to and output from the qualitative risk process. It is used to identify and track all risks connected with a project. It covers all of the outcomes of the various risk processes used to identify the risks. Each identified risk is assigned a unique number, given a risk name, and assigned a risk owner, along with an explanation of the risk, the probability of the risk occurring, and the rank of the risk. The risk register includes a trigger and a list of potential responses. The impact on the project should the risk become an issue, a plan of action (mitigation), and the current status of the risk are also included (Schwalbe, 2014).
The identification number gives a unique code to the risk to differentiate it from all the other risks. There is a chance that some risks, even though dissimilar, will seem to be similar. The team applies a unique numbering system to identify each risk, thereby avoiding confusion.
Each risk should have an easily understood name that accurately describes the risk in a few words. The purpose of this name is to make it easy to identify a given risk when simply glancing at the whole list (Robertson & Robertson, 2013).
Every risk should have a risk owner, and an owner can own more than one risk. The risk owner is responsible for tracking the status of the risk and for assisting in developing a risk plan for each of their risks. They are responsible for notifying the team and management when a risk has become an issue and for launching the approved risk plan for the occurring risk.
A description of the risk should be concise and to the point. It should contain the risk description, the trigger event, and the probability of the risk occurring. The explanation should describe why this is considered a risk and the impact on the project should it occur. This explanation should contain the plan to mitigate the risk should it become an issue (Kendrick, 2012).
The probability of risk occurrence is very important in developing possible responses and in deciding whether to commit resources to mitigate the risk should it occur. A PM can chart these probabilities, and the impact on the project, using a probability/impact matrix. The matrix is divided into categories: high risk, medium risk, and low risk. The matrix makes it easier to identify which risks are high risks and need special attention because of their likelihood of occurring (PMBOK, 2013). See Figure 1 below for an example of a probability/impact matrix.
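A minimal sketch of how such a matrix and ranking could be computed (the 1–10 scales and the category cut-offs are assumptions for illustration; the sample risks echo Table 1 below):

# Illustrative probability/impact scoring; scales and thresholds are assumed.
risks = {
    "R1: project loss": (10, 10),            # (probability 1-10, impact 1-10)
    "R2: increase in health costs": (5, 5),
    "R3: hard-to-use system": (5, 5),
}

def categorize(score):
    """Assumed cut-offs: >= 50 high, >= 20 medium, otherwise low."""
    if score >= 50:
        return "high"
    if score >= 20:
        return "medium"
    return "low"

# Rank risks by score = probability x impact, highest first.
ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (p, i) in ranked:
    print(f"{name}: score={p * i} ({categorize(p * i)})")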
Once the PM has completed identifying risks, determined their probability of occurring, and plotted this data in a matrix, he can determine a rank for each risk. This rank allows the PM to quickly identify the most important risks, those that will have the highest impact on the project and will require extra resources should they occur. The probability of occurrence and the impact level determine the rank. Those risks with high probability and greatest impact are ranked highest. Risk probability assessments explore the possibility of a risk occurring, while risk impact analysis investigates the potential effect the risk can have on the project, such as on the budget. The probability of a risk occurring is determined, and each risk gets a risk rating. Each risk can then be plotted using a probability/impact matrix and categorized as having a high, medium, or low level of impact on the project (Schwalbe, 2014). The trigger tells the team to watch for a specific event whose occurrence signals that a risk is happening.
Potential responses are a list of the plans, and their location in the system, that tell the team how to deal with a risk when it occurs. A risk response plan is a defined action designed to prevent or minimize the impact or occurrence of an adverse event (Gido & Clements, 2012). Risk response plans can be designed to avoid a potential risk, mitigate the risk, or accept it. Avoidance means eliminating the risk by either choosing a different course of action or designing a resolution to it. Mitigation can also design a solution, but it also includes ways to minimize the risk’s impact. Accepting means dealing with the risk should it occur and otherwise doing nothing. Many low-probability or low-impact risks are accepted due to the small likelihood of occurrence. These responses should be of sufficient detail to allow for easy determination of the impact costs. The response describes the impact on the project in the event a risk becomes an issue. The impact, should a risk occur, defines what the cost would be to the project. In the case of a negative risk, it costs the project dollars in time and resources. If the risk is positive, the cost is the loss of a potential gain. Status tells the team whether the risk has potential or is considered unlikely to occur.
See Table 1 below for an example of a risk register.
Table 1 – Example Risk Register

Risk R1 (Rank 1): Project loss
Description: Member could be reassigned or leave company
Category: Project risk
Root cause: Management decision
Trigger: Team member no longer here
Potential response: Bring in replacement
Risk owner: PM; Probability: 10; Impact: 10; Risk score: 100
Status: Mitigation plan in place in case risk occurs

Risk R2 (Rank 2): Increase in health costs
Description: Health costs could increase
Category: Budget risk
Root cause: Non-use of system; users discovering unrealized health issues causing a temporary increase in health costs
Trigger: Budget reports showing increased costs
Potential response: Increase training on system usage; increase enforcement by management of required usage of the system
Risk owner: HR; Probability: 5; Impact: 5; Risk score: 25
Status: Plan in place to increase awareness of trigger action; management to be informed

Risk R3 (Rank 3): Hard-to-use system
Description: The system could prove harder to use for a variety of reasons
Category: System use
Root cause: Poor design; non-intuitive navigation; poor usage training
Trigger: Low usage; a high number of complaints
Potential response: Further training; increased incentives; surveys to determine usage issues
Risk owner: PM/HR; Probability: 5; Impact: 5; Risk score: 25
Status: Plan to track usage and increase trigger actions in place

Risk R4 (Rank 5): Low number of users
Description: Potentially no one will use the system for a variety of reasons
Category: Systems usage
Root cause: Non-interest; ineffective enforcement of required usage by employees
Trigger: Low usage numbers; lack of feedback on system
Potential response: Increased enforcement of usage requirements; conduct survey to determine issues with non-usage
Knowledge of how a risk can potentially impact a project is the best way to avoid costly delays. It is incumbent on those managing projects to carry them to a successful conclusion. Overall project risk assessment provides the information needed to make changes in project strategy and to meet project objectives. Thoroughly understanding and assessing risks can change an impossible project into a successful one. Expectations can be altered for projects with few resources and unrealistic schedules, and assessing risks exposes evidence of these cases.
Risks have one of two possible values: either a risk occurred or it didn’t. Qualitative risk analysis puts risks into a range of possible values, usually high, medium, or low. Qualitative methods do not use numeric values. Qualitative risk analysis aids in the quick determination of a risk occurring and its potential impact.
Quantitative analysis requires deeper analysis of the risk; it requires more work gathering data to determine the magnitude of the impact the risk will have on the project. Quantitative analysis works toward greater precision, revealing more about the risk than qualitative analysis. It is the process of mathematically analyzing the effect of risks on overall project goals. Quantitative analysis puts risk into a tighter, specific range, from zero to one, or between zero and one hundred percent (Kendrick, 2012). Quantitative analysis of high-probability, high-impact risks may be estimated down to hours or days of slippage, or units of money, clarifying the precise impact on the project. Sensitivity analysis, rigorous statistical analysis, decision trees, and simulations provide deeper information about potential risks and can aid overall project risk analysis. The key benefit of quantitative analysis is the information produced, which allows for effective decision making and the removal of uncertainty. Communications are key (High Cost of Low Performance: The Essential Role of Communications, 2013).
The key inputs are the risk management plan, the cost management plan, the schedule management plan, enterprise environmental factors, and organizational process assets, including information from past projects such as planning documents (PMBOK, 2013).
Considering each risk by itself, one would think it would be easily manageable. That would be correct, so long as that single risk were the only risk. But put all of the identified risks together, and they could prove insurmountable enough to cancel the project. Overall project risk comes from aggregating the data to show a complete picture of the total impact on the project.
As planning for the project nears completion, the team should have multitudes of information available. Assessing project risk should be easier at this point in the project. There are tools the PM can use to further analyze project risk, including statistical analysis, metrics, and modeling and simulation tools. These tools can be used to suggest changes, control outcomes, and execute the project to a successful conclusion.
Methods for assessing overall project risk have been shown to be effective in lowering the impact on the project as well as in providing information for making appropriate decisions on moving forward with a project. These assessments can build support for less risky projects while canceling riskier ones. These methods can help to compare projects to see which meet the organization’s objectives better than others. Information is provided that allows for altering unrealistic project objectives and providing needed funding reserves. They also improve communications as the information is formulated (Kerzner, 2014).
There are several ways to determine the level of overall risk in a project, and several measurements where aggregation serves as a means of overall risk assessment. One method is to add up the consequences of all the project risks. This method is “loss times likelihood”: the estimated cost or time involved is multiplied by the risk probability and aggregated for the whole project (Kendrick, 2012). One way to add these consequences is to add up the contingency plans of all the risks.
Using Program Evaluation and Review Technique (PERT) expected estimates can generate data similar to aggregating the consequences. PERT provides estimates of the most likely, optimistic, and pessimistic amounts of time it would take to complete a task. Adding these estimates gives the PM a range of how long it would take to complete a project. Each risk can use PERT to provide a three-point estimate that is aggregated with all other risk estimates to determine the overall impact on the project.
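A minimal sketch of the PERT expected-value calculation, E = (O + 4M + P) / 6, aggregated across risks (the sample figures are assumptions for illustration):

def pert_expected(optimistic, most_likely, pessimistic):
    """Classic PERT three-point estimate: E = (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Assumed three-point estimates, in days of schedule slippage per risk.
risk_estimates = [
    (2, 5, 12),   # e.g., replacing a departed team member
    (1, 3, 8),    # e.g., rework caused by a hard-to-use system
]

total_expected_slippage = sum(pert_expected(o, m, p) for o, m, p in risk_estimates)
print(f"Aggregated expected slippage: {total_expected_slippage:.1f} days")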
The PM has to keep in mind that these are guesstimates, providing only a baseline within which to work. Consequence measurements assume that all risks are independent, with no correlation to other risks. This independence is not entirely true, as in some cases a risk becomes more likely when other risks have occurred (Marchewka, 2015). Once a risk has occurred, the team concentrates on problem-solving to the neglect of the rest of the project, making it likely that more risks will occur as a result.
Quantitative analysis includes mathematical and statistical modeling, allowing the PM to simulate different outcomes.
Discrete probability distributions allow only for integer or whole-number outcomes. It is an either/or outcome, much like flipping a coin, where eventually you end with 50% heads and 50% tails. In risk analysis, this would be analogous to determining whether it will rain on the day of an outdoor wedding; either it will or it won’t.
Continuous probability distributions are useful where an event could have numerous possible outcomes depending on the value given. Continuous probability distributions are good for developing models of risk analysis.
Three such continuous probability distribution models are:
The normal distribution, commonly referred to as a standard bell curve, where the mean and standard deviation determine the shape of the distribution; the probability defines an area under the curve.
PERT, a three-point measurement for defining the area under the curve using optimistic, most likely, and pessimistic estimations.
Triangular distribution, which uses similar measurements as PERT; the difference is the weight given to the mean and standard deviation.
Risk monitoring means assigning a risk to one individual, usually a member of the team where the risk will have the greatest impact, to plan and monitor for the trigger events of the risk. It is very important to review all risks regularly to determine whether the probability of occurring or the impact on the project has changed. Many times these changes can be identified through progressive elaboration; we have learned more as we have progressed with the project. The team may also be able to identify other risks not considered when the risk management plans were initially developed. Scope, schedule, or budget changes may have occurred as the project progressed.
Risk audits involve using an outside manager to review the team to ensure the proper processes are in place and used. The auditor has to ensure that monitoring processes are in place for identifying trigger events when they occur and that a communication plan is defined and ready for action should a risk event occur.
Risk review meetings should be held at regular intervals, usually monthly, and should include stakeholders, managers, and the project team. All participants in a project need to be keenly aware of the risks and the current status of each.
Earned Value Management (EVM) helps to provide an early indication, to the PM and upper management, of potential project risks (Fleming, 2010). It can indicate that the project will need more money to complete unless actions are taken to change upcoming events. The project scope may need changing, perhaps reducing. Perhaps additional risk needs to be taken or considered. EVM is a tool the PM uses to track project performance, allowing for early warnings that the project is off track.
Simulations and modeling are quantitative analysis tools that allow the PM to examine different possible outcome scenarios and determine the probability of each event occurring. Monte Carlo simulation is one such technique; it randomly produces values for a variable with a specific probability distribution. The Monte Carlo simulation goes through a number of iterations and records the outcomes. Monte Carlo simulations can be used with either continuous or discrete probability distributions (Marchewka, 2015).
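A minimal sketch of a Monte Carlo schedule simulation using the triangular distribution mentioned above (the task estimates and iteration count are assumptions for illustration):

import random

# Assumed (optimistic, most likely, pessimistic) duration estimates, in days.
tasks = [(4, 6, 12), (8, 10, 20), (3, 5, 9)]

ITERATIONS = 10_000
outcomes = []
for _ in range(ITERATIONS):
    # Sample each task from a triangular distribution and sum the durations.
    outcomes.append(sum(random.triangular(o, p, m) for o, m, p in tasks))

outcomes.sort()
print(f"Median project duration: {outcomes[ITERATIONS // 2]:.1f} days")
print(f"80th percentile: {outcomes[int(ITERATIONS * 0.8)]:.1f} days")

The spread between the median and the higher percentiles gives the PM a quantitative sense of schedule risk rather than a single-point estimate.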
“Program” is a term that means a group of related projects managed in a way that resources and funds are effectively utilized. The main objective of program management is better overall control of interconnected projects than there would be if each project were left to its own devices. Projects can run in sequence or in parallel to each other. While projects have specific target dates to hit, programs can be open-ended. Programs may contain anywhere from only a few projects to as many as hundreds of projects.
Risk management for a program can range from simply aggregating the risk and response plans for small programs to sophisticated strategies for delivering benefits and value. The main purpose of program management is to deal effectively with the thousands of different activities and tasks that are difficult to manage within a single project. Program management can provide organizational strategies for planning a risk response for each project, allowing the project to create a response that meets organizational objectives (Harpham, 2015).
Overall risk analysis in a project consists of aggregating all the risk probabilities and impacts to determine the level of risk to the overall project itself (Wurzler, 2013). If the cost of the risk is greater than the benefits derived from completing the project, then the project is either reexamined and changed or canceled altogether. In programs, the risk could morph into a sum greater than its individual components. That is, while each project’s risk is not so great, adding them up could produce a result too great for the organization to take on.
The PM needs to note that no matter which methods he uses, he has to ensure that all risks are accounted and planned for (Verzuh, 2012). Many times the PM cannot get the information needed to do a proper quantitative analysis. Many members of the organizational community think it is a waste of time to conduct a risk analysis. On many contracts I have held, I have heard people explain that they already know about a risk and will deal with it if and when it occurs. I have used qualitative analysis extensively because I was left with no other choice, having exhausted all other options. As pointed out in this paper, the PM has to do the qualitative analysis first before any quantitative analysis can begin. Doing qualitative analysis first leads to deeper quantitative analysis. The PM has to identify the risk before determining its impact.
References:
Campbell, P. M. (2012). Communications skills for project managers. New York, NY: AMACOM American Management Association.
Fleming, Q. W., & Koppelman, J. M. (2010). Earned value project management. Newtown Square, PA: Project Management Institute.
Gido, J., & Clements, J. P. (2012). Successful project management. Australia: South-Western Cengage Learning.
Harpham, B. (2015, March 30). ProjectManagement.com – Leveraging the Best Knowledge
Robertson, S., & Robertson, J. (2013). Mastering the requirements process: Getting requirements right. Upper Saddle River, NJ: Addison-Wesley.
Schwalbe, K. (2014). Information technology project management. Boston, MA: Course Technology.
Verzuh, E. (2012). The fast forward MBA in project management (4th ed.). Hoboken, NJ: John Wiley & Sons.
Wurzler, J. (2013). Information risks and risk management. Retrieved from SANS Institute website: https://www.sans.org/reading-room/whitepapers/dlp/information-risks-risk-management-34210
This paper will discuss the difference between business-oriented networks and enterprise social networks. What makes up a social networking site, what makes up a business-oriented social networking site, and why are they different?
Another topic concerns why organizations need a business continuity plan. Why is it important to determine what a business will do if a disaster, man-made or natural, strikes? Three issues that a business continuity plan should cover concern identifying assets, controlling the flow of information, and determining who is in charge and what the line of succession is.
Many companies have created profitable businesses from selling small amounts of products previously sold in bigger chunks. Music and books are two such products. The market started asking to buy single songs and not the whole album; so how do you serve that market and still make a profit? Micropayment systems seem to have come up with an answer, and they serve more than just the entertainment industry. This paper will discuss how this system works and how it is expanding beyond music and books.
How business-oriented networks and enterprise social networks differ
A social network is defined as a place where people can create a personalized homepage, writing about events in their lives, posting pictures of the family, short videos, and music, posting and exchanging thoughts and ideas, and linking to their friends’ websites; in essence, they create their own space on the internet. They can even tag their content or create hashtags (#thisisme) and add keywords so the content is searchable. Others can comment on, like, or share your content should you allow them to do so (Turban, 2012).
Mobile devices play a big role in social networking; enter mobile social networking. Users can let their social network know that they’re checking in at a local restaurant for dinner or that they’re at a favorite watering hole watching a live band. They can even post pictures as well as live video of the event. Many entertainers encourage people attending these events to share their experience, as it is free advertising and helps to increase sales of their products such as music or films. Data has shown that mobile social networking has increased subscribers by substantial amounts; the number of mobile subscribers accessing Facebook increased over 100% in one year, from 2009 to 2010 (Stackpole, 2012). Two basic types of mobile social networks exist: companies partnering with wireless carriers, such as Yahoo and MySpace via AT&T’s wireless network, and companies that do not have such relationships with cell companies and use other means to attract users, such as mobile apps. Examples of the latter include Classmates.com and Couchsurfing.com.
Business-oriented sites, also known as professional social networks, have as their primary objective fostering business relationships among their members and subscribers. Examples of these sites include LinkedIn, Viadeo, and XING.
Many businesses are increasingly using these sites to increase their business contacts, especially in a global economy. Social networks make it easier to maintain contact with colleagues around the world. Companies can advertise business services and products. They can show expertise in their field using multiple media, including educational presentations, articles, and videos, all at virtually no cost to host. This low cost is especially beneficial to small companies. A small business can contact potential customers on the other side of the world. This type of connection was impossible ten years ago. This type of cross-border networking makes globalization available to the individual and the small business (Turban, 2012).
The need for a business continuity plan
The number one reason for a company to have a formalized disaster plan in place is to ensure ongoing business activities that provide business continuity with the least amount of disruption and costs for the business. Many businesses have failed due to a lack of disaster recovery planning because they could not guarantee business continuity that would allow them to survive an unexpected disaster (Turban, 2012).
Part of a company’s security efforts involves preparing for man-made or natural disasters, since these may occur without any warning at any time. It is best to prepare an actual plan that will work effectively in the face of calamity. The most important part of this plan is the business continuity plan. It addresses the question of how the business is going to operate should an unexpected disaster occur. Most likely a disaster will be localized; if it is a worldwide disaster, a plan would be unnecessary.
A disaster recovery plan confronts two issues: one, how the business recovers from a disaster; two, how it continues to operate should a disaster affect the business. This continuity is especially important for global businesses since, as stated earlier, a disaster is usually a localized event, and the business will want to ensure that the rest of the business continues, as those operations will help to pay for the cost of recovery. It is imperative for a business to have a disaster recovery plan in place to obtain insurance that covers the cost of recovery from the disaster. Disaster recovery describes the events linking the business continuity plan to safeguarding and ensuring recovery.
The purpose of a business continuity plan is to keep the business running after a disaster occurs. All functions and departments need to have an effective recovery plan in place. Part of that plan includes asset protection, recovery, and even replacement. The plan needs to detail who makes the decisions, even to the point of succession. The plan needs to concentrate on total recovery from a total loss. The plan needs to be kept current due to changes in technologies, circumstances, and even personnel. Critical applications need identifying. And the plan needs to be kept in a safe and accessible place.
A disaster recovery plan needs to cover many areas sufficiently to ensure recovery. It includes identification of assets and their value, including people, buildings, hardware, software, data, and supplies. A threat assessment needs to be conducted to include man-made and natural threats from inside or outside the business. Conduct a vulnerability assessment, calculate the probability of exploitation, and evaluate all policies and procedures.
Micropayment Systems
The music and book industries today allow consumers to set up accounts that let them buy single songs, or even individual chapters of a book, at very low prices. Single-item purchases accumulate until the amount makes it cost-effective to submit the payment to the credit card company. These systems are known as closed-loop systems. The credit card companies are not enamored with these systems because they have forced them to lower their fees to capture what is becoming a huge industry. Much of this type of business was unimaginable 15 years ago. Today, transactions worth billions of dollars are handled daily (Schonfeld, 2009).
The shopkeeper gathers all the purchases subscribers make until they are sufficient to submit to the credit card company cost-effectively. It is much like gathering all the day’s sales at a cash-only business and depositing the money in the bank at the end of the day. The problem is that the shopkeeper risks waiting a long time on some low-volume customers. By aggregating these purchases together, shopkeepers lower their cost per transaction to the credit card company, enabling them to operate profitably.
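A minimal sketch of this aggregation logic (the fee figures and the $10 batching threshold are assumptions for illustration, not any processor’s actual rates):

# Assumed fee model: the processor charges a flat fee plus a percentage per
# submission, so batching many small purchases into one charge amortizes
# the flat fee.
FLAT_FEE = 0.30          # assumed per-submission fee, in dollars
PERCENT_FEE = 0.029      # assumed percentage fee
BATCH_THRESHOLD = 10.00  # assumed: submit once pending purchases reach $10

class MicropaymentAccount:
    def __init__(self):
        self.pending = 0.0

    def purchase(self, amount):
        """Record a small purchase; submit the batch once it is cost-effective."""
        self.pending += amount
        if self.pending >= BATCH_THRESHOLD:
            self.settle()

    def settle(self):
        fee = FLAT_FEE + PERCENT_FEE * self.pending
        print(f"Charging ${self.pending:.2f} (processor fee ${fee:.2f})")
        self.pending = 0.0

account = MicropaymentAccount()
for song_price in [0.99] * 12:   # twelve single-song purchases
    account.purchase(song_price)

Submitting each 99-cent song individually would incur the flat fee twelve times; the batch incurs it once, which is what makes the model profitable.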
Other industries that are successfully taking advantage of micropayments include mobile banking and microfinancing in poor, underdeveloped areas of Africa; M-Pesa has been successfully operating in Kenya and Tanzania and has spread to India and Afghanistan as well. Cell phone technology has allowed people in remote areas to apply for and receive loans as small as $100.00, allowing them to finance business operations and provide electricity to their homes using solar-powered generators (Mutiga, 2014).
Conclusion
The difference between business-oriented networks and social networks lies in the area of concentration each site addresses and how it meets its subscribers’ needs. Facebook addresses subscribers’ need to interact on a social basis: sharing thoughts, family news and photos, and opinions is important to this group. LinkedIn is for sharing contact information, providing product specifications, showing expertise, and getting a job. Mobile devices have played a big role in both business-oriented and social networking sites, letting people share and talk with each other at any time, from anywhere.
A disaster recovery plan must include a business continuity plan. Without a business continuity plan, the business will find it difficult, if not impossible, to find the resources needed to recover from a man-made or natural disaster. Business continuity ensures that the business has continual incoming revenue to adequately fund the recovery.
Micropayments have become successful because they addressed a need and demand in the marketplace for services otherwise unavailable. The market was demanding goods and services on a smaller scale than previously available, especially entertainment and finance products. Entrepreneurs stepped up and determined that if they aggregated their sales receipts and submitted them in bundles rather than individually, they could cut their costs while also serving their customers. The rest of the market, mostly the credit card industry, had to change to meet the demand.
References:
Mutiga, M. (2014, January 20). Kenya’s Banking Revolution Lights a Fire. The New York Times.
It’s funny how the supposedly secure RFID chips on credit cards can be less secure than the old magnetic stripe cards. According to Chase Bank and American Express, the chips are built with 128-bit encryption and Triple-DES (Data Encryption Standard) protecting the data on the chip. Furthermore, the chips theoretically send a unique, single-use code with each transaction that does not match the number on the card (Johnson, 2009). And now wallets are available at Amazon that contain shields preventing portable RFID readers from reading the card chips.
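To illustrate the single-use-code idea, here is a toy sketch. The real chips use a Triple-DES-based cryptogram scheme as noted above; this version substitutes HMAC-SHA256 from Python’s standard library purely for brevity, and the key and counter details are simplified assumptions.

```python
# Illustrative sketch of a single-use transaction code. Real EMV chips
# use a Triple-DES-based cryptogram; this toy version uses HMAC-SHA256
# just to show the idea: a secret key that never leaves the chip, plus
# a counter, yields a code that differs for every transaction and is
# useless if replayed.

import hmac, hashlib

class ChipCard:
    def __init__(self, card_secret: bytes):
        self._secret = card_secret   # provisioned at issuance, never read out
        self._counter = 0            # per-transaction counter

    def transaction_code(self, amount_cents: int) -> str:
        self._counter += 1
        msg = f"{self._counter}:{amount_cents}".encode()
        return hmac.new(self._secret, msg, hashlib.sha256).hexdigest()[:8]

card = ChipCard(b"issuer-provisioned-secret")
print(card.transaction_code(4_99))   # one code for this $4.99 purchase
print(card.transaction_code(4_99))   # same amount, different code
```

A skimmer that captures one code gains nothing, because the issuer will reject any code whose counter it has already seen.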
Part of the problem with fighting computer criminals is that once companies have developed the means to fight the latest virus or DoS attack, the criminals have developed another way to attack. Anti-virus software is only as good as the last known virus. Computer viruses, phishing scams, Trojan horses, and DoS (Denial of Service) attacks are used to steal information, prevent the use of your computer, or gather data that can be used illegally or sold for illegal use. The biggest threat to computer security has been found to be the user. In the early days, users would write their passwords on a sticky note and post it on their monitors. You may laugh at this notion, but it’s very true. The problem with fighting computer crime is the inability of many people to recognize that they’re being spammed or phished. They download a file containing a Trojan horse thinking it’s from their best friend, or they’re gullible enough to believe that their child, who is standing next to them, is badly injured in a Nigerian hospital and needs emergency help. The file they download carries code that allows their computer to be used in a vast network of other computers to infect many more machines. Together these computers provide enough power to mount heavy-duty denial-of-service attacks on many well-known companies or government services (Brumfield, 2015).
Grameen Koota, part of Grameen Bank, is a microfinance operation in Bangalore, India. It provides small loans to poor or low-income clients. One of the problems Grameen Koota was trying to bridge by developing a mobile loan and payment system was the inability of the poor to grow their way out of poverty: they didn’t have access to the capital they needed even for the necessities of life. Grameen Koota also had a desire to grow its business. Since business and government were unwilling to set up the needed infrastructure because of its heavy cost, while the people had a need and already owned the means, an abundance of cell phones, the opportunity to provide easy banking and financial services presented itself (Turban, 2012).
M-Pesa has been providing mobile banking and micro-financing to poor, underdeveloped areas of Africa, namely Kenya and Tanzania, since 2009, and has since spread to India and Afghanistan. Many people with just a simple flip phone can apply for and receive loans as small as $100.00 to finance electrifying their homes with solar-powered generators, and the system allows the borrower to repay the bank using the phone. Grameen Koota had a problem when it came time to collect payments: its collectors became victims of armed robberies while returning to the bank with the day’s receipts, a risk mobile payment removes. One can only imagine the immensity of such a change in a rural area that has never had electricity. And the ability to communicate directly with buyers anywhere in the world, rather than just locally, lets these people sell their products, work their way out of poverty, and live a better life (Mutiga, 2014).
References:
Brumfield, J. (2015, April 13). Verizon 2015 Data Breach Investigations Report – About Verizon Enterprise Solutions. Retrieved from http://news.verizonenterprise.com/2015/04/2015-data-breach-report-info
Johnson, J. (2009, September 30). RFID Credit Cards and Theft: Tech Clinic.
Today’s post will explore the success of Starbucks and Netflix on the Internet, particularly with social media. It will explore why Starbucks puts so much emphasis on sites like Facebook and Twitter, and how those sites compare to its homegrown site. The question for Netflix is whether the Cinematch recommendation tool is responsible for Netflix’s success as a business, or whether the move to full streaming of movies and TV shows created that success.
Starbucks
Starbucks, according to some people, makes great coffee. Their staffs of baristas are friendly, and their stores are located just about everywhere in America; they even have stores in China. They’re also known for their killer social media strategy (Huff, 2014).
Here are some of the stats:
● 36 million Facebook likes
● 12 million Twitter followers
● 93K YouTube subscribers
Those numbers are very impressive. There’s no doubt Starbucks is big on social media, but why do they do it? Starbucks’ focus is on its customer base. Their customers are young, social media savvy, and affluent; they’re into the latest thing. On Facebook, Starbucks management doesn’t post too often; they let their fans do the talking. When management does post, it’s usually fun things like contests and tips, as well as low-key sales pitches. Starbucks also allows customers to reload their Starbucks mobile card from Facebook. It’s all about creating relationships with existing customers to increase sales and add new ones. Feedback from existing customers amounts to free advertising, adding new customers at virtually no cost.
On Twitter, Starbucks connects with followers who want to catch up on the latest news and updates, and the staff uses Twitter as a service channel, reaching out to customers who are talking about their experiences with the stores and products. The staff checks Twitter all day long to keep satisfied customers satisfied and to settle any problems quickly before they get out of hand.
What Starbucks’ homegrown site, http://mystarbucksidea.force.com/, has in common with Facebook and Twitter is the insight that while getting customers is good, keeping them is even better. With over 23,000 stores, the company has reached a point where advertising on TV or radio has only so much impact. It now practices f-commerce, where developing social relationships online becomes critically important to keeping customers (Turban, 2012). There are similarities across each of the social media sites where Starbucks has a presence, such as encouraging ideas for new drinks or food and promoting social events at a nearby store, but each site serves a different clientele. Facebook, for instance, is more family-oriented than Twitter, which is more individualistic. All share a love for coffee, which is the commonality of this community. Starbucks needs to use these other social media outlets to capture every possible customer, and the relationship needs to be tailored to fit the audience, a time-honored tradition in sales. The message can be the same, only stated differently for each audience (McNamara & Moore-Mangin, 2015).
Netflix
Was Netflix’s success due to implementing the Cinematch search engine on its system? Yes, it was a major contribution because of its ability to conduct extensive data mining; this software agent uses data mining to sort through a database of more than 3 billion film ratings and customers’ rental histories. Cinematch then suggests movies the customer might rent. It’s a personalization similar to what amazon.com offers when it suggests book titles to customers. The recommendation is based on comparing an individual’s likes and preferences with those of people with similar tastes. With this type of suggestive system, Netflix tells subscribers which movies they would probably like and shows what other, similar people are watching.
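Cinematch’s actual algorithms are proprietary, but the “people with similar tastes” idea can be sketched in a few lines of user-based collaborative filtering. The ratings below are invented, and cosine similarity is just one common choice of similarity measure.

```python
# Minimal sketch of the "people with similar tastes" idea behind a
# recommender like Cinematch (whose actual algorithms are proprietary).
# Ratings are invented. Similar users are found by cosine similarity
# over co-rated films; their favorites you haven't seen become
# suggestions.

from math import sqrt

ratings = {
    "ann":   {"Alien": 5, "Up": 2, "Heat": 4},
    "bob":   {"Alien": 4, "Up": 1, "Drive": 5},
    "carol": {"Up": 5, "Frozen": 4, "Heat": 1},
}

def similarity(a, b):
    common = ratings[a].keys() & ratings[b].keys()
    if not common:
        return 0.0
    dot = sum(ratings[a][m] * ratings[b][m] for m in common)
    na = sqrt(sum(ratings[a][m] ** 2 for m in common))
    nb = sqrt(sum(ratings[b][m] ** 2 for m in common))
    return dot / (na * nb)

def recommend(user):
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for movie, stars in ratings[other].items():
            if movie not in ratings[user]:
                scores[movie] = scores.get(movie, 0) + sim * stars
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ann"))   # ['Drive', 'Frozen'] - Drive ranks first
                          # because bob's tastes are closer to ann's
```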
Netflix has also successfully moved from DVD rentals to streaming video. They have, in fact, been offering television series that have drawn in an even larger audience, helping to increase their revenues beyond what renting movies alone could do (Cohen, 2013).
Conclusion
Both Starbucks and Netflix have successfully moved into the Web 2.0 world, using social media and search tools effectively to meet their customers’ needs and demands. Netflix moved from being a mail-order DVD rental business to becoming the preeminent streaming entertainment company with millions of subscribers. Both companies managed to take current technology and develop systems that meet customers’ needs, making themselves very profitable, with bright futures.
References:
Cohen, P. (2013, April 23). Forbes Welcome. Retrieved from http://www.forbes.com/sites/petercohan
Amazon.com has numerous elements that allow customers to personalize and customize features and products. The question to ask is how effective the various elements are. Do they cause a customer to buy more products? Wal-Mart, too, has similar elements built into its eCommerce (EC) site. Are they effective, and will they be a cause of concern for Amazon? Will Amazon continue to be the dominant force in etailing, or will Wal-Mart, the economic juggernaut that it is, prove to be a force to be reckoned with? Wal-Mart, after all, has a history of displacing so-called market leaders whenever it opens a new store. This post explores these questions to help better understand the dynamics at play in eCommerce.
Personalization
Three personalization items to note are “Wish Lists,” “Featured Recommendations,” and “Recently Viewed.” Wish Lists allow the customer to create separate lists of items they might like to buy at some future time, for themselves or someone else. An interesting thing about these lists is that if the customer waits long enough, they may see a significant drop in price. Amazon also provides a way for me to schedule recurring orders for products I use on a regular basis. “Featured Recommendations” and “Recently Viewed” are Amazon’s suggestive advertising, testing whether the customer would consider buying more. It’s much like add-on selling or accessorizing: adding a matching pair of shoes to the dress you just bought (Amazon, 2016).
Where the real personalization takes place is in meeting customers’ needs, and one of the things Amazon is known for is being a pioneer in personalization. Its use of data mining technology to make the consumer shopping experience more memorable and exciting is being mimicked by all others, including Wal-Mart. Besides making the shopping experience more memorable, the data Amazon gathers on its customers’ activities informs sellers what they should carry in inventory, how much to carry, and at what times of the year to carry it (Rao, 2013).
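As a toy illustration of that last point, here is a sketch that mines invented order history for each product’s peak sales month, the kind of aggregate a seller-facing inventory suggestion could be built on. The data and the month-level granularity are assumptions for the example.

```python
# Toy sketch of mining purchase history for inventory suggestions:
# count units sold per (product, month) and surface each product's
# peak month. Order data is invented for illustration.

from collections import Counter

orders = [
    # (product, month of sale)
    ("sunscreen", 6), ("sunscreen", 7), ("sunscreen", 7),
    ("gloves", 12), ("gloves", 1), ("gloves", 12),
    ("sunscreen", 12),
]

by_product_month = Counter(orders)
for product in sorted({p for p, _ in orders}):
    month, units = max(
        ((m, n) for (p, m), n in by_product_month.items() if p == product),
        key=lambda t: t[1],
    )
    print(f"{product}: stock up for month {month} (peak: {units} units)")
```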
Customization
Note that two types of customization are in question here. One is customizing products to meet consumers’ needs, much like what Dell Computers does. The second is customizing the web experience, such as letting consumers choose what they would like to see on their “page,” plus what the website shows you based on your previous activity. A customizable product would be difficult for Amazon to offer since it is an eTailer, not a manufacturer like Dell; that doesn’t prevent Amazon from aligning with manufacturers such as Dell to let the consumer buy customizable products through Amazon. Amazon would certainly have to ensure a good fit, since Amazon is a destination, and most people wouldn’t consider it a destination for buying a car, for instance. Amazon does, to somewhat the same extent as Dell, offer customization on some products, such as golf clubs or dress shirts, but it’s limited to what the manufacturer is willing to offer, much as with Dell. As for customizing the interface of either Amazon’s or Wal-Mart’s website, there is no evidence that either allows it (Amazon, 2016).
Amazon versus Wal-Mart
Will Wal-Mart be able to beat out Amazon online? It will likely be an interesting battle, especially since Amazon recently became bigger than Wal-Mart by market cap: $246 billion versus $230 billion, respectively. Even though Wal-Mart’s overall sales are still greater than Amazon’s, Amazon is smoking Wal-Mart in eCommerce (D’Onfro, 2015). Amazon’s EC sales have been seeing bigger percentage increases than Wal-Mart’s EC and brick-and-mortar sales combined, with eCommerce’s share of total retail sales rising from a mere 0.6% in 1999 to 7% in 2015, quarterly increases almost triple those of brick-and-mortar.
But other numbers spell out a clearer picture of the differences between Amazon and Wal-Mart. Wal-Mart has far more employees: 2.2 million to Amazon’s 154,100. Net sales are clearly a victory for Wal-Mart, coming in at $482.2 billion versus Amazon’s $88.988 billion. But here is where the difference lies: Amazon’s year-over-year growth has been 20% versus Wal-Mart’s 1.9%; Amazon offers 250 million products versus Wal-Mart’s mere 4.2 million; Amazon adds 75,000 new products per day while Wal-Mart opened 115 new supercenters last year; and Amazon reaches 244 million active users with those 154,100 employees. The numbers tell the story (Peterson, 2015).
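One quick back-of-the-envelope way to read those numbers is net sales per employee, computed directly from the figures just quoted:

```python
# Back-of-the-envelope check using the figures quoted above
# (Peterson, 2015): net sales per employee.

walmart_sales, walmart_staff = 482.2e9, 2_200_000
amazon_sales, amazon_staff = 88.988e9, 154_100

for name, sales, staff in [("Wal-Mart", walmart_sales, walmart_staff),
                           ("Amazon", amazon_sales, amazon_staff)]:
    print(f"{name}: ${sales / staff:,.0f} in net sales per employee")

# Wal-Mart: ~$219,000 per employee; Amazon: ~$577,000 per employee.
```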
Avatars
In EC, avatars have become quite common; they’re used extensively in eLearning and customer support. They were once referred to as picons (personal icons), but that usage has long since stopped. The word avatar derives from Hinduism, where it stands for the “descent” of a deity into terrestrial form (Avatar-Wikipedia, 2015). Using an avatar can certainly be more efficient for the company, since it doesn’t actually have to pay an actor or hire a human to interface with a customer, but some people could be turned off by one. It’s much like going through an automated answering system when you call your insurance company: very frustrating. Since the company using the avatar has to predict what customers will commonly ask, things get difficult for the customer whose question doesn’t quite fit the mold.
But in virtual-world websites like Second Life, the blend of virtual-world EC and the real world creates opportunities for creative marketers. Companies like McDonald’s and Dell have created a few instances of selling real-world products in virtual worlds to real-world customers and delivering them to their real-world addresses (Hemp, 2006).
Banner Advertising
A banner ad is an advertisement usually displayed across the top of a web page or along its side, commonly served up by an ad server and embedded into the web page. Its intention is to attract traffic to an advertiser’s website, which is reachable because the ad hyperlinks to it. Web banners function much like traditional advertisements in print media: they serve to attract immediate attention to whatever the advertiser is selling, in the hope the viewer will be enticed enough to click on the ad. Interestingly, data can be tracked from the ad: how many times it was displayed, how many clicks it drew, and how far those who clicked went into the hyperlinked website, from clicking in and out to actually purchasing the product (Web Banner-Wikipedia, 2015). What makes any ad popular? Banner ads are a quick and easy way to place your wares in front of millions of people at once; traditional full-page newspaper ads don’t get that kind of coverage. Tracking a banner ad is also far simpler than tracking an ad in the yellow pages, and you can advertise just about any product, from automobiles to children’s toys to food. Banner ads are much like billboard advertising in that people likely only take a quick glance, which makes them more appropriate for brand reinforcement than for unique product advertising.
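Here is a minimal sketch of the impression-to-purchase funnel tracking described above. The event counts and placement cost are invented for illustration.

```python
# Minimal sketch of banner ad funnel tracking: impressions, clicks,
# and purchases. All event counts and the placement cost are invented.

impressions, clicks, purchases = 100_000, 420, 9

ctr = clicks / impressions        # click-through rate
conversion = purchases / clicks   # of those who clicked, how many bought
cost = 500.00                     # hypothetical cost of the placement

print(f"CTR: {ctr:.2%}")                                 # 0.42%
print(f"Click-to-purchase: {conversion:.1%}")            # 2.1%
print(f"Cost per acquisition: ${cost / purchases:.2f}")  # $55.56
```

No yellow-pages ad can tell you its cost per acquisition this precisely; that measurability is much of the appeal.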
Conclusion
As you can see, Amazon doesn’t need to fear Wal-Mart running it over in the EC world anytime soon. In fact, Wal-Mart needs to pick up the pace a bit, it would seem (Peterson, 2015). Judging from the numbers, Wal-Mart’s cost per sale is higher than Amazon’s; after all, Amazon is doing much more overall with far less personnel than Wal-Mart.
Avatars and banner ads certainly have their place in the EC world. Banner ad placement is limited: banner ads go wherever the popular medium of the moment is. It used to be newspapers and magazines; now it’s the Internet. Avatars are useful in areas like eLearning or introducing potential customers to a product or service. The hope, with avatars as with banner ads, is that you click through to explore the connected website further and possibly buy.
References:
Amazon.com: Online Shopping for Electronics, Apparel, Computers, Books, DVDs & more. (2016, February 13). Retrieved from https://www.amazon.com
Avatar (computing) – Wikipedia, the free encyclopedia. (2015, October 28). Retrieved February 13, 2016, from https://en.wikipedia.org/wiki/Avatar_%28computing%29
D’Onfro, J. (2015, July 25). Wal-Mart is losing the war against Amazon. Retrieved from http://www.businessinsider.com/wal-mart-ecommerce-vs-amazon-2015-7
Hemp, P. (2006, June). Avatar-Based Marketing. Retrieved from https://hbr.org/2006/06/avatar-based-marketing
Peterson, H. (2015, July 13). The key differences between Wal-Mart and Amazon in one chart. Retrieved from http://www.businessinsider.com/amazon-vs-wal-mart-in-one-chart-2015-7