Imagine you are an HR manager, and you would like to know which employees are likely to stay, and which might leave your company. Besides that, you would like to understand which factors contribute to employees leaving. You have gathered data in the past (well, in this case Kaggle simulated a dataset for you, but just imagine), and now you can start with this Hands-On Lab — Predict Employee Leave — to build your prediction model and see if that can help you.
In this lab, you will learn how to create a machine learning model that predicts whether an employee will stay or leave your company.
We are aware of the limitations of the dataset, but the objective of this hands-on lab is to inspire you to explore the possibilities of using machine learning for your own research, not to build the next HR solution. We created a starting experiment for you on the Azure AI Gallery to give you a smooth start. You will follow several steps to explore the data and build a machine learning model to predict whether an employee will leave or not, and why.
You will build this prediction model with Azure Machine Learning Studio. There are several options to start with Azure ML; you will need a Windows Live ID to sign in. Now you can start with the starting experiment from the Azure AI Gallery, which uses a simulated dataset from Kaggle. Opening it will launch Azure Machine Learning Studio in a browser, and you can copy the experiment to your free workspace.
In the Starting experiment: Predict Employee Leave experiment, you will find the Employee Leave data on the canvas, together with a Summarize Data module. We start by running the experiment: click the Run button in the menu at the bottom. Once it has finished, right-click the output port of the dataset (the little circle under every module on the canvas) and select Visualize. You can scroll through the different columns, and by selecting them you get an overview in the panel on the right.
We can continue inspecting the dataset: another way to get a first impression of the data is the Summarize Data module, which gives us insights about the data, including the variance and distribution of each column. Right-click the output port of the Summarize Data module and select Visualize. Next, we split the data into a training and a test set, so we drag the Split Data module onto the canvas. You can find this module in the menu on the left, next to the canvas: either click through the various options, or use the search function.
When you have found the Split Data module, you can drag it onto the canvas and connect the output port of the dataset to the input port of the Split Data module. You connect the modules by left-clicking on the output port and keeping your mouse button down while dragging to the module you want to connect it to.
We set a seed so we can repeat this experiment. Since we have split the data, we can continue working with the training dataset.
We first select the Train Model module and drag it onto the canvas. When we do so, you will see a little red exclamation mark, indicating that the module still needs to be configured. First we select the dependent variable; to do so, click on the Launch column selector. Furthermore, we have to select the algorithm to train the model with.

The true cost of replacing an employee can often be quite large. In other words, the cost of replacing employees remains significant for most employers. This is due to the amount of time spent interviewing and finding a replacement, sign-on bonuses, and the loss of productivity for several months while the new employee gets accustomed to the new role.
Understanding why and when employees are most likely to leave can lead to actions to improve employee retention, as well as possibly planning new hiring in advance. I will be using a systematic, step-by-step approach that could be applied to a variety of ML problems. In this study, we will attempt to solve the following problem statement:
Given that we have data on former employees, this is a standard supervised classification problem where the label is a binary variable (0 = active employee, 1 = former employee). In this study, our target variable Y is the probability of an employee leaving the company.
I will use this dataset to predict when employees are going to quit by understanding the main drivers of employee churn. First, we import the dataset and make a copy of the source file for this analysis. The dataset contains 35 columns.
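As a sketch of that first step, loading the file with pandas and working on a copy might look like the following; the filename in the comment and the tiny stand-in frame are illustrative assumptions, not the article's actual code:

```python
import pandas as pd

# In practice you would load the attrition CSV from Kaggle, e.g.:
#   df_source = pd.read_csv("employee_attrition.csv")  # hypothetical filename
# Here a tiny stand-in frame keeps the example self-contained:
df_source = pd.DataFrame({
    "Age": [41, 49, 37],
    "Attrition": ["Yes", "No", "Yes"],
    "JobRole": ["Sales Executive", "Research Scientist", "Laboratory Technician"],
})

# Work on a copy so the raw source data stays untouched
df = df_source.copy()
print(df.shape)  # (number of rows, number of columns)
```

From here, `df.info()` and `df.isnull().sum()` are the usual next calls for checking column types and missing values.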
The data provided has no missing values. In HR analytics, employee data is unlikely to feature a large ratio of missing values, as HR departments typically have all personal and employment data on file, although the form in which that documentation is kept varies. A few observations can be made based on the information and histograms for the numerical features. In this section, a more detailed Exploratory Data Analysis is performed.
The age distributions for active and ex-employees differ by only about one year on average. A kernel density estimation (KDE) is a non-parametric way to estimate the probability density function of a random variable. The gender distribution shows that the dataset features a higher relative proportion of male ex-employees than female ex-employees. The dataset features three marital statuses: Married, Single, and Divorced.
Travel metrics associated with the Business Travel status were not disclosed.
The average number of years at the company for currently active employees is 7, and the average number of years with their current manager is 4. Some employees have overtime commitments, and the data clearly show that a significantly larger proportion of employees with overtime have left the company.
In the supplied dataset, current employees make up the large majority of records, so this is an imbalanced class problem. Machine learning algorithms typically work best when the number of instances of each class is roughly equal, so we will have to address this target-feature imbalance prior to implementing our machine learning algorithms. It is worth remembering that correlation coefficients only measure linear correlations. In this section, we undertake data pre-processing steps to prepare the dataset for machine learning algorithm implementation.
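To make the imbalance concrete, here is a minimal sketch with a toy label column; the 84/16 split and the remedies named in the comment are illustrative assumptions, not figures or choices taken from the text:

```python
import pandas as pd

# Toy target column standing in for the dataset's Attrition label
y = pd.Series(["No"] * 84 + ["Yes"] * 16)

# Relative class frequencies make the imbalance visible
balance = y.value_counts(normalize=True)
print(balance)

# Common remedies include oversampling the minority class, or passing
# class_weight="balanced" to a scikit-learn classifier.
```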
Machine learning algorithms can typically only take numerical values as their predictor variables, so label encoding becomes necessary: it encodes categorical labels with numerical values. However, label encoding imposes an artificial ordering; to avoid introducing spurious feature importance for categorical features with large numbers of unique values, we will use both Label Encoding and One-Hot Encoding as shown below.
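A small sketch of the two encodings with scikit-learn and pandas, assuming a binary Attrition label and a multi-valued MaritalStatus column (column names borrowed from the dataset described above; the rows are invented for illustration):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({
    "Attrition": ["Yes", "No", "No", "Yes"],                       # binary -> label-encode
    "MaritalStatus": ["Single", "Married", "Divorced", "Single"],  # multi-valued -> one-hot
})

# Label Encoding for binary categories (No -> 0, Yes -> 1)
le = LabelEncoder()
df["Attrition"] = le.fit_transform(df["Attrition"])

# One-Hot Encoding for features with several unique values,
# so no artificial ordering is introduced
df = pd.get_dummies(df, columns=["MaritalStatus"])
print(df.columns.tolist())
```

Label Encoding keeps binary columns compact, while One-Hot Encoding avoids implying an order among Single, Married, and Divorced.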
Feature Scaling using MinMaxScaler shrinks the values of each feature into a specified range, by default between 0 and 1.
Machine Learning algorithms perform better when input numerical variables fall within a similar scale. In this case, we are scaling between 0 and 5.
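A minimal sketch with scikit-learn's MinMaxScaler; the single Age column is an illustrative assumption:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[18.0], [30.0], [60.0]])  # e.g. an Age column

# feature_range=(0, 5) rescales each feature into [0, 5];
# the default range is (0, 1)
scaler = MinMaxScaler(feature_range=(0, 5))
X_scaled = scaler.fit_transform(X)
print(X_scaled.ravel())  # the minimum maps to 0, the maximum to 5
```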
Prior to applying any machine learning algorithms, we must split training and testing dataframes from our master dataset. Classification accuracy, the number of correct predictions made as a ratio of all predictions made, is the most common evaluation metric for classification problems.
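The split and the accuracy metric can be sketched as follows; the synthetic data and the logistic regression model are stand-ins, since the text does not fix a specific algorithm at this point:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the encoded HR features and attrition label
X, y = make_classification(n_samples=200, n_features=8, random_state=7)

# Hold out a test set before any model fitting; stratify=y keeps the
# class ratio similar in both splits, which matters for imbalanced data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=7
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy: {acc:.2f}")
```

Note that on an imbalanced target, plain accuracy can look good even for a useless model, which is why the imbalance discussed earlier needs handling.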
Trying to predict whether the best and most experienced employees leave prematurely, based on the features listed above, using vanilla neural network techniques.
The original dataset is stored in the 'Original Kaggle Dataset' folder; the cleaned data and code are stored in the 'cleaned data' folder. All programming was carried out in Matlab.
Open Data Stack Exchange is a question and answer site for developers and researchers interested in open data. HR datasets would probably have to be anonymized. OPM is the focal point for providing statistical information about the Federal civilian workforce.
Customers include Federal government agencies, researchers, the media, and the general public. I've been compiling datasets related to HR for a while now and store them on my GitHub repository, not always with all the attribution I should. Another dataset is at Kaggle, and IBM hosts a popular dataset on employee attrition.
Are there any open datasets for Human Resources?
All these datasets are contrived, fictional, or simulated; none of them are real data, but they were all released as open source.
Every now and then I enjoy hopping over to Kaggle to see if there are any interesting datasets that I may want to play with. Earlier this week I came across a fictional dataset on staff attrition.
You can find the dataset using the link below. The dataset was shared on Kaggle, and I had other questions to ask. The dataset is simulated and contained the following fields. To solve this problem, I decided to use a combination of three types of analytics.
I think I should point out when exactly this sort of thing would be used in the real world. Integrating the model into the organization's existing HR processes would fit it seamlessly into the workflow of the organization.
I intend to learn TensorFlow very soon, however. For this task I created a model with one input layer, four hidden layers, and an output layer. With the model created, I can predict who will leave the organization; the next thing I needed to find out was why. For this, I decided to use correlation analysis to determine which factors contribute the most to staff attrition. I found that the variable most correlated with attrition was job satisfaction, with a negative coefficient: the likelihood of departure increases as job satisfaction decreases (makes sense, right?).
To find ways to address job satisfaction among employees, I did a deeper investigation into the factors that contribute to it. Another correlation analysis on job satisfaction revealed that the time spent at the company and the number of projects worked on had the greatest impact (this was true both for staff who left and staff who stayed).
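That kind of correlation analysis can be sketched with pandas as below; the column names (left, satisfaction_level, time_spend_company) follow the Kaggle simulated dataset's conventions, but both they and the toy rows are assumptions here, since the article does not show its code:

```python
import pandas as pd

# Toy frame standing in for the blog's staff attrition data
df = pd.DataFrame({
    "left":               [1, 0, 1, 0, 1, 0],      # 1 = employee left
    "satisfaction_level": [0.2, 0.9, 0.3, 0.8, 0.1, 0.7],
    "time_spend_company": [5, 3, 6, 2, 4, 3],
})

# Pairwise Pearson correlations; the column of interest is "left"
corr_with_attrition = df.corr()["left"].drop("left").sort_values()
print(corr_with_attrition)
```

Keep in mind that `.corr()` captures only linear relationships, so nonlinear drivers of attrition can be missed.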
Now that I had a machine learning model that can predict whether or not a member of staff may leave, and an idea of what contributes to job satisfaction, I had what I needed to build a system that creates recommendations for improving job satisfaction for employees who may be leaving soon.
Now that I have a retention profile, the next step is to create a system that incorporates the model I built earlier to provide users with recommendations on dealing with the members of staff who are likely to leave the company. The logic I came up with is fairly simple, and the final product is an actionable insight.
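The article does not show its exact rules, but the kind of simple logic it describes might be sketched like this; the threshold and the recommendation strings are purely illustrative assumptions:

```python
def recommend(will_leave: bool, satisfaction: float, threshold: float = 0.5) -> str:
    """Illustrative sketch: flag at-risk staff and suggest an action tied
    to the main driver (job satisfaction). Not the article's exact logic."""
    if not will_leave:
        return "No action needed"
    if satisfaction < threshold:
        # Satisfaction was driven by tenure and project load in the analysis
        return "At risk: discuss workload and project assignments"
    return "At risk: schedule a retention conversation"

print(recommend(True, 0.3))
```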
Prescriptive analytics can be useful for bolstering the decision-making power of staff who may be new to managerial positions, or who may be acting in or being trained for such positions. I suppose that in some cases it can even be used for evaluating potential candidates for such positions. I personally feel that for this type of scenario, prescriptive analytics can be useful for getting to the root cause of employee dissatisfaction and finding ways to strengthen the relationship between staff and management.
I want to thank Natasha for the blog on People Analytics, which provided the background knowledge for me to understand Human Resources departments.
I also want to thank the IBM analysts for providing the dataset on Kaggle. Companies hire many employees every year. To create a positive working and learning environment, firms invest time and money in training new members and in getting existing employees involved as well. These programs aim to increase the effectiveness of the employees, so that the firm as a whole can have better output in the long run.
The single most important feature we are interested in is attrition. Attrition in human resources refers to the gradual loss of employees over time, and relatively high attrition is generally problematic for companies. Human resource professionals often assume a leadership role in designing company compensation programs, work culture, and motivation systems that help the organization retain top employees.
There are 35 features in total.
Among the target, Attrition, the candidates marked Yes are the employees who left. From our analysis below, we see that Laboratory Technicians who spent a year at the firm and then left form the largest group, followed by Sales Representatives who stayed at the firm for a year, and then Research Scientists who stayed for a year and left. These are the top three demographics that contribute the most to attrition.
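A sketch of the kind of breakdown behind those observations, using pandas on a tiny stand-in frame (the column names match the IBM attrition dataset; the rows are invented for illustration):

```python
import pandas as pd

# Tiny stand-in for the IBM attrition data
df = pd.DataFrame({
    "Attrition":      ["Yes", "Yes", "No", "Yes", "No"],
    "JobRole":        ["Laboratory Technician", "Sales Representative",
                       "Laboratory Technician", "Laboratory Technician",
                       "Research Scientist"],
    "YearsAtCompany": [1, 1, 5, 1, 3],
})

# Which roles leave most often after one year at the firm?
one_year_leavers = df[(df["Attrition"] == "Yes") & (df["YearsAtCompany"] == 1)]
print(one_year_leavers["JobRole"].value_counts())
```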
Recipes are community-created content.
The recipe has been replaced by an official IBM Developer tutorial; please use the tutorial instead. The recipe follows the main steps of data science and data mining methods such as CRISP-DM (Cross Industry Standard Process for Data Mining) and the IBM Data Science Methodology, and focuses on tasks for data understanding, data preparation, modeling, evaluation, and deployment of a machine learning model for predictive analytics. At the same time, the recipe also dives into the use of the profiling tool and the dashboards of IBM Watson Studio to support data understanding, as well as the Refine tool to solve straightforward data preparation and transformation tasks.
IBM has defined a Data Science Methodology that consists of 10 stages that form an iterative process for using data to uncover insights. Each stage plays a vital role in the context of the overall methodology. According to both methodologies every project starts with Business Understanding where the problem and objectives are defined.
This is followed in the IBM Data Science Method by the Analytical Approach phase where the data scientist can define the approach to solving the problem. Once the Data Scientist has an understanding of their data and has sufficient data to get started, they move on to the Data Preparation phase. This phase is usually very time consuming. During and after cleaning the data, the data scientist generally performs exploration — such as descriptive statistics to get an overall feel for the data and clustering to look at the relationships and latent structure of the data.
This process is often iterated several times until the data scientist is satisfied with their dataset. The model training stage is where machine learning is used in building a predictive model. This model is trained and then evaluated by statistical measures such as prediction accuracy, sensitivity, and specificity.
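Sensitivity and specificity can be computed from a confusion matrix as sketched below; the example labels are invented for illustration:

```python
from sklearn.metrics import confusion_matrix

# Toy ground-truth and predicted labels (1 = churn, 0 = no churn)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# With labels 0 and 1, ravel() yields tn, fp, fn, tp in that order
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # true positive rate (recall)
specificity = tn / (tn + fp)  # true negative rate
print(sensitivity, specificity)
```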
Once the model is deemed sufficient, the model is deployed and used for scoring on unseen data. The IBM Data Science Methodology adds an additional Feedback stage for obtaining feedback from using the model which will then be used to improve the model. Both methods are highly iterative by nature.
In this recipe we will focus on the phases starting with data understanding and then continue from there: preparing the data, building a model, evaluating the model, and then deploying and testing the model. The purpose will be to develop models to predict customer churn. Aspects related to analyzing the causes of this churn in order to improve the business are, on the other hand, out of the scope of this recipe. This means that we will be working with various kinds of classification models that can, given an observation of a customer defined by a set of features, predict whether this specific client is at risk of churning or not.
IBM Watson Studio provides users with an environment and tools to solve business problems by collaboratively working with data. Users can choose the tools needed to analyze and visualize data, to cleanse and shape data, to ingest streaming data, or to create, train, and deploy machine learning models. In the context of data science, IBM Watson Studio can be viewed as an integrated, multi-role collaboration platform that supports the developer, data engineer, business analyst, and last but not least the data scientist in the process of solving a data science problem.
For the developer role, other components of the IBM Cloud platform may be relevant as well in building applications that utilize machine learning services. The data scientist, however, can build the model using a variety of tools: RStudio and Jupyter Notebooks for a programmatic style, SPSS Modeler Flows for a diagrammatic style, or the Model Builder component of the IBM Watson Machine Learning service, which supports a semi-automated style of generating machine learning models.