Predicting Diabetes Using Machine Learning – A Medium-Level ML Project
This project demonstrates how to predict whether a patient has diabetes using machine learning. We use the Pima Indians Diabetes dataset and walk through a full ML workflow including data cleaning, model building, and evaluation.
Tools Required: Python, pandas, scikit-learn, matplotlib, seaborn
Step 1: Import Required Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
Step 2: Load the Dataset
Download the dataset from Kaggle (Pima Indians Diabetes Dataset) and save it as diabetes.csv in your working directory:
df = pd.read_csv("diabetes.csv")
df.head()
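Before going further, it helps to confirm the file loaded as expected. A quick sanity check (a minimal sketch; the shapes and class counts below assume the standard Kaggle version of this dataset):

print(df.shape)                      # expected: (768, 9)
print(df.columns.tolist())           # Pregnancies, Glucose, ..., Outcome
print(df['Outcome'].value_counts())  # class balance: roughly 500 non-diabetic vs 268 diabetic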
Step 3: Explore and Clean the Data
df.info()
df.describe()
Several health-related columns use 0 as a placeholder for missing measurements (a glucose or BMI of 0 is not physiologically possible), so replace those zeros with NaN and then fill the gaps with each column's mean:
cols = ['Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI']
df[cols] = df[cols].replace(0, np.nan)
df.fillna(df.mean(), inplace=True)
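After imputation there should be no missing values left. A quick verification sketch:

print(df.isnull().sum())   # every column should now report 0 missing values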
Step 4: Visualize the Data
sns.countplot(x='Outcome', data=df)
plt.title("Diabetes Count (0 = No, 1 = Yes)")
plt.show()
sns.pairplot(df, hue='Outcome')
plt.show()
Step 5: Split and Scale the Data
X = df.drop('Outcome', axis=1)
y = df['Outcome']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
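A note on the split: the Outcome classes are imbalanced (roughly 2:1), so if you want the train and test sets to mirror that ratio you can pass stratify=y. This is a variant sketch of the split above; if you use it, run it in place of the plain split and redo the scaling afterwards:

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y  # keep the class ratio in both splits
)

Scaling is not strictly required for Random Forests, but keeping the scaler makes the same pipeline ready for models that do need it, such as logistic regression.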
Step 6: Train the Model
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
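Accuracy measured on a single 80/20 split can be noisy with only 768 rows. An optional cross-validation check (a minimal sketch using scikit-learn's cross_val_score on the full, unscaled data, which is fine for a tree-based model):

from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=42),
                            X, y, cv=5, scoring='accuracy')
print("5-fold CV accuracy: %.3f +/- %.3f" % (cv_scores.mean(), cv_scores.std()))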
Step 7: Make Predictions and Evaluate
y_pred = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
print("Classification Report:\n", classification_report(y_test, y_pred))
# Confusion matrix
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title('Confusion Matrix')
plt.show()
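Accuracy alone can be misleading when the classes are imbalanced, so it is worth adding a probability-based metric. A short sketch using the model's predicted probabilities and roc_auc_score:

from sklearn.metrics import roc_auc_score

y_proba = model.predict_proba(X_test)[:, 1]   # probability of the positive (diabetic) class
print("ROC AUC:", roc_auc_score(y_test, y_proba))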
Step 8: Feature Importance
importances = model.feature_importances_
features = X.columns  # the same feature columns used to train the model (everything except Outcome)
plt.barh(features, importances)
plt.xlabel("Feature Importance")
plt.title("Important Features in Prediction")
plt.show()
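The bars are easier to read when sorted by importance. A small variant of the same plot:

order = np.argsort(importances)                 # ascending, so the most important feature ends up on top
plt.barh(features[order], importances[order])
plt.xlabel("Feature Importance")
plt.title("Important Features in Prediction (sorted)")
plt.show()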
Step 9: Conclusion
- Random Forest gives solid accuracy on this task (typically in the mid-to-high 70% range on this dataset).
- Glucose, BMI, and Age were highly important features.
- This workflow can be adapted to other health or classification problems.
What's Next?
- Try logistic regression or XGBoost for comparison.
- Tune hyperparameters using GridSearchCV (see the sketch after this list).
- Deploy this model using Streamlit or Flask for doctors to use!
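As a starting point for the tuning idea above, here is a hedged sketch of a small grid search over the Random Forest. The parameter grid is just an illustrative guess, not a recommendation:

from sklearn.model_selection import GridSearchCV

param_grid = {
    'n_estimators': [100, 200, 500],
    'max_depth': [None, 5, 10],
    'min_samples_leaf': [1, 2, 4],
}
grid = GridSearchCV(RandomForestClassifier(random_state=42),
                    param_grid, cv=5, scoring='accuracy', n_jobs=-1)
grid.fit(X_train, y_train)
print("Best parameters:", grid.best_params_)
print("Best CV accuracy:", grid.best_score_)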