Computer Science (AI/ML) • Final Year

Parusha Sindhu

Machine Learning Engineer focused on building interpretable and fair AI systems

I design ML models that don’t just predict — they explain decisions, uncover bias, and ensure reliable outcomes in real-world applications.

97%
Model Accuracy
3
Production Projects
3 Years
NCC Cadet
model_explainability.py
# SHAP-based feature importance
import shap
import xgboost

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Generate interpretable visualization
shap.summary_plot(
    shap_values, X_test,
    plot_type="bar"
)

# Fairness audit across demographics
from fairlearn.metrics import demographic_parity_difference

dpd = demographic_parity_difference(
    y_true, y_pred,
    sensitive_features=groups
)
print(f"Parity score: {dpd:.3f}")

Why Explainability & Fairness Matter

In production ML systems, a model's accuracy is only half the story. Stakeholders need to understand why a decision was made, and regulators demand proof that systems don't discriminate.

I specialize in building ML pipelines that are both high-performing and interpretable — using SHAP and LIME to surface feature importance, and fairness frameworks to audit bias before deployment.

Production-Ready Code

Clean, documented pipelines that integrate with existing systems

Model Interpretability

SHAP and LIME explanations that non-technical stakeholders can understand

Fairness Auditing

Quantified bias metrics across demographic groups pre-deployment
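The demographic parity check behind that audit is simple to state: compare positive-prediction rates across groups. A minimal numpy sketch of the underlying math (the predictions and group labels here are illustrative placeholders, not data from the projects below):

```python
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Toy predictions for two demographic groups (illustrative data only)
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

gap = demographic_parity_difference(y_pred, groups)
print(f"Parity gap: {gap:.3f}")  # selection rates 0.75 vs 0.25 -> gap 0.500
```

A gap of zero means both groups are selected at the same rate; libraries such as Fairlearn compute the same quantity with more bookkeeping.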

Model Performance Metrics
Accuracy
97.2%
Precision
95.8%
Recall
94.3%
F1-Score
95.0%
Tech Stack
Python
XGBoost
SHAP
MongoDB

Key Projects

Real-world ML systems with measurable impact

01

XAI Cultural Heritage Prediction

Explainable AI · Multi-class Classification · SHAP

Developed an ML pipeline to predict cultural heritage site significance using ensemble models (XGBoost, CatBoost). Integrated SHAP for per-prediction feature attribution, enabling archaeologists and policy-makers to understand why each site received its classification. Delivered interactive visualizations showing how region, material composition, and structural age influence predictions.

SHAP-driven explanations for every prediction
Ensemble models for robust classification
Interactive dashboards for stakeholders
View on GitHub
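The per-prediction attribution idea can be shown without the SHAP library: a feature's Shapley value is its average marginal contribution over all coalitions of the other features. A self-contained brute-force sketch on a hypothetical linear scorer (the weights, inputs, and "significance" framing are illustrative, not the heritage pipeline itself):

```python
import itertools
import math

import numpy as np

def shapley_values(predict, x, baseline):
    """Exact Shapley values by enumerating feature coalitions.
    Features outside the coalition stay at their baseline value."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):  # coalition sizes 0 .. n-1
            # Weight = |S|! * (n - |S| - 1)! / n!
            weight = (math.factorial(k) * math.factorial(n - k - 1)
                      / math.factorial(n))
            for subset in itertools.combinations(others, k):
                z = baseline.astype(float).copy()
                idx = list(subset)
                z[idx] = x[idx]
                without_i = predict(z)  # coalition without feature i
                z[i] = x[i]
                with_i = predict(z)     # coalition with feature i
                phi[i] += weight * (with_i - without_i)
    return phi

# Hypothetical linear "site significance" scorer (weights are made up)
w = np.array([2.0, -1.0, 0.5])
predict = lambda z: float(w @ z + 0.3)

x = np.array([1.0, 2.0, 4.0])  # input to explain
baseline = np.zeros(3)         # reference point

phi = shapley_values(predict, x, baseline)
print(phi)  # matches w * (x - baseline): [2., -2., 2.]
```

For a linear model the brute-force result reduces to w_i * (x_i - baseline_i), which is a handy sanity check; SHAP's TreeExplainer computes the same quantity efficiently for tree ensembles.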
02

Fairness AI Dashboard

Full-Stack · Bias Auditing · MongoDB

Built a production-grade bias evaluation system to audit trained models across demographic groups. It computes disparate impact, equalized odds, and demographic parity metrics in real time; the backend stores audit logs in MongoDB, while the frontend dashboard visualizes fairness gaps and flags models that fail regulatory thresholds before deployment.

Real-time bias metric computation
MongoDB audit trail & versioning
Group-level performance comparison
View on GitHub
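The flagging logic of such a dashboard can be sketched with the common four-fifths rule for disparate impact plus an equalized-odds gap. A minimal numpy version, with illustrative audit data and a hypothetical 0.8 cutoff (the group names and threshold are assumptions, not the project's actual configuration):

```python
import numpy as np

def disparate_impact(y_pred, groups, protected, reference):
    """Ratio of selection rates: protected group over reference group."""
    p = y_pred[groups == protected].mean()
    r = y_pred[groups == reference].mean()
    return p / r

def equalized_odds_gap(y_true, y_pred, groups):
    """Largest TPR or FPR gap between groups."""
    gaps = []
    for label in (1, 0):  # TPR over positives, FPR over negatives
        rates = [y_pred[(groups == g) & (y_true == label)].mean()
                 for g in np.unique(groups)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Illustrative audit data (not from the deployed system)
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
groups = np.array(["A"] * 4 + ["B"] * 4)

di = disparate_impact(y_pred, groups, protected="B", reference="A")
eo = equalized_odds_gap(y_true, y_pred, groups)
flag = di < 0.8  # hypothetical regulatory threshold (four-fifths rule)
print(f"Disparate impact: {di:.2f} (flagged: {flag}), equalized-odds gap: {eo:.2f}")
```

In a real pipeline each audit result, along with the model version, would be appended to the MongoDB audit trail before any deployment decision.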
03

Fetal Health Classification

Healthcare ML · CatBoost · 97% Accuracy

Trained a high-precision CatBoost classifier on cardiotocography data to detect fetal distress across three health classes (Normal, Suspect, Pathological). Achieved 97.2% accuracy through feature engineering, SMOTE-based class balancing, and Bayesian hyperparameter tuning. Validated with confusion matrix analysis showing >91% recall on critical Pathological class.

97.2% accuracy, 91%+ recall on critical class
CatBoost with Bayesian optimization
Full confusion matrix + precision/recall
View on GitHub

Technical Expertise

Production-grade tools and frameworks

Machine Learning

Python
Scikit-learn
XGBoost
CatBoost

Explainability & Fairness

SHAP
LIME
Fairlearn
AI Fairness 360

Data & Infrastructure

MongoDB
SQL
Flask
Git

Visualization

Matplotlib
Seaborn
Plotly
Tableau

Leadership & Discipline

National Cadet Corps (NCC) — Lance Corporal

3 years of service building leadership, discipline, and teamwork under structured military-style training. Promoted to Lance Corporal for demonstrated command capabilities and consistent performance.

Let's Work Together

I'm actively seeking ML engineering roles and research collaborations in Explainable AI and Algorithmic Fairness. Open to full-time positions, internships, and impactful projects.