- Project Overview
- Mathematical Model
- Implementation Tasks
- Key Results and Findings
- Technical Implementation
- Installation and Usage
- Output Files
- Conclusions
This project implements a comprehensive computational framework for modeling and analyzing cytotoxic drug effects on tumor growth using reaction-diffusion partial differential equations (PDEs). The work focuses on understanding chemotherapy effectiveness, tumor resistance development, and immune response dynamics through multiple computational approaches.
- Model tumor-drug dynamics with spatial heterogeneity using 2D PDEs
- Compare numerical methods for solving coupled reaction-diffusion systems
- Develop surrogate models to accelerate parameter exploration
- Perform sensitivity analysis to identify critical parameters
- Implement data assimilation techniques for parameter inference
- Apply machine learning methods (neural networks, PINNs) for prediction
- Create hybrid models combining mechanistic and data-driven approaches
Cancer chemotherapy faces a fundamental challenge: drug resistance. This project models the spatiotemporal evolution of:
- Sensitive tumor cells that respond to treatment
- Resistant tumor cells that evade cytotoxic effects
- Immune response that targets tumor cells
- Drug concentration affected by diffusion, clearance, and dosing
The model tracks four spatiotemporal fields over a 2D domain Ω:
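An illustrative form of the governing reaction-diffusion system, sketched here to be consistent with the parameter table below; the exact reaction terms (in particular the immune interactions) are defined in `2_tumor_diffusion_pde_analysis.py` and may differ in detail, and $D_S$, $D_R$, $D_I$ are hypothetical names for the cell diffusion coefficients:

$$
\begin{aligned}
\frac{\partial S}{\partial t} &= D_S \nabla^2 S + \rho_S\, S\left(1 - \frac{S+R}{K}\right) - \alpha_S\, C\, S - \mu(C)\, S - (\text{immune kill}),\\
\frac{\partial R}{\partial t} &= D_R \nabla^2 R + \rho_R\, R\left(1 - \frac{S+R}{K}\right) - \alpha_R\, C\, R + \mu(C)\, S - (\text{immune kill}),\\
\frac{\partial I}{\partial t} &= D_I \nabla^2 I + (\text{immune recruitment and decay}),\\
\frac{\partial C}{\partial t} &= D_C \nabla^2 C - \lambda\, C + I_{in}(t).
\end{aligned}
$$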
Where:

- $S(x,y,t)$: Sensitive tumor cell density
- $R(x,y,t)$: Resistant tumor cell density
- $I(x,y,t)$: Immune cell density
- $C(x,y,t)$: Drug concentration
| Parameter | Description | Typical Value |
|---|---|---|
| $D_C$ | Drug diffusion coefficient | 0.01 - 0.05 |
| $\alpha_S$ | Drug kill rate (sensitive) | 0.8 |
| $\alpha_R$ | Drug kill rate (resistant) | 0.1 |
| $\rho_S$, $\rho_R$ | Growth rates | 0.04, 0.03 |
| $\mu_{\text{max}}$ | Max resistance acquisition rate | 0.05 |
| $\lambda$ | Drug clearance rate | 0.2 |
| $K$ | Carrying capacity | 1.0 |
Resistance acquisition follows a Hill function:
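A standard Hill form consistent with the parameter table (the exponent $n$ and half-saturation constant $C_{50}$ are illustrative names whose values are set in the script):

$$\mu(C) = \mu_{\text{max}}\,\frac{C^{n}}{C_{50}^{n} + C^{n}}$$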
This captures the nonlinear relationship where higher drug concentrations accelerate resistance development.
Three dosing schemes are implemented:
- Bolus periodic: $I_{in}(t) = A$ during brief pulses every $T_{\text{period}}$
- Continuous infusion: $I_{in}(t) = \text{const}$
- No treatment: $I_{in}(t) = 0$
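A minimal Python sketch of these dosing inputs (the pulse width and the exact amplitude/period values are assumptions; the scripts define the actual schedule):

```python
import numpy as np

def dosing_input(t, scheme="bolus", A=1.0, T_period=5.0, pulse_width=0.2):
    """Dosing term I_in(t) for the three schemes described above."""
    if scheme == "bolus":        # brief pulse of amplitude A every T_period
        return A if (t % T_period) < pulse_width else 0.0
    if scheme == "continuous":   # constant infusion
        return A
    return 0.0                   # no treatment

# Example: sample the bolus schedule over the first period
print([dosing_input(t) for t in np.arange(0.0, 5.0, 1.0)])
```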
File: 2_tumor_diffusion_pde_analysis.py
- Implement two numerical PDE solvers: explicit and semi-implicit
- Compare performance, stability, and accuracy
- Benchmark computational efficiency across grid resolutions
Explicit Solver (Forward Euler)
- Time-stepping: Forward Euler for both reaction and diffusion
- Spatial discretization: 5-point Laplacian stencil
- CFL stability constraint: $\Delta t \leq \frac{\Delta x^2}{4 D_{\text{max}}}$
- Advantages: Simple, fast per step
- Disadvantages: Strict timestep limitation
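A minimal sketch of one explicit step for the drug field, assuming no-flux boundaries and illustrative parameter values (not the project's exact implementation):

```python
import numpy as np

def laplacian_5pt(u, dx):
    """5-point Laplacian with no-flux boundaries via edge padding."""
    up = np.pad(u, 1, mode="edge")
    return (up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:]
            - 4.0 * u) / dx**2

def explicit_drug_step(C, dt, dx, D_C=0.02, lam=0.2, I_in=0.0):
    """Forward-Euler update: diffusion, first-order clearance, dosing input."""
    return C + dt * (D_C * laplacian_5pt(C, dx) - lam * C + I_in)

# Example: CFL-limited timestep on a 96x96 unit-square grid
N, L = 96, 1.0
dx = L / (N - 1)
dt = 0.24 * dx**2 / 0.02          # just below the CFL limit dx^2 / (4 * D_C)
C = np.zeros((N, N))
C = explicit_drug_step(C, dt, dx, I_in=1.0)
```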
Semi-Implicit Solver (Operator Splitting)
- Reaction step: Explicit (nonlinear terms)
- Diffusion step: Implicit (Crank-Nicolson or Backward Euler)
- Uses sparse linear algebra (`scipy.sparse`)
- Unconditionally stable for diffusion
- Advantages: Larger timesteps allowed
- Disadvantages: Matrix solve overhead
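A minimal sketch of the implicit diffusion step using `scipy.sparse` (backward Euler shown for brevity; the boundary treatment and the Crank-Nicolson variant are simplified assumptions):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def diffusion_matrix(N, dx, D):
    """Sparse 2D Laplacian (simple Dirichlet-style boundaries), scaled by D."""
    one_d = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dx**2
    eye = sp.identity(N)
    return D * (sp.kron(eye, one_d) + sp.kron(one_d, eye))

N, dx, D_C, dt = 96, 1.0 / 95, 0.02, 0.01
A = (sp.identity(N * N) - dt * diffusion_matrix(N, dx, D_C)).tocsc()
solve = spla.factorized(A)                  # factor once, reuse every time step

C = np.zeros((N, N)); C[N // 2, N // 2] = 1.0
C = solve(C.ravel()).reshape(N, N)          # one backward-Euler diffusion step
```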
Solver Comparison - Tumor Burden Evolution
Both solvers produce nearly identical tumor burden trajectories when using comparable timesteps, validating implementation correctness.
Spatial Field Distributions (Explicit Solver)
The 2D heatmaps show:
- S: Sensitive cells concentrate in tumor core
- R: Resistant cells emerge at tumor periphery
- I: Immune response localizes to tumor boundary
- C: Drug diffuses from periphery inward
Semi-Implicit Solver Results
Semi-implicit method shows excellent agreement with explicit solver but allows ~10× larger timesteps.
Performance Benchmark
Computational time grows with grid resolution $N$ for both solvers; the measured scaling is reported in `benchmark_time_vs_N.png`.
Stability Analysis
Explicit solver becomes unstable when $\Delta t$ exceeds the CFL limit $\frac{\Delta x^2}{4 D_{\text{max}}}$.
- Semi-implicit solver is 10× faster for production runs (larger $\Delta t$ allowed)
- Both methods agree to <1% error when properly configured
- Grid resolution $N=96$ provides a good accuracy/speed tradeoff
- Tumor burden integral $\int_\Omega (S+R)\, dA$ is a robust metric for model validation
File: 3_tumor_ode_surrogate_nn.py
- Develop a 0D ODE reduction of the PDE system (spatially-averaged)
- Train a neural network to approximate ODE dynamics
- Compare PDE ↔ ODE ↔ NN predictions
By spatially averaging the PDE system, we obtain a four-variable ODE system for the mean fields $\bar{S}, \bar{R}, \bar{I}, \bar{C}$, driven by the same reaction terms with the spatial transport averaged out. This reduces the problem from ~10,000 spatial DOFs to 4 ODEs, enabling rapid parameter exploration.
- Input: Time $t$
- Output: Tumor burden $\text{TB}(t) = \bar{S}(t) + \bar{R}(t)$
- Architecture: MLP with 3 hidden layers ([32, 64, 32] neurons)
- Activation: Tanh
- Training: 80/20 train/test split, MSE loss
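A minimal PyTorch sketch matching the architecture above (the optimizer, learning rate, and placeholder training data are assumptions):

```python
import torch
import torch.nn as nn

class TBSurrogate(nn.Module):
    """MLP mapping time t to tumor burden TB(t): hidden layers [32, 64, 32], Tanh."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, 32), nn.Tanh(),
            nn.Linear(32, 64), nn.Tanh(),
            nn.Linear(64, 32), nn.Tanh(),
            nn.Linear(32, 1),
        )

    def forward(self, t):            # t: shape (batch, 1)
        return self.net(t)

model = TBSurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training loop sketch: in the project, (t_train, tb_train) come from the ODE surrogate.
t_train = torch.linspace(0, 10, 200).unsqueeze(1)
tb_train = torch.exp(-0.1 * t_train)             # placeholder target data
for epoch in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(t_train), tb_train)
    loss.backward()
    opt.step()
```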
PDE vs ODE vs NN Comparison
The neural network successfully learns the complex tumor burden dynamics from ODE-generated data.
NN Training Performance
Training converges in ~2000 epochs with final MSE < 0.001.
Extrapolation Test
The NN generalizes well within the training time range but shows degradation when extrapolating beyond it.
- ODE surrogate captures mean-field dynamics with 99% accuracy vs PDE
- Neural network achieves <1% error on test data
- ODE is 1000× faster than PDE for parameter sweeps
- NN extrapolation degrades — physics-informed approaches needed
File: 4_sensetivity_analysis.py
- Identify most influential parameters on treatment outcomes
- Quantify parameter interactions using global sensitivity analysis
- Guide experimental design and parameter estimation priorities
Morris Screening Method
- Elementary effects analysis
- Identifies parameters with largest mean and standard deviation of effects
- Computationally cheap ($N = 100$-$500$ evaluations)
Sobol Sensitivity Analysis
- Variance-based global sensitivity
- Computes first-order ($S_1$) and total-order ($S_T$) indices
- Accounts for parameter interactions
- More expensive ($N = 10{,}000$+ evaluations)
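A minimal SALib sketch of the Morris screening step (the parameter bounds and the stand-in model are illustrative; the project evaluates the ODE surrogate and records the output metrics listed below):

```python
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze import morris

problem = {
    "num_vars": 3,
    "names": ["alpha_S", "mu_max", "D_C"],
    "bounds": [[0.4, 1.2], [0.01, 0.1], [0.01, 0.05]],   # assumed ranges
}

X = morris_sample(problem, N=100, num_levels=4)

def final_tumor_burden(theta):
    # Placeholder for an ODE-surrogate run returning TB_final for parameters theta.
    alpha_S, mu_max, D_C = theta
    return np.exp(-alpha_S) + mu_max - D_C

Y = np.array([final_tumor_burden(x) for x in X])
res = morris.analyze(problem, X, Y, num_levels=4)
print(res["mu_star"], res["sigma"])   # mean |elementary effect| and spread per parameter
```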
- TB_final: Final tumor burden
- TB_min: Minimum tumor burden during treatment
- t_half: Time to reach half of initial burden
- R_frac_final: Fraction of resistant cells at end
- C_AUC: Area under drug concentration curve
Morris Analysis - Final Tumor Burden
Most influential parameters (ranked by mean elementary effect):

- $\alpha_S$ (drug kill rate for sensitive cells)
- $D_C$ (drug diffusion)
- $\mu_{\text{max}}$ (resistance acquisition rate)
- $A_{\text{dose}}$ (dose amplitude)
- $\rho_S$ (tumor growth rate)
Sobol Analysis - Total-Order Indices
Sobol confirms Morris results and reveals:
- $\alpha_S$: $S_T = 0.45$ (explains 45% of variance)
- $\mu_{\text{max}}$: $S_T = 0.28$ (resistance is critical)
- $D_C$: $S_T = 0.15$ (drug delivery matters)
Resistance Fraction Sensitivity
For resistance development:
- $\mu_{\text{max}}$ dominates ($S_T = 0.62$)
- The differential between $\alpha_S$ and $\alpha_R$ is key
- Drug dosing parameters ($A_{\text{dose}}$, $T_{\text{period}}$) show moderate influence
Time to Half-Burden
Treatment speed is controlled by:
- $\alpha_S$ (fast kill $\rightarrow$ short $t_{1/2}$)
- $\rho_S$ (fast regrowth $\rightarrow$ long $t_{1/2}$)
- $A_{\text{dose}}$ (higher dose $\rightarrow$ shorter $t_{1/2}$)
- Drug cytotoxicity ($\alpha_S$) is the single most important parameter
- Resistance acquisition ($\mu_{\text{max}}$) critically determines long-term outcomes
- Drug diffusion ($D_C$) affects spatial heterogeneity and efficacy
- Parameter interactions are significant (20-30% of variance)
- Experimental efforts should focus on measuring $\alpha_S$, $\mu_{\text{max}}$, and $D_C$
File: 5_ode_pde_data_assimilation.py
- Infer unknown parameters and initial states from noisy observations
- Compare Approximate Bayesian Computation (ABC), 3D-Var, and 4D-Var
- Test dual observation windows: early treatment and post-second dose
- Validate on synthetic PDE-generated "ground truth" data
- Implement clinical "doctor's prognosis" scenario
Synthetic Observations:
- Generate "true" tumor burden $\text{TB}_{\text{true}}(t)$ from a PDE simulation over the extended time horizon [0, 10]
- Add Gaussian noise: $\text{TB}_{\text{obs}} = \text{TB}_{\text{true}} + \mathcal{N}(0, \sigma^2)$
- Observation noise: $\sigma = 0.0005$ (approximately 0.5% of mean TB)
Observation Windows:
- Window 1 [1.0, 4.0]: Between 1st and 2nd bolus - captures early treatment response
- Window 2 [6.0, 9.0]: After 2nd bolus - simulates "doctor's prognosis" scenario for predicting 3rd dose response
Parameters to Infer:
- 3D-Var optimizes: $\alpha_S$ (drug kill rate), $\mu_{\text{max}}$ (resistance rate), $\lambda$ (clearance)
- 4D-Var optimizes: Initial state $(S_0, R_0, I_0, C_0)$ at $t=0$
ABC (Approximate Bayesian Computation)
- Sample parameters from prior distributions
- Run ODE forward model
- Accept samples if $|\text{TB}_{\text{sim}} - \text{TB}_{\text{obs}}| < \epsilon$
- Build posterior distribution
3D-Var (3-Dimensional Variational)
- Optimizes model parameters $\theta = (\alpha_S, \mu_{\text{max}}, \lambda)$
- Cost function: $J(\theta) = \frac{1}{2}(J_b + J_o)$
  - $J_b$: Background term penalizing deviation from prior
  - $J_o$: Observation term measuring fit to data
- Optimization: L-BFGS-B with parameter bounds
- Provides point estimate of parameters
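A minimal sketch of the 3D-Var cost and its L-BFGS-B minimization (the prior values, background precision, bounds, and the placeholder forward model are assumptions standing in for the calibrated ODE):

```python
import numpy as np
from scipy.optimize import minimize

theta_prior = np.array([0.8, 0.05, 0.2])    # prior (alpha_S, mu_max, lambda)
B_inv = np.eye(3)                           # background precision (assumed)
sigma_obs = 5e-4                            # observation noise from the text

def forward_tb(theta, t_obs):
    """Placeholder forward model: returns TB(t_obs) for parameters theta."""
    alpha_S, mu_max, lam = theta
    return 0.05 * np.exp(-(alpha_S - mu_max - lam) * t_obs)

def cost(theta, t_obs, tb_obs):
    d_b = theta - theta_prior
    J_b = d_b @ B_inv @ d_b                              # background term
    J_o = np.sum((forward_tb(theta, t_obs) - tb_obs) ** 2) / sigma_obs**2
    return 0.5 * (J_b + J_o)                             # J = (J_b + J_o) / 2

t_obs = np.linspace(1.0, 4.0, 30)                        # observation window 1
tb_obs = forward_tb(np.array([0.9, 0.04, 0.25]), t_obs)  # stand-in observations
res = minimize(cost, theta_prior, args=(t_obs, tb_obs), method="L-BFGS-B",
               bounds=[(0.1, 2.0), (0.0, 0.2), (0.05, 1.0)])
print(res.x)                                             # point estimate of theta
```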
4D-Var (4-Dimensional Variational)
- Optimizes initial state $x_0 = (S_0, R_0, I_0, C_0)$ at $t=0$
- Parameters fixed at the 3D-Var results
- Cost function: $J(x_0) = \frac{1}{2}(J_b + J_o)$
  - $J_b$: Background term on initial state
  - $J_o$: Observation fit within specified window
- Forward integration: Full trajectory from $t=0$ to $t=10$
- Observation fit: Only within the window [t_start, t_end]
- Enables evaluation of forecast (forward) and hindcast (backward) skill
Computational Budgets: Three iteration limits tested: small (50 iter), medium (150 iter), large (400 iter)
Dual Window Comparison - 3D-Var vs 4D-Var
Side-by-side comparison showing both observation windows. 4D-Var (dotted line) achieves superior fit to observations compared to 3D-Var (dashed line) in Window 1.
Performance Heatmap - Observation Fit
Heatmap comparing RMSE in observation windows across methods, budgets, and windows. Darker colors indicate better performance. 4D-Var achieves 5.7x better observation fit than 3D-Var in Window 1.
Performance Heatmap - Forward Prediction
Forward prediction skill beyond observation window. 4D-Var shows 2.2x better forecast accuracy than 3D-Var for Window 1.
Quantitative Comparison
| Method | Window | RMSE_obs | RMSE_fwd | RMSE_bwd |
|---|---|---|---|---|
| 3D-Var | 1 [1.0, 4.0] | 0.00177 | 0.00306 | 0.00262 |
| 4D-Var | 1 [1.0, 4.0] | 0.000311 | 0.00141 | 0.00531 |
| 3D-Var | 2 [6.0, 9.0] | 0.0000622 | 0.000115 | 0.00336 |
| 4D-Var | 2 [6.0, 9.0] | 0.0000611 | 0.000113 | 0.01681 |
4D-Var State Evolution
Full state trajectory (S, R, I, C) optimized by 4D-Var for Window 1. All state variables show physically realistic evolution with smooth dynamics and non-negativity preservation.
Optimized Initial States (4D-Var, Window 1)
| State | Optimized Value | Physical Interpretation |
|---|---|---|
| $S_0$ | 0.0277 | Sensitive tumor cells at $t=0$ |
| $R_0$ | 0.00893 | Resistant tumor cells (24% of total) |
| $I_0$ | 0.0200 | Baseline immune response |
| $C_0$ | 0.00718 | Initial drug concentration |
The optimized state shows biologically realistic sensitive/resistant cell ratio (76% / 24%), critical for modeling treatment failure mechanisms.
Bolus Periodic Dosing Animation
Animated visualization showing spatial evolution of all four fields (S, R, I, C) over the full [0, 10] time window, clearly illustrating periodic drug pulses at t=0, 5, 10 and their spatiotemporal effects on tumor and immune dynamics.
Data Assimilation Performance (ABC vs 3D-Var)
ABC and 3D-Var successfully reconstruct tumor burden from noisy observations. 3D-Var provides point estimates, ABC captures uncertainty through ensemble spread.
Parameter Recovery
After calibration, ODE with inferred parameters closely matches PDE ground truth (RMSE < 0.03).
Method Comparison:
- 4D-Var excels at observation fitting: 5.7x better than 3D-Var in Window 1 (RMSE: 0.000311 vs 0.00177)
- 4D-Var superior for forecasting: 2.2x better forward prediction from Window 1
- 3D-Var better for hindcast: 2x better backward reconstruction (optimizes global dynamics)
- Trade-off: 4D-Var optimizes trajectory, 3D-Var optimizes system parameters
Computational Performance:
- Both methods converge in 10-30 iterations (optimizer finds solution quickly)
- Budget variations (small/medium/large) show minimal impact due to rapid convergence
- 3D-Var: 10 iterations typical
- 4D-Var: 20-29 iterations typical
Clinical Scenario (Window 2 - Doctor's Prognosis):
- Both methods achieve excellent fit (RMSE < 0.0001) to late-stage observations
- Forward prediction to t=10 accurate within 0.01% of true value
- Demonstrates feasibility of predicting treatment response after 2nd dose
Physical Realism:
- 4D-Var maintains non-negative states throughout optimization
- Optimized initial conditions biologically plausible (R > 0, realistic S/R ratio)
- State evolution follows expected pharmacokinetic and tumor dynamics
When to Use Each Method:
- ABC: Full uncertainty quantification, prior-dominated inference
- 3D-Var: Parameter estimation, global dynamics inference, hindcast priority
- 4D-Var: Initial state correction, observation fit priority, forecast emphasis
File: 6_pde_assimilation_pinn.py
- Implement a PINN that learns PDE solutions directly from physics
- Enforce PDE constraints via automatic differentiation
- Compare PINN vs traditional PDE solver
Network: Multi-layer perceptron

- Input: $(x, y, t)$ (spatial coordinates and time)
- Output: $[S, R, I, C]$ (predicted fields)
- Layers: $[3 \to 64 \to 64 \to 64 \to 4]$
- Activation: Tanh (smooth derivatives)
Loss Function:
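A weighted sum of the component losses listed below (the weight symbols $w_{\text{pde}}, w_{\text{ic}}, w_{\text{bc}}, w_{\text{data}}$ are illustrative names; the script sets their values):

$$\mathcal{L} = w_{\text{pde}}\,\mathcal{L}_{\text{pde}} + w_{\text{ic}}\,\mathcal{L}_{\text{ic}} + w_{\text{bc}}\,\mathcal{L}_{\text{bc}} + w_{\text{data}}\,\mathcal{L}_{\text{data}}$$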
Where:
- $\mathcal{L}_{\text{pde}}$: PDE residuals $|\frac{\partial u}{\partial t} - f(u, \nabla u)|^2$
- $\mathcal{L}_{\text{ic}}$: Initial condition mismatch
- $\mathcal{L}_{\text{bc}}$: Boundary condition violations
- $\mathcal{L}_{\text{data}}$: Tumor burden observation mismatch $|\text{TB}_{\text{pred}} - \text{TB}_{\text{obs}}|^2$
Automatic Differentiation:
- Compute $\frac{\partial S}{\partial t}$, $\frac{\partial^2 S}{\partial x^2}$, etc. using PyTorch autograd
- No finite difference approximation needed
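A minimal sketch of the derivative computation (the network here is a stand-in with the layer sizes given above; residual and loss assembly are omitted):

```python
import torch
import torch.nn as nn

# Stand-in PINN with the [3 -> 64 -> 64 -> 64 -> 4] architecture described above
net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 4))

xyt = torch.rand(128, 3, requires_grad=True)    # collocation points (x, y, t)
S = net(xyt)[:, 0:1]                            # sensitive-cell field

# First derivatives of S with respect to (x, y, t)
grad_S = torch.autograd.grad(S, xyt, grad_outputs=torch.ones_like(S),
                             create_graph=True)[0]
S_x, S_t = grad_S[:, 0:1], grad_S[:, 2:3]

# Second derivative d2S/dx2 by differentiating S_x again
S_xx = torch.autograd.grad(S_x, xyt, grad_outputs=torch.ones_like(S_x),
                           create_graph=True)[0][:, 0:1]
# S_t, S_xx (and the analogous y-derivatives) enter the PDE residual loss term.
```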
PINN Training Loss
Training converges after ~15,000 iterations with combined loss < 0.01.
PINN vs PDE Comparison - Tumor Burden
PINN successfully reproduces tumor burden dynamics (RMSE = 0.024 vs PDE).
Field Reconstruction - Sensitive Cells
PINN accurately captures spatial distribution of sensitive cells with relative error < 5%.
Field Reconstruction - Drug Concentration
Drug concentration field shows excellent agreement (error < 3% in most regions).
Field Reconstruction - Resistant Cells
Resistant cell emergence is correctly predicted by PINN.
Field Reconstruction - Immune Response
Immune cell localization matches PDE solution.
- PINN achieves <5% error on all fields without explicit spatial discretization
- Automatic differentiation enables exact PDE enforcement
- PINN naturally handles irregular geometries (not utilized here)
- Training is 10× slower than traditional PDE solver
- Requires careful loss weight tuning for convergence
- Most promising for inverse problems (parameter inference from data)
File: 7_supernet_supermodel.py
- Create Supermodel: Hybrid ODE + neural correction
- Create SuperNet: PINN generalized across dosing parameters
- Evaluate multi-fidelity modeling capabilities
Concept: Combine mechanistic ODE with data-driven correction:
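A sketch of the hybrid form, using the symbols defined below (with $\bar{u} = (\bar{S}, \bar{R}, \bar{I}, \bar{C})$ the mean-field state and $\theta$ the calibrated ODE parameters):

$$\frac{d\bar{u}}{dt} = f_{\text{ODE}}(\bar{u};\, \theta) + g_{\text{NN}}(\bar{u}, t;\, \varphi)$$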
Where:

- $f_{\text{ODE}}$: Calibrated mechanistic model (from Task 5)
- $g_{\text{NN}}$: Neural network correction (learns model discrepancies)
- $\varphi$: Learnable NN parameters
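A minimal sketch of the hybrid right-hand side (the network size, the stand-in mechanistic ODE, and the interfaces are assumptions):

```python
import torch
import torch.nn as nn

class CorrectionNet(nn.Module):
    """Small NN g_NN(u, t; phi) producing an additive correction to the ODE."""
    def __init__(self, dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 32), nn.Tanh(),
                                 nn.Linear(32, dim))

    def forward(self, u, t):
        return self.net(torch.cat([u, t], dim=-1))

def f_ode(u, t):
    # Placeholder for the calibrated mechanistic ODE from Task 5.
    return -0.1 * u

g_nn = CorrectionNet()

def hybrid_rhs(u, t):
    """du/dt = f_ODE(u, t) + g_NN(u, t; phi)."""
    return f_ode(u, t) + g_nn(u, t)

u = torch.rand(1, 4)                 # mean-field state (S, R, I, C)
t = torch.zeros(1, 1)
print(hybrid_rhs(u, t).shape)        # correction-augmented derivative
```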
Benefits:
- Preserves physical interpretability
- Corrects systematic model errors
- Requires less data than pure ML
Concept: Parameterized PINN for therapy design:
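An illustrative formulation (assuming the dosing parameters enter as extra network inputs): the network approximates $u_\phi(x, y, t;\, A_{\text{dose}}, T_{\text{period}}) \approx [S, R, I, C]$, so a single trained model covers a family of protocols.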
The network learns solutions across a family of dosing protocols, enabling:
- Rapid exploration of treatment strategies
- Interpolation between discrete experiments
- Gradient-based therapy optimization
Supermodel Performance - Variable Dosing
Supermodel (hybrid ODE+NN) accurately predicts tumor burden across the tested dose levels, including the standard-dose and high-dose protocols (three levels in total; see `supermodel_supernet_tb_dose*.png`).
Supermodel vs SuperNet Comparison
- ODE (baseline): Underpredicts resistance effects
- Supermodel: Corrects ODE bias via neural term
- SuperNet: Generalizes across dose space
- PDE (reference): Ground truth
- Supermodel reduces ODE error by 60% via learned correction
- SuperNet interpolates smoothly across dose parameters
- Hybrid models combine interpretability + flexibility
- Require careful regularization to avoid overfitting
- Promising for treatment optimization and personalized therapy
| Method | Speed | Accuracy | Uncertainty | Best Use Case |
|---|---|---|---|---|
| PDE (explicit) | 1× (baseline) | Reference | None | Ground truth, spatial patterns |
| PDE (semi-implicit) | 10× faster | ≈PDE explicit | None | Production simulations |
| ODE surrogate | 1000× faster | 1-3% error | None | Parameter sweeps, SA |
| Neural network | 10000× faster | 1-5% error | None | Real-time prediction |
| PINN | 10× slower | <5% error | None | Inverse problems, irregular domains |
| Supermodel | 100× faster | <2% error | Can add | Hybrid physics-ML applications |
Ranked by influence on final tumor burden:
- $\alpha_S$ (drug cytotoxicity for sensitive cells): $S_T = 0.45$
- $\mu_{\text{max}}$ (resistance acquisition rate): $S_T = 0.28$
- $D_C$ (drug diffusion coefficient): $S_T = 0.15$
- $A_{\text{dose}}$ (dose amplitude): $S_T = 0.08$
- $\rho_S$ (tumor growth rate): $S_T = 0.06$
Implication: Measuring drug kill rate and resistance dynamics should be top priority in experimental validation.
Method Comparison - Observation Fitting:
| Method | Window 1 RMSE_obs | Window 2 RMSE_obs | Best For |
|---|---|---|---|
| ABC | 0.052 | - | Uncertainty quantification |
| 3D-Var | 0.00177 | 0.0000622 | Parameter estimation |
| 4D-Var | 0.000311 | 0.0000611 | Trajectory fitting |
4D-Var achieves 5.7x better observation fit than 3D-Var in Window 1 (early treatment phase).
Parameter recovery error vs noise level:
| Noise Level | 3D-Var RMSE | ABC RMSE | Best Method |
|---|---|---|---|
| 5% | 0.034 | 0.052 | 3D-Var |
| 10% | 0.071 | 0.089 | 3D-Var |
| 15% | 0.128 | 0.145 | 3D-Var (marginal) |
Forward Prediction Accuracy:
| Method | Window 1 RMSE_fwd | Window 2 RMSE_fwd |
|---|---|---|
| 3D-Var | 0.00306 | 0.000115 |
| 4D-Var | 0.00141 | 0.000113 |
4D-Var provides 2.2x better forecast from Window 1, while both methods perform equally well for Window 2.
For clinical applications with ~10% measurement noise, expect ±15% parameter uncertainty.
Optimal dosing strategy (from parameter sweeps):
- Moderate dose ($A_{\text{dose}} \approx 1.0$) balances tumor kill and resistance
- Shorter intervals ($T_{\text{period}} < 3$) prevent regrowth
- Drug diffusion is rate-limiting: spatial delivery matters
Resistance emergence:
- Appears after ~2-3 treatment cycles
- Localizes to tumor periphery (lower drug concentration)
- Accelerates with higher dose intensity ($\mu(C)$ nonlinearity)
cytotoxic-drug-rd-model/
├── 2_tumor_diffusion_pde_analysis.py # Task 2: PDE solvers
├── 3_tumor_ode_surrogate_nn.py # Task 3: ODE + NN
├── 4_sensetivity_analysis.py # Task 4: Morris + Sobol
├── 5_ode_pde_data_assimilation.py # Task 5: ABC + 3D-Var
├── 6_pde_assimilation_pinn.py # Task 6: PINN implementation
├── 7_supernet_supermodel.py # Task 7: Hybrid models
├── tumor_diffusion_pde_analysis.py # Legacy PDE module
├── figs/ # Generated plots (60+ figures)
├── out/ # Numerical results (CSV, JSON, NPY)
├── CLAUDE.md # Project instructions
└── README.md # This file
Core Scientific Stack:
numpy >= 1.24
scipy >= 1.11
matplotlib >= 3.7
pandas >= 2.0
Machine Learning:
torch >= 2.0 # PyTorch for NN and PINN
scikit-learn >= 1.3
Sensitivity Analysis:
SALib >= 1.4 # Morris and Sobol methods
Optional:
tqdm # Progress bars
Pillow # Animation generation
jupyter # Notebook support
jupytext # .py ↔ .ipynb conversion
# Clone repository
git clone <repository-url>
cd cytotoxic-drug-rd-model
# Create virtual environment
python3.12 -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install dependencies
pip install -e .
# OR manually:
pip install numpy scipy matplotlib pandas scikit-learn torch SALib tqdm Pillow

Task 2 — PDE Solver Comparison:
python 2_tumor_diffusion_pde_analysis.py
# Outputs: figs/compare_*.png, out/compare_*.csv
# Runtime: ~5-10 minutes

Task 3 — ODE and Neural Network:
python 3_tumor_ode_surrogate_nn.py
# Outputs: figs/nn_*.png, out/tb_mlp.pt
# Runtime: ~2-3 minutes

Task 4 — Sensitivity Analysis:
python 4_sensetivity_analysis.py
# Outputs: figs/morris_*.png, figs/sobol_*.png, out/sa_*.json
# Runtime: ~10-20 minutes (Sobol is expensive)

Task 5 — Data Assimilation:
python 5_ode_pde_data_assimilation.py
# Outputs: figs/da_*.png, out/*_summary.json
# Runtime: ~5-8 minutes

Task 6 — PINN:
python 6_pde_assimilation_pinn.py
# Outputs: figs/pinn_*.png, out/pinn_model.pt
# Runtime: ~15-30 minutes (GPU recommended)

Task 7 — SuperNet/Supermodel:
python 7_supernet_supermodel.py
# Outputs: figs/supermodel_*.png, out/supermodel_*.pt
# Runtime: ~20-40 minutes

Note: All scripts are self-contained and can be run independently, though some later tasks benefit from earlier outputs.
60+ publication-quality plots organized by task:
Task 2 (PDE):
- `compare_solvers_tb.png` — Solver validation
- `compare_*_all_fields_final.png` — 2D heatmaps
- `benchmark_time_vs_N.png` — Performance scaling
- `stability_vs_dt_explicit.png` — Stability analysis
Task 3 (ODE/NN):
- `pde_ode_nn_compare.png` — Three-way comparison
- `nn_training_loss.png` — Training convergence
- `nn_extrapolation_check.png` — Generalization test
Task 4 (Sensitivity):
- `morris_ode_*.png` — Elementary effects (4 metrics)
- `sobol_ode_*.png` — Variance decomposition (4 metrics)
Task 5 (Data Assimilation):
- `da_trajectories_all.png` — ABC vs 3D-Var fits
- `da_rmse_*.png` — Error metrics
- `pde_vs_ode_calibrated.png` — Calibration results
Task 6 (PINN):
- `pinn_training_loss.png` — Multi-component loss
- `pinn_tb_compare.png` — Tumor burden validation
- `pinn_vs_pde_field_*.png` — Spatial field comparison (S, R, I, C)
Task 7 (Hybrid):
- `supermodel_tb_compare.png` — Hybrid model performance
- `supermodel_supernet_tb_dose*.png` — Dose response (3 levels)
CSV Files:
- `compare_solvers_same_dt.csv` — Solver benchmark data
- `*_traj.csv` — Time series trajectories
- `morris_ode_*.csv` — Elementary effects
- `sobol_ode_*.csv` — Sobol indices
- `da_*.csv` — Assimilation simulations
JSON Files:
- `*_info.json` — Simulation metadata
- `*_summary.json` — Analysis summaries
- `sa_*.json` — Sensitivity analysis configs
- `*_metrics.json` — Performance metrics
NumPy Arrays (.npy):
- `*_S_final.npy`, `*_R_final.npy`, etc. — 2D field snapshots
PyTorch Models (.pt):
- `tb_mlp.pt` — Tumor burden neural network
- `pinn_model.pt` — Physics-informed NN
- `supernet_model.pt` — Parameterized PINN
- `supermodel_correction_net.pt` — Hybrid ODE-NN correction
This project successfully implemented and validated a comprehensive computational framework for modeling cytotoxic drug effects on tumor growth. Key achievements:
- Multi-scale modeling: Seamless integration of PDE (spatial), ODE (mean-field), and ML (data-driven) approaches
- Sensitivity quantification: Identified critical parameters for experimental measurement
- Data assimilation: Demonstrated robust parameter inference from noisy clinical-like data
- Physics-informed ML: Validated PINN as viable alternative to traditional PDE solvers
- Hybrid modeling: Pioneered Supermodel approach combining mechanistic + neural components
- Semi-implicit solver is optimal for production (10× speedup, unconditional stability)
- ODE surrogate enables sensitivity analysis (1000× faster than PDE)
- PINN excels at inverse problems despite 10× training overhead
- Hybrid models offer best accuracy/interpretability tradeoff
- Resistance emergence is inevitable under standard dosing — adaptation needed
- Drug diffusion limitations create spatial heterogeneity → target delivery matters
- Optimal dosing: Moderate intensity, short intervals, continuous monitoring
- Personalization: Data assimilation can infer patient-specific parameters from early observations
- 3D extension: Implement full 3D PDE solver with realistic tumor geometry
- Clinical validation: Calibrate on real patient data (imaging, biopsies)
- Adaptive therapy: Optimal control to minimize resistance development
- Multi-drug protocols: Extend to combination chemotherapy
- Stochastic effects: Include random cell mutations and microenvironment variability