DeepLens is a differentiable optical lens simulator that supports multiple optical models (e.g., geometric, diffractive, hybrid, neural, and interpolation). It can be used for (1) image simulation, (2) optical design, and (3) end-to-end optics-algorithm co-design (End2endImaging). DeepLens enables researchers to rapidly prototype and optimize custom optical systems through differentiable simulation.
Docs • Tutorials • Community • PyPI
- Differentiable Simulation. DeepLens builds on differentiable physical simulation and enables accurate, efficient gradient calculation for lens optimization.
- Automated Design. DeepLens shows strong optimization performance compared with classical optimization methods, especially for complex optical systems (e.g., mobile lenses, metasurfaces, and AR/VR displays). Fully automated lens design is demonstrated with curriculum learning and optical regularization losses.
- Multiple Optical Models. DeepLens supports not only geometric ray tracing, but also various other optical models, including hybrid ray-wave models, neural lens representations, and reference-data interpolation.
- Image Simulation. DeepLens delivers photorealistic image simulations with spatially varying and depth-dependent aberration simulation, bridging sim-to-real gaps when combined with End2endImaging.
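The idea behind differentiable lens optimization can be illustrated with a deliberately tiny, self-contained sketch (plain Python, not the DeepLens API): a thin-lens curvature is driven toward a target focal length by gradient descent on the lensmaker's equation, with the gradient written out analytically. DeepLens performs the same loop with PyTorch autodiff over full ray tracing; all names and values below are illustrative.

```python
def thin_lens_power(c1, c2, n=1.5):
    """Optical power 1/f of a thin lens with surface curvatures c1, c2
    (lensmaker's equation, thin-lens approximation)."""
    return (n - 1.0) * (c1 - c2)

def fit_curvature(c2, f_target, c1=0.0, n=1.5, lr=1.0, steps=200):
    """Gradient descent on the front curvature c1 to hit a target focal length."""
    for _ in range(steps):
        err = thin_lens_power(c1, c2, n) - 1.0 / f_target  # power error
        grad = 2.0 * err * (n - 1.0)  # d(err^2)/dc1, written analytically
        c1 -= lr * grad
    return c1

# Drive a lens with rear curvature -0.005 mm^-1 toward f = 100 mm.
c1 = fit_curvature(c2=-0.005, f_target=100.0)
print(round(1.0 / thin_lens_power(c1, -0.005), 3))  # focal length converges to 100.0
```

In DeepLens the "parameter" is the full set of surface coefficients and spacings, and the gradient comes from backpropagating an image- or spot-based loss through the simulation rather than from a hand-derived formula.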
Additional features (available via collaboration):
- Kernel Acceleration. Achieves >10x speedup and >90% GPU memory reduction with custom GPU kernels across NVIDIA and AMD platforms.
- Polarization Ray Tracing. Supports polarization ray tracing and differentiable optimization of coating films.
- Non-Sequential Ray Tracing. Provides a differentiable non-sequential ray tracing model for stray light analysis and optimization.
- Distributed Optimization. Supports distributed simulation and optimization for billion-scale ray tracing and high-resolution (>100k x 100k) diffractive propagation.
DeepLens supports comprehensive lens analysis (spot diagram, PSF, MTF, distortion, etc.) and photorealistic image simulation with spatially-varying, depth-dependent aberrations.
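As a flavor of what spot-diagram analysis measures, here is a minimal, hypothetical sketch (plain Python, not the DeepLens API): parallel rays through an ideal thin lens are intersected with a plane shifted from the paraxial focus, and the RMS spot radius of the intersections is reported. DeepLens computes this kind of statistic from full differentiable ray tracing through real surfaces.

```python
import math

def defocus_spot_rms(ray_heights, f, defocus):
    """RMS spot radius of parallel input rays traced through an ideal thin
    lens of focal length f, evaluated at a plane `defocus` away from the
    paraxial focus. A ray entering at height h leaves with slope -h/f, so
    at the shifted plane it lands at y = -h * defocus / f."""
    ys = [-h * defocus / f for h in ray_heights]
    return math.sqrt(sum(y * y for y in ys) / len(ys))

heights = [1.0, 2.0, 3.0]  # ray heights at the lens, in mm
print(defocus_spot_rms(heights, f=100.0, defocus=0.0))   # 0.0 at best focus
print(round(defocus_spot_rms(heights, f=100.0, defocus=10.0), 3))  # blur grows with defocus
```

Because the spot radius is a smooth function of the lens parameters, statistics like this can serve directly as differentiable loss terms during optimization.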
Fully automated lens design from scratch with differentiable optimization. Try it with AutoLens!
A surrogate network for efficient lens PSF representation and image simulation (spatially varying aberration + defocus).
Design hybrid refractive-diffractive lenses with differentiable ray-wave model.
DeepLens serves as the differentiable optics engine in End2endImaging, an end-to-end differentiable computational imaging framework. End2endImaging integrates optics (DeepLens), sensor/ISP simulation, and neural reconstruction networks into a single PyTorch computation graph, enabling joint optimization of the entire camera pipeline.
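The joint-optimization idea can be reduced to a toy sketch (plain Python; the parameter names are illustrative, not End2endImaging's API): one "optics" parameter (a transmission factor) and one "algorithm" parameter (a reconstruction gain) are updated together under a single reconstruction loss, just as End2endImaging backpropagates one loss through optics, sensor, and network.

```python
def co_design(signals, a=0.2, w=1.0, lr=0.1, steps=500):
    """Jointly optimize an 'optics' parameter a and a 'reconstruction'
    parameter w so that w * (a * x) recovers x, via gradient descent on
    the mean squared reconstruction error."""
    n = len(signals)
    for _ in range(steps):
        grad_a = grad_w = 0.0
        for x in signals:
            err = w * a * x - x              # reconstruction error for one sample
            grad_a += 2.0 * err * w * x / n  # d(loss)/da
            grad_w += 2.0 * err * a * x / n  # d(loss)/dw
        a -= lr * grad_a
        w -= lr * grad_w
    return a, w

a, w = co_design([0.5, 1.0, 1.5])
print(round(a * w, 4))  # the two stages learn to compensate each other: a*w ≈ 1
```

The point of co-design is exactly this coupling: neither stage is optimized in isolation, so the optics can trade off aberrations that the downstream algorithm is best placed to correct.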
Clone this repo:
git clone https://github.com/singer-yang/DeepLens
cd DeepLens
Create a conda environment:
conda create -n deeplens_env python=3.12
conda activate deeplens_env
# Linux and Mac
pip install torch torchvision
# Windows
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu128
pip install -r requirements.txt
Alternatively, create the environment from environment.yml:
conda env create -f environment.yml -n deeplens_env
Run the demo code:
python 0_hello_geolens.py
DeepLens repo structure:
DeepLens/
│
├── deeplens/
│ ├── geolens.py (multi-element refractive lens)
│ ├── hybridlens.py (refractive + diffractive hybrid lens)
│ ├── diffraclens.py (pure diffractive lens)
│ ├── paraxiallens.py (thin-lens model)
│ ├── psfnetlens.py (neural surrogate lens)
│ ├── geometric_surface/ (spheric, aspheric, aperture, etc.)
│ ├── diffractive_surface/ (DOE surfaces)
│ ├── phase_surface/ (phase-only surfaces)
│ ├── light/ (Ray, ComplexWave)
│ ├── material/ (glass catalogs)
│ ├── imgsim/ (PSF convolution, Monte Carlo)
│ ├── geolens_pkg/ (eval, optim, vis, io mixins)
│ └── surrogate/ (MLP, Siren neural surrogates)
│
├── 0_hello_geolens.py (code tutorials)
├── ...
└── write_your_own_code.py
Join our Slack workspace and WeChat Group (singeryang1999) to connect with our core contributors, receive the latest industry updates, and be part of our community. For any inquiries, contact Xinge Yang (xinge.yang@kaust.edu.sa).
We welcome all contributions. To get started, please read our Contributing Guide or check out open questions. All project participants are expected to adhere to our Code of Conduct. A list of contributors can be viewed in Contributors and below:
If you use DeepLens in your research, please cite the paper. See more in History of DeepLens.
@article{yang2024curriculum,
title={Curriculum learning for ab initio deep learned refractive optics},
author={Yang, Xinge and Fu, Qiang and Heidrich, Wolfgang},
journal={Nature Communications},
volume={15},
number={1},
pages={6572},
year={2024},
publisher={Nature Publishing Group UK London}
}