Publications & Talks
You can also browse my Google Scholar profile.
Journal Publications
C. Isil, H. C. Koydemir, M. Eryilmaz, K. de Haan, N. Pillar, K. Mentesoglu, A. F. Unal, Y. Rivenson, S. Chandrasekaran, O. B. Garner, and A. Ozcan
Virtual Gram staining of label-free bacteria using darkfield microscopy and deep learning
arXiv (under review), 2024.
[Abstract] [PDF]
Gram staining has been one of the most frequently used staining protocols in microbiology for over a century, utilized across various fields, including diagnostics, food safety, and environmental monitoring. Its manual procedures make it vulnerable to staining errors and artifacts due to, e.g., operator inexperience and chemical variations. Here, we introduce virtual Gram staining of label-free bacteria using a trained deep neural network that digitally transforms darkfield images of unstained bacteria into their Gram-stained equivalents matching brightfield image contrast. After a one-time training effort, the virtual Gram staining model processes an axial stack of darkfield microscopy images of label-free bacteria (never seen before) to rapidly generate Gram staining, bypassing several chemical steps involved in the conventional staining process. We demonstrated the success of the virtual Gram staining workflow on label-free bacteria samples containing Escherichia coli and Listeria innocua by quantifying the staining accuracy of the virtual Gram staining model and comparing the chromatic and morphological features of the virtually stained bacteria against their chemically stained counterparts. This virtual bacteria staining framework effectively bypasses the traditional Gram staining protocol and its challenges, including stain standardization, operator errors, and sensitivity to chemical variations.
J. Hu, K. Liao, N. U. Dinc, C. Gigli, B. Bai, T. Gan, X. Li, H. Chen, X. Yang, Y. Li, C. Isil, M. S. S. Rahman, J. Li, X. Hu, M. Jarrahi, D. Psaltis, and A. Ozcan
Subwavelength imaging using a Solid-Immersion Diffractive Optical Processor
eLight, 2024.
[Abstract] [PDF]
Phase imaging is widely used in biomedical imaging, sensing, and material characterization, among other fields. However, direct imaging of phase objects with subwavelength resolution remains a challenge. Here, we demonstrate subwavelength imaging of phase and amplitude objects based on all-optical diffractive encoding and decoding. To resolve subwavelength features of an object, the diffractive imager uses a thin, high-index solid-immersion layer to transmit high-frequency information of the object to a spatially-optimized diffractive encoder, which converts/encodes high-frequency information of the input into low-frequency spatial modes for transmission through air. The subsequent diffractive decoder layers (in air) are jointly designed with the encoder using deep-learning-based optimization, and communicate with the encoder layer to create magnified images of input objects at its output, revealing subwavelength features that would otherwise be washed away due to the diffraction limit. We demonstrate that this all-optical collaboration between a diffractive solid-immersion encoder and the following decoder layers in air can resolve subwavelength phase and amplitude features of input objects in a highly compact design. To experimentally demonstrate its proof-of-concept, we used terahertz radiation and developed a fabrication method for creating monolithic multi-layer diffractive processors. Through these monolithically fabricated diffractive encoder-decoder pairs, we demonstrated phase-to-intensity transformations and all-optically reconstructed subwavelength phase features of input objects (with linewidths of ~ λ/3.4, where λ is the illumination wavelength) by directly transforming them into magnified intensity features at the output. This solid-immersion-based diffractive imager, with its compact and cost-effective design, can find wide-ranging applications in bioimaging, endoscopy, sensing and materials characterization.
C. Isil, T. Gan, F. O. Ardic, K. Mentesoglu, J. Digani, H. Karaca, H. Chen, J. Li, D. Mengu, M. Jarrahi, K. Akşit, and A. Ozcan
All-optical image denoising using a diffractive visual processor
Light: Science & Applications, 2024.
[Abstract] [PDF]
Image denoising, one of the essential inverse problems, targets to remove noise/artifacts from input images. In general, digital image denoising algorithms, executed on computers, present latency due to several iterations implemented in, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser to all-optically and non-iteratively clean various forms of noise and artifacts from input images – implemented at the speed of light propagation within a thin diffractive visual processor that axially spans <250 × λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image Field-of-View (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt and pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30–40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating at the terahertz spectrum. Owing to their speed, power-efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
Y. Li, T. Gan, B. Bai, C. Isil, M. Jarrahi, and A. Ozcan
Optical information transfer through random unknown diffusers using electronic encoding and diffractive decoding
Advanced Photonics, 2023.
[Abstract] [PDF]
Free-space optical information transfer through diffusive media is critical in many applications, such as biomedical devices and optical communication, but remains challenging due to random, unknown perturbations in the optical path. We demonstrate an optical diffractive decoder with electronic encoding to accurately transfer the optical information of interest, corresponding to, e.g., any arbitrary input object or message, through unknown random phase diffusers along the optical path. This hybrid electronic-optical model, trained using supervised learning, comprises a convolutional neural network-based electronic encoder and successive passive diffractive layers that are jointly optimized. After their joint training using deep learning, our hybrid model can transfer optical information through unknown phase diffusers, demonstrating generalization to new random diffusers never seen before. The resulting electronic-encoder and optical-decoder model was experimentally validated using a 3D-printed diffractive network that axially spans <70λ, where λ = 0.75 mm is the illumination wavelength in the terahertz spectrum, carrying the desired optical information through random unknown diffusers. The presented framework can be physically scaled to operate at different parts of the electromagnetic spectrum, without retraining its components, and would offer low-power and compact solutions for optical information transfer in free space through unknown random diffusive media.
M. S. S. Rahman, T. Gan, E. A. Deger, C. Isil, M. Jarrahi, and A. Ozcan
Learning diffractive optical communication around arbitrary opaque occlusions
Nature Communications, 2023.
[Abstract] [PDF]
Free-space optical communication becomes challenging when an occlusion blocks the light path. Here, we demonstrate a direct communication scheme, passing optical information around a fully opaque, arbitrarily shaped occlusion that partially or entirely occludes the transmitter’s field-of-view. In this scheme, an electronic neural network encoder and a passive, all-optical diffractive network-based decoder are jointly trained using deep learning to transfer the optical information of interest around the opaque occlusion of an arbitrary shape. Following its training, the encoder-decoder pair can communicate any arbitrary optical information around opaque occlusions, where the information decoding occurs at the speed of light propagation through passive light-matter interactions, with resilience against various unknown changes in the occlusion shape and size. We also validate this framework experimentally in the terahertz spectrum using a 3D-printed diffractive decoder. Scalable for operation in any wavelength regime, this scheme could be particularly useful in emerging high data-rate free-space communication systems.
C. Isil, D. Mengu, Y. Zhao, A. Tabassum, J. Li, Y. Luo, M. Jarrahi, and A. Ozcan
Super-resolution image display using diffractive decoders
Science Advances, 2022.
[Abstract] [PDF]
High-resolution image projection over a large field of view (FOV) is hindered by the restricted space-bandwidth product (SBP) of wavefront modulators. We report a deep learning–enabled diffractive display based on a jointly trained pair of an electronic encoder and a diffractive decoder to synthesize/project super-resolved images using low-resolution wavefront modulators. The digital encoder rapidly preprocesses the high-resolution images so that their spatial information is encoded into low-resolution patterns, projected via a low SBP wavefront modulator. The diffractive decoder processes these low-resolution patterns using transmissive layers structured using deep learning to all-optically synthesize/project super-resolved images at its output FOV. This diffractive image display can achieve a super-resolution factor of ~4, increasing the SBP by ~16-fold. We experimentally validate its success using 3D-printed diffractive decoders that operate at the terahertz spectrum. This diffractive image decoder can be scaled to operate at visible wavelengths and used to design large SBP displays that are compact, low power, and computationally efficient.
C. Isil, K. de Haan, Z. Gorocs, H. Ceylan Koydemir, S. Peterman, D. Baum, F. Song, T. Skandakumar, E. Gumustekin, and A. Ozcan
Phenotypic analysis of microalgae populations using label-free imaging flow cytometry and deep learning
ACS Photonics, 2021.
[Abstract] [PDF]
Environmental factors such as temperature, nutrients, and pollutants affect the growth rates and physical characteristics of microalgae populations. As algae play a vital role in marine ecosystems, the monitoring of algae is important to observe the state of an ecosystem. However, analyzing these microalgae populations using conventional light microscopy is time-consuming and requires experts to both identify and count the algal cells, which in turn considerably limits the volume of the samples that can be measured in each experiment. In this work we use a high-throughput and field-portable imaging flow cytometer to perform automated label-free phenotypic analysis of marine microalgae populations using image processing and deep learning. The imaging flow cytometer provides color intensity and phase images of microalgae contained in a liquid sample by capturing and reconstructing the lens-free color holograms of the continuously flowing liquid at a flow rate of 100 mL/h. We extracted the spatial and spectral features of each algal cell in a sample from these holographic images and performed automated algae identification using convolutional neural networks. These features, alongside the composition and growth rate of the algae within the samples, were analyzed to understand the interactions between different algae populations as well as the effects of toxin exposure. As proof of concept, we demonstrated the effectiveness of the system by analyzing the impact of various concentrations of copper on microalgae monocultures and mixtures.
C. Isil, F. S. Oktem, and A. Koc
Deep iterative reconstruction for phase retrieval
Applied Optics, 2019.
[Abstract] [PDF]
The classical phase retrieval problem is the recovery of a constrained image from the magnitude of its Fourier transform. Although there are several well-known phase retrieval algorithms, including the hybrid input-output (HIO) method, the reconstruction performance is generally sensitive to initialization and measurement noise. Recently, deep neural networks (DNNs) have been shown to provide state-of-the-art performance in solving several inverse problems such as denoising, deconvolution, and superresolution. In this work, we develop a phase retrieval algorithm that utilizes two DNNs together with the model-based HIO method. First, a DNN is trained to remove the HIO artifacts, and is used iteratively with the HIO method to improve the reconstructions. After this iterative phase, a second DNN is trained to remove the remaining artifacts. Numerical results demonstrate the effectiveness of our approach, which has little additional computational cost compared to the HIO method. Our approach not only achieves state-of-the-art reconstruction performance but also is more robust to different initialization and noise levels.
C. Isil, M. Yorulmaz, B. Solmaz, A. B. Turhan, C. Yurdakul, S. Unlu, E. Ozbay, and A. Koc
Resolution enhancement of wide-field interferometric microscopy by coupled deep autoencoders
Applied Optics, 2018.
[Abstract] [PDF]
Wide-field interferometric microscopy is a highly sensitive, label-free, and low-cost biosensing imaging technique capable of visualizing individual biological nanoparticles such as viral pathogens and exosomes. However, further resolution enhancement is necessary to increase detection and classification accuracy of subdiffraction-limited nanoparticles. In this study, we propose a deep-learning approach, based on coupled deep autoencoders, to improve resolution of images of L-shaped nanostructures. During training, our method utilizes microscope image patches and their corresponding manual truth image patches in order to learn the transformation between them. Following training, the designed network reconstructs denoised and resolution-enhanced image patches for unseen input.
Conference Talks (Selected)
C. Isil, D. Mengu, Y. Zhao, A. Tabassum, J. Li, Y. Luo, M. Jarrahi, and A. Ozcan
Super-resolution image projection using a diffractive optical decoder
CLEO: Fundamental Science, 2023.
[PDF]
C. Isil, K. de Haan, H. Ceylan Koydemir, Z. Gorocs, D. Baum, F. Song, T. Skandakumar, E. Gumustekin, and A. Ozcan
Label-free analysis of micro-algae populations using a high-throughput holographic imaging flow cytometer and deep learning
Label-free Biomedical Imaging and Sensing, 2021.
[PDF]
C. Isil and F. S. Oktem
Model-based phase retrieval with deep denoiser prior
Computational Optical Sensing and Imaging, 2020.
[PDF]
C. Isil, F. S. Oktem, and A. Koc
Deep learning-based hybrid approach for phase retrieval
Computational Optical Sensing and Imaging, 2018.
[PDF]
C. Isil and F. S. Oktem
A phase-space approach to diffraction-limited resolution
Adaptive Optics: Analysis, Methods & Systems, 2018.
[PDF]
C. Isil, B. Solmaz, and A. Koc
Variational autoencoders with triplet loss for representation learning
Signal Processing and Communications Applications Conference (SIU), 2018.
[PDF]
M. Yorulmaz, C. Isil, E. Seymour, C. Yurdakul, B. Solmaz, A. Koc, and S. Unlu
Single-particle imaging for biosensor applications
Emerging Imaging and Sensing Technologies for Security and Defence II, 2017.
[PDF]