Our Publications

Free-form optimization of nanophotonic devices: from classical methods to deep learning

Nanophotonics 2022

2022.01.12 | Adjoint method, free-form optimization, machine learning, photonic device design, reinforcement learning

Nanophotonic devices have enabled microscopic control of light with unprecedented spatial resolution by employing subwavelength optical elements that can strongly interact with incident waves. However, to date, most nanophotonic devices have been designed around fixed-shape optical elements, and a large portion of their design potential has remained unexplored. Only recently have free-form design schemes been spotlighted in nanophotonics, offering routes to break free from conventional design constraints and exploit the full design potential. In this review, we systematically overview the nascent yet rapidly growing field of free-form nanophotonic device optimization.

Juho Park, Sanmun Kim, Daniel Wontae Nam, Haejun Chung, Chan Y. Park and Min Seok Jang

Structural optimization of a one-dimensional freeform metagrating deflector via deep reinforcement learning

ACS Photonics 2022

2021.12.30 | Metasurface, freeform metagrating, structural optimization, inverse design, deep learning, reinforcement learning

The increasing demand for versatile, high-performance metasurfaces requires a freeform design method that can handle a huge design space, many orders of magnitude larger than that of conventional fixed-shape optical structures. In this work, we formulate the design process of one-dimensional freeform Si metasurface beam deflectors as a reinforcement learning problem to find their optimal structures consistently without requiring any prior metasurface data. During training, a deep Q-network-based agent stochastically explores the device design space around the learned trajectory optimized for deflection efficiency. The devices discovered by the agents show overall improvements in maximum efficiency compared to those found by state-of-the-art baseline methods at various wavelengths and deflection angles.
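
For readers curious how such a formulation looks in practice, below is a minimal, self-contained sketch of casting freeform metagrating design as an RL environment. It is not the authors' code: the binary cell encoding, the toy `_efficiency` stand-in for an electromagnetic solver, and all sizes are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): 1D freeform metagrating design as an
# RL environment. The efficiency function is a toy stand-in; in practice the
# reward would come from a rigorous electromagnetic simulation.
import numpy as np

class MetagratingEnv:
    """State: binary vector (1 = Si, 0 = air) over n_cells of one period.
    Action: flip one cell. Reward: change in deflection efficiency."""

    def __init__(self, n_cells=64, seed=0):
        self.n_cells = n_cells
        self.rng = np.random.default_rng(seed)
        self.state = None
        self.prev_eff = 0.0

    def _efficiency(self, pattern):
        # Placeholder for a real solver; returns a number in [0, 1].
        fill = pattern.mean()
        return float(4.0 * fill * (1.0 - fill))

    def reset(self):
        self.state = self.rng.integers(0, 2, self.n_cells)
        self.prev_eff = self._efficiency(self.state)
        return self.state.copy()

    def step(self, action):
        self.state[action] ^= 1            # flip one Si/air cell
        eff = self._efficiency(self.state)
        reward = eff - self.prev_eff       # improvement in efficiency
        self.prev_eff = eff
        return self.state.copy(), reward, False, {"efficiency": eff}

# Random-policy rollout; in the paper a deep Q-network agent would choose the
# action instead of the random pick below.
env = MetagratingEnv()
obs = env.reset()
for t in range(100):
    obs, reward, done, info = env.step(np.random.randint(env.n_cells))
print("final efficiency (toy solver):", info["efficiency"])
```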

D. Seo, D. W. Nam, J. Park, C. Y. Park, and M. S. Jang

Inverse design of organic light-emitting diode structure based on deep neural networks

Nanophotonics 2021

2021.11.04 | Deep neural network, genetic algorithm, inverse design, light extraction efficiency, organic light-emitting diodes

The optical properties of thin-film light-emitting diodes (LEDs) are strongly dependent on their structures due to light interference inside the devices. However, the complexity of the design space grows exponentially with the number of design parameters, making it challenging to optimize the optical properties of multilayer LEDs with rigorous electromagnetic simulations. In this work, we demonstrate an artificial neural network that can predict the light extraction efficiency of an organic LED structure in 30 ms, which is ~10³ times faster than the rigorous simulation in a single-threaded execution, with a root-mean-squared error of …
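
As a rough illustration of how such a surrogate works, the sketch below maps a vector of layer thicknesses to a predicted efficiency with a small MLP. The layer count, architecture, and training data are placeholders, not the network from the paper.

```python
# Assumption-laden sketch: an MLP surrogate that maps OLED layer thicknesses
# to a predicted light extraction efficiency, replacing a slow simulation
# once trained.
import torch
import torch.nn as nn

n_params = 8                           # hypothetical number of design parameters
model = nn.Sequential(
    nn.Linear(n_params, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1), nn.Sigmoid(),   # efficiency constrained to (0, 1)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in training data: (thickness vector, simulated efficiency) pairs.
x = torch.rand(1024, n_params)         # normalized layer thicknesses
y = torch.rand(1024, 1)                # would come from rigorous simulation

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)        # reported RMSE corresponds to sqrt(MSE)
    loss.backward()
    optimizer.step()

# Inference is a single forward pass, which is what makes the trained
# surrogate orders of magnitude faster than a full simulation.
with torch.no_grad():
    predicted_efficiency = model(torch.rand(1, n_params))
```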

Sanmun Kim, Jeong Min Shin, Jaeho Lee, Chanhyung Park, Songju Lee, Juho Park, Dongjin Seo, Sehong Park, Chan Y. Park and Min Seok Jang

Efficiently Learning the Value Distribution for Actor-Critic Methods

ICML 2021

2021.09.13 | Reinforcement Learning

Reinforcement Learning (RL) has become one of the major categories in the field of machine learning in recent years through breakthroughs such as the approximation of complex non-linear functions with deep neural networks. Among these advances, a distributional perspective on value function estimation has contributed to a big jump in the performance of RL algorithms. However, proper discussions of distributional RL (DRL) are still limited to specific algorithms or network architectures such as Q-learning or the deterministic policy gradient. Against this backdrop, we have worked to address some of the critical aspects of RL that were left out of the distributional perspective. The details and findings of this journey can be found in our recent work 'GMAC: A Distributional Perspective on Actor-Critic Framework'.
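
To make the distributional perspective concrete, here is a minimal sketch of a critic that represents the return as a set of quantile samples rather than a single scalar value. It illustrates the general idea only, not the GMAC algorithm itself, and the dimensions are arbitrary assumptions.

```python
# Minimal sketch of a distributional critic head (general idea, not GMAC):
# the network outputs several quantile samples of the return; the usual
# scalar value estimate is recovered as their mean.
import torch
import torch.nn as nn

n_quantiles = 32
critic = nn.Sequential(
    nn.Linear(4, 128), nn.ReLU(),
    nn.Linear(128, n_quantiles),       # one output per quantile of the return
)

state = torch.rand(1, 4)                      # hypothetical 4-dim state
return_quantiles = critic(state)              # shape (1, n_quantiles)
value_estimate = return_quantiles.mean(-1)    # scalar value = mean of the distribution
```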

Daniel Wontae Nam, Younghoon Kim, Chan Y. Park

Investigating Pixel Robustness using Input Gradients

2019.08.30 | Computer Vision

This post aims to cover the main concepts from the paper ‘Where to be Adversarial Perturbations Added? Investigating and Manipulating Pixel Robustness using Input Gradients’ by Hwang et al. The paper connects the gradients of input features to the robustness of a classification model, and shows that the robustness can be manipulated indirectly by changing the gradient flows within the model. An adversarial attack can be defined as the process of generating adversarial examples for a given classifier: samples that are misclassified by the model but differ only slightly from correctly classified samples drawn from the data distribution. Projected Gradient Descent (PGD) is a popular attack method that iteratively generates adversarial examples via the following update.
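
The standard PGD step (as formulated by Madry et al.) can be written as

x^{t+1} = Π_{x+S}( x^t + α · sign( ∇_x L(θ, x^t, y) ) ),

where Π_{x+S} projects back onto the allowed perturbation set S around the clean input x, α is the step size, and L(θ, x, y) is the classification loss. A minimal PyTorch-style sketch of this loop follows; the ε-ball size, step size, and variable names are illustrative assumptions, not taken from the paper.

```python
# Minimal PGD sketch (L-infinity threat model); illustrative only.
import torch

def pgd_attack(model, x, y, loss_fn, eps=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss along the sign of the input gradient ...
        x_adv = x_adv.detach() + alpha * grad.sign()
        # ... then project back onto the eps-ball around the clean input x.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```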

Jisung Hwang, Younghoon Kim, Sanghyuk Chun, Jaejun Yoo, Ji-Hoon Kim & Dongyoon Han

Distilling Curiosity for Exploration

2019.07.29 | Reinforcement Learning

This post is an introduction to the paper 'Curiosity-Bottleneck: Exploration by Distilling Task-Specific Novelty' by Kim et al. The paper deals with an informative exploration method for settings where task-irrelevant noise is present in the visual observation. By distilling the informative from the uninformative, the agent is able to ignore distracting visual entities when choosing an action or calculating the intrinsic reward for exploration. Exploration vs. exploitation is a well-known dilemma in reinforcement learning, and a careful tradeoff between the two is required for the optimal performance of learning algorithms.
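
As a heavily simplified illustration of the "distilling the informative from the uninformative" idea, the sketch below compresses an observation through a stochastic encoder and uses the KL term, i.e. how hard the observation is to compress, as an intrinsic novelty bonus. The architecture, dimensions, and exact objective here are assumptions for illustration rather than the paper's implementation.

```python
# Simplified sketch of an information-bottleneck-style novelty signal:
# observations that are hard to compress toward the prior receive a larger
# intrinsic reward. Illustrative assumptions throughout.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, obs_dim=64, z_dim=16):
        super().__init__()
        self.net = nn.Linear(obs_dim, 2 * z_dim)   # outputs mean and log-variance

    def forward(self, obs):
        mu, log_var = self.net(obs).chunk(2, dim=-1)
        return mu, log_var

def intrinsic_reward(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, 1) ): large when the observation carries
    # information the compressed code cannot discard.
    return 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1.0).sum(-1)

encoder = Encoder()
obs = torch.rand(4, 64)                    # a batch of flattened observations
mu, log_var = encoder(obs)
bonus = intrinsic_reward(mu, log_var)      # one novelty bonus per observation
```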

Youngjin Kim, Wontae Nam, Hyunwoo Kim, Ji-Hoon Kim, Gunhee Kim