We strive to build an interdisciplinary team working in the domain of health, using deep learning, NLP, and neuroscience. Our core values are:

  • Experimental:  We value scientific rigor, grounding our research in strong scientific foundations and conducting sound experiments that yield reliable and repeatable findings.
  • Computational:  We approach science through algorithms, mathematical models, strong theoretical foundations, and strong coding skills.

Lab Entry

We welcome students from all disciplines, but only those with a strong passion for research.  All Master's/Ph.D. students are required to publish at top-tier conferences/journals:  HCI (CHI, UIST),  NLP (ACL, EMNLP),  Brain (Neurocomputing, Journal of Neural Engineering).

Focus Area

Our lab's focus areas are best described by the data we mostly work with: 1) neuroimages (e.g., fMRI), 2) EEG signals, and 3) language.

Although these focus areas differ slightly in their data structures, our lab views them through a set of shared research challenges:

  • Few-shot learning - contributes to the development of models that can learn quickly from limited data.  Common approaches include transfer learning, meta-learning, and multi-task learning.
  • Reinforcement learning - contributes to the use of reinforcement learning for more effective training.  In backpropagation-based learning, one has to define a differentiable loss function.  However, a model can also be optimized through behavioral rewards.  For example, in text generation, the ROUGE score is non-differentiable, so it may be wise to use reinforcement learning to optimize such a score directly.
  • Cross-modal learning - contributes to learning the alignment or mapping function between different modalities: for example, converting fMRI images to face images, EEG signals to stimulus images, text to images, or vice versa.  Such models typically build on generative adversarial networks or diffusion models.
  • Explainable AI - contributes to understanding where knowledge resides in a neural network.  The question is partly scientific, but it also has practical value: when a model fails, we want to know how to fix it, and explaining why a network suggests a particular decision helps practitioners in domains such as medicine and business understand and trust the model.
  • Neural Architecture Search - contributes to automatically finding optimal micro- and macro-architectures for neural networks through techniques such as pruning and reinforcement learning.
  • Applications - contributes to the creative use and comparison of existing techniques on unsolved industrial/practical real-world problems.  Examples:  EEG (e.g., cognitive enhancement, motor imagery, emotion/cognition recognition, BCI spellers), Text (e.g., summarization, depression identification, social media analysis), fMRI (e.g., diagnosis).
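
The reinforcement-learning point above can be sketched with a minimal REINFORCE update. This is an illustrative toy, not our lab's method: a one-step "generation" task with three candidate tokens, where the reward array stands in for a non-differentiable metric such as ROUGE, and all names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reward for each of 3 candidate tokens; stands in for a
# non-differentiable metric (e.g., ROUGE). Token 2 is best.
REWARDS = np.array([0.1, 0.3, 1.0])

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def reinforce_step(logits, lr=0.5):
    """One REINFORCE update: sample an action, observe its reward,
    and push the policy toward actions with above-baseline reward."""
    probs = softmax(logits)
    action = rng.choice(len(probs), p=probs)
    reward = REWARDS[action]            # reward is observed, never differentiated
    baseline = probs @ REWARDS          # expected reward, for variance reduction
    # Score-function gradient: d log pi(action) / d logits = onehot(action) - probs
    grad_logp = -probs
    grad_logp[action] += 1.0
    return logits + lr * (reward - baseline) * grad_logp

logits = np.zeros(3)
for _ in range(500):
    logits = reinforce_step(logits)

best = int(softmax(logits).argmax())
print(best)  # the policy concentrates on the highest-reward token
```

The key point is that the reward only needs to be *observable*, not differentiable: the gradient flows through log-probabilities of sampled actions, which is exactly why policy-gradient methods suit metrics like ROUGE.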
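
Likewise, the few-shot-learning point can be sketched as transfer learning: reuse a frozen feature extractor and fit only a small head on a handful of labelled examples. In this hypothetical sketch a fixed random projection stands in for a pretrained encoder; in practice it would be a network trained on a large source task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained encoder: a frozen nonlinear feature map.
W_pretrained = rng.normal(size=(2, 16))

def encode(x):
    """Frozen feature extractor, reused from a (hypothetical) source task."""
    return np.tanh(x @ W_pretrained)

# Few-shot target task: only 4 labelled examples, XOR-like labels that
# a linear model on the raw inputs could not separate.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 0])

# Train only the small linear head (ridge regression, closed form).
F = encode(X)
targets = 2 * y - 1  # map labels {0,1} to {-1,+1}
head = np.linalg.solve(F.T @ F + 1e-3 * np.eye(F.shape[1]), F.T @ targets)

preds = (encode(X) @ head > 0).astype(int)
print(preds)
```

Because only the low-dimensional head is trained, four examples suffice; this is the basic economy that transfer learning, meta-learning, and multi-task learning all exploit in different ways.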