We strive to build an interdisciplinary team working at the intersection of deep learning, neuroscience, and HCI. Our core values are:

  • Experimental:  We value scientific rigor: grounding our research in sound scientific principles and conducting well-designed experiments that yield definitive, repeatable findings.
  • Computational:  We rely on algorithms, mathematical models, solid theoretical foundations, and strong coding skills.

Lab Entry

We welcome students from all disciplines, especially those with a strong passion for research and/or building real industrial products.  All Master's/Ph.D. students are encouraged to publish at top-tier conferences/journals:  HCI (CHI, UIST),  NLP (ACL, EMNLP),  CV (CVPR, ICCV), Brain (Journal of Neural Engineering).

Current Projects

  1. Distillation, Pruning, Grafting: utilizing distillation, pruning, and grafting to build better models
  2. Query-based summarization:   designing effective models for query-based summarization
  3. Medical visual QA:  designing effective models for medical visual QA
  4. Multimodality:  understanding and utilizing multimodal signals (e.g., text, image) to design better models
  5. Foundation models for the Thai language:  developing foundation models (e.g., embeddings, GPT-style models) for Thai
  6. EEG + Well-being:  understanding EEG and utilizing it to study human behavior and well-being
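To give a flavor of project 1, here is a minimal sketch of the classic knowledge distillation objective (Hinton et al., 2015), in which a small student model is trained against both the teacher's temperature-softened outputs and the ground-truth labels. This is an illustrative NumPy example, not code from any lab project; all function names are our own.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Blend of soft-target KL (teacher -> student) and hard-label cross-entropy.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL(teacher || student), scaled by T^2 so gradients stay comparable across T.
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                             - np.log(p_student + 1e-12)), axis=-1)
    soft_loss = (T ** 2) * kl.mean()
    # Standard cross-entropy against the ground-truth labels (at T = 1).
    p_hard = softmax(student_logits, 1.0)
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft_loss + (1 - alpha) * ce
```

A student whose logits match the teacher's incurs zero KL term, so the loss reduces to the (weighted) cross-entropy; pruning would then remove low-magnitude weights from the resulting student.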

Note: demos of some of our applied projects can be found below:

  1. Glucose monitoring: contributes to the use of Raman spectroscopy and the development of Raman wearables to monitor blood glucose in real time
  2. BCI speller: contributes to the development of a BCI speller for locked-in patients, using EEG paradigms such as P300, SSVEP, hybrid P300-SSVEP, and motor imagery.
  3. AI-assisted radiology software:  contributes to the development of automatic annotation and reporting for fMRI/MRI images.
  4. Conversational AI:   contributes to the development of ChatGPT-like systems in domains such as legal, logistics, medical, insurance, corporate, etc.   There are many components:  (1) instruction tuning (parameter-efficient) with dialogue datasets, (2) reinforcement learning, (3) retrieval augmentation, (4) quantization for efficiency, (5) image support, and (6) English and Thai support.
  5. Code generation:  contributes to automatic code generation, testing, and deployment using large language models.
  6. ThaiGovAI:  contributes to the development of models that can turn informal text into the formal register used in government, as well as generate common forms used in government operations.
  7. Research writing assistant: contributes to the development of models that can adapt a manuscript's writing to a given conference's style.
  8. AI-generated content detector:  contributes to the development of detectors that identify AI-generated content for plagiarism checking.