Junlin Wang

Duke University

junlin.wang2[at]duke.edu

[Google Scholar]
[Github]
[Projects]
[CV]

About Me

I am a 4th-year Computer Science PhD student at Duke University, advised by Bhuwan Dhingra. From 2022 to 2023, I was also advised by Sam Wiseman.

Before Duke, I worked closely with Sameer Singh on machine learning interpretability and natural language processing projects. I have also worked as a research intern at Together AI, AWS, Intel, Tencent, and the Applied AI Lab at Comcast.

My research focuses primarily on LLM reasoning, agents, and alignment.



News

  • I will be presenting at EMNLP 2024!

Publications

  • Mixture-of-Agents thumbnail

    Mixture-of-Agents Enhances Large Language Model Capabilities

    Junlin Wang, Jue Wang, Ben Athiwaratkun, Ce Zhang, James Zou
    arXiv

    BibTeX
    @misc{wang2024mixtureofagentsenhanceslargelanguage,
      title={Mixture-of-Agents Enhances Large Language Model Capabilities},
      author={Junlin Wang and Jue Wang and Ben Athiwaratkun and Ce Zhang and James Zou},
      year={2024},
      eprint={2406.04692},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.04692},
    }
  • Token Economies thumbnail

    Reasoning in Token Economies: Budget-Aware Evaluation of LLM Reasoning Strategies

    Junlin Wang, Siddhartha Jain, Dejiao Zhang, Baishakhi Ray, Varun Kumar, Ben Athiwaratkun
    EMNLP 2024

    BibTeX
    @article{wang2024reasoning,
      title   = {Reasoning in Token Economies: Budget-Aware Evaluation of LLM Reasoning Strategies},
      author  = {Junlin Wang and Siddhartha Jain and Dejiao Zhang and Baishakhi Ray and Varun Kumar and Ben Athiwaratkun},
      year    = {2024},
  journal = {arXiv preprint arXiv:2406.06461}
    }
  • ReCaLL thumbnail

    ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods

    Roy Xie, Junlin Wang, Ruomin Huang, Minxing Zhang, Rong Ge, Jian Pei, Neil Zhenqiang Gong, Bhuwan Dhingra
    EMNLP 2024

    BibTeX
    @article{xie2024recall,
      title   = {ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods},
      author  = {Roy Xie and Junlin Wang and Ruomin Huang and Minxing Zhang and Rong Ge and Jian Pei and Neil Zhenqiang Gong and Bhuwan Dhingra},
      year    = {2024},
  journal = {arXiv preprint arXiv:2406.15968}
    }
  • Raccoon thumbnail

    Raccoon: Prompt Extraction Benchmark of LLM-Integrated Applications

    Junlin Wang*, Tianyi Yang*, Roy Xie, Bhuwan Dhingra
    ACL 2024 Findings

    BibTeX
    @article{wang2024raccoon,
      title   = {Raccoon: Prompt Extraction Benchmark of LLM-Integrated Applications},
      author  = {Junlin Wang and Tianyi Yang and Roy Xie and Bhuwan Dhingra},
      year    = {2024},
  journal = {arXiv preprint arXiv:2406.06737}
    }
  • LLM-Resistant Math Word Problem thumbnail

    LLM-Resistant Math Word Problem Generation via Adversarial Attacks

    Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra
    EMNLP 2024 Findings

    BibTeX
    @inproceedings{Xie2024adversarial,
      title  = {Adversarial Math Word Problem Generation},
      author = {Roy Xie and Chengxuan Huang and Junlin Wang and Bhuwan Dhingra},
      year   = {2024},
  url    = {https://openreview.net/forum?id=bJz5uGzEe6}
    }
  • NeuroComparatives thumbnail

    NeuroComparatives: Neuro-Symbolic Distillation of Comparative Knowledge

    Phillip Howard*, Junlin Wang*, Vasudev Lal, Gadi Singer, Yejin Choi, Swabha Swayamdipta
    NAACL 2024 Findings

    BibTeX
    @misc{howard2023neurocomparatives,
      title={NeuroComparatives: Neuro-Symbolic Distillation of Comparative Knowledge},
      author={Phillip Howard and Junlin Wang and Vasudev Lal and Gadi Singer and Yejin Choi and Swabha Swayamdipta},
      year={2023},
      eprint={2305.04978},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
    }
  • Maestro thumbnail

    Maestro: A Gamified Platform for Teaching AI Robustness

Margarita Geleta, Jiacen Xu, Manikanta Loya, Junlin Wang, Sameer Singh, Zhou Li and Sergio Gago Masagué
    EAAI 2023

    BibTeX
    @inproceedings{DBLP:conf/aaai/GeletaXLW00M23,
  author    = {Margarita Geleta and Jiacen Xu and Manikanta Loya and Junlin Wang and Sameer Singh and Zhou Li and Sergio Gago Masagu{\'{e}}},
      editor    = {Brian Williams and Yiling Chen and Jennifer Neville},
      title     = {Maestro: {A} Gamified Platform for Teaching {AI} Robustness},
      booktitle = {Thirty-Seventh {AAAI} Conference on Artificial Intelligence, {AAAI} 2023, Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence, {IAAI} 2023, Thirteenth Symposium on Educational Advances in Artificial Intelligence, {EAAI} 2023, Washington, DC, USA, February 7-14, 2023},
  pages     = {15816--15824},
      publisher = {{AAAI} Press},
      year      = {2023},
      url       = {https://doi.org/10.1609/aaai.v37i13.26878},
      doi       = {10.1609/AAAI.V37I13.26878}
    }
  • Gradient-based Analysis thumbnail

    Gradient-based Analysis of NLP Models is Manipulable

    Junlin Wang*, Jens Tuyls*, Eric Wallace and Sameer Singh
    EMNLP 2020 Findings

    BibTeX
    @inproceedings{wang2020gradientbased,
  Author = {Junlin Wang and Jens Tuyls and Eric Wallace and Sameer Singh},
  Booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
      Year = {2020},
      Title = {Gradient-based Analysis of NLP Models is Manipulable}
    }
  • AllenNLP Interpret thumbnail

    AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models

    Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, and Sameer Singh
Demo at EMNLP 2019 (Best Demo Award)

    BibTeX
    @inproceedings{Wallace2019AllenNLP,
      Author = {Eric Wallace and Jens Tuyls and Junlin Wang and Sanjay Subramanian and Matt Gardner and Sameer Singh},
      Booktitle = {Empirical Methods in Natural Language Processing},
      Year = {2019},
      Title = {AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models}
    }