Recent Projects
Regularization for Predictive Coding Networks [github]
Investigated novel regularization techniques for neural networks trained with predictive coding instead of backpropagation. (Deep Learning course project, Jan 2023)
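To sketch the idea of learning without backpropagation: in predictive coding, hidden activities first relax to minimize local prediction errors, and each weight matrix is then updated from its own layer's error alone. The toy two-layer network, sizes, and hyperparameters below are purely illustrative, not the project's actual architecture or regularizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 2 inputs, 8 hidden units, 1 output.
sizes = [2, 8, 1]
W = [rng.normal(0.0, 0.5, (sizes[i + 1], sizes[i])) for i in range(2)]

def f(x):  return np.tanh(x)
def df(x): return 1.0 - np.tanh(x) ** 2

def pc_step(x_in, target, W, n_infer=20, lr_x=0.1, lr_w=0.01):
    """One predictive coding training step: relax the hidden activity
    to minimise local prediction errors, then apply local updates."""
    x1 = W[0] @ x_in                    # forward initialisation
    for _ in range(n_infer):
        e1 = x1 - W[0] @ x_in           # prediction error at hidden layer
        e2 = target - W[1] @ f(x1)      # error at the clamped output
        # gradient descent on the local energy, no global backward pass
        x1 = x1 + lr_x * (-e1 + df(x1) * (W[1].T @ e2))
    e1 = x1 - W[0] @ x_in
    e2 = target - W[1] @ f(x1)
    # Hebbian-like updates: each layer uses only its own error signal
    W[0] += lr_w * np.outer(e1, x_in)
    W[1] += lr_w * np.outer(e2, f(x1))
    return float(e2 @ e2)               # squared output error
```

Because every update is local, regularizers can act on the per-layer errors or weights directly, without routing through a global gradient.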
Overcoming Theoretical Limitations of Soft Attention [github]
Found explicit encodings of a regular language and a context-free, non-regular language using Transformer soft attention. (Advanced Formal Language Theory course project, Aug 2022)
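To illustrate the kind of construction involved: a classic context-free, non-regular language is a^n b^n, and uniform soft attention can count symbols by averaging +/-1 value vectors. This is a generic counting sketch, not necessarily the project's actual encoding; the ordering condition is checked directly here for brevity.

```python
import numpy as np

def attention_balance(tokens):
    """Uniform soft attention over +/-1 values returns (#a - #b) / length,
    which is exactly 0 when the counts of 'a' and 'b' match."""
    values = np.array([1.0 if t == 'a' else -1.0 for t in tokens])
    weights = np.full(len(values), 1.0 / len(values))  # uniform attention
    return float(weights @ values)

def accepts_anbn(s):
    """Toy recogniser for a^n b^n: balanced counts via soft attention,
    plus a direct check that all a's precede all b's."""
    if not s:
        return True                      # empty string is a^0 b^0
    balanced = abs(attention_balance(list(s))) < 1e-9
    ordered = s == 'a' * s.count('a') + 'b' * s.count('b')
    return balanced and ordered
```

The key point is that the counting part needs only one attention head with uniform weights, which is where soft attention exceeds what hard attention can express.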
Distinguishing Logical Fallacies from Valid Arguments [github][poster]
Created a dataset and trained a Transformer to classify statements as logically valid or fallacious, reaching 89% accuracy on the test set. (Computational Semantics for NLP course project, Jul 2022)