Recent large language models (LLMs) have enabled significant advances in open-domain dialogue systems, thanks to their ability to generate coherent natural language responses to virtually any user request. Their capacity to memorize and to perform compositional reasoning supports accurate execution of dialogue-related tasks such as language understanding and response generation. However, these models suffer from limitations, including hallucination, undesired capture of biases, difficulty generalizing to specific policies, and a lack of interpretability. To tackle these issues, the natural language processing community has proposed methods such as injecting knowledge into language models during training or inference, and retrieving relevant knowledge through multi-step inference and the use of APIs and tools. In this talk, I will provide an overview of our work, and that of others, aimed at addressing these challenges.