Large Language Models (LLMs) have revolutionized natural language processing, yet they struggle with inconsistent reasoning, particularly in novel domains and complex logical sequences. This research introduces Proof of Thought, a framework that enhances the reliability and transparency of LLM outputs. Our approach bridges LLM-generated ideas with formal logic verification, employing a custom interpreter to convert LLM outputs into First Order Logic constructs for theorem prover scrutiny. Central to our method is an intermediary JSON-based Domain-Specific Language, which by design balances precise logical structures with intuitive human concepts. This hybrid representation enables both rigorous validation and accessible human comprehension of LLM reasoning processes. Key contributions include a robust type system with sort management for enhanced logical integrity, explicit representation of rules for clear distinction between factual and inferential knowledge, and a flexible architecture that allows for easy extension to various domain-specific applications. We demonstrate Proof of Thought's effectiveness through benchmarking on StrategyQA and a novel multimodal reasoning task, showing improved performance in open-ended scenarios. By providing verifiable and interpretable results, our technique addresses critical needs for AI system accountability and sets a foundation for human-in-the-loop oversight in high-stakes domains.
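To make the abstract's pipeline a bit more concrete, here is a minimal, hypothetical sketch of how a JSON-based DSL of sorts, facts, and rules could be lowered into first-order logic and checked by a theorem prover, using Z3's Python bindings. The schema and every name in it are my own assumptions for illustration, not the paper's actual format.

```python
# A hypothetical sketch (not the paper's actual DSL) of lowering a JSON
# description of sorts, facts, and rules into first-order logic and checking
# a query with the Z3 theorem prover (pip install z3-solver).
import json
from z3 import (DeclareSort, Const, Function, BoolSort,
                ForAll, Implies, Not, Solver, unsat)

program = json.loads("""
{
  "sorts": ["Person"],
  "predicates": {"Human": ["Person"], "Mortal": ["Person"]},
  "facts": [["Human", "socrates"]],
  "rules": [{"forall": "x", "sort": "Person",
             "if": ["Human", "x"], "then": ["Mortal", "x"]}],
  "query": ["Mortal", "socrates"]
}
""")

# Declare sorts and (unary, for simplicity) predicates from the JSON spec.
sorts = {name: DeclareSort(name) for name in program["sorts"]}
preds = {name: Function(name, *[sorts[s] for s in args], BoolSort())
         for name, args in program["predicates"].items()}
consts = {}

def term(name, sort):
    """Intern a named constant of the given sort."""
    return consts.setdefault(name, Const(name, sort))

solver = Solver()

# Facts: ground atoms such as Human(socrates).
for pred, arg in program["facts"]:
    solver.add(preds[pred](term(arg, sorts[program["predicates"][pred][0]])))

# Rules: universally quantified implications, kept distinct from facts.
for rule in program["rules"]:
    x = Const(rule["forall"], sorts[rule["sort"]])
    solver.add(ForAll([x], Implies(preds[rule["if"][0]](x),
                                   preds[rule["then"][0]](x))))

# Prove the query by refutation: assert its negation and check satisfiability.
q_pred, q_arg = program["query"]
solver.add(Not(preds[q_pred](term(q_arg, sorts[program["predicates"][q_pred][0]]))))
print("query entailed" if solver.check() == unsat else "query not entailed")
```

The refutation check at the end is the point: the query only counts as verified when its negation is unsatisfiable given the asserted facts and rules, which is what makes the output checkable rather than merely plausible.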
I must admit I've spent too much time building my own journalling apps too. What's the right tradeoff, I wonder, between writing your own journalling apps vs. time spent journalling?
Systematic means "across the board, following an intentional plan", so it's also a broad-brush statement. Presumably you meant systemic, which actually does have a specific meaning and wouldn't really apply here. It's a common misconception, though.
Once you start prompt-engineering, the answers, in isolation, look pretty great and show some understanding of the domain.
Yet, if you do not do this, the models end up spitting out randomly associated phrases and answers. This is a problem when you're asking a model a question whose answer you do not completely know. How do you trust it to give the right answer?
If you do not know the answer beforehand, you cannot prompt-engineer the model into giving the "right answers".
"Expert systems" from the 80s and the 90s, were pretty good at giving the "right answers" inside a closed domain. Those were truly wonderful systems.