This repo contains the code for the paper LUQ: Long-text Uncertainty Quantification for LLMs. An early version is provided here; the complete code will be released soon.
Update: We have recently added the more capable Llama-3-8B-Instruct as our NLI model. By serving it with vLLM, we significantly speed up inference and achieve better performance.
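For reference, an NLI-style entailment check with Llama-3-8B-Instruct served through vLLM might look like the minimal sketch below. This is only an illustration, not the repo's exact implementation: the model id, prompt wording, and the `nli_entailment` helper are assumptions.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed HF model id

llm = LLM(model=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# Deterministic, short verdicts are enough for a Yes/No entailment judgment.
params = SamplingParams(temperature=0.0, max_tokens=4)

def nli_entailment(premises, hypotheses):
    """Judge, for each (premise, hypothesis) pair, whether the premise
    supports the hypothesis. Returns a list of booleans."""
    prompts = []
    for premise, hypothesis in zip(premises, hypotheses):
        messages = [{
            "role": "user",
            "content": (
                f"Premise: {premise}\n"
                f"Hypothesis: {hypothesis}\n"
                "Does the premise support the hypothesis? Answer Yes or No."
            ),
        }]
        # Apply the Llama-3 chat template so the instruct model sees
        # a properly formatted conversation.
        prompts.append(tokenizer.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True))
    # vLLM batches all prompts internally, which is where the speedup comes from.
    outputs = llm.generate(prompts, params)
    return [o.outputs[0].text.strip().lower().startswith("yes") for o in outputs]
```

Because vLLM processes the whole batch of prompts in one call, scoring many sentence pairs at once is much faster than looping over a standard Hugging Face pipeline.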