Discussion about this post

Alexander Salazar:

create_ground_truth2 is creating a "model-generated ground truth" rather than a true ground truth based on the source documents. The questions are grounded in the source material, but the answers come from the model's general knowledge, so it's not really a "ground truth" in the traditional sense. Right?
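For contrast, a more source-grounded variant might look something like this rough sketch (the `llm`, `Chunk`, and `create_ground_truth_grounded` names are placeholders of mine, not from the post):

```python
# Sketch: tying the reference answer to the source passage rather than
# the model's general knowledge. All names here are illustrative.

from dataclasses import dataclass


@dataclass
class Chunk:
    text: str  # a passage taken from the source document


def llm(prompt: str) -> str:
    """Placeholder: swap in whatever LLM client the post actually uses."""
    return "<llm output>"


def create_ground_truth_grounded(chunk: Chunk, question: str) -> str:
    # Ask the model to answer ONLY from the provided passage, so the
    # reference answer stays anchored to the source document.
    prompt = (
        "Answer the question using ONLY the passage below. "
        "If the passage does not contain the answer, reply 'NOT IN SOURCE'.\n\n"
        f"Passage:\n{chunk.text}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)
```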

Meng Li:

Core of the GraphRAG Project:

1. Entity Knowledge Graph Generation: Initially, a large language model (LLM) is used to extract entities and their interrelations from source documents, creating an entity knowledge graph.

2. Community Summarization: Related entities are grouped into communities, and a summary is generated for each community; these summaries become the intermediate index used at query time.

3. Final Answer Generation: For a user question, each relevant community summary is used to produce a partial answer, and the partial answers are then re-summarized into the final answer.

This approach not only enhances the comprehensiveness and diversity of answers but also demonstrates higher efficiency and scalability when handling large-scale textual data.
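In code, the three stages map roughly onto a pipeline like this simplified sketch (the `llm`, `build_entity_graph`, `summarize_communities`, and `answer` names are my placeholders, not Microsoft's actual implementation, and graph community detection is elided):

```python
# Simplified sketch of the three GraphRAG stages described above.
# All function names are illustrative placeholders.

from typing import Iterable, List


def llm(prompt: str) -> str:
    """Placeholder: swap in a real LLM client."""
    return "<llm output>"


# 1. Entity knowledge graph: extract entities and relations per document.
def build_entity_graph(documents: Iterable[str]) -> List[str]:
    return [
        llm(f"Extract (entity, relation, entity) triples from:\n{doc}")
        for doc in documents
    ]


# 2. Community summarization: summarize each group of related entities.
#    (GraphRAG groups entities via graph community detection, e.g. Leiden.)
def summarize_communities(communities: Iterable[str]) -> List[str]:
    return [
        llm(f"Summarize this community of related entities:\n{c}")
        for c in communities
    ]


# 3. Map-reduce answering: a partial answer per summary, then one reduction.
def answer(question: str, community_summaries: Iterable[str]) -> str:
    partials = [
        llm(f"Using this summary:\n{s}\n\nAnswer the question: {question}")
        for s in community_summaries
    ]
    return llm(
        "Combine these partial answers into one final answer:\n"
        + "\n---\n".join(partials)
    )
```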

