
Augmenting KG Hierarchies Using Neural Transformers

  • Conference paper
  • In: Advances in Information Retrieval (ECIR 2024)

Abstract

This work leverages neural transformers to generate hierarchies in an existing knowledge graph. For small (<10,000-node) domain-specific KGs, we find that combining few-shot prompting with one-shot generation works well, while larger KGs may require cyclical generation. Hierarchy coverage increased by 98% for intents and 95% for colors.
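As a sketch of the few-shot prompting the abstract describes, the snippet below assembles a prompt asking an LLM to emit a node's ancestor path in one shot. The prompt wording, example hierarchies, and function name are illustrative assumptions, not the paper's actual prompts; the resulting string would be sent to a model such as GPT-4 or Llama 2:

```python
def build_fewshot_prompt(examples, node):
    """Assemble a few-shot prompt asking for a KG node's root-to-node path.

    examples: list of (node, path) pairs, where path is a list of labels
              ordered from the root category down to the node.
    """
    lines = [
        "Given a knowledge-graph node, produce its hierarchy path",
        "from the root category down to the node itself.",
        "",
    ]
    # Few-shot demonstrations: one Node/Path pair per example.
    for ex_node, ex_path in examples:
        lines.append(f"Node: {ex_node}")
        lines.append("Path: " + " > ".join(ex_path))
        lines.append("")
    # The target node, with the path left for the model to complete.
    lines.append(f"Node: {node}")
    lines.append("Path:")
    return "\n".join(lines)


# Hypothetical examples in the domains the abstract mentions (colors, intents).
examples = [
    ("crimson", ["color", "red", "crimson"]),
    ("book a flight", ["intent", "travel", "book a flight"]),
]
prompt = build_fewshot_prompt(examples, "teal")
print(prompt)
```

For larger KGs, where the abstract suggests cyclical generation, one would instead re-prompt in rounds, feeding previously generated subtrees back as context; that loop is omitted here.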



Author information

Correspondence to Sanat Sharma.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sharma, S., Poddar, M., Kumar, J., Blank, K., King, T. (2024). Augmenting KG Hierarchies Using Neural Transformers. In: Goharian, N., et al. Advances in Information Retrieval. ECIR 2024. Lecture Notes in Computer Science, vol 14612. Springer, Cham. https://doi.org/10.1007/978-3-031-56069-9_35


  • DOI: https://doi.org/10.1007/978-3-031-56069-9_35

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-56068-2

  • Online ISBN: 978-3-031-56069-9

  • eBook Packages: Computer Science, Computer Science (R0)
