Technical requirements
All code files relevant to this chapter are available at https://github.com/PacktPublishing/Graph-Machine-Learning/tree/main/Chapter12. Please refer to the Practical exercises section of Chapter 1, Getting Started with Graphs, for guidance on how to set up the environment to run the examples in this chapter, using either Poetry, pip, or Docker.
LLMs are powerful but require significant computational resources, especially as their size increases (e.g., 3B, 7B, 12B, and 40B parameters). Not everyone has access to the necessary hardware to run these models locally. As a result, pay-per-use APIs (such as ChatGPT, Claude, and Gemini) are available to query remote LLMs. However, this chapter aims to provide model-agnostic examples and suggestions for running powerful LLMs locally on commonly available machines.
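To make the "query a remote or local LLM" idea concrete, the sketch below builds a request in the OpenAI-compatible chat-completions format, which both the hosted APIs and common local runtimes (for example, llama.cpp's server or Ollama) accept. The base URL and model name here are placeholder assumptions, not values from this chapter; the snippet only constructs the URL and JSON body with the standard library, so it runs without the `openai` package or a live server:

```python
import json


def chat_completion_request(model, messages, base_url="http://localhost:8000/v1"):
    """Build the endpoint URL and JSON body for an OpenAI-compatible
    /chat/completions call.

    base_url is a placeholder for a hypothetical local server; for the hosted
    OpenAI API it would be https://api.openai.com/v1, sent along with an
    "Authorization: Bearer <API key>" header.
    """
    url = f"{base_url}/chat/completions"
    body = json.dumps({"model": model, "messages": messages})
    return url, body


# Example payload: one user message to a hypothetical locally served model.
url, body = chat_completion_request(
    "local-llm",  # placeholder model name
    [{"role": "user", "content": "What is a knowledge graph?"}],
)
print(url)  # http://localhost:8000/v1/chat/completions
```

With the `openai` Python package (v1.x), the equivalent call is `OpenAI(base_url=..., api_key=...).chat.completions.create(model=..., messages=...)`; switching between a hosted API and a local server is then just a matter of changing `base_url` and the key.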
In our examples, we’ll use the OpenAI library to interact with a server running the LLM. It can be used either with an API key (e.g., OpenAI...