
OpenInference LlamaIndex Instrumentation

Python auto-instrumentation library for LlamaIndex.

The traces produced are fully OpenTelemetry compatible and can be sent to an OpenTelemetry collector for viewing, such as arize-phoenix.


Installation

pip install openinference-instrumentation-llama-index

Compatibility

llama-index version    openinference-instrumentation-llama-index version
>=0.12.3               >=4.0
>=0.11.0               >=3.0
>=0.10.43              >=2.0, <3.0
>=0.10.0, <0.10.43     >=1.0, <2.0
>=0.9.14, <0.10.0      0.1.3
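
For example, if you are on an older llama-index release, you can pin both packages together so pip resolves a compatible pair (the exact pins below are illustrative):

pip install "llama-index>=0.11.0,<0.12" "openinference-instrumentation-llama-index>=3.0,<4.0"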

Quickstart

Install the packages needed for this demonstration.

python -m pip install --upgrade \
    openinference-instrumentation-llama-index \
    opentelemetry-sdk \
    opentelemetry-exporter-otlp \
    "opentelemetry-proto>=1.12.0" \
    arize-phoenix

Start the Phoenix app in the background as a collector. By default, it listens on http://localhost:6006, and you can visit the app in a browser at the same address.

The Phoenix app does not send data over the internet; it operates entirely locally on your machine.

python -m phoenix.server.main serve
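
If you prefer to launch Phoenix from within Python instead (for example, in a notebook), arize-phoenix also exposes a launcher. This is a sketch, and the exact API may vary by Phoenix version:

import phoenix as px

session = px.launch_app()  # starts Phoenix locally in the background
print(session.url)         # address of the running app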

The following Python code sets up the LlamaIndexInstrumentor to trace llama-index and send the traces to Phoenix at the endpoint shown below.

from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

endpoint = "http://127.0.0.1:6006/v1/traces"  # Phoenix's OTLP HTTP endpoint
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
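
SimpleSpanProcessor exports each span synchronously as it finishes, which is handy for a demo. For longer-running applications you would typically swap in OpenTelemetry's BatchSpanProcessor, which buffers spans and exports them in batches; everything else stays the same:

from opentelemetry.sdk.trace.export import BatchSpanProcessor

tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint)))
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)

To disable tracing later, the instrumentor can be turned off with LlamaIndexInstrumentor().uninstrument().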

To demonstrate tracing, we'll use LlamaIndex below to query a document.

First, download a text file.

import tempfile
from urllib.request import urlretrieve
from llama_index.core import SimpleDirectoryReader

url = "https://raw.githubusercontent.com/Arize-ai/phoenix-assets/main/data/paul_graham/paul_graham_essay.txt"
with tempfile.NamedTemporaryFile() as tf:
    urlretrieve(url, tf.name)
    documents = SimpleDirectoryReader(input_files=[tf.name]).load_data()

Next, we'll query using OpenAI. To do that, you need to set your OpenAI API key in an environment variable.

import os

os.environ["OPENAI_API_KEY"] = "<your openai key>"
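
If you'd rather not hardcode the key in your script, a small standard-library sketch prompts for it only when it isn't already set:

import getpass
import os

# Prompt for the key only if the environment doesn't already provide it.
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key: ")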

Now we can query the indexed documents.

from llama_index.core import VectorStoreIndex

query_engine = VectorStoreIndex.from_documents(documents).as_query_engine()
print(query_engine.query("What did the author do growing up?"))
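
The query returns a response object, not just a string. Besides showing up as a trace in Phoenix, it also carries the retrieved source chunks, which you can inspect directly (a sketch against the llama-index core response API):

response = query_engine.query("What did the author do growing up?")
print(response)
# Each source node records a retrieved chunk and its similarity score.
for node_with_score in response.source_nodes:
    print(node_with_score.score, node_with_score.get_content()[:80])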

Visit the Phoenix app at http://localhost:6006 to see the traces.
