Exporters

Process and export your telemetry data


Send telemetry to the OpenTelemetry Collector to make sure it’s exported correctly. Using the Collector in production environments is a best practice. To visualize your telemetry, export it to a backend such as Jaeger, Zipkin, Prometheus, or a vendor-specific backend.

Available exporters

The registry contains a list of exporters for Python.

Among exporters, OpenTelemetry Protocol (OTLP) exporters are designed with the OpenTelemetry data model in mind, emitting OTel data without any loss of information. Furthermore, many tools that operate on telemetry data support OTLP (such as Prometheus, Jaeger, and most vendors), providing you with a high degree of flexibility when you need it. To learn more about OTLP, see OTLP Specification.

This page covers the main OpenTelemetry Python exporters and how to set them up.

OTLP

Collector Setup

To try out and verify your OTLP exporters, you can run the collector in a docker container that writes telemetry directly to the console.

In an empty directory, create a file called collector-config.yaml with the following content:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
exporters:
  debug:
    verbosity: detailed
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
    logs:
      receivers: [otlp]
      exporters: [debug]

Now run the collector in a docker container:

docker run -p 4317:4317 -p 4318:4318 --rm -v $(pwd)/collector-config.yaml:/etc/otelcol/config.yaml otel/opentelemetry-collector

This collector is now able to accept telemetry via OTLP. Later you may want to configure the collector to send your telemetry to your observability backend.

Dependencies

If you want to send telemetry data to an OTLP endpoint (like the OpenTelemetry Collector, Jaeger, or Prometheus), you can choose between two different protocols to transport your data:

- HTTP/protobuf
- gRPC

Before proceeding, install the respective exporter packages as dependencies for your project:

pip install opentelemetry-exporter-otlp-proto-http
pip install opentelemetry-exporter-otlp-proto-grpc

Usage

Next, configure the exporter to point at an OTLP endpoint in your code. If you are using HTTP/protobuf:

from opentelemetry.sdk.resources import SERVICE_NAME, Resource

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

from opentelemetry import metrics
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

# Service name is required for most backends
resource = Resource(attributes={
    SERVICE_NAME: "your-service-name"
})

tracerProvider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(OTLPSpanExporter(endpoint="<traces-endpoint>/v1/traces"))
tracerProvider.add_span_processor(processor)
trace.set_tracer_provider(tracerProvider)

reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="<traces-endpoint>/v1/metrics")
)
meterProvider = MeterProvider(resource=resource, metric_readers=[reader])
metrics.set_meter_provider(meterProvider)

If you are using gRPC instead:

from opentelemetry.sdk.resources import SERVICE_NAME, Resource

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

from opentelemetry import metrics
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

# Service name is required for most backends
resource = Resource(attributes={
    SERVICE_NAME: "your-service-name"
})

tracerProvider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(OTLPSpanExporter(endpoint="your-endpoint-here"))
tracerProvider.add_span_processor(processor)
trace.set_tracer_provider(tracerProvider)

reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="localhost:5555")
)
meterProvider = MeterProvider(resource=resource, metric_readers=[reader])
metrics.set_meter_provider(meterProvider)
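
With either variant configured, spans and metrics created through the OpenTelemetry API are exported over OTLP. A minimal sketch of producing some telemetry with the providers set up above (the tracer, meter, and counter names are illustrative):

from opentelemetry import trace, metrics

tracer = trace.get_tracer("example.tracer")
meter = metrics.get_meter("example.meter")

# Illustrative instrument; pick names that match your application.
roll_counter = meter.create_counter("dice.rolls", description="Number of dice rolls")

with tracer.start_as_current_span("roll") as span:
    span.set_attribute("roll.value", 6)
    roll_counter.add(1)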

Console

To debug your instrumentation or see the values locally during development, you can use exporters that write telemetry data to the console (stdout).

The ConsoleSpanExporter and ConsoleMetricExporter are included in the opentelemetry-sdk package.

from opentelemetry.sdk.resources import SERVICE_NAME, Resource

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader, ConsoleMetricExporter

# Service name is required for most backends,
# and although it's not required for console export,
# it's good to set the service name anyway.
resource = Resource(attributes={
    SERVICE_NAME: "your-service-name"
})

tracerProvider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(ConsoleSpanExporter())
tracerProvider.add_span_processor(processor)
trace.set_tracer_provider(tracerProvider)

reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
meterProvider = MeterProvider(resource=resource, metric_readers=[reader])
metrics.set_meter_provider(meterProvider)
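
In short-lived scripts, the batch span processor and the periodic metric reader may not have flushed before the process exits, so you might see no output. A minimal sketch, assuming the setup above, that emits one span and forces a flush (the span name is illustrative):

tracer = trace.get_tracer("example.tracer")

with tracer.start_as_current_span("example-operation"):
    pass  # your application logic

# Flush any telemetry still buffered in the SDK before the script exits.
tracerProvider.force_flush()
meterProvider.force_flush()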

Jaeger

Backend Setup

Jaeger natively supports OTLP to receive trace data. You can run Jaeger in a docker container with the UI accessible on port 16686 and OTLP enabled on ports 4317 and 4318:

docker run --rm \
  -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  -p 9411:9411 \
  jaegertracing/all-in-one:latest

Usage

Now follow the instructions above to set up the OTLP exporters.
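
For example, with the all-in-one container above, you could point the OTLP HTTP span exporter at Jaeger's local OTLP endpoint. A minimal sketch, assuming the opentelemetry-exporter-otlp-proto-http package is installed:

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)

# Spans created from now on are sent to Jaeger; view them at http://localhost:16686.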

Prometheus

To send your metric data to Prometheus, you can either enable Prometheus' OTLP Receiver and use the OTLP exporter, or you can use the Prometheus exporter, a MetricReader that starts an HTTP server which collects metrics and serializes them to Prometheus text format on request.
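
The first option can reuse the OTLP metric exporter shown earlier, pointed at Prometheus itself. A minimal sketch, assuming Prometheus runs locally with the OTLP receiver feature enabled (see the Docker command below) and ingests OTLP at /api/v1/otlp/v1/metrics:

from opentelemetry import metrics
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="http://localhost:9090/api/v1/otlp/v1/metrics")
)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

The rest of this section covers the second option, the Prometheus exporter.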

Backend Setup

You can run Prometheus in a docker container, accessible on port 9090 by following these instructions:

Create a file called prometheus.yml with the following content:

scrape_configs:
  - job_name: dice-service
    scrape_interval: 5s
    static_configs:
      - targets: [host.docker.internal:9464]

Run Prometheus in a docker container with the UI accessible on port 9090:

docker run --rm -v ${PWD}/prometheus.yml:/prometheus/prometheus.yml -p 9090:9090 prom/prometheus --enable-feature=otlp-write-receiver

Dependencies

Install the exporter package as a dependency for your application:

pip install opentelemetry-exporter-prometheus

Update your OpenTelemetry configuration to use the exporter and send data to your Prometheus backend:

from prometheus_client import start_http_server

from opentelemetry import metrics
from opentelemetry.exporter.prometheus import PrometheusMetricReader
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.resources import SERVICE_NAME, Resource

# Service name is required for most backends
resource = Resource(attributes={
    SERVICE_NAME: "your-service-name"
})

# Start Prometheus client
start_http_server(port=9464, addr="localhost")
# Initialize PrometheusMetricReader, which pulls metrics from the SDK
# on demand to respond to scrape requests
reader = PrometheusMetricReader()
provider = MeterProvider(resource=resource, metric_readers=[reader])
metrics.set_meter_provider(provider)

With the above you can access your metrics at http://localhost:9464/metrics. Prometheus or an OpenTelemetry Collector with a Prometheus receiver can scrape the metrics from this endpoint.
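
The endpoint only has data to serve once your application records some metrics and while the process is still running. A minimal sketch, with an illustrative counter and loop:

import time

meter = metrics.get_meter("example.meter")
# Illustrative instrument so that /metrics has something to expose.
request_counter = meter.create_counter("example_requests", description="Example counter")

while True:
    request_counter.add(1)
    time.sleep(5)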

Zipkin

Backend Setup

You can run Zipkin in a Docker container by executing the following command:

docker run --rm -d -p 9411:9411 --name zipkin openzipkin/zipkin

Dependencies

To send your trace data to Zipkin, you can choose between two different protocols to transport your data:

- HTTP/protobuf
- JSON

Install the exporter package as a dependency for your application:

pip install opentelemetry-exporter-zipkin-proto-http
pip install opentelemetry-exporter-zipkin-json

Update your OpenTelemetry configuration to use the exporter and send data to your Zipkin backend. If you are using HTTP/protobuf:

from opentelemetry import trace
from opentelemetry.exporter.zipkin.proto.http import ZipkinExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import SERVICE_NAME, Resource

resource = Resource(attributes={
    SERVICE_NAME: "your-service-name"
})

zipkin_exporter = ZipkinExporter(endpoint="http://localhost:9411/api/v2/spans")

provider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(zipkin_exporter)
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)

If you are using JSON instead:

from opentelemetry import trace
from opentelemetry.exporter.zipkin.json import ZipkinExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import SERVICE_NAME, Resource

resource = Resource(attributes={
    SERVICE_NAME: "your-service-name"
})

zipkin_exporter = ZipkinExporter(endpoint="http://localhost:9411/api/v2/spans")

provider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(zipkin_exporter)
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
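
Once the provider is set, spans you create are batched and sent to Zipkin, and you can inspect them in its UI at http://localhost:9411. A small illustrative sketch with a nested span:

tracer = trace.get_tracer("example.tracer")

with tracer.start_as_current_span("parent-operation"):
    with tracer.start_as_current_span("child-operation") as child:
        child.set_attribute("example.attribute", "value")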

Custom exporters

Finally, you can also write your own exporter. For more information, see the SpanExporter Interface in the API documentation.
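
As an illustration, a custom exporter subclasses SpanExporter and implements at least export and shutdown. The sketch below is a hypothetical exporter that only prints span names; it is not part of the SDK:

from typing import Sequence

from opentelemetry.sdk.trace import ReadableSpan
from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult


class NameOnlySpanExporter(SpanExporter):
    """Illustrative exporter that prints only the names of finished spans."""

    def export(self, spans: Sequence[ReadableSpan]) -> SpanExportResult:
        for span in spans:
            print(span.name)
        return SpanExportResult.SUCCESS

    def shutdown(self) -> None:
        # Nothing to clean up in this sketch.
        pass

You can then register it with a SimpleSpanProcessor or BatchSpanProcessor just like the built-in exporters.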

Batching span and log records

The OpenTelemetry SDK provides a set of default span and log record processors that allow you to emit spans either one-by-one (“simple”) or batched. Using batching is recommended, but if you do not want to batch your spans or log records, you can use a simple processor instead, as follows:

from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

processor = SimpleSpanProcessor(OTLPSpanExporter(endpoint="your-endpoint-here"))
