Sink to Elasticsearch

This guide shows how to create a pipeline that writes streaming data to an Elasticsearch index for full-text search and analytics.

Prerequisites

  • TypeStream installed and running
  • An Elasticsearch instance accessible from the TypeStream server

Register an Elasticsearch connection

Before creating the pipeline, register your Elasticsearch instance. In the GUI, navigate to Connections > Elasticsearch and add:

  • URL: Your Elasticsearch endpoint (e.g. http://elasticsearch:9200)
  • Credentials: Username and password (if authentication is enabled)

The server monitors the connection's health, and once registered, the connection appears as a sink option in the graph builder.
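Registration happens in the GUI, but it can help to see the shape of the information the form collects. The record below is purely illustrative (field names like "name", "url", "username" are assumptions, not TypeStream's actual schema), with a small validation helper mirroring the checks you'd want before saving:

```python
# Hypothetical shape of a registered Elasticsearch connection record.
# Field names are illustrative; the real schema is managed by the
# TypeStream server when you fill in the Connections > Elasticsearch form.
es_connection = {
    "name": "prod-elasticsearch",
    "url": "http://elasticsearch:9200",  # endpoint from the form
    "username": "elastic",               # only if authentication is enabled
    "password": "changeme",
}

def validate_connection(conn: dict) -> list:
    """Return a list of problems; an empty list means the record looks usable."""
    problems = []
    if not conn.get("url", "").startswith(("http://", "https://")):
        problems.append("url must be an http(s) endpoint")
    if conn.get("username") and not conn.get("password"):
        problems.append("username set without a password")
    return problems

print(validate_connection(es_connection))  # -> []
```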

Build the pipeline

  1. Drag a Kafka Source and select your topic
  2. Optionally add transform or enrichment nodes
  3. Drag an Elasticsearch Sink from the palette (appears under Database Sinks after registering a connection)
    • Set the index name
    • Configure the document ID strategy and write method
  4. Click Create Job
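The steps above can be sketched as a small source-to-sink graph. The structure below is hypothetical (node types, field names, and the "orders" topic are illustrative; the GUI produces the real definition when you click Create Job):

```python
# Illustrative pipeline graph matching the build steps: a Kafka source
# feeding an Elasticsearch sink. Structure and field names are assumptions.
pipeline = {
    "nodes": [
        {"id": "src", "type": "kafka_source", "topic": "orders"},
        {
            "id": "sink",
            "type": "elasticsearch_sink",
            "connection": "prod-elasticsearch",  # registered connection, by name
            "index_name": "orders",
            "document_id_strategy": "record_key",
            "write_method": "UPSERT",
        },
    ],
    "edges": [("src", "sink")],
}
# Note: no credentials appear here; the sink only references the
# registered connection by name.
```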

Elasticsearch sink configuration

Field                      Description
index_name                 Elasticsearch index to write to
document_id_strategy       How to derive the document _id from records
write_method               Write behavior: INSERT or UPSERT
behavior_on_null_values    How to handle null field values
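Since the sink is backed by a Kafka Connect Elasticsearch connector (see "How it works" below), these fields roughly correspond to connector properties. The sketch below uses property names from the Confluent Elasticsearch sink connector (connection.url, write.method, behavior.on.null.values); the exact mapping TypeStream performs internally is an assumption:

```python
# Sketch: mapping the sink fields onto Kafka Connect Elasticsearch sink
# connector properties (Confluent connector names). The mapping itself is
# illustrative -- TypeStream handles this internally.
def to_connect_config(index_name, write_method, behavior_on_null_values, es_url):
    return {
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "connection.url": es_url,
        # The Confluent connector derives the index from the topic name,
        # so the intermediate topic would carry the index name.
        "topics": index_name,
        "write.method": write_method,                        # INSERT or UPSERT
        "behavior.on.null.values": behavior_on_null_values,  # e.g. IGNORE, DELETE, FAIL
    }

config = to_connect_config("orders", "UPSERT", "DELETE", "http://elasticsearch:9200")
```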

How it works

Under the hood, TypeStream creates a Kafka Connect Elasticsearch sink connector. The pipeline writes processed records to an intermediate Kafka topic, and the connector forwards them to Elasticsearch. Credentials are resolved server-side from the registered connection; they never appear in pipeline definitions.
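For context, Kafka Connect connectors are created by POSTing a name-plus-config payload to the Connect worker's /connectors REST endpoint. The helper below builds such a request; the connector name and config values are illustrative, and the endpoint/payload shape follows the standard Kafka Connect REST API:

```python
import json

# Build the (path, body) pair for creating a connector via the Kafka Connect
# REST API. The connector name and config below are illustrative examples.
def connector_request(name: str, config: dict):
    """Return the REST path and JSON body for a connector-creation POST."""
    return "/connectors", json.dumps({"name": name, "config": config})

path, body = connector_request(
    "typestream-es-sink-orders",  # hypothetical name TypeStream might choose
    {
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "connection.url": "http://elasticsearch:9200",  # resolved server-side
        "topics": "orders",
    },
)
```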

See also