Architecture

The zymtrace platform is available only as an on-premises/self-hosted installation: you host and manage zymtrace entirely within your own infrastructure, with full control over your data and setup. The platform consists of two main components:

  1. zymtrace profiler: The eBPF-based agent that must be installed on the machines running your applications. It collects performance profiles from both GPU- and CPU-bound workloads.
  2. Backend services: The backend services store, process, and analyze performance profiles. All our core backend services are written in Rust 🦀. The front-end is a combination of ReactJS and WASM.

The diagram below depicts a high-level architecture of the components:

[Architecture diagram]
info

Need a hosted zymtrace backend?

We can provision a dedicated SaaS version of the zymtrace backend for you. Email us at [email protected] or request access here.

Components overview

zymtrace profiler

The zymtrace profiler runs on each node, deployed either as a Kubernetes DaemonSet or as a standalone binary on a standard VM. It collects performance profiles of resource-intensive processes on the node, aggregates and compresses them, and sends them to the backend via gRPC. TLS is supported by default, with an option to disable it if needed. Here's a more detailed description of how it works:

[Agent architecture diagram]
  1. Unwinder eBPF programs are loaded into the kernel.
  2. The kernel verifies the safety of the BPF program. If approved, the program is attached to probes and triggered upon specific events.
  3. The eBPF programs collect data and pass it to userspace via maps.
  4. The agent retrieves the collected data from the maps. This data includes process-specific and interpreter-specific meta-information, which helps the eBPF unwinder programs perform mixed-stack unwinding across different languages (e.g., Python calling into C libraries); a simplified sketch of such a record follows this list.
  5. The agent pushes stack traces, metrics, and metadata to the zymtrace backend for analysis.
  6. From there, you can easily identify and optimize the most inefficient functions across your entire infrastructure. Refer to the user guide for the supported profiling visualizations.
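
To make steps 3 and 4 more concrete, here is a minimal sketch of the kind of record the unwinders could hand to the userspace agent through a map: a per-sample stack in which native and interpreter frames are interleaved. The types and field names are simplified assumptions for illustration only; they do not represent zymtrace's actual map layout.

```rust
// Hypothetical, simplified model of a sample handed from the eBPF unwinders
// to the userspace agent via a map. Field names and layout are illustrative;
// zymtrace's real map format is not shown here.

/// One frame of a mixed stack. Native frames are symbolized later by the
/// backend; interpreter frames carry IDs the agent resolves using the
/// interpreter-specific meta-information mentioned in step 4.
enum Frame {
    /// Native code: identified by the mapping's build ID and the
    /// instruction offset within that file.
    Native { build_id: [u8; 20], file_offset: u64 },
    /// Interpreter code (e.g., CPython): identified by runtime-specific IDs.
    Python { code_object_id: u64, line_no: u32 },
}

/// One stack sample taken when a perf event fires.
struct StackSample {
    pid: u32,
    tid: u32,
    timestamp_ns: u64,
    /// A Python program calling into a C extension produces Python and
    /// Native frames interleaved in this vector.
    frames: Vec<Frame>,
}

fn main() {
    let sample = StackSample {
        pid: 4242,
        tid: 4243,
        timestamp_ns: 1_700_000_000_000,
        frames: vec![
            Frame::Native { build_id: [0; 20], file_offset: 0x12f4 },
            Frame::Python { code_object_id: 7, line_no: 88 },
        ],
    };
    println!("sample from pid {} with {} frames", sample.pid, sample.frames.len());
}
```
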
info

CPU profiling uses perf-events and tracepoints. uprobes are used in the context of GPU profiling to correlate CPU stack traces with the CUDA kernels they launched. Detailed documentation on the GPU profiler is coming soon.
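
As described in step 5, the profiler ships profiles to the backend over gRPC, with TLS supported by default and an option to disable it. Below is a minimal sketch of that transport pattern, assuming a tonic-based Rust client with its TLS feature enabled; the function name, host parameter, and flag are illustrative assumptions, not zymtrace's actual client code or configuration.

```rust
// Minimal sketch of the transport pattern: a gRPC channel with TLS enabled
// by default and an explicit opt-out. Illustrative only; the host parameter,
// flag, and function are not zymtrace's actual client code or configuration.
use tonic::transport::{Channel, ClientTlsConfig, Endpoint};

async fn connect_to_ingest(
    host: &str,
    disable_tls: bool,
) -> Result<Channel, Box<dyn std::error::Error>> {
    let scheme = if disable_tls { "http" } else { "https" };
    let mut endpoint = Endpoint::from_shared(format!("{scheme}://{host}"))?;

    if !disable_tls {
        // TLS is the default; a custom CA or native roots would be
        // configured on the ClientTlsConfig in a real deployment.
        endpoint = endpoint.tls_config(ClientTlsConfig::new())?;
    }

    // The profiler's generated gRPC client would be built on top of this
    // channel to stream aggregated, compressed profiles to the backend.
    Ok(endpoint.connect().await?)
}
```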

zymtrace backend services

The zymtrace backend is designed to store, process and visualize profiling data efficiently. Below is an overview of the key backend services and their roles:

ingest service

The ingest service receives profiling data from the zymtrace profiler, with its endpoint typically exposed via an ingress controller if deployed in a Kubernetes cluster.

If configured, the ingest service can pull native symbols from public Linux distributions to enable symbolizing native code. This is particularly useful for profiling native applications written in languages like C, C++, and Rust, but it may not be required for interpreted languages like Java, Python, or Ruby.

The service is also responsible for storing profiling events in ClickHouse, a high-performance database designed for ultra-fast querying and analysis.
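
As a rough illustration of what a stored profiling event might contain, the sketch below models one event row: a timestamped reference to a deduplicated stack trace plus a sample count and origin metadata. The struct and field names are assumptions for illustration and do not reflect zymtrace's actual schema.

```rust
// Hypothetical shape of a profiling event as it might be stored in
// ClickHouse. The struct and field names are illustrative only and do not
// reflect zymtrace's actual schema.
struct ProfilingEvent {
    /// Sample timestamp (Unix seconds), the main dimension queries filter on.
    timestamp: u64,
    /// Hash identifying a deduplicated stack trace stored separately.
    stacktrace_id: u128,
    /// Number of samples observed for this trace in the reporting interval.
    count: u32,
    /// Origin of the sample.
    hostname: String,
    pid: u32,
    /// Project the event is associated with (see the identity service below).
    project_id: u32,
}

fn main() {
    let event = ProfilingEvent {
        timestamp: 1_700_000_000,
        stacktrace_id: 0x5eed_f00d,
        count: 12,
        hostname: "node-01".into(),
        pid: 4242,
        project_id: 1,
    };
    println!(
        "{} samples for trace {:x} from {}",
        event.count, event.stacktrace_id, event.hostname
    );
}
```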

symDB service

The symDB service handles symbol resolution upon request. It retrieves native symbols stored in S3/Minio or fetches them from the global symbolization service. It also uses debuginfod as a fallback if symbols are not in the global bucket. This service is critical for converting raw profiling data into meaningful stack traces by resolving both native and interpreted symbols.
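
The lookup order described above amounts to a simple fallback chain, sketched below. The helper functions are hypothetical stand-ins for the S3/MinIO, global symbolization, and debuginfod lookups; they are not zymtrace's actual API.

```rust
// Hypothetical sketch of the symDB lookup order. The helper functions stand
// in for real S3/MinIO, global symbolization service, and debuginfod
// clients; they are not zymtrace's actual API.

struct Symbols; // placeholder for resolved symbol data

fn lookup_customer_bucket(_build_id: &str) -> Option<Symbols> {
    // 1. Native symbols previously stored in the customer's S3/MinIO bucket.
    None
}

fn lookup_global_symbolization(_build_id: &str) -> Option<Symbols> {
    // 2. The global symbolization service (symbols for public distro packages).
    None
}

fn lookup_debuginfod(_build_id: &str) -> Option<Symbols> {
    // 3. Fallback: query debuginfod if the global bucket has no match.
    None
}

/// Resolve native symbols for a build ID, trying each source in order.
fn resolve_symbols(build_id: &str) -> Option<Symbols> {
    lookup_customer_bucket(build_id)
        .or_else(|| lookup_global_symbolization(build_id))
        .or_else(|| lookup_debuginfod(build_id))
}

fn main() {
    if resolve_symbols("example-build-id").is_none() {
        println!("no symbols found; frames stay unsymbolized until symbols are available");
    }
}
```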

identity service

zymtrace lets you segregate profiling data into separate projects within an organization. The identity service currently manages these projects: it associates incoming profiling data from the ingest service with the correct project, laying the foundation for future user authentication and role-based access control.

Storage

  • ClickHouse: Stores all profiling events.
  • ScyllaDB: Stores symbols specific to interpreted languages.
  • S3/Minio: Stores native symbols.

Global Symbolization

We provide a public service that collects and maintains symbol information for all packages in the repositories of various popular Linux distributions. Our system continuously crawls these distributions to ensure up-to-date symbol data. Applications built from these repositories are automatically symbolized, requiring no action from the user.

This service is hosted on Google Cloud Storage (GCS). Customers can also clone the bucket for on-premises use, which is particularly useful in environments without internet access.

zymtrace supports the following Linux distributions:

  • Alpine Linux
  • Debian
  • Fedora
  • Ubuntu

Get started

zymtrace backend (On-Premises)

Refer to our on-premises installation guide for detailed instructions.

zymtrace profiler

Refer to the profiler host agent installation guide for more details.