Quick Start to APM

With APM you get critical performance insights from inside your application code. You can trace how each request hitting your servers travels across your mesh of microservices. ArchSaber collects and processes such traces at scale, generating meaningful performance metrics and inter-service dependency graphs from them. With flexible alerting, you can be sure that performance regressions are addressed promptly.

[Image: service breakup]

Two Steps to APM

  1. Install ArchSaber's APM agent on the server where your application is running (don't worry - we support Docker and Kubernetes as well).
  2. Instrument your code to send performance data to the agent.

Your dashboard will then be ready, with access to a wealth of critical performance data about your applications and how they interact with each other.

Install ArchSaber's APM agent

The job of the agent is to process the performance data (transaction traces) that your application generates using the instrumentation libraries, and to send it to ArchSaber for visualization and analysis. To install the agent, you will need to copy your license key from the account tab of your dashboard. Please register here and verify your account, if you haven't already, before proceeding. The steps to install the agent depend on how you want to deploy it:

  1. Server
  2. Docker
  3. Kubernetes

ArchSaber's agent is fully compatible with the most popular open-source instrumentation libraries. If you are using Jaeger, Zipkin, or one of DataDog's open-source APM client libraries to instrument your application, ArchSaber's agent will just work. The agent listens for traces from your application on the following ports:

Port   Protocol   Function
8126   TCP        accepts traces from DataDog's dd-trace-* libraries over HTTP
5775   UDP        accepts zipkin.thrift over the compact Thrift protocol
6831   UDP        accepts jaeger.thrift over the compact Thrift protocol
6832   UDP        accepts jaeger.thrift over the binary Thrift protocol
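As a sketch, if you deploy the agent as a Docker container you would publish these ports to the host. The image name `archsaber/apm-agent` and the `ARCHSABER_LICENSE_KEY` variable are assumptions here; consult your dashboard for the exact install instructions:

```shell
# Assumed image name and license variable -- check your dashboard for the real ones.
docker run -d --name archsaber-agent \
  -e ARCHSABER_LICENSE_KEY=<your-license-key> \
  -p 8126:8126/tcp \
  -p 5775:5775/udp \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  archsaber/apm-agent
```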

Instrument your code

Now that the agent is ready to accept traces, you will need to instrument your application. This step depends on the language your application is written in. Refer to the table below for instructions on how to instrument your app.

Language   Recommended Instrumentation Library   Example Usage            Supported Language Frameworks
Python     dd-trace-py                           Auto instrumentation     Django, Flask and others
Java       dd-trace-java                         Using the Java agent     Tomcat, Jetty, WebSphere, WebLogic, Spring-Web and others
Ruby       dd-trace-rb                           Rails quickstart         Rails, Sinatra, Rack and others
Go         dd-trace-go                           Gin middleware           Gin, gRPC, Gorilla and others
Node.js    dd-trace-js                           express                  express, graphql
C++        jaeger-client-cpp                     Custom instrumentation   -
C#         jaeger-client-csharp                  Custom instrumentation   -

By default, the jaeger-client-* libraries send traces to ports 6831/6832 of an agent running on localhost, and the dd-trace-* libraries send traces to port 8126 on localhost. While this works when your application and ArchSaber's agent are deployed directly on a server, you will need a few additional steps in containerized environments to make sure that the traces generated by your application reach the agent.
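When the agent is not on localhost, most of these libraries can be redirected through environment variables: the dd-trace-* libraries honor `DD_AGENT_HOST` and `DD_TRACE_AGENT_PORT`, and the Jaeger clients honor `JAEGER_AGENT_HOST` and `JAEGER_AGENT_PORT`. A minimal sketch, where the hostname `archsaber-agent` is an assumed name for wherever your agent runs:

```shell
# Assumed hostname 'archsaber-agent'; replace with your agent's address.
export DD_AGENT_HOST=archsaber-agent      # dd-trace-* libraries
export DD_TRACE_AGENT_PORT=8126
export JAEGER_AGENT_HOST=archsaber-agent  # jaeger-client-* libraries
export JAEGER_AGENT_PORT=6831
```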


If you've deployed the agent on the nodes of your cluster as a Kubernetes daemonset, you can use the Kubernetes downward API to expose a routable name for the node to your applications running inside pods:

    env:
      - name: ARCHSABER_AGENT_HOSTNAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName

You will then need to use this environment variable (ARCHSABER_AGENT_HOSTNAME) to configure the tracer in your application.
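For example, a dd-trace-py application could resolve this variable at startup and forward it to `DD_AGENT_HOST`, the variable the library actually reads. A minimal sketch in plain Python (the fallback to localhost is an assumption for non-Kubernetes deployments):

```python
import os

def agent_host(default="localhost"):
    # Resolve the agent hostname injected via the downward API;
    # fall back to a default for non-Kubernetes deployments.
    return os.environ.get("ARCHSABER_AGENT_HOSTNAME", default)

# dd-trace-py reads DD_AGENT_HOST, so forward the resolved value to it.
os.environ.setdefault("DD_AGENT_HOST", agent_host())
```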


If you run your applications inside a Docker container, and you've deployed the agent as a Docker container on the same network (a non-default bridge) as your application container, you can simply use the agent container's name as the tracer's agent hostname in your application.
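For instance, a user-defined bridge network gives containers DNS resolution by name, so the application can reach the agent as `archsaber-agent`. The image and container names below are assumptions for illustration:

```shell
# Assumed image/app names; a user-defined bridge provides DNS by container name.
docker network create apm-net
docker run -d --name archsaber-agent --network apm-net archsaber/apm-agent
docker run -d --name my-app --network apm-net \
  -e DD_AGENT_HOST=archsaber-agent \
  my-app:latest
```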

Note: currently, aggregated performance statistics are only available when using the dd-trace-* libraries to instrument your application. With the jaeger-client-* libraries, you can still view individual traces in the dashboard.
