Central (on-prem) Install
In this simple guide, we’ll go over the basic steps required to move beyond local deployment and get Digma up and running in a Kubernetes cluster that multiple people can connect to.
Digma is deployed into its own namespace in the K8s cluster. Depending on your application deployment architecture, you may want to deploy Digma with different parameters to enable the right connectivity.
You should pay attention to the following regarding the deployment architecture:
OTEL Collector – Your application should be able to send observability data to the IP/DNS of this endpoint. You may need to configure your setup to allow this traffic. You may also choose to expose it as a public IP in your deployment (see below under Cloud Deployment); a connection example follows this list.
Analytics-API – This endpoint needs to be accessible to the IDE plugin. If you are deploying Digma into your internal network and use a VPN to access that IP, you can choose not to expose this service as a public IP (see below under Cloud Deployment).
Jaeger – Digma bundles its own Jaeger service that aggregates sample traces for various insights, performance metrics, and exceptions. If you do not wish to expose this endpoint or prefer to configure your APM as the trace source, you can choose to disable this endpoint. Digma does offer enhancements over Jaeger such as a two-way mapping between the code and the trace.
UI - This endpoint serves the Digma web application as well as provides templates and other UI elements for Digma email reports.
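For example, once the otel-collector endpoint is reachable from your application's network, pointing an instrumented service at it is usually just a matter of setting the standard OpenTelemetry exporter endpoint. A minimal sketch follows; the hostname is illustrative and 4317 is the standard OTLP gRPC port, so substitute the address and port your deployment actually exposes:

```bash
# Send OTLP data from an instrumented service to Digma's collector.
# The hostname is illustrative; 4317 is the standard OTLP gRPC port, but your
# deployment may expose a different one.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector.digma.internal:4317"
```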
Prerequisites:
Access to a Kubernetes cluster
Helm installed locally
A license key. You can use the one provided to you or create a free Digma Account to receive one.
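Before installing, it's worth a quick sanity check that the prerequisite tools are wired up:

```bash
# Confirm kubectl can reach the target cluster and Helm is installed locally.
kubectl cluster-info
helm version
```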
We recommend installing Digma in your org using our Helm chart.
You can use a values.yaml file to configure many aspects of how Digma will function in your environment. In this section, we'll review the critical ones, but you can find a more exhaustive list on our GitHub repo here.
The license key is the only mandatory parameter for setting up Digma.
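As a minimal sketch, assuming the chart is published under Digma's Helm repository on GitHub and that the license parameter is exposed as digma.licenseKey (both are assumptions; confirm the repo URL, chart name, and key path against the chart's documentation), an install could look like:

```bash
# Hypothetical install sketch -- the repo URL, chart name, and the
# "digma.licenseKey" path are assumptions; check Digma's Helm chart repo
# for the authoritative values.
helm repo add digma https://digma-ai.github.io/helm-chart/
helm repo update
helm install digma digma/digma \
  --namespace digma --create-namespace \
  --set digma.licenseKey="<YOUR_LICENSE_KEY>"
```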
How you define Digma's networking is really up to your organization's preferences and needs. You can choose to have the backend services exposed publicly or internally, use any type of ingress controller, or use a load balancer instead.
You can refer to these examples:
Nginx controller with private networking
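As a rough illustration of the private-networking flavor, such a configuration typically enables an nginx ingress class and keeps the exposed addresses on internal DNS. The key names below are placeholders rather than the chart's actual schema, so treat this purely as a sketch and refer to the linked example and the values reference for the real structure:

```bash
# Hypothetical values snippet for exposing Digma via an nginx ingress on an
# internal DNS name. Key names are illustrative only -- consult the chart's
# values reference and the linked example for the real structure.
cat <<'EOF' >> digma-values.yaml
ingress:
  enabled: true
  className: nginx
  host: digma.mycompany.internal
EOF
```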
To activate Digma's email notifications feature, you will need an email gateway API key and URL that will be used to send out the emails; these should be set in the values file as shown below. In addition, you can set other preferences regarding delivery times and recipients.
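The exact key names for these settings are defined in the chart's values reference; the snippet below is only an illustrative sketch with assumed names (emailGatewayUrl, emailGatewayApiKey, recipients):

```bash
# Hypothetical email notification settings -- all key names here are assumptions.
cat <<'EOF' >> digma-values.yaml
report:
  emailGatewayUrl: "https://mail-gateway.example.com/v1/send"
  emailGatewayApiKey: "<EMAIL_GATEWAY_API_KEY>"
  recipients:
    - team-leads@example.com
EOF
```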
The daily reports often include links to issues, which require the report HTML to reference the Digma ui service DNS/IP. If you have set up a specific ingress for that service, you'll need to also specify it in the settings file to ensure the links are functional. Enter the DNS/IP used for the ui service as uiExternalBaseUrl below. The deployment will try to autodetect it if not directly specified.
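For example, if the ui service is served through an ingress at an internal DNS name, the setting could look like the sketch below. The uiExternalBaseUrl name comes from the chart itself; its exact placement in the values hierarchy and the hostname are assumptions here:

```bash
# Make report links point at the externally reachable UI address.
# Only "uiExternalBaseUrl" is taken from the docs above; the flat placement
# is an assumption, and the hostname is illustrative.
cat <<'EOF' >> digma-values.yaml
uiExternalBaseUrl: "https://digma-ui.mycompany.internal"
EOF
```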
If you're using image pull secrets to avoid image repository throttling you can specify them globally using this value:
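Most Helm charts expose this as a global.imagePullSecrets list; assuming Digma's chart follows that convention (worth confirming against the values reference), the snippet would look roughly like:

```bash
# Reference an existing registry secret for all Digma images.
# "global.imagePullSecrets" follows the common Helm convention and is an
# assumption for this chart.
cat <<'EOF' >> digma-values.yaml
global:
  imagePullSecrets:
    - name: my-registry-credentials
EOF
```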
To verify that everything is working properly, check the pod status and make sure all pods are in the ‘Running’ state:
kubectl get pods -n digma
For example, this is the expected output:
Step 4: Get the IP/DNS value for the Digma deployment
Run the following command to get the address assigned to the Collector, Plugin-API, and Jaeger endpoints (if enabled). You’ll need these to complete the setup. Note that external load balancers for public IPs may take additional time to set up the address.
kubectl get services --namespace digma
Depending on your setup type, get the public or internal IP for the following services:
otel-collector: Receiver for OTEL observability data
analytics-api: Provides the plugin with data and issues
jaeger-ui: Digma's embedded Jaeger service for displaying traces
Capture these addresses as you’ll need them later to set up your IDE plugin.
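If these services are exposed through load balancers, one way to pull just the assigned addresses is a short jsonpath loop, for example:

```bash
# Print the externally assigned IP/hostname of each relevant service.
# Service names are taken from the list above; adjust them if your release
# prefixes or renames the services.
for svc in otel-collector analytics-api jaeger-ui; do
  echo -n "$svc: "
  kubectl get service "$svc" --namespace digma \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}{"\n"}'
done
```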
Step 5: Final validation
You can try calling the following API to validate connectivity and ensure Digma is up and running. You’ll need to use the ANALYTICS-API address you’ve captured in the above step. If you've set an access token, you will also need to provide it as a header for the request, as seen in the example below:
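A minimal connectivity check could look like the sketch below; the /about path is a placeholder for whichever health or metadata endpoint your Digma version exposes, so substitute the path from your setup instructions if it differs:

```bash
# Basic connectivity check against the Analytics API. --insecure is used here
# assuming the default deployment serves a self-signed certificate.
# Replace the address with the analytics-api value captured above; the "/about"
# path is an illustrative placeholder.
curl --insecure "https://<ANALYTICS_API_ADDRESS>:5051/about"
```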
If you are using the optional digmaAnalytics.accesstoken parameter, add the following argument to the curl command: -H 'Digma-Access-Token: Token <ACCESS_TOKEN>'
If you received a non-error response back you’re good to go for the next step!
Once Digma is up and running, you can set your IDE plugin to connect to it. To do that, open the plugin settings (go to IntelliJ IDEA -> Settings/Preferences and search for ‘Digma’).
Set the Digma API URL parameter using the analytics-api value you’ve captured previously (by default this should be prefixed with ‘https’ and use port 5051).
Set the Runtime observability backend URL parameter using the otel-collector value you’ve captured previously.
Set the Jaeger Query URL (if this option was enabled) using the jaeger-ui address you’ve captured previously.
Click Apply/OK to enable the changes and check that the Digma UI is not indicating any connection errors.