Java Feature Flags support is experimental and requires enabling an experimental flag in the tracer. See the Configuration section for details.
Overview
This page describes how to instrument a Java application with the Datadog Feature Flags SDK. Datadog feature flags provide a unified way to remotely control feature availability in your app, experiment safely, and deliver new experiences with confidence.
The Java SDK integrates feature flags directly into the Datadog APM tracer and implements the OpenFeature standard for maximum flexibility and compatibility.
If you're using Datadog APM and your application already has the Datadog Java tracer and Remote Configuration enabled, skip to Initialize the OpenFeature provider. You only need to add the OpenFeature dependencies and initialize the provider.
Feature flagging is integrated into the Datadog Java APM tracer. You need the tracer JAR and the OpenFeature SDK dependencies.
Add the following dependencies to your build.gradle:
build.gradle
```groovy
dependencies {
    // OpenFeature SDK for flag evaluation
    implementation 'dev.openfeature:sdk:1.18.2'

    // Datadog OpenFeature Provider
    implementation 'com.datadoghq:dd-openfeature:1.57.0'
}
```
Add the following dependencies to your build.gradle.kts:
build.gradle.kts
```kotlin
dependencies {
    // OpenFeature SDK for flag evaluation
    implementation("dev.openfeature:sdk:1.18.2")

    // Datadog OpenFeature Provider
    implementation("com.datadoghq:dd-openfeature:1.57.0")
}
```
Add the following dependencies to your pom.xml:
pom.xml
```xml
<dependencies>
  <!-- OpenFeature SDK for flag evaluation -->
  <dependency>
    <groupId>dev.openfeature</groupId>
    <artifactId>sdk</artifactId>
    <version>1.18.2</version>
  </dependency>
  <!-- Datadog OpenFeature Provider -->
  <dependency>
    <groupId>com.datadoghq</groupId>
    <artifactId>dd-openfeature</artifactId>
    <version>1.57.0</version>
  </dependency>
</dependencies>
```
Configuration
If your Datadog Agent already has Remote Configuration enabled for other features (like Dynamic Instrumentation or Application Security), you can skip the Agent configuration and go directly to Application configuration.
Agent configuration
Configure your Datadog Agent to enable Remote Configuration:
datadog.yaml
```yaml
# Enable Remote Configuration
remote_configuration:
  enabled: true

# Set your API key
api_key: <YOUR_API_KEY>
```
Application configuration
If your application already runs with -javaagent:dd-java-agent.jar and has Remote Configuration enabled (DD_REMOTE_CONFIG_ENABLED=true), you only need to add the experimental feature flag (DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true). Skip the tracer download and JVM configuration steps.
Configure your Java application with the required environment variables or system properties:
```shell
# Required: Enable Remote Configuration in the tracer
export DD_REMOTE_CONFIG_ENABLED=true

# Required: Enable experimental feature flagging support
export DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true

# Required: Your Datadog API key
export DD_API_KEY=<YOUR_API_KEY>

# Required: Service name
export DD_SERVICE=<YOUR_SERVICE_NAME>

# Required: Environment (e.g., prod, staging, dev)
export DD_ENV=<YOUR_ENVIRONMENT>

# Optional: Version
export DD_VERSION=<YOUR_APP_VERSION>

# Start your application with the tracer
java -javaagent:path/to/dd-java-agent.jar -jar your-application.jar
```
The Datadog feature flagging system starts automatically when the tracer is initialized with both Remote Configuration and the experimental flagging provider enabled. No additional initialization code is required in your application.
Feature flagging requires both DD_REMOTE_CONFIG_ENABLED=true and DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true. Without the experimental flag, the feature flagging system does not start and the Provider returns the programmatic default.
Add the Java tracer to the JVM
For instructions on how to add the -javaagent argument to your application server or framework, see Add the Java Tracer to the JVM.
Make sure to include the feature flagging configuration flags:
```
-Ddd.remote.config.enabled=true
-Ddd.experimental.flagging.provider.enabled=true
```
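Combined with the `-javaagent` argument, a full launch command might look like the following (the JAR paths are illustrative):

```shell
# Start the application with the tracer and both feature flagging properties set
java -javaagent:path/to/dd-java-agent.jar \
     -Ddd.remote.config.enabled=true \
     -Ddd.experimental.flagging.provider.enabled=true \
     -jar your-application.jar
```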
Initialize the OpenFeature provider
Initialize the Datadog OpenFeature provider in your application startup code. The provider connects to the feature flagging system running in the Datadog tracer.
```java
import dev.openfeature.sdk.OpenFeatureAPI;
import dev.openfeature.sdk.Client;
import datadog.trace.api.openfeature.Provider;
import dev.openfeature.sdk.exceptions.ProviderNotReadyError;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class App {
    private static final Logger logger = LoggerFactory.getLogger(App.class);
    private static Client client;

    public static void main(String[] args) throws Exception {
        // Initialize the Datadog provider
        logger.info("Initializing Datadog OpenFeature Provider...");
        OpenFeatureAPI api = OpenFeatureAPI.getInstance();

        try {
            // Set provider and wait for initial configuration (recommended)
            api.setProviderAndWait(new Provider());
            client = api.getClient("my-app");
            logger.info("OpenFeature provider initialized successfully");
        } catch (ProviderNotReadyError e) {
            // Handle gracefully - app will use default flag values
            logger.warn("Provider not ready (no tracer/config available), continuing with defaults", e);
            client = api.getClient("my-app");
            logger.info("App will use default flag values until provider is ready");
        } catch (Exception e) {
            logger.error("Failed to initialize OpenFeature provider", e);
            throw e;
        }

        // Your application code here
    }
}
```
Use setProviderAndWait() to block evaluation until the initial flag configuration is received from Remote Configuration. This ensures flags are ready before the application starts serving traffic. The default timeout is 30 seconds.
ProviderNotReadyError is an OpenFeature SDK exception thrown when the provider times out during initialization. Catching it allows the application to start with default flag values if Remote Configuration is unavailable. If not caught, the exception propagates and may prevent application startup. Handle this based on your availability requirements.
Asynchronous initialization
For non-blocking initialization, use setProvider() and listen for provider events:
```java
import dev.openfeature.sdk.ProviderEvent;

OpenFeatureAPI api = OpenFeatureAPI.getInstance();
Client client = api.getClient();

// Listen for provider state changes
client.on(ProviderEvent.PROVIDER_READY, (event) -> {
    logger.info("Feature flags ready!");
});

client.on(ProviderEvent.PROVIDER_ERROR, (event) -> {
    logger.error("Provider error: {}", event.getMessage());
});

client.on(ProviderEvent.PROVIDER_STALE, (event) -> {
    logger.warn("Provider configuration is stale");
});

// Set provider asynchronously
api.setProvider(new Provider());
```
Set the evaluation context
The evaluation context defines the subject (user, device, session) for flag evaluation. It determines which flag variations are returned based on targeting rules.
```java
import dev.openfeature.sdk.EvaluationContext;
import dev.openfeature.sdk.MutableContext;

// Create an evaluation context with a targeting key and attributes
EvaluationContext context = new MutableContext("user-123")
    .add("email", "user@example.com")
    .add("tier", "premium");

// Use the context for flag evaluations (see next section)
```
The targetingKey (for example, user-123) is the primary identifier used for consistent flag evaluations and percentage-based rollouts. It’s typically a user ID, session ID, or device ID.
Evaluate flags
Evaluate feature flags using the OpenFeature client. All flag types are supported: Boolean, string, integer, double, and object.
```java
import dev.openfeature.sdk.FlagEvaluationDetails;

// Simple Boolean evaluation
boolean enabled = client.getBooleanValue("checkout.new", false, context);
if (enabled) {
    // New checkout flow
} else {
    // Old checkout flow
}

// Get detailed evaluation result
FlagEvaluationDetails<Boolean> details = client.getBooleanDetails("checkout.new", false, context);
logger.info("Value: {}", details.getValue());
logger.info("Variant: {}", details.getVariant());
logger.info("Reason: {}", details.getReason());
```
```java
// Evaluate string flags (e.g., UI themes, API endpoints)
String theme = client.getStringValue("ui.theme", "light", context);
String apiEndpoint = client.getStringValue("payment.api.endpoint", "https://api.example.com/v1", context);
```
```java
import dev.openfeature.sdk.Value;

// Evaluate object/JSON flags for complex configuration
Value config = client.getObjectValue("ui.config", new Value(), context);

// Access structured data
if (config.isStructure()) {
    Value timeout = config.asStructure().getValue("timeout");
    Value endpoint = config.asStructure().getValue("endpoint");
}
```
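Integer and double flags follow the same pattern through the OpenFeature client. A brief sketch (the flag keys and defaults here are illustrative, not flags defined by Datadog):

```java
// Evaluate numeric flags (flag keys shown are examples)
int maxRetries = client.getIntegerValue("http.max.retries", 3, context);
double sampleRate = client.getDoubleValue("telemetry.sample.rate", 0.1, context);
```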
Error handling
The OpenFeature SDK uses a default value pattern. If evaluation fails for any reason, the default value you provide is returned.
```java
import dev.openfeature.sdk.ErrorCode;

// Check evaluation details for errors
FlagEvaluationDetails<Boolean> details = client.getBooleanDetails("checkout.new", false, context);
if (details.getErrorCode() != null) {
    switch (details.getErrorCode()) {
        case FLAG_NOT_FOUND:
            logger.warn("Flag does not exist: {}", "checkout.new");
            break;
        case PROVIDER_NOT_READY:
            logger.warn("Provider not initialized yet");
            break;
        case TARGETING_KEY_MISSING:
            logger.warn("Evaluation context missing targeting key");
            break;
        case TYPE_MISMATCH:
            logger.error("Flag value type doesn't match requested type");
            break;
        default:
            logger.error("Evaluation error for flag {}: {}", "checkout.new", details.getErrorCode());
    }
}
```
Common error codes
| Error Code | Description | Resolution |
|---|---|---|
| `PROVIDER_NOT_READY` | Initial configuration not received | Wait for provider initialization or use `setProviderAndWait()` |
| `FLAG_NOT_FOUND` | Flag doesn't exist in configuration | Check the flag key or create the flag in the Datadog UI |
| `TARGETING_KEY_MISSING` | No targeting key in evaluation context | Provide a targeting key when creating the context |
| `TYPE_MISMATCH` | Flag value can't be converted to the requested type | Use the correct evaluation method for the flag type |
| `INVALID_CONTEXT` | Evaluation context is null | Provide a valid evaluation context |
Advanced configuration
Custom initialization timeout
Configure how long the provider waits for the initial configuration.
PROVIDER_CONFIGURATION_CHANGED is an optional OpenFeature event. Check the Datadog provider documentation to verify this event is supported in your version.
Multiple clients
Use named clients to organize context and flags by domain or team:
```java
// Named clients share the same provider instance but can have different contexts
Client checkoutClient = api.getClient("checkout");
Client analyticsClient = api.getClient("analytics");

// Each client can have its own evaluation context
EvaluationContext checkoutContext = new MutableContext("session-abc");
EvaluationContext analyticsContext = new MutableContext("user-123");

boolean newCheckout = checkoutClient.getBooleanValue("checkout.ui.new", false, checkoutContext);
boolean enhancedAnalytics = analyticsClient.getBooleanValue("analytics.enhanced", false, analyticsContext);
```
The Provider instance is shared globally. Client names are for organizational purposes only and don’t create separate provider instances. All clients use the same underlying Datadog provider and flag configurations.
Best practices
Initialize early
Initialize the OpenFeature provider as early as possible in your application lifecycle (for example, in main() or application startup). This ensures flags are ready before business logic executes.
Use meaningful default values
Always provide sensible default values that maintain safe behavior if flag evaluation fails:
```java
// Good: Safe default that maintains current behavior
boolean useNewAlgorithm = client.getBooleanValue("algorithm.new", false, context);

// Good: Conservative default for limits
int rateLimit = client.getIntegerValue("rate.limit", 100, context);
```
Create context once
Create the evaluation context once per request/user/session and reuse it for all flag evaluations:
```java
// In a web filter or request handler
EvaluationContext userContext = new MutableContext(userId)
    .add("email", user.getEmail())
    .add("tier", user.getTier());

// Reuse the context for all flags in this request
boolean featureA = client.getBooleanValue("feature.a", false, userContext);
boolean featureB = client.getBooleanValue("feature.b", false, userContext);
```
Rebuilding the evaluation context for every flag evaluation adds unnecessary overhead. Create the context once at the start of the request lifecycle, then pass it to all subsequent flag evaluations.
Handle initialization failures (optional)
Consider handling initialization failures if your application can function with default flag values:
```java
try {
    api.setProviderAndWait(new Provider());
} catch (ProviderNotReadyError e) {
    // Log the error and continue with defaults
    logger.warn("Feature flags not ready, using defaults", e);
    // The application will use default values for all flags
}
```
If feature flags are critical for your application to function, let the exception propagate to prevent startup.
Use consistent targeting keys
Use consistent, stable identifiers as targeting keys:
Good: User IDs, session IDs, device IDs
Avoid: Timestamps, random values, frequently changing IDs
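To see why stability matters for percentage-based rollouts, consider a simplified bucketing sketch. The hashing below is illustrative only, not the provider's actual algorithm; the stability principle is the same:

```java
import java.util.UUID;

public class TargetingKeys {
    // Illustrative bucketing: map a targeting key to one of 100 rollout buckets
    static int bucket(String targetingKey) {
        return Math.floorMod(targetingKey.hashCode(), 100);
    }

    public static void main(String[] args) {
        // A stable key puts the same user in the same bucket on every evaluation
        System.out.println(bucket("user-123") == bucket("user-123")); // prints "true"

        // A random key can land in a different bucket each time,
        // so the same user may flip between variants
        System.out.println(bucket(UUID.randomUUID().toString()));
    }
}
```

With a stable key such as a user ID, a 10% rollout consistently includes or excludes the same users; with timestamps or random values, membership changes on every evaluation.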
Monitor flag evaluation
Use the detailed evaluation results for logging and debugging:
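For example, a minimal logging sketch (the flag key and log format are illustrative):

```java
// Log the full evaluation outcome for a flag, including variant and any error
FlagEvaluationDetails<Boolean> details = client.getBooleanDetails("checkout.new", false, context);
logger.info("flag=checkout.new value={} variant={} reason={} error={}",
        details.getValue(), details.getVariant(), details.getReason(), details.getErrorCode());
```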
Troubleshooting
Before investigating specific errors, confirm these prerequisites are in place:
The Datadog Agent is healthy and reachable: See APM Connection Errors to verify Agent connectivity.
The experimental flagging provider is enabled on the tracer: Set DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true.
Required tracer environment variables are set: DD_API_KEY, DD_ENV, and DD_SITE.
Your DD_ENV value appears in the Feature Flag environments list: Confirm your environment is visible in the Feature Flag Environments settings.
After confirming all prerequisites, continue with the following sections if feature flags still aren’t working.
Debug flag evaluations
If flags evaluate but return unexpected values, use getBooleanDetails() instead of getBooleanValue(). The Details variant of each evaluation method returns a FlagEvaluationDetails object that exposes the provider’s internal state, including the reason, variant, and any error code.
Review the logged output to understand why the provider returned a particular result.
Monitor provider state changes
Add event listeners early in your application startup to observe provider life cycle transitions:
```java
import dev.openfeature.sdk.ProviderEvent;

client.on(ProviderEvent.PROVIDER_READY, (event) -> {
    logger.info("Feature flag provider is ready");
});

client.on(ProviderEvent.PROVIDER_ERROR, (event) -> {
    logger.error("Feature flag provider error: {}", event.getMessage());
});

client.on(ProviderEvent.PROVIDER_STALE, (event) -> {
    logger.warn("Feature flag provider configuration is stale");
});

client.on(ProviderEvent.PROVIDER_CONFIGURATION_CHANGED, (event) -> {
    logger.info("Feature flag configuration updated");
});
```
A PROVIDER_STALE or PROVIDER_ERROR event after a period of normal operation indicates a loss of connectivity to the Agent or a Remote Configuration disruption.
Provider not ready
Problem: PROVIDER_NOT_READY errors when evaluating flags
PROVIDER_NOT_READY is returned when flag evaluation is attempted before the provider has received its first configuration from Remote Configuration. This state persists until the tracer receives its initial flag configuration payload from the Agent.
Common causes:
Async initialization: setProvider() was used instead of setProviderAndWait(). Evaluations that happen before the first Remote Configuration payload arrives return PROVIDER_NOT_READY.
Initialization timeout: setProviderAndWait() timed out (default 30 seconds) and threw ProviderNotReadyError, which was caught. The application continues evaluating flags while still waiting for the first configuration.
Solutions:
Enable debug logging to see the feature flagging system startup sequence. These messages are emitted at DEBUG level—set DD_TRACE_DEBUG=true to see them:
```
[dd.trace] Feature Flagging system starting
[dd.trace] Feature Flagging system started
```
Wait for Remote Configuration sync (can take 30-60 seconds after publishing flags)
Verify flags are published in Datadog UI to the correct service and environment
If none of these apply, verify the Datadog Agent is healthy and reachable. See APM Connection Errors.
EVP proxy not available error
Problem: Logs show Cannot create backend API client since agentless mode is disabled, and agent does not support EVP proxy.
Exposures not appearing
Problem: Experiment exposures aren't appearing in Datadog
Solution: Verify the flag is associated with an experiment in the Datadog UI. Exposures are only recorded for flags that are part of an experiment—standard feature flags without an experiment association do not generate exposure events.
Further reading
Additional helpful documentation, links, and articles: