Understanding runtime control with feature flags
Runtime control is the ability to influence an application’s behavior without restarting the application or rewriting its code.
These behaviors can be as simple as the color of a button or a dialog window’s default attributes. They can also be as impactful as the designated endpoint for resource acquisition, or the enablement of a feature.
The key takeaway is that you can control the software in any way that proves necessary while it is running, whether in your debugger, in a production environment, or on customer hardware.
TL;DR
- Runtime control alters behavior without application restarts.
- It enables dynamic updates across all environments while running.
- Feature flags provide simple, real‑time behavior switching.
- Multi‑variant flags enable complex, flexible runtime configurations.
Runtime? What time?
Runtime is the time when a program is actively being executed by the host software. The code in the browser you’re likely reading this in is actively in ‘runtime’. When you open an app on your phone, the program displayed is in ‘runtime’.
Other “times” include design time, development time, deployment time, and more. But even these “times” benefit from runtime control. More on that later.
Runtime control vs. compile-time configuration
Runtime control lets you adjust an application’s behavior while it’s running; no rebuilds, no redeploys, no restarts required. This is typically done through dynamic mechanisms like feature flags, which your app evaluates at execution time. That means teams can roll features out gradually, target specific users, and instantly disable anything misbehaving in production without touching the deployment pipeline.
Compile‑time configuration, on the other hand, locks decisions into the build itself. Any adjustment requires updating code or config, rebuilding the artifact, and rolling out a new version. This slows iteration and makes recovery heavier compared to runtime control.
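To make the contrast concrete, here is a minimal sketch. The `flagStore` map and `isEnabled` helper are stand-ins for a real flag service, not an actual SDK: the compile-time constant is fixed once the artifact is built, while the runtime value is re-read on every evaluation.

```javascript
// Compile-time configuration: baked into the artifact.
// Changing it means rebuilding and redeploying.
const ENABLE_NEW_CHECKOUT = false;

// Runtime control: a (hypothetical) flag store consulted at
// execution time. Flipping the value changes behavior on the
// next evaluation, with no restart.
const flagStore = new Map([['new-checkout', false]]);

function isEnabled(name) {
  return flagStore.get(name) === true;
}

function checkout() {
  return isEnabled('new-checkout') ? 'new flow' : 'classic flow';
}

console.log(checkout()); // classic flow
flagStore.set('new-checkout', true); // e.g. toggled from a dashboard
console.log(checkout()); // new flow
```

In a real system the store would be updated by the flag platform rather than by the application itself; the point is that the decision lives outside the compiled artifact.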
The relationship between runtime control and feature flags
Wait. Are we still talking about feature flags? Yes.
Feature flags are still very much a part of the equation. They, and the control they bring, can be thought of as the most rudimentary way to engage in runtime control.
Feature flagging (or feature management) is a subset of the capabilities provided by solutions such as Unleash. After all, if we’re toggling a flag, we are still ultimately controlling the software at runtime using the same mechanisms.
A feature flag, as commonly presented, is a simple “yes/no” answer to a question “asked” from the code:
if (unleashSDK.isEnabled('cool-new-broken-feature')) {
  runCoolBrokenFeature()
} else {
  runBoringFunctionalFeature()
}
In the pseudocode above, we see the most basic example of a feature flag. When the program reaches this instruction, it defers to Unleash (via the SDK) to determine the value. Start the app running this code and you can control which feature is called using the Unleash dashboard.
There’s no need to restart the app, push out a patch, change a parameter in some config file, or anything else to determine how the app runs.
It may seem trivial to just restart an app with new features, but consider what happens when you have tens, hundreds, or thousands of instances executing in the wild.
This is the heart of runtime control.
Moving beyond boolean toggles with multivariate flags
Flagging software can support more than just “on/off” values. We can serve words, numbers, and creative combinations of types.
When bound to the “on/off” dichotomy, options are limited. Once the power of multi-variant types is understood, however, the entire software development lifecycle gains a new level of agility.
Take the example above with the “new” feature and the “old” feature. What if we wanted a stable and an experimental version to exist in the wild for different users? Would we make two “on/off” flags? No need.
A single flag that serves a string variant can be used, and the code can look like this:
var myFeatureValue = unleashSDK.getVariation('my-feature')

if (myFeatureValue == 'classic') {
  runClassicFeature()
} else if (myFeatureValue == 'new') {
  runNewFeature()
} else if (myFeatureValue == 'experimental') {
  runExperimentalFeature()
}
We’re now able to control (technically) any number of different implementations. Here’s how this can be valuable:
- Less clutter in the Unleash UI (one flag to rule them all)
- For developers, the ability to test multiple implementations without a rebuild
- Multiple types of solutions can be provided to targeted groups as needed
The above example, however, is only meant to illustrate how quickly the capabilities of a flagging system grow once we step beyond “on/off” to solve the same problem.
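The chained if/else shown earlier can also be written as a lookup table, which scales more gracefully as variants are added. A sketch, with hypothetical feature functions standing in for real implementations:

```javascript
// Hypothetical implementations keyed by variant name.
const implementations = {
  classic: () => 'classic result',
  new: () => 'new result',
  experimental: () => 'experimental result',
};

// Dispatch on whatever variant the flag system returns, falling
// back to the classic implementation for unknown values.
function runFeature(variant) {
  const impl = implementations[variant] || implementations.classic;
  return impl();
}

console.log(runFeature('experimental')); // experimental result
console.log(runFeature('unknown-variant')); // classic result
```

The fallback branch matters in practice: a flag service can serve a variant name the deployed code has never heard of, and the safe default keeps the application functional.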
Runtime control in practice: Log-level control
A simple example of runtime control that is only possible with multivariate flags (non-“on/off” flags) is log-level control.
Logging is at the heart of troubleshooting software issues, especially server software which often runs without a graphic display.
However, there is the matter of how much to log. The cost of verbose logging (especially when we’re talking about microservices which could scale to the thousands) is significant in a number of ways:
- too much noise making it hard to find relevant information,
- storage/retention costs,
- bandwidth costs,
- and so on.
However, the cost of not logging enough could make the whole point of having logs irrelevant as you will not have enough information when something goes awry.
If we tied this functionality to a flag, not only could we turn the dial as we see fit while the software is running, but we could also do so granularly (and automatically, via REST calls, integrations, etc.). The example below shows the single line that would handle this “magic”:
setLogLevel( unleashSDK.getVariation('log-level') )
This can be a huge boon to DevOps engineers and developers alike. And this is only the tip of the iceberg.
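A sketch of what such a `setLogLevel` might do, with a plain map of level names standing in for a real logging library and the flag's variant name passed in as a string:

```javascript
// Map variant names served by the flag system to numeric levels.
const LOG_LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

let currentLevel = LOG_LEVELS.warn;

// Called with the flag's current variant, e.g. on a polling
// interval or a change event; unknown values leave the level alone.
function setLogLevel(name) {
  if (name in LOG_LEVELS) currentLevel = LOG_LEVELS[name];
}

// Emit only messages at or below the current verbosity.
function log(level, message) {
  if (LOG_LEVELS[level] <= currentLevel) {
    console.log(`[${level}] ${message}`);
  }
}

setLogLevel('debug'); // flipped from the dashboard during an incident
log('debug', 'verbose diagnostics now visible');
```

Ignoring unknown variant names is a deliberate choice here: a typo in the dashboard should not silence logging entirely.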
Consider other types of values that Unleash can be used to control:
- text/url combinations (via JSON)
- RGB values (string/JSON)
- min/max values (array/JSON)
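As a sketch of how such values might be consumed, here is a hypothetical set of JSON payloads (the flag names and contents are invented for illustration) parsed with a safe fallback:

```javascript
// Hypothetical JSON payloads a multi-variant flag might serve.
const payloads = {
  'banner-content': '{"text": "Sale ends soon", "url": "https://example.com/sale"}',
  'brand-color': '{"r": 0, "g": 128, "b": 255}',
  'retry-bounds': '{"min": 1, "max": 5}',
};

// Parse a flag's JSON payload, returning a default when the flag
// is missing or the payload is malformed.
function getJsonFlag(name, fallback) {
  try {
    return JSON.parse(payloads[name]);
  } catch (e) {
    return fallback;
  }
}

const banner = getJsonFlag('banner-content', { text: '', url: '' });
console.log(banner.text); // Sale ends soon
```

Defensive parsing is the key design point: a structured payload is edited by humans in a dashboard, so the code should survive a malformed value rather than crash at runtime.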
Conclusion: Why does runtime control matter?
“Why” is the ultimate and most important question. If “why” is not satisfied then “how” and all that follows is irrelevant.
In this case, the answer is simple: Greater control over a wider variety of things. Feature flagging is a concept of controlling features.
Over the years in this space, I’ve had the pleasure of seeing fellow developers (and even your humble narrator) stretch the concept to do so much more than simply be concerned with features.
The examples above were not hypothetical; they were actually employed in the wild by a variety of different cohorts, both personal and professional.
Feature flags are great for product teams, for enablement processes, for bringing a greater agility to the release process. We will never live in a world where feature flags are no longer necessary.
However, running software involves more than simply putting new features into the wild, sunsetting, or rollbacks.
By expanding our understanding of this concept into runtime control, we can discover a world where the systems we build are instrumented for a level of control once only dreamed of, with the same speed and precision we’ve learned to expect from tools such as Unleash.
FAQs about runtime control
What is the difference between feature flags and runtime control?
Feature flags are a type of runtime control but not all runtime control is a feature flag. Traditional feature flags work as simple on/off switches that enable or disable specific features. Runtime control is the broader capability: it lets engineering teams dynamically adjust application behavior without restarting software or deploying new code. Modern feature flag platforms have expanded beyond binary toggles to support complex data types like text strings, numbers, and JSON payloads, effectively blending the two concepts. The key distinction is scope: feature flags target specific features, while runtime control can govern any aspect of application behavior across the entire system.
How does runtime control improve application logging?
It allows DevOps teams to dynamically adjust log levels across microservices without ever requiring an application restart. Instead of being locked into a highly verbose logging state that consumes excessive storage and bandwidth, engineers can toggle between basic monitoring and deep diagnostic levels in real time. This ensures that the right amount of information is captured only when it is actually needed to efficiently troubleshoot production issues.
At what stages of the software lifecycle is runtime control most valuable?
Runtime control is most valuable when software is actively running, whether it’s in a local debugger, staging environment, or production. By allowing teams to adjust behavior on the fly, runtime control reduces the need for emergency patches and lowers operational risk across all of these environments.
Can runtime control manage more than just user-facing features?
Yes, runtime control extends far beyond visual user interface toggles to govern critical backend operations and infrastructure parameters. Teams can use it to dynamically alter resource acquisition endpoints, adjust API rate limits, or execute safe infrastructure migrations by seamlessly routing traffic to new microservices on the fly. This level of precise instrumentation provides developers with granular control over systemic application performance.
