Exploring the Convergence of Observability and Security - Part 2: Logs, Metrics and Traces

Pete Goldin
APMdigest

With input from industry experts — both analysts and vendors — this 8-part blog series will explore what is driving the convergence of observability and security, the challenges and advantages, and how it may transform the IT landscape.

Start with: Exploring the Convergence of Observability and Security - Part 1

One reason why observability and security make a good pairing is that traditional telemetry signals — metrics, logs, and traces — are helpful to maintain both performance and security.

"The convergence of security and observability is happening throughout the observability landscape, and telemetry pipelines are enabling organizations to make that happen," explains Buddy Brewer, Chief Product Officer at Mezmo. "Security engineers, developers, and SREs use telemetry pipelines to access telemetry data effectively and efficiently. Many are also adopting standards like OpenTelemetry to ease their data ingestion woes and allow teams across the organization to use standardized data and break down silos."

Brewer cites a recent ESG report showing that metrics, logs, and traces account for 86% of application data by volume. He maintains that this data is essential for SecOps teams to understand what parts of an application are working properly, identify errors, and determine how to address those errors. The same report shows that 69% of SecOps teams regularly or continuously access data from these three sources.

"Traditional application performance signals help SecOps by serving as a proof point that you are watching for outlier issues; for example, you are able to see and flag when something doesn't look right in your system," says Jam Leomi, Lead Security Engineer at Honeycomb. "This outlier data is surfaced in real-time using observability tools and can serve as an early indicator that something malicious is going on."

"There are emerging use cases for issues such as Kubernetes security or CSPM, where there does seem to be a big advantage to adding security capabilities to the traditional three pillars of logs, metrics and traces for observability," says Asaf Yigal, CTO of Logz.io. "Whether you have ops-type teams that can act on that data themselves or use it as a better informed stream of data to channel to their dedicated security teams, the reality is that cloud apps and infrastructure are so complex and fast moving, security has to be part of the picture for everyone involved."

Leomi of Honeycomb adds that the convergence of tools can help distinguish between performance and security issues, saying, "While a lot of the data surfaced in observability tools can look like an average system bottleneck or performance issue, applying the security lens to it could bring to light potential indicators of a security event."

Colin Fallwell, Field CTO of Sumo Logic, agrees: "Many security incidents impact operations. For example, one can expect serious performance degradation to occur in a DDoS attack. Telemetry like tracing and logging data is naturally going to carry header information from web requests, IP information, and much, much more. Metrics are the canaries in the coal mine and serve as an early warning that something is wrong or trending out of the norm. All this data is valuable to security use cases as well. Deep application visibility, and deviations from the norm on authentication, access, processing, and DB access are table stakes for operations and highly valuable to SecOps. Consider how valuable this data is to security teams when trying to understand the impact and blast radius of security events."

Performance signals provide technologists with a detailed look into the health of their applications — if there are any bottlenecks, the signals can help locate where they're occurring and why, adds Joe Byrne, VP of Technology Strategy and CTO Adviser at Cisco AppDynamics. "For SecOps teams, detecting potential security threats before an attack is crucial, so having real-time insight into applications' performance would benefit them. SecOps teams can leverage observability tools to determine if any performance delays are due to vulnerabilities or security threats, allowing them to take immediate action to achieve resolution."

Let's look at each type of performance signal individually.

Logs

Log analytics tools have been serving cybersecurity teams for years, says Shamus McGillicuddy, VP of Research, Network Infrastructure and Operations, at Enterprise Management Associates (EMA). "Logs are a record of what happened on a device or piece of software. Real-time analysis will point to ongoing security incidents, and forensic analysis will help security teams reconstruct an incident."

Listen to EMA-APMdigest Podcast Episode 2, in which Shamus McGillicuddy talks about Network Observability, the convergence of observability and security, and more.

Logs are the time-stamped records of events, notes Roger Floren, Principal Product Manager at Red Hat. They provide a detailed record of application behavior and can be used for troubleshooting issues, identifying performance bottlenecks, and detecting security threats.
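As a minimal illustration of the kind of real-time log analysis McGillicuddy and Floren describe, the sketch below scans time-stamped log lines for repeated failed logins from the same address. The log format, field names, and threshold here are illustrative assumptions, not taken from any particular tool:

```python
from collections import Counter

def failed_login_ips(log_lines, threshold=5):
    """Return the set of IPs with at least `threshold` failed login attempts.

    Assumes hypothetical lines of the form:
    "2024-01-15T10:03:22Z FAILED_LOGIN ip=203.0.113.7 user=admin"
    """
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[1] == "FAILED_LOGIN":
            counts[parts[2].removeprefix("ip=")] += 1
    return {ip for ip, n in counts.items() if n >= threshold}
```

In practice this role is played by a SIEM or log analytics pipeline rather than a hand-rolled script, but the principle is the same: a simple aggregation over event records surfaces a security-relevant pattern.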

"It's all about the logs to some extent — it always has been and always will be," says Yigal from Logz.io. "Consider that the SIEM — the virtual nervous system of the modern security ecosystem, for decades now — is a centralized repository for security data, and its primary job has always been to consume and provide analysis on top of mountains of log data. And this is telemetry running the full gamut from ITOps logs to security data coming in from other purpose-built security tooling. So, there's that: you have to maintain visibility and analysis into your log data, and it's a foundational element of security practices."

Ajit Sancheti, GM of Falcon LogScale at CrowdStrike, outlines the history: "DevOps, ITOps and SecOps teams need to be able to access different types of data for a variety of use cases, such as investigating threats, debugging network issues, maximizing application performance and much more. In the past, this meant that these individual teams would deploy siloed monitoring, SIEM and log management tools. Additionally, many of the log management tools on the market lacked the scale to centrally collect and store all logs and allow large numbers of users to simultaneously access and query this data."

"Today, organizations are finally able to log security and observability data in one place," Sancheti continues. "This is due to innovations like index-free logging architectures, which enable organizations to ingest a petabyte of data per day (or more)."

Chaim Mazal, Chief Security Officer at Gigamon, says the challenge is that logging tools see things in hindsight; they do not detect threats in real time. It's only when log data and network-derived intelligence are integrated that SecOps teams can detect threats or performance issues in real time, before they harm or slow the business down.

"Once integrated and SecOps teams gain the deep observability required, they can shift toward a proactive security posture and ensure cloud security across their infrastructure whether it's located on-premises, in private clouds, in containers, or in the public cloud," Mazal adds.

Metrics

Performance metrics can also be used to identify security events in some cases.

"Deep performance signals such as identifying a workload's performance through metrics including CPU usage, system calls, memory usage, etc. allows security customers to determine aberrations from normal behavior," says Prashant Prahlad, VP of Cloud Security Products at Datadog.

For example, metrics can help to identify a possible denial-of-service attack if an unexpected and dramatic spike in usage is seen, according to Kirsten Newcomer, Director, Cloud and DevSecOps Strategy at Red Hat.
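The spike detection Newcomer describes can be sketched as a simple rolling-baseline check: flag a metric sample that exceeds the recent mean by several standard deviations. The window and threshold below are illustrative assumptions; production systems use more robust baselining (seasonality, percentiles, learned models):

```python
import statistics

def is_spike(history, current, k=3.0):
    """Flag `current` as anomalous if it exceeds the mean of recent
    samples in `history` by more than k standard deviations."""
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    # Guard against a zero stdev from a perfectly flat baseline
    return current > mean + k * max(stdev, 1e-9)
```

The same check applies whether the metric is requests per second (a sudden flood may indicate a DoS attack), CPU usage, or system-call rate — the aberrations from normal behavior that Prahlad describes.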

Yigal from Logz.io adds, "We see massive value in helping organizations quickly translate their huge volumes of logs into more immediately useful metrics from the traditional IT ops side, saving both time and money. But there's also the notion of introducing more security content, creating and tracking more security-relevant trends, so we do see some organizations moving in this direction."

Traces

Some experts say the key observability signal that makes a difference for security is traces. Newcomer from Red Hat says traces provide data about how information is flowing through a system and can be used to visualize unexpected errors and events.

"Security staff have always been dealing with logs. Metrics are also helpful. Traces are a new kind of information that observability brings into the picture," explains Mike Loukides, VP of Emerging Tech Content at O'Reilly Media. "They let you ask detailed questions about what's happening in the application — the sorts of questions that could help you to spot a compromise early on."

"To take an overly simple example: any system that's online will see failed login attempts all the time. These will be in the logs, and they don't tell you much," he continues. "When a failed login attempt is followed by a successful login from the same IP address, that might tell you something — or it might be that an authorized user mistyped his password. That's about as far as logging will take you. But when that now-authorized user starts interacting with parts of the system that they shouldn't have access to, you know you have a real problem. You can ask questions like: How did they get in? When did they get in? And what did they do while they were in our system? And that's the kind of information that you're going to get from traces."
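Loukides' scenario can be sketched as a correlation over a time-ordered event stream, of the sort that could be derived from trace spans. The event shapes, resource names, and rules below are hypothetical illustrations, not any vendor's detection logic:

```python
def flag_suspicious(events, restricted=frozenset({"/admin", "/billing/export"})):
    """Events are (ip, kind, detail) tuples in time order.

    Flag IPs whose failed login is later followed by a successful login
    and then by access to a restricted resource — the pattern Loukides
    describes as the point where logs alone stop being enough.
    """
    failed, logged_in, flagged = set(), set(), []
    for ip, kind, detail in events:
        if kind == "login_failed":
            failed.add(ip)
        elif kind == "login_ok" and ip in failed:
            logged_in.add(ip)  # success after failure: worth watching
        elif kind == "access" and ip in logged_in and detail in restricted:
            flagged.append((ip, detail))  # now-authorized user going where they shouldn't
    return flagged
```

The point of the sketch is that traces make this correlation cheap: the request path, caller identity, and downstream resources already travel together in one record, so "how, when, and what did they do" becomes a query rather than a reconstruction.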

Prahlad from Datadog concludes, "The applications get instrumented with libraries for tracing and the exact same traces are used to detect attacks. In many cases SecOps detect these aberrations from the performance data and identify security issues much more quickly — all without additional instrumentation and performance overheads."

Go to: Exploring the Convergence of Observability and Security - Part 3: Tools

Pete Goldin is Editor and Publisher of APMdigest
