
I have a multi-step, branching pipeline using TPL Dataflow that I would like to get working with Unified Logging (through Azure Application Insights).

My issue is that the APIs at my disposal (no pun intended) all seem to be based on the use of IDisposable objects to represent scopes.

For instance, the System.Diagnostics approach looks like this:

void Foo(){
  // An Activity created directly must be started explicitly
  using var a = new Activity("I'm doing some work!").Start();
  DoTheWork();
}

void Bar(){
  // StartActivity is an instance method on an ActivitySource
  using var a = activitySource.StartActivity(name: "I'm doing some work!", kind: ActivityKind.Internal, tags: []);
  DoTheWork();
}

And the App Insights SDK uses:

void Baz(){
  using var op = telemetryClient.StartOperation<RequestTelemetry>("I'm doing some work!");
  DoTheWork();
}

All the work that involves tracing, metrics, or emitting log messages happens in DoTheWork, which in the examples above runs while the IDisposable object a is still in scope and hasn't been disposed yet.

But when you use TPL Dataflow, the rest of the work is handled after you return from the function. That means that if you want the Activity to still be in scope, you have to ignore the fact that it is IDisposable and somehow keep it around until all the work the pipeline might do for that specific activity is done.
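
To make the problem concrete, here is a minimal sketch of what "keeping it around" might look like (the Envelope<T> record and the "MyPipeline" source name are my own placeholders, and it assumes the System.Threading.Tasks.Dataflow package): the Activity rides along with the message instead of living in a using scope, and the terminal block disposes it.

using System;
using System.Diagnostics;
using System.Text;
using System.Threading.Tasks.Dataflow;

var source = new ActivitySource("MyPipeline"); // placeholder source name

// Head block: start the Activity but deliberately do NOT dispose it here.
var decode = new TransformBlock<byte[], Envelope<string>>(bytes =>
{
    var activity = source.StartActivity("ProcessMessage");
    return new Envelope<string>(Encoding.UTF8.GetString(bytes), activity);
});

// Terminal block: restore the ambient context so logging correlates, then end the Activity.
var store = new ActionBlock<Envelope<string>>(env =>
{
    Activity.Current = env.Activity;
    Console.WriteLine($"storing {env.Payload}");
    env.Activity?.Dispose();
});

decode.LinkTo(store, new DataflowLinkOptions { PropagateCompletion = true });
decode.Post(Encoding.UTF8.GetBytes("hello"));
decode.Complete();
await store.Completion;

// Hypothetical envelope: the payload plus the Activity that owns its trace.
record Envelope<T>(T Payload, Activity? Activity);

Setting Activity.Current inside each block is my attempt to keep ambient logging and telemetry correlated with the right trace; I'm not sure it is the idiomatic way to do this, which is the heart of the question below.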

Are there any accepted design patterns or idioms that are used with platforms like TPL Dataflow to allow it to work nicely with App Insights, supporting logging and nesting of activities etc?

Update:

I should add that my pipeline looks like this:

Decode -> Split -> ProcessRequest -> EncodeResults -> [Store, AuditLog, TrackGaps]

So the Activity I want to kick off is first known about in the Decode method, and should be in effect for all the methods that run thereafter. But there will be numerous requests being processed simultaneously, so the approach should honour the model imposed by async/await and cope with context switching.
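
The branch at the end is what makes the Activity's lifetime awkward: none of Store, AuditLog or TrackGaps can dispose it on its own. One sketch I can imagine (the Tracked<T> wrapper and the empty block bodies are hypothetical, not code I have) is to reference-count the consumers so that the last branch to finish ends the Activity:

using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks.Dataflow;

var link = new DataflowLinkOptions { PropagateCompletion = true };

// EncodeResults would emit one Tracked<string> per message, e.g.
// new Tracked<string>(encoded, activity, consumers: 3).
var broadcast = new BroadcastBlock<Tracked<string>>(t => t);   // hand the same instance to every branch
var store     = new ActionBlock<Tracked<string>>(t => { /* Store(t.Payload)     */ t.Release(); });
var auditLog  = new ActionBlock<Tracked<string>>(t => { /* AuditLog(t.Payload)  */ t.Release(); });
var trackGaps = new ActionBlock<Tracked<string>>(t => { /* TrackGaps(t.Payload) */ t.Release(); });

broadcast.LinkTo(store, link);
broadcast.LinkTo(auditLog, link);
broadcast.LinkTo(trackGaps, link);

// Hypothetical wrapper: the last consumer to call Release() ends the Activity.
sealed class Tracked<T>(T payload, Activity? activity, int consumers)
{
    private int _remaining = consumers;
    public T Payload { get; } = payload;
    public Activity? Activity { get; } = activity;

    public void Release()
    {
        if (Interlocked.Decrement(ref _remaining) == 0)
            Activity?.Dispose();
    }
}

Note this assumes unbounded sinks; with bounded capacity a BroadcastBlock only guarantees delivery of the latest message to slow consumers, so the fan-out would need more care.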

  • Unified Logging was a quasi-marketing term (if not just a book title) for something supplanted by OpenTelemetry in its true role. Nothing is explained by talking about Unified Logging and AppInsights, an unrelated monitoring tool whose marketing material mentioned Unified Logging while it was fashionable, before switching to Observability. Commented Jun 18, 2024 at 7:05
  • Is the real question how to add observability to your Dataflow pipeline? Use the ActivitySource and Metrics classes, on which .NET's OpenTelemetry support is based. Start the Activity in the head block and pass that, or an ActivityContext, along with your messages down the pipeline, adding events or tags on each step. You can have one activity per block that's linked to the previous one through the ActivityContext, or you can pass the activity down the pipeline, adding an Event for each step (see the sketch after these comments). Commented Jun 18, 2024 at 7:07
  • After that, you only have to configure OTEL in your Program.cs and add exporters to AppInsights, Jaeger, .NET Aspire's Dashboard or anything else you want. The traces sent to the monitoring tools contain enough information (parent activity, events, etc.) so that any tool can reconstruct the trace. Commented Jun 18, 2024 at 7:13
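
A minimal sketch of the approach suggested in these comments, as two fragments rather than a complete program (the "MyPipeline" source name, the message field carrying the ActivityContext, and the Azure.Monitor.OpenTelemetry.Exporter wiring are assumptions, not taken from the question):

// In the head block (Decode): start the root activity and put its context on the message.
var root = source.StartActivity("Decode");             // source = new ActivitySource("MyPipeline")
var parentContext = root?.Context ?? default;          // carried in the message, not in a using scope

// In a later block (e.g. EncodeResults): a child activity parented to that context.
using var step = source.StartActivity("EncodeResults", ActivityKind.Internal, parentContext);
step?.SetTag("pipeline.block", "EncodeResults");       // add tags/events per step, as suggested above

// In Program.cs (host builder): register the source and export traces to Application Insights,
// assuming the OpenTelemetry.Extensions.Hosting and Azure.Monitor.OpenTelemetry.Exporter packages.
builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .AddSource("MyPipeline")
        .AddAzureMonitorTraceExporter(o =>
            o.ConnectionString = builder.Configuration["ApplicationInsights:ConnectionString"]));

Each block then produces its own span, parented to the Decode span through the ActivityContext that travels with the message, and the exporter forwards the whole trace to Application Insights.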
