Show HN: A VS Code extension to visualise Rust logs in the context of your code
We made a VS Code extension [1] that lets you visualise logs and traces in the context of your code. It basically lets you recreate a debugger-like experience (with a call stack) from logs alone.
This saves you from browsing logs and trying to make sense of them outside the context of your code base.
We got this idea from endlessly browsing traces emitted by the tracing crate [3] in the Google Cloud Logging UI. We really wanted to see the logs in the context of the code that emitted them, rather than switching back-and-forth between logs and source code to make sense of what happened.
It's a prototype [2], but if you're interested, we’d love some feedback.
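To illustrate the idea, here's a minimal std-only sketch (the types and names are made up for illustration, not our extension's API or the tracing crate's) of log lines that carry their span path. It's that span context in each line that lets a tool rebuild a call-stack view from logs alone:

```rust
// Minimal sketch: a logger whose output embeds the current span path,
// similar in spirit to the `span1:span2: message` lines the tracing
// crate's fmt subscriber emits. All names here are hypothetical.
struct SpanLogger {
    spans: Vec<String>, // stack of currently-entered spans
}

impl SpanLogger {
    fn new() -> Self {
        SpanLogger { spans: Vec::new() }
    }

    fn enter(&mut self, span: &str) {
        self.spans.push(span.to_string());
    }

    fn exit(&mut self) {
        self.spans.pop();
    }

    // Render a log line prefixed with the span path; a viewer can map
    // this back to the code path handle_request -> load_profile.
    fn info(&self, msg: &str) -> String {
        format!("INFO {}: {}", self.spans.join(":"), msg)
    }
}

fn main() {
    let mut log = SpanLogger::new();
    log.enter("handle_request");
    log.enter("load_profile");
    println!("{}", log.info("loading profile from db"));
    log.exit();
    log.exit();
}
```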
---
References:
[1]: VS Code: marketplace.visualstudio.com/items?itemName=hyperdrive-eng.traceback
[2]: Github: github.com/hyperdrive-eng/traceback
[3]: Crate: docs.rs/tracing/latest/tracing
Good idea!
This probably saves resources by eliminating the need to re-run code just to walk through error messages again.
Integration with time-travel debugging would be even more useful: https://news.ycombinator.com/item?id=30779019
From https://news.ycombinator.com/item?id=31688180 :
> [ eBPF; Pixie, Sysdig, Falco, kubectl-capture, stratoshark, ]
> Jaeger (Uber contributed to CNCF) supports OpenTracing, OpenTelemetry, and exporting stats for Prometheus.
From https://news.ycombinator.com/item?id=39421710 re: distributed tracing:
> W3C Trace Context v1: https://www.w3.org/TR/trace-context-1/#overview
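For reference, the traceparent header that spec defines is just four hyphen-separated hex fields (version, trace-id, parent-id, flags). A minimal std-only Rust sketch of parsing one (the helper name is made up):

```rust
// Sketch: parse a W3C Trace Context `traceparent` header of the form
// version-traceid-parentid-flags, e.g.
// 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
fn parse_traceparent(header: &str) -> Option<(String, String, String, String)> {
    let parts: Vec<&str> = header.split('-').collect();
    if parts.len() != 4 {
        return None;
    }
    let (version, trace_id, parent_id, flags) = (parts[0], parts[1], parts[2], parts[3]);
    // Field lengths per the spec: 2, 32, 16, and 2 hex characters.
    if version.len() != 2 || trace_id.len() != 32 || parent_id.len() != 16 || flags.len() != 2 {
        return None;
    }
    // All fields must be lowercase-hex-compatible.
    if ![version, trace_id, parent_id, flags]
        .iter()
        .all(|f| f.chars().all(|c| c.is_ascii_hexdigit()))
    {
        return None;
    }
    // An all-zero trace-id or parent-id is invalid per the spec.
    if trace_id.chars().all(|c| c == '0') || parent_id.chars().all(|c| c == '0') {
        return None;
    }
    Some((
        version.to_string(),
        trace_id.to_string(),
        parent_id.to_string(),
        flags.to_string(),
    ))
}

fn main() {
    let header = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01";
    if let Some((_, trace_id, parent_id, flags)) = parse_traceparent(header) {
        println!("trace_id={trace_id} parent_id={parent_id} sampled={}", flags == "01");
    }
}
```

A tool that extracts these fields from log lines can stitch entries from different services into one trace.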
Thanks for sharing all these links, super handy! I really appreciate it.
NP; improving QA feedback loops with IDE support is probably as useful as test coverage and test result metrics
/? vscode distributed tracing: https://www.google.com/search?q=vscode+distributed+tracing :
- jaegertracing/jaeger-vscode: https://github.com/jaegertracing/jaeger-vscode
/? line-based display of distributed tracing information in vs code: https://www.google.com/search?q=line-based%20display%20of%20... :
- sprkl personal observability platform: https://github.com/sprkl-dev/use-sprkl
Theoretically, it should be possible to correlate deployed code changes with the logs and traces preceding 500 errors, and then recreate the failure condition in a sufficiently faithful clone of production (in CI) to isolate and verify the fix before deploying new code.
Practically then, each PR generates logs, traces, and metrics when tested in a test deployment and then in production. FWIU that's the "personal" part of sprkl.