Announcing Lightrun Cloud: Shifting Left Observability, One Developer at a Time

Ilan Peleg
18-Apr-2021
6 min read

We’re proud to announce the general availability of Lightrun Cloud – a completely free and self-service version of the Lightrun platform.

We consider Lightrun Cloud a major milestone in our ongoing journey to empower developers with better observability tooling, and we welcome you to sign up for a free account.

 

How we got here – traditional observability

Observability, as it is practiced in most organizations today, relies on an old paradigm:

  • Log everything you can
  • Comb through the logged information later to gain valuable insights

The tendency to do things this way can be traced back to “the old days”, when software ran mostly in on-premises data centers and deployments were not a weekly (or daily) occurrence; rather, they were something you did once a quarter or once a year.

Back then, getting new code-level information from a running application was hard: if you didn’t log for it, you wouldn’t have it – so you logged everything.

Those days, however, are long gone; many companies today follow a Cloud Native approach that, combined with mainline DevOps methodologies, allows services to scale up and down incredibly fast and without much organizational overhead. This, in turn, has made much faster delivery schedules the norm – with some teams deploying tens or hundreds of times every single day.

 

Adapting to the new world

When cheap, reliable infrastructure became readily available, customers and users began to expect a certain level of operational excellence from the services they rely on. This explosion in demand brought with it severe observability challenges; to operate these gigantic systems at scale, organizations needed new tools and, perhaps more importantly, new approaches to help them understand what’s actually going on inside their production systems.

This, in part, prompted more and more engineering leaders to allocate development resources to managing, monitoring, and scaling production environments. Teams started to understand that codifying, versioning, and using automated tools for various parts of their work – in areas such as security (Snyk), testing (Perfecto), and operations (HashiCorp), to name a few – enables developers on those teams to truly “own” the reliability of the systems they build. From the aforementioned DevOps culture through the rise of the SRE (Site Reliability Engineer) to the (previously frowned-upon) paradigm of “testing in production”, developers slowly started becoming more involved in running the products they build.

At roughly the same time, observability tooling vendors took it upon themselves to become the “single source of truth” for cloud application performance: big APM players like Datadog, Dynatrace, and New Relic started offering a variety of tools (distributed tracers and ML analysis of logs come to mind) in order to keep up with the rapid pace of changes.

These solutions, however, still rely on that old paradigm: they look at information after the fact and attempt to provide the relevant insights then and there. They also tend to be mostly geared towards people in charge of the production environments the software is running in, as opposed to the developers who actually wrote the software. According to IDC, there are 26M+ developers worldwide today; however, only a fraction of them actually practice observability in their day-to-day workflows.

As the complexity of the software we build and the speed at which we need to deliver it both increase dramatically, this is simply not enough anymore. We need to figure out a way to bridge the gap between development and production environments before it becomes too wide for us to handle on a continuous basis.

 

Observability must shift left

Currently, when we want to learn new, application-level information about our software running in production, we need to do one of the following:

  • Rummage through data-packed dashboards passively, in the hopes of capturing that specific bit of information we need
  • Construct complex queries in our monitoring and logging systems to find that exact piece of data
  • Add more logs or instrument more metrics, retrigger our CI/CD pipeline, then look through the logs – which can be an iterative process that requires a lot of context switches

As our production environments become more complex, the number of unknown unknowns – things we don’t know we don’t know, and therefore cannot account for during development – increases significantly too. The fear of missing out on that key bit of information that solves a specific problem, combined with the current non-agile paradigm of “log everything, analyze later”, results in observability costs growing at an alarming rate.

As other disciplines – like security, testing, and operations – show us, these problems can be mitigated by incorporating the practice in question – in our case, observability – into the developer’s workflow. Generally speaking, each current workflow can be matched with a dev-native one that helps streamline the practice of observability for the everyday developer:

(Figure: side-by-side comparison of current observability workflows and their dev-native counterparts)

The key problem is that most observability technologies – even cutting-edge ones – tend to focus on operators and IT teams, defaulting to complicated GUIs and complex dashboards that detail infrastructure metrics. They work by collecting information “on the right”, in production, and analyzing it to find the answers to the questions IT teams ask – paying attention to the larger picture and missing the details along the way.

In contrast, the developers who wrote the software are after a different subset of information. They want to know what is going on simply by looking at how applications behave in production. They want to ask questions using the tools of their trade (IDEs, CLIs, code & scripts) and get whatever data they need, in real-time, and in any cardinality that is required to solve the problem at hand.

 

Enter Lightrun Cloud

Lightrun Cloud is the self-serve version of Lightrun – an IDE-native observability platform that enables developers to securely add logs, metrics, and traces to production and staging environments in real time, on demand. We work where you work, and require no hotfixes, redeployments, or restarts.
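
To make the “add a log in real time, without a redeploy” idea concrete, here is a minimal sketch in plain Python. The helper `add_logpoint` and the `checkout` function are hypothetical names invented for this illustration, and `sys.settrace` is used purely for demonstration – real dynamic-instrumentation agents (Lightrun included) work very differently under the hood:

```python
import sys

def add_logpoint(func_name, capture_fn):
    """Capture local variables each time a function with the given name
    returns -- without editing its source or restarting the process.
    Returns a list that accumulates the captured snapshots.
    (Conceptual sketch only; not Lightrun's actual mechanism.)"""
    hits = []

    def tracer(frame, event, arg):
        if event == "return" and frame.f_code.co_name == func_name:
            hits.append(capture_fn(frame.f_locals))
        return tracer  # keep receiving line/return events for new frames

    sys.settrace(tracer)
    return hits

# Application code we deliberately do NOT modify:
def checkout(cart_total, discount):
    total = cart_total * (1 - discount)
    return round(total, 2)

hits = add_logpoint("checkout", lambda local_vars: local_vars.get("total"))
checkout(100.0, 0.5)
sys.settrace(None)  # remove the "log point" just as dynamically
print(hits)  # → [50.0]
```

The point of the sketch is the workflow, not the mechanism: the observed function never changed, nothing was rebuilt or restarted, and the instrumentation was added and removed while the process was running.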

Developers use Lightrun for multiple code-level observability needs, including:

  • Code-level alerts
  • Feature verification
  • Testing / debugging in production

Sign up for your free account today to get started with real-time observability – right from your IDE.
