Understanding New Relic Logs and Data Management


New Relic logs are a treasure trove of information that can help you diagnose issues and optimize your application's performance. New Relic collects logs from your application, server, and infrastructure, providing a comprehensive view of your system's behavior.

This data is collected in a structured format, making it easier to analyze and query. New Relic stores it in its telemetry database, NRDB, which allows for fast and efficient querying of log data.
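As a hypothetical example, an NRQL query counting error-level log lines per host over the last hour might look like the following (attribute names such as `level` and `hostname` depend on how your logs are parsed):

```sql
SELECT count(*) FROM Log WHERE level = 'ERROR' FACET hostname SINCE 1 hour ago
```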

With New Relic logs, you can gain insights into user behavior, application errors, and performance bottlenecks. By analyzing this data, you can identify trends and patterns that can help you improve your application's performance and user experience.

New Relic's data management capabilities also enable you to set up alerts and notifications for critical issues, ensuring that you're always aware of potential problems before they impact your users.

Configuration

To configure Cribl Stream to output to New Relic, you'll need to select a Worker Group and then configure the New Destination modal. This involves entering a unique Output ID to identify the New Relic definition.


Under Authentication, you can select an Authentication method from the dropdown, which can use an internal field named __newRelic_apiKey or a configured API key. You can also configure Optional Settings, such as adjusting the Persistent Queue, Processing, Retries, and Advanced settings.

The Advanced Settings section allows you to fine-tune the configuration, including validating server certs, using round-robin DNS, compressing the payload body, and setting request timeouts and concurrency limits. You can also adjust the Max body size, Buffer memory limit, and Max events per request to optimize data transfer and memory usage.

The key Advanced Settings are covered in detail in the Advanced Settings section below.

After configuring the New Relic Destination, you can create Routes to send data to New Relic and create visualizations incorporating the Cribl Stream-supplied data.

Configure Cribl Stream Output

To configure Cribl Stream output, start by selecting Products and then Cribl Stream on the top bar. Under Worker Groups, choose a Worker Group.


You'll have two options: configure a new output or clone an existing one. If you clone, Cribl Stream will add -CLONE to the original Output ID.

To configure a new output, enter a unique name in the Output ID field. This will help identify the New Relic definition.

Under Authentication, select an authentication method from the dropdown. You can choose to use an internal field named __newRelic_apiKey or configure an API key in the Authentication method settings.

You can also adjust Optional Settings, such as Persistent Queue, Processing, Retries, and Advanced settings. However, these are not required for basic configuration.

To save your changes, select Save, then Commit & Deploy.

Persistent Queue Settings

Persistent queue settings allow you to control how messages are handled in your system.

The maximum number of messages in a queue, also known as the "max size" setting, can be configured to prevent queues from growing too large. This helps prevent performance issues and ensures that messages are processed efficiently.


By setting a maximum size, you can prevent queues from consuming too much memory or storage space. This is especially important in systems with limited resources.

The "timeout" setting determines how long a message will be retained in the queue before it's automatically removed. This helps prevent stale messages from accumulating in the queue.

You can also configure the "retry delay" setting, which controls how often a message will be retried if it fails to process successfully. This helps prevent overwhelming the system with retry attempts.
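The three knobs described above can be illustrated with a short, stdlib-only Python sketch. This is not Cribl's implementation, just a conceptual picture of the behavior that a max size, an enqueue timeout, and a retry delay control:

```python
import queue
import time

# Conceptual illustration only -- NOT Cribl's persistent queue code.
q = queue.Queue(maxsize=100)  # "max size": bound queue growth

def enqueue(message, timeout=5.0):
    """Reject the message if the queue stays full past the timeout."""
    try:
        q.put(message, timeout=timeout)
        return True
    except queue.Full:
        # Queue is full: drop rather than grow without bound.
        return False

def process_with_retry(handler, retry_delay=2.0, max_retries=3):
    """Retry a failing message with a delay between attempts."""
    message = q.get()
    for attempt in range(max_retries):
        try:
            return handler(message)
        except Exception:
            time.sleep(retry_delay)  # "retry delay" spaces out attempts
    return None
```

The bounded queue provides backpressure, and the retry delay keeps a failing downstream from being hammered with immediate retries.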

Advanced Settings

In the Advanced Settings section, you can fine-tune your Cribl Stream configuration to suit your needs.

You can toggle Validate server certs to Yes to reject certificates that aren't authorized by a CA in the CA certificate path. This ensures that only trusted certificates are used.

Round-robin DNS is a useful feature that cycles through the multiple addresses a DNS server returns, helping to distribute traffic evenly. Simply toggle it to Yes to enable this feature.


Compressing the payload body before sending is a good idea, as it's recommended by Cribl and defaults to Yes.

You can adjust the Request timeout to suit your needs, with a default value of 30 seconds. If a request takes longer than this to complete, it will be aborted.

The Request concurrency setting determines the maximum number of concurrent requests per Worker Process, with a default value of 5. You can adjust this value between 1 and 32.

Buffer memory limit is a crucial setting that determines the total amount of memory used to buffer outgoing requests. If left blank, it defaults to 5 times the max body size, or you can set it to 0 to disable the limit.

Max events per request allows you to set the maximum number of events to include in the request body, with a default value of 0 (unlimited).

A low Flush period can cause the payload size to be smaller than the configured Max body size, so be sure to adjust this setting accordingly.

By adding extra HTTP headers, you can include additional information in your requests. Just be sure to click Add Header to insert each new header as a Name/Value pair.
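The settings above can be pictured as a single destination definition. The fragment below is an illustrative sketch only: in practice these options are set in the Cribl Stream UI, and the key names here mirror the UI labels rather than any exact configuration schema:

```yaml
# Hypothetical sketch of a New Relic destination; key names mirror the
# UI labels and may not match your Cribl version's actual schema.
outputs:
  my-newrelic:                  # Output ID
    type: newrelic
    apiKey: "<YOUR_API_KEY>"    # or supplied via the __newRelic_apiKey field
    validateServerCerts: true   # reject certs not authorized by a CA
    roundRobinDns: false
    compress: true              # recommended default
    requestTimeoutSec: 30       # abort slower requests
    requestConcurrency: 5       # 1-32 per Worker Process
    maxBodySizeKB: 4096
    maxEventsPerRequest: 0      # 0 = unlimited
    bufferMemoryLimit: 0        # 0 = disabled; blank = 5x max body size
    extraHttpHeaders:
      - name: X-Example-Header  # hypothetical Name/Value pair
        value: example
```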

Data Management


Data Management is a crucial aspect of New Relic Logs. It helps you collect, store, and analyze log data from various sources.

New Relic Logs can collect data from a wide range of log sources, including applications, servers, and network devices. This allows you to get a comprehensive view of your system's performance.

To manage your data effectively, you can use New Relic's data retention policies, which enable you to set specific time limits for keeping or deleting data. This helps prevent data from becoming too large and unwieldy.

Post Processing

Post Processing is a crucial step in managing your data: it runs after an event has been processed, letting you attach extra details before the event is sent.

You can automatically add fields to your events using Post Processing. This includes adding fields like cribl_pipe, which identifies the Cribl Stream Pipeline that processed the event.

The cribl_pipe field is automatically added to all events that use this output, and it's a great way to keep track of where your data is coming from. You can also use wildcards to add other fields.


Other fields you can add include cribl_host, cribl_input, cribl_output, cribl_route, and cribl_wp. These fields provide information about the Cribl Stream Node, Source, Destination, Route, and Worker Process that processed the event.

Here are some examples of fields you can add to your events:

  • cribl_host – Cribl Stream Node that processed the event
  • cribl_input – Cribl Stream Source that processed the event
  • cribl_output – Cribl Stream Destination that processed the event
  • cribl_route – Cribl Stream Route (or QuickConnect) that processed the event
  • cribl_wp – Cribl Stream Worker Process that processed the event
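The effect of this field injection can be sketched in a few lines of Python. This is an illustration of what the enrichment does to an event, not Cribl's code; the field names match the list above, and the values are made-up examples:

```python
# Sketch of post-processing field injection: stamp each outbound event
# with metadata about the components that handled it.
def add_cribl_fields(event, node, source, destination, route, pipeline, worker):
    """Return a copy of the event annotated with cribl_* metadata fields."""
    enriched = dict(event)
    enriched.update({
        "cribl_host": node,           # Node that processed the event
        "cribl_input": source,        # Source
        "cribl_output": destination,  # Destination
        "cribl_route": route,         # Route (or QuickConnect)
        "cribl_pipe": pipeline,       # Pipeline
        "cribl_wp": worker,           # Worker Process
    })
    return enriched

event = {"message": "user login", "level": "info"}
out = add_cribl_fields(event, "worker-1", "syslog-in", "newrelic-out",
                       "default", "parse-syslog", "wp-0")
```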

Metrics Queries

Metrics Queries are a powerful tool for accessing and querying your data. You can use them to gain valuable insights into your application's performance and behavior.

To access Metrics Queries, from the New Relic home screen, click Browse Data > Metrics, where you can search for metric names. This takes you to a page where you can customize your time range and dimensions to build the desired logic for your queries.

You can use NRQL to build your own query searches. This allows for more complex and customized queries.

To customize your time range, you can select a specific date and time range or use a relative time range. You can also add dimensions to your query to filter and group your data.

Here are some common dimensions you can use in your Metrics Queries:

  • time
  • entity
  • event

By using these dimensions and customizing your time range, you can create complex and powerful queries that provide valuable insights into your data.
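Putting these pieces together, a hypothetical NRQL metrics query faceted by entity over a relative time range might look like this (the metric name `app.response.time` and the facet attribute `appName` are placeholders; search for names that actually exist in your account):

```sql
SELECT average(app.response.time) FROM Metric FACET appName SINCE 30 minutes ago TIMESERIES
```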

Fluentd Plugin Example


If you're already using FluentD, you can easily connect it to New Relic Logs using the output plugin.

You'll need to install the New Relic plugin via fluent-gem or td-agent-gem.

This plugin allows you to amend the configuration file to use either your New Relic License Key or API Key.
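As a sketch, the match block in the Fluentd configuration file looks roughly like this (substitute your own License Key; exact parameter names may vary by plugin version):

```
<match **>
  @type newrelic
  license_key YOUR_LICENSE_KEY
</match>
```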

Once you've got the plugin installed and configured, you'll be sending your data to New Relic in no time.

You'll be able to access the received data via Insights, just like every other event-driven metric in New Relic.

New Relic has plugins for all the major open source log shippers, including FluentD, so you're in good company.

Data Sources

New Relic Logs offers a variety of data sources to help you collect and analyze your log data. With plugins for major open source log shippers, including Logstash, FluentD, and Fluent Bit, onboarding and deployment are simple.

You can also leverage New Relic Logs' integrations with large cloud providers, such as AWS Cloudwatch, Firelens, and Kubernetes, to send log data directly to the platforms you already use. This makes it easy to get started with New Relic Logs.

Fluentd is our recommended option if you haven't already implemented an open source tool, but you can also use Fluent Bit, Logstash, or even a plain HTTP input for logging.

How Forwarding Works


Standard log formatters transform log events into meaningful output, such as text files, that can be used by downstream people and processes. The NewRelicFormatter instead transforms log events into the JSON format expected by New Relic. Formatters are used in conjunction with log forwarders, which send the formatted log data to New Relic.

The log forwarder can also extend and enrich the log data by linking the formatted log data with additional transaction information from your application or host. This is done through the log enricher, which is configured when setting up the log forwarder.
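The transformation a New Relic-style formatter performs can be sketched with Python's standard logging module. This is a hedged, stdlib-only illustration of the idea (log record in, one-line JSON out), not the official NewRelicFormatter implementation, and the field names are assumptions:

```python
import json
import logging

class JsonLogFormatter(logging.Formatter):
    """Illustrative formatter: render each log record as one JSON object."""
    def format(self, record):
        payload = {
            "timestamp": int(record.created * 1000),  # epoch milliseconds
            "message": record.getMessage(),
            "log.level": record.levelname,
            "logger.name": record.name,
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonLogFormatter())
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.warning("disk usage high")  # emitted as a one-line JSON object
```

A forwarder would then ship these JSON lines to New Relic, optionally enriching them with transaction metadata first.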

You can use the infrastructure agent, Fluentd, Logstash, or other supported log forwarders to send log data to New Relic. Some of these options come with quickstarts, which give you out-of-the-box dashboards and alerts.


Depending on your New Relic subscription, you can access your logs from several places in the New Relic UI.

Frequently Asked Questions

Where are the New Relic log files?

New Relic log files are stored in the Logs directory, located at %ALLUSERSPROFILE%\New Relic\.NET Agent\Logs on Windows, and /usr/local/newrelic-dotnet-agent/logs on Linux. You can also find them in the newrelic folder within your app's root directory if you're using a NuGet package install.

What is the difference between Splunk and New Relic logs?

Splunk collects and reports logs in real time, while New Relic ingests and analyzes only the log types you designate, which limits log usability unless sources are configured manually. This difference in approach affects how quickly and comprehensively logs are collected and analyzed.

How to search logs in New Relic?

To search logs in New Relic, navigate to one.newrelic.com > Logs and enter a query to focus on the desired log information. Click Query logs to view and examine specific log lines in detail.

What is New Relic logging?

New Relic logging is a fast and scalable platform for collecting and connecting log data with other telemetry and infrastructure data in one place. This unified view helps you monitor and troubleshoot your applications more efficiently.

How long does New Relic keep logs?

New Relic logs are retained for 30 days by default, but retention can be extended, up to seven years with live archives. You can customize your log retention period to fit your monitoring and analysis needs.

Melba Kovacek

Writer

Melba Kovacek is a seasoned writer with a passion for shedding light on the complexities of modern technology. Her writing career spans a diverse range of topics, with a focus on exploring the intricacies of cloud services and their impact on users. With a keen eye for detail and a knack for simplifying complex concepts, Melba has established herself as a trusted voice in the tech journalism community.
