🛰️ Ingesting Azure PaaS Diagnostic Logs into Microsoft Sentinel via Event Hub & DCR/DCE

When you work with PaaS resources in Azure (for example, Azure Key Vault or Azure App Service), you often configure their Diagnostic Settings to stream logs to an Azure Event Hubs namespace. The next question is how to get that data into a Microsoft Sentinel-enabled Log Analytics workspace for detection, hunting, and alerting.

This article walks you through two Microsoft-supported ways to ingest from Event Hubs into your Log Analytics/Sentinel workspace (one quick, one flexible), with details drawn from Microsoft's official documentation.


🔍 Scenario Overview

  • You have PaaS diagnostics streaming to an Event Hub (namespace/hub).
  • You want to ingest those events into a Log Analytics workspace with Microsoft Sentinel enabled.
  • You prefer a path that is supported by Microsoft, scalable, and future-proof.

🅰️ Option 1 – Use Microsoft Sentinel’s “Azure Event Hubs” Data Connector

The fastest, minimal-code solution.

✅ Steps

  1. In the Azure Portal, open your Microsoft Sentinel workspace.
  2. Navigate to Configuration → Data connectors.
  3. Search for “Azure Event Hubs” (or “Event Hubs” connector).
  4. Click Open connector page, then Connect.
  5. Provide the Event Hub details:
    • Event Hub Namespace
    • Event Hub Name
    • Consumer Group (create a dedicated one for Sentinel)
  6. Click Apply / Connect.
  7. After setup, events from your Event Hub start landing in your workspace, typically in a custom table whose name depends on the connector (a sanity-check query is sketched after these steps).
    • Note: Sentinel handles the ingestion behind the scenes by deploying the necessary Azure Function or managed service.
    • See Microsoft docs: Microsoft Sentinel data connectors (Microsoft Learn).
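
Once the connector shows as connected, a quick KQL check confirms events are flowing. The table name below is hypothetical; use whichever table the connector page lists as its destination:

```kql
// Hypothetical table name; substitute the table shown on the connector page.
EventHubLogs_CL
| where TimeGenerated > ago(1h)
| take 20
```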

🧩 Pros & Cons

Pros:

  • Very fast to set up.
  • Minimal development/code effort.
  • Built-in support in Sentinel’s connector catalogue.

Cons:

  • Less control over schema/transformation of incoming data.
  • You may not be able to fine-tune field mapping, filtering, or enrichment.
  • If you have large volumes or custom formats, you might hit limitations.

📘 When to choose this

  • For a proof of concept or a quick ingestion win.
  • When you have relatively standard log formats and you just need them searchable in Sentinel.

🅱️ Option 2 – Use Azure Monitor Logs Ingestion via Data Collection Rule (DCR) + optional Data Collection Endpoint (DCE)

The more flexible, long-term solution, fully aligned with modern Azure Monitor architecture.

✅ Updated Steps (per Microsoft docs)

Based on the Microsoft Learn article “Ingest events from Azure Event Hubs into Azure Monitor Logs”.

  1. Ensure prerequisites:
    • You need at least Contributor rights on the Log Analytics workspace. (Microsoft Learn)
    • Your Event Hubs namespace must allow public network access, or allow “Trusted Microsoft Services” through its firewall. (Microsoft Learn)
    • The DCR and DCE (if used) must be in the same region as the Event Hubs namespace; the workspace itself can be in any region. (Microsoft Learn)
  2. Create the destination table in the Log Analytics workspace:
    • In your workspace → Tables → Create a new custom table (or via the Tables API).
    • This is the table your Event Hub events will be written to; an illustrative schema sketch follows these steps. (Microsoft Learn)
  3. Create a Data Collection Endpoint (DCE), optional but recommended for scale or private-link scenarios:
    • In Azure Monitor → Data collection endpoints → Add.
    • Choose region, resource group, name.
  4. Create the Data Collection Rule (DCR):
    • In Azure Monitor → Data collection rules → Create.
    • In the rule, define:
      • Source: your Event Hubs namespace and event hub, read through a dedicated consumer group.
      • Destination: the Log Analytics workspace and target table.
      • Transformation: optional KQL to reshape fields (map the Event Hub message body to table columns).
    • A trimmed JSON sketch of such a DCR follows these steps. (Microsoft Learn)
  5. Associate the DCR with the Event Hub:
    • Specify the Event Hub resource via the DCR UI or an ARM template (a Microsoft.Insights/dataCollectionRuleAssociations resource scoped to the Event Hub).
    • Once the association is created, ingestion begins.
  6. Validate ingestion:
    • In the Log Analytics workspace → Logs, query the destination table (e.g., MyCustomTable_CL | take 20); a fuller check is sketched below.
    • Monitor ingestion latency and check for errors.
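
Here's an illustrative request body for creating the destination table from step 2 via the Log Analytics Tables API. The table name MyPaaSLogs_CL and its columns are placeholders; align them with your actual diagnostic payload:

```json
{
  "properties": {
    "schema": {
      "name": "MyPaaSLogs_CL",
      "columns": [
        { "name": "TimeGenerated", "type": "datetime" },
        { "name": "RawData", "type": "string" },
        { "name": "Properties", "type": "dynamic" }
      ]
    }
  }
}
```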
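
And a trimmed sketch of the DCR from steps 3–5, following the structure shown in the Microsoft Learn tutorial. Every resource ID, name, and the consumer group below is a placeholder to substitute with your own values:

```json
{
  "properties": {
    "dataCollectionEndpointId": "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Insights/dataCollectionEndpoints/<dce>",
    "streamDeclarations": {
      "Custom-MyEventHubStream": {
        "columns": [
          { "name": "TimeGenerated", "type": "datetime" },
          { "name": "RawData", "type": "string" },
          { "name": "Properties", "type": "dynamic" }
        ]
      }
    },
    "dataSources": {
      "dataImports": {
        "eventHub": {
          "consumerGroup": "sentinel-ingest",
          "stream": "Custom-MyEventHubStream",
          "name": "myEventHubDataSource"
        }
      }
    },
    "destinations": {
      "logAnalytics": [
        {
          "workspaceResourceId": "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<ws>",
          "name": "MyWorkspace"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Custom-MyEventHubStream" ],
        "destinations": [ "MyWorkspace" ],
        "transformKql": "source",
        "outputStream": "Custom-MyPaaSLogs_CL"
      }
    ]
  }
}
```

The transformKql field is where filtering and reshaping happen; "source" passes events through unchanged.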
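
Finally, a quick KQL sanity check for step 6, assuming the placeholder table name above:

```kql
// Confirm events are landing in the destination table.
MyPaaSLogs_CL
| where TimeGenerated > ago(1h)
| take 20
```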

🧩 Pros & Cons

Pros:

  • Full control over the incoming schema (transforms), filtering and mapping.
  • Scale and performance aligned with Azure Monitor native ingestion.
  • Supports advanced patterns (Private Link, multi-region, enrichment).
  • More future-proof for large enterprise workloads.

Cons:

  • More setup effort (DCE, DCR, transformation JSON, permissions).
  • Slightly steeper learning curve compared to built-in connector.

📘 When this is appropriate

  • You have custom PaaS diagnostic schemas, need field mapping, or need to filter large volumes.
  • You’re building a long-term logging architecture (not just a quick pilot).
  • You expect to grow ingestion scale or apply strict cost/volume control.

🧭 Which Option Should You Choose? – Decision Table

| Use Case | Recommended Option | Reason |
| --- | --- | --- |
| Quick setup, standard logs, low volume | Option 1 – Sentinel Event Hubs connector | Fastest path, minimal setup |
| Custom schema, high volume, long term, enrichment | Option 2 – DCR/DCE ingestion | Maximum flexibility and control |
| Mixed or evolving scenario | Start with Option 1, migrate to Option 2 | Rapid lane first, then the enterprise lane |

💡 Best Practices & Additional Guidance

  • Filtering & cost control: Ingesting everything unfiltered can incur high cost. Use DCR transforms or diagnostic-settings category selection to limit what you send (a filter sketch follows this list). (Microsoft Learn)
  • Region alignment: For DCR/DCE ingestion, keep the Event Hub, DCE, and DCR in the same region to minimise latency. The workspace can live elsewhere, but same-region is recommended. (Microsoft Learn)
  • Consumer groups: For Event Hub ingestion, create a dedicated consumer group per ingestion path (Sentinel/DCR) to avoid interfering with other consumers.
  • Permissions & network: Ensure the Event Hubs namespace allows the ingestion service access (public network or “Allow trusted Microsoft services”) and that the required IAM roles are in place on the workspace. (Microsoft Learn)
  • Schema versioning: If PaaS diagnostic schemas change over time, use transformation logic to map new fields or degrade gracefully when fields are missing (see the query-time sketch after this list).
  • Monitoring ingestion health: Use Azure Monitor metrics and logs to track ingestion latency, dropped events, and volume; a latency query is sketched after this list.
  • Start small, then scale: Begin with a subset of logs (e.g., errors only), then expand to the full diagnostic scope. This reduces initial cost and noise.
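
As a sketch of the ingestion-time filtering mentioned above (this KQL would go in the DCR's transformKql field), assuming the payload carries a level property inside the dynamic Properties column:

```kql
// DCR transform sketch: ingest only error- and critical-level events.
// Properties.level is an assumption; match it to your actual payload.
source
| where tostring(Properties.level) in ("Error", "Critical")
```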
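
For schema drift at query time, column_ifexists() lets the same query run against tables where a column may not exist yet. The column name here is illustrative:

```kql
// Fall back to an empty string if the column doesn't exist in this table.
MyPaaSLogs_CL
| extend CallerIp = tostring(column_ifexists("CallerIPAddress", ""))
| take 20
```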
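
And a simple latency check that compares ingestion_time(), the moment Azure Monitor received each record, against TimeGenerated, again using the placeholder table name:

```kql
// Approximate end-to-end ingestion latency per hour over the last day.
MyPaaSLogs_CL
| where TimeGenerated > ago(24h)
| extend IngestionDelay = ingestion_time() - TimeGenerated
| summarize AvgDelay = avg(IngestionDelay), MaxDelay = max(IngestionDelay) by bin(TimeGenerated, 1h)
```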

✅ Summary

  • Two supported ingestion patterns:
    • Fast path: Sentinel connector (Event Hub)
    • Enterprise path: DCR/DCE ingestion via Azure Monitor Logs
  • Use the right path based on your volume, schema complexity, and future growth.
  • Follow Microsoft’s region/permissions requirements to avoid setup issues.
  • Apply filtering, monitoring and best-practices to control cost and scale.
