
Deploying Microsoft Sentinel as a SIEM for an eCommerce Platform

Jonathan Bourke · 8 min read

When you are launching a high-profile eCommerce platform, security monitoring is not optional. The platform integrates with back-office systems, processes payment data, and handles customer information. Multiple security layers — firewalls, WAFs, endpoint protection, identity systems — each produce their own logs. Manually correlating events across these sources is impractical. That is what a SIEM is designed for.

Traditional SIEM solutions require significant upfront investment in infrastructure, licensing, and specialist staffing. Microsoft Sentinel offers a different model: cloud-native, consumption-based, and available to organisations that could never justify a traditional SIEM deployment. We worked with a large enterprise client to deploy Sentinel as the SIEM for their Azure-hosted eCommerce platform, feeding security events and alerts into their existing Security Operations Centre.

This is what we learned building it.

The environment

The eCommerce platform was hosted entirely in Azure, but “hosted in Azure” understates the complexity. The architecture involved:

  • Azure infrastructure: Virtual machines, SQL databases, Logic Apps, Storage Accounts, and Azure Firewall — the core platform components
  • Third-party CDN/WAF provider: A content delivery and web application firewall service handling DDoS protection, SSL termination, and application-layer security at the edge
  • Salesforce CRM: Customer and order information managed in Salesforce, integrated with the eCommerce platform
  • Existing Security Operations Centre: The client already had a SOC managing their on-premises infrastructure. Sentinel needed to integrate with this existing operation, not replace it.

Each of these layers generates security-relevant logs. The value of a SIEM is in bringing them together — correlating a suspicious sign-in in Salesforce with an unusual data access pattern in Azure SQL with an anomalous request pattern at the WAF layer. No single system sees the full picture; Sentinel does.

Setting up the workspace

Before connecting any data sources, we needed to get the Sentinel workspace right.

Sentinel runs on top of a Log Analytics workspace in Azure. All ingested data flows into this workspace, and the configuration choices here have lasting implications for cost and compliance.

Retention is the first decision. Log Analytics defaults to 30 days of interactive retention. Sentinel adds 90 days at no extra cost — so your first 90 days are included. Beyond that, you can configure up to 730 days of total retention (the first 90 interactive, the remainder in long-term archive at a lower cost). For this client, PCI-DSS compliance required 365 days of log retention. We configured accordingly.
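
Under these assumptions (the resource group and workspace names below are placeholders, not the client's), the interactive retention setting is a single Azure CLI call:

```shell
# Illustrative: set workspace-level interactive retention to 365 days.
# Resource group and workspace names are placeholders.
az monitor log-analytics workspace update \
  --resource-group rg-ecom-security \
  --workspace-name law-ecom-sentinel \
  --retention-time 365
```

Long-term (archive) retention beyond the interactive window can also be tuned per table rather than workspace-wide, which is useful when only some tables need the full compliance period.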

Workspace hygiene mattered. Before enabling connectors, we documented the existing Azure environment and cleaned up resources that were no longer in use. Every resource that generates logs adds to the Sentinel ingestion cost. Decommissioned VMs, unused storage accounts, and orphaned network interfaces all generate activity logs that Sentinel would ingest and charge for. Cleaning up first saved the client from paying for noise.

The connectors: three integrations, three different challenges

The connectors were the most technically interesting part of the deployment. Each data source presented its own integration pattern and its own set of challenges.

Connector 1: Microsoft Defender for Cloud

This was the simplest integration. Microsoft Defender for Cloud monitors Azure resources for security misconfigurations and threats. Connecting it to Sentinel is a native integration — enable the connector in Sentinel, select the subscriptions to monitor, and security alerts flow through automatically.

We enabled Defender for Cloud on all Azure subscriptions associated with the eCommerce platform. The alerts cover a wide range: exposed storage accounts, missing encryption, anomalous database access, suspicious virtual machine activity, and more. These form the baseline layer of Azure-native security monitoring.

Connector 2: Third-party CDN/WAF logs via CEF syslog

This was the most operationally challenging connector.

The third-party CDN/WAF provider could export logs in Common Event Format (CEF) — a standardised syslog format used across the security industry. Getting those logs into Sentinel required a two-VM architecture:

VM 1: Log collector. This VM runs an agent provided by the CDN/WAF vendor. The agent authenticates against the provider’s API, downloads log files on a schedule, and forwards them as CEF-formatted syslog messages to the second VM.

VM 2: Syslog forwarder. This VM runs the Azure Monitor Agent (AMA), which receives the CEF syslog messages and forwards them to the Sentinel workspace. The AMA handles the translation from syslog to the Log Analytics ingestion format.

The architecture works, but we hit a practical issue that is worth documenting. The syslog daemon (rsyslog) on the forwarder VM was writing incoming logs to both the syslog output (which the AMA picks up) and to local log files under /var/log/messages. Over time, the /var partition filled up. The fix was straightforward — configure rsyslog to exclude CEF messages from local file logging — but it is the kind of operational detail that does not appear in any deployment guide. The logs were substantial in volume, and an unmonitored disk filling silently would have caused the connector to stop forwarding.
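
The fix can be sketched as a small rsyslog drop-in. The file path, listener port, and match string here are illustrative — the exact forwarding rule depends on how the AMA's CEF collection was set up on the VM:

```conf
# /etc/rsyslog.d/10-cef.conf (illustrative drop-in)
# Forward CEF messages to the AMA's local syslog listener, then stop
# processing so they never reach /var/log/messages and fill /var.
if ($rawmsg contains "CEF:") then {
    action(type="omfwd" target="127.0.0.1" port="28330" protocol="tcp")
    stop
}
```

The `stop` directive is the important part: without it, the default file-output rules further down the configuration still run, and the local copies accumulate.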

Connector 3: Salesforce CRM to Sentinel via Azure Function App

This was the most technically complex connector, and I have written about it in detail separately: Getting Salesforce Audit Logs Into Microsoft Sentinel — Without Exposing Storage to the Internet.

The short version: the Salesforce connector uses a Python-based Azure Function App that polls the Salesforce REST API on a schedule, retrieves audit log events, and writes them to the Log Analytics workspace.
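
One piece of that pipeline is worth sketching. The legacy Log Analytics HTTP Data Collector API — the ingestion path connectors of this style commonly use — authenticates each POST with an HMAC-SHA256 signature over the request metadata. The function below is an illustrative sketch, not the connector's actual code:

```python
import base64
import hashlib
import hmac


def build_signature(workspace_id: str, shared_key: str, date: str,
                    content_length: int) -> str:
    """Build the Authorization header for the legacy Log Analytics
    HTTP Data Collector API (HMAC-SHA256 over the request metadata).

    shared_key is the workspace's base64-encoded primary key;
    date is an RFC 1123 timestamp matching the x-ms-date header.
    """
    string_to_sign = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{date}\n/api/logs"
    )
    decoded_key = base64.b64decode(shared_key)
    digest = hmac.new(decoded_key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode()
    return f"SharedKey {workspace_id}:{signature}"
```

The Function App's timer trigger then becomes: pull a batch of audit events from Salesforce, JSON-encode them, sign the request with this header, and POST to the workspace's data collector endpoint.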

We encountered two significant challenges:

CRM licensing. The Salesforce API access needed for hourly log retrieval required specific Salesforce licence capabilities. The client’s existing CRM licence did not include the necessary API call volume for hourly polling. We explored a daily polling workaround (fewer API calls, larger batches) as an alternative, though hourly collection provides much better detection timeliness for security monitoring.

Azure Policy blocking public storage. The default Salesforce connector ARM template provisions a publicly accessible Azure Storage Account. The client’s Azure Policy blocked this. We rewrote the ARM template to use Azure Private Link — creating a virtual network, private endpoints for all four storage services (blob, file, queue, table), private DNS zones, and an Elastic Premium function app plan that supports VNet integration. The full ARM template is available as a GitHub Gist.

This was not a trivial change. It required understanding what each component of the ARM template does, why it exists, and how the pieces connect. The default connector templates are starting points — in a production environment with real security policies, you will customise them.

Analytics rules and automation

With data flowing from all three sources, the next step was creating analytics rules that generate actionable alerts and incidents.

Analytics rules define the detection logic. Each rule queries the ingested log data on a schedule, looking for patterns that indicate potential security issues. When a rule matches, it creates an incident in Sentinel that appears in the SOC’s queue for triage and investigation.

We configured rules covering:

  • Suspicious sign-in patterns across the Azure environment
  • Anomalous database query volumes against the eCommerce SQL databases
  • WAF/CDN security events (blocked attacks, rate limiting triggers, geographic anomalies)
  • Salesforce-specific detections: excessive login failures, logins from known-malicious IPs, and unusually large data queries via the API or reporting interfaces
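
As an illustration, the scheduled-rule query behind a Salesforce login-failure detection might look like the following KQL. The table and column names are assumptions — they depend on how the Function App writes its custom log:

```kusto
// Illustrative: excessive Salesforce login failures per user.
// SalesforceServiceCloud_CL and its columns are assumed names.
SalesforceServiceCloud_CL
| where TimeGenerated > ago(1h)
| where EVENT_TYPE_s == "Login" and LOGIN_STATUS_s != "LOGIN_NO_ERROR"
| summarize Failures = count() by USER_NAME_s, bin(TimeGenerated, 10m)
| where Failures > 10
```

The threshold and window are starting points; as discussed below, these are exactly the values that get tuned once real traffic flows through the rule.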

Automation playbooks handle the routine parts of incident response. We built a Logic App playbook that integrates Sentinel incidents with the client’s ITSM ticketing system. When Sentinel creates an incident, the playbook automatically creates a corresponding ticket in the client’s service management platform, assigning it to the SOC team with the relevant context and severity.

This closed the gap between detection and response. Without the automation, SOC analysts would need to monitor the Sentinel portal separately from their ticketing system. With it, every Sentinel incident appears in the same queue they already work from.

Ongoing operations

A SIEM deployment is not a project that finishes — it is an operational capability that requires ongoing attention. After the initial deployment, the work shifted to:

Data volume monitoring. Sentinel charges based on data ingestion. Each connector contributes a different volume, and those volumes can change as the platform grows or as the underlying systems change their logging verbosity. We set up workspace usage alerts to catch unexpected spikes before they hit the invoice.
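
A simple starting point for volume monitoring is a scheduled query over the built-in Usage table, where Quantity is reported in MB:

```kusto
// Illustrative: billable ingestion per table over the last day.
Usage
| where TimeGenerated > ago(1d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1024 by DataType
| order by IngestedGB desc
```

Wiring a query like this into an alert rule gives an early warning when one connector's volume jumps out of its usual range.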

Rule effectiveness tuning. Analytics rules need refinement over time. Some rules generate too many false positives and need their thresholds adjusted. Others miss detection scenarios that emerge as the platform evolves. We review rule performance monthly and adjust accordingly.

Reporting. Regular reporting to stakeholders on key metrics: incidents created, incidents resolved, mean time to detection, mean time to response, and trends in attack patterns. This is not just operational hygiene — it demonstrates the value of the SIEM investment to the business.

Automation expansion. The initial playbook was a starting point. Over time, we added automated enrichment (pulling context from threat intelligence feeds when an incident is created) and automated response actions for high-confidence detections (blocking a known-malicious IP at the firewall without waiting for a human to approve it).

The result

The deployment gave the client a scalable SIEM with full visibility across their Azure eCommerce platform — infrastructure, application layer, edge security, and CRM — feeding into their existing security operations workflow.

The key outcomes:

  • Unified visibility: Three distinct security data sources correlated in a single platform, with detection logic that spans all of them
  • Operational integration: Sentinel incidents flow automatically into the existing SOC ticketing workflow, so there is no separate monitoring silo
  • Compliance coverage: 365-day log retention satisfying PCI-DSS requirements, with auditable evidence of monitoring and incident response
  • Scalable architecture: Adding new data sources or expanding the platform does not require re-architecting the SIEM. New connectors plug into the existing workspace and analytics framework.

The biggest lesson from this deployment: the technical configuration is only half the work. Getting the connectors working, the rules tuned, and the automation wired up is necessary but not sufficient. The other half is operational — ensuring someone is looking at the alerts, refining the rules, managing the costs, and continuously improving the detection coverage. A SIEM that nobody monitors is expensive logging, not security monitoring.
