
Getting Salesforce Audit Logs Into Microsoft Sentinel – Without Exposing Storage to the Internet

Jonathan Bourke

We manage Microsoft Sentinel as part of the security tooling for an eCommerce client. Sentinel is a cloud-native SIEM — security information and event management — that collects data from various sources and uses built-in analytics and machine learning to surface suspicious activity quickly. One of the data sources we needed to bring in was Salesforce audit logs.

The problem: the default Salesforce connector for Sentinel assumes its underlying Azure resources will be publicly accessible. Our client’s environment had Azure Policy configured to block public access to storage accounts, with no exceptions. The standard connector deployment failed immediately.

This is the story of how we rewrote the ARM template to route everything through Azure Private Link, keeping the Salesforce log ingestion fully functional while respecting the client’s zero-public-access policy.

Why the default connector fails in locked-down environments

The out-of-the-box Salesforce Sentinel connector consists of two main pieces: a Python-based Azure Function App and an Azure Storage Account. The Function App pulls Salesforce audit logs on a schedule and writes them to a Log Analytics workspace where Sentinel can process them.

The ARM template Microsoft provides assumes both resources will have public endpoints. That is fine in environments with permissive network policies, but increasingly organisations are enforcing a no-public-storage stance through Azure Policy — and rightly so. Exposing storage accounts to the public internet is an unnecessary attack surface for backend infrastructure that only needs to communicate with other Azure services.

When we deployed the default connector, Azure Policy rejected the storage account creation. No public access means no public access — no exceptions, no workarounds at the policy level. We needed to redesign the connector to work entirely over private networking.
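A policy like the client's can be expressed as a deny rule on the storage account's public network access. This is a minimal sketch of such a policy definition, not the client's actual policy:

```json
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Storage/storageAccounts" },
      { "field": "Microsoft.Storage/storageAccounts/publicNetworkAccess", "notEquals": "Disabled" }
    ]
  },
  "then": { "effect": "deny" }
}
```

With this assigned at subscription scope, any ARM deployment that creates a storage account without `"publicNetworkAccess": "Disabled"` is rejected before the resource exists, which is exactly what happened to the default connector template.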

Rebuilding the connector around Private Link

We built a new ARM template that merges the standard Salesforce connector components with the networking infrastructure needed for private connectivity. The template provisions six categories of resources, all wired together so that no traffic leaves the virtual network.

The Azure Function App

The Python-based Function App is the core of the connector. It authenticates against the Salesforce REST API, pulls audit log data, and writes it to the Log Analytics workspace. The code itself is downloaded from Microsoft — we did not modify the application logic.

What we changed is how the Function App connects to its backing storage. The AzureWebJobsStorage and WEBSITE_CONTENTAZUREFILECONNECTIONSTRING app settings point to a storage account secured behind a private endpoint. The Function App reaches storage over the virtual network rather than over the internet.

Several additional application settings are defined for the Salesforce integration: the REST API endpoint, authentication credentials, consumer key and secret, and the polling interval.
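The shape of the Function App's settings block looks roughly like the sketch below. The storage connection strings are standard; `WEBSITE_CONTENTOVERVNET` is required so the runtime fetches the content share over the VNet. The Salesforce setting names shown here are illustrative, the actual names must match what the connector code reads:

```json
"appSettings": [
  { "name": "FUNCTIONS_EXTENSION_VERSION", "value": "~4" },
  { "name": "FUNCTIONS_WORKER_RUNTIME", "value": "python" },
  { "name": "AzureWebJobsStorage", "value": "[parameters('storageConnectionString')]" },
  { "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING", "value": "[parameters('storageConnectionString')]" },
  { "name": "WEBSITE_CONTENTOVERVNET", "value": "1" },

  { "name": "SalesforceUser", "value": "[parameters('SalesforceUser')]" },
  { "name": "SalesforceConsumerKey", "value": "[parameters('SalesforceConsumerKey')]" },
  { "name": "SalesforceConsumerSecret", "value": "[parameters('SalesforceConsumerSecret')]" },
  { "name": "timeInterval", "value": "[parameters('TimeInterval')]" }
]
```

Without `WEBSITE_CONTENTOVERVNET`, the app tries to mount its file share over the public endpoint, which the private-only storage account refuses.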

Elastic Premium plan

This is an important detail. The default connector uses a Consumption plan for the Function App, which does not support Virtual Network integration or Private Link. We upgraded to the Azure Functions Elastic Premium plan, which supports both. The cost difference is modest for a function that runs for a few minutes every hour, and it is necessary for the private networking to work.
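The plan resource itself is small. An EP1 SKU (the cheapest Elastic Premium tier) is enough for an hourly log-pull function; the name and worker count here are assumptions:

```json
{
  "type": "Microsoft.Web/serverfarms",
  "apiVersion": "2022-03-01",
  "name": "[parameters('planName')]",
  "location": "[resourceGroup().location]",
  "kind": "elastic",
  "sku": { "name": "EP1", "tier": "ElasticPremium" },
  "properties": { "maximumElasticWorkerCount": 3 }
}
```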

The storage account

The storage account serves two purposes: the Function App uses it for its own operational state (triggers, bindings, runtime metadata), and for the file content share that hosts the function code. In our template, this storage account is created with public access disabled from the start — no need to rely on Azure Policy to enforce it after the fact.
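A storage account that is born private, rather than locked down afterwards, looks something like this sketch (SKU and TLS settings are reasonable defaults, not requirements of the connector):

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2022-09-01",
  "name": "[parameters('storageAccountName')]",
  "location": "[resourceGroup().location]",
  "kind": "StorageV2",
  "sku": { "name": "Standard_LRS" },
  "properties": {
    "publicNetworkAccess": "Disabled",
    "allowBlobPublicAccess": false,
    "networkAcls": { "defaultAction": "Deny", "bypass": "None" },
    "minimumTlsVersion": "TLS1_2"
  }
}
```

Because `publicNetworkAccess` is `Disabled` at creation time, the deployment passes the client's Azure Policy rather than colliding with it.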

The virtual network

All the private networking runs through a dedicated virtual network with two subnets:

  • snet-func — delegated to the Function App for VNet integration. This is how the Function App’s outbound traffic routes through the virtual network instead of the public internet.
  • snet-pe — hosts the private endpoints. Private IP addresses are allocated from this subnet for each storage service.
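The two subnets have different requirements: `snet-func` must carry a `Microsoft.Web/serverFarms` delegation for VNet integration, and `snet-pe` must have private endpoint network policies disabled. A sketch, with an illustrative address space:

```json
{
  "type": "Microsoft.Network/virtualNetworks",
  "apiVersion": "2022-07-01",
  "name": "vnet-connector",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": { "addressPrefixes": [ "10.10.0.0/24" ] },
    "subnets": [
      {
        "name": "snet-func",
        "properties": {
          "addressPrefix": "10.10.0.0/26",
          "delegations": [
            { "name": "appsvc", "properties": { "serviceName": "Microsoft.Web/serverFarms" } }
          ]
        }
      },
      {
        "name": "snet-pe",
        "properties": {
          "addressPrefix": "10.10.0.64/26",
          "privateEndpointNetworkPolicies": "Disabled"
        }
      }
    ]
  }
}
```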

Private endpoints

Azure Private Endpoints assign private IP addresses to specific Azure resources, ensuring traffic stays within the virtual network. Our template creates four private endpoints for the storage account — one for each storage service:

  • Azure Blob storage
  • Azure File storage
  • Azure Queue storage
  • Azure Table storage

The Function App needs all four. Blob and File are obvious (code hosting and operational data), but Queue and Table are used internally by the Azure Functions runtime for trigger management and lease coordination. Missing any of them causes subtle failures that are difficult to diagnose.
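Each of the four endpoints follows the same pattern and differs only in the `groupIds` value (`blob`, `file`, `queue`, `table`). The blob endpoint, as an illustrative sketch:

```json
{
  "type": "Microsoft.Network/privateEndpoints",
  "apiVersion": "2022-07-01",
  "name": "pe-storage-blob",
  "location": "[resourceGroup().location]",
  "properties": {
    "subnet": {
      "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'vnet-connector', 'snet-pe')]"
    },
    "privateLinkServiceConnections": [
      {
        "name": "blob",
        "properties": {
          "privateLinkServiceId": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]",
          "groupIds": [ "blob" ]
        }
      }
    ]
  }
}
```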

Private DNS zones

When you connect to an Azure resource via a private endpoint, traffic goes to a private IP address inside the virtual network rather than the service's public endpoint. DNS resolution needs to be overridden so that the standard hostnames (e.g., yourstorageaccount.blob.core.windows.net) resolve to the private IP rather than the public one.

The template creates four private DNS zones, one per storage service:

  • privatelink.queue.core.windows.net
  • privatelink.blob.core.windows.net
  • privatelink.table.core.windows.net
  • privatelink.file.core.windows.net

Each zone is linked to the virtual network and contains an A record pointing to the corresponding private endpoint’s IP address. This is the piece that ties everything together — without correct DNS, the Function App would try to reach the public endpoint and fail.
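For one service (blob, as an example), the zone, the VNet link, and the DNS zone group look roughly like this. Using a `privateDnsZoneGroups` resource means Azure manages the A record for the endpoint automatically, rather than it being created by hand:

```json
[
  {
    "type": "Microsoft.Network/privateDnsZones",
    "apiVersion": "2020-06-01",
    "name": "privatelink.blob.core.windows.net",
    "location": "global"
  },
  {
    "type": "Microsoft.Network/privateDnsZones/virtualNetworkLinks",
    "apiVersion": "2020-06-01",
    "name": "privatelink.blob.core.windows.net/link-vnet-connector",
    "location": "global",
    "properties": {
      "registrationEnabled": false,
      "virtualNetwork": {
        "id": "[resourceId('Microsoft.Network/virtualNetworks', 'vnet-connector')]"
      }
    }
  },
  {
    "type": "Microsoft.Network/privateEndpoints/privateDnsZoneGroups",
    "apiVersion": "2022-07-01",
    "name": "pe-storage-blob/default",
    "properties": {
      "privateDnsZoneConfigs": [
        {
          "name": "blob",
          "properties": {
            "privateDnsZoneId": "[resourceId('Microsoft.Network/privateDnsZones', 'privatelink.blob.core.windows.net')]"
          }
        }
      ]
    }
  }
]
```

The same trio is repeated for the file, queue, and table zones.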

Deploying the custom template

The full ARM template is available as a GitHub Gist.

To deploy it:

  1. Copy the template JSON from the Gist
  2. Log into the Azure Portal with permissions to create resources in the target subscription
  3. Search for “Deploy a custom template”
  4. Select “Build your own template in the editor” and paste the JSON
  5. Fill in the required parameters:
    • Workspace ID and Workspace Key — from your Log Analytics workspace
    • Salesforce User — an account with API Enabled and Read Event Log Files permissions
    • Salesforce Password and Security Token — for authentication
    • Salesforce Token URI — use https://login.salesforce.com/services/oauth2/token for production, https://test.salesforce.com/services/oauth2/token for a sandbox
    • Salesforce Consumer Key and Consumer Secret — from a Connected App configured in Salesforce
    • Time Interval — we recommend hourly (60). The function pulls all events since its last run.
  6. Click “Review + Create”

The deployment takes several minutes. The Function App needs time to start up, pull its code from the file share, and initialise the Python runtime.
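If you prefer a repeatable deployment over the portal wizard, the same parameters can be captured in a standard ARM parameters file and deployed with `az deployment group create`. The parameter names below are illustrative and must match whatever the Gist template actually declares:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "WorkspaceID": { "value": "<log-analytics-workspace-id>" },
    "WorkspaceKey": { "value": "<log-analytics-primary-key>" },
    "SalesforceUser": { "value": "svc-sentinel@example.com" },
    "SalesforcePass": { "value": "<password>" },
    "SalesforceSecurityToken": { "value": "<security-token>" },
    "SalesforceTokenUri": { "value": "https://login.salesforce.com/services/oauth2/token" },
    "SalesforceConsumerKey": { "value": "<connected-app-consumer-key>" },
    "SalesforceConsumerSecret": { "value": "<connected-app-consumer-secret>" },
    "TimeInterval": { "value": "60" }
  }
}
```

In practice you would pull the secrets from Key Vault references rather than committing them to a parameters file.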

Confirming it works

With an hourly interval, the function fires once per hour and typically runs for one to five minutes. You can trigger it manually for testing:

  1. Navigate to the Function App in the Azure Portal
  2. Go to Functions > SalesforceSentinelConnector
  3. Select Code + Test
  4. Click Test/Run

After 15 to 20 minutes, check your Log Analytics workspace. The Salesforce data appears in a custom log table (one with the _CL suffix). Once you see records flowing, the connector is operational.

Analytics rules to enable

With Salesforce logs flowing into Sentinel, you can create analytics rules to detect suspicious activity. The scenarios we configured include:

  • Excessive login failures — a high number of failed login attempts within a short window, which may indicate credential stuffing or brute-force attacks against Salesforce accounts
  • Login activity from malicious IP addresses — correlating Salesforce sign-in events with threat intelligence feeds to flag logins from known-bad sources
  • Excessively large queries over API or Reporting interfaces — unusually large data extractions that could indicate data exfiltration or a compromised integration
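Analytics rules can themselves be deployed as ARM resources on the workspace, which keeps detections in source control alongside the connector. A heavily hedged sketch of a scheduled rule for the first scenario; the table and column names in the query are assumptions and must be adjusted to the schema the connector actually produces:

```json
{
  "type": "Microsoft.OperationalInsights/workspaces/providers/alertRules",
  "apiVersion": "2022-11-01",
  "name": "[concat(parameters('workspaceName'), '/Microsoft.SecurityInsights/', guid(resourceGroup().id, 'sf-login-failures'))]",
  "kind": "Scheduled",
  "properties": {
    "displayName": "Salesforce - excessive login failures",
    "enabled": true,
    "severity": "Medium",
    "query": "SalesforceServiceCloud_CL | where TimeGenerated > ago(1h) | where EVENT_TYPE_s == 'Login' and LOGIN_STATUS_s != 'LOGIN_NO_ERROR' | summarize Failures = count() by USER_NAME_s | where Failures > 20",
    "queryFrequency": "PT1H",
    "queryPeriod": "PT1H",
    "triggerOperator": "GreaterThan",
    "triggerThreshold": 0,
    "suppressionEnabled": false,
    "suppressionDuration": "PT1H"
  }
}
```

The threshold of 20 failures per hour is a starting point; tune it against the client's baseline before enabling automated response.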

These rules generate incidents in Sentinel that feed into the broader security operations workflow — triage, investigation, and response.

The broader picture

This was one piece of a larger Sentinel deployment for this client, which I cover in more detail in Securing an eCommerce Platform with Microsoft Sentinel. The Private Link approach we developed for the Salesforce connector became a reusable pattern for any Sentinel data connector that relies on Azure Storage in environments with strict network policies.

The key takeaway: Azure’s default connector templates are starting points, not finished deployments. In production environments with real security policies, you will almost certainly need to customise them. Understanding what each resource does and why it exists is what allows you to adapt the template rather than fighting your own security controls.
