Why Serverless Risk Demands Identity-Aware Security at Cloud Scale 

Siddhant Patil

Key Takeaways 

  • Serverless shifts security risk from infrastructure to identity, permissions, and configuration, where small design choices can have an outsized impact. 
  • Short-lived cloud credentials reduce persistence but remain powerful; when exposed, they enable authenticated access, escalation, and lateral movement. 
  • The most damaging serverless breaches stem from routine issues such as public invocation, excessive permissions, and weak input validation, not rare exploits. 
  • SSRF and RCE are especially dangerous in serverless environments because they directly expose runtime credentials tied to cloud identities. 
  • Risk compounds through interconnected services, shared roles, and reused dependencies, turning isolated weaknesses into systemic exposure. 
  • Effective serverless security depends on continuous visibility into identity scope, invocation paths, configuration drift, and audit signals. 
  • Configuration hygiene, least-privilege enforcement, and continuous monitoring are the most reliable ways to reduce blast radius and prevent cloud-wide compromise. 

The Current Picture 

Serverless adoption is accelerating as organizations prioritize speed, scalability, and operational efficiency. According to Data Bridge Market Research’s Global Serverless Security Market Report, the serverless security market reached USD 12.08 billion in 2024 and is projected to exceed USD 62.42 billion by 2032, underscoring the central role serverless has come to play in modern cloud architectures. Yet security maturity has not kept pace: the Cloud Security Alliance reports that more than 70 percent of organizations still lack dedicated controls for serverless environments. 

Eliminating infrastructure does not eliminate risk. It redistributes it. In serverless architectures, security shifts away from hosts and networks toward identity, permissions, triggers, and configuration discipline. Functions execute with short-lived credentials but often carry broad authority and interact with multiple cloud services in a single invocation. In this model, small configuration errors can quickly escalate into credential exposure, privilege escalation, and lateral movement across the cloud environment. 

To secure serverless effectively, organizations must understand where risk now accumulates and why traditional assumptions about isolation and containment no longer apply.


Qualys Insights

Understand why traditional security models fall short in dynamic cloud and serverless environments and what comprehensive scanning really requires.


Where Risk Accumulates in Serverless Environments 

Identity and Permissions Become the Control Plane 

In serverless environments, identity replaces infrastructure as the primary security boundary. Each function runs under a cloud identity that determines what it can access across services. When permissions are overly broad, reused, or implicitly trusted, a single misconfiguration can have far-reaching consequences that extend beyond the function itself. Public invocation and permissive triggers quietly expand the blast radius before exploitation begins. 

Temporary Credentials Still Carry Impact 

Serverless platforms rely on short-lived credentials to limit persistence, not authority. Once exposed, these credentials provide legitimate, authenticated access within the function’s permission scope. This makes credential leakage through misconfiguration or vulnerable code especially dangerous, as it enables escalation and lateral movement with minimal detection. 

Interconnected Services Multiply Exposure 

Serverless functions rarely operate in isolation. They trigger workflows, call APIs, and access shared services within a single execution path. As trust propagates across these interactions, small failures compound quickly. What appears localized often becomes environment-wide once service-to-service permissions are considered. 

Shared Dependencies Extend the Attack Surface 

Reuse accelerates development but also propagates risk. Shared packages, layers, and dependencies allow vulnerabilities in a single component to affect many functions at once. Without visibility into these artifacts, organizations may secure identities and configurations while leaving exploitable software embedded in the execution path. 

Compliance Breaks When Visibility Fades 

Ephemeral execution and dynamic permissions make it harder to maintain consistent logging, monitoring, and least-privilege enforcement. Gaps in visibility increase breach risk and undermine compliance with standards that depend on continuous monitoring and provable controls. 

How Serverless Authentication Works 

The major serverless platforms, AWS Lambda, Azure Functions, and Google Cloud Functions, each issue temporary credentials at runtime, but the way these identities are delivered and scoped differs in important ways. 

1. AWS Lambda

When AWS Lambda begins executing a function, the platform issues temporary IAM credentials (an access key ID, a secret access key, and a session token) that represent the function’s assigned IAM role. These credentials are injected as environment variables, allowing the function to securely access permitted AWS services only during its execution. They expire shortly after the invocation ends, reducing exposure and preventing long-standing credential persistence. 
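The environment-variable names above are the ones the Lambda runtime actually sets. A minimal sketch of reading them (the helper function and the placeholder values are ours, for illustration only):

```python
import os

def get_lambda_credentials(env=os.environ):
    """Return the temporary IAM credentials the Lambda runtime
    injects into the execution environment (illustrative helper)."""
    return {
        "access_key_id": env.get("AWS_ACCESS_KEY_ID"),
        "secret_access_key": env.get("AWS_SECRET_ACCESS_KEY"),
        "session_token": env.get("AWS_SESSION_TOKEN"),
    }

# In practice the AWS SDKs (boto3, etc.) read these variables automatically;
# application code should never need to handle them directly.
creds = get_lambda_credentials({
    "AWS_ACCESS_KEY_ID": "ASIAEXAMPLE",
    "AWS_SECRET_ACCESS_KEY": "example-secret",
    "AWS_SESSION_TOKEN": "example-session-token",
})
```

The fact that anything able to read the environment (an RCE payload, a leaky debug endpoint) can read these three variables is exactly why credential exposure is discussed at length later in this article.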

2. Azure Functions

Azure Functions use managed identities for authentication. The function contacts a local identity endpoint using the platform-provided environment variables IDENTITY_ENDPOINT and IDENTITY_HEADER. Azure’s identity system (Microsoft Entra ID) verifies the request and responds with a short-lived OAuth 2.0 token scoped to the requested resource. The function uses this token for secure access without requiring persistent secrets. 
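A sketch of the local token request an Azure Function builds, assuming the documented App Service/Functions managed-identity API version (2019-08-01); the endpoint and header values in the usage example are placeholders, not real identity data:

```python
import os
import urllib.parse

def build_msi_token_request(resource, env=os.environ):
    """Build (but do not send) the request to the local managed-identity
    endpoint. Returns (url, headers)."""
    endpoint = env["IDENTITY_ENDPOINT"]
    query = urllib.parse.urlencode({
        "resource": resource,           # e.g. https://storage.azure.com/
        "api-version": "2019-08-01",    # App Service / Functions MSI API version
    })
    # X-IDENTITY-HEADER proves the call originates inside the app; an SSRF
    # payload that cannot set custom headers cannot mint tokens this way.
    headers = {"X-IDENTITY-HEADER": env["IDENTITY_HEADER"]}
    return f"{endpoint}?{query}", headers

url, headers = build_msi_token_request(
    "https://storage.azure.com/",
    env={"IDENTITY_ENDPOINT": "http://localhost:8081/msi/token",
         "IDENTITY_HEADER": "example-header"},
)
```

In production code the Azure Identity SDKs perform this exchange for you; the sketch only makes the mechanics visible.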

3. Google Cloud Functions

Google Cloud Functions run as service accounts, which are non-human identities tied to your GCP project. When a function needs to access a GCP service, the Google Cloud client libraries automatically fetch a short-lived OAuth 2.0 access token from the instance metadata server at http://metadata.google.internal/.

The metadata server validates that the request originates from within GCP infrastructure and returns a token scoped to the service account’s IAM roles. The function uses this token to authenticate to GCP services (Cloud Storage, Pub/Sub, BigQuery, Firestore, etc.). Tokens are short-lived (typically 1 hour) and automatically refreshed on demand throughout the function’s execution.
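The token fetch the client libraries perform can be sketched with the standard library; the URL path and required Metadata-Flavor header below are GCP's documented metadata-server interface:

```python
import json
import urllib.request  # json is used in the in-GCP usage comment below

METADATA_TOKEN_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                      "instance/service-accounts/default/token")

def build_gcp_token_request():
    """Request for fetching the default service account's access token
    from the metadata server (normally done by the client libraries)."""
    req = urllib.request.Request(METADATA_TOKEN_URL)
    # The Metadata-Flavor header is mandatory; it blocks naive SSRF,
    # because an attacker-controlled redirect cannot add custom headers.
    req.add_header("Metadata-Flavor", "Google")
    return req

req = build_gcp_token_request()
# Inside GCP: token = json.load(urllib.request.urlopen(req))["access_token"]
```

Note that the required header is a deliberate SSRF mitigation; it is also why SSRF payloads that can control headers, or full RCE, remain dangerous despite it.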

The Credential Power Behind Serverless Functions 

IAM roles, managed identities, and service accounts are the fundamental cloud identities that authorize serverless functions to access cloud resources securely, forming the backbone of how permissions are enforced in serverless environments.  

From a security perspective, these identity mechanisms: 

  • Secure credential handling: Temporary, auto-rotating tokens eliminate the need for static secrets. 
  • Enforce least privilege: Each function receives only the access rights necessary for its specific task.  
  • Reduce the attack surface: Assigning separate identities to different functions helps contain the impact of a compromise.  
  • Aid monitoring and auditing: Identity-linked activity logs help detect anomalous or suspicious behavior.  

Why do attackers target temporary serverless credentials?

These credentials are a valuable target for attackers because they grant authenticated access to cloud resources under the identity of the serverless function. Once obtained, an attacker can operate with the same level of trust and permissions that the cloud provider grants to the function.  
 
This allows them to:  

  • Escalate privileges and perform unauthorized actions within the scope of the function’s permissions. 
  • Move laterally through the cloud environment by impersonating the function to access other services. 
  • Exfiltrate sensitive data, modify configurations, or disrupt operations without compromising the entire cloud account. 
  • Remain stealthy, as using legitimate credentials often bypasses many traditional security detections. 

Hidden Dangers: Common Vulnerabilities in Serverless Deployments 

1. Server-Side Request Forgery (SSRF) 

Server-Side Request Forgery (SSRF) is a high-impact vulnerability in serverless environments, particularly because serverless functions often run with broad cloud permissions and interact with internal services. The risk arises when functions accept user-supplied URLs or endpoints and make outbound requests without validating the destination. 

In a typical attack scenario, a serverless function retrieves data from a URL specified in an API request. If the URL is not validated or restricted, an attacker can redirect the function to internal cloud services, such as the Instance Metadata Server (IMDS), and extract temporary credentials or tokens. With these credentials, the attacker may gain access to sensitive cloud resources, escalate privileges, or move laterally across the environment. The exposure becomes especially critical when the function contains sensitive secrets in its environment variables or is publicly accessible without authentication. 
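The standard defense is to validate the destination before fetching. A minimal sketch (the allowlist host is hypothetical) that rejects the metadata-service addresses described above:

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_HOSTS = {"images.example.com"}  # hypothetical allowlist for this sketch

def is_safe_fetch_target(url):
    """Allow only http(s) URLs on an explicit host allowlist that resolve
    to public addresses; this blocks requests to IMDS (169.254.169.254)
    and other internal endpoints."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    if parsed.hostname not in ALLOWED_HOSTS:
        return False
    try:  # re-check the resolved address to guard against DNS rebinding
        addr = ipaddress.ip_address(socket.gethostbyname(parsed.hostname))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_link_local or addr.is_loopback)
```

An allowlist plus post-resolution IP checks is deliberately stricter than a blocklist of known-bad addresses, which attackers routinely bypass with redirects, alternate encodings, or DNS tricks.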

To reduce exposure to SSRF, organizations rely on targeted configuration controls and identity hardening practices such as:  

  • Detection of hard-coded secrets in serverless environments: Qualys TotalCloud™ continuously scans serverless function configurations and environment variables for embedded secrets such as API keys, tokens, or database credentials. If such secrets are exposed, an attacker exploiting SSRF could potentially access them. Flagging these issues early enables teams to migrate credentials into secure storage services, such as AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager.  
  • Identification of functions that allow anonymous or public invocation: Functions that can be invoked anonymously dramatically increase the attack surface. An unauthenticated attacker could send crafted requests without restriction, thereby increasing the likelihood of SSRF exploitation. Configuration checks can detect public or overly permissive invocation settings, allowing teams to restrict access to trusted users or services.   

By combining these controls with broader visibility into IAM permissions, runtime behavior, and event triggers, teams can reduce the likelihood and impact of SSRF attacks. Securing invocation paths and identity scope is essential to limiting blast radius.

Relevant Configuration Controls  

  • Ensure Cloud Audit Logging is configured properly across all services and all users from a project (GCP) (CID 52008) 
  • Ensure Lambda function does not allow anonymous invocation (AWS) (CID 105) 
  • Ensure Cloud function is not anonymously or publicly accessible (GCP) (CID 52056) 

2. Remote Code Execution (RCE) 

Remote Code Execution (RCE) represents one of the most critical vulnerabilities in serverless environments, allowing attackers to execute arbitrary code within a function’s runtime environment. RCE vulnerabilities arise when serverless functions process untrusted input, whether passed into shell commands, parsed into application logic, or deserialized without validation. If this input isn’t properly validated, sanitized, or restricted, attackers can craft requests with malicious payloads that the function executes as actual code or system commands. Common triggers include the use of dangerous functions such as eval() or exec() on user data, deserialization of untrusted objects, and insecure library usage. The severity is heightened because serverless functions run with broad cloud permissions tied to their IAM roles, meaning successful exploitation grants attackers access to all cloud resources the function is authorized to reach. 

Typical Attack Scenario 

  • A Lambda function processes user-submitted document formats and passes this input to a shell command for format conversion. An attacker sends a malicious API request with a crafted format parameter containing shell metacharacters, such as “pdf; env”. When the function fails to sufficiently sanitize this input before passing it to the shell, the attacker’s injected commands execute alongside the legitimate conversion. 
  • The attacker retrieves all environment variables, including the temporary AWS IAM credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN), automatically injected into the function. With these extracted credentials, the attacker authenticates as the Lambda function’s IAM role and gains access to authorized AWS services—potentially listing and modifying S3 buckets, accessing databases, or escalating privileges further depending on the function’s IAM permissions. 
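The scenario above boils down to one pattern: string-built shell commands versus argument lists. A sketch of both sides (the `convert` tool and the format allowlist are illustrative):

```python
import subprocess

ALLOWED_FORMATS = {"pdf", "png", "jpeg"}  # hypothetical allowlist

def convert_unsafe(path, fmt):
    # VULNERABLE: with shell=True, a value like 'pdf; env' is parsed by
    # the shell, so the injected 'env' runs and dumps AWS_* credentials.
    return subprocess.run(f"convert {path} out.{fmt}", shell=True)

def convert_safe(path, fmt):
    if fmt not in ALLOWED_FORMATS:
        raise ValueError(f"unsupported format: {fmt!r}")
    # Argument list, no shell: metacharacters are passed literally to the
    # program and never interpreted as commands.
    return subprocess.run(["convert", path, f"out.{fmt}"], check=True)
```

Passing an argument list instead of a shell string removes the interpreter from the path entirely, which is why it is preferred over trying to escape or quote untrusted input.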

Best Practices to Prevent RCE 

To reduce exposure to RCE vulnerabilities, organizations should implement targeted controls and hardening practices. 

  • Strict input validation and sanitization are essential; never pass user-supplied input directly to shell commands or code evaluation functions. Use parameterized APIs that separate code from data, and validate input against strict allowlists before processing. 
  • Enable function tracing, such as AWS X-Ray, to detect RCE attempts and exploitation. Tracing reveals suspicious activities, such as unexpected subprocess calls or unusual command execution patterns, enabling faster detection and forensic analysis. 
  • Remove hard-coded secrets from environment variables by migrating to managed secret services, such as AWS Secrets Manager or Azure Key Vault. Qualys TotalCloud continuously scans function configurations for embedded secrets, allowing teams to remediate before exploitation. 
  • Restrict function invocation access by disabling anonymous or public invocation. Configuration checks detect overly permissive settings, enabling teams to restrict access to trusted users or services only. 
  • Apply least-privilege IAM permissions to limit the blast radius. Ensure IAM roles grant only the minimum necessary permissions, removing overly broad actions or resource wildcards in favor of specific resources and actions. 

By combining these controls with visibility into code quality, dependency health, and runtime behavior, teams can significantly reduce both the likelihood and impact of RCE vulnerabilities in serverless environments.

Relevant Configuration Controls  

  • Ensure the Lambda function has tracing enabled (CID 97) 
  • Ensure Multiple Triggers are never configured in $LATEST Lambda function (CID 99) 
  • Ensure only one active access key is available for any single IAM user (CID 171) 
  • Ensure guest users are reviewed on a monthly basis (CID 50072) 
  • Ensure IAM users are not assigned the Service Account User role at the project level (GCP) (CID 52003) 

3. Insecure Configurations 

Insecure configurations occur when serverless functions or their underlying cloud resources are set up with weak security settings, exposing them to exploitation and unauthorized access. These misconfigurations typically stem from overly broad IAM roles, managed identities, or service accounts that grant permissions far beyond what functions actually need for legitimate operations. 

Additional configuration weaknesses include deploying functions that are publicly accessible without authentication, failing to encrypt sensitive environment variables or data handled by functions, and neglecting to enable logging, monitoring, or alerts for function activities and configuration changes.

The danger of insecure configurations is that they create multiple exploitation pathways that attackers can use to alter resources, read secrets, or deploy malicious workloads, all with minimal effort, since the security guardrails have been weakened or removed entirely. 

Typical Attack Scenario 

An organization deploys a Lambda function assigned an IAM role with AdministratorAccess permissions because developers want to simplify deployment and avoid dealing with granular IAM policy creation. The function’s primary purpose is to read data from a specific S3 bucket; however, its overly broad role permits it to access all AWS services, modify IAM policies, and perform nearly any action within the AWS account. 

An attacker discovers this publicly accessible Lambda function endpoint and sends a request containing malicious code. The function processes the request and, through an RCE or SSRF vulnerability (or even simple logic flaws), the attacker gains the ability to make AWS API calls using the function’s credentials. Because the IAM role has administrative permissions, the attacker can now list all S3 buckets, read sensitive data, create new IAM users with persistent access, deploy malicious Lambda functions, or modify security group configurations to expose databases. The lack of logging and monitoring means security teams remain unaware of the compromise for weeks, allowing the attacker extended time to exfiltrate data and establish persistence. 

Best Practices to Prevent Insecure Configurations 

To reduce exposure to insecure configurations, organizations should implement targeted controls and hardening practices. 

  • Apply least-privilege IAM permissions by assigning functions only the minimum permissions necessary for legitimate operation. Replace overly broad roles like AdministratorAccess with specific, scoped policies that grant access to exact resources (specific S3 bucket ARNs, specific DynamoDB table names) and exact actions needed. Qualys TotalCloud continuously identifies excessive permissions and recommends least-privilege IAM policies for remediation. 
  • Restrict function invocation access by ensuring functions are not publicly or anonymously accessible. Disable public invocation permissions and limit access to trusted users, services, or IP ranges. Configuration checks detect overly permissive settings, enabling teams to restrict invocation to authenticated principals only. 
  • Encrypt sensitive environment variables by migrating secrets from plaintext environment variables to managed secret services like AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager. These services provide encrypted storage, access logging, and automatic rotation capabilities. Qualys TotalCloud continuously scans function configurations for hard-coded secrets and alerts teams to migrate them to secure storage. 
  • Enable comprehensive logging and monitoring across all cloud services. Ensure CloudTrail is enabled in all AWS regions, Cloud Audit Logging is properly configured in GCP, and Application Insights is configured with appropriate retention periods in Azure. Additionally, enable function-level tracing and alerts for configuration changes. This logging infrastructure captures evidence of unauthorized access and malicious activities, enabling faster incident detection and response. 

By implementing these controls through Qualys TotalCloud, organizations can significantly reduce the attack surface created by insecure serverless configurations and limit the impact of successful exploitation. 

Relevant Configuration Controls 

  • Ensure log metric filters and alerts exist for audit configuration changes (GCP) (CID 52012) 
  • Ensure Activity Log Alert exists for Update Security Policy (Azure) (CID 50071) 
  • Ensure Lambda excessive permissions are removed (CID 107) 
  • Ensure that Lambda functions are not using an IAM role shared across multiple functions (CID 97) 
  • Ensure that the Default service account is never used for Cloud functions (GCP) (CID 52054) 

How Credential Exposure Leads to Cloud Security Breaches in Serverless Environments 

When serverless function credentials are exposed, whether through SSRF, RCE, or misconfiguration, attackers gain an entry point into your cloud environment. This access enables a dangerous progression of actions.

Let’s understand this with an example attack path scenario: 

An e-commerce company runs a product image-processing service using AWS Lambda functions. The function accepts user-uploaded image URLs, fetches them for processing, and stores results in S3. Due to multiple security misconfigurations, an attacker ultimately compromises the entire cloud environment. 

Step-by-Step Attack Path  

1. Initial Reconnaissance: The attacker discovers a publicly accessible Lambda function endpoint through internet scanning. They notice the function accepts URL parameters without authentication, indicating potential vulnerability to input-based attacks.


2. Exploitation Phase: SSRF Attack  
 
The attacker crafts a malicious request, supplying the AWS Instance Metadata Service (IMDS) URL http://169.254.169.254/latest/meta-data/iam/security-credentials/ as the image URL parameter.  

The vulnerable Lambda function fetches this internal endpoint, returning temporary IAM credentials (access key ID, secret access key, and session token) to the attacker.  


3. Credential Analysis and Permission Discovery  
 
Using the stolen credentials, the attacker queries IAM policies to understand the function’s permissions. They discover the Lambda function uses an overly permissive IAM role with s3:*, lambda:*, and iam:PassRole permissions far beyond what’s needed for simple image processing.  

TotalCloud Controls

  • Ensure Lambda excess permissions are removed: This control would have identified and flagged overly broad permissions, prompting remediation before they could be exploited. 
  • Ensure the Lambda function does not use an IAM role shared across multiple functions: If multiple functions shared this role, the blast radius would be even larger. This control ensures role isolation.

    For more information, please refer to the link:
    https://cdn2.qualys.com/docs/qualys_cloudview_policy_control_list.pdf

4. Privilege Escalation  

Exploiting the iam:PassRole permission, the attacker creates a new Lambda function with an administrative IAM role attached. They deploy malicious code that can now perform unrestricted actions across the AWS account.  

TotalCloud Controls

  • Ensure IAM policy changes are monitored: CloudWatch Events monitoring IAM changes would have triggered alerts when the new role was created or attached. 
  • Ensure only one active access key is available for any single IAM user: While not directly applicable here, this control prevents attackers from creating persistent access through additional IAM user keys.


5. Lateral Movement and Data Exfiltration  

With administrative access, the attacker:  

  • Lists all S3 buckets and discovers customer payment data and API keys stored unencrypted  
  • Exfiltrates sensitive data to external storage  
  • Creates backdoor IAM users for persistent access  
  • Modifies security group rules to expose internal databases 

TotalCloud Controls

  • Ensure CloudTrail is enabled in all regions: All API calls made by the attacker would be logged, providing a complete audit trail for forensic investigation and enabling detection of unusual access patterns. 
  • Ensure AWS Config is enabled in all regions: Configuration changes (such as security groups, new IAM users, and S3 bucket policies) would be tracked, helping security teams identify the full scope of compromise. 


6. Persistence and Further Compromise  

The attacker deploys cryptocurrency mining workloads across multiple Lambda functions and EC2 instances, generating high costs while maintaining access through multiple backdoors. 


How can Qualys TotalCloud Controls help in mitigating such risks?  

TotalCloud’s CSPM capabilities can: 

  • Automatically detect misconfigurations like excessive permissions or public function exposure 
  • Continuously monitor compliance with security policies across serverless deployments 
  • Provide risk-based prioritization for remediation using TruRisk™ scoring 

Conclusion 

Serverless architectures offer significant operational and development advantages, but they also shift where risk accumulates. Short-lived credentials, event-driven execution, and extensive service integrations create an environment where misconfigurations and overly broad identities can lead to serious compromise if left unchecked. 

Understanding how serverless functions authenticate, how privileges propagate across services, and how vulnerabilities such as SSRF or RCE can expose temporary credentials is essential for reducing risk. Strengthening identity controls, enforcing least privilege, monitoring configuration drift, and validating invocation pathways all play critical roles in preventing attackers from moving through the cloud environment.  

With the right combination of configuration hygiene, continuous monitoring, and identity-aware security practices, organizations can meaningfully limit the blast radius of serverless vulnerabilities and maintain a resilient cloud environment.


Secure serverless environments with continuous visibility across functions, permissions, configurations, and cloud services.


Frequently Asked Questions (FAQs)

What are the top serverless security risks?

The top serverless security risks include overly permissive IAM roles, misconfigured event triggers, insecure environment variables, excessive service-to-service permissions, and lack of visibility into function behavior. Because serverless functions often interact with multiple cloud services in a single execution, a single misconfiguration can expose credentials, enable privilege escalation, or allow lateral movement across the cloud environment.

Why is serverless security difficult to manage?

Serverless security is difficult because functions are ephemeral, event-driven, and tightly coupled to managed cloud services. Traditional security tools rely on persistent infrastructure and static assets, which limits their effectiveness in environments where code executes briefly and infrastructure is abstracted away.

How is serverless security different from container or VM security?

Serverless security focuses less on operating systems and network controls and more on identity, permissions, triggers, and configuration discipline. Unlike containers or virtual machines, serverless functions inherit permissions dynamically and execute within shared cloud-managed runtimes, making identity and service relationships the primary attack surface.

What are the most common serverless misconfigurations?

Common serverless misconfigurations include broad execution roles, unrestricted access to storage or databases, publicly exposed triggers, hard-coded secrets in environment variables, and insufficient logging. These issues often emerge as environments scale and functions are updated independently.

How can misconfigured serverless functions lead to lateral movement?

A compromised serverless function with excessive permissions can access multiple cloud services without needing network-level access. This allows attackers to move laterally across cloud services, access sensitive data, or escalate privileges entirely through API calls.

What does effective serverless security require?

Effective serverless security requires continuous visibility across functions, identities, configurations, and cloud services. Security teams must evaluate how permissions and service relationships work in practice, rather than relying on static architecture assumptions or periodic audits.

How should organizations secure serverless functions at scale?

To secure serverless functions at scale, organizations should adopt a unified cloud risk approach that continuously assesses configurations, permissions, and workload behavior across cloud environments. Risk-based prioritization is essential to focus remediation on exposures with real business impact.

How does serverless security fit into a broader cloud security strategy?

Serverless security should be integrated into a unified cloud security model alongside CSPM, IAM governance, and workload security. Treating serverless as a separate silo increases blind spots and slows risk reduction.

What tools help manage serverless security effectively?

Platforms that provide unified visibility across cloud services, identities, and workloads enable organizations to understand where serverless risk accumulates and how to reduce it. Qualys TotalCloud supports this approach by correlating risk signals across dynamic cloud environments.

Comments

  1. Excellent deep-dive, Siddhant!

    The e-commerce attack path scenario (Section: “How Credential Exposure Leads to Cloud Security Breaches”) brilliantly illustrates how seemingly isolated misconfigurations cascade into full environment compromise. The step-by-step progression from SSRF → credential theft → privilege escalation → lateral movement makes the abstract risk tangible.

    One observation: The article emphasizes that “short-lived credentials reduce persistence but remain powerful” – this paradox deserves even more prominence. Many teams treat ephemeral credentials as inherently safe, missing that temporal limitation ≠ scope limitation. A 15-minute token with Administrator Access can cause irreversible damage in seconds.

    Suggestion for practitioners: Consider adding a “Quick Win Checklist” section distilling the TotalCloud controls into immediate actions:

    1. Audit all Lambda functions for anonymous invocation (CID 105)
    2. Enable X-Ray tracing across all functions (CID 97)
    3. Scan for hard-coded secrets in environment variables
    4. Review IAM roles for excessive permissions (CID 107)

    This would help security teams translate the comprehensive analysis into actionable first steps while they build out full CSPM coverage.

    Great work connecting the technical vulnerabilities to business impact – the $62B market projection really underscores why this matters now.