Cloud misconfigurations remain the #1 cause of data breaches in cloud environments — responsible for more incidents than sophisticated exploits, zero-day vulnerabilities, or insider threats combined. According to our scanning data across thousands of cloud environments, 78% of critical findings are misconfigurations, not software vulnerabilities. This guide catalogs the most dangerous misconfigurations across AWS, Azure, and GCP, with specific CLI commands to detect and remediate each one.
Why Cloud Misconfigurations Are the #1 Breach Vector
The shared responsibility model creates a fundamental gap: cloud providers secure the infrastructure of the cloud, but you're responsible for securing what you put in the cloud. Most organizations understand this conceptually but fail in execution. The result is a predictable pattern of breaches caused not by clever attackers, but by simple configuration errors.
Consider the numbers:
- 82% of cloud breaches involve data stored in cloud services that was accidentally exposed through misconfiguration
- Average time to detect a cloud misconfiguration without automated monitoring: 197 days
- Average cost per breach involving cloud misconfiguration: $4.45 million (2025 data)
- Most common root cause: Overly permissive access controls — not missing patches or zero-day exploits
"Cloud breaches rarely involve picking locks. They involve walking through doors that were left wide open. The attacker doesn't need sophistication when the S3 bucket is public."
The problem is compounded by the pace of cloud adoption. Teams provision resources faster than security can review them. Infrastructure-as-code helps, but only if security scanning is integrated into the pipeline. Without automated detection, misconfigurations accumulate and become the largest source of risk in your environment.
AWS: Top Misconfigurations
1. Public S3 Buckets
Despite years of warnings and multiple AWS safeguards (Block Public Access, bucket policy warnings, access analyzer), public S3 buckets remain the single most common cloud misconfiguration. Organizations expose sensitive data through misconfigured bucket policies, ACLs, or improperly scoped presigned URLs.
# Audit all S3 buckets for public access
$ aws s3api list-buckets --query 'Buckets[].Name' --output text | tr '\t' '\n' | \
while read -r bucket; do
  if ! aws s3api get-public-access-block --bucket "$bucket" >/dev/null 2>&1; then
    echo "WARNING: $bucket — no public access block configured"
  fi
done
# Enable Block Public Access at the account level
$ aws s3control put-public-access-block \
--account-id $(aws sts get-caller-identity --query Account --output text) \
--public-access-block-configuration \
BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
2. Overly Permissive IAM Policies
We find IAM policies with Action: "*" and Resource: "*" in over 70% of AWS environments. These wildcard policies create blast radius amplification — a single compromised credential grants access to everything. The root cause is usually convenience during development that never gets tightened for production.
# Find all IAM policies with wildcard actions
$ aws iam list-policies --scope Local --query 'Policies[].Arn' --output text | tr '\t' '\n' | \
while read -r arn; do
  version=$(aws iam get-policy --policy-arn "$arn" --query 'Policy.DefaultVersionId' --output text)
  doc=$(aws iam get-policy-version --policy-arn "$arn" --version-id "$version" \
    --query 'PolicyVersion.Document' --output json)
  # Note: this catches Action as a bare "*"; wildcards inside Action lists
  # need a structured check (e.g. with jq)
  if echo "$doc" | grep -q '"Action": "\*"'; then
    echo "CRITICAL: $arn has wildcard Action"
  fi
done
# Use IAM Access Analyzer to identify unused permissions
$ aws accessanalyzer list-findings --analyzer-arn $ANALYZER_ARN \
--filter '{"status": {"eq": ["ACTIVE"]}}'
3. Security Groups Open to the World
Security groups with inbound rules allowing 0.0.0.0/0 on ports like SSH (22), RDP (3389), and database ports (3306, 5432, 27017) are found in virtually every AWS environment we scan. These rules are often created for debugging and never removed.
# Find security groups with 0.0.0.0/0 ingress on sensitive ports
$ aws ec2 describe-security-groups \
--filters "Name=ip-permission.cidr,Values=0.0.0.0/0" \
--query 'SecurityGroups[*].[GroupId,GroupName,IpPermissions[?contains(IpRanges[].CidrIp,`0.0.0.0/0`)]]' \
--output table
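Once an offending rule is identified, it can be revoked and re-added with a narrow source range. A sketch only — the group ID, port, and trusted CIDR below are placeholders to replace with your actual finding:

```shell
# Revoke a world-open SSH rule (substitute your group ID and port)
$ aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
# Re-add the rule scoped to a trusted range, e.g. your VPN CIDR
$ aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 203.0.113.0/24
```

Better still, remove direct SSH/RDP exposure entirely and use AWS Systems Manager Session Manager, which needs no inbound rules at all.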
4. IMDSv1 Still Enabled
Instance Metadata Service v1 (IMDSv1) is exploitable via SSRF attacks, allowing attackers to steal IAM role credentials from EC2 instances. IMDSv2 mitigates this with session tokens, but many instances still allow v1 fallback.
# Find instances still allowing IMDSv1
$ aws ec2 describe-instances \
--query 'Reservations[].Instances[?MetadataOptions.HttpTokens!=`required`].[InstanceId,Tags[?Key==`Name`].Value|[0]]' \
--output table
# Enforce IMDSv2 on an instance
$ aws ec2 modify-instance-metadata-options \
--instance-id i-1234567890abcdef0 \
--http-tokens required \
--http-endpoint enabled
5. Disabled CloudTrail Logging
CloudTrail is AWS's audit log for API activity. Disabled or misconfigured CloudTrail means you have no visibility into who is doing what in your account — making breach detection and forensics nearly impossible.
# Verify CloudTrail is enabled and logging
$ aws cloudtrail describe-trails --query 'trailList[*].[Name,IsMultiRegionTrail,S3BucketName]' --output table
$ aws cloudtrail get-trail-status --name default --query '[IsLogging,LatestDeliveryTime]'
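If no trail is logging, one can be created and started in two commands. A sketch, assuming a destination bucket that already carries the required CloudTrail bucket policy (the trail and bucket names are placeholders):

```shell
# Create a multi-region trail and start logging
$ aws cloudtrail create-trail \
  --name org-audit-trail \
  --s3-bucket-name my-cloudtrail-logs-bucket \
  --is-multi-region-trail
$ aws cloudtrail start-logging --name org-audit-trail
```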
Azure: Top Misconfigurations
1. Network Security Groups Allowing All Inbound
Azure NSGs with inbound rules allowing traffic from any source (a sourceAddressPrefix of * or 0.0.0.0/0) to sensitive ports like RDP (3389), SSH (22), or database ports are extremely common — found in approximately 60% of Azure environments we scan.
# List NSG rules allowing inbound from any source
$ az network nsg list --query '[].{Name:name, RG:resourceGroup}' -o table
$ az network nsg rule list --nsg-name MyNSG --resource-group MyRG \
--query "[?sourceAddressPrefix=='*' && direction=='Inbound'].[name,destinationPortRange,access]" -o table
# Use Azure Bastion instead of direct RDP/SSH
$ az network bastion create --name MyBastion \
--resource-group MyRG --vnet-name MyVNet \
--public-ip-address MyBastionIP
2. Public Blob Storage
Azure Storage accounts with public blob access enabled, shared access signatures (SAS) with excessive permissions or long expiry times, and storage keys that haven't been rotated create significant data exposure risk.
# Check for storage accounts with public blob access
$ az storage account list --query '[].{Name:name, AllowBlobPublicAccess:allowBlobPublicAccess}' -o table
# Disable public blob access
$ az storage account update --name mystorageaccount \
--resource-group MyRG --allow-blob-public-access false
3. Overprivileged Service Principals
Service principals with Contributor or Owner roles at the subscription level are common in Azure environments. These grant far more access than applications typically need, creating significant blast radius if credentials are compromised.
# List service principals with high-privilege role assignments
$ az role assignment list --query "[?roleDefinitionName=='Contributor' || roleDefinitionName=='Owner'].[principalName,roleDefinitionName,scope]" -o table
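Broad assignments can then be removed and replaced with narrowly scoped built-in roles. A sketch — the principal ID, role, and scope are placeholders, and the right replacement role depends on what the application actually does:

```shell
# Remove a subscription-level Contributor assignment
$ az role assignment delete \
  --assignee <sp-object-id> \
  --role Contributor \
  --scope /subscriptions/<subscription-id>
# Re-grant only what the app needs, scoped to a single resource group
$ az role assignment create \
  --assignee <sp-object-id> \
  --role "Storage Blob Data Reader" \
  --scope /subscriptions/<subscription-id>/resourceGroups/MyRG
```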
4. Missing Azure Defender Coverage
Many organizations enable Microsoft Defender for Cloud but only for a subset of resource types, leaving gaps in threat detection for databases, storage, containers, and Key Vault. Partial coverage creates blind spots that attackers specifically target.
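Coverage gaps can be enumerated from the CLI. A sketch using az security pricing, which reports the Defender plan tier per resource type — a tier of Free means that resource type has no Defender coverage:

```shell
# List Defender for Cloud plan coverage per resource type
$ az security pricing list --query 'value[].{Plan:name, Tier:pricingTier}' -o table
# Enable Defender for a specific resource type, e.g. Storage
$ az security pricing create --name StorageAccounts --tier Standard
```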
5. Unrestricted Key Vault Access
Azure Key Vaults without network restrictions, without soft-delete enabled, or with overly permissive access policies allow unauthorized access to secrets, certificates, and encryption keys.
# Check Key Vault network rules and access policies
$ az keyvault list --query '[].{Name:name, NetworkRules:properties.networkAcls.defaultAction}' -o table
# Enable purge protection (soft-delete is enabled by default on new vaults
# and can no longer be disabled)
$ az keyvault update --name MyKeyVault \
  --resource-group MyRG --enable-purge-protection true
GCP: Top Misconfigurations
1. Overly Permissive Firewall Rules
GCP firewall rules allowing ingress from 0.0.0.0/0 to all ports or sensitive services are the most frequent finding. The default VPC network's auto-created rules often allow more traffic than intended.
# List firewall rules allowing ingress from 0.0.0.0/0
$ gcloud compute firewall-rules list \
--filter="sourceRanges=0.0.0.0/0 AND direction=INGRESS" \
--format="table(name, network, allowed[].map().firewall_rule().list(), sourceRanges)"
# Delete overly permissive default rules
$ gcloud compute firewall-rules delete default-allow-ssh --quiet
$ gcloud compute firewall-rules delete default-allow-rdp --quiet
2. Service Account Key Mismanagement
Long-lived service account keys, service accounts with the Owner or Editor primitive role, and keys that haven't been rotated are endemic in GCP environments. Primitive roles grant far more permissions than needed.
# Find service accounts with primitive (Owner/Editor) roles
$ gcloud projects get-iam-policy $PROJECT_ID --format=json | \
jq '.bindings[] | select(.role | test("roles/(owner|editor)")) |
.members[] | select(test("serviceAccount:"))'
# List service account keys with creation dates; rotate any older than 90 days
$ gcloud iam service-accounts keys list \
  --iam-account=sa@project.iam.gserviceaccount.com \
  --format="table(name,validAfterTime,validBeforeTime)"
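After rotating to a fresh key (or, better, to workload identity federation), stale keys can be deleted, and key creation can be blocked outright with an organization policy constraint. A sketch — KEY_ID, the service account address, and ORG_ID are placeholders:

```shell
# Delete a stale key once nothing depends on it
$ gcloud iam service-accounts keys delete KEY_ID \
  --iam-account=sa@project.iam.gserviceaccount.com
# Block new key creation org-wide
$ gcloud resource-manager org-policies enable-enforce \
  constraints/iam.disableServiceAccountKeyCreation \
  --organization=ORG_ID
```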
3. Public Cloud Storage Buckets
GCP Cloud Storage buckets with allUsers or allAuthenticatedUsers bindings are a frequent finding. These buckets often contain application data, logs, backups, or configuration files with sensitive information.
# Check for publicly accessible buckets
$ gsutil iam get gs://my-bucket | grep -E "allUsers|allAuthenticatedUsers"
# Remove public access
$ gsutil iam ch -d allUsers gs://my-bucket
$ gsutil iam ch -d allAuthenticatedUsers gs://my-bucket
4. Disabled Audit Logging
GCP Cloud Audit Logs should be enabled for all services, including Data Access logs. Without comprehensive logging, incident detection and forensic investigation become impossible.
# Check audit log configuration
$ gcloud projects get-iam-policy $PROJECT_ID --format=json | jq '.auditConfigs'
# Enable data access logging for all services
# (set-iam-policy replaces the entire policy — start from the output of
#  `gcloud projects get-iam-policy` and add the auditConfigs to it)
$ gcloud projects set-iam-policy $PROJECT_ID policy.yaml
# where policy.yaml includes:
#   auditConfigs:
#   - service: allServices
#     auditLogConfigs:
#     - logType: ADMIN_READ
#     - logType: DATA_READ
#     - logType: DATA_WRITE
Cross-Cloud Misconfiguration Patterns
Despite the different implementations, the same fundamental misconfiguration patterns appear across all three major cloud providers. Understanding these patterns helps you build cloud-agnostic security controls.
| Pattern | AWS | Azure | GCP |
|---|---|---|---|
| Public storage | S3 bucket policies, ACLs | Blob public access | allUsers IAM binding |
| Open network access | Security groups 0.0.0.0/0 | NSG any-source rules | Firewall rules 0.0.0.0/0 |
| Over-privileged identity | IAM wildcard policies | Contributor/Owner SPs | Primitive roles on SAs |
| Missing encryption | Unencrypted EBS/RDS | Unencrypted managed disks | Unencrypted persistent disks |
| Insufficient logging | Disabled CloudTrail | Disabled activity logs | Disabled audit logs |
| Metadata exposure | IMDSv1 enabled | IMDS accessible | Metadata API accessible |
"If you fix just three things across any cloud provider — public storage, open network access, and over-privileged identities — you eliminate approximately 80% of the misconfiguration risk in a typical environment."
Automating Misconfiguration Detection
Manual cloud audits don't scale. A single AWS account can contain thousands of resources, each with dozens of configuration parameters. Automated detection is not optional — it's a prerequisite for cloud security.
Cloud-Native Tools
- AWS: AWS Config rules, Security Hub, IAM Access Analyzer, GuardDuty
- Azure: Microsoft Defender for Cloud, Azure Policy, Azure Advisor
- GCP: Security Command Center, Organization Policy Service, Cloud Asset Inventory
Infrastructure-as-Code Scanning
Catch misconfigurations before they're deployed by scanning Terraform, CloudFormation, and Bicep templates in CI/CD:
# Scan Terraform for security issues before deployment
$ checkov -d ./terraform/ --framework terraform --output cli
# Scan CloudFormation templates
$ cfn-lint template.yaml
$ cfn_nag_scan --input-path template.yaml
# Integrate into GitHub Actions
name: IaC Security Scan
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Checkov
        uses: bridgecrewio/checkov-action@v1
        with:
          directory: terraform/
          framework: terraform
          soft_fail: false
Policy-as-Code
Use Open Policy Agent (OPA) with Rego policies to define your security requirements as executable code. This ensures consistent enforcement across environments and providers:
# Example Rego policy: deny public S3 buckets
package aws.s3

deny[msg] {
  input.resource.aws_s3_bucket[name].acl == "public-read"
  msg := sprintf("S3 bucket '%s' has public-read ACL", [name])
}

deny[msg] {
  input.resource.aws_s3_bucket[name].acl == "public-read-write"
  msg := sprintf("S3 bucket '%s' has public-read-write ACL", [name])
}
Remediation Prioritization Framework
Not all misconfigurations carry equal risk. Prioritize remediation based on three factors: exposure (is it internet-facing?), data sensitivity (does it protect PII, financial data, or credentials?), and exploitability (how easy is it to exploit?).
| Priority | Characteristics | Examples | SLA |
|---|---|---|---|
| P0 — Critical | Internet-facing + sensitive data + trivially exploitable | Public S3 bucket with PII, open database port to internet | 24 hours |
| P1 — High | Internet-facing + exploitable OR internal + sensitive data | Wildcard IAM policy, disabled CloudTrail, IMDSv1 | 7 days |
| P2 — Medium | Internal + moderate impact OR internet-facing + low sensitivity | Unencrypted EBS volume, overly broad security group | 30 days |
| P3 — Low | Defense-in-depth improvements, best practice gaps | Missing tags, non-standard naming, informational findings | 90 days |
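The table's logic can be encoded so triage stays consistent across scanners and teams. A minimal bash sketch — the priority function and its yes/no arguments are illustrative, not part of any tool:

```shell
# Illustrative triage sketch: map the three factors (exposure, data
# sensitivity, exploitability) from the table to a priority tier.
priority() {
  local exposed=$1 sensitive=$2 exploitable=$3
  if [ "$exposed" = yes ] && [ "$sensitive" = yes ] && [ "$exploitable" = yes ]; then
    echo P0    # internet-facing + sensitive data + trivially exploitable
  elif { [ "$exposed" = yes ] && [ "$exploitable" = yes ]; } ||
       { [ "$exposed" = no ] && [ "$sensitive" = yes ]; }; then
    echo P1    # internet-facing + exploitable, or internal + sensitive data
  elif [ "$exposed" = yes ] || [ "$sensitive" = yes ]; then
    echo P2    # moderate impact or low sensitivity
  else
    echo P3    # defense-in-depth / best-practice gaps
  fi
}

priority yes yes yes   # public S3 bucket with PII → P0
priority yes no yes    # exposed and exploitable, low sensitivity → P1
priority no no no      # informational finding → P3
```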
Track your remediation progress over time. The two most important metrics for cloud security posture are:
- Mean Time to Remediate (MTTR) — How quickly are misconfigurations fixed after detection?
- Misconfiguration recurrence rate — Are the same issues being reintroduced? If so, you have a process problem, not a detection problem.
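MTTR falls out directly from exported findings. A minimal sketch with jq, assuming each finding carries detection and resolution timestamps — the detected_at and resolved_at field names and the sample data are illustrative, not a specific scanner's export format:

```shell
# Sketch: mean time to remediate (in days) from an exported findings file.
# Sample data for illustration only.
cat > findings.json <<'EOF'
[
  {"id": "f1", "detected_at": "2025-01-01T00:00:00Z", "resolved_at": "2025-01-05T00:00:00Z"},
  {"id": "f2", "detected_at": "2025-01-10T00:00:00Z", "resolved_at": "2025-01-12T00:00:00Z"}
]
EOF

# Mean of (resolved - detected) across resolved findings, in days
jq '[.[] | select(.resolved_at != null)
     | ((.resolved_at | fromdateiso8601) - (.detected_at | fromdateiso8601)) / 86400]
    | add / length' findings.json   # → 3
```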
For compliance frameworks like SOC 2 and HIPAA, cloud misconfiguration scanning evidence is increasingly required. Auditors expect to see continuous monitoring, not just quarterly snapshots.
Scanning Your Cloud Environment with Find The Breach
Find The Breach's cloud security scanning covers all three major providers, identifying misconfigurations mapped to CIS Benchmarks, SOC 2 criteria, and HIPAA safeguards. Our platform integrates with your cloud accounts via read-only API access — no agents to deploy.
# Scan your cloud environment via the Find The Breach API
$ curl -X POST https://api.findthebreach.com/v1/scans \
-H "Authorization: Bearer $FTB_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"target": "aws:123456789012",
"scan_type": "cloud_config",
"compliance_mapping": ["cis", "soc2"],
"notify": ["security@yourcompany.com"],
"tags": ["cloud-audit-weekly"]
}'
Audit your cloud security posture
Start with a free scan to identify misconfigurations across your AWS, Azure, or GCP environment. Get compliance-mapped results with remediation commands.
Start Free Cloud Scan