Agents made sense for physical data centers. They make zero sense when 70-80% of your resources are managed services.
We built a guide on API-driven discovery: https://www.cloudquery.io/blog/death-of-agent-based-discovery
Security model flips too:
Agents: elevated privileges on every host, distributed credentials, 10,000 binaries to patch
APIs: one read-only IAM role, centralized auditing via CloudTrail, revoke in seconds
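A rough sketch of the API-side model with boto3 (the account ID, role name, and external ID are placeholders, and SecurityAudit is just one reasonable AWS-managed read-only policy):

```python
# Sketch: one cross-account read-only role instead of thousands of agents.
# Account ID, role name, and external ID below are placeholders.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # inventory account (placeholder)
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "inventory-external-id"}},
    }],
}

# One role to create, one role to audit (CloudTrail logs every AssumeRole),
# one role to revoke when access should end.
iam.create_role(
    RoleName="inventory-readonly",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="inventory-readonly",
    PolicyArn="arn:aws:iam::aws:policy/SecurityAudit",  # AWS-managed, read-only
)

# Revocation is a single call, not a fleet-wide uninstall:
# iam.detach_role_policy(RoleName="inventory-readonly",
#                        PolicyArn="arn:aws:iam::aws:policy/SecurityAudit")
```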
Every cloud service has an API. EC2's DescribeInstances. S3's GetBucketPolicy. RDS's DescribeDBInstances.
APIs return 50-100+ config attributes per resource. Zero installation. Zero compute overhead. Just query and parse JSON.
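A minimal sketch of what that looks like with boto3 (one obvious client choice; the attributes printed are a small sample of what each call returns):

```python
# Sketch: API-driven discovery -- no agents, just calls and JSON.
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")
rds = boto3.client("rds")
s3 = boto3.client("s3")

# EC2 DescribeInstances: full instance configuration, paginated.
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["InstanceType"], instance.get("VpcId"))

# RDS DescribeDBInstances: managed databases with no OS to SSH into.
for db in rds.describe_db_instances()["DBInstances"]:
    print(db["DBInstanceIdentifier"], db["Engine"], db["StorageEncrypted"])

# S3 GetBucketPolicy: the bucket policy document, returned as JSON.
for bucket in s3.list_buckets()["Buckets"]:
    try:
        policy = s3.get_bucket_policy(Bucket=bucket["Name"])["Policy"]
        print(bucket["Name"], "policy length:", len(policy))
    except ClientError as err:
        if err.response["Error"]["Code"] != "NoSuchBucketPolicy":
            raise
        print(bucket["Name"], "has no bucket policy")
```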
The math is brutal at scale:
• 10,000 instances × $4/month = $40K/month in agent overhead
• 2-5% CPU constantly consumed
• 200-500MB memory per instance
• Agents miss short-lived resources that terminate before registration
Your Kubernetes pod lives 45 seconds. Your RDS database has no OS you can SSH into. 70-80% of AWS services are managed services with nowhere to install an agent.
Yet we're still trying to deploy agents everywhere.
Your Lambda function runs 200 milliseconds. Agent initialization takes 2-5 seconds.
The function finishes before the agent even starts. You literally cannot install an agent in serverless.
We spent 20 years installing agents on servers. Then AWS gave us APIs.
Why are we still installing agents like it's 2005? 🧵
Traditional CMDBs were built for servers lasting 3-5 years. That world doesn't exist anymore.
Read the full guide: https://www.cloudquery.io/blog/real-time-cloud-cmdb-ephemeral-infrastructure
We put together a guide on building CMDBs that actually work with ephemeral cloud services.
Covers sync strategies, API rate limits, and why the Infrastructure Lake architecture beats proprietary CMDB apps.
We've seen this work at 1,000+ AWS accounts with millions of records per sync.
Extract cloud data to PostgreSQL or BigQuery. Query with SQL. Stop pretending infrastructure lives forever.
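A rough sketch of what querying that synced data can look like (the table and column names below, like aws_ec2_instances and _cq_sync_time, are illustrative and depend on your sync's schema; the connection string is a placeholder):

```python
# Illustrative query against synced inventory data in PostgreSQL.
import psycopg2

conn = psycopg2.connect("dbname=asset_inventory user=inventory")  # placeholder DSN
with conn, conn.cursor() as cur:
    # Current inventory as of the most recent sync, straight from SQL.
    cur.execute(
        """
        SELECT region, instance_id, instance_type
        FROM aws_ec2_instances
        WHERE _cq_sync_time = (SELECT max(_cq_sync_time) FROM aws_ec2_instances)
        ORDER BY region, instance_id
        """
    )
    for region, instance_id, instance_type in cur.fetchall():
        print(region, instance_id, instance_type)
```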
The answer isn't "scan faster." API rate limits make that impossible.
You need tiered sync strategies:
• Critical (IAM, security groups): every 15-30 min
• Important (EC2, RDS): hourly
• Everything else: daily
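A minimal sketch of how those tiers might be expressed in a sync scheduler (the resource groupings and helper function are illustrative, not a specific product's config; tune intervals to your environment and rate limits):

```python
# Illustrative tiered sync configuration mirroring the tiers above.
from datetime import datetime, timedelta, timezone

SYNC_TIERS = {
    "critical": {
        "resources": ["iam", "ec2_security_groups", "s3_bucket_policies"],
        "interval": timedelta(minutes=15),
    },
    "important": {
        "resources": ["ec2_instances", "rds_instances", "lambda_functions"],
        "interval": timedelta(hours=1),
    },
    "everything_else": {
        "resources": ["*"],
        "interval": timedelta(days=1),
    },
}

def due_for_sync(tier: str, last_synced: datetime) -> bool:
    """True when a tier's interval has elapsed since its last successful sync."""
    return datetime.now(timezone.utc) - last_synced >= SYNC_TIERS[tier]["interval"]
```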
Here's what that looks like in practice:
• Compromised Lambdas mine crypto for 5 minutes and vanish
• Ephemeral GPU instances rack up $10K bills with zero trace
• Auditors ask for proof from dates between your scans
• Engineers debug "ghost" pods that never appeared
Traditional CMDBs with 24-hour discovery windows miss ephemeral resources entirely.
A resource that exists for 30 minutes? Little chance it shows up in your daily scan.
AWS spot instances terminate with 2-minute warnings. Lambda functions execute and vanish.
Your CMDB updates daily. Your containers live 3 minutes. Your Lambda functions live 300 milliseconds.
See the problem? 🧵
Organizations extracting maximum value understand they're implementing a business capability, not deploying a technical solution.
Full breakdown: https://www.cloudquery.io/blog/five-tips-maximum-value-cloud-asset-inventory
5/ Plan for continuous improvement and scale
Technology changes. Priorities shift. Cloud environments expand.
Your asset inventory should adapt to organizational change without major re-architecture.
4/ Provide actionable intelligence, not just data
When someone discovers an unencrypted database, they should remediate immediately, not just report it.
Connect your inventory to build pipelines, alerting systems, and remediation workflows.
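One hedged sketch of what that wiring can look like: find unencrypted RDS instances via the API and push each finding straight into an alerting pipeline (the webhook URL and remediation text are placeholders for whatever your team actually uses):

```python
# Sketch: turning an inventory finding into an action, not just a row in a report.
import json
import urllib.request
import boto3

rds = boto3.client("rds")
ALERT_WEBHOOK = "https://example.com/alerts"  # placeholder endpoint

for page in rds.get_paginator("describe_db_instances").paginate():
    for db in page["DBInstances"]:
        if not db["StorageEncrypted"]:
            finding = {
                "resource": db["DBInstanceArn"],
                "issue": "unencrypted storage",
                "action": "open remediation ticket; plan encrypted snapshot restore",
            }
            # Push the finding into the alerting / remediation pipeline immediately.
            req = urllib.request.Request(
                ALERT_WEBHOOK,
                data=json.dumps(finding).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)
```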
3/ Prioritize high-impact use cases first
Don't boil the ocean. Find your highest-value problem: upcoming audit, Q4 cost optimization, security gaps.
Solve it completely. Demonstrate clear ROI. Then expand.
2/ Engage stakeholders across teams
Your inventory isn't an IT project; it's a business capability.
Include FinOps, security, compliance, development, and operations as co-owners from day one. Not just users.
1/ Business outcomes over technical features
Don't build it because you can. Draw a direct line from every feature to revenue protection, cost savings, or risk reduction.
If you can't explain the business value in one sentence, don't build it.
If these questions take more than 30 seconds to answer, your cloud asset inventory needs work.
Here's what we learned from AWS PSA Keegan Marazzi about building asset inventories that actually get used:
You manage 4,782 cloud resources across 6 accounts. Can you tell me which S3 buckets are publicly accessible right now? Which IAM roles haven't been used in 90 days? 🧵
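A sketch of answering both questions straight from the AWS APIs with boto3. Treat it as illustrative, not exhaustive: it only checks bucket policy status (ACLs and public access block settings are skipped), and role last-use comes from IAM's RoleLastUsed field.

```python
# Sketch: two inventory questions answered directly from the APIs.
from datetime import datetime, timedelta, timezone
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
iam = boto3.client("iam")

# Which S3 buckets are publicly accessible right now?
for bucket in s3.list_buckets()["Buckets"]:
    try:
        is_public = s3.get_bucket_policy_status(Bucket=bucket["Name"])["PolicyStatus"]["IsPublic"]
    except ClientError as err:
        if err.response["Error"]["Code"] != "NoSuchBucketPolicy":
            raise
        is_public = False  # no bucket policy at all; ACL checks omitted in this sketch
    if is_public:
        print("PUBLIC:", bucket["Name"])

# Which IAM roles haven't been used in 90 days?
cutoff = datetime.now(timezone.utc) - timedelta(days=90)
for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        last_used = iam.get_role(RoleName=role["RoleName"])["Role"].get("RoleLastUsed", {})
        last_used_date = last_used.get("LastUsedDate")
        if last_used_date is None or last_used_date < cutoff:
            print("UNUSED 90+ days:", role["RoleName"])
```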
Traditional CMDBs solved a real problem in 2006. That world doesn't exist anymore.
Infrastructure is code. Resources are ephemeral. APIs provide real-time state.
Stop forcing cloud into 20-year-old models.
Full comparison: https://www.cloudquery.io/blog/cloud-cmdb-vs-traditional-cmdb-2026
Security incident example: "Find all public-facing servers with SSH open to 0.0.0.0/0"
Traditional CMDB: run a discovery scan (2 hrs), wait for reconciliation (30 min), export to Excel by hand. The data is already outdated.
Cloud CMDB: One SQL query, under a second.
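A sketch of that one query, run from Python against a synced inventory table (the table and column names, aws_ec2_security_groups and ip_permissions, are assumptions; adapt them to your schema):

```python
# Sketch: "SSH open to the world" as a single SQL query over synced inventory.
import psycopg2

SSH_OPEN_TO_WORLD = """
SELECT group_id, group_name
FROM aws_ec2_security_groups,
     jsonb_array_elements(ip_permissions) AS perm,
     jsonb_array_elements(perm -> 'IpRanges') AS ip_range
WHERE ip_range ->> 'CidrIp' = '0.0.0.0/0'
  AND (perm ->> 'FromPort')::int <= 22
  AND (perm ->> 'ToPort')::int >= 22
"""

conn = psycopg2.connect("dbname=asset_inventory user=inventory")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute(SSH_OPEN_TO_WORLD)
    for group_id, group_name in cur.fetchall():
        print(group_id, group_name)
```

One caveat: rules that allow all traffic (IpProtocol "-1") carry no FromPort/ToPort, so a production version would also check for that case.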
Data model gap: Traditional Server CI captures ~10 attributes (hostname, IP, OS).
AWS EC2 instance has 50+ attributes (instance type, VPC, security groups, IAM role, tags, EBS volumes, network interfaces).
Traditional CIs miss 80% of what matters in the cloud.
Cloud CMDB approach:
- Call cloud provider APIs directly
- Get current state in under a second
- Store native resource attributes in SQL
- Query on-demand with standard SQL
Implementation time: hours.
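A minimal sketch of that loop end to end, assuming PostgreSQL as the store (the table name and connection string are placeholders):

```python
# Sketch: the cloud-CMDB loop in miniature -- call the provider API, store the
# native resource document in SQL, query it on demand later.
import json
import boto3
import psycopg2

ec2 = boto3.client("ec2")
conn = psycopg2.connect("dbname=asset_inventory user=inventory")  # placeholder DSN

with conn, conn.cursor() as cur:
    cur.execute(
        """
        CREATE TABLE IF NOT EXISTS ec2_instances (
            instance_id text PRIMARY KEY,
            synced_at   timestamptz NOT NULL DEFAULT now(),
            raw         jsonb NOT NULL
        )
        """
    )
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                # Keep every native attribute instead of squeezing it into a CI template.
                cur.execute(
                    """
                    INSERT INTO ec2_instances (instance_id, raw)
                    VALUES (%s, %s::jsonb)
                    ON CONFLICT (instance_id)
                    DO UPDATE SET raw = EXCLUDED.raw, synced_at = now()
                    """,
                    (instance["InstanceId"], json.dumps(instance, default=str)),
                )
```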
Traditional CMDB workflow:
- Install agents on every server
- Schedule discovery scans (daily/hourly)
- Reconcile duplicates
- Force resources into CI templates
- Data is 12-24 hours stale
Implementation time: 2-3 months minimum.
Gartner reports 70-80% of traditional CMDB projects fail to deliver value.
The reason: agent-based discovery, scheduled scans, and ITIL Configuration Items designed for physical servers can't handle ephemeral cloud infrastructure.
Traditional CMDBs were built for servers with names like "web-prod-01" that run for years.
In 2026, that EC2 instance running your Lambda cold start lives for 45 seconds.
Traditional CMDB discovery would schedule a scan for tomorrow. By then, it's gone. 🧵
Challenge 3: Real-time security
Security asks for public S3 buckets at 9:30 AM. CMDB last scan ran at 3:00 AM. You schedule new discovery. Wait 2 hours. Export. Filter manually.
Results at 12:15 PM. But 200 new buckets created since 9:30 AM aren't in your report.