We’ve already secured the business strategy and the identity architecture. Now, we face the final issue, which is protecting the logic of your business in an age of AI, and doing it fast enough to not fall behind.
The speed with which generative AI entered our realm has brought risks that traditional firewalls and identity roles cannot easily stop. At the same time, the speed of modern software deployment has made manual security reviews rather obsolete.
In this article, we take a look at the technical reality of securing AI workloads on AWS and Google Cloud, and why the reactive security model of the last decade is no longer enough.
The fundamental challenge with AI security is that it breaks the rules of computing we have used for 40 years.
Traditional security is deterministic. A firewall rule is binary: it either allows traffic or blocks it. An IAM role either has permission, or it doesn’t.
Generative AI is probabilistic. You can ask an LLM the same question twice and get two different answers. This means you cannot write a simple rule to control exactly what the model says or does. You are no longer securing a database; you are securing a black box that creates its own logic.
This creates two specific attack vectors that didn’t exist before.
Modern attackers use complex techniques like “multi-shot” prompting, role-playing personas (e.g., the DAN or GODMODE exploits), and adversarial suffixes to bypass safety filters.
The model might reveal training data, generate harmful code, or — if connected to an agent — execute commands it shouldn’t. Because LLMs process instructions and data in the same stream, perfect separation is currently impossible.
Shadow AI is the most immediate risk for the enterprise. Well-meaning employees often paste sensitive code, meeting notes, or customer lists into public models to summarize them.
Once that data enters a public model’s context or training set, it effectively leaves your control. It could eventually reappear in an answer generated for a user outside your company, leaking intellectual property without a single hacker being involved. And there really is no unhackable model.
Since you cannot fully trust the model to behave, the architectural defense must shift to the platform layer. You need to use external controls that enforce security regardless of what the AI “wants” to do.
Google Cloud (Vertex AI) relies heavily on the network perimeter. Its strongest defense against data exfiltration is VPC Service Controls (VPC-SC), which effectively creates a logical air-gap around your AI resources.
Even if a user has valid identity credentials, VPC-SC prevents them from copying data out of that perimeter to an unauthorized location. To address prompt injection, Google uses Model Armor, sitting in front of the LLM to sanitize inputs and scan outputs for secrets before the user ever sees them.
AWS (Bedrock) focuses on policy enforcement via Bedrock Guardrails. These guardrails act as interceptor logic between the user and the inference model. Crucially, they are model-agnostic, meaning you can define a policy (like “Block all PII”) and apply it to Claude, Llama, and Titan equally.
For network security, AWS uses PrivateLink to make sure that traffic between applications and the AI service flows entirely over the AWS backbone, never touching the public internet.
Beyond AI, the sheer velocity of modern infrastructure is a security risk. If your DevOps team is deploying code 50 times a day, a manual security review simply won’t do. If you force a review, developers will eventually find a way around you.
The only technical solution is to Shift Left, i.e., automating security checks into the build pipeline itself.
Before code even leaves a developer’s machine, it should be scanned.
Most cloud infrastructure is now defined in code (Terraform, CloudFormation). This allows us to audit the infrastructure before it exists.
Tools like Checkov scan these template files against security policies. If a developer tries to create an S3 bucket with “Public Access: True,” the tool flags it and fails the build. The insecure resource is never created, preventing the misconfiguration from ever reaching the cloud.Once deployed, the environment must be watched for drift (changes made manually outside the pipeline).
Tools like Prowler act as an always-on auditor. They scan your live cloud environment against thousands of checks (CIS Benchmarks, GDPR, DORA). If a firewall rule is changed at 3 AM to open a port, Prowler detects the anomaly and alerts the team instantly.Finally, no matter how good your architecture is, you will likely be breached. The question is how fast you find out.
For the last decade, most security teams were reactive. They waited for a SIEM alert to pop up, validated it, and then responded. The failure of this model is clear: today, the average “Mean Time to Identify” (MTTI) of a breach is 241 days!
Modern security requires a proactive approach, which is the role of the modern SOC (Security Operations Center). Instead of waiting for alerts, analysts assume a breach has already happened.
They actively hunt through logs for subtle signs that automated tools miss (like irregular API calls from a service account or unusual data access patterns). This hypothesis-driven approach is the only way to catch sophisticated attackers who know how to evade standard alarms.
Trying to build all of the necessary capabilities in-house is expensive and difficult. This is why the shared fate model is the future.
At Revolgy, we don’t just advise on security; we operationalize it. We provide certified experts to design the architecture, and the 24/7 Managed SOC to watch it. We share the risk, so you can focus on building the product.
Ready to share the load? You don’t have to build a 24/7 SOC or become an AI security expert overnight. Let Revolgy handle the heavy lifting. Schedule a free consultation with our experts.