Cybersecurity · Cloud · Risk Management · CISO · Architecture

Cloud Security for Boards That Still Need Engineering Detail

Pelican Tech 6 min read

Boards have learned to ask about cloud security. The questions are usually some variation of "are we secure in the cloud?" and "are we as good as our peers?" Both questions are nearly impossible to answer well, and most CISOs end up giving a hedged response that satisfies no one. The board wants assurance; the CISO knows assurance is the wrong frame. What both sides actually need is a translation layer.

This piece is that translation layer. It maps the questions a board actually asks about cloud onto the engineering decisions that determine the answer, so security leaders can have a real conversation rather than a hedged one. It is shaped by the cloud security programmes Pelican Tech has built over the last three years for organisations that grew through acquisition, regulated entry into the EU, or a forced migration off legacy data centres.

The board question that hides three different engineering questions

When a board asks "are we secure in the cloud?" they usually mean one of three things, and the right answer depends on which one:

  1. Are we losing data we don't know about? This is the data-exposure question. The technical answer is whether you can produce a current map of where your sensitive data lives, who has access to it, and what the egress paths look like. If you cannot draw that map in under an hour, the answer to the question is "we don't know," and that is the only honest response.

  2. Could a single mistake take us down? This is the blast-radius question. The technical answer is whether your highest-impact systems share fate with anything else: same account, same network, same identity provider, same key vault, same CI/CD pipeline. The probability of a mistake is not the issue; the question is whether one mistake produces a recoverable incident or an existential one.

  3. Would we know in time if something started? This is the detection question. The technical answer is whether your detection surface includes the things that actually matter — workload behaviour, identity activity, configuration drift — and whether your response timing is measured rather than asserted.

Conflating these three is how cloud security programmes end up over-investing in posture management while under-investing in detection. They look like they're addressing risk because they're producing a lot of findings, but producing findings is not the same as answering the board's question.

Why cloud security programmes routinely under-deliver

Most of the cloud security programmes we audit have the same shape: a CSPM (cloud security posture management) tool, a cloud-native SIEM stream, an IAM review cadence, and a list of compliance controls mapped to the framework du jour. This is not wrong. It is just incomplete in predictable ways.

Posture findings drown out signal. A typical CSPM scan against a mid-size cloud estate produces 8,000 to 40,000 findings on first run. The remediation funnel is rate-limited by the engineering teams who own the resources, and 60–80% of findings are accepted or deferred indefinitely. The programme spends most of its energy on a triage process that never closes the gap. Worse, the team grows numb: a real misconfiguration in production gets the same colour code as 2,400 informational findings about default tags, and the on-call is statistically going to ignore both.
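One way to keep a real production misconfiguration from drowning in informational noise is to score findings by severity weighted against asset tier, so the queue is ordered by potential loss rather than by raw count. A minimal sketch, with a hypothetical finding model (real CSPM exports differ per vendor):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str      # "critical" | "high" | "medium" | "low" | "info"
    asset_tier: int    # 0 = crown jewel ... 3 = sandbox
    resource: str

SEVERITY_WEIGHT = {"critical": 100, "high": 50, "medium": 10, "low": 2, "info": 0}

def triage(findings):
    """Sort so one real production misconfiguration outranks any volume of
    informational noise: severity weight divided by (asset tier + 1)."""
    def score(f):
        return SEVERITY_WEIGHT[f.severity] / (f.asset_tier + 1)
    return sorted(findings, key=score, reverse=True)

# 2,400 informational sandbox findings plus one real production issue
findings = [Finding("info", 3, "sandbox/default-tags") for _ in range(2400)]
findings.append(Finding("high", 0, "prod/payments-bucket"))

queue = triage(findings)
print(queue[0].resource)  # the one production finding surfaces first
```

The scoring function is deliberately crude; the point is that any explicit, tier-aware ordering beats a flat severity colour code.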

Identity is treated as a hygiene item rather than a control plane. In a mature cloud, identity is the perimeter. Lateral movement after compromise depends almost entirely on what the compromised identity (human or workload) was permitted to do. Most programmes have a quarterly identity review, a list of broken-glass accounts, and SSO. Few have a programmatic answer to: which identities can read production data right now, which can write to a critical bucket, and which can assume a role into a different account?

Workload telemetry is a missing control. Configuration is what an attacker exploits to get in. Workload behaviour is what tells you they are inside. If your detection surface is built only from configuration scans and audit logs, you are watching the doors but not the rooms. Every mature cloud security programme we run eventually adds runtime workload protection (eBPF, agentless instance scanning, or both), and the ones that don't usually find out the hard way after an incident.

The four investments that move the curve

If you have to pick four engineering investments that demonstrably reduce loss expectancy, this is our short list. They are the ones we recommend to boards who ask, "what would actually move the needle this year?"

1. A real data map

Not a CMDB extension, not a CSPM data finding count. A live map of: which datasets exist, what regulatory class they fall into, which workloads read or write them, what access pathways exist (IAM, network, data-plane API), and what the egress logs look like for each. Build this with whatever tool fits your stack (DSPM products help, but a curated graph from terraform state plus IAM Access Analyzer plus VPC flow logs gets 70% of the value). Without it, every conversation about data risk is theoretical.
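The test of a data map is whether it answers access questions programmatically. A minimal sketch, with hypothetical dataset and role names; in practice the edges come from terraform state, IAM Access Analyzer output, and VPC flow logs rather than being hand-written:

```python
# Hypothetical data map: dataset -> regulatory class, access edges, egress paths.
DATA_MAP = {
    "customers_db": {
        "class": "PII/GDPR",
        "readers": {"role/app-backend", "role/analytics"},
        "writers": {"role/app-backend"},
        "egress": ["vpc-endpoint/s3", "nat-gw/eu-west-1"],
    },
    "billing_events": {
        "class": "financial",
        "readers": {"role/app-backend"},
        "writers": {"role/billing-worker"},
        "egress": ["vpc-endpoint/s3"],
    },
}

def who_can_read(dataset):
    """Which principals can read this dataset right now?"""
    return sorted(DATA_MAP[dataset]["readers"])

def datasets_readable_by(principal):
    """Which datasets does this principal reach? The blast-radius view."""
    return sorted(d for d, meta in DATA_MAP.items() if principal in meta["readers"])

print(who_can_read("customers_db"))
print(datasets_readable_by("role/app-backend"))
```

If these two queries cannot be answered against live infrastructure state, the map is documentation, not a control.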

2. Hard tier-isolation for crown-jewel workloads

A small number of systems represent most of the loss potential if they go down or leak. Identify the top 5–10 by revenue exposure, regulatory exposure, or both. Then verify these systems do not share fate with anything outside their tier: separate cloud accounts, separate key material, separate identity boundaries, separate CI/CD release paths. This is the single highest-leverage engineering investment in cloud security and the one most consistently skipped because it is uncomfortable to refactor.
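Shared fate is checkable mechanically: given an inventory of crown-jewel systems and the dimensions they must not share, a pairwise scan surfaces every violation. A sketch with a hypothetical inventory (the fields mirror the fate-sharing dimensions above):

```python
# Hypothetical tier-0 inventory; in practice this is generated from IaC state.
SYSTEMS = {
    "payments":  {"account": "111111", "kms_key": "key-pay", "pipeline": "cd-pay"},
    "ledger":    {"account": "222222", "kms_key": "key-led", "pipeline": "cd-led"},
    "reporting": {"account": "111111", "kms_key": "key-rep", "pipeline": "cd-rep"},
}

def shared_fate(systems):
    """Return (system_a, system_b, dimension) for every pair of tier-0
    systems that share an account, key, or release pipeline."""
    violations = []
    names = sorted(systems)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            for dim in ("account", "kms_key", "pipeline"):
                if systems[a][dim] == systems[b][dim]:
                    violations.append((a, b, dim))
    return violations

print(shared_fate(SYSTEMS))  # [('payments', 'reporting', 'account')]
```

Running this as a CI gate turns tier isolation from a design intention into an enforced invariant.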

3. Identity attack-path graph, refreshed weekly

The question "what can the attacker do if they compromise X?" is answerable as a graph traversal. Tools exist (CIEM products, BloodHound-for-cloud equivalents, or open-source pacu/cloudgoat-style tooling) that compute the reachability set from a starting principal across IAM, role-assumption chains, and resource policies. Run it weekly. The output is the prioritised list of identity changes that would shrink your blast radius the most. Spend on those, not on the bottom 80% of CSPM findings.
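The traversal itself is ordinary breadth-first search over role-assumption edges. A minimal sketch with hypothetical principals; a real graph would be generated from IAM policies, trust policies, and resource policies rather than hand-written:

```python
from collections import deque

# Hypothetical role-assumption edges: who can assume what.
CAN_ASSUME = {
    "user/dev-alice":    {"role/app-deploy"},
    "role/app-deploy":   {"role/prod-readonly"},
    "role/prod-readonly": set(),
    "role/break-glass":  {"role/org-admin"},
}

def reachable(start):
    """BFS: every principal reachable from `start` via role assumption."""
    seen, queue = {start}, deque([start])
    while queue:
        principal = queue.popleft()
        for nxt in CAN_ASSUME.get(principal, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

# A compromised dev credential reaches prod-readonly via app-deploy.
print(sorted(reachable("user/dev-alice")))
```

The weekly output worth acting on is the inverse view: which single edge removals shrink the most reachability sets.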

4. Measured detection and response timing

Pick three plausible cloud incident archetypes (an exposed credential, a misconfigured public bucket, a malicious workload). Run unannounced drills against each, with a clock. Measure: time-to-detection, time-to-triage, time-to-containment. Repeat quarterly. The numbers themselves are less important than the trend; if the trend is flat or worsening, your detection investment is not landing where you think it is.
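The arithmetic behind the drill is trivial, which is the point: each run produces three comparable numbers. A sketch with a hypothetical drill log of ISO timestamps recorded by the drill coordinator:

```python
from datetime import datetime

# Hypothetical timestamps from one unannounced drill run.
drill = {
    "injected":  "2024-03-04T09:00:00",
    "detected":  "2024-03-04T09:14:00",
    "triaged":   "2024-03-04T09:41:00",
    "contained": "2024-03-04T11:05:00",
}

def minutes_between(start, end):
    fmt = "%Y-%m-%dT%H:%M:%S"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 60

metrics = {
    "time_to_detect":  minutes_between(drill["injected"], drill["detected"]),
    "time_to_triage":  minutes_between(drill["injected"], drill["triaged"]),
    "time_to_contain": minutes_between(drill["injected"], drill["contained"]),
}
print(metrics)
# {'time_to_detect': 14.0, 'time_to_triage': 41.0, 'time_to_contain': 125.0}
```

Kept per archetype per quarter, these three numbers are the trend line the board report should contain.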

Together, these four typically displace 60% of the work a standard cloud security programme is doing today. That is the point.

What boards should actually be asking

If you are a board member reading this, here is the small set of questions that surfaces the right information. None of them can be answered without engineering rigour, and that rigour is what you are trying to verify exists.

  • Where does our most sensitive data live, and can we produce a map in an hour? The answer or its absence tells you whether the programme is operationally real.
  • What is the blast radius of our top three workloads if they are compromised? The answer should be specific, not "limited by design."
  • What is our measured time-to-detect and time-to-contain for a credential leak right now? A measured number, not an SLA target.
  • What three engineering decisions are we making this year specifically to reduce cloud risk? If the answers are all about deploying tools, the programme is in the buying phase, not the operating phase.

A CISO who can answer these confidently has a programme. A CISO who deflects to compliance frameworks does not.

Where this connects to our practice

Pelican Tech's Cloud Security practice builds the four investments above as a delivered programme, not as a slide deck. We start with the data map, run an attack-path graph, and produce the measured detection-timing baseline that board reporting eventually needs. We work alongside our risk management team when the question is portfolio risk across cloud and on-prem, and with our identity specialists when identity is the binding constraint, which it usually is.

If you are a CISO heading into a board meeting in the next quarter and the cloud security narrative does not yet have measured numbers in it, that is the conversation to have with us before, not after.