Zero trust aims to replace implicit trust with explicit, continuously adaptive trust across users, devices, networks, applications, and data.

Steve Riley, Field CTO at Netskope

October 11, 2021


For a concept that represents absence, zero trust is absolutely everywhere. Companies embarking on zero-trust projects often encounter daunting challenges and lose sight of the outcomes a zero-trust approach is meant to achieve. Effective zero-trust projects aim to replace implicit trust with explicit, continuously adaptive trust across users, devices, networks, applications, and data, increasing confidence across the business.

The primary goal of a zero-trust approach is to shift from "trust, but verify" to "verify, then trust": no entity receives implicit trust, and context should be continuously evaluated. A secondary goal of zero trust is to assume that the environment can be breached at any time, and to design backward from there. This approach reduces risk and increases business agility by eliminating implicit trust and by continuously assessing user and device confidence based on identity, adaptive access, and comprehensive analytics.

The journey to zero trust might not be exactly the same for every company, but zero-trust adoption can generally be broken down into five key phases.

Phase 1: Don't Allow Anonymous Access to Anything
Once you classify user personas and levels of access within your organization, inventory all applications, and identify all of your company's data assets, you can start shoring up identity and access management (including roles and role membership), private application discovery, and a list of approved software-as-a-service (SaaS) applications and website categories. Reduce the opportunities for lateral movement and prevent applications from being fingerprinted, port scanned, or probed for vulnerabilities. Require single sign-on (SSO) with multifactor authentication (MFA).

Specific tasks for this phase include defining the source of truth for identity and any other identity sources it might federate with, as well as establishing when strong authentication is required, then controlling which users should have access to which apps and services. This phase also requires organizations to construct and maintain a database that maps users (employees and third parties) to applications. They also must rationalize application access by removing stale entitlements (of employees and third parties) that are no longer required because of role changes, departures, contract terminations, etc. And they must remove direct connectivity by steering all access through a policy enforcement point.
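The user-to-application mapping and stale-entitlement cleanup described above can be sketched as a small data model. Everything here is illustrative, not tied to any particular IAM product: the `Entitlement` record, the 90-day idle threshold, and the sample users are all hypothetical.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record mapping one user to one application entitlement.
@dataclass
class Entitlement:
    user: str
    app: str
    role: str        # role under which the grant was issued
    last_used: date

def stale(ent: Entitlement, active_roles: dict, today: date,
          max_idle_days: int = 90) -> bool:
    """An entitlement is stale if the user has departed, the user's
    current role no longer matches the grant, or the grant has gone
    unused longer than the idle threshold."""
    role = active_roles.get(ent.user)  # None means the user has departed
    if role is None or role != ent.role:
        return True
    return (today - ent.last_used).days > max_idle_days

# Usage: prune a departed user's grant and an idle grant.
active_roles = {"alice": "finance"}  # "bob" has left the company
grants = [
    Entitlement("alice", "payroll", "finance", date(2021, 9, 30)),
    Entitlement("alice", "crm", "finance", date(2021, 1, 15)),
    Entitlement("bob", "payroll", "finance", date(2021, 9, 30)),
]
kept = [g for g in grants if not stale(g, active_roles, today=date(2021, 10, 11))]
# Only alice's recently used payroll entitlement survives.
```

A real deployment would feed this from the HR system and SSO logs, but the shape of the decision, identity plus recency, is the same.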

Phase 2: Maintain the Explicit Trust Model
Now that you have a better understanding of your applications and identity infrastructure, you can move into access control that is adaptive. Evaluate signals from applications, users, and data, and implement adaptive policies that invoke step-up authentication or raise an alert for the user.

Specific tasks for this phase require organizations to determine how to identify whether a device is managed internally, and to add context to access policies (block, read-only, or allow specific activities depending on various conditions). Organizations will also increase use of strong authentication when risk is high (e.g., deleting content during remote access to private apps) and decrease its use when risk is low (e.g., managed devices accessing local applications read-only). They will also evaluate user risk and coach classes of users toward specific application categories, while continuously adjusting policies to reflect changing business requirements. They should also establish a trust baseline for authorization within app activities.
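One way to picture an adaptive policy is as a pure function of context. The action names, risk thresholds, and activity labels below are hypothetical placeholders; a real policy engine would draw these signals from identity, device posture, and analytics feeds.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    READ_ONLY = "read-only"
    STEP_UP = "step-up-auth"   # invoke stronger authentication
    BLOCK = "block"

def access_decision(managed_device: bool, user_risk: float,
                    activity: str) -> Action:
    """Toy adaptive policy: device posture, a user risk score in [0, 1],
    and the requested activity select an enforcement action.
    Thresholds are illustrative, not prescriptive."""
    if user_risk >= 0.9:
        return Action.BLOCK
    if not managed_device:
        # Unmanaged devices are read-only; destructive or exfiltration-
        # prone activities trigger step-up authentication.
        return Action.STEP_UP if activity in {"delete", "download"} else Action.READ_ONLY
    if user_risk >= 0.5 and activity in {"delete", "download"}:
        return Action.STEP_UP
    return Action.ALLOW
```

For example, a low-risk user reading from a managed device is allowed through, while the same user downloading from an unmanaged device is asked to re-authenticate.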

Phase 3: Isolate to Contain the Blast Radius
In keeping with the theme of removing implicit trust, direct access to risky Web resources should be minimized, especially as users simultaneously interact with managed applications. On-demand isolation — that is, isolation that automatically inserts itself during conditions of high risk — constrains the blast radius of compromised users and of dangerous or risky websites.

This phase calls on organizations to automatically insert remote browser isolation for access to risky websites or from unmanaged devices, and evaluate remote browser isolation as an alternative to CASB reverse proxy for SaaS applications that behave incorrectly when URLs are rewritten. Organizations should also monitor real-time threat and user dashboards for command-and-control attempts and anomaly detection.
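A minimal sketch of on-demand isolation routing, assuming a simple per-request decision based on website category and device posture (the category names are made up for illustration):

```python
# Hypothetical on-demand isolation policy: route risky categories or
# unmanaged-device sessions through remote browser isolation (RBI).
RISKY_CATEGORIES = {"newly-registered", "uncategorized", "file-sharing"}

def route(category: str, managed_device: bool) -> str:
    """Return how to handle a Web request: block outright, render it
    remotely via RBI so only pixels reach the endpoint, or go direct."""
    if category == "malware":
        return "block"
    if category in RISKY_CATEGORIES or not managed_device:
        return "isolate"
    return "direct"
```

The point of the sketch is that isolation inserts itself only under risky conditions, which is what keeps the blast radius contained without degrading everyday browsing.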

Phase 4: Implement Continuous Data Protection
Next, we must gain visibility into where sensitive data is stored and where it spreads. Monitor and control movement of sensitive information through approved and unapproved applications and websites.

Organizations must define overall differentiation for data access from managed and unmanaged devices, and add adaptive policy details that govern access to content based on context (e.g., full access, sensitive, or confidential). They can invoke cloud security posture management to continuously assess public cloud service configurations to protect data and meet compliance regulations. They also may assess use of inline data loss prevention (DLP) rules and policies for all applications. In that same vein, they can define data-at-rest DLP rules and policies, especially file-sharing permissions for cloud storage objects and application-to-application integrations that enable data sharing and movement. And they should continuously investigate and remove excess trust, in addition to adopting and enforcing a least-privilege model everywhere.
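As a toy illustration of inline DLP, a rule set can be modeled as ordered (pattern, action) pairs evaluated against content in flight. The regexes below are crude pattern matches invented for this sketch; real DLP engines layer on exact-data matching, document fingerprinting, and classifiers.

```python
import re

# Illustrative inline DLP rules, checked in order: first match wins.
DLP_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "block"),        # US-SSN-like pattern
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "quarantine"),  # card-number-like run
]

def inspect(payload: str) -> str:
    """Return the action for the first matching rule, else allow."""
    for pattern, action in DLP_RULES:
        if pattern.search(payload):
            return action
    return "allow"

inspect("SSN 123-45-6789 in upload")  # -> "block"
```

The same rule table can drive both inline enforcement (block the upload) and data-at-rest scans (flag the stored object), which is why defining the rules once and reusing them pays off.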

Phase 5: Refine With Real-Time Analytics, Visualization
The final phase of a zero-trust approach is to enrich and refine policies in real time. Assess the effectiveness of existing policies based on user trends, access anomalies, alterations to applications, and changes in the sensitivity level of data.

At this point, organizations should maintain visibility into users' applications and services and the associated levels of risk; they can also establish a deep understanding of cloud and Web activity for ongoing adjustment and monitoring of data and threat policies. In addition, they can identify key stakeholders for the security and risk management program (CISO/CIO, legal, CFO, SecOps, etc.) and apply visualizations those stakeholders can understand. They can also create shareable dashboards for visibility into different components.

Digital transformation has been accelerated by the pandemic events of 2020 and 2021, and modern digital business will not wait for permission from the IT department. At the same time, modern digital business increasingly relies on applications and data delivered over the Internet, a network that was never designed with security in mind. It's clear a new approach is required to enable a fast, easy user experience with simple, effective risk management controls.

About the Author(s)

Steve Riley

Field CTO at Netskope

Steve Riley is a Field CTO at Netskope. Having worked at the intersection of cloud and security for pretty much as long as that's been an actual topic, Steve brings that perspective to field and executive engagements; he also supports long-term technology strategy and works with key industry influencers. A widely renowned speaker, author, researcher, and analyst, Steve came to Netskope from Gartner, where for five years he maintained a collection of cloud security research that included the Magic Quadrant for Cloud Access Security Brokers and the Market Guide for Zero Trust Network Access. Before Gartner, Steve spent four years as Deputy CTO of Riverbed Technology and held various security strategy and technical program management roles at Amazon Web Services for two years and at Microsoft for 11 years. Steve's interest in security began all the way back in 1995, when he convinced his then-employer that it would be a good idea to install a firewall on their brand-new Internet connection.
