What is Multi-Cloud Security?

Release Date: 14/02/2023

In order to answer the question ‘What is Multi-Cloud Security?’ one first has to define ‘What is Cloud?’, and the answer to the latter differs depending on who you ask.

Back in the day, when clients said “We are moving to Cloud” they usually meant getting rid of the physical tin in their data centres and moving it to cloud infrastructure such as AWS. This typically meant standing up VMs as a broadly like-for-like replacement for the servers that hosted their data and applications, migrating everything across and then decommissioning the old estate. The new environment would be ring-fenced, and users would still access the resources over secure connections, usually passing through appliance-based security and networking tools from the client site.

Over time, though, “We are moving to Cloud” has taken on a very different meaning for me. Users now increasingly work remotely; the appliance-based security and networking tools have evolved and now also reside “in the Cloud”; and the applications and data that sat on those servers have moved to platforms where the underlying servers are no longer required at all.

Case in point: a number of online ads I see these days focus on services that would allow a small startup to exist without ever purchasing physical infrastructure beyond the bare necessities such as laptops or desktops and mobile phones. Want a full office and user administration suite for your users? Office 365 or Google Workspace will handle almost everything you need. Want to handle accounts, tax and payroll for staff? Companies like Xero allow all of this to be done entirely online. And these things are actually nothing new: Hotmail has essentially always been a cloud email service, first introduced in 1996 before being purchased by Microsoft in 1997 and becoming the Outlook we know today. Online accounting, too, has been around since the early 2000s. In fact, there are likely businesses that have never actually “Moved to Cloud”, as they have always consumed technology services that are “In the Cloud”.

How Things Have Changed

To me, a number of things have changed to bring us to where we are today; a few examples are below:

  1. Security – In the late 90s, would you have trusted all staff in a company to use Hotmail as an email platform instead of an on-premise solution like Exchange? It had no built-in threat or spam prevention, no MFA could be incorporated, it was a simple username and password screen, it could be accessed by anyone from anywhere in the world, and there was no visibility of user activity within the app and no centralised control of users. So what were the options? Internal mail servers, with all mail passing through security appliances to check for threats and spam; users controlled from central administration tools (also on-premise) such as Active Directory; mail accessible only from each user’s allocated device; and further protections such as physical token-based MFA via something like RSA SecurID.

  2. Competition and Cost – Keeping it brief, the cost of maintaining a physical premises and data centre, along with on-premise storage and IT security, far outweighs the cost of using cloud-based technology. For IaaS, having three major vendors in AWS, Azure and GCP helps keep costs competitive. Savings can also be found in licensing: fewer individual OS licences, token-based MFA on mobiles instead of physical tokens, fewer VPN licences, cloud-based network security for remote workers, consolidation of tools, reduced complexity of operations and increased visibility of operations.

  3. Readiness and Attitude – Looking back on the Covid-19 pandemic, when businesses were ordered to close their physical locations, think of how many were caught out. Suddenly every worker had to work from home, and perhaps there weren’t enough laptops, not enough VPN licences, no existing configurations for remote access, and so on. When I was involved in Disaster Recovery in the banking sector, every year we would simulate the failure of a single data centre and test how prepared we were to switch to the backup site, but at no time did I ever see a simulation of losing the actual building we all worked in. Events like this only served to ramp up the uptake of cloud-based and remote security technologies in case such an event should ever happen again.

  4. Ease of Use / Support – On the user side, there is now a whole generation of younger workers who have always been online: more tech-savvy, consuming online apps since they were toddlers and somewhat more security-conscious too, so their ability to work in cloud-based environments is bolstered because, technically, they always have. Most cloud apps these days are pretty slick and modern, with simplified user interfaces. Back-end advanced support is usually handled by the vendor, so there may never be any need for a techie to get properly under the hood of the app in question; upgrades too are largely handled by the vendor, leading to simplified support and maintenance and reduced downtime.

  5. Integration / Automation – Most cloud providers use integration as a selling point of their platforms. Through standardised API calls and simplified integration steps, more things talk to more things, which not only opens up freedom in vendor choice and allows for easier replacement of technologies, but also allows for greater automation of activities (e.g., infrastructure provisioning using HashiCorp’s Terraform, as sketched below), reducing operational spend.
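To make that concrete, here is a minimal sketch of what “provisioning via automation” can look like: a short Python wrapper that drives the standard Terraform CLI non-interactively. It assumes the terraform binary is on the PATH and that the directory passed in contains valid configuration; the directory name is a placeholder.

```python
import subprocess

def terraform_apply(workdir: str) -> None:
    """Initialise, plan and apply a Terraform working directory unattended."""
    for args in (
        ["terraform", "init", "-input=false"],
        ["terraform", "plan", "-input=false", "-out=plan.out"],
        ["terraform", "apply", "-input=false", "plan.out"],
    ):
        # check=True aborts the pipeline on the first non-zero exit code.
        subprocess.run(args, cwd=workdir, check=True)

terraform_apply("./network-baseline")  # placeholder directory
```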

Coming back to the question of what it means to move to cloud, I would define the cloud element as any service whose underlying infrastructure is owned and operated by a 3rd party and is remote, i.e. not hosted physically within the company’s network. Cloud, then, is not just IaaS such as AWS, Azure and GCP; it is also all of the SaaS apps in use, such as O365, Google Workspace, Zoom, app-based tokens and so on. With that, Multi-Cloud Security to me is making sure all of this is secured, no matter who the employee is (sometimes even allowing 3rd party access) and no matter where in the world they are.

Security Considerations

Here we will just walk through some scenarios and considerations which may fall under Multi-Cloud Security.

  1. Home / Remote Workers – Web Protection

    A user works from home or spends the majority of their time on the road. They have a company laptop and a mobile phone, and use either their own wi-fi or 3rd party wi-fi. All of the company’s web protection exists in a data centre, and only traffic passing through those appliances gets inspected. So if the user is working externally, how is their web usage protected?

    A common way of addressing this is to require the user to connect to a company VPN before they can properly access the internet. Their traffic is hairpinned back to the internal network: it comes in over the VPN as if the user were in the office, so it passes through the on-premise security appliances and they get protection.

    What if they can’t get on the VPN? If it ‘fails open’, can they now do what they like, or does it ‘fail closed’? Can they still work? Can they get remote support, or do they now have to come into the office? And is all this backhauling placing unnecessary stress on the corporate network?

    What if we could move those security appliances to the cloud, as with Netskope? The user’s relevant traffic is now steered to a cloud tenant, which handles all of the protection with no need for the VPN, and the reduction in licensing creates potential cost savings. They get protection wherever they connect to the internet from, provided they can reach the cloud tenant. There is no need to backhaul traffic, and perhaps the on-premise appliances could even be decommissioned, adding further savings.

    Or, using HashiCorp’s Boundary, an organisation could still host private services for its staff without resorting to the overly-permissive VPN model discussed above. In concert with HashiCorp’s Vault, seamless access to specific services – including, for example, dynamically populated host lists – can be granted with short-lived, dynamic credentials that the user never sees and which are never stored on their remote endpoint.
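    As a rough illustration of the Vault side of that flow, the sketch below uses the hvac Python client to request just-in-time database credentials. It assumes a reachable Vault server with the database secrets engine enabled and a role (named “readonly” here as a placeholder) already configured by an operator:

```python
import hvac  # the community Python client for HashiCorp Vault

# Placeholder address and token; in practice the token would itself come from
# a short-lived auth method rather than being hard-coded.
client = hvac.Client(url="https://vault.example.com:8200", token="<redacted>")

# Ask the database secrets engine for just-in-time credentials against a role
# an operator configured earlier ("readonly" is a placeholder role name).
creds = client.secrets.database.generate_credentials(name="readonly")

print(creds["data"]["username"])  # a freshly minted, unique username
print(creds["lease_duration"])    # seconds until Vault revokes the credential
```

    Because Vault attaches a lease to what it hands out, the credentials revoke themselves when the TTL expires – nothing long-lived ever sits on the endpoint.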

  2. Cloud Application Control

    A company has increased its usage of cloud applications but also wishes to control users’ ability to use unsanctioned ones. It currently has an on-premise Secure Web Gateway through which all user traffic passes; access to sites and apps is controlled via categories and IPs. The company uses O365 and allows staff to access it. It has put in a block for the whole Cloud Storage category but created an exception for OneDrive.

    One day, three issues are raised. First, Google Drive needs to be allowed to enable collaboration with a particularly lucrative client – users only need to be able to download publicly available files from it. Second, users have been found accessing their personal OneDrive accounts and uploading sensitive company information to them. Third, a 3rd party supplier has contractors on site who need to be able to access their own company’s O365 while on the corporate network.

    For this, we turn to Netskope again and use their Next Generation Secure Web Gateway and CASB features.

    To start with, we can configure Netskope to class our unique instance of O365 as being sanctioned by the company and create an allow all policy for just our staff members.

    We can do the same for our 3rd party contractors, creating a policy for their unique instance and giving them ‘allow all’ too.

    Next, we could block all other instances of O365 – or, since Netskope is fully activity-aware, we could simply block logins to O365 unless the username matches one of our employees or the 3rd party’s, so employees can no longer log into their personal OneDrive accounts.

    We can keep the same block on the Cloud Storage category and create an exception for drive.google.com, but again, as Netskope is fully activity-aware, we can disallow the upload activity and allow only download.

    And since Netskope is cloud-based, these policies would be applied whether our users were in the office or working remotely.
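    To show what “instance- and activity-aware” means in contrast to plain URL blocking, here is a conceptual Python sketch of the decision logic described above. This is an illustration only, not Netskope’s actual policy engine, and all tenant names are placeholders:

```python
from dataclasses import dataclass

@dataclass
class Request:
    app: str       # e.g. "O365" or "GoogleDrive"
    instance: str  # which tenant the user is logged into
    activity: str  # e.g. "Login", "Upload", "Download"

# Placeholder tenant names for our company and the 3rd party contractors.
SANCTIONED_O365 = {"ourcompany.onmicrosoft.com", "contractor.onmicrosoft.com"}

def decide(req: Request) -> str:
    if req.app == "O365":
        # Sanctioned instances get 'allow all'; logins anywhere else are
        # blocked, which stops personal OneDrive without blocking the app.
        return "allow" if req.instance in SANCTIONED_O365 else "block"
    if req.app == "GoogleDrive":
        # The Cloud Storage category stays blocked; this one exception is
        # download-only, so uploads of company data are denied.
        return "allow" if req.activity == "Download" else "block"
    return "block"  # everything else in the category remains blocked

print(decide(Request("O365", "personal.onmicrosoft.com", "Login")))  # block
print(decide(Request("GoogleDrive", "client-tenant", "Upload")))     # block
print(decide(Request("GoogleDrive", "client-tenant", "Download")))   # allow
```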

  3. Public Facing Login Screens and DLP

    One of the most vulnerable areas of a cloud application is its public-facing login screen. Anyone can get to accounts.google.com from anywhere in the world, whereas when an app was hosted internally and networked accordingly, only those on the corporate network could reach it.

    Most apps greet you with just two fields, too: username and password. So if a user has been phished, how do you stop the attacker logging in?

    And what of data exfiltration – can we lock down vulnerable data movement paths even when using cloud-based solutions?

    Let’s say a company has a cloud-based protection solution like Netskope, configured in client mode. A user goes to O365 and tries to download a confidential file. Their traffic is steered through the Netskope client to the Netskope tenant and then on to O365. When they try to download the file, Netskope reads it, sees it is confidential and finds a policy blocking this download for this particular user. The download is denied.

    The user then simply logs onto their personal device, goes to O365 and signs in with their work account. No Netskope client means no Netskope policy – so can they now download the file?

    Firstly, there should be a control in O365 to block the user from logging in at all unless they are coming from a recognised IP. A number of companies have trouble setting this up because they don’t have a specific IP range the user will come from without first connecting to the corporate VPN. However, if the user has the Netskope client, we can use the Netskope IP ranges to control access – if the user is not coming via Netskope, they don’t get into O365.
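    Underneath, that control reduces to a simple membership test: did this login arrive from one of our security vendor’s egress ranges? A standard-library sketch, using documentation address ranges as placeholders rather than real Netskope IPs:

```python
from ipaddress import ip_address, ip_network

# Placeholder documentation ranges - not Netskope's real egress addresses.
ALLOWED_EGRESS = [ip_network("198.51.100.0/24"), ip_network("203.0.113.0/24")]

def login_permitted(source_ip: str) -> bool:
    """Allow the login only if it arrived via one of our proxy's egress ranges."""
    addr = ip_address(source_ip)
    return any(addr in net for net in ALLOWED_EGRESS)

print(login_permitted("198.51.100.23"))  # True - traffic came via the proxy
print(login_permitted("192.0.2.99"))     # False - direct to O365, so blocked
```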

    Secondly, Netskope allows for integration with a range of applications wherein policies can be enforced even when the client is absent; this is called Netskope Reverse Proxy. The user goes to O365 on their personal machine and inputs their work username and password. At this point they are handed over to a cloud-based MFA provider such as Okta, or perhaps a self-hosted one such as Vault’s TOTP secrets engine. Once authenticated and authorised, that provider hands them over to Netskope, which takes them on to O365. The user tries to download the confidential file and is denied, even though no client is running on their machine.
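    For the MFA step, app-based tokens, Okta and Vault’s TOTP engine all implement the same underlying standard, RFC 6238. A minimal standard-library sketch of generating and verifying such a code:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Generate the current RFC 6238 code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    """Server-side check, compared in constant time."""
    return hmac.compare_digest(totp(secret_b32), submitted)

# A well-known example secret, not a real credential.
print(totp("JBSWY3DPEHPK3PXP"))
```

    A phished password alone is no longer enough: the attacker would also need the current six-digit code, which changes every 30 seconds.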

  4. Cloud Provider Security Posture

    Turning to cloud providers such as AWS, Azure and GCP: imagine we have a server in a data centre with no actual access to the Internet. All servers are supposed to be built the same way, with a standard set of firewall rules applied at the OS level, but this server was built manually during an incident and doesn’t have those rules. Unless we are looking for it, how do we know it has been missed? What is the risk, given it has no access to the Internet – is there any? Surely an attacker would need to be inside the corporate network, and within the zoned network area the server sits in, to take advantage of the flaw? Even if we found it, would it be worth the hassle to fix? How many companies have you been in where the servers (particularly UNIX) are not standardised in their builds and have a variety of issues (such as local accounts for leavers never being deleted) that are simply left as is?

    But if we step outside of this and place that server into a cloud provider such as AWS, Azure or GCP, do we care now? Has the risk increased? If we make a mistake, can a 3rd party attacker actually reach that server through a misconfiguration? Can a leaver still log onto it if they know their local account credentials? And what security checks can we perform, across every facet of the cloud provider space, to catch such flaws – and how often should we run them?

    This is where Terraform’s native support for Policy as Code tools such as HashiCorp’s Sentinel and Open Policy Agent, as well as its many integrations with tools like Snyk and Bridgecrew, can help, leveraging established public registries of security policies from trusted sources like the Center for Internet Security. With your organisation’s best practice codified in a private registry of Terraform modules, misconfigurations would largely be a thing of the past; and on the off-chance that new work threatens to expose a production environment, these policy checks ensure the build is flagged and paused before any provisioning actually occurs.
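    Sentinel and OPA have their own policy languages, but the underlying idea can be sketched in plain Python: inspect the machine-readable plan that `terraform show -json plan.out` produces and fail the pipeline before anything is provisioned. The sketch assumes an AWS plan and uses a placeholder file name:

```python
import json
import sys

def world_open_security_groups(plan_json_path: str) -> list:
    """Return addresses of aws_security_group resources with 0.0.0.0/0 ingress."""
    with open(plan_json_path) as f:
        plan = json.load(f)
    offenders = []
    for rc in plan.get("resource_changes", []):
        if rc.get("type") != "aws_security_group":
            continue
        after = rc.get("change", {}).get("after") or {}
        for rule in after.get("ingress") or []:
            if "0.0.0.0/0" in (rule.get("cidr_blocks") or []):
                offenders.append(rc["address"])
    return offenders

bad = world_open_security_groups("plan.json")  # from: terraform show -json plan.out
if bad:
    sys.exit(f"Build paused before provisioning - world-open ingress on: {bad}")
```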

    Or, an organisation might want to look at Cloud Security Posture Management (CSPM). Solutions such as Lacework FortiCNAPP or Netskope can be integrated with cloud providers to run continuous security checks against a broad range of out-of-the-box compliance standards, and even allow for the creation of custom checks. Not only do they catch the security flaws that may exist, they also provide the remediation where one is known.
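    The sketch below shows the kind of check a CSPM runs continuously against a live estate, written here against AWS’s boto3 SDK (it assumes credentials and a region are already configured in the environment): list every security group that leaves SSH open to the world.

```python
import boto3  # assumes AWS credentials/region are configured in the environment

ec2 = boto3.client("ec2")
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg.get("IpPermissions", []):
        ssh = perm.get("FromPort") == 22
        world = any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", []))
        if ssh and world:
            print(f"{sg['GroupId']} ({sg['GroupName']}): SSH open to the world")
```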

  5. Cloud Security Automation

    For this one, let’s continue with the services we have fired up in our chosen cloud provider(s). All our compliance is fine; we have fixed all the issues caught by the CSPM and are in a much better place.

    What about the unknown?

    Let’s say a DevOps team producing a new service for the company wants to spin up brand-new resources and code in AWS via automation – what risks might be introduced in doing so? Or what if a server hosted in Azure suddenly starts talking to an unknown cryptomining host out of nowhere? How do we monitor our cloud estate to prevent such flaws from being introduced, or to catch such strange events when they occur?

    Again, this is an ideal use case for some of the Policy as Code tools discussed above, or for Lacework FortiCNAPP’s Cloud Workload Protection and Infrastructure as Code security: risks are identified at source, leading to remediation rather than repetition of issues, while machine learning builds a picture of how the cloud estate normally behaves so that anomalies – such as strange traffic or process behaviour – stand out when they occur.
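    The anomaly side of that can be illustrated with a deliberately naive baseline – real products use machine learning rather than a simple set, but the principle of “learn normal, flag new” is the same. Host and destination names below are placeholders:

```python
from collections import defaultdict

# Learn which destinations each host normally talks to, then flag anything new.
baseline = defaultdict(set)

def observe(host: str, destination: str, learning: bool) -> None:
    if learning:
        baseline[host].add(destination)
    elif destination not in baseline[host]:
        print(f"ANOMALY: {host} -> {destination} (never seen before)")

observe("azure-vm-01", "api.internal.example.com", learning=True)
observe("azure-vm-01", "api.internal.example.com", learning=False)  # normal
observe("azure-vm-01", "xmr-pool.example.net", learning=False)      # flagged
```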

In closing, returning to the initial question of ‘What is Multi-Cloud Security?’: it comes down to defining what Cloud means to an organisation, based on the infrastructure, services and apps it has or consumes, and then – from the users (including 3rd parties and attackers) and their devices all the way through to back-end automated processes – identifying every possible risk and remediating it or preventing it from occurring.

It can be daunting to sit down, map out the end-to-end and identify every possible eventuality and risk, but with technologies such as Netskope and Lacework FortiCNAPP, it doesn’t have to be.

At the core of Cloud is convenience, and products like these have been designed with that in mind.
