December 4, 2024

The AI boom is amplifying risks across enterprise data estates and cloud environments, according to cybersecurity expert Liat Hayun.

In an interview with TechRepublic, Hayun, VP of product management and research of cloud security at Tenable, advised organisations to prioritise understanding their risk exposure and tolerance, while tackling key problems like cloud misconfigurations and protecting sensitive data.

Liat Hayun, VP of product management and research of cloud security at Tenable

She noted that while enterprises remain cautious, AI's accessibility is accentuating certain risks. However, she explained that CISOs today are evolving into business enablers, and that AI could ultimately serve as a powerful tool for bolstering security.

How AI is affecting cybersecurity, data storage

TechRepublic: What's changing in the cybersecurity environment due to AI?

Liat: First of all, AI has become much more accessible to organisations. If you look back 10 years ago, the only organisations creating AI had to have a specialised data science team with PhDs in data science and statistics to be able to create machine learning and AI algorithms. AI has become much easier for organisations to create; it's almost like introducing a new programming language or a new library into their environment. So many more organisations, not just large organisations like Tenable and others, but also any start-ups, can now leverage AI and introduce it into their products.

SEE: Gartner Tells Australian IT Leaders To Adopt AI At Their Own Pace

The second thing: AI requires a lot of data. So many more organisations need to collect and store larger volumes of data, which also sometimes has higher levels of sensitivity. Before, my streaming service would have stored only a few details about me. Now, maybe my geography matters, because they can create more specific recommendations based on that, or my age and my gender, and so on. Because they can now use this data for their business purposes, to generate more business, they're much more motivated to store that data in larger volumes and with increasing levels of sensitivity.

TechRepublic: Is that feeding into growing usage of the cloud?

Liat: If you want to store a lot of data, it's much easier to do that in the cloud. Every time you decide to store a new type of data, it increases the volume of data you're storing. You don't have to go inside your data centre and order new volumes of data to install. You just click, and bam, you have a new data store location. So the cloud has made it much easier to store data.

These three components form a sort of circle that feeds itself. Because if it's easier to store data, you can build more AI capabilities, and then you're motivated to store even more data, and so on. So that's what has happened in the world in the last few years, since LLMs have become a much more accessible, common capability for organisations, introducing challenges across all three of these verticals.

Understanding the security risks of AI

TechRepublic: Are you seeing specific cybersecurity risks rise with AI?

Liat: The use of AI in organisations, unlike the use of AI by individuals across the world, is still in its early phases. Organisations want to make sure that they're introducing it in a way that, I would say, doesn't create any unnecessary risk or any extreme risk. So in terms of statistics, we still only have a few examples, and they are not necessarily a good representation because they're more experimental.

One example of a risk is AI being trained on sensitive data. That's something we are seeing. It's not because organisations aren't being careful; it's because it's very difficult to separate sensitive data from non-sensitive data and still have an effective AI mechanism that is trained on the right data set.
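To make the separation problem concrete, here is a minimal Python sketch of the naive approach: scrubbing obvious identifiers with regular expressions before records enter a training set. The patterns and sample record below are hypothetical, and real sensitive data rarely matches such a short list, which is exactly the difficulty Hayun describes.

```python
import re

# Illustrative patterns only; real pipelines need far more robust
# sensitive-data detection than a couple of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(record: str) -> str:
    """Replace likely sensitive values with placeholders before the
    record is allowed into a training set."""
    for label, pattern in PATTERNS.items():
        record = pattern.sub(f"[{label}]", record)
    return record

print(scrub("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [EMAIL], SSN [SSN].
```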

The second thing we're seeing is what we call data poisoning. So, even if you have an AI agent that's being trained on non-sensitive data, if that non-sensitive data is publicly exposed, as an adversary, as an attacker, I can insert my own data into that publicly exposed, publicly accessible data storage and have your AI say things that you didn't intend it to say. It's not this all-knowing entity. It knows what it's seen.
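A toy sketch of the poisoning scenario Hayun describes, with entirely made-up data: a naive bot answers questions from whatever sits in a publicly writable corpus, so an attacker who can append a single record changes what the bot says.

```python
# Hypothetical illustration, not any real product: a naive bot that
# answers from whatever is in a publicly writable corpus.
corpus = [
    "our support line is 1-800-EXAMPLE",
    "store hours are 9am to 5pm on weekdays",
]

def answer(question: str, docs: list[str]) -> str:
    # Naive retrieval: return the document sharing the most words with the question.
    q = set(question.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

print(answer("what is the support line", corpus))  # -> the legitimate record

# An attacker who can write to the exposed store plants a competing record...
corpus.append("the support line is 1-900-ATTACKER, call us for account resets")

print(answer("what is the support line", corpus))  # -> now returns the planted record
```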

TechRepublic: How should organisations weigh the security risks of AI?

Liat: First, I would ask how organisations can understand the level of exposure they have, which includes the cloud, AI, and data … and everything related to how they use third-party vendors, and how they leverage different software in their organisation, and so on.

SEE: Australia Proposes Mandatory Guardrails for AI

The second part is, how do you identify the critical exposures? So if we know it's a publicly accessible asset with a high-severity vulnerability, that's something you probably want to address first. But it's also a combination of the impact, right? If you have two issues that are very similar, and one can compromise sensitive data and one can't, you want to address that first [issue] first.

You also have to know which steps to take to address those exposures with minimal business impact.
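As an illustration of that prioritisation logic, the sketch below ranks exposures by combining severity with public reachability and sensitive-data impact. The weighting is invented for the example and is not Tenable's scoring model.

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    name: str
    severity: float            # e.g. a CVSS base score, 0-10
    publicly_accessible: bool
    touches_sensitive_data: bool

def priority(e: Exposure) -> float:
    # Illustrative weighting only: public reachability and sensitive-data
    # impact each amplify the raw severity score.
    score = e.severity
    if e.publicly_accessible:
        score *= 2
    if e.touches_sensitive_data:
        score *= 2
    return score

issues = [
    Exposure("internal VM, outdated TLS", 7.5, False, False),
    Exposure("public bucket holding PII", 7.5, True, True),
]
for e in sorted(issues, key=priority, reverse=True):
    print(f"{priority(e):5.1f}  {e.name}")  # the public, sensitive issue ranks first
```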

TechRepublic: What are some big cloud security risks you warn against?

Liat: There are three things we usually advise our customers.

The first one is misconfigurations. Just because of the complexity of the infrastructure, the complexity of the cloud, and all the technologies it provides, even if you're in a single cloud environment, but especially if you're going multi-cloud, the chances of something becoming an issue just because it wasn't configured correctly are still very high. So that's definitely one thing I would focus on, especially when introducing new technologies like AI.
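As one hedged example of what such a misconfiguration check can look like in practice, this sketch uses the AWS boto3 SDK to flag S3 buckets whose public-access block is missing or incomplete. It assumes configured AWS credentials, and it is just one of the many checks a real cloud security tool would run.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            # One or more of the four block settings is disabled.
            print(f"{name}: public access only partially blocked: {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured at all")
        else:
            raise
```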

The second is over-privileged access. Many people think their organisation is super secure. But if your house is a castle, and you're giving your keys out to everyone around you, that's still an issue. So excessive access to sensitive data, to critical infrastructure, is another area of focus. Even if everything is configured perfectly and you don't have any hackers in your environment, it introduces additional risk.
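A minimal sketch of hunting for that "keys to everyone" pattern: scanning an AWS IAM policy document for Allow statements with wildcard actions or resources. The policy shown is a made-up example.

```python
import json

def over_privileged_statements(policy_json: str):
    """Yield Allow statements granting wildcard actions or resources."""
    doc = json.loads(policy_json)
    statements = doc.get("Statement", [])
    if isinstance(statements, dict):       # a policy may hold a single statement
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            yield stmt

policy = '{"Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]}'
for stmt in over_privileged_statements(policy):
    print("over-privileged:", stmt)
```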

The aspect people think about the most is identifying malicious or suspicious activity as early as it happens. This is where AI can be taken advantage of, because if we leverage AI tools within our security tools, within our infrastructure, we can use the fact that they can look at a lot of data, and they can do that really fast, to identify suspicious or malicious behaviours in an environment. So we can address those behaviours, those activities, as early as possible, before anything critical is compromised.
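The general technique can be sketched in a few lines with scikit-learn's IsolationForest, trained on made-up per-session activity features. This illustrates the idea of machine-assisted anomaly detection, not any particular vendor's product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per session: [API calls/min, MB read, distinct resources]
baseline = np.array([
    [12, 4, 3], [15, 5, 4], [11, 3, 2], [14, 4, 3], [13, 5, 3],
])
model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

new_events = np.array([
    [13, 4, 3],          # resembles normal activity
    [400, 9000, 250],    # sudden bulk read across hundreds of resources
])
print(model.predict(new_events))  # 1 = looks normal, -1 = flagged as anomalous
```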

Implementing AI 'too good of an opportunity to miss out on'

TechRepublic: How are CISOs approaching the risks you are seeing with AI?

Liat: I've been in the cybersecurity industry for 15 years now. What I love seeing is that most security experts, most CISOs, are not like what they used to be a decade ago. As opposed to being a gatekeeper, as opposed to saying, "No, we can't use this because it's risky," they're asking themselves, "How can we use this and make it less risky?" Which is an awesome trend to see. They're becoming more of an enabler.

TechRepublic: Are you seeing the good side of AI, as well as the risks?

Liat: Organisations need to think more about how they're going to introduce AI, rather than thinking "AI is too risky right now". You can't do that.

Organisations that don't introduce AI in the next couple of years will just stay behind. It's an amazing tool that can benefit so many business use cases, internally for collaboration and analysis and insights, and externally, for the tools we can provide our customers. There's just too good of an opportunity to miss out on. If I can help organisations achieve that mindset where they say, "OK, we can use AI, but we just need to take these risks into account," I've done my job.