IRONSCALES founder and CEO Eyal Benishti was recently invited back to The Peggy Smedley Show, the podcasting voice of IoT and digital transformation, to continue their discussion on artificial intelligence in today's cybersecurity landscape.
Below you will find a complete transcription of the podcast, but you can also listen to the segment here.
Catching up? You can also find the first interview between Eyal Benishti and Peggy Smedley here.
--
Peggy Smedley: Hello, it's Thursday, and welcome back to the Peggy Smedley Show. I'm excited to have my next guest join me again from last week. He is a computer science and mathematics graduate with honors, a security researcher, reverse engineer, and malware analyst. He has deep knowledge of cybersecurity, malware, and social engineering, and is an open source advocate. So please welcome Eyal Benishti, founder and CEO of IRONSCALES. And IRONSCALES is an email phishing solution that combines human intelligence and machine learning. Eyal, welcome to the show.
Eyal Benishti: Hi, Peggy. It's so great to be here again. Thanks for inviting me.
Peggy Smedley: Yeah, tell everybody where you're calling in from. I love when people are calling in not from here in the States, but from overseas. It's kind of fun to have them calling in.
Eyal Benishti: Yes, I'm calling from Tel Aviv. I'm back here, going back and forth from Atlanta to Tel Aviv, where the R&D team is based, to get some inspiration from the guys back here. Always love to spend some time in Tel Aviv.
Peggy Smedley: We had a great discussion last week, and we were talking about continuing the conversation. Let's start again today by talking about some of the biggest opportunities. We talked about AI and ethics, and you have such a wealth of deep knowledge on this that I appreciate you coming back. But let's help our listeners understand the biggest opportunities artificial intelligence offers.
Eyal Benishti: So yeah, you're right. As technology moves forward, and especially AI, some consider it a new industrial revolution. We see this technology being utilized to solve many interesting problems that we couldn't solve before, because we didn't have those algorithms, and we didn't have the computing power that we now have, thanks to the cloud and GPUs and some other things that are now accessible out there. So these kinds of technologies, with that kind of computing, when you combine them with the huge amount of data that we now know how to collect and store, really give us the power to do things and solve problems that we could never solve before. So very exciting times.
Peggy Smedley: When we look at solving problems, and you talk about wanting to solve problems, do we really understand the ability to solve problems, or are we also creating some new problems? Because this is also the greatest opportunity for the nefarious characters -- I always like to say the bad guys -- who decide that this is an opportunity for them. Even though we're solving problems, it creates more problems in the sense that they think they can come in and add to the problems and the challenges that are out there.
Eyal Benishti: I think that with every technology we create in order to help people, there is always the potential that adversaries will use it against us, in order to monetize it or whatever they have in mind. It can be nation-state stuff, political hacktivism, or whatever. We see the other costs. We see it not just with AI; you see it with the ability to manipulate genomes and all kinds of stuff.
On one hand we can cure diseases, but on the other hand we can get biological weapons, which can be destructive. And I think it's the same with AI. On one hand we can do amazing things to make our lives much better and easier. For example, the fact that we're trying to make autonomous cars is an amazing thing, because we believe it will mean fewer casualties and fewer people getting injured on the road. On the other hand, we need to think about all those drivers who are driving for a living, and what it's going to do to them -- how are we going to make sure that they are not going to pay the price? So it's a double-edged sword, I think, with any technology that we develop out there, not just AI.
Peggy Smedley: So you raise a good point about autonomous vehicles. That's what I was just talking about in segment one today: AVs. And there's a lot of concern, within the next decade, about how people will feel comfortable being in autonomous vehicles. When we talk about the AI algorithms that are going to be in there, and people feeling comfortable about whether they're going to be in an AV -- whether the comfort zone is here in the United States, or in India, or in another country. And the interesting point is, we have like a 10-year gap, that decade before consumers are going to feel comfortable.
Are we going to get there? I mean, because that's the whole idea. We have this great technology in AI. But yet, there's some concern about security and technology, and whether we're going to get there. Are there benefits in AI that say, "We're getting there, but we're not going to get there as fast as we all think and believe"?
Eyal Benishti: I think that at some point we will have enough data to show that AI, at the end of the day, will be safer than human beings. We will get to this point. We will need some early adopters, and people will need to give it some thought, because it won't be perfect on day one. But I believe the technology will mature to the point at which it will be much, much more reliable and efficient than a human being.
But if I take you back in time, I think that if you told someone to go and drive an early automobile, with the brake system, and the shifting and gearing, and wheels, and all these kinds of things, it would have looked crazy to them that someone was actually willing to put his life at risk and drive that thing.
We will never get it perfect the first time around, but this is how progress works. We will build something, it won't be perfect, but it will mature and improve over time. And, at some point, it becomes mainstream. People are now jumping on planes. I'm jumping on planes once or twice every month, and I'm not thinking about the consequences of being up in the sky in a metal box flying at 36,000 feet. Why? Because I've learned to trust it, although it can be super frightening if you think about the potential consequences.
Peggy Smedley: So what you're saying is that you have to first build trust in AI, and it takes time to build that?
Eyal Benishti: Yeah, it will take time. And obviously, not everyone will be happy to use it. I remember back in the day people were scared to use the microwave, because they worried about the potential consequences. It was a great technology that allowed us to warm up food in no time, but some people were afraid while others were embracing it.
Peggy Smedley: Yeah. But we're talking about a bigger problem now. We're talking about the ethics of it. We're talking about how you build the ethics into that. We just heard on the news today about the app TikTok -- the government here in the United States is very concerned about that app and what it can lead to here, what other countries might be doing, and the information they're collecting. I mean, how do you have and improve the ethics as it relates to AI? Because we never know what people are doing, what companies are doing -- what are we talking about when we talk about the efficacy of these products then?
Eyal Benishti: I think that when it comes to ethics and AI, and I think I said it last time, AI is not a magic bullet. It's not at that level currently. Fortunately and unfortunately, it's not at the level where it can be out of control, or where it can do things that governments and others are not already trying to do in different ways. It's mostly just "make it better, faster, and more accurate." But it's not changing the fact that governments and, like we said earlier, adversaries will use technology in a bad way. It just gives them a bit more power to do some things that they couldn't do before.
Again, I don't think that the fact that they are using or building applications like TikTok, or some others that are doing that kind of thing, is necessarily our biggest concern when it comes to AI and ethics. I think that when we think about the fact that millions of people will not be able to drive for a living, and will have to look for something else to do -- this is a much bigger kind of ethical problem or question that we as a society need to worry about, because this is just the beginning. And AI is coming, and it's coming big time, and we need to consider its effects on people -- what they do, how it impacts their day-to-day life -- and not necessarily who's going to use this technology to target them.
I don't think that this is our main concern when it comes to AI.
Peggy Smedley: We used to say that distracted driving was an epidemic. Is AI a contagion? I mean, it's coming at such a rapid pace. We don't even know the efficacy of it. We don't really know -- we have no control over it: what people are doing, what information they're taking, how they're going to use it, how they're going to apply it positively or negatively. What are you saying then? I mean, basically what I'm hearing you say is that we might have something here that is so bad, yet so good, and we have no control over it. And that's what you just described. And that to me is something more dangerous than anything else you just described.
Eyal Benishti: Yeah, but again, as I mentioned, and I'll say it again, it's the same with any technology. And AI is the same -- it's not much different from what phone companies, organizations, and individuals are doing with technology these days in order to collect data and do things with it, including changing people's opinions in the political realm about who they should give their voice to when it comes down to it -- with or without AI.
But yes, technology will give both good and bad people more power to do more things. And our job will not be to regulate it, because we won't be able to regulate everything. We will be able to regulate some things. But rather, we need to focus on encouraging the right behaviors, and make sure that good people building the right technology can stop people who are using AI in a bad way. We will just need to keep the balance of bad versus good -- make sure that we have enough good people building enough good technology with AI in order to eliminate the bad stuff as much as possible.
Peggy Smedley: Hey Eyal, you just said we're not going to have standards, or maybe we need standards. You don't know who's going to regulate it, but then who's responsible for making it trustworthy? I think that's an open-ended question, and I think it makes this in some ways really problematic. The more I hear you talk, the more I feel like AI is good. But then to me it just says, "Look, if nobody's responsible for making it trustworthy, we just have this thing that could have a giant snowball effect."
Eyal Benishti: Let me ask you the same question -- who's responsible for trying to control the power of Facebook and Google in this world? Who's trying to regulate anything around that?
Peggy Smedley: Well, aren't we having a problem here in the States because of that? Because you know they're running amok, right? I mean, in terms of Facebook and Google, aren't we now having that discussion about what they're allowing, what they're controlling, and how much they're controlling?
Eyal Benishti: They can control people's opinions, which, if you ask me, is much more dangerous than the technology itself. And Facebook is not using AI in order to do it. And Google is not necessarily using AI to do it. It's relatively simple technology and a platform that is changing the way the world communicates on a day-to-day basis.
And I think that the fear of AI is somehow directing or misleading people to think that AI is something that can take over and be in control. All kinds of doomsday scenarios are out there. But it's not really there, and we are very, very far away from getting to that point. And I see AI as something that, if we use it in the right way, will just make our lives better. And yes, some people will use it for bad purposes, but I think that the good will be greater than the bad when it comes to this specific technology.
Peggy Smedley: So have Facebook and Google become a good thing or a bad thing, then?
Eyal Benishti: It's a good question. I think there are different perspectives, obviously. It depends how you look at it and what your individual perspective is on each platform. But I also believe in an open market. I believe that if Facebook or Google do bad things, something good will happen in order to overcome and balance it out, so they won't have so much power and control. And the same with technology. There is something so good there as well, and the great thing about life, and about nature by the way, is that it always finds its balance.
Peggy Smedley: It's innovation. So I mean, that's what you're saying: AI is just opening up greater innovation, and it just expands whatever we can create to lead to new things. And that's what you're describing. It's unlimited -- it's limitless, is what you've just described, right?
Eyal Benishti: It's not limitless. Not in its current state. Maybe at some point it will be so strong that we will need to worry about how we keep the technology, as a technology, under control. But AI currently is not limitless. It's a tool. It's a great and amazing tool, like I said in the beginning, that helps us fix things we couldn't fix before, because it's much more powerful in a way.
Peggy Smedley: So let's move on to talk a little bit about AI-driven email security, because that's important to you. Talk a little bit about that, because we haven't really discussed a whole lot about it. We mentioned it last week just briefly, but how does that come into play with all of the things that we're seeing? Because that's going to open up some new opportunities in the way people see things. And why is that important?
Eyal Benishti: It's important because cybersecurity is becoming a major issue. It's everywhere. It's embedded in everything we do. It's soon going to be a part of, and perhaps a concern for, even homeowners and the kinds of connected devices they're using in their homes. And there's so much automation out there that if we try to train people to be our defenders and rely solely on them watching some kind of big monitor, like you've seen in the movies, trying to understand who's trying to do what, there is no chance we're going to be able to stop all the bad things. On the other hand, when you look at AI and automation in general, we can train the machine to spot malicious patterns at a much faster pace than a human being ever could.
So when we use AI not just to detect, but also to respond to these kinds of things without needing human intervention, we can handle many more incidents at any given time. And then it's a fair fight; then it's symmetry -- not an asymmetry in which a few people are trying to fight many different attackers targeting a single small, medium, or even large organization.
So AI is coming to the rescue when it comes to cybersecurity on the defensive side, because it can detect threats and make smarter decisions. Basically, it can mimic human decisions in some areas and take actions on behalf of people in a much faster, automated way. Because AI doesn't need to click on a keyboard, it can just do its thing without human intervention, pattern matching at scale and detecting threats in real time.
Peggy Smedley: Is that part of the key, though -- the ability of AI to make a decision? You just described that humans can't do this fast enough. That ability to see something and say, "Look, we can do something and make it faster than a human." Because sometimes the human is going to make an error, and when using AI technology, a machine is going to react to something differently than the human is going to react.
Eyal Benishti: Yeah, but the machine can take many more factors into consideration. As human beings, we can process seven or eight different signals in a second. The machine can do millions in milliseconds. So when we look at a screen and see the lines coming down, as human beings we are very, very limited in our ability to process these kinds of things. For the machine, it's almost limitless with today's computing power. So as long as we have the right algorithms and insights, we can just ask the machine to look at everything, make decisions, and make correlations that humans simply cannot make in a reasonable time frame.
Peggy Smedley: Your technology, then, is built more on predictive analytics in order to do that? Can you talk a little bit more about the technology you're building to do this?
Eyal Benishti: Yes, so what we're doing, basically, is building an AI technology that mimics a security expert's decision-making process by looking at all the parameters an expert would normally look at, and much more beyond that -- even things a human being can't look at -- and, in a fraction of a second, making a decision that is very, very close to, if not even better than, the decision that a human expert would have made.
So it's basically building a virtual security expert, one that is trained on a very specific topic, because you can't train AI on everything. You can train an AI to drive a car; you can train an AI to become a phishing expert. So we picked this specific problem and said, "Hey, how can we make AI a phishing expert?" So we are looking at hundreds or thousands of experts. We are looking at the decisions they're making on a daily basis. And based on their decisions, we can train a machine in real time, with high-quality data inputs. Because as you know, machine learning and AI, at the end of the day, are all about having enough quality data.
If I have lots of data and it's labeled in the right way, building the algorithm becomes the easy part. Collecting the data -- the method of collecting the data -- is the tricky part when it comes to AI.
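To make that concrete, here is a minimal sketch of the idea Eyal describes: a classifier trained on expert-labeled emails. This is only an illustration, not IRONSCALES' actual pipeline; the library choice (scikit-learn), the toy dataset, and every name below are assumptions, and a production system would train on far more data and many more signals than raw text.

```python
# A minimal sketch, assuming scikit-learn: train a phishing classifier
# on expert-labeled emails. The dataset below is a hypothetical toy set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical expert-labeled examples: 1 = phishing, 0 = legitimate.
emails = [
    "Your account is locked, verify your password here immediately",
    "Quarterly report attached for your review",
    "Urgent wire transfer needed, reply with bank details",
    "Lunch meeting moved to 1pm tomorrow",
]
labels = [1, 0, 1, 0]

# Turn the text into features and fit a simple linear model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new, never-before-seen message the way an analyst would triage it.
suspect = ["Please confirm your credentials to avoid account suspension"]
print(model.predict_proba(suspect))  # [[P(legitimate), P(phishing)]]
```

With so few examples the output is meaningless, of course; the point is the shape of the workflow -- expert-labeled decisions in, a decision-making model out.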
Peggy Smedley: So when I hear you talk, and maybe I'm hearing you incorrectly, but are you actually saying you're able to predict future attacks based on that? Or are you just able to analyze the attack that's actually happening, as it's happening?
Eyal Benishti: We can do both. We can analyze the current incident that's in process and ask basically two questions. First, is it something that is known or familiar to us? If the answer is yes, obviously we will block it. And the second question is exactly what you just said: if it's not known to us based on everything we've seen so far, could it be the next attack that we haven't seen yet? We are trying to predict whether it's something new or some kind of permutation -- something that's been used in the wild but has gone through some type of metamorphism or polymorphism. So in a way, it's both a detective and a predictive kind of technology in the sandbox.
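As a rough sketch of that two-question flow -- again an illustration, not IRONSCALES' implementation -- a known-fingerprint lookup can answer the first question, and a trained classifier like the one sketched above can answer the second. The hash set, threshold, and function names are hypothetical, and a real system would use fuzzy similarity rather than exact hashes, since polymorphic variants defeat exact matching by design.

```python
import hashlib

# Hypothetical fingerprints of attacks already seen and confirmed bad.
KNOWN_BAD_FINGERPRINTS = {"9b74c9897bac770ffc029102a200c5de"}

def triage(message_body: str, classifier) -> str:
    """Two-stage check: known/familiar first, predictive second."""
    # Question 1: is this something we've already seen? If yes, block.
    fingerprint = hashlib.md5(message_body.encode("utf-8")).hexdigest()
    if fingerprint in KNOWN_BAD_FINGERPRINTS:
        return "block (known attack)"

    # Question 2: could this be a new attack or a permutation of one?
    # `classifier` is a fitted model exposing predict_proba, as above.
    phishing_probability = classifier.predict_proba([message_body])[0][1]
    if phishing_probability > 0.8:  # illustrative threshold
        return "quarantine (predicted new attack or permutation)"
    return "deliver"
```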
Peggy Smedley: So when you're looking at all this data, do you then say it's the machines that are learning? Is there a process based on machine learning with this as well? Is that the process it's going through?
Eyal Benishti: Yeah, AI is a representation of machine learning. The machine is learning, and AI gives a very specific representation to what the machine has learned in the background. So basically, in order to have these kinds of technologies in place, you take a huge amount of labeled data, you throw it at the machine, and you tell the machine, "Hey, learn from that and learn how to classify things. Learn how to differentiate between good and bad, in our case." And the machine is able to learn.
If you take it to a simpler kind of problem -- let's say you want to teach the machine how to recognize cats -- we just feed enormous amounts of cat pictures into the machine and say, "This is a cat; this is what a cat looks like." Now, there are millions of pictures of many different cats. And eventually, after the training session is done, the test for the machine is to show it a cat picture it has never seen before and ask it, "What is that?" The machine should be able to predict and say, "This is a cat. I've never seen it before, but I've seen so many cats that I know this is a cat, based on many different parameters." And it's very impressive. It has this kind of magic feeling, that we can build technology to this level.
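That train-then-test-on-unseen-examples loop is the core of supervised learning. Here is a minimal sketch of it using scikit-learn's built-in handwritten-digit images as a stand-in for cat pictures; a real cat classifier would use a deep network and millions of images, so every choice below is purely illustrative.

```python
# A minimal sketch of the supervised-learning loop: train on labeled
# examples, then test on examples the model has never seen.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()  # small labeled images standing in for cat photos

# Hold out 20% of the data so the test uses genuinely unseen examples.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

# Training phase: show the machine many labeled examples.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# The test Eyal describes: classify images the model was never shown.
print("accuracy on unseen images:", model.score(X_test, y_test))
```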
Peggy Smedley: Eyal, I love the way you're helping us understand, really, where the technology is going and how we can actually embrace it. So, for anyone who's listening right now, how can they best prepare for an AI-driven future? Because I really think you're helping us get our arms around this and really understand it. And I know we tackled some tough things and put you on the spot, but I really think you're helping our listeners understand. So how do we best prepare for it?
Eyal Benishti: I think, first, make sure you know what AI can and can't do. Don't just believe whatever is being said or marketed to you. Ask questions. Try to see whether the kind of problem you're considering solving with AI actually matches the pattern of things AI can do. This is number one.
Second, ask questions about how it works and how the data is collected, and make sure you're getting the right answers. Third, if you want to really be ready for AI, try to map out all the kinds of problems in your organization that you believe AI can solve, have a plan in place for how you would use it and what you're going to do, and, most importantly, think about how people's jobs will be impacted by AI. Make sure that you have something else they can do instead.
When you're talking about cybersecurity, AI can help us in so many ways. We can basically free up security teams from many manual tasks so they can do other things.
And I think the fourth thing is: give it a try. Try it out and see that it's working for you. And if it's not working for you, try something different. Not all AI has been created equal, so it's very important to understand that as well.
Peggy Smedley: Well, Eyal Benishti, founder and CEO of IRONSCALES, I love the conversation today. Thank you so much. Where can our listeners go to learn more about what you guys are doing? What's your URL?
Eyal Benishti: Oh, the URL is ironscales.com. It's I-R-O-N-S-C-A-L-E-S.com. You can find everything we are doing, and everything we're thinking of doing in the near future.
Peggy Smedley: All right. Thank you so much for all your time today. I really, really do appreciate you coming back and spending more time with me.
Eyal Benishti: Thank you, pleasure as always.
Peggy Smedley: All right, listeners, that's all the time we have for today. Check out our website at connecttheworld.com or thepeggysmedleyshow.com. And remember, tweet at us at Connected W mag. We'd love to hear what you think. All right, this is the Peggy Smedley Show, the podcasting voice of IoT and digital transformation. And remember, with great technology comes great responsibility. We'll be right back, right after this commercial break.
To learn more about artificial intelligence and machine learning in email security, check out Themis, our virtual email security analyst.