Full transcript
Gary Ruddell:
There’s a creature in Uzbek folklore known for causing chaos. The frightening spirit preys on humans, lurking in the dark and changing its face to trick its victims before it pounces. This shapeshifting creature has one aim: to deceive and harm. But it’s just a ghost story. It’s not real. In December 2023, however, it lent its name to a sophisticated Android malware campaign using the same tactics. The banking malware masquerades as legitimate applications, leaving users confused. Like its folkloric namesake, it surfaces from the dark to steal everything they have. Very real, and posing a very real danger. The masked actor group has been codenamed Agena. So, Nick, Group-IB first became aware of Agena in May 2024. Is that right?
Nick Palmer:
Yeah, that’s correct. Group-IB is constantly researching new types of Android and Mac malware to understand how it operates, who it’s targeting, and what it’s trying to do. So this particular group has been active since at least November of 2023, and we discovered and analyzed about 1,400 unique samples of this particular malware. Now, the campaign has been targeting banking customers primarily in the Central Asian region, particularly in Uzbekistan. But this malware is evolving, and the attacks are expanding beyond the original region, claiming victims in other countries as well.
Gary Ruddell:
And what is the group’s modus operandi?
Nick Palmer:
It’s Android malware, basically designed to steal personal banking information. It also has the capability to intercept two-factor authentication messages, with the ultimate goal of gaining access to bank accounts. So Group-IB first discovered different APK files masquerading as legitimate applications, mostly payment, banking, and delivery companies, you know, things people would use every day. These malicious files were mainly spread across Telegram channels and mostly targeted at Android users. And there’s also evidence of automated distribution, really mass spamming through Telegram, using different bots to spread these malicious links. And the operators also set up different affiliate networks to really scale their operations.
Gary Ruddell:
And how about like distribution to actually trick people into installing the malware? How does that work?
Nick Palmer:
Yeah. So in addition to mass spamming through Telegram, using affiliates and things like that, they also try to prey on the psychology of users. So they leverage different social engineering methods designed to make users act fast and maybe not consider the potential risks involved. They use different scam cash offers and promotions to really convince users to download the applications. And then, you know, once installed, the malware basically tries to get access to the accessibility services on the phone to maintain persistence and prevent uninstallation of the malware.
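For the technically curious, here is a minimal Kotlin sketch of the kind of defensive check this implies: listing which installed apps currently have an accessibility service enabled, so anything unexpected can be reviewed. It is purely illustrative, assuming standard Android APIs, and is not part of Group-IB’s or Monzo’s tooling; the function name is invented for this example.

```kotlin
import android.accessibilityservice.AccessibilityServiceInfo
import android.content.Context
import android.view.accessibility.AccessibilityManager

// Defensive sketch: list the packages that currently have an accessibility
// service enabled. Accessibility abuse is how this kind of banking malware
// keeps itself installed and watches what happens on screen, so unfamiliar
// entries here deserve a closer look.
fun enabledAccessibilityPackages(context: Context): List<String> {
    val manager = context.getSystemService(Context.ACCESSIBILITY_SERVICE) as AccessibilityManager
    return manager
        .getEnabledAccessibilityServiceList(AccessibilityServiceInfo.FEEDBACK_ALL_MASK)
        .map { it.resolveInfo.serviceInfo.packageName }
        .distinct()
}
```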
Gary Ruddell:
Okay. And then that gives the hackers, you know, access to all of the information they need to get into the user’s bank account, right?
Nick Palmer:
Yeah, that’s right. So once the malware has persistence, they have access to accessibility services. This malware had the ability to intercept two-factor authentication messages, even logging into accounts from new devices with the user’s phone number after confirming the SMS message. The social engineering used to deliver the malware makes the attacks really successful. You know, preying on users’ trust in the types of institutions the malware was impersonating really made it effective. And I think it’s important to mention as well that we often see threat actors trying and testing new tactics and techniques, perfecting them in certain regions. And then, before we know it, we can see these types of campaigns in the US or the UK and beyond.
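As a companion to the accessibility check above, here is another illustrative Kotlin sketch, again not anyone’s actual tooling: it lists installed packages that have been granted permission to receive SMS, since an unfamiliar app on that list is exactly what SMS-based two-factor interception relies on. On recent Android versions the calling app needs appropriate package-visibility permissions for this to see everything.

```kotlin
import android.Manifest
import android.content.Context
import android.content.pm.PackageInfo
import android.content.pm.PackageManager

// Defensive sketch: which installed apps have actually been granted the
// RECEIVE_SMS permission? Intercepting one-time passcodes sent over SMS is
// how this kind of banking malware defeats two-factor authentication.
fun packagesGrantedSmsReceive(context: Context): List<String> {
    val pm = context.packageManager
    return pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)
        .filter { pkg ->
            val requested = pkg.requestedPermissions ?: return@filter false
            val flags = pkg.requestedPermissionsFlags ?: return@filter false
            requested.indices.any { i ->
                requested[i] == Manifest.permission.RECEIVE_SMS &&
                    (flags[i] and PackageInfo.REQUESTED_PERMISSION_GRANTED) != 0
            }
        }
        .map { it.packageName }
}
```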
Gary Ruddell:
Thanks, Nick. Thanks for the intro to Agena Banker. This is a great moment to introduce our guest for the episode today, Amy Grieveson. Amy is the director of security behaviours and governance at Monzo, one of the UK’s leading online banks. Amy, thanks for joining us.
Amy Grieveson:
Thank you for having me on the podcast.
Gary Ruddell:
Do you want to tell us a little bit about your role?
Amy Grieveson:
So, my role is really looking at the human side of security. It’s about helping people understand how everyday decisions can have a real impact on keeping them and their information safe. It covers all of the awareness, communication, and training that you’d expect in a traditional security awareness role, but we’re also looking at how processes and controls hold up in the real world and whether they still work when people interact with them. A big focus is on communicating security in a way that feels really simple and relevant, so that security can become part of how people work and not something that’s separate and intimidating and too difficult to deal with.
Nick Palmer:
Great to have you here, Amy. And, you know, I loved our discussion pre-podcast as well, and I recognize a lot of the great things that Monzo is doing as a banking institution to protect against fraud and make customers feel safe about their money. You know, one of the major things I think threat actors prey on is social engineering. So instead of targeting systems, cybercriminals manipulate and deceive people into giving away their information. So is that a big part of your role within Monzo? And can you explain a little bit more about what you’re doing to defeat social engineering?
Amy Grieveson:
Yeah, absolutely. And my role focuses a lot internally, on protecting the people who work for Monzo and making sure we’re all aware of our own risks while we’re busy protecting everybody else’s money and finances. And we’re increasingly looking at social engineering through a lens of cyber wellness. So we’re not looking at the clicks and the pound signs, but really focusing on the human side of it, trying to tackle the fear and the shame that can be associated with scams, helping people feel confident and supported when they do face these potential threats, and removing the anxiety and blame that can sometimes come along with it all. We’re trying to make security feel really human so that people can naturally engage with it. To give you an example, one thing that’s working really well is how we approach phishing awareness. I’m not really allowed to say this in my role, but I hate phishing simulations. I think they serve a purpose to deliver data, but they don’t necessarily deliver the support that’s needed. So we at Monzo tell everybody that they’re about to get phished, not to catch them out, but to spark curiosity, to make them go on a treasure hunt through their inbox and hopefully find not just the simulated phish, but also some of the real threats that might be lurking there. It changes the tone from something we’re going to tell people off for to just getting them engaged and working proactively with us. It’s about focusing more on the psychology behind the social engineering and shifting the message from “don’t click on that link” to “just have a pause, have a think about what you’re being asked to do. Is it something you’d expect to be asked to do?” The aim is to better equip people to make stronger decisions, especially under pressure, because that’s exactly what scammers are trying to exploit.
Nick Palmer:
Love the ability to actually motivate people rather than naming and shaming those who clicked on the links. So very cool.
Gary Ruddell:
The cybercrime landscape is evolving pretty quickly, and with it, the mechanisms needed to respond to fraud. In your opinion, and from what you’ve seen, what are some of the more notable evolutions in recent years?
Amy Grieveson:
Yeah, you know, scams are just ever increasing. I think Citizens Advice estimated that around nine million people in the UK were impacted by finance-related scams last year. I think what we’re seeing is that the tech side has been maturing for years and years, and technology is going to keep enabling that. But what’s really changing now is the emotional tone. Scams are becoming more personal, more believable, much harder to spot, and the tech is enabling them to reach a lot more people.
Nick Palmer:
Yeah, that tech piece is definitely one of the things evolving the cybercriminal landscape. I think when you look at APT groups, most of them have moved away from very complex attacks against SWIFT systems and financial institutions towards, you know, how can we really scale our operations? Look at ransomware-as-a-service, for example: hiring affiliates, making everything efficient, making all of the tools and techniques available to those affiliates and really scaling operations. And I think the other really interesting piece, especially as we touched on the psychology a little bit, is looking at how AI is actually impacting different fraud operations. Long gone are the days of looking for grammatical errors in a phishing email to be able to spot it. There are lots of tools available to cybercriminals to perfect the message and make it look as legitimate as possible. So fraud and ransomware-as-a-service have definitely evolved because of the technology that’s available on the market.
Gary Ruddell:
Yeah, let’s dive into that. What role do emerging technologies really play in enabling financial cybercrime?
Nick Palmer:
Yeah, so I mean, I quickly mentioned the different AI capabilities for making phishing messages look more legitimate than, you know, the old Nigerian scam emails that were riddled with errors. But you can look at other things as well, right? Like voice phishing. You no longer have to be a native English speaker if you’re targeting an English-speaking region. You can use some of the different AI voice services to make the message, well, not look and feel, but sound like your CEO and try to convince your CFO to make a change on a bank account, let’s say. So really tapping into some of the technology that’s available, I think scaling fraud operations and playing on people’s emotions becomes a little bit easier with some of the voice and deepfake technologies that are on the market.
Amy Grieveson:
Yeah, and look, alongside that there are obviously a lot of new attack surfaces for cybercriminals to take advantage of. We’re seeing huge growth in things like cryptocurrency trading platforms, peer-to-peer payment apps, digital wallets, embedded finance in your apps. These things used to be quite niche, with take-up mostly limited to our more tech-savvy users, but now they’re completely normal, and that’s opening a lot more doors for scammers to exploit.
Gary Ruddell:
Yeah, and I think one of the things that Agena did to evade detection was to spread via non-official channels, so the apps don’t appear in app stores, for example.
Amy Grieveson:
A useful habit everybody should adopt is to only download apps from the verified app stores. Don’t download from links that you’ve been sent in messages or posts on social media; they just carry a risk that’s not worth it.
Nick Palmer:
Yeah, definitely. And I think, you know, when you look at how Agena primarily distributed their malware, they were focused on really spamming it out through Telegram and other communication channels, outside the legitimate Apple and Android app stores. You know, I think Apple and Android have both done a fairly okay job at reviewing the files that end up on their app stores to see whether they’re malicious, and taking care of them quickly if they are. So definitely, one of the tactics used by Agena is to get people off of those legitimate stores and propagate their malware through different channels.
Amy Grieveson:
Yeah, and look, scammers use the fact that customers trust the app. They make them look legitimate and safe, and they can kind of hide behind that. So even if you recognize the app, the app author, the branding, it’s really important that you only download it from a verified source.
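For app developers and security teams, one way to check this programmatically is to ask Android where a package was installed from. Here is a minimal Kotlin sketch, an illustration rather than anyone’s production code; it assumes the Google Play installer package name is the only store you care about.

```kotlin
import android.content.Context
import android.os.Build

// Sketch: report which installer put a given package on the device.
// Apps sideloaded from a Telegram link or a browser download will not
// report the Play Store ("com.android.vending") as their installer.
@Suppress("DEPRECATION")
fun installerOf(context: Context, packageName: String): String? {
    val pm = context.packageManager
    return if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
        pm.getInstallSourceInfo(packageName).installingPackageName
    } else {
        pm.getInstallerPackageName(packageName)
    }
}

fun cameFromPlayStore(context: Context, packageName: String): Boolean =
    installerOf(context, packageName) == "com.android.vending"
```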
Gary Ruddell:
Let’s talk about the victims of Agena here. You know, what type of people are usually targeted by them?
Nick Palmer:
With Agena in particular, I think they weren’t targeting anyone specifically, you know, by age, gender, et cetera. Geographically, they were targeting Uzbekistan and Central Asia. But I think it’s important to look at these types of campaigns not from how they start, but from where they could end up, based on the tactics and techniques that are employed. You know, with these types of actors, let’s take another one that Group-IB researched, the GoldDigger malware: it also started in an Asian country and then eventually was propagated elsewhere. And that’s likely going to be the same once Agena perfects their tactics and techniques in this particular campaign. They get comfortable using the legitimate businesses they’re impersonating, how they’re propagating their malware, how they hire the affiliates. Once they get comfortable with that, they want to scale their operations. So they’ll look to the UK and the US, where they can really scale and be effective from a financially motivated perspective. And I just want to reiterate that the actors aren’t targeting anyone in particular. They sent a high number of malicious links through different methodologies, with no specific age groups. So it’s really not direct targeting of individuals so much as a geographic focus at this moment.
Amy Grieveson:
Yeah, and the reality is that everyone is susceptible to these types of scattergun scams. There’s a common assumption and misconception that perhaps older people or less digitally skilled people are more likely to be scammed, but that’s not the case. In fact, there’s a lot of data around, including a scam report that Monzo has produced, that shows younger people are more likely to be impacted. We found that people working in tech and engineering, and people in the 25-to-34 age group, reported being scammed more than any other group. It’s possibly because Gen Z and more tech-savvy people have a bigger and broader digital presence. Maybe they’re early adopters of new tech, so they’re more likely to come across some of these scams. But being confident doesn’t necessarily mean being safe, and scammers leverage that confidence.
Gary Ruddell:
Would you say that digital fluency is then causing blind spots?
Amy Grieveson:
Well, look, scammers are masterminds at exploiting human behaviours. So as we adapt, so will the threats. And right now there might be a slightly open door created by digital fluency, which is showing up in the data. But in this instance, knowledge is power. The more we can help people identify their own risks, and recognise that they actually might be at risk, the less the scammers will be able to exploit it.
Gary Ruddell:
I mean, we’ve discussed on the podcast before, particularly in the MuddyWater and OilRig episodes, how challenging consumer awareness is in the era we’re living in.
Amy Grieveson:
Yeah, the problem is really that consumer awareness alone isn’t enough. You can be completely aware of a risk and still do the thing anyway. We see that in lots of areas. I mean, obviously not me, but speeding, crossing the road with your headphones in. We’re all aware that that’s risky and we do it anyway. Much of the messaging we see around scams and security is still very fear-based, which contributes to it feeling intimidating and makes managing security feel quite out of reach for people. So we need to focus on giving people tools and confidence, creating a safe environment for them to make secure choices really easily, layering human-centric design onto controls and all of the technical processes, just making it a lot easier to relate to.
Nick Palmer:
Yeah, those are all really great points. You know, I think nowadays it’s becoming increasingly difficult for people to recognize scams. Even looking at the Agena attacks, users felt that they were in control; they set the permissions for the app themselves. And first and foremost, when they looked at the phishing emails, there were no errors in them anymore. They were perfectly written in Uzbek, right? So that creates confusion, an inability for individuals to really recognize whether something is a scam, or to look for those normal signals they would typically rely on to detect these types of things.
Gary Ruddell:
Thanks, Nick. Thanks, Amy. So we end up in this weird situation where the legitimate communications from a company look less safe than those from the scammers branding themselves as a trusted application.
Amy Grieveson:
Yeah, Nick mentioned earlier how long gone are the days of poor spelling and poor grammar being the giveaway. It’s kind of a joke now that if you get a message with a spelling mistake in it, it’s more likely to be from a human. And you think about some of the legitimate comms that you get from real companies and real organizations, like from your doctor: texts with links in, really poorly formatted, errors all over the place. You kind of expect it not to work, but you sort of have to trust that it will. They all look like the things you’re told to look out for in phishing attempts. So in this whole new environment where scams are way more sophisticated, organizations are really competing for customer attention, and they should be looking to build processes that are easy to trust and hard to mimic. We should know what to expect from our doctors or from our bank, and that should be consistent and predictable. The language should be clear, the instructions should be easy, there shouldn’t be any ambiguity. There are loads of good tips for businesses from the National Cyber Security Centre around how to make your messaging feel a bit easier to trust, but it’s never foolproof, right? Because, like we’ve been saying, these scams are getting more sophisticated. But we should be in a position where companies aren’t making it easier for that to be the case.
Gary Ruddell:
So, do we think there’s any need to communicate about security in a different way?
Amy Grieveson:
Yes, absolutely. And luckily for me, because that’s how I have a job. I think we’ve historically struggled to talk about security in a way that reaches the general public. The language is usually technical, full of jargon, kind of detached from how people really think and behave. And it broadly disengages people, which leaves everyone open to risk. We need to make it more relatable, put it in human terms, focus on simple actions, work off people’s instincts, and help them with decision making. It’s core behavioural science, really, that needs to be applied here.
Gary Ruddell:
Yeah, I mean, are there any interesting examples maybe you could share from the world of Monzo?
Amy Grieveson:
One fun thing that stands out for bringing security to life: for a while we introduced security-themed songs for staff. These were full productions created by a company called Social Proof, who are fantastic, and we had songs about security risks put to different genres. So we had a country-themed song about phishing and a pop-themed song about passwords, and that went down really well. It made it very fun. I’ve seen some other things as well. The Bank of England went for a really visual campaign where they stuck visible cracks across all of their internal walls and asked the question, how safe is the bank? And that was to open up questions about cybersecurity. And we see other organizations using humour and storytelling and emotion in their internal and external campaigns. There’s a lot of space for creativity and variety in this area, especially in security language. And it’s not unique to security. We see it in health campaigns, we see it in travel and road safety campaigns; they use real human language to inspire action. And I think that’s what we need to do more of in security.
Gary Ruddell:
Yeah, that’s really interesting. I like the idea of those little musical campaigns. On the consumer side, how can consumers spot potential attempts at financial crime? What sort of things should they be looking out for to identify a scam?
Amy Grieveson:
First up, people should be aware that scams are designed to manipulate emotion and not intelligence. They’re built to catch you in a moment, not test what you know. Um, scammers have always been experts in human manipulation, and now technology is amplifying their ability to do that. Saying all that, some of the obvious things to look out for have remained consistent. Anything that promises something amazing or causes you to panic should be enough to just think twice, take a pause, have another look. Um anything that is too good to be true, an amazing bonus, great returns on investment, some quick financial wins, you know, a voucher for your favorite fast food restaurant. Um, or anything that tries to create panic, like urgently move your money around. Um, your kid is calling from an unknown number and needs help, anything like that that tries to make you act fast without thinking is something that you should stop and think about.
Nick Palmer:
Yeah, these are all really great points. I think bringing awareness to consumers about how they can identify different scams is essential. I can’t count the number of times I’ve told my parents, for example: hey, if someone’s calling from a retail store trying to get you to install TeamViewer, look out for these types of scams. So general awareness is essential. I love, Amy, how Monzo is bringing humour and catchy songs into training. It really helps bring it to life and helps people remember these things. You know, I think consumer awareness is one way we can stop scams, but I’d love to see a lot more onus placed on the financial institutions as well. How can they work together with other financial institutions to actually mitigate these scams? Maybe it’s time to stop thinking only about how we train consumers to be aware of scams and start treating this as a real competitive advantage: bringing consumers who value knowing their money is safe to our bank. I’m willing to bet that if a bank takes protecting consumers’ money seriously, they’re going to win more customers over, because for me personally it would be an advantage.
Amy Grieveson:
Yeah, absolutely. And outside of competitive advantage, really everyone wins when we can protect all people from all scams. So collectively coming together to fight cybercrime is a competitive advantage for everyone.
Nick Palmer:
For sure. Completely agree.
Gary Ruddell:
Yeah, I mean, we talked about this before the podcast, you know, I’m a Monzo customer myself. And one of the cool little things that I love in the Monzo app, and I don’t know if other banks do it now, but if I’m on a phone call and I open up the Monzo app, at the top of the app it literally says: we are not calling you. If someone is pretending to be us, please hang up. It’s really cool. And then, when Monzo is calling you, you can open the app and verify it. It goes green and it says we are calling you. So it’s really nice and simple and easy to use. So yeah, great work. I mean, we’ve touched on it already with AI and things like that, but things are getting more sophisticated from a scam perspective. What sort of tips do you have for people to stay safe, Amy, in this world of highly sophisticated phishing attempts and scams?
Amy Grieveson:
Yeah, I think first up, I would say that if you do find you’ve been targeted or impacted by a scam, the best thing to do is share it, talk about it, call your bank, tell your friends, because talking about these things is one of the best offences. It will help other people not become impacted by them. But really, the best offence for everyone is curiosity and a good bit of healthy skepticism. Question things. Know what’s normal behaviour for the organization you’re dealing with. Banks will not ask you for your PIN, no one legitimate will ever ask for a passcode, and no one’s going to ask you to urgently move money around. With your friends and family, have a simple code word or phrase that you can use to check you’re actually talking to the right person, or if something feels a bit unusual, you can ask for it. You know, we’re going to see video and audio deepfakes become way more common. So verifying out-of-character requests, while it might feel really awkward right now if you’re talking to your mate and you ask them to tell you the name of the school you went to (probably not the best example, anyone could find that out), these checks will become way more normal, and it’s a really good habit to get into.
Nick Palmer:
Yeah, these are all really great points, I think, about how the consumer can be aware. So I’ll really focus on how the business can mitigate these types of sophisticated scams. And I really think it falls into two categories, right? One is understanding what the fraud landscape looks like, what tools and techniques are used, and how the bank can actually implement things that move the needle on reducing the effectiveness of scams. You know, there was a really great example from Monzo, where you implemented the “is Monzo calling you?” check: green means yes, red means no, we’re not calling you, right? So simple things like understanding how a scam actually happens and changing things in just the slightest way to cause some friction. Not necessarily friction for the consumer, but for the cybercriminal group, because reducing their effectiveness reduces their money, and reducing their money makes them go somewhere else, right? So I think that’s one thing: really looking at how the bank can implement successful controls to help consumers become aware that they’re potentially falling victim to a scam. And the second thing is really shutting off that money flow. And that’s only going to happen, I believe, through financial institutions collaborating with each other in real time. So how can you share information about a bank account that has been involved in a scam with all of the other UK financial institutions in real time, while respecting privacy regulations and things like that? If we get to a state where banks can share information in a way that preserves privacy for consumers in real time, that’s going to shut off the money flow for these types of scammers and ultimately mitigate the effectiveness of their campaigns.
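To make that last idea concrete, here is a purely illustrative Kotlin sketch of one common building block for sharing fraud indicators without exposing raw account details: exchanging keyed hashes of account identifiers rather than the identifiers themselves. The function name, the identifier format, and the assumption of a key agreed across the sharing scheme are all invented for this example; real inter-bank schemes involve far more than this, including governance, consent, and regulation.

```kotlin
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Illustrative sketch: compute a keyed fingerprint of a UK account identifier
// (sort code + account number) using HMAC-SHA256. Two participants holding the
// same scheme key can check whether they have flagged the same account without
// either side revealing the underlying number.
fun accountFingerprint(sortCode: String, accountNumber: String, schemeKey: ByteArray): String {
    val normalised = sortCode.filter { it.isDigit() } + ":" + accountNumber.filter { it.isDigit() }
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(schemeKey, "HmacSHA256"))
    return mac.doFinal(normalised.toByteArray(Charsets.UTF_8))
        .joinToString("") { "%02x".format(it) }
}
```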
Amy Grieveson:
Yeah, I think the point you made about putting in the necessary friction is a really important one, because you have to balance friction with customer ease and the immediacy we expect from all of our apps, not just banking apps. We want things immediately now, we don’t want any friction, but we also want to know that our money is safe. And if there’s a chance it’s being stolen or something bad is happening somewhere, then we do want that little bit of friction. So getting that balance right is really the key.
Nick Palmer:
Absolutely. I struggle to even say that part, because introducing that friction may not necessarily be the right thing to do at this moment, but it mitigates those scams, right? And ultimately reduces the return on investment for these cybercriminals. And, you know, I think if you actually are able to stop a scam for one of your customers, that’s going to be a lifetime customer, and they’re going to tell their friends and so on. So make anti-fraud and security a competitive advantage and talk about it more. Not to preach to the banks, I’m just talking in general.
Amy Grieveson:
I think it’s where we’re also gonna see AI being leveraged more and more, right? So hopefully the friction will be invisible and we’ll be able to do a lot more and are already doing a lot more in the anti-fraud space that consumers should never even need to know about. That’s the dream. Absolutely.
Nick Palmer:
Dream.
Gary Ruddell:
Dream. Absolutely. That’s it, isn’t it? The bit that the customer doesn’t even need to know about. I just got a new iPhone the other day and transferred all my apps across using the iCloud backup, and then logged into the Monzo app. And I was kind of like, it’s been a while since I’ve done this, surely they’re going to check. I’m not just going to put a password in and get some email 2FA thing. And sure enough, I had to record a video of myself talking into the camera and all that sort of stuff, and then that was checked by someone. But as you say, I don’t really care if it’s AI that checks it or a human that checks it, I don’t need to know. It’s just super easy. Twenty minutes later or so, I was logged in. So it was reassuring to know that someone else pretending to be me can’t get into that account as easy as pie. So, Amy, if you could share one takeaway from this episode, one thing for businesses or individuals to understand about financial cybercrime, what would it be?
Amy Grieveson:
I might have to cheat with a really long sentence if it’s just one takeaway I’m allowed to share. Look, scams are going to keep on evolving. The tactics, the technology, the tone, they’re going to keep on changing, probably faster than any of us can predict or keep up with. Agena is a great example of that, and there’ll be many future Agenas that we’ll be talking about, no doubt. So the real focus just has to be on people, not on systems in isolation. We need to build confidence, awareness, and habits that people can use to spot the scams of today and the scams that come up as they keep changing in the future.
Gary Ruddell:
Awesome. Truer words have never been spoken. Well, Amy, thanks so much for joining us. It’s been fantastic speaking with you. I look forward to seeing all the great things that Monzo does in the security space.
Amy Grieveson:
Thanks for having me, it’s been great.