Part 2: One year after the first article, our prediction proved correct: IBM has developed a ransomware prototype able to choose specific targets. Let's see together how it works.
With some delay, here is the second part of my article about "spear ransomware" (you can find the first part here: https://www.psynd.ch/blog?page=170711-spear-ransomware-1-mauro-verderosa).
In August 2018 I had the chance to present at the Emergency Forum in Geneva, organized by the Revue Militaire Suisse, how a new type of ransomware driven by Artificial Intelligence (AI) could work.
In the first part of this article we learned that ransomware propagates like a common virus, a Trojan horse or another type of malware, moving between networks (the Internet or LANs) or media (CDs or USB drives) and attacking unlucky victims by encrypting their files. As with a phishing attack, the attackers' motivation is financial, and exactly as with phishing, we should start considering that ransomware could be used not only against random victims, but also against targeted ones: something we called a "spear ransomware".
A spear phishing attack is sent directly to a C-level executive, but how could this work for ransomware? What if the ransomware were able to make intelligent choices before activating and revealing its real nature?
During the conference I presented how easy it could be to carry malware through video conferencing tools based on WebRTC: Skype, Skype for Business, WebEx, FaceTime, WhatsApp, etc. The simple fact of establishing a connection between two endpoints can open a gate between you and the victim through which the malware could be transmitted.
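To make that point concrete, here is a minimal, purely illustrative sketch (assuming Python and the aiortc library; the file path and channel label are invented, and the signaling exchange and receiving peer are omitted) of why a WebRTC session is also a generic byte pipe: once the peer connection is up, a data channel can push arbitrary binary content alongside the audio and video.

```python
# Minimal sketch, assuming the aiortc library (pip install aiortc).
# It only illustrates the capability discussed above: a WebRTC data channel
# transports arbitrary bytes once the call is established. The signaling
# (SDP offer/answer exchange) and the receiving peer are left out.
import asyncio
from aiortc import RTCPeerConnection

async def send_file_over_datachannel(path: str) -> None:
    pc = RTCPeerConnection()
    channel = pc.createDataChannel("catalog")  # an innocuous-looking label

    @channel.on("open")
    def on_open() -> None:
        # Any binary content can travel here: nothing inspects the file type.
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(16 * 1024), b""):
                channel.send(chunk)

    # In a real call, the offer/answer would be relayed by the conferencing
    # service's signaling server; that exchange is omitted in this sketch.
    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)

# asyncio.run(send_file_over_datachannel("catalog.pptx"))  # illustrative only
```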
Even easier would be to invite the victim to a counterfeit, remotely hosted WebEx or Skype for Business link.
The objection here is obvious: none of our technicians would be so naïve as to accept calls from unknown people outside the office network. Raise your hand if you normally receive Skype calls from strangers while you are in the office. Most of you will not. But how many of you might receive a Skype call from your HR department or from the help desk? I guess the number of raised hands is now rising sharply.
In August 2018 IBM announced the creation of a prototype ransomware able to use AI: DeepLocker. Although this malware is harmless, it is enough to help us understand how a "new generation ransomware" could carry out its attacks. In the proof of concept that was implemented, DeepLocker was able to hide and carry the code of WannaCry.
Note: this article does not aim to focus on DeepLocker itself, but to use it as an example to explain how this type of malware could work. Some of the examples given may therefore not match exactly what DeepLocker does.
While propagating, DeepLocker behaves like polymorphic malware: the internal code containing the functions, methods and instruction sets with the attack logic is kept encrypted so that no antivirus can detect it until it is ready to strike the target. Unlike classical polymorphic malware, in this scenario we should imagine several layers of information enclosed in an onion structure, where each layer holds the information needed to decrypt the next, inner layer along with a different piece of logic. For example, the first layer indicates the target; once the target is reached, the second layer is unlocked and reveals the conditions that trigger the attack (for instance, while the antivirus or Windows is updating); the third and final layer reveals the code of the malware itself. This malware moves through your network like a "black-box onion": the identity of the target and the actions to be performed remain concealed until the final target(s) is reached.
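To give an idea of how such a "black-box onion" could be built, here is a purely conceptual sketch of the environmental-keying idea behind it (assuming Python 3.10+ and the cryptography package; the hostname attribute is a simplified stand-in, not DeepLocker's actual mechanism): each layer is encrypted with a key derived from an attribute of the intended target, so neither the payload nor the target's identity can be recovered until the sample is actually running on that target.

```python
# Conceptual sketch only, assuming Python 3.10+ and the "cryptography" package.
# The key of each layer is derived from an attribute of the intended target
# (here, simply the hostname), so an analyst who captures the sample on any
# other machine sees only opaque ciphertext: without guessing the target,
# neither the next layer nor the payload can be read.
import base64
import hashlib
import socket
from cryptography.fernet import Fernet, InvalidToken

def derive_layer_key(attribute: str) -> bytes:
    """Turn an environmental attribute into a urlsafe-base64 Fernet key."""
    digest = hashlib.sha256(attribute.encode()).digest()
    return base64.urlsafe_b64encode(digest)

def try_unlock_next_layer(encrypted_layer: bytes) -> bytes | None:
    """Decrypt the next onion layer only if this host is the intended target."""
    key = derive_layer_key(socket.gethostname())
    try:
        return Fernet(key).decrypt(encrypted_layer)
    except InvalidToken:
        return None  # wrong host: the layer (and the target's identity) stays hidden
```

Note that whoever prepares the sample uses the same derivation to encrypt the inner layers in advance, so the key itself never travels with the malware.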
How could this malware actually work in a real environment? Let's try to imagine its modus operandi.
Any good hacker knows that "a chain is only as strong as its weakest link". In every company there are people who are more exposed to the external world than others: think of secretaries, or the people dealing with internal supplies, for example coffee or napkins for the kitchen.
Let's build a scenario together: our hacker contacts our company to propose some services, for example a new brand of coffee. He would like to share an online catalog with our Supply Manager and proposes to walk through a PowerPoint presentation in a web conference. At this very moment the infection is ready to propagate inside the company.
The role of the AI is to understand whether or not it has reached the desired target. In this specific case, instead of launching the attack and revealing its true nature, the malware prefers to keep its harmful functionality hidden and only aims to replicate across the network.
The moment our Supply Manager gets in contact with another department, perhaps again through another conference call, the infection continues to spread from host to host.
All these "links of the chain" are healthy carriers. None of them is actively infected, only passively: the malware passes through their hosts without being detected by an antivirus. The moment a host comes into direct contact with our targeted C-level executive, the malware mutates and starts the attack.
In the example from the proof of concept, we can assume that our C-level executive already has an antivirus able to detect and block the famous WannaCry, but would it be able to detect attacks from new, or unknown, ransomware?
There is one last important point: we talked about the AI being used to decide how and when to camouflage itself from an antivirus, but how could it select the target? Perhaps we could key it to a MAC address, a specific fixed IP address, a subnet, or some other piece of information linked to our target's host. But don't forget that we are talking about an AI, and we could implement far more complex choices. Remember that in all our examples we used a conference call system, an application that has access to your camera and your microphone. What if, in the future, this type of malware, once inside the network, were able to perform photo matching to recognize its target? Imagine that you only need to know the company your target works for, submit a LinkedIn or Facebook photo, and the algorithm does the rest. This means that such an attack could potentially be carried out without specific IT knowledge, and not only against businesses, but also against private citizens. Scary, isn't it?
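As a rough, hedged illustration of how little that final check requires (assuming Python and the open-source face_recognition package; the file names are invented for the example), comparing a reference photo against a captured frame is only a few library calls, not a research project:

```python
# Illustrative sketch, assuming the open-source "face_recognition" package
# (pip install face_recognition). File names are invented for the example.
import face_recognition

def frame_matches_reference(reference_photo: str, captured_frame: str) -> bool:
    """Return True if the face in the captured frame matches the reference photo."""
    reference = face_recognition.load_image_file(reference_photo)
    frame = face_recognition.load_image_file(captured_frame)

    reference_encodings = face_recognition.face_encodings(reference)
    frame_encodings = face_recognition.face_encodings(frame)
    if not reference_encodings or not frame_encodings:
        return False  # no face found in one of the images

    matches = face_recognition.compare_faces([reference_encodings[0]],
                                             frame_encodings[0])
    return bool(matches[0])

# frame_matches_reference("linkedin_photo.jpg", "webcam_frame.jpg")  # illustrative only
```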
This kind of attack could be extremely hard to prevent. The market does not yet provide effective solutions for such scenarios, but the best practices to follow are always the same:
- Keep your systems updated
- Keep your applications patched
- React and inform your IT department when any anomaly occurs
- Always use common sense
For any additional questions, please do not hesitate to ping me.