Question
Do state-of-the-art commercial URL threat intelligence feeds (for example, those from Symantec, Intel Security, etc.) miss malicious domains (i.e., domains serving malware, exploit code, and phishing pages, to make the discussion specific)? If so, why? What technical challenges do people face in solving this problem? I understand that detection behind the scenes typically combines many methods, including machine learning and capturing data with honeypots. I want to understand specifically which technical obstacles limit the effectiveness of these methods and why they miss what they miss.
Explanation / Answer
Detecting malicious URLs is a difficult and, in many ways, unwinnable game, much like detecting malware in executables. If you are trying to protect your network from malicious URLs, there are threat feeds available, and they are useful, but as others have stated, domains are cheap and easy to throw away: an attacker can register and start using a fresh domain faster than a feed vendor can observe and list it.
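To make the limitation concrete, here is a minimal sketch (in Python, with a hypothetical local feed file `feed.txt`) of the lookup a feed-based filter performs. The weakness is built in: a throwaway domain registered an hour ago simply is not in the feed yet, so the check fails open for exactly the domains attackers rotate through fastest.

```python
# Minimal sketch of feed-based URL blacklisting.
# "feed.txt" is a hypothetical local copy of a commercial threat feed,
# one malicious domain per line.
from urllib.parse import urlparse


def load_feed(path: str) -> set[str]:
    """Load a threat feed into a set for O(1) lookups."""
    with open(path) as fh:
        return {line.strip().lower() for line in fh if line.strip()}


def is_blacklisted(url: str, feed: set[str]) -> bool:
    """Check the URL's host and its parent domains against the feed.

    A freshly registered, disposable domain returns False here until
    the vendor observes it and ships an updated feed.
    """
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    # Check the full host and every parent domain (e.g. evil.example.com
    # also checks example.com), since feeds mix hosts and domains.
    candidates = {".".join(parts[i:]) for i in range(len(parts) - 1)}
    return bool(candidates & feed)


if __name__ == "__main__":
    feed = load_feed("feed.txt")
    print(is_blacklisted("http://evil.example.com/payload.exe", feed))
```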
The strongest way to prevent malicious URLs is DNS whitelisting, which flips filtering on its head: instead of trying to detect the bad, you whitelist only what you know your users should have access to and deny everything else by default. This won't prevent everything, because legitimate domains are sometimes hacked and used to spread malware, but from a security perspective it is a better approach than blacklisting.
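Here is a minimal sketch of the default-deny check at the heart of such a filter. The domain names are placeholders, and a real deployment would enforce this at the resolver or firewall rather than in application code, but it shows why a brand-new malicious domain fails closed instead of open:

```python
# Sketch of an allowlist (whitelist) check: permit a name only if it is
# an allowlisted domain or a subdomain of one; refuse everything else.
# The entries below are placeholders, not a recommended list.
ALLOWLIST = {"example.com", "windowsupdate.com", "corp.internal"}


def is_allowed(qname: str) -> bool:
    """Return True only for allowlisted domains and their subdomains."""
    name = qname.rstrip(".").lower()
    parts = name.split(".")
    # Check the name itself and each parent domain against the allowlist,
    # so "cdn.example.com" is permitted because "example.com" is.
    return any(".".join(parts[i:]) in ALLOWLIST for i in range(len(parts)))


# Default-deny: unknown (including freshly registered) domains fail closed.
assert is_allowed("cdn.example.com")
assert not is_allowed("freshly-registered-evil.net")
```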
From a usability perspective, whitelisting is painful in the early stages: you would be shocked at how many domains need to be added just for basic web browsing.
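One way to gauge that scale before committing is to count the distinct domains your users already touch. A rough sketch over a resolver query log follows; the file name `queries.log` and its one-name-per-line format are assumptions, so a real log would need a parsing step first:

```python
# Rough sketch: count distinct second-level domains in a resolver log
# to estimate how large an allowlist basic browsing would require.
# Assumes one queried name per line in "queries.log" (hypothetical).
from collections import Counter


def registered_domain(name: str) -> str:
    """Crude approximation: keep the last two labels.

    A production tool would use the Public Suffix List, since e.g.
    "example.co.uk" has three meaningful labels.
    """
    parts = name.rstrip(".").lower().split(".")
    return ".".join(parts[-2:])


with open("queries.log") as fh:
    domains = Counter(
        registered_domain(line.strip()) for line in fh if line.strip()
    )

print(f"{len(domains)} distinct domains observed")
for domain, hits in domains.most_common(20):
    print(f"{hits:6d}  {domain}")
```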