How can an administrator secure against a 0day before patches are available?


























I'm working on a thesis about the security hacker community.



When a 0day is published, how can an administrator secure their application/website between the time the 0day is published and the time a patch becomes available?



Moreover, most of the time this same 0day has already been used for months by blackhats, so are the blackhats ahead of the whitehats?










Tags: zero-day black-hat white-hat
























    As a classification of their activities, yes. WHs locate vulnerabilities and often recommend solutions, but they are not typically the ones expected to deploy the solutions. They are external people given permission to test. A web admin is not classed as a "whitehat".
    – schroeder
    Dec 13 at 9:48












    There's always one way to prevent every security issue immediately: Shutting down the system.
    – Fabian Röling
    Dec 13 at 10:12












    @FabianRöling That's a common misconception. Security involves the CIA triad: Confidentiality, Integrity, and Availability. A violation of any one of those is considered a security problem. Shutting down a system completely eliminates availability; it effectively becomes a DoS born from the fear of bugs.
    – forest
    Dec 13 at 10:59












    @FabianRöling I think forest's point is if you need to pull the plug on your system, you haven't prevented a security issue. Even though it could've been worse, you still got hacked.
    – Lord Farquaad
    Dec 13 at 22:29










    If it's been published, it isn't exactly a 0day, is it?
    – Acccumulation
    Dec 14 at 22:07
















9 Answers
































The person who discovers a security issue often reports it to the software vendor or developer first. This gives the software vendor time to fix the issue before publication. Then, after it is fixed, the bug is publicly disclosed. This process is called responsible disclosure.



Sometimes, someone doesn't disclose the zero-day to the software vendor but uses it to hack other systems. Doing this can tip off security companies and disclose the bug, burning the zero-day.



I don't think your statement "most of the time, this same 0day is used for months by black hats" is true. This is true for some security issues, but a lot of zero-day bugs are found for the first time by white-hat hackers. I wouldn't say black hat hackers are ahead of white hat hackers. They both find security issues and some of these overlap. However, the offense has it easier than the defense in that the offense only needs to find one bug, and the defense needs to fix all the bugs.





























    Thanks for the answer. I said "most of the time, this same 0day has been used for months by black hats" because I have read a lot of black-hat interviews saying that they were using those 0days well before any publication.
    – K.Fanedoul
    Dec 13 at 8:57










    @pjc50 It is absolutely true that blackhats use 0days months (or years) before they are patched.
    – forest
    Dec 13 at 10:06










    @You: I'd take that number with a huge grain of salt. Just like pretty much any software bug, most security issues that would otherwise have qualified as 0days are fixed hours or days after the bug was introduced in a released version of the software, usually without much fanfare, but these never make the news (or security trackers) because they don't affect many people. The 0days that tend to make the news are the ones that live the longest, so there's a massive selection bias.
    – Lie Ryan
    Dec 13 at 10:57










    @You: IMO, it's a meaningless number, and misleading. It's based on a convenience sample of whatever bugs make the biggest news, and we expect the age of bugs to drop rapidly as you increase the sample size. That doesn't sound like meaningful statistics, since the number doesn't converge. You could choose almost any number you wanted by picking where to stop adding to the sample.
    – Lie Ryan
    Dec 13 at 11:15










    @NiklasHolm That's time from initial discovery to detection by outside party (~31 minutes into the talk).
    – You
    Dec 13 at 12:32


































When a 0day is published, how can an administrator secure their application/website between the time the 0day is published and the patch is developed?




They use temporary workarounds until a patch rolls out.



When news of a 0day comes out, there are often various workarounds that are published which break the exploit by eliminating some prerequisite for abusing the vulnerability. There are many possibilities:




  • Changing a configuration setting can disable vulnerable functionality.


  • Turning off vulnerable services, when practical, can prevent exploitation.


  • Enabling non-default security measures may break the exploit.



Every bug is different, and every mitigation is different. An administrator with a good understanding of security can figure out workarounds on their own if sufficient details about the vulnerability are released. Most administrators, however, will look to security advisories published by the software vendor.
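
For illustration, here is a minimal sketch of the "temporary workaround" idea, assuming a Linux host where the affected service listens on a hypothetical port 8443, iptables is available, and the script runs with root privileges. The vendor's advisory, not a sketch like this, is what should drive the real mitigation:

    import subprocess

    VULNERABLE_PORT = "8443"  # hypothetical port of the affected service

    def block_port_until_patched(port: str) -> None:
        """Temporarily drop inbound TCP traffic to the vulnerable service."""
        subprocess.run(
            ["iptables", "-I", "INPUT", "-p", "tcp", "--dport", port, "-j", "DROP"],
            check=True,
        )

    def unblock_port_after_patching(port: str) -> None:
        """Remove the temporary rule once the official patch is installed."""
        subprocess.run(
            ["iptables", "-D", "INPUT", "-p", "tcp", "--dport", port, "-j", "DROP"],
            check=True,
        )

    if __name__ == "__main__":
        block_port_until_patched(VULNERABLE_PORT)
        print(f"Port {VULNERABLE_PORT} blocked until the patch is deployed")

The same pattern applies to the other bullets above: flip the configuration setting, stop the service, or enable the hardening option, and undo it once the real fix ships.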



Sometimes, an administrator doesn't have to do anything. This can be the case if the vulnerability only affects a non-default configuration, or a configuration which is not set on their systems. For example, a vulnerability in the DRM video subsystem for Linux need not worry a sysadmin with a LAMP stack, since their servers will not be using DRM anyway. A vulnerability in Apache, on the other hand, might be something they should worry about. A good sysadmin knows what is and isn't a risk factor.




Moreover, most of the time, this same 0day is used for months by blackhats, so are the blackhats ahead of whitehats?




Whitehats are more sophisticated, but blackhats are more efficient.



Whether or not blackhats are ahead of whitehats is a very subjective question. Blackhats will use whatever works. This means their exploits are effective, but dirty and, at times, unsophisticated. For example, while it is possible to discover the ASLR layout of a browser via side-channel attacks, this isn't really used in the wild since ubiquitous unsophisticated ASLR bypasses already exist. Whitehats on the other hand need to think up fixes and actually get the software vendor to take the report seriously. This does not impact blackhats to nearly the same extent, as they can often start benefiting from their discovery the moment they make it. They don't need to wait for a third party.



From my own experience, blackhats often have a significant edge. This is primarily because the current culture among whitehats is to hunt and squash individual bugs. Less emphasis is put on squashing entire classes of bugs and when it is, sub-par and over-hyped mitigations are what are created (like KASLR). This means blackhats can pump out 0days faster than they can be patched, since so little attention is given to the attack surface area and exploitation vectors that keep being used and re-used.





























    Another important difference is that the whitehats often have to convince the software vendor to fix the issue and find a fix/mitigation technique. The blackhats don't have to care about that.
    – Lie Ryan
    Dec 13 at 10:45










  • @LieRyan Great point! That is very true.
    – forest
    Dec 13 at 10:45










    If I may add the most effective temporary workaround: turn the servers off. I find it useful to remember that that is a workaround which makes the server (almost) perfectly secure because that immediately leads to the discussion of security vs usability, which is a very important discussion when applying more reasonable workarounds (like the ones you listed). If the balance of security and usability is intuitive, it's kind of pointless to bring this silly workaround up, but if it isn't intuitive for someone, it may provoke thought.
    – Cort Ammon
    Dec 13 at 17:05

































When a zero-day is released or published, it comes with more than just a fancy name and icon. There are details about how the zero-day is used to exploit the system. Those details form the basis of the defender's response, including how the patch needs to be designed.



For example, with WannaCry/EternalBlue, the vulnerability was found by the NSA and they kept the knowledge to themselves (the same happens in the criminal community where vulnerabilities can be traded on the black market). The details were leaked, which informed Microsoft how to create the patch and it also informed administrators how to defend against it: disable SMBv1 or at least block the SMB port from the Internet.
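
As a toy illustration of that second workaround, here is a small sketch (Python standard library only) for checking from an outside vantage point whether the SMB port on one of your own hosts is still reachable; the host name is a placeholder, not something from the original incident:

    import socket

    HOST = "server.example.org"  # placeholder: one of your own public-facing hosts
    SMB_PORT = 445

    def smb_reachable(host: str, port: int = SMB_PORT, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to the SMB port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        if smb_reachable(HOST):
            print(f"{HOST}: TCP/{SMB_PORT} is reachable -- block it at the perimeter")
        else:
            print(f"{HOST}: TCP/{SMB_PORT} appears filtered or closed")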



That's how admins protect themselves. Patching is only one part of "vulnerability management". There are many things that an admin can do to manage vulnerabilities even if they cannot or do not want to patch.



In the WannaCry case, the NHS did not patch, but they also did not employ the other defenses that would have protected themselves.



One large part of my job is designing vulnerability mitigations for systems that cannot be patched for various business reasons. Patching is the better solution in general, but sometimes it just isn't possible at the time.




... are the blackhats ahead of whitehats?




That poses an interesting problem. If a blackhat finds a problem and only shares it with other blackhats (or other members of the intelligence community), does that mean that blackhats, in aggregate, are ahead of whitehats? Yes. Once a zero-day is exposed, it loses its power (that's the whole point of disclosing it), so keeping it secret gives it power.



Are blackhats more skilled, or do they use better techniques, than whitehats? No. But the shared secrets give blackhats more power, in aggregate.





























  • I disagree that shared secrets give blackhats more power in aggregate. While there is trading of information in the underground, it's highly localized. I believe it's the culture that prioritizes bug hunting (as opposed to mitigation research) which gives an edge to blackhats. You may fix one bug that I used ROP to exploit, but your lack of effective and ubiquitous CFI means I'll find another in no time.
    – forest
    Dec 13 at 11:01












  • The fact that the utility of a zero-day is largely tied to how long it remains a secret is part of the main argument in favor of the Full Disclosure policy, vs Responsible/Coordinated Disclosure: en.wikipedia.org/wiki/Full_disclosure_(computer_security) Full Disclosure to everyone effectively burns the zero-day immediately
    – Chris Fernandez
    Dec 13 at 19:10












    @ChrisFernandez Full disclosure is good when the software vendor doesn't do timely updates and doesn't listen to security researchers. In that case, full disclosure empowers users to defend themselves with workarounds. When the vendor is responsive and actually cares about security, then responsible disclosure may be better, since they won't sit on the bug for ages.
    – forest
    Dec 14 at 3:17










  • Full disclosure will kill unresponsive and incompetent vendors. If it had been practiced for a long time, full disclosure would have eliminated most vendors who think they can do quality control after bringing software to market. This is the way bad actors are eliminated in any other industrial sector.
    – daniel Azuelos
    Dec 17 at 15:26


































When a 0day is published, how can a whitehat secure his application/website between the time the 0day is published and the patch is developed?




Sometimes there are workarounds which fix or mitigate the problem.




  • Sometimes you can disable some feature or change some setting in the software which causes the exploit to not work anymore. For example, infection with the Morris Worm from 1988 could be prevented by creating a directory /usr/tmp/sh. This confused the worm and prevented it from working.

  • Sometimes the exploit requires some kind of user interaction. In that case you can warn the users to not do that. ("Do not open emails with the subject line ILOVEYOU"). But because humans are humans, this is usually not a very reliable workaround.

  • Sometimes the attack is easy to identify on the network, so you can block it with some more or less complicated firewall rule. The Conficker virus, for example, was targeting a vulnerability in the Windows Remote Procedure Call service. There is usually no reason for this feature to be accessible from outside the local network at all, so it was possible to protect a whole network by simply blocking outside requests to port 445 TCP.

  • Sometimes it is viable to replace the vulnerable software with an alternative. For example, our organization installs two different web browsers on all Windows clients. When one of them has a known vulnerability, the admins can deactivate it via group policy and tell the users to use the other one until the patch is released.

  • As a last resort, you can simply pull the plug on the vulnerable systems. Whether the systems being unavailable causes more or less damage than leaving them online and open to exploits is a business consideration you have to evaluate in each individual case.


But sometimes none of these is a viable option. In that case you can only hope that there will be a patch soon.




Moreover, most of the time, this same 0day is used for months by blackhats, so are the blackhats ahead of whitehats?




It happens quite frequently that developers / whitehats discover a possible security vulnerability in their software and patch it before it gets exploited. The first step of responsible disclosure is to inform the developers so they can fix the vulnerability before you publish it.



But you usually don't hear about that in the media. When point 59 of the patch notes for SomeTool 1.3.9.53 reads "fixed possible buffer overflow when processing malformed foobar files" that's usually not particularly newsworthy.





























  • I believe your Morris worm example is a poor one. The Morris worm used several vulnerabilities for jumping between and infecting systems, of which the fingerd flaw was one. (There was also at least sendmail's debug mode, and common user account passwords.) If I recall correctly, the real trick to defuse that one was to mkdir /tmp/sh.
    – a CVn
    Dec 13 at 10:57










    @aCVn Fixed. Thank you.
    – Philipp
    Dec 13 at 11:31










  • Good point about turning the machine off being a reasonable business decision sometimes.
    – trognanders
    Dec 16 at 8:45

































Most potential exploits require a chain of vulnerabilities in order to be executed. By reading the as-yet unpatched zero-day, you can still identify other vulnerabilities or pre-conditions that the zero-day would require.



To defend against the threat of, say, an RDP attack from outside the network (a zero-day RDP authentication failure has been published), do not allow RDP from off-site. If you don't really need RDP from outside, then this is a chance to correct an oversight. Or, if you must have RDP from off-site, perhaps you can identify a whitelist of IPs from which to allow these connections, and narrow the aperture in the firewall; a sketch follows below.
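
As a minimal sketch of that whitelist idea (standard library only, with made-up documentation-range CIDRs standing in for your branch offices), the check that a firewall or gateway would apply looks roughly like this:

    import ipaddress

    # Hypothetical branch-office ranges that are allowed to reach RDP (TCP/3389)
    ALLOWED_SOURCES = [
        ipaddress.ip_network("203.0.113.0/28"),
        ipaddress.ip_network("198.51.100.64/27"),
    ]

    def rdp_source_allowed(source_ip: str) -> bool:
        """Return True only if the source address falls inside an approved range."""
        addr = ipaddress.ip_address(source_ip)
        return any(addr in net for net in ALLOWED_SOURCES)

    if __name__ == "__main__":
        for ip in ("203.0.113.5", "192.0.2.99"):
            verdict = "allow" if rdp_source_allowed(ip) else "drop"
            print(f"{ip}: {verdict}")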



Likewise, to defend against an inside (and to some extent outside) RDP threat, limit the ability of A) users to execute RDP, B) machines to execute RDP, C) the network to pass RDP, D) machines to accept RDP, E) users to allow RDP. Which VLANs should have the ability to generate outbound RDP? Which machines should be able to do this? And so forth.



Every one of these steps, in both the outsider and insider scenarios, works to harden your network against an RDP authentication exploit even without a patch.



A defense-in-depth mentality allows you to break the chain of vulnerabilities and conditions that an attacker needs, so that even an unpatched zero-day can be countered. Sometimes.



I have intentionally chosen a fairly easy problem here just to illustrate the point.



Source -- I have done this before.

















































The problem is not only with zero-days. There are plenty of companies that still take 200+ days to apply patches, for a multitude of reasons (some good, some bad).

You have a large list of solutions; another one is to use virtual patching. It creates a mitigation for the issue before it reaches the vulnerable service (I learned about it years ago through a Trend Micro product; no links with them, but I tested it and it mostly worked). A toy sketch of the idea follows below.
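
As a toy illustration of the virtual-patching idea (not how any particular commercial product works), here is a sketch using Flask, assuming Flask is installed and assuming a hypothetical advisory saying the exploit needs a crafted "template" query parameter; the filter rejects matching requests before they reach the still-vulnerable handler:

    import re
    from flask import Flask, abort, request

    app = Flask(__name__)

    # Hypothetical signature taken from the advisory for the unpatched bug
    EXPLOIT_PATTERN = re.compile(r"\{\{.*__class__.*\}\}")

    @app.before_request
    def virtual_patch():
        """Reject requests that match the known exploit signature."""
        value = request.args.get("template", "")
        if EXPLOIT_PATTERN.search(value):
            abort(403)

    @app.route("/render")
    def render():
        # The (hypothetically) vulnerable endpoint, unchanged until the real patch ships
        return "rendered: " + request.args.get("template", "")

    if __name__ == "__main__":
        app.run()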

















































Another key defense is monitoring, and knowing your system: where are your valuable secrets, and who has access to them?

If someone tries to connect to your mail server on port 80, that's a red flag. Why is the mail server suddenly sending traffic to an unusual IP? Why does the mail server suddenly have 10x its normal traffic?

Monitor who connects to your external IP addresses. Drop and/or block all external ports and protocols that are not in use. No legitimate user is going to connect to your web server on anything but 80 or 443, unless you have added additional services. You might consider blocking offending IPs for some time; sometimes IPs are part of dynamic pools and you can't always solve the problem with a blacklist, in which case you just drop the packets. If your business only operates in one country, maybe you should just block all other countries.

You can use whois to find the owner of an IP address range and, if an administrative contact is listed, notify the owner so they can track it down on their end. (It's worth a try.)

You should get notified when any system is contacted by another system in an unexpected way. At first you may get a ton of notifications, but if the computers are on your network you can investigate both sides and then either eliminate the traffic or whitelist it as expected. These monitoring tools will also notify you about port scans; unless you have an authorized security team, no one else should be port scanning.

Watch for regular events, and if they stop mysteriously, ask why. Check the machine for infections. If services are going to be disabled, you should be notified in advance so the changes are expected rather than mysterious.

Block as much as possible and monitor the rest.

Once you detect an attack, you need to do something about it. Sometimes turning the system off temporarily is the only option; maybe you need to block the attacker's IP address for a while. You still have to protect and monitor all your legitimate services.

In addition to monitoring the community for vulnerability announcements, you should have penetration testers find the bugs before the hackers do. Then you have a chance to mitigate the attack on your terms, notifying the maintainer of the affected system so they can patch it; if it's open source, you can have someone patch it for you. Intrusion detection systems such as Snort can also examine and potentially block incoming attacks by detecting suspicious patterns.

You may have to find an alternate product to replace the vulnerable one, depending on the severity of the problem. As always, keeping your software up to date helps protect you.

This way you can block suspicious activity until you determine it is legitimate.
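
For illustration, a minimal sketch of the "unexpected listener" part of this monitoring, assuming the third-party psutil package is installed and the script runs with enough privileges to see other processes' sockets; the allow-list is a made-up example:

    import psutil

    # Hypothetical allow-list: the only ports this host is expected to serve
    EXPECTED_LISTEN_PORTS = {22, 80, 443}

    def unexpected_listeners():
        """Yield (port, pid) pairs for listening TCP sockets outside the allow-list."""
        for conn in psutil.net_connections(kind="tcp"):
            if conn.status == psutil.CONN_LISTEN and conn.laddr:
                port = conn.laddr.port
                if port not in EXPECTED_LISTEN_PORTS:
                    yield port, conn.pid

    if __name__ == "__main__":
        for port, pid in unexpected_listeners():
            print(f"ALERT: unexpected listener on port {port} (pid {pid})")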

















































Relatively few hacks let the attacker break into a system. Most are "privilege escalation" bugs that give an attacker greater control over the system after they already have access to it. There are so many ways to achieve administrative control of a machine once a hacker has access to it that it is more or less a waste of time to try to secure a machine against privilege escalation. Your best policy is to focus on preventing hackers from getting inside in the first place and on monitoring your network for intrusion.

Nearly all intrusions come from just three methods. You want to spend all your available cyber-defense resources defending against these. They are:

(1) Phishing emails containing poisoned PDFs or PPTs. There are tons of zero-days targeting PDFs and PPTs, and the nature of both formats is such that there is more or less no way to secure yourself against a contemporary trojan in either one. Therefore, you basically have two options: require all PDF/PPT attachments to go through a vetting process, which is not practical for most organizations, or train your employees to vet emails themselves, which is the best option in most cases. A third option is to test all PDFs and PPTs sent to the organization in a sandboxed environment after the fact, but this is only possible for advanced organizations, like the military, not the average company. Option 3 of course does not prevent the intrusion; it just warns you immediately if one occurs.

(2) Browser vulnerabilities. The vast majority of browser-based exploits target Internet Explorer, so you can defend against probably 95% of these just by preventing users from using IE and requiring them to use Chrome or Firefox. You can prevent 99% of browser-based exploits by requiring users to use NoScript and training them in its use, which unfortunately is not practical for most organizations.

(3) Server vulnerabilities. An example would be the NTP bug from a few years back. You can largely defend against these by making sure that all company servers run on isolated networks (a "demilitarized zone") and that those servers are tight and not running unnecessary services. You especially want to make sure that any company web servers run by themselves in isolated environments and that nothing can get into or out of those environments without a human explicitly doing the copy in a controlled way.

Of course there are lots of exploits that fall outside these categories, but your time is best spent addressing the three classes of vulnerabilities listed above.

















































It's OK for an attacker to have 0days; the problem is how many 0days they have and how much it costs them to burn all of those 0days in your network.

If your patches are not up to date, it lowers the cost for an attacker to develop a kill chain.

Think about how you would start attacking a network. Say you start with a phishing attack or a watering-hole attack.

If it is a watering-hole attack, you might need to find a 0day in Flash that lets you execute code in the browser, and then you might need another 0day to break out of the browser sandbox. Next you might face AppContainer, which requires another exploit to reach OS-level privileges. There are also protection mechanisms such as SIP on macOS, which mean that even with root access you cannot access important resources, so you need yet another 0day kernel exploit. If the target is running Windows 10 with Credential Guard and you are going after lsass.exe, you might need another 0day to attack the hypervisor.

So the attack turns out to be very expensive and to require a lot of research effort, and while you are exploiting each layer you might trigger a security alert.

So as a defender: know your network well, have defensive controls in every single layer, and you should be able to defend against a 0day attack.



























  • "It's OK for an attacker to have 0days; the problem is how many 0days they have and how much it costs them to burn all of those 0days in your network." I mean, it is not really okay to have 0-day vulnerabilities, if that is what you're suggesting, but yes, all code has bugs and they should be fixed. Having any vulnerabilities is not okay and they should be patched, even if it is expensive to abuse them.
    – Kevin Voorn
    Dec 14 at 3:28










  • @KevinVoorn Yeah, agreed; that's why I said that if you don't keep patches up to date, it lowers the cost for an attacker to develop a kill chain. Patching is still very important, you just can't stop someone from having a 0day.
    – Timothy Leung
    Dec 15 at 3:23












          9 Answers
          9






          active

          oldest

          votes








          9 Answers
          9






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          45














          The person who discovers a security issue often reports it to the software vendor or developer first. This gives the software vendor time to fix the issue before publication. Then, after it is fixed, the bug is publicly disclosed. This process is called responsible disclosure.



          Sometimes, someone doesn't disclose the zero-day to the software vendor but uses it to hack other systems. Doing this can tip off security companies and disclose the bug, burning the zero-day.



          I don't think your statement "most of the time, this same 0day is used for months by black hats" is true. This is true for some security issues, but a lot of zero-day bugs are found for the first time by white-hat hackers. I wouldn't say black hat hackers are ahead of white hat hackers. They both find security issues and some of these overlap. However, the offense has it easier than the defense in that the offense only needs to find one bug, and the defense needs to fix all the bugs.






          share|improve this answer



















          • 1




            Thank's for the answer, I said that : "most of the time, this same 0day is used since months by black hats" because i have read a lot of black hats interview saying that they are using those 0day way before any publication
            – K.Fanedoul
            Dec 13 at 8:57






          • 3




            @pjc50 It is absolutely true that blackhats use 0days months (or years) before they are patched.
            – forest
            Dec 13 at 10:06






          • 11




            @You: I'd take that number with a huge grain of salt. Just like pretty much any software bugs, most security issues that would have otherwise qualified as 0day are often fixed hours or days after said bug was introduced to a released version of the software, usually without much fanfare, but these never make news (or security trackers) because they don't affect many people. The 0day that tend to make news are those that lives the longest, so there's a massive selection bias.
            – Lie Ryan
            Dec 13 at 10:57






          • 1




            @You: IMO, it's a meaningless number, and misleading. It's based on convenience sample of whatever bugs makes biggest news and as we expect the age of bugs will go down rapidly as you increase the sample size. That doesn't sound like meaningful statistics as the number doesn't converge. You can just choose almost any number you wanted by picking where to stop adding to the sample.
            – Lie Ryan
            Dec 13 at 11:15






          • 1




            @NiklasHolm That's time from initial discovery to detection by outside party (~31 minutes into the talk).
            – You
            Dec 13 at 12:32
















          45














          The person who discovers a security issue often reports it to the software vendor or developer first. This gives the software vendor time to fix the issue before publication. Then, after it is fixed, the bug is publicly disclosed. This process is called responsible disclosure.



          Sometimes, someone doesn't disclose the zero-day to the software vendor but uses it to hack other systems. Doing this can tip off security companies and disclose the bug, burning the zero-day.



          I don't think your statement "most of the time, this same 0day is used for months by black hats" is true. This is true for some security issues, but a lot of zero-day bugs are found for the first time by white-hat hackers. I wouldn't say black hat hackers are ahead of white hat hackers. They both find security issues and some of these overlap. However, the offense has it easier than the defense in that the offense only needs to find one bug, and the defense needs to fix all the bugs.






          share|improve this answer



















          • 1




            Thank's for the answer, I said that : "most of the time, this same 0day is used since months by black hats" because i have read a lot of black hats interview saying that they are using those 0day way before any publication
            – K.Fanedoul
            Dec 13 at 8:57






          • 3




            @pjc50 It is absolutely true that blackhats use 0days months (or years) before they are patched.
            – forest
            Dec 13 at 10:06






          • 11




            @You: I'd take that number with a huge grain of salt. Just like pretty much any software bugs, most security issues that would have otherwise qualified as 0day are often fixed hours or days after said bug was introduced to a released version of the software, usually without much fanfare, but these never make news (or security trackers) because they don't affect many people. The 0day that tend to make news are those that lives the longest, so there's a massive selection bias.
            – Lie Ryan
            Dec 13 at 10:57






          • 1




            @You: IMO, it's a meaningless number, and misleading. It's based on convenience sample of whatever bugs makes biggest news and as we expect the age of bugs will go down rapidly as you increase the sample size. That doesn't sound like meaningful statistics as the number doesn't converge. You can just choose almost any number you wanted by picking where to stop adding to the sample.
            – Lie Ryan
            Dec 13 at 11:15






          • 1




            @NiklasHolm That's time from initial discovery to detection by outside party (~31 minutes into the talk).
            – You
            Dec 13 at 12:32














          45












          45








          45






          The person who discovers a security issue often reports it to the software vendor or developer first. This gives the software vendor time to fix the issue before publication. Then, after it is fixed, the bug is publicly disclosed. This process is called responsible disclosure.



          Sometimes, someone doesn't disclose the zero-day to the software vendor but uses it to hack other systems. Doing this can tip off security companies and disclose the bug, burning the zero-day.



          I don't think your statement "most of the time, this same 0day is used for months by black hats" is true. This is true for some security issues, but a lot of zero-day bugs are found for the first time by white-hat hackers. I wouldn't say black hat hackers are ahead of white hat hackers. They both find security issues and some of these overlap. However, the offense has it easier than the defense in that the offense only needs to find one bug, and the defense needs to fix all the bugs.






          share|improve this answer














          The person who discovers a security issue often reports it to the software vendor or developer first. This gives the software vendor time to fix the issue before publication. Then, after it is fixed, the bug is publicly disclosed. This process is called responsible disclosure.



          Sometimes, someone doesn't disclose the zero-day to the software vendor but uses it to hack other systems. Doing this can tip off security companies and disclose the bug, burning the zero-day.



          I don't think your statement "most of the time, this same 0day is used for months by black hats" is true. This is true for some security issues, but a lot of zero-day bugs are found for the first time by white-hat hackers. I wouldn't say black hat hackers are ahead of white hat hackers. They both find security issues and some of these overlap. However, the offense has it easier than the defense in that the offense only needs to find one bug, and the defense needs to fix all the bugs.







          share|improve this answer














          share|improve this answer



          share|improve this answer








          edited Dec 15 at 0:28









          Ben

          1033




          1033










          answered Dec 13 at 8:43









          Sjoerd

          16.6k73957




          16.6k73957








          • 1




            Thank's for the answer, I said that : "most of the time, this same 0day is used since months by black hats" because i have read a lot of black hats interview saying that they are using those 0day way before any publication
            – K.Fanedoul
            Dec 13 at 8:57






          • 3




            @pjc50 It is absolutely true that blackhats use 0days months (or years) before they are patched.
            – forest
            Dec 13 at 10:06






          • 11




            @You: I'd take that number with a huge grain of salt. Just like pretty much any software bugs, most security issues that would have otherwise qualified as 0day are often fixed hours or days after said bug was introduced to a released version of the software, usually without much fanfare, but these never make news (or security trackers) because they don't affect many people. The 0day that tend to make news are those that lives the longest, so there's a massive selection bias.
            – Lie Ryan
            Dec 13 at 10:57






          • 1




            @You: IMO, it's a meaningless number, and misleading. It's based on convenience sample of whatever bugs makes biggest news and as we expect the age of bugs will go down rapidly as you increase the sample size. That doesn't sound like meaningful statistics as the number doesn't converge. You can just choose almost any number you wanted by picking where to stop adding to the sample.
            – Lie Ryan
            Dec 13 at 11:15






          • 1




            @NiklasHolm That's time from initial discovery to detection by outside party (~31 minutes into the talk).
            – You
            Dec 13 at 12:32














          • 1




            Thank's for the answer, I said that : "most of the time, this same 0day is used since months by black hats" because i have read a lot of black hats interview saying that they are using those 0day way before any publication
            – K.Fanedoul
            Dec 13 at 8:57






          • 3




            @pjc50 It is absolutely true that blackhats use 0days months (or years) before they are patched.
            – forest
            Dec 13 at 10:06






          • 11




            @You: I'd take that number with a huge grain of salt. Just like pretty much any software bugs, most security issues that would have otherwise qualified as 0day are often fixed hours or days after said bug was introduced to a released version of the software, usually without much fanfare, but these never make news (or security trackers) because they don't affect many people. The 0day that tend to make news are those that lives the longest, so there's a massive selection bias.
            – Lie Ryan
            Dec 13 at 10:57






          • 1




            @You: IMO, it's a meaningless number, and misleading. It's based on convenience sample of whatever bugs makes biggest news and as we expect the age of bugs will go down rapidly as you increase the sample size. That doesn't sound like meaningful statistics as the number doesn't converge. You can just choose almost any number you wanted by picking where to stop adding to the sample.
            – Lie Ryan
            Dec 13 at 11:15






          • 1




            @NiklasHolm That's time from initial discovery to detection by outside party (~31 minutes into the talk).
            – You
            Dec 13 at 12:32








          1




          1




          Thank's for the answer, I said that : "most of the time, this same 0day is used since months by black hats" because i have read a lot of black hats interview saying that they are using those 0day way before any publication
          – K.Fanedoul
          Dec 13 at 8:57




          Thank's for the answer, I said that : "most of the time, this same 0day is used since months by black hats" because i have read a lot of black hats interview saying that they are using those 0day way before any publication
          – K.Fanedoul
          Dec 13 at 8:57




          3




          3




          @pjc50 It is absolutely true that blackhats use 0days months (or years) before they are patched.
          – forest
          Dec 13 at 10:06




          @pjc50 It is absolutely true that blackhats use 0days months (or years) before they are patched.
          – forest
          Dec 13 at 10:06




          11




          11




          @You: I'd take that number with a huge grain of salt. Just like pretty much any software bugs, most security issues that would have otherwise qualified as 0day are often fixed hours or days after said bug was introduced to a released version of the software, usually without much fanfare, but these never make news (or security trackers) because they don't affect many people. The 0day that tend to make news are those that lives the longest, so there's a massive selection bias.
          – Lie Ryan
          Dec 13 at 10:57




          @You: I'd take that number with a huge grain of salt. Just like pretty much any software bugs, most security issues that would have otherwise qualified as 0day are often fixed hours or days after said bug was introduced to a released version of the software, usually without much fanfare, but these never make news (or security trackers) because they don't affect many people. The 0day that tend to make news are those that lives the longest, so there's a massive selection bias.
          – Lie Ryan
          Dec 13 at 10:57




          1




          1




          @You: IMO, it's a meaningless number, and misleading. It's based on convenience sample of whatever bugs makes biggest news and as we expect the age of bugs will go down rapidly as you increase the sample size. That doesn't sound like meaningful statistics as the number doesn't converge. You can just choose almost any number you wanted by picking where to stop adding to the sample.
          – Lie Ryan
          Dec 13 at 11:15




          @You: IMO, it's a meaningless number, and misleading. It's based on convenience sample of whatever bugs makes biggest news and as we expect the age of bugs will go down rapidly as you increase the sample size. That doesn't sound like meaningful statistics as the number doesn't converge. You can just choose almost any number you wanted by picking where to stop adding to the sample.
          – Lie Ryan
          Dec 13 at 11:15




          1




          1




          @NiklasHolm That's time from initial discovery to detection by outside party (~31 minutes into the talk).
          – You
          Dec 13 at 12:32




          @NiklasHolm That's time from initial discovery to detection by outside party (~31 minutes into the talk).
          – You
          Dec 13 at 12:32













          35















          When a 0day is published, how can an administrator secure his application/website between the time the 0day is published and the patch is developed ?




          They use temporary workarounds until a patch rolls out.



          When news of a 0day comes out, there are often various workarounds that are published which break the exploit by eliminating some prerequisite for abusing the vulnerability. There are many possibilities:




          • Changing a configuration setting can disable vulnerable functionality.


          • Turning off vulnerable services, when practical, can prevent exploitation.


          • Enabling non-default security measures may break the exploit.



          Every bug is different, and every mitigation is different. An administrator with a good understanding of security can figure out workarounds on their own if sufficient details about the vulnerability are released. Most administrators, however, will look to security advisories published by the software vendor.



          Sometimes, an administrator doesn't have to do anything. This can be the case if the vulnerability only affects a non-default configuration, or a configuration which is not set on their systems. For example, a vulnerability in the DRM video subsystem for Linux need not worry a sysadmin with a LAMP stack, since their servers will not be using DRM anyway. A vulnerability in Apache, on the other hand, might be something they should worry about. A good sysadmin knows what is and isn't a risk factor.




          Moreover, most of the time, this same 0day is used for months by blackhats, so are the blackhats ahead of whitehats ?




          Whitehats are more sophisticated, but blackhats are more efficient.



          Whether or not blackhats are ahead of whitehats is a very subjective question. Blackhats will use whatever works. This means their exploits are effective, but dirty and, at times, unsophisticated. For example, while it is possible to discover the ASLR layout of a browser via side-channel attacks, this isn't really used in the wild since ubiquitous unsophisticated ASLR bypasses already exist. Whitehats on the other hand need to think up fixes and actually get the software vendor to take the report seriously. This does not impact blackhats to nearly the same extent, as they can often start benefiting from their discovery the moment they make it. They don't need to wait for a third party.



          From my own experience, blackhats often have a significant edge. This is primarily because the current culture among whitehats is to hunt and squash individual bugs. Less emphasis is put on squashing entire classes of bugs and when it is, sub-par and over-hyped mitigations are what are created (like KASLR). This means blackhats can pump out 0days faster than they can be patched, since so little attention is given to the attack surface area and exploitation vectors that keep being used and re-used.






          share|improve this answer



















          • 6




            Another important difference is that the whitehats often have to convince the software vendor to fix the issue and find a fix/mitigation technique. The blackhats don't have to care about that.
            – Lie Ryan
            Dec 13 at 10:45










          • @LieRyan Great point! That is very true.
            – forest
            Dec 13 at 10:45






          • 2




            If I may add the most effective temporary workaround: turn the servers off. I find it useful to remember that that is a workaround which makes the server (almost) perfectly secure because that immediately leads to the discussion of security vs usability, which is a very important discussion when applying more reasonable workarounds (like the ones you listed). If the balance of security and usability is intuitive, it's kind of pointless to bring this silly workaround up, but if it isn't intuitive for someone, it may provoke thought.
            – Cort Ammon
            Dec 13 at 17:05
















          35















          When a 0day is published, how can an administrator secure his application/website between the time the 0day is published and the patch is developed ?




          They use temporary workarounds until a patch rolls out.



          When news of a 0day comes out, there are often various workarounds that are published which break the exploit by eliminating some prerequisite for abusing the vulnerability. There are many possibilities:




          • Changing a configuration setting can disable vulnerable functionality.


          • Turning off vulnerable services, when practical, can prevent exploitation.


          • Enabling non-default security measures may break the exploit.



          Every bug is different, and every mitigation is different. An administrator with a good understanding of security can figure out workarounds on their own if sufficient details about the vulnerability are released. Most administrators, however, will look to security advisories published by the software vendor.



          Sometimes, an administrator doesn't have to do anything. This can be the case if the vulnerability only affects a non-default configuration, or a configuration which is not set on their systems. For example, a vulnerability in the DRM video subsystem for Linux need not worry a sysadmin with a LAMP stack, since their servers will not be using DRM anyway. A vulnerability in Apache, on the other hand, might be something they should worry about. A good sysadmin knows what is and isn't a risk factor.




          Moreover, most of the time, this same 0day is used for months by blackhats, so are the blackhats ahead of whitehats ?




          Whitehats are more sophisticated, but blackhats are more efficient.



          Whether or not blackhats are ahead of whitehats is a very subjective question. Blackhats will use whatever works. This means their exploits are effective, but dirty and, at times, unsophisticated. For example, while it is possible to discover the ASLR layout of a browser via side-channel attacks, this isn't really used in the wild since ubiquitous unsophisticated ASLR bypasses already exist. Whitehats on the other hand need to think up fixes and actually get the software vendor to take the report seriously. This does not impact blackhats to nearly the same extent, as they can often start benefiting from their discovery the moment they make it. They don't need to wait for a third party.



          From my own experience, blackhats often have a significant edge. This is primarily because the current culture among whitehats is to hunt and squash individual bugs. Less emphasis is put on squashing entire classes of bugs and when it is, sub-par and over-hyped mitigations are what are created (like KASLR). This means blackhats can pump out 0days faster than they can be patched, since so little attention is given to the attack surface area and exploitation vectors that keep being used and re-used.






          share|improve this answer



















          • 6




            Another important difference is that the whitehats often have to convince the software vendor to fix the issue and find a fix/mitigation technique. The blackhats don't have to care about that.
            – Lie Ryan
            Dec 13 at 10:45










          • @LieRyan Great point! That is very true.
            – forest
            Dec 13 at 10:45






          • 2




            If I may add the most effective temporary workaround: turn the servers off. I find it useful to remember that that is a workaround which makes the server (almost) perfectly secure because that immediately leads to the discussion of security vs usability, which is a very important discussion when applying more reasonable workarounds (like the ones you listed). If the balance of security and usability is intuitive, it's kind of pointless to bring this silly workaround up, but if it isn't intuitive for someone, it may provoke thought.
            – Cort Ammon
            Dec 13 at 17:05














          10














          When a zero-day is released or published, it comes with more than just a fancy name and icon. There are details about how the zero-day is used to exploit the system. Those details form the basis of the defender's response, including how the patch needs to be designed.



          For example, with WannaCry/EternalBlue, the vulnerability was found by the NSA and they kept the knowledge to themselves (the same happens in the criminal community where vulnerabilities can be traded on the black market). The details were leaked, which informed Microsoft how to create the patch and it also informed administrators how to defend against it: disable SMBv1 or at least block the SMB port from the Internet.
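
          For instance, on a Linux file server running Samba, either of those two options can be put in place in minutes. This is only a sketch (the interface name is an assumption, and on Windows the analogous steps would be disabling SMB1 and firewalling port 445):

              # Option 1: refuse the vulnerable protocol version in the Samba config,
              # then restart the service.
              sudo sed -i '/^\[global\]/a server min protocol = SMB2' /etc/samba/smb.conf
              sudo systemctl restart smbd

              # Option 2: drop SMB traffic arriving on the Internet-facing interface
              # (eth0 is assumed to be the external interface).
              sudo iptables -A INPUT -i eth0 -p tcp --dport 445 -j DROP
              sudo iptables -A INPUT -i eth0 -p tcp --dport 139 -j DROP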



          That's how admins protect themselves. Patching is only one part of "vulnerability management". There are many things that an admin can do to manage vulnerabilities even if they cannot or do not want to patch.



          In the WannaCry case, the NHS did not patch, but they also did not employ the other defenses that would have protected them.



          One large part of my job is designing vulnerability mitigations for systems that cannot be patched for various business reasons. Patching is the better solution, in general, but sometimes it just isn't possible at the time.




          ... are the blackhats ahead of whitehats?




          That poses an interesting problem. If a blackhat finds a problem and only shares it with other blackhats (or other members of the intelligence community), does that mean that blackhats, in aggregate, are ahead of whitehats? Yes. Once a zero-day is exposed, it loses its power (that's the whole point of disclosing it), so keeping it secret preserves its power.



          Are blackhats more skilled, or do they use better techniques, than whitehats? No. But the shared secrets give blackhats more power, in aggregate.






          answered Dec 13 at 10:08 (edited Dec 13 at 10:23) – schroeder























          • I disagree that shared secrets give blackhats more power in aggregate. While there is trading of information in the underground, it's highly localized. I believe it's the culture that prioritizes bug hunting (as opposed to mitigation research) which gives an edge to blackhats. You may fix one bug that I used ROP to exploit, but your lack of effective and ubiquitous CFI means I'll find another in no time.
            – forest
            Dec 13 at 11:01












          • The fact that the utility of a zero-day is largely tied to how long it remains a secret is part of the main argument in favor of the Full Disclosure policy, vs Responsible/Coordinated Disclosure: en.wikipedia.org/wiki/Full_disclosure_(computer_security) Full Disclosure to everyone effectively burns the zero-day immediately
            – Chris Fernandez
            Dec 13 at 19:10








          • 1




            @ChrisFernandez Full disclosure is good when the software vendor doesn't do timely updates and doesn't listen to security researchers. In that case, full disclosure empowers users to defend themselves with workarounds. When the vendor is responsive and actually cares about security, then responsible disclosure may be better, since they won't sit on the bug for ages.
            – forest
            Dec 14 at 3:17










          • Full disclosure will kill unresponsive and incompetent vendors. If it had been in use for a long time, full disclosure would have eliminated most vendors who think they can do quality control after bringing software to market. This is the way bad actors are eliminated in any other industrial sector.
            – daniel Azuelos
            Dec 17 at 15:26
















          6















          When a 0day is published, how can a whitehat secure his application/website between the time the 0day is published and the patch is developed?




          Sometimes there are workarounds which fix or mitigate the problem.




          • Sometimes you can disable some feature or change some setting in the software which causes the exploit to not work anymore. For example, infection with the Morris Worm from 1988 could be prevented by creating a directory /usr/tmp/sh. This confused the worm and prevented it from working.

          • Sometimes the exploit requires some kind of user interaction. In that case you can warn the users to not do that. ("Do not open emails with the subject line ILOVEYOU"). But because humans are humans, this is usually not a very reliable workaround.

          • Sometimes the attack is easy to identify on the network, so you can block it with a more or less complicated firewall rule (a sketch of such a rule follows this list). The Conficker virus, for example, was targeting a vulnerability in the Windows Remote Procedure Call service. There is usually no reason for this feature to be accessible from outside the local network at all, so it was possible to protect a whole network by simply blocking outside requests to TCP port 445.

          • Sometimes it is viable to replace the vulnerable software with an alternative. For example, our organization installs two different web browsers on all Windows clients. When one of them has a known vulnerability, the admins can deactivate it via group policy and tell the users to use the other one until the patch is released.

          • As a last resort, you can simply pull the plug on the vulnerable systems. Whether the systems being unavailable causes more or less damage than leaving them online and open to exploits is a business consideration you have to evaluate in each individual case.
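
          To make the firewall-rule bullet concrete, here is a sketch for a perimeter firewall built on iptables, assuming a 192.168.0.0/16 internal network (both the tool and the address range are assumptions; adjust to your own topology):

              # File-sharing traffic keeps working from inside the LAN...
              sudo iptables -A FORWARD -s 192.168.0.0/16 -p tcp --dport 445 -j ACCEPT
              # ...but the same port is dropped when the request comes from anywhere else.
              sudo iptables -A FORWARD -p tcp --dport 445 -j DROP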


          But sometimes none of these is a viable option. In that case you can only hope that there will be a patch soon.




          Moreover, most of the time, this same 0day is used for months by blackhats, so are the blackhats ahead of whitehats?




          It happens quite frequently that developers / whitehats discover a possible security vulnerability in their software and patch it before it gets exploited. The first step of responsible disclosure is to inform the developers so they can fix the vulnerability before you publish it.



          But you usually don't hear about that in the media. When point 59 of the patch notes for SomeTool 1.3.9.53 reads "fixed possible buffer overflow when processing malformed foobar files", that's usually not particularly newsworthy.






          answered Dec 13 at 10:10 (edited Dec 13 at 11:57) – Philipp























          • I believe your Morris worm example is a poor one. The Morris worm used several vulnerabilities for jumping between and infecting systems, of which the fingerd flaw was one. (There was also at least sendmail's debug mode, and common user account passwords.) If I recall correctly, the real trick to defuse that one was to mkdir /tmp/sh.
            – a CVn
            Dec 13 at 10:57






          • 1




            @aCVn Fixed. Thank you.
            – Philipp
            Dec 13 at 11:31










          • Good point about turning the machine off being a reasonable business decision sometimes.
            – trognanders
            Dec 16 at 8:45
















          2














          Most potential exploits require a chain of vulnerabilities in order to be executed. By studying the details of the as-yet unpatched zero-day, you can still identify other vulnerabilities or pre-conditions that the zero-day would require.



          To defend against the threat of, say, an RDP attack from outside the network (a zero-day RDP authentication failure has been published), do not allow RDP from off-site. If you don't really need RDP from outside, then this is a chance to correct an oversight. Or, if you must have RDP from off-site, perhaps you can identify a whitelist of IPs from which to allow these connections, and narrow the aperture in the firewall.



          Likewise, to defend against an inside (and to some extent outside) RDP threat, limit the ability of A) users to execute RDP, B) machines to execute RDP, C) the network to pass RDP, D) machines to accept RDP, E) users to allow RDP. Which VLANs should have the ability to generate outbound RDP? Which machines should be able to do this? And so forth.
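
          A minimal sketch of the whitelist idea, assuming a Linux-based gateway in front of the RDP hosts and two known management addresses (every value here is a placeholder):

              # Allow RDP (TCP 3389) only from the approved management IPs...
              sudo iptables -A FORWARD -p tcp --dport 3389 -s 203.0.113.10 -j ACCEPT
              sudo iptables -A FORWARD -p tcp --dport 3389 -s 203.0.113.20 -j ACCEPT
              # ...then log and drop RDP from everyone else, so blocked attempts stay visible.
              sudo iptables -A FORWARD -p tcp --dport 3389 -j LOG --log-prefix "RDP-DENY "
              sudo iptables -A FORWARD -p tcp --dport 3389 -j DROP

          The same narrowing can be expressed in whatever firewall you actually run; the point is that each rule removes one link the exploit chain needs.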



          Every one of these steps, in both the outsider and insider scenarios, works to harden your network against an RDP authentication exploit even without a patch.



          A defense-in-depth mentality allows you to break the chain of vulnerabilities and conditions that an un-patched zero-day requires, so that even without a patch the attack can be countered. Sometimes.



          I have intentionally chosen a fairly easy problem here just to illustrate the point.



          Source -- I have done this before.






          answered Dec 15 at 11:12 – Haakon Dahl


























                  2














                  The problem is not only with zero-days. There are plenty of companies which still drag their feet on 200-day-old patches for a multitude of reasons (some good, some bad).



                  You already have a large list of solutions; another one is to use virtual patching. It usually puts a mitigation for the issue in front of the service, so the exploit is stopped before it reaches it (I learned about it years ago through a Trend Micro product - no affiliation with them, but I tested it and it mostly worked). A crude illustration follows.
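
                  To give a flavour of what virtual patching means at its crudest, the sketch below uses iptables' string-matching module to drop requests containing a published exploit marker before they reach the service. The port and the pattern are invented for illustration; real WAF/IPS products do this far more robustly.

                      # Drop inbound requests whose payload contains the known exploit
                      # signature, so the vulnerable application never sees them.
                      sudo iptables -A INPUT -p tcp --dport 8080 \
                           -m string --algo bm --string "/cgi-bin/vulnerable.cgi?cmd=" -j DROP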






                  answered Dec 15 at 15:07 – WoJ




























                          2














                          Another key defense is monitoring, and knowing your system.



                          Where are your valuable secrets, and who has access to them?



                          If someone tries to connect to your mail server on port 80, red flag.



                          Why is the mail server, all of a sudden, sending traffic to an unusual IP?



                          Why does the mail server suddenly have ten times its normal traffic?



                          Monitor connections to your external IP addresses. Drop and/or block all external ports and protocols that are not in use.



                          No legitimate user is going to connect to your web server on anything but 80 or 443, unless you have added additional services. You might consider blocking the offending IPs for some time. Sometimes IPs are part of dynamic pools, so you can't always solve the problem with a blacklist; then you just drop the packets.



                          If your business only does business in 1 country, maybe you should just block all other countries.



                          You can use whois to find the owner of the IP address range and, if present, use the administrative contact information to notify the owner. They can track it down on their end. (It's worth a try.)



                          You should get notified when any system gets contacted by another system in an unexpected way. At first you may have a ton of notifications, but if the computers are on your network then you can investigate both sides, and either eliminate the traffic or whitelist it as expected.



                          These monitoring tools will also notify you about port scans; unless you have an authorized security team, no one should be port scanning.



                          Watch for regular events, and if they stop mysteriously, ask why.



                          Check the machine for infections. If services are intentionally disabled, you should be notified in advance so the changes are expected and not mysterious.



                          Block as much as possible and monitor the rest.
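
                          A toy sketch of that idea on a single Linux host, assuming the ss utility is present and that only SSH (22) and a web server (80/443) are supposed to be listening (the expected-port list is an assumption to adapt):

                              # Flag any listening TCP port not on the expected list, so an
                              # unexpected service (or a backdoor) stands out immediately.
                              expected="22 80 443"
                              listening=$(ss -tln | tail -n +2 | awk '{print $4}' | awk -F: '{print $NF}' | sort -un)
                              for port in $listening; do
                                  case " $expected " in
                                      *" $port "*) ;;   # known service, ignore
                                      *) echo "ALERT: unexpected listener on TCP port $port" ;;
                                  esac
                              done

                          Run from cron and wired to mail or a chat webhook, even something this small provides the "unexpected change" signal this answer is describing.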



                          Once an attack does happen, you need to do something about it.



                          Sometimes turning the system off temporarily is the only option. Maybe you need to block the attacker's IP address for a while.



                          You still have to protect and monitor all your legitimate services.



In addition to monitoring the community for vulnerability announcements, you should have penetration testers find the bugs before the attackers do. Then you have a chance to mitigate the attack on your terms, notifying the maintainer of the affected system so they can patch it. If it's open source, you can have someone patch it for you.

Intrusion detection systems such as Snort can also examine and potentially block incoming attacks by detecting suspicious patterns.
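Until a vendor patch arrives you can often deploy a signature for the published exploit yourself; an illustrative Snort rule (the URI and SID are made up) might look like:

    alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"Possible exploit of unpatched CGI script"; \
        flow:to_server,established; content:"/cgi-bin/vulnerable.cgi"; nocase; \
        classtype:web-application-attack; sid:1000001; rev:1;)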



                          You may have to find an alternate product to replace the vulnerable one depending on the severity of the problem.



                          As always keeping your software up to date helps to protect you.



This way you can block suspicious activity until you determine it is legitimate.






                              answered Dec 15 at 15:52









                              cybernard

Relatively few hacks allow the attacker to break into a system. Most are "privilege escalation" bugs that give an attacker greater control over a system they already have access to. There are so many ways to achieve administrative control of a machine once a hacker has access that it is more or less a waste of time to try to secure a machine against privilege escalation. Your best policy is to focus on preventing hackers from getting inside in the first place and on monitoring your network for intrusions.



                                  Nearly all intrusions come from just three methods. You want to spend all your available cyber defense resources defending against these. They are:



(1) Phishing emails containing poisoned PDFs or PPTs. There are tons of zero-days targeting PDFs and PPTs, and the nature of both formats is such that there is more or less no way to secure yourself against a contemporary trojan in either one. Therefore, you basically have two options: require all PDF/PPT attachments to go through a vetting process, which is not practical for most organizations, or train your employees to vet emails themselves, which is the best option in most cases. A third option is to test all PDFs and PPTs sent to the organization in a sandboxed environment after the fact, but this is only possible for advanced organizations, like the military, not the average company. Option 3, of course, does not prevent the intrusion; it just warns you immediately if one occurs.



(2) Browser vulnerabilities. The vast majority of browser-based exploits target Internet Explorer, so you can probably defend against 95% of these just by preventing users from using IE and requiring them to use Chrome or Firefox. You can prevent 99% of browser-based exploits by requiring users to use NoScript and training them in its use, which unfortunately is not practical for most organizations.



(3) Server vulnerabilities. An example would be the NTP bug from a few years back. You can largely defend against these by making sure that all company servers are running on isolated networks (a "demilitarized zone") and that those servers are locked down and not running unnecessary services. You especially want to make sure that any company web servers run by themselves in isolated environments and that nothing can get into or out of those environments without a human explicitly doing the copy in a controlled way.
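One quick way to check that a server really is locked down is to inventory what is actually listening and running; on a typical Linux host, for example:

    # List TCP listeners and the processes that own them
    ss -tlnp
    # List running services, to spot anything that should not be there
    systemctl list-units --type=service --state=running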



                                  Of course there are lots of exploits that fall outside these categories, but your time is best spent addressing the three classes of vulnerabilities listed above.






                                      answered Dec 17 at 8:20









                                      Tyler Durden

Well, it's OK if an attacker has 0-days; the problem is how many zero-days they have and how much it costs them to burn all those 0-days on your network.

If your patches are not up to date, it lowers the cost for an attacker to develop a kill chain.

When you think about it, how would you start attacking a network? Let's say you start with a phishing attack or a watering-hole attack.

If it is a watering-hole attack, you might need to find a 0-day in Flash that lets you execute code in the browser, and then you might need to break out of the browser sandbox, which requires another 0-day. Next you might face AppContainer, which requires yet another exploit to reach OS-level privileges. There are also protection mechanisms such as SIP on macOS, which means that even with root access you cannot touch protected parts of the system, so you need another 0-day kernel exploit. If the target is running Windows 10 with Credential Guard and you are going after lsass.exe, you might need another 0-day to attack the hypervisor.

So the attack turns out to be very expensive and requires a lot of research effort, and while you are exploiting each of these layers you might trigger a security alert.

So as a defender, make sure you know your network well and have defence controls in every single layer, and you should be able to defend against a 0-day attack.






                                          answered Dec 14 at 1:30









                                          Timothy Leung

• "Well, it's OK if an attacker has 0-days; the problem is how many zero-days they have and how much it costs them to burn all those 0-days on your network." I mean, it is not really okay to have 0-day vulnerabilities, if that is what you're suggesting, but yes, all code has bugs and those bugs should be fixed. Having any vulnerability is not okay and it should be patched, even if it is expensive to abuse.
  – Kevin Voorn
  Dec 14 at 3:28










• @KevinVoorn Yes, agreed; that's why I said "If your patches are not up to date, it lowers the cost for an attacker to develop a kill chain." Patching is still very important, you just can't stop someone from having a 0-day.
  – Timothy Leung
  Dec 15 at 3:23

















