OpenSSH - Any way to keep strict host key checking but check only the key and ignore server's name?
Question



Is there a way I can keep the strict host key checking behavior on but have it check only the server's key fingerprint (which is effectively already a unique identity for the host), without it also considering the host name/IP?



Problem



I have several personal mobile/roaming devices (phones, laptops, spare phones used as pocket computers, and so on), and I routinely use SSH between them, using the IP address rather than a host name.



Any time I end up on a new network (typically someone's home/office/public WiFi) or DHCP leases expire on an existing network, the IP addresses of those devices get shuffled around, causing one or both of the following situations:




  1. The same host with an unchanged host key shows up at a different address, so ssh prompts me to confirm the same host key again for the new IP.


  2. A new host ends up on the same IP as a previously connected-to host (while the previous host is still alive, just at a different IP now), so when I try to connect to the new host, ssh raises the familiar conflicting-host-key error that blocks the connection. Fixing that requires me either to manage multiple known hosts files with config options or to lose the known host key association for the previous host.



I would like some way to keep the automatic host key checking, but in my use case associating a host name/IP with the server itself is meaningless, so that:




  1. The same host key showing up for a different name/IP should be accepted automatically as a known host.


  2. A different host key at a previously known IP address should just cause the yes/no dialog for a new key.



Put another way, I'd like known_hosts to be checked as if it were just a list of known keys, instead of a list of known name(/IP) <-> key tuples.
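For reference, this is what a plain known_hosts entry binds together today: one name-or-IP pattern and one key, as a tuple (the key material below is a placeholder, not a real key):

```
192.168.1.20 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA<device-key> phone
```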



But, security



Just pre-empting this tangent:



There'd be no real security loss if I could get SSH to ignore the host name/IP and just decide whether the key is new or known, because the server's host key is already as secure a unique identifier for the server as we can get, regardless of what name/IP that server currently has.



The only difference in the event of a MitM would be that I'd get the yes/no prompt instead of the connection obstinately aborting, but I'd still know immediately that something was wrong, since I'd be connecting to a device whose host key I expect to be known.



Comments on other possible solution ideas



DNS doesn't apply, since I'm moving between different networks, often with private LAN IP addresses.



/etc/hosts tweaks would just be a pain given how often I might change networks.



I don't use auto-discovery/self-advertising technologies like mDNS/ZeroConf/Bonjour because they add non-negligible complexity for setup, maintenance and security auditing to some of these small devices (on some of these I have to compile everything I want to use from source), and I'm just generally not a fan of my devices advertising themselves actively and constantly to the network at large.



A current manual-ish non-solution that at least mitigates the known_hosts pain is forcing ssh to use /dev/null as the known hosts file, which means I get prompted to verify the key every single time. But even with my good memory and the ASCII key art to help, that doesn't scale and breaks automation; I've been getting away with it only because the number of keys in play is very small for now.
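A minimal sketch of that workaround (the host address here is just a placeholder):

```
# Throw away all host key memory: every connection prompts for verification.
# StrictHostKeyChecking=ask is the default, shown for explicitness.
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=ask user@192.168.1.20
```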



I could patch ssh to allow a KeyOnly option for strict host key checking instead of just "yes" and "no", but I just don't know if I have that in me right now, especially since that would mean I'd have to either manage to get it merged upstream or build OpenSSH releases from source myself for even more devices.



I'm tempted to write a local service that keeps track of known host keys for ssh, and creates a named Unix domain socket that ssh can then use as the known hosts file instead of the normal plain-text file. This seems doable but requires some care to get right and robust.



Obviously turning off strict host key checking is a non-option. If I were going to do that, I might as well use rsh and stop pretending there's any security left. We're not animals.



So I'm hoping there's some clever way to make this adjustment out of the box: have OpenSSH's host key checking ignore the host name/IP and check only the key.










openssh ssh-keys

edited Jun 2 '18 at 12:14
asked Jun 2 '18 at 12:04
mtraceur

3 Answers
































          You can write a simple shell script, that will generate a custom user known host file on the fly, and connect to the host, as needed.



          See ssh command line specify server host key fingerprint.
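A minimal sketch of that approach, assuming your trusted public keys live one per line in a file of your choosing (the `~/.ssh/trusted_keys` path and the `make_known_hosts` helper name are made up for illustration):

```shell
#!/bin/sh
# make_known_hosts HOST: reads public key lines ("ssh-ed25519 AAAA... comment")
# on stdin and writes valid known_hosts entries for HOST on stdout.
make_known_hosts() {
    while IFS= read -r key; do
        printf '%s %s\n' "$1" "$key"
    done
}

# When invoked with a host argument, build a throwaway known_hosts holding
# only the trusted keys (bound to that host), then connect strictly against it.
if [ "$#" -ge 1 ]; then
    host="$1"; shift
    tmp="$(mktemp)" || exit 1
    trap 'rm -f "$tmp"' EXIT
    make_known_hosts "$host" < "$HOME/.ssh/trusted_keys" > "$tmp"
    ssh -o UserKnownHostsFile="$tmp" -o StrictHostKeyChecking=yes "$host" "$@"
fi
```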






          answered Jun 4 '18 at 7:28 by Martin Prikryl





            Thank you and +1! I ended up finding a different solution that fits the use-case I was asking about more directly (see my own answer for details), but had I not found that, or if OpenSSH ever removed support for the known hosts file handling features that make my solution work, this would be a great starting point for building a solution to the question.

            – mtraceur
            Jun 4 '18 at 20:31



































          Solution



          Manually modify or add known host entries for each host key with the name/IP field set to *.



          Explanation



          It turns out there are two little-known details of OpenSSH's known hosts file format (documented, for some reason, in the sshd manual page rather than the ssh manual page, under the SSH_KNOWN_HOSTS FILE FORMAT section) that together give exactly the behavior I want:




          1. OpenSSH allows wildcards in the host name/IP field, so that multiple (or even all) hosts match a given public key.


          2. OpenSSH allows more than one known host entry for the same host, and checks all of them, so that multiple host keys can be considered valid for the same host.



          Combining these two things lets you add any number of host keys which are considered valid for any host you connect to, while still taking advantage of strict host key checking.
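A sketch of what such entries look like (the key material below is a placeholder, not a real key):

```
# ~/.ssh/known_hosts
# Each device's key is accepted no matter what name/IP it appears at.
* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA<phone-key> phone
* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA<laptop-key> laptop
```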



          This works whether you have HashKnownHosts turned on or off: if you're modifying an existing hashed entry, you still just replace the first (space-delimited) field of the entry. Running ssh-keygen -H does warn that wildcard entries cannot be hashed, but that's fine: the whole point of hashing is to hide which hosts you connect to in case the file's contents are exposed, and a * wildcard likewise doesn't reveal any specific host.



          Security caveat



          I was wrong in my question to say that there's no security loss at all. There is a small but real additional risk: if a host key is considered valid for multiple hosts, then a compromise of that one key lets an attacker MitM all of your connections, not just connections to the host that actually owns it.



          For some people this will be an acceptable risk, worth the extra convenience, in the usual convenience vs. security trade-off.



          But with just a little extra setup this can be mitigated to a fairly painless level: keep a known hosts file for each "roaming" host, storing just that host's wildcarded host key entry; a config file for each of those specifying that known hosts file with the UserKnownHostsFile option; and pass that config file with the -F option when connecting.
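With hypothetical file names, that per-host setup might look like this (again, placeholder key material):

```
# ~/.ssh/known_hosts.phone -- only the phone's key, valid at any name/IP
* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA<phone-key> phone

# ~/.ssh/config.phone
Host *
    UserKnownHostsFile ~/.ssh/known_hosts.phone
    StrictHostKeyChecking yes
```

Then connect with `ssh -F ~/.ssh/config.phone user@<current-ip>`: only the phone's key is accepted, and a compromise of some other device's key can't be used to impersonate it.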




















            Another way



            While the solution of using a wildcard in the KnownHosts is a good one, here is an additional solution which does not require hand editing the known_hosts file every time another device is added.



            Put this in your ~/.ssh/config:



            # mDNS Example: `ssh myphone.local`
            # Write link-local hostnames, not DHCP dynamic IPs, to known_hosts.
            Host *.local
                CheckHostIP no


            This makes ssh check the host key against the hostname only, not the IP address, for devices on the local network. It presumes you have Multicast DNS (mDNS) running: a device with the hostname "argylegargoyle" is reached by running ssh argylegargoyle.local.



            Why?



            I recognize that the questioner dislikes Multicast DNS (mDNS), so please don't downvote me for that. While I think the given reasons against mDNS (security, difficulty) are debatable, this answer is aimed at other people who may have the same question.



            I am suggesting RFC 6762 link-local names because they are extremely convenient, solve the problem, and many people have them readily available, even on small devices. Moreover, in the situation the original questioner describes, where many mobile devices are often being moved between various networks, it can be a pain to find the new IP address of each device. Multicast DNS solves the problem as you can always ssh mytoaster.local no matter its IP address.



            What's the catch?



            The gotcha is that, as mentioned in the question, you do need to have mDNS running on your devices. Since many systems come with it already installed, it's worth just trying ssh myhostname.local to see if it works. If it does, great! Nothing more needs to be done.



            Enabling mDNS



            Enabling mDNS depends on what system you run, but it is usually a simple one step process. Here's a sampling that should cover most people:




            • Debian GNU/Linux, Ubuntu, Mint, Raspberry Pi:
              apt install avahi-daemon

            • Apple products (iPhone, Mac) always have it turned on by default.

            • Microsoft's implementation of mDNS is crummy, but Windows users can simply install Apple's Bonjour.

            • Android: should work out of the box, but depends on the app.

            • LEDE/OpenWRT (extremely small devices like WiFi routers, Arduino robots):
              opkg install umdns or opkg install avahi-daemon.











              +1 because you're right that for many people, mDNS is the better answer. If mDNS comes preinstalled or prepackaged, then mDNS is the natural solution (within LANs, add a dynamic DNS service or static IP VPN to cover WANs). So long as the user is fine with that trade of convenience for security risk. Though I do bristle at your assertion that my issues with mDNS are "debatable": they are real and objectively quantifiable - the only debatable thing is what their magnitude/severity is for any given situation.

              – mtraceur
              Jan 29 at 23:24











              Thank you. Please read "debatable" as "subjects which I am apparently ignorant/misinformed about". :-)

              – hackerb9
              Jan 31 at 0:11











            Your Answer








            StackExchange.ready(function() {
            var channelOptions = {
            tags: "".split(" "),
            id: "3"
            };
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function() {
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled) {
            StackExchange.using("snippets", function() {
            createEditor();
            });
            }
            else {
            createEditor();
            }
            });

            function createEditor() {
            StackExchange.prepareEditor({
            heartbeatType: 'answer',
            autoActivateHeartbeat: false,
            convertImagesToLinks: true,
            noModals: true,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: 10,
            bindNavPrevention: true,
            postfix: "",
            imageUploader: {
            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
            allowUrls: true
            },
            onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            });


            }
            });














            draft saved

            draft discarded


















            StackExchange.ready(
            function () {
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsuperuser.com%2fquestions%2f1328032%2fopenssh-any-way-to-keep-strict-host-key-checking-but-check-only-the-key-and-ig%23new-answer', 'question_page');
            }
            );

            Post as a guest















            Required, but never shown

























            3 Answers
            3






            active

            oldest

            votes








            3 Answers
            3






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            1














            You can write a simple shell script, that will generate a custom user known host file on the fly, and connect to the host, as needed.



            See ssh command line specify server host key fingerprint.






            share|improve this answer



















            • 1





              Thank you and +1! I ended up finding a different solution that fits the use-case I was asking about more directly (see my own answer for details), but had I not found that, or if OpenSSH ever removed support for the known hosts file handling features that make my solution work, this would be a great starting point for building a solution to the question.

              – mtraceur
              Jun 4 '18 at 20:31


















            1














            You can write a simple shell script, that will generate a custom user known host file on the fly, and connect to the host, as needed.



            See ssh command line specify server host key fingerprint.






            share|improve this answer



















            • 1





              Thank you and +1! I ended up finding a different solution that fits the use-case I was asking about more directly (see my own answer for details), but had I not found that, or if OpenSSH ever removed support for the known hosts file handling features that make my solution work, this would be a great starting point for building a solution to the question.

              – mtraceur
              Jun 4 '18 at 20:31
















            1












            1








            1







            You can write a simple shell script, that will generate a custom user known host file on the fly, and connect to the host, as needed.



            See ssh command line specify server host key fingerprint.






            share|improve this answer













            You can write a simple shell script, that will generate a custom user known host file on the fly, and connect to the host, as needed.



            See ssh command line specify server host key fingerprint.







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered Jun 4 '18 at 7:28









            Martin PrikrylMartin Prikryl

            11.1k43277




            11.1k43277








            • 1





              Thank you and +1! I ended up finding a different solution that fits the use-case I was asking about more directly (see my own answer for details), but had I not found that, or if OpenSSH ever removed support for the known hosts file handling features that make my solution work, this would be a great starting point for building a solution to the question.

              – mtraceur
              Jun 4 '18 at 20:31

















            Solution



            Manually modify or add known host entries for each host key with the name/IP field set to *.



            Explanation



            Turns out there are two little-known details of OpenSSH's known hosts file format (documented, for some reason, in the sshd manual page instead of the ssh manual page, under the SSH_KNOWN_HOSTS_FILE_FORMAT section) that give exactly the behavior I want:




            1. OpenSSH allows wildcards in the host name/IP field, so that multiple (or even all) hosts match a given public key.


            2. OpenSSH allows more than one known host entry for the same host, and checks all of them, so that multiple host keys can be considered valid for the same host.



            Combining these two things lets you add any number of host keys which are considered valid for any host you connect to, while still taking advantage of strict host key checking.



            This works whether you have HashKnownHosts turned on or off: if you're modifying an existing hashed entry, you still just replace the first (space-delimited) field of the entry with *. If you run ssh-keygen -H you do get a warning that wildcard entries cannot be hashed, but that's fine: the whole point of that hashing is to hide which hosts you connect to in case the file's contents are exposed, and a * wildcard likewise reveals no specific hosts.
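            As a sketch of the edit itself (the address and key below are placeholders, not real values), replacing the first space-delimited field with * is a one-line text transformation:

```shell
# The transformation, shown on a sample known_hosts line (placeholder key):
printf '192.0.2.10 ssh-ed25519 AAAAexamplekey\n' | sed 's/^[^ ]*/*/'
# → * ssh-ed25519 AAAAexamplekey
#
# In practice you could pipe ssh-keyscan output through the same sed, e.g.:
#   ssh-keyscan -t ed25519 <device-ip> | sed 's/^[^ ]*/*/' >> ~/.ssh/known_hosts
```

Doing it via ssh-keyscan only makes sense on a network you trust at that moment, since it records whatever key the host presents.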



            Security caveat



            I was wrong in my question to say that there's no security loss at all. There is a small but real additional risk: if a host key is considered valid for multiple hosts, then compromising that one key lets an attacker MitM your connections to all of those hosts.



            For some people this will be an acceptable risk, worth the extra convenience, in the usual convenience vs. security trade-off.



            But with just a little extra setup, this can be mitigated to a fairly painless level: keep a separate known hosts file for each "roaming" host, storing just that host's wildcarded key entry; a config file for each of them specifying that known hosts file with the UserKnownHostsFile option; and pass the appropriate config file with the -F option when connecting.
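            For example, a per-device setup might look like this (the file names here are hypothetical, and the key is a placeholder):

```
# ~/.ssh/config.myphone
Host *
    UserKnownHostsFile ~/.ssh/known_hosts.myphone
    StrictHostKeyChecking yes

# ~/.ssh/known_hosts.myphone contains just that device's wildcarded entry:
# * ssh-ed25519 AAAAexamplekey
```

Then connecting with ssh -F ~/.ssh/config.myphone <current-ip> trusts only that one device's key for the session, so a compromise of one device's key no longer endangers connections to the others.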






                edited Jun 4 '18 at 20:55

























                answered Jun 4 '18 at 20:22









                mtraceur
























                    Another way



                    While the solution of using a wildcard in known_hosts is a good one, here is an additional solution which does not require hand-editing the known_hosts file every time another device is added.



                    Put this in your ~/.ssh/config:



                    # mDNS Example: `ssh myphone.local`
                    # Write link-local hostnames, not DHCP dynamic IPs, to known_hosts.
                    Host *.local
                    CheckHostIP no


                    This checks the HostKey based only on the hostname for devices on the local network, not the IP address. This presumes you have Multicast DNS (mDNS) running. A device with the hostname "argylegargoyle" would be reached by running ssh argylegargoyle.local.
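                    With CheckHostIP set to no, ssh records and checks only the name you typed, so the resulting known_hosts entry stays valid no matter which IP the device currently holds. It might look like this (the key shown is a placeholder):

```
argylegargoyle.local ssh-ed25519 AAAAexamplekey
```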



                    Why?



                    I recognize that the questioner dislikes Multicast DNS (mDNS), so please don't down vote me for that. While I think the given reasons against mDNS (security, difficulty) are debatable, this answer is aimed at other people who may have the same question.



                    I am suggesting RFC 6762 link-local names because they are extremely convenient, solve the problem, and many people have them readily available, even on small devices. Moreover, in the situation the original questioner describes, where many mobile devices are often being moved between various networks, it can be a pain to find the new IP address of each device. Multicast DNS solves the problem as you can always ssh mytoaster.local no matter its IP address.



                    What's the catch?



                    The gotcha is that, as mentioned in the question, you do need to have mDNS running on your devices. Since many systems come with it already installed, it's worth just trying ssh myhostname.local to see if it works. If it does, great! Nothing more needs to be done.



                    Enabling mDNS



                    Enabling mDNS depends on what system you run, but it is usually a simple one step process. Here's a sampling that should cover most people:




                    • Debian GNU/Linux, Ubuntu, Mint, Raspberry Pi:
                      apt install avahi-daemon

                    • Apple products (iPhone, Mac) always have it turned on by default.

                    • Microsoft's implementation of mDNS is crummy, but Windows users can simply install Apple's Bonjour.

                    • Android: should work out of the box, but depends on the app.

                    • LEDE/OpenWRT (extremely small devices like WiFi routers, Arduino robots):
                      opkg install umdns or opkg install avahi-daemon.






                      +1 because you're right that for many people, mDNS is the better answer. If mDNS comes preinstalled or prepackaged, then mDNS is the natural solution (within LANs, add a dynamic DNS service or static IP VPN to cover WANs). So long as the user is fine with that trade of convenience for security risk. Though I do bristle at your assertion that my issues with mDNS are "debatable": they are real and objectively quantifiable - the only debatable thing is what their magnitude/severity is for any given situation.

                      – mtraceur
                      Jan 29 at 23:24











                    Thank you. Please read "debatable" as "subjects which I am apparently ignorant/misinformed about". :-)

                      – hackerb9
                      Jan 31 at 0:11
















                    answered Jan 29 at 22:32









                    hackerb9


























