Proof of calculating MinHash












I'm reading about the MinHash technique for estimating the similarity between two sets. Given sets $A$ and $B$, let $h$ be a hash function and $h_{\min}(S)$ the minimum hash of a set $S$, i.e. $h_{\min}(S) = \min_{s \in S} h(s)$. We have the equation:
$$
P(h_{\min}(A) = h_{\min}(B)) = \frac{|A \cap B|}{|A \cup B|}
$$

That is, the probability that the minimum hash of $A$ equals the minimum hash of $B$ is the Jaccard similarity of $A$ and $B$.

I am trying to prove the equation above, and here is my attempt: take $a \in A$ and $b \in B$ such that $h(a) = h_{\min}(A)$ and $h(b) = h_{\min}(B)$. If $h_{\min}(A) = h_{\min}(B)$, then $h(a) = h(b)$. Assume the hash function $h$ maps distinct keys to distinct values; then $h(a) = h(b)$ if and only if $a = b$, i.e. the minimizing element lies in $A \cap B$, which happens with probability $\frac{|A \cap B|}{|A \cup B|}$. However, my proof is incomplete, because a hash function can return the same value for different keys. So I'm asking for your help to find a proof that holds regardless of the hash function. Thanks.



Tags: probability, proof-writing, hash-function






asked Nov 22 '12 at 1:41 by Long Thai · edited Dec 3 '18 at 8:02 by Jean-Claude Arbaut






















2 Answers
It seems to be an underlying assumption of this formula that distinct elements map to distinct hash values. Otherwise, just take a hash function that maps everything to 0: the probability you are looking at would then be 1 regardless of the sets.






answered Dec 27 '17 at 8:29 by E-A





















There is a remarkable connection between minhashing and the Jaccard similarity of the sets that are minhashed.

• The probability that the minhash function for a random permutation of rows produces the same value for two sets equals the Jaccard similarity of those sets.

To see why, we need to picture the columns for those two sets. If we restrict ourselves to the columns for sets S1 and S2, the rows can be divided into three classes:

1. Type X rows have 1 in both columns.

2. Type Y rows have 1 in one of the columns and 0 in the other.

3. Type Z rows have 0 in both columns.

Since the matrix is sparse, most rows are of type Z. However, it is the ratio of the numbers of type X and type Y rows that determines both SIM(S1, S2) and the probability that h(S1) = h(S2). Let there be x rows of type X and y rows of type Y. Then SIM(S1, S2) = x/(x + y), because x is the size of S1 ∩ S2 and x + y is the size of S1 ∪ S2.

Now consider the probability that h(S1) = h(S2). If we imagine the rows permuted randomly and proceed from the top, the probability that we meet a type X row before a type Y row is x/(x + y). If the first non-type-Z row from the top is a type X row, then surely h(S1) = h(S2). On the other hand, if the first non-type-Z row we meet is a type Y row, then the set with a 1 in that row gets it as its minhash value, while the set with a 0 in that row gets some row further down the permuted list; thus h(S1) ≠ h(S2) if we first meet a type Y row. We conclude that the probability that h(S1) = h(S2) is x/(x + y), which is also the Jaccard similarity of S1 and S2. (From the Mining of Massive Datasets book.)
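The permutation argument can be checked directly with a short Monte Carlo simulation (my own sketch, not from the book): permute the rows at random and record how often the first row belonging to S1 ∪ S2 (the first non-type-Z row) belongs to both sets (type X). The names `match_probability` and `trials` are mine.

```python
import random

def match_probability(S1, S2, trials=20_000, seed=1):
    """Simulate P(h(S1) == h(S2)) under a random row permutation.
    The minhashes agree exactly when the first row of S1 | S2 in
    the permuted order (first non-type-Z row) is a type X row,
    i.e. belongs to both sets."""
    rows = sorted(S1 | S2)  # type-Z rows never decide, so skip them
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        rng.shuffle(rows)
        first = rows[0]              # first non-Z row from the top
        if first in S1 and first in S2:
            hits += 1                # type X row: minhashes agree
    return hits / trials

S1, S2 = {0, 1, 2, 3}, {2, 3, 4}
print(len(S1 & S2) / len(S1 | S2))  # exact Jaccard: x/(x+y) = 2/5
print(match_probability(S1, S2))    # should be close to 0.4
```

Restricting the shuffle to the union is what makes the simulation cheap: conditioning on "first non-Z row" is the same as drawing a uniformly random element of S1 ∪ S2.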






answered Sep 24 '13 at 23:06 by bleaz









• Shawn O'Hare (Feb 24 '15 at 16:58): You shouldn't just blatantly plagiarize answers from well-known texts.










