Working out operation counts for matrix-vector multiplication

I'm working on a problem about the multiplication of a matrix $A$ of order $m \times n$ by a vector $b$. In particular, I'm supposed to determine the number of operations needed.

What is the point of this, though? It gives me more motivation, and I understand the material better, when I know why I am doing something.

Isn't this just showing me how badly my algorithm gets bogged down in calculations? And how am I supposed to add up these sums for however many calculations I do?

Tags: linear-algebra, numerical-methods, floating-point

asked Feb 7 '17 at 23:38 by Max; edited Feb 19 '17 at 1:45 by Charles

  • What are "Flop counts"?
    – coffeemath, Feb 7 '17 at 23:42

  • @coffeemath: counting the number of floating point operations. In the ancient days they were far slower than integer operations and could dominate the time needed for a program to run.
    – Ross Millikan, Feb 7 '17 at 23:48

  • @RossMillikan Thanks for the definition. Also this may answer OP's query about the point of counting them.
    – coffeemath, Feb 7 '17 at 23:51

1 Answer

The point is to estimate how long an algorithm will take and how it scales with the size of the input. In your example of a matrix-vector multiply you have $mn$ entries in $A$. Each one has to get multiplied by an entry in $b$, so there are $mn$ multiplies. Then you have to do $m(n-1)$ additions to get the entries in the result. We can ignore the $-1$ and say the effort expended is about $2mn$ floating-point operations. If you can do a billion operations in a reasonable amount of time, you can afford to have $m$ and $n$ be tens of thousands, but not millions.
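
To make the count concrete, here is a minimal Python sketch (my illustration, not part of the original answer) of a naive matrix-vector multiply that tallies its own multiplies and additions:

    # Naive y = A b for an m-by-n list-of-lists A, counting flops as it goes.
    def matvec_with_flop_count(A, b):
        m, n = len(A), len(b)
        mults = adds = 0
        y = []
        for i in range(m):
            s = A[i][0] * b[0]           # first product in the row
            mults += 1
            for j in range(1, n):
                s += A[i][j] * b[j]      # one multiply and one add per term
                mults += 1
                adds += 1
            y.append(s)
        return y, mults, adds

    A = [[1.0, 2.0, 3.0],
         [4.0, 5.0, 6.0]]                # m = 2, n = 3
    b = [1.0, 0.5, -1.0]
    print(matvec_with_flop_count(A, b))  # ([-1.0, 0.5], 6, 4): mn multiplies, m(n-1) adds

The tallies match the formulas above: $mn = 6$ multiplies and $m(n-1) = 4$ additions, roughly $2mn$ flops in total.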



Some operations have alternative algorithms with different timing. Sorting is a prime example. Some simple-minded sorting algorithms scale as the square of the number of items sorted, so again you could afford to sort a few tens of thousands of things but not millions. There are other algorithms that run in $n \log n$ time. Now you can afford to sort tens of millions. That is an enormous increase.
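
A small sketch (again my addition, with arbitrary example sizes) tabulates how far apart the two growth rates pull:

    import math

    # Compare the operation counts of a quadratic sort and an n log n sort.
    for n in (10**4, 10**6, 10**8):
        quadratic = float(n) * n
        linearithmic = n * math.log2(n)
        print(f"n = {n:>9}: n^2 = {quadratic:.1e},"
              f" n log2 n = {linearithmic:.1e},"
              f" ratio = {quadratic / linearithmic:,.0f}x")

At a billion operations per second, sorting a million items quadratically costs about $10^{12}$ operations (many minutes), while an $n \log n$ algorithm needs only around $2 \times 10^7$ (a fraction of a second).
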
answered Feb 7 '17 at 23:47 by Ross Millikan

  • Shouldn't the flop count for this be $O(n^2)$?
    – Max, Feb 10 '17 at 18:26

  • There are $mn$ entries in $A$ and each one gets multiplied by one entry in $b$, so there are $mn$ multiplies. That is $n^2$ if $A$ is square, but you did not specify that.
    – Ross Millikan, Feb 10 '17 at 20:25