Commit charge is 100% full but physical memory is just 60% when using no page file



























I have disabled the page file on my system (the hard disk is too slow, I cannot buy a new one right away, and I cannot move the page file to another partition). When I look at Resource Monitor while running memory-demanding applications, the system shows that the commit charge is almost 100% full. Indeed, if I keep demanding more memory, programs start to crash as the commit charge effectively reaches 100%.



Meanwhile, the system says I'm using just 50-60% of physical memory and have around 1 GB of memory available (free + standby).



If the commit charge is the total memory actually requested, why does the system say so much memory is free? Is the physical memory going unused by Windows? Is the memory graph wrong? Am I missing something?



[Screenshots: commit charge graph vs. physical memory graph; Task Manager]










Tags: windows-7, memory, virtual-memory

asked Oct 2 '12 at 23:53 by Jason Oviedo (edited Jul 28 '14 at 6:19)




















  • Another good answer on the topic is here: brandonlive.com/2010/02/21/measuring-memory-usage-in-windows-7. – cnst, May 21 '15 at 1:57











  • @cnst Very good article. It helped me understand this issue much better. Why don't you post it as an answer? – Jason Oviedo, May 22 '15 at 5:59






  • Please don't disable your page file, people. This is a dumb idea. – Milney, Jan 12 '17 at 14:49











  • @Milney I agree, one should not usually disable the page file. At the time of the question it made sense for me, as the disk was just way too slow, so much so that it was crippling my system. Disabling it was actually quite useful: aside from prompting this question, the system's general responsiveness improved many times over. – Jason Oviedo, Jan 20 '17 at 1:18











  • @JasonOviedo That should not be the case and indicates something is very wrong. Giving the system more options should NOT make it slower. The system does not have to use the page file just because it has one. (Which means this is probably an XY question. The right question is precisely why the page file made your system slower.) – David Schwartz, Sep 21 '17 at 23:49


















3 Answers


















Answer by Jamie Hanrahan (score 20), answered Jul 23 '14 at 0:35














Running out of commit limit while you still have lots of available RAM is not at all unusual. Neither the commit limit nor the commit charge are directly related to free or available RAM.



The commit limit = current pagefile size + RAM size.



Since you have no page file, the commit limit is smaller than it would be if you had a page file. It doesn't matter how much of the RAM is free. For the commit limit, only the amount of RAM installed matters. You can run out of commit limit even with 90% of your RAM free or available.
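For reference, these numbers can also be read programmatically. Here is a minimal C sketch (illustration only, error handling kept short) using the Win32 GlobalMemoryStatusEx call, whose ullTotalPageFile field reports the current commit limit:

    /* Minimal sketch: compare installed RAM with the system commit limit. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUSEX msx = { sizeof(msx) };   /* dwLength must be set first */

        if (!GlobalMemoryStatusEx(&msx)) {
            fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
            return 1;
        }

        /* ullTotalPageFile is the current commit limit (RAM + page files);
           subtracting ullAvailPageFile approximates the current commit charge. */
        printf("Installed RAM: %llu MB\n",
               (unsigned long long)(msx.ullTotalPhys / (1024 * 1024)));
        printf("Available RAM: %llu MB\n",
               (unsigned long long)(msx.ullAvailPhys / (1024 * 1024)));
        printf("Commit limit:  %llu MB\n",
               (unsigned long long)(msx.ullTotalPageFile / (1024 * 1024)));
        printf("Commit charge: %llu MB\n",
               (unsigned long long)((msx.ullTotalPageFile - msx.ullAvailPageFile) / (1024 * 1024)));
        return 0;
    }

With no page file, the commit limit printed here will be roughly equal to the installed RAM, which is exactly why the commit charge can hit the limit while plenty of RAM is still available.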



Commit charge is a count of virtual memory, not physical. Suppose my program asks for 2 GB committed, but then it only accesses 0.5 GB of it. The remaining 1.5 GB never gets faulted in and never gets assigned to RAM, so RAM usage does not reflect the 2 GB, only the 0.5 GB.



Still, "system commit" is increased by 2 GB because the system has "committed" that there WILL be a place to hold my 2 GB, should i actually need it all. The fact that on any given run of the program I won't necessarily try to use it all doesn't help. I asked for 2 GB and the successful return from that call tells me that the OS "committed" - i.e. promised - that I can use that much virtual address space. The OS can't make that promise unless there is some place to keep it all.



So: put your pagefile back, add more RAM, or run less stuff at one time. Or some combination of the three. These are your only options for avoiding the "low on memory" and "out of memory" errors.



See also my answers here (longer) and here (much longer).



























  • Specifically, before Windows will allocate memory it wants to be able to guarantee that it can fulfill these allocations when they are used. Even if the allocations are not fully used, Windows will refuse to allocate more if it can't make that guarantee. A page file, whether used or not, provides additional backing storage. – Bob, Jul 23 '14 at 6:00



















Answer by cnst (score 4), answered May 22 '15 at 17:52














As the memory allocation test in the article at http://brandonlive.com/2010/02/21/measuring-memory-usage-in-windows-7/ illustrates, Windows is the kind of system that will fail a large memory allocation if that allocation, together with all prior allocations (the concept Microsoft calls "commit"), would bring the total commit above the sum of physical memory and all page files (swap).



Consider that an allocation by itself doesn't use any actual memory (neither physical nor swap) until a read or write takes place within that part of the process's virtual address space. E.g. a 2 GB allocation by itself only affects the "Commit" numbers (in Windows 7 terms), leaving "Physical Memory" alone until a read or write within that allocation happens.
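A minimal C sketch of such an allocation test (my own illustration, not the article's code; it assumes a 64-bit process so the address space itself is not the limiting factor): it keeps committing 100 MB blocks without ever touching them, so physical memory usage barely moves while the commit charge climbs until VirtualAlloc fails at the commit limit.

    /* Sketch: commit memory in 100 MB chunks without touching it, until the
       system commit limit is hit and allocation fails. Physical memory usage
       barely moves, but commit charge climbs to the limit. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        const SIZE_T chunk = 100u * 1024 * 1024;   /* 100 MB per allocation */
        SIZE_T total = 0;

        for (;;) {
            void *p = VirtualAlloc(NULL, chunk, MEM_RESERVE | MEM_COMMIT,
                                   PAGE_READWRITE);
            if (p == NULL)                 /* commit limit reached */
                break;
            total += chunk;                /* never read or written: no RAM used */
        }

        printf("Committed %llu MB before allocation failed.\n",
               (unsigned long long)(total / (1024 * 1024)));
        /* The committed blocks are intentionally leaked; the OS reclaims
           everything when the process exits. */
        return 0;
    }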



As far as OS design goes, the alternative approach would be to always allow allocation of any size (unless the available memory is already completely exhausted), and then let the applications fail on read/write instead. See https://cs.stackexchange.com/questions/42877/when-theres-no-memory-should-malloc-or-read-write-fail for more details.

























  • Yes. The argument for Windows' approach: it is reasonable to expect programmers to check the status of a malloc (or, in Win32, VirtualAlloc). Once that call succeeds the program can trust that the v.a.s. allocated is usable and will remain so until a corresponding free or VirtualFree. The other way, ordinary memory reads and writes (i.e. dereferencing of pointers) can raise memory access exceptions. But no programmer expects to have to check status after every pointer dereference. They don't return a status anyway, so it would have to be done with an exception handler. Ugly. – Jamie Hanrahan, Jun 18 '15 at 17:48



















Answer by Nelson Asinowski (score 2), answered Oct 10 '12 at 22:36














The available memory is not what you think it is. It is not unused; it is really a cache of pages from recently terminated processes, or from trimmed processes that have been forced to give up some memory to other processes. Those pages can be called back to their original purpose. See http://support.microsoft.com/kb/312628 for more detail.



As for not having a page file: this is very bad. Windows degrades poorly without one. Remember that even executable files are used as swap files when there is no page file. Even if the drive is slow, it is better to have a page file until you get up to 8 to 16 GB of memory; some people think even Windows 7 can run without one at that point.



I regularly give old machines a boost by doing a few things. Clean up the hard drive as much as possible. Copy anything you can temporarily remove from the drive onto a backup. Remove applications you don't need, and remove apps you can reinstall later.



When all that is done, defragment your hard disk. At that point, recreate your page file; it will then be as close to the front of the drive as possible. Create it with a fixed size of about 1.5 times memory. That's my rule; I have usually seen sizes between 1 and 3 times memory. This will give it a slight speed boost over the usual places it would be put.
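Purely as an illustration of that rule of thumb (the page file itself is still set through Windows' virtual memory settings dialog), here is a tiny C sketch that reads the installed RAM and prints a suggested fixed size of 1.5 times RAM:

    /* Illustrative sketch only: compute a fixed page file size of 1.5x
       installed RAM, per the rule of thumb above. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUSEX msx = { sizeof(msx) };
        if (!GlobalMemoryStatusEx(&msx)) {
            fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
            return 1;
        }

        unsigned long long ram_mb = (unsigned long long)(msx.ullTotalPhys / (1024 * 1024));
        unsigned long long pf_mb  = ram_mb * 3 / 2;   /* 1.5x RAM, in MB */

        printf("Installed RAM: %llu MB\n", ram_mb);
        printf("Suggested fixed page file (initial = maximum): %llu MB\n", pf_mb);
        return 0;
    }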



I use the Auslogics defragmenter; it's free (though it shows ads for more tools). There are others that do this too; check out the defragmenters at portableapps.com. It optimizes the disk by placing recently accessed files near the front of the drive for faster access, and it shows where the page file is placed so you can see whether you moved it into the first 25% of the drive.



After that, reinstall your apps and copy your data back.



I would say you get a 10 or 20% boost, but the main value is that a lot of the hesitation goes away, for a smoother experience.

























  • Using some testing, it's clear to me that when the disk is too slow, not having a page file does indeed speed up the system. I can tell a difference of many seconds in simple tasks such as app switching. – Jason Oviedo, Oct 25 '12 at 20:53











  • @Mark You're mistaken. The vast majority of Windows systems run with a pagefile (because that is how Windows runs by default, for good and sufficient reason) and almost all of them use similar-speed disks. And almost none of them show any such problems. The problem is not "the pagefile", it's that you don't have enough RAM. Please note that getting rid of the pagefile does not eliminate paging to and from disk - it merely eliminates one of typically hundreds of files that are commonly involved in paging. – Jamie Hanrahan, Jul 28 '15 at 22:16










