How is it possible to use a RAM module that works at a higher clock frequency than what is supported by the MMU?

I remember, many years ago, when Intel finally moved the memory management unit from the north bridge into the processor itself in their newer Pentium and Core architectures. When I was building one of my earlier PCs, I called Intel to ask what kind of memory I should use. The representative explained that the maximum data rate of the memory is capped by the memory management unit inside the processor (it must have been 1333 MHz or something like that).



This made sense and convinced me that it should not be possible to exceed the maximum RAM speed specified for a CPU without hardware or voltage hacks that cause severe instability. Nowadays, when I browse websites like PCPartPicker where people showcase their latest builds, nearly everyone seems to be overclocking their RAM.



What I don't understand is how the silicon manages to synchronize the RAM I/O if it is only designed to work up to a certain data rate. I have looked at many online blog posts and tutorials about memory overclocking, but they are filled with pages of anecdotes and opinions. None of them gets into the technical details of the communication protocol between the DRAM and the MMU in the processor, or how exactly changing the bit clock rate of the DRAM affects it.



So I thought I would ask here:



What are the electronic principles of memory overclocking and how does it work exactly? How is it possible to use a RAM module that works at a higher clock frequency than what is supported by the MMU?

memory cpu performance overclocking


asked Feb 22 at 15:58 by darksky
edited Feb 23 at 4:28 by darksky

  • Do you mean your second question to be "How is it possible to use a RAM module working at a higher clock frequency than what is supported by the MMU?" – K7AAY, Feb 22 at 19:01
  • @K7AAY right, that'd be a better way to phrase it. – darksky, Feb 22 at 19:36
  • Although now that I think about it more, this is an important point. I suppose memory modules can work at different frequencies depending on which profile is selected. – darksky, Feb 22 at 20:09
  • darksky, please click edit and revise your question accordingly. – K7AAY, Feb 22 at 21:28
  • No worries, I thought making the title more specific was a good idea. – darksky, Feb 23 at 5:29

1 Answer

This seems to be a Very Large Scale Integration (VLSI) design question.



When Intel designs the logic for the memory controller, they give their software tools timing constraints, i.e. lower limits on how fast the logic needs to be. The tools then generate a circuit and layout that meets those timing requirements in the worst case; Synopsys Design Compiler is a common tool used for this. Such tools convert code into transistor logic (synthesis), lay the transistors out on a theoretical die (place), and connect the transistors together (route). Afterwards, the tool checks how 'good' its design is by running static timing analysis: it measures how long each part of the circuit takes to propagate a change on its inputs to a change on its outputs. That propagation delay must be shorter than the clock period, otherwise the calculation in the current cycle will not finish before the start of the next cycle. When the logic is too slow, the processor will rapidly corrupt itself by latching incorrect or incomplete results from one clock cycle to the next. Static timing analysis makes sure this cannot happen: it works out the worst-case scenario for each circuit and forces a redesign if the timing constraints are not met. So when Intel says the maximum speed of the memory controller is 1333 MHz, they mean the memory controller is guaranteed to operate correctly at least up to that frequency.
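To make that concrete, here is a toy version of the check that static timing analysis performs, with made-up delay numbers (an illustration of the principle, not Intel's actual sign-off flow):

    # Toy static-timing check -- all delay numbers are invented for illustration.
    # A path "meets timing" when its worst-case propagation delay, plus the
    # flip-flop setup time and clock uncertainty, fits within one clock period.

    def slack_ns(path_delay_ns, clock_hz, setup_ns=0.05, uncertainty_ns=0.10):
        """Positive slack means the path meets timing at this clock frequency."""
        period_ns = 1e9 / clock_hz
        return period_ns - (path_delay_ns + setup_ns + uncertainty_ns)

    def max_frequency_mhz(path_delay_ns, setup_ns=0.05, uncertainty_ns=0.10):
        """Highest clock frequency at which this path still meets timing."""
        return 1e3 / (path_delay_ns + setup_ns + uncertainty_ns)

    # Hypothetical worst-case path in a memory-controller block: 1.2 ns.
    # DDR3-1333 transfers data at 1333 MT/s on a ~667 MHz I/O clock.
    print(slack_ns(1.2, 667e6))      # ~ +0.15 ns -> timing met, with margin
    print(max_frequency_mhz(1.2))    # ~ 741 MHz for this particular path

The gap between that positive slack and zero is exactly the margin that overclocking eats into.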



Overclocking works because we alter the factors that go into that static timing calculation. To some extent we can overclock simply by raising the clock frequency and changing nothing else: we are consuming whatever timing slack is available (design margins, guard bands, and paths that are already much faster than the clock period), and if we are lucky our workload never hits the true worst-case scenario. But we cannot go very far on slack alone. As you may notice when you enable an XMP profile, the voltage of the memory controller is also increased, and this changes how the logic in the circuit performs: a higher voltage reduces the time it takes the transistors to switch by pushing more current through the resistances and capacitances in the circuit. By raising the voltage we can therefore support faster memory, and this usually works for several speed grades beyond the supposed 'maximum' that Intel specifies.
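Roughly how much a voltage bump helps can be sketched with the simplified alpha-power delay model (delay ∝ V / (V − Vth)^α). The threshold voltage and alpha below are invented for illustration; real silicon characterization is far more involved:

    # Rough sketch of gate delay vs. supply voltage using the simplified
    # alpha-power law: delay is proportional to V / (V - Vth)**alpha.
    # Vth and alpha here are invented, not real process parameters.

    def relative_delay(v, v_nominal=1.20, vth=0.35, alpha=1.3):
        """Path delay at supply voltage v, relative to the delay at v_nominal."""
        d = lambda vdd: vdd / (vdd - vth) ** alpha
        return d(v) / d(v_nominal)

    base_fmax_mhz = 741              # the toy path from the previous sketch
    for v in (1.20, 1.25, 1.35, 1.45):
        scale = relative_delay(v)
        print(f"{v:.2f} V: delay x{scale:.2f}, est. fmax ~{base_fmax_mhz / scale:.0f} MHz")

The trend is the point, not the exact numbers: a modest voltage increase shortens every path a little, which is why an XMP profile that raises the frequency almost always raises the voltage along with it.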



With memory in particular, the motherboard also matters a great deal. If you look at the board near the memory slots, you will notice that the traces are all curly and squiggly. That is because the traces need to be (very nearly) the same length: if they were different lengths, the bits on each wire for a given clock cycle would arrive skewed, and the DIMM or the memory controller would sample bits belonging to adjacent clock cycles. If you are running memory at DDR4-3800, consecutive bits travelling along a trace are only about 4 cm apart. This is also why it is important to check the motherboard manufacturer's maximum supported memory frequency: the tolerance on trace lengths must be very tight to support extremely high memory speeds.
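That 4 cm figure is easy to sanity-check. On a typical FR-4 board a signal travels at very roughly half the speed of light (the exact fraction depends on the stack-up, so treat this as a back-of-the-envelope estimate):

    # Back-of-the-envelope: physical spacing between consecutive bits on a
    # DDR4-3800 data line.  The ~0.5c trace velocity is an approximation;
    # it varies with the board material and stack-up.

    C_M_PER_S = 3.0e8                # speed of light in vacuum
    v_trace = 0.5 * C_M_PER_S        # ~1.5e8 m/s on a typical PCB trace
    transfers_per_s = 3.8e9          # DDR4-3800 = 3800 MT/s

    bit_time_s = 1.0 / transfers_per_s
    bit_spacing_cm = v_trace * bit_time_s * 100
    print(f"one bit spans roughly {bit_spacing_cm:.1f} cm of trace")   # ~3.9 cm

At these speeds even a centimetre of length mismatch is a substantial fraction of a bit interval.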

answered Feb 23 at 1:35 by Andy