Are Apex tests run differently depending on how they are started?

I have a test class that fails when I run it from inside Salesforce, but passes when I run it from the Developer Console. Why are the results different when I invoke the test from a different interface?



To be completely clear, by running the test from inside Salesforce I mean Setup -> Develop -> Apex Test Execution (select the test and run it).



By running it from the Dev Console I mean Test -> New Run (select the test class and run it).



The test also fails when validating a change set; however, change set deployments don't seem to report the failure.



The failure is a DML exception that appears to be related to something built in Process Builder.



Are Apex tests run differently depending on how they are invoked?










unit-test developer-console

asked Feb 7 at 19:22 – John Thompson

1 Answer

Yes, there's runTests() and compileAndTest(), and they do run differently. In theory, they should run identically, but typically because of platform bugs, they do not. In those situations, you should try to get a bug opened with R&D. There's usually a workaround, but without support's help, it may be hard to pinpoint what needs to be done. Usually, compileAndTest will allow deployments that runTests reports as failures.
answered Feb 7 at 19:54 – sfdcfox

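If it helps to reproduce the asynchronous-style run outside either UI, the same class can also be queued from Anonymous Apex by inserting an ApexTestQueueItem and then reading ApexTestResult once the run finishes. A minimal sketch, assuming a test class named MyFailingTest (a placeholder name):

    // Queue the class on the asynchronous test queue. This is, to the best of
    // my knowledge, the same queue the Apex Test Execution page drives, so it
    // can help narrow down whether the interface or the test itself differs.
    ApexClass cls = [SELECT Id FROM ApexClass WHERE Name = 'MyFailingTest' LIMIT 1];
    insert new ApexTestQueueItem(ApexClassId = cls.Id);

    // After the run completes (in a separate execution), pull back the outcome
    // and the DML exception message for each test method:
    for (ApexTestResult r : [SELECT MethodName, Outcome, Message
                             FROM ApexTestResult
                             WHERE ApexClass.Name = 'MyFailingTest'
                             ORDER BY TestTimestamp DESC
                             LIMIT 10]) {
        System.debug(r.MethodName + ' -> ' + r.Outcome + ': ' + r.Message);
    }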
• Also, a long-running test or CPU timeout is quite frequent in a sandbox, while in production the same test runs fine. – Pranay Jaiswal, Feb 7 at 19:56






• @PranayJaiswal that might simply be different TraceFlag settings, too. Setting debug levels too high (e.g. everything set to maximum trace levels) can cause CPU timeouts. Unless you're actually debugging, it's usually worthwhile to disable logging. – sfdcfox, Feb 7 at 20:05

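To check for that quickly, the active TraceFlag records can be listed through the Tooling API. A rough sketch in Anonymous Apex, under the assumptions that API v45.0 is available and that a callout to the org's own domain (via URL.getOrgDomainUrl()) is permitted without a Remote Site Setting:

    // Query the Tooling API for active TraceFlag records to see whether
    // verbose debug levels might be inflating CPU time during test runs.
    // The API version and the no-Remote-Site-Setting behaviour are assumptions.
    HttpRequest req = new HttpRequest();
    req.setMethod('GET');
    req.setEndpoint(URL.getOrgDomainUrl().toExternalForm()
        + '/services/data/v45.0/tooling/query/?q='
        + EncodingUtil.urlEncode(
              'SELECT TracedEntityId, LogType, ExpirationDate FROM TraceFlag',
              'UTF-8'));
    req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());
    HttpResponse res = new Http().send(req);
    // Any rows returned are candidates to delete or let expire before re-running tests.
    System.debug(res.getBody());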