• nogooduser@lemmy.world · 2 days ago

    I wish our test team was like that. Ours would respond with something like “How would I test this?”

  • Kualdir@feddit.nl · 2 days ago

      Tester here. I only have to ask this if the ticket is unclear or it’s not clear where the impact of the change can be felt. I once had a project with 4 great analysts and basically never had to ask this question there.

    • Th3D3k0y@lemmy.world · 2 days ago

        “We added an API endpoint so users with permission sets that allow them to access this can see the response.”

        Ok… What is the endpoint? What’s the permission? Is it bundled into a set by default or do I need to make one? What’s the expected response? Do we return a proper error if the permission is missing, or just a 500?

        They always make it so vague.
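
        For a ticket like that, even sketching the test means inventing half the details. What I end up writing is something like the pytest sketch below, where the endpoint path, the tokens, and the 403-vs-500 expectation are all my assumptions, i.e. exactly the things the ticket should have pinned down:

        ```python
        # Hedged sketch only: the path, tokens, and expected status codes
        # are guesses, not anything the ticket actually specified.
        import requests

        BASE_URL = "https://staging.example.test"  # assumed test environment
        ENDPOINT = f"{BASE_URL}/api/v1/widgets"    # assumed endpoint path

        TOKEN_WITH_PERMISSION = "..."     # would come from test setup
        TOKEN_WITHOUT_PERMISSION = "..."  # likewise

        def test_allowed_user_gets_200():
            resp = requests.get(
                ENDPOINT,
                headers={"Authorization": f"Bearer {TOKEN_WITH_PERMISSION}"},
            )
            assert resp.status_code == 200

        def test_forbidden_user_gets_403_not_500():
            resp = requests.get(
                ENDPOINT,
                headers={"Authorization": f"Bearer {TOKEN_WITHOUT_PERMISSION}"},
            )
            # The actual question: a deliberate 403, or an unhandled 500?
            assert resp.status_code == 403
        ```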

    • nogooduser@lemmy.world · 2 days ago

        I have worked with some excellent testers, but I have also worked with a team that literally required us to write down the tests for them.

        To be fair, that wasn’t their fault because they weren’t testers. They were finance people that had been seconded to testing because we didn’t have a real test team.

        The current team is somewhere in between.

      • Kualdir@feddit.nl · 2 days ago

          Look, I don’t think it’s bad to have people like that testing, but you’d need a test team to write the tests for them, or those people would have to be specifically interested in testing the software.

          I’ve had a project where we as testers caught most of the bugs during the test phase. After that it went to staging, and a few business people always jumped on testing it there and found bugs we couldn’t have thought of, because they knew the business flows so well while we had to go off what our product owners said.

          Leaving all testing to a non-testing team isn’t gonna work.

  • humanspiral@lemmy.ca · 2 days ago

      The programmer should have written all the test cases; I just run the batches and report where their cases failed.

    • snooggums@lemmy.world · 2 days ago

        Ewww, no. The programmer should have run their unit tests, and maybe even told you about them. At a minimum, you should be testing for edge cases not covered by the unit tests, and replicating the unit tests if they don’t appear to be very thorough.
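
        As a sketch of that split, assuming a made-up parse_quantity function and limits: the devs presumably unit-tested the happy path, so the tester’s job is the inputs those tests skipped:

        ```python
        # Hypothetical function and business limit, purely for illustration.
        import pytest

        from shop.cart import parse_quantity  # assumed module under test

        @pytest.mark.parametrize("raw", ["0", "-1", "9999999999", " 5 ", "5.0", "", "五"])
        def test_parse_quantity_edge_cases(raw):
            # Each input should return a sane value or raise a well-defined
            # error, never crash or silently overflow.
            try:
                qty = parse_quantity(raw)
                assert 0 < qty <= 10_000  # assumed business limit
            except ValueError:
                pass  # an explicit rejection is acceptable too
        ```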

      • mspencer712@programming.dev · 2 days ago

          This.

          My unit and integration tests are for the things I thought of and, more importantly, don’t want to accidentally break in the future. I will be monumentally stupid a year from now and try to destroy something because I forgot it existed.

          Testers get in there and play, be creative, be evil, and they discuss what they find. Is this a problem? Do we want to get out in front of it before the customer finds it? They aren’t the red team, they aren’t the enemy. We sharpen each other. And we need each other.

      • nogooduser@lemmy.world · 2 days ago

          I think the main difference is that developers tend to test for success (i.e. does it work as defined), while testers should also test that it doesn’t fall over when a user gets hold of it.
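
          To make that concrete with an invented transfer_funds function: the first test is the developer-style “does it work as defined” check, the second is the tester-style “what will a real user send” check:

          ```python
          # transfer_funds, its signature, and its error behaviour are all
          # invented here to illustrate the success/failure split.
          import pytest

          from bank.payments import transfer_funds  # hypothetical

          def test_transfer_works_as_defined():
              # Developer-style: the specified happy path.
              result = transfer_funds(src="A", dst="B", amount=100)
              assert result.ok

          @pytest.mark.parametrize("amount", [-100, 0, 10**12])
          def test_transfer_rejects_user_shaped_amounts(amount):
              # Tester-style: a real user will eventually send every one of these.
              with pytest.raises(ValueError):
                  transfer_funds(src="A", dst="B", amount=amount)
          ```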