
Automation in Testing #31

Open
dialex opened this issue Jan 11, 2018 · 10 comments

Comments

dialex commented Jan 11, 2018

Links


Personalities

  • Alister Scott
  • Joe Colantonio

dialex commented Jan 11, 2018

P.S.: We are currently experimenting with this strategy; it may be tweaked as we go. When we have a definitive strategy, we will formalise the diagrams above. Our strategy was greatly influenced by this talk. Here is a brief summary:


15:29 - what to test

  • Linting? -> ShellCheck (see the lint step sketched below)
  • Unit tests for deployment scripts? -> InSpec (unit level)
  • Are services running? -> InSpec (acceptance level)
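
As an illustration of that lint step, a minimal sketch that could run locally or as the first CI stage (the scripts/ path is an assumption about where the shell scripts live):

```sh
#!/usr/bin/env bash
set -euo pipefail

# Lint every shell script before anything is provisioned. ShellCheck exits
# non-zero when it finds problems, so this fails the CI step early.
# The scripts/ directory is an assumption about the repository layout.
shellcheck scripts/*.sh
```

Running it before everything else matches the idea of a quick sanity check in CI before committing to provisioning.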

18:44 - tooling

(before provisioning)

  • Unit testing: bash scripts, terraform scripts
    • Linting: quick sanity check, run in CI before committing
    • Low value on testing configuration?

(after provisioning)

  • Integration testing: packages installed, services running, ports listening (sketched below)
    • Serverspec/Inspec: readable, quick run time, can SSH into instances
  • Acceptance testing: SSHing into machines, using apps deployed on the machine
    • Cucumber: readable for devs and business, reporting, executable specification
  • Smoke tests: run before everything else, really quick, catch obvious errors, no complex tasks
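
As a rough illustration of those integration checks, here is a minimal sketch in plain shell, as a stand-in for the Serverspec/InSpec controls mentioned above (the nginx package and port 80 are illustrative assumptions):

```sh
#!/usr/bin/env bash
set -euo pipefail

# Post-provisioning integration check, sketched in plain shell as a stand-in
# for Serverspec/InSpec. The package name and port are illustrative assumptions.
dpkg -s nginx > /dev/null            # package installed?
systemctl is-active --quiet nginx    # service running?
ss -ltn | grep -q ':80 '             # port listening?
echo "integration checks passed"
```

In practice the talk favours expressing these checks as Serverspec/InSpec resources, which read closer to the bullet points above and can be run over SSH against the provisioned instances.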


dialex commented Jan 11, 2018

https://twitter.com/theBConnolly/status/915614905016795142

Also http://blog.getcorrello.com/2015/11/20/how-much-automated-testing-is-enough-automated-testing/#

The answer to “How much testing is enough?” is “It depends!”
(see “The Complete Guide to Software Testing”).

It depends on risk: the probability of something going wrong and the impact if it does. We should use risk to determine where to place the emphasis when testing, by prioritizing our test cases. There is also the risk of over-testing (doing ineffective testing) while leaving behind the testing that is really needed.


dialex commented Jan 11, 2018


dialex commented Feb 4, 2018

Test case: Specific, explicit, documented, and largely confirmatory test ideas — like a recipe.

Note: A test case is not a test, any more than a recipe is a meal, or an itinerary is a trip. Open your mind to the fact that heavily scripted test cases do not add the value you think they do. If you are reading acceptance criteria, and writing test cases based on that, you are short-circuiting the real testing process and are going to miss an incredible amount of product risks that may matter to your client. More on the value (or lack thereof) of test cases here: http://www.developsense.com/blog/2017/01/drop-the-crutches/

dialex mentioned this issue Feb 4, 2018

dialex commented Feb 6, 2018

As an industry, we are obsessed with automation for all the wrong reasons. The view that we can take a complex cognitive activity and distil it into code is a fallacy which results in both bad testing and bad automation. To be successful with automation we need to think deeply about what we do in testing as well as what we can do with automation. This has been my feeling for most of my testing career.

"Automation in Testing" vs "Test Automation"

http://www.mwtestconsultancy.co.uk/automation-in-testing/


AUTOMATION THAT SUPPORTS TESTING, and not TESTING AUTOMATED
https://automationintesting.com/

dialex closed this as completed Feb 6, 2018
dialex reopened this Feb 6, 2018

dialex commented Feb 15, 2018

There is a pattern I see with many clients, often enough that I sought out a word to describe it: Manumation, a sort of well-meaning automation that usually requires frequent, extensive and expensive intervention to keep it 'working'.

You have probably seen it: the build server that needs a prod and a restart 'when things get a bit busy', or a deployment tool that 'gets confused', and a 'test suite' that just needs another run or three.

Did it free up time for finding the important bugs? Or are you now finding the real bugs in the test automation, while the software your product owner is paying for is hobbling along slowly and expensively to production?

FROM: http://www.investigatingsoftware.co.uk/2018/02/manumation-worst-best-practice.html


dialex commented Feb 19, 2018


dialex commented Mar 11, 2018

A test script will check whether what was expected and known to be true still is.


dialex commented Dec 9, 2019


dialex commented May 20, 2020

https://madeintandem.com/blog/five-factor-testing/

Good tests can…

  1. Verify the code is working correctly
  2. Prevent future regressions
  3. Document the code’s behavior
  4. Provide design guidance
  5. Support refactoring

dialex changed the title from Automation Tester to Automation in Testing Sep 28, 2021
dialex moved this to Untriaged in Writing Dec 27, 2022
dialex added this to Writing Dec 27, 2022