SQGNE November 16th LIVE Online Meeting - Shared screen with speaker view
Ed Cook
11:54
Looking for a Python automation engineer; the job description can be found at pico.net, and I can be contacted to discuss the position at ed.cook@pico.net
David Heimann
15:33
But, Robin, we can't see *you*
Darin Kalashian
17:40
somewhat effective - measured by the number of escapes to the customer.
Jim Turner
19:28
middling effective: better in the process of development than after release, due to agile communication
Darin Kalashian
20:57
Testing is a cost
Darin Kalashian
21:12
Depends on the measure..
Ann Cultrera
21:16
management wants to be sure they are getting their money's worth for paying testers
David Heimann
21:56
If testing productivity is used to evaluate individual testers, especially for performance reviews, I can definitely see why they would resist!
Jim Turner
22:11
I know why testers resist: they think measures are crude and undiscriminating
Kenneth Ruskin
22:15
I think testers resist being measured by irrelevant metrics
Ed Cook
23:26
how many bugs are found is not a good measure unless you know how many bugs there actually are in the code
Sanjay Gupta
23:40
Many organizations/managers are not clear on how to monitor testing effectiveness
Kenneth Ruskin
23:54
irrelevant metrics would be things like number of defects found, number of tests executed
Ed Cook
25:05
not all bugs are equal and not all areas of code have the same number of bugs to be found
Louay Al jondi
25:26
Management wants to test more with less time and fewer resources, which means investing in automation
Anne Brahin
28:25
Whether testing can be effective depends on the clarity of the feature requirements and specifications. The majority of stakeholders need to be on the same page prior to implementation in the development process.
David Heimann
29:52
10 or 20 times as much work? How does one measure this? :-)
Mike Arnold
34:41
Leaders provide the what. Management provides the how
David Heimann
34:47
Managing is doing things right. Leadership is ensuring we get the right thing done.
Anne Brahin
35:32
Code coverage analysis is one way of measuring test effectiveness. When it is done well, it helps testers focus on creating automation for integration-level verification.
Mike Arnold
35:46
PASS or FAIL
Jim Turner
37:49
someone already suggested number of defects ESCAPED
Sanjay Gupta
37:51
turnaround time... time to fix the defects and retest...
Louay Al jondi
38:17
Code coverage, as suggested
Susan Houle
38:29
Number of tests passed
David Heimann
39:13
Number of tests failed (that's how one finds defects!)
Louay Al jondi
39:29
Security compliance?
Mike Arnold
39:34
Impact of defect. Phase defect detected. Phase defect created.
Mike Arnold
40:36
Definition of Story Done == Test Passed
Jim Turner
41:51
I like David's "you make it, we'll break it" challenge attitude
Ed Cook
42:33
how many bugs exist
Ed Cook
43:37
the quality of the code written
Mike Arnold
47:24
Effective: Was the test actually executed successfully, and did it provide results either way?
Anne Brahin
48:19
Most security scans and performance tests have revealed deep problems, in my experience.
Mike Arnold
53:49
Measure everything
Mike Arnold
01:04:36
But test case designs went through technical review to assure it was a good, adequate test.
Louay Al jondi
01:06:24
In the Agile world: if you have a comprehensive set of test cases, added as each user story is completed, you will not run into surprising defects after merging to the main line.
Kenneth Ruskin
01:09:52
defects per line of code is not a good metric... adding more lines of needless code will decrease defect density.
Sanjay Gupta
01:10:13
Sorry, I have to leave now as I have to eat dinner and work again from 9 PM (prod deployment testing)... thanks... see you in the next meeting... Happy Thanksgiving to all!
Jim Turner
01:10:25
which is why people tried to invent things like function points
Jim Turner
01:11:02
(to Kenneth's point, not Sanjay's :)
Anne Brahin
01:13:40
🤣
Ed Cook
01:20:47
the part missing is the cost to fix the bugs the customer sees, which is significantly more
Kenneth Ruskin
01:22:12
time spent fixing defects is time not spent developing new features
Phil Scarff
01:23:49
it’s not realistic to say that the number of defects missed is 0 at the final stage of testing
Ed Cook
01:24:01
agreed
Phil Scarff
01:25:06
according to the table, in both cases, the users receive a product with 0 defects
Jim Turner
01:25:28
Ed, Robin may mean by "in Production" the defects caught by end users
Jim Turner
01:26:06
(in which case, BTW, any defect not felt by any user may not really count as a defect)
Phil Scarff
01:26:06
well if that’s true, then this makes more sense
Ed Cook
01:26:06
the effectiveness is really only known after the customer is using the product and finding that it meets their needs
Ed Cook
01:27:34
Thanks Jim, that does make sense
Mike Arnold
01:29:26
We assume the customer had input into what he/she wanted at the beginning of the project, so we test to that expectation instead of doing no testing and waiting till he/she gets it to use it.
Ed Cook
01:31:59
more of them sooner
Kenneth Ruskin
01:33:29
various studies have concluded that 65-80% of defects have a root-cause in requirements
Kenneth Ruskin
01:38:27
great presentation Robin...thank you
Thara Rao
01:38:47
Thanks Robin
Louay Al jondi
01:39:13
When can you share the recording?
Mike Arnold
01:39:14
Thank you Robin, great topic, makes you think.
Kenneth Ruskin
01:40:24
requirements should include performance
Kenneth Ruskin
01:41:00
security should also be part of requirements