
11:54
Looking for a Python automation engineer; the job description can be found at pico.net. I can be contacted to discuss the position at ed.cook@pico.net

15:33
But, Robin, we can't see *you*

17:40
somewhat effective - measured by the number of escapes to the customer.

19:28
middling effective: better in the process of development than after release, due to agile communication

20:57
Testing is a cost

21:12
Depends on the measure..

21:16
management wants to be sure they are getting their money's worth for paying testers

21:56
If testing productivity is used to evaluate individual testers, especially for performance reviews, I can definitely see why they would resist!

22:11
I know why testers resist: they think measures are crude and undiscriminating

22:15
I think testers resist being measured by irrelevant metrics

23:26
how many bugs are found is not a good measure unless you know how many bugs there actually are in the code
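
One sketch of how that point is usually approximated, with made-up figures: since the true bug count is unknowable up front, teams estimate it after the fact from defects found in testing plus defects that later escape to the customer. The function name below is just for illustration.

    def defect_detection_percentage(found_in_test, escaped_to_customer):
        # Share of all known defects that were caught before release.
        total_known = found_in_test + escaped_to_customer
        return 100.0 * found_in_test / total_known if total_known else 0.0

    # Illustrative numbers: 45 defects caught in testing, 5 later reported by customers.
    print(defect_detection_percentage(45, 5))  # 90.0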

23:40
Many organizations/managers are not clear on how to monitor testing effectiveness

23:54
irrelevant metrics would be things like number of defects found, number of tests executed

25:05
not all bugs are equal and not all areas of code have the same number of bugs to be found

25:26
Management wants to test more with less time and fewer resources, which means investing in automation

28:25
Whether testing can be effective depends on the clarity of the feature requirements and specifications. The majority of stakeholders need to be on the same page prior to implementation in the development process.

29:52
10 or 20 times as much work? How does one measure this? :-)

34:41
Leaders provide the what. Management provides the how

34:47
Managing is doing things right. Leadership is ensuring we get the right thing done.

35:32
Code coverage analysis is one way of measuring test effectiveness. When it is done well, it helps testers focus on creating automation for integration-level verification.
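
A minimal sketch of collecting that coverage data with coverage.py, assuming a Python codebase; "myproject" and "tests/" are placeholder names, not anything from the talk.

    import coverage
    import pytest

    # Record which lines the test suite actually exercises.
    cov = coverage.Coverage(source=["myproject"])  # placeholder package name
    cov.start()
    pytest.main(["tests/"])                        # run the existing suite while recording
    cov.stop()
    cov.save()
    cov.report(show_missing=True)                  # per-file percentages plus lines never executed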

35:46
PASS or FAIL

37:49
someone already suggested number of defects ESCAPED

37:51
turnaround time.. time to fix the defects and retest..

38:17
Code coverage, as suggested

38:29
Number of tests passed

39:13
Number of tests failed (that's how one finds defects!)

39:29
Security compliance?

39:34
Impact of defect. Phase defect detected. Phase defect created.

40:36
Definition of Story Done == Test Passed

41:51
I like David's "you make it, we'll break it" challenge attitude

42:33
how many bugs exist

43:37
the quality of the code written

47:24
Effective: was the test actually executed successfully, and did it provide results either way?

48:19
Most security scans and performance tests have revealed deep problems, in my experience.

53:49
Measure everything

01:04:36
But test case designs went through technical review to assure it was a good, adequate test.

01:06:24
In the Agile world: if you have a comprehensive set of test cases, added as each user story is completed, you will not run into surprising defects after merging to the main line.

01:09:52
defects per line of code is not a good metric... adding more lines of needless code will decrease defect density.
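
A quick illustration of that effect, with made-up numbers: defect density divides by code size, so growing the denominator makes the metric look better while nothing about quality changes.

    # Illustrative numbers only: the same 40 defects, before and after padding the codebase.
    defects = 40
    original_kloc = 20   # 20,000 lines of code
    padded_kloc = 40     # same system after adding needless code

    print(defects / original_kloc)  # 2.0 defects per KLOC
    print(defects / padded_kloc)    # 1.0 defects per KLOC, "better", yet nothing improved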

01:10:13
Sorry, I have to leave now as I have to eat dinner and work again from 9 PM (prod deployment testing)..thanks..see you in the next meeting..Happy Thanksgiving to all!

01:10:25
which is why people tried to invent things like function points

01:11:02
( to Kenneth's point, not Sanjay's :)

01:13:40
🤣

01:20:47
the part missing is the cost to fix the bugs the customer sees, which is significantly higher

01:22:12
time spent fixing defects is time not spent developing new features

01:23:49
it’s not realistic to say that the number of defects missed is 0 at the final stage of testing

01:24:01
agreed

01:25:06
according to the table, in both cases, the users receive the product with 0 defects

01:25:28
Ed, by "in Production" Robin may mean the defects caught by end users

01:26:06
(in which case, BTW, any defect not felt by any user may not really count as a defect)

01:26:06
well if that’s true, then this makes more sense

01:26:06
the effectiveness is really only known once the customer is using the product and it is meeting their needs

01:27:34
Thanks Jim, that does make sense

01:29:26
We assume the customer had input into what he/she wanted at the beginning of the project. So we test to that expectation instead of doing no testing and waiting until he/she gets it to use and test it.

01:31:59
more of them sooner

01:33:29
various studies have concluded that 60-80% of defects have a root cause in requirements

01:38:27
great presentation Robin...thank you

01:38:47
Thanks Robin

01:39:13
When can you share the recording?

01:39:14
Thank you Robin, great topic, makes you think.

01:40:24
requirements should include performance

01:41:00
security should also be part of requirements