
add meeting minutes #4

Merged
merged 1 commit into from Jan 19, 2016
55 changes: 55 additions & 0 deletions doc/wg-meetings/2016-01-15.md
@@ -0,0 +1,55 @@
# Node.js Testing WG Meeting 2016-01-15

## Links

* **Recording**: https://www.youtube.com/watch?v=UCH1crFZogY

## Present

* Rich Trott
* Alexis Campailla
* João Reis
* Santiago Gimeno
* Myles Borins

## Standup

* Rich Trott: Dealing with tests that are flaky in CI. Reverting changes that broke tests on Windows in CI. Unfortunately, the Windows hosts in the CI infrastructure are having issues that may have been masked by the broken tests.
* João Reis: Focusing on Build infrastructure and can help with Windows.
* Alexis Campailla: Can help with Windows tests and CI reliability infrastructure.
* Santiago Gimeno: Dealing with flaky tests, especially on OS X, FreeBSD, and Linux. Has had success lately solving cluster test issues.
* Myles Borins: Working on CITGM (smoke testing via npm module installation). Been thinking about benchmarking tests, especially in light of recent Buffer decision. Has noticed we don't have much documentation on our benchmark tests and how to run them on multiple systems.

## Minutes

### Documents for TSC ratification

* Rich: I made a pull request.
* Alexis: We will probably be best off doing the work first and then submitting for ratification once it is underway.
* Rich: Sure, sounds good.
* Alexis: Maybe update the README just so people can see that we're doing stuff.
* Rich: Good idea, will do.

### Reverting the commits that have Windows tests failing on CI

* Rich: It's ironic that the commits landed because people saw the red CI and thought, "Oh, it must be a problem with the CI infrastructure" when actually it's because the tests were failing. But then when it came time to revert, it was "Let's not rush this." Which is kind of understandable because Ben was sick and there are other CI issues right now anyway. But it also means we're leaving in place a situation that trains people to ignore the CI when it fails, which is exactly the problem that got us into this situation in the first place.
* Alexis: In the past, I've tried to make it so that these types of things would be given preferential treatment and not be made to wait the usual 24-48 hours before landing.
* Rich: I like that a lot. I wonder if that's a recommendation that can come out of this group to get it into the COLLABORATING.md or the onboarding doc.
* Alexis: There is some language in the Collaborator's Guide about urgent fixes. We could call out specifically things that break CI.
* Rich: +100

### Flaky tests

* Rich: `test-tick-processor` issue 4227 fails on Windows intermittently. Matthew Loring might be able to take a look at it. It's kind of stalled. Maybe ping Matthew again? Maybe ping the Windows group too?
* Alexis: Pinging the platform Windows group would be a good idea. João and I can add it to our queue but it will take a while for either of us to get to it.
* Rich: `test-debug-no-context` issue 4343 also failing on Windows. Platform Windows group is already tagged on it.
* João: This test is close to the top of my queue so I may get to it soon.
* Rich: That's wonderful. I'll be patient.
* João: Feel free to investigate.

### CI history

* Rich: Often, the question comes up: When is the last time this test failed? I end up going to the relevant platform in the Jenkins interface and hitting "Previous Build" over and over and then looking in the Console for failures. It is really tedious and time-consuming. Is there a way to search Jenkins results or are they in a machine-readable format or would I be able to see stuff as an admin that might help?
* João: I don't know any way to search Jenkins. I have the same problem and I know what you're talking about. Not a solution, but an idea: on any page in Jenkins, if you add `/api` to the URL, you can get JSON and download info about the jobs from that. It may be good to do something with that in an automated way (see the sketch after this list).
* Rich: I may try to do that.
* Alexis: I suggest logging an issue in the testing (build?) repo. I've had the same problem. It would also be nice to be able to publish the resulting data somewhere that it can be downloaded so we can grep locally.
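
A minimal sketch of the `/api` idea João describes, assuming a Node 18+ runtime with a global `fetch` and TypeScript. The Jenkins host and job name below are placeholders rather than the project's real CI, and the `tree` query parameter (a standard Jenkins JSON API option) is used to limit the response to build numbers, results, and timestamps.

```typescript
// Hypothetical sketch: pull recent build results from Jenkins' JSON API.
// JENKINS_URL and JOB are placeholders, not the project's actual CI host or job.
const JENKINS_URL = "https://ci.example.org";
const JOB = "node-test-commit-windows"; // hypothetical job name

async function listFailedBuilds(): Promise<void> {
  // Appending /api/json to almost any Jenkins page returns its data as JSON;
  // the tree parameter trims the response to just the fields we care about.
  const url = `${JENKINS_URL}/job/${JOB}/api/json?tree=builds[number,result,timestamp]`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Jenkins returned ${res.status}`);
  const data = (await res.json()) as {
    builds: { number: number; result: string | null; timestamp: number }[];
  };
  // Print failures so the history can be grepped or archived locally.
  for (const b of data.builds) {
    if (b.result === "FAILURE") {
      console.log(`#${b.number} failed at ${new Date(b.timestamp).toISOString()}`);
    }
  }
}

listFailedBuilds().catch(console.error);
```

Publishing that data somewhere downloadable, as Alexis suggests, could then be a scheduled run of a script like this that writes its output to a shared location for local grepping.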