Repaired support for multiple timeouts in FrameManager #945
Conversation
D’oh, thanks for catching this mistake around not resetting the flag. And for writing a lot of the tests I probably should have.
I’m not a committer, so take all my comments here with a grain of salt. I just thought I’d give a quick review since I wrote the original code.
clearTimeout(frameRequestTimeout);
self.emit('data', null);
}
Since this implementation is pretty much exactly the same as receiveFrame, I think it might make sense to either remove this method entirely (i.e. replace calls to it with receiveFrame(null)) or to rewrite it as:
function cancelFrame() {
receiveFrame(null);
}
Ah, good insight. I didn't even see that o_o
If we are going for conciseness, we could probably go one better and remove cancelFrame() altogether (i.e. call receiveFrame(null); at those locations).
If we are going for clarity (my preference), I sometimes will sacrifice DRYness to make the intent clear (i.e. we mean to cancel the frame request and emit null data). In the concise scenario, another contributor reading the code might be uncertain whether we actually wanted to emit null or not =/
Yeah, that’s why I gave two solutions :P I think I agree with you that clarity is better, so I’d keep cancelFrame() around but rewrite it to just delegate to receiveFrame(null).
Oh, I totally missed that first part. That's what I get for reading it half-awake x_x
it('should subscribe to frames when requested necessary', function(done) {
  var didSubscribe = false;
  var didUnsubscribe = false;
  var FrameManager = require('../lib/frame-manager.js');
As you noted, this should probably be pulled out of the test scenario (as I also should have done in the original test above).
Same, obviously, for the rest of these (and the other require calls) in the other 3 tests you added.
endFrameSubscription: function() { didUnsubscribe = true; },
executeJavaScript: function() {}
}
});
Since we now have 5 tests that require a dummy Window instance for the FrameManager, it would probably be much clearer and cleaner to simply make a MockFrameManagerWindow class that encapsulates all this logic, that way you can just:
var mockWindow = new MockFrameManagerWindow();
var manager = FrameManager(mockWindow);
manager.requestFrame(function (data) {
mockWindow.didSubscribe.should.be.true;
// etc.
});
I'm on the fence about this. I like the idea but...
- We define custom hooks in each of the tests so it won't save us too many lines of code. It might even lead to more confusion due to jumping from code to definition and back. That is:
var manager = FrameManager({
webContents: {
debugger: {
isAttached: function() { return true; },
sendCommand: function(command) { if (command === 'DOM.highlightRect') { fn('mock-data'); }}
},
beginFrameSubscription: function(_fn) { didSubscribe = true; fn = _fn; },
endFrameSubscription: function() { didUnsubscribe = true; },
executeJavaScript: function() {}
}
});
will become:
var mockWindow = MockWindow({
// Unsure about how we define `fn` access at the moment
sendCommand: function(command) { if (command === 'DOM.highlightRect') { this.fn('mock-data'); }},
beginFrameSubscription: function() { didSubscribe = true; },
endFrameSubscription: function() { didUnsubscribe = true; }
})
- This file is quite lengthy and isn't broken down; where would the class definition live? My gut says the top of the file or a separate file, but both feel a little premature/less clear when reading the code
Oh, that’s not what I was trying to suggest at all. I was thinking you might have something roughly like…
var assert = require('assert');

function MockFrameManagerWindow (options) {
options = Object.assign({
debuggerAlreadyAttached: false,
debuggerTriggersFrames: true
}, options);
var self = this;
var _debuggerAttached = !!options.debuggerAlreadyAttached;
var _frameCallback = function() {};
self.results = {
subscribedCount: 0,
unsubscribedCount: 0,
currentSubscriptionCount: 0
};
self.beginFrameSubscription = function (callback) {
_frameCallback = callback;
self.results.subscribedCount++;
self.results.currentSubscriptionCount++;
};
self.endFrameSubscription = function () {
_frameCallback = function() {};
self.results.unsubscribedCount++;
self.results.currentSubscriptionCount--;
};
self.debugger = {
isAttached: function () { return _debuggerAttached; },
attach: function () {
assert(!_debuggerAttached, 'Debugger is already attached');
_debuggerAttached = true;
},
detach: function () {
_debuggerAttached = false;
},
sendCommand: function (command, callback) {
if (options.debuggerTriggersFrames && command === 'DOM.highlightRect') {
setTimeout(_frameCallback, 0, 'mock_data');
}
if (callback) {
setTimeout(callback, 0);
}
}
};
}
If you wanted to do less work here, you could use Sinon to make all the functions; it automatically keeps track of whether they were called and how often.
Anyway, instead of using local closure variables, you’d just test mockInstance.results.whatever to find out whether things were called or not.
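If pulling in Sinon feels heavy, the same call tracking can be approximated with a tiny hand-rolled spy. makeSpy here is a hypothetical helper sketched for illustration, not part of the PR or of Sinon:

```javascript
// Hypothetical minimal spy: records call count and arguments like
// sinon.spy() does, optionally wrapping a real implementation.
function makeSpy(impl) {
  function spy() {
    spy.callCount += 1;
    spy.calls.push(Array.prototype.slice.call(arguments));
    return impl ? impl.apply(this, arguments) : undefined;
  }
  spy.callCount = 0;
  spy.calls = [];
  return spy;
}
```

Sinon's real spies track more (return values, this bindings, thrown errors), but this covers the "was it called, and how often" use case discussed above.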
Ahh, so we could stub the mock's method for causing timeouts. I dig it 👍
I typically use live data instead of mocks but that's a neat technique =)
Fwiw, we don't need currentSubscriptionCount, but if we did, it would always be subscribedCount - unsubscribedCount =/
This file is quite lengthy and isn't broken down, where would the class definition live?
Yeah, I am also reminded of how painfully long the test file is whenever I look at it—I think it’s a consequence of it having grown pretty organically and the Segment team not seeming to put much effort into this project anymore.
In the past, I’ve added support items to the bottom where there are a few such things (see this custom Chai assertion, for example), but I think a mock class like I’m suggesting would be too big for that. I’d either:
- Just put it in a separate file in the same directory (e.g. how server.js is)
- Create a test/support directory and put it in there
Maybe we could break out the frame manager tests into their own file altogether? It's only 1 test prior to this, it would eliminate worries, and it would move towards a cleaner repo (i.e. 1 test file per repo file)
Maybe we could break out the frame manager tests into their own file altogether?
Sure. To be painfully honest, I don’t think I really have enough personal investment in Nightmare to push for reorganizing the repo. I made some noise to this effect way back near the start of 2016 and felt like I was mostly talking to a brick wall with the repo owners. You might have a different/better experience now, though.
Fair enough. I guess we'll wait until we get feedback from an owner =/
@@ -1463,6 +1463,139 @@ describe('Nightmare', function () {
  didSubscribe.should.be.false;
});

it('should subscribe to frames when requested necessary', function(done) {
I think this would read better as just “should subscribe to frames when requested”
});
});

// DEV: We can have multiple timeouts if page is static
For what it’s worth, the entire point of #927 was to not have timeouts when the page is static ;)
(The timeout guard should really never fire at all, but is there just in case the debugger fails in ways it should probably not be able to or its API has a breaking change.)
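A minimal sketch of the guard behavior being described, using illustrative names (requestFrameWithGuard, subscribe, deliver) rather than the actual FrameManager internals:

```javascript
// Sketch of the timeout-guard pattern: ask for a frame, but fall
// back to delivering null data if nothing ever arrives from the
// debugger. Whichever outcome happens first wins; the other is a no-op.
function requestFrameWithGuard(subscribe, onFrame, timeoutMs) {
  var done = false;
  var timer = setTimeout(function () {
    finish(null); // guard: the debugger never produced a frame
  }, timeoutMs);

  function finish(data) {
    if (done) return; // only the first outcome is reported
    done = true;
    clearTimeout(timer);
    onFrame(data);
  }

  subscribe(finish);
}
```

The point above is that in the happy path the timer is always cleared before it fires; the null-data branch only matters if the debugger API fails.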
Relatedly, I might re-title this test to something like “should time out with null data when debugger fails.”
Ah, interesting. I was getting consistent timeouts hence my PR in the first place o_o The script can be found here:
Well, that seems very bad. 😰
@twolfson do you have a more reduced test case than the script you linked to that demonstrates timeouts?
Can you file a separate issue, then? And include what version of Electron you are using, since that affects the debugger API?
Sure thing, I'm not in a place where I have the details but I'll do that later and cc you on it 👍
Thanks! No rush; I’m just about certain I won’t have time to look at it today anyway.
Fwiw, my guess is [email protected], as I wrote that script 2/3 days ago and there hasn't been a new electron release in 20 days:
https://github.com/electron/electron/releases
https://github.com/segmentio/nightmare/blob/2.9.1/package.json#L23
Finally got around to making a minimal version for you. Sorry about the delay, I accidentally archived the email and then forgot about it until today. Here's the issue:
}
}
},
beginFrameSubscription: function(_fn) { subscribeCount += 1; assert.strictEqual(fn, null); fn = _fn; },
Is there a particular reason to use assert here instead of Chai? e.g. should.not.exist(fn);
There is! =D I prefer to use assert like a developer sanity check and should/expect for test assertions. It's kind of like undefined and null (i.e. one is for the system, one is for programmers).
I'm glad to change this based on the repo's coding style but I didn't see any programmatic assert sanity checks when browsing =/
I'm glad to change this based on the repo's coding styles but I didn't see any programmatic assert sanity checks when browsing
There aren’t. I generally like to do that, too, but my perception is that any assertion that only exists in the context of a test is a test assertion, not an “is the program in a good state” assertion. But I’m definitely not in charge of the coding style here at all.
Hmm, I would say it's on the fence. On one hand, it's faking out the beginFrameSubscription to prevent its receiveFrame from being invoked multiple times (i.e. programmatic). On the other hand, it's verifying that our async.times doesn't invoke the beginFrameSubscription multiple times (i.e. assertion)
On one hand, it's faking out the beginFrameSubscription to prevent its receiveFrame from being invoked multiple times (i.e. programmatic). On the other hand, it's verifying that our async.times doesn't invoke the beginFrameSubscription multiple times (i.e. assertion)
Wellllllllll, in the sense that it’s mocking Electron’s behavior, Electron allows beginFrameSubscription to be called multiple times—it just replaces the existing subscriber with no checks: https://github.com/electron/libchromiumcontent/blob/2249b37b0c3cb2e7ab4e333fbd3f87356beba55f/patches/guest_view.patch#L33-L36
I do think it’s totally reasonable for the test to be checking that we never call that method unless we’ve never subscribed or have already ended the subscription. In my book, that’s more of a test assertion.
But either way, I don’t think it’s a huge deal.
@@ -32,7 +31,7 @@ function FrameManager(window) {
this.on('removeListener', unsubscribe);

function subscribe(eventName) {
-  if (!subscribed && eventName === 'data') {
+  if (!self.listenerCount('data') && eventName === 'data') {
Oh, geez, I really lost track of cleaning up that subscribed flag! This approach is way better 👍
Thanks for the feedback @Mr0grog =) I left 👍 on comments I will definitely take care of and feedback on other ones for longer discussions. I'm going to wait until repo owners leave comments before I take care of the 👍'd comments though (saves me time with less context switching)
@Mr0grog On a tangent, did we ever explore using
Yes. It turned out not to trigger anything—it only schedules things for the next time a frame is prepared (before drawing).
Ah, alright. Worth asking about. Thanks =)
Haha, no worries, I would have definitely asked the same question if I were in your position :)
I'm having an issue with nightmare 2.10.0 that I believe is related to this PR, specifically for taking screenshots with varying viewport sizes. Here's a debug log on a crashed run:
And here's a debug log when it finished successfully:
Will this PR fix my issue? If so, any idea when it might be merged? Thanks in advance, and let me know if I can provide more detail.
@cscanlin It looks similar to my problem so I would bet it would fix it. Since this PR hasn't been touched in 2 months and as @Mr0grog has previously suggested, we're probably going to be waiting for a while. I suggest working around it in the meantime by pointing to a fork. This can be done via https://docs.npmjs.com/files/package.json#git-urls-as-dependencies https://github.com/twolfson/multi-image-mergetool/blob/1.32.1/package.json#L90
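The git-URL workaround suggested above might look roughly like this in package.json; the user name and branch below are placeholders, not the actual fork:

```json
{
  "dependencies": {
    "nightmare": "git+https://github.com/your-user/nightmare.git#your-fix-branch"
  }
}
```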
@twolfson That did the trick, thanks! On that note, I was also able to merge your branch with the master branch with 0 conflicts. Are there any other major roadblocks in getting this deployed? @Mr0grog, you said that this branch will actually open up the path for even more fixes (in addition to fixing this issue), so it seems like an easy win. Thoughts?
@cscanlin I don’t think anybody who has commented here has any commit rights, so I dunno. It doesn’t seem like there should be any roadblocks here and I would think this should be high priority since it’s really easy to trigger it and hang Nightmare. Paging @rosshinkley…
So why isn't it merged? Without it, screenshots don't work properly
Could this be merged? Currently creating my own nightmare version to patch this...
Are there any plans to cut a release of Nightmare to include this fix? Or should I start building my code against master?
cc @fouad
I'm a new user of Nightmare but found an issue when recording multiple screenshots on a static page (i.e. one with no frame updates). I was recording multiple screenshots in series at different viewport sizes.
After some sleuthing, it looks like FrameManager has some sneaky edge cases that needed deeper testing. In this PR:
- Removed the subscribed boolean, which was no longer being used
- requestedFrame now resets on attach error and timeout, so future attach errors/timeouts can get the same null frame

Notes:
With respect to tests, I feel like I did a quick and dirty job. The require chunks aren't pulled out and I added async despite it not being used in the rest of the repo. I mostly wanted to get feedback on the PR and how y'all would like to handle the code setup instead of stressing myself out. I will gladly make any requested changes.

Closes #555 (comment)