Mac: New file created notification #191

Merged: wilbaker merged 4 commits into microsoft:master from wilbaker:users/wilbaker/mac_new_file_notification on Aug 21, 2018

Conversation

wilbaker (Member) commented Aug 20, 2018

Part of the work required for #152.

Changes include:

  • Adding "new file created" notification to kext and user-mode
  • Adding new file notification handling to MirrorProvider
  • Enabling additional functional tests
  • Categorizing more functional tests (that were previously uncategorized)
  • Removing the code that blocks modifications to the repo when GVFS is offline

Regarding the last point above, the code for blocking modifications was also preventing copying in new hooks and repairing broken repos (now that newly created files are flagged as being in the root). The code for supporting this feature can be revisited once we have the rest of the notifications and projection changes working on Mac.

It should also be noted that the "new file" notification will be raised both for newly created files and for any files that were created as part of gvfs clone (prior to the root of the repo getting marked as such). This doesn't cause a problem for VFSForGit because it ignores new files created in the .git folder.

Functional test run with these changes:

Test Run Summary
  Overall result: Warning
  Test Count: 166, Passed: 164, Failed: 0, Warnings: 0, Inconclusive: 0, Skipped: 2
    Skipped Tests - Ignored: 2, Explicit: 0, Other: 0
  Start time: 2018-08-20 16:50:23Z
  End time: 2018-08-20 16:54:19Z
  Duration: 236.021 seconds

        inRoot = true;
        break;
    }
    vnode_t parent = vnode_getparent(vnode);
Contributor:
If you need a 100% reliable answer, this does not seem to be a good way to walk up the vnode parent chain, because vnode_getparent can return null even if the current vnode does have a parent. I'm not sure what we need to do to make it more reliable though.

Member Author (wilbaker):
> I'm not sure what we need to do to make it more reliable though.

This is the same approach used in VirtualizationRoots_FindForVnode, so if it can fail we have a bigger problem. The only alternative that immediately comes to mind is going back to (or falling back on) path-based lookup of the root.

I don't think we need to do anything in this PR for this; let me know if you'd like me to file an issue for it.
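For reference, the shape of the parent-chain walk under discussion can be modeled in user space. This is a hypothetical sketch: MockNode stands in for vnode_t, and a plain bool stands in for the in-virtualization-root file flag. In the real kext the walk uses vnode_getparent, which (as noted above) can return null even when a parent exists, so a negative answer from this kind of walk is only best-effort.

```cpp
// Mock stand-ins (hypothetical): in the kext this would be vnode_t plus
// a check of the FileFlags_IsInVirtualizationRoot attribute.
struct MockNode
{
    MockNode* parent;    // nullptr models vnode_getparent() returning null
    bool flaggedInRoot;  // models the "in virtualization root" file flag
};

// Walk up the parent chain looking for an ancestor flagged as in-root.
// Because the real vnode_getparent can return null even when a parent
// exists, a "false" result is only a best-effort answer in the kext.
bool HasAncestorFlaggedAsInRoot(const MockNode* node)
{
    while (node != nullptr)
    {
        if (node->flaggedInRoot)
        {
            return true;
        }

        node = node->parent;
    }

    return false;
}
```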

Contributor:
See the other comment related to this. For now, we should just avoid spreading the implementation details of how to find a root.

Member Author (wilbaker):
See the other comment; I will remove this function.

    goto CleanupAndReturn;
}

if (!fileFlaggedInRoot && !HasAncestorFlaggedAsInRoot(currentVnode, context))
Contributor:
Do you actually need this call? The next call to ShouldHandleFileOpEvent already calls VirtualizationRoots_FindForVnode and will return a root if one exists. So it seems like ShouldHandleFileOpEvent already returns false if there is no ancestor that is a root.

Member Author (wilbaker):
@sanoursa You're right that ShouldHandleFileOpEvent will return false if this vnode is not within a root.

The concern I had with relying just on ShouldHandleFileOpEvent is that VirtualizationRoots_FindForVnode performs locking, and I didn't want to make the kext take a lock on (almost) every KAUTH_FILEOP_CLOSE happening in the system.

HasAncestorFlaggedAsInRoot is intended to be an initial check that rules out vnodes that have no chance of being within a root, without requiring locking.

I have not measured this perf overhead though, so let me know if you think this is a premature optimization.
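The trade-off being weighed here is a common pattern: a cheap, lock-free pre-check that filters out the overwhelmingly common negative case before paying for a locked lookup. A hypothetical user-space sketch of the shape (all names invented; FastFlagCheck-style field stands in for the lock-free flag walk, and the locked branch stands in for VirtualizationRoots_FindForVnode):

```cpp
#include <mutex>

// Hypothetical model of the fast-path/slow-path split discussed above.
struct EventFilter
{
    bool flagSet = false;    // cheap per-file flag, readable without a lock
    bool isInRoot = false;   // authoritative answer, needs the lock
    std::mutex rootsLock;
    int lockedLookups = 0;   // counts how often we paid for the lock

    bool ShouldHandleEvent()
    {
        // Fast path: most files in the system are not flagged, so most
        // close-style events bail out here without ever locking.
        if (!flagSet)
        {
            return false;
        }

        // Slow path: take the lock and do the authoritative lookup.
        std::lock_guard<std::mutex> lock(rootsLock);
        ++lockedLookups;
        return isInRoot;
    }
};
```

The fast path can return a stale answer, which is why the locked lookup remains the source of truth; the pre-check only has to be conservative (never skip an event that the locked lookup would handle).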

Contributor:
If I understand the design direction correctly (@pmj ?), we will be getting rid of our file flags at some point and our vnode cache will be the source of truth for whether something is a root or not. So I don't think we should spread the implementation details of what it means to be a root. At a minimum, this lock-free approach should be moved into VirtualizationRoots.hpp/cpp, but yea it could also be a premature optimization if you haven't actually measured a perf hit.

Member Author (wilbaker):
Ok, thanks. In that case I'll just take this out for now and rely on ShouldHandleFileOpEvent.

{
    char fullPath[PrjFSMaxPath];
    CombinePaths(s_virtualizationRootFullPath.c_str(), request.path, fullPath);
    SetBitInFileFlags(fullPath, FileFlags_IsInVirtualizationRoot, true);
Contributor:
What if this fails?

Member Author (wilbaker):
Thanks for pointing this out, I had not considered that scenario.

I think VFSForGit should log if this happens. What would you think if the kext let the provider know in the NewFileCreated callback (e.g. add a bool fileSuccessfullyFlagged)?

As far as impact on VFSForGit, if we fail to set the bit:

  • There could be duplicate NewFileCreated notifications
  • The kext will not deliver the PreDelete notification (because the file is not flagged)

So far I've not been able to think of a scenario where git would behave incorrectly if this happened; the main issue would be missing (or duplicate) callbacks.

Other than logging, do you have any ideas on what else we could/should do if this occurs?
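The bool fileSuccessfullyFlagged idea floated above could look roughly like the sketch below. This is purely illustrative: none of these names are the real PrjFSLib API, and the handler here just logs and reports whether it did so, matching the failure modes described (possible duplicate NewFileCreated notifications and a missed PreDelete notification for the unflagged file).

```cpp
#include <cstdio>

// Hypothetical provider-side handler for a NewFileCreated callback that
// has been extended with a success flag. Returns true if a warning was
// logged (i.e. the kext/lib failed to flag the file as in-root).
bool MirrorProvider_OnNewFileCreated(
    const char* relativePath,
    bool fileSuccessfullyFlagged)
{
    if (!fileSuccessfullyFlagged)
    {
        // The provider can at least record the failure; this file may now
        // produce duplicate NewFileCreated notifications and will miss
        // its PreDelete notification.
        std::fprintf(
            stderr,
            "warning: failed to flag '%s' as in virtualization root\n",
            relativePath);
        return true;
    }

    return false;
}
```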

Contributor:
This is a general implementation hole that @nickgra is looking into - we need to make sure that the lib's IO is all-or-nothing, and doesn't leave files in a halfway updated state. This one is a bit more problematic, since it's in response to a fileop event and therefore we don't get a chance to redo it. At the very least, we should leave a TODO here, and logging it is a good idea as well. Down the road, we need some strategy for recovering from lost fileop events (and that's a bigger scope than what @nickgra is working on right now, so let's address that one separately)

Member Author (wilbaker):
> At the very least, we should leave a TODO here, and logging it is a good idea as well.

I'll leave a TODO for now. Long term, if we can still have failures here (once we've stopped using flags), we'll either want a way to let providers know so that they can log the failure to a log file, or we'll want a PrjFSLib log file where we can log it.

char fullPath[PrjFSMaxPath];
CombinePaths(s_virtualizationRootFullPath.c_str(), request.path, fullPath);

// TODO(Mac): Handle SetBitInFileFlags failures
Member Author (wilbaker):
Remove tabs

3 participants