Making some unit tests work #1000
Conversation
This issue was addressed by waiting for the dataset to be processed before creating a task on it.

@mfeurer many more tests fail now, while other fetches from the server return empty, which seems to make many of the assertions fail too!

@Neeratyoy I just discussed this with @joaquinvanschoren and he gave the API key write permissions.
Codecov Report
@@             Coverage Diff             @@
##           develop    #1000      +/-   ##
===========================================
- Coverage    87.86%   87.70%    -0.16%
===========================================
  Files           36       36
  Lines         4531     4580       +49
===========================================
+ Hits          3981     4017       +36
- Misses         550      563       +13
===========================================
Continue to review full report at Codecov.
PGijsbers left a comment
I only looked at _api_calls. I think the main point of discussion is whether or not we want two loops of retries for server connects. Currently the outer loop is run as long as the response content is incorrect (not the same hash), and the inner loop in _send_request also reconnects up to n_retries times until a status 200. If we don't want to nest these loops (and end up with n_retries^2 retries), we should probably allow _send_request to check the response against a checksum for GET queries.
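The single-loop alternative suggested above can be sketched as a retry helper that treats a checksum mismatch the same as a failed request, so there is only one bounded loop. This is a minimal illustration, not openml-python's actual code; `fetch`, `expected_md5`, and the parameter names are all hypothetical:

```python
import hashlib
import time


def get_with_checksum(fetch, expected_md5, n_retries=3, delay=1.0):
    """Retry a GET until the request both succeeds (status 200) and the
    response body matches the expected MD5 checksum.

    `fetch` is any callable returning (status_code, content_bytes); in
    practice it would wrap the actual HTTP call to the server.
    """
    last_error = None
    for attempt in range(n_retries):
        try:
            status, content = fetch()
        except ConnectionError as e:
            last_error = e
        else:
            if status == 200 and hashlib.md5(content).hexdigest() == expected_md5:
                return content
            last_error = RuntimeError(
                f"bad status ({status}) or checksum mismatch on attempt {attempt + 1}"
            )
        time.sleep(delay)  # back off before the next (and only) retry loop iterates
    raise last_error
```

With this shape, a mismatched download counts against the same retry budget as a connection error, so the total number of attempts stays at n_retries rather than n_retries^2.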
@PGijsbers the tests that fail are oddly due to either a worker crashing or the issue pertaining to attaching entities to a study.
I think _api_calls looks fine now. I also left a comment on a change that was shown to me when I clicked the notification; I'd like some clarification on that first.
edit: but I do still need to look at the CI results as well
Looks mostly like server failure? Though something like this is suspicious: it would mean there were no runs on the server (or at most 10)?

I'm afraid I can't see exactly which comment :/

It's the one about the time measurements that you commented on 👍
Thanks for all the work on this PR! I'm merging this as it's a big improvement to what was before, and so we don't need to carry this over a break. |
Thanks for merging @PGijsbers! The flaky error you found was reported by @Neeratyoy in openml/OpenML#1080
What does this PR implement/fix? Explain your changes.
Fixes some of the failing unit tests after the test server clean-up.
Updates the docs to ensure that any publish to the test server during unit testing is also scheduled for deletion.
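The "publish then schedule for deletion" convention described above can be captured in a small test helper that records everything published during a test and deletes it all on the way out, even if the test fails. This is a sketch only; `delete_fn` and the helper name are assumptions standing in for whatever publish/delete calls the test server actually offers:

```python
from contextlib import contextmanager


@contextmanager
def tracked_publish(delete_fn):
    """Yield a publish() helper that records each published entity; on
    exit, every recorded entity is passed to delete_fn so nothing is
    left behind on the test server (delete_fn is a hypothetical
    stand-in for the server's deletion call)."""
    published = []

    def publish(entity):
        published.append(entity)  # in practice: upload to the test server first
        return entity

    try:
        yield publish
    finally:
        # runs whether the test passed or raised, so clean-up is guaranteed
        for entity in published:
            delete_fn(entity)
```

Usage inside a unit test would look like `with tracked_publish(server_delete) as publish: publish(my_dataset)`, guaranteeing the dataset is scheduled for deletion when the block exits.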