Add Future::wait() #20

Closed
wants to merge 1 commit into from

Conversation

@tomaka (Contributor) commented Aug 7, 2016

I'm currently experimenting with futures as a way to dispatch small tasks to a thread pool.

In my main function, every 16 ms I create some tasks with CpuPool::execute(). These tasks trigger sub-tasks, as in fork-join parallelism. If all the tasks and sub-tasks together take less than 16 ms, everything is fine. However, if the user's machine is not powerful enough and they take longer than 16 ms, the next batch of tasks fires before the previous one has finished, and the task queue grows without bound. I really don't want that to happen, so I think the best approach is to block until all the tasks are finished before starting the new ones (see the sketch below).

I recognize that adding a wait function would be unidiomatic in the context of mio, but I find it useful in my situation. I'm not sure, though, whether my problem would be better solved with an executor.
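
Here's a minimal sketch of the pattern I mean, using plain std threads and a channel as a stand-in for CpuPool::execute() (the task count, task bodies, and three-frame loop are placeholders):

use std::sync::mpsc::channel;
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    for _frame in 0..3 {
        let frame_start = Instant::now();
        let (done_tx, done_rx) = channel();

        // Dispatch this frame's tasks (stand-in for CpuPool::execute()).
        let task_count = 4;
        for id in 0..task_count {
            let tx = done_tx.clone();
            thread::spawn(move || {
                // ... the actual work for task `id` would go here ...
                let _ = tx.send(id);
            });
        }
        drop(done_tx);

        // Block until every task from this frame is done, so the queue
        // cannot grow when a frame runs over budget.
        while done_rx.recv().is_ok() {}

        // Sleep out whatever remains of the 16 ms frame.
        if let Some(rest) = Duration::from_millis(16).checked_sub(frame_start.elapsed()) {
            thread::sleep(rest);
        }
    }
}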

@alexcrichton (Member):

Thanks for the PR! This is indeed what I would expect for a function to block and wait for the resolution of a future, but I agree that I'm not entirely sure whether we want to bake this in or not.

On one hand it's easy enough to implement locally (as you've seen), but on the other hand it's a footgun in many situations. For example, as you've mentioned, with futures-mio this would be an antipattern, as it would block the event loop and likely lead quickly to deadlock. Even with other implementations of futures, a blocking wait is only guaranteed to return if:

  • The current thread isn't needed for the futures computation to complete
  • The futures computation doesn't panic

The first point is why futures-mio would get hosed, and the second point leads me to think that this may be most appropriate as a modification to a Task in one form or another (as that's what's catching panics).

Another option here would be to modify your local "event loop" to have a method to "await" a future. For example, futures-mio has a Loop::run method for this purpose. That doesn't currently handle panics, but we'll probably add a callback or a feature of some form to Task to handle them.

From the diff under review:

/// a deadlock.
fn wait(self) -> Result<Self::Item, Self::Error> where Self: Sized {
    let (tx, rx) = channel();
    self.then(move |r| -> Result<(), ()> { let _ = tx.send(r); Ok(()) }).forget();
    rx.recv().unwrap() // (closing lines inferred: block until the result arrives)
}
Review comment (Member):

Dropping the Result seems less-than-ideal.

@sbstp commented Aug 14, 2016

This is a function I would like to have as well. It's pretty useful for prototyping and testing, and setting up channels every time you want this behavior is a bit tedious. Since this is a provided (default) implementation, could the mio futures get their own implementation that panics and warns the user?

Edit: the panicking probably wouldn't work once you wrap the future in another one. Perhaps a Waitable trait?

@nielsle commented Aug 15, 2016

Perhaps the function could return an mpsc::channel::Reader rather than an item.

fn into_reader(self) -> Result<Reader<Self::Item>, Self::Error> where Self: Sized;

This would allow for something like the following

assert_eq!(answer.into_reader().recv(), Ok(42));
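
For what it's worth, here's a self-contained sketch of that shape using the std mpsc Receiver; SimpleFuture is a toy stand-in for the real Future trait, not the actual API:

use std::sync::mpsc::{channel, Receiver};
use std::thread;

// Toy stand-in for the Future trait, just enough to show the adapter.
trait SimpleFuture: Send + Sized + 'static {
    type Item: Send + 'static;
    type Error: Send + 'static;

    // Run the computation to completion (stand-in for the real machinery).
    fn run(self) -> Result<Self::Item, Self::Error>;

    // The proposed adapter: hand back a Receiver instead of blocking here.
    fn into_receiver(self) -> Receiver<Result<Self::Item, Self::Error>> {
        let (tx, rx) = channel();
        thread::spawn(move || {
            let _ = tx.send(self.run());
        });
        rx
    }
}

struct Answer;

impl SimpleFuture for Answer {
    type Item = i32;
    type Error = ();
    fn run(self) -> Result<i32, ()> {
        Ok(42)
    }
}

fn main() {
    // The caller decides when (or whether) to block.
    assert_eq!(Answer.into_receiver().recv().unwrap(), Ok(42));
}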

@alexcrichton (Member):

@sbstp yeah, it's true that this can be quite useful for certain kinds of tests. I think @nielsle's suggestion makes the most sense to me: convert the future into the Receiver half of a channel rather than calling recv directly to block.

I'm still wary of blocking the thread because it assumes that the computation is happening on another thread, which may not always be the case :(

@sbstp commented Aug 15, 2016

What if a Waitable trait was created? Composite types like futures::Select could implement it conditionally, as long as the futures they wrap are also Waitable. Example: https://is.gd/WPsQdd

The mio futures (and any others that cannot block) would simply not implement Waitable.
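
To illustrate the conditional-impl idea in isolation (these exact types are made up for the example; the real Select works differently):

use std::sync::mpsc::channel;
use std::thread;

// Hypothetical opt-in trait: only futures that can safely block implement it.
trait Waitable: Send + Sized + 'static {
    type Item: Send + 'static;
    fn wait(self) -> Self::Item;
}

// An already-resolved future; trivially Waitable.
struct Ready<T>(T);

impl<T: Send + 'static> Waitable for Ready<T> {
    type Item = T;
    fn wait(self) -> T {
        self.0
    }
}

// A toy Select combinator: Waitable only when both halves are Waitable.
struct Select<A, B>(A, B);

impl<A, B> Waitable for Select<A, B>
    where A: Waitable,
          B: Waitable<Item = A::Item>,
{
    type Item = A::Item;
    fn wait(self) -> A::Item {
        let (tx, rx) = channel();
        let tx2 = tx.clone();
        let Select(a, b) = self;
        thread::spawn(move || { let _ = tx.send(a.wait()); });
        thread::spawn(move || { let _ = tx2.send(b.wait()); });
        rx.recv().unwrap() // whichever half finishes first
    }
}

fn main() {
    println!("{}", Select(Ready(1), Ready(2)).wait());
}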

@alexcrichton (Member):

Perhaps, yeah, but that level of complexity may not be worth it in the long run.

@sbstp commented Aug 15, 2016

True, there would be a lot of extra code to maintain. How about making the function unsafe? A "use at your own risk" kind of deal.

@alexcrichton (Member):

Generally we reserve unsafe purely for functionality relating to memory safety, not as a marker for "this should be avoided".

@carllerche (Member):

My personal preference would be to have this as a standalone fn.

futures::await(my_future);

Also... please have it take a timeout :)
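
Something in that spirit is easy to sketch with std alone; here `work` is a closure standing in for a future, and the whole function is illustrative rather than a proposed API:

use std::sync::mpsc::channel;
use std::thread;
use std::time::Duration;

// Toy standalone form: futures::await(...) plus a timeout.
fn await_timeout<T, F>(work: F, timeout: Duration) -> Option<T>
    where T: Send + 'static,
          F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = channel();
    thread::spawn(move || {
        let _ = tx.send(work());
    });
    // recv_timeout returns Err both on timeout and if the worker panicked
    // (sender dropped), so None covers both failure modes.
    rx.recv_timeout(timeout).ok()
}

fn main() {
    assert_eq!(await_timeout(|| 42, Duration::from_millis(100)), Some(42));
}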

@alexcrichton (Member):

With #77 we would actually be able to write this without channels at all! Additionally, we could relax the 'static restriction and wait on futures which contain stack data.

We've also got thread::park_timeout available for timing out a computation, and it may make sense to expose that rather than require composing with an event loop, since an event loop isn't necessarily needed in a situation like this.

So I'm somewhat leaning towards the following plan of action:

  • Rename the function to await to mirror async/await in other systems
  • Implement with thread::park and Thread::unpark to avoid the 'static requirement imposed by .forget()
  • Add await_timeout which builds on top of park_timeout with the signature:
fn await_timeout(self) -> AwaitTimeout<Self>;

enum AwaitTimeout<F: Future> {
    Finished(Result<F::Item, F::Error>),
    TimedOut(F),
}

Now that I think about it, I'm less sure about the signature of await_timeout, but we can perhaps finagle something.
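
For reference, here is the park/unpark handshake in isolation, with no futures types involved (the slot and value are placeholders):

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let slot = Arc::new(Mutex::new(None));
    let waiter = thread::current();
    let slot2 = slot.clone();

    thread::spawn(move || {
        // Stand-in for the future completing on another thread.
        *slot2.lock().unwrap() = Some(Ok::<i32, ()>(42));
        waiter.unpark();
    });

    // park() can wake spuriously, so re-check the slot in a loop; a timed
    // variant would use thread::park_timeout and hand the future back on
    // expiry, as in AwaitTimeout::TimedOut above.
    let result = loop {
        if let Some(r) = slot.lock().unwrap().take() {
            break r;
        }
        thread::park();
    };
    assert_eq!(result, Ok(42));
}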

In any case, thoughts?

@alexcrichton (Member):

Ok, I've implemented Future::await and Stream::await in 1f51080, so I'm going to close this.

@carllerche I've held off on implementing await_timeout for now, however. In the "timed out" case I figure you'd want the future back so you can perhaps try it again later. One problem with this, though, is that the Task which was used to drive the future up to the timeout is gone, so any TaskData references will immediately become invalid. As a result, resuming a future after an await_timeout will fail in any case that uses TaskData.

I'm thinking we can solve this through an abstraction that represents something like a "fusing" of a future and a task. That way you could await on that or await_timeout on it. In the timeout case it'd reuse the same task from before, so TaskData references would still be valid.

In any case I'd like to add this, but it seems somewhat premature, as I believe we're still settling on the task/executor/future story; we can add it at a later date (but soon after that's settled!).

@nabijaczleweli (Contributor):

Could you publish a crates.io release with this in it?

@alexcrichton (Member):

@nabijaczleweli soon! We have a few final aspects we'd like to flesh out, like #95, before we publish.

For now I'd recommend using this library from git as it's where the development is currently happening.

@nabijaczleweli (Contributor):

I can't publish to crates.io if any of my dependencies are of the git kind, but it's good to see there's some vision of closure.
