
rt: move block_on to handle #3097

Closed
wants to merge 7 commits into from

Conversation

Keruspe
Contributor

@Keruspe Keruspe commented Nov 5, 2020

Fixes #2965
Fixes #3096

One difference between `Runtime::block_on` and `Handle::block_on`: the latter doesn't use the `Inner` from `BasicScheduler` in the `CurrentThread` configuration. Given that the main use of `Handle::block_on` is to be able to call `block_on` on the `Runtime` when using `#[tokio::main]`, it shouldn't matter, since the `Inner` is already being used to `block_on` the `main` and is thus `None`.

One potential behavioural change in `Runtime::block_on`: the use of `BasicScheduler::inner` is no longer in the loop, so we'll only attempt to use it on the first pass. I cannot think of an easy workaround for this without bringing the scheduler inside the `Handle`, making it `Handle<P: Park>`, which I think would make it a pain to carry around.

Signed-off-by: Marc-Antoine Perennou <[email protected]>
@Keruspe
Contributor Author

Keruspe commented Nov 6, 2020

(The force-push was just a missing comma in the last commit because of a failed amend, one second after the initial push.)

I'm not sure everything is handled as it should be, but at least the public API and the behaviour should be OK.

@taiki-e taiki-e added A-tokio Area: The main tokio crate M-runtime Module: tokio/runtime labels Nov 7, 2020
@sunjay sunjay mentioned this pull request Dec 14, 2020
@LucioFranco
Member

Since this patch isn't trivial and is a larger change than we'd like, we are not going to include it in 1.0. As an alternative, one can use `Arc<Runtime>` and `Weak<Runtime>` to capture the same behavior as this. While this may not be ideal, I do not think hastily including this in 1.0 is a good idea until we have worked through the possible permutations.
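The `Arc<Runtime>`/`Weak<Runtime>` ownership shape suggested here can be sketched with a stand-in type (the `Runtime` struct below is a placeholder, not tokio's; only the shared-ownership pattern is the point):

```rust
use std::sync::{Arc, Weak};

// Placeholder standing in for tokio::runtime::Runtime.
struct Runtime;

fn is_alive(weak: &Weak<Runtime>) -> bool {
    // A component holding only a Weak can check whether the
    // runtime is still alive before trying to use it.
    weak.upgrade().is_some()
}

fn main() {
    let rt = Arc::new(Runtime);
    let weak = Arc::downgrade(&rt);
    assert!(is_alive(&weak));
    drop(rt); // runtime shut down; weak references observe this
    assert!(!is_alive(&weak));
}
```

Components that need to block would hold the `Arc` (keeping the runtime alive), while ones that should not extend its lifetime hold the `Weak` and upgrade on demand.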

@stuhood

stuhood commented Dec 14, 2020

Since this patch isn't trivial and is a larger change than we'd like, we are not going to include it in 1.0. As an alternative, one can use `Arc<Runtime>` and `Weak<Runtime>` to capture the same behavior as this.

How would you get a reference to the runtime when using the tokio::test or tokio::main macros?

@LucioFranco
Member

How would you get a reference to the runtime when using the tokio::test or tokio::main macros?

In this case, you will want to create the runtime manually and use `block_on`. It's a bit more verbose but isn't that much of a change. You just won't be able to use `tokio::test` or `tokio::main`, but those are simple macros that really only save about five lines of code.
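A minimal sketch of what that looks like, assuming the tokio 0.3/1.0 builder API (this is roughly what `#[tokio::main]` generates for you):

```rust
// Build the runtime by hand instead of relying on #[tokio::main],
// then drive the async entry point with Runtime::block_on.
fn main() {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all() // enable the IO and time drivers
        .build()
        .expect("failed to build runtime");

    rt.block_on(async {
        // body that would have lived in `async fn main`
        println!("running on a hand-built runtime");
    });
}
```

The same pattern works for tests: build the runtime at the top of each test function and `block_on` the test body.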

@sunjay
Contributor

sunjay commented Dec 14, 2020

I am fine with letting this iterate for a little while until people are satisfied with the implementation, but I do want to add that as a library attempting to use Tokio entirely behind the scenes, just using Arc<Runtime> is not really an option for me. I'm probably going to need to wait until handle.block_on is released before I can finish my migration.

I definitely agree that in most cases threading the runtime through the app is probably what you want, but in general I think of the runtime as a somewhat global/implicit resource. Being able to get a handle to that in any location is extremely useful. :)

@LucioFranco
Member

@sunjay can you explain more why Arc<Runtime> does not work for you?

@stuhood

stuhood commented Dec 14, 2020

How would you get a reference to the runtime when using the tokio::test or tokio::main macros?

In this case, you will want to create the runtime manually and use `block_on`. It's a bit more verbose but isn't that much of a change. You just won't be able to use `tokio::test` or `tokio::main`, but those are simple macros that really only save about five lines of code.

If you have hundreds of tests, not being able to use tokio::test is a non-trivial issue. Would probably need to write a version of the macro that gave the test a reference to the runtime.

As explained on #3076 (comment) , not having a uniform way to get access to the current Runtime blocks our upgrade to 0.3... and based on the number of issues that reference either this PR or #2965, we're not alone.

@sunjay
Contributor

sunjay commented Dec 14, 2020

@sunjay can you explain more why Arc<Runtime> does not work for you?

@LucioFranco Sure. I am using tokio behind the scenes in the turtle crate. (PR for updating to tokio 0.3.) The crate is targeting beginners to Rust, so we try to avoid additional complexity whenever we can. In the default "synchronous" mode of the crate, tokio is hidden entirely. The runtime is created implicitly the first time an asynchronous call is made.

When users use the AsyncTurtle API (WIP, currently private), they control everything. We expect that people will use #[tokio::main] or otherwise create their Tokio runtime in whatever way suits their application. We don't want them to have to pass a runtime to the turtle crate somehow because then they can't use #[tokio::main] (as you pointed out). This is a totally unnecessary burden and I would not want to add that complexity to the crate.

We could potentially rely on the RUNTIME global static, but that would both break encapsulation and end up spawning a second runtime in cases where the user has already created their own.

So that's why Arc<Runtime> doesn't work for me here. We want tokio to largely remain an implementation detail and we want the user to be able to create their runtime however they want without passing it to the crate.

As for where we'd need to use Arc<Runtime>: Thanks to #3262 being merged we can remove most of our uses of block_on by either calling blocking_send or blocking_recv. The only remaining uses I've found so far are in the code where we wait for a process to end in a Drop implementation.

I don't actually think there is any way to get around this one without block_on, so we are stuck until that is added somehow.
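The Drop situation described here, sketched with the `Handle::block_on` API that was eventually added (see the closing commit below); the `Connection` type and its cleanup body are hypothetical:

```rust
use tokio::runtime::Handle;

// Hypothetical wrapper that must wait for async cleanup in Drop.
struct Connection {
    handle: Handle,
    // e.g. a tokio::process::Child or a shutdown channel would live here
}

impl Drop for Connection {
    fn drop(&mut self) {
        // Block this (non-async) thread until the async cleanup finishes,
        // using the runtime behind `self.handle`.
        self.handle.block_on(async {
            // await process exit / flush / shutdown here
        });
    }
}
```

Note the caveat discussed later in this thread: blocking inside a Drop that itself runs on a runtime worker thread would panic, so this sketch assumes the value is dropped from a non-runtime thread.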

@LucioFranco
Member

@stuhood while, yes, it's unfortunate, for us to get 1.0 out we have needed to reduce surface area, and this is one of those places. It's possible to fork tokio-macros or create your own macro. While that's a bit annoying, it is a way to unblock yourself.

@sunjay personally, I think it's fair to ask users to run it within a future executing on the tokio runtime; that way you reduce the surface area. I think `Handle`/`Handle::current` is a design smell. The tokio futures should panic and provide a clear message as to why (something I think we can do even better).

@qm3ster
Contributor

qm3ster commented Dec 20, 2020

Are we getting Runtime::current() instead, then?

@Darksonn
Contributor

@qm3ster No, just renaming the Handle type to Runtime doesn't solve this issue. See #2965 for discussion on that. We will eventually just add Handle::block_on when we figure out a reasonable behavior if called during shutdown of the runtime.

@JeremyRubin

Just chiming in -- I may be missing something -- but I think I need Handle::try_current + Handle::block_on in an application I'm working on where I'm implementing an externally defined non-async trait on a struct which has internal async types (e.g., TcpStream).

@carllerche
Member

Thanks for doing this. I think this change is fine. It looks like it is mostly shuffling things around. I don't see a problem w/ how this landed.

Thoughts @tokio-rs/maintainers? I believe this is blocking some upgrades to 1.0.

@carllerche
Member

I resolved the merge conflicts.

@Darksonn
Contributor

How does this handle the case where you shut down the runtime while a handle is blocking on something? I'm fine with whatever, but it is nice to know the answer.

@carllerche
Member

How does this handle the case where you shut down the runtime while a handle is blocking on something?

A very good point. What do we want to see here?

@carllerche
Member

My guess is, we want the shutdown logic to reclaim the runtime and terminate it.

Member

@LucioFranco LucioFranco left a comment


Before I review this any more: I am not sure this behavior is correct, and there are no tests? This type of behavior is tricky, and we should have tests checking all the cases.

Comment on lines +208 to +209
/// Run a future to completion on the Tokio runtime. This is the
/// runtime's entry point.
Member


This is incorrect: this is not the entry point, just an additional `block_on`?

///
/// # Multi thread scheduler
///
/// When the multi thread scheduler is used this will allow futures
Member


I would mention it will also run spawned tasks on the runtime worker threads.

///
/// # Current thread scheduler
///
/// When the current thread scheduler is enabled `block_on`
Member


While this is the behavior of Runtime::block_on I don't think this should be the behavior of Handle::block_on? @carllerche What do you think? I don't think the handle's block on should be able to steal the runtime. In that case it is the same as Runtime except it removes the ownership of the runtime type.

Member


Well, my initial thinking was that the handle should follow the Runtime's behavior, but it does complicate the shutdown logic as well as seems a bit weird given the "current_thread" name... Perhaps, block_on in this case should just block the thread and enter the runtime context...

@LucioFranco
Member

Also, the behavior should follow #3548 (which we don't even have yet for `spawn`), where we return an error when the runtime goes away. A handle should not hold ownership of any of the scheduler components (like the task queue).

@LucioFranco
Member

How does this handle the case where you shut down the runtime while a handle is blocking on something? I'm fine with whatever, but it is nice to know the answer.

@Darksonn we should return an error; somehow we'd need to wake up. It's complicated, but in the case where the runtime goes away while you're blocking on it, it will just hang, and this is bad UX / hard to debug.

@carllerche
Member

I opened a draft PR w/ an alternate strategy: #3549. I'm interested in feedback on the pros / cons.

@carllerche
Member

Thanks for the initial work & getting the ball going on this one. Closing in favor of #3569, which should be close.

@carllerche carllerche closed this Mar 16, 2021
davidpdrsn added a commit that referenced this pull request Mar 20, 2021
Add `runtime::Handle::block_on`. The function enters the runtime context and
blocks the current thread while the future executes.

Refs: #3097
Fixes #2965, #3096
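For reference, the API that landed can be used along these lines (a sketch assuming tokio 1.x, where `runtime::Handle::block_on` exists and the runtime stays alive while the handle blocks):

```rust
use tokio::runtime::Runtime;

fn main() {
    let rt = Runtime::new().expect("failed to build runtime");
    let handle = rt.handle().clone();

    // The cheap, cloneable handle can be moved to another thread and used
    // to block on a future there; it enters the runtime context without
    // taking ownership of the runtime itself.
    let th = std::thread::spawn(move || handle.block_on(async { 1 + 1 }));
    assert_eq!(th.join().unwrap(), 2);
}
```

This is exactly the shape the `#[tokio::main]` users in this thread wanted: the runtime is owned in one place, while library code holds only a `Handle`.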
Successfully merging this pull request may close these issues.

Add block_on to Handle
Replacement for Handle::current
9 participants