Sebastian Markbåge
React JS · TC39 · The Facebook · Tweets are personal
9,759 Tweets · 455 Following · 43,934 Followers
Tweets
Sebastian Markbåge 6h
Replying to @jessidhia
This is exactly how to do it because it surfaces any synchronization issues, and things that don’t translate between the modes. But if it works for your case, it’s a great workaround to keep you going.
Sebastian Markbåge 9h
Why do you need a new context type for every instance? Why not share one?
Sebastian Markbåge 12h
Yea I didn’t want to comment for this reason because I feel bad about this. However, it needs to be clear that this is experimental and a lot of our original ideas didn’t work and don’t apply anymore. We’ll probably change the api to make this harder to mess up.
Sebastian Markbåge 12h
Replying to @kentcdodds
Implementing suspending hooks without documentation is unfortunately a bad idea. Your example has a really bad case because it recreates the Promise in render, which is not how we recommend doing it anymore. I’d really wait on more docs because everyone gets this wrong.
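A minimal sketch of the pitfall being called out, with assumed names (this is not the React API): a suspending resource throws a pending Promise, so that Promise has to be cached across renders. Creating it inside render would start a fresh fetch on every render and the component could never settle.

```javascript
// Hypothetical suspending resource. The cache ensures repeated renders
// see the SAME promise instead of recreating it each time.
const cache = new Map();

function createResource(key, fetcher) {
  return {
    read() {
      if (!cache.has(key)) {
        const entry = { status: "pending", value: undefined };
        entry.promise = fetcher().then((v) => {
          entry.status = "resolved";
          entry.value = v;
        });
        cache.set(key, entry);
      }
      const entry = cache.get(key);
      if (entry.status === "pending") throw entry.promise; // "suspend"
      return entry.value;
    },
  };
}

// Anti-pattern from the tweet: calling the fetcher (and thus allocating
// a new Promise) directly inside render, with no cache in between.
```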
Sebastian Markbåge 14h
Replying to @kentcdodds
They would’ve been even better if they had caught the bug I’m trying to fix!
Sebastian Markbåge Oct 21
Replying to @brian_d_vaughn
Stop distracting him!
Sebastian Markbåge Oct 14
Yea batching is better. I meant that users should ideally move away from doing so many setState calls in user space (an artifact of Flux store patterns), and we'd have an efficient way of doing that type of broadcasting.
Sebastian Markbåge Oct 14
Not exactly 1 to 1. Batching is also inherent to concurrency. But closer to 1 than hundreds like happens today.
Sebastian Markbåge Oct 14
Ideally we should be closer to 1 setState ~ 1 render tho.
Sebastian Markbåge Oct 14
It's a factor of batching. If React rendered once per setState, you could even imagine reusing the same copies for both the setState and the reconciliation of it. Because of subscriptions and lots of setStates deep in the tree, batching ensures that there are fewer renders than setStates.
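The batching idea above can be sketched in a few lines (names here are illustrative, not React internals): setState only enqueues, and a single flush at the end of the event applies every queued update and renders once.

```javascript
// Minimal batching sketch: N setState calls, 1 render per flush.
let renderCount = 0;

function createComponent(initialState) {
  const queue = [];
  const state = initialState;
  return {
    setState(partial) {
      queue.push(partial); // enqueue only; do NOT render yet
    },
    flush() {
      if (queue.length === 0) return;
      while (queue.length) Object.assign(state, queue.shift());
      renderCount++; // one render per flush, not per setState
    },
    get state() { return state; },
  };
}
```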
Sebastian Markbåge Oct 14
But as we're closer to finishing the API surface area, I get eager to try new implementations of them.
Sebastian Markbåge Oct 14
Updates from imperative (setState) are added to a mutable queue. There's a mutable ref in the tree that indicates which nodes have uncommitted updates. Could make all setStates create a new tree but that's not good for perf given how many there are in current systems.
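A sketch of the structure described in this tweet, with hypothetical field names: each node carries a mutable update queue, and scheduling an update marks the path up to the root so the renderer can skip clean subtrees instead of cloning a new tree per setState.

```javascript
// Mutable update queue + dirty marking toward the root (illustrative).
function createNode(parent) {
  return { parent, hasPendingWork: false, updateQueue: [], children: [] };
}

function scheduleUpdate(node, update) {
  node.updateQueue.push(update);        // mutate the queue in place
  for (let n = node; n !== null; n = n.parent) {
    if (n.hasPendingWork) break;        // path already marked; stop early
    n.hasPendingWork = true;            // flag uncommitted work below
  }
}
```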
Sebastian Markbåge Oct 14
There's another way to do it. Fabric does it. But yea, that part is a bit tricky to get right and may end up needing more memory overall.
Sebastian Markbåge Oct 14
Replying to @dan_abramov @jordwalke
The big difference is that Fiber has imperative events whereas this prototype had functional event dispatching!
Sebastian Markbåge Oct 14
Replying to @dan_abramov @jordwalke
For all purposes the swap is just allocation pooling. Architecturally they're immutable trees. We're thinking of turning off the pooling to allow reclaiming more memory when the tree is stable. So that difference isn't that important.
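The "swap as allocation pooling" point can be sketched like this (field names are illustrative, not React's actual internals): two node buffers alternate between current and work-in-progress, so each render writes into the previous tree's allocation while the pair stays logically immutable.

```javascript
// Double-buffered node pair: the pool is just two alternating objects.
function createPair(initialProps) {
  const a = { props: initialProps, alternate: null };
  const b = { props: null, alternate: null };
  a.alternate = b;
  b.alternate = a;
  return a; // the current node
}

function beginWork(current, nextProps) {
  const wip = current.alternate; // reuse the pooled allocation
  wip.props = nextProps;         // write instead of allocating fresh
  return wip;                    // committing would make wip the new current
}
```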
Sebastian Markbåge Oct 10
But yea, we've added more abstraction cost *too*, and we could do even better than we did in the past. So there's something to fix, but in terms of comparing old and new, I think this lazification is a significant part of it.
Sebastian Markbåge Oct 10
If we look back in time, there were also lots of abstractions and slow code sprinkled around in UIs (like Visual Basic or whatever), and games have had scripting languages (like Lua) while still being part of what we'd consider fast apps.
Sebastian Markbåge Oct 10
People add lots of laziness to C news feeds in user space too. Not everything is due to the language features. Additionally, you may have dynamic linking and mmapped code pages loading in code from disk during execution, which is also lazy.
Sebastian Markbåge Oct 10
It’s the lazification of things. I think we tend to forget how slow many things used to be to start when all the work was done up front. Games still are.
Sebastian Markbåge Oct 10
This is a complex question but the vast majority of time is spent in various forms of “warm up”. Parsing, compiling, class meta data (Java), hidden class generation/look up, inline caches, dependency injection, etc. Doing it again after is several orders of magnitude faster.
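The warm-up effect described above can be illustrated with a toy cache (the "compile" step here is a stand-in; real engines cache parse results, hidden classes, inline caches, and so on): the first call pays the one-time cost, later calls skip it entirely.

```javascript
// First run: expensive "warm up". Subsequent runs: cache hit only.
const compiledCache = new Map();
let compileCount = 0;

function run(source) {
  if (!compiledCache.has(source)) {
    compileCount++; // the one-time warm-up path
    compiledCache.set(source, new Function("x", `return ${source};`));
  }
  return compiledCache.get(source); // warm path: a lookup, nothing more
}
```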