Replies: 1 comment
Thanks for writing this up @zrho, I agree with all of the proposal. @NickHu I mentioned some of this to you in Edinburgh. The main point to clarify, imho, is where the boundary between the future client and server will lie. Something critical we should work to ensure is that the data in homotopy-web actions that gets passed through `homotopy-rs/homotopy-web/src/model.rs` (line 83 at `196e11c`) can be (de)serialized and potentially sent over the wire. That requires, for example, changing all references to diagrams in actions, e.g. in attach options, to generational IDs.
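To illustrate the point about generational IDs, here is a minimal sketch; all names (`DiagramId`, `Action`, the variants) are invented for illustration and are not taken from the codebase. The idea is that once actions refer to diagrams by ID instead of holding diagram values, the whole action type is plain data and could be (de)serialized, e.g. with a serde derive, and sent over the wire:

```rust
// Hypothetical sketch -- none of these names are from the actual codebase.

/// A generational ID: `index` locates a slot in the server-side diagram
/// table; `generation` guards against stale references once a slot is reused.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct DiagramId {
    pub index: u32,
    pub generation: u32,
}

/// An action mentions diagrams only by ID, so the whole enum is plain,
/// serializable data.
#[derive(Clone, Debug, PartialEq)]
pub enum Action {
    SelectDiagram(DiagramId),
    Attach {
        target: DiagramId,
        boundary_path: Vec<usize>,
    },
}

fn main() {
    let id = DiagramId { index: 7, generation: 0 };
    let action = Action::Attach { target: id, boundary_path: vec![0, 1] };
    // ID equality is a cheap integer comparison, no diagram traversal needed.
    assert_eq!(id, DiagramId { index: 7, generation: 0 });
    assert_ne!(Action::SelectDiagram(id), action);
}
```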
---
This is an initial writeup of an idea developed together with @regular-citizen at POPL.
We noticed that, despite good efforts, user-interface concerns are still rather entangled with the design of the core and the layout.
It is becoming increasingly clear that parallel processing will be needed to keep up with our performance goals, even if some performance issues might be solvable in the short term. A previous attempt at introducing parallelism was thwarted, however, by a curiosity of the browser's architecture: while web workers may perform blocking tasks, the main thread of the browser may never block.
This clashes with our use of hash consing, which is essential to achieve reasonable performance. Diagrams are highly redundant, so algorithms can avoid most of their work by caching, which in turn requires fast equality checks and hash computations on diagrams. But hash consing modifies a shared data structure on every diagram creation. This includes taking slices of diagrams other than the source, so even ostensibly "read-only" tasks mutate shared state. While managing contention is tricky in itself, the browser's inability to do any synchronisation on the main thread limits the design space to declaring diagrams `!Send`, with thread-local, isolated hash consing and serialised message passing.

There is an alternative approach: the main thread (from now on called the `client`) should not manipulate `Diagram`s or `Rewrite`s at all. Instead it interacts via message passing with one or more web workers (depending on the number of CPUs and whether multithreading is supported and enabled in the browser). These web workers together form the `server`. The messages for this use case do not need to carry entire serialised diagrams, except when the user imports or exports a diagram. Instead, diagrams are retained in a table on the `server` and are referred to by their ids.

Diagram ids should have the following properties:
- Every diagram on the `server` has a unique id. Whenever some operation leads to creating the same diagram again, the id is reused. This allows the `client` to check whether two diagrams are equal simply by comparing their ids, which it can use to avoid redundant requests to the `server`, such as rerendering the same diagram. Since we already have hash consing, this is trivial to implement.
- `Diagram` structs are reference counted. When the `client` no longer needs a diagram, it should be able to signal this to the `server`, which then decreases the reference count of the corresponding entry in the diagram table.

There is now design space as to where the separation between the `client` and the `server` lies:
I would strongly argue for the first option, and if I remember correctly @regular-citizen agrees. An API for this should not be terribly hard to specify, with the possible exception of 3D/4D rendering. I have not considered that part deeply yet, since I do not currently have a coherent picture of what the pipeline looks like. Moving forward with this proposal would entail figuring out a satisfying solution for it.
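As a rough sketch of what such an API could look like, here is a hypothetical request/response protocol; every name below (`Request`, `Response`, the variants, `handle`) is invented for illustration. Requests refer to diagrams only by id, and full serialised diagrams cross the boundary only for import/export:

```rust
// Hypothetical sketch of a client/server message protocol.

type DiagramId = u64;

#[derive(Debug, PartialEq)]
enum Request {
    /// Perform an attachment; the server interns the result.
    Attach { diagram: DiagramId, option: usize },
    /// Ask for display geometry; the reply carries render data,
    /// never the diagram itself.
    Render2D { diagram: DiagramId },
    /// Drop the client's reference to a diagram.
    Release { diagram: DiagramId },
    /// Export crosses the boundary with a full serialised diagram.
    Export { diagram: DiagramId },
}

#[derive(Debug, PartialEq)]
enum Response {
    /// Id of the resulting diagram (reused if the diagram already exists).
    NewDiagram(DiagramId),
    /// Placeholder for the output of the layout/render pipeline.
    RenderData(Vec<u8>),
    /// Serialised diagram for export.
    Exported(Vec<u8>),
    /// Acknowledgement with no payload.
    Ack,
}

/// Toy dispatch: a real server would consult its diagram table.
fn handle(req: Request) -> Response {
    match req {
        // Faked: derive a "new" id instead of performing a real attachment.
        Request::Attach { diagram, option } => {
            Response::NewDiagram(diagram + option as u64 + 1)
        }
        Request::Render2D { .. } => Response::RenderData(Vec::new()),
        Request::Release { .. } => Response::Ack,
        Request::Export { .. } => Response::Exported(Vec::new()),
    }
}

fn main() {
    assert_eq!(
        handle(Request::Attach { diagram: 3, option: 0 }),
        Response::NewDiagram(4)
    );
    assert_eq!(handle(Request::Release { diagram: 3 }), Response::Ack);
}
```

In the browser, these enums would be serialised and shipped through the web worker `postMessage` boundary rather than called directly.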
With such a separation in place, we gain a lot:
- `Diagram`s live exclusively on the server; they are no longer manipulated on the main thread at all. This opens up parallel computation on the server and a range of options for making hash consing fast in the parallel setting, which is hard enough even without the non-blocking constraint.
- The UI no longer has to be written in Rust. With all due respect to the `yew` developers, working with it felt like trying to squeeze a square peg into a round hole, and I have heard complaints of this form from (almost?) everyone on the team. Choosing Rust for the core and the algorithmic parts of the tool was essential to achieve any kind of acceptable performance and has led to a developer experience that is miles better. But at least in the current state of the ecosystem, Rust is inappropriate for the UI, and we should use a tool that is fit for purpose, which this proposal would enable. This would even allow hiring interns or professionals familiar with web UI development who are unwilling to learn Rust or do not have the time.
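To make the server-side diagram table concrete, here is a minimal sketch; the names (`DiagramTable`, `insert`, `release`) and the plain `u64` ids are assumptions for illustration, not the actual homotopy-rs code. The table interns diagrams, so structurally equal diagrams share an id, and reference-counts entries so the client can release diagrams it no longer needs:

```rust
use std::collections::HashMap;

/// Stand-in for the real `Diagram` type.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct Diagram(String);

#[derive(Default)]
struct DiagramTable {
    by_diagram: HashMap<Diagram, u64>,     // interning map: diagram -> id
    by_id: HashMap<u64, (Diagram, usize)>, // id -> (diagram, refcount)
    next_id: u64,
}

impl DiagramTable {
    /// Intern a diagram: equal diagrams always yield the same id.
    fn insert(&mut self, d: Diagram) -> u64 {
        if let Some(&id) = self.by_diagram.get(&d) {
            self.by_id.get_mut(&id).unwrap().1 += 1;
            return id;
        }
        let id = self.next_id;
        self.next_id += 1;
        self.by_diagram.insert(d.clone(), id);
        self.by_id.insert(id, (d, 1));
        id
    }

    /// Signal from the client that it no longer needs this diagram.
    fn release(&mut self, id: u64) {
        let evict = match self.by_id.get_mut(&id) {
            Some(entry) => {
                entry.1 -= 1;
                entry.1 == 0
            }
            None => false,
        };
        if evict {
            let (d, _) = self.by_id.remove(&id).unwrap();
            self.by_diagram.remove(&d);
        }
    }
}

fn main() {
    let mut table = DiagramTable::default();
    let a = table.insert(Diagram("assoc".into()));
    let b = table.insert(Diagram("assoc".into()));
    let c = table.insert(Diagram("unit".into()));
    assert_eq!(a, b); // structurally equal diagrams share an id
    assert_ne!(a, c);
    table.release(a);
    table.release(b); // refcount reaches zero, entry evicted
    assert!(!table.by_id.contains_key(&a));
}
```

The id comparison `a == b` is exactly the cheap equality check the client needs to skip redundant rerender requests.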