Multimodal UX - Audio Component #1112
Conversation
Missing import error catch in environment detection.
Moved exceptions catch into one line.
Not yet implemented, but available to call within notebooks.
Primitives were duplicating code.
Audio/image/video now have API primitives to generate from model.
Very basic but enough for rendering.
Also added sample audio/video assets (both creative commons).
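As a rough sketch of the shape such a primitive could take (the function body, helper names, and AudioOutput class below are assumptions for illustration, not the PR's actual implementation), a media primitive can read a source, base64-encode it, and append a typed node to the model state for the widget to render:

import base64
import pathlib
from dataclasses import dataclass

@dataclass
class AudioOutput:
    # Stand-in for the typed node the widget renders; the real class in the
    # guidance codebase may carry more fields.
    value: str            # base64-encoded media payload
    is_input: bool = False

def audio(lm, src):
    """Hypothetical media primitive: load an audio file, base64-encode it,
    and append a typed node to the model state for the widget to render."""
    data = pathlib.Path(src).read_bytes()             # raw audio bytes
    payload = base64.b64encode(data).decode("utf-8")  # JSON-safe string
    return lm + AudioOutput(value=payload, is_input=True)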
This is important as we're using kernel comms (JSON) behind the scenes.
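Because Jupyter comm messages are JSON, raw media bytes have to be encoded as text before they cross the kernel/frontend boundary. A minimal sketch of that round trip follows; the payload shape here is an assumption, not the widget's actual message schema:

import base64
import json

# Kernel side: wrap raw audio bytes in a JSON-serializable comm payload.
audio_bytes = b"\x00\x01\x02\x03"  # stand-in for real audio data
payload = json.dumps({
    "type": "audio_output",
    "value": base64.b64encode(audio_bytes).decode("utf-8"),
})

# Frontend side: parse the JSON and recover the original bytes.
recovered = base64.b64decode(json.loads(payload)["value"])
assert recovered == audio_bytes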
Clean-up of previous commit.
Important for package testing.
Console prints, frontend controls need to be added later.
There are some large formatting changes to existing code from using the "Svelte for VS Code" extension. It seems like that's actually using Prettier under the hood. Sam and I chatted about this and will move forward with using it as our default formatter, so hopefully the formatting changes won't happen again.
Connected from API primitives to client.
…guidance into multimodal-surfaces
Forgot to commit this for previous.
…guidance into multimodal-surfaces
Besides the failing tests (😆), LGTM. We'll have to get aligned on the API of the image, audio, etc. functions, especially in how we denote "inputs" vs "outputs", but that's non-essential for this first PR.
How much of a blocker are the failing tests for this PR? It'd be great to get it merged when we can (I need to fix merge conflicts now because it's been pending for a while).
@nking-1 not a blocker at all -- the only failing tests are in …
I rewrote the tests as pseudocode comments and just have a …
# TODO(nopdive): Mock for testing. Remove all of this code later.
bytes_data = bytes_from(src, allow_local=allow_local)
base64_string = base64.b64encode(bytes_data).decode('utf-8')
lm += AudioOutput(value=base64_string, is_input=True)
Is the is_input=True an indication that we'll eventually move to a single Audio type that has an is_input flag, much like our TextOutput object has is_generated? (Although we still have a LiteralInput text type.)
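For illustration only, a unified type along those lines might look like the sketch below; the class and field names mirror the question above and are not taken from the codebase:

from dataclasses import dataclass

@dataclass
class Audio:
    # Hypothetical unified media node: one class for both directions, with a
    # flag marking whether the clip came from the user or was generated by the
    # model (analogous to is_generated on TextOutput).
    value: str            # base64-encoded audio payload
    is_input: bool = True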
LGTM. We'll do a pass to remove stub dependencies before release.
Implements an audio component for the Jupyter widget, connecting the audio() guidance function to the widget.
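Assuming the function is importable from the package top level (the exact import path and model setup aren't shown in this PR), notebook usage would look roughly like:

# Hypothetical notebook usage; the import path and model name are assumptions,
# not taken from this PR.
from guidance import models, audio

lm = models.Transformers("some-multimodal-model")  # placeholder model
lm += audio("sample.wav")  # should render an audio player in the widget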