diff --git a/README.md b/README.md
index 7f3463ba..c20a7fd4 100644
--- a/README.md
+++ b/README.md
@@ -12,13 +12,20 @@ In the subdirectories of this repository you can find some examples of using the
 - [rtp_to_hls](https://github.com/membraneframework/membrane_demo/tree/master/rtp_to_hls) - receiving RTP stream and broadcasting it via HLS
 - [rtsp_to_hls](https://github.com/membraneframework/membrane_demo/tree/master/rtsp_to_hls) - receiving RTSP stream and converting it to HLS
 - [video_mixer](https://github.com/membraneframework/membrane_demo/tree/master/video_mixer) - how to mix audio and video files
-- [speech_to_text](https://github.com/membraneframework/membrane_demo/tree/master/speech_to_text) - real-time speech recognition using [Whisper](https://github.com/openai/whisper) in [Livebook]
 - [webrtc_to_hls](https://github.com/jellyfish-dev/membrane_rtc_engine/tree/master/examples/webrtc_to_hls) - converting WebRTC stream into HLS
 - [webrtc_videoroom](https://github.com/jellyfish-dev/membrane_rtc_engine/tree/master/examples/webrtc_videoroom) - basic example of [Membrane RTC Engine](https://github.com/jellyfish-dev/membrane_rtc_engine.git). It's as simple as possible just to show you how to use our API.
--
+
+There are also some Livebook examples, located in the [livebooks](https://github.com/membraneframework/membrane_demo/tree/master/livebooks) directory:
+
+- [speech_to_text](https://github.com/membraneframework/membrane_demo/tree/master/livebooks/speech_to_text) - real-time speech recognition using [Whisper](https://github.com/openai/whisper) in [Livebook](https://livebook.dev)
+- [audio_mixer](https://github.com/membraneframework/membrane_demo/tree/master/livebooks/audio_mixer) - mix a beep sound into background music
+- [messages_source_and_sink](https://github.com/membraneframework/membrane_demo/tree/master/livebooks/messages_source_and_sink) - set up a simple pipeline and send messages through it
+- [playing_mp3_file](https://github.com/membraneframework/membrane_demo/tree/master/livebooks/playing_mp3_file) - read an MP3 file, transcode it to AAC, and play it
+- [rtmp](https://github.com/membraneframework/membrane_demo/tree/master/livebooks/rtmp) - send and receive an `RTMP` stream
+- [soundwave](https://github.com/membraneframework/membrane_demo/tree/master/livebooks/soundwave) - plot live audio amplitude on a graph
 
 ## Copyright and License
 
-Copyright 2018, [Software Mansion](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane)
+Copyright 2024, [Software Mansion](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane)
 
 [![Software Mansion](https://logo.swmansion.com/logo?color=white&variant=desktop&width=200&tag=membrane-github)](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane)
diff --git a/livebooks/README.md b/livebooks/README.md
new file mode 100644
index 00000000..c37d88e9
--- /dev/null
+++ b/livebooks/README.md
@@ -0,0 +1,18 @@
+# Livebook examples
+
+This folder contains interactive Livebook examples. To launch them, you need to install Livebook first.
+
+## Installation
+
+It is recommended to install Livebook via the command line ([see official installation guide](https://github.com/livebook-dev/livebook#escript)).
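+
+For example, the escript route boils down to two commands (a minimal sketch, assuming Elixir is already installed; see the guide above for details):
+
+```shell
+# install Livebook as an escript, then start the server
+mix escript.install hex livebook
+livebook server
+```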
+
+If Livebook was installed directly from the official page, you should add your `$PATH` variable to the Livebook environment:
+![Setting path](./assets/path_set.png "Setting PATH")
\ No newline at end of file
diff --git a/livebooks/assets/path_set.png b/livebooks/assets/path_set.png
new file mode 100644
index 00000000..a06d753e
Binary files /dev/null and b/livebooks/assets/path_set.png differ
diff --git a/livebooks/audio_mixer/README.md b/livebooks/audio_mixer/README.md
new file mode 100644
index 00000000..5c259d12
--- /dev/null
+++ b/livebooks/audio_mixer/README.md
@@ -0,0 +1,13 @@
+# Membrane audio mixer
+
+This livebook shows how to mix a beep sound into background music over a period of time.
+
+To run the demo, [install Livebook](https://github.com/livebook-dev/livebook#escript) and open the `audio_mixer.livemd` file there.
+
+## Copyright and License
+
+Copyright 2024, [Software Mansion](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane)
+
+[![Software Mansion](https://docs.membrane.stream/static/logo/swm_logo_readme.png)](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane)
+
+Licensed under the [Apache License, Version 2.0](LICENSE)
diff --git a/livebooks/audio_mixer/audio_mixer.livemd b/livebooks/audio_mixer/audio_mixer.livemd
index ab740fdb..121ce3c2 100644
--- a/livebooks/audio_mixer/audio_mixer.livemd
+++ b/livebooks/audio_mixer/audio_mixer.livemd
@@ -5,14 +5,14 @@
 File.cd(__DIR__)
 Logger.configure(level: :error)
 
 Mix.install([
-  {:membrane_core, "~> 0.11.2"},
-  {:membrane_audio_mix_plugin, "~> 0.12.0"},
-  {:membrane_file_plugin, "~> 0.13.0"},
-  {:membrane_mp3_mad_plugin, "~> 0.14.0"},
-  {:membrane_ffmpeg_swresample_plugin, "~> 0.16.1"},
-  {:membrane_aac_fdk_plugin, "~> 0.14.0"},
-  {:membrane_kino_plugin, github: "membraneframework-labs/membrane_kino_plugin"},
-  {:membrane_tee_plugin, "~> 0.10.1"}
+  {:membrane_core, "~> 1.0"},
+  {:membrane_audio_mix_plugin, "~> 0.16.0"},
+  {:membrane_file_plugin, "~> 0.16.0"},
+  {:membrane_mp3_mad_plugin, "~> 0.18.0"},
+  {:membrane_ffmpeg_swresample_plugin, "~> 0.19.0"},
+  {:membrane_aac_fdk_plugin, "~> 0.18.0"},
+  {:membrane_kino_plugin, github: "membraneframework-labs/membrane_kino_plugin", tag: "v0.3.1"},
+  {:membrane_tee_plugin, "~> 0.12.0"}
 ])
 ```
@@ -104,17 +104,17 @@ mixer_output =
 Whole pipeline structure.
 
 ```elixir
-structure = beeps_split ++ [beep_audio_input, background_audio_input, mixer_output]
+spec = beeps_split ++ [beep_audio_input, background_audio_input, mixer_output]
 :ok
 ```
 
 ## Playing audio
 
 ```elixir
-alias Membrane.RemoteControlled, as: RC
+alias Membrane.RCPipeline, as: RC
 
-pipeline = RC.Pipeline.start!()
-RC.Pipeline.exec_actions(pipeline, spec: structure, playback: :playing)
+pipeline = RC.start!()
+RC.exec_actions(pipeline, spec: spec)
 
 kino
 ```
diff --git a/livebooks/messages_source_and_sink/README.md b/livebooks/messages_source_and_sink/README.md
new file mode 100644
index 00000000..f3b9c183
--- /dev/null
+++ b/livebooks/messages_source_and_sink/README.md
@@ -0,0 +1,21 @@
+# Membrane messages source and sink
+
+This livebook shows how to set up a simple pipeline and send messages through it.
+
+To run the demo, [install Livebook](https://github.com/livebook-dev/livebook#escript) and open the `messages_source_and_sink.livemd` file there.
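+
+Under the hood, the pipeline built in the livebook boils down to a spec like the one below (a minimal sketch; `MessageSource` and `MessageSink` are the elements defined in the livebook, while the registered name used here is illustrative):
+
+```elixir
+spec =
+  child(:source, %MessageSource{register_name: :message_source})
+  |> child(:sink, %MessageSink{receiver: self()})
+```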
+
+## Copyright and License
+
+Copyright 2024, [Software Mansion](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane)
+
+[![Software Mansion](https://docs.membrane.stream/static/logo/swm_logo_readme.png)](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane)
+
+Licensed under the [Apache License, Version 2.0](LICENSE)
diff --git a/livebooks/messages_source_and_sink/messages_source_and_sink.livemd b/livebooks/messages_source_and_sink/messages_source_and_sink.livemd
index d3c9fcad..9c88f212 100644
--- a/livebooks/messages_source_and_sink/messages_source_and_sink.livemd
+++ b/livebooks/messages_source_and_sink/messages_source_and_sink.livemd
@@ -5,7 +5,7 @@
 File.cd(__DIR__)
 Logger.configure(level: :error)
 
 Mix.install([
-  {:membrane_core, "~> 0.12.7"}
+  {:membrane_core, "~> 1.0"}
 ])
 ```
@@ -17,17 +17,15 @@ defmodule MessageSource do
 
   require Membrane.Logger
 
-  def_output_pad(:output,
-    mode: :push,
+  def_output_pad :output,
+    flow_control: :push,
     accepted_format: _any
-  )
 
-  def_options(
-    register_name: [
-      description: "The name under which the element's process will be registered",
-      spec: atom()
-    ]
-  )
+  def_options register_name: [
+                description: "The name under which the element's process will be registered",
+                spec: atom()
+              ]
+
   @impl true
   def handle_init(_ctx, opts) do
@@ -75,9 +73,14 @@ end
 
 defmodule MessageSink do
   use Membrane.Sink
 
-  def_input_pad(:input, mode: :push, accepted_format: _any)
-  def_options(receiver: [description: "PID of the process that will
-  receive messages from the sink", spec: pid()])
+  def_input_pad :input,
+    flow_control: :push,
+    accepted_format: _any
+
+  def_options receiver: [
+                description: "PID of the process that will receive messages from the sink",
+                spec: pid()
+              ]
 
   @impl true
   def handle_init(_ctx, opts) do
@@ -111,7 +114,7 @@ defmodule MyPipeline do
   end
 end
 
-{:ok, _supervisor, pipeline} = MyPipeline.start(receiver: self())
+{:ok, _supervisor, pipeline} = Membrane.Pipeline.start(MyPipeline, receiver: self())
 
 payloads = 1..10
 
 Task.async(fn ->
diff --git a/livebooks/playing_mp3_file/README.md b/livebooks/playing_mp3_file/README.md
new file mode 100644
index 00000000..fcee8bb4
--- /dev/null
+++ b/livebooks/playing_mp3_file/README.md
@@ -0,0 +1,24 @@
+# Membrane playing mp3 file demo
+
+This livebook shows how to load `MP3` audio from a file, transcode it to the `AAC` codec, and play it.
+
+To run the demo, [install Livebook](https://github.com/livebook-dev/livebook#escript) and open the `playing_mp3_file.livemd` file there.
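+
+The flow in the livebook is roughly the following (a minimal sketch with element names taken from the livebook; converter and encoder options are elided, and the exact wiring lives in `playing_mp3_file.livemd`):
+
+```elixir
+spec =
+  child(:file_source, %Membrane.File.Source{location: "./assets/sample.mp3"})
+  # decode the MP3, resample the raw audio, and transcode it to AAC for playback
+  |> child(:decoder_mp3, Membrane.MP3.MAD.Decoder)
+  |> child(:converter, Membrane.FFmpeg.SWResample.Converter)
+  |> child(:encoder_aac, Membrane.AAC.FDK.Encoder)
+```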
+
+## Copyright and License
+
+Copyright 2024, [Software Mansion](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane)
+
+[![Software Mansion](https://docs.membrane.stream/static/logo/swm_logo_readme.png)](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane)
+
+Licensed under the [Apache License, Version 2.0](LICENSE)
diff --git a/livebooks/playing_mp3_file/playing_mp3_file.livemd b/livebooks/playing_mp3_file/playing_mp3_file.livemd
index bc70bd99..4017a8af 100644
--- a/livebooks/playing_mp3_file/playing_mp3_file.livemd
+++ b/livebooks/playing_mp3_file/playing_mp3_file.livemd
@@ -5,12 +5,12 @@
 File.cd(__DIR__)
 Logger.configure(level: :error)
 
 Mix.install([
-  {:membrane_core, "~> 0.11.2"},
-  {:membrane_file_plugin, "~> 0.13.0"},
-  {:membrane_mp3_mad_plugin, "~> 0.14.0"},
-  {:membrane_ffmpeg_swresample_plugin, "~> 0.16.1"},
-  {:membrane_aac_fdk_plugin, "~> 0.14.0"},
-  {:membrane_kino_plugin, github: "membraneframework-labs/membrane_kino_plugin"}
+  {:membrane_core, "~> 1.0"},
+  {:membrane_file_plugin, "~> 0.16.0"},
+  {:membrane_mp3_mad_plugin, "~> 0.18.0"},
+  {:membrane_ffmpeg_swresample_plugin, "~> 0.19.0"},
+  {:membrane_aac_fdk_plugin, "~> 0.18.0"},
+  {:membrane_kino_plugin, github: "membraneframework-labs/membrane_kino_plugin", tag: "v0.3.1"}
 ])
 ```
@@ -54,7 +54,7 @@ alias Membrane.{
 audio_path = "./assets/sample.mp3"
 kino = Membrane.Kino.Player.new(audio: true)
 
-structure =
+spec =
   child(:file_source, %File.Source{location: audio_path})
   |> child(:decoder_mp3, MP3.MAD.Decoder)
   |> child(:converter, %FFmpeg.SWResample.Converter{
@@ -73,10 +73,10 @@ structure =
 Run pipeline:
 
 ```elixir
-alias Membrane.RemoteControlled, as: RC
+alias Membrane.RCPipeline, as: RC
 
-pipeline = RC.Pipeline.start!()
-RC.Pipeline.exec_actions(pipeline, spec: structure, playback: :playing)
+pipeline = RC.start!()
+RC.exec_actions(pipeline, spec: spec)
 
 kino
 ```
diff --git a/livebooks/rtmp/README.md b/livebooks/rtmp/README.md
new file mode 100644
index 00000000..b9904572
--- /dev/null
+++ b/livebooks/rtmp/README.md
@@ -0,0 +1,22 @@
+# Membrane RTMP demo
+
+The sender livebook shows how to download video and audio from the web using the `Hackney` plugin and stream them via `RTMP` to the receiver livebook.
+
+The receiver livebook shows how to receive the `RTMP` stream mentioned above and play it in the livebook.
+
+To run the demo, [install Livebook](https://github.com/livebook-dev/livebook#escript) and open both the `rtmp_sender.livemd` and `rtmp_receiver.livemd` files there.
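+
+Assuming both livebooks run on the same machine (the receiver listens on port `1942` in `rtmp_receiver.livemd`), the sender's destination would look like this (an illustrative value only):
+
+```elixir
+# host and port must match wherever the receiver is listening
+destination_url = "rtmp://localhost:1942"
+```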
+
+## Copyright and License
+
+Copyright 2024, [Software Mansion](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane)
+
+[![Software Mansion](https://docs.membrane.stream/static/logo/swm_logo_readme.png)](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane)
+
+Licensed under the [Apache License, Version 2.0](LICENSE)
diff --git a/livebooks/rtmp/rtmp_receiver.livemd b/livebooks/rtmp/rtmp_receiver.livemd
index b8e50fe9..b84ff3a0 100644
--- a/livebooks/rtmp/rtmp_receiver.livemd
+++ b/livebooks/rtmp/rtmp_receiver.livemd
@@ -5,10 +5,10 @@
 File.cd(__DIR__)
 Logger.configure(level: :error)
 
 Mix.install([
-  {:membrane_kino_plugin, github: "membraneframework-labs/membrane_kino_plugin"},
-  {:membrane_core, "~>0.11.2"},
-  {:membrane_realtimer_plugin, "~>0.6.1"},
-  {:membrane_rtmp_plugin, "~>0.11.0"}
+  {:membrane_core, "~> 1.0"},
+  {:membrane_realtimer_plugin, "~> 0.9.0"},
+  {:membrane_rtmp_plugin, "~> 0.19.0"},
+  {:membrane_kino_plugin, github: "membraneframework-labs/membrane_kino_plugin", tag: "v0.3.1"}
 ])
 ```
@@ -45,7 +45,6 @@ defmodule RTMP.Receiver.Pipeline do
       get_child(:source)
       |> via_out(:audio)
      |> child(:audio_parser, %Membrane.AAC.Parser{
-        in_encapsulation: :none,
         out_encapsulation: :ADTS
       })
      |> via_in(:audio)
@@ -54,16 +53,17 @@ defmodule RTMP.Receiver.Pipeline do
     playing_video =
       get_child(:source)
       |> via_out(:video)
-      |> child(:video_parser, %Membrane.H264.FFmpeg.Parser{
-        framerate: {25, 1}
+      |> child(:video_parser, %Membrane.H264.Parser{
+        generate_best_effort_timestamps: %{framerate: {25, 1}},
+        output_stream_structure: :annexb
       })
       |> via_in(:video)
       |> get_child(:player)
 
     player = child(:player, %Membrane.Kino.Player.Sink{kino: kino})
 
-    structure = [source, playing_audio, playing_video, player]
-    {[spec: structure, playback: :playing], %{}}
+    spec = [source, playing_audio, playing_video, player]
+    {[spec: spec], %{}}
   end
 
   # Once the source initializes, we grant it the control over the tcp socket
@@ -133,7 +133,9 @@ defmodule RTMP.Receiver do
       ],
       socket_handler: fn socket ->
         # On new connection a pipeline is started
-        {:ok, _supervisor, pipeline} = RTMP.Receiver.Pipeline.start(socket: socket, kino: kino)
+        {:ok, _supervisor, pipeline} =
+          Membrane.Pipeline.start(RTMP.Receiver.Pipeline, socket: socket, kino: kino)
+
         send(parent, {:pipeline_spawned, pipeline})
         {:ok, pipeline}
       end
@@ -171,6 +173,5 @@
 port = 1942
 kino = Membrane.Kino.Player.new(video: true, audio: true)
 Kino.render(kino)
-
 RTMP.Receiver.run(port: port, kino: kino)
 ```
diff --git a/livebooks/rtmp/rtmp_sender.livemd b/livebooks/rtmp/rtmp_sender.livemd
index 860738f4..f429b7e6 100644
--- a/livebooks/rtmp/rtmp_sender.livemd
+++ b/livebooks/rtmp/rtmp_sender.livemd
@@ -5,10 +5,10 @@
 File.cd(__DIR__)
 Logger.configure(level: :error)
 
 Mix.install([
-  {:membrane_core, "~>0.11.2"},
-  {:membrane_realtimer_plugin, "~>0.6.1"},
-  {:membrane_hackney_plugin, "~>0.9.0"},
-  {:membrane_rtmp_plugin, "~>0.11.0"}
+  {:membrane_core, "~> 1.0"},
+  {:membrane_realtimer_plugin, "~> 0.9.0"},
+  {:membrane_hackney_plugin, "~> 0.11.0"},
+  {:membrane_rtmp_plugin, "~> 0.21.0"}
 ])
 ```
@@ -40,14 +40,13 @@ defmodule RTMP.Sender.Pipeline do
         location: @video_url,
         hackney_opts: [follow_redirect: true]
       })
-      |> child(:video_parser, %Membrane.H264.FFmpeg.Parser{
-        framerate: {25, 1},
-        alignment: :au,
-        attach_nalus?: true,
-        skip_until_keyframe?: true
+      |> child(:video_parser, %Membrane.H264.Parser{
+        output_alignment: :au,
+        skip_until_keyframe: true,
+        generate_best_effort_timestamps: %{framerate: {25, 1}}
       })
       |> child(:video_realtimer, Membrane.Realtimer)
-      |> child(:video_payloader, Membrane.MP4.Payloader.H264)
+      |> child(:video_payloader, %Membrane.H264.Parser{output_stream_structure: :avc1})
 
     audio_source =
       child(:audio_source, %Membrane.Hackney.Source{
         location: @audio_url,
         hackney_opts: [follow_redirect: true]
       })
      |> child(:audio_parser, %Membrane.AAC.Parser{
-        in_encapsulation: :ADTS,
         out_encapsulation: :ADTS
       })
      |> child(:audio_realtimer, Membrane.Realtimer)
@@ -66,17 +64,17 @@ defmodule RTMP.Sender.Pipeline do
         max_attempts: :infinity
       })
 
-    structure = [
+    spec = [
       video_source
-      |> via_in(:video)
+      |> via_in(Pad.ref(:video, 0))
       |> get_child(:rtmp_sink),
       audio_source
-      |> via_in(:audio)
+      |> via_in(Pad.ref(:audio, 0))
       |> get_child(:rtmp_sink),
       rtmp_sink
     ]
 
-    {[spec: structure, playback: :playing], %{streams_to_end: 2}}
+    {[spec: spec], %{streams_to_end: 2}}
   end
 
   # The rest of the example module is only used for self-termination of the pipeline after processing finishes
@@ -116,7 +114,9 @@ defmodule RTMP.Sender do
   end
 
   defp start_tcp_client(destination_url) do
-    {:ok, _supervisor, pipeline} = RTMP.Sender.Pipeline.start(destination: destination_url)
+    {:ok, _supervisor, pipeline} =
+      Membrane.Pipeline.start(RTMP.Sender.Pipeline, destination: destination_url)
+
     {:ok, pipeline}
   end
diff --git a/livebooks/soundwave/README.md b/livebooks/soundwave/README.md
new file mode 100644
index 00000000..06095a7f
--- /dev/null
+++ b/livebooks/soundwave/README.md
@@ -0,0 +1,13 @@
+# Membrane Soundwave demo
+
+This livebook example shows how to perform real-time soundwave plotting with the use of the [Membrane Framework](https://github.com/membraneframework) and [Vega-Lite](https://vega.github.io/vega-lite/).
+
+To run the demo, [install Livebook](https://github.com/livebook-dev/livebook#escript) and open the `soundwave.livemd` file there.
+
+## Copyright and License
+
+Copyright 2024, [Software Mansion](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane)
+
+[![Software Mansion](https://docs.membrane.stream/static/logo/swm_logo_readme.png)](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane)
+
+Licensed under the [Apache License, Version 2.0](LICENSE)
diff --git a/livebooks/soundwave/soundwave.livemd b/livebooks/soundwave/soundwave.livemd
index 8e0f51ae..6abdf336 100644
--- a/livebooks/soundwave/soundwave.livemd
+++ b/livebooks/soundwave/soundwave.livemd
@@ -1,29 +1,27 @@
 # Soundwave plotting example
 
-## Introduction
-
-This livebook example shows how to perform real-time soundwave plotting with the use of the [Membrane Framework](https://github.com/membraneframework) and [Vega-Lite](https://vega.github.io/vega-lite/).
-
-By following that example you will learn how to read the audio from the microphone, how is audio represented, and how to create your custom Membrane element that plots the soundwave with the use of the elixir bindings to the Vega-Lite.
-
-## Installation
-
-You need to install the following `mix` dependencies:
-
 ```elixir
 File.cd(__DIR__)
 Logger.configure(level: :error)
 
 Mix.install([
-  {:membrane_core, "~> 0.12.7"},
-  {:membrane_raw_audio_parser_plugin, "~> 0.2.0"},
-  {:membrane_portaudio_plugin, "~>0.16.0"},
-  {:vega_lite, "~> 0.1.7"},
-  {:kino_vega_lite, "~> 0.1.8"}
+  {:membrane_core, "~> 1.0"},
+  {:membrane_raw_audio_parser_plugin, "~> 0.4.0"},
+  {:membrane_portaudio_plugin, "~> 0.18.0"},
+  {:vega_lite, "~> 0.1.8"},
+  {:kino_vega_lite, "~> 0.1.11"}
 ])
 ```
 
-Furthermore, you need to have `FFmpeg` installed. For installation details take a look [here](https://www.ffmpeg.org/).
+## Introduction
+
+This livebook example shows how to perform real-time soundwave plotting with the use of the [Membrane Framework](https://github.com/membraneframework) and [Vega-Lite](https://vega.github.io/vega-lite/).
+
+By following this example, you will learn how to read audio from the microphone, how audio is represented, and how to create a custom Membrane element that plots the soundwave using the Elixir bindings to Vega-Lite.
+
+## Installation
+
+You need to have `FFmpeg` installed. For installation details, take a look [here](https://www.ffmpeg.org/).
 
 ## Soundwave plotting sink
@@ -84,10 +82,9 @@ defmodule Visualizer do
 
   @points_per_update @window_size / (@window_duration * @plot_update_frequency)
 
-  def_input_pad(:input,
+  def_input_pad :input,
     accepted_format: %RawAudio{},
     flow_control: :auto
-  )
 
   @impl true
   def handle_init(_ctx, _opts) do
@@ -172,7 +169,7 @@ defmodule Visualizer do
 
           x =
             (pts + (chunk_i * samples_per_point + sample_i) * sample_duration - state.initial_pts)
-            |> Membrane.Time.round_to_milliseconds()
+            |> Membrane.Time.as_milliseconds(:round)
 
           %{x: x, y: value}
         end)
@@ -241,7 +238,7 @@ alias Membrane.RCPipeline
 pipeline = RCPipeline.start!()
 ```
 
-Finally, we can commission `spec` action execution with the previously created `structure`:
+Finally, we can execute the `spec` action with the previously created pipeline structure:
 
 ```elixir
 RCPipeline.exec_actions(pipeline, spec: spec)
diff --git a/speech_to_text/README.md b/livebooks/speech_to_text/README.md
similarity index 100%
rename from speech_to_text/README.md
rename to livebooks/speech_to_text/README.md
diff --git a/speech_to_text/speech_to_text.livemd b/livebooks/speech_to_text/speech_to_text.livemd
similarity index 100%
rename from speech_to_text/speech_to_text.livemd
rename to livebooks/speech_to_text/speech_to_text.livemd
diff --git a/livebooks/video_compositor/res/input.h264 b/livebooks/video_compositor/res/input.h264
deleted file mode 100644
index aef03ffb..00000000
Binary files a/livebooks/video_compositor/res/input.h264 and /dev/null differ
diff --git a/livebooks/video_compositor/video_compositor.livemd b/livebooks/video_compositor/video_compositor.livemd
deleted file mode 100644
index c7dd5e99..00000000
--- a/livebooks/video_compositor/video_compositor.livemd
+++ /dev/null
@@ -1,183 +0,0 @@
-# Video Compositor Example
-
-```elixir
-File.cd(__DIR__)
-Logger.configure(level: :error)
-
-Mix.install([
-  {:membrane_core, "~> 0.11.2"},
-  {:membrane_file_plugin, "~> 0.13.0"},
-  {:membrane_raw_video_format, "~> 0.2"},
-  {:membrane_h264_ffmpeg_plugin, "~> 0.25.2"},
-  {:membrane_video_compositor_plugin, "~> 0.2.1"},
-  {:membrane_hackney_plugin, "~> 0.9.0"},
-  {:membrane_realtimer_plugin, "~> 0.6.0"},
-  {:membrane_kino_plugin, github: "membraneframework-labs/membrane_kino_plugin"}
-])
-```
-
-## Installation
-
-To run this demo one needs to install native dependencies:
-
-1. [H264 FFmpeg](https://github.com/membraneframework/membrane_h264_ffmpeg_plugin/#installation)
-2. [Video Compositor](https://github.com/membraneframework/membrane_video_compositor_plugin#installation)
-
-## Pipeline definition
-
-Defines a video compositor with a `1440x960` output scene, waiting for up to five videos. Each of them is downloaded in real-time from the network using the `Hackney` module, parsed and decoded from the `H264` format, and composed in real-time. After that, they will be displayed using the `Kino.Video` player.
- -```elixir -defmodule DynamicVideoComposition do - use Membrane.Pipeline - - alias Membrane.H264.FFmpeg.{Encoder, Parser, Decoder} - alias Membrane.{Pad, RawVideo, Time} - - alias Membrane.{ - Hackney, - Kino, - VideoCompositor - } - - alias Membrane.VideoCompositor.RustStructs.BaseVideoPlacement, as: VideoPlacement - alias Membrane.VideoCompositor.VideoTransformations - - alias Membrane.VideoCompositor.VideoTransformations.TextureTransformations.{ - CornersRounding, - Cropping - } - - @media_url "http://raw.githubusercontent.com/membraneframework/static/gh-pages/samples/big-buck-bunny/bun33s_720x480.h264" - - @framerate {25, 1} - - @output_video_format %RawVideo{ - width: 1440, - height: 960, - framerate: @framerate, - pixel_format: :I420, - aligned: true - } - - @placement %VideoPlacement{ - size: {720, 480}, - position: nil, - z_value: nil - } - - @positions [{0, 0}, {720, 0}, {0, 480}, {720, 480}, {360, 240}] - @z_values [0.0, 0.0, 0.0, 0.0, 1.0] - - @corners_rounding %CornersRounding{border_radius: 75} - @cropping %Cropping{ - crop_top_left_corner: {0.1, 0.1}, - crop_size: {0.8, 0.8} - } - - @transformations [ - %VideoTransformations{texture_transformations: [@corners_rounding]}, - %VideoTransformations{texture_transformations: [@corners_rounding]}, - %VideoTransformations{texture_transformations: [@corners_rounding]}, - %VideoTransformations{texture_transformations: [@corners_rounding]}, - %VideoTransformations{texture_transformations: [@cropping, @corners_rounding]} - ] - - @impl true - def handle_init(_ctx, options) do - structure = - child(:compositor, %VideoCompositor{stream_format: @output_video_format}) - # Kino player requires H264 frames without b-frames - |> child(:encoder, %Encoder{profile: :baseline}) - |> via_in(:video) - |> child(:player, %Kino.Player.Sink{kino: options[:kino]}) - - start_time = Time.vm_time() - state = %{next_video_id: 0, start_time: start_time} - - {[spec: structure, playback: :playing], state} - end - - @impl true - def handle_info(:add_video, _ctx, state) when state.next_video_id >= 5 do - {[], state} - end - - @impl true - def handle_info(:add_video, _ctx, state) do - %{start_time: start_time, next_video_id: next_video_id} = state - - position = Enum.at(@positions, next_video_id) - z_value = Enum.at(@z_values, next_video_id) - placement = %VideoPlacement{@placement | position: position, z_value: z_value} - - transformations = Enum.at(@transformations, next_video_id) - - offset = Time.vm_time() - start_time - - video_id = next_video_id - - video_source = - child({:media_source, video_id}, %Hackney.Source{ - location: @media_url, - hackney_opts: [follow_redirect: true], - max_retries: 3 - }) - |> child({:parser, video_id}, %Parser{framerate: @framerate}) - |> child({:decoder, video_id}, Decoder) - # Hackney source may fetch all frames at once from one source and flood compositor. - # This could cause undefined behaviour if the first source sent the entire stream - # and EOS signal before connecting the other sources. - # To ensure real-time processing, all streams are synchronized by reducing - # processing speed (based on the framerate). 
-      |> child({:realtimer, video_id}, Membrane.Realtimer)
-      |> via_in(Pad.ref(:input, video_id),
-        options: [
-          initial_placement: placement,
-          initial_video_transformations: transformations,
-          timestamp_offset: offset
-        ]
-      )
-      |> get_child(:compositor)
-
-    {[spec: video_source], %{state | next_video_id: next_video_id + 1}}
-  end
-end
-
-:ok
-```
-
-## Playing video
-
-Activate pipeline and wait for videos:
-
-```elixir
-kino = Membrane.Kino.Player.new(video: true)
-
-{:ok, _supervisor_pid, pipeline} =
-  Membrane.Pipeline.start(DynamicVideoComposition, kino: kino)
-
-kino
-```
-
-One can add up to five video inputs:
-
-```elixir
-send(pipeline, :add_video)
-```
-
-```elixir
-send(pipeline, :add_video)
-```
-
-```elixir
-send(pipeline, :add_video)
-```
-
-```elixir
-send(pipeline, :add_video)
-```
-
-```elixir
-send(pipeline, :add_video)
-```