
Re-architect codec support into separate packages.

See original GitHub issue

The codec support in Flutter Sound is fairly limited.

Users are looking for an expanded set of codecs.

Adding codecs to the core Flutter Sound package is causing bloat (FFmpeg being a case in point).

Many users will only want a small number of codecs, and some will be happy with the native codecs provided by each platform.

This proposal aims to:

  1. keep the core Flutter Sound package small.
  2. provide only native codec support in the core package.
  3. create an architecture that allows users to choose which non-native codecs they want to include.
  4. provide a mechanism for third parties to develop support for additional codecs without needing to (git) commit to the Flutter Sound project.
  5. provide a mechanism so codec conversion happens in an isolate to avoid jank. Note: currently isolates can’t communicate with plugins (unless you set up a mechanism to relay the communication via the main isolate).

To achieve this, I’m proposing the following:

  • Remove all non-native codec support from Flutter Sound.
  • Develop an API that allows codec support to be ‘registered’ with the core Flutter Sound package.
  • Move all existing (non-native) codecs into separate packages that register themselves with the above API.
  • Put each codec into its own package so users can pick and choose which codecs they want to support in their app.

So a pubspec.yaml might look like this:

flutter_sound: 5.x
flutter_sound_mpeg: 1.0
flutter_sound_pcm: 1.2
third_party_wav: 1.0

The end user can now pick and choose their preferred codecs.

So I see the API working something like this:

Each codec package would implement:

abstract class CodecConverter
{
  Track convertTrack(Track from);
}

The Track contains the codec and the audio information, so it seems like a convenient (existing) vehicle for passing the data. I considered using other mechanisms (raw paths or buffers), but they don’t appear to add any benefit. Some codecs also support metadata, which the Track class includes, so that may also be useful.

Each codec package would implement one or more classes that implement the CodecConverter interface.
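As a minimal sketch, a hypothetical AAC-to-WAV package might provide something like this (the AacToWavConverter name and the codec pair are illustrative assumptions, not part of the proposal):

class AacToWavConverter implements CodecConverter
{
  @override
  Track convertTrack(Track from)
  {
    // Decode the AAC audio carried by `from`, re-encode it as WAV
    // and return a new Track pointing at the converted audio.
    // ... actual decode/encode logic elided ...
  }
}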

When the app starts, the CodecConverter(s) would need to be registered:

class CodecConverters
{
   /// throws an exception if the conversion is already registered.
   static void register({CodecConverter converter, Codec from, Codec to});

   static CodecConverter getConverter({Codec from, Codec to});

   static List<CodecConverter> get converters;
}

A single package may register any number of codec conversions.
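At app startup, a package’s registration might then look something like this (a rough sketch; it assumes the hypothetical AacToWavConverter above and the Codecs registry described further down):

void registerAacToWav()
{
  // Register the converter for the AAC → WAV conversion pair.
  CodecConverters.register(
      converter: AacToWavConverter(),
      from: Codecs.getByName('aac'),
      to: Codecs.getByName('wav'));
}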

This mechanism would also allow a package to ‘chain’ multiple conversions. The package would register its own converter for the end-to-end pair and implement it as two phases:

CodecConverters.register(converter: aToCConverter, from: codecA, to: codecC);

Track convertTrack(Track ta)
{
   var phase1 = CodecConverters.getConverter(from: codecA, to: codecB);
   var phase2 = CodecConverters.getConverter(from: codecB, to: codecC);
   Track tb = phase1.convertTrack(ta);
   Track tc = phase2.convertTrack(tb);
   return tc;
}

We would need to change the Codec enum over to registered names:

class Codecs
{
   /// throws an exception if the name already exists.
   static Codec register({String name, String extension});

   /// perhaps
   static Codec getOrRegister({String name, String extension});

   /// methods to look up a codec.
   static Codec getByName(String name);
   static Codec getByExtension(String extension);

   /// returns a list of the native codecs supported by the current platform.
   static List<Codec> get nativeCodecs;
}
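Usage might then look something like this (the ‘opus’ name and ‘.opus’ extension are just illustrative assumptions):

// register (or fetch) a codec by name, then look codecs up later.
var opus = Codecs.getOrRegister(name: 'opus', extension: '.opus');
var wav = Codecs.getByExtension('.wav');
var native = Codecs.nativeCodecs;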

With the 5.0 API we use Track for both recording and playback, so the above mechanism can be implemented without changing the 5.x API.

For playback: the AudioPlayer will convert the track before it starts playing. This means that for URLs we are going to have to pre-cache the data.

  • the pre-caching should be done in an isolate
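A rough sketch of that pre-playback step, assuming the registries above (the prepareForPlayback name, the Track.codec field and the PCM target codec are all hypothetical here):

Track prepareForPlayback(Track track)
{
  // Native codecs play directly; everything else is converted first.
  if (Codecs.nativeCodecs.contains(track.codec)) return track;

  var converter = CodecConverters.getConverter(
      from: track.codec, to: Codecs.getByName('pcm'));
  return converter.convertTrack(track);
}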

This might not be a bad thing, because at some point we probably want to add streaming and do real-time codec translation. That will require a more advanced converter that takes a segment of the audio in a buffer and converts one segment at a time. We possibly need to do this in an isolate; perhaps we have an isolate dedicated to codec conversions. I’m not proposing that we do this at the moment, but the above architecture would allow us to add it with no changes to the core Flutter Sound API.

For recording, the conversion would occur once the recording has finished. The streaming concept noted above could also be applied to recording at a later date.

Isolates/Jank

Converting large files will cause UI frames to be dropped (jank). To avoid this we need a mechanism that pushes the conversions into an isolate. Moving data between isolates is limited and expensive; for some operations the cost of the transfer is higher than the gain from pushing the logic into an isolate.

We essentially have two data types to worry about: file-based audio and buffer-based audio. File-based audio is easy to deal with: we just pass the path across the isolate boundary and wait for the conversion to complete.

Buffer-based audio will be more problematic. For large buffers it is likely that just the act of sending/receiving the data across the isolate boundary will cause jank. We have two possible paths to solving this. The first is a ‘packet’-based protocol where we send the buffer across the isolate boundary in small packets; the sending of these packets would need to be queued and dispatched with something like a sequence of Future.delayed calls. By using the Future.delayed mechanism we are essentially yielding to the Flutter message queue, allowing it time to draw frames between each transmission. Alternatively, we could write the buffer to a file and send the path to the isolate. In either case we would also need to queue the writes to the file to avoid jank.
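As a minimal sketch of the file-based path, using Flutter’s compute helper (the function names and the output-path convention are assumptions, not part of the proposal):

import 'package:flutter/foundation.dart';

/// Hypothetical top-level entry point that runs inside the spawned
/// isolate. Only the file path crosses the isolate boundary, so no
/// large buffers are copied.
String _convertFile(String fromPath)
{
  // ... decode `fromPath` and write the converted audio ...
  return '$fromPath.converted';
}

Future<String> convertInIsolate(String fromPath)
{
  // compute() spawns an isolate, runs the callback and returns its
  // result, leaving the UI isolate free to keep drawing frames.
  return compute(_convertFile, fromPath);
}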

Thoughts/comments?

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 2
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

4 reactions
bsutton commented, May 8, 2020

My proposal is not about creating additional flavours of Flutter Sound.

There should be only one flavour of Flutter Sound (fs), as this will make the code base easier to maintain.

My proposal is to create separate packages, one for each codec.

The codec packages could even be in separate git repos and we could encourage other developers to maintain those packages.

One significant advantage of this approach is that other people can easily add additional codecs without changing fs.

If a developer wants PCM support, they can develop it themselves.

This will also reduce our support load, as other developers can support the codecs.

It also makes fs smaller. If a dev doesn’t want MPEG support, why should we force them to include it in their app?

My proposed architecture leaves these decisions in their hands, and they don’t have to wait for us to add another codec; they can do it themselves.

On Fri, 8 May 2020, 6:03 pm Larpoux, notifications@github.com wrote:

We already have two flavors of Flutter Sound:

  • The LITE flavor which does not include FFmpeg
  • The FULL flavor which has FFmpeg embedded

I think that this already fulfills our needs.


2 reactions
Larpoux commented, May 8, 2020

We already have two flavors of Flutter Sound:

  • The LITE flavor which does not include FFmpeg
  • The FULL flavor which has FFmpeg embedded

I think that this already fulfills our needs.

Read more comments on GitHub
