First of all, many thanks for this great library.
I'm in the process of adapting this library for my app and came across some of the limitations that were already mentioned in previous issues.
I'd like to put forward a proposal to separate the view logic from the data-reading logic:
- Define a protocol for a data source: it should support reading data in batches, returning `Data` chunks or `[Float]` arrays based on requested ranges
- Clean up `FDWaveformView` and `FDWaveformRenderOperation`, removing the `AVFoundation` dependency: only use the data source protocol

Once you implement the data source protocol, you are free to integrate with any arbitrary data source, not just `AVAsset`s.
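As a rough illustration, such a batched-read protocol might look like the following. This is only a sketch; the names (`AudioSampleSource`, `BufferSource`) and signatures are my assumptions, not the actual proposal.

```swift
import Foundation

// Hypothetical sketch of the proposed data source protocol; the real
// proposal's names and signatures may differ.
protocol AudioSampleSource {
    /// Total number of samples the source can provide.
    var totalSamples: Int { get }
    /// Read a batch of samples covering `range`, as floats in -1...1.
    func samples(in range: Range<Int>) -> [Float]
}

// Trivial in-memory conformance, standing in for any backing store
// (an AVAsset reader, a network stream, a generated signal, ...).
struct BufferSource: AudioSampleSource {
    let buffer: [Float]
    var totalSamples: Int { buffer.count }
    func samples(in range: Range<Int>) -> [Float] {
        Array(buffer[range])
    }
}
```

The key point is that the view and render operation would only ever see the protocol, so any conforming type can feed the waveform.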
As a proof of concept I did all of the above:

- Defined `FDAudioContextProtocol` to define the data access layer
- Updated `FDAudioContext` to encapsulate all code related to loading `AVAsset`s, implementing `FDAudioContextProtocol`
- Updated `FDWaveformRenderOperation` to only use `FDAudioContextProtocol`
- Added `SineWaveAudioContext` as an example to demonstrate a custom data source based on a generated sine wave
- Added an option to the iOS example to load the sine wave; please see the screenshots below
I could achieve all of this with a couple of API changes:

- Made `FDWaveformView.audioContext` public so I can assign my custom sine example
- Made the `WaveformType` enum public
- The rest of the changes were internal and do not affect the API
I would be interested to hear whether this is something you would consider for the project.
If so, let's start a discussion about the specifics. I can dedicate some time to this task in the coming weeks.