Description
This is not really an issue per se, but I'm trying to gain some insight into why the ArrayBase serialization format is the way it is.
With this example 2-D array:
array![[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]]
serialization output to JSON looks like this:
{"v":1,"dim":[3,3],"data":[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0]}
What I wonder is why the more human-readable (and, depending on the data, more compact) format produced by the alternate ser/de functions in serde_ndim is not used instead:
[[0.0,1.0,2.0],[3.0,4.0,5.0],[6.0,7.0,8.0]]
In fact, it's exactly how a user would write the array syntactically in code.
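I haven't dug into serde_ndim's exact API, but the nested form itself is easy to hand-roll for the 2-D case, e.g. (illustrative only, not serde_ndim's code):

```rust
use ndarray::{array, Array2};

// Illustrative only: convert a 2-D array into nested Vecs so that
// serde_json emits the nested [[...],[...]] form.
fn to_nested(a: &Array2<f64>) -> Vec<Vec<f64>> {
    a.outer_iter().map(|row| row.to_vec()).collect()
}

fn main() -> Result<(), serde_json::Error> {
    let a = array![[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]];
    println!("{}", serde_json::to_string(&to_nested(&a))?);
    // => [[0.0,1.0,2.0],[3.0,4.0,5.0],[6.0,7.0,8.0]]
    Ok(())
}
```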
I can speculate on a few reasons, but I'd like insight from ndarray's contributors. Is it purely for performance? I understand arrays are stored 1-D in memory, and decoding the shape from nested lists must take some extra processing time. Are there other considerations I'm missing?
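To make my speculation concrete: with the nested form the shape isn't known up front, so the decoder has to check that rows aren't ragged before it can build the array. A rough sketch of what I imagine a 2-D decoder has to do (hand-rolled, not any library's actual implementation):

```rust
use ndarray::Array2;

// Hand-rolled sketch: rebuilding a 2-D array from nested rows means
// verifying every row has the same length before the shape is known,
// then flattening into the contiguous 1-D buffer ndarray actually stores.
fn from_nested(rows: Vec<Vec<f64>>) -> Result<Array2<f64>, String> {
    let nrows = rows.len();
    let ncols = rows.first().map_or(0, |r| r.len());
    if rows.iter().any(|r| r.len() != ncols) {
        return Err("ragged rows: not a rectangular array".into());
    }
    let flat: Vec<f64> = rows.into_iter().flatten().collect();
    Array2::from_shape_vec((nrows, ncols), flat).map_err(|e| e.to_string())
}
```

The flat {"dim", "data"} form presumably avoids that: since dim comes first, the decoder can allocate once and stream the elements straight into the buffer.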
I'm also looking for some insight into why the version field v exists.
Thanks! :)