Fix new typos found by codespell #388

Merged · 1 commit · Apr 29, 2025
8 changes: 4 additions & 4 deletions bench/ndarray/matmul.ipynb
@@ -189,7 +189,7 @@
 "source": [
 "**Key observations:**\n",
 "- Automatic chunking can optimize performance for smaller matrix sizes.\n",
-"- Choosing square chunks of 1000x1000 can achive the best performance for matrices of sizes greater than 2000x2000.\n",
+"- Choosing square chunks of 1000x1000 can achieve the best performance for matrices of sizes greater than 2000x2000.\n",
 "\n",
 "**Next experiment:**\n",
 "We will increment the chunks' size, as we have seen that better performance can be achieved with bigger chunks."
@@ -294,7 +294,7 @@
 "**Key observations:**\n",
 "- The best performance is achieved for the biggest chunk size.\n",
 "- The larger the chunk size, the higher the bandwidth.\n",
-"- If the chunk size is choosen automatically, the performance is better than choosing any other chunk size. This is weird, because if choosen automatically, chunks of size 1000x1000 are choosen, which is the same size as the fixed chunks.\n",
+"- If the chunk size is chosen automatically, the performance is better than choosing any other chunk size. This is weird, because if chosen automatically, chunks of size 1000x1000 are chosen, which is the same size as the fixed chunks.\n",
 "\n",
 "**Next experiment:**\n",
 "We will increment the chunks' size again, as we have seen that better performance can be achieved with bigger chunks."
@@ -304,7 +304,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Presicion simple"
+"Precision simple"
 ]
 },
 {
@@ -517,7 +517,7 @@
 "\n",
 "**Next experiment:**\n",
 "We are going to try with the same sizes for matrices and a square chunk size of 6000 to see if it improves the performance for that last matrix size.\n",
-"We will also remove chunk sizes of 1000 and 2000, and add a chunk size wich will be the same size as the matrix."
+"We will also remove chunk sizes of 1000 and 2000, and add a chunk size which will be the same size as the matrix."
 ]
 },
 {
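The notebook text patched above compares fixed square chunks against automatic chunking for matrix multiplication. As a minimal sketch of that kind of experiment, assuming python-blosc2's `blosc2.asarray(..., chunks=...)` and `blosc2.matmul` (the 4000x4000 size and the timing harness are illustrative, not taken from the notebook):

```python
import numpy as np
import blosc2
from time import perf_counter

n = 4000
na = np.random.rand(n, n)
nb = np.random.rand(n, n)

# Compare fixed 1000x1000 chunks against automatic chunking
for label, kwargs in (("chunks=1000x1000", {"chunks": (1000, 1000)}),
                      ("automatic chunks", {})):
    a = blosc2.asarray(na, **kwargs)
    b = blosc2.asarray(nb, **kwargs)
    t0 = perf_counter()
    c = blosc2.matmul(a, b)  # result is a compressed NDArray
    print(f"{label}: {perf_counter() - t0:.2f} s, chunks={a.chunks}")
```

Printing `a.chunks` in both runs makes it easy to check the notebook's observation that the automatic choice can coincide with the fixed 1000x1000 shape while still performing differently.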
3 changes: 2 additions & 1 deletion src/blosc2/lazyexpr.py
@@ -1481,7 +1481,8 @@ def reduce_slices(  # noqa: C901
         iter_disk = all_ndarray and any_persisted
         # Experiments say that iter_disk is faster than the regular path for reductions
         # even when all operands are in memory, so no need to check any_persisted
-        # New benchs are saying the contrary (> 10% slower), so this needs more investigation
+        # New benchmarks are saying the contrary (> 10% slower), so this needs more
+        # investigation
         # iter_disk = all_ndarray
     else:
         # WebAssembly does not support threading, so we cannot use the iter_disk option
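The comment fixed above concerns an internal fast path (`iter_disk`) for reductions whose benefit seems to depend on whether operands are persisted. A hedged sketch of how one might probe this from user code, assuming eager reductions on lazy expressions as documented for python-blosc2 (the file name `a.b2nd` and the array size are made up for illustration):

```python
import numpy as np
import blosc2
from time import perf_counter

shape = (4000, 4000)
na = np.linspace(0, 1, num=shape[0] * shape[1]).reshape(shape)

a_mem = blosc2.asarray(na)                               # in-memory operand
a_disk = blosc2.asarray(na, urlpath="a.b2nd", mode="w")  # persisted operand

for label, a in (("memory", a_mem), ("disk", a_disk)):
    t0 = perf_counter()
    res = (a + a).sum()  # reduction over a lazy expression
    print(f"{label}: sum={res:.3f}, {perf_counter() - t0:.3f} s")
```

Timing the same reduction with in-memory and on-disk operands is roughly the comparison the conflicting benchmarks in the comment refer to.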