#attention

  1. aha

    aha model inference library, now supporting Qwen2.5VL, MiniCPM4, VoxCPM, Qwen3VL, DeepSeek-OCR, Hunyuan-OCR, PaddleOCR-VL, VoxCPM1.5, and RMBG2.0

    v0.1.6 #model-inference #qwen2 #deepseek-ocr #save-dir #attention #candle
  2. p7m-phone

    API for managing phone services. This is the API of the service at P7M that manages phone services. Attention: this API will probably still change a lot in the future; it's not at all stable yet.

    v0.7.1 550 #queue-api #api-service #caller #phone-number #delete #alarm #block-id #block-api #p7m #attention
  3. ruvector-attention

    Attention mechanisms for ruvector - geometric, graph, and sparse attention

    v0.1.0 #vector-search #transformer #gnn #machine-learning #attention
  4. rkllm-rs

    Rust FFI bindings for rkllm

    v0.1.13 #bindings #config #rk3588 #attention #cross #artificial-intelligence
  5. micro_cartan_attn

    Complete Cartan matrix attention mechanisms with proper Lie algebra structures

    v0.2.0 #neural-network #orthogonal #attention #cartan-matrix
  6. ruvector-gnn-wasm

    WebAssembly bindings for RuVector GNN with tensor compression and differentiable search

    v0.1.2 #compression #differentiable-search #wasm-bindings #gnn #tensor #hierarchical #cosine-similarity #neural-network #embedding #attention
  7. cubek

    CubeCL Kernels

    v0.1.0-pre.1 #cubecl #convolution #kernel #multi-platform #quantization #attention #matmul
  8. http-request-derive

    Use derive to create HTTP requests

    v0.4.0 300 #query-string #serde #struct #sent #build #attention
  9. burn_attention

    Flash Attention v3 implementation for the Burn framework (see the scaled dot-product attention sketch after this list)

    v0.1.0 #attention #burn-framework #wgpu #flash #cubecl #softmax #causal #masking
  10. zeta-salience

    Salience analysis engine for intelligent token prioritization in LLM inference

    v0.1.0 #llm-inference #llm #attention #salience #prioritization
  11. http-request-derive-client-reqwest

    Use derive to create HTTP requests - reqwest client

    v0.2.0 250 #http-request #http-client #attention
  12. entrenar-distill

    End-to-end knowledge distillation CLI

    v0.1.0 #training #entrenar #distillation #end-to-end #model #progressive #attention #model-parameters #lora #student
  13. cubecl-attention

    CubeCL Attention Kernels Engine

    v0.9.0-pre.5 1.2K #cubecl #kernel #attention #engine #gpgpu
  14. cubek-attention

    CubeK: Attention Kernels

    v0.1.0-pre.1 #kernel #cube-k #attention #cubecl
  15. mux-radix-tree

    A full-featured radix tree implementation

    v0.1.0 #radix-tree #hash-map #deref-mut #vector #list #attention #test-coverage #quite #benchmarked #modeled
  16. edgegpt

    Reserved stub crate for edgegpt

    v0.1.0 #reserved #stub #stub-for-edgegpt #building-block #embedding #attention #norm
  17. nbml

    Machine Learning Primitives

    v0.1.7 #machine-learning #primitive #nn #rl #attention #sgd #envs #ppo #sac #optim
  18. http-request-derive-client

    Use derive to create HTTP requests - client interface

    v0.1.0 270 #http-request #http-client #interface #attention
  19. candle-ext

    An extension library to Candle that provides PyTorch functions not currently available in Candle

    v0.1.7 #candle #pytorch #extension #tensor #devices #dot-product #outer #d-type #attention #eye
  20. speakeasy-protos-tokio-latest

    Proto definitions for the speakeasy Rust SDK; you're probably looking for the speakeasy-rust-sdk package

    v0.2.0 #rust-sdk #speakeasy-rust-sdk #define #looking #proto #attention
  21. speakeasy-protos-tokio-02

    Proto definitions for the speakeasy Rust SDK; you're probably looking for the speakeasy-rust-sdk package

    v0.2.0 #rust-sdk #speakeasy-rust-sdk #looking #looking-for-speakeasy-rust-sdk #proto #attention
  22. cronitor

    Make cron jobs but SIMPLER

    v0.1.0 #cron-job #run-time #cron-expression #proc-macro #funny #attention
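
Several of the crates above (burn_attention, cubecl-attention, cubek-attention, ruvector-attention) implement variants of the same core operation, scaled dot-product attention: softmax(Q·Kᵀ / sqrt(d_k))·V. As a rough reference point only, here is a minimal, dependency-free Rust sketch of that operation; it is not the API of any listed crate, and production kernels (Flash Attention in particular) compute the same result in tiles without ever materializing the full score matrix.

```rust
// Minimal sketch of scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
// Dependency-free and deliberately unoptimized; matrices are row-major Vec<Vec<f32>>.

fn matmul(a: &[Vec<f32>], b: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let (n, k, m) = (a.len(), b.len(), b[0].len());
    let mut out = vec![vec![0.0; m]; n];
    for i in 0..n {
        for p in 0..k {
            for j in 0..m {
                out[i][j] += a[i][p] * b[p][j];
            }
        }
    }
    out
}

fn transpose(a: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let (n, m) = (a.len(), a[0].len());
    (0..m).map(|j| (0..n).map(|i| a[i][j]).collect()).collect()
}

// Numerically stable row-wise softmax (subtract the row max before exponentiating).
fn softmax_rows(a: &mut [Vec<f32>]) {
    for row in a.iter_mut() {
        let max = row.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
        let mut sum = 0.0;
        for x in row.iter_mut() {
            *x = (*x - max).exp();
            sum += *x;
        }
        for x in row.iter_mut() {
            *x /= sum;
        }
    }
}

/// attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
fn attention(q: &[Vec<f32>], k: &[Vec<f32>], v: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let d_k = q[0].len() as f32;
    let mut scores = matmul(q, &transpose(k));
    for row in scores.iter_mut() {
        for x in row.iter_mut() {
            *x /= d_k.sqrt();
        }
    }
    softmax_rows(&mut scores);
    matmul(&scores, v)
}

fn main() {
    // Two query tokens attending over three key/value tokens, head dim 4, value dim 2.
    let q = vec![vec![0.1, 0.2, 0.3, 0.4], vec![0.5, 0.1, 0.0, 0.2]];
    let k = vec![
        vec![0.3, 0.1, 0.2, 0.0],
        vec![0.0, 0.4, 0.1, 0.3],
        vec![0.2, 0.2, 0.2, 0.2],
    ];
    let v = vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![0.5, 0.5]];
    println!("{:?}", attention(&q, &k, &v)); // 2x2 output, one row per query token
}
```

The O(n²) score matrix built here is exactly what tiled/fused kernels are designed to avoid; the sketch is only meant to pin down the math the listed crates accelerate.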