Currently developing db_core - the world's first enterprise AI database with native MLX integration for Apple Silicon.
- 37% faster than Pinecone for vector search
- 11x faster than Neo4j for graph operations
- 76% energy savings vs GPU-based solutions
- 3,320 ops/s bulk document processing on Apple Silicon
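Throughput figures like the ops/s numbers above can be measured with a simple timing harness. The sketch below is a minimal, hypothetical example of how such a number is produced; `process` is a placeholder workload, not db_core's actual API:

```python
import time


def process(doc: dict) -> int:
    # Placeholder workload standing in for a real bulk-insert call.
    return len(doc["text"])


def ops_per_second(docs: list[dict]) -> float:
    """Time a batch of operations and report throughput in ops/s."""
    start = time.perf_counter()
    for doc in docs:
        process(doc)
    elapsed = time.perf_counter() - start
    # Guard against a timer resolution of zero on trivially small batches.
    return len(docs) / elapsed if elapsed > 0 else float("inf")


docs = [{"text": f"document {i}"} for i in range(10_000)]
print(f"{ops_per_second(docs):,.0f} ops/s")
```

For meaningful numbers, run several warm-up passes first and report the median of multiple timed runs rather than a single measurement.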
🔥 Core: MLX • Apple Silicon • Python • Swift
🗄️ Databases: VectorDB • GraphDB • Knowledge Graphs
🧠 AI/ML: Neural Networks • Logic Tensor Networks • Causal Inference
🏗️ Architecture: Multi-Tenant • Unified Memory • Zero-ETL
⚡ Performance: Apple Neural Engine • MLX Optimization
- 🔬 MLX Ecosystem Pioneer: Contributing to commercial MLX development
- 🏢 Enterprise AI: Building production-ready UnifiedDB for Apple Silicon
- 🇦🇹 Austrian Deep Tech: AWS PreSeed Deep Tech applicant
- 🌍 Open Source: Contributing MLX database primitives to the community
| Operation | db_core (MLX) | Competition | Advantage |
|---|---|---|---|
| Vector Search | 823 ops/s | 400-600 ops/s | +37-106% |
| Graph Traversal | 1,668 ops/s | 150 ops/s | +1,012% |
| Neural Embeddings | 16,575 ops/s | 2,000 ops/s | +729% |
| Memory Footprint | 2.7 MB/1k docs | 4-6 MB/1k docs | 1.5-2.2x smaller |
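The vector-search row above refers to similarity scans over document embeddings. As a rough illustration of the underlying operation, here is a brute-force cosine top-k search written with NumPy, whose array API MLX's `mlx.core` closely mirrors (NumPy is used here so the sketch runs off Apple Silicon; the function name and shapes are illustrative, not db_core's API):

```python
import numpy as np


def top_k_cosine(query: np.ndarray, corpus: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k corpus rows most similar to query (cosine)."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q                  # one matmul scores every document
    return np.argsort(-scores)[:k]  # highest similarity first


rng = np.random.default_rng(0)
corpus = rng.standard_normal((1_000, 64)).astype(np.float32)
query = corpus[42] + 0.01 * rng.standard_normal(64).astype(np.float32)
print(top_k_cosine(query, corpus, k=3))  # row 42 should rank first
```

On Apple Silicon, swapping `numpy` for `mlx.core` keeps the same code shape while the matmul runs on the unified-memory GPU, which is where speedups over CPU-bound brute-force scans come from.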
- 💼 [LinkedIn](https://linkedin.com/in/alexander-fischer-aa8626210) - Professional updates and insights
- 🐦 [Twitter/X](https://x.com/AndrewDeWitt88) - MLX community engagement and technical discussions
- 📧 [Email] [email protected] - Enterprise partnerships and collaboration
- 🌐 [Website](https://theseus.at) - Project documentation and demos
⭐ Check out my pinned repositories for MLX database innovations!
💡 "Making Apple Silicon the future of Enterprise AI, one MLX optimization at a time."