diff --git a/docs/docs.json b/docs/docs.json
index c4b9d24025..0c208e1815 100644
--- a/docs/docs.json
+++ b/docs/docs.json
@@ -13,9 +13,9 @@
{
"group": "Get Started",
"pages": [
- "docs/get_started/introduction",
- "docs/get_started/Flash_device",
- "getstartedwithomi"
+ "docs/get-started/introduction",
+ "docs/get-started/flash-device",
+ "get-started-with-omi"
]
},
{
@@ -24,32 +24,32 @@
{
"group": "Apps",
"pages": [
- "docs/developer/apps/Introduction",
- "docs/developer/apps/PromptBased",
- "docs/developer/apps/Integrations",
- "docs/developer/apps/AudioStreaming",
- "docs/developer/apps/Import",
- "docs/developer/apps/Submitting",
- "docs/developer/apps/Oauth",
- "docs/developer/apps/Notifications"
+ "docs/developer/apps/introduction",
+ "docs/developer/apps/prompt-based",
+ "docs/developer/apps/integrations",
+ "docs/developer/apps/audio-streaming",
+ "docs/developer/apps/import",
+ "docs/developer/apps/submitting",
+ "docs/developer/apps/oauth",
+ "docs/developer/apps/notifications"
]
},
- "docs/developer/AppSetup",
+ "docs/developer/app-setup",
{
"group": "Backend",
"pages": [
- "docs/developer/backend/Backend_Setup",
- "docs/developer/backend/backend_deepdive",
- "docs/developer/backend/StoringMemory",
+ "docs/developer/backend/backend-setup",
+ "docs/developer/backend/backend-deepdive",
+ "docs/developer/backend/storing-memory",
"docs/developer/backend/transcription",
- "docs/developer/backend/memory_embeddings",
+ "docs/developer/backend/memory-embeddings",
"docs/developer/backend/postprocessing"
]
},
{
"group": "Firmware",
"pages": [
- "docs/developer/firmware/Compile_firmware"
+ "docs/developer/firmware/compile-firmware"
]
},
{
@@ -57,19 +57,19 @@
"pages": [
"docs/developer/sdk/sdk",
"docs/developer/sdk/python",
- "docs/developer/sdk/ReactNative",
+ "docs/developer/sdk/react-native",
"docs/developer/sdk/swift"
]
},
- "docs/developer/Protocol",
- "docs/developer/Contribution",
- "docs/developer/MCP",
+ "docs/developer/protocol",
+ "docs/developer/contribution",
+ "docs/developer/mcp",
{
"group": "Audio & Testing",
"pages": [
- "docs/developer/savingaudio",
- "docs/developer/AudioStreaming",
- "docs/developer/DevKit2Testing"
+ "docs/developer/saving-audio",
+ "docs/developer/audio-streaming",
+ "docs/developer/devkit2-testing"
]
}
]
diff --git a/docs/docs/assembly/Build_the_device.mdx b/docs/docs/assembly/Build_the_device.mdx
deleted file mode 100644
index ecfcab81cd..0000000000
--- a/docs/docs/assembly/Build_the_device.mdx
+++ /dev/null
@@ -1,111 +0,0 @@
----
-title: "Building the Device"
-description: "Follow this step-by-step guide to build your own OMI device"
----
-
-## Assembly Instructions[](#assembly-instructions "Direct link to Assembly Instructions")
-
-### **Step 0: Prepare the Components**[](#step-0-prepare-the-components "Direct link to step-0-prepare-the-components")
-
-1. Ensure you've purchased all required components from the [Buying Guide](https://docs.omi.me/docs/assembly/Buying_Guide).
-2. Download and print the case using the provided `.stl` file: [Case Design](https://github.com/BasedHardware/omi/tree/main/omi/hardware/triangle%20v1).
- * If you don't have access to a 3D printer, use a 3D printing service or check [Makerspace](https://makerspace.com/) for printing locations.
-
-
-
-***
-
-### **Step 1: Cut the Black Wire**[](#step-1-cut-the-black-wire "Direct link to step-1-cut-the-black-wire")
-
-Cut the black wire approximately **2/3" from the base**.
-
-
-
-***
-
-### **Step 2: Strip the Wire Ends**[](#step-2-strip-the-wire-ends "Direct link to step-2-strip-the-wire-ends")
-
-Remove a small portion of insulation from both ends of the cut wire using a wire stripper.
-
-* Use the *28 AWG* notch for best results.
-
-
-
-***
-
-### **Step 3: Solder the Components**[](#step-3-solder-the-components "Direct link to step-3-solder-the-components")
-
-Follow the soldering diagram to connect the battery to the board.
-
-
-
-***
-
-### **Step 4: Secure the Switch**[](#step-4-secure-the-switch "Direct link to step-4-secure-the-switch")
-
-Insert the switch securely into the battery connector.
-
-  
-
-***
-
-### **Step 5: Assemble the Battery and Board**[](#step-5-assemble-the-battery-and-board "Direct link to step-5-assemble-the-battery-and-board")
-
-Place the battery and board into the 3D-printed case.
-
-
-
-***
-
-### **Step 6: Insert the Switch**[](#step-6-insert-the-switch "Direct link to step-6-insert-the-switch")
-
-Position the switch into the notch next to the USB-C slot.
-
-
-
-***
-
-### **Step 7: Manage the Wires**[](#step-7-manage-the-wires "Direct link to step-7-manage-the-wires")
-
-Twist the longer red wire gently to help organize it within the case.
-
-
-
-***
-
-### **Step 8: Curl the Wire**[](#step-8-curl-the-wire "Direct link to step-8-curl-the-wire")
-
-Carefully curl the twisted wire and place it to the side to ensure it fits within the case.
-
-
-
-***
-
-### **Step 9: Attach the Lid**[](#step-9-attach-the-lid "Direct link to step-9-attach-the-lid")
-
-Align the lid with the case using the ridges as a guide and snap it into place.
-
-
-
-***
-
-### **Step 10: Secure the Case**[](#step-10-secure-the-case "Direct link to step-10-secure-the-case")
-
-Apply even pressure around the edges to ensure the seams snap securely into place.
-
-
-
-***
-
-### Charging Instructions[](#charging-instructions "Direct link to Charging Instructions")
-
-**Important:**
-
-* OMI can only charge when it is powered on.
-* The device will not charge if powered off.
-
-***
-
-## 🎉 Congratulations\
-
-You now have a fully assembled and functional OMI device! Enjoy exploring its features and capabilities.
diff --git a/docs/docs/assembly/build-the-device.mdx b/docs/docs/assembly/build-the-device.mdx
new file mode 100644
index 0000000000..3b0fe63bc9
--- /dev/null
+++ b/docs/docs/assembly/build-the-device.mdx
@@ -0,0 +1,90 @@
+---
+title: "Building the Device"
+description: "Follow this step-by-step guide to build your own OMI device"
+---
+
+## Assembly Instructions
+
+<Steps>
+ <Step title="Step 0: Prepare the Components">
+ 1. Ensure you've purchased all required components from the [Buying Guide](https://docs.omi.me/docs/assembly/Buying_Guide).
+ 2. Download and print the case using the provided `.stl` file: [Case Design](https://github.com/BasedHardware/omi/tree/main/omi/hardware/triangle%20v1).
+ * If you don't have access to a 3D printer, use a 3D printing service or check [Makerspace](https://makerspace.com/) for printing locations.
+
+
+
+ </Step>
+ <Step title="Step 1: Cut the Black Wire">
+ Cut the black wire approximately **2/3" from the base**.
+
+
+
+ </Step>
+ <Step title="Step 2: Strip the Wire Ends">
+ Remove a small portion of insulation from both ends of the cut wire using a wire stripper.
+
+ * Use the *28 AWG* notch for best results.
+
+
+
+ </Step>
+ <Step title="Step 3: Solder the Components">
+ Follow the soldering diagram to connect the battery to the board.
+
+
+
+ </Step>
+ <Step title="Step 4: Secure the Switch">
+ Insert the switch securely into the battery connector.
+
+
+
+
+
+ </Step>
+ <Step title="Step 5: Assemble the Battery and Board">
+ Place the battery and board into the 3D-printed case.
+
+
+
+ </Step>
+ <Step title="Step 6: Insert the Switch">
+ Position the switch into the notch next to the USB-C slot.
+
+
+
+ </Step>
+ <Step title="Step 7: Manage the Wires">
+ Twist the longer red wire gently to help organize it within the case.
+
+
+
+ </Step>
+ <Step title="Step 8: Curl the Wire">
+ Carefully curl the twisted wire and place it to the side to ensure it fits within the case.
+
+
+
+ </Step>
+ <Step title="Step 9: Attach the Lid">
+ Align the lid with the case using the ridges as a guide and snap it into place.
+
+
+
+ </Step>
+ <Step title="Step 10: Secure the Case">
+ Apply even pressure around the edges to ensure the seams snap securely into place.
+
+
+
+ </Step>
+</Steps>
+### Charging Instructions
+
+**Important:**
+* OMI can only charge when it is powered on.
+* The device will not charge if powered off.
+
+## Congratulations!
+
+You now have a fully assembled and functional OMI device! Enjoy exploring its features and capabilities.
diff --git a/docs/docs/assembly/Buying_Guide.mdx b/docs/docs/assembly/buying-guide.mdx
similarity index 96%
rename from docs/docs/assembly/Buying_Guide.mdx
rename to docs/docs/assembly/buying-guide.mdx
index fc6e4e0ed2..33842def6a 100644
--- a/docs/docs/assembly/Buying_Guide.mdx
+++ b/docs/docs/assembly/buying-guide.mdx
@@ -12,9 +12,7 @@ description: "Please note that availability and prices may vary by region."
| **Wires** | - [Various options on Amazon US](https://www.amazon.com/dp/B09X4629C1) | **Varies** |
| **Case** | - [3D-print design available on GitHub](https://github.com/BasedHardware/Omi/tree/main/Omi/hardware/triangle%20v1) | **Varies** |
-***
-
-## Notes:[](#notes "Direct link to Notes:")
+## Notes:
1. The provided links are third-party providers; we do not guarantee product availability or quality.
2. Pricing is approximate and may vary depending on your location and shipping fees.
diff --git a/docs/docs/assembly/introduction.mdx b/docs/docs/assembly/introduction.mdx
index 458b2f5772..29c0fd9eae 100644
--- a/docs/docs/assembly/introduction.mdx
+++ b/docs/docs/assembly/introduction.mdx
@@ -3,39 +3,31 @@ title: "Build Your Own OMI Device"
description: "As an open-source community, we empower enthusiasts to create their own OMI devices. This guide focuses on building the **Developer Kit 1**, as the **Developer Kit 2** relies on a custom PCB and is not suited for DIY assembly."
---
-***
-
-## Is This Guide for You?[](#is-this-guide-for-you "Direct link to Is This Guide for You?")
+## Is This Guide for You?
Building your own OMI device can be a rewarding experience but is best suited for individuals with **advanced knowledge** of soldering and PCBs. If you're confident in your skills and ready for the challenge, this guide will help you get started.
-***
-
-## Is It Cheaper?[](#is-it-cheaper "Direct link to Is It Cheaper?")
+## Is It Cheaper?
While building your own OMI device is a great learning experience, it may not always be significantly cheaper than purchasing one directly from our [Shop](https://omi.me). The cost depends on the availability of parts in your region, shipping fees, and access to specialized tools.
If cost savings are your main goal, consider comparing the price of parts and tools against the price in our shop before deciding.
-***
+## Requirements
-## Requirements[](#requirements "Direct link to Requirements")
-
-### Skills Needed:[](#skills-needed "Direct link to Skills Needed:")
+### Skills Needed:
* Soldering (both through-hole and surface-mount components)
* Basic understanding of PCB design and electronics
* Familiarity with microcontrollers
-### Tools & Materials:[](#tools--materials "Direct link to Tools & Materials:")
+### Tools & Materials:
* Soldering iron and solder
* Fine-tip tweezers
* Enclosure (3D printed)
-***
-
-## Important Notes[](#important-notes "Direct link to Important Notes")
+## Important Notes
* **Safety First:** Always follow safety precautions when working with electronics and soldering equipment.
* **Custom PCB for Developer Kit 2:** Due to its reliance on a custom PCB, Developer Kit 2 cannot be built without specialized manufacturing.
diff --git a/docs/docs/developer/AudioStreaming.mdx b/docs/docs/developer/AudioStreaming.mdx
deleted file mode 100644
index 588067c31f..0000000000
--- a/docs/docs/developer/AudioStreaming.mdx
+++ /dev/null
@@ -1,53 +0,0 @@
----
-title: Real-Time Audio Streaming
-description: ''
----
-
-# Streaming Real-Time Audio From Device to Anywhere
-
-Omi allows you to stream audio bytes from your DevKit1 or DevKit2 directly to your backend or any other service, enabling you to perform various analyses and store the data. You can also define how frequently you want to receive the audio bytes.
-
-## Step 1: Create an Endpoint
-
-Create an endpoint (webhook) that can receive and process the data sent by our backend. Our backend will make a POST request to your webhook with sample_rate and uid as query parameters. The request from our backend will look like this:
-
-`POST /your-endpoint?sample_rate=16000&uid=user123`
-
-The data sent is of type octet-stream, which is essentially a stream of bytes. You can either create your own endpoint or use the example provided below. Once it's ready and deployed, proceed to the next step.
-
-Note: The sample rate refers to the audio samples captured per second. DevKit1 (v1.0.4 and above) and DevKit2 record audio at a sample rate of 16,000 Hz, while DevKit1 with v1.0.2 records at a sample rate of 8,000 Hz. The uid represents the unique ID assigned to the user in our system.
-
-## Step 2: Add the Endpoint to Developer Settings
-1. Open the Omi App on your device.
-2. Go to Settings and click on Developer Mode.
-3. Scroll down until you see Realtime audio bytes, and set your endpoint there.
-4. In the Every x seconds field, define how frequently you want to receive the bytes. For example, if you set it to 10 seconds, you will receive the audio bytes every 10 seconds.
-
-That's it! You should now see audio bytes arriving at your webhook. The audio bytes are raw, but you can save them as audio files by adding a WAV header to the accumulated bytes.
-
-Check out the example below to see how you can save the audio bytes as audio files in Google Cloud Storage using the audio streaming feature.
-
-## Example: Saving Audio Bytes as Audio Files in Google Cloud Storage
-1. Create a Google Cloud Storage bucket and set the appropriate permissions. You can follow the steps mentioned [here](https://docs.omi.me/docs/developer/savingaudio) up to step 5.
-2. Fork the example repository from [github.com/mdmohsin7/omi-audio-streaming](https://github.com/mdmohsin7/omi-audio-streaming).
-3. Clone the repository to your local machine.
-4. Deploy it to any of your preferred cloud providers like GCP, AWS, DigitalOcean, or run it locally (you can use Ngrok for local testing). The repository includes a Dockerfile for easy deployment.
-5. While deploying, ensure the following environment variables are set:
-- `GOOGLE_APPLICATION_CREDENTIALS_JSON`: Your GCP credentials, encoded in base64.
-- `GCS_BUCKET_NAME`: The name of your GCP storage bucket.
-6. Once the deployment is complete, set the endpoint in the Developer Settings of the Omi App under Realtime audio bytes. The endpoint should be the URL where you deployed the example + `/audio`.
-7. You should now see audio files being saved in your GCP bucket every x seconds, where x is the value you set in the `Every x seconds` field.
-
-## Contributing 🤝
-
-We welcome contributions from the open source community! Whether it's improving documentation, adding new features, or reporting bugs, your input is valuable. Check out our [Contribution Guide](https://docs.omi.me/developer/Contribution/) for more information.
-
-## Support 🆘
-
-If you're stuck, have questions, or just want to chat about Omi:
-
-- **GitHub Issues: 🐛** For bug reports and feature requests
-- **Community Forum: 💬** Join our [community forum](https://discord.gg/omi) for discussions and questions
-- **Documentation: 📚** Check out our [full documentation](https://docs.omi.me/) for in-depth guides
-
-Happy coding! 💻 If you have any questions or need further assistance, don't hesitate to reach out to our community.
diff --git a/docs/docs/developer/MCP.mdx b/docs/docs/developer/MCP.mdx
index 95c709bb3f..6942958078 100644
--- a/docs/docs/developer/MCP.mdx
+++ b/docs/docs/developer/MCP.mdx
@@ -3,8 +3,6 @@ title: "Model Context Protocol"
description: "A Model Context Protocol server for Omi interaction and automation. This server provides tools to read, search, and manipulate Memories and Conversations."
---
-import { AccordionGroup, Accordion } from 'mintlify-components';
-
## Configuration
### Usage with Claude Desktop
@@ -12,61 +10,56 @@ import { AccordionGroup, Accordion } from 'mintlify-components';
Add this to your `claude_desktop_config.json`:
-
-
-When using [uv](https://docs.astral.sh/uv/) no specific installation is needed.
-
-We will use [uvx](https://docs.astral.sh/uv/guides/tools/) to directly run *mcp-server-omi*.
-
-| If having issues instead of `"command": "uvx"`, put your whole package path (`which uvx`), then `"command": "$path"`.
-
-```json
-"mcpServers": {
- "omi": {
- "command": "uvx",
- "args": ["mcp-server-omi"]
- }
-}
-```
-
-
-
-
-
-Install docker, https://orbstack.dev/ is great.
-
-```json
-"mcpServers": {
- "omi": {
- "command": "docker",
- "args": ["run", "--rm", "-i", "josancamon19/mcp-server-omi"]
- }
-}
-```
-
-
-
-
-Requires python >= 3.11.6.
-- Check `python --version`, and `brew list --versions | grep python` (you might have other versions of python installed)
-- Get the path of the python version (`which python`) or with brew
-
-```json
-"mcpServers": {
- "omi": {
- "command": "/opt/homebrew/bin/python3.12",
- "args": ["-m", "mcp_server_omi"]
- }
-}
-```
-
+<Accordion title="uvx">
+ When using [uv](https://docs.astral.sh/uv/), no separate installation is needed.
+
+ We will use [uvx](https://docs.astral.sh/uv/guides/tools/) to run *mcp-server-omi* directly.
+
+ > If you run into issues, replace `"command": "uvx"` with the full path to the `uvx` binary (find it with `which uvx`).
+
+ ```json
+ "mcpServers": {
+   "omi": {
+     "command": "uvx",
+     "args": ["mcp-server-omi"]
+   }
+ }
+ ```
+
+ </Accordion>
+ <Accordion title="Docker">
+ Install Docker ([OrbStack](https://orbstack.dev/) is a good option), then use:
+
+ ```json
+ "mcpServers": {
+   "omi": {
+     "command": "docker",
+     "args": ["run", "--rm", "-i", "josancamon19/mcp-server-omi"]
+   }
+ }
+ ```
+
+ </Accordion>
+ <Accordion title="Python">
+ Requires Python >= 3.11.6.
+ - Check `python --version` and `brew list --versions | grep python` (you may have multiple Python versions installed)
+ - Get the path of the Python version you want (`which python`, or via brew)
+
+ ```json
+ "mcpServers": {
+   "omi": {
+     "command": "/opt/homebrew/bin/python3.12",
+     "args": ["-m", "mcp_server_omi"]
+   }
+ }
+ ```
+ </Accordion>
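Each of the snippets above is a fragment that belongs under the top-level `mcpServers` key of `claude_desktop_config.json`. For orientation, a complete file using the uvx variant might look like this (a sketch; your config may already contain other servers alongside `omi`):

```json
{
  "mcpServers": {
    "omi": {
      "command": "uvx",
      "args": ["mcp-server-omi"]
    }
  }
}
```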
### Examples (langchain, openai Agents, dspy)
https://github.com/BasedHardware/omi/tree/main/mcp/examples
-
### Tools
1. `get_memories`
- Retrieve a list of user memories
@@ -111,13 +104,13 @@ https://github.com/BasedHardware/omi/tree/main/mcp/examples
You can use the MCP inspector to debug the server. For uvx installations:
-```
+```bash
npx @modelcontextprotocol/inspector uvx mcp-server-omi
```
Or if you've installed the package in a specific directory or are developing on it:
-```
+```bash
cd path/to/servers/src/omi
npx @modelcontextprotocol/inspector uv run mcp-server-omi
```
@@ -127,4 +120,4 @@ help you debug any issues.
## License
-This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
+This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
\ No newline at end of file
diff --git a/docs/docs/developer/Protocol.mdx b/docs/docs/developer/Protocol.mdx
index 978ca502d4..1134f5e378 100644
--- a/docs/docs/developer/Protocol.mdx
+++ b/docs/docs/developer/Protocol.mdx
@@ -2,15 +2,15 @@
title: "App-device protocol"
---
-## BLE Discovery[](#ble-discovery "Direct link to BLE Discovery")
+## BLE Discovery
The official app discovers the device by scanning for BLE devices with the name `Omi`.
-## BLE Services and Characteristics[](#ble-services-and-characteristics "Direct link to BLE Services and Characteristics")
+## BLE Services and Characteristics
The Omi wearable device implements several services:
-### the standard BLE [Battery Service](https://www.bluetooth.com/specifications/specs/battery-service)[](#the-standard-ble-battery-service "Direct link to the-standard-ble-battery-service")
+### the standard BLE [Battery Service](https://www.bluetooth.com/specifications/specs/battery-service)
The service uses the official UUID of 0x180F and exposes the standard Battery Level characteristic with UUID 0x2A19. The characteristic supports notification to provide regular updates of the level (this does not work with firmware 1.0.x and requires at least v1.5).
@@ -32,7 +32,7 @@ The main service has UUID of `19B10000-E8F2-537E-4F6C-D104768A1214` and has two
* Audio data with UUID of `19B10001-E8F2-537E-4F6C-D104768A1214`, used to send the audio data from the device to the app.
* Codec type with UUID of `19B10002-E8F2-537E-4F6C-D104768A1214`, determines what codec should be used to decode the audio data.
-### Codec Type[](#codec-type "Direct link to Codec Type")
+### Codec Type
The possible values for the codec type are:
@@ -44,7 +44,7 @@ The possible values for the codec type are:
Starting with version 1.0.3 of the firmware, the device default is Opus. On earlier versions it was PCM 16-bit, 8kHz, mono.
-### Audio Data[](#audio-data "Direct link to Audio Data")
+### Audio Data
The audio data is sent as notifications on the audio characteristic. The format of the data depends on the codec type. The data is split into audio packets, with each packet containing 160 samples. A packet could be sent in multiple value updates if it is larger than (negotiated BLE MTU - 3 bytes). Each value update has a three byte header:
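To make the framing concrete, here is a sketch of reassembling audio packets from BLE value updates. The exact header layout is an assumption for illustration (a little-endian uint16 packet index followed by a one-byte fragment index); confirm it against the full protocol description before relying on it.

```python
import struct

def parse_update(value: bytes):
    """Split one BLE value update into (packet_index, fragment_index, payload).

    Assumed header layout: uint16 little-endian packet counter,
    then a one-byte fragment index within the packet.
    """
    packet_index, fragment_index = struct.unpack_from("<HB", value)
    return packet_index, fragment_index, value[3:]

def reassemble(updates):
    """Group fragments by packet index and join payloads in fragment order."""
    packets = {}
    for value in updates:
        pkt, frag, payload = parse_update(value)
        packets.setdefault(pkt, {})[frag] = payload
    return {
        pkt: b"".join(frags[i] for i in sorted(frags))
        for pkt, frags in packets.items()
    }
```

Updates may arrive out of order within a packet, which is why fragments are keyed by their index and sorted before joining.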
diff --git a/docs/docs/developer/AppSetup.mdx b/docs/docs/developer/app-setup.mdx
similarity index 100%
rename from docs/docs/developer/AppSetup.mdx
rename to docs/docs/developer/app-setup.mdx
diff --git a/docs/docs/developer/apps/AudioStreaming.mdx b/docs/docs/developer/apps/AudioStreaming.mdx
deleted file mode 100644
index d41c49e3f8..0000000000
--- a/docs/docs/developer/apps/AudioStreaming.mdx
+++ /dev/null
@@ -1,59 +0,0 @@
----
-title: "Real-Time Audio Streaming"
-titleHidden: true
----
-
-# Streaming Real-Time Audio From Device to Anywhere
-
-Omi allows you to stream audio bytes from your DevKit1 or DevKit2 directly to your backend or any other service, enabling you to perform various analyses and store the data. You can also define how frequently you want to receive the audio bytes.
-
-## Step 1: Create an Endpoint
-
-Create an endpoint (webhook) that can receive and process the data sent by our backend. Our backend will make a POST request to your webhook with sample_rate and uid as query parameters. The request from our backend will look like this:
-
-`POST /your-endpoint?sample_rate=16000&uid=user123`
-
-The data sent is of type octet-stream, which is essentially a stream of bytes. You can either create your own endpoint or use the example provided below. Once it's ready and deployed, proceed to the next step.
-
-Note: The sample rate refers to the audio samples captured per second. DevKit1 (v1.0.4 and above) and DevKit2 record audio at a sample rate of 16,000 Hz, while DevKit1 with v1.0.2 records at a sample rate of 8,000 Hz. The uid represents the unique ID assigned to the user in our system.
-
-## Step 2: Add the Endpoint to Developer Settings
-1. Open the Omi App on your device.
-2. Go to Settings and click on Developer Mode.
-3. Scroll down until you see Realtime audio bytes, and set your endpoint there.
-4. In the Every x seconds field, define how frequently you want to receive the bytes. For example, if you set it to 10 seconds, you will receive the audio bytes every 10 seconds.
-
-That's it! You should now see audio bytes arriving at your webhook. The audio bytes are raw, but you can save them as audio files by adding a WAV header to the accumulated bytes.
-
-Check out the example below to see how you can save the audio bytes as audio files in Google Cloud Storage using the audio streaming feature.
-
-## Example: Saving Audio Bytes as Audio Files in Google Cloud Storage
-Step 1: Create a Google Cloud Storage bucket and set the appropriate permissions. You can follow the steps mentioned [here](https://docs.omi.me/docs/developer/savingaudio) up to step 5.
-
-Step 2: Fork the example repository from [github.com/mdmohsin7/omi-audio-streaming](https://github.com/mdmohsin7/omi-audio-streaming).
-
-Step 3: Clone the repository to your local machine.
-
-Step 4: Deploy it to any of your preferred cloud providers like GCP, AWS, DigitalOcean, or run it locally (you can use Ngrok for local testing). The repository includes a Dockerfile for easy deployment.
-
-Step 5: While deploying, ensure the following environment variables are set:
-- `GOOGLE_APPLICATION_CREDENTIALS_JSON`: Your GCP credentials, encoded in base64.
-- `GCS_BUCKET_NAME`: The name of your GCP storage bucket.
-
-Step 6: Once the deployment is complete, set the endpoint in the Developer Settings of the Omi App under Realtime audio bytes. The endpoint should be the URL where you deployed the example + `/audio`.
-
-Step 7: You should now see audio files being saved in your GCP bucket every x seconds, where x is the value you set in the `Every x seconds` field.
-
-## Contributing 🤝
-
-We welcome contributions from the open source community! Whether it's improving documentation, adding new features, or reporting bugs, your input is valuable. Check out our [Contribution Guide](https://docs.omi.me/developer/Contribution/) for more information.
-
-## Support 🆘
-
-If you're stuck, have questions, or just want to chat about Omi:
-
-- **GitHub Issues: 🐛** For bug reports and feature requests
-- **Community Forum: 💬** Join our [community forum](https://discord.gg/omi) for discussions and questions
-- **Documentation: 📚** Check out our [full documentation](https://docs.omi.me/) for in-depth guides
-
-Happy coding! 💻 If you have any questions or need further assistance, don't hesitate to reach out to our community.
diff --git a/docs/docs/developer/apps/Integrations.mdx b/docs/docs/developer/apps/Integrations.mdx
deleted file mode 100644
index 166129a5de..0000000000
--- a/docs/docs/developer/apps/Integrations.mdx
+++ /dev/null
@@ -1,204 +0,0 @@
----
-title: Developing Integration Apps for OMI
-description: "Integration apps allow OMI to interact with external services and process data in real-time. This guide will walk you through creating both Memory Creation Triggers and Real-Time Transcript Processors."
----
-
-## Types of Integration Apps
-
-### 1. 👷 Memory Creation Triggers
-
-#### Video Walkthrough
-
-
-
-**Running a FastAPI app locally, not on AWS:**
-
-
-These apps are activated when OMI creates a new memory, allowing you to process or store the memory data
-externally.
-
-[](https://youtube.com/shorts/Yv7gP3GZ0ME)
-
-#### Example Use Cases
-
-- Update project management tools with conversation summaries
-- Create a personalized social platform based on conversations and interests
-- Generate a knowledge graph of interests, experiences, and relationships
-
-### 2. 🏎️ Real-Time Transcript Processors
-
-#### Video Walkthrough:
-
-
-These apps process conversation transcripts as they occur, enabling real-time analysis and actions.
-
-[](https://youtube.com/shorts/h4ojO3WzkxQ)
-
-#### Example Use Cases
-
-- Live conversation coaching and feedback
-- Real-time web searches or fact-checking
-- Emotional state analysis and supportive responses
-
-## Creating an Integration App
-
-### Step 1: Define Your App 🎯
-
-Decide whether you're creating a Memory Creation Trigger or a Real-Time Transcript Processor, and outline its specific
-purpose.
-
-### Step 2: Set Up Your Endpoint 🔗
-
-Create an endpoint (webhook) that can receive and process the data sent by OMI. You can [create a test webhook](https://webhook-test.com/). The data structure will differ based on your
-app type:
-
-#### For Memory Creation Triggers:
-
-Your endpoint will receive the entire memory object as a JSON payload, with a `uid` as a query parameter. Here is what to
-expect:
-
-`POST /your-endpoint?uid=user123`
-
-```json
-
-{
- "id": 0,
- "created_at": "2024-07-22T23:59:45.910559+00:00",
- "started_at": "2024-07-21T22:34:43.384323+00:00",
- "finished_at": "2024-07-21T22:35:43.384323+00:00",
- "transcript_segments": [
- {
- "text": "Segment text",
- "speaker": "SPEAKER_00",
- "speakerId": 0,
- "is_user": false,
- "start": 10.0,
- "end": 20.0
- }
- // More segments...
- ],
- "photos": [],
- "structured": {
- "title": "Conversation Title",
- "overview": "Brief overview...",
- "emoji": "🗣️",
- "category": "personal",
- "action_items": [
- {
- "description": "Action item description",
- "completed": false
- }
- ],
- "events": []
- },
- "apps_response": [
- {
- "app_id": "app-id",
- "content": "App response content"
- }
- ],
- "discarded": false
-}
-```
-
-Your app should process this entire object and perform any necessary actions based on the full context of the memory.
-
-> Check the [Notion CRM Python Example](https://github.com/BasedHardware/Omi/blob/bab12a678f3cfe43ab1a7aba62645222de4378fb/apps/example/main.py#L85)
-> and it's respective JSON format [here](https://github.com/BasedHardware/Omi/blob/bab12a678f3cfe43ab1a7aba62645222de4378fb/community-plugins.json#L359).
-
-**For Real-Time Transcript Processors:**
-
-Your endpoint will receive a JSON payload containing the most recently transcribed segments, with both session_id and
-uid as query parameters. Here's the structure:
-
-`POST /your-endpoint?session_id=abc123&uid=user123`
-
-```json
-[
- {
- "text": "Segment text",
- "speaker": "SPEAKER_00",
- "speakerId": 0,
- "is_user": false,
- "start": 10.0,
- "end": 20.0
- }
- // More recent segments...
-]
-```
-
-**Key points for Real-Time Transcript Processors:**
-
-1. Segments arrive in multiple calls as the conversation unfolds.
-2. Use the session_id to maintain context across calls.
-3. Implement smart logic to avoid redundant processing.
-4. Consider building a complete conversation context by accumulating segments.
-5. Clear processed segments to prevent re-triggering on future calls.
-
-Remember to handle errors gracefully and consider performance, especially for lengthy conversations!
-
-> Check the Realtime News checker [Python Example](https://github.com/BasedHardware/omi/blob/bab12a678f3cfe43ab1a7aba62645222de4378fb/plugins/example/main.py#L100)
-> and it's respective JSON format [here](https://github.com/BasedHardware/Omi/blob/bab12a678f3cfe43ab1a7aba62645222de4378fb/community-plugins.json#L379).
-
-### Step 3: Test Your App 🧪
-
-Time to put your app through its paces! Follow these steps to test both types of integrations:
-
-1. Open the OMI app on your device.
-2. Go to Settings and enable Developer Mode.
-3. Navigate to Developer Settings.
-
-#### For Memory Creation Triggers:
-
-4. Set your endpoint URL in the "Memory Creation Webhook" field. If you don't have an endpoint yet, [create a test webhook](https://webhook-test.com/)
-5. To test without creating a new memory:
- - Go to any memory detail view.
- - Click on the top right corner (3 dots menu).
- - In the Developer Tools section, trigger the endpoint call with existing memory data.
-
-[](https://youtube.com/shorts/dYVSbEpoV0U)
-
-#### For Real-Time Transcript Processors:
-
-4. Set your endpoint URL in the "Real-Time Transcript Webhook" field.
-5. Start speaking to your device - your endpoint will receive real-time updates as you speak.
-
-[](https://youtube.com/shorts/CHz9JnOGlTQ)
-
-Your endpoints are now ready to spring into action!
-
-For **Memory Creation Triggers**, you can test with existing memories or wait for new ones to be created.
-
-For **Real-Time Processors**, simply start a conversation with OMI to see your app in action.
-
-Happy app crafting! We can't wait to see what you create! 🎉
-
-### Step 4: Submit Your App
-
-Submit your app using the Omi mobile app.
-
-The **webhook URL** should be a POST request in which the memory object is sent as a JSON payload.
-
-The **setup completed URL** is optional and should be a GET endpoint that returns `{'is_setup_completed': boolean}`.
-
-The **auth URL** is optional as well and is utilized by users to setup your app. The `uid` query paramater will be appended to this URL upon usage.
-
-The setup instructions can be either a link to instructions or text instructions for users on how to setup your app.
-
-### Setup Instructions Documentation
-
-When writing your setup instructions, consider including:
-
-1. A step-by-step setup guide
-2. Screenshots (if applicable)
-3. Authentication steps (if required)
-4. Troubleshooting tips
-
-Example structure:
-
-**Notes:**
-
-- Authentication is not needed for all apps. Include only if your app requires user-specific setup or credentials.
-- For apps without authentication, users can simply enable the app without additional steps.
-- All your README links, when the user opens them, we'll append a `uid` query parameter to it, which you can use to
- associate setup or credentials with specific users.
diff --git a/docs/docs/developer/apps/Introduction.mdx b/docs/docs/developer/apps/Introduction.mdx
index b827ef3c73..697da433db 100644
--- a/docs/docs/developer/apps/Introduction.mdx
+++ b/docs/docs/developer/apps/Introduction.mdx
@@ -44,18 +44,6 @@ Apps enable:
- Task automation and integration with third-party services
- Real-time conversation analysis and insights
-[//]: # (With apps, OMI can be transformed into specialized tools such as:)
-
-[//]: # (- A personal productivity coach that extracts action items and updates task management systems)
-
-[//]: # (- An expert in any field, providing specialized knowledge and advice)
-
-[//]: # (- A real-time language translator and cultural advisor)
-
-[//]: # (- A personal CRM that analyzes conversations and maintains relationship histories)
-
-[//]: # (- A health and fitness tracker that interprets discussions about diet and exercise)
-
Apps allow developers to tap into OMI's conversational abilities and combine them with external data and services,
opening up a world of possibilities for AI-enhanced applications.
diff --git a/docs/docs/developer/apps/Notifications.mdx b/docs/docs/developer/apps/Notifications.mdx
index dad92fff07..f043d45cf3 100644
--- a/docs/docs/developer/apps/Notifications.mdx
+++ b/docs/docs/developer/apps/Notifications.mdx
@@ -3,9 +3,9 @@ Title: "Sending Notifications with OMI"
description: "Learn how to send notifications to OMI users from your applications, including direct text notifications and best practices for implementation."
---
-## Types of Notifications 📬
+## Types of Notifications
-### 1. 📱 Direct Text Notifications
+### 1. Direct Text Notifications
Direct text notifications allow you to send immediate messages to specific OMI users. This is useful for alerts, updates, or responses to user actions.
@@ -19,113 +19,115 @@ Direct text notifications allow you to send immediate messages to specific OMI u
## Implementing Notifications 🛠️
-### Step 1: Set Up Authentication 🔑
-
-Before sending notifications, you'll need:
-
-1. Your OMI App ID (`app_id`)
-2. Your OMI App Secret (API Key)
-
-Store these securely as environment variables:
-```bash
-OMI_APP_ID=your_app_id_here
-OMI_APP_SECRET=your_app_secret_here
-```
-
-### Step 2: Configure Your Endpoint 🔌
-
-#### Base URL and Endpoint
-
-```markdown
-* **Method:** `POST`
-* **URL:** `/v2/integrations/{app_id}/notification`
-* **Base URL:** `api.omi.me`
-```
-
-#### Required Headers
-
-```markdown
-* **Authorization:** `Bearer `
-* **Content-Type:** `application/json`
-* **Content-Length:** `0`
-```
-
-#### Query Parameters
-
-```markdown
-* `uid` (string, **required**): The target user's OMI ID
-* `message` (string, **required**): The notification text
-```
-
-### Step 3: Implement the Code 💻
-
-Here's a complete Node.js implementation:
-
-```javascript
-const https = require('https');
-
-/**
- * Sends a direct notification to an Omi user.
- * @param {string} userId - The Omi user's unique ID
- * @param {string} message - The notification text
- * @returns {Promise