KiPAM is an AI-based project manager that helps you structure and streamline the process of setting up and working on a new project, based on the principles of the "passionate project planning" framework.
It uses a chat bot with a highly detailed chat prompt, which you can find in the `prompts`
directory. After each session, you have to execute the `update_life_context.py`
script, which takes the latest chat history out of the `conversations`
folder and puts all discussed details of the project into the `profiles/Project KiPAM Context.md`
file, which is then used during the next conversation.
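
For orientation, here is a minimal sketch of the file flow described above. It is not the actual `update_life_context.py` from this repository (which extracts the discussed project details rather than copying the raw chat); it assumes that saved chat protocols land as Markdown files in the `conversations` folder:

```python
# Simplified sketch of the file flow behind update_life_context.py.
# NOT the real script: it merely appends the newest chat protocol to the
# context file, while the actual script extracts the project details.
from pathlib import Path

CONVERSATIONS = Path("conversations")
CONTEXT_FILE = Path("profiles") / "Project KiPAM Context.md"

def update_context() -> None:
    # Pick the most recently saved chat protocol (assumed to be Markdown).
    latest = max(CONVERSATIONS.glob("*.md"), key=lambda p: p.stat().st_mtime)
    # Fold its content into the running project context,
    # which the next conversation will be primed with.
    with CONTEXT_FILE.open("a", encoding="utf-8") as ctx:
        ctx.write("\n\n---\n\n" + latest.read_text(encoding="utf-8"))

if __name__ == "__main__":
    update_context()
```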
This section describes how you can set up the chat bot. You will need:
- This GitHub repository, forked and cloned to your local hard drive
- Either a working Ollama instance, or an API key for OpenAI / Anthropic
- Obsidian with the BMO Chatbot community plugin as the chat bot front end
- Python 3
Just fork and clone this repository and make sure you have a properly configured AI at hand. KiPAM runs with a locally installed Ollama
instance or with remote APIs to access the OpenAI, Gemini, and Anthropic AIs.
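
If you go the local route, it's worth verifying that the Ollama server is reachable before you start a chat. Here is a minimal check, assuming Ollama's default port 11434 and its standard `/api/tags` endpoint (this helper is not part of the repository):

```python
# Minimal reachability check for a local Ollama instance (illustration
# only, not part of this repository). Assumes the default port 11434.
import json
import urllib.request

def ollama_available(host: str = "http://localhost:11434") -> bool:
    """Return True if Ollama answers on its /api/tags endpoint."""
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=2) as resp:
            models = json.load(resp).get("models", [])
        print(f"Ollama is up, {len(models)} model(s) installed.")
        return True
    except OSError:
        return False

if __name__ == "__main__":
    if not ollama_available():
        print("No local Ollama found; configure a remote API key instead.")
```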
You need a working Python environment in order to execute the Python script.
You also need Obsidian installed and configured on your local system. Please also install the BMO Chatbot community plugin into Obsidian.
After you've set up Obsidian and the BMO Chatbot community plugin, you can open a new BMO chat window and start your conversation with the AI prompt.
Once you're done discussing your new project, just type `/save`
in the chat window to save the chat protocol.
Then execute the Python script (`python update_life_context.py`) to update the current context file, which acts as the brain and memory of your chat. After that, you can continue the discussion in a new session.
The chat bot should always remind you of the state of the project.
This is a collection of the trial-and-error findings from our experiments:
- The "Editor System Role" in the BMO settings needs to be empty.
- The output quality of llama3.2 is insufficient.
  Unfortunately, this combination with a local AI running via Ollama
  isn't suitable for what we wanted to achieve: this kind of chat bot requires a huge amount of memory to maintain a large context of everything discussed so far, and that didn't work as expected.
- The reactions of the chat bot are not reproducible.
- The reactions of the local AI are not (always) reproducible.
This experiment is now closed.