
docker-compose file error #1284


Closed · 202252197 opened this issue Feb 11, 2025 · 5 comments
Labels: bug (Something isn't working)

Comments

@202252197 commented Feb 11, 2025

```
ERROR An error occurred: 1 error(s) decoding:

* 'services[wren-ai-service].ports[0]' expected a map, got 'string'
```
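
For context, this decoding error points at the ports entry of the wren-ai-service in the docker-compose file. WrenAI builds that entry from environment variables, so an incomplete or missing docker/.env (see the comments below) is a common trigger. A hedged sketch of the relevant shape; the variable names here are assumptions, check docker/docker-compose.yaml in the repo for the real ones:

```yaml
# Hypothetical excerpt modeled on WrenAI's compose file.
services:
  wren-ai-service:
    ports:
      # Short syntax: each entry should resolve to one "HOST:CONTAINER" string.
      # If the variables below are unset, the entry degenerates and the
      # compose loader can fail to decode it, as in the error above.
      - ${AI_SERVICE_FORWARD_PORT}:${WREN_AI_SERVICE_PORT}
```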
202252197 added the bug label Feb 11, 2025
@cyyeh (Member) commented Feb 11, 2025

@202252197 hi, have you tried launching Wren AI using the release artifact?

Please check the official installation method here: https://docs.getwren.ai/oss/installation#using-wren-ai-launcher

@Ahmed-ao commented

I used the link provided above to install Wren AI and got the same bug. It occurs when you try to use your own local model instead of the default GPT API.

@joanteixi commented Feb 25, 2025

I guess you didn't create the .env file with all the parameters. Use this file as an example: https://github.com/Canner/WrenAI/blob/main/docker/.env.example
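
In practice that usually means copying the example file and filling in every value before bringing the stack up; a minimal sketch, assuming you start from the repo root:

```sh
cd docker
cp .env.example .env
# edit .env and set every variable the compose file references (API keys, ports, versions)
docker compose --env-file .env up -d
```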

@Ahmed-ao commented

I used your .env file and it worked. However, there is a problem with the YAML config file; it doesn't connect to the local model. I'm trying to use deepseek-r1:14b locally with Ollama, and I followed the instructions provided here: https://docs.getwren.ai/oss/installation/custom_llm

YAML config:

```yaml
type: llm
provider: litellm_llm
timeout: 600
models:
  - model: openai/deepseek-r1:14b
    api_base: http://docker.host.internal:11434/v1
    api_key_name: LLM_OLLAMA_API_KEY
    kwargs:
      temperature: 0.8
      n: 1
      # for better consistency of llm response
      seed: 0
      max_tokens: 4096
      response_format:
        type: text

---
type: embedder
provider: litellm_embedder
models:

---
type: engine
provider: wren_ui
endpoint: http://localhost:3000

---
type: engine
provider: wren_ibis
endpoint: http://localhost:8000
source: bigquery
manifest: ''         # base64 encoded string of the MDL
connection_info: ''  # base64 encoded string of the connection info

---
type: engine
provider: wren_engine
endpoint: http://localhost:8080
manifest: ''

---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 768
timeout: 120
recreate_index: true

---
type: pipeline
pipes:
  - name: deepseek_pipline
    llm: litellm_llm.openai/deepseek-r1:14b
    embedder: litellm_embedder.openai/nomic-embed-text
    # other pipeline configurations
  - name: db_schema_indexing
    embedder: litellm_embedder.openai/nomic-embed-text
    document_store: qdrant
  - name: historical_question_indexing
    embedder: litellm_embedder.openai/nomic-embed-text
    document_store: qdrant
  - name: table_description_indexing
    embedder: litellm_embedder.openai/nomic-embed-text
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.openai/deepseek-r1:14b
    embedder: litellm_embedder.openai/nomic-embed-text
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: litellm_embedder.openai/nomic-embed-text
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: sql_correction
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: followup_sql_generation
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: sql_summary
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: sql_answer
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: sql_breakdown
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: sql_expansion
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: sql_explanation
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: sql_regeneration
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: semantics_description
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: relationship_recommendation
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.openai/deepseek-r1:14b
    embedder: litellm_embedder.openai/nomic-embed-text
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: chart_generation
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: chart_adjustment
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: intent_classification
    llm: litellm_llm.openai/deepseek-r1:14b
    embedder: litellm_embedder.openai/nomic-embed-text
    document_store: qdrant
  - name: data_assistance
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: litellm_embedder.openai/nomic-embed-text
  - name: sql_pairs_deletion
    document_store: qdrant
    embedder: litellm_embedder.openai/nomic-embed-text
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: litellm_embedder.openai/nomic-embed-text
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: preprocess_sql_data
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: sql_executor
    engine: wren_ui
  - name: sql_question_generation
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: sql_generation_reasoning
    llm: litellm_llm.openai/deepseek-r1:14b

---
settings:
  host: 127.0.0.1
  port: 5556
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  query_cache_maxsize: 1000
  allow_using_db_schemas_without_pruning: false
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: INFO
  development: false
```
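
One gap stands out in the config above: the embedder document has an empty models list, even though the pipes reference litellm_embedder.openai/nomic-embed-text. A minimal sketch of what that document would probably need, mirroring the LLM entry (the api_key_name is a hypothetical placeholder; note also that the working config below uses host.docker.internal rather than docker.host.internal):

```yaml
type: embedder
provider: litellm_embedder
models:
  - model: openai/nomic-embed-text                  # must match the litellm_embedder.* references in pipes
    api_base: http://host.docker.internal:11434/v1  # Docker's alias for the host machine
    api_key_name: EMBEDDER_OLLAMA_API_KEY           # hypothetical; Ollama does not validate the key
    timeout: 120
```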

@paopa (Contributor) commented Mar 3, 2025

Hi @Ahmed-ao, here is my config.yaml for the Ollama model and embedder. I think you can base yours on it and modify it for your setup.

```yaml
models:
- api_base: http://host.docker.internal:11434/
  kwargs:
    n: 1
    temperature: 0
  model: ollama/phi4
provider: litellm_llm
timeout: 120
type: llm
---
models:
- api_base: http://host.docker.internal:11434/
  model: ollama/nomic-embed-text
  timeout: 120
provider: litellm_embedder
type: embedder
```
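
Note that these entries use litellm's ollama/ prefix (Ollama's native API) rather than the openai/ prefix pointing at an OpenAI-compatible /v1 endpoint. If you adopt this config, the pipe definitions have to reference the same aliases; a sketch using two of the pipes from the config above:

```yaml
type: pipeline
pipes:
  - name: sql_generation
    llm: litellm_llm.ollama/phi4    # alias = provider prefix + model name from the llm document
    engine: wren_ui
  - name: db_schema_indexing
    embedder: litellm_embedder.ollama/nomic-embed-text
    document_store: qdrant
```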

cyyeh closed this as completed May 29, 2025