
Commit fbde275

Trying to rebase with main and commit the files again

Signed-off-by: Vineeth Kalluru <[email protected]>
1 parent 8448d8d

File tree

8 files changed: +1213 additions, −361 deletions

Lines changed: 108 additions & 32 deletions

```diff
@@ -1,46 +1,122 @@
-# macOS system files
+# Misc
+config_examples.yml
+env.sh
+frontend/
+prompts.md
+
+# Python
+__pycache__/
+*.py[cod]
+*$py.class
+*.so
+.Python
+*.egg
+*.egg-info/
+dist/
+build/
+*.whl
+pip-wheel-metadata/
 .DS_Store
-.DS_Store?
-._*
-.Spotlight-V100
-.Trashes
-ehthumbs.db
-Thumbs.db
 
-# Database and vector store files
-database/
-*.db
-*.sqlite3
+# Virtual environments
+.venv/
+venv/
+ENV/
+env/
+
+# IDEs and Editors
+.vscode/
+.idea/
+*.swp
+*.swo
+*~
+.DS_Store
+
+# Testing
+.pytest_cache/
+.coverage
+htmlcov/
+.tox/
+.hypothesis/
+
+# Jupyter Notebook
+.ipynb_checkpoints/
+*.ipynb_checkpoints/
 
-# Output and generated files
+# Output and Data Directories
 output_data/
-moment/
-readmes/
-*.html
-*.csv
-*.npy
+eval_output/
+example_eval_output/
+output/
+results/
+logs/
 
-# Python package metadata
-src/**/*.egg-info/
-*.egg-info/
+# Database files
+*.db
+*.sqlite
+*.sqlite3
+database/*.db
+database/*.sqlite
 
-# Environment files (if they contain secrets)
-env.sh
+# Vector store data (ChromaDB)
+database/
+chroma_db/
+vector_store/
+vanna_vector_store/
 
-# Model files (if large/binary)
+# Model files (large binary files)
 models/*.pkl
-models/*.joblib
-models/*.model
+models/*.h5
+models/*.pt
+models/*.pth
+models/*.ckpt
+*.pkl
+*.h5
+*.pt
+*.pth
+moment/
 
-# Logs
-*.log
-logs/
+# Data files (CSV, JSON, etc. - be selective)
+*.csv
+*.json
+!training_data.json
+!vanna_training_data.yaml
+!config*.json
+!config*.yaml
+!config*.yml
+!pyproject.toml
+!package.json
+
+# Frontend build artifacts
+frontend/node_modules/
+frontend/dist/
+frontend/build/
+frontend/.next/
+frontend/out/
+
+# Environment and secrets
+.env
+.env.local
+.env.*.local
+*.secret
+secrets/
+credentials/
 
 # Temporary files
 *.tmp
 *.temp
-.pytest_cache/
-__pycache__/
+*.log
+*.cache
+
+# OS specific
+Thumbs.db
+Desktop.ini
+
+# Experiment tracking
+mlruns/
+wandb/
 
-# dot env
-mydot.env
+# Documentation builds
+docs/_build/
+docs/.doctrees/
+site/
```
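The `*.json` / `!training_data.json` pairs in the new ignore list depend on gitignore's last-match-wins rule: a later `!` pattern re-includes files that an earlier pattern excluded. A minimal Python sketch of that rule (simplified — real gitignore also handles directory scoping, anchoring, and `**`):

```python
from fnmatch import fnmatch

def is_ignored(path: str, patterns: list[str]) -> bool:
    """Last matching pattern wins; a leading '!' negates (re-includes)."""
    ignored = False
    for pat in patterns:
        negated = pat.startswith("!")
        if fnmatch(path, pat.lstrip("!")):
            ignored = not negated
    return ignored

patterns = ["*.json", "!training_data.json"]
print(is_ignored("data.json", patterns))           # True: matches *.json
print(is_ignored("training_data.json", patterns))  # False: re-included by !
```

Order matters: putting `!training_data.json` before `*.json` would leave the file ignored, which is why the negations follow the blanket patterns in the committed file.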

industries/predictive_maintenance_agent/README.md

Lines changed: 48 additions & 4 deletions

````diff
@@ -180,6 +180,21 @@ Now install the PDM workflow:
 uv pip install -e .
 ```
 
+#### Installation Options
+
+**Base Installation** (default - includes ChromaDB + SQLite):
+```bash
+uv pip install -e .
+```
+
+**Optional Database Support:**
+- PostgreSQL: `uv pip install -e ".[postgres]"`
+- MySQL: `uv pip install -e ".[mysql]"`
+- All databases: `uv pip install -e ".[all-databases]"`
+
+**Optional Vector Store:**
+- Elasticsearch: `uv pip install -e ".[elasticsearch]"`
+
 ### [Optional] Verify if all prerequisite packages are installed
 ```bash
 uv pip list | grep -E "nvidia-nat|nvidia-nat-ragaai|nvidia-nat-phoenix|vanna|chromadb|xgboost|pytest|torch|matplotlib"
@@ -320,6 +335,31 @@ INFO: Uvicorn running on http://localhost:8000 (Press CTRL+C to quit)
 
 During startup, you'll see Vanna training logs as the SQL agent automatically loads the domain knowledge from `vanna_training_data.yaml` (as described in Section 6).
 
+### Start Modern Web UI (Recommended)
+
+We now provide a **custom modern web interface** inspired by the NVIDIA AIQ Research Assistant design! This UI offers a superior experience compared to the generic NeMo-Agent-Toolkit-UI.
+
+**In a new terminal**, navigate to the frontend directory and start the UI:
+
+```bash
+cd frontend
+npm install  # First time only
+npm start
+```
+
+The UI will be available at `http://localhost:3000`.
+
+**Features of the Modern UI:**
+- 🎨 Clean, professional NVIDIA-branded design
+- 📊 Embedded visualization display for plots and charts
+- 🎯 Quick-start example prompts for common queries
+- ⚙️ Configurable settings panel
+- 🌓 Dark/Light theme support
+- 📱 Fully responsive mobile design
+- 🔄 Real-time streaming responses
+
+See `frontend/README.md` for detailed documentation.
+
 ### Start Code Execution Sandbox
 
 The code generation assistant requires a standalone Python sandbox that can execute the generated code. This step starts that sandbox.
@@ -443,7 +483,9 @@ def your_custom_utility(file_path: str, param: int = 100) -> str:
 4. **Consistent Interface**: All utilities return descriptive success messages
 5. **Documentation**: Use `utils.show_utilities()` to discover available functions
 
-### Setup Web Interface
+### Alternative: Generic NeMo-Agent-Toolkit UI
+
+If you prefer the generic NeMo Agent Toolkit UI instead of our custom interface:
 
 ```bash
 git clone https://github.com/NVIDIA/NeMo-Agent-Toolkit-UI.git
@@ -459,6 +501,8 @@ The UI is available at `http://localhost:3000`
 - Configure theme and WebSocket URL as needed
 - Check "Enable intermediate results" and "Enable intermediate results by default" if you prefer to see all agent calls while the workflow runs
 
+**Note:** The custom modern UI (described above) provides better visualization embedding, domain-specific examples, and a more polished experience tailored for predictive maintenance workflows.
+
 ## Example Prompts
 
 Test the system with these prompts:
@@ -487,7 +531,7 @@ Retrieve and detect anomalies in sensor 4 measurements for engine number 78 in t
 
 **Workspace Utilities Demo**
 ```
-Retrieve ground truth RUL values and time in cycles from FD001 train dataset. Apply piecewise RUL transformation with MAXLIFE=100. Finally, Plot a line chart of the transformed values across time.
+Retrieve RUL values and time in cycles for engine unit 24 from the FD001 train dataset. Use the piecewise RUL transformation code utility to transform the ground truth RUL values with MAXLIFE=100. Finally, plot a comparison line chart of the RUL values and their transformed values across time.
 ```
 
 *This example demonstrates how to discover and use workspace utilities directly. The system will show available utilities and then apply the RUL transformation using the pre-built, reliable utility functions.*
@@ -496,9 +540,9 @@ Retrieve ground truth RUL values and time in cycles from FD001 train dataset. Ap
 ```
 Perform the following steps:
 
-1. Retrieve the time in cycles, all sensor measurements, and ground truth RUL values for engine unit 24 from the FD001 train dataset.
+1. Retrieve the time in cycles, all sensor measurements, and ground truth RUL values, partitioned by unit number, for engine unit 24 from the FD001 train dataset.
 2. Use the retrieved data to predict the Remaining Useful Life (RUL).
-3. Use the piecewise RUL transformation code utility to apply piecewise RUL transformation only to the observed RUL column.
+3. Use the piecewise RUL transformation code utility to apply piecewise RUL transformation only to the observed RUL column, with MAXLIFE of 100.
 4. Generate a plot that compares the transformed RUL values and the predicted RUL values across time.
 ```
 ![Prediction Example](imgs/test_prompt_3.png)
````
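The example prompts above repeatedly reference the piecewise RUL transformation with `MAXLIFE=100`. On the C-MAPSS (FD001) datasets this transform conventionally caps the ground-truth RUL at a ceiling, since degradation is not observable early in an engine's life. A sketch of the common definition — the repo's workspace utility may differ in details:

```python
MAXLIFE = 100  # ceiling used in the README's example prompts

def piecewise_rul(rul_values, maxlife=MAXLIFE):
    """Cap each ground-truth RUL value at `maxlife` (piecewise-linear RUL)."""
    return [min(r, maxlife) for r in rul_values]

print(piecewise_rul([160, 120, 100, 40, 5]))  # → [100, 100, 100, 40, 5]
```

The resulting target is flat at `maxlife` early in the run and decreases linearly toward zero near failure, which is the comparison the plotting prompts ask for.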

industries/predictive_maintenance_agent/configs/config-reasoning.yml

Lines changed: 31 additions & 28 deletions

```diff
@@ -15,24 +15,24 @@
 
 general:
   use_uvloop: true
-  # telemetry:
-  #   logging:
-  #     console:
-  #       _type: console
-  #       level: INFO
-  #     file:
-  #       _type: file
-  #       path: "pdm.log"
-  #       level: DEBUG
-  #   tracing:
-  #     phoenix:
-  #       _type: phoenix
-  #       endpoint: http://localhost:6006/v1/traces
-  #       project: pdm-test
-  #     catalyst:
-  #       _type: catalyst
-  #       project: "pdm-test"
-  #       dataset: "pdm-dataset"
+  telemetry:
+    logging:
+      console:
+        _type: console
+        level: INFO
+      file:
+        _type: file
+        path: "pdm.log"
+        level: DEBUG
+    tracing:
+      phoenix:
+        _type: phoenix
+        endpoint: http://localhost:6006/v1/traces
+        project: pdm-demo-day
+      catalyst:
+        _type: catalyst
+        project: "pdm-demo-day"
+        dataset: "pdm-demo-day"
 
 llms:
   # SQL query generation model
@@ -42,13 +42,11 @@ llms:
 
   # Data analysis and tool calling model
   analyst_llm:
-    _type: nim
-    model_name: "qwen/qwen2.5-coder-32b-instruct"
-    # _type: openai
-    # model_name: "gpt-4.1-mini"
+    _type: openai
+    model_name: "gpt-4.1-mini"
 
   # Python code generation model
-  coding_llm: 
+  coding_llm:
     _type: nim
     model_name: "qwen/qwen2.5-coder-32b-instruct"
 
@@ -66,15 +64,20 @@ embedders:
   # Text embedding model for vector database operations
   vanna_embedder:
     _type: nim
-    model_name: "nvidia/nv-embed-v1"
+    model_name: "nvidia/llama-3.2-nv-embedqa-1b-v2"
 
 functions:
   sql_retriever:
     _type: generate_sql_query_and_retrieve_tool
     llm_name: sql_llm
     embedding_name: vanna_embedder
+    # Vector store configuration
+    vector_store_type: chromadb  # Optional, chromadb is default
     vector_store_path: "database"
-    db_path: "database/nasa_turbo.db"
+    # Database configuration
+    db_type: sqlite  # Optional, sqlite is default
+    db_connection_string_or_path: "database/nasa_turbo.db"
+    # Output configuration
     output_folder: "output_data"
     vanna_training_data_path: "vanna_training_data.yaml"
 
@@ -128,8 +131,8 @@ functions:
       plot_line_chart,
       plot_comparison,
      anomaly_detection,
-      plot_anomaly,
-      code_generation_assistant
+      plot_anomaly
+      # code_generation_assistant
     ]
     parse_agent_response_max_retries: 2
     system_prompt: |
@@ -154,7 +157,7 @@ functions:
       Executing step: the step you are currently executing from the plan along with any instructions provided
       Thought: describe how you are going to execute the step
       Final Answer: the final answer to the original input question including the absolute file paths of the generated files with
-      `/Users/vikalluru/Documents/GenerativeAIExamples/industries/manufacturing/predictive_maintenance_agent/output_data/` prepended to the filename.
+      `/Users/vikalluru/Documents/GenerativeAIExamples/industries/predictive_maintenance_agent/output_data/` prepended to the filename.
 
       **FORMAT 3 (when using a tool)**
       Input plan: Summarize all the steps in the plan.
```
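The new `db_type` / `db_connection_string_or_path` pair suggests the SQL retriever now derives a database URL from two config values instead of a hard-coded SQLite path (SQLAlchemy is added as a dependency in this same commit). A hypothetical sketch of that mapping — `make_db_url` is illustrative, not the tool's actual internal API:

```python
def make_db_url(db_type: str, conn: str) -> str:
    """Map config values to a SQLAlchemy-style URL (illustrative only)."""
    if db_type == "sqlite":
        # A bare file path becomes a sqlite URL; full URLs pass through.
        return conn if conn.startswith("sqlite:") else f"sqlite:///{conn}"
    # For other backends the config value is assumed to already be a full
    # connection string, e.g. "postgresql://user:pw@host/db".
    return conn

print(make_db_url("sqlite", "database/nasa_turbo.db"))
# → sqlite:///database/nasa_turbo.db
```

With `db_type: sqlite` and the path above, this reproduces the behavior of the old `db_path` key while leaving room for the optional PostgreSQL/MySQL backends.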

industries/predictive_maintenance_agent/pyproject.toml

Lines changed: 31 additions & 0 deletions

```diff
@@ -11,6 +11,7 @@ dependencies = [
     "pydantic ~= 2.10.0, <2.11.0",
     "vanna==0.7.9",
     "chromadb",
+    "sqlalchemy>=2.0.0",
     "xgboost",
     "matplotlib",
     "torch",
@@ -23,6 +24,36 @@ classifiers = ["Programming Language :: Python"]
 authors = [{ name = "Vineeth Kalluru" }]
 maintainers = [{ name = "NVIDIA Corporation" }]
 
+[project.optional-dependencies]
+elasticsearch = [
+    "elasticsearch>=8.0.0"
+]
+postgres = [
+    "psycopg2-binary>=2.9.0"
+]
+mysql = [
+    "pymysql>=1.0.0"
+]
+sqlserver = [
+    "pyodbc>=4.0.0"
+]
+oracle = [
+    "cx_Oracle>=8.0.0"
+]
+all-databases = [
+    "psycopg2-binary>=2.9.0",
+    "pymysql>=1.0.0",
+    "pyodbc>=4.0.0",
+    "cx_Oracle>=8.0.0"
+]
+all = [
+    "elasticsearch>=8.0.0",
+    "psycopg2-binary>=2.9.0",
+    "pymysql>=1.0.0",
+    "pyodbc>=4.0.0",
+    "cx_Oracle>=8.0.0"
+]
+
 [project.entry-points.'nat.components']
 predictive_maintenance_agent = "predictive_maintenance_agent.register"
```
