
Commit 1ff562b

Merge pull request #1448 from MIT-LCP/mimiciii_postgres_concepts
Include postgres MIMIC-III concepts

2 parents: 79eb377 + f8ceb8b

File tree: 92 files changed, +13912 / -350 lines


mimic-iii/concepts/README.md

Lines changed: 17 additions & 59 deletions
@@ -10,76 +10,34 @@ You can read about cloud access to MIMIC-III, including via Google BigQuery, on
The rest of this README describes:

* [Generating the concepts in BigQuery](#generating-the-concepts-in-bigquery)
-* [Generating the concepts in PostgreSQL (\*nix/Mac OS X)](#generating-the-concepts-in-postgresql-nix-mac-os-x)
-* [Generating the concepts in PostgreSQL (Windows)](#generating-the-concepts-in-postgresql-windows)
+* [Generating the concepts in PostgreSQL](#generating-the-concepts-in-postgresql)

## Generating the concepts in BigQuery

You do not need to generate the concepts if you are using BigQuery! They have already been generated for you. If you have access to MIMIC-III on BigQuery, look under `physionet-data.mimiciii_derived`. If you would like to generate the concepts again, for example on your own dataset, you must modify the `TARGET_DATASET` variable within the [make-concepts.sh](/concepts/make-concepts.sh) script. The script assumes you have installed and configured the [Google Cloud SDK](https://cloud.google.com/sdk/docs/install).
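For example, regenerating the concepts against a dataset of your own might look like the sketch below; the project name is a placeholder, and `TARGET_DATASET` in make-concepts.sh must already point at a dataset you can write to.

```sh
# a sketch, assuming the Google Cloud SDK is installed; "my-gcp-project" is illustrative
gcloud auth login
gcloud config set project my-gcp-project
cd mimic-iii/concepts
bash make-concepts.sh
```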

-## Generating the concepts in PostgreSQL (\*nix/Mac OS X)
+## Generating the concepts in PostgreSQL
+
+### Quickstart
+
+Go to the [concepts_postgres](../concepts_postgres) folder and run the [postgres-functions.sql](../concepts_postgres/postgres-functions.sql) and [postgres-make-concepts.sql](../concepts_postgres/postgres-make-concepts.sql) scripts, in that order.
+
+### In more detail

While the SQL scripts here are written in BigQuery's Standard SQL syntax, there are many BigQuery specific functions which do not carry over to PostgreSQL. Nevertheless, with only a few changes, the scripts can be made compatible. In order to generate the concepts on a PostgreSQL database, one must:

* create postgres functions which emulate BigQuery functions
* modify SQL scripts for incompatible syntax
* run the modified SQL scripts and direct the output into tables in the PostgreSQL database

-This can be done as follows:
-
-1. Open a terminal in the `concepts` folder.
-2. Run [postgres-functions.sql](postgres-functions.sql).
-    * e.g. `psql -f postgres-functions.sql`
-    * This script creates functions which emulate BigQuery syntax.
-3. Run [postgres_make_concepts.sh](postgres_make_concepts.sh).
-    * e.g. `bash postgres_make_concepts.sh`
-    * This file runs the scripts after applying a few regular expressions which convert table references and date calculations appropriately.
-    * This file generates all concepts on the `public` schema.
-    * Exporting DBCONNEXTRA before calling this script will add this to the connection string. For example, running `DBCONNEXTRA="user=mimic password=mimic" bash postgres_make_concepts.sh` will add these settings to all of the psql calls. (Note that "dbname" and "search_path" do not need to be set.)
-
-If you do not have access to a PostgreSQL database with MIMIC, you can read more about building the data within one in the [buildmimic/postgres](https://github.com/MIT-LCP/mimic-code/tree/main/mimic-iii/buildmimic/postgres) folder.
-
-## Generating the concepts in PostgreSQL (Windows)
-
-On Windows, it is a bit more complex to generate the concepts in the PostgreSQL database. The approach relies on using \*nix command line tools which are not available by default in a Windows installation. Instead, we have adapted the script into a `.bat` file which relies on the Windows Subsystem for Linux in order to run the shell commands. The steps are:
-
-1. Install the [Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10).
-    * If you don't have a preference, follow the steps to install a Ubuntu system. The bat file was tested with Ubuntu, though the commands should work with any flavor of \*nix since we rely on the utils rather than the kernel.
-2. Verify you can use the wsl.exe utilities in command prompt.
-    * Go to run and type `cmd`, or type "command prompt" in the search.
-    * Run `wsl.exe echo "hi"` - this should print out `hi` back to you.
-3. Change to your local folder where these concepts are stored.
-    * e.g. `cd C:\Tools\mimic-code-master\concepts`
-4. Modify the .bat file: update the `CONNSTR` and `PSQL_PATH` variables.
-    * Replace `INSERT_PASSWORD_HERE` in `CONNSTR` with your password; or remove it if you have a `.pgpass` file or other form of authentication. If you have a different username or database location, be sure to update those as well.
-    * Change `PSQL_PATH` to point to your `psql.exe` file. It is currently set to the default location for a PostgreSQL 13 installation.
-5. Run the .bat file.
-    * In the command prompt, type `postgres_make_concepts_windows.bat`
-
-The script echoes the commands and the outputs as they run. If it is running successfully, you should see a `SELECT` statement after each command, with the number of rows generated in the table.
-
-### Can I just do the above manually without WSL?
-
-Of course! And this might be more informative.
-
-First, generate the necessary functions as above, by running `postgres-functions.sql` in the SQL shell.
-Once that's done, you need to do the following text replacements in all the SQL files:
-
-1. Replace ````physionet-data.mimiciii_clinical.<table_name>````, ````physionet-data.mimiciii_derived.<table_name>````, and ````physionet-data.mimiciii_notes.<table_name>```` with just `<table_name>`.
-    * This is done by the `REGEX_SCHEMA` variable in the `postgres_make_concepts.sh` script.
-    * Ideally you should set your search path with `set search_path to public,mimiciii;`. This will create the concepts on `public`, and read data from `mimiciii`. This distinction isn't strictly necessary, but many find it useful.
-2. Replace `DATETIME_DIFF(date1, date2, DATE_PART)` with `DATETIME_DIFF(date1, date2, 'DATE_PART')`.
-    * This adds single quotes around any `DATE_PART`, which is required by PostgreSQL.
-    * This is done by the `REGEX_DATETIME_DIFF` variable in the `postgres_make_concepts.sh` script.
-3. Add a create table statement at the top of the file, e.g. if the file is named `echo_data.sql`, add `CREATE TABLE echo_data AS` at the top of the file.
-    * This is done by the `echo` calls in the shell script.
-4. Run each file individually in the order specified by the make concepts script.
-
-The above steps replicate what is done in the shell script (postgres_make_concepts.sh).
+The bash script [convert_mimiciii_concepts_bq_to_psql.sh](/convert_mimiciii_concepts_bq_to_psql.sh) has done most of this for you. To generate concepts in PostgreSQL, simply go to the [concepts_postgres](../concepts_postgres) folder and run:
+
+```sh
+\i postgres-functions.sql
+\i postgres-make-concepts.sql
+```
+
+You can also read more about building the data within PostgreSQL in the [buildmimic/postgres](https://github.com/MIT-LCP/mimic-code/tree/main/mimic-iii/buildmimic/postgres) folder.

## List of concepts

@@ -190,4 +148,4 @@ Useful snippets of SQL implementing common functions. For example, the `auroc.sq

## other-languages

-Scripts in flavours of SQL which are not necessarily compatible with PostgreSQL.
+Scripts in flavours of SQL which are not compatible with BigQuery/PostgreSQL.
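Putting the quickstart above into a concrete invocation: one possible way to run the two scripts is sketched below, assuming a local database named `mimic` with the MIMIC-III data in the `mimiciii` schema and the concepts created on `public`; adjust the connection settings to your own installation.

```sh
cd mimic-iii/concepts_postgres
# ask the server for a search_path that reads data from mimiciii and writes concepts to public
export PGOPTIONS='-c search_path=public,mimiciii'
psql 'dbname=mimic user=mimic' -f postgres-functions.sql
psql 'dbname=mimic user=mimic' -f postgres-make-concepts.sql
```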
convert_mimiciii_concepts_bq_to_psql.sh

Lines changed: 158 additions & 0 deletions
@@ -0,0 +1,158 @@

```sh
#!/bin/bash
# This shell script converts BigQuery .sql files into PostgreSQL .sql files.

# path in which we create the postgres concepts
TARGET_PATH='../concepts_postgres'
mkdir -p $TARGET_PATH

# String replacements are necessary for some queries.

# Schema replacement: change `physionet-data.<dataset>.<table>` to just <table> (with no backticks)
export REGEX_SCHEMA='s/`physionet-data.(mimiciii_clinical|mimiciii_derived|mimiciii_notes).([A-Za-z0-9_-]+)`/\2/g'
# Note that these expressions are very sensitive to changes, e.g. adding whitespace after a comma can already change the behavior.
export REGEX_DATETIME_DIFF="s/DATETIME_DIFF\(([^,]+), ?(.*), ?(DAY|MINUTE|SECOND|HOUR|YEAR)\)/DATETIME_DIFF(\1, \2, '\3')/g"
export REGEX_DATETIME_TRUNC="s/DATETIME_TRUNC\(([^,]+), ?(DAY|MINUTE|SECOND|HOUR|YEAR)\)/DATE_TRUNC('\2', \1)/g"
# Add necessary quotes to INTERVAL, e.g. "INTERVAL 5 hour" to "INTERVAL '5' hour"
export REGEX_INTERVAL="s/interval ([[:digit:]]+) (hour|day|month|year)/INTERVAL '\1' \2/gI"
# Specific queries for some problems that arose with some files.
export REGEX_INT="s/CAST\(hr AS INT64\)/CAST\(hr AS bigint\)/g"
export REGEX_ARRAY="s/GENERATE_ARRAY\(-24, CEIL\(DATETIME\_DIFF\(it\.outtime_hr, it\.intime_hr, HOUR\)\)\)/ARRAY\(SELECT \* FROM generate\_series\(-24, CEIL\(DATETIME\_DIFF\(it\.outtime_hr, it\.intime_hr, HOUR\)\)\)\)/g"
export REGEX_HOUR_INTERVAL="s/INTERVAL CAST\(hr AS INT64\) HOUR/interval \'1\' hour * CAST\(hr AS bigint\)/g"
export REGEX_SECONDS="s/SECOND\)/\'SECOND\'\)/g"
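# For example, the replacements above transform lines like these (illustrative inputs):
#   DATETIME_DIFF(outtime, intime, HOUR)         ->  DATETIME_DIFF(outtime, intime, 'HOUR')
#   DATETIME_TRUNC(charttime, DAY)               ->  DATE_TRUNC('DAY', charttime)
#   interval 6 hour                              ->  INTERVAL '6' hour
#   `physionet-data.mimiciii_clinical.icustays`  ->  icustays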

# tables we want to run before all other concepts
# usually because they are used as dependencies
DIR_AND_TABLES_TO_PREBUILD='demographics.icustay_times demographics.icustay_hours .echo_data .code_status .rrt durations.weight_durations fluid_balance.urine_output organfailure.kdigo_uo'

# tables which are written directly in postgresql and source code controlled
# this is usually because there is no trivial conversion between bq/psql syntax
DIR_AND_TABLES_ALREADY_IN_PSQL='demographics.icustay_times demographics.icustay_hours demographics.note_counts diagnosis.ccs_dx'

# tables which we want to run after all other concepts
# usually because they depend on one or more other queries
DIR_AND_TABLES_TO_SKIP=''

# First, we re-create the postgres-make-concepts.sql file.
echo "\echo ''" > $TARGET_PATH/postgres-make-concepts.sql

# Now we add some preamble for the user running the script.
echo "\echo '==='" >> $TARGET_PATH/postgres-make-concepts.sql
echo "\echo 'Beginning to create materialized views for MIMIC database.'" >> $TARGET_PATH/postgres-make-concepts.sql
echo "\echo '"'Any notices of the form "NOTICE: materialized view "XXXXXX" does not exist" can be ignored.'"'" >> $TARGET_PATH/postgres-make-concepts.sql
echo "\echo 'The scripts drop views before creating them, and these notices indicate nothing existed prior to creating the view.'" >> $TARGET_PATH/postgres-make-concepts.sql
echo "\echo '==='" >> $TARGET_PATH/postgres-make-concepts.sql
echo "\echo ''" >> $TARGET_PATH/postgres-make-concepts.sql

# ======================================== #
# === CONCEPTS WHICH WE MUST RUN FIRST === #
# ======================================== #
echo -n "Dependencies:"

# output table creation calls to the make-concepts script
echo "" >> $TARGET_PATH/postgres-make-concepts.sql
echo "-- dependencies" >> $TARGET_PATH/postgres-make-concepts.sql

for dir_and_table in $DIR_AND_TABLES_TO_PREBUILD;
do
  d=`echo ${dir_and_table} | cut -d. -f1`
  tbl=`echo ${dir_and_table} | cut -d. -f2`

  if [[ $d == '' ]]; then
    d='.'
  fi

  # make the sub-folder for postgres if it does not exist
  mkdir -p "$TARGET_PATH/${d}"

  # convert the bigquery script to psql and output it to the appropriate subfolder
  echo -n " ${d}.${tbl} .."

  # re-write the script into psql using regex
  # the if statement ensures we do not overwrite tables which are already written in psql
  if ! [[ "$DIR_AND_TABLES_ALREADY_IN_PSQL" =~ "$d.$tbl" ]]; then
    echo "-- THIS SCRIPT IS AUTOMATICALLY GENERATED. DO NOT EDIT IT DIRECTLY." > "${TARGET_PATH}/${d}/${tbl}.sql"
    echo "DROP TABLE IF EXISTS ${tbl}; CREATE TABLE ${tbl} AS " >> "${TARGET_PATH}/${d}/${tbl}.sql"
    cat "${d}/${tbl}.sql" | sed -r -e "${REGEX_ARRAY}" | sed -r -e "${REGEX_HOUR_INTERVAL}" | sed -r -e "${REGEX_INT}" | sed -r -e "${REGEX_DATETIME_DIFF}" | sed -r -e "${REGEX_DATETIME_TRUNC}" | sed -r -e "${REGEX_SCHEMA}" | sed -r -e "${REGEX_INTERVAL}" >> "${TARGET_PATH}/${d}/${tbl}.sql"
  fi

  # write out a call to this script in the make concepts file
  echo "\i ${d}/${tbl}.sql" >> $TARGET_PATH/postgres-make-concepts.sql
done
echo " done!"

# ================================== #
# === MAIN LOOP FOR ALL CONCEPTS === #
# ================================== #

# Iterate through each concept subfolder, and:
# (1) apply the above regular expressions to update the script
# (2) output to the postgres subfolder
# (3) add a line to the postgres-make-concepts.sql script to generate this table

# organfailure.kdigo_stages firstday.first_day_sofa sepsis.sepsis3 medication.vasoactive_agent medication.norepinephrine_equivalent_dose

# the order *only* matters during the conversion step because our loop is
# inserting table build commands into the postgres-make-concepts.sql file
for d in durations comorbidity demographics firstday fluid_balance sepsis diagnosis organfailure severityscores;
do
  mkdir -p "$TARGET_PATH/${d}"
  echo -n "${d}:"
  echo "" >> $TARGET_PATH/postgres-make-concepts.sql
  echo "-- ${d}" >> $TARGET_PATH/postgres-make-concepts.sql
  for fn in `ls $d`;
  do
    # only run SQL queries
    if [[ "${fn: -4}" == ".sql" ]]; then
      # table name is file name minus extension
      tbl="${fn%????}"
      echo -n " ${tbl} "

      if [[ "$DIR_AND_TABLES_TO_PREBUILD" =~ "$d.$tbl" ]]; then
        echo -n "(exists!) .."
        continue
      elif [[ "$DIR_AND_TABLES_TO_SKIP" =~ "$d.$tbl" ]]; then
        echo -n "(skipping!) .."
        continue
      else
        echo -n ".."
      fi

      # re-write the script into psql using regex
      # the if statement ensures we do not overwrite tables which are already written in psql
      if ! [[ "$DIR_AND_TABLES_ALREADY_IN_PSQL" =~ "$d.$tbl" ]]; then
        echo "-- THIS SCRIPT IS AUTOMATICALLY GENERATED. DO NOT EDIT IT DIRECTLY." > "${TARGET_PATH}/${d}/${tbl}.sql"
        echo "DROP TABLE IF EXISTS ${tbl}; CREATE TABLE ${tbl} AS " >> "${TARGET_PATH}/${d}/${tbl}.sql"
        cat "${d}/${tbl}.sql" | sed -r -e "${REGEX_ARRAY}" | sed -r -e "${REGEX_HOUR_INTERVAL}" | sed -r -e "${REGEX_INT}" | sed -r -e "${REGEX_DATETIME_DIFF}" | sed -r -e "${REGEX_DATETIME_TRUNC}" | sed -r -e "${REGEX_SCHEMA}" | sed -r -e "${REGEX_INTERVAL}" >> "${TARGET_PATH}/${d}/${fn}"
      fi

      # add statement to generate this table to make concepts script
      echo "\i ${d}/${fn}" >> ${TARGET_PATH}/postgres-make-concepts.sql
    fi
  done
  echo " done!"
done

# finally generate first_day_sofa which depends on concepts in firstday folder
echo "" >> ${TARGET_PATH}/postgres-make-concepts.sql
echo "-- final tables which were dependent on one or more prior tables" >> ${TARGET_PATH}/postgres-make-concepts.sql

echo -n "final:"
for dir_and_table in $DIR_AND_TABLES_TO_SKIP
do
  d=`echo ${dir_and_table} | cut -d. -f1`
  tbl=`echo ${dir_and_table} | cut -d. -f2`

  # make the sub-folder for postgres if it does not exist
  mkdir -p "$TARGET_PATH/${d}"

  # convert the bigquery script to psql and output it to the appropriate subfolder
  echo -n " ${d}.${tbl} .."
  if ! [[ "$DIR_AND_TABLES_ALREADY_IN_PSQL" =~ "$d.$tbl" ]]; then
    echo "-- THIS SCRIPT IS AUTOMATICALLY GENERATED. DO NOT EDIT IT DIRECTLY." > "${TARGET_PATH}/${d}/${tbl}.sql"
    echo "DROP TABLE IF EXISTS ${tbl}; CREATE TABLE ${tbl} AS " >> "${TARGET_PATH}/${d}/${tbl}.sql"
    cat "${d}/${tbl}.sql" | sed -r -e "${REGEX_ARRAY}" | sed -r -e "${REGEX_HOUR_INTERVAL}" | sed -r -e "${REGEX_INT}" | sed -r -e "${REGEX_DATETIME_DIFF}" | sed -r -e "${REGEX_DATETIME_TRUNC}" | sed -r -e "${REGEX_SCHEMA}" | sed -r -e "${REGEX_INTERVAL}" >> "${TARGET_PATH}/${d}/${tbl}.sql"
  fi
  # write out a call to this script in the make concepts file
  echo "\i ${d}/${tbl}.sql" >> $TARGET_PATH/postgres-make-concepts.sql
done
echo " done!"
```
Lines changed: 3 additions & 47 deletions
@@ -1,48 +1,4 @@
-The Clinical Classification Software (CCS) categorizes ICD-9 coded diagnoses into clinically meaningful groups. The categorization was developed by the Agency for Healthcare Research and Quality (AHRQ). More detail can be found on the AHRQ website: https://www.hcup-us.ahrq.gov/tools_software.jsp
+# ccs_dx

-This folder contains:
-
-* `ccs_diagnosis_table.sql` - Creates two tables: `ccs_single_level_dx` and `ccs_multi_level_dx`. These two tables are loaded from `ccs_single_level_dx.csv.gz` and `ccs_multi_level_dx.csv.gz`. Note that the script assumes you are using PostgreSQL v9.4 or later, and you must execute the script from this directory.
-
-## Creation of the ccs_multi_level file
-
-Download the original file from CCS:
-
-```
-wget https://www.hcup-us.ahrq.gov/toolssoftware/ccs/Multi_Level_CCS_2015.zip
-```
-
-Unzip to a folder.
-
-```
-unzip Multi_Level_CCS_2015.zip
-```
-
-Use Python to convert all apostrophes in `ccs_multi_dx_tool_2015.csv` into double quotes (the file mixed apostrophes/double quotes as field encapsulators):
-
-```python
-import pandas as pd
-df = pd.read_csv('ccs_multi_dx_tool_2015.csv.gz')
-# remove apostrophes from header names and relabel
-df.rename(columns={"'ICD-9-CM CODE'": "icd9_code", "'CCS LVL 1'": "ccs_level1", "'CCS LVL 1 LABEL'": "ccs_group1", "'CCS LVL 2'": "ccs_level2", "'CCS LVL 2 LABEL'": "ccs_group2", "'CCS LVL 3'": "ccs_level3", "'CCS LVL 3 LABEL'": "ccs_group3", "'CCS LVL 4'": "ccs_level4", "'CCS LVL 4 LABEL'": "ccs_group4", }, inplace=True)
-
-def remove_surrounding_apostrophes(x):
-    if x[0] == "'":
-        x = x[1:]
-    if x[-1] == "'":
-        x = x[:-1]
-    return x
-
-for c in df.columns:
-    df[c] = df[c].map(remove_surrounding_apostrophes)
-    idxRemove = df[c].str.strip() == ''
-    if idxRemove.any():
-        df.loc[idxRemove, c] = None
-
-# write to file
-df.to_csv('ccs_multi_dx.csv.gz', index=False, compression='gzip')
-```
-
-(above run with Python 3.7 and pandas 0.23.2).
-
-Now the SQL script can be run (`ccs_diagnosis_table.sql`). The `ccs_multi_dx.csv.gz` file generated by this process is available in the repo, so the above process is just for documentation, and does not necessarily have to be re-run.
+The `ccs_multi_dx.csv.gz` data file must be uploaded to `physionet-data.mimiciii_derived.ccs_multi_dx`.
+The data file is available in [../concepts_postgres/diagnosis](../concepts_postgres/diagnosis). The BigQuery schema definition is available in this folder as [ccs_multi_dx.json](/ccs_multi_dx.json).
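For reference, loading the CSV into BigQuery with that schema could look roughly like the sketch below; the target dataset is whatever you have write access to (only PhysioNet maintainers can load into `physionet-data` itself), and the paths assume you run it from the folder holding the two files.

```sh
# a sketch; substitute your own project/dataset as needed
bq load --source_format=CSV --skip_leading_rows=1 \
  --schema=ccs_multi_dx.json \
  mimiciii_derived.ccs_multi_dx \
  ccs_multi_dx.csv.gz
```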

mimic-iii/concepts/durations/neuroblock_dose.sql

Lines changed: 4 additions & 4 deletions
@@ -7,11 +7,11 @@ with drugmv as
(
select
icustay_id, orderid
-, rate as vaso_rate
-, amount as vaso_amount
+, rate as drug_rate
+, amount as drug_amount
, starttime
, endtime
-from inputevents_mv
+from `physionet-data.mimiciii_clinical.inputevents_mv`
where itemid in
(
222062 -- Vecuronium (664 rows, 154 infusion rows)
@@ -41,7 +41,7 @@ with drugmv as
when itemid >= 40000 then coalesce(rate, amount)
else rate end) as drug_rate
, max(amount) as drug_amount
-from inputevents_cv
+from `physionet-data.mimiciii_clinical.inputevents_cv`
where itemid in
(
30114 -- Cisatracurium (63994 rows)

mimic-iii/concepts/durations/vasopressin_dose.sql

Lines changed: 1 addition & 1 deletion
@@ -235,7 +235,7 @@ and
(
select
icustay_id, linkorderid
-, CASE WHEN valueuom = 'units/min' THEN rate*60.0 ELSE rate END as vaso_rate
+, CASE WHEN rateuom = 'units/min' THEN rate*60.0 ELSE rate END as vaso_rate
, amount as vaso_amount
, starttime
, endtime

mimic-iii/concepts/fluid_balance/crystalloid_bolus.sql

Lines changed: 3 additions & 3 deletions
@@ -11,7 +11,7 @@ with t1 as
when mv.amountuom = 'ml'
then mv.amount
else null end) as amount
-from inputevents_mv mv
+from `physionet-data.mimiciii_clinical.inputevents_mv` mv
where mv.itemid in
(
-- 225943 Solution
@@ -47,7 +47,7 @@ with t1 as
, cv.charttime
-- carevue always has units in millilitres
, round(cv.amount) as amount
-from inputevents_cv cv
+from `physionet-data.mimiciii_clinical.inputevents_cv` cv
where cv.itemid in
(
30015 -- "D5/.45NS" -- mixed colloids and crystalloids
@@ -155,4 +155,4 @@ select
, sum(amount) as crystalloid_bolus
from t2
group by t2.icustay_id, t2.charttime
-order by icustay_id, charttime;
+;
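Since these scripts now reference the fully qualified BigQuery tables, a quick way to check that they still parse against BigQuery is a dry run; a sketch, assuming the Google Cloud SDK is configured, you have MIMIC-III access on BigQuery, and you are in the mimic-iii/concepts folder:

```sh
bq query --nouse_legacy_sql --dry_run < durations/neuroblock_dose.sql
bq query --nouse_legacy_sql --dry_run < durations/vasopressin_dose.sql
bq query --nouse_legacy_sql --dry_run < fluid_balance/crystalloid_bolus.sql
```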
