Add docker setup for bulk export #41

Merged: 15 commits, Apr 15, 2024
3 changes: 2 additions & 1 deletion .env
@@ -1,8 +1,9 @@
DB_HOST = 127.0.0.1
DB_PORT = 27017
DB_NAME = bulk-export-server
BULK_BASE_URL = http://localhost:3000
HOST = localhost
PORT = 3001
PORT = 3000
EXPORT_WORKERS = 2
REDIS_HOST = localhost
REDIS_PORT= 6379
3 changes: 1 addition & 2 deletions .env.test
@@ -1,5 +1,4 @@
DB_HOST = 127.0.0.1
DB_PORT = 27017
DB_NAME = bulk-export-server-test
HOST = localhost
PORT = 3000
BULK_BASE_URL = http://localhost:3000
1 change: 1 addition & 0 deletions .prettierignore
@@ -0,0 +1 @@
coverage
27 changes: 21 additions & 6 deletions Dockerfile
@@ -1,9 +1,8 @@
FROM node:14
FROM node:18 as deps

# Create app directory
WORKDIR /usr/src/app


# Run a custom ssl_setup script if available
COPY package.json ./docker_ssl_setup.sh* ./
RUN chmod +x ./docker_ssl_setup.sh; exit 0
@@ -13,12 +12,28 @@ ENV NODE_EXTRA_CA_CERTS="/etc/ssl/certs/ca-certificates.crt"
# We're using this because root user can't run any post-install scripts
USER node
WORKDIR /home/node/app
# Copy all app files
COPY --chown=node:node . .
# Copy just the package.json and package-lock.json
COPY --chown=node:node package*.json .

# Install only runtime dependencies
RUN npm install --omit=dev

FROM node:18-slim as runner

USER node
WORKDIR /home/node/app

RUN mkdir node_modules
RUN chown node:node node_modules

# Install dependencies
RUN npm install
COPY --from=deps --chown=node:node /home/node/app/node_modules ./node_modules
COPY --chown=node:node package*.json .
COPY --chown=node:node src* ./src

# Start app
EXPOSE 3000
ENV PORT 3000
ENV REDIS_PORT 6379
ENV DB_PORT 27017
ENV HOST "0.0.0.0"
CMD [ "npm", "start" ]
26 changes: 23 additions & 3 deletions README.md
@@ -30,7 +30,7 @@
Clone the source code:

```bash
git clone https://github.com/projecttacoma/deqm-test-server.git
git clone https://github.com/projecttacoma/bulk-export-server.git
```

Install dependencies:
@@ -77,9 +77,13 @@ You should receive the output `PONG`.

### Docker

This test server can be run with Docker by calling `docker-compose up --build`.
This test server can be run with Docker by calling `docker compose up --build`.
Debugging with terminal input can be enabled by adding `stdin_open: true` and `tty: true` to the specification of the service you want to debug. You can then attach to the running container using `docker attach <containername>`. If you're unsure of the container name, use `docker ps` to list the running containers.
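For reference, those two flags sit directly on the service entry in `docker-compose.yml`. A minimal sketch, assuming you want to debug the `fhir` service defined in this repository's compose file:

```yaml
services:
  fhir:
    # keep STDIN open and allocate a pseudo-TTY so `docker attach` is interactive
    stdin_open: true
    tty: true
```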

#### Building new Docker Images

If you have permission to push to the tacoma organization on Docker Hub, run `docker-build.sh` to build a multi-platform image and push it to Docker Hub tagged as `latest`.

## Usage

Once MongoDB is running on your machine, run the `npm start` command to start up the FHIR server at `localhost:3000`. The server can also be run in "watch" mode with `npm run start:watch`.
@@ -93,9 +97,23 @@ The following `npm` commands can be used to set up the database:
- `npm run db:setup` creates collections for all the valid FHIR resource types
- `npm run db:delete` deletes all existing collections in the database
- `npm run db:reset` runs both of the above, deleting all current collections and then creating new, empty collections
- To upload all the ecqm-content-r4-2021 measure bundles, `git clone` the [ecqm-content-r4-2021 repo](https://github.com/cqframework/ecqm-content-r4-2021) into the root directory of the `deqm-test-server` repository. Run `npm run upload-bundles`. This runs a script that uploads all the measure bundle resources to the appropriate Mongo collections.
- To upload all the ecqm-content-r4-2021 measure bundles, `git clone` the [ecqm-content-r4-2021 repo](https://github.com/cqframework/ecqm-content-r4-2021) into the root directory of the `bulk-export-server` repository. Run `npm run upload-bundles`. This runs a script that uploads all the measure bundle resources to the appropriate Mongo collections.
- The full CLI signature of the `upload-bundles` script is `npm run upload-bundles [dirPath] [searchPattern]`. `dirPath` is the path to a repository containing the bundles to upload, and `searchPattern` is a regex used to filter bundle files by file name. Example: `npm run upload-bundles connectathon/fhir401/bundles/measure "^EXM124.*-bundle.json"`

### Transaction Bundle Upload

The server supports transaction bundle uploads.

- The request method must be `POST`.
- The request body must be a FHIR bundle of type `transaction`.

For ease of use, the `directory-upload.sh` script can be used to run the transaction bundle upload on an input directory. Details are as follows:

- The `-h` option can be used to view usage.
- A server URL must be supplied via the `-s` option.
- A directory path must be supplied via the `-d` option.
- The script can support nested directories (one level deep).
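To make the expected invocation concrete, here's a sketch (the paths and server URL are hypothetical; the snippet only stages a sample input directory, since an actual upload needs a running server):

```shell
# Stage a sample input directory holding one FHIR transaction bundle
mkdir -p /tmp/sample-bundles
cat > /tmp/sample-bundles/example-bundle.json <<'EOF'
{
  "resourceType": "Bundle",
  "type": "transaction",
  "entry": []
}
EOF

# Hypothetical upload run against a locally running server:
#   ./directory-upload.sh -s http://localhost:3000 -d /tmp/sample-bundles
```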

## Server Endpoints

The server supports the following endpoints:
@@ -127,7 +145,9 @@ Alternatively, a POST request (`POST [fhir base]/$export`) can be sent. The expo
For more information on the export endpoints, read this documentation on the [Export Request Flow](https://hl7.org/fhir/uv/bulkdata/export/index.html#request-flow).
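As a sketch of the kick-off step (the base URL is an assumption; the headers follow the Bulk Data spec's asynchronous request pattern):

```shell
# Build a system-level $export kick-off URL, filtered with the _type parameter
base='http://localhost:3000'
url="${base}/\$export?_type=Patient,Condition"
echo "$url"

# Hypothetical kick-off request (assumes a running server); on success the
# server replies 202 Accepted with a Content-Location header to poll:
#   curl -i -H 'Accept: application/fhir+json' -H 'Prefer: respond-async' "$url"
```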

## Supported Query Parameters

The server supports the following query parameters:

- `_type`: Filters the response to only include resources of the specified resource type(s)
- If omitted, system-level requests will return all resources supported by the server within the scope of the client authorization
- For Patient- and Group-level requests, the [Patient Compartment](https://www.hl7.org/fhir/compartmentdefinition-patient.html) is used as a point of reference for filtering the resource types that are returned.
74 changes: 74 additions & 0 deletions directory-upload.sh
@@ -0,0 +1,74 @@
#!/bin/bash

usage="
usage: $(basename "$0") command [-h] [-s] [-d] arguments...
Uploads supported resources to the bulk-export-server
Options:
-h
Displays help menu.
-s [server baseUrl]
Specifies the base URL of the FHIR server to access.
-d [path]
Provides directory or file path to parse for upload.
"

while getopts ':hs:d:' option;
do
case "$option" in
h)
echo -e "$usage"
exit 0
;;
s)
server=$OPTARG
;;
d)
directory_path=$OPTARG
;;

\?) printf "illegal option: -%s\n" "$OPTARG" 1>&2
echo "$usage" 1>&2
exit 1
;;
: )
echo "Invalid option: $OPTARG requires an argument" 1>&2
;;
esac
done

if [[ $directory_path == "" ]] ; then
echo No directory path provided. Provide directory path via the '-d' flag.
exit 1
fi

if [[ $server == "" ]] ; then
echo No server URL provided. Provide server URL via the '-s' flag.
exit 1
fi

echo Using Server URL: $server and directory path: $directory_path

upload_bundle() {
echo "Uploading resources for bundle $1"
curl_command="curl -X POST -H 'Content-Type: application/json+fhir' -d @\"$1\" $server -o /dev/null"
# execute the curl command
eval "$curl_command"

echo "Finished bundle upload."
echo ""
}

# loop over FHIR bundles in specified directory
for file_path in "$directory_path" ; do
if [[ -d $file_path ]] ; then
# recurse on directory
for f in $(find $file_path -name "*.json") ; do
upload_bundle $f
done

elif [[ -f $file_path ]] ; then
if [[ ${file_path: -5} == ".json" ]] ; then
upload_bundle $file_path
fi
fi
done
3 changes: 3 additions & 0 deletions docker-build.sh
@@ -0,0 +1,3 @@
#!/bin/bash

docker buildx build --platform linux/arm64,linux/amd64 -t tacoma/bulk-export-server:latest -f Dockerfile . --push
32 changes: 32 additions & 0 deletions docker-compose.example.yml
@@ -0,0 +1,32 @@
version: '3'

services:
fhir:
image: tacoma/bulk-export-server
depends_on:
- mongo
- redis
environment:
# Change this to the public location of bulk-export-server. This should be the FQDN and location of where the
# bulk-export container is made public to users. ex. https://abacus.example.com/bulk-export
BULK_BASE_URL: http://localhost:3000
DB_HOST: mongo
DB_NAME: bulk-export-server
REDIS_HOST: redis
EXPORT_WORKERS: 2
ports:
- '3000:3000'

mongo:
image: mongo:6.0
# uncomment the following to have access to the containerized mongo at 27018
# ports:
# - "27018:27017"
volumes:
- mongo_data:/data/db

redis:
image: redis

volumes:
mongo_data:
17 changes: 11 additions & 6 deletions docker-compose.yml
@@ -4,25 +4,30 @@ services:
fhir:
depends_on:
- mongo
- redis
build:
context: .
dockerfile: Dockerfile
environment:
BULK_BASE_URL: http://localhost:3000
DB_HOST: mongo
DB_PORT: 27017
DB_NAME: bulk-export-server
REDIS_HOST: redis
EXPORT_WORKERS: 2
ports:
- '3000:3000'
volumes:
- ./src:/usr/src/app/src
command: npm start

mongo:
image: mongo:4.4.4
ports:
- '27017'
image: mongo:6.0
# uncomment the following to have access to the containerized mongo at 27018
# ports:
# - "27018:27017"
volumes:
- mongo_data:/data/db

redis:
image: redis

volumes:
mongo_data:
4 changes: 2 additions & 2 deletions src/scripts/postTransactionBundles.js
@@ -41,7 +41,7 @@ async function main() {
.filter(file => file.startsWith('practitioner') || file.startsWith('hospital'))
.forEach(async file => {
await axios.post(
`http://${process.env.HOST}:${process.env.PORT}/`,
`${process.env.BULK_BASE_URL}/`,
JSON.parse(fs.readFileSync(path.join(directoryPath, file), 'utf8')),
{ headers: { 'Content-Type': 'application/json+fhir' } }
);
@@ -55,7 +55,7 @@ async function main() {
const promises = [];
for (const file of patientFiles) {
const fileContents = JSON.parse(fs.readFileSync(path.join(directoryPath, file), 'utf8'));
const results = axios.post(`http://${process.env.HOST}:${process.env.PORT}/`, fileContents, {
const results = axios.post(`${process.env.BULK_BASE_URL}/`, fileContents, {
headers: { 'Content-Type': 'application/json+fhir' }
});
promises.push(results);
4 changes: 2 additions & 2 deletions src/services/bulkstatus.service.js
@@ -53,7 +53,7 @@ async function checkBulkStatus(request, reply) {
error: [
{
type: 'OperationOutcome',
url: `http://${process.env.HOST}:${process.env.PORT}/${clientId}/OperationOutcome.ndjson`
url: `${process.env.BULK_BASE_URL}/${clientId}/OperationOutcome.ndjson`
}
]
})
@@ -107,7 +107,7 @@ async function getNDJsonURLs(reply, clientId) {
if (file !== 'OperationOutcome.ndjson') {
const entry = {
type: path.basename(file, '.ndjson'),
url: `http://${process.env.HOST}:${process.env.PORT}/${clientId}/${file}`
url: `${process.env.BULK_BASE_URL}/${clientId}/${file}`
};
output.push(entry);
}
11 changes: 11 additions & 0 deletions src/services/bundle.service.js
@@ -55,15 +55,23 @@
request.log.info('Base >>> Transaction/Batch Bundle Upload');
const { resourceType, type, entry: entries } = request.body;

if (resourceType !== 'Bundle') {
reply
.code(400)
.send(createOperationOutcome(`Expected 'resourceType: Bundle', but received 'resourceType: ${resourceType}'.`));
return;
}

if (!type) {
reply
.code(400)
.send(createOperationOutcome(`Expected Bundle with 'type' defined. Received Bundle with 'type' undefined.`));
return;
}
Check warning on line 69 in src/services/bundle.service.js

View workflow job for this annotation

GitHub Actions / Coverage annotations (🧪 jest-coverage-report-action)

🧾 Statement is not covered

Warning! Not covered statement

Check warning on line 69 in src/services/bundle.service.js

View workflow job for this annotation

GitHub Actions / Coverage annotations (🧪 jest-coverage-report-action)

🌿 Branch is not covered

Warning! Not covered branch
if (!['transaction', 'batch'].includes(type.toLowerCase())) {
reply
.code(400)
.send(createOperationOutcome(`Expected 'type: transaction' or 'type: batch'. Received 'type: ${type}'.`));
return;
}

const requestResults = await uploadResourcesFromBundle(type.toLowerCase(), entries, reply);
@@ -80,6 +88,9 @@
* @returns array of request results
*/
const uploadResourcesFromBundle = async (type, entries, reply) => {
// If there are no entries
if (!entries) return Promise.all([]);

const scrubbedEntries = replaceReferences(entries);
const requestsArray = scrubbedEntries.map(async entry => {
const { method } = entry.request;
15 changes: 3 additions & 12 deletions src/services/export.service.js
@@ -34,10 +34,7 @@ const bulkExport = async (request, reply) => {
systemLevelExport: true
};
await exportQueue.createJob(job).save();
reply
.code(202)
.header('Content-location', `http://${process.env.HOST}:${process.env.PORT}/bulkstatus/${clientEntry}`)
.send();
reply.code(202).header('Content-location', `${process.env.BULK_BASE_URL}/bulkstatus/${clientEntry}`).send();
}
};

@@ -82,10 +79,7 @@ const patientBulkExport = async (request, reply) => {
systemLevelExport: false
};
await exportQueue.createJob(job).save();
reply
.code(202)
.header('Content-location', `http://${process.env.HOST}:${process.env.PORT}/bulkstatus/${clientEntry}`)
.send();
reply.code(202).header('Content-location', `${process.env.BULK_BASE_URL}/bulkstatus/${clientEntry}`).send();
}
};

@@ -139,10 +133,7 @@ const groupBulkExport = async (request, reply) => {
patientIds: patientIds
};
await exportQueue.createJob(job).save();
reply
.code(202)
.header('Content-location', `http://${process.env.HOST}:${process.env.PORT}/bulkstatus/${clientEntry}`)
.send();
reply.code(202).header('Content-location', `${process.env.BULK_BASE_URL}/bulkstatus/${clientEntry}`).send();
}
};

2 changes: 1 addition & 1 deletion test/services/bulkstatus.service.test.js
@@ -83,7 +83,7 @@ describe('checkBulkStatus logic', () => {
expect(response.body.error).toEqual([
{
type: 'OperationOutcome',
url: `http://${process.env.HOST}:${process.env.PORT}/REQUEST_WITH_WARNINGS/OperationOutcome.ndjson`
url: `${process.env.BULK_BASE_URL}/REQUEST_WITH_WARNINGS/OperationOutcome.ndjson`
}
]);
});