Backend Stack Overview
There's a tech talk video of our backend architecture available here. The password is within our Engineering Resources Google Drive.
We use a Java Spring Boot application backed by a PostgreSQL database. The frontend and backend interact largely through GraphQL, instead of the explicitly defined REST API calls you've probably seen elsewhere. We do have some direct HTTP calls from the frontend to the backend, but these are largely for unauthenticated endpoints (the patient experience or initial application signup, for example).
We manage our database through Liquibase, which allows us to maintain version control on the database. This changelog can be found in `db.changelog.master`. Liquibase maintains its version control through checksums for each changeset, which functionally means that once changes to that file have been pushed to `main`, the changesets themselves cannot be edited. They can, however, be rolled back, so please make sure your changes have a corresponding `rollback` tag.
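For illustration, a changeset with an explicit rollback might look like the following sketch. The changeset id, author, and table name here are hypothetical, not taken from the real changelog (and note that some change types can auto-generate their own rollback; an explicit tag makes the intent unambiguous):

```xml
<!-- Hypothetical changeset: adds a column and declares how to undo it. -->
<changeSet id="20240101-add-notes-column" author="jdoe">
    <addColumn tableName="test_event">
        <column name="notes" type="varchar(500)"/>
    </addColumn>
    <!-- The rollback block tells Liquibase how to reverse this changeset
         after it has been merged to main. -->
    <rollback>
        <dropColumn tableName="test_event" columnName="notes"/>
    </rollback>
</changeSet>
```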
The easiest way to get an idea of our database tables is through Metabase. The production Metabase view limits you to non-PHI data (so no names, birth dates, phone numbers, etc) but the test Metabase account shows the full database view.
There's also an ERD available here.
This is our general flow:
Users interact with the UI, which sends a request to the backend (usually through GraphQL).
The GraphQL schema defines types, queries, and mutations for the GraphQL API, and is our coordination point with the client.
The API package tells the GraphQL package how to resolve requests, through a combination of wrapper models and specialized Spring components that implement a GraphQL resolver interface (`MutationResolver` or `DataResolver` live here).
This layer should generally only contain service calls and conversions between frontend and backend types (e.g., the frontend sends a `String` but the service layer is expecting a `Date`; that conversion can happen at this layer). Components in the API layer should not interact with Repository objects or contain any business logic. Any models defined at this layer should only be used by resolvers, generally to send data back to the frontend (e.g., `ApiTestOrder`).
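As a sketch of the kind of conversion that belongs at this layer (the class and method names here are hypothetical, not real SimpleReport code):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

// Hypothetical API-layer helper: the frontend sends a date as a String,
// the service layer expects a date type, and the conversion happens here.
class ApiDateConverter {
    private static final DateTimeFormatter FORMAT = DateTimeFormatter.ISO_LOCAL_DATE;

    static LocalDate parseDateOrThrow(String raw) {
        try {
            return LocalDate.parse(raw, FORMAT);
        } catch (DateTimeParseException e) {
            // In a real resolver this would surface as a GraphQL error.
            throw new IllegalArgumentException("Invalid date: " + raw, e);
        }
    }

    public static void main(String[] args) {
        LocalDate d = parseDateOrThrow("2023-04-01");
        System.out.println(d.getYear()); // prints 2023
    }
}
```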
Once the request is resolved at the API layer, it's sent to the service package.
This package is where all the actual business logic of the application lives. These classes will generally be making database modifications, so they will usually need to be annotated with Spring's `@Transactional`.
NOTE: `@Transactional` has some gotchas! Watch out for self-invocation in particular.
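The self-invocation gotcha in a nutshell: Spring applies `@Transactional` through a proxy, so calls that stay inside the same class bypass it. A hypothetical service (not real SimpleReport code) illustrating the trap:

```java
@Service
class PersonService {
    @Transactional
    public void updatePerson(Person p) {
        // Runs in a transaction when called from *outside* this class.
    }

    public void updateAll(List<Person> people) {
        // GOTCHA: this call goes through `this`, not the Spring proxy,
        // so @Transactional on updatePerson is silently ignored here.
        people.forEach(this::updatePerson);
    }
}
```

Annotating `updateAll` itself, or moving the transactional method into a separate bean, avoids the problem.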
Most resolvers at the API layer should only require a single service call, though of course there may be multiple sub-calls within the main function.
This layer also holds some of our third-party API calls that don't interact with the database at all. For example, the SendGrid and Experian integrations live here.
We use Hibernate/JPA to manage our database interactions. In practice, this means a lot of this layer looks like Spring auto-magic.
The database layer contains the model and repository packages.
The Hibernate/JPA persistence schema is defined in this package. Each object in this package should correspond directly to a table in the database. Each class is annotated as an `@Entity`, with the columns of the table stored as fields on the class.
There are lots of specialized annotations and logic that you'll see in these files - some columns are actually pointers to a separate table/entity, some are required, others are nullable, etc. We generally try to map these annotations onto the actual database schema, such that a required column is actually marked as such within the Java object. (You'll find out pretty quickly if you've messed anything up with the mapping, because Spring/Liquibase will vomit errors at you any time you try to use the entity.)
Most of our entities extend the same base class, `AuditedEntity`, which contains audit information common to most tables (`created_at`, `created_by`, `updated_at`, `updated_by`, `is_deleted`).
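Putting the pieces above together, an entity might be sketched like this (the table and fields are illustrative, not a real SimpleReport entity):

```java
@Entity
@Table(name = "test_order_note")
class TestOrderNote extends AuditedEntity {
    // Maps to a non-nullable column; Spring/Hibernate will complain at
    // startup if this disagrees with the actual database schema.
    @Column(nullable = false)
    private String body;

    // A column that is really a pointer to a row in another table/entity.
    @ManyToOne(optional = false)
    @JoinColumn(name = "test_order_id")
    private TestOrder order;
}
```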
The repository package is how we actually interact with objects in the database - retrieving and saving or updating them.
Remember that Hibernate auto-magic I mentioned? This is primarily where it lives.
This package primarily contains interfaces, which are wired into implementation classes by Spring Data JPA under the hood. We do have a couple of custom queries, which are more or less raw SQL queries expressed in Spring's query language.
Functionally, the above means you can define a method like `findAllByOrganization(UUID orgId)` in the `TestEvent` repository and it will return all `TestEvent`s associated with a given organization, without you needing to implement anything. Your IDE will give you suggestions on valid query methods to define.
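A repository in that style might be sketched as follows (the custom-query method is hypothetical, added only to show the annotation style):

```java
// Spring Data JPA generates the implementation at runtime;
// the method name itself defines the derived query.
interface TestEventRepository extends CrudRepository<TestEvent, UUID> {
    // Derived query: no implementation needed.
    List<TestEvent> findAllByOrganization(UUID orgId);

    // Custom query: more or less raw SQL, expressed in JPQL.
    @Query("SELECT e FROM TestEvent e WHERE e.createdAt > :since")
    List<TestEvent> findAllCreatedSince(@Param("since") Date since);
}
```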
For a detailed explanation of the GraphQL flow, please see the GraphQL wiki.
We have a number of `Controller` classes to support anonymous or unauthenticated user flows. These also live in the API layer.
These include the patient experience, organization signup, user account creation, and more. These classes each have mappings for each HTTP call they support: a `@GetMapping` for GET requests and a `@PostMapping` for POST requests. These controllers typically have some non-Okta authentication in place; the patient experience relies on unique links combined with a patient-entered date of birth, for example.
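A minimal sketch of such a controller (the route, class, and service names are hypothetical):

```java
@RestController
@RequestMapping("/patient-link")
class HypotheticalPatientLinkController {

    private final PatientLinkService patientLinkService;

    HypotheticalPatientLinkController(PatientLinkService patientLinkService) {
        this.patientLinkService = patientLinkService;
    }

    // GET: fetch data behind an unauthenticated unique link.
    @GetMapping("/{linkId}")
    public ApiPatientLink getLink(@PathVariable UUID linkId) {
        return patientLinkService.getLink(linkId);
    }

    // POST: non-Okta auth, the unique link plus a patient-entered
    // date of birth stand in for a signed-in user.
    @PostMapping("/{linkId}/verify")
    public ApiSession verify(@PathVariable UUID linkId,
                             @RequestBody LocalDate birthDate) {
        return patientLinkService.verify(linkId, birthDate);
    }
}
```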
These controllers generally call classes in the service layer. From there, they may interact with the database, or they could call third parties like Twilio.
The main backend configuration lives in `application.yaml`. The environment variables defined here may be explicitly injected using a `@Value` annotation, or they may be set up as properties automatically injected on application startup. For example, `ExperianConfiguration` uses all the environment variables defined in `application.yaml` to create a `Configuration` that is passed to `SimpleReportApplication` on startup.
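The two injection styles can be sketched like so (the property names here are made up for illustration):

```java
// Given a hypothetical entry in application.yaml:
//   simple-report:
//     demo-mode: true
//     experian:
//       client-id: "abc123"

// Style 1: explicit injection of a single value with @Value.
@Value("${simple-report.demo-mode}")
private boolean demoMode;

// Style 2: a properties class bound automatically on startup.
@ConfigurationProperties(prefix = "simple-report.experian")
public class ExperianProperties {
    private String clientId; // bound from simple-report.experian.client-id
    // getters/setters omitted
}
```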
There are multiple application files that build on top of each other to determine the environment variables in a given environment. For example, `application-azure-prod`, `application-okta-prod`, and `application-prod` are used in our production environment.
Finally, we have defined a number of custom Spring profiles to run the app in different scenarios. These profiles are defined in `BeanProfiles` and utilized in the various application yaml files. In some instances, the profiles indicate which code is to be run; for instance, `DemoOktaRepository` is only used if the `no-okta-mgmt` profile is enabled.
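The profile mechanism can be sketched like so (assuming, hypothetically, that `BeanProfiles` exposes the profile name as a constant):

```java
// Only instantiated when the no-okta-mgmt profile is active, e.g. via
// spring.profiles.active in one of the application yaml files.
@Profile(BeanProfiles.NO_OKTA_MGMT)
@Service
class DemoOktaRepository implements OktaRepository {
    // Stand-in behavior used when real Okta management is disabled.
}
```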