Merge pull request #352 from dfpc-coe/esri-layers
ESRI Line & Polygon Sinks [WIP]
ingalls authored Oct 10, 2024
2 parents a79e48f + d2f7a9a commit cd34c86
Showing 38 changed files with 2,855 additions and 298 deletions.
26 changes: 26 additions & 0 deletions .github/workflows/test.yml
@@ -30,6 +30,32 @@ jobs:
- name: Docker Data Task Test
run: docker run cloudtak-data:latest npm test

hooks:
runs-on: ubuntu-latest
if: github.event.pull_request.draft == false
timeout-minutes: 60
steps:
- uses: actions/checkout@v4
with:
ref: ${{github.event.pull_request.head.sha || github.sha}}

- uses: actions/setup-node@v4
with:
node-version: 22
registry-url: https://registry.npmjs.org/

- name: Install
working-directory: ./tasks/hooks/
run: npm install

- name: Lint
working-directory: ./tasks/hooks/
run: npm run lint

- name: Test
working-directory: ./tasks/hooks/
run: npm test

pmtiles:
runs-on: ubuntu-latest
if: github.event.pull_request.draft == false
75 changes: 52 additions & 23 deletions README.md
@@ -6,7 +6,13 @@

## Installation

Local installation can take advantage of the docker-compose workflow.
Testing locally can be done either by running the server directly (recommended for development) or
by running the provided Docker Compose services (recommended for limited testing).

Note that for full functionality, CloudTAK needs to be deployed into an AWS environment and that
many of the services it provides will initiate AWS API calls with no graceful fallback.

### Docker Compose

```
docker-compose up --build
@@ -15,6 +21,8 @@ docker-compose up --build
Once the database and API service have built, the server will start on port 5000.
In your web browser, visit `http://localhost:5000` to view the ETL UI.

### Local Development

Installation outside of the Docker environment is also fairly straightforward.
In the `./api` directory, perform the following:

@@ -31,19 +39,19 @@ npm run dev

## AWS Deployment

### Pre-Reqs
### 1. Pre-Reqs

The ETL service assumes several prerequisite dependencies are deployed before
the initial ETL deployment.
The following dependencies need to be created:

| Name | Notes |
| --------------------- | ----- |
| `coe-vpc-<name>` | VPC & networking to place tasks in - [repo](https://github.com/dfpc-coe/vpc) |
| `coe-ecs-<name>` | ECS Cluster for API Service - [repo](https://github.com/dfpc-coe/ecs) |
| `coe-ecr-etl` | ECR Repository for storing API Images - [repo](https://github.com/dfpc-coe/ecr) |
| `coe-ecr-etl-tasks` | ECR Repository for storing Task Images - [repo](https://github.com/dfpc-coe/ecr) |
| `coe-elb-access` | Centralized ELB Logs - [repo](https://github.com/dfpc-coe/elb-logs) |

An AWS ACM certificate must also be generated that covers the subdomain that CloudTAK is deployed to, as well
as the second-level wildcard. In the example below, CloudTAK is deployed to `map.example.com`. The second
@@ -55,12 +63,6 @@ IE:
*.map.example.com
```
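As a sketch, such a certificate could be requested with the AWS CLI (a hedged example: the domain is the placeholder from above, DNS validation is assumed, and your deployment may use a different validation method):

```shell
# Hypothetical sketch: request an ACM certificate covering the CloudTAK
# subdomain and its second-level wildcard. Requires AWS credentials and
# must be run in the region CloudTAK is deployed to.
aws acm request-certificate \
    --domain-name "map.example.com" \
    --subject-alternative-names "*.map.example.com" \
    --validation-method DNS
```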

### Optional Dependencies that can be deployed at any time

| Name | Notes |
| --------------------- | ----- |
| `coe-media-<name>`    | Task Definitions for Media Server Support - [repo](https://github.com/dfpc-coe/media-infra) |

**coe-ecr-etl**

Can be created using the [dfpc-coe/ecr](https://github.com/dfpc-coe/ecr) repository.
@@ -81,20 +83,27 @@ npm install
npx deploy create etl-tasks
```

### S3 Bucket Contents
### 2. Installing Dependencies

An S3 bucket will be created as part of the CloudFormation stack that contains geospatial assets
related to user files, missions, CoTs, etc. The following table is an overview of the prefixes
in the bucket and their purpose.
From the root directory, install the deploy dependencies

| Prefix | Description |
| ------ | ----------- |
| `attachment/{sha256}/{file.ext}` | CoT Attachments by Data Package reported SHA |
| `data/{data sync id}/{file.ext}` | CloudTAK managed Data Sync file contents |
| `import/{UUID}/{file.ext}` | User Imports |
| `profile/{email}/{file.ext}` | User Files |
```sh
npm install
```

### 3. Building Docker Images & Pushing to ECR

A script to build Docker images and publish them to your ECR is provided and can be run using:

### ETL Deployment
```
npm run build
```

from the root of the project. Ensure that you have created the necessary ECR repositories as described in the
previous step and that you have AWS credentials available in your current terminal environment, as an `aws ecr get-login-password`
call will be issued.

### Deployment

From the root directory, install the deploy dependencies

@@ -144,3 +153,23 @@ Further help about a specific command can be obtained via something like:
npx deploy info --help
```

### Optional Dependencies that can be deployed at any time

| Name | Notes |
| --------------------- | ----- |
| `coe-media-<name>`    | Task Definitions for Media Server Support - [repo](https://github.com/dfpc-coe/media-infra) |


### S3 Bucket Contents

An S3 bucket will be created as part of the CloudFormation stack that contains geospatial assets
related to user files, missions, CoTs, etc. The following table is an overview of the prefixes
in the bucket and their purpose.

| Prefix | Description |
| ------ | ----------- |
| `attachment/{sha256}/{file.ext}` | CoT Attachments by Data Package reported SHA |
| `data/{data sync id}/{file.ext}` | CloudTAK managed Data Sync file contents |
| `import/{UUID}/{file.ext}` | User Imports |
| `profile/{email}/{file.ext}` | User Files |
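The prefix scheme above can be illustrated with a small helper (a hypothetical sketch for clarity; CloudTAK does not ship this function, and the `id` segment stands in for the SHA-256, Data Sync id, import UUID, or user email depending on the prefix):

```typescript
// Hypothetical helper illustrating the bucket key layout described above.
type AssetKind = 'attachment' | 'data' | 'import' | 'profile';

function bucketKey(kind: AssetKind, id: string, filename: string): string {
    // Keys follow the `<prefix>/<id>/<file.ext>` convention from the table.
    return `${kind}/${id}/${filename}`;
}

console.log(bucketKey('profile', 'user@example.com', 'notes.geojson'));
// profile/user@example.com/notes.geojson
```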

4 changes: 2 additions & 2 deletions api/lib/aws/alarm.ts
@@ -13,7 +13,7 @@ export default class Alarm {
}

async list(): Promise<Map<number, string>> {
const cw = new CloudWatch.CloudWatchClient({ region: process.env.AWS_DEFAULT_REGION });
const cw = new CloudWatch.CloudWatchClient({ region: process.env.AWS_REGION });

try {
const map: Map<number, string> = new Map();
@@ -38,7 +38,7 @@ export default class Alarm {
}

async get(layer: number): Promise<string> {
const cw = new CloudWatch.CloudWatchClient({ region: process.env.AWS_DEFAULT_REGION });
const cw = new CloudWatch.CloudWatchClient({ region: process.env.AWS_REGION });

try {
const res = await cw.send(new CloudWatch.DescribeAlarmsCommand({
2 changes: 1 addition & 1 deletion api/lib/aws/batch-logs.ts
@@ -14,7 +14,7 @@ export type LogGroupOutput = {
export default class LogGroup {
static async list(stream: string): Promise<LogGroupOutput> {
try {
const cwl = new CloudWatchLogs.CloudWatchLogsClient({ region: process.env.AWS_DEFAULT_REGION });
const cwl = new CloudWatchLogs.CloudWatchLogsClient({ region: process.env.AWS_REGION });

const logs = await cwl.send(new CloudWatchLogs.GetLogEventsCommand({
logStreamName: stream,
8 changes: 4 additions & 4 deletions api/lib/aws/batch.ts
@@ -20,7 +20,7 @@ export interface BatchJob {
*/
export default class Batch {
static async submitImport(config: Config, email: string, id: string, asset: string, task: object = {}): Promise<AWSBatch.SubmitJobCommandOutput> {
const batch = new AWSBatch.BatchClient({ region: process.env.AWS_DEFAULT_REGION });
const batch = new AWSBatch.BatchClient({ region: process.env.AWS_REGION });

const batchres = await batch.send(new AWSBatch.SubmitJobCommand({
jobName: `import-${id}`,
@@ -40,7 +40,7 @@ export default class Batch {
}

static async submitData(config: Config, data: InferSelectModel<typeof Data>, asset: string, task: object = {}): Promise<AWSBatch.SubmitJobCommandOutput> {
const batch = new AWSBatch.BatchClient({ region: process.env.AWS_DEFAULT_REGION });
const batch = new AWSBatch.BatchClient({ region: process.env.AWS_REGION });

const jobName = `data-${data.id}-${asset.replace(/[^a-zA-Z0-9]/g, '_').slice(0, 50)}`;

@@ -64,7 +64,7 @@ }
}

static async job(config: Config, jobid: string): Promise<BatchJob> {
const batch = new AWSBatch.BatchClient({ region: process.env.AWS_DEFAULT_REGION });
const batch = new AWSBatch.BatchClient({ region: process.env.AWS_REGION });

const res = await batch.send(new AWSBatch.DescribeJobsCommand({
jobs: [jobid]
@@ -93,7 +93,7 @@ }
}

static async list(config: Config, prefix: string): Promise<BatchJob[]> {
const batch = new AWSBatch.BatchClient({ region: process.env.AWS_DEFAULT_REGION });
const batch = new AWSBatch.BatchClient({ region: process.env.AWS_REGION });

const res = await batch.send(new AWSBatch.ListJobsCommand({
jobQueue: `${config.StackName}-queue`,
16 changes: 8 additions & 8 deletions api/lib/aws/cloudformation.ts
@@ -12,7 +12,7 @@ export default class CloudFormation {
}

static async self(config: Config): Promise<AWSCloudFormation.Stack> {
const cf = new AWSCloudFormation.CloudFormationClient({ region: process.env.AWS_DEFAULT_REGION });
const cf = new AWSCloudFormation.CloudFormationClient({ region: process.env.AWS_REGION });

const res = await cf.send(new AWSCloudFormation.DescribeStacksCommand({
StackName: config.StackName
@@ -24,8 +24,8 @@ }
}

static async create(config: Config, layerid: number, stack: object): Promise<void> {
const cf = new AWSCloudFormation.CloudFormationClient({ region: process.env.AWS_DEFAULT_REGION });
const cwl = new AWSCWL.CloudWatchLogsClient({ region: process.env.AWS_DEFAULT_REGION });
const cf = new AWSCloudFormation.CloudFormationClient({ region: process.env.AWS_REGION });
const cwl = new AWSCWL.CloudWatchLogsClient({ region: process.env.AWS_REGION });

// LogGroups are managed in CloudFormation, if they are present already an error will throw
try {
@@ -47,7 +47,7 @@ }
}

static async update(config: Config, layerid: number, stack: object): Promise<void> {
const cf = new AWSCloudFormation.CloudFormationClient({ region: process.env.AWS_DEFAULT_REGION });
const cf = new AWSCloudFormation.CloudFormationClient({ region: process.env.AWS_REGION });

const arn = await config.fetchArnPrefix('sns');
await cf.send(new AWSCloudFormation.UpdateStackCommand({
@@ -61,7 +61,7 @@
static async status(config: Config, layerid: number): Promise<{
status: string;
}> {
const cf = new AWSCloudFormation.CloudFormationClient({ region: process.env.AWS_DEFAULT_REGION });
const cf = new AWSCloudFormation.CloudFormationClient({ region: process.env.AWS_REGION });

try {
const res = await cf.send(new AWSCloudFormation.DescribeStacksCommand({
@@ -83,7 +83,7 @@ }
}

static async exists(config: Config, layerid: number): Promise<boolean> {
const cf = new AWSCloudFormation.CloudFormationClient({ region: process.env.AWS_DEFAULT_REGION });
const cf = new AWSCloudFormation.CloudFormationClient({ region: process.env.AWS_REGION });

try {
await cf.send(new AWSCloudFormation.DescribeStacksCommand({
@@ -101,15 +101,15 @@ }
}

static async cancel(config: Config, layerid: number): Promise<void> {
const cf = new AWSCloudFormation.CloudFormationClient({ region: process.env.AWS_DEFAULT_REGION });
const cf = new AWSCloudFormation.CloudFormationClient({ region: process.env.AWS_REGION });

await cf.send(new AWSCloudFormation.CancelUpdateStackCommand({
StackName: this.stdname(config, layerid)
}));
}

static async delete(config: Config, layerid: number): Promise<void> {
const cf = new AWSCloudFormation.CloudFormationClient({ region: process.env.AWS_DEFAULT_REGION });
const cf = new AWSCloudFormation.CloudFormationClient({ region: process.env.AWS_REGION });

await cf.send(new AWSCloudFormation.DeleteStackCommand({
StackName: this.stdname(config, layerid)
8 changes: 4 additions & 4 deletions api/lib/aws/dynamo.ts
@@ -27,7 +27,7 @@ export default class Dynamo {

async query(layerid: number, query: DynamoQuery): Promise<DynamoItem[]> {
try {
const ddb = new DynamoDB.DynamoDBClient({region: process.env.AWS_DEFAULT_REGION });
const ddb = new DynamoDB.DynamoDBClient({region: process.env.AWS_REGION });
const ddbdoc = DynamoDBDoc.DynamoDBDocumentClient.from(ddb);

let KeyConditionExpression: string = `LayerId = :layerid`;
@@ -54,7 +54,7 @@

async row(layerid: number, id: string): Promise<DynamoItem> {
try {
const ddb = new DynamoDB.DynamoDBClient({region: process.env.AWS_DEFAULT_REGION });
const ddb = new DynamoDB.DynamoDBClient({region: process.env.AWS_REGION });
const ddbdoc = DynamoDBDoc.DynamoDBDocumentClient.from(ddb);

const row = await ddbdoc.send(new DynamoDBDoc.GetCommand({
@@ -79,7 +79,7 @@

async put(feature: any): Promise<void> {
try {
const ddb = new DynamoDB.DynamoDBClient({region: process.env.AWS_DEFAULT_REGION });
const ddb = new DynamoDB.DynamoDBClient({region: process.env.AWS_REGION });
const ddbdoc = DynamoDBDoc.DynamoDBDocumentClient.from(ddb);

await ddbdoc.send(new DynamoDBDoc.PutCommand({
@@ -99,7 +99,7 @@

async puts(features: any[]): Promise<void> {
try {
const ddb = new DynamoDB.DynamoDBClient({region: process.env.AWS_DEFAULT_REGION });
const ddb = new DynamoDB.DynamoDBClient({region: process.env.AWS_REGION });
const ddbdoc = DynamoDBDoc.DynamoDBDocumentClient.from(ddb);

const req: {
2 changes: 1 addition & 1 deletion api/lib/aws/ec2.ts
@@ -8,7 +8,7 @@ import process from 'node:process';
export default class EC2 {
static async eni(eni: string): Promise<string | null> {
try {
const ec2 = new AWSEC2.EC2Client({ region: process.env.AWS_DEFAULT_REGION });
const ec2 = new AWSEC2.EC2Client({ region: process.env.AWS_REGION });

const res = await ec2.send(new AWSEC2.DescribeNetworkInterfacesCommand({
NetworkInterfaceIds: [eni]
4 changes: 2 additions & 2 deletions api/lib/aws/ecr.ts
@@ -7,7 +7,7 @@ import process from 'node:process';
*/
export default class ECR {
static async list(): Promise<Array<ImageIdentifier>> {
const ecr = new AWSECR.ECRClient({ region: process.env.AWS_DEFAULT_REGION });
const ecr = new AWSECR.ECRClient({ region: process.env.AWS_REGION });

try {
const imageIds: ImageIdentifier[] = [];
@@ -27,7 +27,7 @@ export default class ECR {
}

static async delete(task: string, version: string): Promise<void> {
const ecr = new AWSECR.ECRClient({ region: process.env.AWS_DEFAULT_REGION });
const ecr = new AWSECR.ECRClient({ region: process.env.AWS_REGION });

try {
await ecr.send(new AWSECR.BatchDeleteImageCommand({
10 changes: 5 additions & 5 deletions api/lib/aws/ecs-video.ts
@@ -22,7 +22,7 @@ export default class ECSVideo {
*/
async definitions(): Promise<Array<number>> {
try {
const ecs = new AWSECS.ECSClient({ region: process.env.AWS_DEFAULT_REGION });
const ecs = new AWSECS.ECSClient({ region: process.env.AWS_REGION });
const taskDefinitionArns: string[] = [];

let res;
@@ -51,7 +51,7 @@ export default class ECSVideo {
*/
async task(task: string): Promise<Task> {
try {
const ecs = new AWSECS.ECSClient({ region: process.env.AWS_DEFAULT_REGION });
const ecs = new AWSECS.ECSClient({ region: process.env.AWS_REGION });

const descs = await ecs.send(new AWSECS.DescribeTasksCommand({
cluster: `coe-ecs-${this.config.StackName.replace(/^coe-etl-/, '')}`,
@@ -78,7 +78,7 @@ }
}

try {
const ecs = new AWSECS.ECSClient({ region: process.env.AWS_DEFAULT_REGION });
const ecs = new AWSECS.ECSClient({ region: process.env.AWS_REGION });

await ecs.send(new AWSECS.StopTaskCommand({
cluster: `coe-ecs-${this.config.StackName.replace(/^coe-etl-/, '')}`,
@@ -95,7 +95,7 @@
*/
async tasks(): Promise<Array<Task>> {
try {
const ecs = new AWSECS.ECSClient({ region: process.env.AWS_DEFAULT_REGION });
const ecs = new AWSECS.ECSClient({ region: process.env.AWS_REGION });
const taskArns: string[] = [];

let res;
@@ -128,7 +128,7 @@
*/
async run(): Promise<Task> {
try {
const ecs = new AWSECS.ECSClient({ region: process.env.AWS_DEFAULT_REGION });
const ecs = new AWSECS.ECSClient({ region: process.env.AWS_REGION });

const defs = await this.definitions();

